Strategy practices of outstanding national wineries in the Brazilian market

This work aimed to identify the strategy practices of outstanding national wineries in the Brazilian wine market from the perspective of Strategy as Practice. To conduct the survey, a multi-case study was developed in three Brazilian wineries, where the managers involved in strategic activities were interviewed and observed. Faithful to the recommendations of Strategy as Practice authors, this work sought to understand how the forces of context, past experiences, and firm resources shape and guide the disposition of actors towards particular strategic choices that became ordinary practices.

Introduction

In recent decades, various theories and models have been created to study phenomena related to the field of business strategy [1]. In this light, organizations are seen as objective entities with a clear target that must adapt to the economic environment, deploy resources and gain competitive advantage over their competitors in order to survive [1]. Examples of such widespread analysis tools are SWOT, BCG and the Value Chain [1]. Despite the great appeal of these approaches, the structure of the discipline has been developed through concepts and analysis tools that neglect the practice of the people involved in strategy [2]. The main criticism raised against these models is that, because a macro view of organizational strategy predominates, this approach reduces the importance of the micro processes involved in the development of strategies, marginalizing the tools, activities and practices that professionals use on a daily basis [2]. The Strategy as Practice (SAP) approach arose in order to understand what people do in relation to strategy and how that influences the organization and its context [2]. In this sense, strategy is something that people do, not something that the organization has [3]. While previous approaches tend to take for granted the behavior of those involved in strategy, SAP seeks to understand how people interact and use tools in the formation of strategies in practice [2]. In this context, this study aimed to identify the strategy practices of outstanding companies in the Brazilian market from the perspective of Strategy as Practice theory, with the intention of discussing the role of firm resources, past experience, and the influence of the social context in generating a unique process of strategy practices. The empirical analysis was a multi-case study of three wineries located in the Serra Gaúcha region, in the state of Rio Grande do Sul, focused on determining which practices generated competitive advantages in the Brazilian wine market. To achieve this purpose, four specific objectives were structured: to analyze the context of the companies and its possible impacts on strategic practices; to classify the strategic activities of the companies surveyed into practice dimensions and analyze how they impact differentiation; to describe the analytical and technological tools used by the companies for the construction of strategies and their impact on practices; and to identify distinctive and common practices among the surveyed companies.

Practice: A socially constructed guide for action

The study of strategists has often focused on activities and management roles, disregarding the institutional forces of context.
Thus, authors such as Whittington [3] argue for the need for research into the institutions of strategy, more specifically into the relationship between the conduct of social actors at the micro level and institutions at the macro level. From these concepts, strategy practices are understood as socially constructed. That means the specific activities of actors cannot be separated from society, since the rules and features it provides are essential for action. Society is therefore itself the producer of action. Social forces shape and guide the willingness of actors towards a particular strategic choice. Thus, the individual is predisposed to behave in a particular way and to react to strategic circumstances in a way that is congruent with their own sense of identity and education [4].

The internalization of practice

Social theorists have been concerned with how practices are internalized through social interaction. In this sense, learning plays a decisive role. According to Gherardi [5], learning is always related to a practice developed by a group working out an identity based on participation. From this perspective, practice is always the product of specific historical conditions resulting from previous practices that become our present practice. The contribution of this approach is the view that practice is an activity system in which knowing is not separate from doing, considering learning a social event and not just a cognitive activity [6]. Much of group action and interaction is not based on shared agreements, but is constituted around a set of tacit assumptions that are not fully explained or completely explainable, since they were absorbed tacitly. Thus, social action is closely linked to a moral condition that the actors recognize as correct, legitimate and appropriate to a specific context. Human beings act in relation to certain facts based on the meaning that these facts have for the group to which they belong [6]. Gherardi [5] defines a practice as a socially accepted and stable way of ordering heterogeneous elements into a coherent whole through time. The author explains that activities acquire meaning depending on the context and the moment in which the action takes place, and on society recognizing this action as legitimate. Thus, practices are best understood when they are situated, in other words, when there is a context and a group of people who build practices collectively, binding and giving identity to that group. Bispo [6] interprets that, from this perspective, reality does not exist a priori, but is what people live and know. A practice is understandable not only to the agent who performs it, but also to potential observers within the same culture [7]. In this sense, Bispo [6] explains that a practice is something that gives identity to a group that is organized around it. Learning occurs through interactions between social actors and human and non-human elements, and it is the result of the tacit and aesthetic dimensions of these interactions.

Elements of practice

The most important authors of SAP recognize three essential elements for the existence of practice: praxis, practices and practitioners. According to Reckwitz [7], praxis is an emphatic term to describe any human action. Practices, on the other hand, refer to shared routines of behavior, including traditions, norms and procedures for thinking, acting and using things, this last in the broadest sense [3].
Finally, practitioners are the actors: those individuals who interrelate with practices and praxis. The best way to understand a practice is to observe the reciprocal relationships between the three key elements. A practice perspective on strategy should incorporate consideration of how strategy practitioners (managers, consultants, others) draw on more or less institutionalized strategic practices (routines, procedures, techniques and types of discourse at organizational and extra-organizational levels) in idiosyncratic ways in their strategy praxis (specific activities such as meetings, conversations, talk, interactions) to generate what is then conceived as strategy, constituting in the process both themselves as strategy practitioners and, potentially, their own activities as the seeds for new strategy practices [8].

An integrative model of strategy as practice

Economists have recently built up a stream of research identifying best practices in management and determining the effect of their implementation. This work, relying primarily on survey methods and experimental manipulation of practice implementation, has highlighted the performance impacts of practice differences. In surveys across vastly different contexts of firms, industries, and countries, these scholars have shown that substantial gains in outcomes, such as profitability and sales growth, are correlated with best practice adoption. However, Jarzabkowski et al. [9] argue that partial models that focus only on "best" practices in isolation are liable to misattribute performance effects. The aim of their work is to build an integrative model in which the complex links between practices, the ways in which they are engaged, who engages them, and their potential outcomes can be fully recognized [9]. It is important to note that the practice elements detailed in the previous section, which the strategy-as-practice field calls praxis, practitioners, and practices, were replaced in the authors' model by "what," "who," and "how." The model shows that without recognizing the status and backgrounds of those who transfer and apply practices, there is a risk of confusing effects arising from practices with effects arising from the legitimacy or skills of the practitioners involved. Practices are developed, transferred, and enacted by practitioners, for instance senior and middle managers or strategy consultants. Practices and practitioners are entangled. Hence, the effect of practitioners may be an important omitted variable in the evaluation of the impact of practices. Therefore, when attributing outcomes, it is necessary to consider how practice effects are intertwined with practitioner effects [9]. Second, the model emphasizes the importance of how these practices are actually enacted in the field. The authors argue that without close attention to the situated enactment of practices, observers are liable to overvalue formal practices while undervaluing practice adaptations in context. Practices do not occur automatically and unproblematically. Rather, they are enacted in context, often in ways that vary considerably from their espoused pattern. Such variations are not necessarily failures of practice, but rather necessary adaptations or improvisations in changing circumstances. Sometimes, such adaptations of practices are also strategic, such as when firms deliberately decouple what they claim to do from what they actually do.
These deviations can be enormously generative, enabling, for example, changes in firm strategy. In other words, there is often a gap between apparent practice and what happens on the ground, with improvisations and workarounds important for achieving desired outcomes [9]. As Figure 1 indicates, practices are strongly shaped by the practitioners who develop and advocate them. The specific characteristics of different types of practitioners, in terms of cognitive traits, roles, and organizational positions, will have strong implications for practice use in different firms. Accordingly, inferences about the relationships between practices and performance are insecure if we do not account for the practitioners involved and the varying effects they can have. The same practice may have different performance outcomes when introduced by a prestigious consulting firm, by a powerful CEO or by a middle manager. Similarly, strategy practitioners may be more or less successful in their use of particular strategizing practices, according to their social skills and the contexts in which they operate [9]. The model also suggests certain feedback effects of the use of practices on practitioners in terms of their identities, skills, and career prospects. Practices are rarely blueprints that can simply be plugged into a context in unproblematic ways, as their use will be shaped by practical adaptations associated with specific contexts or practitioners' needs. Moreover, close observation of practice adaptation may reveal sources of potential practice innovation. If we move beyond views of practices as largely transferred intact between contexts and actors, to understanding how those practices are enacted locally in practice, often in ways that make them barely recognizable to their originators, we may develop theories about the critical role of practice adaptation, or even practice transformation, in generating performance outcomes. Practice theory generally analyzes practices as bundles rather than singly. The model shows that the effects of a practice will vary according to the presence or absence of other practices [9]. The model would also inform a wider approach to practice outcomes that allows researchers to consider not just firm performance but also impacts on the practices themselves. So to speak, the enactment of practices feeds back on those practices [9]. A more integrated practice perspective holds that it is important to examine practices in context, attending to who engages them and how they work. Such a practice perspective also emphasizes that strategic outcomes depend on the interaction of the what, who, and how of practices. This led the authors to propose the integrated model of strategy practice (Fig. 1). In practice, the elements are highly entangled with each other: for example, practitioners are inseparably carriers of practices, while practices have only a virtual existence outside of praxis. However, this model shows that the relationship between practices and economic performance cannot be understood without taking into account not only "what" practices exist but also "who" implements them and "how." It is by integrating the what, who, and how of practices that we can trace the links between firm practices and heterogeneous firm performance [9].

Methodology

In this research, the guiding question was: what are the strategy practices of outstanding national wineries in the Brazilian wine market?
Seeking to answer this question, a field survey was conducted using a qualitative and exploratory design that employed non-participant observation, in-depth interviews with a semi-structured script and documentary research as data collection strategies. Given the unique characteristics presented by the context and the aim of the work, the most suitable method for a deep understanding of this phenomenon is the multi-case study. This method is suitable because "[...] it can be useful in the discovery of factors that are common to all cases in the selected group; factors that are not common to all, but only in some subgroups; and factors that are unique to a particular case" [10]. In general, the multi-case study allows a wider range of results, exceeding the limits obtained when the analysis is restricted to a single company.

Research design

The companies surveyed were selected by convenience sampling and specificity. In this research, this technique was the most appropriate because it allowed the selection of sample components according to the characteristics necessary to obtain typical cases from the population [11]. For the sample definition, some specific criteria were adopted. The first selection criterion was that each company surveyed should have a performance superior to the sector average. It was also necessary to filter the sample to exclude companies that could receive financial support from a larger economic group, discarding any possibility of superior performance resulting from this advantage. In recent years, owning a winery has become an object of desire for many entrepreneurs who make money in other sectors and fulfill the dream of developing their own wines, redirecting capital from profitable companies to sustain economically unviable wineries. Those kinds of companies were excluded from the sample. Finally, companies should be a) from Rio Grande do Sul; b) small or medium-sized; c) administered by their owners. The first point is justified because the gaúcho sector represents almost 90% of national wineries; thus, by analyzing the market of Rio Grande do Sul it is possible to have a clear idea of the Brazilian wine industry as a whole. The second and third points are related to the concentration of practices in few practitioners. This concentration helps to identify a greater number of strategizing practices in one or two strategists.

Data collection procedure

A more integrated perspective of practice argues that it is important to examine practices in context [9]. Scholars must be aware of how the forces of context shape and guide the willingness of actors towards a particular strategic choice [4]. Following these recommendations, it was considered essential to conduct a thorough analysis of the sector to understand the social, cultural and economic context in which the strategy practices of the companies take place. To understand the context at the macro level of the companies surveyed, data on the wine industry were sought in books, magazines, newspapers, periodicals, dissertations, theses, conferences and institutions linked to the wine sector. To understand the context at the micro level of the companies and to answer the main question of this study, the primary research data were collected through non-participant observation and interviews with a semi-structured script.

Interviews with a semi-structured script

The interview has always been considered an appropriate way to lead people to say what they think, to describe what they have lived through, what they saw, or what they witnessed [12].
Guiding the respondent along certain tracks does not imply that the conversation is predictable [13]. Therefore, the in-depth interview is not bound to pre-established technical rules, but presents itself as a flexible method of data collection that can be adjusted while the interview is taking place to suit the needs of the problem investigated. In order to disclose the companies' history and their strategy practices, in-depth interviews were conducted with the companies' owners in two rounds. In addition, two employees or partners involved in the strategy practice dimensions were interviewed. The purpose of these interviews was to provide a counterpoint to the information received from the owners. As a complement to the secondary data about the context, the CEO of the Brazilian Institute of Wine (IBRAVIN) and the CEO of the Association of Wine Producers of the Wine Valley (APROVALE) were also interviewed. To complement the information researched and to add other points of view about the micro and macro contexts, two experts on the topics covered in this study were interviewed in person. A semi-structured script was used in all the interviews to keep the answers within the desired information needs. Table 1 shows the list of interviewees with their positions and functions in each company. The interviews were conducted during the months of August, September and October 2015.

Non-participant observation

How practices are performed determines the results. Detailed observation elucidates what managers actually do in the field, and this observation is an essential complement to practice research [9]. Observation allows the researcher to find out how something works or actually occurs. Practices, then, must be observed, since interviews and narratives are just reports of practices, not practices themselves [14]. In this study, data collection involved two kinds of non-participant observation in each company. The first was in the role of mystery shopper at the wineries' retail stores; this observation lasted about two hours in each winery. The second took place as non-participant observation of the owners at their workplace. The purpose of these observations was to follow managers in their natural habitat, that is, making decisions in practice and interacting with customers, suppliers and staff.

Data analysis

There are few pre-established formulas for analyzing information from a multi-case study, leaving researchers to depend on their own style and rigor. The analysis was developed by comparing the statements of the surveyed public, grouping the content obtained in interviews and document analysis, and thus identifying the relevant factors of each case [15]. Following the recommendations of Strategy as Practice authors about the need to understand how the forces of context shape and guide the willingness of actors towards a particular strategic choice, this work began with a thorough analysis of the macro-level context of the firms. The evolution, consumption, and current situation of the Brazilian market were widely discussed. Next, the micro-level context of the firms was addressed in depth. Here, a detailed description of the wineries and the practitioners of the strategy was provided: history, origin, culture, location, products, generic strategies and financial results were introduced to situate the context in which the practices occur.
Subsequently, the practices identified were classified and grouped into seven large dimensions, considered strategic for the results of companies in the wine market: a) human resources; b) sales and distribution; c) marketing; d) wine tourism; e) product and winemaking process; f) price; g) planning. The results of the analysis of the seven strategic dimensions of practice are presented in the next section of this paper, after a brief summary of the context analysis.

Context analysis

The analysis of the firms' context showed that the Brazilian wine industry faces an unfavorable reality. The 2008 global economic crisis, associated with the entry of other producer countries into the market, resulted in an excess supply of quality wine in the world. Over the last decade, the appreciation of the Real against the US Dollar and the increased purchasing power of Brazilians facilitated the entry of imported wines into the Brazilian market. Brazilian consumers became interested in higher quality wines, but national wineries could not follow the expansion of this new market, which was taken mostly by Chilean and Argentine wines [16]. In the last decade, imports of this type of wine more than doubled, rising from 35 million liters to almost 80 million liters between 2004 and 2014. In the same period, the growth rate of domestic wines was equal to zero. There is a consumer prejudice against Brazilian wine, especially in comparison with imported wine, which carries a better quality image, brand strength and, especially, a better cost-benefit ratio [17]. Despite the adverse scenario and the stigma attached to national wine, the sample wineries have positioned their wines at a level close to the imported ones, and still grow and profit in this market. Table 2 summarizes the principal characteristics of the three firms of the sample described in the micro-level context analysis of the wineries. The following section identifies the strategy practices of the wineries surveyed, aiming to understand, through the analysis dimensions, which practices have led these firms to an outstanding position in the market.

Dimension 1: Human resources practices

The three firms in the sample are managed by families in which one of the members is always the winemaker. By tradition, viticulture is usually delegated to the father. The other members naturally assume the remaining responsibilities towards a common goal. Through deep analysis of the practices, it can be concluded that those who make the strategy determine what will be done and how it will be implemented; that is, the skills and personal characteristics of the owners guide the strategic choices of the companies along different paths. It is clear that the chosen strategy and practices depend on the specific characteristics of different types of practitioners. The ALPHA company, in which the partners have excellent communication skills, focused its strategy on exploiting these resources through lectures, meetings with institutions, events, contests and wine judgings. The BETA company, in which the owners maintain efficient cost control procedures, focused its strategy on production scale and cost-benefit products. Finally, the GAMMA winery, in which the owner has an excellent network in the vineyards region, bets on tourism and regional sales. As shown in the model (Fig. 1), the context is the starting point and practitioners influence the choice of best practices, according to the situation and the resources available.
Dimension 2: Distribution and sales practices

The ALPHA winery, the one with the highest price level, needs a direct sales force to compete with imported wines. Higher prices result in better margins and value for the distribution channel. However, wineries that choose this strategy must raise the quality and marketing of their products. Thus, better marketing practices and additional investments in the production process are required. The strategic choice of a practice therefore determines the adoption of other practices so that the first one achieves the desired results. The wineries struggle to enter restaurants' wine lists. At restaurants, consumers are willing to taste new wines, so this is a great opportunity for the wineries to win new customers for the brand. Driven by hefty margins, wine importers have legitimized a range of practices in the restaurant market, including offering gifts and prizes to the best-selling waiters. None of the surveyed companies adopted this practice, appealing exclusively to a friendly relationship with the restaurant staff and to the idea of a superior national wine. In this case, the waiver of one practice determined the choice of another, supposedly more effective or advantageous to the winery, at the expense of the one legitimated by the market. In general, those who do not adapt to the practices required by the market can hardly maintain or enter it. But for different reasons and contexts, the practice based on a friendly relationship and a superior national wine works for the ALPHA and GAMMA wineries. For ALPHA it works because wine critics and the specialized media tend to commend its wines, influencing the restaurants. For GAMMA, on the other hand, the strategy works because the winery concentrates all its forces on regional restaurants, that is, establishments with a natural demand for Brazilian wines and with which the owner has a close relationship. Again, it is possible to recognize practices that look similar at first glance but, when analyzed in detail, reveal that context, other business practices, and practitioners' skills affect the implementation process of those practices, making them very difficult to compare or replicate. Another distinctive practice identified in the sample firms is a real interest in preserving lasting business relationships, mainly based on trust, with the distribution channel. All the sample companies seek to respect trade agreements and avoid conflicts of interest with the channel. This practice may seem obvious, but it habitually demands giving up short-term profits, an effort that most of the industry's wineries do not seem willing to make.

Dimension 3: Marketing practices

The use of prizes and endorsers is a powerful, legitimated practice in the wine market. ALPHA and BETA seek to win prizes in contests to improve the perceived value of their wines. However, although BETA wins major awards, the winery does not gain a proportional amount of spontaneous media coverage, nor does it reach as close a relationship with the wine entities as ALPHA does. This is attributed primarily to ALPHA's pioneering role (the company was highlighted by the specialized press at a time when few national wines were touted as plausibly great wines) and to the better use of the ALPHA owners' relationship skills with the wine institutions. These factors have been influencing the judgment of critics and specialists, who almost automatically attribute a higher quality level to these wines.
The different results obtained by ALPHA and BETA in the implementation of the same marketing practice show that the elements of the integrative model (cf. Fig. 1) are rarely equal between different firms. Specific contexts or practitioners' skills make the practices barely recognizable when compared. That is, the same practice may have different performance outcomes when introduced by two different companies. On the other hand, GAMMA focused its marketing on its excellent relationship with the restaurants and hotels in the nearby tourist area. GAMMA's marketing practices are mainly based on wine tourism. Certainly the company's location is a competitive advantage, but it is the owner's relationship skills with the local establishments and his engagement in the tourist development of the region that make GAMMA a recognized and recommended winery in the Wine Valley. GAMMA's positioning does not begin in the consumer's mind, but in the minds of waiters and hotel staff; that is, those who ultimately recommend to tourists which wineries to visit or which wine they should try. Regarding common practices, none of the three companies in the sample invests in direct media, builds a communication strategy on social networks, or develops its brand, packaging and labels with professionals, giving way to handmade marketing practices.

Dimension 4: Wine tourism practices

Among the practices identified in the survey, no other dimension provides a better example of the impact of context on the implementation process than the wine tourism dimension. It was observed that the three companies of the sample adopt an identical praxis: the wineries receive tourists and offer tasting experiences to create a bond between the wine brand and consumers. But in this dimension the weight of location (context) is so significant that practice implementation and practitioners' skills have a diminished impact on the outcome. This conclusion emerges from the comparison between the practices of GAMMA and BETA. At BETA, tourists are always attended by the owners and the facilities and landscapes are better; however, the winery's distance from the tourist region affects the practice's performance when compared to GAMMA. At GAMMA, the implementation of the praxis is not perfect: because of its exceptional location, the winery receives a high flow of tourists, so it needs to charge for tastings and offers a less personalized service than BETA. Thus, in spite of receiving more tourists, GAMMA is not as good as BETA at implementing the practice. However, the weight of location results in a relatively superior performance of the practice at GAMMA compared to BETA. The example above shows that, to achieve superior results, context, praxis, practice and practitioners must be in line with the strategy, or at least not clash to the point of harming those elements in which the company has an advantage. This is the case of GAMMA which, without shining, does not compromise its location advantage, implementing a reasonable service for tourists. On the other hand, a company with a favorable context that does not reach an acceptable level of practice implementation, or that places unprepared practitioners to implement the practice, may see its early advantage squandered, as happens at ALPHA. Although ALPHA is much better located than BETA, the former has a relatively inferior performance outcome in the tourism practice.
At this point, it is possible to observe the impact of a new practice on the context or on the firm's other practices (cf. Fig. 1). ALPHA failed in the implementation of the tourism practice (Practice 1) mainly because of a lack of space, organization and cleanliness. The process takes place as follows: ALPHA decided to produce sparkling wines to serve a market demand (Practice 2). The adoption of this practice impacts the context because it reduces space and clutters the winery cellar, giving a bad impression to visitors. Thus, the wine tourism practice (Practice 1) was affected by the production of sparkling wine (Practice 2). Through this comparative analysis of the wine tourism practices, it is possible to reach two important conclusions: first, the value of a resource does not depend on its existence, but on its use; second, good results in one practice may adversely affect other practices. Companies must make choices, and these choices not only affect the results of the practice in question, but also impact the environment, the practitioners and other organizational practices.

Dimension 5: Product and winemaking practices

For the owners, their wineries and their wines mean much more than a way to make money. Making higher quality wines is not only a deliberate practice aimed at success, but also a practice that confirms the identity of everyone involved in the winemaking process. Not making the best wine, or making a wine below the expected standards, is understood not only as a risk to the brand image, but also as a betrayal of the practitioner's identity. In this sense, the analysis revealed a deviation between the social practices of local producers and those of the sample. According to the wine experts interviewed, in general, local producers' practices prioritize short-term profits over quality consistency through time. These social practices of national wineries have been disappointing customers and damaging the image of Brazilian wines. The sample companies, on the other hand, share a real concern for high-standard quality performance. But this shared value does not imply identical winemaking processes. Far from it: the resources and contexts of the sample firms are as heterogeneous as the strategies chosen to achieve a competitive advantage with them. The BETA winery, the one that owns the largest vineyards, with great capacity and a strong orientation to the product, specializes in industrial-scale equipment, facility and operation controls, resulting in a cost leadership strategy. ALPHA, a pioneer in quality enology in Brazil with excellent relationship skills in the wine world, launches attractive wines for the higher segments of the market that get the endorsement of specialized critics, creating differentiation value relative to similar competitors. Finally, GAMMA, the one that owns one of the most privileged locations in the Vineyards Valley and has great influence over local restaurants and hotels, makes wines exclusively for tourists; that is, a focus strategy. In summary, each company chooses a winemaking process and a wine style that respects its identity and values its resources, aiming to achieve through them a competitive advantage in a specific segment.

Dimension 6: Price practices

The analysis shows that the sample companies do not follow a formal pricing model, but set their prices through the experience gained over years of serving a specific market.
In this sense, the combination of the chosen target market, the winemaking process, and competitors' prices determines the prices of the wines. In fact, the target market is the starting point that determines not only the pricing practices, but also the costs and practices required to provide value and effective positioning for those customers. The companies of the sample choose their target markets according to their advantages and abilities, or the lack of them. ALPHA, the one that positioned its products in the top price segments, needs to invest in a specialized sales force to persuade sommeliers, waiters and wine critics of its wines' qualities. This activity demands trained employees, which increases labor costs and, consequently, the products' prices. On the other hand, BETA selected a lower segment than ALPHA that requires no further skills in the sales channel. However, this target market demands producing the same quality as many of the competitors, but pricing for less. Finally, GAMMA, the one that knows the prices the average tourist is willing to pay for a regional bottle, prefers not to confuse visitors and fixes its portfolio at a single price level, facilitating positioning and avoiding the mental exhaustion of those who just want to buy a nice regional wine.

Dimension 7: Planning practices

Planning is the dimension of analysis in which the fewest practices were identified. It was noted that the owners do not believe that an external consultant or analytical models could improve the performance of the companies' strategies in any way, so they do not use them. The wineries do not develop any activity in the planning area, at least not explicitly. The decision-making process in the wineries is built through daily contact between family members and close employees. Hence, it is very difficult to recognize or identify strategy practices in this dimension. They certainly exist, but following the strategizing dynamics of these firms demands being part of the group. Strategic issues are discussed among practitioners in the winery office, at a family lunch, or on a vacation trip. As family firms, values and goals appear tacitly embedded in the practices. At this point, we can return once again to the concepts of Bispo [6], who states that much of group action and interaction is constituted around a set of tacit assumptions that are not fully explained or completely explicable, since they are absorbed tacitly. The absorption of tacit knowledge and the historical and cultural values shared among family members make formal planning dispensable. The companies have resources and skills that guided them naturally towards a strategic choice. ALPHA follows a differentiation strategy; BETA a cost leadership strategy; and GAMMA a focus strategy. But those choices did not happen smoothly. Rather, it was a trial and error process in which the strategies that survived were those that best exploited the rare resources and achieved, in the end, a competitive advantage in the market.

Final considerations

Although there are similarities between some practices, in general the aggregate analysis determined that, when studied in detail, the elements of the practices (context, practitioners' skills or the way they are implemented) differ, making them very difficult to compare or replicate. Therefore, the outstanding position of these wineries must be attributed to a combination of practices in the specific context of each company.
The results showed that the impact of the practitioners is an important variable in the implementation of practices. Because the sample companies are small businesses, the weight of the owners' particular skills influences the direction and the way in which strategy takes place, giving the practices a personality that makes them virtually inimitable. The outstanding position is not related to a particular practice, a specific implementation, or a singular context, but to a very precise alignment between the few available resources and a set of strategic practices chosen to exploit them. Performance is the result of a complex combination of practices and elements that impact each other. This relationship between practices is so tangled that it would be a huge mistake to attribute success to an isolated practice. Each firm surveyed has chosen a winemaking process, a style of product, a distribution channel, a market, and a way to promote its distinctive features, aiming to reach through them a competitive advantage. But this advantage is not easily conquered. In these companies, strategy practices take place more as a strategic reaction to past experiences than as deliberate strategic planning. By analyzing the practices, it was concluded that a company with a favorable context, but that does not reach an acceptable level of practice implementation, or that puts unprepared practitioners in the implementation process, may see its initial advantage squandered. Thus, the value of a distinctive feature depends not only on its existence but on its use. The study shows that, to achieve superior performance, the elements (context, praxis, practice and practitioners) must be in line with the strategy, or at least not clash to the point of harming the key element in which the company has a competitive advantage. It is important to highlight the presence of certain shared values and criteria within the strategy practices of the three companies in the sample. First, there is a real concern with strengthening long-term trade relations with the distribution channels. Second, and not least, there is a real concern with the quality of the products offered to the market. The latter may seem a minimum requirement for business success; however, a clear dichotomy emerged from the analysis between the practices of the owners interviewed and those of local producers. A high-quality winemaking process requires not only choosing the best grapes and aging the wine in oak barrels, but mainly giving up revenue, volume, margin, and short-term profits when the harvests are not good, in order to maintain a constant quality standard. These sacrifices are instinctive, natural practices when identity and strategy are inextricably linked, as in the three companies in the sample. The fact that the owners are directly involved in the winemaking process reaffirms the relationship between the owners' identities and their wines. Not making the best wine possible, or making a wine below the standards expected for the brand, would not only risk burning the image of the product in the market, but also hurt a value strongly ingrained in the practitioner's identity.

Managerial implications

From this work arise some important managerial implications that can be followed by firms and entrepreneurs that intend to improve their performance in the Brazilian wine market. First, companies must make a real commitment to quality and to the market if they want to be part of the select group of successful wineries.
The sample companies strive obsessively for a superior wine. As discussed in the final considerations, producing quality wines under Brazilian climatic conditions requires a lot of perseverance and sacrifice. Second, companies must abandon the idea of imitating or following practices for which they do not have the resources or skills required for proper implementation. It is natural that producers feel tempted by external practices that work when implemented by other companies, or that succeeded in other countries. However, as widely discussed, when adapted to a new context or implemented by other practitioners, practices can hardly survive the process intact, undergoing different transformations and outcomes. Finally, the wineries must identify their distinctive resources and exploit them, aiming to achieve a competitive advantage. In this sense, the results of this work are encouraging. The survey revealed that, even following very different generic competitive strategies, the three companies in the sample reached a prominent position in the wine market. That is, there are different ways to achieve the same goal. Brazilians will legitimize national wine when the peaks and troughs in quality that frustrate consumers are eliminated. This is what has happened in the national sparkling wine market, where favorable weather conditions and cost barriers to entry converged on a final product that rarely disappoints. Still, it will not be easy for national wine to shed its stigma, since weather conditions are less favorable and the consumer cannot separate the wheat from the chaff. An organized and united wine industry that promotes greater adherence by producers committed to quality and to preserving the image of domestic wine among consumers could be a starting point.

Limitations of the study and avenues for future research

Despite the implications presented, the study has some limitations. Strategic themes are strongly linked to competitive advantages and to the decision-making process within companies. This fact imposed limitations on the depth of detail that the owners were willing to reveal during the interviews. The information could be enriched by the views of internal employees, video recordings and other techniques used by strategy as practice researchers that were impossible to implement in this work. The seven dimensions used for the analysis of the results did not follow an academically proven model; the dimensions emerged over the course of the research from the need to organize and classify the data collected during the survey. At the end of this research, it is recognized that there are plenty of avenues for future research in strategy as practice studies. For future research, a survey comparing larger wineries is suggested. This perspective could reduce the strong influence of entrepreneurs on the strategy practices observed in this work. The fact that companies choose the markets and strategies in which they have some skill or advantage to be exploited suggests a very close relationship with the concepts of the Resource Based View (RBV). A survey that explores, from the RBV perspective, the impact of practice implementation or practitioners' characteristics as the main source of inimitable resources could be valuable for strategy theory.
Family Practices and Temporality at Breakfast: Hot Spots, Convenience and Care

Drawing on 34 semi-structured interviews, this study investigates the temporality of family practices taking place in the hot spot. It does so by looking at how breakfast is inserted in the economy of family time in Italy. Our data show that breakfast, contrary to other meals, allows the adoption of more individualised and asynchronous practices, hinged on the consumption of convenience products. These time-saving strategies are normalised as part of doing family. Although the existing literature suggests that convenience and care are in opposition, and consumers of convenience products can experience anxiety and a lack of personal integrity, such experiences were not a dominant feature of our participants' accounts. These findings suggest that the dichotomies of hot/cold spots and care/convenience are not always experienced in opposition when embedded within family practices. Hence, this study furthers understandings of family meals, temporality and the distinction between hot and cold spots.

Introduction

This article investigates how family life is practised and accounted for during time-pressed meals, by focusing on 34 interviews with Italian participants on their experience of breakfast. Following Morgan's seminal works on family practices (1996, 2011: 6), we investigate 'doing family' during a mundane activity which 'appears to be trivial or meaningless' but which can provide a valuable understanding of the sense of the everyday, the 'doing' and the effort that individuals invest in re-producing and maintaining family life. As well as looking at what is done during family practices, it is also useful to see how such practices are accounted for, that is, how they are explained by participants. Breakfast is one of the family practices that 'seems unremarkable, hardly worth talking about' (Morgan, 2011: 6), as shown by the very limited studies dedicated to this neglected meal. Most of the literature on family meals refers to dinners and to lunch, the meals mostly consumed together (Brannen et al., 2013; Milani and Pegoraro, 2006; Yates and Warde, 2017). This silence is important in itself, revealing the way in which much existing research tends to implicitly perpetuate the idea of lunch and dinner as the quintessential family meals, overlooking other eating occasions. Responding to this scant attention, this study explores the link between eating and standards of care at breakfast. Thus, it contributes to an understanding of less studied family practices as well as providing insight into the relationships between meals, temporality and moral accounts. In looking at such relationships we take inspiration from Warde's (2016) understanding of mealtimes, where he argues that through their necessity and frequency, meals offer a useful window through which we can gain a deeper understanding of family life. As he says: 'Meals have considerable analytic potential because they pull together social aspects of household organisation, temporal rhythms, practical priorities, social (and actor) networks, social convention and rituals' (2016: 20). In understanding the analytical potential of breakfast for investigating family life, we look at how people organise their morning routines around ideals, temporal rhythms and priorities.
The Italian context is particularly relevant since breakfast is a relatively recent meal, heavily shaped by marketplace representations of convenience bakery products which were positioned around the exclusionary ideal of the middle-class and patriarchal 'cereal packet' family (Arvidsson, 2003; Maestri and D'Angelo, 1995). Drawing on interviews on breakfast with participants from various family arrangements, this study engages with the literature on family meals and temporality. Among the works on temporality, the notion of hot spots (Southerton, 2003) was particularly relevant for informing our analysis of domestic breakfast. According to Southerton (2003: 19), who coined the term, hot spots are predictable moments during the day 'characterised by a compression of tasks into specified time frames so that "time" was "saved" for more "meaningful" social activities'. These are alternated with 'cold spots' which may also be called 'quality time', 'potter time', 'chill time' and 'bonding time', and are usually 'devoted to interaction with significant others' (Southerton, 2003: 19). Daily experience of time is thus characterised by a sequence of hot and cold spots. In his theorisation of hot and cold spots, Southerton (2003: 21-22) points out that:

Hot and cold spots are metaphors for the tensions between care and convenience, or concerns about maintaining social standards and personal integrity. [. . .] Hot spots not only refer to a density of practices allocated in time frames that intensify senses of haste; in addition, and because hot spots often involve the use of convenience devices and services, they also magnify anxieties that a lack of time leads to a compromise of normative social standards expressive of care.

The notion of the hot spot (Southerton, 2003) is useful for understanding the temporality of family practices since it refers to moments of the day in which the goal of completing tasks in a limited and fixed time frame, such as having breakfast in a rush, often causes feelings of time shortage. Thus, harriedness is generated from the need to designate time frames in which to schedule activities, in order to free up cold spots for quality time and care (Southerton, 2003). According to Southerton, during hot spots normalised standards of care are compromised and harriedness is supplemented by anxiety. To support scheduling, convenience devices and services are used, resulting in a feeling of having compromised in relation to care (Southerton, 2003) and the ideals of a proper family meal, which will be illustrated further down. This is contrasted with cold spots, which are seen as quality time in which social standards of care are maintained within family practices (Southerton, 2003). Thus, sustaining acceptable standards of care is linked to morality, as these standards reflect an ideal of how functional families should behave. Not respecting these standards is therefore seen as necessarily triggering anxiety. Our findings critique the notion that using convenience food when time is short is associated with anxiety. Instead, we illustrate more nuanced experiences of convenience food, care and time management. Our findings show that breakfast is part of the family time economy (Maher et al., 2008), commonly as a hot spot where expectations around synchronicity and presence around the table are negotiated without questioning togetherness.
Participants experience breakfast as an informal, routinised food occasion in which care is enacted via individualised consumption of convenience food without being associated with 'a dereliction of familial duty' (Jackson, 2018: 2517). The lack of conviviality and synchronicity does not appear to cause anxiety among our participants, who imply that breakfast is a 'different' meal which does not follow the same standards which are applied to other meals. Theoretically, this article extends our understanding of family meals and temporality by showing that a clear-cut distinction between hot and cold spots does not represent the complexity of family life, since care is enacted in moments of harriedness through the consumption of convenience food. The findings also show that breakfast practices where individual priorities prevail over commensality do not necessarily cause anxiety. We argue that this is because they are not seen as lacking care.

Family Practices and Temporality

The concept of family practices (Morgan, 1996) provides a tool that allows us to foreground mundane routines and habits through which we make sense of and produce/reproduce family as a set of relationships (Morgan, 2011). By focusing on what families do, the family practices concept intervened at a moment when substantial attention was focused on family structure (Morgan, 1996). While the term conveys a sense of routine, family practices operate on a number of levels, from the everyday to the occasional, from the mundane to the more spectacular (Morgan, 1996). Analytically, the concept 'opens up the possibility of movement between the perspectives of the observer and the perspectives of family members', and allows the wider contexts of history and biography to be part of the analysis (Morgan, 2011: 6). In Rethinking Family Practices, Morgan (2011: 80, emphasis in original) highlights that 'family practices are conducted within time and space and involve the use of time and space'. Family life unfolds and evolves through events and rituals which mark the passing of the days and seasons (Morgan, 2011). Meal times help to structure the day but also provide a sense of the passing of time through celebrations such as Christmas and Thanksgiving. Maher et al. (2008) employ the term 'family time economy' to illuminate the 'interrelated and complex temporalities of work and care in contemporary family life' (2008: 547). Family time is not infinite, given the limited hours in the day and the juggling of aspects of family life with other commitments such as work, school and leisure (Maher et al., 2008, 2010). Families with children may be negotiating and splitting time between paid work, school schedules, travel and extra-curricular arrangements for children, to name a few (Maher et al., 2008). When exploring family temporality, we refer to scheduling and commitment, two measures of coordination of family timetables (Morgan, 2019). Scheduling refers to the allocation of practices to a time frame (Southerton, 2003), and it implies the effort of coordinating everyone's schedules (Southerton and Tomlinson, 2005). Personal commitment is the effort of doing that activity together, synchronising schedules (Morehead, 2001). The trade and supply of family time can generate tensions, such as a feeling of 'chasing time' in the effort to preserve some 'free' time dedicated to care and presence (Maher et al., 2010).
This is particularly exacerbated for working mothers, given the gendered expectations around care and domestic work, combined with the way in which the timetables of schools do not reflect those of the workplace (Maher et al., 2008). Such tensions can be related to the subjective experience of time, as shown by mothers who synchronise the linear time of work with the cyclical time of care even when at home (Morehead, 2001), or by children who prefer 'mush time' (free time uninterrupted by external timetables, intrusions or demands), which involves being together in a relaxed way while apart in the home (Baraitser, 2013). Maher et al. (2010) argue that there is a need for further analysis of family time schedules beyond time use, in order to understand family pressures in contemporary family life. We use hot and cold spots (Southerton, 2003) to understand such pressure during breakfast as a morning family meal.

Family Meals

Defining 'family meals' is a slippery exercise since both terms, family and meals, are problematic and complex. As an essentialist view of family is inadequate to capture the complexity of family forms, we adopt an approach which sees family not as a 'naturally occurring collection of individuals' but rather as a social unit which is formed and re-formed through everyday activities, including the preparation and eating of meals (Jackson et al., 2009). Defining what constitutes a meal is also challenging since it implies engaging with interpretations of meal propriety, including moral accounts of what constitutes 'appropriate' food, and the broader notions of care and feeding, as care needs to be expressed in a way that is morally acceptable. If some have engaged with a structural approach looking at the composition and sequencing of dishes (Douglas, 1972), others have gone beyond this and looked at the symbolic meanings around the materiality of the meal (see, for example, Valentine, 1999). Indeed, commensality around the domestic dining table (Fischler, 2011) and the sharing of the same food are part of a powerful symbolic myth, the myth of the family meal, propagated in the marketplace by brands, products and media around the mantra that good families eat together and stay together (Pirani et al., 2018). Through everyday practices such as food consumption and preparation, family is constantly reproduced (Morgan, 1996). Commensality produces bonding (Fischler, 2011), and eating together as a family, sharing the same table, time and food, reproduces the togetherness of family (Brannen et al., 2013). Studies confirm that the dining table is an 'important symbol or even metonym of the family' (Lupton, 1996: 39). The valorisation of family mealtimes around the table is considered a measure of doing family well (Gillies, 2011), a discourse consolidated by advertising representations of happy families consuming breakfast together (Pirani et al., 2018). Reflecting on the normative power of the 'happy family' meal, Wilk (2010) remarks how this ideal is connected to the middle classes. As studies adopting a Bourdieusian perspective have shown, middle-class families often see the evening meal as an opportunity to transmit to their children an extensive culinary taste involving a particular appreciation for healthy options (Wills et al., 2011). Likewise, Italian middle-class family meals are used to educate children in food appreciation, leading children to interiorise a focus on nutritional content and table manners (Oncini, 2020).
Taking one's time is part of the picture of what is seen as an acceptable culinary habit in Italian middle-class households, as 'feeding oneself is secondary to the fact of doing it in the way that is believed the most culturally appropriate (sat at the table, with no rush)' (Sassatelli et al., 2015: 101). Consuming a family meal regularly remains 'a goal that most parents would like to achieve, not only because it is a way of "doing family" but also for practical and budgetary reasons' (Brannen et al., 2013: 428). The ideal of regular family meals consumed together is met with the fear of losing such tradition, although it has been noted that this is based on an illusion of the past rather than empirical evidence (Mestdag, 2005), and eating together is still remarkably common (Yates and Warde, 2017). Research has shown how the ideal of a cooked meal eaten together increases women's time and labour in feeding the family in accordance with conventions (Brannen et al., 2013;Bugge and Almas, 2006;Moisio et al., 2004;Pirani et al., 2018). Literature suggests that this effort is sustained because the prioritisation of individual meals can be seen as a source of 'shame' (Brannen et al., 2013: 426), as solitary or asynchronous eating is perceived more negatively than eating together as a family (Fischler, 2011). This study shows how breakfast is one meal where eating asynchronously and consuming convenience food seems to be acceptable and does not open up spaces for negative moral judgement. Considering the pervasive ideal of eating together, it is not surprising that parents feel harried and anxious to prepare and share meals 'on time' (Brannen et al., 2013;Bugge and Almas, 2006). As previously mentioned, Southerton (2003) sees daily life as comprised of a sequence of cold and hot spots and the routinely family meal is an example of the latter. Following Southerton, certain meals can be seen as hot spots when they are inserted before timed events that take priority, such as a rushed breakfast before morning routines. These meals, in which quality time and care is not a priority, are contraposed to other activities which are moments of the day wherein care is exchanged. The use of convenience foods has been considered by scholars as compromising standards of meal propriety and care (Bugge and Almas, 2006;Moisio et al., 2004). This is also the view of scholars including Warde (1997) whose work on the dichotomy of care versus convenience has influenced many studies on domestic food routines. The consumption of convenience products is a typical strategy that many adopt to cope with the anxiety of time shortage, although people may worry they will be criticised for compromising 'normative social standards expressive of care' (Southerton, 2003: 22). Further literature evidenced the way in which convenience is not always seen as an acceptable shortcut, raising concerns about the affect and morality of the consumption of convenience products (Carrigan and Szmigin, 2006;Jackson, 2018). Using convenience food to save time can lead to a sense of guilt because it feels like 'cheating' given the dominant cultural script of the homemade family meal (Moisio et al., 2004). Recently some have criticised the negative and moralising connotation that convenience food has received in the literature (Meah and Jackson, 2017). Others have shown that in many families, convenience food is combined with fresh products and participants do not make a distinction among different types of food (Carrigan and Szmigin, 2006). 
Meah and Jackson (2017) have also highlighted how many see convenience food as caring food, since through providing such products parents enact care for their children. In reviewing the scant literature on breakfast conducted in different geographical contexts, studies have illustrated how the consumption of convenience items has often replaced the consumption of a cooked breakfast (Green, 2007; Schneider and Davis, 2010). Squeezed between inflexible working and schooling schedules (Veeck et al., 2016), breakfast is considered an important meal of the day (Marshall, 2005), but skipping it or reducing it to the consumption of snacks is a common trend across different geographical contexts (see, for example, Le Pape and Plessz, 2017; Pirani et al., 2018). Unless there are children in the household, breakfast is a quicker and more solitary meal in comparison to those consumed later in the day (Mestdag, 2005; Yates and Warde, 2017). As such, commensality at breakfast is unusual (Le Pape and Plessz, 2017). Some research suggests that parents try and enforce breakfast for their children even though they may end up skipping it themselves (Le Pape and Plessz, 2017). In Italy, this meal is still in its infancy. What is today known as the 'Italian breakfast' is a relatively recent meal and it consists of hot milk with coffee and pastries, biscuits and other confectioneries (DOXA-AIDEPI, 2015; Milani and Pegoraro, 2006; Pirani et al., 2018). Scholars report that people are gradually introducing breakfast into their daily routines, especially in households with children (Mortara and Sinisi, 2016). Considering how breakfast differs from other meals and how a more complex relationship between care and convenience might happen within this meal, it is surprising to see how little research has been conducted on it.
Methodology
This article draws upon the dataset of a larger project that collected semi-structured interviews with 34 participants conducted between November 2016 and May 2017. Participants were recruited from two towns in the same region of the north of Italy using snowball sampling techniques (Silverman, 2001). A diverse definition of families was adopted, based on marriage, civil partnerships and long-term relationships, with or without children. The sample evenly comprised both heterosexual and lesbigay families (Carrington, 2013), with a majority of participants being female (23). The sample was predominantly white, with an average age of 41 years old, and participants generally self-identified as middle class. Interviews were carried out individually, with the majority of participants coming from different families, in order to focus on individual accounts of collective practices and meanings (Orbuch, 1997). All the interviews were conducted in Italian by the first author, who tape-recorded, transcribed and translated them. Ethical approval was gained from the institution where the researchers were based at the time of the fieldwork. Participants have been granted confidentiality and anonymity through the use of pseudonyms, and they received a report of the findings at the end of the research. Interviews were manually coded, using a thematic coding frame that aimed at unpacking how respondents structure their morning routine and what meanings they attach to it, using codes both derived from the literature and from the data. We adopted a two-step coding process: first each group was coded separately and then it was compared for more re-coding.
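As a rough illustration of how such a comparison between coding passes can be organized, the sketch below uses entirely hypothetical codes and excerpts together with a standard chance-corrected agreement statistic (Cohen's kappa); it is not the authors' reported procedure, which was manual and collaborative.

```python
# A minimal sketch (hypothetical codes/excerpts) of comparing two coding passes
# over the same interview excerpts before discussing disagreements and re-coding.
from sklearn.metrics import cohen_kappa_score

# Hypothetical code assignments by two coders for the same ten excerpts.
coder_1 = ["hot_spot", "care", "convenience", "care", "hot_spot",
           "scheduling", "convenience", "care", "hot_spot", "scheduling"]
coder_2 = ["hot_spot", "care", "care", "care", "hot_spot",
           "scheduling", "convenience", "convenience", "hot_spot", "scheduling"]

# Raw percentage agreement and chance-corrected agreement.
agreement = sum(a == b for a, b in zip(coder_1, coder_2)) / len(coder_1)
kappa = cohen_kappa_score(coder_1, coder_2)

# Excerpts the coders disagree on would be flagged for discussion and re-coding.
to_revisit = [i for i, (a, b) in enumerate(zip(coder_1, coder_2)) if a != b]
print(f"agreement={agreement:.2f}, kappa={kappa:.2f}, revisit excerpts: {to_revisit}")
```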
Following the principles of collaborative coding (Cornish et al., 2013), the second author coded a subset of data to check for reliability, while the third author was involved as auditor of the emerging codes. The interpretation aimed at unpacking family practices at breakfast.
Talking about Breakfast
In asking participants about breakfast, their immediate answers were 'it is not a big deal for us', 'it is a very simple matter' or 'well, we do not really have a breakfast as such', positioning it as a hot spot that does not raise moral concerns. For example, this is how Beatrice and Ascanio describe breakfast in their households: We have different schedules. Breakfast is not planned apart from holidays. (Beatrice, heterosexual, housewife, two children) People do struggle to have time for breakfast. We do not have time for having breakfast together. Fabio leaves home at 7, I leave at 6, Francesca around 8, then Maria has breakfast later. Everyone gets up at different times, we do not manage to get up at the same time. But we make sure to save time for lunch and dinner, depending on working commitments. (Ascanio, heterosexual, sales agent, two children) These two quotes reveal how breakfast is inserted in the family economy of time (Maher et al., 2008) and in the cosmology of the meals (Douglas, 1972), and as such it can be understood only in relation to other meals. Breakfast is squeezed between inflexible paid work and schooling schedules (Le Pape and Plessz, 2017), and people 'do not have time' for breakfast, as Ascanio says, reflecting its status as a hot spot. Indeed, time seems to be perceived as a scarce commodity (Maher et al., 2008) and thus it is allocated cyclically to daily meals. In Ascanio's household, for example, time is saved for lunch and dinner, while in Beatrice's household time for breakfast is 'found' during holidays. In other households, time for breakfast is found at the weekends. Squeezing breakfast in or struggling to have breakfast together is not seen as morally problematic. Participants often locate their organisation of breakfast as common, and generic statements asserting that 'people struggle to have time for breakfast' are frequent. If in other studies conducted in the UK and the USA participants seem to be concerned with the erosion of time for having breakfast together, our participants also report a lack of time but do not seem to express concern about it (Kremer-Sadlik et al., 2008). This lack of concern is particularly relevant in understanding participants' memories of breakfast: My dad used to stuff our faces with Nastrine [convenience pastry] before we went to school [laugh] we were obviously always late. So my dad, to save time, did not give us the chance to chew it, and would put a whole Nastrina in our mouth. (Paola, heterosexual, employee, two children) My mum used to be out at 6.15, so even before I got up. My dad used to have breakfast at 7.00 and I did at 7.15/7.30 to gain some time, so we all had it on our own. (Fabiola, heterosexual, social educator, married) Both Paola and Fabiola have a vivid memory of breakfast within a tight schedule requiring the coordination of time and food. Paola's memories of breakfast focus on her father's attempt at network coordination, getting both her brother and her to school on time. The connection between parental care of feeding children and time scarcity is a common feature in participants' accounts.
Reflecting on their current and past routines of having breakfast, participants frame this family practice as 'normal', attaching to it a sense of regularity and indeed a sense of the everyday (see Morgan, 2011). As Pietro explains: To be honest there is a pattern: I am the one who gets up first and prepares the coffee and breakfast for my children. The little one gets up after me and gets a merendina [convenience pastry], the eldest gets up at the last minute and he forces himself to have something before going. After all this, maybe there is time for a coffee with my wife, but always in a rush! (Pietro, heterosexual, entrepreneur, two children) Interestingly, Pietro admits that although breakfast is not 'a big deal', there is a pattern in its daily performance and there is indeed a 'being together'. While breakfast is not consumed by the entire family around the table, there is a precise pattern, which is a sequence of events and his execution of specific tasks at a specific time. Pietro knows by heart when, what and how his children and his wife are eating, even if they are each having breakfast on their own. Knowing other family members' preferences reflects the 'distinctiveness' of this family practice, which reproduces family ties while distinguishing family members from other relationships (Morgan, 2011). We found such intimate knowledge of breakfast a common feature among our sample, revealing how, despite the initial dismissive description of breakfast, this meal is more important to family life than first anticipated.
The Rhythm of the Morning: Between Synchronicity and Commitment
At first glance, breakfast could be considered a quintessential example of a hot spot, as it has been theorised by Southerton (2003). The density of morning activities to be performed in a short amount of time and the coordination of such activities among different family members are certainly characteristics of a pressured time. This is particularly evident in households with young children, where parents need to juggle different tasks at the same time. Multitasking is not about doing more, but rather doing it all at once (Southerton, 2003). This is the case of Benedetta, who is responsible for coordinating her family's morning timetable, such as waking everyone up: I immediately wake up when the alarm rings. Mara [her daughter] instead takes 40-45 minutes. I wake up and I put my alarm in her room, because she does not wake up immediately, and she doesn't like being touched. So I put the alarm on snooze, first 6.45, then 6.50, then 7.05. After a while she gets up and she brings me the alarm. Sometimes she cuddles a bit with Btissam [Benedetta's partner]. I prepare tea and she has zwiebacks with Nutella, we have a decaf and we have breakfast the three of us together. (Benedetta, lesbian, therapist, one child) Benedetta's multitasking, which combines waking up her daughter and getting breakfast ready, reveals how 'getting things done' is her responsibility. The 40 minutes everyone needs to get ready are populated by a density of actions that Benedetta coordinates; her daughter and her partner seem to be free from managing time and tasks, including preparing food that can be shared. In households where young children are present, participants see breakfast as a 'good' and 'healthy' habit to be enforced regardless of their sacrifice to organise the meal, confirming what has been observed in other European contexts (Le Pape and Plessz, 2017).
In fact, some share the same commitment that Benedetta has in making sure that breakfast is shared among the family members. For example, Linda (heterosexual, support teacher, two children) affirms that 'we all sit, eat, we have a chat, we are always in a rush, but the food is important for us'. In other households sharing breakfast is important even if varying ways of doing breakfast occur: Some days of the week we are all together, others Sebastiano is in Rome, we do it differently. I must say that when my husband is not there we stay on the couch, we are a little messier. Sometimes we also have milk in bed, on the couch, we do the things you shouldn't do. (Giacomo, gay, lawyer, civil partnership with a child) If alone with his son, ordinarily breakfast is a hot spot that prioritises the quality time of cold spots. Giacomo becomes a relaxed parent, performing 'things you shouldn't do', such as eating on the couch. Giacomo suggests that eating properly means eating at the table, but infringing this rule does not generate any moral anxiety. When Giacomo's husband is at home, breakfast becomes an opportunity to spend time together as a family, sitting together around the table and involving the child in a more elaborate version of breakfast. This more relaxed commitment to having breakfast together is also present in couples without children. For example, Michele says: If we wake up together it means that we both have time, so we eat with no rush and we talk about the day. This is 50% of the time, while in the other 50% it means we have different schedules and we eat on our own. (Michele, heterosexual, surveyor, without children) Breakfast is still considered as a pleasurable moment for family bonding, but not a compulsory one to attend. Conflicting schedules or tiredness are considered sufficient reasons for not having breakfast together. However, committing to having breakfast together implies focusing on quality time and interaction with the other person, borrowing elements from the cold spot even on weekdays. Later in the interview, Michele explains how breakfast with his wife often implies a tablecloth, signalling a special effort, and it would be consumed away from the television, which would disrupt the conversation. When on his own, Michele describes having other priorities and prefers to have a quick breakfast without setting the table (Marshall, 2005). If the aforementioned examples show attempts of having breakfast together and the effort parents like Benedetta make to synchronise their own tasks with other family members' rhythms, there are also households in which such attempts are absent: In our house everybody wants to stay in bed. We all have breakfast on our own, because we have different schedules, everyone gets his own one ready [. . .]. Someone should wake up earlier to have breakfast together [. . .] we have other moments we look at during the day. (Francesca, heterosexual, stay at home mother, two children) Instead of having breakfast together, Francesca prioritises her own sleep. Her lack of commitment towards synchronising tasks is revealing of how breakfast is considered outside of her role of feeding the family (DeVault, 1991). In fact, later in the interview Francesca explains that her family always tries to eat together, but not at breakfast, and how other meals are her own responsibility. Her effort to share family meals goes as far as regularly postponing lunch until 2 p.m. when her eldest son comes home from school. 
If time is a resource to be 'saved' and 'protected' for lunch, time for breakfast competes with other tasks. Unlike other meals, breakfast can be consumed individually without jeopardising the ideals attached to doing family around the dining table. A Convenient Breakfast A significant aspect of breakfast is that participants eat the same convenience food every day. This seems to echo international trends highlighting the predominance of daily consumption of convenience items (Yates and Warde, 2017). Interestingly, family members do not necessarily share the same preferences, and convenience food is consumed individually. Take for example the case of Sabrina and her household in which family members have individual preferences: We have it [breakfast] in two rounds. Those who go to primary school need to be out earlier, so they eat earlier. With the two younger ones, who are not independent and need to be spoon-fed, [comes a] second round. Because breakfast is conditioned by the time at which you must be out, lunch and dinner are not self-service, we eat together. Usually my husband wakes up earlier and he starts preparing the coffee. Everyone has their own taste, we are six and we eat six different things. He [the younger son] eats Pan di Stelle [a Mulino Bianco biscuit], the younger daughter cereals, the older bread and Nutella, the middle son bread and tomato, which is a slice of bread with my mother-in-law's tomato sauce and some salt. It is a sort of red pizza. The father has milk, coffee and biscuits. He prefers Macine [another kind of Mulino Bianco biscuit], or bread and jam [. . .]. I have cereals, but different from those that my daughter has. Each one of us eats on our own. There is the idea that since you don't eat much you can have what you prefer. With other meals you can make requests [before it is cooked] but once it is ready either you eat what's on the table or you fast. (Sabrina, heterosexual, consultant, two children) In this detailed description of how 'self-service' breakfast, as she defines it, is organised the intricate relationship between time and food emerges very clearly: six people eating six different food items in the same space and in a short amount of time. Referring to breakfast as a 'self-service' meal, which in the Italian language is a term often used as a synonym for canteen, Sabrina describes the sense of efficiency and time management. Convenience food and individualised consumption allow Sabrina and her husband to take turns feeding the children, or to let them prepare their own breakfast. This arrangement is not simply a matter of practicality but also of gratification, as personal preferences can be expressed without affecting other family members. Convenience food allows a moment of private indulgence where everybody's taste can be satisfied. As such, convenience food is not experienced as a compromise or a shortcut (Southerton, 2003: 21) but rather as part of routine care enacted within the family (Meah and Jackson, 2017). As underlined by Sabrina, this does not happen during other meals in which care is enacted with a more rigid control on health (Wills et al., 2011) and with the moral obligation of eating what is available, summarised by Sabrina saying 'either you eat what's on the table or you fast'. Southerton (2003) highlights how convenience products and shortcuts adopted during hot spots generate anxiety among individuals, since they are seen as lacking care or not meeting social standards of appropriate food. 
Instead, we found that respondents considered convenience food nutritionally adequate, and that family consumption validated this choice: 'I eat milk with biscuits and wholemeal rusks. [. . .] I have always had them with my family, as many as we wanted. They are nutritious products, and it is fine with me' (Stella, heterosexual, school teacher, three children). The example of Stella shows how convenience food is part of life-long consumption patterns, present from childhood as well as in her current household. Convenience bakery products are not seen as an exceptional indulgence, but rather as a reasonably nutritious food that can be consumed quite liberally. In fact, in our sample participants do not show any anxiety around feeding their children convenience food at breakfast and instead preferred brands of ready-made snacks are mentioned as part of caring for children: I do not usually have breakfast [. . .]. They [her daughters] have a yoghurt, a kinder Delice or a Kinder Brioss [two branded breakfast pastries]. I selected those because they have some milk in them and since they stopped having yoghurt and they are not having milk I thought let's give them milk in another way, even if that is not really milk. (Paola, heterosexual, employee, two children) In Paola's account, branded pastries represent a 'good enough' (Molander, 2019) option for feeding her children in a short amount of time when other tasks need to be done. Convenience food interlaces with childcare, as it allows parents to feed 'something' to children who are perceived as fussy in terms of eating (Jackson, 2018). The careful selection among other branded products shows Paola's care in feeding her daughters, and her interest in giving them food they would eat and enjoy, while revealing the moral compromises that underpin her responsibility of feeding the family. Discussion Our findings addressed the temporality of family routines in Italy to understand the experience of eating in the context of being squeezed for time. Applying the notion of hot and cold spots (Southerton, 2003) to these accounts of breakfast, this article makes three main contributions. First, it confirms the utility of focusing on the temporal nature of family practices, in agreement with Maher et al. (2010), Morgan (2019) and Southerton (2003). Second, it critiques Southerton's (2003) claim regarding the anxiety about taking shortcuts by showing that participants do not experience guilt around a meal based on time-saving strategies and by offering a moral account of such strategies, which goes against the norm of most family mealtimes. Third, it affirms that care takes place in the hot spot, contrary to the original theorisation that sees care as an element of cold spots (Southerton, 2003). Our first contribution stresses the importance of time in the study of family practices. Hot spots are generated in the effort of coordinating different schedules and family needs (Southerton, 2003). By looking at breakfast, we show the implications of hot spots in doing family, as hot spots ease the 'sense of obligation' implied in creating quality time for others (Southerton and Tomlinson, 2005). In the hot spot individual needs, such as sleeping a bit longer or getting ready for the day ahead, can be prioritised without compromising family meanings, and expectations around synchronicity and presence around the table are negotiated without questioning togetherness. 
Moreover, we showed how boundary practices, which contribute to the feeling of belonging (Morgan, 2019), are present in the hot spot too, such as remembering by heart what other family members eat even if breakfast is not consumed together. The prioritisation of individual needs taking place in the hot spot has particular implications for the distribution of gendered work within families. The division of labour is very important in understanding gendered temporal practices. Research has shown how the organisation of children's lives is impacting most on the temporal rhythms of mothers, who tend to be the ones in charge of synchronising multiple dimensions of time (Morehead, 2001;Southerton, 2006). Our data showed how the lack of moral judgement over convenience food and the frequent de-synchronisation makes breakfast the meal in which women have the least obligation to tend to their family members. This role of breakfast should be seen in relation to other meals (Douglas, 1972) where the expectations over eating together are higher and women do not enjoy the same flexibility. Our second contribution is that participants do not experience guilt or anxiety for using time-saving strategies in the hot spot. This contradicts the original argument about hot spots: 'the forms of convenience necessary to negotiate hot spots also generated anxiety about "taking short cuts" and not "doing a job well" (all narratives of personal integrity)' (Southerton, 2003: 21). Yet participants in this study did not feel particularly anxious about taking shortcuts in relation to breakfast (such as simply opening a packet of pre-prepared biscuits) and did not express tensions between care and convenience (Jackson, 2018;Meah and Jackson, 2017;Warde, 1997). Hence, we contend that in the 'hot spot' participants did not express a sense of guilt or loss of their personal integrity for using shortcuts motivated by time management. Interestingly convenience food does not stop being consumed once there is more time for breakfast, for example during the weekend, showing how the exceptionality of this meal is not only related to time scarcity. This perhaps connects with Morgan's (2011: 88) observation that family practices are not simply defined by the time in which they take place, but that 'it is also that a sense of time and space is created or recreated by these practices and the relationships involved'. Our third contribution is the observation that care can be enacted also in the hotspot. This contradicts the argument that hot and cold spots reproduce the tension between care and convenience (Southerton, 2003). In our sample, care was enacted through attentiveness rather than commensality. Examples of care in the hot spot include the accommodation of individual needs within collective schedules, memorising each other's morning rhythm, or the labour involved with feeding children even when parents were not having breakfast themselves. Convenience is not antithetical to care, as care is made possible through convenience food. Breakfast products are not simply seen as an acceptable convenience (Carrigan and Szmigin, 2006) but part of enacting care (Meah and Jackson, 2017). Thanks to such products, individual preferences can be accommodated, and parental care is maintained also during a hot spot. Valentine (1999) observed how individual preferences can be satisfied only at the expense of family food. 
Breakfast, instead, emerges as the only meal in which the expression of individual and indulgent preferences does not call into question whether the family is eating 'properly'. We want to conclude by making some suggestions for future research. The findings in this article suggested that there could be merit in further investigating the dichotomy between hot and cold spots, raising the question of whether this is a straightforward binary, and whether family members might have a different experience of this temporal rhythm. Our data indicated that hot and cold spots might not be so rigidly divided, since breakfast showed a combination of both. There is also the question of whether all family members experience temporality in the same way. While this article does not explore the discrepancy between individual perceptions of time pressure, it acknowledges that 'one person's interpretation of rush may be another's experience of leisure' (Southerton, 2006: 443). As hot and cold spots and care and convenience are not always in opposition as previously theorised, further research could illuminate how these dichotomies apply to family life. Conclusion This study contributes to understandings of how family practices are inserted in the family time economy (Maher et al., 2008). Inspired by Morgan's (2011) view that family practices are conducted with the use of time, this study has shown that time in the morning is a scarce resource in family life. As such, the allocation of time to certain tasks rather than others reveals priorities and commitments of individuals and their families. In looking at the specific case of breakfast in Italy, this study has shown how this meal is inserted in a flux of competing activities and thus it needs to be understood in relation to temporal priorities. Acknowledging such flux implies recognising that family practices might compete for time and that certain tasks might be squeezed among others that take priority. In studying the complexity of balancing and allocating time in family life, the investigation of what is eaten, how, how often and with whom becomes a matter of temporality and care. In investigating people's accounts of their experiences of breakfast, this study contributes to a deeper understanding of how individuals make sense of their daily schedules and enactments of care through food. The theoretical dichotomy between care and convenience and the related anxiety around eating and sharing convenience food were not confirmed in our research. A broader view of care was provided by participants, which departed from a simple nutritional understanding of food as good/caring versus bad/ convenient. Providing convenience food for the self and others was not seen as morally problematic nor as neglecting 'normative social standards of expressive care' (Southerton, 2003: 22). It was seen as a pragmatic compromise between paid work and family life and between parental duties and individual schedules. Such standards might also be framed in relation to other family meals, in which, it seems, different standards of care and different temporal arrangements were applied. Daniela Pirani is Lecturer in Marketing at University of Liverpool. Her research interests include gender performances in the marketplace, cultural branding, food practices and visual consumption. She has published on the commodification of veganism, on family in advertising and on the creation of brand practices. Vicki Harman is Senior Lecturer in Sociology. 
Her research interests include family life and social identities and inequalities such as gender, social class and ethnicity. She has recently conducted research on parents' perspectives on preparing lunchboxes for their children and mothers' perspectives on feeding the family in hard times. Currently, she is involved in empirical research projects investigating (1) the changing nature of grandparenting in Britain, (2) arts-based and participative approaches to research in women's refuges, (3) gender and ballroom dancing and (4) food poverty in Liverpool and Stoke-on-Trent. Benedetta Cappellini is Professor in Marketing at Durham University. Her research falls broadly into the areas of Consumer Culture, Critical Marketing and Sociology of Consumption. Topics she has recently studied include food cultures, meal practices, austerity, intensive mothering and domestic violence. Date submitted March 2019 Date accepted March 2021
v3-fos-license
2019-11-07T14:50:58.671Z
2019-12-01T00:00:00.000
209572466
{ "extfieldsofstudy": [ "Geography" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scielo.br/pdf/aa/v49n4/1809-4392-aa-49-04-299.pdf", "pdf_hash": "6561664a992f9195fd4b11a522456d6b44c125ae", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46214", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "sha1": "f7c677470a8a0885f3adca2b59566732741779ed", "year": 2019 }
pes2o/s2orc
Biometric relationships between body size and otolith size in 15 demersal marine fish species from the northern Brazilian coast
The relationships between fish size and sagitta otolith measurements were calculated for the first time for 15 species belonging to six families from the northern Brazilian coast. A total of 220 fish were sampled from the bycatch landed by the bottom-trawl industrial shrimp-fishing fleet between August and September 2016. All species had strong relationships between otolith measurements and fish total length, with the coefficient of determination (r²) ranging between 0.71 and 0.99. The variable most strongly related to fish total length was the sagittal otolith length (OL), which explained 98% of the variability. These relationships are a useful tool to estimate length and mass of preyed fish from otoliths found in stomach contents of marine predators.
INTRODUCTION
Otoliths are structures composed mainly of calcium carbonate, located in the inner ear of Osteichthyes, which have body balancing and hearing as main functions (Campana 2004). These structures are arranged in pairs, called sagitta, asteriscus, and lapillus in bony fish, and vary widely in size and shape among species (Campana 2004; Popper et al. 2005). Due to their larger size in the majority of bony fishes, sagittae otoliths are the most suitable for systematic and ecological studies, for taxon identification, age estimation, and life-history tracking (Harvey et al. 2000; De La Cruz-Agüero et al. 2016; Assis et al. 2018). Their non-digestible calcified structure has allowed them to be widely used to identify fishes ingested by different predators such as aquatic mammals, seabirds and fishes (Battaglia et al. 2010; Tuset et al. 2010). Furthermore, relationships between fish size and otolith measures are useful to elucidate the feeding behavior of piscivorous fauna, providing subsidies for the management of these species (De Pierrepont et al. 2005; Lombarte et al. 2006; Battaglia et al. 2010). Studies on otoliths in marine fish from the Western South Atlantic are scarce (Waessle et al. 2003; Assis et al. 2018; Souza et al. 2019). The high nutrient and sediment load from the Amazon River, near the northern coast of Brazil, favors the occurrence of abundant fishery stocks, mainly shrimp (Penaeidae) and several fish species (Isaac and Braga 1999; Marceniuk et al. 2019). The ichthyofauna is functionally important as an intermediate trophic level for many consumers; however, its importance is not well understood (Barletta et al. 2010). The present study aimed to determine the relationship between fish size (i.e. length and weight) and sagittae otolith measurements (i.e. length, width, and weight) for the 15 most abundant demersal species captured along the northern Brazilian coast.
MATERIAL AND METHODS
Fish were captured between August and September 2016 in an area characterized by the estuarization of inshore waters due to the proximity of the Amazonas River (5°02'21.6"N, 47°49'33.9"W; 0°51'02.0"N, 47°50'30.0"W, northern and southern limits of the sampling area, respectively; Figure 1). The area is inserted in the world's second largest mangrove (~700 000 ha), which is an important fishery area (Isaac and Braga 1999; Giarrizzo and Krumne 2008). Samples were randomly collected from the bottom trawl of industrial shrimp trawlers using a 30 x 21 mm mesh bottom trawl of 22.4 m length, towed at a velocity of ~2.5 knots at 35-49 m depth.
Bycatch were taken to the laboratory and kept frozen until processing. Sampled fish were identified to species level, measured (standard length, SL, and total length, TL; precision 0.01 cm) and weighed for total body weight (BW, 0.01 g). Sagittae otoliths were removed, cleaned and stored dry in coded microtubes. Each otolith was weighed (WO, 0.0001 g) using an analytical balance, and measured for maximum length (OL, 0.001 cm), as the horizontal distance between the anterior and posterior tips of the sagitta, and width (OW, 0.001 cm), as the greatest distance between the dorsal and ventral margins of the otolith (Harvey et al. 2000; Battaglia et al. 2010). Vouchers of each species were fixed in 10% formalin after processing, then preserved in 70% alcohol and deposited in the ichthyological collection of the Grupo de Ecologia Aquática (GEA) at Universidade Federal do Pará (UFPA). Potential differences between the dimensions of the right and left sagittae otoliths were tested using a paired Student's t-test per species (Park et al. 2018). The length-length relationship (LLR) was determined by the method of least squares to fit a simple linear regression model: TL = a + bSL. The length-weight relationship (LWR) was determined as W = aSL^b, and was fitted to the data using a linear regression of the log10-transformed data. Morphometric relationships between TL and otolith dimensions were calculated using the linear (Y = a + bX) or linearizable (Y = aX^b) regression model that best fit the data. When present, outliers were removed by graphical inspection of the plot before performing the regression analyses (Froese et al. 2011). The coefficient of determination (Pearson r-squared, r²) was used as an indicator of regression quality, and a t-test (H0: b = 3) was used to check whether fish growth (b) differed statistically from isometric growth (Froese et al. 2011). A significance level of α < 0.05 (95% confidence level) was routinely adopted.
RESULTS
The analyses were performed using 220 specimens from 15 species (see Figure 2 for otoliths) and six families. The most representative family was Sciaenidae, with nine species, followed by Haemulidae, with two species, and the remaining four families, with only one species each (Table 1). Body weight ranged from 6.7 to 911.1 g, SL from 7.1 to 69.5 cm, and TL from 8.9 to 72.1 cm. Paired t-tests did not detect differences between left and right sagittae otoliths for OL, OW, and WO (p > 0.05 for all species). Hence, all further analyses were standardized by using only the left otolith measures. Differences between the b values estimated by the different regression models were species specific. However, for species of the same family with similar body shape (e.g. Sciaenidae and Haemulidae), values were similar (Figure 3). Length-weight relationships (LWR) were highly significant (p < 0.001) only for six species: Ctenosciaena gracilicirrhus, Macrodon ancylodon, Menticirrhus americanus (all Sciaenidae), Haemulon steindachneri (Haemulidae),
DISCUSSION
Body size and mass relationships are important tools for the functional understanding of a species at specific locations (Froese et al. 2011). Generally, length-weight relationships (LWRs) are used for converting lengths into fish mass and vice versa (Froese 2006; Froese et al. 2011), and length-length relationships (LLRs) are used to convert one length into another (e.g. standard length to total length).
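As an illustration of this fitting procedure, the sketch below fits the log10-transformed length-weight relationship and applies the t-test of isometry (H0: b = 3); the measurements are invented for the example and are not the study's data.

```python
# Minimal sketch: fit W = a * SL^b on log10-transformed data and test b against 3.
# The length and weight values below are made up for illustration only.
import numpy as np
from scipy import stats

sl = np.array([8.2, 10.5, 12.1, 15.4, 18.0, 22.3, 27.8, 33.1, 40.6, 52.0])        # standard length (cm)
bw = np.array([7.1, 14.9, 23.0, 47.5, 76.0, 145.0, 280.0, 470.0, 860.0, 1800.0])  # body weight (g)

fit = stats.linregress(np.log10(sl), np.log10(bw))
a = 10 ** fit.intercept   # back-transformed intercept
b = fit.slope             # allometric exponent

# t-test of H0: b = 3 (isometric growth), with n - 2 degrees of freedom
t_stat = (b - 3.0) / fit.stderr
p_value = 2 * stats.t.sf(abs(t_stat), df=len(sl) - 2)

print(f"W = {a:.4f} * SL^{b:.3f}, r^2 = {fit.rvalue ** 2:.3f}")
print(f"Isometry test: t = {t_stat:.2f}, p = {p_value:.3f}")
```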
Additionally, assuming that otolith size is closely correlated to fish size, and that its shape is species specific (Campana 2004), otolith analysis is suggested to be a feasible and reliable method to identify fish species and to estimate fish size and weight (Battaglia et al. 2010; Park et al. 2018). Froese (2006) suggests that the angular coefficient does not differ from isometry when b = 3. Accordingly, our b values of the BW-TL relationship were isometric for sciaenids, which, in addition, had coefficients of determination (r²) higher than 0.90 (see Figure 3). However, despite the strong biometric relationships derived from our data, our estimated parameters should be used with caution, as our small sample sizes (mainly those ≤ 10, for B. ronchus, P. harroweri, N. grandicassis, Trichiurus lepturus, and Paralonchurus brasiliensis) and the selective effect of the mesh size used by the shrimp trawlers may have caused the size distributions in our samples to be underrepresented. The lack of statistical differences between left and right sagittae indicates that otoliths from either body side can be used interchangeably for fish-size estimations (Battaglia et al., 2010; Mehanna et al., 2016; Park et al., 2018; Yilmaz et al., 2015). The high coefficients of determination for the relationships between otolith measurements and fish size in all our species indicate that the length or weight of fish can be reliably estimated from otoliths found in stomach contents of predators. Our values of b varied considerably among the species, owing to the variable size and shape of the sagittae among the species. Yet, at the family level (e.g. Sciaenidae), the b values tended to negative allometry in most relationships, as species within families are relatively more similar in shape. Most studies providing relationships between otolith and fish size have used only the width and length of the otolith (Giménez et al. 2016; Assis et al. 2018; Park et al. 2018). Considering the high values of correlation in our analyses, the inclusion of otolith weight in our estimations contributed to strengthening the relationship models. The relationship between otoliths and fish size has been estimated for species from different regions around the world.
CONCLUSIONS
This study is a contribution to the knowledge about the relationships between otolith and fish size in 15 fish species from the northern Brazilian coast for the reliable estimation of species-specific fish length or weight from otolith size. Our results form a baseline for future studies on trophic ecology and fish distribution, and will enable a more accurate evaluation of length and/or biomass of demersal fishes consumed by predators.
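As a usage sketch of the kind of estimation these relationships enable, the function below back-calculates the total length and body weight of a preyed fish from a sagitta otolith length; the coefficients are placeholders and would need to be replaced with the species-specific parameters reported in the study.

```python
# Illustrative back-calculation of prey size from otolith length (OL), assuming
# power-type relationships TL = a * OL^b and BW = c * TL^d with placeholder
# coefficients (not the values reported in the paper).

def estimate_prey_size(otolith_length_cm: float,
                       a: float = 25.0, b: float = 1.1,   # hypothetical TL-OL parameters
                       c: float = 0.01, d: float = 3.0):  # hypothetical BW-TL parameters
    total_length_cm = a * otolith_length_cm ** b
    body_weight_g = c * total_length_cm ** d
    return total_length_cm, body_weight_g

tl, bw = estimate_prey_size(0.85)  # otolith 0.85 cm long found in a stomach
print(f"Estimated TL: {tl:.1f} cm, estimated BW: {bw:.1f} g")
```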
v3-fos-license
2019-09-19T09:09:48.035Z
2019-09-18T00:00:00.000
203704779
{ "extfieldsofstudy": [ "Computer Science", "Psychology" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://repositorio.iscte-iul.pt/bitstream/10071/20852/1/Smile%20Study_converis.pdf", "pdf_hash": "378eddea0120165c3c4dd4d501709f26481f7817", "pdf_src": "TaylorAndFrancis", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46216", "s2fieldsofstudy": [ "Psychology" ], "sha1": "efe7a26bad110d46826a4ada9e766f19d23ddc08", "year": 2020 }
pes2o/s2orc
Does a Smile Matter if the Person Is Not Real?: The Effect of a Smile and Stock Photos on Persona Perceptions
ABSTRACT
We analyze the effect of using smiling/non-smiling and stock photo/non-stock photo pictures in persona profiles on four key persona perceptions, including credibility, likability, similarity, and willingness to use. For this, we collect data from an experiment with 2,400 participants using a 16-item survey instrument and multiple persona profile treatments, of which half have a smiling photo/stock photo and half do not. The results from structural equation modeling, supplemented by a qualitative analysis, show that a smile enhances the perceived similarity with the persona, that similar personas are more liked, and that likability increases the willingness to use a persona. In contrast, the use of stock photos decreases the perceived similarity with the persona as well as persona credibility, both of which are significant predictors of the willingness to use a persona. These professionally crafted stock photos seem to diminish the sense of identification with the persona. The above effects are consistent across the tested ages, genders, and races of the persona picture, although the effect sizes tend to be small. The results suggest that persona creators should use smiling pictures of real people to evoke positive perceptions toward the personas. In addition to presenting quantitative evidence on the predictors of willingness to use a persona, our research has implications for the design of persona profiles, showing that the picture choice influences individuals' persona perceptions even when the other persona information is identical.
Introduction
Defined as fictive people representing real user groups, personas (Cooper, 1999) are a means for analyzing and communicating the goals and needs of different user types. Personas have been widely employed in many domains and with many stakeholders, e.g., designers, software developers, and marketers (Marsden & Haag, 2016; Matthews, Judge, & Whittaker, 2012; Nielsen & Hansen, 2014). Personas summarize core user groups or customer segments of an organization (Floyd, Cameron Jones, & Twidale, 2008), including website or mobile application users, online game players, content audiences, users of a software system, or target groups for advertising campaigns (Dong, Kelkar, & Braun, 2007; Nacke, Drachen, & Stefan, 2010; Pruitt & Grudin, 2003; Scott, 2007). Thus, personas are used in many industries and contexts, and at different organizational levels (Nielsen, 2013), for a variety of tasks. In these activities, having personas as decision-making guidelines can result in better commercial outcomes, such as yielding a positive return-on-investment (Forrester Research, 2010). The root cause of why personas are useful can be attributed to personas being an effective vehicle of communication about the users or customers of an organization (Matthews et al., 2012), providing a shared mental model of the end users' needs and wants, and summarizing data about users in an empathetic format that is more memorable than numbers (Goodwin, 2009; Hill et al., 2017; Pruitt & Adlin, 2006). Persona creators are known to have design power when crafting persona profiles, resulting in varied sense-making and possible biases by the end users of personas (Hill et al., 2017). One of the most prominent sections is the picture, typically a portrait photo, which is an essential part of a persona profile (Nielsen et al., 2015).
The picture choice has been shown to affect the end users' perception of the persona, influencing the end users' thinking concerning the persona. For example, Salminen et al. found that a black person's picture leads end users to interpret the same information differently. Prior work has shown that aspects of profile photos reflect the personality of the individual (Kim & Kim, 2019) and that people can infer the emotional aspects of the individuals in the photos (Kätsyri & Sams, 2008). However, there is little research on how to effectively design the persona profile, and there is even less prior research on how to choose the types of pictures used in persona profiles. Even though there are a variety of areas to investigate, in this research, we are interested in the effect of two related conditions on persona perceptions: (a) the use of a smile in the persona profile pictures (with two types of treatments: smiling and non-smiling) and (b) the use of stock photos displaying professional models versus using photos of "real" people (with two types of treatments: stock photo and non-stock photo). Like photos of smiling people, the use of stock photos on persona perceptions has not been investigated, even though, according to our experiences of various persona designs in the field, the use of stock photos is quite common in persona profiles. While several studies have looked at the impact of smiling on individual attributes such as attractiveness (Deutsch, 1990; Lau, 1982; Reis et al., 1990), emotional contagion (Krämer, Kopp, Becker-Asano, & Sommer, 2013), and the effect of emotion on information processing (Mori, Yamane, Ushiku, & Harada, 2019), there is no existing research on the impact of the smile on persona perceptions that we could locate. Nevertheless, prior research has shown that individuals' perceptions of the personas influence the adoption and use of personas in real organizations (Rönkkö, 2005; Rönkkö, Hellman, Kilander, & Dittrich, 2004), and such visual stimuli can influence how people process the available information (Jiang, Guo, Yaping, & Shiting, 2019). Perceptions often mentioned in association with personas include credibility, trustworthiness, and believability (Howard, 2015; Miaskiewicz, Sumner, & Kozar, 2008; Pruitt & Grudin, 2003), likability (Anvari, Richards, Hitchens, & Babar, 2015), immersion and identification (Chang, Lim, & Stolterman, 2008; Marsden & Haag, 2016; Miaskiewicz et al., 2008; Nielsen, 2013), empathy (Friess, 2012; Pruitt & Grudin, 2003), and usefulness (Kari, 2005; Nielsen & Hansen, 2014; Rönkkö et al., 2004). Therefore, it is worthwhile to pursue a better understanding of individuals' perceptions toward personas and what kind of choices drive these perceptions if personas are to be utilized effectively in customer-facing decision making. With this research, we aim to provide actionable insights to aid persona designers in developing better persona profiles. If the perception of a persona can be influenced by the choice of a smiling image or not (or by using a stock photo), then this has direct implications for the persona profile design. To this end, we measure whether and how the smile of the person in persona profile pictures and the use of a stock photo influence persona perceptions.
Considering the research question at hand, we evaluate four relevant perceptions, namely persona likability, persona credibility, perceived similarity, and willingness to use a persona. We measure the impact of a smile in the persona profile and the use of stock photos on these perceptions, defined in Table 1. The research questions are as follows:
(1) How does using a smiling persona picture in the profile affect individuals' perceptions of the persona?
(2) How does using a stock photo in the persona profile affect the persona perceptions?
(3) Are the perceptual effects consistent across different age, gender, and ethnicity of the persona?
Credibility has been considered as a notable perceptual challenge of personas, as individuals need to be able to find the personas plausible and authentic to take them seriously (Chapman & Milham, 2006). Likability is similar to interpersonal attraction, another construct often evoked in social psychology research (Byrne, 1961). However, we find likability more appropriate for the scope of our study than attraction, since attractiveness often implies a relationship between opposite genders, whereas likability is more applicable between genders. The perceived similarity to the persona is akin to the identification of a common bond. As we explain in the literature review, being exposed to smiling pictures increases similarity and identification in general. However, this has not been tested with personas. In this research, we particularly want to know if perceived similarity increases with smiling personas, as similarity might influence individuals' willingness to use the persona for their information needs. Willingness to use is crucial for personas in practice; as pointed out by several persona scholars, personas often lack employment in real use after their creation and they risk being "left in the desk drawer" (Friess, 2012; Rönkkö, 2005; Matthews et al., 2012). Thus, to better understand the applicability of personas, it is important to analyze how the persona picture influences willingness to use, either directly or indirectly (via other perceptions). In the following section, we review the related literature, along with formulating specific hypotheses. After this, we explain the experimental setting, including the creation of the treatments and data collection. This is followed by an analysis of the results. We conclude by presenting practical advice for persona creators, along with identifying important questions for future research.
Smile in persona profiles: An open research gap
There is a plethora of studies investigating the effect of a smile in human-computer interaction contexts. These studies tend to relate to encounters between humans and virtual agents, creating "virtual rapports" between the two actors (Huang, Morency, & Gratch, 2011) that can enhance the attitudes and first impressions of humans when dealing with artificial human-like interfaces (Cafaro et al., 2012). As noted by Qiu and Benbasat (Qiu & Benbasat, 2005), "Naturalistic avatars are usually humanoid in form, but with a degraded level of detail. This type of avatar can emulate natural protocols just enough to achieve recognition of familiar features, like a smile, a waving hand, and a nodding head." (p. 81). Östberg et al. (Östberg, Lindström, & Per-Olof, 1989) consider that "smile or a frown will serve as powerful feedback"
(p. 151) in a videophone system, while Brito and Stoyanova (Brito & Stoyanova, 2018) note, in an augmented reality context, that "[t]he smile is the most complex of the facial expressions." (p. 820). Overall, these studies tend to be concerned with how users can use a smile for interacting with computer systems.
Table 1. Operational definitions of the research constructs, adapted from .
Credibility: Persona information is clearly presented to the individual the persona is shown to.
Likability: The persona is liked by the individual the persona is shown to.
Similarity: The individual feels like the persona is like him or her.
Willingness to use: The individual would make use of this persona in his or her work or in the use case provided.
However, from our review of literature, we could locate no previous research that investigates the presence of a smile specifically in persona pictures. This relates to the general lack of research on the effect of images on the design of personas or the effectiveness of their use on end users' perceptions. Among the few studies on this topic, Salminen et al. studied the inclusion of contextual photos in persona profiles and the confusion and information inferred from different persona photos (Salminen et al., 2019). Eriksson et al. (Eriksson, Artman, & Swartling, 2013) found that through pictures, the users of personas draw inferences and memories about similarly-looking people they have met previously. In a similar vein, Nielsen et al. (Nielsen et al., 2017) found the persona pictures to be a considerable source of sense-making by the persona users. In their meta-analysis of 47 persona templates, Nielsen et al. (Nielsen et al., 2015) found the picture to be an integral part in almost all of the analyzed persona profiles. Because we found no studies concerning the use of a smile in persona profiles, we decided to investigate whether the research papers reporting personas show smiling or non-smiling pictures. For this, we manually analyzed a sample of 45 persona articles published in peer-reviewed journals and conferences between 2002 and 2017 (retrieved by searching the ACM Digital Library). Reviewing these articles, we found that 71% of these publications did not include a persona profile within the article, highlighting a general lack of attention to profile design as a part of persona research. In the 13 articles (29%) that did include a persona profile, all included one to five images of a persona, for a total of 42 persona images. We then coded each persona image presented in these articles with a binary classification of 'smiling' or 'not smiling', finding that, of the found persona profiles, 55% (23) contained smiling images and 45% (19) contained non-smiling images. Therefore, there seems to be no consensus on whether the image of the persona should contain a smiling or not smiling person. Nevertheless, understanding the effect of the smile has a direct impact on the design and implementation of personas, especially given (a) the effects of the smile discovered in social psychology research, and (b) the general importance of pictures for sense-making of persona users. In the following section, we explore the former.
Smile and person perceptions
Because we could locate no prior studies that would investigate the effect of smiling images on persona design, we turn to research concerning actual people, as person perceptions can be viewed as conceptually applicable to personas (Marsden & Haag, 2016).
Most of the research done on this topic originates from the field of social psychology, although there are also studies in human-computer interaction that have explored the interaction of smiles and technology. For example, Turner and Hunt (Turner & Hunt, 2014) investigated social network users' assessment of other users' personality traits based on their profile pictures and found that smiling had a significant impact on personality assessments. One of the first studies examining smile and person perception is from Brannigan and Humphries (Brannigan & Humphries, 1972), who studied nonverbal behavior as a means of communication. The authors identified three types of smiles: the closed smile, the upper smile, and the broad smile. Results showed that there is a difference between the perception of each type of smile; the upper smile was considered to be the most common smile in social interactions, while the closed smile was used in non-social interactions (Brannigan & Humphries, 1972). Kalick (Kalick, 1977) investigated plastic surgery, physical appearance, and person perception and found that women who had undergone plastic surgery were perceived as more attractive, kind, sensitive, responsive, and likable. In a similar vein, Reis et al. (Reis et al., 1990) found that a smile was perceived as more attractive compared to neutral facial expressions. Smiling people were also considered to be more sociable, sincere, and competent than neutral people but showed a lower level of masculinity and independence (Reis et al., 1990). Otta, Abrosio, and Hoshino (Otta, Abrosio, & Hoshino, 1996), who studied the communicative impact of smiling, found that smiling was associated more with happiness, kindness, and attractiveness. In a similar vein, Lau (Lau, 1982) investigated the effect of smiling on person perception (however, not on persona perception) and found that smiling people were more liked and more positively perceived than non-smiling people. Also, smiling was associated with intelligence and warmth (Lau, 1982). Wang et al. (Wang, Mao, Li, & Liu, 2017) observed that the intensity of the smile affects interpersonal perceptions, specifically the perceptions of warmth and competence. The aggregated consensus from these previous studies suggests that smiling generally evokes positive sentiments, for example, liking. Following the previous research, we formulate the following hypothesis:

H01: Smile and persona likability are positively associated.

Moreover, Lau (Lau, 1982) found that positive associations can be linked to emotional contagion, i.e., feeling happy by looking at other people being happy. Such an effect was also found by Barger and Grandey (Barger & Grandey, 2006), who analyzed the relationship between smile and appraisal mechanisms relating to services. They found that mimicry, a type of primitive emotional contagion, was significant in encounters between strangers during food service. Even though smiling was not correlated with post-encounter mood and appraisals, it was correlated with high customer service ratings (Barger & Grandey, 2006). In a similar vein, Hinsz and Tomhave (Hinsz & Tomhave, 1991) found that participants reacted back with a smile to a smiling facial expression, and the effect was stronger than the frown-to-frown reaction. In support of these findings, Chartrand and Bargh (Chartrand & Bargh, 1999) detail an assimilation effect, according to which a smile results in a greater sense of communality between the subject smiling and the subject exposed to the smile.
Overall, emotional contagion has been observed in a range of contexts, including in online systems (Del Vicario et al., 2016; Kramer, Guillory, & Hancock, 2014). The broad array of research suggests that emotional mimicry is an innate human ability, with intrinsic and instinctive manifestation in social engagement with others. These previous studies suggest that smiling is positively associated with a sense of similarity and identification. Following this logic, we formulate the following hypothesis:

H02: Smile and perceived similarity with the persona are positively associated.

Note: with "perceived similarity," we indeed refer to perceived similarity, not real similarity (in terms of matching age and gender). This is because a person belonging to a different demographic group might have a feeling of similarity with a persona based on shared interests instead of shared demographics (for example, a middle-aged woman and a teenage boy can both be interested in Pokémon Go). Moreover, we hypothesize that the willingness to use a persona (a construct operationalized with items measuring how much the individual wants to learn more about the persona as well as use the persona for professional decision making) is enhanced by the smile:

H03: Smile and willingness to use a persona are positively associated.

Smile perceptions seem to be related to the gender and age of the smiling individual. Otta et al. (Otta et al., 1996) found age differences showing that young people were considered to be more extroverted and ambitious than middle-aged and older people, and that middle-aged and older women were perceived as less attractive in comparison to middle-aged and older men, for whom the results were the same as for young men. The study concluded that positive attributes associated with a smile affect the person perception (Otta et al., 1996). In contrast, Lau (Lau, 1982) did not find gender differences in their research, but Deutsch (Deutsch, 1990), who examined the effect of role on smiling in men and women, found that gender differences can arise, involving, for example, the more frequent association of non-smiling women with being unhappier, less carefree, and less relaxed than men. This may influence perceivers' associations and create biases (Deutsch, 1990). The gender stereotype of women both smiling and receiving more smiles than men is also postulated by Hall (Hall, 1990). Therefore, smile perceptions are mediated by demographic attributes such as age and gender. To account for these effects, we vary age and gender in our experimental treatments. Also, we include race as an experimental variable, as the persona's race has been noted to influence user perceptions in previous persona studies (Hill et al., 2017). While Floyd, Jones, and Twidale (Floyd et al., 2008) advise against racial, gender, or age profiling when creating personas, choosing any picture of a person inevitably means assigning a race to the persona. Therefore, the effects of such choices should be empirically tested. Finally, the type of smile has been shown to affect person perception, so that smile intensity (Abel & Kruger, 2010) and the attractiveness of the person smiling (the "beautiful people" effect, i.e., the smile of an attractive person has a larger impact) (Van der Geld, Oosterveld, Van Heck, & Kuijpers-Jagtman, 2007) affect the perceptions of the smiling person. Here, we consider this prior finding by adding an experimental condition of the stock photo.
Particularly, stock photos tend to depict professional models, whereas authentic pictures portray ordinary people who can be thought of as more representative of real users. It is, thus, an important question to clarify how using ordinary or stock photos affects how the personas are perceived. We present the following hypotheses:

H04: Smile and persona credibility are negatively associated.
H05: Use of stock photos and persona credibility are negatively associated.
H06: Use of stock photos and perceived similarity with the persona are negatively associated.

The rationale for H04-H06 is that the use of stock photos comes with a certain sense of "fakeness," so that the personas seem less authentic and less like real people ("less like me"). This is because stock photos typically represent professional models, which may reduce the sense of identifying with the persona that the picture represents. For example, Stanford et al. (Stanford, Ip, & Durham, 2014) analyzed individuals' views of dentofacial appearance and found some participants referring to "too perfect" smiles: "I mean Simon Cowell's teeth are just, I don't like them because they're just, you can tell that they're … they're too perfect. (…) I think they're not real. (Patient 7)" (p. 292). In contrast, we expect that there is a positive effect between the use of stock photos and the likability of the persona, as individuals are likely to "idolize" attractive professional models:

H07: Use of stock photo and persona likability are positively associated.

Persona perceptions

To conclude our hypothesis development, we form some hypotheses relating to the internal relationships of the persona perception constructs. These are justified in the following. From a psychological perspective, the benefits of personas are rooted in self-identification (Miaskiewicz & Kozar, 2011). Through the cognitive processing of persona information, decision makers can obtain an empathic understanding of users, immersing themselves in real situations of others. Decision makers can use this ability to predict the users' behavior under different circumstances (Pruitt & Grudin, 2003). This mental modeling relies on human beings' innate ability of empathy and immersion (Krashen, 1984); therefore, it is a powerful agent for motivation and purpose. Typically, personas are communicated in the form of a story or narrative, e.g., "Mary is a 35-year-old woman who likes … ". A persona can be seen as a story that conveys critical experiences, those that the decision makers would not necessarily know otherwise. Since human beings tend to be receptive to narratives (Polkinghorne, 1988), storytelling facilitates the conveying and absorption of key attributes of the personas (Madsen & Nielsen, 2010). As argued by Hill et al. [17, p. 6660], "[a decision maker's] ability to engage and empathize with personas comes in part from the fact that a persona seems like a person, not like a list of facts, a philosophical stance, or an educational document, but an actual person." For this reason, we hypothesize that individuals are more inclined to like personas that they perceive as similar to themselves, and they are more interested in knowing more about the personas they like. Therefore, the following hypotheses are presented:

H08: Perceived similarity with the persona and persona likability are positively associated.
H09: Persona likability and willingness to use a persona are positively associated.
However, since personas are human representations of data, they are likely to be judged like humans by other humans (Marsden & Haag, 2016). Therefore, there are also perceptual challenges involved in the creation, adoption, and use of personas. Most notably, lack of credibility has been raised as a major concern in the persona literature (Chapman & Milham, 2006), arising from the fact that personas are often created from relatively few qualitative interviews without formal representativeness of the actual user base. Decision makers are unlikely to adopt the personas for real use if there are doubts about their credibility (Rönkkö et al., 2004). For example, in a study by Long (Long, 2009), designers were shown to lack trust in a persona if they did not participate in the persona creation. In a study by Matthews et al. (Matthews et al., 2012), the participants found the personas abstract, impersonal, misleading, and distracting. Considering these studies, we expect that a credible persona enhances the willingness to use the persona. To empirically investigate this association, we formulate the following hypothesis:

H10: Persona credibility and willingness to use a persona are positively associated.

Moreover, according to Marsden and Haag (Marsden & Haag, 2016), users of personas implicitly infer attributes from personas, and this process typically involves biases and stereotyping. Similar results have been found by Hill et al. (Hill et al., 2017) and , suggesting that the cognitive processing of personas is greatly influenced by individualized sense-making. This sense-making is directed by the information that the persona creators have decided to include in the persona profiles (Nielsen et al., 2017). As Marsden and Haag (Howard, 2015) note, "the use of personas seemed to activate pre-understandings, prejudices, and assumptions [of individuals exposed to personas]" (p. 4020). In summary, the cost of increasing empathy and immersion by presenting user information as personas seems to be that there is a heightened degree of stereotyping and perceptual biases involved in interpreting the persona information. To investigate these effects, we formulate our final hypothesis:

H11: Perceived similarity and willingness to use a persona are positively associated.

The consensus of previous work is, therefore, that perceptions are crucial in the deployment of personas and that they are inherently associated with the cognitive process and attitudes of individuals viewing the personas. Therefore, we expect the probing of smile and stock photo conditions to yield interesting results.

Methodology

Our research process comprises six steps: (1) we first collect smiling/non-smiling and stock photo/non-stock photo image pairs, then (2) create the personas using those image pairs, after which we (3) create the questionnaire, (4) create the crowd experiments, (5) collect data and, finally, (6) analyze it, using both quantitative and qualitative means. The following sections explain the steps of the research process.

Experimental design and image selection

The study follows a between-subjects experimental design. We present crowd workers with persona profiles that vary by the experimental variables described below. To test the smile variable, the persona profiles have a smiling version and a non-smiling version of a picture portraying a person, for both stock and non-stock photos.
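To make the factorial structure of the treatments concrete, the following is a minimal sketch in Python that enumerates the picture conditions from the variable levels described in this and the following paragraphs; the dictionary keys and the Treatment tuple are illustrative names, not part of the original study materials.

```python
# Sketch only: enumerate the experimental picture conditions of the study.
from itertools import product
from collections import namedtuple

Treatment = namedtuple("Treatment", ["age", "gender", "race", "smile", "stock"])

levels = {
    "age": ["Young", "Mature"],
    "gender": ["Female", "Male"],
    "race": ["Asian", "Black", "White"],
    "smile": ["Smiling", "Non-smiling"],
    "stock": ["Stock photo", "Non-stock photo"],
}

treatments = [Treatment(*combo) for combo in product(*levels.values())]
print(len(treatments))  # 2 * 2 * 3 * 2 * 2 = 48 picture conditions
```

Each of the 48 combinations corresponds to one persona picture in the experiment.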
To test the stock photo variable, we create two sets for each demographic combination (Age, Gender, Race), one with a stock photo and the other with a non-stock photo. Likewise, we ensure that each demographic combination has a smiling and a non-smiling version. Overall, combining the variable levels requires us to obtain 48 images (2 age groups × 2 genders × 3 ethnicities × 2 smile conditions × 2 stock conditions = 48 photos), of which 24 are stock photos and 24 are photos of regular people. To collect the images, we utilize two tactics: (1) find image pairs of smiling/non-smiling people from online stock photo banks and (2) take photos of real people smiling and not smiling. For the former, we browse both free and paid online stock photo services (e.g., Pixabay, 123rf.com, iStockPhotos). We devised the following criteria for finding stock photos: (a) looks like a professional photo, (b) is technically high quality, and (c) corresponds to the demographic profile of the taken photos (age, gender, ethnicity). Stock photos typically depict professional models and are often used for marketing and advertising purposes. To test the effect of stock photos against photos of regular people, we engaged a professional photographer to take facial pictures of people of different age, gender, and race. The photos were taken at a popular tourist destination in the Philippines, where it was possible to locate people from diverse age, gender, and ethnic groups. We instructed the photographer to keep everything else constant for the image pairs apart from the smile condition. In other words, the image pairs need to have the same pose, background, and gaze direction. When taking the pictures, the people being photographed were told that the pictures are to be used in academic research, and their consent was obtained for this purpose. Figures 1 and 2 show examples of the obtained photos, and Appendix 1 contains all the photos. We validated the smiling/non-smiling condition by recruiting eleven external raters from Upwork 2, an online freelancer service, and asking them to evaluate whether the picture contained a smiling or a non-smiling person. All 48 face pictures were shown to each participant, mixing their order randomly to avoid direct comparison between the faces of the same person. Furthermore, the participants were instructed to give their first impression of a smile or not and not to change their evaluation if they saw the same person later. Each participant was given a reward of $5 USD (in total $55 USD). This way, we obtained 48 × 11 = 528 manual ratings. For each picture, we calculated a majority vote from the external raters; if the number of ratings for a class exceeded 50% (6/11), then the winning class was assigned as the majority vote. We compared the majority votes with the smiling/non-smiling conditions we had assigned for each picture ("ground truth"), obtaining an agreement of 98% (47/48) (Cohen's kappa = 0.95, "almost perfect agreement" (Landis & Koch, 1977)). Only one majority vote deviated from the expected class (see Figure 3). Thus, the smiling/non-smiling conditions we assigned correspond to general smile perceptions of people.

Creation of persona profiles

After collecting the photos, we proceed with creating the persona profiles (treatments). The key attributes of personas typically include age, gender, location, topics of interest, and even psychological attributes such as attitudes, beliefs, feelings (Faily & Flechais, 2011), goals, skills, and needs (Vincent & Blandford, 2014).
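Returning to the picture-validation step described above, the majority vote over the eleven raters and the agreement with the assigned smile conditions can be computed as in the following sketch; the rating matrix here is synthetic toy data, not the actual 528 ratings.

```python
# Sketch: majority vote over external raters and agreement with the assigned conditions.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n_pictures, n_raters = 48, 11

# assigned smile condition per picture (1 = smiling, 0 = non-smiling)
assigned = np.tile([1, 0], n_pictures // 2)

# toy rater matrix: each rater mostly agrees with the assigned condition
ratings = np.array([
    np.where(rng.random(n_pictures) < 0.9, assigned, 1 - assigned)
    for _ in range(n_raters)
]).T  # shape: (48 pictures, 11 raters)

# majority vote: the winning class needs more than half of the 11 ratings
majority = (ratings.sum(axis=1) > n_raters / 2).astype(int)

agreement = (majority == assigned).mean()
kappa = cohen_kappa_score(assigned, majority)
print(f"agreement = {agreement:.1%}, Cohen's kappa = {kappa:.2f}")
```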
Although there are dozens of different layouts for persona profiles (Nielsen et al., 2015), in this research, we adopt the layout and information content presented by Jung et al. , as it is a common layout. The personas were created manually using the Photoshop image editing software. Overall, we created 48 treatments, varying the age, gender, race, smile, and stock photo condition. Figure 4 illustrates the treatments. Apart from changing the picture according to the experimental variables, all other information (e.g., topics of interest, most viewed content, quotes) was kept unchanged in the persona profiles. Table 2 defines the information elements of the persona profile.

Figure 3. The picture, corresponding to "mature female, non-smiling, non-stock photo", was rated as smiling by six out of eleven and non-smiling by five out of eleven external raters. This picture represents a borderline case where individuals have a high disagreement on whether a person smiles or not and is the only case where the assigned condition deviated from the expected result.

Figure 4. Example treatment (mature woman, smiling, non-stock photo). The picture of each persona was replaced according to ethnicity, gender, age group, smile, and stock photo condition (one of the 48 tested versions), and the demographic information was changed to match the gender and age group of the picture. Other than that, the information in the created persona profiles remained the same.

Survey creation and data collection

The measured constructs, along with their levels, are shown in Table 4. We utilize the constructs and items from the Persona Perception Scale introduced by Salminen et al. . This instrument deals with various dimensions related to users' perceptions toward personas, including credibility, clarity, consistency, and so on. From this instrument, we chose four constructs, as outlined above, with their associated measurement items. We created a questionnaire using the items of Table 3 as statements shown to respondents. For each statement, we utilized a seven-point Likert scale with the options ranging from (1) Strongly disagree to (7) Strongly agree. For data collection, we used the crowdsourcing platform FigureEight (formerly known as CrowdFlower). This platform has been used in several human-computer interaction studies, for example, to annotate tweets or images (Alam, Ofli, & Imran, 2018; Michalco, Simonsen, & Hornbaek, 2015; Plotnick & Hiltz, 2018). To control the answer quality, we undertook several measures following the approach by Huang et al. (Huang, Weber, & Vieweg, 2014). First, we set the participant quality level to Level 3 (Highest quality). Second, we set a minimum time of 120 seconds for the experiment; any answer taking less time than this would be disqualified. Third, we prevented the same participants from enrolling in many surveys by using the "custom blacklist" feature of the survey platform. The sampling was geographically narrowed to four English-speaking countries: United States (USA), United Kingdom, Canada, and Australia. The reward for filling in the survey was 0.30 US dollars. The respondents were told that we were interested in knowing their thoughts about the persona they were shown. We defined the persona as follows: A persona is a fictive person describing a bigger customer segment. It can be understood as a typical or average customer.
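As a rough illustration of the answer-quality screening described above (the 120-second minimum completion time) and of turning the seven-point Likert items into scores, consider the sketch below; the column names and values are hypothetical and do not reflect the platform's actual export format.

```python
# Sketch: screening response quality and scoring Likert items (toy data only).
import pandas as pd

responses = pd.DataFrame({
    "worker_id": ["w1", "w2", "w3"],
    "seconds_taken": [210, 95, 180],   # w2 is below the 120-second minimum
    "credibility_1": [6, 7, 5],
    "credibility_2": [5, 7, 6],
    "likability_1": [6, 2, 5],
    "likability_2": [7, 1, 4],
})

# 1) disqualify any answer taking less than 120 seconds
valid = responses[responses["seconds_taken"] >= 120].copy()

# 2) average the 1-7 Likert items of each construct into a single score
valid["credibility"] = valid[["credibility_1", "credibility_2"]].mean(axis=1)
valid["likability"] = valid[["likability_1", "likability_2"]].mean(axis=1)
print(valid[["worker_id", "credibility", "likability"]])
```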
We instructed the respondents to review the persona information carefully, paying attention to the picture, name, and other information in the persona profile. Then, we asked them to answer the statements about the persona. At any time while responding to the survey, they could review the persona profile. Note that the platform does not report sociodemographic data like gender, age, socio-economic status; rather, the crowd workers are participating anonymously. The only demographic variable we can retrieve for our sample is country: out of the 2400 ratings, 2252 (93.8%) were obtained from crowd workers located in the USA, 47 (2.0%) from Canada, and 101 (4.2%) from Great Britain. Posch et al. (Posch, Bleier, Flöck, & Strohmaier, 2018) conducted a study on sociodemographic variables of CrowdFlower workers in general. They collected data of workers from ten countries, with 900 participants per country. The countries were selected from three groups: high-income (USA, Germany and Spain), middle-income (Brazil, Russia, and Mexico), and low-income group (India, Indonesia and the Philippines). They also collected data from Venezuela because this was the most active country on CrowdFlower at the time. The findings showed that, in most countries, crowd workers were predominantly male, with the proportion of male workers exceeding 60%. Most crowd workers were between 18 and 34 years of age, and, most countries had a higher share of nonmarried workers than married workers. Also, most countries had a household size of two or more people, with a low share of single households (below 10%). Typically, over a third of the crowd workers had a full-time job besides their activity on the crowdsourcing platform. Moreover, CrowdFlower workers were found to be well educated in general, with more than 30% of workers having a Bachelor's degree or higher in all countries. Path analysis In this analysis, we specified the structural model to be tested. Composite scores were computed based on the simple mean for the items in each scale (DiStefano, Zhu, & Mindrila, 2009), as previous validation exercises indicated good psychometric properties of the scale in terms of both reliability and factorial validity . We employed the Maximum Likelihood (ML) for model specification, as it is a common and robust estimation method (Kline, 2015). Interaction terms were created by the multiplication of the standardized variables, except for the Stock * Smiling term that refers to the condition where both "Stock" and "Smiling" are set to zero. After the initial model was specified, we conducted a multi-group analysis to evaluate moderating effects from Age, Gender, and Race (Zhou, Xingda, Helander, & Jiao, 2011), which we coded as nominal variables (e.g., 1 = Female and 2 = Male, for Gender). The nested models for each sub-group were initially compared with a chi-square test to identify candidate models for a path-by-path analysis (Maroco, 2003). Figure 5 shows the path analysis for the global model using the full sample. From our analysis, we observe that a considerable number of predictors have significant paths. Smiling has a marginally positive effect on perceived similarity (B 3 = 0.066, p < .05), which matches the hypothesized effect (H02). However, unlike what we hypothesized, smiling was found to have no significant impact on likability (B = 0.023, p = .285) (H01). The effect of similarity on likability was aligned with our hypothesis (H08), with a significant positive effect (B = 0.635, p < .001). 
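For illustration, the modeling steps described at the start of this subsection (composite scores from item means, standardization, and multiplicative interaction terms) can be sketched as follows. Note that this is only an approximation using ordinary least squares regressions on synthetic data, not the ML-estimated structural model reported here; all variable names and generated values are illustrative.

```python
# Sketch: composites, standardization, interaction terms, and example path regressions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400  # illustrative sample size, not the study's rating count

df = pd.DataFrame({
    "smiling": rng.integers(0, 2, n),
    "stock": rng.integers(0, 2, n),
})
# synthetic composite scores on a 1-7 scale, loosely following the reported signs
df["similarity"] = 4 + 0.1 * df["smiling"] - 0.2 * df["stock"] + rng.normal(0, 1, n)
df["credibility"] = 4.5 - 0.4 * df["stock"] + rng.normal(0, 1, n)
df["likability"] = 3 + 0.6 * df["similarity"] - 0.2 * df["stock"] + rng.normal(0, 1, n)
df["willingness"] = (1 + 0.3 * df["likability"] + 0.35 * df["credibility"]
                     + 0.3 * df["similarity"] + rng.normal(0, 1, n))

def z(s: pd.Series) -> pd.Series:
    return (s - s.mean()) / s.std()

for col in ["similarity", "credibility", "likability", "willingness"]:
    df[col + "_z"] = z(df[col])

# interaction terms built by multiplying the standardized variables
df["lik_x_cred"] = df["likability_z"] * df["credibility_z"]
df["sim_x_cred"] = df["similarity_z"] * df["credibility_z"]

# picture conditions -> perceptions
print(smf.ols("similarity_z ~ smiling + stock", data=df).fit().params)
print(smf.ols("credibility_z ~ smiling + stock", data=df).fit().params)
print(smf.ols("likability_z ~ smiling + stock + similarity_z", data=df).fit().params)

# perceptions (plus interactions) -> willingness to use
model = smf.ols(
    "willingness_z ~ likability_z + credibility_z + similarity_z + lik_x_cred + sim_x_cred",
    data=df,
).fit()
print(model.summary())
```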
Moreover, as we expected, likability was found to have a significant positive effect on willingness to use (B = 0.292, p < .001) (H09). Credibility was also found to have a significant positive effect on willingness to use (B = 0.345, p < .001) (H10). Moreover, there was not a significant effect of smiling on credibility (B = 0.022, p = .423) or willingness to use (B = 0.012, p = .442), which refutes our hypotheses on the significance of these paths (H04 and H03). However, conforming to our hypothesis (H05), stock photos were found to negatively impact credibility (B = -0.234, p < .001). This negative effect of the stock photo was also present on the similarity path (B = -0.092, p < .001), which is aligned with our hypothesis (H06). These two findings can be interpreted as an indication that stock photos have a significant impact on persona perceptions. A negative effect was found between stock photos and likability (B = -0.103, p < .001), contrary to our hypothesized positive relationship (H07). Finally, similarity had a positive effect on the willingness to use (B = 0.296, p < .001), confirming the postulated hypothesis (H11). Other notable effects are the interaction between likability and credibility (B = -0.044, p < .01), indicating that the combined effect is less than the sum of the individual effects, and the interaction term between similarity and credibility (B = 0.101, p < .001), indicating that there is a synergistic effect between these two variables regarding willingness to use. Table 4 summarizes the results of the structural modeling. Moderation analysis of demographic variables (age, gender, race) The moderation analysis was conducted using multi-group analysis, using the procedure described by Maroco (Maroco, 2003), where the unconstrained model (i.e., path coefficients are free to vary across groups) is compared with a constrained model where path coefficients are assumed to be identical across groups. We test differences using a chi-square test. In this test, a significant result indicates that there are significant differences between groups, i.e., a moderation effect. In this scenario, a follow-up path-by-path analysis can be conducted to determine in which specific paths the differences lie. We began by testing for a moderation effect of the Age variable, i.e., whether the models differ for personas classified as Mature or Young. The personas were divided into classes by appearance of age. Although we did not know the exact age of the person in the picture, this division was not difficult, as we purposefully collected pictures of young and older people. Figure 6 shows an example of young and mature people. To ensure that this age comparison is valid, we conducted an independent rating of pictures to "young" or "mature" among two raters (i.e., two researchers independently coded the tested pictures). As expected, we reached a perfect agreement (Cohen's Kappa = 1.00), a by-product of the young pictures being distinguishable from the mature pictures. The chi-square test indicates that no significant differences in perceptions exist between the models in terms of age (χ2(18) = 27.989, p = .062). Thus, there is no evidence of a moderation effect regarding the Age variable. We proceed by testing for the moderation effect of the Gender variable, i.e., whether the models differ across persona genders (levels: male, female). As previously, verifying that a picture contains a male or female yields a perfect agreement (Cohen's Kappa = 1.00) between two independent raters. 
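For reference, the chi-square difference test used for these moderation comparisons can be reproduced with a few lines of code; the sketch below plugs in the Age comparison reported above and recovers its p-value (assuming scipy is available).

```python
# Sketch: p-value of the chi-square difference between the constrained and
# unconstrained multi-group models (here for the Age comparison).
from scipy.stats import chi2

delta_chi2, delta_df = 27.989, 18
p_value = chi2.sf(delta_chi2, delta_df)
print(round(p_value, 3))  # about 0.062: no significant moderation by Age
```

The same test is applied below to the Gender and Race groupings.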
Again, the chi-square test was not significant (χ2(18) = 20.813, p = .289), indicating the absence of a moderation effect of gender. Finally, we tested for a moderation effect of the Race of the persona (levels: Asian, White, Black). Contrasting the unconstrained with the constrained models yielded a non-significant chi-square test (χ2(36) = 25.354, p = .907). In other words, there is no evidence of a moderating effect from the Race of the persona, either. The non-significance of all three moderating effects provides evidence of model invariance across age, gender, and race (Kline, 2015), thus suggesting that the effects identified in the global model are universal, at least for these three demographic variables. Thus, it can be concluded that a smile, and whether the photo is stock or not, is an important determinant of user perception, regardless of the intrinsic features of the person being pictured. Table 5 summarizes the results.

Qualitative analysis

To better understand the impact of smiles on persona perceptions, we conducted a qualitative survey with 40 respondents using the Prolific survey platform (Palan & Schitter, 2018). This platform enables online participants to voice their opinions on various matters. To investigate the perceptions toward personas, we showed the respondents four persona profiles: Black Young Male Smiling Stock Photo (BYMS_sp), White Mature Male Smiling Stock Photo (WMMS_sp), White Young Male Not Smiling Not Stock Photo (WYMNS_ns), and Black Young Male Smiling Not Stock Photo (BYMS_ns). We asked the respondents to write answers to three tasks:
• Please describe this persona in your own words
• Tell us why you think that way about the persona
• Write down three adjectives that describe this persona
The responses were stored in a spreadsheet and analyzed manually by searching for mentions of the pictures. In other words, we counted the times specific persona information (e.g., picture, demographics, quotes, etc.) was mentioned. For example, the participant response "He is young, but he looks to be someone who is who he says he is." was counted as demographic information = 1 (cue word: "young"), picture = 1 ("looks"), quotes = 1 ("he says he is"). The average answer length was 93.6 characters, which highlights the brevity of responses to online surveys (i.e., about the length of a typical English sentence). However, the participants still made frequent references to the persona information. Table 6 shows the frequencies of different persona information elements. The results indicate that the influence of pictures on persona perceptions is varied. Many respondents do not explicitly express that the pictures influence their perceptions. Also, it is possible that, in written explanations, respondents either are unaware of the impact of pictures on their perceptions or try to avoid appearing judgmental by not basing their perceptions on the looks of the persona. Nevertheless, several respondents did refer to the pictures when explicating their sense-making process. For example, Respondent 14 (R14) (commenting on BYMS_sp): "He seems to be a nice, smart, energetic person who thinks about others (is kind)". When asked why R14 thinks like this, she said: "They have a nice smile, went to college and the way they commented on things."

Figure 6. Example of age comparison.

Moreover, lifestyle aspects are inferred from the smiling pictures:
• "his picture, dress code, data in his cv, and the smile [make me think he is] active young person. entrepreneur, competitive."
(Respondent 10 on BYMS_sp) • "Warm, optimistic, hopeful, active [because of] smiling face, positive quotes, liking of frivolity, the mid 60s but still living life to the full." (R29 on WMMS_sp) Also, stock photos are associated with a sense of fakeness by some respondents. For example, "Seems to be a bit condescending, maybe a little fake, Big audience so popular, professionally shot photo." (Respondent 1, BYMS_sp). Respondent 1 continues, when shown another persona with a non-stock photo (WYMNS_ns): "Looks the same to me, if not a little more likable because of his more likable profile picture, its not perfect which is nicer." In a similar vein, R28 on WMMS_sp: "Older person interested in heath & fitness, looks fake & artificial [because] the photo is professional and or Photoshoped." Additionally, R28 elaborates on BYMS_ns: "More normal person [because] photo looks natural, still a bit fake." However, more authentic photos can also raise stronger antipathy, possibly because they are more relatable: "His profile picture doesn't look especially friendly, and he looks a little like he might be a bit smug. He's also unmarried, which makes me feel the same way." (R18 commenting on WHMNS_ns). R4 on the persona with non-stock picture (WYMNS_ns): "He seems a bit smug, but more relatable than the previous persona." Both R4 and R18 are Western women with the same age range as the persona, which supports an anecdotal proposition that individuals rate the persona of their age differently from personas in other age groups. For example, when this age range of respondents was evaluating the elderly male persona, the interpretations seem to be less critical. Thus, further research should investigate the question of age match between the users and personas to evaluate whether there is a systematic effect. Finally, the qualitative analysis supports the findings from previous research showing that people tend to infer nonobvious information from the persona profiles (Marsden & Haag, 2016;Nielsen et al., 2017;. For example, consider the answer by R18: "His profile picture looks confident, and his quotes and interests tell me he is active and not interested in heavy issues. He's wearing a suit which implies he is professional. His videos show me he enjoys funny things, but also looks for advice on meditation and depression which tells me he may suffer from depression/anxiety." This answer shows that respondents may infer non-related information from the persona profiles, such as the mental health of the persona. Another example is from R6, showing that the stereotypical thinking of the respondents affects their interpretation of the persona: "Kani is a bit more modern-thinking but he's still quite controlling. Plenty of disposable income. Wants to marry." When asked why, the respondent answered: "He's male, he'll dominate. He's young." (i.e., young males are controlling); and "He's male, he's single, perhaps been a bachelor all his life." (i.e., singles want to marry). Table 5 contains potential explanations of the results. Here, we focus on positioning the findings to the previous body of literature. Positioning findings to earlier research The lack of support for H01 (smile and likability), H03 (smile and willingness to use), and H04 (smile and credibility) suggests that the role of the smile is not overwhelming when individuals interpret persona profiles. 
This proposition is consistent with the idea that personas are composite descriptions (Bødker, Christiansen, Nyvang, & Zander, 2012), with each piece of information playing a role for the end-user perceptions. It suggests that smiling pictures, although having some effect on persona perceptions, are not overwhelming, meaning that respondents form their overall perception using other informational cues as well. The qualitative answers also support this conclusion, as the respondents repeatedly referred to other informational content along with the picture, mainly the topics of interest and quotes of the persona. Again, this is consistent with previous research that shows the persona's quotes and topics of interest are particularly impactful information for end users (Salminen et al., 2019). While previous research has conceptually established the point of personas being composite descriptions (Chapman & Milham, 2006), this study is among the first to empirically verify that idea through the analysis of persona perceptions. The fact that smile and perceived similarity with the persona are positively associated is consistent with the previous studies in social psychology postulating that a smile enhances the sense of identification between individuals (Barger & Grandey, 2006; Hinsz & Tomhave, 1991). What is interesting is that similarity is positively associated with likability. In other words, when the respondents viewed the persona as similar to them, they liked the persona more. These findings are consistent with social group behavior theory, implying that "people like like-minded people" (Bessi, 2016; Del Vicario et al., 2016). Stock photos decreased the credibility of the persona and the sense of perceived similarity with the persona, suggesting that, for many respondents, using professional models made the personas seem more elusive than using the pictures of "regular people". Moreover, contrary to what we expected, the respondents did not like the "beautiful people" in the stock photos more than the regular people; in fact, there was a negative relationship between the use of stock photos and persona likability. These results advise against the use of stock photos in favor of more authentic pictures of real people when creating persona profiles. A possible explanation for these findings is that, as stock photos are not viewed as realistic, individuals might experience more difficulty in relating to personas portrayed using stock photos. Finally, according to our knowledge of the persona literature, this study is the first one to present quantitative evidence on the perceptual predictors of willingness to use a persona. By applying structural equation modeling, we were able to establish multiple significant linkages between persona perceptions and willingness to use a persona, with perceived similarity, credibility, and likability contributing positively to the willingness to use a persona. Overall, these results imply that individuals want to learn more about personas (people) that they like, find authentic, and can relate to. Therefore, the way the persona profiles are crafted is likely to have a sizable impact on how and whether individuals in real organizations adopt the created personas and use them in their work.

Limitations and future research avenues

Concerning the limitations, the study sample was restricted to four English-speaking countries.
It remains, therefore, an interesting question for future research to validate the findings in other cultures and regions of the world. Different sociodemographic variables of the participant sample were not available, so their effects remain unknown. Encoding the type of smile in a more granular fashion, for example, closed smile, upper smile, and broad smile (Brannigan & Humphries, 1972) or Duchenne/non-Duchenne smile (Ilicic, Kulczynski, & Baxter, 2018), could make a difference in the observed effects. Likewise, "smile perception" (i.e., individuals' different perception of whether a person is smiling or not) should be measured and controlled for in future studies. Here, we expected all subjects to agree on the smile condition in the pictures but, while generally true, the validation of smiling hints that this is not always the case. Moreover, other conditions beyond a smile and a stock photo, such as the technical quality of photos, their lighting, applied styles/editing, and backgrounds, could influence persona perceptions and could be tested in future research. The selected stock photos can have some quality variation that is hard to quantify, as we had to use several photobanks to cover all the experiment variables. Specifically, stock photos in our sample tend to have reduced background elements relative to non-stock photos. While this is an unfortunate source of potential confounding, eye-tracking experiments on persona profiles show that there is a tendency of individuals to focus on faces and people instead of backgrounds (see Figure 7).

Figure 7. Eye-tracking heatmap from a persona user study, showing the gaze densities toward screen areas. In general, the attention of the participants is focused on people rather than backgrounds.

Finally, future studies should investigate dissecting the relative effect of different information elements (e.g., picture, quotes, topics, etc.) on the overall persona perceptions. While we provided some indicative evidence on the role of the information elements (especially pictures) for the persona perception formation, a more nuanced understanding of this topic is needed.

Practical advice for persona creators

The results presented here directly aid in the design and implementation of personas within organizations. We provide the following recommendations:
• We recommend persona creators to use smiling pictures because this increases the perceived similarity with the persona, which, in turn, increases the willingness to use a persona.
• We recommend against the use of stock photos because the use of stock photos decreases the persona credibility and likability, while also decreasing the perceived similarity between the persona and the perceiver.
• We advise persona creators to focus on improving persona credibility and likability as well as perceived similarity with the persona to increase people's willingness to use personas. A non-stock picture of a smiling person can help in this.

Conclusion

We find that individuals feel more similar to personas with smiling pictures. Individuals are also inclined to like personas they perceive to be like themselves, and they are more willing to use personas which they like. Therefore, although smiling did not have a direct effect on willingness to use, it increases the perceived similarity that, in turn, increases the willingness to use a persona. However, the type of persona picture matters.
Using stock photos in personas reduces their credibility, likability, and sense of similarity, likely because individuals find stock photos less authentic than pictures of normal people. These effects advise against the use of stock photos in persona profiles, while supporting the use of smiling pictures. From a theoretical point of view, the way that the persona profile information influences the overall impression of a persona is a complex process, where a smile, albeit having some effects, is not overwhelming for many of the tested persona perceptions. In addition to the profile picture, the quotes and topics of interest seem to play an especially important role in interpreting the persona. In relative terms, the choice of stock photo vs. real photo is more impactful for the persona perceptions than the choice of a smiling picture, with photos of regular people resulting in more favorable impressions.

Notes

1. Note that throughout the manuscript, we italicize the concept of persona perception, in order to make it more visually distinct for the reader, relative to person perception. Conceptually, the difference is that person perceptions are targeted to real people, whereas persona perceptions are targeted to personas, i.e., fictitious people that represent a certain customer or user segment.
2. https://www.upwork.com.
3. B is the standardized regression coefficient. It is similar to the unstandardized regression coefficient β, except that it measures shifts in standard deviations rather than absolute values. It is more often used in structural equation modeling since it allows direct comparison of the relative intensity of the effect.
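As background to note 3 (a standard statistical identity, not a result of this paper), the standardized coefficient relates to the unstandardized one as follows, in the manuscript's notation:

```latex
B = \beta \, \frac{\sigma_X}{\sigma_Y}
```

where sigma_X and sigma_Y denote the standard deviations of the predictor and of the outcome, respectively.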
Energy Audit on Oil and Gas Industry Facility: Case Study at Field Y, East Kalimantan

This paper discusses an energy audit of an operating oil and gas production facility in Indonesia, taking Field Y, East Kalimantan, as a case study. An energy audit is essential to identify the current rate of efficiency and the energy intensity of an oil and gas production facility, and to use the data as a baseline for recommending potential rooms for improvement in increasing efficiency. Calculations of efficiency and energy intensity were performed for the main equipment that consumes fuel gas, namely the generators and turbo compressors. Data from 2015-2017 were collected to perform the calculation. The calculation results showed that generator thermal efficiency ranges from 13.54% to 17.45%, which is affected by the generator load power itself. The efficiency improves as the load power increases. Meanwhile, compressor thermal efficiency ranged from 28.36% to 33.79%, depending on process variables and compressed gas volume. The energy intensity calculation gave values of 64.554 to 71.064, and greenhouse gas emission ranged from 160.48 to 208.17 kt CO2 eq. From this study, it is identified that improvements to increase efficiency and reduce energy intensity can be made by operating one generator and one compressor, and by assessing the use of renewable energy resources to supply the power requirement of non-process facilities on site.

Background and Objectives

CO2 is one among many components of greenhouse gases (GHG) that significantly affect environmental changes such as the rising of land and ocean temperatures as well as climate change, both severely impacting human life. The concentration of CO2 in the atmosphere is rapidly increasing, to which fossil fuel combustion and industrial processes alone contributed 78% in 2000-2010 [4]. Indonesia ratified the Kyoto Protocol as a commitment to contribute to the global effort of reducing the impact of GHG emissions on the environment. The Kyoto Protocol is an international agreement related to the United Nations Framework Convention on Climate Change (UNFCCC), signed on December 11, 1997 in Kyoto, Japan. Countries that have ratified this Protocol have committed to collectively reduce emissions of CO2 and five other greenhouse gases (methane, nitrous oxide, sulphur hexafluoride, hydrofluorocarbons and perfluorocarbons) to 5% below the 1990 level during the period of 2008 to 2012. In continuation of the Kyoto Protocol, the 21st Conference of the Parties (COP 21) was held in Paris in 2015. The outcome of the meeting was a legally binding consensus of 195 countries agreeing to limit the rise of the Earth's temperature to 2 °C through CO2 emission reduction. In 2010, 35% of GHG emissions came from the energy sector [4]. The oil and gas sector is an energy-intensive industry; accordingly, oil and gas companies have begun to seek the right strategies to achieve energy efficiency in their operational activities in order to save energy and contribute to the effort of reducing the impact of climate change. Indonesia itself has targeted a reduction of GHG emissions in the energy sector of 39 million tonnes of CO2, or 26%, by 2030 [1]. In line with this target, energy management systems, particularly in the energy industry sector, shall be implemented to control energy consumption so that energy is used effectively and efficiently. The energy required to extract oil and gas has increased due to the growing number of wells demanding artificial lift.
In addition, reservoir pressure depletion and the maturity of the gas reservoir require the installation of booster compressors at the surface production facility. This increases both energy use and production cost. At the same time, the volatility of the oil price forces the industry to seek methods to keep the business profitable; therefore, all operational aspects are revisited to identify rooms for cost saving. Energy-related cost is identified as the most significant component of the whole operation; therefore, energy efficiency methods are in high demand in the upstream oil and gas industry. One of the strategic objectives issued by the Indonesian Ministry of Energy and Mineral Resources (MEMR) is to fulfill domestic energy and fuel needs by enhancing the efficiency of energy use and reducing emissions, indicated by energy intensity and CO2 emission reduction. Energy intensity is a parameter to assess a country's energy efficiency and is defined as the amount of energy consumed per unit of Gross Domestic Product (GDP). Meanwhile, one of the methods to reduce CO2 emissions is energy conservation. In line with this objective, MEMR issued Ministerial Decree Number 14 Year 2012 on Energy Management, obliging energy source users and energy users of 6000 tonnes of oil equivalent per year or more to perform energy management by conducting regular energy audits and implementing the audit results. This research is aimed at mapping the energy consumption and energy intensity in the oil and gas industry, particularly the upstream sector, through energy auditing. The audit provides a description of the CO2 emission contribution of the upstream oil and gas sector in Indonesia as well as the efficiency of energy use per production unit in barrels of oil equivalent (boe). The audit also provides rooms for improvement to reduce CO2 emission and energy use. For the industry itself, the identified rooms for improvement would reduce production cost, as the production process becomes more efficient and the reliability of the production equipment, particularly the turbomachinery as main equipment, is increased.

Profile of Field Y

Field Y is a central facility for gas and condensate processing, consisting of the main facilities of a gas receiving manifold platform, a compression platform, a glycol dehydration platform and an oily water treatment platform. The facility was designed to process high, medium and low pressure gas, but nowadays the production mode covers only medium and low pressure due to reservoir pressure depletion. The main production equipment at Field Y consists of two turbo compressors with a capacity of 425 MMSCFD each, operating in parallel to increase the gas pressure from low to export pressure. Prior to being exported, the gas is dehydrated by glycol absorption-regeneration. Two generators of 4.5 MW capacity provide electricity to all production equipment, the office and the accommodation. The main fuel gas consuming equipment at Field Y is described in Table 1 below.

Methodology

The energy audit was performed by calculating turbo generator and turbo compressor efficiency, energy intensity and greenhouse gas emission using Microsoft Excel. The calculation results were analyzed and rooms for improvement were identified. This method can generally be applied to any oil and gas production facility whose main equipment burns natural gas as fuel.

Data Gathering

Operational data from 1 January 2015 to 31 December 2017 were gathered, as displayed in Table 2 and Table 3 below.

Table 2. Generator data.
Data                  G-1      G-2
Load power            Wa1      Wa2
Fuel gas flow rate    mfg1     mfg2

Besides the above data, the laboratory analysis results on fuel gas composition over the aforementioned period were also gathered as the reference for the Lower Heating Value.

Fuel gas Lower Heating Value (LHV)

Fuel gas LHV was calculated with the following equation:

$$ LHV = \sum_{i} x_i \, LHV_i \qquad (1) $$

where:
xi = fraction of component i
LHVi = LHV of component i

Generator gas turbine efficiency

The overall efficiency of the gas turbine cycle was calculated with the following equation:

$$ \eta_{th} = \frac{W_a}{m_f \, LHV} \qquad (2) $$

where:
Wa = actual shaft work, kW
mf = fuel gas mass flow, kg/s
LHV = lower heating value of fuel, kJ/kg

Compressor gas turbine efficiency

Compressor gas turbine efficiency was calculated from brake power and fuel intake with the following equation:

$$ \eta_{th} = \frac{B_p}{m_f \, LHV} \qquad (3) $$

where:
Bp = brake power, kW
mf = fuel gas mass flow, kg/s
LHV = lower heating value of fuel, kJ/kg

Compressor shaft power (brake power) was calculated with the following equation:

$$ B_p = G_p + \text{mechanical losses} \qquad (4) $$

where:
Bp = brake power, kW
Gp = gas power, kW

Gas power is defined as the actual compressor power without considering mechanical losses, calculated with the following equation:

$$ G_p = \frac{w \, H_{is}}{3.6 \times 10^{6} \, \eta_{is}} \qquad (5) $$

where:
Gp = gas power, the actual compression power excluding mechanical losses, kW
w = produced gas mass flow, kg/h
His = isentropic head, N.m/kg
ηis = isentropic efficiency

Isentropic head and efficiency were calculated with Equations (6) and (7); the isentropic head is given by:

$$ H_{is} = Z_{avg} \, \frac{R}{M} \, T_1 \, \frac{k}{k-1} \left[ \left( \frac{P_2}{P_1} \right)^{\frac{k-1}{k}} - 1 \right] \qquad (6) $$

where:
His = isentropic head, N.m/kg
Zavg = average compressibility factor
R = universal gas constant
M = molecular mass
T1 = inlet temperature, K
P1 = inlet pressure, kPa
P2 = outlet pressure, kPa
k = isentropic exponent, Cp/Cv

The value of k was calculated with the empirical correlation of [9] as a function of the gas relative density and temperature, where:
γ = gas relative density, i.e. the ratio of the gas molecular weight to the molecular weight of air
A = 0.000272
T = temperature, K

Mechanical losses are losses of power due to friction on bearings, seals and speed-increasing gears. These losses were calculated using Scheel's equation below:

$$ \text{Mechanical losses} = 0.75 \, (G_p)^{0.4} \qquad (9) $$

Energy Intensity

The energy intensity of gas production was calculated with the following equation:

$$ \text{Energy intensity} = \frac{\text{energy of fuel gas burned}}{\text{volume of gas exported}} \qquad (10) $$

The energy intensity calculation is on an annual basis, hence the company's performance on energy conservation can be measured.

Greenhouse gas emission calculation

Greenhouse gas emission from the combustion of natural gas as fuel in stationary sources was calculated following the guideline issued by the Indonesian Ministry of Environment and Forestry [8].

Generator Efficiency

The thermal efficiency calculation results for generators G-1 and G-2 are displayed in Figure 1 below.

Fig. 1. Generator efficiency.

Average generator thermal efficiency ranges from 10-15%, with the details described in Table 4 below. It is shown that during the 2015-2016 period, the efficiency of G-1 and G-2 ranged from 13-15%, while in 2017 the value increased significantly to 16.41% for G-1 and 17.45% for G-2. This result demonstrates that G-1 and G-2 operated in parallel below the optimum load power, which means that both generators operated with low efficiency. An increase in load power results in more efficient operation of the generator, as shown in Figure 2.

Compressor Efficiency

Compressor thermal efficiency is the ratio of the resulting shaft power to the fuel gas power. The calculation results for compressor thermal efficiency are shown in Figure 3 below. The average yearly efficiency details for both compressors are shown in Table 5. Generally, the performance of both compressors was good. The efficiency value varied due to variations in the operating conditions and the produced gas flow rate, as shown in Table 6.
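As a worked illustration of the calculation chain defined in the Methodology above, the following Python sketch implements Equations (1)-(6), (9) and (10) with made-up numbers (they are not measured Field Y data); the relation brake power = gas power + mechanical losses is assumed from Equations (4) and (9).

```python
# Sketch of the Methodology calculations, with illustrative inputs only.
def lhv_mixture(fractions, lhv_components):
    """Equation (1): fuel gas LHV as the fraction-weighted sum of component LHVs."""
    return sum(x * lhv for x, lhv in zip(fractions, lhv_components))

def thermal_efficiency(power_kw, fuel_flow_kg_s, lhv_kj_kg):
    """Equations (2)-(3): shaft/load power divided by the fuel energy input."""
    return power_kw / (fuel_flow_kg_s * lhv_kj_kg)

def isentropic_head(z_avg, t1_k, m_kg_kmol, p1_kpa, p2_kpa, k):
    """Equation (6): isentropic head in N*m/kg (J/kg)."""
    r = 8.314  # universal gas constant, kJ/(kmol*K); factor 1000 converts kJ/kg to J/kg
    ratio = (p2_kpa / p1_kpa) ** ((k - 1.0) / k)
    return 1000.0 * z_avg * (r / m_kg_kmol) * t1_k * (k / (k - 1.0)) * (ratio - 1.0)

def brake_power(gas_power_kw):
    """Equations (4) and (9): gas power plus Scheel's mechanical losses."""
    return gas_power_kw + 0.75 * gas_power_kw ** 0.4

def energy_intensity(fuel_energy, gas_export):
    """Equation (10): fuel gas energy consumed per unit of gas exported (annual basis)."""
    return fuel_energy / gas_export

# --- illustrative numbers only ---
lhv = lhv_mixture([0.90, 0.07, 0.03], [50000.0, 47500.0, 46400.0])     # kJ/kg
eta_gen = thermal_efficiency(power_kw=3200.0, fuel_flow_kg_s=0.45, lhv_kj_kg=lhv)
head = isentropic_head(z_avg=0.95, t1_k=310.0, m_kg_kmol=18.0,
                       p1_kpa=1500.0, p2_kpa=7000.0, k=1.28)
print(f"LHV = {lhv:.0f} kJ/kg, generator efficiency = {eta_gen:.1%}, head = {head:.0f} J/kg")
```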
It is shown that in 2016 the volume of processed gas was higher than in 2015 and 2017; hence, the thermal efficiency of both compressors was highest in 2016. Compressor thermal efficiency was affected by the shaft power and the fuel gas power, defined as the power resulting from the combustion of a given volume of fuel gas. The correlation between fuel gas power and processed gas flow rate is shown in Figure 4. Figure 4 shows that the higher the volume of compressed gas, the lower the power required from fuel gas combustion; hence, the system is more efficient.

Energy Intensity Calculation

Energy intensity is an indicator of the energy consumed to produce one unit of product. This calculation defines the energy consumed as the energy of the fuel gas burned to produce one unit of gas export on an annual basis. The energy intensity calculation for Field Y is shown in Table 7 below. It is shown that the energy intensity value in 2017 was the highest of the three years due to the lower volume of processed gas. The correlation between energy intensity and efficiency is shown in Figure 5.

Fig. 5. Efficiency and energy intensity correlation.

It is shown that although generator efficiency increased in 2017, the decrease in compressor efficiency resulted in the high energy intensity of that year.

Greenhouse Gas Emission Calculation

The calculated greenhouse gas emission resulting from fuel gas combustion is presented in Table 8. It is shown that the emission contributed by fuel gas combustion in Field Y ranges from 160.48 to 208.17 kt CO2 eq per year. Fuel gas combustion in the compressors contributed 94% of the overall annual GHG emission in Field Y. The correlation between compressor efficiency and GHG emission is shown in Figure 6. It is shown that during the period 2015-2017, emission and efficiency showed a similar trend. In general, an increase in compressor efficiency causes a decrease in specific fuel gas consumption. However, the increase in greenhouse gas emission during 2015-2016 was due to increasing fuel gas consumption. In 2017, although compressor efficiency decreased, fuel gas consumption decreased as well; therefore, greenhouse gas emission decreased significantly compared with the previous years.

Rooms for Improvement

Based on the above calculations and analysis, several rooms for improvement were identified to increase equipment efficiency and decrease energy intensity as well as greenhouse gas emission, namely:
1. To operate one generator to increase the load power, as this enhances its thermal efficiency and causes specific fuel gas consumption to decrease.
2. To operate one compressor, considering the future gas production rate, so that fuel gas consumption decreases, along with energy intensity and greenhouse gas emission.
3. To ensure routine maintenance of each piece of equipment and follow up on its inspection results to enhance the performance of the equipment.
4. To use renewable energy sources to supply the power required by non-process facilities such as the accommodation, workshop, office and other supporting facilities. Based on the geographical location, solar panels are a potential resource to be studied further.

The author would like to acknowledge Hibah PITTA 2018 for publication financial support.
v3-fos-license
2021-09-25T15:51:53.873Z
2021-08-25T00:00:00.000
239621261
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2076-3417/11/17/7827/pdf", "pdf_hash": "7d310366124bfda806f5528086ad433be96c61d1", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46218", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "0b3276e25cf9f996cb7c31b91eab2fecfa714c2c", "year": 2021 }
pes2o/s2orc
Influence of Printing Substrate on Quality of Line and Text Reproduction in Flexography

This study characterizes and compares the parameters of the quality reproduction of fine elements in flexography on coated and uncoated paper as well as on OPP film (oriented polypropylene). A monochrome test form was created and printed using cyan UV ink. The analysis of results confirms the importance of the interaction between the printing substrate and the ink; it also indicates identical line and text deformations on the print. Quality reproduction on coated paper is higher than on OPP film for all the research parameters. The ink penetrates significantly more, and more irregularly, into the pores and throats of the uncoated paper, which results in less homogeneous elements, such that the element loses its original shape. On coated paper and OPP film, the ink spreads more over the substrate surface, which gives the elements a significantly more homogeneous shape. However, due to the surface spread of the ink, the biggest changes in the size of fine elements are noticeable on the OPP film. The scientific contribution of this paper is based on the comparison of print quality parameters of fine elements, which can contribute to the optimization of the production process and the quality of the final graphic product.

Introduction
The ability of a printing system to reproduce a sharp image with clear details is of crucial importance for high-quality reproduction [1]. The geometry of printed elements and the sharpness and noise of the edge, together with a uniform ink layer, are important indicators of quality reproduction that need to be analyzed. They are directly connected to the reproduction of lines, text and dots that are part of every image [2]. Quality parameters of fine elements can be assessed by measuring line wicking, which makes lines and text become fuzzy or bold. Quality reproduction in flexography is conditioned by a combination of different parameters that relate to platemaking technology and the type of polymer plates [3], the specification of the anilox roller, the strength of the printing pressure and the characteristics of the printing substrate [4]. The quality of prints produced with flexographic printing technology is influenced by various parameters such as the viscosity of the printing inks, printing substrates, plates, anilox rolls, etc. [5]. The lightest pressure, or "kiss impression", is ideal for printing. A kiss impression is a clean print image created while applying the lowest possible pressure of the plate against the paper [6]. It is often not easy to print using a kiss impression, primarily due to the characteristics of the printing substrate surface, the uneven height of the printing elements or the type of printing job. On the other hand, if the pressure is too high, dots will be squeezed more and can be deformed [7]. Tomašegović et al. investigated how different pressures in flexographic printing and the smoothness of the paper affect the printed line width and the legibility of printed typographic elements of 4 pt size [8]. Printouts were obtained on five different printing substrates made of recycled paper, and it was found that the smoothness of the paper is directly related to ink spreading on the surface of the print. The composition and the surface characteristics of the printing substrate significantly influence the ability of the ink to penetrate into its structure [9].
Coated glossy or matte papers absorb ink less readily than uncoated ones because they are less porous and less permeable [10]. Therefore, the ink spreads more over the surface of the printing substrate, and the ink layer on the printed surface is more uniform. Ding et al. analyzed the print quality of edible inks on a coated printing substrate in flexographic and screen printing [11]. It is important for the ink to adhere well to the printing substrate [12], which is directly influenced by the texture of the printing surface or by treating the surface of PET films in order to decrease the surface tension [13]. Mariappan et al. researched the dynamics of liquid transfer between nanoporous stamps and solid substrates [14]. Although many types of printing substrates (plastics, film and foil) are widely used for flexible packaging, paper-based materials remain popular due to their good printing characteristics [15]. The most important properties of a packaging material are its gas barrier, mechanical, thermal, rheological, optical and physical properties. The entire flexographic process involves a large number of influential parameters that need to be standardized for specific printing conditions [16]. This research comprises the comparison of the most important quality parameters of graphic reproduction, in accordance with the ISO 12647-6 standard, for three types of printing substrates: uncoated paper, coated paper and OPP film. Zhang et al. found that analyzing the ink penetration depth into the substrate can theoretically predict the quality of printed matter [17], and presented the mechanism of interaction between the ink and paper based on two models, static penetration and dynamic diffusion. The common way of evaluating the quality of graphic reproduction consists of an objective evaluation of the color and tone in the printed image using spectrometric and densitometric measurement methods. Havenko et al. analyzed the influence of the surface roughness of cardboard as a printing medium on the printing properties of environmentally friendly inks, and presented the effect of the surface layer of the cardboard on the microgeometry of prints formed by cyan, yellow, black and magenta inks [18]. Plazonić et al. investigated colorimetric changes of water-based flexographic ink printed on three types of hemp-based printing substrates subjected to artificial ageing [19], and it was shown that the most stable prints under the influence of light were made with black ink. The application of instruments and measurement standards together has contributed substantially to the improvement of print quality in flexography. As flexographic printing technology has evolved rapidly over the last two decades, with an increasing number of substrate materials of different characteristics and types of printing ink, many critical print quality issues such as sharpness, line and text quality, and micro and macro uniformity often become worse. The quantification of these parameters requires a completely different approach to determining print quality, based on image analysis systems or image analyzers. These specific metrics can determine the effect of the surface structure of the printing substrate on the print quality of fine elements, which cannot be determined by standard methods.
In order to obtain a broader perspective of quality reproduction, in addition to standard evaluation methods, more space is being given to image analysis. The ability to measure image structure characteristics such as sharpness and line edge noise, as well as other fine-detail deformations, is what an image analysis system contributes, and is also what distinguishes it from a conventional densitometer or spectrometer [20]. Therefore, during the evaluation of quality reproduction, these parameters need to be analyzed as well. The goal of this paper is to research the influence of the printing substrate on the quality reproduction of lines and text, based on image analysis and visual evaluation under precisely defined conditions, for three chosen printing substrates. The obtained results will show the importance of the interaction between the three types of printing substrate and the printing ink, and its influence on the print quality parameters.

Parameters of Quality Reproduction of Fine Elements in Image Analysis Evaluation
A typical system for image analysis consists of a high-resolution digital camera or microscope which captures areas of interest, and of specialized software which analyzes quality reproduction characteristics based on a digital image. Microscopes with a high-resolution optical model (5 μm per pixel, or 5080 ppi) are applied in different research methods for displaying fine elements invisible to the naked eye. The evaluation of quality reproduction in line with the ISO 13660 standard is based on the following three parameters (Figure 1): line width, raggedness and blurriness [21]. Line sharpness: this is a cross-section profile of the line edge describing the characteristics of the transition from black to white, that is, the transition from full color to the color of the printing substrate [22]. A sharper, i.e. sudden, transition at the edge indicates a higher degree of line sharpness as well as higher quality reproduction. In blurry, softer edges, the transition from the full line color to the color of the printing substrate is gradual, that is, smoother. The quality of text reproduction is evaluated based on edge degradation, the fidelity of the character shapes and the uniformity of the ink layer [23], which is manifested in the touching, breaking and smearing of text characters and significantly influences the legibility of the text. In accordance with the evaluation of the line, an examination of the text can also be performed on the area and perimeter of each character. This shows whether smearing or some other ink-spreading mechanism has modified the shape of the characters, and to what extent. Acceptable text quality is defined based on an acceptable tolerance of the area and perimeter. Tolerance is defined in percentages in relation to the reference value measured in the digital template. Imaging-based research methods allow visualization, inspection and quantification of image data, which are necessary for objective quantitative print quality analysis [24]. Image analysis methods serve a crucial role in print quality determination and have seen tremendous growth in the past decade [25]. The print quality of fine elements can also be judged by visual evaluation based on a comparison of the images captured on printouts using an image analysis system [26].
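To make these definitions concrete, the following sketch computes the basic line metrics (area, boundary length as a raggedness proxy, and mean width) from a binary mask of a printed line. It is an illustrative NumPy implementation with hypothetical function names and a synthetic test mask; it is not taken from ISO 13660 or from the software used in this study.

```python
"""Illustrative line metrics (width, area, boundary length) from a binary mask."""
import numpy as np

def line_metrics(mask, px_per_mm=500.0):
    """`mask` is a binary image (True = ink) of a single printed line segment."""
    area_px = int(mask.sum())
    # Boundary pixels: ink pixels with at least one 4-connected background neighbour.
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary_px = int((mask & ~interior).sum())
    # Mean line width: ink area divided by the printed length (rows containing ink).
    length_px = int(mask.any(axis=1).sum())     # line assumed roughly vertical
    width_mm = (area_px / max(length_px, 1)) / px_per_mm
    return {
        "area_mm2": area_px / px_per_mm ** 2,
        "perimeter_mm": boundary_px / px_per_mm,   # longer boundary at equal area = more ragged edge
        "mean_width_um": width_mm * 1000.0,
    }

# Synthetic example: a 170 um wide, 2.56 mm long vertical line at 500 px/mm
mask = np.zeros((1280, 200), dtype=bool)
mask[:, 60:145] = True                          # 85 px wide = 170 um
print(line_metrics(mask))
```

At equal area, a longer boundary means a more ragged edge, which is exactly how the perimeter measurements are interpreted in the results that follow.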
Experimental Methodology
The goal of this study is to determine the influence of the printing substrate on the quality reproduction of fine elements such as thin lines and small text sizes, in order to compare quality reproduction parameters and optimize the production process. The study was conducted by evaluating the most important parameters of line and text quality reproduction: line edge noise, line sharpness and uniformity of density. The research framework describing the purpose and process of the research is shown in Figure 2. The experimental part of this paper begins with the creation of a test image designed to enable the evaluation of quality reproduction in fine elements, using accepted and validated scientific methods and research techniques. The test image was not newly designed; instead, a standard test form, Kodak_NX_Target_v10, provided by the equipment supplier as a 1-bit TIFF document, was used. The test image is monochrome, and all evaluated elements of the test image have a solid tone, that is, 100% surface coverage. The part of the test form that served for the analysis of fine elements contains the following elements (Figure 3):
- serif text, 2-12 pt, in positive and negative shape
- line widths of 1-128 pixels at 2400 ppi (0.01-1.36 mm), in positive and negative shape
The positive version of the shape used dark front elements on a light background, and the negative version used light front elements on a dark background. The following parts of the test image are used for the analysis of fine elements: 85 μm and 170 μm line thickness, and 6 pt and 8 pt text size. The flexographic photopolymer printing plate used for this research was the Kodak Flexcel NX plate (hardness acc. to DIN 53505: Shore A 73) without advanced DigiCap NX patterning of the plate surface. The characteristics of Flexcel NX platemaking technology [27] include a flattened top of the halftone dot, which requires a light impression and in this way enables quality transfer of ink from the printing plate to the printing substrate [28].
Plate-making process specification:
A six-color flexo printing machine, Nilpeter FB4200, with a maximum printing width of 420 mm, was used for printing. Printing was performed roll-to-roll, with a printing speed of 60 m/min, applying cyan UV ink. UV inks have a number of good printing properties: almost 100% of the transferred ink is used for creating the ink film; a smaller anilox volume is needed in comparison with water-based inks; the ink does not change consistency; due to lower viscosity there is less color bleeding; and the ink can remain in the inking system for a longer time without needing to be cleaned. The optimal anilox line screen for printing is determined in line with the halftone screen and the minimum dot size [29]. During the printing process, the substrate passes between the plate cylinder and the impression cylinder. The space between them must be optimal to give the proper printing pressure [30]. The gap value, or nip engagement, between the plate cylinder and the impression cylinder for the lightest printing pressure is three thousandths of an inch, or about 75 μm (0.0762 mm). The next pressure level is higher, with a gap value of six thousandths of an inch, or about 150 μm (0.1524 mm).
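As a quick check of the dimensions quoted above, the short sketch below converts the test-target pixel widths (at 2400 ppi) and the nip-engagement gaps (given in thousandths of an inch) into millimetres and micrometres. The association of the 85 μm and 170 μm lines with the 8 px and 16 px steps of the target is an inference, not a statement from the test-form documentation.

```python
"""Unit conversions behind the figures quoted above; exact arithmetic, illustrative printout."""
MM_PER_INCH = 25.4

def px_to_mm(pixels, ppi=2400):
    return pixels * MM_PER_INCH / ppi

def mil_to_um(thousandths_of_inch):
    return thousandths_of_inch * MM_PER_INCH           # 0.001 in = 25.4 um

print(f"1 px   at 2400 ppi = {px_to_mm(1):.3f} mm")    # ~0.011 mm
print(f"128 px at 2400 ppi = {px_to_mm(128):.2f} mm")  # ~1.35 mm
print(f"8 px   at 2400 ppi = {px_to_mm(8) * 1000:.1f} um   (presumably the 85 um line)")
print(f"16 px  at 2400 ppi = {px_to_mm(16) * 1000:.1f} um  (presumably the 170 um line)")
print(f"kiss-impression gap: {mil_to_um(3):.1f} um; higher pressure level: {mil_to_um(6):.1f} um")
```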
During the printing experiment, the test form was printed on three different printing substrates while the other printing parameters, i.e., speed, pressure level, anilox roller characteristics and UV curing system settings, were kept constant.
Printing specification:
- Flexo printing machine: Nilpeter FB4200
The amount of ink delivered by the anilox roller is controlled by a pattern of small, precisely sized dimples, or cells. Cell volumes are typically expressed in billion cubic microns per square inch (BCM/in²). The power and the position of the UV lamps did not change during the whole printing experiment. The distance to the printing substrate is constant across the entire web width and was adjusted in line with the manufacturer's instructions and recommendations. In order to be able to compare the results of the research, it is important for the printing experiment to take place under controlled printing conditions. Each of the chosen substrate types belongs to an individual quality group with regard to its characteristics. The control of CIELCH cyan values on the prints for the three groups of printing substrates is the basis for color matching of the prints with the target values and allowed tolerances in accordance with ISO 12647-6 [31]. Color matching with this standard is based on the difference in color tone, ∆hab, derived from the CIELCH values for the solid tones of the process colors. For the analysis of deformation in fine elements, it is necessary to capture the chosen areas on the prints of all three printing substrates. A Dino-Lite AM4000 digital microscope was used for this purpose, with a resolution of 1.3 megapixels and a built-in LED light which enables a better display of the captured object. Samples for the analysis were captured at 200× magnification and 1280 × 1024 resolution. The software ImageJ 1.47 was used for processing and analysis of the microscopic images; different image analysis techniques were used for evaluation. Measurement results must be expressed in real values (mm, μm, …). Therefore, it is necessary to set a corresponding ratio based on known values at the same magnification while measuring. A ratio of 500:1 was set for the image analysis, i.e., 500 pixels corresponds to 1 mm, which gives an image size of 2.56 × 5.12 mm. When evaluating the area and the perimeter of fine elements in the ImageJ software, the threshold method of image processing was used. Thresholding is a method by which the image is converted from color or grayscale mode into a binary image. The red threshold color in the image was then used to measure the area and perimeter of the evaluated elements. The measured results for the particular imprints do not depend on the number of measurements and are valid for the specific measurement conditions, especially the color threshold settings. However, during the printing process, additional deformations of fine elements occur which would certainly give different results; these are not the subject of research in this paper. Very often, certain parameters of the printing process vary during printing, especially during the printing of large print runs. Fine element deformation due to wear of the soft and flexible material of the photopolymer printing plates is a highly common occurrence and a good topic for further research.
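For readers who want to reproduce this threshold-and-measure step outside ImageJ, the sketch below shows an equivalent workflow in Python with scikit-image: threshold the micrograph, keep the largest connected component, apply the 500 px/mm calibration, and report the area and perimeter together with their percentage deviation from the target. The file name and the use of Otsu's threshold (in place of the manually chosen color threshold) are assumptions made for illustration.

```python
"""Sketch of a threshold-and-measure workflow with the 500 px = 1 mm calibration."""
import numpy as np
from skimage import io, color, filters, measure

PX_PER_MM = 500.0                       # calibration stated above: 500 px = 1 mm

def measure_element(path, target_area_mm2, target_perimeter_mm):
    img = io.imread(path)
    gray = color.rgb2gray(img[..., :3]) if img.ndim == 3 else img.astype(float)
    binary = gray < filters.threshold_otsu(gray)      # ink darker than substrate (positive shape)
    labels = measure.label(binary)
    region = max(measure.regionprops(labels), key=lambda r: r.area)  # largest object = the element
    area_mm2 = region.area / PX_PER_MM ** 2
    perimeter_mm = region.perimeter / PX_PER_MM
    return {
        "area_mm2": round(area_mm2, 4),
        "perimeter_mm": round(perimeter_mm, 3),
        "area_dev_pct": round(100 * (area_mm2 - target_area_mm2) / target_area_mm2, 1),
        "perimeter_dev_pct": round(100 * (perimeter_mm - target_perimeter_mm) / target_perimeter_mm, 1),
    }

# Hypothetical usage: a 2.56 mm long, 170 um wide line
# (targets: area = 0.435 mm^2, perimeter = 5.46 mm)
print(measure_element("line_170um_coated.png", 2.56 * 0.170, 2 * (2.56 + 0.170)))
```

Otsu's threshold is used here only as a convenient stand-in for the manually selected color threshold described above; for negative-shape elements the comparison in the thresholding step would be inverted.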
The evaluation of print quality of fine elements in this research includes the following parameters:
- Line quality: line width, edge noise/raggedness, edge sharpness/smoothness and uniformity of ink layer density.

Results and Discussion
All the important parameters of the quality reproduction of line and text elements on the printouts were researched, such as edge degradation and uniformity of color density.

Analysis of Line Deformation
Line deformation on the imprint was researched based on three physical characteristics of the printed lines: line area, line perimeter and line width. Two line widths were chosen for the analysis: 85 μm and 170 μm. The values of area and perimeter for a 2.56 mm line length and these line widths are shown in Table 1. Line width can be directly connected with line spreading, which was obtained by measuring the line area. All printed lines show a deviation from the target values. This spreading of the line leads, in the positive shape, to an increase in the thickness of the line and, in the negative shape, to the closing of the line. A higher deviation of line width occurs in the negative shape (−23 μm) than in the positive shape (+11 μm, for 170 μm line thickness). With a thicker line, the total deviation increases in the negative shape and decreases in the positive shape. For example, the deviation for 85 μm line thickness in the positive shape on coated paper is +16 μm, while for the 170 μm line the deviation amounts to +7 μm (Table 1). The average measured line widths correspond to the measured line areas, which confirms the consistency of the measurement results. A more detailed insight into the print quality of lines printed in positive and negative shapes is given by the percentage deviation of line area and line perimeter from the target value (Figure 4). The dotted line in the diagram shows the target value. Lines printed in positive shapes on all three printing substrates show mildly positive deviations of line area, and in negative shapes mildly negative deviations from the target area (Figure 4). The biggest percentage deviation from the target value is seen on OPP film (22.7% in positive shape, −18.2% in negative shape, for a line width of 85 μm). The reason is the surface characteristics of this material, which is non-absorbent; therefore, the ink spreads only on the surface of the material. It can be noticed that the absolute line area increases with increasing line thickness, but the percentage deviation decreases for all three printing substrates. The increase in line thickness has the least influence on the change in deviation on coated paper in negative shapes (−4.5% for a line thickness of 85 μm and −2.3% for a line thickness of 170 μm). Completely different tendencies in percentage deviations are seen in the results of the line perimeter measurements. The line perimeter is used to evaluate line raggedness, that is, line edge noise. Therefore, it can be assumed that higher line perimeter values indicate higher line raggedness, which can be related to the absorbency of the printing substrate. All perimeter measurements show a positive deviation from the target, with significantly higher deviations on uncoated paper (over 38% relative to the target in both positive and negative shapes for 85 μm line thickness). Although the OPP film is a completely non-absorbent printing substrate, there are no significant differences in the percentage deviation values between coated paper and OPP film.
This holds for each observed line thickness and for each shape separately, both positive and negative (differences of up to 1.5% at 85 μm line thickness and up to 3.5% at 170 μm). The significantly larger deviations on uncoated paper are the result of irregularities of the printing surface, which leads to the conclusion that the rough surface, owing to its better absorption characteristics, promotes the irregular spreading of lines, i.e., the appearance of edge noise. The line quality in negative shapes is influenced by the same parameters as in positive shapes. However, the main characteristic of line quality in a negative shape is openness. Thinner lines printed in the negative shape are particularly sensitive to filling in, which influences their visibility. Increased line edge noise, ink spreading on the printing substrate and blurriness are the main reasons for the decrease of line quality in the negative shape. Ink spreading influences the line edge sharpness, which results in blurry edges. Figure 5 compares the quality reproduction of lines in positive and negative shapes. The line with the least edge noise and the sharpest edge is the line printed on coated paper, and it is visually the most homogeneous, which is directly related to the result of the perimeter measurements. The significantly greater edge raggedness on uncoated paper is particularly easy to notice visually.

Analysis of Text Deformation
For the analysis of the quality reproduction of text, the area and perimeter of the lowercase characters "r" and "s" were measured for two text sizes, 6 pt (2.15 mm) and 8 pt (2.82 mm). The values of the area and perimeter of the characters, together with their target values, are shown in Table 2. The biggest change in text area for both observed text characters, "r" and "s", was noticed on OPP film (similar to the previously considered line area characteristics), which can be seen in the highest value of character area in the positive shape (0.66 mm² for character "s") and the lowest in the negative shape (0.50 mm² for the same character "s") (Table 2). Higher text character perimeter values indicate higher raggedness. The highest values for this print quality parameter were obtained on uncoated paper (8.98 mm), followed by OPP film (8.24 mm) and coated paper (8.08 mm). The differences in text perimeter between the samples printed on OPP film and coated paper are extremely slight (0.16 mm in the negative shape and 0.1 mm in the positive shape). The relative deviations of text area and text perimeter, in percentages from the target value, give a complete insight into the quality of reproduction (Figure 6). The dotted line shows the target value. Although the area of the text character "s" is 20% bigger than that of the character "r", the difference in the percentage deviation of area, and also of perimeter, between the analyzed characters (Figure 6) is completely negligible (up to 3% on OPP film and coated paper, and up to 5% on uncoated paper). This can be seen in the line diagrams, which are identical for the individual parameters, text area and text perimeter. The smaller deviations of the area values from the target value on paper, especially on coated paper (up to −3.6% in the negative shape), are associated with the good absorption of paper as a printing substrate; the ink quickly penetrates into the structure of the material and spreads less on its surface [32]. In text deformation in the negative shape, mild closures of the characters occur due to the spreading of ink.
For this reason, the character area is smaller than in text deformation in the positive shape. The perimeter analysis assesses the edge degradation of the characters, which is extremely important for visibility, character shape fidelity and text legibility. The deviations from the target value on coated paper and OPP film are extremely similar (up to 3% in the negative shape and 1.5% in the positive shape) and significantly smaller than on uncoated paper, which also means smaller edge raggedness of the text (Figure 6). The analysis of the results shows that the relative raggedness of the text (percentage deviations) on uncoated paper increases as the area of the evaluated element decreases, i.e., with a decrease in text size (from 8 pt to 6 pt) and with a decrease in the stroke length of the characters (character "s" has a longer stroke length than character "r"). The same trend is also visible on the other two investigated printing substrates. This analysis confirms why quality reproduction of smaller text sizes is extremely difficult to achieve, particularly on absorbent printing substrates. An additional comparison of the percentage deviations between the text and the lines gives an insight into the general behavior of fine elements during the printing process on the three types of printing substrates. Figure 7 compares the percentage deviations of area and perimeter from the target value for the 85 μm lines and the 8 pt text. The line diagrams for each parameter, area and perimeter, show that the percentage deviations of the lines and the text follow the same trends. There is more scatter in the data for the negative shape. In most cases, the deviation is up to 10%, which means that the reproduction of fine elements can be described by common positive and negative deviations for all three types of printing substrates. The deformation of fine elements is therefore directly connected to the stroke length, in the printouts of both lines and text. Raggedness (percentage perimeter deviation) is still somewhat larger for lines than for text characters, especially in the negative shape. This can be connected with the curved strokes of the characters, which compensate for ink spreading more easily. The text with the most edge noise along the stroke (edge raggedness) is that printed on uncoated paper, which is visually noticeable (Figure 8, upper images) and directly connected to the results of the perimeter measurements. Excessive noise at the edges of the character strokes makes the text blurry and unclear, reducing its visibility and legibility.

Line Sharpness
ImageJ software (Plot Profile tool), which generates a 2D line profile from microscopic images, was used for the analysis of line edge sharpness. First, a line is drawn perpendicular to the stroke to mark the area for the analysis (Figure 9, upper images), and then a 2D line profile is created based on the color density. The X-axis represents the distance along the line and the Y-axis the pixel intensity. Figure 9 displays a two-dimensional graph of the pixel intensities along a yellow line (Figure 9, lower images) for a line of 170 μm width on the three printing substrates. The graph curves show gray values (gray levels of 80-150) along the entire measurement length, which is the same for all three printing substrates and amounts to 300 μm (0.30 mm). The sharpness of a printed element actually represents the evaluation of the sharpness or smoothness of the edge of the analyzed element [33]. (A computational sketch of this profile-based sharpness measurement is given at the end of this article.)
Accordingly, the line sharpness is defined by the distance needed for the transition from the lightest to the darkest edge level. In this study, the transition to gray level 95 was analyzed, as this is the highest value for uncoated paper. This distance, measured for uncoated paper, is 80 μm, while for the other two types of printing substrate it amounts to 60 μm, which is directly related to line edge sharpness (Figure 9). A smaller transition means that the sharpness of the printed element is greater (lines on uncoated paper have the least sharpness). The significantly lower maximum value in the graphs for uncoated paper is an indication of lower color density. Edge sharpness is highly important for line elements, especially for reading line codes, since it directly influences the reading speed.

Uniformity of Line Ink Density
The analysis of the ink layer uniformity for the solid tone of cyan was performed by visual evaluation of microscopic images of the print, based on a 3D topographic representation of the cyan color density. The microscopic images of the evaluated elements were analyzed with a 3D rendering tool (Interactive 3D Surface Plot) that transforms density into proportional height in order to visualize how thick the ink film is. The average values of solid ink density (SID) during the printing process for each printing substrate were: SID (uncoated paper) = 1.05, SID (OPP film) = 1.24, SID (coated paper) = 1.41. The visual evaluation is based on the number of dominant peak protrusions on the printed surface of each line that are visible in the 3D view of the observed samples [34]. Uncoated paper (Figure 10, left image) shows a significantly higher degree of line non-homogeneity, i.e., non-uniformity of color density, in relation to the remaining two printing substrates, which is clearly visible in the larger number of dominant peaks. This non-homogeneity is characterized by gaps that occur on the line surface due to incomplete coverage of the printing substrate, i.e., lower color density. It can also indicate that the surface tension of the printing ink and the surface tension of the substrate are not optimally adjusted to each other. The uniformity of the ink layer on coated paper is only slightly higher than on OPP film. The results for edge sharpness and the uniformity of the line color density can also be applied to the text, since this is a corresponding evaluation of fine elements in an identical printing process.

Conclusions
The analysis of the measurement results indicates a correlation in the deformation of lines and text on the printouts. The correlation in the deformation of fine elements is visible in the line diagrams and is related to the type of printing substrate and to the reproduction mode, positive or negative. The spreading of fine elements on the printouts was estimated based on their area measurements. The biggest deviations from the target were recorded on OPP film (22.7% in the positive shape, −18.2% in the negative shape). Fine elements printed in the positive shape therefore have a tendency to expand, and those printed in the negative shape have a tendency to close. The deviation of line area in relation to text area is negligibly small for all three printing substrates (up to 8% in the negative shape and up to 5% in the positive shape). Ink spreading can have a negative effect on the print quality of fine elements, especially tiny text in a negative shape on a colored background, which can result in poor text legibility.
On uncoated paper, the ink penetrates more into the structure of the material itself; therefore, the surface spread of the ink is smaller. The edge degradation of fine elements on the printouts was evaluated based on the perimeter measurements and the 2D profile plot. The analysis of both examined parameters also yielded correlating data. Significantly increased edge noise (edge raggedness) was recorded on uncoated paper (up to 40% deviation) due to easier penetration and capillary spreading of the ink. The two remaining substrates show similar values (up to 20% deviation), with a slightly smaller deviation from the target on coated paper. The lowest edge sharpness was recorded on uncoated paper (a transition distance of 80 μm); on OPP film and coated paper the sharpness is higher, with a transition distance of 60 μm. Excessive edge noise of fine elements affects the clarity and visibility of the elements as well as the shape fidelity and legibility of the text. The percentage deviations of area and perimeter of fine elements increase as the text size and line thickness decrease. Visual evaluation of the ink layer uniformity of fine elements on the printouts, based on the number of dominant peaks, showed significantly lower homogeneity on the uncoated paper. Somewhat unexpectedly, the uniformity of the ink layer is higher on coated paper than on OPP film, although on OPP film the ink spreads only over the surface of the substrate. It can be concluded that coated paper showed the best results for all research parameters, slightly better than those on OPP film. The analysis of the microscopic images showed that the deformation of fine elements occurs through a mechanism involving the spreading and penetration of ink. The share of each phenomenon in the deformation depends on the surface characteristics of the printing substrate. The evaluation of the qualitative reproduction parameters has yielded important indicators that can significantly improve the production process and result in an increase in the print quality of fine elements.
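The profile-based sharpness measurement referenced in the Line Sharpness subsection can be sketched as follows: given a cross-edge intensity profile (such as one exported from the ImageJ Plot Profile tool), measure the distance over which the gray value falls from the substrate level to a chosen gray level (95 above). The profile below is synthetic and the helper name is hypothetical; it illustrates the idea rather than reproducing the exact measurement procedure.

```python
"""Profile-based edge-sharpness sketch (synthetic data, hypothetical helper name)."""
import numpy as np

def transition_distance_um(profile, px_per_mm=500.0, target_level=95, noise_band=5):
    """Distance over which the gray value falls from the substrate (background)
    level to `target_level` at the first edge of the line."""
    profile = np.asarray(profile, dtype=float)
    background = profile[:5].mean()                       # profile assumed to start on the substrate
    start = np.argmax(profile < background - noise_band)  # first point leaving the substrate level
    end = np.argmax(profile <= target_level)              # first point reaching the chosen gray level
    return (end - start) / px_per_mm * 1000.0             # pixels -> micrometres

# Synthetic cross-edge profile: bright substrate (~230) falling to a dark line (~90)
x = np.arange(150)
profile = 230.0 - 140.0 / (1.0 + np.exp(-(x - 60.0) / 8.0))
print(f"transition distance ~ {transition_distance_um(profile):.0f} um")
```

Applied to real profiles exported from the Plot Profile tool, the same function would reproduce transition distances of the kind discussed above (80 μm versus 60 μm), provided the pixel pitch of the profile is known.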
v3-fos-license
2017-06-02T08:00:04.430Z
2014-12-03T00:00:00.000
18019126
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1371/journal.pone.0114409", "pdf_hash": "ec0968f321d0430483a9ec2390273c1b0dce2348", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46219", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "ec0968f321d0430483a9ec2390273c1b0dce2348", "year": 2014 }
pes2o/s2orc
GAIP Interacting Protein C-Terminus Regulates Autophagy and Exosome Biogenesis of Pancreatic Cancer through Metabolic Pathways

GAIP interacting protein C terminus (GIPC) is known to play an important role in a variety of physiological and disease states. In the present study, we have identified a novel role for GIPC as a master regulator of autophagy and the exocytotic pathways in cancer. We show that depletion of GIPC induced autophagy in pancreatic cancer cells, as evident from the upregulation of the autophagy marker LC3-II. We further report that GIPC regulates cellular trafficking pathways by modulating the secretion, biogenesis, and molecular composition of exosomes. We also identified the involvement of GIPC in metabolic stress pathways regulating autophagy and microvesicular shedding, and observed that GIPC status determines the loading of cellular cargo into the exosome. Furthermore, we have shown the overexpression of the drug resistance gene ABCG2 in exosomes from GIPC-depleted pancreatic cancer cells. We also demonstrated that depletion of GIPC from cancer cells sensitized them to gemcitabine treatment, an avenue that can be explored as a potential therapeutic strategy to overcome drug resistance in cancer.

Introduction
Macroautophagy, commonly termed autophagy, is an essential catabolic process that cells implement in diverse biological and physiological activities [1,2]. Under normal cellular conditions, this process maintains cellular and tissue homeostasis in a protective manner by recycling and degrading cellular components during cell death [2-5]. Previously, it was believed that the autophagosome, a double-membraned vesicle, engulfs organelles randomly [1,2,5]; however, recent studies have shown that the selection of organelles is directed by cargo-specific factors [6]. Additionally, autophagy plays an important role in many disease processes, including cancer [7]. In several cancer types, autophagy can influence the initiation and progression of disease [8,9] and promote tumor development under the metabolic stress of hypoxia. Because mutations of autophagy-related genes have been reported in human cancers [10,11], studies have focused on genetic and chemical inhibition of autophagy as a therapeutic strategy [12]. A recent study suggests that the formation and release of arrestin domain-containing protein 1-mediated microvesicles (ARMMs) at the plasma membrane depends upon the recruitment of the TSG101 protein [36]. There is accumulating evidence that GIPC plays an important role in cellular trafficking. In particular, GIPC acts as a scaffold to control receptor-mediated trafficking [20,22,37], and after receptor internalization, GIPC transiently associates with a pool of endocytic vesicles close to the plasma membrane [15]. Exosome biogenesis, as well as the formation of the autophagosome, involves endocytotic vesicles. However, there is no clear evidence that these two mechanisms of vesicle formation crosstalk with each other [38]. In the present study, we reveal a unique regulatory role of GIPC in autophagy through metabolic pathways and the modulation of exosome secretion. We also demonstrate that depletion of GIPC from cancer cells sensitizes them to chemotherapeutic drugs such as gemcitabine, an avenue that can be further explored as a potential therapeutic strategy against drug resistance.

Cell culture & GIPC knockdown cell lines
Pancreatic cancer cell lines AsPC-1 and PANC-1 were purchased from the American Type Culture Collection (ATCC, Rockville, MD).
Cell lines were cultured in RPMI 1640 medium (for AsPC-1) or high-glucose DMEM (for PANC-1) supplemented with 10% fetal bovine serum (FBS), 5% L-glutamine, and 1% penicillin/streptomycin (Invitrogen, Carlsbad, CA). Cells were maintained at 37˚C in an atmosphere containing 95% air-5% CO2 (v/v). Stable GIPC knockdown cell lines were generated using lentiviral shRNA. The lentivirus particles were prepared using 293T cells co-transfected with the gag-pol expression plasmid pCMVΔ8.91, the VSVG envelope expression plasmid pMD-G, and the vector plasmid pLKO.1 encoding cDNAs for the expression of GIPC/Synectin shRNA (5′-CCGGGCAAATGCAATAATGCCCTCACTCGAGTGAGGGCATTATTGCATTTGCTTTTTG-3′). GIPC/Synectin shRNA in pLKO.1 was purchased from Open Biosystems. Supernatant was collected 48 h post-transfection and frozen at −80˚C. PANC-1 or AsPC-1 cells were then infected overnight at 37˚C and stable colonies were isolated after puromycin selection (1 μg/ml). To ensure the efficiency of the GIPC/Synectin knockdown, protein lysates were analyzed by immunoblot for GIPC/Synectin. Control cells were transduced with an empty vector. The retroviral pBABE-puro mCherry-EGFP-LC3B plasmid from Addgene (Addgene plasmid 22418) was used to prepare retrovirus particles using 293T cells following standard procedures. AsPC-1 or PANC-1 cells were infected with the retrovirus particles and stable colonies were isolated after puromycin selection (1 μg/ml). Experiments were performed at 70-80% cell confluency and confirmed in at least three independent experiments.

RNA interference, transfection
After a 24-hour incubation in antibiotic-free medium, cells were transfected with anti-GIPC small interfering RNA (siRNA) using the DharmaFECT 2 Transfection Reagent (Dharmacon, Lafayette, CO). Seventy-two to 96 h after transfection, GIPC knockdown was confirmed by Western blot analysis. A similar siRNA approach was adopted for anti-Atg7 and anti-Beclin1 knockdown. For glucose starvation experiments, both control siRNA and GIPC siRNA treated AsPC-1 cells were kept in glucose-free RPMI supplemented with 10% FBS for the final 16 h.

Antibodies and immunoblot analysis
Whole cell lysates were prepared in NP-40 lysis buffer supplemented with a protease inhibitor cocktail (Sigma, St. Louis, MO) and Halt phosphatase inhibitor cocktail (Thermo Scientific, Waltham, MA). Supernatant was collected after centrifugation at 13,000 rpm for 10 min at 4˚C and separated by SDS-PAGE. Anti-GIPC, anti-PLCγ, and the horseradish peroxidase-conjugated secondary antibodies were purchased from Santa Cruz Biotechnology. Antibodies against ABCG2, mTOR, phospho-mTOR, p70S6K, phospho-p70S6K, Atg7, Beclin1, AMPK-α, and phospho-AMPK-α were purchased from Cell Signaling Technologies. Anti-CHMP4b and anti-TSG101 were purchased from Abcam; anti-β-actin was purchased from Sigma; and the Alix antibody was purchased from Thermo Scientific. Western blots were developed using the SuperSignal West Pico substrate (Thermo Scientific) and immunoprecipitations were performed as previously described [19].

Immunofluorescence
Cells (2 × 10^4) were seeded on a coverslip in antibiotic-free medium for 24 h. Cells were then transfected with GIPC siRNA or scrambled siRNA (Dharmacon), and the medium was changed 48 h post-transfection. After 96 h, cells were washed and fixed with 4% paraformaldehyde. After blocking with 10% goat serum for 15 min, the cells were permeabilized with 0.2% Triton X-100 at room temperature for 5 min.
The slides were then stained with primary antibodies against LC3 for 2 h in 1% goat serum. After incubating the slides with secondary antibodies conjugated to AlexaFluor 488 (1:200; Life Technologies, Grand Island, NY) for 1 h, slides were mounted with Vectashield (Vector Laboratories, Burlingame, CA) containing 4′,6-diamidino-2-phenylindole (DAPI) and confocal microscopy was performed. In another set of experiments, cells expressing mCherry-EGFP-LC3B were seeded on coverslips and transfected with GIPC siRNA or scrambled siRNA. After 96 h, cells were washed and fixed with 4% paraformaldehyde. Slides were mounted with Vectashield containing DAPI as previously described.

Glucose uptake and intracellular glucose measurement assay
Stable cells, transfected with either GIPC shRNA or the control vector, were seeded in 6-well plates and cultured for 48 h. Glucose uptake was measured using the Glucose Uptake Cell-Based Assay Kit (Cayman Chemical, Ann Arbor, MI), which uses a fluorescently labeled deoxyglucose analog. For the intracellular glucose concentration measurement, the Amplex Red Glucose Assay Kit (Life Technologies) was used with a slight modification of the manufacturer's protocol, as described previously [39]. Cells were collected by centrifugation, and the resulting cell pellet was washed twice in PBS and dispersed in 1X reaction buffer from the kit. Cells were lysed by probe sonication with three cycles of 10 seconds on, 30 seconds off at 20% power while continuously maintained on ice. Fifty μl of reaction solution (10 mM Amplex Red, 10 U/ml HRP, 100 U/ml glucose oxidase, 50 mM sodium phosphate buffer, pH 7.4) was added to 50 μl of cell lysate in a 96-well plate and incubated in the dark at 37˚C for 30 min. The fluorescence (excitation: 544 nm, emission: 590 nm) was then measured using a SpectraMax plate reader, and values were expressed as Relative Fluorescence Units (RFU)/mg protein.

Exosome isolation
Exosomes were isolated from the conditioned medium of PANC-1 and AsPC-1 cells by differential centrifugation. Cells were grown to 70-80% confluency and the medium was replaced with medium containing 10% fetal bovine serum that had been depleted of microparticles by centrifugation (60 min at 100,000 × g). After 72 h of incubation, supernatants were collected and cleared of cellular debris and dead cells with two sequential spins at 4˚C, 3,000 × g for 10 min. Cleared supernatants were then further centrifuged at 4˚C, 60,000 × g for 70 min. The resulting exosome pellets were washed with phosphate-buffered saline (PBS) and then centrifuged again at 4˚C, 100,000 × g for 70 min. The final exosome pellets were re-suspended in PBS or water depending on the experiment.

Electron microscopy
Freshly prepared exosomes re-suspended in water were further dispersed in Trump's fixative solution, composed of 4% (v) formaldehyde and 1% (v) glutaraldehyde in 0.1 M phosphate buffer at pH 7.2. The exosomes were then washed with 0.1 M phosphate buffer, 1% osmium tetroxide in 0.1 M phosphate buffer, distilled water, 2% (v) uranyl acetate, distilled water, ethanol, and absolute acetone, in sequence. Finally, exosomes were placed on a TEM grid for examination using a Philips Tecnai T12.

Proteomics analysis
Protein identification was performed via in-gel trypsin digestion and nanoLC-MS/MS with hybrid orbitrap/linear ion trap mass spectrometry. Briefly, protein from the exosomes of the GIPC-deficient stable cell lines was resolved on a 4-12% NuPage gel (MOPS buffer) with 20 μl of SDS-PAGE sample buffer containing 50 mM DTT.
The gels were stained with BioSafe colloidal blue dye (BioRad) and the desired bands were excised from the gel for mass spectrometry analysis using the following procedures. Colloidal blue stained gel bands were destained in 50% acetonitrile/50 mM Tris pH 8.1 until clear. The bands were then reduced with 50 mM TCEP/50 mM Tris, pH 8.1 at 55˚C for 40 min and alkylated with 20 mM iodoacetamide/50 mM Tris pH 8.1 at room temperature for 60 min in the dark. Proteins were digested in situ with 30 μl (0.005 mg/ml) of trypsin (Promega Corporation, Madison, WI) in 20 mM Tris pH 8.1/0.0002% Zwittergent 3-16 at 55˚C for 2 h, followed by peptide extraction with 10 μl of 2% trifluoroacetic acid and then 60 μl of acetonitrile. The pooled extracts were concentrated to less than 5 μl on a Speed-Vac concentrator (Savant Instruments, Holbrook, NY) and then brought up in 0.2% trifluoroacetic acid for protein identification by nano-flow liquid chromatography electrospray tandem mass spectrometry (nanoLC-ESI-MS/MS) using a ThermoFinnigan Orbitrap Elite Hybrid Mass Spectrometer (Thermo Fisher Scientific, Bremen, Germany) coupled to an Eksigent nanoLC-2D HPLC system (Eksigent, Dublin, CA). The digested peptide mixture was loaded onto a 250 nl OPTI-PAK trap (Optimize Technologies, Oregon City, OR), custom-packed with Michrom Magic C8 solid phase (Michrom Bioresources, Auburn, CA). Chromatography was performed using 0.2% formic acid in both the A solvent (98% water/2% acetonitrile) and the B solvent (80% acetonitrile/10% isopropanol/10% water), with a 2% B to 45% B gradient over 70 min at 300 nl/min through a hand-packed PicoFrit (New Objective, Woburn, MA) 75 μm × 200 mm column (Michrom Magic C18, 3 μm). The Orbitrap Elite mass spectrometer was set to perform an FT full scan from 340-1500 m/z with the resolution set at 120,000 (at 400 m/z), followed by linear ion trap CID MS/MS scans on the top fifteen ions. Dynamic exclusion was set to 1 and selected ions were placed on an exclusion list for 30 seconds.

Database searching
Tandem mass spectra were extracted by msconvert (version 3.0.4019; ProteoWizard), and all MS/MS samples were analyzed using Mascot (Matrix Science, London, UK; version 2.4.0), Sequest (Thermo Fisher Scientific; version 27, rev. 12) and X! Tandem (The GPM, thegpm.org; version CYCLONE (2010.12.01.1)). Mascot, Sequest, and X! Tandem were set up to search the February 2012 Swiss-Prot database, restricted to human, with a decoy reverse database, and assuming the digestion enzyme trypsin. Mascot and X! Tandem were searched with a fragment ion mass tolerance of 0.60 Da and a parent ion tolerance of 10.0 ppm. Sequest was searched with a fragment ion mass tolerance of 0.60 Da and a parent ion tolerance of 0.01 Da. Oxidation of methionine and the iodoacetamide derivative of cysteine were specified in Mascot, Sequest, and X! Tandem as variable modifications.

RNA isolation and quantitative PCR analysis
Total RNA was isolated from cell lines and exosomes using the miRCURY RNA Isolation Kit - Cell & Plant (Exiqon, Woburn, MA), followed by spectrophotometry (NanoDrop, Thermo Scientific) for quantitative and qualitative analysis. Equal amounts of total RNA were reverse-transcribed by oligo(dT) priming using the iScript cDNA Synthesis Kit (Bio-Rad, Hercules, CA) following the manufacturer's instructions. Real-time PCR was performed using the ABI 7500 Real-Time PCR System (Applied Biosystems, Foster City, CA) and the SYBR Green PCR Master Mix (Applied Biosystems) as described previously [40].
Glut1 and β-Actin primers were purchased from SABiosciences (Frederick, MD).

Drug sensitivity assay
Briefly, 5 × 10^3 cells were seeded per well, in triplicate, in 96-well flat-bottom plates with 100 μl of medium. After 24 h, variable concentrations of gemcitabine (mg/ml) were added and the cells were incubated for an additional 72 h. At the end of the treatment period, 20 μl of MTS solution containing PMS (MTS:PMS 520:1 vol. ratio) was added to each well and the cells were incubated at 37˚C for 1 to 2 h. The absorbance at 490 nm was recorded using a SpectraFluor PLUS (Molecular Devices, Sunnyvale, CA), and the half-maximal inhibitory concentration (IC50) values were calculated as the concentrations corresponding to a 50% reduction of cellular growth (a computational sketch of such an IC50 estimate is given at the end of this article). Prior to the drug sensitivity testing, cell viability was determined by the MTS assay (Promega, Madison, WI).

Statistical analysis
The data in the bar graphs represent the mean ± standard deviation of at least three independent experiments, each performed with triplicate samples. Statistical analyses were performed using a Student's t test, with a two-tailed value of P < 0.05 considered significant.

GIPC depletion induces autophagy in pancreatic cancer cells
Utilizing the GIPC-depleted AsPC-1 and PANC-1 pancreatic cell lines, we investigated whether GIPC modulated autophagy by assessing the conversion of the autophagy-related microtubule-associated protein light chain 3 (LC3) from LC3-I to LC3-II via Western blot analysis. It is well known that light chain 3-I (LC3-I), upon conjugation to phosphatidylethanolamine (PE), forms the conjugate light chain 3-II (LC3-II), which is then recruited to the membranes of autophagosomes [13,20]. LC3 expression has been widely used to monitor and establish the status of autophagy, as the amount of LC3-II correlates with the number of autophagosomes [41]. After a thorough investigation of the LC3-II level in the pancreatic stable cell lines, we observed an elevated LC3-II level in cells deficient for GIPC, indicating the activation of autophagy (Figure 1A). We also observed an increase in LC3-II (green) puncta formation in the GIPC-depleted cells by immunofluorescence (Figure 1B). GIPC knockdown in the presence of the lysosomal protease inhibitors Pepstatin-A and E-64d further increased LC3-II levels in a dose-dependent manner compared with GIPC knockdown alone, indicating enhancement of autophagic flux (Figure S1A). Furthermore, we used a tandem fusion protein, mCherry-EGFP-LC3B, containing acid-insensitive mCherry and acid-sensitive EGFP, as an autophagic flux reporter system [42,43]. During autophagosome formation, both EGFP and mCherry are detected in autophagosomes, which appear as yellow puncta. However, once autophagosomes fuse with lysosomes, the green fluorescence is lost because of the degradation of EGFP by acidic lysosomal proteases, resulting in only red puncta. Therefore, the presence of both yellow and red puncta indicates a functional autophagic flux process. Here we used both AsPC-1 and PANC-1 cell lines stably expressing mCherry-EGFP-LC3B to show the increase in both yellow and red puncta upon GIPC knockdown, which also indicated an increase in autophagic flux (Figure S1B). These findings suggested that GIPC knockdown induces the formation of autophagosomes in pancreatic cancer cells. We further investigated the effect of two autophagy-related genes, Atg7 and Beclin1, on GIPC-mediated autophagic regulation.
To assess the interaction of Atg7 and Beclin1, we reduced the levels of Atg7 and Beclin1 by RNA interference (RNAi) in both PANC-1 and AsPC-1 cells. As shown in Figures 2A and 2B, we did not observe any significant change in Atg7 or Beclin1 expression after GIPC depletion in either pancreatic cancer cell line. As Atg7 and Beclin1 are two key components of autophagosome biogenesis, we also observed a decrease in the conversion of LC3-I to LC3-II upon knockdown of Atg7 and Beclin1 in both pancreatic cancer cell lines. In AsPC-1 cells, we noticed that the induction of autophagy upon depletion of GIPC was significantly impeded by the reduction of Atg7 and Beclin1. In contrast, in PANC-1 cells, Atg7 and Beclin1 did not affect the LC3-II conversion subject to GIPC depletion. We further explored the association of GIPC with Atg7 and Beclin1 by co-immunoprecipitation experiments and found Beclin1 to be in the same complex with GIPC (Figure 2C), but did not obtain a conclusive result for Atg7 (data not shown).

GIPC mediates autophagy through metabolic stress pathways
Glut1 is associated with glucose uptake in cancer cells, and GIPC is known to stabilize Glut1 in the cell membrane as a PDZ domain-containing interaction partner [14]. In this regard, we examined whether knocking down GIPC in pancreatic cancer cells would destabilize Glut1 and disrupt glucose uptake into these cells. As expected, we found a significant decrease in Glut1 expression at both the mRNA and protein levels upon GIPC knockdown in AsPC-1 and PANC-1 cells (Figure 3A & 3B). Furthermore, we found that the relative glucose uptake of AsPC-1 and PANC-1 cells was significantly reduced in the absence of GIPC, compared to that of control cells (Figure 3C). To determine whether intracellular levels of glucose were also dependent upon the status of GIPC, we monitored the intracellular glucose level after GIPC knockdown in the same pancreatic cancer cell lines and found the levels to be significantly reduced compared to wild-type cells (Figure 3D). Importantly, under stress conditions, cellular AMP usually regulates the intracellular glucose level. AMP levels were elevated during glucose starvation, which, in turn, further activated the kinase activity of AMPK-α through phosphorylation [44,45]. To investigate this mechanism in pancreatic cancer cell lines, we examined the AMPK-α status by immunoblot in GIPC stable knockdown cells. Our results revealed a high level of phosphorylated AMPK-α upon GIPC depletion (Figure 4A), suggesting that GIPC may modulate the AMPK pathways. We further investigated the molecular mechanism of autophagy by examining downstream molecules of the AMPK-α pathway. We observed decreased levels of mTOR phosphorylation after GIPC knockdown in AsPC-1 and PANC-1 cells; however, total mTOR expression did not change. Additionally, we observed a decrease in a known downstream effector of mTOR, the phospho-p70S6K to p70S6K ratio, in GIPC-depleted cell lysates compared to the control parental cells (Figure 4B). Removal of extracellular glucose further enhanced AMPK-α phosphorylation and reduced mTOR phosphorylation as well as p70S6K phosphorylation (Figure S2). However, LC3 levels were decreased upon removal of extracellular glucose, which corroborates previous reports [46] suggesting that extracellular glucose removal kills the cells either by apoptosis or necrosis instead of inducing autophagy as a pro-survival effect.
Taken together, our results suggest that GIPC controls autophagy through the regulation of metabolic pathways in pancreatic adenocarcinoma cells.

GIPC influences exosome secretion and biogenesis
With the exosomes collected from the stable transfectants, we performed enzymatic assays for acetylcholine esterase activity as described previously [14]. This assay revealed a greater abundance of exosomes in the conditioned media of the GIPC-deficient cell lines. A 3.5-fold or greater increase in exosome production was observed in conditioned media collected from GIPC-depleted AsPC-1 cells compared to the control (Figure 5A). We obtained similar results with GIPC-depleted PANC-1 cells as well (data not shown). We also determined the concentration of total RNA in these exosomes as another measure of exosome abundance and found similar results (Figures 5B & 5C). Nanoparticle tracking analysis using the NanoSight LM10 confirmed the size distribution of our exosome preparations. With a mode of approximately 100 nm, their size was consistent with the current exosome definition (Figure 5D). We then performed a morphological characterization of the exosome preparations with an ultra-structural analysis of the exosome pellets by heavy metal negative staining and transmission electron microscopy (TEM). Analysis of the TEM images confirmed the exosome dimensions in our samples. Figure 5E represents the typical morphology of the overall exosome population at a lower TEM magnification. Further analysis of the TEM images at higher magnification confirmed the typical cup-shaped structure of exosomes (Figure 5F). These analyses confirmed that the presence or absence of GIPC did not affect exosome morphology. To confirm whether the increased exosomes in GIPC-depleted cells correlated with activation of the exosome biosynthesis machinery, we checked the expression of key genes (Alix, TSG101, CHMP4B) involved in exosome biogenesis by immunoblot. We observed an increased expression of Alix, TSG101, and CHMP4B in GIPC knockdown cells when compared to control cells (Figure 5G).

GIPC influences exosome content and sensitizes pancreatic cancer cell lines to chemotherapeutic drugs
To compare exosome content in GIPC knockdown and wild-type cells, we performed proteomics analyses on the exosomes collected from the PANC-1 stable cell lines. For the proteome analysis, protein was extracted from the secreted exosomes, and we found that the content of the exosomes varied greatly depending on GIPC status. In support of the robustness and sensitivity of our analysis methods, the proteomics data confirmed the absence of GIPC protein in exosomes isolated from the GIPC-deficient cells but not in the control samples. This also demonstrated, for the first time, the presence of GIPC in exosomes. Furthermore, proteomic analysis of the exosomes isolated from the PANC-1 stable cells revealed a significant enrichment of genes involved in drug resistance (data not shown). Among these genes, the most notable was the ATP-binding cassette sub-family G member 2 (ABCG2). Mass spectrometry analysis identified ABCG2 to be overexpressed 13-fold in GIPC-deficient exosomes when compared to control exosomes (data not shown). We verified this observation for exosomes at the protein level, as shown in Figure 6A. We did not observe any change in ABCG2 expression in the cell lysates. To verify the role of ABCG2 in drug sensitivity, we tested gemcitabine, a frontline pancreatic cancer drug, at different concentrations in GIPC-depleted PANC-1 cells.
Our results show that gemcitabine treatment sensitized the GIPC-deficient PANC-1 cells, decreasing the IC50 of the drug from 26 nM to 6 nM. These results suggest that GIPC is involved in the pancreatic cancer drug response and contributes to a more resistant phenotype (Figure 6B).

Discussion

GIPC has already been identified as an important regulatory molecule for stabilizing transmembrane proteins. It acts as a scaffold to control receptor-mediated trafficking [20,22,37]. Following receptor internalization, GIPC transiently associates with a pool of endocytic vesicles close to the plasma membrane [15]. Because GIPC is directly involved in the trafficking of endocytotic vesicles, it was logical to investigate its influence on autophagy. In this work, we report a novel role for GIPC as a master regulator of both autophagy and exosome biogenesis. We show in pancreatic cancer cells that depletion of GIPC created an environment of metabolic stress. This, in turn, induced autophagy and microvesicle shedding. We also observed that GIPC status determined the cellular message sent to the extracellular space via exosome secretion. To perform this study, stable GIPC-deficient pancreatic cancer cell lines were generated and the status of autophagy was monitored by assessing the expression of LC3-II, a protein that serves as a marker for autophagy. The increase in LC3-II expression as well as the abundance of LC3-II-positive vesicles in GIPC-deficient cells clearly illustrated the involvement of GIPC in autophagy. However, an increase in LC3-II level or a greater number of LC3-II-positive vesicles cannot establish whether autophagosome formation is upregulated or autophagic degradation is blocked. Therefore, we performed autophagic flux experiments in the presence of lysosomal protease inhibitors [47]. A further increase in LC3-II levels was observed in the presence of lysosomal protease inhibitors, indicating that GIPC knockdown was indeed inducing autophagosome formation. Increased abundance of both yellow and red LC3-II puncta in AsPC-1 and PANC-1 cells expressing mCherry-EGFP-LC3B upon GIPC knockdown also confirmed this observation. We further investigated the role of the autophagy-associated genes Beclin1 and Atg7 in our stable pancreatic cell lines. Our results show that the expression of Beclin1 and Atg7 did not change in GIPC-deficient cells. These findings suggest that GIPC depletion induced autophagy through an alternative mechanism independent of Beclin1 and Atg7. Rapidly growing and proliferating cells, such as cancer cells, require elevated metabolism. Cancer cells, in particular, preferentially consume glucose for biosynthesis and energy production. Glucose uptake is an essential step in glucose metabolism and is achieved by facilitative glucose transporters (Glut family members). Glut1 facilitates glucose transport across the plasma membranes of mammalian cells [21,48,49] and helps maintain the low-level basal glucose uptake required to sustain respiration in all cells. GIPC is known to interact with many transmembrane proteins, including Glut1, through their C-terminal PDZ domain-binding motifs and to help in their stabilization [14]. With GIPC depletion, Glut1 expression and glucose uptake decrease. We observed a similar phenomenon in our pancreatic cancer cell lines, where Glut1 expression as well as glucose uptake and intracellular glucose levels dropped with GIPC knockdown.
With this glucose deprivation, we further observed high levels of phosphorylated AMPK-α. During nutrient deprivation and metabolic stress, AMPK is allosterically activated by an elevated intracellular AMP/ATP ratio [50], followed by phosphorylation of threonine 172 within its α subunit [44]. Additionally, AMPK is known to negatively regulate mTOR signaling [45,51-54], and we observed decreased phosphorylation of mTOR after activation of AMPK in the GIPC-deficient cells. Under different stress situations, a linear relationship exists between the degree of phosphorylation of ribosomal protein S6 and the percentage of inhibition of autophagic proteolysis [55]. Our results are in agreement with previously published data, and we observed a similar effect with p70S6K. Removal of extracellular glucose further increased AMPK-α phosphorylation and reduced phosphorylation of both mTOR and p70S6K. However, LC3 levels decreased upon extracellular glucose removal, which suggests that not all forms of starvation induce autophagy [46]. Previous studies have reported that GIPC plays an important role in cellular trafficking by acting as a scaffold. There is substantial evidence that, after receptor internalization, GIPC transiently associates with a pool of endocytic vesicles close to the plasma membrane. The known role of GIPC in cellular trafficking prompted us to hypothesize that GIPC may play a role in exosome secretion. We prepared exosomes from the stable GIPC-deficient cell lines and their wild-type controls. To confirm the quality of our exosome preparation, we measured the abundance of exosomes by acetylcholine esterase enzymatic assays and RNA quantification. Despite the conflicting reports in the literature regarding the definition of microvesicles and exosomes, our transmission electron micrograph (TEM) and NanoSight results confirmed that our samples contained exosomes based on size (40 to 100 nm) and morphology. In this study, we report increased exosome secretion after GIPC knockdown. This observation was not confined to pancreatic cancer cell lines but was also made in the renal cancer cell line 786-O (data not shown). Because the same number of cells was plated for exosome collection, the variation in exosome secretion was attributed to the influence of GIPC. We further investigated the status of exosome biogenesis in these cells. Exosome production is initiated by the formation of MVEs and budding through the plasma membrane. In this study, we observed that, in the absence of GIPC, the expression of Alix, TSG101, and CHMP4B increased in comparison to the GIPC control cells. These findings suggest that, in the absence of GIPC, the MVE synthesis machinery is overactive and stimulates exosome secretion. In addition to exosome secretion and biogenesis, the molecular composition of exosomes varies significantly between physiological and disease states. Therefore, we investigated the effect of GIPC depletion at the protein level by mass spectrometry analysis. Interestingly, we identified the drug resistance gene ABCG2 as significantly upregulated in our exosome preparations and evaluated the effect of gemcitabine on GIPC-depleted and control cells. We found an increased sensitivity to gemcitabine in GIPC-deficient pancreatic cancer cells. In summary, GIPC modulates autophagy in pancreatic cancer cells through metabolic pathways and glucose deprivation. GIPC not only controls exosome biogenesis but also influences exosome content.
Most likely, the absence of GIPC promotes the depletion of the drug resistance molecule ABCG2 through exosome exocytosis. As a result, GIPC-deficient cells become more drug sensitive. Alternatively, depletion of GIPC in cancer cells may result in the sequestering of ABCG2 in vesicles, rendering it inaccessible and therefore, nonfunctional. This then sensitizes the cells to gemcitabine. These findings can be further explored as a novel therapeutic approach to overcome the drug resistance so often observed in cancers.
Pd/C-Mediated Dual C-C Bond Forming Reaction in Water: Synthesis of 2,4-Dialkynylquinolines

Pd/C catalyzes the coupling reaction between 2,4-diiodoquinoline and terminal alkynes in water, providing a practical, one-step synthesis of 2,4-dialkynylquinolines. A series of related quinoline derivatives was prepared in good to excellent yields using this water-based methodology. The use of other palladium catalysts and solvents was examined, and the reaction mechanism is discussed.

Introduction

Though a wide variety of quinoline derivatives have been reported earlier [2-4], a thorough literature search revealed that 2,4-dialkynylquinolines have surprisingly remained unexplored so far. As part of our ongoing program on building a quinoline-based library of small molecules, we were in need of a wide range of 2,4-dialkynylquinolines and a convenient synthetic methodology to access these derivatives. The palladium-mediated alkynylation of aryl/heteroaryl halides under Sonogashira conditions [5] has become a highly effective tool for the introduction of an alkynyl moiety to an aryl or heteroaryl ring [6]. The reactivity of 2,4-dihaloquinolines towards alkynylation under Sonogashira or modified conditions has been studied earlier (Scheme 1). For example, a 2,4-dibromoquinoline derivative provided the 2-alkynylated product A (Scheme 1) selectively when treated with a terminal alkyne [7]. Similarly, 2,4-dichloroquinoline afforded a 2-alkynyl derivative when coupled with a terminal alkyne under Pd/C-Cu catalysis in water [8]. Coupling of a 2-bromo-4-iodoquinoline derivative with a terminal alkyne in the presence of (PPh3)2PdCl2-CuI provided a 2,4-dialkynylated product B in 17% yield (Scheme 1); the 4-alkynyl derivative, however, was isolated as the major product in this case [7]. Nevertheless, to the best of our knowledge this is the only example known so far for the preparation of a 2,4-dialkynylquinoline (B), and no detailed study on this reaction has been reported. Therefore, the development of a general method leading to compound B (Z = H) exclusively was required. Herein, we report a direct and practical method for the preparation of 2,4-dialkynylquinolines under Pd/C-Cu catalysis in water. To the best of our knowledge, this is the first example of a Pd/C-mediated dual C-C bond forming reaction in water.

Results and Discussion

Initially, we attempted to prepare the 2,4-dialkynylquinoline from its 2,4-dichloro precursor via a two-step method. Thus, the 2-alkynyl quinoline prepared via an earlier method [8,9] was treated with a terminal alkyne in the presence of a number of palladium catalysts, e.g., (PPh3)2PdCl2, Pd(OAc)2 or Pd/C. However, none of these reactions provided the desired product in good yield. It is well known that a chloro group is less reactive than an iodo group towards palladium catalysts under normal Sonogashira conditions. Hence, we decided to use 2,4-diiodoquinoline (2) for our synthesis, which could be readily obtained from the corresponding 2,4-dichloro analogue (1) as shown in Scheme 2 [10]. Having prepared the 2,4-diiodoquinoline (2), we initially coupled it with 1-octyne in the presence of various palladium catalysts (Table 1). Because of our earlier success in the use of 10% Pd/C-PPh3-CuI as a catalyst system in water [8], we decided to conduct the reaction of 2 (1.0 equiv.) with 1-octyne (3.0 equiv.)
in the presence of the same catalyst system in water using Et3N as a base. The reaction proceeded smoothly, affording the desired product 3a in 85% yield (Entry 1, Table 1), and no monoalkynylated or other side product was detected in the reaction mixture. The reaction was carried out for 10 h, and an increase in reaction time did not improve the product yield (Entry 2, Table 1). Product formation was almost suppressed in the absence of PPh3 (Entry 3, Table 1). The use of other catalysts, e.g., PdCl2(PPh3)2, Pd(PPh3)4 or Pd(OAc)2-PPh3, afforded the product 3a, albeit in inferior yield (Entries 4-6, Table 1). While the use of other solvents, e.g., 1,4-dioxane, DMF and EtOH, provided 3a in good yield (Entries 7-9, Table 1), water was nevertheless the solvent of our choice. We then decided to explore the scope and generality of this Pd/C-mediated dual C-C bond forming reaction on the quinoline ring in water. Thus, a variety of terminal alkynes were reacted with 2 in the presence of 10% Pd/C, PPh3 and CuI in water (Scheme 2). The results of this study are summarized in Table 2. Terminal alkynes containing a range of groups in their side chains, such as alkyl (Entries 1-3, Table 2), aryl (Entries 4-6, Table 2), cyano (Entry 7, Table 2) or hydroxyalkyl moieties (Entries 8-10, Table 2), were reacted with the diiodo compound 2, and all of these groups were well tolerated under the conditions employed. The reaction proceeded well in water in all these cases, affording the corresponding 2,4-dialkynylquinolines (3a-j) in good to excellent yields. All the products isolated were well characterized by spectral (NMR, IR and MS) and analytical data. The presence of alkyne moieties was indicated by a sharp IR absorption in the region 2200-2230 cm-1 shown by all the quinoline derivatives synthesized. This was further supported by the appearance of signals corresponding to the sp-carbons at δ 70-96 in the 13C NMR spectra of compounds 3a-j. Mechanistically, the reaction may appear to proceed via a simultaneous C-C bond forming reaction at C-2 and C-4 of the quinoline ring, as neither the C-2 nor the C-4 monoalkynyl derivative was isolated as a side product from the reaction mixture. However, such a reaction would mechanistically be difficult, as it would involve third-order kinetics in the Pd-mediated C-I oxidative addition and/or transmetalation process. Moreover, because of the use of elevated reaction temperatures, and because the second alkynylation could theoretically be kinetically more facile than the first, the real step-by-step process was not observed in practice during the reaction. Nevertheless, because of the high reactivity of the actual catalytic species Pd(0), generated in situ [11], towards the Csp2-I bond, the organopalladium intermediate (X) formed as a result of oxidative addition of Pd(0) to 2,4-diiodoquinoline would initially contain one Csp2-Pd-I moiety (Scheme 3). It is likely that the oxidative addition of Pd(0) would take place initially at C-2 of the quinoline ring [8,9], due to the higher reactivity of the iodo group at this position. The copper acetylide generated from the terminal alkyne and CuI would undergo transmetalation with the organopalladium species, and subsequent reductive elimination of Pd(0) would produce the monoalkynyl quinoline intermediate (Y). A further alkynylation of this intermediate provides the dialkynyl product 3.
Though the nature of the catalytic species generated in situ is not known when the reaction is performed in the absence of PPh3, which afforded the desired product in 15% yield (Entry 3, Table 1), Pd/C-mediated Sonogashira coupling of aryl iodides with terminal alkynes in the absence of ligand has been reported earlier [12]. Nevertheless, a similar dialkynylation reaction was observed when 2,4-dichloroquinazoline was reacted with a terminal alkyne in the presence of (PPh3)2PdCl2, CuI and Et3N for 20 h [13]. Notably, 2,4-dichloroquinoline did not participate in a dual alkynylation reaction due to the lack of reactivity of the C-4 chloro group [8,9]. We overcame this issue in the present reaction by opting for the more reactive iodo group over chloro on the quinoline ring. Thus, judicious selection of the quinoline halide can help in affording a product of particular choice.

Conclusions

In conclusion, we have described a practical, one-step synthesis of 2,4-dialkynylquinolines from 2,4-diiodoquinoline and commercially available terminal alkynes under Pd/C-Cu catalysis in water. The reaction proceeds via a dual C-C bond forming reaction without generating any side products, and all the desired products were isolated in excellent yields. The reaction does not involve the use of expensive catalysts or solvents. Since the nature of the product can be altered conveniently by changing the halide moiety of the quinoline ring, the present process is, overall, complementary to the methods previously reported for the alkynylation of quinolines. Due to the operational simplicity and easy availability of the starting materials, we believe that this process will find wide usage in the preparation of quinoline-based libraries of pharmaceutical interest.

General methods

Unless stated otherwise, reactions were monitored by thin layer chromatography (TLC) on silica gel plates (60 F254), visualizing with ultraviolet light or iodine spray. Flash chromatography was performed on silica gel (60-120 mesh) using distilled petroleum ether and ethyl acetate. 1H and 13C NMR spectra were determined in CDCl3 solution using 400 and 50 MHz spectrometers, respectively. Proton chemical shifts (δ) are relative to tetramethylsilane (TMS, δ 0.0) as internal standard and expressed in parts per million. Spin multiplicities are given as s (singlet), d (doublet), t (triplet), and m (multiplet), as well as b (broad). Coupling constants (J) are given in hertz. Infrared spectra were recorded on an FTIR spectrometer. Melting points were determined using thermal analysis, and differential scanning calorimetry (DSC) data were generated with a DSC-60A detector. MS spectra were obtained on a mass spectrometer. Chromatographic (HPLC) purity was determined using the area normalization method and the conditions specified in each case: column, mobile phase (range used), flow rate, detection wavelength, and retention times. Elemental analyses (C, H, N) were performed. For the reactions in Table 2, all couplings were carried out using 2,4-diiodoquinoline (1.0 equiv.), alkyne (3.0 equiv.), 10% Pd/C (0.26 equiv.), CuI (0.05 equiv.), PPh3 (0.20 equiv.) and Et3N (3.0 equiv.) in water (5.0 mL); yields quoted are isolated yields.

Preparation of 2,4-diiodoquinoline, 2 [14]

To a cold solution (5-10 °C) of 2,4-dichloroquinoline (5.0 g, 25.3 mmol, 1.0 equiv.)
in acetonitrile (50 mL) was added acetyl chloride (6.0 g, 76.4 mmol, 3.0 equiv.) and sodium iodide (30.0 g, 200.1 mmol, 8.0 equiv.). The mixture was stirred at 5-10 °C for 30 min and then slowly heated to reflux. The mixture was stirred at reflux temperature for 6 h under a nitrogen atmosphere and the progress of the reaction was monitored by TLC. After completion of the reaction, the mixture was cooled to room temperature and diluted with 10% Na2S2O3 to adjust the pH to 8.0-8.5. The solid that precipitated was filtered off and purified by column chromatography to give the desired product as a light brown solid (7.2 g, yield 75%); mp 89-90 °C (lit. [14]).

General procedure for the preparation of dialkynyl quinolines, 3

A mixture of 2,4-diiodoquinoline (2), 10% Pd/C (0.26 equiv.), CuI (0.05 equiv.), PPh3 (0.20 equiv.) and triethylamine (3.0 equiv.) in water (5.0 mL) was stirred at room temperature for 30 min. To this mixture was added the terminal alkyne (3.0 equiv.) with stirring. The mixture was then stirred at 80-85 °C for the time mentioned in Table 1. After completion of the reaction (indicated by TLC), the reaction mass was cooled to room temperature, filtered through celite and extracted with ethyl acetate (2 × 30 mL). The organic layers were collected, washed with water (3 × 30 mL), dried over anhydrous Na2SO4, and concentrated. The crude residue obtained was purified by column chromatography on silica gel, using light petroleum (60-80 °C)-ethyl acetate, to afford the desired product 3.
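The 75% yield quoted above for 2 can be reproduced from the quantities given. The short Python sketch below is illustrative only: it recomputes the theoretical yield using standard atomic masses, and the molecular formulas C9H5Cl2N and C9H5I2N are assumed from the structures rather than stated explicitly in the text.

```python
# Percent-yield check for the halogen-exchange step (illustrative sketch).
# Assumes molecular formulas C9H5Cl2N (2,4-dichloroquinoline) and
# C9H5I2N (2,4-diiodoquinoline); masses are taken from the procedure above.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "Cl": 35.453, "I": 126.904}

def molar_mass(formula: dict) -> float:
    """Sum atomic masses for a formula given as {element: count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

mw_dichloro = molar_mass({"C": 9, "H": 5, "Cl": 2, "N": 1})   # ~198.05 g/mol
mw_diiodo   = molar_mass({"C": 9, "H": 5, "I": 2, "N": 1})    # ~380.96 g/mol

moles_start = 5.0 / mw_dichloro             # 5.0 g starting material -> ~25.3 mmol
theoretical_mass = moles_start * mw_diiodo  # ~9.6 g of product at 100% conversion
percent_yield = 100.0 * 7.2 / theoretical_mass  # isolated 7.2 g

print(f"theoretical yield: {theoretical_mass:.2f} g")
print(f"percent yield:     {percent_yield:.0f} %")   # ~75%, matching the text
```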
Fault classification and diagnostic system for unmanned aerial vehicle electrical networks based on hidden Markov models

In recent years there has been an increase in the number of unmanned aerial vehicle (UAV) applications intended for various missions in a variety of environments. The adoption of the more-electric aircraft has led to a greater emphasis on electrical power systems (EPS) for safe flight, through an increased number of critical loads being sourced with electrical power. Despite extensive literature detailing the development of systems to detect UAV failures and enhance overall system reliability, few have focussed directly on the increasingly complex and dynamic EPS. This study outlines the development of a novel UAV EPS fault classification and diagnostic (FCD) system based on hidden Markov models (HMM) that will assist and improve EPS health management and control. The ability of the proposed FCD system to autonomously detect, classify and diagnose the severity of diverse EPS faults is validated with development of the system for NASA's advanced diagnostic and prognostic testbed (ADAPT), a representative UAV EPS system. EPS data from the ADAPT network was used to develop the FCD system, and results described within this study show that a high classification and diagnostic accuracy can be achieved using the proposed system.

Introduction

The increasing trend of unmanned aerial vehicle (UAV) deployment for a variety of missions can mainly be attributed to the promise of reduced costs and reduced risk to human operators [1]. However, eliminating the function of pilot from unmanned aircraft and replacing it with completely autonomous flight control complicates a number of issues, such as vehicle reliability. UAVs rely on a robust and intelligent control system that monitors and anticipates problems occurring in the flight dynamics, as well as compensating for communication time delays. A UAV reliability investigation undertaken by the US Department of Defence [2] showed that the major sources of failure can mainly be divided into power/propulsion, flight control, communication and human/ground subsystems. Owing to the power system being integral to UAV reliability, the proper management of its health is imperative to UAV affordability, mission availability, operational efficiency and acceptance into civil airspace. Electrical systems are a critical aspect of UAV power systems, particularly with the advent of the more-electric aircraft [3]. On-board electrical loads include crucial subsystems such as avionics, propulsion, life support and environmental controls. UAV electrical power systems (EPS) operate in harsh environments and are characterised by physically compact topologies, where high-density generation provides energy to power-electronics-interfaced loads. Within the EPS, a diverse range of failure modes exist that have varying effects on network reliability; a major challenge is the design of fault tolerant control systems that can quickly detect and diagnose both critical and degraded faults to ensure robust health management and reliable operation. Previously, systems based on advanced diagnostic techniques [4][5][6] have been utilised for this purpose, although, generally, there has been limited focus on the EPS.
This research proposes the development of an EPS fault classification and diagnostic (FCD) system based on HMM that has the ability to accurately detect, classify and assess the impact of UAV network faults. The application of HMM to the EPS domain has previously been researched; Abdil-Galil et al. [7] investigated their implementation for the classification of power quality disturbances and Suxiang et al. [8] utilised them for the diagnosis of power transformer faults. The main value identified in these applications included the inherent scalability and the potential to simultaneously infer the probability of multiple system state hypotheses. The proposed FCD system evaluates their applicability to UAV EPS and how their use can supplement health management and fault tolerant control. This paper outlines the development and operation of the FCD system, where the operation can be divided into two separate stages:
† Stage 1 - Classification of EPS network state.
† Stage 2 - Diagnosis of fault severity through parameter calculations.
This two stage system has the capacity to autonomously discriminate between a variety of potential system conditions and quantify the severity of any fault occurrence using EPS data. Both outputs of the system are vital elements in the control and monitoring of the network that provide key information regarding network behaviour. The application of the proposed system was verified with data collected from a subset of NASA's advanced diagnostic and prognostic testbed (ADAPT) network, the ADAPT-Lite (ADL) network [9]. The paper opens by presenting background information on: the ADL network and ADL data; the challenges associated with classifying EPS faults; related work; and an introduction to HMM. The following section outlines the proposed FCD system applied to the ADL network, and Section 4 presents operational results of the system. Future work is explained in Section 5 and the paper is concluded in Section 6.

ADAPT-lite system

The NASA ADAPT [9] is a unique facility that is designed to test, measure, evaluate and help mature diagnostic and prognostic health management technologies. The ADAPT system is representative of the topology of an EPS vehicle system in that it provides energy generation/conversion, energy storage, power distribution and power management functions. For the purpose of this paper, the FCD system is applied to a subset of the ADAPT system, the ADL network. A schematic of the ADL network is shown in Fig. 1. It includes a single battery, two AC loads and one DC load. An inverter converts DC power from the battery into AC to power the two AC loads. The single DC load is powered directly from the 24 V battery. Sensors throughout the network monitor voltage, current, temperature and switch positions. The circuit breakers (CBs) are nominally closed. The network has a non-redundant power configuration of the EPS that supports mission and vehicle critical loads.
The FCD system was designed and tested using data from the ADL network. The data was publicly available and distributed by the Second International Diagnostic Competition (DXC'10) [9]. The data involved individual controlled experiments undertaken on the ADL network, with each experiment detailing sensor readings for all sensors within the network. Each experiment covered roughly four minutes of time, with sensor readings detailed every 100 ms. Within a number of the experiments, failure scenarios were injected into the network. Only one failure was present during each experiment, meaning multiple failures within the network are not considered. The injected failure scenarios are characterised by the location of the fault and the fault mode. Faults are injected into all components within the ADL network, including sensors. Fault modes include 'abrupt', 'intermittent' and 'incipient'. The severity of the injected fault was either network 'critical' or 'degraded'. ADL fault characteristics are illustrated in Fig. 2. With the occurrence of critical faults, the UAV mission is no longer sustainable and abort recommendations should be provided; with degraded faults occurring, the network can still support critical loads and no abort recommendations need be provided. The data enabled the development and operation of the FCD system to include both network state classification and a diagnosis of fault severity.

Diagnostic challenges of EPS

The complex and dynamic nature of EPS leads to a number of challenges in attempting to accurately diagnose the occurrence of system faults and correctly initiate network recovery options to optimise reliable operation. The first of these challenges is that the number of mode-inducing components, such as relays, CBs and loads, leads to a large range of network mode possibilities having to be considered [4]. Secondly, the transients introduced into the system by mode-inducing components during nominal switching periods mean that the implementation of simple threshold-based monitoring systems is an inadequate solution, because of the high false positive rates the transients would induce. In addition, the failure of system components and sensor noise distortion can lead to system state uncertainty. Furthermore, the diversity of EPS faults produces a range of fault onset periods, ranging from seconds for switch-based faults to days or weeks for source-based faults. These challenges highlight the necessity to develop robust monitoring systems that can handle EPS state uncertainty as well as detect faults that manifest over differing time periods and have varying impact on network reliability.
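To make the threshold argument concrete, the toy sketch below shows how a fixed limit check flags a benign switching transient as a fault. It is illustrative only: the signal, threshold value and transient shape are invented for the example and are not taken from the ADL data.

```python
import numpy as np

# Toy current trace: nominal level 2 A, a brief benign switching transient around
# sample 200, and a genuine sustained step fault from sample 600. Values invented.
rng = np.random.default_rng(0)
current = 2.0 + 0.02 * rng.standard_normal(1000)
current[200:203] += 1.5          # short switching transient (benign)
current[600:] += 0.8             # sustained fault

THRESHOLD = 2.5                  # naive fixed limit

alarms = np.flatnonzero(current > THRESHOLD)
print("first flagged samples:", alarms[:5])              # the benign transient is flagged
print("false alarms before the real fault:", int(np.sum(alarms < 600)))
```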
Related work

The challenge of detecting and diagnosing EPS anomalies is a widely researched area [10,11]. Generally, the techniques that are developed in an attempt to address this problem fall within two categories: non-model based and model based. Non-model based methods typically involve limit or trend checking [12], the installation of special sensors [13] and the development of expert systems [14] that implement the knowledge of diagnostic experts to determine the implication that observed symptoms have on network state. The model based approach [15] usually concerns the development of models which capture nominal behaviour; these come in a number of different forms, including signal processing [16], statistical [17] and causal [18]. Fault detection and diagnosis is achieved through the generation of residuals, that is, differences between measurements and the expected normal behaviour. Within this paper, a statistical, multiple model based technique is employed; this approach involves the use of system data to develop separate state-space models that correspond to both nominal and fault conditions. Diagnosis then involves probabilities being assigned to each model, given observational data. Similar systems have been developed using interacting multiple models of Kalman filters [19,20]. Regarding UAVs specifically, the majority of fault diagnosis is centred on assessing hardware faults in the flight control surfaces and sensors, and the failure of communication links to the control station; Cork et al. [21] and Bateman et al. [22] focussed efforts on identifying failures and implementing a reconfiguration of the control system to bring the aircraft to a normal state or, in the worst case, abort the mission. Most of these techniques are based on parameter estimation for residual generation [23], data driven artificial neural networks [24] and mathematical models such as Petri nets [25]. With respect to UAV EPS fault diagnosis, and the ADL in particular, Mengshoel et al. [4] used Bayesian networks, a form of causal model, to represent sensors, which were compiled to arithmetic circuits to determine network diagnoses. Wilson et al. [5] used causal dependency graphs of fault causes and fault effect propagation paths to detect system faults, and Narasimhan et al. [6] used a Hybrid Diagnostic Engine framework where behavioural and transitional models formed a basis for diagnosing changes in the operating modes of ADL components. A number of these techniques are based on graphical representations of the networks being modelled, and successful implementations of such systems depend upon proper selection of the type of network structure. The utilisation of diagnostic systems based on HMM, as proposed in this paper, could overcome this modelling issue as there is no requirement for network structures to be specified; instead, HMM use data to learn model parameters that statistically describe certain conditions. Their ability to provide probabilistic reasoning under uncertainty and solve classification problems associated with time series input data under minimal computational burden makes them a potentially attractive solution for UAV EPS fault detection, classification and diagnosis.
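As a point of comparison for the HMM-based approach adopted here, a minimal residual-based detector of the kind described in the related work above might look as follows. This is an illustrative sketch only; the nominal prediction, example data and threshold are assumptions and are not drawn from the cited works.

```python
import numpy as np

def residual_detector(measured: np.ndarray, predicted: np.ndarray,
                      threshold: float) -> np.ndarray:
    """Flag samples where the residual between a measurement and the
    nominal-model prediction exceeds a fixed threshold."""
    residual = measured - predicted
    return np.abs(residual) > threshold

# Example with an assumed constant nominal prediction of 24 V and a 1 V band.
measured = np.array([24.1, 23.9, 24.0, 22.4, 22.3])
predicted = np.full_like(measured, 24.0)
print(residual_detector(measured, predicted, threshold=1.0))
# -> [False False False  True  True]
```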
Traditional applications of HMM are in areas such as speech, handwriting and gesture recognition [26][27][28]. Recently, HMMs have been applied to classifying patterns in process trend analysis [29], anomaly detection in nuclear reactor cores [30], machine condition monitoring [31,32] and classifying electrical grid distribution network line disturbances [7,33]. Rabiner [28] provides a comprehensive introduction to HMM.

Hidden Markov models

The ADL sensor data is an example of multivariate time series data where non-stationary periods define the presence of fault conditions. The ability to determine the latent physical state responsible for such changes in the data is the main goal of fault classification. Relating observational data to latent variables is a fundamental concept of HMM. This relationship involves non-stationary periods in the data representing transitions between latent states and, conversely, stationary periods in the data representing some form of latent state. It is therefore vital to have the capability to model data in a way that makes certain temporal aspects explicit. Modelling the distribution of the ADL data and then detecting shifts in its characteristics would enable such changes to become explicit. There are a number of distribution functions that can be used for modelling the probability distribution of observed variables. Typically, the simplest functions applied for continuous density observations assume a Gaussian distribution per latent state [34]. Considering the multidimensional nature of the ADL data, approximating the distribution with a single Gaussian function would provide an overgeneralised fit [35]. A solution to this is to approximate the unknown density with a mixture of simple density functions. The general form for a variable x of dimension d using M mixture components is given by

p(x) = Σ_{i=1}^{M} α_i p_i(x | θ_i)

where α_i are the mixture weights and θ_i are the parameters of the ith simple density used as a mixture component. The most widely used mixture model is the Gaussian mixture model (GMM) [30], where each base distribution is a Gaussian with parameters θ_i = {μ_i, Σ_i} comprising the mean vector μ_i and covariance Σ_i. The likelihood of an observation for each mixture component is given by

p_i(x | θ_i) = N(x; μ_i, Σ_i) = (2π)^(-d/2) |Σ_i|^(-1/2) exp(-½ (x - μ_i)^T Σ_i^(-1) (x - μ_i))

Changes in observation distribution can be detected by testing which base mixture component returns the highest likelihood for a given observation, where each distribution comprising the GMM represents a latent class conditional density [34]. The relationship between latent states and observational data is illustrated in Fig. 3. This example shows both the 'hidden' Markov temporal dynamics and the GMM representation of the observation space. A Markov model is a state based model that assumes the presently active state has been generated solely by the previous n states it has been in, where n is the model order. A HMM abstracts time series observation data into a state based form and uses a first order Markov chain to model the dynamics of the hidden state sequence [30]. The observations in Fig. 3 are current sensor data from a load resistance offset fault within the ADL network; the GMM has two base densities representing the distribution of the data. Regions in the data where the current remains constant can be modelled by a single mixture component, which in turn can be mapped to certain states within the hidden sequence. At fault onset, the current magnitude increases; the increase in current corresponds to a change in the most likely mixture component, represented by the increased current value, resulting in a change of state in the HMM.
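The component-selection idea above can be sketched in a few lines of Python. The sketch is illustrative only: the two component parameters are invented stand-ins for the nominal and faulted current levels, not values fitted to the ADL data.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Two assumed mixture components for a single current sensor (d = 1):
# component 0 ~ nominal current level, component 1 ~ elevated (faulted) level.
components = [
    {"weight": 0.7, "mean": [2.0], "cov": [[0.01]]},
    {"weight": 0.3, "mean": [3.5], "cov": [[0.04]]},
]

def most_likely_component(x):
    """Return the index of the mixture component with the highest
    weighted likelihood for observation x."""
    scores = [c["weight"] * multivariate_normal(c["mean"], c["cov"]).pdf(x)
              for c in components]
    return int(np.argmax(scores))

print(most_likely_component([2.05]))  # -> 0 (nominal-level component)
print(most_likely_component([3.40]))  # -> 1 (fault-level component)
```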
Inference of the state evolution in a HMM for a given observation period can be undertaken using a number of different methods [34]. A maximum a posteriori (MAP) estimation infers the most probable state sequence in chain structured models; in the context of HMM, the MAP estimation is known as Viterbi decoding [36]. The Viterbi algorithm (VA) computes

s* = arg max over s_1:T of P(s_1:T | x_1:T)

The VA enables the optimum underlying system state sequence, s*, to be inferred across the observation period, x_1:T, where all possible state sequences, s_1:T, are considered. In Fig. 3, it can be inferred from the data that the ADL is in a nominal state until the point in time where the change in current magnitude enhances the likelihood of the network being in a faulted state. An additional property of HMM concerns the probability of their statistical parameters yielded through training. These are a measure of how well a model has fitted the training examples presented to it through its parameters. A framework that contains multiple HMM permits the classification of candidate observation sequences by inferring the probability of the sequence being generated by a given model. This measure can be used to select the model which returns the highest likelihood and, in doing so, allows the sequence to be classified with the label associated with that model. In this application, a series of HMM are trained based on different input data sets, each representing different system conditions. New data is classified by applying it to each of the models, with the model returning the highest probability of generating the data assumed to be the closest match and therefore the most likely condition of the system.

FCD system outline

The operation of the FCD system proposed by the authors, and illustrated in Fig. 4, is split across two stages: Stage 1 classifies the network condition, and, once the network condition has been classified, Stage 2 diagnoses the severity of any fault that may have occurred. In this section, an overview of FCD operation and development is provided. This overview highlights the system's ability to differentiate between a number of EPS network conditions and identify both critical and degraded modes of ADL network operation.

System operation

Stage 1 - Fault classification: A framework of multiple trained HMM corresponding to separate conditions within the ADL network enables the classification of candidate system data. A total of 15 conditions, described in Table 1, are modelled within the framework. A decision on network state is made primarily by calculating the log-likelihood [28] of the input data, given each model's trained statistical parameters; classification then involves selecting the labelled model that returns the highest log-likelihood.

Stage 2 - Fault severity diagnosis: Stage 2 operates on the basis that a fault has been classified in Stage 1 of the system; hence, if a nominal state has been classified after Stage 1, there is no requirement for the implementation of severity diagnosis. However, in the event of a fault being classified, it is necessary to diagnose fault severity to determine the impact the presence of the classified fault has on the reliable operation of the UAV. Calculating fault parameters enables the severity of any ADL fault to be quantified. Fault parameter calculation algorithms (FPA) were developed that use the models' optimal state sequence, calculated using the VA, to determine the parameters. The set of parameters required for the quantification of fault severity is dependent on the mode of fault that has been classified. Hence, three separate FPAs were developed corresponding to the three modes of fault (abrupt, intermittent and incipient) within the ADL network, as outlined in Table 1.
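A minimal sketch of the Stage 1 decision rule, using the hmmlearn package, is shown below. It is illustrative only: GaussianHMM is used in place of the GMM observation models described in the paper, and the training data, state counts and condition labels are assumptions made for the example.

```python
import numpy as np
from hmmlearn import hmm

# Assumed training sets: one multivariate feature matrix (T x d) per condition.
training_sets = {
    "nominal": np.random.default_rng(1).normal(0.0, 0.1, size=(500, 2)),
    "FC1_abrupt": np.random.default_rng(2).normal(1.0, 0.3, size=(500, 2)),
}

# Train one HMM per labelled condition (the Stage 1 model bank).
models = {}
for label, X in training_sets.items():
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    m.fit(X)
    models[label] = m

def classify(observation_window: np.ndarray) -> str:
    """Return the label of the model with the highest log-likelihood."""
    scores = {label: m.score(observation_window) for label, m in models.items()}
    return max(scores, key=scores.get)

print(classify(np.random.default_rng(3).normal(1.0, 0.3, size=(40, 2))))
# -> 'FC1_abrupt' (under the assumed synthetic data)
```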
As an example of operation, if FC1 is classified after Stage 1, the optimal state path for the particular HMM of this fault condition will be calculated and then, considering that FC1 relates to an abrupt fault mode, the FPA for calculating parameters for an abrupt fault would be initialised. The algorithms essentially utilise the optimal state path sequence to detect points in time where the state of the system changes. Deciphering the points of state change enables the parameters to be calculated. After the fault parameters have been calculated, the severity of the fault can be determined. In the case of UAV operation, information on the criticality of EPS faults occurring is necessary to determine the impact the fault may have on vehicle and mission reliability.

Data preparation: Machine learning is critically dependent on the quality and volume of training data and the selection of features that are presented to the learning algorithms [28]. Training a model on inappropriate data will result in an inadequate representation of the generalised behaviour of the modelled condition, and produce a model that will perform poorly at the inference stage. Extracting unique signatures for each condition is integral to FCD system development, especially when attempting to discriminate between a large set of network conditions. Consequently, to attempt to provide each HMM with the best data representation of condition behaviour, several processes were undertaken to prepare the data. Firstly, capturing the dependencies that existed within the multivariate ADL data throughout certain conditions was necessary to eliminate any redundant information being used during model training, and to reduce the dimension of the observation space. This can be achieved through a simple analysis, such as variable plotting, or through a more formal approach, such as principal component analysis [37]. Also, in order to align to a notionally common scale, the data for fault conditions was normalised. However, for nominal conditions, normalisation would convert the data to the common scale and any sensor noise would be undesirably magnified. Accordingly, the absolute deviation of the nominal data was extracted to maximise the constancy associated with nominal conditions. De-noising of the data was also undertaken using wavelet analysis [38]. These processes, applied to a current sensor within the ADL network during an intermittent fault and under nominal conditions, are illustrated in Fig. 5. As a result of the preparation, the data applied to each HMM for model training were feature vectors describing sensor data for a variety of sensors sensitive to the specific network condition being modelled.

Model selection: Modelling the observation space of HMM with a GMM captures non-stationary intervals through changes in the dominant mixture distribution and thus changes in latent state. However, the degree to which non-stationary periods are measured depends upon the number of mixtures that represent the distribution, because some non-stationary behaviour is absorbed into changes within the dominant mixture component as opposed to changes between distributions.
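Returning briefly to the data preparation described above, the de-noising and scaling steps might be sketched as follows. This is illustrative only: the wavelet family, decomposition level and threshold rule are assumptions, PyWavelets stands in for the wavelet analysis of [38], and the absolute-deviation and normalisation formulas are likewise assumptions about what the authors intend.

```python
import numpy as np
import pywt

def wavelet_denoise(signal: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    """Soft-threshold the detail coefficients and reconstruct the signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))         # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

def prepare(signal: np.ndarray, nominal: bool) -> np.ndarray:
    """Nominal data: absolute deviation from its mean; fault data: min-max normalisation."""
    clean = wavelet_denoise(signal)
    if nominal:
        return np.abs(clean - clean.mean())
    return (clean - clean.min()) / (clean.max() - clean.min())
```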
Although increasing the number of states and mixture components will implicitly capture a finer degree of non-stationary behaviour, the computational complexity of the model will increase. This modelling flexibility poses the problem of determining the cardinality of parameters, for example, how many states to use and how many mixture components will be present in the observation model. The quantity of training data also has to be considered with respect to learning the parameters of the models, and whether the set of training data is sufficient to specify a set of parameters that suitably model the condition. When fitting HMM to data using the expectation maximisation (EM) learning algorithm [39], increasing the cardinality of states and mixture components will increase the likelihood of the trained parameters. The problem associated with increasing the likelihood of trained parameters is that models become over fitted to the training examples presented to them. Over fitting [35] is a phenomenon in which the models learn features pertinent only to the training set, and will therefore perform poorly at inferring new, unseen data. A solution to such problems is to introduce terms in the model selection criteria that punish model complexity but still take into account the model fit. One such technique that considers model likelihood but retains a term to punish model complexity is the Bayesian information criterion (BIC), which is defined formally as

BIC = −2 ln p(X | θ) + N_m ln N

where X is the training data set, θ is the maximum likelihood estimate of the model, N is the dimension of the training set and N_m is the number of degrees of freedom (parameters) of the model. Minimising the BIC value will optimise the number of parameters in terms of both model fit and complexity. Consequently, for each of the 15 modelled conditions within the ADL network, BIC was used in determining model selection. The relatively limited volumes of training data, particularly with regard to fault conditions, meant the number of model parameters considered was limited [28]. Accordingly, when developing the HMM, BIC ratings for each of the models were calculated by increasing the number of states from 2 through to 5 and the number of training files describing each condition from 1 through to 5. Table 2 shows optimal models for selected conditions, chosen by minimising the model BIC. The log-likelihood details the degree to which the parameters of the HMM describe the training files presented, with a value closer to zero indicating a higher model fit. The BIC considers all model elements, and determines whether there is a necessity to either increase or decrease cardinality. Table 2 highlights the state variability among selected models within the framework, where some modelled conditions require a greater number of states to achieve model optimality compared with others.

Parameter calculation algorithm development: Considering that the FPAs are based around the determination of the optimal state sequence within HMMs, and that each condition model has a variable number of states, there is a requirement to establish how these states should be interpreted. The workings of the FPAs assume that the initial state within the state sequence represents the nominal network state. The fault parameters are calculated on the basis that diversions from this initial state are changes from a nominal to a fault state.
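A minimal sketch of this state-path interpretation is given below. It is illustrative only: the example Viterbi path is invented, the 100 ms sample period and the assumption that the first state is nominal follow the description above, and the outputs shown are simplified stand-ins for the full FPA outputs of Table 3.

```python
import numpy as np

SAMPLE_PERIOD_S = 0.1   # ADL sensor readings every 100 ms

def abrupt_fault_parameters(state_path: np.ndarray, nominal_state: int = 1) -> dict:
    """Derive simple fault parameters from a Viterbi state path:
    time of fault onset and total time spent outside the nominal state."""
    faulted = state_path != nominal_state
    if not faulted.any():
        return {"fault_detected": False}
    onset_idx = int(np.argmax(faulted))          # first departure from nominal
    return {
        "fault_detected": True,
        "onset_time_s": onset_idx * SAMPLE_PERIOD_S,
        "faulted_duration_s": float(faulted.sum()) * SAMPLE_PERIOD_S,
    }

# Example: nominal (state 1) for 3 s, then an abrupt change to states 3/4.
path = np.array([1] * 30 + [3, 4, 3, 4] * 10)
print(abrupt_fault_parameters(path))
# -> {'fault_detected': True, 'onset_time_s': 3.0, 'faulted_duration_s': 4.0}
```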
This state interpretation is illustrated in Fig. 6, which shows current sensor data for an AC load intermittent resistance fault and the associated optimal sequence within the related four-state HMM of that condition. The state sequence begins in State 1 and, at fault onset, the state sequence changes. Whilst in a fault condition, the state sequence alternates between States 3 and 4. However, when in a nominal condition, the state sequence returns to State 1. The algorithms utilise the times of state transitions to extract fault parameters from the system data. The severity of any fault occurring can be determined through the extraction of the parameters. Table 3 outlines the parameters calculated by each of the three FPAs.

FCD system operational results

Operational testing validates the FCD system's ability to detect the occurrence of, classify and diagnose the severity of ADL faults. Testing was undertaken with the application of ADL data to the FCD system. 129 test cases, separate from the training cases, were applied; within each case, the type of fault present as well as the fault severity was labelled, thus enabling the accuracy of the system to be measured.

Classification accuracy

The classification results of the system are outlined in Table 4. These results show that the classification system was 95.3% accurate at discriminating between the 15 network conditions. This equates to six misclassifications out of the 129 test cases presented to the system. Of the six misclassifications, four are attributed to the misclassification of incipient faults. In all six test cases that were misclassified, the network was classified as being in a nominal condition.

Fault severity diagnosis accuracy

The diagnostic results of the system are also presented in Table 4. Severity diagnostic accuracy is based on the ability of the system to accurately calculate the fault parameters, with these parameters determining the severity of the fault to the network; severity can be either network critical or network degraded. Table 4 highlights both the calculation accuracy of the fault parameters and the accuracy of the diagnostic decision. Fault parameters were deemed accurate if they were within ±5% of the actual parameters. In the majority of fault test cases, the calculation of parameters was accurate. For abrupt and intermittent faults, the calculation accuracies were high. The main instance where accuracy was not sufficiently high was when calculating parameters for incipient faults, which in some cases was as low as 64.28%. The relatively low value of 89.89% for overall parameter calculation accuracy can mainly be attributed to the inaccuracies of incipient parameter calculations. The diagnostic decision accuracy in determining fault severity was 99%. Out of the 129 test cases, there was only one instance where the severity of the fault was misdiagnosed.

Discussion

The test results validate that the FCD system can utilise system data to classify and diagnose fault severity with high accuracy. In the fault instances where data was misclassified, the network was classified as being in a nominal condition; hence, there was no requirement to diagnose fault severity and the FCD system concluded that no critical condition had manifested. Despite the fact that the system misclassified six fault instances, in five of those cases the faults that had developed were minimal, and the network could indeed maintain reliable operation.
The majority of misclassified faults and inaccurately calculated fault parameters were attributed to incipient fault conditions, with the majority of inaccuracies concerning the time of fault onset as opposed to the magnitude of the drift gradient. This suggests that it is necessary to increase the number of hidden states when modelling incipient conditions because, particularly in cases where there is a marginal drift from nominal behaviour in sensor readings, HMM with higher state variances were not detecting shifts within the data and hence fault onset. Examination of state sequence evolution when fault data was applied to incipient fault models showed that there was a delay between network fault onset and the model inferring a change in network state. Increasing the number of states would enhance sensitivity to slight changes in data, albeit with a trade-off in model complexity. Consideration also has to be given to the volume of available training data. A drawback of data driven multiple model approaches is that, compared with cases involving nominal condition data, there is significantly less data available describing fault conditions. This lack of data can result in fault condition models being over fitted, with poor performance when inferring new instances of the same condition. In the case study presented in this paper, the BIC was used to optimise each HMM based on various parameters, including the number of training cases available. Test results have shown that the abrupt and intermittent fault models accurately inferred test cases, even though some models were trained using only two separate examples. The incipient fault models, however, were not as accurate despite being provided with similar numbers of training examples. The solution of improving the performance of incipient fault models by increasing the number of hidden states is dependent on the number of training cases available; without more training examples, increasing the number of hidden states will simply result in a model over fitted to the select training examples. Such issues highlight that, while increasing the volume of data will lead to better generalisation within all fault models, certain fault conditions are more dependent on the quantity of training cases for accurate inference of unseen data. Overall, the results have shown that the FCD system can detect and classify a range of network faults as well as measure the impact such faults will have on network reliability. There is a wide range of distinct conditions within UAV EPS networks, and it is imperative that system dynamics are monitored and evaluated throughout a mission cycle. The development of the proposed FCD system using ADL data has shown that it has the ability to determine and quantify complex system dynamics from network data and that it has the potential to aid system monitoring and reliability enhancement.

Future work

The work reported in this paper represents the initial steps towards developing the FCD system based on HMM for application to UAV EPS. Further development would comprise extending the system for online application to EPS data. This expansion would involve the appropriate partitioning, or windowing, of the EPS data; windowed data covering a certain period of time would be input to the FCD system. The system would classify the network condition and, if required, diagnose the severity of any fault present over the time period. Network status would be updated when data covering the next windowed time period is input.
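A sliding-window wrapper around the Stage 1 classifier sketched earlier might look like this. It is illustrative only: the window length, step size and the reuse of the classify() function from the earlier sketch are assumptions about how such an online extension could be organised, not the authors' implementation.

```python
import numpy as np

def online_monitor(stream: np.ndarray, window_samples: int = 100, step: int = 50):
    """Classify consecutive windows of an EPS data stream.
    Reuses the classify() model-bank function from the Stage 1 sketch."""
    status = []
    for start in range(0, len(stream) - window_samples + 1, step):
        window = stream[start:start + window_samples]
        label = classify(window)                 # highest log-likelihood model
        status.append((start, label))
        # In a full system, a classified fault would trigger the matching FPA here.
    return status

# Example: 10 s of two-channel data at 100 ms sampling (assumed synthetic stream).
stream = np.random.default_rng(4).normal(0.0, 0.1, size=(100, 2))
print(online_monitor(stream, window_samples=40, step=20))
```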
The system could also be updated to handle multiple network faults. The inclusion of a threshold within the likelihood classification framework would enable the detection of multiple failures. Presently, there is a significant discrepancy between the likelihood of one fault model and the rest, because the ADL data only describes a single fault. In the event of multiple faults, theoretically, the likelihoods of multiple models will be similarly high. A likelihood threshold would determine whether there is enough evidence to suggest the presence of multiple faults.

Conclusions

The purpose of this paper was to outline a two-stage HMM-based FCD system that would detect, classify and determine the impact of EPS faults within UAV. The ability of the system to aid health management through the detection of degraded and critical faults, the discrimination between a number of fault types and locations, and the determination of fault parameters and the risk their occurrence poses to system reliability has been validated with development of the system for NASA's ADL network. Tests using ADL data proved that the system can operate with high accuracy, even with the limited volumes of training data used throughout development. Despite the relatively simple application described within this paper, the system can be used as a framework to progress and apply to increasingly elaborate networks and fault conditions. Operationally, there would be a requirement for data acquisition, through multiple sensor deployment, within such networks; this would allow the FCD system to aid the understanding of complex UAV EPS behavioural dynamics throughout the mission cycle, and to enable support in enhancing both vehicle and mission reliability.
Figure and table captions:
Fig. 1 Schematic of the ADL network on which the proposed FCD system is validated
Fig. 3 Illustration of the relationship between latent states and observational data that form a HMM; data (right-hand side) is modelled by a GMM (left-hand side), and shifts in dominant mixture distributions indicate hidden state transitions
Fig. 4 Outline of the two-stage FCD system applied to the ADL network
Fig. 5 Illustration of the data preparation undertaken prior to model training; raw data was de-noised using wavelet analysis; for data describing nominal conditions, preparation involved calculating the absolute deviation before model application, while for fault condition data a normalisation process was applied
Fig. 6 Example of the optimal state sequence when ADL intermittent fault data is applied to a four-state intermittent fault HMM
Table 1 Conditions modelled within the FCD system; there are 15 conditions in total (1 nominal and 14 fault conditions); Sensor Stuck and Failed Off faults are akin to abrupt faults; the FPA utilised for each condition is also detailed (the abrupt fault mode FPA is titled #I, the intermittent fault mode FPA #II and the incipient fault mode FPA #III)
Table 2 Optimal HMM parametrisation
Table 3 Fault parameters required for determination of fault severity
Central Sleep Apnea Is Associated with an Abnormal P-Wave Terminal Force in Lead V1 in Patients with Acute Myocardial Infarction Independent from Ventricular Function

Sleep-disordered breathing (SDB) is highly prevalent in patients with cardiovascular disease. We have recently shown that an elevation of the electrocardiographic (ECG) parameter P wave terminal force in lead V1 (PTFV1) is linked to atrial proarrhythmic activity by stimulation of reactive oxygen species (ROS)-dependent pathways. Since SDB leads to increased ROS generation, we aimed to investigate the relationship between SDB-related hypoxia and PTFV1 in patients with first-time acute myocardial infarction (AMI). We examined 56 patients with first-time AMI. PTFV1 was analyzed in 12-lead ECGs and defined as abnormal when ≥4000 µV*ms. Polysomnography (PSG) to assess SDB was performed within 3–5 days after AMI. SDB was defined by an apnea-hypopnea-index (AHI) >15/h. The multivariable regression analysis showed a significant association between SDB-related hypoxia and the magnitude of PTFV1 independent from other relevant clinical co-factors. Interestingly, this association was mainly driven by central but not obstructive apnea events. Additionally, abnormal PTFV1 was associated with SDB severity (as measured by AHI, B 21.495; CI [10.872 to 32.118]; p < 0.001), suggesting that ECG may help identify patients suitable for SDB screening. Hypoxia as a consequence of central sleep apnea may result in atrial electrical remodeling measured by abnormal PTFV1 in patients with first-time AMI independent of ventricular function. The PTFV1 may be used as a clinical marker for increased SDB risk in cardiovascular patients.

Introduction

Sleep-disordered breathing (SDB) is a common co-morbidity in patients with cardiovascular disease [1][2][3]. Nearly 50% of patients undergoing coronary artery bypass surgery (CABG) were found to have SDB [4]. Obstructive sleep apnea (OSA) is characterized by the presence of repetitive episodes of upper airway collapse. In contrast, central sleep apnea (CSA) is caused by an intermittent lack of centrally controlled respiratory drive, which often manifests as Cheyne-Stokes respiration and leads to significant oxygen desaturation. Epidemiologic studies indicate a strong association between both OSA and CSA and atrial fibrillation (AF) [5,6]. The most commonly used treatment is continuous positive airway pressure (CPAP), which can alleviate the clinical symptoms of SDB. However, adherence to this therapy is generally poor and no significant benefit has been shown regarding cardiovascular outcome in patients with OSA [7,8]. The recent randomized controlled trial led by Traaen et al. demonstrated that CPAP treatment does not affect the burden of AF after 5 months of therapy [9]. Moreover, adaptive servo-ventilation has even been reported to increase the risk of cardiovascular death in patients with reduced left ventricular ejection fraction (LV EF) and CSA [10]. Therefore, identification of novel risk markers and new treatment options is of utmost importance. The P wave terminal force in electrocardiographic (ECG) lead V1 (PTFV1) was first introduced by Morris et al. in 1964 [11]. It is defined as the algebraic product of the amplitude and duration (µV*ms) of the negative area of the P-wave in lead V1 (Figure 1). Accumulating evidence has since linked an abnormally large PTFV1 to atrial dysfunction [4] and AF [12] with increased risk for cardioembolic or cryptogenic stroke [13,14].
Moreover, an abnormally large PTFV 1 has also been shown to predict cardiovascular risk and cardiac death or hospitalization for heart failure in patients with prior myocardial infarction [15]. Interestingly, we have recently shown that an abnormally large PTFV 1 was associated with atrial functional and electrical remodeling by activation of Ca/calmodulin-dependent protein kinase II (CaMKII). CaMKII-dependent dysregulation of cardiomyocyte ion homeostasis has already been associated with atrial pathologies [16], and increased CaMKII-dependent atrial pro-arrhythmic activity was found in cardiovascular patients with SDB [4]. Since CaMKII can be activated by oxidation, intermittent hypoxia could be an important upstream factor. 
To date, however, it is unclear which pathophysiologic factor, be it negative intrathoracic pressure fluctuations, intermittent hypoxia, increased production of reactive oxygen species (ROS), or autonomic imbalance [17], might be most significant for atrial electrical remodeling. In addition, little is known about the relationship between PTFV 1 and SDB in patients with acute myocardial infarction. Therefore, the present study investigated the relationship between SDB and SDB-related hypoxia with PTFV 1 in patients presenting with acute myocardial infarction. Study Approval and Design We performed a sub-analysis of a prospective observational study in patients with acute MI that were enrolled at the University Medical Center Regensburg (Regensburg, Germany) between March 2009 and March 2012. Details of the study design have been published previously [3]. Patients (age 18-80 years) with a first-time AMI and successful percutaneous coronary intervention (PCI) treated at the University Hospital Regensburg within 24 h after symptom onset were eligible for inclusion. Exclusion criteria were previous MI or previous PCI, indication for surgical myocardial revascularization, cardiogenic shock, contraindications for cardiac magnetic resonance imaging (CMR), and severe comorbidities (e.g., lung disease, stroke, treated SDB). The study protocol was reviewed and approved by the local institutional ethics committee (Regensburg, 08-151) and is in accordance with the Declaration of Helsinki and Good Clinical Practice. Written informed consent was obtained from all patients prior to enrolment. Of 252 consecutive patients who underwent percutaneous coronary intervention, 74 patients were eligible for the prospective observational study, which involved an evaluation of cardiac function (CMR) and SDB severity at the time of MI. In total, 34 patients were excluded from this sub-analysis due to missing CMR (n = 10), missing polysomnography (n = 6), and atrial fibrillation (n = 2). The final sub-analysis included 56 patients, who were divided into two cohorts depending on the PTFV 1 (PTFV 1 < 4000 µV*ms [n = 40] and PTFV 1 ≥ 4000 µV*ms [n = 16]) (Figure 2). Electrocardiography Standard 12-lead electrocardiograms were recorded at a paper speed of 50 mm/s and a standardization of 10 mm/1 mV. All ECGs were digitally processed and scaled using ImageJ (Version 2.00; Java-based image processing program; LOCI, University of Wisconsin, USA) and individually analyzed by two skilled physicians (mean of 3 consecutive P waves). Both investigators were blinded to the clinical and MRI data. PTFV 1 was defined as the algebraic product of amplitude (µV) and duration (ms) of the terminal negative component of the P wave in lead V 1 (Figure 1), also known as the Morris index [11]. A PTFV 1 of ≥4000 µV*ms was considered abnormal. 
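As a minimal illustration of the PTFV 1 criterion used here, the following Python sketch computes the Morris index from a terminal negative P-wave amplitude and duration and applies the ≥4000 µV*ms cut-off; the numerical values in the example are hypothetical and do not correspond to any patient in the study.

```python
# Minimal sketch of the PTFV1 (Morris index) criterion described above.
# Amplitude/duration values are hypothetical; in the study they were read
# from digitized 12-lead ECGs as the mean of three consecutive P waves.

PTFV1_CUTOFF = 4000.0  # µV*ms, threshold for an abnormal PTFV1


def ptfv1(terminal_neg_amplitude_uv: float, terminal_neg_duration_ms: float) -> float:
    """Algebraic product of the amplitude (µV) and duration (ms) of the
    terminal negative component of the P wave in lead V1."""
    return terminal_neg_amplitude_uv * terminal_neg_duration_ms


def is_abnormal(value_uv_ms: float) -> bool:
    return value_uv_ms >= PTFV1_CUTOFF


if __name__ == "__main__":
    # Example: a 60 µV deep terminal deflection lasting 80 ms -> 4800 µV*ms
    example = ptfv1(60.0, 80.0)
    print(f"PTFV1 = {example:.0f} µV*ms, abnormal: {is_abnormal(example)}")
```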
Polysomnography Polysomnography (PSG) was performed in all subjects using standard polysomnographic techniques (Alice System; Respironics, Pittsburgh, PA, USA) as previously described [3]. Briefly, respiratory efforts were measured with the use of respiratory inductance plethysmography and airflow by nasal pressure. Sleep stages and arousals, as well as apneas, hypopneas, and respiratory effort-related arousals, were determined according to the American Academy of Sleep Medicine guidelines [18] by an experienced sleep technician blinded to the clinical data. Hypopneas were classified as obstructive if there was out-of-phase motion of the ribcage and abdomen, or if airflow limitation was present. In order to achieve optimal distinction between obstructive and central hypopneas without using an esophageal balloon, we used additional criteria, such as flattening, snoring, paradoxical effort movements, arousal position relative to hypopneas, and associated sleep stage (rapid eye movement (REM)/non-REM). SDB was defined by an apnea-hypopnea index (AHI) > 15/h, determined as the number of central or obstructive apnea and hypopnea episodes per hour of sleep. CSA was defined as >50% central apneas and hypopneas of all apneas and hypopneas. Pulse oximetry implemented in PSG was used to measure oxygen saturation and the ODI (number of events per hour in which oxygen saturation decreased by ≥3% from baseline). Cardiovascular Magnetic Resonance Details of CMR data acquisition have been previously described [3]. Briefly, CMR studies were performed on a clinical 1.5 Tesla scanner (Avanto, Siemens Healthcare Sector, Erlangen, Germany) using a phased-array receiver coil, during breath-hold and with ECG triggering. Examination of ventricular function was performed by acquisition of steady-state free precession (SSFP) cine images in standard short-axis planes (trueFISP; slice thickness 8 mm, inter-slice gap 2 mm, repetition time 60.06 ms, echo time 1.16 ms, flip angle 60°, matrix size 134 × 192, and readout pixel bandwidth 930 Hz*pixel^-1). The number of Fourier lines per heartbeat was adjusted to allow the acquisition of 25 cardiac phases covering systole and diastole within a cardiac cycle. The field of view was 300 mm on average and was adapted to the size of the patient. Calculation of left ventricular volumes and EF was performed in the serial short-axis slices using commercially available software (syngo Argus, version B15; Siemens Healthcare Sector). Statistical Analysis Continuous variables were compared by Student's t-test or Welch's test depending on their variance. 
The Chi-square test or Fisher's exact test was used for categorical variables depending on the number of observations. Continuous variables are expressed as mean ± 95% confidence interval (CI), and categorical variables as frequencies and percentages, respectively. After linear regression of PTFV 1 or AHI with important clinical factors, multivariate linear regression was performed for all variables with a p value < 0.2. An intraclass correlation (ICC, by two-way mixed model, type absolute agreement) was used to assess the reproducibility of the PTFV 1 analysis. All reported p values are two-sided and the threshold for significance was set at p < 0.05. Statistical analysis was performed in SPSS (SPSS Statistics for Mac OS, Version 26.0; IBM Corp., Armonk, NY, USA). Study Population A total of 56 patients (80% men; age 55 ± 9.9 years) were separated into groups with normal and abnormal PTFV 1 (baseline characteristics in Table 1). There was no significant difference in demographic parameters or comorbidities, such as age, gender, arterial hypertension, diabetes mellitus, hypercholesterolemia, or smoking. Patients with abnormal PTFV 1 presented significantly less often with ST-segment elevation myocardial infarction (STEMI) (p = 0.035) and had higher levels of NT-proBNP at discharge (p = 0.002) (Table 1). The LV EF was mildly reduced in both groups but worse in patients with abnormal PTFV 1 (43.15 ± 11.51% vs. 48.93 ± 7.45%, p = 0.035). Interestingly, volumetric parameters for LA size and function, such as LA fractional area change (FAC) or systolic LA area, were not significantly increased in patients with abnormal PTFV 1 (Table 1), indicating that the magnitude of PTFV 1 more likely reflects electrical but not structural remodeling, as published previously [19]. Central Sleep Apnea Is Independently Associated with Abnormal PTFV 1 Respiratory and sleep characteristics are shown in Table 2. The Epworth Sleepiness Scale score reflecting daytime sleepiness was within the normal range in both groups (Table 2). Interestingly, in patients with abnormal PTFV 1 , SDB was highly prevalent (86.7%), with significantly more patients exhibiting central but not obstructive sleep apnea (Table 2). In contrast, only a minority of patients with normal PTFV 1 had SDB (42.5%) and, if so, the majority was obstructive (Table 2). Moreover, central (cAHI) but not obstructive (oAHI) apnea events were significantly associated with the magnitude of PTFV 1 (Table 3). Importantly, the extent of oxygen desaturation (ODI) was an even stronger predictor of the extent of PTFV 1 than the frequency of central apneas (R² = 0.268, Table 3). In contrast to this association, the mean arterial oxygen saturation was similar in both groups. There was a trend towards lower minimum arterial oxygen saturation in patients with abnormal PTFV 1 (85.74 ± 5.87 vs. 82.20 ± 6.09, p = 0.055) (Table 2). To test for possible confounding, multivariate linear regression was performed. The association of both ODI and cAHI with the magnitude of PTFV 1 remained significant after inclusion of important co-factors, such as age, LVEF, eGFR, and NT-proBNP at discharge. Importantly, the associations of both ODI and cAHI were also independent of obstructive apnea events. For cAHI, R² was 0.256 (adj. R² = 0.186; p = 0.014, Table 4), and for ODI, R² was 0.408 (adj. R² = 0.317; p = 0.002, Table 4). 
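The multivariable models reported above follow a standard linear regression pattern that can be outlined in a few lines of Python; the sketch below uses statsmodels with a synthetic data frame whose column names (ODI, cAHI, oAHI, age, LVEF, eGFR, NTproBNP) merely mirror the covariates described in the text and are not the study data.

```python
# Sketch of the multivariable linear regression pattern used above
# (statsmodels OLS). The data frame is synthetic and the column names
# are placeholders standing in for the study variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 56
df = pd.DataFrame({
    "PTFV1": rng.normal(3000, 1500, n),   # µV*ms
    "ODI": rng.gamma(2.0, 6.0, n),        # desaturations per hour
    "cAHI": rng.gamma(1.5, 4.0, n),       # central apnea-hypopnea index
    "oAHI": rng.gamma(1.5, 4.0, n),       # obstructive apnea-hypopnea index
    "age": rng.normal(55, 10, n),
    "LVEF": rng.normal(47, 9, n),
    "eGFR": rng.normal(85, 15, n),
    "NTproBNP": rng.lognormal(6.5, 1.0, n),
})

# Association of SDB-related hypoxia (ODI) with PTFV1, adjusted for co-factors
model = smf.ols("PTFV1 ~ ODI + oAHI + age + LVEF + eGFR + NTproBNP", data=df).fit()
print(model.summary().tables[1])  # coefficients (B), confidence intervals, p values
```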
PTFV 1 as a Diagnostic Marker for Predicting Sleep-Disordered Breathing Univariate linear regression for AHI indicated that beside PTFV 1 , BMI, NT-proBNP at discharge, systolic LA area, LVEF, and smoking status may correlate with apnea and hypopnea events. Strikingly, after incorporation of these factors into a multivariate linear regression model, only PTFV 1 significantly correlated with the magnitude of AHI (model 1, R 2 = 0.326 (adj. R 2 = 0.213); p = 0.021, Table 5). Similarly, after dichotomizing PTFV 1 into normal and abnormal, the presence of an abnormal PTFV 1 significantly predicted a more severe AHI in multivariate linear regression (model 2, B 21.495; CI [9.097, 20.193]; p < 0.001, Table 5). Interestingly, no meaningful interactions were found with myocardial ischemia markers, such as troponin I or creatine kinase and abnormal PTFV 1 (Table 5), despite the higher prevalence of STEMI in the group with normal PTFV 1 (92.5% vs. 68.8%). Discussion In the present study, we investigated the relationship between SDB and SDB-related hypoxia with PTFV 1 in patients presenting with acute myocardial infarction. We show here that nocturnal oxygen desaturation in SDB was associated with atrial electrical remodeling measured by abnormal PTFV 1 in patients with first-time AMI independent of ventricular function. Moreover, we propose PTFV 1 as a broadly available clinical marker for increased SDB risk in cardiovascular patients. Possible Mechanisms for an Abnormal PTFV 1 in SDB We report here a prevalence of SDB in patients with AMI of 54.5% with 25.9% central sleep apnea, which closely resembles previous data reporting an SDB prevalence ranging from 33.1% to 50% with about 20% central sleep apnea [4,20,21]. CSA in patients with heart failure is commonly explained by pulmonary congestion due to ventricular overload with consequent autonomic triggered tachypnea and subsequently reduced PaCO 2 , which results in the occurrence of an apnea episode. This leads to accumulation of PaCO 2 and restoration of respiratory effort. However, CSA could also have pathophysiological effects on the heart that are independent of ventricular dysfunction. A small study by Lanfranchi showed that severe CSA was associated with increased arrhythmic risk without association to the severity of hemodynamic impairment due to LV dysfunction. This association may be caused by CSA-mediated nocturnal desaturations, which have been proposed as a consequence of impaired autonomic control and disturbed chemoreflex-baroreflex interactions frequently found in CSA [22]. Interestingly, for patients with AMI, a high probability of CSA-dependent nocturnal oxygen desaturations has already been shown [21]. We observe here a high ODI among patients with AMI, which strongly correlates with abnormal PTFV 1 independent from many clinical covariates including left ventricular ejection fraction, which might provide an interesting insight into the pathogenesis of atrial remodeling and the development of atrial cardiomyopathy. There is growing evidence that atrial structural and electrical remodeling even in the absence of atrial fibrillation can also increase the risk of clot formation and cardioembolic stroke. The latter alterations, also known as atrial cardiomyopathy, expand the traditional view of clot formation [13,[23][24][25]. In fact, the ongoing ARCADIA trial is investigating the optimal anticoagulant therapy (anticoagulant therapy vs. 
standard ASA therapy) in patients with cryptogenic stroke and atrial cardiomyopathy and specifically uses an abnormal PTFV 1 as an additional clinical marker for atrial cardiomyopathy [26]. We have recently shown that an abnormal PTFV 1 is linked to increased CaMKII-dependent atrial pro-arrhythmic activity and atrial contractile dysfunction [4,19]. Atrial CaMKII is a key regulator of cardiac excitation-contraction coupling and plays an important role in triggering arrhythmias and atrial electrical remodeling [4]. Beside arrhythmias, it is tempting to speculate that CaMKII-dependent atrial contractile dysfunction may also be involved in atrial clot formation even in the absence of atrial fibrillation. Thus, CaMKII may be a promising novel treatment target for patients with atrial cardiomyopathy. In this context, the mechanisms of CaMKII activation should be elucidated in more detail. Beside the canonical Ca-dependent activation, CaMKII has been shown to be activated by increased amounts of reactive oxygen species (ROS) [27,28]. SDB-related intermittent hypoxia with consequently increased generation of ROS [29] may result in activation of atrial CaMKII and CaMKII-dependent electrical remodeling manifesting as abnormal PTFV 1, but this remains to be shown. Additionally, only little is known about SDB-related hypoxia and electrical atrial remodeling before atrial fibrillation emerges. Interestingly, in patients with abnormal PTFV 1 , atrial fibrosis was less likely to be observed [19], indicating that the generation of abnormal PTFV 1 may require functional cardiomyocytes. Beside SDB and SDB-related hypoxia, acute myocardial infarction may also lead to acute ventricular contractile dysfunction, which could also contribute to atrial functional and/or structural alterations. A longitudinal study recently demonstrated that increasing NT-proBNP levels were associated with LA remodeling and LA contractile dysfunction [30]. In the current study, we observed significantly higher NT-proBNP levels at discharge and lower LV EF in the group with abnormal PTFV 1 , which may contribute to impaired atrial function and abnormal PTFV 1 . In accordance, we recently demonstrated a significant negative correlation between functional LA parameters, such as LA conduit and reservoir function, as measured by feature-tracking (FT) strain analysis of cardiac magnetic resonance (CMR) images, and the extent of PTFV 1 [19]. In contrast to atrial strain, volumetric MRI parameters for LA function such as systolic LA area or LA FAC did not show a significant association with PTFV 1 in the present study, which agrees with previous studies [31,32]. On the other hand, multivariate linear regression analysis revealed that neither higher NT-proBNP levels nor lower LVEF were significantly associated with the magnitude of PTFV 1 if SDB and SDB-related hypoxia were also incorporated in the multivariate model. This suggests that ventricular contractile dysfunction is unlikely to contribute decisively to the extent of PTFV 1 , at least when there is concomitant SDB. Consistent with this, in the current study, there was also no association of PTFV 1 with acute ischemia markers (creatine kinase, troponin I), which may correlate with infarct size and affect LV function. 
In addition to the possible subordinate role of LV dysfunction for PTFV 1 , an explanatory approach could also be that a proportion of patients were protected from more extensive infarct-associated ventricular myocardial injury by ischemic preconditioning due to the repetitive SDB-associated hypoxia, which has been shown previously [33]. However, the latter phenomenon should be interpreted with caution and cannot be generalized to all patients after AMI, because the healing process, as measured by myocardial salvage and reduction in infarct size, was worse in patients with SDB within three months after AMI [34]. In addition, patients with AMI and SDB showed worse hospital outcomes [21,35,36]. Regardless of a possible protective or detrimental role of SDB for ventricular injury after AMI, the role of ventricular injury for atrial remodeling and the extent of PTFV 1 may be less important, as discussed above. PTFV 1 as a Diagnostic Marker for SDB and SDB-Related Arrhythmias It has been found that patients with SDB especially CSA have higher severity of ACS and worse prognosis with longer hospital stay and more complications during hospitalization [21]. However, a clinical marker identifying patients at highest risk is lacking. In our cohort, oxygen desaturation index as a measure of nocturnal desaturation was significantly associated with abnormal PTFV 1 . Therefore, measurement of PTFV 1 may be a simple and cost-effective tool for stratifying patients admitted to the hospital with a first-time AMI. Measurement of PTFV 1 was highly reliable in different observers (Table A1). Therefore, we suggest that all patients with abnormal PTFV 1 should receive PSG and be stratified according to their SDB risk for follow-up care. Unfortunately, CPAP therapy may be without benefit for patients with sleep apnea [7][8][9][10], so new treatment options are urgently needed. We have recently shown that increased CaMKII activity is significantly associated with abnormal PTFV 1 [19]. Currently, several CaMKII inhibitors are under preclinical investigation [37]. One could speculate that abnormal PTFV 1 might help in selecting patients who could benefit from specific pharmacological treatment, such as CaMKII inhibition. Limitations This was a cross-sectional study at a single center with a relatively small sample size that was not designed to examine long-term follow-up of clinical endpoints. In addition, we do not know whether the abnormal PTFV 1 we detected at the time of myocardial infarction is a transient phenomenon or persists over time. Larger studies are needed to validate our findings and to investigate the impact on cardiac arrhythmias and serious adverse cardiac events including heart failure exacerbations. Moreover, the definition of the negative part of the P-wave based on the isoelectric line in a slightly rising PR segment is sometimes difficult. However, the interobserver variability ICC for PTFV 1 measurements in this study showed very good accuracy (ICC 0.888; lower CI 0.647; upper CI 0.951, Table A1). Conclusions This study shows that abnormal PTFV 1 is tightly linked to SDB and especially to central instead of obstructive sleep apnea. Therefore, we hypothesize that atrial dysfunction expressed as abnormal PTFV 1 is caused by stimulation of ROS-dependent pathways due to intermittent hypoxia represented here predominantly in CSA independent of ventricular function. We show that the severity of SDB can be easily recognized by PTFV 1 . 
This ubiquitously available ECG parameter may thus be a simple and cost-effective tool to stratify patients admitted to hospital with first-time AMI for further PSG. Therefore, all patients with abnormal PTFV 1 should obtain PSG and be stratified for follow-up care. Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of the University Hospital Regensburg (Regensburg, 08-151). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented in this study will be shared on reasonable request to the corresponding author. The data are not publicly available due to privacy restrictions. Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2012-12-13T06:22:50.000Z
2012-10-04T00:00:00.000
6730709
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBYSA", "oa_status": "GOLD", "oa_url": "http://www.emis.de/journals/SIGMA/2012/098/sigma12-098.pdf", "pdf_hash": "0278629c8aad6692a327d0feb00eef9c67bdaada", "pdf_src": "Arxiv", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46224", "s2fieldsofstudy": [ "Physics" ], "sha1": "0278629c8aad6692a327d0feb00eef9c67bdaada", "year": 2012 }
pes2o/s2orc
Loop Quantum Gravity Phenomenology: Linking Loops to Observational Physics Research during the last decade demonstrates that effects originating on the Planck scale are currently being tested in multiple observational contexts. In this review we discuss quantum gravity phenomenological models and their possible links to loop quantum gravity. Particle frameworks, including kinematic models, broken and deformed Poincaré symmetry, non-commutative geometry, relative locality and the generalized uncertainty principle, and field theory frameworks, including Lorentz-violating operators in effective field theory and non-commutative field theory, are discussed. The arguments relating loop quantum gravity to models with modified dispersion relations are reviewed, as well as arguments supporting the preservation of local Lorentz invariance. The phenomenology related to loop quantum cosmology is briefly reviewed, with a focus on possible effects that might be tested in the near future. As the discussion makes clear, there remains much interesting work to do in establishing the connection between the fundamental theory of loop quantum gravity and these specific phenomenological models, in determining observational consequences of the characteristic aspects of loop quantum gravity, and in further refining current observations. Open problems related to these developments are highlighted. Introduction Twenty-five years ago Ashtekar, building on earlier work by Sen, laid the foundations of Loop Quantum Gravity (LQG) by reformulating general relativity (GR) in terms of canonical connection and triad variables -the "new variables". The completion of the kinematics -the quantum theory of spatial geometry -led to the prediction of a granular structure of space, described by specific discrete spectra of geometric operators, area [228], volume [162,169,228], length [50,171,253] and angle [187] operators. The discreteness of area led to an explanation of black hole entropy [56,158,160,224] (see [83] for a recent review). Although granularity in spatial geometry is predicted not only by LQG, but also by some string theory and non-commutative geometry models, the specific predictions for the spectra of geometry operators bear the unique stamp of LQG. Quantum effects of gravity are expected to be directly perceptible at distances of the order of the Planck length, about 10^−35 m, in particle processes at the Planck energy, about 10^28 eV (c = 1), and at a Planck scale density. With the typical energy M_QG for quantum gravity (QG) assumed to be of the order of the Planck energy, there are fifteen orders of magnitude between this energy scale and the highest attainable center-of-mass energies in accelerators and, in the Earth's frame, eight orders of magnitude above the highest energy cosmic rays. So fundamental quantum theories of gravity and the realm of particle physics appear like continents separated by a wide ocean. (Although, if the world has large extra dimensions, the typical energy scale of quantum gravity may be significantly lower.) The situation is worsened by the fact that none of the tentative QG theories has attained such a degree of maturity that would allow one to derive reliable predictions of such a kind that could be extrapolated to our "low-energy" reality. 
It would appear that there is little hope of directly accessing the deep quantum gravity regime via experiment. One can hope, however, to probe the quantum gravity semi-classical regime, using particle, astrophysical and cosmological phenomena to enhance the observability of the effects. In spite of this discouraging perspective, over one decade ago a striking paper by Amelino-Camelia et al. [21] on quantum gravity phenomenology appeared. The paper was based on a plausibility argument: The strong gravity regime is inaccessible but quantum gravity, as modeled in certain models of string theory and, perhaps, in the quantum geometry of LQG, has a notion of discreteness in its very core. This discreteness is understood to be a genuine property of space, independent of the strength of the actual gravitational field at any given location. Thus it may be possible to observe QG effects even without a strong gravitational field, in the flat space limit. In [21] the authors proposed that granularity of space influences the propagation of particles when their energy is comparable with the QG energy scale. Further, the assumed invariance of this energy scale, or the length scale, respectively, is in apparent contradiction with special relativity (SR). So it is expected that the energy-momentum dispersion relation could be modified to include a dependence on the ratio of the particle's energy and the QG energy; at lowest order, E² ≃ p² + m² ± ξ p³/M_QG, (1.1) with the parameter ξ > 0 of order unity. Relations like (1.1) violate, or modify, local Lorentz invariance (LLI). According to the sign in (1.1), the group velocity of high-energy photons could be sub- or super-luminal, when defined in the usual way by ∂E/∂p. As with all QG effects, the suppression of Lorentz invariance violation by the ratio of the particle energy to the QG energy may appear discouraging at first sight. To have a chance to detect an effect of the above modification, we need an amplification mechanism, or "lever arm". The authors of [21] showed that if the tiny effect on the speed of light accumulates as high energy photons travel cosmic distances, the spectra of γ-ray bursts (GRB) would reveal an energy-dependent speed of light through a measurable difference of the time of arrival of high and low energy photons. Due to the different group velocities v = ∂ω/∂k ≃ 1 + ξk/M_QG, photons emitted at different momenta, k1 and k2, would arrive at a distant observer (at distance D) at times separated by the interval ∆t ≃ ξ(k2 − k1)D/M_QG. Distant sources of γ-ray photons are the best for this test. Despite the uncertainties concerning the physics of the production of such γ-rays, one can place limits on the parameter ξ. The current strongest limit, ξ ≲ 0.8, was reported by the Fermi Collaboration using data from the γ-ray burst GRB 090510 [2]. This is discussed further in Section 4.1.1. In the years following this work, the nascent field of QG phenomenology developed [18] from ad hoc effective theories, like isolated isles lying between the developing QG theories and reality, linked to the former ones loosely by plausibility arguments. Today the main efforts of QG phenomenology go in two directions: to establish a bridge between the intermediate effective theories and the fundamental QG theory, and to refine observational methods, through new effective theories and experiments that could shed new light on QG effects. These are exceptionally healthy developments for the field. The development of physical theory relies on the link between theory and experiment. 
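To make the order of magnitude of the time-of-flight effect concrete, the following sketch evaluates ∆t ≃ ξ(k2 − k1)D/M_QG for illustrative parameters (a ~30 GeV photon travelling ~1 Gpc with ξ = 1 and M_QG at the Planck energy); these numbers are a rough example, not a reproduction of the Fermi analysis.

```python
# Order-of-magnitude estimate of the QG time-of-flight delay
#   dt ~ xi * (E2 - E1) / E_QG * (D / c)
# The numbers below are illustrative (a ~30 GeV photon over ~1 Gpc),
# not the parameters of any specific burst analysis.
C = 2.998e8              # speed of light, m/s
GPC = 3.086e25           # one gigaparsec in metres
E_PLANCK_GEV = 1.22e19   # Planck energy in GeV


def qg_time_delay(delta_e_gev, distance_m, xi=1.0, e_qg_gev=E_PLANCK_GEV):
    """Lowest-order energy-dependent arrival-time difference (seconds)."""
    return xi * (delta_e_gev / e_qg_gev) * (distance_m / C)


if __name__ == "__main__":
    dt = qg_time_delay(delta_e_gev=30.0, distance_m=1.0 * GPC)
    print(f"dt ~ {dt:.3f} s")  # ~0.25 s for xi = 1
```

For these inputs the delay comes out at roughly a quarter of a second, which sets the scale of the timing resolution needed in GRB observations.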
Now these links between current observation and quantum gravity theory are possible and under active development. The purpose of this review on quantum gravity phenomenology is three-fold. First, we wish to provide a summary of the state of the art in LQG phenomenology and closely related fields, with particular attention to theoretical structures related to LQG and to possible observations that hold near-term promise. Second, we wish to provide a road map for those who wish to know which physical effects have been studied and where to find more information on them. Third, we wish to highlight open problems. Before describing in more detail what is contained in this review, we remind the reader that, of course, the LQG dynamics remains open. Whether a fully discrete space-time follows from the discreteness of spatial geometry is a question for the solution to LQG dynamics generated by the master constraint, the Hamiltonian constraint operator, and/or spin foam models [226,257]. Nevertheless, if the granular spatial geometry of LQG is physically correct then it must manifest itself in observable ways. This review concerns the various avenues in which such phenomenology is explored. The content of our review is organized in the following four sections: Section 2: A brief introduction to the geometric operators of LQG, area, volume, length and angle, where discreteness shows up. Section 3: An overview of particle effective theories of the type introduced above. In this section we review particle kinematics, discuss arguments in LQG that lead to modified dispersion relations (MDR) of the kind (1.1) and discuss models of symmetry deformation. The variety of models underlines their loose relation to fundamental theories such as LQG. Section 4: A brief review of field theories leading to phenomenology, including effective field theory with Lorentz symmetry violation and non-commutative field theory. The effective field theories incorporate MDR and contain explicit Lorentz symmetry violation. A model with LLI is discussed and, in the final part, actions for field theories over non-commutative geometries are discussed. Section 5: A brief discussion of loop quantum cosmology and possible observational windows. Cosmology is a promising observational window and a chance to bridge the gap between QG and reality directly, without intermediate effective theories: The cosmic microwave and gravitational wave background fluctuations allow a glimpse into the far past, closer to the conditions when the discreteness of space would play a more dominant role. The Planck length is obtained when the Compton length is equal to the Schwarzschild length, ℓ_P := √(ħG/c³) ≈ 1.6 × 10^−35 m. Similarly, the Planck mass is obtained when the Compton mass is equal to the Schwarzschild mass, M_P := √(ħc/G) ≈ 1.2 × 10^28 eV/c². These conditions mean that at these scales, quantum effects are comparable to gravitational effects. The usual physical argument, which [88] made more rigorous, is that to make a very precise measurement of a distance, we use a photon with very high energy. The higher the precision, the higher the energy will be in a small volume of space, so that gravitational effects will kick in due to a large energy density. When the volume is small enough, i.e. the precision very high, the energy density is so large that a black hole is created and the photon cannot come back to the observer. Hence there is a maximum precision and a notion of minimum length ℓ_P. 
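As a quick numerical cross-check of the scales just quoted, the short sketch below evaluates ℓ_P and M_P from ħ, G, and c; it is pure dimensional bookkeeping and assumes nothing beyond the standard SI values of the constants.

```python
# Planck length and Planck mass from hbar, G, c (SI), matching the
# values quoted in the text (~1.6e-35 m and ~1.2e28 eV/c^2).
import math

HBAR = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2
C = 2.99792458e8         # m/s
EV = 1.602176634e-19     # J per eV

l_planck = math.sqrt(HBAR * G / C**3)   # metres
m_planck = math.sqrt(HBAR * C / G)      # kg
m_planck_ev = m_planck * C**2 / EV      # rest energy in eV

print(f"l_P ~ {l_planck:.2e} m")        # ~1.6e-35 m
print(f"M_P ~ {m_planck_ev:.2e} eV")    # ~1.2e28 eV
```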
This argument goes beyond the simple application of dimensional analysis of the fundamental scales of the quantum gravitational problem, c, G, ħ, and Λ, the cosmological constant. In the remainder of this review, except where it could lead to confusion, we set c = 1 and denote the Planck scale mass by κ, so that the Planck scale κ = M_P = 1/ℓ_P can be interpreted as the Planck momentum κ = M_P c, the Planck energy κ = M_P c², or the Planck rest mass κ = M_P. Discreteness of LQG geometric operators Loop quantum gravity hews close to the classical theory of general relativity, taking the notion of background independence and the apparent four dimensionality of space-time seriously. The quantization has been approached in stages, with work on kinematics, the quantization of spatial geometry, preceding the dynamics, the full description of space-time. The kinematics, all but unique, reveals a picture of quantized space. This quantization, this granularity, inspired the phenomenological models in this review and is the subject of this section. We focus on the geometric observables here. For a brief review of the elements of LQG and the new variables see Appendix A. 
There is a minimal eigenvalue, the so-called area gap, which is the area when a single edge with j = 1/2 intersects S, This is the minimal quantum of area, which can be carried by a link. The eigenvalues (2.1) form only the main sequence of the spectrum of the area operator. When nodes of the SNW lie on S and some links are tangent to it the relation is modified, see [38,226]. The important fact, independent of these details, is that discreteness of area with the SNW links, carrying its quanta, comes out in a natural way. The interpretation of discrete geometric eigenvalues as observable quantities goes back to early work in [222]. This discreteness made the calculation of black hole entropy possible by counting the number of microstates of the gravitational field that lead to a given area of the horizon within some small interval. Intriguingly, area operators acting on surfaces that intersect in a line fail to commute, when SNW nodes line in that intersection [32]. One may see this as resulting from the commutation relations among angular momentum operators in the two area operators. Recently additional insight into this non-commutativity comes from the formulation of discrete classical phase space of loop gravity, in which the flux operators also depend on the connection [94]. Another, inequivalent, form of the area operator was proposed in [159]. This operator,à S , is based on a non gauge-invariant expression of the surface metric. Fix a unit vector is the Lie algebra, r i then the classical area may be expressed as the maximum value of where the maximum is obtained by gauge rotating the triad. On the quantum mechanical side this value is the maximum magnetic quantum number, simply j so the spectrum is simplỹ This operator, frequently used in the spin foam context, is particularly useful in systems with boundary such as where gauge invariance might be (partially) fixed. Volume Like the area of a surface, the volume of a region R in three-dimensional space, the integral of the square root of the determinant of the metric, can be expressed in terms of densitized triads, Regularizations of this expression consist of partitioning the region under consideration into cubic cells in some auxiliary coordinates and constructing an operator for each cell. The cells are shrunk to zero coordinate volume. This continuum limit is well-defined, thanks to discreteness reached when the cells are sufficiently small, but finite. Readers interested in precisely how this is done should consult [39,228]. There are, primarily, two definitions of the operator, one due to Rovelli and Smolin (RS) [228] and the other due to Ashtekar and Lewandowski (AL) [39]. Here we present the AL volume operator of [39], presented also in [257]. For a given SNW function based on a graph Γ, the operatorV R,Γ of the volume of a region R acts nontrivially only on (at least four-valent [169]) vertices in R. According to the three triad components in (2.2), which become derivatives upon quantization, in the volume operator three derivative operatorsX i v,e I act at every node or vertex v on each triple of adjacent edges e I , V R,Γ = when e I is outgoing at v. This is the action of the left-invariant vector field on SU (2) in the direction of τ i ; for ingoing edges it would be the right-invariant vector field. Given the "triple-product" action of the operator (2.3), vertices carry discrete quanta of volume. 
The volume operator of a small region containing a node does not change the graph, nor the colors of the adjacent edges, it acts in the form of a linear transformation in the space of intertwiners at the vertex for given colors of the adjacent edges. It is then this space of intertwiners that forms the "atoms of quantum geometry". The complete spectrum is not known, but it has been investigated [66-68, 85, 198, 254]. In the thorough analysis of [66,67], Brunnemann and Rideout showed that the volume gap, i.e. the lower boundary for the smallest non-zero eigenvalue, depends on the geometry of the graph and doesn't in general exist. In the simplest nontrivial case, for a four-valent vertex, the existence of a volume gap is demonstrated analytically. The RS volume operator [228] (see also [226]) differs from the AL operator outlined above. In this definition the densitized triad operators are integrated over surfaces bounding each cell with the results that the square root is inside the sum over I, J, K and the orientation factor s(e I , e J , e K ) is absent. Due to the orientation factor the volume of a node with coplanar tangent vectors of the adjacent links is zero, when calculated with the AL operator, whereas the RS operator does not distinguish between coplanar and non-coplanar links. The two volume operators are inequivalent, yielding different spectra. While the details of the spectra of the Rovelli-Smolin and the Ashtekar-Lewnadowski definitions of the volume operator differ, they do share the property that the volume operator vanishes on all gauge invariant trivalent vertices [168,169]. According to an analysis in [104,105] the AL operator is compatible with the flux operators, on which it is based, and the RS operator is not. On the other hand, thanks to its topological structure the RS volume does not depend on tangent space structure; the operator is 'topological' in that is invariant under spatial homeomorphisms. It is also covariant also under "extended diffeomorphisms", which are everywhere continuous mappings that are invertible everywhere except at a finite number of isolated points; the AL operator is invariant under diffeomorphisms. For more on the comparison see [226,257]. Physically, the distinction between the two operators is the role of the tangent space structure at SNW nodes. There is some tension in the community over the role of this structure. Recent developments in twisted discrete geometries [98] and the polyhedral point of view [51] may help resolve these issues. It would be valuable to investigate ways in which the tangent space structure, and associated moduli [123], could be observationally manifest. In [52] Bianchi and Haggard show that the volume spectrum of the 4-valent node may be obtained by direct Bohr-Sommerfeld quantization of geometry. The description of the geometry goes all the way back to Minkowski, who showed that the shapes of convex polyhedra are determined from the areas and unit normals of the faces. Kapovich and Millson showed that this space of shapes is a phase space, and it is this phase space -the same as the phase space of intertwiners -that Bianchi and Haggard used for the Bohr-Sommerfeld quantization. The agreement between the spectra of the Bohr-Sommerfeld and LQG volume is quite good [52]. Length In constructing the length operator one faces with the challenges of constructing a one-dimensional operator in terms of fluxes and of constructing the inverse volume operator. There are three versions of the length operator. 
One [253] requires the same trick, due to Thiemann [258], that made the construction of the inverse volume operator in cosmology and the Hamiltonian constraint operator in the real connection representation possible. The second operator [50], due to Bianchi, uses instead a regularization guided by the dual picture in LQG, where one considers (quantum) convex polyhedral geometries dual to SNW nodes, the atoms of quantum geometry. For more discussion on the comparison between these two operators, see [50]. The third operator can be seen to be an average of a formula for length based on area, volume and flux operators [171]. To give a flavor of the construction we will review the first definition based on [253]. Classically the length of a (piecewise smooth) curve c : [0, 1] → Σ in the spatial 3-manifold Σ with background metric q ab is given by In LQG the metric is not a background structure, but can be given in terms of the inverse fundamental triad variables, The problem is to find an operator equivalent to this complicated non-polynomial expression: any operator version of the denominator would have a huge kernel in the Hilbert space, so that the above expression cannot become a densely defined operator. Fortunately q ab can be expressed in terms of Poisson brackets of the connection A a := A a i τ i (τ i ∈ su(2)) with the volume V can be formulated as a well-defined operator. The connection A a , on the other hand, can be replaced by its holonomy, when the curve is partitioned into small pieces, so that the exponent A aċ a of the holonomy is small and higher powers can be neglected in first approximation. The zeroth-order term (which is the unity operator) does not contribute to the Poisson brackets. The length operator is constructed as a Riemann sum over n pieces of the curve and by inserting the volume operatorV and replacing the Poisson brackets by 1/i times the commutators,L In the limit n → ∞ the approximation of A by its holonomy becomes exact. In [253] it is shown that this is indeed a well-defined operator on cylindrical functions and, due to the occurrence of the volume operator, its action on SNW functions gives rise to nonzero contributions only when the curve contains SNW vertices. As soon as the partition is fine enough for each piece to contain not more than one vertex, the result ofL n Ψ remains unchanged when the partition is further refined. So the continuum limit is reached for a finite partition. However this action on SNWs raises a problem. For any given generic SNW a curve c will rarely meet a vertex, so that for macroscopic regions lengths will always be predicted too short in relation to volume and surface areas: c is "too thin". To obtain reasonable results in the classical limit, one combines curves together to tubes, that is two-dimensional congruences of curves with c in the center and with cross-sections of the order of 2 P . The spectra of the so-constructed tube-operators are purely discrete. None of the phenomenological models discussed in this review depend on the specific form of the length operator. These have already been compared from the geometric point of view [50]. As with the volume operators it would be interesting to develop phenomenological models that observationally distinguish the different operators. 
Angle The angle operator is defined using a partition of the closed dual surface around a single SNW node into three surfaces, S_1, S_2, S_3; in terms of the associated flux variables E_i(S_I) it is defined as [187] θ̂^(12)_n := arccos( E_i(S_1) E_i(S_2) / (|E(S_1)| |E(S_2)|) ), (2.4) with |E(S_I)| := √(E_i(S_I) E_i(S_I)). As is immediately clear from the form of the operator (and dimensional analysis), there is no scale associated to the angle operator. It is determined purely by the state of the intertwiner, the atom of quantum geometry. Deriving the spectrum of the angle operator of equation (2.4) is a simple exercise in angular momentum algebra [187]. Dropping all labels on the intertwiner except those that label the total spins originating from the three partitions, we have θ̂^(12)_n |j_1 j_2 j_3⟩ = arccos( [j_3(j_3 + 1) − j_1(j_1 + 1) − j_2(j_2 + 1)] / [2 √(j_1(j_1 + 1) j_2(j_2 + 1))] ) |j_1 j_2 j_3⟩, where the j_i are the spins on the internal graph labeling the intertwiner. As such they can be seen to label "internal faces" of a polyhedral decomposition of the node. For a single partition of the dual surface the angle operators commute. But, reflecting the quantum nature of the atom of geometry and the same non-commutativity as for area operators for intersecting surfaces, the angle operators for different partitions do not commute. As is clear from a glance at the spectrum, there are two aspects of the continuum angular spatial geometry that are hard to model with low spin. First, small angles are sparse. Second, the distribution of values is asymmetric and weighted toward large angles. As discussed in [191,239] the asymmetry persists even when the spins are very large. Physicality of discreteness A characteristic feature of the above geometric operators is their discrete spectra. It is natural to ask whether this discreteness is physical. Can it be used as a basis for the phenomenology of quantum geometry? Using examples, Dittrich and Thiemann [87] argue that the discreteness of the geometric operators, being gauge non-invariant, may not survive implementation in the full dynamics of LQG. Ceding the point in general, Rovelli [225] argues in favor of the reasonableness of physical geometric discreteness, showing in one case that the preservation of discreteness in the generally covariant context is immediate. In phenomenology this discreteness has been a source of inspiration for models. Nonetheless, as the discussion of these operators makes clear, there are subtleties that wait to be resolved, either through further completion of the theory or, perhaps, through observational constraints on phenomenological models. Local Lorentz invariance and LQG It may seem that discreteness immediately gives rise to compatibility problems with LLI. For instance, the length derived from the minimum area eigenvalue may appear to be a new fundamental length. However, as SR does not contain an invariant length, must such a theory with a distinguished characteristic length be in contradiction with SR? That this is not necessarily the case has been known since the 1947 work of Snyder [250] (see [167] for a recent review). In [230] Rovelli and Speziale explain that a discrete spectrum of the area operator with a minimal non-vanishing eigenvalue can be compatible with the usual form of Lorentz symmetry. To show this, it is not sufficient to set discrete eigenvalues in relation to Lorentz transformations; rather, one must consider what an observer is able to measure. The main argument of [230] is that in quantum theory the spectra of geometric variables are observer invariant, but expectation values are not. The authors explain this idea by means of the area of a surface. 
Assume an observer O measures the area of a small two-dimensional surface to be A, while a second observer O′, who moves at a velocity v tangential to the surface, measures A′. In classical SR, when O is at rest with respect to the surface in flat space, the two areas are related as A′ = √(1 − v²) A. If A is sufficiently small, this holds also in GR. However this relation, which allows for arbitrarily small values of A′, cannot simply be taken over as a relation between the area operators Â and Â′ in LQG. The above form suggests that Â′ is a simple function of Â and v̂, and so Â′ and Â should commute. This is not the case. The velocity v, as the physical relative velocity between O and O′, depends on the metric, which of course is an operator, too. Rovelli and Speziale show that v̂ does not commute with Â, and so [Â, Â′] ≠ 0. This means that the measurements of the area and the velocity of a surface are incompatible. The apparent conflict between discreteness and Lorentz contraction is resolved in the following way: the velocity of an observer who measures the area of a surface sharply is completely undetermined with respect to this surface and has vanishing expectation value. The indeterminacy of the velocity means that an observer who measures the area A precisely cannot be at rest with respect to the surface. On the other hand, an observer with a nonzero expectation value of velocity relative to the surface cannot measure the area exactly. For this observer the expectation value is Lorentz-contracted, whereas the spectrum of the area operator is the same. More recent considerations in the spin-foam framework can be found in [229], where the Hilbert space of functions on SU(2) is mapped to a set K of functions on SL(2, C) by the Dupuis-Livine map [89]. In this way SU(2) SNW functions are mapped to SL(2, C) functions that are manifestly Lorentz covariant. Furthermore, these functions are completely determined by their projections on SU(2), so K is linearly isomorphic to a space of functions on SU(2). It is shown in [229] that the transition amplitudes are invariant under SL(2, C) gauge transformations in the bulk and manifestly satisfy LLI. While these papers suggest strongly that LLI is part of LQG - just as might be expected from a quantization of GR - other researchers have explored the possibility that the discreteness spoils or deforms LLI through the modification of dispersion relations and interaction terms. From a fundamental theory point of view, the symmetry group associated to the field theory of the continuum approximation, from which particles acquire their properties through irreducible representations, will be dynamically determined by quantum gravity theory and the associated ground state. One candidate for such a ground state, originating in work by Kodama, is the Kodama state, an exact solution of the quantum constraints for complex-valued self-dual connection variables with a non-vanishing cosmological constant [188,192,247]. The corresponding space-time has de Sitter space as a semiclassical limit [247]. There is also an intriguing link between the cosmological constant and particle statistics [188]. It is well-known that for space-time with boundary, boundary terms and/or conditions must be added to the Einstein-Hilbert action to ensure that the variational principle is well defined and Einstein's equations are recovered in the bulk. (Possible boundary conditions and boundary terms for real Ashtekar variables were worked out in [137,188].) However, there are severe difficulties with this choice of complex-valued self-dual connection variables and the Kodama state: the kinematic state space of complex-valued connections is not yet rigorously constructed - we lack a uniform measure. 
The state itself is both not normalizable in the linearized theory, violates CPT and is not invariant under finite gauge transformations (see [257] for discussion). An analysis of perturbations around the Kodama state shows that the perturbations of the Kodama state mix positive-frequency right-handed gravitons with negativefrequency left-handed gravitons [178]. The graph transform of the Kodama states, defined through variational methods, acquires a sensitivity to tangent space structure at vertices [185]. Finally, the original q-deformation of the loop algebra suggested in [188,192] is inconsistent [186]. These difficulties have made further progress in this area challenging, although there is work on generalizing the Kodama state to real Ashtekar variables, where some of these issues are addressed [218]. Following the lead of developments in 3D gravity coupled to point particles, where particle kinematics is deformed when the topological degrees of freedom are integrated out, one may wonder whether a similar situation holds in 3 + 1 when the local gravitational effects are integrated out [156]. In [156] the authors showed that, for BF theory with a symmetry breaking term controlled by a parameter [99,249,251], (point) particles enjoy the usual dispersion relations and any deformation appears only in interaction terms. In the next section we review frameworks in which the symmetry groups are deformed or broken. .1 Relativistic particles and plane-waves We start by recalling the fundamental structures associated with the physics of free particles in the phase space picture. Constructing a phenomenological model to incorporate the Planck scale consists in generalizing or modifying this structure. A relativistic particle (with no spin) propagating in Minkowski spacetime is described in the Hamiltonian formalism by the following structures. • A phase space P ∼ T * R 4 ∼ R 4 × R 4 , the cotangent bundle of the flat manifold R 4 . It is parameterized by the configuration coordinates x µ ∈ R 4 and the momentum coordinates p µ ∈ R 4 . These coordinates have a physical meaning, i.e. they are associated with outcome of measurements (e.g. using rods, clocks, calorimeters, etc.). P is equipped with a Poisson bracket, that is, the algebra of (differentiable) functions over the phase space C(P) is equipped with a map {, } : C(P) × C(P)→C(P) which satisfies the Jacobi identity. For the coordinate functions, the standard Poisson bracket is given by • Symmetries given by the Poincaré group P ∼ SO(3, 1) T , given in terms of the semidirect product of the Lorentz group SO(3, 1) and the translation group T . So that there exists an action of the Lorentz group on the translation, which we note Λ £ h, ∀ (Λ, h) ∈ P. The product of group elements is hence given by The Lie algebra P of P is generated by the infinitesimal Lorentz transformations J µν and translations T µ which satisfy The action of P is given on the phase space coordinates by This is extended naturally to the functions on phase space. • Particle dynamics given by the mass-shell or dispersion relation 1 p 2 = p µ η µν p ν = m 2 . This is a constraint on phase space which implements the time reparameterization invariance of the following action λ is the Lagrange multiplier implementing the constraint p 2 −m 2 = 0. This action contains the information about the phase space structure and the dynamics. 
We can perform a Legendre transform in the massive case (or a Gauss transform in the massless case) to express this action in the tangent bundle T R 4 , ß = m dτ ẋ µẋν η µν (x),ẋ µ = dx µ dτ . With this description, we recover the familiar fact that the relativistic particle worldline given by a geodesic of the metric. When we require the Poincaré symmetries to be consistent with all these phase space and particle dynamics structures, these pieces fit together very tightly. • The Poincaré symmetries should be compatible with the Poisson bracket. If we define our theory in a given inertial frame, physics will not change if we use a different inertial frame, related to the initial one by a Poincaré transformation t, • The mass-shell condition/dispersion relation p 2 = m 2 encodes the mass Casimir of the Poincaré group. As such this mass-shell condition is invariant under Lorentz transformations. When dealing with fields or multi-particles states, we have also the following important structures. • The total momentum of many particles is obtained using a group law for the momentum, adding extra structure to the phase space. We are using R 4 , which is naturally equipped with an Abelian group structure 2 . From this perspective, one can consider the phase space as a cotangent bundle over the group R 4 . This picture will be at the root at the generalization to the non-commutative case. • Plane-waves e ix µ kµ , where k µ is the wave-covector, are an important ingredient when we deal with field theories. The plane-wave is usually seen as the eigenfunction of the differential operators encoding the infinitesimal translations on momentum or configuration space Since the momentum operator P µ is usually represented as −i∂ x µ , it is natural to identify the wave-covector to the momentum k µ = p µ . When this identification is implemented, the product of plane-waves is intimately related to the addition of momenta, hence the group structure of momentum space. • The infinitesimal translation T µ is represented as ∂ x µ therefore it can be related to the momentum operator from (3.1). Modifying momentum space is then synonymous to modifying the translations. As we are going to see in the next sections, introducing QG effects in an effective framework will consist in modifying some of the above structures, either by brute force by breaking some symmetries or, in a smoother way, by deforming these symmetries. Introducing Planck scales into the game: modif ied dispersion relations Light, or the electromagnetic field, is a key object to explore the structure of spacetime. In 1905, light performed a preferred role in understanding Special Relativity. In 1919, Eddington measured the bending of light induced by the curvature of spacetime. As a consequence, these results pointed to the fact that a Lorentzian metric is the right structure to describe a classical spacetime. In the same spirit, a common idea behind QG phenomenology is that a semi-classical spacetime should leave imprints on the propagation of the electromagnetic field such as in [21] discussed in the Introduction. In this example the lever arm that raises possible QG effects into view is the proposed cumulative effects over great distances. The concept of a modified dispersion relation (MDR) is at the root of most QG phenomenology effective theories. Depending on the approach one follows, there can also be some modifications at the level of the multiparticle states, i.e. how momenta are added. 
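The precise form of the modification is model dependent; a representative ansatz, written here only for orientation (the coefficients α_i are free parameters of the effective description and are not fixed by any of the approaches reviewed below), is

```latex
E^2 \;=\; m^2 + \vec p^{\,2}
      \;+\; \alpha_1\,\frac{E^3}{\kappa}
      \;+\; \alpha_2\,\frac{E^4}{\kappa^2} \;+\; \cdots ,
\qquad \kappa \sim \text{QG scale}.
```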
One can readily see that such a modified dispersion relation is not consistent with the Lorentz symmetries, so that they have to be broken or deformed. We shall discuss both possibilities below. There is nevertheless a semi-classical regime where the Planck scale is relevant and possible non-trivial effects regarding symmetries could appear. Indeed, the natural flat semi-classical limit in the QG regime is given by Λ, G, ℏ → 0. There are a number of possibilities to implement these limits [97]. An interesting flat semi-classical limit is when Λ = 0 and ℏ/G = κ² is kept constant in the limit G, ℏ → 0. This regime is therefore characterized by a new constant κ, which has dimension either of energy, momentum or mass. Note that in this regime the Planck length, ℓ_P² = ℏG, naturally goes to zero, hence there is no minimum length from a dimensional argument. The key question is how to implement this momentum scale κ, that is, to identify the physical motivations which will dictate how to encode this scale in the theory. Following the paper by Amelino-Camelia et al. [21], many authors modeled potentially observable QG effects with "semi-classical" effective theories. In some cases discreteness was put in "by hand". In others, deviations from Special Relativity, suppressed by the ratio (particle energy)/(QG scale) or some power of it, were modeled. This is the approach followed when considering the Lorentz symmetry violation discussed in Section 4.1. Another approach was to introduce the Planck length as a minimum length and investigate possible consequences. For a recent review on this notion and the implications of a minimum length see [134]. Alternatively the Planck energy, or the Planck momentum, was set as the maximum energy [69] (or maximum momentum) that a fundamental particle could attain. Implementing this feature can also generate a modified dispersion relation. This is the route often taken in the deformed-symmetries approach. Both of these latter proposals affect dispersion relations and hence the Poincaré symmetries. Therefore in the regime lim_{ℏ,G→0} ℏ/G = κ², it is not clear that the symmetries must be preserved, and some non-trivial effects can appear. In general, the idea is to cook up, more or less rigorously, an effective model and then try to relate it to a given QG model (bottom-up approach). The models which are the most well defined mathematically are, to our knowledge, given by the non-commutative approach and the Finsler geometry approach. Among these two, Finsler geometry is the easiest to make sense of at the physical level. There are fewer attempts to derive semi-classical effects from QG models (top-down approach). Most of the time, these attempts to relate the deep QG regime and the semi-classical regime are heuristic: there is no complete QG theory at this time and the semi-classical limit is often problematic. We shall review some of them when presenting the different QG phenomenological models. Even though these attempts were few and heuristic, they were influential, promoting the idea that it is possible to measure effects originating at the Planck scale. Currently, QG phenomenology is therefore not firmly tied to a particular quantum theory of gravity. For a brief, general review of quantum gravity phenomenology, independent of a fundamental theory, see [164].
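To get a feeling for the size of such effects, the following order-of-magnitude sketch (a generic cubic ansatz with an O(1) coefficient, not a prediction of any specific model) evaluates the energy dependence of the photon group velocity induced by a term suppressed by κ, taken here to be the Planck energy, together with the arrival-time spread it accumulates over a cosmological distance.

```python
# From E^2 ≈ p^2 + alpha*p^3/kappa one finds a group velocity deviating from c
# by roughly alpha*E/kappa (the sign follows the sign of alpha).  Tiny per
# wavelength, the effect adds up over the light-travel time of distant sources.
kappa = 1.22e19          # Planck energy in GeV (assumed QG scale)
alpha = 1.0              # dimensionless O(1) parameter

def delta_v(E_GeV):
    """Leading-order magnitude of the deviation of the group velocity from c."""
    return alpha * E_GeV / kappa

D_light_seconds = 1.0e17              # roughly a Gpc expressed as light-travel time
for E in (1e-3, 10.0, 1e5):           # MeV, 10 GeV and 100 TeV photons
    print(E, delta_v(E), delta_v(E) * D_light_seconds)   # accumulated delay in s
```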
Contemporary observational data are not sufficient to rule out QG theories, not only because of the lack of stringent data, but particularly because the link between fundamental theories and QG phenomenology is loose. Nevertheless, present observational data restrict parameters in some models, effectively ruling out certain modifications, such as cubic modifications to dispersion relations in the effective field theory (EFT) context. We shall review this in Section 4.1. In the following we are going to present the main candidates to encode some QG effective semi-classical effects. When available we shall also recall the arguments relating them to LQG. As a starter, we now recall different arguments which attempt to justify a MDR from the LQG perspective. Arguments linking modif ied dispersion relations and LQG We present three quite different strategies to establish a firmer tie between LQG and modified dispersion relations. The first one introduces a heuristic set of weave states, flat and continuous above a characteristic scale L, and then expands the fields around this scale. The second strategy starts from full LQG and aims at constructing quantum field theory (QFT) on curved spacetime which is an adaptation of conventional QFT to a regime of non-negligible, but not too strong gravitational field. In this construction coherent states of LQG are employed, which are quantum counterparts of classical flat space. Due to the enormous complications, this venture must resort to many approximations. The third strategy deals in a very general way with quantum fluctuations around classical solutions of GR. This approach is rather sketchy and less worked-out in details. Given the preliminary stage of development of LQG all of the derivations employ additional assumptions. Nevertheless they provide a starting point for exploring the possible effects of the discreteness of LQG. Departures from the standard quadratic energy-momentum relations and from the standard form of Lorentz transformations can of course originate from the existence of a preferred reference frame in the limit of a vanishing gravitational field, i.e. a breaking of Lorentz invariance at high energies. Nevertheless, this need not necessarily be the case. The relativity principle can be valid also under the conditions of modified dispersion relations and Lorentz transformations. In [19] the compatibility of a second invariant quantity in addition to the speed of light, a length of the order of the Planck length, with the relativity principle was shown. The product of this length with a particle energy is a measure for the modification of the dispersion relation. Frameworks with two invariant scales, where the second one may also be an energy or a momentum, were dubbed "doubly special relativity theories" (DSR). As an outcome of the theory's development, it was found that "DSR" may also be an acronym for "deformed special relativity" in that Poincaré Lie algebra of symmetry generators, namely the energy and momentum operators, may be deformed or embedded into a Hopf algebra [182], whereas in the doubly special relativity framework the representation of the Poincaré group, i.e. the action on space-time or momentum space, is nonlinearly deformed. Deformed algebras are used in the κ-Minkowski and in the κ-Poincaré approach [154,170]. For relations to doubly special relativity see [155]. 
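The tension that DSR resolves can be seen in one line of arithmetic: under the ordinary, linear Lorentz transformations no energy scale can be observer independent. The toy check below (plain special relativity, nothing model specific) boosts a massless particle whose energy equals the would-be invariant scale κ and shows that its energy changes from frame to frame, which is why the transformations themselves must be deformed if κ is to be a second invariant.

```python
# Plain special relativity: an energy equal to kappa in one frame is not kappa
# in another, so a second invariant scale is incompatible with linear boosts.
import numpy as np

kappa = 1.0
E, px = kappa, kappa                     # massless particle at the QG scale
for v in (0.1, -0.5, 0.9):
    g = 1.0 / np.sqrt(1.0 - v**2)
    print(v, g * (E - v * px))           # boosted energy differs in every frame
```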
MDR from weave states Following the first strategy of introducing a heuristic state, Gambini and Pullin [101] modeled a low energy semi-classical kinematic state with a "weave", a discrete approximation of smooth flat geometry, characterized by a scale L. In an inertial frame, the spatial geometry reveals its atomic nature below the characteristic length scale. Above this length scale L, space appears flat and continuous. In this preferred frame the expectation value of the metric is flat, up to corrections of order ℓ_P/L. To see the leading order effect for photons, Gambini and Pullin analyzed the Maxwell Hamiltonian, importing one key idea from LQG. The densitized metric operator q_ab/√q is expressed as a product of two operators ŵ_a(v_i), which are commutators of the connection and the volume operator. These operators are finite and take non-vanishing values only at vertices v_i of the graph. Regulating the Hamiltonian with point splitting, the authors took the expectation value of the Hamiltonian in the weave state, averaging over a cell of size L. They expanded the fields around the center of the cell P and found that the leading order term is a tensor with three indices. Assuming rotational symmetry, this term is proportional to ε_abc ℓ_P/L, thus modifying Maxwell's equations. The correction is parity violating. The resulting dispersion relations acquire cubic modifications, taking, in the helicity basis, a form in which the two helicities receive corrections of opposite sign controlled by a constant χ, assumed to be of order 1. Hence the weave states led to birefringence. As discussed in Section 4.1.2 these effects may be constrained by observation. Furthermore, some theoretical arguments can also be proposed against the validity of such a proposal, as we shall see in Section 3.4. Taking a similar approach and specifying general properties of a semi-classical state, Alfaro et al. found that, in an analysis of particle propagation, photon [12] and fermion [11,13] dispersion relations are modified. They find these by applying LQG techniques on the appropriate quantum Hamiltonian acting on their states. Following similar steps to Gambini and Pullin, Alfaro et al. expand the expectation value of the matter Hamiltonian operators in these states. To determine the action of the Hamiltonian operator of the field on quantum geometry, Alfaro et al. specify general conditions for the semi-classical state. The idea is to work with a class of states for geometry and matter that satisfy the following conditions: 1. The state is "peaked" on flat and continuous geometry when probed on length scales larger than a characteristic scale L ≫ ℓ_P. 2. On length scales larger than the characteristic length the state is "peaked" on the classical field. 3. The expectation values of operators are assumed to be well-defined and geometric corrections to the expectation values may be expanded in powers of the ratio of the physical length scales, ℓ_P and L. The authors dub these states "would-be semi-classical states". States peaked on flat geometry and a flat connection are expected for semiclassical or coherent states that model flat space. Lacking the quantum Hamiltonian constraint for the gravitational field, and thus also for the associated semi-classical states, the work of Alfaro et al. is necessarily only a forerunner of the detailed analysis of semi-classical states. See [233,234] for further work on semiclassical states and dispersion relations.
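To illustrate the kind of effect the weave-state calculation suggests, the following sketch propagates the two photon helicities with opposite cubic corrections. The parameterization and the coefficients are representative choices made here for illustration only; they are not the exact expressions of [101].

```python
# Birefringent toy propagation: omega_{+/-} = k (1 +/- chi*k/kappa), so the two
# circular polarizations travel at slightly different speeds.
kappa = 1.22e19            # Planck energy in GeV
chi = 1.0                  # O(1) parameter

def v_group(k, helicity):
    # d(omega)/dk for omega = k + helicity*chi*k^2/kappa
    return 1.0 + 2.0 * helicity * chi * k / kappa

k = 10.0                                # 10 GeV gamma ray
D = 1.0e17                              # ~Gpc as light-travel time in seconds
split = abs(v_group(k, +1) - v_group(k, -1)) * D
print(split)                            # arrival-time difference between helicities, in s
# Over such distances the two circular polarizations also dephase, degrading any
# linear polarization; this is the basis of the observational constraints
# mentioned in Section 4.1.2.
```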
To parameterize the scaling of the expectation value of the gravitational connection in the semiclassical state the authors introduce a parameter Υ that gives the scaling of the expectation value of the geometric connection in the semi-classical state | W φ where φ are the matter fields. The determination of the scaling is a bit of a mystery. Alfaro et al. propose two values for L: The "mobile scale" where L = 1/p, and the "universal" value where L is a fixed constant, p is the magnitude of the 3-momentum of the particles under consideration. We will see in the next section that matching the modifications to the effective field theory suggests a universal value L P and Υ ≥ 0, so we will use the universal value. It is not surprising that Lorentz-violating (LV) terms arise when the spatial distance L is introduced. Expanding the quantum Hamiltonian on the semi-classical states Alfaro et al. find that particle dispersion relations are modified. Retaining leading order terms in p/κ, the scaling with (Lκ) and next to leading order terms in κ, but dropping all dimension 3 and 4 modifications for the present, the modifications are, for fermions, where p is the magnitude of the 3-momentum and the dimensionless κ i parameters are expected to be O(1) (and are unrelated to the Planck scale κ. The labels are for the two helicity eigenstates. These modifications are derived from equation (117) of [13], retaining the original notation, apart from the Planck mass κ. Performing the same expansion for photons Alfaro et al. find that the semi-classical states lead to modifications of the dispersion relations, at leading order in k/κ and scaling (Lκ) where the θ i parameters are dimensionless and are expected to be O(1). The leading order term is the same polarization-dependent modification as proposed in Gambini and Pullin [101]. In the more recent work [233,234] the structure of the modification of the dispersion relations was verified but, intriguingly, the corrections do not necessarily scale with an integer power of κ. As is clear in the derivation these modified dispersion relations (MDR) manifestly break LLI and so are models of LQG with a preferred frame. The effects are suppressed by the Planck scale, so any O(1) constraints on the parameters are limits placed on Planck-scale effects. These constraints, without a complete dynamical framework that establishes the conservation, or deformation, of energy and momentum, must come from purely kinematic tests. Interestingly, as we will see in Section 4.1, Alfaro et al. found the modifications to the dispersion relations corresponding to the dimension 5 and the CPT-even dimension 6 LV operators in the effective field theory framework. Of course given the limitations of the model they did not derive the complete particle dynamics of the EFT framework. It was suggested in [157] that different choices for the canonical variables for the U(1) field theory could remove the Lorentz violating terms. However Alfaro et al. pointed out that this is inconsistent; the only allowed canonical pairs in LQG are those that have the correct semiclassical limit and are obtained by canonical transformation [10]. Finally, we must emphasize that these derivations depend critically on assumptions about the semi-classical weave state, the source of the local Lorentz symmetry violations. 
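A rough dimensional estimate (standard in this context, with all O(1) parameters set to one; these are not numbers quoted by Alfaro et al.) shows why such Planck-suppressed terms are nevertheless testable: in a dispersion relation E² = m² + p² + η p³/κ the correction competes with the mass term, and at sufficiently high momenta p³/κ can exceed m².

```python
# Compare the cubic term p^3/kappa with m^2 for a few benchmark particles
# (all O(1) coefficients set to one; purely a dimensional estimate).
kappa = 1.22e19                      # Planck energy in GeV
benchmarks = {                       # label: (mass in GeV, momentum in GeV)
    "10 GeV photon (vs m_e^2)": (0.511e-3, 10.0),
    "100 TeV electron":         (0.511e-3, 1.0e5),
    "10^11 GeV proton (UHECR)": (0.938,    1.0e11),
}
for label, (m, p) in benchmarks.items():
    print(label, (p**3 / kappa) / m**2)
```

Ratios of order one or larger mark the momenta at which the Planck-suppressed term can no longer be neglected in threshold reactions; this is the lever arm exploited by the constraints reviewed in Section 4.1.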
Quantum f ield theory in curved space from LQG Sahlmann and Thiemann studied dispersion relations in a framework of QFT on curved space from basic LQG principles by heavily making use of approximations [233,234]. In the first step QFTs on discrete space are constructed on an essentially kinematic level. Rather than taking the total Hamiltonian constraint of gravity and matter, the matter Hamiltonians of gauge, bosonic and fermionic fields are treated as observables, dependent on geometric variables of the background. Then the gravitational field is assumed to be in a coherent state, where expectation values for field variables yield the classical values and the quantum uncertainties are minimal. The Hilbert space of matter states H Fock m (m) depends on the state g of the geometry. The vacuum state Ω m (g) is the ground state of some (geometry-dependent) matter Hamiltonian operatorĤ m (g). In QG m becomes an operator, and so Ω m (g) becomes a "vacuum operator", i.e. a function of the matter degrees of freedom with values in L(H kin geom ) ⊗ H kin m . L is the space of linear operators on a background independent Hilbert space of kinematic states of geometry, H kin m is a kinematic matter Hilbert space. From this vacuum operator a vacuum state may be constructed in principle as expectation value in a state of quantum geometry, which is peaked at classical flat space. As a technical detail and interesting twist, the construction of annihilation and creation operators involves fractional powers of the Laplacian on a background metric. In [233,234] these operators are constructed but the spectra required to calculate fractional powers are not known. To circumvent this problem the expectation values of the Laplacian in a coherent state, mimicking flat space, are calculated first, then fractional powers are taken. In coherent states this approximation coincides with exact calculations in zeroth order in . The coherent state employed is modeled by spin networks with an irregular 6-valent lattice. Creation and annihilation operators, an approximate vacuum state, and approximate Fock states are constructed from the matter Hamiltonians in the sense of the described approximation. The influence of geometry is included in an effective matter Hamiltonian, the matrix elements of which are defined in the following way Here the general building principle of a matter-geometry Hamiltonian, related to a graph Γ iŝ whereM a matter operator with some discrete (collective, matter and geometry) label l, v is a SNW vertex andĜ is an operator of geometry. Recently work in the context of cosmological models has also found hints of a modified dispersion relation [84]. Working in the context of a quantized Bianchi I model with a scalar field, the authors found, when taking back-reaction into account, that the scalar field modes propagate on a wave number-dependent metric and the dispersion relation is modified [84]. The discreteness and irregularity of the underlying lattice breaks translation and rotation symmetry. There are no exact plane wave solutions for matter fields, only in the long distance limit the irregularities average out and so for long wavelengths -compared to the lattice spacing -there are at least approximate plane waves. In this limit the matter Hamiltonians simplify sufficiently, so that the sketched program becomes feasible and yields an energy-momentum dispersion relation for low energies, which carries the imprints of both discreteness and fluctuations. 
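The qualitative statement that plane waves and the usual dispersion relation are recovered only for wavelengths long compared with the discreteness scale can be illustrated with the simplest possible analogy, a regular one-dimensional lattice (this is only an analogy; the LQG lattice above is irregular and the detailed corrections differ).

```python
# Dispersion on a regular 1D lattice with spacing a: omega(k) = (2/a)|sin(k a/2)|.
# For k*a << 1 this reduces to omega ≈ k; near the lattice cutoff the
# discreteness is clearly visible.
import numpy as np

a = 1.0
omega = lambda k: (2.0 / a) * np.abs(np.sin(0.5 * k * a))

for k in (1e-3, 1e-1, 1.0, 2.0):
    print(k, omega(k) / k)      # -> 1 at long wavelengths, < 1 approaching the cutoff
```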
With these possible modifications to particle dispersion relations, the obvious next step is to explore the effects that arise from these Lorentz violating modifications. Early studies [140,149] used particle kinematics phenomenology, modified dispersion relations plus energymomentum conservation. We now know, see e.g. [141], that constraints require a full dynamical framework for the fields. The most obvious, and certainly most developed framework is effective field theory. We review the physical effects of cubic modifications to the dispersion relations in effective field theory in Section 4.1. We briefly discuss alternate frameworks and higher dimension modifications in Sections 3.10 and 4.1.7. MDR from Hamilton-Jacobi theory In [245] by Smolin the occurrence of corrections to particle kinematics in the low-energy limit of QG is made plausible in a very general way. This derivation is based on the quantum fluctuations around classical GR in the connection representation. The dynamics is formulated in the Hamilton-Jacobi theory with the aid of the action functional S[A]. Canonical conjugate momenta are given by with ρ being a constant with dimension (length) 2 . The solutions to the dynamics form a trajec- with some parameter t. The parameter can be chosen to be proportional to the Hamilton-Jacobi functional. This functional, in turn, can be written as an integral over a density S on the spatial manifold, or some local coordinate neighborhood, Therefore, on the classical space-time there exists a time T proportional to S[A], defining a slicing. The slicing constructed in this way is determined by the classical solution. When connections depart slightly from the classical trajectory the slicing fluctuates. So variations of functions on configuration space, evaluated at the classical trajectory, can be related to variations on space-time, expressed as , with the orthonormal inverse triad e 0 i related to E 0 a i . In the neighbourhood of a classical trajectory the connection A can be formulated through a dependence on S and quantities a a i [248], so that The first term can be understood as variation in the internal time coordinate, a a i contains the gravitational degrees of freedom. Going over to quantum theory we construct the operator and the semiclassical state functional To study semiclassical QG effects on the propagation of a matter field φ, Smolin takes quantum states in the Born-Oppenheimer form Ψ In the neighborhood of the classical trajectory and the action ofÊ a i on such functions iŝ where ρ M µ has dimension of length or time in natural units. The only length scale in the problem (leaving aside Λ) being the Planck length, we may write where the constant α is determined later on in [245]. On the classical trajectory χ[S, a a i , φ] = χ T, a a i , φ . At the semiclassical level we can neglect δ/δa a i , which describes couplings of matter to gravitons. Finally we havê Now consider a semiclassical state of definite frequency with respect to the time T , The action of the triad operator on such a function iŝ So the classical solution E 0 a i (x, T ) effectively goes over into , which implies an energy-dependent spatial metric, This may be interpreted as a "rainbow metric" [180]. 
Assuming that the corresponding contravariant metric in momentum space to be the inverse spatial metric, we arrive at a universal modification of dispersion relations, In view of the universality of the effect -the independence of the form of matter in discussionand the absence of any (explicit) preferred frame vector field, which could distinguish a preferred reference frame, it is argued in [245] that the proper framework for these MDR is not Lorentz invariance breaking but a deformation with a helicity-independent, energy-dependent speed of photons. We shall come back to this point in Section 3.7. In [244] a more general formulation is presented, which does not rely on connection representation and so is not restricted to LQG, but rather makes Lorentz invariance deformation plausible for a wider class of QG theories. This attribute is shared with the models described in the next section. Broken Poincaré symmetries: Finsler geometry As we recalled in Section 3.2, a common idea behind QG phenomenology is that a semi-classical spacetime should leave some imprint on the propagation of light, in particular through a modified dispersion relation. A natural way to encode this idea is to approximate a semi-classical spacetime by a medium whose properties are to be determined by the specific underlying QG theory. As is known in solid state physics, the description of light propagation in a special medium (see, for example, references in [242]) is conveniently expressed with Finsler metrics, which are a generalization of the notion of Lorentzian/Riemannian metrics. From this perspective, it seems then quite natural to explore Finsler geometries as a candidate to describe effectively QG semi-classical effects [109]. The mathematics behind Finsler geometries are not yet as solid as in the Riemannian or Lorentzian geometry cases. For example the notions of signature and curvature of a Finsler metric are still active topics of discussion among the specialists. Finsler geometry provides however a nice framework to develop new mathematics and QG phenomenology. In fact it also provides ways to theoretically constrain the possible QG phenomenological proposals [219,238]. Let us recall the construction. The propagation of the electromagnetic field u A = (E i , B j ) (Capital Latin letters designate a pair of spatial indices.), in this medium/semi-classical spacetime is described by some effective Maxwell equations. We assume here that for simplicity, spacetime is R 4 . Following [219], we will make the assumption that these effective equations are still linear partial differential equations The spacetime indices α i run from 0 to 3. We note the presence of the scale κ which encodes the QG effects. The standard Maxwell equations are recovered when κ→0 Assuming there are no non-linear effects means that the coefficients Q α 1 ···αn AB (κ) do not depend on the fields u A (but they could depend on the position x). This is not such a strong restriction since the existing proposals of QG modified Maxwell equations such as Gambini-Pullin's [101] are of this type (see Section 3.3). To solve these equations, it is common to use the short wave-length approximation, i.e. the Eikonal approximation. This means that we consider the particle approximation of the electromagnetic field. Skipping the details found in [219], this approximation leads to the modified dispersion relations or mass-shell constraints in terms of momentum p This is the eikonal equation, which can be seen as the covariant dispersion relation. 
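As a toy version of this construction (an invented fourth-order wave equation, chosen only to keep the algebra short; it is neither the Gambini-Pullin system nor any equation appearing in [219]), one can insert a plane wave into a linear field equation with a κ-suppressed higher-derivative term and read off the covariant dispersion relation M(κ, p) = 0.

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
k, w, kappa = sp.symbols('k omega kappa', positive=True)
u = sp.exp(sp.I * (k * x - w * t))                 # plane-wave / eikonal ansatz

# Toy linear equation:  u_tt - u_xx + (1/kappa^2) u_xxxx = 0
pde = sp.diff(u, t, 2) - sp.diff(u, x, 2) + sp.diff(u, x, 4) / kappa**2

M = sp.expand(sp.simplify(pde / u))                # the "mass-shell" M(omega, k)
print(M)                                           # -omega**2 + k**2 + k**4/kappa**2

w_of_k = sp.sqrt(k**2 + k**4 / kappa**2)           # positive root of M = 0
print(sp.series(sp.diff(w_of_k, k), k, 0, 3))      # group velocity: 1 + 3k^2/(2 kappa^2) + ...
```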
The reader can quickly check that keeping only the usual Maxwell equation, leads to a standard dispersion relation 4 M(x, κ = 0, p) = η µν p µ p ν = p 2 = 0. The particle approximation gives rise to an action, expressed in phase space or the cotangent space T * R 4 , of the type ß mod = dτ ẋ µ p µ − λ(M(x, κ, p)) . If one wants to introduce the concept of energy E and spatial momentum p, we need to introduce an observer frame 5 . However, we need the frame e expressed in the cotangent bundle. We recall that the Gauss map G : T * R 4 →T R 4 in the massless case (or the Legendre transform in the massive case) allows to jump from a description in the cotangent bundle to the tangent bundle. Indeed, from the variations of the action, we obtainẋ in terms of p. The Gauss map (and the Legendre transform) inverts this expression. Explicitly, this map allows to express momentum (i.e. a covector) as a function of vectors 6 Let us postpone to later the discussion on the existence of such a map and let us assume it exists and it is invertible. The observer is encoded by a curve in the spacetime manifold R 4 and its tangent vector v µ defines the time part 0 µ of the frame α µ . The other components i µ , i = 1, 2, 3, are determined such that they span the tangent plane. Now we can define the dual frame e α µ in the cotangent bundle using the inverse Gauss map (or inverse Legendre transform in the massive case). We define e 0 µ = G −1 ( 0µ ) and the rest of the frame e i µ , i = 1, 2, 3, is defined as spanning the rest of the cotangent plane, so that e α µ defines a frame in the cotangent plane. To have a physical notion of energy E and 3-momentum, we need to project the momentum p µ on e. This point is actually often forgotten in the literature. The modified dispersion relation expressed in terms of energy and 3-momentum is then recovered from M(x, κ, Ee 0 + p i e i ) = 0. We emphasize that in this approach the notion of energy or 3-momentum is defined in terms of a frame in the cotangent bundle, defined itself in terms of an observer frame and the (inverse) Gauss map. Contrary to the usual approach in QG phenomenology, where one defines arbitrarily the notion of energy, without specifying the notion of frame observer or having a good control on it. Performing the Gauss map at the level of the action, we obtain its expression in the tangent bundle, ß = dτ λM (x,ẋ, κ). M (x,ẋ) is a homogeneous function ofẋ, of degree different than M, in general. In the standard electromagnetic case on R 4 , given in terms of (3.5), we recover M (x,ẋ) = ẋ µ η µνẋ ν . In this special case, the Minkowski metric is recovered with If the polynomial M (x,ẋ) is more general, one obtains a metric which depends both on the position x and the vectorẋ. This type of metric is called Finsler metric. Riemann was actually aware of this possible extension (i.e. a metric which depends also on vectors) of the metric, but it is only in 1917 that Finsler explored this generalization for his PhD thesis. This generalized notion of metric can be seen as the rigorous implementation of the notion "rainbow metric" [180]. The notion of a Lorentzian Finsler metric is still a matter of discussion in the Finsler community. The most recent proposals that have been developed concurrently with the approach recalled above can be found in [213,219]. The reader could argue that it might not be possible to identify a well defined (invertible) Gauss map or Legendre transform in general. 
However if few natural assumptions are added, [219] pointed out that the Gauss map and the Legendre map are well defined. Let us summarize these assumptions. (Clearly they can be discussed and one could explore the consequences of removing them.) • The first assumption is one that we made at the beginning: we assumed that the equations encoding the effective dynamics are linear partial differential equations. • We take as a very reasonable (or conservative) assumption that if we know the initial conditions, we can predict the propagation of the plane-waves at any later time in the semi-classical spacetime. Not having this assumption would make prediction difficult. On the other hand, one can argue that in the full QG regime, the notion of causality could disappear. We assume here that at the semi-classical level we do not see such effects. • Finally, we expect that the notion of energy E defined using the observer frame should be positive for any observer. It is quite striking that these three physical assumptions (linearity, predictability, energy positivity) can be translated into very different types of mathematics, such as algebraic geometry and convex analysis. With these assumptions, the Legendre transform (massive case) and the Gauss map (massless case) are well-defined and invertible. Taken together these assumptions put strong constraints on the possible shape of the covariant dispersion relation M(x, κ, k). [219] showed that some well-known QG motivated modified dispersion relations (such as the Gambini-Pullin's [101]) actually do not satisfy some of the above assumptions. This means that these MDR cannot be understood in this setting and it is quite unlikely that they can be physically interpreted in the context of Lorentz symmetry breaking. For the full details we refer to the [219], the presentation of necessary mathematics would go beyond the scope of this review. To conclude this quick overview of the Finsler framework, we hope we have conveyed to the reader that there exist more general metric structures than the Lorentzian ones, which seem to be natural candidates to encode the semi-classical QG effects. This mathematical framework has been introduced fairly recently in the QG phenomenology framework [109]. Even more recently, [219] has shown that this framework can be very much constrained from mathematical arguments so that we do not have to explore blindly in every direction what we can do. There is a nice mathematical framework to guide us, which awaits to be further developed. Finally, (some of) the Lorentz symmetries are broken in the Finsler geometry framework. There is no known way to accommodate for spacetime symmetries consistent with the scales present in the modified dispersion relation. This is consistent with the analogy of a medium, which will in general contain some preferred direction and/or scale. Non-linear realization of Poincaré symmetries Besides the obvious way to introduce an invariant scale by choosing a preferred reference frame and so sacrificing Lorentz invariance, it is possible to keep the relativity principle intact in presence of a second invariant quantity, a Planck scale κ in addition to the speed of light. One does so by modifying both the transformation laws from one inertial system to another, and the law of energy-momentum conservation in such a way that a MDR become compatible with observer independence. This can be achieved through a non-linear realization of (some of) the Poincaré symmetries [179,181]. 
This is a first attempt to describe deformed symmetries since in this framework, the realization of the full Poincaré symmetries in spacetime is not clear. In the papers [179,181], Magueijo and Smolin do not make any specific assumption regarding the structure of momentum space. It can be flat or curved, it is left open. To fix the notations, define first the momentum π and the infinitesimal boost which is realized as J 0i = π 0 ∂ π i −π i ∂ π 0 . It acts linearly on π We introduce then a map U κ : R 4 →R 4 so that one can define the non-linear realization K 0i of the boosts J 0i The choice of U κ is such that the non-linear boost K 0i still satisfies the Lorentz algebra where J i encodes the infinitesimal rotations. An example is given by Note that this map is not unitary. In this example, we have a maximum energy κ as one can check by applying a boost on π µ . This specific non-linear realization leaves invariant the modified dispersion relation Clearly other choices of non-linear realizations can be performed. They are simply constrained to satisfy (3.7) and can be chosen to implement a maximum energy or a maximum 3d momentum. Another way to present this proposal is to consider a momentum p µ = U κ (π µ ) as the measured "physical" momentum. The meaning of (3.6) is that first we go to the linear auxiliary variable π, perform the boost transformation, then deform back to the physical variable p. The physical meaning of the variable π is then not very clear. The case of constructing multiparticles has been discussed in this context. Once again, the idea is to use the linear momentum π to induce the sum of the physical momenta p. A preliminary proposal was This proposal however suffered from an immediate drawback: the "soccer ball problem". If it makes sense that a fundamental particle has a momentum bounded by the QG scale, large systems (such as a soccer ball) have doubtlessly a momentum larger than the QG scale. For instance a flying mosquito has a momentum bigger than the QG scale. The sum in (3.8) is such that the total momentum of two particles will still be bounded by the QG scale. The solution to this problem is to have a rescaling of the QG scale so that in the large limit, we can deal with systems which have larger momentum than κ. Magueijo and Smolin therefore proposed to deal with a sum This is a rescaling of the Planck scale implemented by hand. One should find a justification for such rescaling by considering a more complete model. Different arguments have been proposed in [110,181]. We note, en passant, that this addition of momenta is commutative, since it is based on the standard commutative addition π 1 + π 2 . This approach is therefore not equivalent to the non-commutative approach, which we will discuss later. To reconstruct spacetime, Magueijo and Smolin propose to see the coordinates as the infinitesimal translations on momentum space. In the specific example they chose, the coordinates are commutative for the Poisson bracket They are however functions over the full phase space and not only in configuration space. Their physical meaning is not very clear. The definition of the translations in spacetime is not given explicitly. For a recent attempt to formulate a particle-dependent, "non-universal DSR", see [17]. Modif ied reference frame This approach tries to provide a physical meaning to the different momenta p, π introduced in the previous subsection. 
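Before turning to that interpretation, the following sketch makes the two kinds of momenta concrete. It uses one common choice of map, U_κ(π) = π/(1 + π_0/κ), boosts the auxiliary linear variable π, and maps back to the physical momentum p; the saturation of the energy at κ and the invariance of the deformed quadratic form are then visible numerically. The specific map and the numbers are illustrative; only the general construction is taken from the text above.

```python
# Non-linear (Magueijo-Smolin-type) realization of a boost on the physical momentum.
import numpy as np

kappa = 1.0                       # work in units of the deformation scale

def U(pi):                        # auxiliary (linear) variable -> physical momentum
    return pi / (1.0 + pi[0] / kappa)

def U_inv(p):                     # physical momentum -> auxiliary variable
    return p / (1.0 - p[0] / kappa)

def boost_x(v):
    g = 1.0 / np.sqrt(1.0 - v**2)
    L = np.eye(4); L[0, 0] = L[1, 1] = g; L[0, 1] = L[1, 0] = -g * v
    return L

def deformed_boost(p, v):
    return U(boost_x(v) @ U_inv(p))

def deformed_invariant(p):        # (p0^2 - |p|^2) / (1 - p0/kappa)^2
    return (p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2) / (1.0 - p[0] / kappa)**2

p = np.array([0.3, 0.1, 0.0, 0.0])
for v in (0.5, 0.99, 0.999999):
    q = deformed_boost(p, v)
    print(q[0], deformed_invariant(q))   # energy stays below kappa; invariant unchanged
```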
In [14,15,165], the authors recall that to measure the momentum of a particle π µ , we actually use a reference frame e a µ , a = 0, . . . , 3. It is the local inertial frame which can be constructed at every point of the spacetime manifold from the metric g µν through g µν = e a µ e b ν η ab . The outcome of the measurement is then p a = e a µ π µ . In standard Minkowski spacetime, we have e a µ ∼ δ a µ so that π a ∼ p a Two reference frames e a µ , e a µ are related by This transformation induces the standard linear realization of the Lorentz transformation on p a The first idea -proposed in [165] -is to argue that due to some QG effects, the measured momentum p will be a non-linear function of the components e a µ π µ Changing frame by performing a Lorentz transformation on e as in (3.9) will clearly lead to a non-linear realizationΛ of the Lorentz transformation on p. When constructing explicitly some models to implement this idea, the actual relationship between momentum and frame has been modified in [14,15]. Instead of (3.10), the measured momentum should be defined with respect to an effective tetrad E µ α (e, π, κ). In this sense, this is a picture close to the Finsler geometry approach (cf. Section 3.4) or Smolin's derivation from LQG (cf. Section 3.3.3), since the frame is now momentum dependent. An important assumption is that the map relating a trivial frame e with the effective frame E should be reversible, at least in some approximation. When the fluctuations have appropriate symmetries, the map U κ takes the simple form E µ α = F (e, π, κ)e µ α , and F (e, π, κ) → 1 when κ → ∞. For example, the Magueijo-Smolin dispersion relation [179,181] can be expressed as p α = F (e, π, κ)e µ α π µ with F (e, π, κ) = 1 Lorentz transformations act linearly on the tetrad field e α , so their actionΛ on the effective tetrad E α is specified by the following commutative diagram This induces a non-linear transformation of the measured momentum p, Different proposals argue that QG fluctuations lead to an effective frame E µ α (e, π, κ) [15,108]. We have recalled above in Section 3.3. 3 Smolin's argument in this sense. Another motivation comes from models from quantum information theory [113]. From the QG perspective it is natural to consider that frames should be quantized. Some physical aspects of the use of quantum reference frame can be explored using finite-dimensional systems, such as spin systems. For example, we can use three quantum spins J a , a = 1, 2, 3, as 3d reference frame and look at the projection of another quantum spin S in this frame. The corresponding observable is then We can take by analogy with the QG semi-classical limit a semi-classical frame, that is we consider the reference frame J a given in terms of coherent states |ψ . We have therefore the semi-classical reference ψ| J a |ψ ≡ J a . We consider the spin S projected in this semi-classical frameS A priori the frame J a is independent of the system S. However it was shown [217] that consecutively measuring the observable S a leads to a kickback of the system on the frame, which state then becomes dependent on the system S. Hence consecutive measurements induce the map This is the analogue of the deformation (3.11). It was shown that this map can be decomposed as a rotation together with some decoherence [217]. The decoherence part is non invertible, so the map (3.12) is not properly reversible. It is reversible only in the approximation where the decoherence can be neglected. 
For further details we refer to [106,113]. Non-commutative space-time Non-commutative spaces can be understood as geometries where coordinates become operators. In this sense we move away from the phase space structure we discussed earlier and use the quantum setting. Non-commutative geometries have been mostly introduced by mathematicians and are therefore very well defined mathematically. In general one uses the algebraic concepts of Hopf algebra and so on. In order to restrict the amount of material, we shall present only a pedestrian overview of the topic and refer the more curious reader to the relevant references. Historically, one of the first examples of non-commutative space is due to Snyder [250] who tried to incorporate the Planck length in a covariant way, i.e. without breaking the Lorentz symmetries. The idea is simple and follows similar philosophy as in the LQG case. Space coordinates can be "discretized" if they are operators with a discrete spectrum. Snyder used the subspace generated by the infinitesimal de Sitter boosts J 4µ , of the Lie algebra so(4, 1) generated by the elements J AB to encode spacetime. The coordinates are The spatial coordinates X i = 1 κ J 4i are represented as (infinitesimal) rotations and therefore have a discrete spectrum. The time coordinates has instead a continuum spectrum. From the definition of the Lie algebra so(4, 1), there is a natural action of the infinitesimal Lorentz transformations on X µ , Hence it is possible to have a discrete structure with Lorentz symmetry implemented. By assumption, momentum space is not flat but curved: it is de Sitter space instead of the standard hyperplane R 4 . Snyder picked a choice of coordinates on the de Sitter space such that Lorentz symmetries are implemented. Considering the embedding of de Sitter in R 5 π A given by π A η AB π B = −κ 2 , Snyder picked The dispersion relation would be the classic one, since the Lorentz symmetries are the usual ones, Note that his choice of coordinates is actually very close to the one picked up in special relativity. Indeed in this context, we have the space of 3d speeds which becomes the hyperboloid where the natural choice of coordinates is Snyder did not discuss the addition of momenta. However since momentum space is now curved it is clear that the addition has to be non-trivial. We can take inspiration from the addition of velocities in special relativity: it is given in terms of a product which is non-commutative and non-associative. The space of velocities is then not a group but instead a K-loop or a gyrogroup [259]. A similar addition can be defined for the Snyder case [107]. The addition of momenta will then be non-commutative and non-associative. This means that the notion of translations becomes extremely non-trivial. We shall come back on this structure when recalling the recent results of "relative locality" [23]. For different approaches to deal with Snyder's spacetime we refer to [111] and the references therein. Snyder's spacetime is related to the Doplicher-Fredenhagen-Roberts (DFR) spacetime, which was constructed independently of Snyder's spacetime, using different tools [88], The operators Q µν can be seen as (commutative) coordinates on some extra dimensional space. This space can be seen as an abelianized version of Snyder spacetime [75,111]. Starting from Snyder's commutation relations (i.e. the algebra so(4, 1)), consider Q µν = 1 κ J µν and the limit κ, κ →∞ with κ κ 2 = 1 κ fixed. 
It is not complicated to see that Snyder's algebra gives an algebra isomorphic to the DFR algebra in this limit. Both Snyder's and DFR's non-commutative spaces can also be understood as non-commutative spaces embedded in a bigger non-commutative space of the Lie algebra type [111]. Once again in this case there is no deformation of the Lorentz symmetries, and the dispersion relation is the standard one. However there are non-trivial uncertainty relations which implement a notion of minimum "area" [88]. The famous Moyal spacetime is an example of the DFR spacetime, as may be seen by projecting Q^µν onto a specific eigenspace [91]. The Moyal spacetime is the most studied of the non-commutative spacetimes; there exists a huge literature on the physics in this spacetime, and we refer only to [44,129], where the relevant literature can be found. The tensor θ^µν is made of c-numbers. It is invariant under translations and Lorentz transformations (unlike the Q^µν in the DFR space, which transform under Lorentz transformations). It is not complicated to check that (3.13) transforms covariantly under an infinitesimal translation. However, the case of the (infinitesimal) Lorentz transformations is more tricky: using J_µν ▷ X_α = η_να X_µ − η_µα X_ν and the ordinary Leibniz law, the two sides of (3.13) do not transform consistently. One can then say that the Lorentz symmetries are broken, since the non-commutativity is not consistent with the change of frame. Most often in the study of Moyal spacetime, this is the chosen perspective. One can also look for a deformation of the action of these Lorentz symmetries, to make them compatible with the non-commutative structure [210]. This deformation can be understood as a modification of the Leibniz law for the action on products of coordinates, and it can be extended easily to arbitrary products of functions. With this new Leibniz law, the non-commutativity (3.13) is consistent with the Lorentz symmetries, as one can check. The proper way to encode this modification is to use algebraic structures such as quantum groups [182]. Indeed, the modified Leibniz law comes from a non-trivial coproduct structure. There exists therefore a deformation of the Poincaré group that is the symmetry group of Moyal's spacetime, and as such this spacetime can be seen as a flat non-commutative spacetime. There is no deformation of the action of the Lorentz transformations on the momenta, so it is the usual mass-shell relation that one considers. Furthermore, since the translations are not modified either, there is no modification of the addition of momenta. We refer to [44,129] and references therein for some discussions of the phenomenology of this space. An important example of non-commutative space for QG phenomenology is the κ-Minkowski spacetime. We also mention its cousin, the κ′-Minkowski spacetime. Their non-commutative structures are defined, respectively, by [X_0, X_i] = (i/κ) X_i, [X_i, X_j] = 0 (3.14), and by an analogous relation in which a spatial coordinate plays the distinguished role. In the κ-Minkowski case it is the time coordinate that does not commute with the space coordinates, whereas in the κ′-Minkowski case it is a space coordinate (here X_1, but clearly other cases can be considered) that does not commute with the others. They encode the most rigorous models of deformed special relativity (DSR). But due to (3.14) and ∂_µ ▷ X_α = η_µα, applying the ordinary Leibniz law to both sides of (3.14) leads to a contradiction. We can then argue either that the translation symmetries are broken, or that we can modify the Leibniz law in order to have an action of the translations compatible with the non-commutative structure.
One can check that the modified Leibniz law on the spatial derivatives (the time derivative satisfies the usual Leibniz law) does the job. As in the Moyal case, there is a deformation of the Poincaré group which encodes such a modification. This quantum group is called the κ-Poincaré group [170,184]. When one identifies the translations with momenta, the alter-ego of the modified Leibniz law is a modified addition law of momenta which comes from the non-trivial coproduct of the κ-Poincaré quantum group. Just as in the Snyder case, the momentum space in the κ-Minkowski case is given by the de Sitter space. However in this case, the de Sitter space dS ∼ SO(4, 1)/SO(3, 1) is equipped with a group product, so that the momentum addition (3.15) is non-commutative but associative. The group structure can be obtained by factorizing the group SO(4, 1) = G.SO (3, 1), where G is the group (actually two copies of the group called AN 3 ) encoding momentum space. This implies that there is an action of the Lorentz group on momentum space G but also a back action of G on SO(3, 1) [182,184]. Putting all together, we get a non-linear realization of the Lorentz symmetries on momentum space (N i , R i are respectively the boosts and the rotations) Just like for the translations, the compatibility of the non-commutative structure (3.14) with Lorentz transformations implies a deformation of the Leibniz law for the action of the Lorentz transformations [184]. This is also inherited from the κ-Poincaré group. This non-commutative space was one of the first frameworks used to discuss the non-trivial propagation of gammarays [26]. We refer to [16,28,128,153] for a discussion of the phenomenology of this space. We have presented a set of non-commutative spaces and discussed the realization of the Poincaré symmetries. It is interesting to ask whether all these spaces can be classified. To our knowledge, there are two types of possible classifications: • The first one is to look at all the possible deformations of the Poincaré group using quantum group techniques, i.e. algebraic techniques. If one identifies the momentum operator with the translation, this provides a set of different momentum spaces, from which we can determine by duality the dual spacetime. Therefore determining the deformations of the Poincaré group specifies momentum spaces and the relevant non-commutative spaces. In 4d (as well as in 3d), this classification was performed [215] and a set of 21 deformations have been identified. The Moyal spacetime and the κ-and κ -Minkowski spaces 8 are of course among them. However, Snyder's spacetime is not among them since it is related to a non-associative deformation. This classification is hence missing some (at least historically) interesting spaces. Note that among this classification, there is a number of momentum spaces that appear as Lie groups. It would be interesting to check whether the full set of 4d Lie groups appear in this classification. Finally, we emphasize, that a priori all these 21 deformations are equally valid candidates for a non-commutative description of QG semi-classical flat spacetime. • The second approach is geometric in nature but it has not been performed in detail yet. It consists in classifying all the possible 4d smooth loops (i.e. manifold equipped with a product, a unit and an inverse) that carry an action of the Lorentz group. 
This would provide a classification of possible momentum spaces, and one would need to check if one can find some action of the Lorentz group on them to get the relevant Poincaré group deformation. Different works have pointed out that classifying the loops can be seen as a classification of the possible connections on some given manifold (especially in the homogenous case). For example on the de Sitter space as momentum space, one can have a group structure G = AN 3 which would give rise to κ-Minkowski spacetime, or a K-loop structure, giving rise to Snyder's spacetime [23,93,107]. These different choices amount to different types of connections on the de Sitter space. We should note however that with this geometric classification, we would get the nonassociative spaces but we would miss momentum spaces of the quantum group or of the Moyal type. Hence the algebraic and geometric approaches are complementary. The Moyal and κ-Minkowski spacetimes have attracted most attention in the QG phenomenology/non-commutative communities. Among these two, the Moyal spacetime is the best understood, probably because the non-commutativity is in fact of a simpler nature (a twist) than κ-Minkowski. Historically Moyal spacetime was also identified before κ-Minkowski. In the LQG community, the κ-Minkowski case is the most popular example, though κ -Minkowski also appears in 3d QG [199]. Non-commutative structures appear in the LQG context through the failure of area operators acting on intersecting surfaces to commute [32]. Currently, there is no derivation of non-commutative "coordinates" in 4d LQG. We shall come back in Section 4.3.2 on a derivation of non-commutative field theory for matter using the group field theory approach. Relative locality This recent development in the construction of effective theories proposes a tentative interpretation of non-commutative coordinates at the classical level. The basic claim of "Relative Locality" (RL) is that we live in phase space, not in space-time. In this view, rather than a global space-time, there are only energy-momentum dependent cotangent spaces (interpreted as spacetime) of the curved momentum space M, a concept first proposed in [180]. RL attempts to describe "classical, non-gravitational quantum gravity effects", i.e. remnants of QG, when gravity and quantum theory are switched off by going to the limits → 0 and G → 0. In this limit the Planck length goes to zero as well, whereas the Planck mass κ is finite. In this way an invariant energy scale is obtained, but not an invariant length. The scale κ can be introduced by considering a homogeneous momentum space with (constant) curvature κ. In fact even more general curved momentum spaces can be introduced. If momentum space is not homogeneous of curvature κ, but of a more general type, the scale κ will still appear in the modified dispersion relation or the non-trivial addition in order to have dimensional meaningful quantities. Quite strikingly the structures associated to momentum space -dispersion relation, addition of momenta -can be related to geometric structures on momentum space. Indeed, it can be shown that the metric on momentum space encodes the dispersion relation, whereas the sum of momenta is encoded through a connection (which does not have to be either metric compatible or torsion free or flat). To be more explicit, consider an event. Following Einstein an event can be seen as the intersection of different worldlines. 
From a quantum field theory perspective, an event can be seen as a vertex in a Feynman diagram, with a given number of legs. Each leg can be seen as a particle labelled by J and momentum p J µ . In case of a general addition ⊕, we can write the total momentum at this vertex as with the connection 9 Γ bc a on momentum space. A non-zero torsion is equivalent to the non-commutativity of the addition, whereas nonassociativity of the addition is equivalent to a connection with non-zero curvature [23,93]. Hence, a Lie algebra type non-commutative space which has therefore a momentum addition (in general) non-commutative but associative will correspond -in this RL context -to a choice of a flat connection with torsion. In particular, the κ-Minkowski spacetime illustrates this case [125]. The particle J has momentum p J ∈ M and position x J ∈ T * p J M. At the intersection of the worldlines when particles interact, we expect conservation of momenta, that is P tot µ = 0. This means that this event (the vertex) sits in the cotangent space T * p=0 M at the origin of momentum space and has coordinates z µ . To get the vertex interaction in T * 0 M, we therefore need to parallel transport the (covector) coordinates where τ µ J ν encodes the parallel transport associated to the particle J. In [22], the authors showed that an action for N particles with the constraint implementing momentum conservation P tot µ lead to such a construction. There is a unique coordinate z µ for the interaction vertex and in particular (cf. (3.16)) The interaction coordinates do not Poisson commute, T σ µν and R σ µνρ are respectively the torsion and the Riemann tensor for the connection Γ associated to the momenta addition (3.16). These coordinates are therefore interesting candidates for the meaning of the quantum operators encoded in the non-commutative geometry as discussed in Section 3.7. Indeed, if we take the case of a connection with constant torsion but zero curvature, (3.17) becomes the classical analogue of a Lie algebra type non-commutative space. In particular one can retrieve the κ-Minkowski classical case [125]. The transition to the reference frame of a distant observer is carried out simply by a translation with the infinitesimal form In consequence, for different particles δx µ I is different, so that the translated endpoints of the worldlines do not meet at the vertex and the interaction appears non-local for distant observers. In this way locality becomes relative to a certain extent, as pointed out in [27,132,135,138,243,246]. In [23] the emission of a low-energy and a high-energy photon and their absorption by a distant detector as a model for radiation from gamma-ray-bursts is discussed in detail. In RL, the speed of light is an invariant, but the trajectories of photons at different energies with their origin and end points lie in different copies of space-time. To compare them and to calculate a possible time delay between the photons, one must parallel transport the corresponding cotangent spaces into one. Note that, for example for an absorption event, the endpoint of the detector's worldline before the absorption, the origin of its worldline after the absorption and the endpoint of the photon's worldline do not coincide in general. These non-localities at the absorption or emission events are relative, depending on the observer's reference frame, but the resulting time delay in first order, is an observer-independent invariant. 
T is the running time of the high-energy photon in the detector's frame (at rest with the source in the model), E is the photon's energy in this frame and N +++ denotes the component of the non-metricity tensor (N abc = ∇ a g bc ) of the connection along the photon direction. In the RL framework current observations [2,3] can be interpreted as implying a bound on non-metricity. A second effect, derived from the same model, is dual gravitational lensing: two photons with proportional momenta need not propagate in the same direction. When the connection of the curved momentum space has torsion, a rotation angle is predicted that involves the photons' energies E i and a vector T a , which arises by projection of the torsion tensor into the direction of the photons' momenta. Effects of curvature show up in the approximation quadratic in E/κ [93]. The framework of κ-Poincaré algebras is an example with a non-metric connection, zero curvature and non-zero torsion. Further experiments that may measure or bound the geometry of momentum space at order κ −1 include tests of the linearity of momentum conservation using ultracold atoms [31] and the development of air showers produced by cosmic rays [29].

Generalized uncertainty principle

This approach is connected with the strong gravity regime rather than with the amplification of "low" energy effects. Nevertheless, it has a formal similarity with DSR theories. But the concept of the generalized uncertainty principle (GUP) has a physically compelling basis: QG effects should occur at densities comparable with the Planck density, not in the presence of large, extended masses. For a modeling of gravity-caused modifications of scattering processes at extreme energy, see [133]. GUP is tied to the center-of-momentum (c.o.m.) energy of two or more particles, concentrated in a small region in an interaction process, or the energy of one particle in relation to some matter background, like the CMB. When in a scattering process the c.o.m. energy is high enough that in the scattering region the energy density comes close to the Planck density, there is significant space-time curvature and gravity is non-negligible. The gravitational influence is described by a local energy dependence of the metric. At this point GUP introduces a split between the momentum and the wave vector of a particle. In the case of two scalar particles scattering in the c.o.m. system the asymptotic momenta are p µ and −p µ , related linearly to the wave vectors k µ = p µ /ℏ in the asymptotic region. The curvature caused by the energy density is described by a dependence of the metric on the wave vector, g µν = g µν (k), leading to a modified dispersion relation (3.18), where m is a mass parameter. Since relation (3.18) contains higher-order terms in k, k µ is not a Lorentz vector and will not transform according to standard flat-space Lorentz transformations. Provided we do not assume graviton production, the asymptotic momenta are conserved, but in the interaction region the relation between p µ and k µ becomes nonlinear. Formally, the asymptotic momenta, which are acted upon linearly by the Lorentz group, play a role analogous to that of the pseudo-variables in DSR. But, whereas in DSR the latter ones are mere auxiliary quantities, here they have a clear, distinct physical meaning. The nonlinear variables k µ , on the other hand, play a more or less auxiliary role, in contrast to the nonlinear "physical variables" in DSR.
So far, this is nothing more than an effective description of gravity, when it plays a role in high energy particle interaction. The place where it is encoded in the effective theory is the form of the function k(p), or its inverse, respectively. This function could, in principle, encode Newtonian gravity, GR, or QG. The input from QG that is made here is the existence of a minimal length, 1/κ. General conditions on the functional dependence of k on p are given in [133]: 1. For energies much smaller than κ the usual linear relation is found. 2. For large energies, k goes asymptotically to κ, not to infinity. The function is invertible, i.e. it is monotonically increasing. With these conditions satisfied, k remains bounded when p grows arbitrarily and the (effective) wavelength λ = 2π/|k| does not decrease below the invariant length. Theories of this type have been examined in various contexts as to their analytical structure and phenomenological consequences [73,146,175,176]. Recalling the quantum mechanical relation p = k, an energy-dependent relation between momentum and wave vector can be formulated as an energy dependence of Planck's constant, p a = (p)k a , thus introducing an additional quantum uncertainty. Such a modification of quantum mechanics was suggested for the first time by Heisenberg [127]. The physical idea is that a sufficiently high energy particle, released in the interaction at a detecting process, curves and disturbs space-time so that an additional position uncertainty arises, which enhances the quantum mechanical one. In this way the accuracy of position measurement is bounded from below by an invariant length scale. To formulate GUP in terms of commutation relations, we postulate the canonical relation for the wave vector with a coordinate x and derive the relations between coordinates and momenta, This results in the generalized uncertainty relations Comparison with DSR-type theories shows modified dispersion relations as a common feature, in this sense they are almost two sides of the same coin [131]. The interpretation of MDRs, however, is quite controversial. DSR deals with particle propagation in flat space, GUP deals with QG effects in regions with strong gravity. Both approaches could be compared with each other and with the angle operator [190] by calculating corrections to scattering cross sections. A comparison of DSR versus GUP was made in [130], resulting in an opposite influence of DSR and GUP on scattering cross sections. QG decoherence Another very general approach, without close relation to a specific QG theory, is decoherence [260]. This framework considers quantum theoretical fluctuations of the Minkowski metric, so the proper time of particles fluctuates at a time scale λT P , a few orders above the Planck time. In matter wave interferometry, described in [260], atoms act as clocks with a very high frequency. When in such an experiment a matter wave is split into two components and recombined, spacetime fluctuations are expected to cause decoherence, a process, where ingoing pure states become mixed by the dissipative action of the fluctuating background. [260] presents one realization and potential observable consequences, which predict a factor λ of the order 10 3 . Other realizations can be found, for example, in references in [260] and in [145], where impacts on neutrino physics are considered. For an early contribution see [211]. 
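To make the three conditions on the function k(p) listed above concrete, a simple functional form that satisfies all of them is k(p) = κ tanh(p/κ). This particular choice is only an illustration adopted here, not a form singled out by the works cited above; the short sketch below checks the limiting behaviour and the resulting minimal effective wavelength:

import numpy as np

# Illustrative choice only: k(p) = kappa * tanh(p / kappa) is linear for p << kappa,
# monotonically increasing, and saturates at kappa as p -> infinity, so it satisfies
# the three conditions on k(p) stated above.  Planck units, kappa = 1.
kappa = 1.0

def k_of_p(p):
    return kappa * np.tanh(p / kappa)

for p in [1e-3, 1.0, 1e3]:
    k = k_of_p(p)
    wavelength = 2.0 * np.pi / k          # effective wavelength 2*pi/|k|
    print("p = %8.1e   k = %.4f   lambda = %10.3f" % (p, k, wavelength))
# The effective wavelength never drops below 2*pi/kappa: an invariant minimal length,
# as described above, no matter how large the momentum p becomes.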
Quantum f ield theory frameworks In this section we review Lorentz symmetry violating EFTs, associated constraints, a possible "combinatoric lever arm" in scattering experiments, and non-commutative field theory. Constraints on Lorentz violation with ef fective f ield theory A successful area of quantum gravity phenomenology in recent years is in the area of Lorentz symmetry breaking. There is a well-developed framework, many new effects, and very strong constraints on these effects, even below the Planck scale. However we do not know whether violations of local Lorentz invariance occur in LQG -in fact there are strong arguments suggesting LLI should be preserved as discussed in Section 2.6. The now-extensive body of work provides a firm foundation for future derivations, constraints and observational searches. When local Lorentz symmetry is broken a cascade of physical effects appear. Whether the LV is via symmetry breaking or by an additional background field, the effects are studied by adding the possible terms with the new Lorentz violating vector and tensor fields. To organize these effects and the associated constraints, the terms in the particle Lagrangian are characterized by their mass dimension. Without ties to a specific fundamental theory, the minimal Standard Model Extension provides a framework of organizing renormalizable higher dimension terms in the standard model [79]. There is an extensive literature on these effects and the associated constraints. For more on these constraints see [53,150]. See [152] for data tables on the standard model extension parameters, updated annually. There is a wealth of effects that arise from LV. Some of these include • sidereal variation in signals as the Earth moves relative to the preferred frame or directions. • new processes such as photon decay, photon splitting, and vacuum Cerenkov radiation • shifting of thresholds of allowed processes in LLI physics such as the GZK threshold • kinematic effects arising from modifications to dispersion relations that accumulate over cosmological distances or over a large number of particles Work on dimension 5 and 6 operators is more recent and this section will focus on these. For more on work in LV prior to the developments in LQG see the reviews [139,141,163,194]. The primary reference to consult for dimension 5 LV QED is [141]. Effective Field Theory (EFT) is the framework for describing much of the standard model of particle physics. According to the Wilsonian view of renormalization, at the relatively low energies (as compared to the Planck scale in the case of quantum gravity) explored by accelerators, the dominant interactions in the Lagrangian/action are necessarily the relevant terms at these energies. This means that when introducing new effects it is natural to use the EFT framework to model the physics. In this section the EFT framework is used to explore the possible LV dimension 5 operators in QED, yielding cubic modifications to dispersion relations. Including a time-like background four vector field u µ in QED Myers and Pospelov found three LV dimension 5 operators that were (i) quadratic in the same field, (ii) gauge invariant, and (iii) irreducible under use of the equations of motion or a total derivative [208]. Due to the background vector field u µ there is a preferred frame, usually assumed to be the frame in which the cosmic microwave background is isotropic. 
The resulting operators (up to factors of 2 used here to simplify the dispersion relations) are Recall that since κ = M P , the parameters ξ, ζ i are all dimensionless. These terms violate CPT symmetry. The resulting modified dispersion relations include cubic modifications compared to the LLI case. For photons, where the signs indicate left and right polarization and the modification depends on the magnitude of the momentum. While, for fermions, in which η ± = 2(ζ 1 ± ζ 2 ) and the signs are the positive and negative helicity states. The analysis of the free particle states is in [141] where it is shown that the positron helicity states have a relative sign compared to the electron; there are only two fermion parameters η ± . When the sign of the modification is negative (positive) the energy decreases (increases) relative to the LLI theory. Thus the curve E(p) flattens (steepens) out at high energies. This strongly affects the process rate, thresholds, and the partitioning of momentum. Comparing these dispersion relations to those found with the heuristic LQG computations in Section 3. (Recall that L is the characteristic scale of the semi-classical state, above which the geometry is flat.) The photon MDR is essentially identical. The fermion MDR has only one helicity parameter and has an additional suppression due to the scaling Lκ. Further work on LQG coherent states would clarify this scaling and illuminate the additional scaling arising in the preliminary calculation, perhaps even show an associated custodial symmetry. The dimension 6 terms are identical to the EFT framework. However, none of these operators considered have been derived unambiguously from the framework of LQG. The EFT analysis has been extended in a variety of ways. Hadrons [174] and even heavy nuclei [237] were included, the framework has been folded into the Standard Model Extension [151]. Additionally, the analysis was generalized to arbitrary 4-vectors is [124]. In the original work of Myers and Pospelov the 4-vector u µ only had a non-vanishing time component in the preferred frame, chosen due to the stringent constraints on spatial anisotropy set by clock comparison and spin-polarized matter experiments. Given the remarkable bounds on the parameters in the Myers-Pospelov model discussed below, the authors of [124] work with a model of anisotropic media and show that the bounds in the Myers-Pospelov model may be weakened when analyzed in the more general, anisotropic model. In the context of LV EFT a variety of new phenomena occur [140,141] • New processes, forbidden in the usual LLI theory, can occur. • Thresholds for processes in the LLI theory can shift. • Upper thresholds can occur; momenta can be high enough so that processes turn off. The modifications become important when the mass term is comparable to the modification. Thus for dimension 5 operators the cubic corrections become important when p crit ≈ (m 2 M P ) 1/3 . Since the effects arise at high momentum many calculations are done for m p κ, which allows for considerable simplification. We'll use to denote results in this "high momentum" limit. It is worth keeping in mind that this framework excludes a wider class of theories, for instance those that contain violations of local energy-momentum conservation. This wider arena was briefly reviewed in Section 3.10. Nevertheless at some energy scale the new theories must match known results and fit within the EFT framework so at lower energies the framework is an excellent approximation. 
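Since p crit controls where the new effects switch on, it is useful to evaluate it for a few particle species. The short numerical sketch below uses standard particle masses and the Planck energy; the dimension 6 scale (m 2 M P 2 ) 1/4 is included for comparison and follows from the same power counting (an extrapolation made here, not a number quoted above):

# Critical momenta at which Planck-suppressed corrections compete with the mass term:
# dimension 5: p_crit ~ (m^2 M_P)^(1/3); dimension 6: p_crit ~ (m^2 M_P^2)^(1/4)
# (the dimension 6 scale follows from the same power counting).  Energies in GeV.
M_P = 1.22e19

particles = {"electron": 0.511e-3, "proton": 0.938, "neutrino (0.1 eV)": 1.0e-10}

for name, m in particles.items():
    p5 = (m**2 * M_P) ** (1.0 / 3.0)
    p6 = (m**2 * M_P**2) ** 0.25
    print("%-18s  dim 5: %.1e GeV   dim 6: %.1e GeV" % (name, p5, p6))

For electrons the dimension 5 scale lies in the tens of TeV, which is why TeV γ-ray astrophysics provides the natural hunting ground for the threshold effects discussed next.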
The EFT framework also has the advantage that clear physical predictions can be made such as in the cases of particle process thresholds, where the rates of new particle processes determine the thresholds [141]. Physical ef fects giving current constraints To give a flavor of the nature of the constraints in the next subsections we sketch the derivations of the current tightest constraints on dimension 5 LV QED. There are many other effects but in the interest of reviewing those that give both a sense of the calculations and the strongest constraints, we focus on vacuum birefringence and photon decay. The field is comprehensively reviewed in [141], to which the reader should turn for details. A recent update is [163]. The first phenomenon, arising from birefringence, is purely kinematical. The remaining constraints on dimension 5 LV QED are dynamical in the sense that the dynamics of EFT is employed to derive the constraints. The effects usually involve an analysis of process thresholds. Threshold constraints require answers to two questions, "Is the process allowed?" and "Does it occur?" A typical process involves a decay of an unstable particle into two particles. In the EFT framework we have usual field theory tools at our disposal so we can compute the rate of decay using the familiar expression from field theory. Denoting the outgoing momenta p and p and helicity with s and the matrix element by M (p, s, p , s , p , s ) the rate is Because of the modifications in the dispersion relations the nature of the integration differs from the simple textbook case. The threshold for the process is derived by determining the momentum at which the matrix element is non-vanishing and the momentum space volume is sufficiently large to ensure that the process is rapid. Due to the modifications in the dispersion relations the momentum space volume differs significantly from the corresponding LLI calculation. Kinematic constraints arising from birefringence From the form of the dispersion relation (4.1) it is clear that left and right circularly polarized photons travel at different speeds. Linear polarized high energy radiation will be rotated through and energy-dependent angle, depolarizing the radiation. After a distance d the polarization vector of a linearly polarized plane wave with momentum k rotates as [140] θ Given the finite bandwidth of a detector, k 1 < E < k 2 , the constraint on the parameter ξ can be derived from the observation of polarization in the relevant bandwidth; if the LV term was large enough there would be no measured net polarization. If some polarization is observed then the angle of rotation across the bandwidth must be less than π/2 [114,142]. The simple detection of a polarized signal yields the constraint, from (4.4), of For a refinement of this argument, and other approaches relying on knowledge of the source, see [163, Section IV.B]. The current best constraint on |ξ| is discussed in Section 4.1.5. An intriguing change in polarization during a GRB was recently reported [263]. Dynamical constraints arising from photon stability With LV photons can decay via pair production, γ → e + e − . The threshold for photon decay is determined as described above, by investigating the rate. Then given the observed stability of high energy photons, constraints can be placed on the new LV terms in the dispersion relations. Photon decay generally involves all three LV parameters ξ, η + , and η − . 
However as argued in [141], we can obtain constraints on the pairs of parameters (η − , ξ) or (η + , ξ) by considering the case in which the electron and positron have opposite helicity. We present the calculation for η + ≡ η and will see that the η − case is easily obtained from this one. The LV terms are η + p 3 − for the electron and −η + p 3 + for the positron. In the threshold configuration the outgoing momenta are parallel and angular momentum is not conserved. But, above the threshold, the outgoing momenta can deviate from the parallel configuration and, with the additional transverse momentum, the process will conserve angular momentum. For slight angular deviations from the threshold configuration of the outgoing momenta the matrix element does not vanish and is proportional to the perpendicular momentum of the outgoing particles [141]. The volume of the region of momentum space where photon decay occurs is determined by energy conservation and the boundary of the region occurs when the perpendicular momentum vanishes. We denote the photon momentum k, electron momentum p − , positron momentum p + , and the helicity parameter η = η + < 0. Thus, from ω = E + + E − and equations (4.1) and (4.2) we see that or, using conservation of momentum and the high momentum limit, the expression of energy conservation becomes where p ⊥ is outgoing particle transverse momentum. The LV terms in the dispersion relation raise or lower the particle's energy as a function of the momentum. Because of the flattening out of the energy at high momentum for negative η, the outgoing energy is reduced if one particle carries more momentum than the other. The threshold is thus determined by an asymmetric momentum partition [140]. Partitioning via an asymmetry parameter ∆ > 0 we have, with p − = (k/2)(1 − ∆) and p + = (k/2)(1 + ∆), This relation is enforced in the rate of equation (4.3) by the energy-conserving delta-function. Carrying out the integrations over the momenta, the integral for the rate is reduced to a single integration over the longitudinal momentum, when the perpendicular momentum is determined by energy conservation. Then Given the strong constraints on ξ via vacuum birefringence (see Section 4.1.2), ξ 0, we proceed with vanishing ξ. The volume in momentum space for photon decay opens up as the perpendicular momentum increases from zero. The threshold is then determined by p ⊥ = 0, or − 4m 2 κ ηk 3 + ∆ 1 − ∆ 2 = 0. A simple optimization shows that the asymmetry is ∆ = 1/ √ 3 and the outgoing momenta are (k/2)(1 ± 1/ √ 3) yielding a threshold of , as determined from (4.5) with ξ = 0. The rate increases rapidly above this threshold [141] so observation of photons up to this energy places limits on the size of the parameter η ≡ η + . Thus, when we consider the 80 TeV photons observed by HEGRA [9] we have |η + | < 0.05. In general high energy photon observations place limits on one helicity. The more general case with non-vanishing ξ may be derived from (4.6), with the complete allowed region in (η, ξ) parameter space determined numerically. It is interesting to note that, contrary to what one might expect from early work [140,149], the threshold for the process is not determined by minimizing the outgoing energy, which is found from the solution of (4m 2 κ/ηk 3 )∆ − (1 − ∆ 2 ) 2 = 0. Instead we start by asking, does the process occur? The rate (4.3) then shows that the threshold is determined by the opening up of momentum space due to non-vanishing p ⊥ . 
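The threshold logic just described is easy to check numerically. The sketch below works in the high-momentum limit with ξ = 0 and the helicity assignment used above, scans the longitudinal momentum partition, and bisects for the lowest photon momentum at which the total outgoing energy can drop to ω = k. The grid scan, the bisection bracket and the example value of η are choices made here for illustration, so the numbers are indicative only:

import numpy as np

# Rough numerical check of the photon-decay threshold in the high-momentum limit,
# with xi = 0 and the helicity assignment used above: the electron carries the
# +eta*p^3/kappa term and the positron the -eta*p^3/kappa term, eta = eta_+ < 0.
# Energies and momenta in GeV.
M_P = 1.22e19        # kappa, the Planck energy
m_e = 0.511e-3       # electron mass
eta = -0.05          # example value of the LV parameter

def energy_excess(k, q):
    # (E_+ + E_-) - omega at vanishing transverse momentum, keeping only the small
    # m^2/(2p) and eta*p^2/(2*kappa) corrections (this avoids round-off problems).
    p = k - q        # positron longitudinal momentum; the electron carries q
    electron = m_e**2 / (2.0 * q) + eta * q**2 / (2.0 * M_P)
    positron = m_e**2 / (2.0 * p) - eta * p**2 / (2.0 * M_P)
    return electron + positron

def decay_allowed(k, n=5000):
    # the decay is kinematically open once some partition makes the excess <= 0,
    # i.e. once the momentum-space region with real transverse momentum opens up
    q = np.linspace(1e-4 * k, (1.0 - 1e-4) * k, n)
    return np.min(energy_excess(k, q)) <= 0.0

lo, hi = 1.0, 1.0e7  # bracket the threshold between 1 GeV and 10^7 GeV
for _ in range(60):
    mid = np.sqrt(lo * hi)
    if decay_allowed(mid):
        hi = mid
    else:
        lo = mid
print("photon decay threshold ~ %.0f TeV for eta_+ = %+.2f" % (hi / 1.0e3, eta))
# For |eta_+| = 0.05 the threshold comes out near 87 TeV, so the observation of
# 80 TeV photons bounds |eta_+| at roughly this level, consistent with the value
# quoted above.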
There are a wide variety of other processes that can contribute to the dimension 5 LV QED constraints. In addition to the ones discussed above, there is vacuum Cherenkov (e ± → e ± γ), helicity decay (e ± → e ∓ γ), fermion pair production (e ± → e ± e ± e ∓ ), photon splitting (γ → nγ), and the shifting of the photon absorption γγ → e + e − threshold. For more on these the reader should consult [139,141,163]. In addition the model has been generalized to include processes including dimension 6 CPT even operators in QED and for hadrons [174], and for nuclei [237]. Neutrino physics Planck-scale modifications of neutrino physics are usually considered in the framework of LLI violating effective field theory. It is well-known that neutrinos in a definite flavor state |ν α , α = e, µ, τ , are superpositions of neutrinos in definite mass states, |ν i , i = 1, 2, 3, with the unitary mixing matrix U αi . On the other hand, neutrinos in a definite mass state are superpositions of flavor states, In [78] neutrino oscillation in a simplified two-flavor model is considered. The typical oscillation length L, i.e. the distance, after which a neutrino, moving approximately at the speed of light in special relativity, is likely to have changed its flavor, is derived in a simple, straight-forward calculation. It depends on the neutrino's energy, with ∆m 2 = m 2 2 −m 2 1 and ∆p = p 2 −p 1 . The approximate equality is based on for each i = 1, 2. Planck scale modifications arise when we assume modified dispersion relations of the type (leaving aside possible birefringence effects) with coefficients η i , possibly different for each mass state. Such MDRs would imply flavor oscillations even for neutrinos with negligible mass, as long as the coefficients η For the sake of simplicity, here we present only the version suggested in [78], based on the (exact) alternative MDR which avoids oscillations of neutrinos with zero rest mass. With this, the modified oscillation length becomes with the mass difference ∆m = m 2 − m 1 . Neutrino detectors like IceCube may be sensitive to oscillation length corrections of atmospheric neutrinos [195]. For cosmogenic neutrinos, which have oscillated many times on their way to earth, a detection of such effects would be very difficult. In effective field theory with higher mass dimension terms, coupled to a LLI breaking fourvector and exact energy and momentum conservation, neutrinos are favorite objects for threshold calculations. Neutrino splitting, ν → ννν for example, is forbidden in SR, but could be made possible by LV effects. Generally, for processes derived from the MDR (4.7) and energymomentum conservation one obtains a threshold energy E th ∼ (m 2 M n−2 P ) 1 n . With η i assumed to be of order unity, the small neutrino mass leads to a threshold energy of ∼ 20 TeV for ν splitting. If this process takes place, it must result in a cutoff in the energy spectrum of cosmic neutrinos. Taking the decay length for neutrinos above the threshold energy into account, for η i ∼ 1 in [195] an estimate of 10 18 −10 19 eV for the cutoff is given, when the neutrinos come from a distance of the order of Mpc. Observation of higher neutrino energies would lower the upper bounds to the parameters η (n) i . Current bounds from astrophysical observation: summary The current best constraints on the dimension 5 LV QED parameters arise from γ-ray bursts and from an extensive analysis of the spectrum from the Crab Nebula [172]. 
Using the vacuum birefringence effect and recent observations of the γ-ray burst GRB041219a, the dimension 5 photon parameter ξ has been constrained to |ξ| ≲ 10^−14 [161,252]. At 95% confidence level |η ± | < 10^−5 from the analysis of the spectrum from the Crab nebula [173]. Since the terms are already suppressed by the Planck scale, terms arising from dimension 5 operators are sufficiently constrained as to appear ruled out. The current constraints, and some prospects for further improvements, are illustrated in Fig. 1.

Figure 1. The constraints on dimension 5 (a) and dimension 6 (b) QED parameters. The log-log plots show the allowed region (dark grey) and the constraints. The current constraints on dimension 5 parameters are shown in red and grey. The grey horizontal lines are due to the lack of birefringence (see Section 4.1.2) from GRB photons [161,252]. The red vertical lines are due to the analysis of the Crab spectrum [172]. The blue solid lines show limits that would arise from an upper threshold on pair production at k th ∼ 10^20 eV. The green dash-dot lines show the constraints that would arise from a γ-decay threshold of k th ∼ 10^19 eV. The dimension 6 constraints (b) have the same color coding. The GRB birefringence constraint is not relevant to the dimension 6 case since the operator is parity even, contrary to what is reported in [161]. Original plots courtesy of Stefano Liberati. See [163] for more detail.

Other astrophysical arenas have been suggested for testing these theories, including neutron stars [74,118]. But it is clear from these studies that in the EFT framework the effects are too small to be observationally accessible [118]. Considering that the Planck scale is already factored into the modifications, the severity of the constraints is impressive. These results from astrophysics inform the development of quantum gravity. However, there is another reason to suspect that these modifications do not occur in the EFT framework: the naturalness problem.

The naturalness problem

Within the EFT framework a naturalness problem suggests that the LV operators are ruled out. The argument is based on renormalization: the higher dimension LV operators generate lower dimension operators through radiative corrections, and the low-energy effective theory will contain all terms consistent with the symmetries of the microscopic theory. The usual power-counting arguments that give divergences also determine the natural size of the Planck-scale suppressed terms. Generically these appear at dimension 4 or less, with no Planck scale suppression. One can show [80,81] that the operators generated by radiative corrections produce effects that are incompatible with current particle data. Thus the LV parameters would have to be unnaturally fine-tuned to cloak the LV. Within the EFT framework this naturalness problem is significant. However if there is a custodial symmetry then the radiative terms will not appear at lower dimension. Even a subgroup of the symmetry manifest in the macroscopic, continuum approximation is sufficient. A good example of this is in Euclidean lattice theory, where the discrete rotation sub-group on a hypercubic lattice, which becomes the full rotation group as the lattice spacing goes to zero, is enough to prevent the generation of the lower dimensional operators. Other symmetries existing in the vast range of scales from the "low" TeV scale to the high Planck scale might protect the theory from these terms.
This is the case for a model with supersymmetry; the symmetry-preserving operators first appear at dimension 5 [120]. This solves the naturalness problem in that the lower dimensional operators are protected by a custodial symmetry. But, of course, supersymmetry is not a low-energy symmetry. When supersymmetry is softly broken with explicit symmetry breaking terms at a scale M SUSYB , the lower dimension terms return, albeit additionally suppressed by factors of M SUSYB /κ raised to some power so, again, the naturalness problem returns at dimension 5 and the parameters would have to be fine-tuned. At dimension 6 however, the additional suppression is sufficient to evade current limits, provided the supersymmetry breaking scale M SUSYB < 100 TeV [63]. Another possibility was raised recently. Gambini et al. [103] study a Euclidean lattice model, with distinct spatial, a, and temporal, (1 + aµ)a lattice scales. The authors find that at the one loop order the leading contribution to effective dimension 4 operators is aµ. If one takes the stand that the lattice spacing a should remain small but finite then the lower dimension term in the selfenergy is suppressed by the additional factor µ and thus could evade the current bounds on this effect. Nonetheless within the EFT framework of LV this does not solve the naturalness problem since the theory would have to be fine-tuned through the scale µ. See [216] for further comments. Incidentally Gambini et al. show that in the 4D "isotropic scaling" case in which the temporal lattice scaling is also a, that lower dimensional corrections do not arise, since the "isotropy" symmetry protects the theory from such corrections. The authors argue in Section IV.B of [102] that in a generally covariant theory the propagators should be constructed with reference matter fields, physical "rods and clocks" rather than the flat background used in EFT. Since the distribution associated to these physical rods and clocks cannot be infinitely narrow, and should be considerably larger than the Planck scale, they would provide a natural cutoff for LV effects without the generation of corrections at lower dimension. In the EFT framework, the widths could provide an additional scale and associated additional suppression, as in the soft breaking of super symmetry. It would be interesting to see whether the hint of just such a scale and suppression in the modified dispersion relations (3.3) could be made more precise. Generalizations and prospects for improved constraints Given the naturalness problem and the tight constraints on dimension 5 LV QED in the EFT framework several generalizations have been considered. Building on earlier work dimension 6 CPT-even LV operators were studied [193] and constrained [163]. In the pure photon sector in the Standard Model Extension there are bounds up to dimension 9 [151]. Ultra-high energy cosmic rays (UHECR) hold promise for tightening the current constraints. One striking aspect of the cosmic ray spectrum is the expected GZK cutoff. In the LLI theory cosmologically sourced ultra-high energy protons interact with cosmic microwave background producing pions and lower energy protons, neutrinos, and γ-rays, reducing the energy of the original photons. The resulting GZK (so named for Greisen, Zatsepin and Kuzmin) cutoff occurs at E GZK 5 × 10 19 (ω cmb /1.3 meV) −1 eV, where ω cmb is the energy of the background photon. LV terms in the hadronic dispersion relations can shift the threshold for photo-pion production [149,174]. 
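The numbers behind the GZK argument are easily reproduced. The sketch below evaluates the standard (LLI) photo-pion threshold for a head-on collision with a CMB photon and then estimates, purely by power counting, how small a dimension 5 proton parameter would have to be before a cubic term η E 3 /κ competes with the kinematic scale at that energy. The second step is a rough order-of-magnitude exercise made here for orientation, not the detailed statistical analysis of [174]:

# GZK photo-pion threshold (LLI case) and a rough sensitivity estimate for a
# dimension 5 proton term.  Energies in GeV; the sensitivity estimate is an
# order-of-magnitude power-counting exercise, not the analysis of the cited papers.
m_p = 0.938          # proton mass
m_pi = 0.135         # pion mass
M_P = 1.22e19        # Planck energy
w_cmb = 1.3e-12      # CMB photon energy, 1.3 meV in GeV

# head-on p + gamma -> p + pi threshold: s = m_p^2 + 4*E*w >= (m_p + m_pi)^2
E_GZK = (2.0 * m_p * m_pi + m_pi**2) / (4.0 * w_cmb)
print("LLI GZK threshold ~ %.1e eV" % (E_GZK * 1e9))      # ~ 5e19 eV

# a cubic term eta*E^3/M_P competes with the kinematic scale ~ 2*m_p*m_pi + m_pi^2
# (which sets the threshold) once |eta| is of order:
eta_sensitivity = (2.0 * m_p * m_pi + m_pi**2) * M_P / E_GZK**3
print("UHECR sensitivity to a dim-5 proton parameter: |eta| ~ %.0e" % eta_sensitivity)

The sensitivity comes out at the 10^−14 level, which illustrates why UHECR data can constrain Planck-suppressed hadronic parameters so severely.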
Recently observations in support of the GZK cutoff were announced [1,4,221], so the LLI theory is favored. A detailed statistical analysis of UHECRs severely constrains proton and pion parameters in the dimension 5 and 6 CPT-even model [174]. The GZK process provides another mechanism for constraints. Photo-pion production from the UHECR protons leads to the production of γ-ray pairs. Cosmic ray observatories have placed limits on the fraction of UHE photons in the cosmic ray spectrum [5,231], which can in turn place limits on photon parameters. With LV, processes can have upper thresholds at high energies, and the existence of an upper threshold for pair production at high energies increases the photon fraction, yielding a conflict with current photon fraction bounds [100]. Likewise a neutrino flux is created by pions originating from the same photo-pion production. The possible tests of LV in neutrino physics are outlined in [163,195]. Additionally, in an intriguing development in the non-relativistic regime, Amelino-Camelia et al. argue that cold-atom-recoil experiments constrain dispersion relation modifications of the form mp/κ [24]. There remains work to be done on the theoretical side, understanding the status of LLI in LQG and, if verified, constraining the MDR parameters, for instance as in equations (3.2) and (3.3). In particular, it would be very interesting to determine which specific structures in LQG, together with any additional assumptions, yield specific effects. As those specific effects are constrained, the structures and assumptions may be tested directly. Some questions for further work include:

• Since the scaling in terms of Υ holds for different dimension operators, the clock comparison experiments, for instance, should yield bounds on Υ and vacuum birefringence for dimension 4 (see [141] page 29). The correlation between the dimension 3 and 4 results could yield limits on Υ.

• Building on [233,234], is there non-integer scaling in the dispersion relations? What specific structures in LQG is the scaling tied to?

• Is there an analogous calculation that can be completed in the spin foam context?

• It is not easy to see how such non-integer scaling could occur in EFT; can it be ruled out? Or is EFT missing something? (See, e.g., [102,103].)

• Can physical measurements and the underlying spatial discreteness of LQG kinematics be reconciled with the EFT framework?

EFT Phenomenology without Lorentz violation: a combinatoric lever arm

Another model in the EFT framework demonstrated that there may be "lever arms" intrinsic to the granularity of space [189,190], illustrating one way in which the combinatorics of SNW nodes may raise the effective scale of the granularity. The model is based on the angle operator. The asymmetry in the angular spectrum shifts the distribution of angles away from the sin θ isotropic distribution of polar angles in 3-dimensional flat space. The distribution is recovered when the spin through the three surfaces, "flux" or s i , satisfies 1 ≪ s_i ≪ s_3 for i = 1, 2. Fluxes s that satisfy these relations are called "semi-classical". These fluxes are essentially the areas of the partitions discussed in Section 2.4. The model of [190] is based on the assumption that the states of the atom of 3-geometry or intertwiner are equally likely. In addition to the uniform probability measure, the model assumes that all edges incident to the node are spin-1/2. The combinatorics of the atom can be solved analytically for semi-classical fluxes.
Given the fluxes s, or areas of the partitions, and the labels j on the intertwiner vertex, or "intertwiner core", that connects the three partitions, the normalized probability distribution is given (with n = 2j). Accessible measurements of the atom include 3-volume, which is approximately determined by the total area or flux, and angle, determined by the states |n⟩ of the intertwiner core. The fluxes s determine a mixed state ρ_s = Σ_n p_s(n) P_{|n⟩}, where P_{|n⟩} is the projector on the orthonormal basis of the intertwiner core. The sum is over the admissible 3-tuples of integers n such that n_i ≤ s_i. In the discrete case the projector is P_{θ_I} = |θ_I⟩⟨θ_I|, where |θ_I⟩ = Σ_n c_{θ_I}(n) |n⟩. The probability of finding the angle eigenvalue θ_I in the mixed state ρ_s is Prob(θ = θ_I; ρ_s) = tr(ρ_s P_{θ_I}) = Σ_n p_s(n) |⟨n|θ_I⟩|² ≡ p_s(θ). The effect of the modified distribution of polar angles is that the 'shape' of space is altered by the combinatorics of the vertex; the local angular geometry differs from flat 3-space. While these effects would be in principle observable at any flux, the results here are valid for semi-classical flux, 1 ≪ s_j ≪ s_3 for j = 1, 2. As mentioned briefly above, the total flux s = Σ_i s_i determines the 3-volume of the spatial atom and thus an effective mesoscopic length scale, ℓ_s = √s/κ, greater than the fundamental discreteness scale of 1/κ. The combinatorics of the intertwiner provides a lever arm to lift the fundamental scale of the quantum geometry up to this larger mesoscopic scale. So while the shape parameter is free from the Planck scale, the effective length scale, determined by the total flux s, is tied to the discreteness scale of the theory. If the scale ℓ_s of the spatial atom is large enough then the underlying geometry would be accessible to observations of particle scattering. An overly simple example of Bhabha scattering is worked out in [190]. However, the model needs further development. In particular, for coherent states defined with classical directions, the lever arm is too short to raise the scale up to an observable window [189]. It is an open question as to whether the long combinatoric lever arm exists for coherent states built from quantum information intrinsic to the states of the atom of geometry. Further, the QED vertex should be modeled in detail.

Overview

As discussed in Section 3.7, a non-commutative geometry can be generated when the coordinate functions X µ are operators that do not commute. To introduce a field over this non-commutative space, we have to consider a field over these coordinate operators. As in Section 3.7, we shall focus on flat non-commutative geometries. In the case of a scalar field, this can be done rigorously in most of the non-commutative geometries discussed in Section 3.7. However, instead of using operators and all their machinery, we can simplify the mathematics and use instead c-numbers x µ for the coordinates with a non-trivial product which encodes the non-commutative structure. This approach can be understood either as a specific representation of the non-commutative algebra (defined in terms of operators) or as an example of deformation. In the end, the formalism is equivalent. The representation is obtained through the Weyl map W, under which the non-trivial operator product is encoded in a new product on functions, denoted *. This is naturally extended to general (analytic) functions. The deformation can be understood as modifying the point-wise product of functions.
The usual algebra C ∞ (R 4 ) of commutative functions over spacetime R 4 is equipped with the pointwise product It becomes a non-commutative algebra if we replace the pointwise product by the * -product. The * -product of coordinates functions x µ encodes a similar non-commutative structure as the operators discussed in Section 3.7. For example, respectively in the Moyal case and the κ-Minkowski case, we will have This representation is not unique. For example for the Moyal spacetime, the most popular representations of the * -product are given by [42] Moyal: Voros This * -product construction of a non-commutative space is analogue to the phase-space formulation of quantum mechanics introduced by Moyal [207] and Groenewald [119]. With the help of this * -product, we can generalize the usual construction of the scalar field action with a φ 3 interaction term for a real scalar field φ from Note that in the case of the DFR spacetime as well as Snyder spacetime interpreted as a subspace of a larger space, one should also consider the measure on the extra coordinates. We refer to [88,111] for further details on these cases. The usual approach in QG phenomenology is to construct a scalar field theory as it is the simplest field theory. The case of fields with higher spin is more complicated. Indeed, spin is usually related to a representation of the Poincaré group. If we deform this symmetry group to accommodate for the non-commutative structure, the representation theory will change and hence what we call spin could change. Furthermore the relation spin-statistics will be nontrivial since, in the case of a quantum group, the tensor product of representations becomes non-commutative. See [44,214] for the Moyal case and [30,264] for the κ-Minkowski case. At this stage, it is only in the Moyal case where the notions of spinor [64] and vector field have been introduced, since the non-commutative structure is a "mild" one in terms of deformation (it is a twist [182]). Since the hope is to measure some semi-classical QG effects in the propagation of electromagnetic field, one would like to construct a U(1) gauge theory in one's preferred non-commutative spacetime. Unfortunately this can be done to our knowledge only in the Moyal case. Even in this case, things are highly non trivial at the classical level (and of course at the quantum case as well!). For example, a U(1) gauge theory behaves essentially like a non-Abelian gauge theory due to the non commutativity [240]. It is difficult to construct non-Abelian gauge theories with simple local groups since the non-commutativity will always destroy the traceless property [144]. There are also constraints on the transformations of the matter fields under gauge transformations [77]. Since the dispersion relation is not modified in the Moyal case, we do not expect to see effects measurable in gamma-ray bursts a priori. There is no definite construction of an Abelian nor non-Abelian Yang-Mills theory in either of the κ-Minkowski, DFR and Snyder cases. This is an important issue to address if one intends to make precise predictions for the FERMI experiment. Once we have defined the scalar field action in spacetime, we can try to define it in momentum space, that is we introduce plane-waves and a Fourier transform. Majid has introduced in a general setting the notion of Fourier transform for Hopf algebras [182]. We follow here a more pedestrian presentation. 
As we have emphasized in Section 3.1, momentum space is equipped with a group structure since we need to add momenta. Furthermore the pointwise product of plane-waves as functions on spacetime incorporates this momenta addition, (e p .e q )(x) = e p·x e iq·x = e i(p+q)·x = e p+q (x). The generalization to the non-commutative case then uses the * -product between plane-waves. The Moyal case and the κ-Minkowski case lead to different cases. Moyal: (e p * e q )(x) = e i(p+q)·x e i 2 p·θ·q , κ-Minkowski: (e p * e q )(x) = e p⊕q (x), where we have used the non-trivial sum p⊕q of (3.15) inherited from the non-Abelian group AN 3 as discussed in Section 3.7. With this in hand, we define the Fourier transform and its inverse asφ where we have used the relevant measure [d 4 p] on momentum space. For example, AN 3 is isomorphic to half of de Sitter space, hence [d 4 p] will be the measure on de Sitter space expressed in the chosen coordinates. In the case of the inverse Fourier transform F −1 , we see the planewave as a function over momentum space and we have to deal with the relevant inverse of the addition ⊕, i.e. p ⊕ p = 0 = p ⊕ ( p). We use the commutative pointwise product for the algebra of functions over the momentum manifold. Deforming this product as well, i.e. to have a non-commutative momentum space, would mean that we are dealing with a quantum group momentum space. This case has not yet been studied to our knowledge. To perform the Fourier transform of the action (4.9), it is usually convenient to consider the plane-wave as the eigenfunction of the derivative ∂ µ . This requires in general a careful study of the differential calculus over the non-commutative space as for example in κ-Minkowski case. We refer to [96,241] for further details on this. We assume that the plane wave is the eigenfunction of the derivative so that Note that when dealing with functions over a group (which we call here momentum space), there is another notion of Fourier transform which is usually used. For example, in the case of compact groups (for instance SU(2) as a 3d Euclidian momentum space), the Fourier transform one would think to use consists in decomposing the functions over the group in terms of the matrix elements of the representations of the group, thanks to the Peter-Weyl theorem. This is not the Fourier transform we have discussed above in (4.10). There exists nevertheless a natural isomorphism between the different types of Fourier transform [143]. To our knowledge the isomorphism has not been studied in detail in the case of non-compact groups for momentum space. With the Fourier transform (4.10) the λφ 3 action becomes, in the Moyal case, and, in the κ-Minkowski case, Note that we still have a conservation of momenta in both cases. We notice the key difference: In the Moyal case, the Dirac delta function comes decorated with a phase depending on momentum and θ, but the conservation of momenta is obtained through the usual commutative addition of momenta. In the κ-Minkowski case, the conservation of momenta is done through the modified addition, inherent to the new group structure that we use. Furthermore we could have a modified propagator K(p) -related to a modified dispersion relation -in the κ-Minkowski case. If we focus on a general group for momentum space (therefore on a spacetime with Lie algebra type), we can rewrite the action for a scalar field in momentum space just in terms of group elements (4.11) [dg] is the Haar measure on the group of interest. 
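The difference between the two conservation laws just noted can be made explicit with a few lines of code. The sketch below composes Moyal plane waves, which pick up only a phase on top of the ordinary sum of momenta, and composes momenta with a κ-Minkowski-type rule; the explicit form used for the deformed sum, (p⊕q)_0 = p_0 + q_0 and (p⊕q)_i = p_i + e^(−p_0/κ) q_i, is one common bicrossproduct-basis convention assumed here for illustration:

import numpy as np

kappa = 1.0          # Planck scale, set to 1
theta = 0.3          # Moyal parameter theta^{01} = -theta^{10} (toy value)

# Moyal: plane waves compose with the ordinary sum of momenta times a phase
def moyal_phase(p, q):
    # exp(i/2 p.theta.q) for a 2d example with antisymmetric theta
    return np.exp(0.5j * theta * (p[0] * q[1] - p[1] * q[0]))

# kappa-Minkowski: one common (bicrossproduct-type) form of the deformed sum,
# assumed here for illustration: (p+q)_0 = p_0 + q_0, (p+q)_1 = p_1 + exp(-p_0/kappa)*q_1
def kappa_sum(p, q):
    return np.array([p[0] + q[0], p[1] + np.exp(-p[0] / kappa) * q[1]])

p, q, r = np.array([0.2, 0.5]), np.array([0.4, -0.1]), np.array([0.1, 0.3])

print("Moyal phase p,q :", moyal_phase(p, q))   # a pure phase, |.| = 1
print("Moyal phase q,p :", moyal_phase(q, p))   # complex conjugate: the twist is non-commutative
print("kappa: p(+)q    :", kappa_sum(p, q))     # differs from q(+)p ...
print("kappa: q(+)p    :", kappa_sum(q, p))
assoc_left = kappa_sum(kappa_sum(p, q), r)
assoc_right = kappa_sum(p, kappa_sum(q, r))
print("associative     :", np.allclose(assoc_left, assoc_right))   # True: non-Abelian but associative

The output illustrates the statement above: the Moyal deformation only decorates the usual conservation law with a phase, while the κ-Minkowski sum is genuinely non-Abelian, although still associative.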
Note that to be rigorous one should be careful with the different coordinates patches used to cover the group. We omit this subtlety and refer to [95,143] for further details. Equation (4.11) is nothing but a sum of convolution products of functions over the group evaluated at the identity since The usual commutative scalar field theory can also be put under this form if we use the Abelian group R 4 . The κ-Minkowski case appears when the group is non-Abelian and is AN 3 . Deforming the addition of momenta can be seen as another way to deform our theory. We can start with the standard scalar field theory defined over momentum space given by R 4 and introduce a non-trivial addition. By considering plane-waves with a non-trivial product, inherited from the deformed momenta addition, we reconstruct the * -product. This way of proceeding is essentially the dual to the one we have presented here. The abstract writing of the scalar field action is useful as it can help to understand how matter can arise from a spinfoam [112]. We shall discuss this in the next subsection. We have discussed the classical definition of the scalar field action. Before discussing the quantum case, let us comment on two specific topics: symmetries and conserved charges. We have discussed in Section 3.7 how some non-commutative spaces can be seen as flat noncommutative spaces. We can therefore expect that the action of the scalar field will be invariant under the deformed Poincaré symmetries [76,95]. A scalar field will be simply the trivial representation of the deformed Poincaré group. According to the chosen deformation, the main difference with the usual case will be how the tensor product of fields are transformed. For example, for a translation by in the κ-Minkowski case, we havê φ(p)→e (p)φ(p),φ(p) ⊗φ(q)→e p⊕q ( ) φ (p) ⊗φ(q) = (e p * e q )( ) φ (p) ⊗φ(q) . We notice the appearance of the * -product and the modified momenta addition which encodes the deformation of the symmetries. In particularφ(p) ⊗φ(q) andφ(q) ⊗φ(p) do not transform in the same way since p ⊕ q = q ⊕ p. This is another way to see that the tensor product is no longer commutative when using quantum groups. Nevertheless, it can be shown that the scalar field action in the different non-commutative spacetimes is invariant under the relevant deformed symmetries as expected. In the κ-Minkowski case, we have a deformation of the Poincaré symmetries but one can also encounter Lorentz symmetry breaking if one is not careful about the choice of coordinates patch [95]. If we have symmetries, one can expect to have conserved charges following Noether's theorem. This is indeed true in the non-commutative context. They have been analyzed for both Moyal [20] and κ-Minkowski spacetimes [96]. The analysis relies on the understanding of the differential calculus over the non-commutative space. For example in the κ-Minkowski case there exist different types of differential calculus [241] which leads then to different notions of conserved charges [6,96]. We refer to the original articles for more detail. The quantization of non-commutative scalar field theory can be performed. The Moyal case has been analyzed in great detail, the other non-commutative geometries much less so. For a recent overview of some phenomenology of field theory in Moyal spacetime, we refer to [43]. As we have alluded few times already, when dealing with a quantum group, the tensor product of its representations (i.e. here the scalar field) becomes non-commutative. 
If we want to permute representations we have to use a structure called "braiding", which encodes the non-commutativity of the tensor product. Now, when constructing Feynman diagrams we use Wick's theorem and field permutations extensively. Then we have a choice: either consider the braiding related to the deformation of the Poincaré group, or use a trivial braiding (i.e. the one associated to the usual Poincaré group). In the first case, this will ensure that the Feynman diagrams are invariant under the deformed Poincaré group. This is the setting of braided field theory as developed by Oeckl [209] and Majid [182]. In the second case, we can encounter symmetry breaking. In particular non-planar diagrams will often fail to be invariant under the Poincaré symmetries. Quite strikingly, it has been shown that such braiding in the Moyal case means that the non-commutative scalar field theory has the same amplitudes as the commutative one [45,92]! Thus we see that the Moyal deformation is a "mild" one. Quantum gauge theories do feel the non-commutativity since, as we have recalled, some non-trivial effects happen already at the classical level. From this perspective, not considering the braiding in the Moyal case leads to a more interesting scalar field theory, not equivalent to the standard one. However one has to face a new problem: the ultraviolet-infrared (UV-IR) mixing. This problem arises when dealing with non-planar diagrams, which are most sensitive to the non-commutative structure, as we recalled above. When the external momenta are not zero, the space-time non-commutativity regularizes the ultraviolet divergences, just like one would hope non-commutativity to do. However when the external momenta are zero, the amplitude of the non-planar diagram diverges again. Small (external) momenta lead to a high energy divergence [196,205]. This has made the control of the renormalization analysis of the Moyal non-commutative scalar field theory quite involved [122,126]. One might wonder if braiding could simplify the amplitudes of a quantum scalar field theory in the κ-Minkowski case. A priori, we should not expect to recover the same amplitudes as in the un-deformed case, since there one really modifies momentum space by adding curvature. In particular the measure is different in the flat and curved cases. This is only a preliminary argument, since the κ-Minkowski case faces another issue: the braiding is not completely understood; see [265] for further comments. Finally, the IR-UV mixing also arises when considering a scalar field in κ-Minkowski [121]. As a final comment, let us recall that non-commutative geometry was introduced by Snyder [250] in the hope of regularizing the divergences of field theory. This hope was not realized in general. Perhaps the only examples where it holds are the DFR space [41] and the case when momentum space is a quantum group [183,220].

Relating non-commutative field theory and spinfoam models

Group field theories (GFT) are tools to generate spinfoams. Namely, they are scalar field theories with a non-local interaction term, built on a product of groups. Upon quantization, using the path integral formalism, the Feynman diagram amplitudes can be interpreted as spinfoam amplitudes, constructed out of gravity or some topological models. As we have recalled in Section 4.3.1, even standard field theories can be seen as some kind of group field theories. The only difference is whether the group is Abelian or not, so that the dual space becomes commutative or not.
Realizing this is the first key to understanding how one can recover non-commutative field theories encoding matter from a spinfoam model. A GFT built on SO(4, 1), for example, could probably contain in some way a DSR scalar field theory. One has to carefully identify the scalar degrees of freedom in the spinfoam GFT. One key difficulty is to identify the DSR propagator, since often the spinfoam GFT has a trivial propagator. As always, 3d quantum gravity, being simpler than 4d quantum gravity models, provides the ideal framework to illustrate this idea. We therefore consider Boulatov's GFT, which generates the Ponzano-Regge spinfoam amplitude describing Euclidean 3d quantum gravity [65]. It is defined in terms of a real scalar field ϕ : SU(2)^3 → R, which is required to be gauge invariant under the diagonal right action of SU(2), ϕ(g 1 , g 2 , g 3 ) = ϕ(g 1 g, g 2 g, g 3 g), ∀ g ∈ SU(2). In his PhD thesis [166], Livine identified some solutions of the equation of motion of this GFT. Later on, Fairbairn and Livine realized that the scalar perturbations φ(g) around some specific solutions would actually behave exactly like a scalar field theory with SU(2) as momentum space [90]. The effective action for φ is constructed using ϕ(g 1 , g 2 , g 3 ) = ϕ^(0)(g 1 , g 2 , g 3 ) + φ(g 1 g 3^−1) and Boulatov's action (4.12), with the kinetic term and the 3-valent coupling given in terms of a function F. One can choose F such that K(g) becomes the standard propagator p^2 − m^2 with a non-zero mass [90]. We then recognize an action that is very close to the one in (4.11). From this perspective, in 3d, using a GFT to generate spinfoams we can find some degrees of freedom which can be interpreted as matter. Furthermore these matter degrees of freedom naturally have a curved momentum space. If we perform the Fourier transform (4.10), we would recover matter propagating in a non-commutative spacetime of the Lie algebra type. The 4d extension of this model, recovering a scalar field in κ-Minkowski space from a topological GFT (i.e. one giving the BF spinfoam amplitudes), was proposed in [112]. The construction is a bit more involved than in the above example since we have to deal with non-compact groups. Furthermore momentum space in κ-Minkowski is the group AN 3 , which is not the one used to build the spinfoam model. One then has to use different tricks to recover this group in the GFT. For further details we refer to [112]. After these particle and field theory models we now turn to the LQG formulation of cosmological models, a promising observational window for QG phenomenology. In fact, already in the standard model of cosmology the metric is used in the quantized perturbation variables.

Loop quantum cosmology

A line of research that renders potentially observable results is Loop Quantum Cosmology (LQC). (For readers new to this subject we suggest the recent LQC reviews [40,46,55,71].) In contrast to the subjects of the foregoing sections, in this branch of QG phenomenology we do not consider amplifications of tiny effects in the weak gravitational field regime, but rather today's remnants of the strong gravitational regime in the early universe. Given the observational windows onto the early universe, this line of work holds promise for accessible hints of fundamental space-time structure. We do not have solutions to full LQG that could be restricted to cosmological models.
So, to model the early universe and to obtain a dynamical evolution with observable consequences, one assumes a cosmological background -usually highly symmetric, homogeneous or homogeneous and isotropic models. With a scalar inflaton field one can consider perturbations around the background by means of effective equations. From the effective equations one can derive estimates for correlation functions of quantities of scalar and tensorial type, constructed from perturbations of the connection around the isotropic case and relevant for the period of inflation. Finally these can be compared with the CMB inhomogeneities. In homogeneous cosmological models the degrees of freedom are reduced to a finite number by symmetry reduction prior to (loop) quantization. This results in simplified operators, and particularly, in a simplified constraint algebra, tailored to the cosmological model under consideration. For such systems we often know exact or at least numerical solutions. Although not solutions of full LQG, but of a simplified offspring of LQG, these constructions are guided by the effort to be as close as possible to the full theory. In the following we illustrate the approach to LQC with the simplest cosmological model, the Friedmann-Lemaître-Robertson-Walker (FLRW) model with zero spatial curvature [40]. The gravitational part of this model is one-dimensional, the only geometrical dynamical variables are the scale factor a(t) of the universe and the expansion velocityȧ(t). The Gauss and the diffeomorphism constraints do not show up explicitly, they are automatically satisfied, what remains to solve is the Hamiltonian constraint in form of a difference equation. Discreteness plays a significant role only in the very early phase of the universe, in the ensuing continuous evolution the difference equation can be approximated by a differential equation. The intermediate regime between these two phases is the domain of quantum corrections to classical equations. The metric of the flat FLRW model is usually given in the form with a fiducial spatial Euclidian metric. As the spatial topology of the model is that of R 3 , one has to choose a fiducial cell C to obtain finite integrals in quantities like the total Hamiltonian, the symplectic structure, and others. In comoving euclidian coordinates the volume of such a cell is denoted by V 0 , the corresponding geometric volume is V = a 3 V 0 . When introducing Ashtekar variables, we can, thanks to the symmetry of the model, choose the homogeneous and isotropic densitized triad and connection variables In terms of metric variables we have where γ is the Barbero-Immirzi parameter. The Poisson bracket is independent of the size of the fiducial cell, With these variables the gravitational phase space of homogenous and isotropic models is spanned by one canonical pair and, with a spatially constant field φ and its canonical momentum, we have finite-dimensional quantum mechanics. Approaches of this kind are summarized under the notion of Wheeler-DeWitt (WDW) theory or "Geometrodynamics", see [86]. In LQC we want to take into account discreteness and so in the spirit of LQG we construct holonomies from the connection variable. For this purpose we choose an edge of C, whose coordinate length V 1/3 0 is multiplied by a dimensionless parameter µ. Like in LQG, where SNW edges carry quanta of area, µ will later turn out to be a measure of area. 
Obviously it suffices to take the exponentials $N_\mu(c) = e^{i\mu c/2}$, which arise as matrix elements of such holonomies, as elementary configuration variables. A brief introduction into this formalism, the Bohr compactification of the real line, may be found in [257, Chapter 28] and [55]. In the kinematic Hilbert space the functions $N_\mu$ constructed above form an orthonormal basis, $\langle N_\mu, N_{\mu'}\rangle = \delta_{\mu,\mu'}$. Note that on the right-hand side there is the Kronecker-δ, not a Dirac delta, even though $\mu$ is a continuous label. These functions are analogs of the SNW functions in LQG. The actions of the holonomy and flux operators on a state function are by multiplication and by a derivative, respectively. It is also possible to go over to the p-representation, which is sometimes more convenient. Here the quantum states are functions of $\mu$, and the holonomy and flux operators act as shift operators and by multiplication, respectively. Here $\mu$, which was originally introduced as a dimensionless ratio of lengths in [40], is proportional to area, as a factor in the eigenvalue of $\hat p$. The dynamics of cosmological models will be dealt with in Section 5.3. As the main goal in LQC is to be as close as possible to the full theory, an adaptation of the full LQG Hamiltonian is more convenient than the simplified Hamiltonian constraint resulting from a symmetry-reduced model. In this way discreteness enters the dynamics in a much more natural way. Some more general features of LQC, applicable to cosmological models of different degrees of complexity, summarize the expected LQC corrections. • The LQG Hamiltonian constraint contains an inverse volume expression. The volume operator has a zero eigenvalue and therefore does not have a densely defined inverse; for the inverse volume an operator of its own must be constructed. This is done in such a way that for "large" volume its eigenvalues go like $V^{-1}$, but for "small" volume they do not diverge; in the limit $V\to 0$ they eventually go to zero. This construction contains one parameter, on whose value it depends how many Planck volumes have to be considered as "large" or "small" in the above sense; this gives rise to quantum ambiguities. The well-defined inverse volume operator is an important ingredient in the resolution of the classical cosmic singularity. (See [40] for more on singularity resolution.) • The classical Hamiltonian constraint contains the connection, which is not gauge invariant and so has no operator equivalent on the gauge-invariant Hilbert space. As in full LQG, connection variables are replaced in one way or another by corresponding holonomies, as in the example described above. This introduces in principle infinitely many terms of arbitrary powers of the connection, leading to corrections to the classical equations. • There are quantum back-reaction effects from fluctuations, which occur in any system when the expectation value of the Hamiltonian operator is not the classical Hamiltonian function of the expectation values of its arguments, $\langle\hat H(q,p)\rangle \neq H(\langle q\rangle,\langle p\rangle)$. This is the case for cosmological Hamiltonians. Back-reaction terms are included in an effective Hamiltonian in the effective Friedmann equations. • Effective Poisson brackets with a correction parameter α, constructed from correction terms, should be anomaly-free. Anomaly-freeness means that the constraints remain first-class, which is essential for consistency; this was shown explicitly in several cases, but is not established in general in LQC. In the following we consider holonomy corrections and inverse volume corrections in dependence on a QG length parameter, and their interesting interplay.
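Since the explicit operator actions were lost in the extraction, here is a minimal sketch of the standard polymer (Bohr) representation used in LQC; the numerical factor in the $\hat p$ eigenvalue depends on the conventions chosen for $\{c,p\}$ and should be taken as illustrative.
\[
N_\mu(c) = e^{i\mu c/2},\qquad \langle N_\mu , N_{\mu'}\rangle = \delta_{\mu,\mu'} ,
\]
\[
\hat N_{\mu'}\Psi(c) = e^{i\mu' c/2}\,\Psi(c),\qquad
\hat p\,\Psi(c) = -\,i\,\frac{8\pi\gamma\ell_P^2}{3}\,\frac{d\Psi(c)}{dc} ,
\]
and in the $\mu$-representation
\[
\bigl(\hat N_{\mu'}\Psi\bigr)(\mu) = \Psi(\mu-\mu') ,\qquad
\bigl(\hat p\,\Psi\bigr)(\mu) = \frac{4\pi\gamma\ell_P^2}{3}\,\mu\,\Psi(\mu) ,
\]
so that holonomies act as shift operators and the flux-type variable $\hat p$ acts by multiplication, with an eigenvalue proportional to $\mu$ (an area), exactly as described in the text.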
Intuitively we can expect that the smaller the length scale, the smaller the holonomy corrections and the larger the inverse volume corrections, and vice versa.

Holonomy corrections

As above, we assume the fiducial cell $\mathcal{C}$, with comoving coordinate volume $V_0$ and physical volume $V = a^3 V_0$, of a cosmological model partitioned into $N$ elementary building blocks of volume $v = a^3 V_0/N$. This gives a length scale $L = v^{1/3}$ [58]. Setting $L = \mu V^{1/3}$, with $\mu$ corresponding to the state functions $|\mu\rangle$, ties $L$ to the quantum theory. Here $\mu$ appears first as a dimensionless proportionality factor of (classical) lengths; in quantum theory it is connected with fluxes, see (5.2). A typical QG density, defined in terms of $L$, is $8\pi$ times the Planck density $M_P/\ell_P^3$ when $L = \ell_P$. Polynomial terms in the connection in the LQG Hamiltonian constraint operator are replaced by holonomies (5.1) along an edge; this leads to higher-order corrections. Holonomy corrections become large when the exponent $\mu c$ is of order one. From the classical Friedmann equation we can express the matter density in terms of $L$ and $c$; holonomy corrections are thus large when $\rho \approx \gamma^{-2}\rho_{QG}$. As a measure for holonomy corrections one may therefore take the ratio of the actual matter density to this QG density. This relation implies that we may expect considerable holonomy corrections in early phases of the universe, when the density is large.

Inverse volume corrections

Thiemann [258] showed that expressions containing the inverse volume, like (5.3), which comes from the Hamiltonian constraint, can classically be expressed in terms of the Poisson bracket of the connection and the volume. So in quantum theory this Poisson bracket can be expressed by a commutator of well-defined operators, once the connection is replaced by the corresponding holonomy. This construction is at the root of the resolution of the classical cosmological singularity. For holonomies with links of coordinate length $L/a$ one writes the inverse-volume expression as a sum over links of traces of $h_{v,e}\{h_{v,e}^{-1}, V_v\}$ contracted with the tangent vectors, where $\dot e^a$ is the tangent vector to a link $e$ adjacent to the node $v$; $h_{v,e}$ is a holonomy along a link adjacent to $v$; and $V_v$ is the volume of a region containing $v$. There is an ambiguity in the SU(2) representation in which the trace is taken. The parameter labeling this ambiguity influences the scale where the transition from the discrete quantum universe to the continuous classical universe takes place. It enables us to model the time scale of inflation [54]. In the older literature a fixed discreteness scale $\mu_0 = \mathrm{const}$ with respect to the fiducial metric was employed, which led to problems in the continuum limit. For comparison we present the fixed-lattice formulation here and postpone the refined lattice to the next subsection. In this case, a volume eigenstate $|\mu\rangle$ has a volume eigenvalue proportional to $|\mu|^{3/2}$, and for the simplest choice of $j = 1/2$ for the SU(2) representation in the trace, the inverse volume operator acts on $|\mu\rangle$ by multiplication with a bounded function of $\mu$. From this, one derives the action of the self-adjoint gravitational Hamiltonian constraint operator (constructed from curvature terms of full LQG) on $|\mu\rangle$: it is a difference operator connecting states whose labels differ by multiples of $\mu_0$. When the commutator in (5.4) is expressed in terms of holonomy and flux operators, its expectation values in quantum states do not have the classical relationship with the expectation values of the basic operators. Classically the flux through an elementary lattice site is $L^2(a)$, where $L$ is the length scale introduced in the foregoing subsection, which depends on the scale factor according to its definition. In [58] the flux operator is rewritten in the form $\hat F = \langle\hat F\rangle + (\hat F - \langle\hat F\rangle)$ and the volume operator, as a function of $\hat F$, is expanded in $\hat F - \langle\hat F\rangle$.
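To see concretely how the holonomy replacement modifies the dynamics, the following effective Friedmann equation, widely quoted in the LQC literature cited above, shows the generic structure; writing the critical density in terms of the length scale $L$ is schematic and stands in for the more careful definitions of [58].
\[
H^2=\Bigl(\frac{\dot a}{a}\Bigr)^2=\frac{8\pi G}{3}\,\rho\Bigl(1-\frac{\rho}{\rho_c}\Bigr),
\qquad
\rho_c\sim\frac{3}{8\pi G\,\gamma^2 L^2},
\]
so the relative size of the holonomy correction is the density ratio $\rho/\rho_c$, in line with the measure of holonomy corrections introduced above: it is negligible at late times and of order one near the bounce, where the classical singularity is replaced.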
With $\langle\hat F\rangle = L^2(a)$ in lowest order one obtains a correction function α to classical Hamiltonians, depending on the scale factor, whose expansion for small deviations from the classical value for large $L(a)$ is $\alpha(a) = 1 + \alpha_0\,\delta_{\rm Pl}(a) + \cdots$, with $\alpha_0$ a constant and $\delta_{\rm Pl}(a)$ a correction term that is small for $L(a)\gg\ell_P$ (its explicit form depends on the quantum ambiguities mentioned above). Inverse volume corrections become large when $L$ is small, of the order of the Planck length. In comparison with holonomy corrections we may observe that $\delta_{\rm Pl}$ is small when $L \gg \ell_P$; then $\rho_{QG}$ becomes small and holonomy corrections become large. The relation between these two kinds of corrections can be seen better in terms of densities: inverse volume corrections depend on the ratio of the QG density to the Planck density, whereas holonomy corrections depend on the ratio between the actual density and the QG density. For small densities in an expanding universe, holonomy corrections decrease, but (5.6) tells us that $\delta_{\rm Pl}$ cannot simultaneously go down arbitrarily. This gives at least a lower bound for LQC corrections, from which M. Bojowald et al. in [58] derive lower bounds on correlation functions of inhomogeneities in the CMB. Here we do not have only upper bounds for LQC effects, but an estimate that gives rather narrow bounds for parameters like $\alpha_0$. We note that, as the size of the inverse volume corrections relies on the size of a fiducial cell, it is argued in [40] that they become negligible when the limit of an infinite cell is taken, so that it extends over all of $\mathbb{R}^3$. The correction function α also appears in the effective constraint algebra, where the Poisson bracket of two smeared-out Hamiltonian constraints $H(M)$ and $H(N)$ is modified in such a way that the diffeomorphism constraint $D$ on the right-hand side appears multiplied by α. The effective algebra is, importantly, first-class, i.e. anomaly-free. Anomaly freeness was shown for not too large departures from FLRW. In modeling inflation, the inhomogeneities superposed on the FLRW background find their way into the classical Friedmann equation and the equation of motion of the scalar field in the form of holonomy and inverse-volume corrections. So they enter into the basis of inflation models with different sorts of dilaton potentials and eventually emerge in the perturbation power spectrum of the CMB, where the theory comes into contact with observations. The lower bounds of the predicted LQC corrections are only a few orders of magnitude away from the present upper observational bounds [58], so that we can hope for an experimental judgement in the not too distant future. However, see Section 5.4 for further discussion of this.

Dynamics and lattice refinement

In the dynamical evolution of an expanding model universe in LQC, were it rigorously derived from LQG, we might expect a steady creation of new nodes by the Hamiltonian constraint operator, which keeps the typical link length small. Without the full LQG dynamics we cannot see this creation of links. What we can do is try to model a refinement of the SNW lattice by hand [235,236]. As mentioned above, the dynamics in LQC is constructed with the aid of the LQG Hamiltonian constraint, including some kind of matter as "internal clock", usually a scalar field. Besides the inverse volume, the curvature $F^k_{ab}$ plays an important role in the Hamiltonian. Classically it can be written as the limit of holonomies around a plaquette in the $(a,b)$ plane, when the area of the plaquette goes to zero. However, due to the discreteness of the spatial geometry, in LQC this limit does not exist.
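A minimal order-of-magnitude version of the interplay just described, assuming the simplest possible parameterizations (both only up to numerical factors and up to the powers used in the detailed treatment of [58]), reads
\[
\delta_{\rm iv}\simeq\frac{\rho_{QG}}{\rho_{\rm Pl}},\qquad
\delta_{\rm hol}\simeq\frac{\rho}{\rho_{QG}}
\quad\Longrightarrow\quad
\delta_{\rm iv}\,\delta_{\rm hol}\simeq\frac{\rho}{\rho_{\rm Pl}} .
\]
At a given matter density the two corrections therefore cannot both be made arbitrarily small: raising $\rho_{QG}$ to suppress holonomy corrections necessarily enhances the inverse volume correction, which is the origin of the lower bounds on LQC effects mentioned above.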
The curvature term in the Hamiltonian is expressed for finite plaquettes, the area of which is chosen to be equal to the area gap $\Delta A = 4\sqrt{3}\,\pi\gamma\,\ell_P^2$, the lowest non-zero eigenvalue of the LQG area operator. The plaquettes introduce a new length scale into the classical theory when we assume that a face of a fiducial cell $\mathcal{C}$ is partitioned into $N$ plaquettes of area $(\bar\mu V_0^{1/3})^2$. The parameter $\bar\mu$ is distinct from the parameter $\mu$ which characterizes holonomies. As a dynamical length scale, $\bar\mu$ appears in the regularization of the Hamiltonian constraint operator. To determine a relation between $\bar\mu$ and the characteristic value $\mu$ of a state $|\mu\rangle = \Psi(\mu)$ on which the Hamiltonian constraint operator is to act, we take the area of a face of the fiducial cell and take into account that it is covered by $N$ plaquettes. Eliminating $N$, we find that the discreteness scale in the Hamiltonian depends on the quantum state via $\bar\mu \propto 1/\sqrt{|\mu|}$. This means that, when the area of a face of $\mathcal{C}$, or its physical volume $V$, grows due to a growing scale factor, the partition of the fiducial cell is refined. This is necessary for the size $\bar\mu$ of the quanta of geometry to remain small when the universe expands. The necessity of this lattice refinement is best seen by considering the classical, continuous limit. As the scale factor $a$ becomes large in the course of the dynamical evolution, the difference equation stemming from the Hamiltonian constraint can be approximated by a differential equation, the WDW equation, for a smooth wave function. The wave function oscillates on scales $\sim a^{-1}$, and for growing $a$ this becomes smaller than the discreteness scale, if the latter were given by a constant and thus firmly tied to the scale factor. We refer to [40,235,236] for the details. For a fixed lattice scale $\mu_0$, holonomies $\exp(i\mu_0 c/2)$ act as simple shift operators by the constant $\mu_0$ on states $|\mu\rangle$; the action of $\exp(i\bar\mu c/2)$ is more complicated, because $\bar\mu$ is a function of $\mu$. However, on volume eigenstates $|\nu\rangle$ adapted to the refined lattice, holonomies act by a simple shift, and the self-adjoint gravitational Hamiltonian acquires a form analogous to (5.5). In summary, lattice refinement fulfills all of the following conditions: 1) Independence of the elementary cell chosen in an open cosmological model to make integration finite. 2) Inflation becomes "natural" in the sense that an inflaton mass $M_{\rm inf} \leq 10^2\,M_P$ is sufficient, in contrast to much lower values without lattice refinement. 3) Factor ordering in the macroscopic WDW equation becomes unique. 4) The requirement of "pre-classicality" is fulfilled, i.e. quantum corrections at large scales are avoided. Consequences for inflation can be seen in [235]. Here we mention just a few facts about modeling inflation with and without lattice refinement. We take a wave function depending on the volume of the universe, or on $p = V^{2/3}$, respectively, and on an inflaton field $\phi$, expanded in eigenstates $|\nu\rangle$ of the volume, where $\nu$ is related to $\mu$ by (5.7). In the continuous limit, when the evolution equation of the wave function is approximated by a differential equation, we assume $\psi(p,\phi) = \Upsilon(p)\Phi(\phi)$. In the continuum limit the wave function must vary slowly on the discreteness scale $\bar\mu$ of QG (pre-classicality, see [54]). This can be formulated as the requirement that the distance between two zeros of the wave function in terms of $\mu$, $\Delta\mu = \frac{3}{4\pi\gamma\ell_P^2}\,\Delta p$, must be at least equal to $4\bar\mu$, which yields the condition of pre-classicality $\Delta p > 16\,(\pi\gamma)^{3/2}\,\ell_P^3\,p^{-1/2}$, which in turn leads to an upper bound on the inflaton potential, $V(\phi) \leq 2.35\cdot 10^{-2}\,\ell_P^{-4}$, in contrast to $V(\phi) \lesssim 10^{-28}\,\ell_P^{-4}$ for the fixed lattice.
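The state dependence $\bar\mu\propto 1/\sqrt{|\mu|}$ quoted above follows from a short counting argument, reconstructed here in the common conventions (the precise value of the area gap differs slightly between papers):
\[
N\bigl(\bar\mu V_0^{1/3}\bigr)^2=V_0^{2/3}\;\Rightarrow\;N\bar\mu^2=1,
\qquad
\bar\mu^2\,|p|=\Delta A\;\Rightarrow\;
\bar\mu=\sqrt{\frac{\Delta A}{|p|}}\;\propto\;\frac{1}{\sqrt{|\mu|}},
\]
where the first relation says that $N$ plaquettes of fiducial area $(\bar\mu V_0^{1/3})^2$ cover one face of the fiducial cell, the second fixes the physical plaquette area $\bar\mu^2|p|$ to the area gap, and $|p|\propto|\mu|$ is the flux spectrum quoted earlier. Eliminating $\bar\mu$ instead gives $N=|p|/\Delta A$: the number of plaquettes grows with the physical area, which is precisely the lattice refinement described in the text.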
To be in accordance with the COBE-DMR measurements, the potential must further satisfy a numerical bound set by the observed amplitude of the anisotropies. If we choose an inflaton potential $V(\phi) \approx \tfrac{1}{2}m^2\phi^2$, then we obtain, from the last two conditions, $m \leq 10\,M_P$, compared with $m \leq 70\,e^{-2N_{\rm cl}}\,M_P$ for the fixed lattice, where $N_{\rm cl}$ is the number of e-folds. The condition that a significant proportion of the inflationary regime takes place during the classical era imposes a condition on the inflaton mass, which is much more natural for the refined lattice.

Loop Quantum Cosmology: possible observational consequences

With the development of LQC and recent observational missions in cosmology there are rich veins of work to explore in the phenomenology of the early universe. There are many approaches to this work. As yet there is no consensus on the best route from the discrete quantum geometry of LQG to observable cosmological predictions. In this review of LQC phenomenology, which is very brief, we do not attempt a comprehensive review of the literature but rather provide a guide to starting points for further exploration of these veins of work. Our scope is further reduced by focusing on observational signatures that will be accessible in the near future. The current best observational window on the early universe is the power spectrum of small angular fluctuations in the cosmic microwave background (CMB) radiation. In the standard inflationary model these fluctuations are generated by scalar and tensor perturbations. Thus, the power spectra of the scalar and tensor perturbations are the key tools for investigations of the cosmic background radiation in the electromagnetic and gravitational sectors. Because of the difference in scales at which decoupling occurs, the gravitational wave background originates at an earlier epoch than the CMB, thus allowing a view into the very early universe. However, observation of the gravitational background remains a huge experimental challenge. The now-familiar plot of the CMB power spectrum is of angular correlations of the temperature-temperature, or "TT", power spectrum. Polarization modes of the CMB are decomposed into curl-free electric E-modes and gradient-free magnetic B-modes. The B-mode power spectrum arises from two effects: directly from primordial gravitational waves (the tensor modes) and indirectly, through lensing, from the conversion of E-modes to B-modes. Therefore, most intriguingly, it may be possible to study tensor modes from polarization measurements of the CMB without directly observing the gravitational wave background. As in the case of LV, there are a number of different perturbation frameworks under development. For instance, in one class of widely used frameworks, the background space-time enjoys LQC modifications (corrections due to holonomy, inverse volume, or both) and the perturbations take a form similar to the perturbations in FLRW cosmologies, where the linear perturbations of Einstein's equations are quantized. Because the background does not follow Einstein's equations in this framework, it is not immediately clear that the perturbation equations are consistent. There may exist a consistent set, but this issue is currently not resolved. In another framework (see, e.g. [40, Section VI.C]), the classical theory is first reduced by decomposing the gravitational phase space into homogeneous and purely inhomogeneous parts (for matter as well as gravitational variables); the linear perturbations are entirely within the inhomogeneous phase space.
Then both the background and the linear perturbations are quantized with LQC techniques. In related work of [34,84], a quantum scalar field is analyzed on a quantized Bianchi I background. This work provides a framework for perturbations on an effective 'dressed' quantum geometry that may also contain back reaction. Most current studies are some blend of the traditional framework of cosmological perturbations and an effective LQC framework. The formulation of these frameworks are currently a matter of lively debate; see, for instance, [40, Section VI.D], [58,Sections 2.3.4], and [261]. Nevertheless there is an impressive body of work in developing the phenomenology of the very early universe. Loopy modifications to the power spectrum have been derived. LQC offers at least two modifications to the usual scenario, holonomy and inverse-volume or inverse-triad corrections as discussed in the last section. Both these corrections have been incorporated in models of the background space-time using LQC methods. As yet we lack a comprehensive study of all the LQC effects on the scalar and tensor perturbations. Nonetheless there are many studies analyzing how specific models of LQC corrections affect the power spectra. We will mention two lines of work, one on affects in the power spectrum of scalar and tensor perturbations and the other on the chirality of tensor perturbations. This last work is outside the symmetry reduced LQC models and is included here as it concerns tensor perturbations. Scalar and tensor perturbations In the effective Friedmann equation framework scalar perturbations with inverse triad corrections are discussed in [57,[60][61][62] and with holonomy corrections in [70,261,262]. In [70] correction terms were introduced without a gauge choice with the result that the perturbation equations are anomaly-free. Tensor modes in the same framework are discussed in [57,59,82,117,204] with inverse triad corrections and in [59,115,115,[200][201][202] with holonomy corrections. Starting with [59] the work in [116,204] develops a phenomenological model of the tensor modes within a model bouncing cosmology with a single massive scalar field. They find that the effects can be modeled with two parameters [116], one "bump parameter" is simply related to the inflaton mass. The second parameter, a transition wave number, is related in a complicated way to the critical density and the scalar potential energy-critical density ratio at the bounce. The authors find that the tensor power spectrum is suppressed in the infra-red regime, agrees with the standard general relativistic picture in the UV, and has both an increase in amplitude and damped oscillations at intermediate scales. This work suggests that the next generation B-mode experiments could provide a successful constraints on the model parameters [116]. For more on this approach see [115-117, 201, 203, 204]. The power spectra of scalar and tensor modes, with inverse triad corrections, are derived in the inflationary scenario with effective Friedmann equations in [57,58,72]. In [57,58] the corrections are parameterized with corrections related to the area gap, a parameterization of quantization ambiguities, and how the number of lattice sites changes in the evolution of the cosmology. This leads to an enhancement of power on large angular scales. The model is compared to WMAP 7 year data as well as other astronomical surveys in [58]. 
Normally in the inflationary scenario, because of the rapid expansion, the universe is in its vacuum state shortly after the onset of inflation. It was pointed out in [7,8], however, that if pre-inflationary physics in LQC led to a non-vacuum state then spontaneous generation of quanta would have observational consequences in terms of non-Gaussianities in the CMB and in the distribution of galaxies. The wealth of phenomenological models means that there is guidance as to many effects arising from aspects of LQG. The field has evolved to the point where these models can be directly compared with current data. But there remain many questions on the derivation of these effects from a more fundamental level, both from LQC and from LQG. For instance, the parameter capturing the phenomenology of inverse volume corrections depends on the fiducial volume [57]. While the parameter can be fixed by the size of the Hubble horizon at horizon crossing the parameterization is debated. So the status of inverse volume corrections in the presence of inhomogeneities is a matter of current debate (particularly when the spatial topology is non-compact), see e.g. [58, Section 2.4] and [40, Section VI.D]. Chirality of tensor perturbations Working with the Ashtekar-Barbero connection formalism and deriving the tensor perturbations in a de Sitter background, Magueijo and collaborators find that the graviton modes have a chiral asymmetry in the vacuum energy and fluctuations if the Immirzi parameter has an imaginary part [48,49,177,178]. This is significant as the chirality would leave an imprint on the polarization of the cosmic microwave background and might be observed with the PLANCK mission. The asymmetry depends on operator ordering. Phenomenology of black hole evaporation The subject of this subsection is closely related to cosmology and has potentially observable consequences. In [206] semiclassical models of Schwarzschild and Reissner-Nordström black holes are presented. They are based on LQG's discreteness of area and a resulting repulsive force at extremal densities. With these ingredients the space-time metric outside "heavy" black holes (with respect to the Planck mass) is only slightly modified in relation to the classical form, but inside the horizon the singularity is smoothed out and in the limit when the radial coordinate goes to zero the metric becomes asymptotically Minkowski. By the introduction of the new coordinate R = a 0 /r, where a 0 is the LQG-inspired minimal area, the regularized metric is shown to be self-dual in the sense of T-duality: An observer at R → ∞ sees a black hole with mass ∼m Pl /m, when m is the mass of the black hole seen by observers in the asymptotic flat region r → ∞. For "light" (= sub-Planckian) black holes also the outside metric is modified considerably. "Light" black holes do not evaporate completely, although they would emit high-energy radiation at an extremely low rate. They are supposed to explain two cosmological puzzles: Being practically stable, ultralight black holes created during the inflation process could account for dark matter as well as they could be the so far unknown source for UHECR. In the sequel [47,136] it was shown that the discreteness of area leads to features that distinguish black hole evaporation spectra based on LQG. They are very distinctly discrete in contrast to the classical Hawking spectrum and observation, should it become possible, should be able to distinguish LQG from other underlying QG theories. 
Conclusions In this review we have described ways in which LQG, mainly by means of discreteness of the spatial geometry, may lead to experimentally viable predictions. We discussed ways in which the discreteness may (or may not) lead to a large variety of modifications of special relativity, particle physics and field theory in the weak field limit. In Sections 3 and 4 effective particle and field theory frameworks are presented in some detail. Where possible we have given current observational bounds on the models. In these sections, as well as in the LQC section, we have pointed out numerous approaches and some of the theoretical and experimental open problems. Many of these are collected below. Of course QG and QG phenomenology remain open problems. We lack strong ties between observationally accessible models and LQG. These models have ansätze that are often in striking contradiction with each other and none has a clear support from observations. Furthermore, should any of the effective models presented here be favored by experimental data in the near future, this will hardly point uniquely at one of the fundamental theories, or to a certain version of them 11 . On the other hand, this field has seen tremendous progress since the mid-90's when it was tacitly assumed that there were essentially no experimentally accessible windows into QG. Quite the contrary, now there are many avenues to explore QG effects and stringent bounds have already been placed on effects originating at the Planck scale. These developments are an essential first step toward a physically viable quantum gravity theory. Concluding, the subject of LQG phenomenology, and of QG phenomenology in general, is now far reaching. We expect that QG phenomenology will remain a very active field and will hopefully bring new perspectives and clarity on the ad hoc assumptions and models. Indeed, in spite of its shortcomings, phenomenology is indispensable for LQG, or any other quantum theory of gravity, if it is to become a physical theory. A Elements of LQG In the first part of this appendix there is an overview of the basics of LQG. In the second part we very briefly review the theory's kinematics. For more details the reader should consult the recent brief reviews by Rovelli [227] and Sahlmann [232]. For longer reviews the reader should consult [35,212,255,256] and the texts [226,257]. LQG is a quantization of GR. Due to the special features of GR it looks in many points quite different from other quantum field theories: • LQG takes into account that space and time are not an external background for physics, but part of physical dynamics. • Gravity is self-interacting, but the self-interaction cannot be treated perturbatively, because the theory is non-renormalizable. • Due to general covariance of GR the gauge group is the group of diffeomorphisms, not the Poincaré group. Whereas non-linearity is shared with other QFTs, the issue of dynamical space-time is unique for gravity. For comparison recall that usual QFTs deal with fields on either Minkowski space or some curved Riemannian space with a given metric and the corresponding Levi-Civita connection. LQG, on the other hand, is a QFT on a manifold, a priori without further structure, and the geometric properties of physical space are realized in form of dynamical fields. Concretely, in LQG the basic field variables are not metric components, as in GR, but orthogonal bases (triads) in the tangent space of every point in three-dimensional space and the connection. 
The introduction of triads brings further gauge degrees of freedom into the game, namely local rotations, i.e. elements of the group SO(3) (or SU (2)). Further, due to space-time covariance of GR, the embedding of the spatial 3-manifold into a 4-manifold and the choice of a time coordinate is also a matter of gauge. In the canonical formalism this is reflected by the appearance of a further gauge generator. As common in gauge theories, gauge generators are constraints. In the present case the Gauss constraint, which has a formal analogy to the Gauss law in electrostatics, generates triad rotations. The theory also has the diffeomorphism constraint and the Hamiltonian constraint, which generates transitions from one spacelike 3-manifold to another one. In LQG these constraints are imposed as operators, which annihilate physical, i.e. gauge-invariant, quantum states. This leads immediately to a surprising feature in canonical QG: The propagation from a hypersurface to the next one being a gauge transformation, all physical states are invariant under these transitions and there is no physical time evolution. Gauge-invariant states contain all the history of a state of the gravitational field. This was pointed out long before the advent of LQG [86]. Time evolution must be introduced in an operational way in relation to some suitable kind of matter, which is coupled to gravity and may be considered as clock. The problem of non-linearity, coming from gravity's self interaction, is more a technical than a conceptual problem. Non-linearity in interacting QFTs in the standard model is successfully dealt with by renormalization methods. The non-renormalizability of GR appears as a serious obstacle, but it can be traced back to the background -splitting the metric into a background, e.g. Minkowski space, and a (small) field on it in the form g ik = η ik + ψ ik , |ψ ik | 1 and quantizing "the ripples ψ ik on the background" does not work. The conclusion, which is drawn from this in LQG, is that only spatial geometry as a whole has a chance to be successfully quantized. Presently the success and limitations of LQG can be summarized very briefly in the following way. Local connection components are not gauge invariant, they are not even tensor components. Constructing gauge-invariant quantities from them is possible by means of closed contour integrals, so-called Wilson loops. Their further development are holonomies and spin networks. They introduce non-locality into the theory. In consequence, metric quantities (which are not introduced from the beginning as basic variables), like area and volume, turn out to have discrete spectra with a fundamental role of the Planck length. In other words, LQG yields "quanta of space" or "atoms of geometry". The existence of a minimal length provides a natural ultraviolet cutoff for other QFTs. The main open problem is the non-linearity of the Hamiltonian constraint. Thiemann [258] succeeded to formulate it as well-defined operator in several versions, but there remain some ambiguities. A more recent approach is the master constraint programme [257]. The problem of the Hamiltonian constraint and time evolution have been satisfactorily solved in simplified models, mainly in cosmology. 
Concerning technicalities, at the very basis of LQG stands a (3 + 1) decomposition of spacetime and a canonical formulation of GR in terms of densitized triad variables E a i and connection variables A a i := Γ i a − γK i a , which are canonically conjugate, on the spatial 3-manifold 12 . The connection depends on both the spin connection Γ i a and the extrinsic curvature K i a . The parameter γ is the Barbero-Immirzi parameter, which may be fixed by matching with the Bekenstein-Hawking black hole entropy formula, see e.g. [197]. In the connection representation, the quantum Hilbert space is spanned by functionals of the connection. A convenient basis, invariant under SU(2) gauge transformations of the triads, is provided by spin network (SNW) functions, defined with the aid of graphs Γ, where to each edge or link e I a representation of SU(2), corresponding to a spin j, is associated, the "color" of the edge. SNW functions are constructed from the path ordered exponential of the connection, the holonomies The holonomies are connected by intertwiners at the nodes or vertices in such a way that ψ is a scalar function. The functions Ψ Γ (A), having a finite number of arguments, namely the number of edges of Γ, are called cylindrical functions. They may be considered as coordinates on the space of smooth connections modulo gauge transformations, denoted by A/G. The Hilbert space of LQG is the closure of the space of cylindrical functions on generalized (distributional) connections modulo gauge transformations with the Ashtekar-Isham-Lewandowski measure [33,36,37], constructed from the Haar measure on SU (2). On this Hilbert space the configuration variable A a i would act as a multiplication operator, were it well-defined, and the momentum variable E a i as a functional derivative with respect to A a i . As is common in quantum field theory, elementary variables do not enter quantum theory as operators, but as operator-valued distributions, which have to be regularized by integrating out with some test functions. In the case of the connection, the above defined holonomy operators (A.1) arise from integrating A a i in one dimension, which is natural for one-forms. These operators either add a holonomy along an link present in a SNW, or create a new link. The momentum variable E a i is a vector density, which can be associated with a two-form η abc E a i with the aid of the Levi-Civita density η abc . So it is natural to smear it out by integration over a two-dimensional surface. Let a surface S be defined by (σ 1 , σ 2 ) → x a (σ 1 , σ 2 ), where σ = (σ 1 , σ 2 ) are coordinates of S and x a coordinates in the three-dimensional space, where S is embedded. Then Minkowski space is represented in LQG by a state of the gravitational field, which is not the vacuum state, but a superposition of excited SNW states. Discreteness comes in a natural way from the "polymeric" structure of the SNWs, suggesting the presence of QG effects even in flat space. The above formulation is in the "embedded framework" of LQG. This has the advantage of having clear ties to the classical theory but in the kinematic Hilbert space is non-seperable. In addition the state space has physically mysterious continuous moduli that label equivalence classes of diffeomorphism invariant states [123]. Partly in response to these difficulties an alternate framework has received increasing attention. 
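Before turning to that alternative framework, it may help to record schematically the two smeared variables used above, since the review's equation (A.1) and the accompanying flux formula did not survive the extraction; the expressions below follow the standard conventions (normalizations, in particular a possible factor of 1/2 in the flux, differ between references):
\[
h_e[A]=\mathcal P\exp\int_e A
=\mathcal P\exp\int_0^1 ds\,\dot e^a(s)\,A^i_a\bigl(e(s)\bigr)\,\tau_i ,
\qquad
E_i(S)=\int_S \epsilon_{abc}\,E^a_i\,dx^b\wedge dx^c ,
\]
with $\tau_i$ a basis of $\mathfrak{su}(2)$: the holonomy smears the connection one-dimensionally along an edge $e$, the flux smears the densitized triad two-dimensionally over a surface $S$, which is exactly the pairing of one-forms and vector densities described in the text.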
The combinatorial framework of LQG was introduced by Zapata [266,267] and recently used as the kinematic setting for spin foam models in the review [223]. In this framework the kinematical Hilbert space is separable and is free of the mysterious moduli.
v3-fos-license
2023-11-10T16:45:33.420Z
2023-02-28T00:00:00.000
265091977
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://journals.balaipublikasi.id/index.php/amplitudo/article/download/23/23", "pdf_hash": "eaaa7b15cd72da407a4043c803eea43dedd2d3ce", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46225", "s2fieldsofstudy": [ "Education", "Engineering", "Physics" ], "sha1": "fa214a68340bde51c6d3122f0958bad4faa278d4", "year": 2023 }
pes2o/s2orc
Mapping Science, Technology, Engineering, and Mathematics (STEM) Research on Physics Topics Using Bibliometric Analysis : This study aims to provide an overview of physics learning research using the STEM approach in Indonesia. This study uses a descriptive qualitative method which aims to determine the depth of research using a certain matrix so that a conclusion can be presented. This research uses the Publish or Perish application to search for scientific articles. Search for scientific articles indexed by Google Scholar with a publication range between 2018-2022. The results of the search obtained 107 scientific articles that match the search keywords. Furthermore, from 107 scientific articles, a more in-depth screening was carried out so that 45 scientific articles were obtained that specifically discussed STEM Education in physics. The analysis was carried out using the VOS viewer application with the aim of mapping the direction of the research using the keywords "STEM Education" and "Physics". Introduction Many developing countries are conducting research to improve student achievement and the quality of STEM (Science, Technology, Engineering, and Mathematics)-based education.The STEM-based education system has received full and significant support in America and the European Union (Corlu & Aydin, 2016).In the STEM system there are six main pillars, namely: higher-order thinking skills, inquirybased learning skills, problem-solving skills, contextual learning, collaborative learning and project-based learning.The STEM system can be applied to elementary school (SD), junior high school (SMP) and senior high school (SMA) students. The current education curriculum in Indonesia is the Freedom to Learn curriculum, which is an option that can be implemented by education units.The Education Unit can implement the Free Learning and STEM curricula simultaneously.But there is still much debate about the lack of student interest in STEM.The lack of student interest in STEM can be seen from the following studies.Yung's research (2010) found that only five percent of students in the United States continue their undergraduate studies in science programs (Yang, 2010).In England, when children enter secondary school, their interest in science topics begins to wane (Barmby et al., 2008). The reason for students' lack of interest in STEM is because they feel anxious due to the perception that it is too difficult to get good grades in STEM subjects on the topics of Physics, Chemistry and Mathematics.The reduced number of academics in science learning is caused by fear and lack of confidence in STEM-related subjects (Amelia et al., 2019).Because learning science is considered to have a learning syllabus that is difficult and different from other learning materials.There is also an opinion that students who will continue their undergraduate education will have difficulty using STEM learning (Ring et al., 2017).Negative brush against STEM learning will be a barrier for children pursuing STEM careers (Sin et al., 2013). 
By considering some of the aspects above, the author in making this article aims to provide a bibliometric analysis of STEM-related literature on Physics topics indexed by Google Scholar (GS).Through analysis and categories based on distribution and author affiliation.This analysis could become a research topic that is the subject of more publications and the topic "STEM Education on physics" in the future.The methodology used to carry out the analysis is to use bibliometric analysis, including the instruments and methods in which there is a Publish or Perish (PoP) software application.Then to present the results of data processing using the VOSviewer application followed by a discussion session and conclusions from the results of the literature test that has been carried out. Method In this research the writer uses descriptive qualitative method.Qualitative descriptive method is a research method that utilizes qualitative information and is described descriptively.This method began to be developed by experts around the 1970s marked by a book entitled "The Discovery of Grounded Theory" written by Glaser andStrauss in 1967 (Packer-Muti, 2016).Descriptive research is a type of research that produces conclusions through a description of the problem and not through a statistical calculation process.Some advantages in the decision-making process, using qualitative methods for assessment and testing.In descriptive research to gain deeper insight into designing, administering, and interpreting assessments and tests and exploring behavior, perceptions, feelings, for the understanding of test takers. Some of the weaknesses in descriptive research are smaller sample sizes and time consuming.Quantitative research methods, on the other hand, involve a larger sample, and do not require a relatively longer time for data collection (Rahman, 2016).This is also in line with qualitative research which aims to answer questions related to the development of understanding the dimensions of meaning and experience of human life and the social world (Wohlrapp, 2014).In addition to using descriptive methods in this study also use Bibliometric Analysis.Bibliometric analysis is based on a systematic and explicit method (Garza-Reyes, 2015) as well as a method that maps on Knowledge Constraints (Tranfield et al., 2003).In mapping the Bibliometric Analysis there are 5 steps that must be carried out as shown in Figure 1.At the stage of determining the search database, the Publish or Perish (PoP) application is used to make it easier for researchers to find articles that have been published.The Publish or Perish (PoP) application can be used to retrieve scientific article publication meta data from crossref, Google Scholar, PubMed, Open Alex, Scopus, Semantic Scholar, and Web of Science.In this study, the search for articles using the Google Scholar database.The choice of the Google Scholar database is because Goole itself is a popular search engine and is rich in indexed scientific articles (Aulianto et al., 2019;Saputro, 2022). Define Keywords The second stage in the bibliometric analysis is to determine the keywords that will be used to search the meta data.Determination of these keywords is very important to produce quality journal articles.The keywords used can also be combined using the conjunction "and" or "or".For keywords that are more than one word can be enclosed in double quotation marks, for example the word "STEM Education". 
Organizing Search Results The third stage is compiling search results using the PoP application which was carried out in the second stage.Search results can be exported into documents with the extension Research Information System (RIS), BibTex, CSV, Endnote and other formats.Furthermore, these search results can be completed and processed using the Mendeley or Zotero applications to complete the details of each article easily and quickly. Statistical Data Compilation The fourth stage is to compile statistical data from search results.In this stage, the Excel application can be used to process statistical data.Statistical data compilation can be done easily using the excel application, for example displaying the 10 most cited articles, displaying the number of articles per year and displaying articles based on certain keywords. Data analysis The final stage is the analysis of search results data using the PoP application.In this stage the author uses the VOSviewer application to display network visualization between keywords.The results of network visualization between keywords are then followed by analyzing the interrelationships between keywords.Gap analysis can also be carried out or the distance between keywords to determine the updating of related studies. Result and Discussion The search results for articles using the Publish or Perish (PoP) application with the Google Scholar (GS) database get 107 articles from 2018-2022.Use the STEM Education keyword in the keyword column, and Journal keyword in the publication name.Can be seen in Figure 2 The next step is to take the 10 articles with the highest citation value using the keyword "STEM Education" as shown in Table 2. Furthermore, 45 searched articles using the Publish or Perish application are saved in RIS format, to be processed using the VOSviewer application.In the preparation stage of data processing using VOSviewer, 16 keyword terms were generated as shown in Table 3.There are several keywords, namely approach, class, development, engineering, Indonesia, interest, level, mathematics, n gain, project, science, skill, stem approach, stem education, studies and technology. The visualization results using the VOSviewer application will obtain a bibliometric map as shown in Figure 3.It can be seen that the visualization network has 3 network clusters with different colors red, green and blue.Cluster 1 contains 7 keyword items, namely: class, Indonesia, level, n gain, skill, stem approach and study in red.Cluster 2 contains 6 keyword items, namely: approach, engineering, mathematics, project, science, and technology in green.Cluster 3 has 3 items, namely: development, interest and stem education.Figure 4 shows an overlay visualization of articles per year related to STEM Education keywords.As seen in Figure 4, it shows articles between 2019 and 2021.From the color of the overlay being deep dark in 2019 and starting to lighten in 2021, it means that the keywords technology, engineering, mathematics, science in 2019 have had many articles using these keywords.Furthermore, the keywords skills, stem education are still bright in color, which means that these keywords can still be further researched to produce the latest articles.4. 
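For readers who wish to reproduce the counting step behind Table 3 and the keyword map outside of VOSviewer, a minimal sketch is given below. It is not the authors' procedure: it assumes a hypothetical CSV export from Publish or Perish named pop_export.csv with Title and Abstract columns (actual column names depend on the export format), and it simply counts keyword occurrences and pairwise co-occurrences, which is the kind of information VOSviewer turns into the cluster map.

import csv
from itertools import combinations
from collections import Counter

# Keywords of interest (taken from Table 3 in the text).
keywords = ["stem education", "physics", "technology", "engineering",
            "mathematics", "science", "skill", "project"]

freq = Counter()   # how often each keyword appears in any article
cooc = Counter()   # how often two keywords appear in the same article

with open("pop_export.csv", newline="", encoding="utf-8") as f:  # hypothetical file name
    for row in csv.DictReader(f):
        text = (row.get("Title", "") + " " + row.get("Abstract", "")).lower()
        present = [k for k in keywords if k in text]
        freq.update(present)
        cooc.update(combinations(sorted(present), 2))

print(freq.most_common())    # keyword occurrence counts
print(cooc.most_common(10))  # strongest keyword pairs (edges of a VOSviewer-style map)

Running this on an export of the 45 selected articles would reproduce, up to text-matching details, the frequency ranking that VOSviewer visualizes as node sizes and link strengths.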
Conclusion

From the results of the bibliometric analysis that has been carried out, it can be concluded that STEM Education for physics subjects is a theme that can still be developed through the use of technology or projects. As shown in Figure 5, STEM Education research has not yet touched much of this area. Hopefully the results of this analysis can be useful and can be used to determine the theme of further research. This study has limitations in that the research assessments are subjective, especially in determining the keywords and the range of years used, so errors in selecting keywords are still possible. In addition, considering that the only database used in the search was Google Scholar, the results displayed are limited to Google Scholar. This research also answers the suggestion of Zulaikha et al. (2021) to add samples to be analyzed so that authors have broader opportunities to carry out the analysis, namely by extending the sample to more than 200 articles, where previously only 45 articles were analyzed. For further research, the authors suggest using a sample of more than 200 articles, compiling the search database using Scopus or Web of Science, and analyzing it with applications such as BibExcel and HistCite or other applications.

Figure 1. Stages of Bibliometric Analysis.
Figure 2. Article search results in Publish or Perish.
Table 1. Data Matrix.
Table 2. The Top Ten Highest Citation Articles.
Table 3. Term Keyword Analysis (VOSviewer).
Table 4. Three Article Clusters.

From the search results, the article by Diana Bogusevschi, Cristina Muntean and Gabriel-Miro Muntean, "Teaching and Learning Physics using 3D Virtual Learning Environment: A Case Study of Combined Virtual Reality and Virtual Laboratory in Secondary School" (published in 2020), received the most citations, i.e. 124. A total of 107 articles were retrieved and further screened for the keyword "STEM Education".
v3-fos-license
2016-05-12T22:15:10.714Z
2013-03-27T00:00:00.000
4809311
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://conflictandhealth.biomedcentral.com/track/pdf/10.1186/1752-1505-7-7", "pdf_hash": "701181f3f801495fccb24e1e7ddaa2881ab2139d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46226", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "sha1": "c969859a0b62834b0e954c455efa2630f0c8b4bf", "year": 2013 }
pes2o/s2orc
Family therapy sessions with refugee families; a qualitative study Background Due to the armed conflicts in the Balkans in the 1990s many families escaped to other countries. The main goal of this study was to explore in more detail the complexity of various family members’ experiences and perceptions from their life before the war, during the war and the escape, and during their new life in Sweden. There is insufficient knowledge of refugee families’ perceptions, experiences and needs, and especially of the complexity of family perspectives and family systems. This study focused on three families from Bosnia and Herzegovina who came to Sweden and were granted permanent residence permits. The families had at least one child between 5 and 12 years old. Method Family therapy sessions were videotaped and verbatim transcriptions were made. Nine family therapy sessions were analysed using a qualitative method with directed content analysis. Results Three main categories and ten subcategories were found - 1. Everyday life at home, with two subcategories: The family, Work and School/preschool; 2. The influence of war on everyday life, with three subcategories: The war, The escape, Reflections; 3. The new life, with five subcategories: Employment, Health, Relatives and friends, Limited future, Transition to the new life. Conclusions Health care and social welfare professionals need to find out what kind of lives refugee families have lived before coming to a new country, in order to determine individual needs of support. In this study the families had lived ordinary lives in their country of origin, and after experiencing a war situation they escaped to a new country and started a new life. They had thoughts of a limited future but also hopes of getting jobs and taking care of themselves and their families. When analysing each person’s point of view one must seek an all-embracing picture of a family and its complexity to tie together the family narrative. To offer refugee families meetings with family-oriented professionals to provide the opportunity to create a family narrative is recommended for the health and social welfare sector. Using this knowledge by emphasizing the salutogenic perspectives facilitates support to refugee families and individuals. This kind of support can help refugee families to adapt to a new system of society and recapture a sense of coherence, including all three components that lead to coherence: comprehensibility, manageability and meaningfulness. More studies are needed to further investigate the thoughts, experiences and needs of various refugee families and how refugee receiving societies can give the most effective support. Background Refugee families are affected by different types of stressors before the flight, during the flight, and during the resettlement processes. The effects vary and have different time scales for the parents compared to the children [1]. Refugee children tend to be resilient and resourceful despite the many adversities they face [2]. Most children, particularly younger ones, cope with the separation from their home countries more easily than the parents, and they experience fewer barriers to social network rebuilding [3]. In spite of this, many young refugees experience mental health difficulties. Thus the awareness of society and clinicians concerning relevant risks and protective factors is important [4]. 
Exposure to severe traumatic events in the refugees' home country, and the medical and psychological effects of this exposure, are known to critically influence the possibilities for resettlement in a new country [5]. Negative health consequences are especially high when relocation is forced due to severe conflicts in the home country associated with violence and man-made trauma [6]. However, post-migration factors such as language barriers [7], loss of culture and support [8], and a prolonged asylum process [9], have also been found to have a negative impact on psychological well-being. In one study of war-wounded refugees exposed to severe traumas in their home countries, the results indicated that life circumstances and events related to the present situation, "here and now", were more important for their well-being and social integration than background factors [10]. A review of 22 studies of refugee children found substantial variation in the definitions used and measurements made of the children's problems and reported levels of post-traumatic stress disorder ranging from 19 to 54% [11]. Traumas and negative life events may give rise to negative changes in attachment between children and their parents. Culturally appropriate counselling theories and their respective interventions can be helpful in finding treatment options [12]. Psychological problems are frequent in refugee children but over time in exile the extent of these problems is reduced. Traumatic experience before arrival is the most important factor determining the short-term reaction of the children, while stressful life in exile seems to be the most important factor affecting the children's ability to recover from early traumatisation, according to Montgomery [13], who also points out that the quality of family life seems to be important for both short-and long-term mental health. Robertson and Duckett [14] studied displaced Bosnian mothers' experiences caring for their children during and immediately after the war (1992)(1993)(1994)(1995) and they concluded that although families need to move forward, they may need to look back, at least from time to time. Weine et al. [15] has concluded that qualitative family research is useful for better understanding of refugee families and in helping them through family-oriented mental health services. Al-Baldawi [16] points out that it is important to distinguish psychosomatic manifestations due to stress from pathological symptoms developed as a result of psychiatric or somatic diseases, in order to reduce the risk of over-or under-diagnosing the patient's problems, and to choose the correct treatment to promote better and quicker integration. One qualitative study interviewing refugees about their experiences with the Swedish healthcare system showed that care providers' conversations about daily life were seen as a sign of commitment, knowledge and professional skill [17]. Another study has shown that secondgeneration immigrant children did not differ from the non-immigrant children in their own presentation of mental health at the age of 12 [18]. A study of Bosnian war prisoners who came to Sweden points out that the most important factor for their wellbeing during the first period in exile was whether or not the family and other relatives were reunited and if they knew what had happened to other members of their family [19]. 
A study examining the functioning of the family and the child's psychological adaptation while staying in a refugee camp in Sweden concluded that family members should not be separated during the asylum and that a follow-up process is desirable when they have obtained residence permits allowing them to stay [20]. Hopes regarding education and family reunion were central in the resettlement of West African refugees in Sweden [21]. In one study evaluating mental health and social adjustment of Iranian children 3,5 years after arrival in Sweden, the conclusion was that current life circumstances in receiving host countries, such as peer relationships and exposure to bullying, are of equal or greater importance than previous exposure to organised violence [22]. Another study showed that extended family and, in particular, parental siblings play important roles in the acculturation experience and family functioning of Vietnamese refugee families in Norway [23]. Goldin [24] found that protection of the refugee children was associated with, among other things, a warm family climate and above all a family sense of hope for the future. Alinder et al. reported positive effects of family sessions after treatment at home of eight families from Bosnia who had fled to Sweden [25]. Due to the armed conflicts in the Balkan region in the 1990s many families escaped to other countries. During the period 1992-1995, nearly 50 000 persons coming from Bosnia and Herzegovina were granted permanent residence permits to stay in Sweden [26]. Research on refugee family perspectives in order to obtain a fair idea of their complexity is of the utmost importance to create useful guidelines for professionals in the health and social welfare sector. Most studies have focused mainly on individual perspectives. Few studies focusing on the family perspectives of refugees have been carried out. Thus there is a need for more research work to be done in this area. The main goal of this study was to explore in more detail the complexity of various members' experiences and perceptions from their life before the war, during the war and during their escape, and finally during their new life in Sweden. Study setting Refugee families from Bosnia-Herzegovina were asked to participate in this study by a nurse in a medical health centre or by social workers in the communities where the families lived. This study focused on using a qualitative method to study the lives and experiences of three refugee families who came from Bosnia-Herzegovina to Sweden. It is part of another project in which initially 14 families participated [27]. That project was an intervention study with the aim of creating a family narrative and supporting the whole family by giving three family therapy sessions. The diagnostic interviews before the intervention with the family therapy sessions were carried out between 1995 and 2000. The families had arrived to Sweden between 1992 and 1995. Eleven out of 14 families who were initially recruited took part in the family therapy sessions. The inclusion criteria were that the families should: 1) come from Bosnia-Herzegovina, 2) have permanent residence permits in Sweden, and 3) have at least one child between five and twelve years old. Study group Three families who had participated in family therapy sessions were chosen in this part of the study. They were selected because of the rich descriptions shown in the transcripts. The material consists of three sessions from each family i.e. 
a total of nine family therapy sessions. The families consist of both mother and father, and in two of the families one child, and in the third, two children. The ages of the children were: one aged four, two aged seven, and one aged twelve. The three families had been in Sweden for about two years, four years and six and a half years respectively at the start of the first family session. The children in this study had no pronounced psychiatric problems and to our knowledge their parents had not sought psychiatric help for them. Data collection The families received three sessions of family-based therapy in which the children participated with their parents. Each session lasted about one hour, so in total approximately nine hours of session data was analysed. The purpose of giving the family these sessions and going through different themes illustrating their life before, during, and after the war was to give everyone the possibility of being involved, and of sharing experiences and thoughts with each other, to tie together a family narrative with the aim of supporting the whole family. One idea was to open the communication within the family to help the family members to handle stressful and new situations better. Another idea was to share thoughts and experiences to prevent concealment of destructive family secrets. The intervention was influenced by systemic and narrative approach with crisis and salutogenic theory as the framework [28][29][30]. The themes in the sessions were: former life situation before the war, the war, the escape from the home country from each family member's point of view, the present situation in regard to role changes, network, thoughts about the future, and coping strategies in the family. The war was mentioned in passing. In comparison to other themes, less time was spent talking about the war. The intention of the therapists was not to focus on traumatic experiences, but to keep the family story on track. All family members, including children, were involved in talking during the sessions. In this study two well-trained and experienced interpreters were involved. All sessions, except one, used an interpreter (the absence was due to sick leave). The sessions were conducted in Swedish and when needed the interpreter spoke the families' mother tongue. All sessions were videotaped. Verbatim transcripts were made from the videotapes. Interactions between the family members were noted. The first author was in charge of the family therapy sessions together with a social worker; both were trained in family therapy. The same structure and approach was kept in all family sessions. Analysis Data were analysed using directed content analysis. Content analysis is a method of analysing verbal or written communication in a systematic way [31]. This method is used to interpret meaning from the content of text data [32]. With a directed approach, analysis starts with an analysis of the different themes that have already been selected and focused on in the family therapy sessions. Material from all three sessions with each of the three families was analysed in this study. Transcripts from each session were read several times. First the text was read so that the different themes could be selected in the family therapy sessions. The next step was to collect the text material belonging to each theme from the nine different family therapy sessions. The text was then reread several times and meaning units assigned to a single topic were sorted into three categories. 
The three main categories were subdivided into ten subcategories. The categories and subcategories were then validated through a systematic analysis of the material, and the analysis and findings were checked by an experienced and skilled qualitative researcher (author CB). Ethical considerations The ethical principles of autonomy, non-maleficence, beneficence and justice were considered. Concerning autonomy, the families had agreed to participate in the study and they knew they could end their involvement at any time. One dilemma to consider was that talking about traumatic experiences might remind them about hard times and thus might worsen their current mental condition. On the other hand, there was also a possible gain represented by the principle of beneficence. The purpose was to help the family members to leave traumatic experiences behind and continue their new life in a better way. The work with the family therapy sessions was done without taking account of gender, social or economic status, ethnicity or any other factor. The study was approved by the Ethics committee of the University of Linköping (93092). Result Three main categories emerged from the analysis of the family therapy sessions: "Everyday life at home", "Influence of war on everyday life", and "The new life". A total of ten subcategories comprised the main categories, as shown in Figure 1. Everyday life at home Everyday life at home was the main category that emerged from the analysis of the material from the sessions focusing on life before the war. Narratives came up about family members who still live in Bosnia and Herzegovina and who the informants missed very much. The informants said positive things about relatives who lived with each other and with whom the informants associated often and closely, and who helped them in different ways. Besides family, work and school/preschool were important. The parents did well at their jobs in Bosnia and Herzegovina and their relatives helped them. Family life functioned well economically. The informants had some difficulties at first in remembering what had happened in their home country before the war but they remembered better after they had been talking for a while. Overall, the informants highlighted mostly the positive aspects of life before the war. Family Family life before the war was elaborated on. Many relatives lived close by and sometimes even lived with the family. Children had access to grandparents and socialized closely with them. Positive traditions with many relatives who celebrated ceremonies together were described. One informant described, for example, birthdays when as many as 20 people often gathered to celebrate together. The informants said that they lived a good life, being able, for example, to go on vacation to the seaside every year. They also mentioned other free time activities like visits to forests. Work and school/preschool Work concerned the jobs held by the adults, and school/ preschool concerned day-care centres since the children were small when they were in the home country. The parents said that they liked their jobs in their home country and that because they had these jobs and earned money, the family functioned well economically. Getting help and support from family and relatives were common. The parents had access to day-care centres and the children had both positive and negative memories. An example of a negative memory from a parent was remembering when a pre-school teacher at the day-care centre shouted at the children. 
Some positive memories were having access to day-care centres while the parents were working, and living in close proximity to day-care. The influence of war on everyday life This quiet everyday life was suddenly changed when war broke out. Some informants were close to the war zone, while others were further away but were still affected. Ethnic cleansing was mentioned. The informants escaped from their home country in different ways, with part or the whole family together. War, escape and different reflections from this period of war and escape are described under this main category. War Life was changed dramatically because of the war when the families began to experience bombings and shooting. One was suddenly in the midst of war and saw these events at close range. "I saw a bomb fall in the meadow" (child) "A bomb was thrown into a garden. . ." (father) "I heard bombings in town" (mother) One family was not as close to the war zone as the family just quoted above, and they compared themselves with the families who had experienced much worse things than they themselves had. Escape Escape was expressed as the fleeing to Sweden. In two cases the whole family came together. In the third case one parent and the child came first and the other parent arrived after a few months. One of the families first went to Croatia and got passports there and then came to Sweden. Feelings (crying, fear), somatic complaints (for example vomiting), and depressive symptoms were described. Uncertainty about where to escape to and how to do it were brought up. Family members gave different descriptions depending on their specific experiences. "Yes, yes we came together with eleven others" (father) ".. . .one Sunday when we went, . . .. we heard on the radio that it could happen that we would be arrested" (father) "Yes, I had to wait there for the aircraft even though it was very windy, when the aircraft came it was so windy as it never have been. . . eight billion aircrafts there or more. . .it was very windy. That I remember" (child) Reflections The informants presented different reflections. Family members reflected on changes and difficulties associated with the war and the escape. Another change highlighted was the language. One family lived away from the war but had friends who lived close to the war zone. Different stories were told, depending on what traumatic experiences they had been through. Parents in mixed marriages in which the spouses came from different ethnic groups reported difficulties arising from this. Someone said that it was better to forget what had been. One informant reflected about the big change with a negative war situation that later on gave an opportunity to find a new and good life. They mentioned things that were important in their home country but were left behind there, for example toys. ". . .best to forget about what has happened" (father) "We do not only look at one side. . .but at all three sides" (mother) The new life The third main category was the new life. Different ways to adapt to a new society were described here under the five subcategories related to this main category. The process of finding jobs and learning a new language were mentioned. In their new life some of them were dealing with health conditions linked to the war and to traumatic experiences, but also ordinary health problems such as colds and infections. Much effort was put into keeping the family together and making new friends. 
Much of the talking was about here and now, with expressions of feelings of a limited future. The informants expressed their thoughts about their transition to a new life. Employment There were reports that much time was spent on learning Swedish and on job searching and on getting children started with school work. It was not easy for parents to find a similar job to the one they had in their home country. Also, day-care matters played a part in employment difficulties. "In the beginning it was difficult because it was kind of a problem for the children to adjust, and day-care and everything. . ." (mother) Dilemmas in making different aspects of life work together were highlighted, for example, when a person had got a job but had to go there by car and there was a delay starting the job because of the need to get a driving license first. Health The new life was accompanied by health aspects linked to cultural differences and traumatic experiences, as well as by ordinary health problems that children face, for example repeated colds and infections. War injuries were discussed, and various operations and forms of bodily harm resulting from these were mentioned. Traumatic memories and nightmares were mentioned, and also sleeping problems and depressive symptoms. Hearing disorders were mentioned and different assessments and ways to cope with these were discussed as well. ". . .she is sitting at the table (the daughter) eating and cannot breathe,.one or two spoonfuls of food and cannot breathe through her nose..does not manage to eat" (mother) "Well, on the whole, still affected by it, I mean that I sleep badly" (mother) "You get tired" (father) Relatives and friends Concerning the subcategory relatives and friends, the importance of having relatives was mentioned and some stories were told about relatives being separated and living in different countries around the world. In one case, a couple attempted to bring in parents from their home country to stay with them in Sweden. Visits of grandparents were described in detail as well as natural deaths and how these had influenced the family. One informant talked about having no real friends in Sweden. Difficulties in socializing and making friends were described. "Not as much contact with Swedish children. . .as there could be" (father) Limited future The subcategory of the limited future was named so as to capture the difficulties the grown up informants described in thinking far ahead about the future. Many of their tales were about the situation here and now. It also appeared that not all difficulties were linked to cultural differences between living in different countries, but instead that persons had different personalities and that experiences, at least to some extent, depended more on this than cultural differences. The children could see a future which was not limited. "I cannot think five years ahead" (father) "We do not talk much about the future" (mother) "I want to be a doctor or work at a pharmacy. I change often, I do not know. . ." (child) Transition to the new life Concerning the transition to the new life, there was a positive attitude towards getting permanent permission to stay and thus being able to avoid having to move around with suitcases. "What is important for us . . .is to work. . ." (father) One person emphasized the importance of complying with Swedish laws and regulations. Several of the families had broken up several times in Sweden and lived in different refugee camps. 
The informants talked about couples having different roles in their home country but also in their new life. They did not focus on gender differences but more on differences between living in the countryside and in town. One informant spoke, for example, about a grandfather who was a better cook than anyone else. Thoughts of not getting stuck in the past came up. "You have to move on and not think about what happened before" (mother) "We used to put emphasis on the future of the children" (father) Discussion This study examined life and experiences of three families from Bosnia and Herzegovina who had fled from the war and got permanent residence permits to stay in Sweden. The families said that in the Balkans they had lived a life which they described as normal and good most of the time. Life was changed because of the war situation and they were forced to start all over again. The parents' thoughts about the years ahead can be interpreted to mean that they feel they have limited scope in the future. On questions about life within five years they had problems thinking that far ahead. Much focus was instead on more narrow problems such as getting jobs and taking care of themselves economically. The children did not have worries or problems when talking about jobs. For them it seemed more certain that they would get jobs in the future. In the attempt to generalize the findings from qualitative studies, as in all research, it is necessary to consider if the findings, based on the data presented, are transferable to other similar groups [33]. Salutogenesis is important theoretically when meeting these families. Antonovsky [30] developed the term "salutogenesis", and the main concept in his theories is a sense of coherence. The need for coherence gives an explanation for the role of stress in human functioning. The sense of coherence has three components: comprehensibility, manageability and meaningfulness. These components seemed valuable when interpreting what the family members were talking about in the sessions. The intention to not focus on traumatic experiences probably influenced the time used to talk about war and other traumatic experiences. Another explanation could be that the family members did not want to be reminded about the war and handled the situation by neglecting that issue. It seemed important for the families who had experienced a concept of coherence in their home country also to have a sense of coherence in the new country. Their statements, for example, about having close relations to family and relatives, and their job situation in their home country showed the importance of coherence. The war interrupted their sense of coherence. Several of the members stressed the importance of making friends and getting jobs in the new country and in that way getting a new sense of coherence. Bronfenbrenner's ecological theory of human development [34] is another theory that is worth considering when trying to understand refugee families in their new life. He analysed different types of systems that aid in human development. To understand a child's development and situation it is important to not only look at the child, its family, and its immediate environment but also at the interaction with the wider environment. 
The families in this study talked about their new life where not only the immediate environment influenced them but also the interaction with the wider environment, such as how society is built up and what kind of culture, rules and regulations there are in the new country. For the refugee children, the support, not only of their families, but also of school teachers and their new friends, were of equal importance. Also, their contacts with their extended family, perhaps living in another part of the world, were described as important. The main categories found in this study seem reliable, but could possibly be different in other refugee groups. The subcategories would probably differ more, particularly if the families came from another cultural background. These families had a good life before the war and before the flight to Sweden, with no psychiatric problems. Because of the traumatic events which they have gone through they might be vulnerable. Even if the children do not have psychiatric problems it is valuable to get information from the child itself in order to understand the child's psychological condition [27]. The UN convention on the Rights of the Child [35] is important, among other things, for how it impacts the interplay between government policy and practice and refugee children's welfare [36]. Refugee children constitute a vulnerable group, which is in need of special care and attention [37]; however few studies to date have focused on the assimilation in the new country viewed from a longer time perspective [38]. The intention of this study was to focus on and listen to all members of the family, but less space was allocated to children's statements. One explanation could be that it was easier for the grown-ups to remember things about their home country than it was for the children, and the therapist continued to ask more questions about their memories. One of the children was less than one year old when arriving in Sweden and therefore could not express memories from the home country. Another explanation could be the tradition that grown-ups talk more with each other than with children. The doubts expressed by a parent in one family during the follow-up session to our study about talking with children about their experiences could illustrate a known phenomenon that has been described by Almqvist & Broberg [39] as a strategy of denial and silence within a family about previous traumatic experiences. This strategy of mutual silence might become an obstacle for giving traumatised children parental support and professional treatment. In research as well as in therapy sessions, basic ethical principles such as autonomy, non-maleficence, beneficence and justice should each be taken into consideration. These principles may be considered from the point of view of each of the actors involved: the patient, the family members, the therapist and the interpreter [40]. When using an interpreter, potential threats to validity arise at various points [41]. Methodological issues with respect to interpreters have received only limited attention in cross-cultural interview studies [42]. An interpreter provides verbal translation during an interaction/conversation between two or more persons who speak different languages. The quality of data and translation of speech can affect the accuracy of any study. One must pay attention to the risk that the person interpreting could modify participants' responses to what she/he thinks the clinician or researcher wants to hear. 
Interpreters are active but neutral during the data generating process. In this study the person leading the sessions (GJB) did not understand the mother tongue spoken by the family members and the interpreter, so it was not possible to control any bias in the interpreting. In Sweden there is a law [43] stating that people who do not understand or speak Swedish have the right to an interpreter in all contacts with public authorities. Interpreter agencies supply well-educated and authorized interpreters [44]. They are educated in language, laws and regulations, secrecy, professional attitudes and medical terminology. In this study, to minimize possible bias, well-trained and experienced interpreters employed by an interpreter agency were selected. The interpreters were known to have broad experience as interpreters in clinical work and were recognised as professionally competent. In a general sense there is a need to develop and evaluate specific and appropriate training programs for interpreters as well as for clinicians working in child mental health, as has been pointed out by Rousseau et al. [45]. Several limitations of this study have to be taken into consideration. One is that only a few cases are examined. Another limitation is that the families came from one part of Europe and the adults were well educated and had lived a life quite similar to other European people and might not be comparable with other refugee groups from other cultures. There were some technical problems in the videotaping. Some words and sentences could not be interpreted in spite of reviewing and listening several times and in spite of another person watching and listening to the taping. Other limitations are the therapists' lack of linguistic skills and how interviews might be affected by the presence of an interpreter [41]. In all the sessions except one there was an interpreter present, and it is not known how accurate the interpretation was. In some sessions the interpreter was attending but was silent throughout the session since Swedish was spoken by all members of the family. One has to consider whether the interpreter had an effect on the communication, for example if family members would have said other things if an interpreter had not been involved. In order to limit the risks identified and reduce misunderstandings the intention was to use the same interpreter in each family for all three sessions. For two out of three families the same interpreter was involved in all three sessions. All families had members who knew Swedish fairly well and were able to evaluate the interpreter's translation as accurate. Conclusions In order to determine what the individual needs of support are in refugee families it is valuable to find out what kind of lives these families have led before coming to a new country. In this study the families had lived normal lives in their country of origin, similar to others around them, but after experiencing a war situation their lives changed when they escaped to a new country and started a new life. Even if they had thoughts of a limited future they had hopes of getting jobs and taking care of themselves and their families. It is important to get an all-embracing picture of a family and listen to each person's point of view to understand the complexity of the family system and tie together the family narrative. A recommendation for the health and social welfare sector is to offer refugee families with children meetings with professionals who have family-oriented knowledge. 
The purpose would be to let the family members tell their individual experiences while the others are listening, so that all can be joined into a family narrative. If someone has psychiatric problems, for example, depression, it is important to also offer individual treatment. Family therapy can be helpful in strengthening the family members' ability to cope with life by providing a common picture of the complexity of the family system but it is not enough to cure individual psychiatric problems. Some limitations of family therapy interventions could be cultural aspects, for example, the idea that grown-ups are the ones who should talk, with the consequence that the children do not speak much. There may be views that children need to be protected against talking about traumatic experiences. The complexity of family perspectives and family systems is important to consider when handling psycho-social support to refugees. In family therapy it is known that if one individual does not feel happy, it will influence the whole family. There is a risk that this phenomenon will be magnified in refugee situations, since the families' normal social networks have usually deteriorated. If everyone is heard you get to know different thoughts and feelings and with support it is possible to help the family on the whole to feel better. More focus and space could be given to the children to add to the knowledge of the complexity of the family. Using knowledge by emphasizing the salutogenic perspectives facilitates the provision of support to refugee families. This support helps refugee families to adapt to a new system of society and recapture a sense of coherence, including all three components: comprehensibility, manageability and meaningfulness. More studies are needed to further investigate the perceptions, experiences and needs of various refugee families, and especially the complexity of family perspectives and family systems. Consent Informed consent was obtained from the families for the whole research process.
THE RELATION BETWEEN THE OBJECTIVE OF WTO AND ECONOMIC RIGHTS
WTO Members are obliged to provide trade rules and mechanisms conducive for their citizens to conduct economic activities across frontiers in order to pursue their economic interests. This obligation is based on the economic rights granted in their national constitutions. When WTO Members implement their obligations under the WTO Agreements, they need to consider economic rights as the main reason for their involvement in the WTO. It is thus necessary to analyse the relation between the objective of the WTO Agreements and economic rights, in order to clarify the primary intention of WTO Members in conducting international trade under the WTO Agreements. To this end, this article attempts to prove that there is a relation between the WTO objective and economic rights, in order to urge WTO Members to implement their WTO obligations on the basis of the economic rights granted in their constitutions.
Introduction
In 1994, over one hundred governments took part in the Uruguay Round, defending the interests of countries of all sizes, stages of development and economic structures in the World Trade Organization (WTO). All WTO Members brought their national economic interests and national trade policies into the negotiations in Marrakesh. Furthermore, in every WTO negotiation round, members focus on bargaining over trade obligations among themselves, and the WTO therefore remains centred on multilateral and bilateral trade negotiation. These WTO Members are subject to trade obligations among them. However, the WTO does not only accommodate obligations among members, but also the obligations of a government to its citizens. According to the Panel in the Section 301-310 of US Trade Act of 1974 case, "[t]he object and purpose of the Dispute Settlement Understanding (DSU), and the WTO more generally, that are relevant to a construction of Article 23, are those which relate to the creation of market conditions conducive to individual economic activity in national and global markets and to the provision of a secure and predictable multilateral trading system" (emphasis added). The Panel emphasized that the obligations among WTO Members are driven by the obligation to create market conditions conducive to individual economic activity in both national and global markets, while this obligation is supported by the economic rights promulgated in each WTO Member's constitution. An economic right is a right granted by a national constitution, but it is at the same time an obligation of the government. This right inherently enables individuals to pursue economic interests across frontiers. In order to accommodate this right, a government is obliged to provide trade rules and mechanisms for its citizens to pursue their economic interests across frontiers. It is thus necessary for a government to commit to preserving economic rights in the international sphere by participating in international economic relations such as the WTO.
This article attempts to analyse the relation between the objective of the WTO and economic rights in order to clarify the primary intention of WTO Members in conducting international trade under the WTO, where the WTO consists not only of obligations among members but also of obligations from a government to its citizens based on economic rights. To this end, the purpose of this article is to describe the importance of correlating the WTO objective with economic rights, in order to urge WTO Members to implement their WTO obligations on the basis of the economic rights granted in their constitutions. There are two layers of obligations in terms of the WTO Agreements. First, there are obligations among the WTO Members (or obligations to individuals in other WTO Members' jurisdictions), and second, there are obligations from a government to individuals within its own jurisdiction. The government is obliged to provide trade rules and mechanisms conducive for its citizens to conduct economic activity across frontiers in order to pursue their economic interests. The Panel also declared that "it would be entirely wrong to consider that the position of individuals is of no relevance to the GATT/WTO legal matrix. Many of the benefits to members which are meant to flow as a result of the acceptance of various disciplines under the GATT/WTO depend on the activity of individual economic operators in the national and global market places. The purpose of many of these disciplines, indeed one of the primary objects of the GATT/WTO as a whole, is to produce certain market conditions which would allow this individual activity to flourish." The obligation of a government to provide trade rules and mechanisms is basically based on the economic rights granted by its national constitution as legal support and protection for individuals conducting economic activities across frontiers. Without legal support and protection from the government, including national and international legal support, these individuals find few economic benefits.
The Obligation of Government to Provide Trade Rules and Mechanisms for Individuals to Pursue Economic Interests across Frontiers based on Economic Rights according to the Constitution
Economic rights exist in almost all modern national constitutions under different phrases, such as economic freedom, the right to work, the right to property, the right to trade or to conduct business, intellectual property rights, and other rights associated with economic activities. All these rights are the foundation for individuals to engage in any economic activity, such as producing goods, providing services, selling and purchasing goods, distributing goods and services, and owning the property derived from any economic activity. The government is obliged to secure individuals' economic rights in order to guarantee that all individuals can pursue their economic interests according to the national constitution. States recognise the legitimate economic interests of their citizens in their constitutions. For example, the EU Charter of Fundamental Rights contains a few rights which can be clearly classified as modern and advanced economic rights. One of the prominent rights is the right to property, which is recognised as the right of possession. Possessions are given a wide interpretation to include various assets acquired through economic activities. All vested rights having an economic value are included. The right also includes the means to earn an income from business.
Through the protection of the right to property, the EU Charter has certainly incorporated a wide range of economic activities within the sphere of legal protection. The European Court of Justice (ECJ) has held that individuals obtain rights from the treaties creating the EU which are endowed with economic rights. This right, however, relates to the establishment of a common market area. The economic right is a foundation for a government to provide legitimate rules and mechanisms for individuals to pursue economic interests across frontiers, for example the regulation of market access in order to simplify access for citizens conducting their economic activities across borders. The higher level of government provides the necessary discipline and guarantee of market access: economic liberty is guaranteed as a fundamental right in the Swiss Federal Constitution; at the regional level, the four fundamental freedoms are guaranteed by EU law; and at the global level, the WTO enshrines market access for individuals from all WTO Members. A government commits to preserving individuals' economic rights in the international sphere by participating in international economic relations such as the WTO, because the core of intensive international economic relations begins with individual economic interest. Historically, Voitovich explained what created international economic relations. According to him, "the global economic relation derives from common interest of states which is influenced by individual economic interest within the country, to meet this common economic interest, states therefore construct extraterritorial economic agreement which is creating international legal rules." Van Themaat also posited that "interdependency of states in economic activity is obvious to realise the objective of national economic interest, since historically at national level in the west, that is no longer possible to achieve a number of objectives of national economic policy through national means only. It is thus necessary to have international intervention in addition to international rules for liberalisation and non-discrimination simply for realisation of the objective of national economic interest." It can also be concluded from the main point of the 1974 GA Resolution that the national economic interest genuinely represents individual economic interests, which can be the primary reason to commit to international economic law in the sphere of the development of cross-border economic activities. Significantly, the motivation behind any international economic law is the intention of a state to enable its individuals to gain a broader benefit.
In the contemporary world in particular, the closer relationship between international norms and domestic norms tends to make international economic relations affect citizens. As the world becomes more economically interdependent, citizens will find greater possibilities to conduct their business and to earn a better income, which affects their quality of life. As a result, citizens can be expected to assert themselves more aggressively and to require their government to respond to their needs to a greater extent in the development of international economic relations. To that end, the intention of a government in joining the WTO is to support its individuals in gaining a broader economic benefit based on their economic rights.
The Commitment of Government to join the WTO is to Guarantee Individuals' Economic Rights
The key objective of WTO law is the progressive removal of barriers that prevent or make more difficult beneficial exchange between producers and consumers located in different countries. The removal of barriers is intended to ensure that the WTO promotes growth and economic stability, which in turn supports the protection of individuals' economic rights. All WTO Members adopt WTO rules based on the mandate from their constitutions to guarantee the economic rights of individuals; indirectly, the correspondence to individual rights therefore lies in the coherence between the intention of states to join the WTO and their obligations to guarantee economic rights. Although the WTO and GATT do not contain economic rights, WTO law lays down precise rules of non-discrimination, in the sense of most-favoured-nation treatment and national treatment, which perform a very significant guarantee function with regard to the safeguarding of unimpeded trade. The economic rights underlying the national constitution of each WTO Member become a main purpose for all individuals to involve themselves in international trade. In light of this, when the WTO creators negotiated the multilateral trading system, they framed the objective of the WTO in accordance with the economic rights of individuals. The governments of WTO Members therefore rely on this objective in providing trade rules and mechanisms for individuals to trade across borders. In order to support the argument above, the following subsection discusses an analytical approach to the objective of the WTO and its relation to economic rights.
The Relation between the Objective of the WTO and Economic Rights
The Preamble of the WTO Agreements underlines that Members are "Recognizing that their relations in the field of trade and economic endeavour should be conducted with a view to raising standards of living, ensuring full employment and a large and steadily growing volume of real income and effective demand, and expanding the production of and trade in goods and services, while allowing for the optimal use of the world's resources in accordance with the objective of sustainable development, seeking both to protect and preserve the environment and to enhance the means for doing so in a manner consistent with their respective needs and concerns at different levels of economic development". The main objective of the WTO is thus "raising standards of living, ensuring full employment and a large and steadily growing volume of real income and effective demand, and expanding the production of and trade in goods and services".
The success of the WTO in increasing the world's economic welfare depends to a considerable extent on individuals' initiatives. The objective of the WTO of increasing human welfare with an open trading system that fosters employment and development at the same time requires and promotes individual freedom and economic rights. Economic rights serve trade interests because they enhance economic potential and protect economic freedom, as underlined in the concept of economic freedom.
The Achievement of Trade and Economic Endeavour should be Conducted with a view to Raising Standards of Living
WTO law consists of trade rules and mechanisms for individuals to conduct trade across frontiers that are negotiated by their governments. These trade rules and mechanisms are created based on the intention of all WTO Members to conduct trade and economic activities with a view to raising standards of living for all individuals by expanding trade in goods and services and reducing barriers to trade. Tariff concessions and disciplines on non-tariff barriers are considered rules that are provided for individuals to simplify their economic activities across borders. With these rules, individuals are supposed to pursue their economic interests while their governments support them through a constitutional commitment to protect the right to obtain profit from economic activity. One significant economic right is the right to property. This right becomes a major intention for each nation to involve itself in the WTO. The establishment of secure and stable rights to property has become a key element in the rise of modern economic growth. It stands to reason that individuals would not have the incentive to accumulate and innovate unless they had adequate control over the return to the assets that are thereby produced or improved, and in the end individuals have the right to enjoy the benefit from them. In relation to the objective of the WTO, the protection of the right to property will support the full raising of the standard of living when individuals have the right to obtain and enjoy the benefit from their economic activity, without any restriction or deprivation by national policy. The objective of the WTO accommodates the promotion of rights lying exclusively in the international economic sphere, such as the rights of exporters and importers to enjoy their property, freedom of contract, non-discrimination in relation to other like industries, and freedom of movement of goods and services across borders. The European Union (EU) has had experience of the consequences of violating the WTO Agreements when the violation is deemed an infringement of an individual's economic rights. In the Biret case, the Biret Company claimed to have suffered damage as a consequence of EU legislation prohibiting the importation of hormone-treated meat. Biret referred to the WTO Dispute Settlement Body (DSB) Decision in the Hormones case, which found that the EU ban on imports of meat and meat products from cattle treated with any of six specific hormones for growth promotion purposes was inconsistent with the provisions of the SPS Agreement, and required the EU to lift the hormone ban, which had been imposed in the absence of any scientific risk assessment of harm. Biret also sought compensation for damage because the ban restricted its right to conduct business. However, the EU General Court (GC) rejected the claim for damage because the Court did not identify unlawful conduct by the EU. The Court also denied the possibility for individuals to rely on provisions of the WTO Agreements in order to vindicate economic rights that have been violated. Advocate General Alber in the Biret case argued that the Court's reasoning in refusing to comply with the DSB Decision infringed fundamental or economic rights. Biret could not continue its normal commercial activity because the EU had decided not to comply with WTO law, which affected its business, and its economic rights were affected in their core.
The EU made a restriction on trade through the adoption of SPS measures which discriminated between domestic and imported goods and between those who engaged in trade in such goods. A restriction on trade therefore affects citizens' freedom to pursue an economic activity. Meanwhile, the SPS Agreement is of considerable importance to citizens engaged in trade, as Article 2(3) provides that the agreement intends to prevent a disguised restriction on international trade. To this end, the hormone ban that restricted the Biret Company from conducting its economic activity under the SPS Agreement infringed the right to pursue an economic activity, which is granted by the EU Charter in Article 16 (freedom to conduct a business) and Article 17 (right to property), while the Biret Company basically has an inviolable right to trade protection under the EU Charter.
Full Employment
The WTO negotiators also created rules and mechanisms relating to employment. For example, the GATT has several provisions relating to employment; GATT Article XII:3(a) mentions that 'contracting parties undertake, in carrying out their domestic policies, to pay due regard to the need for maintaining or restoring equilibrium in their balance of payments on a sound and lasting basis and to the desirability of avoiding an uneconomic employment of productive resources.' This article relates to domestic policies directed toward the achievement and maintenance of 'full and productive employment'. The employment dimension also plays a role in other WTO Agreements, for example in the Agreement on Subsidies and Countervailing Measures (SCM Agreement), Article 15(4). According to this article, the examination of the impact of the subsidized imports on the domestic industry shall include an evaluation of all relevant economic factors and indices having a bearing on the state of the industry, including employment. The Agreement on Textiles and Clothing, Article 6(3), also regulates a standard examination of the effect of imports that can relate to employment. The most profound provision regarding employment is GATS Article V bis: Labour Markets Integration Agreements. It states that the WTO Agreements shall not prevent any of its members from being a party to an agreement establishing full integration of the labour markets between or among the parties. Another rule regarding employment is the GATS Annex on Movement of Natural Persons Supplying Services. This annex applies to measures affecting natural persons who are service suppliers of a member, and natural persons of a member who are employed by a service supplier of a member. The context of full employment is pertinent to economic rights, which constitute the foundation for all individuals to earn a personal income derived from their economic activities. National trade policy should not deprive individuals of this right, since it is essential for all individuals in order to gain the benefit from trade across frontiers according to WTO law.
Trade Security and Predictability
WTO Members should take the necessary measures to provide stability and predictability of trade mechanisms in order to secure individuals' economic rights related to trade or business.
This also relates to achieving the broad objective of the WTO Agreements, as declared by the Panel in the Section 301-310 of US Trade Act of 1974 case: "the multilateral trading system is, per force, composed not only of States but also, indeed mostly, of individual economic operators. The lack of security and predictability affects mostly these individual operators. Hence, providing security and predictability to the multilateral trading system is another central object and purpose of the trade system which could be instrumental to achieve the broad objectives of the WTO Agreements." In the case of Argentina - Measures Affecting Imports of Footwear, Textiles, Apparel and Other Items, the Government of Argentina was concerned to provide stability and predictability of the trade mechanism promulgated under Law No. 22.415, whereby importers have a procedural right to challenge any duties assessed beyond the bound rate, which is purportedly a part of Argentine law. This procedural right derives from the right to trade and to conduct business underlined in the Argentine Constitution. In this settled case, Argentina also stated that the stability and predictability of the concessions in its schedule of commitments were supported by the Constitution of 1994. These commitments were at the top of the legal hierarchy and, therefore, took precedence over domestic legislation. All judges in Argentina have the power to declare, at the request of an interested party, the unconstitutionality of any measure adopted in breach of rules contained in an international treaty, such as the WTO Agreements.
Closing
Conclusion
WTO law is not only about rights and obligations to conduct international trade among members, but also about the commitment of a government to provide market conditions conducive to individual economic activity in national and global markets. This commitment is conceived to support individuals in achieving better income and benefits, and to promote the positive result of enhancing welfare and full employment based on the economic rights granted by national constitutions. Although the WTO does not regulate economic rights directly, the Members of the WTO negotiated the WTO rules and mechanisms in accordance with the economic rights embodied in the objective of the WTO Agreements. The relation between the objective of the WTO and economic rights is that WTO Members have the primary intention to conduct international trade in order to support their citizens in gaining economic benefits based on economic rights. It is clearly seen in the context of the objective of the WTO that raising the standard of living and full employment are primary concerns for WTO Members in conducting international trade under the WTO Agreements. However, a WTO Member such as the EU sometimes finds it difficult to implement its WTO obligations merely on the basis of economic rights, since international trade costs much more than the legal obligation between the EU and its citizens. It is thus necessary for the EU Courts to rely on the objective of the WTO in order to support the EU's citizens in engaging in international trade based on their economic rights. On the other hand, another WTO Member, such as Argentina, has a commitment to preserving economic rights as a main reason for involvement in the WTO, as promulgated in the Argentine Constitution. Its constitution declares that the right to trade and to conduct business, as part of economic rights, takes precedence over the implementation of WTO obligations.
Suggestion
The main intention of WTO Members in involving themselves in the WTO is to provide a better trade mechanism in order to support their citizens in conducting trade across frontiers. A government can support its citizens by granting economic rights based on its constitution. This means that no trade policy may diminish this inviolable right when the government implements its WTO obligations. It is hence significant that correlating the WTO objective with economic rights will urge WTO Members to implement their WTO obligations on the basis of economic rights. As in the Biret case, AG Alber argued that the EU Court must not disregard the freedom of trade and the freedom to pursue an economic activity when the Biret Company sought compensation for damage due to the violation of its economic rights. The EU Courts need to consider that the economic rights and liberties deriving from the national law of each WTO Member become a main purpose for all individuals to involve themselves in international trade, since accession to the WTO Agreements contains several possibilities for individuals to realise their economic interests.
Recent Progress in the Study of Thermal Properties and Tribological Behaviors of Hexagonal Boron Nitride-Reinforced Composites Ever-increasing significance of composite materials with high thermal conductivity, low thermal expansion coefficient and high optical bandgap over the last decade, have proved their indispensable roles in a wide range of applications. Hexagonal boron nitride (h-BN), a layered material having a high thermal conductivity along the planes and the band gap of 5.9 eV, has always been a promising candidate to provide superior heat transfer with minimal phonon scattering through the system. Hence, extensive researches have been devoted to improving the thermal conductivity of different matrices by using h-BN fillers. Apart from that, lubrication property of h-BN has also been extensively researched, demonstrating the effectivity of this layered structure in reduction of friction coefficient, increasing wear resistance and cost-effectivity of the process. Herein, an in-depth discussion of thermal and tribological properties of the reinforced composite by h-BN will be provided, focusing on the recent progress and future trends. Introduction The rise of graphene in 2004 [1] followed by an in-depth interpretation of the thin carbon film properties has provoked an exhaustive search on other alternative two-dimensional (2D) materials due to their newly emerged size-dependent privileges in properties and structures [2,3]. Owing to the development of efficient structural manipulation approaches, the fast growing list of 2D materials are no longer limited to graphene and can be expanded to metal oxides/hydroxides [2,4,5], transition metal carbides and nitrides (MXenes) [6,7], transition metal dichalcogenides (TMDs) [8,9], and h-BN [10]. Among this family, 2D h-BN with high structural resemblance with graphene has gravitated considerable attention and is so-called as "white graphene". h-BN bulk is a layered material consisting of individual basal planes known as BN nanosheet (BNNS). Each layer is comprised of alternative boron (B) and nitrogen (N) atoms partially-covalent bonded in a honeycomb (sp 2 ) configuration [10,11]. Due to the electronegativity difference between B and N atoms (N = 3.04 and B = 2.04 [12]), an ionicity also permeates within the 1.44 Å B-N bond distorting the electronic states symmetry. Then it reflects a severe lack of delocalized Pz electrons in both Valance and Conduction bands and subsequent generation of a large band gap of 5.1-5.9 eV [12,13]. Moreover, possessing ultra-flat atomic surface and a negligible lattice constant mismatch with graphene (1.7%) [14] can idealize them being an ideal supporter for graphene-based nanoelectronics bearing superior chemical and thermal stabilities [10,15,16]. Furthermore, the outstanding oxidation/corrosion resistance of h-BN layers as a capping layer and/or dielectric provide an opportunity to protect the susceptible substances from any chemical/structural damages [17,18]. Being an electric insulator also paves a way for further optoelectronic applications such as ultraviolet light-emitters [10,17]. Recent efforts on bandgap tunability of h-BN through doping or chemical functionalization has also revealed their potential for a wide range of electrochemical applications, from energy storage to biosensors [19][20][21][22][23][24][25]. Apart from all these excellences, high in-plane thermal conductivity (TC) of h-BN is capable of improving heat dissipation in many applications. 
Therefore, the fabrication of various composites having h-BN as a thermally conductive filler is now in high demand due to the simultaneous provision of high TC along with high chemical and mechanical stability. As an example, in electronic packaging industries working with polymeric materials, the longevity and reliability of electronic devices increase when heat dissipation is well controlled [26,27]. Furthermore, due to its layered structure and the weak van der Waals interactions between adjacent layers, h-BN is considered an extensively used lubricant material [11]. Therefore, its low shear strength and its capability to preserve lubrication at elevated temperatures or in oxidative environments have attracted considerable attention for improving the wear resistance of durable polymer/metal/ceramic matrix tribo-composites in both dry and wet media [28,29]. Despite all the research and publications on the TC and tribological properties of BNNSs-reinforced composites, there has not been a comprehensive article covering all aspects of this trend. Herein, in the second chapter, we discuss fundamental thermal conductivity theories with a focus on 2D h-BN and on conveying h-BN's superior thermal properties to polymer matrix composites (PMCs) through various synthesis techniques. In the third chapter, we elaborate the basics of tribological properties relevant to h-BN, all the way up to its applicability in tribological applications.

Conductivity Theory

Generally, heat energy transfer occurs through three different pathways: radiation, convection, and conduction. In a solid-state material, thermal conduction is the dominant mechanism. From a fundamental point of view, thermal energy appears as vibrational energy in materials; thereby, the transfer of a particle's vibrational energy to the neighboring particles without moving the location of matter is called thermal conduction. The heat diffusion mechanism within a solid material stems from two contributions: (1) atomic collisions and interactions, known as wave-like phonon conduction, and (2) electron movements. The TC of metals basically originates from energized electrons, while phonon conduction is dominant in nonmetallic systems [30]. Thermal energy in crystalline systems such as metals and 2D materials, including graphene and h-BN, disseminates through harmonized vibrational waves, causing simultaneous vacillation of all particles with the same frequency. Figure 1a clearly portrays the heat transfer mechanism in a crystalline material, from surface heat absorption to heat conduction/radiation to the surroundings. However, structural discontinuities such as defects, grain boundaries, and dislocations induce heterogeneity in the structure that ultimately disturbs the particles' harmonic vibrations. The result of this inharmonious vibration is phonon scattering, a phenomenon in which phonon conduction is not transferred by a single vibrational wave through the material. Moreover, phonon scattering inevitably induces a thermal resistance called "Kapitza resistance". Unlike ideal crystalline materials, amorphous materials such as polymers intrinsically have a combination of structural discontinuities, including chain folding and chain ends, due to the absence of long-range ordered structure. Therefore, they suffer from disordered vibrations throughout the chains (Figure 1b) and possess decelerated heat transfer compared to crystalline systems.
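To make the contrast between fast (crystalline-like) and slow (polymer-like) heat diffusion concrete, the short Python sketch below solves the 1D heat equation with an explicit finite-difference scheme for two assumed thermal diffusivities; the geometry, time scale, and diffusivity values are illustrative placeholders rather than data from any cited study.

```python
# Illustrative sketch: 1D heat equation dT/dt = alpha * d2T/dx2 solved by explicit
# finite differences, comparing a high-diffusivity "crystalline-like" solid with a
# low-diffusivity "polymer-like" solid. All values are assumed placeholders.
import numpy as np

def diffuse_1d(alpha, length=1e-3, n=101, t_end=0.01, t_hot=400.0, t_cold=300.0):
    """Temperature profile of a rod heated at one end after t_end seconds."""
    dx = length / (n - 1)
    dt = 0.2 * dx**2 / alpha          # stable explicit step (<= 0.5*dx^2/alpha)
    steps = int(t_end / dt)
    T = np.full(n, t_cold)
    T[0] = t_hot                       # hot boundary
    for _ in range(steps):
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T[0], T[-1] = t_hot, t_cold    # fixed-temperature boundaries
    return T

alpha_crystal = 1e-5   # m^2/s, assumed value for a conductive crystalline solid
alpha_polymer = 1e-7   # m^2/s, assumed value for an amorphous polymer
for name, a in [("crystalline-like", alpha_crystal), ("polymer-like", alpha_polymer)]:
    T = diffuse_1d(a)
    # the crystalline-like rod warms noticeably at mid-length, the polymer-like rod barely changes
    print(f"{name}: mid-rod temperature after 10 ms = {T[len(T)//2]:.1f} K")
```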
To understand the difference between the heat transfer mechanisms of crystalline and amorphous structures, they are often likened to a Newton pendulum, shown in Figure 1c,d. The ordered crystalline structure quickly disseminates the starting vibration to the other side, while the same vibration has to be propagated throughout the entire chain in an amorphous structure, leading to inharmonious vibration and delayed heat transfer [12,30].

Figure 1. (c,d) Representation of the difference in heat transfer mechanism of crystalline and amorphous (polymer) materials via Newton pendulum models (Reprinted with permission from Elsevier, Copyrights 2016) [30].

TC is considered a material's capacity to conduct heat through a certain thickness of material, perpendicular to the surface area and during a certain amount of time, as a result of imposing a temperature gradient on the opposite surfaces. All the parameters which play decisive roles in the TC value are contained in its mathematical expression, shown below as Equation (1):

$k = \alpha\,\rho\,C_p$    (1)

where k is the TC (W m−1·K−1), C_p is the specific heat capacity (the amount of heat required to elevate the temperature of a material by 1 °C, J kg−1·K−1), ρ is the material's density (kg·m−3), and α is the thermal diffusivity (the speed of the transferred heat through the material, m² s−1).

In isotropic materials, heat conduction occurs uniformly, irrespective of direction, while in anisotropic materials like composites, in which properties are under the influence of direction, TC depends on the direction of the heat flux. Thereby, TC is determined by a tensor that is a function of the particles' orientations and the corresponding directions within the composite:

$k(x,y,z,T) = \begin{pmatrix} k_{xx} & k_{xy} & k_{xz} \\ k_{yx} & k_{yy} & k_{yz} \\ k_{zx} & k_{zy} & k_{zz} \end{pmatrix}$    (2)

Finally, the following equation is the corresponding form of heat conduction in anisotropic systems [12,30]:

$k_{xx}\frac{\partial^2 T}{\partial x^2} + k_{yy}\frac{\partial^2 T}{\partial y^2} + k_{zz}\frac{\partial^2 T}{\partial z^2} + (k_{xy}+k_{yx})\frac{\partial^2 T}{\partial x\,\partial y} + (k_{yz}+k_{zy})\frac{\partial^2 T}{\partial y\,\partial z} + (k_{xz}+k_{zx})\frac{\partial^2 T}{\partial x\,\partial z} = \rho C_p \frac{\partial T}{\partial t}$    (3)
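As a rough numerical illustration of Equation (1) and the anisotropic form in Equations (2) and (3), the short Python sketch below first computes k from assumed diffusivity, density and heat-capacity values and then evaluates Fourier's law, q = −K·∇T, for a transversely isotropic conductivity tensor; all input numbers are illustrative placeholders (the in-plane/out-of-plane values only loosely follow the bulk h-BN figures quoted in the next subsection) rather than measured data.

```python
# Minimal sketch of Equations (1)-(3): k from measured diffusivity, density and heat
# capacity, and the anisotropic heat flux q = -K . grad(T). Values are placeholders.
import numpy as np

def thermal_conductivity(alpha, rho, cp):
    """Equation (1): k = alpha * rho * Cp  [W m^-1 K^-1]."""
    return alpha * rho * cp

# Example: a laser-flash-style calculation for a hypothetical composite sample
k_sample = thermal_conductivity(alpha=1.2e-6, rho=1800.0, cp=1100.0)
print(f"k of hypothetical composite = {k_sample:.2f} W/(m*K)")

# Equation (2): conductivity tensor of an aligned, transversely isotropic h-BN-like film
K = np.diag([600.0, 600.0, 30.0])       # in-plane (x, y) vs out-of-plane (z), W/(m*K)

# Heat flux for a temperature gradient that is not aligned with the principal axes
grad_T = np.array([1.0e3, 0.0, 1.0e3])  # K/m
q = -K @ grad_T                          # Fourier's law, q in W/m^2
print("heat flux vector q =", q, "W/m^2")  # most of the heat flows in-plane
```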
Thermal Properties of h-BN

In the recent decade, layered bulk h-BN, as an anisotropic material, has gained considerable attention for the thermal management of electronic devices, mainly due to its high in-plane (parallel to its basal plane) TC of 600 W m−1·K−1. The anisotropy of h-BN stems from the strength difference between the strong covalent bonds within the hexagonal planes of h-BN (intralayer) and the weak van der Waals forces which attach the adjacent BNNSs (interlayer) [31]. Therefore, most of h-BN's properties are direction-dependent, i.e., anisotropic. Being an intrinsically anisotropic material makes h-BN's out-of-plane (perpendicular to its basal plane) TC at least 1-5 W m−1·K−1 and at most 30 W m−1·K−1 [32,33]. Besides its high in-plane TC, its high surface area with atomic flatness provides a platform to dissipate heat without the formation of localized hot spots [32,33]. Based on theories of TC, reduced layer numbers and the absence of weak van der Waals interlayer interactions subsequently reduce phonon-phonon scattering. Therefore, in the absence of experimental results from direct measurements of the TC of h-BN monolayers, we can rely on theories that h-BN monolayers possess significantly higher TC values than their multilayered, bulk counterparts due to the reduction of phonon-phonon scattering in the 2D structure. According to numerical results of the phonon Boltzmann transport equation, a theoretical TC value of >600 W m−1·K−1 is calculated for the h-BN monolayer, which is higher than the calculated value for multilayered, bulk h-BN (400 W m−1·K−1). Besides phonon-phonon scattering, the out-of-plane vibration can also determine the TC value in multilayer h-BN. To be more specific, interlayer interaction in multilayer h-BN leads to a significant reduction in the TC value compared to monolayer h-BN [2,10,34].

Fabrication of h-BN-Reinforced Polymer-Based Composites

The highly demanded miniaturization of electronic devices, along with their multi-functionality, necessitates systematic control of heat dissipation to enhance longevity and reliability. In general, the responsibility of heat dissipation is assigned to polymer matrix composites, known as electronic packaging materials, which possess high TC while being dielectric [26,27,35]. To meet the required features for real-time practicality, highly thermally conductive fillers such as ceramic fillers (h-BN, silicon carbide (SiC) [36,37], silicon nitride (Si3N4) [38], aluminum nitride (AlN), aluminum oxide (Al2O3) [39]), carbon-based fillers (carbon nanotubes (CNTs) [40], graphene [41], diamond [42], carbon fiber [43]), and metal fillers (silver nanowires [44], copper [45], aluminum [46]) have been extensively used to yield thermally conductive polymer composites.
The high electrical conductivity of carbon-based materials, metal oxides, and metals as fillers also increases the electrical conductivity of the final composite, leading to delayed signal propagation in electronic devices and restricting their application in the electronic industry. Among ceramic fillers, h-BN has high TC, high chemical stability, and a large aspect ratio, while possessing the lowest dielectric constant (~4) and being a perfect electrical insulator (σ = 10−11 S·cm−1) due to its large band gap. Therefore, all these merits turn h-BN into a promising candidate for utilization in electronic packaging materials [32,[47][48][49][50]. Since h-BN is regarded as an anisotropic filler with a distinctive difference between its in-plane and out-of-plane TCs, the TC of an h-BN-reinforced polymer composite is also affected by the filler's orientation and the filler-filler and filler-polymer interfacial properties [51]. Thereby, many efforts have been devoted to controlling the orientation of h-BN platelets through synthesis techniques to exploit the ultimate potential of its high TC in polymer composites. In the following, a wide range of synthesis methods focused on orientation manipulation and incorporation of h-BN fillers into various polymer matrices are discussed.

Freeze-Drying

Freeze-drying is regarded as one of the effective methods for constructing 3D oriented inner structures, being capable of maintaining the as-generated structure during the ice-nucleation stage. In an interesting work, a 3D nacre-shaped thermally conductive network based on a BNNSs/epoxy composite was fabricated through a bidirectional freezing technique, as shown in Figure 2a [52]. For this purpose, an aqueous slurry of mechanically exfoliated BNNSs and polyvinyl alcohol (PVA) was placed into a freezing mold equipped with a polydimethylsiloxane (PDMS) wedge to generate a temperature gradient in both the vertical and horizontal directions. Ice nucleation in a lamellar pattern as a result of bidirectional freezing ultimately rendered a 3D lamellar structure of BNNSs/PVA aerogels. The freeze-dried BNNSs/PVA aerogels were followed by epoxy resin infiltration and curing to obtain the BNNSs/epoxy composite. Scanning electron microscopy (SEM) evaluations (Figure 2b,c) revealed that the highly ordered, aligned lamellar nacre-shaped network remained intact even after resin infiltration, leading to high TC by providing prolonged phonon pathways. In a similar work [53], a freeze-dried BNNSs foam was hybridized with PDMS to manufacture a flexible phonon-transmitter composite. The method's superiority lies in the carbonization welding process, which secures the physical structure of the freeze-dried BNNSs foam and favors the dissipation of its static charges. As shown in Figure 2d, an anisotropic BNNSs foam was synthesized by directional freezing of a chitosan/BNNSs dispersion followed by vacuum freeze-drying for 48 h at −50 °C. Afterward, the as-obtained BNNSs foam underwent carbonization at 800 °C for 0.5 h. At last, the BNNSs foam was immersed into PDMS resin to construct a thermally conductive network. Morphological assessments clearly confirmed that highly ordered BNNSs walls formed along the as-grown ice. Upon post-treatment carbonization welding, well-connected BNNSs walls with a spacing of ~50 µm (3.0 vol.% BNNSs) were achieved. Even after PDMS infiltration, the order of the BNNSs foam is intact with no trace of pores, implying successful resin immersion (Figure 2e,f).
Manipulation of BNNSs to meet the requirement of a lightweight 3D conductive network with both outstanding thermal and mechanical performance is quite challenging in the electronic industry. However, a superelastic, lightweight nanocomposite based on 3D h-BN/polyimide (PI) aerogels was constructed by a facile and green freeze-drying method [54]. Firstly, BNNSs were obtained by mechanical exfoliation of h-BN in the presence of D-glucose and nitrogen in a steel milling container. Prior to hybridization with PI, the resultant hydroxyl-functionalized BNNSs were dialyzed for 1 week to eliminate any trace of D-glucose. The well-stirred BNNSs and poly(amic acid) (PAA) suspension was exposed to a freeze-drying process at −20 °C for 48 h. Finally, the as-synthesized aerogels were transferred to an N2-atmosphere tube furnace under an accurate time and temperature schedule to obtain thermally cross-linked BNNSs/PI aerogels (Figure 3a). During synthesis, two main factors, namely the hydrogen bonding between the functionalized h-BN and the intrinsically oxygen- and nitrogen-containing groups of PAA, and the π-π interaction between the B-N layer of h-BN and the benzene ring of the PAA backbone, are responsible for the improved adhesion between these two components. According to the morphological evaluations, the functionalized BNNSs possessed a lateral size of 200 nm with 3-9 atomic layers. After hybridization, the morphological assessments revealed a honeycomb-shaped network fulfilling its superelasticity and stretchability, as shown in Figure 3b,c. In addition, the BNNSs possessed highly aligned structures, parallel to the PI layers, creating an inorganic-organic binary network. Its unique superelasticity, accompanied by considerable hydrophobicity, provides a platform to be utilized in any desired shape in harsh environments.
Template-Assisted

Achieving a continuously high TC in both the out-of-plane and in-plane directions is challenging for bulk layered materials. Based on recent studies, a 3D segregated filler structure has been shown to be promising in supplying uniform TC in both directions [55][56][57]. As an example, integration of a 3D segregated structure of h-BN fillers with epoxy resin has been shown to be capable of enhancing TC [58]. For this purpose, h-BN microbeads (BNMBs) were formed via a facile salt-template method in which NaCl recrystallization and the presence of PVA as a cohesive agent hold h-BN nanoparticles together to form spherical agglomerates. Consecutively, washing away the NaCl templates in cold water, drying, and resin infiltration led to the formation of the segregated BNMBs/epoxy composite. The four evaporation stages of the salt-template technique are as follows: 1. The over-saturated NaCl solution resulting from water evaporation initiates NaCl recrystallization on the surface of h-BN; PVA, as a binding agent, fixes the recrystallized NaCl particles on the h-BN surface. Then the PVA/NaCl/h-BN mixture gradually becomes flocculated. 2. Flocculated seeds accumulate and enlarge to minimize surface free energy, leading to the formation of primary BNMBs particles. 3.
By losing more water, more NaCl/h-BN particles join the primary particles, giving rise to the formation of spherical secondary BNMBs particles. 4. Upon template removal in cold water, the recrystallization is terminated and hollow BNMBs are obtained. Customizing a continuous heat-conductive structure through the template-assisted method is still an obstacle in electronic packaging industries. However, continuous and connected large and small BNNSs coupled with thermoplastic polyurethane (TPU) have recently been introduced as a rapid heat transfer composite [59]. For this purpose, ultrasonically exfoliated large and small BNNSs (L-BNNSs and S-BNNSs, respectively) were mixed with N,N-dimethylformamide (DMF) and 90 wt.% TPU. The resultant mixture was transferred into a Teflon dish to evaporate the DMF gradually until it dried, as shown in Figure 4a. According to morphological evaluations, the BNNSs are dispersed evenly within the TPU matrix with good contact compatibility, enhancing the overall TC (Figure 4b). As a result of template evaporation, adjacent L-BNNSs are connected to deliver a continuous conductive pathway and S-BNNSs fill the gaps between them to construct an interconnected percolation network (Figure 4c).

Mechanical Milling

To harness the highest potential of h-BN's high in-plane TC, alignment of the h-BN platelets in one direction seems a promising solution. In a recent work [60], the insoluble and non-melting characteristics of PI were exploited to fabricate a highly in-plane-aligned h-BN/PI composite by a facile and controllable method comprising ball milling, high-pressure compression and low-temperature sintering. SEM images after milling demonstrate that the h-BN platelets are uniformly dispersed within a polymer matrix made of spherical PI particles (Figure 4d). By optimizing the h-BN content to 30 wt.%, wearing of the PI particles is minimized, owing to the lubrication characteristic of h-BN protecting them against wear deterioration and, ultimately, preserving their original sizes. The high in-plane alignment of the incorporated h-BN platelets can be seen clearly in the SEM cross-section image of half-finished samples after high-pressure compression (Figure 4e).
Since the compression stage is crucial in inducing in-plane orientation, low-temperature sintering at 290 °C can create well-coalesced PI particles and a complete plasticization of the composite, as shown in Figure 4f. Therefore, a complete heat-conductive network is prepared, confirming the ameliorated TC of the h-BN/PI composite. Being an anisotropic filler has made h-BN capable of producing various TC properties in different directions. Since heat dissipation between heat sinks and electronic devices happens mostly in the vertical direction, out-of-plane TC takes precedence over in-plane TC. Producing vertically aligned h-BN structures can be carried out by a variety of techniques, including freeze-drying [52,61], electrically/magnetically induced alignment [62], as well as a straightforward mechanical two-roll milling approach. For instance, a research work [63] focused on the two-roll milling technique to fabricate a vertically aligned h-BN/silicone rubber (SR) composite. As illustrated in Figure 5a, a mixture of h-BN/SR is sheared five times by a two-roll milling machine with a rolling distance of 0.35 mm to obtain a monolithic aligned h-BN/SR composite sheet. Subsequently, the as-obtained sheet is cut perpendicular to the shear direction, the as-cut pieces are lined up vertically and cured at 170 °C in a hydraulic hot press to deliver the vertically aligned h-BN/SR composite. SEM images, shown in Figure 5b, display the well-dispersed and well-vertically-aligned h-BN sheets with a thickness of ~300 nm and a diameter of ~10 µm within the SR matrix. In an alternative work [64], the synergistic effect of a binary hybrid filler was exploited to construct a BNNSs-reinforced silicone thermal grease (STG) composite. As shown in Figure 5c, the hybrid filler is firstly prepared by self-assembly of BNNSs on the surface of reduced graphene oxide (RGO) in the presence of PVA as a polymeric binder. The adhesive characteristic of PVA not only bonds BNNSs together in clusters, but also adheres those formed clusters to the surface of RGO efficiently. The dried RGO/h-BN precursor is then ground in a planetary ball mill followed by pyrolysis in a tube furnace under an argon atmosphere to remove the polymeric binder and obtain a 3D RGO/h-BN stacking structure. The final thermal interface material (TIM) is prepared by shearing a mixture of RGO/h-BN and STG resin 6-9 times in a three-roller machine; the as-synthesized 3D RGO/h-BN stacking composite, with a lateral size of 11 µm, in which graphene is encircled by BNNSs clusters, is confirmed by SEM assessment (Figure 5d,e).
Magnetic/Electric-Field Assisted

A common approach to intensify the TC of TIMs is loading high amounts of BNNSs fillers into a polymer matrix, although this may deteriorate the mechanical properties of the final composite. Recently, vertical configuration of the BNNSs has been shown to obviate the use of high filler loadings while avoiding flexibility loss in the obtained TIM [65][66][67]. Thereby, magnetic/electric fields are considered high-efficiency and cost-effective techniques to yield vertically aligned BNNSs phonon-transmission networks. In a recent work [62], a flexible TIM was designed based on a magnetically assisted, vertically aligned BNNSs/PDMS composite with the assistance of FeCo magnetic nanocubes. To deliver well-oriented heat conduction channels, surface modification was firstly applied separately to the FeCo nanocubes and the h-BN powder to generate positively charged poly(diallyldimethylammonium chloride) (PDDA)@FeCo nanocubes and negatively charged BNNSs. Then, the strong electrostatic interactions between these two oppositely charged particles in the self-assembly process produced FeCo-BNNSs complex nanomaterials. To obtain the FeCo-BNNSs/PDMS composite, a mixture of dried FeCo-BNNSs nanoparticles and PDMS resin was transferred to a spray gun and spray-coated on a hydrophobic glass substrate. At last, the spray-coated substrate was placed amid two vertically aligned permanent magnets with a field intensity of 35 mT for 1.5 h to orientate the BNNSs platelets in the vertical direction (Figure 6a). The response of the as-synthesized FeCo-BNNSs complex to an external magnetic field is clearly demonstrated in Figure 6b. During the self-assembly stage, the {001} facets of the BNNSs were attached firmly to the {001} facets of the PDDA@FeCo magnetic nanocubes. Upon applying a vertical magnetic field, the FeCo nanocubes can easily orientate along {001} as an easy magnetization direction and, accordingly, the BNNSs platelets accompany FeCo in this magnetically assisted orientation. The successful vertical orientation of the BNNSs in the PDMS composite is displayed in Figure 6c.

Other Synthesis Methods

Taking benefit from the synergistic effect of 3D hybrid fillers in heat dissipation, chemical-vapor-deposition (CVD)-grown carbon nanotubes (CNTs) on the surface of BNNSs have also been coupled with epoxy resin to produce a high-TC TIM [68]. For this purpose, catalyst-loaded BNNSs are prepared by ultrasonication of a certain amount of BNNSs and nickel acetate, vacuum drying, and grinding to yield well-dispersed catalyst-BNNSs. In-situ CVD growth is then carried out in two stages: (1) the catalyst-loaded BNNSs are exposed to 500 °C under a flow of mixed argon and hydrogen gases for 1 h to reduce the nickel acetate to nickel nanoparticles (Figure 6d); (2) the temperature is raised to 900 °C and methane gas, as a carbon source, is pumped into the quartz furnace. The CNT growth time is then adjusted between 30-120 min. At last, the CNTs/BNNSs/epoxy resin is prepared in a high-shear mixer followed by curing at 130 °C. Morphological evaluations in Figure 6e,f revealed that, under prolonged growth time, long CNTs grew between the BNNSs in such a way that they connected those sheets together, producing a continuous CNTs/BNNSs thermal conduction network. In addition, the as-grown CNTs are mainly multi-walled CNTs with outer diameters of 10-50 nm. Using the same CVD technique, an interesting 3D network based on a BNNSs-reinforced graphene tube woven fabric (GTWF)/PDMS composite was fabricated with remarkable in-plane TC. Inexpensive Ni fabrics were used as a growth substrate and graphene tubes were grown on their surface under a flow of methane gas in a quartz tube furnace to obtain the GTWF-Ni fabric.
Upon etching the Ni in an etching solution (HCl:FeCl3), pure GTWF was achieved. Eventually, the BNNSs/GTWF/PDMS composite was prepared by infiltration of a BNNSs/PDMS mixture into five layers of stacked GTWF and cured at 80 °C, as depicted in Figure 7a. The morphological changes from the pristine Ni substrate to the as-synthesized BNNSs/GTWF/PDMS composite are shown in Figure 7a, confirming the successful hybridization of these three components in one TIM.
In an alternative work, the combination of electrospinning and vacuum-assisted impregnation was utilized to construct a flexible TIM based on an interconnected and vertically aligned BNNSs/PVA/PDMS composite [69]. Firstly, PVA-supported BNNSs were prepared via the electrospinning technique on an aluminum foil with a working voltage of 16 kV. Then, the as-synthesized PVA/BNNSs film was cut into narrow strips with a size of 15 mm, as shown in Figure 7b. All the obtained strips were rolled up perpendicular to the direction of the electrospun fibers in such a way that each strip engaged with the previous one to construct a PVA/BNNSs cylinder with a diameter of 15 mm. At last, the as-ready cylinder was impregnated with PDMS resin under vacuum, followed by curing at 100 °C for 1 h. According to the morphological examinations, the electrospun PVA/BNNSs fibers possessed a highly vertically ordered structure in which the BNNSs adhered tightly to the PVA fibers (Figure 7c,d). This strong bond between PVA and BNNSs originates from the amino and hydroxyl groups of the BNNSs, which facilitated their dispersion in the PVA solution and interacted with PVA through strong hydrogen bonding. This strong hydrogen bonding is responsible for the well-stacked and interconnected BNNSs which assembled on the surface of the PVA fibers, resembling fallen dominoes. This in-plane overlapping connection is expected to drastically decrease the interfacial thermal resistance within the PVA/BNNSs fibers. Eventually, the PDMS resin fully filled the pores and gaps of the PVA/BNNSs cylinder and no debonding occurred between the fibers and the polymer matrix.

Thermal Conductivity Evaluations of h-BN-Reinforced Polymer Composites

According to theoretical studies, it is possible to exploit the ultimate potential of h-BN's TC within a hybrid composite if the phonon-phonon scattering within its 2D structure is overcome. Agari et al. suggested a mathematical model to define the TC of a filler-reinforced PMC [70]:

$\log k_{\mathrm{composite}} = V_{\mathrm{h\text{-}BN}}\,C_{\mathrm{h\text{-}BN}}\,\log\!\left(\frac{k_{\mathrm{h\text{-}BN}}}{k_{\mathrm{polymer}}}\right) + \left(1 - V_{\mathrm{h\text{-}BN}}\right)\log\!\left(C_{\mathrm{polymer}}\,k_{\mathrm{polymer}}\right)$    (4)

where k_composite is the overall TC of the polymer composite, k_h-BN is the TC of the h-BN fillers, and k_polymer is the TC of the polymer matrix; V_h-BN is the volume content of the incorporated h-BN fillers. The first term reflects the effect of hybridization on the quality of h-BN's heat conduction, explaining how the TC of the polymer matrix limits the TC of the h-BN fillers, while the log(C_polymer·k_polymer) term unfolds how the crystallinity of the polymer matrix changes upon incorporation of h-BN fillers. C_polymer and C_h-BN are considered as specific heat capacities. The k_h-BN/k_polymer ratio in the former term is known as the "decreased TC of h-BN fillers", expounding how much lower the TC of the h-BN fillers within the polymeric matrix is than the TC of the pristine h-BN fillers. Since the polymer matrix affects the TC of its incorporated h-BN fillers, this ratio actually implies that the polymer matrix acts as an impediment against heat conduction [12,30].
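As a rough numerical illustration of the Agari-type relation in Equation (4), the short Python sketch below predicts the composite TC for a few filler volume fractions; the C factors are treated here as dimensionless fitting parameters and all input values (filler and matrix TCs, C factors) are invented placeholders rather than data from [70] or the other cited studies.

```python
# Minimal sketch of the Agari-type mixing rule in Equation (4). The C parameters are
# treated as dimensionless fitting factors with invented values; k values for the
# neat polymer and the h-BN filler are illustrative, not taken from a specific paper.
import numpy as np

def agari_tc(v_filler, k_filler, k_matrix, c_filler=0.9, c_matrix=1.1):
    """log10(k_c) = V*C_f*log10(k_f/k_m) + (1 - V)*log10(C_m*k_m)."""
    log_kc = (v_filler * c_filler * np.log10(k_filler / k_matrix)
              + (1.0 - v_filler) * np.log10(c_matrix * k_matrix))
    return 10.0 ** log_kc

for v in (0.05, 0.15, 0.30, 0.50):
    kc = agari_tc(v_filler=v, k_filler=300.0, k_matrix=0.2)
    print(f"V_h-BN = {v:.2f}  ->  predicted k_composite ~ {kc:.2f} W/(m*K)")
```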
h-BN as a Single Filler

Recently, a flexible BNNSs/PDMS composite having a 3D network structure was fabricated via a two-step method of freeze-drying and carbonizing. The resulting 3D BNNSs/PDMS with 15.8 vol.% showed a prominent TC of 7.46 W m−1·K−1, which was enhanced by 3900% compared to neat PDMS. This improvement mainly stems from the dominant thermal transport channels formed through the polymer matrix. In addition, the residual carbon remaining in the composite structure led to a tremendous antistatic behavior and preserved the composite from dust and destruction. Figure 8a,b illustrates the comparison of the surface resistivity and volume resistivity of the random BNNSs/PDMS and 3D BNNSs/PDMS. Both parameter values drop in the 3D-BNNSs/PDMS composite, which shows that the residual carbon in the PDMS matrix formed a conductive network, leading to the anti-static performance and preserving the composite from dust and destruction. The schematic of this mechanism is shown in Figure 8c. A more comprehensible image of this feature is represented in Figure 8d,e by comparing the charge dissipation ability of pure PDMS, random BNNSs/PDMS, and 3D-BNNSs/PDMS samples. The conductive network in the 3D-BNNSs/PDMS composite depletes the accumulated surface static charges immediately, while the random BNNSs/PDMS sample adsorbed the most significant portion of the polystyrene spheres owing to its high surface resistivity. Figure 8f highlights the enhanced thermal performance of the 3D network composite compared with the random structure. In the 3D-BNNSs/PDMS composite, the neighboring BNNSs are welded together and provide efficient phonon pathways, so that heat dissipation can occur faster. Yet, phonon scattering at the random BNNSs and PDMS interfaces is more likely, due to the absence of functional heat-conductive channels, causing a deficient heat dissipation behavior [53].

Figure 8. Schematic illustration of (g) the aerogel for a thermoelectric generator device and (h) phonon scattering through the aerogel structure (Reprinted with permission from American Chemical Society, Copyrights 2019) [54].
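As a quick consistency check of the enhancement figure quoted above, the snippet below recomputes the relative TC enhancement; the neat-PDMS conductivity is an assumed, typical literature value (not taken from [53]), so the result only approximately reproduces the reported ~3900%.

```python
# Back-of-the-envelope check of the enhancement figure quoted above. The neat-PDMS
# value is an assumed, typical literature number (~0.19 W/(m*K)), not taken from [53].
def enhancement_percent(k_composite, k_matrix):
    """Relative TC enhancement in percent: 100 * (k_c - k_m) / k_m."""
    return 100.0 * (k_composite - k_matrix) / k_matrix

k_neat_pdms = 0.19          # assumed typical value for neat PDMS, W/(m*K)
k_3d_bnnss_pdms = 7.46      # reported for the 3D BNNSs/PDMS composite [53]
print(f"enhancement ~ {enhancement_percent(k_3d_bnnss_pdms, k_neat_pdms):.0f} %")
# prints roughly 3800 %, of the same order as the ~3900 % enhancement reported above
```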
Similarly, a functional BN/polyimide (FBN/PI) aerogel, a lightweight and elastic thermally conductive composite, was also fabricated using freeze-drying. The particular cellular honeycomb structure of FBN-PI gives rise to anisotropic TC, with 6.70 and 2.20 W m−1·K−1 in the out-of-plane and in-plane directions at 30 °C, respectively. The schematic design of the aerogel for thermoelectric generation purposes and the phonon scattering through the structure are given in Figure 8g,h, respectively [54]. Xiao et al. [58] fabricated an epoxy-resin-based composite containing hollow h-BN microbeads (BNMBs). They utilized the salt-template technique to synthesize the hollow BNMBs and introduced the epoxy resin via the infiltration method. The segregated structure of this composite, with the optimized amount of 65.6 vol.% hollow BNMBs, improved the TC of the neat polymer matrix from 0.2 W m−1·K−1 to 17.61 and 5.08 W m−1·K−1 in the in-plane and out-of-plane directions, respectively. For determining the practical applicability of these materials, the h-BN composites were placed on a heating plate with a point-heating source to record the heat dissipation performance with a thermal imaging camera. This evaluation demonstrated that heat distributes more effectively in the sample with a bigger h-BN size and higher vol.% of h-BN, without concentration in one spot. The impact of different h-BN sizes on heat dissipation revealed that in sample A (the composite with 10 µm h-BN, not as compressed as the other samples), the approximately spherical shape of the h-BN microspheres brought a nearly isotropic, low TC. Sample B, which is more compressed than sample A but with the same h-BN size, resulted in a drastic rise of TC in the out-of-plane direction. This behavior is due to the more compact thermal channels of the structure. Sample C represented the h-BN 35 µm-V composite and possessed adequate heat dissipation, owing to the orientated heat-conductive pathways formed by the large h-BN microspheres [58]. The BNNSs/epoxy resin composite (with 15 vol.% BNNSs) has also been fabricated through the freeze-drying method with the ultimate purpose of having a nacre-mimetic 3D thermally conductive network with strong thermal stability. As reported, the achieved TC of 6.07 W m−1·K−1 for BNNSs/epoxy is almost 32 times higher than the TC value of the bare resin, showing the significance of h-BN. To substantiate the commercialization potential of this composite, Han et al. [52] compared a commercial silicone sheet and the BNNSs/epoxy composite as TIMs, which were integrated within a 20 W LED chip and a Cu heat sink (Figure 9a,b). An infrared camera tracked the changes in the surface temperature of the LED chips. Figure 9c presents the temperature-change map for both chips at different time durations after lighting up the LED chips. It can be seen that the silicone chip faces a sharp rise in temperature compared to the BNNSs/epoxy composite. The impressive 10 °C difference in temperature change (Figure 9d) reveals the exceptional functionality of the BNNSs/epoxy composite in heat dissipation applications. In addition, the thermal stability of this composite was investigated in Figure 9e by recording the chip temperature in "on" (4 min) and "off" (2 min) stages, showing top-notch thermal stability [52].
Ball milling, high-pressure compression, and low-temperature sintering approaches have also been employed to obtain thermoset polymer/h-BN composites. The polyimide (PI)/h-BN composite reported by Wang et al. [60] was shown to have a promising in-plane TC of 2.81 W m−1·K−1 compared with pure PI (0.87 W m−1·K−1) due to dense and adequate thermal pathways along this direction [60]. Although possessing both high TC and flame-retardant behavior is challenging, Tian and co-workers [61] reported an h-BN PMC with the mentioned performance. This composite had been fabricated via molding techniques with an h-BN skeleton (sBN) (12.53 vol.%) in phosphorus-free bismaleimide (BD) resin, exhibited a TC of 1.53 W m−1·K−1, enhanced the bare BD resin performance 9.4 times, and released a smaller amount of smoke (42.5% less). The flame-retardancy examination of pure BD resin and BD/h-BN composites with different h-BN structures revealed that sBN/BD, with its high TC, transmits the generated heat through the material faster than the other samples. This performance prevents a drastic temperature rise in the material and delays the degradation of the local material. In addition, the 3D porous framework structure of the sBN/BD composite improves the formation of a protective layer on the surface of the BD resin. This structure enhances the development of a continuous carbon layer and the quality of char graphitization after combustion. On the contrary, the char formed on the BD resin surface was loose, with visible micro-cracks, and could not act as a useful protector. The BN/BD composite's flame-retardancy performance lies between that of the bare BD resin and sBN/BD, owing to its medium heat conductivity and deficient char protection layer [61]. Moreover, silicone rubber has also been extensively utilized as a polymer substrate possessing outstanding TC, stability over a wide temperature range, and electrically insulating behavior. Further enhancement of silicone rubber can be achieved by adding 39.8 vol.% vertically aligned h-BN as a filler via the rolling technique, to form active thermally conductive paths through the polymeric matrix. This leads to a TC of 5.4 W m−1·K−1, almost 33 times higher than that of the pure silicone rubber [63].

h-BN within Hybrid Filler Configurations

Apart from polymers, hybrid fillers can potentially escalate the effectiveness of the fillers' role throughout the matrix more than individual h-BN sheets.
The combination of different thermally conductive fillers, such as CNT [35,71], graphene [72][73][74], graphite [27,75], ZnO [76], Al2O3 [77], and so on, with h-BN leads to a synergetic effect in the TC performance among the counterparts [68]. As an example, a promising hybrid nanocomposite of in-situ-grown CNTs on BNNSs embedded in epoxy resin exhibited 615% and 380% TC improvements in the cross-plane direction compared with pure epoxy and the BNNSs/epoxy composite, respectively. Adding a low volume of CNTs (2 vol.%) builds bridges between the BNNSs and improves their connectivity. In addition, the blockage of the CNTs' paths by BNNSs maintains the electrical resistivity of the hybrid composite [68]. Another appealing hybrid composite is the vertically self-aligned BNNSs (50 wt.%)-FeCo (30 wt.%) filler system in poly(diallyldimethylammonium chloride) (PDDA) as the matrix. The unique structure of this composite, shown in Figure 6c, provides thermal dissipation channels by transferring more phonons through a superior thermally conductive pathway and increases the TC of PDDA from 0.11 to 2.25 W m−1·K−1 [62]. Graphene can also be considered a potential phonon-transferring substrate for h-BN. This is due to the relative resemblance of their thermal expansion coefficients (TEC), resulting in a desirable behavior for high heat dissipation [78]. For instance, by taking advantage of the synergic effects of a self-assembled RGO/h-BN filler introduced into silicone thermal grease (STG), Liang et al. [64] could improve the TC value by up to 68% upon addition of 12 vol.% RGO/h-BN. Figure 10a,b reveals the proper heat dissipation performance of RGO/h-BN/STG compared to h-BN/STG and pure STG. Infrared thermal imaging was used to study the thermal management potential of RGO/h-BN/STG, h-BN/STG, and STG. The thermal maps of the heating and cooling steps (Figure 10a) demonstrate an accelerated and visible color change for RGO/h-BN/STG, indicating the most productive heat absorption from the hot stage among the samples. The cooling curves of the samples are also plotted in Figure 10b. It is clear that the RGO/h-BN/STG presented the fastest cooling rate, followed by h-BN/STG and bare STG. This excellent behavior arises from the higher TC and lower thermal resistance of this composite compared with the other samples [64]. Glass fiber cloth (GF)/epoxy is one of the highly demanded functional composites in a wide range of applications such as aerospace, electronics, and electrical fields due to its superior chemical inertness and electrical resistivity [79]. However, these PMCs suffer from poor interfacial adhesion to the epoxy matrices, causing a gradual degradation in the mechanical properties and lowering the TC [80]. These drawbacks, which have limited GF/epoxy applications in the electronic industry, are shown to be solved by the addition of h-BN as a thermally conductive filler. Tang et al. reported a laminated hybrid GF/spherical h-BN/epoxy composite, made by blending-impregnation and hot compression. The as-fabricated composite exhibited considerable improvement in both the vertical (3 times) and parallel directions (6 times) [81].
Table 1 summarizes the recent efforts devoted to the improvement of the TC of thermoset polymer matrices. Despite the high potential of thermoplastic polymer matrices, their integration with h-BN fillers is much less explored compared with thermoset polymer matrices [12]. Polyethylene glycol (PEG) is one of these promising thermoplastic polymers, with the feature of phase changing, making it capable of storing and releasing thermal energy through a cycle. This potential suits it for many applications such as energy conversion, intelligent textile engineering, and heat management of different electronic parts [82,83]. Nevertheless, it exhibits deficient mechanical properties and low TC. In the earliest attempt, Yang et al. [84] addressed these challenges by encapsulating PEG with BNNSs-doped cellulose nanofiber, resulting in a shape-stable PEG composite and a 42.8% improvement in TC upon addition of 1.9 vol.% BNNSs. Thermoplastic polyurethane (TPU) polymer matrices are other engaging polymer materials for miniaturized high-power electronic devices. One research work in this area showed the significant role of BNNSs size (large and small) and volume content on the TC of TPU. The TC results showed that the BNNSs (10 wt.%)/TPU composite reached 14.7 W m−1·K−1 in the in-plane direction. The enhanced thermal performance, from nearly 0.5 W m−1·K−1, can be ascribed to the connection of small BNNSs to the neighboring large BNNSs, forming a continuous thermal path, and to structural reinforcement. The effect of the S-BNNSs content is investigated in Figure 11a. Higher S-BNNSs content (up to 10 wt.%) results in better TC, which is attributable to three reasons: first, the formation of heat-conductive channels is promoted by bonding S-BNNSs to their nearby L-BNNSs; second, the S-BNNSs fill the gaps between L-BNNSs and construct an interconnected structure through the matrix; third, the S-BNNSs increase the filler-filler density, as plotted in Figure 11b. The comparison of bare TPU and its nanocomposites is given in Figure 11c. It is visible that the 10 wt.% BNNSs/TPU is the optimal TPU nanocomposite in terms of TC, which originates from the advanced thermal channels built by the interconnection of S-BNNSs and L-BNNSs through the TPU network. A general comparison of the present nanocomposite with other TPU composites is presented in Figure 11d, showing the notable TC of this sample compared to others [59].

Tribology Theory

Historically, the study of tribology goes back hundreds of years. In fact, the term tribology is derived from the Greek word tribos, meaning 'rubbing' [86]. However, the definition and the science behind it are relatively new [87]. Generally, the science of tribology studies the phenomena taking place between two moving surfaces [88] and focuses on the science of friction, lubrication, and wear involved in moving contacts [86,89].
Structural deformation, dimensional variations and degradation of sliding parts in commercial working systems necessitate reducing the friction and enhancing the performance of the involved counterparts [90]. Therefore, all aspects of mechanical, chemical, and materials sciences are involved simultaneously to boost the performance of a system [86,91,92]. The earliest spikes in considering tribology as a science were seen in the development of high-velocity internal combustion engines at the beginning of the 20th century [93]. In fact, reduction of friction and wear can mitigate the energy consumption of the system. It also provides the appropriate conditions for fast and precise motions with minimum required maintenance cost and increased efficiency [94][95][96][97][98][99][100]. Having said that, tribology has rendered several valuable applications such as gas turbine engines, automotive parts, artificial human joints, hard disk drives for data storage and an increasing number of electromechanical devices [101][102][103][104].

Friction

Friction is the force resisting the sliding of two surfaces against each other and can be simply calculated by Amontons' law [105]:

$F = \mu W$    (5)

where F is the friction force (N), µ is the coefficient of friction (COF) (dimensionless), and W is the normal load (N). From the tribological point of view, µ is an important factor determining the wear rate of the components. Apart from the nature of the materials, COF mostly depends on the characteristics of the surface as well as the lubrication condition [106][107][108][109]. Therefore, tailoring the surface characteristics and lubrication condition may optimize the performance of components under friction [110][111][112].

Wear

Wear is the damage to a solid surface, generally involving progressive loss of material, due to relative motion between the surface and a contacting substance or substances [87]. Despite the massive development in characterization equipment, it is still not possible to comprehend and examine wear phenomena thoroughly. Hence, three main complexities can make the surface analyses difficult: (1) continuous alteration of the chemical composition on the surface within the process due to wear, (2) the changing surface topography of the specimen, and (3) the existence of complex and blended wear mechanisms [113][114][115]. Nonetheless, one common method of measuring the amount of wear is through the Archard wear equation, represented in Equation (6) [86,87]:

$Q = \frac{K W}{H}$    (6)

where Q is the volume removed from the surface per unit sliding distance, K is the wear coefficient (dimensionless), W is the normal load applied to the surface by its counterbody, and H is the indentation hardness of the wearing surface. By variation of K, the severity of different wear processes can be compared. For instance, for a lubricated condition it falls in the range of 10−14-10−6, while dry sliding and the usage of hard particles give the ranges of 10−6-10−2 and 10−4-10−1, respectively (Figure 12).
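As a minimal numerical illustration of Equations (5) and (6), the Python sketch below computes the friction force and the Archard wear volume accumulated over a sliding distance; the load, COF values, hardness, sliding distance and wear coefficients are assumed, illustrative inputs spanning the lubricated and dry ranges quoted above.

```python
# Minimal sketch of Equations (5) and (6): Amontons' friction law and the Archard
# wear equation. All inputs below are invented illustrative values, not data from
# the cited tribology studies.
def friction_force(mu, normal_load):
    """Equation (5): F = mu * W  [N]."""
    return mu * normal_load

def archard_wear_volume(k_wear, normal_load, hardness, sliding_distance):
    """Equation (6) integrated over sliding distance: V = K * W * s / H  [m^3]."""
    return k_wear * normal_load * sliding_distance / hardness

W = 100.0                     # N, applied normal load (assumed)
mu_dry, mu_lub = 0.25, 0.08   # assumed dry vs lubricated COF
H = 2.0e9                     # Pa, indentation hardness of the wearing surface (assumed)
s = 1000.0                    # m, total sliding distance (assumed)

print(f"dry friction force        = {friction_force(mu_dry, W):.1f} N")
print(f"lubricated friction force = {friction_force(mu_lub, W):.1f} N")
# Wear coefficients spanning lubricated vs dry sliding (within the ranges quoted above)
for label, K in [("lubricated (K = 1e-8)", 1e-8), ("dry sliding (K = 1e-4)", 1e-4)]:
    V = archard_wear_volume(K, W, H, s)
    print(f"{label}: worn volume ~ {V:.2e} m^3")
```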
As can be seen in Figure 12, even a low amount of lubricant (falling in the boundary-lubrication regime) reduces K by several orders of magnitude. The friction-lubrication relationship is often described by the Stribeck diagram (Figure 13), in which the COF (µ) is correlated with the viscosity of the lubricant (η) and the relative velocity of the two sliding surfaces (V), and is inversely related to the normal load (P) (bearing number = ηV/P) [116][117][118]. When the bearing number is high, µ ascends almost linearly and the mechanism is fluid-film lubrication [116,117]. Indeed, by increasing the load and/or decreasing the lubricant viscosity or sliding speed, the bearing number gradually decreases. Thereafter, the lubricant film gets thinner and, consequently, the COF declines to its minimum value [116,117]. For smaller bearing-number values, the reducing trend of lubricant-film thickness continues and the asperities of the rubbing surfaces begin to interact slightly, causing an increment in the COF; this behavior is known as the mixed-lubrication mechanism [116,117]. Further decline of the bearing number causes more severe interaction of the sliding surfaces, attributable to a thinner lubricant film, and a higher COF value; this regime is characterized as boundary lubrication [116,117].

Solid Lubrication

Solid-state lubricant compounds are famous for decreasing the COF and increasing the wear resistance of sliding parts [28,119]. Their high stability suits them for harsh conditions that liquid lubricants cannot tolerate.
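As a toy illustration of the Stribeck reasoning described above, the following Python sketch computes the bearing (Hersey-type) number ηV/P and assigns a lubrication regime. The regime cut-offs are arbitrary placeholders for demonstration only; real boundaries depend on the specific contact and must be read from a measured Stribeck curve.

```python
# Toy classifier for lubrication regimes based on the bearing number eta*V/P.
# Threshold values are illustrative assumptions, not taken from the cited works.

def bearing_number(viscosity_pa_s: float, velocity_m_s: float, load_pa: float) -> float:
    """Bearing (Hersey-type) number: eta * V / P."""
    return viscosity_pa_s * velocity_m_s / load_pa

def lubrication_regime(b: float,
                       boundary_limit: float = 1e-9,
                       mixed_limit: float = 1e-7) -> str:
    """Map a bearing number to a regime using assumed cut-offs."""
    if b < boundary_limit:
        return "boundary lubrication"
    if b < mixed_limit:
        return "mixed lubrication"
    return "fluid-film (hydrodynamic) lubrication"

if __name__ == "__main__":
    eta = 0.05   # lubricant viscosity, Pa*s (assumed)
    V = 0.5      # sliding velocity, m/s (assumed)
    P = 5e6      # contact pressure, Pa (assumed)
    b = bearing_number(eta, V, P)
    print(f"bearing number = {b:.2e} -> {lubrication_regime(b)}")
```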
Stable functionality at elevated temperatures (>350 °C) or ultra-low temperatures (e.g., liquid-nitrogen working temperature), or under ultrahigh load, severe oxidation, ultrahigh vacuum, and intense radiation, are a few examples of their merits [120][121][122]. In addition, they can be used in the form of free-flowing powders, anti-friction pastes, anti-friction coatings, and oil additives [122]. Solid lubricant compounds, by providing boundary lubrication, help reduce the COF and wear. One vital element to consider is the thin layer that is transferred from the lubricated surface to the counterface, named the tribo-film, tribo-layer, or transfer film [123][124][125]. Moreover, there is a broad variety of solid lubricants, e.g., inorganic materials with a lamellar structure (graphite, h-BN, sulfides, selenides, tellurides, etc.), soft metals (Pb, Sn, Bi, In, Cd, and Ag), and organic compounds with a chain structure of polymeric molecules (PTFE and polychlorofluoroethylene) [122]. Apart from having a lamellar structure, h-BN is at the center of attention due to its capability of maintaining its lubrication characteristics up to 1200 °C in an oxidizing environment [28]. Here we aim to survey the recent efforts devoted to composite manufacturing with h-BN as a lubricant from the perspective of tribological studies.

Materials Development

PMC

The growing need for lightweight materials possessing self-lubricating characteristics as well as high chemical stability in aqueous solutions has urged researchers to manufacture new PMCs suitable for aqueous-media applications such as ships, water pumps, and washing machines [126][127][128]. However, under high-load and low-velocity conditions, the created water film does not provide a high load-carrying capacity because of its low viscosity, which results in a mixed or boundary lubrication condition [129]. Hence, the fabrication of durable tribo-materials for aqueous-media lubrication is in high demand. Considering that wear under wet friction is considerably lower than under dry friction (because the water medium prevents material transfer and tribo-chemical phenomena and lowers the interface temperature), the addition of reinforcements will lead to the generation of a lubricating film [130]. In this regard, h-BN has shown supreme lubrication behavior in water [131,132]. For instance, the COF of an h-BN/h-BN rubbing pair in water is reported as 0.06-0.12, while it is 0.18-0.25 in dry friction [133,134]. The reason behind this low COF under water lubrication is claimed to be the reaction of h-BN with water molecules, resulting in the formation of a tribo-chemical layer consisting of H3BO3 and B2O3 [29,135]. However, regarding the self-lubricating performance of the tribo-chemical layer, it is declared that H3BO3 plays the leading role owing to its layered triclinic structure [136]. Taking advantage of the self-lubrication characteristic and low liquid-absorption ratio of polyformaldehyde (POM), Gao et al. [137] constructed a POM/3 vol.% h-BN composite and studied its tribological behavior in a water medium. Assessments revealed that the COFs of both POM and POM/h-BN decreased gradually with increasing sliding time, while the composite showed a lower COF, especially when the applied load was increased from 50 N to 200 and 300 N. In parallel, the addition of h-BN reduced the wear rate significantly; at both low and high loads, the wear rate was about one order of magnitude lower than that of neat POM.
The obtained attenuated total reflectance Fourier-transform infrared spectroscopy (ATR-FTIR) spectra of the inside and outside of the wear scar indicated no sign of adsorbed water (Figure 14a,b), meaning the composite is suitable for use in an aqueous environment. More importantly, an excess band at 945 cm−1 appeared on the worn surface, attributed to H3BO3 (Figure 14a), which is responsible for the superior lubrication performance of the h-BN composites. The tribo-chemical reactions of Equations (7) and (8) describe how this compound is formed:

2BN + 3H2O → B2O3 + 2NH3 (7)

B2O3 + 3H2O → 2H3BO3 (8)

Energy-dispersive X-ray (EDX) analysis showed that the top layer of the boundary film is a B-rich layer, and the measured lattice fringes of 0.319 and 0.321 nm proved the existence of H3BO3 and B2O3, respectively (Figure 14f; refer to the tribo-chemical reactions of Equations (7) and (8)). Also, some intact h-BN with a 0.333 nm lattice fringe exists. These layers are oriented parallel to the sliding direction, which enhances the lubrication capability. Moreover, it is shown that using only 10 vol.% carbon fiber (POM/10CF-3hBN) reduces the wear rate by about 50%, which originates from the reinforcing synergistic effect of the fiber and the h-BN boundary layer [137]. In addition, the SEM images taken from the steel counterface after rubbing with POM and POM/h-BN illustrated that the detached POM particles are concentrated in the roughness trenches, while the flat areas are mostly intact (Figure 14c). In the composite rubbing case, by contrast, the boundary film is spread over the whole steel counterface (Figure 14d), resulting in the higher wear resistance of POM/h-BN.

Epoxy is a popular polymer for structural applications; however, it has a serious shortcoming, i.e., low wear resistance due to the 3D cross-linked network formed during the curing process [138,139]. Adding nano-fillers is a promising way to compensate for the poor tribological performance of epoxy. For example, -OH functionalized h-BN and c-BN were utilized to improve the tribological properties of an epoxy coating [140]. Observations showed that there were some long cracks in the neat epoxy coating, while the BN nanofillers acted as barriers against cracks and hindered their expansion. The COF evaluation indicated that the minimum values belong to the composites containing 0.5 wt.% h-BN and c-BN, with 12.4 and 9.74% lower COF compared to neat epoxy in the dry rubbing test, and 39.27 and 30.03% lower COF compared to neat epoxy under the seawater sliding condition, respectively. Besides increasing the toughness of the composite by hindering crack propagation, the formation of a solid boundary film caused a reduction of the COF of the composites. Wear-rate results showed significant improvement for both reinforcements; however, the c-BN reinforced composite was slightly better. For example, the wear rate of epoxy/h-BN was reduced by 73.61 and 68.36% with respect to pure epoxy under dry and wet sliding conditions, respectively.
The remarkable performance of the composite in the seawater condition is attributed to the diminished direct contact of the sliding surfaces, which reduces adhesive wear; to carrying away the generated frictional heat, which prevents thermal softening; and to removing the produced wear debris, which results in a smoother and cleaner surface [140]. In a similar work, polydopamine (PDA) was used to functionalize the BN nanofillers for better dispersion within the epoxy matrix; moreover, binary reinforcements containing both h-BN and c-BN with proportions of 1:2 (HC12), 1:1 (HC11), and 2:1 (HC21) were examined [141]. As can be seen in the cross-section profiles of the depth of the wear cracks (Figure 15a), in the dry condition the maximum value belongs to pure epoxy with 46 µm and the minimum values belong to HC21 and c-BN with 14 and 12 µm, respectively. Under the wet sliding test, the results were different; pure epoxy revealed a depth of 68 µm, which was clearly the maximum, while for HC21, HC12, and c-BN it was 2.7, 3.6, and 1.3 µm, respectively (Figure 15c). The wear rate more or less followed this sequence, as illustrated in Figure 15b,d. Obviously, c-BN showed better results than h-BN due to its higher hardness. However, when the reinforcement contained both of them, the composite showed excellent anti-wear behavior due to the synergy of the hard cubic phase and the good lubricity of the hexagonal phase of BN [141].
Polyaryletherketone (PAEK) is another polymer with great potential for tribological applications [142]. In a recent study, the effect of adding micro- and nano-sized h-BN as a secondary solid lubricant was examined [143]. In this regard, the composite PAEK/30 glass fiber-10 graphite-10 h-BN (named CHM) was chosen as the baseline, and then 3% of the h-BN was substituted by nano h-BN (named CHN). Evaluations revealed that adding nano h-BN diminished both the COF and the wear rate at low and high sliding velocities. According to Figure 16a,b, the boundary film in CHM is thick and discontinuous, while that of CHN is smooth and uniform. Besides, SEM images of the worn surfaces indicate that the fibers in CHM are cracked and debonded (Figure 16c,d), resulting in high wear. The labeled areas in Figure 16c show (1) debonding of large fibers, (2) broken and disoriented fiber pieces causing high friction, (3) an array of fiber pieces that have just been broken and are about to become disoriented on further shearing, and (4) cavities left after removal of fibers and materials. In contrast, there is not much evidence of cracking or debonding of the fibers in the case of CHN [143].
Typically, fiber-reinforced CMCs represent the best mechanical performance among the different kinds of CMCs. One important strengthening mechanism in this group of composite materials is crack deflection, for which lubricity between the fiber and the matrix seems essential [144]. Conventional solid lubricants for SiC fiber are pyrolytic carbon and h-BN [145]. The main advantage of an h-BN coating is its ability to work in high-temperature oxidizing environments [146]. However, this process is challenging since depositing a uniform, crack-free layer on fibers is difficult. Recently, a uniform layer of h-BN was successfully derived from graphene oxide (GO) on SiC fiber by Tak et al. [147]. In this process, the surface of the SiC fiber was modified with amine groups using (3-aminopropyl)triethoxysilane. GO was then deposited on the fiber and converted by heating at 1200-1400 °C under an N2 and NH3 atmosphere. By increasing the temperature from 1200 to 1400 °C, the thickness of the coating increased from 10 nm to 1.10 µm (Figure 17a-c); however, heating at 1400 °C is necessary in order to obtain a highly crystalline structure.

The two thermodynamically stable allotropes of BN (h-BN and c-BN) have also been utilized as reinforcement and matrix, respectively [148]. It has been shown that the addition of h-BN flakes as a lubricant to the self-lubricating c-BN composite improves the wear behavior of the as-prepared composites, although it negatively affects the fracture strength. c-BN prepared via powder metallurgy, Cu-Sn alloy powder, and Ti particles mixed with different amounts of h-BN from 5 to 15 wt.% were used to make the specimens, which were tested in a ball-on-disc setting.
A considerable reduction in COF from 0.7 to 0.21 was also observed upon increasing the h-BN content from 5 wt.% to 15 wt.%, proving the role of h-BN in the wear-characteristic enhancement of the composites. Figure 17d shows the COF for five different amounts of h-BN: 5 wt.%, 7.5 wt.%, 10 wt.%, 12.5 wt.%, and 15 wt.%. As is clearly observable in the diagram, the major drop occurs on increasing the h-BN from 10 wt.% to 12.5 wt.%, from 0.58 to about 0.35 [148]. To interpret the fracture-mode transformation of the composite with increasing h-BN content, SEM micrographs of the ball's worn surface as a function of the h-BN concentration are shown in Figure 17e-g. As can be seen, by increasing the h-BN from 5 wt.% to 15 wt.%, the fracture mode transforms from trans-granular into inter-granular. At 5 wt.%, the h-BN additive provides a relatively uniform dispersion of particles in the matrix (Figure 17e). However, increasing the h-BN content to 10 wt.% leads to the formation of micro-cracks in the c-BN grains (Figure 17f), and with higher h-BN amounts (up to 15 wt.%) a larger number of micro-cracks as well as micro-pores appear (Figure 17g), resulting in lower mechanical strength. Overall, 10 wt.% h-BN represented the optimal tribological and mechanical properties [148].

A study of the wear characteristics of Si3N4/h-BN ceramic composites under a marine atmospheric environment reflected a notable enhancement in the tribological behavior of silicon nitride with the addition of a second phase of h-BN [149]. Adding 20 wt.% h-BN lowered the COF to 0.302 and the wear rate to 2.93 × 10−6 m3 N−1 m−1 [150]. Figure 18a,b show the worn surfaces of Si3N4 and Si3N4/20% h-BN, respectively, and the smoother worn surface with 20% h-BN is obvious. This improvement is the consequence of forming a tribo-chemical film on the worn surface. In a marine atmosphere, plenty of ions encourage the formation of a tribo-chemical film, and a higher content of h-BN, which raises the viscosity of the tribo-chemical film, increases the resistance of the film [150]. h-BN has also been used to boost the tribological properties of carbon nanotubes (CNTs).

Apart from PMCs and CMCs, MMCs can also be reinforced by h-BN to enhance their tribological functionality. Among the alternatives, Cu, Al, Ni, and Fe have recently grabbed the attention of tribology scientists. Owing to the supreme electrical and thermal conductivities of Cu, Cu matrices are used as a platform in various industries; however, their applications are often suppressed by their low mechanical strength and wear resistance [152,153]. Hence, ceramic nanoparticles are often used to strengthen the Cu matrix without a considerable negative effect on the electrical and thermal conductivities [154]. One possible choice in this group of reinforcements is h-BN; however, there is only one published report available. It was demonstrated that the addition of h-BN to the Cu matrix improves the lubricity of the final composite [155]. In detail, the addition of 2.5 wt.% h-BN caused only a 0.008 reduction in COF, while 5, 7.5, and 10 wt.% h-BN resulted in reductions of 0.079, 0.102, and 0.112, respectively (Figure 18e). The minor improvement for the Cu/2.5 wt.% h-BN nanocomposite could be due to the pinning and alignment of the h-BN planes by the matrix grains during the sintering process.
Also, it seems the optimal amount is 5 wt.% h-BN, since no further improvement was observed at higher h-BN contents (which could be due to the random orientation of the h-BN platelets). Moreover, as demonstrated in Figure 18f, the wear rate increases with the addition of h-BN, relating to the existence of low shear strength along the rubbing direction [155]. The wear behavior of an aluminum alloy (AA6082) reinforced by TiB2 and h-BN nanoparticles has been investigated by Palanivel et al. [156]. In this research, friction stir processing (FSP) was used as the synthesis process. The shape and morphology of the nanosized h-BN particles were found to be unchanged during FSP, while fragmentation of the TiB2 particles varied during the process. Additionally, the nano h-BN particles could enhance the wear resistance through the formation of a tribo-film. Initially, the wear rate of AA6082 itself was measured as 23.75 × 10−5 mm3 N−1 m−1. However, with the addition of TiB2 and h-BN reinforcements, the wear rate was reduced by around 35% and 40%, respectively. Accordingly, the AA6082/(TiB2 + h-BN) hybrid composite showed a wear rate as low as 13 × 10−5 mm3 N−1 m−1. To understand the morphological differences between the samples, Figure 19a-d exhibit SEM images of the worn surfaces of pure AA6082 and the AA6082 composites containing TiB2, TiB2 + h-BN, and h-BN particles, respectively. As can be seen, Figure 19a indicates plastic deformation in the parent material, and the worn surface of the composite reinforced by TiB2 is covered by wear debris (Figure 19b). However, in Figure 19c the worn surface is debris-free due to the lubricating role of h-BN as well as the load distribution of TiB2, which lowers material removal. In other words, by acting as a solid lubricant, h-BN forms a tribo-film on the worn surface and creates a smoother surface [157]. Hence, it is expected that the presence of the wear debris shown in Figure 19b is due to the absence of h-BN. Lastly, the wear debris in Figure 19d, related to the AA6082/h-BN particles, originates from material removal by the adhesive mechanism, which was much lower in the presence of TiB2, as well as by the abrasive mechanism [158]. Importantly, the presence of TiB2 and h-BN at the wear interface can alter the wear mechanism from adhesive to abrasive [156].
The tribological behavior of a BN nano-platelet (BNNP) reinforced Ni3Al intermetallic matrix composite has also been researched [159]. The results of ball-on-disk wear testing revealed a good wear resistance for Ni3Al/h-BN, with a COF of 0.22-0.26, while pure Ni3Al showed a range of 0.29-0.33. The strong anti-friction behavior and high wear resistance of the composites were attributed to a higher dislocation density, mainly via Orowan's mechanism [160], in which the BNNPs are pulled out and bridged [12] (Figure 19e). As a result, the lower shear stress associated with the addition of BNNPs and their self-lubrication characteristic enhanced the resistance to weight loss. Figure 19f illustrates the worn surface of neat Ni3Al containing large debris, while finer particles can be seen in the debris on the worn surface of the composite, causing a lower wear rate [159]. In another study, the high-temperature tribological properties of Ni-based self-lubricating coatings deposited by atmospheric plasma spraying were studied [161]. The outcomes showed that the COF and wear rate of the samples containing 5 wt.% and 10 wt.% h-BN decrease as the temperature is elevated up to 800 °C (Figure 19h). They reported that a smooth tribo-layer of lubricious phases is formed as a result of the synergetic action of h-BN in the coating, and consequently there is no direct contact between the wear surface and the sliding ball [162]. As a result of this tribo-layer, a minimum friction coefficient of 0.23 was achieved, while 10 wt.% h-BN reduced this parameter to 0.27 compared to 0.32 for the composite without h-BN [161].
Hammes et al. investigated the impact of h-BN and graphite on the mechanical scuffing resistance of a self-lubricating iron-based composite [163]. They revealed that the formation of a tribo-layer containing both graphite and h-BN on the worn surface can upgrade the composite's wear resistance. Figure 20a shows the role of the total lubricant amount on the COF. As is shown, while the COF with a total of 5% lubricant is around 0.3, totals of 7.5% and 10% lubricant reduce the COF to 0.09 and 0.75, respectively. In all these samples, 1% of the total amount of lubricant was h-BN. At the same point, 2.5% h-BN within a total of 10% lubricant leads to a COF of about 0.88. This reduction in COF can be attributed to the probable formation of an oxide layer due to a tribo-chemical reaction; the presence of both h-BN and graphite provides a good source of solid lubricant, and a higher volume of this source supports the formation of the layer [164]. A further increase in the h-BN content of the lubricant leads to a higher COF because of the remains following the removal of material from the wear surface. However, lower proportions of h-BN in the mixtures enhanced the mechanical properties. Figure 20b exhibits the mechanical strengths of the composites with different wt.% of lubricant [165]. Consequently, the best simultaneous mechanical and tribological improvement is achieved with 1 vol.% h-BN and 9 vol.% graphite [163]. The recent developments in h-BN reinforced composites for tribological applications are summarized in Table 2.

Lubrication Additive

Besides enhancing the tribological properties of metal, ceramic, and polymer matrices, h-BN can be used as a lubrication additive in water or oil [131,166,167,168]. In one investigation, benefiting from different synthesis routes and treatments, BNNSs with different microstructures and sizes were obtained [169]. The samples obtained from route 2 (Equation (10)) were characterized as thin and of small surface area; in contrast, the BNNSs of route 1 (Equation (9)) were relatively thicker and larger. Utilizing them as a lubrication additive in water revealed that the COF with bulk h-BN is close to that of pure water during the sliding test. In the case of the BNNSs, the results were different; the behavior of all the samples was approximately the same until 500 s. Afterwards, up to 1800 s, the COF of BNNSs-2 and BNNSs-A gradually increased, whereas BNNSs-1 kept its COF low (Figure 20c). The possible reason given is the poor mechanical strength of BNNSs-2 and BNNSs-A, which could easily be broken under high applied loads, whereas the thick and large BNNSs-1 can tolerate higher loads. The reason behind the poor performance of bulk h-BN may be its poor dispersion in water. The wear-rate variation of the samples was similar to the COF behavior. It can be concluded that the size of the nanoparticles strongly affects the tribological properties of materials; typically, size reduction should lower the wear resistance, since a high density of defects is introduced into the structure and degrades the mechanical properties. In another investigation, a composite nanostructure of graphene and BNNSs was utilized to enhance the friction-reducing and anti-wear lubrication performance of oil [170]. These nanoparticles, with diameters larger than 200 nm and a thickness of 10 nm, were synthesized via high-energy ball milling for 20 h. The result of this harsh process was a flexible layered structure which can decrease the COF significantly and reduce the wear-scar diameters by providing mending and polishing effects (Figure 20d,e).
It is worth noting that a longer ball-milling duration reduced the size of the nanosheets, promoting the formation of inflexible and agglomerated nanosheets. Surprisingly, benefiting from computational calculations, it was predicted that a large enough graphene/h-BN heterostructure can lead to a low COF [31]. To show the excellent tribological properties of h-BN arising from interlayer slip, Xiaojing et al. used it as a lubricant additive in Gas-to-Liquid-8 (GTL-8) base oil [171]. Boron nitride nanosheets were synthesized by molten-alkali-assisted exfoliation: the mixture (2.84 g NaOH, 2.16 g KOH, and 1 g h-BN) was maintained in a reactor for 2 or 6 h at 180 °C, and the samples were named BNNS-1 and BNNS-2, respectively. Increasing the exfoliation time reduced the thickness from about 150 nm for raw h-BN to 45 and 3 nm for BNNS-1 and BNNS-2, respectively. The tribological assessments revealed that utilizing 0.3 mg mL−1 BNNS-1 in GTL-8 gave the best performance, with roughly 35% and 95% reductions in COF and wear volume, respectively. Figure 20f-h exhibits the wear surfaces imaged by TEM. In the case of BNNS-1, a tribo-film with a thickness of 150 nm can be observed, as marked in Figure 20f. In the same way, tribo-films with thicknesses of 50 nm and 40 nm are shown on the wear surfaces of h-BN and BNNS-2 in Figure 20g,h, respectively. The thick 150 nm tribo-film corresponds to the superior performance of BNNS-1. Also, the presence of some wear debris on the worn surface, seen as black pits, indicates that the wear mechanism was abrasive [172]. For a better understanding of the role of BNNS, as shown in Figure 20i, the oil molecules can be considered a high-velocity rail that carries BNNS-1 and introduces it to the rubbing interface, leading to the formation of a thick tribo-film. The problem with raw h-BN is its high thickness, which means it cannot enter the contact surface easily. So, the thickness of BNNS as a lubricant additive plays a crucial role in the tribological behavior of oil-based lubricants. In another study, in order to compensate for the poor dispersibility of h-BN in oil-based lubricants, h-BN was first exfoliated and then fully oxidized to form hydroxyl functional groups [173]. Then, through the hydroxyl groups, long alkyl chains carrying octadecyltriethoxysilane (ODTES) were chemically attached to the BNNSs (Figure 21a). BNNS-ODTES was completely stable in a synthetic polyol ester lube base oil owing to the van der Waals interaction between the alkyl groups of the polyol ester and the octadecyl chains of BNNS-ODTES. Tribological evaluations indicated a positive effect of BNNS-ODTES as an additive on the wear behavior of the synthetic polyol ester lubricant.
For instance, studying the wear-track profile of a steel disc lubricated with polyol ester without and with 0.04 mg mL−1 BNNS-ODTES showed that BNNS-ODTES significantly reduced the wear width (from 570 to 345 µm) and depth (from 12.9 to 5.2 µm) (Figure 21b-d). The shear-induced delamination of the BNNS-ODTES and the subsequent formation of a transfer film on the rubbing surface can be responsible for the reduced wear [173]. Thus, h-BN related materials can efficiently improve the tribological behavior of components and enhance their performance.

Conclusions and Future Perspectives

To recapitulate, 2D h-BN nowadays encompasses almost all geographic borders of scientific realms with its outstanding properties, including high TC, an electrically insulating character with a tunable 5.9 eV band gap, excellent chemical/thermal stability, superior resistivity against corrosion/oxidation, and being an intrinsically lubricating material as a result of its layered structure. To be more specific, having high TC and dielectric characteristics at the same time raises numerous research interests in this material as a reinforcement agent for enhancing the heat-transfer quality of PMCs in the electronic packaging industry. Since h-BN is regarded as an anisotropic filler with a distinctive difference between its in-plane and out-of-plane TCs, the overall TC of an h-BN-reinforced polymer composite is correlated with the filler's alignment in the polymer matrix, the filler's functional groups, the quality of the filler dispersion and the effort in producing a continuous heat-conduction path, and the filler-filler and filler-polymer interfacial properties. In addition, the atomic flatness, high aspect ratio, and crystallinity of h-BN favor efficient heat dissipation without the formation of localized hot spots. From a theoretical point of view, BNNSs take precedence over bulk h-BN due to the suppressed phonon scattering in few-layered materials. Thereby, many efforts have been devoted to controlling the orientation of h-BN platelets through synthesis techniques to exploit the ultimate potential of its high TC in polymer composites.
Herein, we comprehensively discussed the most common techniques focused on orientation manipulation and incorporation of h-BN fillers, namely freeze-drying, magnetic field/template-assisted methods, CVD, mechanical milling, and electrospinning. Apart from that, easily sliding layers, a low friction coefficient as a result of low shear strength, and the ability to function in a wide range of environments (wet/dry/oxidative/high temperature) have been favored in producing highly durable composites with increased wear resistance. Despite the virtues and merits of h-BN in improving the thermal properties and tribological behavior of polymer/metal/ceramic matrix composites, this research topic still faces some challenges that affect the overall performance of the final composite in the relevant applications and hinder h-BN's utilization in those fields. The first challenge is developing fully controlled BNNSs in terms of crystallinity, morphology, number of layers, functionalization, and surface chemistry. The next concerns the compatibility of the BNNS-matrix interface, which plays a crucial role in the corresponding application. Meticulous evaluation of these problems will undoubtedly pave a new path for the utilization of h-BN in novel thermal and tribological applications where other materials cannot enter and endure.
Highly controlled multiplex electrospinning

Applications of electrospinning (ES) range from the fabrication of biomedical devices and tissue regeneration scaffolds to light manipulation and energy conversion, and even to the deposition of materials that act as growth platforms for nanoscale catalysis. One major limitation to wide adoption of ES is stochastic fiber deposition resulting from the chaotic motion of the polymer stream as it approaches the deposition surface. In the past, fabrication of structures or materials with precisely determined mesoscale morphology has been accomplished through modification of electrode shape, use of multi-dimensional electrodes or pins, deposition onto weaving looms, hand-held electrospinning devices that allow the user to guide deposition, or electric field manipulation by lensing elements or apertures. In this work, we demonstrate an ES system that contains multiple high voltage power supplies that are independently controlled through a control algorithm implemented in LabVIEW. The end result is what we term "multiplex ES", where multiple independently controlled high-voltage signals are combined by the ES fiber to result in unique deposition control. COMSOL Multiphysics® software was used to model the electric field produced in this novel ES system. Using the multi-power supply system, we demonstrate fabrication of woven fiber materials that do not require complex deposition surfaces. Time-varied sinusoidal wave inputs were used to create electrospun torus shapes. The outer diameter of the tori was found, through parametric analysis, to be rather insensitive to the frequency used during deposition, while the inner diameter was inversely related to frequency, resulting in the overall width of the tori increasing with frequency. Multiplex ES has a high-frequency cutoff based on the time response of the high voltage electrical circuit. These time constants were measured and minimized through the addition of parallel resistors that decreased the impedance of the system and improved the high-frequency cutoff by up to 63%.

Introduction

Electrospinning (ES) fabrication was first observed in 1897 [1], followed by a series of patents granted for textile applications. In 1969, a publication by Taylor [2] spurred research that utilized ES fabrication for many applications seeking to make polymer materials with micro- to nano-scale features and high surface-area-to-volume ratios. Since then, ES has been used to fabricate fuel cells and generators and to provide photocatalytic surfaces [3], as well as to prevent degradation of perovskite solar cell layers [4][5][6][7] and to pattern nanoscale polarizers via lithography [8]. Biomedical applications of electrospun materials include enzyme immobilization, sensors, tissue engineering, wound healing [9], and drug delivery [10][11][12]. ES fiber materials have also been used for the creation of nanomaterials that range in application from energy conversion to medicine and that exhibit desirable material properties, such as high strength or modulus [13].
ES fabrication requires delivery of a solvent-dissolved [14][15][16] or melted [17] liquid polymer into a high-strength electrostatic field that exists between a metallic spinneret and a collection surface. Once the liquid polymer reaches the end of the spinneret, the electrostatic field causes surface charge buildup at the surface of the polymer bead at the end of the spinneret. At a critical value, the polymer bead is deformed into a Taylor cone [2]. At the tip of the Taylor cone, a micro- or nano-scale polymer jet is pulled by electrostatic force toward the deposition surface, resulting in deposition of polymer fibers or beads. During flight, the polymer jet experiences a chaotic phase during which solvent evaporation occurs [18]. The force required to initiate ES is described by the following formula:

F = ε_r ε_0 A V² / (2 d²)    (1)

where permittivity is represented by ε_r (relative) and ε_0 (in a vacuum), A is the area of the collection plate, V is the applied voltage, and d is the separation distance between spinneret and collection surface [19].

The breadth of materials that ES has enabled is far-reaching and relevant in applications from fundamental chemistry and materials synthesis to applied use in industry. The span of applicable uses for ES has led to iterations of ES equipment that accommodate implementation for fabrication of specialized materials. Melt ES, for example, allows the user to avoid the use of solvents during the process [17]. Other iterations involve alteration of the deposition surface to produce aligned structures that are beneficial for enhanced charge transport [20,21], production of polarized light emission [22,23], improved absorption and photovoltaic properties [24,25], and crystal properties beneficial for optoelectronics, among other applications [26][27][28][29]. Alignment is also relevant to the biomedical industry to provide a scaffold for directional cell growth [30] and guided cell differentiation [31]. Alignment of polymer fibers can be accomplished through the use of rotating collector drums [32,33], parallel gap electrodes [8,34], or counter electrodes [35]. The electric field that provides the electrostatic force for polymer deposition has also been manipulated to guide fiber deposition and material spot size [19,36,37]. Passive methods for electric field manipulation include using copper rings as lensing elements to dampen chaotic motion [38] and use of aperture plates to reduce the resulting fiber mat spot size [36,39]. Researchers have also accomplished miniaturization of the ES system and added configuration modifications that allow ES systems to be handheld and deposit onto any surface regardless of charge [37,40,41].
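As a quick numerical illustration of the relation above (using the reconstructed form of Eq. 1), the Python sketch below evaluates the electrostatic force for plate area, voltage, and separation values within the ranges reported later in this work; the specific numbers are assumptions chosen for demonstration, not measured values from the study.

```python
# Sketch: electrostatic force between spinneret and collection plate, per the
# reconstructed Eq. (1): F = eps_r * eps_0 * A * V^2 / (2 * d^2).
# Parameter values are illustrative assumptions, not data from this paper.

EPS_0 = 8.854e-12  # vacuum permittivity, F/m

def es_initiation_force(area_m2: float, voltage_v: float,
                        distance_m: float, eps_r: float = 1.0) -> float:
    """Electrostatic force (N) for a plate of area A at potential V and gap d."""
    return eps_r * EPS_0 * area_m2 * voltage_v**2 / (2.0 * distance_m**2)

if __name__ == "__main__":
    A = 0.01     # collection-plate area, m^2 (10 cm x 10 cm, assumed)
    V = 15e3     # applied voltage, V (within the 0-20 kV range used here)
    d = 0.15     # spinneret-to-collector distance, m (within the 10-20 cm range)
    print(f"F = {es_initiation_force(A, V, d)*1e6:.1f} uN")
```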
In this work, we present an iteration of ES fabrication that enables precisely controlled deposition of electrospun fibers by use of multiple high voltage power supplies, each linked to a separate electrode and controlled by a wave generator. Sinusoidal control (input) signals with appropriate phase lag were generated in LabVIEW to control fiber deposition in two dimensions, which resulted in woven polymer fabrics and complex shapes. Woven polymer fabrics are desirable for strength, dimension, flexibility, porosity, elongation, and failure strength in multiple directions [42], in addition to enabling long-term drug release [43] and tissue mimicry [44]. The production of woven electrospun materials has previously been performed through the use of novel deposition surfaces, such as weaving looms [43] or rotating collectors with conducting tines [45]. In this work, we demonstrate multiplex ES with multiple independently controlled high-voltage power supplies to create woven polymer fabrics on flat, non-complex surfaces with precise control rather than random attachment and deposition onto complicated, moving surfaces. Our novel process enables coating of objects or materials placed onto the flat deposition surface that are not feasible for loom or conducting-tine deposition substrates. Time-varied sinusoidal wave inputs were also used to demonstrate deposition of electrospun torus shapes that have not been demonstrated with other highly controlled ES systems. Using parametric analysis, predictable torus dimensions can be achieved. Demonstration of torus deposition, among other complex morphologies, provides examples of the highly controlled structures made possible by the multiplex ES system. Deposition of complex polymer morphologies expands the applicability of this versatile and economically feasible manufacturing method for producing flexible materials that can wrap around bone, coat a sharp corner, or be shaped to slide into a non-linear crevice to enable novel functional materials. For multiplex ES, the time constant that dictates the high-voltage high-frequency cutoff is also reduced through the reduction of electrical impedance. Parallel resistors added to the high voltage circuit reduced the circuit impedance, lowering the time constant and further demonstrating the high level of fiber deposition control multiplex ES can provide.

Polymer preparation

Polycaprolactone (PCL, 80k MW) purchased from Sigma Aldrich was used because of its suitability for ES and its biocompatibility. Compared to other polymers we have used in the lab, PCL has provided us with the most consistent morphological results when deposited by ES, and it provides a hydrophobic structure that stays intact in any humidity and during electron microscopy preparation and imaging. PCL was prepared at 9 wt% in 2,2,2-trifluoroethanol (TFE, Sigma Aldrich) by stirring on a hot plate at 90 °C.
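As a small worked example of the solution preparation step, the snippet below computes how much PCL and TFE are needed for a 9 wt% solution of a chosen total mass; the batch size and the TFE density used to convert to volume are assumptions for illustration only, not quantities specified by the authors.

```python
# Sketch: masses for a 9 wt% PCL solution in TFE.
# Batch size and solvent density are assumed values for illustration.

def polymer_solution_masses(total_mass_g: float, wt_frac: float):
    """Return (polymer_mass_g, solvent_mass_g) for a given weight fraction."""
    polymer = total_mass_g * wt_frac
    return polymer, total_mass_g - polymer

if __name__ == "__main__":
    total = 20.0            # grams of solution to prepare (assumed)
    pcl_g, tfe_g = polymer_solution_masses(total, 0.09)
    tfe_density = 1.39      # g/mL for TFE (approximate literature value)
    print(f"PCL: {pcl_g:.2f} g, TFE: {tfe_g:.2f} g (~{tfe_g / tfe_density:.1f} mL)")
```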
Electrospinning parameters

The grounded spinneret used during ES was 22 gauge, separation distances ranged from 10 to 20 cm, and the polymer flow rate ranged between 0.25 and 0.45 mL/hr. The input voltage from the DAQ ranged from 0 to 5 V, which resulted in a 0-20 kV output from the power supplies (scaled linearly) to the electrodes. Multiplex ES was performed in a laboratory held at 27 °C and 25% relative humidity. Higher temperature during fiber formation has been shown to increase the solvent evaporation rate, and polymer viscosity decreases with increasing temperature [46]. During ES, the solvent-dissolved polymer is electrified, thereby creating electrostatic repulsion among the surface charges that feature the same polarity [47]. To improve repeatability, such parameters should be controlled during ES.

COMSOL modeling of the electrostatic field

The electric field of the multiplex ES system was modelled using COMSOL Multiphysics® software. The model was based on solutions to Poisson's equation and mapped the electric field in three dimensions, which allowed conceptualization of the system geometry and how the system configuration affected the electrostatic forces that act on the polymer during ES.

Microscopy

A Hitachi S-4500 field emission scanning electron microscope (SEM) was used to image the resultant ES mats. Preparation for SEM involved adhering the mats to aluminum stubs with carbon tape and gold coating the samples in a Denton Desktop unit for 1 min. Secondary imaging was collected on a Keyence VHX-5000 digital light microscope.

ImageJ analysis of electrospun tori

Electrospun tori were imaged with a camera that had 82.4 pixel/cm average resolution at a distance of 30.5 cm. Camera images were then thresholded in ImageJ software. To define the pixel boundaries, a threshold of 95/255 was applied to the images, allowing the edges of the tori mats and the background of the image to be distinguished and the dimensions of the tori mats to be measured.

Electrostatic force models for multiplex ES

The electrostatic force for the multiplex ES system is represented by Eq. 1, where permittivity is represented by ε_r (relative) and ε_0 (in a vacuum), A is the area of the collection plate, V is the applied voltage, and d is the separation distance between spinneret and collection surface. In the four-electrode setup, the four electrodes were symmetrically spaced apart from the spinneret with shapes defined by circular sectors of angle θ, inner radius r_i, and outer radius r_o (Fig. 2). Integration along the surface of the electrode allows the magnitude of the electrostatic force F_es between individual electrodes to be calculated as follows:

F_es = ∫_A [ε_r ε_0 V² / (2 d²)] dA,  with dA = r dr dθ    (2)

where permittivity is represented by ε_r (relative) and ε_0 (in a vacuum), θ is the sector angle, r_i and r_o are the inner and outer radii, V is the applied voltage, and d is the separation distance between spinneret and collection surface. Since the electrode geometry was symmetric with respect to the spinneret, the location of nanofiber deposition becomes proportional to the balance of electrostatic force directed toward each electrode. The summation of the electrostatic forces in each direction is what ultimately determines the driving force causing deposition and the most likely deposition location of the nanofiber.
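To illustrate the force-balance idea described above, the sketch below evaluates the sector-integrated force magnitude for each of four electrodes at a given set of voltages (using the reconstructed integral of Eq. 2 with a constant separation d) and sums the resulting force vectors to estimate the instantaneous deposition direction. The geometry, voltages, and the constant-separation simplification are assumed example values, not the parameters used in the study.

```python
# Sketch: per-electrode sector force and resulting deposition direction.
# Geometry, voltages, and the constant-separation simplification are assumptions.
import math

EPS_0 = 8.854e-12  # F/m

def sector_force(voltage_v, theta_rad, r_i, r_o, d, eps_r=1.0):
    """Integrate eps_r*eps_0*V^2/(2 d^2) over a sector of area theta*(r_o^2 - r_i^2)/2."""
    area = 0.5 * theta_rad * (r_o**2 - r_i**2)
    return eps_r * EPS_0 * voltage_v**2 * area / (2.0 * d**2)

def deposition_direction(voltages, angles_deg, theta=math.radians(80),
                         r_i=0.02, r_o=0.10, d=0.15):
    """Vector-sum the four electrode forces; return (Fx, Fy) in newtons."""
    fx = fy = 0.0
    for v, ang in zip(voltages, angles_deg):
        f = sector_force(v, theta, r_i, r_o, d)
        fx += f * math.cos(math.radians(ang))
        fy += f * math.sin(math.radians(ang))
    return fx, fy

if __name__ == "__main__":
    # Electrodes A, B, C, D centered at 0, 90, 180, 270 degrees (assumed layout)
    volts = [18e3, 6e3, 6e3, 6e3]   # instantaneous electrode voltages, V (assumed)
    fx, fy = deposition_direction(volts, [0, 90, 180, 270])
    print(f"net force ~ ({fx*1e6:.1f}, {fy*1e6:.1f}) uN, "
          f"biased toward {math.degrees(math.atan2(fy, fx)):.0f} deg")
```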
The electrostatic field within the multiplex ES system was modelled in COMSOL Multiphysics® software. Values for the voltage at each electrode were consistent with the input control signals generated in LabVIEW, amplified by the high-voltage power supplies, and supplied to the electrodes (Fig. 3A). The spinneret was grounded (0 V) during the experiment and is visible in blue, surrounded by a smooth transition to the high voltage environment (Fig. 3B).

Deposition of woven electrospun mats by multiplex ES

Multiple, time-varied signal inputs were generated in LabVIEW and used to control the voltage on the power supplies, thereby directing fiber deposition. The sinusoidal input used consisted of a 2 V amplitude, 3 V offset, 1 Hz frequency, and a 90° phase shift between each signal, amplified by a factor of 4000 by the high voltage power supplies used. Generated input and measured output signals are shown in Fig. 4. Using this input, woven PCL fiber mats were produced by the pattern shown in Fig. 5A. The woven pattern is represented on an electron micrograph in Fig. 5B. During deposition, the polymer jet is directed toward the direction of the maximum electrostatic field, which changes with time according to the control scheme. Under the wave input conditions used, the high-strength electrostatic force alternates between the four electrodes. In this system, polymer was first deposited onto electrode A, followed by electrode C. From C, the polymer jet is directed to electrode B, still following the high-strength field but avoiding the center mat. From electrode B, the polymer fibers are directed across to electrode D, then to A (avoiding the center mat), and then onto C, to B (avoiding the center mat), and back to electrode D. This pattern results in truly woven polymer mats fabricated in the central area, between electrodes (Fig. 5C). A minimal code sketch of these phase-shifted control signals is shown below, after the parametric analysis.

Parametric analysis of tori deposited by multiplex ES

Parametric analysis has been used previously to generate mathematical understanding of the relationship between ES parameters and the resultant fiber mat spot size [19,37]. The multiplex ES device enables deposition of fiber mats with features that exhibit higher complexity. Parametric analysis of the frequency used during ES was applied to provide mathematical understanding of the resultant fiber dimensions when multiplex ES was used to deposit torus structures (Fig. 6). During ES, all parameters were held constant with the exception of frequency. Frequencies of 2, 3, and 4 Hz were used during these studies, and tori were deposited and imaged in triplicate (Fig. 6A-C). ImageJ was then used to apply a threshold to the fiber tori structures (Fig. 6D-F), and the resultant dimensions were compared. Results showed that the outer dimensions of the tori were rather insensitive to alterations in frequency (p = 0.088, Fig. 7A). However, the inner diameter of the tori varied inversely with the frequency of the applied electrostatic field described in the previous section (Fig. 7B). In all comparisons, mats deposited at 2, 3, and 4 Hz exhibited inner diameters that varied significantly (p < 0.015 or less). The alteration of the inner diameter of the tori ultimately resulted in alterations to the overall thickness of the structures (Fig. 7C, p = 0.05 or less).
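Referring back to the woven-mat control scheme above, the following Python sketch generates four sinusoidal control signals with a 2 V amplitude, 3 V offset, 1 Hz frequency, and 90° phase separation, clipped to the 0-5 V DAQ range. This is only a minimal stand-in for the LabVIEW implementation described in the paper, and the electrode-to-phase ordering is an assumption made for illustration.

```python
# Minimal stand-in for the LabVIEW control-signal generation:
# four 1 Hz sinusoids (2 V amplitude, 3 V offset) with 90-degree phase lags.
# The electrode-to-channel ordering is an assumption for illustration.
import math

def control_signal(t_s, amplitude_v=2.0, offset_v=3.0, freq_hz=1.0, phase_deg=0.0):
    """Voltage command for one DAQ channel, clipped to the 0-5 V DAQ range."""
    v = offset_v + amplitude_v * math.sin(2 * math.pi * freq_hz * t_s
                                          + math.radians(phase_deg))
    return min(max(v, 0.0), 5.0)

def electrode_voltages(t_s, gain=4000.0):
    """Commands for electrodes A-D (90 degrees apart), scaled by the HV supply gain."""
    phases = {"A": 0.0, "B": 90.0, "C": 180.0, "D": 270.0}
    return {name: gain * control_signal(t_s, phase_deg=p) for name, p in phases.items()}

if __name__ == "__main__":
    for t in (0.0, 0.25, 0.5, 0.75):
        hv = electrode_voltages(t)
        print(t, {k: f"{v/1000:.1f} kV" for k, v in hv.items()})
```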
Intricate designs enabled by multiplex ES
Simultaneous and independent control over electrode voltage and the sinusoidal wave inputs of the multiplex electrospinner enables the design of intricate structures that could be used for a variety of applications where the deposition surface(s) is/are in complex configurations. In Fig. 8, the point of the strongest electrostatic force was moved between electrodes using sinusoidal inputs in a pattern that resulted in the formation of looped ring structures. Guided, intricate designs may be useful when depositing a fiber mat that is functionalized by the configuration used, or where the deposition surface is in a complex configuration.

Response time and time constant minimization of the multiplex ES system
The multiplex ES system output exhibited a slight lag in response time and a decreased amplitude compared to the input signal (Fig. 4), due to the limited capability of the high-voltage power supplies to dissipate charge during a desired decrease in electric field strength. The speed at which the system can respond to changes in the control signal is fundamental to controlling the location of nanofiber deposition; therefore, minimization of the system time response would result in enhanced control over material morphologies. To understand how the response time of the multiplex system could be reduced, the high-voltage power supplies were modeled as charge pumps. In the multiplex system, the high-voltage power supplies contain internal capacitors that cannot dissipate charge instantaneously. The charge-pump characteristic of the high-voltage power supply gives it an effective inertia, such that the rate of charge build-up defines the maximum rate at which charge can be dissipated. While the rising and falling portions of the input sinusoid occupied equal time (0.368 s for both), the output signal exhibited approximately a twofold increase in the timing of the falling portion of the wave (0.368 s for rising, 0.633 s for falling). As the input frequency is increased, the time discrepancy between rising and falling output signals increases.
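The asymmetric rise/fall behaviour described above can be approximated by a first-order (RC-like) response with separate rising and falling time constants. The sketch below steps such a model and reports the 63.2% rise time and 36.8% decay time; a smaller falling time constant is used to mimic the effect of adding a parallel bleed resistor. The specific time constants are placeholders, not measured values from this study.

```python
import math

def first_order_step(v_now, v_target, dt, tau):
    """Advance a first-order system one step: dv/dt = (v_target - v_now)/tau."""
    return v_target + (v_now - v_target) * math.exp(-dt / tau)

def simulate(tau_rise, tau_fall, v_high=20e3, dt=0.001, hold=1.0):
    """Step input 0 -> v_high -> 0; return (time to 63.2% rise, time to 36.8% decay)."""
    v, t, t63 = 0.0, 0.0, None
    while t < hold:                       # rising phase
        v = first_order_step(v, v_high, dt, tau_rise)
        t += dt
        if t63 is None and v >= 0.632 * v_high:
            t63 = round(t, 3)
    v_start, t_fall, t37 = v, 0.0, None
    while t_fall < hold:                  # falling phase
        v = first_order_step(v, 0.0, dt, tau_fall)
        t_fall += dt
        if t37 is None and v <= 0.368 * v_start:
            t37 = round(t_fall, 3)
    return t63, t37

if __name__ == "__main__":
    # Placeholder constants: slower fall than rise (unmodified supply),
    # then an equal fall to mimic a parallel bleed resistor speeding up discharge.
    print("unmodified supply (rise s, fall s):", simulate(tau_rise=0.10, tau_fall=0.25))
    print("with bleed resistor (rise s, fall s):", simulate(tau_rise=0.10, tau_fall=0.10))
```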
To decrease the time constant of the multiplex ES system, parallel resistances between the high-voltage electrodes and ground were added. Because placing an additional resistance in parallel always yields a total resistance lower than that of the unmodified system, the total resistance of the system decreased significantly with the resistances used. The minimum resistances used were chosen so that the maximum current rating of the high-voltage power supplies was not exceeded. To determine the time constant of the multiplex ES system, ES was performed initially with a 0 kV input before an input of 20 kV was supplied to the system. After a steady-state output was reached, the system was again given an input of 0 kV. For an increasing signal (the rising portion of the sinusoid), the time constant (τ) is defined as the time it takes the system to go from 0 input voltage to 1-1/e ≈ 63.2% of its final asymptotic value. For a decreasing signal (the falling portion of the sinusoid), the time constant is defined as the time it takes the system to decay to 1/e ≈ 36.8% of its initial value. Following determination of the time constants of the multiplex system, resistor banks were added in parallel with the high-voltage power supplies, and the rising and falling time constants were again measured. For this work, resistor banks of 10, 50, 100, and 120 MΩ were used. When using resistors between 10 and 100 MΩ, the falling time constant was reduced by approximately 63% in all cases (Fig. 9). At 120 MΩ resistance, however, the falling time constant began to increase, and the reduction in time constant was only 51%. This is due to the current rating of the high-voltage power supply. Adding resistance in parallel with the system decreases the overall resistance and subsequently increases the amount of current the system will allow. This increase in current results in improved response times and control over deposition. Between 100 and 120 MΩ, it is assumed that the minimum resistance of the system was reached, and therefore the 120 MΩ resistor no longer improved the falling time constant response. Figure 8 shows the measured voltage used to characterize the time constant when 10 and 120 MΩ resistors were added in parallel with the high-voltage power supplies. In Table 1, the rising and falling time constants of the multiplex system are listed, along with those for the 10 and 120 MΩ resistances.

Fig. 9 Time response data collected from the oscilloscope for the multiplex ES system. Using resistors added in parallel with the high-voltage power supplies, the falling time constant that corresponded to charge dissipation of the power supplies was reduced. This reduction in the response time of the high-voltage power supply to the input signal improves control over fiber deposition and material/device morphology.

Conclusions
Precise control over fiber deposition during ES enables novel device design and promotes new applications and capabilities of the polymer materials produced. In this work, an ES system was fabricated to contain four regular electrodes, each controlled by an independent power supply, to create what we term multiplex ES. Use of sinusoidal control signals, amplified by high-voltage power supplies, enabled modifying the electrostatic field strength according to electrode voltage, allowing the user to precisely control fiber deposition and mesoscale structure. Using the electrode geometry and separation distances within the multiplex ES system, we were able to determine an analytic model for the electrostatic force acting on a fiber, shown in Eq. 6.
Device configuration, material properties, and applied voltage are important in determining the electrostatic force. COMSOL Multiphysics® software was also used as a visualization tool to show the electrostatic field governed by the multiplex ES system and showed that, when high-voltage inputs were supplied to specific electrodes, the corresponding electrode voltage in the model matched as predicted.

Less stochastic fiber deposition during multiplex ES enabled creation of a woven fiber mat. Use of sinusoidal inputs moved the high-voltage signal from one electrode to another, producing a woven polymer fabric. In another demonstration, without altering the electrodes, voltage inputs that move from one electrode to the next in a circular pattern produced torus structures. The outer diameter of the torus was found, through parametric analysis, to be rather insensitive to the frequency used during deposition, and the inner diameter was found to be inversely related to frequency, resulting in the overall thickness of the torus increasing with frequency.

The multiplex ES system exhibited a slight lag in response time as compared to the input signal used to guide fiber deposition. This lag was due to the limited capability of the high-voltage power supplies to dissipate charge. Because the response rate of the system is fundamental to controlling the location of nanofiber deposition, minimization of the electrical time response was investigated using resistors placed in parallel with the high-voltage power supplies and ground. When using resistors between 10 and 100 MΩ, the falling time constant was reduced by approximately 63% in all cases. At 120 MΩ resistance, however, the falling time constant began to increase, and the reduction in time constant was only 51%, due to the current rating of the high-voltage power supply. It is assumed that the minimum resistance of the system was reached between 100 and 120 MΩ, and therefore the 120 MΩ resistor no longer improved the falling time constant response.

Multiplex ES has been demonstrated and used to create woven fiber-based mats, whereby independent, simultaneous control of high-voltage power supplies and corresponding electrodes, together with improved response times, resulted in highly controlled fiber deposition and material morphologies.

Fig. 1 Graphical representation of the multiplex ES system. The multiplex ES system contains four electrodes (A-D) and a spinneret, each connected to an independently controlled power supply. Each voltage input to the system is controlled by a National Instruments DAQ and an algorithm implemented in LabVIEW. Independent control over each electrode enables deposition of complex structures.

Fig. 2 A Graphical representation of the electrode setup for the multiplex ES system showing the symmetrically spaced electrodes used. B Incorporation of the electrode shape shown in part A along with the separation distance of the needle to the deposition surface.

Fig. 3 COMSOL Multiphysics® model showing electrostatic field strength within the multiplex ES system. A The generated model shows four electrodes placed equidistant from each other. During acquisition of the model, the high-voltage signal was supplied to electrode C as shown. B The generated model shows the electrodes from A with respect to the ES spinneret. All electrodes were placed equidistant from the spinneret in the multiplex ES system.
Fig. 4 A Input signals generated in LabVIEW and fed through the DAQ. B Output signals were measured with an oscilloscope to be approximately 4000-fold that of the input signals. Variation between the signals is due to the response time of the high-voltage power supplies.

Fig. 6 A-C Images of electrospun fiber tori fabricated using multiplex ES. Electrospun tori were removed from the system and placed in a light box prior to acquisition of images, which were thresholded with ImageJ. D-F Corresponding images (top to bottom) showing the tori fiber mats after a threshold had been applied in ImageJ. Tori and images were collected in triplicate, and dimensions from these images were used to provide mathematical understanding of the fiber mat that would result when specific ES parameters were used.
Function of MYB8 in larch under PEG simulated drought stress

Larch, a prominent afforestation and timber species in northeastern China, faces growth limitations due to drought. To further investigate the mechanism of larch's drought resistance, we conducted full-length sequencing on embryonic callus subjected to PEG-simulated drought stress. The sequencing results revealed that the differentially expressed genes (DEGs) primarily played roles in cellular activities and cell components, with molecular functions such as binding, catalytic activity, and transport activity. Furthermore, the DEGs showed significant enrichment in pathways related to protein processing, starch and sucrose metabolism, pentose and glucuronate interconversion, phenylpropanoid biosynthesis, and flavonoid biosynthesis, as well as nitrogen metabolism and alanine, aspartic acid, and glutamic acid metabolism. Consequently, the transcription factor T_transcript_77027, which is involved in multiple pathways, was selected as a candidate gene for subsequent drought stress resistance tests. Under PEG-simulated drought stress, the LoMYB8 gene was induced and showed significantly upregulated expression compared to the control. Physiological indices demonstrated improved drought resistance in the transgenic plants. After 48 h of PEG stress, transcriptome sequencing of the transiently transformed LoMYB8 plants and control plants showed that genes were significantly enriched in biological process, cellular component, and molecular function categories. Functional analyses indicated enrichment of multiple KEGG pathways, including energy synthesis, metabolic pathways, antioxidant pathways, and other relevant processes. The pathways annotated by the differential metabolites mainly encompassed signal transduction, carbohydrate metabolism, amino acid metabolism, and flavonoid metabolism.

The Larix spp., a deciduous tree belonging to the pine family, is renowned for its impressive height, exceptional cold tolerance, and rapid growth rate. During its early stages of development, the larch exhibits vigorous growth, making it highly suitable for afforestation [1]. Moreover, it serves as a primary source of timber and plays an important role in afforestation efforts in northeast China [2]. Larch species possess large genomes and intricate genetic backgrounds. Regrettably, the absence of transcriptome sequencing studies and publications specifically for larch has hindered the exploration and application of drought resistance genes. Enhancing productivity, wood quality, and resilience to biological and abiotic stresses through tree genetic engineering has been a primary objective of the larch forestry biotechnology community for decades. Despite numerous challenges, significant progress has been made in tree biotechnology in recent years [3].
Drought is undeniably one of the most significant environmental issues faced globally. In arid regions, plants undergo a multitude of physiological and developmental changes throughout their growth stages. Unraveling the mechanisms that enable plants to maintain productivity in adverse conditions, particularly drought, and harnessing these mechanisms to enhance plant adaptability to environmental fluctuations remain paramount challenges in the realm of plant research [4]. When confronted with drought stress, plants employ various protective strategies to ensure their survival, including modifications in root and leaf morphology [6], adjustments in metabolite profiles [7], and regulation of drought resistance gene expression [8]. Drought conditions can significantly impact plant water potential and increase the vulnerability of the xylem to embolism. Research has indicated that larch, compared to other coniferous species, exhibits greater sensitivity to soil moisture and experiences slower growth under drought conditions. Additionally, the early stages of growth are particularly critical for larch, as water loss during this period can result in stunted growth or even mortality. While significant attention has been dedicated to exploring the response to drought stress in broad-leaved tree species and crops such as birch [9], poplar [10], soybean [11], and maize [12], studies on larch are notably scarce. Therefore, a comprehensive investigation into the drought resistance mechanisms of larch is needed. In this study, transcriptome sequencing was performed on PEG-treated plant material, and a MYB family gene with drought-resistance functions, T_transcript_77027, was identified. It was subsequently named LoMYB8.

The MYB family is one of the largest transcription factor families in plants and has the most members with the most diversified functions [13]. These transcription factors contain a specific MYB domain that can induce the expression of downstream genes. Many transcription factors containing an MYB domain in animals and plants have since been identified and isolated, resulting in the classification of these transcription factors into a new gene family. A number of studies have shown that MYB transcription factors are involved in the response of plants to drought stress, offering an avenue for the improvement of drought resistance in plants. Liao et al. identified 156 GmMYB genes in soybean, 43 of which were involved in the response to drought stress under abscisic acid (ABA) induction [14]. In Arabidopsis, AtMYB44/AtMYBR1, AtMYB60, AtMYB13, AtMYB15, and AtMYB96 control the degree of stomatal opening by regulating the accumulation of ABA to enhance the tolerance of plants to drought [15]. MYB transcription factors are also related to drought-stress responses in poplar [16], apple [17], and Jatropha curcas [18]. Several studies have shown that most of the MYB transcription factors involved in plant drought-stress responses are R2R3-MYB TFs with two R structures, which is the type of MYB that has the most members in plants [19]. MYB transcription factors play a role in the drought-resistance response of plants via various mechanisms that are mostly related to ABA [22] and light signalling [23].
Materials and methods

cDNA library construction and transcriptome sequencing
The embryogenic callus of hybrid larch was induced in the laboratory from immature zygotic embryos. Calli with good development and stable growth were carefully selected for further experiments. Subsequently, these callus samples were treated with 5% PEG6000 for different durations: 0 h (CK), 12 h (T1), 24 h (T2), and 48 h (T3). At each time point, three repetitions were performed to ensure sample reliability. After rapid freezing with liquid nitrogen, the samples were stored at − 80 °C and then transported to Beijing for sequencing analysis. Prior to constructing the cDNA library, all samples underwent quality testing. Once the quality assessment was completed, both Illumina and PacBio cDNA libraries were constructed using a magnetic bead enrichment method, followed by sequencing. The Illumina HiSeq 2500 system (Illumina, San Diego, CA, USA) was used for sequencing the cDNA library, and full-length transcriptome sequencing was performed on a PacBio instrument.

Screening, annotation and analysis of DEGs
In the analysis of differentially expressed transcripts, comprehensive annotation information for DEGs was obtained by comparison against various databases, including NR [59], Swissprot [60], GO [61], COG [62], KOG [63], Pfam [64], and KEGG [65].

Quantification and verification of gene expression levels
To validate the accuracy of the RNA-seq analysis, RT-qPCR analysis was performed on 17 DEGs selected from the predicted DEGs in response to drought stress. RNA was extracted from embryonic callus using the Universal Plant Total RNA Extraction Kit (BIOTEKE, Beijing, China), and cDNA was synthesized using the PrimeScript™ RT reagent Kit with gDNA Eraser (Perfect Real Time) (TaKaRa Biotech, Dalian, China). RT-qPCR primers were designed using Primer5 software (Table 1). Quantitative fluorescence analysis was performed using the TB Green® Premix Ex Taq™ II (Tli RNaseH Plus) kit (TaKaRa Biotech, Dalian, China). Each gene was analyzed with three replicates using an ABI7500 fluorescence quantitative PCR instrument. Data analysis was conducted using Microsoft Excel 2016, and the results were analyzed using the 2^−ΔΔCt method, with α-tubulin serving as the reference gene for normalization (NCBI accession number MF278617.1).

Transient genetic transformation of larch seedlings
Thirty- to forty-five-day-old larch seedlings lacking a fully expanded needle leaf were selected for transient genetic transformation. The seedlings were soaked in hypertonic solution for 10 min and then transferred to a container with a liquid suspension of bacteria (laboratory Agrobacterium strain GV3101), and the air pressure in the container was pumped down for 10 min. The container was then placed on a shaker at a constant temperature of 26 °C and 120 rpm for 4 h. The infected seedlings were rinsed three times with sterilized water, and the water remaining on the seedlings was removed using sterilized filter paper. The seedlings were then cultured in sterilized soil mix and covered with a plastic membrane to retain moisture. After 48 h, the seedlings were removed from the soil mix and rinsed with sterilized water, with the remaining water on the seedlings removed using sterilized filter paper.
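As a side note to the RT-qPCR paragraph above, the 2^−ΔΔCt (Livak) calculation can be sketched as follows. The Ct values are hypothetical placeholders, with α-tubulin assumed as the reference gene as stated in the text.

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Livak 2^-ddCt method: normalize the target gene to the reference gene,
    then normalize the treated sample to the control sample."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

if __name__ == "__main__":
    # Hypothetical Ct values for one DEG versus the alpha-tubulin reference.
    fold_change = relative_expression(ct_target_treated=24.1, ct_ref_treated=20.3,
                                      ct_target_control=26.7, ct_ref_control=20.5)
    print(f"relative expression (treated vs. control): {fold_change:.2f}-fold")
```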
The 1 mol/L mannitol hypertonic solution was prepared as follows: 182.17 g of mannitol powder was weighed and completely dissolved in 1 L of deionized water with stirring, and the solution was then kept at room temperature for immediate use. The infection solution for larch transformation contained sucrose 3%, KT 1.5 mg/L, 2,4-D 5.0 mg/L, CaCl2 10 mmol/L, MgCl2 10 mmol/L, coniferyl alcohol 100 μmol/L, mannitol 400 mmol/L, DTT 0.2 g, Tween 0.05% (v/v), and MES 10 mmol/L at pH 5.6.

Differential gene expression and metabolite analysis
The analysis of differential gene expression is described above. Sequencing libraries were generated using the NEBNext® Ultra™ RNA Library Prep Kit for Illumina® (NEB, USA) following the manufacturer's recommendations, and index codes were added to attribute sequences to each sample (CK-T48 and OE-T48).

Samples were thawed on ice at 4 °C. Subsequently, 100 μL of each sample was transferred to an EP tube and extracted using 300 μL of methanol. Following this, 20 μL of internal standard substances was added. The samples were vortexed for 30 s, sonicated for 10 min (while incubating in ice water), and then incubated for 1 h at − 20 °C to precipitate proteins. Afterward, the samples were centrifuged at 13,000 rpm for 15 min at 4 °C. The supernatant (200 μL) was carefully transferred to a fresh 2 mL LC/MS glass vial. Additionally, 20 μL of each sample was pooled to create quality control (QC) samples. A 200 μL aliquot of the supernatant was designated for UHPLC-QTOF-MS analysis. The specific analysis method followed that of Tian [27]. LC-MS/MS analyses were conducted using a UHPLC system (model 1290, Agilent Technologies). The MS raw data (.d) files were converted to the mzXML format using ProteoWizard and processed using the R package XCMS (https://bioconductor.org/packages/release/bioc/html/xcms.html).

Ethical approval
Research and field studies on plants (either cultivated or wild), including the collection of plant material, were carried out in accordance with relevant institutional, national, and international guidelines and legislation.

Screening, functional annotation and enrichment of DEGs
The transcriptome of embryogenic callus under drought stress was sequenced to identify DEGs. According to the screening criteria (Fold Change ≥ 1.50 and FDR < 0.05), a total of 1,654 DEGs were identified (Fig. 1).
In the calli subjected to the longest stress treatment (T3, 48 h), the highest numbers of both upregulated and downregulated DEGs were detected. These findings suggest that the embryonic callus initiated the regulation of DEG expression, thereby responding to drought stress, following a 48-h simulation of drought stress using PEG. Functional annotations of the DEGs were made using the databases (Table 3). Out of a total of 1,580 DEGs, the majority (99.8%) were successfully annotated using the NR database, indicating high accuracy and coverage of the annotation process. Figure 2 illustrates that the DEGs from the six comparisons in Table 3 are predominantly involved in metabolic, cellular, and biological processes. Specifically, these DEGs are associated with cellular components such as cells, cell parts, organelles, and membranes, while performing molecular functions such as binding, catalytic activity, and transporter activity. Based on these findings, it is reasonable to speculate that, in larch trees, the reception of drought signals from the external environment triggers a series of signal transduction cascades. These cascades subsequently activate transcription factors (TFs) with the relevant functions, promoting the synthesis of metabolites that enable the tree to respond and cope with drought conditions.

In organisms, different gene products collaborate with one another to carry out biological functions. Pathway enrichment analysis of DEGs aids in determining whether these genes are over-represented in specific pathways. In the six comparison groups, KEGG enrichment analysis was performed on the DEGs, resulting in Fig. 3, which displays the top 20 pathways with the lowest significant q-values. In the CKvsT1 comparison, DEGs were notably enriched in protein processing pathways, as well as several others. In CKvsT2, the enrichment significance of the DEGs was comparatively low, with a higher enrichment observed in the RNA degradation pathway. On the other hand, in CKvsT3, DEGs were significantly enriched in pathways related to starch and sucrose metabolism, pentose and glucuronate interconversion, and phenylpropanoid biosynthesis. Regarding the T1vsT2 comparison, DEGs exhibited significant enrichment in nitrogen metabolism, as well as the metabolism of alanine, aspartic acid, and glutamic acid. For T1vsT3, DEGs showed significant enrichment in the phenylpropanoid biosynthesis, phenylalanine metabolism, and flavonoid biosynthesis pathways. Lastly, in the T2vsT3 comparison, DEGs were significantly enriched in ABC transporters. These pathways provide valuable insights into the metabolic information of embryogenic callus in hybrid larch under drought stress. Furthermore, they contribute to a better understanding of the potential regulatory mechanisms associated with drought resistance. Based on the above analysis, DEGs that were significantly enriched in both the GO analysis and the KEGG pathways were considered, and the gene T_transcript_77027 was selected from among them for verification.

Quantification and verification of DEGs expression
To validate the accuracy of the RNA-seq analysis of hybrid larch under drought stress, we selected 17 DEGs from the RNA-seq data.
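Looking back at the screening criteria applied earlier in this section (Fold Change ≥ 1.50 and FDR < 0.05), a minimal sketch of that filtering step is shown below before the validation results are described. The record format and the example values are hypothetical, not the study's data.

```python
from dataclasses import dataclass

@dataclass
class GeneResult:
    gene_id: str
    fold_change: float   # treatment / control expression ratio
    fdr: float           # false-discovery-rate adjusted p-value

def screen_degs(results, fc_cutoff=1.50, fdr_cutoff=0.05):
    """Keep genes whose change passes the fold-change cutoff in either
    direction and whose FDR is below the significance threshold."""
    degs = []
    for r in results:
        passes_fc = r.fold_change >= fc_cutoff or r.fold_change <= 1.0 / fc_cutoff
        if passes_fc and r.fdr < fdr_cutoff:
            direction = "up" if r.fold_change >= fc_cutoff else "down"
            degs.append((r.gene_id, direction))
    return degs

if __name__ == "__main__":
    # Fold changes and FDR values below are illustrative only.
    example = [GeneResult("T_transcript_77027", 2.8, 0.001),
               GeneResult("T_transcript_00001", 1.2, 0.300),
               GeneResult("T_transcript_00002", 0.4, 0.010)]
    print(screen_degs(example))
```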
Figure 4 demonstrates a high level of agreement between the RT-qPCR results and the RNA-seq data for these 17 DEGs. While there were some variations in the expression levels, the overall expression patterns remained consistent with the RNA-seq results, indicating the reliability and authenticity of the RNA-seq findings.

The results showed that many MYB genes were identified in the transcriptome sequencing. In the GO analysis, the MYB genes were mainly enriched in transporter-related molecular function terms, while in the KEGG analysis they were mainly enriched in the sucrose metabolism, phenylpropanoid biosynthesis, glutamate metabolism, and flavonoid biosynthesis pathways. Meanwhile, in previous studies [28], the MYB gene was also found to be expressed in the secondary xylem of stems and roots. Because the MYB gene family plays a role in the development process and defense responses of plants, the MYB8 gene was screened from the transcriptome, and subsequent tests were conducted to verify whether this gene confers drought resistance.

Gene expression of transiently transformed seedlings
The expression level of the LoMYB8 gene in transiently transformed larch seedlings was 6.55 times higher than that in the control plants, which confirmed that the transformation system for larch effectively resulted in the overexpression of the LoMYB8 gene (Fig. 5).

The expression level of the LoMYB8 gene in the transiently transformed plants under PEG-simulated drought stress was significantly higher than that in the unstressed transiently transformed plants at the same time points (Fig. 6). The expression level of the LoMYB8 gene increased significantly in the transiently transformed plants treated with PEG for 24 h and reached the same expression level as that in the untreated transiently transformed plants at 48 h. Following treatment with PEG for 48 h, the expression level of the LoMYB8 gene appeared continuously upregulated.

Biochemical indicators in transiently transformed seedlings under PEG stress
The soluble sugar contents of the plants that had undergone different treatments are shown in Fig. 7. At 0 h, the soluble sugar content of the transiently transformed plants was slightly higher than that of the control plants. After 24 h of PEG treatment, the soluble sugar content of both the treated plants and control plants increased, and the soluble sugar content of the transiently transformed plants was approximately 1.14 times higher than that of the control plants, but the difference was nonsignificant. After 48 h of PEG treatment, the soluble sugar content of the transiently transformed plants was 1.5 times higher than that of the control plants, and the difference was significant. With the increase in treatment duration, the soluble sugar content of the control plants first increased and then decreased, while the soluble sugar content of the transiently transformed plants continued to increase. The soluble protein contents in the plants that had undergone different PEG treatments are indicated in Fig. 8. At 0 h, the soluble protein content in the transiently transformed plants was slightly higher than that in the control plants. After 24 h of stress treatment, the soluble protein content in both the control plants and transiently transformed plants increased. The soluble protein content in the transiently transformed plants was 1.5 times higher than that at 0 h of treatment, and the difference was significant. After 48 h of stress, the transiently transformed plants showed a significantly higher soluble protein content than those that had been stressed for 24 h, and the soluble protein content in the transiently transformed plants was significantly higher than that in the control plants.
The MDA contents of the plants sampled from the different PEG treatments are shown in Fig. 9. At 0 h, the difference in MDA content between the transiently transformed plants and control plants was nonsignificant. At 24 h, both the transiently transformed plants and control plants showed an increased MDA content, although the MDA content in the transiently transformed plants was lower than that in the control plants. At 48 h, the MDA content in the transiently transformed plants was significantly lower than that in the control plants.

The POD activity in the plants under different treatments is shown in Fig. 10. At 0 h, the POD activity in all the plants was basically the same. At 24 h, the POD activity in both the transiently transformed and control plants increased, and the POD activity in the transiently transformed plants was significantly higher than that in the control plants. At 48 h, the POD activity in both the transiently transformed and control plants increased, but the increase in POD activity in the control plants was not obvious. The increase in POD activity in the transiently transformed plants was higher than that in the control plants.

The SOD activity in the plants under different treatments is shown in Fig. 11. At 0 h, the activity of SOD enzymes in all the plants was basically the same. At 24 h, the SOD activity in the control plants slightly increased, and the increase in SOD activity in the transiently transformed plants was higher than that in the control plants. At 48 h, the increase in SOD enzyme activity in the transiently transformed plants was higher than that in the control plants.

When plants are subjected to drought stress, they generally reduce the cellular osmotic potential to prevent excessive cellular water loss by reducing the intracellular water content, shrinking the size of cells, and increasing the content of soluble substances in the cells, so as to maintain the normal life activities of the plant [66]. The physiological and biochemical indexes of the samples showed that, after different durations of stress, the soluble sugar and soluble protein contents of Changbai larch transformed with the LoMYB8 gene increased relatively markedly compared with CK, while the MDA content, SOD enzyme activity, and POD enzyme activity of the transgenic plants transformed with LoMYB8 showed an upward trend with the prolongation of the stress time.

Differentially expressed genes under PEG treatment of MYB transgenic plants
A total of 1740 differentially expressed genes were found in the transiently transformed plants at 48 h of PEG treatment, of which 238 genes were upregulated and 1502 genes were downregulated compared with those in the control plants. The differentially expressed genes are shown in a volcano plot (Fig. 12). There were more downregulated genes than upregulated genes in the transiently transformed plants.
As shown in Fig. 13, the GO enrichment classification results showed that 1227 differentially expressed genes in the MYB8-T48 vs. CK-T48 comparison obtained GO annotations. The figure shows that MYB8-T48 is enriched in 16, 13, and 17 functional categories in the cellular component, molecular function, and biological process domains, respectively, for 46 categories in total. More differential genes accumulated in the cell and cell part categories, accounting for 64.79% and 64.47% of the total number, respectively. In the molecular function domain, the two most represented categories accounted for 53.30% and 48.41% of the differentially expressed genes, respectively. In the biological process domain, more differentially expressed genes were enriched in metabolic process and cellular process, accounting for 66.01% and 64.30% of the total number, respectively.

Pathway enrichment analysis, which assesses whether differentially expressed genes are over-represented in certain pathways, was used to determine the metabolic and signalling pathways in which the differentially expressed genes in the transiently transformed and control plants subjected to PEG-simulated drought stress were involved [29]. A total of 814 differentially expressed genes were found through the comparison of MYB8-48 h and CK-48 h plants and were annotated using the KEGG database, which indicated that they were involved in 98 metabolic pathways. More than 16 differentially expressed genes were annotated to each of 23 KEGG pathways (Table 4). A total of 168 differentially expressed genes were annotated to ribosomal pathways, accounting for 20.64% of the total number of differentially expressed genes successfully annotated using KEGG. A total of 38 and 33 differentially expressed genes were annotated to carbon metabolism and amino acid biosynthetic pathways, respectively. The top 23 metabolic pathways to which the differentially expressed genes were annotated were mostly related to the synthesis and metabolism of carbohydrates, amino acids, and flavonoids, and some of them were related to pathways for the synthesis and metabolism of substances involved in signal transduction, photosynthesis, respiration, and oxidation. In addition, 9 of the pathways showed significant differences with a corrected P-value of less than 0.05, most of which were related to the metabolism of carbohydrates and amino acids. Therefore, the KEGG metabolic pathway enrichment analysis showed that the differences in the differentially expressed genes were mostly related to energy synthesis and metabolism and to antioxidant pathways.

Analysis of differential metabolites
Under PEG-simulated stress, the levels of many metabolites in the transiently transformed and control plants differed. A total of 460 metabolites were differentially regulated between the transiently transformed and control plants (Fig. 14). Many metabolites were upregulated in the transiently transformed plants compared with the control plants, and many metabolites showed no significant difference between the two groups. Of the 460 differentially regulated metabolites detected, 80.87% were upregulated. The top 10 upregulated and downregulated metabolites with the largest fold differences are shown in Fig. 13.
Among the top 10 upregulated metabolites (Fig. 15), meta_710 and meta_478 were annotated as benzyl butyl phthalate and amino acids, respectively, and the rest were unknown metabolites. Among the top 10 downregulated metabolites, only meta_51, meta_269, and meta_62 were successfully annotated, as mevalonolactone, dacarbazine, and 2-methoxybenzoic acid, respectively.

To explore the function of the differential metabolites detected in response to drought stress, KEGG pathway enrichment analysis was performed. Among these, 29 differentially regulated metabolites were successfully annotated via KEGG (Table 5), and the metabolites were annotated to 34 metabolic pathways. Over 10% of the annotated metabolites were involved in 12 metabolic pathways, including metabolic pathways, secondary metabolite biosynthesis, glycerophospholipid metabolism, tyrosine metabolism, ABC transporters, phenylpropanoid biosynthesis, carbon metabolism, isoquinoline alkaloid biosynthesis, vitamin B6 metabolism, amino acid biosynthesis, phenylalanine metabolism, and the biosynthesis of phenylalanine, tyrosine, and tryptophan. One differentially regulated metabolite was annotated to each of the following pathways: flavone and flavonol biosynthesis, the pentose phosphate pathway, α-linolenic acid metabolism, arachidonic acid metabolism, and fatty acid biosynthesis.

Discussion
LoMYB8 is an R2R3-MYB transcription factor with two R structures. R2R3-MYB transcription factors play an important role in controlling plant growth processes, including primary and secondary metabolism, cell growth and development, and responses to abiotic and biotic stresses [30]. Some MYB genes play a role in the drought response by regulating lateral root growth. In Arabidopsis, the AtMYB60 and AtMYB96 genes are involved in the regulation of lateral root growth. Auxin induces the expression of AtMYB60 in the roots, and the overexpression of this gene in Arabidopsis plants growing on MS medium containing mannitol resulted in greater root mass [31]. Huang found a gene, NbPHAN, that controls leaf development and drought tolerance in Nicotiana benthamiana [32]. NbPHAN belongs to the AS1-RS2-PHAN (ARP) protein complex in the R2R3-type MYB subfamily [30].
The NbPHAN gene in the newly emerged young leaves of N. benthamiana plants was silenced by means of virus-induced gene silencing (VIGS), which resulted in a change in leaf shape and abnormal growth of the blades along the main veins, while the other organs of the plants remained normal. These plants showed a weakened tolerance to drought stress and increased water loss, but their stomatal density was unchanged. Silencing of the NbPHAN gene lowered the expression of stress-related genes that are usually expressed at a high level under water-deficit conditions, such as genes involved in polyamine biosynthesis and reactive oxygen detoxification. Additionally, under water-deficit conditions, compared with that in nonsilenced plants, the expression level of NbDREB, but not NbAREB, decreased in the silenced plants, indicating that NbPHAN plays a role in the response to drought stress through an ABA-independent mechanism [32]. Plant hormones are physiological agents that are highly responsive to drought stress. To ensure normal metabolism, growth, and development, a variety of hormones work coordinately to regulate physiological responses and gene expression in plants through changes in their concentrations [33]. Research has indicated that drought stress regulates the expression of numerous genes in plants, with a significant portion responding to ABA [35]. In this study, we observed that DEGs in the ABA and auxin signaling pathways were the most prevalent, which aligns with findings from a study that simulated drought stress in potatoes using PEG [36]. Within the ABA pathway, PP2C genes were downregulated after drought stress. Research conducted on Populus euphratica has demonstrated that PP2C plays a negative regulatory role in the ABA signaling pathway, and its overexpression reduces plant tolerance [37], which is consistent with the findings of this study. It is worth noting that the expression of ABA response factor genes was both upregulated and downregulated, indicating potential differences in the expression patterns of different genes under drought stress. Additionally, an indole-3-acetate amide synthase gene was downregulated after drought stress, which is consistent with the results of PEG-simulated drought stress in potatoes [37]. The stability and activity of auxin response factors are regulated by auxin itself. The indole-3-acetate amide synthase gene is an early auxin-responsive gene that plays a crucial role in plant growth and development [38].
Drought stress usually causes the accumulation of soluble sugars, which play a role in signal transduction and osmotic regulation [39]. The results of our study showed that, under PEG-simulated drought stress, the soluble sugar content gradually increased with increasing stress duration. Under normal growth conditions, the concentration of ROS in plants is very low. Drought stress increases the ROS content in plants, causing oxidative damage [40]. Through the enzymatic protection system, plants use POD to remove active oxygen, thereby protecting the membrane system from damage [41]. Under drought conditions, the permeability of the cell membrane changes, leading to an increase in relative conductivity and peroxidation of membrane lipids, which results in the production of MDA. The increase in MDA content causes damage to cells [42]. Therefore, the MDA content can be used as an indicator of the degree of damage to plants under drought stress, which indirectly reflects the drought resistance of plants; that is, the higher the MDA content, the greater the damage caused in plants, and the lower the drought resistance [43]. This suggested, to a certain extent, that LoMYB8 might slow the increase in MDA accumulation in the transiently transformed plants, regulate the response of plants to drought stress, and enhance the short-term drought resistance of plants.

Under drought stress, plants experience an excessive accumulation of ROS, leading to oxidative damage and inhibition of photosynthesis [44]. POD, an important oxidoreductase, plays a regulatory role in plants by catalyzing redox processes and maintaining the balance of H2O2 [45]. Previous studies have shown that enhanced POD activity can increase plants' resistance to oxidative stress and drought [46]. Similarly, Nikoleta-Kleio demonstrated that applying kaolin clay particles and other substances to young olive trees could elevate POD activity [47], effectively alleviating water-deficiency-induced stress. Furthermore, GST functions as an active oxygen scavenger, and research by George revealed that the PjGSTU1 protein exhibits glutathione transferase activity [48]. Their findings indicated that PjGSTU1 transgenic tobacco plants exhibited higher survival rates than the control group under drought stress, suggesting its potential role in scavenging reactive oxygen species. In our experiment, the physiological and biochemical indicators in the transiently transformed plants carrying the LoMYB8 gene and in the control plants of Larix spp. before and after drought stress were determined. The results showed that the soluble sugar content, soluble protein content, MDA content, SOD activity, and POD activity increased in all the plants, which reflects a universal change in plants under drought stress. This conclusion is consistent with those drawn by other researchers. For example, Cui found that the contents of soluble sugar, soluble protein, and MDA in rice and Arabidopsis plants under drought stress were higher than those in unstressed control plants [49]. Wang reported that the activities of SOD and POD increased in maize under drought stress [50]. In our experiment, under drought stress, the soluble sugar content, soluble protein content, SOD activity, and POD activity in the transiently transformed plants overexpressing the LoMYB8 gene were higher than those in the control plants, while the increase in MDA in the transiently transformed plants was less than that in the control plants, indicating that the transiently transformed Larix spp. plants overexpressing the LoMYB8 gene had stronger drought resistance than the control plants.
The changes in the physiological and biochemical indexes of the transiently transformed plants under drought stress in this study were consistent with those in previous studies [51].

In the transiently transformed Larix spp. plants overexpressing the LoMYB8 gene, differentially expressed genes and differentially regulated metabolites were found. These genes and metabolites were annotated to various pathways related to energy synthesis and metabolism, signal transduction, and the synthesis and metabolism of flavonoids. These pathways include glycolysis [52], gluconeogenesis [53], pyruvate metabolism [54], the pentose phosphate pathway [55], phenylpropanoid biosynthesis [56], flavonoid biosynthesis [57], and flavone and flavonol biosynthesis [58]. These metabolites were shown to be involved in the drought resistance response of plants after drought stress.

Figure 13. Differentially expressed gene GO analysis results.

Figure 14. Volcano plot of differential metabolites. Note: Each dot in the figure represents a metabolite; red represents upregulated differential metabolites, green represents downregulated differential metabolites, and black represents metabolites with no significant difference. The abscissa represents the fold change of the metabolites, and the ordinate represents the base-10 logarithm of the p-value. The size of the dot represents the VIP value and the reliability of the metabolite; the larger the VIP value, the more reliable it is.

Table 1. Primers used in real-time RT-PCR.

Table 4. Annotated pathways with more than 16 differentially expressed genes.

Table 5. Number of KEGG annotations for differential metabolites.
Bartter Syndrome: A Systematic Review of Case Reports and Case Series

Background and Objectives: Bartter syndrome (BS) is a rare group of autosomal-recessive disorders that usually presents with hypokalemic metabolic alkalosis, occasionally with hyponatremia and hypochloremia. The clinical presentation of BS is heterogeneous, with a wide variety of genetic variants. The aim of this systematic review was to examine the available literature and provide an overview of the case reports and case series on BS. Materials and Methods: Case reports/series published from April 2012 to April 2022 were searched through Pubmed, JSTOR, Cochrane, ScienceDirect, and DOAJ. Subsequently, the information was extracted in order to characterize the clinical presentation, laboratory results, treatment options, and follow-up of the patients with BS. Results: Overall, 118 patients, 48 case reports, and 9 case series (n = 70) were identified. Of these patients, the majority were male (n = 68). A total of 21 patients were born from consanguineous marriages. Most cases were reported from Asia (73.72%) and Europe (15.25%). In total, 100 BS patients displayed the genetic variants, with most of these being reported as Type III (n = 59), followed by Type II (n = 19), Type I (n = 14), Type IV (n = 7), and only 1 as Type V. The most common symptoms included polyuria, polydipsia, vomiting, and dehydration. Some of the commonly used treatments were indomethacin, potassium chloride supplements, and spironolactone. The length of the follow-up time varied from 1 month to 14 years. Conclusions: Our systematic review was able to summarize the clinical characteristics, presentation, and treatment plans of BS patients. The findings from this review can be effectively applied in the diagnosis and patient management of individuals with BS, rendering it a valuable resource for nephrologists in their routine clinical practice.

Introduction
Bartter syndrome (BS) is a rare group of autosomal-recessive salt-losing tubulopathies characterized by impaired transport mechanisms in the thick ascending limb of the loop of Henle (TAL), resulting in pronounced salt wasting. It was first reported in 1962 by Frederic C. Bartter as a novel syndrome [1], marked by hypokalemic metabolic alkalosis with hyperreninemic hyperaldosteronism in a normotensive patient [2].

BS is classified into five types, based on distinct genotypic and phenotypic manifestations. Although all of the types involve defective salt reabsorption along the TAL, the phenotypes often overlap, with molecular patterns associated with specific genes [2].
In Type I BS, the symptoms typically appear at birth, characterized by severe salt wasting, hyposthenuria, elevated PGE2 production, and failure to thrive.Some symptoms may arise in utero, leading to polyhydramnios and premature birth.It is considered to be the most common form, often caused by mutations in the SLC12A1 gene, affecting the NKCC2 cotransporter in TAL [2][3][4].Type II BS is a subtype that is also known as antenatal Bartter syndrome, which is primarily linked to mutations in the KCNJ1 gene, affecting the ROMK channel.It presents prenatally or shortly after birth with polyhydramnios, premature delivery, and severe dehydration [4,5].Type III BS results from CLCNKB gene mutations, impacting the chloride channel ClC-Kb in the kidneys' distal tubules.It exhibits milder symptoms than the classic form, often appearing in childhood or adolescence [2].Type IV is sub-grouped into two types: Type IVa and Type IVb.BSND gene mutation causes Type IVa BS, leading to defective barttin insertion in the CLC-Kb and CLC-Ka channels within the kidneys' loop of Henle and the inner ear, disrupting salt transport.Conversely, Type IVb involves mutations in both the CLCNKA and the CLCNKB genes, resulting in impaired functioning of two chloride channels, severe salt wasting, and deafness.Both BSND and CLCNKA/CLCNKB mutations are associated with polyhydramnios, preterm delivery, and impaired urinary concentration [2].Type V is a newly discovered one, with a usual X-linked recessive inheritance pattern, contrary to the other types, which are autosomal-recessive.Here, a CASR gene mutation leads to hypercalciuria, in addition to the main underlying symptoms seen in BS patients. Salt supplementation, NSAIDs, and aldosterone antagonists are considered viable options for treating BS [2,6].Prenatally, amniocentesis and/or indomethacin therapy have been reported to be effective.Due to the relatively new discovery and rarity of this disease, the treatment options are very much limited, with no curative options available, thus rendering the management of these patients entirely symptomatic.Adding to this, the literature covering the clinical, epidemiological, and therapeutic interventions for this syndrome is very limited.Therefore, we aim to conduct a systematic review of the available case reports and case series reporting BS. Materials and Methods This review has been reported in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement, as indicated in the PRISMA checklist [7], and registered with PROSPERO (IDCRD42022351227; www.crd.york.ac.uk/ prospero accessed on 15 June 2023). Search and Selection An electronic search of five bibliographic databases, including Pubmed, Cochrane, DOAJ, Science Direct, and JSTOR, was conducted for case reports/series regarding BS published in English between April 2012 and 2022, i.e., in the last 10 years.Using a combination of keywords and medical subject headings (MeSH), we used vocabulary related to "Bartter" OR "Bartter syndrome" OR Bartter* OR "Sodium-Potassium-Chloride Symporters" AND "Tubulopathy". Synthesis of Results Descriptive statistics were used to calculate simple frequency, percentage, and proportion from the extracted data, reporting continuous data points as median (IQR) or mean (+/− SD), categorical variables as percentages, and outcomes as a number and percentage. Assessment of Risk of Bias Two authors (R.Q. and A.F.) 
independently performed the quality assessment of the included studies using the Joanna Briggs Institute (JBI) critical appraisal checklist for case reports and series [9]. Any discrepancies were resolved through discussion.

Study Selection
In our systematic review, 2664 records were initially identified from the search strategy, of which 1856 remained after removing 808 duplicate articles. A total of 1671 records were excluded after screening the title and the abstract, giving us a total of 185 records for the full-text screening. Among these, 11 records could not be retrieved, and 117 failed to meet the inclusion criteria and were, therefore, excluded. Finally, a total of 57 articles were included in the systematic review, among which 48 were identified as case reports and 9 as case series (Figure 1).

Patient Characteristics
Overall, we identified a total of 48 case reports and 9 case series, amounting to 118 patients in total. Males were seen to be predominantly affected by BS, as 68 participants were male and 50 were female. The age at diagnosis varied from 22.6 gestational weeks to 59 years, and the majority of patients were less than 5 years of age (66.1%). Out of the 100 patients assessed for the genetic type of BS, most were reported to have Type III BS (n = 59), followed by Type II (n = 19), Type I (n = 14), Type IV (n = 7), and only 1 case was classified as Type V BS (Table 1). Genetic-based testing to confirm the diagnosis was reported in 31.4% of the patients. Furthermore, consanguinity was observed in about 20% (21/118) of our cases.

Most of the BS patients were followed up for between 1 month and 14 years, with an average follow-up time of 3.35 years. The average birth weight was calculated to be 2.17 ± 0.81 kg, with the lowest birth weight, reported by Azzi et al. [58], being 0.84 kg (Table 2), and the highest being 3.68 kg, reported by Adachi et al. [11] (Table 2). The distribution curve of the measured weights among the reported BS cases is shown in Figure 2a.
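As an aside on how the continuous variables above (e.g., birth weight and follow-up time) can be summarized as mean ± SD or median (IQR), a small sketch follows. The list of weights contains placeholder values for illustration, not the extracted patient data.

```python
import statistics

def summarize(values):
    """Return mean +/- SD and median (IQR) for a list of continuous measurements."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    median = statistics.median(values)
    return {"mean": round(mean, 2), "sd": round(sd, 2),
            "median": round(median, 2), "iqr": (round(q1, 2), round(q3, 2))}

if __name__ == "__main__":
    birth_weights_kg = [0.84, 1.9, 2.2, 2.5, 2.8, 3.1, 3.68]  # placeholder values
    print(summarize(birth_weights_kg))
```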
Epidemiology/Case Distribution The geographical spread of BS cases is shown in Figure 2b.Most of the cases were reported in Asia (n = 85), followed by Europe (n = 23), North America (n = 9), and South America (n = 1).No cases were reported from Africa, Australia, or Antarctica.The maximum number of cases was seen in the year 2020 (n = Furthermore, 62.7% presented with gastrointestinal symptoms such as vomiting, diarrhea, and constipation.About 15.2% presented with neurological deficits (seizures, hypotonia, hypertonia, and carpopedal spasm), and respiratory distress was found in 7.6% of the patients. Additionally, 3.4% of the cases were associated with developmental anomalies such as macrocephaly and peculiar facies (e.g., triangular-shaped face, high forehead, asymmetric eyelids, retrognathism, or low-set prominent ears), with 13.5% reporting a premature birth.Furthermore, 33.05% were reportedly affected by polyhydramnios during the antenatal period. Management The treatment options for each case are described in Supplementary Table S1.Most of the cases (74/118) were treated with indomethacin, along with fluids and electrolyte therapy (86/118), which were given intravenously, orally, or as a change in diet plan.Alternatively, a dual therapy with indomethacin and spironolactone was given to 55 patients (46.6%).A total of six patients (5%) received indomethacin therapy via amniocentesis for prenatal management.Furthermore, antiemetics/anticonvulsants/calcimimetics were added as required by the patients. Quality Assessment The Supplementary Materials contain figures for the quality assessment of our included studies.For the case reports (Supplementary Figure S1), most of the included studies described the patient characteristics clearly, including the clinical condition on presentation, except for Sobash et al. [47], Verma et al. [50], and Yaqub et al. [55].The assessment methods and the results were clearly described in all of the reports, except for Alasfour et al. [15] and Raza et al. [45].Most of the reports gave clear descriptions of the intervention or treatment procedures, along with the post-intervention clinical condition. For the included case series (Supplementary Figure S2), the quality assessment of the included studies identified two studies [59,61] as high quality, six studies [58,[62][63][64][65][66] as medium quality, and one study [60] as low quality.Only five [59,61,62,64,66] studies gave clear criteria for inclusion in the case series.The condition was not measured in a standard, reliable way for the included participants in two studies [60,63].All of the studies used a valid method for the identification of the condition.There were only two [58,66] studies in which the reporting of the demographics of the participants was not clearly defined, and only one [66] study in which the reporting of the clinical information of the participants was not clear.The outcomes and follow-up results were clearly presented in only four studies [59,61,63,65]. 
Discussion
This systematic review aimed to analyze and highlight the various clinical manifestations that a BS patient can display to a clinician. To our knowledge, no previous systematic reviews have been conducted on BS; therefore, ours is the first study on the topic. Most of the patients were diagnosed during the first decade of their lives. The majority of the patients were male and exhibited a positive response to the treatment. Consanguinity was seen in a minority of cases. We found that polyuria was the most reported presentation, while fever was the least common. Most of the patients suffering from BS were found to be male, and most were found to be suffering from Type III BS. We also noted that the majority of patients were diagnosed clinically, and not by genetic-based testing. A near-normal distribution of birth weights was seen across all of the cases. Most of the cases came from China, and the maximum number of cases reported in a year was in 2020. Almost one third of the patients were found to have polyhydramnios, and a tenth were found to be born prematurely. We were also able to identify four cases in which an initial rare hyperkalemic presentation was noted (Table 2). Hypokalemia and metabolic acidosis were reported to be the most common lab findings. The treatment options were mainly limited to indomethacin and spironolactone, with added supplements as per the requirement of the patient. The results of the study can be utilized in diagnosis and patient management.

The successful management of BS hinges upon early identification, coupled with the expertise of the attending physician. Our systematic review has revealed that only 31% of patients were diagnosed through genetic testing. Considering the high analytical sensitivity of over 90% and clinical sensitivity of approximately 75% in children [2,67,68] and 12.5% in adults, as reported by a recent consensus of experts from Europe [69], genetic testing remains an underutilized resource, likely contributing to the significant frequency of delayed BS detection worldwide. Furthermore, due to the potential overlapping of biochemical markers and clinical symptoms with Gitelman's syndrome (GS), genetic analysis assumes a crucial significance in ensuring an accurate diagnosis in cases of BS. Even though the overlap between the two might be clinically challenging to differentiate, studies have pointed out some significant differences. BS is notably associated with a more pronounced failure to thrive and growth retardation, compared to GS. Furthermore, individuals with BS often present with hypercalciuria, predisposing them to nephrocalcinosis and nephrolithiasis [70]. It is worth noting that, while the thiazide test has been proposed as a useful diagnostic tool for differentiation, its appropriateness for children under the age of seven remains a subject of debate, due to concerns regarding potential volume depletion [71].

There were three occasions of death (Table 2) reported in our review. A patient in a case by Afzal et al. [12] died due to a sudden cardiopulmonary arrest, leading to instant death, highlighting the unpredictable nature of the CVS complications in BS, where sudden cardiac events can occur even with prompt medical interventions. This case underscores the need for heightened vigilance in managing electrolyte imbalances in order to prevent such catastrophic events. In another case reported by Akuma et al. [14], the constant deterioration of lung function led to the patient's demise. In a case by Soumya et al.
[48], the patient succumbed to aspiration pneumonia, leading to his eventual death.

Although rare, BS can also present in adults. We found 10 such cases where patients presented for the first time in adulthood (Table 2), usually with nephrocalcinosis, fatigue, periodic mild paralysis, muscle cramps, and other unusual blood chemistry, as seen in BS. Interestingly, Özdemir et al. [40] discussed the possible etiological relationship between adult BS and mania-like symptoms, where electrolyte disturbances, such as hypokalemia, hyponatremia, and metabolic alkalosis, have been suggested as the possible causes, while others have suggested that mood swings are linked to these imbalances [72,73]. Hatta et al. [74] included some patients presenting with acute psychotic episodes of schizophrenia with hypokalemia.

Even though BS characteristically presents with hypokalemia, certain reports have discussed otherwise. We identified six such case reports in which an initial hyperkalemic presentation was seen. Mani et al. [35] suggest that an early postnatal transient hyperkalemia with a history of prematurity, polyuria, and polyhydramnios should raise suspicion for antenatal BS due to a KCNJ1 mutation. Adding to that, a report by Akuma et al. [14] suggests that physicians must be aware of the Type II subtype of neonatal BS, which presents with early transient hyperkalemia, ultimately preventing a misdiagnosis such as pseudohypoaldosteronism.

Under stressful situations, such as surgical procedures, body fluid levels may change rapidly and pose a significant challenge for both anesthesiologists and surgeons. Raza and colleagues [45] discuss such a case, emphasizing that the management of these patients requires a special focus on the maintenance of cardiovascular stability, control of the plasma potassium level, and the prevention of renal damage. This adds to the already established guidelines on the perioperative management of patients with inherited salt-wasting alkalosis. These guidelines [75] highlight the importance of preoperative risk assessment, considering factors such as the nature of the surgery, concurrent medications, and cardiovascular risk factors. Additionally, they highlight the significance of electrolyte stability and the avoidance of rapid preoperative correction. The guidelines [75] set the minimum acceptable potassium level (3.0 mmol L−1) based on the serum magnesium level (≥0.5 mmol L−1) and suggest appropriate monitoring during anesthesia and recovery.

Those born from consanguineous marriages have a greater probability of inheriting defective recessive genes [76]. Autosomal-recessive disorders like BS have been reported widely in communities with high consanguinity rates [77]. It has been widely established in the clinical literature that such marriages lead to an increased expression of autosomal-recessive disorders, increased birth defects, and mortality in offspring, as reported by many other studies [76,78-80]. Our review has revealed a comparable trend, with 21 out of 118 cases of BS being reported in the offspring of consanguineous couples.

Limitations and Strengths
To the best of our knowledge, our study is the first systematic review summarizing the available clinical literature in the form of case reports and case series on BS. We present a comprehensive overview of the published data, with a robust quality appraisal of the included studies.
We acknowledge that this systematic review had its limitations. We only included case reports and case series, due to the limited literature published on BS; therefore, there is a potential risk of bias. After a thorough screening and data collection, we were not able to retrieve all of the required information in all of the categories. These missing data were associated with some skewness among the datasets. Some of the patients were diagnosed based on clinical signs and symptoms and not actual genetic testing. Since we only included papers from the last 10 years reporting BS, we might have missed important clinical data from prior years. In addition, it is noteworthy that not all of the patients received genetic diagnoses, as some were clinically diagnosed. However, it is essential to underscore our rigorous review process, wherein each case report or series underwent a thorough examination by at least two independent authors. Only the papers in which the attending physician conclusively diagnosed BS were considered for inclusion. Nevertheless, while we maintain a high degree of confidence in our diagnostic criteria, we acknowledge that a very small proportion could potentially represent GS rather than BS.

Conclusions
Although Bartter syndrome is a rare diagnosis, we were able to summarize the clinical characteristics, presentation, and treatment of all five reported types through a robust systematic review of the literature from the past decade. For BS testing, premature neonates with unexplained polyhydramnios, growth retardation, or electrolyte abnormalities should be investigated. The clinical presentation, epidemiology, treatment options, and follow-up of the BS patients presented in this review could be useful for physicians in clinical practice.

Figure 2. (a) The frequency distribution curve of body weight for identified BS patients. (b) Geographical distribution of identified cases of BS.

Table 1. Laboratory analysis and period of follow-up of the included studies in our systematic review.

Table 2. Patient demographic and other clinical characteristics.

Supplementary Materials: Figure S1: Quality Assessment of Included Case Reports Based on JBI (Joanna Briggs Institute) Critical Appraisal Checklist for Case Reports; Figure S2: Quality Assessment of Included Case Series Based on JBI (Joanna Briggs Institute) Critical Appraisal Checklist for Case Series; Table S1: Detailed Treatment Options and First Presentations for Bartter Syndrome Patients.
Migratory Connectivity at High Latitudes: Sabine’s Gulls (Xema sabini) from a Colony in the Canadian High Arctic Migrate to Different Oceans The world's Arctic latitudes are some of the most recently colonized by birds, and an understanding of the migratory connectivity of circumpolar species offers insights into the mechanisms of range expansion and speciation. Migratory divides exist for many birds, however for many taxa it is unclear where such boundaries lie, and to what extent these affect the connectivity of species breeding across their ranges. Sabine’s gulls (Xema sabini) have a patchy, circumpolar breeding distribution and overwinter in two ecologically similar areas in different ocean basins: the Humboldt Current off the coast of Peru in the Pacific, and the Benguela Current off the coasts of South Africa and Namibia in the Atlantic. We used geolocators to track Sabine’s gulls breeding at a colony in the Canadian High Arctic to determine their migratory pathways and wintering sites. Our study provides evidence that birds from this breeding site disperse to both the Pacific and Atlantic oceans during the non-breeding season, which suggests that a migratory divide for this species exists in the Nearctic. Remarkably, members of one mated pair wintered in opposite oceans. Our results ultimately suggest that colonization of favorable breeding habitat may be one of the strongest drivers of range expansion in the High Arctic. Introduction Determining the extent to which breeding populations overlap during the non-breeding season (i.e., migratory connectivity) is essential to interpret the ecological and evolutionary patterns of migratory species [1]. Migratory divides delineate the boundaries between adjacent breeding populations with divergent migration pathways and are common in many migratory bird species [2][3][4]. Intraspecific variation in migratory routes may be driven by physical factors such as past glacial events, geographical barriers, or suitable habitat for refueling [5][6][7], or biological factors such as the distribution of resources, energetic costs of migration, or competition between breeding populations [8,9]. The Canadian High Arctic is a vast archipelago which forms part of a nearly continuous area of relatively homogenous High Arctic tundra habitat extending from the Nearctic to the Palearctic [10]. Even species which breed across large or even circumpolar ranges within this region are typically divided into discrete populations that breed and winter in disjunct ranges with varying degrees of migratory connectivity [10]. The study of migration patterns in the Canadian High Arctic is of particular interest for several reasons: (i) it is ecologically a very "young" area, having only become accessible as nesting habitat for birds since the last major ice age [5]; (ii) it extends so far north of the Nearctic continental landmass that in its northern reaches it is geographically an equally likely destination for migrants from the both the Nearctic and western Palearctic; and (iii) it extends from the North American continent symmetrically, so that its relative midpoint lies approximately equidistant from both the Atlantic and Pacific coasts [11]. These factors have led to the colonization of the Canadian High Arctic archipelago by migratory seabird species from three source regions: Atlantic, Pacific and Palearctic [12][13][14]. 
Determining how species and populations are distributed through the Canadian High Arctic archipelago can help clarify the evolutionary process behind the migration patterns seen in Arctic birds as a group [11]. For most Palearctic migratory birds, there is a distinct migratory divide at 100˚E along the Taymyr Peninsula in Russia, which forms the most northerly continental barrier to east-west migration, and lies roughly halfway between suitable wintering habitat in the Atlantic and Pacific regions [15,16]. Efforts to study migration patterns in the Nearctic have failed to find a corresponding geographic divide between migratory bird species [11]. For example, many shorebirds appear to be divided in the western Arctic [13], while some passerines follow a divide in the east [17]. Jaegers, terns, and gulls [11], as well as some waterfowl [12,18] migrate both east and west out of the Nearctic, with no consistent shared geographic boundary across species. It remains unclear exactly what factors result in these inconsistencies, but the relatively recent colonization of the region as a whole may be an important factor. The Sabine's gull (Xema sabini) is a small seabird that exhibits a patchy, circumpolar breeding range [19]. It is highly pelagic in the non-breeding season, and spends the majority of its annual cycle in offshore waters [20]. All breeding populations are presumed to migrate to either of two known wintering areas in major upwelling systems in the southern hemisphere [20,21]. The Pacific wintering population occupies a region within the Humboldt Current off the coast of Peru [22], while the Atlantic wintering population occupies a region within the Benguela Current off the coast of South Africa and Namibia [20,23]. It remains unclear how Sabine's gulls segregate between these two ecologically similar but geographically disparate wintering areas, and the distribution of Atlantic and Pacific wintering birds at breeding colonies is unknown [19,24]. Birds breeding in Siberia, Alaska, and the Western Canadian Arctic are thought to winter in the Pacific, while birds from breeding sites in the Eastern Canadian Arctic, Greenland, and Svalbard are thought to winter in the Atlantic [21]. The migratory divide between Atlantic and Pacific wintering populations in the Palearctic is thought to lie along the Taymyr Peninsula [15,16], while the divide in the Nearctic is presumed to lie somewhere in the central Canadian Arctic [21,25]. Here, we used geolocators to track Sabine's gulls breeding at a colony in the central Canadian High Arctic to determine their migratory pathways and wintering sites. We interpret the revealed migratory patterns of Sabine's gulls from this site in relation to the ecology and evolution of Arctic breeding migratory birds. Ethics Statement All work was conducted under valid permits (CWS Animal Care EC-PN-11-020, CWS Scientific Permit NUN-SCI-09-01, Government of Nunavut Wildlife Research Licence WL 2010-042, Nunavut Water Board licence 3BC-TER0811, Indian and Northern Affairs Land Use Reserve 068H16001, and CWS Banding Permit 10694), and their renewals. Study Site We conducted field research on Nasaruvaalik Island, Nunavut, (75.8˚N, 96.3˚W ; Fig 1), between early June and late August over five years between 2008-2012. Nasaruvaalik Island is a small gravel island 1.4 km 2 in size, supporting a large and diverse colony of marine birds that forage in several nearby polynyas. 
The island is characteristic of the High Arctic tundra ecoregion [26] and has been previously described in detail [27]. Sabine's gulls are annual breeders, and we have recorded 16-31 breeding pairs annually over eight years of study, all of which nest in association with both Arctic terns (Sterna paradisaea) and Ross's gulls (Rhodostethia rosea) in two colonies at either end of the island. Nesting habitat in the colonies consists of low gravel beach ridges interspersed with patches of moss and purple saxifrage (Saxifraga oppositifolia) and small, shallow ponds [27]. Sabine's gull philopatry at this site is high (mean annual return rate of 80% over 6 years), based on capture-mark-resight data (S. E. Davis, unpubl. data).

Deployment and Recovery of Geolocators
We deployed 47 geolocators (44 LAT2900 and 3 LAT2500, Lotek Wireless, Canada) on 33 adult breeding Sabine's gulls on Nasaruvaalik Island over three years. In 2008, we deployed geolocators on three birds. In 2010, we deployed geolocators on 23 birds, one of which was previously tagged in 2008. In 2011, we deployed geolocators on 21 birds, 13 of which were tagged previously in 2010. In total, we deployed geolocators on 16 females and 17 males, 14 of which (seven males and seven females) we tagged twice. We captured breeding Sabine's gulls with a spring-loaded bow net [28] or a handheld CO2-powered net gun (see [29] for details). We attached geolocators to Darvic tarsal bands with plastic cable ties, totaling 2.1 g (LAT2900) and 3.8 g (LAT2500), averaging 1.1% and 2.0% of adult body weight, respectively. All tagged birds were also fitted with a numbered metal band and a unique combination of colored Darvic bands on the opposite leg. We determined the sex of tagged birds through an analysis of 2-3 drops of blood collected from the brachial vein. We recaptured tagged birds the following year to recover the geolocators (one unit was recovered after two years), and downloaded the data in LAT Viewer Studio (Lotek Wireless, Canada).

Data Processing
The geolocators used in this study estimated location once daily; latitude was estimated from the duration of daylight between sunset and sunrise, and longitude from the exact time of sunrise and sunset [30]. The geolocators sampled sea-surface temperature (SST) when immersed for more than two consecutive samples (i.e., 120 s) and recorded the minimum daily value (˚C) [31]. To improve the accuracy of latitude estimates, we used SST correlation (LAT Viewer Studio) based on the approach used by Shaffer et al. [32], which allowed us to retain data around the equinoxes. We used 8-day composites of nighttime SST grids from the MODIS TERRA satellite in this study (http://whiteshark.stanford.edu/public/lotek_sst/, 4 km resolution), which are suitable for comparison to the tag values [33]. We then filtered locations [34] to remove positions implying an unrealistic flight speed in Program R [35]. We assumed Sabine's gulls did not exceed a maximum velocity of 13.9 m/s (> 50 km/h sustained over a 48 h period) [36]. To further reduce the mean error in position estimates, we smoothed each track using a moving weighted average (with a window size of three), whereby each smoothed position was the weighted average (in a 1:2:1 ratio) of the previous, current, and subsequent position (as per [37]). Fixed start positions (at breeding colony) and positions that showed large daily movements (greater than 4˚ of longitude or 6˚ of latitude) were not smoothed to avoid introducing positional errors [38].
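To make the filtering and smoothing steps above concrete, the following Python sketch illustrates a simplified version of them. It is not the pipeline actually used (which relied on LAT Viewer Studio, SST correlation, and Program R): the speed filter here compares consecutive daily fixes rather than a speed sustained over 48 h, the longitude averaging ignores the dateline, and all function and variable names are our own assumptions.

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in km between two points in degrees."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_filter(track, max_kmh=50.0):
    """Drop daily positions whose implied speed from the previously kept
    position exceeds max_kmh. track: list of (day, lat, lon) tuples."""
    kept = [track[0]]
    for day, lat, lon in track[1:]:
        day0, lat0, lon0 = kept[-1]
        hours = (day - day0) * 24.0
        if hours > 0 and great_circle_km(lat0, lon0, lat, lon) / hours <= max_kmh:
            kept.append((day, lat, lon))
    return kept

def smooth_121(track, max_dlon=4.0, max_dlat=6.0):
    """1:2:1 weighted moving average over a window of three positions.
    The first (colony) fix and fixes showing large daily movements are
    left unsmoothed, mirroring the description above."""
    out = list(track)
    for i in range(1, len(track) - 1):
        _, lat_prev, lon_prev = track[i - 1]
        day, lat, lon = track[i]
        _, lat_next, lon_next = track[i + 1]
        if abs(lon - lon_prev) > max_dlon or abs(lat - lat_prev) > max_dlat:
            continue  # large jump: keep the raw position
        out[i] = (day,
                  (lat_prev + 2 * lat + lat_next) / 4.0,
                  (lon_prev + 2 * lon + lon_next) / 4.0)
    return out

# Example with made-up daily fixes (day index, latitude, longitude):
raw = [(0, 75.8, -96.3), (1, 75.2, -97.0), (2, 60.0, -150.0), (3, 74.1, -99.2)]
print(smooth_121(speed_filter(raw)))
```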
Analysis of Movement Data We pooled all valid locations and generated kernel density estimations to represent the annual distribution of tracked birds (ESRI ArcGIS 10.1, search radius: 200 km, output cell size: 10 km). A search radius of 200 km was chosen for analysis in this study in order to be directly comparable to recent studies of arctic breeding long distance migrants [20,39]. We created occupancy contours (25, 50, 75%) in Geospatial Modelling Environment (GME; [40]) to determine areas of high use throughout the annual cycle. We used the 50% occupancy contour generated around either one of the known wintering areas in the Southern Hemisphere [19,20] to set the boundary for the "wintering area" (as per [37]). For the purpose of this study, we did not use positions that occurred after the wintering period (spring migration) in the remaining analysis. We assigned positions to either "stopover" or "travel" categories with each bird initially defined to be in a stopover period (i.e., starting at the breeding site). We identified transition to a travel period when three or more positions (within a sliding window of five) showed movement more than 100 km/d, which represents the mean daily movement during the wintering period. Similarly, we identified transition back to a stopover period when three or more positions failed to meet the distance criteria (less than 100 km/d). This approach is comparable to methods used by similar studies of migratory seabirds breeding in the arctic [37,39], where distance between daily positions is used to reduce bias towards the poles when using change in longitude [41] and bias towards east-west migration when using change in latitude [20,42]. Stopover periods were then examined for burst travel days, which occurred when birds travelled fast and far for 1-2 d, which would not trigger a transition to travel, however birds were clearly travelling to a new stopover area [33]. These burst travel days were manually adjusted to reflect the travel behavior. Tracks were then split into two periods; fall migration and winter. Fall migration was defined as the period between departure from the breeding area (i.e., first "travel" location identified after breeding period) and arrival to the wintering area (i.e., first "stopover" location within the pre-defined wintering area) (as per [37,43]). For each wintering site (Pacific and Atlantic), we generated kernel density estimations (ESRI ArcGIS 10.1, search radius 200 km, output cell size 10 km) using winter locations, which were first transformed to an equal area projection appropriate for the site (South America Albers for Pacific and Africa Albers for Atlantic). To represent the distribution of birds at each wintering site, we created 25%, 50%, and 75% occupancy contours in GME [40]. We calculated great-circle distances between each pair of valid locations in Program R [35], and subsequently calculated distance per day based on the number of days between locations. Travel distance (km) was defined as the distance travelled during fall migration not including movement during stopover periods, and travel speed (km/d) as the travel distance divided by the days travelled ("travel" days only) during fall migration (as per [44]). Welch's t-test was used to test for differences in travel distance and speed between wintering populations in Program R [35]. Results We recovered 38 of 47 (81%) geolocators deployed on Nasaruvaalik Island, Nunavut from 2008 to 2012. 
Four additional tagged birds were seen at the colony but did not breed, while one bird returned and successfully bred without its tag (92% of tags were re-sighted). After filtering, our dataset contained 6,354 locations (91.8% valid), averaging 177 days per track. Twenty-eight geolocators tracked birds to their wintering site, while eight geolocators confirmed migration direction (Pacific or Atlantic) but failed before arrival to the wintering site. Two geolocators failed during the breeding season and were not included in the analysis (n = 36). Ten birds were tracked twice; therefore our data describe the movement of 26 individual birds. Birds breeding on Nasaruvaalik Island disperse to both the Atlantic and the Pacific oceans during the non-breeding season (Fig 2-A). The majority of birds tracked (93%) migrated west to the Pacific wintering site (Fig 2-B), while two of the birds tracked (7%) migrated east to the Atlantic wintering site (Fig 2-C). Remarkably, one pair of Sabine's gulls (confirmed mates for six seasons 2009-2014) migrated to different oceans for the non-breeding season; the female migrated west to the Pacific (Fig 2-A; red tracks) while the male migrated east to the Atlantic (Fig 2-A; green tracks). This pair of birds was tracked for two consecutive years (Fig 2-A; represented by 2 tracks of the same color for each bird). Sabine's gulls showed high wintering site fidelity; all ten birds that were tracked for two years wintered in the same area both years, including one Atlantic wintering bird. Sabine's gulls left the breeding site in late August and arrived at the wintering site in early November (Table 1). During fall migration, tagged birds travelled 14,578 km to the Pacific wintering site, and 14,615 km to the Atlantic wintering site, excluding movement during stopover periods (Table 1). Both Pacific and Atlantic birds spent 84 days migrating to the wintering site, flying at a speed of c. 350 km/day on travel days (Table 1). There was a statistically significant difference in travel distance between years (travel speed did not differ significantly) as determined by a one-way ANOVA (F 2,25 = 3.4, p = .049), however post hoc comparisons using a more conservative Tukey HSD test showed travel distance did not significantly differ among years. When comparing migration metrics between wintering populations, we found no significant difference in travel distance (t 1 = 0.02, p > 0.5) or travel speed (t 8 = 0.41, p > 0.5) between Pacific and Atlantic migrants. Discussion Here, in the first tracking study of Sabine's gulls from the North American Arctic, we report that birds from a single colony dispersed to both the Pacific and Atlantic oceans during the non-breeding season. This study confirms a migratory divide for this species in the Nearctic around 96˚W. Our work on Sabine's gulls is one of only a few other studies documenting breeding populations of any species from the Canadian Arctic moving to disjunct wintering areas [12,17]. Because much of the North American Arctic has only relatively recently been exposed after the last glacial period, the colonization and migration patterns of birds breeding there are difficult to interpret; some species show distinct genetic structuring in populations (e.g. northern fulmars Fulmarus glacialis; [45]) while others do not (e.g. ivory gulls Pagophila eburnea; [46]). Such differences may be attributed to how long these populations were isolated as well as their propensity to colonize newly available habitat following glacial periods. 
Combined data from several species and studies suggests a zone of transition or overlap between Atlantic and Pacific wintering populations around 100˚W in the Canadian Arctic. [11,12,17]. In the High Arctic, migratory divides occur between areas which offer an optimal combination of suitable breeding habitat balanced with a relatively low cost of migration to suitable wintering habitat, considering both the distance to travel as well as the ecological or topographical barriers en route [47,48]. Our results show that Sabine's gulls travelling from Nasaruvaalik Island to either of the two wintering sites used face very similar energetic costs, at least in terms of flying distance, speed, and duration. Throughout most of their breeding range, Sabine's gulls prefer low-lying tundra habitat associated with freshwater or tidal marshes [19]. Only a small portion of the global population of Sabine's gulls breeds in the High Arctic [19], and little is known about Sabine's gulls breeding in the northernmost part of their range, such as those we studied here. The nearest known major breeding colonies of Sabine's gulls lie hundreds of kilometers to the southeast and southwest [19,49] of our study site, yet Sabine's gulls breeding on Nasaruvaalik Island experience higher reproductive success [27] than birds breeding in more typical Low Arctic environments [49]. Consequently we suggest that the birds nesting at Nasaruvaalik Island may represent a relatively recent colonization of particularly favorable habitat by a diverse and distinct population of birds representing the northernmost breeders from both Atlantic and Pacific wintering populations, consistent with the theory that the colonization of suitable breeding habitat may be one of the strongest drivers of range expansion in the High Arctic. Nasaruvaalik Island has been identified as one of the most important breeding sites for a wide variety of ground-nesting seabirds in the Canadian High Arctic on account of several small but highly productive polynyas nearby that provide reliable foraging opportunities when surrounding waters are still completely frozen in the early breeding season [50]. The brief and unpredictable High Arctic breeding season places a high premium on timing arrival at the breeding site to coincide with optimal nesting conditions, and for individuals to arrive in prime breeding condition [8]. Coordination of behavior within pairs during the breeding season (e.g. timing of foraging trips, nest defense) is often pronounced in long-lived seabirds which require biparental care for successful reproduction [51]. Outside the breeding season however, behavior is driven by prey availability, genetics, and/or climate, and mated pairs may winter in the same area because of shared traits rather than coordination of behavior [51]. Our study shows that in rare cases, mated pairs of birds migrate to opposite ocean basins during the winter, returning to the same breeding site without knowing how the schedule of their respective partner is affected by environmental conditions en route. Even birds migrating along the same routes and relying on the same cues to time their arrival at breeding sites are susceptible to misjudging local conditions upon arrival [52]. Although some polar seabirds disperse from a single colony to disparate wintering areas [53,54], this appears to be a rare phenomenon, and to our knowledge, our results are the first confirmed example showing divergent migratory pathways between members of a breeding pair of any species. 
Sabine's gulls form strong multi-year pair bonds [55], and the reproductive costs involved in deferring breeding or finding a new partner if a former mate fails to arrive at the breeding site are considerable, and would presumably be exaggerated in mixed pairs arriving from different directions. While it is difficult to extrapolate beyond the one example we discovered of a mixed pair, the fact that these individuals have bred successfully over four consecutive years suggest that either the conditions at Nasaruvaalik Island are particularly favorable in order to sustain this risky union or this site is far enough north that there may be less variability in the possible timing of nesting, so the breeding window is very small. Information about how populations are geographically linked throughout the year is lacking for many species of migratory birds [56], including Sabine's gulls [19]. This research is the first to examine the degree of migratory connectivity in Sabine's gulls breeding in the Nearctic, and shows that birds breeding on Nasaruvaalik Island exhibit somewhat diffuse migratory connectivity due to mixed wintering area preference. Ultimately, this study provides new insight into the migration ecology and behavior of Arctic breeding migrants. Limited areas of suitable breeding habitat within the High Arctic attract and sustain colonies of birds nesting at the limits of their range. At high latitudes, breeding colonies that lie relatively equidistant from suitable winter habitat may consist of individuals from different wintering populations, as shown in this study. The reproductive disadvantages of increased variability in timing of migration and arrival within a single breeding population at such mixed colonies may be offset by exceptionally favorable breeding conditions at specific sites such as Nasaruvaalik Island.
Diagnostic performance of susceptibility-weighted magnetic resonance imaging for the detection of calcifications: A systematic review and meta-analysis Since its introduction, susceptibility-weighted-magnetic resonance imaging (SW-MRI) has shown the potential to overcome the insensitivity of MRI to calcification. Previous studies reporting the diagnostic performance of SW-MRI and magnetic resonance imaging (MRI) for the detection of calcifications are inconsistent and based on single-institution designs. To our knowledge, this is the first meta-analysis on SW-MRI, determining the potential of SW-MRI to detect calcifications. Two independent investigators searched MEDLINE, EMBASE and Web of Science for eligible diagnostic accuracy studies, which were published until March 24, 2017 and investigated the accuracy of SW-MRI to detect calcifications, using computed tomography (CT) as a reference. The QUADAS-2 tool was used to assess study quality and methods for analysis were based on PRISMA. A bivariate diagnostic random-effects model was applied to obtain pooled sensitivities and specificities. Out of the 4629 studies retrieved by systematic literature search, 12 clinical studies with 962 patients and a total of 1,032 calcifications were included. Pooled sensitivity was 86.5% (95%-confidence interval (CI): 73.6–93.7%) for SW-MRI and 36.7% (95%–CI:29.2–44.8%) for standard MRI. Pooled specificities of SW-MRI (90.8%; 95%–CI:81.0–95.8%) and standard MRI (94.2; 95%–CI:88.9–96.7%) were comparable. Results of the present meta-analysis suggest, that SW-MRI is a reliable method for detecting calcifications in soft tissues. Patient and study characteristics. In total, 962 patients (male = 554, female = 257, sex unknown = 151) were included with an average age of 52.2 years. MRI and CT examinations were performed within a period of 0.5 to 90 days. All the studies were single-centre and study design was described as prospective in six of the studies, as retrospective in five of the studies and remained unclear in one case. Six of the studies were performed on 1.5T scanners, five of the studies at 3T and one of the studies was performed at both 1.5 and 3T. The characteristics of the patients and studies included in this meta-analysis are provided in Table 1. Quality assessment. The detailed results of the QUADAS-2 (QUality Assessment of Diagnostic Accuracy Studies) evaluation of the methodologic quality (risk of bias and concerns regarding applicability) are presented in Fig. 2. The majority of the studies were assessed as having low concerns regarding applicability. Methods of reconstruction and especially the interpretation of the CT and MRI scans were poorly described. In five of the studies the risk of bias for patient selection was unclear due to incomplete reporting on the methods of patient selection. Risk of bias related to the conduction of the index test or the reference standard was unclear in eight or seven of the studies, as no information was provided on whether the radiologists were blinded to the reference standard or the index test. Several studies did not describe, if there was an appropriate interval between the interpretation of the index test(s) and the reference standard 10,14,15,17,19,20 . The time intervals given ranged from less than 24 hours to three months. For one patient, the interval between the MRI and the CT examination was 10 months, because of which the corresponding study was rated as having a high risk of bias related to flow and timing. Assessment of calcifications. 
The diagnostic performance of SW-MRI for the identification of calcifications was analysed in all studies. The resulting forest plots of the log diagnostic odds ratios are given in Fig. 3. Table 2 provides an overview of the detailed results for sensitivities and specificities. The pooled sensitivity for SW-MRI was 86.5% (95% CI: 73.6-93.7%) and the pooled specificity was 90.8% (95% CI: 81.0-95.8%). sROC curves of overall diagnostic accuracy for SW-MRI and MRI are provided in Fig. 4. The area under the curve (AUC) for SW-MRI was 0.95 (MRI: 0.78). Especially for standard MRI, the extrapolation of the sROC curve is highly vulnerable to outliers. The marginally lower specificity of SW-MRI compared to standard MRI is most likely explained by the inversely proportional relationship of sensitivity and specificity. To evaluate whether the differences observed between standard MRI and SW-MRI were significant, an analysis of variance (ANOVA) was performed. Sensitivity and specificity differed significantly for MRI and SW-MRI when the imaging method was added as a covariate (p < 0.0001). The data retrievable from Zhu et al. 22 only permitted the calculation of the sensitivity for the detection of calcifications and could thus not be included in the bivariate model. In the study by Chen et al. 12, SW-MRI was compared against quantitative susceptibility mapping (QSM), whereby SW-MRI showed a significantly lower diagnostic performance.

Figure 3. Forest plots showing the log diagnostic odds ratios (black squares) for susceptibility weighted imaging and standard magnetic resonance imaging (MRI) (where available) of each study with 95% confidence intervals (horizontal lines). The area of each square is proportional to the study's weight in the meta-analysis and the summary measure of effect is plotted below as a diamond. An effect size of zero is indicated by the vertical dashed line.

Heterogeneity and publication bias. The Chi-squared test suggested heterogeneous results for SW-MRI (p < 0.001 for sensitivity and for specificity) and also partly for MRI (p < 0.001 for sensitivity; p = 0.11 for specificity). Covariate analysis could only explain part of the observed heterogeneity. Adding the location of the lesion (intracranial/body) as a covariate to the model showed the greatest effect on the variability estimates for SW-MRI, whereby adding "intracranial" as a covariate decreased variability estimates from 1.12 and 1.09 (for sensitivity and specificity) to 1.04 and 0.89. Considering the relatively small number of studies included in the present meta-analysis (n = 12), funnel plots and regression tests of asymmetry based on it may be inconclusive as a tool for detecting publication bias 23. Although the regression test of asymmetry revealed a negative test result (p = 0.45 for SW-MRI and p = 0.19 for standard MRI), this cannot necessarily be taken as an indicator of a low probability of publication bias. Especially with regard to standard MRI, with only six studies evaluating the performance of both SW-MRI and standard MRI, publication bias cannot be excluded.

Discussion
Major advances in MRI have led to the recent development of SW-MRI, which has opened the door to an improved non-invasive detection of even small amounts of calcification and haemorrhage. Over the last decade, several literature reviews on SW-MRI have been published 5,24-32, but to date there has been no systematic approach critically evaluating and combining the results of comparable studies.
To our knowledge, this is the first meta-analysis to focus on the diagnostic performance of SW-MRI for the detection of calcifications. Pooled sensitivity and specificity estimates for SW-MRI were high. In the studies, that evaluated the performance of both standard MRI and SW-MRI, above two times more patients could be correctly assessed with SW-MRI compared to standard MRI. Traditionally, CT is considered the reference standard for the detection of calcifications. However, it is associated with radiation exposure and accounts for the majority of the radiation exposure related to medical imaging 33 . This is especially relevant in children and younger patients, who are at a higher risk of developing radiation-induced tumors, infertility and other side effects, and in patients facing multiple follow-up examinations. Therefore, the reduction of radiation dose has become a major concern in clinical routine. In this context, SW-MRI can offer an alternative radiation-free approach. Its development began in the 1990s with the introduction of "phase imaging" as a means to map susceptibility. After the acquisition of the magnitude and phase images, raw phase images are unwrapped and further processed, usually by transformation into a phase mask, which is then multiplied by the magnitude image 34 . Advances in phase unwrapping and background phase removal have been among the key steps to reduce artifacts and enhance tissue phase contrast. Over the years, various unwrapping and post-processing techniques such as Fourier-based unwrapping, Homodyne-filtering, Gaussian filtering and phase-unwrapping high-pass filtering have been developed in order to reduce artifacts and to enhance the susceptibility contrast 6,34 . So far, there have only been a limited number of studies comparing the performance of the different post-processing approaches and the influence of different filter types, but a recent study suggested, that phase wrapping followed by high-pass filtering might perform most accurately 34 . The fields of clinical application for SW-MRI include the imaging of venous blood in acute or chronic ischemia, the visualization of the vascularization, haemorrhage and calcification of tumors, the identification of epilepsy-associated calcified and vascular abnormalities and measuring calcification or iron deposition in neurodegenerative diseases 2,[9][10][11][12]14,15,18,22,[35][36][37][38][39] . It has been indicated that with regard to the differentiation between small calcification and haemorrhage, SW-MRI might even be superior to the reference standard CT, as due to a considerable overlap of attenuation values these conditions are not always easy to distinguish on CT scans 19,40 . Besides brain imaging, possible clinical applications of SW-MRI have also been extended to other areas, among which belong the detection of prostatic calcification 1,21 , the identification of calcific tendonitis 41 and subacromial spurs 42 as well as the visualization of peripheral vessel calcifications 16 . Depending on their location and pattern, calcifications can hint at various pathologies. In the brain, calcification is a very important factor in the diagnosis of brain neoplasms. As different tumors show overlapping features in different diagnostic imaging modalities, detecting whether a tumor is associated with calcifications is useful in narrowing the differential diagnosis. 
Tumors frequently showing intratumoral calcification include oligodendrogliomas, meningiomas, craniopharyngiomas, pineal gland tumors and ependymomas 2,43 . For prostate cancer, which has become one of the major challenges to public health, the identification of prostatic calcifications and differentiation from haemorrhage is an important diagnostic step, as the reliable detection of haemorrhage can be used as a biomarker for cancerous tissue 44 . Prostatic calcifications can indicate several urological diseases and symptoms such as underlying inflammation 45 . Within rotator cuff tendons, calcium deposition is a diagnostic clue for calcific tendonitis 41 . In vessels, calcifications can be a sign of advanced stages of atherosclerosis 20 . With regard to vessel calcifications and the imaging of complex plaque features with intraplaque haemorrhage and/or inflammation, SW-MRI has advantages over conventional imaging techniques by being able to detect even small foci of haemorrhage and to differentiate them from calcification 32 20,47,48 . The common primary endpoint of all included studies was the detection of calcium-phosphate deposition in brain and body soft tissues. The present meta-analysis has several limitations. First, the number of studies that met our inclusion criteria was relatively low, whereby the small study size is especially relevant with regard to MRI. Therefore, no subgroup analyses were performed, as the sample size was considered too small to obtain reliable results. Also, the included studies were heterogeneous, e.g. regarding the size of the study populations and the location and type of the calcifications, whereby covariate analysis could only explain part of this heterogeneity. Furthermore, not all authors provided sufficient information about the study design and the assessment of the index text and/or the reference standard. Another aspect is, that possible bias could have resulted from the facts that inclusion criteria were not standardized and that the studies were conducted in different clinical settings. Although CT is considered the best imaging technique for the detection of calcifications, the additional use of histopathology to confirm the diagnosis would have been superior, but was only applied in two of the studies 15,19 . A further limitation is, that covering a large time period, this meta-analysis includes studies with different algorithms and post-processing techniques. While the SW-MRI image contrast is relatively consistent, the quality of SW-MRI images naturally depends on the robustness and accuracy of the post-processing on the phase image 34 . Therefore, the quality of the image data in the present meta-analysis may differ. Also, due to the variance in diagnostic performance and the relatively small number of data points, the extrapolation of the sROC curve is highly vulnerable to outliners, especially for standard MRI; and assumptions on significant differences between standard MRI and SW-MRI cannot be made solely based on the sROC curves. Furthermore, the SW-MRI phase image has the disadvantage of aliasing if the field is large enough so that the phase exceeds π radians, which makes it difficult to obtain the exact shape and extent of especially larger calcifications 4 . 
Finally, we did not include grey literature, but only published studies, which might cause a selection bias, as potentially unpublished data could have shown unexpected results, because of which it might not have been intended for publication or may not have met the journal's criteria. As SW-MRI does not enable quantitative measurements, new susceptibility-based techniques, such as QSM, are currently developed and implemented 49 . QSM has shown the potential for more accurate measurements of total volume and susceptibility and may thus be a solution for quantifying calcification or haemorrhage on MR images 12 . So far, there have only been a limited number of studies published on the diagnostic performance of QSM in the detection of calcifications, which suggested a convincing sensitivity and specificity 12,50-52 . In a comparison study of SW-MRI and QSM, Chen et al. showed, that QSM might achieve a higher sensitivity and specificity than SW-MRI (80.5% vs. 71% and 93.5% vs. 76.5%) in the detection of intracranial calcifications. Therefore, QSM may play a significant role in the future applications of SW-MRI and may enable a reliable differentiation and detection of soft tissue calcium deposits in various clinical applications, with initial clinical evidence affirming its effectiveness and its potential superiority to SW-MRI. However, more studies are warranted for confirmation. In conclusion, this meta-analysis shows that SW-MRI is a reliable technique for the detection of calcifications with an accuracy close to CT. Studies that evaluated the performance of both standard MRI and SW-MRI suggested, that the diagnostic performance of SW-MRI was superior to standard MRI. However, further large, multi-centre and prospective studies are required in order to confirm these findings. Materials and Methods The present meta-analysis is in accordance with the guidelines provided by the PRISMA 8 (see checklist). The protocol was registered with PROSPERO (International Prospective Register of Systematic Reviews; registration number CRD42017059736). Only studies using SW-MRI for the detection of calcifications with CT as the standard of reference were identified. The literature search was performed using Pubmed (MEDLINE), OvidSP (EMBASE) and Web of Science (ISI). The present meta-analysis is exempt from ethical approval of the Institutional Review Board, as the analysis only involves de-identified data and all the included prospective studies have received local ethics approval. Studies were excluded if they included overlapping samples. If the patient sample data was published in more than one publication, the latest study with the largest patient sample was selected and the duplicate study was removed. Search Strategy and study selection. Data extraction and quality assessment. Data were independently extracted by two authors, by use of standardized data extraction sheets. The extracted data included information on: First author, journal and year of publication, the number of (included) patients, patient age (mean, standard deviation, range), false positives/negatives and true positives/negatives for SW-MRI and MRI (if available), the number of patients excluded (because of study overlap, different index test, no or different reference standard), technical parameters of CT and MRI imaging, absolute attenuation threshold used to differentiate calcification from other tissues in CT. 
Studies with multiple readers and different numbers of true positives, true negatives, false positives and false negatives were averaged in order to obtain study-level data and, if necessary, rounded to the nearest whole number. All ensuing disagreements were resolved by consensus. The QUADAS-2 tool was used. It comprises four domains, which are patient selection, index test, reference standard, and flow and timing 53 , which are assessed in terms of risk of bias and concerns regarding applicability. A time span of three months was considered an acceptable interval between MRI and CT imaging, given the slow progression of calcifications. The tool was applied to all studies by two independent investigators. Disagreements were resolved by consensus. Statistics and data analysis. Data was exported as comma separated values and further processed using 'R' Statistical Software (Version 3.2.2, R Development Core Team, Vienna, Austria, 2016). Data from the 2 × 2 tables was summarized in forest plots for each study. Forest plots of log diagnostic odds ratios were generated along with their 95% confidence intervals (CI). Since some of the 2 × 2 tables included zero cells, a continuity correction of 0.5 was done prior to regression analysis. The bivariate diagnostic random-effects model by Reitsma et al. 54 was used to compare pooled estimates of sensitivity and specificity for the index tests (SW-MRI, standard MRI, if available). Pairs of sensitivity and specificity are analysed together, whereby any correlation that might exist between the two measures could be added to the bivariate model and result in separate effects on the sensitivity and specificity 54 . The standard output of this bivariate model includes the pooled logit sensitivity and specificity values with 95% confidence intervals. The summary receiver operating characteristic (sROC) curve was constructed and areas under the curve (AUC), which were calculated by use of bivariate models, showed the diagnostic performance of SW-MRI and standard MRI for the detection of calcifications. To assess whether significant heterogeneity, in the form of variance between the study estimates of sensitivity and specificity, was present, a chi-squared test (χ 2 ) was performed. A covariate analysis was employed to further investigate sources of heterogeneity. To investigate publication bias, a regression test of asymmetry was performed 55 . A p-value of less than 0.05 was considered statistically significant. Meta-analytical data evaluation and creation of the graphs was performed using the freely available package 'mada' (version 0.5.7) 56 . Data availability. The datasets generated or analyzed in the course of the present meta-analysis can be requested from the corresponding author. All relevant data are within the paper and its Supporting Information files.
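As a rough illustration of the pooling step described in the statistics paragraph above, the Python sketch below pools logit-transformed sensitivities from per-study 2 × 2 counts with a 0.5 continuity correction, using a simple DerSimonian-Laird random-effects estimate. This is deliberately simpler than the bivariate Reitsma model fitted with the R package 'mada' in the actual analysis, which models sensitivity and specificity jointly; the example counts here are invented for demonstration only.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def expit(x):
    return 1 / (1 + math.exp(-x))

def pooled_sensitivity(studies):
    """studies: list of (TP, FN) per study. Returns the pooled sensitivity and
    its 95% CI from a DerSimonian-Laird random-effects model on the logit
    scale, with a 0.5 continuity correction applied to every cell."""
    y, v = [], []
    for tp, fn in studies:
        tp, fn = tp + 0.5, fn + 0.5          # continuity correction
        y.append(logit(tp / (tp + fn)))
        v.append(1 / tp + 1 / fn)            # variance of the logit sensitivity
    w = [1 / vi for vi in v]                 # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c) if c > 0 else 0.0
    ws = [1 / (vi + tau2) for vi in v]       # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(ws, y)) / sum(ws)
    se = math.sqrt(1 / sum(ws))
    return expit(mu), expit(mu - 1.96 * se), expit(mu + 1.96 * se)

# Hypothetical (TP, FN) counts for four studies, for illustration only:
example = [(40, 5), (25, 8), (60, 4), (18, 6)]
print(pooled_sensitivity(example))
```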
Soluble Urokinase Plasminogen Activator Receptor (suPAR) in the Emergency Department (ED): A Tool for the Assessment of Elderly Patients

Emergency department (ED) overcrowding is a global issue setting challenges to all care providers. Elderly patients are frequent visitors of the ED and their risk stratification is demanding due to insufficient assessment methods. A prospective cohort study was conducted to determine the risk-predicting value of a prognostic biomarker, soluble urokinase plasminogen activator receptor (suPAR), in the ED, concentrating on elderly patients. SuPAR levels were determined as part of standard blood sampling of 1858 ED patients. The outcomes were assessed in the groups of <75 years (=younger) and ≥75 years (=elderly). The elderly had higher median suPAR levels than the younger (5.4 ng/mL vs. 3.7 ng/mL, p < 0.001). Increasing suPAR levels were associated with a higher probability of 30-day mortality and hospital admission in all age groups. SuPAR also predicted 30-day mortality when adjusted for other clinical factors. SuPAR acts successfully as a nonspecific risk predictor for 30-day mortality, independently and with other risk-assessment tools. Low suPAR levels predict positive outcomes and could be used in the discharging process. A cut-off value of 4 ng/mL could be used for all ED patients, 5 ng/mL being a potential alternative in elderly patients.

Introduction
Overcrowding of the emergency departments (EDs) is a widely discussed issue involving all EDs worldwide, caused by exit-blocks, decreasing numbers of ED beds and increasing need for acute care and, eventually, resulting in increased mortality rates, costs and prolonged lengths of stay (LOS) in the EDs. Consequently, this impairs the quality and safety of acute care [1][2][3][4]. The EDs face a rather heterogeneous population of patients with both urgent and non-urgent medical conditions. Frail elderly patients are one of the most substantial and frequent visitor groups of the EDs, and their clinical presentation differs from the younger patient population: due to delayed, diminished or atypical clinical presentations and symptoms, the risk stratification of these patients is considered remarkably challenging. Additionally, due to age-related declines in organ function, this patient population tends to have a higher risk of negative outcomes during their stay in the ED. For the aforementioned reasons, the prospect of the aging population and the consequent increase in elderly patients seeking care from the EDs is concerning [5,6]. Risk stratification in the EDs relies principally on vital-sign based track-and-trigger score systems, such as the National Early Warning Score (NEWS) system. However, these can be insufficient in assessing patients, especially the elderly, with normal vital signs but a high risk of critical illness [7]. Therefore, tools reflecting the underlying pathogenetic pathways of existing comorbidities as well as different acute illnesses are needed to improve the patient flow of the overcrowded EDs. Improved patient flow [8,9] would ideally result in more safe discharges and leave the hospital beds to the patients that require the most clinical attention. Consequently, this could provide the EDs with increased resources and reduced costs, not to mention the advantages from the aspect of elderly hospitalization [10]. Prognostic biomarkers have been suggested as a potential tool for clinical decision-making in the emergency setting [11].
One of the novel biomarkers, soluble urokinase plasminogen activator receptor (suPAR), is a nonspecific inflammatory biomarker, which is released into blood plasma when the urokinase plasminogen activator receptor (uPAR) is cleaved from the cell membrane of immunoactive cells such as monocytes, activated T-lymphocytes and macrophages in response to inflammatory stimuli. The plasma concentration of suPAR increases in both acute and chronic inflammatory states such as infectious diseases, sepsis, autoimmune diseases, malignancies, cardiovascular diseases and organ dysfunctions such as liver and kidney failure, whereas it stays rather low in healthy individuals [12][13][14]. Furthermore, suPAR values in the general population increase with advancing age: a previous study suggests that individuals aged 74-89 years had significantly higher suPAR values than individuals aged 24-66 years [15]. SuPAR has been shown to have excellent prognostic value in both healthy individuals and in individuals with comorbidities [16][17][18][19]. In critically ill patients, suPAR levels are associated with an increased risk of mortality, hospital admission and readmission as well as with further complications [14,[20][21][22][23]. Furthermore, suPAR values have been found to be strong predictors of mortality when adjusted for NEWS scoring, age and sex in the ED patient population, and, interestingly, in hospitalized COVID-19 patients [24,25]. In contrast, low suPAR values have been observed to support the decision of discharge from the ED without increasing the risk for negative outcomes [26]. The EDs need additional tools for the risk assessment of their patients to improve their patient flow and avoid overcrowding. SuPAR is well understood when it comes to its characteristics and prognostic values. However, considering that aging increases suPAR levels, the optimal clinical setting for its use in the risk stratification of elderly ED patients is unclear. For that reason, in addition to evaluating the risk-predicting value of suPAR in the ED setting, this study aimed to determine the optimal cut-off values for the utilization of suPAR, concentrating on the elderly patient population. Patient Population and Data Collection This study was a prospective cohort study conducted in two Finnish hospital regions (Helsinki and Mikkeli). The included study population consists of unselected acute medical patients who sought care from the two study EDs between 4 March 2020 and 11 May 2020 (Mikkeli) or between 1 May 2020 and 31 May 2020 (Helsinki, Meilahti). The patient populations of the two hospitals were similar and consisted of patients from all medical specialties (internal medicine, surgery, trauma etc.). The data were collected from the two hospital areas' electronic health record systems (Uranus in Helsinki, Effica in Mikkeli). To be included in the study, the patient's index admission was required to involve routine venous blood sampling and, in Meilahti, given consent. Biomarker Measurements Plasma suPAR levels were incorporated as part of the standard blood sampling at the EDs. The actual measurement was carried out using the suPARnostic ® Turbilatex assay (ViroGates A/S, Birkerød, Denmark) on a Cobas c501 clinical chemistry analyser (Roche Diagnostics Ltd., Espoo, Finland). The analyzing process was performed according to the manufacturer's instructions. The other laboratory markers (C-reactive protein, creatinine, troponin T) were measured following regional standards. 
The suPAR values were available for the ED physicians in the same time frame as the other laboratory test results. Statistics The results are presented as numbers [N (%)] for categorical variables and as median [interquartile range (IQR)] for continuous variables. The patients were divided into two groups by age: (1) ≥75 years (=elderly) and (2) <75 years (=younger). For comparison of these groups, Fisher's exact test or Pearson's chi-squared test was used for categorical variables and the Mann-Whitney U-test or Student's t-test for continuous variables. Multivariable logistic regression analysis was used to determine independent risk factors for 30-day mortality, the results of which are presented as odds ratios (OR) with 95% confidence intervals (CI). We compared models with an age group and suPAR interaction to ones without using likelihood-ratio tests (LRTs). Some unevenly and widely distributed values are presented on a logarithmic scale. NEWS scoring was excluded from the multivariable analysis due to missing data. A p-value less than 0.05 was considered statistically significant. The data were analyzed with SPSS Statistics Software 27.0 (IBM, Armonk, NY, USA). Outcomes The primary outcomes of this study were all-cause mortality within 30 days of index admission and the number of discharges from the ED within 24 h of index admission. Secondary outcomes were hospital admissions, 7-day and 30-day readmissions and LOSs in the ED and in the hospital. All the outcomes were assessed in the whole population and separately in the elderly and in the younger. Whole Study Population and Age Groups A total of 1858 patients (Mikkeli 1747 and Helsinki 111) were included in the study. The median age of the study population was 70 years (IQR 56-79) and 961 (52%) were women. Eighty-eight patients (5%) died within 30 days of index admission. The median length of stay (LOS) was 254 min (IQR 176-364) in the ED and 2 days (IQR 1-5) in the hospital. The elderly constituted 36% (669/1858) of the patients, with a female proportion of 48%. The remaining 64% (1190/1858) of the patients were younger, with a female proportion of 48%. The elderly had higher 30-day mortality compared with the younger (8% vs. 2%, p = 0.001). The elderly were discharged from the ED significantly less frequently during the first 24 h compared with the younger (30% vs. 54%, p < 0.001). A similar difference between the age groups was seen in hospital readmissions within 7 days of discharge (10% vs. 6%, p = 0.001). In contrast, the proportion of hospital admissions was higher in the elderly (68%) than in the younger (46%). SuPAR values were available for 1845 (99.3%) patients. The median suPAR level was 4.1 ng/mL (IQR 3.3-6.0) in the whole study population, 3.7 ng/mL (IQR 3.0-5.0) in the younger, and 5.4 ng/mL (IQR 4.1-7.7) in the elderly. Statistically significant differences between the age groups were additionally seen in the higher median glomerular filtration rates (GFRs) of the younger and in the higher median NEWS scores as well as median plasma levels of C-reactive protein (CRP) and troponin T (TnT) of the elderly. For more detailed characteristics of the study groups, see Table 1. 
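As an illustration of the group comparisons described in the Statistics section above, a minimal analysis sketch in Python could look like the following. The input file and column names (age, supar, died_30d) are hypothetical and the snippet does not reproduce the original SPSS analysis.

```python
import pandas as pd
from scipy import stats

# Hypothetical cohort table; column names are illustrative only.
df = pd.read_csv("ed_cohort.csv")          # columns: age, supar, died_30d (0/1)
df["elderly"] = df["age"] >= 75            # >=75 years = elderly, <75 years = younger

# Median (IQR) suPAR per age group, as reported in Table 1
for is_elderly, sub in df.groupby("elderly"):
    q1, med, q3 = sub["supar"].quantile([0.25, 0.50, 0.75])
    print(f"elderly={is_elderly}: median suPAR {med:.1f} ng/mL (IQR {q1:.1f}-{q3:.1f})")

# Continuous, skewed variable: Mann-Whitney U test between age groups
_, p_supar = stats.mannwhitneyu(df.loc[df["elderly"], "supar"],
                                df.loc[~df["elderly"], "supar"],
                                alternative="two-sided")

# Categorical outcome: Pearson chi-squared test for 30-day mortality by age group
chi2_stat, p_mort, _, _ = stats.chi2_contingency(pd.crosstab(df["elderly"], df["died_30d"]))
print(f"suPAR difference p={p_supar:.3g}, 30-day mortality difference p={p_mort:.3g}")
```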
SuPAR levels were additionally compared between the discharged patients and the patients who died within 30 days of index admission. The differences were investigated in the two age groups. The median suPAR levels of the younger group who died within 30 days [5.8 ng/mL (IQR 4.1-9.7)] were significantly higher than the levels of the younger discharged group [3.5 ng/mL (IQR 2.9-4.4)]. A similar trend was seen in the elderly group [7.6 ng/mL (IQR 5.5-10.1) vs. 4.5 ng/mL (IQR 3.7-5.8)]. Median suPAR values were higher in the elderly group, both in the discharged group and in the mortality group (see Figure 2). Different SuPAR Cut-Offs in the ≥75 Years Group To evaluate the predictive value of suPAR levels in the elderly population, the study's outcomes were also assessed with different ranges using three separate suPAR cut-off values (0-4 ng/mL, 0-5 ng/mL and 0-6 ng/mL) in the elderly group separately (Table 2). First, in the suPAR 0-4 ng/mL group, there were 153 (23%) elderly patients. In this group, 45% were discharged within 24 h, whereas 47% were admitted to hospital. One patient (0.6%) died within 30 days of index admission. The median LOS was 264 min (170-391) in the ED and 2 days (1.0-4.0) in the hospital. Determination of Predictors for 30-Day Mortality-Unadjusted and Adjusted with Other Risk-Predicting Factors The results for the regression models can be found in Figure 3. SuPAR had an odds ratio (OR) of 1.23 (95% CI: 1.16-1.29) as a 30-day mortality predictor. When adjusting suPAR by age, its OR slightly dropped: 1.18 (95% CI: 1.11-1.25). As age was correlated with suPAR, we kept it as a predictor and further adjusted the model with neurological and cardiovascular comorbidities, diabetes mellitus and logarithmized plasma levels of creatinine (krea) and troponin T (TnT). All of these had an association of equivalent level as when only adjusting suPAR with age. Only C-reactive protein (CRP) lowered the OR of suPAR considerably; when also adjusting for age, the OR dropped to 1.09 (95% CI: 1.02-1.17). However, adding creatinine to the model with both age and CRP did not lower the OR of suPAR further (OR 1.09, 95% CI: 1.01-1.17). Adding an interaction between age and suPAR did not significantly increase the fit of the model (LRT: age as a group p = 0.72, age as continuous p = 0.63). Figure 3. Multivariable analyses of suPAR and 30-day mortality adjusted with age and other clinical factors. suPAR = soluble urokinase plasminogen activator receptor, OR = odds ratio, CI = confidence interval, DM = diabetes mellitus, CV = cardiovascular disease, NEU = neurological disease, NEWS = National Early Warning Score, log(x) = the outcome x on a logarithmic scale, krea = plasma creatinine, TnT = troponin T, CRP = C-reactive protein. 
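The mortality models summarized in Figure 3 can be sketched in code as well. The example below is only a hedged illustration of fitting unadjusted and age-adjusted logistic models and testing an age-group by suPAR interaction with a likelihood-ratio test; the data file and variable names are hypothetical and this is not the published SPSS workflow.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

df = pd.read_csv("ed_cohort.csv")                       # hypothetical table
df["elderly"] = (df["age"] >= 75).astype(int)

# Unadjusted and age-adjusted logistic regression for 30-day mortality
m_unadj = smf.logit("died_30d ~ supar", data=df).fit(disp=0)
m_age = smf.logit("died_30d ~ supar + age", data=df).fit(disp=0)
or_supar = np.exp(m_age.params["supar"])                # OR per 1 ng/mL increase in suPAR
ci_low, ci_high = np.exp(m_age.conf_int().loc["supar"])
print(f"age-adjusted OR {or_supar:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")

# Likelihood-ratio test: does an age-group x suPAR interaction improve the fit?
m_main = smf.logit("died_30d ~ supar + elderly", data=df).fit(disp=0)
m_int = smf.logit("died_30d ~ supar * elderly", data=df).fit(disp=0)
lr_stat = 2 * (m_int.llf - m_main.llf)
p_lrt = chi2.sf(lr_stat, df=m_int.df_model - m_main.df_model)
print(f"LRT for interaction: p = {p_lrt:.2f}")
```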
Discussion The EDs are overcrowded and the current methods for risk stratification are insufficient, especially in the elderly patient population. Thus, new methods for the assessment of these patients are needed. This study aimed to evaluate suPAR, a nonspecific prognostic biomarker, as a tool of this kind in the ED patient population. Additionally, the study analysed, for the first time to our knowledge, the prognostic role and risk-predicting value of suPAR in the elderly population. According to the LRTs, adding an interaction between suPAR and age did not improve any of the models significantly. As a previous study working with the same research data has concluded [27], this study confirms that suPAR has prognostic value in predicting both negative and positive outcomes: patients with increased suPAR levels are more likely to die within 30 days of index admission, and patients with low suPAR levels are more likely to be discharged from the ED and survive within 30 days of index admission, regardless of age. Conversely, the suPAR levels among patients who died within 30 days were significantly higher than the levels of the discharged patients. Additionally, our regression analysis indicates that suPAR acts as a predictor for 30-day mortality both independently and when adjusted for age, NEWS scoring, CRP and comorbidities such as diabetes mellitus, cardiovascular diseases and neurological diseases. When suPAR is simultaneously adjusted for three factors, the predictive value weakens (OR 1.09, 95% CI: 1.01-1.17). Moreover, the study results suggest that suPAR levels are positively associated with age and that the median suPAR level among the elderly population (5.4 ng/mL) is significantly higher than in the whole population (4.1 ng/mL) and in the younger population (3.7 ng/mL). Additionally, according to Figure 2, the median suPAR levels increase with age, regardless of whether the patient dies or is discharged from the ED. However, despite the higher median suPAR levels, the study data suggest that the utilization of a 6 ng/mL cut-off value would lead to excessive mortality rates in the elderly population (2.5%) and would thus impair the safety-related properties of low suPAR levels (Table 2). The incidence of 30-day mortality was highest in the suPAR 0-6 ng/mL group when compared to the 0-4 ng/mL group and the 0-5 ng/mL group. Between the groups, an increase of this kind was additionally seen in both the number of discharges (8.0% increase from the 0-4 ng/mL group to the 0-5 ng/mL group, 5.1% increase from the 0-5 ng/mL group to the 0-6 ng/mL group) and the number of 30-day readmissions (4.3% increase, 3.2% increase). The median length of stay in the ED or in the hospital did not significantly differ between the groups. For that reason, a cut-off value of 4 ng/mL would successfully work as a predictor for both positive and negative outcomes in all patients, regardless of age. On the other hand, in the elderly, an elevation of the cut-off value from 4 ng/mL to 5 ng/mL resulted in a significant increase in the proportion of discharges (10.3% vs. 18.3%) but only one death within 30 days of index admission. SuPAR is a nonspecific biomarker, and elevated suPAR values can be caused by chronic non-acute as well as acute diseases. The aim of this study was to determine whether suPAR can predict negative outcomes in an unselected patient population with various chronic illnesses, especially in the elderly. According to the study results, suPAR predicts mortality in this group, regardless of age. 
However, due to its nonspecificity, suPAR is not a diagnostic tool. For that reason, suPAR should be used more as a directional prognostic tool alongside other clinical features and assessment methods such as clinical examination, scoring systems and other laboratory markers. Judging by previous studies and the data presented in this manuscript, suPAR could thus be used in the decision to either admit or discharge the ED patient. Limitations As with the majority of studies, this study is subject to limitations. First, the ED physicians were aware of the patients' suPAR results in Mikkeli but not in Helsinki, and therefore the evaluation of their effect on the outcomes is not possible. Second, the smoking habits of the included patients were not taken into account, even though regular smokers are known to have approximately 1 ng/mL higher suPAR levels than non-smokers [28,29]. Third, as drawn blood samples and, in Meilahti, given consent were required for inclusion, the study excluded, for example, patients with minor clinical issues, mental issues or nurse visits. Additionally, the patients who were not able to give consent in Meilahti were excluded from the study. Conclusions The study results suggest that suPAR levels were clearly elevated in the ED patients, with the elderly patients displaying the highest levels. However, the interaction between age and suPAR was not associated with 30-day mortality. High suPAR concentrations were associated with higher mortality and a lower probability of being discharged from the ED. Furthermore, as a nonspecific prognostic biomarker utilized in the ED, suPAR successfully predicts all-cause 30-day mortality in all age groups. SuPAR maintains its predictive value when it is used with other commonly used risk assessment tools. Low suPAR values can work as a support in discharging patients from the ED without increasing the risk of negative outcomes. For all the patients arriving at the ED, the safest cut-off value for suPAR would be 4 ng/mL. On the other hand, a cut-off value of 5 ng/mL should be considered as a potential alternative in the elderly population. The cut-off value of 6 ng/mL should not be utilized. Our study confirmed that suPAR could successfully act as an addition to the risk assessment of elderly patients and of the patients whom the current risk stratification methods fail to identify, especially as these patients are among the most time- and resource-consuming patients of the ED. Data Availability Statement: The data presented in this study are available on request from the corresponding authors.
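As a purely illustrative companion to the cut-off values recommended above (4 ng/mL for all ED patients, 5 ng/mL as a potential alternative for patients aged ≥75 years), the rule could be wrapped into a small decision helper. This is a hypothetical sketch with invented function and parameter names, not a validated clinical rule.

```python
def supar_triage_hint(supar_ng_ml: float, age_years: int,
                      general_cutoff: float = 4.0, elderly_cutoff: float = 5.0) -> str:
    """Return an illustrative risk hint based on the cut-offs discussed in the paper.

    A value below the cut-off is treated as a low-risk signal that may support
    discharge; a value at or above it flags the patient for closer assessment.
    """
    cutoff = elderly_cutoff if age_years >= 75 else general_cutoff
    if supar_ng_ml < cutoff:
        return f"suPAR {supar_ng_ml} ng/mL < {cutoff}: low-risk signal, discharge may be considered"
    return f"suPAR {supar_ng_ml} ng/mL >= {cutoff}: elevated, consider further risk assessment"

print(supar_triage_hint(3.2, 68))   # below the general 4 ng/mL cut-off
print(supar_triage_hint(4.6, 81))   # below the alternative 5 ng/mL cut-off for the elderly
print(supar_triage_hint(6.3, 81))   # above both cut-offs
```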
v3-fos-license
2022-03-10T16:24:59.857Z
2022-03-01T00:00:00.000
247350457
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1422-0067/23/6/2910/pdf", "pdf_hash": "b9acd67dc57f347ed119383b9cd78cbf8c00ceb1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46243", "s2fieldsofstudy": [ "Medicine" ], "sha1": "bc0c5352c35d8ec51cc61280d7470925445d5657", "year": 2022 }
pes2o/s2orc
Antibody Responses to Transglutaminase 3 in Dermatitis Herpetiformis: Lessons from Celiac Disease Dermatitis herpetiformis (DH) is the skin manifestation of celiac disease, presenting with a blistering rash typically on the knees, elbows, buttocks and scalp. In both DH and celiac disease, exposure to dietary gluten triggers a cascade of events resulting in the production of autoantibodies against the transglutaminase (TG) enzyme, mainly TG2 but often also TG3. The latter is considered to be the primary autoantigen in DH. The dynamics of the development of the TG2-targeted autoimmune response have been studied in depth in celiac disease, but the immunological process underlying DH pathophysiology is incompletely understood. Part of this process is the occurrence of granular deposits of IgA and TG3 in the perilesional skin. While this serves as the primary diagnostic finding in DH, the role of these immunocomplexes in the pathogenesis is unknown. Intriguingly, even though gluten-intolerance likely develops initially in a similar manner in both DH and celiac disease, after the onset of the disease, its manifestations differ widely. Introduction Dermatitis herpetiformis (DH) is an extraintestinal manifestation of celiac disease (CeD). Both conditions are driven by the ingestion of dietary gluten in wheat, rye and barley, which induces an inflammatory response featuring B and T cell activation. While CeD and DH patients both evince small intestinal inflammation and often also villous atrophy, CeD patients suffer primarily from gastrointestinal symptoms, whereas DH manifests additionally, or exclusively, with a blistering rash, most often affecting the elbows, knees and buttocks. The primary diagnostic finding in DH is the appearance of granular deposits of immunoglobulin A (IgA) in the papillary dermis, particularly in the perilesional areas of the skin [1]. Irrespective of the different primary manifestations, DH and CeD share genetic susceptibility conferred by HLA-DQ2 or -DQ8 [2]. The majority of untreated CeD patients are seropositive for antibodies against gluten-derived peptides and transglutaminase 2 (TG2), a member of the transglutaminase family of enzymes and the primary autoantigen in CeD [3]. Likewise, most DH patients develop circulating TG2 autoantibodies [4]. Approximately one-third of CeD patients are also seropositive for autoantibodies against transglutaminase 3 (TG3). Meanwhile, a much higher proportion of DH patients develop circulating autoantibodies against TG3, which is considered to be the primary autoantigen in this phenotype [5]. Similar to CeD, circulating autoantibodies against both TG2 and TG3 disappear as a result of gluten-free diet (GFD), the treatment of choice for DH. The granular immunocomplexes in the dermis, considered to comprise TG3 and IgA-class antibodies against TG3, may persist in the skin of seronegative patients for months or even years after the initiation of GFD [5,6]. In this review, we discuss the immunological processes relevant for TG3 autoantibody response and potentially underlying DH disease pathogenesis. Transglutaminase 3-The Epidermal Transglutaminase Transglutaminases constitute a family of nine enzymes which crosslink proteins covalently in a calcium (Ca2+)-dependent manner. TG3 is expressed as an inactive 77 kDa zymogen which must be activated by limited proteolytic processing into two fragments (44 kDa and 30 kDa) of which the larger, N-terminal fragment carries the catalytic activity [7,8]. 
The enzyme responsible for this processing has not been identified, but it has been suggested that at least cathepsin L released from degraded lysosomes could cleave the TG3 zymogen in vivo [9]. In vitro studies have shown that proteinase K, trypsin, dispase and thrombin are also able to activate TG3 via cleavage [8,10]. Once activated, TG3 catalyzes the formation of isopeptide bonds between the γ-carboxamide group of glutamine and the ε-amino group of lysine via an enzyme-substrate thioester intermediate. TG3 is best known for its role in the formation of the cornified envelope, linking differentiated keratinocytes and inner hair sheath cells (Figure 1). Accordingly, TG3 protein expression was first discovered in hair follicles [11,12] and later in the epidermis, brain, stomach, spleen, small intestine, testes, and skeletal muscles [13,14]. Although the expression of TG3 has been detected in a number of tissues and organs, its biological function has been well-described only for skin, where it is expressed predominantly in the stratified squamous epithelium; its function in other tissues and organs has not been thoroughly investigated. TG3 has been linked to gluten-sensitive autoimmune disorders together with two other transglutaminases: TG2 and TG6. All three transglutaminases are encoded by genes located on chromosome 10q21 and share significant sequence homology, particularly with respect to the catalytic domain. Likewise, all three enzymes are able to deamidate gluten-derived gliadin peptides, although with isoform-dependent efficiency and substrate specificity [17]. These enzymes also differ with respect to their ability to form covalent iso-peptide complexes with gluten. TG2 can form complexes with gliadin peptides via both iso-peptide and thioester bonds. In comparison, TG3 and TG6 can form enzyme-peptide thioester complexes less efficiently and TG3 lacks the ability to form iso-peptide-linked complexes with gliadin. Systemic Responses against TG3 in DH DH patients typically produce autoantibodies against TG3 in a gluten-dependent manner. DH is often considered to develop as a result of prolonged gluten exposure and untreated CeD, but it is not known whether the autoimmune responses against TG2 and TG3 develop in certain patients in parallel, or whether TG3 merely becomes targeted via gradual loss of antigen specificity against TG2 in a subset of CeD patients. It is noteworthy, however, that while both conditions respond to a gluten-free diet (GFD), if gluten is reintroduced to the diet of DH patients, the disease may manifest with either gastrointestinal or skin symptoms. The latter response would suggest that a certain component in the permanent loss of immune tolerance is very specific to DH. 
We have reviewed the current understanding-or lack thereof-of the immunological processes potentially underlying the development of gluten-driven TG3 autoimmunity. Mechanisms of Anti-TG3 Antibody Development It is unclear why some patients with CeD develop antibodies towards TG3, and why only a subset of these TG3-antibody positive subjects have DH. In addition, how TG2 and TG3 antibody responses develop from an initial antigliadin response has for long remained unknown. However, recent advances in CeD research have suggested that anti-TG2 responses arise as a result of epitope spreading from gliadin to TG2, mediated by anti-TG2 B cells interacting with antigliadin T cells. Epitope spreading generally refers to the process where an immune response develops towards an epitope distinct from the original, disease-causing epitope [18]. Epitope spreading is well-characterized in antibody-mediated diseases such as systemic lupus erythematosus [19]. While less literature is available on the development of TG3 antibody responses, it has been proposed that TG3 antibodies originate from TG2 antibodies, as evidenced by a degree of antibody cross-reactivity in a subset of DH patients [5]. This is also supported by the fact that TG3 antibodies are rarely detected in children with CeD, in contrast to TG2 antibodies [20]. Furthermore, autoantibodies against TG2, TG3 and TG6 have been implicated in gluten-linked autoimmune disorders, implying potential overlap between the specific autoimmune responses. However, it is also possible that TG3 antibodies arise similarly to TG2 antibodies as a result of epitope spreading from gliadin. Before addressing the possible mechanisms for TG3 antibody development in more detail, we are going to briefly review the development of anti-TG2 antibody response using TG2 in CeD as an example. Although there are a few articles suggesting that TG2 antibody responses are enabled by T cells recognizing TG2 [21,22], the existence of these cells has remained enigmatic [23]. Due to this discrepancy, the development of IgA class-switched TG2 antibodies has puzzled CeD researchers. The most obvious explanation would be that anti-TG2 responses are T cell independent. This would imply that either TG2-specific B cells are B1 (T cell independent) B cells, or that TG2 is able to function as a thymus-independent antigen. The TG2 antibodies characterized from CeD patients are typically class-switched to an IgA isotype, as well as having gone through affinity maturation [24][25][26]. Both the aforementioned processes occur at very low levels in B1 B cells and are conventionally considered to require T cell help, which makes it unlikely that TG2-specific B cells in CeD and DH are B1 B cells. If TG2 were to act as a thymus-independent antigen, it could activate B cells to produce antibodies without T cell help. However, thymus-independent antigens have to possess strong crosslinking properties, such as bacterial polysaccharides that contain highly repetitive structures. TG2 does not contain such structures, and it has been found that anti-TG2 antibodies bind specific epitopes in the N-terminal region of TG2 [27,28]. It has been hypothesized that since TG2 remains enzymatically active while bound to B cell receptors (BCRs), it could crosslink B cell receptors and thereby activate these cells [24]. 
In addition to this, it has been shown that B cells are also able to bind multimers of several TG2 molecules complexed with gliadins [29,30], which could also lead to increased crosslinking of B cell receptors [29]. Although BCR crosslinking antigens can activate B cells without T cell help, in autoreactive cells, this type of recognition of strongly crosslinked antigens conventionally leads to clonal deletion [31]. It is therefore unlikely that TG2 crosslinks BCRs sufficiently to lead to T cell independent activation. It has also been shown in vitro that TG2-specific B cells have markedly reduced proliferation in response to TG2 if T cell help is unavailable [29]. Further supporting T-cell-dependence is the HLA-dependency of both CeD and DH. All of these observations support the notion that anti-TG2 antibody responses require T cell help for their initiation. Although TG3 antibody responses are less well-characterized, factors such as HLA-dependency of DH [32] could indicate that TG3 antibodies in CeD and DH are also T cell-dependent. Having now established that the production of TG2 antibodies most likely depends on T cell help, we still face the aforementioned dilemma that TG2-or TG3-specific T cells have not been universally recognized in CeD or DH patients. It has been proposed that TG2 antibodies arise as a result of epitope spreading from gliadin after the failure of tolerance mechanisms towards autoreactive B cells during development [33]. These B cells have been thought to be clonally ignorant after having evaded central tolerance [33]. TG2-specific B cells are present in CeD patient intestine [16,25,26,34], and when in the intestine, the TG2-autoreactive B cells are thought to bind complexes of TG2 bound to gliadin peptides [17,29,30,35,36]. After internalization, the TG2-gliadin complex becomes degraded into peptide fragments by endosomal proteases. These fragments are presented to CD4+ T cells on class II HLA molecules. The B cell does not distinguish which peptide fragment was the epitope bound by the BCR and, therefore, presents both TG2 and gliadin peptides to T cells. When gliadin-specific CD4+ T cells are presented with deaminated gliadin peptides, they become activated and, in turn, give the antigen-presenting B cells signals initiating class switching and affinity maturation. The process is illustrated in Figure 2. This type of mechanism is perhaps better known as the hapten-carrier effect, where allergy or autoimmune disease towards haptens develops as a result of complexes formed between carrier proteins and small molecule antigens [37]. The possibility of TG2specific B cells presenting gliadin to T cells and thereafter receiving the appropriate signals for proliferation and class switching would indeed give a plausible explanation to the dilemma presented earlier. There are, however, a few prerequisites that need to be fulfilled in order for this model to function. Most importantly, gliadin-TG2 complex formation has not yet been proven in vivo in humans, despite being well-established in vitro and in mice [17,29,30,35,36]. Assuming that TG2 is indeed able to create complexes with gliadins in vivo, in order for anti-TG2 responses to develop, the tolerance mechanisms that B cells are subjected to need to fail. du Pré et al. (2019) elegantly demonstrated that TG2-specific B cells do not differ in functionality from endogenous B cells in mice, and evaded tolerance mechanisms. 
This study was executed by creating transgenic mice possessing TG2-specific B cell receptors derived from CeD patients [33]. Assuming that clonal ignorance was the reason for the development of TG2-reactive B cells, we should be able to find these autoreactive B cells in the general population. Finding these autoreactive B cell clones in healthy individuals would prove that TG2-reactive B cells develop endogenously. However, their identification of such cells might prove difficult before they have been clonally expanded as a result of activation. Although efforts have been made in order to ascertain how TG2 antibodies develop, knowledge on the development of TG3 responses is lacking. TG3reactive B cell clones have not been modelled in animal studies, nor has their interaction with gliadin-specific CD4+ T cells been assessed. What we do know is that TG3 has been found to create complexes with gliadin peptides [17]. The complexes created by TG3 and gliadin in vitro are linked through a thioester bond, whereas TG2 has been found to create both iso-peptide and thioester linkages [17]. Findings of TG3 forming complexes with gliadin [17] render it plausible for us to imagine that the mechanism for anti-TG3 antibody development could be somewhat similar to that of TG2 antibody development. A plausible model for the development of TG3 antibodies in DH could follow the mechanisms described above (Figure 2), where B cells autoreactive to TG3 evade the body's tolerance mechanisms and develop like any other B cell. Once these B cells locate to the intestine, they internalize complexes of gliadin and TG3, and present gliadin peptides to gliadin-specific T cells. In individuals possessing the predisposing genetic background, T cells give activating signals to the B cells that presented the gliadin. In this way, epitope spreading from gliadin to TG3 allows for the development of class-switched, TG3-reactive plasma cells. This model suggests that TG3 responses arise from strictly TG3-reactive B cells, and not as a result of cross-reactivity between TG2 antibodies with TG3. Assuming B cells autoreactive to all TG isoforms in addition to TG2 and TG3 evade tolerance mechanisms in this way, we would also have an explanation as to why some DH and CeD patients have autoantibodies against TG6 [38]. This hypothesis also requires TG3 to be available to the intestinal B cells. While TG2 expression in the intestine is well-established [39], the evidence for TG3 expression in the intestine is scarce. TG3 has been found in sporadic cells in the intestine of selected DH patients via fluorescent staining [40], and anti-TG3 plasma cells have been found in approximately half of DH patients following gluten challenge, but only in one CeD patient [41]. Thus, the expression of TG3 in the intestine is low, if present at all. According to The Human Protein Atlas, TG3 is expressed in the esophagus, which opens up the possibility of TG3 shedding into the digestive track and ending up in the intestinal lumen, similarly to TG2, which is thought to shed from dying enterocytes [42], enabling the antigen to become available to B cells. TG3 could, theoretically, follow a similar pattern of release from the epithelium into the esophagus, leading to small amounts of the antigen finding their way into the intestine. It is of course entirely possible that the anti-TG3 antibody response observed in DH originates from a distinct site, and not the gastrointestinal tract. 
However, given the scarce literature available on DH-specific immune responses, one can only speculate on the plethora of options. We have chosen to base our reasoning on the literature available on TG2 responses in CeD. Assuming that TG2 antibodies and TG3 antibodies arise from separate B cells, we would expect the production of different TG antibodies to occur at roughly the same rate. However, we mostly observe TG2 antibodies in CeD patients [20,[43][44][45][46]. One explanation for this discrepancy could be antigen availability. Given that TG2 is able to create iso-peptide and thioester linkages with gliadin, while TG3 only creates thioester linkages [17], it is conceivable that TG2 is able to sequester most of the available gliadin proteins during gluten exposure as a result of more effective complex-forming abilities than TG3. This could lead to mostly anti-TG2 B cells becoming activated, unless the gluten exposure is prolonged as proposed in DH development [5,47,48], in which case more antigen would be available for anti-TG3 B cells. As noted, TG2 is abundantly expressed in the gut while TG3 is not. This would also contribute to the restricted access of B cells to TG3. Figure 2. Epitope spreading from gliadin to TG2 or TG3. A simplified depiction of the suggested mechanism for epitope spreading during CeD and DH. B cells specific to TG2 and/or TG3 internalize and process gliadin-TG2 or -TG3 complexes through the endocytic pathway, leading to presentation of peptides on HLA II molecules. Gliadin-specific CD4+ T cells give survival signals to gliadin-presenting B cells, while TG2- or TG3-presenting B cells do not receive survival signals. Activated B cells class-switch into IgA and produce anti-TG2 or -TG3 antibodies. Created with BioRender.com. Another possible model for TG3 antibody development assumes that TG3 antibodies originate from the cross-reactivity of TG2 antibodies with TG3. This model suggests that initial TG2 responses with weak affinity to TG3 result in the eventual development of high-affinity TG3 antibodies. 
While some studies have indicated that TG3 antibodies originate from cross-reactive TG2 antibodies [5], others have reported that TG2 and TG3 antibodies are not mutually cross-reactive [27,41]. Lack of cross-reactivity has been shown for both patient-derived TG2 [27] and TG3 [41] antibodies. However, the idea of separate TG2 and TG3 reactive B cells existing (as suggested above) does not explain why CeD patients do not always present with TG3 antibodies [20,[43][44][45][46], and why not all CeD patients develop DH symptoms despite possessing TG3 antibodies [5,47,48]. It is known that the TG2 epitopes recognized by B cells are conformational [27], opening up the possibility of shared conformational epitopes between TG2 and TG3 when bound to different substrates. The process driving the development of high-affinity TG3 antibodies from initial low-affinity, cross-reactive TG2 antibodies is unknown and unresearched, but would most likely require repeated cycles of gluten exposure and prolonged inflammation. By assessing the degree of somatic mutations in anti-TG3 BCRs compared to anti-TG2 BCRs, it might be possible to determine whether anti-TG3 BCRs undergo affinity maturation to a higher degree than anti-TG2 BCRs. This information would be valuable in ascertaining whether anti-TG3 responses arise from anti-TG2 cross-reactivity with TG3. The previously discussed antigen availability in the intestine could also play a role in the transition from low-to high-affinity TG3 antibodies. The current model for TG2 and TG3 antibody development suggests that anti-TG responses are T cell-dependent. B cells escape tolerance towards the autoantigens by receptormediated endocytosis of TG2/TG3-gliadin complexes, presenting gliadin to gliadin-specific CD4+ T cells. As for TG3 antibodies, very little research has been conducted to establish their origin. While it is possible that TG3 antibodies initially arise from strictly TG3 reactive B cells, data on CeD and DH disease progression speak against it. Due to the fact that DH is rare in children with CeD and TG3 seropositivity in CeD increases with age [20], it would seem more likely that TG3 antibody responses are somehow developmentally tied to anti-TG2 antibody responses. Origins of Serum and Skin Antibodies This section will discuss the literature available on the skin deposits of IgA and TG3 in DH, as well as the plausible sites of origin for the serum TG3 antibodies, once again using anti-TG2 antibodies in CeD as an example. The distinguishing feature of DH is skin lesions, accompanied by closely situated deposits of IgA and TG3. These IgA-TG3 complexes are the primary diagnostic criteria for DH [46]. It is unclear where the complexes of TG3 and IgA in DH skin are formed, but it is currently assumed that they are either TG3-IgA complexes originating from the circulation [49], or IgA from circulation binding and forming complexes with TG3 in situ [50]. Complexes originating from the circulation are also supported by findings of TG3 being present in serum [15]. The complexes of TG3 and IgA are found on the dermal-epidermal boundary, where TG3 is not endogenously expressed [51]. While TG3-IgA complexes are a characteristic feature in DH, they do not seem to be pathogenic by themselves, as they are often found in areas of the skin adjacent to the actual lesions in DH [52,53], as well as occasionally also in CeD patients not exhibiting any DH symptoms [54][55][56]. 
As for the IgA in these complexes, very little research on the characteristics and origin is available. The scarce literature available suggests that the IgA in DH skin is in fact dimeric [57], thereby suggesting a connection with the gut. The skin-deposited antibodies in DH patients are mostly of the IgA1 subclass [58,59], like the majority of anti-TG2 IgA found in CeD patient serum [34]. Due to the paucity of literature studies on IgA-TG3 deposits in the skin, we will be focusing on the origin of serum TG3 and TG2 antibody responses for the remainder of this section. In DH, TG3 antibody-secreting plasma cells have been found in the small intestine of patients [40,41]. Although anti-TG2 plasma cells are well-established in the gut of CeD patients [16,25,26,34,40,41], studies have found that serum TG2 antibodies and TG2 antibodies produced in the intestine have distinct molecular composition [34]. This observation opens up the possibility that individual B cell clones have given rise to distinct plasma cell populations responsible for the serum and gut antibodies [34]. Although both the gut and serum antibodies were found to target the same epitope in TG2, and had matching amino-acid sequences in the antigen-binding regions, the serum antibodies were found to be associated with less J-chain [34]. Since J-chain is the component that allows dimeric IgA to be transported into the gut lumen from the intestinal tissue, the authors hypothesized that the majority of the serum TG2 antibodies are not produced in the gut [34]. It has indeed been found that plasma cells formed during gut immune responses can contribute to the bone marrow plasma cell population, both in mice [60] and humans [34,61], and therefore it is possible that TG2 and TG3 antibodies in both DH and CeD patient serum originate from bone marrow. There is however an inconsistency in this hypothesis-the gluten dependency of serum TG2 and TG3 antibodies [18,40,62]. If serum TG2 and TG3 antibodies were produced by bone marrow plasma cells, we could expect to detect low titers regardless of GFD, as bone marrow plasma cells produce antibodies at a constant rate irrespective of antigen exposure. Both TG2 and TG3 antibody levels respond to gluten [18,40,62], with the exception of some CeD patients who experience no reduction in TG3 antibodies during GFD [20]. Regardless, due to the gluten dependency of the TG2 and TG3 antibodies [18,40,62], it is unlikely that the antibodies in patients' sera originate from long-lived bone marrow plasma cells. In general, the functions and origins of serum IgA in humans are less well-established than those of mucosal IgA. However, it has been suggested that some of the B cells activated in gut-associated lymphoid tissues could migrate to the marginal zone in the spleen and contribute to the serum IgA pool from there [63]. This type of mechanism could explain the gluten dependency of DH and CeD TG2 and TG3 antibody responses, yet more research is required to elucidate the dynamics of humoral immune responses originating from the gut. Little is known about the IgA-TG3 complexes in the skin, as well as serum TG3 antibodies in DH. While studies in CeD suggest that TG2 antibodies in the serum may not originate from the gut, no corresponding characterizations have been made of TG3 antibodies. However, the fact that IgA-TG3 complexes in the skin are dimeric could point towards an intestinal origin. 
Conclusions Although considerable advances have been made to ascertain how anti-TG2 responses develop in CeD, little attention has been paid to anti-TG3 immune responses and DH. Based on the literature available, we have in this review summarized what is known of the development and characteristics of anti-TG3 antibodies. Due to the lack of knowledge on anti-TG3 antibody responses in DH, we have used the literature available on anti-TG2 responses in CeD to hypothesize how anti-TG3 antibody responses might conceivably develop. The current view of anti-TG2 antibody development suggests that clonally ignorant anti-TG2 B cells are able to present gliadin to gliadin-specific CD4+ T cells in the intestine, thereby receiving activating signals. There are mainly two plausible models for the development of anti-TG3 antibody responses. These responses could either develop from TG3-specific, non-cross-reactive B cells, or they could develop from anti-TG2 antibody responses. Given that DH manifests almost exclusively in adults with CeD, it is likely that anti-TG3 responses arise from initial anti-TG2 responses, although there is little direct evidence supporting either of the two models. More research efforts should be directed towards studying B cell responses in DH. Virtually nothing is known, for example, of the existence and longevity of TG3-linked memory cells, either of T or B cell type. Likewise, further studies should be conducted both on the expression patterns of the autoantigen TG3 and on the occurrence of autoimmune-related phenomena such as the pathognomonic TG3-IgA deposits. The TG3 autoantibodies appear to originate from the gut, but it is puzzling why the TG3-linked DH appears to manifest in limited areas of the skin. This is even more so since, in the light of current knowledge, TG3 is also expressed, e.g., in the epithelium of the esophagus. It is thus entirely possible that anti-TG3 deposits could be discovered at or near other sites of TG3 expression. Author Contributions: Conceptualization, H.K., E.K. and K.L.; writing-original draft preparation, H.K., E.K. and K.L.; writing-review and editing, H.K., E.K., T.S. and K.L.; funding acquisition, T.S. and K.L. All authors have read and agreed to the published version of the manuscript.
v3-fos-license
2016-06-17T22:56:47.249Z
2015-07-13T00:00:00.000
293778
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2015.00200/pdf", "pdf_hash": "fad5d96bc26b7a369d598463c99558fc054cf0ce", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46248", "s2fieldsofstudy": [ "Medicine" ], "sha1": "fad5d96bc26b7a369d598463c99558fc054cf0ce", "year": 2015 }
pes2o/s2orc
Serum PINP, PIIINP, galectin-3, and ST2 as surrogates of myocardial fibrosis and echocardiographic left ventricular diastolic filling properties Objectives and Background: Serum biomarkers have been proposed to reflect fibrosis of several human tissues, but their specific role in the detection of myocardial fibrosis has not been well-established. We studied the association between N-terminal propeptide of type I and III procollagen (PINP, PIIINP, respectively), galectin-3 (gal-3), soluble ST2 (ST2), and myocardial fibrosis measured by late gadolinium enhanced cardiac magnetic resonance imaging (LGE CMR) and their relation to left ventricular diastolic filling properties measured by tissue Doppler echocardiography (E/e') in patients with stable coronary artery disease (CAD). Methods and Results: We determined the PINP, PIIINP, gal-3, and ST2 serum levels and performed LGE CMR and echocardiography on 63 patients with stable CAD without a history of prior myocardial infarction. Myocardial late gadolinium enhancement T1 relaxation time was defined as a specific marker of myocardial fibrosis. ST2, PINP, and PIIINP did not have a significant correlation with the post-LGE T1 relaxation time tertiles (NS for all), but the lowest post-LGE T1 relaxation time tertile had significantly higher gal-3 values than the other two tertiles (p = 0.002 and 0.002) and higher E/e' values (p = 0.009) compared to the highest T1 relaxation time tertile. ST2 (p = 0.025 and 0.029), gal-3 (p = 0.003 and < 0.001) and PIIINP (p = 0.001 and 0.007) levels were also significantly higher in the highest E/e' tertile, compared to the other two tertiles. Conclusions: Elevated serum levels of gal-3 reflect the degree of myocardial fibrosis assessed by LGE CMR. Gal-3, ST2, and PIIINP are also elevated in patients with impaired LV diastolic function, suggesting that these biomarkers are useful surrogates of structural and functional abnormality of the myocardium. Introduction Impaired left ventricular (LV) diastolic filling in subjects with preserved LV function has been associated with worse outcome among coronary artery disease (CAD) patients and patients with heart failure with preserved LV function (Rusinaru et al., 2014). The etiological background for impaired LV diastolic filling properties is not well-known, but one possible underlying mechanism could be the accumulation of diffuse fibrosis in the myocardium. Diffuse interstitial fibrosis can be assessed with cardiovascular magnetic resonance imaging (CMR) by using late gadolinium enhancement (LGE). Relaxation time T1 mapping has been shown to correlate with interstitial fibrosis measured from endomyocardial biopsies or with invasively measured left ventricular stiffness (Iles et al., 2008; Miller et al., 2013; Ellims et al., 2014). Many biomarkers of fibrosis have also been proposed to reflect myocardial fibrosis. Biomarkers related to fibrosis, such as N-terminal propeptide of type I and III procollagens (PINP and PIIINP, respectively), galectin-3 (gal-3), and soluble ST2 protein (ST2) have been associated with poor outcome among heart failure patients (Cicoira et al., 2004; Pascual-Figal et al., 2009; Velagaleti et al., 2010; De Boer et al., 2011; Bayes-Genis et al., 2012; Lok et al., 2013). Although these biomarkers may have predictive value, their ability to detect myocardial interstitial fibrosis is not well-established. 
The aim of this study was to determine the association between fibrosis biomarkers and myocardial interstitial fibrosis measured by LGE CMR and their relation to left ventricular diastolic filling measured by tissue Doppler echocardiography in patients with stable coronary artery disease (CAD). Patient Population Sixty-three consecutive patients with angiographically documented stable CAD were prospectively recruited from the ARTEMIS-Oulu database (Cardiovascular Complications in Type II Diabetes Study; registered at ClinicalTrials.gov, Record 1539/31/06, Identifier NCT01426685). The exclusion criteria included rhythm other than sinus rhythm, reduced left ventricular ejection fraction, greater than mild valvular disease or previous valve surgery, clinical history of myocardial infarction (Q-waves in ECG, myocardial scar, or segmental wall motion abnormalities seen in echocardiography), permanent pacemaker, significant renal disease, and claustrophobia. The study was approved by the local institutional ethics committee. Written informed consent was obtained from all the patients. Biomarkers The concentrations of gal-3 and ST2 were determined from serum samples. Serum was prepared by allowing the blood to clot for 30 min followed by centrifugation at 2000 ×g for 10 min. The serum was stored at −20 °C until analyzed. ST2 levels were analyzed using a sandwich enzyme-linked immunosorbent assay (ELISA) (Human ST2/IL-1 R4 Quantikine ELISA, R&D Systems Inc., Minneapolis, MN) with a sensitivity of 5.1 pg/mL. Gal-3 levels in serum were determined by an enzyme-linked immunosorbent assay (ELISA) from BG Medicine (Waltham, MA, USA). The limit of detection (LoD) for the assay was 1.13 ng/mL (Christenson et al., 2010). Echocardiography A thorough transthoracic echocardiographic evaluation was made utilizing the same General Electric Vivid 7 system for all patients. Parasternal long axis view and M-mode were used to obtain left ventricular (LV) diameters and wall thickness. LV mass was derived from the ASE equation. LV ejection fraction was measured from the apical view by the biplane method from the 2- and 4-chamber views. Left ventricular diastolic filling was assessed by the end-expiration ratio of peak early diastolic mitral velocity to tissue Doppler-derived peak early diastolic mitral annular velocity measured in the septal mitral annulus (E/e'). Late Gadolinium Enhancement CMR CMR with the same 1.5T scanner (GE Optima MR450w, GE Healthcare, Milwaukee, WI) and a 32-channel cardiac coil was performed on all the patients. Mid-ventricular short axis images were obtained. T1 was measured, i.e., T1 mapping was performed, with an ECG-triggered Look-Locker sequence (TR 4440 s, TE 2.016 ms, 30 TIs between 95 and 1279 ms, in-plane resolution 1.48 mm, slice thickness 8 mm). The images were acquired during breath holds. MRI images were analyzed off-line using an in-house MATLAB application (Matlab, MathWorks Inc, Natick, MA). Regions of interest were segmented manually. The segments with visible LGE, i.e., replacement fibrosis, were excluded from the analysis. The possible position differences between images with different TIs were taken into account by segmenting each image separately and calculating ROI-wise T1 using the mean intensities of each image. T1 relaxation time was measured 10-15 min after contrast agent injection (0.4 mL/kg, but not more than 30 mL, of Dotarem; 500 mM, Guerbet AG, Zürich, Switzerland). 
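The ROI-wise T1 calculation described above can be illustrated with a small fitting sketch. The paper does not specify the fitting model, so the following assumes the standard three-parameter Look-Locker recovery S(TI) = A − B·exp(−TI/T1*) with the usual correction T1 = T1*·(B/A − 1); the simulated signal, noise level and parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# 30 inversion times between 95 and 1279 ms, matching the acquisition described above
ti = np.linspace(95.0, 1279.0, 30)

def look_locker(ti, a, b, t1_star):
    # Three-parameter Look-Locker recovery (signed signal assumed for simplicity)
    return a - b * np.exp(-ti / t1_star)

# Simulated mean ROI signal for one slice (illustration only)
rng = np.random.default_rng(0)
signal = look_locker(ti, 100.0, 190.0, 470.0) + rng.normal(0.0, 1.0, ti.size)

# Fit the recovery curve to the mean ROI intensities at each TI
(a, b, t1_star), _ = curve_fit(look_locker, ti, signal,
                               p0=[signal.max(), 2 * signal.max(), 400.0])

# Look-Locker correction of the apparent T1* to the post-contrast T1
t1 = t1_star * (b / a - 1.0)
print(f"fitted post-contrast T1 ~ {t1:.0f} ms")
```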
Statistical Analysis All continuous data are presented as mean ± standard deviation. IBM SPSS Statistics v. 21 (IBM Corp., Armonk, NY) was used to analyze all statistics. Because of the skewed distribution of the variables, no correlation coefficients were analyzed. For the same reason, and due to the small study population, we tested differences between tertiles divided according to LGE T1 relaxation time and E/e'. One-way analysis of variance was used, followed by a post hoc Bonferroni test, in comparisons between the tertiles. All results with p < 0.05 were considered statistically significant. Age, gender, BMI, diabetes, history of smoking, medication, blood pressure, left ventricular ejection fraction, LV mass and LV mass index, Syntax score, renal function, HbA1c levels, serum cholesterol levels, high sensitive CRP, and BNP levels did not differ significantly between the T1 relaxation time tertiles (Table 1). Main Findings Gal-3 was the only serum biomarker that had a significant correlation to diffuse myocardial fibrosis estimated by LGE CMR T1 mapping in patients with stable CAD. Elevated levels of the other biomarkers, such as ST2 and PIIINP, had no significant association with fibrosis, but these biomarkers were associated with impaired left ventricular filling assessed by tissue Doppler echocardiography. Furthermore, LGE T1 relaxation time was closely associated with E/e', showing that diffuse myocardial fibrosis is an important determinant of cardiac diastolic function in patients with uncomplicated stable CAD. Myocardial fibrosis had no significant relationship with any demographic variable, left ventricular systolic function, severity of CAD (Syntax score), or any metabolic risk variable. Cardiac MRI CMR has emerged as a non-invasive imaging method for focal fibrosis, but it also allows the assessment of diffuse interstitial fibrosis. Isolated post-contrast T1 relaxation time has been shown to have a strong correlation with histologically confirmed myocardial fibrosis in small studies involving patients with heart failure (Iles et al., 2008; Miller et al., 2013). T1 relaxation time also has some correlation with echocardiographic markers of impaired diastolic function, such as septal e' and E/e', in patients with diabetes but no underlying CAD, as a marker of so-called diabetic cardiomyopathy (Jellis et al., 2011; Ng et al., 2012). One study including patients with ischemic cardiomyopathy also showed a correlation between E/e' and visually estimated LGE (Raman et al., 2009). Isolated post-contrast T1 relaxation time was also the only variable that correlated with invasively measured LV stiffness after multivariate analysis in cardiac transplant recipients (Ellims et al., 2014). Previous studies using T1 relaxation time analysis to estimate the amount of fibrosis have excluded patients with stable uncomplicated CAD. In our study we were able to show that, also in patients with CAD, isolated post-contrast T1 relaxation time correlates with diastolic filling properties, even when areas of visible LGE were excluded from the analysis. In post hoc analysis we could show that the patients in the tertile with the highest amount of interstitial fibrosis in CMR had significantly higher E/e' values than the tertile with the least fibrosis. FIGURE 3 | The correlation between diastolic function and biomarkers. When subjects were divided into tertiles according to LV diastolic function (E/e' values), the tertile with the most impaired LV diastolic function had significantly higher serum levels of GAL-3, ST2, and PIIINP compared to the other two tertiles. 
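For readers who want to reproduce the tertile comparison on their own data, the ANOVA-with-Bonferroni workflow described in the Statistical Analysis section could look roughly like the following sketch; the input file and column names are hypothetical and this is not the original SPSS analysis.

```python
from itertools import combinations

import pandas as pd
from scipy import stats

df = pd.read_csv("cad_cohort.csv")          # hypothetical columns: t1_post, gal3
# Divide patients into tertiles of post-contrast T1 relaxation time
df["t1_tertile"] = pd.qcut(df["t1_post"], q=3, labels=["low", "mid", "high"])

# One-way ANOVA: does galectin-3 differ between the T1 tertiles?
groups = [sub["gal3"].to_numpy() for _, sub in df.groupby("t1_tertile", observed=True)]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

# Bonferroni-corrected pairwise t-tests between the three tertiles
pairs = list(combinations(["low", "mid", "high"], 2))
for a, b in pairs:
    _, p = stats.ttest_ind(df.loc[df["t1_tertile"] == a, "gal3"],
                           df.loc[df["t1_tertile"] == b, "gal3"])
    print(f"{a} vs {b}: Bonferroni-adjusted p = {min(p * len(pairs), 1.0):.3g}")
```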
Biomarkers Myocardial collagen tissue consists mostly of collagen type I and III. Procollagen N-terminal peptides, which can be measured from blood samples, have been used as surrogates of myocardial fibrosis. Nevertheless there has been some controversial evidence of PINP and PIIINP as biomarkers of collagen biosynthesis and as predictors of outcome of heart failure patients. Gal-3 and ST2 are more novel biomarkers of fibrosis. Gal-3 is an important mediator that induces fibroblasts to proliferate and deposit collagen, which contributes to myocardial fibrosis and remodeling. ST2 is a member of the interleukin receptor family and the gene expression of ST2 is upregulated in fibroblasts and cardiomyocytes subjected to mechanical stress. ST2 also prevents the IL-33 effects in reducing fibrosis and hypertrophy. GAL-3 (Lok et al., 2010(Lok et al., , 2013De Boer et al., 2011;Ho et al., 2012;Lopez-Andrés et al., 2012) and ST2 (Pascual-Figal et al., 2009;Manzano-Fernandez et al., 2011) have been shown to have predictive value for adverse cardiac events and mortality especially in patients with heart failure and ST2 also in patients after acute coronary syndrome (Eggers et al., 2010). In this study we found a correlation between the LGE T1 relaxation time and GAL-3, but the other biomarkers of fibrosis did not have a significant correlation to myocardial fibrosis measured by LGE CMR. One of the reasons for these observations might be that ST2 concentrations are thought to raise as a result of increased myocardial strain whereas gal-3 can be seen as an initiator of collagen deposition and therefore as a better surrogate of incipient or existing interstitial fibrosis even without systolic or diastolic impairment. In the tertile analysis divided according to diastolic function i.e., E/é-values, we were able to show significant differences in the biomarker levels of Gal-3, ST2, and PIIINP between the most marked diastolic impairment tertile compared to the other two tertiles. These data provide evidence of the utility of these serum biomarkers in the rapid diagnosis of cardiac diastolic dysfunction, but these findings need confirmation in larger patient samples. Limitations The relatively small sample size of the study prevents definite conclusions regarding the lack of correlation between some of the biomarkers, such ST2 and PIIINP, and myocardial fibrosis measured by CMR. Measurement of relaxation time T1 by the LGE method may also have some limitations in terms of reliable quantification of diffuse myocardial fibrosis. Despite these limitations, we feel that the present findings provide some useful information about the value of serum biomarkers as predictors of both myocardial fibrosis and cardiac diastolic properties. Conclusion In patients with uncomplicated CAD, serum biomarkers, especially gal-3, are associated with diffuse interstitial fibrosis imaged with cardiac MRI. Additionally, these biomarkers are associated with echocardiographically measured impaired LV diastolic filling properties. These results suggest that LV interstitial fibrosis plays an important role in impaired diastolic function among CAD patients and the level of cardiac diastolic dysfunction can be assessed with serum biomarkers of fibrosis.
A Case of Ectopic Odontogenic Ghost Cell Tumor: Histogenetic Features of a New Entity Background: Odontogenic tumors arising from extra-alveolar sites are extremely rare. Dentinogenic ghost cell tumor (DGCT) is an uncommon odontogenic neoplasm characterized by CTNNB1 mutation, ghost cell appearance, and dentinoid-like calcification. We present a case of an ectopic DGCT arising from a calcifying odontogenic cyst in the floor of the mouth. Case presentation: A 72-year-old man presented with a painless sublingual swelling. Imaging revealed a multi-lobulated, solid-cystic mass on the floor of the mouth. Cytology showed folded epithelial clusters composed of basaloid cells, keratinized material, and dentinoid matrix. Histology also revealed a multi-cystic, cribriform to solid nest. Immunohistochemically, CK19, CK5/6, bcl-2, and p63 were diffusely positive. A CTNNB1 mutation was detected, leading to the final diagnosis of an ectopic DGCT. There was no recurrence during a 6-month follow-up. Conclusion: This is the first report to comprehensively describe the clinicopathological features of an ectopic DGCT of odontogenic origin, developing similarly to that of a true odontogenic DGCT. Accurate diagnosis of this rare entity is necessary to avoid overtreatment. We present a case of ectopic DGCT arising from a COC on the floor of the mouth. This is the first report of an ectopic DGCT having an odontogenic origin, with a development pathway and precursor lesion similar to that of a true odontogenic DGCT. We believe this report could promote accurate diagnosis of this rare entity. Case Presentation A 72-year-old Japanese man with no remarkable medical or family history presented with a painless sublingual swelling discovered during follow-up for myocardial infarction. Clinical examination revealed an elastic mass in the sublingual area covered by normal mucosa. Magnetic resonance imaging (MRI) showed a well-circumscribed, lobulated, multi-cystic solid mass located on the floor of the mouth (Fig. 1a). There was no connection between the mass and the gingiva and jaw bone (Fig. 1b). Fine needle aspiration showed folded epithelial clusters with duct-like formation (Fig. 2a). These clusters consisted of basaloid cells lacking prominent nuclear atypia and admixed orange G-positive, round material lacking nuclei. Peripheral palisading and some hyalinized dentin-like contents were observed (Fig. 2b, c). Cytologically, a basaloid tumor was suspected, and a diagnosis of "atypia of undetermined significance" was considered. Based on the location and cytological features, a sublingual tumor was suspected, and tumor excision was performed. Intraoperative findings revealed a circumscribed mass with no connection to the alveolar bone and oral floor mucosa. The surgical specimen was a tan-white, elastic, lobulated solid mass, including multiple small cystic spaces (Fig. 3a). Histological examination revealed a multi-cystic, solid mass surrounded by a thin fibrous capsule. The cyst lining, of variable thickness, was composed of squamous-to-polygonal epithelial cells, which transitioned into the plexiform and cribriform component adjacent to the dentinoid material deposition (Fig. 3b-d). Both the cyst and solid components had anuclear eosinophilic cells (Fig. 3e). The tumor nest contained basaloid cell proliferation with peripheral palisading (Fig. 3e). Tumor cells showed hyperchromatic nuclei with mild atypia and mitoses (2/10 HPF). There was no invasion of the adjacent salivary gland, adipose tissue, lymphovascular, or perineural structures.
In the gland-like structure, alcian blue staining showed focal positivity, whereas d-PAS was negative. Immunohistochemically, CK19, CK5/6, bcl-2, and p63 were diffusely positive. Nuclear accumulation of β-catenin was detected (Fig. 3f), and the Ki-67 index was 5%. Myoepithelial cell markers, such as S-100, GCDFP, and WT1, were absent. Immunostaining for ductal markers, such as CK7, was positive in the cyst wall, whereas that for CEA was negative. There were no true ducts composed of ductal cells and myoepithelial or basaloid cells. Next-generation sequencing (AmpliSeq Cancer Hotspot Panel V2) revealed a missense point mutation in CTNNB1 (p.Ile35Ser, c.104T > G). A final diagnosis of DGCT associated with COC on the floor of the mouth was established. The resection margin was tumor-free, and no additional treatment was performed postoperatively. The patient was followed up for 6 months with no sign of recurrence on MRI. Discussion and Conclusion Odontogenic ghost cell lesions, originally described by Gorlin et al. in 1962 [3], comprise COC, DGCT, and GCOC [1,2]. DGCT is considered the solid counterpart of COC and is occasionally associated with it. These lesions can be classified as central (intraosseous) or peripheral (gingival or alveolar mucosal) based on their clinical presentation, and an extraoral or ectopic DGCT is not yet an established entity [1,2]. Clinically, most DGCTs occur in the jaw bone (maxilla:mandible = 1:1) and show benign but locally infiltrating behavior [1,2]. They are more common in men (M:F = 2:1), especially at a younger age (range 11-79 years, mean 39.7) [1]. The patients usually complain of progressive or slow-growing nodules with swelling, with or without pain [1,6]. Radiologically, DGCT shows a cystic or solid mass with calcification [7]. In the present case, although the patient was older than the mean age for DGCT occurrence, the imaging findings were consistent with those reported previously [1,2,7]. To our knowledge, this is the first report to meticulously describe the cytological findings of DGCT. The cell cluster chiefly consisted of basaloid cell proliferation with peripheral palisading. These findings are consistent with those of basal cell adenoma/carcinoma, and we also suspected salivary gland tumors. However, calcification and admixed orange G-positive structures without nuclei, similar to ghost cells, are a differential feature and therefore an important cytological feature of DGCT. The histological features of DGCT include basaloid cell proliferation with ameloblastoma-like epithelial nests resembling the stellate reticulum. Aberrant keratinization was seen with ghost cells having enlarged, polygonal eosinophilic cytoplasm, with or without nuclei, and immature to mature dentinoid or dentino-osteoid structures [1,2]. Findings indicating the odontogenic nature of the tumor and its transition from a COC are important. The neoplastic cells have been shown to be strongly positive for cytokeratin AE1/3, 5, 7, 14, and 19, but negative for vimentin, desmin, SMA, and CD34. The Ki-67 index has been reported to be < 5% [1,2]. These histological findings are consistent with those of the present case. Considering the anatomical site and histogenetic features, the most important differential diagnosis in the present case is basal cell adenoma/carcinoma. However, basal cell adenoma/carcinoma exhibits a two-cell morphology consisting of CK7-positive ductal structures and p63-, SMA-, CK5/6-, WT-1-, or podoplanin-positive myoepithelial/basal cell components, unlike the findings of the present case.
Moreover, all above-mentioned basaloid tumors with ghost cell differentiation lack histological findings of dentinoid material and precursor COC-like cystic components. To date, only two cases of ectopic dentinogenic ghost cell-like lesions have been reported. One was a DGCT-like lesion in the ethmoid sinus of an 8-year-old boy [4], and the other was a GCOC-like carcinoma on the floor of the mouth of a 54-year-old man [5]. Both exhibited characteristic odontogenic epithelium proliferation with ghost cells but lacked the anatomic association to the oral and alveolar mucosa and bone on radiological, intraoperative, and pathological examinations. Further, CTNNB1 mutation was detected in the latter case. The clinicopathological features of the present case were similar to those reported previously. Moreover, the characteristic precursor lesion, COC, was detected, with no history of an odontogenic tumor, trauma, or surgery that could have caused tumor dissemination or metastasis. Based on this clinical, histological, and genetic evidence, a final diagnosis of extraosseous DGCT arising from a COC in the floor of mouth was confirmed. The development of DGCT occurs through two major pathways: de novo or from a preceding COC. However, the true etiology of an extraosseous DGCT remains unclear [4,5,15]. Peripheral DGCT can originate from oral epithelium following trauma or exposure to an irritating agent [6,15]. In the present case, these factors were absent, and the lesion had no connection with the oral mucosa. Therefore, ectopic odontogenic epithelium may have been associated with the tumor's development. The recurrence rates of central and peripheral DGCT are 73% and 0%, respectively [1]. While segmental resection is indicated for central DGCT, simple excision is recommended for peripheral DGCT. As an ectopic DGCT is extremely rare, the tumor aggressiveness and optimal treatment are unknown. Liu et al. [4] reported no recurrence of an ectopic DGCT arising from the ethmoid sinus after endoscopic sinus surgery, during a 2-year follow-up. Similar to our findings, they observed that the Ki-67 labeling index was not high, and there was no invasion of the adjacent tissue, vascular, and perineural structures, suggestive of low malignant potential of ectopic DGCT. In our opinion, simple excision of the tumor is therefore justified, and further studies are needed to clarify the nature of the tumor. This report described a case of DGCT occurring as an ectopic lesion. Despite characteristic histological features, its diagnosis is difficult. Comprehensive clinicopathological examination is important to accurately identify this rare entity to avoid misdiagnosis and overtreatment. List of Abbreviations COC, calcifying odontogenic cyst; DGCT, dentinogenic ghost cell tumor; GCOC, ghost cell odontogenic carcinoma; HE, hematoxylin and eosin; MRI, magnetic resonance imaging Declarations Ethics approval and consent to participate This brief report was conducted in accordance with the Declaration of Helsinki, and the study protocol was approved by the Institutional Review Board of Kansai Medical University Hospital (Approval no.: 160954). Written informed consent was obtained from the patient. Consent for publication Written informed consent was obtained from the patient for the publication of this report. Availability of data and material Not applicable Competing interests The authors declare that they have no competing interests.
Detection of prostate cancer bone metastases with fast whole-body 99mTc-HMDP SPECT/CT using a general-purpose CZT system Background We evaluated the effects of acquisition time, energy window width, and matrix size on the image quality, quantitation, and diagnostic performance of whole-body 99mTc-HMDP SPECT/CT in the primary metastasis staging of prostate cancer. Methods Thirty prostate cancer patients underwent 99mTc-HMDP SPECT/CT from the top of the head to the mid-thigh using a Discovery NM/CT 670 CZT system with list-mode acquisition, 50-min acquisition time, 15% energy window width, and 128 × 128 matrix size. The acquired list-mode data were resampled to produce data sets with shorter acquisition times of 41, 38, 32, 26, 20, and 16 min, narrower energy windows of 10, 8, 6, and 4%, and a larger matrix size of 256 × 256. Images were qualitatively evaluated by three experienced nuclear medicine physicians and quantitatively evaluated by noise, lesion contrast and SUV measurements. Diagnostic performance was evaluated from the readings of two experienced nuclear medicine physicians in terms of patient-, region-, and lesion-level sensitivity and specificity. Results The originally acquired images had the best qualitative image quality and lowest noise. However, the acquisition time could be reduced to 38 min, the energy window narrowed to 8%, and the matrix size increased to 256 × 256 with still acceptable qualitative image quality. Lesion contrast and SUVs were not affected by changes in acquisition parameters. Acquisition time reduction had no effect on the diagnostic performance, as sensitivity, specificity, accuracy, and area under the receiver-operating characteristic curve were not significantly different between the 50-min and reduced acquisition time images. The average patient-level sensitivities of the two readers were 88, 92, 100, and 96% for the 50-, 32-, 26-, and 16-min images, respectively, and the corresponding specificities were 78, 84, 84, and 78%. The average region-level sensitivities of the two readers were 55, 58, 59, and 56% for the 50-, 32-, 26-, and 16-min images, respectively, and the corresponding specificities were 95, 98, 96, and 95%. The number of equivocal lesions tended to increase as the acquisition time decreased. Conclusion Whole-body 99mTc-HMDP SPECT/CT can be acquired using a general-purpose CZT system in less than 20 min without any loss in diagnostic performance in metastasis staging of high-risk prostate cancer patients. Supplementary Information The online version contains supplementary material available at 10.1186/s40658-022-00517-4. Introduction Whole-body bone SPECT/CT is a more accurate method than planar bone scintigraphy for the detection of bone metastases in cancer patients [1][2][3][4][5][6]. Currently, a separate CT examination is used to compensate for the low specificity of planar bone scintigraphy. Nonetheless, the diagnostic confidence obtained with SPECT/CT is higher than that of combined planar bone scintigraphy and CT [7,8]. Despite these benefits, the current use of bone SPECT/CT is often limited to partial-body imaging as an addition to the routinely performed planar bone scintigraphy. This limitation is partly due to the lack of fast acquisition protocols for whole-body bone SPECT/CT [8]. The total acquisition time of a whole-body SPECT/CT performed according to the current guidelines is at least 40 min when the detector and bed movements are included [3,4]. 
These guidelines were written prior to the advent of the general-purpose cadmium-zinc-telluride (CZT) system [9], which allows optimization of acquisition protocols, including imaging time. The properties of CZT detector-based SPECT systems enable imaging with higher sensitivity and spatial and energy resolution than systems based on conventional NaI detectors [10]. The higher sensitivity allows for faster acquisition or lower injected activity. CZT-based SPECT systems acquire data in list-mode, which can be resampled into sinograms with different acquisition parameters. Shortening bone SPECT acquisitions using list-mode data from a CZT SPECT system have been previously introduced by Gregoire et al. [11]. However, the effect of a short acquisition time on the diagnostic performance of whole-body SPECT/CT has not been studied, as earlier research has mainly focused on visually evaluated image quality. We explore the potential of the high spatial and energy resolution of the CZT detector on SPECT image quality by increasing the acquisition matrix and narrowing the energy window. The large matrix enhances spatial details in images and might improve the visibility of small lesions. Narrowing the energy window can be regarded as the optimal scatter correction method because the scattered photons are directly rejected in the preprocessing instead of being approximated and subtracted during the reconstruction [12]. These effects are studied by qualitative and quantitative image analyses, as well as by measuring standardized uptake values (SUVs) of lesions. The effect of post-filtering on the fast acquired SPECT images is also investigated. We also evaluate the effects of the acquisition time of SPECT on the diagnostic performance of whole-body 99m Tc-HMDP SPECT/CT in the primary metastasis staging of prostate cancer. The findings are validated against multimodal reference data consisting of 18 F-PSMA-1007 PET/CT, whole-body diffusion-weighted magnetic resonance, and follow-up images. Our analyses are based on fused SPECT/CT images as opposed to SPECT without CT in previous studies [11,13]. Patients This study included 30 prostate cancer patients at high risk for bone metastases who had undergone 99m Tc-HMDP planar bone scintigraphy and SPECT/CT, 18 F-PSMA-1007 PET/CT, contrast-enhanced CT, and 1.5-T whole-body diffusion-weighted magnetic resonance imaging within 14 days. These patients were retrospectively selected from the population recruited for a previous clinical trial (NCT03537391). Fifteen patients had bone metastases, and the other 15 had only benign findings according to the 99m Tc-HMDP SPECT/CT readings of that trial [14]. All procedures performed in human participants were in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent to participate was obtained from all individuals included in the study. SPECT/CT acquisition The SPECT images were acquired 185 ± 17 (mean ± SD) min after intravenous injection of 693 ± 22 (mean ± SD) MBq of 99m Tc-HMDP using a Discovery NM/CT 670 CZT system (GE Healthcare, Haifa, Israel). The SPECT system includes digital CZT detectors. 
The images were acquired in list-mode with the following parameters: wide-energy high-resolution collimators, three bed positions from the top of the head to the mid-thigh, step-and-shoot, body contouring, 60 views (120 projections) over 360° with 13-s acquisition time per view, 15% energy window centered at 140 keV, 128 × 128 matrix, 4.4 × 4.4 mm pixel size, and 4.4-mm slice thickness. Low-dose CT images were acquired immediately after SPECT from the top of the head to the mid-thigh with modulated mAs (noise index 70), 120 kVp, 1.35 pitch, and 2.5-mm slice thickness. The gamma camera was calibrated for activity concentration measurement by imaging a uniform Jaszczak phantom (Data Spectrum Corporation, Durham, NC, USA) without any inserts inside and filled with water and 131.1 MBq of 99m Tc-pertechnetate. The calibration image was acquired in list-mode with the same parameters as the patient images. Data processing for qualitative and quantitative image analyses The SPECT data were reconstructed with HybridRecon-Oncology software (version 3.0, HERMES Medical Solutions AB, Stockholm, Sweden) using the ordered-subset expectation maximization algorithm with 6 iterations and 15 subsets and corrections for photon attenuation, scatter, and collimator response. Attenuation correction was based on the attenuation coefficient maps derived from the CT images. Scatter correction was performed with a Monte Carlo simulation using 10^6 simulated photons and two scatter update iterations. The collimator response was corrected using a Gaussian diffusion model. The images were filtered using a Gaussian filter with 7-mm full width at half maximum (FWHM). From the calibration image, a conversion factor to convert the reconstructed counts into units of activity concentration (Bq/ml) was calculated as the ratio between true activity and reconstructed counts in a homogeneous volume of interest (VOI). Voxel SUVs were then calculated using the equation SUV = c · W / A, where c is the activity concentration (Bq/ml), W is the patient body weight (g) converted to volume (ml) assuming a density of 1 g/ml, and A is the injected activity (Bq) corrected for decay and syringe residual activity. For the quantitative and qualitative analyses, ten more image data sets were generated. The acquired list-mode data were resampled using Lister software on a Xeleris 4 workstation (GE Healthcare, Haifa, Israel) to produce sinograms with either the energy window narrowed from 15 to 10, 8, 6, or 4%, the matrix size increased from 128 × 128 to 256 × 256, or the acquisition time per view reduced from 13 to 10, 9, 7, 5, or 3 s. The idle time caused by the detector and bed movements was 11 min. Therefore, the acquisition times of 13, 10, 9, 7, 5, and 3 s per view correspond to total acquisition times of 50, 41, 38, 32, 26, and 20 min, respectively. These data sets were reconstructed in the same way as the original SPECT data. The energy window narrowing was also applied to the calibration image, and separate conversion factors were calculated for the narrower energy windows. Data processing for diagnostic performance analysis For the evaluation of diagnostic performance with different acquisition times, three additional image data sets with total acquisition times of 32, 26, and 16 min were generated. The dataset with 16-min total acquisition time was generated by halving the number of views from 60 to 30 in the images with 5-s acquisition time per view. The number of views was halved using Angular Resampling software on the Xeleris workstation.
This reduction in views reduced the idle time from 11 to 8 min. Unlike in previous studies [11,15], our patients were administered constant target activities of 670 MBq per patient instead of weight-dependent activities of 10 MBq/kg. To obtain images more comparable to those used in the previous studies [11,15], the 5- and 7-s acquisition times per view used in the list-mode resampling were adjusted separately for each patient as if they had received weight-dependent activities of 10 MBq/kg. For the best possible reproduction of an image set, the 16-min images were processed using the same software and parameters as in the previous study [11]. These images were reconstructed with the Evolution for Bone SPECT software on the Xeleris workstation using the ordered-subset expectation maximization algorithm with 3 iterations and 10 subsets and corrections for photon attenuation and collimator response. A Butterworth post-filter with a cutoff frequency of 0.48 cycles/cm and an order of 1.2 was applied. The new 32- and 26-min total acquisition time data sets were reconstructed in the same way as the original SPECT data using HybridRecon-Oncology software, except that the Gaussian filter FWHMs were increased to 10 and 12 mm, respectively. The FWHMs of increased filtering were selected such that the 32- and 26-min images had noise levels similar to those of the original 50-min SPECT images. Qualitative image analysis Qualitative analysis was performed in two rounds. The first round included the originally acquired images, images with 10, 8, 6, and 4% energy window widths, and images with a 256 × 256 matrix from 15 patients. These patients were selected such that the ratio of patients with bone metastases (n = 8) to patients with only benign findings (n = 7) was similar to that ratio in the original 30-patient population. The second round included the originally acquired 50-min images and images with 38-, 32-, 26-, and 20-min acquisition times from all 30 patients. Lesion visibility and overall image quality were scored by three experienced nuclear medicine physicians on a five-point scale: 1 = insufficient, 2 = almost sufficient, 3 = sufficient, 4 = good, and 5 = excellent for diagnostic use. Quantitative image analysis The originally acquired 50-min images, images with 41-, 32-, 26-, and 20-min acquisition times, images with 10, 8, 6, and 4% energy window widths, and images with 256 × 256 matrix from all 30 patients were included in the quantitative analysis. Benign and metastatic lesions were first segmented from the original images using an initial threshold of SUV = 12. The threshold was lowered if the resulting VOI was clearly smaller than the area of high uptake. The threshold was increased if another high-uptake area was nearby. The same threshold value was used for the same lesion in different images. From the resulting VOIs, lesion mean, maximum, and peak SUVs (SUVmean, SUVmax, SUVpeak) and volume were measured. In addition, 5-10 circular regions of interest (ROIs) with a 1-cm diameter were drawn on normal-appearing bone adjacent to the lesion. These ROIs were summed to form the background VOI, whose mean SUV (SUVmean,bg) and SD of SUV (SUVSD,bg) were defined. Contrast was then calculated by dividing the difference between SUVmean and SUVmean,bg by SUVmean,bg, noise was calculated by dividing SUVSD,bg by SUVmean,bg, and the contrast-to-noise ratio (CNR) was calculated by dividing contrast by noise.
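As an illustration of the SUV, contrast, noise, and CNR definitions above, a minimal Python sketch is given below. The function and variable names are ours, and the injected activity is assumed to be already corrected for decay and syringe residual; this is not the code used in the study.

```python
# Minimal sketch of the quantitative measures described above.
# Inputs are illustrative NumPy arrays of voxel values from a lesion VOI
# and the summed background VOI.
import numpy as np

def suv(activity_conc_bq_ml: np.ndarray, body_weight_g: float, injected_activity_bq: float) -> np.ndarray:
    # SUV = c * W / A, with body weight converted to volume assuming 1 g/ml.
    return activity_conc_bq_ml * body_weight_g / injected_activity_bq

def lesion_metrics(lesion_suv: np.ndarray, background_suv: np.ndarray) -> dict:
    suv_mean = lesion_suv.mean()
    suv_mean_bg = background_suv.mean()
    suv_sd_bg = background_suv.std(ddof=1)

    contrast = (suv_mean - suv_mean_bg) / suv_mean_bg   # lesion contrast
    noise = suv_sd_bg / suv_mean_bg                     # background noise
    return {"contrast": contrast, "noise": noise, "cnr": contrast / noise}
```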
Diagnostic performance analysis Diagnostic performance analysis included the original 50-min SPECT images and the specially processed 32-, 26-, and 16-min images of all 30 patients. Suspicious bone metastatic lesions were reported from the fused SPECT/CT images by two experienced nuclear medicine physicians. The lesions were reported in a pessimistic manner, such that equivocal lesions were considered metastatic. In addition, overall image quality was scored on the five-point scale described earlier. To create true positive, true negative, false positive and false negative classes, the reported lesions were validated against the reference diagnosis, which was created during the previous clinical trial. The reference diagnosis is based on the consensus reading of 99m Tc-HMDP planar bone scintigraphy and SPECT/CT, 18 F-PSMA-1007 PET/CT, 1.5-T whole-body diffusion-weighted magnetic resonance, and contrast enhanced CT imaging and clinical, laboratory, and follow-up data [14]. The diagnostic performance of the 50-, 32-, 26-, and 16-min images was compared at the patient, region, and lesion levels. In the region-level analysis, the skeleton was divided into six segments: skull, spine, ribs, pectoral girdle and sternum, pelvis, and limbs. Statistical analysis Statistical analyses were performed using MedCalc statistical software (version 19.2.6, MedCalc Software Ltd, Ostend, Belgium). Lesion visibility and overall image quality scores given by the readers were pooled, reported using the mean and SD, and compared using the Wilcoxon test for paired samples. Lesion visibility and overall image quality failure rates represent the percentage of images rated 1 or 2, i.e., not sufficient for diagnostic use. The failure rates were compared using the N-1 chi-squared test. The median, percentiles, and interquartile range (IQR) are used to describe nonnormally distributed data. Differences in SUV measures are reported by Bland-Altman analysis, where the 2.5th and 97.5th percentiles correspond to the 95% limits of agreement (LOA 95% ). Diagnostic performance was evaluated in terms of sensitivity, specificity, accuracy, and area under the receiver-operating characteristic curve (AUC). The sensitivity, specificity, and accuracy were compared between different images at the patient and region levels using Fisher's exact test. AUC values were calculated using the trapezoid rule and compared between different images using the method of Hanley and McNeil. P values < 0.05 were considered statistically significant. Qualitative image analysis Original images scored best in terms of both lesion visibility and image quality. However, the energy window could be narrowed to 8%, the acquisition time reduced to 38 min, and the matrix size increased to 256 × 256 without significantly affecting lesion visibility or image quality failure rates. The overall image quality scores were significantly different between images with 8 and 6% energy windows (p = 0.03) and between images with 38 and 32 min acquisition times (p < 0.001). The overall image quality failure rate was not significantly different between images with 8 and 6% energy windows, but it was rather high (27-31%). The given scores for lesion visibility and overall image quality in different images and their corresponding failure rates are summarized in Table 1. Figure 1 contains a visual example of how the overall image quality decreases with the acquisition time. Quantitative image analysis A total of 130 lesions were included in the quantitative analysis. 
The SUV threshold used for lesion segmentation varied from 3 to 15 with a median of 10. Generally, SUV measures and lesion volumes were not affected by changes in energy window width, matrix size or acquisition time per view (Additional file 1). The only exception was noticeably low SUV peak in images with 256 × 256 matrix size, as the median difference was -13% with respect to original images, and LOA 95% ranged from −24 to −2%. The median differences for other measures and images ranged from −4 to 2%. SUV mean was the most robust measure, as the width of LOA 95% for the difference ranged from 11 to 22 percentage units. The widths of LOA 95% for SUV max , SUV peak , and lesion volume ranged from 25 to 48, 22 to 49, and 61 to 114 percentage units, respectively. Acquisition time shortening, energy window narrowing, and 256 × 256 image matrix all increased contrast slightly but less than they increased noise, resulting in decreased CNR (Table 2). Energy window narrowing reduced the sensitivity of the SPECT acquisition, such that the conversion factors acquired from the calibration measurement were Diagnostic performance analysis According to the reference diagnosis, 12 patients out of 30 had bone metastases, 35 different skeletal regions were metastatic, and altogether 100 lesions were considered positive for bone metastases. All metastatic patients were detectable, but 10 metastatic bone regions and 28 bone metastases could not be detected by original SPECT/CT analysis. Acquisition time reduction had little effect on the diagnostic performance, as sensitivity, specificity, accuracy, and AUC were not significantly different between the 50-min total acquisition time and reduced acquisition time images. The average patient-level sensitivities of the two readers were 88, 92, 100, and 96% for the 50-, 32-, 26-, and 16-min images, respectively, and the corresponding specificities were 78, 84, 84, and 78%. The average region-level sensitivities of the two readers were 55, 58, 59, and 56% for the 50-, 32-, 26-, and 16-min images, respectively, and the corresponding specificities were 95, 98, 96, and 95%. The number of equivocal lesions tended to increase as the acquisition time decreased. The results of the patient-, region-, and lesion-level analyses with decreasing acquisition time are given in Tables 3, 4, and 5. Even though noise was suppressed by widening the Gaussian filter, the overall image quality scores were still lower in the images with shorter acquisition times. The mean (SD) image quality scores were 3.4 (1.0), 2.9 (0.7), 2.7 (0.7), and 1.8 (0.7), and the image quality failure rates were 20, 32, 45, and 85% for 50-, 32-, 26-, and 16-min images, respectively. Examples of images with different acquisition times and filters are shown in Fig. 2. Discussion The most common current approach to diagnose prostate cancer bone metastases is still planar bone scintigraphy and CT separately. Acquisition time is an important factor regarding the feasibility of whole-body bone SPECT/CT for the imaging of bone metastases. With a shorter acquisition time, the clinical use could potentially increase Fig. 2 Whole-body 99m Tc-HMDP SPECT maximum intensity projections of a 72-year-old prostate cancer patient with different acquisition times and post-processing filters. The 50-, 32-, and 26-min images are filtered using Gaussian filters with FWHMs of 7, 10, and 12 mm, respectively, and the 16-min image is filtered using a Butterworth filter with a cutoff frequency of 0.48 cycles/cm and an order of 1.2. 
The 16-min image is acquired and processed using the same parameters as in an earlier study [11] significantly. Furthermore, whole-body bone SPECT/CT has shown superior diagnostic performance compared to planar bone scintigraphy [1][2][3][4][5][6]. However, the breakthrough of bone SPECT/CT into clinical routine has yet to become [8]. To the best of our knowledge, this is the first receiver-operating characteristic analysis of fast whole-body bone SPECT/CT in actual diagnostic use with a multimodal reference standard. Previously, fast bone SPECT has been investigated using various approaches. Gregoire et al. evaluated visual image quality [11]. Alqahtani et al. optimized reconstruction parameters to preserve image quality with reduced acquisition time [16]. Zacho et al. demonstrated fast partial-body bone SPECT/CT as an add-on to whole-body planar bone scintigraphy [15,17]. Ichikawa et al. [18] presented fast bone SPECT by using a custom-designed phantom and a reconstruction algorithm based on CT zonal mapping. Pan et al. [19] proved the feasibility of deep learning for enhancing low-count bone SPECT data. In addition, the physical performance of a CZT system similar to ours has been described by Ito et al. [20]. The general-purpose CZT system has been used to reduce examination times in bone [11], myocardial perfusion [21], and dopamine transporter imaging [22]. We evaluated the effects of fast SPECT acquisition on the diagnostic performance of whole-body 99m Tc-HMDP SPECT/CT and showed that the total acquisition time can be reduced from 50 to even 16 min without any loss of diagnostic performance. Patientand region-level sensitivity, specificity, accuracy, and AUC values for bone metastasis detection were not significantly different between the 50-min images and any of the shorter time images. No systematic changes could be identified for diagnostic performance values either on the patient or region level with shortening acquisition time. The only identified systematic change was the increase in equivocal lesions for one reader when the acquisition time became shorter. The higher number of equivocal lesions was probably caused by increased noise and decreased image quality. However, the number of equivocal lesions might become lower as readers gain more experience on noisier short-acquisition-time images. According to the quantitative and qualitative analyses, a noise level of approximately 0.10 was associated with generally accepted image quality. This noise level was also used as the target when selecting filters for the 32-and 26-min images used in the diagnostic performance analysis. However, the overall image quality of these images was still evaluated to be lower than that of 50-min images. The 16-min images were processed differently from other images to mimic the processing method used in a previous study [11]. The short acquisition time combined with unoptimized image processing resulted in the highest number of equivocal lesions but had little effect on the patient-and regionlevel diagnostic performance. Diagnostic performance being unaffected by the acquisition time was most likely caused by the preserved high lesion contrast in the images with short acquisition times ( Table 2). The reconstruction parameters of the 50-, 32-, and 26-min images were similar to those suggested to be optimal by Alqahtani et al. [16], except post-processing filtering was increased for the 32-and 26-min images. 
Even though the 16-min SPECT/CT images resulted high for metastasis detection, most readers considered that image quality was insufficient for diagnostic use. However, visually evaluated image quality can be very reader-dependent, as images similar to our 16-min images have been rated sufficient for diagnostic use in a previous study [11]. Generally, visual image quality grades given by the reading physicians may partly reflect the image quality to which they are accustomed. In line with a recent study [23], the results of SUV and lesion volume measurements were not affected by changes in the acquisition parameters. SUV peak with a 256 × 256 matrix was the only exception, but this can be explained by the difference in voxel size between 128 × 128 and 256 × 256 matrices, which causes different actual volumes for the 1 cm 3 cube used for measuring SUV peak . Moreover, the repeatability of SUV and lesion volume measurements is expected to decrease as image noise increases [23]. Energy window narrowing and a larger image matrix size increased quantitatively measured contrast, but this relatively small change did not affect qualitative lesion visibility scores. Noise increased more than contrast, resulting in decreased CNR and overall image quality scores. However, the overall image quality failure rates were not significantly higher when the energy window was narrowed to 8%, the acquisition time was reduced to 38 min, or the matrix size increased to 256 × 256. Additionally, the contrast of the smallest lesions did not increase significantly by increasing the image matrix size. To properly benefit from the 256 × 256 matrix, more advanced reconstruction algorithms, such as those using CT for anatomical a priori information [24,25], are likely required. In the reconstruction, we employed a rather sophisticated scatter correction method based on the CT attenuation map and Monte Carlo simulation. If the scatter correction had been omitted, the contrast increase in narrowed energy window images might have been more apparent. The noise increase in narrow energy window images is mostly caused by reduced counts, but it may also be associated with detector uniformity. We used a single uniformity map acquired with a 15% energy window for all energy windows, although it would have been more suitable to acquire separate uniformity maps for different energy windows [26]. However, the post-acquisition change of the uniformity map was not supported by the list-mode resampling software at that time. Another improvement would be the modeling of the characteristic hole tailing effect of CZT detectors during the reconstruction [27]. We used only symmetric energy windows, but it might have been beneficial to explore asymmetric energy windows where only the lower threshold is adjusted, as scattered photons are more likely included in the lower end of the accepted energy spectrum. The asymmetric energy window has also been shown to slightly improve image quality in planar bone scintigraphy [28]. Although we could not find benefits from the energy window narrowing in the current study, it should be noted that our focus was on bone SPECT images, where the lesions are more active than the background, as opposed to, for example, cardiac SPECT, where the lesions are less active than the background. Under those conditions, narrowing the energy window might have a different effect on image quality, as scatter correction has been shown to increase cold contrast slightly more than hot contrast [29]. 
On the other hand, it has been reported that the contrast increase caused by scatter correction is reduced when the object size decreases and that scatter correction could even decrease the contrast of very small (diameter ≤ 6 mm) objects [29]. Regarding the future of skeletal imaging in nuclear medicine, we expect a shift from planar bone scintigraphy to whole-body SPECT/CT [6]. In this development, the reduction of acquisition time for whole-body SPECT/CT is of paramount importance. Currently, the acquisition time for whole-body SPECT/CT examinations is typically more than 40 min, and for planar bone scintigraphy, it is approximately 20 min. In this study, we have shown that the acquisition time of whole-body SPECT can be lowered from 50 to 16 min without losing diagnostic performance for lesion detection. To smoothen the transition from planar bone scintigraphy to whole-body SPECT/CT, we have validated reprojected bone SPECT/CT as a method to facilitate the reading of SPECT images [30]. This study was performed using a digital CZT SPECT/CT system. However, the results of acquisition time shortening can be generalized to analogic SPECT/CT systems by considering the differences in system sensitivity and spatial resolution. The acquisition time can be normalized with respect to the sensitivity difference between the SPECT/ CT systems if they have similar spatial resolution. The volumetric sensitivity of our digital CZT SPECT/CT system is 364 kcps/(MBq/cm 3 ) with a 20% energy window width, and the system spatial resolution (FWHM) is 3.8-5.4 mm when no post-processing filtering is applied [31]. The limitations of our study include a rather low number of patients and only two readers for the evaluation of diagnostic performance. Ideally, the images with different acquisition times would have been read by different physicians. However, the order in which the image sets were read was from the shortest acquisition time to the longest, and hence, no positive bias is expected for the diagnostic performance of the 16-min images. Additionally, there were at least three weeks between the readings of different images from the same patient. We validated fast whole-body bone SPECT/CT in prostate cancer patients. The high osteogenic features of prostate cancer may have promoted our findings [32], so further research is required to generalize our results into other cancers. This is important, as the use of bone SPECT/CT in the diagnosis of prostate cancer may decline due to the increased use of PSMA PET and SPECT ligands in the near future, which will allow for the detection of both bone and soft tissue metastases [33,34]. Conclusion Whole-body 99m Tc-HMDP SPECT/CT can be acquired using a general-purpose CZT system in less than 20 min without any loss in diagnostic performance in metastasis staging of high-risk prostate cancer patients.
Effects of Glucagon-Like Peptide-1 Receptor Agonists on Bone Metabolism in Type 2 Diabetes Mellitus: A Systematic Review and Meta-Analysis Background Glucagon-like peptide-1 receptor agonists (GLP-1 RAs) are an intriguing class of antihyperglycemic drugs for type 2 diabetes mellitus (T2DM). Such drugs not only play a primary role in regulating blood glucose levels but also exhibit additional pleiotropic effects, including potential impacts on bone metabolism and fracture risk. However, the mechanism of such drugs is unclear. The purpose of this study was to evaluate the effect of GLP-1 RAs on bone metabolism in T2DM. Methods From database inception to May 1, 2023, searches were conducted on multiple databases such as Web of Science, Embase, PubMed, CNKI, the Cochrane Library, Wanfang, and VIP. We systematically collected all randomized controlled trials of bone metabolism in patients with T2DM treated with GLP-1 RAs. The quality evaluation was performed according to the Cochrane Handbook for Systematic Reviews of Interventions. Data extraction was analyzed using Review Manager 5.4 software, and funnel plots were drawn to evaluate publication bias. Results Twenty-six randomized controlled trials that met the inclusion criteria were included, involving a total of 2268 participants. In this study, compared to other antidiabetic drugs or placebo, GLP-1 RAs were found to significantly increase serum calcium (mean difference (MD) = 0.05, 95% confidence interval (CI) (0.01, 0.09), P = 0.002), bone alkaline phosphatase (standardized MD (SMD) = 0.76, 95% CI (0.29, 1.24), P = 0.001), and osteocalcin (SMD = 2.04, 95% CI (0.99, 3.08), P = 0.0001) in T2DM. Specifically, liraglutide increased procollagen type 1 N-terminal propeptide (SMD = 0.45, 95% CI (0.01, 0.89), P = 0.04). GLP-1 RAs were also associated with a reduction in cross-linked C-terminal telopeptides of type I collagen (SMD = −0.36, 95% CI (−0.70, −0.03), P = 0.03). In addition, GLP-1 RAs increased lumbar spine bone mineral density (BMD) (SMD = 1.04, 95% CI (0.60, 1.48), P < 0.00001) and femoral neck BMD (SMD = 1.29, 95% CI (0.36, 2.23), P = 0.007). Conclusions GLP-1 RAs can not only improve BMD in the lumbar spine and femoral neck of patients with T2DM but also protect bone health by inhibiting bone resorption and promoting bone formation. Systematic Review Registration: PROSPERO, identifier CRD42023418166. Background Type 2 diabetes mellitus (T2DM) is the most common endocrine disorder. Due to long-term exposure to high blood glucose, patients can develop a series of complications, mainly affecting the major blood vessels of the heart, microvessels in the kidneys and retina, as well as the nervous and skeletal systems [1]. Such conditions exert a substantial impact on the quality of life for patients and their families.
In recent times, T2DM is increasingly recognized as a significant contributor to secondary osteoporosis and fragility fractures in the skeletal system [2]. In clinical practice, several commonly prescribed antidiabetic medications not only contribute to controlling the blood glucose level of patients but also have diverse effects on their bone health. For instance, a study has indicated that metformin plays a beneficial role in promoting bone formation and improving bone metabolism [3]. However, as shown in another study, metformin does not have a substantial impact on enhancing bone mineral density (BMD) in patients with T2DM [4]. Conversely, thiazolidinedione drugs have been found to induce osteoblast apoptosis, leading to reduced bone formation and an increased risk of fractures [5]. Glucagon-like peptide-1 receptor agonists (GLP-1 RAs) are a novel class of antidiabetic medications favored by T2DM patients due to their dual benefits of lowering blood glucose levels and promoting weight loss [6]. According to a study, GLP-1 RAs can reduce bone breakdown by affecting osteoclasts, thereby inhibiting bone resorption. Besides, GLP-1 RAs can enhance osteoblast activity and promote bone formation [7]. Furthermore, GLP-1 RAs can control blood glucose levels positively, thus influencing bone health. Such processes exert an antiosteoporotic role [8]. Although GLP-1 RAs can inhibit bone resorption, stimulate bone formation, enhance BMD, and improve overall bone quality [8], the impact of GLP-1 RAs on fracture risk remains highly controversial. A meta-analysis demonstrated that compared to other antidiabetic medications, the use of GLP-1 RAs does not result in a reduction in fracture risk among patients with T2DM [9]. Nevertheless, a separate network meta-analysis published in 2018 indicated that GLP-1 RAs significantly decrease the risk of fractures in patients with T2DM compared with placebo or other antidiabetic medications [10]. As a result, the impact of GLP-1 RAs on fracture risk and their influence on sensitive bone metabolism markers remain uncertain. In addition, differences in the structure and duration of action among various GLP-1 RAs may contribute to variations in their effects. Currently, comprehensive meta-analyses focusing on the relationship between GLP-1 RAs and fracture risk are very limited, and the effects of these medications on bone metabolism markers and BMD have not been extensively studied in meta-analyses. Therefore, the purpose of this study was to systematically evaluate and analyze the effects of GLP-1 RAs on selected bone metabolism markers and BMD in T2DM. Protocol and Registration. The protocol for this systematic review and meta-analysis has been registered with PROSPERO (registration number: CRD42023418166). Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [11], we present our methods and findings (Supplemental Table 1). Eligibility Criteria. Randomized controlled trials (RCTs) were included to compare the efficacy of GLP-1 RAs with other antidiabetic medications or placebo. Each trial included participants who were aged 18 years or older and diagnosed with T2DM according to the diagnostic criteria of the World Health Organization or the American Diabetes Association: if glycated hemoglobin (A1C) reaches or exceeds 6.5%, if fasting plasma glucose reaches or exceeds 126 mg/dL (7.0 mmol/L), if the 2-hour plasma glucose during an oral glucose tolerance test (OGTT) reaches or exceeds 200 mg/dL (11.1 mmol/L), or if random plasma glucose at any time reaches or exceeds 200 mg/dL (11.1 mmol/L) accompanied by typical symptoms of diabetes, the diagnosis of T2DM can be made based on any of these criteria [12]. There were no restrictions based on race, gender, or duration of the disease, and the exclusion criteria did not consider the presence of osteoporosis or a history of fractures. The primary outcomes of interest were BMD or bone metabolism markers. In the case of multiple publications from the same study, we selected the data that provided the most comprehensive and longest follow-up period. Search Strategy. We searched PubMed, Cochrane, Embase, Web of Science, CNKI, Wanfang, and VIP for relevant literature from database inception to May 1, 2023. Detailed information regarding our search strategy is presented in the electronic supplementary material (Supplemental Table 2). To ensure inclusiveness, we included search terms related to "GLP-1 RAs" as well as specific terminologies of different types of GLP-1 RAs such as semaglutide, liraglutide, exenatide, dulaglutide, and benaglutide to capture all potentially eligible studies and avoid any omissions. Selection Process. All search results were imported into EndNote (version X9, Thomson Reuters, Philadelphia, PA, USA) to remove any duplicate records. Two reviewers independently conducted an initial screening based on the title and abstract of each article. Subsequently, the remaining articles underwent a full-text assessment to determine their eligibility for inclusion, with reasons for exclusion carefully documented. In case of disagreements, a third reviewer was consulted to reach a consensus. Articles that did not provide the necessary data were excluded, as well as those for which the required data could not be obtained even after contacting the corresponding authors. Data Collection and Risk of Bias Assessment. Two researchers independently extracted the relevant data. The included data consisted of the first author's name, publication year, country where the study was conducted, sample sizes and relevant information for the treatment and control groups, names of GLP-1 RAs, dosage, duration of the study, and results of bone metabolism-related markers before and after treatment. In cases of differences of opinion during the extraction process, resolution was achieved through discussion or involvement of a third researcher. The Review Manager 5.4 software was used by the two researchers to assess the risk of bias in the included RCTs. This assessment was conducted using the bias risk assessment tool provided in the Cochrane Handbook for Systematic Reviews of Interventions, version 5.3 [13].
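For illustration only, the inclusion thresholds listed above can be expressed as a simple eligibility check. The Python sketch below uses hypothetical argument names and the units stated in the text (A1C in %, glucose in mg/dL); it is not part of the review methodology.

```python
# Illustrative check of the T2DM diagnostic thresholds used for inclusion.
# Meeting any single criterion is sufficient for the diagnosis.
from typing import Optional

def meets_t2dm_criteria(a1c_pct: Optional[float] = None,
                        fasting_glucose_mg_dl: Optional[float] = None,
                        ogtt_2h_glucose_mg_dl: Optional[float] = None,
                        random_glucose_mg_dl: Optional[float] = None,
                        typical_symptoms: bool = False) -> bool:
    if a1c_pct is not None and a1c_pct >= 6.5:
        return True
    if fasting_glucose_mg_dl is not None and fasting_glucose_mg_dl >= 126:
        return True
    if ogtt_2h_glucose_mg_dl is not None and ogtt_2h_glucose_mg_dl >= 200:
        return True
    # Random glucose only counts together with typical symptoms of diabetes.
    if random_glucose_mg_dl is not None and random_glucose_mg_dl >= 200 and typical_symptoms:
        return True
    return False
```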
2.6. Statistical Analysis. The relevant outcome markers reflecting bone metabolism and BMD collected in this study were all continuous variables. Therefore, the mean difference (MD) or standardized mean difference (SMD) with standard deviation (SD) was chosen as the effect measure. Heterogeneity between included studies was evaluated using the Q test and the I² statistic. The Q test was primarily used to assess the p value, and if the result of the heterogeneity test was p ≥ 0.1 and I² < 50%, it indicated that there was no statistically significant heterogeneity among the studies, and a fixed-effect model was used for meta-analysis. Otherwise, a subgroup analysis could be performed to identify the source of heterogeneity, or a random-effects model could be used to pool the effect sizes for meta-analysis. Sensitivity analysis was conducted to assess the stability of the results by sequentially excluding individual studies and reanalyzing the data. If the exclusion of a particular study led to significant changes in the pooled effect size or its heterogeneity, further reading and evaluation of that study were necessary. Publication bias was assessed by a visual funnel plot of the main outcome markers. Statistical analysis of all predetermined outcome markers was performed using RevMan 5.4 provided by the Cochrane Collaboration. P < 0.05 was considered statistically significant. Search Results. According to the established retrieval strategy, a total of 6081 studies were screened from 7 databases. EndNote X9 was used to remove 1389 duplicate records, and 1986 records were automatically marked as ineligible. Then, we read the titles and abstracts of the remaining articles based on the inclusion and exclusion criteria, ultimately excluding 2353 articles. Among the remaining 353 articles, 8 were inaccessible in full text. After thoroughly reading the full texts, we found that 302 articles did not contain the required outcome markers, 10 were not RCTs, and 7 were study protocols. Therefore, 26 studies were ultimately included, involving a total of 2268 participants (Figure 1). [34][35][36], 1 on the combination of exenatide and dulaglutide [37], and 1 study on benaglutide [38]. For a comprehensive understanding of the included studies, the detailed characteristics are shown in Table 1. A summary of findings is provided in Supplemental Table 3. Study Of the 25 studies included, 16 studies provided detailed descriptions of the methods used for random sequence generation, and the remaining 9 studies mentioned randomization without specifying the exact methods used. There were 10 studies explicitly stating the allocation concealment method, 10 studies describing blinding of participants and personnel, and 24 studies describing blinding of outcome assessors. The risk of bias assessment for each study is presented in Figure 2. Such a result indicated a statistically significant difference (Figure 8). According to a subgroup analysis based on the type of medication, liraglutide markedly improved the osteocalcin level (SMD = 2.35, 95% CI (1.11, 3.60)) (Figure 9). In addition, a sensitivity analysis was performed by systematically excluding each study, and no significant change was found in the overall effect size and heterogeneity.
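As a simplified illustration of the heterogeneity assessment and fixed-effect pooling described in the Statistical Analysis section, the sketch below computes an inverse-variance pooled mean difference, Cochran's Q, and I² from per-study effect estimates. It is a minimal stand-in with assumed inputs (per-study mean differences and standard errors), not the RevMan 5.4 workflow actually used.

```python
# Simplified inverse-variance meta-analysis: fixed-effect pooled MD,
# Cochran's Q, and the I^2 statistic.
import numpy as np

def fixed_effect_pool(md: np.ndarray, se: np.ndarray) -> dict:
    w = 1.0 / se**2                       # inverse-variance weights
    pooled = np.sum(w * md) / np.sum(w)   # fixed-effect pooled estimate
    q = np.sum(w * (md - pooled) ** 2)    # Cochran's Q
    df = len(md) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return {"pooled_md": pooled,
            "se_pooled": float(np.sqrt(1.0 / np.sum(w))),
            "Q": q,
            "I2_percent": i2}
```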
Funnel plots were created to assess the impact of GLP-1 RAs on CTX and lumbar spine BMD (Supplemental Figures 4 and 5). The plots displayed a symmetrical distribution of studies on both sides of the axis, indicating a relatively low risk of publication bias.

Discussion

Ultimately, GLP-1 RAs had a beneficial effect on BMD and bone metabolism in patients with T2DM. Compared to other antidiabetic drugs or placebo, GLP-1 RAs may demonstrate greater potential benefits for bone health in the treatment of T2DM. Specifically, dipeptidyl peptidase-4 (DPP-4) inhibitors and GLP-1 RAs share similar mechanisms of action. However, some studies suggest that DPP-4 inhibitors have a neutral or mildly positive effect on bone health [40]. In contrast, clinical trials have demonstrated more pronounced effects of GLP-1 RAs in reducing glycated hemoglobin and body weight, making them a more favorable option for most patients [41]. Furthermore, insulin therapy for diabetes may alter levels of certain biomarkers associated with bone metabolism, such as advanced glycation end products and other indicators related to bone density and fracture risk, thereby influencing the processes of bone formation and resorption [42]. Our study also concluded that longer treatment duration was associated with more significant improvements in lumbar spine BMD. Furthermore, CTX could be reduced, and BALP, osteocalcin, as well as P1NP, could be increased. These findings indicated that GLP-1 RAs could suppress bone resorption and promote bone formation in T2DM. However, notably, the studies included in this analysis primarily focused on liraglutide, and no statistically significant changes were revealed in subgroup analyses for other GLP-1 RAs. This could be attributed to the limited number of studies and sample sizes for other GLP-1 RAs, as well as potential differences in molecular structure between exenatide and dulaglutide in contrast to liraglutide.

In recent years, GLP-1 RAs have attracted much attention as antidiabetic medications. As shown in the study, these drugs not only reduced blood glucose levels and had a cardiovascular protective effect but also may have a certain protective effect on bone health. According to animal experiments, as opposed to the wild-type control group, GLP-1 receptor knockout mice exhibit an increase in osteoclast numbers as well as bone resorption levels, and a decrease in BMD [43]. In a mouse model induced by lipopolysaccharide, combined treatment with GLP-1 RAs significantly reduces osteoclast numbers and CTX in comparison with mice receiving lipopolysaccharide alone [44]. As reported by Sedky, GLP-1 RAs can enhance osteocalcin in diabetic rats, thereby increasing bone mass and strength [45]. These animal studies indicated that GLP-1 RAs can inhibit bone resorption, promote bone formation, and improve BMD. Such results were consistent with the findings of this study.
However, there is still no definitive and consistent conclusion from existing clinical studies on the impact of GLP-1 RAs on bone metabolism markers and fracture risk in patients with T2DM. A clinical trial published by Gilbert in 2016 [18], lasting 104 weeks, demonstrated that liraglutide as monotherapy does not affect total BMD in patients. However, the dropout rate of this trial was as high as 52%, which may greatly affect the final results and may be the reason for the inconsistency with our study findings. Zhang [28] pointed out that liraglutide had no significant effect on whole-body BMD and bone formation markers in obese and overweight patients with T2DM after 26 weeks of treatment. However, it did reduce the level of the bone resorption marker CTX, which was consistent with the results of our study. The difference in results might be because only obese and overweight patients with T2DM were included in that clinical study, and the low-grade inflammatory state in patients with obesity also affects bone metabolism [46,47]. In addition, some studies have suggested that exenatide may increase fracture risk. However, that meta-analysis only assessed fractures as adverse events without including detailed bone metabolism markers or bone quality indicators, such as bone resorption and formation markers, BMD, and calcium and phosphate levels [48]. In contrast, our study provided a detailed analysis of these markers, enabling a more accurate evaluation of the effects of GLP-1 RAs on bone health. Overall, although the analyses suggest that liraglutide may reduce fracture risk in patients with T2DM, patients using GLP-1 RAs do not exhibit a significant increase in fracture risk; in some cases, GLP-1 RAs may be associated with a lower fracture risk. Compared to other antidiabetic medications, GLP-1 RAs generally show a safer profile regarding fracture risk [49].

This meta-analysis has several limitations. First, most of the included RCTs primarily used liraglutide, while the number of RCTs for other GLP-1 RAs was limited, with smaller sample sizes, potentially affecting the comprehensiveness of the results. Second, the effects of the medication may take a longer time to manifest, and treatment durations in the studies ranged from 4 weeks to 104 weeks; shorter treatment periods could weaken the strength of the evidence. Third, we only included published clinical trials and did not include unpublished trials, which may introduce publication bias and omit relevant data. Furthermore, although the heterogeneity of results was high, sensitivity analyses were performed by systematically excluding each study to assess their impact on heterogeneity and overall effect size, with findings indicating that these exclusions did not significantly alter the overall effect size or heterogeneity. Finally, it is noteworthy that, despite using multiple international databases to ensure comprehensive data retrieval, regional bias may still be present. For instance, studies from Western countries generally suggest that GLP-1 receptor agonists have no significant impact on BMD or bone turnover markers, while studies from China have shown potential benefits.
GLP-1 RAs can not only improve BMD at the lumbar spine and femoral neck but also enhance bone quality. Such results can delay the occurrence and progression of osteoporosis and reduce the risk of fractures in patients with T2DM. Among the GLP-1 RAs, liraglutide seems to be more effective in reducing CTX and increasing osteocalcin, as opposed to exenatide and dulaglutide. Patients with T2DM are already at high risk of osteoporosis or fractures, so it is important to choose antidiabetic medications that not only lower blood glucose but also minimize the risk of osteoporosis or fractures.

Figure 4: Comparison of total hip BMD (a) and femoral neck BMD (b) in the GLP-1 RAs group compared with the control group.
Figure 5: Comparison of serum calcium in the GLP-1 RAs group compared with the control group.
Figure 6: Comparison of CTX in the GLP-1 RAs group compared with the control group.
Figure 10: Comparison of P1NP in the liraglutide group compared with the control group.
Table 1: Characteristics of included studies. GLP-1 RAs, glucagon-like peptide-1 receptor agonists; BMD, bone mineral density; HbA1c, hemoglobin A1c; Lira, liraglutide; Exen, exenatide; Bena, benaglutide; NR, not reported; QD, quaque die (once daily); BID, bis in die (twice daily); QW, quaque week (once weekly); TID, ter in die (three times daily).
v3-fos-license
2018-04-03T02:12:01.920Z
2017-04-05T00:00:00.000
26429282
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.4172/2469-9780.1000114", "pdf_hash": "54c2c1cda1bd3dc9cbe9441248c006b8dbc037a9", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46261", "s2fieldsofstudy": [ "Biology" ], "sha1": "54c2c1cda1bd3dc9cbe9441248c006b8dbc037a9", "year": 2017 }
pes2o/s2orc
Region Specific Effects of Aging and the Nurr1-Null Heterozygous Genotype on Dopamine Neurotransmission

The transcription factor Nurr1 is essential for dopamine neuron differentiation and is important in maintaining dopamine synthesis and neurotransmission in the adult. Reduced Nurr1 function, due to the Nurr1-null heterozygous genotype (+/−), impacts dopamine neuron function in a region specific manner, resulting in a decrease in dopamine synthesis in the dorsal and ventral striatum and a decrease in tissue dopamine levels in the ventral striatum. Additionally, maintenance of tissue dopamine levels in the dorsal striatum and survival of nigrostriatal dopamine neurons with aging (>15 months) or after various toxicant treatments are impaired. To further investigate the effects of aging and the Nurr1-null heterozygous genotype, we measured regional tissue dopamine levels, dopamine neuron numbers, body weight, open field activity and rota-rod performance in young (3–5 months) and aged (15–17 months) wild-type +/+ and +/− mice. Behavioral tests revealed no significant differences in rota-rod performance or basal open field activity as a result of aging or genotype. The +/− mice did show a significant increase in open field activity after 3 min of restraint stress. No differences in tissue dopamine levels were found in the dorsal striatum. However, there were significant reductions in tissue dopamine levels in the ventral striatum, which was separated into the nucleus accumbens core and shell, in the aged +/− mice. These data indicate that the mesoaccumbens system is more susceptible to the combination of aging and the +/− genotype than the nigrostriatal system. Additionally, the effects of aging and the +/− genotype may be dependent on genetic background or housing conditions. As Nurr1 mutations have been implicated in a number of diseases associated with dopamine neurotransmission, further data are needed to understand why and how Nurr1 can have differential functions across different dopamine neuron populations in aging.

Introduction

Dopamine neurotransmission has been implicated in a number of pathological conditions including Parkinson's disease, schizophrenia, attention deficit hyperactivity disorder and addiction [1][2][3][4]. Nurr1 (NR4A2) is a nuclear receptor/transcription factor that is essential for proper development of mesencephalic dopamine neurons, as homozygous disruption of Nurr1 stops differentiation of these neurons [5][6][7]. Nurr1 is the upstream regulator of genes involved in the synthesis, packaging, transport and reuptake of dopamine [8,9]. Overexpression of Nurr1 and Pitx3 in mouse induced pluripotent stem cells can program them into functional dopaminergic-like neurons [10]. Several intrinsic mechanisms have been identified in mesencephalic dopamine neurons that are linked to Nurr1-mediated cell survival [11][12][13][14]. In vitro and in vivo studies demonstrate that Nurr1 gene delivery/therapy and Nurr1 activation/activating compounds enhance dopamine neurotransmission, protect mesencephalic dopaminergic neurons from cell injury induced by toxins or neuroinflammation, and improve nigrostriatal-associated motor behaviors [15][16][17][18]. Nurr1 is also neuroprotective in nature, as it inhibits the expression of pro-inflammatory neurotoxic mediators in microglia and astrocytes by recruiting the CoREST corepressor complex, thereby preventing the loss of dopaminergic neurons [19,20]. Mutation analysis has implicated a role for Nurr1 in some of these pathological conditions.
Mutations in Nurr1 have been linked to Parkinson's disease [21][22][23][24], and Nurr1 is reduced in patients with Parkinson's disease and correlates with the loss of tyrosine hydroxylase immunoreactivity [25][26][27]. Two different missense mutations in exon 3 of Nurr1 were identified in 3 patients with either schizophrenia or bipolar disorder [28]. Nurr1 has been implicated as a transcription factor that regulates the expression of several dopamine neurotransmission genes including tyrosine hydroxylase, dopamine transporter, vesicular monoamine transporter and GTP cyclohydrolase [9,26,29-33]. With the potential to alter multiple parameters regulating dopamine neurotransmission, the effect of Nurr1 on dopamine neurotransmission is complex. The role of Nurr1 in the regulation of dopamine neurotransmission in adult animals is mostly based on experiments in the Nurr1-null heterozygous mice (+/−). The Nurr1 +/− genotype has been shown to have a subtle but significant effect on dopamine neuron function in the nigrostriatal dopamine system [31,34]. Although the Nurr1 +/− mice have normal numbers of nigrostriatal dopamine neurons and dopamine levels in the striatum, these mice have reduced tyrosine hydroxylase activity and an apparently reduced capacity to maintain dopamine levels [31]. Additionally, reduced Nurr1 function in these mice increases the susceptibility of nigrostriatal dopamine neurons to the neurotoxins MPTP, amphetamine and rotenone and the irreversible proteasome inhibitor lactacystin [35][36][37]. Similarly, when dopamine neurons from +/+ and +/− newborn pups were grown in culture, survival and neurite growth in dopamine neurons from +/− mice were significantly reduced [38]. Although the nigrostriatal dopamine system is impacted by the Nurr1 +/− genotype, the mesoaccumbens dopamine system (i.e., dopamine neurons in the substantia nigra pars compacta and ventral tegmental area) that innervates the ventral striatum, consisting of the nucleus accumbens core and shell (NAC and NAS, respectively), appears to be more susceptible to the effects of the +/− genotype. Significant reductions in tissue dopamine levels in the nucleus accumbens and GTP cyclohydrolase mRNA expression in the ventral tegmental area have been reported [31,39,40]. No differences in these parameters were observed in the nigrostriatal system [31,39,40]. Additionally, a significant elevation in synaptic dopamine levels, as measured by microdialysis, was found in the shell of the nucleus accumbens of Nurr1 +/− mice that was not observed in the striatum [41]. Previous studies have found that aging is an important parameter that affects both dopamine neurotransmission and Nurr1 levels. Aging produces various changes in the function of nigrostriatal dopamine neurotransmission; however, the mechanisms of these changes are unclear. A number of changes in dopamine function have been reported in the striatum. One of the most consistent is the decrease in D2 receptor expression in the striatum [42]. In the Nurr1 +/− mice, reductions in striatal dopamine levels, reduced numbers of dopamine neurons and a decrease in rota-rod performance in the aged (>9 months) +/− mice have been reported [34,43]. Based on these data, the Nurr1 +/− mice may represent a potentially useful model of Parkinson's disease because they combine a genetic mutation that increases the susceptibility of the nigrostriatal dopamine neurons to an environmental stressor such as aging, which mimics parameters thought to contribute to Parkinson's disease.
These data suggest that aging can influence Nurr1 expression and also produce various changes in the function of nigrostriatal dopamine neurotransmission. It is unclear, however, how aging could also affect other dopamine systems, particularly the mesoaccumbens system, which could provide insight into the differences in regulation between these neuron populations. These experiments were initiated to further examine the effects of aging in the Nurr1 +/− mice on both the nigrostriatal and mesoaccumbens dopamine systems. The effect of the combination of aging and the Nurr1 +/− genotype on extracellular dopamine levels and dopamine release in the striatum has not been reported. Furthermore, these studies were also designed to determine whether aging has similar or distinct effects on the mesoaccumbens dopamine neurons. The data indicate that aging and the +/− genotype can produce subtle effects on nigrostriatal dopamine neurotransmission. However, the regulation of tissue dopamine levels in the ventral striatum is the most susceptible to the combination of aging and the +/− genotype.

Chemicals and reagents

Quinpirole, standards for high performance liquid chromatography (HPLC) analysis including dopamine, 3,4-dihydroxyphenylacetic acid (DOPAC) and homovanillic acid (HVA), and HPLC reagents were purchased from Sigma-Aldrich (St. Louis, MO). CNS perfusion fluid was purchased from CMA Microdialysis (North Chelmsford, MA). Reagents for the HPLC mobile phase were purchased from Sigma-Aldrich (St. Louis, MO).

Animals and guidelines

The Nurr1-null heterozygous mice used for this study were obtained from a colony bred at Mississippi State University, originally produced in the laboratory of Dr. Vera Nikodem at the National Institute for Diabetes and Digestive and Kidney Diseases [6]. Mice were genotyped as previously described to distinguish +/− and +/+ mice [6]. Litters were chosen at random for either young or aged mice. At 19-21 days of age, mice were weaned and housed in groups of 3-5/cage. Mice were housed in cages with steel grid lids and all cages were located in the same room. All procedures were performed in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals, and study protocols were approved by the Institutional Animal Care and Use Committee at Mississippi State University. All animals used in this project were housed in the AAALAC accredited facilities of the College of Veterinary Medicine, Mississippi State University. The individual room temperatures were maintained between 18-22°C with food and water ad libitum. Care of the mice was overseen by a laboratory animal veterinarian. Male mice were used for behavior analysis, immunohistochemistry, and neurochemistry measurements.

Behavior

To assess motor coordination, mice were tested using a 4-station rota-rod treadmill for mice (Med Associates, St. Albans, VT). The rota-rod was set to increase rotation speed from 3-30 rpm. Each mouse underwent 2 rounds of training. The mice were placed on the rod for 2 min. If the mouse fell off during this time it was placed back on the treadmill so that all mice received the same amount of training time. Each mouse had a rest time of 4 min in between each training round. After the training, 3 test trials were done on each mouse. The time spent and the speed reached by each mouse were recorded and compared between +/+ and +/− young and aged mice. At the end of rota-rod testing, mice were weighed.
To assess spontaneous and stress induced locomotor activity, mice were placed in an open field chamber consisting of a 25 cm × 25 cm plexiglass enclosure with a video camera mounted above and attached to a computer containing the LimeLight software to measure total distance traveled. The basal activity of the mice was monitored initially for 45 min by the video camera. After this 45 min activity period, mice were placed in a Broome rodent restrainer for 3 min to produce restraint stress; the mice were then put back into the open field chamber and the activity was monitored for another 45 min. The total distance traveled during the basal condition and the stress condition was compared across age and genotype. The number of mice used for behavior analysis included 22 +/+ young, 37 +/− young, 7 +/+ aged, and 8 +/− aged.

Tissue dissection

One week after the open field test, the mice were euthanized with CO2 asphyxiation, the brains removed, and cut with a coronal section at approximately 1 mm caudal to Bregma into forebrain and midbrain pieces. The forebrain was further split in half with a sagittal cut. The midbrain piece and the right forebrain piece were immersion fixed in 4% paraformaldehyde for 24 h, then placed in 30% sucrose for 2 days. The left forebrain piece was frozen on dry ice and stored at −80°C.

Catecholamine isolation

The left piece of frozen forebrain tissue was mounted in a custom made tissue slicer with OCT compound (Sakura Finetek, Torrance, CA), and 600-800 µm frozen sections were cut and mounted on glass slides. Micropunches of the dorsal striatum were isolated using a blunt 20 gauge needle. Micropunches of the nucleus accumbens core were taken with a 22 gauge blunt needle, then the remaining nucleus accumbens shell was dissected using an 18 gauge blunt needle. A micropunch of the prefrontal cortex was also taken with a blunt 20 gauge needle. Approximate locations of these dissected regions are shown on tyrosine hydroxylase immunohistochemistry sections in Figure 1. Micropunches were used for determination of dopamine and metabolite levels using high performance liquid chromatography (HPLC) and electrochemical detection. Micropunches were sonicated in 0.1 M perchloric acid and 100 µM EDTA at 4°C, then cleared by two successive centrifugations at 10,000 g. The cleared supernatant was injected into a HPLC system consisting of a Waters 2695 Separation module. The remaining pellet was solubilized in 1 M HCl and total protein was determined using BCA according to the manufacturer's instructions. The number of male mice used for tissue dopamine levels consisted of 7 +/+ young, 10 +/− young, 7 +/+ aged, and 8 +/− aged.

Catecholamine measurements

Tissue extractions (10-20 µL) or microdialysis fractions (18 µL) were injected into a HPLC system consisting of a Waters 2695 Separation module and a Supelcosil LC-18-DB column with the Waters 2465 electrochemical detector set at 20 nA and Ec=+0.67 V, using a mobile phase of 100 mM phosphate, 17.5% methanol, 25 µM EDTA, 1 mM octyl sodium sulfate at pH 3.65. The quantity of each compound was determined based on the response of a known amount of standards of dopamine, DOPAC and HVA (Sigma-Aldrich, St. Louis, MO) and is reported as pg in the dialysate.

Immunohistochemistry and stereology

Fixed midbrain and forebrain tissue was serial sectioned into 30 µm sections. Every 6th section was used for immunohistochemistry.
Free-floating sections were washed 3 times in phosphate buffered saline (PBS) with 1% bovine serum albumin (BSA), and then incubated for 30 min in 1% H2O2 in PBS. Sections were washed three times in PBS then incubated in blocking serum (PBS with 1% Triton-X 100, 4% normal goat serum, 1% BSA) for 30 min. Sections were then incubated in a rabbit polyclonal anti-tyrosine hydroxylase antibody (Millipore, Billerica, MA) diluted 1:5000 at room temperature for 2 h. Sections were rinsed 10 times with PBS containing 1% BSA and 0.2% Triton X-100, then incubated in biotinylated anti-rabbit IgG for 2 hours and rinsed 3 times. Sections were then incubated for 2 h in equilibrated ABC reagent (Vector Laboratories, Burlingame, CA) diluted in PBS-0.02% Triton X-100 and 1.0% bovine serum albumin. Sections were rinsed 2 times in PBS then incubated in a 0.5× diaminobenzidine solution (Sigma-Aldrich, St. Louis, MO) with 0.003% H2O2 for 5 min. Sections were rinsed in PBS and counterstained with nuclear fast red (Sigma-Aldrich, St. Louis, MO). Sections were mounted onto silanized slides, dehydrated in a graded ethanol series followed by xylene, then cover slipped with Permount. Immunoreactivity was evaluated and stereology was performed using an Olympus BX51 microscope with a CCD camera and a motorized Z-stage, all connected to a computer with Stereo Investigator Stereology Software from MicroBrightField Inc. (Williston, VT). Unbiased stereology on tyrosine hydroxylase immunoreactive profiles in the substantia nigra pars compacta and ventral tegmental area was performed using Stereo Investigator Stereology Software from MicroBrightField Inc. The optical fractionation method was used for estimating tyrosine hydroxylase immunoreactive profiles. The substantia nigra pars compacta and the ventral tegmental area were outlined based on tyrosine hydroxylase labeling. A 60× oil immersion objective was used to count profiles and measure section thickness at each sample site. Stereologic parameters used were a grid size of 150 µm × 150 µm with a random orientation and an optical dissector height of 16 µm. Estimates of immunoreactive profiles were made in young and aged +/+ and +/− mice (n=3/group) in both the substantia nigra pars compacta and ventral tegmental area. The number of mice used for stereological counts consisted of 3 +/+ young, 3 +/− young, 3 +/+ aged, and 3 +/− aged.

Statistical Analysis

Behavioral data, tissue catecholamine levels and stereological estimates were analyzed using ANOVA with Fisher's PLSD post-hoc comparison.

Rota-rod performance and open field activity in young and aged, +/+ and +/− mice

Behavioral analysis was carried out on the young and aged, +/+ and +/− mice. The behavioral tests consisted of rota-rod performance and open field activity in a novel environment and after 3 min of restraint stress. The weight of each mouse was also recorded. Aging significantly increased weight. Although there was no significant difference across genotype, the +/− mice had slightly lower body weights (Figure 2A). There was a positive correlation between age and weight and a negative correlation between rota-rod performance and weight. In the rota-rod test, aging significantly reduced rota-rod performance in both +/+ and +/− mice (Figure 2B). No significant differences were observed on rota-rod performance due to genotype. There was a trend of reduced performance in the aged +/+ mice but much of this was the result of two animals with very low performance (3.7 and 5.7 s).
In open field activity, there were no significant differences in total distance traveled in the basal condition (Figure 2C). The young +/− mice, however, were significantly more active after the restraint stress than the young +/+ mice (Figure 2D). In the aged mice, there was no genotype difference in stress induced activity, although there were fewer aged mice tested than young mice.

Tissue dopamine levels in the ventral striatum are attenuated by aging in the +/− mice

Tissue dopamine and metabolite levels were measured in micropunches from the dorsal striatum, the ventral striatum separated into the NAC and NAS, and the prefrontal cortex across young and aged +/+ and +/− mice. Within the dorsal striatum, there were no significant differences in tissue dopamine levels observed due to the +/− genotype or aging (F(3,28)=2.692, p=0.0660) (Figure 3A). In fact, there was a trend toward a reduction in tissue dopamine levels from the dorsal striatum in the aged +/+ mice as compared to young +/+ mice and aged +/− mice (Figure 3A). There were no significant differences in DOPAC or HVA levels or dopamine turnover in the dorsal striatum (Table 1). The ventral striatum, consisting of the NAS and the NAC, showed a different pattern of changes in dopamine neurochemistry as compared to the dorsal striatum. Specifically, there was a significant reduction in tissue dopamine levels in the aged +/− mice as compared to both the young +/− mice and the aged +/+ mice, similarly in both the NAS (F(3,28)=2.949, p=0.049) and NAC (F(3,28)=4.05, p=0.0165) (Figure 3B and 3C). Within the NAC, there was also a significant reduction in DOPAC levels in the aged +/− mice (F(3,28)=4.092, p=0.0162) (Table 1). No differences in HVA levels were found across groups in either the NAC or NAS. There were significant elevations in dopamine turnover (dopamine/HVA) in the aged +/− mice in the NAS and NAC. No differences in dopamine neurochemistry were observed in the prefrontal cortex across age or genotype.

Dopamine neuron immunohistochemistry found no effects of aging and the +/− genotype on dopamine neuron survival

Survival and innervation of dopamine neurons were determined across aging and the +/− genotype using tyrosine hydroxylase immunohistochemistry. To determine if aging and the +/− genotype affected survival of dopamine neurons, unbiased stereology was used to estimate the dopamine neuron populations in the substantia nigra pars compacta and the ventral tegmental area. No noticeable differences in the intensity or distribution of dopamine neurons in either the substantia nigra pars compacta or ventral tegmental area were observed across groups (Figure 4). Unbiased stereology found no differences in the estimated number of tyrosine hydroxylase immunoreactive neurons in either the substantia nigra or ventral tegmental area across age or genotype (Table 2). Target areas of dopamine neuron innervation were also investigated using tyrosine hydroxylase immunohistochemistry. No obvious differences were observed in any of these target areas, dorsal or ventral striatum (data not shown).

Discussion

Currently, animal models that reproduce all of the neuropathology and neurodegeneration found in Parkinson's disease are lacking. The most widely used models consist of the use of various toxins to kill or damage dopamine neurons.
Because of the important contribution of aging to the etiology of Parkinson's disease, as well as links to genetics, a more ideal model would consist of a genetic mutation that, when combined with aging, produces progressive deficits in dopamine levels and neurodegeneration of nigrostriatal dopamine neurons. Previous reports using aged Nurr1 +/− mice have described significant effects in the nigrostriatal dopamine system. Aging (as early as 9-12 months) in the Nurr1 +/− mice resulted in a significant reduction in dopamine levels in the striatum, impaired performance on the rota-rod, and reduced numbers of dopamine neurons in the substantia nigra pars compacta in the aged +/− mice [34,43]. Because of the potential importance of this model, we began further investigations into how aging alters dopamine transmission in Nurr1 +/− mice but also included other mesencephalic dopaminergic systems. In contrast to previous reports, the current data found no difference in numbers of dopamine neurons or tissue dopamine levels in the dorsal striatum of aged +/− mice. However, significant reductions in tissue dopamine levels in the ventral striatum, including the NAS and NAC, resulted from the combination of aging and the +/− genotype. Potential reasons for these differences could be explained by either the background strain of mice or the construct used. Three independently derived strains of Nurr1 knockout mice were produced by the laboratories of Dr. Conneely [8], Dr. Perlmann [5] and Dr. Nikodem [6] and used between the various studies. Comparisons between parameters used to create these different lines have been reviewed previously [44]. Backman et al., using the Perlmann derived mice, reported no differences in tissue dopamine levels using micropunches from the striatum or in the number of dopamine neurons in the substantia nigra pars compacta in Nurr1 +/− mice 12-15 months of age [45]. In fact, these authors found higher, but not significantly higher, dopamine and DOPAC levels in the +/− mice. These results more closely resemble the data in this current report using the Nikodem line of Nurr1 +/− mice. Jiang et al. and Zhang et al. both found significant reductions in dopamine neuron numbers in the substantia nigra and reduced tissue dopamine in the striatum at 9-12 months [35] and 15-19 months [46] of age in the Nurr1 +/− mice using the Conneely line of Nurr1 +/− mice. Interestingly, [36] reported no differences in tissue dopamine levels in the striatum or the number of nigrostriatal dopamine neurons in aged (13-14 month) Nurr1 +/− mice, also using the Conneely line. More recently, when the construct used here was bred into C57BL/6 mice, there was a significant decrease in body weight and an increase in novel open field activity in the Nurr1 +/− mice [47]. This suggests that the background strain could have an impact on the penetrance or expressivity of the Nurr1 +/− mutation to impact survival of dopamine neurons with aging. Although genes have been identified that can cause Parkinson's disease, such as SNCA, LRRK2, parkin, PINK1, and DJ-1 [48], most cases have a relatively small genetic component with low concordance rates among monozygotic twins [49][50][51]. Therefore, understanding the genetic context necessary for the Nurr1 +/− genotype to produce a loss of dopamine neurons and tissue dopamine in the striatum could be very informative. Differences in housing conditions or other aspects of the environment may also impact the role of Nurr1 in regulating dopamine neuron function and survival.
Nurr1, as an immediate early gene, is sensitive to stress, can be induced by various drugs, and could be sensitive to various housing conditions [41,42,52,53]. Previous data demonstrated that isolation had a significant effect on tissue dopamine levels in the dorsal striatum and that isolation could have a differential effect, depending on the +/− genotype, on dopamine neurotransmission in the NAS [41,42]. In the current study, all mice were raised in groups of 3-5; none were isolated. Differences across laboratories in how the mice were reared, such as when they are weaned or the types of caging used, could also have an effect on the results. Any factor that produces different levels of stressors has the potential to impact Nurr1 expression and dopamine neuron function. Elevated stress induced open-field activity has been consistently reported in the Nurr1 +/− mice. This result was first reported by Eells et al. and replicated by others, and appears to be the most robust behavioral finding in these mice regardless of the derived strain [40,45,46,54,55]. To produce a stress response, we restrained the mice for 3 min prior to placing them back in the open-field. The young +/− mice showed a significant increase in open-field activity after stress. We had previously assigned the stress induced increase in activity to differences in mesoaccumbens dopamine neurotransmission. However, based on the present study, this difference is apparent in the absence of any detected difference in dopamine levels in the nucleus accumbens of the +/− mice. Although alterations in mesoaccumbens neurotransmission could account for differences in open-field activity in the +/− mice, the precise mechanism that mediates the difference in the stress induced open-field activity in the +/− mice is unclear. Previous data found that tyrosine hydroxylase activity is reduced in the striatum of the +/− mice and that this difference is enhanced when feedback on the dopamine neuron is blocked by inhibiting dopamine release [32]. This suggests a potential difference in feedback on the dopamine autoreceptor to more closely maintain dopamine synthesis. Although the Nurr1 +/− genotype has been found to have significant effects on the nigrostriatal dopamine system, effects on the mesoaccumbens dopamine system appear to be more prominent although less well characterized. Significant reductions in tissue dopamine levels were reported in the +/− mice in the nucleus accumbens without significant effects in the striatum [40,41]. In the current data, no significant differences in dopamine levels in the ventral striatum of young +/− mice were observed, either in the NAS or NAC. Differences in dissection technique (dorsal and ventral striatum isolation from fresh tissue versus micropunches in frozen sections) and electrochemical detection methods (extraction versus direct measurement) could account for some differences. Aging, however, may be an important variable in producing the deficit in dopamine in the ventral striatum. Additionally, breeding may impact the effect of the +/− mutation, as mentioned above. Further studies with direct comparisons at different ages will be important to differentiate effects here. Additionally, the effect aging has on Nurr1 levels in dopamine neurons in the ventral tegmental area has not been investigated. The striatum consists of medium spiny neurons that primarily receive synaptic input from the pyramidal neurons in the cerebral cortex along with dopamine innervation from the mesencephalon.
Dopamine innervation to the dorsal striatum arises from dopamine neurons in the substantia nigra pars compacta. The ventral striatum, however, receives dopamine innervation primarily from the ventral tegmental area, particularly the NAS. Dopamine neurons in the medial substantia nigra pars compacta and lateral ventral tegmental area innervate the NAC [56]. Studies have found differences in electrophysiology and gene expression between nigrostriatal dopamine neurons in the substantia nigra pars compacta and mesoaccumbens dopamine neurons in the ventral tegmental area [57,58]. Differences in autoreceptor function and/or dopamine uptake between these areas could underlie the observed effects of aging and the +/− genotype between the dorsal and ventral striatum. Currently, how Nurr1 can differentially affect these separate dopamine systems has not been elucidated. It is unclear whether these differences are due to differences in the neurons innervating these areas or whether there are local effects across the dorsal and ventral striatum that result in the differences in tissue dopamine levels observed.

Conclusions

The Nurr1 +/− genotype appears to be an important regulator of tissue dopamine in the mesoaccumbens dopamine system, and aging is an important variable in how Nurr1 regulates dopamine levels. As for the nigrostriatal dopamine system, there are, apparently, other factors that influence whether dopamine neuron numbers and tissue dopamine levels are affected by the +/− genotype. Understanding how the environment or genetic background can interact with the Nurr1 +/− genotype could have important implications for understanding the genetic complexity of Parkinson's disease. The interaction between aging and the +/− genotype suggests that aging is an important factor for the regulation of Nurr1 and the function of the mesoaccumbens dopamine system, which could have implications for other neurological problems such as psychosis, addiction or attention deficit hyperactivity disorder, in which mesoaccumbens dopamine neurotransmission has a prominent role.

Figure 2: Behavior analysis due to aging: Body weight (A), rota-rod performance (B) and total distance traveled in an open field under basal conditions (C) and after 3 min of restraint stress (D) were measured in young and aged wild-type (+/+) and Nurr1-null heterozygous (+/−) mice. No genotype differences in body weight were found, although body weight significantly increased with aging. Rota-rod performance was also significantly impaired with aging. The aged +/+ mice showed a trend toward impaired rota-rod performance, but this was not significantly different compared to the aged +/− mice (C). No significant differences were found in basal open field activity; however, +/− mice were significantly more active after 3 min of restraint stress. Bars represent mean ± S.E.M. Brackets represent significant difference between treatments based on ANOVA with Fisher's PLSD post-hoc comparison with p<0.05.

Figure 3: Tissue dopamine levels: Dopamine levels were measured in tissue punches from the dorsal striatum (A), nucleus accumbens shell (B), and nucleus accumbens core (C) across young and aged wild-type (+/+) and Nurr1-null heterozygous (+/−) mice. There was a significant reduction in dopamine in the nucleus accumbens shell and core in the aged +/− mice. Bar graphs represent mean ± S.E.M. Brackets represent significant difference between treatments based on ANOVA with Fisher's PLSD post-hoc comparison with p<0.05.
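As a rough illustration of the statistical comparison named in these legends (one-way ANOVA followed by Fisher's PLSD, which amounts to unprotected pairwise t-tests using the pooled error term once the omnibus test is significant), the following Python sketch uses invented group data; it is not the authors' analysis code, and the values are hypothetical.

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Invented tissue dopamine values (arbitrary units) for the four groups.
groups = {
    "young +/+": np.array([110.0, 120.0, 105.0, 118.0, 112.0]),
    "young +/-": np.array([108.0, 115.0, 111.0, 119.0, 109.0]),
    "aged +/+":  np.array([104.0, 117.0, 109.0, 113.0, 108.0]),
    "aged +/-":  np.array([85.0, 92.0, 88.0, 95.0, 90.0]),
}

# Omnibus one-way ANOVA.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Fisher's LSD: pairwise t statistics with the pooled within-group MSE,
# interpreted only when the omnibus ANOVA is significant.
n_total = sum(len(v) for v in groups.values())
df_error = n_total - len(groups)
mse = sum(((v - v.mean()) ** 2).sum() for v in groups.values()) / df_error

if p_anova < 0.05:
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        se = np.sqrt(mse * (1 / len(a) + 1 / len(b)))
        t = (a.mean() - b.mean()) / se
        p = 2 * stats.t.sf(abs(t), df_error)
        print(f"{name_a} vs {name_b}: t={t:.2f}, p={p:.4f}")
```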
Table 2: Stereological estimation of regional tyrosine hydroxylase immunoreactive neurons. Estimates of tyrosine hydroxylase immunoreactive neurons in the substantia nigra and ventral tegmental area of young and aged +/+ and +/− mice using unbiased stereology. Results are expressed as mean ± SEM.
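The estimates summarized in Table 2 come from the optical fractionator design described in the Methods (every 6th section, a 150 µm × 150 µm sampling grid, and a 16 µm dissector height). Below is a minimal Python sketch of that calculation; the counting-frame size, measured section thickness, and raw counts are hypothetical, since only the section interval, grid size, and dissector height are given in the text.

```python
def optical_fractionator_estimate(total_counted,
                                  section_interval=6,                # every 6th section (Methods)
                                  frame_area_um2=50.0 * 50.0,        # hypothetical counting frame
                                  grid_area_um2=150.0 * 150.0,       # sampling grid (Methods)
                                  dissector_height_um=16.0,          # dissector height (Methods)
                                  mean_section_thickness_um=24.0):   # hypothetical measured thickness
    """Optical fractionator: N = sum(Q-) * (1/ssf) * (1/asf) * (1/tsf)."""
    ssf = 1.0 / section_interval                            # section sampling fraction
    asf = frame_area_um2 / grid_area_um2                    # area sampling fraction
    tsf = dissector_height_um / mean_section_thickness_um   # thickness sampling fraction
    return total_counted / (ssf * asf * tsf)

# Example: 250 tyrosine hydroxylase-positive profiles counted in the dissectors.
print(f"Estimated neuron number: {optical_fractionator_estimate(250):,.0f}")
```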
v3-fos-license
2022-07-02T15:15:33.408Z
2022-06-29T00:00:00.000
250205514
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2076-3417/12/13/6603/pdf?version=1657266747", "pdf_hash": "8bd9ecc7c431998f96897c330429926f59f73a27", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46262", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "9d2c16a61c114c2b1ac226cb669c807ff4ff7c18", "year": 2022 }
pes2o/s2orc
Evolutionary Game—Theoretic Approach for Analyzing User Privacy Disclosure Behavior in Online Health Communities

Privacy disclosure is one of the most common user information behaviors in online health communities. Under the premise of implementing privacy protection strategies in online health communities, promoting user privacy disclosure behavior can result in a "win–win" scenario for users and online health communities. Combining the real situation and evolutionary game theory, in this study, we first constructed an evolutionary game model of privacy disclosure behavior with users and online health communities as the main participants. Then, we solved the replication dynamic equations for both parties and analyzed the evolutionary stable strategies (ESSs) in different scenarios. Finally, we adopted MATLAB for numerical simulations to verify the accuracy of the model. Studies show that: (1) factors such as medical service support and community rewards that users receive after disclosing their private personal information affect user game strategy; and (2) the additional costs of the online health communities implementing the "positive protection" strategy and the expected loss related to the privacy leakage risk affect the online health communities' game strategy. In this regard, this paper puts forward the following suggestions in order to optimize the benefits of both sets of participants: the explicit benefits of users should be improved, the internal environment of the communities should be optimized, the additional costs of the "positive protection" strategy should be reduced, and penalties for privacy leakages should be increased.

Introduction

In recent years, with the acceleration of urbanization and industrialization, China's economy has developed rapidly, and the living standards of Chinese residents have further improved. According to data released by the National Bureau of Statistics, in 2021, the per capita disposable income of Chinese residents was 35,128 Renminbi (RMB), representing a year-on-year increase of 8.1%. Moreover, the per capita consumption expenditure was RMB 24,100, representing a year-on-year increase of 12.6%, of which per capita healthcare expenditure accounted for 8.8% [1]. On the one hand, it can be seen that, while material needs are constantly being satisfied, people begin to pay more attention to their own health problems, and people are eager to obtain more convenient and efficient online medical services [2]. On the other hand, China's medical security system is facing a series of challenges, including the following: (1) A high rate of population aging: by 2020, China's elderly population aged 65 years or older was as high as 190 million, accounting for 13.5% of China's total population [3]. In contrast, data published by the United Nations show that the average percentage of the world population aged 65 years or older in 2019 was only 9.1% [4]. (2) There are more patients with noninfectious chronic diseases (NCDs): data show that the prevalence of NCDs in China was as high as 34.3% in 2018, with a prevalence rate of 52.3% for those aged 65 years or older [5]. Furthermore, the top four diseases in terms of mortality in China in 2020 were malignant tumors, heart disease, cerebrovascular disease, and respiratory diseases, all of which are NCDs [5]. Notably, they were also among the top 10 causes of death in the world in 2019, resulting in approximately 33.2 million deaths [6]. It is evident that NCDs have a particularly harmful
effect. (3) The proportion of health expenditure to gross domestic product (GDP) is low: in 2020, China's health expenditure occupied 7.12% of GDP, while those of developed countries such as the United States, the United Kingdom, Japan, and Australia generally exceeded 10% [7,8]. Therefore, the Chinese government has been investigating how to build a new medical and health service system.

In this context, in order to meet the medical and health needs of more residents without reducing the quality of medical care, the Chinese government has decided to accelerate the construction of the "Internet medical system". As a result, since 2018, the Chinese government has introduced a series of policies to promote the development of Internet medical services, such as "The Opinions on Promoting the Development of "Internet plus Medical Health"" and "Guidance from National Healthcare Security Administration on improving the price of "Internet plus" medical services and medical insurance payment policies" [9,10]. Under the dual influence of market demand and national policy guidance, as products of the in-depth integration of "Internet" and "medical and health services", Chinese online health communities, such as "Good Doctor Online", "Chunyu Doctor", and "We Doctor", have developed rapidly. They are favored by many Chinese users and patients because of their low costs, the diversity of medical services available, and the convenience of the communication between doctors and patients [2,11,12]. In June 2021, the number of online medical users in China reached 239 million, accounting for 23.7% of the total number of netizens in China (Figure 1) [13]. Online Health Communities (OHCs) refer to virtual communities whose main function is to exchange medical and health information or that have both online medical and online social functions. There are three types of OHCs: Patient to Doctor (P2D), Patient to Patient (P2P), and Doctor to Doctor (D2D). When using online health communities, users engage in various forms of information behavior, such as information searching, information adoption, and knowledge sharing [14][15][16]. Among these information behaviors, privacy disclosure behavior, also known as personal information disclosure or self-disclosure, refers to the scenario in which users actively disclose or share their occupation, past medical history, treatment experiences, or other private information to the health technicians or other ordinary users in the online health communities [17]. For users, disclosing private personal information can not only help them obtain more effective treatment plans, medical treatment experience, sympathy, and encouragement from others, but may also provide a reference for other users with the same disease. However, this behavior is also accompanied by the risk of privacy leakage (Figure 2) [18][19][20]. User privacy may be compromised if the online health communities' cyber-security is compromised or if health technicians are not aware of privacy protection. This can lead to scenarios in which users receive sales pitches for drugs and healthcare products, and they may even fall victim to telecommunication scams online. For online health communities, the active disclosure of private personal information by users not only improves the operational efficiency of online medical services, but also encourages an atmosphere of community, promoting trustworthy relationships between doctors and patients or between other ordinary users. Hence, internal social networks can be formed, which
can enhance the influence of the communities [21][22][23].However, it is worth noting that, once a user's private information is leaked, the online health community suffers both in terms of reputation and economic losses due to the negative news [23].On the basis of the above analysis, it is not difficult to see that, there is a certain game relationship between users and the online health communities as regards personal privacy disclosure, and both parties are constantly adjusting their strategies according to their interests and the strategies adopted by the other party.In view of this, in this study, we adopted evolutionary game theory to analyze the gaming process between users and online health communities in order to establish an evolutionary game model and identify the optimal evolutionary stable strategies.On the basis of the research results, this paper puts forward suggestions on how to achieve optimal strategies with the aim of reaching a "win-win" scenario between the users and communities by strengthening privacy protection within the communities while promoting user privacy disclosure behaviors.The main contributions of this study are listed as follows: (1) We establish an evolutionary game model to explain the different strategies of users and online health communities in the game process in which privacy disclosure behavior occurs. (2) We explore the evolutionary stable strategy of the game between users and online health communities in various scenarios and analyze the reasons for the results. (3) We conduct numerical simulation experiments to simulate the evolutionary game process to prove that our proposed model is effective. (4) We put forward some suggestions based on the development status of online health communities to facilitate the formation of optimal evolutionary stable strategies. The rest of this paper is organized as follows: In Section 2, we review the related research on privacy disclosure intentions and behavior.In Section 3, we first describe the different decisions of users and online health communities for privacy disclosure; we then build an evolutionary game model based on the cost-benefit matrix of both sides in the game, and we finally analyze the evolutionary stable strategy (ESS point) in different scenarios.In Section 4, we validate the evolutionary game model using numerical simulation methods.In Section 5, we elaborate our findings and make recommendations from multiple perspectives with which to promote user privacy disclosure behaviors.In Section 5.3, a brief overview of the content and the weaknesses of the study are given, and various future research directions are provided. Literature Review In recent years, scholars in the fields of Information Science and Library Science and Medical Informatics in China and abroad have carried out a series of studies on privacy disclosure intentions and behavior, mainly focusing on mobile applications, social media, e-commerce, and Internet medical services. (1) Mobile applications.The research in this scenario mainly focuses on analyzing the influencing factors of mobile application user privacy disclosure willingness and relevant information behaviors.From the perspective of emotional attitude, Tang et al. 
proposed the concept of "privacy fatigue" and believed that the privacy disclosure willingness of mobile application users is affected by a combination of privacy fatigue, privacy concerns, and user personal characteristics [24].Scholars such as Mouakket found that user internal satisfaction (such as entertainment, escapism) and social satisfaction (such as social communication) also affected their willingness to disclose private information [25].In addition, Brandtzaeg and other scholars investigated the privacy policy of mobile applications.Their research showed that certain mobile applications continue to track and share private user data when they are not in use, which clearly violates the privacy policy provided to users and would have a negative impact on user privacy disclosure behavior [26]. (2) Social media.Certain scholars analyzed the influencing factors of social media user privacy disclosure willingness and behavior from different perspectives.Wang et al. conducted research from the perspective of information systems and concluded that the system quality, service quality, and information quality of social networking sites have a positive impact on user willingness to disclose personal location-related information (PLRI) [27].From the perspective of sociology, Lin et al., Liu et al., and other scholars analyzed the privacy disclosure willingness of social networking site users with social exchange theory and trust theory.Factors such as trust in social networking sites and other social networking site users, potential social rewards, and the "supportive atmosphere" constructed by online social support such as emotion, information, respect, etc., had a positive impact on user willingness to disclose private information [28,29].Scholars such as Thompson and Brindley, Sun et al., and Ashuri et al. found that users' perceived usefulness, perceived enjoyment, perceived risk, and platform rewards all affect social media user willingness to disclose private information [30][31][32].On the other hand, scholars such as Li K et al. explored the behavior patterns and formation mechanism of the privacy disclosure behavior of social media users.They defined two behavioral modes, i.e., voluntary sharing and mandatory provision.Voluntary sharing refers to the active personal privacy disclosure behavior of users, which is mainly affected by positive factors such as perceived benefits, social network scale, and personalized services.Mandatory regulations refer to platforms that force users to disclose personal information, which is mainly affected by negative factors such as age, privacy policy, and perceived risk [33]. (3) E-commerce.Various scholars have attempted to develop technical solutions to resolve the contradiction between user data collection and user privacy protection in ecommerce companies.Liu et al. 
designed an information technology solution called the "negotiation, active-recommendation privacy policy application", and confirmed, through experiments, that the program could help companies resolve the contradiction between user data collection and privacy protection, reduce user privacy concerns, and increase user willingness to disclose privacy and actual disclosure behavior [34].Various scholars also analyzed the potential influencing factors of e-commerce user privacy disclosure behavior.Gomez-Barroso used an experimental method to demonstrate that, when users shop online, the platform gives appropriate monetary incentives, which can promote user privacy disclosure behavior [35]. (4) Internet medical treatment.Certain scholars attempted to optimize medical equipment to improve the probability of user privacy disclosure behavior.Alaqra et al. proposed a privacy-enhancing technology that enables a user to edit their personal information in a signed document while preserving the validity of the signature and the authenticity of the document [36].The application of this technology in electronic health medical records can effectively protect the personal privacy of patients, thereby promoting their privacy disclosure behavior [36].Various scholars also analyzed the influencing factors of health information exchange and health information disclosure willingness; for example, Robinson, Esmaeilzadeh, and other scholars demonstrated that the perceived transparency of privacy policies, trust in medical service providers, and the severity of users' diseases all affect user privacy disclosure behavior [37,38].Research by Zhang et al., Wang et al., Hur et al., and Zhou shows that many individuals use online health communities for information support and emotional support, in which emotional support, such as being listened to and receiving attention and encouragement from other users in the communities, can encourage users to voluntarily disclose or share their private personal information, such as their disease conditions and consultation experiences [39][40][41][42].In other words, the emotional support received by users is one of the influencing factors of their privacy disclosure behavior. 
Overall, the current research on privacy disclosure willingness and behavior provides a solid theoretical foundation and rich research results. Scholars in China and abroad have fully applied multidisciplinary theories such as privacy computing theory, social exchange theory, and social support theory to analyze the influencing factors of privacy disclosure willingness and behavior, and the related privacy protection technologies and privacy protection policies [37,43-45]. It is worth noting that, from the perspective of research objects, existing studies mainly focus on social media users or mobile application users, while relatively few studies have been carried out on online health community users. In fact, the online health communities have the dual attributes of online medical care and online social networking. Research on user privacy disclosure behavior in this context can provide theoretical references for various scenarios, such as the sharing of medical data, the construction of Internet hospitals, and the privacy protection of social media. In this regard, we should pay more attention to the privacy disclosure behavior of online health community users. From the perspective of research methods, structural equation modeling is the most common research method in existing research. Although this method can effectively verify the influencing factors of user privacy disclosure behavior, it cannot observe the changes in user behavior over time. Moreover, we believe that the key factors affecting the privacy disclosure behavior of users are still perceived risks and perceived benefits, and the evolutionary game model can be combined with the real situation to analyze the costs and benefits of both parties in the game, and to observe the strategic changes made by both sides over time. In recent years, evolutionary game models have been widely used in many scenarios, such as virtual product development on social networking sites, social media crisis communication, and privacy protection in mobile health systems, and have achieved good results [46][47][48]. In view of this, this research takes the users of online health communities as the research object and adopts an evolutionary game model to analyze their privacy disclosure behavior.

Evolutionary Game Theoretical Model

We analyzed the privacy disclosure behavior of online health community users in real life using evolutionary game theory, built an evolutionary game model with users and online health communities as the main participants, and solved the evolutionary stable strategies using the replication dynamic equation and the Jacobian matrix.
Problem Description and Model Assumptions

We believe that the main players in the evolutionary game model of user privacy disclosure behavior in online health communities are the users and the online health communities. Thus, we propose the following assumptions: (1) The game strategy of users is "disclosure" or "non-disclosure". When users choose to disclose their private personal information, although there may be a risk of privacy leakage, they can also obtain certain benefits. The specific benefits are as follows: firstly, disclosing private personal information to health technicians who provide online medical services in the communities may lead to higher-quality and more personalized online medical services; secondly, disclosing private personal information to ordinary users may lead to emotional support, information support, and rewards from the communities. When users choose not to disclose private personal information, they cannot obtain any benefits, but they also do not bear the risk of privacy leakage. (2) The game strategy of online health communities is to implement the "positive protection" or "negative protection" strategy for user personal privacy. When a community implements the "positive protection" strategy, the community needs to strengthen the privacy protection of users, for example by strengthening the network security of the information system and improving the privacy protection awareness of registered health technicians. This requires extra costs; however, it can also significantly reduce the probability of user privacy leakage. When a community implements the "negative protection" strategy, the community only needs to implement basic user privacy protection in accordance with the relevant laws and regulations, and the investment cost is relatively low. (3) Both users and online health communities are "economic people" with bounded rationality. They act only according to their own benefits and costs, they constantly adjust their strategies according to the benefits, and they may eventually reach an evolutionary stable state. On the basis of the above assumptions, in order to facilitate the construction and solution of the model, the following parameters were used for the calculation in this study, as shown in Table 1. Additionally, the relationship between online health communities and users in the game model is presented in Figure 3.
Table 1 lists the notation and its description:
M: Medical service support obtained by users after disclosing their personal information, M > 0;
E: Emotional support obtained by users after disclosing their personal information, E > 0;
I: Information support obtained by users after disclosing their personal information, I > 0;
Lu: Loss of users after their privacy leaks;
Lc: Loss of OHCs after users' privacy leaks;
R: Rewards from the OHCs after users disclose their personal information;
B: Benefits obtained by the OHCs after users disclose their personal information;
C1: Costs of implementing the "negative protection" strategy in the OHCs;
C2: Extra costs of implementing the "positive protection" strategy in the OHCs;
c: Probability of privacy leakage under the "positive protection" strategy, 0 < c < 1;
d: Probability of privacy leakage under the "negative protection" strategy, 0 < d < 1, with c < d.

The evolutionary game focuses on the decision-making behavior of different groups. Therefore, it can be assumed that, in the game process, the proportion of users choosing "disclosure" is x, so the proportion of users choosing "non-disclosure" is (1 − x). As far as online health communities are concerned, the proportion of online health communities selecting "positive protection" is y, while the proportion choosing "negative protection" is (1 − y). As shown in Figure 2, there are two types of user privacy disclosure behaviors, and their probabilities of occurrence are a and b. Therefore, when we assume that users adopt a "disclosure" strategy, in order to ensure that privacy disclosure behaviors must occur, we need to assume that a + b ≠ 0. Hence, we obtain the payoff matrix for online health communities and users, as shown in Table 2.

Model Calculation and Stability Analysis

We define the expected benefit for users of choosing to "disclose" their personal information as P1 (Formula (1)), the expected benefit of choosing "non-disclosure" as P2 (Formula (2)), and the average expected benefit as P̄ = xP1 + (1 − x)P2 (Formula (3)). From Formulas (1) to (3), we obtain the replication dynamic equation of the user game strategy, F(x) = dx/dt = x(P1 − P̄) = x(1 − x)(P1 − P2). We define the expected benefit for OHCs of choosing to implement the "positive protection" strategy as G1 (Formula (5)), the expected benefit of choosing the "negative protection" strategy as G2 (Formula (6)), and the average expected benefit as Ḡ = yG1 + (1 − y)G2 (Formula (7)). From Formulas (5) to (7), we obtain the replication dynamic equation of the OHC game strategy, F(y) = dy/dt = y(G1 − Ḡ) = y(1 − y)(G1 − G2). Combining F(x) and F(y) gives the replication dynamic system of users and online health communities, as shown in Formula (9). Furthermore, the Jacobian matrix of this system is obtained by taking the partial derivatives of F(x) and F(y) with respect to x and y, as shown in Formula (10). We used the research method proposed by Friedman (1998) [49] to determine the stability of the potential equilibrium points, in accordance with the methods of Chen et al. (2021) [50] and Lv et al. (2022) [51]. According to this method, the determinant detJ and the trace trJ are calculated at each potential equilibrium point, as shown in Formulas (11) and (12), and listed in Table 3. If the conditions detJ > 0 and trJ < 0 are met, then the corresponding equilibrium point can be considered a stable strategy of the evolutionary game, i.e., an Evolutionary Stable Strategy (ESS). We then used the cost-benefit analysis method to discuss the stability of the game strategies of the two sides in different situations.
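Because the payoff entries and the expanded replicator equations did not survive extraction, the block below gives one consistent reconstruction in LaTeX. It is a sketch under stated assumptions, not the article's original formulas: it assumes the payoff structure implied by Assumptions (1) and (2) and by the cost-benefit discussion that follows, with Δ denoting the users' total benefit from disclosure (the weighting of the two disclosure types by a and b is folded into Δ) and all other symbols as in Table 1.

```latex
% Assumed payoff matrix (user payoff, OHC payoff) under the reconstruction
\[
\begin{array}{l|cc}
 & \text{OHC: positive protection }(y) & \text{OHC: negative protection }(1-y) \\ \hline
\text{User: disclosure }(x) & \big(\Delta - c L_u,\; B - C_1 - C_2 - c L_c\big) & \big(\Delta - d L_u,\; B - C_1 - d L_c\big) \\
\text{User: non-disclosure }(1-x) & \big(0,\; -C_1 - C_2\big) & \big(0,\; -C_1\big)
\end{array}
\]
% Resulting replicator dynamics
\[
F(x) = x(1-x)\big[\Delta - d L_u + y\,(d-c)L_u\big], \qquad
F(y) = y(1-y)\big[x\,(d-c)L_c - C_2\big].
\]
% Jacobian of the system
\[
J =
\begin{pmatrix}
(1-2x)\big[\Delta - d L_u + y(d-c)L_u\big] & x(1-x)(d-c)L_u \\
y(1-y)(d-c)L_c & (1-2y)\big[x(d-c)L_c - C_2\big]
\end{pmatrix}.
\]
```

Under this same reconstruction, setting F(x) = F(y) = 0 yields the potential equilibrium points D1(0, 0), D2(1, 0), D3(0, 1), D4(1, 1) and, when it lies inside the unit square, the interior critical point D5(x*, y*) with x* = C2 / ((d − c)Lc) and y* = (dLu − Δ) / ((d − c)Lu), which is the point referred to in the stability analysis below.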
From the perspective of the users, Δ represents the medical service support, emotional support, information support, and rewards given by the OHCs after users disclose their personal information, i.e., the total benefits of users. cLu and dLu are the expected losses from user privacy leakage under the "positive protection" strategy and the "negative protection" strategy implemented by the OHCs, respectively, i.e., the total costs of users under the "positive protection" and "negative protection" strategies, respectively. Therefore, there are three situations in the relationship between the total benefits and the total costs of users: Δ < cLu; cLu < Δ < dLu; and Δ > dLu.

From the perspective of the OHCs, B is the increase in the influence and recognition of the communities and other benefits received by the OHCs after users disclose their personal information, i.e., the total benefits of the OHCs. The quantity C2 − (d − c)Lc is the sum of all types of extra costs paid by the OHCs for implementing the "positive protection" strategy, net of the reduction in the expected loss from user privacy leakage compared with the "negative protection" strategy, i.e., the total net cost of the OHCs. It is worth noting that C2 − (d − c)Lc may be greater than 0 or less than 0, which needs to be discussed by classification. Therefore, the relationship between the total benefits and the total costs of OHCs also exists in three situations: B < C2 − (d − c)Lc; 0 < C2 − (d − c)Lc < B; and C2 − (d − c)Lc < 0. On the basis of the above discussion, after analyzing the costs and benefits of both players, the following nine scenarios can be obtained, as shown in Table 4.

Proposition 1. When the conditions are those in Scenarios 1 to 3, D1 (0, 0) is an ESS, and users and OHCs will choose (non-disclosure, negative protection).

In these scenarios, since the sum of the benefits obtained by users for disclosing their private personal information is less than the sum of the costs under the "positive protection" strategy, i.e., Δ < cLu, users tend not to disclose private personal information (x → 0). On the other hand, for the OHCs, there is no benefit to their "positive protection" strategy as users gradually choose not to disclose their private information. For cost reasons, they will tend to choose to implement the "negative protection" strategy (y → 0). Therefore, the evolutionary game process will eventually converge to D1 (0, 0), which can be seen in Figure 4a.

Proposition 2. When the conditions are those in Scenarios 4 to 5, D1 (0, 0) is an ESS, and users and OHCs will choose (non-disclosure, negative protection).

In these scenarios, since the sum of the benefits obtained by the users who disclose private personal information is between the sum of the costs under the "positive protection" strategy and that under the "negative protection" strategy, i.e., cLu < Δ < dLu, some users will choose to disclose their personal information (x → 1), while other users will choose non-disclosure (x → 0). On the other hand, for the OHCs, the additional costs are greater than the reduced expected loss from the lowered privacy risk, i.e., C2 > (d − c)Lc, so the OHCs still tend to implement the "negative protection" strategy (y → 0). Over time, when all online health communities choose to implement the "negative protection" strategy, users will gradually shift to not disclosing their private personal information (x → 0), because the relevant relationship between the sum of benefits and the sum of costs becomes Δ < dLu. Therefore, the evolutionary game process will eventually converge to D1 (0, 0), which can be seen in Figure 4a.
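Before turning to the remaining scenarios, the stability criterion used in these propositions can be made concrete with a minimal Python sketch. It evaluates detJ and trJ at the four corner equilibria under the reconstructed dynamics given earlier; the parameter values are purely illustrative (they are not the paper's Table 5 values). With these particular numbers the check reproduces the Scenario 6 pattern discussed next, in which both D1 and D4 are ESSs.

```python
import numpy as np

# Illustrative parameter values (assumed, not taken from the paper):
# Delta = users' total benefit from disclosure; Lu, Lc = losses after a leak;
# C2 = extra cost of "positive protection"; c, d = leak probabilities (c < d).
params = dict(Delta=4.0, Lu=10.0, Lc=20.0, C2=5.0, c=0.1, d=0.5)

def jacobian(x, y, Delta, Lu, Lc, C2, c, d):
    """Jacobian of the reconstructed replicator system F(x), F(y)."""
    fx_x = (1 - 2*x) * (Delta - d*Lu + y*(d - c)*Lu)   # dF(x)/dx
    fx_y = x * (1 - x) * (d - c) * Lu                  # dF(x)/dy
    fy_x = y * (1 - y) * (d - c) * Lc                  # dF(y)/dx
    fy_y = (1 - 2*y) * (x*(d - c)*Lc - C2)             # dF(y)/dy
    return np.array([[fx_x, fx_y], [fy_x, fy_y]])

# Corner equilibria D1..D4; an ESS requires detJ > 0 and trJ < 0 (Friedman's criterion).
for name, (x, y) in {"D1": (0, 0), "D2": (1, 0), "D3": (0, 1), "D4": (1, 1)}.items():
    J = jacobian(x, y, **params)
    det, tr = np.linalg.det(J), np.trace(J)
    print(f"{name}({x},{y}): detJ={det:.2f}, trJ={tr:.2f}, ESS={det > 0 and tr < 0}")
```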
Proposition 3. When the conditions are those in Scenario 6, D1 (0, 0) and D4 (1, 1) are both ESSs, and users and OHCs will choose either (non-disclosure, negative protection) or (disclosure, positive protection).

In this scenario, the sum of the benefits obtained by users for disclosing private personal information is between the sum of the costs under the "positive protection" strategy and that under the "negative protection" strategy. As a result, there are users who are willing to disclose their privacy (x → 1) and users who are unwilling to disclose their privacy (x → 0). When the users disclose personal information, the sum of the benefits obtained by the OHCs is greater than the sum of their costs under the "positive protection" strategy, and the additional cost is less than the reduced expected loss from the lowered privacy risk, so the OHCs will tend to implement the "positive protection" strategy (y → 1). Moreover, when users do not disclose private personal information, the OHCs will tend to implement the "negative protection" strategy in order to reduce costs (y → 0). Therefore, the evolutionary game process will eventually form two ESS points, D1 (0, 0) and D4 (1, 1); see Figure 4c. The proportion of the population converging to D1 (0, 0) and to D4 (1, 1) is related to the critical point D5 (x*, y*).

Proposition 4. When the conditions are those in Scenarios 7 to 8, D2 (1, 0) is an ESS, and users and OHCs will choose (disclosure, negative protection).

In these scenarios, since the sum of the benefits obtained by users who disclose private personal information is greater than the sum of the costs under the "negative protection" strategy, i.e., Δ > dLu, the users will tend to disclose personal information (x → 1). Moreover, no matter how much benefit users can bring to the OHCs by disclosing their private information, as long as the additional cost is greater than the reduced expected loss from the lowered privacy risk, i.e., C2 > (d − c)Lc, the OHCs will tend to implement the "negative protection" strategy (y → 0). Therefore, the evolutionary game process will eventually converge to D2 (1, 0), which can be seen in Figure 4b.

Proposition 5. When the conditions are those in Scenario 9, D4 (1, 1) is an ESS, and users and OHCs will choose (disclosure, positive protection).

In this scenario, since the sum of the benefits obtained by users for disclosing personal information is greater than the sum of the costs under the "negative protection" strategy, the users will tend to disclose personal information (x → 1). On the other hand, the sum of the benefits received by the OHCs because of user disclosure behavior is greater than the sum of the costs of implementing the "positive protection" strategy, and the additional cost is less than the reduced expected loss from the lowered privacy risk, i.e., C2 < (d − c)Lc. As a result, in order to obtain higher returns, the community will tend to implement the "positive protection" strategy (y → 1), and the evolutionary game process will eventually converge to D4 (1, 1), as shown in Figure 4d.

Numerical Simulation Experiment

In the previous section, we described the construction of the evolutionary game model, concluded that there were nine different scenarios in the evolutionary game process, and finally obtained the ESS points in all scenarios through calculation. In this section, we use MATLAB (R2021a) to carry out numerical simulation experiments. This simulation is used to show that the final results of the evolutionary game process in all scenarios are consistent with our analysis results, so as to prove the accuracy of the model and obtain the decisive influencing factors.
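The same kind of trajectory simulation can be sketched in Python, using the reconstructed F(x) and F(y) from the earlier sketch; the authors used MATLAB R2021a, so this is only an illustrative re-implementation, and the parameter values below are placeholders rather than the values from Table 5. The concrete setup actually used in the paper is described next.

```python
import numpy as np

# Placeholder parameters for one scenario (Table 5 values are not reproduced here).
Delta, Lu, Lc, C2, c, d = 4.0, 10.0, 20.0, 5.0, 0.1, 0.5

def F(x, y):
    """Reconstructed replicator dynamics for users (x) and OHCs (y)."""
    dx = x * (1 - x) * (Delta - d*Lu + y*(d - c)*Lu)
    dy = y * (1 - y) * (x*(d - c)*Lc - C2)
    return dx, dy

def evolve(x0, y0, t_end=5.0, dt=0.01):
    """Forward-Euler integration over the evolutionary time horizon (t = 5)."""
    x, y = x0, y0
    for _ in range(int(t_end / dt)):
        dx, dy = F(x, y)
        x = min(max(x + dt*dx, 0.0), 1.0)
        y = min(max(y + dt*dy, 0.0), 1.0)
    return x, y

# Initial proportions from 0.1 to 0.9 in steps of 0.1, mirroring the setup below.
for x0 in np.arange(0.1, 1.0, 0.1):
    for y0 in np.arange(0.1, 1.0, 0.1):
        xf, yf = evolve(x0, y0)
        print(f"x0={x0:.1f}, y0={y0:.1f} -> ({xf:.2f}, {yf:.2f})")
```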
The simulation process was as follows: (1) In terms of parameter value setting, we needed to ensure that the values of all parameters in each scenario met their constraints. Therefore, according to Table 4, we preset the values shown in Table 5 as the initial values of each parameter in this study. (2) In terms of setting the initial values of x and y, we interviewed various OHC user groups in the early stage. The results showed that the probability of different user groups choosing to disclose their private personal information differed, ranging from 10% to 90%. Moreover, combined with the simulation method applied by Zhu et al. (2018) [52] and Li et al. (2022) [53], we set the initial values of x and y to be simulated from 0.1 to 0.9, with 0.1 as the starting point and 0.1 as the fixed step. (3) As regards the number of iterations, we referred to the methods of Zhu et al. (2018) [52] and Li et al. (2022) [53] and used the evolutionary time in place of the number of iterations. After testing, in order to show the evolutionary game process more intuitively, we set the evolutionary time to 5. (1) According to the data in Table 5, Figure 5a-e were obtained by simulating the evolutionary game trend of users and the online health communities in Scenarios 1 to 5, respectively. It can be seen from Figure 5a-e that the ESSs of these scenarios are all (0, 0), i.e., users and OHCs choose (non-disclosure, negative protection). The simulation results in the above scenarios are consistent with the evolutionary process calculated in Proposition 1 and Proposition 2. It can be seen from Figure 5a-c that, when the values of the other parameters are constant and the value of C2 is reduced from 15 to 10, there is a decrease in the evolutionary rate of both sides, but it is not significant. However, when the value of C2 is reduced from 10 to 5, the evolutionary rate of both sides decreases significantly. It can be concluded that the influence of C2 on the evolutionary rate of both sides presents a nonlinear correlation. On the other hand, it can be seen from Figure 5d,e that, when the value of C2 is reduced from 15 to 10, the evolutionary rate of both sides exhibits a significant decrease. We can conclude that the reduction in C2 at this point significantly reduces the evolutionary rate of both sides simultaneously. In addition, we can also see from Figure 5d,e that, when the values of the other parameters are constant and the value of Δ increases from 5 to 10, the evolutionary rates of both parties in the game model exhibit a significant decrease. (2) On the basis of the data in Table 5, Figure 5f was obtained by simulating the evolutionary trend of the game between users and the online health communities in Scenario 6. It can be seen from Figure 5f that the evolutionary results in this scenario are not unique, and there are two ESSs at the same time, namely (0, 0) and (1, 1), i.e., users and OHCs choose (non-disclosure, negative protection) or (disclosure, positive protection). The simulation results in this scenario are consistent with the evolutionary process calculated in Proposition 3.
It can be seen from Figure 5f that, from the beginning of the game, some users and communities tend toward 0 while others tend toward 1. In the early stage of the game, both sides are in a state of swing. Neither the sum of the benefits Δ gained by users from disclosing private personal information nor the additional costs C2 paid by the communities to implement the "positive protection" strategy provide sufficient motivation for users or communities to form a unified choice. Over time, two different evolutionary stable strategies, (0, 0) and (1, 1), will eventually be formed. (3) On the basis of the data in Table 5, Figure 5g,h were obtained by simulating the evolutionary game trend of users and online health communities in Scenarios 7 and 8. It can be seen from Figure 5g,h that the ESSs of these scenarios are all (1, 0), i.e., users all choose to disclose private personal information and communities all choose to implement the "negative protection" strategy, which is consistent with the evolutionary process calculated in Proposition 4. Comparing Figure 5g and Figure 5h, it can be seen that, when Δ > dLu and the other parameters are constant and the value of C2 decreases from 15 to 10, there is no significant difference in the evolutionary rate of users, while the evolutionary rate of the communities decreases significantly. It can be concluded that, when the sum of the user benefits is greater than the sum of the costs under the "negative protection" strategy, the drop in C2 has little effect on the users' strategy choice, but has a more significant impact on the communities' strategy. (4) On the basis of the data in Table 5, Figure 5i was obtained by simulating the evolutionary game trend of users and online health communities in Scenario 9. It can be seen from Figure 5i that the ESS of Scenario 9 is (1, 1), i.e., users choose to disclose their personal information and the communities choose to implement the "positive protection" strategy, which is consistent with the evolutionary process calculated in Proposition 5. In this scenario, users can obtain the rewards they want by disclosing their private personal information. Moreover, the communities can enhance the influence of the platform by implementing positive privacy protection strategies and reduce the losses caused by user privacy leakage. Consequently, this result is the optimal evolutionary stable strategy among all scenarios, representing a "win-win" for both sides of the game. On the basis of the above analysis, we found that the numerical simulations of Scenarios 1 to 9 are completely consistent with the results calculated by the model. Therefore, it can be concluded that the evolutionary game model constructed in our research is accurate and effective. This also shows that the parameters involved in Formula (9) are decisive factors that can affect the decisions of both parties, namely Δ, Lu, Lc, C2, c, and d. Correspondingly, the parameters not involved in Formula (9), such as B and C1, will not affect the evolutionary process.

Conclusions, Policy Implications, and Future Research

In this section, we summarize the conclusions of our research based on the evolutionary game model. From these conclusions, we put forward a series of policy implications, which may provide suggestions to the government to promote progress in privacy protection in OHCs. Finally, since this research also has limitations, directions for future research are given at the end.
Conclusions

On the basis of the simulation results, the evolutionary game model constructed in this study exhibits good accuracy and accurately reflects the game behavior and evolutionary process of both participants in the model. In this regard, we were able to draw the following conclusions: (1) The total benefits, such as medical service support, emotional support, information support, and community rewards, obtained by users who disclose private personal information in the communities are the central factor that affects whether users choose to disclose personal information. If the total benefits obtained by users are greater than the expected loss caused by privacy leakage in the worst case, then even if the communities choose to implement negative privacy protection policies, users will still actively and firmly choose to disclose their private personal information. (2) Although the active disclosure of private personal information by users can help to form a good atmosphere within the community, thereby enhancing the credibility and popularity of the community and bringing certain benefits to its development, this benefit is not the decisive factor in the community's choice of privacy protection strategy. What affects that choice is the relationship between the extra cost the community must pay to implement the "positive protection" strategy and the reduction in the expected loss from privacy leakage achieved by the "positive protection" strategy relative to the "negative protection" strategy.

Policy Implications

On the basis of the above conclusions, in order to improve the privacy protection of the communities and promote user privacy disclosure behavior, this study proposes the following suggestions (Figure 6): (1) Improve the users' explicit benefits. In practice, the explicit benefits that users obtain from disclosing private personal information, such as higher-quality online medical services and rewards from the communities, lead them to disclose private personal information. Therefore, community managers should focus on improving the explicit benefits to users. Specifically, this firstly involves improving the quality of online medical services. For communities with online medical functions, community managers should deepen cooperation with medical institutions, medical schools, medical research institutes, and other institutions, integrate multiple resources, and improve the quality of online medical services as much as possible. Thus, when users disclose personal information, they obtain more accurate, comprehensive, and personalized online medical services. Secondly, a reasonable incentive mechanism should be developed. For communities with online social functions, community managers should formulate a reasonable reward mechanism to provide richer material or spiritual rewards for users who actively share their private personal information or experiences through messages, comments, and posts, such as giving a certain amount of vouchers or an "excellent user certification" to highly active users, in order to promote user privacy disclosure behavior.
(2) The internal environment of the community should be optimized. A relaxing and comfortable social environment can enhance users' sense of belonging, promote communication and interaction between users, and effectively increase user willingness to disclose and share personal information in the communities. Therefore, community managers not only need to pay attention to the explicit benefits that users might obtain, but also to the optimization of the internal environment of the community. Specifically, this firstly involves strengthening daily management. Daily management has an important impact on the community environment. When management is largely absent, a large number of conflicts and disputes, false information, marketing, and other violations can occur within the community. Therefore, community managers should consider selecting certain active users or opinion leaders from the community to form a special management team to assist them in daily management tasks, thereby improving the efficiency of handling violations. Thereafter, rules and regulations should be formulated. Community managers should formulate corresponding rules and regulations based on the community environment, and clearly inform users of their behavioral norms and legitimate rights and interests. The establishment of this system not only reduces the probability of violations within the community, but also provides a basis for the community management team to deal with various violations. (3) The additional costs of the "positive protection" strategy should be reduced. For the online health communities, implementing a negative privacy protection strategy only requires meeting the minimum requirements stipulated by national laws or local regulations, whereas implementing an active privacy protection strategy entails certain additional costs, which may be prohibitively high. Therefore, community managers should try to reduce the additional costs of the "positive protection" strategy in the following ways. Firstly, information systems and network equipment should be procured in a centralized manner. The implementation of an active privacy protection strategy in the communities is inseparable from the improvement of information systems and network equipment. Therefore, when specific needs are identified, community managers should actively seek cooperation with other enterprises or entrust specialized agencies to carry out centralized procurement, in order to reduce procurement costs. Thereafter, training on privacy security should be designed and undertaken. The training cost of personnel is one of the additional costs that the community needs to pay to implement an active privacy protection strategy. Traditional training courses are expensive and cannot be reused. Therefore, community managers can seek professional privacy security training companies to develop relevant online courses and question banks for health technicians and staff in the community. Only after meeting the required learning time and passing the test can a person become a registered health technician or staff member in the community, so as to improve understanding of privacy security knowledge while reducing personnel training costs.
(4) The penalties for privacy leakage should be increased. In actual situations, if there is a user privacy leakage incident in an online health community, the community's reputation not only suffers from the exposure, but there are also administrative penalties from government regulatory authorities. When the punishment is severe, the community will tend to implement active privacy protection strategies in order to avoid punishment as much as possible. Therefore, local governments can consider increasing the punishment in the following ways. Firstly, local laws and regulations should be improved. Since November 2021, China has officially implemented the Personal Information Protection Law of the People's Republic of China [54]. Although the law clearly stipulates the government's punishment of enterprises, it does not cover compensation from enterprises to users. Compared with the serious consequences of harassing marketing techniques and telecommunication fraud after user privacy leaks, the compensation from enterprises to users remains relatively limited at present. Therefore, local governments can improve local laws and regulations to increase the compensation from enterprises. Secondly, supervision should be strengthened. Government supervision departments can implement "integrated online and offline supervision". On the one hand, they can hold online activities, such as cyber-security offensive and defensive drills, to test the level of network security of the community. On the other hand, they can use on-site inspections to ascertain the actual operation of the community, and urge the community to pay attention to user privacy protection through improved supervision.

Future Research

This research focuses on the privacy disclosure behavior of online health community users, and combines evolutionary game theory with real scenarios to construct an evolutionary game model of privacy disclosure behavior. Herein, we describe how the model was built, analyzed, and numerically simulated using MATLAB R2021a to verify the results. Finally, suggestions are put forward to promote user privacy disclosure behavior, which provides a theoretical reference for the development of online health communities. The main limitations of this study are as follows: (1) some parameters, such as emotional support and information support, are abstract concepts that are difficult to quantify directly; and (2) this study only considers the two-party game between users and the online health community and does not consider the intervention of the government as a third-party regulator. In this regard, in follow-up research, on the one hand, we will consider designing a series of scenario experiments based on relevant economic theories, so as to quantify the value of the medical services, emotions, and information obtained by users after disclosing their private personal information. On the other hand, we will consider conducting a privacy disclosure behavior game study involving the participation of users, online health communities, and the government.

Figure 1. Scale and utilization rate of online medical users.
Figure 2. User privacy disclosure behavior in online health communities (P2D).
Figure 3. Relationship between online health communities and users in the game model.
Figure 6. Policy implications and practical suggestions from the model.
Table 1. Notation and description.
Table 2. Payoff matrix for users and online health communities.
Table 3. Determinant and trace of the Jacobian matrix.
Table 4. Determination of potential equilibrium points in different scenarios.
Table 5. Initial values of parameters in different scenarios.
v3-fos-license
2017-06-25T18:10:27.008Z
2013-12-01T00:00:00.000
14631740
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40200-019-00440-z.pdf", "pdf_hash": "5c6667ff9bc88f2bdac9a0e25d4a6e33cbb4fa93", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46264", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "5c6667ff9bc88f2bdac9a0e25d4a6e33cbb4fa93", "year": 2013 }
pes2o/s2orc
Investigating GSTT1 and GSTM1 null genotype as the risk factor of diabetes type 2 retinopathy

Background: Diabetes is a multifactorial disorder in which both genetic and environmental factors play important roles. In diabetes, defects in cellular metabolism result in an increase in free radicals. These radicals react with other vital cellular molecules and are responsible for the side effects of diabetes. Human glutathione S-transferases (GST) are a family of enzymes that catalyse the conjugation of electrophilic substances with glutathione. In this research, the deletion of two of the most important genes of this family, GSTT1 and GSTM1, was investigated as a risk factor for diabetes mellitus type II and one of its most important complications, retinopathy. Material and methods: In this study, the deletion of the GSTT1 and GSTM1 genes was examined in 57 diabetic patients with retinopathy and 58 diabetic patients without retinopathy. DNA was extracted from peripheral blood, and multiplex PCR was then performed, followed by agarose gel electrophoresis, to detect GSTT1 and GSTM1 null genotypes. Data were analyzed with SPSS v16 software. Results: The results indicated that there was a significant relationship between the GSTM1 null genotype and retinopathy as a side effect of type 2 diabetes, while there was no significant relationship between the GSTT1 null genotype and retinopathy in type 2 diabetes. Conclusion: The significant correlation between the GSTM1 null genotype and retinopathy in this and other studies could indicate that impaired cellular metabolism results in increased free radicals and oxidative stress. GST null genotypes may therefore result in decreased antioxidant capacity, which contributes to the side effects of diabetes. Considering the different functions of the various GST classes, additional studies are required to confirm this study. Electronic supplementary material: The online version of this article (doi:10.1186/2251-6581-12-48) contains supplementary material, which is available to authorized users.

Background

Many genetic and environmental factors are involved in multifactorial diseases such as heart disease, diabetes, high blood pressure and cancer. The interaction of these factors and the inheritance pattern is complex. Unlike monogenic diseases, the occurrence of these diseases in an individual cannot be predicted, but the incidence rate of the disease can be predicted [1]. Type 2 diabetes mellitus (T2DM) is recognized as a worldwide public health problem due to the high medical and socioeconomic costs that result from complications associated with the disease. In general, T2DM is the most common metabolic and multifactorial disease, in which both genetic and environmental factors are involved [1][2][3]. Diabetes is the final stage of a chronic and accelerating disorder that results from insulin resistance, a decrease in functional pancreatic β cells and an increase in glucose levels. Approximately all T2DM patients are insulin resistant. Despite numerous studies on insulin resistance, its main cause is still not known. It seems that post-translational modifications and mutations in genes lead to defects in cell signaling pathways, which can result in insulin resistance [4]. Several genes have been identified that are involved in the cellular pathways of glucose metabolism and storage. Defects in these genes can lead to diabetes or a predisposition to diabetes. Among these genes are: Adiponectin [1,2], PTPN1 [4], GLUT4,2 [5,6], PAX4 [7], HNF1B [8] and PPARG [9].
People with T2DM are at risk of several complications, including damage to the vascular system, which leads to increased mortality [10]. Major side effects of T2DM are cardiovascular disease, nephropathy, retinopathy, and neuropathy. Diabetic retinopathy is one of the most severe complications and can cause blindness in patients. Blindness in diabetic patients is 25 times more common than in non-diabetics [11]. These complications could be due to defects in cellular metabolism leading to hyperglycemia and to the production of free radicals, which combine with vital molecules and result in various diseases. The human glutathione S-transferases (GSTs) are a family of enzymes known to act in the body as defense systems for neutralizing free radicals. They play an important role in the detoxification of electrophiles by glutathione conjugation. For example, the function of the GST enzymes has traditionally been considered to be the detoxification of several carcinogens found in tobacco smoke. There is a wide range of electrophilic substrates, both endogenous (e.g. by-products of reactive oxygen species activity) and exogenous (e.g. polycyclic aromatic hydrocarbons) [12]. GSTs are dimeric proteins that catalyze conjugation reactions between glutathione and tobacco smoke substrates, such as aromatic heterocyclic radicals and epoxides [13][14][15]. In addition to their role in phase II detoxification, GSTs also modulate the induction of other enzymes and proteins important for cellular functions, such as DNA repair. This class of enzymes is therefore important for maintaining cellular genomic integrity and, as a result, may play an important role in cancer susceptibility [16]. The loci encoding the GST enzymes are located on at least seven chromosomes. This multigene family is divided into seven families (Alpha, Mu, Pi, Theta, Sigma, Zeta, and Omega) with functions ranging from detoxification to biosynthesis and cell signaling. Many of the GST genes are polymorphic; therefore, there has been substantial interest in studying the associations between particular allelic variants and altered risk of a variety of diseases. Several GST polymorphisms have been associated with an increased or decreased susceptibility to several diseases. Two of the important members of the GST family, glutathione S-transferase mu 1 (GSTM1) and glutathione S-transferase theta 1 (GSTT1), have polymorphic homozygous deletion or null genotypes. Persons with homozygous deletions of either the GSTM1 or the GSTT1 locus have no functional enzymatic activity of the respective enzyme. This has been confirmed by phenotype assays that have demonstrated 94% or greater concordance between phenotype and genotype [3]. Recently, in two different studies, the GSTT1 null genotype or both the GSTT1 and GSTM1 null genotypes, interacting with current-smoking status, have been shown to be genetic risk factors for the development of T2DM and its cardiovascular complications [17,18]. In another study investigating the associations of GSTM1 and GSTT1 polymorphisms with type 1 diabetes (T1DM), the results suggest that the GSTM1 null genotype is associated with T1DM protection and T1DM age-at-onset and that susceptibility to T1DM may involve GST conjugation [19]. Regarding the complications of diabetes, it has been shown that the GSTT1 wild allele and the GSTT1 wild/GSTM1 null genotype can be considered risk factors for cardiovascular autonomic neuropathy in Slovak adolescents with T1DM [20].
Recently, in one study reported from the Sinai area of Egypt on 100 T2DM patients and 100 healthy controls matched for age, gender and origin, the proportion of the GSTT1 and GSTM1 null genotypes was significantly greater in diabetic patients when compared to controls. It was reported that there was a 3.17-fold increased risk of having T2DM in patients carrying both null polymorphisms compared to those with normal genotypes of these two genes (P = 0.009) [21]. To our knowledge, there has been no study regarding GSTT1 and GSTM1 null genotypes and diabetic retinopathy in the Iranian population. In addition, there is still debate about the results of the limited number of studies in this regard in other parts of the world. Therefore, in this study, the GSTM1 and GSTT1 null genotypes were investigated as one of the genetic factors which may be related to diabetes and its complications.

Materials and methods

In this study, diabetic patients were selected from individuals referred to the Yazd Diabetes Research Center, Yazd, Iran. Other factors such as age, sex, response to treatment and changes in hematological indices were extracted from patient records. Among patients with diabetes, 115 patients were selected who were 35 to 65 years old. Among them, 58 patients had no complication of diabetes (control group) and 57 patients had diabetes with retinopathy (case group). The criterion for retinopathy was based on retinal examination by a physician and the finding of neovascularization (based on the WHO index). The patients were selected by a physician after examination. The research was carried out in compliance with the Helsinki Declaration and was approved by the Ethical Committee of Shahid Sadoughi University of Medical Sciences, Yazd, Iran. To examine GSTT1 and GSTM1 gene deletion in patients, a sample of 10 ml of peripheral blood was taken in tubes and DNA was extracted by the salting-out method. Molecular examination was performed by multiplex PCR using 3 sets of primer pairs for the GSTT1, GSTM1 and β globin genes, the latter as a control. A total of 100 ng of genomic DNA was used for PCR amplification, in 30 μL of reaction mixture that contained 2 mM MgCl2 and 12.5 pM each of the forward and reverse primers (Table 1). The PCR condition was one cycle of 94°C for 5 minutes followed by 30 cycles of 94°C, 62°C, and 72°C for 1 min each. The PCR products were visualized using 2% agarose gel electrophoresis. The DNA bands for the GSTM1, GSTT1, and β globin alleles were 219 bp, 480 bp, and 268 bp, respectively. The absence of a band for GSTM1 or GSTT1 in the presence of the β globin PCR product indicates the respective null genotype (Figure 1). Samples positive for all three PCR products were considered 'wild-type'. The data were analyzed with SPSS v16 software and the Chi-Square test.

Discussion

Diabetes mellitus is one of the most common chronic diseases in nearly all countries; the number of people with diabetes is increasing due to population growth, aging, urbanization, and the increasing prevalence of obesity and reduced physical activity. Oxidative stress plays a major role in the pathogenesis of T2DM. β-cells are low in antioxidant factors such as glutathione peroxidase and catalase. Therefore, they are particularly sensitive to oxidative stress, which may not only result from the hyperglycemia associated with diabetes, but may also have an important causal role in β-cell failure and the development of insulin resistance and T2DM [21].
There are several complex mechanisms in humans that protect the body against environmental agents, including an inappropriate diet, UV radiation, smoking, and the free radicals produced by defective oxidation. The ability of humans to metabolize carcinogens (cancer-causing substances) varies, and people who have little ability to produce detoxification substances are at high risk of various diseases, including diabetes and cancer. It seems that glutathione is important in neutralizing carcinogens and free radicals [13,14]. GST modulates the effects of various cytotoxic and genotoxic agents. GST genes encode a family of phase II enzymes (molecular mass 17-28 kD) that have major roles in catalyzing the conjugation of glutathione to a wide variety of hydrophobic and electrophilic substrates and carcinogens, such as benzpyrene and reactive oxygen species (ROS). Therefore, there is increasing interest in the role that polymorphisms in phase I and phase II detoxification enzymes may play in the etiology and progression of diseases. Polymorphisms reducing or eliminating these enzyme detoxification activities could increase a person's susceptibility to diseases, including T2DM [21]. GSTs are multifunctional proteins that can function as enzymes catalyzing the conjugation of the glutathione thiolate anion with a multitude of second substrates, or as non-covalent binding proteins for a range of hydrophobic ligands [13,14]. People differ in how they carry out detoxification; this can explain the differences in risk for various diseases, including cancer and diabetes, that are caused by exogenous and endogenous agents. The GSTT1 and GSTM1 genes are expressed in many forms in populations, and people with the null genotype have no active enzyme for detoxification [22,23]. The GSTM1 and GSTT1 null genotypes in Caucasian populations have frequencies of approximately 40-60% and 10-20%, respectively [19,[24][25][26][27]. We thus determined the polymorphism frequency for each of these enzymes in our study populations and looked for relationships between them and the clinical parameters in T2DM. There are many studies dealing with GST polymorphisms in various diseases, but only a few studies have addressed the role of GST polymorphisms in diabetes and T2DM complications. In the current study, we attempted to move beyond single-gene polymorphisms to two-gene polymorphisms that may help predict susceptibility to the incidence of T2DM and their effect on T2DM complications in the Yazd province population. The statistical analysis between GSTT1 and retinopathy showed no significant association (p = 0.187), which confirms the research of others [28,29], while the statistical analysis between GSTM1 and retinopathy showed a significant association (p = 0.04), which confirms the effect of free radicals in T2DM reported in other studies [30][31][32][33][34] but is inconsistent with the only study showing that the GSTM1 null genotype might confer protection against retinopathy in Caucasians with T2DM [35]. Finally, the statistical analysis of the GSTT1 and GSTM1 interaction in retinopathy showed a weak, marginally significant association (p = 0.052). To our knowledge, there is no other research on the effect of GST genotypes on the side effects of diabetes (diabetes complications); therefore, more research with larger samples is needed [28].
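The association tests reported above were run in SPSS; the same Pearson chi-squared test on a 2 x 2 contingency table (genotype versus retinopathy status) can be sketched in Python as follows. The counts used here are illustrative placeholders, not the study's actual data.

```python
from scipy.stats import chi2_contingency

# Illustrative 2 x 2 table (not the study's actual counts):
# rows = GSTM1 genotype (null, present), columns = retinopathy (yes, no).
table = [[30, 20],   # GSTM1 null
         [25, 40]]   # GSTM1 present
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```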
Conclusion

These results suggest that, although the absence of the GSTT1 detoxification pathway had no significant effect on the side effects of T2DM, the GSTM1 null genotype had a significant relationship with diabetic retinopathy, indicating a role for the detoxification function of this gene in this regard.

Consent

Written informed consent was obtained from the patients for the publication of this report and any accompanying images.

Competing interest

The authors have no conflict of interest regarding this manuscript.

Authors' contributions

AD contributed to the study design, interpretation of data, performing all genetic experiments and writing the manuscript. MHSH contributed to the conception of the idea and study design, provided assistance in performing all genetic experiments and edited the manuscript. MD contributed to the conception of the idea, helped with the statistical analysis and interpretation of data and edited the manuscript. MAA contributed to the patients' selection and examination. All authors have read and approved the final form of the manuscript.
v3-fos-license
2019-03-11T13:12:16.034Z
2013-09-22T00:00:00.000
59019763
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.macrothink.org/journal/index.php/ire/article/download/4052/3548", "pdf_hash": "16e21fd2b351eaaccdbc830c05e70e49c42762f5", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46268", "s2fieldsofstudy": [ "Education" ], "sha1": "4c46d5a9d5915c7313ff55dd30957576ecf74f89", "year": 2013 }
pes2o/s2orc
Kosovo Higher Education Teaching Quality Based on Students’ Evaluations and Their Performances The overall goal of this study is to identify, through undergraduate students, the evaluation level of teaching quality in the Kosovo Public University, “Hasan Prishtina”. One of the main theses of this analytical study is to determine whether the quality of higher education in Kosovo is an influential factor in students’ emotional wellbeing and academic achievements during their studies at the above University. To identify the scope of this study, the quantitative research method has been used. The measuring instrument has been designed in the form of a questionnaire that was conducted with the 1006 students who are currently pursuing their BA degree at Hasan Prishtina University. For the conduction of the research, students were selected from the 12 different University departments. The research sample was determined for 1006 students, or 10% of students from the selected departments. Results of the study indicate that a significant number of students of the Hasan Prishtina University are not satisfied with the teaching methods and assessments used by their University professors. Moreover, according to the survey, results can also imply that academic factors are impacting the students’ emotional wellbeing. Introduction The overall goal of this study is to identify the undergraduate student evaluations on the teaching quality in the "Hasan Prishtina" University in Prishtina. The main aim of this International Research in Education ISSN 2327-5499 2013 analytical study is to determine whether the quality of higher education in Kosovo is an influential factor in the academic achievements and emotional wellbeing of the above students throughout the completion of their undergraduate studies. Literature Review The "Hasan Prishtina" University, formerly known as University of Prishtina, was established in 1969. According to its status, one of the main University's goals is to be fully integrated into the European Integrated Zone of Higher Education, and take all the necessary and adequate steps for reformation in order to accomplish this goal (article 6, SU, 2012). However, based on numerous evaluation reports conducted within the lasts years on Kosovo's Higher Education System, this system is still facing many challenges. The lack of motivation shown by academic staff at the "Hasan Prishtina" University to provide effective teaching was one of the key elements pointed out by the Organization for Security and Cooperation Mission in Kosovo (Attand, 2009). Similar evaluations year later have also brought attention to this factor (Canaj & Tahiri, 2010), evaluations these that consider improving the professionalism of academic and supporting staff to be one of the challenges for higher education at the University of Prishtina (p. 6). In his study, (Attand, 2009) has also mentioned that the University of Prishtina is following its efforts in the implementation of key Bologna structures, tools and themes of the Bologna Process. However, according to the author, the University seems to be hovering somewhere mid-way of the whole reform process. Based on his study findings, the reforms on the University of Prishtina seem to be done in a very superficial way and without necessary monitoring. Moreover, according to Attard's evaluation, the students at the "Hasan Prishtina" University are not yet considered to be fully involved in the Bologna reforms (p. 19). 
Worldwide theoretical perspectives on education and research conducted in the field demonstrate that the approach of the education system, together with other external factors, has an impact on students' achievements and their motivation to learn. According to Helme and Clarke, students bring with them to their studies a number of characteristics which influence their cognitive development, skills, knowledge, dispositions, aspirations, expectations, perceptions, needs, values and goals (Helme & Clarke, 2001, p. 138). Therefore, the education system should be designed in a way that involves appropriate teaching methods, creates suitable learning situations, and takes into consideration the prior conceptions of students as well (Kane & Russell, 2005). Moreover, in order to positively affect students' achievements, teachers must be given the skills and knowledge to develop pedagogical content knowledge, to critique practice and to challenge traditional pedagogy (Nuangchalerm, 2009; Nuangchalerm & Prachagool, 2010; Nuangchalerm, 2011). Furthermore, since assessments are considered the most important part of education for measuring students' achievements, according to Goldberg and Stevens, "In a brain compatible classroom, assessment both measures achievement and provides motivation" (Goldberg & Stevens, 2001, p. 125), and assessment should be designed to fit the students, not vice versa (Caine, Caine, McClinitic, & Klimek, 2005). Furthermore, according to Erlauer (2003), immediate and constructive feedback from teachers increases motivation and makes students aware of how to improve their work. According to other theoretical perspectives, the emotional condition of students has an influence on their academic achievements as well. Students' emotional states influence their level of academic achievement. Therefore, the education system must provide a safe environment in which students are not anxious about their surroundings, but rather open and receptive to new information (Caine et al., 2005). Also, in order to improve the quality of teaching and protect students from major emotional distress, a series of theorists suggest that, as education policies are being developed, consideration should be given to drafting methods which contribute to building confidence in students and raising awareness of emotional concerns (Cowie, Boardman, Barnsley, & Jennifer, 2004).

Methodology

For the study, the quantitative research method has been used. The measuring instrument was designed in the form of a questionnaire administered to the 1006 students who are currently pursuing their BA degree at "Hasan Prishtina" University. For the research, students were selected from the 12 departments of the University. The research sample was set at 1006 students, or 10% of the students of the selected departments. Student participation in the research was voluntary, and completion of the questionnaire was anonymous. Data were collected in various ways: by visiting the respective faculties and contacting students directly after their lectures or exams, or during their stay in the library, at the student center, the cafeteria, or student gatherings. The data collected from the questionnaire were processed with the social science statistical package SPSS.
Interaction between the tested variables is presented through cross-tabulation analysis, while the associations between the tested variables are tested for significance using Pearson's chi-squared test (χ²).

Results

The four statements below aim at identifying how teachers are evaluated by their students at the Hasan Prishtina University. As presented in the table below, there are different levels of student evaluation. A significant number of students agree that their teachers are adequately prepared, clearly indicate student responsibilities, and use teaching techniques that encourage students to relate their knowledge to their new experiences from practice. Yet, a certain number of students disagree with these statements and deem that their teachers do not consider students' suggestions seriously and do not highly value their students' critical thinking (see Table 1). Different results were observed while analyzing the students' evaluations of their teachers' evaluation qualities, i.e., the techniques used to evaluate student performance. Of the total number of students (N = 1006) who participated in the survey, the largest share (N = 344, or 34.2%) stated that they 'do not agree', and a further (N = 141, or 14.0%) that they 'do not agree in any way', that their teachers evaluate their exams and assignments correctly. Moreover, the majority of the students who were part of the survey also stated that they 'do not agree' (N = 392, or 39.0%) or 'do not agree in any way' (N = 139, or 13.8%) that their teachers offer individual feedback to their students on their assessments (see Table 2).

The Interaction Between Teachers' Evaluation and Psychological Effects on Students

The data collected from the 1006 students who took part in the survey show that students who positively evaluated the quality of their teachers appear to have fewer symptoms of burnout, reduced energy, or lack of motivation to complete their studies. Of the total 1006 students who participated in this study, those who agree or strongly agree that their professors clearly indicate students' responsibilities for successful completion of the course show fewer signs of burnout compared to other students (see Table 3). Also, the survey data analysis shows an interaction between students' satisfaction with the way they are evaluated by their professors and their general emotional health. Students who declared that they agreed that their professors evaluate exams and assignments correctly showed fewer symptoms of lack of confidence and/or feelings of helplessness compared to students who did not agree that they receive fair evaluations from their professors (see Table 4). Table 4. Comparison between student evaluations of the teachers' evaluation and lack of self-confidence and sense of helplessness.

Connection Between Academic Performance and Psychological Changes

The study results also confirm the connection, or interaction, between the academic performance of students at the "Hasan Prishtina" University and signs of panic and anxiety. Students who are moderately satisfied with their academic performance asserted that they have signs of panic or anxiety often (once a week) or several times a month (see Table 5).

Discussion

On a daily basis, there are numerous factors that influence a student's academic performance. These factors may have social, financial or educational backgrounds.
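The cross-tabulation analysis described in the methodology can be sketched as follows in Python; the variable names and responses below are hypothetical stand-ins for the questionnaire items, not the study's data, and the chi-squared test mirrors the one run in SPSS.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical survey extract: a teacher-evaluation item vs. a wellbeing item.
df = pd.DataFrame({
    "fair_assessment": ["agree", "disagree", "agree", "disagree", "agree", "disagree"],
    "anxiety_weekly":  ["no",    "yes",      "no",    "yes",      "yes",   "no"],
})

# Cross-tabulate the two items, then test the association.
crosstab = pd.crosstab(df["fair_assessment"], df["anxiety_weekly"])
chi2, p, dof, _ = chi2_contingency(crosstab)
print(crosstab)
print(f"Pearson chi-squared = {chi2:.2f}, p = {p:.3f}")
```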
The research survey conducted at Hasan Prishtina University with 1006 students focused on one specific factor: the faculty's influence on student academic performance. This research helps analyze student views on their professors' teaching methods and how students feel, work and perform as a result of these methods. What is clear from the surveyed students at this University is that a great deal of the student body is unsatisfied with their professors' teaching approaches and assessment. With a majority of students indicating that their professors do not take student suggestions seriously, the learning process at this University may very possibly be one-sided and largely dominated by the professor. From the surveys above, it is also clear that it is not uncommon for students to feel panic and anxiety at least once a week in their learning environment, which becomes a factor of undeniable impact on student academic performance.

Conclusion

Though the results of this research study encompass a great number of students, through data analysis it has become evident that these students' evaluations do not necessarily represent the opinions of the general student body at the given University. The results of this study may be utilized to reflect the local educational situation based on student opinions. Also, the author is skeptical that the results collected through these questionnaires are completely reliable, as surveyed students showed fear and insecurity in expressing their honest opinions on the matter at hand. Moreover, a common manifestation noticed throughout the research process was discovering different answers written on the questionnaire compared to those spoken out loud during conversation with those conducting the research. However, from both the evaluation reports conducted in the field within the last years and the survey results, it can be concluded that, despite the progress that has been made within the Kosovo higher education system in reforming its educational approach, the higher education system in Kosovo is still facing challenges. Moreover, the survey results also show that students who are benefiting from this education system are not satisfied with their professors' teaching methods and evaluation approaches. This lack of satisfaction has been shown to influence their academic achievements and emotional wellbeing by contributing to signs of panic or anxiety, lack of self-confidence, burnout symptoms, or lack of energy and motivation to complete their studies.
v3-fos-license
2022-07-12T16:07:17.015Z
2022-07-12T00:00:00.000
250430487
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00432-022-04140-9.pdf", "pdf_hash": "7494a50d9edf3e708c89e1f2050a4a4c151fdcef", "pdf_src": "Springer", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46269", "s2fieldsofstudy": [ "Medicine" ], "sha1": "7494a50d9edf3e708c89e1f2050a4a4c151fdcef", "year": 2022 }
pes2o/s2orc
Nurses’ knowledge of chemotherapy-induced neutropenia and its management: a cross-sectional survey Background Chemotherapy-induced neutropenia (CIN) is a serious and potentially life-threatening condition that is associated with high morbidity, mortality, and healthcare costs. Objective This study aims to assess nurses’ level of knowledge of CIN and its association with socio-demographic factors. Methods A cross-sectional survey design was used. Results: Participants had a mean age of 34.1 years (SD = 7.1 years) and were predominantly female (78%) and with a bachelor’s degree in nursing (95.6%). The nurses had a moderate level of knowledge about neutropenia and its management (mean total score 16.3 out of 30, SD = 3.7). Those who had a post-graduate degree (P = .048), had received an oncology educational course (P = .011), had attended a course on neutropenia (P = .007), who were working in an oncology unit (P = .002), and had more oncology experience (P = 001) were more likely to have a higher level of knowledge of CIN and its management compared to their other counterparts. Conclusion Based on the findings of a moderate level of knowledge of CIN among nurses, the findings call for the need for further education and training. As a long-term plan, this might be accomplished by encouraging nurses to pursue post-graduate education or oncology-specialized certification and supporting them with scholarship grants. However, deliberate plans for short courses, training and workshops on oncology or CIN are other choices with a more immediate impact on nurses’ knowledge and clinical practice. Finally, integrating oncology nursing education within nursing curricula is urgently needed. Introduction Cancer is a significant health and economic burden affecting more than 50 million people worldwide, with over 19 million new cases discovered in 2020 (The Global Cancer Observatory 2020). It is the second leading cause of death globally, accounting for nearly 10 million deaths a year (The Global Cancer Observatory 2020). In Oman, 21,000 cancer cases were recorded among the Omani population in the period 1996-2015, an average of 1,050 cases annually (Al-Lawati et al. 2019). Chemotherapy is one of the gold standards in cancer treatment (Schelenz et al. 2012). 3 However, its use has a considerable number of side effects, including hematopoietic suppression which results in neutropenia, known as chemotherapy-induced neutropenia (CIN) (Crawford et al. 2004;Sureda et al. 2019). Neutropenia is serious and potentially life threatening; it occurs when neutrophil is not produced at a desirable rate, dulling the inflammatory response and predisposing the individual to a higher risk of the development of infection (Yarbro et al. 2018). Clinically, neutropenia is defined as having an absolute neutrophil count (ANC) less than 1500/ mm 3 ; it often requires hospitalization and belligerent treatment to prevent sepsis (Yarbro et al. 2018). CIN is associated with higher morbidity, mortality and healthcare costs (Abou Saleh et al. 2013;Crawford et al. 2004;Lyman et al. 2010;Schelenz et al. 2012). Once neutropenia is detected, chemotherapeutic treatments are reduced or withheld, which further compromises survival and negatively affects the course of treatment (Crawford et al. 2004;Schelenz et al. 2012;Sureda et al. 2019). Oncology nurses have a fundamental role in easing the burden of patients suffering from the consequences of neutropenia. 
They must have the knowledge, proficient skills, and a compassionate attitude to ensure optimal care and reduce patient suffering (Kvåle and Bondevik 2010). However, nurses in other units such as medical, surgical, emergency, and intensive care also deal with cancer patients. As patient advocates, nurses can put their knowledge to good use by working collaboratively with the multidisciplinary team and recommending evidence-based interventions to the advantage of their patients (Kaplow and Spinks 2015). For example, early prophylactic treatments have been found to curtail hospital stay (Gerlier et al. 2010). While these are essential features, knowledge of CIN among nurses is a topic largely unexplored in nursing research and scholarship, and the literature is inconclusive when it comes to related concepts such as nurses' knowledge and practice of infection control and prevention. Literature review A comprehensive literature review reveals a dearth of studies assessing the knowledge of oncology nurses of CIN and its management (Naghdi et al. 2019;Nirenberg et al. 2010;Tarakcioglu Celik and Korkmaz 2017;Teleb Osman and Mohamed Bayoumy 2016). In addition, these studies provide contrasting findings. Two studies, in Turkey and America, show that oncology nurses have a generally high level of knowledge of CIN and neutropenic patient management (Nirenberg et al. 2010;Tarakcioglu Celik and Korkmaz 2017). More specifically, they have a high level of knowledge of neutrophil functions, clinical manifestations of infection and suitable nursing care in neutropenic patients, although poor adherence to infection control practices such as hand hygiene and preparation and administration of medicines (Tarakcioglu Celik and Korkmaz 2017). On the other hand, a study conducted in Iran shows that nurses had moderate knowledge of CIN and moderate practice of infection control and prevention (Naghdi et al. 2019). It is noteworthy that very few nurses have commendable practice (< 20%). There is also a significant correlation between their knowledge and practice for vital signs assessment and medication preparation (Naghdi et al. 2019). These studies indicate that nurses' knowledge and practice in the care of patients with CIN are suboptimal, with a clear knowledge-practice gap. It is not surprising for nurses to encounter patients with cancer in non-cancer units such as medical or surgical units. Because the infection is one of the significant life-threatening complications of CIN, nurses must have adequate knowledge and practice of infection control and prevention. Many studies indicate that nurses' general knowledge of infection control is high (Chuc et al. 2018;Gulilat and Tiruneh 2014;Okanlawon 2014;Parmeggiani et al. 2010;Suliman et al. 2018). However, most reflect that nurses' practice of infection prevention is average (Sarani et al. 2016) or poor (Chuc et al. 2018;Gulilat and Tiruneh 2014;Parmeggiani et al. 2010). On a very alarming note, many studies indicated the wide knowledge-practice gap with regard to infection control practices (Accardi et al. 2017;Adegboye et al. 2018;Chuc et al. 2018;Gulilat and Tiruneh 2014;Nasiri et al. 2019;Okanlawon 2014;Suliman et al. 2018;Tenna et al. 2013) which clearly calls for immediate corrective action. 
With the goal of improving nurses' knowledge of CIN and practice in the care of neutropenic patients, a quasiexperimental study was conducted to evaluate the impact of a nursing intervention bundle on the prevention of neutropenia-associated infections (Teleb Osman and Mohamed Bayoumy 2016). The result for the baseline data collection point indicated that nurses had poor knowledge of neutropenia and preventive measures against infection. Their knowledge significantly improved after they were subjected to an intensive educational program (Teleb Osman and Mohamed Bayoumy 2016). However, the use of a non-randomized noncontrolled design, convenience sampling and small sample size (n = 30) makes it difficult to generalize the results of the study. It is also important to note that the nurses' knowledge was somehow reduced two months after the intervention (Teleb Osman and Mohamed Bayoumy 2016). An extensive literature search revealed that nurses' knowledge of CIN and corresponding patient care is an unexplored topic in the Sultanate of Oman. Such a study could provide vital information for nursing administrators, nurses, clinical educators and nursing scholars alike. The findings could be utilized as a baseline for the implementation of nursing education programs and policy development. As the healthcare professionals who spend the most time with the patient, nurses play a crucial role in preventing the occurrence of infection or its likely dangerous progression at the earliest point possible. Nurses are at the forefront in identifying patients at risk of infection, and possible sources of infection, caring and educating patients with CIN, monitoring symptoms, carrying out effective infection control strategies and taking action at the very first sign of CIN and infection (Kaplow and Spinks 2015). Being equipped with the right knowledge is fundamental to establish the quality of care. Hence, this study aims to assess the nurses' level of knowledge of CIN and its management, and the association of knowledge with their socio-demographic factors. Design This study utilized a cross-sectional survey design. Sample, sampling, and sample size The study sample consisted of 182 nurses who were practicing in oncology units. Nurses working in medical and surgical units, and intensive care units, were also included as they occasionally receive patients with cancer. All nurses who have a bachelor's degree in nursing or above, have working experience of at least six month, and have agreed to participate, were included. The research team utilized a simple random sampling technique. A list of all nurses was obtained and numbered sequentially. Then, a computer-generated list of 181 numbers was constructed, and those with selected numbers were contacted. The sample size depended on the possible proportion of correct responses in the neutropenia knowledge assessment tool. For 50% correct answers, knowing that there are about 340 nurses meeting the inclusion criteria, a sample size of 181 is considered adequate. From http:// www. raoso ft. com/ sampl esize. html, this would allow the percentage of correct answers to be estimated with a 95% margin-of-error of at most ± 5%. Settings The study was conducted in a large referral hospital where most cancer patients are treated. This hospital is located in the capital city Muscat, and it has oncology pediatric and adult inpatient wards. It has outpatient chemotherapy clinics and a bone marrow transplant unit. 
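The target of 181 participants quoted above follows from the usual sample-size formula for estimating a proportion with a finite population correction. The sketch below is an illustration of that calculation under the stated assumptions (p = 0.5, 95% confidence, ±5% margin of error, about 340 eligible nurses); it reproduces the figure of 181 and mirrors what the cited Raosoft calculator computes.

```python
# Sketch of the sample-size calculation for estimating a proportion with a
# finite population correction (assumptions: p = 0.5, 95% confidence,
# +/-5% margin of error, population of 340 eligible nurses).
import math

def sample_size(population, margin=0.05, confidence_z=1.96, p=0.5):
    # Infinite-population sample size for a proportion
    n0 = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    # Finite population correction
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(340))  # -> 181, the target sample size reported in the study
```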
Instruments A demographic data sheet and neutropenia knowledge questionnaire were used to collect the required data. Demographic data sheet collected information about nurses' age, gender, education level, years of experience, working unit, nationality and previous education about oncology nursing and neutropenia. Neutropenia Knowledge Questionnaire is a tool to evaluate nurses' knowledge of neutropenia and the care of neutropenic patients (Tarakcioglu Celik and Korkmaz 2017). It comprises 30 true/false statements with an "I don't know" option to avoid guessing. Each correct answer is given a score of "1", otherwise zero. The responses to all items are summed to produce the total score. A score range of 0-10 indicates a poor level of knowledge, 11-20 moderate knowledge and 21-30 good knowledge (Naghdi et al. 2019;Tarakcioglu Celik and Korkmaz 2017). The tool is reported to have established content validity (content validity index 0.95) and reliability (Cronbach alpha = 0.7) (Naghdi et al. 2019). The English version of the tool was used. Ethical considerations The required ethical approvals were obtained from the College of Nursing Ethics Committee (Ref. No. CON/ NF/34) and Medical Ethics Committee (Ref. No. SQU-EC/331/2021) prior to embarking on the study. All participants were given information about the study's purpose and requirements. This information was provided in the first section of the questionnaire. All participants were informed that their participation was voluntary, no names or identifying data would be collected, and that completing the questionnaire was considered implicit consent. No harm or risk was expected because of participation in this study. Data collection procedure Following the required ethical approval, the head nurses of the respective units were approached to explain the study's purposes and procedure. Then, a list of nurses and their phone numbers from each unit was obtained. They were compiled into a single list and numbered sequentially. Using available free online resources, a list of 181 randomly selected numbers was generated. Then, an invitation to participate was sent to potential respondents; if they showed interest in participating in the study, a member of the research team called them and arranged a meeting. During this meeting, conducted in the workplace, nurses completed the questionnaire and returned it to the research team either in person or by putting in the designated box within the unit. Data analysis All questionnaires were first assessed for completeness, then coded and entered into SPSS version 23. Descriptive statistics such as means, frequencies and percentage were used to describe the sample's characteristics and knowledge level. In addition, independent t test, ANOVA and Pearson correlation tests were used to analyze the association between the nurses' knowledge level and the socio-demographic variables. Nurses' characteristics The total number of nurses who agreed to participate and completed the study was 182. Participants' characteristics are detailed in Table 1. Their mean age was 34.1 years (SD = 7.1 years), with females being dominant in the sample (78%). Most (95.6%) had a bachelor's degree in nursing. Participants were divided almost equally among three working areas: oncology units (32.4%), medical and surgical units (32.4%), and intensive care units (35.2%). Most of the nurses were of Indian (44%) or Omani nationality (40.1%), with 15.9% being Filipino. 
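The instrument scoring rule (30 true/false items with an "I don't know" option, one point per correct answer, banded as 0-10 poor, 11-20 moderate, 21-30 good) and the analysis workflow described above (independent t test, one-way ANOVA and Pearson correlation on the total score) can be sketched in a few lines. The snippet below is a hedged illustration: the answer key, responses and group scores are invented placeholders chosen only to mirror the reported group sizes, and SciPy is used in place of the SPSS workflow reported by the authors.

```python
# Illustrative sketch of (a) scoring the 30-item Neutropenia Knowledge
# Questionnaire and (b) the analysis workflow (t test, one-way ANOVA,
# Pearson correlation). All data below are invented placeholders.
import numpy as np
from scipy import stats

def knowledge_level(total):
    """Band the total score as described for the questionnaire."""
    if total <= 10:
        return "poor"
    if total <= 20:
        return "moderate"
    return "good"

def score_questionnaire(responses, answer_key):
    """One point per correct answer; 'dont_know' and wrong answers score 0."""
    return sum(1 for r, k in zip(responses, answer_key) if r == k)

rng = np.random.default_rng(0)

# (a) score one hypothetical respondent against a hypothetical answer key
answer_key = ["true"] * 15 + ["false"] * 15
responses = ["true"] * 12 + ["dont_know"] * 10 + ["false"] * 8
total = score_questionnaire(responses, answer_key)
print(total, knowledge_level(total))                    # -> 20 moderate

# (b) group comparisons on simulated total scores (stand-ins for real data)
bachelor = rng.normal(16.0, 3.7, 174)
postgrad = rng.normal(18.5, 3.7, 8)
print(stats.ttest_ind(bachelor, postgrad))              # education level

oncology = rng.normal(18.0, 3.5, 59)
med_surg = rng.normal(15.5, 3.5, 59)
icu = rng.normal(15.8, 3.5, 64)
print(stats.f_oneway(oncology, med_surg, icu))          # working area

scores = np.concatenate([oncology, med_surg, icu])
years_oncology = rng.uniform(0, 15, scores.size)
print(stats.pearsonr(scores, years_oncology))           # oncology experience
```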
The majority of the sample had not received any educational program either on oncology nursing (74.2%) or on neutropenia (86.3%)). Nurses' Knowledge of Neutropenia and its Management The total sample of nurses had a mean score of 16.3 out of 30 (SD = 3.7) on the Neutropenia Knowledge Questionnaire, corresponding to moderate knowledge of neutropenia. Some 16 questions were answered correctly by 50% or more of the nurses. Table 2 presents the frequency of the correct answer for each item in the Knowledge Questionnaire. Here it can be noticed that the top three correctly answered questions were: item number 30 (95.6%), which asked about informing patients and family about the infection control procedure; item number 24 (91.8%), which asked if neutropenic patients must be put in a private room; and item number 29 (88.5%), which asked if the skin and mucosal membranes should be assessed on a daily basis. Conversely, the three least correctly answered questions were: item number 19 (6.6%), which asked about rinsing a neutropenic patient's mouth three times a day; item number 11 (10.4%), which asked about bathing the neutropenic patients on a daily basis; and item number 27 (18.7%), which asked about wearing gloves, masks, and gowns during neutropenic patients' care. It is also worth mentioning here that less than half of the Comparison of total neutropenia knowledge scores To check the distribution of the total mean knowledge score between the different study groups identified in Table 1, an independent t-test was used for groups with two categories, i.e., gender, education level, receiving education about oncology nursing or management of neutropenia; a one-way ANOVA was used for groups with more than two categories, i.e., nationality and working area. The results of the t test showed that: nurses who had a post-graduate degree had a significantly higher knowledge score than those with a bachelor's degree (P = 0.048); nurses who had received an oncology educational course had a significantly higher knowledge score than those who had not (P = 0.011); nurses who had received an education course about neutropenia had a significantly higher knowledge score than those who had not (P = 0.007). However, there was no statistical difference in the total knowledge scores between male nurses and female nurses (P = 0.265). The results of the comparative analysis are presented in Table 3. Further, the one-way ANOVA analysis showed that nurses who were working in the oncology unit had a significantly higher knowledge score than those working in other units (P = 0.002); and Filipino nurses had a significantly higher knowledge score compared to other nationalities (P < 0.001) ( Table 3). To further understand factors affecting total knowledge score, a Pearson correlation test was used to measure if there was any correlation between total knowledge score and age, years of experience, and oncology experience. Results of the correlation showed that there was a statistically significant weak positive correlation between total score and age, r (182) = 0.30, p < 0.001; total score and experience, r (182) = 0.29, p < 0.001; and total score and oncology experience, r (182) = 0.24, p = 001. Discussion This study explored nurses' knowledge of neutropenia in cancer patients. It also underscores the association of nurses' knowledge with their demographic variables. A comprehensive literature search revealed only a few studies that measured nurses' knowledge of CIN (Naghdi et al. 2019;Nirenberg et al. 
2010;Tarakcioglu Celik and Korkmaz 2017;Teleb Osman and Mohamed Bayoumy 2016), hence limiting points of direct comparison with the present study. Relevant studies pertinent to nurses' knowledge of other aspects of oncology care and infection control were included to enrich the discussion. The findings show that nurses have moderate knowledge of neutropenia (mean score = 16.3 / 30, SD = 3.7). This is similar to the findings of some studies (Naghdi et al. 2019;Teleb Osman and Mohamed Bayoumy 2016), but lower than other studies which yielded a high knowledge level (Nirenberg et al. 2010;Tarakcioglu Celik and Korkmaz 2017). Out of the 30 items, respondents scored poorly on 14, with less than 50% giving the correct answer. These low-scoring items can be classified into four categories: the definition and criteria for CIN; identification and monitoring of signs and symptoms of CIN as well as infection among CIN patients; general nursing care of CIN patients (vital signs monitoring, hygiene, diet); and infection-prevention mechanisms (wearing of personal protective equipment (PPE), isolation protocol). Disconcertingly, the majority of the nurses failed to identify the basic definition of neutropenia, and this item yielded the lowest score when compared to other studies (Naghdi et al. 2019;Tarakcioglu Celik and Korkmaz 2017). Although the majority of the nurses were not able to identify that neutrophil should be below 1500 cells/mm 3 , their score was encouraging compared to one study (Naghdi et al. 2019). Nurses scored lowest on the frequency of rinsing neutropenic patients' mouths, similar to the findings of one study (Naghdi et al. 2019), but higher than in another (Tarakcioglu Celik and Korkmaz 2017). The second-lowest item was about frequency of bathing CIN patients, contrary to the results of other studies which yielded higher scores (Naghdi et al. 2019;Tarakcioglu Celik and Korkmaz 2017). The third lowest item was about the use of PPE in neutropenic patient care: higher than in one study (Naghdi et al. 2019) but far lower than in another study (Tarakcioglu Celik and Korkmaz 2017). Differences in the level of nurses' knowledge across studies can be attributed to various reasons such as study settings and respondents' characteristics. Some studies collected data from hospitals with rigorous neutropenic patient care protocol, or where the majority of respondents working in specialized oncology units had advanced oncology certification or had received in-service education on CIN and infection control. The result of the study in terms of nurses' level of knowledge is generally unfavorable and requires prompt attention and action. Remarkably, patients with cancer recover faster, feel safer and are more secure when they are cared for by nurses who are knowledgeable and equipped with competent skills (Corner et al. 2013;Kvåle and Bondevik 2010). Also, higher knowledge is associated with a higher level of practice (Naghdi et al. 2019). Strategies to improve nurses' knowledge include the provision of courses or advanced certification, professional training and workshops, orientation training at the beginning of employment, simultaneous cutting-edge theoretical and practical programs to bridge the theory-practice gap, and improving nurses' attitude toward infection-control principles (Nasiri et al. 2019). 
The use of oncology simulation with the effective integration of mnemonics, road maps and case-based learning also improved nurses' knowledge, perceived competence and skills acquisition (Linnard-Palmer 2012). The intensive implementation of evidence-based nursing protocols which consisted of lecture, demonstration, re-demonstration and distribution of printed protocol booklets have drastically improved their knowledge and practice in caring for patients with CIN (Teleb Osman and Mohamed Bayoumy 2016). Moreover, ensuring the availability and accessibility of CIN clinical practice guidelines encourage better implementation at the bedside (Nirenberg et al. 2010). Nurses must also update themselves with the latest research evidence on CIN care to guide their practice (Kaplow and Spinks 2015). Finally, integrating oncology nursing courses in nursing curricula is strongly recommended. Significant associations between nurses' knowledge of CIN and their socio-demographic variables were also established in the study. The findings reveal a positive association with their nursing degree, with those holding a post-graduate degree being more likely to have greater knowledge than their BSN counterparts, as in other studies (Alojaimy et al. 2021;Nirenberg et al. 2010;Sharour 2019;Suliman et al. 2018;van Veen et al. 2017). Also, nurses who had received an educational course on oncology or CIN demonstrated higher knowledge scores than those who had not, contrary to other findings (Suliman et al. 2018). Evidence indicates that nurses with higher education, training or advanced certification exhibit higher levels of competence, confidence and compliance with guidelines (Al-Rawajfah et al. 2013;Nirenberg et al. 2010). Hence, it is essential that nurses pursue professional development in terms of post-graduate studies, advanced oncology certification, and participation in training and workshops related to oncology nursing or CIN. Hospital and nursing administrators must look into these more intently by providing scholarship grants, more flexible working hours, and equal opportunity for nurses to enrich themselves professionally. Nurses working in oncology units had a significantly higher knowledge score than nurses working in other units, in contrast with other findings (Sarani et al. 2016). The former concentrate on the care of cancer patients on a daily basis, while those in other units may receive them less frequently. Because the oncology unit is highly specialized, they have more opportunities to reinforce their knowledge throughout their clinical practice. Filipino nurses had a significantly higher knowledge score than other nationalities; this may be attributed to the differences in undergraduate preparation in their respective countries. Older nurses are more likely to possess greater knowledge of CIN, similar to other studies (Hafeez et al. 2020;van Veen et al. 2017), but this may be indirectly linked to their years of clinical practice experience. Nurses with more years of general nursing experience as well as oncology experience had higher knowledge scores, similar to other studies (Hafeez et al. 2020; Teleb Osman and Mohamed Bayoumy 2016). As nurses age and mature in their profession, their cumulative clinical experience serves as an opportunity to develop their knowledge in neutropenic patient care. These findings call for the necessity of specialization in oncology nursing. 
This can be achieved by ensuring that core nurses for oncology care remain in the same unit for a significant period without interruption, to allow for repeated exposure to similar cases, hence making the clinical environment a significant learning ground to enrich their knowledge and caliber in caring for patients with CIN. Implications for nursing practice Chemotherapy-induced neutropenia is associated with various serious consequences such as life-threatening infection, delays in cancer treatment, higher morbidity and mortality, and excessive healthcare cost (Abou Saleh et al. 2013;Crawford et al. 2004;Lyman et al. 2010;Schelenz et al. 2012). Sensible strategies to augment nurses' knowledge of CIN are well-established, such as pursuing higher studies, provision of certification, courses, training, and workshops (Nasiri et al. 2019); the use of simulation (Linnard-Palmer 2012); making CIN clinical practice guidelines readily available (Nirenberg et al. 2010); and radical and intentional implementation of evidence-based practice (Kaplow and Spinks 2015). Fundamentally, nurses must reconsider their core values as the heritage and bloodline of their profession. They must assume responsibility and critically analyze their own level of knowledge and practice, exercise leadership and strengthen their advocacy (Challinor et al. 2020;Nirenberg et al. 2010). The empathetic use of self, a caring attitude, and genuine concern must motivate them to refine their knowledge and patient care (Fall-Dickson and Rose 1999). Limitations This cross-sectional study was conducted in a single tertiary hospital in Oman. Keeping this in mind, readers must exercise caution in interpreting the generalizability of study findings. Self-reported questionnaires as data collection also have the risk of response bias, and using an observational approached may yield more reliable data. Lastly, the dearth of available literature on the topic limited points for a comprehensive comparison. This highlights the need for future studies to explore CIN knowledge among nurses, as well as other variables such as attitude, compliance, practices, and interventional studies to enrich nursing scholarship and evidence-based practice. Conclusion The results of the current study showed that nurses have a moderate level of knowledge of CIN. The findings call for the need for further education and training. As a long-term plan, this might be accomplished by encouraging nurses to pursue post-graduate education or oncology-specialized certification, supported by scholarship grants. However, deliberate plans for short courses, training and workshops on oncology or CIN would have a more immediate impact on nurses' knowledge and clinical practice. Finally, integrating oncology nursing education within nursing curricula is needed. Funding Open access funding provided by Kristianstad University. The authors declare that no funds, grants, or other support were received during the preparation of this manuscript. Data availability The datasets analysed during the current study are available from the first author [Mohammad Al Qadire] on reasonable request. Conflict of interest The authors have no relevant financial or non-financial interests to disclose. Consent to participate Informed consent was obtained from all individual participants included in the study. Consent to publish Not applicable. 
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
v3-fos-license
2019-01-04T22:05:32.985Z
2014-07-01T00:00:00.000
145770278
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBYSA", "oa_status": "GOLD", "oa_url": "http://journal.teflin.org/index.php/journal/article/download/188/165", "pdf_hash": "755e6efeca0076363fbdb06611a1b777e3de74da", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46270", "s2fieldsofstudy": [ "Education", "Computer Science" ], "sha1": "755e6efeca0076363fbdb06611a1b777e3de74da", "year": 2014 }
pes2o/s2orc
USING INFORMATION AND COMMUNICATION TECHNOLOGY ( ICT ) TO ENHANCE LANGUAGE TEACHING & LEARNING : AN INTERVIEW WITH DR In recent years, information and communication technology (ICT) has become embedded and affected the every aspect of our lives. Rapid development of ICT has changed our language teaching pedagogy at all levels. Teachers, curriculum developers, researchers have been constantly striving to find techniques to use some form of it to both assist and enhance language learning. What is more exciting is that studies have demonstrated positive effects that ICT brings towards students’ learning motivation (Chenoweth, Ushida & Murday, 2006; SteppGreany, 2002), students’ personal needs and learning styles (Gimenez, 2000), students’ language mastery (Stepp-Greany, 2002), effective teaching and learning process (Al-Jarf, 2004), etc. Although these studies have shown that ICT has the potential important role in supporting and enhancing language learning, the use of ICT should never be the goal in and of itself. The responsibility for language instruction should be in the hands of qualified teachers who have the knowledge and expertise to manage and to make the best use of it to accomplish learning objectives. USING INFORMATION AND COMMUNICATION TECHNOLOGY (ICT) TO ENHANCE LANGUAGE TEACHING & LEARNING: AN INTERVIEW WITH DR. A. GUMAWANG JATI Flora Debora Floris (debora@peter.petra.ac.id) Associate Editor TEFLIN Journal In recent years, information and communication technology (ICT) has become embedded and affected the every aspect of our lives.Rapid development of ICT has changed our language teaching pedagogy at all levels.Teachers, curriculum developers, researchers have been constantly striving to find techniques to use some form of it to both assist and enhance language learning.What is more exciting is that studies have demonstrated positive effects that ICT brings towards students' learning motivation (Chenoweth, Ushida & Murday, 2006;Stepp-Greany, 2002), students' personal needs and learning styles (Gimenez, 2000), students' language mastery (Stepp-Greany, 2002), effective teaching and learning process (Al-Jarf, 2004), etc.Although these studies have shown that ICT has the potential important role in supporting and enhancing language learning, the use of ICT should never be the goal in and of itself.The responsibility for language instruction should be in the hands of qualified teachers who have the knowledge and expertise to manage and to make the best use of it to accomplish learning objectives.This interview highlights the issue of bringing ICT into the English language classrooms.Dr. A. Gumawang Jati, a senior lecturer at Faculty of Arts and Design, Bandung Institute of Technology (ITB) who specializes in the area of Technology and Education, was interviewed to share his experiences and in-sights on how ICT could be effectively used to support the language development process.2. What is ICT and how important is it in the curriculum of language teaching and learning? 
ICT refers to technologies that provide access to information through telecommunications.It is similar to Information Technology (IT), but focuses primarily on communication technologies.This includes the Internet, wireless networks, cell phones, and other communication mediums.ICT has become so essential in language learning.Its utilization in education has contributed to the improvement of language learning.In my opinion, ICT should be integrated in the curriculum to facilitate students and teachers in language teaching and learning process. 3. What are the advantages and the disadvantages of the implementation of ICT in the process of language teaching and learning? There are some advantages.First, both teachers and students of English can have quick and affordable access to the most up-to-date sources and information.Many focused exercises can be found on the net for free and software can be bought via Internet or in any store and some are free.Students can practice speaking in English with Siri in their iPad or iPhone or Assistant in their android devices.With the wide range of teaching and learning materials available for free in the Internet, teacher can select the ones that fit better to the students' needs according to their age, level, and abilities.There are also many discussion groups for professional development, interactive reading books for students, sound recordings for both teachers and students.I believe that ICT promotes student achievement because this tool allows them to progress at their pace and needs.With good access to sources of information, learners are also able to enhance their learning and creativity.Furthermore, the Internet also provides an easy and fast access to the current and authentic materials in the language being studied, which is motivating for language learners.Such authentic materials include, for instance, online newspapers, webcasts, podcasts, newsrooms, video clips or even video sharing websites such as YouTube.Another motivating language learning opportunity using ICT is provided by chat rooms and virtual environments such as Second Life which enable learners to practice the written and spoken language, without the fear of making mistakes. There are some potential disadvantages of using ICT for language teaching.It is expensive for the first investment (computers, Internet connection, servers, employment of ICT personels, etc).It is also expensive in running ICT training for teachers (and administrative staff).Thus, teachers often have minimum exposure and experience in the use of ICT in English Language Teaching (ELT).Due to these potential problems, some institutions do not have the will to integrate ICT into their school system. How have teachers and school administrators responded? 
The biggest challenge in promoting the use of ICT is dealing with the institution.Some school leaders want to integrate ICT into teaching and learning merely for the sake of keeping up with technological and educational advancement.Some institutions do not have the will to integrate ICT into their school system at all.Some school leaders do not understand and believe in the benefits of ICT for their learners.Some school administrators or teachers who are new to the integration of ICT in the ELT curriculum are usually "trapped" into the sophisticated software and they just simply convert the teaching and learning materials into digital without considering the learning process.Designing digital materials is actually a very complex process.The complete procedures can be read at http://issuu.com/gumawang/docs/online_mat_dev. 7. What arguments do you think would be the most convincing in persuading reluctant school administrators or teachers about the benefits of ICT in language classrooms? In the near future (it actually has started), everything that can be put into digital will be digitalized.With smart-phone generation, almost everything from computer will be put into the smart phone including learning languages.You can see it now at Play Store.Many scholars even have stated that it is now the time to move from CALL and to focus more on MALU (Mobile Assisted Language Use).I think in the future, our students will learn theories and read articles at home.The classroom would be a place for discussion and practice.This is what Flip Classroom is about.I understand this might be a problem for Indonesia since most of the English teachers are not ICT literate. 8. How would you help teachers to overcome their difficulties or reluctance using ICT in their language classrooms? The only way I see now is by giving trainings.I always start with "eye opener" of what the education world will look like in the next decade.Then I introduce them to practical free software for language learning such as Hotpotatoes, Cartoon Story Maker, etc.I relate those applications into classroom activities and language learning theory. 9 First I will install the access to the Internet, which is possible in even remote areas with mobile network.Then I will train teachers on how to use email and Facebook for educational purposes.Next I will introduce them to some free website resources and encourage them to adapt those free materials for classroom activities.I will also create simple school blog for teachers and students.The whole process might need at least 3 years. 11. Could you suggest some research areas or topics related to ICT in language classrooms that ELT scholars could explore? There are many issues that still need further observations.Some of the plausible topics or areas are: • Impact of ICTs on learning and achievement Monitoring and evaluation issues • Equity issues: gender, special needs and marginalized groups • Current implementations of ICTs in education: teaching, learning, content, curriculum, and tools • ICT in Education Policy issues 12.Is there any final thought or suggestions about the use of ICT in language classrooms that you would like to leave us with? 
I believe implementing ICT in the school will also improve the quality of teaching and learning when the schools do it right.It is very important that education systems develop e-content materials and do not merely digitalize the printed materials and conventional classroom interactions.If there is no e-content developed it is like building roads without cars on the road.ICT is not about purchasing computers for schools but upgrading skills and knowledge of teachers and administrators. 13. Thank you very much for sharing your expertise and experiences, Pak Jati.I am sure our readers will enjoy reading your insightful ideas.All the best for your future professional projects. Thank you Bu Flora and TEFLIN Journal for inviting me to share my ideas and experiences.I hope our readers will get inspired and see ICT as an important tool in language education.If TEFLIN Journal readers would like to know more about this topic, please do not hesitate to contact me at gumawang.jati@gmail.comor visit my websites. 4. Would you please give us one or two examples on how to integrate ICT in language classrooms?A good example is to apply offline activities for Cartoon Story Maker (CSM).With CSM it is possible to make 2D screen-based cartoon stories to illustrate conversations and dialogues.Stories can include an unlimited number of frames and are viewed frame by frame.Each frame can include images, text bubbles, and voice recordings.Stories are then saved as HTML page (webpage) or printed.Completed stories can also be loaded back into the CSM and edited. 5. How have your students responded to the use of ICT in language learning?I first introduced blogging to my Technical Writing students in 2005.The students loved it for some reasons, e.g.free website, purposeful readers, etc.Their complaints were mostly related to the slow Internet access (see http://elt-gumawang.blogspot.com/2005/12/students-comment-technicalwrt.html for further description).So far I have integrated ICT in all classes that I teach.Students' response to ICT is always positive.
v3-fos-license
2019-08-28T15:46:04.556Z
2019-08-28T00:00:00.000
201656899
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/s12859-019-3003-2", "pdf_hash": "0b1ce31d88bf04c6f0f841c47154c6d67efabb15", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46271", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "0b1ce31d88bf04c6f0f841c47154c6d67efabb15", "year": 2019 }
pes2o/s2orc
Fine-grained alignment of cryo-electron subtomograms based on MPI parallel optimization Background Cryo-electron tomography (Cryo-ET) is an imaging technique used to generate three-dimensional structures of cellular macromolecule complexes in their native environment. Due to developing cryo-electron microscopy technology, the image quality of three-dimensional reconstruction of cryo-electron tomography has greatly improved. However, cryo-ET images are characterized by low resolution, partial data loss and low signal-to-noise ratio (SNR). In order to tackle these challenges and improve resolution, a large number of subtomograms containing the same structure needs to be aligned and averaged. Existing methods for refining and aligning subtomograms are still highly time-consuming, requiring many computationally intensive processing steps (i.e. the rotations and translations of subtomograms in three-dimensional space). Results In this article, we propose a Stochastic Average Gradient (SAG) fine-grained alignment method for optimizing the sum of dissimilarity measure in real space. We introduce a Message Passing Interface (MPI) parallel programming model in order to explore further speedup. Conclusions We compare our stochastic average gradient fine-grained alignment algorithm with two baseline methods, high-precision alignment and fast alignment. Our SAG fine-grained alignment algorithm is much faster than the two baseline methods. Results on simulated data of GroEL from the Protein Data Bank (PDB ID:1KP8) showed that our parallel SAG-based fine-grained alignment method could achieve close-to-optimal rigid transformations with higher precision than both high-precision alignment and fast alignment at a low SNR (SNR=0.003) with tilt angle range ±60∘ or ±40∘. For the experimental subtomograms data structures of GroEL and GroEL/GroES complexes, our parallel SAG-based fine-grained alignment can achieve higher precision and fewer iterations to converge than the two baseline methods. Subtomogram alignment aims to rotate and translate a subtomogram to minimize its dissimilarity measure with a reference structure. The reference-free averaging process iteratively aligns a large number of subtomograms together with their own simple average as the initial reference to approximate the macromolecular structure of interest [7][8][9][10]. In the iteration procedure of optimizing subtomogram averaging, each subtomogram is rotated and translated in different ways but with the same reference structure. Much software has been developed for subtomogram alignment and classification [8,11,12]. Most implement algorithms that use a dissimilarity measure or a distance function as the alignment metric between the subtomogram and the reference [8,[12][13][14]. In three dimensional space, there is one translation and one rotation parameter along each axis. Therefore, for averaging N subtomograms, the parameter search space is 6 N−1 dimensional. If an exhaustive 6D search was performed in Cartesian space or in Fourier space for each subtomogram, the computational cost would be infeasible. To accelerate the search of translational parameters, Fourier transform is commonly used [15]. However, the computational cost for the exhaustive search of rotational parameters is still a major bottleneck. 
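As noted above, the translational part of the search is usually accelerated with the Fourier transform: the cross-correlation of a subtomogram with the reference over every integer shift can be obtained in one pass via the convolution theorem, and its peak gives the best translation. The sketch below is a generic NumPy illustration of that idea (it ignores the missing-wedge correction discussed later) and is not the authors' implementation.

```python
# Minimal sketch: exhaustive translational matching via FFT cross-correlation.
# Generic illustration with no missing-wedge handling; not the paper's code.
import numpy as np

def best_translation(reference, subtomogram):
    """Return the integer displacement of `subtomogram` relative to `reference`."""
    cc = np.real(np.fft.ifftn(np.conj(np.fft.fftn(reference)) * np.fft.fftn(subtomogram)))
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # Map indices past half the box size to negative shifts
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, cc.shape)]
    return np.array(shift), float(cc.max())

ref = np.random.rand(64, 64, 64)
sub = np.roll(ref, shift=(3, -5, 2), axis=(0, 1, 2))   # reference displaced by (3, -5, 2)
print(best_translation(ref, sub))                       # recovers (3, -5, 2)
```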
Fast translationinvariant rotational matching that obtains better rotational parameter candidate sets using spherical harmonics functions in Fourier space [16] has been proposed [17,18] and extended to subtomogram alignment [9,10,19,20]. A local fine-grained alignment can be applied for obtaining a better rotational parameter candidate set close to the optimal solution. Based on previous local refinement alignment on a very sparsely distributed starting rotational parameter candidate set [20,21], we further explore the potential of utilizing locally optimized alignment methods in a sparse rotational parameter candidate set. In this article, we design a competent stochastic average gradient (SAG) fine-grained alignment algorithm for dissimilarity measure between a pair of subtomograms in real space. We utilize an MPI parallel architecture, which can distinctly fulfill the simultaneous improvement of different alignment candidates. We demonstrate our SAG-based fine-grained alignment algorithm on realistically simulated data of GroEL and experimental GroEL and GroEL/GroES complexes subtomograms. The results show that SAG-based fine-grained alignment method can achieve higher alignment precision and better averaging of subtomograms at a low SNR of 0.003 with tilt angle range from +60°to -60°and from +40°to -40°, as compared to baseline methods. Methods We design a three-dimensional fine-grained alignment framework for subtomogram alignment based on stochastic average gradient [22], which minimizes the dissimilarity score defined by the Euclidean distance between a function with fixed parameters and a function with optimized parameters. We design dissimilarity scores of subtomogram alignment with missing wedge correction: constrained dissimilarity score in real space. We provide parallelization of our algorithm on the MPI parallel computing platform. Parameter definitions We define a subtomogram as an integrable function, V (x) : R 3 → R. We define T T as the operator of translation on subtomogram for T ∈ R 3 , which be expressed by In the 3D rotation group SO (3), we define R as the operator of rotation for a rotation R, which be expressed by where rotation R is a 3 × 3 rotation matrix [17]. The 3D subtomograms V (x) rotation and translation operation can be described as: The transformation parameters include rotation operation and translation operation can be represent as β = (R, T) = (φ, θ, ψ, τ 1 , τ 2 , τ 3 ) , where rotation parameters R = (φ, θ, ψ) can be deemed as Euler angles in the 'ZYZ' usage [23] or 'y' usage [24], and translation parameters as T = (τ 1 , τ 2 , τ 3 ) . Fine-grained alignment of subtomograms using constrained dissimilarity measure in a real space We now propose a fine-grained registration algorithm for the subtomogram alignment based on the stochastic average gradient. The goal of fine-grained alignment is to search for a local minimum value provided the given rough parameters of rotation R and translation T. To perform the alignment, one must define an alignment metric. We use a dissimilarity measure function for the alignment of two subtomograms. Many challenges exist, such as low resolution, low SNR, distortions owing to partial data loss (i.e., missing wedge effect). These factors must be considered during the subtomogram alignment procedure. To handle the significant missing wedge in Fourier space, the most common approach to correct the missing wedge is the constrained correlation coefficient (CCC) measure recommended by Förster et al. [8]. 
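The translation and rotation operators and the parameter vector β = (φ, θ, ψ, τ1, τ2, τ3) introduced above amount to a rigid transform of the density map, V′(x) = V(R⁻¹(x − T)) up to the paper's exact convention. The sketch below applies such a transform with scipy.ndimage; the ZYZ Euler convention, rotation about the volume centre and linear interpolation are assumptions made for illustration, not details taken from the authors' code.

```python
# Hedged sketch: apply beta = (phi, theta, psi, tau1, tau2, tau3) to a volume.
# Assumes a ZYZ Euler convention, rotation about the volume centre and linear
# interpolation; the paper's exact conventions may differ.
import numpy as np
from scipy.ndimage import affine_transform
from scipy.spatial.transform import Rotation

def transform_subtomogram(volume, phi, theta, psi, t):
    """Return the rotated-and-translated copy V'(x) = V(R^-1 (x - c - T) + c)."""
    R = Rotation.from_euler("ZYZ", [phi, theta, psi]).as_matrix()
    R_inv = R.T
    centre = (np.array(volume.shape) - 1) / 2.0
    # affine_transform maps output coordinate x to input coordinate R_inv @ x + offset
    offset = centre - R_inv @ (centre + np.asarray(t, dtype=float))
    return affine_transform(volume, R_inv, offset=offset, order=1, mode="constant")

vol = np.random.rand(64, 64, 64)
out = transform_subtomogram(vol, 0.3, 0.5, -0.2, t=(2.0, -1.5, 0.0))
```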
A binary mask function M : R 3 → {0, 1} is defined to represent the corresponding missing wedge. In cryo-electron tomography with single tilt ±θ, the missing wedge mask functions M(ζ ) : = I (|ζ 3 |≤|ζ 1 |tan(θ)) (ζ ), where I is symbolic function [19]. The overlap region after the alignment of two subtomograms in the Fourier space : = M R M. It only considers the best overlap region by rotation in Fourier space when two subtomograms are aligned, and eliminates the transform depending on the property of Fourier space. To reduce the effects of noise, focus on the particles, we also define a binary mask M in real space. Related to the Fourier space, the constrained function of subtomogram f can be expressed as: where FT denotes the Fourier transformation, FT −1 denotes the inverse Fourier transformation. The subtomogram mean value off must be restricted to M and : The constrained function of subtomogram g can be expressed as: In fact, for convenient calculation on discrete voxel points, we define the constrained cross-correlation function of normalized and aligned subtomograms f and g β can be given as: During the alignment, the dissimilarity score d is normalized, which is derived from the CCC. Given a normalized and aligned subtomogram f and g β , d can be represented as: By using the fast rotational matching (FRM) [9,19,20], we can get an initial set of the top N best rough rotations candidate set {R 1 , R 2 , . . . , R N }, and then obtain the top N best rough translations candidate set {T 1 , T 2 , . . . , T N }, that can efficiently minimize the normalized Euclidean distance d using fast translational matching (FTM), where N is the cardinality of the rotations or translations set. The selected rotation candidate sets have the highest CCC value compared to other rotation sets that are not selected. For each rotation R j in the set {R 1 , R 2 , . . . , R N }, we can utilize FTM to search the best translations T j between f and g (T,R) . For comparison purpose, the acquisition of the initial rotations candidate set was implemented using the corresponding fast rotation matching code of Chen's method. Two volume (subtomogram and reference) are transferred into Fourier space, the power spectrum (i.e. the magnitude of Fourier components) of a subtomogram and reference are only considered, and then we convert the Fourier coefficients to spherical coordinates and calculate fast rotational match by spherical harmonics convolution. The power spectrum is translation invariant. Therefore the fast rotation matching does not depend on translation. Given a certain combination of R and T, we can get the new rotation value R k and translation value T k using the stochastic average gradient (SAG) fine-grained alignment algorithm on three-dimensional density map, so that the normalized Euclidean distance decreases. The SAG algorithm was firstly applied to the twodimensional matrix [22]. Standard stochastic gradient descent algorithm implements sublinear rates, because the randomness introduces variance. The SAG algorithm stores previous calculated gradients to achieve a linear convergence rate. We expand the SAG algorithm and apply it to the three-dimensional matrix to form the 3D SAG algorithm. We design a 3D version of SAG algorithm and apply it to 3D rigid registration on subtomogram alignment procedure. Since the function f is fixed, we only use SAG finegrained alignment algorithm to update the β = (R, T). Now we redefine the loss function J for 3D subtomogram alignment. 
where n is the length of the volume on the x-axis, The recursive form of the SAG algorithm is given as: where at each iteration a index i k along the x-axis in the experimental data is random selected redundantly and uniformly in {1, . . . , n}, α k is step size and y k i can be given as: Similar to the standard full gradient (FG) method, the procedure contains a gradient in regard to the whole experimental subtomogram data. However, similar to the stochastic gradient (SG) method, the each iteration of SAG method only calculates the gradient in regard to a slice of the whole experimental subtomogram data along the x-axis. So, the iterative cost is independent of n, thus giving the SAG method low iteration cost and a linear convergence rate. In other words, by randomly choosing index i k and maintaining the memory of the latest gradient value calculated for each slice of the whole experimental subtomogram data, the iteration accomplishes a faster convergence rate than the iteration of the SG method. So SAG method does not increase the capability of getting trapped into local minima. For our loss function J, we adopt empirical step size α k = 1/L. In practice, Lipschitz constant L is unknown. The estimation of Lipschitz constant L will be doubled when the instantiated Lipschitz obeys the inequality [22]. We modify the estimation rule of Lipschitz constant L by selecting the max value in the experimental data. We implement the method in Algorithm 1 through equation 11 and 12, and we utilize a variable D to express the gradient of β. For the purpose of parallelism and vectorization, the stochastic average gradient completions usually divide the data into "small batches" and implement the stochastic average gradient iterations on small batches. We similarly perform the 3D version of the SAG-based fine-grained subtomogram alignment on small batches (a slice) along the x-axis. Algorithm 1 Basic SAG fine-grained subtomogram alignment method for minimizing constrained dissimilarity score 1 In order to speed up the SAG algorithm convergence rate and adequately decrease the memory space of SAG method, we optimize small batches SAG algorithm in 3D space, which select small batches slices along the xaxis in the experimental subtomograms data, rather than only selecting a slice along the x-axis in the experimental subtomograms data in Algorithm 2. In an optimized SAG fine-grained subtomogram alignment algorithm (Algorithm 2), small batches slices depends on the side length of subtomogram data, for example, small batches is about 4 ∼30 for our simulation subtomogram, in which the side length is 64. We use a loop to judge whether each slice is visited, instead of the visitation policy of each slice in the SAG algorithm. Algorithm 2 Optimized SAG fine-grained alignment method for minimizing constrained dissimilarity score The comparison of computing time between Algorithm 1 and 2 is described in the Results section. Algorithm 2 is faster than Algorithm 1, so Algorithm 2 is selected for fine-grained subtomogram alignment. In the optimized SAG fine-grained subtomogram alignment algorithm, the number of x-slices in each iteration is about 1 16 to 1 2 of side length of subtomogram. For the original candidate set R and T, the final result of iteration produces the refined parameters of subtomo- y k i through optimized SAG fine-grained subtomogram alignment algorithm (Algorithm 2), where k and k + 1 are the iteration numbers. 
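Two of the quantities discussed above can be made concrete with short, hedged sketches. The first computes the missing-wedge mask, the constrained cross-correlation over the overlap region Ω of the two masks, and a dissimilarity score; because the paper's display equations were not recoverable from the extracted text, the normalisations used here (mean/variance normalisation of the masked maps and d = √(1 − CCC)) are assumed conventions, not the authors' exact definitions.

```python
# Hedged sketch: missing-wedge mask, constrained cross-correlation over the
# overlap region, and a dissimilarity score (d = sqrt(1 - CCC) is assumed).
import numpy as np

def wedge_mask(shape, tilt_deg):
    """Single-tilt wedge mask following M(z) = 1 where |z3| <= |z1| tan(theta)."""
    grids = np.meshgrid(*[np.fft.fftfreq(s) for s in shape], indexing="ij")
    z1, _, z3 = grids
    return (np.abs(z3) <= np.abs(z1) * np.tan(np.radians(tilt_deg))).astype(float)

def constrained_ccc(f, g, mask_f, mask_g):
    """CCC of f and g restricted to the Fourier-space overlap of the two masks."""
    omega = mask_f * mask_g
    fc = np.real(np.fft.ifftn(np.fft.fftn(f) * omega))   # wedge-constrained maps
    gc = np.real(np.fft.ifftn(np.fft.fftn(g) * omega))
    fc = (fc - fc.mean()) / fc.std()
    gc = (gc - gc.mean()) / gc.std()
    return float(np.mean(fc * gc))

def dissimilarity(f, g, mask_f, mask_g):
    return float(np.sqrt(max(0.0, 1.0 - constrained_ccc(f, g, mask_f, mask_g))))
```

The second sketch mirrors the structure of Algorithms 1-2: gradients are computed on small batches of x-slices, a memory of the most recent gradient of each slice is kept, and β is updated with the running average. The per-slice squared-difference loss, its finite-difference gradient and the fixed step size are simplified stand-ins for the constrained dissimilarity and the 1/L step used in the paper; `transform_subtomogram` refers to the earlier sketch.

```python
# Hedged sketch of the SAG-style update over x-slices (cf. Algorithms 1-2).
import numpy as np

def slice_loss(f, g, beta, i):
    g_t = transform_subtomogram(g, *beta[:3], t=beta[3:])   # earlier sketch
    return float(np.sum((f[i] - g_t[i]) ** 2))

def slice_grad(f, g, beta, i, eps=1e-3):
    grad = np.zeros(6)
    for k in range(6):
        step = np.zeros(6)
        step[k] = eps
        grad[k] = (slice_loss(f, g, beta + step, i) -
                   slice_loss(f, g, beta - step, i)) / (2 * eps)
    return grad

def sag_refine(f, g, beta0, n_iter=50, batch=8, alpha=1e-6, seed=0):
    n = f.shape[0]                        # number of x-slices
    beta = np.asarray(beta0, dtype=float)
    memory = np.zeros((n, 6))             # last gradient computed for each slice
    running_sum = np.zeros(6)
    rng = np.random.default_rng(seed)
    for _ in range(n_iter):
        for i in rng.choice(n, size=batch, replace=False):
            g_i = slice_grad(f, g, beta, i)
            running_sum += g_i - memory[i]       # keep the stored-gradient sum current
            memory[i] = g_i
        beta = beta - (alpha / n) * running_sum  # average-gradient step
    return beta
```

For readability this sketch re-transforms the whole volume for every slice; a practical implementation would transform once per parameter perturbation and reuse the result across the batch.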
Message passing interface frame parallel fine-grained subtomogram alignment procedure To find global optimal rotation and translation parameters, it is necessary to perform multiple refining processes from different rotation and translation parameter candidate sets. To initialize on different parameter sets synchronously, we use Message Passing Interface (MPI) frame to calculate the score of dissimilarity in parallel. We compare dissimilarity scores gained by using different candidate rotation and translation parameter sets to find the least dissimilarity score in Algorithm 3. With the MPI parallel model, we can quickly search for the optimal rotation and translation candidate parameter in all candidate sets. Algorithm 3 Based on MPI parallel SAG fine-grained subtomogram alignment procedure 1: get top N candidate set (rotation and translation) 2: for each candidate set, we use SAG refine-grained subtomogram alignment method to optimize the sum score of dissimilarity between subtomograms and reference in parallel mode 3: get the minimum score of dissimilarity in the score of dissimilarity data sets 4: end procedure Message Passing Interface is a communication protocol on different computing nodes for concurrent computation, and supports peer to peer and broadcast. MPI is also a messaging application interface that includes protocol and semantic descriptions. MPI is specifically designed to allow applications to run in parallel on multiple independent computers connected over a network in Fig. 1. We choose MPI frame as parallel programming for several advantages: • MPI is the message passing library that can be regarded as a standard library. In fact, almost all HPC platforms support it. • When we change applications to different platforms that conform to MPI standards, there is little or no need to modify the source code. • There are many functions and a variety of implementations are available. Finally, we outline some key differences of our stochastic average gradient fine-grained alignment method for the subtomogram alignment from Chen's approach [20] and Xu's approach [21]: 1. In Xu's approach, they use Levenberg-Marquardt algorithm to calculate increment value, which needs total volume data to calculate the Jacobian matrix and parameters. In Chen's approach, they calculate the crosscorrelation coefficient of a 3D matrix in each iteration and find the best rotation and location values in the 3D matrix. They also utilize spherical harmonic function to calculate the new cross-correlation coefficient between the 3D experimental volume and the reference volume, to find the best cross-correlation score in each iteration. 2. Xu's approach uses stochastic parallel refinement framework. Chen's approach uses MPI frame to parallelize subtomogram alignment. 3. Our method utilizes a 3D version of stochastic average gradient algorithm to execute fine-grained subtomogram alignment and apply MPI frame to parallelize subtomogram alignment. Our SAG-based fine-grained alignment only needs a partial batch slices of the 3D volume in each iteration. Generating simulated cryo-electron tomograms We downloaded the atomic model from Protein Data Bank (PDB), specified the resolution and voxel spacing, and conducted low-pass filtering of the data. After getting the density maps, we performed random rotation and translation operations. Contrast Transfer Function (CTF) was simulated using a known defocus value. The volume density maps were projected onto the specified tilt angles and angle increment. 
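Before moving on to the simulated data, the parallel layout of Algorithm 3 described above can be illustrated with mpi4py: the root process scatters the top-N candidate (R, T) sets across ranks, each rank refines its share, and the candidate with the smallest dissimilarity is kept. This is a sketch of the pattern, not the authors' implementation; `sag_refine`, `dissimilarity`, `wedge_mask` and `transform_subtomogram` refer to the hedged sketches above, and the candidate values and volumes are random stand-ins.

```python
# Illustrative mpi4py layout for Algorithm 3: refine candidate parameter sets
# in parallel and keep the one with the smallest dissimilarity.
# Run with e.g.: mpiexec -n 4 python refine_candidates.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    # Top-N rough candidates from fast rotational/translational matching (stand-ins)
    candidates = [np.random.uniform(-0.1, 0.1, size=6) for _ in range(8)]
    chunks = [candidates[i::size] for i in range(size)]
else:
    chunks = None

my_candidates = comm.scatter(chunks, root=0)

rng = np.random.default_rng(42)           # same seed: every rank sees the same stand-ins
f = rng.random((32, 32, 32))              # reference
g = rng.random((32, 32, 32))              # experimental subtomogram
m = wedge_mask(f.shape, tilt_deg=60)

local_best = (np.inf, None)
for beta0 in my_candidates:
    beta = sag_refine(f, g, beta0)        # hedged sketch from above
    score = dissimilarity(f, transform_subtomogram(g, *beta[:3], t=beta[3:]), m, m)
    if score < local_best[0]:
        local_best = (score, beta)

# Gather every rank's best candidate and let the root keep the global minimum
all_best = comm.gather(local_best, root=0)
if rank == 0:
    best_score, best_beta = min(all_best, key=lambda x: x[0])
    print("global best dissimilarity:", best_score)
```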
Generating simulated cryo-electron tomograms

We downloaded the atomic model from the Protein Data Bank (PDB), specified the resolution and voxel spacing, and low-pass filtered the data. After obtaining the density maps, we applied random rotation and translation operations. The Contrast Transfer Function (CTF) was simulated using a known defocus value. The volume density maps were projected over the specified tilt angle range and angular increment. The projection images were then corrupted with Gaussian-distributed noise and Modulation Transfer Function (MTF) noise to simulate the electron optical effects, and reconstructed with a weighted back-projection (WBP) algorithm to produce the simulated subtomogram datasets. The atomic model (PDB ID: 1KP8) was used to generate subtomograms of size 64³ with a voxel size of 0.6 nm and −6 μm defocus. We used tilt angle ranges of ±60° and ±40° with a 1° angular increment, respectively. The simulation procedure was implemented using the Situs PDB2VOL program [25] to obtain the volume electron density maps. The central slices for different tilt ranges and SNRs are shown in Fig. 2. Subtomograms with a smaller tilt range and lower SNR show more deformation than the noise-free subtomogram (i.e., the reference).

Experimental GroEL and GroEL/ES subtomograms

The experimental GroEL and GroEL/ES datasets were obtained from [8]. To assemble GroEL14/GroES7, 1 μM GroEL14 and 5 μM GroES7 were incubated for 15 min at 30 °C in a buffer containing 5 mM MgCl2, 5 mM KCl, 5 mM ADP, 1 mM DTT, and 12.5 mM Hepes (pH 7.5). Then, 3.5 μl of protein solution was mixed with 0.5 μl of a 10 nm BSA-colloidal gold suspension on mesh grids. The sample was vitrified by plunge-freezing. Single-axis tilt series were acquired with a Tecnai G2 Polara microscope equipped with a 2k × 2k FEI CCD camera. The tilt series were collected over a tilt angle range of ±65° with a 2° or 2.5° angular increment at defocus levels between 7 and 4 μm. The object pixel size was 0.6 nm.

Classification of experimental GroEL and GroEL/ES subtomograms

Thousands of subtomograms containing putative particles were selected manually and aligned to the subtomogram average according to cross-correlation. After eliminating particles with low cross-correlation coefficients (e.g., CCC ≤ 0.42), the remaining particles were retained for subtomogram alignment and classification. This experimental dataset of ∼800 kDa GroEL14 and GroEL14/GroES7 complexes has essentially served as a quasi-standard in subtomogram alignment and classification research [8,12,26,27]. The 786 subtomograms in the dataset were aligned against the average of all subtomograms in a random direction, in an unsupervised manner. Subsequently, we applied MCO-A classification [12] with 10 initial classes and seven-fold symmetry. The MCO-A method converged to three different classes, a result consistent with those published previously in [8,12,27,28]. The central slices of each class average resulting from the MCO-A classification are shown in Fig. 3: class 1 resembles the fitted volume of GroEL14, class 2 is associated with the fitted atomic model of GroEL14/ES7, and class 3 is visibly smaller than the volume of GroEL14.

Comparison of fine-grained subtomogram alignment accuracy to the baseline methods

We simulated 20 GroEL subtomograms with random rotations and translations at various SNRs under tilt ranges of ±40° and ±60°, respectively. We first compared our method with Chen's approach [20] and Xu's approach [21] to assess the subtomogram alignment accuracy against the noise-free reference volume, which was produced from the GroEL structure (PDB ID: 1KP8). The reference volume was low-pass filtered to 6 nm resolution and used as the starting reference for the alignment procedure. We aligned the 20 simulated subtomograms with the reference volume using each of the three methods.
The alignment accuracy was assessed using the constrained cross-correlation (CCC) defined in the section Parameter definitions. The resulting CCCs were compared using a paired t-test between our method and each of the two baseline methods, where the data are assumed to be normally distributed [29]. We also used a non-parametric test without the Gaussian assumption (the Wilcoxon signed-rank test) to calculate P-values, and the results are similar to those of the t-test (Supplementary Section 1). As shown in Table 1, our method outperformed the two baseline methods on simulated subtomograms of SNR 0.03 and 0.003 under the ±60° tilt range. The alignment accuracy comparison for subtomograms simulated with a ±40° tilt angle range is shown in Table 2. We note that although Chen's method outperformed ours under some conditions, under the more realistic SNR of 0.003 with different tilt angle ranges, our method substantially improved the resulting CCC alignment accuracy (Figs. 4 and 5). We also used 50 particles to evaluate subtomogram alignment accuracy under different conditions and compared the resolution values under the 0.143 FSC criterion (Supplementary Section 2). This comparison shows that our method outperformed the two baseline methods on simulated subtomograms of SNR 0.003 under tilt ranges of ±60° and ±40°.
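The pairwise statistical comparison of CCC values described above can be reproduced with a few lines of SciPy. This is a generic sketch on placeholder data, not the authors' analysis script; the random arrays would be replaced by the measured CCC values.

```python
import numpy as np
from scipy import stats

# Placeholder CCC values for the same 20 subtomograms aligned by two methods;
# in practice these arrays come from the alignment runs.
rng = np.random.default_rng(0)
ccc_ours = rng.uniform(0.6, 0.9, 20)
ccc_baseline = rng.uniform(0.5, 0.85, 20)

# Paired t-test (assumes the paired differences are normally distributed).
t_stat, p_t = stats.ttest_rel(ccc_ours, ccc_baseline)

# Wilcoxon signed-rank test (non-parametric alternative, no Gaussian assumption).
w_stat, p_w = stats.wilcoxon(ccc_ours, ccc_baseline)

print(f"paired t-test p = {p_t:.4f}, Wilcoxon p = {p_w:.4f}")
```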
Computation time compared with other subtomogram alignment methods

Next, we compared the computation time of our SAG fine-grained subtomogram alignment method with Xu's method and Chen's method. For an objective and fair comparison, we implemented the three alignment methods in Python and ran them on 20 simulated subtomograms of SNR 0.003 under the ±60° tilt range. We used the original reference-free model as the initial reference for our algorithm. The most common reference-free alignment rule is to use the average of the subtomograms in a random direction as the original reference [28]. 'Reference-free' does not mean without any reference; rather, it means that no external reference is needed, because an external reference can introduce reference bias. We recorded the running time each method needed to reach its best resolution. Each time a subtomogram alignment method converged, we obtained a resolution value; by fixing the same number of convergence rounds, we evaluated which method reached the best resolution. After each iteration, we computed the subtomogram average and measured its resolution by FSC, and then recorded the running time of our SAG fine-grained subtomogram alignment method. Afterwards, we repeated the protocol using Xu's method and Chen's method under the same SNR 0.003 condition. Finally, we compared the resolutions of the averages and the running times of the three subtomogram alignment methods. The computation time of the basic SAG fine-grained alignment method and the optimized SAG fine-grained alignment method is 50.7 s and 40.5 s, respectively, whereas Xu's method and Chen's method cost 150.2 s and 149.4 s, respectively (Fig. 6). The reported computation time of each alignment method is the time for a single run of that algorithm. Figure 6 depicts the computation time of the different alignment algorithms (the basic SAG fine-grained alignment method, the optimized SAG fine-grained alignment method, Xu's method, and Chen's method). We note that our SAG fine-grained alignment method is faster than Xu's method and Chen's method. We then compared the elapsed time needed to reach the best resolution for the three alignment methods. To reach the best resolution, the different alignment methods may need to run many times; for example, our optimized SAG-based fine-grained subtomogram alignment method reached its best resolution (37.1 Å) after 14 iterations, Xu's method reached its best resolution (40.7 Å) after 11 iterations, and Chen's method reached its best resolution (39.7 Å) after 13 iterations (Fig. 8).

Reference-free fine-grained alignment of subtomograms on simulated and experimental datasets

We tested our SAG fine-grained alignment method and the two baseline alignment methods for subtomogram alignment without an external reference. We first tested the different alignment methods on the simulated subtomogram dataset, and then applied the three methods to the experimental GroEL subtomogram dataset (Fig. 3) [8]. The subtomogram datasets were divided into odd and even halves and aligned separately. The odd and even halves were averaged separately. The normalized cross-correlation coefficient between the odd and even average density maps over corresponding shells in Fourier space was measured by the FSC. Under the 'gold-standard' FSC 0.143 criterion [30], the corresponding resolution values were calculated from the FSC curve and the voxel size, and the odd and even halves were then combined into the subtomogram average. The subtomogram average was used as a new reference and low-pass filtered, and the cycle was repeated until it ended or the frequency no longer met the criterion. We averaged the subtomograms after reference-free subtomogram alignment and computed their resolution curves.

For the simulated subtomogram dataset, our SAG fine-grained alignment method was applied for subtomogram alignment at SNR 0.003 and tilt angle range ±60° (Figs. 7 and 8), and finally obtained an average resolution of 37.1 Å after 14 iterations according to the gold-standard 0.143 FSC criterion [30]. Applying Xu's method and Chen's method to subtomogram alignment, the final average resolution (0.143 FSC criterion) was 40.7 Å after 11 iterations and 39.7 Å after 13 iterations, respectively. Our SAG fine-grained subtomogram alignment method achieves a better resolution than Xu's alignment method, and a slightly better resolution than Chen's alignment method. Subtomogram averaging often requires thousands of subtomograms and takes weeks to complete; our SAG fine-grained subtomogram alignment method can reduce the computational cost and achieve a better resolution than the two baseline methods.
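As a reference for the gold-standard resolution estimates quoted above, the FSC-based procedure can be sketched as follows. This is a generic illustration assuming two cubic half-maps given as NumPy arrays; it is not the code used in the paper.

```python
import numpy as np

def fsc_resolution(half1, half2, voxel_size, threshold=0.143):
    """Compute the FSC curve between two half-maps and return the resolution
    (in the same length unit as voxel_size) at the given threshold."""
    f1, f2 = np.fft.fftn(half1), np.fft.fftn(half2)
    n = half1.shape[0]
    freq = np.fft.fftfreq(n)                          # per-axis frequency index
    fx, fy, fz = np.meshgrid(freq, freq, freq, indexing="ij")
    shells = (np.sqrt(fx**2 + fy**2 + fz**2) * n).astype(int)

    fsc = []
    for s in range(n // 2):
        mask = shells == s
        num = np.sum(f1[mask] * np.conj(f2[mask]))
        den = np.sqrt(np.sum(np.abs(f1[mask])**2) * np.sum(np.abs(f2[mask])**2))
        fsc.append((num / den).real if den > 0 else 0.0)

    # first shell where the correlation drops below the threshold
    for s, value in enumerate(fsc[1:], start=1):
        if value < threshold:
            return (n * voxel_size) / s               # resolution = 1 / spatial frequency
    return 2.0 * voxel_size                           # better than the Nyquist limit
```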
We then applied the three methods to the experimental GroEL subtomogram dataset (Fig. 3). Throughout our iterative alignment and averaging procedure, the average of the GroEL subtomograms transformed from a blurred structure into the seven-fold symmetric barrel structure resembling the true GroEL structure. According to the 0.143 FSC criterion, the resolution of the final average was 25.1 Å after 4 iterations (Fig. 9). To calculate the FSC resolution, all alignment methods were performed on the dataset divided into two independent halves. Using Xu's alignment method and Chen's alignment method, the resolution of the final average (0.143 criterion) was 32.5 Å after 9 iterations and 27.9 Å after 12 iterations, respectively. Furthermore, we used the final averages obtained with the different alignment methods to fit the atomic structure of the complex (PDB ID: 1KP8) in Fig. 9. From Fig. 9, the final average obtained by our SAG-based fine-grained alignment method is better than the final averages obtained by Xu's and Chen's alignment methods in the subtomogram alignment procedure. Therefore, our SAG-based fine-grained alignment method outperforms Xu's alignment method and Chen's alignment method for reference-free subtomogram averaging. We also provide FSC curves for the reference-free fine-grained alignment of subtomograms on the simulated and experimental datasets according to the 0.143 criterion (Supplementary Section 3).

Discussion

In this article, we propose the stochastic average gradient (SAG) fine-grained alignment method, which optimizes a constrained dissimilarity score. However, the original SAG algorithm was first applied to two-dimensional matrices, so we designed two versions of a 3D SAG-based fine-grained alignment method for the subtomogram alignment procedure. Since randomness introduces variance, the standard stochastic gradient descent algorithm only achieves sublinear convergence rates. Our SAG fine-grained subtomogram alignment method selects only a slice or a mini-batch of slices along the x-axis of the experimental data in each iteration, maintains the memory of the latest gradient value calculated for each slice, and over the whole iteration produces a gradient for the subtomogram alignment. The size of the mini-batch of slices depends on the side length of the subtomogram data. Consequently, our SAG fine-grained subtomogram alignment method has a linear convergence rate. Moreover, comparing the computation time of Algorithms 1 and 2, Algorithm 2 is faster than Algorithm 1, so Algorithm 2 is selected for fine-grained subtomogram alignment. In contrast, Xu's method and Chen's method require the whole 3D volume for the calculation in each iteration and thus take more time. Compared with the other methods, our method requires more temporary space in memory.

For the alignment accuracy comparison, Chen's method performs better than our SAG fine-grained alignment method on SNR = 0.03 and SNR = 0.01 subtomograms under the ±40° tilt range, probably because Chen's method searches for the best cross-correlation coefficient value within the 3D cross-correlation matrix, which is accurate at higher SNR. However, our method is more robust in the more realistic low-SNR setting of SNR 0.003. Our SAG fine-grained alignment method uses the MPI framework to calculate the dissimilarity scores in parallel for subtomogram alignment; however, MPI is not easy to program and requires some experience, unlike multi-threading.

Conclusion

Our SAG fine-grained subtomogram alignment method optimizes a constrained dissimilarity score in real space. Our method is clearly more accurate for subtomogram alignment and averaging at SNR = 0.003 under tilt ranges of ±60° and ±40°. Comparing the elapsed time of the different alignment methods, our SAG fine-grained subtomogram alignment method is faster than Xu's method and Chen's method and obtains a better resolution, which is well validated on the simulated subtomogram datasets and the experimental GroEL and GroEL/ES subtomogram datasets. Additionally, we used a very efficient Message Passing Interface (MPI) parallel refinement alignment procedure, which is specifically designed to run in parallel on multiple independent computer nodes connected by a network. MPI significantly accelerates the simultaneous refinement of multiple candidate sets for subtomogram alignment.
In future work, we will consider classification problems and try new classification algorithms, including but not limited to deep learning. In addition, we will continue to study subtomogram alignment and will test the new alignment algorithm on larger, updated subtomogram datasets.
v3-fos-license
2019-11-30T14:32:39.690Z
2019-11-29T00:00:00.000
208355957
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://jhoonline.biomedcentral.com/track/pdf/10.1186/s13045-019-0831-5", "pdf_hash": "d794dfe3bd96630e708f002a9189406bce472607", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46272", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "sha1": "5cccc2dc649ff3c6c39736b2f652a33f607d1255", "year": 2019 }
pes2o/s2orc
Targeting glycosylation of PD-1 to enhance CAR-T cell cytotoxicity

Asparagine-linked (N-linked) glycosylation is ubiquitous and can stabilize the immune inhibitory PD-1 protein. Reducing N-linked glycosylation of PD-1 may decrease PD-1 expression and relieve its inhibitory effects on CAR-T cells. Considering that the codon of asparagine is aac or aat, we wondered whether the adenine base editor (ABE), which induces a·t to g·c conversion at specific sites, could be used to reduce PD-1 suppression by changing the glycosylated residue in CAR-T cells. Our results showed that ABE editing altered the coding sequence of the N74 residue of PDCD1 and downregulated PD-1 expression in CAR-T cells. Further analysis showed that ABE-edited CAR-T cells had enhanced cytotoxic functions in vitro and in vivo. Our study suggests that single base editors can be used to augment CAR-T cell therapy.

To the Editor,

Chimeric antigen receptor T (CAR-T) cells have not been satisfactory in treating solid tumors [1]. PD-1 limits CAR-T cell therapy within solid tumors. CRISPR/Cas9 can downregulate PD-1 [2] but also potentially leads to carcinogenesis, because the success of such tools relies on suppressing the DNA damage response [3]. Furthermore, CRISPR/Cas9 could introduce missense mutations that might exacerbate T cell dysfunction. Hence, safer and more precise gene-editing tools are needed to produce better CAR-T cells. N-linked glycosylation can stabilize PD-1 and thereby compromise anti-tumor immunity [4]. As N-linked glycosylation is restricted to asparagine residues coded by aac or aat, adenine base editors (ABE), which convert a·t to g·c base pairs [5], may be used to diminish such glycosylation. Herein, we explored the potential of ABE to edit and downregulate PD-1 in CAR-T cells.

PD-1 mutated at N74 had decreased surface expression (Fig. 1a). Therefore, N74 in PD-1 is a good target for ABE. Three types of amino acids may be produced after base editing at N74, which is coded by aac (Fig. 1b). All three types of mutations, into D74 (gac), S74 (agc), and G74 (ggc), comparably downregulated surface and total PD-1 (P < 0.001) (Fig. 1c and Additional file 1: Figure S1c). Next, we investigated whether ABE was able to decrease PD-1 in CAR-T cells. The delivery of gRNA using lentivirus is efficient [6], so we constructed lentiviral vectors simultaneously expressing a mesothelin-directed CAR and a gRNA targeting either non-specific sites (scramble) or N74 of PDCD1 (gRNA), under two independent promoters (Additional file 1: Figure S1a). T cell transduction efficacies were over 85% (Additional file 1: Figure S1b). The commercially synthesized ABE proteins were then delivered into CAR-T cells by electroporation. Sequencing data showed that the conversion to g occurred mainly at the first adenine within the N74 codon of PDCD1 in CAR-T cells expressing the specific gRNA (Fig. 1d). Conversion was also noticed at the second adenine, at lower frequencies (Fig. 1d). This editing pattern is consistent with a previous report [7]. In the following experiments, the ratios of CAR-expressing cells were adjusted to a comparable 85%. In gRNA CAR-T cells, PD-1 expression was decreased at the protein level but not at the mRNA level (Fig. 1e and f). Consistently, surface PD-1 was remarkably decreased in resting and activated gRNA CAR-T cells (P < 0.01) (Fig. 1g). Further analysis suggested that ABE editing did not impair the proliferation and activation of CAR-T cells (P > 0.05) (Fig. 1h-j) when PD-L1 was absent. Mesothelin-positive target cells with high PD-L1 expression were then prepared (Fig. 2a).
After washing out exogenous cytokines, CAR-T cells and target cells were co-incubated. Upon target cell engagement, CAR-T cells divided efficiently (Fig. 2b). Compared with their gRNA counterparts, the proliferation of CAR-T cells expressing scramble RNA was significantly suppressed (P < 0.05) (Fig. 2b). gRNA CAR-T cells had enhanced cytolytic capacities (P < 0.05) and increased secretion of IL-2 and IFN-γ (P < 0.05) after activation by tumor cells (Fig. 2c and d). To further confirm the effectiveness of ABE in relieving T cell inhibition, we examined the anti-tumor functions of CAR-T cells in vivo. Consistently, CAR-T cells expressing N74-targeted gRNA attained greater expansion (P < 0.05) (Fig. 2e and Additional file 2: Figure S2). Decreased surface PD-1 (P < 0.01) and upregulated activation markers (CD69 and CD27) (P < 0.05) were noticed on gRNA CAR-T cells (Fig. 2f and g). gRNA CAR-T cells more efficiently delayed tumor growth and improved overall survival when compared with their scramble counterparts (P < 0.05) (Fig. 2h-j) (Additional files 3 and 4).

Fig. 1 Mutations of N74 decreased PD-1. a Surface expression of wild-type PD-1 and its derivative N74A (A74) mutant in 293T cells. b Potential mutations resulting from single-nucleotide conversions at N74. c Mutations at N74 decreased surface expression of PD-1. PD-1 harboring wild-type or mutated N74 was tandemly linked with self-cleaving P2A and GFP, then transiently expressed in 293T cells. Surface PD-1 expression was determined in GFP+ cells by FACS assay. d Sanger sequencing of PDCD1 of CAR-T cells expressing scramble or N74-targeted gRNA after base editing. e-j CAR-T cells having comparable rates of GFP+ cells were activated with equal amounts of anti-CD3/CD28 beads without exogenous cytokines. e Western blots of PD-1 in CAR-T cells, activated or not. f qRT-PCR detection of PD-1 expression in resting and activated CAR-T cells. g Surface expression of PD-1 in CAR-T cells before and after activation; mean fluorescence intensity (MFI) values were compared. h CAR-T cells were stained with eFluor 670 dye and then cultured with or without beads; 48 hours later, proliferation of CAR-T cells was determined according to eFluor 670 dilution. Activation markers CD69 (i) and CD27 (j) were detected and compared in different CAR-T cells before and after activation. **P < 0.01 and ****P < 0.001.

Fig. 2 Single base conversion reduced PD-1-mediated suppression. a IFN-γ (100 IU/mL) induced PD-L1 expression in target cells. After that, target cells were washed to discard IFN-γ and used in the following experiments. b-d CAR-T cells were co-incubated with target cells without exogenous cytokines. b CAR-T cells expanded with or without target cells for 48 h. c CAR-T cells were co-cultured with target cells at the indicated effector-to-target ratios (E:T) for 24 h; the cytolytic potencies of CAR-T cells were tested using bioluminescence imaging. d CAR-T cells were incubated with tumor cells at E:T = 1:1; 24 hours later, IL-2 and IFN-γ in the supernatants were detected using ELISA. e-j The anti-tumor effects of ABE-edited CAR-T cells in vivo. e Five days after infusion, the ratios of infiltrated T cells (CD45+CD3+) were determined using flow cytometry after excluding dead cells (n = 4 per group). f, g The expression of PD-1, CD69, and CD27 was detected in infiltrated T cells. In addition, the effects of CAR-T cells on tumor growth (h, i) and the survival of mice (j) were monitored weekly (each group had 5 mice). *P < 0.05 and **P < 0.01.
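The codon logic behind Fig. 1b, i.e., which a-to-g conversions within the aac codon of N74 are possible and what they encode, can be illustrated with a short script. This is a simple illustration of standard codon translation, not part of the authors' experimental pipeline.

```python
# Enumerate the amino acids reachable from the N74 codon 'aac' when an
# adenine base editor converts one or both adenines (a -> g).
CODON_TABLE = {"aac": "N (Asn)", "gac": "D (Asp)", "agc": "S (Ser)", "ggc": "G (Gly)"}

def abe_outcomes(codon="aac"):
    outcomes = set()
    positions = [i for i, base in enumerate(codon) if base == "a"]
    # every non-empty subset of adenine positions may be converted
    for mask in range(1, 2 ** len(positions)):
        edited = list(codon)
        for bit, pos in enumerate(positions):
            if mask & (1 << bit):
                edited[pos] = "g"
        outcomes.add("".join(edited))
    return {c: CODON_TABLE.get(c, "?") for c in sorted(outcomes)}

print(abe_outcomes())   # {'agc': 'S (Ser)', 'gac': 'D (Asp)', 'ggc': 'G (Gly)'}
```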
Single base editing can modulate the stability and function of a target protein by changing a single residue [8]. Our work further uncovers the potential of such editing tools in T cells. Compared with CRISPR/Cas9, ABE has a narrower editing window and much less frequent off-target events [9], representing a safer and more precise approach for gene editing. ABE-mediated point mutation can downregulate inhibitory PD-1, therefore providing an alternative approach to augment T cell immunotherapy.

Additional file 1: Figure S1. CAR-T cell construction. (a) Structure of the lentiviral vector simultaneously delivering CAR and gRNA. (b) Transduction efficacy of T cells. Transduction efficacies were determined by GFP expression on day 5, before performing single base editing on the same day. (c) Vectors coding wild-type (N74) or mutated (D74, S74, or G74) PD-1 were transiently transfected into 293T cells. Forty-eight hours later, cell lysates were subjected to western blot analysis. This assay showed that the alterations at N74 of PDCD1 decreased the expression of PD-1 protein.

Additional file 2: Figure S2. CAR-T cells divided within tumors. Almost all the T cells (CD45+CD3+) accumulating within tumors were CAR-T cells (GFP+). In the infused T cells, about 85% were GFP+. In the activated T cells within tumors, the ratios of GFP+ cells were over 97%, indicating that CAR-T cells, but not the non-engineered cells, divided upon antigen engagement in vivo. Untransduced T cells were used as controls.

Additional file 3: Table S1. Antibodies and materials list.

Additional file 4: Detailed method information and procedures of experiments.
v3-fos-license
2024-07-31T15:17:42.480Z
2024-07-27T00:00:00.000
271564134
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.3390/ani14152192", "pdf_hash": "a64672cfb018854d951c699916aed2797d04879a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46273", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "83f7fdb800524dc1fc9344275d11213144e67f21", "year": 2024 }
pes2o/s2orc
Automatic Perception of Typical Abnormal Situations in Cage-Reared Ducks Using Computer Vision Simple Summary Efficient breeding of meat ducks using three-dimensional and multi-layer cages is a novel approach being actively explored in China. In this process, timely and accurate detection of abnormal situations among ducks is crucial for optimizing and refining the cage-rearing system, and ensuring animal health and welfare. This study focused on the overturned and dead status of cage-reared ducks using YOLOv8 as the basic network. By introducing GAM and Wise-IoU loss functions, we proposed an abnormal-situation recognition method for cage-reared ducks based on YOLOv8-ACRD. Building on this, we refined the identification of key body parts of cage-reared ducks, focusing on six key points: head, beak, chest, tail, left foot, and right foot. This resulted in the development of an abnormal posture estimation model for cage-reared ducks, based on HRNet-48. Furthermore, through multiple tests and comparative verification experiments, it was confirmed that the proposed method exhibited high detection accuracy, generalization ability, and robust comprehensive performance. The method proposed in this study for perceiving abnormal situations in cage-reared ducks not only provides foundational information for the progress and improvement of the meat duck cage-reared system but also offers technological references for the intelligent breeding of other cage-reared poultry. Abstract Overturning and death are common abnormalities in cage-reared ducks. To achieve timely and accurate detection, this study focused on 10-day-old cage-reared ducks, which are prone to these conditions, and established prior data on such situations. Using the original YOLOv8 as the base network, multiple GAM attention mechanisms were embedded into the feature fusion part (neck) to enhance the network’s focus on the abnormal regions in images of cage-reared ducks. Additionally, the Wise-IoU loss function replaced the CIoU loss function by employing a dynamic non-monotonic focusing mechanism to balance the data samples and mitigate excessive penalties from geometric parameters in the model. The image brightness was adjusted by factors of 0.85 and 1.25, and mainstream object-detection algorithms were adopted to test and compare the generalization and performance of the proposed method. Based on six key points around the head, beak, chest, tail, left foot, and right foot of cage-reared ducks, the body structure of the abnormal ducks was refined. Accurate estimation of the overturning and dead postures was achieved using the HRNet-48. The results demonstrated that the proposed method accurately recognized these states, achieving a mean Average Precision (mAP) value of 0.924, which was 1.65% higher than that of the original YOLOv8. The method effectively addressed the recognition interference caused by lighting differences, and exhibited an excellent generalization ability and comprehensive detection performance. Furthermore, the proposed abnormal cage-reared duck pose-estimation model achieved an Object Key point Similarity (OKS) value of 0.921, with a single-frame processing time of 0.528 s, accurately detecting multiple key points of the abnormal cage-reared duck bodies and generating correct posture expressions. 
Introduction China is the largest global producer of duck meat.In recent years, the demand for duck meat and advancements in efficient breeding techniques have increased the annual number of slaughtered ducks.By 2023, China had 4.218 billion ducks, a 5.40% increase from 2022, with a total output value of 126.369 billion yuan, reaching a 5.04% increase from the previous year [1].Traditionally, the duck meat industry in China has relied on extensive breeding methods such as free-range and open-field grazing, which are inefficient and cause severe environmental pollution.In response to governmental demands for sustainable livestock practices, intensive, efficient, and environment-friendly cage rearing has become the primary modern poultry breeding model [2].During cage rearing, increased stocking density and reduced activity space, combined with the inherently sensitive and stress-prone nature of ducks, can often lead to abnormal behaviors, resulting in significant economic losses [3].These behaviors include overturning and mortality.Overturning, often stress-induced, tends to occur around 10 days of age when the ducks' skeletal development is insufficient to support their bodies, leaving them unable to right themselves without assistance.Without manual intervention, their physiological health is at risk.Moreover, deceased ducks, as carriers of infectious diseases, can cause large-scale breeding accidents if not promptly addressed.Therefore, efficient, accurate, and timely detection of abnormal situations in cage-reared ducks is crucial for ensuring healthy breeding practices and improving the cage-rearing mode. Traditional screening for abnormal poultry primarily relies on manual inspection, which is time-consuming, labor-intensive, and highly subjective [4].Additionally, this method can facilitate the cross-species transmission of pathogens.Although attaching acceleration sensors [5][6][7][8] and RFID [9][10][11] tags to poultry can automatically obtain motion information, this approach has drawbacks including high cost, cumbersome operation, and rejection by animals.Recently, with the continuous advancement of AI technology, many scholars have explored deep learning-based detection methods for poultry, utilizing various physiological features, such as appearance, behavior, sound, body temperature, and feces to address these issues. Cui et al. [12] proposed a method for recognizing the health status of broiler chickens based on an improved YOLOv5 model, achieving an average accuracy of 97.80%.Zhuang et al. [13] leveraged differences in feather coverage and posture between healthy and diseased chickens, and optimized the Single Shot Multibox Detector (SSD) to enhance the accuracy and achieve precise classification of diseased and healthy chickens within a flock.Yang et al. [14] developed a method for calculating feather coverage on the backs of laying hens using thermal infrared images and the Otsu algorithm, elucidating the correlation between feather coverage and body temperature.Aydin [15] used depth cameras to collect 3D data on broilers, number of lying events (NOL), and latency to lie down (LTL) of lying behaviors.By comparing these data with previous records, a method was proposed for detecting limping in broiler chickens.Xiao et al. [16] achieved the precise semantic segmentation of caged chickens' heads and body parts using binocular vision, conducting 3D reconstruction for the feeding movement analysis in three dimensions.Chai et al. 
[17] effectively monitored floor egg-laying behavior in free-range laying hens using deep learning techniques. Cuan et al. [18] proposed an early detection method for Newcastle disease (ND) in poultry by analyzing poultry sounds with deep learning networks, achieving a 98.50% accuracy rate. Shen et al. [19] used convolutional neural networks with infrared thermal imaging to extract the highest temperatures of broiler chickens' heads and legs, creating an automatic temperature-detection model for white-feathered broilers; incorporating environmental temperature, relative humidity, and light intensity, this model had an average relative error of only 0.29%. Degu et al. [20] developed a poultry disease discrimination method based on images of diseased chicken feces, using the YOLOv3 and ResNet-50 target detection algorithms on smartphones and achieving a 98.70% classification accuracy for identifying healthy birds, coccidiosis, salmonella, and Newcastle disease. Gu et al. [21] proposed a method for real-time behavior recognition of cage-reared laying ducks by enhancing YOLOv4, offering a valuable technical reference for anomaly detection in cage-reared ducks.

Although these achievements have facilitated multi-target monitoring and disease diagnosis in broiler and layer chickens, there is a lack of research on methods for identifying abnormal situations in cage-reared ducks, necessitating innovative approaches. Given the strong learning and reasoning capabilities of computer vision technology in healthy poultry breeding, exploring its use in detecting and addressing abnormal situations in cage-reared ducks is both feasible and promising.

In current cage-rearing practices for meat ducks, a single cage typically holds 10-20 ducks (depending on cage size), each exhibiting varied physiological states and behavioral patterns. To ensure experimental consistency, this study specifically focused on ducks that were overturned or deceased, while disregarding others. In summary, this study addressed the demand for detecting abnormal conditions in cage-reared ducks, with a focus on 10-day-old ducks prone to such issues. Based on computer vision technology, the study consisted of five parts: (1) proposing an object detection model for overturned and dead ducks in cages using an improved YOLOv8; (2) evaluating the impact of embedding the global attention mechanism (GAM) into the original YOLOv8 network structure and replacing the CIoU loss function with Wise-IoU on detection accuracy; (3) testing the generalization ability of the optimized model by adjusting image brightness levels ('bright' and 'dark') and comparing its performance with other mainstream methods; (4) introducing six key points, including the head, beak, chest, tail, and left and right feet, to refine the detection of abnormal situations in cage-reared ducks; and (5) proposing a method for abnormal cage-reared duck pose estimation based on HRNet-48 and conducting performance comparison tests. The proposed method not only provides fundamental data for advancing cage-reared duck technology but also offers technical references for the intelligent breeding of other poultry species.
Image Acquisition and Experimental Material

In this study, 10-day-old Cherry Valley meat ducks raised at the Jiangsu Academy of Agricultural Sciences Animal Experiment Base in Liuhe District, Nanjing City, Jiangsu Province (32° N, 118° E) were used as experimental animals. The breeding cages were 3-layer H-shaped structures with LED light strips at the top. Unlike laying ducks, meat ducks do not have a peak egg-laying period from 0:00 to 4:00; their feeding, drinking, and other activities primarily occur during well-lit daytime hours, and after the lights are turned off at night most ducks remain in a resting state.

To adequately collect samples of abnormal situations in cage-reared meat ducks, we used a duck house inspection robot equipped with a RealSense D435i depth camera (Intel Corporation, USA), developed in our earlier work, as the image-acquisition platform. The robot operated on a fixed trajectory at a constant speed from 7:00 a.m. to 8:00 p.m. on the 10th day, capturing imagery of the cage-reared ducks at a resolution of 1280 × 720 and a frame rate of 30 fps. Frame extraction was subsequently performed manually. The camera parameters are listed in Table 1. The layout and process of abnormal data collection for cage-reared ducks are illustrated in Figure 1, and examples of anomalies are shown in Figure 2. Moreover, to prevent environmental stress caused by the inspection robot from affecting the ducks' behavior, the ducks underwent adaptive training before the experiment.

Figure 2 illustrates a significant difference in body posture between overturned and deceased cage-reared ducks. Overturned ducks typically have their backs on the ground, head and neck lifted upward near the chest, with both feet facing upward. In contrast, deceased ducks exhibit abnormal postures, including bending of the body, leg extension, head on the ground, and closed eyes. These characteristics can be used to accurately identify and classify abnormal cage-reared ducks.
Preprocessing of Abnormal Cage-Reared Duck Images and Dataset Construction

The images of abnormal cage-reared ducks were obtained through frame extraction by experienced breeders. Data augmentation techniques were employed to expand the dataset, which was ultimately normalized to 640 × 640 pixels to prevent overfitting in later stages of the model due to the limited number of image samples. Two experiments were designed: one for the identification of abnormal cage-reared ducks and another for pose estimation. Each experiment used a corresponding dataset created with the SAM and Labelme image-annotation tools, following the annotation rules. The definitions of abnormal situations and key parts of cage-reared ducks, as well as the composition of the abnormal cage-reared ducks dataset, are presented in Table 2 and Figure 3, respectively.

Table 2. Definitions of abnormal situations and key points (excerpt):
- Overturned: cage-reared ducks exhibit a supine posture with both feet pointing upwards, their backs pressed against the ground, heads and necks inclined away from the ground, with a tendency to sway from side to side.
- Dead: cage-reared ducks adhere to the ground in a deformed posture, remain stationary, with stains covering their feathers.
- Left foot: facing the cage-reared duck, the left palm area of the duck.
- Right foot: facing the cage-reared duck, the right palm area of the duck.
The abnormal cage-reared ducks dataset was annotated based on two typical situations, overturned and dead, whose proportion approached 7:3 according to statistical analysis. The abnormal cage-reared ducks pose-estimation dataset defined six key points: the head, beak, chest, left foot, right foot, and tail. Following the connection rules between the head and chest, chest and left foot, chest and right foot, and chest and tail, these points characterize the posture of cage-reared ducks in abnormal situations. The distribution of the number of images in each category within the abnormal cage-reared ducks dataset is presented in Table 3.

In 2023, Ultralytics introduced YOLOv8 as an enhancement of YOLOv5, offering five variants (n, s, m, l, and x) with sequentially increasing depth and width [22]. This study selected YOLOv8n, known for its balanced detection accuracy and real-time performance, as the base network. Considering the large number and scale range of cage-reared ducks, their significant multi-scale features, and the difficulty of detecting abnormal targets due to their small size, this paper proposes a YOLOv8-ACRD (abnormal cage-reared ducks) model for identifying abnormal situations in cage-reared ducks, incorporating the GAM attention mechanism [23] and the Wise-IoU loss function [24].

YOLOv8-ACRD Network Structure

The structure of YOLOv8-ACRD was similar to that of the original YOLOv8, consisting of four parts: the input, backbone, neck, and head. However, YOLOv8-ACRD distinguished itself by densely embedding the GAM mechanism after each C2f module in its neck. This enhancement improved the ability of the network to extract and fuse the features of abnormal cage-reared ducks, allowing the model to focus better on small-area features, such as overturned or deceased ducks in the image background, thereby enhancing recognition accuracy. Furthermore, the Wise-IoU loss function was introduced to address data imbalance and reduce the impact of geometric factors, such as distance and aspect ratio, on the bounding box regression accuracy for abnormal cage-reared ducks. The network structure of YOLOv8-ACRD is illustrated in Figure 4.
The input section applied mosaic data augmentation, adaptive anchor box calculation, and adaptive grayscale filling to the 640 × 640 × 3 cage-reared duck images before passing them to the backbone. The backbone performed progressive feature extraction and generated feature maps of different scales using Convolution (Conv), Contextual Convolution (C2f), and Spatial Pyramid Pooling Fusion (SPPF) layers. The Conv layer down-sampled the images using 3 × 3 convolutional kernels with a stride of 2 and padding of 1, followed by Batch Normalization (BN) and the Sigmoid Linear Unit (SiLU) activation function. The C2f module enhanced the gradient flow through operations such as slicing, convolution, bottleneck, and concatenation, enriching the model's ability to learn residual features with multi-branch cross-layer connections. Finally, the SPPF layer performed predefined size transformations on the feature tensors produced by the Conv and C2f operations. The backbone output feature maps of sizes 80 × 80, 40 × 40, and 20 × 20 to the neck.

The neck network in YOLOv8 incorporated the Path Aggregation Network (PANet) structure for multi-scale feature fusion, comprising a Feature Pyramid Network (FPN) and a Path Aggregation Network. The FPN constructed a feature pyramid using a top-down strategy, merging fine-grained feature maps with up-sampled coarse-grained ones to fuse feature tensors at different scales. The PANet preserved the spatial information of the feature maps through bottom-up convolutional operations. Additionally, in the neck, multiple GAMs were densely embedded after each C2f module to allocate spatial and channel attention to the feature maps, enhancing the network's focus on small-scale abnormal cage-reared duck objects in the image background. After feature fusion in the neck, feature maps of three scales (80 × 80 × 256, 40 × 40 × 512, and 20 × 20 × 512) were passed to the head network for loss calculation and detection-box filtering, thereby obtaining the category and positional information of abnormal cage-reared ducks of various sizes. The YOLOv8 loss combined three loss functions: the Varifocal Loss (VFL) for classification, the CIoU loss for box regression, and the Distribution Focal Loss (DFL). YOLOv8-ACRD introduced Wise-IoU to replace CIoU, improving the regression performance of the model in predicting bounding boxes.
Global Attention Mechanism

Abnormal cage-reared duck images present challenges for accurate recognition and detection because of their small object size and large background ratio. The YOLOv8-ACRD network proposed in this study aimed to enhance the focus on abnormal cage-reared duck areas in the images. This was achieved by introducing the GAM attention mechanism, which eliminates redundant information and improves the accuracy of abnormal cage-reared duck recognition under unstructured and complex environmental conditions. The GAM combines a channel attention mechanism and a spatial attention mechanism: it amplifies the interaction of global dimensional features while reducing information diffusion. The structure of the GAM is illustrated in Figure 5.

In Figure 5, the GAM applies channel attention correction to the input feature F1 using two independent attention submodules, obtaining the feature F2. Subsequently, the feature F2 undergoes spatial attention correction to produce the output feature F3. The channel attention and spatial attention submodules are shown in Figures 6 and 7, respectively. The channel attention submodule maintains the three-dimensional information through a three-dimensional permutation and then enhances the cross-dimensional channel-spatial dependencies using a multilayer perceptron (MLP). The spatial attention submodule uses two 7 × 7 convolutional layers for spatial information fusion. Equations (1) and (2) give the calculations of the channel attention and spatial attention processes:

F2 = Mc(F1) ⊗ F1 (1)
F3 = Ms(F2) ⊗ F2 (2)

where ⊗ represents element-wise multiplication.
In Equations (1) and (2), Mc and Ms denote the channel attention map and the spatial attention map, respectively.
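As an illustration of the attention block described above, a common PyTorch formulation of the GAM channel and spatial submodules is sketched below. It follows the general design reported for GAM (permutation plus MLP for channel attention, two 7 × 7 convolutions for spatial attention); the reduction ratio and layer details are assumptions and are not taken from the paper.

```python
import torch
import torch.nn as nn

class GAM(nn.Module):
    """Sketch of a Global Attention Mechanism block: channel attention via a
    permuted MLP, followed by spatial attention via two 7x7 convolutions."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = channels // reduction
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=7, padding=3),
            nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=7, padding=3),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: permute to (B, H, W, C) and apply the MLP per position.
        att = self.channel_mlp(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        f2 = x * torch.sigmoid(att)                   # F2 = Mc(F1) (x) F1
        # Spatial attention on F2.
        f3 = f2 * torch.sigmoid(self.spatial(f2))     # F3 = Ms(F2) (x) F2
        return f3

# Example: a GAM block applied to a 256-channel neck feature map.
feat = torch.randn(1, 256, 40, 40)
out = GAM(256)(feat)          # same shape as the input: (1, 256, 40, 40)
```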
The Wise-IoU Loss Function

The abnormal cage-reared duck dataset used in this study exhibited sample imbalance and a long-tail distribution. Although dead cage-reared ducks make up only a small proportion of the samples, death is a crucial aspect of abnormal situations in cage rearing, and how the network learns and infers from such samples directly affects the recognition accuracy of the model. The original YOLOv8 loss is designed around the intersection-over-union between the predicted and ground-truth boxes; however, this approach ignores the effect on recognition of the differing numbers of samples across categories. Additionally, low-quality samples in the dataset amplify geometric factors such as target distance and aspect ratio, increasing the penalty the network assigns to these samples and reducing the accuracy and generalization of the model. To address these issues, Wise-IoU was introduced to replace the CIoU loss function. Wise-IoU uses a dynamic non-monotonic focusing mechanism to balance the data samples and to reduce the excessive penalty from geometric parameters in the model, thereby improving the training of the abnormal cage-reared duck recognition model. The calculation of the Wise-IoU loss value L_WIoU involves Equations (3)-(6), in which α and δ are hyperparameters, R_WIoU represents the geometric disparity between the predicted box and the annotation, L_IoU denotes the average dynamic IoU loss of the predicted boxes, L*_IoU stands for the current IoU loss value of the predicted box (the superscript * indicates exclusion from backpropagation), and β represents the outlier degree of the current predicted box: a smaller value indicates a higher-quality anchor box, which is assigned a smaller gradient gain, while predicted boxes with larger outlier values are likewise assigned smaller gradient gains, which reduces harmful gradients from low-quality samples during network training. w_g and h_g represent the width and height of the box, and r represents the gain allocated to the box.
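For reference, the symbols above correspond to the standard Wise-IoU v3 formulation introduced in [24]. The original Equations (3)-(6) were not preserved in this text, so the following is a reconstruction from that formulation rather than a verbatim copy; here (x, y) and (x_gt, y_gt) denote the centers of the predicted and ground-truth boxes, and W_g, H_g the size of the smallest enclosing box.

```latex
\begin{aligned}
\mathcal{L}_{WIoU} &= r \, \mathcal{R}_{WIoU} \, \mathcal{L}_{IoU}, \\
\mathcal{R}_{WIoU} &= \exp\!\left(\frac{(x - x_{gt})^2 + (y - y_{gt})^2}{\left(W_g^2 + H_g^2\right)^{*}}\right), \\
\beta &= \frac{\mathcal{L}_{IoU}^{*}}{\overline{\mathcal{L}_{IoU}}}, \qquad
r = \frac{\beta}{\delta \, \alpha^{\beta - \delta}}.
\end{aligned}
```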
Method for Brightness Adjustment in Cage-Reared Duck Images

Studies have shown that changes in light intensity directly affect duck development [25,26]. Therefore, the brightness of the LED strips must be adjusted according to the ducks' physiological habits to improve their health and welfare. To test and analyze the generalization ability of the proposed methods for identifying abnormal situations and estimating the posture of cage-reared ducks, this study treated image brightness as a single factor. Two brightness levels, 1.25 and 0.85 times the predetermined brightness, were set to obtain images of cage-reared ducks under 'bright' and 'dark' conditions, constructing a generalization ability test dataset. Brightness adjustment of the cage-reared duck images was implemented with OpenCV: the RGB color space is converted to HSV, the V channel value, which represents image brightness, is multiplied by the corresponding factor for the 'bright' or 'dark' condition, and the image is then converted back to RGB. The brightness adjustment is given in Equation (7):

bright = V × 1.25, dark = V × 0.85. (7)
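A minimal OpenCV sketch of this HSV-based brightness adjustment is shown below; the file name is a placeholder, and clipping to the valid 8-bit range is an added safeguard not spelled out in the text.

```python
import cv2
import numpy as np

def adjust_brightness(image_bgr, factor):
    """Scale the V channel in HSV space by `factor` (e.g. 1.25 for 'bright',
    0.85 for 'dark') and convert back to BGR."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 2] = np.clip(hsv[..., 2] * factor, 0, 255)   # V channel
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

img = cv2.imread("duck_cage.jpg")          # placeholder file name
bright = adjust_brightness(img, 1.25)
dark = adjust_brightness(img, 0.85)
```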
Estimation Method for Abnormal Cage-Reared Duck Posture

Animal behavior consists of co-ordinated movements among various parts of the body, and in a single behavioral state animals may exhibit multiple postures [27]. Research on posture estimation of cage-reared ducks in the identified abnormal situations helps to refine the description of their physiological status. Currently, the field of pose estimation for both humans [28-30] and animals [31-33] employs two main strategies: regression and heatmaps. Of these, the heatmap method can fully exploit the spatial information of adjacent key points and joint locations, achieving high accuracy. Based on this, we used the heatmap-based HRNet-48 [34] as the foundational network and proposed an abnormal pose-estimation model for cage-reared ducks, leveraging the significant difference in the distribution of body parts between overturned and dead ducks. HRNet-48 consists of four stages with varying resolutions, parallel connections, and multi-scale feature cascades for feature analysis and localization, as illustrated in Figure 8.

Evaluation Criteria

To quantitatively analyze the performance of the proposed method for identifying abnormal situations in cage-reared ducks and of the abnormal cage-reared duck pose-estimation model, four quantitative indicators commonly used in object detection, mAP, Recall, F1 score, and OKS [35,36], were selected as evaluation criteria, represented by Equations (8)-(11):

mAP = (1/N) Σ_{i=1}^{N} AP_i, (8)

where N represents the total number of target categories and AP_i is the average precision of the i-th category. The mAP ranges over [0, 1], and a higher value indicates better detection performance of the model. In this paper, an IoU (Intersection over Union) threshold of 0.5 is used.

Recall = TP / (TP + FN), (9)

where TP is the number of samples correctly predicted as positive and FN is the number of samples incorrectly predicted as negative. Recall ranges from 0 to 1, and a higher Recall indicates a stronger ability of the model to identify positive samples.

F1 = 2 × Precision × Recall / (Precision + Recall), (10)

where Precision is the proportion of true positive predictions among all positive predictions made by the model. An F1 score close to 1 indicates a good balance between Precision and Recall, meaning that the model effectively identifies positives while minimizing false positives.

OKS = Σ_i exp(−d_i² / (2 s² k_i²)) δ(v_i > 0) / Σ_i δ(v_i > 0), (11)

where i is the key point number, d_i is the Euclidean distance between the true and predicted positions of key point i, v_i is the visibility marker of key point i (0 for invisible, 1 for occluded, 2 for visible), δ is the Kronecker delta function, k_i is the constant of key point i, and s is a scaling factor, typically defined as a percentage of the diagonal length of the bounding box.
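The OKS computation of Equation (11) can be written compactly in NumPy as below; the per-keypoint constants k_i and the scale s are placeholders here, since their exact values are not given in this text.

```python
import numpy as np

def oks(pred, gt, visibility, k, s):
    """Object Keypoint Similarity between predicted and ground-truth keypoints.

    pred, gt   : arrays of shape (num_keypoints, 2) with (x, y) coordinates.
    visibility : array of shape (num_keypoints,); 0 = invisible, 1 = occluded, 2 = visible.
    k          : per-keypoint constants k_i (placeholder values in the example below).
    s          : scale factor, e.g. a fraction of the bounding-box diagonal.
    """
    d2 = np.sum((pred - gt) ** 2, axis=1)          # squared Euclidean distances d_i^2
    labeled = visibility > 0                        # only keypoints that are annotated
    if not labeled.any():
        return 0.0
    e = np.exp(-d2[labeled] / (2.0 * (s ** 2) * (k[labeled] ** 2)))
    return float(e.mean())

# Example with the six duck keypoints (head, beak, chest, tail, left foot, right foot).
k = np.full(6, 0.05)                # placeholder k_i constants
score = oks(np.random.rand(6, 2) * 100, np.random.rand(6, 2) * 100,
            np.array([2, 2, 2, 2, 1, 0]), k, s=50.0)
```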
Experimental Environment

The proposed methods in this paper were implemented for model training and performance evaluation on a Dell T3060 tower workstation (Dell Inc., made in China) running Windows 10 Professional. The system was equipped with an Intel i9-12900 CPU with a base frequency of 3.20 GHz, 128 GB of RAM, and an NVIDIA RTX 4090 GPU with 24 GB of dedicated memory. The experiments were conducted using the Python programming language within a virtual environment established on the PyTorch deep learning framework.

Experimental Steps

The experiment for identifying abnormal situations and estimating posture in cage-reared ducks consisted of six steps:
1. Images of 10-day-old cage-reared ducks in the abnormal states of 'overturned' and 'dead' were collected and annotated to establish the abnormal cage-reared duck dataset and the abnormal cage-reared duck pose-estimation dataset.
2. Based on the characteristics and experimental environment of cage-reared ducks, multiple GAM modules were densely embedded into the neck of the original YOLOv8 network, and the Wise-IoU loss function was introduced to optimize the detection performance. This led to the development of YOLOv8-ACRD, a network for recognizing abnormal situations in cage-reared ducks.
3. The proposed method, based on YOLOv8-ACRD, was tested for accuracy compared with the original YOLOv8 and evaluated against other mainstream methods to assess its effectiveness.
4. Brightness was used as a factor, with two levels ('bright' and 'dark') set to test the generalization ability of the proposed method for identifying abnormal situations in cage-reared ducks.
5. An abnormal posture estimation model based on HRNet-48 was developed by refining the identification of six key body parts in cage-reared ducks. This model was compared with other commonly used pose-estimation algorithms, and its real-time performance was evaluated.
6. The experimental results were discussed, and conclusions were drawn.

The experimental procedure is illustrated in Figure 9.

Abnormal Situation Recognition in Cage-Reared Ducks Based on YOLOv8-ACRD

Acquisition of the Abnormal Situation Recognition Model for Cage-Reared Ducks and Comparison of Feature Maps

The training process for the optimal abnormal situation recognition model for cage-reared ducks was conducted separately on the abnormal cage-reared duck dataset based on the original YOLOv8 network and the improved YOLOv8-ACRD network. To analyze and evaluate the effects of introducing GAM and Wise-IoU on model-detection accuracy, the mAP values were compared. Pretrained weights from the COCO dataset were loaded to prevent overfitting and convergence difficulties during training. A Stochastic Gradient Descent (SGD) optimizer was employed for gradient descent, with the momentum set to 0.9, and the learning rate and batch size set to 0.001 and 16, respectively. Furthermore, the strategy of saving the model once per cycle was adopted, totaling 300 epochs of iterations. The trends of the YOLOv8 and YOLOv8-ACRD loss and mAP values with respect to the number of iterations and epochs during this period are illustrated in Figures 10 and 11, respectively.
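The training setup described above can be summarized by the following PyTorch-style sketch. It is a simplified outline under the stated hyperparameters, not the authors' training script: `model` stands for the YOLOv8-ACRD network with COCO-pretrained weights already loaded, `train_loader` for a loader yielding batches of 16 annotated images, and the loss returned by the model is an assumed interface that combines the detection loss terms.

```python
import torch

# `model` and `train_loader` are assumed to be provided by the detection codebase.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

for epoch in range(300):                          # 300 training epochs
    for images, targets in train_loader:          # batch size 16
        loss = model(images, targets)             # combined detection loss (assumed interface)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    torch.save(model.state_dict(), f"ckpt_epoch_{epoch + 1}.pt")  # save once per cycle
```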
In Figure 10, the loss values of YOLOv8 and YOLOv8-ACRD followed a consistent trend with the number of iterations, exhibiting a rapid decrease at the beginning, gradual flattening in the middle, and convergence towards the end. However, after the initial rapid decrease in loss values for both models, YOLOv8-ACRD consistently exhibited a lower loss than YOLOv8, a trend that continued until both models converged. In Figure 11, the trend of mAP values for YOLOv8 and YOLOv8-ACRD with respect to epoch mirrors that of the loss, with both indicating rapid changes early on and then leveling off. Between 0 and 55 epochs, the mAP values of both models increased significantly to approximately 0.737. Subsequently, from 56 to 300 epochs, YOLOv8-ACRD consistently maintained an advantage over YOLOv8 in mAP values, consistent with the observations in Figure 10. Finally, YOLOv8 and YOLOv8-ACRD converged to mAP values of 0.909 and 0.924, respectively. The mAP of the cage-reared duck anomaly recognition model based on YOLOv8-ACRD increased by 1.65% compared with the original YOLOv8. To further evaluate the performance differences introduced by GAM and Wise-IoU, an additional analysis was conducted based on Recall and F1 scores. The results are shown in Figure 12.
Figure 12 reveals that the Recall and F1 scores of the optimal model for detecting abnormal cage-reared ducks based on YOLOv8-ACRD increased by 0.017 and 0.021, respectively, compared with YOLOv8. This further demonstrates that YOLOv8-ACRD exhibits enhanced precision and balance in detecting instances of dead and overturned cage-reared ducks under the experimental conditions.

In summary, the results indicated that the YOLOv8-ACRD network demonstrated superior learning and inference effectiveness for detecting the distribution patterns of overturned and dead ducks compared with YOLOv8. This confirmed the positive impact of employing dense GAM embedding and introducing the Wise-IoU loss function to enhance detection accuracy. Consequently, YOLOv8-ACRD was selected as the optimal model for subsequent experiments to test anomaly recognition and generalization abilities in cage-reared ducks.
Figure 13 presents the visualization of the feature maps with the maximum activation in the three output channels of the neck section for both the YOLOv8 and YOLOv8-ACRD networks, overlaid on the original images. YOLOv8-ACRD demonstrated more precise localization in identifying the features of cage-reared ducks in the overturned state compared to YOLOv8. It can exclude interference from other similar features and accurately focus on the target ducks in the images. In contrast, YOLOv8 demonstrated deviations between the attention area and the actual position of features related to overturned cage-reared ducks. Although it covered the duck's body, some features extended beyond this area. These observations further validated that the proposed YOLOv8-ACRD cage-reared duck abnormal recognition model had an accuracy advantage over the original YOLOv8.
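A visualization of this kind can be produced with a forward hook that captures the neck's feature maps and overlays their channel-wise maximum on the input image. The sketch below assumes the detector exposes its feature-fusion module as `model.neck` and that `image_tensor`, `orig_bgr`, `orig_w`, and `orig_h` are prepared beforehand; these names are assumptions for illustration, not part of any specific detection API.

```python
import cv2
import numpy as np
import torch

features = {}
def grab_neck_output(module, inputs, output):
    features["neck"] = output                      # store the neck output for later inspection

handle = model.neck.register_forward_hook(grab_neck_output)   # assumed module name
with torch.no_grad():
    model(image_tensor)                            # one preprocessed image, shape (1, 3, H, W)
handle.remove()

feat = features["neck"]
if isinstance(feat, (list, tuple)):                # the neck may emit several scales; take one
    feat = feat[0]
act = feat[0].max(dim=0).values.cpu().numpy()      # maximum activation across channels
act = (act - act.min()) / (act.max() - act.min() + 1e-8)
act = cv2.resize(act, (orig_w, orig_h))
heat = cv2.applyColorMap((act * 255).astype(np.uint8), cv2.COLORMAP_JET)
overlay = cv2.addWeighted(orig_bgr, 0.6, heat, 0.4, 0)
cv2.imwrite("neck_max_activation.png", overlay)
```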
Recognition of Abnormal Situations in Cage-Reared Ducks

Using the proposed YOLOv8-ACRD model to identify abnormal situations in cage-reared ducks, the dataset was analyzed to identify overturned and dead situations. The selected results are shown in Figure 14. In Figure 14, cage-reared duck samples 2-6 in the overturned or deceased states were partially occluded to varying degrees by other ducks or cage meshes. For instance, in sample 2, the head of the cage-reared duck was partially obscured, whereas in samples 3 and 5, the bodies were covered by the cage mesh. However, the proposed model correctly identified the anomalous states with high confidence levels. The bounding box positioning was accurate, with no instances of false positives or false negatives. Furthermore, the different sample images exhibited varying degrees of depth-of-field, and there were significant differences in multi-scale features among the ducks. Despite this, the proposed model maintained a high level of recognition accuracy, indicating its robust performance. Finally, the recognition model based on YOLOv8-ACRD for anomalous situations in cage-reared ducks achieved Average Precision (AP) values of 0.913 and 0.935, respectively, for recognizing overturned and dead situations in the validation subset of the abnormal cage-reared duck dataset.

Comparison and Analysis

Object-detection algorithms based on deep learning are continuously advancing in the field of computer vision. However, these algorithms often vary in effectiveness when applied to different objects and environments. This study evaluated the effectiveness of the proposed anomalous situation-recognition method based on YOLOv8-ACRD compared with other mainstream object-detection algorithms. To satisfy the requirements of detection accuracy and real-time performance for cage-reared duck inspection robots, a comparative experiment was conducted, focusing on single-shot object-detection algorithms capable of balancing recognition accuracy and speed. The experiments compared YOLOv8, YOLOv7 [37], YOLOv5 [38], YOLOF [39], SSD [40], and RetinaNet [41]. To maintain environmental consistency, the same computer platform, compilation environment, and training hyperparameters were used, consistent with YOLOv8-ACRD. The comparison results, including mAP values for the abnormal cage-reared duck dataset and AP values for the two abnormal situations, are illustrated in Figure 15.
The mAP values for abnormal situations in cage-reared ducks indicated that YOLOv8-ACRD improved by 0.015, 0.046, 0.070, 0.071, 0.247, and 0.128 compared with YOLOv8, YOLOv7, YOLOv5, YOLOF, SSD, and RetinaNet, respectively. The overall performance difference between YOLOv5 and YOLOv8 was not significant. However, RetinaNet and SSD exhibited relatively low performances, with SSD achieving only 0.677 mAP. This indicated that, in this experimental environment, the proposed model performed best, whereas SSD was the least effective and not suitable for identifying abnormal situations in cage-reared ducks. This disparity may be attributed to the structure of the SSD, which, despite utilizing deep convolutional neural networks to extract features of abnormal cage-reared ducks, lacked sufficient multi-scale feature fusion compared to the other networks. Furthermore, the absence of attention mechanisms in SSD resulted in insufficient focus on important features, especially in cases where cage-reared duck images had complex backgrounds and significant variations in duck phenotypes, making the accurate localization of target duck features challenging. Analysis of the changes in AP values for the recognition of dead and overturned cage-reared ducks in Figure 15B,C reveals a consistent pattern: YOLOv8, YOLOv7, YOLOv5, YOLOF, SSD, and the proposed model were better at recognizing overturned situations than dead situations. However, RetinaNet demonstrated the opposite trend, with an AP value of 0.824 for dead ducks,
which was 0.056 higher than that for overturned ducks. Furthermore, compared to the aforementioned six models, the proposed model enhanced the recognition AP values for overturned and dead ducks by 0.011, 0.019, 0.063, 0.029, 0.068, 0.072, 0.105, 0.037, 0.341, 0.153, 0.089, and 0.167, respectively. Overall, in the experimental environment, the proposed model demonstrated a precision advantage over other mainstream object-detection algorithms in recognizing both types of abnormal situations in cage-reared ducks.

Perception of Abnormal Situations in Cage-Reared Ducks under Different Lighting Conditions

Image acquisition of cage-reared ducks was conducted under a predetermined light intensity of the LED light sources, with no specific requirements for direct or back lighting. Although the proposed model achieved high accuracy in recognizing abnormal situations in this scenario, differences in lighting intensity exist across cage-reared-duck breeding facilities and rearing stages. The recognition performance of the proposed model under varying lighting conditions was unclear, necessitating a test of its generalization ability. To assess this, 50 randomly selected original images containing overturned and dead cage-reared ducks were used. Brightness adjustments were applied to these images to create a total of 100 images categorized as 'bright' and 'dark', using the brightness adjustment method described in Section 2.4. Subsequently, the generalization ability of the model was tested on these images. The selected recognition results are shown in Figure 16.
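The construction of this generalization test set can be sketched as follows; the folder names are hypothetical, and the brightness function simply repeats the HSV-based adjustment from Section 2.4.

```python
import random
from pathlib import Path
import cv2
import numpy as np

def adjust_brightness(img, factor):
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[:, :, 2] = np.clip(hsv[:, :, 2] * factor, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

# 50 randomly chosen originals -> 100 test images (50 'bright' + 50 'dark')
originals = random.sample(list(Path("abnormal_images").glob("*.jpg")), 50)
out = Path("generalization_set")
out.mkdir(exist_ok=True)
for p in originals:
    img = cv2.imread(str(p))
    cv2.imwrite(str(out / f"bright_{p.name}"), adjust_brightness(img, 1.25))
    cv2.imwrite(str(out / f"dark_{p.name}"), adjust_brightness(img, 0.85))
```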
The results shown in Figure 16 reveal significant differences in the external appearance of the cage-reared ducks under varying lighting conditions. For instance, the high brightness of the head feathers caused feature loss in samples 1 and 2, and the dim lighting led to the tail feathers blending with the background, causing confusion. However, the proposed abnormal situation recognition model based on YOLOv8-ACRD effectively eliminated these interferences and accurately detected the overturned and dead cage-reared ducks. These findings suggest that changes in image lighting intensity did not significantly affect the performance of the model presented in this paper. On the brightness-adjusted images, the model in this study achieved an mAP value of 0.881 for recognizing abnormal situations in cage-reared ducks. This indicated its applicability in recognizing abnormal cage-reared ducks under different lighting intensities, showing its strong generalization ability.

Results of Abnormal Cage-Reared Duck Pose Estimation

In the initial stage of the experiment, the focus was on recognizing two types of abnormal situations in cage-reared ducks: overturned and dead. The approach centered on six key points, the duck's head, beak, chest, tail, left foot, and right foot, following the preset connection rules. An abnormal cage-reared duck pose-estimation model was developed based on an HRNet-48 network. During the model training process, the Adam optimizer was used for gradient descent, with a learning rate of 0.0001. The model was saved every 10 epochs, for a total of 100 epochs. Throughout this training period, the trend of the OKS value with respect to the epochs was monitored, as shown in Figure 17.
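For orientation, the sketch below shows how the output of a heatmap-based model such as HRNet-48 is typically decoded into the six key points and assembled into a trunk. The key-point order, the connection rules, and the absence of sub-pixel refinement are simplifying assumptions rather than the exact configuration used in this study.

```python
import numpy as np

KEYPOINTS = ["head", "beak", "chest", "tail", "left_foot", "right_foot"]
# Illustrative connection rules; the study's preset associations may differ.
SKELETON = [("beak", "head"), ("head", "chest"), ("chest", "tail"),
            ("chest", "left_foot"), ("chest", "right_foot")]

def decode_heatmaps(heatmaps, input_hw):
    """Convert model output heatmaps of shape (6, h, w) into key points in input-image coordinates."""
    num_kpts, h, w = heatmaps.shape
    points = {}
    for i in range(num_kpts):
        flat_idx = int(np.argmax(heatmaps[i]))      # peak of the i-th heatmap
        y, x = divmod(flat_idx, w)
        points[KEYPOINTS[i]] = (x * input_hw[1] / w, y * input_hw[0] / h)
    return points

# Connecting the decoded points along SKELETON yields the duck-trunk pose expression.
```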
Figure 17 shows that the OKS value of the abnormal cage-reared duck posture estimation model increased rapidly during the initial 0-20 epochs. Subsequently, the rate of increase decelerated, reaching its peak at epoch 70, before decreasing and stabilizing at approximately 0.921, indicating convergence. Therefore, this model was selected for the subsequent visualization of abnormal cage-reared duck pose estimations. The results are shown in Figure 18. The visualization effect was poor because of the small pixel area occupied by the target ducks in the abnormal states; therefore, the region of interest was zoomed in proportionally. In Figure 18, significant differences in the postures of cage-reared ducks in the same abnormal state were evident, despite interference from the cage mesh. Additionally, dead ducks are often trampled by other ducks, resulting in varying degrees of surface contamination that distinctly alters their color compared to overturned ducks. Meanwhile, the differences in key features among the dead ducks were minimal. None of these conditions is conducive to accurately estimating the posture of abnormal ducks. For example, in Figure 18, the limb distributions of dead cage-reared ducks 1 and 2 differed, whereas the colors of cage-reared ducks 1 and 3 closely matched the background. Furthermore, some of the features of overturned cage-reared duck 3 were obscured by the cage mesh. However, the proposed model accurately detected and classified the six key points, such as the duck's head and beak, without any missed or false detections. The key points could be assembled into a duck trunk in the predetermined association sequence without any errors. These findings indicate that the proposed abnormal cage-reared duck pose-estimation model could accurately detect overturned and dead cage-reared duck poses and key body parts, while also demonstrating a certain degree of interference resistance.
Comparison and Analysis

Recent research has made significant progress in refining methodologies for detecting key body parts and poses, building on advancements in animal and human behavior detection. To compare the proposed abnormal cage-reared duck pose-estimation model with other mainstream methods, this study introduced CPM [42], PVT [43], MSPN [44], Openpose [45], Hourglass [46], and liteHRNet [47]. To evaluate the impact of cage-reared duck overturned and dead postures on model accuracy, this study categorized the images containing overturned and dead cage-reared ducks from the abnormal cage-reared duck pose-estimation dataset and conducted targeted performance testing. The comparative experiments maintained consistency in the compilation environment, hyperparameters, deep-learning frameworks, and computer models used. The optimal models of the six aforementioned methods and the model proposed in this study were compared based on the OKS, as shown in Figure 19. The inspection of the physiological status of cage-reared ducks by breeding robots requires a pose-estimation model that not only has high recognition accuracy but also exhibits excellent real-time performance. Therefore, focusing on the processing time of a single image, this study further compared the real-time performance of the aforementioned methods on the abnormal cage-reared duck pose-estimation dataset. The real-time performance comparison results are shown in Figure 20.
Based on the comparison results in Figure 19, CPM, PVT, MSPN, Openpose, Hourglass, liteHRNet, and the model proposed in this study consistently demonstrated a higher accuracy in detecting the overturned poses of cage-reared ducks than in detecting the dead poses. This difference may be attributed to the fact that, when a duck was overturned, its key body parts were more fully presented in the image, and there was less mutual occlusion compared with the dead situation. This facilitated the model's extraction and learning of multi-dimensional features of abnormal cage-reared ducks, thereby aiding key point classification
and position inference. Additionally, the OKS values of the above models for the overturned and dead postures were 0.852, 0.830, 0.943, 0.920, 0.918, 0.871, 0.893, 0.869, 0.909, 0.885, 0.865, 0.857, 0.934, and 0.915, respectively. PVT achieved the highest accuracy, followed closely by the model in this study, with negligible differences of only 0.009 and 0.005, respectively. This minor difference could be attributed to PVT's application of a self-attention mechanism, which enhanced the model's ability to focus on key features based on the transformer architecture. This mechanism enables more accurate feature localization compared to HRNet-48, which relies on a multi-resolution channel-cascading strategy. In contrast, CPM exhibited the lowest accuracy. This suggests that relying solely on convolutional forms for inferring the key point categories and positions of abnormal cage-reared ducks was not suitable for the experimental environment of this study.

In the real-time performance comparison results shown in Figure 20, the single-image processing times of the models were 0.482, 0.745, 0.431, 0.448, 0.577, 0.350, and 0.528 s, respectively. Among these, liteHRNet demonstrated the best real-time performance, being 0.178 s faster than the model in this study, albeit with slight decreases in accuracy of 0.069 and 0.058, respectively. This indicates that, although the lightweight HRNet network could reduce the model inference time, it may compromise the accuracy improvement. Moreover, compared with PVT, the model proposed in this study reduced the processing time by 0.217 s. Given the minimal difference in accuracy between the two models, this suggests that, for estimating abnormal cage-reared duck poses, the model proposed in this study achieved a balance between high accuracy and excellent real-time performance, indicating the best comprehensive detection ability.

Discussion

During the exploration of the stereoscopic cage-rearing mode for meat ducks, the timely detection and identification of abnormal ducks are crucial for reducing the risk of disease transmission, ensuring group health and breeding welfare, and promoting the stable development of the poultry industry. Utilizing artificial intelligence technology for the accurate and real-time perception of abnormal cage-reared ducks is a key trend in the industry's stable development. However, current research on poultry behavior detection has primarily focused on meat/egg chickens, and there is a lack of technology and theoretical research on detecting abnormal behavior in meat ducks. This study addressed this gap by focusing on the abnormal states of overturning and death that often occur in 10-day-old cage-reared ducks. Using computer vision technology, a method for identifying abnormal situations in cage-reared ducks was proposed, and further research was conducted on the estimation method for abnormal cage-reared duck posture, achieving accurate and real-time detection of two typical abnormal states. The following three points of discussion are presented based on the experimental process and results.
Influence of Different Abnormal States of Cage-Reared Ducks on Model Recognition and Pose-Estimation Accuracy

In this experiment, we proposed a method for the recognition of abnormal cage-reared ducks, focusing on the two typical abnormal states of overturned and dead, based on YOLOv8-ACRD. We further refined this approach by exploring an abnormal cage-reared duck posture estimation model based on HRNet-48, focusing on six key points, including the head, beak, chest, tail, left foot, and right foot. Our research demonstrated the effectiveness and accuracy of these models. We also discovered that both proposed models exhibited a pattern of higher detection accuracy for overturned cage-reared ducks than for dead ducks. This pattern was also observed in the performance comparison experiments with other mainstream methods. This phenomenon may be attributed to the fact that, when a cage-reared duck is overturned, its head, legs, and other body parts are not obstructed. In contrast, in the case of death, the duck's body is covered to varying degrees by the trampling and attacks of other ducks, making feature extraction and localization more challenging and resulting in differences in detection accuracy. This finding could serve as a reference for research focusing on the recognition of the physiological states of livestock and poultry based on specific body shapes and postures.

Impact of Introducing an Attention Mechanism and an Optimized Loss Function on Model Performance

Numerous studies have shown that tailoring attention mechanisms, changing loss functions, or deepening network architectures based on different experimental subjects and environments can enhance model performance. In this study, we built upon YOLOv8 as the base network and embedded the GAM after each C2f module in its neck section to enhance the focus on regions of abnormal cage-reared ducks in the images. Additionally, we replaced the CIoU loss function with Wise-IoU to optimize the model-training process, resulting in the proposed YOLOv8-ACRD for detecting abnormal situations in cage-reared ducks. The experimental results demonstrated that the abnormal situation-recognition model for cage-reared ducks based on YOLOv8-ACRD significantly improved detection accuracy compared to the original YOLOv8. This finding suggests that, in specific object-detection tasks, selecting attention mechanisms or other optimization methods based on the properties of the experimental object can improve model performance and enhance detection effectiveness.

Limitations and Future Directions

Extensive experiments have demonstrated that the proposed method and pose-estimation model for identifying and estimating the poses of cage-reared ducks in overturned and dead situations achieve accurate detection. The proposed approach exhibited excellent generalization ability and robustness, accurately detecting the key parts of the abnormal cage-reared duck body and forming correct pose expressions. It also demonstrated superior overall performance compared to other commonly used methods. However, this study had certain limitations.
(1) Ducks are inherently sensitive and susceptible to stress, making them prone to a range of bacterial and viral diseases. Although various types of abnormal situations can occur, this study specifically focused on detecting two common scenarios: overturned and dead ducks. A gap exists between the diverse range of abnormal conditions observed in real-world cage-reared ducks and those addressed in this study. Future research could incorporate thermal infrared sensing, audio processing, and hyperspectral/near-infrared technologies. A more comprehensive method for identifying various abnormalities in cage-reared ducks could be developed by integrating temperature, sound, and fecal spectral information through multi-source data fusion. (2) The abnormal cage-reared duck dataset and the abnormal cage-reared duck pose-estimation dataset were labeled manually, a labor-intensive process. Future research should concentrate on semi-supervised approaches to enhance the model's performance with reduced manual effort and data requirements. (3) As ducks age, their appearance and shape change significantly. The experimental subjects in this study were limited to 10-day-old ducks. Future research should incorporate age gradients to improve the robustness of this model. (4) This study did not address the simultaneous detection and classification of multiple abnormal ducks, nor did it effectively estimate the posture of heavily obscured ducks. Given the high-density nature of poultry farming, future research will focus on the multi-target detection of abnormal ducks in such scenarios, as well as on feature generation and completion.

Conclusions

This study introduced a method for recognizing abnormal situations in cage-reared ducks based on YOLOv8-ACRD. The method achieved accurate detection of overturned and dead situations, with an mAP of 0.924, surpassing the original YOLOv8 by 1.65%. It also effectively handled the recognition interference caused by changes in lighting conditions, demonstrating excellent generalization and robustness. Compared to other methods, this approach demonstrated superior overall performance. Additionally, by refining the structure of the abnormal cage-reared duck body and focusing on six key parts, an abnormal cage-reared duck pose-estimation model based on HRNet-48 was proposed. The model achieved an OKS value of 0.921, accurately detecting the key points of the cage-reared ducks and generating correct posture expressions. Furthermore, it demonstrated excellent real-time performance, surpassing other mainstream pose-estimation models in terms of overall accuracy and efficiency. Moreover, both of the proposed models consistently demonstrated lower perception accuracy for dead cage-reared ducks than for overturned ducks. This method could offer technical support for enhancing the cage-rearing mode for ducks, ensuring animal welfare, and serving as a reference for the intelligent breeding of other poultry animals.
Figure 1. Example of the process and layout for collecting abnormal situations in cage-reared ducks.
Figure 3. Composition of the two types of dataset.
Figure 6. The channel attention Mc structure.
Figure 7. The spatial attention Ms structure.
Figure 8. The network structure for abnormal cage-reared duck posture estimation.
Figure 9. Experimental procedure for perceiving abnormal situations in cage-reared ducks.
Figure 10. The trend of loss values with iterations during the training process of YOLOv8 and YOLOv8-ACRD.
Figure 11. The trend of mAP values with iterations during the training process of YOLOv8 and YOLOv8-ACRD.
Figure 12. Comparison of optimal models YOLOv8 and YOLOv8-ACRD based on recall and F1 score.
Figure 13. The visualization comparison of maximum activation feature maps for the neck in YOLOv8 and YOLOv8-ACRD. (A) YOLOv8; (B) YOLOv8-ACRD.
Figure 14. Recognition results of abnormal situations in cage-reared ducks based on YOLOv8-ACRD.
Figure 15. Comparative experimental results. (A) Distribution of mAP values for each model; (B) distribution of AP values for cage-reared duck overturned situation recognition; (C) distribution of AP values for cage-reared duck dead situation recognition.
Figure 17. Trend of OKS values with epoch variation based on the HRNet-48 abnormal cage-reared duck pose-estimation model.
Figure 19. Comparison results of OKS for abnormal cage-reared duck pose estimation.
Table 2. Definition of abnormal situations and key body parts of cage-reared ducks.
Table 3. The distribution of the abnormal cage-reared duck dataset.

Note: to prevent stress caused by the inspection robot from affecting the ducks' behavior, the ducks underwent adaptive training before the experiment.

After feature-fusion processing in the neck section, the feature maps of three scales (80 × 80 × 256, 40 × 40 × 512, and 20 × 20 × 512) were put out to the head network for loss calculation and detection-box filtering, thereby obtaining category and positional information of abnormal cage-reared ducks of various sizes. The YOLOv8 loss function combined three loss functions: the classification loss VFL (Varifocal Loss), the regression loss (CIoU), and the Distribution Focal Loss (DFL). YOLOv8-ACRD introduced Wise-IoU to replace CIoU, improving the regression performance of the model in predicting bounding boxes.
Size-dependent visible absorption and fast photoluminescence decay dynamics from freestanding strained silicon nanocrystals In this article, we report on the visible absorption, photoluminescence (PL), and fast PL decay dynamics from freestanding Si nanocrystals (NCs) that are anisotropically strained. Direct evidence of strain-induced dislocations is shown from high-resolution transmission electron microscopy images. Si NCs with sizes in the range of approximately 5-40 nm show size-dependent visible absorption in the range of 575-722 nm, while NCs of average size <10 nm exhibit strong PL emission at 580-585 nm. The PL decay shows an exponential decay in the nanosecond time scale. The Raman scattering studies show non-monotonic shift of the TO phonon modes as a function of size because of competing effect of strain and phonon confinement. Our studies rule out the influence of defects in the PL emission, and we propose that owing to the combined effect of strain and quantum confinement, the strained Si NCs exhibit direct band gap-like behavior. Introduction The discovery of unusual quantum-induced electronic properties, including photoluminescence (PL), from Si nanocrystals (NCs) has aroused huge scientific interest on Si nanostructures [1][2][3]. The origin of the PL in the Si NCs is still being debated because of difficulty in isolating the contributions of quantum confinement, surface states and embedding matrix have on the band structure in these materials [4,5]. In general, Si NCs are embedded in other materials with different elastic constants and lattice parameters. In such a case, owing to the lattice mismatch, the consequent elastic strain is known to impact their properties [6]. Lioudakis et al. [7] investigated the role of Si NCs size and distortion at the grain boundary on the enhanced optical properties of the nanocrystalline Si film with the thickness range of 5-30 nm using spectroscopic ellipsometry. They showed that, in the strong confinement regime (≤2 nm), the increase in interaction between fundamental band states and surface states due to distortion results in pinning up of absorption bands. Lyons et al. [8] studied the tailoring of the optical properties of embedded Si nanowires through strain. Thean and Leburton studied the strain effect in large Si NCs (10 nm) embedded in SiO 2 and showed that coupling between the Si NCs and the strain potential can enhance the confinement [9]. Thus, one would expect an enhanced quantum confinement effect resulting in increased band gap for strained Si NCs as compared with the unstrained Si NCs. Several authors have studied the role of strain and quantum confinement on the optical emission of semiconductor NCs, including Si NCs embedded in a SiO 2 matrix [9,10] and Ge NCs embedded in SiO 2 [11]. While these studies find evident strain effects on the band gap, to our knowledge, no study has focused on the coupled effects of size and strain on freestanding Si NCs. Recent reports on the visible PL from freestanding core-shell Si quantum dots provide evidence of quantum confinement-induced, widened band gap-related transitions, and oxide-associated interface-state-related transitions [12,13]. However, the effect of lattice strain in the observed PL emission had been completely ignored in these studies. In this letter, we investigated the strain evolution and resulting changes in the optical properties of the freestanding strained Si NCs with size down to approximately 5 nm. 
Microstructure of the Si NCs is studied by high-resolution transmission electron microscopy (HRTEM). Si NCs size and anisotropy in strain are calculated from detailed analysis of X-ray diffraction (XRD) line profile. The optical properties are studied using UV-Vis-NIR absorption, PL, and Raman measurements. Mechanisms of visible PL and fast PL decay dynamics are discussed in the framework of anisotropic strain and confinement effects on Si NCs. Experimental Commercial high purity Si powder (particle size approximately 75 μm, Sigma-Aldrich, Germany) was ball-milled at 450 rpm for a duration of 2-40 h in a zirconia vial (Retsch, PM100) under atmospheric condition using small zirconium oxide balls at a weight ratio of 20:1 for Si powder. Very fine Si NCs with few nanometer sizes obtained after every 2, 5, 10, 20, 30, and 40 h of ballmilling were studied. These samples are named as Si-2, Si-5, Si-10, Si-20, Si-30, and Si-40, respectively. The size, strain, microstructure, and related dislocation density were calculated from powder XRD (Seifert 3003 T/T) pattern and verified by HRTEM (JEOL, JEM-2100) imaging. For careful determination of average NCs size, internal lattice strain, and dislocation density, XRD data were collected at a slow rate at of 0.0025°/s. The UV-Vis-NIR absorption spectra of all the samples were recorded using a commercial spectrometer (Shimadzu 3010PC) at room temperature. Steady-state PL (Thermo Spectronic, AB2) measurements were performed using a Xenon lamp source at different excitation wavelengths and also with a 488-nm Ar laser as an excitation source. The PL decay measurements were performed with 475-nm laser excitation using a commercial fluorimeter (Edinburgh, LifeSpecII,) with time resolution better than 50 ps. Raman scattering measurement was carried out with a 488-nm Ar + laser excitation using a micro-Raman spectrometer (Jobin Yvon, LabRAM HR-800) equipped with a liquid nitrogen-cooled charge-coupled device detector. Results and discussion Owing to the high speed grinding, substantial size reduction occurs after 2-40 h of milling. The sample milled for 30 h shows the Si NCs with sizes 7-14 nm, and most of the NCs are not purely spherical (Figure 1a). The shape transformation is due to the development of anisotropic lattice strain in the Si NCs, as seen from HRTEM images and XRD studies. After another 10 h of milling, we obtained nearly spherical Si NCs with sizes in the range of 3.5-10 nm, as shown in the HRTEM image in Figure 1b. These NCs are single crystalline, as indicated by clear lattice fringes (Figure 1c) and small area electron diffraction pattern (inset of Figure 1c). In Si-10, lattice strain (distortion) caused by dislocations is clearly observed in the region marked with oval ring in Figure 1c. Careful analysis shows that the interplanar spacing d <111> decreases from 3.13 to 2.95 Å because of size reduction implying a compressive strain developed during milling. Figure 1d shows the histogram of the size distribution for Si-40. It is noted that a lognormal fitting to size distribution yields an average NC size of 6.8 nm, while many NCs have diameter below 6 nm. Similarly, Si-30 shows an average NC size of approximately 10 nm. During the milling process, owing to deformation, strain is expected in the as-prepared Si NCs. The XRD spectra of the freestanding Si NCs obtained after different durations of milling are shown in Figure 1e along with the XRD pattern of the unmilled Si powder (Si-0). 
All the milled Si NCs show strong characteristic XRD peaks for the Si (111), (220), and (311) planes, which confirms high crystalline nature. Our XRD studies on the milled NCs indeed show large broadening in the XRD pattern because of the size reduction and development of strain. To isolate the contribution of strain and size in the observed broadening, XRD line profile analysis is performed following the method of Ungar and Borbely [14]. According to this method, individual contribution of size and strain to the line broadening can be expressed as where ΔK = (2β cos θ B )/λ, b is the FWHM (in radians) of the Bragg reflections; θ is the Bragg angle of the analyzed peak; l is the wavelength of X-rays; D U is the average crystallite size; K = 2sin θ B /l; e is the strain; and C is the dislocation contrast factor, respectively. Details of the calculation of size and strain evolution in Si NCs sizes and strain are reported elsewhere [15]. Our analysis shows clear evidence for anisotropic strain in these NCs. If dislocations are the main contributors to strain (as evidenced from HRTEM image), then the average crystallite size and dislocation density are calculated from a linear fit to Equation 1 (see Figure 1f). The factor C explicitly incorporates the elastic anisotropy of lattice strain. Efficacy of this method has been demonstrated for several systems, including freestanding Ge NCs [16]. Analysis shows that screw-type dislocations are main contributors to the strain in Si NCs. The evolution of crystallite size and dislocation density (strain) as a function of milling time is shown in Figure 1g. For comparison, size obtained from the HRTEM analysis is also shown in Figure 1g. The sizes obtained from both theses analyses are in close agreement. XRD analysis shows that the average NC size monotonically goes down from 43 to 8.2 nm as the milling time increases from 2 to 40 h. On the other hand, the strain/dislocation density first increases up to 10 h of milling and then it slowly decrease for higher milling time. This can be explained as follows: during milling, the strain and dislocations first develop; however, for prolonged milling when the dislocation density is high, the crystal breaks up along the slip plane and thus produces smaller sized NCs. In this way, strain is partly released for a prolonged milling time [15]. The presence of lattice strain and possible phonon confinement in Si NCs were further studied by micro-Raman analysis, and the results are shown in Figure 2a. The pristine Si powder exhibits a sharp peak at 520 cm -1 associated with the transverse optical (TO) phonon mode and second-order modes at 300 and 960 cm -1 corresponding to 2TA and 2TO modes, respectively. A plot of Raman shift of TO phonon modes as a function of NC size is shown as inset of Figure 2a. It is noted that the TO modes for different sized NCs show large red shift (from 520 cm -1 down to 503.8 cm -1 ) and line shape broadening (from 10.2 up to 26.6 cm -1 ) with respect to pristine Si powder. Such a large red shift cannot be accounted for phonon confinement effect, as the Si NC sizes are quite large here, especially in Si-2 and Si-5. Thus, the red shift is primarily caused by the local heating of the Si NCs during Raman measurement that uses a 488-nm laser excitation at a sample power of approximately 0.9 mW. Owing to poor thermal conduction in freestanding Si NCs, local heating is expected to be significant. 
It has been reported that because of local heating by laser excitation, TO phonon modes shows a significant red shift for Si nanowires [17] and Si nanogranular film [18]. Heating effect is expected to increase with decreasing NC size. Possible contribution of ultrathin native oxide layer on Si NCs to the red shift cannot be ruled out, as we observe even higher red shifts for these NCs when oxidized during prolonged storage in air ambient. It is noted that with increasing milling time (up to 10 h), the strain first increases (see Figure 1g) along with size reduction. Owing to the presence of a large compressive strain (as evidenced from HRTEM analysis), one would expect a blue shift in the TO mode that is consistent with our observation in Si-10, as it shows the maximum strain. Therefore, from Si-2 to Si-20, the observed red shifts are due to the competitive effect of local heating and compressive strain in the lattice, as both increase with the size reduction. As there is a sudden increase in the compressive strain in Si-10, the blue shift due to the compressive strain is dominant over heating-induced red shift, this results in a blue shift compared with Si-5. In the case of Si-20, with size reduction, heating-induced red shift increased but, owing to strain relaxation, blue shift is decreased, which effectively results in a red shift. However, in Si-30, owing to further reduction in size as well as reduced strain, a large red shift is observed. Apparently, a higher intensity Raman peak in Si-30 also implies a lower strain in the NCs. In comparison to Si-20, in Si-30 and Si-40, the phonon confinement effect may contribute considerably to the observed higher red shift. Thus, despite the influence of local heating, Raman spectra clearly show the compressive strain effect in all NCs, while the phonon confinement effect is observed for NCs in Si-30 and Si-40. It appears that at sizes <10 nm, the strained Si NCs may be exhibiting enhanced electron and phonon confinement effect because of combined effect of strain and quantum confinement. This is consistent with the theoretical prediction by Thean and Leburton [9], which showed an enhanced confinement effect on the strained Si NCs of large size (10 nm). Earlier, similar quantum confinement-related band structure modification has been observed by Lioudakis et al. [19] from nanocrystalline Si film (approximately 10 nm). Such enhanced confinement effect can be probed by optical absorption and PL emission from the strained Si NCs. Alonso et al. [20] and Lioudakis et al. [21] provided evidence for quantum confinement effect on inter-band optical transitions in SiO 2 embedded Si NCs for diameter below 6 nm. Owing to the possible presence of native oxide layer on Si NCs, core diameter of the NCs may be actually smaller than the diameter observed in HRTEM. It is noted that despite the presence of anisotropic strain, no splitting of the LO-TO mode was observed in this study perhaps because of random orientation and size distribution of the Si NCs that essentially broaden the Raman spectra. Figure 2b shows the absorption spectra of the strained Si NCs showing a strong absorption peak in the green portion of the visible spectrum. A systematic blue shift in absorption peak is observed with decrease in NCs sizes, which is an indication of band gap widening of the NCs. In case of Si-30 and Si-40, most of the Si NCs sizes are of the order of Bohr diameter (approximately 9.8 nm) of electron in Si, where a quantum confinement effect is expected [20,22]. 
However, we observed blue shifts for all the NCs with sizes ranging from 4 to 40 nm. Though the as-prepared Si NCs are likely to have an ultrathin native oxide layer, the size-dependent absorption and low energy of the absorption peak cannot be ascribed to oxide layer or the oxygen-related-defect states. Therefore, strain-induced enhanced quantum confinement effect may play an important role for the band gap widening (as shown in inset of Figure 2b). Thean and Leburton [9] theoretically calculated the band gap widening of Si NCs as a function of strain and showed that the coupling between the Si NCs geometry and the symmetry generated by the strain potential can enhance the confinement in the quantum dot and can lift the degeneracy of the conduction band valleys for nonspherically symmetric NCs. In the present case, many of the anisotropically strained Si NCs are nonspherical (see Figure 1b). Hence, lattice strain may have caused enhanced confinement effect that gave rise to the widening of band gap in these Si NCs, as evident from the absorption spectra. Hadjisavvas and Kelires [23] have also theoretically shown the influence of strain and deformation to the pinning of the fundamental energy band gap of the Si NCs embedded in amorphous oxide matrix. The Si NCs in Si-30 and Si-40 show strong PL emission in the visible region, which requires fitting of two Gaussian peaks, as shown in Figure 3a,b. The centers of the two peaks are located at 585 and 640 nm for Si-30, and 580 and 613 nm for Si-40, respectively. The emission peaks for the Si-40 is blue shifted, and the peak intensity is also enhanced compared with Si-30. It is noted that no visible PL emission was detected from the as-prepared NCs in Si-5, Si-10, and Si-20, all of which have average NC sizes above 10 nm. However, after prolonged storage in ambient air that causes a thicker oxide layer on the Si NCs, we observe a broad PL emission band at approximately 750 nm from all the samples excited with 488-nm laser, as shown in inset of Figure 3b. As the PL data shown in Figure 3a,b are recorded soon after the milling process, native oxide layer thickness is too small to contribute toward any discernable peak at approximately 750 nm in Figure 3a,b. The approximately 750-nm broad peak is attributed to oxygen-related-defect states in surface oxide layer [13]. We note that 585-nm peak is very strong as compared to the 640-nm peak in Si-30 and this shows a blue shift and higher intensity peak at 580 nm for Si-40, because of to size reduction. Further, the 585-nm peak in Si-30 is found to be completely independent of the excitation wavelength, whereas the 640-nm peak shifts to lower wavelength (higher energy) of 629 nm when excited at a lower wavelength, as shown in the inset of Figure 3a. This excitation energy dependence of the 640-nm peak strongly indicates its origin as surface/interface defectrelated states. On the other hand, 585-nm peak cannot originate from defect-related state. Wilcoxon et al. [24] reported on the appearance of PL peaks in the range 1.8-3.6 eV for different sizes of Si NCs. The intense violet peak was assigned to direct electron-hole recombination, whereas the less intense PL peak (approximately 600 nm) was attributed to the surface states and phonon-assisted recombination. Lioudakis et al. [7] showed that L-point indirect gap of nanocrystalline Si film increases monotonically with decreasing film thickness down to 5 nm, as exactly predicted from the quantum confinement theory. 
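Before moving on to the assignment of these bands, the two-Gaussian deconvolution applied to the Si-30 and Si-40 PL spectra above can be sketched as follows. The spectrum below is synthetic, and the starting guesses simply reuse the 585 nm and 640 nm centers quoted in the text; the fitted values are therefore illustrative, not measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    """Sum of two Gaussian bands, each defined by amplitude, center, and width."""
    g1 = a1 * np.exp(-((x - c1) ** 2) / (2 * w1 ** 2))
    g2 = a2 * np.exp(-((x - c2) ** 2) / (2 * w2 ** 2))
    return g1 + g2

# Synthetic PL spectrum standing in for the measured Si-30 data (wavelength in nm)
wl = np.linspace(500, 750, 400)
pl = two_gaussians(wl, 1.0, 585, 25, 0.35, 640, 30) + 0.02 * np.random.randn(wl.size)

# Initial guesses follow the band positions discussed in the text
p0 = [1.0, 585, 20, 0.3, 640, 25]
popt, _ = curve_fit(two_gaussians, wl, pl, p0=p0)

print(f"band 1 center: {popt[1]:.1f} nm, band 2 center: {popt[4]:.1f} nm")
```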
Since the excitation wavelength of 460 nm is above the L-point gap (indirect) of Si-30, phonon-assisted recombination is likely to contribute to the 640-nm PL peak in Si-30. Similarly, Ray et al. [13] ascribed the PL bands at approximately 600 and 750 nm from core-shell Si/SiO 2 quantum dots to oxide-related interface defect states. Therefore, phonon-assisted recombination is most likely to be responsible for the low intensity peak at 613-640 nm. However, the strong emission at 580-585 nm cannot arise from such a process. It is noted that in the literature, less intense PL peak at around approximately 600 nm from Si NCs is generally attributed to surface states only for very small NCs (<3-4 nm). PL excitation measurements for Si-30 and Si-40 at their corresponding emission wavelengths (585/580 nm) show that Stokes shift is very insignificant (approximately 0.067 eV). This is also obvious from the relatively close position of the absorption and emission peaks for Si-30 and Si-40. Such a small shift again rules out the involvement of defects or interface states being responsible for the observed PL. This may indicate a direct transition from valence band to conduction band in the Si NCs. Further, if the interface defects or oxide layer contribute to the 585 nm PL, then one would expect this band from all the samples that show absorption in the visible region, which is contrary to the observation. Therefore, strain-induced enhanced quantum confinement may responsible for the observed PL band at 580-585 nm. To further understand the nature of transition, we studied the PL decay dynamics of the observed band at 580/585 nm (Figure 3c,d). For Si-30, the decay profile fits to a single exponential decay with time constant τ 1 = 3.67 ns, while for Si-40, it fits to a bi-exponential decay with time constants τ 1 = 2.34 ns, τ 2 = 8.69 ns. It is noted that for Si-40, amplitude of the fast decay component (τ 1 ) is about six orders of magnitude higher than that of the slow component (τ 2 ). This is consistent with the steady-state PL spectra that show a very strong peak at 580 nm as compared to the weak band at 613 nm. Further, reduction in τ 1 from 3.67 to 2.34 ns with size reduction in Si-40 is consistent with the quantum confinement effect, and this minimizes the possibility of the fast decay dynamics to be attributed to defect states. Most of the reported PL decay behavior of Si NCs has lifetime values in the range of microseconds to a few milliseconds and the NCs are usually embedded in SiO 2 matrix [25][26][27][28], while some studies reported decay in the nanosecond time scale [29,30]. In the present case, Si NCs are freestanding with minimum influence of native oxide layer, and emission is monitored specifically at 580/585 nm. Since the 580/585-nm PL band does not originate from defects, the observed properties are believed to be intrinsic to the strained Si NCs core. We believe that this fast decay dynamics is a signature of formation of quasi-direct energy bands in the band structure of the Si NCs, since in the case of quasi-direct nature of transition the electron-hole recombination process is very fast [22]. However, possible contribution of non-radiative decay channel in the observed fast PL decay cannot be fully ruled out. Othonos et al. [31] showed that surface-related states in the oxidized Si NCs can enhance the carrier relaxation process and Auger recombination does not play a significant role even in small NCs. 
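The lifetimes quoted above can be extracted with a standard least-squares fit of the decay trace. The sketch below uses a synthetic bi-exponential trace with the Si-40 values as ground truth, so the printed numbers are illustrative; the amplitude-weighted average lifetime is added only as a common summary figure.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Bi-exponential PL decay with amplitudes a1, a2 and lifetimes tau1, tau2 (ns)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic decay trace standing in for the Si-40 measurement (time in ns)
t = np.linspace(0, 40, 800)
signal = biexp(t, 1.0, 2.34, 0.05, 8.69) + 0.005 * np.random.randn(t.size)

popt, _ = curve_fit(biexp, t, signal, p0=[1.0, 2.0, 0.1, 8.0])
a1, tau1, a2, tau2 = popt
print(f"tau1 = {tau1:.2f} ns (amplitude {a1:.2f}), tau2 = {tau2:.2f} ns (amplitude {a2:.2f})")

# Amplitude-weighted average lifetime, often quoted alongside the individual components
tau_avg = (a1 * tau1 + a2 * tau2) / (a1 + a2)
print(f"amplitude-weighted <tau> = {tau_avg:.2f} ns")
```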
It may be noted that this study deals with Si NCs that are freestanding and not oxidized (intentionally). Based on these observations and recent reports [12,13], we are inclined to suggest that dominant transition involving strain-induced, enhanced quantum confinementrelated, widened band gap states are responsible for the distinct visible absorption and an intense visible PL at 580-585 nm from the freestanding Si NCs. While the absorption/photoexcitation of carriers is certainly a band-to-band transition process, higher wavelength emission bands are though to be defect mediated. Such transitions can take place via a three-step process: (i) creation of electron-hole pairs inside the crystalline core, followed by (ii) nonradiative relaxation of electrons within the band, and (iii) subsequent radiative de-excitation of the electron to the valence band of the core. As the Stokes shift is very small for the 580/585 nm band, the thermal relaxation loss is very small. Hence, the photoexcited carriers in this case are not at all relaxing at the band edge or at the interface states, they are possibly relaxing within the band. The higher size as-prepared Si NCs did not exhibit the approximately 585-nm PL band partly because of the absence of quantum confinement effect and partly because of the presence of high density of dislocations, as evident from Figure 1. These dislocations usually quench the PL, and hence no PL signal was detected. Conclusions In conclusion, we synthesized anisotropically strained freestanding Si NCs with sizes approximately 5-42 nm that are freestanding and studied the optical absorption and PL emission from these NCs as a function of its size. The Raman studies show that besides the local heating effect that causes a substantial downshift, TO modes upshift because of compressive strain in all the NCs, while the phonon confinement-induced downshift is observed for NCs with average size below 10 nm. The observed enhanced visible absorption and the systematic blue shift in absorption peak with size reduction are believed to be caused by the combined effect of lattice strain and quantum confinement effects. Size-dependent strong PL band at 585 nm and the fast PL decay dynamics for this band are believed to be caused by the quasi-direct energy bands in the strained Si NCs. Role of defects in the 585-nm PL emission was ruled out. These results imply that strain engineering of Si NCs would enable tunable visible light emission and fast-switching light-emitting devices.
v3-fos-license
2022-06-12T15:02:04.957Z
2022-06-09T00:00:00.000
249589863
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2079-9292/11/12/1840/pdf?version=1654833008", "pdf_hash": "144d66bb7c4b98343eaec53ae87136b5c0fc1a0c", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46277", "s2fieldsofstudy": [ "Engineering", "Physics" ], "sha1": "125ae995e8724427528ede3c61c87ae6bd544540", "year": 2022 }
pes2o/s2orc
Single-Branch Wide-Swing-Cascode Subthreshold GaN Monolithic Voltage Reference A voltage reference generator in GaN IC technology for smart power applications is described, analyzed, and simulated. A straightforward design procedure is also highlighted. Compared to previous low-power monolithic solutions, the proposed one is based on a single branch and on transistors operating in a subthreshold. The circuit provides a nearly 2.7 V reference voltage under 4 V to 24 V supply at room temperature and with typical transistor models. The circuit exhibits a good robustness against large process variations and improves line regulation (0.105 %V) together with a reduction in area occupation (0.05 mm2), with a reduced current consumption of 2.7 μA (5 μA) in the typical (worst) case, independent of supply. The untrimmed temperature coefficient is 200 ppm/◦C. Introduction Voltage reference generators are fundamental building blocks used in the power management section of virtually any integrated circuit (IC) system. They provide a stable and accurate reference output voltage, V REF , that is used by other blocks to work correctly. Examples of systems that exploit voltage references for proper function include switching power converters, linear regulators, oscillators, PLLs, A/D and D/A converters, operational amplifiers, etc. The characteristics of V REF are usually measured in terms of insensitivity to supply voltage variations (either as line regulation, ∆V REF /∆V DD , or power supply rejection, PSR, as a function of frequency) and to temperature (voltage drift or temperature coefficient, TC = ∆V REF /∆T). Insensitivity to load variations (load regulation, ∆V REF /∆I LOAD ) is another important feature that is often achieved by a following voltage regulator circuit. Other sources of inaccuracy in IC implementations are caused by process variations and mismatches. In these last years, the emergence of GaN (gallium nitride) HEMT (high electron mobility transistor) technologies have attracted considerable interest by the power IC design community. Indeed, GaN HEMT processes are ultimately being developed in an attempt to allow for the realization of next-generation, fully integrated GaN power converters integrating both high-voltage power devices and low-voltage peripheral driving devices with mixed-signal functional blocks into the same substrate (smart power). A monolithic solution is in fact preferable as, in addition to minimizing the area, the cost, and the packaging effort and to improving the reliability, this solution also reduces the interconnection parasites between the driver and the power switch. This allows for all the potential advantages of the wide bandgap GaN devices, which can be summarized into higher power density, breakdown voltage, operating temperature, and frequency and lower on-resistance when compared to traditional power MOS devices [15][16][17][18]. In the field of smart power electronics, the implementation of a monolithic GaN gate driver for a GaN power switch is one of the major emerging research targets [19][20][21][22][23][24][25][26], as it is one of the main building blocks of power converters. Of course, a GaN gate driver needs auxiliary sub circuits to operate correctly, including a voltage reference generator. Unfortunately, the available GaN technologies for smart power ICs are far from mature and suffer from a large spread in process parameters, especially in terms of device threshold voltages. 
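As a minimal illustration of the figures of merit defined above, the following sketch computes line regulation and a box-method temperature coefficient from tabulated VREF sweeps. The sweep values are placeholders, not simulation results of the circuit discussed in this paper.

```python
import numpy as np

def line_regulation(vdd, vref):
    """Line regulation in mV/V over the full supply range."""
    return 1e3 * (vref[-1] - vref[0]) / (vdd[-1] - vdd[0])

def temp_coefficient_ppm(temp_c, vref):
    """Box-method TC in ppm/degC: (Vmax - Vmin) / (Vmean * dT) * 1e6."""
    dv = vref.max() - vref.min()
    dt = temp_c.max() - temp_c.min()
    return 1e6 * dv / (vref.mean() * dt)

# Placeholder sweeps (hypothetical values)
vdd = np.array([4.0, 8.0, 16.0, 24.0])
vref_vs_vdd = np.array([2.685, 2.692, 2.700, 2.706])

temp = np.array([-40.0, 25.0, 110.0, 190.0])
vref_vs_t = np.array([2.60, 2.66, 2.65, 2.63])

print(f"line regulation: {line_regulation(vdd, vref_vs_vdd):.2f} mV/V")
print(f"TC: {temp_coefficient_ppm(temp, vref_vs_t):.0f} ppm/degC")
```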
Such a large parameter spread is a severe limitation when implementing a voltage reference that must provide a reliable, maximally constant voltage under the extreme technology corners as well as over a wide temperature range. In this context, previous GaN solutions are basically focused on minimizing the temperature coefficient of the reference voltage, but disregard the problem that different samples from different lots may provide very different reference voltage values owing to the very large parameter spread. In this paper, the topology of a voltage reference generator designed in a commercial AlGaN/GaN technology for smart power applications [27,28] is presented. The design is not trivial and takes into consideration the limitations of the technology, which provides only n-channel enhancement (E) and depletion (D) devices but lacks a complementary p-channel transistor and p-n junctions. In particular, since p-n junctions are essential for bandgap references, new methods must be devised that allow for the design of a low-TC reference generator operating in the wide temperature range offered by the GaN technology and in a wide supply range from around 4 V to 24 V, as required by consumer and automotive applications. In addition, low area occupation and low current consumption are key features that are more and more in demand also from the automotive sector. Indeed, the number of electronic devices to be embedded in the vehicle is constantly increasing and this should not significantly impact the battery autonomy. The paper is organized as follows. Section 2 illustrates the solutions found in the literature. The proposed architecture as well as an accurate design methodology are then presented in Section 3. The main simulation results are summarized in Section 4. The authors' conclusions are drawn in Section 5. Previous Art The simplest and most widely used method for implementing a voltage reference in a GaN IC exploits an external zener diode. However, this causes non-negligible drawbacks such as the increase in inductive parasitic effects due to the bonding wires and, above all, a large temperature drift in VREF. It is therefore essential to avoid the zener diode and to realize the voltage reference in a monolithic form to counteract these drawbacks. Schottky-Diode-Based GaN Voltage Reference The first monolithic voltage reference in AlGaN/GaN HEMT and Schottky diode technology [28] is depicted in Figure 1 [29]. The depletion transistor, QD1, implements a current source and is operated in the subthreshold regime to obtain relatively low power consumption (0.8 mA at room temperature). It should be noted that QD1 must also provide enough current driving capability to the external load to prevent VREF from being affected by the load, because no additional voltage regulator was used in this application. Diodes D3 and D4 implement a source degeneration of QD1 that stabilizes the current magnitude against process tolerances (for instance, variations of the threshold voltage of QD1). Moreover, D3 and D4 also allow for compensation of the effects that temperature changes have on D1 and D2 [29]. The resulting output reference voltage exhibited a TC of less than 0.5 mV/°C, as reported in [29]. It should be noted that the voltage drop across D3 and D4 must be smaller than the magnitude of the threshold voltage of QD1 to ensure the transistor turn-on. This early solution suffers from several drawbacks.
A negative supply voltage (−9 V) is indeed required; otherwise, VREF would depend linearly on VDD, which means unitary line regulation. A relatively high temperature coefficient is also observed, and Schottky diodes are not always offered by commercial GaN platforms. Finally, a high current consumption is found. To solve these drawbacks, two reference voltage generators based on two different current mirrors were proposed in [30]. Reference Voltage Generators Based on Current Mirrors The first solution discussed in [30] is shown in Figure 2 and is based on the Wilson current mirror (QE1-QE3), in which the reference current IREF = IQ,D1 is realized through QD1 and the current-limiting resistor R1. As shown in [30], the circuit's loop gain ensures that approximately the same current IREF flows in the two branches.
By setting R1 suitably large, we obtain a very small IREF value, so that QD1 is biased near the threshold, i.e., VGS,D1 approaches the (negative) threshold voltage VTH,D, whereas VGS,E1 and VGS,E2 approach the threshold voltage VTH,E, as expressed by Equation (2). It is shown in [30] that the threshold voltages of the D-type transistor and of the E-type transistor are both proportional to the absolute temperature (PTAT), with the TC of the D-GaN smaller than that of the E-GaN. For the specific adopted technology, setting VDS = 1 V, it is seen that VTH,D increases with temperature by 1.1 mV/°C, whereas VTH,E increases with temperature by 4.1 mV/°C. As a result, IREF will exhibit a complementary-to-absolute-temperature (CTAT) behavior. The equally sized transistors QE1-QE2 of the Wilson current mirror ensure, to a first approximation, the same drain currents IDE1 and IDE2; hence, we have IDE2 ≈ IDE1 = IREF. As a consequence, VREF is given by Equation (3). From the above considerations, it is seen that an ideally zero TC can be achieved by suitably setting the ratio R2/R1 to 4.1/1.1 = 3.72. The main drawbacks of this solution are listed below. VREF is still dependent on VDD because the drain-to-source voltage seriously affects the threshold voltage of the E-GaN (with a slope of −36.3 mV/V [30]). A large area occupation is caused by the required large values of R1 and R2. Unavoidable mismatch affects IDE1 and IDE2 because QE1 and QE2 work at substantially different VDS values. A second improved solution was then presented in the same paper [30], as described below. A wide-swing-cascode structure QE1-QE4 is used to suppress the VDS variations of QE1-QE2 and to accurately set IDE1 = IDE2. Moreover, QE3 in Figure 2 is changed into a depletion device, QD2, in Figure 3. This choice allows us to decrease the minimum required VDD, which can now range from 3.9 V to 24 V, thanks to the negative threshold of QD2.
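The resistor-ratio choice for TC cancellation quoted above (R2/R1 = 4.1/1.1 ≈ 3.72) can be checked with a few lines. The first-order model used below, VREF ≈ VTH,E + (R2/R1)·|VTH,D|, is an assumption inferred from the discussion, since Equation (3) itself is not reproduced in the text.

```python
# Threshold temperature slopes reported for the adopted technology (mV/degC)
d_vth_e = 4.1   # E-GaN threshold drift
d_vth_d = 1.1   # D-GaN threshold drift (|V_TH,D| shrinks as temperature rises)

# Zero-TC condition for the assumed first-order model V_REF ~ V_TH,E + (R2/R1)*|V_TH,D|:
# dV_REF/dT = d_vth_e - (R2/R1) * d_vth_d = 0
ratio = d_vth_e / d_vth_d
print(f"R2/R1 for nominally zero TC: {ratio:.2f}")

# Residual drift if the ratio is rounded to a practical value
for r in (3.5, 3.7, ratio):
    drift = d_vth_e - r * d_vth_d   # mV/degC
    print(f"R2/R1 = {r:.2f} -> residual drift {drift:+.2f} mV/degC")
```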
With an analysis similar to the one performed for the circuit in Figure 2, we obtain the expression of VREF for the circuit in Figure 3, given by Equation (4). Again, as in (3), by suitably selecting the ratio (R2 + R3)/R1, the TC of VREF can be ideally nullified. This solution is still affected by some drawbacks. VREF is residually dependent on VDD because the current ID1 in saturation linearly depends on VDS through the Early effect, and VDS is in turn the result of the VDD voltage divider between the output impedance of QD1 and that of the current mirror (drain of QE3), which are both high impedances. A large area occupation is still caused by the required large values of the resistors. Two branches are used. 2-T Voltage Reference The 2-T voltage reference was recently proposed in [31] and its schematic diagram is shown in Figure 4. It is constituted by only one branch made up of two transistors: a depletion device, QD1, acting as a current source, and a diode-connected enhancement device, QE1, acting as the load. From inspection, we see that VGS,E1 = −VGS,D1 = VREF, with both transistor currents equal.
Assuming QE1 and QD1 in saturation, with the drain currents expressed in the form ID = kn(VGS − VT)², where kn is the transconductance factor, we obtain Equation (5). By suitably selecting the transconductance factor ratio, kn,E/kn,D, the TC of the reference voltage can be minimized. From (5), we can also evaluate the reference current, given by Equation (6). The main drawbacks of this solution are listed below. VREF is dependent on VDD because the current ID1 of QD1 in saturation linearly depends on VDS through the Early effect, and VDS is in turn equal to VDD − VREF. Though the minimum number of transistors is used and no resistors are exploited, a large area occupation is still necessary because large channel lengths are required to reduce the current consumption and to improve the line regulation. It should also be noted that the current IREF cannot be freely chosen once the transconductance ratio is set for TC minimization. Proposed Voltage Reference Generator The proposed solution aims at four main targets, namely low area and low current consumption independent from the supply, robustness against process corners, and, of course, a reasonably low temperature coefficient. These features must be obtained under an additional constraint derived from the adopted technology, which is characterized by D-type transistors with a very stable threshold voltage, VTH,D, whose TC is about 250 µV/°C, much lower than the TC of VTH,E (that is 4.2 mV/°C). This technological behavior then makes approaches such as that in [30] almost useless because, using (3) or (4), the TC minimization of VREF would require an extremely large and impractical resistor ratio (as high as 17). To achieve the aforementioned goals, the proposed voltage reference generator shares the best properties of the previous topologies [30,31] but with a further reduction in current consumption, obtained by operating all the transistors in their subthreshold region. Furthermore, insensitivity to supply voltage variations and parameter spread is improved through suitable circuit solutions and design strategy. Circuit Description The schematic diagram of the proposed GaN voltage reference circuit is illustrated in Figure 5. As in [31], a single branch is exploited to halve the current consumption with respect to the two-branch topologies and, as in [30], the reference current can be arbitrarily set by means of resistors. The topology is made up of an upper-side section (QD1, QD2, and R1) that generates the reference current and a lower-side section (QE1, QE2, R2, and R3) that acts as an active load.
From inspection of the reference current generator formed by the wide-swing structure QD1-QD2 and the current-limiting resistor R1, the current IREF (flowing from VDD to ground) is equal to VSG,D1/R1. Unlike in (2), VSG,D1 cannot be approximated by −VTH,D because the transistors will all be operated in the subthreshold. Therefore, previous analyses such as those carried out in [30] and [31] cannot be adapted to this case and new design equations must be developed, as described in Section 3.2. It is seen that the additional cascode device QD2 in the current generator does not vary the value of IREF, but it allows for decreasing the second-order dependence of IREF (and hence of VREF) on the supply voltage. Indeed, QD2 shields the drain-source voltage of QD1 from VDD variations by setting VDS,D1 constant and equal to VSG,D2, hence independent of VDD. Hence, assuming that QD1 and QD2 have the same aspect ratio and the same drain current, we have that −VGS,D1 = −VGS,D2 = VDS,D1. As a drawback, the minimum supply voltage is slightly increased by VDS,D1. The reference current is then injected into the low-impedance load made up of transistors QE1-QE2 and resistors R2-R3, which acts similarly to the previous circuit of Figure 3. It should be noted that the function of the cascode transistor QE2 is to make VDS,E1 independent of the threshold voltage variations caused by the large process spreads. Indeed, a simple evaluation shows that a global variation affecting both thresholds of QE1 and QE2 does not vary VDS,E1. On the contrary, VDS,E2 depends on VGS,E2. Analysis and Design Strategy The analysis of the proposed topology begins by evaluating the reference current. Equation (2) is rearranged here as in (10) to take into account that QD1 operates in the subthreshold; in particular, ∆VSUB is the amount of subthreshold voltage. Assuming as a design specification IREF = 2.5 µA, using minimum-size depletion transistors, and considering that in the adopted technology VTH,D = −691 mV, by setting ∆VSUB = −150 mV we have from (10) that the required value of R1 is about 340 kΩ. Once IREF is set, assuming that VGS,E1 and VGS,E2 are almost equal, we can set VDS,E1 from (7) through R3. For VDS,E1 around 300 mV, R3 = 300 mV/2.5 µA = 120 kΩ is required. Let us now consider the lower part of the circuit. The reference voltage is expressed by Equation (11). As stated before, the very small TC of VTH,D is reflected into IREF, which is almost constant with temperature. The last term of (11) is hence roughly constant with T, and consequently, the TC of VREF is dominated by the TC of VGS,E1.
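The numerical design steps above can be collected into a short script. Taking VSG,D1 as |VTH,D| plus the |ΔVSUB| margin is an assumed decomposition (a back-of-the-envelope check rather than the paper's exact Equations (10)-(11)), but it reproduces the ~340 kΩ and 120 kΩ values quoted in the text.

```python
# Design targets and technology values taken from the text
i_ref = 2.5e-6          # A, target reference current
v_th_d = -0.691         # V, depletion threshold (typical)
dv_sub = -0.150         # V, chosen subthreshold margin
v_ds_e1 = 0.300         # V, target drain-source voltage of QE1

# Source-gate voltage of QD1 in subthreshold: |V_TH,D| + |dV_SUB| (assumed decomposition)
v_sg_d1 = abs(v_th_d) + abs(dv_sub)

r1 = v_sg_d1 / i_ref    # current-setting resistor of the upper section
r3 = v_ds_e1 / i_ref    # sets V_DS,E1 of the cascoded load

print(f"V_SG,D1 ~ {v_sg_d1 * 1e3:.0f} mV")
print(f"R1 ~ {r1 / 1e3:.0f} kOhm (text: ~340 kOhm)")
print(f"R3 ~ {r3 / 1e3:.0f} kOhm (text: 120 kOhm)")
```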
Note that following an approach such as that in (3) or (4) would lead to an impractically large R2 + R3 value, 17 times R1, i.e., around 5.8 MΩ. We instead utilize (11) to compensate for the variations in VREF due to process corners. Specifically, SS, or slow-slow, models are characterized by the highest threshold magnitudes for both E- and D-type transistors, and FF, or fast-fast, models are characterized by the lowest threshold magnitudes. The large threshold spread of the adopted technology is summarized in Table 1, which shows 42% and 28% variations in VTH,E and VTH,D, respectively, at room temperature. Considering the SS corner and (10) and (11), it is seen that VTH,E (VTH,D) tends to increase (decrease) VREF. The FF corner works in the opposite manner. As a result, there is an optimal value of R2 + R3 that minimizes the effect of the process spread at a specified temperature. By considering the SS and TT corners and taking the absolute value of the threshold differences, from (10) and (11) we obtain Equation (12), which gives R2 + R3 around 555 kΩ and, consequently, R2 = 435 kΩ, since R3 is already known. Consider now the enhancement devices. The following simplified expression (Equation (13)) holds for the drain current of a generic transistor QE operating in the subthreshold, of the form ID = I0,E exp[(VGS − VTH,E + ηE VDS)/(nE VT)] (1 − exp(−VDS/VT)), where VT is the thermal voltage, nE is the subthreshold slope coefficient, ηE is the drain-induced barrier lowering (DIBL) coefficient, and I0,E is proportional to (W/L) and VT². We neglected the body effect for simplicity. In the following, VDS >> VT is always met, and therefore the factor in round brackets in (13) can also be neglected. Evaluating VGS,E1 from (13) yields Equation (14), and a similar equation holds also for VGS,E2. Now, substituting (8) in (14) and expressing VGS,E1 − VGS,E2 in terms of ln[(W/L)E2/(W/L)E1], we obtain Equation (15), which shows that the temperature coefficient can be minimized through a suitable selection of the ratio (W/L)E2/(W/L)E1; this ratio is found through computer simulation, as the temperature coefficient of VGS,E1 cannot be easily evaluated analytically.
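The subthreshold drain-current model invoked in (13) can be written out as a small helper. The functional form follows the parameter list given in the text (exponential law with a DIBL term and a (1 − exp(−VDS/VT)) factor), but the coefficient values below are illustrative assumptions, not the technology's extracted parameters.

```python
import numpy as np

def id_subthreshold(vgs, vds, vth, n=1.5, eta=0.03, i0=1e-7, vt=0.026):
    """Subthreshold drain current: exponential law with DIBL; i0 lumps (W/L) and VT^2."""
    exp_term = np.exp((vgs - vth + eta * vds) / (n * vt))
    return i0 * exp_term * (1.0 - np.exp(-vds / vt))

def vgs_from_id(i_d, vds, vth, n=1.5, eta=0.03, i0=1e-7, vt=0.026):
    """Invert the model for VGS, neglecting the bracketed factor (valid for VDS >> VT)."""
    return vth + n * vt * np.log(i_d / i0) - eta * vds

# Example: for VDS well above VT the bracketed factor is ~1 and the inversion is consistent
vgs = vgs_from_id(2.5e-6, 0.3, vth=1.0)
print(f"VGS for 2.5 uA: {vgs:.3f} V")
print(f"current check:  {id_subthreshold(vgs, 0.3, vth=1.0) * 1e6:.2f} uA")
```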
Validation Results The proposed solution in Figure 5 was simulated using Spectre and the design kit of a commercial GaN smart power technology supplied by TSMC, with Lmin equal to 1 µm and 0.55 µm for enhancement and depletion transistors, respectively. The transistor thresholds have already been given in Table 1. The aspect ratios and multiplicities of the transistors, together with the resistor dimensions and their nominal values, are summarized in Table 2. Note that the resistor values are those obtained in the previous section with only marginal fine-tuning optimizations, while the optimal (W/L)E2/(W/L)E1 was found to be 3/10 through computer simulations. Figure 6 shows the reference voltage versus supply voltage at room temperature and when using typical model parameters for both enhancement and depletion devices (TT case). An average reference voltage approximately equal to 2.685 V is found starting from the minimum supply of 3.9 V. Below 3.9 V, the circuit does not work properly because the reference voltage is not kept constant. From the inset in Figure 6, we compute the line regulation, which is found to be ∆VREF/∆VDD = (2.706 − 2.685)/(24 − 3.9) = 1.05 mV/V or, equivalently, 0.105%/V. Current IREF versus supply voltage is shown in Figure 7, under the same room-temperature and typical process conditions. IREF is roughly equal to 2.7 µA for VDD greater than 3.9 V; a current consumption independent of the supply voltage is therefore achieved. Knowing the resistor values, and from the values of VREF and IREF, we can infer from (10) that the value of VGS,E1 is 1.14 V, confirming the subthreshold operation. As already stated, GaN IC technologies suffer from large process spreads. It is therefore essential to simulate the effects of process variations together with temperature. Figure 8 illustrates the corner analysis (SS, TT, and FF cases) of the reference voltage versus temperature (from −40 °C to 190 °C) under the two extreme supply voltages, 3.9 V and 24 V. A sharp decrease in VREF is found for temperatures below 23 °C and is due to the fitting point of the device models at ambient temperature. Nevertheless, the reference voltage ranges from 2.47 V to 2.71 V, with less than ±4.5% variability over the whole temperature range and within the 3.9-24 V supply. The robustness of the solution and the viability of the design approach to counteract process, temperature, and supply variations are hence confirmed. Considering the worst-case curve (SS at 3.9 V), the TC is evaluated to be around 200 ppm/°C. It was also confirmed (not shown) that the maximum current consumption is as low as 5 µA, under the FF corner.
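The variability figures quoted for Figure 8 can be recovered from the reported extremes with simple arithmetic, as sketched below; the last step only shows what the quoted worst-case TC implies under the usual box definition, and should not be read as additional measured data.

```python
# Reference-voltage extremes across corners, temperature and supply (from the text)
v_min, v_max = 2.47, 2.71            # V
v_mean = 0.5 * (v_min + v_max)

variability_pct = 100.0 * (v_max - v_min) / (2.0 * v_mean)
print(f"overall spread: +/-{variability_pct:.1f} % about {v_mean:.2f} V")

# The quoted worst-case TC (200 ppm/degC over -40..190 degC) implies, by the box
# definition, a voltage swing along that single curve of roughly:
tc = 200e-6                           # 1/degC, relative to the mean reference voltage
delta_t = 190.0 - (-40.0)
implied_swing_mv = 1e3 * tc * v_mean * delta_t
print(f"implied swing of the worst-case curve: ~{implied_swing_mv:.0f} mV")
```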
Figure 9 shows the magnitude of the power supply rejection, or PSR, of the voltage reference under different corners and temperatures; the DC supply voltage is set to 24 V. It is seen that the worst case is given by the SS corner at low temperature, while the best case is given by the FF corner at high temperature. The PSR is better than −45 dB over the 10 MHz frequency range, indicating that a disturbance on the supply is attenuated by more than 170 times. Figure 10 depicts the layout of the proposed voltage reference generator. The occupied area is 176.6 µm × 261.8 µm. Table 3 summarizes the main performance of the proposed voltage reference together with a comparison to the previously published solutions. Another reference, [21], that was not discussed before as it is not suitable for automotive applications, is added here for the sake of completeness. It is seen that the proposed solution, together with [30] and [31], are the only solutions able to work under a 24 V supply. However, the typical current consumption of the proposed solution is the lowest, even considering the worst-case scenario (5 µA). Limited area occupation is also a key feature of the proposed solution, and the line sensitivity is also good in comparison to the state of the art. The TC achieved is similar to the values reported by the other untrimmed solutions. Conclusions The aim of this paper was to present a novel topology for a reference voltage generator amenable to GaN IC processes, together with its optimal design methodology. The solution is made up of a single branch that allows for setting the standby reference current in the microampere range through practical resistor values.
The solution is also designed to operate in a wide supply voltage range, from around 4 V to 24 V, in order to meet the requirements of consumer and automotive applications. In addition, the circuit allowed for low area occupation (0.05 mm 2 ) and low current consumption (2.7 µA). Both features are becoming more and more important targets also in the automotive sector to save space inside the vehicle and to preserve the battery autonomy. A unique characteristic of the solution related to the low current consumption was the operation of the transistors in the subthreshold regime. For this purpose, a straightforward, accurate design strategy was also developed. Indeed, after describing the topology, equations based on a subthreshold operation were formulated and component values were derived from these equations. Taking these results as the initial point, fine-tuning dimensioning was performed through computer simulations. Unlike prior art, the proposed solution was also targeted at counteracting the large parameter spreads of commercial GaN IC technologies. Indeed, it provided an average reference voltage of around 2.685 V at room temperature with ±120 mV variation (<±4.5%), against a ±42% and 28% variability of the threshold voltages of enhancement and depletion devices, respectively. The temperature coefficient of the reference voltage achieved is around 200 ppm/ • C and is similar to the values reported by the other untrimmed GaN solutions. Presently, the circuit does not cope with SF and FS corners; in this case, external trimming is mandatory. To avoid trimming, future research will focus on threshold voltage on-chip sensing and on the definition of calibration strategies to inherently reduce the temperature coefficient. Work is also ongoing to improve current drive capabilities by designing a low-dropout regulator in the same IC GaN technology. To this purpose, an operational transconductance amplifier has already been presented in [32].
v3-fos-license
2021-11-14T16:17:46.162Z
2021-11-01T00:00:00.000
244089485
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2223-7747/10/11/2430/pdf", "pdf_hash": "08d9f7f9bba2705e86186579d311e4d25865c0a3", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46280", "s2fieldsofstudy": [ "Biology" ], "sha1": "a772a1bb78e740bac58650cc7ea4730fac2d526c", "year": 2021 }
pes2o/s2orc
The Combination of Untargeted Metabolomics and Machine Learning Predicts the Biosynthesis of Phenolic Compounds in Bryophyllum Medicinal Plants (Genus Kalanchoe) Phenolic compounds constitute an important family of natural bioactive compounds responsible for the medicinal properties attributed to Bryophyllum plants (genus Kalanchoe, Crassulaceae), but their production by these medicinal plants has not been characterized to date. In this work, a combinatorial approach including plant tissue culture, untargeted metabolomics, and machine learning is proposed to unravel the critical factors behind the biosynthesis of phenolic compounds in these species. The untargeted metabolomics revealed 485 annotated compounds that were produced by three Bryophyllum species cultured in vitro in a genotype and organ-dependent manner. Neurofuzzy logic (NFL) predictive models assessed the significant influence of genotypes and organs and identified the key nutrients from culture media formulations involved in phenolic compound biosynthesis. Sulfate played a critical role in tyrosol and lignan biosynthesis, copper in phenolic acid biosynthesis, calcium in stilbene biosynthesis, and magnesium in flavanol biosynthesis. Flavonol and anthocyanin biosynthesis was not significantly affected by mineral components. As a result, a predictive biosynthetic model for all the Bryophyllum genotypes was proposed. The combination of untargeted metabolomics with machine learning provided a robust approach to achieve the phytochemical characterization of the previously unexplored species belonging to the Bryophyllum subgenus, facilitating their biotechnological exploitation as a promising source of bioactive compounds. Introduction Bryophyllum constitutes a subgenus within the Kalanchoe genus (Crassulaceae family) that contains several plant species commonly known as "Bryophyllum" in Ethnomedicine [1,2]. Accordingly, different Bryophyllum-derived formulations have been traditionally used worldwide for the treatment of diabetes, cardiovascular, and neoplastic diseases [3,4]. Bryophyllum spp. medicinal properties are a consequence of the production of phenolic compounds as recently established [4][5][6]. Thus, different extracts from Bryophyllum have been reported to exhibit valuable health-promoting properties because of the high contents of phenolic compounds, as determined through different in vitro assays. For instance, antioxidant activity was assessed in terms of free radical scavenging activity, inhibition of lipid peroxidation, and prevention of oxidative hemolysis [6,7], together with anti-inflammatory activity, antimicrobial activity towards a wide range of both bacterial and fungal strains, and anticancer activity. Indeed, the anticancer activity of Bryophyllum was determined towards a wide panel of cancer cell lines, such as MCF-7 breast adenocarcinoma, NCI-H460 non-small cell lung carcinoma, HeLa cervical carcinoma, and HepG2 hepatocellular carcinoma cell lines [6]. Consequently, due to the aforementioned health properties associated with Bryophyllum extracts, novel strategies should be proposed to ensure the large-scale production of phenolic compounds from these underexplored medicinal plants. Nevertheless, little information is available on the biosynthesis of phenolic compounds in Bryophyllum plants, making difficult their exploitation as a valuable source of bioactive compounds. 
Concerning the biosynthetic pathway of phenolic compounds, in brief, the major precursor is cinnamic acid, synthesized after the action of phenylalanine ammonia-lyase on the amino acid phenylalanine [8]. Afterwards, cinnamic acid may either be incorporated into the biosynthesis of phenolic acids (C6-C1 and C6-C3 compounds) or undergo an enzymatic transformation to produce coumaroyl-CoA that is assumed to be the common basic structure for the biosynthesis of other subfamilies, namely: lignans ((C6-C3)2), stilbenes (C6-C2-C6), and flavonoids (C6-C3-C6) [9]. Finally, these subfamilies are later subjected to condensation to give rise to polymeric phenolic compounds, as is the case for lignin and tannins [9]. In particular, phenolic acids (protocatechuic acid, caffeic acid, and ferulic acid), flavonoids (myricetin, quercetin, and kaempferol glycosides), and anthocyanins (malvidin glycosides) have been identified as the main phenolic compounds in Bryophyllum plants [5,6,10]. In this sense, to gain insight into the biosynthesis of phenolic compounds, the application of untargeted metabolomics (UM) becomes an essential high-throughput approach: it confers a fast, reliable, and detailed perspective of the metabolite pool found in plants, thus contributing to the identification of their wide array of compounds present in cells and tissues [11,12] but also facilitating the rapid industrial exploitation of those promising bioactive compounds. Additionally, the establishment of plant tissue culture (PTC) has emerged as a solid methodology to characterize phenolic compound biosynthesis. PTC constitutes a reliable and controlled biotechnological system able to achieve homogeneous and standard bioactive compound production [13][14][15]. For this purpose, the design of optimized culture media formulations is crucial. Its development must be carefully carried out to achieve a correct balance between bioactive compounds accumulation and the preservation of tissue culture integrity [15,16]. Additional factors, such as the genotype or the type of explant, also play important roles in the biosynthesis of phenolic compounds [17,18]. Thus, the great number of factors affecting this process limits the interpretation of results and the achievement of general conclusions [19][20][21]. In the last decade, the application of machine learning (ML) technology has been replacing the traditional statistical methods to easily reveal exhaustive information about multivariate processes in which occluded patterns and complex interactions occur [22][23][24]. Among the different ML algorithms available, the combination of artificial neural networks (ANNs) with fuzzy logic (neurofuzzy logic, NFL) has already been successfully applied in the field of PTC for the characterization and optimization of diverse multifactorial processes, including seed germination [25], micropropagation [26], and the identification of physiological disorders [27]. The application of ANNs provides the establishment of predictive mathematical models obtained after training the empirical data, including the independent variables or factors as inputs and the dependent variables or parameters as outputs, to predict the key factors involved in each parameter, as well as their potential interactions [28]. To enhance the interpretation of the resulting ANN model, the application of fuzzy logic facilitates this task by the formulation of 'IF-THEN' rules, which confers an understandable linguistic definition of the model results [29]. 
In this way, NFL contributes to the characterization and understanding of complex processes and, simultaneously, it may be regarded as a useful decision-making tool for optimization, as it has been previously used for maximizing the production of phenolic compounds [18]. In this work, a combinatorial approach including UM and ML is proposed in order to decipher the critical factors affecting the biosynthesis of phenolic compounds in three Bryophyllum species cultured in vitro. We hypothesize that the combination of both cuttingedge methodologies will assist in the phytochemical valorization of these unexplored medicinal plants and will confer a novel approach to contribute to their biotechnological exploitation in different sectors, including the food, cosmeceutical, and pharmaceutical industries. Furthermore, due to the vast information provided by both UM and ML, and thanks to their plasticity, their combination will be priceless to increase the knowledge of novel sources of bioactive compounds with beneficial properties on human health, from unexplored plant sources, thus conferring a multidisciplinary workflow regarding the large-scale production of those phytochemicals. The resulting profile showed a total of 485 putatively annotated compounds. The full list of annotated compounds is provided accompanied by their retention time and composite mass spectrum (Table S1). Flavonoids were the most abundant subfamily of phenolic compounds, mainly characterized by anthocyanins, flavonols, and flavones, followed by phenolic acids, low-molecular-weight phenolics, and other subfamilies. Among the flavonoids, malvidin, and pelargonidin presenting 3-O-or 3,5-O-glycoside bonds were the most abundant anthocyanins, followed by myricetin 3-O-, kaempferol 3-O-, and quercetin 3-O-glycosides as the most representative members of the flavonol subfamily, and apigenin 7-O-, apigenin 6,8-C-, luteolin 6-O-, and luteolin 6-C belonging to the flavone subfamily (Table S1). Concerning the phenolic acid subfamily, hydroxybenzoic acids and hydroxycinnamic acids were the most prevalent compounds, with protocatechuic acid 4-O-glucoside, caffeoylquinic acid mono-and di-glycoside, and cinnamic acid as the most abundant ones. In addition, a significant number of low-molecular-weight phenolics, i.e., alkylphenols, hydroxybenzaldehydes, hydroxycoumarins, hydroxyphenylpropenes, tyrosols, and other simple phenylpropanoids, were also detected (Table S1). Afterward, a semi-quantification of phenolic compounds was performed, using reference compounds for each subfamily. The results are shown in Figure 1, for both aerial parts ( Figure 1A-H) and roots ( Figure 1I-P). As it can be seen by the statistical analysis performed by factorial analysis of variance (ANOVA), all the factors tested: genotypes, plant organs, and culture media composition, influenced the accumulation of phenolic compounds, as well as their potential interactions (p < 0.001; Table S2). Due to a large number of factors and data collected, the information provided by such analysis for the identification of simple patterns and/or identification of interactions between variables was limited. Consequently, a multivariate statistical approach was carried out to determine the influence of the genotypes and the organs on the biosynthesis of phenolic compounds. 
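As a rough illustration of how a discriminant analysis of this kind, with VIP-based marker selection, can be assembled, a minimal Python sketch is shown below; it uses plain PLS-DA as a stand-in for OPLS-DA (scikit-learn has no OPLS implementation), and the data matrix, class coding, component number, and VIP threshold are illustrative assumptions rather than the study's actual SIMCA workflow.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls, X, Y):
    """Variable Importance in Projection for a fitted PLS model."""
    t = pls.transform(X)                      # latent scores (n_samples, n_components)
    w = pls.x_weights_                        # (n_features, n_components)
    q = pls.y_loadings_                       # (n_targets, n_components)
    p, a = w.shape
    # Y-variance explained by each latent component
    ssy = np.array([(t[:, i] ** 2).sum() * (q[:, i] ** 2).sum() for i in range(a)])
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * ((w_norm ** 2) @ ssy) / ssy.sum())

# Illustrative data: 42 samples x 485 annotated compounds, three genotype classes
rng = np.random.default_rng(0)
X = rng.normal(size=(42, 485))
labels = np.repeat([0, 1, 2], 14)             # BD, BH, BT (dummy-coded)
Y = np.eye(3)[labels]

pls = PLSRegression(n_components=2).fit(X, Y)
vip = vip_scores(pls, X, Y)
discriminant_markers = np.where(vip > 1)[0]   # VIP > 1 threshold, as used in the study
```

The orthogonal signal correction step that distinguishes OPLS-DA from PLS-DA is omitted here, so the resulting scores and VIP values would not reproduce the published model parameters; dedicated OPLS packages or SIMCA itself would be needed for that.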
An Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) was used, revealing the discriminant compounds between species (BD, BH, and BT) and their organs (aerial parts and roots) (Figure 2). Additionally, the Variable Importance in Projection (VIP) selection method was used to identify the VIP markers implicated in the discrimination between genotypes (Table S3) and plant organs (Table S4). The three different genotypes, BD, BH, and BT, presented a differential composition in terms of phenolic compound production (Figure 2A), as assessed by the high quality of the generated model in terms of linearity and predictability (R²Y = 0.907 and Q²Y = 0.856, respectively). Regarding the compounds with the highest contribution to the discrimination between genotypes, anthocyanins, flavonols, and phenolic acids were the most prevalent subfamilies (Figure 3A), accounting for >60% of total discriminant compounds. Anthocyanins were mainly represented by cyanidin, malvidin, and pelargonidin glycosides; flavonols were mainly represented by quercetin, kaempferol, and myricetin glycosides; and phenolic acids showed a great heterogeneity, but caffeic, ferulic, and gallic acid derivatives had a higher prevalence (Table S3). Thus, such compounds are predicted to be present in a genotype-dependent manner. Regarding the discrimination of the phenolic profiles of plant organs (Figure 2B), there was a clear differential pattern between aerial parts and roots, as reflected by the OPLS-DA model with linearity and predictability parameters of high quality (R²Y = 0.960 and Q²Y = 0.926, respectively). Similar to the genotype-based discrimination, anthocyanins, flavonols, and phenolic acids were predicted as the most relevant subfamilies contributing to such differences, together with low-molecular-weight phenolics (LMW), which together accounted for more than 70% of total annotated compounds (Figure 3B). In this case, anthocyanins were mainly represented by malvidin and cyanidin glycosides, flavonols were represented by quercetin, kaempferol, and myricetin glycosides, phenolic acids were represented by cinnamic acid derivatives, and LMW compounds were mainly represented by catechols and coumarins (Table S4). Accordingly, these compounds are suggested to present an organ-restricted distribution in Bryophyllum.
Machine Learning Prediction of the Biosynthesis of Phenolic Compounds
Once the influence of genotypes and organs on the biosynthesis of phenolic compounds was assessed, the composition of the culture media formulations was the remaining factor whose influence had to be determined, together with the interactions between these factors. For that purpose, ML modeling was carried out to identify the critical factors affecting the biosynthesis of phenolic compounds in Bryophyllum and to decipher the potential multivariate interactions that may occur. The NFL results are shown in Table 1. The model efficiently predicted seven out of the eight outputs evaluated, showing a high predictability with training set R² values > 70% (Table 1). MSE values further supported the model quality, together with the ANOVA performed, which showed no statistical differences between the experimental and predicted values (F ratio > f critical; Table 1). Only one output could not be predicted by the model: the flavone production (training set R² < 70%), probably due to the heterogeneous composition of the subfamily employed for its semi-quantification, which included flavones, flavanones, and related compounds.
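The acceptance criteria just described (training set R² above 70%, together with an MSE and ANOVA check against the experimental values) can be computed outside FormRules with a few lines. The sketch below assumes the conventional definitions of R² and MSE (the formulas used in the study are given later as Equations (1) and (2) in the Methods), and the observed/predicted values are hypothetical.

```python
import numpy as np

def training_r2_percent(y_obs, y_pred):
    """Training-set coefficient of determination, expressed as a percentage."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    return (1.0 - ss_res / ss_tot) * 100.0

def mse(y_obs, y_pred):
    """Mean square error between experimental and predicted values."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    return np.mean((y_obs - y_pred) ** 2)

# Hypothetical output (e.g., lignan content) and its NFL predictions
observed = np.array([1.2, 0.8, 2.5, 1.9, 1.4])
predicted = np.array([1.1, 0.9, 2.3, 2.0, 1.5])

r2 = training_r2_percent(observed, predicted)
accepted = 70.0 <= r2 <= 99.9      # acceptance window quoted in the Methods
print(f"R2 = {r2:.1f}%, MSE = {mse(observed, predicted):.3f}, accepted = {accepted}")
```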
To interpret how each output was affected by the predicted inputs, the model was accompanied by the generation of 'IF-THEN' rules together with their membership degree, which are shown in Table 2. The ranked values provided for the inputs are displayed in Figure S1. LMW biosynthesis was mainly predicted as a function of the interaction organ × sulfate (Table 1). However, additional submodels indicated that the interaction genotype × copper and the phosphate concentration played a secondary role in LMW biosynthesis. In fact, LMW concentration was the output with the highest number of submodels, which can be explained by the great heterogeneity of compounds that make up this subfamily: tyrosols, coumarins, and catechols (Table S1). Due to their different biosynthetic origins, it is reasonable to find many factors affecting the production of LMW compounds. According to the model, the interaction of roots with high sulfate concentrations (>1.43 mM) caused a high LMW content with the highest membership degree (Table 2; rule 8). Generally, a high LMW content was observed in both aerial parts and roots under mid- to high sulfate concentrations (>0.94 mM; Table 2; rules 3-4, 7-8). In contrast, the combination of aerial parts and low sulfate concentrations (<0.61 mM) caused a low LMW content with the highest membership degree (Table 2; rule 1). In the case of copper, high LMW content values were obtained with low concentrations in the case of BD and BT (<0.03 µM; Table 2; rules 9 and 15, respectively) and with low and mid concentrations in the case of BH (<0.08 µM; Table 2, rules 12-13). In the same way, high LMW contents were caused by low phosphate concentrations (<0.71 mM), thus suggesting an inhibitory role of phosphate (Table 2; rule 18). Phenolic acid biosynthesis was mainly predicted by the interaction genotype × copper and, secondarily, by the organ (Table 1). The rules for phenolic acid content indicated that high values were primarily due to BD and high copper concentrations (>0.08 µM; Table 2; rule 24) and, to a lower extent, aerial parts (Table 2; rule 20). In contrast, a low phenolic acid content was determined for the rest of the conditions, with the combination of BD and moderate copper concentrations (0.03-0.08 µM) being the condition showing the low value with the highest membership degree (Table 2; rule 23). In the case of lignans, only one model was generated, predicted by the interaction genotype × sulfate × organ (Table 1). In this way, high levels were only observed in the case of BH combined with high sulfate concentrations (>1.11 mM; Table 2; rules 37 and 38), but aerial parts presented the highest membership degree (0.84; Table 2; rule 37). On the contrary, a low lignan content was observed for the rest of the conditions, with the combination of BT, roots, and low sulfate concentrations (<1.11 mM) showing the highest membership degree (0.90; Table 2; rule 40). Stilbene biosynthesis showed the most complex prediction, since it was mainly predicted by the combination calcium × organ × genotype and, secondarily, by the interaction genotype × phosphate × organ (Table 1). Thus, a high stilbene content was predominantly caused by the combination of BH, aerial parts, and low calcium concentrations (<1.68 mM; Table 2; rule 44), whereas a low stilbene content was also obtained by the combination of BH and aerial parts but with a high calcium concentration (>1.68 mM; Table 2; rule 50).
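To make the structure of these linguistic rules concrete before turning to the remaining submodels, the sketch below encodes two of the LMW rules quoted above as plain Python objects. The sulfate thresholds are the ones cited in the text, while the membership values, rule numbering, and the evaluation logic are illustrative simplifications of what FormRules actually outputs.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FuzzyRule:
    """A single 'IF-THEN' rule with its membership degree (0-1)."""
    description: str
    condition: Callable[[dict], bool]
    output_level: str          # predicted level of the output, e.g. "HIGH" or "LOW"
    membership: float

# Illustrative encoding of two LMW rules described above (cf. Table 2, rules 1 and 8);
# the membership degrees are placeholders, not the published values.
rules = [
    FuzzyRule("IF organ is roots AND sulfate > 1.43 mM THEN LMW content is HIGH",
              lambda x: x["organ"] == "roots" and x["sulfate_mM"] > 1.43, "HIGH", 0.95),
    FuzzyRule("IF organ is aerial parts AND sulfate < 0.61 mM THEN LMW content is LOW",
              lambda x: x["organ"] == "aerial parts" and x["sulfate_mM"] < 0.61, "LOW", 0.95),
]

def firing_rules(sample: dict):
    """Return the rules triggered by a sample, strongest membership first."""
    return sorted((r for r in rules if r.condition(sample)),
                  key=lambda r: r.membership, reverse=True)

for rule in firing_rules({"organ": "roots", "sulfate_mM": 1.6}):
    print(rule.output_level, rule.membership, rule.description)
```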
In the second submodel, high stilbene concentrations were caused by low phosphate concentrations (<0.43 mM) in BD and BT (Table 2; rules 55-56 and 67-68, respectively), but high phosphate concentrations (>0.98 mM) were required for BH to achieve high levels (Table 2; rules 65 and 66). For the flavonol and anthocyanin contents, only one model was generated in both cases, represented by the interaction genotype × organ, independently of the culture media formulation (Table 1). In the case of flavonols, high values were only obtained in the case of aerial parts from BD (Table 2; rules 79 and 80). Similar to flavonols, a low anthocyanin content was predominantly associated with roots from BT (Table 2; rule 84). Finally, the flavanol content was mainly predicted by the interaction genotype × organ and, secondarily, by magnesium × organ (Table 1). BT was the most critical genotype associated with flavanol biosynthesis, showing the highest contribution to a high flavanol content in roots (Table 2; rule 94) and a low content in aerial parts (Table 2; rule 93). Concerning the influence of magnesium, high levels were only observed under low magnesium concentrations (<0.85 mM) in roots (Table 2; rule 86).
Proposed Mechanism of Phenolic Compound Biosynthesis of Bryophyllum Plants Cultured In Vitro
The wide variety of interactions predicted between all the factors involved in the biosynthesis of phenolic compounds in Bryophyllum makes the interpretation of the obtained results difficult. As a solution, the generation of the NFL model provided valuable knowledge on this complex process, in accordance with the ANOVA and OPLS-DA analyses performed previously, since all the outputs predicted by NFL modeling showed a significant influence of genotype, organ, and culture media composition, thus conferring strong evidence that the biosynthesis of phenolic compounds in these plants followed a genotype- and organ-dependent behavior, which was affected by mineral nutrition. Due to the complexity associated with the great number of rules given by the NFL, a graphical representation better conveys all of the integrative information obtained. Thus, a proposed biosynthetic pathway, reflecting all the factors involved in the production of phenolic compounds for each species included in this study, is shown in Figure 4. Similar patterns are shown for the biosynthesis of LMW compounds in all three species, whereas a differential behavior was observed for the rest of the phenolic subfamilies (Figure 4). In the case of phenolic acids, they mostly accumulated in aerial parts, but copper played a pivotal role depending on the species, causing a positive effect on BD, whereas it acted as an inhibitor in BH and BT. Concerning lignans and stilbenes, BH followed a differential pattern with respect to BD and BT, since lignans mainly accumulated in BH, whereas the mineral requirements for stilbene biosynthesis were contrary to those found for BD and BT. Flavonols mainly accumulated in the aerial parts of BD, together with anthocyanins, the latter being also present in the aerial parts of BH, whereas both subfamilies were present in low concentrations in BT. Finally, flavanols showed a characteristic pattern, since they mostly accumulated in the roots of BT, with magnesium playing a positive role, whereas in the case of BD, they accumulated in aerial parts and were inhibited by magnesium.
In consequence, the combination of UM with NFL emerged as a promising approach to characterize highly complex processes by providing exhaustive information that is easy to interpret. Our results clearly show that, although these three Bryophyllum species are closely related, a genotype- and organ-dependent pattern was observed for the biosynthesis of phenolic compounds in Bryophyllum cultured in vitro, depending on the composition of the culture media. Such results are the consequence of ML modeling of the experimental data (Figure 1), which on their own displayed cryptic information without clear patterns and therefore conferred scarce information, thus limiting the enormous potential offered by the phenolic profile obtained by the untargeted metabolomics approach.
Discussion
Phenolic compounds play an essential role in the therapeutic properties of Bryophyllum plants, since they have been shown to be efficient antioxidant, cytotoxic, anti-inflammatory, and antimicrobial agents [6,30]. In recent years, these metabolites have attracted much attention from a biotechnological point of view due to their pleiotropic beneficial effects on human health [31,32]. The establishment of PTC constitutes a reliable biological platform for the perpetual production of industrially important bioactive compounds, largely exploited in the field of plant biotechnology [33,34]. PTC has recently been assessed as an efficient approach for achieving the phytochemical valorization of multiple Bryophyllum species, including B. daigremontianum (BD), B. × houghtonii (BH), and B. tubiflorum (BT) [6,35]. In fact, the establishment of PTC promoted the production of several phenolic compounds, such as anthocyanins, which have not been described in plants obtained by conventional breeding [6]. In this work, the above-mentioned species were selected because of their wide application in traditional medicine for the treatment of several prevalent conditions, ranging from wound healing and cough alleviation to chronic diseases, such as diabetes and neoplastic, cardiovascular, and neurological diseases, among others [1,2]. Nonetheless, most investigations have been exclusively focused on the study of Bryophyllum pinnatum (Lam.) Oken [2], resulting in a knowledge gap regarding the phytochemical valorization of BD, BH, and BT. Moreover, to date, the study of phenolic compounds of Bryophyllum has been limited to their identification and description, with phenolic acids and flavonols as the main polyphenols found in this subgenus [5,36,37]. Little is known about the biosynthesis of phenolic compounds in Bryophyllum plants, and untargeted approaches are required for rapid and robust metabolic profiling of unexplored plant species [38]. The application of UM revealed new subfamilies of phenolic compounds in Bryophyllum plants, such as tyrosols, coumarins, catechols, lignans, stilbenes, and flavanols, together with the previously described phenolic acids, flavonols, and anthocyanins, although all of these subfamilies were already identified in elicited plant cell suspension cultures (PCSCs) from BD and BH [39]. According to our results (Figure 1), all three species have been determined to be a potent source of phenolic compounds from different subfamilies. Regarding the OPLS discriminant analyses performed, a genotype-dependent biosynthesis of phenolic compounds was revealed (Figure 2A).
Although the three species are genetically close [40], our results indicated a differential phenolic profile for each one, in agreement with previous results, where these species showed different patterns related to important physiological processes, such as organogenesis [41], mineral nutrition [35], and the production of phenolic compounds [18]. The biosynthesis of phenolic compounds also followed an organ-dependent pattern (Figure 2B): anthocyanins, flavonols, and phenolic acids were the metabolites with the highest contribution to such discrimination. This compartmentalization may be a consequence of the physiological features associated with polyphenols, since anthocyanins and flavonols usually accumulate in the aerial parts due to their roles as protectants against the oxidative burst upon environmental stresses, UV-light absorbers, and pollinator attractants [42][43][44][45]. Our results are in agreement with previous reports, which determined that flavonols and anthocyanins predominantly accumulate in the aerial parts compared to roots [6,18]. Furthermore, the existence of specialized cell types within leaf tissues devoted to the storage of anthocyanins and other flavonoids, known as idioblasts, has been described in BD and BT [36,46]. The influence of genotype and organ on the biosynthesis of phenolic compounds, together with their interaction with nutrients, was assessed by the NFL predictive model (Table 1). The accuracy of the NFL-based prediction was assessed by the coefficient of determination of the training dataset (training set R²), together with the ANOVA parameters (F ratio > f critical), as described by Shao and co-workers [47]. Due to this high predictability, the ML application enabled the identification of critical factors in the biosynthesis of each phenolic subfamily, thus conferring useful information that is easily understandable through the generation of the model rules (Table 2). Such a computer-based tool was successfully applied to predict the critical factors affecting the total phenolic and flavonoid contents of Bryophyllum cultured in vitro, revealing a significant influence of genotype and organs [16]. Concerning mineral nutrients, among the 18 different ions present in the universal Murashige and Skoog medium formulation [48], the NFL model only selected five as critical in the biosynthesis of phenolic compounds: sulfate, phosphate, calcium, magnesium, and copper (Table 1). This efficiency in the selection of mineral factors was previously demonstrated for other physiological processes, thus contributing to the optimization of PTC protocols [24,49,50]. With respect to the experimental design proposed, a reduction in both macronutrients and micronutrients from the universal Murashige and Skoog medium [48] was established (Section 4.2). Such a nutrient decrease was previously determined to exert a positive impact on the growth and multiplication of Bryophyllum cultured in vitro, motivated by their enhanced adaptation to the arid regions where they are naturalized, with poor mineral accessibility that leads to low mineral requirements [35,51,52]. Furthermore, the reduction in mineral concentrations was already reported to promote an elicitor effect on Bryophyllum cultured in vitro [6,18], thus promoting an efficient strategy to assess the viability of this biotechnological system in the production of bioactive compounds.
It must be considered that the modification of mineral concentrations may also have a significant effect on the buffer capacity, osmolarity, and related properties of the media, so a possible effect of these changes on the modulation of phenolic compound biosynthesis cannot be excluded. Among the different subfamilies of phenolic compounds obtained by UM-mediated annotation, flavonols and anthocyanins were the only subfamilies that did not show a significant dependence on the mineral composition of the media employed, being predicted as a function of the interaction of genotype and organ (Table 1). The same behavior was previously reported for hydroethanolic Bryophyllum extracts, in which a genotype-dependent content of both flavonol and anthocyanin glycosides was observed [6]. Due to the high plasticity that flavonols and anthocyanins exhibit in plant physiology depending on mineral nutrition [7], our results suggest that the biosynthesis of these phenolic compounds could be stimulated by mineral compositions other than those tested in this work for Bryophyllum spp. cultured in vitro. The NFL model predicted that the biosynthesis of LMW compounds, which mainly includes tyrosols, coumarins, and catechols, was mainly influenced by the sulfate concentration and, secondarily, by copper and phosphate. Sulfate was required in high concentrations (>0.94 mM) to promote LMW biosynthesis, probably due to its role in alleviating the autotoxicity caused by the prooxidant effects associated with the overaccumulation of tyrosols, as was demonstrated for hydroxytyrosol [53]. In addition, copper sulfate was reported as an elicitor of the biosynthesis of the coumarin scopoletin in PCSCs of Angelica archangelica [54], thus revealing the role of sulfate in plant stress tolerance [55]. The effect of copper and phosphate identified by the NFL model also agreed with previous observations: a minimal copper concentration is required for the biosynthesis of tyrosols, since this metal ion constitutes part of the active center of copper amine oxidase, which catalyzes the generation of hydroxytyrosol from dopamine [56]. In contrast, low phosphate requirements were predicted to enhance LMW biosynthesis, thus suggesting an inhibitory role of this nutrient, in agreement with the results reported for the roots of Arabidopsis thaliana, where coumarin biosynthesis is controlled by phosphate deficiency [57]. In the case of phenolic acids, the predictive model identified copper, in combination with genotype and organ, as the only nutrient involved in their biosynthesis. In this sense, phenolic acids predominantly accumulated in the aerial parts of the three Bryophyllum species, but copper was suggested to play a positive role in BD while causing an inhibitory effect on BH and BT (Figure 4). Thus, the results found for BD agree with those found for other medicinal plants, such as Catharanthus roseus [58] and Raphanus sativus [59], in which copper promoted the accumulation of phenolic acids. Such influence is driven by the copper-mediated stimulation of nitric oxide production, which acts as an inducer of phenylalanine ammonia-lyase, driving the transformation of phenylalanine into cinnamic acid [58,60]. Lignan biosynthesis was predominantly found in BH, and it was enhanced by high sulfate concentrations (>1.11 mM), according to the predictive NFL model (Figure 4). The impact of sulfate, as a sulfur-containing ion, on lignan biosynthesis was already studied in Linum album hairy roots [61].
The authors proved that sulfur-containing signaling molecules, such as hydrogen sulfide, regulate the shift between lignan and flavonoid biosynthesis. A similar shifting behavior of sulfate might also play a role as a master regulator of lignan biosynthesis in Bryophyllum plants, given the differential effects found in BH compared with its parental species, BD and BT. Stilbene biosynthesis was predicted to be mainly affected by the calcium concentration together with genotype and organ (Table 1). In this case, the genotypes showed the same pattern as for lignans, since the mineral requirements for BD and BT were the same and opposite to those predicted for BH (Figure 4). Thus, in the case of BD and BT, high calcium concentrations (>1.68 mM) were shown to enhance stilbene biosynthesis. Such an observation could be explained on the basis of the role of stilbenes as calcium complexing agents, thus providing evidence of the role of this subfamily of phenolic compounds as metal ion scavengers [62]. Moreover, this interspecific differential role of calcium in stilbene biosynthesis was reported in Vitis spp., since calcium promoted stilbene biosynthesis in PCSCs of Vitis amurensis by stimulating stilbene synthase (STS) expression via induction of calcium-dependent kinases [63], whereas calcium did not affect STS in PCSCs of Vitis vinifera [64]. Finally, flavanols constituted the last subfamily of phenolic compounds potentially affected by mineral nutrients in Bryophyllum spp., exhibiting a genotype-dependent accumulation of these compounds, as predicted by the NFL model (Table 1): high values for the flavanol content were observed mainly in the roots of BT and in the aerial parts of BD (Table 2). In addition, magnesium was found to exert a positive role in flavanol accumulation in roots, while showing an inhibitory effect in aerial parts (Figure 4). Since flavanols are the major phytoconstituents found in tea, the influence of magnesium on their production has been analyzed. The beneficial effects of magnesium on flavanol biosynthesis have been thoroughly investigated, and magnesium is considered a relevant factor [65]. Thus, the exogenous soil addition of magnesium in open field experiments promoted the production of flavanols in black tea, via amino acid transferase induction [66]. In addition, molecular studies indicated that the metal complexing properties of catechins efficiently promoted the formation of stable complexes with magnesium [67], providing evidence of the role of magnesium as a regulator of catechin biosynthesis. On the other hand, the improved flavanol biosynthesis predicted for BT roots may be supported by the observations found for Centaurea maculosa, in which (-)-catechin is present in root exudates, exerting an allelochemical effect responsible for the invasiveness of this species [44,68]. Since BT, together with other Bryophyllum species, is considered an invasive species [69], this enhanced production of flavanols by roots may contribute to such an invasive mechanism.
Greenhouse-grown plants were selected as the source of epiphyllous plantlets. Thus, plantlets from the three species were collected, subjected to surface sterilization, and transferred to in vitro conditions, following the previously described protocol [70]. After disinfection, plantlets were cultured in pairs in glass culture vessels containing 25 mL of previously autoclaved Murashige and Skoog medium [48], supplemented with 3% (w/v) sucrose and solidified with 0.8% (w/v) agar at pH = 5.8.
Then, plant cultures were randomly placed in growth chambers under a photoperiod of 16 h light and 8 h dark at 25 ± 1 °C. Periodic subcultures were performed every 12 weeks by transferring new epiphyllous plantlets to fresh culture media.
Experimental Design
An experimental design for three variables at three, two, and seven levels was established: genotype (BD, BH, and BT species), organ (aerial parts or roots), and culture media (7 formulations), resulting in a total of 42 treatments. Seven culture media formulations, derived from Murashige and Skoog medium [48], were used for the nutrition experiment (Table 3). The Murashige and Skoog-derived formulations contained a reduced content of either macronutrients (M) or micronutrients (µ) [35]. Treatments consisted of half-strength media (1/2MSM and 1/2MSµ), quarter-strength media (1/4MSM and 1/4MSµ), and eighth-strength media (1/8MSM and 1/8MSµ). Full-strength Murashige and Skoog medium was used as a control. To prevent additional interactions, EDTA-chelated iron, vitamin, and organic molecule concentrations were maintained for all the treatments as in the original formulation. All media were supplemented with 3% (w/v) sucrose and solidified with 0.8% (w/v) agar at pH = 5.8. Epiphyllous plantlets with their own root system were selected from 12-week-old Murashige and Skoog-grown plants. The growth conditions were the same as previously described. Cultures were maintained in the same culture media formulation for four successive subcultures of 12 weeks each. Plantlets were cultured in pairs, using 10 glass culture vessels per media formulation, accounting for a total of 20 replicates per treatment and species. After each subculture, plants were divided into aerial parts and roots and were separately stored at −20 °C until use.
Sample Preparation and Extraction
Collected plant materials were freeze-dried and powdered to obtain fine particles that were stored at −20 °C until extraction. Sample extraction was performed using the solvent mixture MeOH:HCOOH:H2O (80:0.1:19.99) at a final concentration of 50 mg mL−1. The mixture was homogenized with a high-speed rotor (Polytron PT 1600-E) for 2 min and centrifuged at 8000× g for 10 min at 4 °C (Eppendorf 5810R, Hamburg, Germany). Supernatants were collected and filtered through syringe filters (pore size: 0.22 µm). Finally, extracts were transferred to vials and subsequently analyzed or stored at −20 °C until use.
Phenolic Profiling Using Untargeted Metabolomics
Phenolic compounds were profiled through a UM approach based on UHPLC-QTOF/MS, as previously reported [71,72]. Briefly, reverse-phase chromatographic separation was achieved using a water-acetonitrile gradient, and compounds were then detected in SCAN mode (100-1200 m/z) at a nominal resolution of 40,000 FWHM. Quality controls were prepared by pooling each sample and were analyzed under the same chromatographic conditions, with acquisition using data-dependent tandem mass spectrometry [73]. The annotation of phenolic compounds was carried out using the Profinder B.07 software tool (Agilent Technologies), following mass (5-ppm tolerance) and retention time (0.05 min) alignment, as previously reported [71,74]. For this aim, the database exported from Phenol-Explorer 3.6 [75] was used, and annotation used the whole isotopic pattern of aligned features (namely, the monoisotopic accurate mass, isotopic ratio, and isotopic accurate spacing) [71,76].
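The abundance and frequency filters and the normalization steps described in the next paragraphs can be sketched as a small preprocessing routine. The sketch below assumes a samples x compounds intensity table, treats the signal-to-noise filter as already applied upstream by the vendor software, and includes a log2 transform only as a commonly assumed default of Mass Profiler Professional, not as a step stated in the text.

```python
import numpy as np
import pandas as pd

def preprocess(intensities: pd.DataFrame, treatment: pd.Series,
               min_frequency: float = 0.75) -> pd.DataFrame:
    """Frequency filtering, 75th-percentile normalization, and median baselining
    of a samples x compounds intensity matrix (illustrative reimplementation)."""
    # Keep compounds detected in at least 75% of the replicates of some treatment
    detected = intensities.notna() & (intensities > 0)
    keep = detected.groupby(treatment).mean().max(axis=0) >= min_frequency
    x = intensities.loc[:, keep]

    # Log2 transform (assumed), then normalize each sample to its 75th percentile
    x = np.log2(x.where(x > 0))
    x = x.sub(x.quantile(0.75, axis=1), axis=0)

    # Baseline each compound to its median across all samples
    return x.sub(x.median(axis=0), axis=1)
```

Here, `treatment` would be the 42-level factor combining genotype, organ, and medium; any sample table with a matching index would work.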
Compounds were filtered by abundance (signal-to-noise > 8) and by frequency (only the features annotated in at least 75% of the replicates within a treatment were retained). A further annotation step was then carried out in MS-DIAL 4.48 from tandem MS information, using publicly available MS/MS experimental spectra (Mass Bank of North America) and MS-Finder 3.50 for in-silico fragmentation (using Lipid Maps, FoodDB, and PlantCyc). The list of compounds annotated by MS/MS is provided in the Supplementary Materials (Table S1). Overall, compound annotation was done at Level 2 identification (putatively annotated compounds, COSMOS Metabolomics Standard Initiative) [77]. Total ion chromatograms are included in Figures S2 and S3. Finally, the identified phenolic compounds were classified into different subclasses and quantified using appropriate calibration curves for one reference standard per class. Results were expressed as equivalents of the reference compounds in mg/g of sample: cyanidin was selected for anthocyanins; catechin for flavanols; quercetin for flavonols; luteolin for flavones and other related flavonoids (flavanones and chalcones); sesamin for lignans; tyrosol for low-molecular-weight phenolics (LMW, including tyrosols, phenolic terpenes, quinones, coumarins, alkylphenols, and other phenylpropanoids); ferulic acid for phenolic acids; and resveratrol for stilbenes. Results were expressed as tyrosol equivalents (TE) for the LMW content, ferulic acid equivalents (FE) for the phenolic acid content, sesamin equivalents (SE) for the lignan content, resveratrol equivalents (RE) for the stilbene content, luteolin equivalents (LE) for the flavone content, quercetin equivalents (QE) for the flavonol content, cyanidin equivalents (CyE) for the anthocyanin content, and catechin equivalents (CaE) for the flavanol content.
Statistical Analysis
Metabolomic profiling was performed with raw data using the software Agilent Mass Profiler Professional B.12.06. The data normalization was performed as previously indicated [78]: compounds were filtered according to their abundance and frequency, normalized at the 75th percentile, and baselined to the median of all samples. Bonferroni multiple testing correction was adopted in the multivariate analyses. The obtained dataset was then exported to the software SIMCA 16 (Umetrics, Malmo, Sweden) for orthogonal projections to latent structures discriminant analysis (OPLS-DA). The cross-validation (CV) of the generated OPLS-DA model was performed using CV-ANOVA (α = 0.05), and its fitness and prediction ability were evaluated by the goodness-of-fit R²Y and goodness-of-prediction Q²Y parameters, respectively. Finally, to determine the most discriminant compounds, a variable importance in projection (VIP) analysis was performed, setting a threshold VIP score > 1. Moreover, in order to assess the influence of genotypes, organs, and culture media formulations, as well as their interactions, on the production of phenolic compounds, a factorial ANOVA was performed using the software STATISTICA v. 12 (StatSoft). The significance level was adjusted at α = 0.05.
Modeling Tools
After data collection, all experimental values were included in a database (Table S5) in which the salts from the culture media formulations were split into their constituent ions to avoid ion confounding [79]. Consequently, 18 factors were selected as inputs for modeling: genotype, organ, and 16 ion concentrations from all media formulations tested.
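As a sketch of how such a modeling database can be assembled, the 42 treatment rows can be generated by crossing genotype, organ, and medium and attaching each medium's ion profile. The ion concentrations shown here are approximate values for the full-strength and half-strength macronutrient media and are included purely for illustration; the real 16-ion table corresponds to Table S5.

```python
from itertools import product
import pandas as pd

genotypes = ["BD", "BH", "BT"]
organs = ["aerial parts", "roots"]

# Approximate ion concentrations (mM) after splitting each formulation's salts;
# only two of the seven media and eight of the sixteen ions are shown here.
ion_profiles = {
    "MS":     {"NO3": 39.4, "NH4": 20.6, "K": 20.0, "Ca": 3.0,
               "Mg": 1.5, "SO4": 1.73, "PO4": 1.25, "Cu": 1e-4},
    "1/2MSM": {"NO3": 19.7, "NH4": 10.3, "K": 10.0, "Ca": 1.5,
               "Mg": 0.75, "SO4": 0.87, "PO4": 0.62, "Cu": 1e-4},
    # ... remaining formulations (1/4MSM, 1/8MSM, 1/2MSµ, 1/4MSµ, 1/8MSµ)
}

rows = [{"genotype": g, "organ": o, "medium": m, **ion_profiles[m]}
        for g, o, m in product(genotypes, organs, ion_profiles)]
inputs = pd.DataFrame(rows)   # 3 x 2 x 7 = 42 rows once all media are included
print(inputs.head())
```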
The parameters derived from phenolic quantification, comprising eight subclasses, were selected as outputs. Data modeling was carried out using FormRules® commercial software (v. 4.03; Intelligensys Ltd., Cheshire, UK), as previously described [33]. The training parameters used for model establishment were as follows: Adaptive Spline Modeling of Data (ASMOD) was selected for parameter minimization, as it improves model accuracy by reducing its complexity [80], with a ridge regression factor of 1 × 10−6. FormRules® software includes several fitness criteria, such as Cross Validation (CV), Leave One Out Cross Validation (LOOCV), the Bayesian Information Criterion (BIC), Minimal Description Length (MDL), and Structural Risk Minimization (SRM). All were tested in this study, and the criterion that provided the best fit, with the simplest and most intelligible rules and minimum generalization error, was SRM [35]. The rest of the training parameters were: C1 = 0.884, C2 = 4.8; number of set densities: 2; set densities: 2, 3; adapt nodes: TRUE; maximum inputs per submodel: 4; maximum nodes per input: 15. Thus, the model was divided into submodels in order to achieve an easier interpretation of results by the generation of "IF-THEN" rules [50,81]. Independent models were developed for each output, and a model assessment criterion was selected to avoid data over-fitting [22,82,83]. The application of NFL confers an advantage as a knowledge-obtaining tool, since the predicted values for the inputs are expressed in words, ranging from low to high, combined with a corresponding membership degree, which takes values between 0 and 1 [29]. Furthermore, the predictive models for each output were quality-assessed in terms of the coefficient of determination of the training set (training set R²), expressed as a percentage and given by Equation (1) [47], and the mean square error (MSE), given by Equation (2):
training set R² (%) = (1 − Σ_i (y_i − ŷ_i)² / Σ_i (y_i − ȳ)²) × 100 (1)
MSE = Σ_i (y_i − ŷ_i)² / n (2)
where y_i is the experimental value from the dataset, ŷ_i is the value predicted by the model, ȳ is the mean value of the dependent variable, and n is the number of observations. Acceptable values of the training set R² range between 70% and 99.9%, since higher values indicate model overfitting [22,84]. MSE represents the random error component associated with the built model; smaller values indicate a lower incidence of random error in the model prediction [21,35]. Finally, to assess model accuracy, ANOVA was performed to check for statistical differences between the experimental and predicted data.
Conclusions
In this work, a combinatorial approach including three cutting-edge technologies, plant tissue culture, untargeted metabolomics, and machine learning, was established to gain insight into the biosynthesis of the phenolic compounds of medicinal plants that are responsible for their associated therapeutic properties. The results indicate that Bryophyllum plants can be considered a promising source of phenolic compounds, including the previously identified flavonols, phenolic acids, and anthocyanins, together with new subfamilies reported for the first time in these species: tyrosols, catechols, lignans, stilbenes, flavones, flavanones, and flavanols. The knowledge derived from this investigation contributes to the phytochemical valorization of these unexplored medicinal plants.
At the same time, it may facilitate their exploitation as a natural source of bioactive compounds, promoting the large-scale application of Bryophyllum by-products in different biotechnological sectors, with wide-ranging purposes in the food, cosmeceutical, and pharmaceutical industries. In addition, thanks to the robustness and plasticity of this multidisciplinary approach, the workflow proposed here can be applied to a plethora of poorly characterized plant species with medicinal potential, thus conferring a rapid and reliable methodology to provide insight into their biosynthetic capacity. In fact, the robustness and high performance associated with the combination of UM and ML offer numerous applications, thus opening new perspectives in the field of natural products research and facilitating the introduction of uncharacterized plant sources as efficient biofactories of health-promoting compounds of natural origin at an industrial level. The use of NFL as a predictive ML tool confers useful information about the key factors involved in complex processes, as was demonstrated here for the biosynthesis of phenolic compounds. This predicted information should be further validated in order to assess the knowledge conferred by this machine learning approach. Additionally, the application of other ML tools, such as genetic algorithms, will contribute to the computer-based optimization of such a multifactorial process. In this sense, this multidisciplinary strategy has proven extremely useful to improve the current paradigm of plant biotechnology, facilitating knowledge on the production of phenolic compounds by medicinal plants, with the ability to be easily applied to economically important sectors, as is the case for the agricultural, food, and related industries.
Supplementary Materials: Table S1: sheet 1, Dataset of annotated compounds; sheet 2, list of compounds annotated by MS/MS; Table S2: Statistical parameters associated with the factorial ANOVA performed for each subfamily involved in the semi-quantification of phenolic compounds (df, degrees of freedom; SS, sum of squares; MS, mean squares); Table S3: Metabolites with the highest contribution to the discrimination between Bryophyllum genotypes, according to the OPLS-DA predictive model, followed by the VIP selection method; metabolites are grouped into their subfamilies and are accompanied by their VIP score and standard error; Table S4: Metabolites with the highest contribution to the discrimination between organs (aerial parts and roots), according to the OPLS-DA predictive model, followed by the VIP selection method; metabolites are grouped into their subfamilies and are accompanied by their VIP score and standard error; Table S5: Dataset subjected to NFL modeling.
v3-fos-license
2021-10-31T15:07:08.411Z
2021-10-28T00:00:00.000
240269941
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2077-0383/10/21/5024/pdf", "pdf_hash": "32b9d70831a1c1408982bdb9834203a4a48d14dc", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46281", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "5631c8451708c2df0b551a9856a333dabd51d364", "year": 2021 }
pes2o/s2orc
Perirenal Adipose Tissue from Healthy Donor: Characteristics and Promise as Potential Therapeutic Cell Source
Perirenal adipose tissue, one of the fat masses surrounding the kidneys, can be obtained from healthy donors during a kidney transplant. Perirenal adipose tissue has only ever been known as a connective tissue that protects the kidneys and renal blood vessels from external physical stimulation. Yet, recently, as adipose tissue has begun to be considered an endocrine organ, perirenal adipose tissue is now regarded as having a direct effect on metabolic diseases. The characteristics of perirenal adipose tissue from a healthy donor are that: (1) There are a large number of brown adipose cells (70–80% of the total), (2) Most of the brown adipose cells are inactive in the resting cell cycle, (3) Activating factors are constant low-temperature exposure, hormones, metastasis factors, and environmental factors, (4) Anatomically, a large number of brown adipose cells are distributed close to the adrenal glands, (5) Beige cells, produced by converting white adipocytes to brown-like adipocytes, are highly active, (6) Activated cells secrete BATokines, and (7) Energy consumption efficiency is high. Despite these advantages, all of the perirenal adipose tissue from a healthy donor is incinerated as medical waste. With a view to its use, this review discusses the brown adipocytes and beige cells in perirenal adipose tissue from a healthy donor, and proposes opportunities for their clinical application.
Perirenal Adipose Tissue
There are three types of fat around the kidneys: paranephric fat, renal sinus fat, and perirenal fat. The paranephric fat is located outside the kidney membrane and is made up of white fat [1]. The renal sinus fat lies around the renal blood vessels, is found within the kidney membrane, and increases in proportion to obesity. Perirenal fat is located in the retroperitoneal cavity, and has been considered a simple connective tissue that protects the kidneys and renal blood vessels from external physical stimulation (Figure 1A) [1]. However, as adipose tissue has been recognized as an endocrine organ that secretes various adipokines and does not just serve for energy storage, perirenal adipose tissue has come to be regarded as a tissue that directly affects metabolic diseases, such as diabetes, obesity, and cardiovascular abnormalities [2]. In its role as an endocrine organ, perirenal adipose tissue contains a large number of brown adipose cells [3] and highly activated beige cells that are produced by the conversion of white adipose cells [4]. Thus, perirenal adipose tissue is considered to be a very useful cell source from a therapeutic standpoint. Nevertheless, all perirenal adipose tissue obtained from a healthy donor during a kidney transplant is incinerated as medical waste. To increase the possibility of its clinical application, this review paper discusses the characteristics and potential applications of perirenal adipose tissue.
Adipose Cell Types in Perirenal Adipose Tissue
Adipose cells that make up the perirenal adipose tissue are largely divided into white and brown cells, like other adipose tissues (Figure 1B).
White adipose cells store energy in the form of triglycerides, which are decomposed into fatty acids and glycerol during fasting. They affect the appetite and insulin sensitivity by secreting hormone-like molecules, such as leptin and adiponectin, in the same way as endocrine organs do [5]. Brown adipose cells, meanwhile, maintain the body temperature by releasing chemical energy as heat through the uncoupling protein 1 (UCP1)-mediated pathway, as a defense mechanism against low temperatures (Figure 1C) [6,7]. Histologically, adipose cells have a uniform shape divided by a thin collagen septum. In white adipose cells, the cytoplasm is pushed to the edge by the pressure of the fat drop. The nucleus, meanwhile, is small, thin, elliptical, and pushed to one side, with one big fat drop in the middle (Figure 1B(b)) [8]. Brown adipose cells are small and contain many fat droplets (Figure 1B(a)) [3]. When white adipose cells express high UCP1 and have many small fat droplets, they are called beige cells (Figure 1B(c)) [9]. Beige cells differ from brown adipose cells in terms of their origin, but have the same function of consuming energy as heat; thus, they are clinically valuable.
Benefits of Brown Adipose Tissue
The main role of brown adipose tissue is to keep the body temperature constant by generating heat; 50 g of brown adipose tissue can burn approximately 300 kcal (Figure 1C) [10]. The calorie-burning effect of brown adipose tissue can be applied to treat obesity and insulin resistance, which are metabolic diseases caused by an excessive accumulation of energy. When a brown adipose cell is activated, glucose and fatty acids are effectively removed from the blood; blood glucose is eliminated by activation of the β3-adrenergic receptor in the brown adipose cell membrane, followed by increased synthesis of glucose transporter 1 (GLUT1), a glucose transporter, by cyclic adenosine monophosphate (cAMP) in the cytoplasm [11]. Triglycerides in the plasma are removed by the activation of lipoprotein lipase and CD36 secreted by brown adipose cells [12].
Thus, the activation of brown adipose cells is effective at increasing insulin sensitivity and energy consumption, and at reducing weight. Until recently, brown adipose tissue was thought to be present only in infancy and to be nonexistent in adult humans. However, with the development of equipment to measure metabolic activity (fluorine-18-fluorodeoxyglucose positron emission tomography (18F-FDG-PET)/computed tomography (CT)), brown adipose tissue was discovered to be present in adults in thermo-sensitive tissues [13]. In particular, large quantities of brown adipose tissue were found around the kidneys, and its activity was high [14]. In our ongoing preliminary experiment, we have banked 302 perirenal adipose tissue samples; the average weight of the samples was 229.19 ± 136.53 g, and the average age of the kidney donors was 32.98 ± 9.94 years. Using 17 samples, we measured the distribution of brown fat and found it to be present in 10-60% (v/v) of the tissue. The brown fat volume showed significant individual differences.
Brown Adipose Tissue as a Heat Generator
The organelle involved in energy generation is the mitochondrion, and chemical energy and thermal energy are generated through two channels in the inner mitochondrial membrane. Protons are pumped out of the mitochondrial matrix by the electron transport chain, creating a potential difference; when protons re-enter through the ATP synthase complex, chemical energy (ATP) is produced, and when they re-enter through UCP1, thermal energy is generated by the activation of mitochondrial fatty acid oxidation (Figure 1C) [15]. Brown fat is a specialized tissue that we use to adapt to the cold. When exposed to low temperatures, catecholamine (especially norepinephrine) is secreted from the sympathetic nerve, and its receptor (the β3-adrenergic receptor) is activated. Then, UCP1 in the inner mitochondrial membrane is activated. Because we regularly experience differences in temperature, brown adipocytes, with their temperature-related genes, are constantly active, whereas beige cells derived from white adipocytes are activated only upon low-temperature exposure [16].
Brown Adipose Tissue as an Endocrine Organ
Activated brown adipocytes secrete substances through the endocrine pathway and affect other metabolic tissues (e.g., skeletal muscle) to regulate energy metabolism [4] and inflammation [17]. The substances secreted by brown adipose tissue are called brown adipose tissue (BAT) adipokines or BATokines, which are secreted by the autocrine, paracrine, peripheral, and endocrine pathways (Figure 1D) [18]. The substances for autocrine and peripheral secretion are NGF, FGF2, and VEGF-A, which are involved in brown adipocyte growth, vascularization, innervation, and blood flow processes; these substances play a role in activating brown adipocytes when exposed to the cold [18]. The substances secreted through the endocrine pathway are IGF1 and FGF21. IGF1 plays a role in reducing the concentration of glucose in the blood [19]. FGF21 is increased in the blood by the activation of brown adipocytes when exposed to low temperatures [20], is involved in white adipocyte browning [21], and regulates energy metabolism through the lipoprotein catabolism pathway [22]. We analyzed the concentrations of NGF, FGF2, VEGF-A, IGF1, and FGF21 using 10 perirenal adipose tissue samples. Using 25 g of each tissue as the starting material, a stromal vascular fraction (SVF) was obtained using a manual kit (Ustem kit, Ustem Biomedical, Seoul, Korea), according to the manufacturer's instructions.
The volume of the final product was 1 mL, and the measured concentrations were NGF 3.56 ± 0.25, FGF2 230.27 ± 167.24, VEGF-A 7.50 ± 5.95, IGF1 2830.85 ± 5201.98, and FGF21 3.36 ± 0.19 pg/mL. FGF2, VEGF-A, and IGF1 showed significant individual differences, while NGF and FGF21 were relatively uniform. Brown fat is also related to circulating exosomal miRNAs. BAT secretes exosomal microRNAs that suppress gene expression. When BAT is transplanted into mice lacking the miRNA-processing enzyme dicer, which makes microRNAs, various types of microRNAs are observed and glucose tolerance is affected [23], and miR-92 is known to be related to the glucose uptake of brown fat [24].
Developmental Characteristics and Representative Markers of BAT
Perirenal adipose cells exist as adipocytes in the prenatal stage and mature after birth; this process is called whitening [25]. This is different from the typical white adipocyte maturation seen subcutaneously; the rate of differentiation into adipocytes is faster than that seen subcutaneously [25], and the activity of brown adipocytes in the perirenal area is similar to that of typical brown adipose cells around the scapula [26]. The precursor cells of brown adipocytes are found in the embryonic mesoderm; among these precursors, cells expressing myogenic factor 5 (MYF5) give rise to brown adipocytes and myoblasts, which then differentiate into muscle or fat depending on the presence or absence of the PR/SET domain 16 (PRDM16) gene. As such, brown adipocytes and muscle have the same developmental origin and are functionally related; thus, brown adipocyte activation by exercise is possible [27]. In addition, even adipocytes that do not express MYF5 can differentiate into beige cells when UCP1 expression occurs [28]. The main marker of brown adipocytes is UCP1, which is involved in the process of heat production by the oxidation of fatty acids through uncoupling of the respiratory chain [29]. Secreted protein acidic and rich in cysteine (SPARC), also called osteonectin, is an adipokine involved in the maintenance of brown fat. Calsyntenin 3 (CLSTN3) is involved in the multilocular phenotype, with numerous small droplets, a histological characteristic of brown adipocytes. Potassium two pore domain channel subfamily K member 3 (KCNK3) has a temperature-sensitive function. Peroxisome proliferator-activated receptor-gamma coactivator-1alpha (PGC-1α) and PRDM16 are brown fat transcriptional regulators. PPARG coactivator 1 alpha (PPARGC1A) and Cbp/P300 interacting transactivator with glutamic acid [E] and aspartic acid [D] rich carboxy-terminal domain 1 (CITED1) are transcription cofactors. Retinoid X receptor gamma (RXRγ) is a differentiation factor. In addition, Ebf3, Fbxo31, Lhx8, TBX1, ELOVL3, and CIDEA are used as typical brown adipocyte markers. The human-specific brown adipocyte markers are ACOT11, PYGM, and FABP3. HMGCS2 and CKMT1A/1B show increased expression in brown adipocytes compared to white adipocytes [14,30]. Other brown/beige adipocyte and white adipocyte markers are summarized in Table 1. When UCP1 is expressed in a white adipocyte, it becomes a beige cell showing intermediate characteristics between white and brown adipocytes, and shows a temperature-sensitive phenotype in response to various stimuli, such as low temperature, drugs, or genetic factors [4].
When cells turn into beige cells, they express CD137, Tbx1, Tmem26, and Epsti1 [31], but the expression of leptin, peroxisome proliferator-activated receptor gamma (PPARγ), HOXC8, and HOXC9 is decreased [14].
Main Stimulators for Activation of Brown Adipocytes
The main stimulating factors for the activation of brown adipocytes and beige cell formation are low temperatures and drugs (Figure 1C) [32]. Cold is the most effective inducer; with either long (2 h per day for 6 weeks) or short (6 h per day for 10 days) exposure, heat production is increased and body fat is significantly reduced [33]. A known mechanism for activation is non-shivering thermogenesis. The sympathetic nervous system is stimulated by the cold to activate brown adipocytes, and the hydrolyzed triglycerides produce fatty acids, generating heat [34]. Browning of white adipocytes is induced by UCP1 activation upon exposure to low temperature [5,7]. Because glucose and fatty acids are effectively consumed to generate heat, this process is considered a potential treatment for metabolic diseases. Thus, UCP1-activating drugs are being studied [4]; Mirabegron, a β3 agonist, was originally approved as a treatment for overactive bladder, but it has been reported to increase energy consumption by activating brown adipocytes [35]. Spicy capsaicin derivatives activate temperature-related genes through the same receptor pathway involved in white adipocyte browning [36]. Liraglutide, an antidiabetic drug, acts on the glucagon-like peptide-1 (GLP-1) receptor and significantly reduces the weight of obese patients by increasing their energy consumption [37]. Chenodeoxycholic acid (CDCA), a bile acid, induces brown adipocyte activation by enhancing mitochondrial respiration [38], and activates brown adipocytes by stimulating intracellular thyroid hormones through the G protein-coupled receptor TGR5 [39]. Bone morphogenetic protein 7 (BMP7) and BMP8b are important for brown adipocyte maturation, temperature sensitivity, and browning of white adipocytes. BMP8b was found to be involved in weight loss through brown fat activation [40]. In overweight patients with type 2 diabetes, an FGF21 analog decreased plasma lipids, increased blood adiponectin levels, and significantly decreased body weight [4]. Among earlier attempts, 2,4-dinitrophenol, a mitochondrial uncoupler with an action similar to that of UCP1, was used as a weight loss drug in the 1930s, but it was discontinued because overdosing caused deaths from high fever and other adverse effects [16]. CL316,243, a β3 agonist, also failed due to differences in drug receptors and poor oral activity [34].
Other Factors for Activation of Brown Adipocytes
When exposed to cold, the browning in the perirenal adipose tissue is significantly higher in women than in men [7]. In immunohistochemical staining, 33% of perirenal adipocytes in women were UCP1-positive, but in men, only 7% were [7]. In histological comparison, smaller lipid droplets were observed in women than in men [7]. In women, the following process is more active than in men: cold-activated UCP1 expression increases heat generation in the mitochondria, which leads to increased energy consumption and, consequently, to adipose tissue loss [41]. These gender-specific physiological differences are related to sex hormones.
The related hormones are: (1) Follicular hormone estradiol (E2), a female hormone that increases the metabolic rate in the interphase cell through E2 and induces heat generation in brown fat (when the α2-adrenergic receptor, a pathway that directly affects brown fat, is activated, the adrenergic signal is suppressed [7], and E2 activates brown adipocytes through the α2-adrenergic receptor's inhibition); (2) Testosterone inhibits brown adipocytes' activity by suppressing UCP1 [42]; (3) Estrogen induces brown adipocytes' activation and white adipocytes' browning [7]; (4) Gonadotropin and the Y chromosome inhibit UCP1's expression in brown adipocytes [43]; and (5) The transcription and translation processes of UCP1 are epigenetically regulated according to the sex [44]. In adults, 70-80% of perirenal fat is composed of brown adipocytes [14], and brown adipose progenitor cells are distributed throughout the perirenal adipose tissue. While inactive brown adipocytes' distribution differs depending on the location, when closer to the adrenal glands, inactive cells are increased. The inactive cell is expressed via the SPARC gene, which is a representative gene indicating the inactive state [3]. A macrophage is a new cell-type known to mediate the browning of white adipocytes [45]; previously, it was just known as a cell that secreted catecholamine. The size of BAT is opposite to obesity and age [34], while white adipose tissue is proportional [3]. Beige-ization of white adipocytes significantly decreases after the age of 40 [46]. Transformation of White Adipocytes into Beige Cells In the resting state of the cell cycle, beige cells show gene expression similar to white adipocytes, but are stimulated by a low temperature or UCP1 expression. Beige cells consume energy similar to brown adipocytes [4]. Because of their two-sided nature, there are two hypotheses about the origin of beige cells: (1) Progenitor cell model: a beige cell is derived from a specific progenitor cell population that responds to stimuli, such as low temperatures or specific genetic regulation, and (2) Interconversion model: a beige cell comes from a mature white adipocyte and is transdifferentiated by appropriate stimulation [47]. Additionally, an ambient temperature, the genetic background, and the local location are believed to have an effect [4]. The concept of converting white adipocytes into beige cells is very useful in the therapeutic aspect to treat metabolic diseases [4]. If white adipocytes can convert to beige cells through the browning process, then histologically, a large number of small lipid droplets will be visible, and genetically, UCP1 expression can increase, becoming a cell whose purpose switches from energy storage to energy consumption. Physical exercise stimulates the central nervous system, especially specific neuronal populations such as agouti-related protein (AgRP) and proopiomelanocortin (POMC) neurons. POMC neuron activation stimulates browning, while the AgRP neuron sup-presses it [53]. Through the POMC neurons, insulin and leptin signaling are regulated. In leptin signaling, exercise stimulates JAK2 and STAT3 tyrosine phosphorylation, which transcribe anorexigenic neuropeptides. In insulin signaling, exercise enhances IRS-1/2 and Akt activation and Fox01 phosphorylation, and sequentially halts the transcription of orexigenic neuropeptides. 
The pharmacological products are PPAR-α agonist, adrenergic receptor stimulator, thyroid hormone administrator, irisin and FGF21 inducer [52], and adenylate cyclase activator (e.g., forskolin) [54]. Bioinformatics also are used to increase the pharmacologic efficiency [55]. The DNA microarray is used to quantify gene expression, RNA sequencing is used to quantify RNA expression, and chromatin immunoprecipitation with sequencing (ChIP-seq) is used to identify protein-binding sites in DNA and examine histone modifications. For example, the white adipocyte gene expression profiles of normal mice and transgenic mice overexpressing EBF2 were compared by RNA sequencing. The mice overexpressing EBF2 in white adipocytes showed a brown adipocyte genotype, and white adipocyte-specific gene expression was decreased when compared to the normal mice. Transplantation of Brown Adipocytes Transplantation of brown adipocytes into diabetic or obese mice resulted in significantly lowered blood glucose levels, systemic inflammation, and concentration of serum adipokines [56]. When brown adipocytes were transplanted into IL-6-deficient mice, the concentration of IL-6 in the body increased, and insulin sensitivity in the skeletal muscle and adipose tissue was increased. This result indicates that IL-6 was secreted from the implant, and although IL-6 is a proinflammatory cytokine, it has the effect of increasing insulin sensitivity in the skeletal muscle and adipose tissue [56]. Meanwhile, temperature-related gene expression was not changed, which means that transplantation of brown adipocytes has no sensitivities to the temperature pathway [57]. Up to now, human transplantation of brown adipocytes has not been attempted because the safety of this has not been confirmed. Renal Pathological Aspect The advantages of perirenal adipose tissue, as described above, are limited to healthy donor tissue. Because perirenal adipose tissue is anatomically in direct contact with the kidneys and adrenal glands, when the physical size increases due to obesity or other problems, this can lead to various pathological abnormalities [58]. The size increase of the perirenal adipose tissue means an increase of white adipocytes that (1) secrete inflammatory adipokines, (2) increase the free fatty acids, glucose, triglycerides, and uric acid, (3) decrease the blood flow in the renal artery and renal parenchyma, (4) decrease the glomerular filtration rate, (5) increase the sodium reabsorption, and (6) stimulate renin secretion, which causes acute/chronic renal failure [59]. In addition, adipose afferent reflex, renin-angiotensin-aldosterone system activation, and adipokine/cytokine elevation are associated with hypertension, cardiovascular disease [60], atherosclerosis [61], and insulin resistance [62]. Also, dormant brown adipocyte activation and proinflammatory cytokine synthesis are associated with tumor progression. Therefore, it is necessary to consider the pathological risk of perirenal adipose tissue when obtained from an unhealthy donor. Conclusions The perirenal adipose tissue contains a large number of brown adipocytes and there is high conversion efficiency of beige cells from white adipocytes. Technically, we have identified the stimulating factors for inactive brown adipocytes, and browning factors have also been also identified. 
This research has shown that adipocytes of perirenal adipose tissue obtained from healthy donors represent an effective human cell source for treating metabolic diseases through increased energy consumption, rather than a tissue to be incinerated as medical waste. The main advantages of perirenal adipose tissue are summarized in Table 2, in comparison with subcutaneous adipose tissue.
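To make the earlier statement about inter-individual variability of the secreted factors (reported as mean ± SD in pg/mL) concrete, here is a minimal Python sketch using only the values quoted in the text; it computes the coefficient of variation (CV = SD/mean) for each factor. The 0.5 cut-off used to label a factor as highly variable is an arbitrary illustration, not a threshold from the study.

```python
# Coefficient of variation (CV = SD / mean) for the secreted factors reported
# in the text (pg/mL, mean +/- SD); the numbers are copied from the paper.
factors = {
    "NGF":    (3.56, 0.25),
    "FGF2":   (230.27, 167.24),
    "VEGF-A": (7.50, 5.95),
    "IGF1":   (2830.85, 5201.98),
    "FGF21":  (3.36, 0.19),
}

for name, (mean, sd) in factors.items():
    cv = sd / mean
    label = "high inter-individual variability" if cv > 0.5 else "relatively even"
    print(f"{name:7s} CV = {cv:5.2f}  ({label})")
```

Run as-is, the sketch reproduces the qualitative pattern described above: FGF2, VEGF-A and IGF1 show CVs of roughly 0.7 to 1.8, while NGF and FGF21 stay below 0.1.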
Supplemental material for the paper : Noise analysis of genome-scale protein synthesis using a discrete computational model of translation aLaboratory of Computational Systems Biotechnology, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland bSwiss Institute of Bioinformatics (SIB), CH-1015 Lausanne, Switzerland cCurrent address: Computational Cancer Biology Lab, Ludwig Center for Cancer Research, University of Lausanne, CH-1066 Epalinges, Switzerland dCurrent address: Via Loreto 24, CH-6900 Lugano, Switzerland FIGURE S2 Runtime of the simulations. (a) Comparison of the runtime using Gillespie's optimized direct method or our stochastic translation algorithmthese simulations were implemented in MATLAB.Five simulations with randomly selected genes were performed for each value of total mRNA copy number.(b) The runtime with our stochastic translation algorithm, for simulations optimized in C++, scales only linearly with the total number of mRNA copies present.(The coefficient of determination R 2 of the linear fitting is 0.97).The simulations were performed on Mac Pro computer, with a 2 x 2.93 GHz Quad-Core Intel Xeon processor, on a C++ implementation of the algorithm that was not parallelized.The final time in these simulations was taken as 1000 seconds. FIGURE S3 Synthesis rate profiles for suboptimal conditions.Similar profiles to Fig. 3 are shown but for conditions of initiation/termination rate constants that are suboptimal.See Fig. 3 for a description of the legend.Parameter values used for the simulations for these profiles are given in Table S1. FIGURE S4 Estimating protein abundances and noise on the protein abundance.The simulations are performed as described in Appendix A, and assuming the translation profiles of the genes are the optimal ones (Fig. 3). (a-b) Protein abundance (a) and coefficient of variation (b) versus the ribosomal density of the mRNA, for a protein with a half-life of 20 minutes. (c) Coefficient of variation for proteins with different half-lives, assuming their mRNA is translated at a ribosomal density of 0.2. FIGURE S5 Evolution of the instantaneous ribosomal density, with parameters giving maximal synthesis rate (blue curve, r = 0.76 ) or suboptimal parameters (green curve, r = 0.69 ). Figure S6 Comparing the results with a background pool of genes or without.(a) Distribution of the number of free ribosomes during the evolution of the two background pools of genes.(b) Protein synthesis rates when the "marker" gene is isolated or competing with a background pool of genes using two different backgrounds; results are showed for various mean ribosomal densities of the "marker" gene (indicated on the x-axis).Note that for the simulations with the marker gene observed in isolation, a constant number of 17670 free ribosomes was used. FIGURE S7 Probability distribution of the specific synthesis rates at various densities and resulting distribution after a change of 50% of the given input parameter (given in the legend at the right of each row).The mean densities for the unperturbed cases are given at the top of each column. FIGURE S8 Probability distribution of the ribosomal densities at various mean densities and resulting distribution after a change of 50% of the given input parameter (given in the legend at the right of each row).The mean densities for the unperturbed cases are given at the top of each column. 
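The runtime comparison above contrasts Gillespie's optimized direct method with the authors' dedicated stochastic translation algorithm. The paper's own algorithm is not reproduced here; instead, the following is a minimal Python sketch of a plain Gillespie direct-method simulation of translation on a single mRNA (initiation, codon-by-codon elongation with ribosome exclusion, and termination). All rate constants, the mRNA length and the ribosome footprint are made-up illustrative values; the sketch only shows the class of model being benchmarked.

```python
import random

def simulate_translation(n_codons=100, footprint=10, kI=0.1, kE=10.0, kT=10.0,
                         t_end=1000.0, seed=1):
    """Minimal Gillespie (direct-method) simulation of translation on one mRNA.

    Each ribosome is represented by the codon index it occupies; a new ribosome
    may initiate only if the first `footprint` codons are free, and a ribosome
    may step forward only if the next ribosome is more than `footprint` codons
    ahead. Returns the times at which full proteins were released.
    """
    rng = random.Random(seed)
    ribosomes = []            # positions sorted in increasing codon order
    t, protein_times = 0.0, []

    while t < t_end:
        events = []           # list of (rate, action) tuples

        # Initiation is blocked while a ribosome sits on the first codons.
        if not ribosomes or ribosomes[0] > footprint:
            events.append((kI, ("init",)))

        for i, pos in enumerate(ribosomes):
            if pos == n_codons:                       # at the stop codon
                events.append((kT, ("term", i)))
            else:
                ahead = ribosomes[i + 1] if i + 1 < len(ribosomes) else n_codons + footprint + 1
                if ahead - pos > footprint:           # room to move forward
                    events.append((kE, ("step", i)))

        total = sum(rate for rate, _ in events)
        if total == 0.0:
            break
        t += rng.expovariate(total)                   # exponential waiting time

        # Choose one event with probability proportional to its rate.
        r, acc = rng.random() * total, 0.0
        for rate, action in events:
            acc += rate
            if r <= acc:
                break

        if action[0] == "init":
            ribosomes.insert(0, 1)
        elif action[0] == "step":
            ribosomes[action[1]] += 1
        else:                                         # termination
            ribosomes.pop(action[1])
            protein_times.append(t)

    return protein_times

proteins = simulate_translation()
print(f"{len(proteins)} proteins synthesized in 1000 s "
      f"(mean rate {len(proteins) / 1000.0:.3f} proteins/s)")
```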
FIGURE S12 Allowing for ribosomes unbinding from initiation site.The rates of translation initiation and reverse-initiation were simultaneously varied in order to keep the same average protein synthesis rates.The corresponding mean ribosomal densities and synthesis rates for 4 different cases are indicated on the figure.The left-most values of kI for each "line" denotes the minimal kI value needed to reach the given synthesis rate and ribosomal density (i.e. when k-I = 0).(a) Coefficient of variation on the rate for all ribosome binding events.(b) Coefficient of variation on the rate for the ribosome binding events that are followed by translation elongation (i.e. in (a) all events of initiation are recorded, even those that are followed by the ribosome unbinding from initiation site, while in (b) only the events of initiation that are followed by translation and protein synthesis are recorded).(c) Coefficient of variation on the rate of protein synthesis.(d) Mean initiation delay, i.e. delay during which the translation initiation site is occupied by a ribosome before this ribosome translated the first L codons of the mRNA, allowing for a new ribosome to bind (the delay reported here only accounts for the ribosomes that perform a full protein translation).(e) Mean translation delay, time needed by a ribosome to fully translate the protein, between the translation initiation event and translation termination.Table S1: Parameter values used for the main simulations without a background pool of genes in the case of optimal or suboptimal synthesis profiles.In these simulations the total number of free ribosomes was kept constant.tend describes the time until which the simulations where performed to compute the statistics of protein synthesis.See Table 1 and method section for the parameters definition.As the system is characterized by a single steady state at each parameter set, and the recording starts after the steady state was reached, each state is simulated with a single simulation with a late end-time (2  10 6 s) which is equivalent to doing for example 1000 repetitions of the simulations during 2  10 3 s. FIGURE FIGURE S9 Probability density value of the protein synthesis rate (a) and ribosomal densities (b) for various sets of parameters and after different changes on the given input parameters.The input parameters (initiation (1 st column), elongation (2 nd column) and termination (3 rd column) rate constants: kI, kE, kT) were varied one at a time by various amounts (±10, 50 and 90% with respect to the original values), and the resulting pdf value for the synthesis rate (a) and ribosomal densities (b) are presented.This was repeated for multiple sets of input parameters that gave rise to different mean ribosomal densities (the various rows of subfigures; the value of ribosomal density given on the left correspond to the mean ribosomal density with the original parameter value sets). FIGURE FIGURE S13Probability density functions of the instantaneous specific synthesis rates for our full model (model 1) and for three simpler models.Showing these distributions at various mean ribosomal densities as indicated in the titles.
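Several of the panels described above report the coefficient of variation (CV) of the protein synthesis rate. As a self-contained illustration of how such a CV can be estimated from a list of synthesis event times, the sketch below counts events in fixed time windows and takes SD/mean of the windowed rates. The event times here are drawn from a simple memoryless process with an assumed mean rate of 0.2 proteins/s, not from the paper's translation model.

```python
import random
from statistics import mean, stdev

def synthesis_rate_cv(event_times, t_end, window):
    """CV of the instantaneous synthesis rate, estimated by counting
    synthesis events in fixed, non-overlapping time windows."""
    n_windows = int(t_end / window)
    counts = [0] * n_windows
    for t in event_times:
        if t < t_end:
            counts[min(int(t / window), n_windows - 1)] += 1
    rates = [c / window for c in counts]
    m = mean(rates)
    return stdev(rates) / m if m > 0 else float("nan")

# Synthetic event times from a memoryless (Poisson-like) process with an
# assumed mean synthesis rate of 0.2 proteins/s; in the paper these times
# would instead come from the full stochastic translation model.
rng = random.Random(42)
t, t_end, events = 0.0, 2.0e4, []
while t < t_end:
    t += rng.expovariate(0.2)
    events.append(t)

print(f"CV of synthesis rate (50 s windows): "
      f"{synthesis_rate_cv(events, t_end, 50.0):.2f}")
```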
Investigative Study on the Use of De-Oiled Palm Kernel Cake for Biogas Production : Availability of Palm Kernel Cake (PKC) has increased due to the increase in the number of cottage oil palm processing industries in developing countries. A quest for clean energy from bio-waste is also on the increase. This study aims at investigating the biogas generating capacity of de-oiled PKC and its corresponding methane content. De-oiled PKC and a mixture of de-oiled PKC and fresh sugar cane chips were used as the two bio-feed samples in a laboratory anaerobic digestion set up. A theoretical approach was also used to determine the expected methane content in the biogas. Laboratory results for de-oiled PKC gave the volume by weight of bio-feed for biogas and methane to be 12.7 ml/g and 4.2 ml/g respectively and that of the combination of de-oiled PKC and fresh sugar cane chips to be 3.15 ml/g and 1.25 ml/g respectively. The measured methane composition for de-oiled PKC and that of the combination of de-oiled PKC and fresh sugar cane chips to be 33% and 40% respectively while the theoretical estimates were 33.5% and 41.1% respectively. The study shows that de-oiled palm kernel cake has biogas/methane generation potential whose quality can be improved by the addition of other biogas producing wastes. INTRODUCTION Biogas is a mixture of gases mainly carbon dioxide and methane that results from anaerobic fermentation of organic matter by bacteria (GreenLearning Canada Foundation, 2017). Fig. 1 shows the schematic of the process. Biogas is mainly composed of 50 to 70 percent methane, 30 to 40 percent carbon dioxide (CO 2 ) and low amount of other gases (Hydrogen 5-10%, Nitrogen 1-2%, Water vapour 0.3%, and traces of hydrogen sulphide). Its calorific value is 20 MJ/m 3 and burns with 60% efficiency in a conventional biogas stove (Regattieri et al, 2018). Biogas and electricity are generated from effluent management, and several biochemicals such as ethanol, fatty acids, waxes and others which could be obtained through application of biotechnology. Conversion to energy is a good means of obtaining carbon credit facility for sustainable management (Sridhar and AdeOluwa 2009). Closed tank digester system with biogas capture and utilisation can contribute to the sustainable development rather than open air disposal of palm oil industry wastes. This method has been developed for treating palm oil mill effluents (Understanding Energy, 2017). The biogas generated is captured and directed to flaring or used as boiler fuel or for power generation. For the controlled anaerobic tank digester method with mixing, the gross treatment efficiency has been estimated to be in the range of 90-95% in terms of BOD removal. COD treatment efficiency is experienced in the range of 80-90%. Methane content in the biogas generated has been reported in the range of 54-70% with an average of 64%. The major part of the balance of the biogas is CO 2 (36%) with traces of hydrogen sulphide (Understanding Energy, 2017). Oil palm industry generates a large quantity of residues and wastes in the form of empty fruit bunch, palm kernel shells, trunk of the plant, fibre, leaves and others. Empty fruit bunches and palm kernel shells have been successfully converted into compost and were useful in developing oil palm nurseries and other food crops. The various uses of the by-products and waste from palm oil mills are shown in Figure 2. Wastewater generated through typical palm oil processing averages 0.5 m³/ton of fresh fruit bunches (FFB). 
Some palm oil mills process about 450,000 tons of FFB, and thus 210,000 m³ of wastewater, per year. For such a company, a digester volume of 6,000 m³ will allow handling the daily load of about 700 m³ of wastewater (Understanding Energy, 2017). Palm kernel cake (PKC) is a byproduct of oil extraction from the palm kernel nut and is abundant in the tropical areas of the world (Rhule, 1996). In recent times, however, there has been an increase in the number of cottage oil palm processing industries in developing countries like Nigeria, due to a ban on the importation of vegetable oil, which has resulted in abundant availability of PKC. The PKC so obtained varies considerably in chemical composition (protein, fibre or lipids), depending on the source, the extent and methodology of oil removal and the proportion of endocarp remaining (Rhule, 1996; Adesehinwa, 2007). Presently, PKC is used as a filler in animal feeds, used for landfilling or burnt directly as fuel. From the chemical composition of PKC, Adesehinwa (2007) showed that PKC is a potential energy source. This study therefore aims at investigating biogas production potential as an alternative use for PKC. Chaikitkaewa et al. (2015) evaluated three biomass residues from a palm oil mill plant, including empty fruit bunches (EFB), palm press fiber (PPF) and decanter cake (DC), for methane production by solid state anaerobic digestion at 25% total solids content. The highest cumulative methane production of 2180 mL CH4 was obtained from EFB, followed by PPF (1964 mL CH4) and DC (1827 mL CH4). Methane production from EFB, PPF and DC by solid state anaerobic digestion was 55, 47 and 41 m³ CH4/ton, respectively, which suggested that decanter cake could be a promising substrate for methane production by solid state anaerobic digestion. This paper seeks to investigate the biogas potential after palm kernel oil has been extracted from palm kernel cake, with a focus on its quality in terms of the percentage methane content. A. Theoretical Calculation of Biogas Composition Based on the chemical formula of a substrate, Buswell devised an equation to predict the theoretical yield of component products from bio-digestion. Buswell's equation is given as CaHbOcNd + (a − b/4 − c/2 + 3d/4) H2O → (a/2 + b/8 − c/4 − 3d/8) CH4 + (a/2 − b/8 + c/4 + 3d/8) CO2 + d NH3, where a, b, c and d are the moles of carbon, hydrogen, oxygen and nitrogen, respectively, present in the waste sample. The simplification of the Buswell equation and Boyle's law to give the contribution of some chemical components in waste samples is given in Table 1 (Renewable Energy Concepts, 2018; Czepuck et al). These estimates were used to determine the theoretical gas composition of the waste samples. B. Experimental Procedure using de-oiled palm kernel cake Two units labelled A and C (Figures 3 and 5) were set up simultaneously for the anaerobic digestion process, and this was repeated twice (Experiments 1 and 2). In each set-up, 50 g of de-oiled palm-kernel cake was measured into conical flasks (which served as mini-digesters). The wastes in the conical flasks were diluted with 500 ml of water to make a total solids value (TS) of 10% and were thoroughly stirred. The conical flasks were sealed at the top to prevent the escape of gas, while a hose was fixed at the side opening. The hose serves as a passage for the gas to the point of collection. C. Experimental Procedure using de-oiled palm kernel cake and sugar cane Two units labelled B and D (Figures 4 and 6) were set up simultaneously for the anaerobic digestion of the mixture, and this was repeated twice (Experiments 1 and 2). A mix of 50 g of de-oiled palm-kernel cake and 50 g of fresh sugar cane chips was measured into conical flasks (which served as mini-digesters).
The wastes in the conical flasks were diluted with 500 ml of water and were thoroughly stirred. The conical flasks were sealed at the top to prevent the escape of gas while a hose was fixed at the side opening. The hose serves as a passage for the gas to the point of collection. D. Volume Measurement In all the experiments, the volume of gas produced was measured through the displacement method. In set-ups A and B, the gas was collected directly while in set-up C and D the gas collected was passed through Calcium Hydroxide, Ca(OH) 2 to remove the CO 2 present. This leaves only the methane gas as the gas collected (Mel et al, 2014). The ambient temperature was measured and recorded daily using a thermometer. The data obtained from the volume measurement was recorded and compared to determine the amount of biogas that can be recovered from the amount of waste used for the experiment and the total energy potential of the waste samples. A. Theoretical Gas Estimates Applying the composition percentage to de-oiled palm kernel cake gives the results in Table 2 while its result from the application to the mixture of de-oiled palm kernel cake and fresh sugar cane chips is shown in Table 3. De-oiled palm kernel cake has a volatile solids content of 76.6% weight by weight of solids. Therefore, the expected theoretical methane content of de-oiled palm kernel cake is given by 0.766 x 43.75 which is equal to 33.51%. Fresh sugar cane chips have a volatile solids content of 97.7% weight by weight of solids. Therefore, the expected theoretical methane content of fresh sugar cane chips is given by 0.977 x 49.868 which is equal to 48.72%. Expected theoretical methane content from mixture of de-oiled palm kernel cake and fresh sugar cane chips is a 50% contribution by both materials since an equal weight of 50g each was used. The value is given by 0.5(33.51 + 48.72) which is equal to 41.12%. B. The Anaerobic Digestion Process The cumulative biogas volume measured with time from the anaerobic digestion of de-oiled palm kernel cake and fresh sugar cane chips is illustrated in Fig. 7. It was observed that there were traces of biogas production within 24 h. The average hourly production was 2.5 ml with peak production indicated at about 140 h for de-oiled palm kernel cake and 90 h for de-oiled palm kernel cake with fresh sugar cane chips. The production then began to decline with some negative values of gas volumes recorded. The negative values indicate that some other gases present within the biogas had dissolved in the water used to take measurement (Mel et al, 2014). For the weight of material used, a detention time of about 260 h C. Methane Measured in the Laboratory Methane content within the biogas was noticed within 24 h. The cumulative volume measured with time is shown in Figure 8. From the results, it was observed that the volume of methane was considerable low when compared with the volume of biogas produced by the same sample. CO 2 removal from biogas is mandatory to meet the specifications of a natural gas grid since CO 2 reduces the heating values of natural gas. The percentage of methane present in biogas as against that of CO 2 determines the quality of the biogas. The comparison of the total volume and percentage of biogas present in the waste samples is shown in Table 4. 
It was observed that the waste sample containing only de-oiled palm kernel cake waste produced an average biogas and methane of 12.7 ml/g and 4.2 ml/g of waste respectively while the sample containing fresh sugar cane chips produced an average of 3.15 ml/g and 1.25 ml/g respectively. The methane yield from the sample containing de-oiled palm kernel cake and fresh sugar cane chips was higher than that of de-oiled palm kernel cake only. In terms of methane content in the measured biogas, the methane content from the deoiled palm kernel cake is 33% while that of de-oiled palm kernel cake and sugarcane is 40%. DPKC: De-oiled Palm Kernel Cake, FSCC: Fresh Sugar Cane Chips Good biogas quality is expected to have a methane content of at least 50%. The sugar cane content added resulted in an increase in the methane content indicating that de-oiled palm kernel cake has a biogas generation potential which can be boosted by the addition of other products such as sugar cane. D. Comparison of Laboratory and Theoretical Values The laboratory measured methane content in de-oiled palm kernel cake was 33% which is 1.5% lower than the theoretical value of 33.5%, while the methane content of the sample with de-oiled palm kernel cake and fresh sugar cane chips is 40%, which is 2.68% lower than the theoretical value of 41.1%. Variations between experimental and theoretical values have been noticed by other researchers, stating that the theoretical value is an indication of the maximum amount of methane that can be produced by the sample under ideal conditions. Factors responsible for the variations include Temperature, Heat, Mixing, Carbon/Nitrogen ratio, Volatile solids content and Lipid content in waste sample (Czepuck et al, 2006;Chaikitkaewa et al, 2015). IV CONCLUSION The results of the test digesters used for the laboratory research showed that de-oiled Palm kernel cake after going through anaerobic digestion produced biogas. Addition of other types of waste such as fresh sugar cane chips increased the amount of methane present in the biogas and therefore its quality. A comparison of the percentage of methane in the biogas from theoretical calculation and the laboratory showed a variation of less than 3%. It can therefore be concluded that de-oiled palm kernel cake has biogas/methane generation potential whose quality can be improved by the addition of other biogas producing wastes.
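A minimal numerical companion to the theoretical-estimate chain described above: the Buswell equation gives a CH4 fraction from the elemental composition, this percentage is scaled by the volatile-solids content, and the value for the mixture is a 50:50 average. The glucose-like formula in the sanity check is a placeholder rather than a measured PKC composition; the 43.75, 49.868, 0.766 and 0.977 figures are the values quoted in the text.

```python
def buswell_ch4_fraction(a, b, c, d=0.0):
    """Molar CH4 fraction of the biogas (CH4 + CO2) predicted by the
    Buswell equation for a substrate of composition C_a H_b O_c N_d."""
    ch4 = a / 2 + b / 8 - c / 4 - 3 * d / 8
    co2 = a / 2 - b / 8 + c / 4 + 3 * d / 8
    return ch4 / (ch4 + co2)

# Sanity check with a placeholder composition: a glucose-like carbohydrate
# (C6H12O6) gives the familiar 50:50 CH4/CO2 split.
print(f"C6H12O6 -> CH4 fraction = {buswell_ch4_fraction(6, 12, 6):.2f}")

# Expected methane content of each substrate = Buswell-type CH4 percentage
# (Table 1 contributions, as quoted in the text) scaled by the volatile-solids
# fraction, then a 50:50 mass-weighted mixture.
pkc = 43.75 * 0.766           # de-oiled palm kernel cake -> 33.51 %
sugar_cane = 49.868 * 0.977   # fresh sugar cane chips    -> 48.72 %
mixture = 0.5 * (pkc + sugar_cane)

print(f"de-oiled PKC: {pkc:.2f} %  sugar cane: {sugar_cane:.2f} %  "
      f"50:50 mixture: {mixture:.2f} %")
```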
Chemical Bonding: The Orthogonal Valence-Bond View Chemical bonding is the stabilization of a molecular system by charge- and spin-reorganization processes in chemical reactions. These processes are said to be local, because the number of atoms involved is very small. With multi-configurational self-consistent field (MCSCF) wave functions, these processes can be calculated, but the local information is hidden by the delocalized molecular orbitals (MO) used to construct the wave functions. The transformation of such wave functions into valence bond (VB) wave functions, which are based on localized orbitals, reveals the hidden information; this transformation is called a VB reading of MCSCF wave functions. The two-electron VB wave functions describing the Lewis electron pair that connects two atoms are frequently called covalent or neutral, suggesting that these wave functions describe an electronic situation where two electrons are never located at the same atom; such electronic situations and the wave functions describing them are called ionic. When the distance between two atoms decreases, however, every covalent VB wave function composed of non-orthogonal atomic orbitals changes its character from neutral to ionic. However, this change in the character of conventional VB wave functions is hidden by its mathematical form. Orthogonal VB wave functions composed of orthonormalized orbitals never change their character. When localized fragment orbitals are used instead of atomic orbitals, one can decide which local information is revealed and which remains hidden. In this paper, we analyze four chemical reactions by transforming the MCSCF wave functions into orthogonal VB wave functions; we show how the reactions are influenced by changing the atoms involved or by changing their local symmetry. Using orthogonal instead of non-orthogonal orbitals is not just a technical issue; it also changes the interpretation, revealing the properties of wave functions that remain otherwise undetected. Abstract: Chemical bonding is the stabilization of a molecular system by charge-and spin-reorganization processes in chemical reactions. These processes are said to be local, because the number of atoms involved is very small. With multi-configurational self-consistent field (MCSCF) wave functions, these processes can be calculated, but the local information is hidden by the delocalized molecular orbitals (MO) used to construct the wave functions. The transformation of such wave functions into valence bond (VB) wave functions, which are based on localized orbitals, reveals the hidden information; this transformation is called a VB reading of MCSCF wave functions. The two-electron VB wave functions describing the Lewis electron pair that connects two atoms are frequently called covalent or neutral, suggesting that these wave functions describe an electronic situation where two electrons are never located at the same atom; such electronic situations and the wave functions describing them are called ionic. When the distance between two atoms decreases, however, every covalent VB wave function composed of non-orthogonal atomic orbitals changes its character from neutral to ionic. However, this change in the character of conventional VB wave functions is hidden by its mathematical form. Orthogonal VB wave functions composed of orthonormalized orbitals never change their character. 
When localized fragment orbitals are used instead of atomic orbitals, one can decide which local information is revealed and which remains hidden. In this paper, we analyze four chemical reactions by transforming the MCSCF wave functions into orthogonal VB wave functions; we show how the reactions are influenced by changing the atoms involved or by changing their local symmetry. Using orthogonal instead of non-orthogonal orbitals is not just a technical issue; it also changes the interpretation, revealing the properties of wave functions that remain otherwise undetected. Bonding and Bonds In a composite system composed of two or more subsystems, the subsystems do not initially interact if they are spatially well separated. In a chemical context, the system is often a molecule, and the subsystems will be called fragments. If the distance between the subsystems is reduced, then interactions between the subsystems can be recognized; e.g., by the forces attracting or repelling the subsystems. In the majority of cases, attraction of the subsystems is accompanied by the release of energy in the form of heat; when the subsystems repel each other, the system has to consume energy to overcome the repulsion. The system's state of minimum energy is called the equilibrium state; it is well characterized by properties, such as equilibrium geometry, equilibrium dipole moment, etc. Both stabilization and destabilization of the composite system are defined relative to the state of non-interacting subsystems, which one might call the canonical reference state. We say that the attractive interaction between the subsystems stabilizes the system and that a repulsive interaction destabilizes the system. In some scientific disciplines, like chemistry, the process of stabilizing a composed system is called bonding, sometimes also binding. The stabilization energy, or bonding energy, is the best system property for creating a scale for the degree of stabilization; with such a scale, one can classify the stabilization as strong or weak and say that the strength of bonding is high or low, respectively. The initial system, composed of non-interacting subsystems, is called non-bonded, while the stabilized system is called bonded. Frequently, one speaks about the bonded system as if something causing the stabilization had been added to, or had appeared in, the system. This something is mostly called a bond. When a system changes from non-bonded to bonded, not only the energy, but many more system properties will also change, and many of these changes can be experimentally monitored. Released energy can be measured; changes in the spatial distances can be detected with spectroscopic methods; while bonding in macroscopic systems can change material properties, such as ductility, elastic stiffness, plasticity, strain, strength, toughness, viscosity, and many more. Changes in system properties are always reported relative to a reference state, which need not always be the canonical one; different reference states can be defined with varying degrees of physical plausibility. For example, if the increase in electron density between two atoms is used as the measure of bonding in a molecule and if the reference state is the canonical one, the change in the electron density is conceptually plausible: During the approach of the atoms, the atom densities change gradually due to mutual perturbation until, at the equilibrium geometry, the final molecular electron density is found. 
However, how should one compare electron densities at different system geometries? A possible, but physically implausible, reference state is created by summing the electron densities of non-interacting (free) atoms, each at its own position in the system's equilibrium geometry. We know that when two atoms approach each other, the electron densities gradually change due to contraction and polarization, but the implausible reference state is constructed as if the completely separated atoms can be brought to the equilibrium distance, where they instantly change from the electron densities of the free atoms to the molecular electron density. In this model reaction, the interaction between the free atoms can be simply switched on at a certain geometry. Subtracting the electron density of the molecule from the reference density gives the electron-density difference, which is frequently used to show where electron density is accumulated between atoms, indicating a "bond" between the atoms, and where it is depleted, in which regions no bond can exist. The use of words like "exist" indicates that we are talking about philosophical questions concerning the reality of things. The philosophical discipline in which such questions are discussed is called ontology. In the philosophy of science, the existence of entities (things, objects) and theories is frequently discussed on the basis of two antagonistic positions: Realism and anti-realism. Those who claim that entities do exist independently of a human observer, which means they claim that entities are real, are entity-realists; those who deny this claim are entity-anti-realists. Those who agree with the claim that theories are not a mere human construction, but exist, just as horses or cars do, are theory-realists, and those who claim that theories are only human constructs made to explain experiments or to make predictions are theory-anti-realists. Of course, this is an extremely crude sketch of these ontological positions, but it is sufficient to show that the spectrum of what can be claimed to exist is very broad. (Some books on the philosophy of science, where such topics are discussed, include those by Nancy Cartwright [1], Ian Hacking [2] and Ronald Giere [3,4]. These books are also understandable for natural scientists interested in philosophical issues.) Theories are necessary to explain or predict experimental outcomes, and they help to bring structure to an otherwise chaotic set of unrelated experimental facts. Scientific theories are sets of sentences formulated such that the relationship between two different entities in systems are described: those with which one can make experiments and those that are just claimed to exist. If a theory contains entities that are claimed to exist, although it is not possible to make experiments with them, and if such a theory is successful, then these entities are essential for its success. Such fictitious entities are called theoretical entities. According to Ian Hacking [2], one criterion to distinguish real entities from theoretical entities is the ability to manipulate them, i.e., do experiments with them. It was criticized that this is too restrictive a criterion because scientists in disciplines like astronomy or cosmology observe their entities, but do not manipulate them. (An overview of critiques of Hacking's entity realism can be found, e.g., in [5].) 
I propose that whether the properties of entities can be observed in a reproducible way should also be a criterion to distinguish theoretical entities from real entities. If it turns out that one can do experiments with theoretical entities or that one can make observations that can be attributed only to them, the status of the entity turns from theoretical to real. Now, one could argue that non-real entities have no place in scientific theories because, after all, such theories claim to describe the "reality" or the "nature" of things, and this is not fictitious. However, scientific theories and especially theories of chemical bonding are full of theoretical entities, such as covalent bonds, multiple bonds, polar bonds, and so on. Gernot Frenking [6] called them the "unicorns in the world of chemical bonding models". Other theoretical entities well known to molecular physics include the reduced mass and center of mass of a two-particle system or the different quasiparticles in many-body physics [5,7]. Therefore, why is speaking about a bond much more difficult than speaking about bonding? Most words used in scientific theories are taken from everyday language, just think of "force", "velocity" or "bond". Every word has its own conceptual history; the semantics of the word "bond" in everyday language is connection via a cord, rope, band or ligament; in general, a material joint. Using the word "bond" has the connotation of something that is localized in space, that is strong, more or less rigid and that binds, fastens, confines or holds together. This connotation is supported in chemistry by rendering techniques used in molecular modeling software, like ball and cylinder or ribbon. If this view of a bond is adopted, one can claim that bonds between atoms exist or do not exist, that they can be broken or unbroken, that one can make bonds where no bonds are or split existing bonds. Forming a new bond causes stabilization of the system, whereas splitting a bond causes destabilization. Covalent bonds possess directional preferences that are made responsible for the three-dimensional structure of many molecules; the origin of this anisotropy is frequently attributed to other theoretical entities called hybrid orbitals. The semantics of the words "bond" and "bonding" are actually very different; the latter is just used to describe the system stabilization due to interactions between subsystems. Bond-making is frequently used synonymously with bonding to indicate system stabilization. On the other hand, there is no well-accepted word in chemistry that describes system destabilization and which can be used as an antonym for bonding; instead, one speaks about bond breaking or bond dissociation, but this immediately suggests the existence of a bond. I shall use the word "debonding" to describe the destabilization of a composite system; the word "bond" will only be used to describe elements in a structural formula, e.g., a C-H bond or a C=C double bond. Covalent Bonding and Chemical Reactions Bonding energies in chemistry range from a few kilojoules (weak hydrogen bonding) [8] to several hundreds of kilojoules (strong bonding); the physical origin of the differences in strength are the different interactions causing the stabilization. 
It has been shown that all weak bonding is caused by four basic interactions [9]: The interaction between static electric multipoles higher than monopoles in the different subsystems can be attractive, as well as repulsive; the interaction between static multipoles in one subsystem and multipoles induced in another are always attractive, as are the interactions between instantaneously created multipoles in one and induced multipoles in another subsystem. These three interactions are called electrostatics, induction and dispersion, respectively. Repulsive interactions are between the nuclei, but more important is that, for many electron atoms, electrons with like spin avoid being spatially close (Pauli principle). The result is the same as if strong repulsive forces, called Pauli forces, kept these electrons far apart. Depending on how large the contributions of the basis interactions are, weak bonding is called van der Waals bonding, hydrogen bonding, and so on [10]. Weak bonding, however, is not the topic of this paper. Strong bonding covers ionic bonding, metallic bonding and covalent bonding. Ionic bonding is caused by long-range Coulomb interactions between ions (monopoles); these interactions are completely isotropic. Cations are formed when some atoms lose one or more valence electrons, whereas anions are formed when these electrons are strongly attracted by neutral atoms. When an electron changes from one site to another, one speaks of charge transfer. Although Coulomb interactions are isotropic, the result of ionic bonding is crystal grids, where isotropy is strongly reduced. In metallic bonding, atoms lose some of their valence electrons, and the cations then form a grid, similar to the case of ionic bonding. In contrast to ionic bonding, however, there are no atoms that can bind these electrons; instead, they are distributed among the cations in the grid. The metal crystal is stabilized by interactions between delocalized electrons and the cation grid. The characteristics of covalent bonding are a pronounced spatial anisotropy, a strong dependence on the electron spin and attractive interactions that are too strong and too short-range to be caused by those interactions that are responsible for weak bonding. When chemists talk about chemical bonding, most of the time they mean covalent bonding. Covalent bonding is the topic of this paper. Chemical bonding and debonding occur in chemical reactions; complex reactions are considered to be composed of several elementary reactions, which are each named according to what happens between the reacting fragments: Dissociation, recombination, insertion, addition, elimination, substitution, and so on. In every elementary reaction, only a few atoms are involved-usually two and seldom more than three-and this is the reason why Levine simply stated: "Chemistry is local" [11]. Each elementary reaction may itself comprise several processes, such as spin-coupling and -decoupling, spin flip, local electron excitations in fragments and charge transfer between them. These are local processes, and they have a strong influence on the properties of the fragments, especially their geometries. Depending on the theoretical method used, the description of local processes may be easy, but can be very difficult. 
In classical valence bond (VB) theory, for example, the stabilization of two radical fragments in their respective doublet ground states by spin coupling to a singlet ground state (the formation of a Lewis electron pair) is well described by a single Heitler-London-type wave function. Such wave functions clearly show, by construction, the local character of the spin coupling; the delocalized molecular orbitals used in molecular orbital (MO) theory, on the other hand, hide this local process completely. Similarly, if the chemical environment of one of the atoms involved in an elementary reaction is changed, e.g., by substitution, the reactivity can change dramatically. Therefore, the question arises as to what is possible to explain what is responsible for such changes, in terms of the local processes. The use of terms like the "local Hund's rule", for instance, indicates that the importance of such local processes is known, but they are seldom revealed by the quantum chemical methods used to describe them (I am only referring to wave function methods; density functional methods are even worse in this respect). All VB wave functions are, in general, linear combinations of configuration state functions (CSF) [12], based on orbitals describing the fragments. In conventional VB, or Slater-Pauling VB, the fragments are atoms and the orbitals are non-orthogonal atomic orbitals or hybrid orbitals; in non-conventional VB methods, the orbitals are either atom-centered atomic orbitals (AO) or basis functions, once Slater, now usually Gaussian, with small delocalization tails on other atoms, or the fragments are larger moieties than atoms, or both. The orbitals describing the fragments may be orthogonal or non-orthogonal. We speak of an orthogonal-valence-bond (OVB) method whenever the wave function is based on orthogonal fragment orbitals. The multi-configurational self-consistent field (MCSCF) [12] is the most efficient wave-function method for describing chemical reactions; the CSFs are made with delocalized MOs. The best MCSCF wave functions are of the complete active space (CAS)-type [13,14], also called fully-optimized reaction space (FORS)-type [15][16][17][18][19][20][21], the latter description indicating that the wave function is designed to optimally describe the local processes during chemical reactions. However, the use of delocalized molecular orbitals hides the local processes occurring in chemical reactions; analysis of a FORS wave function is necessary to reveal them. OVB Reading of FORS Wave Functions The typical reactions of two or more fragments are recombination, insertion or addition reactions and the corresponding reverse reactions. The FORS wave function describing such reactions is a linear combination of CSFs made with orthogonal delocalized molecular orbitals (MO); it is completely determined by the system's spin state and its spatial symmetry, the number n MO and symmetry of the partially occupied MOs (the active orbitals) and the number of electrons n elec that can be distributed among them (the active electrons). The information concerning the number of active electrons and active orbitals is abbreviated as CAS(n elec , n MO ). The fragments are the reactants that are combined to give the product, while the definition of fragments is determined by the idea of how an elementary reaction proceeds and which MOs are needed to describe the reaction. 
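The size of a CAS(n_elec, n_MO) configuration space can be checked with the Weyl–Paldus dimension formula; the short Python sketch below evaluates it and reproduces the 20 singlet CSFs of the CAS(4,4) spaces used throughout this work. The function name and the triplet example are only illustrative.

```python
from math import comb

def weyl_dimension(n_orbitals, n_electrons, total_spin):
    """Weyl-Paldus formula: number of configuration state functions (CSFs)
    for n_electrons distributed over n_orbitals, coupled to total spin S."""
    m, n, s = n_orbitals, n_electrons, total_spin
    return round((2 * s + 1) / (m + 1)
                 * comb(m + 1, int(n / 2 - s))
                 * comb(m + 1, int(n / 2 + s + 1)))

print(weyl_dimension(4, 4, 0.0))   # CAS(4,4) singlet -> 20 CSFs
print(weyl_dimension(4, 4, 1.0))   # CAS(4,4) triplet -> 15 CSFs
```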
If the choice of the active MOs and of the active electrons is correct, the FORS wave function will correctly describe static electron correlation during the reaction, although a quantitatively correct description needs to include dynamic correlation corrections. That means FORS wave functions do correctly describe the geometries of reactants, products and possible transition structures, but FORS energy differences between them are always too small [12]. The description of all reactions discussed in this paper is based on molecular structures where for fixed inter-fragment distances R, all fragment geometries are fully optimized. Use of localized orbitals helps to reveal the local processes occurring during the reaction; if localized orbitals are obtained by separate orthogonal transformations in the space of active MOs and of doubly-occupied MOs, the electron density and the total energy of the molecule remain unchanged. A localization procedure using an orthogonal Procrustes transformation was recently described [22] that allows transformation from delocalized FORS-MOs to orthogonal fragment orbitals (FO). Transformation of the doubly-occupied MOs yields the same number of doubly-occupied FOs, and transformation of the n MO active MOs yields n MO FOs. When these FOs are used as active orbitals, the CSFs made with the FOs will describe the local processes during the reaction. Since the FOs are orthogonal, so are the CSFs made with them. The FO-CSFs describe which local states fragments are in, how these states are coupled and whether the fragments are neutral or ionic. A single diagonalization of the CI -matrix (vide infra) that was constructed with the FO-CSFs yields the energies and the weights of the FO-CSFs in the molecular state function. This analysis, which can be called an OVB reading of a FORS wave function, differs significantly in some aspects from a VB reading based on non-orthogonal orbitals. In this paper, the OVB analysis of four reactions will be presented, each describing bonding in a molecular system composed of two subsystems. All four reactions involve four active electrons in four active MOs, leading to a CAS(4,4) singlet wave function. The reactions are: The dimerization reaction of two carbenes was investigated in D 2h symmetry, while the dimerization reaction of two silylenes was studied in D 2h and C 2h symmetry. Insertion of a carbene into the H-H single bond was studied in C s symmetry. For the three dimerization reactions, the four active orbitals of the product are the bonding and antibonding MOs of the double bond; for the insertion reaction, two bonding and the corresponding antibonding orbitals for two CH bonds of the methane molecule are the active orbitals. For the dimerization reactions, the FOs are the s and p lone pair orbitals of the carbene and silylene fragments; for the insertion reaction, the s and p lone pair orbitals of the carbene and the bonding and antibonding σ orbitals in the hydrogen molecule are the fragment orbitals. The types of CSFs that can be made with the FOs are listed in Figure 1. According to the distribution of the four active electrons on the two fragments, the FO-CSFs can be classified as ionic or neutral (non-ionic). All CSFs with a C in the abbreviation are ionic CSFs; all others are neutral. For representing the results, only those CSFs with a weight larger than 0.1 somewhere along the reaction coordinate are included in the discussion; the number of these important CSFs varies with the system. 
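The two mechanical steps just described, localization of the active MOs by an orthogonal Procrustes rotation and a single diagonalization of the CI matrix in the orthogonal FO-CSF basis, can be sketched in a few lines of NumPy. All matrices below are random or hand-made placeholders, not real orbital coefficients or integrals; the point is only that the rotation preserves orthonormality and that, for orthonormal CSFs, the squared CI coefficients of the lowest eigenvector can be read directly as weights that sum to one.

```python
import numpy as np

# (1) Orthogonal Procrustes step: find the orthogonal matrix that rotates a
# block of delocalized active MOs (columns of mo_coeffs) as close as possible
# to a target pattern of localized fragment orbitals (columns of target).
rng = np.random.default_rng(0)
mo_coeffs = np.linalg.qr(rng.normal(size=(10, 4)))[0]   # orthonormal columns
target = rng.normal(size=(10, 4))                       # hypothetical FO pattern

u, _, vt = np.linalg.svd(mo_coeffs.T @ target)
rotation = u @ vt                                       # orthogonal rotation
fragment_orbitals = mo_coeffs @ rotation                # still orthonormal

# (2) OVB reading step: diagonalize a small CI matrix expressed in the
# orthogonal FO-CSF basis; the squared coefficients of the lowest eigenvector
# are the weights of the CSFs in the ground state (they sum to 1 because the
# CSFs are orthonormal). The matrix elements here are placeholders.
h_ci = np.array([[-1.00, -0.20, -0.05],
                 [-0.20, -0.60, -0.10],
                 [-0.05, -0.10, -0.40]])
energies, vectors = np.linalg.eigh(h_ci)
weights = vectors[:, 0] ** 2

print("CSF diagonal energies:", np.diag(h_ci))
print("ground-state energy:  ", round(energies[0], 3))
print("CSF weights:          ", np.round(weights, 3), "sum =", round(weights.sum(), 3))
```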
The Carbene Dimerization The dimerization of two carbene fragments to form ethene was performed in D 2h symmetry using the 6-31G* basis. The order of the active MOs is σ, π, π * and σ * . Of the 20 MO-CSFs, only 12 are of the A g symmetry, and the ethene ground state of A g symmetry was decomposed in these 12 MO-CSFs. The energy curve for the dimerization ( Figure 2) shows a monotonic energy decrease from the non-interacting carbenes at large distances to the equilibrium geometry; at C-C distances larger than 2.6 Å, the decrease is rather slow; at smaller C-C distances, the system stabilization becomes stronger. The distance dependence of the weights of the MO-CSFs ( Figure 3) leads one to assume that the carbene dimerization occurs in a completely monotonic way. The Hartree-Fock determinant (CSF |2200| = |σ 2 π 2 |) has the highest weight; at equilibrium geometry, it is close to one. It decreases with increasing C-C distance, and other CSFs, like |2020| = |σ 2 π * 2 |, then become important. At r(C-C) = 3.5 Å, the wave function is dominated by five CSFs with weights ≈ 0.2. Fragment properties (Figure 4), like the HCH bond angle, indicate a drastic and sudden change in the electronic structure of the fragments at the point where bonding becomes strong: The bond angle drops from about 128 degrees, which is slightly smaller than the typical HCH angle in triplet carbene (134.0 degrees), to 116 degrees and then increases to the final HCH angle in ethene. The C-H bond length at large C-C distances is typical for triplet carbene (1.077 Å) [23], shrinking slightly when bonding starts and then increasing to its final value. The OVB analysis ( Figure 5) reveals what happens during the carbene dimerization: At large C-C distances, the wave function is dominated by the TT CSF, representing two triplet carbenes coupled to a singlet. At r(C-C) = 2.6 Å, the single-charge transfer and single excitation CX1 CSF becomes important; its weight increases from zero to about 15%, while the weight of the TT CSF drops to 85%. With respect to the TT configuration, the ionic CSF describes a simple charge transfer from the s lone pair orbital at one fragment to the s lone pair orbital at the other. At the equilibrium C-C distance, both CSFs have equal weight of about 30%, and the weight of the CX2 CSF increases to nearly 20%. This ionic CSF describes the charge transfer from the p lone pair orbital at one fragment to the p lone pair orbital at the other. CX1 and CX2 have nearly identical energies, but their weights are significantly different. For none of the important FO-CSFs can a pronounced local minimum be found; the TT CSF is repulsive, while the ionic CSFs have nearly identical energy curves with an extremely shallow minimum around r(C-C) = 2.5 Å. Therefore, no single FO-CSF causes the deep minimum in the ground state; rather, the increasing weight of the ionic CSFs and the reduction of the weight of the non-ionic TT CSF are both responsible. The Silylene Dimerization in D 2h The energy curve ( Figure 6) shows a local maximum caused by a drastic change in the wave function at an Si-Si distance of about 3.5 Å. At shorter distances, the Hartree-Fock determinant (CSF |2200| = |σ 2 π 2 |) dominates as in the case of ethene; on stretching the Si-Si bond, the weight of the |2020| = |σ 2 π * 2 | CSF increases, but at 3.5 Å, the weight of both CSFs drops to zero and the |2002| = |σ 2 σ * 2 | CSF reaches a weight of more than 90% within few tens of an Ångstrom (Figure 7). 
Interpretation of the change of character of the wave function on the basis of MOs is impossible; after all, the change occurs at an Si-Si distance that is more than 50% larger than the equilibrium distance, so the meaning of bonding and antibonding σ and π orbitals is far from clear. The maximum in the total energy curve suggests that the MO switch π ↔ σ * does not occur smoothly. Using a larger basis set is no remedy for this problem. The geometry data of the silylene fragments ( Figure 8) already suggest what happens at r(Si-Si) = 3.5 Å. For larger Si-Si distances, the HSiH bond angle has a typical value of the singlet ground state, and Si-H has a typical distance of about 1.52 Å. The experimental values are 1.516 Å and 92.8 degrees [24]. At 3.5 Å, the HSiH angle increases almost immediately to about 118 degrees, while the Si-H distance shrinks to less than 1.48 Å; both are close to typical values for triplet silylene (1.48 Å and 118.5 degrees) [25]. This suggests local singlet-triplet excitation in each fragment, and the weights of the FO-CSFs support this interpretation (Figure 9). At large Si-Si distances, the wave function is dominated by NB (no-bond), representing the silylenes in their corresponding singlet ground states with a doubly-occupied s AO; the weight of this configuration is greater than 90%. The weight of double excitation (DX) is about 10% at large distances; this CSF describes the angular correlation in the singlet ground state by excitation of the lone pair electrons from the s to the p lone pair AO. Between r(Si-Si) = 3.5 and r(Si-Si) = 3.0 Å, NB disappears and is replaced mainly by TT and CX1; at r(Si-Si) = 3.0 Å, the weight of TT is greater than 60%, and the weight of the ionic CX1 is about 20%. The local excitations initiate bonding between the silylenes. With decreasing Si-Si distance, the weight of TT decreases to about 30%, and the weight of the ionic CX1 increases to the same value. Together with CX2 (20%), the ionic contributions dominate the disilene wave function at the equilibrium geometry. The shape of the FO-CSF energy curves ( Figure 9) seems rather strange at first. However, it is well known that the electron distribution in high-spin systems is, due to Fermi correlation, more compact than in low-spin systems; accordingly, the silylene lone pair orbitals will be more compact after the triplet excitation than in the singlet ground state, and this contraction of the electron density seems to be energetically favorable for nearly all FO-CSFs. Only for NB with doubly-occupied s AOs are the compact s AOs unfavorable. The kinks in the energy curves might be responsible for the faster decrease in the total energy compared with the more moderate decrease found for the carbene dimerization. The Silylene Dimerization in C 2h There is no hump in the energy curve of the ground state ( Figure 10); it more resembles the carbene dimerization than the planar silylene dimerization, but again, bonding occurs in a much smaller interval of the Si-Si distances; similar to that for planar dimerization. That the non-planar and planar dimerizations of silylenes are very different can be seen from the weights of the MO-CSFs ( Figure 11). The Hartree-Fock determinant has the highest weight at short Si-Si distances; it decreases monotonically to about 30% at r(Si-Si) = 3.5 Å. At the same time, the weight of other CSFs increases. 
In contrast to the planar silylene dimerization, there is no switching of MO-CSFs, because, due to switching of MOs, due to the strong puckering of the silylene moieties, there is no σ − π separation, and the four active MOs can change their character smoothly. At very short Si-Si distances, disilene becomes planar, but even at the equilibrium geometry, disilene is puckered; the pucker angle is largest at r(Si-Si) = 3.5 Å where bonding starts ( Figure 12). Furthermore, the changes in the HSiH bond angle and the Si-H bond length for the planar and non-planar dimerizations are very different. Even though the curves for non-planar dimerization are rather wiggly, both geometry parameters change less abruptly than for planar dimerization. The OVB analysis ( Figure 13) shows that there is indeed a significant difference in the bonding processes. At larger distances, NB and the correlating DX are dominant, but when the Si-Si distance decreases, the single-charge-transfer CSF, C, becomes important, even ahead of TT. At shorter distances, C disappears and is replaced by CX1, describing charge-transfer coupled with local excitation. It is also noteworthy that the weight of NB is only zero when disilene becomes planar. The shapes of the CSF energy curves indicate that the electron density at the silicon atoms contracts at r(Si-Si) = 3.5 Å due to the triplet excitation, although the energy jump is less pronounced than during planar dimerization. Nevertheless, the rapid energy decrease is again enhanced by contraction of the electron density in the local triplet states. The Insertion of Carbene into H 2 This reaction was investigated in C s symmetry. For all 20 CSFs, the coefficient is different from zero for symmetry reasons. The parameter R used as the reaction coordinate ( Figure 14) is the normal distance of the carbon atom from the molecular axis in H 2 . The energy curve of the ground state ( Figure 15) is as unspectacular as the energy curves for the carbene dimerization. According to the energy curve, bonding starts at about R = 1.5 Å. The geometry parameters of the fragments ( Figure 16) show, however, that at this distance, an electron or spin rearrangement occurs with significant implications for the fragment geometries. At large R values, the HCH angle and C-H bond length are typical for carbene in the 1 A 1 state; 1.107 Å and 102.4 deg [26]. At R = 1.5 Å, both parameters change in a discontinuous way to values typical for carbene in the 3 B 1 state. At the same distance, the bond length of the hydrogen molecule doubles, which is impossible when the molecule is bonded. The distances between the carbon atom and the two hydrogen atoms are different in the initial phase of the insertion reaction, i.e., when R ≥ 1.5 Å, the carbon is not pointing to the H 2 midpoint; rather, the carbene and the hydrogen molecules approach each other in a parallel fashion, with the carbon atom closer to one hydrogen atom in H 2 than to the other. At R = 1.5 Å the two C-H distances are equal, and the carbene has rotated from a parallel to a perpendicular position with respect to the hydrogen molecule, so the symmetry changes from C s to C 2v . Many of the 20 FO-CSFs have weights less than 0.1 along the whole reaction coordinate; when all of these CSFs are neglected, eight CSFs remain, of which only three have significant weights. These CSFs are important at very different parts of the reaction coordinate. 
At large distances, the dominant CSF is NB, which describes the doubly-occupied s lone pair orbital on the carbon atom and the doubly-occupied σ MO in H 2 . At short distances, where the lowest excited carbene triplet state 3 B 1 is coupled with the lowest excited H 2 triplet state to an overall singlet, TT dominates. Additionally, in between these two regions, around R = 1.5 Å, CSF X is important, where the hydrogen molecule is in its ground state, σ 2 , and the carbene is in the excited singlet state 1 B 1 . This CSF has zero weight at large, as well as at small R values; it appears when bonding starts and disappears again when bonding is finished. Its role is to prepare the carbene in the 1 A 1 state for bonding. According to basic chemical principles, covalent bonding between fragments is only possible if unpaired electrons are available. Accordingly, the two singlet coupled electrons in the s lone pair orbital cannot contribute to covalent bonding, whereas the electrons in the triplet state can. However, this needs an excitation of one electron from the s to the p orbitals and a spin flip of one electron. The excitation without spin flip is described by X. Spin flip in one fragment needs spin flip in the second to guarantee that the ground-state multiplicity of the molecular wave function is not changed. As soon as this occurs, TT dominates, and X no longer describes a physical process in the system and disappears. The lowest triplet state of H 2 is dissociative, which is in accord with the sudden increase in bond length when TT becomes dominant. The shape of the energy curves ( Figure 17) supports this interpretation: NB describes the two fragments each housing a pair of singlet-coupled active electrons. In singlet pairs, the two electrons have different spins, so they do not avoid each other as strongly as two electrons with like spins would. Accordingly, the electrons can come much closer to each other; the Coulomb repulsion increases; and the electron density is more extended than when electrons avoid each other due to spin correlation. To reduce the Coulomb repulsion of the electrons in the carbon 2s AO, one electron must be farther away from the carbon nucleus than the other (in-out correlation), which is possible when the 2s AO is more expanded; thus, contraction of the atomic orbital is unfavorable, and the energy of the NB CSF increases. The same can be said for the σ orbital in the H 2 fragment. In X, only the hydrogen σ orbital is doubly occupied, but the two active electrons at the carbon atom occupy the orthogonal 2s and 2p π AOs (angular correlation), which helps them to avoid each other, and therefore, a contraction of these two AOs is much less unfavorable than when the two electrons are in the same AO. For all other CSFs, contraction of the active FOs is favorable as a result of the two local spin flips. What We Can Learn from the OVB Analysis In all reactions where the reactants are in low-lying singlet states (low-spin states), the energies of all CSFs, but NB, decrease suddenly at a certain geometry. At the same geometry, the energy of NB increases, although this effect is less pronounced than the sudden energy decrease of the other CSFs. Concomitantly, the weights of the lowest high-spin states of the reactants coupled to a resultant low spin state increase strongly. If the system's wave function is dominated by NB at large distances, the weight of this CSF decreases. 
This is not found for the carbene dimerization in D 2h , where both reactants are already in their corresponding lowest high-spin state and where NB therefore has zero weight throughout the whole reaction. This finding is in accord with Lewis' idea that covalent bonding between reactants is only possible if both fragments have unpaired electrons that can be singlet coupled to an electron pair. In the cases of carbene and silylene, the two electron spins must be coupled to a local high-spin state, which is only possible when they occupy the s and the p lone pair orbitals and are thus angularly correlated. Furthermore, the carbene insertion reaction shows that the high-spin state of the hydrogen molecule is best represented by a triplet excitation of the doubly-occupied σ MO, in which the two unpaired electrons are left-right correlated. The 1 A 1 state of the carbene reactant in this reaction is an excited fragment state, so the change from the singlet to triplet state is indeed a de-excitation, a process that seems to occur in two steps: First, the fragment goes from the low-lying 1 A 1 state to the higher lying 1 B 1 state; the second step is a spin flip in both fragments. The 1 B 1 state only helps to prepare the singlet carbene for bonding; at the equilibrium geometry, its weight is already zero. That such a CSF is important can only be seen when the whole reaction is investigated, not when an OVB analysis is only made at the equilibrium geometry. It is noteworthy that, for all four reactions, the weight of neutral TT at the equilibrium distance is always smaller than 0.4, even when it is very large at intermediate distances, as in the case of the carbene dimerization. On the other hand, there are several ionic CSFs at the equilibrium geometry that, together, have a much larger weight than the neutral CSFs. The sum of the weights of the ionic CSFs in the set of the most important CSFs is at least as large as the weight of neutral TT. When the reactions are described with the FORS wave function based on delocalized MOs, the fragment states in the four molecular systems are hidden, but some of the geometry parameters of the fragments may be helpful indicators for the fragment states. This point of view is not adopted by those who deny that, using such wave functions, one can make physical statements about fragment states in interacting systems, even when the fragments manifestly do not interact. According to this, only when isolated fragments are studied, one can say that the fragment geometry is caused by the fragment state; in all other cases, the origin of fragment geometries is not determined. For someone who denies that local states are responsible for fragment properties, local fragment states are theoretical entities; on the other hand, for someone who claims that the agreement of fragment geometries and fragment states, as suggested by the FO-CSFs, is too systematic to be just an accidental coincidence (as I do), speaking about fragment states and the information about local spin and charge distribution is speaking about entities that are as real as the free molecules that we are investigating. Monitoring geometry parameters during covalent bonding in a system suggests which local processes might be important for explaining what causes bonding. OVB reading of a proper FORS wave function should then describe these processes in more detail. 
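For readers who want the TT CSF spelled out, the singlet coupling of two local triplets is the standard Clebsch-Gordan combination; the fragment labels A and B below are generic, and the expression is quoted from elementary angular-momentum algebra rather than from the original equations of this paper:

$$ |S{=}0\rangle \;=\; \tfrac{1}{\sqrt{3}}\Bigl( |1,{+}1\rangle_A\,|1,{-}1\rangle_B \;-\; |1,0\rangle_A\,|1,0\rangle_B \;+\; |1,{-}1\rangle_A\,|1,{+}1\rangle_B \Bigr). $$

Each fragment triplet keeps its two valence electrons in the s and p lone pair orbitals with parallel spins, so the TT CSF can dominate at large separations without any charge transfer, in line with Lewis' picture of two unpaired electrons per fragment being singlet coupled to a bonding pair.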
The Basis of Conventional VB The Heitler-London calculation on H 2 is based on the minimization of the expectation value of the molecular Hamiltonian:Ĥ using a two-electron wave function (geminal) of the form: where a and b are the 1s AOs of the free hydrogen atom placed at hydrogen atoms A and B; the distance between atoms is R. The spatial part of the Heitler-London (HL) wave function Ψ HL has 1 Σ + g symmetry. The energy expectation value calculated with Ψ HL is given by: with: J is the Coulomb interaction between one electron and the other nucleus, the sum j + k is the Coulomb interaction between two electrons; j describes the Coulomb repulsion between the electrons located on the atoms; k is frequently described as the repulsion of two non-local charge distributions, also called exchange charge densities. Indeed, the sum of the two integrals describes the Coulomb repulsion between two independent charged Fermions. KS, on the other hand, is the contribution arising from the interference of the two hydrogen AOs. There are three other wave functions that can be constructed with two hydrogen AOs: Ψ I is the spatial part of the so-called ionic wave function of 1 Σ + g symmetry; Ψ T is the spatial part of the 3 Σ + u wave function; and Ψ S is the spatial part of the wave function describing the ionic 1 Σ + u state. The energy expectation values for all four wave functions are shown in Figure 18; clearly, only the HL wave function Ψ HL describes the stable H 2 ground state qualitatively correctly. The energy curve of the ionic wave function does have a local minimum, but at a too-large equilibrium distance, and the stabilization energy with respect to two isolated hydrogen atoms is close to zero. The triplet-energy curve is completely repulsive, and the energy curve of the second ionic wave function lies very high. Nevertheless, the quantitative agreement between the experimental results and the theoretical results obtained with the HL wave function is poor. One finds an equilibrium distance of R e = 0.8679 Å and a dissociation energy of D e = 304.5 kJ/mol; the best experimental values are r e = 0.74117 Å and D e = 456.8 kJ/mol, so the predicted equilibrium distance is 17% too long and the dissociation energy is 33% too small. Since the HL and the ionic wave functions have the same 1 Σ + g symmetry, one can make linear combinations of them, i.e., construct a CI (configuration interaction) wave function and do a CI calculation to get a better description of the 1 Σ + g ground state and also of the 1 Σ + g excited state (vide infra). However, this improves the description of the ground state only slightly; the ground state CI wave function Ψ = c HL Ψ HL + c I Ψ I , the so-called Weinbaum function, is dominated by the HL wave function for all interatomic distances R from very large values to distances smaller than the equilibrium distance, as the absolute value of the CI coefficient of the HL wave functions is always much larger than that of the ionic wave function, |c HL | >> |c I |. Using the Weinbaum function, the equilibrium distance becomes even worse, R e = 0.884 Å, while the dissociation energy improves slightly, D e = 311.6 kJ/mol. 
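For reference, the four two-electron spatial functions that can be built from the two hydrogen 1s AOs a and b have the following standard textbook form, with S = ⟨a|b⟩ the AO overlap; they are quoted here from general knowledge, so the normalization and phase conventions may differ from those of the original displays:

$$ \Psi_{HL}=\frac{a(1)b(2)+b(1)a(2)}{\sqrt{2+2S^{2}}},\qquad \Psi_{I}=\frac{a(1)a(2)+b(1)b(2)}{\sqrt{2+2S^{2}}}, $$
$$ \Psi_{T}=\frac{a(1)b(2)-b(1)a(2)}{\sqrt{2-2S^{2}}},\qquad \Psi_{S}=\frac{a(1)a(2)-b(1)b(2)}{\sqrt{2-2S^{2}}}. $$

The first two are symmetric under exchange of the spatial coordinates and belong to 1 Σ + g ; the last two are the antisymmetric triplet 3 Σ + u function and the symmetric ionic 1 Σ + u function mentioned in the text.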
The numerical results quoted above are obtained with 1s AOs for the free hydrogen atom; when a basis function χ(r) = N e −ζr with a variational parameter ζ is used instead, the results are considerably improved: using a simple HL wave function and ζ opt = 1.17 gives R e = 0.7356 Å and D e = 364.7 kJ/mol; when ζ is optimized with the energy of the Weinbaum function, the equilibrium distance is R e = 0.757 Å and D e = 388.0 kJ/mol. By adding a p polarization function to the 1s AO with the optimized ζ, the results can again be improved: R e = 0.746 Å and D e = 397.5 kJ/mol, with errors of 0.7% for the bond distance and 13% for the dissociation energy. These results seem to justify the view that it is the HL wave function that describes the major part of the stabilizing processes in the hydrogen molecule; all other contributions give just minor improvements. This property of Ψ HL is frequently related to the form of the wave function, in which each AO is occupied by exactly one valence electron, which is thought to exactly represent covalency: each atom contributes one electron to the bonding electron pair. Ψ HL is therefore also called a covalent or a neutral wave function, and the same holds true for the triplet wave function Ψ T . The ionic wave functions, on the other hand, describe a cation/anion pair; they differ in their relative phases and in their symmetry. Slater and Pauling generalized Heitler and London's method to what is now called the (conventional) VB method, where the major role is played by covalent CSFs and where ionic CSFs are only used to correct some shortcomings of a treatment based solely on covalent CSFs. The interpretative power of VB methods has been convincingly shown by Shaik and Hiberty in several publications; see, for example, [27]. The Non-Orthogonality of VB-CSFs In Figure 19, the CI energies for the ground state E GS and for the excited state E ES are shown together with the energies of the two CSFs, E HL and E I . One can see that the ground state energy curve E GS is nearly identical to the E HL curve, whereas the energy curves E ES and E I only get close at large distances. This means that, whereas Ψ HL describes the ground state very well, the excited state is not dominated by Ψ I , except at large distances. One can also see that, for short distances, the energy curves E HL and E I get very close and become identical for very small distances. This is due to the fact that Ψ HL and Ψ I are not orthogonal to each other; the overlap between the two wave functions is ⟨Ψ HL |Ψ I ⟩ = 2S/(1 + S²), and this goes to one when R goes to zero, because the overlap of the hydrogen AOs, S = ⟨a|b⟩, then goes to one. Figure 20 shows that the overlap between the VB wave functions approaches one much faster than the overlap between the AOs; at the equilibrium distance, their overlap is already greater than 0.95. Figure 20. The overlap between the VB CSFs and between the hydrogen 1s AOs. Minimization of the ground state energy using the Weinbaum function leads to an eigenvalue problem with (2,2)-matrices (a (2,2)-CI problem). Because of the non-orthogonality of the CSFs, the CI problem is a generalized eigenvalue problem HC = SCE with the CI matrix H and the metric S; C is the matrix of the CI coefficients (eigenvectors), and E is the diagonal matrix of eigenvalues. For R → 0, all four elements of the CI matrix become identical, and the off-diagonal matrix element of the metric becomes one, which means that both matrices become singular.
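A minimal numerical sketch of this (2,2)-CI problem may be helpful; the matrix entries below are illustrative placeholders rather than values computed in the paper, and the snippet only demonstrates how the generalized eigenvalue problem HC = SCE is solved and how Chirgwin-Coulson weights are formed from non-orthogonal CSFs:

```python
import numpy as np
from scipy.linalg import eigh

S_ao = 0.75                               # AO overlap at some distance (illustrative)
S_csf = 2.0 * S_ao / (1.0 + S_ao**2)      # overlap of the HL and ionic CSFs, 2S/(1+S^2)

# Illustrative CI (Hamiltonian) matrix in the non-orthogonal {Psi_HL, Psi_I} basis;
# the numbers are placeholders, not actual integrals.
H = np.array([[-1.10, -0.95],
              [-0.95, -0.60]])
S = np.array([[1.0,   S_csf],
              [S_csf, 1.0]])

E, C = eigh(H, S)          # generalized eigenvalue problem H C = S C E
c0 = C[:, 0]               # ground-state CI vector, normalized so that c0.T @ S @ c0 = 1

# Chirgwin-Coulson weights: w_i = c_i * (S c)_i, which sum to one.
w = c0 * (S @ c0)
print("energies:", E)
print("ground-state coefficients:", c0)
print("Chirgwin-Coulson weights:", w, "sum:", w.sum())
```

Pushing S_ao toward one makes the metric nearly singular; the raw CI coefficients then grow without bound while the two CSFs become linearly dependent, which is the behaviour described around Figure 21.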
The CI coefficients and their squares for the two VB CSFs (Figure 21) show how differently ground and excited states are treated in conventional VB. For the ground state, at distances larger than about R = 0.5 Å, the squares of the CI coefficients of Ψ ion and Ψ HL are nearly zero and nearly one, respectively, confirming that the ground state is well represented by the HL CSF alone. At distances smaller than the equilibrium distance, where both matrices become singular, the CSFs become linearly dependent and the CI coefficients of the CSFs approach infinity. Since the CI vectors of the generalized eigenvalue problem are orthogonal with respect to the metric S, the squares of the CI coefficients are not proper weights of the CSFs; instead, the Chirgwin-Coulson weights are mostly used to describe molecular electronic structures by their fractional ionic character [28]. However, since Ψ HL describes two neutral hydrogen atoms only at large distances, but the same cation/anion pair as Ψ I at small distances, such a characterization does not have unique physical relevance. Instead, the question arises: What does it mean to call Ψ HL a covalent wave function when it describes a neutral situation only at large distances, but an ionic one at small distances? One might as well claim that, at small distances, Ψ I is covalent. One immediate consequence is: from the form of a wave function, one cannot infer what kind of electron distribution it describes. This is in contrast with common belief: In the valence bond (VB) view ..., the electrons are viewed to interact so strongly that there is negligible probability of finding two electrons in the same orbital. The wave function is thus considered to be dominated by purely covalent contributions in which each electron is spin paired to another electron [29]. Another consequence is: the non-orthogonality of VB CSFs poses difficulties for the interpretation of wave functions that are more severe than the frequently mentioned numerical problems of the VB method. Nevertheless, even nowadays, chemists who at best learned that VB is an obsolete method often characterize the electronic structure of molecules by the fractional ionic character of the state function, assuming or having heard that these numbers have physical relevance. Figure 21. The CI coefficients and their squares of the (2,2)-CI problem using VB and OVB CSFs. The Role of Interference in Conventional VB Why is Ψ HL so well suited to describe bonding in H 2 ? To answer this question, we reorder the contributions in the energy expression. The separation of the energy into classical and interference contributions shows (Figure 22) that only the interference of the non-orthogonal AOs causes bonding; the energy curve for the classical contributions without interference contributions is purely repulsive. Figure 22. Partitioning of the Heitler-London energy into classical contributions and contributions caused by interference. To be more precise: constructive interference is responsible for bonding in the ground state of H 2 ; the repulsive character of the triplet state is due to destructive interference. This can be seen from the one-particle densities normalized to the number of particles, ρ HL = (a² + b² + 2S ab)/(1 + S²) and ρ T = (a² + b² − 2S ab)/(1 − S²): when R approaches zero, S approaches one, and therefore the atomic contributions approach 0.5 in the HL CSF, whereas the interference contribution, which piles up charge in the internuclear region, approaches one.
In the triplet wave function, the atomic density contributions are strongly positive, while the destructive interference contribution is strongly negative. As a consequence, the electron densities are located at the nuclei with their maxima outside the internuclear region. This is in accordance with the Pauli principle, which states that fermions with like spin avoid being spatially close. However, to call both wave functions covalent, although their electron and spin distributions are completely incompatible, shows that the word 'covalent' has conflicting semantics with respect to electronic wave functions. The one-particle densities made with the other two VB CSFs do not add anything new; they are ρ I = ρ HL and ρ S = ρ T . Therefore, both Σ + g states show constructive interference, whereas both Σ + u states show destructive interference. This explanation of bonding in the hydrogen molecule has one flaw: It is based on the interference of atomic orbitals that never change their shape even though the atoms strongly interact. Chemical bonding is the result of strong interactions between atoms: The valence electrons of one atom are attracted by the nucleus of the other atom; the electrons repel each other due to charge and, also, if there are more than two electrons in the system, due to the Pauli principle. The electron density should reflect the results of these interactions: the electron density of one atom surrounded by other atoms must be more contracted due to the electron-electron repulsion, and the electron density must deviate from spherical symmetry due to polarization and repulsion. If a minimal basis with a single Slater function χ(r) = N e −ζr is used, contraction of the electron density can be accounted for by a variable exponent ζ, and polarization may be represented by a non-spherical AO by adding a p-type basis function to the 1s AO. Such an AO is similar to hybrid AOs. Calculations with such modified AOs were done in the early days of quantum chemistry, but they were always just seen as a way to improve the quantitative agreement between calculated and experimental data. To explain chemical bonding, the spherical 1s AOs of free hydrogen atoms were considered to be sufficient. Coulson and Fischer [30] showed that it is possible to represent the ground-state wave function from the (2,2) CI problem by a single function of the HL-type, if the atom-centered AO a is replaced by a linear combination φ a = N (a + b) and b is replaced by the linear combination φ b = N (b + a) with a small positive depending on the interatomic distance and a normalization coefficient. These so-called semi-localized AOs are nodeless and non-orthogonal; they are the basis of the generalized VB method (GVB) by Goddard [31]. Orthogonal VB Orthogonal AOs are frequently seen just as a convenient technical means in quantum chemical calculations. Symmetric orthonormalization of AOs was introduced into solid-state physics by Wannier [32] and into molecular quantum theory by Löwdin [33], and even when the AOs used in quantum chemical calculations were not orthogonal, they were assumed to be, as in the Pariser-Parr-Pople (PPP) method [34][35][36], or quite general in the zero differential overlap (ZDO) approximation [37][38][39]). No wonder that symmetrically orthonormalized AOs (OAOs) were also used in VB calculations on H 2 [40,41]. 
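The symmetric orthonormalization just mentioned is easy to demonstrate numerically; the sketch below uses the standard closed-form overlap of two equal-exponent Slater 1s functions, and the distance and exponent are arbitrary illustrative choices (this is not a reproduction of the calculations in [40,41]):

```python
import numpy as np

def overlap_1s(R, zeta=1.0):
    """Overlap of two Slater 1s AOs with equal exponent zeta separated by R (atomic units)."""
    x = zeta * R
    return np.exp(-x) * (1.0 + x + x**2 / 3.0)

R = 1.4                      # roughly the H2 equilibrium distance in bohr
s = overlap_1s(R)
S = np.array([[1.0, s], [s, 1.0]])

# Loewdin symmetric orthonormalization: OAOs = AOs * S^(-1/2)
evals, evecs = np.linalg.eigh(S)
S_inv_sqrt = evecs @ np.diag(evals**-0.5) @ evecs.T

print("AO overlap:", s)
print("S^(-1/2):\n", S_inv_sqrt)
# Each column expresses one OAO in the AO basis; the negative off-diagonal
# coefficient is the tail of opposite sign on the partner atom, which turns the
# spherical 1s AOs into orthogonal, node-containing hybrid-like orbitals.
```

The calculations with such OAOs referred to in [40,41] are the ones whose unexpected results are described next.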
An unexpected result of these calculations was that the energy curve for the HL-type wave function Ψ o HL (where the superscript o indicates a wave function made with OAOs) is completely repulsive, like that for the triplet state; indeed, the two energy curves are parallel, with the HL-curve lying slightly above the triplet curve. Similarly, the energy curve of the ionic wave function of Σ + g symmetry is parallel to the energy curve of the ionic 1 Σ + u ; again, the former lying slightly above the latter. For the two Σ + u states, one finds that the wave functions and, accordingly, the energy curves are completely unchanged when the non-orthogonal AOs are replaced by OAOs. Since the OAOs depend linearly on the AOs, so do the CSFs. For the Σ + g CSFs, the linear transformations are [42]: where VB-CSFs are labeled with the superscript n; for the Σ + u CSFs, it holds: Because Ψ n HL and Ψ n I are a non-orthogonal basis for the (2,2)-CI problem, Ψ o HL and Ψ o I are an orthogonal basis for the same CI problem; the results for ground and excited states are the same, irrespective of which basis is chosen. Only when individual basis vectors are compared do the differences between VB and OVB become apparent. The difference between Ψ o HL and Ψ n HL is best seen by comparing their one-particle densities. It turns out that ρ o HL = ρ n T , which means Ψ o HL shows the same destructive interference as does Ψ o T . Additionally, this is due to the use of OAOs, as has been long known. To quote Slater: This is not surprising; for our discussion of the nature of the covalent bond . . . has made it clear that it is the overlap charge which is responsible for the binding, and these orthogonalized orbitals are just set up so as to avoid overlap. If one uses them, one can still carry out a configuration interaction and end up with the same results which we have obtained by our other methods [43]. Slater pointed to the fact that orthogonalization of AOs prevents constructive interference in Ψ o HL , and we showed that all OVB-CSFs have the same one-particle density. Proper linear combinations of Ψ o HL and Ψ o I describe the bonded ground state of the H 2 molecule as correctly as do linear combinations of Ψ n HL and Ψ n I , regardless of the fact that the one-particle density of the former CSFs shows destructive interference and that of the latter shows constructive interference. The correct description of bonding does not depend on a certain choice of AOs. A second fact is also well known from McWeeny's work in the 1950s [41]: The VB-CSF Ψ n HL is a linear combination of the neutral CSF Ψ o HL and the ionic CSF Ψ o I , and since Ψ n HL is a very good description of the H 2 ground state, the linear combination of OVB CSFs is also a very good description of the ground state. Because the OVB CSFs are orthogonal to each other, the neutral CSF can never describe an ionic electron distribution and vice versa. However, since the overlap integral S increases with decreasing distance R, the ionic contribution increases, as well. At large distances R, where S = 0, Ψ n HL is identical to Ψ o HL , and the electron distribution in the ground state is strictly neutral or covalent. With increasing S, the ionic contribution increases, as well, but the contribution of Ψ o HL is always larger than that of Ψ o ion . For R approaching zero, both coefficients of the linear combination approach 1/ √ 2, which means the weight of covalent and ionic CSF is 1/2 for R = 0 ( Figure 23). 
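The linear transformations referred to above have, for two equivalent centres, the following standard (McWeeny-type) form; the relations below are reconstructed from the surrounding discussion rather than copied from the original displays, so the phase conventions are an assumption:

$$ \Psi^{\,n}_{HL}=\frac{\Psi^{\,o}_{HL}+S\,\Psi^{\,o}_{I}}{\sqrt{1+S^{2}}},\qquad \Psi^{\,n}_{I}=\frac{S\,\Psi^{\,o}_{HL}+\Psi^{\,o}_{I}}{\sqrt{1+S^{2}}},\qquad \Psi^{\,n}_{T}=\Psi^{\,o}_{T},\qquad \Psi^{\,n}_{S}=\Psi^{\,o}_{S}. $$

They reproduce the limits quoted in the text: for S = 0 the VB and OVB CSFs coincide, and for R → 0 (S → 1) the neutral and ionic OVB CSFs enter Ψ n HL with equal coefficients 1/√2.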
Pilar [44] in his Elementary Quantum Chemistry summarized the findings by McWeeny as follows: This means that the concepts of covalent and ionic character are not unique. In using the Slater-Pauling method for polyatomic molecules, it has been standard practice among chemists to speak of the relative importance of ionic structures in a molecule in terms of the coefficients of the corresponding wave function in the total wave function. The above analysis shows that such an interpretation does not have a unique physical significance. Of course, this is not at all surprising in the light of a previous discussion. . . , where it was shown that the so-called covalent and ionic functions used to describe H 2 have an overlap of 0.95 and thus have no unique interpretation in terms of fractional ionic character. One must then conclude that any Slater-Pauling covalent wave function which predicts stable chemical bonding does so only because the wave function contains ionic wave functions in terms of OAO's. In conclusion, the use of OAO's in the VB method leads to a clearer electrostatic picture of chemical bonding but destroys the chemist's simple concepts of covalent and ionic character. In light of the relation of VB and OVB CSFs and the implications for their interpretation (known for half a century), statements of the following kind are surprising: . . . we may say that the symmetric orthonormalization gives very close to the poorest possible linear combination for determining the lowest energy. This results from the added kinetic energy of the orbitals produced by a node that is not needed.. . . We have here a good example of how unnatural orthogonality between orbitals on different centers can have serious consequences for obtaining good energies and wave functions [45]. OVB and Chemical Bonding The most profound analyses of chemical bonding are due to Klaus Ruedenberg. Starting in the 1960s, he and his coworkers showed, in a series of papers [46][47][48][49][50][51][52][53][54][55][56][57] , what can be summarized as Ruedenberg's physical interpretation of covalent bonding in H + 2 : (1) Covalent bonding is the result of the lowering of kinetic energy through inter-atomic electron delocalization, called electron-sharing. Delocalization is caused by constructive interference during the superposition of hydrogen AOs. The electrostatic interactions due to charge accumulation in the internuclear region are not bonding, as is frequently claimed, but debonding. (2) Electron-sharing is accompanied by intra-atomic contraction and polarization. Contraction causes a decrease in the intra-atomic electrostatic energy and an increase in the intra-atomic kinetic energy in the deformed atoms in the molecule. (3) Intra-atomic contraction enhances the inter-atomic lowering of the kinetic energy and, thus, contributes to energy minimization. (4) The antagonistic changes of intra-atomic and inter-atomic energy contributions cause a variational competition between electrostatic and kinetic energy; the wave function that achieves the optimal total energy is obtained by variational optimization. (5) The atom-centered orbitals describing the deformed atoms are quasi-AOs; their shape depending on the distance between the interacting atoms. 
Near equilibrium distance, they are more contracted than the free AOs, causing the lowering of electrostatic energy; at larger distances, they may be even more expanded than in the free atom, because then the electron can better expand into spatial regions not available for the electron in the free atom when the AOs are superimposed. It was shown for the H + 2 ion that the quasi-AOs are very similar to free AOs; the difference between them is not larger than 6% (measured by their overlap); the major deformation is contraction. One might assume that this deformation of the AOs during bonding is negligible; however, neglecting the deformation decreases the bonding energy by about 50% [53]. Moreover, of the 6% deformation, about 75% is contraction and only 25% is polarization. Nonetheless, the latter again has an unexpected impact: The Coulomb interaction between the proton (Atom A) and the spherically-contracted quasi-atomic density at Atom B is repulsive at all distances, but if the quasi-atomic density is also polarized, the Coulomb interaction is attractive at all distances. It has been shown that the bonding in H 2 is completely analogous; because of the additional electron-electron repulsion, the total bonding energy is not twice the bonding energy of H + 2 , but only 85% of it [53]. For the many-electron molecules B 2 , C 2 , N 2 , O 2 and F 2 , the basic conclusions remain valid: Because of the larger number of interacting electrons, the deformation of the atoms in the molecule (due to electrostatic interactions or due to the Pauli exclusion principle) becomes more important, and the wave function adjustment may become a subtle problem. Ruedenberg's analysis is based on non-orthogonal quasi-AOs allowing interference; in OVB, contraction and polarization of OAOs is enforced by orthogonalization, irrespective of whether 1s AOs are symmetrically orthogonalized or orthogonal MOs are localized on fragments by an orthogonal transformation. On the right side of Figure 24, one can see that symmetrical orthogonalization transforms the spherical 1s AOs into contracted sp-like hybrid orbitals. On the left side, one can see what is described by the original Heitler-London treatment: although the 1s AOs strongly interact with each other during bonding, they are not perturbed; instead, there is what is called mutual interpenetration of the never-changing electron densities. Accordingly, there is only interference of the 1s states of free hydrogen atoms, even at the equilibrium state of the hydrogen molecule. Obviously, this reaction is a model reaction of two fictitious hydrogen atoms; it demonstrates that interference is responsible for delocalization of the electrons and, thus, for electron sharing. Unless the AOs are scaled, or extremely large basis sets are used that can mimic the scaling, and polarization is allowed, e.g., by adding p-type AOs, then the influence of contraction and polarization on bonding is not accounted for. In OVB, on the other hand, the OAOs are made, as Slater has pointed out, to prevent any interference and, thus, electron sharing. Therefore, the neutral HL-type CSF cannot describe covalent bonding, regardless of whether the OAOs are contracted and polarized or not. Electron delocalization is only represented by the ionic CSF, and the ground-state wave function must be a linear combination of the neutral and ionic CSFs; at large distances, it is purely neutral, but with decreasing R, both CSFs become equally important (see Figure 21). 
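The analogy can be made concrete for the two-site Hubbard model at half filling; in the basis of the covalent singlet and the symmetric ionic singlet, the textbook effective Hamiltonian reads (the sign of the off-diagonal element depends on phase conventions, so take this as an illustration rather than a quotation):

$$ H_{\mathrm{eff}}=\begin{pmatrix} 0 & 2t\\ 2t & U \end{pmatrix},\qquad E_{\pm}=\frac{U}{2}\pm\sqrt{\frac{U^{2}}{4}+4t^{2}} . $$

The off-diagonal element, the hopping integral t, transfers an electron from one site to the other and thus changes the neutral configuration into an ionic one, exactly the role played by the coupling matrix element in the OVB description of H 2 .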
In the Coulson-Fisher treatment, electron sharing is represented by increasingly delocalized non-orthogonal atom-centered AOs, while in OVB, delocalization is represented by the increasing weight of the ionic CSF. This is completely analogous to the Hubbard model [58,59] in solid-state physics, where each site contributes one electron occupying an orthogonal AO. The ground state is represented by a linear combination of VB-like CSFs; neutral CSFs describe electron distributions, where each site hosts a single electron; in ionic CSFs, some sites are occupied by two and other sites by zero electrons. In the OVB description of the H 2 molecule, the outer diagonal element of the CI matrix, the coupling matrix element, describes the transfer of an electron from one atom to the other, thus changing the neutral electron distribution into an ionic one. The same role is played by the "hopping integral" in the Hubbard model (the author is grateful to Prof. Ruedenberg for having pointed out this fact). Diabaticity of OVB CSFs It is a characteristic of VB CSFs that they can change their character with their molecular geometry. In contrast to such chameleon-like state functions, the electron configurations described by OVB CSFs never change along the interatomic distance R: Neutral CSFs always describe neutral configurations, and ionic CSFs always describe ionic ones. This is reminiscent of the characterization of wave functions as adiabatic and diabatic. Adiabatic states are eigenstates of the molecular Hamiltonian, and wave functions representing adiabatic states are adiabatic wave functions. Whenever two or more potential energy surfaces closely approach each other in certain regions of configurational space, their energies and corresponding state functions undergo rapid changes in these regions, which causes problems not only in the determination of the electronic wave functions, but also in the treatment of reaction dynamics. In both cases, the resolution of adiabatic wave functions in terms of diabatic wave functions is a possible way out, because diabatic wave functions are much easier to handle and they do not rapidly change in the same spatial regions as adiabatic wave functions do. The drawback is that diabatic states are not eigenfunctions of the Hamiltonian, but must be constructed on the basis of criteria coming from the application field under consideration. There are two conceptually totally different approaches to constructing diabatic wave functions, as described by Atchity and Ruedenberg [60]. One is the dynamic approach, where one has to deal with a set of coupled differential equations between the adiabatic states with large coupling terms (nuclear-derivative matrix elements between electronic states), and the construction of diabatic states is guided by the goal of minimizing the coupling terms in the dynamic equations. Angeli et al. [61] calculated the coupling matrix elements between the neutral and ionic CSFs, both based on AOs and OAOs, and showed that the coupling matrix element was only zero for the OVB CSFs, but not for the VB CSFs. In the second approach, the electronic structure approach, one starts with the observation that, in certain regions of coordinate space, drastic changes occur in the electronic structures of the adiabatic states, and the construction of diabatic states is guided by the goal of finding wave functions whose electronic structures maintain their essential characteristics over the entirety of such regions. 
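For orientation, the coupling terms mentioned in the dynamic approach are the usual derivative (nonadiabatic) couplings, and diabatic states are obtained by a geometry-dependent rotation of the adiabatic ones; the expressions below are generic textbook definitions, not formulas taken from [60]:

$$ \tau_{12}(Q)=\Big\langle \phi^{\mathrm{ad}}_{1}\Big|\frac{\partial}{\partial Q}\,\phi^{\mathrm{ad}}_{2}\Big\rangle, \qquad \begin{pmatrix}\phi^{\mathrm{dia}}_{1}\\ \phi^{\mathrm{dia}}_{2}\end{pmatrix} = \begin{pmatrix}\cos\alpha(Q) & \sin\alpha(Q)\\ -\sin\alpha(Q) & \cos\alpha(Q)\end{pmatrix} \begin{pmatrix}\phi^{\mathrm{ad}}_{1}\\ \phi^{\mathrm{ad}}_{2}\end{pmatrix}. $$

In the dynamic approach the mixing angle α(Q) is chosen to (nearly) remove the derivative couplings; in the electronic structure approach it is chosen so that the configurational character of the transformed states stays uniform.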
In the following, the electronic structure approach, based on the maximization of configurational uniformity, as developed by Atchity and Ruedenberg, is presented [60]. This approach requires the ability: (i) To quantitatively assess the character of the structure of a wavefunction in electronic coordinate space; and (ii) To monitor changes in these characteristics as functions of the nuclear positions over regions in nuclear coordinate space. When these characteristics change only little for a wavefunction in such a region, then we consider the electronic structure of that wavefunction uniform in that region. This approach was designed for the investigation of a few states, say N , by means of state-averaged MCSCF wave functions, where it is assumed that the number of CSFs M is much larger than the number of states. If the MOs used to span the CSFs are unambiguously defined, the electronic structure of each state function can be characterized by the CI coefficients. Furthermore, it is assumed that each state function is dominated by only a few CSFs, and their CI coefficients characterize the electronic structure. It is furthermore assumed that, when a molecule deforms during a reaction, all involved MOs deform continuously along the reaction path in nuclear coordinate space, and since it is known that MOs only change their shape gradually, any rapid change in the electronic structure comes from strong changes in the CI coefficients as functions of the nuclear coordinates. Given this, a wavefunction is considered to essentially maintain its electronic structure along a nuclear coordinate path if the deforming configurations in its dominant part remain the same. If this holds true for all points in a nuclear coordinate region, then we consider the state to have a uniform electronic structure in this region. In this sense, we equate electronic structure uniformity with configurational uniformity. In regions where strong change in the electronic structure is observed, the adiabatic states will not exhibit configurational uniformity over the entire region. It is now assumed that the M CSFs can be partitioned into N configuration groups so that in any adiabatic wave function, all members of a certain configuration group are always either dominant or not dominant. If all of these conditions are met, one can surmise that the N adiabatic states can be expressed as linear combinations of N diabatic states, each of which is dominated, everywhere in nuclear coordinate space, by the configurations of one and the same configuration group. . . . The MOs that allow the construction of such diabatic states are called diabatic MOs (DMO). Nakamura and Truhlar [62][63][64] developed a method to determine DMOs, called the four-fold way, and investigated the deformation of several molecules during chemical reactions. They showed that good candidates for DMOs are orbitals localized on a few atoms. The results for the H 2 molecule by Angeli et al. [61] are in full agreement with the electronic structure approach: When the DMOs are OAOs, the adiabatic states are linear combinations of the neutral and ionic OVB CSFs, which show electronic uniformity along the whole reaction coordinate. For the collinear reaction of two H 2 molecules, Nakamura and Truhlar showed that the second and third excited singlet states can be characterized by the σ → σ * excitation in either of the two H 2 molecules. Therefore, the DMOs must be FOs localized on each molecule. 
When the delocalized MOs, using the input data for this example, are Procrustes localized on the hydrogen molecules and these localized FOs are used as DMOs, the diabatic states are identical with those constructed with Nakamura's DMOs. This show, that, at least for this example, the FOs are indeed DMOs. More investigations are necessary to find out whether or not OVB CSFs are always diabatic; investigations using the coupling matrix method will be made in Angeli's group, and those using the electronic structure approach will be made in the group of Sax. As mentioned, there are many possibilities to construct states with a varying degree of diabaticity, and often the diabatic character was claimed, but not proven. This is especially true for conventional VB CSFs, where neutral and ionic CSFs were often claimed to be diabatic (see, for example, [27]). In the group of Malrieu, a method was developed to construct from adiabatic CI wave functions nearly diabatic states that are as close as possible to conventional VB CSFs [65,66], and it was said that for intermediate nuclear configurations, the transformed wave functions resemble as much as possible the VB CSFs; one may therefore consider them as nearly diabatic [65]. Angeli et al. [61] showed that conventional VB CSFs are not diabatic; therefore, one can question the diabaticity of the transformed wave functions for intermediate nuclear configurations, unless it is proven explicitly. Discussion Local information about chemical reactions is, in general, hidden by the delocalized MOs that are used to construct high-quality wave functions. VB methods are made to reveal local information. Conventional Slater-Pauling VB wave functions based on non-orthogonal AOs hide the real electronic structure behind a seemingly neutral HL-type wave function; OVB wave functions also reveal this information. OVB reading of MCSCF wave functions is therefore an excellent means to get information about charge and spin redistribution during chemical reactions, especially reactions connected to chemical bonding. The OVB view of chemical bonding is different from the VB view, unless one is only interested in calculating energy curves [67]. The sample reactions discussed above demonstrate how the fragment properties determine the local processes in elementary reactions. During the dimerization of triplet carbene, for example, no local excitations are necessary to make the carbenes ready for bonding; each fragment is already in a local high-spin state, and therefore, TT CSF is the most important neutral CSF. Electron sharing during bonding is reflected by the increasing weights of the two ionics, CX1 and CX2; these two CSFs are necessary to describe the angular correlation of the electrons in the anion/cation pair. In the ethene molecule, angular correlation in the neutral electron distribution becomes possible by CSF DX. The local processes during planar dimerization of singlet silylenes, on the other hand, are very different. At large distances, the two fragments are described by the no-bond configuration, while the angular correlation of the singlets is described by CSF DX. To make the fragments ready for bonding, they must be locally excited into the lowest triplet states, where the Fermi correlation of the two valence electrons causes a pronounced contraction of the electron distribution and a strong energetic stabilization, as can be seen from the kinks in all (but one) energy curves at 3.5 Å. 
Electron sharing is again described by the two ionic CSFs, CX1 and CX2. This reaction shows not only that preparing fragments for bonding often requires spin reorganization, it also shows that excitation into local high-spin states yields strong energetic stabilization. Additionally, one can see that spin reorganization processes are accompanied by drastic changes in the fragment geometries, like bond lengths or bond angles. The change in the HSiH angle from 92 to 118 degrees will certainly be seen by some chemists as an indication that sp 2 hybrid orbitals are needed to correctly describe the equilibrium geometry of planar disilene, but this post festum argument completely ignores the information that we have for the spin reorganization in the reacting fragments during the dimerization reaction. Comparison of planar and non-planar silylene dimerization shows how local processes also depend on the local symmetry. With no σ−π separation, preparation for bonding is also possible by a single-charge transfer, represented by CSF C, not just by excitation into a local high-spin state. For planar dimerization, CSF C does not have the symmetry of the ground state and can, therefore, not contribute, but as soon as the symmetry has been lowered, single-charge transfer becomes an important process during bonding. At very short distances, where the disilene molecule is again planar, the weight of CSF C is zero again. One can also see that, at much larger distances, the ionic CSF C becomes more important than the neutral CSF TT. Electron sharing is represented only by the ionic CSF CX1, which, at short distance, has the same weight as the most important neutral CSF TT. The local symmetry at the silicon atoms during bonding resembles that of a silyl radical; i.e., the HSiH angles are closer to tetrahedral angles, which, again, some chemists prefer to explain with the help of sp 3 hybrids. For the insertion of singlet carbene into the hydrogen molecule, both reactants are in electronic states that are not prepared for bonding; i.e., the carbene is in an excited state that corresponds to the silylene ground state. One could assume that, with simple de-excitation from the local 1 A 1 state into the local The traditional explanation of bonding based on hybrid orbitals has several drawbacks. First of all, argumentations based on hybrid orbitals are mostly used to explain the local symmetry at heavy atoms in the reaction product; only seldom are they used to explain the continuous geometry changes during the bonding process. Hybridization is the superposition of atomic eigenfunctions of different angular momenta under the influence of the potentials of all other atoms. The local symmetry of this perturbing potential determines the kind of hybridization; hybridization is thus the result of the molecular nuclear framework, not its cause. Therefore, if one wants to find the origin of the molecular geometry, one has to find the processes that cause the change in molecular geometries from the reactants to the product. Lennard-Jones [68] showed in the early 1950s that linear, trigonal and tetrahedral structures, as in ethyne, ethene and ethane, are compatible with the maxima of the probability of finding two, three or four particles with the same spins on a sphere. According to the Pauli exclusion principle, the spins avoid each other optimally, and the so-called Pauli repulsion is more important for the shape of molecules than electrostatic forces. 
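Lennard-Jones' observation can be checked with a small numerical experiment; minimizing the mutual Coulomb repulsion of N points on a unit sphere (the Thomson problem) is used below merely as a convenient stand-in for maximizing the mutual avoidance of like-spin electrons, so the correspondence with [68] is illustrative only:

```python
import numpy as np
from scipy.optimize import minimize

def repulsion(angles, n):
    """Sum of 1/r_ij for n points on the unit sphere, parametrized by (theta_i, phi_i)."""
    theta, phi = angles[:n], angles[n:]
    xyz = np.column_stack([np.sin(theta) * np.cos(phi),
                           np.sin(theta) * np.sin(phi),
                           np.cos(theta)])
    return sum(1.0 / np.linalg.norm(xyz[i] - xyz[j])
               for i in range(n) for j in range(i + 1, n))

rng = np.random.default_rng(1)
for n in (2, 3, 4):
    best = min((minimize(repulsion, rng.uniform(0.1, 3.0, 2 * n), args=(n,))
                for _ in range(20)), key=lambda r: r.fun)
    theta, phi = best.x[:n], best.x[n:]
    xyz = np.column_stack([np.sin(theta) * np.cos(phi),
                           np.sin(theta) * np.sin(phi),
                           np.cos(theta)])
    cosang = np.clip(xyz[0] @ xyz[1], -1.0, 1.0)
    print(n, "points -> angle between the first two:",
          round(float(np.degrees(np.arccos(cosang))), 1), "degrees")
```

The optimal arrangements are the antipodal pair, the equilateral triangle and the regular tetrahedron, i.e. the linear, trigonal and tetrahedral angles quoted above.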
In a realistic molecule, the spin distribution is not as isotropic as it is on a sphere, and therefore, the maxima of the spin distributions must be calculated with appropriate methods; e.g., quantum Monte Carlo (QMC) methods. Such calculations were made by Scemama et al. [69] and Lüchow [70]; the combination of OVB and QMC methods seems to be very promising for investigating the processes of chemical bonding. With such combined investigations, it should be possible to clarify the reality (or otherwise) of fragment states in a molecule. Computational Methods The CAS-SCF wave function for the model reactions was calculated with GAMESS [71] using the 6-31G* basis. The programs for creating the FOs were implemented in a local version of GAMESS. The H 2 calculations using the symmetrically orthonormalized 1s AOs were performed by using a MATLAB program. The formulae were taken from Slater's book [43]. Conclusions When VB CSFs are made with orthogonal fragment orbitals, the CSFs themselves will also be orthogonal; therefore such CSFs describe electronic structures that maintain their characteristics over large regions of coordinate space. In other words, neutral CSFs remain neutral and ionic CSFs remain ionic; in contrast to non-orthogonal CSFs used in conventional VB. In this way, OVB describes all local processes occurring during chemical reactions much better than conventional VB. If orthogonal transformations are used to localize delocalized CASSCF MOs on predefined fragments, then CASSCF wave functions will be transformed into OVB wave functions, thereby revealing information that is otherwise hidden in wave functions made with both delocalized orthogonal and localized non-orthogonal orbitals. This is what OVB reading of a CASSCF wave function means.
v3-fos-license
2018-12-03T06:46:26.797Z
2017-11-10T00:00:00.000
54583414
{ "extfieldsofstudy": [ "Physics", "Mathematics" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00220-019-03329-3.pdf", "pdf_hash": "787ae3e186ab0baf77e5be52ef98219c136760fc", "pdf_src": "Arxiv", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46286", "s2fieldsofstudy": [ "Physics", "Mathematics" ], "sha1": "787ae3e186ab0baf77e5be52ef98219c136760fc", "year": 2019 }
pes2o/s2orc
Perturbation theory for almost-periodic potentials I. One-dimensional case We consider the family of operators $H^{(\epsilon)}:=-\frac{d^2}{dx^2}+\epsilon V$ in ${\mathbb R}$ with almost-periodic potential $V$. We study the behaviour of the integrated density of states (IDS) $N(H^{(\epsilon)};\lambda)$ when $\epsilon\to 0$ and $\lambda$ is a fixed energy. When $V$ is quasi-periodic (i.e. is a finite sum of complex exponentials), we prove that for each $\lambda$ the IDS has a complete asymptotic expansion in powers of $\epsilon$; these powers are either integer, or in some special cases half-integer. These results are new even for periodic $V$. We also prove that when the potential is neither periodic nor quasi-periodic, there is an exceptional set $\mathcal S$ of energies (which we call $\hbox{the super-resonance set}$) such that for any $\sqrt\lambda\not\in\mathcal S$ there is a complete power asymptotic expansion of IDS, and when $\sqrt\lambda\in\mathcal S$, then even two-terms power asymptotic expansion does not exist. We also show that the super-resonant set $\mathcal S$ is uncountable, but has measure zero. Finally, we prove that the length of any spectral gap of $H^{(\epsilon)}$ has a complete asymptotic expansion in natural powers of $\epsilon$ when $\epsilon\to 0$. Introduction We consider the operator where ε > 0 is a small parameter and V is a real-valued almost-periodic potential. We are interested in various quantitative and qualitative spectral properties of H as ε → 0, and this paper is the first one in a series of articles devoted to the study of these properties of H under various assumptions. In this paper we assume that the dimension d = 1, so that (1.2) H = H (ε) := − d 2 dx 2 + εV. The quantities we will be interested in are: the length of the spectral gaps, and the behaviour of the integrated density of states (IDS) N (λ; H (ε) ) when the spectral variable λ is fixed (and ε → 0). It has been noticed by Arnold, [1] that if H is a Hill operator (1.2) with V being a finite trigonometric periodic polynomial 3) is infinite, then the size of all gaps is (generically) proportional to ε. Our first result is the extension of this observation to the case of almost-periodic potentials; moreover, we prove that the length of each spectral gap has a complete asymptotic expansion in natural powers of ε. We also prove similar expansions for the upper and lower ends of each spectral gap. The leading power in each expansion will depend on whether the potential is a finite or infinite linear combination of trigonometric functions (we call such operators quasi-periodic and almost-periodic respectfully). In the quasi-periodic case the leading power of the length of the gap opened around the square of each frequency θ will increase together with the order of θ (see the next section for the precise definitions and formulation of the results), whereas in the almost-periodic setting when no Fourier coefficients vanish, all expansions begin with the first power of ε. These expansions are formally uniform, but effectively they are not, because the higher the order of a frequency θ is, the smaller ε we need to choose to 'see' the expansion of the length of the gap generated by θ (i.e. if we choose ε not very small, then the remainder in the expansion will be larger than the asymptotic terms). Somewhat similar problems were considered in [2] and, in the discrete setting, in [3] (see also [4] and references there). However, there is a significant difference between these papers and our results. 
In these papers the authors have fixed ε and studied the behaviour of the gap length as a function of the 'natural label' of the gap (corresponding, roughly, to what we call the order of a frequency, see below for details). So, they were able to obtain information about all gaps simultaneously, but this information was either bounds (upper and lower), or one asymptotic term, whereas we obtain more detailed information (complete asymptotic expansion) about smaller number of gaps. The second problem we consider is as follows. Let λ ∈ R be a fixed number and consider the behaviour of the IDS of H (ε) at λ when ε → 0. Questions of this nature (how the value of IDS at a fixed energy depends on the value of a small coupling constant) have arisen in our study of perturbations of Landau Hamiltonians by almost-periodic potentials. Despite the slightly esoteric feel of this type of questions, we believe they are more natural than it may seem at the first sight, especially given that the answers are quite surprising. Let us briefly describe the effects happening in one dimension; we are going to devote the second paper in this series to discuss the multidimensional case, where the results are even more unexpected. Suppose first that V is quasi-periodic. Then, whenever λ is not a square of a frequency, there is a complete asymptotic expansion of N (λ; H (ε) ) in integer powers of ε. Suppose, λ = θ 2 = 0, where θ is a frequency. Then the type of the expansion will depend on the relationship between τ (the constant Fourier coefficient of V ) and ν (the Fourier coefficient at e i2θx ). First we notice that, as we will show in this paper, there is a spectral gap of H (ε) around θ 2 of length ∼ 2νε. Therefore, if |τ | < |ν|, then the point λ + τ ε stays inside this gap and, as a result, the IDS does not depend on ε when ε is small. If, on the other hand, |τ | > |ν|, then the shift by τ ε pushes our point λ well outside the spectral gap, and we obtain the standard asymptotic expansion in integer powers of ε. The most interesting case is |τ | = |ν|, when the point λ + τ ε is approximately at the edge of the spectral gap. In this case generically the answer will depend on the sign of τ . For one value of this sign the point λ + τ ε is still located in the gap and so the IDS is constant. However, for the opposite value of the sign of τ the point λ + τ ε will be pushed just outside the gap and, as a result, the IDS will have a complete expansion in half-integer powers of ε (where we define half-integers as (Z/2) \ Z). Similar situation happens when we look at the point λ = 0: we have expansion in half-integers whenever τ < 0; otherwise, the expansion is in integers. The bottom line is, if V is quasi-periodic, then for all λ we have a complete asymptotic expansion of N (λ; H (ε) ) as ε → 0, which contains either integer, or half-integer powers of ε. An interesting phenomenon occurs when we look at this problem in the almost-periodic setting, for example, when all the Fourier coefficients are non-zero. Namely, in this case there is a substantial set S such that for λ 1/2 ∈ S there is no asymptotic expansion of N (λ; H (ε) ) at all; in fact, there are uncountably many values of λ for which the remainder N (λ; H (ε) ) − N (λ; H (0) ) is not even asymptotically equivalent to any power of ε. 
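The role of the comparison between $|\tau|$ and $|\nu|$ can be read off a standard two-level heuristic; the display below is only a back-of-the-envelope illustration consistent with the statements above, not a computation taken from the paper. Near $\xi \approx \theta$ the potential couples the exponentials $e_\xi$ and $e_{\xi-2\theta}$, and the corresponding $2\times 2$ model

$$ \begin{pmatrix} \xi^2+\epsilon\tau & \epsilon\hat V_\theta\\ \epsilon\overline{\hat V_\theta} & (\xi-2\theta)^2+\epsilon\tau \end{pmatrix} $$

has, at $\xi=\theta$, the eigenvalues $\theta^2+\epsilon(\tau\pm|\hat V_\theta|)$, so to first order the gap generated by $\theta$ occupies the interval $[\theta^2+\epsilon(\tau-|\nu|),\,\theta^2+\epsilon(\tau+|\nu|)]$ with $\nu=\hat V_\theta$; whether the fixed energy $\lambda=\theta^2$ lies inside this interval is governed precisely by the comparison of $|\tau|$ with $|\nu|$ and, in the borderline case $|\tau|=|\nu|$, by the sign of $\tau$. In the genuinely almost-periodic case, infinitely many frequencies can produce such borderline situations at the same energy, which gives a rough intuition for the exceptional set $\mathcal S$ of energies just mentioned.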
This set (which we call the super-resonance set) is uncountable, but has measure zero; the interesting feature of this set is that it is present no matter how quickly the Fourier coefficients of V go to zero -whether V is smooth, or analytic, the super-resonant set without the asymptotic expansion of IDS is always uncountable (but perhaps its dimension may depend on the smoothness of V ). The method we use for obtaining these results is a version of the gauge transform method used in [7] and [6]. The only difference is that in [7] and [6] we had fixed coupling constant and assumed that the energy λ was large (so that the small parameter was λ −1 ), whereas in the present paper the small parameter is the coupling constant ε. This difference is not essential, so the construction of the gauge transform can be performed almost word-to-word as it is done in [7] and [6]. This method allows us to find two operators, H 1 and H 2 so that H 1 is unitarily equivalent to H, H 2 is close to H 1 in norm, and H 2 is almost diagonal (in the sense that most of the off-diagonal matrix coefficients of H 2 vanish). For the sake of completeness, we have written the details of the gauge transform construction relevant to our setting in the Appendix; in the main body of the paper we will give a brief description of the method and use the relevant properties of H 1 and H 2 without proof. The structure of the rest of the paper is as follows: in the next section we will give all the necessary definitions and formulate the main results. In Section 3 we will discuss the quasi-periodic operators, and in Section 4 the almost-periodic operators. Finally, in the Appendix we will describe the method of the gauge transform. Here, V θ are complex numbers (called the Fourier coefficients of V ; since V is real, we havê V −θ =V θ ), and Θ = Θ(V ) ⊂ R d is a finite set, called the set of frequencies (or rather half-frequencies; the factor 2 is used purely for convenience) of V . We assume without loss of generality that Θ is symmetric about the origin and contains it. Denote by l the number of independent elements in Θ (so that |Θ| = 2l + 1). For each natural L we denote Θ L := Θ + Θ + · · · + Θ (the algebraic sum of L copies of Θ) and put Θ ∞ := ∪ L Θ L . When θ ∈ Θ ∞ , we denote by Z(θ) the smallest number L for which θ ∈ Θ L and call this number the order of the frequency θ. We put A simple combinatorial estimate shows that We also put τ :=V 0 , Θ := Θ \ {0} and V := V − τ , so that The second type of potentials we are going to consider are smooth almost-periodic, by which we mean that Θ is still a finite set, but we have for θ ∈ T m and arbitrary positive P . We also assume that Θ satisfies the diophantine condition, i.e. for θ ∈ Θ m we have |θ| m −P 0 , where P 0 > 0 is fixed. In either of these two cases (quasi-or almost-periodic potentials) we also assume (as we can do without loss of generality) that Our first main result concerns the spectral gaps. Theorem 2.1. Suppose, V is either quasi-periodic, or infinitely smooth almost-periodic and satisfies all the above assumptions. Suppose, θ ∈ Θ ∞ . Then for sufficiently small ε operator H has a (possibly trivial) spectral gap around |θ| 2 , the length of which, as well as its upper and lower ends, have complete asymptotic expansions in natural powers of ε. IfV θ = 0 then the asymptotic expansion for the upper (lower) end of the gap starts with |θ| 2 ± |V θ |ε + O(ε 2 ). Remark 2.2. 
IfV θ = 0, we cannot guarantee that an expansion for the gap-length is always non-trivial, i.e. it could happen, in principle, that the length of the gap is O(ε +∞ ). The next result involves two quantities, s 2 (0) and g 2 (0) which will be defined in the next section (in formula (3.24)). Throughout the paper we use the convention that each time we use letters a j (or a j (λ)) for coefficients in asymptotic expansions, the exact values of these coefficients could be different. The same refers to the use of C which can mean a different positive constant each time we use it. Theorem 2.3. Suppose, V is quasi-periodic. Then for sufficiently small ε > 0 the following holds: (i) For λ < 0 we have N (λ; H) = 0. Theorem 2.4. Suppose, V is infinitely smooth almost-periodic, but not periodic, and V θ = 0 for any θ ∈ Θ ∞ . Then there exists a set S (which we call a super-resonance set) such that a complete power asymptotic expansion of N (λ; H) exists if and only if λ ∈ S. The set S is uncountable and has measure zero. Remark 2.5. As we will see in the proof, there are uncountably many values of λ for which the difference N (λ; H (ε) ) − N (λ; H (0) ) properly oscillates between C 1 ε j and C 2 ε j , where C 1 = C 2 and j equals 1 or 2. We will think of a point ξ ∈ R as the exponential function e ξ (x) := e iξx lying in the Besikovich space B 2 (R) (the collection of all formal countable linear combinations of {e ξ } with square-summable coefficients). Then for arbitrary pseudo-differential operator W with symbol (in a left quantisation) w = w(ξ, x) being quasi-periodic in x, Thus, we can think of the Fourier coefficientsŵ(θ, ξ) of the symbol as the matrix element of W joining ξ and ξ + 2θ: (2.22)ŵ(ξ, θ) = W e ξ , e ξ+2θ B 2 (R) . In our paper [6] it is explained that instead of working with operators acting in L 2 (R), we can consider operators with the same symbol acting in B 2 (R) and work with them. This will not change the spectral properties we are studying in our paper (for example, the spectrum as a set is the same whether our operator acts in L 2 (R) or B 2 (R)). Quasi-periodic potential In this section we assume that the potential V is quasi-periodic, i.e. that (2.1) holds. 3.1. Gauge transform: general description. First of all, we give a brief outline of the construction of the gauge transform of our operator. The details of this construction are similar to those in [6]; for the sake of completeness, we present them in the Appendix. Let us fix a natural number N . All the constructions are going to depend on the choice of N , but we will often omit writing N as the variable. Applying the gauge transform leads to a pair of operators, H 1 = H and H 2 is almost diagonal in the sense that it can be decomposed into a direct integral with all fibres being finite dimensional (moreover, as we will see, the dimension of all fibres will be 1 or 2). Also, the frequencies of H 2 are inside the set Θ 3N . Here, the coefficient 3 technically appears in the gauge transform approach (see Appendix). It reflects the fact that one has to make slightly more than N steps to achieve the error of order ε N . Once we have constructed these operators, it turns out that we can study spectral characteristics of H by means of studying the corresponding spectral characteristics of H 2 . Indeed, the spectra of H and H 1 are the same, and so are the lengths of the spectral gaps. Also, the lengths of the spectral gaps of H 1 and H 2 differ by at most ε N . 
Concerning the IDS, it was proved in [6] that More precisely, we have shown in [6] that the immediate consequence of (3.1) is 2 ). We also define and notice the obvious property This trivial consideration is important for understanding of some of the effects described later. Now we choose a small positive number δ = δ(N ), to be specified later and for each non-zero frequency θ ∈ Θ (H 2 ) we put ). Next, let ψ = ψ(ξ) be a standard smooth non-negative cut-off function satisfying supp ψ ⊂ [−1/2, 1/2] and ψ(ξ) = 1 for ξ ∈ [−1/4, 1/4], and let ϕ := 1 − ψ. We put Note that We also putχ The region R(θ) is called the resonance zone corresponding to θ. Since (for fixed N ) the number of resonance zones is finite and the length of them goes to zero, it implies that for sufficiently small δ these zones do not intersect. We also denote by the 'overall' resonant set corresponding to ε; we obviously have In what follows we always assume that δ(N ) is sufficiently small so that different resonance zones R(θ; δ) do not intersect for all θ ∈ Θ 9N ; we also take ε so small that ε ≤ δ 2 . Remark 3.1. It is not difficult to see that in case when Θ satisfies Diophantine condition on frequencies, the parameter δ(N ) can be chosen to be c N with some constant c = c(Θ) with all constructions and statements of Section 3 being valid. The important property of the operator H 2 established in the appendix is as follows: the Fourier coefficientsĥ 2 (ξ; θ) satisfy This property implies that if a point ξ lies outside all the resonance zones, then the onedimensional subspace spanned by the corresponding e ξ is invariant with respect to H 2 . If, on the other hand, for some (unique) θ we have ξ ∈ R(θ), then the two-dimensional subspace spanned by e ξ and e ξ+2θ is invariant with respect to H 2 . The most important property of G is the following one: we have where we have denoted {ξ, G(ξ) ≤ λ} =: Ω λ = Ω λ (G). This property was proved in [6] and it immediately implies that the spectrum of H 2 is Equation (3.23) shows that in order to study the spectrum of H 2 , we need to look at the range of G. Our discussions above and Figure 1 imply the following statement: Later, we will obtain more precise information on the location and the length of the gaps. The characteristic polynomial of the matrix M (ξ) − µ is (3.25) and the eigenvalues of M (ξ) are 3.3. Spectral gaps. Let us find the size of the spectral gap around It is easy to see and will be even clearer in what follows that all the objects we are interested in require detailed information only from the interior of the resonant zones. In particular, maximum value of σ − and minimal value of σ + are attained inside the interval [− δ 100|θ 0 | , δ 100|θ 0 | ] (assuming of course, ε is small enough). This allows us to ignore cut-off functions ϕ θ introduced above as they are equal to zero in the region of interest. 3.4. Integrated Density of States. Now let us discuss the IDS of H 2 . Formula (3.22) implies that in order to study the integrated density of states, we need to solve the equation In the unperturbed case (when G(ξ) = ξ 2 ) this equation has two solutions whenever λ > 0. After the perturbation, this equation may have no solutions (when λ is inside a spectral gap), or it may have one solution (when λ is exactly at the spectral edge of H 2 ). As we will see later, in other cases equation (3.31) has exactly two solutions. If λ is negative, the above constructions imply that N (λ; H) = 0 for sufficiently small ε. 
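Inside a resonance zone the fibre of H₂ is a 2 × 2 matrix acting on span{e_ξ, e_{ξ+2θ}}. Keeping only the leading terms of its symbol (the true symbol carries further corrections produced by the gauge transform), a short symbolic computation shows how a gap of length ≈ 2ε|V̂_θ| opens at the centre ξ = −θ of the zone, matching the first term in Theorem 2.1; this is a caricature under that simplification, not the full expansion.

```python
import sympy as sp

xi, theta, tau = sp.symbols('xi theta tau', real=True)
eps, v = sp.symbols('epsilon v', positive=True)     # v plays the role of |V_theta|; phases are dropped

# Leading-order 2x2 fibre of H_2 on span{e_xi, e_{xi+2*theta}} inside the resonance zone R(theta)
M = sp.Matrix([[xi**2 + eps * tau, eps * v],
               [eps * v, (xi + 2 * theta)**2 + eps * tau]])

branches = [sp.simplify(e.subs(xi, -theta)) for e in M.eigenvals()]
print(branches)                                          # theta**2 + eps*(tau -/+ v), in some order
print(sp.simplify(sp.Abs(branches[0] - branches[1])))    # gap length 2*eps*v at leading order
```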
Suppose now that λ is positive and √ λ ∈ Θ 3N (in particular, λ = 0). Then, for sufficiently small δ, points (both of them) ξ with ξ 2 = λ do not belong to any resonance region; the same is true for points of the form λ − ετ . This, together with (3.19), implies that the equation G(ξ) = λ has two solutions (recall that we use convention of not distinguishing two solutions that are within distance O(ε N ) from each other), call them G −1 (λ) > 0 and −G −1 (λ). Monotonicity of G implies that (again for sufficiently small δ) the following holds: whenever 0 < η < G −1 (λ), we have G(η) < λ, and whenever η > G −1 (λ), we have G(η) > λ. The last case we have to consider is λ = 0. The only points ξ where there is a chance that G(ξ) is negative are located in a (1 + |τ |) 1/2 ε 1/2 -neighbourhood of the origin and are not located in any resonance zone. Therefore, we have Now the simple use of the Implicit Function Theorem immediately gives the answer. If τ > 0, then N (0; H 2 ) = 0 for small ε. If τ < 0, then Finally, if τ = 0, then we have to note that formula (3.17) implies that for small ξ and non-trivial V we have f 2 (ξ; 0) < 0 and, therefore, All the asymptotic formulas for N (λ; H 2 ) obtained above together with equations (3.2) and (3.3) lead to Theorem 2.3. Again, it is easy to see that the corresponding expansions are independent of the particular choice of the cut-off function ϕ. Almost-periodic potential Let us discuss the situation when the potential is not quasi-periodic, but smooth almost-periodic, i.e. Θ is still a finite set, but we have We also assume that Θ satisfies the diophantine condition, i.e. for θ ∈ Θ m we have |θ| m −P 0 , where P 0 > 0 is fixed. Remark 4.1. We can relax the diophantine properties of the frequencies if we assume a faster decay of the Fourier coefficients: the only condition that we effectively need is that the resonance zones do not intersect, see (4.13). The way we perform the gauge transform is, essentially, the same as in the quasiperiodic case, with one important difference: we cannot afford to have infinitely many resonance zones, therefore, before transforming the operator H to H 1 and H 2 as above, we need to turn H to a quasi-periodic operator by truncating the potential V . The level of the truncation depends on the size of ε -the smaller ε, the more frequencies (and resonance zones) we need to keep. Thus, the number of resonance zones will be finite for each fixed ε, but, as opposed to the quasi-periodic situation, will increase as ε goes to zero. More specifically, let us assume first that 0 < ε < ε 0 , where ε 0 is a positive number, to be chosen later. We put ε n := 2 −n ε 0 and I n := [ εn 4 , ε n ]. The gauge transform construction will be performed separately for each I n and the asymptotic expansions we will obtain will hold only for ε ∈ I n . In order to 'glue' these expansions together at the end, we will use the following lemma: Here, a j;n are some coefficients depending on j and n (and M ) satisfying ] + 1 such that for all ε, 0 < ε < ε 0 we have: This Lemma (in slightly different form) is proved in Section 3 of [6]; see also [5]. In order to apply it, we have to establish (4.4)-(4.5). Whenever we will be using this lemma, it will be rather straightforward to check estimates (4.5) for the coefficients from the constructions, so in what follows we will concentrate on establishing (4.4). Remark 4.3. Note that (4.4) is not a 'proper' asymptotic formula, since the coefficients a j;n are allowed to grow with n. 
Now, we will describe the construction in more detail. Let us fix a natural number N (which signifies that our errors are going to be O(ε N )) and suppose that ε ∈ I n . All the constructions below depend on the choice of (n, N ), but we will often omit writing n and N as the variables. Recall that for each θ ∈ Θ ∞ we define Z(θ) := m for θ ∈ T m . We also fix the smoothness P of the potential so that (4.7) |V θ | Z(θ) −P ; this (large) P depends on P 0 and N and will be chosen later. For each natural L we define the truncated potential Estimate (2.4) implies assuming of course that P is sufficiently large. Now, we chooseL =L(n, N ) so large that the norm of the operator of multiplication by V − V L is smaller than ε N n . The previous estimate shows that it is enough to take (4.10)L(n; N ) := ε − 2N P n to achieve this. Then we run 3N steps of the gauge transform as described in the appendix, but for the operatorĤL := H + εVL. The main difference with the gauge transform procedure for the previous section is that now the width of each resonant zone decreases as n increases. More precisely, we put Then the frequencies of the resulting operator H 2 will be inside the set (ΘL) 3N = Θ 3NL . Note that the resonant zones obtained at each step do not intersect. Indeed, suppose that θ 1 , θ 2 ∈ Θ 3NL , θ 1 = θ 2 . Then θ 2 − θ 1 ∈ Θ 6NL and, therefore, our diophantine condition implies (4.13) n for sufficiently small ε n , assuming that P is chosen so large that (4.14) 3N P 0 P < 1/8. At the same time the length of the resonant zone corresponding to θ ∈ Θ 3NL is bounded from above by n . Remark 4.4. Of course, condition (4.14) means that the bigger N is (i.e. the more asymptotic terms we want to obtain), the bigger P we should take (i.e. the smoother potentials we have to consider). This construction leads to two operators, H 1 and H 2 with the same properties as described in the previous section. For each θ ∈ Θ 3NL we denote by R(θ) = R(θ; n) the resonant zone -the interval centred at −θ of length ε 1/2 n 2|θ| . We also denote (4.15) R(ε n ) = R n := ∪ θ∈Θ 3NL R(θ; n); this is the resonant zone corresponding to I n . The meaning of this set is that the symbol h 2 of H 2 is diagonal for ξ ∈ R n . This means that all Fourier coefficientsĥ 2 (ξ; θ) = 0 whenever θ = 0 and ξ ∈ R n ; our construction implies that even more is true:ĥ 2 (ξ; θ) = 0 unless ξ ∈ R(θ; n). The main difference between the almost-periodic and quasi-periodic cases is the following: in the quasi-periodic case the resonant set was fixed for any given N as δ(N ) and decreasing as N grows (see (3.13)), whereas in the almost-periodic case R(ε n ) is fixed only when ε ∈ I n , and in general it is no longer true that R n+1 ⊂ R n (since the smaller ε n leads to bigger n and biggerL(n) given by (4.10) and, thus, R n+1 consists of a bigger number of smaller zones than R n ). Estimate (2.4) implies that the number of elements in Θ 3NL can be estimated by which implies (4.17) meas(R n ) < ε 1/6 n if we choose P large enough. Let us now discuss the behaviour of the gaps of H 2 (and, therefore, of H). This can be done using the arguments from the quasi-periodic case. When ε ∈ I n , the operator H 2 has gaps around points |θ| 2 , θ ∈ Θ 3NL(n) , and the length of each such gap has asymptotic expansion in natural powers of ε, according to Theorem 3.6. Now we notice that if θ ∈ Θ 3NL(n) , then θ ∈ Θ 3NL(m) for any m ≥ n and, therefore, there is a gap of H 2 around θ for any m ≥ n. 
The length of this gap has an asymptotic expansion given by Theorem 3.6 for ε ∈ I m , m ≥ n (Here we assume that ε 0 is chosen to be small enough, depending only on N ). These expansions may be different in general, but we can use Lemma 4.2 to deduce that we have a complete power asymptotic expansion of the length of a gap valid for all ε < ε 0 . Thus, we obtain Theorem 2.1 for smooth almost-periodic case. Now we discuss the asymptotic behaviour of the IDS. Recall that all our constructions are made for fixed N ; sometimes, we will be emphasising this and make N an argument of the objects we consider. First, we introduce the set of ξ > 0 such that ξ ∈ Θ ∞ and there is an infinite sequence n j → ∞ and θ j ∈ Θ L(n j ) satisfying ξ ∈ R(θ j ; n j ). We denote this set byS 1 (N ). Since we have ∞ n=p meas(R n ) → 0 as p → ∞, the measure of S 1 (N ) is zero. Also, it is easy to see that the set ∩ n R n (N ) is the Cantor-type set (i.e. a perfect set with empty interior) and is, thus, uncountable (unless V is periodic and Θ ∞ is therefore discrete). Since, obviously, ∩ n R n (N ) ⊂ (S 1 (N ) ∪ Θ ∞ ) and Θ ∞ is countable, this implies that the setS 1 (N ) is uncountable. We also haveS 1 (N ) ⊂S 1 (Ñ ) for N <Ñ . Finally, we introduce S 1 := ∪ NS1 (N ) -global uncountable set of Lebesgue measure zero. Let us assume at the moment that τ = 0. For each fixed λ > 0 there are the following three possibilities: 1. Let √ λ ∈ Θ ∞ . Then √ λ = |θ| ∈ R(−|θ|; n) for all sufficiently large n and we therefore can repeat the procedure from the previous Section to obtain the resonance asymptotic 'expansion' (3.57) (see also Lemma 4.2). 2 Thus, for all sufficiently large n we have λ 1/2 ∈ R n . Then again we can repeat the (non-resonant) procedure from the previous Section which, together with Lemma 4.2, guarantees the existence of the complete asymptotic expansion (3.34). 3. Let √ λ ∈ S 1 . This is the most interesting case. As we will see below, in general there is a big part of S 1 where no power asymptotic expansion exists. Let us make a pause for a moment and summarize what we have done so far. We have proved the following statement: Theorem 4.6. Suppose, V is smooth almost-periodic with the constant Fourier coefficient τ = 0. Then there exists a set S 1 such that for λ 1/2 ∈ R + \ (S 1 ∪ Θ ∞ ), we have a complete expansion of the form (3.34), whereas when λ ∈ Θ ∞ , we have (3.57). The set S 1 is uncountable and has measure zero. Suppose now τ = 0. Let us denote by R (θ; n) the interval centred at −θ, but of twice larger length than R(θ; n); obviously, R(θ; n) ⊂ R (θ; n). We also denote byS 2 (N ) the set of points ξ ∈ Θ ∞ for which there is an infinite sequence n j → ∞ and θ j ∈ Θ L(n j ) such that ξ ∈ R (θ j ; n j ). We put S 2 = S 2 (τ ) := ∪ NS2 (N ). Then S 1 ⊂ S 2 , meas(S 2 ) = 0, and for √ λ ∈ S 2 we still have the complete asymptotic expansion. Indeed, if ε ∈ I n j and √ λ ∈ R (θ j ; n j ), then √ λ + τ ε ∈ R(θ j ; n j ) for sufficiently large n. This proves the following statement: Theorem 4.7. The statements of the previous theorem hold for any τ = 0 with the set S 1 replaced by a different uncountable zero measure set S 2 = S 2 (τ ). Now we will prove the opposite -that there is a substantial set S such that for √ λ ∈ S there is no asymptotic expansion in powers of ε for N (λ; H). Obviously, the measure of S has to be zero, but we will show that it is uncountable. However, as we have seen in the previous Section, such a set must be empty in the quasi-periodic case. 
This means that we need to make a further assumption on the potential. Namely, we will assume that V is not periodic (i.e., Θ ∞ is dense) andV θ = 0 for any θ ∈ Θ ∞ . Remark 4.8. We can replace the last condition by requiring that there are infinitely many non-zero Fourier coefficients located in 'strategically important' places. We again start with the case τ = 0. The strategy of the proof will be as follows. First, we will make a natural attempt to construct a set S such that for √ λ ∈ S there is no asymptotic expansion in powers of ε of N (λ; H). This attempt will almost work, but not quite. Then we will see what the problem with our first attempt is and will modify it correspondingly. So, we define R (θ; n) = (−θ − δ n (θ), −θ + δ n (θ)) as the interval centred at −θ of halflength δ n and at our first attempt we define δ n (θ) = ε n |V θ |(100|θ|) −1 ; obviously, then R (θ; n) ⊂ R(θ; n) for large n. Note that our constructions guarantee that if ξ ∈ R (θ; n) and ε ∈ I n , then |ξ| 2 is well inside the spectral gap of H 2 (n) (this is the operator H 2 , when we want to emphasize that we have performed the gauge transform for ε ∈ I n ). Now we consider the setS 3 (N ) of all λ for which the following two conditions are satisfied: a. There is an infinite sequence n j → ∞ and θ j ∈ Θ 3NL(n j ) such that λ 1/2 ∈ R (θ j ; n j ), and b. There is an infinite sequence n j → ∞ such that λ 1/2 ∈ R(n j ). A simple argument based on the fact that Θ ∞ is dense in R implies thatS 3 (N ) is uncountable. Suppose, √ λ ∈S 3 (N ). Then, if ε ∈ I n j , the point λ is in the spectral gap of H 2 (n j ) and, therefore, we have the (trivial) resonant asymptotic expansion (3.49). On the other hand, if ε ∈ I n j , we have the non-resonant asymptotic expansion (3.33). It is very tempting to stop the proof here by stating that these two expansions are different. However, we cannot quite guarantee this -it may well happen that all the coefficients in the non-resonant expansion (3.33) turn to zero. One way of overcoming this is to show that for generic set of Fourier coefficients of V these coefficients are bounded away from zero. We, however, will assume a different strategy and reduce the setS 3 (N ) even further (by choosing smaller values of the parameters δ n (θ)). Before doing this, let us see what happens with the position of the point ξ ∈S 3 (N ) related to different resonant zones as n changes. When n = n j , our point ξ is inside the resonant zone R (θ j ; n j ) and, therefore, we have a trivial expansion for ε ∈ I n j . If we consider values n bigger than n j , then ξ may stay inside R(θ j ; n) for a while, but since ∩ n R(θ j ; n) = |θ j | = ξ, for sufficiently large n our point ξ will get outside of the resonant zone R(θ j ; n); let us denote byk j the index when this happens (i.e.k j is smallest value of n > n j for which we have ξ ∈ R(θ j ; n)). Similarly, let k j be the biggest value of n < n j for which we have ξ ∈ R(θ j ; n). Since the width of a resonance zone shrinks by a factor √ 2 at each step, Remark 4.5 implies that ξ cannot 'enter' a different resonance zone immediately after 'leaving' R(θ j ; n), i.e. ξ ∈ (R(k j ) ∪ R(k j )). 
Then by our construction we have N asymptotic terms of N (λ; H (ε) ) when ε ∈ I k j , and the coefficient in front of ε 2 is easily computable and equal to Similarly, we have N asymptotic terms of N (λ; H (ε) ) when ε ∈ Ik j , and the coefficient in front of ε 2 equals Notice that the sum in (4.19) contains more terms than (4.18); one of the extra terms corresponds to θ = θ j and its modulus is at least . The rest of the extra terms give a total contribution of O(ε N k j ). Therefore, we have (4.20) Now we will readjust the definition of the subset R of the resonant zone R by requiring that the jump (4.20) is at least one, which can be achieved by asking that ε 18|θ j | . Another way of formulation this is requesting that if n > n j satisfies then ξ ∈ R(θ j , n). Now, we define a modified setS 3 (N ) which satisfies properties a and b above, but with a modified parameter δ n defining the resonant zone R given by δ n (θ) = min{ εn|V θ | 100|θ| , |V θ | 2 72|θ| 2 }. The calculations just above show that if ξ ∈ R (θ j , n j ), then, assuming once again that ε 0 = ε 0 (N ) is small enough, we have: and, therefore, we cannot have both these coefficients small at the same time. This shows that, indeed, we cannot have a complete power asymptotic expansion (nor even an asymptotic expansion with the remainder o(ε 2 )) for any ξ ∈S 3 (N ) with N ≥ 3. If we put (4.23) then this is an uncountable set such that there is no complete power asymptotic expansion of N (λ, H) for √ λ ∈ S 3 . We have proved the following result: Theorem 4.9. Suppose, V is smooth almost-periodic, but not periodic, the constant Fourier coefficient τ = 0, andV θ = 0 for any θ ∈ Θ ∞ . Then there exists an uncountable set S 3 such that when λ 1/2 ∈ S 3 , there is no complete power asymptotic expansion of N (λ; H). Suppose now that τ = 0. Consider the setS 3 (N ) of all λ for which the following two conditions are satisfied: a. There is an infinite sequence n j → ∞ and θ j ∈ Θ 3NL(n j ) such that (λ + τ ε n j ) 1/2 ∈ R (θ j ; n j ), and b. There is an infinite sequence n j → ∞ such that (λ + τ ε n j ) 1/2 ∈ R(n j ). A slightly more difficult than before (but still quite elementary) argument shows that S 3 (N ) is uncountable for each τ . Also, similar to the case τ = 0, if ε ∈ I n j , the point λ is in the spectral gap of H 2 (n j ) and, therefore, we have the (trivial) resonant asymptotic expansion (3.49). On the other hand, if ε ∈ I n j , we have the non-resonant asymptotic expansion (3.33), and the first order term in this expression equals − τ 2π √ λ , which means that these two expressions are different starting with ε, i.e. it is enough to take N ≥ 2. Putting S 3 := ∪ N ≥2S3 (N ), we will prove the analogue of Theorem 4.9 in the case τ = 0. Putting all the results proved in this section together, we have proved the following: Theorem 4.10. Suppose, V is smooth almost-periodic, but not periodic, andV θ = 0 for any θ ∈ Θ ∞ . Then there exists a set S (which we call a super-resonance set) such that a complete power asymptotic expansion of N (λ; H) exists if and only if λ 1/2 ∈ S. The set S is uncountable and has measure zero. Remark 4.12. We have called the set S the super-resonance set. An interesting question which we have not studied so far is what is the dimension of this set. Preparation. 
Our strategy will be to find a unitary operator which reduces H = H 0 + ε Op(V ), H 0 := −∆, to another PDO, whose symbol, essentially, depends only on ξ (notice that now we have started to distinguish between the potential V and the operator of multiplication by it Op(V )). More precisely, we want to find operators H 1 and H 2 with the properties discussed in Sections 3 and 4. The unitary operator will be constructed in the form U = e iΨ with a suitable bounded self-adjoint quasi-periodic PDO Ψ. This is why we sometimes call it a 'gauge transform'. It is useful to consider e iΨ as an element of the group We assume that the operator ad(H 0 , Ψ) is bounded, so that U (t)D(H 0 ) = D(H 0 ). This assumption will be justified later on. Let us express the operator A t := U (−t)HU (t) via its (weak) derivative with respect to t: A t = H + t 0 U (−t ) ad(H; Ψ)U (t )dt . The operator Ψ is sought in the form (5.2) Ψ =k j=1 Ψ j , Ψ j = Op(ψ j ), with some bounded operators Ψ j . Substitute this formula in (5.1) and rewrite, regrouping the terms: Next, we switch the summation signs and decrease l by one in the second summation: We emphasise that the operators B l and T l depend only on Ψ 1 , Ψ 2 , . . . , Ψ l−1 . Let us make one more rearrangement: Let ϕ θ (ξ, ε n ) be a smooth cut-off function of the set (5.7) Similar notation is used for corresponding operator, i.e. B . Now we can specify our algorithm for finding Ψ j 's. The symbols ψ j will be found from the following system of commutator equations: ad(H 0 ; Ψ 1 ) + B 1 = 0, (5.9) ad(H 0 ; Ψ l ) + B l + T l = 0, l ≥ 2, (5.10) and hence (5.11) Below we denote by yk the symbol of the PDO Yk. Obviously, the operators B l , T l are bounded, and therefore, in view of (5.9), (5.10), so is the commutator ad(H 0 ; Ψ). This justifies the assumption made in the beginning of the formal calculations in this section. It is also convenient to introduce the following norm in the class of symbols: We notice that Op(b) ≤ b . 5.3. Computing the symbol of the operator after gauge transform. The following lemma provides us with more explicit form of the symbol yk. Here C (p) s (θ) depend on s, p and all vectors θ, θ j , θ j , φ j , θ j , φ j . At the same time, coefficients C (p) s (θ) can be bounded uniformly by a constant which depends on s only. We apply the convention that 0/0 = 0. The proof is identical to the proof of Lemma 9.3 from [6] and we omit it here. Explicit value of the coefficients for the second term (see (3.17) and (3.21)) can be found directly as the second order perturbation or following more carefully the first two steps of the construction for A 1 from (5.11).
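In a finite matrix model the commutator equations (5.9)–(5.10) are solved exactly as described: each off-diagonal coefficient of the right-hand side is divided by the difference of the unperturbed symbol values. The numpy check below uses the convention ad(A; Ψ) = AΨ − ΨA and a generic diagonal H₀ with distinct entries (no resonances); the paper's normalization of ad may differ by a factor of i, which only rescales Ψ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
h = np.arange(1.0, n + 1) ** 2           # distinct unperturbed symbol values (no resonances)
H0 = np.diag(h)
B = rng.standard_normal((n, n))
B -= np.diag(np.diag(B))                 # purely off-diagonal right-hand side

# Solve ad(H0; Psi) + B = 0 entrywise: (h_j - h_k) * Psi_jk = -B_jk
denom = h[:, None] - h[None, :]
Psi = np.zeros_like(B)
off = denom != 0
Psi[off] = -B[off] / denom[off]

print(np.max(np.abs(H0 @ Psi - Psi @ H0 + B)))   # ~1e-16: the off-diagonal part is removed exactly
```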
Does a history of cannabis use influence onset and course of schizophrenia? While evidence strongly supports a causal effect of cannabis on psychosis, it is less clear whether the symptom pattern, clinical course, and outcomes differ in cases of schizophrenia with and without a background of cannabis use. | INTRODUCTION Cannabis use is thought to increase risk of developing schizophrenia and other non-affective psychoses. [1][2][3][4] Although the relationship between cannabis and psychosis is likely to be highly complex, evidence from experimental 5 and observational studies, including longitudinal studies that minimize reverse causation and robustly address confounding, 2 support a causal effect of cannabis on psychosis. Plausible biological mechanisms for the association have been identified, and the effects of THC exposure on brain function seem particularly prominent in the developing brain, including during adolescence. [6][7][8] Cannabis use has also been shown to be associated with increased risk of relapse and other markers of poorer outcome in psychosis, 9 but evidence of a bidirectional causal relationship between cannabis and schizophrenia 10 increases the complexity of inferring causality in these associations. While it has been shown that cannabis use is associated with a more chronic course and worse outcome in schizophrenia, [11][12][13] many studies have been crosssectional in design, with cannabis use measured retrospectively, hence it is not clear whether cannabis use began (or became more frequently used) before or after onset of schizophrenia. A systematic review of longitudinal studies concluded that cannabis use among patients with schizophrenia was associated with an increased risk of relapse and rehospitalization, although the overall quality of studies when examining other outcomes was low and follow-up times were mostly around one or 2 years. 9 Foti et al 14 used repeat measures over a 10-year period to assess the longitudinal relationship between cannabis use and psychiatric symptoms, with crosslagged models providing evidence of a bidirectional relationship between them, suggesting that cannabis might be used to alleviate some symptoms of schizophrenia. We are not aware of any study that has assessed whether cannabis use in adolescence, prior to first episode of psychosis, is associated with symptom severity and long-term outcome in schizophrenia. It is also unclear whether cannabis use prior to onset can influence the type of symptoms experienced and clinical characteristics of the disorder. Some studies have shown a higher occurrence of positive symptoms among persons with previous substance use, 15,16 but in general no increased occurrence of negative symptoms. 9,17 Other studies have found no difference in clinical characteristics when comparing patients with and without a cannabis history. 18,19 Again, most studies are based on cross-sectional data or case report and few studies have been able to assess long-term outcomes. In a long-term follow-up of Swedish conscripts, based on register data, we found that subjects with a history of cannabis use had longer duration of first admission to hospital, more readmissions, and more total time of hospital stay than those without cannabis use. 12 Cannabis use was assessed by self-report at around 18 years of age, and incidence of schizophrenia according to the national inpatient register was assessed up to ages 60-62 years. 
Furthermore, in an earlier study of medical records on a small sample of Swedish conscripts treated for schizophrenia in Stockholm County, we found that persons with a history of cannabis use had a more sudden onset and more positive symptoms than those without a history of cannabis use. 16 However, no formal assessment protocol was used and a longer follow up of a national sample is needed to reassess the findings. Another limitation in our previous studies is the lack of information on cannabis use at follow-up. Myles et al 20 reviewed continuation of cannabis use after first episode psychosis. While they found evidence of continued as well as cessation of cannabis use, there was no documentation of how this affected outcome of psychosis. By combining conscript cohort data with medical admission records and review of case records we aimed to examine the relationship between self-reported cannabis use at conscription, later record of cannabis abuse/dependence and outcomes in schizophrenia. In contrast to other longitudinal studies that have examined psychoses more broadly or schizophrenia spectrum disorders as outcomes, 2 the outcome in our cohort has been recorded diagnosis of schizophrenia. 12,21,22 The validity of schizophrenia diagnoses in this cohort have been assessed qualitatively in a subsample 23 but a validation study of all cases has not previously been undertaken. In this study we use data from an updated linkage of the Swedish Conscript Cohort to the national patient register in order to identify medical records of patients with schizophrenia that we then assessed for information on clinical characteristics, including record of cannabis abuse/dependence. Data were recorded using the OPCRIT system 24 which enabled us also to ascertain and validate diagnoses of schizophrenia. The overall aim of the study was to assess differences in age and type of onset, clinical course and prognosis of schizophrenia between subjects with a history of cannabis use with those without such history. The following research questions were addressed: i. Do patients in the patient register fulfill the diagnosis of schizophrenia according to defined criteria? ii. Do subjects with cannabis use in adolescence differ from non-users with regard to age at onset, type of onset, clinical characteristics and outcomes of schizophrenia compared to non-users? Significant outcomes • Schizophrenia patients with a history of cannabis use in adolescence had earlier age at onset and more severe disease in terms of hospital admissions and length of stay compared to patients without cannabis history. • Symptom profiles of schizophrenia did not seem to differ by previous cannabis history. Limitations • We did not have data on self-reported cannabis use during follow-up. • In spite of a large national cohort, number of cases with schizophrenia was limited. iii. Are differences in the course and outcome of schizophrenia related to cannabis abuse or dependence after conscription? | METHODS The study is based on a cohort of Swedish men born 1950-1952 conscripted for compulsory military training in 1969-1970, as used in previous studies. 21,22 The conscription procedure comprised medical examination, tests on physical and mental capacity, and for the particular period of conscription 1969-1970, also a series of questions on use of alcohol, tobacco and other drugs. 
In questionnaires, conscripts were asked whether they had ever used drugs, the first drug they used, which type of drug(s) they had used, and how many times they had used the drug. By using the personal identity number, data were linked to the national inpatient register. The register was set up in the beginning of the 1970:s and achieved full national coverage for psychiatric care in 1973. While the original cohort previously use for register studies comprised 50,653 men, for this analysis of medical records we were able to access personal data for about half of the population, 24,875 persons, for legal and administrative reasons. Comparisons of this smaller cohort with the total cohort showed that the distribution of a number of variables were very similar, and the study population remains representative of the national population of Swedish young men at the time. Linkage with the national patient register was performed through 2011, by which time conscripts had reached an age of around 60 years. Permission from the Ethical Review Board had been obtained at repeated occasions to perform record linkages, and a new permission was obtained to retrieve medical records for scrutiny. From the register we identified all individuals with a psychotic disorder diagnosis (Table S1). In total 569 persons were identified, of whom 223 had a primary or secondary diagnosis of schizophrenia, and 346 had another psychotic diagnosis. We approached all treatment facilities identified and asked for copies of the medical records or, if required, access for reading these in their archives. Efforts were made to find the record for each given treatment episode. The procedure was difficult due to the numerous administrative changes in the organization of care and treatment during the >40 years of follow-up. Many of the treatment units recorded in the electronic system no longer existed or had merged with other units, and the system for archiving records varied between county councils and individual hospitals and over time. Some facilities (three clinics, 14 patients) required individual consent from patients for us to access their medical records, but we did not have permission from the Ethical review board to contact patients individually, so these records could not be accessed. We accessed medical records for 402 patients, 204 with a recorded diagnosis of schizophrenia, and 198 with other psychotic diagnoses, from a total of 144 different treatment facilities. Treatment units of these patients were from all parts of the country, encompassing urban and rural areas, and represented all types of care; university hospitals, special centers for treatment of mental illness and standard hospital clinics. Medical records were scrutinized by one of the authors, board certified specialist in psychiatry (TJ), using the OPCRIT system. 24 This was performed blind to data on cannabis use reported at conscription. The OPCRIT protocol was used to identify patients with schizophrenia according to ICD-10 and to assess clinical characteristics according to the protocol. To assess symptom profiles we selected OPCRIT items describing the following types of symptoms: Positive symptoms, divided into delusions, hallucinations, thought interference and severe delusions; negative symptoms; disorganized symptoms. 
Diagnosis of alcohol as well as cannabis abuse/dependence was defined in the OPCRIT guidelines as continued use despite knowledge of having a persistent or recurrent social, occupational, psychological or physical problem that is caused or exacerbated by the substance. Of the 204 patients with a record diagnosis of schizophrenia, 17 had too little information to assess in the OPCRIT protocol. For the 198 patients with other psychotic diagnoses, records were screened informally and those suspected of possibly having schizophrenia (n = 21) were formally assessed using OPCRIT. In the analyses we have compared schizophrenia clinical characteristics and outcomes in subjects who at conscription reported having used cannabis on zero or one occasions, with those who reported having used cannabis at least twice. In the text, we refer to these two groups as subjects without and with a history of cannabis use, respectively. Table S2 shows the distribution of the number of times subjects reported cannabis use. Statistical evidence of differences between the two groups was assessed using chi2-test, or Fisher's exact test in cases where cells had low numbers. Median number of hospital admissions and number of hospital days were compared using the Mann-Whitney U test. 95% confidence intervals (CI) were computed where relevant. | Diagnoses assessed Of the 187 records from patients with a clinical diagnosis of schizophrenia on the NPR, 158 (84%) were found to have confirmed schizophrenia according to ICD-10 criteria after OPCRIT assessment. Of the 21 patients with other psychoses, 10 patients met ICD-10 criteria for schizophrenia in OPCRIT that is, 5% of the total number of 198 subjects with a non-schizophrenia psychotic disorder diagnosis on the NPR. From the 168 patients with an OPCRIT ICD-10 schizophrenia diagnosis, 160 were included in the analyses below. Eight subjects were excluded: four who at conscription reported use of other (non-cannabis) drugs (to avoid confounding), and four who had a psychosis diagnosis at conscription. Of these 160 patients, 32 had a history of cannabis use according to the data from conscription. | Age and mode of onset Mean age at onset of schizophrenia for patients was 23.4 years (SD ±6.6) among patients with a cannabis history, and 27.7 years (SD ± 9.3) among those without. Mean age difference was 4.3 years (95% CI 0.9-7.7). Table 1 shows the distribution of mode of onset. There was no significant difference between patients with a cannabis history compared to those without ( p = 0.6). Table 2 shows the number of hospital episodes and length of stay. Two patients had only outpatient records, bringing the number of subjects to 158. Patients with a cannabis history had a higher median number of hospital admissions than those without (20 vs. 8; p = 0.05). They also had a higher total number of hospital days (877 vs. 273; p = 0.01), but there was little evidence of any difference in length of first admission (38 vs. 27; p = 0.20). Individuals with an older age of onset had a greater number of admissions, total hospital days and length of first admission. Evidence of differences between those with and without a cannabis use history was weaker in these stratified analyses, though the direction of associations was consistent with the whole-sample analysis. OPCRIT assessment of medical records indicated little evidence of difference in long-term clinical outcome (Table 3). Both groups responded well to neuroleptics. 
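The group comparisons reported in this and the following subsections (Mann–Whitney U tests for admission counts and hospital days, Fisher's exact test or chi2-test for cross-tabulations) are straightforward to reproduce with standard software once patient-level data are available. The sketch below uses scipy; the admission counts are purely hypothetical placeholders, since the confidential records cannot be shared, while the 2 × 2 table uses the 23/32 versus 14/126 frequencies for documented cannabis abuse/dependence quoted in the next subsection.

```python
# Illustration only: the admission counts below are hypothetical placeholders,
# not the study's confidential patient-level records.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
admissions_cannabis = rng.poisson(20, size=32)       # 32 patients with a cannabis history
admissions_no_cannabis = rng.poisson(8, size=126)    # 126 patients without

u, p = stats.mannwhitneyu(admissions_cannabis, admissions_no_cannabis,
                          alternative='two-sided')
print(f"Mann-Whitney U = {u:.0f}, p = {p:.3g}")

# 2x2 table: documented cannabis abuse/dependence (yes/no) by reported use at conscription
table = [[23, 32 - 23],     # reported cannabis use at conscription
         [14, 126 - 14]]    # no reported use
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.1f}, p = {p_fisher:.2g}")
```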
Persons with a cannabis history had significantly higher rates of lifetime cannabis abuse/dependence as well as lifetime alcohol abuse/dependence. 28% of the patients who reported cannabis use at conscription did not have T A B L E 1 Mode of onset of schizophrenia among subjects with and without a history of cannabis use. (Table 3). Table S3 shows that there was little evidence of difference between the groups regarding marital status or cohabitation (life time) as well as employment status at time of admission. | Cannabis abuse/dependence according to medical records Among the 32 subjects who reported cannabis use at conscription, 23 (72%) had documentation of cannabis abuse/dependence in the medical records. Of these, 6 patients also had a register diagnosis of cannabis use disorder (ICD-8 304,5 ICD-9 304D, ICD F12) in the patient register. Among the 126 subjects without a history of cannabis use, 14 (11%) had documentation of cannabis abuse/dependence, although none had a diagnosis in the patient register. Table 4 shows that there was a strong association between level of cannabis use at conscription and a later record of cannabis abuse/dependence in the medical records. Almost all (12 out of 13) of those who reported the heaviest cannabis use at conscription had a later record of cannabis abuse/dependence in their medical records. Table 5 shows the association between having a record of cannabis abuse/dependence in the medical records and the indicators of hospital care use. There was a significantly higher level of number of hospital days and number of readmissions among those who had a record of cannabis abuse/dependence, and at a higher level among those who reported cannabis use at conscription. | Clinical characteristics As shown in Table 6, there were no significant differences between the groups in the presence of positive (delusions, severe delusions, hallucinations or thought interference), negative or disorganized symptoms. | DISCUSSION We confirmed our previous findings 12 of an earlier age at onset, more readmissions, and a higher total number of hospital days among patients with a history of cannabis use in adolescence compared to those without. The added value of this study is the use of information from medical records, and thorough validation of schizophrenia diagnoses. Earlier age at onset of psychosis among subjects with a history of cannabis use has been described by several authors. 25,26 Studies have asked about cannabis use in retrospect, so with possible recall bias, and few studies have specifically examined schizophrenia as an outcome. To our knowledge this is the first study in which cannabis use was assessed prior to incidence of psychosis, in a non-healthcare setting, and linked to later incidence of schizophrenia. The higher number of readmissions and greater total number of hospital days we observed in those with a history of cannabis use could be a consequence of an earlier age of onset. However, findings were similar when we stratified subjects by age of onset, although evidence was weaker and confidence intervals included the null. Thus individuals with pre-illness cannabis use seem to have a greater illness burden as well as, and independent of, a younger age of schizophrenia onset. There was little evidence that the first hospital episode was longer or the mode of illness onset different in patients with a history of cannabis use. 
Our previous register-based data of first inpatient care episode in the full cohort 12 indicated a significantly longer first hospital episode among cannabis users. The weaker evidence here may be due to the smaller sample size or to the more specific case definition of schizophrenia. While the issue of causal association has been addressed and discussed in many of the papers cited below, the type of psychotic outcome has hardly been addressed at all. In an early review, Thornicroft 27 pointed out the importance of outcome specification, yet few studies since then have specified type and characteristics of psychosis identified as outcome in longitudinal studies. An obvious reason is the low incidence of schizophrenia compared to the broader group of chronic psychotic disorders. Although the Danish register studies 28,29 did specify schizophrenia diagnoses, exposure in these studies T A B L E 5 Number of hospital admissions, total number of hospital days and number of hospital days at first admission among those who had record of cannabis use, compared to those who had not. were diagnoses of cannabis use disorders in health care, and not cannabis use measured independently of contact with health services. Previous population based studies on cannabis and psychosis have not addressed schizophrenia as a specific outcome, and our previous studies only used registerbased clinical diagnoses. In this study we showed that 85% of those with a register-based diagnosis of schizophrenia were assessed as meeting criteria for ICD-10 defined schizophrenia according to OPCRIT assessment. Furthermore, screening and OPCRIT assessment of other psychotic disorder showed that only 5% of those with other psychotic disorder met criteria for ICD-10 schizophrenia. We can therefore have greater confidence that the increased risk of schizophrenia found in our previous studies was indeed schizophrenia according to research diagnostic criteria. We did not find any difference in symptom pattern between the two groups. Our previous finding in a smaller sample of more rapid onset and more positive symptoms 16 may have been a chance finding, or the fact that the study was based on patients in an early stage of the disease. While the study by Caspari 15 found a higher rate of positive symptoms in cannabis users, this was based on a relatively small group of patients and shorter followup. It should be noted that the occurrence of negative symptoms may be underestimated, since patients with negative symptoms may be less prone to be admitted to hospital and thus less apparent in medical records. Negative symptoms may also be less consistently recorded in medical journals. While the medical records in general had information enough to grade the OPCRIT items, including what we defined as positive symptoms, the clinical assessments of these were not recorded in a standardized way, for example, through the PANSS. In addition, information on cognitive function, particularly when assessed using standardized instruments, is rarely available in clinical records, though would also be of interest considering the possible effects of cannabis on cognitive function. 30 Although cannabis use in adolescence is often considered experimental and self-limiting, there was a clear association between reported cannabis use at conscription and later cannabis abuse/dependence in medical records. In particular, more than 90% of the subgroup who reported the highest use of cannabis at conscription had a record of later cannabis abuse/dependence. 
There was also an association between cannabis abuse/ dependence at follow-up and more readmissions and longer hospital stay, consistent with findings by for example, Kuepper et al 31 and Linszen et al. 11 The proportion of patients who were married or cohabitating, as well as being employed, was lower among those with a cannabis history, although not significant. A larger study population might have shown a difference, but the findings confirm the general observation that a substantial proportion of patients with schizophrenia do manage social life, and that this is not substantially affected by previous cannabis use. We acknowledge several limitations of this study, the main one being lack of regular and detailed assessment of cannabis use, as well as confounders, over time. It is, for example, possible that comorbid conditions occurring during follow-up may influence both cannabis use and schizophrenia outcome, or that early life characteristics such as childhood trauma or pleiotropic genetic effects explain part of our findings. Furthermore, although we stratified analyses by age of onset, it remains possible that markers of poorer outcome in the cannabis history group are due to an earlier age of onset. Another limitation is the sample size. Although a large cohort at baseline, the incidence of schizophrenia means that the number available for comparisons is low. It is notoriously difficult to get access to medical records on patients treated decades ago, and we could unfortunately only get access to patient records for half the original cohort. There was no systematic bias in the access to medical records, since these were retrieved from clinics distributed all over the country, and smaller hospitals as well as university hospitals. It would have been an advantage to have two independent assessments of the medical records. However, the scrutiny of hundreds of psychiatric records, many of which were very extensive, and completing the OPCRIT, was time consuming and resources were not available for more than one experienced psychiatrist to perform the assessment. The use of high potency cannabis has been increasing in recent years and seems to be associated with higher risk of psychosis. 2,32 Adolescents in the end of the 1960's were exposed to cannabis of lower potency, so the associations found in this study might be lower than what would be found today. In conclusion, through scrutiny of medical records, we showed that cannabis use in adolescence is associated with higher levels of hospital use, likely partly due to continued use of cannabis during follow-up. The ongoing debate on legalization, and the apparently low risk perception of cannabis use in adolescence 33,34 indicate the need for interventions to mitigate against problematic cannabis use in young people.
The Voronoi Region of the Barnes-Wall Lattice $\Lambda_{16}$ We give a detailed description of the Voronoi region of the Barnes-Wall lattice $\Lambda_{16}$, including its vertices, relevant vectors, and symmetry group. The exact value of its quantizer constant is calculated, which was previously only known approximately. To verify the result, we estimate the same constant numerically and propose a new very simple method to quantify the variance of such estimates, which is far more accurate than the commonly used jackknife estimator. This generator matrix is scaled down by a linear factor of √ 2 (or, equivalently, a volume factor of 256) compared with the generator matrix for the same lattice in [5,Fig. 4.10]. Some lattice parameters depend on the scaling of the lattice. A. 0-faces The Voronoi region has 201 343 200 vertices, which belong to six equivalence classes listed as v 1 , v 2 , . . . , v 6 in Tab. I. Equivalence is defined by the rotations Aut(Λ 16 ) that take Λ 16 into Λ 16 . If translation by a lattice vector is considered as another equivalence operation, v 2 becomes equivalent to v 4 and v 3 to v 5 , reducing the six equivalence classes to only four. The vertices are located at a squared distance from the origin of 3/2, 10/9, or 1. Hence, the covering radius is 3/2, as already known [3], [5,Section 4.10]. D. 3-to 14-faces In dimensions 3 to 14, there are 6 052 classes of faces, which we will not describe in detail here. Some of their properties are summarized in Tab. II, where we show the number of face classes under Aut(Λ 16 ), numbers of child faces (i.e., subfaces of dimension d − 1) and vertices for the faces in all dimensions d = 0, 1, . . . , 16. Further information is available as supplementary material [12]. E. 15-faces The 15-faces, or facets, all lie halfway between the origin and another lattice vector, orthogonal to the line between them. There are in total 65 760 such facet-defining nonzero vectors, or relevant vectors. They belong to two equivalence classes at different distances from the origin (see Tab. I). The ones closest to the origin are the minimal vectors at a squared distance of 2, which were found already in [1]. The packing radius is half of their length, i.e., √ 2/2. There are 4 320 such vectors, which is the kissing number of the lattice. There are also 61 440 other relevant vectors, which have a squared length of 3. The facets belonging to the 4 320 minimal vectors each have 7 704 child faces and 1 046 430 vertices of all six classes, while the remaining 61 440 facets have 828 child faces and 26 160 vertices equivalent to either v 2 , v 4 , v 5 , or v 6 . F. 16-face Having enumerated all inequivalent d-faces for d = 0, 1, . . . , 15 and computed their volumes and second moments using the recursion relations in [13,Sec. 3], a complete characterization of the 16-face is obtained. Using [11], we estimate that the Voronoi region has between 1 · 10 14 and 3 · 10 14 faces across all dimensions. Next, the covariance matrix or second moment tensor is computed as where the (unnormalized) second moment U = tr U = 207 049 815 983 4 287 303 820 800 and I 16 the 16×16 identity matrix. After proper normalization, the quantizer constant is obtained as where n = 16 is the lattice's dimension and V = 1/16 is the volume of its Voronoi region, which yields G = U √ 2 ≈ 0.068 297 622 489 318 7 . To verify our enumeration of face classes, we use the recursion relations in [13,Sec. 
3] to calculate the volume of the Voronoi region, which agrees with the expected value of 1/16. We also verify the result (5) numerically in Sec. IV. III. THE SYMMETRY GROUP OF Λ 16 The symmetries of Λ 16 are generated by products of sign changes, permutations and the matrix where is a Hadamard matrix. There are 2 048 sign changes, which can be described as a product of three subgroups S 1 , S 2 and S 3 . The first subgroup S 1 contains all even numbers of sign changes of component pairs (x i , x i+1 ) for i = 1, 3, . . . 15, and has order 128. S 2 changes the signs of an even number of the first and last 4 odd components (x i , x 16−i ), i = 1, 3, 5, 7. This subgroup has order 8. Finally, S 3 is of order 2 and changes the signs of the components (x 1 , x 3 , x 5 , x 7 ). here given in cycle notation for compactness. The complete subgroup P can be generated using various subsets of these permutations, for example The full automorphism group Aut(Λ 16 ) can be generated by combining H with the generators of S 1 , S 2 , and S 3 and one of the sets of generators of P. Remarkably, it can also be generated by just two matrices. The first is the 16 × 16 permutation matrix M 1 corresponding to p 3 . The second is a matrix which is built using (7) with a sign change of the last row, i.e., with the Hadamard matrix IV. NUMERICAL VERIFICATION AND ERROR ESTIMATES To validate (3), we estimate U by Monte-Carlo integration over the Voronoi region. We also estimate the variance of the estimate of U , for which we use a different method than the "jackknife estimator" in [15]. In this section, we first describe our estimate of U and the variance thereof, then motivate why we prefer our variance estimator over the jackknife, and finally compare our numerical estimate of G for Λ 16 with the true value in (5). The Monte-Carlo estimate of U iŝ where x 1 , . . . , x N are N independent random vectors uniformly distributed in the Voronoi region of Λ. To estimate varÛ , we first note that since the vectors x i are independent and identically distributed, varÛ = In the fifth column, we visualize the number of face classes (y-axis) containing a certain number of vertices (x-axis). The last column shows the same information for the numbers of faces instead of face classes, which have been approximated using [11]. (1/N ) var x 2 , where x is a single random vector with the same distribution as x i . Therefore, our estimate of varÛ , denoted by varÛ , is defined by Applying the standard unbiased variance estimator of var x 2 in (13) yields or after normalization as in (4) The variance estimator (15) follows directly from fundamental laws of probability. What is surprising is that a different estimator has been used, unchallenged, in most, or perhaps all, previous works involving numerical estimates of lattice second moments [15]- [17]. To rectify this 39-year old misconception, we now elaborate on why (15) is more accurate. The jackknife works by partitioning the independent randomly selected vectors x 1 , . . . , x N into g groups, computing the average squared length within each group, and finally computing the sample variance of these g averages [ Fig. 2: Histograms of two estimates of the standard deviation of the estimated second momentÛ of the cubic lattice. The exact standard deviation (varÛ ) 1/2 , which can be calculated analytically for the cubic lattice, reveals that the proposed estimator (12) is much more accurate than the jackknife with 100 groups. (4)]. 
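The following sketch contrasts the two estimators on the paper's own test case, the unit cube (the Voronoi region of Z^3), where both the second moment and the variance of its Monte-Carlo estimate are known exactly; this is an assumed illustration, not the authors' code, and the final division by g in the grouped ("jackknife") estimator is our reading of the procedure just described.

```python
# Minimal sketch (assumed, not the authors' code): Monte-Carlo second moment of Z^3
# and two estimates of the variance of that Monte-Carlo estimate.
import numpy as np

rng = np.random.default_rng(0)
n, N, g = 3, 100_000, 100                   # dimension, sample size, number of groups

x = rng.uniform(-0.5, 0.5, size=(N, n))     # uniform samples in the Voronoi region of Z^n
r2 = (x * x).sum(axis=1)                    # squared lengths ||x_i||^2

U_hat = r2.mean()                           # Monte-Carlo second moment; exact value is n/12

# Direct estimator: sample variance of the squared lengths, divided by N
var_direct = r2.var(ddof=1) / N

# Grouped ("jackknife"-style) estimator: sample variance of the g group averages,
# divided by g so that it too estimates var(U_hat); this normalization is assumed here.
group_means = r2.reshape(g, N // g).mean(axis=1)
var_jackknife = group_means.var(ddof=1) / g

var_exact = n / (180 * N)                   # exact var(U_hat) for the unit cube

print(U_hat, n / 12)
print(var_direct**0.5, var_jackknife**0.5, var_exact**0.5)  # standard deviations
```

A single run gives comparable numbers for both; it is only when the experiment is repeated many times, as in Fig. 2, that the much larger spread of the jackknife estimate becomes visible.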
This method brings at least two disadvantages: First, the estimated variance depends on how the list x 1 , . . . , x N is ordered; reordering the list would yield a different variance estimate, although the estimated second moment (12) remains the same. And second, the variance of vectors within a group is ignored. The proposed estimator (15) suffers from neither of these disadvantages. To quantify the accuracy of both variance estimators, we numerically estimate the second moment of the cubic lattice Z n for n = 3. The second moment of Z n is U = E[ x 2 ] = n/12, and the variance ofÛ can be calculated exactly as varÛ = (1/N ) var We generated N = 100 000 vectors uniformly in the Voronoi region of Z 3 , which is the unit cube, computedÛ using (12), and estimated the variance ofÛ using the two methods. For the jackknife, we used a group size of g = 100 as in [15]. Both estimators were run 10 000 times, each time with N new random vectors. Fig. 2 shows histograms of the resulting estimates of the standard deviation, together with the exact value. It can be observed that (12) in this example is more than an order of magnitude more accurate than the jackknife with g = 100. The accuracy of the jackknife improves with increasing g, and it is most accurate when each group consists of a single sample, i.e., when g = N . In this extreme case, the jackknife simplifies into (15)-but this is not how the jackknife was applied in previous studies [15]- [17]. Having established the usefulness of the new variance estimator, we proceed to estimate the quantizer constant G of Λ 16 with high accuracy. Numerically evaluating (12) and (16) for the mean and (15) and (17) for the standard deviation, using N = 4 · 10 12 random 16-dimensional vectors, we obtain The difference betweenĜ and the exact G in (5) is only 0.7 standard deviations, which may serve as a numerical verification of the face hierarchy. The results are also in agreement with the previous (less accurate) estimate of the same constant in [15,Eq. (13)]. V. THE ALGORITHM Our algorithm 2 is described in detail in [13], which builds on previous methods for finding all relevant vectors [21] and faces [22]. In this section, we briefly summarize the main concept and present minor modifications to the methods of [13]. The basic approach remains the same: We first find all relevant vectors, i.e., normals of the facets, and all the vertices of the Voronoi region. The hierarchy of subfaces of the facets is then built by recursively intersecting the sets of vertices of parent faces. The computational cost is kept low by finding the classes of faces equivalent under Aut(Λ 16 ) and then only constructing the child faces of one (arbitrarily chosen) representative face per class. In total, only 159 143 faces are constructed explicitly. The classification of faces is performed iteratively as described in [13,Section 2.4.4]. In this method, we begin identifying equivalent faces using a proper subgroup U ⊂ Aut(Λ 16 ), which creates classes of faces under U. The set consisting of one (arbitrary) representative per class is then classified using another subgroup U . This can be repeated with different subgroups until we finally use the full group Aut(Λ 16 ). For Λ 16 , we found that a good option is to use only a single subgroup U, chosen as the stabilizer of the relevant vector n 2 with a stabilizer size of 1 451 520 (see Tab. I). 
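The iterative classification just outlined can be condensed into a short sketch; the faces and the pairwise equivalence test are placeholders (the test being, for example, the one described in Sec. V-A below), so this is an assumed outline rather than the paper's GAP implementation.

```python
# Schematic sketch of iterative face classification under a chain of subgroups.
# `equivalent(f1, f2, G)` is assumed to decide whether some element of G maps f1 onto f2.

def classify(faces, groups, equivalent):
    """Return one representative face per equivalence class under the last group in `groups`."""
    reps = list(faces)
    for G in groups:                       # e.g. [stabilizer of n2, Aut(Lambda_16)]
        new_reps = []
        for f in reps:
            for r in new_reps:
                if equivalent(f, r, G):    # f is already represented under G
                    break
            else:
                new_reps.append(f)         # f starts a new class under G
        reps = new_reps                    # only representatives move on to the next group
    return reps
```

Because most faces are already separated by the small subgroup, the expensive tests under the full automorphism group only have to be run on a short list of candidate representatives.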
We made three changes to the method in [13], which affect how the equivalence of two faces is tested and how the orbits and stabilizers of individual vectors are constructed. We now describe these changes in turn, briefly revisiting the respective previous methods followed by our new algorithms. A. Testing the equivalence of faces Our previous method of testing whether a face F is equivalent to another face F under a group G is based on the following idea. 3 For each face, we take a set of vectors that uniquely identifies that face. We use either the set of relevant vectors associated with the facets containing the face (i.e., the "normal vectors" of the face) or alternatively the face's vertices. The choice depends on the number of vectors in either of the two sets and on their classification under G. Let x 1 , . . . , x N be the vectors of F and y 1 , . . . , y N be those of F . We order these vectors such that x i is equivalent to y i for all i (if that is not possible, the faces are inequivalent). We then form the sets of all transformations between pairs (x i , y i ) for all i. If the intersection of these sets is non-empty, it consists of transformations taking F into F . If it is empty, however, we permute one of the sets and try again. The faces are inequivalent if and only if all permutations lead to empty intersections of the sets of transformations. In principle, the full set of transformations between any two equivalent vectors can easily be constructed as follows. Let x = g x x rep and y = g y x rep be two equivalent vectors with g x , g y ∈ G and x rep representing their equivalence class. Then, the full set of transformations in G taking x into y is [13] T where Stab G (x rep ) is the stabilizer of x rep in G. From Tab. I, we see that for Λ 16 , the sets (20) contain between 1 344 and 20 643 840 elements. When forming the intersections using GAP, these sets are held in memory, which becomes a problem when multiple intersections need to be calculated. We now describe a memory-efficient alternative, shown in Alg. 1. As in [13], this method is used after ensuring that F and F have the same number of vertices and number of normal vectors, and that the respective sets of vectors can be ordered such that x i ∼ y i for all i. The main idea is to fix one vector x of F and then construct all transformations taking x into any of the vectors y ∈ Y, where Y denotes the vectors of F . Clearly, if F and F are equivalent, say gF = F for some g ∈ G, then g takes x into one of the vectors y of F and thus g ∈ T x . Choosing x as the vector with the smallest stabilizer and fewest equivalent vectors of F , T x will often be very small and can be checked one by one. However, even if the smallest stabilizer is large, the elements of T x can be enumerated without holding the full set in memory. Alg. 1 performs this test as follows. In lines 6 and 7, x is chosen as the vector with the smallest stabilizer and, if there are multiple possibilities, then the one with the smallest number of equivalent vectors of F . In line 10, we store the set of these equivalent vectors as Y x . Independently from the choice of x, let D be the smaller of the sets of vertices and of normal vectors of F (lines 12-17). We choose D analogously for F . Since the stabilizer is a group, we can use methods in GAP to iterate over all its elements in line 18, while holding only one element in memory at any given time. 
For each element g s ∈ Stab G (x rep ) and each y ∈ Y x , we form the transformation (line 21) and evaluate if the two sets gD and D are equal. If they are, then F is equivalent to F and gF = F . If they are unequal for all g s ∈ Stab G (x rep ) and all y ∈ Y x , then the two faces are inequivalent under G. B. Constructing the orbit of a vector We use a variation of the standard orbit enumeration technique as implemented, e.g., in [18]. Alg. 2 constructs the orbit of a vector x under a group G and stores the group elements taking x to the elements in its orbit. These group elements are Algorithm 1 Evaluate if two faces F and F are equivalent under G. If they are, return a transformation g ∈ G taking F into F , otherwise return NULL. We define |X | as the number of elements in a set X and arg min x∈X f as the function returning the subset of X for which f (x) is smallest. 1: procedure FINDTRANSFORMATION(F , F , G) 2: V ← vertices of F x ← ANY(arg min x∈M |{y ∈ V ∪ N : y ∼ x}|) 8: x rep ← REPOF(x, G) 9: if |V| < |N | then for all g y ∈ T y do 21: g ← g yḡ 22: if gD = D then 23: return g 24: return NULL Utility functions: • ANY(X ) returns an arbitrary element of X • REPOF(x, G) returns a representative of x, assuming that all vectors have been classified under G and an arbitrary but fixed choice of class representatives has been made • TRANSFORMOF(x, G) returns g x ∈ G such that x = g x REPOF(x, G), again assuming that vectors have been classified under G and that (at least) one group element taking its representative into x is known needed in the procedure TRANSFORMOF in Alg. 1. The result is stored as a dictionary, where each key-value pair consists of an element y of the orbit as key and one arbitrary transformation matrix taking x into y as value. We will call such a dictionary an orbit map. orbit map[x] ← identity matrix 5: pool ← copy of orbit map 6: while pool is not empty do 7: new pool ← new empty Dictionary 8: for all y ∈ keys of pool do 9: h ← pool[y] 10: for all g ∈ gens do 11: y ← gy 12: if y / ∈ orbit then return orbit, orbit map vertices are needed. 4 The idea of the standard orbit algorithm is to repeatedly apply the generators of the group to the initial and the newly constructed vectors until no new vector appears. This is used in Alg. 2, where the pool and new pool variables keep track of which new vectors have appeared in the last iteration. In lines 16-17, we conditionally store the vector and its transformation in orbit map. If all vectors are known, the new pool remains empty and the termination condition of the whileloop is satisfied. When constructing the orbits of vertices, the condition is chosen to evaluate to true only when the vector lies in one of the representative facets. For relevant vectors, condition is set to always evaluate to true. C. Constructing the stabilizer of a vector The third change to the method in [13] is an algorithm to construct the stabilizer of a vector under a group G. Our method is again inspired by a standard orbit-stabilizer algorithm such as the one implemented in [18]. Stabilizers are needed in line 18 of Alg. 1, where we iterate over all elements of the stabilizer of one of the representative vectors. For G = Aut(Λ 16 ), there are in total 8 representative vectors listed in Tab. I. We previously let GAP find the stabilizer of a vector. With the knowledge about each vector's orbit size, however, we can implement a more efficient method. Algorithm 3 Construct the stabilizer of a vector x in G whose orbit size is known. As in Alg. 
2, the group G is given as a set gens of generator matrices. See the main text for details. orbit map[x] ← identity matrix 8: for all g ∈ G do 9: x ← gx 10: if x ∈ keys of orbit map then 11: g ← orbit map[x ] 12: if g s / ∈ stab then 14: append g s to stab gens 15: stab ← GAP group from stab gens 16: if |stab| = stab size then orbit map[x] ← g In Alg. 3, we construct elements of the orbit by applying different group elements to the vector x (line 9). Any vector x that is visited this way is stored together with the corresponding group element in an orbit map (line 19). Whenever we encounter a vector x previously found, we retrieve the stored group element g (line 11). Since gx = g x, we have g −1 g x = x and so g s = g −1 g is an element of the stabilizer of x. If it is not yet an element of the subgroup stab ⊆ Stab G (x) found thus far, it is added to the list of group generators in line 14. After updating stab in line 15, we check if it is complete by comparing its size against the known stabilizer size. This is made efficient by two facts. First, due to the "birthday paradox" [23, Section 3], the first coincidence in line 10 occurs on average after 1 + For Aut(Λ 16 ), this means that the first element of the stabilizers of the vectors in Tab. I is found after about 83 (for n 1 and v 1 ) to 10 210 (for v 2 , v 4 , v 5 ) iterations. Second, the stabilizers are often generated by very few group elements. In the case of Λ 16 , the set of all 8 stabilizers is found within minutes on a single core, since each stabilizer can be generated by only two generators. VI. CONCLUSIONS In this work, we provide a complete account of the relevant vectors, vertices, and face classes of the Voronoi region of the Barnes-Wall lattice Λ 16 . This is used to calculate the exact second moment of Λ 16 . In order to obtain these results, we improve our algorithm [13], allowing it to be used with larger symmetry groups than previously possible. We believe that our algorithm can be used to analyse the Voronoi regions of many lattices with known symmetry group, potentially even in dimensions higher than 16. Using Monte-Carlo integration, the exact value of the second moment is numerically verified. Furthermore, it is shown that the variance of the numerical result can be approximated with much higher accuracy than conventionally obtained with the jackknife estimator. This may provide significant improvements in numerical second moment estimates in the future.
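As a closing aside on Sec. V-C, the collision idea behind Alg. 3 can be sketched as follows; this is an assumed illustration rather than the paper's GAP code, and the stopping test against the known stabilizer size (a group-order computation) is replaced by a plain count of distinct stabilizing elements found.

```python
import numpy as np

def stabilizer_elements(x, gens, want=2, seed=1):
    """Collect elements g_s of G (given by generator matrices `gens`) with g_s x = x.

    Random words in the generators are applied to x; when two words land on the same
    orbit point, their quotient stabilizes x (the collision idea of Alg. 3)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    dim = len(x)
    seen = {tuple(np.round(x, 9)): np.eye(dim)}   # orbit point -> one word reaching it
    found, g = [], np.eye(dim)
    while len(found) < want:
        g = gens[rng.integers(len(gens))] @ g     # extend the current random word
        key = tuple(np.round(g @ x, 9))
        if key in seen:                           # collision: seen[key] x = g x
            gs = np.linalg.inv(seen[key]) @ g     # hence seen[key]^{-1} g fixes x
            if not np.allclose(gs, np.eye(dim)) and not any(np.allclose(gs, h) for h in found):
                found.append(gs)
        else:
            seen[key] = g
    return found
```

The birthday-paradox argument mentioned above puts the first collision at roughly sqrt(pi*M/2) steps for an orbit of size M, which is consistent with the quoted counts of about 83 and 10 210 iterations.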
v3-fos-license
2018-04-03T06:10:37.251Z
2013-02-21T00:00:00.000
15503089
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cam4.59", "pdf_hash": "dbc728edceb4d395e3375c27e22d5b31bac47cf0", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46290", "s2fieldsofstudy": [ "Medicine" ], "sha1": "dbc728edceb4d395e3375c27e22d5b31bac47cf0", "year": 2013 }
pes2o/s2orc
Evaluation of 5-year imatinib treatment of 458 patients with CP-CML in routine clinical practice and prognostic impact of different BCR-ABL cutoff levels We evaluated responses to the treatment and long-term outcomes of chronic myeloid leukemia patients treated with imatinib as first-line treatment in routine clinical setting from two countries with centralized tyrosine kinase inhibitors (TKIs) treatment. We assessed prognostic significance of European LeukemiaNet (ELN) 2006- and 2009-defined responses and the prognostic value of molecular responses at defined time points on 5-year survivals. Among the cumulative rates of incidence of hematologic, cytogenetic, and molecular responses and all important survival parameters, we evaluated the prognostic significance of different BCR-ABL transcript-level ratios (≤1%; >1%–≤10%; >10%) at 3, 6, 12, and 18 months (n = 199). The ELN optimal response criteria and their predictive role were significantly beneficial for event-free survival at all given time points. We found significant improvement in survivals of patients with BCR-ABL lower than 10% in the 6th and 12th months. Significantly better outcome was found in patients who achieved major molecular response (MMR) in the 12th month. The cumulative incidences of complete cytogenetic response (CCyR) and MMR were significantly associated with the molecular response in the 3rd month. The ELN response criteria and their predictive role were helpful at given time points; however, the 2009 definition did not significantly alter the prognostic accuracy compared with that of the 2006 definition. The significant value was observed for cytogenetic responses at the 6th and 12th month. Moreover, progression-free and event-free survivals were improved with MMR at the 12th month. Introduction Imatinib (IM; originally STI571), a BCR-ABL tyrosine kinase inhibitor (TKI), is a highly potent targeted therapeutic agent that has substantially changed the treatment of patients with chronic myeloid leukemia (CML). For patients with newly diagnosed disease in the chronic phase (CP), it markedly improves prognosis [1,2]. Following the IRIS multicenter clinical trial (International Randomized Study of Interferon vs. STI571), which demonstrated an estimated 8-year overall survival (OS) of 85%, progressionfree survival (PFS) of 93%, and event-free survival (EFS) of 81%, IM became the first-choice medication for CML patients in the CP [3,4]. The European LeukemiaNet (ELN) recommendations, initially published in 2006 [5], aimed to rationalize CML treatment, so as to unify treatment procedures and to optimize the frequency and types of laboratory analysis. This first published summary of recommendations for CML treatment and monitoring particularly focused on early detection of its failure [5]. ELN 2009 is an updated version that reflects the experience with second-generation TKIs and the long-term outcome data with the aim of managing the therapy for survival maximization and normal quality of life [6]. According to the ELN recommendations, the response to first-line IM can be stratified according to the therapeutic response at defined time points [5,6], where optimal responders were likely to reach long-term benefit from the treatment, in contrast to the others. One of the important changes in the upgraded version was the definition of the 3rd month optimal response. 
At least minor cytogenetic response was introduced next to the complete hematologic response, both considered as an optimal response achievement in the 3rd month [6]. Outside clinical trials, there is still lack of data on the impact of IM on patient outcome as well as on the applicability of ELN recommendations to clinical practice. As the treatment of patients in routine clinical practice is influenced by many factors not encountered in clinical trials, the extrapolation of procedures and recommendations from clinical trial results to clinical practice may not be straightforward. For this reason, it is necessary to study the experience and outcomes from the routine practice. BCR-ABL transcript-level monitoring (BCR -gene encoding the break point cluster region protein; ABL -Abelson murine leukemia viral oncogene homolg 1) is a highly useful diagnostic tool that controls the effectiveness of the CML treatment and indicates at an early stage resistance development or disease progression. So the kinetics of BCR-ABL transcript level is very important and many reports proved its usefulness for disease management e.g., [7][8][9]. However, it is still a matter of contention if BCR-ABL transcript-level data observed in the defined time points may be significantly predictive for the long-term outcome of CML patients treated with IM first line and might improve or even replace the prognostic significance of cytogenetic data [10,11]. Major effort was put into the interlaboratory harmonization and conversion factors (CF) calculation and their validation at international scale (IS) [12,13]. Many labs across Europe have obtained their validated CF, but the international study showed the impending instability of the CF within one single lab [14]. Therefore, even estimated and validated CF could not guarantee that laboratories will perform the monitoring in an entirely comparable way, because new causes that may contribute to an increase in the variability of the measurements may appear over time (e.g., other sources of chemicals, another batch, upgraded instrumentation, human factor). In the Czech Republic and Slovakia (regions with alto-gether~15 million inhabitants), the TKI treatment of CML patients is centralized in 13 major hemato-oncologic centers. Treatment data from all these centers are collected in two databases: CAMELIA [15] and INFINITY [16] including all patients treated with IM. The patients are closely monitored and treated according to the ELN recommendations [5,6]. In this study, we focused on analysis of the prognostic value of the ELN 2006-and 2009-recommended responses evaluations for the first-line IM treatment. Moreover, we attempted to evaluate the prognostic significance of different cutoffs of BCR-ABL transcript level in the 3rd, 6th, 12th, and 18th month in the outcomes and compare data with the recently reported results from the IRIS study. Patients Data from a cohort of 458 unselected patients with newly diagnosed CML in the CP, treated with first-line IM in 11 Czech and Slovak hemato-oncologic centers between the years 2003 and 2009 were analyzed. The databases CAMELIA (Chronic MyEloid LeukemIA) and INFINITY (tyrosine kinase Inhibitors iN FIrst aNd followIng CML Treatment) collected anonymized data of 306 and 152 patients, respectively, with approval of the ethic committees and patients' informed consents. Definitions of treatment responses and the endpoints Treatment responses were evaluated according to the ELN recommendations released in 2006 and 2009 [5,6]. 
We assessed the cumulative incidence rates of complete hematologic response (CHR), major cytogenetic response (MCyR), complete cytogenetic response (CCyR), major molecular response (MMR), and complete molecular response (CMR). OS was defined as the time from the start of IM administration to death from any cause, irrespective of IM discontinuation. Survival to CML-related death (OS CML ) was defined as the time to death due to CML only. Transformation-free survival (TFS) was defined as survival without evidence of accelerated phase (AP) or blast crisis (BC) or death from any cause during IM therapy. PFS was defined as in the IRIS trial [17], that is, survival without evidence of AP or BC, loss of CHR, loss of MCyR, increased white blood cell count (in patients who had never had CHR), or death from any cause while on IM treatment, whichever came first. Events EFS were defined as a progression (as in PFS described above), loss of CCyR together with improved definition including failure to achieve CHR at 6 months, MCyR at 12 months, and CCyR at 18 months, or intolerance of IM as the cause for discontinuation, whichever came first [5,18]. Alternative treatment-free survival (ATFS) was defined as the time since start of IM to change to any alternative treatment or death from any cause during the IM therapy [16]. ATFS reflected the real proportion of patients who stayed on IM despite the event occurrence. Cytogenetic and molecular analyses were performed according to ELN recommendations [5,6]. Conventional cytogenetic analysis used the G-banding technique, and at least 20 metaphases were analyzed. Evaluation of prognostic significance of the ELN recommendations Based on the quality of a response to IM at defined time points (3, 6, 12, and 18 months) determined using the ELN 2006 recommendations, the patients were stratified into the following categories: optimal response, suboptimal response, and treatment failure [5]. Subsequently, the prognostic impact of optimal and less than optimal responses on TFS, PFS, and EFS was assessed. On comparing the ELN 2009 recommendations with the 2006 version, the treatment response at 3 months is more strictly defined: optimal response = CHR and at least a minor CyR (mCyR); suboptimal response = no CyR; and treatment failure = no CHR [6]. Impact of the changes between these two editions (2006 and 2009) on survival end points was assessed on a subset of 156 patients, in whom the cytogenetic analysis was performed in the 3rd month. Prognostic significance of molecular response Patients' survival was evaluated according to different rates of BCR-ABL transcript level at 3, 6, 12, and 18 months. BCR-ABL transcript quantity data were considered only from a subset of 199 patients, whose samples were analyzed in three laboratories with the standardized quantitative real-time reverse transcription-polymerase chain reaction (RT-PCR) methodology at the time of data collection. These three laboratories were annually controlled by the external quality control organized by the National reference laboratory for DNA diagnostics in the Czech Republic (accredited by the Czech Accreditation Institute; http:// www.cia.cz/default.aspx?id=45) and produced comparable results. In the meantime, the laboratories have started their participation in the international BCR-ABL standardization project (EUTOS for CML) [12]; however, any CFs for the calculation into the IS had not yet been and recently observed CF should not be applied retrospectively. 
We were aware that the interlaboratory comparison was not absolute, but we intended to evaluate the molecular data as these reflect the clinical practice that had been running during the years 2003-2009. An optimized multiplex RT-PCR was adapted from the method of Cross et al. to determine the type of BCR-ABL transcript [19]. Quantitative real-time RT-PCR was performed according to Europe Against Cancer (EAC) recommendations [20], using ABL (two laboratories) or B2M (one laboratory) as control genes. The MMR was identified if the BCR-ABL transcript at any levels was stably 0.1%. BCR-ABL-negative sample (CMR) was identified if the BCR-ABL transcript was stably undetectable by quantitative real-time RT-PCR and/or nested RT-PCR [6]. Patients with nonstable MMR or CMR were excluded from evaluations. Statistical methods The frequency tables and standard descriptive statistics (mean, median, minimum, maximum) were used to summarize patient characteristics. The probabilities of OS, OS CML , TFS, ATFS, PFS, and EFS, were estimated using the Kaplan-Meier method. The probabilities of hematological, cytogenetic, and molecular responses were estimated using the cumulative incidence method. The point estimates were supplied with 95% confidence intervals (CI). Landmark analysis of TFS, PFS, and EFS was performed based on treatment responses according to ELN criteria [5,9]. Univariate analyses estimating prognostic power of treatment response for TFS, PFS, and EFS were based on log-rank test. Level of statistical significance a = 0.05 was used in all analyses. Analyses were performed by using statistical software SPSS 12.0.1 for Windows (IBM Corp., Armonk, NY) and STAT-ISTICA 8.0 for Windows (StatSoft, Tulsa, OK). Patient characteristics and treatment Between July 2003 and July 2009, a total of 458 adult patients (median age 52 years (range 17-81), men 51.3%) with Ph-positive CML in the CP (one patient was BCR-ABL positive, but without Ph chromosome), treated with IM as a first-line therapy, were recorded in the databases CAMELIA and INFINITY (Table 1). Median follow-up on IM treat-ment was 33.1 months (range 1.4-82.1); median time from diagnosis to start of IM therapy was 1.2 months (range 0-13.3). Initially administered daily dose of IM was 400 mg. The dose was reduced in 131 (28.6%) patients, mainly because of side effects (e.g., vomiting, diarrhea, headache, hematologic toxicity), and escalated during the treatment to 600-800 mg/day in 101 patients (22.1%) mainly because of suboptimal response. IM was permanently discontinued in a total of 112 (24.5%) patients after median 14.4 months (range 0.2-25.7) from the start of therapy. Reasons for the discontinuation included disease progression or IM failure (n = 54), intolerance to IM (n = 30), elective allogeneic transplantation (n = 14), death from non-CML-related causes (n = 8), and other reasons (n = 6). The cumulative incidences of hematologic and cytogenetic responses among 458 patients are illustrated in Figure S1a and summarized in Table S1. Cumulative incidences of MMR and CMR (Fig. S1b, Table S1) were evaluable in the cohort of 199 patients (see Methods). In line with the treatment duration, the number of patients who achieved CCyR and MMR rose continuously, with a rise in CCyR from 61.7% after 18 months to 79.2% after 5 years of IM treatment, and in MMR from 51.2% to 71.8%. BCR-ABL negativity increased from 11.3% after 18 months to a predicted 37.0% after 5 years of IM. 
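For readers who wish to reproduce this type of analysis, the landmark survival comparisons described under "Statistical methods" can be sketched with the Python lifelines package as follows; this is purely illustrative (the analyses reported here were run in SPSS and STATISTICA), and the data file and column names are hypothetical.

```python
# Illustrative sketch only: Kaplan-Meier estimates and a log-rank test for a
# 12-month landmark comparison of EFS. The file and column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("patients.csv")                      # one row per patient (hypothetical)
df["optimal_12m"] = df["optimal_12m"].astype(bool)    # e.g. CCyR at 12 months per ELN 2006

# Landmark analysis: keep only patients still event-free on imatinib at 12 months
lm = df[df["efs_months"] > 12]

kmf = KaplanMeierFitter()
for label, grp in lm.groupby("optimal_12m"):
    kmf.fit(grp["efs_months"], event_observed=grp["efs_event"], label=str(label))
    print(label, float(kmf.survival_function_at_times([60]).iloc[0]))  # 5-year EFS probability

a, b = lm[lm["optimal_12m"]], lm[~lm["optimal_12m"]]
result = logrank_test(a["efs_months"], b["efs_months"],
                      event_observed_A=a["efs_event"], event_observed_B=b["efs_event"])
print(result.p_value)                                  # compared against alpha = 0.05
```

The same pattern applies to the other endpoints and landmark time points.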
Treatment responses and survival end points Prognostic significance of optimal response defined by ELN 2006 [5] The prognostic significance of achieved optimal versus nonoptimal responses on the 5-year probability of survival without transformation, progression, and event is summarized in Table 2. Optimal responses at 6 months (partial cytogenetic response [PCyR]) and 12 months (CCyR) were predictive of PFS (P = 0.041, P = 0.021) and EFS outcomes (P = 0.001, P = 0.001) after 5 years of IM therapy. Optimal response at 3 months was predictive of EFS. The 18month interval defined according to ELN was not predictive of TFS, PFS, and EFS. Probability of survival according to optimal response in the 3rd month: comparison of ELN definitions 2006 [5] and 2009 [6] We analyzed the probability of survival in patients who achieved optimal response in the 3rd month according to (Table 2). Figure 1 shows no significant difference between the ELN 2009 and 2006 classifications of optimal responders in the probability to survive without progression, and event. Survival of patients according to BCR-ABL transcript levels Only BCR-ABL molecular data from 199 patients that had been obtained from laboratories with standardized and comparable methodologies were considered (see Methods). The 5-year probability of transformation-free, progression-free, and event-free survivals were calculated according to BCR-ABL transcript levels (1%; >1%-10%; >10%) in 3rd, 6th, 12th, and 18th months. The 6th month landmark showed significant differences between the group with BCR-ABL transcripts higher than 10% and the groups with the levels equal to or lower than 10% for PFS and EFS (Table 3). A significant difference was found between the groups with BCR-ABL level higher than 1% and those with levels equal to or lower than 1% in the 12th month. Even more significantly higher probability of PFS and EFS was found in patients who achieved MMR in comparison with patients with BCR-ABL level higher than 0.1%. The 18th month landmark showed significantly higher probability to survive without an event in patients who achieved MMR. The 3rd month landmark was not significant when comparing groups that achieved different BCR-ABL transcript levels for the TFS, PFS, and EFS after 5 years. Cumulative incidence of CCyR and MMR according to BCR-ABL transcript levels in the 3rd month The probability of cumulative incidence of CCyR in 12th month and MMR in 18th month was analyzed according to the BCR-ABL transcript level with defined ranges in 3rd month. Again, only reliable BCR-ABL data that were available from 145 patients in the 3rd month were considered in this analysis. A significantly higher probability to achieve CCyR at 12th month was found for the group with BCR-ABL quantity in the 3rd month lower than or equal to 10% than in the group with BCR-ABL higher than 10% (P < 0.001) ( Fig. 2A, Table S2). Patients with BCR-ABL transcript level >10% in the 3rd month had a significantly lower probability to achieve MMR, compared with those with lower levels (P < 0.001). Significant differences were found even between the groups with BCR-ABL transcript levels 1% and those with levels >1% to 10% (P = 0.028) (Fig. 2B, Table S2). An increase in the cumulative incidence of CCyR and MMR after 4 years and 30 months on IM treatment, respectively, was found in all three BCR-ABL groups (Table S2). Discussion Following the results of the IRIS multicenter trial [3,4], IM promptly became the standard frontline therapy of CML in CP. 
Some single-center reports on the use of IM in clinical practice have been published, but further evaluations of data from nonselected cohorts of patients or population-based studies are required. This study is focused on nonselected cohort of CML patients in CP treated in clinical practice with IM as firstline therapy during the years 2003-2009. Recalculated for the whole population in both countries (Czech Republic Subgroup of patients with known cytogenetic status in 3rd month. 2 In this study, TFS was defined as survival without evidence of AP or BC or death from any cause during IM therapy. PFS was defined as survival without evidence of AP or BC, loss of CHR, loss of MCyR, increased white blood cell count (in patients who had never had CHR), or death from any cause while on IM treatment, whichever came first. EFS was defined as a progression (as in PFS described above), loss of CCyR together with improved definition including failure to achieve CHR at 6 months, MCyR at 12 months, and CCyR at 18 months, or intolerance of IM as the cause for discontinuation, whichever came first [5,18]. and Slovakia~15 million inhabitants), annual incidence rate corresponds to 0.78 CP CML treated with IM first line per 100,000 adults. According to Rohrbacher et al. [21] the incidence rate of CML varies from 0.6 to 2.0 cases per 100,000, higher in men than in women, which is in agreement with our cohort. The access to IM first Progression-free survival (PFS) Event-free survival (EFS) Time from start of imatinib therapy (months) Time from start of imatinib therapy (months) Proportion of patients without progression Proportion of patients without event ELN criteria 2006 Optimal response Less than optimal response ELN criteria 2009 Optimal response Less than optimal response In this study, PFS was defined as survival without evidence of AP or BC, loss of CHR, loss of MCyR, increased white blood cell count (in patients who had never had CHR), or death from any cause while on IM treatment, whichever came first. EFS was defined as a progression (as in PFS described above), loss of CCyR together with improved definition including failure to achieve CHR at 6 months, MCyR at 12 months, and CCyR at 18 months, or intolerance of IM as the cause for discontinuation, whichever came first [5,18]. PFS, progression-free survival; EFS, event-free survival; ELN, European LeukemiaNet; AP, accelerated phase; BC, blast crisis; CHR, complete hematologic responses; MCyR, major cytogenetic response; IM, imatinib; CCyR, complete cytogenetic response. line by health-insurance companies in 2003-2004 was limited, and part of elderly patients were not referred to the hematologic centers; this may explain the observed lower age median in this study than expected [22]. The direct comparison of data from clinical practice with data from clinical trials may be difficult. The problem lies in different cohorts, survival definitions, and follow-up [16,23]. Moreover, the definitions of end points can differ even in the updates of a specific study [3,24]. However, regarding the probabilities of OS, PFS (in our study as TFS), EFS (in our study as PFS), the presented multicenter data of nonselected cohort of 458 patients are highly similar to the IRIS study [3], with the intention-to-treat analysis from Hammersmith hospital [18], and with our previous study on patients from two centers [16]. 
Among OS, TFS, and loss of CHR, we may directly compare the observed probabilities of PFS (survival without evidence of AP or BP, loss of CHR, loss of CyR, increased WBC (in patients who had never had CHR), or death from any cause while on IM treatment, whichever came first); this definition was reported as EFS in the mentioned studies showing 83% probability at 7 years in IRIS, 81.3% at 5 years in Hammersmith, and 78.1% at 4 years in INFINITY in comparison with this report showing 80.7% at 5 years. Among the survival analyses ("time to event analyses"), we used a recently published parameter ATFS, that is, the indicator of survival without the administration of an alternative treatment [16], which is, in our opinion, an improved definition for the characterization of the proportion of patients who will continue on IM treatment despite the events. This is supported by our results showing that 69.7% (95% CI: 64.2-75.2) of patients will be still treated with IM after 4 years from treatment initiation (when EFS = 66.6% [95% CI 60.6-71.4]). It is important to note that second-generation TKI has been approved in both countries since 2007; therefore, the ratio of patients who stayed on IM in spite of an event is quite high for the treatment between the years 2003 and 2009. Prognosis of ELN-defined responses As the next goal, we evaluated the prognostic significance of treatment responses defined by ELN [5,6]. A significantly better prognosis for PFS was demonstrated for optimal response to IM (i.e., cytogenetic response) at 6 and 12 months, which is in agreement with other recent reports [25][26][27]. For EFS, including 3rd month, all three evaluated prognostic time points were significant. The optimal response in the 18th month is according to ELN defined as MMR achievement. In spite of EFS, we did not find significant differences in the probability to survive without progression between patients with and without MMR in the 18th month. On the subgroup of patients (n = 156), we showed that the optimal response defined for the 3rd month according to 2006 and 2009 ELN recommendation was significantly associated with better survival. Additionally, an achievement of mCyR after 3 months of the treatment (ELN 2009) did not significantly improve the survival prognosis over that based solely on CHR (ELN 2006). In contrast, MCyR in the 3rd month was significantly associated with 5-year PFS (defined as TFS in our study) in Hanfstein et al. [10]. Prognosis of BCR-ABL transcript levels in defined time points Currently, a frequently discussed topic in CML treatment is the quickness to achieve deep molecular response after TKI treatment initiation and its prognostic impact. It was postulated and shown in some works that the earlier and deeper the molecular response was, the more likely the response to treatment would be better and longer lasting [28]. In this study, we were able to analyze BCR-ABL molecular data from a subset of 199 patients who were monitored in the three laboratories that were in the meantime standardized and compared with each other. BCR-ABL monitoring was performed in those patients regularly at least every 3 months including defined time points such as the 3rd, 6th, 12th, and 18th months after start of IM therapy. We found that patients were divided nearly equally into the three groups according to achieved BCR-ABL transcript level 3 months after IM start: 1%; >1% 10%; >10%. 
However, among these groups, we did not find any significant predictive value for survivals without transformation, progression, and event. Recent works of Hanfstein et al. [10] and Marin et al. [11] proved that the 3rd month BCR-ABL transcript level higher than 10% IS was significantly predictive of survival without progression (i.e., survival without evidence of AP or BC or death from any cause during IM therapy) on IM first line. A cohort of patients of similar size to the one used in our study was the Hammersmith cohort of 282 patients; however, their OS survival was 84.3% in the 8-year probability, allowing better discrimination in comparison with our study when the outcome of our patients was better in the 5-year probability (OS 90.2% and OS CML 96.6%). The OS in our study was comparable to Hanfstein et al. [10]; however, the cohort of patients of the German Study VI was 3.5 times larger. We suppose that longer follow-up and larger cohort of patients in our study are needed to showing BCR-ABL data in the 3rd month landmark predictive for outcome. Hanfstein et al. [10] and Marin et al. [11] showed an impact of BCR-ABL equal to or lower than 1% IS cut-off in the 6th month on significantly better survival. Significant differences were found in more detailed definitions of PFS and EFS in our study within the BCR-ABL groups after 6 and 12 months of IM therapy (exception was for PFS between 1.0% vs. >1.0% -10% in the 6th month). No benefit was found in PFS or EFS for patients with MMR in the 6th month. This observation is consistent with the data published by Hughes et al. [17] showing no significant difference in EFS (defined in our study as PFS) on comparing MMR versus no MMR achievement and versus >0.1% to 1% in the 6th month landmark. MMR achievement in the 12th month showed significantly higher probability of PFS and EFS in comparison with patients without MMR. This is in agreement with the recently published data from IRIS study [17] and Marin et al. [11], which confirmed better EFS (in our study defined as PFS) in patients with MMR in the 12th month. In spite of IRIS and Jabbour et al. [26], we found a significant difference for PFS even when comparing MMR versus >0.1%-1.0%. Hehlmann et al. [25] proved better PFS (defined as survival free of AP and BC) significantly associated with MMR in the 12th month, which we did not confirm in our study for TFS (i.e., survival free of AP, BC, or death from any cause during IM therapy). To explore the possible importance of depth of early molecular response, we investigated the cumulative incidence of MMR and CCyR according to the BCR-ABL transcript level in the 3rd month. This analysis clearly showed that patients with a BCR-ABL ratio >10% had a significantly lower probability of achieving MMR and CCyR than those with lower levels (P = 0.001). The greatest reduction in BCR-ABL within the first 3 months of IM therapy was significantly associated with the cumulative incidence of CCyR and MMR optimal achievements in the 12th month and the 18th month, respectively. Our results are consistent with previous work showing that the deeper the molecular response and the earlier these responses are achieved, the higher is the probability of achieving CCyR and MMR [10,11,27]. 
Additionally, irrespective of optimal response definition for CCyR and MMR achievement, the 4-year and 30-month cumulative incidence of CCyR and MMR, respectively, showed that there is still a chance that significant proportion of patients will achieve required responses after a longer IM treatment. This may occur in patients in whom the dose of IM was reduced during the treatment for various reasons and who therefore did not achieve CCyR or MMR in defined optimal time. Conclusion Our data, which are highly comparable to clinical trials or single-center intention-to-treat analysis, significantly show the effectiveness of IM as a first-line treatment in patients with CP-CML. The response criteria and their predictive role defined by ELN were helpful at given time points; however, the ELN 2009 did not significantly alter the prognostic accuracy compared with ELN 2006. Additionally, the powerful value of cytogenetic response achievement at the 6th and 12th months was proved for outcome prognostication. Moreover, PFS and EFS with more detailed definitions in comparison with most of other studies were improved, with deeper molecular response including MMR at 12 months. The cumulative incidences of CCyR and MMR were significantly associated with the levels of BCR-ABL transcripts in the 3rd month. We should expect a significant impact of molecular response at 3 months on survivals, which we did not confirm. To prove whether the BCR-ABL transcript level cut offs at the 3rd month landmark have significant impact on better outcome remains a challenge for our forthcoming study that will require a larger cohort of patients and longer follow-up. Supporting Information Additional Supporting Information may be found in the online version of this article: Figure S1. Cumulative incidence of (a) CHR, MCyR, CCyR (N = 458), (b) MMR and CMR (N = 199). CHR, complete hematologic responses; MCyR, major cytogenetic response; CCyR, complete cytogenetic response; MMR, major molecular response; CMR, complete molecular response. Table S1. The cumulative incidence of survival (n = 458). Table S2. The cumulative incidence of responses. Table S3. Cumulative incidence of CCyR (A) and MMR (B) according to BCR-ABL level in 3rd month. CCyR, complete cytogenetic response; MMR, major molecular response.
v3-fos-license
2021-04-30T02:33:54.952Z
2021-01-01T00:00:00.000
233453862
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://thesai.org/Downloads/Volume12No3/Paper_21-Evaluating_Software_Quality_Attributes.pdf", "pdf_hash": "8f5e49850bdfdb2d4ae4ebd545bffd3d8585fae7", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46291", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "8f5e49850bdfdb2d4ae4ebd545bffd3d8585fae7", "year": 2021 }
pes2o/s2orc
Evaluating Software Quality Attributes using Analytic Hierarchy Process (AHP) The use of quality software is of importance to stakeholders and its demand is on the increase. This work focuses on meeting software quality from the user and developer’s perspective. After a review of some existing software-quality models, twenty-four software quality attributes addressed by ten models such as the McCall’s, Boehm’s, ISO/IEC, FURPS, Dromey’s, Kitchenham’s, Ghezzi’s, Georgiadou’s, Jamwal’s and Glibb’s models were identified. We further categorized the twenty-four attributes into a group of eleven (11) main attributes and another group of thirteen (13) sub-attributes. Thereafter, questionnaires were administered to twenty experts from fields including Cybersecurity, Programming, Software Development and Software Engineering. Analytic Hierarchy Process (AHP) was applied to perform a multi-criteria decision-making assessment on the responses from the questionnaires to select the suitable software quality attribute for the development of the proposed quality model to meet both users and developer’s software quality requirements. The results obtained from the assessment showed Maintainability to be the most important quality attribute followed by Security, Testability, Reliability, Efficiency, Usability, Portability, Reusability, Functionality, Availability and finally, Cost. Keywords—Analytic Hierarchy Process (AHP); software quality; quality attribute; quality model; sub-attributes I. INTRODUCTION Software quality is a paramount issue to all software stakeholders in a given establishment and its demand is increasing rapidly due to customer demand [1]. In the last few decades, the importance of the use of quality software has increased exponentially [2]. Software users see software as a tool to enable them to carry out their daily activities with ease, and hence, use it to perform sensitive tasks [3]. The use of less quality software can, directly and indirectly, endanger one's life [30] as well as causing huge loss to software users. As a result, many software quality models have been proposed to evaluate software quality, yet, none of these models has been widely accepted as the benchmark for assessing software quality. This is because these models do not address all the important software quality attributes that are of keen interest to stakeholders and are tailored towards meeting specific project's requirements. To address stakeholder requirements, custom software quality models have been proposed [4]. These custom made quality models offer different benefits to the software industry and research community and hence do not cover a wide scope of quality attributes. This research presents an evaluation of software quality attributes using the Analytic Hierarchy Process (AHP). It was conducted based on a questionnaire given to stakeholders to assess the quality attributes they expect a software quality model to have. As a result, an evaluation of these quality attributes was made and represented with a graph to pictorially highlight the percentage of weight each quality attribute ranked. Ranking the software quality attributes will assist developers greatly in selecting the best quality attribute for evaluating developed software. Previous works have failed to rank quality attributes and have led to the proposal of numerous custom software quality models. The rest of the paper is organized as follows: Section II discusses related work done on AHP, software quality models and quality attributes. 
Section III discusses software quality, quality models and attributes. The methodology used to select the software quality attributes is illustrated in Section IV. Section V presents the conclusion and discusses future works. II. RELATED WORKS Software quality models have been reviewed by numerous researchers in addressing software quality problems. The authors in [14] evaluated the quality of software in Enterprise Resource Planning (ERP) systems using the ISO 9126 model. They offered a comparison between existing quality models and identified the quality characteristics of ERP systems but they did not rank the main quality characteristics of the model. In [15], research was conducted on an analytical and comparative study of software usability quality factors. They analysed ten famous quality models for developing a usability model that satisfies the demand of current business software and proposed an integrated improved usability model for assuring software quality. The new usability evaluation model was proposed from ten models of McCall, Boehm, Shackel, FURPS, Nielson, SUMI, ISO 9242-11, ISO 9126, and QUIM model. A research was conducted by [16] on an approach for enhancing software quality based on ISO 9126 quality model. They were able to propose a new quality model for integrating some quality attributes in software development. Another study was conducted by [3] on software quality attributes to enhance software quality assurance. The authors did this research because, in recent times, industries are giving more attention to software quality improvement. Therefore, they focused on meeting customer perspectives of software quality to propose a new model. The limitation of the research is that it did not address availability, testability and reusability problems. Authors in [25] worked on extending Dromey's quality model to specify the security quality requirements in a software product. They conducted the research based on the increase in cybercrimes. The model was able to enhance the security requirement of software and trained people on how to develop secure software. A study by researchers in [26] adapted the ISO/IEC 9126 quality model to evaluate Enterprise Resource Planning (ERP) systems. The model was proposed as a result of the urge in the increasing usage of ERP systems by organisations to get faster data transactions. The researchers proposed the model to have six (6) main software quality attributes including functionality, maintainability, reliability, efficiency, usability and portability. The limitation was that the model did not address some of the most important software quality attributes such as availability, testability and flexibility. It also did not rank the quality attributes. In [27], the authors presented a software quality model for academic information systems. Their objective was to guide academic institutions that are in the process of building their Elearning systems to evaluate and choose the appropriate software attributes that are essential to the success of the entire system. The researchers identified the key attributes for Information System's Software Quality (ISSQ) from the users' perspective to measure the quality of the E-learning system. The proposed model consisted of six (6) standard attributes with their sub-attributes. This was achieved based on the ISO/IEC 9126 model. The limitation was that the proposed model failed to evaluate the importance of quality attributes. 
These researchers have shown how the use of software quality models is gaining much importance in the development of software. Nevertheless, they have not ranked the quality attributes and hence, it is difficult to know the weight of each attribute to ease decision making. This research work employs the use of the Analytic Hierarchy Process, a multi-criteria decision-making tool to evaluate software quality attributes and rank them. Analytic Hierarchy Process (AHP) has been applied by several researchers to enhance group decisions. The researchers in [17] applied this technique to evaluate and select Commercial-off-the-shelf (COTS) components. They found AHP to be useful in making trade-offs between tangible and intangible factors in calculating the weight of COTS components. Applying these weights as coefficients of an objective function in the proposed model helped to determine the best component under constraints such as budgetary constraint, compatibility among components and system reliability. Their findings have validated AHP to be an effective and flexible tool. AHP was applied by [4] to produce an integrated framework that applies statistical analysis to generate software quality models tailored to stakeholder specifications. They found AHP to be quite accessible and conducive for decisionmaking that requires the reduction of decisions complexity in pair-wise matrices. AHP was also applied by [18] in evaluating the reliability of object-oriented software systems. They took the ISO/IEC 9126 model as the base model for the evaluation. Their results showed AHP to be useful for making decisions for the hierarchical structure of the model. Authors in [28] applied the Analytic Hierarchy Process to develop an algorithm for evaluating software functionality. The research was due to the increase in the number of sub-attributes of software functionality quality attribute. They wanted to know the most important sub-attribute that has a great impact on software products. The AHP technique was seen to be a useful tool for the decision-making process. In [29], the AHP technique was used to perform a risk assessment of software quality. The authors were able to construct an index system of software quality risk assessment by calculating the weight and order of risk factors. With the use of AHP, they were able to categorise risk factors into demand risk, technology risk, process risk and management risk. The authors in [19] applied AHP to analyse software reliability. They reported that although software reliability is an important quality attribute, different stakeholders have a variety of views in that regard. Hence, they applied AHP which is designed to manage human assessment subjectively. The Analytic Hierarchy Process has been seen to effectively aid researchers in solving complex decision-making problems in various fields but its rate of application in the software quality assurance industry is minimal. Most software quality attributes used for software quality assurance have not been ranked, hence, it is difficult to note the important attributes to use to evaluate software projects. In this research, AHP is used to rank quality attributes by using the value of their criteria weights. The higher the criteria weight, the higher its importance in evaluating software quality. III. SOFTWARE QUALITY Software quality is a benchmark for measuring software requirements and the prerequisite to meet the user's specifications. 
Software quality involves user requirements, system design, documentation, and all the requirements needed for the development of professionally acceptable software [5]. It strictly follows the software development life cycle and evaluates and improves software performance [5]. Software quality can be enforced using software quality models. A. Software Quality Models Different software quality models have been proposed by researchers such as McCall [6], Boehm [7], Jamwal [8], Grady [9], Dromey [10] and ISO/IEC [11] among others as shown in Table I. These quality models contain quality attributes that may be used to ascertain the quality of a software product by determining how the software executes its code or how the software architecture is structured and organized with the system's requirements [12]. All the quality models have software quality attributes and sub-attributes used for the measurement of software quality [13]. Quality attributes and sub-attributes are used to characterize products and can be measured. They usually end with the word "lity". According to the ISO 9126 standard, a software quality model is expected to have the following attributes: Functionality, Reliability, Usability, Efficiency, Maintainability and Portability. B. Software Quality Attributes Software quality attributes are used to measure customer fulfilment of a product for other similar products. They are also used by software developers to develop quality software. These attributes include correctness, reliability, portability, efficiency, maintainability, supportability, functionality, usability, availability, among others. The software development life cycle ensures that implementing quality attributes in software development may result in the production of a well-engineered software product and is to be enforced throughout the development, implementation, and deployment phases of the software [5]. C. Software Quality Attributes Descriptions This section itemizes and describes some software quality attributes. • Correctness: Correctness refers to the capability of software to meet its required results. • Usability: Usability is the ease of use and learnability of software by customers. • Efficiency: Efficiency is the ability of software to perform well, given that tasks are completed faster while using fewer resources and saving computer power with great performance. • Reliability: Reliability refers to the probability of software operating in a given environment within a specified period to perform well without encountering a breakdown. • Accuracy: Accuracy refers to the degree to which a software product provides the right results during usage without encountering an error. • Robustness: Robustness refers to the ability of a software product to cope with any form of error it may encounter during operation. • Functionality: Functionality is the ability of software to perform the tasks for which it was intended. • Performance: Performance refers to the total effectiveness of a software product. • Availability: Availability refers to the degree to which a software product is operational and easily accessible when needed for usage. • Maintainability: The ease with which software can be modified to correct faults or improve performance. • Flexibility: Flexibility is the ability of software to adapt to possible future changes in its requirements. • Portability: The measure of the ease of transferring software from one computing environment to the other. 
• Reusability: Reusability is the use of existing tested and validated loosely coupled components in the development of software applications. • Testability: Testability is the ease with which the correctness of software can be verified. • Understandability: The capability of a software product to enable the user to understand whether it is suitable and its usability for specific tasks and conditions for use. • Interoperability: Interoperability is the ease with which software is used with other software applications. IV. ANALYTIC HIERARCHY PROCESS (AHP) AHP is a method of multi-criteria evaluation that organizes and simplifies the decision-making process. It was originally developed by Thomas L. Saaty [20] to provide measures of judgement consistency; to derive priorities among criteria and alternatives, and to simplify the rating of preferences among decision criteria using pair-wise comparisons [21]. The AHP decision-making tool is robust and flexible in dealing with complex decision problems. It uses a multi-level hierarchical structure of objective or goal, criteria or attributes, and alternatives. AHP is based on mathematics and psychology [22]. It helps decision-makers to find a decision that best suits their goal and their understanding of a given problem. It is a method to derive ratio scales from paired comparisons [23] and is based on a certain scale that changes subjective judgements into objective judgement and solves qualitative problems with quantitative analysis. It is simple and hence has seen its application in many fields. A. Assessment of Quality Attributes The research uses an Analytical Hierarchy Process (AHP) to perform a multi-criteria decision-making assessment to select a suitable software quality attribute for the development of the quality model. The selection will be made from eleven attributes (Maintainability M(s), Testability T(s), Reliability R(s), Efficiency E(s), Usability U(s), Portability P(s), Reusability Re(s), Cost Co(s), Functionality Fn(s), Security S(s) and Availability A(s)) and three alternatives ("Mostly addressed", "Doubles up as a Sub-attribute", "Has Subattributes"). This information will be used to develop a hierarchical structure with the goal at the top level, the attributes at the second level, and the alternatives at the third level as shown in Fig. 1. The hierarchical structure obtained was synthesized to determine the relative importance of each attribute to the goal. This is done using a pair-wise comparison matrix with the help of a scale of relative importance as shown in Table II. The quality attributes used for the judgement matrix are shown in Table III. It consists of eleven (11) main attributes and thirteen (13) sub-attributes. The AHP technique was only applied to the eleven (11) main attributes. B. Selection of Appropriate Software Quality Attributes using the Analytic Hierarchy Process (AHP) The judgement matrix was determined by twenty (20) experts' decisions, based on related research. The implementation was done using MATLAB/Simulink Software R2020b. The software allows for easy calculation and analysis for the decision-making process. It also helps in constructing the model and drawing analysis. C. Quality Attribute Selection Judgement Matrices A geometric mean of the scores from the questionnaire was found and represented in Table IV for effective criteria and pair-wise comparison. Twenty experts were given questionnaires to fill for the multi-criteria decision process. 
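As an illustration of how the twenty experts' pair-wise judgements can be combined before the analysis that follows, the sketch below aggregates several judgement matrices element-wise with the geometric mean (the aggregation used for Table IV). The three expert matrices are invented placeholders on Saaty's 1-9 scale, and the paper's own implementation was in MATLAB/Simulink rather than Python.

```python
import numpy as np

def aggregate_experts(matrices):
    """Element-wise geometric mean of several expert pairwise-comparison matrices."""
    stacked = np.stack(matrices)                 # shape: (num_experts, n, n)
    return np.exp(np.log(stacked).mean(axis=0))  # geometric mean preserves reciprocity

# Three hypothetical expert judgements for three criteria (Saaty 1-9 scale).
expert_1 = np.array([[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]])
expert_2 = np.array([[1, 2, 7], [1/2, 1, 4], [1/7, 1/4, 1]])
expert_3 = np.array([[1, 4, 5], [1/4, 1, 2], [1/5, 1/2, 1]])

group_matrix = aggregate_experts([expert_1, expert_2, expert_3])
print(np.round(group_matrix, 3))
```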
The geometric mean of these questionnaires was found by multiplying the values for each of the attributes in Table IV and raising the product to the power 1/n. The sum for each attribute was then calculated. The geometric mean of the scores was found by

GM = (x_1 × x_2 × ... × x_n)^(1/n),

where n is the number of terms being multiplied. The normalised pair-wise comparison matrix in Table V was obtained by dividing each of the attribute values in Table IV by the corresponding sum. To calculate the criteria weights, the average of each row is taken. It is seen from Table V that the criteria weight of Maintainability is 17.37%, Testability is 13.02%, Usability is 7.22%, Functionality is 6.22%, Cost is 4.73%, Portability is 7.13%, Availability is 5.99%, Reusability is 6.86%, Security is 13.61%, Reliability is 10.35% and Efficiency is 7.49%. Maintainability has the highest weight, while Cost has the lowest.

To check the consistency of the experts' evaluations, the consistency of the pair-wise comparison matrix is calculated. This is done by multiplying the criteria weights by the non-normalised pair-wise comparison matrix of Table IV, as shown in Table VI. The weighted sum of the new matrix is found and then divided by the criteria weights. The overall result is used to calculate λmax and the Consistency Ratio (CR). The value of the consistency ratio must be less than 0.1 for the judgement matrix to be acceptable. The selection judgement matrix is consistent, since the Consistency Ratio (CR) is 0.079, which is less than 0.1. It can be seen from Table V that Maintainability M(s) has the highest weight (17.37%) while Cost Co(s) has the lowest weight (4.73%). Fig. 2 shows a graphical representation of the weights of the software quality attributes.
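To make the weight and consistency computation above concrete, the following is a minimal NumPy sketch of the standard AHP steps (column normalisation, row-averaged weights, λmax, CI and CR). The small 3x3 judgement matrix and the random index are illustrative assumptions, not the values from Table IV; the paper's own computation was done in MATLAB/Simulink.

```python
import numpy as np

def ahp_weights_and_cr(A, random_index):
    """Compute AHP criteria weights and consistency ratio for a pairwise matrix A."""
    n = A.shape[0]
    # 1. Normalise each column by its sum, then average the rows to get the weights.
    normalised = A / A.sum(axis=0)
    weights = normalised.mean(axis=1)
    # 2. Consistency check: weighted sums divided by weights estimate lambda_max.
    weighted_sum = A @ weights
    lambda_max = (weighted_sum / weights).mean()
    ci = (lambda_max - n) / (n - 1)   # consistency index
    cr = ci / random_index            # consistency ratio (acceptable if < 0.1)
    return weights, cr

# Toy 3x3 example (hypothetical judgements on Saaty's 1-9 scale, not Table IV data).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])
RI_3 = 0.58  # Saaty's random index for n = 3
w, cr = ahp_weights_and_cr(A, RI_3)
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```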
D. Alternative Selection Judgement Matrices

The alternatives, "Mostly addressed", "Doubles up as a Sub-attribute", and "Has Sub-attributes", were also analysed for Maintainability M(s), as shown in Table VII. Table VII shows that Maintainability has a higher weight of 74% for being mostly addressed and a lower weight of 11% for doubling up as a sub-attribute. The alternatives were also analysed for Testability T(s), as shown in Table VIII. Table VIII shows that Testability has a higher weight of 67% for being mostly addressed and a lower weight of 10% for doubling up as a sub-attribute. The alternatives were also analysed for Reliability R(s), as shown in Table IX. Table IX shows that Reliability has a higher weight of 78% for being mostly addressed and a lower weight of 8% for doubling up as a sub-attribute. The alternatives were also analysed for Efficiency E(s), as shown in Table X. Table X shows that Efficiency has a higher weight of 62% for being mostly addressed and a lower weight of 10% for doubling up as a sub-attribute. The alternatives were also analysed for Usability U(s), as shown in Table XI. Table XI shows that Usability has a weight of 80% for being mostly addressed and a weight of 8% for doubling up as a sub-attribute. The alternatives were also analysed for Portability P(s), as shown in Table XII. Table XII shows that Portability has a weight of 56% for being mostly addressed and a weight of 9% for doubling up as a sub-attribute. The alternatives were also analysed for Reusability Re(s), as shown in Table XIII. Table XIII shows that Reusability has a weight of 41% for having sub-attributes and a weight of 26% for doubling up as a sub-attribute. The alternatives were also analysed for Functionality Fn(s), as shown in Table XIV. Table XIV shows that Functionality has a weight of 57% for being mostly addressed and a weight of 10% for doubling up as a sub-attribute. The alternatives were also analysed for Availability A(s), as shown in Table XV. Table XV shows that Availability has a weight of 60% for doubling up as a sub-attribute and a lower weight of 17% for being mostly addressed. The alternatives were also analysed for Cost Co(s), as shown in Table XVI. Table XVI shows that Cost has a weight of 77% for doubling up as a sub-attribute and a weight of 11% for having sub-attributes. The alternatives were also analysed for Security S(s), as shown in Table XVII. Table XVII shows that Security has a weight of 56% for doubling up as a sub-attribute and a weight of 9% for having sub-attributes.

The overall weights for the software quality attribute selection are summarised in Table XVIII. The results in Table XVIII show that "Mostly Addressed" is the highest-ranking software quality alternative with 52% and "Has Sub-attributes" is the lowest-ranking alternative with 22%. The results also show Maintainability as the highest-ranking software quality attribute with 17%. Table XVIII further shows that the overall analysis is consistent, since the value of CR is 0.057, which is less than 0.1.

V. RESULTS AND DISCUSSIONS

The software quality attributes have been evaluated and, according to Table V, Maintainability weighs 17.37%, Security 13.61%, Testability 13.02%, Reliability 10.35%, Efficiency 7.49%, Usability 7.22%, Portability 7.13%, Reusability 6.86%, Functionality 6.22%, Availability 5.99% and Cost 4.73%. These weights are represented pictorially in Fig. 2. Tables VII through XVII have shown that Maintainability has a higher weight of 74% for being mostly addressed and a lower weight of 11% for doubling up as a sub-attribute. Testability has 67% for being mostly addressed and 10% for doubling up as a sub-attribute. Reliability has 78% for being mostly addressed and 8% for doubling up as a sub-attribute. Efficiency has 62% for being mostly addressed and 10% for doubling up as a sub-attribute. Usability has 80% for being mostly addressed and 8% for doubling up as a sub-attribute. Portability has 56% for being mostly addressed and 9% for doubling up as a sub-attribute. Reusability has 41% for having sub-attributes and 26% for doubling up as a sub-attribute. Functionality has 57% for being mostly addressed and 10% for doubling up as a sub-attribute. Availability has 60% for doubling up as a sub-attribute and 17% for being mostly addressed. Cost has 77% for doubling up as a sub-attribute and 11% for having sub-attributes. Finally, Security has 56% for doubling up as a sub-attribute and 9% for having sub-attributes.

VI. CONCLUSION AND FUTURE WORK

This paper uses a multi-criteria decision-making analysis, based on experts' evaluations and the Analytic Hierarchy Process (AHP), to rank software quality attributes. A hierarchical model is presented for the AHP process.
The results show the criteria weight of Maintainability to be 17.37%, Testability 13.02%, Reliability 10.35%, Efficiency 7.49%, Usability 7.22%, Portability 7.13%, Reusability 6.86%, Security 13.61%, Functionality 6.22%, Availability 5.99% and Cost Co(s) 4.73%. Maintainability is therefore the most important quality attribute, followed by Security, Testability, Reliability, Efficiency, Usability, Portability, Reusability, Functionality, Availability and Cost. Future work will include the integration of AHP with Linear Programming (LP) to select the most important software quality attributes among several attributes. The criteria weights produced by the AHP technique will serve as objective-function coefficients in the LP to build a linear model. Sensitivity analysis will also be performed to check how changes in the criteria weights affect the attribute ranking.
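As a rough illustration of the planned AHP-LP integration, the sketch below feeds AHP-style criteria weights into a linear objective and maximises it under a hypothetical effort budget using SciPy. The weights mirror the reported values, but the per-attribute effort costs and the budget are invented placeholders, so this is a sketch of the idea rather than the paper's future model.

```python
import numpy as np
from scipy.optimize import linprog

# Reported AHP criteria weights (as fractions of 1) for the 11 attributes.
attributes = ["Maintainability", "Security", "Testability", "Reliability", "Efficiency",
              "Usability", "Portability", "Reusability", "Functionality", "Availability", "Cost"]
weights = np.array([0.1737, 0.1361, 0.1302, 0.1035, 0.0749,
                    0.0722, 0.0713, 0.0686, 0.0622, 0.0599, 0.0473])

# Hypothetical effort cost of fully addressing each attribute, and a total effort budget.
effort = np.array([5, 4, 3, 4, 2, 2, 2, 3, 3, 2, 1], dtype=float)
budget = 12.0

# Decision variable x_i in [0, 1]: the degree to which attribute i is addressed.
# linprog minimises, so negate the weights to maximise the total weighted quality.
res = linprog(c=-weights, A_ub=[effort], b_ub=[budget],
              bounds=[(0, 1)] * len(weights), method="highs")

for name, x in zip(attributes, res.x):
    print(f"{name:15s} addressed to degree {x:.2f}")
print("weighted quality score:", round(-res.fun, 4))
```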
Efficacy of topical 0.05% cyclosporine A and 0.1% sodium hyaluronate in post-refractive surgery chronic dry eye patients with ocular pain Background The management of post-refractive surgery dry eye disease (DED) can be challenging in clinical practice, and patients usually show an incomplete response to traditional artificial tears, especially when it is complicated with ocular pain. Therefore, we aim to investigate the efficacy of combined topical 0.05% cyclosporine A and 0.1% sodium hyaluronate treatment in post-refractive surgery DED patients with ocular pain unresponsive to traditional artificial tears. Methods We enrolled 30 patients with post-refractive surgery DED with ocular pain who were unresponsive to traditional artificial tears. Topical 0.05% cyclosporine A and 0.1% sodium hyaluronate were used for 3 months. They were evaluated at baseline and 1 and 3 months for dry eye and ocular pain symptoms and objective parameters, including Numerical Rating Scale (NRS), Neuropathic Pain Symptom Inventory modified for the Eye (NPSI-Eye), tear break-up time (TBUT), Schirmer I test (SIt), corneal fluorescein staining (CFS), corneal sensitivity, and corneal nerve morphology. In addition, tear levels of inflammatory cytokines and neuropeptides were measured using the Luminex assay. Results After 3 months of treatment, patients showed a statistically significant improvement in the ocular surface disease index (OSDI), TBUT, SIt, CFS, and corneal sensitivity (all P < 0.01) using linear mixed models. As for ocular pain parameters, the NRS and NPSI-Eye scores were significantly reduced (both P < 0.05) and positively correlated with the OSDI and CFS scores. Additionally, tear IL-1β, IL-6, and TNF-α levels were improved better than pre-treatment (P = 0.01, 0.03, 0.02, respectively). Conclusion In patients with post-refractive surgery DED with ocular pain, combined topical 0.05% cyclosporine A and 0.1% sodium hyaluronate treatment improved tear film stability, dry eye discomfort, and ocular pain, effectively controlling ocular inflammation. Trial registration Registration number: NCT06043908. Efficacy of topical 0.05% cyclosporine A and 0.1% sodium hyaluronate in postrefractive surgery chronic dry eye patients with ocular pain Lu Zhao 1 † , Jiawei Chen 2 † , Hongyu Duan 1 , Tingting Yang 3 , Baikai Ma 1 , Yifan Zhou 1 , LinBo Bian 1 , Xiying Cai 4 and Hong Qi 1,2* Background Nowadays, corneal refractive surgery offers a choice of procedures, such as laser-assisted in situ keratomileusis (LASIK), femtosecond laser-assisted laser in situ keratomileusis (FS-LASIK), and small-incision lenticule extraction (SMILE), all of which are associated with high indices of efficacy and safety.Nonetheless, dry eye disease (DED) is the most common complication after corneal refractive surgery and one of the leading causes of patient dissatisfaction [1][2][3][4].Although DED generally occurs transiently in the early postoperative period, it may also develop into a chronic condition, and approximately 18-41% of patients develop chronic DED for more than 6 months [4][5][6][7].It becomes more worrisome when combined with ocular pain. 
Conventional artificial tears, such as sodium hyaluronate, are the first-line therapy for patients with DED and temporarily alleviate dry eye symptoms owing to their water-retentive properties [8,9].However, its therapeutic mechanism is single; it may not be sufficient enough to treat DED following refractive surgery [10].In addition, patients with DED combined with ocular pain were more likely to show an incomplete response to conventional artificial tears than those without ocular pain.Anti-inflammatory drugs, such as topical glucocorticoids and cyclosporine, are recommended for patients who are unresponsive to treatment with conventional artificial tears [11].However, long-term use of topical glucocorticoids can lead to complications such as steroid-induced glaucoma and cataracts [12]. Cyclosporine is an immunosuppressive agent widely used to treat various autoimmune diseases and has been approved by the United States Food and Drug Administration (FDA) for treating DED [13].In a prospective study, 0.05% cyclosporine improved the symptoms and signs of patients with DED, with significant differences compared to conventional artificial tears [14].However, the aforementioned studies did not evaluate the effects of cyclosporine on ocular pain, corneal nerves, tear cytokines, and neuropeptides, which are also involved in the mechanism of post-refractive surgery DED, especially ocular pain.Neurotrophic inflammation caused by corneal nerve damage has been suggested as a causative factor for this type of DED [15].The DEWS report suggested that anti-inflammatory therapy plays an essential role in maintaining ocular surface homeostasis [11].The efficacy of cyclosporine and sodium hyaluronate have been evaluated before; however, it is not clear whether the combination therapy of these can be helpful for patients with post-refractive surgery DED with ocular pain.This study aimed to evaluate the combined effect of topical 0.05% cyclosporine A and 0.1% sodium hyaluronate treatment on post-refractive surgery DED associated with ocular pain that was not responsive to conventional artificial tears. Methods This prospective study aimed to investigate the effects of combination therapy with 0.05% cyclosporine A and 0.1% sodium hyaluronate eye drops in post-refractive surgery dry eye patients with ocular pain.The study patients used 0.05% cyclosporine A eyedrops (GYZZ H20203239, Shenyang Xingqi Pharmaceutical Co Ltd.) twice a day and 0.1% sodium hyaluronate eyedrops (HyloComod®, Ursapharm, Saarbrucken, Germany) four times a day for 3 months.Dry eye and ocular pain symptoms, ocular surface parameters, corneal nerve, tear cytokines, and neuropeptides were measured at baseline and at 1-month and 3-month visits after commencing treatment.This study was approved by the Ethics Committee of the Peking University Third Hospital and followed the principles of the Declaration of Helsinki.Written informed consent was obtained from all participants. 
To determine the appropriate sample size for this study, we utilized PASS 15.0 software for power analysis.The calculation was based on changes observed in OSDI scores among dry eye patients treated with 0.05% cyclosporine eye drops in a previous study by Shin D and Sang Min J [16].In this reference study, the baseline OSDI score was 25.30 ± 19.04, which significantly decreased to 13.63 ± 14.94 after three months of treatment (P < 0.001).With an alpha level set at 0.05 and a power of 0.9, the analysis indicated a necessity for a minimum of 25 participants.Considering an anticipated dropout rate of 10%, we aimed to enroll 28 participants.Ultimately, our study included 30 participants, thus satisfying and marginally exceeding the calculated sample size requirement. Participants The inclusion criteria were as follows: (1) diagnosed with DED [17] (ocular surface disease index [OSDI] score ≥ 13 and tear breakup time [TBUT] < 10 s) continuing for at least 6 months after corneal refractive surgery [7]; (2) experienced ocular pain, which was indicated by a Numerical Rating Scale (NRS) score ≥ 2 [18]; (3) nonresponsive to artificial tear treatment for more than 3 months based on both symptoms and signs; (4) patients were able to follow up for at least three months; (5) All Trial registration Registration number: NCT06043908.Keywords Post-refractive surgery dry eye disease, Ocular pain, Cyclosporine A, Sodium hyaluronate, Ocular inflammation, Tear film stability patients underwent a comprehensive dry eye evaluation prior to refractive surgery, with no patients being diagnosed with preoperative DED.Participants were excluded if they had active ocular disease, anti-inflammatory therapy, other previous ocular surgery, or other major systemic diseases, including malignant tumors and autoimmune diseases.Pregnant and nursing mothers were excluded from the study. We performed TBUT, Schirmer I test (SIt), corneal fluorescein staining (CFS), and conjunctival lissamine green (LG) staining to evaluate ocular surface signs.The measurements were performed from least to most invasively.Ocular surface assessments were performed in both eyes at all visits.The right eye was selected for the analysis.TBUT was evaluated using a cobalt blue filter over a slit-lamp biomicroscope.SIt was conducted using Schirmer paper strips (5 × 35 mm) without anesthesia.CFS and LG staining were evaluated using the National Eye Institute Workshop guidelines (total score:0-15) [23] and the Oxford grading panel (total score:0-10) [24], respectively. Corneal sensitivity is one way to evaluate the function of corneal nerves and was measured using a Cochet-Bonnet esthesiometer (Luneau Ophthalmologie, Chartres Cedex, France) with a 6.0-cm adjustable nylon monofilament.Starting at 6.0 cm, the monofilament length was gradually reduced at 5-mm intervals until the initial response occurred.were detected using the MILLIPLEX® Human High Sensitivity T Cell Magnetic Bead Panel (Millipore, Billerica, MA, USA) and MILLIPLEX MAP® Human Neuropeptide Magnetic Bead Panel (Millipore, Billerica, MA, USA), separately.All procedures were performed according to the manufacturer's instructions [26]. 
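For reference, the sample-size calculation described at the start of the Methods (alpha = 0.05, power = 0.9, based on the pre-post OSDI change) can be approximated in Python with statsmodels. The standardized effect size of 0.7 used below is an assumption, since the within-patient standard deviation of the OSDI change is not reported; the original calculation was performed in PASS 15.0.

```python
from statsmodels.stats.power import TTestPower

# Paired (one-sample) t-test power analysis for the pre-post OSDI change.
# effect_size = assumed mean change / SD of the change (assumption: ~0.7).
analysis = TTestPower()
n_required = analysis.solve_power(effect_size=0.7, alpha=0.05, power=0.9,
                                  alternative="two-sided")
print(f"Minimum participants: {n_required:.1f}")  # low-to-mid twenties, in line with the reported minimum of 25
```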
Statistical analyses Statistical analyses were performed using SPSS software (version 27.0; SPSS Inc., Chicago, IL, USA).Figures were created using GraphPad Prism 9.4 software package and R software (version 4.3.1).The normality assumption was checked using the Shapiro-Wilk test.The variables are expressed as the mean ± standard deviation (SD) or medians (interquartile ranges) according to their distributions.Linear mixed models were used to assess changes in the studied variables over time.The Bonferroni adjustment was used for multiple comparisons.Spearman's rank correlation was used to explore the relationship between ocular parameters.Statistical significance was set at P < 0.05. Participant demographics In this study, a total of 30 participants were enrolled in the study, all of whom met the inclusion criteria and successfully completed the entire follow-up process.The participants' characteristics are presented in Table 1.Among the 30 patients, 24 were women and 6 were men.The mean age was 34.40 ± 7.02.The mean preoperative spherical equivalent (SE) was − 5.30 ± 1.75D. Tear cytokine and neuropeptide concentrations Tear inflammatory cytokine and neuropeptide concentrations in the participants before and after treatment are shown in Table 4.There was no significant difference in the concentrations of all inflammatory factors and neuropeptides before and after 1 month of treatment.After 3 month treatment, tear IL-6, IL-1β, TNF-α levels were decreased than baseline (P = 0.03; P = 0.01; P = 0.02, respectively) (Fig. 5).As for other inflammatory cytokines, including IL-10, IL-17 A, INF-γ, and IL-23, no statistical difference was found.There were no statistically significant differences in neuropeptide concentrations before and after treatment. Discussion DED is one of the most common complications associated with corneal refractive surgery.According to previous reports, DED affects approximately 85.4% of patients at 1 week postoperatively and 59.4% of patients at 1 month after refractive surgery [27,28].While DED usually occurs transiently in the early postoperative period, it could also develop into a chronic condition; approximately 8-20% of patients develop chronic DED for more than 6 months [4][5][6].Traditional artificial tears are often poor or even ineffective in these patients.Additionally, a number of patients experience some form of ocular pain [29].Owing to the lack of understanding of ocular pain, there is currently no effective drug for treating it, which substantially impacts the quality of life.This is the first study to evaluate the therapeutic effects of 0.05% cyclosporine and sodium hyaluronate eye drops in patients with post-refractive surgery DED with ocular pain unresponsive to traditional artificial tears.Our results showed that the topical combined application of 0.05% cyclosporine A and sodium hyaluronate eye drops had beneficial effects on the relief of dry eye and ocular pain symptoms and on improving tear film stability and ocular inflammation. 
Almost 80% of the participants in this study were women.This is consistent with previous findings showing that women are more likely to develop refractive surgery-related DED [30].The results showed that in DED patients who are ineffective in treating sodium hyaluronate alone, the dry eye symptoms and ocular pain improved significantly after using cyclosporine combined with sodium hyaluronate for 3 months, especially burning spontaneous pain and evoked pain.Moreover, we observed a positive correlation between dry eye symptoms and ocular pain symptoms and a negative correlation between BUT scores and ocular pain symptoms, indicating that DED may cause ocular pain to some extent. Cyclosporine is an agent reported to promote the secretion of aqueous tears.In this study, after 3 months of treatment, TBUT and SIt had significant improvements, especially at 3 months of treatment.The degree of corneal fluorescein staining was significantly lower than that before treatment, indicating that the corneal epithelium was repaired gradually.Moreover, we found that the ocular pain score positively correlated with the degree of corneal fluorescein staining, indicating that corneal epithelial injury was one of the factors causing ocular pain in these patients. The cornea is densely innervated by sensory neurons that are responsible for corneal perception when the ocular surface is exposed to harmful stimuli or inflammation [31].There were no significant differences in the morphology of the corneal subbasal nerves between pre-treatment and post-treatment.Interestingly, corneal perception was better than before treatment, consistent with a previous study by Toker and Asfuroğlu [32].This may be due to the neurotrophic effect of cyclosporine, either by directly acting on nerve cells or by reestablishing a healthy environment for nerve regeneration [33].However, improved corneal perception and nerve function did not show the same trend.This may be because corneal perception mainly represents the density of the subepithelial nerve endings and does not completely reflect the density and length of the corneal subbasal nerve. Cyclosporine can regulate the underlying inflammatory pathology of the ocular surface by binding to cyclophilin in lymphocytes, blocking the expression of immune mediators such as IL-1β, IL-6, and interferon-γ [34].In DED, hyperosmotic factors disturb the dynamic balance of the ocular surface, resulting in Fig. 2 The proportion of severity of dry eye symptoms in post-refractive surgery DED patients with ocular pain before and after treatment.The severity of dry eye symptoms was scored according to Ocular Surface Disease Index (OSDI) (range, 0-100) an imbalance between secretion and degradation of tear film components.Tear film instability increases the risk of corneal epithelial injury, which leads to the release of inflammatory mediators.Immune cells on the ocular surface release a large number of proinflammatory cytokines, which recruit more immune cells to accumulate on the ocular surface, leading to a vicious circle of inflammation [35].This study showed a significant reduction in tear inflammatory cytokine levels at 3 months.Still, no difference was observed at 1 month, suggesting that cyclosporine has a slower but better effect.Short-term treatment limits its benefits; therefore, long-term treatment for at least 3 months is considered necessary. 
Although sodium hyaluronate can improve dry eye symptoms to some extent, it fails to address the underlying cause of the disease, namely, inflammation.Consequently, their clinical efficacies are limited.Without adequate treatment, the ocular surface can become progressively damaged.Therefore, during the treatment of DED, especially post-refractive surgery DED, it is appropriate to improve the tear film while addressing the inflammatory response of the ocular surface [36]. This study has some limitations.One of these limitations was the short study duration.This is because cyclosporine eye drops are thought to inhibit the recruitment of T cells, but this process may take 3-6 months [37].Despite the limited duration of this study, the results remain valid.Second, all patients received combination treatment, which may have confounded the interpretation of the effects of cyclosporine.However, the recruited patients did not respond to sodium hyaluronate treatment.Hence, the positive outcomes observed were unlikely to have been affected by the lubricants. Approximately 5 µl of the unstimulated basal tears from the right eye were collected from the lower tear meniscus with a clean glass micropipette (Microcaps; Drummond Scientific Co, Broomall, PA) in a reasonable time (up to 5 min) without provoking a reflex secretion of tears, and samples were stored at -80 °C as soon as possible.The levels of inflammatory cytokines (interferon [IFN]-γ, interleukin [IL]-10, IL-17 A, IL-1β, IL-23, IL-6 and tumor necrosis factor-α [TNFα]) and neuropeptides (α-melanocyte-stimulating hormone [α-MSH], oxytocin, and substance P [SP]) Table 1 The demographic data of participants
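A minimal sketch of the linear mixed-model comparison described in the Statistical analyses subsection is shown below, assuming a long-format table with hypothetical columns patient, visit (baseline, 1 month, 3 months) and the outcome score; the study itself used SPSS rather than Python, and the simulated values are placeholders only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
patients = np.repeat(np.arange(1, 11), 3)           # 10 hypothetical patients, 3 visits each
visits = np.tile(["baseline", "1m", "3m"], 10)
shift = {"baseline": 0, "1m": -10, "3m": -18}       # assumed mean OSDI change per visit
osdi = 50 + np.array([shift[v] for v in visits]) + rng.normal(0, 8, size=30)

df = pd.DataFrame({"patient": patients, "visit": visits, "osdi": osdi})

# Random intercept per patient; the fixed effect of visit captures change over time,
# mirroring the repeated-measures comparison described above.
model = smf.mixedlm("osdi ~ C(visit, Treatment('baseline'))", data=df, groups=df["patient"])
result = model.fit()
print(result.summary())
```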
Does exogenous hormonal therapy affect the risk of glioma among females: A systematic review and meta-analysis Abstract Background The effect of exogenous hormone replacement therapy (HRT) and oral contraceptive pills (OCPs) on glioma risk in females is unclear despite numerous studies; hence, we conducted a meta-analysis to evaluate this relationship. Methods Studies investigating the impact of exogenous female hormones on glioma risk were retrieved by searching 4 databases from inception until September 2022. Articles of any design, such as case–control and cohort studies, proving the relative risk (RR), odds ratio (OR), or hazard ratio were included. Summary OR values were calculated using a random effects model. Results Both HRT and OCP use of any duration decreased the risk of developing glioma [HRT OR = 0.78, 95% CI 0.66–0.91, P = .00; OCP: OR = 0.80, 95% CI 0.67–0.96, P = .02]. When stratified by duration of use, HRT use >1 year significantly reduced glioma risk (<1 year: OR = 0.82, 95% CI 0.63–1.07, P = 0.15; 1–5 years: OR = 0.79, 95% CI 0.67–0.92, P = .00; 5–10 years: OR = 0.80, 95% CI 0.66–0.97, P = .02; >10 years: OR = 0.69, 95% CI 0.54–0.88, P = .00). In contrast, only OCP use for >10 years significantly reduced glioma risk (<1 year: OR = 0.72, 95% CI 0.49–1.05, P = .09; 1–5 years: OR = 0.88, 95% CI 0.72–1.02, P = .09; 5–10 years: OR = 0.85, 95% CI 0.65–1.1, P = 0.21; >10 years: OR = 0.58, 95% CI 0.45–0.74, P = .00). Conclusions Our pooled results strongly suggest that sustained HRT and OCP use is associated with reduced risk of glioma development. ionizing radiation and hereditary syndromes such as neurofibromatosis 1 and 2, tuberous sclerosis, Lynch syndrome, and von Hippel-Lindau syndrome. 2The incidence of glioma is also higher among males, suggesting that development may be influenced by hormones. 3Consistent with this notion, glioma cells express steroid hormone receptors 4 and factors such as duration of exogenous hormone use, age at first childbirth, number of births, age at menarche, age at menopause, and type of menopause (natural or medically induced), and duration of hormone alter glioma incidence. 5There are many important indications for hormone replacement therapy (HRT), including treatment of menopause symptoms and prevention of cardiovascular disease or osteoporosis. 6Hot flashes and urogenital atrophy are common examples of postmenopausal symptoms that are frequently managed by HRT. 7 It was reported that 44% of postmenopausal females have used HRT at least once, most often in pill form (40%). 8 While numerous studies have addressed the effects of HRT and oral contraceptive pills (OCPs) on glioma risk, many of the results are contradictory.For instance, Benson et al. reported an increased risk of developing glioma and meningioma, 9 while Yang et al. found that risks of glioma and meningioma were dependent on the duration of OCP use. 10 Others have found that factors such as old age at menarche increase the risk of developing glioma. 11,12onversely, Lan et al. reported that HRT reduced the risk of developing glioma, although they did not stratify by duration of use. 13In this meta-analysis, we examined the relationship between glioma risk and the use of HRT or OCP with duration of use stratification. Search Strategy This study was conducted according to Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines. 
In accordance with these guidelines [14], studies on the effects of HRT on glioma risk in females were retrieved by searching Medline, Cochrane, Embase, and CENTRAL, as well as the reference lists of included papers and previous meta-analyses. Searches were conducted in September 2022 and were restricted to English-language literature. The search string used for all databases was as follows: [(Brain Glioma OR high-grade tumor OR glial cell neoplasm OR glioblastoma multiforme OR GBM OR diffuse glioma OR glial tumors OR anaplastic glioma) AND (hormone replacement therapy OR contraceptives OR exogenous hormones OR exogenous estrogen OR estrogen OR HRT OR OCP) AND (risk OR health risk assessment OR risk factor)].

Study Selection
Inclusion criteria were: (i) studies describing the relationship between glioma incidence and current or past use of female exogenous hormones using a case-control or cohort study design, and (ii) providing the relative risk (RR), odds ratio (OR), or hazard ratio. No randomized controlled trials were identified through our search. Retrieved studies conducted in animal models, presented as conference abstracts, that did not classify CNS tumor subtypes or did not include glioma as the outcome of interest were excluded. In addition, reviews and previous meta-analyses were excluded. Two groups of authors independently performed the primary survey according to our preset inclusion criteria, and conflicts were resolved by senior authors through discussion and consensus.

Data Extraction and Quality Assessment
The following parameters were extracted from each study and entered into an Excel sheet: first author, year of publication, country where the study was conducted, mean or median age, sample size, study design, follow-up duration, exposure (HRT, OCPs, or both), risk estimate, duration of use, and Newcastle-Ottawa Scale (NOS). The data were then reviewed by a third author. Study quality was assessed using the NOS, a well-validated metric for evaluating observational and non-randomized studies according to participant selection criteria, comparability, and exposure or outcome. Comparability points were given whenever the age at glioma diagnosis and duration of hormone use were available. Additionally, the adequacy of the follow-up duration was determined by the senior authors. The NOS score ranges from 0 to 9 stars, and studies with ≥6 stars are considered to be of relatively higher quality [15]. We searched for the source of funding and reported it as yes (provided), no (not provided), or not mentioned (Table 1).

Analysis
Descriptive statistics, including mean and frequency, were calculated using IBM SPSS version 2, while the meta-analysis was conducted using Comprehensive Meta-Analysis software version 3. Summary ORs and RRs with 95% CIs of developing glioma were calculated separately. Due to the rarity of glioma, ORs were considered equivalent to RRs. For simplicity, therefore, pooled results are expressed as ORs. The influences of oral contraceptives and HRT on glioma risk were also examined separately. Additional subgroup analyses were performed on treatment groups stratified by duration of use (when available) as follows: <1 year, 1-5 years, 5-10 years, and >10 years. As study design influences the risk of bias, this assessment was conducted separately for case-control and cohort studies. The possibility of heterogeneity was evaluated using the I-squared statistic, with <25% considered low, 25-50% moderate, and 50-75% high heterogeneity. Due to the heterogeneity among studies, a random effects model was used for the pooled analysis. Sensitivity analysis was performed by omitting 1 study at a time and assessing the stability of the result, and by omitting studies with NOS scores less than 6. Publication bias was assessed using Begg's funnel plot and Egger's test.

Importance of the Study
This updated meta-analysis and systematic review reveals a significant association between hormonal therapy and reduced glioma risk among adult females. These findings may warrant further evaluation of the role of female hormones as preventative therapies for glioma.

Table 1 abbreviations: CC = case-control, C = cohort, NOS = Newcastle-Ottawa Scale.
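To illustrate the random-effects pooling and heterogeneity assessment described in the Analysis subsection above, the following is a minimal DerSimonian-Laird sketch over per-study odds ratios. The three studies and their confidence intervals are invented placeholders, not data from the included studies; the actual analysis was run in Comprehensive Meta-Analysis software.

```python
import numpy as np

def dersimonian_laird(or_values, ci_lower, ci_upper):
    """Pool odds ratios with a DerSimonian-Laird random-effects model; also return I^2."""
    y = np.log(or_values)                                    # per-study log-OR
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)  # SE recovered from the 95% CI
    w = 1 / se**2                                            # fixed-effect (inverse-variance) weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                       # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-study variance
    w_star = 1 / (se**2 + tau2)                              # random-effects weights
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se_re = np.sqrt(1 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0      # I-squared (%)
    pooled_or = np.exp(y_re)
    ci = np.exp([y_re - 1.96 * se_re, y_re + 1.96 * se_re])
    return pooled_or, ci, i2

# Hypothetical per-study ORs with 95% CIs (placeholders only).
pooled, ci, i2 = dersimonian_laird(np.array([0.75, 0.85, 0.70]),
                                   np.array([0.60, 0.70, 0.50]),
                                   np.array([0.94, 1.03, 0.98]))
print(f"Pooled OR = {pooled:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), I^2 = {i2:.1f}%")
```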
Search Results and Study Characteristics
A total of 386 studies were retrieved from Medline, Cochrane, Embase, or CENTRAL using the indicated search string. Among these, 12 were excluded as duplicates and 356 due to irrelevance after reviewing the title and abstract. The full texts of the remaining 18 studies were carefully examined, and 5 were excluded as reviews. However, 4 studies found by searching the reference lists of included studies (n = 2) and previous meta-analyses (n = 2) were included. Finally, 17 valid observational studies were enrolled: 12 population-based case-control studies [5,9,16-21,26,27,29,30] and 5 cohort studies [22-25,28] (Figure 1).

The basic features of the enrolled studies are summarized in Table 1. Among the 17 observational studies included, 4 examined the effect of OCPs on glioma risk, 3 examined the effect of HRT, and 10 examined the effects of both HRT and OCPs.

Descriptive Statistics and Participant Demographics
The secondary aim of this study was to provide updated descriptive statistics on glioma and associations with OCP and HRT use. The 17 studies included in this meta-analysis were conducted in 5 different countries, of which the United States of America was the site of the greatest number. Most studies were conducted between 1990 and 2015 (inclusive) and included a total of 2 995 082 glioma cases. The median patient age was 52 ± 9.063 years, and the mean duration of follow-up was 8.76 ± 4.433 years.

Quantitative Synthesis
The primary aim of this study was to provide updated estimates of glioma risk among females using OCPs or receiving HRT.

HRT and Glioma Risk.
OCPs and Glioma Risk.

Quality Assessment and Bias
Risk of Bias. Quality assessment was conducted using the NOS scale. Two studies were given a score of 5 stars, 4 studies a score of 6 stars, 5 a score of 7 stars, and the rest a score of 8 stars (all out of 9). Based on a score of 6 or higher, 15 studies (88%) were classified as high quality.

Sensitivity Analysis and Publication Bias. Omitting each study separately yielded no significant changes in OR, indicating that the results were stable and robust. Construction of a Begg's funnel plot and Egger's test also yielded no evidence of publication bias (Figure 2B and D). We also examined the effect of omitting the 2 studies with high risk of bias (NOS scores of 5), one a case-control study on the effects of HRT and one a case-control study examining the effects of OCPs on glioma risk [16,17], but again significant protection was maintained (OR = 0.76, 95% CI 0.63-0.91, P = .000, I² = 55.54 and OR = 0.72, 95% CI 0.65-0.80, P = .000, I² = 0.00, respectively).

Discussion
This updated meta-analysis aimed to determine the effects of HRT and OCP on glioma risk among adult females.
The pooled dataset included 12 case-control and 5 cohort studies with an overall total of 2 995 082 glioma patients. OCP use reduced the overall risk of developing glioma, consistent with previous studies [11-13]. Similarly, HRT reduced the risk of developing glioma, also consistent with previous studies [11,13], but this protective effect required only 1 year or more of treatment. Further, sensitivity analysis in which studies with NOS scores < 6 were removed (leaving only studies deemed high quality) yielded qualitatively similar results. Additional subgroup analysis revealed that the protective effects of both treatments were only significant in case-control studies. However, it is well known that case-control studies carry a higher risk of bias due to potential improper control group selection, especially for rare diseases such as glioma. For instance, using interviews or registries to identify participants with equivalent exposure can be a challenge, and in some of these case-control studies, exposure risk was taken from a proxy interviewer due to death or disability. Therefore, caution is warranted in interpreting these results, and future large-scale prospective studies are essential for confirmation.

The protective effect of HRT against glioma development is likely related to direct hormonal effects, as glioma cells express steroid hormone receptors. However, Benson et al. found an increased risk of glioma among patients receiving HRT for any length of time (ever-use subgroup) [9]. This contradictory finding suggests that the relationship between HRT and glioma is influenced by other factors, such as the timing, dose, type, and duration of HRT, and possibly also by individual differences in hormone metabolism. Anderson et al. also reported a significant increase in glioma risk among OCP users, particularly females taking progesterone-only therapy, and this enhanced risk was specific for glioblastoma multiforme, the most aggressive and deadly form of glioma [16]. Several potential confounders may account for these discrepancies. Progesterone-only pills are usually prescribed for overweight women, and obesity alone has been identified as a risk factor for CNS tumors [17]. Further, data on OCP were collected from a prescription registry initiated in 1995, and so may exclude longer-term use by older females (i.e. the sample included a disproportionate number of females <50 years old) [16]. Therefore, this result may not be applicable to older females. In fact, Hatch et al. found that OCPs reduced overall glioma risk, but stratification by age at diagnosis based on a cutoff of 50 years revealed that the protective effect was significant only in the older age group, possibly because older patients are more likely to have used more potent preparations before the 1970s [18].

Hormone replacement therapy is prescribed more often for females with higher education and socioeconomic status. For instance, Hatch et al. found that HRT cases were better educated than controls [18]. Similarly, Felini et al. found a greater number of low-income participants among controls in their study, although there were equal numbers of high-income earners among cases and controls [26]. However, no stratified analysis based on income was conducted in either study. Alternatively, Benson et al. found that socioeconomic status had no effect on CNS tumor incidence, including glioma and meningioma incidence [28]. Nonetheless, we acknowledge that an association between HRT and income or education could influence glioma incidence and thus should be included in future studies.
A previous meta-analysis by Zong et al. also found that older age at menarche was associated with a higher risk of brain tumors and glioma in particular.In addition, a longer duration of breastfeeding was associated with higher glioma risk, although with lower meningioma risk.In contrast, other reproductive factors such as menopausal status, parity, age at first birth, and age at menopause exhibited no significant association. 12 The meta-analysis by Benson et al. also examined the influence of HRT type on CNS tumor risk and found enhanced risk among estrogenonly users, amounting to an absolute excess risk of 2/10 000 users over 5 years, while no difference in risk was found for estrogen-progesterone users. 9Therefore, the HRT type should also be included in future studies. The associations of HRT and OCP exposure with lower glioma incidence both became stronger as the duration of use increased, but significant protection required only 1 year for HRT but 10 years for OCPs.These findings are in partial accord with the results of Yang et al., who found that only OCPs used for 7.5 years or more substantially reduced the risk of glioma. 10This difference in the effect of treatment duration between OCPs and HRT may be explained by age, as OCPs are used by premenopausal females while HRT tends to be prescribed for older females already at increased risk of glioma. One important factor missing from some of the included studies was the particular type of glioma.This lack of specificity is concerning because glioma types may be differentially sensitive to OCP exposure.This gap may lead to false perceptions regarding risks for specific glioma types.However, gliomas are rare tumors, so stratification according to type is challenging.Other limitations of this meta-analysis include the absence of age stratification in some studies.While the majority of studies found reduced glioma risk among exogenous hormone users, especially after prolonged use, the pooled result is inconsistent with some individual studies.Thus, larger-scale prospective studies considering possible confounders such as age at menarche, age at menopause, parity, breastfeeding history, age during treatment, hormone type(s), and dose among others are required to establish more accurate associations with glioma risk. A funnel plot revealed no signs of publication bias.However, publication bias is a potential limitation of all meta-analyses as it is well known that negative results are often not published.Finally, the source of funding can be a potential source of bias, and 2 studies did not mention the source of funding. Conclusion This meta-analysis suggests an association between HRT for at least 1 year and OCP for at least 10 years and a reduction in the overall risk of glioma among adult females.However, additional research is needed to elucidate the mechanisms underlying this protective effect.Such information could help in the development of therapeutic applications for the prevention or treatment of glioma. Figure 1 . Figure 1.PRISMA flow chart for the search strategy. Figure 2 . Figure 2. (A) Forest plots for the OR of developing Glioma after HRT regardless of the duration of use, (B) funnel plot for HRT use and glioma, (C) forest plots for the OR of developing glioma after OCP regardless of the duration of use, (D) funnel plot for OCP use. Figure 4 . Figure 4. Forest plots for the OR of developing Glioma after HRT regardless of the duration of use, stratified by study type, C = cohort study, CC = case-control study. Table 1 . 
Summary of Included Studies
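As a companion to the publication-bias assessment mentioned in the Analysis section, the sketch below shows one common form of Egger's regression test (standardized effect regressed on precision, testing the intercept) using statsmodels. The log-ORs and standard errors are invented placeholders rather than values from the included studies.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study log-ORs and standard errors (placeholders only).
log_or = np.array([-0.29, -0.16, -0.36, -0.11, -0.22, -0.40])
se = np.array([0.12, 0.09, 0.20, 0.15, 0.10, 0.25])

# Egger's test: regress the standardized effect (log-OR / SE) on precision (1 / SE);
# an intercept significantly different from zero suggests funnel-plot asymmetry.
z = log_or / se
precision = 1 / se
X = sm.add_constant(precision)
fit = sm.OLS(z, X).fit()
print(f"Egger intercept = {fit.params[0]:.3f}, p-value = {fit.pvalues[0]:.3f}")
```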
Multi-Attribute Decision-Making Based on Interval-Valued q-Rung Dual Hesitant Uncertain Linguistic Sets The interval-valued q-rung dual hesitant fuzzy sets (IVq-RDHFSs) effectively model decision makers’ (DMs’) evaluation information as well as their high hesitancy in complicated multi-attribute decision-making (MADM) situations. Note that the IVq-RDHFSs only depict DMs’ evaluation values quantificationally but overlook their qualitative decision information. To improve the performance of IVq-RDHFSs in dealing with fuzzy information, we incorporate the concept of uncertain linguistic variables (ULVs) into them and propose a new tool, called interval-valued q-rung dual hesitant uncertain linguistic sets (IVq-RDHULSs). Then we investigate MADM approach with interval-valued q-rung dual hesitant uncertain linguistic (IVq-RDHUL) information. Afterwards, the concept of IVq-RDHULSs as well as their operations and ranking method are proposed. Further, we propose a set of IVq-RDHUL aggregation operators (AOs) on the basis of the powerful Muirhead mean, i.e., the IVq-RDHUL Muirhead mean operator, the IVq-RDHUL weighted Muirhead mean operator, the IVq-RDHUL dual Muirhead mean operator, and the IVq-RDHUL weighted dual Muirhead mean operator. The significant properties of the proposed AOs are also discussed in detail. Lastly, we try to introduce a new method to MADM issues in IVq-RDHUL context based on the newly developed AOs. I. INTRODUCTION Multi-attribute decision-making (MADM) refers to a collection of decision-making problems that aim to select or determine the optimal or most suitable alternative(s) under multiple attributes. In real life we always face MADM problems and hence, approaches to MADM have been a hot research topic in management sciences and operations research. A major obstacle in dealing with practical MADM problems is to describe decision makers' (DMs') evaluation information in a suitable and explainable method. It was Zadeh [1] who firstly described fuzzy information from the view of fuzzy sets (FSs) theory. The main contribution of FS is that it incorporates the idea of membership degree (MD) The associate editor coordinating the review of this manuscript and approving it for publication was Xudong Zhao . into ordinary sets, which describes the degree that an element to a given fixed set. Due to the good ability of FSs in dealing with fuzziness and uncertainty, they have received much attention [2]- [7]. In addition, FSs make it much easier to describe vague and uncertain DMs' evaluation information, which also have been extensively employed in MADM [8]- [10]. Recently, Atanassov [11] extended the classical FS theory to intuitionistic FSs (IFSs) by taking both MDs and non-membership degrees (NMDs) into consideration. Compared with FSs, IFSs provide more sufficient decision information and more effectively handle DMs' uncertain evaluation values. Therefore, IFSs based on MADM has soon become a new research topic and many decision-making methods have been proposed [12]- [16]. Although quite a few IFSs based MADM approaches have been proposed, with the increasing complexity of real MADM problems, it has become more and more difficult to describe DMs' evaluation information in the form of IFSs. In the framework of IFSs, the MD and NMD of an element are denoted by two single values. Nevertheless, in some real MADM problems DMs are likely to hesitate among several values when giving their assessment values. 
In other words, DMs often have high hesitancy in expressing their decision information, and they would like to use a set of single values to denote the MD and NMD. Therefore, Zhu et al. [17] generalized IFSs to dual hesitant fuzzy sets (DHFSs), which allow DMs to express their evaluation information through a collection of single values. After the introduction of DHFSs, Garg and Arora [18] studied dual hesitant fuzzy soft sets and their application in MADM. Arora and Garg [19] investigated the robust correlation coefficient measure of DHFSs and its application in MADM problems. Qu et al. [20] proposed a novel stochastic MADM method based on DHFSs, regret theory and group satisfaction degree. Hao et al. [21] generalized DHFSs to probabilistic DHFSs by considering both randomness and impreciseness. Zhang et al. [22] studied the concept of dual hesitant fuzzy rough sets and applied them in medical diagnosis. Ren et al. [23] proposed the new method to rank dual hesitant fuzzy elements and extended the classical VIKOR method to MADM with dual hesitant fuzzy information. Zhang et al. [24] extended DHFSs to interval-valued DHFSs (IVDHFSs), proposed their operational laws and studied their applications in MADM. Some scholars studied MADM methods based on dual hesitant fuzzy aggregation operators (AOs) and we suggest readers to refer [25]- [35]. Recently, Wei [36] extended the IVDHFSs to the interval-valued dual hesitant fuzzy uncertain linguistic sets (IVDHFULSs) and studied their application to MADM. In IVDHFULSs, uncertain linguistic variables (ULVs) are utilized to describe DMs' qualitative assessments, while interval-valued dual hesitant fuzzy MD and interval-valued dual hesitant fuzzy NMD depict DMs' quantitative assessments. However, there are still some shortcomings of the MADM method proposed by Wei [36]. 1) The MADM method proposed by Wei [36] is based on IVDHFULSs. The IVDHFULSs are constructed by interval-valued dual hesitant fuzzy uncertain linguistic numbers (IVDHFULNs), which satisfy the constraint that the sum of MD and NMD is less than or equal to one. However, such a constraint cannot be always strictly satisfied. In addition, if IVDHFULNs are employed to portray DMs' evaluation information, some information may lose. 6, 0.7] . Obviously, the above value cannot be handled by IVDHFULNs, as 0.6 + 0.7 = 1.3 >1. In order words, if DMs utilize IVDHFULSs to express their evaluation value, then some information will lose, which will further lead to unreliable decision results. 2) Wei's [36] MADM method employs simply weighted average operator to aggregate attributes. In other words, in Wei's [36] opinions, attributes are independent and there is no interaction among attributes. However, more and more scholars and scientist have realized the existence of the interrelationship among attributes in MADM problems and quite a few AOs, which take the interrelationship among attributes into account when determining the optimal alternative(s), have been presented [37]- [39]. Based on the above analysis, Wei's [36] decision-making method is flawed in dealing with practical MADM problems. Based on the above analysis, the main motivation and purpose of this paper is to propose a novel MADM method, which overcomes the drawback of Wei's [36] decision-making method. The main novelties and contributions of this paper are three-fold. 
1) A new information representation tool, called interval-valued q-rung dual hesitant uncertain linguistic sets (IVq-RDHULSs), is proposed to depict DMs' evaluation information. The IVq-RDHULS is a combination of interval-valued q-rung dual hesitant fuzzy sets (IVq-RDHFSs) [40] with ULVs. As an extension of q-rung orthopair fuzzy sets [41] and q-rung dual hesitant fuzzy sets [42], IVq-RDHFSs allow the MDs and NMDs to be represented by several interval values, such that the sum of the qth power of the MD and the qth power of the NMD is less than or equal to one. The IVq-RDHULSs inherit the advantage of IVq-RDHFSs. Compared with IVDHFULSs, the IVq-RDHULSs enlarge the describable information space and give DMs more freedom to fully express their evaluation opinions. Compared with IVq-RDHFSs, the IVq-RDHULSs comprehensively portray DMs' evaluation opinions, as they can represent quantitative and qualitative evaluation information simultaneously. 2) Novel AOs to aggregate interval-valued q-rung dual hesitant uncertain linguistic (IVq-RDHUL) attribute values are proposed. To deal with the interrelationship among attributes, the powerful Muirhead mean (MM) [43] is extended to the IVq-RDHUL environment to propose new AOs. The MM has gained much attention in the field of information fusion, due to its capacity of capturing the interrelationship among multiple attributes [44]- [50]. The main superiority of our proposed AOs is that they effectively take into account the interrelationships among arbitrary numbers of attributes, which overcomes the second drawback of Wei's [36] decision-making method. 3) The main steps of a novel MADM method are clearly illustrated. The new method is applied to practical MADM problems to verify its validity and effectiveness. Comparative analysis is conducted to demonstrate the advantages and superiorities of the proposed method. In our proposed MADM method, the IVq-RDHULSs are employed to express DMs' evaluation values and the MM operator is used to aggregate attribute values. Therefore, our method can more effectively solve practical MADM problems. To clearly present our work, we organize the remainder of this paper as follows. Section 2 reviews basic notions and proposes the concept of IVq-RDHULSs as well as their related notions. Section 3 presents several series of IVq-RDHUL operators and analyzes their properties. Section 4 presents the main steps of a new MADM method. Section 5 verifies the efficiency of our method by conducting experimental examples. We make conclusions in Section 6. II. RELATED CONCEPTS A. THE INTERVAL-VALUED Q-RUNG DUAL HESITANT FUZZY SETS AND INTERVAL-VALUED Q-RUNG DUAL HESITANT UNCERTAIN LINGUISTIC SETS Xu et al. [40] recently proposed the IVq-RDHFS and its definition is presented as follows. Definition 1 [40]: Let X be a fixed set; then an interval-valued q-rung dual hesitant fuzzy set (IVq-RDHFS) D defined on X is expressed as D = {<x, h_D(x), g_D(x)> | x ∈ X}, where h_D(x) and g_D(x) are two sets of interval values in [0, 1], denoting the possible MD and NMD of the element x ∈ X to the set D, respectively, with the conditions that every [r^l_D, r^u_D] ∈ h_D(x) and every [η^l_D, η^u_D] ∈ g_D(x) are sub-intervals of [0, 1] and 0 ≤ ((r^u_D)^+)^q + ((η^u_D)^+)^q ≤ 1, where (r^u_D)^+ and (η^u_D)^+ denote the largest upper bounds in h_D(x) and g_D(x). The pair d = (h, g) is called an interval-valued q-rung dual hesitant fuzzy element (IVq-RDHFE). Especially, if r^l = r^u and η^l = η^u, then D reduces to a q-rung dual hesitant fuzzy set (q-RDHFS) [42]; if q = 1, then D reduces to an IVDHFS [24]; if q = 2, then D reduces to a hesitant interval-valued Pythagorean fuzzy set [51]. 
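To make Definition 1 concrete, the sketch below models an IVq-RDHFE as two lists of [lower, upper] intervals and checks the q-rung condition ((r^u)^+)^q + ((η^u)^+)^q ≤ 1. This is only an illustrative Python representation — the class name, field names and the example intervals are ours, not the paper's — but it shows how the relaxed constraint admits the value with maximum MD 0.6 and maximum NMD 0.7 discussed above.

```python
from dataclasses import dataclass
from typing import List, Tuple

Interval = Tuple[float, float]  # (lower, upper), both in [0, 1]


@dataclass
class IVqRDHFE:
    """Interval-valued q-rung dual hesitant fuzzy element d = (h, g)."""
    h: List[Interval]  # possible interval-valued membership degrees
    g: List[Interval]  # possible interval-valued non-membership degrees

    def is_valid(self, q: int) -> bool:
        """q-rung condition: (max upper MD)^q + (max upper NMD)^q <= 1."""
        max_md = max(upper for _, upper in self.h)
        max_nmd = max(upper for _, upper in self.g)
        return max_md ** q + max_nmd ** q <= 1.0


d = IVqRDHFE(h=[(0.4, 0.6)], g=[(0.5, 0.7)])
print(d.is_valid(q=1))  # False: 0.6 + 0.7 = 1.3 > 1 (the IVDHFUL constraint fails)
print(d.is_valid(q=2))  # True:  0.36 + 0.49 = 0.85 <= 1
print(d.is_valid(q=3))  # True:  0.216 + 0.343 = 0.559 <= 1
```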
It is noted that IVq-RDHFSs portray DMs' evaluations from both MD and NMD, which are quantitative information. However, to comprehensively express DMs' evaluation information, we have to depict both the quantitative and the qualitative aspects. In order to do this, scholars usually combine fuzzy set theory with linguistic variables or ULVs to propose hybrid tools. The representatives are intuitionistic linguistic sets [52], intuitionistic uncertain linguistic sets [53], Pythagorean fuzzy linguistic sets [54], Pythagorean fuzzy uncertain linguistic sets [55], etc. Similarly, we combine IVq-RDHFSs with ULVs to propose the IVq-RDHULSs. Definition 2: Let X be a fixed set and S be a continuous linguistic term set; then an IVq-RDHULS defined on X can be expressed as a collection of elements, each of which pairs an uncertain linguistic variable [s_θ(x), s_τ(x)] with two sets h(x) and g(x) of interval values in [0, 1], denoting the possible MD and NMD of the element x ∈ X to the set D, respectively, and satisfying the same q-rung condition as in Definition 1. Each element d = ([s_θ, s_τ], (h, g)) is called an interval-valued q-rung dual hesitant uncertain linguistic variable (IVq-RDHULV). Definition 3: Let d = ([s_θ, s_τ], (h, g)), d_1 = ([s_θ1, s_τ1], (h_1, g_1)), and d_2 = ([s_θ2, s_τ2], (h_2, g_2)) be any three IVq-RDHULVs; their operational laws are defined on the linguistic indices and on the interval-valued MDs and NMDs. According to Definition 3, the following theorem can be obtained. Definition 4: Let d = ([s_θ, s_τ], (h, g)) be an IVq-RDHULV; then the score function of d and the accuracy function of d are defined in terms of the linguistic indices s_θ and s_τ and of the interval values in h and g, where #h and #g represent the numbers of interval values in h and g, respectively, and IVq-RDHULVs are ranked first by their scores and then by their accuracies. III. AGGREGATION OPERATORS FOR INTERVAL-VALUED Q-RUNG DUAL HESITANT UNCERTAIN LINGUISTIC INFORMATION In this section, we extend MM and DMM to the IVq-RDHUL environment and develop some IVq-RDHUL Muirhead mean AOs. Properties of these operators are also discussed in this section. Theorem 2: Let d_j = ([s_θj, s_τj], (h_j, g_j)) (j = 1, 2, . . . , n) be a collection of IVq-RDHULVs; then the aggregated value obtained by the IVq-RDHULMM operator is still an IVq-RDHULV. Proof: According to Definition 3, the operational laws are applied to the permuted arguments step by step; the resulting linguistic part, together with the unions over r_ϑ(j) ∈ h_ϑ(j) and η_ϑ(j) ∈ g_ϑ(j), retains the required form, which means that the aggregated value is still an IVq-RDHULV. The proof of Theorem 2 is completed. Moreover, the IVq-RDHULMM operator has the following properties. Theorem 3 (Idempotency): If all the d_j (j = 1, 2, . . . , n) are equal, i.e., d_j = d = ([s_θ, s_τ], (h, g)) for all j, and there is only one interval-valued element in h and g, respectively, then IVq-RDHULMM(d_1, d_2, . . . , d_n) = d. Proof: This follows directly from Theorem 2. Theorem 4 (Monotonicity): If two collections of IVq-RDHULVs are ordered componentwise, then the corresponding aggregated values preserve that order: in particular r^l ≥ r′^l, and similarly for the remaining bounds, so that, according to Definition 4, the proof of Theorem 4 is completed. Theorem 5 (Boundedness): Let d_j = ([s_θj, s_τj], (h_j, g_j)) (j = 1, 2, . . . , n) be a collection of IVq-RDHULVs, and let d^− and d^+ denote their lower and upper bounds. Proof: Based on Theorem 4, and because both d^− and d^+ only have one MD and one NMD, the aggregated value lies between d^− and d^+. Evidently, the parameter q and the parameter vector T play important roles in the aggregation results. In the following, we discuss some special cases of the IVq-RDHULMM operator with respect to T and q. Special Case 5: if T = (1/n, 1/n, . . . , 1/n), then the IVq-RDHULMM operator reduces to the interval-valued q-rung dual hesitant uncertain linguistic geometric (IVq-RDHULG) operator. Special Case 6: if q = 1, then the IVq-RDHULMM operator reduces to the interval-valued dual hesitant uncertain linguistic Muirhead mean operator. Special Case 7: if q = 2, then the IVq-RDHULMM operator reduces to the interval-valued Pythagorean dual hesitant uncertain linguistic Muirhead mean operator. Theorem 6: Let d_j = ([s_θj, s_τj], (h_j, g_j)) (j = 1, 2, . . .
, n) be a collection of IVq-RDHULVs, then the aggregated value by using the IVq-RDHULWMM operator is still an IVq-RDHULV and The proof of Theorem 6 is similar to that of Theorem 2. So, we omitted it here. Moreover, the IVq-RDHULWMM also has the properties of monotonicity and boundedness. which is the IVq-RDHULA operator. Special Case 6: If q = 1, then the IVq-RDHULDMM operator reduces to the following which is the interval-valued dual hesitant uncertain linguistic dual Muirhead mean operator. Special Case 7: If q = 2, then the IVq-RDHULDMM operator reduces to the following which is the interval-valued Pythagorean dual hesitant uncertain linguistic dual Muirhead mean operator. Definition 10: Let d j = s θ j , s τ j , h j , g j (j = 1, 2, . . . , n) be a collection of IVq-RDHULVs and T = (t 1 , t 2 , . . . , t n ) ∈ T n be a vector of parameters. Let w = (w 1 , w 2 , . . . , w n ) T be the weight vector, such that 0 ≤ w j ≤ 1 and n j=1 w j = 1. If then IVq − RDHULWDMM T is called the interval-valued q-rung dual hesitant uncertain linguistic weighted dual Muirhead mean (IVq-RDHULWDMM) operator, where ϑ (j) (j = 1, 2, . . . , n) is any a permutation of (1, 2, . . . , n), and S n is the collection of all ϑ (j) (j = 1, 2, . . . , n). Theorem 8: Let d j = s θ j , s τ j , h j , g j (j = 1, 2, . . . , n) be a collection of IVq-RDHULVs, then the aggregated value by using the IVq-RDHULWDMM operator is still an IVq-RDHULV, and The proof is similar to that of Theorem 2. In addition, the IVq-RDHULWDMM operator has the properties of monotonicity and boundedness. IV. A NOVEL APPROACH TO MADM WITH INTERVAL-VALUED Q-RUNG DUAL HESITANT UNCERTAIN LINGUISTIC INFORMATION A typical MADM problem with interval-valued q-rung dual hesitant uncertain linguistic information can be described as follows: Let A = {A 1 , A 2 , . . . , A m } be a collection of alternatives, and C = {C 1 , C 2 , . . . , C n } be a set of attributes. Let w = (w 1 , w 2 , . . . , w n ) T be the weight vector of attributes, satisfying w j ∈ [0, 1], j = 1, 2 . . . , n and n j=1 w j = 1. Suppose that D = d ij = s θ ij , s τ ij , h ij , g ij is the interval-valued q-rung dual hesitant uncertain linguistic decision matrix, assessed by the DM for alternative A i with respect to attribute C j . In the following, we present an approach to solve this MADM problem. Step 1: Standardize the original decision matrix. In real decision-making problems, there are two kinds of attributes: benefit attributes and cost attributes. Therefore, the original decision matrix should be normalized by where s ij = s θ ij , s τ ij , and h ij = ∪ r ij ∈h ij r l ij , r u ij , g ij = ∪ η ij ∈g ij η l ij , η u ij . B and C are the collections of benefit attributes and cost attributes, respectively. Step 3: Rank the overall values d i (i = 1, 2, . . . , m) based on their scores according to Definition 4. Step 4: Rank the corresponding alternatives according to the result of Step 3, and then select the best alternative. V. AN APPLICATION OF THE PROPOSED MADM METHOD IN CLINICIAN PERFORMANCE ASSESSMENT Clinician performance assessment aims to qualitatively and quantitatively measure and calculate the achievement of clinician's tasks over a certain period of time, which is closely related to their salary, rewards and punishments, title promotion and so forth. Therefore, it is very significant to develop a scientific and effective performance evaluation system. 
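Before walking through the clinician example, it may help to see numerically what the Muirhead mean underlying all of the operators above does. The sketch below implements only the classical MM for crisp positive numbers — not the paper's IVq-RDHULMM/IVq-RDHULWMM operators, which act on whole IVq-RDHULVs through the operational laws — and the weighted variant shown (pre-scaling each argument by n·w_j) is one common convention that may differ in detail from the paper's definition.

```python
import math
from itertools import permutations
from typing import Sequence


def muirhead_mean(a: Sequence[float], t: Sequence[float]) -> float:
    """Classical Muirhead mean of positive reals a with parameter vector t."""
    n = len(a)
    total = sum(
        math.prod(a[i] ** tj for i, tj in zip(perm, t))
        for perm in permutations(range(n))
    )
    return (total / math.factorial(n)) ** (1.0 / sum(t))


def weighted_muirhead_mean(a: Sequence[float], w: Sequence[float],
                           t: Sequence[float]) -> float:
    """A common weighted MM: scale each argument by n * w_j, then apply MM."""
    n = len(a)
    return muirhead_mean([n * wj * aj for aj, wj in zip(a, w)], t)


scores = [0.6, 0.7, 0.8]
print(muirhead_mean(scores, (1, 0, 0)))  # 0.700  (arithmetic mean, no interaction)
print(muirhead_mean(scores, (1, 1, 0)))  # ~0.698 (pairwise interaction, Bonferroni-like)
print(muirhead_mean(scores, (1, 1, 1)))  # ~0.695 (interaction among all three arguments)
print(weighted_muirhead_mean(scores, (0.4, 0.2, 0.4), (1, 1, 1)))
```

Changing T is what lets the operator range from a plain average (no interaction) to forms that couple two, three, or more attributes at once. In the full method of Section IV, the same idea is applied at the level of IVq-RDHULVs: normalize any cost attributes, aggregate each alternative's attribute values with the weighted (dual) Muirhead mean operator, compute the scores of Definition 4, and rank.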
For clinicians working in a teaching hospital, their tasks involve not only medical services but also the work of a university teacher, which makes the assessment of their performances more complex. Suppose that there are four clinicians A i (i = 1, 2, 3, 4), and their performances need to be evaluated from three aspects C j (j = 1, 2, 3): clinical services (C 1 ), teaching quality (C 2 ) and scientific research level (C 3 ), of which the weighted vector is w = (0.4, 0.2, 0.4) T . To give the DMs more freedom in decision-making process, they are allowed to give interval-valued q-rung dual hesitant fuzzy uncertain linguistic information and the decision matrix is presented in Table 1. The proposed method is utilized to obtain their scores, and the higher the score, the better the working achievement of the clinician. Note that DMs can express their judgments on clinician' performance through linguistic set S = {s 1 , s 2 , s 3 , s 4 , s 5 , s 6 , s 7 }, and the degree of ''Good'' becomes stronger and stronger from s 1 to s 7 . A. THE DECISION-MAKING PROCESS Step 1: As all attributes are benefit type, the original decision matrix does not need to be normalized. Step 2: Utilize the IVq-RDHULWMM operator to aggregate DMs' assessments so that the overall assessments of alternatives can be derived (assume T = (1, 1, 1)) and q = 3). As the results are too complicated, we omit them here. Step 3: Compute the score values of the overall assessments, and we can get Step 4: According to Definition 4, we can obtain the ranking order which means the order of the job performances of clinicians is A 3 > A 4 > A 2 > A 1 . Therefore, A 3 deserves the best reward. In Step 2, if IVq-RDHULWDMM operator is utilized to aggregate attribute values (assume T = (1, 1, 1)) and q = 3), the score values of alternatives are Thus, the order of performances is A 3 > A 2 > A 4 > A 1 , which means that during the assessment period, A 3 achieved the best performance. B. THE INFLUENCE OF THE PARAMETERS ON THE RANKING RESULTS As aforementioned, the parameter vector T and parameter q play significant roles in final results. Therefore, it is crucial to further investigate the influence of parameters on the score values and ranking results. Firstly, we assign different parameter vectors to T in IVq-RDHULWMM and IVq-RDHULWDMM operators (suppose q = 3), respectively. The decision-making outcomes are presented in Tables 2 and 3. From Tables 2 and 3, we know that the score values obtained by IVq-RDHULWMM and IVq-RDHULWDMM operators are different based on different T , which indicates T does have significant influence on the score values. In detail, if only the first parameter of vector T is real number and the others are 0, for IVq-RDHULWMM, we can find that the larger the real number is, the greater the score value of each alternative is. In contrast, for IVq-RDHULWDMM, the larger the real number, the smaller the score value of each alternative. Therefore, different values of T can reflect DMs' risk preferences of experts. In addition, although there is a little difference among the ranking orders, the clinician with highest score is always A 3 , which reveals the efficiency and stability of the proposed method. On the other hand, it is well-known that MM and DMM are characterized by taking into account the interrelationship among multiple input arguments. In real management environment, there are often some relationships among decision attributes. 
For the above-mentioned clinician performance assessment, due to the limited personal energy of the clinician, greater involvement in clinical services (C 1 ) may lead to a decrease in teaching quality (C 2 ) and scientific research level (C 3 ), and eventually his/her comprehensive score may change. So, it is definitely important to consider the interrelationship among these attributes, and users who are confronted with different kinds of management environments can choose an appropriate T according to practical needs. In the following, by assigning different values to the parameter q in the IVq-RDHULWMM and IVq-RDHULWDMM operators (suppose T = (1, 1, 1)), the effects on the score values and ranking orders are discussed. The decision results are shown in Tables 4 and 5, respectively. As we can see from Table 4 and Table 5, different score values and ranking results are produced with different values of q. However, the clinician who has the highest score is always A 3 , which means that he/she achieves the best job performance and should get more rewards. In general, the parameter q can make the proposed method more flexible. As for the numerical selection of q, we recommend that DMs choose the minimum integer q that makes the sum of the qth powers of the maximum elements in the membership degree set and in the non-membership degree set no larger than one. C. EFFECTIVENESS ANALYSIS In order to verify the effectiveness of our method, we compare the proposed method based on the IVq-RDHULWMM operator with Wei's [36] method based on the interval-valued dual hesitant fuzzy uncertain linguistic weighted average (IVDHFULWA) operator. Example 1: Our method based on the IVq-RDHULWMM operator and Wei's [36] method based on the IVDHFULWA operator, respectively, are utilized to assess these clinicians' performances (the above example) based on the decision matrix shown in Table 1. The results are presented in Table 6. As displayed in Table 6, the ranking orders of the alternatives produced by the two methods are the same, i.e., A 3 > A 4 > A 2 > A 1 , which indicates the effectiveness of our proposed method. D. ADVANTAGES OF OUR METHOD In order to illustrate the superiorities of the proposed method, we compare our method with that proposed by Wei [36] based on the IVDHFULWA operator, and that introduced by Lu and Wei [57] based on the dual hesitant fuzzy uncertain linguistic weighted average/geometric (DHFULWA/DHFULWG) operators. We employ these methods to solve some practical numerical examples and conduct comparative analysis to discuss the advantages of our proposed method. Basically, our method has the following four advantages. 1) ITS ABILITY OF PORTRAYING DMS' EVALUATION INFORMATION MORE ACCURATELY In our proposed MADM method, IVq-RDHULSs are employed to depict DMs' evaluation information. IVq-RDHULSs allow DMs to express the MD and NMD of an ULV by a series of interval values rather than crisp numbers. Evidently, the proposed IVq-RDHULS is more powerful and flexible than the DHFULS proposed by Lu and Wei [56], which utilizes crisp numbers to denote the possible MD and NMD of the corresponding ULV. Furthermore, the dual hesitant fuzzy uncertain linguistic element (DHFULE) is a special case of the IVq-RDHULV (q = 1) in which the upper bound and lower bound of each interval value in the collections of MD and NMD are equal. Therefore, decision-making problems in which attribute values are in the form of DHFULEs can also be solved by our proposed method. We provide the following example to better demonstrate this characteristic. 
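The conversion used in that example is mechanical: each crisp degree v in a DHFULE becomes the degenerate interval [v, v], and the linguistic part is left unchanged. A small sketch (the function name and the example values are ours, purely for illustration):

```python
from typing import List, Tuple

Interval = Tuple[float, float]


def dhfule_to_ivq_rdhulv(linguistic: Tuple[int, int],
                         mds: List[float],
                         nmds: List[float]):
    """Lift a dual hesitant fuzzy uncertain linguistic element to interval form.

    Each crisp membership / non-membership degree v becomes the degenerate
    interval (v, v); the uncertain linguistic part [s_theta, s_tau] is unchanged.
    """
    h: List[Interval] = [(v, v) for v in mds]
    g: List[Interval] = [(v, v) for v in nmds]
    return linguistic, (h, g)


# e.g. a DHFULE <[s3, s4], {0.3, 0.5}, {0.4}> becomes
# <[s3, s4], {[0.3, 0.3], [0.5, 0.5]}, {[0.4, 0.4]}>
print(dhfule_to_ivq_rdhulv((3, 4), [0.3, 0.5], [0.4]))
```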
Example 2 (Revised From [56]): There are five possible emerging technology enterprises A i (i = 1, 2, 3, 4, 5). The experts are required to evaluate the five alternatives under three attributes C j (j = 1, 2, 3), where C 1 represents the technical advancement, C 2 represents the potential market opportunity, and C 3 represents the industrialization infrastructure, human resources and financial conditions. The weight vector of the attributes is w = (0.35, 0.25, 0.40) T . DMs are required to evaluate these alternatives with dual hesitant fuzzy uncertain linguistic information and the revised decision matrix is shown in Table 7. In Example 2, DMs employ DHFULEs to express their evaluations. As mentioned above, a DHFULE is a special case of the IVq-RDHULV, so we can transform a DHFULE into an IVq-RDHULV (the transformed decision matrix is shown in Table 8). Subsequently, the proposed method based on the IVq-RDHULWMM and IVq-RDHULWDMM operators and Lu and Wei's [56] method based on the DHFULWA and DHFULWG operators are applied to cope with the problem, and the outcomes are presented in Table 9. As we see from Table 9, the ranking orders derived by Lu and Wei's [56] method and our method are slightly different, but the first is always A 3 . This also demonstrates the effectiveness of our proposed method. However, our proposed method is still more powerful and flexible than Lu and Wei's [56] method. In reality, due to the high complexity of decision-making problems, it is usually difficult for DMs to express their judgments by crisp numbers. Instead, in order to comprehensively provide their evaluation information, DMs may prefer interval values; in that case, Lu and Wei's [57] method is powerless to deal with this case, whereas our proposed method can still determine the ranking order of alternatives. 2) THE GREATER FREEDOM IT PROVIDES FOR DMS Our proposed method is based on IVq-RDHULSs, which satisfy the condition that the sum of the qth power of the MD and the qth power of the NMD of a ULV is less than or equal to one. In practical decision-making problems, DMs can choose a proper value of q to make ((r^u)^+)^q + ((η^u)^+)^q ≤ 1 hold. The MADM method proposed by Wei [36] is based on IVDHFULSs, which satisfy the constraint that the sum of MD and NMD is less than or equal to one, i.e., (r^u)^+ + (η^u)^+ ≤ 1. Example 3 (Revised From [36]): There is a panel with five possible service outsourcing providers of the communications industry A i (i = 1, 2, 3, 4, 5) to select. The expert team selects three attributes C j (j = 1, 2, 3) to evaluate the five candidates, i.e., business reputation (C 1 ), technical ability (C 2 ), and management ability (C 3 ). The weight vector of the attributes is w = (0.35, 0.25, 0.4) T . The DMs are required to evaluate the five possible providers under the above attributes with interval-valued dual hesitant fuzzy uncertain linguistic information. The original decision matrix D = (d ij ) 5×3 is presented in Table 10. First, we use our developed method based on the IVq-RDHULWMM and IVq-RDHULWDMM operators, and Wei's [36] method based on the IVDHFULWA and IVDHFULWG operators to determine the most desirable outsourcing providers. Their outcomes are then shown in Table 11. It is clear from Table 11 that all the four methods can be utilized to solve this problem, and although the score values derived by the four methods are slightly different, the best choice is always A 4 . Note that although our proposed method is based on the IVq-RDHULSs, it still can be employed in solving Example 3. This is because the IVDHFULS is a special case of the IVq-RDHULS and we can always find a value of q which makes ((r^u)^+)^q + ((η^u)^+)^q ≤ 1 hold. 
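That claim is easy to check mechanically: as long as the maximum possible MD and NMD are not both equal to 1, raising them to a sufficiently large power q drives their sum below 1. A small helper (the numeric inputs are illustrative, not taken from the decision matrices):

```python
def min_valid_q(max_md: float, max_nmd: float, q_max: int = 50) -> int:
    """Smallest integer q with max_md**q + max_nmd**q <= 1.

    Such a q exists whenever max_md and max_nmd are not both equal to 1.
    """
    for q in range(1, q_max + 1):
        if max_md ** q + max_nmd ** q <= 1.0:
            return q
    raise ValueError("no valid q found up to q_max")


print(min_valid_q(0.6, 0.7))  # 2, since 0.36 + 0.49 = 0.85 <= 1
print(min_valid_q(0.9, 0.8))  # 5, since 0.9**5 + 0.8**5 ~= 0.918 <= 1
```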
However, Wei's [36] method will fail if not all attribute values satisfy the condition (r^u)^+ + (η^u)^+ ≤ 1. We provide the following example to better explain this characteristic. Example 4: Given DMs' subjective bias in the real world, some evaluation values in Table 10 are modified so that the maximum possible MDs and NMDs of d 31 , d 32 and d 33 become (0.9, 0.4), (0.5, 0.9) and (0.5, 0.7), respectively, and all other values remain unchanged. The score values and ranking orders derived by the different methods are displayed in Table 12. It is clear that Wei's [36] MADM method is powerless to deal with Example 4, while our proposed method can still get the ranking orders of alternatives. This is because none of d 31 , d 32 or d 33 satisfy the constraint (r^u)^+ + (η^u)^+ ≤ 1, so that they cannot be represented by IVDHFULSs. However, we can always find a proper value of q that makes d 31 , d 32 and d 33 satisfy the condition ((r^u)^+)^q + ((η^u)^+)^q ≤ 1. For example, we can set q = 3; then 0.9^3 + 0.4^3 = 0.793 < 1, 0.5^3 + 0.9^3 = 0.854 < 1, and 0.5^3 + 0.7^3 = 0.468 < 1. Hence, our proposed method can provide more freedom for DMs to fully express their evaluation values. 3) THE ABILITY TO CONSIDER THE INTERRELATIONSHIP AMONG MULTIPLE ATTRIBUTES Generally, in practical MADM problems there are usually interrelationships among multiple attributes. When calculating the comprehensive evaluation values of alternatives, not only the attribute values and their corresponding weight information but also the complicated interrelationships among attributes should be taken into account. As we know from Tables 9 and 11, the ranking orders derived by our proposed method are slightly different from those obtained by Lu and Wei's [57] and Wei's [36] methods. This is because Lu and Wei's [57] and Wei's [36] methods are based on the simple weighted average/geometric AOs, which do not consider the interrelationship between attributes. Our proposed method is based on the MM (DMM), so that it has the ability to capture the interrelationships among interacting attributes. Basically, the interrelationship among attributes widely exists, while Lu and Wei's [57] and Wei's [36] methods assume that attributes are always independent, which is inconsistent with reality. Our method based on the MM operator not only deals with the interrelationships among attributes, but also has the ability to manipulate the number of interacting attributes. Hence, our proposed method is more suitable for real MADM problems. For example, if T = (1, 1, 0), then our method captures the interrelationships between any two attributes; if T = (1, 1, 1), then our method can reflect the interrelationship among all the three attributes; if there is indeed no interrelationship between the attributes, then we can set T = (1, 0, 0). Therefore, our proposed method is more powerful and flexible than Lu and Wei's [57] and Wei's [36] methods. 4) THE EFFICIENCY IN PORTRAYING DMS' EVALUATION INFORMATION BOTH QUANTITATIVELY AND QUALITATIVELY Basically, to appropriately express their evaluation information, DMs usually prefer using ULVs. In addition, the MDs and NMDs of ULVs provided by DMs can represent the decision judgments more accurately. Xu et al.'s [40] method is based on the IVq-RDHFSs, which employ interval values to denote the possible MDs and NMDs. Obviously, the MADM method proposed by Xu et al. [40] cannot fully express DMs' evaluation information because it ignores the qualitative decision information. Our method is based on the IVq-RDHULS, which is a combination of Xu et al.'s [40] IVq-RDHFS with ULVs. 
It not only effectively depicts DMs' quantitative decision information (the same as Xu et al.'s method [40]), but also portrays the qualitative evaluation information by ULVs. Table 13 is provided to better demonstrate the characteristics of different MADM methods. In the following, we summarize the defects of some existing MADM methods and the superiorities of our proposed method. E. SUMMARY (1) First, Lu and Wei's [57] MADM method is based on DHFULS in which crisp numbers are employed to denote the possible MDs and NMDs of ULV. Our proposed method is based on IVq-RDHULS where the MDs and NMDs of ULVs are expressed as interval values. Obviously, interval values can take more information into account than crisp numbers. Furthermore, the DHFULS requires the sum of MD and NMD to be less than or equal to one. If the DHFULS is used to describe DMs' evaluation information, in order to meet its constraint DMs may provide some anamorphic evaluation values, which further leads to unreasonable decision results. Compared with DHFULSs, our IVq-RDHULSs have laxer constraint and so that DMs can express their preference information freely. Hence, our proposed method is better than Lu and Wei's [57] method in information expression form. (2) Wei's [36] method is based on IVDHFULSs which also use interval values to denote the possible MDs and NMDs of ULVs. This characteristic is same as our proposed method. However, the constraint of IVDHFULSs is so rigorous that it may incur information distoration to some extent. Our method is based on IVq-RDHULSs and DMs have enough freedom to express their preference information. Moreover, IVDHFULS is a special case of IVq-RDHULS. Hence, our proposed method is more powerful than Wei's [36] MADM method. (3) The main defect of Xu et al.'s [40] method is that DMs' qualitative evaluation information is ignored in the whole process of MADM. In contrast, our method reflects both DMs' quantitative and qualitative evaluation values. Therefore, our method is more suitable than Xu et al.'s [40] method in solving practical MADM problems. (4) Lu and Wei's [57] and Wei's [36] MADM methods are based on the weighted average/geometric operators which fail to consider the inherent interrelationship between attributes. Our proposed MADM method is based on the MM and DMM operators so that it has the ability to capture the interrelationships among attributes. Furthermore, the information aggregation process of our method is more flexible. As a result, our method is more powerful and flexible than Wei's [36] and Lu and Wei's [57] MADM methods. VI. CONCLUSION REMARKS In this paper, we introduced a new MADM method based on IVq-RDHULWMM and IVq-RDHULWDMM operators. First, the concept of IVq-RDHULSs were put forward by extending the IVq-RDHFS to linguistic environment. Then in order to appropriately aggregate IVq-RDHUL attribute values, we extended the powerful MM and DMM to IVq-RDHULSs and proposed a series of new AOs of IVq-RDHULVs. In the manuscript, some important properties of these AOs were also discussed. Based on the IVq-RDHULSs and their AOs we further introduced a new MADM method. Finally, in order to demonstrate the superiorities of this method over some recently presented methods, several numerical examples were provided, and comparative analysis was conducted. In future works, we shall investigate more applications of IVq-RDHULSs in MADM problems. He is currently an Associate Professor with Beijing Jiaotong University. 
His research interests include health informatics, data-driven healthcare management, information technology and society, and decision making. JUN WANG received the B.S. degree from Hebei University, in 2013, and the Ph.D. degree from Beijing Jiaotong University, China, in 2019. He is currently a Lecturer with the School of Economics and Management, Beijing University of Chemical Technology, Beijing, China. His research interests include aggregation operators, multiple attribute decision making, fuzzy logics, operational research, and big data. YUAN XU received the B.S. degree from Hebei University, Baoding, Hebei, China, in 2017. She is currently pursuing the master's degree with the School of Economic and Management, Beijing Jiaotong University, Beijing, China. Her current research interests include group decision making, aggregations, health informatics, and data-driven healthcare management.
v3-fos-license
2021-08-04T13:28:12.783Z
2021-08-04T00:00:00.000
236899899
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmolb.2021.685466/pdf", "pdf_hash": "e5517692933492fde864627c495b3414fead65f1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46299", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "e5517692933492fde864627c495b3414fead65f1", "year": 2021 }
pes2o/s2orc
Circular Ribonucleic Acid circFTO Promotes Angiogenesis and Impairs Blood–Retinal Barrier Via Targeting the miR-128-3p/Thioredoxin Interacting Protein Axis in Diabetic Retinopathy Background: Increasing attention has been attracted by the role of circular RNAs (circRNAs) in ocular diseases. Previous study has revealed that circ_0005941 (also known as circFTO, an alpha-ketoglutarate–dependent dioxygenase) was upregulated in the vitreous humor of diabetic retinopathy (DR), while its underlying mechanism in DR remains unknown. Methods: Retinal vascular endothelial cells (RVECs) treated with high glucose (HG) were used to establish the DR cell model. The in vivo assays were conducted using streptozotocin-induced diabetic mice. The circular structure and stability of circFTO were identified by Sanger sequencing and RNase R treatment. RT-qPCR analysis was used to detect the RNA expression. The levels of the mRNA-encoded protein thioredoxin-interacting protein (TXNIP) or angiogenesis-associated proteins (VEGFA, PDGF, and ANG2) and blood–retinal barrier (BRB)-related proteins (ZO-1, Occludin, and Claudin-5) were measured by Western blot. The viability of RVECs was measured using CCK-8 assays. The angiogenesis of RVECs was assessed using tube formation assays in vitro. Endothelial permeability assays were conducted to examine the function of the BRB. The binding between genes was explored using RNA pulldown and luciferase reporter assays. Results: CircFTO was upregulated in HG-treated RVECs. CircFTO deficiency reversed the HG-induced increase in the viability and angiogenesis of RVECs and alleviated HG-mediated impairment of the BRB. MiR-128-3p bound with circFTO and was downregulated in HG-treated RVECs. TXNIP was a downstream target gene of miR-128-3p. TXNIP was highly expressed in the DR cell model. Rescue assays revealed that circFTO promoted angiogenesis and impaired the blood–retinal barrier by upregulating TXNIP. In the DR mouse model, circFTO silencing inhibited angiogenesis and promoted BRB recovery in vivo. Conclusion: CircFTO promotes angiogenesis and impairs the blood–retinal barrier in vitro and in vivo by binding with miR-128-3p to upregulate TXNIP in DR. INTRODUCTION Diabetic retinopathy (DR) is a common diabetic complication leading to blindness in patients in developed countries (Congdon et al., 2003). Risk factors such as genetic background, hyperglycaemia, hypertension, dyslipidaemia, puberty, and pregnancy all contribute to DR progression (Cheung et al., 2010). The lesions within the retina are symbolic features of DR, with changes in retinal blood vessels and neovascularization (Forbes and Cooper, 2013). It is estimated that approximately 80% of diabetic patients with over 20 years' duration of diabetes show signs of DR (Leasher et al., 2016). The number of DR patients reaches 150 million worldwide and may be doubled by 2025 (Gupta et al., 2013). However, treatment for DR is still unsatisfactory, and it is necessary to explore the underlying mechanism for the improvement of therapy. Circular RNAs (circRNAs) refer to endogenous noncoding RNAs in a covalently closed circular structure commonly present in eukaryotes (Chen, 2020). CircRNAs have been reported to be implicated in the pathogenesis of diverse diseases (Li et al., 2018), including ocular diseases (Guo et al., 2019;Zhang et al., 2020). For example, circular RNA-ZNF532 knockdown suppresses the degeneration of retinal pericytes and vascular dysfunction induced by diabetes (Jiang et al., 2020). 
CircRNA cPWWP2A facilitates retinal vascular dysfunction via the upregulation of Angiopoietin 1, Occludin, and sirtuin 1 (Yan et al., 2020). CircRNA_0084043 enhances the oxidative stress and inflammation response in DR progression via the upregulation of transforming growth factor alpha (TGFA) . In this study, the role of circ_0005941 (also known as circFTO, an alpha-ketoglutarate-dependent dioxygenase) in DR was investigated. Previous study has revealed that circ_0005941 was upregulated in the DR vitreous humor , while its underlying regulatory mechanisms remain unclear. CircRNAs exert their functions via diverse mechanisms, including the competitive endogenous RNA (ceRNA) network, in which circRNAs act as molecular sponges for miRNAs to restrain the suppressive effect of microRNAs (miRNAs) on messenger RNAs (mRNAs) (Hansen et al., 2013;Tay et al., 2014). Many circRNAs act as ceRNAs in the progression of DR. For example, circCOL1A2 acts as a ceRNA for miR-29b to upregulate the expression of the vascular endothelial growth factor (VEGF), which promotes angiogenesis in DR (Zou et al., 2020). CircRNA DMNT3B alleviates DR vascular dysfunction by serving as a ceRNA for miR-20b-5p to upregulate the expression of the bone morphogenetic protein and activin membrane bound inhibitor (BAMBI) (Zhu et al., 2019). Circ_0041795 acts as a ceRNA for miR-646 to activate the expression of vascular endothelial growth factor C (VEGFC), which facilitates the injury of high glucose-treated ARPE 19 cells in DR (Sun and Kang, 2020). In the present study, we hypothesized that circFTO might function as a ceRNA in DR progression, and the regulatory mechanism of circFTO in DR was further investigated using both in vivo and in vitro assays. The findings might provide a theoretical basis for DR treatment. Bioinformatic Analysis The starBase website (http://starbase.sysu.edu.cn/) was used to predict the miRNAs interacting with circFTO under the screening condition of CLIP-Data > 2 and Degradome-Data > 2 (Li et al., 2014). The downstream target genes of miR-128-5p were also predicted using the starBase website, and the top ten targets were identified based on the number of supported AGO CLIP-seq experiments (Ago-Exp-Num). Cell Culture Human retinal vascular endothelial cells (RVECs) provided by the BeNa Culture Collection (Beijing, China) were maintained in Dulbecco's modified Eagle's medium (DMEM; Gibco, Grand Island, NY, United States) with 10% fetal bovine serum (FBS; Corning, Midland, MI, United States) and 1/100 penicillin/ streptomycin (Biochrom, Cambridge, United Kingdom) at 37°C in 5% CO 2 . To establish the DR cell model, RVECs were treated with 5.5 mM glucose for the control group, 5 mM glucose and 25 mM mannitol for an osmotic control, or 30 mM d-glucose for the high glucose (HG) group (Zhu et al., 2019). Actinomycin D and RNase R Treatments Total RNA (2 μg) was cultured with or without 3U μg −1 RNase R (Epicentre, Madison, WI, United States) at 37°C for 30 min. Next, the RNA was purified using an RNeasy MinElute cleaning kit (Qiagen), and the expression of circFTO or linear FTO was detected by RT-qPCR. To examine the stability of mRNA, 2 mg/ml of Actinomycin D (Sigma-Aldrich, Shanghai, China) was used to treat RVECs, with dimethyl sulfoxide (DMSO) treatment as the negative control. The RNA expression level in RVECs was detected at specific time points (4, 8, and 12 h). Reverse Transcription Quantitative Polymerase Chain Reaction TRIzol reagent (Sigma-Aldrich) was used to extract total RNAs from human RVECs. 
The GoScript Reverse Transcription System (Qiagen GmbH, Germany) was used for RNA transcription. RT-qPCR was performed using a Universal RT-PCR Kit for circFTO and TXNIP and a TaqMan MicroRNA Assay Kit for miR-128-3p. RNA expression was calculated using the 2 -ΔΔCt method with normalization to GAPDH and U6. The primer sequences were as follows: circFTO Cell-Counting Kit-8 Assay The transfected RVECs were cultured in the medium at 37°C overnight. After trypsinization, RVECs were suspended in the medium (Jiang et al., 2009). Next, cells were plated onto 96-well plates at the density of 1 × 10 3 cells/well. To detect the viability of RVECs, 10 μL of CCK-8 solution (Dojindo, Kumamoto, Japan) was added to each well and incubated for 4 h. After 48 h, a microplate reader (Reagen, Shenzhen, China) was used to determine the optical density (OD) at a wavelength of 450 nm. All experiments were conducted in triplicate. Ribonucleic Acid Pulldown Assay RNA pulldown assay was used to explore the interaction between circFTO and potential binding miRNAs (miR-128-3p, miR-216a-3p, and miR-3681-3p). RVECs were lysed using Pierce IP Lysis Buffer (Thermo Scientific). A biotinlabeled circFTO probe (100 pmol) and oligonucleotide probes were mixed with the RVEC lysates added with 50 µL of streptavidin magnetic beads (Thermo Scientific) at 25°C overnight. Next, the RNAs pulled down using the oligo probe and circFTO probes were purified, extracted using an RNeasy Mini Kit (Qiagen) and detected by RT-qPCR analysis. The biotin-labeled RNA circFTO probe and control probes were designed and synthesized by RiboBio (Guangzhou, China), and the sequences of used RNA and probes are provided in Table 1. Luciferase Reporter Assay The pmirGLO vectors (Promega, Madison, WI, United States) subcloned with fragments of circFTO or TXNIP 3′UTR sequences containing the binding site to miR-128-3p were used to construct circFTO-WT/Mut vectors or TXNIP-WT/ Mut vectors and then transfected into RVECs with miR-128-3p mimics or NC mimics. After 48 h of transfection, the dualluciferase assay system (Promega) was used to examine the luciferase activity of WT/Mut reporters. Tube Formation Assay The tube formation assay was conducted in triplicate to detect the angiogenesis of RVECs. After being starved in the medium for 24 h, RVECs were seeded in precooled 96well plates with 60 μL of Matrigel (BD Biosciences, San Jose, CA, United States) at a density of 1 × 10 3 for 18 h of incubation in the medium. Finally, an inverted microscope (Nikon, Tokyo, Japan) was used to take images, and Image-Pro Plus (version 6.0) was used to calculate the meshes and branch length in the tube formation. Hematoxylin and Eosin Staining The mouse retinal tissues embedded in paraffin in 5-μm slices were cultured in hematoxylin for 5 min. Then eosin dye solution was used to stain the tissue samples for 3 min. A microscope was used to capture the images of HE staining. Xenograft Animal Experiment The animal experiment was approved by the ethical committee of The Second Hospital of Shanxi Medical University. A total of 32 C57Bl/6J male mice (5 weeks; weighing 20-25 g) were provided by Beijing Vital River Laboratory Animal Technology. The animals were randomly divided into the sham, DR, DR + AAV-empty, and DR + AAV-sh-circFTO groups (n 8 per group). The mice were kept in a 12-h light/dark cycle with access to food and water. 
A DR mouse model was established by daily intraperitoneal injection of streptozotocin (STZ, 60 mg/kg, Sigma) into the mice successively for 5 days after the injection (IP) of ketamine (80 mg/kg) and xylazine (4 mg/kg) for anesthetization. Mice in the sham group were injected with an equal volume of citrate buffer. The fasting blood glucose of mice was accessed once a week. The diabetes was identified as successfully achieved when the fasting blood glucose of mice was over 300 mg/dl (Zou et al., 2020). Approximately 1.5 μL (1 × 10 12 vg/mL) of adenoassociated virus (AAV)-5 with sh-circFTO or an empty vector was delivered into the vitreous humor using a 33gauge needle two weeks before the diabetes induction (Shan et al., 2017). AAV5 was reported to effectively infect vascular endothelial cells and retinal pigment epithelial cells in previous studies, and recombinant AAV5 can be used to infect human retinal microvascular endothelial cells for gene expression depletion (Wu et al., 2017). Blood-Retinal Barrier Breakdown Quantitation Evans blue was employed to measure BRB breakdown, according to a previous study (Hossain et al., 2016). After the mice were anesthetized, Evans blue (45 mg/kg) was injected into them through the tail vein. 2 h later, the mice were re-anesthetized and blood samples (0.1-0.2 ml) were collected, and then the mice were euthanized. Next, the eyes were enucleated to dissect out the retina. Dimethylformamide treatment, centrifugation for retinas and blood samples, and the calculation of BRB breakdown were performed according to the previous study (Hossain et al., 2016). Statistical Analysis All experiments were repeated three times, and data are shown as the mean ± standard deviation. The data were evaluated using SPSS 17.0 (IBM, Armonk, NY, United States). The differences between two groups were analyzed using Student's t-test. The comparison among multiple groups was analyzed using one-way analysis of variance followed by Tukey's post hoc analysis. The value of p< 0.05 was regarded as the threshold value. 
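The relative expression values analysed throughout were produced by the 2^-ΔΔCt rule stated in the RT-qPCR section (normalisation to GAPDH or U6). As a reminder of the arithmetic — the Ct numbers below are made-up placeholders, not data from this study:

```python
def fold_change_ddct(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression by the 2^-ddCt method, normalised to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # e.g. circFTO vs GAPDH, HG group
    d_ct_control = ct_target_control - ct_ref_control   # same genes, control group
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)


# Hypothetical Ct values in which the target amplifies two cycles earlier
# (relative to the reference) in the treated group than in the control group:
print(fold_change_ddct(24.0, 18.0, 26.0, 18.0))  # 4.0-fold up-regulation
```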
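The comparisons described in the Statistical Analysis section (Student's t-test for two groups; one-way ANOVA followed by Tukey's post hoc test for three or more) can be reproduced with standard Python libraries. The data below are random placeholders rather than measurements from this study, and the Tukey step assumes the statsmodels package is installed:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
control = rng.normal(1.0, 0.1, 9)    # placeholder relative-expression values
mannitol = rng.normal(1.0, 0.1, 9)
hg = rng.normal(1.6, 0.1, 9)

# Two groups: Student's t-test
t_stat, p_two = stats.ttest_ind(control, hg)
print(f"control vs HG t-test: p = {p_two:.3g}")

# Three groups: one-way ANOVA, then Tukey's post hoc pairwise comparisons
f_stat, p_anova = stats.f_oneway(control, mannitol, hg)
print(f"one-way ANOVA: p = {p_anova:.3g}")

values = np.concatenate([control, mannitol, hg])
groups = ["control"] * 9 + ["mannitol"] * 9 + ["HG"] * 9
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```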
TABLE 1 | Sequences of the RNAs and biotin-labeled probes (including the circFTO probe) used in this study. CircFTO Forms Closed Stable Circular Structure The formation of circFTO is presented in Figure 1A. CircFTO, comprising three exons with a length of 344 nucleotides, was back-spliced from FTO pre-mRNA. The splice junction that formed the circular structure of circFTO was confirmed by Sanger sequencing ( Figure 1B). According to the agarose gel electrophoresis (AGE) assay, circFTO was amplified only from cDNA with divergent primers, rather than in the products from genomic DNA, whereas FTO was amplified in both cDNA and gDNA ( Figure 1C). As shown in Figure 1D, circFTO was more resistant to RNase R digestion than FTO. After Actinomycin D treatment, RT-qPCR showed that the half-life of circFTO was over 24 h, while that of FTO mRNA was approximately 14 h ( Figure 1E). CircFTO Knockdown Alleviates High Glucose-Induced Angiogenesis and Blood-Retinal Barrier Breakdown in Retinal Vascular Endothelial Cells According to RT-qPCR analysis, the expression of circFTO was significantly upregulated in RVECs treated with HG compared with that in the control and Mannitol groups (Figure 2A). The silencing efficiency of circFTO was confirmed using RT-qPCR analysis, showing that circFTO expression was markedly decreased in HG-treated RVECs after sh-circFTO transfection ( Figure 2B). The viability of RVECs was increased in the group receiving HG treatment, while circFTO knockdown reversed the increase induced by HG treatment ( Figure 2C). RVEC angiogenesis was enhanced by HG treatment, while circFTO silencing decreased the number of meshes and the branch length in tube formation in RVECs after HG treatment ( Figure 2D). 
Western blot analysis was used to detect the levels of angiogenesis-related proteins in RVECs. The protein levels of VEGFA, PDGF, and ANG2 were elevated in the HG group and showed a decrease in the HG+ sh-circFTO group compared with the HG group ( Figure 2E). The levels of proteins (ZO-1, Occludin, and Claudin-5) associated with the blood-retinal barrier (BRB) were reduced in HG-treated RVECs, which was reversed by circFTO deficiency ( Figure 2F). CircFTO Interacts With miR-128-3p in Retinal Vascular Endothelial Cells As shown by the Venn diagram in Figure 3A, three miRNAs with a potential binding site to circFTO were predicted using the starBase website under the condition of CLIP > 2 and Degradome > 2. The results of RNA pulldown assay revealed that only miR-128-3p was significantly enriched in the circFTO probe, compared with the other candidate miRNAs ( Figure 3B). Thus, miR-128-3p was selected for further study. The expression of miR-128-3p in RVECs treated with HG or Mannitol was detected using RT-qPCR. The results indicated that miR-128-3p was expressed at a low level in HG-treated RVECs ( Figure 3C). Afterward, the overexpression efficiency of miR-128-3p was confirmed by RT-qPCR after the transfection of miR-128-3p mimics in RVECs ( Figure 3D). The binding site between miR-128-3p and circFTO was presented. The results of luciferase reporter assay showed that miR-128-3p overexpression decreased the luciferase activity of wild- type circFTO in RVECs, while that of the mutant circFTO exhibited no significant change ( Figure 3E). Moreover, the effect of circFTO knockdown on miR-128-3p expression was detected by RT-qPCR, showing that miR-128-3p expression was markedly upregulated due to circFTO deficiency ( Figure 3F). Thioredoxin-Interacting Protein Is Directly Targeted by miR-128-3p in Retinal Vascular Endothelial Cells The expression of candidate mRNAs for miR-128-3p in RVECs transfected with miR-128-3p mimics was detected using RT-qPCR. The results showed that only TXNIP was significantly downregulated by miR-128-3p overexpression in RVECs ( Figure 4A). Therefore, TXNIP was identified for further study. The protein level of TXNIP was also decreased after the transfection of miR-128-3p mimics in RVECs ( Figure 4B). The mRNA expression and protein levels of TXNIP were all downregulated due to circFTO depletion, as shown by RT-qPCR and Western blot analyses ( Figures 4C,D). Luciferase reporter assays demonstrated that miR-128-3p overexpression reduced the luciferase activity of Wt-TXNIP, and no evident change was observed in the TXNIP-Mut group (Figures 4E,F). RT-qPCR showed that TXNIP expression was elevated in RVECs treated with HG compared with that in the Mannitol or control group ( Figure 4G). CircFTO Promotes Angiogenesis and Impairs the Blood-Retinal Barrier in Retinal Vascular Endothelial Cells by Upregulating Thioredoxin-Interacting Protein The overexpression efficiency of TXNIP was verified in HG-treated RVECs. RT-qPCR revealed that TXNIP expression was successfully upregulated by pcDNA3.1/TXNIP ( Figure 5A). Cell-counting kit-8 (CCK-8) assays revealed that TXNIP overexpression reduced the suppressive effect of circFTO silencing on the viability of RVECs ( Figure 5B). The results of tube formation assay indicated that RVEC angiogenesis was inhibited by circFTO knockdown, while overexpressed TXNIP reversed the suppressive effect of silenced circFTO on RVEC angiogenesis ( Figure 5C). 
Moreover, circFTO deficiency caused a reduction in the levels of angiogenesis-related proteins (VEGFA, PDGF, and ANG2), which was rescued by overexpressed TXNIP ( Figure 5D). The levels of BRB-associated proteins (ZO-1, Occludin, and Claudin-5) were elevated after circFTO silencing and then reversed by TXNIP overexpression in RVECs ( Figure 5E). CircFTO Knockdown Attenuates Angiogenesis and Alleviates Blood-Retinal Barrier Breakdown Mediated by Diabetes in Diabetic Retinopathy Mouse Models As shown by HE staining, in the DR mouse model (n 8 mice/ group), the mouse retinal cells were irregularly and disorderly FIGURE 3 | CircFTO interacts with miR-128-3p in RVECs. (A) Candidate miRNAs with a potential binding site to circFTO were selected using a Venn diagram under the condition of CLIP > 2 and Degradome > 2. (B) RNA pulldown assay was used to investigate the interaction between miR-128-3p and circFTO in RVECs using an oligo probe and a circFTO probe. (C) Expression of miR-128-3p in RVECs treated with HG or Mannitol was measured by RT-qPCR. (D) Overexpression efficiency of miR-128-3p in RVECs transfected with miR-128-3p mimics was examined using RT-qPCR. (E) Binding site between miR-128-3p and circFTO was predicted using the starBase website. A luciferase reporter assay was used to explore the binding between miR-128-3p and circFTO in RVECs transfected with miR-128-3p mimics. (F) RT-qPCR was performed to examine the effect of circFTO deficiency on the expression of miR-128-3p in RVECs. ***p<0.001. Frontiers in Molecular Biosciences | www.frontiersin.org August 2021 | Volume 8 | Article 685466 arranged, and the number of vessels was increased. CircFTO deficiency attenuated these performances and alleviated the angiogenesis ( Figure 6A). The thickness of the retina in mice was increased by DR, while circFTO deficiency partially reversed the DR-mediated increase in retinal thickness ( Figure 6B). The breakdown of the BRB was measured using Evans blue dye, indicating that silencing circFTO mitigated the impairment of the BRB ( Figure 6C). According to Western blot, the levels of angiogenesis-related proteins (VEGFA, PDGF, and ANG2) were elevated in DR mouse retinal tissues and then reversed by silenced circFTO. The levels of proteins associated with the BRB (ZO-1, Occludin, and Claudin-5) were reduced in DR mouse retinal tissues, and circFTO knockdown rescued the decrease in these protein levels ( Figures 6D,E). RT-qPCR showed that circFTO and TXNIP expression levels were upregulated in DR mouse retinal tissues and decreased after circFTO knockdown, while the expression of miR-128-3p was downregulated in the DR mouse retinal tissues (n 10) and showed an increase after the transfection of sh-circFTO ( Figure 6F). Moreover, miR-128-3p expression was demonstrated to be negatively correlated with circFTO and TXNIP expression, while TXNIP expression was positively correlated with circFTO expression in mouse retinal tissues (n 30), as identified using Spearman's correlation coefficient ( Figure 6G). DISCUSSION In the present study, circFTO was identified to be highly expressed in HG-treated RVECs. The knockdown of circFTO was revealed to reverse the HG-induced increase in the viability and angiogenesis of RVECs and alleviate the HG-induced impairment of the blood-retinal barrier (BRB). In vivo assays showed that circFTO silencing attenuated the angiogenesis and CircFTO was demonstrated to function as a ceRNA in the progression of DR. 
MiR-128-3p was revealed to be sponged by circFTO in RVECs and was demonstrated to be downregulated in HG-treated RVECs. The role of miR-128-3p has been reported in diverse diseases. For example, miR-128-3p is targeted by mmu_circ_0000250, which promotes the wound healing of diabetic mice (Shi et al., 2020). Moreover, previous studies have also indicated the aberrant expression profile of miR-128 in diabetic patients and diet-induced diabetic mice (Prabu et al., 2015). MiR-128-3p overexpression is revealed to promote inflammatory responses induced by TNF-alpha via regulating Sirt1 in bone marrow mesenchymal stem cells (Wu et al., 2020). Thioredoxin-interacting protein (TXNIP) was revealed to be directly targeted by miR-128-3p at the 3′-untranslated region (3′-UTR) in RVECs. The expression of TXNIP at the mRNA and protein levels was negatively regulated by miR-128-3p and positively regulated by circFTO in RVECs. TXNIP was also upregulated in RVECs after HG treatment. Rescue assays indicated that TXNIP overexpression offset the effects of circFTO silencing on the viability, angiogenesis, and BRB of HG-treated RVECs. Previous studies have also reported that TXNIP was highly expressed in diabetic complications (Xu et al., 2013;Lv et al., 2020). For example, the inhibition of the p38/TXNIP/NF-κB pathway by melatonin is suggested to maintain the inner blood-retinal barrier in DR (Tang et al., 2021). Moreover, TXNIP overexpression has been demonstrated to activate autophagy and apoptosis in the rat müller cells treated with high glucose in DR (Ao et al., 2021). TXNIP deficiency is revealed to inhibit the NLRP3 axis and reduce renal damage in diabetic nephropathy rat models (Ke et al., 2020). In the progression of diabetes, retinal vessels are the early and common targets whose injury and dysfunction becomes a leading cause for vision loss in diabetic patients. The pathological alterations in the retina of diabetic patients are characterized by neovascularization (Nawaz et al., 2019). Molecules including vascular endothelial growth factor A (VEGFA), platelet-derived growth factor (PDGF), and angiogenin, ribonuclease A family, member 2 (ANG2) are closely related to angiogenesis, which increase vascular leakage and facilitate DR progression (Rask-Madsen and King, 2013). In this study, the expression levels of VEGFA, PDGF, and ANG2 were elevated after HG treatment, while circFTO silencing exerted a suppressive effect on the levels of these factors in HG-treated EVECs or the DR mouse model. The blood-retinal barrier (BRB) is essential for the establishment and maintaining of a stable environment for optimum retinal function (Cunha-Vaz et al., 2011;Naylor et al., 2019), which is critically implicated in DR progression (Antonetti et al., 2021). The inner blood-retinal barrier (iBRB) comprises tight junctions (TJs) between neighboring retinal endothelial cells. As the first transmembrane protein that was identified in the TJ, Occuludin plays a vital role in paracellular permeability. Zonula occludens-1 (ZO-1) is a cytoplasmic protein that anchors transmembrane proteins to the cytoskeleton, which is important in the formation and organization of TJs (Bazzoni and Dejana, 2004;Naylor et al., 2019). Claudin-5 is the most predominant Claudin of the iBRB, which is confined to endothelial cells (Naylor et al., 2019). Herein, protein levels of ZO-1, Occuludin, and Claudin-5 were examined to probe the impairment of the BRB. 
When the BRB is broken down, protein levels of ZO-1, Occuludin, and Claudin-5 are downregulated. CircFTO promotes BRB breakdown by upregulating the expression of TXNIP in HG-induced RVECs and DR mouse models. Mechanistically, the mTOR inhibitor was reported to suppress the breakdown of the BRB, and mTOR signaling involved in pericytes might be profoundly relevant to early subclinical stages of DR (Jacot and Sherris, 2011). Despite the role of the mTOR pathway in DR, mTOR signaling is associated with the pathobiology of the retina. For example, the suppression of mTOR signaling inhibits the dedifferentiation of the retinal pigment epithelium, which is a critical factor involved in the formation of the outer blood-retinal barrier (Zhao et al., 2011). The association between mTOR and the regulation of TJs will be investigated in our future studies. In conclusion, circFTO is upregulated in HG-treated RVECs, and circFTO promotes angiogenesis and impairs the BRB in HGtreated RVECs and in DR mouse models in vitro and in vivo, which might provide novel insight into DR treatment. However, more experiments are required in our future studies to further verify the upstream genes or downstream signaling pathways of the circFTO/miR-128-3p/TXNIP axis in DR development. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author. ETHICS STATEMENT The animal study was reviewed and approved by the ethical committee of The Second Hospital of Shanxi Medical University. AUTHOR CONTRIBUTIONS JG conceived and designed the research; JG, FX, WR, YZ, QD, QL, and XL performed the research; JG analyzed the data; JG, FX, and WR wrote the manuscript. The final manuscript has been seen and approved by all authors, and we have taken due care to ensure the integrity of the work. ACKNOWLEDGMENTS We are truly grateful for the help that all participants offered during our study.
Internet-based cognitive behavioural therapy as a feasible treatment of adult-onset, focal, isolated, idiopathic cervical dystonia Highlights • Internet-based CBT is feasible for individuals with adult-onset cervical dystonia. • Internet-based CBT reduces depression and anxiety in adult-onset cervical dystonia. • Effects from internet-based CBT are sustained in some individuals at six months. Introduction Psychiatric symptoms, in particular depression and anxiety, are increasingly recognised as part of the phenotypic spectrum of adult-onset, focal, isolated, idiopathic cervical dystonia (AOIFCD) [1]. In spite of this, there remains no standardised management strategy, with available pharmacological treatment often exacerbating the underlying movement disorder [2]. These psychiatric symptoms have also been shown to have a greater impact on quality of life (QoL) than the motor symptoms themselves, further reinforcing the need to develop appropriate treatment options [1]. Previous case reports have demonstrated promise for cognitive behavioural therapy (CBT) in managing anxiety and depression in AOIFCD [3,4]; however, timely access to face-to-face psychological therapy is often limited by cost, waiting times and a shortage of suitably qualified therapists. This has resulted in a number of internet-based CBT (iCBT) programmes being developed, with many focused on the management of depression and anxiety [5,6]. This study demonstrates the feasibility of using an anxiety- and depression-focused iCBT programme for individuals with AOIFCD, as well as determining its impact on these symptoms, motor symptom severity and QoL. iCBT has the potential to provide an accessible, cost-effective care model that could be offered alongside currently available medical management, maximising the use of available healthcare resources. Methods Participants were recruited, providing informed consent in paper format or via an online platform (Research Ethics committee reference: 19/WA/0265), following a previously detailed protocol (Fig. 1a) [7]. Randomisation was completed at a 1:1 ratio using sealed opaque envelopes that contained a computer-generated random allocation code. Envelopes were selected by a blinded assessor and opened during the participant's baseline assessment. Due to the nature of the intervention, it was not possible to blind the participant to the outcome of the randomisation. Participants randomised into the iCBT intervention group were introduced to the iCBT platform, hosted by SilverCloud Health Ltd (www.silvercloudhealth.com), provided with a link to the "Space from Anxiety and Depression" programme, and asked to complete the course (one module a week) over the subsequent 8 weeks (Fig. 1b). Our primary outcome measure was the extent of participant engagement with the programme, with the frequency of 'logins' recorded, and participants classified as having high (minimum of 7 weeks of programme activity), medium (active for 4-7 weeks), or low (active for < 4 weeks) engagement over the 8-week period. Those recruited to the iCBT arm were also asked to respond to a short feedback survey (Fig. 1a). Secondary outcome measures assessed changes in motor and non-motor symptomatology at baseline, 3 months, and 6 months post enrolment. These were conducted in the participant's home, university research clinic or online using videoconferencing software (Fig. 1a).
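To make the engagement bands above concrete, the following is a minimal sketch rather than the study's actual analysis code. It assumes a hypothetical per-participant table with a count of weeks containing at least one login, and it resolves the overlap between the published bands (high requires a minimum of 7 active weeks, medium covers 4-7) by classifying 7 or more weeks as high.

```python
import pandas as pd

# Hypothetical login summary; participant IDs and counts are illustrative only.
logins = pd.DataFrame({
    "participant_id": [1, 2, 3, 4],
    "active_weeks": [8, 5, 2, 7],  # weeks out of 8 with at least one programme login
})

def classify_engagement(active_weeks: int) -> str:
    """Map weeks of programme activity to the engagement bands used in the study."""
    if active_weeks >= 7:
        return "high"    # minimum of 7 weeks of programme activity
    if active_weeks >= 4:
        return "medium"  # active for 4-7 weeks
    return "low"         # active for < 4 weeks

logins["engagement"] = logins["active_weeks"].apply(classify_engagement)
print(logins)
```

Participants in the low band, like those missing an assessment, would then be excluded from the onward analysis described below.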
Motor symptoms were scored using the Burke-Fahn-Marsden Dystonia Rating Scale (BFMDRS) by two independent movement disorder specialists blinded to the participants' group allocation, with an average of these scores combined with the participant-completed BFMDRS Disability Scale. Detailed psychiatric evaluation was also conducted at each timepoint via the MINI International Neuropsychiatric Interview, with anonymised individual participant-level data available in the supplementary material (Supplementary Tables 1, 2 and 3). Statistical analysis was conducted using R version 3.6.3 [8]. Data on participant responses to iCBT are reported as frequencies. A two-way mixed ANOVA was used to determine differences between the iCBT and control groups over time. Participants were excluded from onward analysis if they demonstrated low programme engagement or if they did not complete all three assessments. Percentage change from baseline for each participant was also reported for each symptom outcome. Based on programme engagement and assessment completion, 7/10 iCBT and 8/10 control participants were included for onward analysis. No significant differences in depression or anxiety-related symptoms were observed between the two groups (Supplementary Table 4). There was a greater trend towards improvement in depression and anxiety scores in those receiving iCBT at 3 months, with this improvement sustained at 6 months for measures of depression (Fig. 2a & b). However, anxiety-related measures demonstrated mixed results, with Hamilton Scale for Anxiety (HAM-A) scores showing a sustained effect (Fig. 2c) but Generalised Anxiety Disorder-7 (GAD-7) scores returning towards baseline (Fig. 2d). Interestingly, although there was no significant difference between groups, there was a statistically significant improvement in depression scores between baseline and 3 months (Beck Depression Inventory (BDI) p = 0.04; Hamilton Scale for Depression (HAM-D) p = 0.008), and in anxiety-related symptoms measured by the HAM-A (p = 0.043). QoL measures (p = 0.416) and motor impairment (p = 0.880) demonstrated no statistically significant differences between the groups over the 6-month period. Individual-level analysis demonstrated improvements across multiple symptom groups in 6/7 of those receiving iCBT and 7/8 of those in the control group (Fig. 2e-k). At 3 months, percentage improvements across multiple domains were higher for those receiving iCBT compared with controls, for example BDI (iCBT 90.0%, control 60%; Fig. 2e). Discussion This study demonstrates the feasibility of iCBT in the management of anxiety and depression for those diagnosed with AOIFCD. Sixty percent of those receiving iCBT demonstrated high engagement, with 75% of feedback responses indicating its utility and 87.5% indicating they would continue to use the programme and/or try another iCBT programme. Although no statistically significant differences were observed, those receiving the iCBT intervention also demonstrated a trend towards improvement in anxiety and depression at 3 months post-enrolment, with sustained effects in some individuals at 6 months, supported by larger and more sustained individual-level percentage improvements. The lack of statistically significant between-group differences in reported symptoms is likely due to the small sample size of this study. Trends towards improvement were observed in depression and anxiety scores, although we did not see any improvement in QoL or motor impairment.
Interestingly, we saw general improvements in depression and anxiety between baseline and 3 months across both study groups. This may be due to volunteer bias, as participants may have been more likely to volunteer to take part in the study if they felt their psychiatric symptoms were worse than usual, or due to the COVID-19 pandemic, which occurred during the data collection phase of this study, with 12 participants undergoing baseline assessments prior to national lockdowns being introduced and the remaining 8 recruited following the introduction of lockdowns. In the general population, the COVID-19 pandemic had a detrimental effect on mental health [9], although this did seem to recover [10], and some evidence suggests that individuals with certain medical conditions experienced reduced psychological distress [11], which may be particularly relevant given the high reported rate of social anxiety amongst those with AOIFCD [12]. This may also have had an impact on QoL scores, particularly relating to the uncertainty around regular receipt of neurotoxin injections, recognised as providing a positive impact on QoL for individuals with AOIFCD [13], possibly providing some explanation for the variation observed across both groups on an individual level. Previous studies involving face-to-face CBT have shown sustained positive impact for those with AOIFCD beyond 6 months [4]. In this study, while measures of depression appeared to indicate sustained improvement, results from anxiety-focused questionnaires were more variable. The programme we used was relatively short, with only eight modules and no requirement to revise any sections after course completion; a longer programme, with refresher sessions built in, may therefore produce more consistent, sustained improvement. Although the overall response to iCBT was positive, some individuals did not engage, and a small proportion gave negative feedback. We also saw large variation in individual symptom effects, with some participants demonstrating very little or no symptom improvement. This suggests iCBT may not be an appropriate management strategy for all individuals with AOIFCD, with suitability dependent on additional factors not included in this study. Several factors have been identified as potential barriers to iCBT, including computer anxiety, self-stigma and lower perceived need [14]. In conclusion, iCBT provides a feasible option in the management of symptoms of anxiety and depression in those diagnosed with AOIFCD. Further investigation in larger samples is needed to fully determine the symptom effects of iCBT, identify those most likely to benefit, and address potential barriers to this intervention. Once refined, iCBT could provide an accessible, cost-effective treatment option that could be administered alongside current management strategies, maximising health care resources and addressing gaps in the current care model. Data availability statement The data that support the findings of this study are available from the corresponding author upon reasonable request. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Fig. 2.
Symptom effects in the iCBT and control groups for group-level BDI effects (A), group-level HAM-D effects (B), group-level GAD-7 effects (C), group-level HAM-A effects (D), individual-level BDI effects (E), individual-level HAM-D effects (F), individual-level GAD-7 effects (G), individual-level HAM-A effects (H), individual-level SF-36 effects (I), individual-level BFMDRS effects (J), and individual-level improvement from baseline for anxiety, depression, QoL, and motor scores (K). Positive (+) changes indicate improvement in outcome scores. Bold indicates ≥ 25% improvement from baseline at 3 months, whilst bold italics indicates a sustained improvement of ≥ 25% from baseline at 6 months as well as 3 months. Group-level graphs (A-D) show the mean raw score for each questionnaire, with error bars representing the standard error, and lower scores indicating an improvement in symptoms. Individual-level graphs (E-J) represent the percentage change from baseline for each individual participant, with positive changes indicating an improvement in symptoms. BDI, Beck Depression Inventory; BFMDRS, Burke-Fahn-Marsden Dystonia Rating Scale; GAD-7, Generalised Anxiety Disorder-7; HAM-A, Hamilton Scale for Anxiety; HAM-D, Hamilton Scale for Depression; iCBT, internet-based cognitive behavioural therapy; SF-36, Short Form-36 Health Survey.
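The individual-level panels described in this caption are based on percentage change from baseline, with positive values indicating improvement because lower questionnaire scores reflect milder symptoms. A minimal sketch of that transformation is shown below; it is illustrative only, and the long-format table and its column names are hypothetical.

```python
import pandas as pd

# Hypothetical long-format scores: one row per participant, instrument and visit.
scores = pd.DataFrame({
    "participant_id": [1, 1, 1, 2, 2, 2],
    "instrument":     ["BDI"] * 6,
    "visit":          ["baseline", "3m", "6m"] * 2,
    "score":          [20, 8, 10, 15, 12, 14],
})

wide = scores.pivot_table(index=["participant_id", "instrument"],
                          columns="visit", values="score")

# Positive percentage change = improvement, since scores fall as symptoms improve.
for visit in ["3m", "6m"]:
    wide[f"pct_improvement_{visit}"] = 100 * (wide["baseline"] - wide[visit]) / wide["baseline"]

print(wide.reset_index())
```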
First report of Aedes albopictus infected by Dengue and Zika virus in a rural outbreak in Brazil In Brazil, Dengue (DENV) and Zika (ZIKV) viruses are reported as being transmitted exclusively by Aedes aegypti in urban settings. This study established the vectors and viruses involved in an arbovirus outbreak that occurred in 2019 in a rural area of Espírito Santo state, Brazil. Mosquitoes collected were morphologically identified, sorted into samples, and submitted to molecular analysis for arbovirus detection. Phylogenetic reconstruction was performed for the viral sequence obtained. All 393 mosquitoes were identified as Aedes albopictus. DENV-1 genotype V was present in one sample and another sample was positive for ZIKV. The DENV-1 clustered with viruses that have circulated in previous years in large urban centers of different regions in Brazil. This is the first report of A. albopictus infected by DENV and ZIKV during an outbreak in a rural area in Brazil, indicating its involvement in arboviral transmission. The DENV-1 strain found in the A. albopictus was not new in Brazil, having previously been involved in epidemics related to A. aegypti, suggesting the potential of A. albopictus to transmit viruses already circulating in the Brazilian population. This finding also indicates the possibility that these viruses disperse across urban and rural settings, imposing additional challenges for the control of the diseases. Introduction Dengue virus (DENV) and Zika virus (ZIKV) are etiological agents of reemerging and emerging infectious diseases that constitute important global public health concerns [1]. Both are RNA viruses belonging to the Flaviviridae family, genus Flavivirus, that are transmitted to humans by the bite of infected mosquitoes of the Aedes genus (Stegomyia subgenus) [2]. Consequently, DENV and ZIKV present an epidemiological overlap, with occurrence influenced by similar environmental and socioeconomic characteristics, and share the same geographical distribution and seasonality [3]. In 2019, more than 100 countries were endemic for DENV and 87 had evidence of autochthonous transmission of ZIKV [4]. Brazil is currently the nation with the highest number of reported DENV [5] and ZIKV infections in the world [4]. There, Aedes aegypti is the only species proven to be involved in the transmission of these viruses [5][6][7]. Consequently, epidemics affect mainly Brazilian urban areas, due to the adaptation of the vector to this environment [7]. In Espírito Santo state, Brazil, this vector has been reported since the 1980s and is broadly dispersed across the state, mainly in areas with underbrush [26]. Ports of this state were presumed to be the first points of entry of A. albopictus into Brazil [13,27]. There, 73,998 suspected cases of DENV infection and 37 associated deaths were reported in 2019 up to the 39th epidemiological week, and 1,055 ZIKV cases were registered in the same period [28]. The introduction of an arbovirus into an area where vectors are present must be treated as a relevant event [2]. In March 2019, an outbreak of dengue-like illness with 20 suspected cases of DENV infection was reported in a rural area of Linhares municipality, in Espírito Santo state, Brazil. This study investigated this unexpected autochthonous rural occurrence, establishing the vectors and viruses possibly involved in the transmission.
Study location The mosquitoes were collected on a farm located 13.6 km from the center of Linhares municipality, in the north of Espírito Santo state, Brazil (19˚24'38.46'' S, 40˚10'13.22'' W). The farm has an area of 463.6 hectares, with 61 hectares of Atlantic Rainforest and approximately 214.6 hectares of cocoa plantation. Two lakes are adjacent to the farm: Lagoa Nova and Lagoa das Palminhas. Twenty brick houses were constructed at the farm headquarters, where 38 people live (Fig 1). Epidemiological scenario During the outbreak, there were 20 reports of exanthematous febrile illness. Four cases were tested for DENV infection by the public health authority: one was positive in the NS1 and viral isolation tests, confirming DENV-1 circulation, and three were positive in the ELISA IgM test. Ten cases were not tested but were confirmed by clinical-epidemiological criteria. No infection was attributed to ZIKV. Nevertheless, six tested cases were negative and their diagnoses remained inconclusive. None of the patients reported recent travel. The attack rate of this rural outbreak, considering all 20 symptomatic cases and 38 local inhabitants, was 52%. Insects sampling The mosquitoes were collected on 26, 27 and 28 March 2019 at different times of day using an entomological net (sweeping) and an insect aspirator (Castro model) [29] in the intradomicile, the peridomicile and in cocoa and rubber tree plantations located within a radius of 24 meters from the residences. Specimen identification The mosquitoes were identified based on morphological characters using the identification keys from Consoli and Oliveira (1994) [30] under a stereomicroscope (Olympus SZ61). Molecular analysis After the entomological identification, the mosquitoes were stored in cryogenic tubes with guanidine isothiocyanate to preserve the genetic material and allow verification of natural infection of the mosquitoes by the viruses. Mosquitoes were divided into subsamples with pools of approximately 10 to 15 individuals per tube according to the date and time of collection. Subsamples were identified with numbers and letters according to the sample code and the subsample number of origin; for instance, the second pool from sample code 1 was named 1b (Table 1). The mosquitoes were macerated in a FastPrep-24 5G Instrument (MP Biomedicals, Ohio, USA) in 1 mL of phosphate-buffered saline solution with 0.75% bovine albumin. Viral RNA was extracted using the QIAamp Viral RNA Mini Kit following the manufacturer's instructions (QIAGEN, Hilden, Germany). Molecular tests for arbovirus detection were performed by Real-Time PCR. Dengue detection was performed according to Huhtano et al. (2010) [31] and serotypes were determined using the serotype-specific primers and probes described by Callahan et al. (2001) [32]. Zika RNA detection was done using the protocol developed by Lanciotti et al. (2008) [33], employing their second set of primers and probe (ZIKV-1086/1162c). Finally, chikungunya Real-Time PCR was also done using the protocol described by Cecilia et al. (2015) [34]. The evaluation of the presence of human DNA in possibly engorged mosquitoes was performed using primers and probes directed to RNaseP, according to the protocol described by the World Health Organization [35]. Multiplex tiling PCR The extracted RNAs that were positive for DENV-1 and ZIKV were submitted to whole-genome amplification using a tiling, multiplex PCR approach that has been previously developed [36].
Briefly, the sample was converted to cDNA using random hexamers (Invitrogen; Carlsbad CA, USA) and ProtoScript II Reverse Transcriptase (New England BioLabs; Ipswich, MA, USA) according to the manufacturer's instructions. The cDNA was then amplified with a multiplex PCR assay designed from Primal Scheme using as input the "ZikaAsian" scheme for ZIKV (https://github.com/zibraproject) and the one described by Quick et al. (2017) for DENV-1 [36]. An 80% consensus generated from a reference alignment of DENV-1 sequences was used as input to Primal Scheme. PCR was performed using the Q5 High-Fidelity DNA polymerase (NEB). PCR products were cleaned-up using a 1:1 ratio of AMPure XP beads (Beckman Coulter, Brea, CA) and quantified using fluorimetry with the Qubit dsDNA High Sensitivity Assay on the Qubit 3.0 instrument (Life Technologies). Bioinformatics workflow Raw files were basecalled using Guppy software version 2.2.7 GPU basecaller (Oxford Nanopore Technologies), then demultiplexed and trimmed by Porechop version 0.2.4 (https:// github.com/rrwick/Porechop). Demultiplexed fastQ files were then inputted in CLC Genomics Workbench 6 (CLC Bio, Qiagen). First, the reads were trimmed to remove short (below 50) and low-quality reads (default parameter). Using a DENV-1 genome sampled in Brazil as reference (GenBank ID KP188543), reads were assembled using the following parameters: mismatch cost = 1, indels cost = 2, length fraction that must match reference = 0.8 and similarity fraction = 0.8. A total of 83.8% of reads were correctly mapped to the DENV-1 reference genome, and a consensus sequence of 8413 nucleotides was generated. The sequence obtained in the study was deposited in GenBank under the accession number MN567709. Phylogenetic analysis The envelope gene of the DENV-1 consensus obtained here by deep sequence was then aligned to globally sampled DENV-1 genomes from all five genotypes using the method implemented in the CLC Genomics Workbench. A dataset containing 93 sequences with 1,485 nucleotides (complete envelope region) was used to infer a maximum likelihood phylogenetic tree in PhyML implemented in SeaView v.4 [37], with 1,000 nonparametric bootstrap replicates. The GTR+I was used as the best nucleotide substitution model as chosen by the jModelTest [38]. Permissions The mosquitoes sampling was performed as part of the standard procedures of the Environmental Surveillance Service by the local health authority with landowner permission. The Information System for Notifiable Diseases (SINAN) was used to access the epidemiological information. Ethical approval was not required since all data accessed retrospectively were aggregated and anonymized. Results A total of 393 mosquitoes were collected in four samples, all identified as A. (Stegomyia) albopictus (Skuse, 1894) and females. ZIKV was amplified in the subsample 2f (subsample f of sample 2). The presence of the human Cytb gene in the subsample 2f suggests that one or more mosquitos in such a pool were engorged. Subsample 3i (subsample i of sample 3) was positive for DENV-1, and no human DNA was detected in it ( Table 2). Fourteen reads of the ZIKV gene were amplified and sequenced, resulting in a consensus sequence with 401 nucleotides, and 98.8% similarity with the ZIKV genome. Due to the low sequence coverage, phylogenetic analysis was not performed for ZIKV. The envelope gene of the DENV-1 was successfully amplified and sequenced. 
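The read filtering step in the bioinformatics workflow above (removal of reads shorter than 50 bases and of low-quality reads) was performed in CLC Genomics Workbench, a graphical tool. Purely as an illustration of an equivalent filter, here is a minimal Biopython sketch; the file names are placeholders and the mean-quality cutoff of Q7 is an assumption, since the study relied on the tool's default quality parameter.

```python
from statistics import mean
from Bio import SeqIO  # Biopython

MIN_LENGTH = 50     # reads below 50 bases are discarded, as in the described workflow
MIN_MEAN_Q = 7.0    # assumed cutoff; the study used the CLC default quality setting

def keep(record) -> bool:
    """Keep a read if it is long enough and its mean Phred quality is acceptable."""
    quals = record.letter_annotations["phred_quality"]
    return len(record.seq) >= MIN_LENGTH and mean(quals) >= MIN_MEAN_Q

reads = SeqIO.parse("demultiplexed_reads.fastq", "fastq")       # placeholder input
n_kept = SeqIO.write((r for r in reads if keep(r)), "filtered_reads.fastq", "fastq")
print(f"{n_kept} reads retained for reference mapping")
```

The retained reads would then be mapped against the DENV-1 reference genome (GenBank KP188543) with the match and similarity thresholds reported above before calling the consensus sequence.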
The reconstructed phylogenetic tree shows that the DENV-1 found in the study belongs to genotype V (Fig 2, clade A). This virus was closely related to strains identified in other Brazilian states in previous years: São José do Rio Preto, in 2012 and 2013, São Paulo in 2013, Goiânia in 2013, Rio de Janeiro in 2010, and Pernambuco in 2010, with 100% bootstrap support. This clade shares a most recent common ancestor (MRCA) with a sample obtained in Réunion, an island in the Indian Ocean. Another two Brazilian clades were identified: clade B, basal to all other DENV-1 genotype V strains (except for the clade previously mentioned and four samples from Puerto Rico), and clade C, closely related to Colombian and Venezuelan strains (Fig 2). Discussion This study is the first to report A. albopictus infected by DENV and ZIKV during an outbreak of a dengue-like illness in a rural area in Brazil. In Brazil, A. aegypti was previously the only species proven to be involved in the transmission of these viruses, which are considered typical of urban settings. Despite a tendency for DENV to expand into the countryside, explanations for this phenomenon have centered on the establishment of A. aegypti in smaller cities [39]. Differential diagnosis between DENV and ZIKV infections is challenging due to their similar signs and symptoms [40]. In the rural area under investigation, 20 people presented an exanthematous febrile illness and four had laboratory confirmation of DENV infection. One infection by DENV-1 was identified, and the same serotype was found in A. albopictus. Despite the absence of confirmed cases of ZIKV infection in humans, six human cases of febrile illness were considered not to be caused by DENV and could plausibly have been a result of ZIKV infection or of diseases with similar symptoms, such as those caused by other arboviruses not investigated, e.g. Chikungunya, Mayaro fever and yellow fever. A subsample of mosquitoes infected by ZIKV was engorged, indicating the potential involvement of A. albopictus in silent transmission of this virus, although it is not possible to know whether the virus originated from the human blood meal or from the mosquito itself. This finding therefore requires further investigation, both because the presence of the virus in the mosquito alone is not enough to confirm its ability to transmit the pathogen to humans and because this vector also feeds on non-human hosts in the sylvatic environment. Despite the lack of molecular detection of ZIKV or other arboviruses in the studied human cases, the assumption that A. albopictus participated in the transmission is credible, given the absence of A. aegypti even after extensive entomological sampling. A. albopictus normally feeds in the daytime and outdoors but can rest and feed indoors [41]. In the study setting, the adult mosquitoes were present during daytime and were found in all areas surveyed, corroborating the aggregated distribution pattern of the population, with mosquitoes concentrated in nearby areas [16]. The DENV-1 genotype V found in the infected A. albopictus was also detected in previous Brazilian studies [42][43][44]. The strain from the study setting is closely related to viruses that have circulated previously in large urban centers of three different regions in Brazil (Southeast, Northeast, and Midwest), showing the capacity of the virus to disperse from urban to rural areas. In addition, it demonstrates that the strain found in the A. albopictus was not new in Brazil, having previously been involved in epidemics related to A.
aegypti, suggesting the potential of A. albopictus to transmit viruses already circulating in the Brazilian population. The DENV-1 found in the study had a different origin from a strain identified in 2000 in Espírito Santo state, evidencing multiple introductions of this virus into the state. The clade that contains the strain identified in the A. albopictus clustered with a virus from Réunion, an island in the Indian Ocean, while the DENV-1 virus circulating in 2000 clustered with viruses from other South American countries. Besides these two clades, the results suggest another clade of DENV-1 in Brazil, closely related to strains from Caribbean countries. This corroborates a recent study on the phylogenetic evaluation of DENV-1 in the country [44], which suggests at least three clades of this virus in the Brazilian territory. The introduction of DENV-1 into the rural area under evaluation was related to an outbreak with transmission potential, raising questions about possible similar occurrences in Brazil in the future. In this country, previous studies have identified field-collected immature forms of A. albopictus infected by all DENV serotypes [6,17,19,[45][46][47], and by ZIKV [48]. Despite variation in the vector competence of A. albopictus according to its geographic origin [49], in other countries it is the only species involved in arbovirus transmission [19,50], including endemic occurrence in rural areas [5], reinforcing the plausibility of a similar event in Brazil. A possible scenario of A. albopictus involvement in DENV and ZIKV transmission in Brazil raises concerns about increased transmission, the establishment of a bridge between sylvatic, rural and urban cycles [15,51,52], and the maintenance of the viruses in the environment during non-epidemic periods [5]. Given some characteristics of this vector, such as reports of its resistance to some insecticides [9,11,12], its use of natural and artificial breeding sites [24], and its broad distribution and ecological plasticity, a possible "ruralization" of DENV and ZIKV may impose additional challenges for the control of these viruses. This study presents some limitations: it was not possible to identify whether the mosquitoes positive for DENV and ZIKV were collected in the intradomicile, the peridomicile, or in the cocoa and rubber tree plantations, so no inference could be made about the collection site. The insect sampling, restricted to daytime, excluded mosquitoes with nighttime behavior that could be competent vectors for arboviruses, such as Culex. In addition, it was not possible to evaluate the origin of the infection of the mosquitoes or the similarity of the virus found in the study to those from the confirmed human infections. The study presents two concomitant events, infected humans and infected mosquitoes, that might be connected, but it cannot affirm with total certainty the involvement of A. albopictus in the transmission. Therefore, additional investigations involving the human hosts affected by this outbreak need to be conducted. Nevertheless, the study findings are relevant to the adoption of actions for prevention and control. A. albopictus must be considered in areas at risk of arbovirus transmission and should be included in public health programs, especially those focused on epidemiological and entomological surveillance, including in rural areas.
From He-Cession to She-Stimulus? The labor market impact of fiscal policy across gender Men, especially those that are young and less educated, typically bear the brunt of recessions because of the stronger cyclicality of their employment and wages relative to women’s. We study the extent to which fiscal policy may offset or worsen these asymmetric effects across gender. Using micro-level data for the U.S. from the Current Population Survey, we find that the effects of fiscal policy shocks on labor market outcomes depend on the type of public expenditure. Women benefit most from increases in the government wage bill, while men are the main beneficiaries of higher investment spending. Our analysis further reveals that the fiscal component most efficient at closing gender gaps is least suitable for offsetting inequitable business cycle effects across other socioeconomic dimensions. Introduction Despite substantial progress in the labor market fortunes of women over recent decades, gaps in wages and employment rates between male and female workers remain significant. In addition, gender differences in industry composition can gener-We are grateful to Evi Pappa, Leonardo Melosi, Axelle Ferriere, Juan Dolado and Alexandra Fotiou for their helpful comments. We also thank Dimitrios Bermperoglou for sharing his codes with us. The views in this paper are solely the responsibility of the authors and should not be interpreted as reflecting the views of the Swiss National Bank or the Bank of Canada. ate cyclical fluctuations in labor market gaps, as men tend to be employed in sectors more exposed to business cycles. 1 Notably, young, less-educated and blue-collar men are particularly strongly affected. 2 The role of fiscal policies in reducing inequalities has recently received increasing interest in the literature, with less attention paid to the gender dimension. Evaluating the ability of government spending to address both policy goals, i.e., to reduce inequalities not only within gender (to assist crisis-hit male groups) but also between genders (to close gender gaps), is important to shed light on potential trade-offs involved. 3 We find that these trade-offs depend crucially on the type of public expenditure considered. Using micro-level data for the U.S. from the Current Population Survey (CPS), our study provides policy-making insights on the importance of the composition of government expenditure for understanding the impact of fiscal shocks on labor market outcomes across gender. We also examine the impact on demographic subgroups to assess whether fiscal expansions that close gaps can simultaneously offset inequitable business cycle impacts that particularly affect some categories of male workers. Our main findings can be summarized as follows. First, the composition of fiscal shocks matters. Spending on the government wage bill narrows gender gaps in wages and employment rates, while government purchases from the private sector and investment expenditure tend to stimulate men's wages relatively more than women's. These results are likely driven by spending components that target specific occupations and sectors which differ in their gender composition. Second, promoting gender equality through fiscal expansions is not fully compatible with offsetting other types of inequalities. The spending component that best closes gender gaps has adverse effects on labor market outcomes of cyclically vulnerable male subgroups: young, less-educated and blue-collar workers. 
Similarly, investment spending, which fosters employment of these crisis-hit men, is not able to reduce gender inequalities but rather contributes to widening them. Government spending can impact labor market outcomes unequally across gender for four main reasons. First, because men and women sort into different occupations, their labor demand will shift to different extents following fiscal shocks. Such shifts will depend on which type of government spending is boosted. This motivates us to distinguish between different fiscal components in our analysis. Second, since women are more mobile across industries and occupations, 4 they may be the main beneficiaries of higher wages and expanded employment opportunities after a fiscal expansion. Third, there is solid empirical evidence that female labor supply is relatively more 1 Men incurred around three-quarters of the net job losses during the Great Recession, with similar magnitudes during previous downturns (Wall and Engemann 2009). This can be ascribed partly to men's employment in more cyclical industries such as manufacturing and construction. See for instance Clark and Summers (1981), Solon et al. (1994) and Hoynes et al. (2012). 2 Figure 13 shows unemployment rates for men, women and vulnerable male subgroups for the period 1979-2019. For a more detailed analysis see Bredemeier et al. (2017b). 3 Note that fiscal policy may have long-term effects on labor market outcomes (see, e.g., Fatás and Summers 2018; Saez et al. 2019) and therefore on gender gaps. Notably, Saez et al. (2019) find evidence of stronger hysteresis effects of employment subsidies on women than men. Fiscal policy may also have a lasting impact on female labor force participation with family-friendly policies (see, e.g., Blau and Kahn 2013). 4 See, e.g., Shin (1999). elastic than male labor supply. 5 Consequently, female employment may respond more strongly than male employment to fiscal shocks. Fourth, women taking up jobs may hire (usually female) caregivers for children and elderly dependents, inducing secondround employment and wage effects. These insights can be valuable for macroeconomists and policy makers. First, our results help to gauge to what extent government expenditure is able to "assist those most impacted by the recession," which was the explicit purpose of the American Recovery and Reinvestment Act of 2009. 6 Second, our analysis is insightful for policy makers whose goal is to promote female employment and gender equality, independently of the cycle. 7 Conversely, this paper highlights the potential damaging effect that cutting government expenditure, especially the wage bill, may have by widening existing gender gaps. Hence, we underline the gender non-neutrality of budgetary decisions and substantiate the importance of implementing "gender budgeting" as suggested by the International Monetary Fund (2017) and the European Parliament (2015). Third, our analysis hints at the importance of encouraging women's labor force participation as this may increase the effectiveness of fiscal policy as an aggregate stabilization tool. To measure the effects of fiscal policy shocks on gender gaps in the labor market, we estimate several vector autoregressive models using Bayesian estimation techniques. Following Mountford and Uhlig (2009), we identify the fiscal shocks using an agnostic sign restriction approach. 
The main advantage of this identification strategy is that it allows us to eliminate the confounding influence of other macroeconomic shocks: namely, business cycle, monetary policy and tax revenue shocks. We examine the impulse response functions (IRFs) of gender gaps in wages and employment rates to different types of government spending shocks. Our study encompasses the analysis of two dimensions of heterogeneity. First, we investigate whether the effects of fiscal policy shocks on gender gaps differ depending on the type of public expenditure. Second, we explore how the effects vary across male and female workers with different characteristics, such as age, education and occupation. This paper relates to a strand of the literature that reports heterogeneous effects of fiscal policy across households with different characteristics (such as Giavazzi and McMahon 2012;Misra andSurico 2014 andAnderson et al. 2016), and across industries (notably, Nekarda andRamey 2011 andBredemeier et al. 2020). Several studies have emphasized the crucial role of industry composition in shaping gender differences in labor market outcomes, including Hoynes et al. (2012), Olivetti and Petrongolo (2014) and Bredemeier et al. (2017b). However, despite a growing interest in the evolution and the determinants of gender gaps in the labor market, 8 the literature 5 See, e.g., Cogan (1981), Eckstein and Wolpin (1989), van der Klaauw (1996) and Francesconi (2002). 6 This stimulus package, worth $787 billion, consisted of a mix of tax credits, spending on social welfare, consumption spending (mainly on education and healthcare) and investments in infrastructure and the energy sector. 7 Our results suggest that the impact of fiscal policy on gender gaps can be quite persistent. In addition, government expenditure shocks show a high degree of persistence. We find estimates of the autocorrelation coefficients of the cyclical component for government spending instruments that are larger than 0.9 and highly statistically significant. 8 See, e.g., Blau and Kahn (2000), Blau and Kahn (2017), Ngai and Petrongolo (2017) and Albanesi anḑ Sahin (2018). on the impact of fiscal policy on gender equality is scarce. A few recent studies document that fiscal expansions stimulate primarily female employment, in particular Bredemeier et al. (2017b) and Akitoby et al. (2019). These papers focus on the effects of total government spending. Our main contribution to the existing literature is to explore the effects of various components of public expenditure. We argue that who benefits from fiscal stimuli depends on the type of expenditure under consideration. We also analyze labor market outcomes for male subgroups that are hurt most during recessions to better understand the trade-offs involved when attempting to close gender gaps. Furthermore, our identification strategy is able to better isolate the variations in fiscal policy variables from automatic responses to other macroeconomic shocks. The remainder of this paper is structured as follows. Sections 2 and 3 describe the data and the econometric approach. Results are presented in Sect. 4 and robustness checks and extensions are described in Sect. 5. Section 6 concludes by offering directions for future research. The appendices contain some stylized facts about the components of government expenditure and gender compositions across occupations and sectors. Furthermore, we provide a description of the data and of the algorithm used for estimating the impulse response functions. 
Data We construct labor market series using micro-level data from the Centre for Economic Policy Research (CEPR) extracts of the CPS Merged Outgoing Rotation Groups. 9 We build quarterly series for real hourly wages and employment rates for each gender and for subgroups most exposed to cyclical fluctuations, i.e., those (i) without college education, (ii) aged 16 to 30 and (iii) in blue-collar occupations (mainly production, construction, transport, and installation). 10 Following the approach described in the seminal paper by Deaton (1985), we build pseudo-panels by aggregating individual observations into pseudo-cohorts of workers with similar characteristics and computing averages for each period. 11 We restrict the sample to full-time workers aged 16-64, i.e., who have worked at least 35 hours a week. 12 Self-employed workers are excluded. 13 All variables are seasonally adjusted by X-12 ARIMA. Data on fiscal vari-9 The CPS is the source of official US government statistics on employment, wages and unemployment, with interviewed households selected to be representative of the US population. 10 To build occupational employment groups, we use the conversion factors from the U.S. Census Bureau as the occupation and industry codes in the CPS were subject to several revisions. As defined in Bredemeier et al. (2020), blue-collar occupations include construction and extraction occupations; installation, maintenance, and repair occupations; production occupations; and transportation and material moving occupations. Note that these occupations have a female share of less than 50% for the whole sample period. 11 We compute quarterly averages of monthly observations. 12 In Sect. 5, we also conduct the analysis for non-married individuals, to exclude partner effects, and for part-time workers. 13 We have excluded the self-employed since their wages, employment status and hours worked are difficult to measure accurately. As Hamilton (2000) points out, earnings of business owners are less reliable because of tax incentives to under-report income. Moreover, other forms of "indirect" compensation, such as pensions and health insurance contributions that are paid for employees by the employer, are not received by the self-employed, making it hard to compare incomes. ables, GDP and inflation are from the U.S. Bureau of Economic Analysis, on civilian population from the U.S. Bureau of Labor Statistics, and on the federal funds rate from FRED. Details of sources and definitions of the data are provided in Appendix E. Figure 12 shows the historical evolution of each fiscal component between 1979 and 2019. Total government spending consists of government consumption expenditures and gross investment. 14 In turn, consumption expenditures include compensation of general government employees (the wage bill), consumption of fixed capital, and purchases of intermediate goods and services from the private sector. While real government spending per capita has nearly quintupled since the start of our sample, the relative shares of its components have remained fairly stable, except for purchases from the private sector, which have grown from 21% in 1979 to 28% in 2019. The wage bill is the largest component, with a share of 45% of total government expenditure on average over 1979-2019, while investment spending accounts for about 19% of total government spending. 
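To illustrate the series construction described above, the sketch below builds quarterly gender averages and the log wage gap from CPS-style microdata. The data frame and its column names are hypothetical, and the sketch omits survey weights and the seasonal adjustment (X-12 ARIMA) applied in the paper; it is an illustration of the sample restriction and aggregation, not the authors' code.

```python
import numpy as np
import pandas as pd

# Hypothetical CPS-style extract: one row per worker observation.
cps = pd.DataFrame({
    "date":   pd.to_datetime(["2019-01-15", "2019-02-15", "2019-01-15", "2019-03-15"]),
    "female": [0, 1, 1, 0],
    "age":    [35, 42, 29, 55],
    "usual_hours": [40, 38, 36, 45],
    "real_hourly_wage": [28.0, 22.5, 19.0, 31.0],
})

# Sample restriction used in the paper: full-time workers (>= 35 hours) aged 16-64.
sample = cps[(cps["usual_hours"] >= 35) & cps["age"].between(16, 64)].copy()
sample["quarter"] = sample["date"].dt.to_period("Q")

# Pseudo-cohort averages: quarterly mean real wage by gender.
avg_wage = sample.groupby(["quarter", "female"])["real_hourly_wage"].mean().unstack("female")

# Gender wage gap: log male wage minus log female wage (column 0 = male, 1 = female).
wage_gap = np.log(avg_wage[0]) - np.log(avg_wage[1])
print(wage_gap)
```

The employment-rate gap would be built analogously, as the difference between quarterly male and female employment rates.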
Gender gaps have narrowed over the sample period, especially during the 1980s, driven by the rise in female labor force participation; but they remain significant. In 1979, full-time female workers earned around 40% less per hour than male workers and their employment rate was 27% lower than men's. In 2019, gaps in wages and employment were about 18% and 15% respectively (see also Appendix A). VAR model To measure the effects of different types of government expenditure on gender gaps in the labor market, we estimate several structural vector autoregressive (VAR) models with up to nine endogenous variables. In our baseline specification, the vector of endogenous variables first includes the three fiscal components of interest: namely, the log of real per capita government expenditure on goods and services from the private sector, the log of real per capita government investment expenditure and the log of real per capita expenditure on the government wage bill. 15 Next, the variables included are the log of real per capita net (of transfers) tax revenue, the log of real per capita GDP, the labor market gap variable, inflation and the federal funds rate. The labor market gap variable alternates between the gender gap in (i) hourly wages and (ii) employment rates. The gender wage gap is measured as the difference between the log of real male wages and the log of real female wages. The gender gap in employment rates is defined as the difference between male and female rates. 14 Consumption expenditures consist of spending by the government to produce and provide services to the public, such as national defense and public school education. Gross investment consists of expenditure by the government in structures that directly benefit the public, such as highways, as well as in equipment, software and R&D that assist government agencies in their production activities, such as purchases of military hardware. 15 Results for government expenditure on purchases of goods and services are similar to those obtained using non-wage government consumption, i.e., the sum of expenditure on purchases of goods and services and expenditure on fixed capital. To control for fiscal foresight, we include eight lags of an exogenous war dummy following Ramey (2011). The VAR models are estimated with two lags, on quarterly data from 1979Q1 to 2019Q4. 16 Following Mountford and Uhlig (2009), we include neither a constant nor a time trend. 17 In addition, we repeat the above analysis including male and female series of log real wages (respectively, employment rates) instead of gender gaps. Estimating the effects of fiscal shocks on male and female labor market outcomes separately allows us to better understand the mechanism behind changes in gender gaps and to draw finer policy conclusions. Identification Following Mountford and Uhlig (2009) Arias et al. (2018) and Bermperoglou et al. (2017) among others, we identify the fiscal shocks using an agnostic sign restriction approach that sets a minimum number of restrictions on impulse responses, while controlling for other macroeconomic shocks. These identifying sign restrictions are summarized in Table 1. The shocks are identified sequentially, as in Mountford and Uhlig (2009), Arias et al. (2018) and Bermperoglou et al. (2017). First, we identify a generic business cycle shock that leads to a positive comovement between output and government net tax revenue for four quarters. Second, we follow Bermperoglou et al. 
(2017) and identify a monetary policy shock by combining zero and sign restrictions. In particular, the federal funds rate should react positively and contemporaneously to output and inflation deviations only, to approximate the Taylor rule. 18 We also impose orthogonality between the monetary policy shock and the business cycle shock. Third, the government revenue shock is identified as a shock that raises net tax revenues for four quarters and that is orthogonal to the monetary and business cycle shocks. Lastly, we identify shocks to government goods purchases, a government investment shock and a government wage bill shock sequentially. We impose that these shocks increase the corresponding fiscal variable for four quarters while being orthogonal to the business cycle, monetary policy and other fiscal shocks. Orthogonality to the other fiscal shocks ensures that our results are driven exclusively by the fiscal instrument of interest and not by any other expenditure component. Following Uhlig (2005), we estimate the model using a Bayesian approach with flat priors for model coefficients and the covariance matrix of shocks (see Appendix D). The estimations are based on 400 draws from the posterior distribution of VAR parameters and 4000 draws of orthonormal matrices. We compute the median, the 68% and the 90% confidence bands of impulse responses to a shock that raises the government expenditure component of interest by 1% on impact. 16 The starting date of the sample is constrained by the availability of CPS MORG data. Results are qualitatively similar when the sample ends in 2007Q4. 17 We checked that the results are qualitatively robust when a constant and a time trend are included. Results This section reports our results for different government spending components. Section 4.1 looks at gender gaps among all full-time employees aged 16-64, and Sect. 4.2 analyzes heterogeneities across subgroups, obtained by further splitting the sample by age, education and occupation. The purpose of this exercise is threefold. First, it helps us to gain insights into whether spending components have asymmetric effects across men and women. Second, it allows for a better assessment of how to use fiscal policy to offset inequitable business cycle effects across other socioeconomic dimensions. Third, the analysis highlights trade-offs involved when attempting to close gender gaps since demographic subgroups may not react equally to fiscal stimuli. Overall, we find that gender gaps close most strongly following a shock to the government wage bill. 19 However, this spending component amplifies the particularly adverse effects experienced by certain male subgroups during recessions. Figure 1 shows the responses of gender gaps in wages (first row) and employment rates (second row) to shocks that raise the government wage bill, government purchases from the private sector and government investment by 1% on impact, respectively. The effects of an increase in government purchases and investment expenditure on gender gaps are small in magnitude and mostly not statistically significant. In contrast, a positive government wage bill shock significantly reduces both wage and employment gaps between genders. Exploring the effects of a wage bill shock on full-time men and women separately reveals that the reduction in wage gaps is driven by a significant increase in female wages and a fall in male wages (Fig. 4). 
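To make the identification step more concrete, a deliberately simplified sketch follows. It rotates Cholesky-orthogonalised impulse responses from a reduced-form VAR by random orthonormal matrices and keeps only the draws satisfying a single sign restriction; the paper's actual algorithm additionally draws from the Bayesian posterior of the VAR parameters, imposes the full set of restrictions in Table 1 across business cycle, monetary, tax and spending shocks, and enforces orthogonality sequentially, none of which is replicated here. The data, variable names and the single restriction are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)

# Simulated stand-in for the endogenous variables; in the paper the fiscal
# components come first and the labor-market gap variable is among the rest.
data = pd.DataFrame(rng.normal(size=(200, 4)),
                    columns=["gov_spending", "net_taxes", "gdp", "gender_gap"])

res = VAR(data).fit(2)               # reduced-form VAR with two lags
chol_irfs = res.irf(20).orth_irfs    # shape (horizon+1, variable, shock), Cholesky-based

def draw_orthonormal(k, rng):
    """Haar-distributed orthonormal matrix via QR decomposition of a Gaussian draw."""
    q, r = np.linalg.qr(rng.normal(size=(k, k)))
    return q * np.sign(np.diag(r))

accepted = []
for _ in range(2000):
    rotation = draw_orthonormal(4, rng)
    candidate = chol_irfs @ rotation
    # Keep the rotation if shock 0 raises the spending variable for four quarters,
    # mimicking the "fiscal variable up for four quarters" restriction.
    if np.all(candidate[:4, 0, 0] > 0):
        accepted.append(candidate)

if accepted:
    median_irf = np.median(np.stack(accepted), axis=0)
    print(f"{len(accepted)} accepted draws; median IRF array shape {median_irf.shape}")
```

In the paper, this acceptance step is nested inside 400 posterior draws of the VAR parameters with 4,000 orthonormal draws each, and the reported bands are the 68% and 90% intervals of the accepted impulse responses.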
In addition, the employment gap closes since employment falls among men but remains unchanged among women. Expansions in government purchases and investment spending lead to a rise in male wages in the short run, leaving female wages unchanged, and a reduction in employment rates for both genders in the medium run. The effect on gender gaps differs across fiscal components To start with, note that, overall, these government spending shocks tend to have positive effects on wages (for women in the case of a wage bill shock, for men in the case of goods purchases and investment spending shocks) but negative effects on employment. As Finn (1998) showed, an increase in the number of public employees is predicted to crowd out private employment. Increases in public wages or employment also put upward pressure on private sector wages, inducing a negative labor demand effect. In addition, if the fiscal expansion is financed with increased labor income taxes, workers may reduce their labor supply or ask for higher pre-tax real wages. These results are also in line with empirical findings reported in Alesina et al. (2002) and Ramey (2013). Alesina et al. (2002) show that increases in public spending raise labor costs and lead to declining profits. Ramey (2013) provides evidence that increases in government purchases of private goods and in the government wage bill have negative effects on private activity and employment. Next, we observe differences in labor market outcomes across gender depending on the type of fiscal shock considered. Women benefit most from an increase in the wage bill, while men are the main beneficiaries of expansions in government purchases and investment expenditure. These findings are likely driven by spending components that target specific sectors and occupations which differ in their gender composition. Hence, men and women face different shifts in labor demand. Government purchases from the private sector and investment expenditure mainly target manufacturing, construction and transportation industries, which are male dominated. In contrast, increases in public sector employment or wages should benefit women disproportionately as they are over-represented in this sector, with an average share of 53% during our sample, as compared with 42% in the private sector. Thus, the reduction in the gender wage gap after a shock to public sector wages is partly a mechanical outcome. In addition, increasing the public sector head count will attract women disproportionately since they are offered a relatively higher public sector wage premium compared with men (see Figs. 14 and 15). 20 Moreover, women in the private sector may find it easier to transition to the public sector since, unlike men, they mirror the occupational structure of their public sector counterparts. In both sectors, a disproportionate share of women work in healthcare, education and administrative jobs (see Figs. 16 and 17). 21 Thus, following a wage bill shock, the negative employment effects described above for the whole population are offset for women but not for men. In addition, women taking up government jobs may hire (usually female) caregivers for children and elderly dependents, which may induce second-round employment and wage effects. 22 Furthermore, as documented in Bredemeier et al. (2020), expansions in government consumption-most of which are dedicated to the government wage bill-lead to a shift from blue-collar to pink-collar employment. 
23 The authors show that this heterogeneity in occupational employment dynamics can be explained by differences in substitutability between labor and capital services across occupations. 24 The shift from blue-to pink-collar jobs in the private sector may also occur as a result of public-sector outsourcing. A shock to the wage bill may therefore increase demand for female-dominated jobs in the private sector, such as in healthcare, education and 20 In 2015, full-time median earnings of women amounted to only 75.8% of male earnings in the private sector but 84% in state and local government jobs and 88.5% at the federal government level (American Community Survey, see Fig. 15). Besides the higher wage premium, women's stronger appeal of the public sector may also be driven by higher levels of job protection and family-friendly work arrangements (see Kolberg 1991; Gomes and Kuehn 2019). 21 In contrast, men do not perform the same jobs across the two sectors. Public sector male employees mainly cluster in protective services and education, while men in the private sector are over-represented in construction, installation, production and transportation occupations. 22 For instance, Connelly and Kimmel (2003) find that, independently of marital status, female labor supply has a positive effect on demand for formal childcare. 23 See also Bredemeier et al. (2017a). 24 As discussed in Bredemeier et al. (2020), capital services and blue-collar labor, which includes mainly manual tasks, tend to be close substitutes. In contrast, pink-collar labor, which usually requires more social skills, is a poor substitute for capital services. As labor supply is less elastic than capital services, demand for pink-collar workers rises disproportionately after a fiscal expansion. administration. 25 In contrast, these changes in labor demand contribute to amplifying the adverse effects on wages and employment for male workers, in particular after a wage bill shock. To summarize, we find that men and women are affected differently by the three types of public expenditure. Our results suggest that increases in the government wage bill, which is the largest component of government expenditure, affect women more favorably than men. Gender-specific sectoral and occupational sorting may explain heterogeneous responses to different fiscal shocks. In the next section, we split workers further into demographic subgroups to explore how crisis-hit men respond to increases in these different types of government expenditure. Female-friendly spending harms cyclically vulnerable men It is well documented that business cycle fluctuations affect male workers disproportionately, especially those who are younger, less educated or work in blue-collar occupations. 26 Our analysis reveals that the same groups of men are hurt by shocks to the government wage bill-the instrument that best closes gender gaps. Figures 2 and 3 show the impulse response functions of wages and employment rates, respectively, to shocks in each spending component for the male subgroups hit hardest during recessions, i.e., the young, the less educated and blue-collar workers. Spending on the government wage bill (first column of Fig. 2) strongly reduces men's wages in all subgroups in the medium run and has no statistically significant impact on their employment (first column of Fig. 3). In contrast, shocks to both government purchases and investment spending lead to an increase in wages among these subgroups in the short run. 
Furthermore, investment spending raises employment significantly among all these categories of male workers, and government goods purchases stimulate employment of blue-collar men. As discussed previously, men are particularly negatively affected by increases in the government wage bill, since they cannot easily move to the public sector and since demand for their labor in the private sector is adversely affected by the shift from blueto pink-collar employment. This may explain why male wages decrease substantially among these subgroups after a wage bill expansion, while female wages remain unaffected or increase, resulting in narrower gender gaps (see Fig. 5). In contrast, shocks to investment and goods purchases strongly stimulate demand in manufacturing, construction, installation and transportation sectors. In particular, manufacturing firms receive the largest share of government contracts. 27 The fact that young, less-educated and blue-collar male workers are over-represented in these sectors may explain why they strongly benefit from these fiscal shocks. Conversely, women belonging to these Fig. 2 IRFs of wages for male subgroups to shocks in different spending components. Notes Dashed lines and shaded areas indicate the 90% and the 68% confidence bands respectively subgroups benefit less from increases in government investment, which significantly raises gender gaps (see Fig. 7). Overall, we find that the fiscal instrument that is most useful for closing gender gaps in the whole population-the government wage bill-decreases wages of crisisprone men. Conversely, government investment spending, which benefits cyclically vulnerable men, is less suitable for tackling gender inequalities. Robustness checks and extensions We consider several robustness checks and extensions of our main results. These include: (i) using an alternative identification scheme-namely, a recursive (Cholesky) identification; (ii) restricting the sample to part-time workers; and (iii) restricting the sample to unmarried workers. Figure 9 displays the impulse responses of gender gaps for the total population that result from identifying fiscal shocks recursively using a Cholesky decomposition. The ordering of endogenous variables is the same as in the baseline VAR specification. Overall, the results are qualitatively and quantitatively similar to our baseline except that the wage gap no longer closes after an increase in government purchases from the private sector. Conducting our analysis for part-time workers (i.e., working less than 35 hours per week) reveals substantial differences (see Fig. 10) compared with our baseline sample consisting of only full-time workers. 28 The wage and employment gaps now increase after a shock to the government wage bill. A potential explanation is that part-time private-sector women work in different occupations than their public-sector counterparts, while the occupational structure for full-time women is similar in private and public sectors. For example, female part-time workers are less likely to work in administrative jobs (see Fig. 18); thus, they benefit less from wage bill expansions because moving to the public sector is more difficult. The opposite holds for part-time male workers. They are more likely than full-time male employees to hold administrative jobs (see Fig. 19), allowing them to transfer to public-sector jobs as the wage bill rises. 
Another interesting difference is that the effects of government spending shocks on employment are more positive for part-time than for full-time workers. This could be explained by part-time workers being hired to meet temporarily higher labor demand after a spending shock. Lastly, in order to verify robustness and exclude partner effects, we re-run all estimations for full-time non-married workers. Our baseline results are largely confirmed (see Fig. 11). In particular, the insight that the government wage bill is the most powerful fiscal instrument to close gender gaps remains intact. Conclusions In this paper, we analyzed the labor market effects of fiscal policy shocks from a gender perspective, with special emphasis on the role of distinct spending components. We find that they affect men and women differently and that the effect also varies by demographic subgroup. Thus, policy makers may alter the composition of expenditures according to their objectives. If their goal is to decrease gender gaps in wages and employment, expanding the government wage bill is most appropriate. However, if the fiscal authority aims to assist young, less-educated and blue-collar men who are most affected by negative business cycle shocks, investment spending is the preferable fiscal tool. Hence, these two goals are not perfectly aligned. Our analysis points at the importance of fostering cross-occupational mobility. If men were able to easily move to less cyclical jobs or to the public sector when the need arises, they could be better sheltered from adverse business cycle and fiscal policy effects. Moreover, our findings indicate potentially large costs of austerity for women's wages and employment, especially in case of cuts to the government wage bill. However, by estimating a linear VAR we are not able to assess whether the effects of austerity on gender gaps differ from those of fiscal expansions. Knowledge of which spending components to cut in times of tight budgets without widening gender gaps is still wanting. 29 Future research should also explore whether fiscal policy has asymmetric effects on men and women depending on the state of the cycle. This would allow for a better assessment of how to offset inequitable effects during economic slumps. 30 Furthermore, investigating the gender implications of financing fiscal expansions through tax or deficit increases is a relevant policy issue that needs to be addressed. Finally, since women respond more positively to the major spending component, that is, the government wage bill, encouraging their labor force participation may enhance the efficiency of fiscal policy as a stabilization tool-a conjecture for which further evidence is needed. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. 
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

A Tables

See Table 2.

D VAR estimation method and algorithm for computing impulse response functions

The procedure to identify the shocks follows the approach described in Arias et al. (2018) to make independent draws from the posterior distribution of structural parameters conditional on the sign and zero restrictions. The VAR model can be written in the following general form:

y_t' A_0 = x_t' B + ε_t',

where y_t is the vector of n endogenous variables, ε_t an n × 1 vector of exogenous structural shocks, and x_t collects the lagged endogenous variables and deterministic terms. The reduced-form representation of this model is:

y_t' = x_t' D + u_t', with D = B A_0^{-1} and Σ = E[u_t u_t'].

The matrices D and Σ are the reduced-form parameters, A_0 and B the structural parameters. Let h be any continuously differentiable mapping from the set of symmetric positive definite n × n matrices into the set of n × n matrices such that h(X)' h(X) = X. In particular, h(X) could be the Cholesky decomposition of X. We have (A_0, B) = (h(Σ)^{-1}, D h(Σ)^{-1}). We denote f(h(Σ)^{-1}, D h(Σ)^{-1}) a function, with dimensions nr × n, which stacks the impulse responses for the r horizons where sign restrictions are imposed, such that it satisfies f(h(Σ)^{-1} Q, D h(Σ)^{-1} Q) = f(h(Σ)^{-1}, D h(Σ)^{-1}) Q for any orthogonal matrix Q ∈ O(n). Zero restrictions can be defined using matrices Z_j of dimension z_j × nr, with z_j being the number of zero restrictions imposed on f(h(Σ)^{-1}, D h(Σ)^{-1}). The parameters (D, Σ) satisfy the zero restrictions if Z_j f(h(Σ)^{-1} Q, D h(Σ)^{-1} Q) e_j = 0, for 1 ≤ j ≤ n, where e_j is the jth column of the identity matrix I_n.

The main steps of the algorithm are the following:

1. Draw (D, Σ) from the posterior distribution of the reduced-form parameters.
2. Draw X = [x_1, ..., x_n] from an independent standard normal distribution.
6. Repeat steps 1-5 for N draws from the posterior distribution of the VAR parameters.
7. For all accepted draws, compute and save the corresponding impulse response.
8. Lastly, calculate the median, the 5th, the 16th, the 84th and the 95th percentiles of all the impulse responses.

E Data definitions and sources

See Table 3.
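As a rough illustration of the accept/reject scheme in Appendix D above (steps 3-5, which construct the orthogonal rotation and check the restrictions, are not reproduced in the extracted text), the following minimal Python sketch shows how such a sampler is commonly organized. The helper names draw_reduced_form, irf, sign_ok and zero_ok are placeholders rather than functions from the authors' code, and the plain QR-based draw of Q only handles sign restrictions by rejection; the exact Arias et al. (2018) procedure imposes zero restrictions when constructing Q rather than rejecting draws.

```python
import numpy as np

def draw_orthogonal(n, rng):
    """Draw Q approximately uniformly from O(n) via QR of a Gaussian matrix."""
    X = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(X)
    return Q * np.sign(np.diag(R))  # fix column signs so the draw is Haar-uniform

def accepted_irfs(draw_reduced_form, irf, sign_ok, zero_ok,
                  n_vars, n_draws, horizons, seed=0):
    """Rejection sampler for sign-identified impulse responses (sketch only)."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n_draws):
        D, Sigma = draw_reduced_form(rng)      # step 1: reduced-form posterior draw
        h = np.linalg.cholesky(Sigma)          # a factor of Sigma (lower triangular)
        Q = draw_orthogonal(n_vars, rng)       # step 2 (and the rotation of steps 3-5)
        impact = h @ Q                         # candidate impact matrix
        F = irf(impact, D, horizons)           # stacked responses f(.) over r horizons
        if sign_ok(F) and zero_ok(F):          # keep only draws satisfying restrictions
            kept.append(F)
    kept = np.array(kept)
    # step 8: pointwise median and 5th/16th/84th/95th percentile bands
    return np.percentile(kept, [5, 16, 50, 84, 95], axis=0)
```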
Homogenous Nucleation of Sulfuric Acid and Water at Close to Atmospherically Relevant Conditions In this study the homogeneous nucleation rates in the system of sulfuric acid and water were measured by using a flow tube technique. The goal was to directly compare particle formation rates obtained from atmospheric measurements with nucleation rates of freshly nucleated particles measured with particle size magnifier (PSM) which has detection efficiency of unity for particles having mobility diameter of 1.5 nm. The gas phase sulfuric acid concentration in this study was measured with the chemical ionization mass spectrometer (CIMS), commonly used in field measurements. The wall losses of sulfuric acid were estimated from measured concentration profiles along the flow tube. The initial concentrations of sulfuric acid estimated from loss measurements ranged from 10 8 to 3 × 10 9 molecules cm −3. The nucleation rates obtained in this study cover about three orders of magnitude from 10 −1 to 10 2 cm −3 s −1 for commercial ultrafine condensation particle counter (UCPC) TSI model 3025A and from 10 1 to 10 4 cm −3 s −1 for PSM. The nucleation rates and the slopes (dlnJ/dln [H 2 SO 4 ]) show satisfactory agreement when compared to empirical kinetic and activation models and the latest atmospheric nucleation data. To the best of our knowledge, this is the first experimental work providing temperature dependent nucleation rate measurements using a high efficiency particle counter with a cutoff size of 1.5 nm together with direct measurements of gas phase sulfuric acid concentration. Introduction Atmospheric new particle formation consists of rather complicated sets of processes, the first of them is gas-to-particle nucleation.It is generally accepted that sulfuric acid is a robust source of new particles and plays a central role in atmospheric new particle formation (Weber et al., 1996(Weber et al., , 1997;;Kulmala, 2003).In number of field experiments (e.g.Weber et al., 1996;Sihto et al., 2006;Riipinen et al., 2007;Zhao et al., 2010) and also in some laboratory studies (e.g.Berndt et al., 2005, 2010, Young et al., 2008, Sipilä et al., 2010) the rate of particle formation is not adequately explained by binary classical homogenous nucleation (CNT), the theory greatly under predicts the observed nucleation.According to Kashchiev (1982) the relationship (the slope, dlnJ/dln [H 2 SO 4 ]) between particle production rate and sulfuric acid concentration directly corresponds to number of molecules in critical cluster.In atmospheric measurements and also in a laboratory studies (Sipilä et al., 2010;Berndt et al., 2010) it was observed that particle number concentration followed a power-law dependency of about 1-2, compared to CNT prediction that suggest exponents from 4 to 9 (Vehkamäki et al., 2002).This discrepancy has been puzzling to atmospheric researchers for more than a decade.As a solution to this problem it has been suggested that other associate molecule as ammonia and amines (Weber et al., 1996;Berndt et al., 2010) or organic acids (Zhang, 2010;Zhang et al., 2004;Metzger et al., 2010;Paasonen et al., 2010) may have a stabilizing effect on the clusters and allow nucleation to occur at much lower concentrations of sulfuric acid than needed by CNT.The first in-situ atmospheric measurements of sulfuric acid in troposphere by using chemical ionization mass spectrometer D. 
Brus et al.: Homogenous nucleation of sulfuric acid and water (CIMS) were reported by Eisele and Tanner (1993).Since that time CIMS was used in many field studies in diverse locations around the world (e.g.Weber et al., 1996Weber et al., , 1997;;Mauldin et al., 1998;Sihto et al., 2006;Riipinen et al., 2007;Petäjä et al., 2009, Zhao et al., 2010) and also in laboratory studies (e.g.Ball et al., 1999;Zhang et al., 2004;Benson et al., 2008;Young et al., 2008;Sipilä et al., 2010). Compared to our previous study (Brus et al., 2010) we apply here a similar approach as is commonly used in atmospheric measurements.The gas phase sulfuric acid concentration was measured with the chemical ionization mass spectrometer (CIMS).The number concentration of freshly nucleated particles was measured in parallel with commercial UCPC TSI 3025A and particle size magnifier (PSM) with particle counting efficiency close to unity for particles of ∼1.5 nm, (Vanhanen et al., 2011). Experimental setup The same experimental setup as introduced in Brus et al. (2010) was used in this investigation.The experimental setup details and the principle of operation can be found therein.Only brief description of apparatus, its principle of operation and differences associated with particle counting and determination of sulfuric acid concentration are discussed here.The experimental setup consists of five main parts: an atomizer, a furnace, a mixing unit, a nucleation chamber and a particle detector unit.A liquid solution of known concentration and amount (0.22 ml min −1 ) is introduced by the HPLC Pump (Waters 515) through a ruby micro-orifice (Bird Precision -20 µm) together with particle free air (about 4 l min −1 ) into the furnace.The dispersion is vaporized in a furnace (Pyrex glass tube) which is 60 cm long and has an internal diameter (I.D.) of 2.5 cm.The tube is wrapped with resistance heating wires.The temperature inside the furnace is kept at approximately 470 K and controlled by a PID controller to within ±0.1 K (DigiTrace, TCONTROL-CONT-02).After the furnace, the vapor is filtered with a Teflon filter (MITEX TM Millipore 5.0 µm LS) to remove any liquid residue or particulate impurities.The Teflon filter is placed on the perforated Teflon support pad just after the furnace, and before the entrance to the mixing unit.The filtered vapor is then introduced into the mixing unit, made of Teflon, and cooled by turbulent mixing with room temperature particle free air to about 320 K.The flow rate of the mixing air is about 8 l min −1 .The mixing unit dimensions are: O.D. = 10 cm, I.D. = 7 cm, height = 6 cm.The mixing unit is kept at room temperature and it is not insulated.Both lines of particle free air are controlled by a flow rate controller to within ±3 % (MKS type 250).The vapor gas mixture is then cooled to the desired nucleation temperature in a nucleation chamber, which is kept at a constant temperature with two liquid circulating baths (Lauda RK-20).The nucleation chamber is made of stainless steel, with an I.D. 
of 6 cm and an entire length of 200 cm.The concentration of water vapor is measured at the middle and far end of the nucleation chamber with two humidity and temperature probes (Vaisala HMP37E and humidity data processor Vaisala HMI38) to within ±3 %.The aerosol number concentration is measured just after the nucleation chamber with an ultrafine condensation particle counter (UCPC) TSI model 3025A and simultaneously with particle size magnifier (PSM).The sulfuric acid concentration is measured also at the end of the nucleation chamber with chemical ionization mass spectrometer (CIMS). The liquid samples of sulfuric acid and water mixture are prepared from a 0.01 M solution of H 2 SO 4 (Reagecon, AVS purity) and ultrapure water (Millipore, TOC less than 10 ppb, resistivity 18.2 M .cm@ 25 • C).The desired solution concentration is prepared in two steps of dilution.First, 1 l of primary solution of concentration (1.96 × 10 −4 mol l −1 ) is made by adding 20 ml of 0.01 mol H 2 SO 4 to 1 l of pure water.Then the desired final solution for a particular measurement is made.To cover RH's from 60 % to 10 % we prepare 1 l of final solution from 0.5 ml to 70 ml of primary solution.The final solution concentration is always checked by Ion Chromatography with a lower detection limit of 0.02 mg l −1 of SO 2− 4 in the analytical laboratory at the Finnish Meteorological Institute. Chemical Ionization Mass Spectrometer, CIMS Sulfuric acid was measured with a chemical ionization mass spectrometer, CIMS (Eisele and Tanner, 1993;Mauldin et al., 1998;Petäjä et al., 2009).The sulfuric acid in the sample flow is chemically ionized by (NO − 3 ) ions.The reagent ions are generated by nitric acid and a 241 Am alpha source and mixed in a controlled manner in a drift tube utilizing concentric sheath and sample flows together with electrostatic lenses. Prior to entering the vacuum system, the chemically ionized sulfuric acid molecules pass through a layer of dry nitrogen flow in order to dehydrate the sulfuric acid.In the vacuum system the sulfuric acid clusters are dissociated to the core ions by collisions with the nitrogen gas seeping through the pinhole in the collision-dissociation chamber (Eisele and Tanner, 1993).The sample beam is collimated with a set of conical octopoles, mass filtered with a quadrupole and detected with a channeltron.The sulfuric acid concentration is determined by the ratio between the signals at m/z channel of 97 Da (HSO − 4 ) and the reagent ion at m/z channel of 62 Da (NO − 3 ) multiplied by the instrument and setup dependent calibration factor. The calibration factor is determined by photolyzing ambient water vapor with a mercury lamp to generate a known amount of OH radicals in front of the inlet.The produced OH radicals subsequently convert isotopically labeled 34 SO 2 into labeled sulfuric acid in a well defined reaction time yielding finally after ionization (H 34 SO − 4 ).A nominal detection limit of the CIMS instrument is 5 × 10 4 molecules cm −3 for a 5 min integration period.The error estimate in determined concentrations is estimated to be about factor of 2. 
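As a minimal numerical illustration of the signal-ratio calculation just described (the count rates and the calibration factor below are hypothetical placeholders, not instrument data):

```python
import numpy as np

signal_97 = np.array([4.0e4, 2.0e5, 8.0e5])   # HSO4- product-ion signal at m/z 97 (arbitrary units)
signal_62 = np.array([2.0e6, 2.1e6, 1.9e6])   # NO3- reagent-ion signal at m/z 62 (arbitrary units)
calib = 5.0e9                                  # assumed instrument/setup-dependent calibration factor

h2so4 = calib * signal_97 / signal_62          # [H2SO4] in molecules cm^-3
print(h2so4)
```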
CIMS was used also to detect sulfuric acid dimers.The calibration factor used for monomers was applied also in converting the dimer signal to concentration.Since the transmission for dimer (m/z channel of 195 Da) can differ from monomer (97 Da), use of the single calibration factor causes error in the determined concentration.Furthermore, our reported dimer signal comprises dimers formed both via neutral processes inside the flow tube and dimers formed by ion induced mechanism in the CIMS charger, for detailed discussion see Petäjä et al. (2011).Therefore our results concerning the dimer concentrations are still somewhat qualitative. H 2 SO 4 losses Sulfuric acid wall losses were determined experimentally by measuring the losses of sulfuric acid concentration along the nucleation chamber.Two sets of experiments were conducted.First, relative humidity was changed (16, 32 and 57 %) and nucleation temperature (25 • C) was kept constant.Second, nucleation temperature was changed (25, 15 and 5 • C) and relative humidity was kept constant (∼50 %).The nucleation chamber consists of two 1 meter long interchangeable parts; one of them is equipped with 4 holes in equal distance of 20 cm from beginning and from each other.In the first set of measurements the holes were in upper position so we measured sulfuric acid losses for relative humidities 16, 32, 57 % in distances of 20, 40, 60, and 80 cm from the beginning and then at the end (200 cm) of the nucleation chamber.The slopes obtained from the fits to experimental data ln([H 2 SO 4 ]) vs. distance in the nucleation chamber stand for the loss rate coefficient, k obs (cm −1 ), under the assumption that the only sink for molecular sulfuric acid is the first order loss to the flow tube wall.To be able to measure along whole nucleation chamber an additional CIMS inlet sampling tube had to be used, which is a stainless steel tube with I.D. 10 mm and its whole length was 122 cm (100 cm straight + 22 cm elbow-pipe).The sulfuric where k obs is the observed loss rate coefficients, v is mean flow velocity, t is residence time, WLF is the wall loss factor, and WLF inlet is the wall loss factor estimated for CIMS' inlet sampling tube.2010) for details.The presence of temperature gradient (both axial and radial) imposes thermophoretic force towards the cooled nucleation chamber wall (set to 25 • C) and also increases the value of diffusion coefficient, thus increasing WLF in first 50 cm.There is no temperature gradient in the CIMS' sampling tube and the WLF inlet is behaving as expected, the WLF inlet is increasing with decreasing RH, (Hanson and Eisele, 2000). In the second set of loss measurements the positions of nucleation chamber parts were exchanged, so the holes were in lower part of nucleation chamber.This was done to ensure the reproducibility of experiment at relative humidity ∼50 %, and also find out how big role plays the axial temperature gradient (thermophoresis and higher diffusion coefficient) in first 50 cm of the nucleation chamber.The sulfuric acid losses were measured at three nucleation temperatures (25, 15 and 5 • C) and relative humidity ∼50 % at distances of 120, 140, 160, 180, and 200 cm.Table 2 contains the obtained loss rate coefficients together with accompanied parameters.The WLF is generally smaller then in the first set of experiment due to smaller and constant diffusion coefficient; i.e. 
there is no axial temperature gradient present in lower part of nucleation chamber.The WLF is increasing with decreasing nucleation temperature again due to increased radial temperature gradient.The WLF inlet is even pronounced because the CIMS' sampling inlet tube was not temperature controlled, but only well insulated.D. Brus et al.: Homogenous nucleation of sulfuric acid and water Particle Size Magnifier Particle size magnifier (PSM, Airmodus A09) used in this study is based on two recent major developments on the field of particle counting.First, on the work of Sgro and de la Mora (2004) (and the references therein) with the development of mixing type particle size magnifier for almost arbitrarily small particles, and second, on the study by Iida et al. (2009) to find the most suitable working fluid to be used in a condensation particle counter (CPC).The critical dimensions and the geometry of the PSM are very close to those given by Sgro and de la Mora (2004).Diethylene-glycol was used as the working fluid.It has relatively high surface tension and low saturation vapor pressure.Because of these properties a high saturation ratio is acquired without homogeneous nucleation (Iida et al., 2009).Diethylene-glycol has also been experimentally tested in the ultrafine condensation nucleus counter (UFCNC) prototype (Stoltzenburg and McMurry, 1991) showing a superior performance in the sub-2 nm size range (Iida et al., 2009).Due to low vapor pressure of diethylene-glycol the particles cannot easily grow to optical sizes (∼1 µm in diameter).Therefore an external CPC (TSI 3010) is used for detecting the activated particles in this design.Calibration results (Vanhanen et al., 2011) have shown that PSM detects charged particles approaching efficiency of unity (practically diffusion loss limited) down to ∼1.5 nm.Below that still ∼25 % of the smallest calibration ion (tetra-methyl-ammonium-ion) with mobility equivalent diameter of 1.05 nm, was activated in the PSM in comparison to reference electrometer (TSI 3068B).An assumption of unity detection efficiency in case of PSM is justified. Results and discussion Two separate experiments were conducted in the Finnish Meteorological Institute (FMI) flow tube.The nucleation rates of sulfuric acid and water were measured as a function of initial sulfuric acid concentration at three different relative humidities (16, 32 and 57 %).Also the nucleation rate temperature dependency was investigated; the experiments were conducted at three temperatures (25, 15 and 5 • C) while keeping the relative humidity close to 50 %.To obtain the initial sulfuric acid concentration, the sulfuric acid losses were estimated separately for all experimental conditions.The main reason why we focused on obtaining initial sulfuric acid concentration in our flow tube was that the concentration of prepared solution of sulfuric acid and water is known for each particular experiment and thus the initial sulfuric acid concentration determined with CIMS and IC method can be mutually compared. 
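A short sketch of the first-order loss-rate fit and the extrapolation to the initial concentration described above; the sampling distances, concentrations and tube length below are illustrative placeholders, not measured values:

```python
import numpy as np

x = np.array([120.0, 140.0, 160.0, 180.0, 200.0])   # sampling distance along the tube (cm)
c = np.array([9.0e8, 7.4e8, 6.1e8, 5.0e8, 4.1e8])   # measured [H2SO4] (molecules cm^-3)

# First-order wall loss: ln C(x) = ln C0 - k_obs * x
slope, ln_c0 = np.polyfit(x, np.log(c), 1)
k_obs = -slope                      # observed loss rate coefficient (cm^-1)
c0 = np.exp(ln_c0)                  # extrapolated initial concentration at x = 0

L = 200.0                           # nucleation chamber length (cm)
wlf = np.exp(k_obs * L)             # wall loss factor over the full tube, C0 / C(L)
print(k_obs, c0, wlf)
```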
Nucleation rates The number concentrations of freshly nucleated particles were measured as a function of initial sulfuric acid concen-tration at several levels of relative humidities.In all experiments two different counting systems were used in parallel.An UCPC TSI 3025A which was calibrated with silver particles to a mobility diameter d 50 cut-off of 2.28 nm.The following modification to UCPC TSI 3025A has been done to obtain a d 50 cut-off diameter of 2.28 nm.The saturator temperature was increased from a nominal 37 • C up to 38 • C, the condenser temperature was decreased from a nominal 10 • C down to 8 • C. At these new temperatures no homogeneous nucleation was observed inside the counter.As a second counting system a mixing type particle size magnifier (PSM) with close to unity detection efficiency for mobility equivalent diameter of 1.5 nm was used.The initial concentration of sulfuric acid was estimated from loss measurements using CIMS and it ranged from 10 8 to 3 × 10 9 molecules cm −3 .The onset of nucleation for UCPC TSI 3025A particle counter was observed at sulfuric acid initial concentrations about 10 8 and for PSM at about 10 7 molecules cm −3 (extrapolated to J = 1 cm −3 s −1 ).The different counting efficiency of both counters lead to different slopes in plot of nucleation rate vs. sulfuric acid concentration.The biggest difference in counting between UCPC TSI 3025A and PSM is at lowest nucleation rates, about a factor of 200, and the smallest difference is at highest nucleation rates, about a factor of 3. From the obtained slopes it is obvious that both lines will merge at certain point, where the particle diameter of grown particles for UCPC TSI 3025A will also reach the counting efficiency of unity.The detailed comparison and explanation of differences among several counting systems can be found in Sipilä et al. (2010).The linear fits to experimental data for both particle counters are presented in Table 3. Nucleation time in our experiment is defined as time from the nucleation zone maxima to the end of the flow tube; which is half of the total residence time.Nucleation zone was determined experimentally (Brus et al., 2010) and also with Fluent CFD model (Herrmann et al., 2010), the maxima of nucleation zone was found at distance of about 1 m up from the nucleation chamber end.The nucleation rate is then defined as particle number concentration divided by nucleation (or half of residence) time.The highest uncertainty in nucleation rate is estimated to be factor of 2 when considering unlikely shift in position of nucleation zone maximum from 50 cm to 150 cm in the nucleation chamber.The resulting nucleation rates at relative humidities of 16, 32, 47 and 57 % cover about three orders of magnitude from 10 −1 to 10 2 cm −3 s −1 and from 10 1 to 10 4 cm −3 s −1 for UCPC TSI 3025A and PSM, respectively, see Fig. 1a and b.It has to be pointed out that experimental data at RH = 32 % were already published in Sipilä et al. ( 2010) and experimental data at RH = 47 %, are taken from temperature dependency measurements (see next Sect.3.2 Temperature dependency) to show the experiment reproducibility. Temperature dependency The effect of temperature on nucleation rate was studied and the experimental results are presented for both counters (UCPC TSI 3025A and PSM) separately in Fig. 
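The nucleation-rate definition and the slope dlnJ/dln[H2SO4] used above amount to a short calculation; the particle concentrations, sulfuric acid concentrations and residence time below are hypothetical:

```python
import numpy as np

n_particles = np.array([1.0e3, 5.0e3, 2.0e4, 8.0e4])   # measured number concentration (cm^-3)
h2so4_0     = np.array([2.0e8, 5.0e8, 1.0e9, 2.0e9])   # initial [H2SO4] (molecules cm^-3)
t_res = 60.0                                            # total residence time (s)

j = n_particles / (t_res / 2.0)     # J = N / nucleation time, with nucleation time = t_res / 2

# Slope dlnJ/dln[H2SO4] from a linear fit in log-log space
slope, intercept = np.polyfit(np.log(h2so4_0), np.log(j), 1)
print(j, slope)
```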
2a and b.The nucleation rates were measured as a function of sulfuric acid initial concentration at three temperatures 25, 15 and 5 • C, the relative humidity was kept close to 50 %.The experiment was conducted in a way that for one prepared solution of sulfuric acid and water, first all flow tube parameters (RH, T , total flow) were adjusted to measure nucleation temperature of 25 • C.After the experiment was finished the parameters were readjusted to measure nucleation temperature of 15 • C by changing temperature of nucleation chamber wall and flows to keep relative humidity close to 50 %.Finally nucleation temperature of 5 • C was measured in the same way and the sulfuric acid -water solution was changed afterwards. The nucleation rate shows an enhancement of more than one order of magnitude when decreasing the nucleation temperature by 20 • C at sulfuric acid concentration of 10 9 molecules cm −3 for both particle counters.At sulfuric acid concentration of 10 8 molecules cm −3 the measured data show no enhancement of nucleation rate because of different slopes of isotherms.The steepest slope was observed at temperature of 5 • C which is in disagreement with prediction of CNT (Vehkamäki et al., 2002).This might be due to undercounting of both particle counters at lower sulfuric acid concentrations.CNT predicts about 30 % smaller critical cluster size at 5 • C than at 25 • C. The reduction of critical cluster size with decreasing temperature is usually also observed experimentally in unary systems (e.g.Manka et al., 2010).The experimental data for 15 and 25 • C lie almost on top of each other; this is probably due to experimental difficulties we observed at lower temperatures 15 and 5 • C. The resulting slopes of fits to experimental data are collected in Table 4.The temperature dependency was already studied earlier by Wyslouzil et al. (1991) in temperature range of 20-30 • C.These measurements are provided in the plot of nucleation rate vs. relative acidity, which complicates the direct comparison to our dataset, however their data indicate that a 5 • C decrease in nucleation temperature would lead to a decrease in nucleation rate of two to four orders in magnitude. Dimer formation The formation of sulfuric acid dimer in both its hydrated and unhydrated form is the first step in sulfuric acid and water nucleation process (Hanson and Lovejoy, 2006) 2.2 0.99 2.9 0.99 15 1.5 0.99 1.9 0.99 25 1.2 0.98 2.1 0.99 potential contribution of ion induced clustering inside the CIMS charger.Thus, it must be pointed here that results concerning the dimer concentrations in the flow tube are still only qualitative.Generally, the observed concentration of dimer was in units of percent of the monomer concentration, which agrees with earlier studies (Eisele and Hanson, 2000).A slight RH dependency in monomer to dimer relation was observed.For the nucleation temperature of 25 • C the increasing trend in the ratio (M/D) of monomer (97 Da) to dimer (195 Da) from ∼100 to ∼200 was observed with increasing relative humidity from 16 to 56 %, see Fig. Eisele and Hanson (2000) reported M/D value ∼40 at ∼240 K.The trend of M/D ratio is decreasing with decreasing temperature in our study.Similar trend was also observed in Eisele and Hanson (2000) but only for cluster bigger than trimer. Comparison to our previous data The detailed comparison to other literature data concerning sulfuric acid -water system is given in our previous publication Brus et al. 
(2010), however the discrepancy was found in the results of this study compared to data published earlier (Brus et al., 2010).In our earlier study the method of bubblers was used to estimate concentration and the losses of sulfuric acid along the flow tube as the total sulfate (SO 2− 4 ) concentration obtained via ion chromatography (IC) analysis.In this study the initial sulfuric acid concentration measured with CIMS method reaches about 20 % at RH ∼50 % and only about 1 % at RH∼16 % of total sulfate concentration obtained via ion chromatography (IC) analysis of the prepared liquid samples and consequent initial sulfuric acid concentration calculated by mass balance, see Fig. 5. There might be several reasons for such observations.The CIMS measures only monomer (97 Da) in a gas phase, the dimer concentration (195 Da) was usually less then 1 % of monomer concentration.This might indicate that the rest of sulfuric acid is in another form.The losses of sulfuric acid into particles is marginal, it was in the range of few per mille to maximum of 3 % for sulfuric acid concentration range from 10 8 to 10 9 molecule cm −3 . What has to be also considered is shielding of sulfuric acid with water molecules.The hydration of sulfuric acid takes always place whenever traces of water are involved in the process of nucleation.According to classical theory of hydration made by Jaecker-Voirol andMirabel (1988), Jaecker-Voirol et al. (1987) and validated by Kulmala et al. (1991), only about 10 % of sulfuric acid is in unhydrated form at relative humidity of 50 %.Salcedo et al. (2004) studied the effect of relative humidity on the detection of sulfur dioxide and sulfuric acid and found negative effect on the sensitivity of the CIMS to SO 2 and H 2 SO 4 because water molecules form clusters with reactant and product ions thus shielding the molecules from being ionized.They claim that the effect can be avoided by increasing the CIMS' inlet flow tube temperature to 150 • C. On the other hand e.g.Eisele and Tanner (1993) in their study claim that the CIMS measurements are sensitive to total sulfuric acid without discrimination between free acid and monoacid hydrates, or even between free and higher-order acid clusters and their hydrates.Water is far more volatile than sulfuric acid and any water associated with an ion may be driven off as the ion is sampled through the collisional-dissociation chamber (CDC) of the CIMS, (Eisele and Hanson, 2000).Our results are contra in- tuitive in the case of water molecule shielding.The sulfuric acid concentration measured with CIMS is decreasing with decreasing relative humidity.If the shielding would be due to water molecules then sulfuric acid concentration would be increasing towards the lower relative humidity.Thus the shielding effect only due to water itself is an improbable explanation. 
Other possibility is involvement of ammonia or other bases like amines in shielding of sulfuric acid molecules by creation of stable clusters of, in case of ammonia, ammonium sulfate or ammonium bisulfate.The discussion on the role of stabilizing compounds affecting the chemical ionization methods to determine sulfuric acid is currently ongoing (Kurtén et al., 2011).As these effects are potentially setup and instrument dependent and difficult to quantify, our concentration estimates have a larger uncertainty (factor of two) associated with them than presented earlier for the CIMS technique (30-35 %, Tanner and Eisele, 1995;Berresheim et al., 2000).Furthermore, the same calibration factor was used in converting the raw signal to monomer and dimer concentrations. Even though the concentration of ammonia was always below the detection limit of ion chromatography (IC) analysis (0.02 mg l −1 ), we have no doubts that there is always certain level of ammonia present in our experiment even though the ultrapure water and particle free clean air is used.The IC ammonia detection limit (0.02 mg l −1 ) for our experimental setup corresponds to mixing ratio of 0.5 ppb of ammonia, this corresponds to concentration about one order higher than sulfuric acid concentration measured with CIMS and close to ratio of unity to total sulfate concentration obtained from IC 1.4 × 10 −15 7.2 × 10 −7 1.7 32 6.9 × 10 −16 4.9 × 10 −7 1.3 16 1.3 × 10 −16 3.9 × 10 −8 1.5 analysis and subsequent mass balance calculation.However ammonia was never detected in our samples, so the actual ammonia mixing ratio in our system has to be much smaller. In this study the liquid solutions of sulfuric acid and water were prepared in the same way as in previous study (Brus et al., 2010) also the same range of total sulfate concentrations when calculated by mass balance was observed, and the similar nucleation rates when compared to UCPC 3025A were obtained for the same range of total sulfate concentration. In conclusion, we have no certain explanation for apparent loss of sulfuric acid.Also, it should be mentioned, that Sipilä et al. ( 2010) also observed an apparent additional loss of molecular sulfuric acid with high initial concentrations and longer residence times.That observation was explained by rapid conversion of concentrated sulfuric acid monomer to dimer and larger clusters, stabilized by proper, possibly basic compounds (Petäjä et al., 2011).The same process can take place also in our system even though it is difficult to perceive from the data. Comparison to atmospheric nucleation data Many scientific groups found and confirmed that the vapor concentration of sulfuric acid in atmosphere is often strongly connected with new particle formation.The correlation of sulfuric acid vapor concentrations and formation rate of neutral aerosol particles can be generally expressed with two models, the kinetic model of McMurry (1980) and the activation model of Kulmala et al. 
(2006).Parameters of both models are determined empirically from atmospheric data.Both models are dependent on the sulfuric acid concentration, kinetic model quadratically and activation model linearly: where K is a kinetic coefficient ranging from 10 −14 to 10 −11 cm 3 s −1 and A activation coefficient ranging from 10 −7 to 10 −5 s −1 (e.g.Weber et al., 1996;Sihto et al., 2006;Riipinen et al., 2007;Paasonen et al., 2010), E is an exponent associated with number of sulfuric acid molecules in critical cluster (Kashchiev, 1982), it is usually found to be in between values 1 and 2 when applied to atmospheric data.In this study we compare our experimental data to latest atmospheric data analysis made by Paasonen et al. (2010) where two CIMS systems were used at four measurement sites -Hyytiälä (Finland), Hohenpeissenberg and Melpitz (Germany), and San Pietro Capofiume (Italy).The measurements in Hohenpeissenberg, Melpitz and San Pietro Capofiume were performed with the CIMS of German Weather Service (DWD), whereas in Hyytiälä the CIMS of the University of Helsinki (UHEL) was used.The two instruments are very similar, as the UHEL CIMS is built at the National Center for Atmospheric Research (NCAR, USA), and also the DWD CIMS is NCAR-type CIMS.They also rely on the same calibration procedure, for more details see Paasonen et al. (2010).However, as Paasonen et al. (2010) concluded, the nucleation rates in Hohenpeissenberg were not closely connected to sulfuric acid concentration, and thus our comparison is made only to the data from the other three sites, see Fig. 6.The formation rates of 2 nm neutral particles (J 2 ) were obtained at all stations from particle size distributions recorded on nucleation event days.Such dataset can be directly compared to nucleation rates obtained with PSM in our study.The exponents from linear fits to our experimental data range from 1.2 to 2.2, depending on relative humidity and nucleation temperature.The worse agreement between our experiment and atmospheric data was found for the highest nucleation temperature (25 • C) and the lowest relative humidity (RH 16 %), see Fig. 6.The kinetic and activation coefficients obtained from our experimental data are in close agreement to atmospheric ones even though the range of relative humidities and temperatures of atmospheric data is quite wide, see Tables 5 and 6.The median kinetic and activation coefficients of whole dataset presented in Paasonen et al. (2010) (Table 4 therein) are K=26 × 10 −14 cm 3 s −1 and A = 9.7 × 10 −7 s −1 .In our study we found median coefficients for whole dataset to be K = 0.1 × 10 −14 cm 3 s −1 and A = 7.85 × 10 −7 s −1 , thus favouring the activation mechanism in nucleation process.However, this kind of interpretation has to be considered with cautiousness, because the nucleation coefficients may be strongly dependent on some other quantities, e.g.low-volatility organic vapor concentration as suggested by Paasonen et al. (2010). Conclusions In this study the homogeneous nucleation rates of sulfuric acid and water were measured in two separate sets of experiment.In first one we tested the influence of relative humidity in the range from 16 to 57 % and in second one the influence of temperature on nucleation rate at three different nucleation temperatures 25, 15 and 5 • C. 
Two condensation particle counters (UCPC TSI 3025A and PSM with CPC TSI 3010) with different d 50 detection efficiency were used in parallel to count freshly nucleated particles.The gas phase sulfuric acid concetration was measured with CIMS.The initial concentration of sulfuric acid was estimated from loss measurements using CIMS and it ranged from 10 8 to 3 × 10 9 molecules cm −3 .The losses of sulfuric acid along the flow tube were estimated for each particular set of experimental conditions.The onset of nucleation for UCPC TSI 3025A was observed at sulfuric acid initial concentrations at about 10 8 and for PSM at about 10 7 molecules cm −3 .The resulting nucleation rates at relative humidities of 16, 32, 47 and 57 % cover about three orders of magnitude from 10 −1 to 10 2 and from 10 1 to 10 4 for UCPC TSI 3025A and PSM, respectively.The nucleation rate shows an enhancement of more than one order of magnitude per decreasing the nucleation temperature by 20 • C at sulfuric acid concentration of 10 9 molecules cm −3 for both particle counters.At sulfuric acid concentration of 10 8 molecules cm −3 the measured data show no enhancement of nucleation rate because of different slopes of isotherms.The concentration of dimers was found to be usually less than one percent of monomer concentration.Obtained experimental nucleation rate data were also compared to two empirical (kinetic and activation) models.The obtained median activation coefficients are close to the atmospheric ones, whereas the kinetic coefficients were from one to three orders of magnitude smaller.However it has to be pointed out that these coefficients may be strongly dependent on some other quantities like low-volatility organic vapor concentration.The exponents obtained from fits to our data are in the range of 1.2 to 2.2, depending on relative humidity and nucleation temperature.Even though the sulfuric acid concentration determined together from CIMS measurements and wall loss estimates was only 10 % of the total sulfate concentration obtained via Ion Chromatography analysis and subsequent mass balance (Brus et al., 2010), the slopes in figures J vs. [H 2 SO 4 ] and J vs. total sulfate are the same.This probably means that the form and amount of active sulfuric acid involved in nucleation process itself is ambiguous and it is limited from left side by free [H 2 SO 4 ] and from right side by total sulfate concentration.The participation of ammonia can not be disproved in our nucleation experiment, even though the concentration of ammonia never reached the detection limit of IC analysis. Fig. 1 . Fig. 1.Nucleation rates (cm −3 s −1 ) of sulfuric acid and water as a function of sulfuric acid initial concentration, nucleation temperature T = 25 • C. Particle number concentration measured with UCPC TSI 3025 A (A) and particle size magnifier (PSM) (B). . The working mass range of the CIMS used in this study was from m/z channels of 40 to 250 Da, which allowed us to observe individual sulfuric molecules as HSO − 4 at m/z channel of 97 Da and also sulfuric acid dimer cluster as HSO − 4 •H 2 SO 4 at m/z channel of 195 Da.Signal at m/z channel of 195 Da comprises both dimers formed inside the flow tube and the www.atmos-chem-phys.net/11/5277/2011/Atmos.Chem.Phys., 11, 5277-5287, 2011 3. The monomer to dimer ratio as a function of nucleation temperature can be seen in Fig. 4. 
The data are averages over the whole isotherm with corresponding standard deviations as error bars. The M/D ratio is about a factor of 3 larger for a nucleation temperature of 25 °C (M/D = 224) than for 5 °C (M/D = 85).

Fig. 3. Sulfuric acid dimer concentration as a function of monomer concentration at three different relative humidities and nucleation temperature T = 25 °C. The M/D ratio is increasing from ∼100 to ∼200 with increasing relative humidity from 16 to 57 %.

Fig. 4. Monomer to dimer ratio as a function of three different nucleation temperatures T = 5, 15 and 25 °C.

Fig. 5. The initial sulfuric acid monomer concentration determined from CIMS measurements and WLF analysis as a function of initial total sulfate concentration determined by ion chromatography (IC) analyses and subsequent mass balance calculations. The error bars stand for uncertainty in CIMS measurements (factor of two) and error propagation in total sulfate mass balance calculations (±20 %).

Fig. 6. Nucleation rates as a function of sulfuric acid concentration, comparison of atmospheric data (Paasonen et al., 2010) and this study according to relative humidity (A) and temperature (B).

Table 1. Sulfuric acid losses in the upper half of the nucleation chamber at three relative humidities, RH (16, 32 and 57 %), T = 25 °C, where k_obs is the observed loss rate coefficient, v is mean flow velocity, t is residence time, WLF is the wall loss factor, and WLF_inlet is the wall loss factor estimated for the CIMS inlet sampling tube.

Table 2. Sulfuric acid losses in the lower half of the nucleation chamber at three temperatures, T (25, 15 and 5 °C), and relative humidity ∼50 %.

Table 5. Calculated median kinetic (K) and activation (A) coefficients at different levels of relative humidity; the exponent E is taken from the linear fit to PSM data of this study.

Table 6. Calculated median kinetic (K) and activation (A) coefficients at different nucleation temperatures; the exponent E is taken from the linear fit to PSM data of this study.
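The displayed expressions for the two empirical models compared with the atmospheric data above appear to have been lost from the extracted text; based on the surrounding description and the cited studies, the kinetic model takes the form J = K[H2SO4]^2 and the activation model J = A[H2SO4], with J ∝ [H2SO4]^E more generally. A hedged sketch of estimating the coefficients K and A and the exponent E reported in Tables 5 and 6 (the data below are hypothetical):

```python
import numpy as np

h2so4 = np.array([2.0e8, 5.0e8, 1.0e9, 2.0e9])   # [H2SO4] (molecules cm^-3)
j_obs = np.array([1.2e2, 4.0e2, 9.0e2, 2.5e3])   # nucleation rate J (cm^-3 s^-1)

A = np.median(j_obs / h2so4)        # activation coefficient, J = A * [H2SO4]     (s^-1)
K = np.median(j_obs / h2so4**2)     # kinetic coefficient,    J = K * [H2SO4]**2  (cm^3 s^-1)

# Exponent E from the log-log fit J = const * [H2SO4]**E
E, _ = np.polyfit(np.log(h2so4), np.log(j_obs), 1)
print(A, K, E)
```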
Substrate Specificity of the Highly Thermostable Esterase EstDZ3 Esterases are among the most studied enzymes, and their applications expand into several branches of industrial biotechnology. Yet, despite the fact that information on their substrate specificity is crucial for selecting or designing the best fitted biocatalyst for the desired application, it cannot be predicted from their amino acid sequence. In this work, we studied the substrate scope of the newly discovered hydrolytic extremozyme, EstDZ3, against a library of esters with variable carbon chain lengths in an effort to understand the crucial amino acids for the substrate selectivity of this enzyme. EstDZ3 appears to be active against a wide range of esters with high selectivity towards medium‐ to long‐carbon chain vinyl esters. In‐silico studies of its 3D structure revealed that the selectivity might arise from the mainly hydrophobic nature of the active site's environment. Introduction Lipolytic enzymes have received a great deal of attention as they are ubiquitous in nature and are easily observed in all kingdoms of life. Carboxylic ester hydrolases (EC 3.1.1.x) represent a broad class of lipolytic enzymes catalysing the hydrolysis of ester bonds over a wide range of substrates. This class of enzymes can be further subcategorized into several groups based on their substrate specificity. Among them, two subclasses, carboxylesterases (EC 3.1.1.1) and triacylglycerol hydrolases (EC 3.1.1.3, also known as lipases), have been excessively studied. [1] There have been many attempts from the scientific community to establish an adequate distinction between these classes through the establishment of various criteria. Lipases and esterases act upon water-insoluble and -soluble substrates, respectively. Moreover, a distinctive characteristic of lipases is that they can be interfacially activated. [2] This characteristic is attributed to a "lid" structure, composed from an amphiphilic α-helix that covers the hydrophobic substrate pocket. [3] However, detailed biochemical and structural studies have proven these criteria to be not exact determinants. For example, some lipases exhibit high affinity for water-insoluble esters without possessing a lid domain. [4] Lipolytic enzymes are of great biological importance due to the fact that they contribute to a variety of biological processes [5] Indeed, their functions are critical for human physiology owning to the fact that they can provide carbon sources for energy, through triacylglycerol catabolism, while the resulting products can also act as precursors or mediators for various biosynthetic and cellular signalling processes. [6,7] Some of their most industrially favourable characteristics include stability in organic solvents, their ability to catalyse ester synthesis in unconventional media, as well as their capacity to catalyse reactions with high chemo-, regio-, and enantioselectivity without the need of cofactors. All these features render them greatly important catalysts for industrial biotransformations. [8,9] Some notable industrial applications include the production of dairy products, the degradation of plastics, the synthesis of fine chemicals and their use as diagnostic tools. [10][11][12][13] More recently, the focus is gradually shifting towards the development of novel immobilization carriers to improve the stability of such biocatalysts. 
An example of such an application recently published [14] has clearly showcased the advantages of an ecologically friendly process based on the immobilization of a Thermomyces lanuginοsus lipase on hybrid magnetic ZnOFe nanoparticles, which greatly enhanced the stability of the enzyme leading to high transformation yields. The vast majority of lipolytic enzymes incorporated in industrial settings are of microbial origin. For this reason, there is a growing interest in discovering new enzymes with enhanced thermal stability, which can withstand high-temperature industrial processes. Such enzymes are frequently encountered in organisms residing in extreme environments. [15] One powerful strategy to identify such biocatalysts is metagenomic analysis, either by bioinformatic or functional means. [16][17][18] Metagenomics offer the great advantage that they can bypass the limitations imposed by traditional microbial culturing techniques. [19] We have recently discovered the new esterolytic enzyme EstDZ3, which was found to exhibit remarkable thermostability. [20] EstDZ3 was isolated from a bacterium of the genus Dictyoglomus, identified in a hot spring in China. [20] Interestingly, this new enzyme was found to have low amino acid sequence similarity to any previously identified enzyme. Molecular modelling of its three-dimensional structure has provided an indication of the existence of a "subdomain insertion", which is similar to that of the cinnamoyl esterase Lj0536 from Lactobacillus johnsonii. [20,21] According to our initial biochemical studies, EstDZ3 seems to function more like a carboxylesterase rather than a lipase, as it demonstrated a clear preference towards p-nitrophenyl (pNP) esters with short to medium acyl-chain length [20] More importantly, EstDZ3 was stable when exposed to high concentrations of organic solvents and exhibited remarkable thermostability, characteristics which render it a promising new catalyst for industrial biotransformations. In this work, we have investigated in more detail the hydrolytic activity and specificity of EstDZ3 against a variety of aliphatic and aromatic esters of synthetic or natural origin. Furthermore, we have performed computational analyses to provide rationalization for the identified substrate specificity. Our studies contribute important understanding of the biochemical properties and the biocatalytic potential of this new enzyme. Results and Discussion The esterolytic activity of EstDZ3 We investigated the hydrolytic activity of EstDZ3 against various synthetic and natural aromatic esters ( Figure S1 in the Supporting Information). As seen in Figure 1, EstDZ3 can hydrolyse a broad range of aliphatic esters, as well as aromatic esters, albeit with a lower efficiency. Among the aromatic esters, EstDZ3 exhibited good activity against benzyl acetate and cinnamyl acetate, while low activity was observed against bulky natural substrates, such as oleuropein and rosmarinic acid. On the other hand, the enzyme exhibited up to 20-fold higher hydrolytic activity towards vinyl esters ( Figure 1). This observation may be attributed to the fact that the electron-withdrawing groups of vinyl esters are able to shift the reaction equilibrium towards hydrolysis through the instant tautomerization of vinyl alcohol to the highly volatile acetaldehyde. [22] The effect of the acyl chain length (from C 2 to C 12 ) on the hydrolytic activity of EstDZ3 was investigated using aliphatic vinyl esters. 
EstDZ3 showed a preference towards medium to long aliphatic esters (C 10 À C 12 , Figure 1b highest hydrolytic activity towards vinyl decanoate (C 10 ), while lower levels of activity were observed for substrates with shorter or longer carbon chains. A more extensive and detailed study of EstDZ3's selectivity was conducted by determining its apparent kinetic parameters towards the aforementioned aliphatic esters through the Michaelis-Menten kinetic model (Table 1). EstDZ3 displayed the highest catalytic efficiency for vinyl decanoate with a k cat /K m value of 3511 s À 1 mM À 1 , while its ability to hydrolyse esters with a carbon chain length longer or shorter than C 10 was significantly lower. The lowest esterolytic activity was observed against vinyl acetate, with a k cat /K m value of 95 s À 1 mM À 1 . Based on these findings, it is evident that EstDZ3 is not a typical esterase as it accepts larger substrates than other esterases. It is interesting to note that these results are not in accordance with our initial substrate specificity analysis using pNP esters. [20] In that work, EstDZ3 displayed a high preference towards pNP esters with short to medium carbon chain length (C 2 À C 8 ), as expected for a typical esterase, with the highest affinity observed for p-nitrophenyl butyrate. The different specificity observed in the present work indicates that the overall size of the substrate might affect the affinity of the enzyme for each substrate. pNP esters are bulkier substrates compared to vinyl esters. Moreover, they differ in the alcohol moiety, which in turn might also have a potential effect on the overall specificity of EstDZ3. Docking esters into the homology model of EstDZ3 We continued by employing computational methods in an attempt to gain a deeper structural insight and rationalize the substrate specificity of EstDZ3. Currently there is no available three-dimensional structure for EstDZ3. The published EstDZ3 homology model, as predicted by iTasser [23] revealed that its 3D structure is characterized by the typical α/β hydrolase fold, while the catalytic triad is comprised by the residues S114, D202 and H233 [20] In order to obtain a refined 3D structure of EstDZ3 for our in silico studies, we utilized the algorithm provided by AlphaFold. [24] The 3D structure of the protein can be found in Figure S2. Molecular docking was performed to elucidate the binding mode of vinyl esters, for which kinetic data were obtained, to EstDZ3 and to provide an insight into the defining elements of the enzyme specificity. Table 2 summarizes the docking results (productive binding poses) of EstDZ3 against various vinyl esters with variable acyl chain sizes. As observed in the table, all substrates were docked successfully into the binding pocket of EstDZ3 with binding affinities ranging from À 3.03 to À .31 kcal mol À 1 . Vinyl decanoate exhibited the lowest binding energy as well as the lowest K d , which indicates that this substrate binds tightly into the active site. Regarding vinyl acetate, it is apparent that the acetate group cannot fully exploit the binding pocket for interactions (Table 2, Figure S3e), while vinyl decanoate (C 10 ) seems to be the optimal substrate size ( Figure S3b). Vinyl laurate (C 12 ) is positioned in a way which yields a bent of the alkyl group towards the solvent (Figure S3a), indicating that the size of the substrate is probably larger than what can be accommodated in the pocket. These results are in agreement with experimental K m values. 
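The docking scores and predicted dissociation constants discussed above are linked by Kd = exp(ΔG/RT); a small sketch of that conversion at 298 K (the ΔG values below are illustrative and not the actual Table 2 entries):

```python
import math

R = 1.987e-3          # gas constant in kcal mol^-1 K^-1
T = 298.15            # temperature in K

def kd_from_dg(dg_kcal_per_mol, temperature=T):
    """Predicted dissociation constant (mol L^-1) from a binding free energy."""
    return math.exp(dg_kcal_per_mol / (R * temperature))

for dg in (-3.0, -4.5, -6.0):       # hypothetical binding energies (kcal mol^-1)
    print(dg, kd_from_dg(dg))
```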
It seems that the active site of EstDZ3 cannot fully accommodate substrates with C12 chains or longer, hence the observed sharp decrease in enzyme activity for longer substrates. In addition, the binding mode for almost all substrates is similar. More specifically, the vinyl moiety of the substrate positions itself closer to the active site residues, S114 and H233, while the acyl chain extends into the rest of the enzyme's binding pocket and adopts an orientation that seems to be favoured by hydrophobic interactions. These docking results are in accordance with the experimental results described in the previous paragraph, as EstDZ3 exhibited the highest affinity for vinyl decanoate and the lowest affinity for vinyl acetate. It is noteworthy that the environment of the enzyme's active site is composed mainly of hydrophobic amino acids such as valine, glycine, and leucine (Figure S4). The important role of the hydrophobic interactions is also highlighted in Table 2, since most of the recorded interactions are between hydrophobic residues, apart from the polar interactions with the catalytic residues and the oxyanion hole. This may explain why the enzyme is selective for aliphatic, that is, more hydrophobic, substrates rather than aromatic ones, as the former are better stabilized inside the active site. Furthermore, as already mentioned above, the closest homolog to EstDZ3 that has been structurally and biochemically characterized is the cinnamoyl esterase Lj0536. This protein possesses an "inserted sub-domain" in close proximity to the active site, [21] which appears to play an important role in the ability of the enzyme to discriminate among different substrates. The sub-domain of Lj0536 is mainly comprised of polar residues (Figure S4b), as opposed to that of EstDZ3, which is mainly composed of hydrophobic ones (Figure S4a). This difference might also explain why Lj0536 displays a higher preference towards aromatic esters, [21] as opposed to the aliphatic ones showcased for EstDZ3 in this study.

Molecular dynamics simulation studies

Molecular dynamics simulations were performed on substrates with different alkyl-chain lengths to validate the stability of the binding modes of the vinyl esters. For this reason, we focused on vinyl decanoate (C10) and vinyl butyrate (C4), which differ significantly in terms of binding energy and acyl-chain length. In order to investigate the ability of each candidate substrate to maintain a proper catalytic orientation, we measured the catalytically relevant distances between the carbonyl carbon atom (Ca) of the substrates and the Oγ of the catalytic serine, as well as the distances between the carbonyl oxygen and the amide groups of the oxyanion hole residues, as a function of time during the simulation (Figures S5 and S6). In Figure 2, the distances between the substrates' Ca and the Oγ atom of the catalytic serine are represented as a function of simulation time. In the case of vinyl butyrate (Figure 2a), the distances drastically increase after 10 ns, ranging from ~10 to ~67 Å, indicating that the substrate is no longer accommodated in a catalytic orientation inside the active site and instead dissociates into the solvent. Indeed, as seen in Figure 3a, vinyl butyrate occupies a different position by the end of the simulation, far away from the protein's active site.
This orientation seems to be favoured by hydrophilic interactions between the substrate's carbonyl oxygen and water molecules of the simulation solvent (molecules not shown). Therefore, vinyl butyrate fails to maintain a stable conformation close to the catalytic residues due to the lack of hydrophobic interactions between the substrate's short acyl chain and the enzyme's active site. These results indicate a low affinity of the enzyme for vinyl butyrate. Regarding vinyl decanoate, the molecular dynamics analysis showed a different outcome. At the beginning of the simulation, the carbonyl carbon maintains a distance smaller than 4 Å from the serine Oγ atom. Throughout the course of the simulated time, fluctuations arise in the substrate's conformation, affecting its orientation in the active site (Figure 2b). The substrate seems to move further from the catalytic serine, which is evident from the increase in this distance, but it also maintains its catalytic orientation (<4 Å) for the majority of the simulated time (Table S2). This is a strong indication that, within the timeframe of the analysis, the substrate preserved a catalytic conformation. Additionally, the distances between the carbonyl oxygen and the oxyanion hole residues (Phe38 and Met115) show a similar behaviour (Figure S6). In Figure 3b, it is evident that the reorientation of the substrate towards the end of the simulation involves both the acyl-chain moiety and the vinyl moiety. This movement of the acyl chain is likely driven by the core hydrophobic pocket near the active site, which favours hydrophobic interactions. Nevertheless, Table S2 shows that, for the majority of the time, the substrate remained in a catalytically active conformation, based on the critical distances between the atoms of the complex. The aforementioned observations from the molecular dynamics analysis are in accordance with our experimental results and also further confirm our docking analysis. Vinyl decanoate is able to maintain a more stable and catalytically active orientation inside the active site, which indicates that the enzyme displays a higher affinity for this substrate and can thus exhibit higher catalytic activity against it compared to vinyl butyrate.
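A distance analysis of this kind can be reproduced on any trajectory with a few lines of code. The analysis reported here was carried out with YASARA; purely as an illustration, the sketch below uses the MDAnalysis library with placeholder topology/trajectory file names and an assumed atom name for the substrate's carbonyl carbon, and reports the fraction of frames that satisfy the <4 Å criterion used above.

```python
# Sketch: fraction of MD frames in which the substrate keeps a catalytic pose,
# judged by the Ser114 Ogamma -> substrate carbonyl-carbon distance (< 4 A).
# File names and the ligand atom name are assumptions, not the study files.
import numpy as np
import MDAnalysis as mda

u = mda.Universe("complex.pdb", "trajectory.xtc")   # placeholder files
ser_og = u.select_atoms("resid 114 and name OG")    # catalytic serine Ogamma
lig_c = u.select_atoms("resname LIG and name C1")   # assumed carbonyl carbon

distances = []
for ts in u.trajectory:
    d = np.linalg.norm(ser_og.positions[0] - lig_c.positions[0])
    distances.append(d)

distances = np.array(distances)
catalytic = distances < 4.0
print(f"mean distance: {distances.mean():.2f} A")
print(f"frames in catalytic orientation: {100 * catalytic.mean():.1f} %")
```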
Conclusions

In this work, we have showcased the ability of EstDZ3 to hydrolyse an extended spectrum of substrates. EstDZ3 was able to hydrolyse vinyl esters with great catalytic efficiency, with maximum efficiency against vinyl decanoate (C10). EstDZ3 exhibited lower activity towards bulky and aromatic substrates, a characteristic that may be attributed to the fact that its active site is mainly hydrophobic. Docking analysis revealed that the esterolytic specificity is governed by the chemical environment of the enzyme's active site, which was able to accommodate substrates with carbon chain lengths up to C10 with great ease. At the same time, hydrophobic interactions were decisive for the stability of these substrates in the binding pocket. Further analysis by molecular dynamics simulations validated our docking results, which in turn were in accordance with our experimental observations. In conclusion, the specificity of EstDZ3 is affected not only by the fatty acid chain length of the substrates, but also depends strongly on the nature of the alcohol part of the ester substrates.

Experimental Section

All chemical reagents were purchased from Sigma-Aldrich or Fluka. All organic solvents were HPLC grade.

Enzyme expression and purification: EstDZ3 recombinant production and purification was carried out as described previously. [20] Briefly, BL21(DE3) cells carrying the pLATE52-EstDZ3 plasmid were grown at 37°C until the culture reached an optical density at 600 nm (OD600) of about 0.5. At that point, the expression of estDZ3 was induced by the addition of isopropyl-β-d-thiogalactoside (IPTG), followed by overnight incubation at 25°C. The cells were harvested and EstDZ3 was recovered using the Qiagen IMAC purification kit according to the manufacturer's instructions, with the following modifications. The cell pellet was washed and resuspended in equilibration buffer supplemented with 1 % Triton X-100, and lysed by brief sonication. The cell extract was clarified by centrifugation and the supernatant was incubated for 30 min at 80°C in order to denature soluble Escherichia coli proteins. After heat treatment, the precipitated material was removed by centrifugation. The supernatant was collected and incubated with Ni-NTA agarose beads before it was loaded onto a polypropylene column (ThermoScientific). The flow-through was discarded, and the column was washed with NPI20 wash buffer containing 1 % Triton X-100. Next, Triton X-100 was washed away by passing standard NPI20 wash buffer through the column. EstDZ3 was eluted using NPI200 elution buffer. Imidazole was subsequently removed from the protein preparation using a Sephadex G-25 M PD10 column (GE Healthcare). EstDZ3 was obtained in apparent purity according to SDS-PAGE analysis. The protein preparation was lyophilized and stored at −20°C.

Enzyme activity assays: A colorimetric method was used for the determination of the substrate profile of EstDZ3, [25,26] with slight modifications. The purified and lyophilized EstDZ3 was dialysed in BES [N,N-bis(2-hydroxyethyl)-2-aminoethanesulfonic acid] buffer (2.5 mM, pH 7.2) at 1 mg mL−1. All substrates were diluted in acetonitrile at 30 mM. A standard reaction mixture contained 0.23 mM of the pH indicator p-nitrophenol (pNP), 7.1 % acetonitrile, 1 mM substrate and 30 μg mL−1 of EstDZ3. The enzymatic reaction was performed in a total volume of 1050 μL in a cuvette, in an Agilent Technologies Cary 60 UV-Vis spectrophotometer equipped with a Peltier system for temperature control. The decrease in absorbance was monitored at 405 nm, at 50°C, for 10 min, at 11 s intervals. The extinction coefficient of pNP under these conditions was found to be ε = 10,100 M−1 cm−1. To determine the kinetic parameters KM and Vmax, an enzyme concentration of 5 μg mL−1 was used, and the substrate concentration ranged from 0.02 to 2 mM. The EnzFitter software (Biosoft, UK) was used for data analysis and curve fitting to the Michaelis-Menten equation. All experiments and controls were performed in triplicate.

In silico analysis: All in silico experiments were performed using the YASARA modelling suite (YASARA Structure v.18).

AlphaFold Colab: For this work, the AlphaFold Colab notebook was utilized for the prediction of the protein's 3D structure. [24] The relaxation option was also enabled for the final prediction. The quality of the final structure was evaluated visually, checking for the proper orientation of the catalytic residues, as well as against the quality metrics reported by the AlphaFold platform.

Molecular docking: Molecular docking was performed using the VINA algorithm [27] and the AMBER03 force field [28] at 50°C in vacuo.
The simulation cell was 7 Å towards each axis direction from the Oγ atom of the catalytic serine residue. The protein's side chains were kept flexible during the docking procedure, creating a total of five protein ensembles with different side-chain rotamers. For each ensemble, 25 docking runs were executed, and the resulting ligand structures clustered when the ligand RMSD was lower than 5 Å. Prior to docking, energy minimization was performed for both the protein structure and the ligands using YASARA2 force field. All figures were prepared by using PyMOL v 0.99. The parameters that were used to identify the catalytically active docked conformations of each ligand were: 1) stability of the hydrogen bonding interactions with the oxyanion hole, 2) the distance between the catalytic serine Oγ atom and the substrate Cα to be less than 4 Å, 3) low free energies of binding. Molecular dynamics simulations: MD simulations were used to analyse and further confirm the binding poses that occurred from molecular docking. The boundaries of the simulation cell were set to periodic to avoid surface tension effects. The TIP3P solvation model was used for the simulation of explicit water molecules and the YASARA2 force field for the simulation of the complexes. [29] The setup of this work included the optimization of the hydrogen bonding network [30] to increase the protein's stability, as well as a pK a prediction so that the protonation states of the protein's residues would represent the model at the chosen pH of 7.2. The cell charge was neutralized by adding NaCl ions with a physiological concentration of 0.9 %. Afterwards, the system was subjected to energy minimization with the steepest descent and simulated annealing algorithms in order to remove clashes. The simulation was then run for a total duration of 50 ns and the time step was set to 1.25 fs for bonded interactions and 2.5 fs for nonbonded interactions at a temperature of 323.15 K and constant pressure of 1 atm (NPT ensemble) using the Particle Mesh Ewald algorithm for the interactions of long-range electrostatic forces. [31] Finally, the coordinates of all the atomic positions were stored every 20 ps and the recurring snapshots were plotted for the trajectory analysis.
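The three acceptance criteria listed above can be expressed as a simple filter over docked poses. The sketch below is only illustrative: the pose records, the energy threshold, and the hydrogen-bond count are assumptions, with only the 4 Å distance criterion taken directly from the text.

```python
# Sketch: filter docked poses using the three criteria described in the text
# (oxyanion-hole hydrogen bonding, Ser Ogamma to carbonyl-carbon distance
# below 4 A, low binding free energy). Pose records are invented examples.
from dataclasses import dataclass

@dataclass
class Pose:
    name: str
    binding_energy: float      # kcal/mol (more negative = tighter)
    ser_og_to_carbonyl: float  # Angstrom
    oxyanion_hbonds: int       # H-bonds to the oxyanion-hole amides

def is_productive(pose, max_dist=4.0, max_energy=-3.0, min_hbonds=1):
    # max_energy and min_hbonds are assumed thresholds for illustration
    return (pose.ser_og_to_carbonyl < max_dist
            and pose.binding_energy <= max_energy
            and pose.oxyanion_hbonds >= min_hbonds)

poses = [
    Pose("long-chain ester / cluster 1", -6.1, 3.2, 2),
    Pose("short-chain ester / cluster 3", -3.0, 4.8, 1),
]

for p in poses:
    print(p.name, "->", "productive" if is_productive(p) else "rejected")
```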
v3-fos-license
2018-04-03T03:18:21.434Z
2016-08-09T00:00:00.000
1849037
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://ojrd.biomedcentral.com/track/pdf/10.1186/s13023-016-0496-x", "pdf_hash": "b38c4df49b62ce89ab6b2d4d8111c0aa961260f5", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46307", "s2fieldsofstudy": [ "Medicine" ], "sha1": "b38c4df49b62ce89ab6b2d4d8111c0aa961260f5", "year": 2016 }
pes2o/s2orc
Clinical and endocrine characteristics and genetic analysis of Korean children with McCune–Albright syndrome: a retrospective cohort study Background McCune–Albright syndrome (MAS) is a rare disease defined by the triad of fibrous dysplasia (FD), café au lait spots, and peripheral precocious puberty (PP). Because of the rarity of this disease, only a few individuals with MAS have been reported in Korea. We describe the various clinical and endocrine manifestations and genetic analysis of 14 patients with MAS in Korea. Methods Patients’ clinical data—including peripheral PP, FD, and other endocrine problems—were reviewed retrospectively. In addition, treatment experiences of letrozole in five patients with peripheral PP were described. Mutant enrichment with 3′-modified oligonucleotides - polymerase chain reaction (MEMO-PCR) was performed on eight patients to detect mutation in GNAS using blood. MEMO-PCR is a simple and practical method that enables the nondestructive selection and enrichment of minor mutant alleles in blood. Results The median age at diagnosis was 5 years 2 months (range: 18 months to 16 years). Eleven patients were female, and three were male. Thirteen patients showed FD. All female patients showed peripheral PP at onset, and three patients subsequently developed central PP. There was a significant decrease in estradiol levels after two years of letrozole treatment. However, bone age was advanced in four patients. Two patients had clinical hyperthyroidism, and two patients had growth hormone (GH) excess with pituitary microadenoma. c.602G > A (p.Arg201His) in GNAS was detected in two patients in blood, and c.601C > T (p.Arg201Cys) in GNAS was detected in one patient in pituitary adenoma. Conclusions This study described the various clinical manifestations of 14 patients with MAS in a single center in Korea. This study first applied MEMO-PCR on MAS patients to detect GNAS mutation. Because a broad spectrum of endocrine manifestations could be found in MAS, multiple endocrinopathies should be monitored in MAS patients. Better treatment options for peripheral PP with MAS are needed. Background McCune-Albright syndrome (MAS) is a rare congenital sporadic disorder, and the precise prevalence of MAS is not known (the estimated prevalence ranges between 1/ 100,000 and 1/1,000,000) [1]. MAS is defined by the triad of polyostotic fibrous dysplasia of bone (FD), café au lait skin pigmentation, and peripheral precocious puberty (PP). Other multiple endocrinopathies-including hyperthyroidism, growth hormone (GH) excess, hypercortisolism, and renal phosphate wasting-could be associated with the original triad [2]. Peripheral PP is the most common endocrine manifestation of MAS, and it is much more frequently found in girls than in boys [2]. It arises due to the autonomous activation of ovarian tissue [2]. Current treatment of peripheral PP in girls with MAS revolves around the use of anti-estrogens, including aromatase inhibitors (AIs) and estrogen receptor modulators [3]. We tried to evaluate the efficacy and safety of letrozole, a third-generation AI, in girls with peripheral PP-associated MAS. This syndrome is caused by a postzygotic somatic activating mutation in the GNAS gene encoding the Gprotein alpha subunit (Gsα). Activating Gsα mutations that induce constitutive activation of the cAMP signaling pathway leads to multiple clinical manifestations [4]. 
In MAS, mutations are exclusively present in the somatic mosaic state, and mutation abundance is generally low in unaffected tissues. Thus, it is difficult to detect mutations in peripheral blood leukocytes by standard Sanger sequencing. However, biopsy of affected tissue to identify the genetic defect is too invasive, requiring surgical intervention. In this regard, we applied the mutant enrichment with 3′-modified oligonucleotides -polymerase chain reaction (MEMO-PCR) method for the detection of even low levels of mutant alleles using peripheral blood leukocytes. Because of the rarity of this disease, only a few patients with MAS have been reported in Korea. Here, we describe the various clinical manifestations and genetic analysis of 14 patients with MAS in a single center in Korea. Methods We performed a retrospective study on 14 patients with MAS who were followed over 16 years (1999-2015) at the Samsung Medical Center. The diagnoses were made based on the following clinical criteria. Patients were required to exhibit at least two of the three major features of MAS (hyperfunctioning endocrinopathies, polyostotic FD, and café au lait spots) [1]. Initial evaluation of MAS included laboratory and radiographic studies (skeletal surveys). Eight patients underwent genetic studies of peripheral blood or affected tissue. Written informed consents were obtained from the parents of each patient, and the Institutional Review Board approved the study (IRB file number: 2012-12-054). Endocrinopathies Eleven girls with clinically suspected PP were firstly evaluated for serum levels of luteinizing hormone (LH), follicle-stimulating hormone (FSH), and estradiol at baseline. A gonadotropin-releasing hormone (GnRH) stimulation test was then performed to differentiate gonadotropindependent PP from gonadotropin-independent PP. X-rays of the hand and wrist to determine bone age were checked regularly for patients diagnosed with PP. Patients with vaginal bleeding were questioned about episodes of menstruation at every follow-up. In addition, patients were assessed with pelvic ultrasound for measurement of uterine and ovarian volumes and evaluation of abnormal findings, such as ovarian cysts. In addition, we evaluated thyroid-stimulating hormone (TSH), total triiodothyronine (T3), and free thyroxine (FT4). A GH suppression test was performed for two patients (Patients 3 and 12) with tall stature and acromegalic features, and a brain MRI was done for these patients to localize the source of GH excess. Fibrous dysplasia The diagnosis of FD was established on a clinical and radiological basis for all patients and from bone biopsy for five patients who underwent orthopedic surgery due to pathological fracture. Plain radiographs are often sufficient to diagnose FD. Eight patients had a bone scan to determine the extent of the disease. Among the patients clinically suspected of craniofacial FD, craniofacial computed tomography (CT) was performed in eight. Genetic analysis Eight out of the 14 patients agreed to perform genetic tests for MAS. After obtaining informed consent, genomic DNA was extracted from peripheral blood leukocytes using the Wizard Genomic DNA Purification kit following the manufacturer's instructions (Promega, Madison, WI). 
The exon 8 region of the GNAS gene was tested by conventional Sanger sequencing with a primer set (forward: 5′-ggactctgagccctctttcc-3′, reverse: 5′-accacg aagatgatggcagt-3′) as well as MEMO-PCR using a primer set (forward: 5′-tgtttcaggacctgcttcg-3′, reverse: 5′-gaa cagccaagcccacag-3′, blocking: 5′-cttcgctgccgtgtcctg-6-ami ne-3′) followed by sequencing with the reverse primer. The PCR was performed with a thermal cycler (model Veriti, Applied Biosystems, Foster City, CA, USA), and sequencing was performed with the BigDye Terminator Cycle Sequencing Ready Reaction kit (Applied Biosystems) on the ABI Prism 3100xl genetic analyzer (Applied Biosystems). To describe sequence variations, we followed the guidelines by the Human Genome Nomenclature Committee (HGVS) such that "A" of the ATG translation start site was numbered +1 for a DNA sequence and the first methionine was numbered +1 for a protein sequence. We additionally performed Sanger sequencing using brain tissue with pituitary microadenoma on Patient 12. Statistical analysis The mean changes of hormone levels and uterine sizes between before and two years after treatment with letrozole were compared using a paired t test; p < 0.05 was considered statistically significant, and data are expressed as means ± standard deviations (SD). The statistical analyses were performed using the SPSS program (version 21.0). Results The median age at diagnosis of MAS was 5 years 2 months (range: 18 months to 16 years). All patients had been diagnosed as having MAS by the time they were aged 16 years or younger, with 12 patients having been diagnosed before 10 years of age. Five patients had been diagnosed by the time they were aged 3 years or younger. The proportion of female patients (79 %) was overwhelmingly higher than that of male patients (21 %). Patients' clinical characteristics are summarized in Table 1. The most common symptoms at diagnosis were vaginal bleeding or breast development in female patients (7/11, 64 %) and pathological fracture in male patients (2/3, 67 %). Precocious puberty Eleven out of 14 patients showed symptoms of peripheral PP. All patients with peripheral PP were female. The median age at onset of initial symptoms of peripheral PP was 3 years (range: 18 months to 6 years 7 months). Peripheral PP was confirmed in 9 out of 11 patients using a GnRH stimulation test, and one (Patient 8) of them subsequently developed central PP during the treatment of peripheral PP. Two patients (Patients 5 and 11) exhibited central PP at diagnosis through a GnRH stimulation test. It was assumed that these two patients had peripheral PP before from the history of vaginal bleeding in early childhood (at the ages of 2 and 3 years old, respectively). Patient 5 was diagnosed as having central PP late at 9 years old; therefore, she was monitored for symptoms of pubertal progression without GnRH analogue treatment. Patient 11 was diagnosed as having central PP at the age of 7 years 1 month and treated with GnRH analogue therapy. Because she showed frequent vaginal bleeding, letrozole treatment was added to GnRH analogue therapy at the age of 8 years. Patient 8 started to receive letrozole treatment at the age of 6 years 7 months but subsequently developed central PP at the age of 8 years and started to receive GnRH analogue therapy. We analyzed the results of two-year treatment in five patients (Patients 7, 8, 9, 10, and 11) treated with letrozole, the third-generation AI, and followed up regularly in our pediatric endocrinology clinic. 
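Before turning to the treatment results, the paired pre/post comparison described in the statistical analysis (paired t test, significance at p < 0.05, results expressed as mean ± SD) can be sketched as follows; the hormone values shown are invented placeholders, not patient data.

```python
# Sketch: paired t test comparing hormone levels before and after two years of
# treatment, following the statistical analysis described in the Methods.
# The values below are invented placeholders.
import numpy as np
from scipy import stats

pre  = np.array([55.0, 30.2, 110.5, 48.7, 72.3])   # e.g., estradiol (pg/mL) at baseline
post = np.array([ 1.8,  2.5,   3.9,  1.2,  2.0])   # after two years of treatment

t_stat, p_value = stats.ttest_rel(pre, post)

print(f"mean +/- SD before: {pre.mean():.1f} +/- {pre.std(ddof=1):.1f}")
print(f"mean +/- SD after:  {post.mean():.1f} +/- {post.std(ddof=1):.1f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f} "
      f"({'significant' if p_value < 0.05 else 'not significant'} at p < 0.05)")
```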
Letrozole was Table 2. All five patients had experiences of vaginal bleeding at diagnosis, and two patients (Patients 7 and 8) showed a reduction in the frequency of menstruation while taking letrozole. No significant changes in the pubertal stages of breasts were seen throughout the study period. Pelvic ultrasound examination revealed ovarian cysts in four patients (Patients 7, 9, 10, and 11) during the treatment periods. Three patients had a unilateral cyst (Patients 7, 10, and 11). The ovarian cyst had disappeared one year after letrozole treatment in Patient 7, and the ovarian cyst size had decreased after two years of treatment in Patient 10. Patient 9 showed bilateral ovarian cysts before treatment, and a left ovarian cyst disappeared during the treatment period. The unilateral ovarian cyst had newly appeared after two years' treatment in Patient 11. Average uterus lengths had increased from 49.8 ± 6.9 mm to 55.2 ± 18.1 mm (p = 0.44), and average widths had also increased from 13 ± 1.9 mm to 16.2 ± 7.9 mm (p = 0.41) after letrozole treatment. However, there were no significant differences in uterine size. Hormone levels and pelvic ultrasound findings are shown in Table 3. All five patients experienced a significant decrease in serum estradiol on treatment. After two years' treatment, average levels of estradiol had decreased from 63.4 ± 40.8 pg/ml to 2.2 ± 1.1 pg/ml (p ≤ 0.03). However, LH and FSH levels showed no significant change before and after letrozole treatment. The bone age advancement (defined as bone agechronological age) was decreased in Patient 7; however, the other four patients showed further advanced bone age. There was no significant change in the height standard deviation score (SDS) during the treatment period. The treatment was well tolerated, and no significant adverse events, such as ovarian torsion, occurred in any patient treated with letrozole. Hyperthyroidism Four patients (4/14, 29 %) showed abnormal findings of TSH and/or FT4; three were female (Patients 6, 8, and 10), and one was male (Patient 4). Two patients (Patients 4 and 10) showed increased FT4 levels and confirmed clinical hyperthyroidism (age at diagnosis: 2 and 2.5 years, respectively). Patient 10 presented with tachycardia at diagnosis, and innumerable small cystic lesions in the thyroid gland were revealed by thyroid ultrasound. Medical treatment with methimazole was started, and euthyroid status was achieved in these two patients. In Patients 6 and 8, TSH had decreased, while FT4 levels remained in the normal range. In addition, Patient 8 revealed small cystic nodules in the thyroid gland by thyroid ultrasound. They have been regularly checked up on for TSH and thyroid hormone without treatment. Pituitary adenoma producing GH GH excess was observed in two patients (Patients 3 and 12). Patient 12 developed acromegaly at the age of 17 years. GH was not suppressed in the GH-suppression test. Pituitary MRI revealed a left pituitary adenoma 7 mm in size without a significant change in FD. A bone scan revealed polyostotic FD in the craniofacial bone and left iliac bone. After tumor removal by endoscopic endonasal surgery, GH was suppressed well. Pituitary pathology revealed a pituitary adenoma. Patient 3 was diagnosed with MAS at 5 years 3 months and initially presented with FD of the craniofacial bones and café au lait spots. During follow-up, this patient showed GH excess at the age of 14 years, and surgery for tumor removal has been planned. 
Renal phosphate wasting Two patients (Patients 4 and 10) had renal phosphate wasting with hypophosphatemia and received phosphate supplements; neither had signs of rickets on X-ray findings. Fibrous dysplasia Thirteen patients had polyostotic FD. The most common sites of FD involvement were the craniofacial bones. All 13 patients with FD had craniofacial FD, and three patients had FD only in the craniofacial bones. FD in the craniofacial bones and limbs was found in eight patients, and involvement of the axial skeleton was found in two patients (Table 4). FD in the extremities usually presented (Patients 3 and 4), and a painless "lump" or asymmetric feature was the presenting sign when FD occurred in the craniofacial bones (Patients 2, 5, and 11). Genetic analysis Of the eight patients who underwent genetic testing for mutations in GNAS in peripheral blood, GNAS mutations (p.Arg201His) were detected in two (Patients 3 and 4) by MEMO-PCR (Fig. 1). In the case of Patient 12, who was diagnosed with a pituitary adenoma, GNAS mutation (p.Arg201Cys) was detected in this tissue by Sanger sequencing but not in peripheral blood leukocytes by both Sanger sequencing and MEMO-PCR (Fig. 2). The conventional Sanger sequencing method from peripheral blood cells did not detect an activating mutation of GNAS in any of the eight patients, as expected. Discussion MAS is characterized by various endocrinopathies, including hyperthyroidism, GH excess, and renal phosphate wasting, as well as peripheral PP, as the tissue distribution of Gsα expression is broad [5]. This study described the various clinical characteristics in 14 Korean patients with MAS. In addition, we applied MEMO-PCR to detect somatic mutations in GNAS using blood and investigated the clinical response to letrozole in patients with peripheral PP. Peripheral PP is the most frequent initial presentation of MAS and is much more common in girls than in boys [2]. In this study, peripheral PP was observed in all female patients but not in any males. Treatment of peripheral PP-associated MAS, including AIs and estrogen receptor antagonists, has been evolving for decades [6][7][8][9][10]. However, an ideal pharmacological treatment of peripheral PP-associated MAS has not been identified. AIs prevent the conversion of androgens to estrogens and, thereby, reduce the serum levels of estrogens [3]. Recent reports of the most potent third-generation AIs, anastrozole and letrozole, yielded mixed results [7,8]. A pilot study of nine girls treated for 12-36 months with letrozole indicated decreased rates of growth, bone maturation, and vaginal bleeding [7]. However, mean ovarian volumes tended to increase over time, and one patient experienced ovarian torsion. A systemic prospective study of anastrozole for the treatment of peripheral PP in 27 girls with MAS over one year found that it was ineffective in halting vaginal bleeding, attenuating rates of skeletal maturation, and increasing linear growth [8]. Tamoxifen, a selective estrogen receptor modulator, was found to have positive results in a year-long multicenter trial of 25 girls with peripheral PP and MAS [9]. However, uterine volumes were unexpectedly found to increase throughout the study and raised safety concerns given the association of tamoxifen and stromal tumors. In the subset of girls with frequent vaginal bleeding or progressive forms of PP, pharmacological intervention was applied in order to prevent early epiphyseal fusion and reduce the frequency of vaginal bleeding. 
In this study, the potent third-generation AI letrozole was used for the treatment of peripheral PP with MAS. Letrozole has demonstrated some short-term success in one study [7], but further investigations are needed. Therefore, we also evaluated the safety and efficacy of letrozole. In five patients who were treated with letrozole for two years, there was a significant decrease in estradiol levels; however, bone age was further advanced in four patients. Uterine size did not show a significant change during therapy. Recently, a prospective study revealed that a pure estrogen receptor blocker, fulvestrant, was effective in decreasing vaginal bleeding and rates of skeletal maturation in 13 girls with MAS over 12 months [10]. Long-term studies comparing available medications are needed. Although the PP in MAS is gonadotropin-independent, secondary activation of the hypothalamic-pituitary-gonadal axis may occur, resulting in concurrent central PP. In this study, two patients (Patients 5 and 11) showed central PP at diagnosis. They had initially presented with facial abnormalities at the ages of 9 years and 7 years 1 month, respectively, and they had been referred to an endocrinologist due to previous histories of vaginal bleeding. In addition, one patient (Patient 8) subsequently developed central PP during the treatment of peripheral PP. Central PP in these children might be caused by extensive sex steroid exposure due to uncontrolled peripheral PP [11]. Thyroid disorder is the second most common endocrinopathy in MAS [12]. One retrospective polycentric study analyzed 36 MAS patients, followed over 20 years; 11 patients (31 %) had functional and/or morphological thyroid dysfunctions [13]. In our study, four patients (4/14, 29 %) showed functional and/or morphological thyroid abnormalities, and two of them (2/14, 14 %) had clinical hyperthyroidism with treatment. For this reason, strict monitoring of thyroid function is recommended every six months in patients with MAS [13]. GH excess affects about 20 % of patients with MAS [14,15]. The median age of GH excess patients in previous reports was 20 years. In this study, all 14 patients were younger than 20, and only two patients showed acromegaly (diagnosed at the ages of 14 and 17 years, respectively). Since acromegaly with MAS is usually accompanied by craniofacial FD, the diagnosis of acromegaly may be delayed by craniofacial FD, masking the dysmorphic craniofacial effect of acromegaly [16]. Therefore, it is important to perform laboratory screening, such as IGF-I, for GH excess in patients with craniofacial FD. FD is the most common component of MAS [5], and it usually involves the craniofacial bones [17]. Thirteen patients in our study had polyostotic FD involving the craniofacial bones. Isotopic bone scans are useful not only for detecting the extent of the disease but also for quantifying the skeletal disease burden of FD and predicting functional outcome [17]. The relative prevalence of the café au lait spots in the National Institutes of Health (NIH) cohort of patients with FD/MAS was 66 % (140 patients followed over 24 years) [5]. In this study, café au lait spots were found in 9 of our 14 patients (64 %). The mechanism of skin pigmentation in MAS is the activating mutation of Gsα in affected melanocytes and augmentation of tyrosinase gene expression, which results in melanin overproduction on affected melanocytes by increased cAMP-mediated signal transduction [18]. 
The diagnosis of MAS usually depends on clinical features, and it is not always straightforward, especially in the absence of the classical triad. Genetic analysis of the affected tissue would likely provide diagnostic confirmation of the clinical suspicion of MAS; however, obtaining affected tissues is invasive. We, therefore, applied a non-invasive genetic test to confirm the diagnosis of MAS. Based on the fact that activating GNAS mutations mostly occur in the Arg201 residue in MAS, a method for the selective enrichment of Arg201 GNAS mutations using a series of nested PCRs and restriction enzyme digestion was developed [19][20][21][22][23]. We performed a simple, practical enrichment technique, MEMO-PCR, for the detection of somatic mutations in GNAS. The concept of this technique is similar to that of peptide nucleic acid (PNA)/locked nucleic acid (LNA)-mediated PCR clamping, but the PNA or LNA is replaced by 3′-modified oligonucleotides, which are much less expensive and are easy to design [24]. In this study, the detection rate of MEMO-PCR from peripheral blood leukocytes was 25 % (2/8). Although the test was performed with a small number of patients, the mutation detection rate of MEMO-PCR was not significantly different from that of the previous PNA/LNA-mediated PCR clamping [22,23]. There are several limitations to our study. MAS is rare, and a limited number of patients had been treated with letrozole therapy; thus, data from an untreated control group of subjects were not available. For this reason, it is difficult to confirm the therapeutic effects of letrozole on patients with MAS. A greater number of subjects and longer periods of treatment are needed before the safety and effectiveness of estrogen receptor modulators such as tamoxifen and fulvestrant as well as AIs including letrozole can be confirmed. Conclusions This study described the various clinical and endocrine manifestations of 14 patients with MAS in a single center in Korea. In addition, this study first applied MEMO-PCR on patients with MAS to detect lowabundance somatic GNAS mutation using peripheral blood. A broad spectrum of endocrine manifestations was found in this study. Multiple endocrinopathies should be monitored in patients with MAS through careful physical examinations with history taking and serial endocrine function tests. In this study, we could not definitively conclude the efficacy of two-year letrozole treatment without any severe adverse effects. Better treatment options for peripheral PP and for improving the quality of life of patients with MAS are needed.
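As a side note to the genetic findings discussed above, the consistency between the reported nucleotide changes (c.601C>T and c.602G>A) and their protein-level annotations (p.Arg201Cys and p.Arg201His) can be checked with a short script. The reference codon used below (CGT) is an assumption: it is an arginine codon compatible with both reported variants, but the actual GNAS reference sequence should be consulted.

```python
# Sketch: verify that the reported GNAS variants translate to the annotated
# amino acid changes at Arg201. The reference codon CGT is an assumption.
from Bio.Seq import Seq

AA3 = {"R": "Arg", "C": "Cys", "H": "His"}   # one- to three-letter, for display

def apply_snv(codon, pos_in_codon, new_base):
    bases = list(codon)
    bases[pos_in_codon] = new_base
    return "".join(bases)

# Codon 201 spans coding positions c.601-c.603 (positions 3n-2 to 3n for codon n).
ref_codon = "CGT"                                  # assumed Arg201 reference codon
variants = {"c.601C>T": (0, "T"), "c.602G>A": (1, "A")}

ref_aa = str(Seq(ref_codon).translate())
print(f"reference codon {ref_codon} -> {AA3[ref_aa]}201")
for name, (pos, base) in variants.items():
    mut_codon = apply_snv(ref_codon, pos, base)
    mut_aa = str(Seq(mut_codon).translate())
    print(f"{name}: {ref_codon} -> {mut_codon} = p.Arg201{AA3[mut_aa]}")
```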
v3-fos-license
2024-07-27T15:02:04.422Z
2024-07-25T00:00:00.000
271485470
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.3389/fpsyg.2024.1382614", "pdf_hash": "c7854b81c2a79b1d0e02ad13cf70a3c2d7bbac8f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46308", "s2fieldsofstudy": [ "Psychology" ], "sha1": "ddae8c153da565c4ddc822fa1ed6010e73081d84", "year": 2024 }
pes2o/s2orc
Evaluating the before operational stress program: comparing in-person and virtual delivery Introduction Public safety personnel (PSP) are at increased risk for posttraumatic stress injuries (PTSI). Before Operational Stress (BOS) is a mental health program for PSP with preliminary support mitigating PTSI. The current study compared the effectiveness of delivering BOS in-person by a registered clinician (i.e., Intensive) to virtually delivery by a trained clinician (i.e., Classroom). Methods Canadian PSP completed the Intensive (n = 118; 61.9% male) or Classroom (n = 149; 50.3% male) program, with self-report surveys at pre-, post-, 1 month, and 4 months follow-ups. Results Multilevel modelling evidenced comparable reductions in anxiety (p < 0.05, ES = 0.21) and emotional regulation difficulties (ps < 0.05, ESs = 0.20, 0.25) over time with no significant difference between modalities. Participants discussed benefits of the delivery modality they received. Discussion The results support virtual delivery of the BOS program (Classroom) as an accessible mental health training option for PSP, producing effects comparable to in-person delivery by clinicians. Introduction Canadians rely on diverse public safety personnel (PSP; e.g., police, firefighters, paramedics) to ensure their safety and well-being (Mendicino and Blair, 2022).PSP are at increased risk for posttraumatic stress injuries (PTSI) (Carleton et al., 2020a;Heber et al., 2023) resulting from operational and organizational stressors (Carleton et al., 2020a).Almost half (44.5%) of PSP screen positive for one or more mental health disorders (Vig et al., 2020) and many report lifetime suicidal ideation (27.8%), planning (13.3%), and attempts (4.6%) (Carleton et al., 2018a).PTSI are associated with frequent exposures to potentially psychologically traumatic events (PPTEs; i.e., direct or indirect exposures to actual or Ioachim et al. 10.3389/fpsyg.2024.1382614Frontiers in Psychology 02 frontiersin.orgthreatened death, serious injury, or sexual violence) (Carleton et al., 2019a;Heber et al., 2023).PTSI are a substantial concern for PSP mental health and the Canadian government has recently developed an action plan to address challenges related to these mental health difficulties (Public Safety Canada, 2019).There are several effective treatments for PTSI, such as cognitive behavioural therapy (CBT), that appear beneficial for PSP (Foa and Rothbaum, 2001;Ponniah and Hollon, 2009;Carleton, 2021;Hadjistavropoulos et al., 2021).Proactive solutions to mitigate PTSI also exist, often focusing on providing training or peer support to bolster resilience, minimize stigma, and develop individual stress management skills (Carleton et al., 2020b;Stelnicki et al., 2021).There has been relatively little research on such proactive efforts, with the available evidence suggesting small time-limited effects, necessitating recommendations for larger longitudinal research efforts (Anderson et al., 2020;Di Nota et al., 2022). 
Overview of the before operational stress program The Before Operational Stress (BOS) program was designed to provide access to effective evidence-based mental health training for PSP (McElheran et al., 2020;Stelnicki et al., 2021).The BOS programming is based on teaching core CBT skills and providing evidence informed learning content, taught by a clinician trained in CBT principles and familiar with PSP culture and treatment.BOS program combines theoretical and experiential learning procedures designed to improve resiliency, strengthen interpersonal relationships, and mitigate the effects of operational stressors.Participants complete one module per week for 8 weeks.The first six modules involve teaching participants to identify, understand, and navigate the connection between thoughts, emotions, physiological sensations, and behaviour.The final two modules focus on enhancing interpersonal relationships by teaching communication skills and empathy.Throughout the program, participants explore various mental health related topics to expand their knowledge of OSI and effective coping strategies.The program also emphasizes the importance of PSP maintaining healthy, authentic relationships, which can provide important mental health benefits for PSP as previous research suggests they are more likely to seek support from spouses and friends than professionals (Carleton et al., 2019b(Carleton et al., , 2020b;;Nisbet et al., 2023). The BOS Intensive program is facilitated in-person by a trained clinician in a group setting, following the one module per week for eight weeks model.Each 2 hour module-session is approximately evenly divided between a facilitator-led didactic component communicating program content, and 1 hour of group processing where participants share thoughts on the program content and discuss its application in their lives.A prior evaluation of the BOS Intensive program evidenced small but statistically significant improvements in PTSD symptoms and quality of life measures, increases in perceived social support, and reductions in mental health stigma associated with the training (Stelnicki et al., 2021).This evaluation was conducted with a smaller sample of participants who completed the training through the Original BOS Intensive (in-person) delivery modality and surveyed participants at four time points (before and after the training, and at 1 month and 3 month follow-ups).The available outcomes were associated with improved communication skills and more positive behaviours toward family members, as evidenced by qualitative responses. The BOS Classroom program, consisting of 1 h sessions delivered virtually by a trained clinician, was developed to enhance training accessibility for individuals in areas lacking BOS-trained clinicians.The BOS Classroom modality removes the group processing aspect of the Intensive training to solely focus on delivery of the didactic component, therein shortening each module-session to approximately one hour.BOS Classroom became particularly important for maintaining training accessibility during public gathering restrictions imposed during the COVID-19 pandemic.The virtual training also helped address logistical access barriers by providing flexible timing and avoiding stigma barriers still prevalent in PSP workplaces (Rice et al., 2019;Hadjistavropoulos et al., 2021). 
To date there has been no evaluation of the effectiveness of the BOS Classroom program and, by extension, no comparative assessment with the BOS Intensive program.Extant research indicates internet-based Cognitive Behavioural Therapy (ICBT) can be as effective as face-to-face CBT in treating a range of mental health challenges, offering similar therapeutic benefits, but with the added convenience of accessibility and flexibility (Andersson et al., 2019;Hadjistavropoulos et al., 2021;Thew et al., 2022).Previous evaluations of mindfulness training with military personnel demonstrated in-person training as producing better improvements in sleepiness, pain, and energy, than virtual training (Rice and Schroeder, 2021).The researchers suggested the in-person training benefits were supported by having longer sessions, stronger interpersonal bonds between the instructor and classmates (Rice and Schroeder, 2021), face-to-face interactions, greater accountability, and in-class participation (Rice et al., 2019). The current study The current study was designed to evaluate the effectiveness of the BOS Classroom program for improving mental health.The study hypotheses were: (1) participation in the BOS program would be associated with reductions in mental health symptoms, substance use, and mental health stigma, as well as increases in perceived social support, emotional regulation, resilience, and quality of life; and (2) the outcomes of the BOS Classroom program would be largely comparable to the BOS Intensive program. We anticipate that the effect sizes in the BOS Classroom will be smaller than those in the BOS Intensive, although the direction of the effects is expected to be consistent across both modalities.This difference in effect sizes is expected due to the distinct characteristics of the Classroom modality, such as reduced group processing and shorter session durations.Although previous research suggests in-personal and virtual CBT training and treatment programs can have similar effectiveness (Spek et al., 2007;Andersson et al., 2016;Carlbring et al., 2018;Rice et al., 2018;Zhang et al., 2022;Alavi et al., 2023), BOS Classroom also features a reduced session time and omits the group processing component, which may affect the overall effectiveness of the training.The current study builds upon the findings of the previous independent evaluation of the BOS program (Stelnicki et al., 2021).Our study extends this work by comparing delivery modalities and employing a comprehensive mixed-methods approach, with a specific focus on contrasting the effectiveness of in-person and virtual delivery modalities. 
Procedure The current study was approved by the University of Regina Institutional Research Ethics board .Data were collected for two training modalities: BOS Intensive (i.e., in-person with group processing components) and BOS Classroom (i.e., virtual, didactic components only).The quasi-experimental design resulted from convenience samples arising from BOS Intensive groups being scheduled in-person until implementation of COVID-19 pandemic gathering restrictions, with BOS Classroom groups being scheduled after onset of the gathering restrictions.All participants completed a standardized intake interview prior to starting the BOS program, during which they were informed about potential voluntary participation in an independent research study evaluating the BOS training.Participants were screened prior to training for acute mental health distress or severe PTSD symptoms, and those in need were referred to therapy intervention resources.Participants who expressed interest were emailed a consent form along with a link to the first survey.Due to confidentiality assured during BOS training, data was only recorded from participants who expressed interest in the study, and the total number of participants in the BOS training at the time outside of study participation is not available.The research participation was voluntary, anonymous, and did not impact eligibility for the BOS program.Neither the clinical facilitators nor other group members were aware which members chose to participate in the study.No questions were mandatory, and participants could withdraw at any time without consequences for their training.Surveys were administered at four time points: pre-training, post-training, 1 month follow-up, and 4 months follow-up.All surveys contained the same measures, with pre-training also including sociodemographic measures, and post-training having an additional five open-ended questions requesting feedback on participant experiences in the training program.Although training recruitment was aimed at early career recruits, training was open to all career stages.As PSP will likely experience multiple exposure to PPTEs in their careers, BOS is intended as a proactive measure for any future exposures a PSP may experience. Measures Study measures were chosen based on module content of the BOS program and to facilitate comparison with the BOS pilot study evaluation (Stelnicki et al., 2021). Alcohol use disorders identification test The Alcohol Use Disorders Identification Test [AUDIT; Babor et al., 2001] is a 10-item self-report measure of potentially hazardous alcohol use.Participants are asked to rate each item on a 5-point Likert scale ranging from 0 (never) to 4 (daily or almost daily).Higher scores indicate more potentially hazardous alcohol use.The AUDIT is widely employed and has evidence of adequate psychometric properties (Reinert and Allen, 2002;de Meneses-Gaya et al., 2009;Peng et al., 2012).AUDIT scores can be used to screen for hazardous alcohol use (>7) and alcohol dependence (>15). Brief resiliency scale The Brief Resiliency Scale (BRS; Smith et al., 2008) is a 6-item self-report measure of resilience.Participants rate each item on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree).Items are summed to produce a total score, with higher scores indicating higher resiliency.The BRS has evidence of adequate internal consistency and test-retest reliability (Windle et al., 2011). 
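To make the scoring rules concrete, the sketch below sums the ten AUDIT items and applies the screening cutoffs cited above (>7 for potentially hazardous use, >15 for possible dependence); the example responses are invented.

```python
# Sketch: score the 10-item AUDIT and apply the screening cutoffs mentioned in
# the text (>7 hazardous alcohol use, >15 possible dependence). Each item is
# rated 0-4; the example responses are invented.
def score_audit(responses):
    if len(responses) != 10 or not all(0 <= r <= 4 for r in responses):
        raise ValueError("AUDIT requires 10 items scored 0-4")
    total = sum(responses)
    return {
        "total": total,
        "hazardous_use": total > 7,
        "possible_dependence": total > 15,
    }

example = [2, 1, 1, 0, 0, 1, 0, 1, 2, 0]   # invented participant responses
print(score_audit(example))
```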
Depression, anxiety, and stress scale The Depression, Anxiety, and Stress Scale [DASS-21;Lovibond and Lovibond, 1995] is a 21-item self-report measure of designed to measure the negative emotional states of depression, anxiety, and stress.Each subscale consists of 7 items scored on a 5-point Likert scale ranging from 0 (did not apply to me at all) to 4 (applied to me very much or most of the time).Higher scores indicate greater symptom levels.The DASS-21 has evidence of adequate internal consistency, construct validity, and convergent and discriminant validity (Henry and Crawford, 2005).The DASS-21 subscale scores can be used to screen for clinicallysignificant depression (>20), anxiety (>14), and stress (>25). Difficulties in emotional regulation scale The Difficulties in Emotional Regulation Scale (DERS; Gratz and Roemer, 2004) is a 36-item self-report measure of difficulties with emotional regulation.Participants rate each item on a 5-point Likert scale ranging from 1 (almost never) to 5 (almost always).All items can be summed to produce a total score, with higher scores indicating a greater degree of emotional dysregulation.Item subsets can be summed to produce subscales related to emotional responses (i.e., nonacceptance of emotional responses, lack of emotional awareness, limited access to emotional regulation strategies, and limited emotional clarity).The DERS has evidence of adequate reliability, as well as construct and concurrent validity (Gratz and Roemer, 2004;Bardeen et al., 2012;Fowler et al., 2014;Hallion et al., 2018), and may help predict responses to cognitive-behavioural therapy (Hallion et al., 2018(Hallion et al., ). 10.3389/fpsyg.2024.1382614 .1382614Frontiers in Psychology 04 frontiersin.org Opening minds survey for workplace attitudes The Opening Minds Survey for Workplace Attitudes (OMSWA; Szeto et al., 2013) is an 11-item self-report measure of attitudes towards people with mental illness.Participants rate each item on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree).Items are summed to produce a total score with higher scores indicating a higher degree of stigma in the workplace.The OMSWA has been widely employed by the Mental Health Commission of Canada to measure stigma (Krakauer et al., 2020) and with evidence of adequate factor validity for the 9-item version (Boehme et al., 2022). 
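A comparable sketch for the DASS-21 is shown below, applying the response range and subscale cutoffs described above (depression >20, anxiety >14, stress >25). The item-to-subscale key follows the standard DASS-21 assignment, which is an assumption here because the text does not list it, and the example ratings are invented.

```python
# Sketch: score the three DASS-21 subscales and apply the cutoffs given in the
# text. Item-to-subscale assignments follow the standard DASS-21 key (assumed).
SUBSCALES = {
    "depression": [3, 5, 10, 13, 16, 17, 21],
    "anxiety":    [2, 4, 7, 9, 15, 19, 20],
    "stress":     [1, 6, 8, 11, 12, 14, 18],
}
CUTOFFS = {"depression": 20, "anxiety": 14, "stress": 25}

def score_dass21(responses):
    """responses: dict mapping item number (1-21) to a rating on the described scale."""
    results = {}
    for name, items in SUBSCALES.items():
        subscale = sum(responses[i] for i in items)
        results[name] = {"score": subscale, "positive_screen": subscale > CUTOFFS[name]}
    return results

example = {i: (i % 3) for i in range(1, 22)}   # invented ratings
print(score_dass21(example))
```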
PTSD checklist for DSM-5 The PTSD Checklist for DSM-5 (PCL-5; Weathers et al., 2013) PCL-5 is a 20-item self-report measure used to assess PTSD symptoms in the past month.Participants who identify exposure to at least one PPTE are then asked to select which exposure has caused them the most difficulty recently and answer questions regarding how much they have been bothered by different aspects of that event over the past month.Participants rate each item on a 5-point Likert scale ranging from 0 (not at all) to 4 (extremely).All item scores can be summed to produce a total score, and subsets of items can be summed to produce subscale scores reflecting DSM-5 symptom clusters B (re-experiencing/re-living), C (avoiding reminders of the incident), D (negative thoughts and mood), and E (hyper arousal/alertness).Higher scores indicate greater symptom levels.The PCL-5 has evidence of adequate reliability, as well as structural, convergent, and discriminant validity (Blevins et al., 2015;Ashbaugh et al., 2016).A positive screen for probable PTSD can be made for participants who endorse symptoms in each PTSD cluster and exceed the minimum clinical cutoff of 32 for the total PCL-5 score. Social provisions scale The Social Provisions Scale (SPS10; Cutrona and Russell, 1987) is a 10-item self-report measure of extent of social support.Participants rate each item on a 4-point Likert-type scale ranging from 1 (strongly disagree) to 4 (strongly agree).All items can be summed to produce a total score, and the SPS10 has evidence of adequate reliability and convergent validity (Gottlieb and Bergen, 2010). WHO quality of life-BREF (WHOQOL-BREF) The WHO Quality of Life-BREF (WHOQOL-BREF; WHOQOL Group, 1998) is a 25-item self-report measure of quality of life.Participants rate each item on a 5-point Likert scale ranging from 1 (not at all) to 5 (an extreme amount).The mean of all items is calculated to produce a total score, with item subset means calculated to produce subscale scores describing each of four domains: physical health, psychological health, social relationships, and environmental quality of life.We analyzed domains 2, 3, and 4, which relate to psychological and social quality of life.Higher mean scores indicate greater quality of life.The scale has evidence of adequate psychometric properties (Skevington et al., 2004). Qualitative data and analyses Participants were asked five open-ended questions at the end of the post-training survey (i.e., upon BOS program completion): (1) What has been the most helpful aspect of BOS for you?; (2) What has been the least helpful aspect of BOS for you?; (3) Has anything gotten better for you as a result of BOS?If so, please describe; (4) Has anything gotten worse for you as a result of BOS?If so, please describe; and (5) Please use the space below to provide any other comments you would like about your participation in BOS.Of the initial participant responses, 9 (n = 9) were excluded due to missing data, resulting in 30 Intensive and 45 Classroom responders, totaling 75 (n = 75) participants.Responses were anonymized and analyzed in NVivo 12. Two authors (MR and AW) used the coding procedures outlined in Miles et al. 
(2018) to separately inductively analyze the data.The same authors then collaborated to create broad themes by comparing their two code lists, producing a master coding framework, and recoding the data.A comparison query was run to determine the degree of overlap between coders (calculated at 80% agreement with a Cohen's Kappa of 0.40, suggesting a fair level of agreement beyond what might be expected from chance) and to reach consensus on codes of disagreement (i.e., what theme best captured codes of discrepancy between authors).The authors then re-visited the coded data and addressed discrepancies in themes to achieve 100% agreement and co-constructed definitions to closely represent participant experiences by discussing the coded themes and the associated interrelationships.A matrix coding query was used to analyze the differences in theme references between Intensive and Classroom modalities and to clarify final participant counts for each theme. Quantitative analyses Descriptive analyses provided information about the sociodemographic characteristics of participants in the Intensive and Classroom modalities (Table 1).Descriptive analyses characterized mental health and resilience scale variables over time (Table 2).Generalized Estimating Equations were used to assess whether the number of participants screening positive for a particular mental health challenge changed statistically significantly over time (Table 3) (Andreski et al., 1998;Heine et al., 2011).Modality was included as a factor to assess whether the number of participants with a positive screen differed between Intensive and Classroom modalities.Interaction effects were initially tested, but none were statistically significant (all ps > 0.05), so none were retained for the final model. 
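A schematic version of the GEE analysis just described (a binary positive-screen outcome measured repeatedly over time, with delivery modality as a factor and interaction terms dropped) might look like the sketch below; the data frame, its column names, and the choice of an exchangeable covariance structure are assumptions rather than details taken from the study.

```python
# Sketch: GEE for a binary positive-screen outcome measured repeatedly over
# time, with delivery modality as a factor. Data frame and column names
# (screen, time, modality, participant_id) are assumed, not the study data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("bos_screens_long.csv")   # one row per participant per timepoint

model = smf.gee(
    "screen ~ C(time) + C(modality)",      # main effects only (interactions dropped)
    groups="participant_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),  # assumed working correlation
)
result = model.fit()
print(result.summary())
```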
A multi-level modelling (MLM) approach was employed to assess changes in mean self-report scores over time, and to assess whether any changes differed across delivery modalities (i.e., Intensive or Classroom; Table 4). The MLM approach provides accurate estimates when some individuals have missing timepoints (Heck et al., 2013; West et al., 2015), and also allowed for a closer comparison to the previous BOS Intensive evaluation. Model fits were estimated using restricted maximum likelihood (REML) via the MIXED command in SPSS Version 28. The MLM strategy for each scale was as follows: analyses began with a two-level model consisting of a fixed effect for Time, a random intercept and slope for Time at the individual level, and a first-order autocorrelation structure for the within-individual covariance matrix at the repeated-measures level. The random slope for Time was removed from the model due to non-convergence, which is consistent with previous BOS program evaluations using the same model criteria (Stelnicki et al., 2021). Modality was then included as a moderator variable by adding fixed effects for Modality, with intraclass correlation coefficients included for interpretation. Standardized effect sizes (ES) and intraclass correlation coefficients (ICC) for the MLM analyses were calculated manually using estimates of fixed and random effects. Standardized effect sizes were calculated for each fixed effect using a pooled variance estimate from the final model for each scale. Using pooled variance to standardize effect size estimates accounts for the within- and between-individual variance in repeated-measures designs (Pustejovsky et al., 2014; Westfall et al., 2014). The standardized effect sizes can be interpreted like Cohen's d (Pustejovsky et al., 2014), in which 0.20 represents a small effect, 0.50 represents a medium effect, and 0.80 represents a large effect (Pleil et al., 2018). ICCs were calculated for each applicable scale or subscale using the between-individual (random intercept) variance estimate and the residual variance estimate from the empty random-intercept model. The empty random-intercept model consisted of a random intercept at the individual level and no fixed effects, estimated with REML. In the absence of model predictors, the ICC provides an estimate of the proportion of variance attributable to individual differences (Raudenbush and Bryk, 2002; Hox and De Leeuw, 2003).

Results

Sociodemographic characteristics of participants in both modalities were largely comparable (Table 1). The only statistically significant effect of modality for positive screens indicated that Classroom modality participants at pre-training had a lower prevalence of positive screens for potentially hazardous alcohol use than Intensive modality participants (Table 3). The differences in prevalence of positive screens for mental health disorders between the modalities were largely attributable to individual differences between participants (Table 3).
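The ICC and standardized effect size calculations described in the quantitative analyses above can be sketched in a few lines. This is a minimal illustration rather than the study's SPSS MIXED syntax: the long-format data frame and its column names (`participant_id`, `time`, `score`) are assumptions, and the first-order autocorrelation structure is omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format data: one row per participant per timepoint,
# with columns "participant_id", "time", and a scale total "score".
df = pd.read_csv("bos_scores_long.csv")  # hypothetical file

# Empty random-intercept model (intercept only), estimated with REML.
empty = smf.mixedlm("score ~ 1", data=df, groups=df["participant_id"]).fit(reml=True)
between_var = empty.cov_re.iloc[0, 0]   # between-individual (random intercept) variance
residual_var = empty.scale              # within-individual (residual) variance
icc = between_var / (between_var + residual_var)

# Model with a fixed effect for Time; each fixed-effect estimate is standardized
# against the pooled (between + within individual) variance from this model.
final = smf.mixedlm("score ~ C(time)", data=df, groups=df["participant_id"]).fit(reml=True)
pooled_sd = np.sqrt(final.cov_re.iloc[0, 0] + final.scale)
es_time = final.fe_params / pooled_sd   # interpretable roughly like Cohen's d

print(f"ICC = {icc:.2f}")
print(es_time)
```

Dividing each fixed-effect estimate by the pooled standard deviation mirrors the pooled-variance standardization cited above, yielding values that can be read approximately like Cohen's d.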
There were no statistically significant differences between modalities for other measures, except that OMSWA total scores were statistically significantly higher (p < 0.05) among Intensive participants (Table 4).Average change estimates (Table 4) indicated PCL-5 Cluster E (hyper arousal/alertness) scores increased statistically significantly (p < 0.05) for all participants from pre-to post-training.Anxiety and environmental quality of life subscale scores decreased (p < 0.05) for all participants from pre-training to 1-month follow-up.Emotional regulation total scores decreased from pre-training to 1 month follow-up (p < 0.05), and pre-training to 4-month follow-up (p < 0.05) (Table 4).Emotional regulation subscale scores decreased (p < 0.05) for "Non Acceptance of Emotional Responses" and "Emotional Awareness" from pre-training to 4 month follow-up."Emotional Clarity" subscale scores evidenced a statistically significant increase from pre-to post-training, and a statistically significant decrease from pre-training to 1 month follow-up.Decreases in resilience from pre-to post-training, and from pre-training to 4 month follow-up, and increases in resilience from pre-training to 1 month follow-up were not statistically significant. All model random effects were statistically significant (ps < 0.05), indicating a large proportion of the variance observed in outcome measures across time was attributable to initial differences between individuals, as well as differences within each individual.The results Qualitative analyses results The most heavily emphasized theme across participant responses was insight and awareness changes, with many participants describing thinking about themselves and their experiences in new ways (cognitive); behaving in different ways or acting upon new awareness (behavioural); and understanding their feelings and emotional responses, including being more attuned to bodily sensations (emotional).Participants in both modalities described increased awareness or insight within cognitive (Intensive: n = 11; Classroom: n = 24), emotional (Intensive: n = 13; Classroom: n = 6), and behavioural domains (Intensive: n = 9; Classroom: n = 5).Intensive participants more frequently described increased emotional and behavioural awareness, whereas Classroom participants more frequently described increased cognitive awareness.For example, one Intensive participant described increased attention to and recognition of their emotional experiences: "I have been able to recognize certain emotions as they are occurring and work through them, opposed to before when I would just become angry because I did not understand emotion" (Participant 17).Intensive (n = 5) and Classroom (n = 2) participants also described changes in their familial relationships: "I've started communicating better with my wife and being able to relate and explain some of her feelings and thoughts to what I've learned in the program as well" (Intensive Participant 15). 
The stigma reduction theme included participant descriptions of feeling less different from others or feeling less alone with their mental health symptoms, either via the explicit learning content (i.e., learning about brain-based or physiological responses to PPTE or other stressful events) or via group discussion of said content (i.e., relating to others' experiences, experiencing normalization, and validation of responses to trauma or stress by sharing and hearing others' stories).Intensive (n = 14) and Classroom (n = 4) modality participants described reductions in stigma via the group dynamic: "I have found a small cadre of others in the same boat as me, people I can relate with who had also stepped forward to say 'I'm broken'" (Intensive Participant 4).Classroom (n = 7) and Intensive (n = 2) modality participants described the learning content contributed to reductions in feelings of isolation: "The program has been very helpful, but the one thing that really was … [that] PTSD is not a mental illness but a brain injury." (Classroom Participant 39).This variation indicates that while both modalities were perceived as beneficial, consistent with the program content emphasized in each modality, Intensive participants more frequently cited group discussions and Classroom participants discussed the learning content. The delivery theme included participant descriptions of program accessibility, quality, and delivery of the program.Classroom modality participants (n = 17) expressed negativity towards the delivery method, indicating a preference for Intensive delivery.There were also Classroom modality participants (n = 18) who reported feeling they had missed out on the benefits of group sessions and described online participation as difficult: "… very hard to express your feelings and receive feedback… The program would be far more beneficial if it was in a group setting, face to face with others" (Classroom participant 52).Intensive modality participants (n = 8) more often described participation challenges such as scheduling of the sessions and time required to attend compared to Classroom participants (n = 2). Discussion The current study was designed to evaluate the effectiveness of the virtual modality of the BOS program (BOS Classroom) as compared to the existing in-person delivery (BOS Intensive).The current results support the Classroom modality as generally comparable to the Intensive modality in terms of changes in selfreported mental health symptom outcomes.Across both modalities, the BOS program was associated with small, but statistically significant, improvements in self-reported anxiety and emotional regulation.The current results are consistent with a previous evaluation of BOS Intensive (Stelnicki et al., 2021), but incrementally evidence some of the small, statistically significant improvements in anxiety and emotional regulation, some of which were sustained 4 months after completing the training. 
Consistent with our hypotheses, outcomes following the BOS program were largely comparable across both Classroom and Intensive modalities, with no differences in effects observed between modalities.The only modality effect was for the OMSWA measure, which differed statistically significantly between groups at initial scores, indicating changes in outcome measures did not meaningfully differ between delivery types.The proportion of participants who screened positive for mental health disorders were also largely comparable across modalities, except for a higher prevalence of potentially hazardous alcohol use positive screens among Intensive participants at pre-training.There were no statistically significant decreases in the number of participants who screened positive for mental health disorders over time after controlling for individual differences, suggesting any changes reflected dynamic individual circumstances, differences in relative engagement with the BOS material, or other unmeasured variables.The results were consistent with previous research evidencing Mindfulness-based Stress Reduction training as comparable across virtual and in-person formats among US military personnel and Veterans (Rice et al., 2018(Rice et al., , 2019)). There were small, statistically significant improvements in anxiety symptoms and difficulties with emotional regulation across both modalities, which were sustained 4 months after the training.Opentext responses corroborate and contextualize the quantitative results, with participants from both modalities reporting new insights into their own thoughts and emotions concerning stressful situations.There were no statistically significant reductions in mental health stigma and feelings of isolation for participants in either modality, but participants who provided qualitative feedback described stigma reductions as important elements of the BOS program experience.Classroom modality participants primarily attributed reductions in self-stigma to the didactic component of BOS, whereas Intensive modality participants attributed the same reductions to group sharing.The feedback is consistent with prior research regarding the perceived potential impact of psychoeducation (Ricciardelli et al., 2020) and social validation (Kosmicki and Glickauf-Hughes, 1997;Cox et al., 2017;Yalom and Leszcz, 2020) for reducing stigmatizing beliefs.Reports of reduced stigma may be particularly important for PSP populations, as previous research has identified stigma towards mental health as a primary barrier to help-seeking behaviors in PSP (Newell et al., 2022). Open-text responses also indicated that Intensive participants valued the group discussions, whereas Classroom modality participants described technical difficulties with online discussions sufficient to describe the discussions as detrimental to the overall training.The results contrast previous evidence suggesting preferences for virtual discussions perceived as less anxiety-provoking and more accessible (Rochlen et al., 2004;Fortier et al., 2022).Despite also facing some accessibility barriers while attending the training, participants in the BOS Intensive modality collectively expressed a preference for in-person groups.The individual variability in results and preferences, juxtaposed with the comparable results across modalities, suggests PSP may be best served by self-selecting modality training options that meet their current needs and preferences. 
There was evidence of statistically significant increases in hyperarousal/alertness and decreases in environmental quality of life from pre- to post-training, possibly due to increased PPTE exposures concurrent with COVID-19 pandemic onset (Heber et al., 2020). Participants reported that the BOS program increased their mental health self-awareness, which may have facilitated increased attention to, and reporting of, the progressive pandemic impacts. In general populations, 40 to 70% of those who met screening criteria for PTSD symptoms no longer screened positive after CBT treatments or interventions (Bradley et al., 2005). In military and Veteran populations, many participants still report residual symptoms after CBT and prolonged exposure therapy (Bradley et al., 2005; Steenkamp et al., 2015; Allan et al., 2017), which may indicate these particular symptoms are resistant to treatment or are more successfully addressed over longer periods of time. Hyperarousal symptoms in particular have been reported to be resistant to typical CBT in military and Veteran populations (Crawford et al., 2019; Schnurr and Lunney, 2019; Miles et al., 2023). Promising alternative solutions have been explored in Veteran populations, such as meditation-based interventions (Crawford et al., 2019), to address this category of symptoms. Consistent with previous research results for BOS (Stelnicki et al., 2021), there were no statistically significant changes in self-reported resilience among participants in either modality. Existing time-limited training interventions for PSP have generally demonstrated limited effectiveness for improving mental health outcomes (Carleton et al., 2018b, 2019a; Anderson et al., 2020). The BOS program presents a promising option for PSP mental health training as a function of small, but statistically significant, improvements in measures of anxiety, emotional regulation, and mental health stigma, which appear comparable across in-person and virtual modalities. Participants in the BOS Intensive program reported challenges with accessibility, but still reported a preference for in-person engagement. In addition, the virtual BOS training may be a viable alternative for PSP who find in-person sessions less accessible or convenient. Allowing PSP to select the modality that best aligns with their current circumstances and learning preferences could lead to increased engagement and, potentially, more effective training outcomes.
Limitations and future directions The current study has several limitations that can help inform future research directions.First, the Classroom modality was introduced in response to growing accessibility needs and COVID-19 pandemic safety measures, which meant that the current data were based on a quasi-experimental design without randomized assignment to each modality.Second, there is no way to differentiate the relative impact of the COVID-19 pandemic from the impact of the BOS program using data from the current sample (Yu et al., 2020;Combden et al., 2022;Bouza et al., 2023) with PSP and healthcare workers appearing to have been disproportionately impacted by the COVID-19 pandemic (Heber et al., 2020;Marchildon et al., 2020;Hossain and Clatty, 2021;Cadell et al., 2022;Xue et al., 2022;Patel et al., 2023).The comparable effects found in the two modalities despite this time delay suggest the effects associated with BOS may be sustained even in highstress social environments.Third, there was considerable intrapersonal variability in many outcome measures, reducing the statistical power to detect additional small effects at 1 month and 4 month follow-up assessments.Future researchers should consider using explicit random assignment or participant self-selection and including measurements of skill acquisition and use as part of evaluating the relative impact of the BOS program.Fourth, there was substantial attrition during the study such that the sample size at the 4 month follow-up was modest and the associated results may reflect important self-selection biases.Fifth, the current qualitative evidence suggests important elements of participant experiences may not be entirely captured in quantitative surveys, highlighting the importance of nuanced assessment of participant perceptions of the program in program evaluations. Conclusion The current study evaluated BOS program delivery and expanded previous research on the BOS program by including a longer follow-up period, comparing the Intensive (i.e., in-person) and Classroom (i.e., virtual) delivery modalities, and including qualitative analyses of participant experiences.The current results were consistent with previous research and evidenced small, but statistically significant, improvements in anxiety and emotional regulation, some of which were sustained 4 months after training.Changes in mental health symptoms were largely comparable across Intensive and Classroom modalities; however, many participants reported a preference for the Intensive program despite acknowledging accessibility benefits of the Classroom modality.Participants reported perceiving stigma reductions as part of qualitative data collection that were not reflected in quantitative selfreport analyses.The comparable results across modalities suggests PSP may be best served by self-selecting modality training options that meet their current needs and preferences. TABLE 1 Public safety personnel demographics by delivery modality. TABLE 2 Self-report mental health measure metrics by time. TABLE 2 ( Continued) Total percentages may not sum to 100 due to non-response or responding "other." 
PCL-5, posttraumatic stress disorder checklist for DSM-5 (subscale cluster B-re-experiencing, cluster C-avoiding, cluster D-negative thoughts, cluster E-arousal/alertness); DASS, depression, anxiety, and stress scale; DERS, difficulties in emotion regulation scale (subscales for non-acceptance of emotional responses, lack of emotional awareness, limited access to emotional regulation strategies, lack of emotional clarity); WHOQOL, World Health Organization Quality of Life (subscales for domain 2-psychological health, domain 3-social relationships, domain 4-environmental); BRS, brief resilience scale; SPS-10, social provisions scale; OMSWA, opening minds survey for workplace attitudes.

TABLE 3 Generalized estimating equations for psychological disorder criteria screeners by modality and timepoint, using Intensive modality at T1 as baseline.

TABLE 4 Two-level MLM results for overall scale totals. SE, standard error; CI, 95% confidence interval; ICC, intraclass correlation; PCL-5, posttraumatic stress disorder checklist for DSM-5 (subscale cluster B-re-experiencing, cluster C-avoiding, cluster D-negative thoughts, cluster E-arousal/alertness); DASS, depression, anxiety, and stress scale; DERS, difficulties in emotion regulation scale (subscales for non-acceptance of emotional responses, lack of emotional awareness, limited access to emotional regulation strategies, lack of emotional clarity); WHOQOL, World Health Organization Quality of Life (subscales for domain 2-psychological health, domain 3-social relationships, domain 4-environmental); BRS, brief resilience scale; SPS-10, social provisions scale; OMSWA, opening minds survey for workplace attitudes. *p < 0.05, **p < 0.01, and ***p < 0.001. The AUDIT is not reported due to insufficient variance in the measure to estimate the model.
Choroidal thickness in obese women Background Excessive weight is a well-known risk factor for microvascular diseases. Changes in thickness in a vascular tissue, such as the choroid, can be useful to evaluate the effect of obesity on the microvascular system. The aim of this study was to evaluate the choroidal thickness (CT) changes in obese women, using optical coherence tomography (OCT). Methods The prospective clinical study included examination of the right eyes of 72 patients. The right eyes of 68 patients were examined and served as the controls. A complete ophthalmological examination and OCT imaging were performed for each group studied. The CT in each eye was measured using OCT. Results The obese group consisted of 72 female patients with a mean age of 37.27 ± 1.18 years. The control group included 68 female subjects with a mean age of 37.85 ± 7.98 years (p > 0.05). There was no statistical significant difference for the foveal retinal thickness measurements between the two groups (p > 0.5). Our study revealed significant choroidal tissue thickening subfoveally and at areas 500 μm temporal, 500 μm nasal, and 1500 μm nasal to the fovea in the obese group (all p < 0.05). There was a positive correlation between body mass index (BMI) and CT changes. Conclusions CT may increase in obese women and a positive correlation was found between BMI and CT. The trial protocol was approved by the Local Ethical Committee of the Kırıkkale University, date of registration: April 27, 2015 (registration number: 10/11). Background Obesity is a common health problem and its prevalence is increasing worldwide [1][2][3]. Excessive weight is a wellknown risk factor for diabetes, hypertension, dyslipidemia, and microvascular diseases [4][5][6], including retinal vasculature [7,8]. One of the main concerns with obesity is that microvascular alterations cannot be diagnosed in the early stages. Although many studies have investigated the comorbidities associated with obesity [9][10][11], predicting the risk of developing vascular damage remains challenging. The association of obesity with cataract formation, glaucoma, and age-related macular degeneration has been shown in varying degrees. Researchers have hypothesized that retinal microvascular changes are precursors to developing obesity based on experimental and clinical observations [12,13]. In the Blue Mountains Eye Study, retinal vessel diameter was associated with the prevalence of higher body mass index (BMI) and the increased risk of incident obesity [14]. In the eye, the choroid, the posterior portion of the uveal tract, nourishes the outer portion of the retina. It contributes to the blood supplied to the prelaminar portion of the optic nerve [1], is an integral constituent in the functioning of the eye, and is involved in important diseases affecting the optic nerve, retinal pigment epithelium, and the retina. By using enhanced depth imaging optical coherence tomography (EDI-OCT), choroid images can be obtained and the choroidal thickness (CT) can be measured [3]. Previous studies have suggested that a higher BMI can trigger structural changes in the retinal vascular system that could provoke retinal dysfunction, as shown in aged-related macular degeneration or diabetic retinopathy. Therefore, knowledge of the thickness changes in a vascular tissue, such as the choroid, may help to evaluate the effect of obesity on the microvascular system. 
The prevalence of obesity among men and women varies greatly within, and between countries, with more obesity found in women than in men. This gender disparity in obese population is exacerbated among women in developing countries. In the TURDEP study, which investigated 24,788 people >20 years old in Turkey, the prevalence of obesity in women was 29.9, and 12.9 % in men [15]. Therefore, in the present study, we hypothesized that obesity is correlated with CT changes, particularly in women. To the best of our knowledge, this is the first study evaluating CT in obese female patients. Methods This prospective clinical study included the examination of the right eyes of 72 patients. In total, 68 right eyes of 68 patients were examined and served as controls. The study was conducted between 2015 and 2016 in accordance with the tenets of the Declaration of Helsinki. The trial protocol was approved by the Local Ethical Committee of the University of Kırıkkale. Registration of the trial was requested on April 27th, 2015 (decision no:10/11). All patients and control subjects voluntarily participated in the study and signed an informed consent form. The obese group was classified according to the World Health Organisation criteria; (BMI 18.5-24.9 kg/m 2 = normal; 25.0-29.9 kg/m 2 = pre-obese/overweight, and ≥30.0 kg/ m 2 = obese). In the study, the obese group included patients who had a BMI > 30 kg/m 2 , without any other disease, whereas healthy adults with BMI <25 kg/m 2 constituted the control group. Obese patients were randomly selected from those monitored by the Department of Endocrinology. The exclusion criteria were as follows: a previous systemic or chronic disease such as hypertension, smoking, ocular surgery in one or both eyes; axial length >24 ± 1.0 mm; and a refractive measurement > 2.0 diopters. All participants underwent a complete ocular examination, including a best-corrected visual acuity measurement, slit-lamp examination, intraocular pressure measurement, and dilated fundoscopy. Only the right eyes of each of the patients were selected to avoid any intra-individual bias. The CT was measured as close to noon as possible to avoid diurnal variations. The measurements were performed using anEDI-OCT scanning system (OCT Advance Nidek RS-3000; Nidek Co. Ltd., Gamagori, Japan). Prior to evaluation using EDI-OCT scanning, the central macular thickness was measured in the right eye of each patient. Choroidal and scleral boundaries were drawn with the assistance of software programs. The boundaries limited the Bruch membrane, between the subfoveal points (FCT), to 500 and 1500 μm in the nasal regions (N500, N1500) and 500 and 1500 μm in the temporal regions (T500, T1500), for CT measurements. All measurements including the demarcation of the choroid and sclera were made by two independent (masked) observers. There were no significant differences between the results of the two observers (p = 0.317: Paired t-test, r = 0.716 and p 0.001:Pearson's correlation), and the average of the two results was used in our analyses. Statistical analyses were performed using the SSPS statistical software (SPSS for Windows 23.0, Inc., Chicago, USA). The results of the descriptive analysis were provided in numbers, percentages, mean, median, and standard deviations. A paired t-test was used to assess the difference in the means of the observers' measurements to test the repeatability and accuracy of the two independent measurements. 
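As a small aside on the WHO cut-offs used above for group assignment, the helper below classifies BMI. The function and variable names are a generic, hypothetical sketch rather than part of the study protocol, and the underweight category reflects standard WHO usage even though it was not one of the study groups.

```python
def bmi_category(weight_kg: float, height_m: float) -> str:
    """Classify BMI using the WHO cut-offs described above."""
    bmi = weight_kg / (height_m ** 2)
    if bmi < 18.5:
        return "underweight"
    elif bmi < 25.0:
        return "normal"                  # eligible for the control group (BMI < 25)
    elif bmi < 30.0:
        return "pre-obese/overweight"
    else:
        return "obese"                   # eligible for the obese group (BMI >= 30)

# Example: 95 kg at 1.70 m gives a BMI of about 32.9, i.e., "obese".
print(bmi_category(95, 1.70))
```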
The independent t-test was used to compare the variables between the obese group and the control group, and correlations were performed using Pearson's correlation coefficient. A multiple linear regression analysis (forward) was used to determine confounding factors among the variables. p < 0.05 was considered statistically significant. Results The study group consisted of 140 female (100 %) subjects, with a mean age of 37.55 ± 1.01 years (median:38; range:21-59 years). There were 72 patients in the obese group, with a mean age of 37.27 ± 1.18 years (median: 38.5; range 21-59 years). The control group included 68 subjects, with a mean age of 37.85 ± 7.98 years (median:38; range 24-54 years). There was no significant difference, in terms of age, between the two groups (p > 0.5). Demographics of the study groups are shown in Table 1. There was no significant difference found for foveal retinal thickness (FT) when the two groups were compared (p > 0.5). In contrast, the CT revealed significant differences at FCT, T500, N500, and N1500 between the two groups (all p < 0.05). Changes in both FT and CT are demonstrated in Table 2. There was a positive correlation found between BMI and CT at FCT, T500, and N500 (Table 3). Multiple linear regression analysis revealed that CT had been affected by BMI independently from the aspect of age of the patient (Table 4). Discussion In the eye, CT may be affected by several factors, such as age, axial length, and refractive errors [16,17]. Diurnal changes in CT have also been reported [18]. It is believed that systemic blood pressure and intraocular pressure induce choroidal tissue changes through an autoregulatory mechanism [19]. Therefore, because the choroid possesses a rich vascular structure, all of the aforementioned factors have the potential to alter the CT [20]. A study by Tanabe et al., demonstrated a significant correlation between choroidal vein diameter and the CT [21]. Another investigation by Vance et al., reported that phosphodiesterase-5 inhibitors, such as sildenafil citrate, increased CT via a smooth muscle relaxation effect [22]. In a study by Wong et al., CT was found to be thicker in hypercholesterolemic patients [23]. This study had a crosssectional design with only Chinese subjects; therefore, their results may not address the issue of any ethnic differences in CT. It is of interest that Regatieri et al., found that the choroid was thinner among subjects with diabetic retinopathy [24]. However, a previously observed inverse correlation between age and CT might have affected this correlation [25]. A number of studies have found that CT plays a prognostic or predictive role in various local (for example, diabetic retinopathy, and AMD), and systemic diseases (for example, hypertension, anemia, and rheumatoid arthritis) [24,[26][27][28][29][30][31][32][33]. Jongh et al., reported the effects of obesity on the microvascular system; hyperinsulinemia and elevated blood pressure were found to be the major causes of the vascular alterations in obese women [4]. In another study by Kawasaki et al., both retinal venous and arterial dilatation were found in hypertensive patients [7]. Research by Saito et al., studied the retinal venous system in 900 subjects and reported an incidence of 5 years of obesity in some patients [34]. The authors found a positive correlation between vessel caliber and BMI; however, no correlation was shown between these changes and the development of obesity. 
In this study, CT was found to be significantly reduced in the non-obese controls, except for the temporal measurement at 1500 μm. This was an interesting finding because, based on recent studies, we expected a subfoveal or temporal change in CT. Previous studies reported that the macula demonstrated a thin choroid layer in the nasal region [35, 36]. Another possibility for this regional difference may be a result of the developmental pattern of the eye. In the light of previous reports, we hypothesized that there is a relationship between obesity and the choroidal layer of the eye. In the present study, the obesity group consisted of patients with a BMI > 30, and subjects with a BMI < 25 constituted the control group. To avoid any diurnal effect, we performed all the measurements at noon for each patient. We also excluded patients with a history of local and systemic diseases. Although no significant differences were found for FT between the groups, there was a significant increase in CT at certain points (FCT, nasal 500 and 1500 μm, and temporal 500 μm) in the obese group. The results indicated that there was a positive correlation between BMI and CT, and multiple linear regression analysis revealed that CT was affected by BMI independently of patient age.

There were some limitations in the study, such as the pathogenesis of obesity, which includes several unknown hormonal and genetic factors; moreover, because the choroid is a vascular tissue, it may be affected by local and systemic factors. We also excluded patients with systemic metabolic disorders to avoid confounding factors. A further limitation of the study was the lack of data on CT changes after weight loss through dietary restriction. The prevalence of obesity among women is greater than that in men, which we found to be the same for the patients in our Department of Endocrinology. Due to the difficulty in assembling homogeneous groups of obese patients of both genders and the fact that choroidal tissue may differ between genders, we decided to include only females in our study. Indeed, it may be proposed that obese male patients also have choroidal changes; therefore, further studies with male patients are warranted in the future.

Conclusion

In summary, our data provide evidence for a relationship between CT and obesity in female patients. Vascular abnormalities may occur at early stages in obesity and ocular circulation may be a preferred target for the disease process. The assessment of CT is a quick and noninvasive technique, which can be utilized to determine such abnormalities. Meanwhile, it is unclear how this data may be applied to individual patients and how it can benefit obesity management. The data suggest that CT measurement has a predictive role and BMI should be included among the parameters that may affect CT results in obese women. A prospective follow-up study with a large sample size is required to test our hypothesis and to verify the results of the present clinical study.

FCT, choroidal thickness at the fovea; N500, choroidal thickness at 500 μm nasal to the fovea; N1500, choroidal thickness at 1500 μm nasal to the fovea; T500, choroidal thickness at 500 μm temporal to the fovea; T1500, choroidal thickness at 1500 μm temporal to the fovea; FT, central macular thickness.

Table 3 The Pearson correlation analysis between body mass index, choroidal thickness, and foveal thickness.
Ethics approval and consent to participate

The study protocol was approved by the Institutional Review Board (Local Ethical Committee of Kırıkkale University), and informed written consent was obtained from all participants. The design of the study followed the tenets of the Declaration of Helsinki for biomedical research.

Consent to publish

All authors have given consent to the publication of this manuscript.

Availability of data and materials

The data supporting the findings of this study are available from Kırıkkale University, School of Medicine, Department of Ophthalmology, Dr Erhan Yumusak; e-mail: Erhanyumusak@yahoo.com.
Editorial: New Trends in Vascular Inflammation Research: From Biology to Therapy The evidence from basic science and clinical studies has established the role of inflammation in atherosclerosis and other vascular diseases. Many patients on potent drugs for modifiable risks, such as cholesterol-lowering statins and PCSK9 inhibitors, still suffer from vascular complications, including acute myocardial infarction. To tackle such residual risk, new medical therapies that more specifically target mechanisms for excessive inflammation may be needed. We believe that exploring novel mechanisms for vascular inflammation is a first stride toward the development of such new medical solutions. This Research Topic features 18 articles on new trends in vascular inflammation research with a focus on disease mechanisms (Part 1) on the one hand and new therapies (Part 2) on the other hand, all authored by leaders in vascular inflammation biology. The evidence from basic science and clinical studies has established the role of inflammation in atherosclerosis and other vascular diseases. Many patients on potent drugs for modifiable risks, such as cholesterol-lowering statins and PCSK9 inhibitors, still suffer from vascular complications, including acute myocardial infarction. To tackle such residual risk, new medical therapies that more specifically target mechanisms for excessive inflammation may be needed. We believe that exploring novel mechanisms for vascular inflammation is a first stride toward the development of such new medical solutions. This Research Topic features 18 articles on new trends in vascular inflammation research with a focus on disease mechanisms (Part 1) on the one hand and new therapies (Part 2) on the other hand, all authored by leaders in vascular inflammation biology. PART I: UPDATES ON THE MECHANISMS FOR CARDIOVASCULAR INFLAMMATION (12 ARTICLES) Accumulating evidence suggests that monocytes and macrophages are heterogeneous and their subpopulations may have distinctive roles in vascular inflammation. Buscher et al. discuss monocyte heterogeneity with a focus on "patrolling" monocytes. Their article offers the emerging knowledge of the roles and kinetics of this monocyte subset as well as new technologies for identification and functional assays. Decano and Aikawa then provide the updates for macrophage biology in vascular disease. They focus on the mechanisms for activation, changes in intracellular metabolism, and current understanding of heterogeneity, and further discuss new paradigms of discovery science in vascular inflammation. Thrombogenicity is a key feature of inflamed vessels, particularly in the diabetic milieu. Pechlivani and Ajjan discuss mechanisms for the imbalance of thrombotic and fibrinolytic factors, pathways responsible for increased thrombogenicity in diabetes, and therapeutic agents for thrombosis. This review emphasizes the importance of targeting diabetes-specific mechanisms for thrombosis. While pro-inflammatory pathways may contribute to vascular disease, the impact of impaired protective mechanisms that support the hemostasis of non-diseased vessels also deserves similar levels of attention. An article by Yurdagul et al. discusses the role of defective efferocytosis of macrophages, a mechanism that clears apoptotic cells and promotes the resolution of inflammation, in the formation of necrotic core and the onset of acute thrombotic events. 
Miyazaki and Miyazaki then review the contribution of impaired protein catabolism to atherogenesis, focusing on the ubiquitin-proteasome pathway, autophagy, and the calpine system. Many studies have reported the role of non-coding RNAs in cancer and neurologic disorders. More recently, we have learned that various non-coding RNAs contribute to cardiovascular diseases. In vascular biology, the evidence for the role of long non-coding RNAs, as compared to microRNAs, remains scant. Haemmig et al. overview how long non-coding RNAs promote vascular inflammation and future perspectives of this area. Implantation of a autologous vein graft to bypass an obstructive coronary or peripheral artery is a common procedure. Rates for the occlusion or narrowing of vein grafts, however, are unacceptably high. Better understanding of underlying mechanisms will help to establish new therapies that prevent vein graft failure. An article by de Vries and Quax provides a comprehensive review of inflammatory mechanisms for the vein graft lesion development. Members of the Krüppel-like factor (KLF) family of zincfinger containing transcription factors regulate many biological processes. Accumulating evidence has implicated KLFs in cardiovascular biology. Two comprehensive reviews focus on two different contexts. Sweet et al. overview the role of KLFs in the biology of cell types related to vascular diseases (e.g., endothelial cells, smooth muscle cells, monocytes/macrophages), and strategies for pharmacologic modulations. Manabe and Oishi then discuss the biology of KLFs in key metabolic organs such as the liver and skeletal muscles and their disorders, and provide future perspectives. Two articles review aging from different angles. Sanada et al. discuss cell senescence and dysregulation of innate immunity that contribute to chronic low-grade vascular inflammation in the elderly. Katsuumi et al. link cellular senescence with age-related disorders, such as heart failure, atherosclerotic vascular diseases, and metabolic syndrome. At the end of Part I on the mechanisms for vascular disease, Yamazaki and Mukouyama review the role of pericytes in vascular disease with a specific emphasis on their heterogeneity. PART II: EMERGING EVIDENCE ON NEW THERAPIES FOR VASCULAR INFLAMMATION (6 ARTICLES) This part covers a wide range of translational vascular medicine that spans from experimental validation of therapeutic targets to cardiovascular outcome trials. Sena et al. propose that cathepsin S is a potential therapeutic target for vascular inflammation and calcification. Peripheral artery disease is a global burden which shows an increasing prevalence and incidence worldwide. An original report by Nishimoto et al. demonstrates that activation of Toll-like receptor 9-mediated signaling by cell-free DNA released from ischemic tissues promotes macrophage activation and impairs blood flow recovery in the ischemic limb. An original report by Akita et al. demonstrates that the blockade of the IL-6 receptor suppresses atherogenesis in mice. Katsuki et al. then provides a comprehensive review of nanotechnologybased drug delivery and imaging for cardiovascular disease with a focus on inflammation. Rahman and Fisher provide a comprehensive review on the experimental and clinical evidence for the regression of atherosclerotic lesions and underlying mechanisms. They particularly focus on the role of macrophages. 
The last article by Aday and Ridker overviews the strong clinical evidence for the inflammatory aspects of atherosclerosis based on large cardiovascular outcome trials, including CANTOS that directly tested the effects of an anti-inflammatory therapy.
RNA editing of human microRNAs A survey of RNA editing of miRNAs from ten human tissues indicates that RNA editing increases the diversity of miRNAs and their targets. Background MicroRNAs (miRNAs) are short (around 20-22 nucleotides) RNAs that post-transcriptionally regulate gene expression by base-pairing with complementary sequences in the 3' untranslated regions (UTRs) of protein-coding transcripts and directing translational repression or transcript degradation [1][2][3][4][5]. There are currently 326 human miRNAs listed in the miRNA registry version 7.1 [6], but the total number of miRNAs encoded in the human genome may be nearer 1,000 [7,8]. The function of most miRNAs is unknown, but many are clearly involved in regulating differentiation [9] and development [10]. It is estimated that up to 30% of human genes may be miRNA targets [11,12]. miRNAs are transcribed by RNA polymerase II into long primary miRNA (pri-miRNA) transcripts which are capped and polyadenylated [13,14]. Genomic analyses indicate that many miRNAs overlap known protein coding genes or non-coding RNAs [15], and that many are in evolutionarily conserved clusters with other miRNAs [16]. Furthermore, intronic miR-NAs share expression patterns with adjacent miRNAs and the host gene mRNA indicating that they are coordinately coexpressed [17]. Pri-miRNAs contain a short double-stranded RNA (dsRNA) stem-loop formed between the miRNA sequence and its adjacent complementary sequence. In the nucleus, the ribonuclease III-like enzyme Drosha cleaves at the base of this stemloop to liberate a miRNA precursor (pre-miRNA) as a 60-70nucleotide RNA hairpin [18]. The pre-miRNA hairpin is exported to the cytoplasm by exportin-5 [19][20][21] where it is further processed into a short dsRNA molecule by a second ribonuclease III-like enzyme, Dicer [22][23][24]. A single strand of this short dsRNA, the mature miRNA, is incorporated into a ribonucleoprotein complex. This complex directs transcript cleavage or translational repression depending on the degree of complementarity between the miRNA and its target site. RNA editing is the site-specific modification of an RNA sequence to yield a product differing from that encoded by the DNA template. Most RNA editing in human cells is adenosine to inosine (A-to-I) RNA editing which involves the conversion of A-to-I in dsRNA [25,26]. A-to-I RNA editing is catalyzed by the adenosine deaminases acting on RNA (ADARs). The majority of A-to-I RNA-editing sites are in dsRNA structures formed between inverted repeat sequences in intronic or intergenic RNAs [25,[27][28][29][30]. Therefore, the double-stranded precursors of miRNAs may be substrates for A-to-I editing. Indeed, it has recently been shown that the pri-miRNA transcript of human miRNA miR-22 is subject to A-to-I RNA editing in a number of human and mouse tissues [31]. Although the extent of A-to-I editing was low (less than 5% across all adenosines analyzed), targeted adenosines were at positions predicted to influence the biogenesis and function of miR-22. This raises the possibility that RNA editing may be generally important in miRNA gene function [31]. In this study we have systematically investigated the presence of RNA editing in miRNAs. Results To search for RNA-editing sites in human miRNAs, PCR product sequencing was performed from matched total cDNA and genomic DNA isolated from adult human brain, heart, liver, lung, ovary, placenta, skeletal muscle, small intestine, spleen and testis. 
Primers were designed to amplify pri-miRNA sequences flanking all 231 human miRNAs in miR-Base [6]. Of these, 99 miRNA containing sequences were successfully sequenced in both directions and from duplicate PCR products from total cDNA of at least one tissue. Total cDNA sequence traces were compared with genomic DNA sequence traces from the same individual, and A-to-I editing was identified as an A in the genomic DNA sequence compared with a novel G peak at the equivalent position in the total cDNA sequence. In total, 12 of the 99 miRNA-containing sequences (13%) were subject to A-to-I RNA editing according to A-to-G differences between matched genomic DNA and total cDNA sequence traces from at least one tissue (Figure 1). These sequences were next oriented with respect to the strand of transcription of the miRNAs. In six cases the A-to-G changes were in the same orientation as the miRNA, and overlap the stem-loop structure of the miRNA, consistent with RNA editing of the pri-miRNA precursor transcript. In an additional case, A-to-I editing was observed in a novel stem-loop structure in sequence adjacent to the unedited miRNA miR-374. This novel stem-loop structure may represent a novel miRNA ( Figure 2, novel hairpin). In the remaining five cases, the Ato-G changes were from the opposite strand to the miRNA (that is, U-to-C changes in the miRNA sequence). Although U-to-C editing of miRNA sequences cannot be ruled out, no editing of this type has previously been observed and no enzymes capable of catalyzing this conversion are known. The most likely explanation is that these are A-to-I edits in a transcript derived from the DNA strand complementary to the annotated miRNA gene. Consistent with this hypothesis, all of these sequences overlap, or are adjacent to, genes transcribed from the opposite strand to the annotated miRNA gene. To distinguish these sequences from the edited pri-miRNAs, these sequences are referred to here as edited antisense pri-miRNAs ( Table 1). One of the antisense pri-miRNAs contains editing sites overlapping the intended miRNA (miR-144) and miR-451, a recently identified miRNA that was not deliberately included in our list of 231 miRNAs. Collectively the 13 sequences were edited at 18 sites. Ten out of the 13 were edited at a single site. miR-376a and antisense miR-451 were each edited at two sites, and antisense miR-371 was edited at four sites. The extent of editing varied with editing site and with tissue, ranging from around 10% (for example, miR-151 in multiple tissues) to around 70% (antisense miR-371 in placenta). Overall, the levels of RNA editing observed were considerably higher than the approximately 5% editing previously reported for the -1 position of miR-22 [31]. Editing of miR-22 was not detectable by our method, presumably because the low levels of editing of this miRNA fall below our limits of detection. All miRNAs were found to be edited in multiple tissues, with the extent of editing varying from tissue to tissue ( Figure 1). All novel A-to-I editing sites were found within the dsRNA stems of the predicted stem-loop structures ( Figure 2). Of the seven editing sites in pri-miRNAs, four were in the 22-nucleotide mature miRNA. Three of these were within nucleotides 2 to 7, which are thought to be important for conferring binding-site specificity between the miRNA and its target sites [3]. Five out of seven editing sites in pri-miRNAs were at single nucleotide A:C mismatches flanked by paired bases. 
Similarly, five out of seven editing sites were in 5'-UAG-3' trinucleotides. These results are consistent with local structural and sequence preferences of RNA editing determined from A-to-I editing sites in inverted repeat sequences [25]. Three of the ten editing sites in antisense pri-miRNAs were in 5'-UAG-3' trinucleotides. Six of the ten editing sites were at A:C mismatches. Only one was at a single A:C mismatch, however, with the remainder at extended mismatches involving more than one consecutive nucleotide. Discussion We have identified novel A-to-I editing sites in six out of 99 pri-miRNAs, indicating that at least 6% of all human miRNAs may be targets of RNA editing. We were only able to detect relatively high levels of editing, as illustrated by our failure to detect editing of miR-22, so this estimate is probably a conservative one. Moreover, our method is not strand specific, and cannot distinguish multiple overlapping transcripts from the same genomic locus. Thus, in regions of transcriptional complexity, it is likely that the sensitivity of our assay will be reduced. For example, even miRNAs that are 100% edited would appear to be unedited if transcribed at low levels compared with an unedited overlapping transcript from the opposite strand. We may also be unable to detect RNA editing if it occurs subsequent to the processing of the pri-miRNA (for example, by splicing) such that the binding sites for the PCR primers are removed. In addition to the edited pri-miRNAs, six antisense pri-miRNA transcripts derived from the opposite strand to the annotated miRNA were subject to A-to-I editing. There are many potential explanations for apparent editing on the opposite strand to the annotated miRNA. One possibility is that these sequences are actually due to U-to-C editing of the pri-miRNA. There are, however, no known U-to-C RNA editing enzymes capable of catalyzing such a reaction, and despite extensive searches for RNA editing sites, only a single U-to-C RNA editing site has been reported [32]. It is therefore more likely that these sequences represent an edited transcript from the opposite strand to the annotated miRNA. These transcripts could be another miRNA transcribed and processed from the genomic strand opposite the annotated miRNA, or they could be some other class of transcript, for example the intron of a gene overlapping the annotated miRNA but transcribed from the opposite DNA strand. Alternatively, these may be pri-miRNAs that have been incorrectly annotated to the wrong strand of the genome. To evaluate the possibility that the edited antisense pri-miR-NAs are due to incorrect annotation of miRNAs to the wrong genomic strand, we examined previous experimental data obtained for these miRNAs. One of the edited antisense pri-miRNA sequences is derived from the DNA strand opposite the computationally predicted miR-215 [33]. The method used to predict miR-215 successfully predicted 81 out of 109 known miRNAs from a reference set, but around 20% (17/81) were predicted on the wrong strand of the genome [33]. Our data and the direction of overlapping transcripts suggest that miR-215 may have been annotated to the wrong genomic strand. An edited antisense miRNA sequence was also derived from the DNA strand opposite experimentally verified miRNA miR-133a [34]. This miRNA is present in the genome in two copies (miR-133a-1 and miR-133a-2). Copy miR-133a-2 is hosted within a gene transcribed in the same direction as the annotated miRNA gene. 
In contrast, copy miR-133a-1 overlaps a gene transcribed from the opposite strand. Cloning and expression analysis of miR-133a [34] provides proof that at least one copy of miR-133a is transcribed. As a result of this finding, both copies of miR-133a have been annotated according to the sequence of the cloned copy. Given the direction of overlapping transcripts, however, it remains possible that miR-133a-1 is transcribed from the opposite strand to miR-133a-2, giving rise to a different miRNA. Indeed, our results suggest that miR-133a-1 may have been incorrectly annotated.

Figure 1 A-to-I RNA editing of miRNA precursors in human tissues. The extent of A-to-I editing at each editing site is indicated by the color scale. Each colored box represents the average extent of editing calculated from at least two PCR product sequences, at least one of which was sequenced in both directions. Gray boxes indicate miRNAs that could not be amplified. The number in brackets after the miRNA name is the position of the edited adenosine from the 5' end of the pre-miRNA or equivalent antisense pre-miRNA from the miRNA registry.

Figure 2 Positions of edited adenosines in human pri-miRNAs and antisense pri-miRNAs. Folded pri-miRNA structures were taken from the miRNA registry [6]. Antisense pre-miRNA structures were generated from the reverse complement pri-miRNA sequence using MFOLD [38]. Mature miRNA sequences of around 22 nucleotides and antisense mature miRNA sequences of around 22 nucleotides are indicated by red letters. Edited adenosines are highlighted in yellow. In antisense Hsa-mir-371, edited adenosines were found to reside in base-paired sequence extending beyond the annotated hairpin. Additional bases are in gray.

Similarly, both copies of experimentally verified miR-194 (miR-194-1 and miR-194-2) have been annotated according to the sequence of a cloned copy [34]. Our data and the presence of overlapping transcripts on the opposite strand suggest that miR-194-1 may also have been incorrectly annotated to the wrong genomic strand. In the case of both mir-133a and mir-194, the two copies would generate miRNAs that are perfectly complementary to one another. It has previously been suggested that pairs of complementary miRNAs play a role in miRNA regulation by forming miRNA:miRNA duplexes [35]. Our results suggest that RNA editing may add a further layer of regulation by disrupting complementarity in miRNA:miRNA duplexes. A further two edited antisense miRNA sequences (antisense mir-144 and antisense mir-451) overlap miRNAs that are annotated on the basis of their similarity to mouse miRNAs, and have not been cloned or shown to be expressed by northern blotting in human tissues. The remaining antisense miRNA sequence overlaps mir-371, which has been validated by cloning and northern blotting in human tissues and is therefore correctly annotated. The presence of edited nucleotides in pri-miRNA transcripts indicates that RNA editing occurs early in miRNA biogenesis. Subsequent processes that recognize sequence or structural features of the miRNA precursor could therefore potentially be affected by RNA editing. These include cleavage of the pri-miRNA by Drosha, export of the pre-miRNA from the nucleus by exportin-5, cleavage of the pre-miRNA by Dicer, and miRNA strand selection for inclusion in the microprocessor complex.
Indeed, it has recently been demonstrated that RNA editing of pri-miRNAs can result in suppression of processing by Drosha, and subsequent degradation of the unprocessed edited pri-miRNA [36]. Although it is unclear whether a miRNA that base-pairs with its target through an I:U wobble would be functional, another possibility is that RNA editing may alter target-site complementarity. To investigate the effect of RNA editing of miRNAs on target-site complementarity, we used the miRanda software [37] to predict binding sites of edited miRNAs in 3' UTRs, and compared these with the predicted binding sites of the equivalent unedited miRNAs. For each of the four pri-miRNAs with an editing site in the mature 22mer, the set of predicted targets of the edited miRNA differs from the predicted targets of the unedited miRNA (Table 1).

Table 1. Target predictions were performed using the miRanda software using a probability score cut-off of p < 0.001. For each miRNA, the number of targets predicted for both edited and unedited miRNAs is shown against the number of targets predicted exclusively for edited miRNAs, and the number of targets predicted exclusively for unedited miRNAs.

For the three miRNAs in which the edited adenosine is at a position two to seven bases from the 5' end of the miRNA (miR-151, miR-376a and miR-379), over half of the targets of the edited miRNA are unique to the edited miRNA. In the case of miR-99a the difference is small, with only 5/75 (6%) target predictions differing between edited and unedited miRNAs. In all cases, the top ten predicted targets of the edited miRNA differ from the top ten predicted targets of the unedited miRNA (data not shown). To gain further insight into the potential biological impact of miRNA editing, we identified Gene Ontology (GO) terms in the 'cellular process' category [38] which were over-represented in the predicted targets of edited and unedited miRNAs compared with all Ensembl genes (Figure 3). For the three miRNAs that are edited in the 5' seed region (miR-151, miR-376a and miR-379), comparison of over-represented GO terms associated with the predicted targets of edited and unedited copies reveals distinct differences (Figure 3). Of particular interest are the additional terms that become over-represented; these include regulation of programmed cell death, biosynthesis, RNA metabolism, cell proliferation and transcription (Figure 3).

Figure 3. GO term comparison of edited and unedited miRNA target predictions. For each edited miRNA, GO terms from level 4 of the 'biological process' category that are over-represented in predicted targets of the unedited or edited miRNA (indicated by +) compared with all Ensembl genes were identified. All values are normalized and colored in terms of significance, with bright red cells indicating that a miRNA specifically targets genes in that GO functional class.

RNA editing may therefore contribute to miRNA diversity by generating multiple different miRNAs from an initial pool of identical miRNA transcripts. For example, the total number of predicted targets of Hsa-mir-151 increases from 143 to 229 when taking into consideration both edited and unedited miRNAs. Editing of miRNAs may simultaneously alleviate and augment the gene-regulation effects of miRNAs by changing the concentration of individual miRNAs. Conclusion We have performed the first systematic survey of RNA editing of human miRNAs. We have identified RNA editing sites in at
least 6% of human miRNAs that may impact on miRNA processing, including edits that alter miRNA binding sites and contribute to miRNA diversity. Furthermore, our results suggest that some miRNA genes may have been incorrectly annotated to the wrong strand of the genome. This has implications for the interpretation of existing miRNA experiment data and future experimental design. Materials and methods Total RNA, total cDNA and genomic DNA For the initial screen of RNA editing in ten human tissues, total RNA and matching genomic DNA from the same tissue sample was obtained for human brain, heart, liver, lung, ovary, placenta, skeletal muscle, small intestine, spleen and testis from Biochain (Hayward, USA). For each tissue, sequence data was obtained from one individual. The donor was different for each tissue type. Total cDNA synthesis was performed using random nonamers (200 ng per 20 µl reaction) with Superscript III (Invitrogen, Carlsbad, USA) according to the manufacturer's instructions. Sequencing of pri-miRNAs Primers were designed to the genomic sequence in the vicinity of all 231 miRNA sequences in the miRNA registry version 7.0 [6], using primer3 [39]. PCR primer design was optimized to give PCR products of approximately 500 bp with at least 75 nucleotides either side of the predicted stem-loop structure. PCR primers were used to sequence PCR products in both directions on ABI3700 DNA sequencers. Sequence traces were quality scored using phred. Sequences with less than 70% of bases having a quality score of 20 or more were rejected. In the first stage of sequencing, duplicate PCR and sequencing was performed for each miRNA from each tissue. A miRNA was considered to be successfully sequenced if the following minimum sequence requirements were met for at least one tissue: good-quality sequence from both strands of one PCR, and good-quality sequence from at least one strand of a second PCR. Successfully sequenced miRNAs that were found to be edited were submitted to a second confirmation stage of sequencing. In the second stage of sequencing, quadruplicate PCR and sequencing was performed for each miRNA from each tissue. For each tissue, a miRNA was considered to be successfully sequenced if the following minimum sequence requirements were obtained: good-quality sequence from both strands of one PCR, and good-quality sequence from at least one strand of a second PCR. See Additional data file 1; primary sequence data is available from [40]. Detection and quantification of RNA editing Sequences were visualized and compared in a gap4 database. A-to-I editing was identified as a novel G peak and a drop in peak height at As in a cDNA sequence relative to the equivalent peak in the matching genomic DNA sequence. The extent of RNA editing was estimated using a modified version of the comparative sequence analysis (CSA) method [41]. Briefly, this program normalizes a cDNA sequence trace to a genomic DNA reference trace by comparison of peak heights at unedited nucleotides. The drop in peak height between the DNA reference trace and the cDNA trace at the edited nucleotide is then reported as a percentage of the peak height in the genomic DNA reference trace. For each edited miRNA, the mean extent of editing for each tissue is calculated from all cDNA sequences obtained for that tissue. Analysis of novel RNA editing sites miRNA structures were obtained from the miRBase database [6]. 
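The comparative sequence analysis step described above can be illustrated with a short numerical sketch. The example below assumes that A-peak heights have already been extracted from the genomic DNA and cDNA traces; the arrays, the mean-ratio normalization and the function name are illustrative choices, not the published CSA implementation.

```python
# Illustrative re-implementation of the peak-height comparison used to
# estimate the extent of A-to-I editing (not the published CSA program).
import numpy as np

def editing_extent(genomic_peaks, cdna_peaks, edited_index):
    """Estimate % editing at one adenosine from trace peak heights.

    genomic_peaks, cdna_peaks : A-peak heights at the same positions in
        the genomic DNA and cDNA traces.
    edited_index : index of the candidate edited adenosine.
    """
    genomic_peaks = np.asarray(genomic_peaks, dtype=float)
    cdna_peaks = np.asarray(cdna_peaks, dtype=float)

    # Normalize the cDNA trace to the genomic reference using the
    # unedited positions only.
    mask = np.ones(len(genomic_peaks), dtype=bool)
    mask[edited_index] = False
    scale = np.mean(genomic_peaks[mask] / cdna_peaks[mask])
    normalized_cdna = cdna_peaks * scale

    # The drop in peak height at the edited adenosine, as a percentage of
    # the genomic reference peak, approximates the editing extent.
    drop = genomic_peaks[edited_index] - normalized_cdna[edited_index]
    return 100.0 * drop / genomic_peaks[edited_index]

# Example: ~40% editing at the third of five adenosines in the window.
print(editing_extent([980, 1010, 995, 1000, 990],
                     [490, 505, 300, 500, 495], edited_index=2))
```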
Stem-loop structures of antisense miRNAs were generated by folding the antisense of the miRNA stem-loop sequence obtained from miRBase using MFOLD [42]. To predict edited and unedited miRNA target sites, miRanda (v3.0) [32] was used to scan the edited and unedited miRNA sequences against all human 3' UTR sequences available from Ensembl v34. The algorithm uses dynamic programming to search for maximal local complementarity alignments, which correspond to a double-stranded antiparallel duplex. The new version of the miRanda algorithm (AJ Enright, personal communication) assigns P values to individual miRNA-target binding sites, multiple sites in a single UTR, and sites that appear, from a robust statistical model [43], to be conserved in multiple species. The resulting targets were filtered based on P value (p < 0.001) to ensure a high degree of confidence in the predicted target sites. GO analysis GO terms from level 4 of the 'cellular process' category were obtained for each human transcript from Ensembl. Over-representation for each term (O term ) in a group of sequences with C terms is calculated as follows: where F 1 is the frequency of a term in the group being considered, F 2 is the frequency of a term in the whole genome and t is the term at level L. GO terms with low transcript counts (< 3.0) were excluded from further analysis. Additional data files The following additional data are available with this paper online. Additional data file 1 contains examples of edited sequence traces for each of the edited sites identified in this survey, and the coordinates of edited bases. Additional data file 2 contains PCR primer information, details of the initial screen of miRNAs and annotation of edited miRNAs.
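As a concrete reading of the GO analysis above, the sketch below scores each GO term by the ratio of its frequency among predicted targets (F1) to its frequency across all genes (F2), dropping terms with low transcript counts. The ratio form of the score, the example GO identifiers and the flat-list data structures are assumptions made for illustration; they are not taken from the published implementation.

```python
# Hypothetical illustration of GO term over-representation scoring for
# predicted miRNA targets; the ratio definition is an assumption.
from collections import Counter

def over_representation(target_go_terms, genome_go_terms, min_count=3):
    """Score GO terms enriched among predicted targets.

    target_go_terms : GO terms annotated to predicted targets
        (one entry per transcript/term pair).
    genome_go_terms : the same, for all genes in the genome.
    """
    target_counts = Counter(target_go_terms)
    genome_counts = Counter(genome_go_terms)
    n_target = len(target_go_terms)
    n_genome = len(genome_go_terms)

    scores = {}
    for term, count in target_counts.items():
        if count < min_count:          # drop terms with low transcript counts
            continue
        f1 = count / n_target                  # frequency in the target set
        f2 = genome_counts[term] / n_genome    # frequency genome-wide
        scores[term] = f1 / f2 if f2 > 0 else float("inf")
    return scores

# Toy annotation lists (programmed cell death, RNA metabolism, proliferation).
targets = ["GO:0012501"] * 5 + ["GO:0016070"] * 4 + ["GO:0008283"] * 2
genome = ["GO:0012501"] * 50 + ["GO:0016070"] * 200 + ["GO:0008283"] * 100
print(over_representation(targets, genome))
```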
v3-fos-license
2020-06-11T09:03:20.930Z
2020-06-09T00:00:00.000
219889599
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/journals/crid/2020/6525797.pdf", "pdf_hash": "0055c79051307c032ce05b99068cb538667b4930", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46316", "s2fieldsofstudy": [ "Medicine" ], "sha1": "791090772aa683dd113bac0d3d1d8ec5752b6c5b", "year": 2020 }
pes2o/s2orc
Autogenous Chin Block Grafts in the Aesthetic Zone: A 20-Year Follow-Up Case Report The successful use of osseointegrated implants in the treatment of partial or complete edentulism requires a sufficient bone support. Whenever rehabilitation in atrophic edentulous areas is needed, bone augmentation procedures are recommended. The aim is to provide adequate amount of supporting bone to achieve a prosthetically guided implant placement. This in turn leads to functional and aesthetic improvements that can be maintained on the long term. Bone grafting of the atrophic site can be performed either prior to implant placement or at the time of implantation. Irrespective of the timing, bone augmentation by means of autogenous bone grafts is a reliable technique, as confirmed by several studies. On the other hand, long-term evidence on the use of autogenous chin block grafts in preprosthetic implant surgery is still scarce. Thus, the purpose of the present case is to report the 20-year clinical and radiological outcome of autogenous chin block grafts used to augment a bilateral defect due to agenesis of the upper lateral incisors for implant placement purposes. Introduction The agenesis of the upper lateral incisors is a challenging clinical situation, being an area of high aesthetic value with a limited space available for a correct implant insertion, even after an orthodontic treatment, together with high patient expectations [1]. The agenesis often results in alveolar bone defects, especially at the expense of the buccal plate. The repair of congenital or acquired alveolar defects with autologous bone grafts is one of the most traditional surgical techniques in oral and maxillofacial surgery. Once the ideal amount of bone has been restored, implant treatment assumes an important role in the rehabilitation of these patients [2]. Bone reconstruction techniques have been improved in order to optimize the aesthetic and functional result [3,4]. Despite this, the functional rehabilitation of atrophic alveolar ridges still remains a challenge in oral implantology. Bone augmentation procedures are often indicated to allow implant placement in an optimal three-dimensional position to achieve long-term function and predictable aesthetic results for prosthetic restorations [3]. The extension of the bone defect determines whether bone augmentation procedures can be performed simultaneously with implant placement or as a separate procedure [5]. Among the various bone augmentation materials available, only autologous bone combines osteoconductive, osteoinductive, and osteogenic features when compared to any other bone substitute [6]. Thanks to its biological properties and also due to the absence of immunological reactions, autologous bone graft is considered the "gold standard" for bone regeneration procedures [7]. However, intraoral harvesting is not free from limitations; these include the extension of the donor site and possible complications that may occur during the bone harvesting procedure [8]. The use of intraoral donor sites such as mandibular symphysis and ramus has several advantages compared to that of extraoral sites such as the iliac crest and the tibial plateau. Some studies have shown that membranous bone grafts, including the mandibular symphysis and ramus, show less resorption and better and faster revascularization than endochondral bone grafts, such as the iliac crest and tibial plateau [9,10]. 
Therefore, the embryological origin of the donor site plays a pivotal role in the success of the procedure. An additional important advantage of intraoral donor sites is that bone harvesting can be performed with local anesthetic infiltrations, without the need for general anesthesia. Furthermore, intraoral bone grafts can be obtained easily and with fewer complications than extraoral bone grafts, and the postoperative course is easier and faster [11]. Common donor sites in the oral region are the mandibular symphysis, the retromolar area, and the maxillary tuberosity [11,12]. The mandibular symphysis not only provides a graft volume more than 50% greater than that of the mandibular ramus but is also characterized by simpler surgical access. Moreover, it has been shown that the mandibular symphysis graft is composed on average of 65% cortical bone and 36% cancellous bone. Conversely, the mandibular ramus is almost 100% cortical in nature. The corticocancellous nature of the bone harvested from the mandibular symphysis facilitates faster vascularization once the block is positioned at the recipient site, resulting in more rapid integration. Lower postoperative morbidity and a low rate of wound dehiscence are, on the other hand, reasons why the ramus may be preferred as a donor site [11]. The use of bone substitutes characterized by low turnover rates to cover the graft may reduce the resorption rate of the bone block [13]. Some authors found that deproteinized bovine bone (DBB) particles stabilized by resorbable membranes covering onlay block grafts reduced resorption by almost 50% in comparison to noncovered grafts [1,14]. Bone substitutes may also contribute to the creation of a smooth connection between the block graft and the recipient bone and can provide a scaffold that promotes bone regeneration [13,14]. Another advantage of using resorbable rather than nonresorbable membranes is the elimination of a second surgical phase. Long-term studies evaluating the survival rates of dental implants placed in sites augmented with symphysis onlay grafts are lacking. In particular, to the best of our knowledge, no studies are currently available reporting on survival analysis performed with more than 10 years of follow-up. In view of the above, the aim of the present case report was to evaluate the survival rate of dental implants placed in resorbed alveolar ridges reconstructed with autogenous symphysis onlay bone grafts. Case Presentation A 19-year-old male affected by agenesis of the upper lateral incisors was referred to the authors' department seeking an implant-supported fixed rehabilitation. At the time of presentation, the patient was healthy, nonsmoking, with no local or systemic pathologies or drug allergies (ASA I according to the American Society of Anesthesiologists physical status classification). The anatomy of the upper jaw was evaluated by clinical examination and panoramic radiograph. At the clinical examination, a bilateral bone defect was immediately evident in the region of the upper lateral incisors. The appearance and consistency of the soft tissues were good (Figure 1). The orthopantomograph confirmed the agenesis of the upper lateral incisors with reduced mesiodistal development of the alveolar process (Figure 2).
After discussing possible treatment alternatives with the patient, it was decided to proceed with a bone augmentation procedure by means of intraoral autogenous bone harvested from the mandibular symphysis and delayed implant insertion. All surgical and prosthetic procedures were performed by the same team. A signed informed consent was obtained from the patient. All procedures were conducted according to the 1964 Helsinki Declaration and its later amendments. The first surgical phase was performed on an outpatient basis under local anesthesia after premedication with diazepam 0.2 mg/kg administered orally 30 minutes before surgery. Two monocortical block grafts were collected from the symphysis and fixed at the buccal aspect of the bone defects with osteosynthesis screws (Figures 3-5). At this point, DBB particles (Bio-Oss®, Geistlich Biomaterials, Wolhusen, Switzerland) and native lyophilized type I resorbable collagen membranes of equine origin (Paroguide®, GABA VEBAS srl, Rome, Italy) were used to cover the block graft at each site (Figure 6). Polyamide interrupted sutures were placed to obtain primary closure of the flaps. Silk sutures were instead used at the donor site. Postoperative medications included amoxicillin 1 g twice daily for 6 days, starting on the day of surgery, naproxen sodium as required every 6 hours, and 0.2% chlorhexidine mouthwashes twice daily for 2 weeks, starting on the day after surgery. The sutures were removed after 14 days, and an orthopantomograph was performed. After 6 months of uneventful healing, the fixation screws were removed, and two 3.25 × 13 mm implants were placed in a prosthetically guided position with the aid of a surgical stent (Figure 7). The insertion torque was >35 Ncm. After 5 months, the implants were uncovered to connect the healing abutments. After proper maturation of the soft tissues, impressions were taken with custom impression trays to start the prosthetic phase. Temporary implant-supported acrylic resin prostheses were connected to the implants for initial loading and soft tissue conditioning. After 6 months, definitive implant-supported cement-retained metal-ceramic prostheses were delivered. Buccal peri-implant gingival plastic surgery was performed with a diamond bur to improve the aesthetics of the soft tissues. Clinical and radiological evaluations were conducted at 8 years (Figures 8 and 9) and 20 years from prosthetic loading (Figures 10 and 11). After 20 years, the clinical examination showed healthy and stable soft tissues, with no signs of suppuration or bleeding on probing. Peri-implant probing was performed at six sites per implant, namely, mesiobuccal, buccal, distobuccal, mesiopalatal, palatal, and distopalatal. At all sites, probing depth values ≤ 4 mm were observed. An adequate amount of attached keratinized tissue was present apically to the gingival margin. The quality and stability of the gingival architecture were supported by the radiographic analysis. The 8-year and 20-year orthopantomographs were scanned to obtain digital images with a resolution of 1200 dpi. The digital images were imported into specialized image analysis software (ImageJ 1.49v, Research Services Branch, National Institutes of Health, Bethesda, MD, USA). The calibration of the pixel/millimeter ratio was performed on the basis of a known distance, namely, the length of the implants.
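The pixel calibration and the linear measurements described here reduce to simple proportional arithmetic; a minimal sketch is given below. The pixel counts and point coordinates are invented for illustration and do not correspond to the actual measurements of this case.

```python
# Hypothetical example of the pixel-to-millimetre calibration used for
# radiographic bone-level measurements; all numbers are illustrative.
import math

def mm_per_pixel(known_length_mm, measured_length_px):
    """Calibrate the image scale from a structure of known size
    (here, the 13 mm implant) measured in pixels on the scanned film."""
    return known_length_mm / measured_length_px

def distance_mm(p1, p2, scale):
    """Distance in mm between two image points (x, y) given in pixels."""
    return math.dist(p1, p2) * scale

scale = mm_per_pixel(13.0, 412)   # implant length: 13 mm spanning 412 px
# Marginal bone level: implant-abutment connection vs most apical
# bone-to-implant contact at the mesial aspect (coordinates invented).
print(round(distance_mm((120, 88), (123, 95), scale), 2), "mm")
```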
The 8-year orthopantomograph revealed no detectable marginal bone resorption, calculated as the distance between the most apical bone-to-implant contact visible in the scanned image and the implant-abutment connection level at the mesial and distal aspects (Figure 9). The same measurements were performed on the 20-year orthopantomograph (Figure 11); however, it was not possible to obtain reliable data due to metal artifacts. Nonetheless, the clinical findings suggested the presence of stable marginal bone levels circumferentially around the implants. Discussion The present case was reported to document clinically and radiographically the long-term survival of dental implants placed in atrophic alveolar ridges augmented with mandibular symphysis autogenous onlay grafts. The rationale was to provide evidence that implant rehabilitations in bone reconstructed with autogenous mandibular grafts might constitute a reliable treatment option on a long-term basis. This strengthens the current evidence, as only a few studies have reported on the outcome of such implant-supported rehabilitations for periods longer than 10 years. In a recent retrospective study [15], it was claimed that intraoral bone grafts harvested from the mandibular symphysis, mandibular ramus, and maxillary tuberosity provide a good treatment modality for ridge augmentation. In addition, the amount of bone available from these sites is sufficient for anatomical defects extending up to the width of three teeth [16]. Harvesting of retromolar and symphysis bone grafts is particularly recommended in cases involving multiple tooth reconstruction in the mandible. The surgical access to the symphysis has been described as easier than that to the mandibular ramus [11]. Both techniques can be performed on an outpatient basis, whereas harvesting of bone from distant sites is associated with inpatient care and increased costs [17]. Both the harvesting and grafting procedures are usually performed in the same surgical field. The use of autologous bone in the present case showed excellent survival and success rates. The success of the bone augmentation was confirmed by the stability of the marginal bone levels assessed at the mesial and distal aspects of the implants over the years. It is worth mentioning that at the 20-year follow-up visit, horizontal remodeling of the buccal plate was observed. This, however, did not affect implant stability. The horizontal bone resorption that was found is attributable to the embryological nature of the grafted bone and the duration of the follow-up [18]. From the aesthetic standpoint, the gingival parabolas have been maintained over time, and therefore the aesthetic success has been preserved over the long term [19]. The present study confirmed the long-term effectiveness of alveolar ridge augmentation and implant placement by means of autogenous bone grafts [20]. This procedure resulted in stable bone conditions with low risk of mucosal recession over an observation period of 20 years. It should be noted, however, that a similar clinical situation can currently be solved through the use of narrow implants and careful soft tissue management. As a matter of fact, at present, soft tissue augmentation techniques have demonstrated good aesthetic results, so that more invasive bone augmentation procedures may be avoided [21].
Furthermore, at the time of surgery, the patient was only 19 years old, and a different approach with an additional waiting period of 3 years might be contemplated to prevent the intrusion of the peri-implant tissues that may occur in case of premature implant insertion [22]. The patient also refused presurgical orthodontic treatment aimed at optimizing the interdental spaces of the anterior sector. In a recent paper [18], an average follow-up of only 23.9 months was calculated across published reports of implant rehabilitation in bone augmented with autogenous bone grafts. It is therefore clear that the resorption evaluated in this 20-year case report is worthy of note. The hypothesis that bone substitutes could effectively replace autologous bone, with its osteoinductive, osteoconductive, and osteogenic properties, is still under investigation. On the other hand, various studies have demonstrated the benefits and appropriateness of autogenous tissue for an ideal reconstruction of atrophied ridges before implant surgery [23]. Regarding surgical complications, postoperative morbidity is commonly related to the management of the soft tissues. The most frequent postsurgical complications include flap dehiscences with or without exposure of the grafts or membrane [24]. The peri-implant mucosa needs to be supported by an adequate three-dimensional volume of alveolar bone, including an intact buccal plate of sufficient height and thickness [4,25,26]. Deficiency of the buccal bone anatomy has a negative impact on the aesthetic outcome and is therefore considered a critical causative factor for implant complications and failures [4,26,27]. In the present case, no soft tissue complications occurred at any stage. Postoperative morbidity after mandibular bone harvesting procedures has been reported to be mainly related to temporary or permanent neural disturbances involving the inferior alveolar nerve and its branches [23]. Although precise anatomical limits have been defined for the localization of the mandibular incisor canal, when bone is removed from the mandibular symphysis there is no objective limit below which the probability of neurosensory alterations is eliminated. This is due to physiological variations in the course of the mandibular incisor canal. It is therefore advisable to evaluate the feasibility of the technique on a case-by-case basis through orthopantomographs and second-level investigations such as computed tomography, dental scans, and stereolithographic prototypes. No neurosensory complications were noted in the present case. A limitation of the study is that marginal bone loss was measured only at the mesial and distal aspects, owing to the use of 2D imaging. The measurement of buccal and lingual bone loss can only be performed using 3D imaging modalities. However, panoramic radiographs are frequently used in clinical settings for the evaluation of bone peak stability [2,28]. The results of the present study indicate that, in case of agenesis of the upper lateral incisors, bone grafting from the mandibular symphysis and delayed implant placement may provide satisfactory functional and aesthetic outcomes in the long term. Despite a certain degree of graft resorption that may occur, correct management of the peri-implant soft tissues and the prosthesis is pivotal to maintaining success over the long term.
Today, a similar clinical situation could be resolved through the use of narrow implants and careful soft tissue management. Furthermore, given that the patient was only 19 years old at the time of surgery, we would now wait an additional 3 years before surgery to prevent intrusion of the peri-implant tissues.
v3-fos-license
2014-10-01T00:00:00.000Z
1998-02-01T00:00:00.000
1286168
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.scielo.br/j/bjmbr/a/sDYcHJsPrj5CQT4vg9rDP9S/?format=pdf&lang=en", "pdf_hash": "c5cb2fd9f512485aef436997027d59baf7c53f72", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46317", "s2fieldsofstudy": [ "Medicine" ], "sha1": "c5cb2fd9f512485aef436997027d59baf7c53f72", "year": 1998 }
pes2o/s2orc
Histologic Distribution of Insulin and Glucagon Receptors Insulin and glucagon are the hormonal polypeptides secreted by the B and A cells of the endocrine pancreas, respectively. Their major physiologic effects are regulation of carbohydrate metabolism, but they have opposite effects. Insulin and glucagon have various physiologic roles, in addition to the regulation of carbohydrate metabolism. The physiologic effects of insulin and glucagon on the cell are initiated by the binding of each hormone to receptors on the target cells. Morphologic studies may be useful for relating biochemical, physiologic, and pharmacologic information on the receptors to an anatomic background. Receptor radioautography techniques using radioligands to label specific insulin and glucagon receptors have been successfully applied to many tissues and organs. In this review, current knowledge of the histologic distribution of insulin and glucagon receptors is presented with a brief description of receptor radioautography techniques. Introduction Insulin is a hormone secreted by B cells, and glucagon is secreted by A cells of the pancreas.The two hormones play an important role in carbohydrate metabolism.However, the actions of insulin and glucagon in carbohydrate metabolism are opposite.Furthermore, insulin and glucagon have various physiologic roles in addition to the regulation of carbohydrate metabolism.The physiologic effects of insulin and glucagon on the cell are initiated by the binding of each hormone to target cell receptors.To relate biochemical, physiologic, and pharmacologic information on receptors to an anatomic background, morphologic studies might be important, although microdissection techniques allow the determination of receptor levels in tissues as small as 200 to 500 µm in diameter (1,2).Significant advances in the determination of the histologic distribution of receptors for many hormones have been made in the past two decades, primarily by radioautography.In this review, the insulin and glucagon receptors will be described with emphasis on the histological tissue distribution of these receptors from macroscopic to light microscopic levels determined mainly with the radioautographic technique.The fundamental procedure of macro-and microradioautography for peptide hormone receptors will also be described. Insulin and insulin receptors Insulin is only secreted by B cells of the islets of Langerhans of the pancreas.Insulin was crystallized by Abel in 1926 and all amino acid sequences were identified by Sanger in 1955.Hodgkin determined the tertiary structure of the protein in 1969.Insulin is a polypeptide with a molecular mass of about 5800 Da, in which the A chain is linked to the B chain by two disulfide bridges.The binding site of the insulin molecule to the receptor has been clarified (3).When the site at which insulin takes part in binding to the receptor is labeled with radioactive iodine, normal binding ability is lost.Therefore, in investigations on insulin binding to the insulin receptors, Tyr A14 located away from the receptor-binding site is labeled with radioactive iodine. 
Action of insulin Insulin plays an important physiologic role, especially in the liver, muscle, and adipose cells, in homeostasis of blood glucose concentration (4).For this reason, the liver, muscle, and fat have been regarded as major target tissues for insulin.However, insulin has also been found to promote cellular growth and proliferation in many cell types in culture.The physiologic effects of insulin are divided into three types in terms of the timing of action (5). 1) Immediate effect of insulin: The effect occurs within several seconds after insulin is administered; transportation of glucose is promoted and phosphorylation and dephosphorylation of enzymes occur.2) Mid-term action of insulin: The effect of insulin is detected within 5-60 min after insulin administration, including the induction and appearance of the gene encoding the protein.The maximum effects of this action are caused within 3-6 h. 3) Long-term action of insulin: The effect can be detected from several hours to several days later, including the stimulation of DNA synthesis, cell division, and cell differentiation. It should be noted that the action of insulin is different depending on the amount.For instance, the amount of insulin necessary to inhibit gluconeogenesis is greater than that necessary to inhibit glycogenolysis (4).These actions of insulin begin with the binding of insulin to insulin receptors. Insulin receptors The action of insulin starts with binding to the receptors located on the membrane of target cells as is the case for all peptide hormones.The kinetic characteristics of insulin receptors in various types of cells have been clarified using radiolabeled insulin (6).The insulin receptor exists on the membrane of all mammalian cells.The brain cell, which has been assumed to have an insulin-independent organization, is also included among these cells (7,8).The number of receptors varies from 40 for erythrocytes to 200 ~300 x 10 3 for adipocytes and hepatocytes.The binding characteristics are complex because of negative cooperativity among the insulin-binding sites, in which binding of one insulin molecule to the receptor prevents binding of the second molecule (9). The insulin receptor consists of two α subunits containing the site for insulin binding and two ß subunits containing the tyrosine kinase domain; these subunits are connected by disulfide bridges to form a 350 kDa ß-α-αß tetramer (10).The insulin receptor derives from a single gene on the short arm of chromosome 19 and consists of 22 exons separated by 21 introns (11).However, two isoforms of the insulin receptor are produced by alternative splicing of exon 11 (12). 
The distribution of insulin receptors with or without exon 11 varies among tissues (12,13).The isoform with exon 11 is predominant in the liver, but is not common in muscle.The adipocyte has both isoforms at nearly equal levels.These two isoforms of the insulin receptor exhibit different affinities for insulin.The isoform with exon 11 exhibits higher affinity than the isoform without exon 11 (14,15).The insulin receptor is a hormone-activated protein tyrosine kinase.Protein tyrosine kinases are enzymes that catalyze the transfer of phosphate groups from adenosine triphosphate to the tyrosyl residues of proteins.Binding of insulin to the binding site on the extracellular portion of the α subunit results in activation of the protein kinase site on the cytoplasmic portion of the ß subunit, which adds phosphate groups to tyrosine residues in the target proteins in the cytoplasm.Following receptor kinase activation, an insulin signal is transmitted through postreceptor signaling pathways (16,17). Glucagon and glucagon receptors Glucagon was discovered in the 1930s in crude insulin preparations and was termed the hyperglycemic glycogenolytic factor.Glucagon is a single-chain polypeptide with a molecular mass of 3485 Da which consists of 29 amino acids (4).A single gene encoding preproglucagon has been found on chromosome 2 in humans and rats, and certain intestinal and neural cells express the preproglucagon gene (18,19).Glucagon immunoreactivity has also been reported in certain intestinal (20)(21)(22) and neural (23) cells, besides pancreatic A cells, but these cells do not appear to secrete significant quantities of true glucagon under normal circumstances (24).In the intestine, glucagon is secreted in the form of glicentin (consisting of 69 amino acids and referred to as enteroglucagon) and oxyntomodulin (37 amino acids), but not in the form of true glucagon (29 amino acids) (25).Therefore, A cells of the islets of Langerhans of the pancreas are the only source of true glucagon under normal circumstances.The binding site of the glucagon molecule to the receptor has been investigated (26,27).Since normal binding ability of glucagon to the receptors will be lost by labeling the binding sites with radioactive iodine, Tyr10, which is not the receptor-binding site, is labeled with radioactive iodine in experiments carried out to investigate glucagon binding. Action of glucagon Glucagon has several effects that are opposite to those of insulin (4).Glucagon raises the blood glucose concentration by stimulating hepatic glycogenolysis and gluconeogenesis.In contrast, a fall in glucagon concentration below basal levels results in a decrease in hepatic glucose production.Glucagon has been shown to play a critical role in the disposition of amino acids by increasing their inward transport, degradation, and conversion into glucose.The stimulatory action of glucagon on hepatic glucose production lasts only 30 to 60 min.In addition to its action on carbohydrate metabolism, glucagon is said to be involved in the regulation of lipolysis (28), but within the physiological range, glucagon has little or no effect on the lipolysis of adipose tissue in humans (29,30).Other reported actions of glucagon include the inhibition of gastric acid secretion and gut motility (31), a positive inotropic effect on the heart (32), a spasmolytic function on the intestinal wall (33), involvement in intra-islet hormone regulation in the pancreas (34)(35)(36), and regulation of renal function (37,38). 
Glucagon receptor The effects of glucagon are mediated by the binding of the hormone to a specific receptor (24).The human glucagon receptor is located on chromosome 17 (39).The rat glucagon receptor was cloned and found to belong to the GTP family and cyclase-linked receptors having seven putative transmembrane domains (40,41).The N-terminal extracellular portion of the receptor is required for ligand binding and most of the distal Cterminal tail is not necessary for ligand binding; the absence of the C-terminal tail may slightly increase the receptor-binding affinity for glucagon.The C-terminal tail is also not necessary for adenyl cyclase coupling and, therefore, does not play a direct role in G protein activation by the glucagon receptor (42).A human glucagon receptor has also been cloned from human liver tissue, and it was shown that the human glucagon receptor amino acid sequence had 82% identity with the rat receptor (43).The number of receptors is supposed to be 200 x 10 3 in hepatocytes (40), and it has been proposed that two types of hepatic glucagon receptors may have different signaling pathways (44).The principal hepatic glucagon receptor is a glycoprotein of 63 kDa (45,46) and there is a second receptor of 33 kDa (45).The existence of two functionally distinct forms of glucagon receptors, a high-affinity form with a K d of 0.1~1.0nM comprising 1% of the glucagon-binding sites and a low-affinity form with a K d of 10~100 nM that makes up 99% of the molecules, has been reported (24). Methods for investigating the histologic distribution of receptors Although both immunohistochemistry and radioautography can be used to investigate the distribution of insulin and glucagon receptors, we focus here on the radioautographic technique (47).The body is made up of many organs and each individual organ is functionally and morphologically heterogeneous.It is important that experiments of receptor distribution proceed from the macroscopic to the microscopic level.Macroradioautography including whole body radioautography is suitable for studying the tissue distribution of radiolabeled ligands in the whole animal or in large organs.Microradioautography at the light microscopy level is suitable for the visualization of radiolabeled ligand binding at the cellular level. 
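To illustrate how two receptor populations with such different affinities shape overall binding, the sketch below evaluates a simple two-site equilibrium model using the reported proportions and representative Kd values; the assumption that the two site classes bind independently, and all numerical inputs, are made only for this example.

```python
# Illustrative two-site equilibrium binding model for hepatic glucagon
# receptors (high-affinity ~1% of sites, low-affinity ~99%); assumes the
# two site classes bind independently, which is a simplification.
def fraction_bound(ligand_nM, kd_nM):
    """Fractional occupancy of one site class at a given free ligand
    concentration (simple Langmuir isotherm)."""
    return ligand_nM / (ligand_nM + kd_nM)

def total_bound(ligand_nM, sites_per_cell=200_000,
                high_fraction=0.01, kd_high=0.5, kd_low=30.0):
    """Receptors occupied per hepatocyte, summed over both site classes."""
    high_sites = sites_per_cell * high_fraction
    low_sites = sites_per_cell * (1.0 - high_fraction)
    return (high_sites * fraction_bound(ligand_nM, kd_high)
            + low_sites * fraction_bound(ligand_nM, kd_low))

for conc in (0.1, 1.0, 10.0, 100.0):   # free glucagon, nM
    print(f"{conc:6.1f} nM -> {total_bound(conc):8.0f} receptors occupied")
```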
For radioautography, two types of labeling techniques for receptors can be used in vivo and in vitro.In in vivo receptor radioautography, tissues and organs are removed from experimental animals that have received radiolabeled ligand injections, and sections are prepared.The sites labeled with the ligand are visualized by radioautography with films or nuclear emulsions.During in vitro procedures, sections of tissues and organs from experimental animals that have not been injected with a radiolabeled ligand are incubated with a radiolabeled ligand.The sections are then exposed to films or nuclear emulsions to detect ligand radioactivity.Since many factors, such as ligand metabolism and tissue barriers, might influence in vivo receptor labeling, the interpretation of these results might be more complicated than that of in vitro labeling.However, in vivo procedures might better reflect the physiologic binding of ligands to their receptors.As reported by Kuhar (48), in vitro procedures have certain advantages over in vivo procedures: the quantity of radioisotope is much smaller than that used in in vivo radioautography and the physiologic condition of the receptor sites can be regulated easily by removing endogenous ligand sources.The present paper mainly deals with the in vivo procedure. Macroradioautography for histologic receptor distribution Macroradioautography includes whole body radioautography of animals and organ radioautography of large organs such as the brain, liver and kidney.The procedure for in vivo whole body radioautography is as follows (47,49,50): 1) After intravenous injection of a radiolabeled ligand, the experimental animals are perfused with Ringer solution to wash out unbound ligands from the whole body.Three minutes before the perfusion, the mice are anesthetized by intraperitoneal injection of sodium pentobarbital.2) The region incised for perfusion is covered with 3% carboxymethylcellulose that has been frozen with powdered dry ice.The entire body of the animal is then frozen at -70 o C in a mixture of dry ice and acetone.3) The frozen animal is embedded in 6% carboxymethylcellulose on the microtome stage and after equilibrating the block with the temperature of the cryostat (-20 o C), 20-µm thick whole body cryosections are prepared using a heavy duty microtome (LKB 2250, Sweden).4) Adhesive tape (Scotch tape, Type 800, 3M Co., St. 
Paul, MN) is applied to the cut surface of the frozen animal to prevent the section from falling apart.5) The sections obtained are freeze-dried in a cryostat or deep freezer.The freeze-dried sections are brought to room temperature in a desiccator containing silica gel.6) The dry sections are then placed in direct contact with films, using aluminum plates that are screwed down.For 125 I-labeled ligands, Ultrofilm (LKB, Bethesda, MD), Hyperfilm TM -3 H (Amersham International plc., Buckinghamshire, UK), or a Konica Macroradioautograph (Konica Co., Tokyo, Japan) film should be used (without a protective layer of gelatin), and inserts should be avoided.Before examination using this method of direct contact with the film, negative and positive chemography should be checked (51)(52)(53).7) After exposure in a cool, dark box, the films are developed and fixed.D19 (Kodak, Rochester, NY) is a good emulsion developer because it shows a large range of gray levels and results in good proportionality between absorbance and radioactivity (54).There is an alternative method of whole body radioautography in which whole body cryosections on glass slides are used rather than cryosections adhering to adhesive tape (55). Whole body radioautographs thus obtained are usually observed with the unaided eye.However, whole body radioautographs are also suitable for observation under a lowpower microscope when 125 I-labeled ligands are used.To identify organs and tissues appearing in whole body radioautographs, whole body histologic sections that correspond to the radioautographs are prepared (56)(57)(58). With whole body radioautography, radioactivities in various parts of organs and tissues can be estimated and compared.For tubular organs, these techniques are not sufficient for such demands, because a single whole body section does not display any tubular organ as a whole.For this purpose, radioautography of tubular organs has been established (59).For parenchymal organs such as the brain, liver and kidney, the following organ radioautography is recommended: after injection of 125 I-labeled ligand, with or without excess unlabeled ligand, the animals are perfused with Ringer solution and fixed with 4% paraformaldehyde solution through the left ventricle.The organs are then removed and immersed in the solution for 2 h.After immersion, the specimens are dehydrated with a graded series of ethanol, and embedded in paraffin after immersion in xylene.Five-µm thick paraffin sections on glass slides are deparaffinized and brought into contact with the film for macroradioautography.After appropriate exposure, the films are developed and fixed.In this case, a nuclear emulsion instead of a film will give better results.The emulsion is applied using the dipping technique (60,61), but the exposure time is much longer than in microradioautography. 
In vitro techniques are essential for the study of receptors in tissues and organs with a functional barrier to a given ligand such as the brain.In the in vitro procedure, the organs are removed from the animals without the injection of a radiolabeled ligand.The organs are washed with ice-cold physiologic saline to remove blood and are frozen as soon as possible in isopentane cooled with liquid N 2 .Sections, 25-µm thick, are cut with a microtome at -15 o C and thaw mounted on glass slides.To prevent the section from peeling off the glass slide, the glass slides must be coated with dichlorodimethyl-silane, poly-L-lysine, or gelatin.The sections are dried immediately and preincubated with a buffer solution for 15 to 30 min to remove intrinsic ligands and to act as an inhibitor for peptidase.After preincubation, the sections are incubated with a radiolabeled ligand.Temperature and incubation time are determined by biochemical receptor-binding experiments with homogenate tissue or with the crude membrane fraction.To decrease nonspecific binding, washing with ice-cold isotonic buffer solution is performed after incubation with the radiolabeled ligand.Final washings are conducted with ice-cold distilled H 2 O to remove salts present in the buffer.After air drying, the sections are exposed to a photographic film. For analysis of macroradioautographs, the density of the radioautographic images in the films is determined by computer-assisted densitometry.As a standard for quantification of radioautographs, 125 I scales ([I-125]micro-scales, RPA522, Amersham International plc.) are used.With the 125 I-plastic standards the sensitivity of the film to 125 I should be determined (62).To obtain appropriate absorbance values for the radioautographs, it is necessary to know the relationship between isotope concentrations and film blackings at various exposure times.Therefore, in the experiments of radiolabeled ligand binding, with or without unlabeled ligand using serial cryosections, appropriate absorbance values for each section should be obtained by changing the exposure time for the different amount of binding. Microradioautography for histologic receptor distribution To determine where specific ligand-binding sites are located at the cellular level, radioautographic techniques with high spatial resolution are required.For this purpose, microradioautography at the light microscopy level is used.The procedure for microradioautography is as follows. 
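A minimal sketch of how a co-exposed 125I standard scale can be used is given below: absorbances measured over the standard steps are fitted against their known activities, and sample absorbances are then converted to apparent radioactivity. The linear fit and every number are illustrative assumptions; real film response is proportional only over a limited range, which is why several exposure times are recommended above.

```python
# Hypothetical densitometry calibration against co-exposed 125I standards;
# the linear fit and all values are illustrative only.
import numpy as np

# Known activities of the 125I micro-scale steps (arbitrary units) and the
# film absorbance measured over each step after one exposure time.
standard_activity = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
standard_absorbance = np.array([0.06, 0.11, 0.22, 0.41, 0.78])

# Fit absorbance = slope * activity + intercept over the linear range.
slope, intercept = np.polyfit(standard_activity, standard_absorbance, 1)

def absorbance_to_activity(a):
    """Convert a measured absorbance to apparent radioactivity."""
    return (a - intercept) / slope

total = absorbance_to_activity(0.55)        # section: labeled ligand only
nonspecific = absorbance_to_activity(0.09)  # section: plus excess unlabeled ligand
print(f"specific binding ~ {total - nonspecific:.2f} (activity units)")
```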
The paraffin-and resin-embedded sections or the frozen sections of tissues obtained from the animals injected with radio-active ligand are used for light microscopic radioautography.These sections are mounted on microscope slides and a nuclear emulsion is applied.For microradioautography several types of nuclear emulsions are commercially available.For microradioautographic studies of receptors at the light microscopy level with 125 I, Kodak NTB2, and Konica NR-M2 (Konica Co., Tokyo, Japan) are excellent.Although there are various methods for applying emulsion to a section, the dipping technique is commonly used.The dipping technique is based on brief dipping of the section-slide into a liquefied emulsion, followed by drying of the emulsion.Under safelighting, the appropriate amount of emulsion is transferred from the bottle to a glass cylinder and placed in a water bath at 43 o C for 10 min to melt the emulsion.The volume should then be increased by about 50% with distilled water depending on the emulsion used and the thickness of the emulsion layer desired.The molten emulsion is poured into a dipping jar and 1% glycerol is added.A slide is dipped into the emulsion for several seconds and slowly withdrawn.The back of the slide is wiped with paper, and the slide is then cooled quickly by laying it on a metal plate cooled with ice to prevent redistribution of silver grains.After about 10 min on the plate, the slide is laid flat on the bench for 1 hour, transferred to a slide box, and the box is kept overnight in a desiccator with silica gel.After appropriate exposure in a cool dark room without silica gel, the emulsion is developed and fixed.The sections are then stained and mounted by routine histologic procedures. Radioautographs prepared by microradioautography are usually observed by light microscopy.In the radioautographs prepared with a thin layer of emulsion and a 1-µm thick plastic section, it is not only relatively easy to relate silver grains and tissue, but a micrograph can be also taken with both the silver grains and the tissue section in a single focus.However, with thick sections made from paraffin and frozen blocks, using the ordinary dipping method, the resulting distances between the silver grains and tissue might be variable.There are many problems with these stained radioautographs, but they can be solved with a laser scanning microscope (LSM) (63)(64)(65). Microradioautographs are quantified by grain counting.The counting can be carried out on photomicrographs taken by the recorder fitted to the LSM or by image analysis of the photomicrographs.The number of grains in the cells of interest are scored, and the average number of grains per µm 2 is determined. 
To estimate specific ligand binding among the tissues and cells, statistical analysis is performed, usually by the t-test. In receptor radioautography, however, two different kinds of microradioautographs should be made to identify specific ligand binding. Specific binding of ligand in a given tissue is obtained from the differences between total and nonspecific binding. The total binding is obtained from radioautographs made from the tissue injected with radiolabeled ligand. The nonspecific binding is obtained from those made from the same radiolabeled ligand-injected tissue plus excess unlabeled ligand. Radioautographs showing total binding of a ligand and those showing the nonspecific binding are obtained from different specimens in receptor radioautography. In microradioautography using nuclear emulsions, the grains in a large number of microradioautographs are counted, making it difficult to correlate them. For this purpose, a statistical method for receptor radioautography has been developed (66,67). Histologic distribution of insulin receptors The distribution of insulin receptors in whole animal tissues has been studied in our laboratory by in vivo whole body radioautography (62). After intravenous injection of 125I-insulin into male adult mice, high specific insulin binding occurred in the liver, small intestine and large intestine (Figure 1). Relatively high binding was observed in the pancreas, Harderian gland, and choroid plexus in the brain. The deferent duct also showed relatively high binding, while the binding level in other parts of the reproductive organs, including the testis, was very low. The skeletal muscle and fat, both of which are thought to be major targets for insulin action, showed extremely low insulin binding. Microradioautography of skeletal muscle demonstrated that the blood vessel in the skeletal muscle showed substantial specific insulin binding, but specific binding was not seen in the muscle fiber even after long exposure (Figure 2). In the liver, specific insulin receptors on the plasma membrane of hepatocytes have been demonstrated by in vivo radioautographic studies (68,69). The distribution of insulin-binding sites in the liver of fed and fasted mice was studied by microradioautography 3 min after intravenous injection of 125I-insulin (70). Specific binding of 125I-insulin to liver parenchymal cells was seen in these mice. In both fed and fasted mice, a density gradient of the binding from the periportal zone to the perivenous zone was evident, and binding in each zone was significantly higher in fasted than in fed mice.
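For the grain-counting approach described above, the comparison of total and nonspecific radioautographs amounts to comparing grain densities between two sets of sections. The sketch below uses invented grain densities and an ordinary two-sample t-test in place of the dedicated statistical method of references (66,67).

```python
# Illustrative comparison of grain densities (grains per square micrometre)
# between total-binding and nonspecific-binding radioautographs; the data
# are invented, and a plain two-sample t-test stands in for the dedicated
# statistical method cited in the text.
from statistics import mean
from scipy import stats

total_density = [0.92, 0.85, 1.01, 0.97, 0.88]        # 125I-ligand alone
nonspecific_density = [0.21, 0.18, 0.25, 0.22, 0.19]  # plus excess unlabeled ligand

specific_estimate = mean(total_density) - mean(nonspecific_density)
t_stat, p_value = stats.ttest_ind(total_density, nonspecific_density)

print(f"estimated specific binding: {specific_estimate:.2f} grains/um^2")
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```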
As to the pancreas, in vivo microscopic radioautography with 125 I-insulin revealed that the exocrine pancreatic cells of the rat have a large number of insulin receptors (71,74).The existence of insulin receptors not only on the plasma membrane of the exocrine pancreatic cells but also on that of duct cells was confirmed by in vivo radioautography (75). Insulin has been detected in the brain (7,8,76,77), which was thought to be an insulin-independent organ because insulin cannot pass through the blood-brain barrier.The distribution of insulin receptors was investigated in the brain by in vitro radioautography (78)(79)(80)(81), and insulin receptor mRNA was demonstrated in rat brain by in situ hybridization (82).These studies demonstrated that the distribution of insulin receptor-binding sites was consistent with the distribution of insulin receptor mRNA, and insulin receptors were most abundant in the granule cell layers of the olfactory bulb, cerebellum and dentate gyrus, in the pyramidal cell layers of the pyriform cortex and hippocampus, in the choroid plexus and in the arcuate nucleus of the hypothalamus.Our detailed studies on the anatomical distribution of insulin receptors in the mouse hippocampus using radioautography after in vitro labeling of cryostat sections with 125 Iinsulin demonstrated that insulin receptors were distributed most intensely in the granular and pyramidal layers, while the densities of insulin receptors were low in the lacunomolecular layer (83).Among the pyramidal cell layers of the hippocampus, CA3b and CA3c sectors showed significantly higher densities of insulin receptors than CA1 and CA3a.Baskin et al. (78), studying rat brain by quantitative radioautography, showed that the choroid plexus had a high density of insulin receptors. The kidney showed an intense radioautographic reaction when experimental animals received radiolabeled insulin, but the binding of labeled insulin was nonspecific in nature since the strong reaction was not depressed by the presence of an excess amount of unlabeled insulin (69,84).In vitro radioautographic study, however, demonstrated specific insulin receptors in the glomeruli and tubules of the cortex (85).In the reproductive system, the deferent duct showed relatively high insulin binding in the in vivo whole body autoradiograph (62).In vivo microradioautographic analysis of insulin-binding sites in the mouse deferent duct (86) showed the presence of specific insulin binding in the endothelial cells of capillaries and certain fibroblasts in the lamina propria, but the epithelial cells, except for the basal cells, did not show any insulin binding (Figure 3).Endothelial cells are known to be insulin targets (87), and the presence of specific insulin receptors on the vascular endothelial cells has been demonstrated by in vivo radioautography in rat heart capillary (88).It has been shown that insulin has a biological effect on cultured human lung fibroblasts (89), probably mediated by an interaction of insulin with insulinlike growth factor I receptor.However, although insulin binding to fibroblasts in the mouse deferent duct decreased significantly in the presence of an excess amount of insulin-like growth factor I (86), the rate of decrease was 22% indicating that, to some extent, the receptors are specific for insulin.Hirose et al. 
(86) also indicated that the fibroblasts in the deferent duct do not always show insulin binding and two types of fibroblasts could be distinguished by electron microscopy.The other reproductive organs such as testis, prostate, seminal vesicle, and epididymis have been shown to possess insulin receptors by the membrane binding assay (90,91).A radioautographic localization study demonstrated the presence of insulin receptors in rat Leydig cells (92). In skeletal and smooth muscles, in vivo radioautographic investigation failed to demonstrate specific insulin binding (62,71), though the muscles are clearly major target organs of insulin and many biochemical studies on muscle insulin receptors have been performed (93,94).The situation of the adipocytes is equal to that of muscle cells.The reason why we have not been able to demonstrate insulin binding in these tissues by in vivo radioautography is uncertain at present.Apart from the tissues described above, localization of insulin receptors has been demonstrated by in vivo radioautography in osteoblasts of rat tibia (84), parenchymal cells of the adrenal cortex and medulla (71), and epidermal cells of mouse skin (95).Immunohistochemical localization of insulin receptors has been demonstrated in the syncytiotrophoblast of human placenta from 6 to 10 weeks postmenstruation (96), in human fetal fibroblasts (97), and in amacrine cells of the chick retina (98). Histologic distribution of glucagon receptors The receptors for glucagon have been identified in the kidney (99,100), brain (101,102), lymphoid cells of the spleen and thymus (103), parenchymal cells of the liver (104-106), and endothelial and Kupffer cells in the liver (107), heart (108)(109)(110), adipose tissue (100), intestinal smooth muscle tissue (33) and endocrine pancreatic cells (111,112).Recently, expression of glucagon receptor mRNA was also examined in various tissues revealing that liver, kidney, heart, adipose tissue, spleen, pancreatic islets, ovary, and thymus expressed relatively abundant levels of glucagon receptor mRNA, whereas levels in the stomach, small intestine, adrenal glands, thyroid and skeletal muscle were low (113).Similar results have been reported by Svoboda et al. (114), Burcelin et al. (115), Christophe (116), and Yoo-Warren et al. (117).However, expression of glucagon receptor mRNA does not necessarily signify the formation of glucagon receptor, because it has been suggested that glucagon receptor expression is modulated at a step after mRNA formation (117).Only limited information is available concerning the histologic distribution of glucagon receptors in vivo in liver parenchymal cells (105,106), intestinal smooth muscle cells (33) and the brain (102).We have investigated the histologic distribution of the receptors in young, adult and pregnant mice using whole body and light microscopic radioautography with 125 I-labeled glucagon. The whole body radioautographic experiment using adult mice injected intravenously with 125 I-glucagon demonstrated that only the liver had very high glucagon binding (Figure 4).In vivo microradioautography of the liver revealed the presence of a density gradient of binding from the periportal zone to the perivenous zone (Figure 5).Some cortical tissues in the kidney also showed significant specific binding, but the level was much lower than in the liver.In other tissues and organs, specific binding was equal to or below the reliable quantification limit. 
Conclusion

A working hypothesis of a direct correlation of hormone receptor density with hormone action points to hitherto unemphasized targets in the small and large intestines and deferent duct as major sites of insulin action in the body. In contrast, only the liver is regarded as a major site of glucagon action. However, the existence of insulin receptors has been demonstrated in almost all tissues studied. Furthermore, certain tissues such as skeletal muscle and adipose tissue revealed the existence of insulin receptors despite the difficulty of morphological demonstration of insulin receptors in these tissues. These conflicts may derive from the sensitivity of the technique used for detecting the hormone receptors. The function of the hormone receptor in a given tissue should change with age (93) and the action of a given hormone should change in accordance with concentration (118). In addition to these factors, evidence has pointed to heterogeneity of insulin receptor structure and function (119). To understand the true function of insulin and glucagon in a specific tissue, anatomical localization of the receptors should be the first step of investigation.

Figure 1 - Whole body radioautographs showing the distribution of total (A) and nonspecific (B) insulin-binding sites in mice 3 min after 125I-insulin injection. The mice were injected intravenously with 185 kBq of 125I-insulin (porcine 125I-Tyr A14 insulin dissolved in Ringer solution at a concentration of 185 kBq/0.4 ml) in the absence (A) or presence (B) of excess (50 µg) unlabeled insulin. Bar = 1 cm.

Figure 2 - Overlay image of differential interference contrast (DIC) and confocal LSM images of microradioautographs of mouse skeletal muscle 3 min after 125I-insulin injection. The number of silver grains on the wall of blood vessels (BV) is higher than that found in skeletal muscle fibers (SMF). Bar = 50 µm.

Figure 3 - Overlay image of differential interference contrast (DIC) and confocal LSM images of microradioautographs of mouse deferent duct 3 min after intravenous injection of 125I-insulin. Numerous silver grains can be seen in the lamina propria (LP), but not in the epithelium (ET) or smooth muscle layer (SML). Bar = 50 µm.

Figure 4 - Whole body radioautographs showing the distribution of total (A) and nonspecific (B) glucagon-binding sites in mice 3 min after intravenous injection of 185 kBq 125I-glucagon. 125I-glucagon (3-[125I]iodotyrosyl10 glucagon dissolved in Ringer solution at a concentration of 185 kBq/0.4 ml) was injected into the tail vein in the absence (A) or presence (B) of excess (25 µg) unlabeled glucagon. Bar = 1 cm.

Figure 5 - In vivo microradioautograph of the liver showing the gradient of glucagon binding from the periportal (PV) to the perivenous (CV) zone (see text).
Epigenetic Alterations in the Brain Associated with HIV-1 Infection and Methamphetamine Dependence

HIV involvement of the CNS continues to be a significant problem despite successful use of combination antiretroviral therapy (cART). Drugs of abuse can act in concert with HIV proteins to damage glia and neurons, worsening the neurotoxicity caused by HIV alone. Methamphetamine (METH) is a highly addictive psychostimulant drug, abuse of which has reached epidemic proportions and is associated with high-risk sexual behavior, increased HIV transmission, and development of drug resistance. HIV infection and METH dependence can have synergistic pathological effects, with preferential involvement of frontostriatal circuits. At the molecular level, epigenetic alterations have been reported for both HIV-1 infection and drug abuse, but the neuropathological pathways triggered by their combined effects are less known. We investigated epigenetic changes in the brain associated with HIV and METH. We analyzed postmortem frontal cortex tissue from 27 HIV seropositive individuals, 13 of whom had a history of METH dependence, in comparison to 14 cases who never used METH. We detected changes in the expression of DNMT1, at mRNA and protein levels, that resulted in the increase of global DNA methylation. Genome-wide profiling of DNA methylation in a subset of cases showed differential methylation in genes related to neurodegeneration; dopamine metabolism and transport; and oxidative phosphorylation. We provide evidence for the synergy of HIV and METH dependence on the patterns of DNA methylation in the host brain, which results in a distinctive landscape for the comorbid condition. Importantly, we identified new epigenetic targets that might aid in understanding the aggravated neurodegenerative, cognitive, motor and behavioral symptoms observed in persons living with HIV and addictions.

Introduction

Approximately 40 million people worldwide are infected with the Human Immunodeficiency Virus (HIV). HIV traffics into the central nervous system (CNS) early after infection and could be associated with a spectrum of neurobehavioral conditions ranging from minor neurocognitive disorder (MND) to HIV-associated dementia (HAD) [1,2]. HIV involvement of the CNS continues to be a significant problem despite successful use of combination antiretroviral therapy (cART), which has decreased the incidence of HAD but has not greatly affected the prevalence of milder forms of HIV-associated neurocognitive disorders (HAND) [3]. This is probably due to several conditions, including drug resistance, cART toxicity and comorbidity factors such as aging, use of drugs of dependence and Hepatitis C virus infection [4,5]. Multiple behavioral risk factors contribute to the transmission of HIV, including injection drug use, which represents the second most risky behavior in the United States, accounting for one-third of the AIDS cases [6]. Drugs of abuse may act in concert with HIV proteins to damage glia and neurons, worsening the neurotoxicity elicited by HIV alone [7]. Methamphetamine (METH) in particular is a highly addictive psychostimulant drug, the abuse of which has reached epidemic proportions worldwide and its use is particularly high among persons with HIV infection [8]. METH use has progressively increased in frequency to become the second most abused illicit drug in the United States, and more than 35 million individuals worldwide use this drug.
As a result of the association of METH use with high-risk sexual behavior, increased HIV transmission, and the development of antiretroviral drug resistance, METH plays an important role in driving the course of the HIV epidemic in the United States [9]. HIV infection and METH dependence are frequently comorbid and may have synergistic pathological effects, with preferential involvement of frontostriatal circuits [10]. HIV seropositive METH users have more cognitive abnormalities, higher plasma viral loads and more neurological damage than non-drug users [11,12,13,14], which in turn have adverse effects on real-world health outcomes in HIV [15,16]. The METH potentiation of HIV neurodegeneration is mediated by a complex variety of molecular mechanisms, including calcium deregulation, oxidative stress and inflammation [13,14]. METH exposure was recently shown to alter DNA methylation, histone acetylation and gene expression in animal models [17,18], implicating epigenetic mechanisms in its neurotoxicity. Epigenetic mechanisms play an important role during infection with retroviruses, including HIV-1, as they mediate the integration of the virus into the host genome, a crucial step in the viral life cycle [19,20,21]. Latent forms of HIV-1 are silenced by transcriptional shutdown with the establishment of a repressive chromatin environment at insertion sites by the recruitment of histone deacetylases [22]. In addition, viral promoters and enhancers are hypermethylated in the latent reservoirs from HIV-1-infected patients without detectable plasma viremia, while those same sites appeared hypomethylated in viremic patients [23]. Drug abuse alone is known to induce epigenetic changes in the brain which have been related to addictive behaviors [21], including alterations in histone tail modifications and DNA methylation, and the regulation of gene expression by non-coding RNAs in reward-associated brain nuclei. These changes modify neuronal plasticity and render individuals more prone to drug addiction [21,24]. Acute exposure to amphetamines causes induction of c-fos and fos-b genes in the nucleus accumbens, which is associated with increased acetylation of histone 4 residues in their promoters [25]. Although the epigenetic mechanisms induced by either HIV-1 infection or drug abuse are somewhat defined, the molecular consequences of their co-occurrence are much less explored. The combination of HIV and METH, for example, results in greater interneuron loss and higher activation of reverse transcriptase in macrophages [26]. In rodent models, exposure to METH alters the expression of DNA methyltransferase I (Dnmt1), the enzyme involved in maintenance of DNA methylation and which is abundantly expressed in the adult brain [18]. Remarkably, the transcription of DNMT1 in human T-cells is also regulated by early expressed HIV-1 genes [27]. These examples suggest a synergistic action of HIV-1 infection and METH use that might be transduced via epigenetic mechanisms. In the present report we provide evidence that concurrent HIV-1 infection and METH use alters the methylome of the host brain, inducing epigenetic and transcriptional changes that appear to be specific for the comorbid condition. We detected increased levels of DNMT1 in the brains of HIV-seropositive cases who used METH, which resulted in increased global DNA methylation.
Analysis of the individual loci that showed differential methylation revealed enrichment for neurodegenerative diseases, dopamine metabolism, and oxidative phosphorylation, pathways associated with neuronal damage and previously reported to be affected during HIV infection and METH use, and which might be associated with the aggravated neurodegenerative, cognitive, motor and behavioral symptoms observed in HIV seropositive individuals who use METH.

Study population

We evaluated 27 HIV seropositive cases (HIV+) from the National NeuroAIDS Tissue Consortium (NNTC) autopsy cohort (Table 1). Fourteen cases had no history of METH abuse or dependence (HIV+METH−) and 13 cases had a history of METH dependence (HIV+METH+). Subjects had standardized, comprehensive neuromedical and neurocognitive assessments within a median of 12 months before death. Neurocognitive testing methods have been described previously [28], and consisted of a battery of tests that assessed cognitive domains commonly affected in HIV disease, including learning, memory, attention, speed of information processing, abstraction, and verbal and motor skills. All autopsy brain tissue underwent standardized research neuropathology examination. Cases with a history of CNS opportunistic infections or non-HIV-related developmental, neurologic, psychiatric or metabolic conditions that might affect CNS functioning were excluded from the study.

Standard Protocol Approvals, Registration and Patient Consents

We obtained postmortem brain tissue banked at the University of California, San Diego HIV Neurobehavioral Research Program.

HIV RNA and DNA assays

DNA and RNA were extracted from frozen postmortem human frontal cortex samples using the DNeasy Blood and Tissue Mini kit and the RNeasy Lipid Tissue Mini kit (Qiagen), respectively. Quantification of viral DNA and unspliced cellular HIV RNA in the brain samples was performed by the HNRP Neurovirology Core (UCSD) as previously described [29]. Briefly, for RNA quantification, cDNA was generated using the Superscript III First-Strand Synthesis kit (Invitrogen) with specific primers targeting HIV pol. Nucleic acid was quantified by real-time PCR. Normalization was applied to the cellular input and expressed in copies per 10^6 GAPDH (for RNA) or ACTIN (for DNA) copies.

Global DNA methylation assays

Genomic DNA was extracted from 25 mg of frozen brain tissue (frontal cortex) with the DNeasy Blood & Tissue Mini kit (Qiagen). DNA methylation was measured in 200 ng of genomic DNA with the Methylamp Global DNA Methylation Quantification Ultra kit (Epigentek). Each sample was run in triplicate.

Real-time PCR analysis of gene expression

Transcript abundance was determined by quantitative real-time PCR (qPCR) using Taqman technology (Life Technologies). Briefly, total RNA was extracted from frozen cortical samples using the RNeasy Lipid Tissue Mini kit (Qiagen) and was reverse-transcribed using the RT2 First Strand kit (Qiagen) from 1 mg of total RNA. Real-time PCR analysis was performed using specific probes for DNMT1 (RefSeq NM_001130823; Taqman assay Hs00945875); DNMT3B (RefSeq NM_001207055; Taqman assay Hs00171876); TGFBR3 (RefSeq NM_001195683; Taqman assay Hs01114253); and NET1 (RefSeq NM_001047160; Taqman assay Hs01087884), using Taqman Fast Advanced Master Mix (Life Technologies) on the StepOne-Plus real-time PCR system (Applied Biosystems) according to the manufacturer's instructions. PCR reactions were performed in duplicate.
Relative quantification of gene expression was calculated using β-actin (ACTB; RefSeq NM_001101; Taqman assay Hs99999903) as an internal control and expressed as the inverse ratio to the threshold cycle (1/dCt).

DNMT1 protein analysis

DNMT1 protein levels were quantified by western blot and immunohistochemistry as described earlier [30]. Briefly, 100 mg of frozen postmortem human frontal cortex were used to isolate the nuclear protein fractions with the Epiquik Nuclear Extraction Kit (Epigentek) as instructed by the manufacturer. After electrophoretic separation and transfer using standard conditions, the blots were probed with anti-DNMT1 (Abcam, 1:1,000) or anti-TBP.

Genome-wide DNA methylation profiling

Genome-wide DNA methylation was profiled with the Infinium Human 450 K beadchip (Illumina). β values range between 0 (non-methylated) and 1 (completely methylated). The Illumina GenomeStudio software (version 2011.1) was used to assess quality and extract the DNA methylation signals from scanned arrays. Methylation data were extracted as raw signals with no background subtraction and normalized to control probes present on the array. Sample methylation profiles, including average β and intensity signals for methylated (M) and unmethylated (UM) probes obtained from the whole array, are available as Files S1-4. Differential methylation analysis was performed using PARTEK Genomic Suite [31] after exporting normalized β values obtained from the GenomeStudio methylation module. Principal component analysis (PCA) was used as quality control and to interrogate possible clustering of samples. Mixed-model multi-way analysis of variance was used to compare the individual CpG loci methylation data across different groups, similar to a previous report [32]. The Method of Moments [33] was used to investigate the principal sources of variation. For the ANOVA model, METH dependence (present vs. absent), "Age" (coded as decades) and exposure to antiretroviral therapy ("ART", On vs. Off) were used as categorical variables with fixed effects since their levels represent all conditions of interest and were influencing methylation levels. The model used was Y_ijkl = μ + METH_i + Age_j + ART_k + e_ijkl, where Y_ijkl represents the lth observation at the ith level of METH, the jth level of Age and the kth level of ART; μ is the common effect for the whole experiment; and e_ijkl represents the random error present in the lth observation. The errors e_ijkl were assumed to be normally and independently distributed with mean 0 and a common standard deviation for all measurements.

Validation of DNA Methylation by High Resolution Melting Analysis

For the validation of the microarray discovery we performed Methylation-Sensitive High Resolution Melting (MS-HRM) analysis on a group of genes showing differential methylation in the HIV seropositive (HIV+)/METH+ group, including NET1; TTL7; TGFBR3; SCN1A; UNC5D and APBA1. Primers were designed for bisulfite-converted DNA using Methyl Primer Express software (Applied Biosystems) to produce an amplicon of approximately 300 bp that overlapped the Illumina probe set that showed differential methylation and also included neighboring CpG sites to increase the magnitude of the delta of their melting temperatures (Tm). The presence of CpGs in the primer sequence was avoided to prevent bias due to preferential amplification of the unmethylated target DNA [34]. Primer efficiency and specificity were tested by running a mock MS-HRM using 0% and 100% methylated human standard DNA (Life Technologies) and analyzing the profile of Tm peaks in the melting curve and the number and size of amplicons by gel electrophoresis.
Bisulfite-converted DNA templates (40 ng) from the studied cases underwent PCR alongside standards ranging from 0% to 100% methylation using MeltDoctor HRM Dye (Life Technologies). Melt curves of the samples were fitted to the standard curves using HRM 3.0 software (Applied Biosystems).

Statistical analysis

Statistical analysis was performed using Student's t test (unpaired; two-tailed) with a significance threshold of p < 0.05, or the Mann-Whitney test, as indicated. Correlation between viral RNA and global DNA methylation was calculated by Spearman's rho. Linear regression was used to analyze the relation between locus-associated methylation levels and transcript abundance (Prism, GraphPad Software). Fold change of methylation was determined by multivariable analysis of variance using Partek Genomics Suite (Partek), computing the false discovery rate (FDR) at q < 0.05, as described earlier.

Comparison of the clinical characteristics of HIV seropositive individuals with and without concurrent METH use

In the present study we analyzed a total of 27 HIV seropositive cases (HIV+), whose clinical characteristics are presented in Table 1. The majority of cases were males, and in general the groups did not differ significantly with respect to age, estimated duration of HIV-1 infection, year of death, nadir or current CD4+ T-cell count, or plasma viral load at the last antemortem visit. Antemortem neurocognitive diagnosis identified a high rate of cognitive impairment in HIV+METH− cases (13 out of 14 cases), with mostly milder disease (mild neurocognitive disorder (MND) or asymptomatic neurocognitive impairment (ANI)). None had HIV-associated dementia (HAD). In the HIV+METH+ group, 62% of individuals had cognitive impairment, with 5 cases of MND and 1 case of HAD. The relatively high rate of cognitive impairment in the non-METH cases may reflect brain injury that occurred before cART became available to these subjects, whose exposure to ARV was on average significantly shorter than that of the METH group (8.538 ± 1.972; mean ± SEM for HIV+METH−).

METH use alters global DNA methylation in the brain of HIV seropositive individuals

We first quantified the levels of HIV DNA and RNA in the frontal cortex of the HIV seropositive individuals included in the study, comparing cases with and without METH use (Fig. 1). Although no significant differences were detected in HIV-1 DNA (Fig. 1A) or RNA (Fig. 1B) content in the brains of HIV+METH+ subjects in comparison to HIV+METH− individuals, a significant increase in the RNA/DNA ratio was detected in subjects who used METH (Fig. 1C), indicating higher average viral transcription associated with METH use disorders. As HIV infection and METH use have been reported to alter epigenetic regulation, we next investigated the levels of global DNA methylation in the brain. The HIV+METH+ group had increased global methylation in the frontal cortex (Fig. 2A), which correlated with HIV RNA levels, suggesting that higher viral expression associated with METH exposure might influence DNA methylation. In agreement with previous observations in rodent models [18], the gain in methylation appears to be specifically associated with the selective increase of DNA methyltransferase I (DNMT1) expression, whose mRNA and protein levels were higher in the METH+ cases (Fig. 2C, E-H), while the groups did not differ in the levels of the close family member DNMT3B, also active in postmitotic neurons (Fig. 2D).
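As a rough illustration of the statistical analysis described above, the sketch below runs the group comparison and the Spearman correlation with scipy. All numbers are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of the comparisons described under "Statistical analysis":
# an unpaired two-tailed test between groups and Spearman's rho between
# brain HIV RNA and global DNA methylation. All values are invented.
import numpy as np
from scipy import stats

meth_neg = np.array([1.1, 0.9, 1.3, 1.0, 1.2, 0.8])   # hypothetical global 5-mC, HIV+METH- cases
meth_pos = np.array([1.6, 1.8, 1.4, 2.0, 1.7, 1.9])   # hypothetical global 5-mC, HIV+METH+ cases

t_stat, p_t = stats.ttest_ind(meth_pos, meth_neg)                              # Student's t (unpaired, two-tailed)
u_stat, p_u = stats.mannwhitneyu(meth_pos, meth_neg, alternative="two-sided")  # Mann-Whitney alternative

hiv_rna = np.array([120, 300, 80, 560, 240, 410])      # hypothetical HIV RNA copies per 10^6 GAPDH
rho, p_rho = stats.spearmanr(hiv_rna, meth_pos)        # Spearman's rho vs. methylation

print(f"t-test p={p_t:.3f}; Mann-Whitney p={p_u:.3f}; Spearman rho={rho:.2f} (p={p_rho:.3f})")
```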
Impact of Combined HIV-1 and METH on the Brain Methylome

The observed changes in global methylation might have important consequences in the host brain. We therefore explored the epigenetic changes induced by HIV disease in concert with METH dependence by profiling genome-wide DNA methylation in the frontal cortex of a subgroup of samples from our initial cohort, including 6 HIV+METH+ and 6 HIV+METH− cases. Methylome analysis was performed using the Infinium Human 450 K beadchip and GenomeStudio. For the analysis of differential methylation, we applied the Illumina Custom model after normalization to control probes present in the array and using the HIV+METH− group as the reference. We selected probes showing absolute Δβ values > 0.2 at p < 0.01 (with a false discovery rate (FDR) q < 0.001 to control for multiple comparisons) as differentially methylated (DM), a threshold previously suggested to improve detection of DM probes on this array platform with 99% confidence [35]. Detection levels were similar among groups, with an average of 485,381 CpG detected at p < 0.01. A locus was called as detected at the p < 0.01 level if the mean signal intensity from multiple probes for that CpG locus was significantly higher, at the level of p < 0.01, than the negative control on the same chip. We first compared the methylation profiles of HIV+METH+ cases with those of HIV+METH− subjects. We detected 235 CpG with differential methylation, with 54% of DM loci showing decreased methylation and the remaining 46% showing increased methylation in the HIV+METH+ group (Fig. 3A). The distribution of average β values across samples showed similar profiles in both groups, with most CpG clustering in the low-methylation (LMF, β values < 20%) and high-methylation fractions (HMF, β values > 80%, Fig. 3B), as previously described for another neurodegenerative disease [36]. Consistent with the β-value distribution and previous reports [37], CpG neighborhood context analysis and genomic location distribution showed that loci with decreased methylation were over-represented at CG islands (CGi), while CpG sites located farther away from islands (open sea) and at gene bodies showed increased methylation (Fig. 3C). Finally, Gene Ontology analysis of the fraction of DM CpGs associated with annotated genes, using the Panther Classification System (www.pantherdb.org) [38] and clustering by Biological Process, showed that metabolic processes, cellular processes and cell communication were the most populated clusters (Fig. 1D). In order to incorporate in the analysis biological and clinical factors other than drug use that might modify DNA methylation, we performed a second analysis by exporting normalized β values from GenomeStudio into PARTEK Genomic Suite [31]. Principal component analysis (PCA) was used as quality control and to interrogate possible clustering of samples. We tested age, exposure to antiretroviral therapy (cART) and METH dependence. METH exposure was the only characteristic that separated clusters (Fig. 4A). To further investigate the sources of variation in methylation, we used multivariable ANOVA. Figures 4B-C show the significance of different sources of variation in the entire data set. For the ANOVA model, METH dependence (presence or absence), age (coded as decades) and exposure to antiretroviral therapy (ART, On or Off) were used as categorical variables with fixed effects since they represent conditions of interest and influenced methylation levels.
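To make the multivariable model concrete, the sketch below fits the fixed-effects ANOVA β ~ METH + Age + ART for a single CpG locus with statsmodels. The table, its values and the column names are hypothetical illustrations, not the PARTEK pipeline used by the authors.

```python
# Hedged sketch of the per-locus fixed-effects ANOVA (METH, Age, ART) described above.
# Invented beta values and covariates for 12 hypothetical samples.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "beta": [0.21, 0.25, 0.19, 0.44, 0.41, 0.47, 0.23, 0.40, 0.22, 0.43, 0.20, 0.45],
    "METH": ["neg", "neg", "neg", "pos", "pos", "pos", "neg", "pos", "neg", "pos", "neg", "pos"],
    "Age":  ["40s", "50s", "40s", "50s", "40s", "50s", "40s", "50s", "40s", "50s", "40s", "50s"],
    "ART":  ["On", "Off", "On", "On", "Off", "On", "On", "Off", "On", "On", "Off", "On"],
})

# Y = mu + METH_i + Age_j + ART_k + e, with all factors treated as categorical fixed effects
model = ols("beta ~ C(METH) + C(Age) + C(ART)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F statistic and p value for each source of variation
```

In a genome-wide setting this fit would be repeated per CpG and the resulting p values corrected for multiple testing (for example with a Benjamini-Hochberg procedure), in the spirit of the FDR thresholds used for the differential methylation calls.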
Differential methylation analysis reporting loci with an absolute fold change > 1.2 at an FDR < 0.05 identified a total of 446 gene-associated CGs with differential methylation in the brains of HIV+METH+ subjects, 441 corresponding to autosomal loci, with 204 presenting increased methylation (Table 2 and Table S1) and 237 showing decreased methylation (Table 3 and Table S1). We partially verified the array findings by Methylation-Sensitive High Resolution Melting (MS-HRM) analysis, a novel form of real-time PCR that uses thermal denaturation differences in bisulfite-converted DNA to create a methylation profile against known methylation standards. We tested a group of genes showing differential methylation in the HIV+METH+ cluster, including UNC5D, TGFBR3 and NET1, which were hypermethylated, and TTL7, SCN1A and APBA1, which were hypomethylated (Fig. S1). We observed coincident changes in methylation in 4 out of 6 retested loci.

Epigenetic changes associated with HIV disease and METH dependence correlate with transcriptional alterations

In order to determine the impact of these epigenetic changes on the brain transcriptome, we profiled the mRNA levels of two candidate genes whose methylation changes were validated by MS-HRM. We selected genes showing increased methylation, as we found a net gain in global DNA methylation in association with HIV disease and METH dependence (Fig. 2A). TGFBR3 had the highest change in methylation, showing a 7.8-fold increase (Table 2). We observed a significant increase in TGFBR3 transcripts in the HIV+METH+ group (Fig. 5A). Increased DNA methylation at CG islands, which are abundant at promoter regions, is associated with repression. In contrast, methylation changes that occur at the gene body are associated with transcribed genes and directly correlated with expression [39]. The specific probe showing DM for TGFBR3 on the Illumina array mapped to a CG site not associated with the promoter or a CG island, and in agreement with the previous notion, methylation status significantly correlated with transcription in the HIV+METH− group and showed a similar trend in the HIV+METH+ group. This suggests that methylation is involved in TGFBR3 expression in HIV+ brains, and that this activation is increased after exposure to METH (Fig. 5B). In addition we investigated NET1 transcription, a gene that also showed increased methylation (Table S1). Higher transcript levels were detected for NET1, but in this case the CG dinucleotide identified in the array analysis as hypermethylated in the HIV+METH+ group is located in a CG island, which should result in transcriptional repression (Fig. 5C). Interestingly, while linear regression analysis showed no significant association between methylation and expression in the HIV+METH− group, methylation levels strongly correlated with transcription in the HIV+METH+ group (Fig. 5D). These results suggest that the co-occurrence of HIV infection and exposure to METH might modify gene expression, not only quantitatively but also qualitatively, which might result in a more drastic deregulation of the transcriptome than the alterations induced by HIV infection alone.
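The methylation-expression relationship tested above for TGFBR3 and NET1 amounts to a simple linear regression of transcript abundance on locus methylation. A minimal sketch with hypothetical values is shown below; the numbers and variable names are assumptions for illustration only.

```python
# Hedged example: regressing transcript abundance (1/dCt relative to ACTB) on
# locus beta values, as in the methylation-vs-expression analysis above.
import numpy as np
from scipy import stats

beta_values = np.array([0.30, 0.42, 0.55, 0.61, 0.70, 0.78])        # hypothetical locus methylation
expression  = np.array([0.031, 0.034, 0.037, 0.040, 0.043, 0.046])  # hypothetical 1/dCt values

slope, intercept, r_value, p_value, std_err = stats.linregress(beta_values, expression)
print(f"slope={slope:.4f}, r^2={r_value**2:.2f}, p={p_value:.4f}")
# A significant positive slope would indicate methylation tracking expression,
# the pattern reported for the gene-body CG site of TGFBR3.
```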
Epigenetic changes induced by METH on the brain of HIV seropositive subjects are related to neurological disease and dopaminergic alterations

To gain further insight into the biological significance of the observed epigenetic alterations, we performed data mining comparing the list of DM genes to existing datasets of genes deregulated in response to drug addiction, in particular to the abuse of amphetamine and methamphetamine [40], under the rationale that, although the observed effects on methylation cannot be explained as a consequence of viral infection, which was a common factor among groups, they still might be triggered by drug exposure itself. From a total of 117 genes previously reported as altered by these stimulants, only 5 (about 4%) also appeared in our DM group, including CKB, PGM1, DPYSL2, STXBP1 and HNRNPA1. Moreover, none of the genes differentially methylated in the HIV+METH+ group were reported as altered by either alcohol, cocaine or marijuana dependence, substances that were also abused by the subjects of our cohort. This suggests that the observed changes in DNA methylation are likely due to the specific interaction of HIV-1 disease and METH dependence, rather than to their individual effects or to the combined exposure to multiple substances. We also investigated the existence of common regulatory factors that could be affecting the expression of a majority of differentially methylated genes. We performed Gene Set Enrichment analysis using the Molecular Signature Database category C3 ("motif-based") to search for known and likely regulatory elements present in the promoters and 3' UTRs of either hypermethylated or hypomethylated gene groups. Interestingly, NFAT, the nuclear factor of activated T cells, had the most significant overlap with both gene sets, with 23 genes from the hypermethylated group and 28 from the hypomethylated fraction included in the overlap, at FDR q values of 6.7E-04 and 1.9E-05, respectively. This finding suggests that differentially methylated genes might be linked at a higher regulatory level by NFAT. Lastly, we performed canonical pathway analysis by feeding our entire list of differentially methylated genes into Ingenuity Pathway Analysis software to interrogate whether epigenetic deregulation was more likely affecting a particular biological pathway. Interestingly, Neurological Disease (illustrated in Fig. S2) and Psychological Disorders were among the most highly represented groups with significant alterations (p values < 0.001 in both cases), with L-DOPA Degradation (p value 3.95E-04), ERK/MAPK Signaling (p value 1.41E-03) and Dopamine-DARPP32 Feedback in cAMP Signaling (p value 1.61E-03) being the top-ranked canonical pathways. These results are in agreement with the neuropathology and behavioral alterations documented for HIV seropositive individuals that are exacerbated by METH use, and also with the observation of deregulation of the dopaminergic system by METH [41], highlighting DNA methylation as a molecular mechanism implicated in the neurodegeneration associated with the interaction of HIV disease and METH dependence in the brain.

Discussion

In the present study we analyzed epigenetic changes associated with METH dependence as a comorbid presentation to HIV-1 infection. We focused on changes pertinent to DNA methylation, as this epigenetic modification is altered independently by both viral infection and exposure to drugs.
We present evidence for the existence of a unique cortical methylome induced by the interaction of HIV and METH, which results in deregulation of many genes linked to AIDS and drug dependence neuropathologies, but which cannot be fully explained by either factor alone. We propose that changes in methylation induced early in infection by HIV potentiate the alterations in the DNA methylation machinery, particularly induction of DNMT1 expression, which is further altered after METH exposure. Our results provide a new molecular understanding of gene alterations due to the interaction of METH and HIV from the host's point of view. Epigenetic mechanisms encompass DNA methylation, histone posttranslational modifications and non-coding RNA forms, and produce heritable changes in the genome that shape the phenotype without altering the DNA sequence. These mechanisms are intimately linked to viral replication and to molecular changes induced by drugs of dependence. Epigenomic regulation is implicated in integration and viral latency, two crucial steps in the HIV-1 life cycle. Integration of proviral DNA into the host genome is essential for HIV-1 replication [42], a mechanism favored by an array of epigenetic modifications including H3 and H4 acetylation and H3K4 methylation. In contrast, viral integration is impaired around chromatin regions harboring H3K27 methylation and DNA methylation [43]. Moreover, epigenetic control is tightly connected with viral latency. Silencing of viral replication is achieved by the concerted recruitment of chromatin remodeling factors to the integrated LTR sites, which results in chromatin structural changes that prevent further transcription [20,22]. We recently reported that an imbalance of the epigenetic factors involved in this chromatin-remodeling complex is associated with latent HIV infection in postmortem brains [19]. In addition, the integrated provirus appears to be silenced by CpG methylation around the integration sites [44]. Therefore, major epigenetic changes could be driven by HIV-1 at different stages of its life cycle, which might impact the host. Notably, a higher degree of DNA methylation at the 5'-LTR of HIV-1 has been reported for long-term "non-progressor" and "elite-controller" patients in comparison to "progressor" cases, implicating a role of DNA methylation in viral replication and its impact on disease outcomes [45]. Epigenetic mechanisms are sensitive responders to the environment and can elicit changes in cellular physiology upon exposure to varied stimuli, including the use of drugs of addiction. Exposure to increasing doses of METH alters glutamatergic function in the rat striatum, an effect accompanied by epigenetic changes, including increased recruitment of CoREST, HDAC2, MeCP2 and SIRT2, which downregulate GluA1 and 2 transcription [46]. In addition, a recent study aimed at understanding the epigenetic changes induced by chronic METH exposure leading to neuroadaptations at glutamatergic synapses suggested that H4 hypoacetylation may be a determinant factor of this response [47]. The neurotoxic effects of HIV-1 are in part due to its ability to enter the CNS early during infection, causing a deficiency in dopaminergic function [48]. The frontostriatal regions of the brain are highly vulnerable to neurotoxins released by infected macrophages/microglia [49], but these regions are also injured by METH via increased dopamine and glutamate transmission, which further leads to neuronal damage [13].
Thus, the concomitant adverse effects of HIV-1 and METH on similar neuronal circuits exacerbate the neuronal damage that each factor can produce alone. Another common pathway that is affected by both HIV and METH is the regulation of the DNA methylation machinery itself. Early expressed HIV-1 proteins act directly on the promoter region and induce the transcription of DNMT1, without concomitant activation of DNMT3A or DNMT3B [27]. As DNMT1 is the main maintenance methylation enzyme resident in the adult brain, HIV-induced deregulation could have a substantial impact on the CNS, particularly on sustaining methylation patterns. Moreover, METH exposure was reported to alter the expression of Dnmt1 in the rat brain, by a mechanism mediated by glucocorticoid hormones, which are increased by METH. This cascade leads to differential DNA methylation and altered gene expression in the nucleus caudatus and the nucleus accumbens of exposed animals [18]. In agreement with these observations, we report here a specific increase of DNMT1 transcription that results in higher levels of global DNA methylation, associated with higher viral expression, suggesting that both stimuli act synergistically to reshape the epigenetic landscape in the host brain. The experimental design of our study, in which we compared HIV seropositive individuals who differed significantly only in their use of METH, enabled us to factor out changes induced exclusively by the virus; therefore, the altered methylation patterns reflect changes due to the interaction of HIV disease and METH dependence. Although the subjects included in the cohort also had a history of abuse of or dependence on a variety of other substances, the fact that only 4% of the differentially methylated genes identified in our study matched extensive collections of genes deregulated by drug use reinforces the uniqueness of the methylome we describe here. Notably, pathway analysis including genes with altered methylation showed enrichment for neurodegenerative diseases, with particular effects on dopamine metabolism and transport, highly pertinent to the neuronal damage due to HIV and METH as described earlier. Moreover, the observation that many DM genes appear to be co-regulated by NFAT, a factor directly linked to HIV-1 infection, suggests a complex cross-talk between transcriptional and epigenetic factors that results in this signature epigenome/transcriptome. NFAT is associated with a broad spectrum of regulatory molecules, including growth factors and cytokines; it is also important for the induction of specific genetic programs that guide differentiation and effector activity of CD4+ T helper cells and governs the transcription of signature cytokines [50]. In addition, NFAT transcription factors can regulate HIV-1 expression by direct binding to a specific site located in the 5' LTR [51], highlighting a fundamental role of this protein during HIV-1 infection and the relevance of the epigenetically deregulated gene set that we report here. In sum, we provide evidence of the role of DNA methylation as a molecular mediator of alterations induced by HIV infection and METH use, whose complexity needs to be further investigated. This epigenetic analysis also unraveled novel genes that might be related to precise neurodegenerative cascades, contributing to a better understanding of the pathways affected by these highly comorbid conditions.
Numerical Abstraction in Young Domestic Chicks (Gallus gallus)

In a variety of circumstances animals can represent numerical values per se, although it is unclear how salient numbers are relative to non-numerical properties. The question is then: are numbers intrinsically distinguished or are they processed as a last resort only when no other properties differentiate stimuli? The last resort hypothesis is supported by findings pertaining to animal studies characterized by extensive training procedures. Animals may, nevertheless, spontaneously and routinely discriminate numerical attributes in their natural habitat, but data available on spontaneous numerical competence usually emerge from studies not disentangling numerical from quantitative cues. In the study being outlined here, we tested animals' discrimination of a large number of elements utilizing a paradigm that did not require any training procedures. During rearing, newborn chicks were presented with two stimuli, each characterized by a different number of heterogeneous (for colour, size and shape) elements, and food was found in proximity of one of the two stimuli. At testing, 3-day-old chicks were presented with stimuli depicting novel elements (for colour, size and shape) representing either the numerosity associated or not associated with food. The chicks approached the number associated with food in the 5vs.10 and 10vs.20 comparisons both when quantitative cues were unavailable (stimuli were of random sizes) and when they were controlled. The findings emerging from the study support the hypothesis that numbers are salient information promptly processed even by very young animals.

Introduction

A wealth of data has demonstrated that a variety of non-human animals are able to solve some numerical tasks [1], but little is known about how salient numbers are relative to other properties [2]. Some authors have argued that non-human animals can learn to master abstract numerical competence outside of their natural environment only after undergoing extensive laboratory training, which occurs whenever the stimuli employed do not offer quantitative non-numerical information [3,4,5,6,7]. After being trained to respond to ordinal relationships linked to six Arabic numerical symbols, when tested, squirrel monkeys chose the larger number in all the previously experienced combinations as well as in new ones [8]. Trained to sort in ascending order stimuli representing from 1 to 4 elements, rhesus monkeys [9], hamadryas baboons, squirrel monkeys [10] and brown capuchin monkeys [11] were able to generalize to new numbers (from 5 to 9) and new stimuli. Monkeys trained to respond (in ascending or descending order) to pairs of numerosities (1-9) spontaneously ordered in that same direction new pairs of larger values (i.e., 10, 15, 20, 30) [12]. When trained to place values (6-5-4) in descending order, rhesus macaques were able to apply the descending rule to novel values (1-2-3) [12]. Lemurs (Lemur catta and Eulemur mongoz), when trained to select the higher-ranking of two images, could learn the ordinal relationship between seven stimuli, showing transitive inference reasoning [13]. But abstraction is not just a prerogative of primates: an African grey parrot (Psittacus erithacus) was, for example, able to use labels to order numbers from 1 to 8 [14]. Adult Clark's Nutcrackers could identify the 4th or the 6th position in a series of 16 identical ones [15].
When all other spatial cues were being controlled, day-old domestic chicks (Gallus gallus) could identify an element within a series of identical ones solely on the basis of ordinal information [16,17]. While these and other studies prove that some animals possess some abstract numerical competence, it cannot be excluded that pure numerical ability emerges only following long laboratory training. Studies which were, instead, carried out to specifically investigate spontaneous numerical discrimination, i.e. in the absence of training, have been unable to clarify if and when non-verbal subjects (non-human animals and pre-verbal infants) rely on number or on other cues. Many of those studies were performed considering quantitative variables (usually either cumulative surface area or total volume) one at a time [18,19,20]. If the quantitative cues are not specifically controlled, it becomes impossible to verify whether the subjects are relying on quantitative cues or on the number itself [1,21,22]. Using a preferential looking method, six-month-old infants discriminated large numerosities that differed by a ratio of 0.5 (8vs.16 or 16vs.32), but not 0.667 (8vs.12 or 16vs.24), when presented with visual arrays in which many quantitative variables were controlled [23,24]. At the same age infants discriminated 8vs.16 and 4vs.8 sounds but failed in discriminating 8vs.12 and 4vs.6 sounds, providing evidence that the same ratio (0.5) limits numerosity discrimination in auditory-temporal sequences and visuo-spatial arrays [25,26]. When monkeys (Macaca mulatta) observed an experimenter hiding a number of apple pieces, one at a time, in an opaque container and a different number of apple pieces in another opaque container, they approached the larger quantity when the following pairs were presented: 1vs.2, 2vs.3, 3vs.4 and 3vs.5. In order to examine the possibility that the monkeys were focusing on volume rather than on number, in one control condition the experimenters placed 3 pieces of apple in one opaque container and 1 piece of apple, equal in volume to the three pieces grouped together, in another one. The monkeys once again chose the larger number, showing that the numerical cue was considered more important than the quantitative one [27]. Horses (Equus caballus), likewise, selected the larger of two quantities when presented with small numerical contrasts (1vs.2 and 2vs.3) even when the total volume of the two sets was equal [28]. Using heterogeneous elements in experimental paradigms seemed to be the best way to effectively control for quantitative variables and consequently to test the abstraction of numerical values. Until now, heterogeneous items have been used to test abstract numerical competence in experiments characterized by long training procedures [9,29,30]. The study being outlined here describes experiments in which chicks were reared in an environment where food was available only behind a screen picturing a particular number (positive stimulus, i.e. 5) of heterogeneous elements (differing in colour, size and shape) and not behind another screen picturing a different number (neutral stimulus, i.e. 10) of heterogeneous elements. We were interested in investigating the chicks' spontaneous encoding of numerical information during rearing and in evaluating their ability to discriminate between large numbers of heterogeneous items solely on the basis of numerical cues when quantitative variables (area and perimeter) were being controlled.
During testing, the chicks could freely choose to approach the positive or the neutral stimulus. The former pictured the same number of elements as did the positive stimulus during rearing, but the elements differed in colour, size and shape from the rearing ones; the latter represented the same number of elements as the neutral stimulus during the rearing period, again with different colour, size and shape. If the animals were spontaneously encoding numerical information, we expected them to move towards the stimulus associated with food both when quantitative cues were missing (due to the use of randomly sized heterogeneous stimuli) and when the cues were not the same as those used during rearing.

Ethics Statement

The experiments complied with all applicable national and European laws concerning the use of animals in research and were approved by the Italian Ministry of Health (permit number: 5/2012, issued on 10/1/2012). All procedures employed in the experiments included in this study were examined and approved by the Ethical Committee of the University of Padua (Comitato Etico di Ateneo per la Sperimentazione Animale - C.E.A.S.A.) as well as by the Italian National Institute of Health (N.I.H.).

Experiment 1

The goal of the first experiment was to investigate the chicks' ability to discriminate between large numbers of heterogeneous items (5vs.10) solely on the basis of numerical cues, either when quantitative variables were unavailable or when quantitative variables (area and perimeter) were being controlled. Since we were interested in spontaneous encoding of numerical information, the chicks were exposed for about two days to the contingent presentation of food with a certain number (i.e. 5) of items and not with another (i.e. 10). A similar procedure had been used to demonstrate chicks' spontaneous discrimination of possible and impossible objects [31] and their sensitivity to the Ebbinghaus illusion [32], as well as other types of numerical discrimination [33].

Subjects. "Hybro" domestic chicks (Gallus gallus), a local variety of the White Leghorn breed, were used. These were obtained weekly, every Monday morning when they were a few hours old, from a local commercial hatchery (Agricola Berica, Montegalda, Vicenza, Italy). On arrival, the chicks were housed individually in standard metal cages (28 × 32 × 40 cm). Chicks were housed individually because this procedure made it possible to employ half the number of animals and to obtain more informative data; in fact, data obtained from individual chicks that have been reared in pairs are not independent. Moreover, individual testing would be distressful to pair-reared chicks. The rearing room was constantly monitored for temperature (28-31°C) and humidity (68%) and was illuminated continuously by fluorescent lamps (36 W) located 45 cm above each cage. Water, placed in transparent glass jars (5 cm in diameter, 5 cm high) in the centre of the cages, was available ad libitum. During the three-day rearing period (from Monday to Wednesday morning), the chicks found food behind two of four vertical plastic screens (10 × 14 cm) located approximately 10 cm in front of each of the cage's four corners. The two screens hiding food were decorated with identical pictures representing a certain number of elements (Positive Stimulus, Sp), while the other two screens, not associated with food, were decorated with identical pictures of a different numerousness (Neutral Stimulus, Sn).
All of the screens were covered with static 2D images picturing a certain number of elements, whose images were randomly selected from sets of patterns of different shapes (10 different), colours (10 different) and sizes (10 different, ranging between 0.5 cm and 2 cm) and printed on identical white rectangular plastic boards (screens) (11.5 × 9 cm). During the rearing period four screens were always present in each cage: two representing Sp and two representing Sn (Fig. 1). To prevent the chicks from learning to identify the stimuli on the basis of the spatial disposition of the elements depicted on the screens, six different pairs of stimuli were used. In each, the elements' disposition on the screens was randomly determined in such a way that the distance between elements varied from 0.3 to 3.8 cm. Three times a day the stimuli were replaced, in such a way that each chick was exposed, for about 8 hours, to each pair of stimuli. Every time the stimuli were replaced, the screens were also rotated from corner to corner in order to avoid positional learning. An artificial imprinting object (a red capsule measuring 2 × 3 cm) was suspended (at the chick's height) in each rearing cage to prevent social isolation. Artificial imprinting objects are effective social substitutes of real social mates: after about one to two hours of exposure the chick responds to the artificial object with a range of behavioural responses which are clearly identifiable as social-affiliative [34,35,36,37,38]. On the morning of the third day (testing day) each chick underwent a single test to verify how the rearing period had affected its numerical discrimination. The numerosities used during testing were the same as those utilized during the rearing period, but all the elements appearing on the screens were new and presented in different spatial dispositions.

Three Testing Conditions, Apparatus and Procedure. Testing took place in an experimental room, adjacent to the rearing room, in which temperature and humidity were controlled (25°C and 70%, respectively) and which was kept dark except for light shining from two lamps (40 W) placed at a height of 25 cm at either end of the apparatus. The apparatus (Fig. 2) consisted of a runway (45 cm long, 20 cm wide and 30 cm high). One of the two stimuli was placed at the far end of each side of the apparatus, at a height of 2 cm, so that it was entirely visible to a chick placed in the central area of the apparatus. The positions of the two testing stimuli and the bird's starting position (i.e. with the Sp either to the left or to the right) were balanced across the experiments. The apparatus was made up of three compartments (each 15 cm long): a central starting area, considered a no-choice zone, and two side compartments (choice zones). On testing day, each chick was individually placed in the starting area with the two (positive and neutral) stimuli positioned at the ends of the two arms of the apparatus. Choosing one of the two compartments (indicative of a preference for that stimulus) meant that at least L of the chick's body entered the area while the subject was looking at the screen. No choice was scored whenever chicks entered a choice zone but looked at the opposite screen [15,33]. At starting time each chick was placed in the starting position and its behaviour was video-recorded throughout the duration of the experiment, i.e. six minutes.
Placed above the apparatus and connected to a monitor, a video camera enabled the experimenter to track the chicks' behaviour during the test without being seen, using a computer-operated device. This was activated every time a chick entered a choice zone, and registered the amount of time each chick spent near either stimulus. An index of choice was calculated for every chick according to the formula used to analyse choice behaviour [39]: time spent approaching Sp / (time spent near Sn + time spent near Sp). Values of about 0.5 indicated no preference for either stimulus; values > 0.5 indicated a preference for Sp and values < 0.5 indicated a preference for Sn. Significant differences with respect to chance level (0.5) were calculated by one-sample two-tailed t-tests.

Results and Discussion. An ANOVA was used to analyse Sp (5E or 10E) and Testing Condition (RSG, PCG or ACG) as the independent variables. The dependent variable was the Index of Choice for Sp. The ANOVA did not uncover any significant main effect or interaction; the data were therefore merged, and the mean index of choice for Sp was significantly above chance level. These results demonstrate that 3-day-old chicks spontaneously discriminate between the numbers of heterogeneous elements even when quantitative variables (area and perimeter) are being controlled. This behaviour seems to be explained by Analogue Magnitude System (AMS) processing, a non-verbal numerical system according to which encoding of numerosities is only approximate [40]. Depending on the ratio between the numbers to be discriminated, and in accordance with Weber's law, as the ratio between the numerosities to be discriminated becomes larger, the response times decrease and accuracy increases [40].

Experiment 2

Experiment 1 demonstrated that numerical discrimination of large (5vs.10) sets of heterogeneous elements is possible solely on a numerical basis. In Experiment 2 we used a comparison (i.e. 6vs.9) with a 0.67 ratio, which is more difficult to discriminate than the previous one (0.50).

Subjects, Apparatus and Procedure. A new group of 40 chicks was used. All the chicks were tested using the same numerical comparison: 6vs.9. The rearing conditions were the same as those described above. For one subgroup of chicks (group-6E: N = 20) Sp pictured 6 elements (6E) and for the second (group-9E: N = 20) Sp pictured 9 elements (9E). Six pairs of rearing stimuli, differing from one another with regard to the spatial disposition of the elements pictured, were used. The testing stimuli were composed of heterogeneous elements positioned in a different way with respect to the rearing situation. The first two experiments showed that the chicks were unable to discriminate between two sets of elements characterized by a 0.67 (6vs.9) ratio, while they were able to distinguish between two sets characterized by a 0.50 (5vs.10) ratio.

Experiment 3

The aim of this experiment was to evaluate whether there is an absolute upper limit in numerical discrimination. As the 0.50 ratio was discriminated when the 5vs.10 comparison was employed, we arbitrarily decided to duplicate each set: the comparison tested here was therefore 10vs.20.

Subjects, Apparatus and Procedure. A new group of 66 chicks was used. The rearing conditions were the same as in the previous experiments. All the chicks were tested using the same numerical comparison: 10vs.20. Rearing stimuli consisted of six pairs each of 10 or 20 heterogeneous elements.
The testing stimuli consisted once again of heterogeneous elements, but with different spatial dispositions, colours, shapes and sizes with respect to the rearing stimuli, and they varied across the three Testing Conditions. In the Random Size Group (RSG, N = 29, with group-10E: N = 14 and group-20E: N = 15) the dimensions of the elements were randomly selected in both the 10E and the 20E stimuli. In the Perimeter Control Group (PCG, N = 18, with group-10E: N = 9 and group-20E: N = 9) the overall perimeter of the two stimuli was equated, while in the Area Control Group (ACG, N = 19, with group-10E: N = 10 and group-20E: N = 9) the overall areas of the two stimuli were equated.

Results and Discussion. An ANOVA was used to analyse Sp (10E or 20E) and Testing Condition (RSG, PCG or ACG) as the independent variables. The dependent variable was the Index of Choice for Sp. The ANOVA did not uncover a significant main effect for Sp (F(1,60), n.s.). As the interaction (Testing Condition × Sp) was not significant (F(2,60) = 0.610, p = 0.547), the data were merged and the resulting mean (N = 66; Mean = 0.633, SEM = 0.026) was found to be significantly above chance level (one-sample t-test, t(65) = 5.115; p < 0.001) (Fig. 3).

Conclusions

The aim of this study was to test 3-day-old chicks' capacity to discriminate between stimuli representing different numerosities and, in particular, to assess their ability to process those stimuli entirely on the basis of numerical cues. The results demonstrate that chicks discriminate between numbers, even when quantitative variables are unavailable, if the numbers compared have a 0.50 ratio (either 5vs.10 or 10vs.20), but not when the ratio between numbers is 0.67 (i.e., 6vs.9). Performance, therefore, seems to be affected not by the overall absolute number of items (a total of 15 elements is discriminated in the 5vs.10 comparison but not in the 6vs.9 one), but by the ratio between the numbers to be discriminated. This suggests that processing in the cases described is carried out using the Analogue Magnitude System (AMS). Interestingly, these results differ from those outlined in a previous study [41] focusing on the same species but using a different experimental setting; in that case the chicks were able to discriminate both 5vs.10 and 6vs.9, but only when quantitative as well as numerical cues were available. Why were the chicks investigated during the current study able to discriminate numerosities on the basis of numerical cues alone? In order to answer that question we need to go back and re-examine the 2011 study. During the experiments carried out at that time, the chicks were reared with a set of artificial social objects upon which they became imprinted. Social objects were also employed during the experiments carried out in our latest study, but the ones used (two-dimensional red squares) were all identical to one another during the rearing period, while at testing each of the two sets was composed of homogeneous elements (although elements could differ in size to control for quantitative variables). At testing in the 2011 study, the chicks watched the imprinted objects disappearing one at a time (i.e. sequentially) behind screens located in different positions in the test area. After all the elements had appeared and disappeared, the chicks were expected to look for the larger set associated with one of the two screens. If we want to compare the two studies, we need to examine how they differed in terms of: i.
the characteristics of the stimuli used (homogeneous vs. heterogeneous sets, respectively, in the precedent and in the current study), and ii. the modality in which the chicks experienced the stimuli during the test. i. Stimuli characteristics (homogeneous vs. heterogeneous). Some investigators have suggested that using sets of objects made up of elements identical to one other (i.e., a homogenous set) favours computation of continuous variables [42,43,44,45], while heterogeneity of objects within the same set favours computation of exact numbers [46,47,48]. In accordance with these hypotheses, our precedent results indicated that when the chicks were required to discriminate between homogeneous sets of items in a 2vs.3 comparison, they processed quantity information, but when heterogeneous stimuli were presented, the chicks coded the numerousness [49]. ii. Stimuli presentation at testing. When stimuli disappear one after another, as in our sequential presentation paradigm [29], higher demands are posed on the working memory since they are no longer perceptually available at the moment of choice; a lower performance is thus expected. The modality of stimulus presentation could, according to a recently proposed hypothesis [50] and verified by different brain activation patterns, favours diversified kinds of elaboration. According to that hypothesis, the simultaneous presentation of whole sets of elements directs attention to the entire collection, thus activating AMS processing. On the contrary, presenting the elements one after another, focuses attention on each object, thus activating the Object File System (OFS) processing which concentrates on single objects at the expense of the overall set, which in any case cannot contain more than 3 elements [51]. These considerations appear relevant to the results emerging from our two studies; in fact in the present study the whole sets of elements were visibly accessible to the chicks during testing, while in the 2011 paradigm the stimuli used at testing were presented sequentially. It is important to underline that the difference in the two studies with regard to stimuli presentation was limited to the testing phase. During the rearing stage the chicks in both studies were exposed to the entire sets of stimuli, probably triggering AMS processing. There may have been interference in the cognitive systems activated during the 2011 study when sequential presentation (linked to OFS) was used at testing. It would be interesting in future studies to directly test how stimuli presentation during rearing with respect to testing stages can affect OFS or AMS processing. Some considerations can also been drawn regarding the paradigm used in the current study that made it possible to highlight the chicks' spontaneous learning and therefore to better assess their spontaneous behaviour. After rearing, the chicks, in fact, associated some numerical patterns with food. In contrast with previous studies [39], learning did not require any explicit training (i.e. through shaping) during which an operant behavioural response (i.e. pecking at the positive stimulus) was reinforced as in conventional discrimination learning tasks. Our paradigm offers an advantage with respect to tasks requiring animals to choose/directly respond to two or more food options. In fact, animals' choices in those cases are expected to maximize possibility of survival according to the optimal foraging theory [52]. 
Researchers carrying out numerical discrimination tasks [53,54,55,56,57,58] consider preference/choice directed towards the larger quantities of food a strategy directed at exploiting food resources. Although this is often the case, the best choice could also depend on a variety of factors -usually neglected -but not necessarily related to overall amounts such as optimal size of food for catching or ingesting, optimal density, etc. Differences in spontaneous discrimination in diverse species could then be related to their foraging strategy rather than to numerical cognitive skills. Direct comparison between food quantities generally cannot guarantee simultaneous control over quantitative variables, such as volume, surface, etc. of food items. In those cases data on spontaneous foraging demonstrate spontaneous proto-numerical discrimination, but do/can not test purely numerical abilities. The paradigm used in the current study offers an advantage also with respect to conventional operant conditioning, which is known to be linked to a behavioural response between the stimulus and the reinforcement. It is well known that when an auto-shaping paradigm is used, pigeons auto-reinforced with water manifest a drinking behaviour to ''previous neutral stimulus'' (a key), while those auto-reinforced with food show an eating behaviour [59]. The concept of positive behavioural reinforcement could be extended to paradigms using shaping to test numerical competences and help explain some results. For example, pigeons trained to discriminate between two numerousness of non-edible elements to obtain a food reinforcement performed better when reinforced to respond to the larger rather than to the smaller numerousness [60]. By contrast, the performance of the chicks' studied here did not differ when food was associated with a larger or a smaller number of objects, and this could be explained by the fact that no conditioned pecking response was required for the stimulus to be associated with food. Under these conditions the characteristics of the stimulus (numerousness of objects) are not affected by the features of the attractor (food), thus making it possible to manipulate the latter as required and to better investigate numerical cognitive abilities. The same paradigm could easily be employed using a different kind of attractor, i.e., a social (i.e. an imprinting) object in order to further reduce any possible association between food and stimulus. To conclude, this work demonstrated how an abstract recognition of large [16] numbers could be precociously available in one animal species, disproving the 'last resort' theory, according to which animals rely on numerical information solely when quantitative cues (considered more salient) are not available [61]. The data outlined here support the hypothesis that animals naturally extrapolate and use numerical cues, suggesting that numerical information constitutes a crucial cue for animal survival.
v3-fos-license
2021-09-27T19:49:35.432Z
2021-08-10T00:00:00.000
238654572
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://doi.org/10.21203/rs.3.rs-791800/v1", "pdf_hash": "b64c840e3d72dc3e59d6e6181dbc43d02a406acf", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46321", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "1309031426dc50bba28b3f7ce1d513028ed00fe7", "year": 2021 }
pes2o/s2orc
Semen Production and Semen Quality of Mehsana Buffalo Breed Under Semiarid Climatic Conditions of Organized Semen Station in India Semen production data comprising 55071 ejaculates of 144 bulls of the Mehsana buffalo breed were analysed. The traits under study were semen volume, sperm concentration, initial sperm motility, post-thaw sperm motility and number of semen doses per ejaculate. The objective of the present study was to assess the effect of various factors affecting semen production traits and to measure the semen production potential of Mehsana buffalo bulls. The semen production data were analysed using a linear mixed model, including a random effect of bull along with fixed effects of various non-genetic factors such as farm, ejaculate number, season of birth, period of birth, season of semen collection and period of semen collection. The first ejaculation had higher semen volume and sperm concentration, resulting in a higher number of semen doses, but semen quality was better in the second ejaculation. Season of birth of the bull affected semen quality traits. As the period of birth advanced, semen volume increased whereas sperm concentration decreased, which was reflected in a persistent production of semen doses per ejaculate. Monsoon and summer were favorable seasons for semen collection because of higher sperm concentration, which resulted in a higher number of semen doses per ejaculate. Additionally, monsoon-collected semen had the highest volume. Hence, monsoon followed by summer would be the favorable season for semen collection. Period of semen collection affected all the semen production traits under study but did not show a specific trend, which means that managemental and environmental changes over the periods have a sizable influence on the traits. Results of the study will help to plan future managemental practices and breeding strategies to improve semen production traits. Similar to the present findings, Bhave et al., in pooled data of Banni, Bhadawari, Pandharpuri and Surti buffaloes, reported a significant effect of ejaculate number on initial sperm motility. Introduction With intensive selection for increased milk yield, reproductive performance has declined in many countries, in part due to an unfavourable genetic relationship. The intense selection for production traits in the last decades has led to a decrease in fertility. Sustaining or improving the reproductive efficiency of dairy cattle along with productivity has become one of the major challenges of the dairy industry worldwide. Many factors may account for the decline in reproductive performance, such as physiological, nutritional, environmental and genetic factors. In this sense, several studies have recognized that there is substantial genetic variation underlying reproductive success in dairy cattle. More emphasis has been given to the improvement of reproduction traits in females rather than in males. A deficiency in the breeding ability of the bull has a large impact on herd productivity as well as on the fertility problems of females. The contribution of the males, either through natural mating or artificial insemination (AI), cannot be ignored, and careful scrutiny of the reproduction traits of bulls should be done before their extensive use under farm and field conditions for AI. In recent years many countries have also implemented genetic evaluation of reproductive traits of bulls. Thus, the relative emphasis of dairy cattle breeding has gradually shifted from production to functional traits such as reproduction.
After extensive implementation of AI technique in eld condition, semen stations are working with the objective to maximize the output of good quality semen from bulls. To ful l the objective they are executing various semen evaluation method and also interested into factors affecting semen production traits of the bulls. This is assessed by aiming on semen volume, sperm concentration and sperm motility of the bull in each ejaculate. Moreover, scrutinizing the source of variations in semen production traits due to various non-genetic factors like farm, ejaculate number, season and period of birth, season and period of semen collection is necessary to ensure su cient semen production from the bulls. Studies on semen production traits of buffaloes are very scanty (Singh et al. 2013;Bhakat et al. 2015; Ramajayan 2016: Bhave et al. 2020) as compared to cattle. Most of the studies on semen production traits of buffalo were done using very less number of ejaculates and focused on age of bull and primary managemental practices like semen collectors, time of collection and interval between collections. Location of the farm, season and period of birth, season and period of semen collection like non-genetic factors could have signi cant effect in variation of semen production traits. These non-genetic factors are associated with environment variation which might be came from feeding and other managemental practices at that time. The variation contributed due to these factors could be rectify after careful evaluation and semen production traits will be improved. The present study analyses semen production traits of two different semen stations of India with state of art facility for buffalo bull semen collection and processing. Data Evaluation of semen production traits was carried out on Mehsana buffalo bulls of two frozen semen stations of Gujarat viz. Pashu Samvardhan Kendra (PSK), Jagudan and Dama Semen Production Unit (DSPU), Dama. Information on breed characteristics is available on the national portal of the NBAGR website (National Bureau of Animal Genetic Resources 2021). Data pertaining to semen production traits are available with two semen stations as database which were utilized for the present study with permission of the semen stations. Details of period of data, number of bulls and ejaculates utilized for the present study is given following table. Semen collection and evaluation Both the semen stations following standard routine practices for the collection of semen from Mehsana buffalo bulls. The bulls were cleaned properly on the day of semen collection in early morning before semen collection. For each bull semen collectors are speci ed and that semen collectors performed all the operations of semen collection for that particular bull. In the semen collection operations, Dummy bulls were used for sexual stimulus, and each bull allowed to perform 2 to 3 false mounts before nal semen collection ride. The time require for false mounting and actual collection mount varies from bull to bull. Normally one to three semen ejaculates were collected from the bull on the day of semen collection. After collection of semen, Semen volume was recorded and kept in a water bath at 37 °C. The semen stations are using photometer for estimation of sperm concentration per ejaculate (x 10 6 /ml). The sperm concentration was recorded per ml for particular ejaculate. 
The initial motility of the sperm cell was estimated by the semen stations as percentage by examining a drop of diluted semen with Tris buffer placed on a pre-warmed slide covered with a pre-warmed cover slip in a phase contrast microscope with a stage warmer at a magni cation of 40x. The sperm cells which exhibit progressive movement were scored on a scale of 0 to 100 percent. The collected semen with poor quality which did not ful l minimum standard criteria were removed from further process of frozen semen dose production. After completion of initial assessment, frozen semen doses were prepared using 0.25 ml straw which contain 20 × 10 6 sperms per dose (i.e., with the hypothesis that it reaches approximately 10 million motile sperms after thawing per dose), sealed, and printed. Semen straws were cooled at 4°C for approximately 3 hr after that frozen down at around −140°C for 10 min in a programmable freezer followed by storage in liquid nitrogen. Post-thaw sperm motility was then carried out for those stored frozen semen doses after 24 hr using 2-3 straws. Thawing of frozen semen straw was done by removing a straw from the liquid nitrogen container and plunging it in warm water bath at 37°C for 30 seconds. Frozen thawed semen was collected in a small test tube by cutting the ends of the straw and remaining procedure was as per initial sperm motility estimation. Semen production traits and in uencing factors Semen production traits considered to study the effects of various non-genetic factors are semen volume, sperm concentration, initial sperm motility, post-thaw sperm motility and number of semen doses per ejaculate. Non-genetic factors affecting semen production traits are farm, number of ejaculate, season and period of birth, season and period of semen collection. There are two farms under study. The bulls were maintained under proper housing, feeding, management and health care. The young bulls were trained for semen collection using arti cial vagina. The semen collection was done twice a week from individual bull and ejaculates were obtained with an interval of 15 -30 minutes. The nutrition requirement is standardized, so bulls are fed ad-libidum chaffed green and dry fodder mixture as per seasonal availability, concentrate mixture as per requirement based on body weight with area speci c mineral mixture. For analysis and description following coding is use. Name of Farm Code Pashu Samvardhan Kendra, Jagudan F1 Dama Semen Production Unit, Dama F2 Mehsana bulls were grouped as per seasons of birth of bull and season of semen collection as winter (November to February), summer (March to June) and monsoon (July to October) looking to the monthly average environmental conditions observed at farms. For analysis and description following coding is use. Semen collection from an individual bull was done two or three times in a day with the time interval of 15-30 minutes, accordingly it was grouped as rst (EJ1), second (EJ2) or third ejaculate. To study the effect of ejaculate number on various semen production traits in present study it was classi ed as such rst and second ejaculate as follow. Data pertaining to third ejaculate were limited, hence it was not utilized for the study. Statistical Analysis Abnormal records in semen production traits i.e. missing data or non-justi able data were eliminated. 
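As a rough numerical illustration of the dose arithmetic described above (0.25 ml straws packed with 20 × 10^6 sperm each), the number of doses obtainable from one ejaculate follows directly from its volume and sperm concentration. The example values below are illustrative only, and the stations' actual dose calculation may additionally account for discarded semen and non-motile sperm.

```python
SPERM_PER_DOSE = 20_000_000  # 20 x 10^6 sperm packed into each 0.25 ml straw

def doses_per_ejaculate(volume_ml: float, concentration_per_ml: float) -> int:
    """Upper-bound estimate of frozen-semen doses obtainable from one ejaculate."""
    total_sperm = volume_ml * concentration_per_ml
    return int(total_sperm // SPERM_PER_DOSE)

# Example close to the least squares means reported later in the paper:
# about 3.3 ml per ejaculate at roughly 1.2 x 10^9 sperm per ml.
print(doses_per_ejaculate(3.34, 1.2e9))  # -> 200 doses
```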
The non-genetic factors evaluated for their effect on the semen production traits of Mehsana buffalo bulls were farm, ejaculate number, season of birth, period of birth, season of semen collection and period of semen collection. The effects of these non-genetic factors on the semen production traits, i.e. semen volume, sperm concentration, initial sperm motility, post-thaw sperm motility and number of semen doses per ejaculate, were studied by multivariate analysis under a linear mixed model with the restricted maximum likelihood (REML) method, considering all non-genetic factors listed above as fixed effects and bull as a random effect in order to study the within-bull and between-bull variability. The data were analysed using SAS software version 9.3 (2011) with PROC MIXED as the base procedure. The differences between the least squares means for subclasses under a particular effect were tested by the Scheffé test (Scheffé, 1959) to check their significance. The highly heterogeneous variances between the subclasses, the unequal group sizes and the mix of pairwise and non-pairwise comparisons led to the use of the Scheffé test to find differences between the least squares subclass means. The Scheffé test is one of the best adjustments that can be used to decrease experiment-wise error rates when testing multiple comparisons. It is a very conservative adjustment, which is why it is considered the safest method. The F-ratio used in the calculation is unique in that the mean square (MS) for only the two groups being compared is used in the numerator and the MS for all respective comparisons is used in the denominator. This means that each pairwise comparison has to have the same significance as the variance for all comparisons when the Scheffé test is used. Semen production traits expressed in percentages, such as initial sperm motility, were adjusted after angular transformation of the percentages as per Snedecor and Cochran (1987). While expressing the means and standard errors, angles were reconverted to percentages to a precision of two decimals. The statistical model was designed to estimate least squares means of the semen production traits for the random effect of bulls and the fixed effects of the non-genetic factors, i.e. farm, ejaculate number, season of birth, period of birth, season of semen collection and period of semen collection: Y_rahbcfgx = μ + R_r + S_a + Z_h + T_b + U_c + X_f + Y_g + e_rahbcfgx, where Y_rahbcfgx is an observation of a semen production trait, μ is the overall mean, R_r is the random effect of the r-th bull, S_a, Z_h, T_b, U_c, X_f and Y_g are the fixed effects of the six non-genetic factors listed above, and e_rahbcfgx is the random error. Results And Discussion The least squares means (LSMs) for the semen production traits, i.e. semen volume, sperm concentration, initial sperm motility, post-thaw sperm motility and number of semen doses per ejaculate, with the random effect of bull and the fixed effects of the non-genetic factors farm, ejaculate number, season of birth, period of birth, season of semen collection and period of semen collection, are given in Table 2. The results of type-3 tests of the non-genetic factors and their interactions are given in Table 3. The overall LSM of semen volume per ejaculate was found to be 3.34 ± 0.18 ml in the present study, which was higher than the semen volumes reported in earlier studies. The semen collected at the first ejaculation gave significantly (P ≤ 0.01) higher semen volume (3.91 ± 0.18 ml) compared to the second ejaculation (2.77 ± 0.18 ml). Significantly higher semen volume was thus produced in the first ejaculate collection in the present study. The lower semen volume in the second ejaculation was due to a physiological effect which is always bound to occur in a subsequent collection after the first collection.
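The mixed-model specification described above was fitted with SAS PROC MIXED; as a rough open-source analogue, the same idea — bull as a random effect, non-genetic factors as fixed effects, and an angular transformation for percentage traits — could be sketched in Python with statsmodels. The data frame below is synthetic, the column names are assumptions made for illustration, and only a subset of the six factors is included for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 400  # hypothetical number of ejaculate records

# Synthetic stand-in for the station records; real data would have one row per ejaculate.
df = pd.DataFrame({
    "bull": rng.integers(1, 41, size=n),
    "farm": rng.choice(["F1", "F2"], size=n),
    "ejaculate_no": rng.choice(["EJ1", "EJ2"], size=n),
    "season_birth": rng.choice(["winter", "summer", "monsoon"], size=n),
    "season_collection": rng.choice(["winter", "summer", "monsoon"], size=n),
    "initial_motility_pct": rng.uniform(60, 80, size=n),
})

# Angular (arcsine-square-root) transformation of the percentage trait,
# as described for initial sperm motility.
df["motility_ang"] = np.degrees(np.arcsin(np.sqrt(df["initial_motility_pct"] / 100.0)))

# Linear mixed model fitted by REML: bull as random effect, non-genetic factors as fixed effects.
model = smf.mixedlm(
    "motility_ang ~ C(farm) + C(ejaculate_no) + C(season_birth) + C(season_collection)",
    data=df,
    groups=df["bull"],
)
result = model.fit(reml=True)
print(result.summary())
```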
Similar ndings were also reported by Ramajayan (2016) The semen collected in the rst ejaculation gave signi cantly (P ≤ 0.01) higher sperm concentration (1473.28 ± 75.62 million per ml) as compare to the second ejaculation (1003.11 ± 75.65 million per ml). Sperm concentration was signi cantly affected by ejaculate number and signi cantly higher sperm concentration was found in the rst ejaculate number in the present study. Similar ndings were also reported by Ramajayan (2016) in Murrah and Bhave et al. (2020) in pooled data of Banni, Bhadawari, Jaffarabadi, Murrah, Pandharpuri and Surti buffaloes. Overall LSMs of initial sperm motility (70.55 ± 0.12 %) in the present study was higher as compared to initial sperm motility reports of 68.40 ± 1. The initial sperm motility was signi cantly (P ≤ 0.01) higher in farm-1 (70.96 ± 0.16 %) as compare to farm-2 (70.14 ± 0.12 %). Effect of period of birth on the initial sperm motility was found highly signi cant (P ≤ 0.01) in the present study which might be due to changes in the environmental condition and managemental practices over the periods. Effect of age of bull at rst semen collection on initial sperm motility was signi cant (P < 0.05) in the present study. Higher initial sperm motility of 70.61 ± 0.36 to 70.73 ± 0.07 % were found in the bulls with 1 to 3 years of age at rst semen collection which was at par with initial sperm motility found in the semen of 5 to 7 years of age of bull at rst semen collection. Highly signi cant (P ≤ 0.01) effect of period of semen collection was found on initial sperm motility. Highest initial sperm motility of 70.69 ± 0.12 % was found in the semen collected during the period of 2019 to 2020 whereas lowest initial sperm motility of 70.44 ± 0.12 % was found in the semen collected during the 2013 to 2014. It revealed that change in managemental and environmental condition during semen collection may contribute to differential initial sperm motility. The semen collected in the second ejaculation gave signi cantly (P ≤ 0.01) higher initial sperm motility of 70.58 ± 0.12 % as compare to the rst ejaculation (70.53 ± 0.12 %). Lower initial sperm motility in the rst ejaculation was due to physiological effect which was always bound to occur as the rst ejaculation contains more non-viable sperms. Similar to the present study, Ramajayan (2016) in Murrah and Bhave et al. (2020) in pooled data of Banni, Bhadawari, Jaffarabadi, Murrah, Pandharpuri and Surti buffaloes also reported signi cant effect of ejaculate number on initial sperm motility. Overall LSMs of post-thaw sperm motility was found to be 60.82 ± 0.16 % in the present study, which was higher as compare to postthaw sperm motility reports The post-thaw sperm motility was signi cantly (P ≤ 0.01) higher in farm-2 (70.17 ± 0.16 %) as compare to farm-1 (51.48 ± 0.18 %). Effect of season of birth was signi cant (P ≤ 0.05) on post-thaw sperm motility. Post-thaw sperm motility of 60.96 ± 0.16 % was observed in the winter born bulls which signi cantly differed with monsoon born bulls' post-thaw sperm motility (60.73 ± 0.16 %). Effect of period of birth was highly signi cant (P ≤ 0.01) on post-thaw sperm motility. Higher post-thaw sperm motility of 62.02 ± 0.22 % was observed in the 2016 to 2017 born bulls but it was lower (60.34 ± 0.21 %) in 2008 to 2009 born bulls which was at par with 2004 to 2007 and 2010 to 2013 born bulls. 
Effect of period of birth on the post-thaw sperm motility was found highly signi cant (P ≤ 0.01) in the present study which might be due to changes in the environmental conditions and managemental practices developed and adopted over the periods. Younger bulls produced semen with higher post-thaw sperm motility as compare to adult and older bulls in the present study. Post-thaw sperm motility was not affected signi cantly by season of semen collection in the present study. Contrarily to the present study signi cant effect of season of semen collection on post-thaw sperm motility was reported by Bhave et al. (2020) in the pooled data of Banni, Bhadawari, Jaffarabadi, Murrah, Pandharpuri and Surti buffaloes. Highly signi cant (P ≤ 0.01) effect of period of semen collection was found on post-thaw sperm motility. Post-thaw sperm motility of 61.04 ± 0.16 % was found to be signi cantly highest in the semen collected during the period of 2019 to 2020 as compare to postthaw sperm motilities during periods 1 to 4. Post-thaw sperm motilities of semen collected during period 1 to 4 were at par with each other. As the period of semen collection advanced post-thaw sperm motility increased showing better handling practices adopted by the semen stations with time. The semen collected in the second ejaculation gave signi cantly (P ≤ 0.01) higher post-thaw sperm motility of 60.89 ± 0.16 % as compare to the rst ejaculation (60.75 ± 0.16 %). Post-thaw sperm motility was signi cantly affected by ejaculate number and signi cantly higher post-thaw sperm motility was found in the second ejaculation in the present study. Lower post-thaw sperm motility in the rst ejaculation was due to physiological effect which was always bound to occur as the rst ejaculation contains more nonmotile sperms due to gap between semen collection days and sperm production cycle in the reproductive organ as sperms get produce, mature and become non-motile till next semen collection. Similarly signi cant effect of ejaculate number on post-thaw sperm motility were reported by Ramajayan (2016) Number of semen doses per ejaculate did not differ signi cantly between farm-1 and 2. Semen production traits like semen volume, sperm concentration, initial sperm motility and post-thaw sperm motility under present study were signi cantly differed between farm-1 and 2 but number of semen doses per ejaculate was not affected. Number of semen doses per ejaculate mainly depend on semen volume and sperm concentration. From the data of the present study it was observed that farm-1 has lower semen volume compared to farm-2 but sperm concentration was higher in farm-1 as compare to farm-2 which might have resulted in overall at par production of semen doses per ejaculate. Effect of season of birth was non-signi cant (P > 0.05) on number of semen doses per ejaculate. This indicate overall adoption of bulls to the particular environment to achieve pubertal age and have well developed reproductive organs without in uence of seasonal variation. Number of semen doses per ejaculate was not affected signi cantly by period of birth of bull. Number of semen doses per ejaculate were signi cantly (P ≤ 0.01) affected by season of semen collection. Semen collected in the monsoon season has higher number of semen doses per ejaculate (186.89 ± 11.85) which was followed by 185.23 ± 11.87 in summer, however difference between both of them was non-signi cant. Signi cantly lower number of semen doses was observed in the winter season of semen collection (172.69 ± 11.85). 
Number of semen doses per ejaculate were signi cantly affected by season of semen collection in the present study. Signi cantly higher number of semen doses per ejaculate were produced from the semen collected during summer and monsoon seasons compared to winter season. Semen characteristics like semen volume and sperm concentration were higher during the summer and monsoon seasons' collected semen. Hence, the higher number of semen doses per ejaculate were produced during summer and monsoon season of semen collection. Contrary to the present nding, Bhosrekar (1988) reported highest frozen semen doses in the winter season but he also narrated that rainy season seemed to be better for semen freezability and lower discard rate. Similar to the present study signi cant effect of season of semen collection on number of semen doses per ejaculate was also reported by Bhosrekar (1988) and Bhosrekar et al. (1992) in Surti buffalo. Highly signi cant (P ≤ 0.01) effect of period of semen collection was found on number of semen doses per ejaculate. Higher number of semen doses per ejaculate (196.26 ± 11.86) was observed from the semen collected during 2015 to 2016 whereas lowest number of semen doses per ejaculate of 151.51 ± 12.32 was produced from the semen collected during 2011 to 2012. Signi cant (P ≤ 0.01) effect of period of semen collection on the number of semen doses per ejaculate was found in the present study. Relatively higher number of semen doses per ejaculate produced during 2015 to 2016 and 2019 to 2020 periods which might be due to better environmental and managemental practices during the periods. The semen collected in the rst ejaculation produced signi cantly (P ≤ 0.01) higher number of semen doses per ejaculate (248.15 ± 11.83) as compare to the second ejaculation (115.06 ± 11.84). In conclusion, Monsoon and summer were favorable seasons for semen collection because of higher sperm concentration which resulted in to higher semen doses per ejaculate in Mehsana buffalo bull. Additionally, Monsoon collected semen had highest volume. Hence, monsoon followed by summer season would be the favorable season for semen collection. Mature Mehsana buffalo bulls of 3 to 5 years of age or bulls having more than 700 kg body weight or the bulls, where semen collection was done after 2012 produced higher semen volume leading to higher semen doses per ejaculate. This indicates that bulls maturing at the age of 3 years (approx.) or having body weight of 700 kg or more produced more semen. First ejaculation had higher semen volume irrespective of age and season of semen collection resulting in to more semen doses per ejaculate in Mehsana buffalo bulls. Figure 1 Average monthly high and low temperature at PSK, Jagudan during the period of study Average monthly high and low temperature at DSPU, Dama during the period of study
v3-fos-license
2021-11-14T16:12:36.290Z
2021-11-06T00:00:00.000
244079128
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.journaljpri.com/index.php/JPRI/article/download/33235/62575", "pdf_hash": "8b3c47872439d70a9c255c41ee1b14da7c651b89", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46322", "s2fieldsofstudy": [ "Education" ], "sha1": "a163f89c12642c358aa54f6194f09204ee84f4a6", "year": 2021 }
pes2o/s2orc
Knowledge and Attitude of Primary School Teachers Regarding Early Identification and Management of Learning Disability Introduction: In India, 1% to 19% of the total students have Learning Disability. Learning disability may vary from person to person and is incurable but can be controlled if diagnosed earlier. Teachers play a vital role in its identification. Aims: Assessing the Knowledge and Attitude of primary school teachers regarding early identification and management of Learning Disability. Study Design: The study design is Descriptive cross sectional design. Place and Duration of Study: selected school at Tapi District, Gujarat, between 2020 – 2021. Methodology: The research was carried out by using Quantitative research approach and on 150 primary school teachers. The non probability sampling technique was used. The tool includes socio demographic variables, Knowledge questionnaire and Attitude scale. Results: No teacher have excellent knowledge i.e 0.00% regarding Learning Disabilty. 59.33% have good knowledge and 40.66% teachers are poor in knowledge regarding Learning Disability. 96.66% teachers have positive attitude towards children having Learning Disability while 3.33% teachers have negative Knowledge. There is positive correlation between Knowledge and Attitude. Original Research Article Koshy et al.; JPRI, 33(48A): 174-181, 2021; Article no.JPRI.76148 175 There is significant association of knowledge with age and classes allotted at p< 0.05. There is significant association of Attitude with classes allotted at p< 0.05. Conclusion: Majority of the teachers have good Knowledge and most of them have positive attitude towards the children with Learning disability. INTRODUCTION Learning disability is a type of Learning disorders that arises from neurological problems. It can be due to the faulty brain structure or functions. This affects the ability of brain to process and comprehend. This makes difficult for the child with Learning disability to learn and process like other normal children [1]. Justine james.et al. (2018) conducted a study in India to assess the percentage if learning Disability and the study reveals that 1% to 19% of total students are having Learning Disability [2]. Due to poor attention spam and weak concentration, students cannot perform well in academics, but this kind of students may be found good in extracurricular activities. Here, it is responsibility of parents, care takers and teachers to motivate to do other activities by focusing and strengthening on abilities rather than disabilities. In Indian setting, pressurizing the child for scholastic performance in academics is much found and this results in Anxiety, Depression, and Stress disorders. Sibnath Deb .et. al (2015) conducted a study on 190 samples in high school students of India to assess the stress in the academics caused due to the parental pressure among Indian high school students. the result reveals that almost two third of the students were found to be stressed out due to the parental pressure for good academic performance [3]. Learning Disability is not an Intellectual disorder. The child might be bright and intelligent, but as the teachers and parents will not be able to identify that the child is actually suffering from Learning Disability, this children may be identified as failures, poor or disinterested in studies. As a result they might be ignored by teachers. This can lead to damage in motivation and threat to child's future and career. 
Therefore it is found to be important for the teachers to do early identification and management of Learning disability. Problem Statement Knowledge and Attitude of primary school teachers regarding early identification and management of Learning Disability. Review of Literature Syed Arifa 1 , Syed Shahid Siraj 2 (2015) conducted a study titled "A descriptive study to assess the knowledge and attitude of primary school teachers regarding learning disabilities among children in selected schools of district Pulwama Kashmir". Quantitative descriptive study was used. Convienient sampling technique was used for data collection from teachers at selected school at District Pulwana. Selfstructured Knowledge questionnaire and Attitude scale was made. karlpearson's correlation coefficient was used to check the realiability of the tool. The result showed that majority of the teachers 73.3% had moderate knowledge on learning disability, 20.0% had inadequate knowledge and only 6.7% teachers had adequate knowledge on the subject. Majority of the teachers that is 93.3% had Most favorable Attitude towards children with learning Disability. Only 6.7% teachers showed Favorable attitude and none (0%) had Unfavorable attitude level towards the children with learning disability. There was significant correlation between knowledge of teachers and their attitude towards such children [4]. Elizabeth K Thomas 1 , Seema p uthaman 2 (2019) conducted a study titled "A Study on the knowledge and attitude of primary school teachers towards inclusive education of children with specific learning disabilities" with the aim to determine the knowledge and Attitude of 180 primary school teachers who meet the inclusion criteria. The result concluded that 63% teachers have average knowledge and 51% have positive Attitude towards the child having Specific learning disability. There is significant correlation between teacher's knowledge and attitude [5]. Vranda M N (2016) conducted a study titled "Attitude of Primary School Teachers towards Children with Learning Disabilities" with the aim to assess the attitude of primary school teachers regarding learning disability. The study was conducted on 200 teachers to assess attitude using Teachers' Attitude about Learning Disabilities (PSTALD) scale. The result shows there is less favourable attitude of teachers towards inclusion of children with learning disability in regular schools [6]. RESEARCH METHODOLGY The research was carried out by using Quantitative research approach and Descriptive cross sectional research design on 150 primary school teachers of Tapi District, Gujarat. The non probability sampling technique was used. The tool includes socio demographic variables, Knowledge questionnaire and Attitude scale. The tool was validated by five experts in the field of psychiatric nursing. The reliability of the tool was checked and it was found 0.7 for Knowledge and Attitude questionnaire. Pilot study was carried out on 16 samples to find out the feasibility of the research study. The main study was carried out on 150 samples. Descriptive and inferential statistics was used to do the data analysis and interpretation. A) Inclusion criteria: 1. Primary School Teachers who are willing to participate in the study. 2. Primary School Teachers available at the time of data collection. B) Exclusion criteria: 1. Teachers who are teaching in private schools. 2. Teachers teaching in schools for physically or mentally challenged children. 
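The inferential analyses named in the methodology — Karl Pearson's correlation between knowledge and attitude scores, and chi-square tests of association with socio-demographic variables — could be sketched in Python as shown below. All scores and category labels are hypothetical; the study's own data are not reproduced here.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(3)
n = 150  # sample size used in the main study

# Hypothetical per-teacher scores and categories.
knowledge = rng.integers(5, 25, size=n)
attitude = knowledge * 2 + rng.normal(0, 4, size=n)   # loosely related, for illustration
age_group = rng.choice(["<=30", "31-50", ">50"], size=n)
knowledge_level = np.where(knowledge >= 15, "good", "poor")

# Karl Pearson's correlation coefficient between knowledge and attitude scores.
r, p = stats.pearsonr(knowledge, attitude)
print(f"r = {r:.2f}, p = {p:.4f}")

# Chi-square test of association between knowledge level and a socio-demographic variable.
table = pd.crosstab(pd.Series(knowledge_level, name="knowledge"), pd.Series(age_group, name="age"))
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi:.4f}")
```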
RESULTS AND DISCUSSION The findings based on the description an inferential analysis tabulated as follows: Description of Correlation between Research Variable Socio-Demographic Characteristics of Teachers The findings of the present study shows that 54% of teachers belongs to age between 31years to 50years, while 4% and 42% of participants belongs to age less than 30 years and age more than 50 years respectively. Knowledge of Primary School Teachers Regarding Learning Disability The present study was conducted among the teachers of Tapi Attitude of Primary School Teachers Regarding Learning Disability Correlation between Knowledge and Attitude of Primary School Teachers Regarding Learning Disability There was significant correlation between knowledge of school teachers regarding learning disability and their attitude towards such children. The correlation was calculated by karl's pearson's formula and the score was 0.99. A study was conducted by Seema Uthama 1 , Elizabeth k 2 (2019), "Knowledge and Attitude of primary school teachers towards inclusion education of children with Specific Learning Disability". The study found that there is a significant correlation between teacher's knowledge and their attitude towards inclusion criteria [5]. Association of knowledge with Selected Socio-Demographic Variables The association between knowledge and sociodemographic variable which was tested by using chi-square test. Findings shows that age and classes allotted is associated with knowledge at p<0.05. The other socio-demographic variable such as gender, educational qualification, total years of experience, had special training on learning disability and learnt child psychology were found statistically non-significant with knowledge of primary school teachers. B Amali Rani (2016), "A study to assess the effectiveness of psycho education module on knowledge regarding early identification of children with learning disability among primary school teachers in selected school at Chennai". The association between post test knowledge scores of primary school teachers with their demographic variables. There is association between demographic variables such as age, years of experience and teachers those who attended in service training on learning problems [8]. Association of Attitude with Selected Socio-Demographic Variables The association between Attitude and sociodemographic variable which was tested by using chi-square test. There is significant association between classes allotted to the teachers with socio-demographic variable at p<0.05. The other socio-demographic variable such as age, gender, educational qualification, total years of experience, had special training on learning disability and learnt child psychology were found statistically nonsignificant with knowledge of primary school teachers. Bhavya1, Bhavya s 2 , the knowledge and attitude of teachers regarding specific learning disabilities among children . The present study finding shows that there is an association between attitude and demographic variables such as gender, classes allotted, child psychology and there is no association between age, years of experience, marital status, in-service education, and family history of learning disability 7 . CONCLUSION The following results are obtained from the study. Majority of the teachers have favourable Attitude towards the child having Learning disability. There is significant correlation between attitude and knowledge of teachers. 
There is a significant association of knowledge with age and with the classes allotted to the teachers. No association was found with other socio-demographic variables such as gender, educational qualification, total teaching experience, having learnt child psychology, or having dealt with a child with a learning disability. There is a significant association of attitude with the classes allotted to the teachers. No association was found with other socio-demographic variables such as age, gender, educational qualification, total teaching experience, having learnt child psychology, or having dealt with a child with a learning disability. RECOMMENDATIONS 1. A study can be done in a larger population area to generalise the study findings. 2. A study can be conducted to check the prevalence of learning disability and the types of learning disabilities among various age groups of children. 3. A comparative study can be done on the knowledge of primary school teachers between urban and rural areas. 4. A study can be conducted to assess the knowledge regarding learning disability among parents or primary caretakers of school-going children. 5. A similar study can be done on secondary school teachers. CONSENT AND ETHICAL APPROVAL As per international standard or university standard guidelines, participant consent and ethical approval have been collected and preserved by the authors. Conceived and designed the analysis: Betty Koshy.
v3-fos-license
2020-04-16T09:13:48.367Z
2020-04-09T00:00:00.000
216528492
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12649-020-01061-x.pdf", "pdf_hash": "16ec204a55ae5731c9ef31be6341d73b46661c7d", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46327", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Materials Science" ], "sha1": "2a323d66f955917e8d2e26ac394926cd6f1d85a3", "year": 2020 }
pes2o/s2orc
Structural Compatibility of Infrastructures Utilizing Alternative Earth Construction Materials This paper presents analysis of the structural behavior of road pavements in which alternative construction materials are replacing the traditional ones in some of the structural layers. The analysis is consider important since from the structural performance point of view many of the alternative materials have mechanical properties far different from those of the traditional road construction materials, especially unbound aggregates, and as a consequence of that, the empirically calibrated design rules applied and adjusted for the normally utilized pavements solutions are not valid any more. The analysis is exemplified by means of four different low volume road pavement structures that are in line with the existing design guidelines in Finland. The mechanical behavior of these structures is analyzed using three different approaches: semi-empirical Oedemark design approach, multi-layer linear elastic analysis and finite element analysis. The obtained calculation results indicate clearly that if a low volume road structure containing a high stiffness layer made e.g. of stabilized fly ash is resting on soft subgrade soil, tensile stresses up to 1 MPa may be developed. Therefore, the performance and respective distress mechanisms of the structure are likely to be very different from those of a traditional solution. As a key conclusion from the analysis, need for a new concept, structural compatibility, was identified. It would help in drawing due attention to the mechanical behavior of alternative materials when they are used in replacing the traditional ones in road structures exposed to repeated heavy traffic loads. Introduction Structural solutions that are applied in constructing road pavement structures exposed to repeated traffic loads have typically been developed over a long time span. Empiricism has also played a big role when structural solutions have by time been adjusted to the local ambient conditions and available types of construction materials. Long-term feedback obtained from the actual performance of pavement structures has resulted in operative solutions even though the applied design approaches may have been somewhat vague from a theoretical point of view. When various types of alternative earth construction materials have increasingly been taken into use in constructing road and field structures, the status quo described above has changed. The mechanical properties, especially the stiffness and strength of these alternative materials, may be quite different from those of the traditional construction materials such as sand, gravel, and crushed rock aggregates. Replacing one of the structural layers in a road or field structure with a material having fundamentally different mechanical properties may therefore change the overall performance of the whole structure. This means, of course, that the empirically calibrated design approaches are not valid as such anymore. From the mechanistic pavement analysis it is well known that under a wheel load acting on the road surface a large stiffness difference in between two structural layers on top of each other results in the development of tensile stresses at the bottom of the stiffer layer. In the case of an asphalt concrete layer resting on top of an unbound base course layer, this is one of the fundamental distress mechanisms against which the mechanistic design of a road pavement is normally made. 
Therefore, it is fairly evident that if we replace an unbound road pavement layer either with a very stiff material (e.g. a self-cementing or cement-stabilized layer of fly ash) or a material with very low stiffness (e.g. a layer of tire shreds), tensile stresses tend to develop in places different from the traditional type of road structure. Correspondingly, the critical distress mechanisms that are decisive regarding the service life of the structure will change as well. Loading effects caused by heavy vehicle wheels moving on top of road structures are severe in many respects: -Contact pressure between a truck tire and road surface has typically an intensity of about 800 kPa, in the case of old generation single tires even up to 1000 kPa. In comparison to the contact stresses normally allowed, for instance, under the footings of normal house construction, these values may be up to threefold. -Wheel loads have a moving nature. In the literature, this has already been shown in the early 1990s to have a markedly more damaging effect on the road structure in comparison to a static load or even a cyclic load staying in place. This especially concerns the rutting behavior of unbound layers of road structures [1]. -One more characteristic feature of traffic loads is that they are repetitive. During the lifetime of a heavily trafficked road, even the heaviest wheel loads can be repeated hundreds of thousands or even millions of times over a certain point of road pavement. Considering that in addition to the traffic loads, road infrastructures are exposed to the varying effects of weather and seasons-rain, heat, freezing, thawing, etc.and the fact that roads must mainly be built using locally available construction materials, it is evident that the structural design of road pavements has a critical importance on their service life. Since ancient times, roads have been built very much based on empirical design rules that have later on been supplemented by calculatory elements and experimental road tests. One of the very best-known examples among the later ones is the extensive AASHO (American Association of State Highway Officials) road test carried out in the U.S.A already in the early 1960s [2]. From that time dates back also the so-called "fourth power rule" according to which the damaging effect of a wheel or axle load increases by a power of four when the wheel or axle load increases a certain amount. For instance, if the axle load rises from the typical design value of 100 kN (10 tons) by 20%, the damaging effect of that axle is assumed to more than double (1.2^4 = 2.0736). Developments made in computer technology since AASHO have enabled new types of design approaches to be introduced, because it has become possible to analyze the prevailing stresses and strains in road pavement exposed to a wheel load. Most typically, these analyses have utilized the so-called multi-layer linear elastic (MLLE) theory, in which each structural layer of road pavement is assumed to have a constant stiffness, both with regard to the compressive and tensile stresses the layer is experiencing. Since especially the unbound granular layers are known to have stress-dependent stiffness, multi-layer 1 3 analysis tools enabling the modeling of this important feature have also been introduced e.g. [3]. 
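To make the load characteristics and the "fourth power rule" described above concrete, two one-line helpers suffice: one gives the tyre–pavement contact radius implied by a wheel load and a contact pressure, the other the relative damaging effect of an axle load. The 50 kN wheel load below is an illustrative value chosen because it also appears in the analyses later in the paper; the function names are not taken from any design guideline.

```python
import math

def contact_radius_m(wheel_load_kn: float, contact_pressure_kpa: float) -> float:
    """Radius of an equivalent circular contact area for a given wheel load and tyre pressure."""
    area_m2 = wheel_load_kn / contact_pressure_kpa  # kN / kPa = m^2
    return math.sqrt(area_m2 / math.pi)

def load_equivalency_factor(axle_load_kn: float, reference_kn: float = 100.0) -> float:
    """Relative damaging effect of an axle load according to the 'fourth power rule'."""
    return (axle_load_kn / reference_kn) ** 4

print(f"contact radius: {contact_radius_m(50.0, 800.0):.3f} m")  # ~0.141 m for a 50 kN wheel at 800 kPa
print(f"120 kN axle:    {load_equivalency_factor(120.0):.4f}")    # 1.2**4 = 2.0736, as quoted above
```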
More advanced possibilities for analyzing the mechanical behavior of road pavement materials and traffic-loaded road pavements have been opened up by the introduction of easy-to-use numerical analysis tools based on Finite Element Method (FEM) e.g. [4]. and Discrete Element Method (DEM) e.g. [5,6]. Both of these approaches have, of course, their own strengths and weaknesses. With regard to the FEM approach, an important feature is that it treats all the structural layer materials as continuums, because of which the representativeness of the structural model fundamentally depends on the available types of material models. In the DEM approach, this limitation is avoided by modeling the interactions between each individual grain separately, so ideally the model should be able to reproduce the actual behavior of a granular material layer inherently consisting of a large number of individual grains interacting with each other. The very large number of grains contained in a road structure imposes, however, a severe limitation on accomplishing a truly realistic representation of reality with the DEM approach. One characteristic feature that concerns all the so-called mechanistic or analytical pavement design approaches utilizing either the multi-layer analyses or the more advanced FEM approaches is that even if we are able to determine the mobilizing stresses and strains in a traffic-loaded road pavement, we still need to know "how much is much." In other terms, what are the allowable distresses during one load application that correspond to the combination of repeated traffic loads and environmental conditions during the lifetime of the structure to be designed? This is the reason why empirical knowledge and calibration are required, even with the mechanistic design approaches. Excellent sources for this type of verification data with regard to traditional types of pavement structures have been the Minnroad test, carried out in the state of Minnesota in the U.S.A since the 1990s, and various types of Accelerated Pavement Test facilities used in a number of countries around the world during the last few decades e.g. [7][8][9]. Concerning pavement structures containing alternative construction materials, a big challenge is that many of these materials are available only locally and/or in limited quantities in comparison to the traditional types of construction materials: natural and crushed rock aggregates. That is why it is not economically feasible to carry out very extensive and long-lasting experimental loading test campaigns to find out their long-term performance in different types of potential utilization applications. Therefore, even if some empirical testing and verification is inevitably needed, utilization of the available numerical analysis tools is of utmost importance in developing a better understanding of the performance and failure mechanisms of these non-traditional pavement structures and thus to enable sustainable utilization of alternative construction materials in different types of road pavement applications. The aim of this paper is to exemplify the structural analysis of a few pavement structures containing alternative construction materials and to discuss the meaning of the obtained results. 11] and related structural solutions typically used in Low Volume Road (LVR) types of applications not conforming, however, the overall structural layer thickness requirements set for higher quality roads based on the design against frost action. 
Analyzed Pavement Structures Because the stiffness of subgrade soil underlying a road pavement has in earlier studies e.g. [12]. been observed to have a marked effect on the stresses and strains that are mobilizing into an infrastructure under the action of traffic load, subgrade stiffness was included as an additional variable into the analysis. Consequently, three different subgrade stiffness values were used, one representing very soft subgrade soil conditions, one subgrade soil with a medium high stiffness value, and one with high stiffness subgrade soil conditions. Semi-empirical Design Approach Based on Odemark Method The semi-empirical design approach that is based on the calculation method introduced by Odemark [13] is widely used for the structural design of road and street pavements in Finland. The key idea of this approach is that the overall stiffness of a pavement structure is a measure of its longterm load-carrying capacity (i.e. bearing capacity). Therefore, basically the only critical design parameter is the stiffness of each structural layer material, in addition to which the stiffness value of underlying subgrade soil is of course required. In the design guidelines for road and streets with different traffic volumes, the respective target values for the overall stiffness of the pavement structure are given [10] After having that, the Odemark method is used to calculate the overall stiffness of the whole pavement structure by summing up the contributions of each structural layer one by one, starting from the bottom of the pavement structure and continuing up to the road surface. In practical terms, this approach means that the higher the target stiffness value is, the stronger the pavement structure, consisting of thicker layers of better quality materials, you need to design. When it comes to the recommended stiffness values for different types of structural layer materials, empiricism plays a big role in the design approach described above. The recommended values are, in part, based on back-calculations of Plate Loading Test (PLT), but in addition to that it can be stated that the recommended values have by time been calibrated against the observations made from actual performance of real road pavements on a long time span. As far as traditional types of pavement structures and construction materials are utilized in building a road pavement, the empirical calibration makes the design approach operative, even though the overall stiffness as a true measure of the long-term load-carrying capacity of a pavement structure as such can be highly questioned. The stiffness values used in connection with the Odemark structural design method for the pavement structures given in Fig. 1 are summarized in Table 1. Multi-layer Linear Elastic Analysis Multi-layer analyses carried out in this research were accomplished using the program BISAR-PC originally delivered by Shell [14]. 
Characteristic features of the software are typical for the most of multi-layer linear elastic (MLLE) analysis tools; they include: -Wheel loads are applied on top of the pavement structure using a set of circular contact areas, each having a constant contact pressure; -each structural layer has a constant thickness, horizontal upper and lower boundaries and infinite length in the horizontal direction -subgrade soil is described as an infinite half-space with a horizontal upper boundary -all structural layer materials are isotropic, and their stiffness values are constant, independent of the prevailing stress conditions -all structural layer materials have infinite strength, both with regard to compressive and tensile stresses, i.e. no tension cut-off property is included into the model -structural layer materials do not have any weight of their own in the analysis. The stiffness values used in the multi-layer linear analysis were intentionally kept the same as those used in connection with the Odemark design approach (Table 1). In addition to the stiffness values, multi-layer linear elastic analysis carried out using BISAR-PC software requires as an input the value of a Poisson's ratio for each structural layer material. In the current analysis a constant value of υ = 0.35 was selected for all of the structural layer materials and υ = 0.5 for the subgrade soil. Finite Element analysis FEM analyses carried out in this study were accomplished using PLAXIS 3D (version 2017) software. Dimensions of the model were 5 m by 5 m, while the total thickness of structural and subgrade layers was 5.5 m. The element type used for modeling the structural layers of road pavements as well as subgrade soil was a ten-node tetrahedral element. The circular load was given as uniformly distributed pressure. The material models used were as follows: Hardening Soil (HS) for the structural layers of the road consisting of traditional aggregates, the Mohr-Coulomb (MC) model in undrained conditions for the subgrade soil, and the Linear Elastic (LE) model for the asphalt layer, fly ash, or bitumen-stabilized base course and fly ash layers in the sub-base course. Materials modeled as linear elastic had the same material parameters as those used in the MLLE calculations ( Table 1). The undrained state of subgrade in MC was defined with undrained shear strength, s u (s u = 20 kPa for type a subgrade, 35 kPa for type b subgrade, and 70 kPa for type c subgrade). The applied material parameters for HS materials are summarized in Table 2. Other Aspects of Structural Design Because the aim of the current study is primarily to analyze the mechanical behavior of pavement structures under wheel load action, other important aspects influencing the actual service life of bound structural layers, such as variation in material quality, shrinkage cracking behavior or resilience against uneven frost heave or consolidation settlement, are not considered here. Overall Stiffness of Pavement Structures The overall stiffness of analyzed pavement structures determined using the three parallel calculation methods have been compared in Fig. 2. In the case of the Odemark calculation approach, the overall stiffness is directly the output of calculation as such, while in connection with MLLE and FEM analyses, stiffness values have been derived based on the intensity of applied surface load and respective surface deflection according to Eq. 1. 
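Assuming Eq. 1, quoted in the following passage, is the standard flexible-plate relation between surface deflection and half-space stiffness, E = 2(1 − ν²)σr/d, the back-calculation can be sketched in a few lines of Python. The 50 kN load, 150 mm radius and 1.2 mm deflection used below are purely illustrative values, not results from the study.

```python
import math

def overall_stiffness(load_kn: float, radius_mm: float, deflection_mm: float, poisson: float = 0.35) -> float:
    """Back-calculate the overall stiffness E (MN/m^2) from the surface deflection
    under a uniformly loaded circular area (flexible-plate assumption)."""
    radius_m = radius_mm / 1000.0
    contact_pressure = (load_kn / 1000.0) / (math.pi * radius_m ** 2)  # MN/m^2
    return 2.0 * (1.0 - poisson ** 2) * contact_pressure * radius_mm / deflection_mm

# Illustrative example: a 50 kN load on a 150 mm radius area producing a 1.2 mm deflection.
print(f"E = {overall_stiffness(50.0, 150.0, 1.2):.0f} MN/m^2")  # roughly 155 MN/m^2
```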
The principal difference between the results thus obtained is that the two latter ones correspond to an evenly distributed surface load (i.e. a flexible loading plate), while in connection with the Odemark approach the result obviously corresponds more to the PLT type of loading, i.e. a rigid loading plate under which the load distribution is not uniform. In Eq. 1, E is the overall stiffness of the structure (MN/m²), ν is Poisson's ratio (-), σ is the contact pressure (MN/m²), r is the radius of the loading plate (mm), and d is the deflection (mm). Based on Fig. 2, it is evident that the overall stiffness of a pavement structure is not a unique value, but it clearly depends on the calculation method that is used in determining it. In addition, with thin pavement structures like these the overall stiffness determined at the top of the road structure also depends markedly on the subgrade stiffness, as Fig. 2 indicates. In the case of Structure number 3, the Odemark approach seems to give consistently higher, up to 30%, overall stiffness values in comparison to those derived from the results of MLLE and FEM. The main reason for this is assumed to be that the Odemark approach was originally derived for analyzing structures in which the stiffness increases layer by layer from the bottom to the top of the structure, while MLLE and FEM are more robust in the analysis of any type of layered structure. The same phenomenon can at least partly explain the slight inconsistency of the results obtained for Structure number 1, even though in this case the influence of subgrade stiffness seems to be important as well. Correspondingly, FEM modelling results in slightly lower overall stiffness values compared to the MLLE method. Most likely, this arises from the elasto-plastic material models used in the FEM analyses. Layers constructed with traditional aggregate materials do not have any tensional capacity in the FEM simulations. This leads to somewhat larger surface deflections and, consequently, to slightly lower calculated overall stiffness values. Tensile Stresses Inside of Pavement Structures Because the overall stiffness determined from the top of a road structure is only a very coarse measure of the mechanical behavior of the whole road pavement, it is worthwhile to investigate the stresses that are mobilized inside the structure as well. Considering the long-term performance of a pavement structure, one of the critical distresses is the intensity of tensile stresses and/or strains at the bottom of any stiff structural layer that is resting on top of a more flexible material layer. In connection with mechanistic pavement analyses, this type of distress is normally considered the critical one regarding the service life of an asphalt concrete layer resting on top of an unbound base course layer. Figure 3 compares the tensile stresses calculated using the MLLE and FEM approaches at the bottom of the base course layer for Structures 1 and 2 and at the bottom of the sub-base layer for Structures 3 and 4, respectively. Because the Odemark approach does not enable the internal stresses of a pavement structure to be evaluated, it is not included in this comparison. As Fig. 3 reveals, large tensile stresses develop in the stiff structural layers included in the analyzed pavement structures. Clearly, the highest tensile stresses under the 50 kN surface load, up to 1.5 MPa, are mobilized in Structure 1, in which the stiffness of the base course layer is very high, while in Structure 2 the respective tensile stresses are less than half of those mobilized in Structure 1.
For a bituminous material, the mobilization of tensile stresses is not likely to be a very critical issue due to the material's ductility, but in the case of a more brittle fly ash-stabilized base course material, repeated application of these high tensile stresses is likely to result in gradual fracturing of the base course layer. In Structures 3 and 4, in which a stiff material layer is located deeper below the road surface, the mobilized tensile stresses are not as high as in Structure 1, but especially in soft subgrade soil conditions they are still of the order of hundreds of kilopascals. Discussion The results of the calculations carried out with three parallel modeling approaches and summarized in Figs. 2 and 3 indicate clearly that the overall stiffness of a pavement structure is not a unique quantity, but it markedly depends both on the calculation method and on the subgrade soil on which the structure is located. In Fig. 3, it can be observed that high tensile stresses develop in the structural layers that are stiffer than the underlying components of the structure. The simple Odemark calculation approach does not enable assessment of these internal distresses of the pavement structure, while in the MLLE modeling approach the stresses and strains mobilized in the loaded pavement structure can be obtained at selected observation points. The most complete picture of the mobilized distresses can be obtained when the FEM approach is used in analyzing the overall performance of a traffic-loaded pavement structure. The distribution of tensile stresses was examined more closely from the FEM simulation results. Cross sections from simulations under a PLT type of loading (subgrade type a) were examined here and are illustrated in Fig. 4. As Fig. 4 indicates, much higher tensile stresses are mobilized in the fly ash-stabilized base course layer of Structure 1 than in the bitumen-stabilized Structure 2. In addition, in Structure 1, remarkably high tensile stresses prevail over a large area, practically throughout the whole base course layer. When the fly ash material is positioned in the sub-base course layer (Structures 3 and 4), the tensile stresses are significantly lower in magnitude. However, an almost continuous tensioned zone seems to develop from the bottom of the layer under the loaded area to the top of the sub-base course layer on both sides of the load. Both types of tensile stress zones in the fly ash materials may affect the long-term behavior of the layer. Fly ash is a stiff but brittle material. If the magnitude of the tensile stress is relatively high, as in these structures, it is questionable whether the fly ash layers can withstand these tensile stresses without cracking under the repeated wheel loads during the service life of the road structure. Conclusions If structural layers made of alternative construction materials, e.g. with especially high or low stiffness values, are used in a pavement structure to replace the traditional types of materials, it is important to realize that the overall performance of the whole pavement structure is likely to change. Therefore, the failure mechanisms that are decisive regarding the service life of the structure may also be quite different from those that are relevant to a traditional type of road structure.
Recognition of these critical failure mechanisms is, however, not straightforward, especially if the structural analysis is made using the empirically based design approaches that are, as such, applicable to more traditional types of pavement structures. Based on the structural analyses exemplified in this study it can be concluded that: -Even such a simple quantity as the overall stiffness of a pavement structure is far from being a unique value; it depends on the calculation approach that is used in determining it. In addition, it naturally also depends to a great extent on the conditions, especially the type of subgrade soil, on which a certain type of pavement structure is located. (Fig. 4 caption: Comparison of horizontal stress distributions in the cross-sectional direction as obtained from the FEM analyses carried out for Structures 1 to 4 resting on subgrade type a; structural layer boundaries are indicated in the cross sections by dotted lines.) -Tensile stresses clearly exceeding 1 MPa may be mobilized in a Low Volume Road type of pavement structure loaded by a heavy wheel load if the base course is made of very stiff material. -If and when these high tensile stresses lead to cracking of the base course layer, the load distribution capacity of the base course layer will be markedly reduced, which in turn results in increased stresses and a more uneven stress distribution in the underlying layers. -A much more complete picture of the prevailing stresses and strains within a traffic-loaded pavement structure can be obtained if the analysis is made using more sophisticated analysis tools such as the FEM. In comparison to a performance evaluation based on a simple quantity such as the overall stiffness of a pavement structure, it enables better recognition of the critical distresses and related failure mechanisms that are characteristic of non-traditional types of pavement structures. Even though the use of the FEM approach enables the development of a thorough understanding of the structural behavior of basically any type of non-traditional pavement structure, at least two major challenges still remain: -Identification of the potentially critical distresses and failure mechanisms is not enough without knowledge of the allowable level of stresses or strains that are repeated a certain number of times during the service life of a road pavement at the critical points of the structure to be designed. Using the terminology of mechanistic pavement analyses, better knowledge of the fatigue models of alternative pavement construction materials should be developed. -In spite of the great developments made in the user-friendliness of modern Finite Element software tools, they are still too complicated to be used in the routine structural design of pavement structures. Therefore, simplified design tools that still incorporate recognition of the critical failure mechanisms typical of non-traditional types of pavement structures are required. Based on the above, it is evident that considerable work still lies ahead before these two main challenges are tackled. One of the first steps along this way is to understand that utilization of alternative types of construction materials may change the overall behavior and related failure mechanisms of a road pavement. People who are used to working with alternative construction materials are familiar with the concept of chemical compatibility. In addition to that, the authors of this paper would like to introduce the concept of structural compatibility.
Keeping this concept in mind, due attention will, it is hoped, also be drawn to the mechanical behavior of these alternative materials when they are used in replacing the traditional ones in road structures exposed to heavy traffic loads. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
v3-fos-license
2020-03-12T10:38:36.037Z
2020-03-05T00:00:00.000
215414172
{ "extfieldsofstudy": [ "Computer Science", "Environmental Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2072-4292/12/5/840/pdf", "pdf_hash": "ece4dd5bcd487929b662eb3547fe08c6883a3ecb", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46329", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "fc6a58efedf169c16659d3c019652fbda429cd4c", "year": 2020 }
pes2o/s2orc
Estimation of the Particulate Organic Carbon to Chlorophyll- a Ratio Using MODIS-Aqua in the East / Japan Sea, South Korea : In recent years, the change of marine environment due to climate change and declining primary productivity have been big concerns in the East / Japan Sea, Korea. However, the main causes for the recent changes are still not revealed clearly. The particulate organic carbon (POC) to chlorophyll- a (chl- a ) ratio (POC:chl- a ) could be a useful indicator for ecological and physiological conditions of phytoplankton communities and thus help us to understand the recent reduction of primary productivity in the East / Japan Sea. To derive the POC in the East / Japan Sea from a satellite dataset, the new regional POC algorithm was empirically derived with in-situ measured POC concentrations. A strong positive linear relationship (R 2 = 0.6579) was observed between the estimated and in-situ measured POC concentrations. Our new POC algorithm proved a better performance in the East / Japan Sea compared to the previous one for the global ocean. Based on the new algorithm, long-term POC:chl- a ratios were obtained in the entire East / Japan Sea from 2003 to 2018. The POC:chl- a showed a strong seasonal variability in the East / Japan Sea. The spring and fall blooms of phytoplankton mainly driven by the growth of large diatoms seem to be a major factor for the seasonal variability in the POC:chl- a . Our new regional POC algorithm modified for the East / Japan Sea could potentially contribute to long-term monitoring for the climate-associated ecosystem changes in the East / Japan Sea. Although the new regional POC algorithm shows a good correspondence with in-situ observed POC concentrations, the algorithm should be further improved with continuous field surveys. Introduction The East/Japan Sea is a semi-marginal sea located in the northwestern Pacific, and it is bordered by Korea, Japan, and Russia. Many previous studies reported that the East/Japan Sea is a productive region, especially the Ulleung Basin located in the southwestern East/Japan Sea [1][2][3][4][5][6][7]. Recently, not only the changes in environmental conditions but also alterations in biological characteristics were reported in many previous studies [4][5][6]8]. Especially, Joo et al. [5] addressed a significant declining trend of the annual primary production in the Ulleung Basin which is the biological hotspot in the East/Japan Sea. However, although many other studies have observed the changes in the East/Japan Sea, the main causes driving the recent changes remain unclear. The particulate organic carbon (POC) to chlorophyll-a (chl-a) ratio (POC:chl-a) has been used as an important indicator for ecological and physiological state of phytoplankton. Generally, the concentrations of photosynthetic pigments in phytoplankton cells depend on the environmental factors and the physiological conditions of phytoplankton [9][10][11][12][13][14]. For instance, the POC:chl-a can be increased under high light intensity and low nitrogen supply conditions [1,5,6]. Moreover, the POC:chl-a varies according to the size structure of the phytoplankton community [15][16][17]. Therefore, understanding of spatial and temporal variations in the POC:chl-a can help us to determine the ecological and physiological conditions of a phytoplankton community. However, the existing POC-deriving algorithm using satellite ocean color data [18] is not validated in the East/Japan Sea. 
Although the algorithm is fully validated in global ocean, it is hard to directly apply to this small regional sea. Thus, the algorithm needs to be calibrated and the accuracy should be evaluated before applying to our study area. Therefore, this study aims to (1) develop a new regional POC algorithm using an ocean color satellite and (2) investigate spatiotemporal variability of the POC:chl-a in the East/Japan Sea. Study Area and Sampling Total chl-a and POC concentrations were measured at 41 stations in the East/Japan Sea (16 stations in 2012, 11 stations in 2013, six stations in 2014, and eight stations in 2015, respectively; Figure 1). Northern and southern regions of the East/Japan Sea were defined as shown in Figure 1 to investigate spatial variations of POC:chl-a. Sampling and analysis of total chl-a and POC concentrations were conducted based on Lee et al. [19]. Water samples for the total chl-a and POC concentrations of phytoplankton were collected from the surface layer using a rosette sampler with Niskin bottles. The collected water samples were immediately filtered on Whatman ® glass microfiber filters (precombusted; Grade GF/F, diameter = 24 mm) using a vacuum pressure lower than 5 in. Hg. The filtered samples were frozen immediately and preserved until analysis at the laboratory. The chl-a concentrations were measured with a precalibrated fluorometer (10-AU, Turner Designs) after extraction in 90% acetone in a freezer for 24 h based on Parsons et al. [20]. The filters for POC concentrations were frozen immediately and preserved for mass spectrometric analysis at the Alaska Stable Isotope Facility of the University of Alaska Fairbanks, USA. Satellite Datasets We obtained the chl-a and remote sensing reflectance (Rrs) data from the MODIS (Moderate Resolution Imaging Spectroradiometer) onboard the satellite Aqua provided by the OBPG (Ocean Biology Processing Group at NASA Goddard Space Flight Center; https://oceandata.sci.gsfc.nasa.gov/ MODIS-Aqua/). We used the Level-3 daily composite datasets covering the East/Japan Sea from July 2002 to December 2018 at 4-km of spatial resolution [21,22]. The POC derivation algorithm for the East/Japan Sea was empirically derived with our in-situ observation data. Based on the Stramski et al. [18], a power-law relationship between a blue-to-green band ratio of Rrs and POC were used to estimate POC concentrations. The equation for the algorithm is expressed below: where a and b are constants. Constants were determined empirically by regression analysis with our in-situ dataset. The input wavelength for the green band can be replaced with available band between 547 and 565 nm. In this study, Rrs(443) and Rrs(547) which are available bands of MODIS were used as input wavelengths for blue-to-green band ratio for the POC algorithm. POC:chl-a ratios were calculated by dividing our estimated POC concentrations by remotely sensed chl-a concentrations. Monthly composited data for POC and POC:chl-a were obtained by averaging daily data for each month. POC Algorithm Derivation In-situ measured POC concentrations ranged from 84.07 to 713.69 mg m −3 , and the average was 262.93 ± 205.82 mg m −3 . The blue-to-green band Rrs ratio were extracted from MODIS-Aqua monthly composite datasets. To determine two constants in the POC algorithm, a and b, the curve fitting using nonlinear regression between Rrs ratio and in-situ POC concentration was conducted ( Figure 2a). From this result, the constants a and b were determined as 295.7 and -1.028, respectively. 
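For reference, the regression result can be written out and applied directly to MODIS-Aqua reflectances. The sketch below assumes the power-law band-ratio form described above, with the constants just reported (a = 295.7, b = -1.028); it is only intended for surface waters of the East/Japan Sea, and the variable names are placeholders.

```python
def poc_from_rrs(rrs_443, rrs_547, a=295.7, b=-1.028):
    """Regional POC estimate (mg m^-3) from the MODIS blue-to-green Rrs ratio."""
    return a * (rrs_443 / rrs_547) ** b


def poc_to_chla(rrs_443, rrs_547, chl_a):
    """POC:chl-a ratio using the regional POC estimate and a MODIS chl-a value (mg m^-3)."""
    return poc_from_rrs(rrs_443, rrs_547) / chl_a
```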
Based on these two constants, the POC algorithm was derived as follows: POC = 295.7 × (Rrs(443)/Rrs(547))^(−1.028). The determination coefficient (R²) and Spearman's correlation coefficient for the relationship between the Rrs ratio and the in-situ POC concentrations were 0.8017 and −0.861, respectively (Figure 2a). The POC concentrations derived from the regional model also showed a strong linear relationship with the in-situ measured POC concentrations (R² = 0.6579), and most of the satellite-derived POC values were plotted within the 95% prediction bounds (Figure 2b). The new POC algorithm showed lower RMSE and bias in comparison to the existing algorithm [18] in the East/Japan Sea (Figure 3). The RMSE and bias of the new algorithm were 115.37 and −17.43, respectively, and those of Stramski et al. [18] were 161.47 and −97.29, respectively. The POC concentrations were derived for the East/Japan Sea from our new regional algorithm (Figure 4). The climatological monthly mean POC (January 2003–December 2018) showed a seasonal variation of the POC concentration in the East/Japan Sea (Figure 4). Generally, the POC concentrations were relatively higher during the spring season, and the lowest POC concentrations were observed during the summer season. POC:chl-a The climatological monthly distribution of the POC:chl-a (January 2003–December 2018) in the East/Japan Sea showed strong seasonal variations (Figure 5). In contrast to the seasonal pattern of POC, relatively lower POC:chl-a ratios were observed during spring and autumn compared to those during winter and summer. The ranges of the mean POC:chl-a in the northern and southern East/Japan Sea were 169.6–528.4 and 172.4–549.3, respectively (Figure 6).
Domains for the two regions are shown in Figure 1. The averages of POC:chl-a in the northern and southern regions of the East/Japan Sea were 377.6 ± 80.4 and 388.1 ± 69.7, respectively. No statistically significant difference in POC:chl-a was observed between the northern and southern regions (t-test, p > 0.05). The POC:chl-a showed a strong seasonal variation (Figure 7). The climatological monthly mean POC:chl-a showed the lowest values during April (270.3 ± 74.7 and 261.9 ± 60.9 for the northern and southern East/Japan Sea, respectively), and the highest values were observed during August (461.0 ± 18.1) and July (451.5 ± 23.6) for the northern and southern East/Japan Sea, respectively (Figure 7). The POC:chl-a during spring and autumn were significantly lower than during summer and winter (t-test, p < 0.01). New Regional POC Algorithm Since the POC algorithm reported by Stramski et al. [18] was developed for the eastern South Pacific and eastern Atlantic Oceans, we validated and modified the algorithm to derive a suitable POC model for the East/Japan Sea. The previously reported POC algorithm by Stramski et al. [18] tends to underestimate the POC concentrations in the East/Japan Sea (Figure 3). However, our modified regional POC algorithm showed significantly improved accuracy in the East/Japan Sea, and the POC concentrations estimated from the regional algorithm derived in this study showed a strong linear relationship with the in-situ measured POC concentrations, with its linear regression line located near the 1:1 line (Figure 3). However, only 41 data points were used to derive the POC model in this study, and in fact, this number of data points could not be considered sufficient. Nevertheless, the modified POC model showed a good correspondence with the field-measured POC concentrations. If continuous field observations and further calibration and validation of the algorithm are conducted, a POC algorithm with a better performance could be derived. Spatial and Temporal Variability of the POC:chl-a The seasonal variation of the POC:chl-a in the East/Japan Sea would be closely related to the physiological conditions of phytoplankton communities. Generally, phytoplankton tend to accumulate more carbon in their cells under high light intensity and nutrient-depleted conditions [9,13,14,23,24].
Phytoplankton increase their chl-a contents to maximize light absorption under the low light condition [11,25,26]. Moreover, many previous studies reported that the carbon to chl-a ratio of phytoplankton decreases with increased growth rate [14,[27][28][29][30]. The size structure of the phytoplankton community also can affect the carbon to chl-a ratio [15][16][17]. Small cell-sized phytoplankton such as flagellates usually show higher carbon to chl-a ratio than large cell-sized phytoplankton such as diatoms [15][16][17]. However, the daily carbon uptake rate by phytoplankton tends to be lowered when the productivity contribution of picoplankton to the total primary production is high [31,32]. It suggests that the investigation of POC:chl-a in the East/Japan Sea can provide some potential clues on the recent changes of primary productivity in the East/Japan Sea. In general, there are two phytoplankton blooms per year in the East/Japan Sea; spring bloom and fall bloom [4,5,[33][34][35]. Signals of the blooms were also observed in the climatological monthly mean distribution of the POC (Figure 3). The strongest bloom occurs during spring season while the weaker blooms appears in fall. Both spring and fall blooms are mainly caused by the massive growth of diatoms [3,4,33,35,36]. During spring and fall blooms, the size distribution of phytoplankton cells would shift from smaller to larger size due to the rapid growth of diatoms. Consequently, lower POC:chl-a during spring and fall might be caused by the bloom mainly driven by the growth of diatoms. However, these suggestions are only our hypothesis based on several previous studies. To understand the temporal variation of the POC:chl-a in the East/Japan Sea, further research with field observations is needed. On the other hand, the timing of spring bloom showed a difference in the northern and southern regions of the East/Japan Sea (Figures 3 and 4). POC concentrations were highest in April in the southern region and highest in May in the northern region. Additionally, the distribution of POC:chl-a during April and May appeared to be the opposite of POC concentrations. Other previous studies have also observed a similar spatial distribution of chl-a in the East/Japan Sea with satellite datasets [37,38]. Those spatiotemporal distributions of POC concentrations and POC:chl-a suggest that the spring bloom occurs earlier in the southern regions of the East/Japan Sea. Summary and Conclusions In this study, the regional POC algorithm for the East/Japan Sea was derived empirically using in-situ measured POC and MODIS-Aqua satellite datasets. In-situ measured POC concentrations at the 41 stations in the East/Japan Sea were used to calibrate and validate our new regional POC algorithm. The power-law relationship between POC concentration and blue-to-green band of remote sensing reflectance, Rrs(443)/Rrs(547), was used to derive the algorithm based on Stramski et al. [18] (Figure 2a).
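The calibration and evaluation steps summarized above can be reproduced in a few lines of code. The sketch below fits the power law by ordinary least squares in log-log space and computes RMSE and mean bias (here defined as estimated minus observed); the paper's own fitting procedure was a nonlinear regression, so coefficients obtained this way may differ slightly, and the array names are placeholders.

```python
import numpy as np

def fit_power_law(band_ratio, poc_insitu):
    """Fit POC = a * ratio**b via linear regression in log-log space; returns (a, b)."""
    slope, intercept = np.polyfit(np.log(band_ratio), np.log(poc_insitu), 1)
    return float(np.exp(intercept)), float(slope)

def rmse_and_bias(estimated, observed):
    """Root-mean-square error and mean bias of the satellite-derived estimates."""
    diff = np.asarray(estimated, dtype=float) - np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2))), float(np.mean(diff))
```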
v3-fos-license
2022-12-09T16:14:51.841Z
2022-12-01T00:00:00.000
254449173
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2079-7737/11/12/1779/pdf?version=1670407426", "pdf_hash": "3b8ff0f5a63b3543b92930e4f9e00761520be44b", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46331", "s2fieldsofstudy": [ "Biology" ], "sha1": "2901e4740d9d04f5893b8baec367f8cf91b9796f", "year": 2022 }
pes2o/s2orc
Ten Plastomes of Crassula (Crassulaceae) and Phylogenetic Implications Simple Summary Plastids are semi-autonomous plant organelles which play critical roles in photosynthesis, stress response, and storage. The plastid genomes (plastomes) in angiosperms are relatively conserved in quadripartite structure, but variable in size, gene content, and evolutionary rates of genes. The genus Crassula L. is the second-largest genus in the family Crassulaceae J.St.-Hil, that significantly contributes to the diversity of Crassulaceae. However, few studies have focused on the evolution of plastomes within Crassula. In the present study, we sequenced ten plastomes of Crassula: C. alstonii Marloth, C. columella Marloth & Schönland, C. dejecta Jacq., C. deltoidei Thunb., C. expansa subsp. fragilis (Baker) Toelken, C. mesembrianthemopsis Dinter, C. mesembryanthoides (Haw.) D.Dietr., C. socialis Schönland, C. tecta Thunb., and C. volkensii Engl. Through comparative studies, we found Crassula plastomes have unique codon usage and aversion patterns within Crassulaceae. In addition, genomic features, evolutionary rates, and phylogenetic implications were analyzed using plastome data. Our findings will not only reveal new insights into the plastome evolution of Crassulaceae, but also provide potential molecular markers for DNA barcoding. Abstract The genus Crassula is the second-largest genus in the family Crassulaceae, with about 200 species. As an acknowledged super-barcode, plastomes have been extensively utilized for plant evolutionary studies. Here, we first report 10 new plastomes of Crassula. We further focused on the structural characterizations, codon usage, aversion patterns, and evolutionary rates of plastomes. The IR junction patterns—IRb had 110 bp expansion to rps19—were conservative among Crassula species. Interestingly, we found the codon usage patterns of matK gene in Crassula species are unique among Crassulaceae species with elevated ENC values. Furthermore, subgenus Crassula species have specific GC-biases in the matK gene. In addition, the codon aversion motifs from matK, pafI, and rpl22 contained phylogenetic implications within Crassula. The evolutionary rates analyses indicated all plastid genes of Crassulaceae were under the purifying selection. Among plastid genes, ycf1 and ycf2 were the most rapidly evolving genes, whereas psaC was the most conserved gene. Additionally, our phylogenetic analyses strongly supported that Crassula is sister to all other Crassulaceae species. Our findings will be useful for further evolutionary studies within the Crassula and Crassulaceae. Nucleotide Substitution Rate Analyses The 79 PCGs from 87 species of Crassulaceae were employed to evaluate the evolutionary rates (Table S1). The percentage of variable sites (PV) and average π values were measured with DnaSP v6.12 (Departament de Genètica, Universitat de Barcelona, Barcelona, Spain) [46]. The nucleotide substitution rates, including dN, dS, and dN/dS, were inferred with PAML v4.9 [55] under F3X4 and M0 model. Plastome Organizations and Structural Features Furthermore, based on the results obtained with mVISTA, in all plastomes investigated it was found that the IR and coding regions (exons, tRNAs, and rRNAs) are more conserved than SC and conserved non-coding regions (CNS), respectively ( Figure 2). 
Additionally, the results also revealed that the 3 plastomes (labelled 8-10) of subgenus Disporocarpa exhibited higher divergences than the 7 plastomes (labelled 1-7) of subgenus Crassula when compared with the reference. (Figure 2 caption: the Y-scale represents the percent identity between 50% and 100%; the labels 0 to 10 indicate C. perforata (reference), C. alstonii, C. columella, C. dejecta, C. mesembryanthoides, C. tecta, C. mesembrianthemopsis, C. socialis, C. volkensii, C. expansa subsp. fragilis, and C. deltoidei, respectively.) The sliding-window-based π values estimated for the 11 plastomes of Crassula ranged from 0.00073 to 0.10315 (Table S2 and Data S2). The mean π value and its standard deviation were 0.02978 and 0.01954, respectively. Thus, a total of 11 HVRs were identified with relatively high variability (regions with π > 0.06886 were considered as HVRs) (Figure 3). These HVRs, containing high π values (0.06912-0.08653) and abundant variable sites (111-559), might be used as potential DNA barcodes for species identification within Crassula (Table 2). In our current study, all 11 plastomes of Crassula displayed similar IR junction patterns (Figure 4). The SSC/IRa borders are located in the coding regions of the ycf1 gene, resulting in fragmentation of ycf1 (ycf1-fragment) in the IRb regions. Moreover, ndhF genes were discovered to occur mainly in the SSC, and partly in the IRb, regions. Notably, rps19 genes are located at the LSC/IRb junctions, with extension into the IRb regions for 110 bp. Similarly, trnH genes lie at the IRa/LSC junctions, with uniform 3 bp-sized expansions to the IRa regions. (Figure 4 caption: blue, orange and green blocks represent the LSC, IR and SSC regions, respectively; gene boxes above and below the block indicate the two transcription directions; "fra." is the abbreviation of "fragment".)
Codon Usage and Aversion Patterns To compare the patterns of codon usage and aversion between Crassula and other Crassulaceae species, four analyses (RSCU, ENC, PR2-plot, and codon aversion motif) of 53 plastid genes (length ≥ 300 bp) were performed. The overall RSCU values ranged from 0.32 (CTC or AGC) to 2.07 (TTA) among Crassulaceae species (Table S3). Similar to other Crassulaceae species, seven taxa of Crassula exhibited a significant preference for A/T-ending codons over G/C-ending codons in plastid genes (Figure 5). Importantly, the RSCU heatmap showed two subgenera within Crassula: subgenus Disporocarpa included C. expansa subsp. fragilis, C. deltoidea and C. volkensii; subgenus Crassula consisted of the remaining eight taxa (Figure 5). The ENC values ranged from 30.83 (ndhC in Sedum sarmentosum Bunge) to 57.74 (ndhJ in C. volkensii and C. expansa subsp. fragilis) among Crassulaceae species (Table S4). Generally, ENC values ≤ 35 indicate high codon preference [52,61,62]. The results show that most of the ENC values (99.48%) were higher than 35, indicating a weaker bias. Most surprisingly of all, we detected that the ENC values of matK from the Crassula clade are significantly higher than those of all other clades (Table S4 and Figure 6). This might prove to be a unique feature of Crassula species. To further verify this finding, more sampling data and comprehensive analyses are needed in future studies. The PR2 plots of matK and the 52 other PCGs are presented in Figures 7 and S1, respectively. These results indicated that the nucleotide usage at the 3rd codon site of 4-fold degenerate codons is uneven in different genes. For example, rps14, clpP, psbA, and pafII prefer to use A/G, A/C, T/C, and T/G in 4-fold degenerate sites, respectively (Figure S1). In addition, these unbalanced utilizations were also found in different species (Figure S1). Obviously divergent GC-biases were observed in matK genes between species of subgenus Crassula and the others. Specifically, all GC-biases of the clades from Kalanchoideae and Sempervivoideae, plus subgenus Disporocarpa, were less than 0.5.
On the contrary, all these values for subgenus Crassula were higher than 0.5, which might be a unique characteristic of subgenus Crassula. Moreover, species with close relationships had identical nucleotide biases. For example, C. alstonii and C. columella had identical AT-biases (0.4074) and GC-biases (0.5455). Similar phenomena could also be observed in C. mesembryanthoides and C. tecta (AT-biases = 0.4286, and GC-biases = 0.5455). Owing to the codon aversion motifs containing phylogenetic implications, we analyzed the codon aversion patterns of genes among Crassulaceae species. Except for rpoB, rpoC2, ycf1 and ycf2, codon aversion motifs were found in the remaining 49 genes (Table S5). It is worth noting that 27 and 16 unique codon aversion motifs were detected for species of subgenus Crassula and subgenus Disporocarpa, respectively (Table 3), which might be used as potential biomarkers for species identification. Further to this, 8 consensus motifs might be considered as a feature of the genus Crassula (Table 3). Moreover, the codon aversion motifs from 3 genes (matK, pafI and rpl22) could also divide the 11 species into two subgenera (subgenus Crassula and subgenus Disporocarpa) (Figure 8), which is congruent with the results from the RSCU heatmap.
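As an illustration of how such codon aversion motifs can be extracted, the sketch below lists the sense codons that never occur in an in-frame coding sequence; the actual pipeline used by Miller et al. and in this study may differ in its details, and the example usage is hypothetical.

```python
from itertools import product

STOP_CODONS = {"TAA", "TAG", "TGA"}
SENSE_CODONS = {"".join(c) for c in product("ACGT", repeat=3)} - STOP_CODONS

def codon_aversion_motif(cds):
    """Return the sorted tuple of sense codons that are absent from an in-frame CDS."""
    cds = cds.upper().replace("U", "T")
    used = {cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)}
    return tuple(sorted(SENSE_CODONS - used))

# Hypothetical usage: compare motifs of an orthologous gene across taxa
# motifs = {taxon: codon_aversion_motif(seq) for taxon, seq in matk_sequences.items()}
```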
Evolutionary Rates and Patterns The π (0.00447-0.0914) and PV (4.91-37.52%) values of the 79 plastid PCGs of Crassulaceae species were plotted in Figure 9a. Two genes, namely ycf1 (π = 0.0914, PV = 35.78%) and matK (π = 0.08239, PV = 37.52%), had obviously higher π and PV values than those of the other 77 genes, indicating that they might accumulate more mutations than other plastid genes. The detailed data are listed in Table S6. To further quantify the evolutionary rates of the PCGs, the nucleotide substitution rates, including dN, dS and dN/dS, were calculated (Figure 9b, Table S6). The dN values ranged from 0 to 0.8671, with higher dN values for ycf1 (dN = 0.8671) and matK (dN = 0.7804) than for the others. Compared with the dN values, the dS values had relatively wide ranges (0.177-2.3917), resulting in corresponding dN/dS ratios (0-0.5891) of less than 1. This finding indicates that the plastid genes of Crassulaceae appear to be evolving under a purifying selective constraint. Among the 79 plastid PCGs, ycf2 is the most rapidly evolving gene, with the highest ratio (dN/dS = 0.5891), followed by ycf1, cemA, psaI, and matK. By contrast, psaC was the most conserved gene, with the lowest ratio (dN/dS = 0). Phylogenetic Implications To investigate the evolutionary relationships among the 87 species of Crassulaceae, phylogenetic analyses were performed. After a model test, GTR + G4 and GTR + I + G4 were inferred as the optimal substitution models for most genes (the detailed models can be seen in Table S7). As shown in Figure 10, the trees inferred from the two methods displayed the same topology. The ten species of Crassula that we sequenced, together with C. perforata, form the well-supported clade Crassula, which is sister to all other Crassulaceae species (maximum likelihood bootstrap [BS] = 100 and Bayesian posterior probability [PP] = 1.00).
In addition, our phylogenetic tree indicated that this monophyletic clade could be clustered into two subgenera: subgenus Disporocarpa harbored C. volkensii, C. expansa subsp. fragilis, and C. deltoidei, while the remaining species clustered in subgenus Crassula. Discussion Ten new plastomes of Crassula were reported in the present study. Combined with the available data from public databases, we conducted comprehensive analyses, including plastome organizations, codon usage and aversion patterns, evolutionary rates, and phylogenetic implications. The expansion and contraction of IR regions are common evolutionary events and have been considered the main mechanism for the length variation of angiosperm plastomes [64][65][66]. In our study, we performed comparative analyses among Crassula plastomes and found that the IRb regions had uniform-length (110 bp) expansions to the rps19 gene. This 110-bp expansion has also been observed in Aeonium, Monanthes, and most other taxa of Crassulaceae in our recent study [17]. This finding indicated that the conserved IR organization might act as a family-specific marker for Crassulaceae species. Interestingly, it was reported that rps19 genes were completely located in the LSC regions in Forsythia suspensa (Thunb.) Vahl, Olea europaea Hoffmanns. & Link L., and Quercus litseoides Dunn [67,68], and were fully encoded by the IR regions in Polystachya adansoniae Rchb.f., Polystachya bennettiana Rchb.f., and Dracaena cinnabari Balf.f. [69,70]. There are several mechanisms that might explain IR expansion and contraction [71][72][73]. For instance, Goulding et al. [71] proposed that short IR expansions may occur through gene conversion events, whereas large IR expansions involve double-strand DNA breaks. In order to better reveal the mechanisms of IR expansion and contraction, more extensive investigations in Crassulaceae and Saxifragales are required. Investigations of codon usage patterns could reveal phylogenetic relationships between organisms [25,74]. In particular, the 11 species of Crassula can be divided into two subgenera from the RSCU heatmap, which agreed with the results of the phylogenetic analyses. This finding further demonstrates that RSCU values contain phylogenetic implications [75][76][77][78][79][80]. Additionally, we observed that codon usage patterns are gene-specific and/or species-specific, reflected in diversified ENC values and various distribution patterns in the PR2 plots. Interestingly, we found that the codon usage pattern of the matK gene in Crassula species is unique among Crassulaceae species, with elevated ENC values. Furthermore, the GC-biases of the matK gene with a specific preference (>0.5) might be a particular feature of subgenus Crassula. Due to its rapid evolutionary rate, high universality, and significant interspecific divergence, the matK gene has been broadly used in plant evolutionary studies as one of the core DNA barcodes [9,10,81-84]. Codon aversion, a novel concept proposed by Miller et al. [29][30][31], is an informative character in phylogenetics. Specifically, the codon aversion motifs in orthologous genes are generally conserved in specific lineages [29][30][31]. To date, these analyses have only been performed on a few plant plastomes [17,26]. For example, the specific codon aversion motifs of the rpoA gene could distinguish not only the two genera (Aeonium and Monanthes), but also the three subclades of Aeonium in our recent report [17]. In this work, genus-specific and subgenus-specific codon aversion motifs were identified for 11 Crassula species.
These findings suggest that the codon aversion pattern could be used as a promising tool for phylogenetic studies. Generally, the dN/dS ratios of genes reflect the extent of selection pressures during evolution [22]. Here, the dN/dS values of plastid PCGs ranged from 0 to 0.5891 within Crassulaceae, indicating that all plastid genes were under purifying selection. Among these values, elevated dN/dS ratios were found for ycf1 (0.4349) and ycf2 (0.5891). Similarly, high dN/dS ratios of these two genes were also observed in other families, such as Asteraceae Bercht. & J.Presl [38], Mazaceae Reveal [22], and Musaceae Juss. [13]. The ycf1 gene is related to protein translocation [85]. The ycf2 gene is necessary for cell viability, but its detailed function is still unknown [86]. Why ycf1 and ycf2 evolve relatively fast is an interesting question. A possible reason, put forward by Barnard-Kubow et al. [87], is that relaxed purifying selection or positive selection on ycf1, ycf2 and some other genes might result in the development of reproductive isolation and subsequent speciation in plants. Therefore, the results suggested that ycf1 and ycf2 might play important roles in the divergence of Crassulaceae. Our phylogenetic tree divided the 87 species into 3 subfamilies and 7 clades. The clade Crassula is sister to all other 6 clades, which agrees with the phylogeny reported by Gontcharova et al. [4], Chang et al. [6], and Han et al. [17]. Further, the 11 Crassula species could be divided into two subgenera, which generally accords with the morphological differences (floral shape) reported by Bruyns et al. [10] (Table S8). Nevertheless, there are still some unsolved phylogenetic problems within Crassulaceae. The first problem is that the plastid phylogeny of Crassula is not entirely clear due to the limited data. According to the classification proposed by Tölken [11,88], 11 and 9 sections were identified in subgenus Crassula and subgenus Disporocarpa, respectively. However, Bruyns et al. [10] indicated that most sections were not monophyletic. Moreover, subgenus Disporocarpa has recently been regarded as a paraphyletic group [9,10]. The second is the genus Sedum, which is not monophyletic in our study, agreeing with the widely accepted viewpoint [3][4][5][89,90]. Finally, the genus Orostachys has been demonstrated to be non-monophyletic based on plastid data, which is consistent with previous analysis based on nuclear internal transcribed spacer (ITS) data [63]. In order to better understand the phylogeny of Crassula or Crassulaceae, more data are needed for further detailed analyses. Conclusions In the present study, 10 new plastomes of Crassula species were reported. These plastomes exhibited identical gene content and order, and they contained 134 genes (130 functional genes and 4 pseudogenes). The 11 identified HVRs with relatively high variability (π > 0.06886) might be used as potential DNA barcodes for species identification within Crassula. The unique expansion pattern, in which the IRb regions had uniform-length (110 bp) boundary expansions to rps19, might represent a plesiomorphy of Crassulaceae. According to the RSCU values, A/T-ending codons were favored in plastid genes. Most importantly, we found that the codon usage pattern of the matK gene in Crassula species is unique among Crassulaceae species, with elevated ENC values. Furthermore, subgenus Crassula species have specific GC-biases in the matK gene.
In addition, the codon aversion motifs from matK, pafI and rpl22 contained phylogenetic implications within Crassula. Compared with other Crassulaceae species, 27 and 16 unique codon aversion motifs were detected for subgenus Crassula and subgenus Disporocarpa, respectively. Additionally, the evolutionary rate analyses indicated that all plastid genes of Crassulaceae were under purifying selection. Among these genes, ycf1 (dN/dS = 0.4349) and ycf2 (dN/dS = 0.5891) were the most rapidly evolving genes, whereas psaC (dN/dS = 0) was the most conserved gene. Finally, our phylogenetic analyses strongly supported that Crassula is sister to all other Crassulaceae species. Our results will be beneficial for further evolutionary studies within Crassula and Crassulaceae. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The sequence data generated in this study are available in GenBank of the National Center for Biotechnology Information (NCBI) under the accession numbers: OP729482-OP729487 and OP882297-OP882300.
v3-fos-license
2015-12-02T01:35:28.312Z
2015-01-01T00:00:00.000
17609893
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://thesai.org/Downloads/Volume6No9/Paper_29-Content_BASED_Image_Retrieval_Using_Local_Features_Descriptors.pdf", "pdf_hash": "8fe9962e172acbd15ef5a5d7410e955d2f02b75a", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46332", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "8fe9962e172acbd15ef5a5d7410e955d2f02b75a", "year": 2015 }
pes2o/s2orc
Content-Based Image Retrieval using Local Features Descriptors and Bag-of-Visual Words Image retrieval is still an active research topic in the computer vision field. There are existing several techniques to retrieve visual data from large databases. Bag-of-Visual Word (BoVW) is a visual feature descriptor that can be used successfully in Content-based Image Retrieval (CBIR) applications. In this paper, we present an image retrieval system that uses local feature descriptors and BoVW model to retrieve efficiently and accurately similar images from standard databases. The proposed system uses SIFT and SURF techniques as local descriptors to produce image signatures that are invariant to rotation and scale. As well as, it uses K-Means as a clustering algorithm to build visual vocabulary for the features descriptors that obtained of local descriptors techniques. To efficiently retrieve much more images relevant to the query, SVM algorithm is used. The performance of the proposed system is evaluated by calculating both precision and recall. The experimental results reveal that this system performs well on two different standard datasets. Keywords—Content-based Image Retrieval (CBIR); Scale Invariant Feature Transform (SIFT); Speeded Up Robust Features (SURF); K-Means Algorithm; Support Vector Machine (SVM); Bag-of-Visual Word (BoVW) INTRODUCTION Image retrieval is the field of the study that concerned with looking, browsing, and recovering digital images from an extensive database.CBIR is viewed as a dynamic and quick advancing research area in image retrieval field.It is a technique for retrieving images from a collection by similarity.The retrieval based on the features extracted automatically from the images themselves.Many of CBIR systems, which is based on features descriptors, are built and developed. A feature is defined as capturing a certain visual property of an image.A descriptor encodes an image in a way that allows it to be compared and matched to other images.In general, image features descriptors can be either global or local.The global feature descriptors describe the visual content of the entire image, whereas local feature describes describe a patch within an image (i.e. a small group of pixels) of the image content.The superiority of the global descriptor extraction is the increased speed for both feature extraction and computing similarity.However, global features still too rigid to represent an image.Particularly, they can be oversensitive to location and consequently fail to identify important visual characteristics [1,2]. Local feature approaches provide better retrieval effectiveness and great discriminative power in solving vision problems than global features [3].However, the number of local features that are extracted for each image may be immense, especially in the large image dataset.Wherefore, BoVW [4,5] is proposed as an approach to solving this problem by quantizing descriptors into "visual words." Depending on the previous facts, the present study proposed a system for image retrieval based on local features using BoVW model.The system tries to bring more accuracy with the option to use the two main local descriptors (SIFT [6], SURF [7]). 
The rest of this paper is organized as follows. Section 2 gives an overview of the BoVW model, K-Means, and SVM. Section 3 discusses two of the most commonly used local feature descriptors. Section 4 reviews some of the related work using the BoVW model in image retrieval. In Section 5, the proposed architecture of our image retrieval system, which is based on local feature descriptors, is introduced. Our experimental results are presented in Section 6. Finally, Section 7 contains the conclusion and our future work. II. BAG-OF-VISUAL WORD MODEL The BoVW model is one of the most widely used ways of representing images as a collection of local features. For this reason, some researchers tend to call it a bag of features. These local features are typically groups of local descriptors. The total number of local descriptors extracted for each image may be colossal. In addition, searching nearest neighbors for each local descriptor in the query image consumes a long time. Therefore, BoVW was proposed as an approach to tackling this issue by quantizing descriptors into "visual words," which drastically decreases the number of descriptors. Thus, BoVW makes the descriptor more robust to change. This model is very close to the traditional description of texts in information retrieval, but it is applied to image retrieval [5,6]. BoVW is the de facto standard of image features for retrieval and recognition [7]. It consists of three main stages, described in the following subsections: A. Keypoint Detection The first step of the BoVW model is to detect local interest points. For feature extraction of interest points, they are computed at predefined locations and scales [8]. Feature extraction is a separate process from feature representation in BoVW approaches [9]. There are many keypoint detectors that have been used in research, such as Harris-Laplace, Difference of Gaussian (DoG), Hessian-Laplace, and Maximally Stable Extremal Regions (MSER) [10,11]. B. Features Descriptors The keypoints are described as multidimensional numerical vectors, according to their content [6]. In other words, feature descriptors are used to determine how to represent the neighborhood of pixels near a localized keypoint [9]. The most efficient feature descriptors in the BoVW model are SIFT and SURF. C. Building Vocabulary In the previous stage, the total number of extracted feature descriptors is large. To solve this problem, the feature descriptors are clustered by applying a clustering algorithm, such as the K-Means technique [12], to generate a visual vocabulary. Each cluster is treated as a distinct visual word in the vocabulary and is represented by its respective cluster center. The size of the vocabulary is determined using the clustering algorithm; in addition, it depends on the size and the type of the dataset [7]. The BoVW model can be formulated as follows. First, the training dataset is defined as S, a set of images represented by S = {s1, s2, ..., sn}, where s denotes the extracted visual features. A clustering algorithm such as K-Means is then applied to build a fixed number of visual words W, represented by W = {w1, w2, ..., wv}, where v is the number of clusters. Finally, the data are summarized in a V×N occurrence table of counts Nij = n(wi, sj), where n(wi, sj) denotes how often the word wi occurs in image sj [6].
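As an illustrative sketch of this quantization step (written in Python with scikit-learn rather than the C#/EmguCV implementation described later in the paper), the vocabulary construction and histogram computation can be expressed as follows; the function names and the vocabulary size k = 500 are assumptions made only for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, k=500, seed=0):
    """Cluster the pooled local descriptors (one row per descriptor, e.g. 128-D
    SIFT or 64-D SURF vectors) into k visual words; the cluster centers form
    the vocabulary W = {w1, ..., wk}."""
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(all_descriptors)

def bovw_histogram(image_descriptors, vocabulary):
    """Assign each descriptor of a single image to its nearest visual word and
    return the normalized histogram of word counts (one row of the occurrence
    table for that image)."""
    k = vocabulary.n_clusters
    words = vocabulary.predict(image_descriptors)        # nearest word index per descriptor
    counts, _ = np.histogram(words, bins=np.arange(k + 1))
    return counts.astype(float) / max(counts.sum(), 1)   # L1 normalization
```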
On the other hand, K-Means is one of the simplest unsupervised learning algorithms that address the well-known clustering problem. It partitions the data into K clusters based on the features extracted from the images themselves [13]. It is used to calculate the nearest neighbors of the points and the cluster centers, usually by means of approximate nearest neighbor computation. The method can be scaled to similarly large vocabulary sizes by the use of approximate nearest neighbor methods [12]. SVM is a supervised machine learning technique [14]. It represents the image database as two sets of vectors in a high- or infinite-dimensional space. It relies on a fundamental principle called the maximum margin classifier: a hyperplane that separates two 'clouds' of points at equal distance, such that the margin between the hyperplane and the clouds is maximal. SVM builds a hyperplane or set of hyperplanes that maximizes the margin between the images that are relevant and those that are not relevant to the query [15]. The goal of the SVM classification technique is to find an ideal hyperplane that separates the irrelevant and relevant vectors by maximizing the size of the margin between the two classes [16]. Image classification is a machine learning technique. It is a step used to accelerate image retrieval in large-scale databases and to increase retrieval precision. Similarly, in the absence of labeled data, unsupervised clustering has been found to be helpful for increasing retrieval speed and improving retrieval precision. Image clustering is based on a similarity measure, while image classification has been performed using different techniques that do not require the use of similarity measures [15,17]. III. LOCAL FEATURE DESCRIPTORS In computer vision, a local feature technique contains two parts [18]: a feature detector and a feature descriptor. The feature detector determines regions of an image that have unique content, such as corners. Feature detection is used to find interest points (keypoints) in the image that remain locally invariant, so that they can be detected even in the presence of scale change or rotation. The feature descriptor involves computing a local descriptor, which is usually done on regions centered on detected interest points. Local descriptors rely on image processing to transform a local pixel neighborhood into a compact vector representation [19]. On the other hand, local descriptors are broadly used in much computer vision research, such as robust matching, image retrieval, and object detection and classification. In addition, using local descriptors enables computer vision algorithms to deal robustly with rotation, occlusion, and scale changes. Local feature algorithms depend on the idea of determining some interest points in the image and performing a local analysis on them, rather than looking at the image as a whole. There are numerous algorithms for describing local image regions, such as SIFT and SURF. The SIFT and SURF descriptors depend on local gradient computations. The following subsections briefly discuss the SIFT and SURF algorithms. A. Scale Invariant Feature Transform (SIFT) Lowe [3] developed SIFT as a continuation of his previous work on invariant feature detection. It has four computational phases: (a) extrema detection, (b) keypoint localization, (c) orientation assignment, and (d) keypoint description.
The first phase examines the image under different octaves and scales to isolate points of the image that are different from their surroundings. These points, which are called extrema, are potential candidates for image features. In the keypoint localization phase, some of the extrema points are selected to be keypoints. Candidate keypoints are refined by rejecting extrema points that are caused by edges and by low contrast points. In the orientation assignment phase, every keypoint and its neighbors are represented as a set of vectors using the magnitude and the direction. In the last phase, a collection of vectors in the neighborhood of every keypoint is taken and this information is combined into the keypoint descriptor. The neighborhood is divided into 4×4 regions, and in each region the vectors are histogrammed into eight bins. SIFT thus provides a 128-element keypoint descriptor. B. Speeded Up Robust Features (SURF) Bay et al. [4] introduced the SURF algorithm as a scale- and rotation-invariant interest point detector and descriptor. The SURF algorithm mixes crudely localized information with the distribution of related gradients. The SURF algorithm is similar to the SIFT algorithm, but it is much more simplified and faster in computation and matching. The SURF algorithm depends on the Hessian matrix to detect keypoints. It uses a distribution of Haar wavelet responses in the keypoint's neighborhood. The final descriptor is obtained by concatenating the feature vectors of all the sub-regions and is represented with 64 elements. The SIFT and SURF algorithms are nowadays the most widely used feature-based techniques in the computer vision community. These algorithms have proven their efficiency and robustness in invariant feature localization (invariant to image rotation, scaling, and changes in illumination) [4,20]. IV. RELATED WORK The subject of image retrieval is discussed intensively in the literature. The success of the BoVW model has also contributed to the growing number of researchers and studies. For example, Cakir et al. [21] studied CBIR using the BoVW model. They discussed how BoVW treats an image as a document and uses K-Means for vector quantization. Zhang et al. [22] proposed a bag of images for CBIR schemes. They assumed that the image collection is composed of image bags rather than independent individual images, each bag containing some relevant images that have the same perceptual meaning. The image bags were built before image retrieval. In addition, a user's query is an image bag, named the query image bag. In this setting, all image bags in the image collection are sorted according to their similarities to the query image bag. It was hypothesized that this new idea can enhance the image retrieval process. However, this work needs to develop more efficient ways to measure the dissimilarity between two image bags. Ponitz et al. [23] attempted to address the limitations of detecting images in huge image databases. They decided to enhance the BoVW methodology by improving the distance measure between image signatures to avoid the occurrence of vague features. They utilized the SIFT algorithm for acquiring local visual features. Only 60% of all images were randomly chosen, and their features were used for clustering. These features were then quantized. 100 random images were selected as input images. The images were modified with increasing distortion to test the robustness of the application. However, this approach needs more discriminative power in the actual image description.
Liu [8] reviewed the BoVW model in image retrieval systems. He provided details about the BoVW model and explained different building strategies based on this model. First, he presented several procedures that can be taken in the BoVW model. Then, he explained some popular keypoint detectors and descriptors. Finally, he looked at strategies and libraries for generating the vocabulary and performing the search. Alfanindya et al. [24] presented a method for CBIR using SURF with BoVW. First, they used SURF to compute interest points and descriptors. Then, they created a visual dictionary for each group in the COREL database. They concluded from their experiments that their method outperforms some other methods in terms of accuracy. The major challenge in their work was that the proposed method is highly supervised, meaning that they need to determine the number of groups before they perform classification. The primary aim of this paper is to design a system for image retrieval based on local feature descriptors using the BoVW model. Most previous BoVW-based image retrieval systems used only one local descriptor, whereas our proposed system uses both SIFT and SURF descriptors. It provides a comparison of the actual performance of those local descriptors with BoVW in the image retrieval field. V. SYSTEM ARCHITECTURE We propose a system for image retrieval based on extracting local features using the BoVW model. The system uses the SIFT or SURF techniques to extract keypoints and compute the descriptor for those keypoints. The K-Means algorithm is used to obtain the visual vocabulary. As shown in Figure 1, the proposed system consists of two stages: a training stage and a testing stage. The training stage of the proposed system proceeds as follows. 1) For each image in the dataset: convert the image to grayscale; resize the image to 300×300 pixels to obtain uniform results; extract image features and associate these characteristics with local descriptors; and cluster the set of these local descriptors using the K-Means algorithm to construct a vocabulary of K clusters. 2) For each feature descriptor in the image: find the nearest visual word from the vocabulary for each feature vector with L2-distance-based matching; compute the Bag-of-Words image descriptor as a normalized histogram of vocabulary words encountered in the image; and save the Bag-of-Words descriptors for all images. At the test stage, for each input image: the input image is pre-processed for keypoint extraction; local descriptors are computed from the pre-processed input image; the Bag-of-Words vector is computed with the algorithm defined above; and, in the matching step, the best results are obtained via SVM classification. Fig. 1. The architecture of the proposed system. A. Preprocessing The preprocessing step consists of converting the image to grayscale and resizing it. Because the local descriptor algorithms deal only with intensity information, the images are converted to grayscale. After that, the images are resized to 300×300 pixels to normalize the results.
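The two stages enumerated above can be sketched end to end as follows. This is a minimal illustrative sketch in Python with OpenCV and scikit-learn, not the C#/EmguCV implementation described in Section VI; it reuses build_vocabulary and bovw_histogram from the earlier sketch, the dataset lists and the linear SVM kernel are assumptions, and cv2.SIFT_create requires a recent OpenCV build (SURF additionally needs the contrib/nonfree modules).

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def preprocess(path):
    """Load an image, convert it to grayscale and resize it to 300x300 pixels."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, (300, 300))

def descriptors_of(path, detector):
    """Detect keypoints and compute their local descriptors for one image."""
    _, desc = detector.detectAndCompute(preprocess(path), None)
    return desc

# --- training stage ---
detector = cv2.SIFT_create()                  # SURF would need cv2.xfeatures2d (nonfree)
train_paths, train_labels = [...], [...]      # placeholder lists for the training images
train_desc = [descriptors_of(p, detector) for p in train_paths]
vocabulary = build_vocabulary(np.vstack(train_desc), k=500)       # from the earlier sketch
X_train = np.array([bovw_histogram(d, vocabulary) for d in train_desc])
classifier = SVC(kernel="linear").fit(X_train, train_labels)

# --- test/query stage ---
def retrieve_class(query_path):
    """Return the predicted class of the query image; database images of that
    class are then returned as the retrieval result."""
    hist = bovw_histogram(descriptors_of(query_path, detector), vocabulary)
    return classifier.predict([hist])[0]
```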
B. Keypoint Detection and Description The most important step in the proposed system is to extract the local descriptors from the processed image. There are many keypoint description techniques, such as Harris, SIFT, and SURF. In this paper, the SIFT and SURF descriptors were chosen in order to test the performance of the proposed system. Once keypoints are extracted from the image, the system computes the local description of each keypoint, as shown in Figure 2. C. BoVW Descriptor In this step, the BoVW model is used to create the vocabulary. First, we find the centroid of the vocabulary that is closest to the feature vector using the Brute-Force matcher method. Then, we calculate the difference between the centroid and the feature vector. Finally, we compute the Bag-of-Words image descriptor as a normalized histogram of vocabulary words. D. Matching and Classification At this stage, the query descriptor is used to match the BoVW descriptors in the database. The nearest neighbor approach was used to retrieve similar images. Finally, SVM classification was used to select the best results, which have the most similarity with the query image. VI. THE PERFORMANCE EVALUATION AND RESULTS A. Dataset The system was evaluated by using two different standard datasets: the Flickr Logos 27 dataset [25] and the Amsterdam Library of Object Images (ALOI) dataset [26]. The Flickr Logos 27 dataset is an annotated logo dataset downloaded from Flickr, and it consists of three image collections/sets. The training set contains 810 annotated images, corresponding to 27 logo classes/brands (30 images for each class). Figure 3 shows some image samples of the training set. The query set consists of 270 images. There are five images for each of the 27 annotated classes, summing up to 135 images that contain logos. Some image samples from the query set are presented in Figure 4. ALOI is a color image collection of one thousand small objects, recorded for scientific purposes under various imaging circumstances (viewing angle, illumination angle, and illumination color). Over a hundred images of each object were recorded, yielding a total of 110,250 images. A large variety of object shapes, transparencies, and surface covers are considered. This makes the database quite interesting for evaluating object-based image retrieval approaches [27]. Some image samples of the training set and query set are presented in Figures 5 and 6. B. Experimental Results The performance of our system was measured using the precision and recall measures. Recall measures the ability of the system to retrieve all the images that are relevant, while precision measures the ability of the system to retrieve only the images that are relevant. Eq. (1) is used to calculate the precision of the retrieval performance: Precision = True Positives / (True Positives + False Positives) (1) where True Positives is the number of images that are correctly retrieved from the image datasets, while False Positives is the number of images that are incorrectly retrieved from the image datasets. In addition, the recall of the retrieval performance was calculated by Eq. (2): Recall = True Positives / (True Positives + Missed) (2) where the Missed parameter is the number of relevant images that are not retrieved. Additionally, Precision-Recall graphs were used to measure the accuracy of our image retrieval system. They are used to evaluate the performance of any search engine.
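Stated as code, Eq. (1) and Eq. (2) amount to the following; this is an illustrative Python restatement of the two formulas, and the function names are ours.

```python
def precision(true_positives, false_positives):
    """Eq. (1): the fraction of retrieved images that are relevant."""
    return true_positives / (true_positives + false_positives)

def recall(true_positives, missed):
    """Eq. (2): the fraction of relevant images that were retrieved,
    where `missed` counts relevant images that were not retrieved."""
    return true_positives / (true_positives + missed)

# Example: 8 correct retrievals, 2 incorrect retrievals, 4 relevant images missed
# precision(8, 2) == 0.8 and recall(8, 4) == 8 / 12
```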
All tests were performed on an HP EliteBook 2740p laptop with an Intel Core i5 2.40 GHz processor, 4 GB RAM, and Windows 7 Ultimate 64-bit as the operating system. The system was implemented in Microsoft Visual Studio 2013 using OpenCV version 2.4.9 for the graphical processing functions and C Sharp for the GUI design, with EmguCV as a wrapper. In the Flickr Logos 27 dataset, ten classes were randomly selected (Google, FedEx, Porsche, Red Bull, Starbucks, Intel, Sprite, DHL, Vodafone, NBC) for the training stage. The total number of images is 300 for the training stage and 50 for the testing stage. In the test stage, each image was queried twice, once using SURF and once using SIFT. Precision and recall values appear directly below the retrieved images, as shown in Figures 7 and 8 (snapshots of our proposed system on the Flickr dataset using the SURF and SIFT techniques, respectively). Table 1 and Figure 9 (Precision-Recall graphs) show the average precision and recall of all images in the test set with 10 classes (5 images for each class). For the ALOI dataset, procedures similar to those conducted for the Flickr Logos 27 dataset were used. Accordingly, ten object images were randomly selected from the ALOI dataset (Big Smurf, Blue girls shoe, Boat, Christmas bear, cow kitchen clock, Green Pringles box, head, pasta and sugo, toy keys, Wooden massage) for the training stage. Therefore, there are 300 object images for the training stage and 50 object images for the testing stage. In the test set, each object was queried twice, once using SURF and once using SIFT. Precision and recall values appear directly as shown in Figures 10 and 11 (snapshots of our proposed system on the ALOI dataset using the SURF and SIFT techniques, respectively). Table 2 and Figure 12 (Precision-Recall graphs) show the average precision and recall of all images in the test set with ten objects. As shown in the results on the Flickr dataset, the SURF algorithm performed better than the SIFT algorithm. The reason is that SURF has a good matching rate compared with SIFT. However, the results of SIFT on the ALOI dataset were better than those of SURF. The reason may be that SIFT is more suitable for objects because it extracts more features; SURF may also not be robust enough under various imaging circumstances. Overall, it seems that both SIFT and SURF can be more suitable depending on the type of dataset. VII. CONCLUSION With advances in multimedia technologies and social networks, CBIR is considered an active research topic. Recent CBIR systems rely on the use of the BoVW model because it enables efficient indexing of local image features. This paper presented a system for CBIR which uses local feature descriptors to produce image signatures that are invariant to rotation and scale. The system combines robust techniques, such as SIFT, SURF, and BoVW, to enhance the retrieval process. In the system, we used the K-Means algorithm to cluster the feature descriptors in order to build a visual vocabulary. In addition, SVM is used as a classifier model to efficiently retrieve more images relevant to the query in the feature space. We compared two different feature descriptor techniques with the BoVW model. Based on the experimental results, it is found that both SIFT and SURF are appropriate depending on the type of dataset used. The performance of the proposed system was evaluated by calculating the precision and recall on two different standard datasets. The experiments demonstrated the efficiency, scalability, and effectiveness of the proposed system. In the future, we intend to study the possibility of improving the system performance using other local descriptors. We will conduct a comparative study of these descriptors with respect to illumination changes, scale changes, and noisy images on other types of standard datasets.
Fig. 2. The local feature extration for one of the used images, (a) The gayscale image, (b) The extracted local features ijacsa.thesai.orgVI.THE PERFORMANCE EVALUATION AND RESULTS Fig. 3 .Fig. 4 . Fig. 3. Some sample images from the Flickr Logos dataset for the training . Fig. 10 . Fig. 10.A snapshot of our proposed system in ALOI dataset using SURF technique Fig. 11 . Fig. 11.A snapshot of our proposed system in ALOI dataset using SIFT technique Table 2 and Figure 12 (Precision-Recall graphs) showing the values of the average of the precision and recall of all images in the test set with ten objects. Fig. 12 . Fig. 12.The graph of the precision and Rcall of each object in ALOI dataset VII.CONCLUSION With advances in the multimedia technologies and the social networks, CBIR is considered an active research topic.Recent CBIR systems rely on the use of the BoVW model for being enables efficient indexing for local image features.This paper presented a system for CBIR, which uses local feature descriptors to produce image signatures that are invariant to rotation and scale.The system combines the robust techniques, such as SIFT, SURF, and BoVW, to enhance the retrieval process.In the system, we used a k-means algorithm to cluster the feature descriptors in order build a visual vocabulary.As well as, SVM is used as a classifier model to retrieve much more images relevant to the query efficiently in the features space. TABLE I . THE AVERAGE OF THE PRECISION AND RECALL OF EACH CLASS (FLICKR LOGOS DATASET) Fig. 9.The graph of the precision and Rcall of each class in Flickr Logos dataset TABLE II . THE AVERAGE OF THE PRECISION AND RECALL OF EACH OBJECT (ALOI DATASET)
v3-fos-license
2017-10-15T03:57:18.623Z
2016-04-08T00:00:00.000
35101617
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.omicsonline.org/open-access/evaluating-a-nonverbal-assessment-tool-in-nursing-students-and-staff-atthe-university-of-botswana-2332-0915-1000164.pdf", "pdf_hash": "58a0bc4c837b8df5f984b9bbf5b1f683774a4b55", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46333", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "sha1": "ee83f892609ff0f6c84e8ca435ee3ebb8c018b06", "year": 2016 }
pes2o/s2orc
Evaluating a Non-verbal Assessment Tool in Nursing Students and Staff at the University of Botswana Evaluating well-being in non-Western populations has been hampered by the fact that most psychometric instruments are not culturally sensitive. One possible way to remove cultural biases is by eliminating the verbal content from the assessment. The Well-Being Picture Scale (WPS) is a ten item conceptual assessment that has been used to evaluate well-being in a variety of populations. The purpose of this study was to examine its utility in a sample of nursing students and staff from the University of Botswana in Gaborone, Botswana. The WPS and a traditional English language based depression scale, the Zung Self-rated Depression Scale (SDS) were distributed to students and staff at the school of nursing; 71 (31 male, 40 female (mean age= 28.2 years) returned the questionnaires. Reliability of the scales was assessed using Cronbach’s alpha. Validity of the WPS was evaluated by examining its sensitivity and specificity using the SDS as a referent, with previously published cut-points denoting either well-being or depression from the scales. The results show that the WPS has good reliability (α=0.863) and that when compared to the SDS depression scale, has excellent specificity in identifying positive well-being, but poor sensitivity in detecting depression. The poor sensitivity could be the result of the WPS being a state indicator, while the SDS is a trait measure, or that sociocultural and linguistic factors are affecting the scale comparisons. Nonetheless, the results suggest that the WPS may be useful as way to measure an emotional state of well-being that is independent of cultural context. Introduction Well-being and depression are invariably linked to how an individual perceives the world through the lens of lived experience, sociocultural setting, and other behavioral and environmental factors [1]. Life histories shape and influence the way people approach their well-being and address illness, by providing a framework through which people filter information. Thus, there is a conceptual link between what is deemed an appropriate response within the confines of sociocultural norms and an individual's response [1]. Nevertheless, there is significant discontinuity in medical and psychiatric care which is often based on Western generalizations of health and well-being without essential cultural context. These generalizations and the impetus to find biological answers for mental health homogenize mental health and reinforce the mind/body dichotomy [2,3]. This trend is what Nancy Scheper-Hughes refers to as being "trapped by the Cartesian legacy, " [3] or the failure of the biomedical community to connect the mind to the body and both to society. Psychology often falls back on insufficient explanatory concepts and terms regarding the "ways in which the mind speaks through the body, and the ways in which society is inscribed on the expectant canvas of human flesh" [3]. Medical Anthropology has worked for decades toward a global and reflexive approach to medical prevention and intervention in biomedicine, with a particular focus on protecting vulnerable populations. Still, "reflexivity requires a certain degree of structural flexibility that allows, or forces, the blending of biomedical paradigms with patient culture and history". 
In order to accommodate the distinctive cultural and situational needs of a patient, particularly within psychiatric frameworks, the field must continue to validate methods of assessment in diverse sociocultural settings. The measurement of well-being in psychology is a relatively recent phenomenon. In the late 1980s, Ryff proposed a six dimensional model of well-being that identifies (1) purpose in life, (2) environmental mastery, (3) positive relationships, (4) personal growth, (5) autonomy, and (6) self-acceptance using a 54 item questionnaire [4][5][6]. These dimensions are a projection of the motivations behind two approaches to defining well-being, the eudaimonic and hedonic. Eudaimonic wellbeing is described as self-fulfillment and the realization of one's own natural strengths, whereas, hedonic well-being is linked to pleasure [4]. Studies have found that among groups in the United States, those who score high on the six dimensional scales which largely focus on eudaimonic well-being tend to have more positive health outcomes suggesting that positive well-being creates optimal physiological functioning [5,6]. It should be noted, however, that much of this positive outcome is also associated with social factors such as socioeconomic status, coping, and social support [4]. The evaluation of well-being using this approach has largely been limited to population groups in the United States. However, recently Curham et al. [7] did a comparative study examining the relationships between well-being and subjective and objective social hierarchies associated with social status and their effect on health outcomes in Japan and the United States. Objective status was defined as social status that is recognized by society (occupation, level of education, etc.), whereas subjective status was based around the individual's "own view of where they stand in the social hierarchy" [7]. Utilizing Ryff's approach, the results showed that certain subjective aspects, which predicted purpose in life and self-acceptance, had higher associations in the U.S. while other objective aspects, which predicted positive relations with others and self-acceptance, were more strongly related in Japan. Overall, the results suggested that the relationship between social hierarchy and individual well-being differed with cultural context [7]. What the results also suggest, however, is that cultural perceptions may have affected the way subjects responded to the questionnaire. While culture should define what induces well-being and the context in which it is expressed, the feeling of well-being itself, like other emotions, is independent of culture and its measurement should reflect that. The Well-Being Picture Scale (WPS) is an assessment tool that was designed as an alternative method to Ference's Human Field Image Metaphor Scale (HFMT) for evaluating well-being in elderly patients who had difficulty with traditional language centered scales [8,9]. The scale was developed to address the difficulties residents were having with the three word metaphors by designing pictures to take their place. Keeping with the Rogerian notion of humans as dynamic energy fields, this conceptual figure-based instrument treats well-being as a reciprocal system of interconnectivity between the individual and their environment [9]. These images represent "well-being relative to self-image" [9] in regard to frequency (intensity), awareness, action, and power or the ability to purposely change [9]. 
The WPS has been validated for use in relation to mood profiles in a number of populations including groups from Taiwan, Japan, Zambia, and the United States where both adults and children have been assessed [9][10][11]. It appears to provide a unique assessment that can circumvent the need for literacy or verbal ability and thus may be independent of cultural context. While the WPS has been examined in relation to mood in several populations, its use in non-Western settings as an indicator of general well-being is limited. Hence, the purpose of this study was to examine the reliability and utility of the WPS to assess general well-being. This was an opportunity to sample a non-Western population, specifically nursing students and staff from the University of Botswana in Gaborone, Botswana. To further assess the validity of the WPS in evaluating well-being, responses were compared to the Zung Self-rated Depression Scale (SDS). Given the reciprocal relationship between well-being and depression, it was hypothesized that there would be an inverse relationship between the scales. Finally, demographic variation in the WPS and SDS responses was also examined. Setting The Republic of Botswana is a landlocked country surrounded by South Africa to the south, Namibia to the west and north, and Zimbabwe to the northeast [12]. The landscape is dominated by the Kalahari Desert, which covers 80% of the country [12]. The nation is home to approximately 2 million people, with 20 different ethnic groups speaking languages that fall into two of the four major language families in Africa: Khoisan and Bantu. While the national language Setswana (Bantu) is spoken by a majority of the population, English is the official language of education and the government [12]. The present study was conducted at the University of Botswana, which became an independent entity with its main campus located in the capital city of Gaborone in 1982, where it has since grown exponentially. The university is divided into Faculties of Business, Education, Engineering and Technology, Health Sciences, Humanities, Medicine, Science, Social Sciences, and the School of Graduate Studies [13]. While the University's total enrollment in 2005-6 was reported at 15,710 students, the School of Health Sciences enrollment was at 346 students [14]. Subjects and Protocol The subjects were Botswanan students, faculty, and staff from the School of Nursing in the Faculty of Health Sciences who volunteered to participate in a survey study designed to evaluate the psychometric properties of the WPS. All signed informed consent, and the project was approved by the Health Ministry Research Unit, Ministry of Health, Botswana, and the Human Subjects Committee at Binghamton University. In total, there were 71 respondents who returned the distributed materials, 31 (43%) male and 40 (56%) female, who all self-identified as black African. Sixty-seven (94%) of the respondents listed their birth country as Botswana; three (4%) reported being born in Zimbabwe and one (1%) reported their birth in South Africa. The average age of the participants was 28.2 years, with the average age for females (31.6 years) being higher than that for males (23.6 years) (p < .001). Some 52% reported student as their social status while professionals (faculty or staff) comprised 21% of the study population; 27% of respondents did not report their occupation. All surveys were distributed as paper copies in April, 2009.
The respondents answered basic demographic questions (age, ethnicity, country of birth, social status (student, faculty, or staff), and number of years of schooling completed) in addition to the WPS and SDS questionnaires (see below). Questionnaires Well-being Picture Scale (WPS): The original Well-Being Picture Scale was designed as a 75 item scale to be utilized in adult populations. A series of developmental phases over the course of a decade brought the WPS down to an 18 item scale, which was trial tested in children by Abbate [9,15]. Abbate found that some of the pictures were difficult for children to identify and modified the scale to a 10 item scale. This version of the picture scale was validated and refined in a number of populations [9] and was utilized in this study. To answer an item on the WPS, the subject places an "x" in one of boxes between the two dichotomous pictures that best described how they conceptualize themselves on that pictorial dimension. Because there are seven possible boxes, each item is scored from 1-7 (rated from the picture on the left to the picture on the right) with items 1,3,5,6,9 and 10 being reverse scaled left to right (7-1) and items 2,4,7, and 8 being scaled 1-7 left to right. Higher numbers indicate a state of better well-being. Total scores (sum of the 10 items) of 50 or greater are indicative of an overall a state of general well-being [9]. A copy of the picture scale is shown in Figure 1. The SDS is a written 20item scale which evaluates depressive symptoms over the previous two weeks [16,17]. It consists of a series of short written or verbal phrases based on self-identification of symptoms [16,17]. Items are answered using a Likert scale with the following choices: "A little of the time", "some of the time", "good part of the time", or 'most of the time. " These items are scored 1-4, and a total for the scale is calculated (ranging from 20 to 80). Higher scores indicate increasing depression. A score greater than 50 suggests the possibility of a depression diagnosis [18]. The SDS has proven to be a useful and successful assessment in determining the presence of depressive symptoms [17]. While the Zung scale has not been validated in Botswana, it has been widely validated in a number of populations and is widely accepted for use as a traditional language based assessment [17]. It should be noted, however, that Campo-Arias et al. 's [17] assessment of the SDS for validation in Columbia found that "sociocultural and linguistic factors" might interfere with the accuracy of the answers [17]. Analysis Of the 71 respondents who returned the surveys, 69 (97%) completed every item on the WPS, and, 43 (61%) completed every item on the SDS. There were 25 (35%) respondents who answered most of the SDS items (17 or more). A total score on the SDS for these respondents was determined after interpolating values for the missing items using the individual's item means. The two respondents who did not complete the WPS also responded to fewer than 15 items on the SDS and one did not return the SDS, as such, these subjects were excluded from the scale evaluations and comparisons. Analyses unless otherwise noted were conducted on data from the 68 (96%) people with complete WPS and either complete or adjusted SDS scores. The survey responses and demographic data were entered into an Excel spreadsheet and then exported to SPSS Version 21 for analysis. 
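The scoring and imputation rules described above can be summarized in a short sketch. The study itself used Excel and SPSS; the Python below is only an illustrative restatement of those rules, and the function names are ours.

```python
import numpy as np

REVERSE_SCALED_WPS_ITEMS = {1, 3, 5, 6, 9, 10}   # scored 7..1 from left to right

def wps_total(box_positions):
    """box_positions: for items 1-10, the box marked with an "x", counted 1-7
    from the left-hand picture. Totals of 50 or greater are taken to indicate
    a state of general well-being."""
    return sum((8 - pos) if item in REVERSE_SCALED_WPS_ITEMS else pos
               for item, pos in enumerate(box_positions, start=1))

def sds_total(item_scores):
    """item_scores: the 20 SDS items scored 1-4, with None for missing items.
    Respondents with at least 17 answered items have missing items replaced by
    their own item mean; totals above 50 suggest possible depression."""
    answered = [s for s in item_scores if s is not None]
    if len(answered) < 17:
        return None                                   # excluded from the scale comparisons
    item_mean = float(np.mean(answered))
    return sum(item_mean if s is None else s for s in item_scores)
```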
Cronbach's Alpha was calculated to assess the reliability and internal consistency of the WPS and SDS scales in the total sample of respondents. A contingency analysis was conducted to further evaluate the relative independence of the two scales. It was hypothesized that the preponderance of respondents scoring above 50 on the WPS would score less than 50 on the SDS scale, indicating that those experiencing general well-being would not report depression. This premise was evaluated as a "sensitivity" and "specificity" assessment in which the WPS was evaluated with regard to its ability to identify "non-depressed" and "depressed" patients (or, in other words, those with positive well-being and those without). To determine if there were differences in well-being and depression by gender, age group (18-24 years; 25-34 years and 35+ years), or occupational status (student, faculty and staff), WPS and SDS scores were compared using separate one-way ANOVAs. Where necessary, post-hoc comparisons were adjusted using the Bonferroni method. Results Cronbach's alpha for the WPS was 0.863 and 0.828 for the SDS. These alphas would indicate that for both questionnaires, there was good internal consistency and reliability in the study group. The results of the contingency analysis are shown in Table 1. Assuming that depression and positive well-being are antithetical, the analysis suggests that the WPS has 96.6% specificity, indicating it does well in identifying non-depressed subjects, or those with positive well-being. However, at the same time, the scale has a sensitivity of 0%, indicating that it does poorly in identifying those who are depressed. Total scores for the WPS ranged from 27 to 70, with 19 participants choosing seven for all items. The average score was 60.47 ± 10.43, indicating that as a group, the respondents exhibited substantial well-being. Comparisons by gender indicate that females had slightly higher scores than males, but the difference was not significant. Likewise, there were no significant differences in score by age group. Differences between students and faculty and staff were also not statistically significant. These mean comparisons are shown in Table 2. Finally, Table 3 shows the comparisons for the SDS. The overall mean score was 36.4 ± 8.49, and scores ranged from 23 to 59. There were significant differences in mean scores based on age, with participants aged 25-34 scoring nearly ten points higher on average than older or younger participants (p = 0.02). Discussion The results show that the WPS had good internal consistency and reliability in this sample (α = .863), and that well-being as reflected in the average scores on the scale did not differ by gender, age or social status. When cross-classified with the SDS with cut points of 50 for the WPS (poor/high well-being) and 50 for the SDS (depressed/not depressed), the WPS demonstrated a high specificity in identifying those that were not depressed (with presumably high well-being), but had low sensitivity in that low WPS scores did not identify those who reported being depressed. The concepts of well-being and depression should be reasonably antithetical, so why did the WPS have such poor sensitivity? One possibility, as suggested by Terwilliger et al. [11], is that the WPS is measuring "in the moment" or, more precisely, is a state indicator, while the SDS may be more of a trait measure, meaning that it focuses on an enduring characteristic of the person.
Thus, the WPS may simply be picking up a momentary state of wellbeing in all persons, whether they are generally depressed or not. Another possibility for the poor sensitivity finding may be that sociocultural and linguistic factors are affecting the scale comparisons. That is, the SDS is constructed using culturally bound written and verbal phrases [17] while the WPS is a conceptually based [9]. The SDS scale may thus be measuring a culturally or socially defined concept of depression while the WPS scale is capturing a culturally unbound state of well-being. Subjects may be defining themselves as depressed within their cultural context, but at the same time have an emotional sense of well-being. The reliability of the WPS in this study is also consistent with that found in other population groups. Among groups of adults from Taiwan, Japan and the United States in which the WPS was evaluated, Cronbach's alpha was above 0.8 (Taiwan: 0.8602; Japan: 0.9129; USA; 0.8266) [9]. Finally, our results are also consistent with that of an earlier study that evaluated depression and well-being among adult men in Botswana [19]. That study examined the impact of globalization on individual well-being through the interplay of self and standard forms of lifestyle aspirations by comparing poor rural-dwelling men with urban well-off men from Gaborone (capital of Botswana). The study specifically tested the premise that the poor rural-dwelling Botswana men would suffer diminished well-being compared to their relatively well-off urban counterparts. The results indicated that failed urban migration among the rural men was associated with high depressive affect and that the rural men exhibited a syndrome that was similar to post traumatic distress disorder, whereas, the urban men exhibited relatively greater well-being. The Gaborone based subjects in the present study also had a high degree of well-being which provides some support for the notion proposed by Decker [19] that participation in globalization contributes to well-being. Although the present study confirms the reliability of the WPS and supports earlier assessments of well-being in Botswana, caution should be used in extrapolating the results. First, the study was conducted on a small, non-random sample of personnel from the School of Nursing at the University of Botswana which limits the generalizability of the findings. Additionally, the study respondents were highly educated students, staff, and faculty with a high level of secondary education who may have a greater understanding of the concept of well-being and the process of assessing it which could have skewed the results toward a report of positive well-being. Further, the WPS was compared to the SDS, and it is possible that had another depression scale [20] been used, the comparative results would change. Nonetheless, the results suggest that the WPS may be useful cross-culturally as a way to measure an emotional state of well-being [21]. Further validation studies using larger samples and a variety of cultural groups with diverse educational levels need to be conducted to improve the understanding of what the WPS is measuring.
v3-fos-license
2018-04-03T04:11:34.966Z
2018-01-01T00:00:00.000
20532068
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2072-6694/10/1/9/pdf", "pdf_hash": "f7d07e5bd3e0277e1b1330166e21b08969d2fedf", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46334", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "sha1": "f7d07e5bd3e0277e1b1330166e21b08969d2fedf", "year": 2018 }
pes2o/s2orc
Current Advances in Aptamers for Cancer Diagnosis and Therapy Nucleic acid aptamers are single-stranded oligonucleotides that interact with target molecules with high affinity and specificity in unique three-dimensional structures. Aptamers are generally isolated by a simple selection process called systematic evolution of ligands by exponential enrichment (SELEX) and then can be chemically synthesized and modified. Because of their high affinity and specificity, aptamers are promising agents for biomarker discovery, as well as cancer diagnosis and therapy. In this review, we present recent progress and challenges in aptamer and SELEX technology and highlight some representative applications of aptamers in cancer therapy. Introduction The conventional anticancer strategies of chemotherapy and radiotherapy are highly effective at killing cancer cells, but they lack target specificity and can also kill healthy noncancerous cells [1], resulting in unwanted side effects, such as nausea and vomiting [2]. Recently, targeted cancer therapies have been designed to reduce potential toxicity and achieve a higher therapeutic index [3,4]. Therapies that use monoclonal antibodies to target tumors are the most successful cancer-targeting treatments [5,6]; however, several limitations, such as high production costs and low penetration of solid tumors, cannot be overlooked [7,8]. Thus, the development of cheaper, more effective targeted therapies is eagerly desired. Nucleic acid aptamers are short single-stranded oligonucleotides that fold into unique three-dimensional structures and bind to a wide range of targets, including proteins [9,10], small molecules [11], metal ions [12][13][14], viruses [15], bacteria [16] and whole cells [17,18], with high specificity and binding affinities (from the low nanomolar to picomolar range) similar to those of antibodies [19]. Aptamers also have advantages compared to antibodies, such as rapid in vitro selection, cell-free chemical synthesis, low immunogenicity and superior tissue penetration because of their smaller size. Aptamers are generally isolated through in vitro selection from oligonucleotide libraries containing random sequences. After target-specific aptamers have been identified, they can be chemically synthesized, modified and optimized for clinical applications. Therefore, aptamers are promising agents for the treatment of human diseases, including cancers, infectious diseases, and heritable diseases. In this review, we discuss recent advances and challenges in the development of aptamers as agents for cancer diagnosis and therapy, with a particular focus on the past five years. SELEX Technology In the 1990s, three independent groups isolated specific RNA aptamers through a selection method called the systematic evolution of ligands by exponential enrichment (SELEX) [9,20,21]. SELEX is generally divided into four steps: incubation, partition, recovery, and amplification ( Figure 1). The selection cycle starts by mixing an initial DNA or RNA library with the target of interest. A library generally consists of up to 10 15 random sequences of 20-60-nucleotides flanked by fixed primer regions at the 5 and 3 ends. After incubation, target-bound sequences are separated from un-bound sequences through various partition strategies. The bound sequences are recovered and re-amplified to generate a new library for the subsequent selection cycle. 
New DNA libraries are directly amplified by PCR, whereas recovered RNA sequences must be reverse transcribed into cDNA before PCR amplification and transcription into a new RNA library for the next cycle. After the selection cycle has been repeated 2-15 times, sequencing analysis is used to identify the specific sequences that have been enriched in the library. To enhance the enrichment of target-bound sequences, the selection stringency can be increased during the selection cycle by manipulating the library-to-target ratio, buffer composition, incubation time, and temperature. Protein-Based SELEX Over the past 27 years, proteins have been the most common targets for aptamers. If the target proteins can be purified, protein-based SELEX can be easily performed in a test tube. One of the most critical steps in protein-based SELEX is partitioning, which involves separating target-bound sequences from unbound sequences. Various methods have been developed for partitioning, including nitrocellulose membrane filtration, affinity and magnetic bead separation, resin chromatography and capillary gel electrophoresis [22]. Although protein-based SELEX has successfully generated a wide variety of aptamers, it may be limited in several circumstances. For example, it is difficult to isolate aptamers that target unknown proteins, insoluble proteins, or proteins that have complex conformations. Furthermore, the surface of living cells is complex, and purified proteins may exist in different conformations than native proteins on the cell surface. Thus, target-specific aptamers isolated by purified protein-based SELEX may fail to recognize their target proteins on the cell surface [23,24].
Whole-Cell-Based SELEX Whole-cell-based SELEX was developed to overcome the limitations of protein-based SELEX [25,26]. In whole-cell-based SELEX, live cells that express the target of interest are used instead of purified protein, enabling the identification of aptamers that can recognize targets in their native conformation. Because protein purification is not necessary prior to selection, whole-cell-SELEX can be applied to uncharacterized target proteins without prior information about their properties and structures [18,27]. Although whole-cell-based SELEX involves the same major steps as conventional protein-based SELEX, whole-cell-based SELEX requires both positive and counter selection in target-positive and target-negative cells, respectively. Counter selection is crucial for removing non-specific binders. After positive and counter selection, aptamers are expected to bind to the cells that express the target but not to the cells that do not express the target. To efficiently enrich target-specific aptamers, the cells used must be healthy. The presence of dead cells can result in enrichment of non-specific binders, delaying the enrichment of target-specific sequences. Therefore, careful recovery of healthy cells that highly express the active target is crucial for successful selection. Some technical approaches, such as fluorescence-activated cell sorting (FACS) [28] and magnetic bead separation [29], have been used to eliminate the risk of non-specific binding to dead cells, optimizing the selection and improving the generation of target-specific aptamers. Live-Animal-Based SELEX Live animals have also been used to directly generate tissue-targeting aptamers in vivo via live-animal-based SELEX [30], or in vivo SELEX [22]. Unlike whole-cell-based SELEX, counter selection is not needed for live-animal-based SELEX. In 2010, Mi et al. generated tumor-targeting aptamers by live-animal-based SELEX in intrahepatic tumor-bearing mice [31]. The authors intravenously injected a random 2′-F-pyrimidine-modified RNA library into mice with intrahepatic colorectal metastases and harvested the tumor-containing liver tissue. The target-bound RNA sequences were extracted, amplified, and used for the next selection cycle. After 14 cycles, the authors isolated RNA aptamers that specifically localized to intrahepatic tumors, one of which bound to RNA helicase p68 that is overexpressed in colorectal cancers. This report demonstrated that live-animal-based SELEX can directly generate aptamers that can be efficiently delivered into tumor tissues in vivo. High-Throughput SELEX Efforts have been made to improve the selection efficiency of SELEX [22,32].
To isolate high-affinity aptamers from random sequences, it is important to ensure a diverse library during selection and to avoid technical bias. In traditional SELEX, the multiple cycles of conventional PCR may accumulate nonspecific byproducts, causing some bias during amplification [33]. For example, some sequences favored by DNA polymerase may be over-enriched during PCR. In contrast, highly structured sequences that are difficult to amplify may eventually be eliminated. Amplification has been improved with novel PCR technologies, such as droplet digital PCR [34] and emulsion PCR [35][36][37], which can reduce the accumulation of byproducts and avoid PCR bias, thus preserving library diversity. After amplification, the PCR products in the final library are generally cloned into E. coli for sequence identification. However, this step is time-consuming and laborious, and the resulting clones are not necessarily representative of the whole population of aptamers. Some infrequent, high-affinity aptamers may be missed due to limited clone number or inefficient cloning. This kind of cloning bias can be avoided by using high-throughput sequencing technology and bioinformatics analysis combined with SELEX (HT-SELEX). HT-SELEX enables visualization of dynamic changes among millions of sequence reads throughout selection, so it is possible to reduce cloning bias and identify high-affinity aptamers during a much earlier selection round. Thus, HT-SELEX can not only save money and time, but also reduce the risk of technical biases. Recent Progress in Aptamer-Based Biosensor Technology Biosensors are analytical devices that can measure the concentration of organic or inorganic targets, called analytes, by generating signals proportional to the analyte. Biosensors are generally composed of four parts: a bioreceptor that detects the analyte, a transducer that converts recognition of the target into a measurable signal, electronics that amplify and the signal, and a display that presents the results to the user [38]. The high specificity of aptamers makes them ideal bioreceptors in aptamer-based biosensors called aptasensors. Aptasensors are superior to antibody-based sensors because of their high affinity and stability, highly modifiable kinetic parameters, relatively fast animal-free development and wide spectrum of targets ranging from small chemicals to whole cells [39]. In addition, aptamers change conformation upon binding, and sensors have been developed that exploit this property for target detection [40]. Aptasensors have the potential for a variety of applications, including detection of foodborne pathogens, chemicals, and disease markers [38]. Several electrochemical, optical, and colorimetric aptasensor methods exist for the detection of cancer. In this section, we will focus on recent advances in aptasensors for cancer detection, with an emphasis on advances from the past year. Electrochemical Aptasensors One of the most common aptasensors is the electrochemical aptasensor. Electrochemical aptasensors have existed since 2004, when Ikebukuro et al. developed a sandwich-style aptasensor to detect the clotting factor thrombin [41]. A simple aptamer sandwich detection system is composed of two aptamers and an electrode surface ( Figure 2). A capturing aptamer conjugated to an electrode surface captures and immobilizes the analyte, and a secondary aptamer, which recognizes a different part of the analyte surface, binds to form an aptamer-analyte-aptamer sandwich. 
The secondary aptamer contains an electroactive label, such as glucose dehydrogenase [41], cadmium sulfide quantum dots [42], or gold nanoparticles (AuNPs) [43], which can be detected by the electrode [38]. Because of their relative simplicity, a number of sandwich-based detection systems have been developed against cancer targets. As an example from the last year, Zhang et al. developed an electrochemical aptasensor using an aptamer against mucin 1 (MUC1), a surface glycan that is highly overexpressed in many cancers. MUC1-expressing cells were bound by MUC1 aptamer conjugated to magnetic beads, followed by capture by a secondary lectin-based nanoprobe functionalized on AuNPs [44]. In this experiment, gold-promoted reduction of silver ions induced voltage changes that, when read through electrochemical stripping analysis, were indicative of MUC1 expression levels, and thus potentially of cancer detection. Additional sandwich-style aptasensor systems targeting cancer markers and cancer cell lines are summarized in Table 1.
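Whatever the read-out, an aptasensor of this kind is typically characterized by a calibration curve relating signal to analyte concentration, which is often well described by a one-site (Langmuir-type) binding isotherm. The short Python sketch below fits such an isotherm to an invented calibration series to recover an apparent Kd and maximal signal; the numbers and the use of SciPy's curve_fit are illustrative assumptions, not a protocol taken from the studies cited above.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration data: analyte concentration (nM) vs. sensor signal
# (arbitrary units). Real data would come from, e.g., voltammetric peak currents.
conc = np.array([0, 1, 2, 5, 10, 20, 50, 100, 200], dtype=float)
signal = np.array([0.02, 0.10, 0.19, 0.40, 0.63, 0.85, 1.10, 1.22, 1.30])

def langmuir(c, s_max, kd, s0):
    """One-site binding isotherm: signal rises hyperbolically toward s0 + s_max."""
    return s0 + s_max * c / (kd + c)

params, _ = curve_fit(langmuir, conc, signal, p0=[1.0, 10.0, 0.0])
s_max, kd, s0 = params
print(f"fitted S_max = {s_max:.2f}, apparent Kd = {kd:.1f} nM, baseline = {s0:.3f}")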
Label-free electrochemical aptasensors have been developed that take advantage of aptamer features, including their conformational change upon target binding, increased resistance caused by double-stranded DNA formation, and decreased signaling when aptamer binding displaces an electroactive group on an electrode [38,45]. Several label-free aptasensors have been developed against cancer targets, especially ones that exploit the conformational change that occurs when aptamers bind their targets (Table 1). Khoshfetrat et al., for example, developed an aptasensor against leukemia cells utilizing the sgc8c aptamer, which targets protein tyrosine kinase 7 (PTK7), a protein highly expressed in the acute lymphoblastic leukemia cell line CCRF-CEM [46]. To provide a signal, ethidium bromide (EB) was introduced and allowed to intercalate into the stem of the aptamer hairpin. When the target (i.e., PTK7) bound to the aptamer, the hairpin structure of the aptamer was disrupted, releasing the intercalating EB and decreasing the electrical signal on a nitrogen-doped graphene nanosheet that was used as an electrode surface [47]. Conformational changes upon aptamer binding can also alter the electronic transfer distance between an aptamer and an electrode. This property has been exploited through the use of modified electroactive aptamers and modified electrodes. For example, Wang et al. used a polyadenine-modified aptamer system to detect MCF-7 breast cancer cells via the voltage drop recognized by differential pulse voltammetry upon target binding [48]. Heydari-Bafooei and Shamszadeh instead modified an electrode by combining reduced graphene, multi-walled carbon nanotubes, and AuNPs, to detect voltage changes based on the conformational changes of aptamers bound to the prostate cancer marker PSA [49]. Additional label-free systems targeting cancer markers and cancer cell lines are summarized in Table 1. Current efforts aim to develop low-cost, portable aptasensor platforms. Microfluidic paper-based analytical devices (µ-PADs) are one potential solution. µ-PADs use grooved hydrophilic paper containing a series of millimeter-sized channels bound by a hydrophobic polymer [78]. Detectors, including metal ions and colorimetric dyes, bound within these grooves, can generate signals from microscopic samples [79,80]. Ma et al. used a metal ion-based µ-PAD detection system to detect both carcinoembryonic antigen (CEA) and MUC1 [81]. In this electronic aptasensor, aptamers against CEA and MUC1 were bound to thiolated complementary capture probes on the surface of a µ-PAD. Upon exposure to the target, aptamers preferentially unbound from the capture probes and bound to the target.
This allowed metal ion-incorporating nanospheres to bind to unoccupied capture probes through their conjugated auxiliary sequences. Conductivity was then measured through [Ru(NH3)6]3+ electronic wires [81]. The development of µ-PAD-based aptasensors for clinical use is promising, as the grooved paper can be produced using a series of low-cost techniques, including wax printing, laser treating, and photolithography, which are increasingly cost-effective and scalable for mass production [78].

Fluorescent Aptasensors

In 1996, a fluorescent aptamer against human neutrophil elastase (HNE) was the first aptamer-based biosensor developed. The HNE fluorescent aptamer was found to be as effective as an antibody at detecting HNE on beads, with the added benefits of faster chemical synthesis and the ability to add functional groups, small size to ease internalization, potential to detect intracellular targets, less off-target binding, and greater storage stability [82]. Since then, several fluorescent aptasensors have been developed for cancer detection. In 2006, Herr et al. developed a fluorescent sandwich system using aptamers conjugated to fluorescent nanoparticles to detect cancer cells. They used magnetic nanoparticles to facilitate extraction of these cells from whole blood samples, providing an effective system for clinical use [83]. In 2009, Chen et al. developed a multiplexed detection system to detect multiple cancer cell targets using aptamer-conjugated Förster resonance energy transfer (FRET) silica nanoparticles [84]. Fluorescent aptasensors can be used to detect not only cancerous cells but cancer markers. One such marker, vascular endothelial growth factor (VEGF), was an early target for detection by fluorescent aptamers. In 2012, Freeman et al. presented a series of optical aptasensor methods based on the conformational change of the anti-VEGF aptamer upon binding to its target. FRET-, chemiluminescence-, and chemiluminescence resonance energy transfer (CRET)-based strategies were used for target visualization [85]. Cho et al. developed a single-step detection method for VEGF165 based on nanoplasmonic sensing, the optical phenomenon in which the intensity of a fluorophore changes when it interacts with the free electrons on the surface of a metal [86]. For this strategy, cyanine (Cy3)-labeled anti-VEGF aptamers were recruited to the surface of AuNPs. Upon binding with VEGF, the aptamers changed conformation, which released them from the AuNPs, causing a significant decrease in fluorescence intensity [86]. Several studies have used FRET aptasensors to detect cancer markers. Hamd-Ghadareh et al. developed an aptamer-based system to detect CA125, a marker of several cancer types, including ovarian cancer, on which this aptamer system was tested. This study used aptamer-carbon dot probes to detect CA125-positive cells and measured the FRET signals caused by the interaction of the carbon dots and AuNPs, which acted as nanoquenchers [87]. Xiao et al. used graphene oxide (GO), which binds to single-stranded DNA, as a FRET quencher in a similar manner [88]. When the analyte was not present, fluorescent labels on the aptamer were brought in close proximity to GO and the signal was quenched, providing very low background signal. Upon contact with the analyte, the fluorescent labels were separated from the quencher and could emit a signal that was proportional to the concentration of the analyte [89]. One emerging trend is the use of nanogels for aptamer delivery and cancer detection.
Iwasaki et al. created 2-methacryloyloxyethyl phosphorylcholine (MPC) nanospheres that incorporated anti-thrombin aptamers during synthesis. Two types of MPC polymers were synthesized independently, one with MPC conjugated to the aptamer and another with MPC conjugated to a strand of DNA complementary to the aptamer strand. The strands combined and self-organized into aptamer-carrying nanospheres. These nanospheres were also able to incorporate fluorescent markers, such as EB, for highly specific detection of the target, which in this case was the cancer marker thrombin [90]. Hu et al. created a simple fluorescent aptamer detection system using the biotin-conjugated aptamer TLS11a against liver cancer cells and streptavidin-conjugated fluorescein isothiocyanate (FITC)-doped silica nanoparticles. The biotin-conjugated aptamers outperformed simpler FITC-conjugated nanoparticles (~90% vs. ~60%) in detection of HepG2 cells via flow cytometry [91]. Shangguan et al. also used TLS11a to make an "activatable" aptamer-based fluorescent probe [92]. In their study, a short 5′ strand was added to the fluorescent tag FAM and a complementary 3′ C-strand was conjugated to the fluorescent quencher Eclipse. The quencher prevented fluorescence emission unless the aptamer came into contact with its target, allowing a conformational change that separated the quencher from the fluorophore [93]. The activatable aptamer approach was taken further by Lei et al., who developed a theranostic method that not only activates a fluorescent signal upon interaction with its target, but induces the release of a drug. In this proof-of-concept study, the conformational change of an aptamer against CCRF-CEM leukemia cells activated a fluorescent probe and released the chemotherapy drug doxorubicin (Dox) for cancer cell-specific drug delivery [94].

Colorimetric Aptasensors

Colorimetric aptasensor assays allow for simple, fast detection of targets ranging from small metal ions [95] to proteins [96]. Several aptamer-based colorimetric assays have been developed for cancer marker detection. Xu et al. targeted the proto-oncogene K-Ras with a colorimetric biosensing system based on a DNA molecular machine [97]. The core of the machine was a hairpin probe that targeted K-Ras and hybridized with a primer-containing polymerization template (PPT) that generated an anti-hemin aptamer. The anti-hemin aptamer activated a DNAzyme that mimicked the action of horseradish peroxidase, catalyzing the activation of the substrate 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) and changing the color of the substrate from colorless to green, detectable by the naked eye [97]. A nanoparticle-based colorimetric aptasensor system was developed by Ahirwar et al. for detecting the human estrogen receptor alpha (ERα), a common marker in breast cancer [98]. The system uses the color-changing properties of gold nanospheres, which interact with light and reflect different colors based on nanosphere size and dispersion rate. Monodispersed gold nanospheres reflect red light and aggregated nanospheres reflect a pale to purple color [99,100]. The aptamer-functionalized gold nanospheres were resistant to salt-induced aggregation until they were exposed to their target, ERα, which caused spontaneous aggregation of the nanospheres, changing the color of the nanosphere clusters from a wine red to a deep blue and allowing visual identification [98].
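For fluorescence-quenching and colorimetric read-outs like these, analytical sensitivity is commonly summarized as a limit of detection, often estimated as three times the standard deviation of blank measurements divided by the slope of the low-concentration calibration line. The Python sketch below works through that 3-sigma/slope convention with invented numbers; it is a generic illustration, not data from the studies cited above.

import statistics

# Hypothetical blank (no-analyte) fluorescence readings and a linear calibration
# series in the low-concentration range (concentration in nM, signal in a.u.).
blanks = [101.2, 99.8, 100.5, 100.9, 99.4, 100.1, 100.7, 99.9, 100.3, 100.6]
conc = [0.5, 1.0, 2.0, 4.0, 8.0]
signal = [112.0, 124.5, 149.0, 197.5, 295.0]

# Least-squares slope of signal vs. concentration (simple closed form).
n = len(conc)
mean_c, mean_s = sum(conc) / n, sum(signal) / n
slope = sum((c - mean_c) * (s - mean_s) for c, s in zip(conc, signal)) / \
        sum((c - mean_c) ** 2 for c in conc)

lod = 3 * statistics.stdev(blanks) / slope   # classic 3-sigma/slope estimate
print(f"slope = {slope:.1f} a.u./nM, estimated LOD ~ {lod:.3f} nM")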
Aptasensors in Clinical Diagnostics

Aptasensors provide a number of advantages for clinical diagnostics, including high specificity and selectivity and relatively low production cost [45]. When compared to traditional antibody-based detection platforms, nucleic acid aptamers are more stable, highly modifiable, and capable of fast animal-free development against a wide spectrum of targets ranging from small chemicals to whole cells [39]. One main disadvantage of aptasensors, which is shared with their antibody-based counterparts, is that they can only detect previously known markers. This could be resolved with significant work into biomarker discovery, with an emphasis on identifying biomarkers that are common across multiple cancer types. An emphasis on early cancer and metastasis markers would also be useful for tracking and early detection of cancer in high-risk patients. Patients with known BRCA mutations, who are prone to ovarian and breast cancer [101], and patients with Cowden syndrome, who are at increased risk for breast, thyroid, and uterine cancers [102], for example, may prove ideal for aptasensor testing. Another limitation arises from the fact that each aptasensor is highly specific to one marker or cell type. Multiplexed aptasensors that detect a panel of cancer markers could be a solution, but a larger number of aptamers must first be developed. Cai et al. pointed out that many aptasensors are optimized for buffer solutions and may not be effective in biological fluids [103], but an increasing number of aptasensors have been shown to detect markers in serum [47,49,74], suggesting that this is a design problem rather than a fundamental technological one. Despite these limitations, aptasensors provide a great opportunity for advances in clinical diagnostics. In cancer, the high sensitivity of aptamers would allow for low-cost and non-invasive cancer testing by detecting minuscule biomarker levels in the blood, urine, or other bodily fluids [104]. A wide range of aptasensors have been developed with sensitivities within an ideal clinical range and with potential for commercial use, as summarized in Table 1 for electrochemical aptasensors and in a number of aptasensor reviews [45,104,105]. Despite this, aptasensors have not yet broken into the field of clinical diagnostics, which is still dominated by immunoassays [104]. However, aptasensors are beginning to enter the market [107]. Although adoption has so far been slow, with further advances in aptasensor technology, cancer biomarker discovery, and aptamer development, aptasensors could challenge traditional immunoassays as the clinical diagnostic method of choice.

Development of Cancer-Specific Aptamers for Diagnosis

Aptamers have been used to detect a variety of cancers by targeting tumor markers, such as nucleolin [108], tenascin [109], prostate-specific membrane antigen (PSMA) [110], MUC1 [111,112], annexin A2 [113], and matrix metalloprotease-9 (MMP-9) [114,115]. In this section, we will highlight advances in aptamer design for the detection of early, metastatic, and multiple cancers, with an emphasis on reports from the past year.

Aptamers for Early Cancer Detection

Early cancer detection drastically increases survival rates and treatment options [116]. Aptamer-based cancer detection systems may enable earlier, more sensitive cancer detection because they are highly specific and only require small quantities of analytes to generate signals.
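Whether such assays are clinically useful for screening depends not only on analytical sensitivity but on diagnostic performance in the screened population. The Python sketch below converts a small validation series into sensitivity, specificity, and the positive predictive value expected at a given prevalence; the counts and the 5% prevalence figure are invented for illustration and are not taken from the studies discussed here.

def diagnostic_metrics(tp, fp, fn, tn, prevalence):
    """Sensitivity/specificity from a 2x2 table, plus PPV at a given prevalence."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # Positive predictive value via Bayes' rule for the screened population.
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    return sensitivity, specificity, ppv

# Hypothetical validation counts for an aptasensor (18/20 cancer sera positive,
# 1/20 healthy sera positive), applied to a high-risk group with 5% prevalence.
sens, spec, ppv = diagnostic_metrics(tp=18, fp=1, fn=2, tn=19, prevalence=0.05)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}, PPV at 5% prevalence {ppv:.0%}")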
Lung cancer would benefit from early diagnosis because it often is not detected until it has progressed to a late stage, when five-year survival rates approach single digits [117,118]. Li et al. isolated six DNA aptamers against lung cancer markers using a modified SELEX technique involving magnetic carboxyl agar beads [118]. During this process, the beads were incubated with clarified mixed serum from healthy individuals for negative selection, followed by positive selection with beads incubated with serum from lung cancer patients. These six aptamers were shown to be highly specific in detecting lung cancer in the serum of 20 lung cancer patients but not of 20 healthy patients. This system was much more sensitive than traditional lung cancer diagnosis methods, thus potentially enabling earlier diagnosis [118]. Gynecological cancers are also difficult to diagnose at an early stage. Tsai et al. used an aptamer-based microfluidic system to capture and detect circulating tumor cells (CTCs), which are generally found at extremely low concentrations and circulate irregularly. By using highly specific aptamers, the system allows for a high rate of CTC discovery, low false positives, and quick detection when compared to antibody-based detection of ovarian cancer [119].

Aptamers Targeting Metastatic Cancer

Patient outcomes can also be improved by detecting metastasis. One of the most common ways of developing aptamers to detect metastatic cells is to perform SELEX between metastatic and non-metastatic variants of established cancer lines. Yuan et al. isolated an aptamer for metastatic colorectal cancer via SELEX by using metastatic colorectal carcinoma LoVo cells for positive selection and non-metastatic colorectal carcinoma SW480 and HT-29 cells for negative selection [120]. The aptamer, fluorescently labeled with cyanine (Cy5), recognized colorectal carcinoma metastases in lymph node tissue with a detection rate of 73.9% and low detection of non-metastatic carcinoma (36.7%) and cancer-adjacent tissues (11.1%) [120]. Duan et al. developed an aptamer, DML-7, for metastatic prostate cancer by using SELEX with the metastatic prostate cancer line DU145 for positive selection and the human prostatic stromal myofibroblast line WPMY-1 for counter selection [121]. This aptamer was then tested on cells that were androgen receptor (AR)-negative (PC-3) and AR-positive (LNCaP and 22Rv1), because ARs may suppress metastatic potential and are often upregulated in patients with metastatic prostate cancer [121,122]. The authors found that DML-7 bound to AR-negative PC-3 cells but not AR-positive LNCaP and 22Rv1 cells. However, the authors could not identify the receptor to which the aptamer bound and also found that it did not bind exclusively to metastatic prostate cells. The aptamer was found to bind to the adenocarcinoma cell line PL45, the human lung adenocarcinoma line A549, and the osteosarcoma cell line U2OS [121]. For better specificity to highly metastatic cells, Chen et al. developed aptamers against hepatocellular carcinoma cells using highly metastatic (HCCLM9) and weakly metastatic (MHCC97L) cell lines derived from the same genetic background. These aptamers bound the highly metastatic line but not any of the other cell lines tested, indicating high specificity for HCCLM9 [123,124]. A more direct approach for detecting metastasis is to directly target metastasis-associated proteins. An RNA aptamer created by Kryza et al.
targets MMP-9 [114], which is overexpressed in tumors and promotes metastasis by degrading the extracellular matrix to facilitate tumor cell invasiveness [125]. The RNA MMP-9 aptamer, F3B, was developed via SELEX against purified hMMP-9 protein using a 2′-F-pyrimidine-modified initial RNA library to make the RNA aptamer resistant to RNase [115]. In the study, the authors created the radiolabeled constructs 99mTc-MAG-F3B and 111In-DOTA-F3B. Biodistribution studies showed that 99mTc-MAG-F3B detected hMMP-9 in mice with A375 melanoma tumors but had high accumulation in the digestive tract. The 111In-DOTA-F3B construct was found to have higher tumor uptake but high accumulation in the kidneys and bladder and low uptake into the digestive tract [114]. MacDonald et al. created a bispecific aptamer targeting epithelial cell adhesion molecule (EpCAM) and transferrin to specifically target brain cancer metastases. The bispecific aptamer had higher binding to metastatic brain cancer cells than EpCAM and transferrin aptamers alone. The aptamers had the added benefit of being able to pass through the blood-brain barrier, an obstacle that often hinders the treatment of brain disorders [126]. Another innovative way to detect metastasis is to identify morphological changes in captured cancer cells. Mansur et al. captured metastatic (MDA-MB-231) and non-metastatic (MCF-7) breast cancer cells with anti-epidermal growth factor receptor (EGFR) aptamers on planar and nanotextured substrates, which exaggerate the morphological characteristics of the cells. The authors compared the shapes and sizes of the captured cells and found that metastatic cells showed a significantly greater change in morphology between nanotextured and planar substrates than did non-metastatic cells [127]. CTCs are also a common target for the detection of metastatic cancers. Several groups have developed methods to detect rare CTCs using AuNPs and plasma mass spectrometry [128], aptamer-linked magnetic particles, aptamer-functionalized microchannels/microstructures [129], and a label-free electrochemical cytosensor targeting overexpressed EpCAM, which is common in CTCs.

Aptamers Targeting Multiple Cancers

MUC1 is a glycoprotein that is overexpressed on the cell surface of most malignant epithelial cancers, including colorectal [130], lung [131], prostate [132], pancreatic [133], ovarian [134], and bladder cancers [135]. It is overexpressed in 0.9 million of the 1.2 million cancers diagnosed in the United States each year [136]. In normal cells, MUC1 provides a barrier between the cells and the environment. In cancer cells, its overexpression enhances invasiveness, metastasis, and resistance to reactive oxygen species [137]. Because of its ubiquity and high expression in cancer, MUC1 is an excellent target for multiple cancers. Several techniques for detecting MUC1 have been in development over the last two decades, including an electrochemical aptasensor (discussed in Section 3.1) [44], a fluorescent aptasensor using GO-based fluorescent quenching (discussed in Section 3.2) [89], an antibody-based nanowire sensor [138] and an aptamer-quantum dot-based detection method [139]. Ma et al. developed a dual-targeting electrochemical aptasensor to simultaneously detect CEA and MUC1 in multiple cancers (discussed in Section 3.1) [81]. As described in Section 3.1, Zhang et al. developed a sandwich-type electrochemical aptasensor to capture MUC1-overexpressing MCF-7 human breast adenocarcinoma cells [44].
This aptasensor system also enabled colorimetric assessment by catalyzing the deposition of silver for naked-eye detection of MCF-7 cells [44]. Santos do Carmo et al. used a technetium-99m-labeled silica-based polymeric nanoparticle loaded with anti-MUC1 aptamers to deliver drugs and radiolabel triple negative breast cancer (TNBC). Biodistribution studies showed that the nanoparticle-aptamer construct was highly absorbed by the intestine (30%), but was also taken up by the tumor (5%), which is a high rate for targeted drug delivery [140]. In contrast to the use of silica nanoparticles, Yu et al. successfully used MUC1 aptamers to deliver anti-cancer paclitaxel-loaded liposomal formulations to MCF-7 cells [141]. One issue raised by Cao et al. is that many of the above-mentioned strategies do not easily detect low-expressing, low-abundance protein biomarkers and require florescent probe modifications or complicated procedures to develop and use these aptasensors [142]. To address these issues, Cao et al. developed a MUC1 detection system using immuno loop-mediated isothermal amplification (Im-LAMP), which uses Bst DNA polymerase and a group of specialized primers for highly selective DNA amplification under isothermal conditions [143]. Their aptamer-based Im-LAMP system consisted of a MUC1 aptamer to capture the target protein, followed by LAMP amplification of the targeting aptamer and measurement via real-time fluorescent PCR [143]. Cancer/testis antigens (CTAgs) are also potential targets for multiple cancers. CTAgs are a group of proteins that are generally only expressed in the immune-privileged testes of adult males. However, CTAgs are highly overexpressed in many cancers, including bladder, prostate, non-small cell lung carcinomas, and melanoma [144]. Several CTAgs, including the melanoma-associated antigen family, synovial sarcoma X antigens, and the immunogenic tumor antigen NY-ESO-1, have been identified as candidates for adaptive immunotherapies and cancer vaccines [145,146]. CTAgs are promising targets for aptamer-based detection of cancer; however, several challenges may slow the development of CTAg-targeted aptamers. One potential concern is that more than 90% of CTAgs are predicted to be intrinsically disordered proteins, meaning they are biologically active but lack a rigid 3D structure [147]. Most CTAgs, however, transition to a rigidly structured shape upon binding to a target, which may make aptamer detection possible [147,148]. Additionally, CTAgs have many alternative splicing forms and post-translational modifications [149,150], which increase their diversity and functional variability but provide a challenge for CTAg aptamer development. As of yet, no aptamers against CTAgs have been developed, but several clinical trials testing cancer vaccines that sensitize the immune system to CTAg-expressing cancer cells are in development [146]. If the obstacles to anti-CTAg aptamer development are overcome, CTAgs may prove useful for aptamer-directed identification and treatment of multiple cancers. Application of Aptamers in Cancer Therapy Aptamer-based cancer therapies can be divided into two major types: (1) target antagonists and (2) delivery vehicles for therapeutic agents. In this section, we focus on aptamers that directly antagonize their pro-cancer targets. To date, two antagonistic aptamers have been evaluated in clinical trials for cancer treatment (Table 2). 
Here, we summarize these clinical trials and highlight recent preclinical studies of several more therapeutic aptamers for cancer treatment. Clinical Trials of Cancer-Targeting Aptamers AS1411 is a 26-nt guanosine-rich G-quadruplex DNA oligonucleotide developed by Antisoma that was the first aptamer to enter clinical trials for cancer treatment. AS1411 was not isolated by SELEX, but instead was discovered in a screen for antiproliferative DNA oligonucleotides [151]. AS1411 shows high affinity for the external domain of nucleolin, which is expressed in the nuclei of all cells, overexpressed on the surfaces of tumor cells, and involved in cell survival, growth and proliferation [152]. After binding to nucleolin, AS1411 is efficiently internalized, even at nanomolar doses [153]. AS1411 inhibits the function of nucleolin in cancer cells and shows great antiproliferative activity in various types of cancers, including lung, prostate, breast, cervical, and colon cancers, as well as malignant melanoma and leukemia. In a phase I clinical trial, AS1411 delivered by continuous infusion at doses up to 40 mg/kg/day specifically inhibited nucleolin without causing serious side effects in a variety of tumor types (ClinicalTrials.gov identifier NCT00881244). In a 2009 phase II clinical trial, AS1411 safely and effectively treated patients with primary refractory or relapsed acute myeloid leukemia (AML) (ClinicalTrials.gov identifier NCT00512083). However, in a subsequent phase II trial for renal cell carcinoma, only one of 35 patients had a response to AS1411 treatment (ClinicalTrials.gov identifier NCT00740441) [154]. Although the underlying mechanism of AS1411 action is not fully understood, the patient who showed a response to AS1411 had mutations in fibroblast growth factor receptor 2 (FGFR2) and mechanistic target of rapamycin (mTOR), suggesting potential pathways and predictive biomarkers for AS1411 treatment. NOX-A12 is a 45-nt L-ribose-based RNA aptamer, known as a Spiegelmer, developed by NOXXON Pharma AG. Spiegelmers are mirror-image oligonucleotides that have high resistance to nucleases [155,156]. NOX-A12 was developed against chemokine C-X-C motif ligand 12 (CXCL12; also known as stromal cell-derived factor-1) and linked to a 40-kDa polyethylene glycol to give it a longer half-life in plasma [157]. CXCL12 binds to CXCR4 and CXCR7 chemokine receptors, which have important roles in tumor proliferation, metastasis, and angiogenesis, as well as regulation of leukemia stem cell migration [158,159]. Because CXCL12/CXCR4/CXCR7chemokine axis activation regulates the pattern of tumor growth and metastatic spread to organs expressing high levels of CXCL12, NOX-A12 was expected to be useful in the treatment of several types of cancers, including multiple myeloma, lung, colorectal and brain cancers [159]. In phase I clinical trials, NOX-A12 was well tolerated (ClinicalTrials.gov identifiers NCT00976378 and NCT01194934). Currently, NOX-A12 is being studied in phase II clinical trials for the treatment of chronic lymphocytic leukemia (ClinicalTrials.gov identifier NCT01486797), relapsed multiple myeloma (ClinicalTrials.gov identifier NCT01521533), and metastatic pancreatic cancer (ClinicalTrials.gov identifier NCT03168139). Recent Progress in Therapeutic Aptamers for Cancer Therapy Aptamers have several advantages over current cancer therapies, such as chemotherapies and monoclonal antibodies. 
Compared to monoclonal antibodies, aptamers have similar target binding affinity and specificity but also several advantages, such as rapid in vitro selection, low immunogenicity, and superior penetration into solid tumor tissue. Because aptamers are obtained by chemical synthesis, they can be easily modified, and their production costs may be lower than those of monoclonal antibodies. Therefore, aptamers have vast potential for therapeutic use. However, aptamers still have several crucial limitations, such as short in vivo duration due to nuclease-mediated degradation and rapid renal filtration, a lack of comprehensive toxicity studies, and some exclusive patents that limit the global distribution of aptamer technology. Despite these challenges, several aptamers have been reported as potential candidates for cancer therapy [160], and recent progress in SELEX technology and lessons learned from the preceding clinical trials are expected to enhance translation to the clinic. In this section, we highlight several studies published in the past five years, which investigate aptamers that show anticancer activity in vivo. Human epidermal growth factor receptor 2 (HER2/ErbB2) is a member of the EGFR family and is overexpressed in various types of cancer, including breast and gastric cancers [161]. Recently, Mahlknecht et al. developed a DNA aptamer targeting HER2 [162]. The HER2 targeting aptamer was generated by SELEX using HER2-specific polyclonal antiserum, extracts of gastric cancer cells and random PCR deletion. After an original 14-nt aptamer was isolated, it was trimerized to improve binding affinity to the target protein. The trimeric (42-nt) aptamer efficiently bound HER2-positive cells, induced internalization and lysosomal degradation of the target protein, and inhibited cancer cell growth. Furthermore, intraperitoneal administration of the trimeric aptamer reduced tumor volume in HER-2-positive cancer xenograft mice. Once highly specific, high affinity monomeric aptamers are obtained, they can be easily multimerized. The multimerization strategy may be expanded to other aptamers to increase their efficacy as therapeutic agents. PSMA is a transmembrane protein that is primarily expressed in prostate tissue and prostate cancers [163]. Because prostate cancer cells overexpress PSMA on the cell surface, it is a promising marker for diagnosis and targeted therapy of prostate cancers [164,165]. Several reports also suggest that PSMA has enzymatic activity related to cancer progression [166][167][168]. Dassie et al. demonstrated that an RNA aptamer targeting PSMA (A9g; 43-nt) inhibits the enzymatic activity of PSMA, reducing prostate cancer cell migration and invasiveness in vitro [169]. When systemically administered in a mouse model of metastatic prostate cancer, A9g selectively targeted PSMA-positive tumors and greatly reduced metastasis without causing significant toxicity. Thus, this aptamer may be used not only as a direct antagonist, but as a dual inhibitor via aptamer-drug conjugates targeting PSMA-positive tumors. Programmed death 1 (PD-1) is an immune checkpoint protein expressed on the surface of T cells that functions as a negative regulator of immune responses [170]. Using protein-based SELEX, Prodeus et al. isolated a 75-nt PD-1-targeting DNA aptamer, MP7 [171]. 
MP7 recognized the extracellular region of PD-1, bound to murine PD-1 with nanomolar affinity, blocked the interaction between PD-1 and programmed death-ligand 1 (PD-L1), and inhibited the suppression of interleukin-2 (IL-2) secretion, which is related to immunosuppression in primary T-cells. PEGylated MP7 reduced tumor growth in mice bearing colon cancer xenografts. However, because MP7 only showed specific binding to murine PD-1, not human PD-1, a human PD-1-targeting aptamer would first need to be developed for use in clinical cancer therapy. Rong et al. reported a DNA aptamer, LY-1, that specifically binds targets on the surface of highly metastatic hepatocellular carcinoma (HCC) [124]. They isolated this aptamer by whole-cell SELEX using two cell lines that have the same genetic background but different metastatic potential, HCCLM9 and MHCC97L (mentioned in Section 4.2). Although the direct target of this aptamer is unknown, LY-1 recognized metastatic HCC with a dissociation constant (Kd) of 167 nM, reduced migration and invasiveness of these cells in vitro, and inhibited tumor growth when administered intraperitoneally to an HCCLM9 mouse xenograft model. This aptamer may serve not only as a therapeutic candidate but also as a molecular probe for metastatic HCC.

Current Use of Aptamers as Cancer-Targeting Agents

Targeted cancer therapy is a potential strategy to lower side effects and enhance the efficacy of anticancer agents. Because aptamers bind to their targets with high affinity and specificity and are effectively internalized into cells, cancer cell-specific aptamers have been conjugated with therapeutic agents and delivery vehicles, including small chemical drugs, oligonucleotides, and nanocarriers, for targeted delivery [4,172,173]. In this section, we discuss studies published during the past five years that have used aptamers as cancer-targeting agents.

Aptamer-Small Compound Conjugates

Aptamer-drug conjugates are especially useful for chemotherapeutic agents that have systemic side effects. Dox, a traditional chemotherapeutic agent that induces cancer cell death by intercalating into DNA, has been used as a model agent for cell-specific aptamer conjugation. Some groups demonstrated that Dox can be non-covalently conjugated to aptamers, via intercalation into their GC-rich regions (Figure 3A), for delivery into specific cells [174][175][176][177]. Over the past five years, several other groups have reported novel types of aptamer-Dox conjugates. Wen et al. isolated a CD38-targeting DNA aptamer and non-covalently conjugated Dox to it in a CG-repeat structure, termed CG-cargo (Figure 3B) [178]. Using the CG-repeat structure, the aptamer-Dox conjugate formed with a 1:5 molar ratio of aptamer to Dox. When systemically administered to multiple myeloma-bearing mice, the conjugate specifically released Dox in tumor cells, inhibited tumor growth and improved the survival of the mice. CG-cargo can carry a high payload of Dox and may be conjugated to other aptamers to target different cancers. Trinh et al. generated a drug-DNA adduct called AS1411-Dox by crosslinking Dox and AS1411 with formaldehyde at 10 °C overnight [179]. When systemically injected into hepatocellular carcinoma-bearing mice, AS1411-Dox inhibited tumor growth without causing severe toxicity to non-tumor tissues.
Generation of the adduct was simple and inexpensive, suggesting that it may be widely used, particularly in developing countries, to produce other aptamer-Dox conjugates that will reduce the systemic toxicity of Dox. To further improve the targeting ability of monovalent PSMA aptamer-Dox conjugates, Boyacioglu et al. developed a dimeric PSMA aptamer complex (DAC) bound to Dox (Figure 3C) [180]. The PSMA aptamers were synthesized with either A16 or T16 tails at their 3′ termini. DACs were prepared by mixing the A16- and T16-tailed PSMA aptamers at a 1:1 ratio. Dox was covalently conjugated to the CpG sequences in DACs through a pH-sensitive linker, so the DAC-Dox conjugates were stable under physiological conditions but dissociated after internalization into PSMA-positive cells. As a result, the DAC conjugate specifically inhibited growth of PSMA-positive cells. In addition to enhancing Dox-induced therapeutic toxicity to the targeted cancer cells, DACs may improve the pharmacokinetic properties (e.g., circulation time and half-life) of aptamer-Dox conjugates in vivo due to their increased molecular weight. Covalent conjugation to aptamers has also been used to target other chemotherapy agents to cancer cells. For example, Zhao et al. developed a cell-specific aptamer-methotrexate (MTX) conjugate to specifically inhibit AML [181]. They first isolated a DNA aptamer targeting CD117, which is highly expressed on AML cells. The DNA aptamer, which contains a G-quadruplex structure, was covalently conjugated with MTX via an amine coupling reaction using N-hydroxysuccinimide (NHS). The CD117 aptamer-MTX conjugate specifically inhibited AML cell growth.

Aptamer-Therapeutic Oligonucleotide Conjugates

Like aptamers, several other types of oligonucleotides, including small interfering RNA (siRNA), micro RNA (miRNA), and anti-miRNA (antimiR), are attractive as therapeutic agents because they can modulate the expression of specific cancer targets, including undruggable oncogenes that could not be targeted pharmacologically [182]. Because these oligonucleotides function inside cells, it is important to deliver them efficiently. Since the first report of PSMA aptamer-siRNA chimeras in 2006 [183], many aptamer-oligonucleotide conjugates have been developed for anticancer therapy [4,105,184]. Among the various therapeutic oligonucleotide conjugates, siRNA conjugates are the most popular. Recently, several groups demonstrated that EpCAM-targeting aptamers are good tools for siRNA delivery into epithelial cancers and cancer stem cells. EpCAM is a tumor-associated antigen that is highly expressed on epithelial cancers and their associated cancer stem cells [185]. Subramanian et al. reported very strong tumor regression after injection of an EpCAM-targeting aptamer-siRNA conjugate in an MCF-7 epithelial cancer xenograft model [186]. EpCAM-AsiC is an EpCAM aptamer covalently linked to a polo-like kinase (PLK1)-specific siRNA sense strand annealed to its antisense strand (Figure 4A). Gilboa-Geffen et al. showed that EpCAM-AsiC is specifically taken up by EpCAM-positive cancer cell lines and in human EpCAM-positive breast cancer biopsies, where it silences the expression of PLK1 [187]. EpCAM-AsiC subcutaneously administered at 5 mg/kg every three days for two weeks suppressed cancer growth in an EpCAM-positive TNBC xenograft model.
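How completely a systemically dosed aptamer or aptamer conjugate engages its cell-surface target depends on the free aptamer concentration relative to its dissociation constant. The Python sketch below evaluates the simple 1:1 occupancy relationship, fraction bound = [L]/(Kd + [L]), using the ~167 nM Kd reported for LY-1 above and a few invented free-aptamer concentrations; it ignores target depletion, avidity effects, internalization, and clearance, so it is only a back-of-the-envelope illustration rather than a pharmacokinetic model from the cited studies.

def fraction_bound(conc_nM: float, kd_nM: float) -> float:
    """Equilibrium target occupancy for 1:1 binding: [L] / (Kd + [L])."""
    return conc_nM / (kd_nM + conc_nM)

kd = 167.0  # nM; dissociation constant reported for the LY-1 aptamer above
for conc in (10, 50, 167, 500, 2000):  # hypothetical free aptamer concentrations (nM)
    print(f"{conc:>5} nM free aptamer -> {fraction_bound(conc, kd):.0%} of target bound")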
Survivin, a member of the inhibitor of apoptosis (IAP) protein family that inhibits caspases and blocks cell death, is overexpressed in the cancer stem cell population of Dox-resistant breast cancer cells. Wang et al. demonstrated that an EpCAM aptamer-survivin siRNA chimera specifically targeted cancer stem cells in a mouse xenograft model and induced survivin knockdown, thereby reversing Dox resistance [188]. Collectively, these studies show that EpCAM aptamers can effectively deliver anticancer agents and reverse chemoresistance to kill cancer stem cells. To enhance the pharmacological efficacy of siRNA-based anticancer therapeutics, multi-functional and multi-targeting strategies have recently been used. In 2013, Zhou et al. developed dual-functional B-cell-activating factor receptor (BAFF-R) aptamer-siRNA conjugates for B-cell malignancies [189].
They isolated several BAFF-R aptamers that could efficiently bind to BAFF-R on the surface of B-cells and compete with BAFF to inhibit BAFF-mediated B-cell proliferation. They further conjugated the inhibitory BAFF-R aptamer with an siRNA against human STAT3 to achieve a dual inhibitory effect. STAT3 plays an important role in promoting the progression of human cancers, including several types of B-cell lymphoma. Two different types of BAFF-R aptamer-STAT3 siRNA conjugates, a covalent aptamer-siRNA chimera and a non-covalent aptamer-stick-siRNA conjugate (Figure 4B), blocked BAFF-mediated signaling and specifically decreased the expression of STAT3 in human B-cell lines. Multi-functional aptamer-siRNA conjugates, in which both the aptamer and siRNA can suppress their corresponding targets, may be more effective than single-function conjugates at controlling tumor progression. More recently, Liu et al. reported a bivalent aptamer-dual siRNA chimera for prostate cancer (Figure 4C) [190]. This aptamer-siRNA chimera consists of two PSMA aptamers and two siRNAs targeting EGFR and survivin. The chimera reduced the expression of EGFR and survivin, induced apoptosis, and effectively suppressed tumor growth and angiogenesis in a prostate cancer xenograft model. Compared to monovalent conjugates in which only one aptamer delivers one siRNA, bivalent conjugates improve aptamer-mediated targeting avidity and increase siRNA cargoes for improved gene silencing. Zhang et al. developed RNA nanoparticles containing a three-way junction motif derived from bacteriophage phi29 packaging RNA (pRNA) bearing a HER2-targeting RNA aptamer and two different siRNAs targeting the estrogen receptor coactivator Mediator Subunit 1 (MED1) to overcome tamoxifen-resistant breast cancer [191]. These multi-functional nanoparticles (pRNA-HER2apt-siMED1) (Figure 4D) specifically bound to HER2-positive cells and inhibited MED1 expression and cell growth. More importantly, pRNA-HER2apt-siMED1 efficiently reduced the growth and metastasis of breast cancer cells and sensitized them to tamoxifen treatment after systemic administration in a xenograft mouse model. These pRNA-based nanoparticles were stable and exhibited a favorable pharmacokinetic profile with multi-functional properties, such as targeted co-delivery of various therapeutics, suggesting feasible translation for clinical use. In addition to delivering anticancer therapeutics, aptamer-siRNA conjugates have potential roles in cancer immunotherapy. Cytotoxic T lymphocyte-associated antigen 4 (CTLA4) is a cell surface receptor that functions as an immune checkpoint and downregulates immune responses [192]. Herrmann et al. developed a CTLA4 aptamer-STAT3 siRNA conjugate [193]. When locally or systemically administered, this conjugate significantly reduced the number of tumor-associated regulatory T cells and inhibited the growth of various tumors, including melanoma, renal cell carcinoma, colon carcinoma and human T cell lymphoma, in mice. More recently, Rajagopalan et al. showed that a 4-1BB aptamer-CD25 siRNA conjugate has potential utility for cancer immunotherapy [194]. 4-1BB is a major immune stimulatory receptor expressed on the surface of CD8+ T cells [195], and CD25 is the alpha subunit of the IL-2 receptor, which has an important role in the differentiation of CD8+ T cells [196]. Intravenous injection of the 4-1BB aptamer-CD25 siRNA conjugate reduced CD25 expression and downregulated IL-2 signaling in circulating CD8+ T cells in mice.
Furthermore, administration of this conjugate improved the antitumor activity induced by vaccines and radiation. Similarly to siRNA, other types of oligonucleotides, such as miRNA, antimiR, small activating RNA (saRNA), and decoy DNA, have been conjugated to cell-specific aptamers. The tyrosine kinase receptor Axl, which is overexpressed in several human cancers, is closely related to invasiveness and therapeutic resistance [197]. Recently, an RNA aptamer, GL21.T, has been identified that specifically binds to and antagonizes Axl [198]. Recent studies have demonstrated the flexible use of this aptamer in aptamer-therapeutic oligonucleotide conjugates. Esposito et al. developed a dual-functioning aptamer-miRNA conjugate termed GL21.T-let [199]. The conjugate consists of GL21.T and the anti-oncogenic miRNA let-7g. When systemically injected into mice bearing Axl-positive or -negative lung cancer, GL21.T-let specifically delivered let-7g into Axl-positive tumors and inhibited their growth, but did not inhibit the growth of Axl-negative tumors. Iaboni et al. demonstrated that GL21.T-conjugated miR-212 enhances sensitization to TNF-related apoptosis-inducing ligand (TRAIL) in human non-small cell lung cancer cells [200]. Catuogno et al. demonstrated that GL21.T can deliver antimiRs as well [201]. They non-covalently conjugated GL21.T to the tumor-suppressive antimiR-222 through a stick sequence (Figure 4E). This conjugate showed synergistic inhibition of cell migration and enhanced sensitivity to temozolomide (TMZ) treatment of Axl-positive cells. To further enhance the therapeutic potential of aptamer-antimiR conjugates, the same group non-covalently conjugated GL21.T to two antimiRs, antimiR-222 and antimiR-10b, which target their corresponding oncomiRs (Figure 4F). This multi-functional conjugate reduced the expression of both miR-222 and miR-10b. Esposito et al. studied a combination of aptamer-miRNA and aptamer-antimiR conjugates [202]. They used GL21.T and a platelet-derived growth factor receptor aptamer (Gint4.T) as carriers for miR-137 and antimiR-10b, respectively, and demonstrated that these conjugates synergize to inhibit the growth and migration of glioblastoma stem-like cells. Shu et al. conjugated antimiR-21 and Alexa-647 dye to an EGFR aptamer in a three-way junction pRNA (Figure 4G) [203]. When intravenously administered to TNBC-bearing mice, the RNA complexes were specifically delivered to TNBC cells, where they reduced the expression of miR-21, increased the expression of downstream target mRNAs, and inhibited tumor growth. saRNA is a novel type of small double-stranded RNA that targets specific sequences in promoter regions to upregulate the expression of target genes. Downregulation or mutation of CCAAT/enhancer-binding protein-α (C/EBPα) has been reported to be associated with tumor aggressiveness [204], suggesting that it may be an important target in cancers such as liver and pancreatic cancer. Yoon et al. isolated two RNA aptamers that specifically target pancreatic adenocarcinoma cells and conjugated them to an saRNA targeting C/EBPα [205]. Both conjugates induced expression of C/EBPα and inhibited cell proliferation in pancreatic adenocarcinoma cell lines. These conjugates also reduced tumor growth when systemically administered to human pancreas cancer xenografts. Porciani et al. combined the concepts of aptamer-small drug and aptamer-oligonucleotide conjugates [206]. They developed an RNA aptamer-DNA decoy chimera and loaded Dox into the GC-rich region.
The chimera consisted of an RNA aptamer targeting the transferrin receptor and a DNA decoy for nuclear factor κB (NF-κB) (Figure 4H). Secondary structure prediction suggested that six to seven Dox molecules could bind to the GC-rich region in the tail/anti-tail. This multiple conjugate was effectively internalized and released Dox in tumor cells. Moreover, the conjugate inhibited NF-κB activity and enhanced Dox-induced apoptosis in pancreatic tumor cells. The therapeutic payload can also be a nucleic acid aptamer, resulting in bi-specific aptamer conjugates for cancer immunotherapy with increased targeting and specificity [207][208][209]. For example, Boltz et al. isolated CD16α-specific DNA aptamers that specifically recognize natural killer (NK) cells. By rationally conjugating a CD16α aptamer with a c-Met aptamer, they generated bi-specific aptamer conjugates (Figure 4I) [210]. The bi-specific aptamer conjugate simultaneously bound to CD16α-expressing NK cells and c-Met-overexpressing tumor cells, specifically recruiting NK cells to tumor cells and consequently inducing tumor cell lysis. In another study, Gilboa et al. developed a bivalent 4-1BB aptamer that functions as an agonist, thereby promoting tumor immunity in mice. 4-1BB is a major co-stimulatory receptor expressed on activated CD8+ T cells. Because non-targeted activation may cause some unwanted toxicities, the same group constructed a PSMA aptamer-dimeric 4-1BB aptamer conjugate (Figure 4J) [211]. Compared to 4-1BB antibodies or non-targeting dimeric 4-1BB aptamers, the resulting bi-specific aptamer conjugate effectively induced co-stimulation at the PSMA-expressing tumor site at a reduced dosage. Taken together, these studies suggest that aptamers hold great promise for cancer immunotherapy.

Figure 4. (A) Aptamer-siRNA chimera. An EpCAM aptamer is covalently linked to a PLK1-specific siRNA sense strand; (B) Aptamer-stick-siRNA conjugates. One of the two STAT3 siRNA strands is linked to the 3′ end of a BAFF-R aptamer through a "sticky bridge" sequence; (C) Bivalent PSMA aptamer-dual siRNA chimera. Two PSMA aptamers flank siRNAs specific to survivin and EGFR; (D) Three-way junction pRNA nanoparticle. pRNA-HER2apt-siMED1 consists of a HER2-targeting RNA aptamer and two different siRNAs targeting MED1 connected by a three-way junction pRNA; (E) Aptamer-antimiR conjugates. The aptamer GL21.T is non-covalently conjugated to antimiR-222 through a stick sequence; (F) Aptamer-dual antimiR conjugate. GL21.T is non-covalently conjugated to antimiR-10b and antimiR-222 through a stick sequence; (G) Three-way junction-EGFR aptamer-antimiR-21 nanoparticle. The nanoparticle consists of four strands bearing an EGFR aptamer, Alexa647 and antimiR-21; (H) Aptamer-decoy-Dox complex. The 3′ end of the anti-transferrin receptor RNA aptamer is elongated with a short DNA tail (CGA)7 complementary to a DNA anti-tail (TCG)7 that is conjugated to the 3′ end of the NF-κB decoy by a disulfide linker. The GC-rich region in the tail/anti-tail is a putative Dox binding site; (I) Bi-specific aptamers as a cell engager. A CD16α-specific DNA aptamer is covalently conjugated to a c-Met aptamer; (J) PSMA aptamer-dimeric 4-1BB aptamer conjugates. A PSMA aptamer is non-covalently conjugated to a dimeric 4-1BB aptamer.
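The "six to seven Dox molecules" estimate for the (CGA)7/(TCG)7 tail in panel (H) can be rationalized by simply counting 5'-CG-3' dinucleotide steps, the preferred Dox intercalation sites, in the duplex. The Python sketch below does exactly that; it is a crude counting heuristic for illustration only, not the secondary-structure-based method used by the authors, and it ignores effects such as neighbor exclusion that can reduce the number of usable sites.

def count_cg_steps(seq: str) -> int:
    """Count 5'-CG-3' dinucleotide steps, a crude proxy for Dox intercalation sites."""
    seq = seq.upper()
    return sum(1 for i in range(len(seq) - 1) if seq[i:i + 2] == "CG")

tail = "CGA" * 7       # DNA tail appended to the aptamer (Figure 4H)
anti_tail = "TCG" * 7  # complementary anti-tail carrying the NF-kB decoy

# A CpG step is self-complementary, so the anti-tail gives the same count.
sites = count_cg_steps(tail)
assert sites == count_cg_steps(anti_tail)
print(f"{sites} CpG steps available for intercalation "
      f"(in line with the reported estimate of six to seven Dox molecules)")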
Aptamer-Related Nanoparticles

In addition to delivering directly conjugated therapeutic drugs, aptamers can be used to deliver and functionalize nanomaterials, including liposomes, AuNPs, polymers, and dendrimers. Among these nanomaterials, liposomes are promising carriers for anticancer chemotherapeutic agents because they specifically target tumor tissue and reduce the systemic toxicities of anticancer drugs [212]. Doxil is a PEGylated liposomal Dox (PL-Dox) that is approved for clinical use by the US Food and Drug Administration [213]. Although Doxil effectively reduces the side effects of Dox, there is still room to improve drug efficacy because Doxil is passively targeted to tumors and lacks a targeting agent. Recently, several groups reported aptamer conjugation techniques for targeted delivery of PL-Dox. Xing et al. attached the nucleolin-targeting aptamer AS1411 to PL-Dox (Figure 5A) [214]. As compared to a non-targeting liposome, the AS1411-conjugated liposome increased cellular uptake and cytotoxicity of Dox in breast cancer cells. Furthermore, when intratumorally injected into breast-cancer-xenograft mice, the AS1411-conjugated liposome enhanced the antitumor efficacy of Dox compared to a non-targeting liposome. The same group also optimized the length and composition of spacer molecules on the surface of AS1411-conjugated liposomes, and showed that spacer length is an important factor for the targeting ability of the AS1411 aptamer [215]. Baek et al. conjugated a PSMA-specific RNA aptamer to a PEGylated liposome, and called it an "aptamosome" (Figure 5B) [216]. A Dox-encapsulating aptamosome showed specific binding to, uptake by, and cytotoxicity against PSMA-positive prostate cancer cells. When intravenously administered to prostate-cancer-xenograft mice, the Dox-encapsulating aptamosomes specifically accumulated in, were retained by, and inhibited the growth of tumor tissues. Moosavian et al. conjugated TSA14, an RNA aptamer that specifically targets breast cancer cells, to the surface of PL-Dox [217]. Compared to aptamer-unmodified PL-Dox, aptamer-conjugated PL-Dox showed improved cellular uptake and cytotoxicity in breast cancer cell lines.
When intravenously injected into breast cancer xenograft mice, aptamer-conjugated PL-Dox accumulated in tumor tissue and enhanced inhibition of tumor growth compared to non-targeting liposomes. Recent studies have also focused on AuNPs as aptamer-functionalized carriers because of their favorable characteristics, such as biocompatibility, low toxicity, and large surface area for decoration. Over the past five years, several groups have reported sophisticated aptamer-AuNP systems for cancer therapy. To enhance the tumor targeting of hollow gold nanospheres (HAuNS), Zhao et al. covalently conjugated HAuNS to RNA aptamers specific for CD30, a diagnostic biomarker for Hodgkin's lymphoma and anaplastic large cell lymphoma [219]. The Dox-loaded aptamer-conjugated HAuNS (Apt-HAuNS-Dox) showed specific binding to CD30-positive lymphoma cells. In addition to selective binding, Apt-HAuNS-Dox effectively released the loaded Dox into the target cells and into a low-pH solution similar to the lysosomal environment. Most importantly, this pH-sensitive Apt-HAuNS-Dox selectively killed the CD30-positive lymphoma cells in mixed cultures with CD30-negative cells. Danesh et al. reported the use of AuNPs conjugated to PTK7 aptamers for selective delivery of daunorubicin (Dau) to T cell acute lymphoblastic leukemia [220]. The PTK7-targeting aptamer, sgc8c, was conjugated to AuNPs, and Dau was loaded onto the surface of the AuNPs and intercalated into the sgc8c aptamers (Figure 5C). The Apt-Dau-AuNP complexes effectively released Dau in an acidic environment and showed selective internalization and cytotoxicity to the targeted T cell line. The same group enhanced the therapeutic efficacy of the Apt-Dau-AuNP complex by using two kinds of aptamers, sgc8c and AS1411, to target cancer cells [221]. The polyvalent aptamer-conjugated AuNPs showed pH-dependent Dau release and selective internalization into target cells. As compared to the sgc8c aptamer-conjugated complex, the polyvalent Apt-Dau-AuNP complex was more cytotoxic to target cells and less cytotoxic to non-target cells.
Figure 5. (A) Cholesterol-modified nucleolin aptamers (AS1411) are immobilized onto the surface of a PEGylated liposome; (B) An aptamosome. A PSMA aptamer is conjugated to a PEGylated liposome by annealing to linker DNA modified with FITC and covalently conjugated to the termini of PEG; (C) Apt-Dau-AuNP complex. A PTK7-targeting aptamer (sgc8c) is conjugated to AuNPs simply by mixing them. Dau is loaded onto the surface of the AuNP and intercalated into the sgc8c aptamers; (D) Co-drug-loaded aptamer-conjugated AuNP. AS1411 is extended with a 27-base T6(CGATCGA)3 sequence at the 3′ end. After hybridization with the complementary sequence 5′ thiol-T10(TCGATCG)3, the double-stranded AS1411 is immobilized onto the surface of the AuNP. Dox molecules are loaded onto the CG-rich region within the extended sequence. The photosensitizer TMPyP4 is non-covalently attached to the AS1411-conjugated AuNP; (E) Aptamer-dsDNA and Dox nanoparticles. The aptamer part forms a dimeric G-quadruplex nanostructure. The dsDNA consists of a GC-rich region to deliver the Dox payload; (F) Apts-Dendrimer-Epi complex. The DNA dendrimer is prepared by mixing several ssDNAs containing ATP aptamers. Epi is loaded onto the dendrimer by mixing, and MUC1 and AS1411 aptamers are non-covalently conjugated to the dendrimer-Epi encapsulate.
Dual-targeting AuNPs were also developed by Chen et al. They conjugated gold nanoclusters with the nucleolin aptamer AS1411 and the cyclic peptide RGD (cRGD), a ligand of the integrin alpha-v beta 3, which is overexpressed on the surface of tumor cells [222]. The dual-targeting AuNP (AuNC-cRGD-Apt) was functionalized with a near-infrared fluorescence dye for tumor imaging or Dox loading for tumor therapy. AuNC-cRGD-Apt was efficiently internalized and delivered Dox to nuclei in target cells. When systemically injected in malignant glioma xenograft mice, the dual-targeting AuNP accumulated in tumor tissue and inhibited tumor growth without causing severe toxicity in non-tumor tissues. In addition to multi-targeted delivery, aptamer-conjugated AuNPs have been developed for multidrug delivery as well. Shiao et al. non-covalently attached the photosensitizer 5,10,15,20-tetrakis(1-methylpyridinium-4-yl)porphyrin (TMPyP4) and Dox to AS1411-conjugated AuNPs (Figure 5D) [223]. When this co-drug complex was delivered into nucleolin-positive cells, light exposure induced production of reactive oxygen species, release of Dox, and synergistic cytotoxicity. As mentioned above, the use of multi-functional aptamer-based nanoparticles is a potential strategy to enhance drug efficacy and reduce the side effects of cancer treatment. Recently, some groups have also developed multi-functional aptamer-based nanoparticles using oligonucleotides that can self-assemble into three-dimensional nanostructures. Liu et al. developed a multi-functional DNA nanostructure consisting of a dimeric nucleolin aptamer and GC-rich dsDNA for Dox delivery to resistant cancer cells (Figure 5E) [224]. Dox was efficiently intercalated into the GC-rich dsDNA region, and the DNA nanoparticles enhanced the Dox sensitivity of resistant breast cancer cells, perhaps in part by inducing S-phase arrest, increasing cellular uptake, and decreasing efflux of Dox. When intravenously injected into a Dox-resistant breast cancer xenograft model, the DNA complex strongly inhibited tumor growth and reduced cardiotoxicity compared to free Dox treatment. Taghdisi et al. attached three different aptamers targeting MUC1, nucleolin and ATP to a DNA dendrimer for targeted delivery of epirubicin (Epi) [225]. This Apts-Dendrimer-Epi complex (Figure 5F) was specifically internalized by and cytotoxic to target tumor cells. When intravenously injected in colon carcinoma xenograft mice, the Apts-Dendrimer-Epi complex strongly reduced tumor growth.
Conclusions and Perspective. Based on their unique characteristics, such as fast in vitro selection, facile chemical synthesis and site-specific modification, relatively small physical size, high thermal stability, and better tissue penetration, a number of aptamers have been developed as versatile tools for cancer imaging, diagnosis and therapy. In this review, we have focused on the progress of aptamer technology in the field of cancer over the past five years. During this time, the aptamer selection process has been improved to identify high-affinity target-specific aptamers, and high-throughput technologies will continue to reduce the time required for isolation of aptamers. To date, although only two aptamer-based cancer therapeutics have undergone clinical trials, several more aptamers have shown great potential for cancer imaging, diagnosis, and therapy. Use of covalent or non-covalent conjugation strategies allows different aptamers to serve as easily exchanged building blocks for functionalizing other therapeutic agents. This feature may greatly facilitate their clinical translation. In particular, multi-specific nucleic acid aptamer-based immunotherapeutic modalities are being exploited to engage cancer-specific immunity and eliminate tumor cells. Although bi-specific antibodies, such as BiTE, are currently used for this purpose, there are several concerns with clinical translation of antibody-based immunotherapeutics, including significant autoimmune toxicities associated with repeated administration, and the high cost and complexity associated with bi-specific antibody conjugate development and production in cGMP-standard facilities. A nucleic acid aptamer-based platform is superior to current antibody-based strategies, as aptamers offer: (1) better tissue penetration; (2) lack of immunogenicity; (3) faster target accumulation and shortened body clearance, enabling the use of shorter-lived radioisotopes; (4) simpler, better controlled, and thus less expensive chemical production; (5) lack of aggregation issues; (6) amenability to a variety of chemical modifications that are needed for production and storage, such as pH changes or elevated temperature. Aptamers are becoming increasingly common as therapeutics; as of May 2016, there were ten aptamers being investigated for clinical applications, and one had received FDA approval. Nevertheless, some practical hurdles, such as the high cost of modified oligonucleotides and insufficient survival in vivo due to nuclease-mediated degradation and rapid renal filtration, still hinder the development of aptamer-based tools.
Addressing these challenges will improve the versatility of aptamers for cancer treatment in the future.
Oxaliplatin but Not Irinotecan Impairs Posthepatectomy Liver Regeneration in a Murine Model
Introduction. We examined the murine hepatectomy model of liver regeneration (LR) in the setting of neoadjuvant chemotherapy. Methods. C57BL/6 mice were randomized to receive neoadjuvant intraperitoneal (IP) injections of a control, oxaliplatin (15 mg/kg), or irinotecan (100 mg/kg or 250 mg/kg) solution. Hepatectomy (70%) was performed 14 days after the final IP treatment. Animals were sacrificed at postoperative day (D) 0, 1, 2, 3, and 7. Liver remnants and serum were collected for analysis. T-tests for independent samples were used for statistical comparisons. Results. For oxaliplatin, percent LR did not differ at D1 or D2 but was significantly less at D3 (89.0% versus 70.0%, P = 0.048), with no difference on D7 (P = 0.21). Irinotecan-treated mice at both dose levels (100 mg/kg and 250 mg/kg) showed no significant differences in LR. BrdU incorporation was significantly decreased in oxaliplatin-treated animals (D1, D2, D3). Conclusions. Neoadjuvant oxaliplatin but not irinotecan impairs early LR in a posthepatectomy murine model, which correlates with decreased DNA synthesis. Introduction. In 2010, an estimated 142,570 people developed colorectal cancer (CRC), with an estimated 51,370 people dying of the disease [1]. Synchronous liver metastases are found in 20% of patients, and more than half of those diagnosed with CRC will go on to develop metachronous liver metastases [2,3]. Liver-only or liver-predominant disease affects 20-35% of patients, affording those with resectable lesions the possibility of long-term survival. In selected cases with R0 resection, 10-year overall survival has been reported in the literature to range from 17-25% [4,5]. In addition to its adjuvant use in Stage 3 colon cancer and following hepatic resection, chemotherapy has the potential to convert borderline or unresectable liver disease to resectable disease by reducing the size of the tumor to an amenable dimension. Furthermore, neoadjuvant chemotherapy has been advocated as a test for aggressive tumor biology [6][7][8]. Timing and appropriateness of chemotherapy, however, are debated, and there are concerns regarding worse outcomes in heavily treated patients [9]. In this regard, steatohepatitis, steatosis, and sinusoidal injury have been linked to the use of irinotecan, fluoropyrimidines, and oxaliplatin [10]. Animal models for the study of posthepatectomy liver regeneration are well described [11]. These models have yet to be applied to the study of commonly used agents for CRC. Given first-line use of oxaliplatin and irinotecan for stage IV CRC, these agents were chosen for investigation. We hypothesized that posthepatectomy liver regeneration is impaired by oxaliplatin and/or irinotecan administration and that this impairment can be demonstrated in a mouse model. Animal Maintenance and Treatments. Eight-week-old C57BL/6 male mice, weighing 23-25 grams, were obtained from commercial sources (Taconic Farms, Hudson, NY). The animals were housed under standard 12-hour light/12-hour dark conditions with standard feed and water ad libitum. After a minimum of 48 hours of acclimation, animals were randomized to receive either oxaliplatin (15 mg/kg), irinotecan (100 mg/kg or 250 mg/kg), or control solution (dextrose 5% water) by intraperitoneal injection. Animal tolerance of chemotherapy was closely monitored, and posthepatectomy animals were evaluated daily.
Animal handling, drug administration, monitoring, and survival surgery protocols were approved by the City of Hope Research Animal Care Committee. Chemotherapy. Oxaliplatin and irinotecan were obtained through the City of Hope Investigational Drug Services and diluted in a non-chloride-containing solution (dextrose 5% water) to deliver the determined dose in an approximate volume of 100 mcL. Dose regimens were based on data from in vivo activity in previously described colon cancer tumor models in mice [12,13]. Oxaliplatin 15 mg/kg was administered IP as a single dose. Irinotecan was administered at two dose levels as follows: regimen A, 100 mg/kg IP divided in 2 weekly doses, and regimen B, 250 mg/kg IP divided in 3 weekly doses (75 mg/kg, 75 mg/kg, 100 mg/kg). Fourteen days after the last control or chemotherapy injection, a 70% hepatectomy was performed. Despite using a well-established dosing schedule [12] in a dedicated vivarium with skilled personnel, 19 of 32 animals died from the initial treatment with oxaliplatin. There was no mortality in the irinotecan group. All surviving animals were included in the surgical portion of the experiment. Animal Surgery. The left and median lobes were resected with preservation of the gallbladder for 70% hepatectomy. Briefly, tribromoethanol (Avertin) anesthetic was administered IP (250 mg/kg). After sterile prep, a subxiphoid transverse incision was created and the median and left liver lobes were exteriorized. The lobes were encircled with silk ligature, their vascular pedicles tied at the base, and the lobes resected. Care was taken to spare the gallbladder and associated bile ducts. Closure was accomplished with autoclips. Buprenorphine was administered (0.5 mg/kg subcutaneously) upon awakening. At postoperative days 0, 1, 2, 3, and 7, remnant right and caudate lobes were harvested, and blood was collected from the retroorbital sinus concomitant with animal sacrifice. In the oxaliplatin experiment cohort, there were 3 perioperative deaths (2 oxaliplatin-treated, 1 control). There was no mortality in the irinotecan cohort. Percent Liver Regeneration by Mass. Percent liver regrowth was calculated by the following formula: percent regrowth = (mass of the regenerating liver remnant in grams) ÷ [(mass of the resected liver lobes in grams) / 0.7] × 100. Uniform samples of hepatic parenchyma were removed and fixed in 4% formaldehyde solution, embedded in paraffin, sectioned at 5 micrometers, and stained with hematoxylin and eosin. BrdU immunohistochemical staining was performed using a commercially available kit (Roche). The number of positively stained nuclei was counted in 3 randomly selected high-power fields per sample, one sample from at least 2 mice per time point and arm. ALT Analysis. Under anesthesia prior to sacrifice, approximately 500 mcL of blood was drawn from the retroorbital sinus and placed in serum separator tubes (Falcon). Collected serum was then analyzed for ALT after 10-fold dilution in 7% bovine serum albumin. Statistical Analysis. Statistical comparisons were performed using t-tests for independent samples. Results. Oxaliplatin. 22 animals underwent 70% hepatectomy in the oxaliplatin versus control study, 9 animals in the control arm and 13 in the oxaliplatin arm. Animal weights of the survivors were similar to those of the control group at the time of hepatectomy.
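To make the regrowth calculation and the statistical comparison above concrete, the following minimal sketch (Python) computes percent regeneration from remnant and resected lobe masses and compares two arms with an independent-samples t-test. All masses and group values in it are hypothetical, not data from this study.

```python
# Minimal sketch of the percent-regrowth calculation and the independent-samples
# t-test described above. All numbers below are hypothetical, not study data.
from scipy import stats

def percent_liver_regrowth(remnant_g, resected_g):
    """Remnant mass relative to the estimated whole-liver mass (resected / 0.7), as a percentage."""
    estimated_whole_liver = resected_g / 0.7  # resected lobes represent ~70% of the liver
    return remnant_g / estimated_whole_liver * 100.0

# Hypothetical day-3 regrowth percentages, one value per mouse.
control_day3 = [91.2, 88.5, 87.9, 90.1]
oxaliplatin_day3 = [72.3, 69.8, 68.4, 71.0]

t_stat, p_value = stats.ttest_ind(control_day3, oxaliplatin_day3)  # two-sided by default
print(f"example regrowth: {percent_liver_regrowth(0.62, 1.05):.1f}%")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```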
There were 3 perioperative deaths, 1 in the control arm and 2 in the oxaliplatin arm, which were technical in nature (pneumothorax, excessive manipulation of lobes on extraction, and hemorrhage). Percent liver regrowth (Table 1 and Figure 1) at day 1 following hepatectomy did not differ between oxaliplatin-treated and control mice (56.1% versus 52.5%, resp., P = 0.312). Data collected on day 2 suggest less regrowth in the oxaliplatin-treated arm (57.6% versus 73.0%, P = 0.154); however, this was not statistically significant. Regeneration was significantly less in the treatment arm at day 3 (70.0% versus 89.0%, P = 0.048). By 7 days following hepatectomy, delayed LR in the oxaliplatin-treated arm was no longer found to be statistically significant (89.8% versus 99.0%, P = 0.214). Hepatocyte injury was assessed by measurement of ALT levels. ALT levels peaked at posthepatectomy day 1 and normalized by day 3. ALT levels in oxaliplatin-treated animals were not found to be statistically different from controls throughout the study (Figure 5). BrdU incorporation was used to determine if oxaliplatin impairs DNA synthesis (cellular division), thus contributing to impaired liver regrowth (Figure 3). DNA synthesis was significantly higher in the control arm at all three measured timepoints. Oxaliplatin-treated animals showed significantly less incorporation, consistent with reduced DNA synthesis. Irinotecan. In the irinotecan experiments, no animals experienced chemotherapy-related mortality. Weights were similar between groups at the time of hepatectomy. Neither dose level, group A (100 mg/kg; N = 15, control N = 8) nor group B (250 mg/kg; N = 17, control N = 5), showed significant impairment in liver regrowth by mass compared with respective controls (Figure 2). Similar to the oxaliplatin group, irinotecan-treated animals showed peak ALT levels at day 1 with return to baseline between days 3-7 (data not shown). In contrast to the oxaliplatin results, BrdU incorporation in irinotecan-treated animals was similar or increased compared to controls (Figure 4). Histology. Histologic examination of regenerating liver specimens showed no evidence of hepatic sinusoidal obstruction in oxaliplatin- or irinotecan-treated animals. Mild ballooning changes due to increased cytoplasmic water were seen in both treated and untreated groups. In the oxaliplatin arm, mild portal inflammation with necrosis near the portal triads and microvesicular steatosis were seen in two animals, one at posthepatectomy day 2 and one at day 3. Discussion. The liver's remarkable ability to restore a functionally adequate portion of its previous volume following surgical resection is tightly regulated by mechanisms that include bile acid interactions with the FXR nuclear receptor and several other complex mechanisms [11,14]. The mouse liver regeneration model is well described and highly reproducible in this posthepatectomy setting. The differences in regeneration are demonstrated at early timepoints, namely, days 2 and 3 after hepatectomy [11]. We chose to apply this model to the study of liver regeneration after treatment with commonly used modern chemotherapeutic agents for CRC. The oxaliplatin dose was selected based on established, species-specific doses from the research literature [12]. An unexpected toxicity (mortality) was observed in this experiment.
The animal deaths affected the group sizes, but impacted the planned experimental animal numbers (approved by the Research Animal Care Committee) required to achieve definitive statistical results in only one cohort. We discovered that oxaliplatin-treated animals showed significantly reduced regrowth on the third posthepatectomy day. This finding has not been previously described in a preclinical model and may, in part, be due to the mechanism of oxaliplatin cytotoxicity. Oxaliplatin is a third-generation platinum derivative that acts at the level of DNA by forming bulky DNA adducts [15]. Most commonly, intrastrand links between guanine and adenine are formed by the platinum moiety. DNA synthesis is impaired by these adducts, which in turn leads to strand breaks and subsequent apoptosis. Oxaliplatin's mechanism of action is consistent with the marked decrease in DNA synthesis demonstrated by the decrease in BrdU staining in these experiments. BrdU staining was more sensitive than percent regrowth by weight. Oxaliplatin's impact on BrdU staining was demonstrated by significant differences as early as day 1 following hepatectomy, with significant impairment in DNA synthesis continuing through day 3. These data, combined with the absence of direct hepatic damage, as evidenced by nonsignificant differences in ALT, suggest that oxaliplatin blunts LR by inhibiting cell division in the early postoperative period. However, this effect appears to be lost by 7 days postoperatively, as physiologic mechanisms to restore appropriate liver function normalize liver size by this time. Despite the consistency (growth and DNA synthetic activity) of these data, the small number of experimental animals requires that they be viewed as exploratory, not definitive. The experimental model is well established, and BrdU incorporation is a sensitive measure of DNA synthesis. The current literature contains variable conclusions on the impact of chemotherapy on liver regeneration. In part, this is due to differences in experimental modeling (e.g., number of chemotherapy injections, use of Ki-67, and single time-point analysis). The current series of experiments provides sequential time-point evaluation at 0, 1, 2, 3, and 7 days in an attempt to mimic the immediate, early, and longer phases of hepatic regeneration in the human. This sequential reporting is unique in investigations of this type. This topic area remains controversial, and additional experiments with a consistent experimental model will be the most definitive way to resolve the controversies and variability in results. Irinotecan-treated animals did not show differences in liver regrowth despite treatment at previously documented pharmacologically active doses [16,17]. BrdU incorporation assays corroborated these findings, showing no decrease but rather a statistically nonsignificant increase in DNA synthesis. Increased DNA synthesis with irinotecan treatment may be related to the drug's mechanism of action. Irinotecan inhibits topoisomerase I, stabilizes single-strand breaks, and results in double-strand breakage through interaction at the replication fork [18]. Our results suggest that the structure and coiling of DNA are altered by irinotecan without a direct effect on the cell's ability to synthesize DNA and thus incorporate BrdU. Despite the somewhat counterintuitive nature of this finding, a higher proportion of cells in S-phase after irinotecan treatment has been described in animal and clinical settings [19].
High toxicity was seen in the animals receiving oxaliplatin. This occurred despite the use of previously reported doses [20]. In our experiments, although the toxicity was high during the administration of oxaliplatin, the surviving animals were fully recovered prior to hepatectomy, with no difference in animal weight or appearance in the oxaliplatin-treated animals when compared to irinotecan-treated animals. This argues that differences in regrowth were liver specific and not a byproduct of other factors such as a weakened state or poor nutrition. Recently, clinical studies have raised concerns regarding the significant hepatotoxicity of chemotherapeutic agents for CRC [21]. Oxaliplatin is implicated in the "blue liver" syndrome from hepatic sinusoidal obstruction, and worse posthepatectomy outcome is reported in association with chemotherapy-related steatohepatitis, primarily with irinotecan [22,23]. Histologic examination of the liver in 2 animals showed microvesicular steatosis and mild periportal inflammation. However, this was an uncommon finding. The differences seen in regeneration and DNA synthesis, therefore, likely reflect changes not yet evident on H&E but in part detectable with special staining techniques such as BrdU immunohistochemistry. Clinical guidelines for hepatectomy recommend more conservative volumes of liver resection in chemotherapy-treated patients, with a goal future liver remnant of 30% rather than 20% [16,24]. Given the adverse effects of chemotherapy on the liver, our goal was to establish an animal model to study these interactions. We have shown early impairment of regenerative ability in oxaliplatin-treated animals. These findings are corroborated by decreased DNA synthesis. These data suggest that early in the patient's postoperative course, when the risk for liver failure is higher, regenerative mechanisms may be impaired. Future studies with this model will aim at abrogating these effects. Conclusion. The mouse 70% hepatectomy model provides a useful tool for studying the effects of chemotherapy on posthepatectomy liver regeneration. We demonstrate that oxaliplatin impairs early liver regeneration in a posthepatectomy model and that this reduced regrowth correlates with decreased DNA synthesis. Conversely, irinotecan did not impair regeneration or DNA synthesis.
Effective Automated Feature Construction and Selection for Classification of Biological Sequences Background Many open problems in bioinformatics involve elucidating underlying functional signals in biological sequences. DNA sequences, in particular, are characterized by rich architectures in which functional signals are increasingly found to combine local and distal interactions at the nucleotide level. Problems of interest include detection of regulatory regions, splice sites, exons, hypersensitive sites, and more. These problems naturally lend themselves to formulation as classification problems in machine learning. When classification is based on features extracted from the sequences under investigation, success is critically dependent on the chosen set of features. Methodology We present an algorithmic framework (EFFECT) for automated detection of functional signals in biological sequences. We focus here on classification problems involving DNA sequences which state-of-the-art work in machine learning shows to be challenging and involve complex combinations of local and distal features. EFFECT uses a two-stage process to first construct a set of candidate sequence-based features and then select a most effective subset for the classification task at hand. Both stages make heavy use of evolutionary algorithms to efficiently guide the search towards informative features capable of discriminating between sequences that contain a particular functional signal and those that do not. Results To demonstrate its generality, EFFECT is applied to three separate problems of importance in DNA research: the recognition of hypersensitive sites, splice sites, and ALU sites. Comparisons with state-of-the-art algorithms show that the framework is both general and powerful. In addition, a detailed analysis of the constructed features shows that they contain valuable biological information about DNA architecture, allowing biologists and other researchers to directly inspect the features and potentially use the insights obtained to assist wet-laboratory studies on retainment or modification of a specific signal. Code, documentation, and all data for the applications presented here are provided for the community at http://www.cs.gmu.edu/~ashehu/?q=OurTools. Introduction The wealth of biological sequences made possible by highthroughput sequencing technologies is in turn increasing the need for computational techniques to automate sequence analysis. In particular, as the community at large is focusing on elucidating the sequence-function relationship in biological macromolecules, a primary sequence analysis problem involves unraveling the rich architecture of DNA and mapping underlying functional components in a DNA sequence [1]. A combination of valuable biological insight gathered from wet-laboratory experiments and increasingly powerful computational tools has resulted in significant progress being made in important sequence analysis tasks, such as gene finding [2,3]. Despite this progress, challenges remain [4,5]. For instance, accuracy in gene finding ultimately depends on addressing various subproblems, one of which is the correct detection of splice sites that mark the beginning and end of a gene. The splice site prediction problem is now considered a primary subtask in gene finding and is thus the subject of many machine learning methods [6][7][8][9][10][11][12][13][14][15][16]. 
Other prominent DNA analysis problems involve the identification of regulatory regions [17,18] through detection of binding sites of transcription factors [19][20][21] or detection of hypersensitive sites as reliable markers of regulatory regions [15,[22][23][24][25][26][27][28], identification of ALU sites [29][30][31][32][33] to understand human evolution and inherited disease [34,35], and more. From a computational point of view, detecting specific functional regions in a DNA sequence poses the interesting and challenging task of searching for signals hidden in sequence data. Detecting a signal in a given sequence or whether a sequence contains a particular signal is a difficult computational task, particularly in the ab initio setting, for which little or no a priori information is available on what local or distal interactions among the building blocks of investigated sequences constitute the sought signal. Yet, automating this process is central to our quest to understand the biology of organisms and characterize the role of macromolecules in the inner workings of a healthy and diseased cell. This quest is not limited to nucleic acids. Important sequence analysis problems include predicting protein solubility, crystallizability, subcellular localization, detecting enzymatic activity, antimicrobial activity, secondary structure folding, and more [36][37][38][39][40][41][42][43][44][45][46]. The focus in this paper on DNA is due to a growing body of work in machine learning pointing to the fact that many important functional signals consist of a complex combination of local and distal information at the nuucleotide level. Sequence analysis problems in which the objective is to find what constitutes a functional signal or property at the sequence level naturally lend themselves to formulation as classification problems in machine learning. The effectiveness of these algorithms largely depends on the feature sets used. In some settings, the construction of effective features can be facilitated by a priori insight from biologists or other domain experts. For instance, biophysical insights have been instrumental in developing effective features for predicting protein subcellular localization and folding rates, CG islands in DNA sequences, and more [37,38,42,43,47]. However, it is becoming increasingly clear that there are problems for which domain-specific insight is either incomplete or hard to translate into effective features. As a consequence, there is considerable interest in automating the process of constructing effective features. A prominent example is the automated detection of splice sites in DNA sequences [12][13][14][15][16]. The key issue here is how to define a space of potential features that is sufficiently rich to allow the generation of effective features while maintaining computational feasibility. In recent work, we have indicated how one can explore large spaces of potential features in a computationally-viable manner by employing evolutionary algorithms (EAs) [15,16]. The success of this "proof of principle" effort has prompted us to propose and investigate a more general EA-based framework (EFFECT) for efficient automated feature construction for classification of biological sequences. In this paper we describe the generalizations and then demonstrate the broad applicability of the framework on three DNA sequences analysis problems on the detection of splice sites, HS sites, and ALU sites in DNA sequences. 
The algorithmic realizations of EFFECT for each of the selected problems in this paper are sufficiently detailed to allow one to adapt the framework for other sequence classification problems of interest. Indeed, one of the contributions of this work is in providing a roadmap as to how one can do so in different application settings. To further facilitate this, the entire data, code, and documentation are provided to the community at http://www.cs.gmu.edu/~ashehu/?q=OurTools. The rest of this article is organized as follows. We first provide a brief review of related research that includes machine learning methods for classification of biological sequences and EAs for feature construction in the context of classification. The EFFECT framework is detailed in Methodology. A comprehensive analysis of results from application of this framework to the three chosen problems is presented in Results. The paper concludes in Discussion, where we provide a short summary of the main features of EFFECT, its availability to the research community, and its use for other classification problems of interest. Related Work. Methods for Classification of Sequence Data. We focus here on supervised learning methods for classification of sequences. In this scenario, a model is trained to find features that separate labeled training sequence data. Typically, these are binary classification problems in which a positive label is assigned to sequences known to contain a particular functional signal or property, and a negative label to sequences that do not. The learned model is then applied to novel sequences to make label predictions and thus detect or recognize the presence of the sought functional signal. Our review below categorizes classification methods into statistical-based and feature-based, though many methods are a combination of the two approaches. Typically, the process involves first transforming sequence data into vectors over which an underlying classifier operates. In statistical-based approaches, the focus is on the underlying statistical model for the classification. In feature-based approaches, the primary focus is on constructing effective features that allow transforming sequence data into (feature) vectors for standard classifiers. What follows below is not a comprehensive review of the literature on each of these two approaches, but rather a summary of representative methods in each category to facilitate the discussion of results in the comparison of our framework to state-of-the-art methods. Statistical Learning Methods. Statistical learning methods can be broadly classified by the models that they employ, which can be generative or discriminative. Generative models learn the joint probability P(x, y) of inputs x ∈ X with labels y ∈ Y. Bayes' rule is then used to calculate the posterior p(y | x) and predict the most likely label for an unlabeled input. Discriminative models learn the posterior directly, but this also limits them to a supervised setting that demands labeled training data (as opposed to the ability of generative models to additionally exploit unlabeled data). Nonetheless, discriminative models are preferred in many classification settings, as they provide a more direct way of modeling the posterior without first addressing a more general problem (as demanded by modeling the joint probability) [48].
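As a concrete illustration of the generative route just described, the toy sketch below (Python) scores a DNA sequence under simple per-class nucleotide models and applies Bayes' rule to obtain the posterior; the class models and priors are made-up values, not parameters from any cited method.

```python
# Toy generative classifier for DNA sequences: per-class nucleotide frequencies
# P(x|y) under an independence assumption, combined with priors via Bayes' rule.
import math

CLASS_MODELS = {            # hypothetical per-class nucleotide probabilities
    "positive": {"A": 0.20, "C": 0.30, "G": 0.30, "T": 0.20},
    "negative": {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},
}
PRIORS = {"positive": 0.1, "negative": 0.9}   # hypothetical class priors

def log_joint(seq, label):
    """log P(x, y) = log P(y) + sum_i log P(x_i | y)."""
    model = CLASS_MODELS[label]
    return math.log(PRIORS[label]) + sum(math.log(model[nt]) for nt in seq)

def posterior(seq):
    """P(y | x) via Bayes' rule, normalizing the joint over both classes."""
    joints = {label: math.exp(log_joint(seq, label)) for label in CLASS_MODELS}
    total = sum(joints.values())
    return {label: joint / total for label, joint in joints.items()}

print(posterior("ACGGCGT"))
```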
The transformation of input sequence data into numeric data for these models is conducted a priori through a kernel function or a feature-based method that explicitly extracts features of relevance for the transformation. Heuristic procedures have been proposed to combine discriminative and generative models [49] as a way to address the issue that generative methods lose their ability to exploit unlabeled data when trained discriminatively [50]. The resulting hybrid methods have been shown to result in superior performance on recognition of transcription factor-binding sites on DNA [51]. Representative methods include the position-specific scoring matrix (PSSM), also known as the position-weight matrix (PWM), a method that assumes nucleotides at all positions are drawn independently [52,53]; the weight array model (WAM), which relaxes assumptions of independence by additionally modeling dependencies on a previous position [54]; higher-order Markov models, which model more dependencies and outperform PSSMs [55,56]; and even more complex models like Bayesian networks [57,58] and Markov Random Fields (MRFs) [59,60]. A mixture of Bayesian trees and PSSMs in [61], smooth interpolations of PSSMs, and empirical distributions [62] have also been proposed to model arbitrary dependencies. Kernel-based Methods. SVMs are probably the most widespread discriminative learning method in bioinformatics due to their ease of implementation and solid grounding in statistical theory [63,64]. They have been applied to many sequence classification problems, including prediction of transcription start sites on DNA [65], translation initiation sites [66], gene finding [67], transcription factor-binding sites [68], and DNA regulatory regions [69]. The predictive power of SVMs greatly depends on the chosen kernel function. This function maps input (here, sequence) data onto a usually higher-dimensional feature space where provided samples of the two classes can be linearly separated by a hyper-plane. Many kernels are designed for sequence classification, of which the most relevant and state-of-the-art are the weighted position and weighted position with shift kernels devised for recognition of DNA splice sites [12]. In these kernels, limited-range dependencies between neighboring nucleotides are considered to encode features for the SVM. Concepts from evolutionary computation have lately been proposed to learn effective, possibly more complex, kernels for a particular sequence classification problem at hand [27,28].
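For readers less familiar with PSSM/PWM scoring, the sketch below (Python) shows the positional-independence assumption in action: a candidate site is scored by summing position-specific log-odds terms, and the matrix is slid along a sequence to find the best-scoring window. The matrix values are illustrative, not taken from any published model.

```python
# Minimal PSSM (PWM) scoring sketch: each position contributes an independent
# log-odds term; the site score is the sum over positions. Values are illustrative.
PSSM = [                                  # one dict of log-odds per motif position
    {"A": 0.8, "C": -1.2, "G": -1.0, "T": 0.5},
    {"A": -0.4, "C": 1.1, "G": -0.9, "T": -0.2},
    {"A": -1.5, "C": -0.3, "G": 1.3, "T": -0.8},
]

def pssm_score(site):
    """Sum of position-specific log-odds scores for a site of the motif's length."""
    assert len(site) == len(PSSM)
    return sum(column[nt] for column, nt in zip(PSSM, site))

def best_hit(sequence):
    """Slide the PSSM along the sequence and return (best_score, offset)."""
    w = len(PSSM)
    scores = [(pssm_score(sequence[i:i + w]), i) for i in range(len(sequence) - w + 1)]
    return max(scores)

print(best_hit("TTACGGACGT"))
```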
Spectrum features have been shown useful in various classification problems, such as prediction of DNA promoter regions, cis sites, HS sites, splice sites, and more [70][71][72][73]. However, work has shown that the majority of spectrum features are seldom useful and can be removed by effective feature selection algorithms [74]. In many classification problems on biological sequences, research has shown that simple spectrum (compositional-based) features are not sufficient. Problems, such as predicting protein enzymatic activity, DNA hypersensitive sites, or RNA/DNA splice sites seem to necessitate complex local and distal features [11,13,14,[25][26][27][28]38]. In particular, taking into account dependencies through features that encode correlations or simultaneous occurrences of particular k-mers at different positions in a biological sequence is shown to be important for accurate detection of splice sites [14][15][16]. Work in [8,14] introduced the idea of explicitly considering various feature types in the context of splice site detection but limited the number of types and number of enumerated features per type to control the size of the feature space and the computational cost demanded by enumeration. The feature types considered were position-based, region-based, and composition-based [14]. In general, enumeration-based approaches introduce artificial limits on the length and the complexity of features in order to achieve reasonable computation times. Moreover, insight in a particular problem domain is difficult to translate into meaningful features when a combination of local and distal features are needed. Ideally, a general feature construction approach would be able to operate ab initio; that is, explore the space of possible local and distal features and guide itself towards discriminating features. When the types or number of features are not limited, one is invariably confronted with a feature construction problem that is NP-hard problem due to the combinatorial explosion in the size of the feature space [75]. Yet, a variety of general purpose search techniques have been shown effective for NP-hard problems. In particular, EAs, which we summarize next, provide a viable alternative for exploration of complex feature spaces in automated feature construction and are the backbone of the framework proposed here for automatic feature construction for classification of biological sequences. EAs for Exploration of Feature Spaces The ability of EAs to efficiently explore large search spaces with complex fitness landscapes makes them appealing for feature construction [76]. EAs mimic biological evolution in their search for solutions to a given optimization problem. Typically, a population of candidate solutions, also referred to as individuals, is evolved towards the true ones through a process that generates candidate solutions and retains only a population deemed promising according to some fitness function. In standard GAs, individuals are fixed-length strings of symbols. In another class of EAs, genetic programming (GP) algorithms, an individual is a variable-length tree composed of functions and variables. The functions are represented as non-terminal nodes, and the variables represented as terminal (leaf) nodes. GPs were originally introduced to evolve computer programs and complex functions [84][85][86][87]. 
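The spectrum (k-mer) representation discussed earlier in this subsection is easy to make concrete: the short sketch below (Python, a generic k-mer counter rather than the implementation used by any cited method) maps a DNA sequence to a fixed-length k-mer count vector that a standard classifier can consume.

```python
# Sketch of spectrum (k-mer) feature extraction: a sequence is mapped to a vector
# of k-mer counts over the DNA alphabet; classifiers then operate on these vectors.
from collections import Counter
from itertools import product

def kmer_spectrum(sequence, k=3):
    """Return counts for all 4^k possible k-mers, in lexicographic order."""
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    all_kmers = ("".join(p) for p in product("ACGT", repeat=k))
    return [counts[kmer] for kmer in all_kmers]

# Each row of X could then be fed to an SVM or Naive Bayes classifier.
X = [kmer_spectrum(seq) for seq in ["ACGTACGTGG", "TTTACGCGCA"]]
print(len(X[0]))   # 64 features for k = 3
```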
Today, GP-based algorithms are being used for a variety of applications, including feature construction in the context of classification of biological sequences [38,[88][89][90][91][92]. Our recent work introduced a GP-based method for feature construction in the context of DNA splice site recognition [16]. In this paper, we present a more general EA-based approach that makes use of a GP algorithm to explore complex feature spaces and generate predictive features from sequence data. Methods for Feature Selection EAs can be used to construct a large set of discriminating features, but selecting a non-redundant subset that retains its predictive power remains a difficult and open problem, particularly when the set of features is large [93]. Finding an optimal set of features is generally intractable [94] and is shown to be NP-hard in various settings [95,96]. This is due in part to the fact that a feature by itself may not be predictive of a particular class but may be informative in combination with other features. Additionally, features which are informative by themselves may be redundant when grouped with others. In general, even finding a small subset of discriminating features is a challenging search problem [93,97,98]. Feature selection methods generally follow one of two approaches, subset search and subset evaluation [99]. Univariate feature selection like Information Gain or Chi-square are not very useful when applied on a set of features that already have high discriminatory power, as is the case with features found by the GP algorithm employed in the first stage of our EFFECT framework. In cases of already discriminating features, a more relevant criterion for feature selection is to reduce redundancy while retaining predictive power in the selected subset. In this paper we present an EA-based approach to feature selection that achieves that goal. Methodology The proposed EFFECT framework consists of two stages, each comprised of an EA. In the first stage, the Evolutionary Feature Construction (EFC) algorithm is used to search a given space of complex features and identify a set of features estimated to be effective in the context of a given classification problem. These features are then fed to the second stage, where a second algorithm, Evolutionary Feature Selection (EFS), reduces the set of constructed features by selecting a subset deemed most informative without sacrificing performance. A schematic of the framework showing the interplay between these two algorithms, is shown in Figure 1. Constructing Complex Features with EFC Since EFC is a generalization of the feature generation algorithm presented in [16], our description here focuses primarily on the novel components, providing a brief summary of the common elements where needed and directing the reader to Text S1 for further details. Central to the power of EFC is its generalized representation of sequence-based features as GP trees. These feature "trees" are maintained in a population that evolves over generations using standard GP reproductive mechanisms of mutation and crossover. Mimicking the process of natural selection, features that are deemed more discriminative for classification have a higher probability of surviving into the next generation, steering the probabilistic search in EFC towards more effective features. The discriminative power of a feature is estimated through an empirical or surrogate fitness function. 
The best features (those with highest fitness) found by EFC are collected in a set referred to as a hall of fame. It is this set that is fed to the subsequent EFS algorithm for feature subset selection. Feature Representation in EFC. As standard in GP, the individuals (features) evolved by the EFC algorithm are represented as parse trees [87]. In EFC, the leaf nodes of a feature tree are known building blocks of given biological sequences. In the case of DNA sequences, for instance, these blocks are the four nucleotides in the DNA alphabet. To improve on generality and effectiveness, EFC supports additional building blocks that represent groups of nucleotides based on similar chemical properties. In this paper, this capability is illustrated by the use of the IUPAC code [100], resulting in the 15 symbols listed in Table 1. If the sequences of interest are proteins, the building blocks can either be amino-acid identities, types, or other categorizations based on physico-chemical properties. Alternatively, building blocks can be short subsequences or motifs of k symbols. Information may be available from domain experts to determine the length of these motifs. For instance, work in splice sites shows that motifs of length k > 8 are not useful [13,14]. In other applications, there may be lower bounds on the length of effective motifs. Such bounds may be available and specified a priori to EFC or tuned interactively after analysis of constructed features. In the selected applications of the EFFECT framework in this paper, the leaf nodes of feature trees are motifs, and we limited the length of these motifs to between 1 and 8. As illustrated in Figure 2 and Figure 3, EFC uses the standard boolean operators (and, or, not) to combine basic building blocks into more complex features. In addition to boolean operators, the EFC algorithm uses application-specific functional nodes to assist in constructing meaningful features for biological sequences. These are listed in Table 2. An important functional generalization in EFC is the ability to specify the matching of a motif in some region (up or down) or matching it around some expected position. This allows for the construction of features that are more robust to possible sequence variations. In Text S1 we provide more detail regarding the types of features that one can construct with these operators and provide illustrations for them. Population and Generation Mechanism. As detailed in Text S1, the initial population of N features is carefully constructed to contain a variety of tree shapes with maximum depth D. In contrast to EAs with fixed population sizes, EFC employs an implosion mechanism that reduces the size of the population by r% over the previous one, in order to avoid known convergence pitfalls of GPs. The population of features evolves for a pre-specified number of generations G. Each population contributes its top ℓ features to a hall of fame. In turn, the hall of fame is used to provide a randomly selected initial set of m features for the next generation, with the rest of the features in the next generation obtained through reproductive operators. Reproductive Operators. Based on studies that show robust EAs incorporate both asexual (mutation) and sexual (crossover) breeding operators [101], EFC employs both operators. These operators are executed until the goal population size for the next generation is reached. Each of the operators has a certain probability with which it is performed.
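To give a flavor of the feature trees EFC evolves, the simplified sketch below (Python) evaluates a boolean combination of a position-anchored motif match and a region-restricted motif match on a DNA sequence. The tree encoding and operator names here are illustrative stand-ins, not the exact functional nodes listed in Table 2.

```python
# Simplified sketch of evaluating an EFC-style feature tree on one DNA sequence.
# The tree encoding and operator names are illustrative, not EFC's exact node set.

def matches_at(seq, motif, pos, shift=0):
    """True if the motif occurs at position `pos`, allowing a +/- `shift` tolerance."""
    for p in range(max(0, pos - shift), pos + shift + 1):
        if seq[p:p + len(motif)] == motif:
            return True
    return False

def matches_in_region(seq, motif, start, end):
    """True if the motif occurs anywhere inside seq[start:end]."""
    return motif in seq[start:end]

def evaluate(tree, seq):
    """Recursively evaluate a nested-tuple feature tree on a sequence."""
    op = tree[0]
    if op == "and":
        return all(evaluate(child, seq) for child in tree[1:])
    if op == "or":
        return any(evaluate(child, seq) for child in tree[1:])
    if op == "not":
        return not evaluate(tree[1], seq)
    if op == "at":        # ('at', motif, position, shift)
        return matches_at(seq, tree[1], tree[2], tree[3])
    if op == "region":    # ('region', motif, start, end)
        return matches_in_region(seq, tree[1], tree[2], tree[3])
    raise ValueError(f"unknown node type: {op}")

# A feature: motif GT near position 10 AND motif CC somewhere in the first 8 bases.
feature = ("and", ("at", "GT", 10, 1), ("region", "CC", 0, 8))
print(evaluate(feature, "ACCGTCCTAAGTGCA"))  # True for this example sequence
```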
Given the additional functional nodes in EFC over our prior work in [16], four new mutation operators are employed depending on the type of tree node being modified. Each of the variants has equal probability of being performed once the mutation operator is selected. Additional details and illustrations on the mutation and crossover operators are provided in Text S1. Bloat Control. A common problem with tree-based individuals in EAs is that, as generations progress, individuals become more complex without any improvement in fitness. This is known as bloat. It is important to control bloat, particularly when the goal is to have features that are easily interpretable by humans. As such, bloat control is an important element in EFC, the details of which are given in Text S1. Fitness Function. EFC employs a surrogate fitness function or a "filter" approach, which is considered to be more effective than wrapper approaches for feature evaluation [102]. Since most sequence classification datasets are imbalanced, in the sense of having very few positives as compared to a large number of negatives, the objective of a filter approach is to improve precision while managing the discriminative power of features. For this purpose, we use a fitness function defined in terms of the following quantities: f refers to a particular feature, C+,f and C−,f are the number of positive and negative training sequences that contain feature f, respectively, and C+ is the total number of positive training sequences. This fitness function tracks the occurrence of a feature in positive sequences, as negative sequences may not have any common features or signals. The fitness function additionally penalizes non-discriminating features; that is, features that are equally found in positive and negative training sequences. Hall of Fame. Previous research on EAs has noted that if parents die after producing offspring, there can be genetic drift or convergence to some local optimum [76]. This can result in the loss of some of the best individuals. The EFC algorithm addresses this issue by using an external storage of features known as a hall of fame. As noted above, the ℓ best individuals in every generation are added to the hall of fame, and the hall of fame in return helps seed the population in each generation with m randomly selected features. It should be noted that the parameter values for m and ℓ should depend on the problem at hand. In general, keeping the fittest individuals in a hall of fame improves overall performance [103]. After execution of EFC, the features in the hall of fame are those submitted to the ensuing EFS algorithm. Effective Feature Selection with EFS. The hall of fame features generated by EFC were selected on the basis of their individual performance. What is required for effective and efficient classification is to identify a relevant and non-redundant subset of features. EFS, a novel GA-based algorithm, is employed for this purpose and described below. Feature Subset Representation in EFS. EFS evolves feature subsets by having individuals in the population correspond to feature subsets represented as binary strings. The length of each string is equal to the number of individuals in the hall of fame. A string of all '1's would correspond to the maximum subset, the hall of fame itself, and a string of all '0's would correspond to the empty subset.
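The sketch below (Python) implements one plausible reading of the fitness description above: reward coverage of the positive training sequences and discount features that occur comparably often in negatives. The exact functional form is an assumption and may differ from the equation used by EFC.

```python
# Plausible surrogate ("filter") fitness for a candidate feature f, following the verbal
# description above: coverage of positives times a term that discounts features occurring
# comparably often in negative sequences. The exact formula is an assumption.

def surrogate_fitness(c_plus_f, c_minus_f, c_plus):
    if c_plus_f == 0:
        return 0.0
    coverage = c_plus_f / c_plus                         # fraction of positives containing f
    discrimination = c_plus_f / (c_plus_f + c_minus_f)   # 0.5 when equally frequent in both classes
    return coverage * discrimination

# A feature in 80/100 positives and 5 negatives outscores one found equally in both classes.
print(surrogate_fitness(80, 5, 100), surrogate_fitness(80, 80, 100))
```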
In addition to being a suitable representation for our purposes, binary representations in GAs are the standard ones and include a well-studied set of mutation and crossover operators. Population and Generation Mechanism. The initial population contains M individuals of length ℓ, which are created using randomly generated binary strings and represent M subsets of selected features from the hall of fame. The GA implementation in EFS is generational; that is, after the offspring are created using mutation and crossover, the parents die. The population size of M remains constant throughout the generations in EFS. The number of generations is set to K = M by default. The best individual (feature subset) is tracked over the generations and constitutes the feature subset presented to a classifier for labeling new unlabeled (testing) sequences. For the experiments reported in this paper, M = 20. Reproductive Operators. EFS uses a standard bit-flip mutation operator with a mutation rate of 1/ℓ. Additionally, standard uniform crossover is used, in which each bit is considered a crossover point with a probability of 0.5. It has been shown that employing uniform crossover along with bit-flip mutation is effective at balancing exploration and exploitation of search landscapes [101]. Parent(s) for the reproductive operators are selected using standard fitness-proportional selection. Details are provided in Text S1. Fitness Function. Recall that the objective of EFS is to find a subset of features with high feature-class correlation, to retain discriminating power, but low feature-feature correlation, to reduce redundancy. EFS achieves this by employing a correlation-based fitness function [104]. Using a measure of feature correlation r based on Pearson's correlation, a set of features A, a feature subset F ⊆ A, and a to-be-predicted class C ∉ A, the average feature-class correlation is the mean of r(f, C) over the features f in F, and the average feature-feature correlation is the mean of r(f_i, f_j) over all pairs of distinct features in F. Combining the two, so as to maximize class-feature correlation while minimizing feature-feature correlation, and weighing by the number of features n_f, results in the fitness function used by EFS. Classifiers. The best (highest fitness) individual obtained from EFS defines the feature subset to be used by a machine learning classifier. Generally, any classifier can be used, and our experimentation shows there are no significant differences among standard ones. Since the Naive Bayes (NB) classifier is the simplest, fastest, and most effective when features have low correlation among them but high correlation with class [105,106], we employ NB as our classifier of choice. We used the kernel density estimator with NB in Weka, which is the default estimation method. Experimental Setting, Implementation Details, and Performance Measurements for Analysis. Experimental Setting. The experimental setting has been designed to support two forms of analysis. First, the features generated by EFFECT are made available for visual inspection and detailed analysis. Second, the experimental setting allows for a detailed analysis of the classification performance of a Naive Bayes classifier using EFFECT-generated features in comparison to a representative set of alternative machine learning approaches (as described in the Related Work section). First, a baseline feature-based method is defined that uses spectrum (compositional) features and over-represented motifs as reported from alignments using Gibbs sampling.
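The subset fitness used by EFS follows the correlation-based (CFS) idea cited above; the sketch below (Python) shows the standard CFS-style merit together with the bit-flip mutation and uniform crossover operators described for EFS. The merit expression is the common CFS form and is offered as an approximation rather than a verbatim copy of the EFS equation.

```python
# Sketch of an EFS-style subset evaluation and its GA operators. The merit is the standard
# CFS form (high feature-class correlation, low feature-feature correlation); the paper's
# exact expression may differ.
import math
import random

def cfs_merit(avg_feat_class_corr, avg_feat_feat_corr, n_features):
    """merit = (n * r_cf) / sqrt(n + n*(n-1)*r_ff)."""
    if n_features == 0:
        return 0.0
    return (n_features * avg_feat_class_corr) / math.sqrt(
        n_features + n_features * (n_features - 1) * avg_feat_feat_corr)

def bit_flip_mutation(bits, rate):
    """Flip each bit independently with probability `rate` (1/length in EFS)."""
    return [b ^ 1 if random.random() < rate else b for b in bits]

def uniform_crossover(parent_a, parent_b):
    """Take each bit from either parent with probability 0.5."""
    return [a if random.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]

subset = [1, 0, 1, 1, 0, 0, 1, 0]   # one candidate subset over 8 hall-of-fame features
child = uniform_crossover(subset, bit_flip_mutation(subset, 1 / len(subset)))
print(child, cfs_merit(0.6, 0.2, sum(subset)))
```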
The features are fed to the same Naive Bayes classifier used to evaluate EFFECT-obtained features, for a direct comparison of features in the context of classification. A comprehensive comparison is also conducted with state-of-the-art statistical methods: PSSM, WAM, Bayes Tree Network with PWM, Markov Chain (MC), and Maximum Supervised Posterior (MSP). Their implementation is made possible through the Jstacs software package [107]. MSP is configured with PWM and homogeneous HMM classifiers as the generative mixture classifier. EFFECT is also compared to kernel methods. We focus on the two most successful recent kernel methods (shown to be so on the splice site prediction problem [12]), the weighted degree positional kernel (WD) and the weighted degree positional kernel with shift (WDS) method (the underlying classifier is an SVM). To the extent possible, the methods selected for comparison have been tuned in order to obtain their best performance on each of the data sets considered in this paper, often in communication with the original developers. Details on our tuning protocols and resulting parameter values are posted on the http://www.cs.gmu.edu/ashehu/?q=OurTools site we provide, which lists the EFFECT code, documentation, and data sets.

Implementation Details. All experiments are performed on an Intel 2 × 4-core machine with 3.2 GHz and 8 GB of RAM. The code for the EFC algorithm in EFFECT is written in Java, using the publicly-available ECJ toolkit [108] and BioJava [109] software packages. The code for the EFS algorithm in EFFECT is also in Java, using the GeneticSearch and CFSSubset techniques of the publicly-available WEKA package for machine learning. The implementation of the statistical methods employed for comparison is in Java, based on the publicly-available Jstacs package [107]. The kernel-based methods are implemented using the publicly-available Shogun toolkit [110] with the standard SVM implementation provided in the publicly-available LibSVM package [111]. The feature-based methods employed for a baseline validation are implemented in Java. The resulting open source software that we provide to the community for academic purposes includes not only the EFFECT framework, but also our implementations of all the methods employed for comparison and their tuned parameters, along with datasets, features, and complete models.

Performance Measurements. Standard datasets used by other researchers are used in each of the three application settings, showing the generality and power of the EFFECT framework. Since most of these datasets have an imbalance between the size of the positive and negative classes, classification accuracy is a meaningless performance measurement. For this reason, the analysis in this paper employs other evaluation criteria, such as the area under the Receiver Operating Characteristic curve (auROC) and the area under the Precision-Recall curve (auPRC). All these are based on the basic notions of TP, FP, TN, and FN, which correspond to the number of true positives, false positives, true negatives, and false negatives. Details on common performance measurements for classification can be found in [112]. To briefly summarize what these measures capture, consider that predicted instances (sequences assigned a label by the classification model) can be ordered from most to least confident. Given a particular confidence threshold, the instances above the threshold can be considered to be labeled positive.
The true positive rate and false positive rate can then be computed as one varies this threshold from 0.0 to 1.0. In an ROC curve, one typically plots the true positive rate (TPR = TP/(TP+FN)) as a function of the false positive rate (FPR = FP/(FP+TN)). The auROC is a summary measure that indicates whether prediction performance is close to random (0.5) or perfect (1.0). Further details can be found in [112]. For unbalanced datasets, the auROC can be a misleading indicator of prediction performance, since this measure is independent of class size ratios; large auROC values may not necessarily indicate good performance. The auPRC is a better measure of performance when the class distribution is heavily unbalanced [113]. The PRC is sensitive to the fraction of negatives misclassified as positives: it plots the precision (TP/(TP+FP)) vs. the recall (this is the TPR, sometimes referred to as sensitivity). Again, as one varies the threshold, the precision can be calculated at the threshold that achieves a given recall. The auPRC is a less forgiving measure, and a high value indicates that a classification model makes very few mistakes. Thus, the higher the auPRC value, the better. Performance is measured and compared across all methods used for comparison on both training and testing datasets. When testing datasets are not available for a particular application, 10-fold cross-validation is conducted instead. We used 1% of the training data, with an equal mix of both classes, as the evaluation set for tuning every method employed for comparison, reserving the remaining 99% of the training data for cross-validation. The idea is to train on a randomly-selected 9/10ths of the data and then test on the rest. This is repeated 10 times, and an average performance is reported in terms of the evaluation criteria described above. Moreover, since the EFFECT framework employs stochastic search algorithms (EFC and EFS), it is run 30 times, thus resulting in 30 sets of features. Each set is evaluated in the context of classification performance (using NB). The reported performance measurements are averages over the 30 values obtained (one for each set of features from a run of EFFECT). Paired t-tests are used to measure statistical significance at 95% confidence intervals. It should be noted that many of the statistical learning (and kernel) methods used for comparison in this paper have the limitation of demanding that all input sequences be of fixed length. On the other hand, some of the datasets available consist of sequences of variable length. Typically, in such a setting, one can either use random alphabet symbols to "fill" shorter sequences up to a maximum fixed length or throw away shorter sequences. Since shorter sequences make up only 2-5% of the datasets under each application in this paper, we decided to discard shorter sequences (additionally, our analysis indicates that doing so results in better performance than filling sequences with random symbols). We also point out that the parameters of each of the methods used for comparison have been tuned to achieve maximum performance for each method. Various classifier parameters (e.g., the cost parameter C in SVM) have also been tuned for this purpose. All tuned parameters are listed at http://www.cs.gmu.edu/ashehu/?q=OurTools.

Results

We summarize the performance of EFFECT on each of the three selected applications in DNA sequence analysis: recognition of HS, splice, and ALU sites.
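Before turning to the datasets, the threshold-based measures above can be made concrete with a minimal Java sketch that derives ROC and precision-recall points from a ranked list of predictions. It is an illustration of the definitions only, not the evaluation code used in the paper; the array names and the assumption that both classes are present are choices made for the example.

```java
import java.util.Arrays;
import java.util.Comparator;

/** Illustrative threshold sweep over classifier confidence scores (sketch, not the paper's evaluation code). */
public class CurvePoints {
    /** scores[i]: classifier confidence for example i; labels[i]: true if example i is a positive. */
    static void sweep(double[] scores, boolean[] labels) {
        Integer[] order = new Integer[scores.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        // Rank examples from most to least confident.
        Arrays.sort(order, Comparator.comparingDouble((Integer i) -> scores[i]).reversed());

        int totalPos = 0;
        for (boolean l : labels) if (l) totalPos++;
        int totalNeg = labels.length - totalPos;   // assumes both classes occur in the data

        int tp = 0, fp = 0;
        for (int k = 0; k < order.length; k++) {   // everything ranked up to k is predicted positive
            if (labels[order[k]]) tp++; else fp++;
            double tpr = (double) tp / totalPos;            // recall / sensitivity
            double fpr = (double) fp / totalNeg;
            double precision = (double) tp / (tp + fp);
            System.out.printf("ROC (FPR=%.3f, TPR=%.3f)  PR (recall=%.3f, precision=%.3f)%n",
                    fpr, tpr, tpr, precision);
        }
    }
}
```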
The training (and testing, where available) datasets employed are detailed first, followed by an empirical analysis of the results on each application. Datasets Benchmark data sets are selected for each of the three application settings in order to allow comparison with as many methods as possible. Datasets for Recognition of HS sites. The dataset employed for evaluating the features constructed in EFC and for training the NB classifier is the one provided at noble.gs.washington.edu/proj/hs. This dataset consists of experimentallydetermined sequences (each 242 nucleotides long) extracted from the human genome and consists of 280 HS and 737 non-HS ones. THe HS sequences were identified employing cloning and in-vivo activity of K562 erythroid cells [114], whereas the non-HS sequences were sequences collected and distributed proportionally throughout the human genome but found not to be hypersensitive when tested in the same cell type. Datasets for Recognition of Splice Sites. A distinction is made between acceptor and donor splices sites. An acceptor splice site marks the start of an exon, whereas a donor splice site marks the end. These sites have different consensus sequences, and machine learning research has additionally shown they have different features of relevance for classification [13,16]. For the purpose of feature construction and classification performance, splice site datasets are split into a donor subset and an acceptor subset, and evaluation is done separately on each subset. The splice site recognition problem is well-studied in machine learning, and so many datasets have been accumulated over the years. We report performance on three datasets used as benchmarks in recent literature. The first dataset is known as NN269 to indicate that it is extracted from 269 human genes [115]. It consists of 1,324 confirmed acceptor sequences, 1,324 confirmed donor sequences, 5,552 false acceptor sequences, and 4,922 false sequences (length of acceptor sequences is 90 nucleotides, whereas that of donor sequences is 15 nucleotides). Further details on these sequences can be found in [115]. We split this dataset into a training and testing dataset. The training dataset has 1,116 true acceptor, 1,116 true donor, 4,672 false acceptor, and 4,140 false donor sequences. The testing dataset has 208 true acceptor, 208 true donor, 881 false acceptor, and 782 false donor sequences. Performance is reported on another dataset extracted from the C_Elegans (worm) genome, prepared as in [12], on which statistically-significant differences are observed in the comparative analysis between EFFECT and other methods. Briefly, the genome is aligned through blat with all known cDNA sequences available at http://www.wormbase.org and all known EST sequences in [116] to reveal splicing sites. 64,844 donor and 64,838 acceptor sequences, each 142 nucleotides long are then extracted from the alignment, centered at the identified splicing sites. Equal-length negative training sequences are centered around non-splice sites (selected in intronic regions). In [12], 1,777,912 negative acceptor and 2,846,598 negative donor sequences are constructed. This dataset is too big to feasibly conduct a thorough comparative analysis with other methods and gather summary statistics over many runs. For this reason, we sample a smaller training set of 40,000 sequences from the entire (positive and negative) dataset, preserving the ratio of positive to negative sequences as in the original dataset. 
Datasets for Recognition of ALU Sites. 319 known ALU sequences were obtained from NCBI website. This small set of sequences is considered to be representative of 99% of all ALU sequences in GenBank [117]. The average length is approximately 300 nucleotides. A negative training dataset of 319 sequences was constructed at random, sampling similar-length sequences with similar nucleotide distribution as that found over ALU sequences. Comparative Analysis Empirical Analysis on Recognition of HS Sites. Given the availability of only a training set in this setting, 10-fold validation is used to measure the classification performance of the NB classifier on features obtained through EFFECT and compare it to the other methods summarized above. We recall that EFFECT is run 30 independent times, and the auROC and auPRC measurements reported are averages over these runs, as well. Table 3 compares EFFECT to all the methods employed for comparison in terms of auROC and auPRC values. As Table 3 shows, EFFECT achieves the highest performance both in terms of auROC (89.7%) and auPRC (89.2%). For comparison, MSP achieves the second highest auROC (85.5%), and K-mer (feature-based with spectrum features in SVM) achieves the second highest auPRC (82.6%). Paired t-tests at 95% confidence intervals indicate that the reported values for EFFECT are statistically significant (data not shown). Taken together, this comparative analysis demonstrates that the quality of the features found by EFFECT is such that even a simple classifier, such as NB, achieves comparable classification performance with sophisticated methods for HSS recognition. Empirical Analysis on Recognition of Splice Sites. Our analysis first proceeds on the NN269 dataset. We recall that the analysis (as well as construction and selection of features and training of classifiers) is conducted separately for the acceptor and donor datasets. Table 4 compares auROC and auPRC values obtained on the testing sequences in each dataset. While EFFECT and the kernel-based methods have the highest performance (EFFECT is second best) in both auROC and auPRC on the acceptor dataset, and all methods are comparable on the donor dataset (with the exception of inhomogeneous HMM and K-mer, which perform worst), the t-test analysis indicates none of the methods' performance is statistically significant on the NN269 splice site dataset. In a second analysis, the C_Elegans splice site dataset is employed as a training dataset. 10-fold validation on highly unbalanced positive over negative datasets (the positive dataset in both the donor and acceptor setting is about 5% of the entire dataset) clearly separates performance among the different methods. Table 5 shows that EFFECT and kernel-based methods achieve the highest performance in terms of auROC on the acceptor dataset (around 99% for kernel-based and 98% for EFFECT). The two are top performers in terms of auROC on the donor dataset, as well (close to 100% for kernel-based and 97% for EFFECT). However, the unbalancing of the positive and negative datasets in each setting results in EFFECT obtaining a higher auPRC value on both the acceptor and donor dataset. On the acceptor dataset, EFFECT obtains an auPRC of 90.2%, followed by kernel-based methods with a value of 89.1% (6 of the 9 methods used for comparison obtain auPRCs less than 16%). On the donor dataset, EFFECT obtains an auPRC of 91:3%, followed by kernel-based methods with a value of 90:1% (5 of the 9 methods used for comparison obtain auPRCs less than 14%). 
The robust performance of EFFECT even on a highly unbalanced dataset suggests that the bias introduced in the fitness function in the EFC algorithm to improve precision while managing the discriminative power of features gives the algorithm an edge in terms of auPRC. Empirical Analysis on Recognition of ALU Sites. As in the HSS setting, the availability of only a training dataset for the ALU recognition problem limits us to a 10-fold validation. As above, the comparative analysis is conducted only in terms of auROCs, as the ALU training dataset has balanced positive and negative subsets. Table 6 shows that EFFECT achieves the highest performance over the other methods with a mean auROC of 98:9%. For comparison, the second-best value is obtained by one of the kernel-based methods (auROC of 97.8%). Again, values reported by EFFECT are statistically significant, as indicated by a t-test at a 95% confidence interval. It is worth noting, additionally, that in the strict context of feature-based methods, EFFECT does not risk overfitting for the NB classifier. Recall that the dataset for ALU sites is small, consisting of 319 sequences. The number of features should not exceed the size of the dataset. Yet, the number of spectrum features used by a k-mer-based method (with SVM) is 65,536, and limiting the number of features with Gibbs sampling still results in 1,213 motifs as features. Detailed Analysis of Features Obtained By the EFFECT Framework We now further analyze features found by EFFECT on each of the three application settings. While ambiguous symbols were used in the alphabet for feature construction in the HS and ALU Site Detection problems, the basic set of {A, C, G, T} was used for the splice site recognition problem. During evaluation on 1% of training data, on which we tuned the methods employed for comparison to EFFECT in this paper, we found that symbol ambiguity was not useful for splice site recognition due to the presence of decoys. HS Site Features. The entire HS dataset described above is used to obtain features through the EFFECT framework. Features reported are found to contain many compositional motifs, such as CGCG, CGCGAGA, and (A/G)GG(T/G). Positional features with slight shifts recorded the presence of short 2-mers, such as CG, and long 8-mers, such as CTTCCGCC. Correlational features recorded the simultaneous presence of GAT and ATCT, and that of CATTT and (G/T)GGC. Interestingly, these last two features have been reported by other researchers as having important biological significance for maturation, silencer, and enhancer effects [118,119]. Lastly, various features recorded the presence of CG patterns, such as CGMS, CGMSN, and CGSBN, which confirms current knowledge that HS sites are rich in CG nucleotides [25]. Splice Site Features. On the NN269 dataset, EFFECT reported many positional features, such as (C/A)AGGTAAG and (T/C)(T/C)CCAGGT. Note that these features match the donor and acceptor consensus sequences exactly. An interesting complex conjunction feature was reported, containing three positional features, CG, GA, and AG, around position 10 to 17 nt in the acceptor region. This is in good agreement with known acceptor region signals reported by other studies [120]. On the C_Elegans dataset, EFFECT reported many regional features, such as the 7-mer motifs GGTAAGT, AGGTAAG, and GGTAGGT around position -43 nt, matching the donor consensus sequence AGGTAAGT. Another important positional feature in the region -18 to-14 nt containing the TAAT motif was reported. 
We note that this motif is a well-known branch site signal [120]. Shift-positional features around position -3 nt recorded the presence of motifs such as TTTCAGG and TTTCAGA, matching the known acceptor consensus sequence TTTCAG(A/G) exactly.

ALU Site Features. On the ALU dataset, EFFECT reported many compositional features, such as the motifs AAAAAA, AAAAT, AGCCT, CCCAG, and CCTGT. These are well-known signals in ALU repeats [121]. An interesting disjunctive feature was also reported, consisting of two correlational sub-features (CCTR, AAT, shift 3) and (CA, GY, shift 3) and a compositional feature TGG. This feature is shown in Figure 4. We additionally performed a Clustal alignment on the whole ALU dataset, shown in Figure 5, and found the three sub-features of the disjunctive feature found by EFFECT and shown in Figure 4 to be indeed over-represented in the ALU dataset. This finding further highlights the importance of using ambiguous symbols in the representation for matching pyrimidines. Finally, additional disjunctive features recorded the presence of motifs such as CCTGG, CTGGGG, and GAGGC, further showcasing the ability of EFFECT to combine the presence of lower-level signals in interesting higher-order features.

Statistical Analysis of Obtained Features

Our detailed feature analysis concludes with measuring the information gain (IG) of each feature in the set reported by EFFECT. For a dataset $D$ with classes ranging from $i = 1$ to $k$, the information-theoretic entropy $I$ is given by

$I(D) = -\sum_{i=1}^{k} p_i \log_2 p_i$,

where $p_i$ is the proportion of instances in $D$ that belong to class $i$. For a feature $F$ taking on $values(F)$ different values in $D$, the weighted sum of its expected information (over splits of the dataset $D$ according to the different values of $F$ into subsets $D_v$, with $v$ ranging from 1 to $values(F)$) is given by

$I_F(D) = \sum_{v=1}^{values(F)} \frac{|D_v|}{|D|} \, I(D_v)$.

The information gain IG for a feature $F$ over a dataset $D$ is then given by

$IG(F, D) = I(D) - I_F(D)$.

Figure 6 shows that the mean information gain for EFFECT features is 0.017, which is almost 3 and 9 times more than that of the Gibbs sampling and k-mer methods, respectively, for HS sequences. We note that the number of features reported by EFFECT is 45, which is much smaller than the 1,030 Gibbs sampling and 65,536 k-mer features. Figure 7 shows that the mean information gain for EFFECT is 0.155, which is approximately 10,000 and 1,000,000 times more than that of the Gibbs sampling and k-mer methods, respectively, for acceptor splice sites. The number of features generated by EFFECT is only 45, which is smaller than the 2,424 Gibbs sampling and 65,536 k-mer features. Figure 8 shows that the mean information gain for EFFECT is 0.131, which is approximately 40 and 5,000 times more than that of the Gibbs sampling and k-mer methods, respectively, for donor splice sites. The number of features generated by EFFECT is only 27, which is smaller than the 751 Gibbs sampling and 65,536 k-mer features. Figure 9 shows that the mean information gain for EFFECT is 0.115, which is again approximately 3 and 1,000 times more than that of the Gibbs sampling and k-mer methods, respectively, for ALU sequences. Also, the number of features generated by EFFECT is only 103, which is smaller than the 170 Gibbs sampling and 65,536 k-mer features. Taken together, this analysis demonstrates that the EFFECT framework generates fewer but statistically more discriminating features, which is one of the most desired qualities of feature construction algorithms.
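As an illustration of how the information gain of a single presence/absence feature can be computed from these formulas, a minimal Java sketch follows. It is illustrative only, not the analysis code used to produce Figures 6-9; the method and argument names are assumptions, and the feature is treated as binary (present or absent in a sequence).

```java
/** Illustrative information-gain computation for a presence/absence feature over a two-class dataset. */
public class InfoGain {
    /** Entropy of a class distribution given by raw counts. */
    static double entropy(int... classCounts) {
        int total = 0;
        for (int c : classCounts) total += c;
        double h = 0.0;
        for (int c : classCounts) {
            if (c == 0) continue;
            double p = (double) c / total;
            h -= p * (Math.log(p) / Math.log(2));   // log base 2, as in I(D)
        }
        return h;
    }

    /**
     * posWith/negWith: positive/negative sequences containing the feature;
     * posWithout/negWithout: positive/negative sequences not containing it.
     */
    static double informationGain(int posWith, int negWith, int posWithout, int negWithout) {
        int total = posWith + negWith + posWithout + negWithout;
        double parent = entropy(posWith + posWithout, negWith + negWithout);    // I(D)
        double withWeight = (double) (posWith + negWith) / total;               // |D_present| / |D|
        double withoutWeight = (double) (posWithout + negWithout) / total;      // |D_absent| / |D|
        return parent
                - withWeight * entropy(posWith, negWith)
                - withoutWeight * entropy(posWithout, negWithout);              // I(D) - I_F(D)
    }
}
```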
Discussion In this paper we describe and evaluate EFFECT, a computational framework to automate the process of extracting discriminatory features for determining functional properties of biological sequences. Using basic domain knowledge to identify the fundamental building blocks of potential features, EFFECT constructs complex discriminatory features from these building blocks in a two-stage process. First, an evolutionary algorithm, EFC, constructs a set of potentially useful complex features. A second evolutionary algorithm, EFS, reduces the size of this feature set to a collectively effective subset. The key to this approach is the use of a GP-based EA capable of efficiently constructing complex features from an appropriate set of basic building blocks. The generality of the approach is obtained by allowing more general building blocks than the basic sequence elements and by providing a flexible way of describing positional information. The effectiveness of the approach is enhanced by a novel feature selection phase. The power and the versatility of this approach is demonstrated by its application to three important problem areas: the recognition of hypersensitive, splice, and ALU sites in DNA sequences. An important observation is the preciseness with which the constructed features characterize complex discriminatory patterns. Figure 5 illustrates some of the sequence patterns matched by the feature shown in Figure 4. If we imagine using basic spectrum Kmers for the same dataset, it would have taken a significant number of K-mers to capture the information. More importantly, the positional, correlational and compositional context would not have been captured. This would not only result in lower information gain at the cost of a higher number of features as clearly seen in the earlier analysis on information gain, but would also generate a large number false positives. Markov models and positional matrix-based algorithms would have captured more of the patterns outlined in the example, but not the complex combinations that EFC does. In addition, the complex features constructed by EFFECT can frequently be interpreted in meaningful ways by the domain experts, providing additional insights into the determination of functional properties. Our web site describes the top constructed features obtained by EFFECT on the three application settings presented in this paper. We encourage interested researchers to study them directly for further insights. Finally, we hope that the provided source code will provide the research community with a powerful tool to support further investigations in other application settings. For example, we note that interesting problems involving amino-acid sequences can be pursued with the EFFECT framework. In such settings, simple approaches involving enumeration of features is impractical, unless the amino-acid alphabet is drastically simplified. The proposed framework allows the exploring of large feature spaces while retaining more of the characteristics of amino acids. While further problem-specific details can be explored, the investigation can begin by simply replacing the DNA alphabet employed in this paper. Text S1 Feature representation in EFC, population and generation mechanism in EFC, details on parameter tuning, statistical significance results, and comparison with other feature selection algorithms. Table S1: Parameters used for experiments in HSS. Table S2: Parameters used for experiments in C Elegans (splice site). 
Table S3: Parameters used for experiments in Alu. Table S4: auROC and auPRC comparison analysis for HSS recognition. Table S5: auROC and auPRC comparison analysis for recognition of ALU sites. Table S6: auROC and auPRC comparison analysis with different feature selection methods. Table S7: Summary statistics are shown for information gain and number of features over 30 independent runs of EFFECT. (PDF)
Too Fresh Is Unattractive! The Attraction of Newly Emerged Nicrophorus vespilloides Females to Odour Bouquets of Large Cadavers at Various Stages of Decomposition The necrophagous burying beetle Nicrophorus vespilloides reproduces on small carcasses that are buried underground to serve as food for their offspring. Cadavers that are too large to bury have previously been postulated to be important food sources for newly emerged beetles; however, the attractiveness of distinct successive stages of decomposition were not further specified. Therefore, we investigated the potential preference of newly emerged N. vespilloides females for odour bouquets of piglet cadavers at specific stages of decomposition. Analyses of walking tracks on a Kramer sphere revealed a significantly higher mean walking speed and, consequently, a higher mean total track length when beetles were confronted with odour plumes of the decomposition stages ‘post-bloating’, ‘advanced decay’ or ‘dry remains’ in comparison with the solvent control. Such a change of the walking speed of newly emerged N. vespilloides females indicates a higher motivation to locate such food sources. In contrast to less discriminating individuals this behaviour provides the advantage of not wasting time at unsuitable food sources. Furthermore, in the advanced decay stage, we registered a significantly higher preference of beetles for upwind directions to its specific odour plume when compared with the solvent control. Such a change to upwind walking behaviour increases the likelihood that a large cadaver will be quickly located. Our findings are of general importance for applied forensic entomology: newly emerged N. vespilloides females on large cadavers can and should be regarded as potential indicators of prolonged post mortem intervals as our results clearly show that they prefer emitted odour bouquets of later decomposition stages. Introduction During the decomposition process of a cadaver, the occurring volatile organic compounds (VOCs), which are linked in quality and quantity to specific stages of decay [1][2][3], are reliable cues for appropriate succession niches of cadaver-associated insects [4,5]. For instance, the blowflies Calliphora vicina and Lucilia caesar (Diptera: Calliphoridae) and also the burying beetles Nicrophorus vespillo and N. vespilloides (Coleoptera: Silphidae), which are usually amongst the first insect visitors to a cadaver, can detect and orient towards sulfur-containing volatile organic compounds (S-VOCs), such as dimethyl sulfide, dimethyl disulfide and dimethyl trisulfide [5,6], which are produced by bacteria shortly after the death of an animal. Our forensic chemo-ecological study focuses on the burying beetle N. vespilloides, because, in this species, the question remains open with regard to the preference for odour bouquets of various decomposition stages in its dependency on carcass size. According to its name, the burying beetle N. vespilloides buries small vertebrate cadavers in the soil as food for its offspring [7]. Biparental care by one conspecific pair of beetles, which have secured a carcass suitable for reproduction, has been known for a long time in the taxon Nicrophorus [8]. The cadaver itself is rolled up under soil into a brood ball, the fur or, in the case of birds, the feathers being mechanically removed [8]. 
The brood ball is impregnated with anal and oral secretions of the beetles, both secretions of which are known to contain substances that reduce the microbial colonization of cadavers [9,10]. Hatched larvae are fed by their parents in the form of regurgitated predigested carcass material, with the development of the larvae being completed in only seven days [8] at a temperature of 20uC. In Europe, the seasonal activity of N. vespilloides starts early in the season during late April and lasts until September [8,11]. In April, dense populations of this species emerge and the sexually immature females thereof immediately start with their egg-ripening feeding period as a prerequisite for their reproduction on small carcasses in May [8,11]. Reproduction on fresh cadavers without any existing infestations of competing carrion-associated species, such as blowflies, and with a low amount of microbial decomposers is highly advantageous [5,12]. Thus, burying beetles are able to detect a cadaver as early as 1 day post mortem over a distance of up to several kilometres [5,13]. However, in addition to the above-mentioned ability of fresh carcass detection, cadaver preference in burying beetles appears to depend on the size of the cadaver and the maturity of the beetles [14,15]. Burying beetles with mature ovaries favour small mice carcasses for reproduction, whereas newly emerged adults with immature ovaries tend to favour large cadavers as an important food source for ovarian development [8,14,15]. During the period when ovaries are maturing, dozens of N. vespilloides individuals converge on large cadavers that are too big for burial (.300 g, [8,16]). In forensic entomology, large insect-inhabited cadavers such as pigs or humans are important study objects for succession-based post mortem interval (PMI) estimations [17,18]. The entomofaunal succession of a huge richness of carrion-associated species accompanies the decomposition process [19]. In the fresh stage of decomposition, members of Calliphoridae and Sarcophagidae arrive at the cadaver [18]. In the bloated stage (inflated abdomen through gaseous by-products of putrefaction [20]), significant maggot masses can be observed [18]. The post-bloating stage (skin rupture and release of trapped putrefactive gases [20]) is dominated initially by large numbers of feeding fly maggots and predatory beetles such as Staphylinidae and Histeridae [18]. At the end of this stage and also at the beginning of the advanced decay stage (most of the flesh has disappeared, some soft tissue remains in the abdomen [20]), blowfly maggots migrate in intense numbers for pupation [18,21]. In the last stage of decomposition, namely the dry remains stage, only bones, hair and remains of dried-out skin remain [20]. Matuszewski et al. (2008) conducted a forensic entomological field study with decomposing domestic pig cadavers and found that the number of collected adults of burying beetles peaked in the post-bloating stage of decomposition. An early occurrence of Nicrophorus adults was not found, but they were collected until the last day of the study [21]. Peschke et al. (1987) registered the highest peak of N. vespilloides in the post-bloating stage of rabbit carcasses. Analogous to the study of Matuszewski et al. (2008), they collected no individuals in the fresh stage of decay, but, in lower abundances, in all the other remaining stages during the entire decomposition period [19]. 
The findings of the above-mentioned field studies raise the question as to how newly emerged N. vespilloides females with immature ovaries can be attracted to the different odour bouquets that occur during the whole course of cadaver decomposition. Therefore, the aim of our forensic chemoecological study has been to investigate whether newly emerged N. vespilloides females are attracted to the odour bouquets of piglet cadavers and whether they show any preferences for specific decomposition stages (fresh, bloated, post-bloating, advanced decay and dry remains). We collected carcass volatiles of maggot-infested piglet cadavers by means of a headspace sampling technique in the field. We conducted our chemical attraction experiments on a Kramer sphere ('open loop' device [22]) in order to find significant differences in the walking tracks and walking parameters of tested burying beetles with regard to distinct offered odour bouquets of piglet cadavers in the above-mentioned five decomposition stages.

Ethics Statement

All necessary permits were obtained for the described field studies. No animals were killed for this study. Experiments were conducted with stillborn piglets obtained from a local pig farm (Josef Möst, Jedesheim, Germany).

Rearing of Burying Beetles

Experimental burying beetles, Nicrophorus vespilloides, were trapped in carrion-baited pitfall traps in a deciduous forest near Freiburg, Germany (48°00′N, 07°51′E). Beetles were reared for 6 generations at the Institute of Experimental Ecology (University of Ulm, Germany). A maximum of four adult beetles of the same sex were kept in moist peat substrate in transparent plastic boxes (100 mm × 100 mm × 65 mm) in a climate chamber under a 16:8 light/dark regime, an environmental temperature of 20°C and a humidity of approximately 80%. Decapitated mealworms and mice cadavers (for reproduction purposes) served as a food supply. Shortly after eclosion, the newly emerged female beetles were maintained separately in a climate chamber under an 8:16 light/dark regime (simulation of short days), an environmental temperature of 15°C and a humidity of approximately 80%. Such rearing parameters are necessary to retard gonad development (Müller, personal observation). At 20°C and under a 16:8 light regime, N. vespilloides is known to become sexually mature after about 14-20 days. Females kept at colder temperatures mature much later. Egg-laying experiments (with a supply of mouse carrion to trigger egg-laying) conducted after 30 short cold days in a climate chamber revealed, in 10 out of 10 cases, no positive oviposition events. This result was regarded as a reliable indication of still-immature gonads, even at 30 days after eclosion. For reliability, exclusively female beetles with ages between 4 and a maximum of 19 days after eclosion were used for our bioassays.

Headspace Sampling of Piglet Cadavers

During two consecutive exposure periods within a fenced grassland in Neusäß (Bavaria, Germany) in summer 2011, we collected 241 headspace volatile samples from a total of 4 piglet cadavers (Sus domesticus, 2 kg individual weight). The cadavers were exposed in wire dog cages (63 cm × 48 cm × 54 cm, Primopet GmbH, Germany) in order to allow insect infestation but to exclude larger scavengers such as crows or foxes. The ambient temperature in the surroundings of a cadaver was logged every 30 minutes with a Voltcraft DL-100T Data Logger (Voltcraft, Germany) mounted inside the wire cage.
Volatiles of the first two piglets were sampled daily from June 6, 2011 to July 16, 2011. These two piglets passed through the following 5 stages of decomposition: fresh (days 1-4 post mortem, T_mean = 19°C ± 8°C); bloated (days 5-7 post mortem, T_mean = 17°C ± 8°C); post-bloating (days 8-11 post mortem, T_mean = 24°C ± 10°C); advanced decay (days 12-25 post mortem, T_mean = 21°C ± 10°C) and dry remains (days 26-40 post mortem, T_mean = 22°C ± 11°C). The second two piglets were sampled daily from July 25, 2011 to August 28, 2011. These two piglets passed through the following 5 stages of decomposition: fresh (days 1-2 post mortem, T_mean = 21°C ± 11°C); bloated (days 3-7 post mortem, T_mean = 20°C ± 8°C); post-bloating (days 8-14 post mortem, T_mean = 22°C ± 9°C); advanced decay (days 15-22 post mortem, T_mean = unknown) and dry remains (days 23-34 post mortem, T_mean = unknown). In order to compensate for individual differences in the course of cadaver decomposition, we used two different piglets in each distinct exposure interval. For the collection of cadaveric volatile compounds, we packed the piglets hermetically into commercial oven bags (Toppits®, 3 m × 31 cm extra broad). Incoming air at 100 ml/min was sucked through a charcoal filter (600 mg, Supelco, Orbo 32 large) for cleaning purposes by means of a membrane vacuum pump (DC12, FÜRGUT, Aichstetten, Germany). Subsequently, the air passed through the oven bag with the piglet cadaver inside. Over a sampling time of 4 hrs, the exiting air of the oven bag passed through an adsorbent tube in which the volatiles of the carcass were collected in 5 mg of Porapak® Q (Waters Division of Millipore, Milford, MA, USA) adsorbent material. For airflow control, an E29-C-150 MM2 sinker flowmeter (Air Products and Chemicals, Netherlands) was used. In order to obtain information about the ever-present environmental volatiles, we used an empty oven bag as a control and collected the volatiles analogously to the above-described conditions (flow rate = 100 ml/min). After the sampling procedure, we used 4 × 50 µl of a pentane/acetone (9:1) mixture (Sigma-Aldrich, Munich, Germany, HPLC grade) for the elution of the adsorbed volatile organic compounds. This elution procedure finally yielded sample volumes of approximately 100 µl. For later application of the headspace samples in Kramer sphere bioassays, these samples were stored in hermetically sealed glass ampules at -40°C.

Recording of Walking Behaviour, Data Processing and Analysis

After long-distance flights, burying beetles are known usually to land at some remove from a cadaver [5,13,23]. Thus, the walking tracks they finally cover can be recorded in a bioassay by using a walking beetle on top of a freely rotating ball (a so-called Kramer sphere [22]). The beetle is oriented with its antennae towards an offered scent-loaded air stream (Fig. 1). The movement of the ball indirectly represents the movement of the tested beetle and can be tracked and analysed. We attached the beetle's pronotum to a glass bar by a wax-colophony mixture, and this bar was vertically mounted above the apex of a freely rotating black-coloured Styrofoam ball (Ø = 8 cm, Fig. 1). Consequently, the beetles were not able to change their head position in relation to the stimulus ('open loop' device [24]). However, with their legs and their freely movable abdomen, they were able to move the Styrofoam ball in diverse directions between -90 and +90 degrees from the 0° direction of the stimulus.
The Styrofoam ball was floated on an upward-directed air stream. We tracked the locomotion of the beetles by means of an optical mouse that was mounted at the equator of the ball (Fig. 1), 3 mm above its surface [25]. Every 0.5 seconds (sampling interval), we computed, visualized and stored the displacement of the mouse pointer (position of the burying beetle) in the form of x and y coordinates by means of self-written Microsoft Visual C++ software. For visual control during the tracking procedure, the trajectories of the mouse pointer were visualized on the monitor of a laptop. When the pointer reached the window frame, the software relocated it back to the centre [25] (Fig. 1). The test duration was 5 minutes for each particular run. For analysis of the walking tracks, we calculated and compared the following 8 walking parameters [26-28], each defined in the remainder of this paragraph: MWS, MAV, ALV, AAV, TTL, UL, UF and TSWU. The 0° direction was the wind direction and consequently the direction of the odour plumes. Turns to the right were represented as negative angles and turns to the left as positive angles. MWS and MAV were calculated as mean values of 599 instantaneous walking speeds and angular velocities per individual beetle, respectively. The momentary angular velocity represented the velocity of the change in walking direction between two subsequent sampling intervals. Negative angular velocity indicated clockwise path rotations of the tested beetles. An individual vector originated at the starting point and ended at the final point of an individual run. Its spanning angle described the mean walking direction, and its mean length was the quotient of the vector length and the length of the whole distance a tested beetle had actually covered. The value of the mean length ranged between 0 (starting point = end point) and 1 (absolutely straight path). Therefore, ALV was calculated as the average degree of orientation for the tested individual beetles. AAV represented the degree of orientation for the whole population and therefore was calculated as the length and angle (by analogy to ALV) of the resultant vector of all individual mean vectors. TTL was calculated as the mean value of the sums of 599 instantaneously traversed distances per individual beetle. The computation of UL was carried out as the registration of the upwind displacement after the test period of 5 min and served as a measure of the orientation of the individual towards the tested odour plumes. The value of UF, as a measure of the degree of direct upwind movement [27], ranged between -1 (absolutely straight downwind movement) and +1 (absolutely straight upwind movement) and was calculated as the quotient of upwind length and total track length per individual beetle. TSWU was calculated as the total walking time in which angles less than 60° or greater than minus 60° from the wind direction (0°) were adopted per individual beetle.

Provision of Wind and Odour Stimuli

We installed the whole experimental setup inside a laboratory fume hood for a constant air flow supply. A self-constructed cardboard arena (24 cm × 24 cm × 19 cm) (Fig. 1) with black and white striped walls separated the Kramer sphere from the surroundings in order to avoid optical stimulation of the tested beetles. Additionally, all test runs were performed under red light in order to prevent flight behaviour triggered by artificial light sources.
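As an illustration of how several of the walking parameters defined above (MWS, TTL, UL and UF) could be derived from the 0.5-s coordinate samples, a minimal Java sketch follows. It is a simplified, assumption-laden example, not the self-written Visual C++ tracking software used in the study; in particular, the wind and odour direction is assumed here to lie along the positive y axis.

```java
/** Illustrative computation of MWS, TTL, UL and UF from (x, y) positions sampled every dt seconds. */
public class WalkingParameters {
    static void report(double[] x, double[] y, double dt) {
        // TTL: sum of instantaneously traversed distances between consecutive samples.
        double totalTrackLength = 0.0;
        for (int i = 1; i < x.length; i++) {
            totalTrackLength += Math.hypot(x[i] - x[i - 1], y[i] - y[i - 1]);
        }
        // MWS: mean of the instantaneous walking speeds.
        double meanWalkingSpeed = totalTrackLength / ((x.length - 1) * dt);
        // UL: net upwind displacement (wind/odour assumed along +y).
        double upwindLength = y[y.length - 1] - y[0];
        // UF: quotient of upwind length and total track length, between -1 and +1.
        double upwindFraction = upwindLength / totalTrackLength;
        System.out.printf("MWS=%.2f cm/s  TTL=%.2f cm  UL=%.2f cm  UF=%.2f%n",
                meanWalkingSpeed, totalTrackLength, upwindLength, upwindFraction);
    }
}
```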
The arena had two rectangular openings (8 cm × 3 cm) in front of and behind the mounted beetle in order to permit a constant laminar air stream as the carrier for applied cadaveric volatile organic compounds (Fig. 1). By means of the rectangular opening behind the beetle, we prevented an accumulation of cadaveric volatile compounds inside the arena. The air current velocity inside the arena was 50 cm/s (an appropriate value for anemotaxis behaviour [26]), as measured with an anemometer (SKYMATE SM-18, Speedtech Instruments, Virginia, USA) before each single test. To ensure that both beetle antennae were inside the laminar air current, we tested the structure of the odour plumes, previous to our test series, with the smoke of incense cones placed inside the expected laminar air stream. Odour stimuli were provided by an air-streamed Pasteur pipette (inner diameter of 5 mm) under a constant flow of 100 ml/min during the entire test duration of 5 minutes. The opening of the pipette (inner diameter of 1 mm) was inserted through a tiny hole (Ø = 2 mm) directly under the rectangular opening in front of the mounted beetle (Fig. 1). The tip of the pipette was positioned at a distance of 12 cm from the beetle's antennae and was charcoal precleaned (Alltech Associates Inc., Illinois, USA). Humidified air with a constant flow was maintained by using a membrane vacuum pump (DC12, FÜRGUT, Aichstetten, Germany). For each single test, we placed a wrinkled piece of filter paper (2.5 cm × 1 cm) impregnated with 20 µl of test solution (see below) inside the Pasteur pipette.

Bioassay Procedure and Applied Headspace Samples

Because N. vespilloides females typically search for carcasses during the few hours before sunset [29,30], we conducted our bioassays in a high-activity period 2 hours before lights-off in a climate chamber. All beetles were tested at a room temperature of about 20°C. Before application of a specific odour bouquet, each beetle was allowed 5 minutes of settling time on top of the sphere. After these 5 minutes, a 20 µl headspace sample diluted 1:10 with pentane (1/10th of the concentration after 4 hours of sampling time) or 20 µl of pure pentane as a control (Table 1) was impregnated on the filter paper by using a micro-syringe (100 µl, Göhler HPLC-Analysentechnik, Chemnitz, Germany). After evaporation of the solvent, the filter paper was introduced into a Pasteur pipette (see above). For the subsequent test period, the walking behaviour of the beetle inside the scent-loaded laminar air stream was recorded. Each beetle was tested in a random order against a maximum of four (in order to reduce tiring) of the following six test samples: pentane (solvent control); fresh; bloated; post-bloating; advanced decay; dry remains (Table 1). Between two consecutive test samples, the beetles were allowed a 5-minute resting time in a scentless laminar air flow. In order to avoid learning effects, each individual beetle was only tested once with the same test odour bouquet. If an individual walked less than 4 metres in the 5-minute test period, it was discarded. The maximal traversed walking distance was 35 metres and the average traversed walking distance was 22 ± 6 metres.

Statistics

The responses of newly emerged N. vespilloides females to various odour stimuli were compared by using a multivariate general linear model (GLM) with odour as the fixed factor and MWS, MAV, ALV, TTL, UL, UF and TSWU as dependent variables.
Levene's test of equality of error variances revealed homogeneous variances for all dependent variables (all P > 0.2). Computed walking parameters with significant effects in the model were further analysed with Tukey's honest significant difference (HSD) post hoc test (significance level = 0.05) to localize the significant differences between the five distinct test samples (Table 1) and the solvent pentane. All statistical analyses were performed by using SPSS (Version 19, IBM, USA).

Results

The type of the presented headspace sample had an effect on the recorded walking parameters (dependent variables) (F_30 = 1.741, P = 0.009). GLM tests of between-subjects effects showed that decomposition odour significantly affected MWS. MWS, and consequently also TTL, were significantly higher in the decomposition stages of post-bloating, advanced decay and dry remains in comparison with the pure solvent (Fig. 2A and Table 2). Additionally, in the advanced decay stage, the measure of beetle orientation towards its respective odour plume (UL) was significantly higher when compared with the supply of pure solvent (Fig. 2B and Table 2). As mentioned above, decomposition odour affected the MAV of females (F_5 = 2.292, P = 0.047), but the post hoc tests revealed no significant differences in the angular velocities between the six different odour bouquets (Tukey HSD, all P > 0.06, Table 3). However, the frequency distribution of the MAV showed, tendentially, a sharper peak (better orientation) and a smaller scatter (higher running smoothness) in the case of beetle antennae stimulation with advanced decay odour (scatter ± 2.96°/s, Fig. 3B and Table 3) when compared with the stimulation with the pure solvent (scatter ± 3.45°/s, Fig. 3A and Table 3). This result is in accordance with the highest mean upwind length and consequently the best orientation of beetles to decomposition odour in the advanced decay stage (mean UL = 1470 cm, Table 2, Fig. 2B and Fig. 3B). With regard to the degree of direct upwind movement, no difference was seen between the six distinct offered odours (UF in Table 3). The same was true for the mean TSWU (Table 3). Angles of less than 60° or greater than -60° from the 0° direction of wind and odour supply were adopted between 60% and 68% of the total walking time by all analysed burying beetle populations (TSWU in Table 3). The lengths of the resultant mean vectors (AAV) of all tested burying beetle populations showed a high degree of orientation relating to the six different offered test odours (all l_r ≥ 0.96, Table 3 and corresponding vector diagrams in Fig. 4C and D). The directions of these vectors also indicated a strong preference of beetles for the 0° direction of wind and odour stimuli (all angles phi between -7 and +3 degrees, Table 3, Fig. 4C and D). Correspondingly, during the 5 minutes of test duration in an air current of 50 cm/s plus a specific supplied odour plume, walking N. vespilloides individuals generally exhibited relatively straight and stable courses (ALV, all l ≥ 0.6, Table 3), independent of the nature of the offered odours (Fig. 4A and B), and therefore showed so-called anemotactic behaviour [26]. However, as mentioned above, the total track length depended on the nature of the offered odours, as is also depicted in exemplary form in Fig. 4A and B. When the odour of a post-bloated piglet was offered, a higher total track length (22.73 m) was achieved than when the pure solvent was provided (17.34 m; Fig. 4A and B).
Decomposition odour changed the wind orientation behaviour of the exemplary female in such a way that zigzag subcourses against the wind were the result (Fig. 4A). However, after a test duration with a decomposition odour supply for 5 minutes, the walking direction of the specific female was more upwind-oriented compared with the test situation with pentane ( Fig. 4A and B). Discrimination between Stages of Decomposition of Large Vertebrate Cadavers in Newly Emerged N. vespilloides Females In 1984, Wilson and Knollenberg detected newly emerged females of Nicrophorus tomentosus, N. orbicollis and N. defodiens with immature ovaries in baited pitfall traps that simulated a high concentration of carrion volatiles as is typical for large cadavers. In addition, they demonstrated that females with mature ovaries avoided large cadavers but showed, instead, a preference for small cadavers that are suitable for burying and reproduction. However, they had no clear explanation for the underlying proximate mechanisms of discrimination, such as different preferences for different chemicals (quantitative or qualitative) in odour bouquets of differently decomposed cadavers of various sizes [14]. In the current study, we have started to explore the underlying proximate mechanism; we have investigated whether immature females of N. vespilloides show any behavioural response towards the odour of large cadavers and determined which stage of composition they prefer. Our results suggest that newly emerged burying beetles females respond to and are able to discriminate between the odour bouquets of various decomposition stages of large cadavers: only the stages of post-bloating, advanced decay and dry remains lead to a significant increase of the mean walking speed, and not the fresh and bloated stages. Such a chemically triggered change of their walking speed indicates a higher motivation to locate such food sources. In contrast to less discriminating individuals this behaviour provides the advantage of not wasting time at unsuitable food sources. Our detected behaviour in walking beetles could probably be considered as congruent to the decisions made by burying beetles in flight. A flying beetle that does not waste time investigating a fresh cadaver would have a [8,31,32]), predation, fights or poor feeding might occur. On unburied large cadavers such as piglets like in our study (feeding substrate for ovarian development as a prerequisite for reproduction), newly emerged burying beetles as early feeders should have a fitness advantage in competition with large numbers of other carcass-associated insects (possibility of rapid consumption of whole cadaver tissue by necrophagous flies) or vertebrate scavengers. Most likely, the perception of such a valuable large food source increases the beetles' motivation and consequently their willingness to invest the larger amount of energy that is needed for faster movement. Another consequence of a higher walking or flight speed is shown in the eucalyptus woodborer Phoracantha semipunctata (Coleoptera: Cerambycidae). In this species, a faster flight speed is coupled with path linearity and a lower turning rate in the case of permanent contact with an odour plume [33]. 
In Nicrophorus humator, path linearity increases the travelled distance between the starting and endpoint (and consequently the range of the explored environment) from usually 1 m in a windless bioassay environment to 9 m in an air current of 100 cm/s (5 minutes of walking time on a locomotion compensator; [26]). These aspects are especially important for the burying beetle, which has to detect cadavers over distances of up to several kilometres [5,13]. Higher mobility probably increases the chance for cadaver detection. From a forensic entomological point of view, we find it interesting that newly emerged N. vespilloides females with immature ovaries show a strong preference for the odour bouquets of later stages of decomposition (from post-bloating over advanced decay to dry remains; days 8-31 post mortem, T mean = 19uC) of large cadavers. More precisely, these females show not only a higher walking speed, but also a tendency to higher running smoothness and the highest orientation towards odour plumes of decomposed cadavers in later stages. These findings are also supported by several succession and decomposition-based field studies. Peschke et al. (1987) performed extensive field investigations with rabbit carcasses of approximately 2800 g in weight, similar to the weight of our piglet cadavers, in Bavaria in Germany (the same federal state as in our study) from 1976 to 1983. In accordance to the preferences found in our study, they registered the highest abundance of N. vespilloides in the post-bloating stage and they collected no individuals at the fresh stage of decay [19]. In the remaining decomposition stages (bloated, advanced decay and dry remains), they also collected N. vespilloides individuals but with lower abundances compared with the post-bloating stage [19]. Matuszewski et al. (2008) performed forensic entomological field studies to determine insect succession and carrion decomposition in various forest habitats of western Poland. They used domestic pig cadavers of a mean weight of 34 kg as adequate models for human corpses. Nicrophorus adults could be collected right up until the last day of the study with the highest abundance in the post-bloating stage [21]. Matuszewski et al. (2008) also stated that the early occurrence of adult Nicrophorus species was not found in decomposition studies with large cadavers, a finding that agrees with our results from the tracking analysis. A possible explanation for our findings and also for the observations of the cited studies could be that newly emerged burying beetle females with immature ovaries prefer large cadavers in order to feed on blowfly maggots [8,11,15,21,34,35]. This is supported by the results of a field study of Kentner and Streit (1990) with 9 exposed rat cadavers in various biotopes. They stated that adult Nicrophorus species are predators and feed only rarely on decomposed meat. The authors concluded that adult burying beetles are also attracted by older cadavers where they feed upon fly maggots [15]. The preference of odour bouquets emitted by large cadavers in later stages of decomposition, such as the post-bloating or the advanced decay stage might help burying beetles to detect suitable feeding sites, as the dominance of feeding and migrating blowfly larvae is the highest in these stages of decay [21]. In the post-bloating stage, masses of maggots have been found to feed on the soft tissues of a cadaver and, in the initial advanced decay stage, an intense migration of maggots can be observed [18,21]. 
Blowfly larvae excrete urea and allantoin, which give the breeding substrate a characteristic intense smell. The antimicrobial properties of urea and allantoin cause a reduction in the microbial decomposition of the corpse [36,37], which additionally affects the odour bouquets of cadavers and consequently influences the specific scent attraction of carcass-associated insects such as the burying beetle. During our headspace sampling procedure in the field, we included blowfly maggots and offered the complete odour bouquets of maggot-infested piglet cadavers in our tracking experiments. We detected a dominance of dipteran larvae in the post-bloating stage and the migration of post-feeding L3-larvae (third instar) in the advanced decay stage during our field work (headspace sampling) in this study. The odour bouquets of the two stages with the highest dominance of feeding and migrating blowfly larvae (a good food source for female burying beetles) elicited a higher mean walking speed of the beetles in the tracking experiments. The attractiveness of cues from cadavers with substantial blowfly maggot populations indicates that these cadaver inhabitants are of major importance in the diet of burying beetles. Further studies will be needed to clarify whether newly emerged burying beetles seek out large cadavers mainly to feed on fly larvae (as assumed by Kentner and Streit (1990), see above) or whether they also feed directly on cadaver substrate. From a phylogenetic point of view, the majority of carrion beetles (Coleoptera: Silphidae) are known to feed on cadavers of either vertebrates or invertebrates [38]. Only the more derived taxa Ablattaria, Dendroxena and Phosphuga are highly specialized predators of snails or caterpillars [39][40][41]. If Silphidae and Staphylinidae are sister taxa [42] then their last common ancestor might have been a predator of fly maggots, because many staphylinids live predaceously on fly larvae. Interestingly, the odour of the dry remains stage, i.e. the period at which arthropod activity has almost ceased, also raised the mean walking speed of N. vespilloides females. Electrophysiologically active (EAD-active, 'smellable') compounds might be present in higher quantities in a decomposition stage that only consists of hardened skin and bones than in earlier decomposition stages (von Hoermann, unpublished data) and therefore could modify the beetles behaviour. Future consideration of available cues (constraints of sensory detection) versus adaptive behaviour might aid our understanding of the response to dry remains odour. It is possible, that newly emerged females are not able to perceive fresh cadavers. In that case constraints in sensory detection rather than adaptation explain why young females respond to later decomposition stages in our experiments. Examining the olfactory capabilities of burying beetles and the chemical composition of cadaver odours will help to determine if sensory constraints are responsible for our observations. Currently, we are conducting GC-EADs (gas chromatography coupled with electroantennographic detection) with the antennae of newly emerged N. vespilloides females and chemical analysis (coupled gas chromatography-mass spectrometry (GC-MS)) in order to identify the patterns of behaviourally active cadaveric VOCs in this species over time (von Hoermann, in preparation). Orientation of Newly Emerged N. vepilloides Females in Decomposition Odour-loaded Air Streams Our results show that newly emerged N. 
Our results show that newly emerged N. vespilloides females exhibit a typical anemotactic behaviour in a constant air current of 50 cm/s. All tracked courses are relatively straight and stable, regardless of which specific odour bouquet is offered. Heinzel and Böhm (1989) stated that such a general wind-orientation behaviour could improve the search for odour plumes (and consequently the cadaver itself) in the case of a possible loss of contact during the landing procedure at some distance from the cadaver. This proposed explanation is in accordance with our finding that pure solvent in combination with an air current (analogous to lost contact with odour plumes) also arouses wind-orientation. The wind-oriented straight walking behaviour in air currents of 50 to 150 cm/s has previously been demonstrated for another burying beetle species, Nicrophorus humator [26]. In a windless environment, on the other hand, this species shows an inherent internal turning tendency [26].
[Figure 4. Representative walking tracks of individual female beetles provided with headspace samples of (A) a post-bloated piglet cadaver and (B) pentane solvent. The black dots mark the starting points of the respective 5-min runs and the black arrowheads indicate the endpoints and walking directions. (C, D) Representative vector diagrams of the mean walking directions of females (N = 40) when headspace samples of (C) a post-bloated piglet cadaver and (D) pentane solvent were provided. Black arrows denote the common direction (0°) of wind and odour stimuli. Each tested individual beetle was mounted with its head upwind against the 0° direction. doi:10.1371/journal.pone.0058524.g004]
When we look in more detail at the walking characteristics of individual N. vespilloides tracking paths, we can find zigzag-shaped walking reactions (and therefore higher total track lengths) in air-current fields loaded with decomposition odour bouquets. The same is true for N. humator in an air current with carrion odour added in the form of successively offered pulses lasting 0.85 seconds [43]. Such behaviour is also well known from moths searching for pheromone sources [44,45]. Moths sense the overall shape of pheromone plumes during zigzag flights by flying in and out of the plume borders in an alternating manner ('aerial trail following', [45]). Similar chemo-orientation mechanisms are also known in walking ants that follow ground-deposited pheromone trails [46]. Therefore, in the burying beetle N. vespilloides, successive comparisons of the positions of decomposition odour plumes in combination with wind-orientation [26] might enable the beetles to walk along the plume's long axis towards the cadaveric resources [47]. In locomotion compensator experiments, Böhm and Wendler (1988) found that N. humator measures the actual air stream and the actual concentration of carrion volatiles. They stated that the integration of both inputs is necessary for the insect to reach an appropriate odour source by means of wind-orientation, as they were able to demonstrate that, even after one odour stimulus, the wind-orienting behaviour of N. humator could be changed and the walking direction could be directed more upwind after the whole odour stimulus period [43]. Because N. vespilloides females also exhibit wind-oriented walking tracks with zigzag-shaped structures and a more upwind orientation at the end of a 5-minute run, we conclude that this statement is also valid for this species.
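The vector diagrams described in the figure legend above summarize each beetle's heading as a unit vector and average them. A compact way to reproduce that kind of summary is the circular mean and mean resultant length sketched below; this is a generic illustration rather than the analysis code used in the study, and the only convention carried over from the text is that 0° is the common upwind/odour direction.

```python
import numpy as np

def circular_summary(headings_deg):
    """Mean direction (degrees) and mean resultant length R of a set of
    headings. R ranges from 0 (uniform scatter) to 1 (all individuals
    walking in the same direction)."""
    theta = np.deg2rad(np.asarray(headings_deg, dtype=float))
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    mean_dir = np.rad2deg(np.arctan2(S, C)) % 360.0
    R = np.hypot(C, S)
    return mean_dir, R

# Hypothetical headings of 40 beetles; 0 deg = upwind (wind and odour direction).
rng = np.random.default_rng(1)
headings = rng.normal(loc=0.0, scale=25.0, size=40) % 360.0
mean_dir, R = circular_summary(headings)
print(f"mean direction: {mean_dir:.1f} deg, resultant length R: {R:.2f}")
```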
Conclusions
In a forensic chemo-ecological approach with a highly sensitive 'open-loop' tracking system, we tested the preference of newly emerged N. vespilloides females with immature ovaries for odour bouquets of large cadavers at five different decomposition stages. We have been able to show that sexually immature females prefer odour bouquets of large cadavers only when they are in later stages of decomposition (from post-bloating over advanced decay to dry remains; days 8-31 post mortem, Tmean = 19°C). We assume that volatiles from large numbers of blowfly maggots in combination with cadaveric odour bouquets are responsible for this phenomenon in the necrophilous and predacious species N. vespilloides. Additionally, our study indicates that immature N. vespilloides females show zigzag-shaped walking reactions inside relatively straight wind-oriented tracking paths as a search strategy for reaching large cadavers, as previously discussed for the Nicrophorinae in N. humator [43]. This is the first study in which the attraction of newly emerged N. vespilloides females to headspace samples of maggot-infested piglet cadavers has been investigated during an entire decomposition period. At present, we are studying qualitative and quantitative differences of EAD-active volatiles at the various decomposition stages of maggot-infested large cadavers. Electrophysiological experiments with N. vespilloides antennae and subsequent GC and GC-MS analyses should much improve our knowledge about the nature of the substances responsible for the preference for later decomposition stages.
v3-fos-license
2020-12-10T09:01:29.870Z
2020-01-01T00:00:00.000
234908336
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scielo.br/pdf/pd/v38/0100-8358-PD-38-e020215165.pdf", "pdf_hash": "9424f5ea7e8196590f12f2068779cf2eb2e48a2a", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46341", "s2fieldsofstudy": [], "sha1": "84154c87b1aff7734ac65df0821ccadda80b9568", "year": 2020 }
pes2o/s2orc
Response of imidazolinone-resistant and -susceptible weedy rice populations to imazethapyr and increased atmospheric CO2
Background: Weedy rice (Oryza sativa L.) is the main weed of the rice crop. The high genetic variability of weedy rice contributes to the high phenotypic diversity between biotypes and to different responses to environmental stress. Objective: The present study aimed to evaluate the response of imidazolinone-susceptible and -resistant weedy rice populations to increased atmospheric [CO2]. Methods: The experiment was arranged in a completely randomized design with six replications. The treatments combined two [CO2] concentrations (700 and 400 μmol mol-1) with three weedy rice treatments: a resistant genotype (IMI-resistant) treated with imazethapyr, the resistant genotype without imazethapyr, and a susceptible genotype without imazethapyr. Results: The IMI-resistant and -susceptible weedy rice responded similarly to [CO2] enrichment. Enhanced [CO2] increased the competitive ability of the weedy rice populations tested, by means of increased plant height. Weedy rice seed production also increased with enhanced [CO2], through an increased photosynthesis rate and reduced transpiration (i.e., increased water use efficiency). Increased seed production also means increased weed persistence, as it increases the soil seedbank size. The application of imazethapyr on IMI-resistant weedy rice did not alter its response to [CO2]; conversely, increased [CO2] did not change the resistance level of weedy rice to imazethapyr. High [CO2] increased spikelet sterility, but this beneficial effect was negated by the overall increase in the production of filled grains. Conclusions: Enhanced [CO2] concentrations increase weedy rice growth, photosynthesis rates, seed production and spikelet sterility; imazethapyr application does not affect the response of weedy rice to enhanced [CO2], nor does enhanced [CO2] affect the weedy rice response to the imidazolinone herbicide.
The Intergovernmental Panel on Climate Change (IPCC) estimates that, by the end of the 21st century, global climate changes caused by the emission of greenhouse gases will lead to an increase in atmospheric [CO2] to above 700 μmol mol-1 (IPCC, 2014). Such conditions will cause major changes in the earth's climate, which will then drive changes in agricultural zoning, methods of crop management, and crop yields (Wang et al., 2017). At the plant level, climate change drives modifications in plant physiology, morphology, and biology that enable adaptation to biotic and abiotic stresses. Likewise, the distribution, abundance, and severity of insect pests, diseases, and weeds are projected to change, as is already occurring (Korres et al., 2016). Climate change contributes to the constant adaptation of agriculture (Tokatlidis, 2013). Climate change effects on agricultural production can be positive in some farming systems and regions and adverse in others (Obirih-Opareh and Onumah, 2014). At the plant level, we can observe the interacting effects of increasing [CO2] and temperature on plant performance. The benefits of elevated atmospheric [CO2] can be minimized or negated by high temperatures (Korres et al., 2016). Walker et al. (2016) pointed out that, with increasing temperature, photorespiration will increase, which could negatively affect yield under future climates despite increases in carbon dioxide. Weedy rice (Oryza sativa L.)
is a global weed in rice production, which is most difficult to control because of its high similarity to cultivated rice in genetic, morphological, physiological, and biochemical traits. This hampers its selective chemical control (Sudianto et al., 2016) as it does mechanical and manual weeding. Rice yield losses due to weedy rice infestation can reach 50% in the USA (Shivrain et al., 2010). In their review article on weedy rice, Ziska et al. (2015) reported yield losses between 35 and 100% in direct-seed rice. The Clearfield ® rice production system, which uses cultivars resistant to the imidazolinone herbicides, has allowed selective control of weedy rice (Merotto Jr et al., 2016;Sudianto et al., 2016). Imidazolinone herbicides inhibit acetolactate synthase (ALS), which catalyzes the synthesis of branched chain amino acids. The low outcrossing rate between rice and weedy rice and the general inability of farmers to prevent seed production from outcrosses has produced contemporary populations of ALS-inhibitor-resistant weedy rice in the southern USA (Shivrain et al., 2007;Burgos et al., 2008) southern Brazil (Menezes et al., 2009;Roso et al., 2010), Greece (Kaloumenos et al., 2013); Italy (Andres et al., 2014) and other regions where Clearfield TM rice has been adopted (Sudianto et al., 2016). These herbicide-resistant weedy rice populations carry some crop traits and are even more diverse than the historical weedy populations (Burgos et al., 2014), making weedy rice management more challenging. Although cultivated and weedy rice are the same species, the weedy traits of the former led us to believe that these two types of Oryza will respond differently to climate change. Some researchers have explored the behavior of weedy rice in relation to climate change, specifically, increased atmospheric [CO2] (Ziska and McClung, 2008;Ziska et al., 2014). In a recent study, Refatti et al. (2019) found that increasing atmospheric [CO2] and temperature may increase the speed of junglerice resistance evolution to herbicides. The combined effect of increased [CO2] and herbicide application on weeds is an important aspect to address in relation to crop production. This current work aimed to evaluate the response of imidazolinone-resistant and -susceptible weedy rice populations to imazethapyr and increased [CO2]. MATERIAL AND METHODS Two genotypes of weedy rice (Oryza sativa spp. indica), similar in morphology and growth cycle, were evaluated. These were collected from the municipality of Dom Pedrito, Mesoregion of Campanha, in Rio Grande do Sul (RS) State (31 o 02'07" S; 54 o 52'02" W), in the 2012/2013 crop season from commercial rice fields. To produce relatively homogeneous 'populations' and increase the seed volume, seeds from field-collected accessions were planted for three generations -1 st year in Arroio Grande, RS; 2 nd year in Capão do Leão, RS; and 3 rd year in Fayetteville, AR, USA). Atypical plants were removed during each cycle. Within this collection, herbicide-resistant (15-189) and -susceptible (15-214) genotypes were chosen based on similarity in morphology and phenology. Genotype 15-189 was confirmed resistant and 15-214 was confirmed susceptible to imidazolinone herbicides in previous resistance screening tests (Menezes et al., 2009). Using the seeds from the homogenous populations, a growth chamber experiment was conducted in 2016. 
The experiment was arranged in a completely randomized design with six replications in a factorial arrangement. Factor A consisted of two environmental conditions (CO2 levels of 400 and 700 μmol mol-1). Factor B included three weedy rice treatments: the IMI-resistant genotype '15-189' treated with imazethapyr, the IMI-resistant genotype without herbicide, and a susceptible genotype without herbicide. We did not conduct a factorial arrangement of treatments (genotype x herbicide) because the herbicide would kill the susceptible plants. The plants were grown in two growth chambers (Conviron™, model PGW36) with atmospheric [CO2] of 400 and 700 μmol mol-1, respectively. Both growth chambers were set at a 14/10 h photoperiod (day/night), 600 μmol m-2 s-1 photosynthetically active radiation (PAR), and 34/26 °C (day/night) temperature programmed across a gradient with the peak temperature occurring at mid-day. Starting from the V4 growth stage, the plants were kept in trays with a constant water level to simulate flooding. Imazethapyr (Newpath™, BASF) was applied at the V3-V4 growth stage at 106 g a.i. ha-1 with 1% by volume crop oil concentrate (COC). The herbicide was applied in a spray chamber equipped with a motorized spray boom fitted with two 800067 flat fan nozzles that delivered a 187 L ha-1 spray volume at 276 kPa. The spray droplets were allowed to dry before the plants were returned to the growth chamber. Data were subjected to analysis of variance and, when the effect of genotype was significant, the means were compared using Tukey's test (p<0.05). The statistical analysis was conducted using the RStudio program, version 1.0.143. Regression analysis was conducted for plant height and the number of tillers over time. A cubic polynomial model was fitted to the data, based on the coefficient of determination (R²), the statistical significance (F-test), and the goodness-of-fit of the model.
RESULTS AND DISCUSSION
Plant height was not affected by [CO2] at the beginning of the growing season but, starting at about 80 DAE, the plants were taller under elevated [CO2] (700 μmol mol-1) compared with those in ambient [CO2] (400 μmol mol-1), regardless of genotype or herbicide treatment (Figure 1). Weedy rice growing taller under high [CO2] has important practical ramifications. First, weedy rice is already generally taller than cultivated rice (Shivrain et al., 2010). This contributes to the competitiveness of weedy rice with cultivated rice for obvious reasons. Furthermore, weedy rice has weak stalks. At high densities, weedy rice would lodge, taking down the rice crop with it, thereby increasing harvest losses. For weedy rice to grow even taller, or faster, is disastrous to rice production. Second, a plant height differential between weedy and cultivated rice could alter the gene flow rate between the weed and the crop. This is highly relevant with respect to the continued use of herbicide-resistant rice technology to manage weedy rice and other weedy species in rice production and the resulting gene flow from crop to weed (Shivrain et al., 2008, 2009). Depending on the response of the weedy ecotype and cultivated rice, increased [CO2] may reduce the height differential between the weed and crop, resulting in increased cross-pollination (Gealy et al., 2003).
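The statistical workflow summarized in the Methods above (analysis of variance with Tukey's test for mean separation, plus a cubic polynomial fitted to growth over time, run in RStudio in the original study) can be sketched in Python as follows. The data frame, factor levels and values below are invented placeholders, not the experimental data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical plant-height data: CO2 level, treatment and days after emergence (DAE).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "co2":       np.repeat(["400", "700"], 90),
    "treatment": np.tile(np.repeat(["R+imazethapyr", "R", "S"], 30), 2),
    "dae":       np.tile(np.arange(10, 130, 4), 6),
    "height":    rng.normal(80, 10, 180),
})

# Two-factor ANOVA followed by Tukey's HSD on the treatment factor.
model = ols("height ~ C(co2) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
print(pairwise_tukeyhsd(df["height"], df["treatment"]))

# Cubic polynomial of height against time, judged by its R^2 as in the paper.
coeffs = np.polyfit(df["dae"], df["height"], deg=3)
pred = np.polyval(coeffs, df["dae"])
ss_res = np.sum((df["height"] - pred) ** 2)
ss_tot = np.sum((df["height"] - df["height"].mean()) ** 2)
print("cubic fit R^2:", 1 - ss_res / ss_tot)
```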
In the same context, Ziska et al. (2012) found that the increase in atmospheric [CO2] increased the average height of cultivated and weedy rice plants and increased the synchronization of flowering between the weed and the crop, resulting in increased gene flow from cultivated rice to weedy rice. Ziska et al. (2012) studied three concentrations of atmospheric CO2: preindustrial (300 μmol mol-1), current (400 μmol mol-1) and projected (600 μmol mol-1). The authors recorded higher synchronization of flowering and cross-fertilization between cultivated rice 'CL 161' and weedy rice (StgS) under the highest [CO2]. In turn, this increased the number of weedy rice types and the number of herbicide-resistant hybrid weeds. These results, although preliminary, suggest that increased [CO2] may alter the synchrony of flowering between the crop and some genotypes of the weedy relative and may reduce the effectiveness of herbicides through the transfer of herbicide-resistance genes to the weedy relative. The number of tillers tended to increase under high [CO2] in both genotypes (Figure 2). The regression parameters for this response variable are presented in Table 1. On average, the IMI-resistant genotype produced 21.5 tillers at 67 DAE while the susceptible genotype had 15.8 tillers. At this time, tiller production had reached its peak. In the field, without competition, strawhull weedy rice (like the genotypes used in this experiment) can produce an average of 85 tillers per plant (Shivrain et al., 2010). Tillering is crucial for competition (and weediness) because it determines how much space the plant can occupy to crowd out other plants. It also directly relates to the amount of nutrients the weed can mine from the soil to support biomass production. One reason that weedy rice is highly competitive against cultivated rice is that the former can produce about 3X to 9X more tillers than the latter (Shivrain et al., 2006, 2010). The number of tillers also contributes directly to the number of panicles per plant and, consequently, to seed production.
Photosynthesis-related responses
The photosynthetic parameters and the chlorophyll meter measurements (SPAD) are shown in Table 2. The SPAD reading is indicative of the chlorophyll content of the leaf (Hopkins and Hüner, 2009). Since the enzyme affinity for oxygen is higher than for CO2, respiration is favored under a low supply of CO2 and the plant's energy is wasted. Another consequence of this inefficient process is low water use efficiency (Rawson et al., 1977; Morison and Gifford, 1983). When the CO2 limitation is relieved, the photosynthesis rate is expected to increase and water use efficiency to improve, as observed in our experiment. In the long term, it is expected that, under higher [CO2], fewer stomata are needed for the plant to acquire sufficient CO2 from the air, as can be inferred from decades-old research on the stomatal behavior of C3, C3-C4, and C4 species (Huxman and Monson, 2003). Stomata also do not need to stay fully open during the day when [CO2] is high. Both situations reduce transpiration; therefore, water use efficiency is also expected to increase. All of the above should lead to higher yields of C3 plants, such as rice, under high [CO2].
Seed production
The susceptible genotype had higher spikelet sterility, fewer panicles, and, consequently, lower seed production per plant than the IMI-resistant genotype, averaged across [CO2] (Table 3).
Again, this genotype difference could not be attributed to the resistance trait because these were different populations. The more relevant information pertains to the effect of [CO2] on the seed production of these weedy rice populations. Averaged across genotypes, increasing the [CO2] did not increase the number of panicles per plant. This is expected considering that high [CO2] generally did not increase the number of tillers (Figure 2). On the other hand, high [CO2] reduced the percentage of sterile spikelets, which partially contributed to increased seed production (Table 3). Under ambient [CO2], weedy rice produced 1,105 g of seed per plant. The number of seeds produced by weedy rice increased by 7% under high [CO2] compared with ambient [CO2]. It is important to note that the high number of seeds per plant is due to the fact that the plants were grown in isolation, without competition (one plant per pot). Ainsworth (2008) reviewed rice responses to elevated [CO2] across experiments. In a study conducted in a FACE (free-air carbon dioxide enrichment) system to evaluate the interaction of nitrogen fertilizer application and increased atmospheric [CO2] in rice, Liu et al. (2008) found that rice yield increased by up to 34% in the enriched environment compared with the normal environment. Comparing the results of this study with those of Kim et al. (2003) and Yang et al. (2006) suggests that the benefit of elevated [CO2] depends on another growth factor; in these studies, that factor is nitrogen. Therefore, for rice farmers to be able to take advantage of a high CO2 level, for instance, they would need to use more fertilizer. However, the study of Zhu et al. (2008) showed that rice does not respond significantly to high N fertilizer under high [CO2]; on the contrary, the C4 barnyardgrass (Echinochloa crus-galli) does. The optimization of crop production and weed management to keep agriculture sustainable certainly becomes more complex as we experience climate change. High temperature or low availability of N may lead to a limitation of photosynthetic sinks (smaller number of tillers, spikelet sterility, among others), resulting in a reduction of photosynthetic capacity (Kim et al., 2003). Without balancing other growth factors, high [CO2] or high temperature may have adverse effects on crop productivity. On the other hand, when there is an adequate supply of N, and we have climate-resilient varieties, high yield can be realized under various scenarios of climate change (Hasegawa et al., 2013; Shimono and Okada, 2013; Ziska et al., 2014). When evaluating two rice cultivars under high atmospheric [CO2], the cultivar with the higher yield showed a higher sink/source ratio, higher gene expression of RuBisCO, and higher RuBisCO activity (Zhu et al., 2014). Therefore, crop varieties (not just different weed species or weed genotypes) can respond differentially to climate change factors. Within the Oryza genus, there is high diversity in the growth and yield of weedy rice ecotypes or genotypes and rice cultivars in response to high temperature and elevated [CO2] (Ziska et al., 2014). For this reason, rice improvement programs must include the use of genotypes responsive to increased [CO2], especially those capable of producing more tillers and, consequently, higher yield. The efficacy of herbicides may be affected by increasing atmospheric [CO2], as high [CO2] could change the plant morphologically, physiologically, and phenologically.
These changes could be reflected in leaf morphology, the root/shoot ratio, a possible reduction in the protein content of the leaf (the site of action of some herbicides), changes in plant anthesis, or changes in the plant community (Ziska and Bunce, 2006; Ziska et al., 2004; Ziska, 2016). In this context, Ziska and Goins (2006) evaluated the weed seed bank during a growing season and found that the number of C3 grass plants was higher than that of C4 grass plants, along with other significant changes in the weed population of the area. The efficacy of a herbicide on a particular weed could also be reduced as a result of increased root biomass relative to shoots, as reported by Ziska et al. (2004) for the reduced efficacy of glyphosate on Canada thistle (Cirsium arvense) under elevated [CO2]. Consequently, weed management approaches need to be adjusted.
CONCLUSIONS
IMI-resistant and -susceptible weedy rice respond similarly to [CO2] enrichment. Increased [CO2] increases the competitive ability of the weedy rice populations tested, by means of increased plant height. Weedy rice seed yield also increases with increased [CO2], through an increased photosynthesis rate and reduced transpiration (increased water use efficiency). Increased seed production also means increased weed persistence, as it increases the soil seedbank size. The application of imazethapyr on IMI-resistant weedy rice did not alter its response to [CO2]; conversely, increased [CO2] does not change the resistance level of weedy rice to imazethapyr. High [CO2] increases spikelet sterility, but this beneficial effect is negated by the overall increase in the production of filled grains.
v3-fos-license
2023-05-28T15:16:20.165Z
2023-05-25T00:00:00.000
258939263
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2306-7381/10/6/373/pdf?version=1685086193", "pdf_hash": "1302ab64659cb2830fd4d3b693c9d5e2875dfc6f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46342", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "45ab1b6e5c5494a62a78dd73d99b81bcf683b88e", "year": 2023 }
pes2o/s2orc
Incorporation of Testicular Ultrasonography and Hair Steroid Concentrations in Bull Breeding Soundness Evaluation
Simple Summary
Bull subfertility has a major impact on the efficiency of production and the profitability of cattle enterprises. Bulls typically undergo a bull breeding soundness evaluation (BBSE) to predict potential fertility. The present study investigated whether a more comprehensive index of indicative fertility could be developed in bulls by including testicular ultrasonography and hormonal status in the BBSE. Bulls with homogeneous testicular parenchyma showed a higher percentage of motile sperm post-thawing compared with bulls with heterogeneous parenchyma. In bulls with homogeneous parenchyma, the percentage of motile sperm, progressively motile sperm, and motility yield were positively correlated with hair DHEA-S concentration. The use of testicular ultrasonography and DHEA-S status in the BBSE would provide a more comprehensive assessment of potential fertility in bulls. In addition, ultrasonography can be used in the BBSE when the evaluation of semen parameters is not available.
Abstract
Testicular ultrasonography and steroid concentrations (cortisol, dehydroepiandrosterone sulfate (DHEA-S), cortisol/DHEA-S ratio, testosterone) in hair were examined for their utility in the bull breeding soundness evaluation (BBSE). Beef and dairy bulls (n = 16; 2.7 ± 0.4 years old; body condition score 3.2 ± 0.1) of five breeds were maintained under the same conditions at an accredited semen collection center. Bulls underwent routine semen collection twice weekly for 12 weeks and the semen was processed and cryopreserved. Ultrasonography and hair sampling were undertaken at the last semen collection. Bulls with homogeneous testicular parenchyma (n = 8) had a higher (p < 0.05) percentage of motile sperm post-thawing compared with bulls with heterogeneous parenchyma (n = 8). There were no differences (p > 0.05) in the hair concentrations of cortisol, DHEA-S, and testosterone between bulls with homogeneous and heterogeneous parenchyma. In bulls with homogeneous parenchyma, hair DHEA-S concentration was positively correlated with the percentage of motile sperm (R2 = 0.76), progressively motile sperm (R2 = 0.70), and motility yield (R2 = 0.71). The findings indicate that the integration of testicular ultrasonography and hair DHEA-S status in the BBSE could provide a more comprehensive assessment of indicative fertility in bulls. Additionally, ultrasonography can be used in the BBSE when the evaluation of semen parameters is not available.
Introduction
Bulls with low fertility have a major negative impact on the efficiency of production and the profitability of cattle enterprises [1,2]. Hence, bulls routinely undergo a bull breeding soundness evaluation (BBSE) before they are used for natural or assisted breeding. The BBSE involves an assessment of overall structural soundness, the integrity of the reproductive organs, and semen quality [1]. Noninvasive testicular ultrasonography has undergone preliminary investigation as an additional parameter for inclusion in the BBSE. Ultrasonography provides information on the integrity of the testicular parenchyma and its relationship to spermatogenesis [3,4]. The homogeneity of the testicular parenchyma, as judged by ultrasonography, was reported to have an important bearing on spermatogenesis and fertility in males.
In men, testicular inhomogeneity, characterized by the presence of fibrotic tissue on ultrasound, was associated with impaired sperm quality and azoospermia [5]. The relationship between testicular homogeneity and sperm production and fertility is less clear for bulls. In an early study, there were no differences in sperm abnormalities between bulls with fibrotic foci in testicular parenchyma and bulls without fibrotic foci [6]. Subsequent studies also reported no clear association between the integrity of testicular parenchyma and semen quality in bulls [7][8][9]. However, in a study with a small number of bulls, testicular lesions were associated with a low BBSE score and poor semen quality [10]. Spermatogenesis is influenced by many factors including metabolic and endocrine status [11]. The brain-adrenal axis is involved in metabolic homeostasis [12] and it also influences the brain-gonadal axis [13,14]. Adrenal glucocorticoids, including cortisol, typically have a negative impact on testicular function including spermatogenesis [15,16]. Glucocorticoids are elevated during stress, and chronic stress can be associated with impaired sperm production [17]. The adrenals also secrete the androgens dehydroepiandrosterone (DHEA) and DHEA sulphate (DHEA-S) [18,19]. In cattle, DHEA and DHEA-S are suppressed when cortisol is elevated during chronic stress [20]. The inverse relationship between cortisol and DHEA/DHEA-S led to the proposal that DHEA and DHEA-S could be antagonistic to cortisol [20]. Allostasis is a term used to describe mechanisms whereby the body adapts to stressors to maintain healthy homeostasis. Allostatic load is the build-up of stressors over time and the impact on the brain and somatic tissues. In cattle and other species, the amount of cortisol present in hair is reflective of the short-to medium-term activity of the brainadrenal axis and provides an index of allostatic load [21,22]. Hair and blood concentrations of DHEA and DHEA-S are also reflective of allostatic load in cattle [22]. The effect of cortisol and DHEA-S on semen parameters and fertility has not been thoroughly investigated in bulls. The present study investigated the effects of testicular ultrasonography and hair steroids (cortisol, DHEA-S, cortisol/DHEA-S ratio, testosterone) on semen parameters in bulls. The aim was to determine the association between testicular ultrasonography and hair steroids with semen parameters in bulls. If ultrasonography and hair steroids were shown to be related to semen parameters, they could be used in BBSE if semen assessment was not available. The hypothesis tested was that the integrity of testicular parenchyma is related to semen parameters in bulls. The accurate selection of bulls for fertility is particularly important when bulls of high commercial value are used extensively in assisted breeding programs. Materials and Methods All experimental procedures complied with the Italian legislation on animal care (Legislative Decree n. 116, 27/1/1992). The study had approval from the Ethical Animal Care and Use Committee of the University of Naples Federico II (Protocol PG72021/0130477). Animals The study involved sixteen bulls (2.7 ± 0.4 years old, body condition score 3.2 ± 0.1) of five breeds: Pezzata rossa italiana (n = 7), Holstein Friesian (n = 5), Limousine (n = 2), Charolaise (n = 1), and Chianina (n = 1). Animals were maintained under the same management at an accredited National Semen Collection Center. 
The study lasted 12 weeks and testicular ultrasound examination and hair sampling were undertaken at the same time as the last semen collection, according to the retrospective value provided by the hair matrix. Testicular Ultrasonography For testicular ultrasound examination, bulls were restrained in a bovine steel stanchion. The scrotal skin was cleaned and ultrasonographic gel was applied to increase the quality of the ultrasound image. A B-mode ultrasound scanner (MyLab™AlphaVET-Esaote S.p.a, Genova, Italy) equipped with a 13-3 MHz linear array probe was used to image the testes of each bull; the same settings were used for focus, gain, brightness, and contrast, standardized at the machine median settings. The ultrasound transducer was held vertically (parallel to the long axis of the testes) on the caudal surface of the scrotum. The image was aligned until the mediastinum of the testes was clear and apparent [7]. The image was then frozen and saved. This process was repeated with the ultrasound transducer in the horizontal plane (at the widest part of the testis) and both views were repeated for the other testis. A validated scoring system was used to identify bulls with a homogeneous testicular parenchyma and bulls with a heterogeneous parenchyma [23,24]. In brief, the scoring system adopts a six-point scale with scores of 0-5 encompassing normal homogenous patterns of echotexture to very severe fibrosis throughout the testis [8]. All images were obtained by the same operator. Hair The hair in cattle grows at approximately 0.6-1.0 cm per month and animals show a full molt approximately every 3 months [25]. The concentration of steroids in hair therefore provides an integrated measure of secretion during the preceding 2 to 3 months [25][26][27]. The integrated value avoids the short-term and diurnal variations in steroid secretion and is a more accurate indicator of the prevailing steroidal status of animals. Hair samples can be readily obtained and processed compared with blood samples. Hence, hair steroid concentrations were used in the present study. Hair was obtained from the scapular region of bulls using a razorblade and cut close to the skin at the same time as the last semen collection. Samples represented the integrated steroid concentration over the 12-week duration of the study. Samples were stored in dry tubes at room temperature and in the dark until analysis. Semen Bulls underwent semen collection twice weekly for 12 weeks as part of the routine commercial activity of the authorized National Semen Collection Center. Bulls were trained to serve an artificial vagina (IMV, L'Aigle, France). A total of 384 ejaculates were collected during the study. Hair Steroid Assays Hair samples were prepared for the steroid assay as previously described [25]. Briefly, the hair samples were washed in isopropanol (Sigma-Aldrich, St. Louis, MO, USA), and approximately 60 mg of trimmed hair was extracted with methanol (Sigma-Aldrich, St. Louis, MO) for 16 h. Vials were then evaporated to dryness at 37 • C under an airstream suction hood and the remaining residue was dissolved in 0.35 mL of phosphate-buffered saline (PBS), 0.05 M, pH 7.5. Statistical Analyses Statistical analyses were carried out using SPSS (28.0) for Windows 10 (SPSS Inc., Chicago, IL, USA). The initial dataset was edited, discriminating both for missing information and outliers (values lying 3 standard deviations below/above the mean). 
The number of samples excluded was the same between the homogenous and heterogenous bulls, which were characterized by similar coefficients of dispersion. The final dataset consisted of 236 ejaculates (14.7 ± 1.4/bull). The normal distribution of all data was confirmed using the Shapiro-Wilk test. Bulls were used as the experimental units. Multivariate analysis of variance (general linear model) was used to compare hair steroids of bulls (dependent variables); testicular parenchyma and breed were the fixed factors, and their interaction was also considered. Data on semen characteristics were analyzed by ANOVA for repeated measures with testicular parenchyma (homogenous/heterogenous) as the main factor, and breed and cortisol as random. The day of collection was the repeated measure. Multiple linear regression was performed (forward stepwise procedures) with steroid concentrations as independent variables, and fertility parameters (mean values) as dependents. Potential independent and dependent variables were first tested for potential correlations using Pearson correlations and only significant correlations (p < 0.05) were included in the regression model. Pearson correlation was also used to exclude possible intercorrelations between the independent variables. Unless otherwise stated, the results are presented as mean ± standard error and significance was set at p < 0.05. Testicular Parenchyma and Spermatozoan Parameters Fertility parameters of fresh semen did not differ between homogenous and heterogenous bulls (Table 1). Steroid Concentrations Concentrations in hair of cortisol, DHEA-S, and testosterone, and the cortisol/DHEA-S ratio, are shown in Table 3. For all three steroids, there were no significant differences between bulls with homogeneous or heterogeneous testicular parenchyma. Discussion The present study examined whether the incorporation of testicular ultrasonography and hair steroid concentrations in the bull breeding soundness evaluation (BBSE) would provide a broader and more comprehensive index of indicative fertility. Another objective was to determine whether testicular ultrasonography could be used in the BBSE when semen evaluation is not available. Bulls with homogeneous testicular parenchyma had a higher percentage of motile sperm post-thawing compared with bulls with heterogeneous parenchyma. This finding could be interpreted to suggest that the sperm of bulls with homogenous parenchyma has a higher tolerance to cryopreservation and thawing compared with the sperm of bulls with heterogeneous parenchyma. This was an important observation as sperm motility is related to fertility in bulls [28,29]. Ultrasonography represents a practical, non-invasive procedure and adds important information to the BBSE. In an earlier study, the condition of the parenchyma was reported to be predictive of semen quality 2 to 4 weeks after ultrasound examination in bulls [7]. There were no differences in hair concentrations of testosterone, cortisol, and DHEA-S between bulls with homogeneous or heterogeneous testicular parenchyma. Previous studies in cattle and other species have reported an inverse relationship between cortisol and DHEA-S, and it was suggested that DHEA-S may partly counterbalance the negative impact of elevated cortisol on physiological and endocrine functions [20,22,31,32]. The cortisol/DHEA-S ratio was also considered an index of allostatic load [22]. 
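As a concrete illustration of the screening-plus-regression procedure described in the Statistical Analyses subsection above (Shapiro-Wilk normality check, Pearson prefiltering of candidate predictors, then a least-squares fit of a semen parameter on the retained hair steroids), a minimal Python sketch is given below. It stands in for the SPSS workflow used in the study; the bull-level values and units are invented placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical per-bull means: hair steroids and post-thaw motility (%).
rng = np.random.default_rng(3)
bulls = pd.DataFrame({
    "dheas":    rng.normal(25, 5, 16),      # placeholder units (e.g. pg/mg hair)
    "cortisol": rng.normal(2.5, 0.5, 16),
    "motility": rng.normal(45, 8, 16),
})

# Normality check on the dependent variable (Shapiro-Wilk, as in the paper).
w, p_sw = stats.shapiro(bulls["motility"])
print(f"Shapiro-Wilk p = {p_sw:.3f}")

# Keep only predictors significantly correlated with motility (p < 0.05),
# mirroring the Pearson prefiltering step before the stepwise regression.
candidates = []
for col in ("dheas", "cortisol"):
    r, p = stats.pearsonr(bulls[col], bulls["motility"])
    print(f"{col}: r = {r:.2f}, p = {p:.3f}")
    if p < 0.05:
        candidates.append(col)

# Fit an ordinary least-squares model with the retained predictors, if any.
if candidates:
    fit = smf.ols("motility ~ " + " + ".join(candidates), data=bulls).fit()
    print(fit.summary().tables[1])
else:
    print("No predictor passed the correlation prefilter in this toy data set.")
```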
The finding on the cortisol/DHEA-S ratio in the present study was interpreted to indicate that bulls with homogeneous and heterogeneous testicular parenchyma experienced the same allostatic load and did not have compromised endocrine function. This could be expected as all bulls experienced the same handling and management at an accredited National Semen Collection Center. Therefore, factors other than cortisol, and the cortisol/DHEA-S ratio, contributed to differences in testicular parenchyma condition in the present study. In this regard, testicular status was reported to have a genetic component [33]. Percentage motile sperm, progressively motile sperm, and motility yield were positively correlated with hair DHEA-S concentration in bulls with homogeneous parenchyma. This relationship may have been partly due to the prohormonal role of DHEA-S and its conversion to androgens and/or estrogens in peripheral target tissues [34]. As noted above, sperm motility and morphology are closely correlated with fertility [29]. Hair DHEA-S concentrations reflect adrenal secretion and assimilation in hair during the preceding weeks, and give a longer-term integration of DHEA-S status. Ultrasonography is now used routinely for monitoring reproductive function in females, and hair sampling is used for genomic testing in males and females. Hair sampling is more practical than blood for hormonal and genetic evaluation. As noted, there are conflicting reports on relationships between testicular parenchyma and testis hormonal and spermatogenic function in bulls. The present study has provided strong evidence that the condition of the parenchyma is reflective of spermatogenesis. Given the practical implementation of ultrasonography and hair sampling, the case can be made for inclusion in the BBSE, or ultrasonography can be used when semen evaluation is not available. Conclusions The present study has shown that bulls with homogeneous testicular parenchyma have sperm with a greater resilience to cryopreservation than the sperm of bulls with heterogeneous testicular parenchyma. This is an important finding as bulls of high commercial value are used extensively in artificial insemination. The study also highlighted a positive relationship between hair DHEA-S and important sperm fertility parameters. A limitation of the present study was the relatively small number of bulls tested and the absence of the BBSE. Notwithstanding, it could be concluded that the inclusion of testicular ultrasonography and hair DHEA-S in the standard BBSE is practical and would provide a more integrated and comprehensive assessment of fertility in bulls. Finally, ultrasonography can be used when the evaluation of semen parameters is not available. Institutional Review Board Statement: The animal study protocol was approved by the Ethical Animal Care and Use Committee of the University of Naples Federico II (Protocol PG72021/0130477). Informed Consent Statement: Not applicable. Data Availability Statement: Data are available upon reasonable request to the corresponding author.
v3-fos-license
2022-12-26T14:33:46.593Z
2013-10-20T00:00:00.000
255102397
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11248-013-9749-9.pdf", "pdf_hash": "a11d9550b0867ccfea9d979b0633c10bb3cc8b4c", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46343", "s2fieldsofstudy": [ "Biology" ], "sha1": "a11d9550b0867ccfea9d979b0633c10bb3cc8b4c", "year": 2013 }
pes2o/s2orc
Comparative analysis of different biofactories for the production of a major diabetes autoantigen The 65-kDa isoform of human glutamic acid decarboxylase (hGAD65) is a major diabetes autoantigen that can be used for the diagnosis and (more recently) the treatment of autoimmune diabetes. We previously reported that a catalytically-inactive version (hGAD65mut) accumulated to tenfold higher levels than its active counterpart in transgenic tobacco plants, providing a safe and less expensive source of the protein compared to mammalian production platforms. Here we show that hGAD65mut is also produced at higher levels than hGAD65 by transient expression in Nicotiana benthamiana (using either the pK7WG2 or MagnICON vectors), in insect cells using baculovirus vectors, and in bacterial cells using an inducible-expression system, although the latter system is unsuitable because hGAD65mut accumulates within inclusion bodies. The most productive of these platforms was the MagnICON system, which achieved yields of 78.8 μg/g fresh leaf weight (FLW) but this was substantially less than the best-performing elite transgenic tobacco plants, which reached 114.3 μg/g FLW after six generations of self-crossing. The transgenic system was found to be the most productive and cost-effective although the breeding process took 3 years to complete. The MagnICON system was less productive overall, but generated large amounts of protein in a few days. Both plant-based systems were therefore advantageous over the baculovirus-based production platform in our hands. Introduction Type-1 diabetes (T1D) is a chronic disease caused by the autoimmune destruction of insulin-producing pancreatic b-cells. The incidence of the disease is increasing by approximately 3 % per year and it requires life-long insulin replacement therapy (Aanstoot et al. 2007). The 65-kDa isoform of human glutamic acid decarboxylase (hGAD65), which catalyzes the decarboxylation of glutamate to c-aminobutyrate (GABA) and CO 2 (Soghomonian and Martin 1998;Capitani et al. 2003;Gut et al. 2006), is one of the major T1D autoantigens. Autoreactivity against hGAD65 is a valuable marker that can be used both to classify and monitor the progression of the disease (Schmidt et al. 2005). Autoantibodies against GAD65 are considered predictive markers when tested in combination with other disease-specific autoantibodies (Kulmala et al. 1998). Studies using animal models have shown that exposure to GAD65 may be therapeutic by inducing tolerance (Kaufman et al. 1993;Tisch et al. 1993). Human clinical investigations in recent-onset T1D patients using alum-formulated hGAD65 therefore considered the safety and efficacy of a treatment regimen consisting of prime and boost injections with different doses of the protein (Lernmark and Agardh 2005). Although these studies showed that treatment was safe, the efficacy data were equivocal suggesting that inducing tolerance in humans remains a challenge (Wherrett et al. 2011;Ludvigsson et al. 2012). A further trial, involving genetically-predisposed children and young adults with multiple islet cell autoantibodies, is currently exploring the ability of alumformulated hGAD65 to prevent the onset of disease (NCT01122446). Future strategies may include combination therapies coupling immunosuppressive agents with one or more autoantigens (Larsson and Lernmark 2011). The large-scale production of full-length recombinant hGAD65 currently involves the use of either insect cells (Moody et al. 1995) or methylotrophic yeast (Raymond et al. 
1998) both of which are expensive and vulnerable to contamination. The growing demand for high-quality hGAD65 for diagnostic and therapeutic applications means that alternative platforms are required to ensure there is sufficient production capacity in the future. The production of hGAD65 in plants was previously reported (Avesani et al. 2003(Avesani et al. , 2007(Avesani et al. , 2010Ma et al. 2004;Morandini et al. 2011) including a catalyticallyinactive derivative autoantigen (hGAD65mut) that retains its immunogenic properties and accumulates to tenfold higher levels than its wild-type counterpart (Avesani et al. 2010). We hypothesized that the wildtype version of hGAD65 interferes with plant cell metabolism to suppress its own synthesis, whereas the catalytically-inactive version escapes such feedback and accumulates to higher levels. The hGAD65mut mutant was generated by substituting the lysine residue that binds the co-factor pyridoxal 5 0 -phosphate (PLP) with an arginine residue (K396R). The mutant protein has been produced in a cell-free transcription and translation system (Hampe et al. 2001) and in transgenic tobacco plants (Avesani et al. 2010) and in each case binds autoantibodies from the sera of T1D patients. We developed a hypothesis that hGAD65mut is intrinsically more suitable for heterologous expression than hGAD65, and should accumulate to higher levels than the wild-type protein in different production platforms such as bacteria, insect cells and Nicotiana benthamiana plants. We therefore tested a commercial E. coli platform containing an inducible vector, Spodoptera frugiperda cells infected with the Baculodirect Expression System (Life Technologies) and two transient expression vectors (pK7WG2 and the MagnICON system) in N. benthamiana plants. These systems were compared with the best-performing stable transgenic tobacco lines we previously reported, which have been improved by several generations of conventional breeding starting with the T1 generation. Self-pollination of elite tobacco plants The flowers of hGAD65mut transgenic plants were bagged before blooming to prevent cross-pollination, and the bags were collected and stored after blooming, fruit ripening and seed drying. Starting from the bestperforming hGAD65mut T1 plants, the dried seeds were sown to produce subsequent generations of transgenic tobacco plants up to the T6 generation. Construction of plant expression vectors The pK7WG2.G65 and pK7WG2.G65mut vectors were constructed as previously described (Avesani et al. 2010). To obtain final TMV 3 0 modules carrying the genes of interest, hGAD65 and hGAD65mut were amplified by PCR using forward (5 0 -TTT GGT CTC AAG GTA TGG CAT CTC CGG GCT CTG GCT TTT GG-3 0 ) and reverse (5 0 -TTT GGT CTC AAA GCT TAT TAT AAA TCT TGT CCA AGG CGT TC-3 0 ) primers and inserted in the pGEM-T Easy vector (Promega, Madison, WI). The pGEM.G65 and pGEM.G65mut vectors were used as entry clones for recombination with the TMV 3 0 module pICH31070, as described by Engler et al. (2008). Transient expression in N. benthamiana The pK7WG2.G65 and pK7WG2.G65mut vectors were introduced into Agrobacterium tumefaciens strain EHA105. The bacteria were cultivated for 2 days in YEB medium containing 50 lg/ml rifampicin, 300 lg/ml streptomycin and 100 lg/ml spectinomycin, pelleted by centrifugation at 4,0009g and resuspended in infiltration buffer (10 mM MES, 10 mM MgCl 2 , 100 lM acetosyringone, pH 5.6) to an OD 600 of 0.9. 
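The forward and reverse primers quoted above each contain a GGTCTC motif, which is the recognition sequence of the type IIS enzyme BsaI used in the Golden Gate-style assembly of Engler et al. (2008); the use of BsaI here is our inference from that motif, not something stated explicitly in the text. The short sketch below simply scans candidate primers for the motif and reports the expected downstream overhang; the primer handling is generic.

```python
# Check candidate cloning primers for BsaI (GGTCTC) recognition sites.
BSAI_SITE = "GGTCTC"

primers = {
    "hGAD65_fwd": "TTTGGTCTCAAGGTATGGCATCTCCGGGCTCTGGCTTTTGG",
    "hGAD65_rev": "TTTGGTCTCAAAGCTTATTATAAATCTTGTCCAAGGCGTTC",
}

def find_sites(seq, site=BSAI_SITE):
    """Return 0-based start positions of every occurrence of the site."""
    seq = seq.upper().replace(" ", "")
    return [i for i in range(len(seq) - len(site) + 1) if seq[i:i + len(site)] == site]

for name, seq in primers.items():
    positions = find_sites(seq)
    if not positions:
        print(f"{name}: no BsaI site found")
    for pos in positions:
        # BsaI cuts downstream of its recognition site, leaving a 4-nt 5' overhang
        # that starts 1 nt after the site on the top strand.
        overhang = seq[pos + 7:pos + 11]
        print(f"{name}: BsaI site at position {pos}, expected overhang {overhang}")
```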
Following incubation for 3 h at room temperature, bacterial suspensions were syringe infiltrated into 5-6-week-old N. benthamiana plants, using three leaves per plant (one biological replicate). Leaves were infiltrated with the pK7WG2 vector carrying the gfp marker gene as a negative control. The leaves of each biological replicate were sampled 2 days post-infiltration (dpi). For TMV-based expression, pICH31070.G65 and pICH31070.G65mut (3 0 modules), pICH20111 (5 0 module) and pICH14011 (integrase module) were introduced into A. tumefaciens strain GV3101. The bacteria were seeded into LB medium containing 50 lg/ml rifampicin and 50 lg/ml kanamycin (3 0 modules) or 50 lg/ml carbenicillin (integrase and 5 0 modules). Overnight bacterial cultures were collected by centrifugation at 4,0009g and resuspended in two volumes of 10 mM MES (pH 5.5) and 10 mM MgSO 4 . Equal volumes of the hGAD65 or hGAD65mut 3 0 module, 5 0 module and integrase module suspensions were mixed and used to infiltrate the leaves of 5-6-week-old N. benthamiana plants, with each biological replicate comprising a pool of three infiltrated leaves, sampled at 4 dpi. A mixture of the 5 0 -module and integrase-module suspensions was used as a negative control. The plants were grown in an enclosed chamber at 25/22°C day/night temperature with a 16-h photoperiod. Expression using the baculovirus/insect cell system Recombinant baculovirus DNA was obtained by LR recombination between pENTR TM /D-TOPO.G65 or pENTR TM /D-TOPO.G65mut (Avesani et al. 2010) and the linearized viral DNA. Sf9 cells were seeded into 6-well plates (8 9 10 5 cells per well) and washed twice with 2 ml of non-supplemented Grace's Insect Medium (Life Technologies, Paisley, UK). The medium was removed and replaced drop-wise with the transfection mixture (5 ll LR recombination reaction, 6 ll Celfectin solution and 200 ll non-supplemented Grace's Insect Medium). The plates were incubated at 27°C for 5 h before the transfection mixture was removed and replaced with 2 ml fresh Sf-900 medium (Life Technologies, Paisley, UK) supplemented with 10 % fetal bovine serum, 10 lg/ml gentamicin and 100 lM ganciclovir for the selection of recombinant baculovirus clones. After incubation for 96 h at 27°C, the medium (V1 viral stock) was collected, centrifuged at 4,0009g to remove cells and large debris, and stored in the dark at 4°C. High-titer V2 viral stock was generated by seeding 1 9 10 6 Sf9 cells per well in 2.5 ml Sf-900 medium containing 10 % fetal bovine serum, 10 lg/ml gentamicin and 100 lM ganciclovir, and infecting with 100 ll of the V1 stock. The cells were incubated for 3 days at 27°C, the medium was collected and centrifuged at 4,0009g, and the supernatant (V2 stock) was stored at 4°C. Expression in bacterial cells The Gateway destination vector pDEST17 (Life Technologies, Paisley, UK) was isolated from E. coli DB3.1 cells (Life Technologies, Paisley, UK) and used for LR recombination with the entry vectors pENTR TM / D-TOPO.G65 and pENTR TM /D-TOPO.G65mut (Avesani et al. 2010), yielding pDEST17.G65 and pDEST17.G65mut, respectively. The pDEST17.CmR vector carrying a chloramphenicol-resistance gene was used as a negative control. The three expression vectors were independently transferred to electrocompetent E. coli BL21 (DE3) cells (Novagen, Madison, WI) and individual colonies were cultured overnight at 37°C in ampicillin-containing LB medium. The culture was then diluted 1:100 with LB medium and incubated at 37°C for 1-6 h until the OD 600 reached 0.8. 
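The suspension steps described above repeatedly adjust cultures to a target optical density (OD600 of 0.9 for pK7WG2 infiltration, a roughly twofold concentration for the MagnICON modules, and growth to OD600 0.8 before induction). The helper below is just the routine C1V1 = C2V2 dilution calculation applied to optical density; the example numbers are placeholders, not values from the protocol.

```python
def resuspension_volume(culture_ml: float, culture_od: float, target_od: float) -> float:
    """Volume (ml) of buffer needed so that a pellet harvested from
    `culture_ml` ml of culture at `culture_od` reaches `target_od`
    after resuspension (simple C1*V1 = C2*V2 on optical density)."""
    if target_od <= 0:
        raise ValueError("target OD must be positive")
    return culture_ml * culture_od / target_od

# Hypothetical example: 50 ml of overnight culture at OD600 = 1.8,
# resuspended to the OD600 of 0.9 used for pK7WG2 infiltration.
print(f"{resuspension_volume(50, 1.8, 0.9):.1f} ml of infiltration buffer")
```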
Recombinant protein expression was induced with 1 mM isopropyl-b-D-thiogalactopyranoside (IPTG; Sigma-Aldrich, St. Louis, MO) and the culture was incubated at 37°C for 3 h before the cells were collected by centrifugation at 4,0009g and stored at -80°C prior to protein extraction. Analysis of recombinant protein expression Total soluble proteins were extracted from plant tissues by grinding to fine powder under liquid nitrogen and homogenizing in extraction buffer (40 mM HEPES pH 7.9, 5 mM DTT, 1.5 % CHAPS) supplemented with Protease Inhibitor Cocktail (Sigma-Aldrich, St. Louis, MO). Bacterial cells were collected by centrifugation at 4,0009g and resuspended in half the culture volume of TBS (20 mM Tris-HCl pH 7.4, 500 mM NaCl) supplemented with 1 mM phenylmethanesulfonylfluoride (PMSF; Sigma-Aldrich, St. Louis, MO) then sonicated on ice three times for 40 s at half power. The lysate was clarified by centrifugation at 14,0009g for 20 min at 4°C. The supernatant and pellet were stored separately at -80°C. The inclusion bodies were solubilized with 6 M urea and stored at -80°C. Infected insect cells were collected by centrifugation at 3,0009g for 5 min, washed with 1 ml PBS, resuspended in 200 ll lysis buffer (20 mM Tris/HCl pH 8.0, 0.5 M NaCl, 10 mM imidazole, 3 mM b-mercaptoethanol and 1 % Tween-20) and incubated on ice for 30 min. The solubilized cells were centrifuged at 14,0009g at 4°C for 20 min and the soluble fractions were collected and stored at -80°C. Radioimmunoassays (RIAs) were carried out using hGAD65 autoantibody-positive serum from a T1D patient and 125 I-GAD65 (RSR, Cardiff, UK) as a tracer (Falorni et al. 1994). Commercial recombinant human GAD65 (rhGAD65) produced in the baculovirus expression system (Diamyd, Karlavagen, SE) was used as positive control. Non-transformed controls were analyzed in parallel to exclude potential negative effects caused by the buffer and host components during the detection procedure. The protein samples were separated by SDS-PAGE on a 10 % polyacrylamide gel and transferred to a nitrocellulose membrane by electroblotting. Proteins were detected using the GC3108 (IgG1) monoclonal antibody (Biomol International, Farmingdale, NY) as previously described (Avesani et al. 2003). Results Stable expression of hGAD65 and hGAD65mut in tobacco and the establishment of a homogeneous transgenic tobacco platform for hGAD65mut We previously reported the expression of hGAD65 and hGAD65mut in transgenic tobacco plants (Avesani et al. 2010). As expected, the recombinant protein levels varied significantly among independently-transformed lines, probably reflecting the position effects associated with random transgene insertion (Krysan et al. 2002). We compared the accumulation of the two proteins in T1 transgenic lines by selecting the three best-performing individuals (elite lines) evaluated by RIA using GAD65 autoantibody-positive serum (Table 1). This comparison showed that the average yield in the elite hGAD65mut pre-flowering lines was 143.6 lg/g FLW, 13-fold higher than the 10.5 lg/g FLW average yield in the hGAD65 elite lines (Table 1). The observed difference was statistically significant (Student's t test, p \ 0.01). We developed a homogeneous production platform by self-crossing the best-performing T1 hGAD65mut transgenic plant and repeating the self-crossing over several generations, checking the performance in each generation by RIA until no further improvement was achieved (data not shown). 
The average yield increased from 68.9 lg/g FLW in T2 to 99.1 lg/g FLW in T6 ( Fig. 1 and Online Resource 1). During the selection process, the standard deviation in the expression level declined from 40.1 in T2 to 11.33 in T6 (Online Resource 1). Transient expression of hGAD65 and hGAD65mut in N. benthamiana using pK7WG2 The hGAD65 and hGAD65mut sequences were cloned separately in pK7WG2, which was previously used for the stable transformation of tobacco (Karimi et al. 2002). The resulting vectors pK7WG2.G65 and pK7WG2.G65mut were separately introduced into A. tumefaciens and infiltrated into three leaves on three different N. benthamiana plants. Time-course analysis showed that protein accumulation peaked at 2 dpi (Online Resource 2). Therefore, the leaves were harvested 2 dpi and protein extracts were analyzed by RIA. The average expression level of hGAD65mut was 67.8 lg/g FLW, which was approximately 16-fold higher than hGAD65 at 4.3 lg/g FLW (Table 1). This was a statistically significant difference (Student's t test, p \ 0.01) and the trend matched our observations of the elite transgenic tobacco lines (Table 1). Extracts from the best-performing N. benthamiana biological replicates for each construct were investigated in more detail by western blot (Fig. 2). Extracts from leaf pools expressing each construct revealed a major band with an apparent molecular weight [65 kDa, probably reflecting the presence of protein aggregates. The 65-kDa polypeptide, corresponding to the monomeric form of the protein, was only detected in extracts from plants expressing hGAD65mut, although this was more likely to reflect the greater abundance of the protein per se in these plants rather than the relatively greater abundance of the monomeric form compared to aggregates. In support of this conclusion, we found that protein aggregates were also more abundant than the monomeric form in western blots of transgenic plants (data not shown). As expected, the anti-GAD monoclonal antibody did not recognize endogenous tobacco proteins. MagnICON expression of hGAD65 and hGAD65mut in N. benthamiana The hGAD65 and hGAD65mut sequences were also transiently expressed using the MagnICON deconstructed tobacco mosaic virus (TMV) system. The sequences were cloned separately in the TMV 3 0 -module pICH31070 (Marillonnet et al. 2004). The final 3 0 -modules, the pICH20111 5 0 -module and the pICH14011 integrase-module, were introduced into A. tumefaciens separately, and mixed suspensions were used for the agroinfiltration of three leaves from three different N. benthamiana plants. Based on previously-determined time-course data (Online Resource 3), the infiltrated leaves were collected 4 dpi. The accumulation of immunoreactive recombinant protein was measured by RIA revealing that hGAD65mut accumulated to 78.8 lg/g FLW, almost threefold higher than hGAD65 at 26.8 lg/g FLW, representing a statistically significant difference (Student's t test, p \ 0.01) albeit less than the difference in the accumulation of the two recombinant proteins observed with the pK7WG2 transient expression approach (Table 1). Western blots of leaf protein extracts from the two bestperforming plants confirmed the RIA results, indicating a slight difference between the two recombinant proteins that could be inferred only after normalizing the specific signals with the total soluble proteins stained in the reference gel with Coomassie Brilliant Blue (Fig. 3). 
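The pairwise yield comparisons reported in this section are two-sample t-tests on per-plant RIA estimates (e.g., elite transgenic lines versus MagnICON-infiltrated plants, expressed in µg recombinant protein per g fresh leaf weight). A minimal sketch of such a comparison is shown below; the replicate values are invented placeholders chosen only to sit near the reported group means, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical per-replicate yields (ug recombinant hGAD65mut per g fresh leaf weight).
elite_transgenic = np.array([120.0, 110.0, 113.0])   # stable elite tobacco lines
magnicon = np.array([75.0, 82.0, 79.5])              # MagnICON-infiltrated N. benthamiana

# Student's t-test (equal variances assumed, as in the paper) and fold difference.
t, p = stats.ttest_ind(elite_transgenic, magnicon, equal_var=True)
fold = elite_transgenic.mean() / magnicon.mean()
print(f"fold difference: {fold:.2f}, t = {t:.2f}, p = {p:.3f}")
```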
As above, both hGAD65 and hGAD65mut predominantly comprised aggregates, whereas the monomeric forms of the proteins appeared as minor components.

Expression of recombinant hGAD65 and hGAD65mut using baculovirus vectors

Baculovirus vectors containing the hGAD65mut and hGAD65 sequences with a C-terminal His6 tag were expressed in adherent Sf9 cell cultures. V1 and V2 high-titer stocks were prepared, and the optimal viral stock (Online Resource 4) and time-course kinetics were identified for comparative purposes. Each experiment was carried out in triplicate. RIA analysis revealed a statistically significant difference (Student's t test, p < 0.01) in the accumulation of hGAD65mut (11.7 ± 0.8 μg/ml culture medium) and hGAD65 (7.7 ± 0.7 μg/ml culture medium), as shown in Table 1. This difference was difficult to visualize in western blots of extracts from the best-performing cultures (Fig. 4). The anti-GAD antibody did not recognize endogenous insect proteins but primarily detected a specific band migrating at the predicted molecular weight of the monomeric form of the recombinant proteins. With the exception of a weaker 130-kDa band presumably representing a protein dimer, the insect cell cultures were conspicuous for the absence of the multimeric aggregates that were most abundant in the plant extracts.

Expression of hGAD65 and hGAD65mut in E. coli

The hGAD65 and hGAD65mut sequences were individually cloned in the Gateway destination vector pDEST17, which allows the induction of transcription with IPTG (Belfield et al. 2007). A chloramphenicol-resistance gene in the same vector was used as a negative control. The three resulting vectors (pDEST17.G65, pDEST17.G65mut and pDEST17.CmR) were introduced into E. coli BL21 cells. The expression of hGAD65mut and hGAD65 was induced individually in triplicate cultures. Western blots indicated that both hGAD65mut and hGAD65 accumulated in the insoluble fraction (data not shown). We tested several strategies to solubilize the recombinant proteins but only the use of a strong denaturing agent such as 6 M urea was successful (data not shown). Urea interferes with RIA analysis, thus preventing the accurate quantification of the recombinant proteins. The western blots confirmed that hGAD65mut accumulated to higher levels than hGAD65 (Fig. 5; numbers indicate the molecular mass markers in kDa; n.c., negative control, bacterial cells transformed with the 'empty' pDEST17 vector; p.c., positive control, 15 ng of commercial rhGAD65-His6 produced in the baculovirus/insect cell system), but they also revealed the presence of polypeptides with a lower molecular mass than expected, suggesting the proteins were degraded or suffered premature translational termination events. The solubility of recombinant proteins produced in E. coli can be improved by culturing the cells at lower temperatures (Hunt 2005), but we found that low-temperature cultivation at 15 or 20°C had no impact on the yield of either hGAD65 or hGAD65mut (Online Resource 5).

Discussion

We previously reported the expression of hGAD65 and hGAD65mut in stably-transformed tobacco plants (Avesani et al. 2010). The yield of the inactive mutant protein was up to 2.2 % total soluble protein (TSP), which was more than tenfold higher than ever achieved for the wild-type protein. We reasoned that the enzymatic activity of hGAD65 prevented high-level accumulation by suppressing its own synthesis, whereas the inactive version was unaffected by such feedback. Here, we investigated whether this trend was conserved in other expression systems, i.e. transient expression with standard and MagnICON vectors in N. benthamiana, inducible expression in E. coli and transduction with baculovirus vectors in insect cells. We compared the performance of these expression platforms in terms of recombinant protein yield.
We used the original human sequences in all experiments, i.e. the constructs were not optimized for the different platforms. We began by testing the previously-reported T1 transgenic tobacco plants expressing hGAD65mut and hGAD65 (Avesani et al. 2010). We found that the average yields in the three best-performing elite lines were 143.6 μg/g FLW for hGAD65mut and 10.5 μg/g FLW for hGAD65, a more than tenfold difference. Stable transformation is advantageous because predictable expression levels can be achieved in the offspring of a well-characterized transgenic event, which can give rise to a large population of homogeneous transgenic plants. Hence, the best-performing hGAD65mut elite lines were self-crossed for several generations until recombinant protein yields were homogeneous, probably reflecting homozygosity at the transgenic loci. After six generations of selfing, taking more than 3 years, the final average yield of hGAD65mut in the most productive plants was 114.3 μg/g FLW. In contrast to stable transformation, transient expression in N. benthamiana can achieve high yields over relatively short timescales, although there can be significant variation (Voinnet et al. 2003; Chiera et al. 2008; Conley et al. 2011). A high-throughput platform for transient expression in tobacco has also been proposed (Piotrzkowski et al. 2012), consisting of a leaf-disc based infiltration approach that allows different traits to be compared simultaneously on a small scale. We carried out transient expression using the vector previously used to generate transgenic plants (pK7WG2) and observed a similar fold difference in the expression levels of catalytically active hGAD65 and inactive hGAD65mut in both systems. The overall average yield of both proteins was significantly higher in the stably-transformed plants (Student's t test, p < 0.05 for both proteins), probably reflecting the benefits of multiple rounds of selection to isolate those plants with the genomic background most favorable for strong transgene expression. Interestingly, the transient expression levels we observed were less variable than the stable expression levels among the T1 transgenic plants, showing it was quicker and easier to generate a relatively homogeneous population without selection, reflecting the presence of hundreds of active but non-integrated copies of the transgene for a few days after agroinfiltration (Kapila et al. 1997). The agreement between the transient and stable expression data in terms of the fold difference between hGAD65 and hGAD65mut suggests that transient expression in N. benthamiana gives a reliable forecast of the best-performing transgenic tobacco plants, as previously observed (Conley et al. 2011). We also tested the MagnICON transient expression platform, which is based on deconstructed viral vectors (Gleba et al. 2007) and can achieve yields of up to 4 mg/g FLW (Marillonnet et al. 2005). However, agroinfiltrated plants expressing the MagnICON hGAD65mut vector were significantly less productive than the elite transgenic tobacco lines (Student's t test, p < 0.05) and there was no significant difference between the transient expression levels achieved with the MagnICON platform and the standard expression vector pK7WG2 (Student's t test, p > 0.05).
In contrast, the MagnICON platform significantly outperformed both pK7WG2 transient expression and the transgenic plants in the case of hGAD65 (Student's t test, p < 0.01 in both cases). The fold difference in hGAD65mut and hGAD65 expression was therefore lower in the MagnICON platform compared to both pK7WG2 transient expression and the transgenic plants. We observed signs of toxicity (such as premature leaf senescence) when either hGAD65mut or hGAD65 was expressed, as previously reported for other recombinant proteins (Pinkhasov et al. 2011; Nausch et al. 2012). This may explain the lower yields we observed compared to the potential of the system, which can achieve recombinant protein yields of up to 80 % TSP (Marillonnet et al. 2004). To explain these data, we propose that hGAD65mut reaches a threshold level in the transgenic plants which is determined by the inherent stability of the protein in the plant cell environment, based on its intrinsic physicochemical properties. This cannot be overcome using the MagnICON system. In contrast, the accumulation of hGAD65 in the transgenic plants is inhibited at a much lower level because of the hypothesized feedback mechanism discussed above, which is determined by its catalytic activity. When pK7WG2 is used for transient expression, we propose that the same feedback mechanism takes effect. However, it is possible that the MagnICON system can overcome this feedback because of its rapid and high-level expression, allowing large amounts of protein to accumulate before any impact on plant metabolism takes hold. In addition to the plant-based platforms, we also expressed hGAD65mut and hGAD65 in insect cells using baculovirus vectors, since hGAD65 produced in this system has recently been used for a phase III clinical trial testing the preservation of β-cell function in patients with recent-onset T1D (Ludvigsson et al. 2012). As in plants, we found that hGAD65mut accumulated to a higher level than hGAD65, but the fold difference was the lowest among the platforms we tested, suggesting that hGAD65 is less toxic to insect cells than to plant cells. It has previously been shown that hGAD65 forms inclusion bodies when expressed in E. coli (Mauch et al. 1993), so that laborious solubilization and refolding are required to achieve the native conformation (Franke et al. 1988). We therefore expressed hGAD65 and hGAD65mut using an inducible system suitable for protein overexpression (pDEST17/BL21), exploiting different growth temperatures to optimize performance. We focused on low-temperature cultivation because this reduces the hydrophobic interactions that are known to promote the formation of inclusion bodies, and in this way we aimed to improve the solubility of the recombinant proteins and encourage efficient folding (Niiranen et al. 2007). Even with the benefits of this platform, we found that both recombinant proteins formed insoluble aggregates under all the conditions we tested. Solubilization of the aggregates using strong denaturing agents suggested that hGAD65mut accumulated to higher levels than hGAD65, but this system is clearly unsuitable for the large-scale production of immunogenic proteins. However, the higher accumulation of hGAD65mut compared to hGAD65 confirms that the catalytic activity of the recombinant protein hampers its accumulation in bacterial cells. Endogenous bacterial GAD is thought to control the acidification of the cytosol, so it is likely that the recombinant protein disrupts this process (Capitani et al. 2003).
Overall, our data indicate that hGAD65mut accumulates to a higher level than hGAD65 in all the platforms we tested, although the fold difference is platform-dependent. This is likely to reflect a universal feedback mechanism in which GAD65 enzyme activity interferes with the metabolic processes responsible for its own synthesis, whereas the catalytically inactive form escapes such feedback and accumulates to higher levels (Avesani et al. 2010). Finally, we selected the best-performing plant system, i.e. the elite transgenic plant line expressing hGAD65mut at the highest level, and compared it in terms of yield and cost with the commercial baculovirus platform used to produce hGAD65. Assuming similar costs for developing the two systems and ignoring personnel costs, we estimated that the production costs for 1 g of recombinant protein using the baculovirus system could reach 700 euro (including conservative costs for the media required to grow 9 l of insect cells), whereas the equivalent cost in plants was substantially lower, at less than 5 euro (including the cost of soil to grow 60 tobacco plants). The costs associated with sterile cell cultures are much higher than the costs associated with growing plants, so even if downstream processing is more difficult and expensive in the case of plants, the baculovirus system remains much more costly overall. It is also notable that plants are much more scalable than cultured cells, so it is clear that transgenic plants offer a significant advantage in terms of overall production and costs even if insect cells have a greater intrinsic productivity per unit of biomass.
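The cost comparison above is essentially an exercise in scaling a per-unit yield up to a 1 g target and attaching consumable costs. The sketch below reproduces that arithmetic in Python; the plant yield is taken from the text, but the per-plant leaf biomass, the insect-cell yield and the unit costs for soil and media are illustrative assumptions rather than the authors' actual figures.

```python
def plant_platform_cost(target_g, yield_ug_per_g_flw, leaf_g_per_plant, soil_cost_per_plant_eur):
    """Leaf biomass, number of plants and soil cost needed to reach a protein target."""
    leaf_g_needed = target_g / (yield_ug_per_g_flw * 1e-6)   # ug/g -> g/g
    plants_needed = leaf_g_needed / leaf_g_per_plant
    return leaf_g_needed, plants_needed, plants_needed * soil_cost_per_plant_eur

def insect_platform_cost(target_g, yield_ug_per_ml, media_cost_per_l_eur):
    """Culture volume and media cost needed to reach a protein target."""
    litres_needed = target_g / (yield_ug_per_ml * 1e-6 * 1000)  # ug/ml -> g/l
    return litres_needed, litres_needed * media_cost_per_l_eur

# Plant side: 114.3 ug/g FLW from the text; ~150 g leaf per plant and 0.08 EUR/plant are assumptions
print(plant_platform_cost(1.0, 114.3, leaf_g_per_plant=150.0, soil_cost_per_plant_eur=0.08))

# Insect side: yield and media cost are placeholders; commercial-scale yields may differ
# from the small-scale cultures reported here, which is why the text quotes ~9 l of culture
print(insect_platform_cost(1.0, yield_ug_per_ml=110.0, media_cost_per_l_eur=75.0))
```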
Research on control strategy of vehicle stability based on dynamic stable region regression analysis

The intervention time of a stability control system is determined by stability judgment, which is the basis of vehicle stability control. According to the different working conditions of the vehicle, we construct the phase plane of the vehicle's sideslip angle and sideslip angular velocity, and establish a sample dataset of the stable regions of the different phase planes. To reduce the complexity of phase plane stable region division and avoid handling a large amount of data, we established a support vector regression (SVR) model and realized the automatic regression of the dynamic stable region. Testing on the test set shows that the model established in this paper has strong generalization ability. We designed a direct yaw-moment control (DYC) stability controller based on linear time-varying model predictive control (LTV-MPC). The influence of key factors such as centroid position and road adhesion coefficient on the stable region is analyzed through phase diagrams. The effectiveness of the stability judgment and control algorithm is verified by simulation tests.

1. Introduction

Active safety technology has increasingly become one of the key research fields of the automotive industry. The stability of the vehicle indicates the safety of vehicle driving, and vehicle stability control is the basis for the implementation of active safety technology (Lai et al., 2021). The judgment of vehicle stability determines the intervention and exit times of the control system, which makes it an extremely critical part of stability control (Chen et al., 2020). There are two mainstream methods of vehicle stability judgment. The first is to use a stability criterion from control theory, such as the Lyapunov criterion, to judge stability based on a multi-DOF model of the vehicle or tire (Zhenyong, 2006; Yang et al., 2009; Vignati et al., 2017). The second is to use the phase plane to judge the stability of the vehicle, which is very intuitive and is an important research method for vehicle stability judgment. Vehicle stability judgment methods based on the phase plane can be divided into two main types: the sideslip angle-yaw rate phase plane method and the sideslip angle-sideslip angular velocity phase plane method. Because the sideslip angle-yaw rate phase plane method cannot accurately judge vehicle stability under unstable conditions such as pure sideslip with small yaw rate fluctuation, while the sideslip angle-sideslip angular velocity phase plane method does not have this problem, the latter is more widely used. The sideslip angle-sideslip angular velocity phase plane method was originally proposed by Inagaki et al. (1995) and Yamamoto et al. (1995). They use the "double-line method" to distinguish vehicle stability: two straight lines passing through the saddle points are determined in the sideslip angle-sideslip angular velocity phase plane, and the region surrounded by these two straight lines is considered the stable region, but this region still contains many unstable trajectories far from the equilibrium point. Taeyoung and Kyongsu (2006) proposed determining a rhombic region in the sideslip angle-sideslip angular velocity phase plane as the stable region of the vehicle. The four vertices of the rhombus fall on the two coordinate axes, and vehicle stability control with a variable threshold is achieved by setting a relaxation factor.
The experimental results show that this stability control scheme has good performance, but it introduces more parameters, which makes dividing the stable region difficult. Von Vietinghoff et al. (2008) verified the work of Taeyoung and Kyongsu (2006) by simulation and found that the rhombic method may not be able to determine the upper and lower endpoints. Yu et al. (2015) introduced the stable region determined by the yaw rate method on top of the double-line method, reduced the unstable operating conditions in the stable region obtained by the double-line method, and established a database of stable regions at different vehicle speeds, road adhesion coefficients and front wheel angles. Liu et al. (2014) proposed an improved five-eigenvalue rhombus stable region determination scheme and established a stable region database for different vehicle speeds, road adhesion coefficients and front wheel angles through simulation; during the simulation process, the table can be consulted according to the vehicle state parameters to judge vehicle stability. In summary, most of the existing literature has considered the effect of real-time vehicle speed, adhesion coefficient and front wheel angle on the phase plane. In practice, however, because of the uncertainty in the mass and position of the load and passengers, the mass and centroid position of the vehicle will change. These changes lead to great changes in vehicle performance, such as braking performance, acceleration performance and anti-roll performance. Therefore, it is necessary to consider the change of centroid position when plotting the vehicle phase plane. At present, most studies that use the phase plane method to determine driving stability rely on building databases and looking up tables to determine the stable region under different working conditions. This scheme can meet the accuracy requirements, but when the vehicle parameters change, the database needs to be reconstructed, resulting in high time and space complexity and poor practicability. With the rapid development of data transmission and artificial intelligence technology, machine learning algorithms are widely used in various disciplines to solve classification and regression problems. The division of the stable regions of vehicles in different states is also a regression prediction problem based on data feature extraction, which can be solved by machine learning. In this paper, based on the traditional double-line method, we propose an improved double-line method for stable region division that considers the limit value of the sideslip angular velocity. We then design an SVR vehicle stable region regression model trained on a small dataset, which can make reasonable predictions of the stable region of the vehicle. In addition, we construct a DYC controller to verify its feasibility and superiority. The structure of this paper is as follows. Section 1 introduces the background. Section 2 introduces the process of plotting the vehicle β-β̇ phase plane and dividing the stable region. Section 3 introduces the dynamic stable region regression model, including dataset construction, model construction, parameter optimization and test set comparison. Section 4 summarizes the influential factors and analyzes their effects. Section 5 introduces the design of the stability controller and the simulation test scheme, and analyzes the simulation results. Section 6 concludes the paper.
The architecture of this paper is shown in Figure 1.

2. β-β̇ phase plane establishment and stable region division

2.1. Vehicle dynamics modeling

As shown in Figure 2, this paper studies vehicle driving stability based on a 2-DOF nonlinear single-track model, where β denotes the sideslip angle, δ denotes the front wheel angle, α_f denotes the front wheel slip angle, α_r denotes the rear wheel slip angle, v_COG denotes the velocity at the centroid of the vehicle, γ denotes the yaw rate, C_G denotes the centroid of the vehicle, O denotes the instantaneous center of the steering motion of the vehicle at this moment, a denotes the distance from the centroid to the front axle, b denotes the distance from the centroid to the rear axle, F_1 indicates the lateral force on the front axle and F_2 indicates the lateral force on the rear axle. According to the 2-DOF single-track model of the vehicle shown in Figure 2, the equations of motion of the whole vehicle are derived from Newton's law, as given in Eq. (1), where m denotes the mass of the whole vehicle, v_x denotes the component of the vehicle velocity in the X-axis direction, and I_z denotes the rotational inertia of the whole vehicle around the Z-axis.

2.2. Phase plane plotting

In this paper, the lateral forces on the front and rear axles of the vehicle under different road adhesion coefficients, centroid positions, front wheel angles and speeds are obtained through simulation tests (Zha et al., 2021). According to Eq. (1), the changes of the sideslip angle and the sideslip angular velocity under different working conditions can be calculated. Using the parameters in Table 1, the phase trajectories of sideslip angle and sideslip angular velocity can be drawn (Li et al., 2014), as shown in Figure 3.

2.3. Stable region dividing

This section proposes a method for dividing the stable region in the β-β̇ phase plane based on the improved double-line method, as illustrated in Figure 4. Since the phase trajectories in this region always extend in the direction of decreasing absolute value of the sideslip angle, controlling the centroid sideslip angle within the phase plane stable region can effectively maintain the lateral stability of the vehicle (Zhang et al., 2011). The phase plane represents the relationship between the sideslip angle and the sideslip angular velocity, and its phase trajectories vary according to changes in the centroid position, road adhesion coefficient, vehicle speed and front wheel angle. The four boundaries and the equilibrium points output by the regression model thus determine the stable region; the specific methods are described in the next section.

3. Dynamic stable region regression and stability judgment

3.1. Sample making

SVR is widely used to solve data regression problems because of its good predictive properties on small datasets and its robustness to abnormal data (Zhang et al., 2019). In this paper, we use SVR to realize the regression of the phase plane stable region. Firstly, the samples of the dataset are made according to the improved double-line method, i.e., the phase plane is divided manually and the information on the division of the stable region is recorded. To study the effects of centroid position, vehicle speed, road adhesion coefficient and front wheel angle on the phase plane trajectory, the working condition parameters are set as shown in Table 2. In this paper, five MISO SVR models are established based on the dataset, as shown in Figure 5A.
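Eq. (1) itself is not reproduced in this extract. As a point of reference, the sketch below integrates the widely used linear single-track form of the 2-DOF model (an assumed stand-in for the authors' nonlinear formulation, with constant cornering stiffnesses instead of simulated tire forces) to trace β-β̇ phase trajectories for one fixed speed and front wheel angle; all parameter values are illustrative.

```python
import numpy as np

# Illustrative vehicle parameters (assumptions, not the paper's Table 1)
m, Iz = 1500.0, 2500.0          # mass [kg], yaw inertia [kg m^2]
a, b = 1.2, 1.4                 # centroid-to-axle distances [m]
kf, kr = 8.0e4, 9.0e4           # front/rear cornering stiffness [N/rad]
vx, delta = 20.0, 0.02          # longitudinal speed [m/s], front wheel angle [rad]

def derivatives(beta, gamma):
    """Linear 2-DOF single-track model: returns (beta_dot, gamma_dot)."""
    alpha_f = delta - beta - a * gamma / vx   # front tire slip angle
    alpha_r = -beta + b * gamma / vx          # rear tire slip angle
    F1, F2 = kf * alpha_f, kr * alpha_r       # axle lateral forces
    beta_dot = (F1 + F2) / (m * vx) - gamma
    gamma_dot = (a * F1 - b * F2) / Iz
    return beta_dot, gamma_dot

def phase_trajectory(beta0, gamma0, dt=0.001, steps=4000):
    """Forward-Euler integration, returning the (beta, beta_dot) trace."""
    beta, gamma, trace = beta0, gamma0, []
    for _ in range(steps):
        bd, gd = derivatives(beta, gamma)
        trace.append((beta, bd))
        beta, gamma = beta + dt * bd, gamma + dt * gd
    return np.array(trace)

# A grid of initial states sweeps out the phase portrait used for region division
for b0 in np.linspace(-0.15, 0.15, 5):
    print(b0, phase_trajectory(b0, 0.0)[-1])   # final (beta, beta_dot) of each trajectory
```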
The input and output of the model have the following mapping relationship: the model output includes three sideslip angle boundary predictions and two sideslip angular velocity boundary predictions, and each output is shown in Figure 5B. The dataset is divided into a training set and a test set at a ratio of 8:2. The training set is used to train the SVR models, and the test set is used to evaluate their performance, as described below.

3.2. Model structure

For linearly separable SVM problems (Huang et al., 2014), a convex optimization problem needs to be solved by the maximum margin algorithm: minimizing a quadratic function subject to linear inequality constraints. Given l linearly separable training samples, the corresponding optimization problem is formulated and solved (Sun et al., 2008). The solution process transforms this optimization problem with the Lagrange function, converting the minimax problem into its dual problem in terms of the Lagrange multipliers α_i ≥ 0; substituting back into the original formulation gives the dual objective function, which is then solved by the sequential minimal optimization (SMO) algorithm. However, the hard margin classifier mentioned above cannot be used in many real-world problems. If the experimental data are noisy, a soft margin classifier is used to allow the model to tolerate noise and outliers, thus taking more training points into account; this is the soft-margin linear SVM. A margin relaxation (slack) variable is introduced, which allows the margin constraint to be violated to some extent, and the optimization problem is modified accordingly. The SVR, which is the main topic of this paper, retains all the main features of the maximum margin algorithm. In this paper, we use ε-SVR, a common form of regression estimation, which employs an ε-insensitive loss function to ignore errors within a certain range above and below the true value, as shown in Figure 6; ξ measures the cost of the error at the training points, while the error within the ε-insensitive region is zero. This leads to the final single-objective constrained optimization problem. The kernel function is a widely used computational tool in SVMs (Cai et al., 2019): it calculates the inner product φ(x_i)·φ(x) in the feature space directly and thereby builds a nonlinear learner. In this paper, a Gaussian kernel function is used for the high-dimensional mapping, K(x_i, x) = exp(−‖x_i − x‖²/(2σ²)), where σ is the scale of the Gaussian kernel function.

3.3. Parameter optimization

As with most learning algorithms, the hyperparameters of the SVR model determine its performance, including the regularization parameter C, the insensitivity parameter ε and the radial basis kernel parameter σ (Xiao et al., 2008). For different nonlinear regression problems, it is necessary to select different hyperparameters to find the optimal high-dimensional feature space that reflects the characteristics of the data (Jia et al., 2022). K-fold cross-validation is a statistical technique: the training set is divided into K equal parts; in the first round the first part is taken as the validation set and the rest as the training set, in the second round the second part is taken as the validation set and the rest as the training set, and so on.
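As a concrete illustration of the five-model MISO setup described above, the sketch below trains one ε-SVR with an RBF (Gaussian) kernel per boundary output using scikit-learn; the synthetic features and targets are placeholders for the real working-condition/boundary dataset, and the hyperparameter values are arbitrary.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Features: [vehicle speed, road adhesion coefficient, front wheel angle, centroid position]
X = rng.uniform(low=[10.0, 0.2, -0.10, 1.0], high=[40.0, 1.0, 0.10, 1.6], size=(400, 4))
# Targets: five boundary parameters per working condition (placeholder values)
Y = rng.normal(size=(400, 5))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)  # 8:2 split

models = []
for k in range(Y.shape[1]):                      # one MISO model per boundary output
    model = make_pipeline(
        StandardScaler(),
        SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma="scale"),  # RBF width relates to sigma via gamma = 1/(2*sigma^2)
    )
    model.fit(X_tr, Y_tr[:, k])
    models.append(model)

pred = np.column_stack([mdl.predict(X_te) for mdl in models])
print("test MSE per output:", ((pred - Y_te) ** 2).mean(axis=0))
```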
Finally, the average error over the K folds is calculated to represent the training performance of the model. Bayesian optimization is a common method for tuning parameters in machine learning. Its main principle is to sample the hyperparameter space probabilistically and, after multiple iterations, return the best solution found among the sampled points. In this paper, we use Bayesian optimization to perform an automatic parameter search that minimizes the 5-fold cross-validation loss of the SVR model on the training set. The optimization process of one SVR model is shown in Figure 7. The optimal parameters of the five SVR models were obtained and are listed in Table 3.

3.4. Test set comparison

In this section, the model performance is evaluated on a test set using the stable region regression model obtained from the training above. The mean square error (MSE) on the test set is selected to represent the closeness between the predicted output and the expected output, and is also used to evaluate the generalization ability of the model (Jingxu, 2006). The MSE takes the form MSE = (1/n) Σ_{i=1}^{n} (ŷ_i − y_i)², where y_i is the expected output and ŷ_i the predicted output. From Table 4, the maximum MSE of the sideslip angular velocity boundaries is 0.0428 (rad·s⁻¹)², while the maximum MSE of the sideslip angle boundaries is 0.0030 rad², which shows that the stable region regression model has strong generalization ability.

4. Summary of influential factors and effect analysis

Based on the SVR model of the vehicle stable region, the factors influencing the stable region are summarized and analyzed. The intercept of the stable region boundary on the horizontal axis characterizes the limit of the steady-state sideslip angle, which is the base point of the whole stable region boundary. The slope of the boundary represents the limit of the sideslip angle under different sideslip angular velocities: the smaller the absolute value of the boundary slope, the stronger the limit the boundary imposes on the sideslip angle under transient conditions. Through the analysis of the phase plane stable region and the quadrilateral stability boundary, the following conclusions are drawn. (1) The slope of the left and right boundaries is mainly affected by vehicle speed. In Figure 8, as vehicle speed increases, the values of the left and right boundaries remain basically unchanged, and the absolute value of the boundary slope decreases. This shows that under the same sideslip angle, the limit on the transient sideslip angular velocity tightens, the number of convergent phase trajectories decreases significantly, and the stable region of the phase plane shrinks. (2) The intercept of the stable boundary is mainly affected by the road adhesion coefficient. In Figure 9, as the adhesion coefficient decreases, the slope of the non-adjacent boundary of the stable region remains basically unchanged, but the left and right boundary values converge toward the equilibrium point. This shows that under the same sideslip angular velocity, the restriction on the sideslip angle is strengthened, the stable region shrinks, and the number of convergent trajectories in the phase plane decreases. (3) The main effect of the front wheel angle on the phase plane is to make the phase trajectories asymmetric.
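The hyperparameter search described above pairs 5-fold cross-validation with Bayesian optimization. The sketch below keeps the same cross-validated objective but uses a randomized search as a stand-in (a Bayesian optimizer such as scikit-optimize's BayesSearchCV could be dropped in instead); data and search ranges are placeholders.

```python
import numpy as np
from scipy.stats import loguniform
from sklearn.svm import SVR
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 4))        # placeholder working-condition features
y = rng.normal(size=300)              # placeholder boundary parameter (one output)

param_distributions = {
    "C": loguniform(1e-2, 1e3),       # regularization parameter
    "epsilon": loguniform(1e-3, 1e0), # insensitivity parameter
    "gamma": loguniform(1e-3, 1e1),   # RBF parameter, gamma = 1 / (2 * sigma**2)
}

search = RandomizedSearchCV(
    SVR(kernel="rbf"),
    param_distributions,
    n_iter=50,
    cv=5,                              # 5-fold cross-validation
    scoring="neg_mean_squared_error",
    random_state=0,
)
search.fit(X, y)
print("best hyperparameters:", search.best_params_)
print("5-fold CV MSE:", -search.best_score_)
```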
In Figure 10, when the front wheel angle is small, the number of convergent trajectories does not change significantly, but the stable region flattens along the horizontal axis and the asymmetry is not yet obvious. When the front wheel angle is large, the slope of the left boundary changes so that it is no longer parallel to the right boundary; the absolute value of its slope decreases, resulting in a larger steady-state sideslip angle limit and a sharp reduction of the transient sideslip angle limit. (4) The main effect of the centroid position on the stable region acts through the distances from the centroid to the front and rear axles. In Figure 11, when the centroid position is shifted backward, the slope of the boundary is basically unchanged, the absolute value of the intercepts of the left and right boundaries of the stable region decreases, the number of convergent phase trajectories is significantly reduced, and the stable region of the phase plane shrinks.

5. Stability controller design and simulation test

5.1. Stability controller design

When there is a high risk of instability, the ESP system automatically intervenes to prevent the vehicle from losing control. Both DYC and Active Front Steering (AFS) control technologies can improve the driving stability of the vehicle. Among them, DYC generates a yaw moment acting on the body through four-wheel differential braking forces to achieve vehicle stability control, and it is widely used because of its good performance in vehicle handling and trajectory keeping. The controller design is based on the 2-DOF differential equations of vehicle motion, since vehicle stability is closely related to the vehicle state. Considering the influence on vehicle speed of the deceleration caused by the four-wheel brake force distribution, vehicle speed is treated as a time-varying state quantity, and the expression for the sideslip angular velocity β̇ follows from this model. The state-space equation is the system representation used in modern control theory: starting from the differential equations of the system and introducing the concepts of system state, input and output, the vehicle stability state-space equation is obtained by further simplification, where a and b are the distances from the center of mass to the front and rear axles, k_f and k_r are the cornering stiffnesses of the front and rear axles, and T is the additional yaw moment. Model predictive control (MPC) is a model-based feedback control strategy that is widely used in various control systems because of its good control performance and robustness. The basic principle of MPC can be summarized as follows: at each sampling time, update the optimization problem with the latest measured values, solve the updated open-loop optimization problem, and apply the first component of the optimal solution u*(k|k) to the system. LTV-MPC is an extension of MPC. Because MPC requires high model accuracy, its control accuracy declines as the system state evolves away from the model; LTV-MPC, however, accounts for the state changes of the linear control system and has stronger adaptability to time-varying systems. Figure 12 shows the schematic diagram of the open-loop optimal solution of LTV-MPC. Considering the time-varying nature of the vehicle system, this paper establishes a DYC stability controller based on LTV-MPC, which includes an upper controller and a lower controller. The upper controller receives the signal from the stability judgment module and outputs the expected additional yaw moment.
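A compact sketch of the upper-controller idea follows: the sideslip/yaw dynamics are rediscretized around the current speed at every step (the time-varying part) and a finite-horizon quadratic program is solved for the additional yaw moment, of which only the first move is applied. The model form, weights and limits are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import cvxpy as cp

# Illustrative vehicle parameters (assumptions)
m, Iz, a, b = 1500.0, 2500.0, 1.2, 1.4
kf, kr = 8.0e4, 9.0e4            # front/rear cornering stiffness [N/rad]
dt, N = 0.02, 20                 # sampling time [s], prediction horizon

def ltv_matrices(vx):
    """Discrete-time 2-DOF model, state x = [beta, gamma], input = additional yaw moment T."""
    A = np.array([
        [-(kf + kr) / (m * vx), (b * kr - a * kf) / (m * vx**2) - 1.0],
        [(b * kr - a * kf) / Iz, -(a**2 * kf + b**2 * kr) / (Iz * vx)],
    ])
    B = np.array([[0.0], [1.0 / Iz]])
    return np.eye(2) + dt * A, dt * B     # forward-Euler discretization, refreshed as vx varies

def mpc_step(x0, vx, T_max=3000.0):
    """Solve the finite-horizon QP and return the first control move (receding horizon)."""
    Ad, Bd = ltv_matrices(vx)
    Q = np.diag([50.0, 10.0])             # state weights on [beta, gamma]
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, constraints = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k + 1], Q) + 1e-5 * cp.square(u[0, k])
        constraints += [x[:, k + 1] == Ad @ x[:, k] + Bd @ u[:, k],
                        cp.abs(u[0, k]) <= T_max]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return float(u.value[0, 0])

# Example: a disturbed initial state at 20 m/s
print(mpc_step(np.array([0.05, 0.3]), vx=20.0))
```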
The upper controller receives the signal from the stability judgment module and gives the expected additional yaw moment at the The upper controller is mainly composed of LTV-MPC, while the lower controller is responsible for the four-wheel braking force distribution of the yaw moment, which is adjusted in real time by giving the desired additional yaw moment and desired deceleration speed from the upper controller. The four-wheel braking force needs to satisfy the following constraints. Where, F X1 , F X3 , F X2 , F X4 corresponds to the left front, left rear, right front, and right rear wheels respectively, L a is the front and rear axle half axle length, and a aim is the desired braking Frontiers in Neurorobotics frontiersin.org . /fnbot. . deceleration, is given by another upper speed controller, which is not the focus of this paper and will not be discussed more here. Due to the axle load transfer during braking, the front wheels are subjected to greater vertical loads than the rear wheels, and are subjected to greater braking forces, with the following constraints. Where h g is the height of the center of mass of the vehicle. According to the above constraints, there is a unique fourwheel braking force distribution scheme for the given expected additional yaw moment and expected deceleration. At this point, a DYC stability controller based on LTV-MPC is proposed, and the model-in-loop simulation test will be launched based on the algorithm proposed in the previous section. . . Model-in-the-loop simulation testing CARSIM-SIMULINK co-simulation platform is used for model in-loop simulation test. We configured the vehicle dynamics model and operation scenario in CARSIM, and designed the controller and algorithm in SIMULINK (Cong et al., 2022). The scenario set in CARSIM selects a road with different adhesion coefficients between the left and right wheel surfaces: 1.0 on the left and 0.2 on the right. Under such bad road conditions, the stability controller proposed in this paper will work to avoid the instability of vehicle. However, the braking force distribution will lead to the underutilization of the road adhesion coefficient, which will lead to the deterioration . /fnbot. . of the vehicle braking performance. Therefore, it is necessary to set up a decision module to determine the current stability state of the vehicle and decide whether to apply stability control (Guo et al., 2018). The stability decision module is given by the dynamic stable region regression model proposed in this paper, as shown in Figure 13. According to the vehicle state input, the stable region regression model gives the vehicle driving stable region on the phase diagram of the sideslip angle-sideslip angular velocity, as the criterion of vehicle stability. Since the artificial division of stable region is strict, so in the stable region, we regard that the vehicle is in the absolute stability state. At the same time, in order to prevent the controller from switching frequently and repeatedly, we set a relaxation factor F a which is greater than 1. Multiply each boundary of the stable region by F a to obtain the unstable boundary. The vehicle state outside the unstable boundary is regarded as unstable, and the middle of the unstable boundary and the stable boundary is regarded as the transition region. As shown in Figure 14. On this basis, the test process of stability decision and control algorithm is proposed. 
The model-in-the-loop simulation test compares the traditional AEB scheme, a scheme equipped with stability control (SC) but without the stability judgment decision (SVR), and a scheme equipped with both SC and SVR. We use the vehicle sideslip angle and the distance to the obstacle to measure the stability safety performance and braking safety performance of each scheme, which allows the feasibility of the vehicle stability control strategy to be verified (Zhang et al., 2016; Wu et al., 2022). By adjusting and calibrating F_a, the simulation results shown in Figure 15 were obtained. The simulation results show that, for emergency braking under adverse road conditions, the traditional AEB scheme suffers serious instability and sideslip (Wu, 2007), so that the sensor can no longer recognize the obstacle ahead, which clearly does not meet the requirements of stability safety. The scheme with SC but without SVR can ensure the stability safety of the vehicle, but due to the excessive intervention of the stability controller, the braking safety of the vehicle cannot meet the requirements. The scheme with both SC and SVR not only ensures that the vehicle stops at a safe distance in front of the obstacle, but also keeps the sideslip angle within an acceptable range. In summary, the rationality of using the stable region regression model proposed in this paper for stability judgment is verified, and the efficiency of the LTV-MPC stability control algorithm proposed in this paper is demonstrated.

6. Conclusions and outlook

The algorithm proposed in this paper is mainly applicable to scenarios where lateral instability of the vehicle may occur; in other words, it is used to determine the stability state of the vehicle and to avoid instability. The main conclusions are as follows.
(1) Based on the traditional double-line method, this paper proposes an improved double-line method yielding a quadrilateral stable region, and carries out extensive stable region division work to establish a sample dataset for supervised learning training and testing.
(2) An SVR-based dynamic stable region regression model is proposed on the basis of the vehicle β-β̇ phase plane to provide a criterion for the real-time stability of vehicle driving. The results on the test set indicate that this dynamic stable region regression model has strong generalization ability.
(3) The dynamic stable region regression model is extended to consider real-time vehicle state input, and a causal analysis of the important factors affecting the vehicle driving stable region is performed and summarized.
(4) A DYC stability controller based on LTV-MPC is developed, and an algorithm verification process is designed. Compared with the traditional AEB scheme and the SC-only scheme, the SC&SVR scheme can coordinate the braking safety and stability safety of the vehicle. The results verify the rationality of using the proposed stable region regression model for stability evaluation, as well as the efficiency of the DYC stability control algorithm.
However, the algorithm still has several limitations. (1) Because pre-calibrated datasets are used, the accuracy of the model will decrease when the vehicle parameters change substantially. (2) The test vehicle needs to have four-wheel differential braking capability. (3) The adaptive MPC algorithm consumes considerable computing power and requires a highly accurate model, so it needs to be paired with a high-performance observer for real-vehicle testing.
Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
Lithium ion doped carbonated hydroxyapatite compositions: Synthesis, physicochemical characterisation and effect on osteogenic response in vitro

Biomaterials Advances

Hydroxyapatite is a commonly researched biomaterial for bone regeneration applications. To augment performance, hydroxyapatite can be substituted with functional ions to promote repair. Here, co-substituted lithium ion (Li⁺) and carbonate ion hydroxyapatite compositions were synthesised by an aqueous precipitation method. The co-substitution of Li⁺ and CO₃²⁻ is a novel approach that accounts for charge balance, which has been ignored in the synthesis of Li-doped calcium phosphates to date. Three compositions were synthesised: Li⁺-free (Li 0), low Li⁺ (Li 0.25), and high Li⁺ (Li 1). Synthesised samples were sintered as microporous discs (70-75 % theoretical sintered density) prior to being ground and fractionated to produce granules and powders, which were then characterised and evaluated in vitro. Physical and chemical characterisation demonstrated that lithium incorporation in Li 0.25 and Li 1 samples approached design levels (0.25 and 1 mol%), containing 0.253 and 0.881 mol% Li⁺ ions, respectively. The maximum CO₃²⁻ ion content was observed in the Li 1 sample, with ~8 wt% CO₃, with the carbonate ions located on both phosphate and hydroxyl sites in the crystal structure. Measurement of dissolution products following incubation experiments indicated a Li⁺ burst release profile in DMEM, with incubation of 30 mg/ml sample resulting in a Li⁺ ion concentration of approximately 140 mM after 24 h. For all compositions evaluated, sintered discs allowed for favourable attachment and proliferation of C2C12 cells, human osteoblast (hOB) cells, and human mesenchymal stem cells (hMSCs). An increase in alkaline phosphatase (ALP) activity with Li⁺ doping was demonstrated in C2C12 cells and hMSCs seeded onto sintered discs, whilst the inverse was observed in hOB cells. Furthermore, an increase in ALP activity was observed in C2C12 cells and hMSCs in response to dissolution products from Li 1 samples, which related to Li⁺ release. Complementary experiments to further investigate the findings from hOB cells confirmed an osteogenic role of the surface topography of the discs. This research has shown successful synthesis of Li⁺ doped carbonated hydroxyapatite, which demonstrated cytocompatibility and enhanced osteogenesis in vitro compared to Li⁺-free controls.

Introduction

Hydroxyapatite, a specific composition within the calcium phosphates, remains highly investigated for applications in bone regeneration. As the most stable calcium phosphate ceramic under physiological conditions, hydroxyapatite possesses a high degree of physicochemical similarity to natural bone mineral. Nevertheless, its high stability results in low chemical solubility and therefore slow resorption in vivo. To augment the performance of hydroxyapatite biomaterials, researchers have investigated incorporating functional ions to enhance bone regeneration and repair. Examples of this are the substitution of strontium [1], magnesium [2] or zinc [3] ions for calcium ions, silicate [4] or carbonate [5] ions for phosphate ions, or fluoride [6] ions for hydroxyl ions in the hydroxyapatite structure; such substitutions have been reviewed in detail elsewhere [7,8].
A recent systematic review and meta-analysis studied the effect of inorganic ion supplementation of calcium phosphates on the enhancement of in vivo bone formation and showed that strontium, magnesium and silicon supplementation in particular significantly enhanced bone repair [9]. Of interest in the present study, lithium ions (Li⁺) have demonstrated osteogenic potential in vitro [10,11] and in vivo [12]; the in vitro studies showed osteogenic stimulation with 5-20 mM LiCl. As a non-specific GSK-3 inhibitor, Li⁺ is thought to induce osteogenesis through activation of the canonical Wnt signalling pathway [13,14]. To date, there have been limited investigations into Li⁺ doped hydroxyapatite/calcium phosphates in the literature. In all of these studies, however, a mechanism for charge balance to account for monovalent Li⁺ substituting for divalent Ca²⁺ was not introduced or proposed. This oversight may limit the accuracy of the fabrication process and the actual success of Li⁺ substitution for Ca²⁺ in the crystal structure, rather than simple Li⁺ addition. For example, an earlier investigation mixed bovine hydroxyapatite powder with Li₂CO₃ prior to sample fabrication; however, chemical characterisation to validate the synthesis and dissolution analysis to confirm Li⁺ release were absent [15]. Other studies have used a variety of precursors to introduce Li⁺ as a dopant in single substitutions in hydroxyapatite or beta-tricalcium phosphate (β-TCP), including LiNO₃ [16][17][18][19], LiCl [20,21], Li₂O [22], and Li₃PO₄ [23], but none of these attempted to account for the charge balance that would be required for successful Li⁺ substitution for Ca²⁺ in the HA or β-TCP crystal lattice. On the other hand, a great deal of research has investigated carbonated hydroxyapatite owing to the presence of carbonate ions in biological apatite (~8 wt% in mammalian bone), where they are thought to play an important physical and biological role [24]. The anionic carbonate ions are capable of substituting for hydroxyl ions or phosphate ions to form A- and B-type carbonated hydroxyapatite, respectively [25]. Bone comprises both A- and B-type (AB-type) carbonate, with a predominance of B-type carbonate and an A/B ratio of 0.7-0.9 [26]. Cation co-doped carbonated hydroxyapatite typically involves sodium or ammonium ion doping owing to their predominance in the precursor chemicals (i.e. Na₂CO₃, (NH₄)₂CO₃) [7]. The co-substitution of, e.g., monovalent sodium with carbonate ions (for phosphate ions) provides an effective method for maintaining charge balance, which single substitution of a monovalent cation for calcium does not. To address the limitations of studies to date, this study aims to co-substitute Li⁺ ions with CO₃²⁻ ions for Ca²⁺ and PO₄³⁻ ions, respectively, to achieve charge balance, resulting in novel compositions. Thus far, there have been no reported studies on Li⁺ doped carbonated hydroxyapatite. To understand how Li⁺ substitution affects the properties of the resulting compositions, the in vitro cell response, specifically the direct and indirect osteogenic response of various cell types to the samples, will be assessed.

Sample preparation

Compositions of Li⁺ doped AB-type carbonated hydroxyapatite were prepared based on an aqueous precipitation reaction developed previously [27]. Three compositions were considered: Li⁺-free (Li 0), low Li⁺ substitution (Li 0.25), and high Li⁺ substitution (Li 1). Carbonate and Li⁺ were co-substituted in accordance with Eq. (1),
where 0, 0.25, and 1 correspond to the value of x; quantities of the respective reactants were calculated to generate approximately 0.025 mol (25 g) of material, and the masses used are listed in Table S1. For all compositions, carbonate was introduced into the synthesis by bubbling CO₂ gas into the phosphoric acid solution prior to the precipitation reaction, whereas the Li-substituted compositions had an additional source of carbonate ions from the lithium carbonate (Li₂CO₃) reactant used. This proposed mechanism does not account for carbonate substitution on the hydroxyl site, which could occur by a mechanism independent of x. For Li⁺ doped compositions, Li₂CO₃ (255823, Sigma Aldrich, UK) was included in the calcium hydroxide (10304KA, VWR, UK) suspension preceding acid addition. In a typical synthesis, phosphoric acid (EMSURE 85 % assay, Merck, UK) in 250 mL of carbonated distilled water (achieved by bubbling CO₂ gas for 30 min beforehand) was added dropwise to a 250 mL calcium hydroxide suspension in distilled water under continuous stirring and alkaline conditions (pH > 10, via the addition of 50 mL of concentrated ammonium hydroxide). Upon addition of all of the acid solution, the reaction was stirred for 2 h prior to aging unstirred for a further 24 h. Following aging, the mixture was filtered under vacuum using Whatman® grade 3 filter paper, and the resulting filter cake was dried at 80 °C overnight and then ground finely using a mortar and pestle. To produce discs for sintering, approximately 300 mg of powder was pressed into a Ø 13 mm die set and a 1 ton force applied for 1 min using a hydraulic press. Pressed discs were sintered in a tube furnace (Model STF 15/450 with 3216 controller; Carbolite, UK) under a CO₂-rich environment, with CO₂ gas flowed through distilled water and then into the furnace at 500 cm³/min. The heat treatment involved ramping up to the required temperature at 2.5 °C/min, holding for 1 h, and cooling at 10 °C/min. Sintering temperatures of 1150 °C, 950 °C, and 750 °C were used for Li 0, Li 0.25, and Li 1, respectively, as experimentation identified that these temperatures allowed fabrication of discs with similar sintered density (data not shown). Sintered densities were determined from the masses and dimensions of the sintered discs and were expressed as a percentage of the theoretical density of hydroxyapatite [28]. As sintering samples in a CO₂ atmosphere can lead to partial substitution of carbonate groups onto hydroxyl sites in the HA structure, all three compositions were sintered in the CO₂-rich environment to provide consistency. To generate samples with a higher surface area and thus greater ion release, sintered discs were ground, sieved and fractionated to 500-1000 μm to produce granules or <200 μm to produce powder samples.

Powder X-ray diffraction

The crystalline phases of powder samples were identified using an X'Pert Pro diffractometer (PANalytical Ltd., UK) operating at 45 kV and 40 mA using Cu Kα radiation (λ = 1.5418 Å). X-ray diffraction (XRD) patterns were collected from 10 to 80° 2θ with a step size of 0.01313° and a time per step of 998.07 s. Confirmation of a hydroxyapatite phase was obtained by comparing experimental patterns with the ICDD standard pattern of HA (PDF Card No. 9-432 [28]).
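The sintered density calculation described above is geometric: the apparent density from a disc's mass and dimensions is expressed as a percentage of the theoretical density of hydroxyapatite. The sketch below reproduces this arithmetic; the theoretical density value (~3.16 g/cm³) is the commonly quoted figure for stoichiometric HA, and the disc dimensions are placeholders rather than measured values.

```python
import math

THEORETICAL_HA_DENSITY = 3.16  # g/cm^3, commonly quoted value for stoichiometric HA

def sintered_density(mass_g, diameter_mm, thickness_mm):
    """Geometric (apparent) density of a sintered disc and % of theoretical HA density."""
    radius_cm = diameter_mm / 20.0                      # mm -> cm, then halve for radius
    volume_cm3 = math.pi * radius_cm**2 * (thickness_mm / 10.0)
    rho = mass_g / volume_cm3
    return rho, 100.0 * rho / THEORETICAL_HA_DENSITY

# Placeholder disc: ~300 mg pressed in a 13 mm die, with some shrinkage assumed on sintering
print(sintered_density(mass_g=0.295, diameter_mm=11.5, thickness_mm=1.25))
```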
Unit cell parameters were calculated using unit cell refinement in HighScore Plus software, using the space group P6₃/m and the unit cell parameters and reference peak positions from the ICDD standard pattern of HA as a starting point for the refinement. The crystallinity of the sintered samples was determined by the method described by Landi et al. [29].

Fourier transform infrared spectroscopy

Fourier transform infrared (FTIR) spectra were obtained from the powder samples. The absorbance was measured between 4000 and 400 cm⁻¹ at a 2.0 cm⁻¹ resolution with 8 scans using a Spectrum Two FTIR spectrometer (Perkin-Elmer, UK) equipped with a diamond/ZnSe ATR crystal. The Solver add-in in Microsoft Excel was employed for composite peak deconvolution of the carbonate ν₂ region and the phosphate ν₄ region of the Li 1 sample heated at 750 °C, using Gaussian profiles.

Specific surface area determination

The specific surface areas of powder samples were determined using a Micromeritics Tristar 3000 surface area analyser. Samples were preheated at 200 °C under flowing N₂ gas prior to collection of isotherms for calculation using the BET method. Due to the low surface area of the samples, multipoint analysis to calculate the BET surface area could not be performed, so only a single-point BET method (at a partial pressure of ~0.2) was used.

Contact angle measurement

The surface contact angle of sintered discs was measured using an FTA1000 B sessile drop instrument (First Ten Ångstroms, USA). A 5 μL droplet of distilled water was manually dispensed using a Gilmont goniometer and the resultant contact angle measured using the corresponding Fta32 software (First Ten Ångstroms, USA). Measurements were taken at ambient temperature and humidity using 3 discs per composition for each sintering temperature.

Scanning electron microscopy

Observations of surface topography, microporosity and grain morphology were performed using a Zeiss EVO MA10 scanning electron microscope employing a 10 kV accelerating voltage (Carl Zeiss, UK). Samples were mounted on aluminium stubs using carbon adhesive tabs, and silver paint was added to form conductive paths between the samples and the stubs. Mounted samples were coated with an approximately 10 nm layer of a gold/palladium mixture prior to imaging.

Dissolution studies

To investigate the ionic release from the synthesised Li⁺ doped carbonated hydroxyapatite samples and/or changes in the ion composition of the soaking solution, each of the three compositions (Li 0, Li 0.25, and Li 1) was prepared as sintered discs, granules, and powders and soaked in a 0.08 M acetic acid-sodium acetate buffer solution (pH 5.5) [30] or Dulbecco's modified Eagle's medium (DMEM) in triplicate. The concentration of material per soaking volume for granules and powders was kept at 1.5 mg/mL as previously recommended [31], whilst the volume used for discs was calculated based on the apparent surface area as set out in ISO 23317. For DMEM experiments, granule and powder samples were soaked in a total volume of 20 mL, whilst samples in the acetate buffer were soaked in a total volume of 35 mL. Samples in the acetate buffer or DMEM were incubated at 25 °C/300 rpm or 37 °C/120 rpm, respectively. Experiments in DMEM involved complete replenishment with fresh solution at each timepoint, whereas for samples in acetate buffer, 1 mL aliquots were taken at each timepoint and replaced with fresh acetate buffer.
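The Gaussian peak deconvolution described above (performed by the authors with the Excel Solver) can equally be sketched with scipy; the synthetic spectrum and the band positions below are placeholders for illustration only, not the measured FTIR data or the exact band assignments used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, wid):
    return amp * np.exp(-((x - cen) ** 2) / (2 * wid ** 2))

def three_gaussians(x, a1, c1, w1, a2, c2, w2, a3, c3, w3):
    return gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2) + gaussian(x, a3, c3, w3)

# Synthetic carbonate nu2 region with three overlapping bands (approximate, placeholder positions)
x = np.linspace(850, 900, 400)
truth = three_gaussians(x, 0.30, 878, 2.5, 0.55, 871, 3.0, 0.20, 866, 3.5)
y = truth + np.random.default_rng(0).normal(0, 0.01, x.size)

p0 = [0.3, 878, 2, 0.5, 871, 3, 0.2, 866, 3]           # initial guesses for the three bands
popt, _ = curve_fit(three_gaussians, x, y, p0=p0)

# Integrated band areas (amp * width * sqrt(2*pi)) for relative carbonate environment comparison
areas = [popt[i] * popt[i + 2] * np.sqrt(2 * np.pi) for i in (0, 3, 6)]
print("fitted band areas:", areas)
```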
At the endpoint, test samples were washed in ethanol, air dried, and stored in a desiccator prior to preparation for SEM imaging as per the previous section. The soaking solutions were analysed for Ca²⁺ and Li⁺ using microwave plasma-atomic emission spectroscopy (MP-AES), whilst phosphate ions were measured using a previously described colorimetric assay [32]. For measurement of Ca²⁺ and Li⁺, test samples were diluted 10-fold with 1 % (v/v) HNO₃ in deionised water prior to triplicate measurements using an Agilent 4100 instrument at the following wavelengths: 393.366 nm for Ca²⁺ and 670.784 nm for Li⁺. For measurement of phosphate ions, 20 μL aliquots of test samples were added, in triplicate, to a 96-well microwell plate prior to the addition of 200 μL of test reagent comprising 1 part 4.2 % (w/v in 4 M hydrochloric acid) ammonium molybdate tetrahydrate and 3 parts 0.045 % (w/v in deionised water) malachite green oxalate salt. After a 15 min incubation, the absorbance was read at 650 nm using a BioTek Synergy HT microplate reader.

Chemical analysis of powders

The bulk composition of the powder samples was determined by dissolving 12.5 mg of powder in 100 mL of 1 % HNO₃ and measuring the Ca²⁺, Li⁺ and phosphate concentrations as per the previous section. Ca²⁺ and Li⁺ measurements using MP-AES were taken following a 100-fold dilution in 1 % HNO₃, whilst a 20-fold dilution was employed prior to the colorimetric phosphate assay. The carbonate content of the powder samples was determined using a LECO CS744 carbon/sulphur analyser (LECO Instruments UK Ltd., UK). For each sample, duplicate measurements were made and the mean value reported alongside the standard deviation.

Supplementation with 10 nM dexamethasone, 10 mM β-glycerophosphate, and 50 μM ascorbic acid 2-phosphate (hMSCs only) was used as a positive osteogenic control for hOB cells and hMSCs. As an additional control, to examine the effects of Li⁺ alone, cells were treated with 10 mM LiCl (in PBS).

Resazurin assay

The resazurin assay was used to indirectly monitor any cytotoxic effects of the sintered discs. C2C12 cells (40,000 cells/cm²), hOB cells and hMSCs (20,000 cells/cm²) were seeded in a 100 μL volume onto the discs and allowed to adhere for 4 h. Cells were maintained in culture for 7 days, with measurements taken at days 3 and 7.

Osteogenic differentiation

To evaluate the influence of the synthesised discs on osteoinduction and osteoblast functionality, C2C12 cells, hMSCs, and hOB cells were seeded as per the previous section and cultured for 7 days prior to measurement of ALP activity. This marker was chosen as it has been shown to be a good predictor of the bone-forming capacity of MSCs in vivo [33], and because C2C12 cell osteogenic differentiation in response to doses of BMP-2 from demineralised bone matrix has been shown to correlate with bone formation in vivo [34]. Owing to the response observed in hOB cells, two sets of experiments were performed to distinguish the effects of the discs' topography from their surface chemistry. One set of discs was manually polished with abrasive paper to remove the effects of the topography, whilst another set of discs was sputter coated with a gold/palladium mixture to insulate the seeded cells from the underlying ionic effects. In order to ascertain whether the dissolution products of the synthesised materials were able to induce osteogenesis, sintered discs, granules, and powders were soaked in 5 mL DMEM for 7 days at 37 °C.
The mass of granules and powders used was standardised to the mass of the discs (~300 mg) for a final concentration of 60 mg/mL. After the 7-day soaking, the culture media was sterile filtered, supplemented to obtain growth media, and used to culture C2C12 cells for 7 days prior to measurement of ALP activity. Moreover, the sintered materials were evaluated in an insert culture setup which ensured C2C12 cells responded to the dynamic effects of the materials within the culture media. The sintered granules were considered as they provided a greater ionic release compared to discs whilst remaining fixed in position compared to the powders. Three concentrations (15, 30, and 60 mg/mL) were evaluated for a 7-day culture prior to measurement of ALP activity. Furthermore, positive responses from these experiments were authenticated in hMSCs.

Statistics

For in vitro characterisation, 3 discs per composition were evaluated in each experiment, with experiments performed in triplicate. Statistical analysis was carried out using GraphPad Prism 7.04 (California, USA) with assessment of statistical differences between multiple conditions using one-way analysis of variance with a post-hoc Tukey's test. Results are presented as mean ± standard deviation (SD); p values < 0.05 were considered statistically significant. For graphical presentation, * indicates p < 0.05, ** indicates p < 0.01, and *** indicates p < 0.001.

Physicochemical characterisation

With the exception of Li 1, the samples synthesised were phase-pure, showing only X-ray diffraction peaks corresponding to an HA phase (Fig. 1A); for Li 1 samples a small calcite impurity phase was detected by a single diffraction peak at 29.4° 2θ. The determined unit cell parameters indicated modest changes; comparisons of the a and c unit cell parameters suggested a negligible difference between Li 0 and Li 0.25 samples, whilst Li 1 samples showed a slight reduction in both a and c unit cell parameters compared to the other samples (Table 1). Likewise, similar observations were found with respect to the volume of the unit cell. The crystallinities of all samples were calculated as being >95 %. Infrared spectra showed the typical vibrations reported previously for AB-type carbonated hydroxyapatite in the literature, with the spectrum for the Li 1 sample between 400 and 1800 cm⁻¹ shown in Fig. 1B [35,36]. Only a weak vibration at approximately 3569 cm⁻¹, corresponding to hydroxyl group stretching, was observed at wavenumbers above 1800 cm⁻¹, so spectra are only plotted for the region below this. Peak deconvolution of the carbonate ν2 and phosphate ν4 regions is shown in Fig. 2. The carbonate ν2 region could be fitted to three peaks corresponding to A-type, B-type and labile carbonate (Fig. 2A), although the region relating to labile carbonate (850-865 cm⁻¹) does not fit very well to a single peak, suggesting slightly different environments for labile carbonate. A clear OH libration peak at approximately 630 cm⁻¹ was observed (Fig. 2B), with 3 peaks corresponding to apatitic phosphate ν4 vibrations and a peak at approximately 532 cm⁻¹ that has been assigned to HPO4 groups. SEM micrographs of the sintered discs portrayed heterogeneous porosity and grain attributes between the compositions examined (Fig. 1C). Li 0 discs possessed a polygonal grain shape with variable pore and grain sizes. In contrast, Li 0.25 discs displayed two classes of grain shape: grains comparable to Li 0 discs, and others more elongated and cylindrical, with the latter retaining a relatively uniform size.
Li 1 discs displayed predominantly elongated, cylindrical grains with greater uniformity compared to the other compositions, although Li 1 discs depicted variable pore sizes. SEM images of granules and powders (data not shown) showed irregular shaped particles with dimensions within the sieved ranges of each. Calculation of sintered densities of sintered discs indicated general parity between compositions as per optimisation of sintering temperature, with mean values ranging from 71.2 to 75.9 % of the theoretical density (the mean actual sintered densities for Li 0, Li 0.25 and Li 1 were 2.30, 2.39 and 2.25 g/cm 3 , respectively). Likewise, this effect was reflected in the contact angle measurements, with values showing a small, but not significant, increase with increasing Li substitution. Measurements of the specific surface area of Li 0, Li 0.25, and Li 1 powders are given in Table 1. A general trend of increased specific surface area of the powders with increased Li + doping is observed. However, it should be noted the different sintering temperatures used for these samples (1150, 950, and 750 • C for Li 0, Li 0.25, and Li 1 samples, respectively). Elemental analysis of the chemical composition of the powders, using MP-AES (Ca and Li) and a colorimetric assay (P) showed that the Ca/P and (Ca + Li)/P molar ratios increased with increasing Li + substitution, consistent with the design compositions from Eq. (1) that require increasing carbonate substituting on the phosphate site (B-type), Dissolution behaviour of sintered powders, granules and discs Measurement of Ca levels in DMEM after incubating samples illustrated several trends with respect to differences between compositions and the form of material evaluated. As expected, powders resulted in greater consumption of calcium ions from the media, as a result of surface precipitation of a CaP layer, followed by granules and then discs ( Supplementary Fig. S1B, A, and Fig. 3A, respectively). Considering the continuous exchange of media at timepoints, findings indicated sustained consumption of calcium for the duration of the timepoints examined. Although the progression of Ca consumption remained similar between the compositions, the data highlighted slight differences in relation to one another, and these were more evident for incubating granules and powder, rather than the granules. The granules upheld increased consumption of Ca up to day 14 which then plateaued from day 21 (Fig. 3A). For P measurements, Li 0 granules demonstrated a sustained increase in P consumption throughout the soaking period whilst Li 0.25 and Li 1 granules began to plateau at day 14, although, increased depletion was observed with Li 1 granules at day 28 (Fig. 3C). Data from Li + measurements indicated a burst release profile for the Li + doped compositions regardless of the form of material evaluated. As expected, Li 1 granules demonstrated a higher Li release compared to Li 0.25 granules with an approximately 6-fold increase at day 1 (Fig. 3E). Following a sharp incline at day 1 for all samples, pH levels remained stable throughout the soaking period ( Supplementary Fig. S2); dissolution experiments were not performed in a CO 2 enriched atmosphere, resulting in the initial pH increase observed in the first 24 h. 
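For readers who want to reproduce the Ca/P and (Ca + Li)/P molar ratios reported above from the measured mass concentrations, the conversion is a short calculation, sketched below. The concentrations in the example are invented; only the standard atomic masses and the ratio definitions are fixed.

```python
# Converting measured element concentrations (mg/L) into Ca/P and (Ca+Li)/P
# molar ratios, as used to compare powders against their design compositions.
# Example concentrations are hypothetical; atomic masses are standard values.

ATOMIC_MASS = {"Ca": 40.078, "P": 30.974, "Li": 6.941}  # g/mol

def molar_ratios(ca_mg_l, p_mg_l, li_mg_l):
    ca = ca_mg_l / ATOMIC_MASS["Ca"]   # mmol/L
    p = p_mg_l / ATOMIC_MASS["P"]
    li = li_mg_l / ATOMIC_MASS["Li"]
    return ca / p, (ca + li) / p

if __name__ == "__main__":
    # Hypothetical dissolved-powder measurements after dilution correction
    ca_p, ca_li_p = molar_ratios(ca_mg_l=48.0, p_mg_l=21.5, li_mg_l=0.9)
    print(f"Ca/P = {ca_p:.2f}, (Ca+Li)/P = {ca_li_p:.2f}")
```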
In general, the solubility of granules in the acetate buffer was dependent on the design substitution value with greater Ca, P, and Li release per increased Li + substitution with Ca and P release from Li 0.25 granules closely following the results of the Li 1 granules (Fig. 3B, D). Furthermore, a burst Li + release was observed for Li 0.25 and Li 1 granules which began to plateau at an earlier stage in Li 0.25 granules compared to Li 1 granules (Fig. 3E). The burst release profile for Li reaffirmed data from DMEM experiments but differed from the Ca and P findings in the acetate buffer which depicted a sustained release of these elements throughout the soaking period. To confirm calcium phosphate precipitation on the surface of materials following incubating in DMEM, SEM micrographs were taken post-dissolution to visualise the surface for apatite formation (Fig. 4). Observations proved challenging, particularly at higher resolutions, with indications of charging effects presumably due to an air gap resulting from the separation of the comprehensive precipitation layer and the underlying surface of the disc. Indeed, visualisation of granules following DMEM experiments indicated widespread bulbous surface morphology characteristic of apatite formation. In vitro osteogenesis in response to dissolution products Treatment with conditioned media, derived from incubating discs, granules, and powders, failed to induce a significant increase in C2C12 cell ALP activity with the exception of Li 1 powders (p < 0.0001) which generated a greater response than a positive control of 10 mM LiCl stimulation (Fig. 5A). The Ca concentrations in the conditioned media for each composition/sample type was measured (Fig. 5B) and does not show a clear correlation with the corresponding ALP activity. To confirm whether the aforesaid response was a result of Li + release, a lower (30 mg/ml) and higher (90 mg/ml) mass of Li 1 powders was soaked prior to use in culture. Findings demonstrated a proportional increase in ALP activity with increased Li 1 powder concentration (Fig. 5C). Ca concentrations in the conditioned media were also measured for each composition, sample type and also powder concentration used for Li 1 (Fig. 5D). Although the Ca concentration varied significantly for the different compositions and sample type tested, consistent with the dissolution data in Fig. 3 and Supplementary Fig. S1, when different powder concentrations of Li 1 were used to produce conditioned media the Ca concentration in these media were comparable, but the ALP activity values were significantly different. Analysis of ALP activity upon exposure to sintered granules held within inserts demonstrated a Li + specific osteogenic response (Fig. 5E). Li 0 granules failed to induce a significant response regardless of concentration examined (p = 0.9997 at 15 and 60 mg/mL, and p > 0.9999 at 30 mg/mL). In contrast, Li 1 granules prompted a significant increase in ALP activity for all concentrations studied, albeit levels peaked at 30 mg/mL (p = 0.0002 at 15 mg/mL, and p < 0.0001 at 30 and 60 mg/mL). The Ca concentration in the media showed an increase with increased incubation time of the granules within the culture inserts, with the Li 0 granules resulting in a slightly greater Ca increase (Fig. 5F). The positive response illustrated in C2C12 cells were further authenticated in hMSCs. 
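The group comparisons quoted in these dissolution and ALP experiments rest on one-way ANOVA with Tukey's post-hoc test, run in GraphPad Prism in the study. A minimal Python sketch of the same analysis is given below; the ALP values are fabricated stand-ins, not the study's data.

```python
# One-way ANOVA with Tukey's post-hoc test, mirroring the statistical approach
# described for the in vitro data. The ALP "measurements" below are made-up
# placeholder numbers, not values from the study.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([1.00, 1.10, 0.90, 1.05, 0.95, 1.00])
li0_granules = np.array([1.10, 1.20, 1.00, 1.15, 1.05, 1.10])
li1_granules = np.array([2.30, 2.60, 2.40, 2.70, 2.50, 2.45])

f_stat, p_value = f_oneway(control, li0_granules, li1_granules)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([control, li0_granules, li1_granules])
groups = (["control"] * len(control) + ["Li0"] * len(li0_granules)
          + ["Li1"] * len(li1_granules))
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```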
For this experiment, conditioned media from Li 1 powders (60 mg/mL) and 30 mg/mL Li 1 granules in inserts were assessed, compared to corresponding Li 0 powders and granules. Fig. 6A demonstrated a significant induction of ALP activity upon treatment with the conditioned media (p < 0.0001) and the granules (p = 0.0028). In contrast to C2C12 cells, conditioned media from Li 1 powders did not generate a greater response than LiCl treatment. The Ca, P and Li ion concentrations in the media of these experiments were measured, giving an indication of the ion release or CaP precipitation that occurred during the incubation of the granules in the cell culture wells. The Li + release was greater from powders than granules, consistent with the dissolution experiments, but the Li + concentrations achieved were significantly higher than from the dissolution experiments (Fig. 6D). Incubating the granules within the cell cultures did not result in the large consumption of Ca and P from the media that was observed in the dissolution experiments ( Fig. 6B & C), but the concentrations used were very different in these two experiments. In vitro response to sintered discs In both hOB cells and hMSCs, Li 1 discs caused faster cell spreading compared to Li 0 and Li 0.25 discs with recognisable cell polarisation within 4 h and the typical fibroblast-like morphology (Fig. 7). In contrast, cells on Li 0 and Li 0.25 discs remained predominantly rounded albeit with extensive cell projections (filopodia) indicative of the onset of cell polarisation. At 24 h, hOB cells and hMSCs exhibited cell spreading regardless of disc composition with no noticeable differences in cell morphology. A significant increase in ALP activity was observed in C2C12 cells seeded onto the sintered discs across all the compositions evaluated compared to cells in the growth media control (p = 0.0007 for Li 0, p = 0.0006 for Li 0.25, and p < 0.0001 for Li 1) (Fig. 8A). A similar trend to C2C12 cells was observed in hMSCs (Fig. 8C), although, differences in the level of statistical significance was detected. In contrast to C2C12 cells, only Li 1 discs induced a significant difference in ALP activity compared to the growth media control with an over 3-fold increase (p = 0.0004). Exposure to Li + doped carbonated hydroxyapatite failed to induce a significant response in hOB cells compared to cells in the growth media control (Fig. 8E). However, cells seeded onto the Li 0 discs generated a significant increase in ALP activity (p < 0.0001). Moreover, a proportional relationship of decreased ALP activity with increased Li + substitution was observed. Generally, findings demonstrated C2C12 cells, hOB cells, and hMSCs seeded onto discs retained comparable growth to cells in the growth media control, with no significant changes in mitochondrial activity with culture time (Fig. 8B, E, F). Overall, findings illustrated sintered discs retained cytocompatibility for all the compositions analysed regardless of the cell type considered. Influence of surface topography on hOB cell differentiation To evaluate the influence of the surface topography on hOB cells, two experiments were considered. Firstly, to abrogate the underlying surface chemistry but retain the overall surface topography, discs were sputter coated with gold/palladium, as per SEM preparation. Measurement of calcium ion levels, from media collected during culture, authenticated the experimental setup which masked ion release from the sintered discs. 
Findings from these experiments demonstrated a significant increase in ALP activity in coated Li 0 and Li 0.25 discs compared to the growth media control (p = 0.0422 and p = 0.0166, respectively) (Fig. 9A). In contrast, although a 2-fold increase was measured in Li 1 discs, no statistical significance was detected. Nevertheless, additional analysis indicated no difference in ALP activity of cells between Li 1 discs and those of the other compositions. Sputter coating of the discs led to no changes in the Ca concentration of the culture media (Fig. 9B). In a complementary experiment, sintered discs were polished to generate a uniform surface, and thereby, remove the influence of the surface topography. Here, hOB cells remained unresponsive with no noticeable increase or decrease in ALP activity measured, regardless of composition examined (p < 0.9999) (Fig. 9C); polished surfaces of all discs appeared similar by SEM (Fig. 9D). Discussion The aim of this study was to address a significant deficiency in previous studies that have attempted to substitute Li + ions into hydroxyapatite or tricalcium phosphates, which has been the exclusion of a charge balancing mechanism for the designed Li + for Ca 2+ ion substitution. To achieve this, we performed a co-substitution of Li + and CO 3 2− ions for Ca 2+ and PO 4 3− ions in hydroxyapatite using an aqueous precipitation method. This approach produced compositions with a low or a high concentration of Li + ions, allowing us to identify the effect of these compositions on their physical, chemical and biological properties, compared to a Li-free control prepared in a similar manner. In determining the mechanism for the physicochemical changes observed in the samples studied, several factors are of significance including the sintering conditions (time, temperature, gaseous environment) and precursor concentrations (particularly Li + and carbonate). Notably, previous investigations into sintering parameters in carbonated hydroxyapatite demonstrated densification from 700 • C [37], whereas stoichiometric hydroxyapatite typically exhibits densification from 800 to 900 • C [23]. Although heat treatments are considered isochronal, this fails to take into account the ramp-up process which may result in samples undergoing densification for several hours prior to reaching the sintering temperature. Results from density measurements indicated improved sinterability (i.e. lower temperatures required for densification) with increased Li + doping. Doping of β-tricalcium phosphates using Li 2 O as a precursor illustrated minimal differences in bulk and apparent density with increased dopant and no apparent trend [22]. In contrast, commercial hydroxyapatite powders doped with LiNO 3 resulted in moderate increased densities in Li 0.2 and 0.4 (wt%) samples but markedly reduced densities in Li 0.6 (wt%) samples [16]. Preceding work has demonstrated carbonate as paramount for sintering improvement in pure carbonated hydroxyapatite with carbonated water and heat treatment under a CO 2 rich environment as the sources of carbonate ions [35]. Thus, the improved sinterability observed in the present study is caused by increased carbonate ions derived from Li 2 CO 3 , and also CO 2 present in the sintering atmosphere. Changes to the crystal structure of hydroxyapatite are caused by an aggregate of the relative incorporation of Li + , A-type carbonate, and Btype carbonate ions, and sintering temperature. 
Formation of A-type carbonated hydroxyapatite is generally agreed to expand the a axis length and contract the c lattice parameter as CO 2 reacts with two hydroxyl groups to form carbonate resulting in reduced occupancy in the hexagonal channel [38]. Whilst, the inverse is true for B-type carbonated hydroxyapatite as the smaller planar carbonate substitutes the larger tetrahedral phosphate [39]. On the other hand, much of the data on Li + effect on the unit cell parameters remains inconsistent. Increased Li + doping, by way of higher concentrations of Li 2 NO 3 in the precursor solutions, in hydroxyapatite nanoparticles resulted in a smaller unit cell volume and reduced a and c parameters [40]. In contrast, other findings demonstrated minimal differences in lattice parameters at 900 and 1200 • C but indications of reduced a and c parameters and a smaller unit cell volume at 1400 • C [41]. Li + possess a smaller ionic radii compared to calcium ions (0.076 and 0.100 nm in a 6-fold coordination) and therefore, are capable of substitution with calcium ions at either the Ca (I) or Ca(II) site, and reduce the unit cell volume [42]. The similarity in unit cell parameters between the Li 0 and Li 0.25 samples, and the similar carbonate contents in these samples, suggests that carbonate and not Li + is the main controller of change in the unit cell dimensions. The Li 1 sample does show a notable decrease in both unit cell parameters, but the specific role of Li + , A-type or B-type carbonate substitution on this change can not be inferred here. The Li + contents of the samples were close to their design values, and the equal exposure to carbonate/CO 2 during synthesis and heating, except for the additional carbonate in the Li 2 CO 3 for the Li + substituted compositions, resulted in Li 0 and Li 0.25 having comparable carbonate contents, but the Li 1 sample having a much higher carbonate content that was consistent with the design composition. The similarity in all measured parameters for Li 0 and Li 0.25, except the Li + content, provides a good pair of samples to test the effect of Li + specifically on cell response. Equally, the Li 1 sample has a significantly larger Li + and carbonate content than the other two samples, but most of the physical parameters were comparable. Thermal decomposition of Li 1 samples was observed at higher temperatures, above the 750 • C used here, evidenced with physical bloating of the samples, from the density and contact angle measurements (data not shown). These findings are consistent with previous data which indicated carbonated hydroxyapatite decomposition occurred at higher temperatures and with increased carbonate content [43]. Thus, fabricated discs were heated to attain 70-75 % sintered densities to prevent this change in morphology. Although a small calcite impurity phase was observed in the Li 1 sample, the consequence of this on subsequent cell response is considered minimal due to the relative low solubility of this phase. In hydrolytic dissolution studies, investigations into mechanisms of hydroxyapatite dissolution indicated sequential processes of calcium and phosphate dissolution [44]. On the other hand, cell-mediated dissolution tends to occur more rapidly presumably due to the secreted acidic metabolites and enzymes [45]. This was previously illustrated in a comparison between porous Li + doped hydroxyapatite soaked in SBF and osteoblast biodegradation experiments [17]. 
In general, experimental data indicated greater solubility and thus, ion dissolution with Li 1 compared to Li 0.25 and Li 0 samples. This increase was likely due to the lower crystallinity and greater surface area in Li 1 samples. Previous studies have shown greater accumulative phosphate release in Li + doped hydroxyapatite compared to non-doped samples [17,19]. Likewise, prior investigations have established greater solubility with carbonate substitution [5,46]. Findings from the cell adhesion experiments demonstrated the impact of the varied surface topography on initial cell-material interaction with accelerated spreading on Li 1 discs. These findings were demonstrated in hOB cells and hMSCs and hence, were not considered a cell-dependent response. Earlier studies have illustrated the influence of microporosity, roughness, stiffness, surface charge, and wettability on protein adsorption [47]. Regarding the sintered discs analysed for in vitro experiments, the contact angle approximated to 60 • with minute but insignificant changes between the different compositions. Therefore, the cell spreading differences observed with Li 1 discs are thought to be a result of factors other than surface wettability. Investigations into cell adhesion on Li + doped hydroxyapatite remain scarce. A prior study reported improved MG63 cell adhesion and growth on Li + doped hydroxyapatite porous discs compared to the Li + free hydroxyapatite [17]. The authors speculated this was caused by a more compact bulk density in the Li + doped discs, in addition to the Li release to support proliferation. However, observations were made at later timepoints (day 4, 14, and 21) which failed to capture cell morphology during initial attachment. Whilst initial cell attachment is predominantly dictated by the underlying surface features, the ion release from calcium phosphate samples influence the ensuing cell proliferation and differentiation. Nevertheless, the surface topography remains a crucial regulator of proliferation and specifier of lineage commitment [48][49][50][51]. Findings from the resazurin assays indicated the sintered discs maintained cytocompatibility for the 7-day culture period but induced no noticeable changes in cell proliferation. These responses differed from previous studies which demonstrated increased MG63 cell proliferation, as measured by the MTT assay, seeded on porous low Li + -doped hydroxyapatite (99.5:0.5 Ca 2+ :Li + molar ratio) [17]. Similar findings were observed in hFOB cells on Li + doped β-tricalcium phosphate discs [22]. In contrast, no significant differences were observed in hMSC proliferation on Li + doped β-tricalcium phosphate discs compared to the nondoped controls, in agreement with findings from this study [20]. Differences in Li + release, surface topography, and culture conditions are presumably attributable for these discrepancies. Owing to the capacity for osteoinduction in response to the surface topography, surface chemistry, and dissolution products, a series of experiments were performed to assess these effects. In hOB cells, the osteogenic role of the surface topography was illustrated with the promotion of ALP activity on the sputter coated discs and the absence of differences in ALP activity on polished discs. Although not statistically significant, an increase in ALP activity was observed in sputter coated Li 1 discs compared to the unresponsiveness of hOB cells seeded onto uncoated Li 1 discs. 
Taken together, these findings would indicate that exposure to Li + , either from the surface chemistry or material dissolution, impaired ALP activity of hOB cells. These results are in agreement with those obtained previously in hFOB cells on Li + doped hydroxyapatite pellets [15]. At lower Li + doping, hFOB cells remained unresponsive, whilst, a higher doping (2 wt%) resulted in impaired ALP production and secretion. In addition, the authors observed different degrees of impairment with sintering temperature which despite the absence of topography visualisation, would indicate a function of the topography on ALP production and secretion [15]. Moreover, the osteogenic potential of the surface topography was observed in C2C12 cells and hMSCs with upregulation of ALP activity in Li 0 discs compared to cells in the growth media control. On the other hand, the osteogenic role of Li + release was established in C2C12 cells exposed to Li 1 granules and conditioned media of Li 1 powders which were further authenticated in hMSCs. Determination of Li + content of the Li 1 powder conditioned media indicated hMSCs were exposed to a concentration of approximately 100 μg/mL (Fig. 6D); of note, 10 mM LiCl constitutes a Li + concentration of approximately 70 μg/mL. Whilst these experiments remove the influence of the surface topography, other factors such as changes in calcium and phosphate concentrations may influence cell differentiation. Typically, increased supplementation of calcium and inorganic phosphate have resulted in enhanced osteogenic differentiation [52][53][54][55]. Although the effects of calcium and phosphate release/depletion are capable of confounding findings, several lines of evidence would suggest this not to be the case. Measurement of calcium concentrations in the conditioned media indicated depletion with Li 0.25 granules and Li 0 and Li 0.25 powders, in addition to Li 1 samples, yet no differences in ALP activity were observed (Fig. 5B). Separately, calcium concentration of conditioned media remained comparable despite increasing mass of soaked Li 1 powders though disparate outcomes in ALP activity was measured (Fig. 5D). On the other hand, the elevated calcium release in Li 0 granules compared to Li 1 granules during culture failed to induce an increase in ALP activity (Fig. 5F). Thus, these findings would reasonably suggest Li + as the prime driver of osteogenic differentiation. The osteogenic role of the dissolution products derived from Li 1 samples was in accordance with prior investigations into Li + doped calcium phosphate cements using MG63 and MC3T3-E1 cells [21,56]. In addition, other investigations have shown similar trends in calcium/phosphate levels during culture with comparable osteogenesis in response to Li + doped samples [18,20]. The observed effects of Li + ions released from the compositions studied on the ALP activity of hMSCs and C2C12 cells, but not hOB cells, are consistent with the effect of Li + ions introduced from LiCl added to the media, as observed as a control in our studies, but also from published studies adding LiCl [11,12]. How this would translate to bone formation would need to be tested in a preclinical model, but this was out with the scope of this study. 
Although Li + ions released from samples did not appear to enhance the ALP activity of hOB cells, the surface topography of all samples did appear to enhance ALP activity when hOB cells were seeded on sintered discs, and this effect was inversely related to the level of lithium substitution. Although all compositions were prepared to have comparable sintered densities, the loss of this effect on ALP activity when sintered discs were polished suggests that distinct surface microstructures existed for each composition after sintering that altered cell response. The significance of this would depend on these different surface structures being produced in more relevant sample forms, such as macroporous scaffolds. Conclusions This study set out to synthesise and characterise a novel, charge balanced, Li + doped carbonated hydroxyapatite material with follow-up in vitro studies to evaluate suitability as a bone graft or scaffold. Findings demonstrated successful synthesis of carbonated hydroxyapatite with relatively low and high Li + substitutions. Furthermore, Li 1 samples were shown to possess improved sinterability, enhanced dissolution, and a disparate microstructure compared to Li 0 and 0.25 samples. In vitro assessment indicated favourable cell attachment and cytocompatibility for all the compositions evaluated. In addition, Li 1 samples demonstrated augmented osteoinduction in C2C12 cells and hMSCs in response to direct seeding onto discs, granules held within inserts, and conditioned media from soaked powders. Of interest, experiments using the non-LiCl responsive hOB cells indicated a functional role of the surface topography with a potential compounded effect from the surface chemistry. CRediT authorship contribution statement Nasseem Salam: Investigation, experimental data acquisition, data analysis, writing -original draft. Iain R Gibson: Conceptualization, resources, experimental data acquisition and analysis (FTIR, surface area), supervision, funding acquisition, writing -review & editing, approving final draft. Declaration of competing interest The authors declare no conflict of interest related to the submitted work.
v3-fos-license
2018-04-03T02:57:37.042Z
2016-06-30T00:00:00.000
18136626
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://downloads.hindawi.com/journals/cjidmm/2016/3574149.pdf", "pdf_hash": "9d6d9a6cddfcb444b85420140b1c61966fd10bed", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46347", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Medicine" ], "sha1": "620ad04e1b8a7e5f5e5615653be9142e0bcceaac", "year": 2016 }
pes2o/s2orc
Factors Affecting Microbial Load and Profile of Potential Pathogens and Food Spoilage Bacteria from Household Kitchen Tables The aim was to study the bacterial load and isolate potential pathogens and food spoilage bacteria from kitchen tables, including preparation tables and dining tables. Methods. A total of 53 households gave their consent for participation. The samples were collected by swabbing over an area of 5 cm by 5 cm of the tables and processed for bacterial count which was read as colony forming units (CFU), followed by isolation and identification of potential pathogens and food spoilage bacteria. Result. Knowledge about hygiene was not always put into practice. Coliforms, Enterococcus spp., Pseudomonas spp., Proteus spp., and S. aureus were detected from both dining and preparation tables. The mean CFU and presence of potential pathogens were significantly affected by the hygienic practices of the main food handler of the house, materials of kitchen tables, use of plastic covers, time of sample collection, use of multipurpose sponges/towels for cleaning, and the use of preparation tables as chopping boards (p < 0.05). Conclusion. Kitchen tables could be very important source of potential pathogens and food spoilage bacteria causing foodborne diseases. Lack of hygiene was confirmed by presence of coliforms, S. aureus, and Enterococcus spp. The use of plastic covers, multipurpose sponges, and towels should be discouraged. Introduction Foodborne diseases remain a challenge globally, with higher incidence rate in developing countries. In 2010, the World Health Organization's Foodborne Disease Burden Epidemiology Reference Group estimated 582 million cases of foodborne diseases and 351 000 associated deaths worldwide [1]. Furthermore, elderly people, children aged less than 5 years, pregnant women, and individuals with low immune systems could be more vulnerable to foodborne diseases [2]. Every year, contaminated food contributes to 1.5 billion cases of diarrhoea in children, resulting in more than three million premature deaths worldwide [3]. Foodborne diseases originating from home have been increasingly reported recently and now considered to be an important aspect of public health [4,5]. Households have been reported as the second most important venue for foodborne diseases after restaurants [6]. The incidence of home-based foodborne illnesses could be difficult to interpret due to various food sources and underreporting of illness [4,6]. A number of factors could contribute to foodborne diseases in the home, including types of food supply, domestic activities taking place in the kitchen, hygienic practices, attitudes, belief, experience, and knowledge of every member of the household [4,7,8]. Experimental studies have concluded that cross-contamination of bacteria which could cause foodborne illnesses such as S. aureus, Salmonella spp., and Campylobacter spp. could occur from fleshy food to raw foods, kitchen surfaces, and equipment, including chopping boards and knives [4,9,10]. It has also been reported that 50% of foodborne diseases were due to inappropriate food storage and 28% were due to cross-contamination [11]. Poor hygiene was found to significantly affect the presence of Escherichia coli 0157:H7 in homemade hamburgers [12]. 
Bacteria responsible for foodborne disease could cause biofilm on food contact surfaces such as tables which could disseminate the potential pathogens continuously in the kitchen environment as well as ultimately affecting food 2 Canadian Journal of Infectious Diseases and Medical Microbiology quality and safety [13]. The bacterial appendages, fimbriae, flagella and surface polysaccharides have been extensively studied for their contributions to the formation of biofilms by E. coli, Proteus spp., Pseudomonas spp., Klebsiella spp., and Salmonella spp. [13]. Proteus spp. have often been responsible for both food spoilage and food poisoning [14] whereas Pseudomonas spp., which are known to cause off-odours and off-flavours in food, have more often been cited as responsible for food deterioration and spoilage [15,16]. In Mauritius, data from Ministry of Health and Quality of Health has indicated an ascending trend in the number of reported food poisoning cases, which was 2.0 cases in 2001 and increased to 31.0 cases per 100,000 midyear population in 2013. Furthermore, in 2013, diarrhoea and gastroenteritis of presumed infectious origin were the second cause of hospital discharge [17]. It would be impossible to estimate the percentage of home-based foodborne outbreaks, although it cannot be neglected. There is a need to study the sources and possible causes of foodborne diseases in household kitchens. Therefore, this study aimed to study the hygienic practices of a random sample of individuals in their home kitchens. The bacterial load and profile of potential pathogens and food spoilage bacteria from the home kitchen tables, dining and preparation tables, were investigated and compared. The various factors which might affect the load and presence of potential pathogens and food spoilage bacteria were also studied. Study Design. For the purpose of the study, a survey was initially carried out, followed by laboratory investigations. A questionnaire was designed which included four sections: firstly, general information of the family under study (age of members, family size/type, and diet); secondly, kitchen set-up (details of dining and preparation tables and their materials and cover and uses); thirdly, hygiene practices in the kitchen (hand washing frequency, use of chopping board); and, fourthly, food safety knowledge. The study was approved by the Department of Health Sciences, University of Mauritius. Sample Collection. A total of 53 households provided the samples which were collected using sterile cotton swabs by swabbing over a 5 cm × 5 cm surface area of kitchen tables. From each kitchen, four samples were obtained, one from dining table in the morning, one from dining table in the afternoon, one from preparation table in the morning, and one from preparation table in the afternoon. All the 212 samples were processed within 24 hours. Laboratory Investigations. All kitchen samples were processed for a bacterial count which was read as colony forming units (CFU), followed by the isolation and identification of potential pathogens. A serial dilution was carried out from the original sample and spread plate technique was done to determine the CFU/25 cm 2 . The samples were also streaked on sterile Nutrient Agar, MacConkey Agar, Bile Aesculin Agar, Salmonella Shigella Agar, Cetrimide Agar, and Sabouraud Agar (all from HiMedia, Mumbai, India). 
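The CFU/25 cm² values reported in the results follow from standard serial-dilution arithmetic: colonies counted on a plate are scaled by the dilution factor and by the plated volume relative to the swab eluate. The sketch below illustrates this calculation; the eluate volume, plated volume, and colony counts are assumptions for illustration, since the text does not state them.

```python
# Back-calculating CFU per 25 cm^2 swabbed area from spread-plate counts.
# The eluate volume, plated volume, and colony counts are assumptions for
# illustration; the study reports only the final CFU/25 cm^2 values.

def cfu_per_area(colonies, dilution_factor, plated_ml, eluate_ml):
    """CFU in the whole swab eluate, i.e. per 25 cm^2 swabbed area."""
    cfu_per_ml = colonies * dilution_factor / plated_ml
    return cfu_per_ml * eluate_ml

if __name__ == "__main__":
    # e.g. 82 colonies on a 10^-1 dilution plate, 0.1 mL spread,
    # swab eluted into 1 mL of diluent
    print(cfu_per_area(colonies=82, dilution_factor=10, plated_ml=0.1,
                       eluate_ml=1.0))  # -> 8200 CFU per 25 cm^2
```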
The potential pathogens were identified by conventional methods such as gram staining and biochemical tests such as catalase, coagulase, urease, oxidase, indole, methyl red, citrate, phenylpyruvic acid, and Kligler iron agar slant. Statistical Analysis. Data analysis was done using SPSS v.19.0. Descriptive statistics were used to summarise demographic data. Independent sample -test was used to calculate and compare between the bacterial load from the various sources. The odds ratio and difference in the prevalence of the potential pathogens were determined using Pearson's 2 test. A value of less than 0.05 was read as significant. Odds ratio (OR) has been used to measure the association between potential pathogens and factors such as demographic details, types of table, usage of towels, and diet. 3.1. Questionnaire. The demographic details of the families have been detailed on Table 1. The kitchen was busiest during dinner time (45.3%) followed by morning breakfast (26.4%), lunch (17.0%), and afternoon tea time (11.3%). Of the 53 dining tables, 44 were made of wood and 9 were made of plastic material. None of the plastic dining tables were covered while 37 (84.1%) of the wood tables were covered with plastic cover with the rest covered with cloth material. A total of 37 preparation tables were made of ceramics and 16 were made of wood material. It was noted that 17 (32.1%) households used their preparation tables as chopping boards and 27 (50.9%) used the same chopping board for both vegetables and fleshy foods. Only 21 (39.6%) of the respondents reported washing their hands always before preparing a meal or before eating. The frequency at which the kitchen was entirely cleaned was found to be daily for 18.9%, weekly for 58.8%, bimonthly for 17.0%, and monthly for 5.7%. For cleaning of the kitchen tables, 25 (47.2%) used multipurpose sponges, 13 (24.5%) used separate sponges, 13 (24.5%) used multipurpose kitchen towels, and 2 (3.8%) used separate kitchen towels. A high percentage of the respondents (96.2%) reported that food safety was very important. Laboratory Investigations. Out of the 212 samples, 168 (79.2%) showed bacterial growth while yeast was noted in 27 (12.7%). The mean CFU/25 cm 2 from the kitchen tables per day was 3264, with a higher prevalence from the preparation tables compared to the dining tables (3433 versus 3095), although the difference was not significant. The time of collection was not found to affect the CFU significantly. The material of the tables was found to affect bacterial load. Dining and preparation tables made of plastic had higher CFU compared to those made of wood ( < 0.05). Furthermore, tables covered with plastic covers had higher CFU compared to cloth materials ( < 0.05). A significantly higher CFU/25 cm 2 was noted from preparation tables which were also used as chopping boards (11185 versus 4839; < 0.05). Good hand washing practice, that is, always washing hands before preparing meals or eating, was significantly associated with lower CFU from both dining and preparation tables ( < 0.05). The tables cleaned with multipurpose sponges had the highest load with 8475 CFU/25 cm 2 followed by multipurpose kitchen towels which had 6049 CFU/25 cm 2 , with separate sponges 3670 CFU/25 cm 2 and separate kitchen towels 826 CFU/25 cm 2 . The difference was statistically significant. The potential pathogens isolated from the samples have been detailed in Table 2. 
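The odds ratios and 95% confidence intervals quoted in the results can be reproduced from 2 × 2 contingency counts with the usual log-based Wald interval, as sketched below. The counts in the example are invented and do not correspond to the study's raw data.

```python
# Odds ratio with a 95% Wald confidence interval from a 2x2 table, as used to
# relate factors such as table type to pathogen detection.
# The counts below are invented for illustration.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = exposed with/without outcome; c,d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

if __name__ == "__main__":
    # e.g. coliform-positive / -negative counts for preparation vs dining tables
    or_, lo, hi = odds_ratio_ci(a=30, b=76, c=19, d=87)
    print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```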
A higher prevalence of coliform was noted from preparation tables compared to dining tables (28.3% versus 17.9%; < 0.05; OR = 1.31 (1.01-1.73)), both in the morning (28.3% versus 19.9%; < 0.05: OR = 1.28 (1.01-1.69)) and in the afternoon (28.3% versus 17.0%; < 0.05: OR = 3.5 (1.02-1.77)). Pseudomonas spp. was also significantly more prevalent from the preparation table compared to dining tables (15.7% versus 5.1%; < 0.05; OR = 1.53 (1.14-2.6)). Among samples collected from the preparation tables, Enterococcus spp. was more prevalent in the morning samples (45.3% versus 28.3%; < 0.05; OR = 1.42 (1.02-2.06)). A significant increase in the prevalence of coliform and Enterococcus spp. was found with increasing number of residents, children, adults, and elderly people ( < 0.05). It was also noted that more frequent cleaning of the kitchen and better hand hygiene, such as washing hands before preparing every meal or having meals, significantly decreased the prevalence of coliforms and Enterococcus spp. ( < 0.05). The other factors which significantly affected the presence of coliform have been detailed in Table 3. Enterococcus was also isolated at higher prevalence from households on nonvegetarian diets ( < 0.05) and from preparation tables which were also used as chopping boards (56.4% versus 26.9%: < 0.05: OR = 2.13 (1.30-3.51)). S. aureus was more prevalent when the same chopping board was used for both vegetables and fleshy foods (22.2% versus 5.8%: < 0.05: OR = 2.44 (1.87-3.19)). The association between potential pathogens and food spoilage bacteria from the kitchen tables and the cleaning materials used to clean the kitchens were also enquired ( Table 4). Discussion It is now accepted that the prevalence of foodborne illnesses originating from home kitchens could not be neglected. However, most countries have not yet established adequate surveillance or reporting mechanisms to track home-based foodborne illnesses which could be due to technical and financial restraints. In this study, it was found that although a very high percentage of respondents reported that food safety was a very important matter, only half of them used separate chopping board for vegetables and fleshy foods. Furthermore, only 39.6% adhered to good hand washing practice before handling food. It could be that either knowledge was not complete or knowledge was not always put into practice. Previous studies have also reported that knowledge and guidance in food safety do not always help in changing behavior [4]. The cleaning of the kitchens was done at different frequencies and more frequent cleaning was associated with lower prevalence of coliforms and Enterococcus spp. Food preparation and cleaning in the kitchen have been reported to be routine tasks [18] which could be mundane and taken for granted [19]. In a kitchen, the process of cleaning has been reported to vary from one household to another. Some people might clean to remove debris from the tables, some would tidy the surfaces, and very few would actually clean with the aim of removing microbes [4]. Therefore, microorganisms could very easily be transferred from one place to another. The prevalence of E. coli and Enterococcus spp. was found to increase significantly in presence of elderly members and family size. It has been previously reported that food safety at home could be affected by the actions of every member using the kitchen [4]. 
Furthermore, the hands of older individuals were found to have a higher prevalence of coliforms compared to younger ones [7]. The elderly might be less strict about hygiene in the kitchen as they have been brought up in an era when processed food was consumed to lesser extent, refrigeration of foods was not in vogue, and the food supply chain was shorter [4]. In this study, S. aureus was the third most common potential pathogen isolated and was more prevalent when the same chopping board was used for both vegetables and fleshy foods. In an experimental study, S. aureus was found to have the highest rate of cross-contamination as compared to Campylobacter, Salmonella, and E. coli [10]. The presence of S. aureus on kitchen surfaces and food handlers hands has been associated with poor hygiene as the bacteria are highly susceptible to heat [7] and low concentrations of antibacterial dishwashing liquids [20]. As expected, a higher prevalence of potential pathogens was found from preparation tables compared to dining tables. The preparation tables are in contact with raw and fleshy foods more often. The presence of S. aureus and coliforms on kitchen counters and chopping boards has been previously reported. A significant increase of these potential pathogens was noted when the hands of the participants were positive to the same bacteria [7]. The use of preparation tables as chopping boards should be discouraged as this study found that such a practice significantly increased the CFU and prevalence of Enterococcus. One previous study reported an increase in prevalence of S. aureus and E. coli when preparation tables were used as chopping boards [7]. It was also revealed in this study that preparation tables made of wood have higher prevalence of coliform and Enterococcus spp. The nature of wood which is porous might Canadian Journal of Infectious Diseases and Medical Microbiology 5 allow penetration of juices from foods and bacteria, hence preventing their removal during cleaning and favouring their colonisation. Furthermore, plastic covers on both dining and preparation tables were associated with potential pathogens. The use of plastic covers on preparation tables should be discouraged as it was associated with high prevalence of coliforms. The cloth covers did not have coliform as the covers were most probably removed and washed as soon as they appear dirty whereas plastic covers might be wiped with sponges or towels to clean them for further use. The hydrophobicity and roughness of surfaces together with the strain and surface physicochemical properties of the bacteria could affect initial adhesion process of foodborne bacteria to kitchen materials [9]. A review has concluded that strains of Listeria monocytogenes and Salmonella enteritidis could bind to various common surfaces in the kitchen including stainless steel, polypropylene, cutting board, and silestone, but with different degree of adhesion [9]. E. coli and S. aureus survived on polyethylene materials for longer period of time [21]. The irregular surfaces of plastic material could favour the accumulation of organic matter and food residues, which could increase the attachment and survival of bacteria [22]. Several studies have concluded that kitchen cloths and sponges become contaminated during use and could be important in cross-contaminating kitchen utensils and surfaces [23,24]. This study did not isolate bacteria directly from sponges. 
However, a higher mean CFU and potential pathogens were noted from kitchen tables which were cleaned with multipurpose sponges and towels compared to separate ones. Studies have concluded that S. aureus and other foodborne illness causing bacteria could be transmitted from contaminated sponges to kitchen surfaces [24]. Furthermore, it was reported that washing of sponges contaminated with food did not reduce the bacterial load significantly [20]. Therefore, the use of multipurpose sponges and towels should be avoided in kitchens. The association of coliforms and S. aureus with foodborne diseases has been well documented. Enterococcus have also been recently studied as potential indicators of faecal contamination on hands as they are present in large numbers in human faeces and persist in the environment [25]. In the United Kingdom, enterococci are regarded as secondary indicators of faecal pollution [26]. The World Health Organization has recommended the adoption of enterococci as an indicator of recreational water quality [25,27]. Faecal enterococci from human beings have been reported to be avirulent [25]. However, their presence on kitchen tables, towels, and sponges would indicate lack of hygiene which could eventually affect food safety. Conclusion The present study revealed that kitchen tables at home could be very important sources of potential pathogens which have been reported to cause foodborne illnesses. The use of plastic covers on kitchen tables, multipurpose sponges, and towels should be discouraged in the kitchen. Lack of hygiene was confirmed by presence of coliforms, S. aureus, and Enterococcus spp. on the tables. Furthermore, people should be encouraged to apply basic food hygiene practices at home to ensure food safety.
v3-fos-license
2022-05-21T15:22:54.735Z
2022-05-19T00:00:00.000
248927390
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2022.867546/pdf", "pdf_hash": "bb069d81770d899879b0d51d414657fb6bdfacfd", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46349", "s2fieldsofstudy": [ "Medicine" ], "sha1": "b3b575999164bd938426d80ba459ef9424d48518", "year": 2022 }
pes2o/s2orc
An Observational Study of Trifluridine/Tipiracil-Containing Regimen Versus Regorafenib-Containing Regimen in Patients With Metastatic Colorectal Cancer Background There are no randomized control trials comparing the efficacy of trifluridine/tipiracil and regorafenib in patients with metastatic colorectal cancer (mCRC). Herein, we conducted an observational study to compare the oncologic outcomes of trifluridine/tipiracil-containing regimen (TAS-102) and regorafenib-containing regimen (REG) in patients with mCRC. Material and method Patients who were diagnosed to have mCRC in 2015 to 2021 and treated with TAS-102-containing regimen or REG-containing regimen were recruited. Monotherapy or combination therapy were all allowed in this study. Oncologic outcomes were presented with progression-free survival (PFS), overall survival (OS), overall response rate (ORR) and disease control rate (DCR). Results A total of 125 patients were enrolled into our study, accounting for 50 patients with TAS-102 and 75 patients with REG. Of these patients, 64% were treated with TAS-102 or REG monotherapy, while the remaining were treated with TAS-102 combination or REG combination. In general, the median PFS and OS were 3.7 versus 2.0 months (P = 0.006) and 9.2 versus 6.8 months (P = 0.048) in TAS-102 and REG, respectively. The ORR and DCR were 44% versus 20% (P < 0.001) and 72% versus 43% (P < 0.001) in TAS-102 and REG, respectively. As for treatment strategies, the survival were significantly longer in combination than in monotherapy, no matter in TAS-102 or REG group. Multivariate analysis showed TAS-102 and combination therapy were independent predictor associated with better survival. Conclusions Our results suggested that TAS-102 had better oncologic outcomes than REG in patients with mCRC, especially in combination. Further prospective trials are warranted to confirm our results. INTRODUCTION Colorectal cancer (CRC) is one of the most common gastrointestinal tract cancers in nowadays. It is the third prevalent malignancy and the second leading cause of cancerrelated death worldwide. There are more than 1.9 million new patients diagnosed to have CRC and 935,000 deaths attributed to CRC in 2020 (1). Among these patients, more than half developed metastatic colorectal cancer (mCRC) eventually. The standard therapies for patients with mCRC includes chemotherapy regimens containing irinotecan, oxaliplatin, and fluoropyrimidines in combination with anti-vascular endothelial growth factor (VEGF) or anti-epidermal growth factor receptor (EGFR) antibodies (2). With these treatments, the overall survival of mCRC has improved gradually with an estimated median survival about 30 months and 5-year survival rate about 14% (3). However, the prognosis of mCRC drops sharply when in chemorefractory status. The median survival after chemorefractory is approximately 6 months. Hence, there is an urgent need to improve outcomes in patients with chemorefractory mCRC. For chemorefractory mCRC, two oral agents, trifluridine/ tipiracil (Lonsurf; TAS-102; Taiho Pharmaceutical, TTY Biopharm., Taiwan) and regorafenib (REG; Bayer AG, Berlin, Germany), have been proved as third to fourth-line treatment to prolong survival. TAS-102 is composed of an antineoplastic thymidine-based nucleoside analog (trifluridine) and an inhibitor of thymidine phosphorylase that degrades trifluridine (tipiracil) (4). 
The pivotal phase 3 RECOURSE trial compared TAS-102 with placebo in patients with refractory mCRC and demonstrated that TAS-102 significantly prolonged overall survival (OS) (7.1 months vs. 5.3 months, p < 0.001) and progression-free survival (PFS) (2.0 months vs. 1.7 months, p < 0.001) as compared with placebo (5). REG is an oral multi-kinase agent that inhibits activity of several stromal receptor tyrosine kinases associated with angiogenesis, oncogenesis, and the tumor microenvironment (6). The pivotal phase 3 CORRECT trial compared REG and placebo in patients with refractory mCRC and demonstrated that REG resulted in significantly longer OS (6.4 months vs. 5.0 months, p: 0.0052) and PFS (1.9 months vs. 1.7 months, p< 0.0001) as compared with placebo (7). Based on these results, both TAS-102 and REG gain the indication of refractory mCRC. Current guidelines also indicate that TAS-102 and REG are both effective regimens in later-line treatment of mCRC (2). Nonetheless, there are no randomized control trials directly comparing the efficacy of TAS-102 and REG. Previous retrospective studies had published their prognosis of TAS-102 monotherapy and REG monotherapy. Kawakami et al. analyzed a nationwide database in which TAS-102 demonstrated significantly longer survival than REG (8), while other publications showed insignificant survival between TAS-102 and REG (9,10). Moreover, recent evidences exhibited combination strategy of TAS-102 and REG with longer survival benefits numerically (11,12). However, no comprehensive studies focused on the comparison between TAS-102 combination and REG combination. Given the inconclusive results, we conducted an observational study to compare the oncologic outcomes of TAS-102-containing regimen and REG-containing regimen in patients with mCRC. Patients Patients who were at the age older than 18 years and diagnosed with pathologically proved mCRC from 2015 to 2021 at E-Da Hospital and E-Da Cancer Hospital were retrospectively reviewed. Patients who failed at least 2 line of standard chemotherapy and treated with TAS-102 or REG as later line treatment were enrolled into our study. Standard chemotherapy includes oxaliplatin, irinotecan, 5-fluorouracil, anti-VEGF antibody and anti-EGFR antibody (if RAS wild type). All the patients' basic characteristics were retrieved from medical records. Exclusion criteria were previous history of other cancer, irregular evaluation intervals and lost follow-up. This was a retrospective observational study, which was exempt from requiring consent. This study was approved by the E-Da Hospital Institutional Review Board (EMPR-109-012), and was conducted in accordance with the Declaration of Helsinki. Treatments REG was administered orally with an initial dose of 160mg daily on days 1-21 with 7 days of rest. TAS-102 was administered orally with an initial dose of 35 mg/m2 twice daily for 5 days a week with 2 days of rest for 2 consecutive weeks, followed by 14 days of rest. Both drugs were repeated every 4 weeks. Combination treatment includes anti-VEGF targeted therapy, anti-EGFR targeted therapy, oxaliplatin or irinotecan. Dose modification could be adjusted at physician's discrete based on patients' comorbidities and treatment adverse effects. Computed tomography was evaluated for the treatment response every 2-3 months. The treatments were continued in responding or stable patients until tumor progression, death or intolerable toxicities. 
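Because TAS-102 is dosed at 35 mg/m² twice daily, the prescribed amount depends on body surface area. The sketch below computes a per-administration dose using the Du Bois BSA formula; the choice of BSA formula, the example height and weight, and any rounding to available tablet strengths are assumptions not specified in the text.

```python
# Body-surface-area-based TAS-102 dosing (35 mg/m^2 twice daily, days 1-5 and
# 8-12 of a 28-day cycle, per the regimen described). The Du Bois BSA formula
# and the example height/weight are assumptions used for illustration.

def bsa_du_bois(weight_kg, height_cm):
    """Du Bois body surface area (m^2)."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)

def tas102_single_dose_mg(weight_kg, height_cm, dose_per_m2=35.0):
    return dose_per_m2 * bsa_du_bois(weight_kg, height_cm)

if __name__ == "__main__":
    dose = tas102_single_dose_mg(weight_kg=68, height_cm=165)
    print(f"BSA-based dose: {dose:.0f} mg per administration, twice daily")
```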
Statistical Analysis All the basic characteristic were retrospectively retrieved from a medical chart review and presented with frequencies. Chi-square tests were calculated to analyze the differences between TAS-102 and REG. Statistical analyses were performed using SPSS. Oncologic outcomes were presented with progression-free survival (PFS), overall survival (OS), overall response rate (ORR) and disease control rate (DCR). Progression-free survival (PFS) was measured from the first day of chemotherapy administration until the date of tumor progression or final follow-up, while overall survival (OS) was calculated as the time from the first day of chemotherapy administration until the date of death from any cause or final follow-up. Objective response criteria in the tumors, including complete response (CR), partial response (PR), stable disease (SD), and progressive disease (PD), were evaluated according to the RECIST 1.1 guidelines. ORR was defined as CR plus PR, and DCR was defined by CR, PR, plus SD. Kaplan-Meier curves were depicted for survival. We also conducted Cox regression analysis using "enter" selection to adjust for the effects of potential confounders. All P values were two sided and considered to have significance if P values < 0.05. Patients Characteristics A total of 125 patients were enrolled into our study for oncologic outcomes evaluation with a median follow-up period 20 months. The median age of our patients is 64 years. Baseline characteristics were presented in Table 1. In general, most patients were male in gender (56%) and older than 60 years (70%). The majority of primary tumor location was left side colon (78%). Nearly 90% of our patients had initial stage 3-4 disease, 78% received radical surgery and 62% underwent adjuvant chemotherapy. As for genetic profiles, 62% of patient had all RAS mutant, 98% were B-raf wild type and 99% were MMR proficient. The median time from diagnosis of metastases to enroll into our study was 18.5 months. Most patients received TAS-102 or REG as fourth-line treatment. As for treatment strategies, 64% patients were treated with TAS-102 or REG monotherapy, while the remaining were treated in combination with other agents, including targeted therapy or chemotherapy. After stratified by chemotherapy, 50 patients received TAS-102 and 75 patients received REG for their chemorefractory mCRC. In TAS-102 group, 52% patients received TAS-102 monotherapy and 48% received TAS-102 combination therapy. In TAS-102 combination group, 60% patients was treated in combination with anti-VEGF agents rechallenge and 40% in combination with anti-EGFR agents rechallenge. In REG group, 72% patients received REG monotherapy and 28% received REG combination therapy. In REG combination group, 60% patients were treated in combination with irinotecan rechallenge and 40% were treated in combination with oxaliplatin rechallenge. All basic characteristics including gender, age, primary tumor location, initial stage, previous history, genetic status, time from diagnosis of metastases and number of prior regimens were well balanced between the two treatment arms. Survival Outcomes The oncologic outcomes between TAS-102 and REG were summarized in Table 2. For total population, the median PFS were 3.7 months in TAS-102 and 2.0 months in REG (P = 0.006). The median OS were 9.2 months in TAS-102 and 6.8 months in REG (P = 0.048). The ORR and DCR were 44% versus 20% (P < 0.001) and 72% versus 43% (P < 0.001) in TAS-102 and REG, respectively. 
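The median PFS and OS values above come from Kaplan-Meier estimation. A minimal, hand-rolled sketch of the estimator and the median read-off is shown below on fabricated follow-up times; in practice a statistical package such as SPSS (used in the study) would be preferred.

```python
# Minimal Kaplan-Meier estimator and median survival read-off, illustrating how
# the reported median PFS/OS values are derived. Times (months) and event flags
# below are fabricated; the study performed its survival analysis in SPSS.

def kaplan_meier(times, events):
    """Return [(time, survival probability)] at each event time, ascending."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data[i:] if tt == t and e == 1)
        n_at_t = sum(1 for tt, _ in data[i:] if tt == t)
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= n_at_t
        i += n_at_t
    return curve

def median_survival(curve):
    """Smallest time at which the survival probability drops to 0.5 or below."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # median not reached

if __name__ == "__main__":
    pfs_months = [1.5, 2.0, 2.2, 3.1, 3.7, 4.0, 5.2, 6.6, 7.0, 9.0]
    progressed = [1,   1,   1,   1,   1,   0,   1,   1,   0,   1]
    curve = kaplan_meier(pfs_months, progressed)
    print("median PFS (months):", median_survival(curve))
```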
The survival curves of PFS and OS are plotted in Figure 1. Moreover, all patients were divided according to whether they received combination therapy or monotherapy. With respect to treatment strategy, survival was significantly longer with combination therapy than with monotherapy, in both the TAS-102 and REG groups. The survival curves of PFS and OS for TAS-102, stratified by treatment strategy, are plotted in Figure 2. For patients treated with TAS-102, the median PFS and OS were 6.6 months versus 2.0 months (P < 0.001) and 16.7 months versus 6.5 months (P < 0.001) in the combination and monotherapy groups, respectively. For patients treated with REG, the median PFS and OS were 4.8 months versus 1.8 months (P < 0.001) and 14.5 months versus 4.9 months (P < 0.001) in the combination and monotherapy groups, respectively. The survival curves of PFS and OS for REG, stratified by treatment strategy, are plotted in Figure 3.

Multivariate Regression Analysis
Cox regression analyses of survival for potential prognostic factors were performed. Hazard ratios (HR) with 95% CIs are presented in Table 3.

DISCUSSION
To the best of our knowledge, this is the first study to demonstrate that combination therapy is superior to monotherapy in mCRC patients treated with TAS-102 or REG. Previous literature focused on the comparison between TAS-102 monotherapy and REG monotherapy. The phase 3 RECOURSE trial demonstrated that TAS-102 significantly prolonged survival (5), and the phase 3 CORRECT trial demonstrated that REG also resulted in significantly longer survival (7). PFS was increased by 0.3 months and OS by 1.4-1.8 months. Although these two studies were statistically significant, the absolute survival differences were modest. Moreover, Kuboki et al. conducted a phase 1/2 C-TASK FORCE trial to analyze the efficacy of TAS-102 in combination with bevacizumab (11). That study suggested that the TAS-102 plus bevacizumab combination might become a potential treatment option in chemorefractory mCRC patients, with a median PFS of 5.6 months and a median OS of 11.4 months. Another phase Ib trial, NIVOREGO, showed that the combination of regorafenib plus nivolumab had a manageable safety profile and encouraging antitumor activity in patients with mCRC, with an ORR of 36% and a median PFS of 7.9 months (12). Our study is consistent with these conclusions. For the total population, the median PFS was 3.7 months with TAS-102 and 2.0 months with REG (P = 0.006), and the median OS was 9.2 months with TAS-102 and 6.8 months with REG (P = 0.048). After stratification, survival was significantly longer with combination therapy than with monotherapy, in both the TAS-102 and REG groups. For patients treated with TAS-102, the median PFS and OS were 6.6 months versus 2.0 months (P < 0.001) and 16.7 months versus 6.5 months (P < 0.001) in the combination and monotherapy groups, respectively. For patients treated with REG, the median PFS and OS were 4.8 months versus 1.8 months (P < 0.001) and 14.5 months versus 4.9 months (P < 0.001) in the combination and monotherapy groups, respectively. Multivariate analysis demonstrated that combination therapy was an independent predictor of better survival, for both TAS-102 and REG. Our study therefore indicates that, in chemorefractory mCRC patients treated with TAS-102 or REG, combination therapy rather than monotherapy achieves the best prognosis. Further prospective trials are warranted to confirm our conclusions. Targeted therapy-chemotherapy combinations have been recognized as the optimal regimens in patients with mCRC. You et al.
conducted a comprehensive meta-analysis of 16 first-line clinical trials to evaluate the efficacy of chemotherapy plus targeted therapy versus chemotherapy alone in mCRC patients (13). The meta-analysis suggested that right-sided mCRC patients benefited more from chemotherapy plus bevacizumab than from chemotherapy alone. Arnold et al. also performed a retrospective study comparing chemotherapy plus EGFR antibody therapy with chemotherapy alone in patients with left-sided mCRC (14). That study demonstrated a greater effect of chemotherapy plus EGFR antibody therapy in comparison with chemotherapy alone for patients with left-sided mCRC. Our results are consistent with these conclusions in that survival was significantly longer with combination therapy than with monotherapy, in both the TAS-102 and REG groups: TAS-102 plus targeted therapy had a greater clinical benefit than TAS-102 alone, and REG plus chemotherapy also yielded longer survival than REG alone. Immunotherapy is an emerging treatment, and its indication in mCRC is currently mainly limited to patients with microsatellite instability. Previous studies have also tested immunotherapy combinations in mCRC. Patel et al. conducted a phase 2 trial adding nivolumab to TAS-102 in heavily pretreated patients with microsatellite-stable (MSS) mCRC (15). The results showed that nivolumab plus TAS-102 failed to extend clinical benefit in patients with refractory MSS mCRC; the median PFS was only 2.2 months. Another immunotherapy combination is nivolumab plus REG. Fukuoka et al. conducted a phase Ib trial of regorafenib plus nivolumab for patients with mCRC; the efficacy was promising, with an ORR of 36% and a median PFS of 7.9 months, although further prospective trials are warranted. Taken together, our findings suggest that the optimal treatment strategy for mCRC patients is combination therapy, such as targeted therapy plus chemotherapy, rather than targeted therapy or chemotherapy alone. Current guidelines all list TAS-102 and REG as standard treatments for patients with chemorefractory mCRC (2). However, little is known about their priority and treatment sequence. Several retrospective studies, including that of Chida et al., have compared the oncologic outcomes of TAS-102 monotherapy and REG monotherapy. There are several potential limitations in our work that are inherent to any retrospective study. The chemotherapy regimen and the choice of combination or monotherapy were decided at the discretion of physicians and patients, which may be a major source of bias in this study. In addition, the single-institution setting, small sample size, heterogeneity of our patients and inconsistent follow-up intervals may limit the power of our study. Even so, our study is the first to identify that combination therapy is superior to monotherapy in chemorefractory mCRC patients treated with TAS-102 or REG. Moreover, our study also found that the oncologic outcomes of TAS-102 monotherapy and REG monotherapy did not differ significantly. To date, there are no prospective randomized controlled trials with larger cohorts focusing on the comparison between TAS-102 and REG. Thus, despite being a retrospective study with inevitable selection bias, our study remains clinically valuable.

CONCLUSIONS
Our study investigated the oncologic outcomes of TAS-102 and REG in patients with chemorefractory mCRC. Based on our results, we suggest that combination therapy is superior to monotherapy.
Furthermore, the efficacy of TAS-102 monotherapy and REG monotherapy was consistently similar. In our multivariate analysis, combination therapy was a strong prognostic factor associated with survival. These conclusions are clinically valuable and pave the way for the treatment of chemorefractory mCRC. Further prospective randomized controlled trials are warranted to validate our conclusions.

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the E-Da Hospital Institutional Review Board (EMPR-109-012). Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

AUTHOR CONTRIBUTIONS
M-CH, K-MR, and H-PC wrote the manuscript and performed the clinical/genetic investigation. S-EL, K-WL, C-CC, C-IC, and L-CS performed the clinical/genetic investigation. All authors contributed to the article and approved the submitted version.
v3-fos-license
2018-04-03T02:52:47.994Z
2017-07-25T00:00:00.000
3323077
{ "extfieldsofstudy": [ "Psychology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41531-017-0026-0.pdf", "pdf_hash": "f14ba65553452e6b685f681f2d621e22a6546738", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46350", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "sha1": "1577d4c9c9f12dbc7bf6cb8be701cc39f952fc98", "year": 2017 }
pes2o/s2orc
Validation of the XDP–MDSP rating scale for the evaluation of patients with X-linked dystonia-parkinsonism
X-linked dystonia-parkinsonism (XDP) is a neurodegenerative disorder endemic to the Philippines. A rating scale was developed by the authors under the guidance of the Movement Disorder Society of the Philippines (MDSP) to assess XDP severity and progression, functional impact, and response to treatment in future clinical trials. Our main objective was to validate our new scale, the XDP–MDSP scale. The initial validation process included pragmatic testing with XDP patients, followed by a modified Delphi procedure with an international advisory panel of dystonia, parkinsonism and scale-development experts. Pearson correlation was used to assess the construct validity of our new scale against standard dystonia, parkinsonism, non-motor and functional scales, and to assess divergent validity against behavioral and cognitive scales. The 37-item XDP–MDSP scale has five parts: I-dystonia, II-parkinsonism, III-non-motor features, IV-ADL, and V-global impression. After initial validation, the scale was administered to 204 XDP patients. Inter-domain correlation for the first four parts was acceptable. The correlation between these domains and the global rating was slightly lower. Correlations between Parts I, II, III, and IV and standard dystonia, parkinsonism, non-motor and functional scales were acceptable, with values ranging from 0.323 to 0.428. For divergent validity, a significant correlation was seen with behavioral scales; no significant correlation was noted with the cognitive scale. The proposed XDP–MDSP scale is internally valid, but the global rating subscale may need to be modified or eliminated. While there is convergent validity, divergent validation was successful only for cognitive and not behavioral scales. The frequent co-occurrence of anxiety and depression, and its effect on the motor and functional state, may explain this finding.

INTRODUCTION
X-linked dystonia-parkinsonism (XDP, DYT3, "Lubag", OMIM #314250) is an adult-onset, progressive, neurodegenerative movement disorder first described in Filipino males from Panay Islands in 1975, and so far found only in Filipinos. 1 The nationwide prevalence is 0.31/100,000, but is 23.66/100,000 in Capiz and 7.72/100,000 in Aklan. The mean age at onset of illness is 39.67 years and the mean age at death is 55.59 years. Only 6% of survivors are still able to work, 69% are ambulant but not working due to the profound disability caused by the movement disorders, and 23% are wheelchair-bound or bed-bound. 2 XDP typically manifests initially with focal dystonia (93%), and initial parkinsonian traits are observed in only 5.7%. The condition generalizes within 5 years of onset in 84% of cases, regardless of the initial site of involvement. As the illness progresses from the 7th to the 10th year, the dystonic movements become less severe, with apparent stiffening of the limbs and straightening of the trunk. By the 15th year of illness, the predominant picture is one of parkinsonism manifesting as bradykinesia, masked facies, mumbling speech with drooling, and tremors. 2,3 There are currently no scales specific to XDP, a major impediment to having an objective means of classifying patients and the extent of their disease severity, and of tracking disease progression and response to treatment.
Hence the authors, in cooperation with the Movement Disorder Society of the Philippines (MDSP), developed this scale for clinical and research use. A proposed scale for use in patients with XDP is presented, with the intention of validating it for clinical use. A validated scale will be useful to clinicians who manage patients with XDP and to clinical researchers testing the effects of various interventions, providing uniformity in their assessments.

RESULTS
A total of 204 patients with a clinical diagnosis of XDP were recruited to the study. Cronbach's alpha for the entire five-part scale was acceptable at 0.805. Inter-domain correlation for the first four parts of the scale, measuring four different domains (dystonia, parkinsonism, non-motor features and activities of daily living), was acceptable and significant, with values ranging from 0.434 to 0.671; the correlation between these domains and the last (global rating) was slightly lower but still significant, at 0.319 to 0.447 (Table 1). For pragmatic validity, the average time spent completing all five parts of the scale was 40 min. Inter-rater validation was not pursued once it was determined that only 7 of the 37 items on the scale could be evaluated using the video recordings. For convergent validity, correlation was significant and acceptable, with values ranging from 0.323 to 0.428 (for XDP-MDSP Scale Part I and the BFMDRS, Part II and the UPDRS motor, Part III and the NMSQuest, and Part IV and the SCOPA-ADL) (see Table 2). When testing for divergent validity, there was a trend toward negative correlation between the overall XDP-MDSP scale score and the MMSE, and a significant positive correlation with the HADS-P and HAM-D. If only Parts I, II, and IV of the XDP scale (which do not contain behavioral or cognitive items) are examined, there is a trend of no correlation with the MMSE and HAM-D, but a significant correlation with the HADS-P remains.

DISCUSSION
This is, to our knowledge, the first comprehensive XDP assessment scale to be reported and validated that comprises sections for dystonia, parkinsonism, non-motor features, activities of daily living and global assessment. Although the XDP-MDSP scale takes on average 40 min to administer, it eliminates the need to administer separate dystonia, parkinsonism, non-motor and functional scales, which cumulatively can take longer than 40 min. Also, since not all patients may have all features, the entire scale can often take less time to complete. While Cronbach's alpha for the entire five-part scale was acceptable at 0.805, and the inter-domain correlation for the first four parts of the scale measuring four different domains (dystonia, parkinsonism, non-motor features and activities of daily living) was acceptable and significant, the correlation between these domains and Part V, the global rating, was slightly lower although still significant. This implies that the global impression subscale of the XDP-MDSP scale may need to be modified. The subscale was heavily based on the Clinical Global Impression Scale (CGIS), a one-item, 7-point scale ranging from normal (no disease symptoms) to extremely ill (among the worst disease severity encountered). The CGIS may therefore not be detailed or specific enough when evaluating disease severity in this unique population, which may account for the lower correlation of this domain with the rest of the scale.
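For readers who wish to reproduce the internal-consistency figures reported above, the sketch below shows one way to compute Cronbach's alpha and the inter-domain Pearson correlations, assuming item-level scores are available with one column per item; the file name and the column-naming scheme are hypothetical.

import pandas as pd

def cronbach_alpha(items):
    # items: DataFrame with one column per scale item and one row per patient
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

scores = pd.read_csv("xdp_item_scores.csv")   # hypothetical: 37 item columns named I_1 ... V_1
print(cronbach_alpha(scores))                 # the study reports 0.805 for the full scale

# Inter-domain Pearson correlations between the part subtotals (Parts I-V)
parts = {p: scores.filter(regex=rf"^{p}_").sum(axis=1) for p in ["I", "II", "III", "IV", "V"]}
print(pd.DataFrame(parts).corr(method="pearson"))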
For convergent validity, the correlation between Parts I-IV of the XDP-MDSP scale and their corresponding gold-standard counterparts was significant and acceptable. However, when testing for divergent validity, the desired negative or absent correlation was barely met with the MMSE (a trend was seen) but not with the HADS-P and HAM-D (where significance was noted). Nonetheless, if only items of the XDP-MDSP scale that do not contain behavioral or cognitive content are examined, there is no longer a correlation with the MMSE and HAM-D, but a significant correlation with the HADS-P remains. This is probably because XDP patients have concomitant anxiety and depression. Studies have shown a prevalence of anxiety symptoms of 16.7% and of depressive symptoms between 54.8 and 92.9% among XDP patients. 4 This again serves as a warning that future studies evaluating motor improvement in this population should look carefully at the intervention's effect on behavior, as it can be a significant confounder. The potential weaknesses of this validation study are the lack of controls and our inability to take proactive measures to minimize skewness, floor, and ceiling effects. Since XDP is a rather uncommon disorder, endemic to only certain areas of the Philippine archipelago, we had to simply enroll as many patients as we could without much regard to their disease severity. Nonetheless, to minimize the effects of skewness, we enrolled more patients than is thought to be typical for the scale's length. The distribution and range of scores are also described in our Results section. Moreover, most other recently validated movement disorders scales have also not included controls or emphasized skewness, floor and ceiling effects.

CONCLUSION
The proposed XDP-MDSP scale is internally valid, although the last domain (global rating) should perhaps be modified slightly because of its lower correlation with the other domains. Since the Pearson correlations are acceptable, there is also convergent validity. On the other hand, the significant correlation between the proposed scale and the HADS-P and HAM-D may be because many patients also have concomitant anxiety and depression. Correlation with only those parts of the scale that do not contain behavioral or cognitive items shows partially successful divergent validation. The acceptable internal validity of the scale and its convergent validity make the proposed XDP-MDSP scale valid and acceptable for assessing the severity of the disease as well as the patient's response to treatment. Subsequent studies, such as clinical trials in this population, can make use of the scale to assess the effectiveness of treatments as well as the natural history of the disease. Patient satisfaction with the scale can also be assessed more thoroughly by means of a questionnaire or survey, beyond the pilot testing that was done. This is planned for a future validation phase, which will involve more patients when the scale is used in clinical trials or progression studies.

METHODS
The authors developed the scale based on their collective clinical experience, with the aim of capturing the various phases of the illness and its associated non-motor features, along with a global impression subscale that could be useful in tracking progression and response to treatment. The investigators used the most commonly accepted gold-standard scales in the assessment of the various aspects of XDP. In assessing dystonia, the Burke-Fahn-Marsden dystonia rating scale (BFMDRS) was used. 5
For PD, the Unified Parkinson Disease Rating Scale (UPDRS) motor section was used. 6 For the non-motor symptoms, the Non-Motor Symptoms Questionnaire (NMSQuest) was used. 7 The Short Parkinson's Evaluation Scale/Scales for Outcomes in Parkinson's Disease (SPES/SCOPA) was used to examine motor impairments, activities of daily living, and motor complications. 8 Construct validity was assessed by correlating the different parts of the scale with the following validated scales: BFMDRS, UPDRS motor, NMSQuest and the SCOPA-ADL subscale. The XDP-MDSP scale's internal consistency was assessed by computing the Cronbach's alpha coefficient. To ensure that the XDP-MDSP scale was not unduly influenced by the presence of confounders, such as depression, anxiety and cognitive impairment, divergent validity was assessed by correlating the XDP-MDSP scale with the Hospital Anxiety and Depression Scale-Pilipino (HADS-P), the Hamilton Depression Rating Scale (HAM-D) and the Mini-Mental State Examination. [9][10][11] After the scale was completed, a prospective, cross-sectional validation study was done. The patients were recruited from the investigators' clinics in Manila and from the XDP Clinic in Roxas City, Capiz. Patients with a clinical diagnosis of XDP based on the following were included: male sex, family history of dystonia and/or parkinsonism, an inheritance pattern consistent with X-linked recessive transmission, and any combination or severity of dystonia and/or parkinsonism. We excluded patients whose signs and symptoms could be explained by a diagnosis other than XDP. The initial version of the XDP-MDSP rating scale was first applied to ten XDP patients for initial feedback. The scale was shortened after it was found to be too long and exhausting for both the clinicians and the patients. In addition, certain items found to be vague were clarified and refined. Further content validation was then carried out using the modified Delphi technique. An international advisory panel was formed, composed of five movement disorders experts. Their comments on the scale were individually solicited, and further modifications to the scale were made based on their comments. The technique overcomes the disadvantages of conventional committee action (e.g., censored feedback) by allowing experts to provide relatively anonymous feedback. The revised version of the scale was again administered to a small group of patients to examine whether they understood the individual items. The final XDP-MDSP scale consisted of 37 items divided into five parts to capture all XDP-related symptoms: part I-dystonia, part II-parkinsonism, part III A and B-non-motor symptoms, part IV-activities of daily living and part V-global impression. To achieve optimal analyses, the scale was administered to at least 200 XDP patients, based on an approximate sample size calculation of 37 scale items × 5 = 185 patients plus 15 additional patients to allow for possible dropouts and to minimize, as much as possible, the influence of any unanticipated skewness, floor and ceiling effects. Proper informed consent was obtained prior to the start of the study. Only one rater evaluated each patient. The raters scored patients on each item according to their best judgment as to which score best described the patient, and asked the patients about any doubts or unclear items. The time spent administering the scale to each patient was recorded.
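A minimal sketch of the convergent and divergent validity checks described above, together with the authors' sample-size heuristic, is given below; the data file and variable names are hypothetical, and the analysis is shown in Python although any statistical package would serve equally well.

import pandas as pd
from scipy.stats import pearsonr

d = pd.read_csv("xdp_validation.csv")   # hypothetical: part subtotals plus comparator scale totals

convergent = {"Part I vs BFMDRS": ("part1", "bfmdrs"),
              "Part II vs UPDRS motor": ("part2", "updrs_motor"),
              "Part III vs NMSQuest": ("part3", "nmsquest"),
              "Part IV vs SCOPA-ADL": ("part4", "scopa_adl")}
divergent = {"Total vs MMSE": ("total", "mmse"),
             "Total vs HADS-P": ("total", "hads_p"),
             "Total vs HAM-D": ("total", "ham_d")}

for label, (a, b) in {**convergent, **divergent}.items():
    r, p = pearsonr(d[a], d[b])
    print(f"{label}: r = {r:.3f}, p = {p:.4f}")

# Sample-size heuristic used by the authors: five patients per item plus a dropout buffer
n_target = 37 * 5 + 15   # = 200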
After the XDP-MDSP scale was completed for each patient, the BFMDRS, UPDRS motor, NMSQuest, SCOPA-ADL subscale, HADS-P, HAM-D, and MMSE were also administered. If the patient became fatigued or was unable to communicate because of his disease condition, the examiner allowed time to rest or asked the companion for information regarding the patient's functioning. The raters in the study were the authors, as well as other officers and members of the MDSP, who are all neurologists and experts in movement disorders. All raters underwent training and orientation in the administration of the XDP-MDSP scale and all other scales to be used, in order to clarify the use and description of all instruments and to standardize ratings. The validity of the scale was assessed as follows: for pragmatic validity, the average time for completing the scale by both the clinician and the caregiver or patient was recorded. For inter-rater validity, at least two raters scored the patients based on a video recording made using a standardized protocol. For construct (or convergent) validity, the correlation between the different parts of the XDP-MDSP scale and the BFMDRS, UPDRS motor, NMSQuest and SCOPA-ADL was calculated. For divergent validity, the correlation between the XDP-MDSP scale and the HADS-P, HAM-D and MMSE was calculated. Convergent and divergent validity were tested using Pearson correlation. Some parts were clinician-administered (Parts I, II, IIIA, and V) and others were answered independently by the patient and/or caregiver (Parts IIIB and IV). Ratings were given in whole integers; if a score lay between two values, the rater was advised to use the higher number. The scores for all scales were entered into an Excel file, with appropriate quality control and data processing measures performed. For internal consistency, the Cronbach's alpha coefficient was calculated. The methods were performed in accordance with relevant regulations and guidelines. This study was approved by the Institutional Review Board of the Philippine Children's Medical Center.

Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
v3-fos-license
2018-03-18T17:13:35.856Z
2016-08-15T00:00:00.000
1026018
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://diagnosticpathology.biomedcentral.com/track/pdf/10.1186/s13000-016-0527-x", "pdf_hash": "06ab5748bdb6fea250c573497c3e933d46373098", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46351", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "06ab5748bdb6fea250c573497c3e933d46373098", "year": 2016 }
pes2o/s2orc
Immature teratoma presenting as a soft-tissue mass with no evidence of other sites of involvement: a case report Background Germ cell tumors are tumors composed of tissues derived from more than one of the three germinal layers. They are more common in the testes and ovaries, but can present in many different regions in the midline, including the sacral region, retroperitoneum, mediastinum, and brain. Testicular germ cell tumors generally metastasize to the retroperitoneum, lungs, and brain; metastases to soft tissue are very rare. Case presentation Here we describe a case of a single soft-tissue mass in the thigh of a 27-year-old man, with histology showing areas of mature teratoma tissues derived from the ectodermal and mesodermal lineages, and areas of immature teratoma tissue composed of small undifferentiated cells, with primitive neuroectodermal differentiation foci forming neuroepithelial elements – thus classified as immature teratoma. The patient had no other clinical or radiological evidence of involvement, besides the lymph nodes. Conclusion The case presented suggests a rare and unexpected primary immature teratoma of the thigh. Background Teratoma is a subtype of germ cell tumors (GCT) derived from more than one of the three germinal layers. Teratomas can be classified as mature tumors (cystic or solid), which contain well-differentiated tissues, or as immature tumors, which contain poorly differentiated tissues consisting primarily of embryonic-appearing neuroglial or neuroepithelial components [1,2]. The most commonly involved sites of teratomas are the sacrococcygeal region (57 %) and the gonads (29 %); they occur much more frequently in the ovaries, but can also arise in the testes. In adults, the gonads are by far the most common site of teratomas. Other possible locations are the mediastinum (7 %), retroperitoneum (4 %), cervical region (3 %), and intracranial (3 %) [3]. Uncommon locations are the stomach, heart, pleura, pharynx, thyroid, base of the skull, maxilla, liver, prostate, vagina, and subcutaneous tissues [2,4]. GCTs represent 95 % of testicular tumors developing after puberty, although pure teratomas of the testis are rare (3-5 %). In men, teratomas of the testis developing during and after puberty are always considered to be malignant because of the potential for metastasis, mainly to the retroperitoneal lymph nodes [3]. According to Ghazarian et al. [5], almost 90,000 cases of testicular GCTs in men were registered in the US between 1998 and 2011. The prognosis of teratomas depends on many factors. Mature teratomas are generally benign. Immature teratomas in young children also tend to behave as benign tumors. In patients older than 15 years, immature teratomas can manifest as highly devastating malignancies [1]. Metastases of testicular teratomas to the subcutaneous tissues are very rare. Only two cases of primary testicular teratomas that metastasized to soft tissue (the left thigh and gluteal and iliac muscles, respectively) have been reported in the literature [6,7]. In addition, the literature contains only four descriptions of GCTs arising primarily from structures away from the midline [8][9][10][11]. Here we present the first reported case of an immature teratoma manifesting as a subcutaneous mass in the thigh, with no evidence of other sites of involvement, except the lymph nodes. 
Case presentation A 27-year-old man presented to his city's health service (in the Midwest of the state of Santa Catarina, Brazil) in February 2010 complaining of a nodule in the middle third of the lateral aspect of his right thigh. The mass was removed surgically and sent for pathological examination, which revealed a 105 g piece, measuring 10.5 × 8.5 × 4 cm, covered by white skin. In the central portion of dermis/hypodermis, there was a brownish, lobulated, soft, and friable nodule, measuring 6.5 × 5 × 3.5 cm, partially encapsulated. The deep margin was covered by whitish and smooth muscle fascia ( Fig. 1a and 1b). Histological sections stained with hematoxylin-eosin showed mature and immature teratoma, with presence of areas of mature teratoma with tissues derived from ectodermal and mesodermal lineages ( Fig. 1c and 1d), and areas of immature teratoma composed of small undifferentiated cells (Fig. 1e) with foci of primitive neuroectodermic differentiation forming neuroepithelial elements (Fig. 1f). The prevalence of immature teratoma was estimated to be 70 %. Margins were compromised, and a sample was sent for immunochemical analysis. While waiting for the immunochemical results, the patient was referred to the Oncology Service of Hospital Universitário Santa Terezinha (HUST). He presented at this service in May, 2010 with a hard nodule (diameter 5 cm) in his right thigh, in the same location from which the previous mass was removed. Computed tomography (CT) of the thigh revealed a mass in the lateral aspect of the thigh (Fig. 2a). Findings of chest CT and chest radiography, conducted as part of the workup to assess possible sarcoma, were negative. The testes were also normal, as determined by CT (Fig. 2b). Immunochemical results, available in June 2010, demonstrated positivity for cytokeratin (AE1 and AE3 clones), CD99 and MIC2 (Ewing's sarcoma and 12E7 markers), PS100 (anti-human S-100), Wilm's tumor 1 (WT1, 6 F-H2 clone), desmin (D33 clone), and vimentin (V9 clone), consistent with an immature teratoma. The patient underwent a second surgery to remove the mass. Analysis of the second resection specimen revealed a yellowish, irregular node, partially encapsulated and greasy to the touch, weighing 24 g and measuring 4.8 × 3.8 × 1.8 cm. Surgical margins were free and there was no perineural or angiolymphatic invasion. The patient was scheduled for follow up. In November 2010, the patient presented to the Oncology Service of HUST with a right inguinal node. Ultrasound of the testis and CT of the brain were normal. CT of the pelvis and abdomen revealed enlarged lymph nodes, measuring up to 2.7 cm, in the right inguinal region and retroperitoneum. No other abnormalities were seen. The patient's alpha-fetoprotein (AFP) level was 375.7 ng/ml (normal value < 15 ng/ml, slightly varying among laboratories) and his β-human chorionic gonadotropin (β-hCG) level was 0.1 mIU/ml (normal value < 2,67 mIU/ml, slightly varying among laboratories). At this point, the patient was started on bleomycin, etoposide, and cisplatin, and was followed regularly. The right inguinal node regressed. In May 2011, the patient presented to the Oncology Service of HUST with right inguinal pain. Physical examination revealed no right inguinal lymph node or testicular abnormality. The patient returned in June 2011 with right inguinal enlargement, up to 2.5 cm. Serum levels of AFP and β-hCG were 5.6 ng/ml and 0.1 mIU/ml, respectively. In March 2012, the patient presented with pain in his right thigh. 
His AFP level was 61.4 ng/ml and his β-hCG level was 4.8 mIU/ml. CT of the thigh revealed recurrence of the mass. Right inguinal node enlargement was also noticed. In May 2012, the patient returned to the Oncology Service of HUST with a palpable node in his left breast, in addition to the inguinal and thigh nodules. In June 2012, his AFP level was 1,101.8 ng/ml and his β-hCG level was 1.5 mIU/ml. The patient was started on palliative chemotherapy with the VIP (vinblastine, ifosfamide, and cisplatin) protocol. By the end of June 2012, after the first cycle of VIP, the patient developed neutropenic fever, septic shock and acute kidney injury, and ultimately died as a consequence of these complications.

Fig. 2 a Axial CT image of the inferior extremities shows a muscle-density mass in the lateral aspect of the right thigh, measuring 38 × 20 mm (arrow). b Axial CT image of the perineum at the level of the testes shows the lack of disease in these structures.

Discussion
The case presented in this article is very rare; it is the first reported case of an immature teratoma in the subcutaneous tissue of the thigh with no evidence of a primary tumor site. As this case is unusual, many questions arise. The first question is whether the tumor arising in the subcutaneous tissue was a teratoma rather than a sarcoma. Our answer is that the immunochemical findings were clear and conclusive. The positivity for all immunochemical markers and the elevation of characteristic serological tumor markers leave no doubt about the histologic type of the tumor. Common immunochemical markers for sarcoma are vimentin, keratin, desmin, leucocyte common antigen, and S100 [12], but not CD99 and MIC2, which characterize primitive neuroectodermal components, a hallmark of immature teratomas [13,14]. The second question is how an immature teratoma can first present in the subcutaneous tissue of the thigh, with no evidence of a primary tumor. We propose two theories to explain this. First, an undetected primary tumor may have been unable to grow in its original site, leading to clinical evidence of only metastatic disease. Second, the thigh may have been the primary site of the tumor. We would like to emphasize the absence of findings on our patient's brain, chest, pelvic, and abdominal CT, with positivity only in the retroperitoneal and inguinal lymph nodes. In addition, ultrasound of the testes was normal, and physical examination of the four extremities and neck was also normal. We found in the literature two reported cases of immature teratomas with soft-tissue metastases, interestingly, to the thigh and the gluteal region [6,7]. Both cases involved clear primary gonadal tumors, in contrast to our case. We also found descriptions of four cases of GCTs arising outside of the midline, without evidence of a primary tumor; the authors considered these GCTs to be primary tumors [8][9][10][11]. One of these reports describes a malignant mixed GCT in the soft tissue of the right arm of a 37-year-old man, with no other sites of involvement; in that case, immunochemistry was not performed, and serum markers for GCT were within normal limits [8]. Another report describes a malignant teratoma in the left proximal humerus of a 14-year-old girl; here too, no other sites of disease were found [9], and the immunochemical findings were consistent with a GCT.
An extragonadal malignant teratoma of the foot [10] and an intraosseous teratoma of the ilium [11], both without evidence of other sites of involvement that could suggest a primary tumor, have also been described. Teratomas originate from germ cells, which first appear in the endoderm of the yolk sac and then migrate to the genital ridges, through the wall of the midgut, during the fifth week of gestation. Abnormal migration of germ cells in the intrauterine period can lead to GCTs in extragonadal locations. Our patient had normal, normally located testes. Although ectopic testes can be found in the medial thigh [15], we found no evidence in the literature that they can be located in the lateral aspect. We thus assume that the tumor in this case was not gonadal in origin. The lower limb begins to grow in the fourth week of embryonic development, arising from the sacral region opposite the fifth lumbar and first sacral somites. At the 6-9-mm stage, in approximately the fourth gestational week, the limb bud lengthens and the base extends toward the sacral myotomes [16][17][18]. The sacrococcygeal region, from which the lower limb arises, is one of the most common locations for immature teratoma development, especially in infants. We therefore speculate that, in this case, germ cells in the sacrococcygeal region became trapped and followed the lower limb during its development. During the course of the disease, our patient developed inguinal and retroperitoneal lymph node enlargement. This drainage route is consistent with dissemination from the thigh, first to the superficial and deep inguinal nodes and then to the external iliac and aortic nodes. Lymphatic drainage of the testes occurs first to the interaortocaval and left para-aortic lymph nodes, just below the renal vessels (as classically seen in metastatic GCT of the testes). Thus, we hypothesize that the lymph node metastases in this case likely originated from the thigh [19][20][21]. If we assume that the mass in the thigh was secondary to an occult primary tumor (e.g., testicular or retroperitoneal), it likely developed through hematogenous dissemination (as retrograde lymphatic dissemination is very unlikely), although this is uncommon for a GCT. Bilici et al. [7] reported a case of a stage IA immature teratoma of the testis that was treated surgically. The tumor relapsed years after treatment with, among other lesions, a mass in the thigh that was proven histologically to be an immature teratoma. In that case, however, the patient also had multiple lung, liver, mediastinal, and brain metastases, rather than the single metastasis that characterizes our case [7]. Soft-tissue metastases of solid tumors are generally uncommon; they usually occur in the setting of advanced, relapsed malignancy [7,21]. On the other hand, Damron and Heiner stated that metastatic soft-tissue masses present most commonly before, or concomitantly with, the primary malignant sites [22]. Contrary to that statement, in this case we have a single soft-tissue mass, which could hardly represent a metastatic mass, since no evidence of another site of involvement was found.

Conclusions
The case presented here is challenging and unique. Neither of the hypotheses that we have developed to explain it (either a soft-tissue metastasis as the initial presentation of an immature teratoma arising from an unknown primary site, or a primary immature teratoma arising in the thigh from germ cells abnormally sequestered in a location never previously described) matches evidence in the literature.
The first hypothesis, a single metastasis to soft tissue with no evidence of disease in any organ except the lymph nodes, could be considered more probable, given the premise that GCTs do not arise outside of the midline. We found only two reported cases in which teratomas spread to soft tissue, but definite primary sites were identified in both cases. In the present case, the testes may have been the primary site of the tumor; after a single metastasis to the subcutaneous tissue of the thigh, the original lesion may have undergone spontaneous necrosis and was no longer clinically evident. The second hypothesis, that the soft-tissue mass was primary, is supported by four other described cases of GCTs outside the midline with no evidence of any other disease site. Attention should be paid to similar cases in the future, to achieve a better understanding of the behavior of GCTs, especially teratomas.
v3-fos-license
2019-12-02T23:42:38.677Z
2019-12-01T00:00:00.000
208539039
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://nutritionj.biomedcentral.com/track/pdf/10.1186/s12937-019-0506-7", "pdf_hash": "de1432e4d4314a4750095e44105f61cea0de9faf", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46354", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "de1432e4d4314a4750095e44105f61cea0de9faf", "year": 2019 }
pes2o/s2orc
Assessing the nutritional needs of men with prostate cancer Background Nutrition is important for prostate cancer (PC) survivorship care to help achieve a healthy weight, reduce treatment side effects and reduce the risk of developing other chronic diseases. We aimed to advance the understanding of the nutritional needs of men with PC and services that could be potentially implemented to address them. Methods We conducted a needs assessment of nutrition services for men with PC drawing on four perspectives; 1) patient evaluation of a nutrition education session in British Columbia (BC), 2) survey of BC health professionals, 3) an environmental scan of existing nutrition services across Canada and 4) a scoping literature review. Results Patients expressed a need for more nutrition information and a desire for additional nutrition services. More than 60% of health professionals believed there is a need for more nutrition services for men with PC, and reported the focus should be on weight management or management of PC progression. The environmental scan revealed few existing services for men with PC across Canada, most were inclusive of multiple cancers and not tailored for men with PC. Eighteen completed studies were identified in the scoping literature review. The majority provided combined diet and exercise programs with various formats of delivery such as individual, group and home-based. Overall, 78% of studies reported improvements in one or more of the following measures: dietary intake/ diet quality, body composition, self-efficacy, quality of life, fatigue, practicing health behavior goals and physical function/ exercise. Four studies assessed feasibility, adherence or satisfaction with all reporting positive findings. Conclusion Despite the high prevalence of PC in Canada, and the perceived need for more support by patients and health professionals, there are limited nutrition services for men with PC. Evidence from the literature suggests nutrition services are effective and well-accepted by men with PC. Our findings define a need for standardized nutrition services for men with PC that assess and meet long term nutritional needs. Our findings also provide insight into the type and delivery of nutrition services that may help close the gap in care for men with PC. Background In 2018, an estimated 1.3 million men were diagnosed with prostate cancer (PC) worldwide [1]. PC is the second most common cancer in men, and is particularly prevalent among developed countries [1]. Over the past decade there have been substantial improvements in cancer care including earlier detection, management and treatment leading to declining PC mortality and a 5-year relative survival rate of 95% [2]. Supportive care for men with PC and their families is important since many men may face physical, psychological and psychosocial challenges following a PC diagnosis and therapy for sexual and urinary dysfunction, fear of cancer reoccurrence and development of other chronic diseases [3]. Due to enhanced PC care, men are more likely to die from cardiovascular or respiratory diseases, rather than from PC itself [4][5][6][7]. Diet also plays an important role in the prevention of chronic diseases (e.g. diabetes, cardiovascular disease and osteoporosis) that men with PC can be at increased risk of developing, particularly with the use of androgen deprivation therapy (ADT) which is associated with increased fat mass and insulin resistance [8][9][10]. 
For instance, a study of over 73,000 men with prostate cancer found those treated with ADT had a 44, 16, 11 and 16% increased risk of developing diabetes, coronary heart disease, myocardial infarction and sudden cardiac death, respectively [9]. High levels of obesity are prevalent among cancer survivors, often at similar levels as the general population [11]. Most studies also report less healthy diets among male cancer survivors, including lower fruit and vegetable intake compared to breast and uterine cancer survivors [12,13]. As a result, the American Cancer Society's Prostate Cancer Survivorship Care Guidelines emphasize the need for nutrition as a part of survivorship care for men with PC [3]. Furthermore, men diagnosed with PC may be motivated to make dietary changes after a cancer diagnosis [14]. A large body of evidence shows that lifestyle change, including dietary modifications are feasible for men with PC [15], may minimize side effects of PC treatment [4], help achieve a healthy body mass index [16][17][18][19], and improve health related quality of life [20]. Studies report that men with PC value access to dietary advice and use dietary change as a way to regain control over their diagnosis and to enhance survivorship [21,22]. Together, this evidence highlights the importance of delivering nutrition services for men with PC to provide the knowledge and support needed to make lifestyle changes to improve quality of life and minimize the effects of PC and related treatment. Despite the need for nutrition in supportive PC care, there are barriers to delivering nutrition services [23]. This may be particularly true in a public health care system. In Canada, many hospitals prioritize access to nutrition services due to limited funding and thus access to dietitians. For example, at many provincial cancer agencies, patients are referred to a dietitian only if they meet criteria for malnutrition which is generally based on a history of recent weight loss or if patients experience severe side effects of cancer treatment that impact dietary intake, both of which are uncommon among men with PC [24]. Several Canadian provinces operate free telephone-based services supported by provincial government that connect people with dietitians [25 -28]. However, there are a limited number of dietitians with expertise in oncology and they provide services for all types of cancers (e.g. there is one oncology dietitian at HealthLink BC that serves the 4.6 million people in British Columbia (BC)). As a result, within a public healthcare system, access to nutritional services is often limited for men with PC. Indeed, there is no standard nutrition education program for PC across BC. This, along with the evidence from the published literature, suggests there is a missed opportunity to support the needs of men with PC, and encourage lifestyle change to improve PC outcomes and overall health. New healthcare programs that draw on additional sources of funding, may be a means to provide more comprehensive support to patients. The Prostate Cancer Supportive Care (PCSC) Program [29], that includes education on lifestyle changes for diet and exercise, was established in 2013 and is now being expanded to cancer centres across BC in partnership with the provincial cancer agency (BC Cancer). Currently, diet and nutrition information are provided in the PCSC Program through a single group education session. However, it is unlikely that this education session alone adequately meets the needs of men with PC. 
An understanding of the nutritional needs of men with PC, nutrition services available to men with PC in other healthcare settings and services that could be implemented from the evidence base is critical for informing supportive care delivery within the PCSC Program and supportive care programs more broadly. We conducted a needs assessment of nutritional services for men with PC that drew on four areas; 1) patient evaluation of the current PCSC nutrition education session, 2) a health professional survey regarding nutritional services, 3) an environmental scan of existing nutritional services for men with PC in Canada, and 4) a scoping literature review of nutrition services for men with PC. Methods Evaluation of the PCSC Program's "Nutrition for Prostate Cancer Patients" education session To provide patients' perspectives on nutrition services, we analyzed existing patient feedback on the PCSC Program's nutrition education session. All men diagnosed with PC and their families from the greater Vancouver area were eligible to attend the session after registering with the PCSC Program. The 2-h session was delivered at Vancouver General Hospital in BC by a Registered Dietitian from BC Cancer in a lecture-style format and included time for questions and answers. The topics included nutrition and exercise guidelines for cancer survivors [3], plant-based foods, dietary patterns, obtaining nutrients from foods not supplements, Eating Well with Canada's Food Guide [30], and an overview of diet and dietary supplement intervention studies in PC [31]. Immediately after the session concludes, attendees were asked to complete an anonymous evaluation form consisting of five close-ended questions related to satisfaction and open-ended additional comments. As forms are anonymous, we did not collect demographic or clinical information from participants. BC Health professional survey We developed a survey to seek perspectives on the perceived need for nutrition services from health professionals in BC (urologists, radiation oncologists, medical oncologists, and registered dietitians) caring for men with PC. BC-based researchers with expertise in PCrelated nutrition research were also invited to take part. The survey consisted of six questions that asked about the importance of nutritional services for men with PC, the content of such services, and how and when these services should be delivered. The survey was developed for this research study and is presented in Additional file 1. BC Health Professional Survey Questions. The questions were pilot tested and refined for clarity before being distributed to the larger sample. Purposive sampling was used to identify 56 health professionals who were asked to complete the survey via email within a 12-week period (May 2017 to August 2017). Reminder emails were sent to encourage participation. Environmental scan of existing nutritional services for men with PC To understand existing nutrition services available for men with PC in Canada, we conducted an environmental scan of services. Appropriate resources were those that provided information on services of interest defined a priori as those directly related to nutrition or diet that were provided to men with PC at any point in care. This included nutrition education sessions, cooking classes, individual counselling and online or telephone services that are available to men with PC. The predominant search strategy was to contact each of the ten provincial cancer agencies that provide cancer services for Canadians. 
We asked personnel at the cancer agencies to share information on other relevant nutrition services outside of their organization. Contact with identified organizations was attempted through email or telephone up to three times. This was supplemented by consulting a BC Cancer resource librarian about existing nutrition services, and by an online search using the keywords "prostate cancer" and "nutrition" or "diet" and "services" in the Google search engine, with results limited to English-language Canadian organizations. The search was inclusive of services up to August 2019. A data collection form was used to collect information on the scope of the program, the target audience and the mode of delivery of nutrition services (available in the Additional file 1. Environmental Scan Data Collection Form).

Scoping literature review
We conducted a scoping literature review to describe best practices and research related to delivering nutritional services to men with PC, including men with a history of PC and those with a current diagnosis. The PRISMA checklist was used to define the population and guide the search and data collection strategies. A reference librarian at the University of British Columbia guided the database selection and search strategy, which relied on the Embase, Medline and CINAHL electronic databases to identify relevant articles. MeSH terms and keywords included variations of the terms "prostate cancer" and "nutrition services", as detailed in the Additional file 1. We reviewed the references of articles to identify additional articles not captured by the initial search. We also searched Clinicaltrials.gov to identify ongoing and planned studies that included nutritional services. Articles were included in the review if they met the following inclusion criteria: 1) participants were men with a PC diagnosis, either exclusively or as a proportion of the overall study population; 2) the article studied a nutrition service or an education program; 3) the service or program was provided after PC diagnosis; 4) the article was in English; and 5) the article was published within the past 10 years (between 2007 and 2018). Articles were excluded if there was insufficient information on the nutrition service or program to permit extraction of key information. We used Mendeley referencing software version 1.17.9 to upload search results and remove duplicate articles. The primary reviewer conducted a title and abstract review of each article followed by a full-text review to select articles for inclusion. A secondary reviewer repeated this process on a random 25% subset of articles. The agreement between the first and second reviewer was 97% for the title and abstract screening of articles from the initial search, and 57% for the full-text screening of articles included from the title and abstract screening. Because of this discrepancy, the reviewers met in person to re-review the screening process, discuss articles in disagreement and identify the source of the disagreements. The full-text review was subsequently repeated and agreement was reached for all articles. From each article, we abstracted summaries of key points, the type of service provided (diet and exercise; nutrition; diet and mindfulness; clinical program) or delivery of nutrition services (home-based, group education, individual counselling, other), findings, and study limitations and strengths. The data abstraction form is available in the Additional file 1.
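As an illustration of how the screening agreement for the double-reviewed 25% subset could be quantified, the sketch below computes simple percent agreement (the metric reported above) and, as an additional chance-corrected metric not used by the authors, Cohen's kappa; the inclusion/exclusion labels are hypothetical.

from sklearn.metrics import cohen_kappa_score

# 1 = include, 0 = exclude; one entry per double-screened article (hypothetical labels)
reviewer_1 = [1, 0, 0, 1, 1, 0, 1, 0]
reviewer_2 = [1, 0, 1, 1, 1, 0, 0, 0]

agreement = sum(a == b for a, b in zip(reviewer_1, reviewer_2)) / len(reviewer_1)
kappa = cohen_kappa_score(reviewer_1, reviewer_2)   # chance-corrected agreement
print(f"percent agreement = {agreement:.0%}, Cohen's kappa = {kappa:.2f}")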
Analysis
Descriptive statistics are reported as means for continuous variables and as numbers and percentages for categorical variables for all four aims: the patient evaluation, the health professional survey, the environmental scan and the scoping literature review of nutrition services for men with PC. Comparisons of categorical variables were performed using chi-squared tests, while comparisons of continuous variables were performed with t-tests. Fisher's exact tests were performed when the assumptions for chi-squared tests were not met. Significance was determined at p < 0.05. Analyses were performed with R version 3.4.3 (R Foundation for Statistical Computing, Vienna, Austria). Analyses specific to each study approach are described below.

Evaluation of nutrition education sessions
Questions with missing data (non-response) were excluded from the analysis (N = 3, 13, 45, 27, and 6 participants for questions one through five, respectively). Responses were summarized for all respondents as well as separately for patients and partners. Differences in responses between patients and partners, and between patients who attended sessions with a partner and those who attended alone, were compared using chi-square tests or Fisher's exact tests for yes/no questions and two-sample t-tests for Likert-scale questions. Open-ended questions were analyzed using deductive and realist qualitative methods [32]. Deductive analysis uses qualitative data to identify themes specifically related to the research question, and a realist approach describes the experience and reality of participants as reported in the data [32]. Responses were coded as key words and organized by emerging themes that were decided and summarized based on the frequency of occurrence. Themes and the total number of responses for each theme are reported and used to provide more detail and richness to the data.

BC Health professional survey
Question three, which asked about perceived demand for nutritional support for PC patients, had one non-response, which was excluded from the analysis; all other questions were complete. Responses for each question were summarized as proportions and compared across professions (physicians, radiation oncologists, medical oncologists, registered dietitians and researchers). Responses of dietitians were compared with those of physicians (urologists, radiation oncologists and medical oncologists together) using chi-squared tests and Fisher's exact tests. Each question had the option to provide an open-ended response ('other, specify'). These responses were thematically analyzed in the same manner as the nutrition education session.

Environmental scan and scoping review
Information was categorized based on common themes including type of service/intervention, target audience and, for the characterization of existing services, geographic region.

Nutrition session evaluation
Between November 2013 and September 2016, 14 nutrition education sessions were delivered and 207 completed evaluation forms were collected from 135 men with PC and 72 partners. Evaluation responses are shown in Table 1. All participants reported that the session was easy to understand. The majority (88%) reported that the session was an appropriate length, and 87% did not feel there was any information missing from the session. When asked about the inclusion of their partners in the education session, 94% of participants (patients, partners and patients who did not attend with a partner) agreed that inclusion was useful.
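The analyses above were run in R; the sketch below shows Python equivalents of the chi-squared, Fisher's exact and two-sample t-tests used to compare patient and partner responses, with hypothetical counts supplied purely for illustration.

import numpy as np
from scipy.stats import chi2_contingency, fisher_exact, ttest_ind

# Yes/No question: rows = patients vs partners, columns = yes vs no (hypothetical counts)
table = np.array([[120, 12],
                  [ 66,  5]])
chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)     # fallback when expected cell counts are small

# Likert-scale question (question 5): two-sample t-test on hypothetical ratings
patients = [4, 3, 4, 4, 3, 4]
partners = [4, 4, 3, 4, 4, 3]
t_stat, p_t = ttest_ind(patients, partners)
print(p_chi2, p_fisher, p_t)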
(Comparisons in Table 1 between patients and partners, and between patients with and without a partner, used chi-square tests for questions 1-4 and two-tailed t-tests for question 5; a missing p-value indicates an inadequate sample size for analysis.) Overall, 63% of participants found the session very beneficial, with a mean rating of 3.6 (SD = 0.52) out of 4 for both patients and partners. Responses between patients and partners, and between patients who attended with their partner and those who did not, were similar for all questions with the exception of question four, which asked if the inclusion of partners was valuable. The vast majority of patients (92%), partners (99%) and patients with partners (98.5%) responded 'yes', and thus cell counts for 'no' were too few for comparison. Thematic analysis of the qualitative data from the evaluation forms indicated a high level of satisfaction with the session. Other themes that emerged from the comments suggested that inclusion of partners was useful for processing the information presented during the session (n = 16, 30% of comments), helpful for implementing and supporting dietary changes at home (n = 34, 63% of respondents), or both (n = 5, 9%). Comments identified perceived information gaps in the nutrition education session among 27% of respondents, such as the need for more information on the role of specific dietary components (e.g. sugar, genetically modified food, animal protein, and supplements), as well as individual concerns among 12% of respondents (e.g. food sensitivities, diet for specific PC treatments), and practical meal planning tips and suggestions. Several participants (12%) also indicated that they would like access to further nutrition services, and 5% further specified that they wanted access to one-on-one dietary counselling. BC health professionals survey The survey was sent to 56 health professionals in BC including 41 physicians (urologists (n = 24), radiation oncologists (n = 12), medical oncologists (n = 5)), registered dietitians (n = 13) and researchers (n = 2). Thirty-eight healthcare professionals responded to the survey, for a response rate of 68%. The questions and response rates by each profession are shown in Additional file 1: Table S1 (BC health professional survey response rate across professions). The majority of health care professionals that responded (85%) reported that men with PC have expressed nutrition-related concerns to them. Over 60% of health professionals agreed that men with PC are in need of more nutritional support, 16% reported that current nutrition services are sufficient and 24% responded with "other". Of those who responded with "other", nearly half stated they were unaware of current nutritional services for men with PC. One reported that men require specific food and nutrition skills for PC. The remaining believed there was not a high demand for nutrition information among men, that there were sufficient services but men were unaware of current programs, or that nutrition support was not as critical for men with PC compared with other cancers. There were no significant differences between responses from physicians and dietitians when asked about the interest of men with PC in, and their need for, nutrition services. Most health professionals responded that nutrition services should focus on weight management (61%) and reducing the risk of PC progression (53%).
However, there was inter-professional disagreement with respect to the focus of nutrition services -90% of registered dietitians believed services should focus on reducing the risk of PC progression, whereas 75% of physicians believed the focus should be on weight management. The majority (67%) responded that nutritional services should be available to men multiple times after diagnosis through consecutive group education sessions (66%), online resources (66%) or individual nutritional counselling with a registered dietitian (50%). The overall themes that emerged from the open-ended questions suggested that 1) nutrition services should be available in different forms to facilitate individual needs (n = 12, 34%) and 2) there is a need for more nutrition services and organizational capacity to deliver services (n = 8, 23%). Health professionals specified a lack of funding (n = 2, 6%) and dietitian resources (n = 1, 3%) as barriers to implementing nutrition services for men with prostate cancer. A summary of the thematic analysis is available in the Additional file 1: Table S3. Summary of themes from the health professional survey. Environmental scan of existing nutritional services for men with PC A total of 30 organizations that provide nutritional services across Canada were identified. Of the organizations identified, 22 provided information on their services. Six of the 22 (27%) organizations offered nutritional services specifically targeted to men with PC. Of these, four offered group education sessions and three offered online services (nutrition videos and online portal with nutrition information). The PC-specific group education sessions were offered through organizations in BC (n = 2), Alberta (n = 1) and Nova Scotia (n = 1). The content focused on diet modifications for different PC treatments, review of diet and PC evidence and promotion of a healthy diet. Six nutritional guides for men with PC were identified through the online search, three from national agencies, two from BC Cancer and one from a cancer centre in Ontario. The remaining 16/22 organizations offered nutrition services for cancer patients including but not specific to men with PC. A variety of services such as cooking classes, group nutrition education sessions, online resources, dietitian telephone services, wellness retreat and individual counselling were available through these organizations. Scoping review The literature search identified 1030 articles and interventions through databases (n = 999), references lists (n = 25) and www.clinicaltrials.gov (n = 6). After the title and abstract review, 33 articles and interventions were included in the final scoping review. Of these articles, 27 were primary studies while the remaining six articles were secondary analyses of the primary studies (Table 2). An exception to the inclusion criteria of publication in the last 10 years was made for four studies, two of which provided details on study design and methodology needed to assess studies captured in the search. The other two were included as they represented a significant contribution to the body of evidence that was not captured in more contemporary studies. Of the 27 studies identified in the review, 18 were complete and had published their findings ( Table 2). The remaining nine studies were in progress at the time of the literature review. 
Of the completed studies, 78% reported improvements in one or more of the following measures: dietary intake/diet quality, body composition, self-efficacy, quality of life, fatigue, practicing health behaviour goals, and physical function/exercise. Christy et al. was the only study that measured the long-term impact of nutrition services and found sustained improvements in diet quality, energy intake, and saturated fat intake 2 years following a 10-month home-based diet and exercise intervention [17]. All studies that examined qualitative outcomes (n = 4), such as feasibility, adherence and participant satisfaction, reported positive findings [38,40,46,49]. Common facilitators of dietary behavior change and/or patient satisfaction were group education [1,42] and peer support [46,59]. Other studies emphasized the importance of individualized nutrition services [9,50,52]. Common limitations included the use of self-reported measures, healthy participant bias, small sample size, short follow-up period and lack of generalizability to the overall PC population. Our interpretation was limited by studies of participants with various cancer types (e.g. breast, colorectal, prostate) that did not provide cancer-specific results; this limits our understanding of the effect of these interventions on men with PC.
Table 2 (excerpt). Mode of delivery, type of service, study design, aim and key findings of included studies:
- At 6 months the intervention group improved diet quality and self-efficacy to exercise, but these changes were not maintained post-intervention.
- Diet and exercise; longitudinal survey. Aim: to assess the use of a diet and exercise tool-kit in improving well-being for men with PC on ADT. Findings: at completion the intervention arm had significant improvements in QoL, fatigue and exercise tolerance; only the improvements in fatigue and exercise tolerance were sustained at 6 months.
- Bourke [40]: group education; diet and exercise; RCT. Aim: to assess the feasibility of a tapered supervised exercise and diet intervention for men with PC on ADT. Findings: the intervention arm showed significant improvements in dietary intake, exercise and fatigue; however, there were high attrition rates at 6 months.
- Bourke [41]: group education; diet and exercise; RCT. Aim: to qualitatively evaluate a tapered exercise and diet intervention for men with PC on ADT. Findings: participants reported benefiting from the intervention both physically and psychologically; they reported benefits from dietary education but found adherence to guidelines difficult.
- Carmody [42]: group education; diet; RCT. Aim: to assess whether dietary interventions can improve dietary intake, QoL and PSA velocity in men with PC. Findings: the intervention arm demonstrated improvements in diet quality/intake and QoL; no significant changes were observed for PSA velocity.
- Diet and mindfulness; pre-post. Aim: to measure changes in dietary intake and PSA velocity following a dietary and stress reduction intervention in men with recurrent PC. Findings: participants significantly improved dietary intake, but no significant reduction in PSA rise was observed.
- Davison [44]: group education; diet; pre-post. Aim: to measure the impact of a nutrition intervention on calcium and vitamin D intake for men with PC on ADT. Findings: no significant increase in calcium and vitamin D through dietary intake, but significant increases through supplement intake.
- Hebert [45]: group education; diet and exercise/diet; RCT. Aim: to assess the effects of lifestyle interventions on PSA levels. Findings: the intervention arm did not show any significant changes in PSA levels.
- Ferguson [46]: group education; comprehensive program; implementation study. Aim: to evaluate the implementation and early impact of a nurse-led survivorship program for men with PC. Findings: the program had high participation rates with 90% attendance; feedback from participants suggests high user satisfaction and reported QoL improvement.
- Baguley [47]: individual counselling; diet; RCT. Aim: to assess the efficacy of a Mediterranean-style nutrition intervention on cancer-related fatigue and QoL in men with PC on ADT. Findings: improvements in QoL and fatigue were not significant, but the intervention arm did demonstrate significant changes in weight.
- Baguley [48]: individual counselling; diet and exercise; RCT. Aim: to assess the efficacy of high-intensity interval training in addition to nutrition therapy on cancer-related fatigue in men with PC on ADT.
- Comprehensive program; pre-post. Aim: to assess adherence to a multidisciplinary clinic for men with PC on ADT and whether the intervention can lessen the metabolic impacts of ADT. Findings: participation and adherence to the clinic were high, with 95% adherence; the metabolic impact of ADT was minimal during the intervention.
- Diet and exercise; pre-post. Aim: to assess the impact of a nutrition and exercise intervention on body composition and physical function in frail men with PC on ADT. Findings: after 3 months of ADT, participants had an improved timed-up-and-go test while other measures of physical function and body composition remained stable.
- Focht [51]: individual counselling; diet and exercise; RCT. Aim: to assess the feasibility and preliminary efficacy of implementing a group-mediated cognitive behavioural lifestyle intervention for men with PC undergoing ADT. Findings: N/A.
- Chan [46]: other; diet and exercise; program summary. The purpose of TrueNTH is to create an international partnership and develop interventions to improve the physical and mental well-being of PC survivors; Canada, the U.S.A., Australia and the U.K. are developing lifestyle interventions and programs for men with PC and will evaluate implementation approaches.
- Cosby [52]: group education; diet; pre-post. Aim: to evaluate the effectiveness of, and satisfaction with, a weekly diet and PC group education session in meeting information needs and promoting healthy body weights. Findings: significantly improved nutrition knowledge post-session; participants reported high satisfaction rates, usefulness of the information, the importance of the information, and the value of group learning.
Discussion To our knowledge this is the first report to provide a multifaceted approach to capture the need for nutritional services focused on men with PC. This approach is critical for informing and delivering evidence-based health services. Findings from multiple perspectives suggest that men with PC have an unmet need for nutritional information during supportive care. For instance, a need for additional nutrition services and support was identified by respondents to the BC health professional survey, few services for men with PC exist, and even among those with access to nutrition education there was an indication of wanting more support. However, findings also indicate that survivorship clinics and cancer care centres vary widely in the range of nutritional services provided to men with PC, and few are specific to PC. Although nutrition interventions were generally effective, the nutrition interventions for men with PC published in the literature are heterogeneous with respect to design, mode of delivery and content, making it difficult to identify best practices. Standardized approaches would facilitate discovery and potentially the implementation of effective PC care; however, the complexity of such an undertaking would be substantial. Each of the four approaches taken in this study provided important findings that can inform supportive care programming. The evaluation forms from the nutrition education session demonstrated overall high satisfaction with the PCSC Program's existing in-person group-based educational session. Together with the qualitative feedback expressing a desire for more nutrition-related information, this demonstrates a patient-perceived need for nutrition services among men with PC. A previous study among recently treated men with PC also reported that group education on general PC knowledge, including nutrition, was well accepted and resulted in increased … [61]. Limitations to the group-based format included the need for more personalized and additional nutrition information. The need for more information is echoed in the literature, including a study by Taylor et al. [62] that reported more than 50% of cancer patients would like more information on managing illness. However, providing additional nutrition supportive care for men with PC within a public healthcare setting is a challenge, especially given that this requires a proactive rather than reactive approach, as patients generally are not malnourished. Findings from the other perspectives studied herein may provide direction moving forward. The findings from the BC health professional survey align with a comprehensive review of the educational needs of cancer patients by Rutten et al. [63], who reported that cancer-specific and treatment-related information (nutrition is relevant to both) was required continually through diagnosis, treatment and post-treatment. Of note, there were differences between professions with respect to the content of nutrition services. Dietitians indicated that information should focus on the role of diet in reducing the risk of PC progression, while medical oncologists, radiation oncologists and urologists believed that diet for weight management was the most important topic. We speculate that this difference in opinion may reflect the questions commonly posed to dietitians by patients, which is supported by the qualitative feedback on the nutrition education session that requested more information on different dietary components and prostate cancer; notably, there was no mention of more information for weight maintenance purposes. The focus on weight management by oncologists and urologists may reflect the relevance of weight management in clinical practice, since increased body weight is linked to an increased risk of future prostate cancer mortality in those who are cancer free and to an increased risk of biochemical relapse after primary therapy [64,65]. In addition, weight gain is a known side effect of ADT that can be associated with metabolic syndrome and have downstream adverse effects on health [66]. Further work is needed to understand the differing perspectives among health professional disciplines to support the development of need-driven nutrition services. Meeting the nutritional needs of men with PC in a busy healthcare setting is a challenge that is likely reflected by the small number of existing services specific to PC in organizations across Canada.
Among the six we identified, four were in-person group education sessions. Group education has been shown to be an effective means of delivering health information in a comparable and potentially more efficient and cost-effective manner than individual education [67,68]. However, in-person sessions may pose accessibility problems due to geographic and logistical barriers. The remaining two organizations provided nutrition services online, which has the potential to provide supportive care to a larger number of men with PC and to overcome barriers to access. Although the efficacy of the online services was not assessed, the literature suggests telemedicine education can be as effective as in-person education [69]. Between 2028 and 2032, the number of new cases of PC is projected to almost double in Canada [70]. The impact on healthcare services for the growing number of men living with PC will be substantial. Alternative modes of delivery of patient information will therefore be critical. The scoping review identified three main methods of delivering nutrition services to men with PC: home-based services, individual counselling, and group education classes. This substantiated what healthcare professionals in our survey considered the best modes of delivery, suggesting supportive care programs should consider offering flexible formats for nutrition services when possible. Although this paper is focussed on nutrition, diet is just one part of a healthy lifestyle. The generally positive findings reported by studies in the review, such as feasibility, adherence and user satisfaction, were not specific to the dietary component. Thus, we suggest that supportive care programs offer comprehensive healthy lifestyle services that include nutrition. It is also important to highlight the difficulty in identifying 'best practice' and the comparative success of approaches from the literature to inform nutrition services, as studies were heterogeneous in design, target population and primary outcome. Unsurprisingly, the existing nutrition services identified in this study generally did not reflect the interventions reported in the literature, nor the perspectives of health care professionals with respect to the content and timing of services. This confirms the well-documented gap between research and practice [71]. There are several limitations to our study. Patient perspectives on nutritional services were assessed from data already collected through the PCSC Program's education session evaluation form, which was not designed to assess perspectives on broader nutrition services. The form also did not collect demographic information, as its purpose is quality improvement and not research. Therefore, participants who attended the nutrition education session may not be representative of the general PC population. Patient engagement through focus groups and surveys would provide a deeper understanding of nutritional needs. The survey of health professionals used purposive expert sampling to capture insight on nutrition services from those working directly in PC-based healthcare or research in BC. Questions were developed by the researchers and not validated. This approach was used because our aim was exploratory in nature and required a focus on individuals with expertise providing care for men with PC. However, it also introduces researcher bias and a non-representative sample, as it was confined to BC.
One of the strengths of our study was the use of a multi-faceted approach that considered patient and health care perspectives, existing services and the evidence base on nutrition and PC. Incorporation of each of these areas is key for sustainable, effective nutrition services, and supportive care services more broadly. Conclusion As evident from the environmental scan, there are limited nutritional services targeted to the PC population despite the high prevalence of PC in Canada, and the effectiveness of nutritional services. It is perhaps unsurprising then that men with PC and PC-healthcare professionals identified a need for more nutrition services. Nutrition services should consider flexibility in delivery format, support at multiple times throughout survivorship, as well as embedding nutrition as part of overall supportive care. The provision of such support will be a challenge within a public healthcare system where nutrition services are generally prioritized for those who are malnourished, which is uncommon among men with PC. New models of care with supplemental funding may help to close the gap between the needs of men with PC and current standard of care.
Influence of Cementitious System Composition on the Retarding Effects of Borax and Zinc Oxide This research investigated the retarding impact of zinc oxide (ZnO) and borax (Na2[B4O5(OH)4]·8H2O) on hydration of Portland cement, calcium aluminate cement (CAC), and calcium sulfoaluminate cement (CSA). Heat of hydration of cement paste samples with and without ZnO and borax was used to measure the influence of ZnO and borax on the set time of these cementitious systems. It was found that both ZnO and borax can retard the set time of Portland cement systems; however, ZnO was shown to be a stronger set time retarder than borax for these systems. ZnO did not show any retarding impact on CAC and CSA systems while addition of borax in these systems prolonged the set time. It was concluded that ZnO does not poison the nucleation and/or growth of CSA and CAC hydration products. We suggest that borax retards the cement set time by suppressing the dissolution of cement phases. Introduction Utilization of cement set time retarders, referred to as retarding admixtures, in concrete allows concrete producers to delay the set time of concrete. Retarding admixtures are commonly used in hot weather concreting and when ready mixed concrete should be transported for a long distance. Sucrose (sugar) is a well-known cement set retarder [1][2][3][4][5]. Sucrose retards cement hydration by poisoning nucleation sites for calcium silicate hydrate (C-S-H) [3,6]. This poisoning effect prevents the formation of C-S-H for an extended period and thus delays the cement set time. The following abbreviations will be used in this paper: H = H 2 O, C = CaO, A = Al 2 O 3 ,Ŝ = SO 3 , S = SiO 2 . Zinc oxide (ZnO) has been shown to be another strong cement set retarder [7][8][9]. It has been suggested that ZnO, like sucrose, poisons C-S-H nucleation sites and thus retards the cement set time [7]. In cementitious systems containing ZnO, during the prolonged dormant period, cement particles continue to dissolve and thus the concentration of ions in the pore solution increases [3,7]. This will increase calcium concentration in the system leading to an increase in C-S-H nucleation sites. Therefore, the rate at which C-S-H grows after the set time is higher in systems containing ZnO compared to those without any retarder. This is why ZnO is called a "delayed accelerator" [7]. The retardation time of cementitious systems containing ZnO is correlated to the amount of Zn ions dissolved in the pore solution [7]. The higher the Zn ion concentration in the pore solution, the longer the retardation period. The retardation period caused by ZnO is suggested to end by two possible mechanisms [7]: (1) removal of Zn ions from the pore solution by chelation or adsorption of Zn ions by hydration products, such as C-S-H; and (2) removal of Zn ions by formation of calcium zinc hydrate according to Equations (1) and (2) [10,11]. Borax is another set time retarder for cementitious systems [12]. It has been shown that borax delays the set time of calcium sulfoaluminate cement (CSA) by preventing the dissolution of ye'elimite [12]. The behavior of retarding admixtures in cementitious systems depends on several factors, such as the composition of the systems as well as the curing temperature. Addition of supplementary cementitious materials (SCM), such as rice straw ash and silica fume, in cementitious systems have been shown to reduce the retarding action of chemical retarders [7,13]. 
This has been attributed to the existence of more C-S-H nucleation sites in systems containing SCMs compared to those without SCMs [7,13,14]. Therefore, the composition of the cementitious system affects the chemical admixture performance in the system. The dissolution of ZnO in alkaline solutions (such as cementitious solutions) increases as the temperature increases [15]. Similarly, cement early hydration, and thus the formation of C-S-H, increases as the curing temperature increases [16]. It is not known, however, how the curing temperature could affect the retarding action of ZnO on cementitious systems. Although the impact of ZnO on Portland cement hydration has been studied by some researchers, the influence of ZnO on the hydration of calcium aluminate cement (CAC) and CSA has not been investigated yet. As the compositions of CAC and CSA and their hydration mechanisms are different from those of Portland cement, it would be expected that ZnO and borax would affect the hydration of CAC, CSA, and Portland cement in different ways. This study investigates the impact of ZnO and borax on the Portland cement, CAC, and CSA hydration process. Furthermore, the influence of the curing temperature on the retarding action of ZnO is studied. Materials and Methods ASTM C150 [17] Type II/V and Type III as well as commercially available CAC and CSA cements were used in this study. The chemical composition of the cementitious materials is shown in Table 1. The retarding action of ZnO and borax on cementitious systems was determined by measuring the heat of hydration of cement paste samples using a four-channel isothermal calorimeter. A 0.45 water to cementitious material ratio (w/cm) was used. When ZnO or borax was used in paste samples, it was added to the dry cementitious material and mixed for one minute by hand before the addition of mixing water to the sample. This was done to make sure the ZnO and borax particles were distributed evenly throughout the sample. Several ZnO and borax dosages were used; all dosages are given as % mass of cementitious material. Paste samples were mixed with an overhead mixer at 600 rpm for 120 s, followed by a 60 s rest period, and then mixed at 600 rpm for 60 s. The mass of the cement paste samples was approximately 50 g, except for CSA samples, which were approximately 15 g. For a given mix, two test samples were prepared for measuring the heat of hydration; the reported result is based on the average of the two samples. To obtain the induction period (retardation time), the slope of the acceleration peak of the heat of hydration was extended to the x-axis (time axis). The intersection point between the extended slope and the x-axis was considered the induction (dormant) period, as shown in Figure 1 [7]. Impact of Curing Temperature on ZnO Retarding Action Experiments were conducted to investigate the impact of curing temperature on the ZnO retarding action in cementitious systems.
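As a concrete illustration of this tangent-extrapolation procedure, the sketch below (Python/NumPy) locates the steepest point of the acceleration branch between the dormant-period minimum and the main hydration peak and extends the tangent to zero heat flow; the data layout and names are assumptions for illustration, not the authors' actual analysis code.

```python
import numpy as np

def induction_time(t_hours, heat_flow):
    """Estimate the induction (dormant) period from an isothermal calorimetry curve.

    t_hours   : 1-D array of times in hours (increasing)
    heat_flow : heat flow normalized per gram of cementitious material

    The tangent at the steepest point of the acceleration branch (between the
    dormant minimum and the main hydration peak) is extended to zero heat flow;
    its intercept with the time axis is taken as the induction time.
    """
    t = np.asarray(t_hours, dtype=float)
    q = np.asarray(heat_flow, dtype=float)

    i_peak = int(np.argmax(q))                            # main hydration peak
    i_min = int(np.argmin(q[:i_peak])) if i_peak else 0   # dormant minimum before the peak

    dqdt = np.gradient(q, t)                              # numerical slope of the curve
    i_steep = i_min + int(np.argmax(dqdt[i_min:i_peak + 1]))

    slope = dqdt[i_steep]
    # Tangent: q ~ q[i_steep] + slope * (t - t[i_steep]); set q = 0 and solve for t.
    return t[i_steep] - q[i_steep] / slope
```

Applied to a control curve and to a ZnO- or borax-dosed curve, the difference between the two intercepts gives the retardation attributable to the admixture.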
Figure 2 shows the impact of curing temperature on type II/V paste samples with and without ZnO. Figure 3 presents the influence of curing temperature on type III samples with and without ZnO. As can be seen from these figures, the curing temperature has a strong impact on the ZnO retardation action in cementitious systems: the higher the curing temperature, the shorter the retardation time. Figure 4 plots curing temperature versus the retardation time of paste samples with 0.3% ZnO and 0.5% ZnO. There appears to be an exponential correlation between curing temperature and the retardation time of paste samples containing ZnO. Another interesting trend for samples cured at 50 °C is that the main hydration peak is lower for samples containing ZnO than for the control sample (with no ZnO); this is the opposite of samples cured at 23 °C or 10 °C. It has been proposed that the higher the curing temperature, the higher the rate of the hydration reaction [18]; this could mean that at higher curing temperatures, C-S-H nuclei are initiated at a faster rate. It has also been shown that higher curing temperatures increase the calcium hydroxide (CH) nucleation rate [19]. This increase in C-S-H and CH nuclei at higher curing temperatures could be the reason behind the suppressed retardation action of ZnO in cement paste shown in Figures 2 and 3: the poisoning effect of ZnO is overcome sooner at high curing temperatures than at low curing temperatures because more C-S-H nuclei are available. Comparing Figures 2 and 3, it can be observed that, for a given curing temperature and ZnO dosage, type III samples had a shorter retardation time than samples made with type II/V. This is because type III is finer than type II/V, which results in a higher rate of cement dissolution and a higher number of C-S-H nuclei during the early stages of hydration. Therefore, the number of C-S-H nuclei in systems containing type III cement would be greater than in those containing type II/V. Because of this, the poisoning ability of ZnO in systems containing type III is depleted faster than in systems with type II/V. Influence of Borax on Portland Cement Hydration The impact of borax on the heat of hydration of type II/V and type III cement paste is presented in Figures 5 and 6. These samples were cured at 23 °C. In general, the addition of borax to paste samples made with type II/V and type III delayed the set time; however, dosages of up to 0.5% borax had a negligible effect on the induction period of type III samples (refer to Figure 6). Comparing Figure 2B with Figure 5 (or Figure 3B with Figure 6), it becomes clear that ZnO is a much stronger set time retarder than borax for Portland cement systems. Type II/V samples containing 0.5% ZnO have a retardation time of about 50 h, while type II/V samples containing 0.5% borax (0.5%Br in Figure 5) have a retardation time of approximately 6 h. Similarly, type III samples containing 0.5% ZnO have a retardation time of 38 h (Figure 3B), while type III samples with 0.5% borax (0.5%Br in Figure 6) have a retardation time of about 3 h. The other notable difference between paste samples containing borax and ZnO is that samples containing ZnO have a narrower but steeper main hydration peak, whereas samples containing borax show wider but shallower main hydration peaks. The sharp, narrow peaks in samples containing ZnO have been suggested to result from the high number of C-S-H nucleation sites in these samples [7]. The shallow hydration peaks in systems containing borax could suggest that borax limits the dissolution of cement particles and thus reduces the number of C-S-H nuclei. The negligible effect of 0.3% and 0.5% borax on type III hydration could be attributed to the fact that type III has a high surface area, which means borax cannot reduce the cement dissolution at these lower dosages because there are more surfaces to be poisoned. If the cement particle dissolution is reduced by borax, the number of C-S-H nuclei would be small, the rate of C-S-H formation and growth would be slower, and hence the hydration peak would be shallower. Therefore, it could be suggested that the mechanisms by which ZnO and borax retard the cement set time are different: ZnO retards the set time by poisoning C-S-H nucleation and growth, whereas borax prolongs the set time by poisoning/reducing cement dissolution. Borax could also poison the nucleation and/or growth of C-S-H. Impact of ZnO and Borax on Calcium Aluminate Cement (CAC) Hydration Figure 7 shows the heat of hydration of CAC paste samples with and without ZnO. Clearly, ZnO has no retarding effect on CAC hydration, even at high dosages. Figure 8 shows the heat of hydration of CAC paste samples containing different dosages of borax (0%Br, 0.3%Br, 0.5%Br, and 1%Br); Figure 8A presents the heat flow and Figure 8B shows the total heat of hydration. As can be seen from Figure 8, the addition of borax to CAC paste samples retarded the set time. Samples with 1% borax (1%Br) had an induction period of about 32 h, whereas samples with 0.3%Br had a retardation time of about 10 h. Besides prolonging the induction period, borax reduced the height of the CAC main hydration peak as well (refer to Figure 8A): the higher the borax dosage in the mix, the lower the peak height. However, the main hydration peaks are wider in samples containing borax than in those without borax. The shallower but wider hydration peaks for samples containing borax suggest that borax controls the rate of nucleation and/or growth of hydration products. As suggested earlier, this could be because borax suppresses the dissolution of phases in cement. Furthermore, borax could also poison the nucleation and/or growth of CAC hydration products. Hydration reactions of CAC are different from those of Portland cement. The main hydration products of CAC are calcium aluminate hydrate and aluminium hydroxide; at curing temperatures between 20 °C and 30 °C, the hydration products of CAC are calcium aluminate hydrates (CAH10 and C2AH8) and aluminium hydroxide (AH3). Therefore, it can be suggested that ZnO does not poison the nucleation and/or growth of CAC hydration products, nor does ZnO reduce the dissolution of CAC phases. Influence of ZnO and Borax on CSA Cement Hydration To study the influence of ZnO addition on CSA cement, the set time of CSA cement was delayed by three methods: (1) mixing 25% CSA and 75% type II/V cements, (2) adding a 50% citric acid solution, and (3) adding a chemical retarder. Citric acid and retarder were mixed with the mix water. Figure 9 shows the heat of hydration graphs for binary paste samples (25% CSA and 75% type II/V) with and without ZnO. The hydration graphs in Figure 9 show two distinct hydration phases, one from initial mixing to around 48 h and the other beginning at around 48 h. The hydration process of cementitious systems containing CSA and Portland cement has been proposed to be a two-phase process [20]. CSA hydration happens during the first phase and produces AFm, AFt, and aluminium hydroxide (AH3). The second phase of hydration is due to the hydration of C3S in Portland cement, which forms stratlingite (C2ASH8), C-S-H, and CH according to Equations (3) and (4) [20]. Therefore, the first phase (up to 48 h) of the hydration reaction observed in Figure 9 is dominated by CSA cement, while Portland cement hydration dominates the second phase (after 48 h). As can be seen from Figure 9, the hydration of CSA was not affected by ZnO addition, as the first phase of hydration showed no retardation regardless of the ZnO dosage. However, the hydration of type II/V (the second phase of hydration) was delayed by ZnO. Figure 10A shows the heat of hydration of CSA paste samples made with 50% citric acid solution with and without ZnO, and Figure 10B shows hydration graphs for CSA cement paste samples dosed with a chemical retarder admixture (ASTM Type B and D). Citric acid and retarder were used to delay the set time enough to allow sample preparation. In both Figure 10A and Figure 10B, ZnO did not affect the hydration of CSA. Therefore, based on the ZnO performance in the binary paste samples (Figure 9) and in the 100% CSA cement paste samples (Figure 10), it can be suggested that ZnO does not have a retarding effect on the CSA set time. Thus, it can be concluded that ZnO has no poisoning effect on the nucleation and/or growth of the main CSA hydration products (ettringite (AFt) and aluminium hydroxide (AH3)). Figure 11 shows the impact of borax on the hydration of CSA paste samples mixed with 50% citric acid solution with and without borax. The sample containing 0.3% borax (0.3%Br) had the same induction period as the control (0%Br). However, samples containing 0.5%Br and 1%Br had a longer induction period than the control sample; the induction period for the sample containing 1%Br was longer than an hour, as can be seen in Figure 11A. Besides prolonging the induction period, borax lowered the height of the main hydration peak. However, the total heat of hydration of the CSA samples at 6 h after mixing was similar regardless of the borax dosage. The retardation due to borax could be because borax prevents the dissolution of ye'elimite (C4A3Ŝ), which is a major mineral phase in CSA cement [12]. Conclusions The retarding action of ZnO and borax on Portland cement, CSA cement, and CAC was investigated.
The heat of hydration of cement paste samples with and without the addition of ZnO and borax was measured to study the impact of ZnO and borax on the set time of cementitious systems. It was found that the retarding impacts of ZnO and borax on cementitious systems differ from each other. Both ZnO and borax can retard the set time of Portland cement systems; however, ZnO was found to be a stronger set time retarder than borax for Portland cement systems. The set time of CSA and CAC, by contrast, was not retarded by ZnO, while borax did retard the set time of these cementitious systems. It was also revealed that as the curing temperature rises, the effectiveness of ZnO in retarding the set time decreases. It can be concluded that the mechanisms by which the set time of cementitious systems is retarded by ZnO and by borax are different. ZnO does not appear to suppress the nucleation and growth of CSA and CAC hydration products, which are mainly aluminate-bearing phases such as ettringite and aluminate hydrate. It can also be suggested that borax retards the set time by reducing the dissolution of cement particles, whereas we suggest that ZnO poisons the nucleation and/or growth of C-S-H.
$\eta'$ and $\eta$ mesons at high T when the U_A(1) and chiral symmetry breaking are tied The approach to the eta'-eta complex employing chirally well-behaved quark-antiquark bound states and incorporating the non-Abelian axial anomaly of QCD through the generalization of the Witten-Veneziano relation, is extended to finite temperatures. Employing the chiral condensate has led to a sharp chiral and U_A(1) symmetry restoration, but with the condensates of quarks with realistic explicit chiral symmetry breaking, which exhibit a smooth, crossover chiral symmetry restoration in qualitative agreement with lattice QCD results, we get a crossover U_A(1) transition, with smooth and gradual melting of anomalous mass contributions. This way we obtain a substantial drop of the eta' mass around the chiral transition temperature, but no eta mass drop. This is consistent with the present empirical evidence. I. INTRODUCTION The experiments at heavy-ion collider facilities, such as RHIC, LHC, FAIR, and NICA, aim to produce a new form of hot and/or dense QCD matter [1,2]. Clear signatures of its production are thus very much needed. The most compelling such signal would be a change in the pertinent symmetries, i.e., the restoration (in hot and/or dense matter) of the symmetries of the QCD Lagrangian which are broken in the vacuum, notably the [SU_A(N_f) flavor] chiral symmetry for N_f = 3 = 2 + 1 light quark flavors q, and the U_A(1) symmetry. This provides much motivation to establish that experiment indeed shows this, as well as to give theoretical explanations of such phenomena. The first signs of a (partial) restoration of the U_A(1) symmetry were claimed to be seen in 200 GeV Au + Au collisions [3,4] at RHIC by Csörgő et al. [5]. They analyzed the η'-meson data of the PHENIX [3] and STAR [4] collaborations through several models for hadron multiplicities, and found that the η' mass (M_η' = 957.8 MeV in vacuum) decreases by at least 200 MeV inside the fireball. The vacuum η' is, comparatively, so very massive since it is predominantly the SU_V(N_f)-flavor singlet state η_0. Its mass M_η' receives a sizable anomalous contribution ΔM_η' due to the U_A(1) symmetry violation by the non-Abelian axial Adler-Bell-Jackiw anomaly ["gluon anomaly," or "U_A(1) anomaly" for short], which makes the divergence of the singlet axial quark current q̄γ_μγ_5(λ^0/2)q nonvanishing even in the chiral limit of vanishing current masses of quarks, m_q → 0. This mass decrease is then a sign of a partial U_A(1) symmetry restoration in the sense of a diminishing contribution of the U_A(1) anomaly to the η' mass, which would decrease to a value readily understood in the same way [6] as the masses of the octet of the light pseudoscalar mesons P = π^{0,±}, K^{0,±}, K̄^0, η, which are exceptionally light almost-Goldstone bosons of dynamical chiral symmetry breaking (DChSB). A recent experimental paper studied 200 GeV Au + Au collisions [7]. Although a new analysis of the limits on the η' and η masses was beyond the scope of Ref. [7], the data contained therein make it possible, and preliminary considerations [8] confirm the findings of Ref. [5]. The first explanation [9] of these original findings [5] was offered by conjecturing that the Yang-Mills (YM) topological susceptibility, which leads to the anomalously high η' mass, should be viewed through the Leutwyler-Smilga (LS) [10] relation (12).
This ultimately implies that the anomalous part of the η' mass decreases together with the quark-antiquark (q̄q) chiral-limit condensate ⟨q̄q⟩_0(T) as the temperature T grows towards the chiral restoration temperature T_Ch and beyond. This connection between the U_A(1) symmetry restoration and the chiral symmetry restoration was just a conjecture until our more recent paper [11] strengthened the support for this scenario. Nevertheless, there was also a weakness: our approach predicted the decrease of not only the η' mass, but also an even more drastic decrease of the η mass M_η, and signs for that have not been seen in any currently available data [7,12]. In the present paper, we show that the predicted decrease of M_η [9] was the consequence of employing the chiral-limit condensate ⟨q̄q⟩_0(T), since it decreases too fast with T after approaching T ∼ T_Ch. We then perform T > 0 calculations in the framework of the more recent work by Benić et al. [11], where the LS relation (12) is replaced by the full-QCD topological charge parameter (18) [13-15]. There, one can employ q̄q condensates for realistically massive u, d, and s quarks, with a much smoother T dependence. As a result, the description of the η-η' complex of Ref. [9] is significantly improved, since our new T dependences of the pseudoscalar meson masses do not exhibit a decrease of the η mass, while a considerable decrease of the η' mass still exists, which is consistent with the empirical findings [5]. The light pseudoscalar mesons are simultaneously q̄q' bound states (q, q' = u, d, s) and (almost-)Goldstone bosons of the DChSB of nonperturbative QCD. We can implement both simultaneously by using the Dyson-Schwinger (DS) equations as Green functions of QCD (see, e.g., Refs. [16-19] for reviews). Particularly pertinent are the gap equation for dressed quark propagators S_q(p) with DChSB-generated self-energies Σ_q(p) (while S_q^free are the free ones), and the Bethe-Salpeter equation (BSE) for the q̄q' meson bound-state vertices Γ_qq', where K is the interaction kernel, and e, f, g, h represent (schematically) the collective spinor, color, and flavor indices. This nonperturbative and covariant bound-state DS approach can be applied with various degrees of truncations, assumptions, and approximations, ranging from ab initio QCD calculations and sophisticated truncations (see, e.g., Refs. [16-22] and references therein) to very simplified modeling of hadron phenomenology, such as utilizing Nambu-Jona-Lasinio point interactions. For applications in involved contexts such as nonzero temperature or density, strong simplifications are especially needed for tractability. This is why the separable approximation [23] is adopted in this paper [see the discussion between Eqs. (4) and (5)]. However, when describing pseudoscalar mesons (including η and η'), reproducing the correct chiral behavior of QCD is much more important than the dynamics-dependent details of their internal bound-state structure.
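For orientation only, the two coupled equations referred to here have, in a generic rainbow-ladder-type notation, the schematic forms below; the paper's actual Eqs. (1)-(4), including its conventions for the effective interaction, regularization and momentum partitioning, may differ.
\[
S_q(p)^{-1} = S_q^{\rm free}(p)^{-1} + \Sigma_q(p), \qquad
\Sigma_q(p) = \int \frac{d^4\ell}{(2\pi)^4}\, g^2 D^{ab}_{\mu\nu}(p-\ell)_{\rm eff}\,
\frac{\lambda^a}{2}\gamma_\mu\, S_q(\ell)\, \frac{\lambda^b}{2}\gamma_\nu ,
\]
\[
[\Gamma_{q\bar q'}(p,P)]_{ef} = \int \frac{d^4\ell}{(2\pi)^4}\,
[K(p,\ell;P)]_{ef;gh}\, \big[S_q(\ell+\tfrac{P}{2})\,\Gamma_{q\bar q'}(\ell,P)\,S_{q'}(\ell-\tfrac{P}{2})\big]_{gh} .
\]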
A rarity among bound-state approaches, the DS approach can also achieve the correct QCD chiral behavior regardless of the details of model dynamics, but under the condition of a consistent truncation of DS equations, respecting pertinent Ward-Takahashi identities [16][17][18][19]. A consistent DS truncation, where DChSB is very well understood, is the rainbow-ladder approximation (RLA). Since it also enables tractable calculations, it is still the most used approximation in phenomenological applications, and we also adopt it here. In the RLA, the BSE (2) employs the dressed quark propagator solution SðpÞ from the gap equation (1) and (4), which in turn employs the same effective interaction kernel as the BSE. It has a simple gluon-exchange form, where both quark-gluon vertices are bare, so that the quark self-energy in the gap equation is where D ab μν ðkÞ eff is an effective gluon propagator. These simplifications should be compensated by modeling the effective gluon propagator D ab μν ðkÞ eff in order to reproduce well the relevant phenomenology; here, pseudoscalar (P) meson masses M P , decay constants f P , and condensates hqqi, including T-dependence of all these. In the present paper, we use the same model as in Ref. [9] and attempt to improve their approach to the T dependence of the U A ð1Þ anomaly. All of the details on the functional form and parameters of this model interaction can be found in the Sec. II A of Ref. [24]. Such models-socalled rank-2 separable models-are phenomenologically successful (see, e.g., Refs. [23][24][25][26][27]). However, they have the well-known drawback of predicting a somewhat too low transition temperature: the model we use in this paper and that was used in Refs. [9,24,26,27] has T Ch ¼ 128 MeV, i.e., some 17% below the now widely accepted central value of 154 AE 9 MeV [28][29][30]. But, rather than quantitative predictions at specific absolute temperatures, we are interested in the relative connection between the chiral restoration temperature T Ch and the temperature scales characterizing signs of the effective disappearance of the U A ð1Þ anomaly, for which the present model is adequate. In addition, Ref. [31] showed that coupling to the Polyakov loop can increase T Ch , while the qualitative features of the T dependence of the model are preserved. Thus, separable model results at T > 0 are most meaningfully presented as functions of the relative temperature T=T Ch , as in Refs. [9,24]. Anyway, regardless of the details of the model dynamics [i.e., the choice of D ab μν ðkÞ eff ] and thanks to the consistent truncation of DS equations, the BSE (2) yields the masses M qq 0 of pseudoscalar P ∼ qq 0 mesons which satisfy the Gell-Mann-Oakes-Renner-type relation with the current masses m q , m q 0 of the corresponding quarks: While this guarantees that all M qq 0 → 0 in the chiral limit, it also shows that the RLA cannot lead to any U A ð1Þanomalous contribution responsible for ΔM η 0 . That is, the RLA gives us only the nonanomalous partM 2 NA of the squared-mass matrixM 2 ¼M 2 NA þM 2 A of the hidden-flavor (q ¼ q 0 ) light (q ¼ u, d, s) pseudoscalar mesons. In the basis fuū; dd; ssg,M 2 NA is simplyM 2 The anomalous partM 2 A arises because the pseudoscalar hidden-flavor states qq are not protected from the flavormixing QCD transitions (through anomaly-dominated pseudoscalar gluonic intermediate states), as depicted in Fig. 1. They are obviously beyond the reach of the RLA and horrendously hard to calculate. 
Nevertheless, they cannot be neglected, as can be seen in the Witten-Veneziano relation (WVR) [32,33], which remarkably relates the full-QCD quantities (η 0 , η, the K-meson masses M η 0 ;η;K , and the pion decay constant f π ) to the topological susceptibility χ YM of the (pure-gauge) YM theory: Its chiral-limit-nonvanishing rhs is large (roughly 0.8 to 0.9 GeV 2 ), while Eq. (5) basically leads to the cancellation of all chiral-limit-vanishing contributions on the lhs [9]. The rhs is the WVR result for the total mass contribution of the U A ð1Þ anomaly to the η-η 0 complex, M U A ð1Þ . TheM 2 A matrix elements generated by the U A ð1Þanomaly-dominated transitions qq → q 0q0 (see Fig. 1) can be written [35] in the flavor basis fuū; dd; ssg as Here b q ¼ ffiffi ffi β p for both q ¼ u, d, since we assume m u ¼ m d ≡ m l [i.e., isospin SUð2Þ symmetry] which is an excellent approximation for most purposes in hadronic physics. For example, M uū ¼ M dd ≡ M ll ¼ M ud ≡ M π obtained from the BSE (2) is our RLA model pion mass for π þ ðπ − Þ ¼ udðdūÞ and It still contains M ss , the mass of the unphysical (but theoretically very useful) ss pseudoscalar obtained in the RLA. However, thanks to Eq. (5), it can also be expressed through the masses of physical mesons, to a very good approximation [24,27,[34][35][36][37]. Its decay constant f ss is calculated in the same way as f π and f K . Since the s quark is much heavier than the u and d quarks, in Eq. (7) we have b q ¼ X ffiffiffi β p for q ¼ s, with X < 1. Transitions to and from more massive s quarks are suppressed, and the quantity X expresses this influence of the SUð3Þ flavor symmetry breaking. The most common choice for the flavor-breaking parameter has been the estimate X ¼ f π =f ss [9,24,27,[34][35][36][37], but we found [11] that it necessarily arises in the variant of our approach relying on Shore's generalization of the WVR (6) [13,14] (see Sec. III). The anomalous mass matrixM 2 A [which is of the pairing form (7) in the hidden-flavor basis fuū; dd; ssg] in the octet-singlet basis fπ 0 ; η 8 ; η 0 g of hidden-flavor pseudoscalars becomeŝ which shows that the SUð3Þ flavor breaking [X ≠ 1] is necessary for the anomalous contribution to the η 8 mass squared, ΔM 2 In the flavor SUð3Þsymmetric case (X ¼ 1), only the η 0 mass receives a U A ð1Þ-anomaly contribution: (8)] to be off diagonal, but in this basis the fη 8 ; η 0 g submatrix ofM 2 NA also gets strong, negative off-diagonal elements, g., Ref. [35]). Equation (8) thus shows that the interplay of the flavor symmetry breaking (X < 1) with the anomaly is necessary for the partial cancellation of the off-diagonal (8,0) elements in the complete mass matrixM 2 ¼M 2 NA þM 2 A , i.e., to obtain the physical isoscalars in a rough approximation as η ≈ η 8 and η 0 ≈ η 0 . How this changes with diminishing U A ð1Þ-anomaly contributions is exhibited in Secs. IV and V. Since the isospin-limit π 0 decouples from the anomaly and mixing, only the isoscalar-subspace 2 × 2 mass matrix FIG. 1. Axial-anomaly-induced, flavor-mixing transitions from hidden-flavor pseudoscalar states P ¼ qq to P 0 ¼ q 0q0 including both possibilities q ¼ q 0 and q ≠ q 0 . All lines and vertices are dressed. The gray blob symbolizes all possible intermediate states enabling this transition. The three bold dots symbolize an even [34] but otherwise unlimited number of additional gluons. As pointed out in Ref. [34], the diamond graph is just the simplest example of such a contribution. M 2 needs to be considered. 
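The 2 × 2 isoscalar matrix just mentioned takes, in the nonstrange-strange (NS-S) basis used below, a form of the type recalled here as a hedged reconstruction of what the text calls Eq. (10) (the normalization conventions of the cited works may differ):

$ \hat M^2_{\rm NS\text{-}S} \simeq \begin{pmatrix} M_{u\bar u}^2 + 2\beta & \sqrt{2}\,\beta X \\ \sqrt{2}\,\beta X & M_{s\bar s}^2 + \beta X^2 \end{pmatrix} , $

with β the anomalous mass-shift parameter and X the flavor-breaking suppression factor introduced above.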
Even thoughM 2 is strongly off diagonal in the isoscalar basis fη NS ; η S g (the NS-S basis), in this basis it has the simple form which also shows that when the U A ð1Þ-anomaly contributions vanish (i.e., β → 0) the NS-S scenario is realized. This means that not only do the physical isoscalars become η → η NS and η 0 → η S , but also that their respective masses become M π and M ss . Our experience with various dynamical models (at T ¼ 0) shows [27,[34][35][36][37] that after pions and kaons are correctly described, a good determination of the anomalous mass shift parameter is sufficient for Eq. (10) to give good η 0 and η masses, since M 2 Nevertheless, calculating the anomalous contributions (∝ β) in DS approaches is a very difficult task. Reference [38] explored this by taking the calculation beyond the RLA, but they had to adopt extremely schematic model interactions (proportional to δ functions in momenta) for both the ladder-truncation part (3) and the anomaly-producing part. Another approach [39] obtained qualitative agreement with the lattice on χ YM (and, consequently, acceptable masses for η 0 and η) by assuming that the contributions to Fig. 1 are dominated by the simplest one-the diamond graph-if it is appropriately dressed (in particular, by an appropriately singular quark-gluon vertex). However, we take a different route, since our goal is not to figure out how the breaking of U A ð1Þ comes about on a microscopic level, but rather to phenomenologically model and study the high-T behavior of the masses of the realistic η 0 and η, along with other light pseudoscalar mesons. In the DS context, the most suitable approach is then the one developed in Refs. [27,[34][35][36][37] and extended to T > 0 in Refs. [9,24]. The key is that the U A ð1Þ anomaly is suppressed in the limit of large number of QCD colors N c [32,33]. So, in the sense of the 1=N c expansion, it is a controlled approximation to view the anomaly contribution as a perturbation with respect to the (nonsuppressed) results obtained through the RLA (3)-(4). While considering meson masses, it is thus not necessary to look for anomalyinduced corrections to the RLA Bethe-Salpeter wave functions, 1 which are consistent with DChSB and with the chiral QCD behavior (5) that is essential for describing pions and kaons. The breaking of nonet symmetry by the U A ð1Þ anomaly can be introduced just at the level of the masses in the η 0 -η complex, by adding the anomalous contributionM 2 A to the RLA-calculatedM 2 NA . Its anomaly mass parameter β can be obtained by fitting [34] the empirical masses of η and η 0 or, preferably, from lattice results on the YM topological susceptibility χ YM (because then no new fitting parameters are introduced). Employing the WVR (6) yields [9,35] β ¼ β WV , while Shore's generalization gives (see Sec. III) β ¼ β Sho [11], where A is the QCD topological charge parameter, given below by Eq. (18) in terms of qq condensates of massive quarks, which turns out to be crucial for a realistic T dependence of the masses in the η 0 -η complex. III. EXTENSION TO T ≥ 0 Extending our treatment [27,[34][35][36][37] of the η 0 − η complex to T > 0 is clearly more complicated. Since to the best of our knowledge there is no systematic derivation of the T > 0 version of either the WVR (6) or its generalization by Shore [13,14], it is tempting to try to straightforwardly replace all quantities by their T-dependent versions. 
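For ease of reference in what follows, the T = 0 Witten-Veneziano relation (6) reads, in a commonly used convention (a reconstruction rather than a verbatim quotation of the paper's equation),

$ M_\eta^2 + M_{\eta'}^2 - 2 M_K^2 = \frac{2 N_f}{f_\pi^2}\, \chi_{\rm YM} \equiv M_{U_A(1)}^2 , $

which ties the full-QCD masses M_η, M_η′, M_K and the pion decay constant f_π on the left-hand side to the pure-gauge quantity χ_YM on the right, and whose right-hand side is the chiral-limit-nonvanishing contribution of roughly 0.8 to 0.9 GeV² quoted earlier.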
In the WVR, these are the full-QCD quantities M η 0 ðTÞ, M η ðTÞ, M K ðTÞ, and f π ðTÞ, but also χ YM ðTÞ, which is a puregauge, YM quantity and thus much more resistant to high temperatures than QCD quantities that also contain quark degrees of freedom. Indeed, lattice calculations indicate that the decrease of χ YM ðTÞ (from which one would expect the decrease of the anomalous η 0 mass) only starts at a T some 100 MeV (or even more) above the (pseudo)critical temperature T Ch for the chiral symmetry restoration of full QCD, near where decay constants already decrease appreciably. It was then shown [24] that the straightforward extension of the T dependence of the YM susceptibility would even predict an increase of the η 0 mass around and beyond T Ch , contrary to experiment [5]. It could be expected that at high T, the original WVR (6) will not work since it relates the full-QCD quantities with a much more temperature-resistant YM quantity, χ YM ðTÞ. 1 It is instructive to recall [36,40] that nonet symmetry (or a broken version thereof) is in fact assumed (explicitly or implicitly) by all approaches using the simple hidden-flavor basis qq, e.g., to construct the SUð3Þ pseudoscalar meson states η 0 and η 8 without distinguishing between the qq states belonging to the singlet and those belonging to the octet. An independent a posteriori support for our approach is also that η and η 0 → γγ ðÃÞ processes are described well [34][35][36][37]. However, this problem can be eliminated [9] by using, at T ¼ 0, the (inverted) Leutwyler-Smilga (LS) relation [10] to express χ YM in the WVR (6) through the full-QCD topological susceptibility χ and the chiral-limit condensate hqqi 0 . Thus the zero-temperature WVR is retained, while the full-QCD quantities inχ do not have the T dependence mismatch with the rest of Eq. (6). Thus, instead of χ YM ðTÞ, Ref. [9] used the combinationχðTÞ [Eq. (12)] at T > 0, where the QCD topological susceptibility χ in the lightquark sector can be expressed as [10,15,41] This implies that the (partial) restoration of U A ð1Þ symmetry is strongly tied to the chiral symmetry restoration, since it is not χ YM ðTÞ but rather hqqi 0 ðTÞ [throughχðTÞ] that determines the T dependence of the anomalous parts of the masses in the η-η 0 complex [9]. The dotted curve in Fig. 2 illustrates how hqqi 0 ðTÞ decreases steeply to zero as T → T Ch , indicative of the second-order phase transition. This behavior is followed closely byχðTÞ, and therefore also by the anomaly parameter β WV ðTÞ [Eq. (11)]. This makes the mass matrix (10) diagonal immediately after T ¼ T Ch , which marks the abrupt onset of the NS-S scenario M η 0 ðTÞ → M ss ðTÞ, M η ðTÞ → M π ðTÞ [9]. In Eq. (13), C m denotes corrections of higher orders in small m q , but it should not be neglected as C m ≠ 0 is needed to have a finite χ YM with Eqs. (12) and (13). They in turn give us the value C m at T ¼ 0 in terms of the qq condensate and the YM topological susceptibility χ YM . However, to the best of our knowledge, the functional form of C m is not known. Reference [9] thus tried various parametrizations covering reasonably possible T dependences of C m ðTÞ, but this did not greatly affect the results for the T dependence of the masses in the η 0 -η complex. An alternative to the WVR (6) is its generalization by Shore [13,14]. There, relations containing the masses of the pseudoscalar nonet mesons take into account that η and η 0 should have two decay constants each [42]. 
If one chooses to use the η 8 -η 0 basis, they are f 8 η , f 8 η 0 , f 0 η , f 0 η 0 , and can be equivalently expressed through purely octet and singlet decay constants (f 8 , f 0 ) and two mixing angles (θ 8 , θ 0 ). This may seem better suited for use with effective meson Lagrangians than with qq 0 substructure calculations starting from the (flavor-broken) nonet symmetry, such as ours. Nevertheless, Shore's approach was also adapted for the latter bound-state context, and successfully applied there (in particular, to our DS approach in the RLA [27]). This was thanks to the simplifying scheme of Feldmann, Kroll, and Stech (FKS) [43,44]. They showed that this "two mixing angles for four decay constants" formulation in the NS-S basis, although in principle equivalent to the η 8 -η 0 basis formulation, can in practice be simplified further to a one-mixing-angle scheme using plausible approximations based on the Okubo-Zweig-Iizuka (OZI) rule. The decayconstant mixing angles in this basis are mutually close, ϕ S ≈ ϕ NS , and both are approximately equal to the state mixing angle ϕ rotating the NS-S basis states into the physical η and η 0 mesons, which diagonalizes the mass (squared) matrix (10). So, Ref. [27] numerically solved Shore's equations (combined with the FKS approximation scheme) for meson masses for several dynamical DS bound-state models [24,34,35]. Then, Ref. [11] presented analytic solutions thereof, for the masses of η and η 0 and the state NS-S mixing angle ϕ. These are rather long but closed-form expressions in terms of nonanomalous meson masses M π , M K and their decay constants f π , f K , as well as f NS and f S (the decay constants of the unphysical η NS and η S ), and, most notably, the full-QCD topological charge parameter A. This quantity (taken [13,14] from Di Vecchia and Veneziano [15]) plays the role of χ YM in the WVR in the mass relations of Shore's FIG. 2. The relative-temperature T=T Ch dependences of the pertinent order parameters calculated in our usual [9,24] separable interaction model. The odd man out is the (third root of the absolute value of the) chiral condensate hqqi 0 ðTÞ, which decreases steeply at T ¼ T Ch and dictates similar behavior [9] tõ χðTÞ. All of the other displayed quantities exhibit smooth, crossover behaviors, which are smoother for heavier flavors: the dash-dotted and dashed curves are the (third roots of the absolute values of the) condensates hssiðTÞ and hūuiðTÞ, respectively, the thin solid curve is the resulting topological susceptibility χðTÞ 1=4 , and the thick solid curve is the topological charge parameter AðTÞ 1=4 . The decay constants f π ðTÞ and f ss ðTÞ are, respectively, the lower dashed and dash-dotted curves. generalization. A will be considered in detail for the T > 0 extension, but now let us note that although Shore's generalization is in principle valid to all orders in 1=N c [13,14], Shore himself took advantage of and approximated A (as we shall at T ¼ 0) by the lattice result χ YM ¼ ð0.191 GeVÞ 4 [45]. Further, one should note that since the FKS scheme neglects OZI-violating contributions (that is, gluonium admixtures in η NS and η S ) it is consistent to treat them as pure qq states, accessible by our BSE (2) in the RLA. Then f NS ¼ f π , and f S ¼ f ss (the decay constant of the aforementioned "auxiliary" RLA ss pseudoscalar). We calculate its mass M ss with the BSE, but at T ¼ 0 it can also be related to the measurable pion and kaon masses, M 2 ss ≈ 2M 2 K − M 2 π , due to Eq. (5). 
Similarly, f ss can also be approximately expressed with these measurable quantities as f ss ≈ 2f K − f π . Thus, after taking A ≈ χ YM from lattice data, Ref. [11] calculated the η-η 0 complex using both the model-calculated and the empirical M π , M K , f π , and f K in their analytic solutions. This serves as a check (independently of any model) of the soundness of our approach at T ¼ 0. Since the adopted DS model also enables the calculation of nonanomalous qq masses and decay constants for T > 0, the only thing still missing is the T dependence of the full-QCD topological charge parameter A, as χ YM ðTÞ is inadequate. But, A is used to express the QCD susceptibility χ through the "massive" condensates hūui, hddi, and hssi, i.e., away from the chiral limit, in contrast to Eqs. (12) and (13) [see, e.g., Eq. (2.12) in Ref. [13]]. Its inverse (expressing A) thus also contains the qq condensates out of the chiral limit for all light flavors q ¼ u, d, s, and so should χ in Eq. (18). That is, the light-quark expression for the QCD topological susceptibility in the context of Shore's approach should be expressed in terms of the current masses m q multiplied by their respective condensates hqqi realistically out of the chiral limit: As before [9], the small and necessarily negative correction term C m is found by assuming A ¼ χ YM at T ¼ 0. This large-N c approximation also easily recovers the LS relation (12): by approximating the realistically massive condensates with hqqi 0 everywhere in Eq. (18), the QCD topological charge parameter A reduces toχ, justifying the conjecture of Ref. [9] that connects the U A ð1Þ symmetry restoration with the chiral symmetry restoration. This connection between the two symmetries is still present. However, with the massive condensates we also get a more realistic, crossover T dependence of the masses, depicted in Figs. 3 and 4, and presented in Sec. IV. Figures 3 and 4 correspond to two variations of the unknown T dependence C m ðTÞ of the correction term in Eq. (19). As in Ref. [9], the simplest ansatz is a constant, C m ðTÞ ¼ C m ð0Þ, which is most reasonable for T < T Ch , where the condensates [and thus also the leading term in FIG. 3. T dependence, relative to T Ch , of various η 0 -η complex masses described in the text, the π mass (thick, dash-dotted curve) for reference, the halved (to maintain clarity) total U A ð1Þanomaly-induced mass 1 2 M U A ð1Þ (short-dashed curve), and the topological charge parameter A 1=4 (solid curve). The straight line is 2 times the lowest fermion Matsubara frequency 2πT. χðTÞ] change little. But above some higher T, the negative C m ð0Þ-although initially much smaller in magnitude than the leading term-will make χðTÞ [and therefore also AðTÞ] change sign. Concretely, this limiting T above which there is no meaningful description is found a little above 1.6T Ch . For another, nonconstant C m ðTÞ that would not have such a limiting temperature, we now have a lead from lattice data where the high-T asymptotic behavior of the QCD topological susceptibility has been found to be a power law, χðTÞ ∝ T −b [46,47]. The high-T dependence of our model-calculated condensates is also (without fitting) such that the leading term of our χðTÞ in Eq. (19) has a similar power-law behavior, with b ¼ 5.17. Also, the values of our leading terms are, qualitatively, for all T roughly in the same ballpark as the lattice results [46,47]. 
We thus fit the quickly decreasing power-law C m ðTÞ for high T by requiring that (i) this more or less rough consistency with lattice χðTÞ values is preserved, (ii) the whole χðTÞ has the high-T power-law dependence as the leading term (with b ¼ 5.17), and (iii) C m ðTÞ joins smoothly with the low-T value C m ð0Þ determined from χ YM at T ¼ 0. Our nonconstant choice of C m ðTÞ yields the masses in Fig. 3 [and χðTÞ and AðTÞ in Fig. 2], but these results are very similar to the ones with C m ðTÞ ¼ C m ð0Þ (of course, only up to the limiting T a little above 1.6T Ch ) in Fig. 4. Thus, Fig. 4 uses a different scale than Fig. 3, i.e., only the mass interval between 0.55 and 1.05 GeV, so as to zoom in on the η-η 0 complex and better discern its various overlapping curves, including M U A ð1Þ ðTÞ. The second choice of C m ðTÞ enables in principle the calculation of χðTÞ and AðTÞ without any limiting T. Nevertheless, Fig. 3 does not reach higher than T ¼1.8T Ch , because the model chosen for the RLA part of our calculations seems to become unreliable at higher T's: the mass eigenvalues seem increasingly too high, since they tend to cross the sum of the lowest q þq Matsubara frequencies. Fortunately, by T=T Ch ¼ 1.8 the asymptotic scenario for the anomaly has been reached, as we explain in the next section where we give a detailed description of all pertinent results at T ≥ 0. Figure 2 shows how various magnitudes of current-quark masses m q influence the T dependence and size of qq condensates hqqi and pseudoscalar decay constants f qq calculated in our adopted model. Defined, e.g., in Sec. II A of Ref. [24], it employs the parameter values m u ¼ m d ≡ m l ¼ 5.49 MeV and m s ¼ 115 MeV. IV. RESULTS AT T ≥ 0 IN DETAIL For both condensates and decay constants, larger current-quark masses lead to larger "initial" (i.e., T ¼ 0) magnitudes and, what is even more important for the present work, to smoother and slower falloffs with T. The magnitude of (the third root of) the strange-quark condensate is the top dash-dotted curve in Fig. 2. Its T ¼ 0 value jhssij 1=3 ¼ 238.81 MeV remains almost unchanged until T ¼ T Ch , and falls below 200 MeV (i.e., by some 20%) only for T ≈ 1.5T Ch . On the other hand, the T ¼ 0 value of the isosymmetric condensates of the lightest flavors, hūui ¼ hddi ≡ hlli ¼ ð−218.69 MeVÞ 3 , is quite close to the chiral one, hqqi 0 ¼ ð−216.25 MeVÞ 3 , showing how well the chiral limit works for u and d flavors in this respect. Still, the small current masses of u and d quarks are sufficient to lead to a very different T dependence of the lightest condensates, depicted by the dashed curve. It exhibits a typical smooth crossover behavior around T ¼ T Ch , and while the decrease is much more pronounced than in the case of hssi, it differs qualitatively from the sharp decrease to zero exhibited by the chiral condensate [and thus also by the anomaly-related quantityχðTÞ defined by the LS relation (12)]. The isosymmetric pion decay constant f π ðTÞ ≡ f ll ðTÞ is the lower dashed curve in Fig. 2, starting at T ¼ 0 from our model-calculated value f π ¼ 92 MeV. It decreases rather quickly, in contrast to f ss ðTÞ [starting at f ss ðT ¼ 0Þ ¼ 119 MeV], the decay constant of the unphysical RLAss pseudoscalar. It exhibits a much "slower" T dependence, in accordance with the s-quark condensate hssiðTÞ. The smooth, monotonic decrease of AðTÞ after T ∼ 0.7T Ch reflects the degree of gradual, crossover restoration of the U A ð1Þ symmetry with T. 
How this is reflected in the masses in the η-η 0 complex also depends on the ratios of AðTÞ with f 2 π ðTÞ, f π f ss ðTÞ, and f 2 ss ðTÞ in Eqs. (16) and (17). M 2 NS S ∝ AðTÞ=½f π ðTÞf ss ðTÞ decreases comparably to AðTÞ 1=2 , and 2AðTÞ=f ss ðTÞ 2 decreases even faster. Thus, M S ðTÞ [Eq. (17)] monotonically becomes the anomaly-free M ss ðTÞ in basically the same way as in Ref. [9], except now this process is not completed at T ¼ T Ch but rather [due to the AðTÞ crossover] drawn out until T ≈ 1.15T Ch . These two limited increases of AðTÞ=f 2 π ðTÞ may be model dependent and are not important, but what is systematic and thus important is that the "light" decay constant f π ðTÞ makes ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi AðTÞ=f 2 π ðTÞ p more resilient to T than not only AðTÞ 1=4 itself, but also other anomalous mass contributions in Eqs. (16) and (17). Indeed, β Sho ðTÞ ¼ 2AðTÞ=f 2 π ðTÞ decreases only after T ≈ 0.95T Ch (contributing over a half of the η 0 mass decrease), and then again increases somewhat after T ≈ 1.15T Ch , to start definitively decreasing only after T ≈ 1.25T Ch , but even then slower than other anomalous contributions. This makes M NS ðTÞ larger enough than M S ðTÞ to increase ϕðTÞ to around 80°, and keep it there up to T ∼ 1.5T Ch (see Fig. 5). This explains how the masses of the physical mesons η 0 and η (thick and thin solid curves in Figs. 3 and 4), exhibit the decrease of the mass of the heavier partner η 0 which is almost as strong as in the case [9] of the abrupt disappearance of the anomaly contribution, while on the contrary the lighter partner η now shows no sign of a decrease in mass around T ¼ T Ch , let alone an abrupt degeneracy with the pion. The latter happens in the case with the sharp phase transition because the fast disappearance of the whole M U A ð1Þ around T Ch can be accommodated only by a sharp change of the state mixing (ϕ → 0) to fulfill the asymptotic NS-S scenario immediately after T Ch . (See in particular Fig. 2 in Ref. [9]. Note that in our approach M η 0 ðTÞ cannot decrease by much more than a third of M U A ð1Þ , since the RLA M ss ðTÞ is the lower limit of M η 0 ðTÞ both in Ref. [9] and here.) In the present crossover case, however, T ¼ T Ch does not mark a drastic change in the mixing of the isoscalar states, but η 0 stays mostly η 0 and η stays mostly η 8 . Then, ΔM 2 Nevertheless, in M η 0 [Eq. (20)], the anomalous contributions from Eqs. (16) and (17) are all added together. The partial restoration of U A ð1Þ symmetry around T Ch , where around a third of the total U A ð1Þ-anomalous mass M U A ð1Þ goes away, is consumed almost entirely by the decrease of the η 0 mass over the crossover. After T ≈ 1.15T Ch , M η 0 ðTÞ starts rising again, but this is expected since after T ≈ T Ch light pseudoscalar mesons start their thermal increase towards 2πT, which is twice the lowest Matsubara frequency of the free quark and antiquark. This rather steep joint increase brings all of the mass curves M P ðTÞ quite close after T ∼ 1.5T Ch . The kaon mass M K ðTÞ is not shown in Figs. 3 and 4 to maintain clarity by avoiding crowded curves, but at this temperature of the characteristic η-η 0 anticrossing, M K ðTÞ is roughly in between M π ðTÞ and the η mass, and is soon crossed by M η ðTÞ which tends to become degenerate with M π ðTÞ (as detailed below). 
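To make the role of the mass-matrix structure recalled earlier more concrete, the following minimal Python sketch diagonalizes an NS-S matrix of that form at T = 0. All input numbers are generic vacuum-scale values chosen purely for illustration (they are not the model parameters or results of this paper), β is estimated from the WVR-motivated combination 2χ_YM/f_π², and f_ss is approximated as 2f_K − f_π with an assumed f_K:

import numpy as np

# Illustrative vacuum-scale inputs (GeV); NOT the paper's model values.
M_pi = 0.138                              # pion mass
M_K = 0.496                               # kaon mass
f_pi = 0.0924                             # pion decay constant
f_K = 0.110                               # kaon decay constant (assumed)
M_ss = np.sqrt(2.0 * M_K**2 - M_pi**2)    # unphysical s-sbar pseudoscalar mass
f_ss = 2.0 * f_K - f_pi                   # approximate s-sbar decay constant
X = f_pi / f_ss                           # flavor-breaking suppression factor
chi_YM = 0.191**4                         # YM topological susceptibility (GeV^4)
beta = 2.0 * chi_YM / f_pi**2             # anomalous mass-shift parameter (WVR-motivated estimate)

# Isoscalar mass-squared matrix in the NS-S basis (structure as recalled above)
M2 = np.array([[M_pi**2 + 2.0 * beta, np.sqrt(2.0) * beta * X],
               [np.sqrt(2.0) * beta * X, M_ss**2 + beta * X**2]])

eigvals, eigvecs = np.linalg.eigh(M2)     # eigenvalues returned in ascending order
M_eta, M_eta_prime = np.sqrt(eigvals)     # lighter eigenstate ~ eta, heavier ~ eta'
phi = np.degrees(np.arctan(abs(eigvecs[1, 0] / eigvecs[0, 0])))  # magnitude of the NS-S mixing angle

print(f"M_eta ~ {1e3*M_eta:.0f} MeV, M_eta' ~ {1e3*M_eta_prime:.0f} MeV, phi ~ {phi:.0f} deg")

With these illustrative inputs the sketch returns roughly M_η ≈ 0.57 GeV, M_η′ ≈ 0.98 GeV, and φ ≈ 45°, i.e., values in the neighborhood of the physical masses; the point is only to show how the anomalous term β and the flavor-breaking factor X, fed through a 2 × 2 diagonalization, generate the η-η′ pattern discussed in the text.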
The rest of M U A ð1Þ ðTÞ [melting as 2 ffiffiffiffiffiffiffiffiffiffi AðTÞ p =f π ðTÞ] under 1.5T Ch is sufficiently large to keep M NS ðTÞ > M S ðTÞ and ϕ ≈ 80°. So a large ϕ makes θ positive, but not very far from zero, so that there we still have η 0 ≈ η 0 and η ≈ η 8 . This is also a fairly good approximation for T > 1.25T Ch , but there an even better approximation is η 0 ≈ η NS , M η 0 ðTÞ ≈ M NS ðTÞ and η ≈ η S , M η ðTÞ ≈ M S ðTÞ. V. SUMMARY, DISCUSSION, AND CONCLUSIONS We have studied the temperature dependence of the masses in the η 0 − η complex in the regime of the crossover restoration of chiral and U A ð1Þ symmetry. We relied on the approach of Ref. [11], which demonstrated the soundness of the approximate way in which the U A ð1Þ-anomaly effects on pseudoscalar masses were introduced and combined [24,27,[34][35][36][37] with chirally well-behaved DS RLA calculations in order to study η 0 and η. For T ¼ 0, this was demonstrated [11] model independently, with the only inputs being the experimental values of pion and kaon masses and decay constants, and the lattice value of the YM topological susceptibility. However, at T > 0 dynamical models are still needed to generate the temperature dependence of nonanomalous quantities through DS RLA calculations, and in this paper we used the same chirally correct and phenomenologically well-tested model as in numerous earlier T ≥ 0 studies (see, e.g., Refs. [9,24,31] and references therein). Following Ref. [11], we assumed that the anomalous contribution to the masses is related to the full-QCD topological charge parameter (18), which contains the massive quark condensates. They give us the chiral crossover behavior for high T. This is crucial, since lattice QCD calculations have established that for the physical quark masses, the restoration of the chiral symmetry occurs as a crossover (see, e.g., Refs. [29,48,49] and references therein) characterized by the pseudocritical transition temperature T Ch . Nevertheless, what happens with the U A ð1Þ restoration is still not clear [48,[50][51][52]. Whereas, e.g., Ref. [29] found its breaking as high as T ∼ 1.5T Ch , Ref. [53] found that above the critical temperature U A ð1Þ is restored in the chiral limit, and the JLQCD Collaboration [52] discussed the possible disappearance of the U A ð1Þ anomaly and pointed out the tight connection with the chiral symmetry restoration. Hence, there is a need to clarify "if, how (much), and when" [48] U A ð1Þ symmetry is restored. In such a situation, we believe instructive insight can be found in our study of how an anomaly-generated mass influences the η-η 0 complex, although this study is not done at the microscopic level. Since the JLQCD Collaboration [52] has recently stressed that the chiral symmetry breaking and U A ð1Þ anomaly are tied for quark bilinear operators (as, e.g., in our Eqs. (12), (13), (18) and (19), where the chiral symmetry breaking drives the U A ð1Þ one through qq condensates), we again recall how Ref. [11] provided support for the earlier proposal of Ref. [9] relating DChSB to the U A ð1Þ-anomalous mass contributions in the η 0 -η complex. This adds to the motivation to determine the full-QCD topological charge parameter (18) on the lattice from simulations in full QCD with massive, dynamical quarks [besides the original motivation [13,14] to remove the systematic Oð1=N c Þ uncertainty of Eq. (15)]. More importantly, this connects the U A ð1Þ symmetry breaking and restoration to those of chiral symmetry. 
It connects them in basically the same way in both Refs. [9,11] (and here), except that the full-QCD topological charge parameter (18) enables the crossover U A ð1Þ restoration by allowing the use of the massive quark condensates. But, if the chiral condensate (i.e., of massless quarks) is used to extend the approach of Ref. [11] to finite temperatures, the T > 0 results are, in essence, very similar to those of Ref. [9]: the quick chiral phase transition leads to quick U A ð1Þ symmetry restoration at T Ch (consistent with Ref. [53]), which causes not only the empirically supported [5] decrease of the η 0 mass but also an even larger η mass decrease; if M 2 U A ð1Þ ðTÞ ∝ βðTÞ → 0 abruptly when T → T Ch , Eq. (10) mandates that M η ðT → T Ch Þ → M π ðT Ch Þ equally abruptly (as in Ref. [9]). However, no experimental indication for this has ever been seen, although this is a more drastic decrease than for the η 0 meson. The present paper predicts a more realistic behavior of M η ðTÞ thanks to the smooth chiral restoration, which in turn yields the smooth, partial U A ð1Þ symmetry restoration (as far as the masses are concerned) making various actors in the η-η 0 complex behave quite differently from the abrupt phase transition (such as that in Ref. [9]). In particular, the η mass is now not predicted to decrease, but to only increase after T ≈ T Ch , just like the masses of other (almost-) Goldstone pseudoscalars, which are free of the U A ð1Þ anomaly influence. Similarly to T ¼ 0, η agrees rather well with the SUð3Þ flavor state η 8 until the anticrossing temperature, which marks the beginning of the asymptotic NS-S regime, where the anomalous mass contributions become increasingly negligible and η → η NS . In contrast to η, the η 0 mass M η 0 ðTÞ does decrease similarly to the case of the sharp phase transition, where its lower limit [namely, M ss ðTÞ] is reached at T Ch [9]. Now, M η 0 ðTÞ at its minimum (which is only around 1.13T Ch because of the rather extended crossover) is some 20 to 30 MeV above M ss ðTÞ, after which they both start to grow appreciably, and M η 0 ðTÞ is reasonably approximated by M η 0 ðTÞ up to the anticrossing. The effective restoration of U A ð1Þ regarding the η-η 0 masses only occurs beyond the anticrossing at T ≈ 1.5T Ch , in the sense of reaching the asymptotic regime M η 0 ðTÞ → M ss ðTÞ. Another, less qualitatively illustrative but more quantitative criterion for the degree of U A ð1Þ restoration is that there, at T ≈ 1.5T Ch , M U A ð1Þ is still slightly above 40%, and at T ≈ 1.8T Ch still around 14% of its T ¼ 0 value. Thus, the decrease to the minimum of M η 0 ðTÞ around 1.13T Ch in any case signals only a partial U A ð1Þ restoration. This M η 0 ðTÞ decrease is around 250 MeV, which is consistent with the current empirical evidence claiming that it is at least 200 MeV [5]. For comparison with some other approaches that explore the interplay of the chiral phase transition and axial anomaly, note that the η 0 mass decrease around 150 MeV is found in the functional renormalization group approach [54]. A very η 0 AND η MESONS AT HIGH T WHEN THE … PHYS. REV. D 99, 014007 (2019) recent analysis within the framework of the Uð3Þ chiral perturbation theory found that the (small) increase of the masses of π, K, and η after around T ∼ 120 MeV, is accompanied by the decrease of the η 0 mass, but only by some 15 MeV [55]. 
Admittedly, the crossover transition leaves more space for model dependence, since some model changes that would make the crossover even smoother would reduce our η 0 mass decrease. Nevertheless, there are also changes that would make it steeper, and those may, for example, help M η 0 ðTÞ saturate the M ss ðTÞ limit. Exploring such model dependences, as well as attempts to further reduce them at T > 0 by including more lattice QCD results, must be relegated to future work. However, here we can already note a motivation for varying the presently isosymmetric model current u-and d-quark mass of 5.49 MeV. Since it is essentially a phenomenological model parameter, it cannot be quite unambiguously and precisely related to the somewhat lower Particle Data Group values m u ¼ 2.2 þ0.5 −0.4 MeV and m d ¼ 4.70 þ0.5 −0.3 MeV [56]. Still, their ratio m u =m d ¼ 0.48 þ0.07 −0.08 is quite instructive in the present context, since the QCD topological susceptibility χ [Eq. (19)] and charge parameter A [Eq. (18)] contain the current-quark masses in the form of harmonic averages of m q hqqi (q ¼ u, d, s). Since a harmonic average is dominated by its smallest argument, our χ and A are dominated by the lightest flavor, providing the motivation to venture beyond the precision of the isospin limit and in future work explore the maximal isospin violation scenario [57] within the present treatment of the η-η 0 complex.
v3-fos-license
2024-07-10T15:13:31.028Z
2024-07-01T00:00:00.000
271081793
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "120853f48a074e777109975efdac67b64ab2655d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46357", "s2fieldsofstudy": [ "Chemistry", "Environmental Science", "Engineering" ], "sha1": "d11768b5151d12ce97a579660130347b015e7ee4", "year": 2024 }
pes2o/s2orc
Review of the Interfacial Structure and Properties of Surfactants in Petroleum Production and Geological Storage Systems from a Molecular Scale Perspective Surfactants play a crucial role in tertiary oil recovery by reducing the interfacial tension between immiscible phases, altering surface wettability, and improving foam film stability. Oil reservoirs have high temperatures and high pressures, making it difficult and hazardous to conduct lab experiments. In this context, molecular dynamics (MD) simulation is a valuable tool for complementing experiments. It can effectively study the microscopic behaviors (such as diffusion, adsorption, and aggregation) of the surfactant molecules in the pore fluids and predict the thermodynamics and kinetics of these systems with a high degree of accuracy. MD simulation also overcomes the limitations of traditional experiments, which often lack the necessary temporal–spatial resolution. Comparing simulated results with experimental data can provide a comprehensive explanation from a microscopic standpoint. This article reviews the state-of-the-art MD simulations of surfactant adsorption and resulting interfacial properties at gas/oil–water interfaces. Initially, the article discusses interfacial properties and methods for evaluating surfactant-formed monolayers, considering variations in interfacial concentration, molecular structure of the surfactants, and synergistic effect of surfactant mixtures. Then, it covers methods for characterizing microstructure at various interfaces and the evolution process of the monolayers’ packing state as a function of interfacial concentration and the surfactants’ molecular structure. Next, it examines the interactions between surfactants and the aqueous phase, focusing on headgroup solvation and counterion condensation. Finally, it analyzes the influence of hydrophobic phase molecular composition on interactions between surfactants and the hydrophobic phase. This review deepened our understanding of the micro-level mechanisms of oil displacement by surfactants and is beneficial for screening and designing surfactants for oil field applications. Introduction The utilization of enhanced oil recovery (EOR) technology often follows the main production phase, also known as water flooding, to achieve a substantial augmentation in oil extraction [1].It is an effective way to extract oil and gas from unconventional reservoirs as well [2].The process entails the introduction of chemicals into geological reservoirs that are not naturally occurring, including carbon dioxide (CO 2 ) [3], steam [4], and chemical agents such as surfactants and polymers [5][6][7].Therefore, it is imperative to enhance our understanding of the dynamics of multicomponent complexes, such as microemulsions [8] and foams [9], inside porous media that inherently possess numerous interfaces.Under this circumstance, two major scenarios are perceived.These scenarios involve the water-oil interface, which refers to the contact between two immiscible liquids in microemulsion, and the gas-water interface, which refers to the interface between a gas phase and a liquid phase in foam systems.Moreover, the displacement efficiency in the EOR process is dependent on surface wettability (i.e., contact angles), pore morphology, and remaining oil saturation [7]. 
Figure 1 illustrates possible processes of entrapment and distribution of residual oil in a porous medium [10]. The oil phase possesses a comparatively high viscosity compared to the formation water present in the reservoirs. In strongly water-wet conditions, residual oil (i.e., the non-wetting phase) is trapped in large pores by a bypassing mechanism, i.e., a water flow path can form in the narrow pores, and the oil in large pores (but with small throats) can be bypassed. The oil droplet is trapped at the pore throat by capillary force due to the Jamin effect, which is defined as the resistance to liquid flow through capillaries caused by the presence of droplets [11], as shown in Figure 2. In contrast, the residual oil exhibits a non-continuous distribution within the porous medium in mixed-wet conditions. Permeability to the water phase might increase if oil is redistributed. Bridges of oil that impede permeability at low flow rates can be fractured at increased flow rates. The capillary force is inversely proportional to the capillary number (Ca), and the latter can be defined as [5]:

Ca = μv/(γ cos θ)

where μ and v represent the viscosity and the flow rate of the displacing phase, γ is the interfacial tension (IFT) between the aqueous phase and the oleic phase, and θ denotes the contact angle of oil-water-mineral systems. A lower oil-water IFT helps to relieve the Jamin effect and enhances oil recovery in unconventional reservoirs. To facilitate the advancement of EOR techniques, it is necessary to gain a comprehensive understanding of the efficient reduction of IFT at the oil-water interface, effective control of surface-active agent (surfactant) adsorption, and accurate assessment of the wetting properties of reservoir rocks [12]. It should be noted that the present review does not delve into the latter aspect, and readers interested in this matter are advised to refer to the work by Ahmadi and coworkers [13]. An alternative strategy involves enhancing sweep efficiency to mitigate the occurrence of fingering. This objective can be accomplished either by enhancing the mobility of residual oil through heating (e.g., injecting steam), which reduces the viscosity of the oil phase [14], or by limiting the mobility of the injectant by raising the viscosity of the displacing phase, as in foam flooding [7].

Foam stability is a significant challenge in the application of foam flooding, and it is subject to influence from multiple factors. Two interfacial phenomena, namely Laplace capillary suction (which controls static stability) and the Gibbs-Marangoni effect (which controls dynamic stability), are particularly important in controlling foam stability [15]. Figure 3 illustrates a microscopic mechanism for the static stability of CO2 foam. Point P represents the junction of three bubbles near each other, called the plateau junction (i.e., the plateau node). Because the IFT between the gas and water phases causes a pressure difference to exist across a curved surface, the pressure is greater on the concave side (i.e., on the inside of a bubble). The liquid pressure at the curved surface (point P) differs from that at point A within the foam film (i.e., the plateau border). It is subjected to an excess pressure ∆p, which can be defined as [16]:

∆p = 2γ/R

where R is the radius of the curvature and γ is the IFT at the gas-water interface. This is the Young-Laplace equation. As is observed, the radius of curvature is relatively small at the plateau junction (point P), while the radius of curvature is relatively large at the plateau border (point A), indicating that the pressure at point P in the foam film is smaller than that at point A. Thus, the liquid will automatically flow from points A to P, gradually thinning the foam film. This is one of the liquid discharge processes of the foam film, termed Laplace capillary suction [17]. A lower IFT causes less liquid to drain at the gas-water interface, thus facilitating the generation of foam and the preservation of a larger interfacial area, which are essential for keeping the foam stable. In addition, CO2 molecules can penetrate the CO2-water interface and diffuse from small bubbles into big bubbles. This process leads to Ostwald ripening, which is detrimental to foam stability. A surfactant-stabilized CO2-water interface can inhibit this phenomenon. Foam texture also plays a vital role in foam stability [17]. Since the interfacial elasticity of the surfactant monolayers and the shear viscosity of the foam film counteract the effect of a mechanical perturbation, a high interfacial elasticity and a large shear viscosity can be very helpful for the dynamic stability of CO2 foam [18,19]. The interfacial elasticity can be expressed by the following equation [20,21]:

ε = dγ/d(ln A) = A (dγ/dA)

where γ denotes the IFT at the interface and A represents the geometric area of the interface. As demonstrated in Figure 4, when a surfactant-stabilized film experiences a sudden disturbance, the existence of an IFT gradient causes the surfactant molecules to spread from regions with low IFT to regions with high IFT. This behavior compels water molecules to move in a direction opposite to the flow of liquid drainage, which is termed the Gibbs-Marangoni effect. With a large interfacial elasticity, the interfaces can rapidly restore their original flatness after being disturbed by applied forces. This indicates that the interface has high dynamic stability.
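To put rough numbers on the Laplace capillary suction described above (an illustrative estimate; these values are not taken from the cited studies): for a gas-water IFT of γ = 30 mN/m, a sharply curved plateau junction with R ≈ 10 μm experiences an excess pressure ∆p = 2γ/R ≈ 6 kPa, whereas a flatter region with R ≈ 100 μm experiences only about 0.6 kPa. A liquid-pressure difference of a few kilopascals is therefore available to drive drainage toward the more strongly curved region and thin the film, and since this suction scales directly with γ, lowering the IFT weakens it, consistent with the stabilizing role of a low gas-water IFT noted above.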
Surfactants are one of the most commonly used chemical agents in EOR methods [22,23]. They have one part that has an affinity for nonpolar media (hydrophobic carbon tails) and one that has an attraction for polar media (hydrophilic or ionic headgroups). This amphiphilic property allows them to adsorb onto the interfaces that are defined as a transition area in two-phase dispersions like foams, microemulsions, and suspensions [24]. Meanwhile, they can also increase the pore fluids' viscosity [25]. Based on the nature of the polar headgroups, surfactants can be classified into different categories: anionic (negatively charged), cationic (positively charged), nonionic (electrically neutral), and zwitterionic (carrying both a positive and a negative charge) surfactants. Depending on the attachment position of the hydrophilic headgroups on the alkyl chains, they can also be categorized into single-tail and multiple-tail structure surfactants [26]. Surfactant types and structures have been varied, and the correlations between their structure and interfacial properties have been developed [19,26]. The molecules in the intermediate region undergo unbalanced pulls from the bulk phases, leading to the occurrence of IFT. Adsorption of surfactants at the interfaces can significantly reduce the IFT values [27], and the physicochemical properties in this region are extremely important in all kinds of petroleum recovery and processing operations [28,29]. We are still facing many problems, like how to achieve good miscibility in a microemulsion (i.e., oil-water interface) system [30] and mitigate the surfactant loss due to adsorption on the mineral surface [31] under reservoir conditions, and the stability issue of a foam (i.e., gas-water interface) system remains to be solved for engineering applications [32]. Understanding the dynamics of surfactant adsorption and the influence of surfactant chemical structure on adsorption behaviors at various interfaces has been a significant thrust in the research area.

Figure 2. A schematic diagram of the Jamin effect at the pore throat (adapted from [11]). The oil droplet is squeezed in the pore throat and retained by capillary forces.

The investigations on the interfacial performance of surfactant-formed monolayers at the gas/oil-water interfaces are crucial to screening and evaluating surfactants. Great strides have been made in recent decades [5]. The experimental approach has been well established, and the workflow is as follows: a candidate surfactant is first selected. Then, studies on phase behavior and thermal stability are conducted. Subsequently, the surfactants with lower IFT measurements are adopted to perform adsorption and core flooding tests. An applicable surfactant should have the following features: good thermal stability under reservoir conditions (i.e., high temperatures and high pressures), being capable of reducing the IFT to 10^-2 mN/m, low retention on the surface of reservoir rock, salt tolerance at reservoir salinity, and availability with an acceptable cost [22]. However, the experiments are very complicated since the chemicals are always mixed with some impurities, which will interfere with the analyzed results. Realizing high temperatures and high pressures is very challenging in the laboratory. It is noteworthy that the macroscopic properties (e.g., IFT and viscosity) are majorly determined by the molecular arrangement and kinetic behaviors of the molecules from a nanoscale point of view. The conventional analysis process of indoor tests always lacks microscopic information due to the deficiency of temporal-spatial resolutions in experimental facilities, and it can hardly capture detailed pictures of molecular motions and the evolution of the system from a microscopic perspective. In contrast, molecular dynamics (MD) simulation is extremely
powerful in modeling the interactions between different molecules, even under harsh conditions [33,34].It is a valuable complement to experimental and analytical approaches, which can provide profound perceptions of evolution processes with simulation time for microscopic systems and facilitate in-depth post-analysis [35,36].As shown in Figure 5, by solving Newton's second law regarding all the particles in the simulated systems, the trajectory profile (i.e., coordinates and instantaneous velocities at each step) can be derived.Then, the thermodynamics and kinetic properties can be predicted using the statistical mechanics method [37].By correlating with experimental work, MD simulations can effectively reveal microscopic mechanisms that experiments cannot explain solely.Furthermore, it can also vividly visualize the evolution process of molecular configurations from a molecular point of view [36]. In recent years, with the rapid development of computer hardware, parallel computing, and graphics processing units (GPU) acceleration technology, as well as the advancement of theoretical and computational chemistry, the MD simulation method has been extensively employed in the research fields of chemical engineering and petroleum engineering [36,[38][39][40][41]. Figure 6 shows the relationships between MD simulations and experimental work.Similar to experiments, MD studies can be categorized into three aspects. (1) Investigation of molecular configurations of surfactant-formed monolayers at the interfaces between two immiscible liquids, which corresponds to IFT measurements that are derived from experiments.It is increasingly being recognized by the interfacial science community that the choice of proper surfactants requires a fundamental understanding of both dynamic and static aspects of the IFT changes that occur in the presence of added surfactants.Using the MD simulation method, the factors that influence the interfacial performance of the surfactants with different structures can be well clarified [33,42].At the same time, the interfacial configurations with molecular views (nanoscale) can be easily associated with the interfacial properties (macroscopic scale).( 2) In aqueous solutions, surfactants with higher concentrations undergo self-assembly behavior and tend to form organized aggregates of large numbers of molecules, which are termed micelles [43].(Note: the formation of micelles and their properties are out of the scope of this review.)The specific value of the threshold is termed the critical micelle concentration (CMC).Perturbation of the liquid-liquid interface is crucial in forming and breaking oil-in-water and water-in-oil microemulsions.The desired perturbation can be accomplished using surfactants in emulsifying or demulsifying formulations [44].This situation corresponds to the experimental study of phase behaviors.(3) Investigation of interactions between the added surfactants and mineral surface, which corresponds to the adsorption test in the laboratory [13].Understanding the retention and adsorption of polymers and surfactants in porous media is of key importance for designing viable EOR processes.These studies not only complement the experimental evaluation process of surfactant performance but also unravel the microscopic mechanisms for oil displacement.The MD simulation results can provide a theoretical basis and meaningful guidance for designing suitable surfactant formulations for specific reservoir conditions [45].The first aspect (i.e., surfactant monolayers 
at the interfaces of binary immiscible fluids) will be the main focus of this review.

The interfaces of two immiscible fluids can be divided into liquid-vapor interface (i.e., foam system) and oil-water interface (i.e., emulsion system) depending on the fluid's phase, as shown in Figure 7 (in this review, the water phase denotes the aqueous/solution phase in general). The objectives are the improvement of foam film stability and the realization of ultra-low IFT for oil displacement under reservoir conditions, respectively. The interactions between (1) the surfactant molecules, (2) the headgroups of surfactants and water molecules, and (3) the surfactant alkyl tails and the molecules in the hydrophobic phase are very crucial to the interfacial performance of the selected surfactants. The combined effects of these three interactions determine how well the surfactants perform at the interfaces.
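As a concrete illustration of the trajectory-generation step described above (integrating Newton's second law for every particle and then post-processing the resulting coordinates and velocities), the following minimal Python sketch propagates a toy Lennard-Jones fluid in a periodic box with the velocity-Verlet scheme. It is only a schematic stand-in for the production engines and force fields used in the cited studies (e.g., GROMACS or LAMMPS with surfactant-specific force fields); all parameters are illustrative reduced units and the function names are our own.

```python
import numpy as np

def lj_forces(pos, box, eps=1.0, sigma=1.0, rcut=2.5):
    """Pairwise Lennard-Jones forces with minimum-image periodic boundaries."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n - 1):
        rij = pos[i] - pos[i + 1:]              # vectors to all later particles
        rij -= box * np.round(rij / box)        # minimum-image convention
        r2 = np.sum(rij**2, axis=1)
        mask = r2 < rcut**2
        inv_r2 = sigma**2 / r2[mask]
        inv_r6 = inv_r2**3
        fmag = 24.0 * eps * (2.0 * inv_r6**2 - inv_r6) / r2[mask]
        fij = fmag[:, None] * rij[mask]         # force on particle i from each neighbor
        forces[i] += fij.sum(axis=0)
        forces[i + 1:][mask] -= fij             # Newton's third law
    return forces

def velocity_verlet(pos, vel, box, dt=0.005, steps=1000, mass=1.0):
    """Integrate Newton's second law; returns the trajectory (positions per step)."""
    traj = []
    f = lj_forces(pos, box)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos = (pos + dt * vel) % box            # wrap back into the periodic box
        f = lj_forces(pos, box)
        vel += 0.5 * dt * f / mass
        traj.append(pos.copy())
    return np.array(traj)

# toy run: 64 particles on a cubic lattice in a periodic box (reduced LJ units)
rng = np.random.default_rng(0)
box = np.array([6.0, 6.0, 6.0])
grid = np.linspace(0.75, 5.25, 4)
pos = np.array(np.meshgrid(grid, grid, grid)).T.reshape(-1, 3)
vel = rng.normal(0.0, 0.5, pos.shape)
trajectory = velocity_verlet(pos, vel, box, steps=200)
print(trajectory.shape)   # (200, 64, 3): coordinates at every step
```

From such a trajectory, thermodynamic and kinetic observables are then obtained by statistical-mechanical averaging, which is what the analyses discussed in the following sections do at scale.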
The initial setup of the interfacial systems in MD simulations often employs the following geometry: a slab of liquid is positioned between two slabs of vapor (or another liquid) along the Z-axis, which is called the sandwich model (it consists of two phases). Under these circumstances, two interfaces parallel to the X-Y plane will be spontaneously generated. Each bulk phase is bounded by two independent interfaces. Using periodic boundary conditions allows for conditions representative of essentially infinite interfaces. The dimension of the simulation box can be determined by specifying the number of molecules desired and the expected phase density of each slab, whose values can be derived from experimental work. It should be noted that the lateral length (along the Z-axis) of the simulation box should be sufficiently greater than the size of the interfacial area (along the X-Y plane). The isobaric-isothermal-isointerface area ensemble (i.e., NPnAT) is recommended for the MD simulation. In this ensemble, normal pressure (perpendicular to the interface, also called bulk pressure) is maintained constant by adjusting the lateral length (i.e., L_Z) of the simulation
box. Meanwhile, the size of the interfacial area is also invariable during the simulation process. It is suggested to examine the potential energy of the systems and IFT values to identify whether the systems reach an equilibrium state. Note: The equilibration process at the water-surfactant-oil interface may require several microseconds due to the aggregation behaviors of the surfactants, which are much longer than previously reported simulated times [46].

This paper reviews the recent applications and advancements of the MD simulation method for surfactants in petroleum production and geological storage systems. It summarizes the surfactants' interfacial performance at various interfaces and discusses the factors that influence the resultant interfacial properties in subsequent sections. The interfacial properties of the surfactant-formed monolayers consist of the IFT, surface pressure-area (Π − A) isotherms, interface formation energy, and interfacial elasticity. MD simulation enables the efficient generation of diverse molecular models for various surfactants, allowing the computation of the systems' thermal dynamics and kinetic properties within a condensed timeframe. When the interfaces are perpendicular to the Z-axis, the IFT can be calculated using a microscopic stress tensor, which is derived from the equation below [47]:

γ = ∫_{−∞}^{+∞} [P_N(z) − P_T(z)] dz,   (4)

where γ represents the IFT, P_N(z) is the bulk pressure (i.e., P_ZZ, also called normal pressure), and P_T(z) is the tangential pressure (i.e., the average of P_XX and P_YY, also called lateral pressure). The integral is defined over the boundary layer and can be extended to infinity. Note: If we consider a localized area of a specific geometry, the equation can be applied to a nonplanar surface (such as a spherical shape) [48]. The pressure tensor method is the predominant approach for determining the IFT for pure fluids and fluid mixtures. The basis of this method is to calculate the components of the diagonal element of the inhomogeneous pressure tensor P_kk(z) using the Irving-Kirkwood (IK) formulation [49], which then feeds into Equation (4) to be employed to predict the IFT. In the IK method, the pressure tensor element P_kk(z) is given by the expression below [50]:

P_kk(z) = ρ(z) k_B T + (1/A) ⟨ Σ_{i=1}^{N−1} Σ_{j>i} [(r_ij)_k (f_ij)_k / |z_ij|] θ((z − z_i)/z_ij) θ((z_j − z)/z_ij) ⟩,

where the subscript kk denotes the spatial coordinate, either X, Y, or Z, k_B is Boltzmann's constant, T is the absolute temperature, A is the interfacial area, N is the number of molecules, and the double sum involves the force on molecule i due to molecule j: f_ij is the force on molecule i due to molecule j, and r_ij represents the distance between molecules i and j. Here, ρ(z) is the local number density at z, z_ij = z_j − z_i, and the unit step functions θ restrict the sum to molecule pairs whose connecting line crosses the slab located at z.

The prediction of interfacial elasticity can be used to evaluate the resistance to mechanical disturbance applied to the monolayers at the interface. Equation (3) can be written as follows [19]:

ε = ⟨ dγ_i / d ln(SAPM_i) ⟩,

where γ_i and SAPM_i represent the IFT value and surface area per molecule for the surfactant at the ith concentration, and ⟨···⟩ denotes the statistical average.
It offers the benefit of studying changes in surfactant behaviors with increasing or decreasing concentration while also evaluating the interfacial elasticity of the monolayers. Admittedly, the experimental measurements are dependent on the frequency of perturbation. When the frequency is extremely low (e.g., 0.1 Hz), the surfactant molecules in solutions can rapidly move to the interface, leading to considerable uncertainty in the measurement. According to our experience [19], past simulation work [21], and experiments [51], the data at ~10 Hz can be reasonably reproduced by MD simulation studies.

The monolayers are typically characterized in experiments by surface pressure-area (Π − A) isotherms, defined as a measurement at a constant temperature of surface pressure as a function of the available area for each molecule in the monolayers. The surface pressure of the monolayers can be calculated from the interfacial tension according to the following equation [20]:

Π(A) = γ_0 − γ(A),

where A is the area per surfactant molecule, γ_0 is the IFT of the pure gas/oil-water interfaces, and γ is the IFT of the interfaces with the surfactant monolayer. As shown in Figure 8, the curves of the Π − A isotherms can be generally classified into different monolayer phases (gas-like phase, liquid-expanded phase, and liquid-condensed phase) with different slopes, and the slopes become sharper as the monolayer becomes more compact.

The interface formation energy (IFE) can be calculated to evaluate the stability of the interfaces and find the most probable interfacial concentration. The minimum IFT value always corresponds to the lowest IFE value. The parameter can be obtained from the following equation [52]:

IFE = [E_total − E_gas/oil-water − n·E_surfactant,single] / n,

where E_total denotes the total energy of the entire system, E_surfactant,single denotes the energy of a single surfactant molecule calculated from a separate MD simulation in a vacuum at the same temperature, E_gas/oil-water denotes the energy of a pure gas/oil-water system obtained from a separate MD simulation with the same number of molecules (gas/oil/water) used in the total system at the same temperature, and n is the total number of surfactant molecules. The lower IFE values indicate that inserting an extra surfactant molecule requires less energy to go into the interfaces.
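As a worked illustration of how the quantities above are typically evaluated in post-processing, the sketch below integrates a normal/tangential pressure profile to obtain the IFT of a slab (sandwich) system and then converts it to a surface pressure via Π = γ_0 − γ. The profiles here are synthetic placeholders; in practice they would come from the simulation engine's pressure-profile output, and the unit conversion assumes pressures in bar and distances in nm.

```python
import numpy as np

def ift_from_pressure_profile(z, p_n, p_t, n_interfaces=2):
    """Kirkwood-Buff style IFT: integrate (P_N - P_T) over z.

    z   : bin centers along the interface normal (nm)
    p_n : normal pressure profile P_zz(z) (bar)
    p_t : tangential pressure profile 0.5*(P_xx + P_yy)(z) (bar)
    A sandwich (slab) model contains two interfaces, hence the division.
    1 bar*nm = 0.1 mN/m.
    """
    f = p_n - p_t
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))   # trapezoid rule, bar*nm
    return 0.1 * integral / n_interfaces                     # mN/m

def surface_pressure(gamma_clean, gamma_with_surfactant):
    """Pi = gamma_0 - gamma, as in the Pi-A isotherm definition."""
    return gamma_clean - gamma_with_surfactant

# illustrative synthetic profiles (two Gaussian dips mimic the two interfaces)
z = np.linspace(0.0, 12.0, 600)
p_n = np.full_like(z, 1.0)                                   # ~1 bar everywhere
p_t = 1.0 - 900.0 * (np.exp(-((z - 3.0) ** 2) / 0.08)
                     + np.exp(-((z - 9.0) ** 2) / 0.08))     # dips at the interfaces

gamma = ift_from_pressure_profile(z, p_n, p_t)
print(f"IFT ~ {gamma:.1f} mN/m, surface pressure vs. a 72 mN/m clean interface: "
      f"{surface_pressure(72.0, gamma):.1f} mN/m")
```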
Effect of Interfacial Concentration and Molecular Structure

With increasing surfactant concentration, the IFT between a fluid (e.g., oil) and the surfactant aqueous solution decreases linearly below a specific concentration, showing an inflection point that can be regarded as the CMC [17,53]. In practice, the CMC can be employed to indicate a concentration of the solution at which the interface has been entirely covered by the surfactant molecules (i.e., saturation coverage). The MD simulation studies can reproduce the linear relationship well [16,19,26]. Please note that the interfacial concentration (i.e., an interfacial area that is occupied by each surfactant molecule) is used in the simulation studies since the bulk phase (in nanoscale) in the simulation is much smaller than that in the experimental cases (in millimeter scale). The MD simulation study can also characterize the saturation coverage at the interfaces. In summary, CMC is a useful concept to link simulations and experiments. However, to directly determine the CMC, it is essential to establish a correlation between the interfacial concentration and the concentration in the bulk solution. Fan and coworkers [16] have investigated the effect of interfacial concentration, temperature, and pressure on the static stability (i.e., IFT) of the sodium dodecyl sulfate (SDS)-stabilized CO2 foam film system. Figure 9 shows that as the interfacial concentration of surfactant rises, the IFT values fall linearly. The results can be well understood as SDS molecules form a dense and thick monolayer under high-concentration conditions, preventing CO2 and water molecules from contacting each other. Noteworthily, the linear relationship can correspond well with experimental results [53]. Doing a series of MD simulations at different interfacial concentrations helps us find the saturation coverage of the surfactant at the CO2-water interface. At higher concentrations, the IFT may become negative (due to a curved interface) [26] or show an inflection point (due to the formation of micelles near the interface) [19]. Then, the concentration of saturation coverage was used in the subsequent studies (i.e., the effect of temperature and pressure on the interfacial properties). Low temperatures and high pressures are favorable conditions for reducing IFT values due to the enhancement of interactions between the CO2 molecules and the surfactant alkyl tails, thus inhibiting the interactions between CO2 and water molecules.
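The following sketch mimics the "series of simulations at different interfacial concentrations" workflow described above: given IFT values from several runs, it locates the transition between the linear (dilute) branch and the flat (saturated) branch with a crude two-segment fit and returns the midpoint between the branches as a rough estimate of the saturation coverage. The data points are invented for illustration, and the breakpoint criterion is our own simplification, not the procedure used in the cited studies.

```python
import numpy as np

def saturation_coverage(conc, ift):
    """Locate the kink in an IFT-vs-interfacial-concentration series.

    Fits two straight lines (dilute branch / saturated branch) for every
    possible breakpoint and returns the breakpoint with the lowest total
    squared residual, a crude stand-in for visual inspection of the curve.
    """
    conc, ift = np.asarray(conc, float), np.asarray(ift, float)
    best = (np.inf, None)
    for k in range(2, len(conc) - 2):                  # need >=2 points per branch
        res = 0.0
        for x, y in ((conc[:k], ift[:k]), (conc[k:], ift[k:])):
            coeff = np.polyfit(x, y, 1)
            res += np.sum((np.polyval(coeff, x) - y) ** 2)
        if res < best[0]:
            best = (res, 0.5 * (conc[k - 1] + conc[k]))  # midpoint between branches
    return best[1]

# illustrative data: IFT (mN/m) falls linearly, then flattens past saturation
conc = np.array([0.5, 0.8, 1.1, 1.4, 1.7, 2.0, 2.3, 2.6, 2.9])   # molecules/nm^2
ift = np.array([55.0, 47.0, 39.0, 31.0, 23.0, 15.0, 13.5, 13.0, 12.8])
print(f"estimated saturation coverage ~ {saturation_coverage(conc, ift):.2f} molecules/nm^2")
```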
The architecture of surfactant molecules has a significant impact on the interfacial properties.Jia and coworkers [19] have investigated the effects of the molecular structure of surfactants on the dynamic stability (i.e., interfacial elasticity) at the CO 2 -water interfaces.According to the data presented in Figure 10, the ranking of CO 2 foam stability enhancement capacity is as follows: sodium polyoxyethylene alkyl ether sulfate (AES) > SDS > sodium decyl sulfonate (SDSn) > sodium dodecylbenzene sulfonate (SDBS) > sodium laurate (SLA), which aligns well with the results obtained from experiments.Using the MD simulation method, the researchers have discovered that lnA = 0 (note: A denotes the interfacial area per surfactant molecule, and here it is equal to 1 nm 2 /molecule) serves as a critical point.When the interfacial concentration is low (i.e., A is larger than 1 nm 2 /molecule, and lnA > 0), the difference in IFT variations between different surfactant monolayers is insignificant.Thus, the interfacial elasticities of the monolayers are similar.In contrast, when the interfacial concentration is high (i.e., A is less than 1 nm 2 /molecule, and lnA < 0), the impact of molecular structure on the reduction of IFT becomes increasingly evident with the concentration increasing.The IFT variations of the monolayers distinctly vary, and the difference in interfacial elasticity of various monolayers is very clear.Consequently, this part (i.e., lnA < 0) plays a vital role in determining the interfacial elasticity of the monolayers.Figure 10c illustrates the predicted interfacial elasticity using data points in the range of lnA < 0. Additionally, due to the occurrence of EO chains in the AES surfactant, the carbon tails are adequately solvated by the CO 2 phase while maintaining a balanced interaction between the headgroup and the water phase, thus significantly improving the hydrophilic-CO 2 -philic balance (HCB).Though the SDBS surfactant exhibits favorable features (i.e., large interfacial width and a high degree of interfacial coverage) as an EOR chemical agent, its performance at the CO 2 -water interface is very limited since the presence of the phenyl group makes SDBS surfactant too hydrophobic to a CO 2 phase (thus a poor HCB).However, these factors are very beneficial to the IFT reduction at the oil-water interface.Linear alkylbenzene sulfonates (LAS) possess relatively simple headgroups and several hydrophobic tails with diverse structures.Thus, many MD simulations have chosen LAS as a research object to study the influence of the molecular structure of surfactants on their interfacial properties [35,52,[54][55][56][57]. Chen et al. [56] discovered that adding a certain length of the branched structure to SDSn could make the performance better at the air-water interface.Zhao et al. 
[57] found that among the sodium hexadecane benzene sulfonate (SHBS) isomers, SHBS-1C16 with a benzene ring on the first carbon atom has stronger intermolecular interactions and greater electrostatic repulsion between the headgroups due to its long single-chain tail, which results in a disordered interfacial structure at the air-water interface. In contrast, SHBS-5C16, with the benzene ring on the 5th carbon atom, has two hydrophobic tails with different lengths. The steric hindrance effect of the short tail can inhibit the aggregation behavior of the surfactant molecules, thus improving the stability of the monolayers. As shown in Figure 11, Jang et al. [52] further proved that the twin-tailed structure of SHBS-4C16 holds the best interfacial performance at the decane-water interface. It has the lowest IFT and the lowest interfacial formation energy. Meanwhile, the formed monolayer has the largest interfacial width, and the molecules are closely distributed inside the monolayer. He et al. [55] conducted 36 simulations on the adsorption of LAS-mCn (m = 1~6, n = 1~11) with different carbon tail lengths and different benzene ring attachment positions at the air-water interface. They pointed out that LAS with short carbon tails have high solubility and flowability in the aqueous phase, and their excessive hydrophilicity prevents adsorption at the interface. LAS-1Cn possesses a single-tailed structure and is apt to aggregate at the interface, which deteriorates the structural integrity of the formed monolayers. Only LAS with a high degree of branchedness and long carbon tails can cover the interface and minimize the IFT at the interface to the fullest extent. In addition, it has been found that the salt tolerance of twin-tailed SHBS-5C16 is better than that of single-tailed SHBS-1C16 [54], and the salt tolerance of SDBS isomers is 6C12 > 4C12 > 1C12 [35]. Concerning the nonionic surfactant, take C12E3 (i.e., triethyleneglycol 1-dodecyl ether) and C6C5CE3 (i.e., triethyleneglycol 6-dodecyl ether) as examples. The twin-tailed C6C5CE3 has better salt tolerance than the single-tailed C12E3 at the water-dodecane interfaces [58]. (We will further discuss the effect of molecular architecture and counterions in the solutions in Sections 2.2 and 2.3.) In summary, the twin-tailed structure can not only lead to lower IFT but also achieve better salt resistance compared with a single-tailed structure.
Moreover, Adkins and coworkers [59] showed that introducing extra-weak hydrophilic radicals, such as hydroxyl and ethoxy groups, into the molecular chain of the surfactants could also enhance their interfacial activities. Hou and coworkers [60,61] found that the interaction of oligomeric surfactants with water molecules and oil molecules is stronger than that of single-chain and dimer-typed surfactants, thus leading to the lowest IFT at the oil-water interface. Shi and coworkers [62] demonstrated that the Gemini surfactants, which utilize a linker group to associate two monomers, were more effective in reducing the IFT than the typical monomolecular surfactants at the oil-water interface. Han et al. [63] found that Gemini surfactants with shorter spacers exhibit better surface activity. In comparison, longer spacers bind more oil molecules to the carbon chain, reducing the surface activity. Wang et al.
[64] demonstrated that the self-assembled morphologies of Gemini surfactants change with the decrease in the spacer length. Tan and coworkers [65] investigated the effect of headgroup size on the interfacial performance of six isomers of alkyl benzene sulfonate (ABS) and found that the IFT at the decane-water interface gradually decreased with the increase of the number of substituent groups in the benzene ring structure and the increase of headgroup size to some extent (see Figure 12a). However, Gao and coworkers [66,67] showed that the interfacial performance is not necessarily better when the headgroup size is larger. As shown in Figure 12b, the IFT of nonylphenol-substituted dodecyl sulfonates (NPDS) at the air-water interface decreases and then rises with the increase in headgroup size, and 3-C12-NPDS has the lowest IFT and IFE. Consequently, the interfacial properties are the combined effect of changes in molecular chain structure, attached chemical groups, and the amount of surfactants (i.e., interfacial concentration) at the interfaces.
Synergistic Effect of Surfactant Mixtures

In practical application, a single surfactant usually cannot fully meet complex reservoir conditions, such as temperature, pressure, and salinity. Thus, it is suggested that a variety of surfactants be chosen at the same time [68,69]. The synergistic effect of the surfactant mixtures can significantly improve the interfacial performance compared with the single surfactant at the interfaces [5]. The adsorption behavior of mixed surfactants at the gas/oil-water interfaces with varying molar ratios was studied using MD simulations. The researchers showed that, as compared to pure surfactants, the monolayer formed by the adsorption of their mixture is more compact, thus leading to better interfacial activities. The synergistic effects of ionic surfactants are mainly due to the strong electrostatic interactions between anionic and cationic headgroups, which shield the electrostatic repulsion between the same electrically charged headgroups and lead to a smaller separation distance between the surfactant molecules (i.e., closely packed) at the interface [70,71]. The combination of anionic and cationic binary surfactant mixtures can lower the CMC value and reduce the IFT compared with individual surfactants. At low concentrations, surfactants with opposite charges pack as co-surfactants like Gemini, while at high concentrations, anionic and cationic surfactant mixtures generate closely packed adsorption layers at the interfaces with strong viscoelasticity and negligible diffusion exchange between the interface and bulk solutions.

However, Agneta et al. [72] reported that antagonism exists between the anionic and cationic surfactant mixtures under high salinity conditions at the gas-water interfaces. In contrast, strong synergism exists between the anionic/cationic and nonionic binary surfactant mixtures. Compared with anionic/cationic surfactants, zwitterionic surfactants simultaneously have both positive and negative electrically charged headgroups. Due to the large size of the headgroups, the surfactant molecules are apt to lie flat at the interface, leading to a disordered arrangement of the molecules and a loose monolayer. Wang et al. [21] found that the introduction of an appropriate amount of lauryl betaine (LB-12) could significantly improve the interfacial performance of sodium α-olefin sulfonate (AOS-14). When the ratio of AOS-14 to LB-12 equals 7:3 at the interface, the interfacial elasticity is the largest, and the binding energy is the lowest, indicating that the monolayer is the most stable. Its resistance to external perturbation is the strongest (the molecular insights are discussed in Section 2.2.3). Li et al.
[73,74] found that the surfactant mixtures of dodecyl sulfonate betaine (SB12-3) and SDBS have the most stable interface (i.e., the lowest IFT) when the ratio of SB12-3 to SDBS equals 4:6. As shown in Figure 13, with the increase in the fraction of SB12-3, the IFE decreases at the beginning and then rises from 50% concentrations. Gao et al. [75] reported that the presence of LB-12 surfactant can further improve the stability of alkyl polyoxyethylene carboxylate (AEC)-stabilized foam film. LB-12 can modulate the ordering of AEC at the air-water interface, and the electrostatic structure becomes denser with the increasing concentration of LB-12. In addition to well reproducing the interfacial properties of various surfactants, we can also effectively investigate the diffusion and aggregate behaviors of the surfactant molecules in the monolayers at the interface (Section 2.2), surfactant headgroups-aqueous phase interactions (Section 2.3), and surfactant alkyl tails-hydrophobic phase interactions (Section 2.4) in the vicinity of the interfaces using the MD simulation method.

Characterization of the Microstructure at the Interface

The interfacial properties are mainly determined by the microstructure at the interfaces. The MD simulation method can straightforwardly and quantitatively study the microstructure of the intermediate regions (i.e., interfaces) and surfactant behaviors within these regions. Furthermore, it can effectively correlate molecular configurations at the nanoscale and interfacial properties measured in the laboratory. The microstructure of the interfaces can be characterized by the mass density distribution of different components along the Z direction that is normal to the interface. The density profile of the ith component (without excess absorption at the interface) obtained from the simulations can be fitted using the following hyperbolic tangent function [52]:

ρ_i(z) = (1/2)(ρ_i^A + ρ_i^B) − (1/2)(ρ_i^A − ρ_i^B) tanh[(z − z_c)/d],

where ρ_i is the density of the ith component, ρ_i^A and ρ_i^B are its bulk densities on either side of the interface, z_c is the position of the Gibbs dividing surface, and d is the adjustable parameter related to the interfacial width. Furthermore, the distribution of surfactant headgroups at the interfaces can be well fitted by a Gaussian function [55], which can be expressed by the equation as follows:

ρ(z) = [N_s/(σ√(2π))] exp[−(z − z_p)²/(2σ²)],

where N_s is a constant that, in fact, refers to the number of atoms in each monolayer peak, z_p is the position of the peak center, and σ is the standard deviation, which shows the width of each peak.
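A minimal fitting example of the hyperbolic-tangent profile above is sketched below with scipy.optimize.curve_fit. It assumes a one-sided profile (bulk liquid decaying toward a dilute phase, a special case of the two-bulk form above), uses synthetic noisy data in place of a real trajectory-averaged density profile, and reports the "10-90" width, which for this functional form equals 2·arctanh(0.8)·d ≈ 2.2·d.

```python
import numpy as np
from scipy.optimize import curve_fit

def tanh_profile(z, rho_bulk, z_c, d):
    """One-sided density profile across a single interface:
    rho(z) = 0.5*rho_bulk*[1 - tanh((z - z_c)/d)] (bulk on the left, ~0 on the right)."""
    return 0.5 * rho_bulk * (1.0 - np.tanh((z - z_c) / d))

# synthetic water-density profile (kg/m^3) with a little noise, standing in for
# a profile averaged from an MD trajectory
rng = np.random.default_rng(1)
z = np.linspace(0.0, 6.0, 300)                         # nm
rho = tanh_profile(z, 1000.0, 3.2, 0.25) + rng.normal(0.0, 10.0, z.size)

popt, _ = curve_fit(tanh_profile, z, rho, p0=[900.0, 3.0, 0.3])
rho_bulk, z_c, d = popt
width_10_90 = 2.0 * np.arctanh(0.8) * d                # ~2.2*d for this functional form
print(f"Gibbs dividing surface at z_c = {z_c:.2f} nm, "
      f"'10-90' width = {width_10_90:.2f} nm")
```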
A common practice for defining the interfacial width for the liquid-vapor interface is the "10-90" criterion [66,77], which is the distance between two positions where the density varies from 10% to 90% of the density of the bulk phase. However, it becomes more complicated when surfactants are introduced at the liquid-liquid (e.g., oil-water) interfaces due to the presence of two sub-interfaces. In this case, the "90-90" criterion is suggested [52,76], which is the distance between two positions where the densities of water and oil are 90% of their bulk density. The interfacial width generally monotonically increases with the number of surfactant molecules at the interface since the surfactant molecules become more tightly packed and the carbon tails become more vertically oriented in relation to the interface [26]. When the interface is no longer flat, the definition of the interfacial width breaks down and no longer reflects the actual interfacial width. However, it does reflect the size of the undulations.

The interfacial coverage indicates the degree of integrity (i.e., the fraction of coverage) of the monolayers at the interface, which can also be quantitatively characterized by the following equation [26]:

φ = 1 − N_S/N_0,

where φ denotes the interfacial coverage, and N is the number of water molecules that are within 0.5 nm (an empirical parameter for the estimation) from the gas/oil phase. Subscripts 0 and S denote the pure oil/gas-water system and systems containing surfactants, respectively. According to the molecular capillary wave theory, the variations of IFT values at the oil-water interfaces are inversely proportional to the interfacial width [26,78], whereas the IFT values at the gas-water interfaces are influenced by both interfacial width and interfacial coverage [18].

The order parameter can be employed to assess the ordering degree of the surfactant alkyl tails in relation to the X, Y, and Z axes at the interfaces and can be defined as follows [26]:

S_CH = (1/2)⟨3 cos²θ − 1⟩,

where the order parameter S_CH characterizes the orientations of the segments/vectors (pointing from carbon atoms i − 1 to i + 1) at the interface, θ represents the angle between the segments/vectors and the axes, and ⟨···⟩ denotes the ensemble average.
If S_CH is equal to zero, it signifies that the orientation of the segments/vectors is disordered. When the values approach 1 or −0.5, it indicates that the alkyl tails tend to align perpendicular to or parallel to the interface. A complementary analysis of the inclined angle of the alkyl chains helps evaluate the overall trend of the tail orientation of a given alkyl chain, which is accomplished by creating a vector between the base of the chain (the first carbon atom that is next to the headgroup) and the terminal carbon atom. With this vector defined, we can take the projection on the monolayer normal to determine the degree of inclination. Finally, utilizing the radial distribution function (RDF) allows for the characterization of the average radial packing of atoms within a given system. It is expressed as follows [19]:

g(r) = n(r)/(4πr²Δr·ρ),

where g(r) is the RDF, n(r) is the average number of atoms in a shell of width Δr at a distance r from the reference atom, and ρ is the average atom density. The presence of peaks identified at long ranges indicates a high degree of ordering. It can characterize the typical arrangement of particles at the interface and in the bulk phase. Furthermore, it can also be used to estimate the potential of mean force (PMF).

Effect of Interfacial Concentration on the Packing State of the Surfactants

The interfacial concentration, a crucial factor, plays a significant role in modulating the spontaneous organization of surfactant molecules. Increasing the concentration of surfactants triggers a process of self-assembly driven by noncovalent interactions between the molecules, leading to the formation of aggregates at the interface. The MD simulation method, a powerful tool, allows for a detailed study of the effect of interfacial concentration on the packing state of surfactants, particularly under extreme conditions. This method also provides a visual representation of the evolution process of the monolayers from an atomistic perspective, enabling a comprehensive analysis of the influence factors induced by molecular architecture. In the MD simulation method, surface area per molecule (SAPM) is preferred to describe the interfacial concentration of the surfactants. Given that the simulated system's interfacial area (size of cross-section area) is invariable, SAPM values become smaller with the increase in surfactant molecule number (until it reaches saturation coverage) at the interfaces. Accordingly, the geometric configurations and interfacial properties of the surfactant-formed monolayers change. Figure 14 illustrates the variations of the monolayer's morphology at the oil-water interface formed by internal olefin sulfonate (IOS) with the increase in interfacial concentration [26]. Based on SAPM values, the evolution process of the monolayers can be divided into four stages, as follows: (1) When the number of surfactant molecules at the interface is very few, the SAPM value is large (2.5 and 1.25 nm² per surfactant molecule), as shown in panels a and b. The separation distances between the molecules are relatively large. In this circumstance, the interaction force between each other can be negligible. This state is called the gas-like (GL) phase. Since the molecular arrangements of the monolayers are sparse and the resulting interfacial widths are small, the interfacial performance of the monolayer is poor, and many hydrocarbon molecules can directly contact water molecules at the intermediate region via the gap that the surfactant molecules are not occupying. (2) As the
number of surfactant molecules increases at the interface, SAPM values decrease, as shown in panels c and d, and the interaction force between each surfactant molecule is enhanced. This state is called the liquid-expanded (LE) phase. At this moment, the monolayers become denser than those in the GL phase, and the orientation angles of surfactant alkyl tails are randomly distributed toward the oil/gas phase. The void space that remains in the monolayers means that contact between oil/gas and water molecules still occurs. (3) When the number of surfactant molecules reaches the saturation concentration at the interface, SAPM reaches the critical minimum point (0.5 nm² per surfactant molecule), as shown in panel e. The molecular arrangement of the monolayers changes from a loosely packed pattern to a densely packed pattern, marking the transition to the liquid-condensed (LC) phase. In the LC phase, surfactant molecules are distributed close to each other, and most of the surfactant alkyl tails tend to be perpendicular to the interface. The absence of void space in the monolayers and the resulting largest interfacial widths allow for the best performance, effectively preventing the interactions and contacts between oil/gas and water molecules in the intermediate region. (4) When the interfacial concentration exceeds the concentration of saturation coverage, the interface becomes visibly curved (a concave surface), as shown in panel f. The interface becomes unstable and can undergo mechanical buckling to increase the interfacial area so that excessive surfactant molecules can be adsorbed at the contact surface between the oil/gas and water phases. In this circumstance, some surfactant molecules in the monolayers can also escape from the interface and form stable 3D structures such as vesicles and bilayers. As a result, the stability of the monolayer can recover. The interfacial properties change to different degrees as the shape of the surfactant monolayers changes over time.
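To connect the packing-state picture with the order-parameter and tilt-angle definitions given earlier, the sketch below evaluates S_CH for the C(i−1)→C(i+1) segment vectors of a set of alkyl tails against the interface normal, together with the overall tilt of each tail. The tail coordinates are randomly generated, nearly upright chains; with real data they would be read from a trajectory (e.g., with an analysis library such as MDAnalysis), which is not shown here.

```python
import numpy as np

def order_parameter(chain_coords, axis=np.array([0.0, 0.0, 1.0])):
    """S_CH = 0.5*<3*cos^2(theta) - 1> for the C(i-1)->C(i+1) segment vectors
    of each alkyl tail, measured against a chosen axis (default: interface normal, Z)."""
    s_values = []
    for chain in chain_coords:                         # chain: (n_carbons, 3) array
        seg = chain[2:] - chain[:-2]                   # vectors from atom i-1 to i+1
        cos_theta = seg @ axis / np.linalg.norm(seg, axis=1)
        s_values.append(0.5 * (3.0 * cos_theta**2 - 1.0))
    return float(np.mean(np.concatenate(s_values)))

def tilt_angle(chain):
    """Inclination of the whole tail: vector from the first carbon (next to the
    headgroup) to the terminal carbon, projected on the monolayer normal (Z)."""
    v = chain[-1] - chain[0]
    return np.degrees(np.arccos(abs(v[2]) / np.linalg.norm(v)))

# toy tails: 12 carbons, nearly upright with small random lateral kinks
rng = np.random.default_rng(2)
tails = [np.cumsum(np.column_stack([rng.normal(0, 0.03, 12),
                                    rng.normal(0, 0.03, 12),
                                    np.full(12, 0.127)]), axis=0)
         for _ in range(50)]

print(f"S_CH vs. Z = {order_parameter(tails):.2f}  (values near 1: tails perpendicular to the interface)")
print(f"mean tilt  = {np.mean([tilt_angle(c) for c in tails]):.1f} deg from the normal")
```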
The MD simulation method gives detailed images of the dynamic evolution process of monolayers' morphology with increasing interfacial concentration and enables quantitative characterization of phase transitions and structural change. Wei and coworkers [79] reported entropic changes in SDS surfactant for phase transitions, which are −29.7 J mol⁻¹ K⁻¹ for the transition from 2D GL film to 2D LE film and −42.0 J mol⁻¹ K⁻¹ for the transition from 2D LE state to 2D LC film. These values gave us an intuitive insight into these phase changes in the surfactant monolayer. MD results reveal that the change in the monolayers' thickness associated with the LC-LE transition is mainly due to a shortening of the surfactant alkyl tails, with little change in the average tilt angle of the headgroups [80]. Meanwhile, it has been observed that multiple phases can coexist within one monolayer [77]. We remark that the saturation coverage (LC phase) should be satisfied to maximize the interfacial performance of the surfactants. A comparative analysis of pertinent research findings [26] determined that monolayers composed of elongated single-chain molecules, such as AOS and SDS, exhibited interfacial buckling at elevated interfacial concentrations. By contrast, surfactant monolayers featuring twin-tailed structures, such as IOS, rhamnolipid, and DPPC, could achieve interface saturation and exhibit curved buckling at lower concentrations. This suggests that surfactants with twin-tailed structures may minimize or even eliminate cosolvent requirements (i.e., saving cost) and possess superior interfacial performance. However, it should be noted that SDS and DPPC differ in that SDS is a water-soluble surfactant while DPPC is not. The type of surfactant headgroups and the architecture of surfactant alkyl tails can directly influence the diffusion behaviors of the surfactant molecules at the interface, thus affecting the corresponding interfacial properties. Tan et al. [65] conducted an in-depth study of the evolution of monolayer morphologies formed by six isomers of ABS surfactants at the decane-water interfaces. They showed that the GL-LE phase transition can be accelerated by disubstituted ABS surfactants while being delayed by trisubstituted ABS surfactants. Meanwhile, they found that large undulations are a sign of a collapse of the interface under extremely high surfactant concentrations. Shi and Guo [54] reported that the bending modulus can control the further transformation pathway from buckling to a protruding bud at the interface, which majorly depends on the tail length and interfacial surfactant coverage. They introduced area compressibility and bending modulus, and they showed that the bending modulus becomes larger as the tail length grows, indicating that the energy cost of bending the monolayer increases as the monolayer becomes thick. Likewise, Munusamy et al. [81] reported the segregation of molecular aggregates from the interface into the bulk water in the anionic rhamnolipid (Rha-C10-C10) monolayer at higher concentrations. In contrast, in the nonionic Rha-C10-C10 monolayer, the molecules are still distributed at the interface. Furthermore, the presence of a second rhamnose group can decrease the aggregate number [42]. These findings from MD simulations have deepened our understanding of the molecular architecture's effect on the dynamic behaviors of the surfactants and the morphological evolution of the monolayers at the interfaces.
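Related to the area compressibility mentioned above, a simple post-processing sketch is shown below: it estimates the monolayer area-compressibility modulus K_A = −A(∂Π/∂A) pointwise from a Π-A isotherm by finite differences. The isotherm values are illustrative, and this is the generic textbook estimator rather than the specific procedure of Shi and Guo [54].

```python
import numpy as np

def compressibility_modulus(area, surface_pressure):
    """Monolayer area-compressibility modulus K_A = -A * dPi/dA, evaluated
    pointwise from a Pi-A isotherm by finite differences (central where possible)."""
    area = np.asarray(area, float)
    pi = np.asarray(surface_pressure, float)
    dpi_da = np.gradient(pi, area)          # handles the non-uniform, decreasing A grid
    return -area * dpi_da

# illustrative isotherm: Pi rises steeply as the area per molecule shrinks
area = np.array([1.4, 1.2, 1.0, 0.8, 0.7, 0.6, 0.55, 0.5])      # nm^2 / molecule
pi = np.array([2.0, 4.0, 8.0, 16.0, 24.0, 36.0, 44.0, 55.0])    # mN/m

k_a = compressibility_modulus(area, pi)
for a, k in zip(area, k_a):
    print(f"A = {a:4.2f} nm^2/molecule  ->  K_A ~ {k:6.1f} mN/m")
```

Stiffer (more condensed) monolayers show larger K_A values at small areas per molecule, which is consistent with the GL-to-LC progression described above.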
The interactions between surfactant molecules in the monolayers are also affected by multicomponent surfactant mixtures. As aforementioned, Wang et al. [21] found that the influence of LB surfactant on AOS surfactant is nonmonotonic with the change in ratio. The surface dilatational modulus (also known as interfacial elasticity) has a maximum when LB is 30% in the monolayer. They demonstrated that this overall impact is rooted in two competing effects as determined by MD simulation findings. They investigated the orientation of the headgroup of LB molecules. They found that it is tilted relative to the monolayer normal, and the tilt angle increases with increasing LB concentration at the interface. In contrast, the favorable interactions between the S (from AOS) and N (from LB) surfactant atoms (which can be demonstrated by the order parameters of the carbon tails) and the hydration of the carboxylate group of LB surfactant can inhibit the tendency of LB headgroups to become nearly parallel to the monolayer. Thus, the effective headgroup size (i.e., SAPM) is lower (compared to the pure LB case) because favorable interactions between LB and AOS surfactants suppress the flexibility of the headgroup of LB. When the proportion of LB is higher than 70%, AOS-LB interactions are insufficient in constraining the headgroup orientation. The corresponding morphology can be depicted in Figure 15. As is observed, there are many gaps in the loose monolayer formed by zwitterionic surfactant (such as LB) molecules. Ionic surfactants with small headgroups (such as AOS) can easily enter these gaps to prevent water and oil molecules from coming into contact with each other. The zwitterionic surfactant molecules (such as the carboxylate group in the headgroup of LB) penetrate the water phase to a greater extent, forming the "primary layer" in the monolayer, while most of the ionic surfactants occupy the gaps between the hydrophobic tails to form the "secondary layer" in the monolayer (reflecting the favorable interactions between the S and N atoms) [21,75,82,83]. The monolayers formed by the surfactant mixtures will perform best when the newly introduced surfactant has a suitable length of carbon tails in relation to the pre-existing surfactants. The matching of carbon tail lengths reflects the effect of van der Waals force interactions between the hydrophobic tails of different types of surfactants on the interfacial structure and properties [84].
When the components involved in the mixtures are molecules without ionic groups (i.e., cosolvents), such as octanol, decanol, dodecanol, and tetradecanol, the matching of carbon tail lengths becomes the dominant factor in determining the synergistic effects rather than the headgroup size and the electrical properties [39,40]. Surfactants with short alkyl chains have a higher tendency to transfer from the interface to the solution, which breaks down the tightly packed network at the interface. Ergin and coworkers [71] reported that the translational excess entropy due to the tail group interactions can discriminate between the synergistic system of SDS and LB-12 and the nonsynergistic system of SDS and cocamidopropyl betaine (CAPB). Therefore, we can use the MD simulation method to evaluate the synergistic effect of different surfactant mixtures. In addition, Jia and coworkers [85] investigated the interfacial assembly process and configuration of the pseudo-gemini surfactants consisting of SDBS and 4,4′-oxydianilinium chloride (ODC). They found that SDBS and ODC showed the vertical and horizontal arrangements at the oil-water interface, respectively, and the interfacial assembled configuration presented an unexpected "H" shape rather than the traditional "U" shape. They claimed that the cation-π interaction is responsible for the SDBS/ODC assembly mechanism and the final oil-water interface configuration. In a word, the formation of closely packed and stable interfacial monolayers requires good compatibility between different surfactant (or cosolvent) molecules.
Surfactant Headgroup Solvation and Counterion Effect in Aqueous Phase

Surfactant-formed monolayers possess an inherent electric charge on their surface, resulting in the presence of surface potential. Ions of opposite charge (counterions) are attracted to the surface, while those of like charge (co-ions) are repelled. An electric double layer (EDL), which is diffuse because of mixing caused by thermal motion, is thus formed [17]. The EDL can be described as consisting of two distinct layers: an inner layer that may contain adsorbed ions and a diffuse layer where ions are distributed according to the influence of electrical forces and thermal motion. Taking the surface electric potential to be ψ_0, and applying the Gouy-Chapman approximation, the electric potential ψ at a distance x from the surface is approximately predicted by the following equation:

ψ = ψ_0 exp(−κx),

where κ is the inverse Debye screening length, which scales with the square root of the ionic strength I (κ ∝ √I), and I is the ionic strength, given by I = (1/2)Σ_i c_i z_i², where c_i is the concentration of ions and z_i is the charge number of ions.

The presence of charge at crude oil-aqueous contacts may arise from the ionization of surface acid functionalities. The presence of charge at gas-aqueous interfaces may arise from the adsorption of surfactant ions. When surfactant molecules adhere to interfaces, they have the potential to modify the surface electric charge, therefore influencing the concentration of inorganic ions in the vicinity. Additionally, it is worth noting that the headgroups of surfactants can create hydrogen bonds with water molecules. The adsorption and aggregation phenomena of water molecules and inorganic salt ions (i.e., counterions) close to the headgroups of surfactants can potentially influence the interfacial structure of surfactant monolayers, causing alterations in macroscopic characteristics. Using the MD simulation method, it becomes possible to engage in comprehensive analyses of the hydrogen bonding interactions occurring between the headgroups and the surrounding water molecules. Additionally, these simulations allow for the examination of the electrostatic interactions that take place between the headgroups and the inorganic salt ions.
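For orientation on the magnitudes involved, the sketch below evaluates the screened-potential decay written above in its low-potential (Debye-Huckel) limit: it computes the inverse Debye length κ from the ionic strength and then ψ(x) = ψ_0·exp(−κx). Constants are standard SI values; the 0.1 M NaCl case and the −50 mV surface potential are illustrative inputs only.

```python
import numpy as np

# physical constants (SI)
E_CHARGE = 1.602176634e-19      # C
K_B = 1.380649e-23              # J/K
N_A = 6.02214076e23             # 1/mol
EPS0 = 8.8541878128e-12         # F/m

def inverse_debye_length(ionic_strength_molar, temperature=298.15, eps_r=78.5):
    """kappa (1/m) from the ionic strength I = 0.5*sum(c_i * z_i^2) in mol/L."""
    i_si = ionic_strength_molar * 1000.0 * N_A          # ions per m^3
    return np.sqrt(2.0 * E_CHARGE**2 * i_si / (eps_r * EPS0 * K_B * temperature))

def gouy_chapman_potential(psi0_mV, x_nm, ionic_strength_molar):
    """psi(x) = psi0 * exp(-kappa*x), the low-potential decay of the diffuse layer."""
    kappa = inverse_debye_length(ionic_strength_molar)
    return psi0_mV * np.exp(-kappa * x_nm * 1e-9)

# ionic strength of 0.1 M NaCl: I = 0.5*(0.1*1^2 + 0.1*1^2) = 0.1 M
ionic_strength = 0.5 * (0.1 * 1**2 + 0.1 * 1**2)
kappa = inverse_debye_length(ionic_strength)
print(f"Debye length ~ {1e9 / kappa:.2f} nm at I = {ionic_strength:.2f} M")
print(f"psi at 2 nm from the surface: {gouy_chapman_potential(-50.0, 2.0, ionic_strength):.1f} mV")
```

At 0.1 M and room temperature this gives a Debye length of roughly 1 nm, which is why the EDL and counterion effects discussed below operate on the same length scale as the surfactant headgroup region itself.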
Hydration Shell Structure and Hydrogen Bonding

The interactions between water molecules and the surfactant headgroups substantially impact the monolayers' interfacial properties. The ionic surfactant may be immersed several layers deeper into the water phase than the nonionic surfactant. The MD simulation method provides detailed information on the hydration shell structure near the monolayers. The spatial distribution function (SDF) can be employed to characterize water molecules' distribution visually. As shown in Figure 16, compared with dodecyl carboxylate (SDC), SDSn surfactant has more water molecules distributed around the headgroups, indicating the sulfonate group is more hydrophilic than the carboxylate group [86]. Moreover, the RDF can quantitatively characterize the orientation and distribution patterns of the molecules. Figure 17 illustrates the RDF of the central atom (S) on the headgroup of IOS molecules in relation to the oxygen (Ow) and hydrogen (Hw) atoms on the water molecules and sodium ions (Na+) in the decane-IOS-water system. The Coulomb force is stronger than the hydrogen bonding effect. The distances between S and Hw atoms are shorter than those of S and Ow atoms (indicated by the horizontal value of the first RDF peak). This means Hw forms hydrogen bonds with oxygen on the headgroups, which determines the orientation of water molecules and forms the hydration shell structure, as illustrated in the inset of Figure 17. Furthermore, the g(r) S-Ow curve has three local peaks representing three hydration layers. The first peak is the sharpest, indicating a strong hydrogen bonding interaction between the headgroup and the water molecules in this range (also known as bound water). The second peak represents the trapping water influenced by the first hydration layer due to the hydrogen bonding effect. The third peak is relatively inconspicuous. The distribution pattern means that the attraction force of surfactant monolayers decreases as the distance increases. The water molecules far away from the interface and unaffected by surfactant monolayers are called free water [39].
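The hydration-shell picture above can be made quantitative by integrating the RDF up to its first minimum, which yields the coordination (hydration) number used in the next paragraph. The sketch below shows this analysis on a synthetic g(r) standing in for, e.g., a headgroup-oxygen/water-oxygen RDF extracted from a trajectory; the bulk water density of about 33 molecules/nm³ is the only physical input.

```python
import numpy as np

def coordination_number(r, g_r, rho, r_min=None):
    """Hydration (coordination) number: n = 4*pi*rho * integral of g(r)*r^2 dr
    up to the first minimum of g(r) after its first peak.

    r   : radial bins (nm), g_r : radial distribution function,
    rho : number density of the coordinating species (molecules/nm^3).
    """
    r, g_r = np.asarray(r, float), np.asarray(g_r, float)
    if r_min is None:
        peak = np.argmax(g_r)
        r_min = r[peak + np.argmin(g_r[peak:])]      # first minimum after the peak
    mask = r <= r_min
    integrand = g_r[mask] * r[mask] ** 2
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r[mask]))
    return 4.0 * np.pi * rho * integral

# illustrative g(r): a sharp first shell near 0.28 nm, a shallow minimum,
# and g(r) -> 1 at long range (all values invented for demonstration)
r = np.linspace(0.05, 1.2, 400)
g_r = (2.3 * np.exp(-((r - 0.28) ** 2) / 0.0008)
       + 1.0 / (1.0 + np.exp(-(r - 0.36) / 0.025)))

n_hyd = coordination_number(r, g_r, rho=33.0)        # ~33 waters/nm^3 in bulk water
print(f"first-shell coordination number ~ {n_hyd:.1f} water molecules")
```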
Zhang et al. [87] used MD simulation to further reveal the influence of spacer groups on the interfacial properties of the surfactants at air-liquid interfaces. They found that introducing some functional groups as spacers into the structure of perfluorooctane sulfonate (PFOS) would not much influence the orientation and conformation of the hydrophobic chains of the surfactants, while the hydrophilicity of the headgroups would be improved by introducing hydrophilic groups as spacers. As shown in Table 1, compared with PFOS, the average number of hydrogen bonds increases, and the diffusion coefficients of the water molecules in the first shell of the hydration layer decrease remarkably for PFOS with carbonyl, amino, amide groups, or their combinations. In contrast, the hydrophilicity of the headgroups is not changed much when methylene or thioether groups are employed as spacers in PFOS.

Table 1. The average number of hydrogen bonds formed between oxygen atoms in the sulfonate group and water molecules in different systems and the diffusion coefficients of water molecules in the first shell of the hydration layer [87].
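A related analysis, which underlies numbers like those in Table 1, is counting the hydrogen bonds donated by water to the headgroup oxygens frame by frame. The snippet below is a minimal, hypothetical example using MDAnalysis' hydrogen-bond analysis; the geometric cut-offs shown are common choices rather than the ones used in [87], and the selection strings are placeholders for the actual residue and atom names.

```python
import MDAnalysis as mda
from MDAnalysis.analysis.hydrogenbonds import HydrogenBondAnalysis

# Hypothetical air-surfactant-water system.
u = mda.Universe("pfos_water.tpr", "pfos_water.xtc")

hb = HydrogenBondAnalysis(
    universe=u,
    donors_sel="resname SOL and name OW",            # water oxygens as donors
    hydrogens_sel="resname SOL and name HW1 HW2",    # water hydrogens
    acceptors_sel="resname PFOS and name O1 O2 O3",  # sulfonate oxygens as acceptors
    d_a_cutoff=3.5,          # donor-acceptor distance cut-off in Angstrom
    d_h_a_angle_cutoff=150,  # donor-hydrogen-acceptor angle cut-off in degrees
)
hb.run()

counts = hb.count_by_time()   # number of headgroup-water hydrogen bonds per frame
n_surfactants = len(u.select_atoms("resname PFOS").residues)
print(f"average hydrogen bonds per headgroup: {counts.mean() / n_surfactants:.2f}")
```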
Influences of Inorganic Salt Ions

The addition of inorganic salts can reduce the IFT at the oil-water interface [88] and the gas-water interface [89] since they can shield the repulsive interactions between the headgroups of the surfactants, enabling the monolayers to become more closely packed. This process can improve the stability of the interface. Liu et al. [90] argued that most inorganic salt ions can be represented by point charges; therefore, ions with the same charge but different masses have little effect on the interfacial properties of the surfactant monolayers. By contrast, Allen et al. [91] suggested that although different monovalent cations do not change the structure of the monolayers, they can change the interfacial properties to various degrees. They found that the interaction strength of monovalent cations with the headgroups follows the order NH4+ > Cs+ > Na+ > Li+. Hu et al. [92] showed that the ability of SDS to reduce the IFT increases with the radius of the monovalent cation, and the order of IFT reduction follows Cs+ > Rb+ > K+ > Na+ > Li+. Yan et al. [93] showed that divalent ions (Ca2+ and Mg2+) have a more powerful influence on the hydration structure around the headgroups. They can disturb the original hydrogen bonding structure, leading to a decrease in the number of hydrogen bonds and an increase in the hydrogen bond lifetime. Compared with Ca2+, Mg2+ has much greater difficulty entering the first hydration shell of the headgroups, and once it has entered the shell, Mg2+ has a stronger effect on the hydrogen-bond network. Li et al. [74] reported that added Ca2+ can replace Na+ at the oil-water interface, which compresses the polar headgroups of SB12-3 and SDBS so that both surfactants are arranged more closely at the oil-water interface. In the presence of Ca2+ ions, the interactions between water molecules and the sulfonate groups of the SB12-3 and SDBS surfactants are enhanced. Meanwhile, Na+ ions come closer to the sulfonate group in SB12-3, which compresses the thickness of the EDL.
Sun et al. [94] pointed out that strong electrostatic interactions between multivalent cations and anionic surfactant molecules are beneficial for reducing the electrostatic repulsions between the charged headgroups. The cations play a role as a bridge connecting the surfactant molecules at the surface, improving the accommodation capacity for surfactant molecules, and consequently lowering the IFT and improving the stability of the interface. Zhao et al. [57] reached a similar conclusion; meanwhile, they reported that cations influence the degree of compactness of the SHBS-formed monolayers, in the order Ca2+ > Mg2+ > Na+.

The combination of hydrophilic headgroups and counterions is defined as ion pairs, which can be bound together by electrostatic attractions. The binding energy of different ion pairs is related to the interaction strength. In MD simulation studies, the energy distribution of the ionic pairs can be determined by calculating the potential of mean force (PMF) [95,96], as shown in the following equation:

PMF(r) = −k_B T ln g(r)

where k_B represents the Boltzmann constant, T denotes the temperature in the simulations, and g(r) is the RDF of the ionic pairs (refer to Equation (13)).

Figure 19 illustrates the PMF curve between ion pairs as a function of the separation distance between them, wherein a peak and two troughs are present. The first trough represents the state of contact minimum (CM), which indicates that the counterions are in direct contact with the headgroup. This state has the lowest energy and the most stable ion pairs. The second trough corresponds to the solvent-separated minimum (SSM), which signifies that the robust hydrogen bonding network formed by the water molecules around the surfactant headgroups hinders the entry of the counterions. Therefore, the counterions located outside the hydration shell are likewise in a relatively stable state. The peak indicates the energy barrier (BARR) of the hydration layers. The counterions must move past the BARR to enter the hydration layers. The BARR mostly arises from the energy needed to rearrange the water molecules in the layer when the counterions enter the first hydration layer of the surfactants' hydrophilic headgroups.
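As a rough illustration of how these quantities are usually extracted in post-processing, the sketch below converts a tabulated ion-pair RDF into a PMF via PMF(r) = −k_B T ln g(r) and reads off the CM, BARR, and SSM positions from its extrema. It assumes the RDF has already been computed and saved as two columns (r, g(r)); the file name, units, and window used to locate the extrema are hypothetical, and noisy data would normally be smoothed first.

```python
import numpy as np
from scipy.signal import argrelextrema

K_B = 0.0083144626  # Boltzmann constant in kJ/(mol*K)

def pmf_from_rdf(r, g, temperature=298.15, eps=1e-12):
    """Convert an ion-pair RDF into a potential of mean force, PMF(r) = -kB*T*ln g(r)."""
    return -K_B * temperature * np.log(np.clip(g, eps, None))

# Hypothetical two-column file: separation r (nm) and g(r) of an S-Na+ ion pair.
r, g = np.loadtxt("rdf_S_Na.xvg", comments=("#", "@"), unpack=True)
w = pmf_from_rdf(r, g)

# Restrict to the region where the RDF is meaningful (first ~1 nm here).
mask = (r > 0.15) & (r < 1.0)
r_s, w_s = r[mask], w[mask]

minima = argrelextrema(w_s, np.less, order=5)[0]    # candidate CM and SSM positions
maxima = argrelextrema(w_s, np.greater, order=5)[0]  # candidate BARR position

cm, ssm = w_s[minima[0]], w_s[minima[1]]
barr = w_s[maxima[0]]

# These two differences are the energy barriers discussed in the text below.
print(f"BARR - CM  = {barr - cm:.2f} kJ/mol")
print(f"BARR - SSM = {barr - ssm:.2f} kJ/mol")
```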
With the help of the PMF curves for the ion pairs, we can obtain the binding energy (∆E− = BARR − CM) as well as the dissociation energy (∆E+ = BARR − SSM) between the counterions and the surfactant headgroups. Based on the ratio K of ∆E− to ∆E+, the tendency of binding and dissociation of various ion pairs can be discussed. As shown in Table 2, regardless of whether the hydrophilic headgroups are sulfate (−SO4−), sulfonate (−SO3−), or carboxylic acid (−COO−) groups, the energy barriers for Na+ ions to enter and leave the hydration layers are smaller than those for Ca2+ and Mg2+ ions. That is to say, Na+ ions penetrate the hydration layers more easily and come into contact with the hydrophilic headgroups of the surfactants; meanwhile, they also dissociate from the layers and return to the aqueous phase in a free state more easily. In contrast, it is difficult for Ca2+ and Mg2+ ions to escape from the layers once they enter the hydration shell of the headgroups. Therefore, the interactions between divalent cations and the headgroups are more robust. In summary, on the one hand, the addition of counterions weakens the structure of the water molecules near the hydrophilic headgroups, thus reducing the hydrophilicity of the surfactant. On the other hand, their shielding effect on the electrostatic repulsion between the headgroups makes the interfacial film denser, leading to more surfactant molecules adsorbed at the interface. The increase in the number of surfactant molecules at the interface counterbalances the negative effect of the weakened hydrophilicity of the headgroups [86,93,94,97-99].

Table 2. Predictions of dissociation and binding energy barriers between various ion pairs of counterions and surfactant headgroups (data are from [86,93,94,97]).
Liu et al. [100,101] reached similar conclusions by measuring the dynamic IFT of surfactant-containing systems. As shown in Figure 20a-c, adding inorganic salt ions increases the number of surfactant molecules transported from the interior of the bulk phase to the interface, and the increased surfactant concentration at the interface results in a reduction in IFT. The decrease in IFT cannot solely be attributed to the increase in interfacial concentration, however. Alonso et al. [58] found that the addition of inorganic salt ions can also cause a decrease in the IFT of nonionic surfactants while the interfacial concentration remains constant.

Compared with ionic surfactants, the headgroups of nonionic surfactants are difficult to insert completely into the aqueous phase due to their relatively weak hydrophilicity and large size; instead, they can only adopt an inclined orientation at the oil-water interface (see Figure 20d). The addition of inorganic salt ions generates excess adsorption of the surfactants at the interface and induces the surfactant headgroups to lie flat at the interface, which increases the interfacial coverage of the monolayers and further hinders the diffusion and contact between the oil and water molecules. In summary, the presence of inorganic salt ions not only induces more surfactant molecules to be enriched at the interface but also influences the orientation of the surfactant molecules, so that the entire microstructure of the monolayers is changed, which ultimately enhances the interfacial performance of the surfactants.
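Orientation changes of the kind described above (headgroups tilting or lying flat) are commonly quantified by the angle between a molecule-fixed vector and the interface normal. The sketch below is a minimal, hypothetical MDAnalysis example of such an analysis: the residue name, atom names, file names, and the assumption that z is the interface normal are placeholders to be adapted to the actual topology.

```python
import numpy as np
import MDAnalysis as mda

# Hypothetical oil-surfactant-water slab with the interface normal along z.
u = mda.Universe("system.tpr", "traj.xtc")

# Placeholder selections: one headgroup atom and one terminal tail atom per surfactant.
heads = u.select_atoms("resname SURF and name S1")
tails = u.select_atoms("resname SURF and name C12")

angles = []
for ts in u.trajectory:
    vec = tails.positions - heads.positions              # head-to-tail vector per molecule
    cos_theta = vec[:, 2] / np.linalg.norm(vec, axis=1)  # projection onto the z axis
    # Tilt angle with respect to the interface normal: 0 deg = upright, 90 deg = lying flat.
    angles.append(np.degrees(np.arccos(np.abs(cos_theta))))

angles = np.concatenate(angles)
print(f"mean tilt from the interface normal: {angles.mean():.1f} deg")
```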
Interactions between Surfactant Alkyl Tails and Hydrophobic Phase

The molecular composition of the hydrocarbon phase is another important factor affecting the interfacial properties of the surfactant-formed monolayers. Chanda et al. [102] found that the monolayer formed by dodecyl diethylene glycol ether (C12E2) at the water-decane interface was 1.25 to 1.3 times thicker than that formed at the gas-liquid interface. The reason is that the strong interactions between decane molecules and the hydrophobic carbon tails of C12E2 lead to a straightening of the carbon tails. Under this situation, the surfactant molecules tend to be more perpendicular to the interface, whereas gas molecules, such as CO2 and N2, interact weakly with the carbon tails, so the surfactant molecules are more randomly oriented at the gas-liquid interface. Moreover, the size of gas molecules is much smaller than that of oil molecules, enabling gas molecules to diffuse more readily at the interfaces. Their contact probability with the water phase is larger than that of oil molecules at the interface, which is detrimental to the IFT reduction. Thus, the molecular configurations of the surfactant monolayers at the gas-liquid interface differ from those at the oil-water interface. Furthermore, Goodarzi and coworkers [103] found that the surfactant tends to stretch more in the case of aliphatic hydrocarbons (octane and dodecane) in comparison to cyclic oil molecules (cyclohexane and benzene) due to the linear structure of the oil molecules. They also found that the IFT is a function of the molecular weight of the hydrocarbons. The difference is attributed to the interaction strength between the hydrocarbon components and the hydrophobic carbon tails of the surfactants [104].

The interactions between nonpolar gas molecules (like N2, CO2, and CH4) and the surfactant alkyl tails are controlled by van der Waals forces, which are much weaker than hydrogen bonding and electrostatic interactions. Based on MD simulation results [16,105], either increasing the interfacial concentration of surfactants or raising the pressure can strengthen the interaction forces between the monolayer and the gas phase, leading to an increased interfacial width and a reduced IFT at the gas-water interfaces. This is in good agreement with experimental work. For example, increasing the concentration of the foaming agent SDS and raising the injection pressure of the CO2 phase can prolong the half-life period of the foam and increase the foaming volume. Sun et al. [106] found through macroscopic experiments that the coalescence and collapse rates of CO2 foam are apparently faster than those of N2 and O2 foam. They simulated the interfacial behaviors of these three foam systems and found that there were hydrogen bonding interactions between CO2, the SDS headgroups, and water molecules, which weakened the hydrophilicity of SDS and induced self-aggregation of the SDS molecules. This behavior leads to the occurrence of gaps or holes in the monolayers at the interfaces, allowing more water molecules to come into contact with CO2 molecules, and eventually leads to a decrease in the stability of the foam liquid film. This mechanism was also demonstrated in foam systems stabilized by dodecyl trimethylammonium bromide (DTAB), nonionic lauryl alkanolamide (LAA), and the amphoteric ionic surfactant CAB.
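Quantities such as the interfacial width mentioned above are usually read off density profiles along the interface normal. The sketch below is a minimal, hypothetical MDAnalysis example that averages the water-oxygen distribution over a trajectory and applies a "10-90" width estimate, i.e., the distance over which the water density climbs from 10% to 90% of its bulk value; the file names, atom selections, the fixed-box assumption, and the assumption that bulk water sits at the slab centre are placeholders.

```python
import numpy as np
import MDAnalysis as mda

# Hypothetical gas-surfactant-water slab with the interface normal along z (fixed box assumed).
u = mda.Universe("foam.tpr", "foam.xtc")
water_o = u.select_atoms("resname SOL and name OW")

nbins = 200
box_z = u.dimensions[2]                         # box length along z in Angstrom
edges = np.linspace(0.0, box_z, nbins + 1)
centres = 0.5 * (edges[:-1] + edges[1:])
profile = np.zeros(nbins)

for ts in u.trajectory:
    profile += np.histogram(water_o.positions[:, 2], bins=edges)[0]
profile /= len(u.trajectory)                    # average water-oxygen count per bin

# "10-90" width of the lower interface: distance over which the water count rises
# from 10% to 90% of its plateau (bulk) value; only relative values matter here.
bulk = profile[nbins // 2 - 10: nbins // 2 + 10].mean()  # assume bulk water at the slab centre
lower = profile[: nbins // 2]
z10 = centres[np.argmax(lower >= 0.1 * bulk)]
z90 = centres[np.argmax(lower >= 0.9 * bulk)]
print(f"10-90 interfacial width: {z90 - z10:.2f} Angstrom")
```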
In contrast, the situation becomes more complicated for oil-water interfaces due to the complex oil components. Wade et al. [68] proposed the equivalent alkane carbon number (EACN), a dimensionless number that measures the hydrophobicity of the oil phase. It is an important parameter for determining the type and stability of emulsions formed from surfactant-oil-water (SOW) systems. Generally, the EACN is influenced by many factors, such as acyclic, mono- or polycyclic, linear or branched chains, and the degree of unsaturation, and it is difficult to estimate from the chemical structure alone [107-109]. Knowing the EACN value of an oil allows us to predict whether this oil should form Winsor type I-III microemulsions under an equilibrium state or a direct/inverse emulsion after stirring [44,108]. At the macroscopic level, the EACN of a particular oil phase is determined by comparing its behavior with that of well-defined linear hydrocarbons in the same SOW system. Based on the like-dissolves-like rule, the IFT of the system is reduced to a minimum when the hydrophobic carbon tails of the surfactants are similar to the oil molecules. However, the experimental approaches still face many challenges; for example, they cannot deal with the large number of branched paraffin molecules in the oil displacement system. These problems can be addressed using the MD simulation method. Jang et al. [52] proposed the effective alkyl tail length (EATL) using this method. As shown in Figure 21, the SHBS surfactant has a twin-tailed structure. The short carbon chain has a shielding effect (i.e., steric hindrance) on the long carbon chain, which prevents the oil molecules from inserting into the space between the two carbon chains. Thus, the oil components mainly interact with the long carbon chains. Under this situation, the EATL of the surfactant alkyl tail (R_effective) is defined as the difference between the long tail (R_long) and the short tail (R_short). In the decane-SHBS-water system, the R_effective of SHBS-4C16 is 9.53 ± 1.36 Å, which is very close to the average length of decane molecules (9.97 ± 1.03 Å) in the oil phase. Therefore, SHBS-4C16 has optimum miscibility with the decane phase. The calculations by Xiao and coworkers [110] showed that although SHBS-4C16 has the lowest IFT at the oil-water interface, its EATL is closer to the average length of nonane molecules (see Table 3). In summary, although the EATL method further clarifies the matching relationship between the branched structure in the hydrophobic carbon tails of the surfactants and the oil phase, it still needs to be further studied and improved.
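Since the EATL is a simple geometric quantity (R_effective = R_long − R_short), it can be estimated directly from trajectory coordinates. The sketch below is a hypothetical MDAnalysis example for a twin-tailed surfactant: the residue and atom names, file names, and the use of plain head-to-terminal-carbon distances are assumptions for illustration, not the exact protocol of the cited work.

```python
import numpy as np
import MDAnalysis as mda

# Hypothetical decane-surfactant-water system.
u = mda.Universe("shbs_decane.tpr", "shbs_decane.xtc")

# Placeholder selections: headgroup anchor atom and the terminal carbon of each tail,
# one atom per surfactant molecule and in the same residue order.
head = u.select_atoms("resname SHBS and name S1")
long_end = u.select_atoms("resname SHBS and name C16")   # end of the long tail
short_end = u.select_atoms("resname SHBS and name C4")   # end of the short tail

r_eff_frames = []
for ts in u.trajectory:
    r_long = np.linalg.norm(long_end.positions - head.positions, axis=1)
    r_short = np.linalg.norm(short_end.positions - head.positions, axis=1)
    # Effective alkyl tail length per molecule: R_effective = R_long - R_short
    r_eff_frames.append(r_long - r_short)

r_eff = np.concatenate(r_eff_frames)
print(f"R_effective = {r_eff.mean():.2f} +/- {r_eff.std():.2f} Angstrom")
```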
It is well known that crude oil is a complex mixture of n-alkanes, isoparaffins, cycloparaffins, aromatic hydrocarbons, and other nonhydrocarbon constituents [111]. The resins and asphaltenes are polar molecules with surface activity [112,113]; thus, they can adsorb at the interface and affect the interfacial properties. In the past, n-alkanes with a carbon number of 8~14 were usually selected as the simulated oil phase in MD simulations, wherein the influence of the strongly polar components on the interfacial properties was neglected. A suitable molecular model is essential to mimic the actual complexity of the system. In recent years, researchers have been working to continuously upgrade and optimize the molecular models of the crude oil phase to bridge the gap between simulated results and experimental work. Kunieda et al. [112] used eight typical types of hydrocarbon molecules, namely hexane, heptane, octane, nonane, cyclohexane, cycloheptane, benzene, and toluene, in different ratios to construct a crude oil model. Sugiyama et al. [114] used quantitative molecular representation (QMR) in combination with experimental approaches to construct a molecular model containing 108 molecules (also termed digital oil), which can successfully characterize the properties and phase behavior of light crude oil produced in Japan. Iwase et al. [115,116] extended the QMR method and successfully constructed a molecular model of heavy oil containing 36 typical types of hydrocarbon molecules. Cui et al. [117] revealed the microstructural evolution of bitumen by utilizing a digital oil model and MD simulations, providing a theoretical framework to elucidate transition states between the liquid and glass states. Figure 22 shows the workflow for constructing digital models of light oil and heavy oil, which can significantly improve the authenticity of the simulated models and help us further understand the interfacial behaviors of the surfactants at oil-water interfaces [118]. In addition, several studies have begun to address the effect of polar molecules, such as resins and asphaltenes, on the interfacial properties of crude oil systems.
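Building such multicomponent model oils is largely a bookkeeping exercise: choose a composition, then pack the corresponding numbers of molecules into a box with a tool such as Packmol before running the MD engine. The snippet below is a hypothetical helper that writes a Packmol input file for a light-oil mixture; the component list, mole fractions, box size, and structure file names are illustrative placeholders and are not the compositions used in the cited studies.

```python
# Hypothetical light-oil composition (mole fractions); adjust to the target crude oil assay.
composition = {
    "hexane.pdb": 0.20, "heptane.pdb": 0.15, "octane.pdb": 0.15, "nonane.pdb": 0.10,
    "cyclohexane.pdb": 0.15, "cycloheptane.pdb": 0.05, "benzene.pdb": 0.10, "toluene.pdb": 0.10,
}
total_molecules = 500                         # total number of oil molecules in the box
box = (0.0, 0.0, 0.0, 80.0, 80.0, 80.0)       # box corners in Angstrom

lines = ["tolerance 2.0", "filetype pdb", "output oil_box.pdb", ""]
for pdb, x in composition.items():
    n = max(1, round(x * total_molecules))    # molecule count for this component
    lines += [
        f"structure {pdb}",
        f"  number {n}",
        "  inside box " + " ".join(f"{c:.1f}" for c in box),
        "end structure",
        "",
    ]

with open("pack_oil.inp", "w") as fh:
    fh.write("\n".join(lines))
print("wrote pack_oil.inp; run it with:  packmol < pack_oil.inp")
```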
Mizuhara et al. [119] discussed the aggregation behaviors of 13 types of asphaltene molecules at the water-oil (heptane + toluene) interface. They focused on the influence of heteroatoms, such as sulfur and nitrogen, on the adsorption stability of asphaltenes. Gao and coworkers [120,121] investigated the adsorption morphology of the asphaltene molecule C5Pe at the oil-water interface. The authors pointed out that the polycyclic aromatic rings of asphaltene are perpendicular to the interface but not parallel to other asphaltene molecules. The aromatic ring structures on different molecules are aligned parallel to each other by the π-π interaction, which can be weakened by adding inorganic salt ions [35]. However, the carbon chains surrounding the aromatic rings have a steric hindrance effect, which ultimately leads to the formation of irregular agglomerative adsorption of C5Pe at the oil-water interface. At present, there are few simulation studies on the effect of strongly polar components in crude oil on the IFT and the interfacial structure of the surfactant monolayers. The underlying mechanisms must be further explored and clarified from a microscopic perspective.

Conclusions and Outlook

This paper gives a comprehensive review of the research progress made in MD simulation studies over the last decade focused on the adsorption behaviors of surfactants at the interface between two immiscible fluids. Initially, the evaluation methods for the interfacial properties and the characterization methods for the microstructures at the interfaces are presented. Balancing the interactions of surfactants with the water and hydrophobic phases can improve the monolayers' interfacial performance. This can be realized by either enhancing the surfactant-water interactions or the surfactant-oil/gas interactions. The methods include increasing the interfacial concentrations (until saturation coverage), introducing chemical groups (such as CO2-philic functional groups for stabilizing CO2 foam), formulating surfactants with a twin-tailed structure, and adding cosolvents to the surfactants. Consequently, dense and thick monolayers can form, effectively inhibiting the contact between the two immiscible fluids at the interface. The molecular interactions can be classified into three aspects: interactions between the surfactant molecules within the monolayers, interactions between the monolayers and the aqueous phase, and interactions between the monolayers and the hydrophobic phases.
The influencing factors, such as the molecular structure of the surfactant, the synergistic effect of surfactant mixtures and cosolvents, inorganic salt ions, and the molecular makeup of the hydrocarbon phase, are further analyzed in more detail to see how they affect the morphology and interfacial properties of the monolayers. The main points are listed as follows:

(1) Interactions between the surfactant molecules within the monolayers: with the increase in interfacial concentration, the formed monolayers undergo the process of "GL dispersion-LE phase-LC phase-undulation state-protruding bud structure-restoration of flatness". In addition, modifying the molecular structure can enhance the interfacial performance of the surfactants. The measures include increasing the size of the headgroups, introducing extra hydrophilic radical groups, polymerizing the monomer molecules, as well as shortening and coarsening the linear-chain molecules. When applying surfactant mixtures (i.e., the synergistic effect), surfactant molecules of small size are inserted into the gaps between the large surfactant molecules, improving the integrity of the monolayers and thus preventing the free diffusion of molecules and the contact between the two immiscible phases.

(2) Interactions between the surfactant monolayers and the water phase: a clear hydration shell (which consists of bound water and captured water) exists near the hydrophilic headgroups of the surfactant. The number of water molecules in the hydration layers and the number of hydrogen bonds, which quantitatively characterize the hydrophilicity of various headgroups, can be obtained from the MD simulation method. For ionic surfactant molecules, the inorganic salt ions shield the hydrophilic headgroups from electrostatic repulsions, which leads to more surfactant molecules being enriched at the interface. For nonionic surfactant molecules, the salt ions change the orientation of the hydrophilic headgroups, thus improving the degree of interfacial coverage of the monolayers.

(3) Interactions between the surfactant monolayers and the hydrocarbon phase: most of the molecules (such as natural gas, paraffins, and aromatic hydrocarbons) are nonpolar, whereas resins and asphaltenes are polar molecules. The nonpolar molecules interact with the surfactant alkyl tails via van der Waals forces; thus, the molecular configurations at the gas-liquid interface are more disordered. As to nonpolar molecules in the oil phase (such as n-alkanes), the EATL method, using MD simulations, clarifies the matching relationship between the branched structures in the hydrophobic carbon tails and the components of the oil phase. The modeling of the crude oil composition by MD simulations has evolved from the initial pure n-alkanes to multicomponent simulated oils (i.e., digital oil) containing polar compounds. However, the influence of large polar molecules in crude oil on the interfacial properties of the surfactant monolayers still needs further study.
Based on the above conclusions, the MD simulation method has great potential for studying and analyzing the morphology and properties of various interfaces between two immiscible fluids. To date, most MD simulation studies have employed single-structure surfactants and simplified the oil phase to n-alkanes. With the rapid development of computing power on high-performance clusters and the continuous optimization of modeling approaches, the differences in physicochemical properties between simulated oil phases and natural pore fluids are becoming negligible. To enhance the dependability of MD simulations in predicting the microstructure and thermodynamic characteristics under different conditions, and to promote the approach as a widely recognized technology in industrial research and application, it is imperative to improve and innovate it from multiple angles:

(1) Upgrade the spatial and temporal scales. Currently, the dimensions of the simulated systems in most MD simulations are less than 20 nm for the sake of computational efficiency [36]. Expanding the spatial scale of simulations to hundreds of nanometers is crucial to eliminate the randomness of the predictions caused by the size effect. Meanwhile, only if a simulation is ergodic and long enough to allow the system to visit all its energetically relevant states can we derive meaningful information from it [47]. These improvements are beneficial for describing the enrichment of surfactant molecules from the interior of the bulk phase to the interface and their desorption from the interface under various conditions. Under these circumstances, coarse-grained MD and DPD simulations are recommended [122,123], which can model molecular behaviors from hundreds of nanometers to several micrometers (i.e., with a mesoscopic perspective).

(2) Accurate description of the interface system. Unlike modeling of the bulk phase of the fluids, the intermediate regions in binary fluid systems are heterogeneous. Regarding the van der Waals interaction, an insufficient cut-off distance for the intermolecular interactions would lead to significant artifacts in the microstructure and properties at the interfaces [124]. Furthermore, the dispersion correction of the cut-off scheme significantly affects the adsorption behavior of systems in which the Coulomb force is not strong. The Lennard-Jones potential with the particle-mesh Ewald (LJ-PME) scheme is a potential solution for this issue [125]. In addition, the commonly used force fields [126-128] were developed for specific purposes (e.g., phase behaviors), so simulation results for interface systems may not be quantitatively comparable with each other. The existing force fields should be continuously improved with reference to first-principles calculations and experimental values [129-131]. The combination of the MD simulation method and machine learning (ML) techniques may provide a fast and cost-effective IFT determination over multiple and complex fluid-fluid and fluid-solid interfaces (i.e., inhomogeneous systems) [132]. The relationship between the IFT, fluid composition, and thermodynamic conditions may involve several variables. In this context, machine learning can be a suitable approach to correlating physical and chemical properties in a single and robust model.
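As a purely illustrative sketch of the ML idea mentioned above, the snippet below fits a gradient-boosted regressor that maps simple system descriptors (temperature, pressure, salinity, surfactant concentration) to IFT values. The data here are randomly generated placeholders and the feature set is an assumption; in practice the table would be filled with IFT values from MD simulations and/or experiments, and the descriptors would be chosen to match the systems of interest.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Placeholder descriptors: [temperature (K), pressure (MPa), salinity (wt%), surfactant conc. (CMC units)]
X = rng.uniform([300, 5, 0, 0.1], [400, 30, 15, 2.0], size=(500, 4))
# Synthetic "IFT" target just for demonstration; replace with simulated/measured values.
y = (30 - 0.03 * (X[:, 0] - 300) - 0.2 * X[:, 1] - 0.4 * X[:, 2]
     - 8 * np.log(X[:, 3] + 1) + rng.normal(0, 0.5, 500))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"MAE on held-out points: {mean_absolute_error(y_te, model.predict(X_te)):.2f} mN/m")
```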
Generally speaking, there is no need to identify and describe every molecule present in reservoir fluids for research and industrial applications. Employing various modeling techniques, from atomistic to mesoscopic scales, to investigate the interfacial behaviors of the surfactants is a highly effective approach [45]. MD simulations based on first-principles methods can capture the chemical reactions occurring in reservoir fluids [43]. Coarse-grained MD and DPD simulations can model the surfactant behaviors at the mesoscale [122,123]. Foam flooding experiments and MD simulation studies of the gas-liquid interfaces aim to enhance the stability of the foam film and extend the half-life period of the foam so that it can play a longer and more influential role in oilfield applications. Microemulsion flooding tests and MD simulation studies of the water-oil interface aim to achieve ultra-low IFT to enhance the mobility of the oil phase and the miscibility of the oil and water phases in the reservoirs. The rapid development of simulation technology has complemented the experimental evaluation of surfactant performance. Provided there is sufficient experimental data to validate and correlate the computed results, it is feasible to use molecular modeling computations to forecast the macroscopic behaviors of the systems with a reasonably high level of reliability. In addition, it expedites the advancement of research on improving chemical flooding technologies, which involves the Laplace capillary suction effect, Winsor R theory, the hydrophile-lipophile balance (HLB) theory, and other related concepts [15,17,44]. Regarding the simulation itself, different theories offer diverse simulation schemes for research. Innovative experimental techniques can provide more precise input parameters and validations for simulated results, thereby advancing the research progress of theoretical simulation methods. To summarize, the development of EOR methods with surfactants needs our continuous efforts and innovations from all the perspectives of theory, experiment, and simulation, as well as further strengthening of the links among them. Given the crucial role that interfaces play in porous media for the energy transition, we anticipate this review will also benefit hydrogen storage and energy transitions [133].
Figure 1. A schematic diagram of possible mechanisms of entrapment and fluid distribution in a porous medium with different flow rates. The black arrows indicate the flow directions. (a) strongly water-wet conditions, wherein a water flow path in narrow pores is formed, and the oil in large pores (but small throats) is bypassed; (b) mixed wet conditions, showing a different displacement mechanism where the oil is attached to the oil-wetting area [10].

Figure 2. A schematic diagram of the Jamin effect at the pore throat (adapted from [11]). The oil droplet is squeezed in the pore throat and retained by capillary forces.

Figure 3. A two-dimensional schematic diagram of a foam film's plateau junction. (a) plateau junction of three bubbles; (b) enlarged view of the plateau junction. "P" denotes a point at the plateau junction (i.e., the plateau node), whereas "A" denotes a point within the foam film (i.e., the plateau border) [16].

Figure 4. A diagrammatic drawing of the Gibbs-Marangoni effect. The left side depicts that when the liquid film is deformed under external disturbance, a drainage flow and an IFT gradient are generated. The right side illustrates that the liquid film is recovered due to the Gibbs-Marangoni effect, and the surfactant molecules are evenly distributed again at the surface. The purple solid spheres and black sticks represent the headgroups and carbon tails of surfactants, respectively [19].

Figure 6. Relationships between molecular dynamics simulations and traditional experimental approaches to screening and evaluating the surfactants in enhanced oil recovery.

Figure 7. Computational schemes for molecular dynamics simulation of binary immiscible fluids.

Figure 8. A schematic diagram of surface pressure (Π − A) isotherms as a function of the surface area per molecule of the surfactant monolayer. The molecular model of internal olefin sulfonate is used here. GL, LE, and LC mean gas-like, liquid-expanded, and liquid-condensed, respectively.

Figure 9. The influencing factors affecting the static stability (i.e., IFT values) of SDS-stabilized CO2 foam films. The effects of (a) interfacial concentration, (b) temperature, and (c) pressure on the IFT values at the CO2-water interfaces [16].

Figure 10. Interfacial properties of the surfactant-formed monolayers at the CO2-water interfaces. (a) Variation of the IFT values with the increase in the interfacial concentration; (b) Plotting of the IFT versus the natural logarithm of the area per surfactant molecule; (c) The predicted interfacial elasticity [19].

Figure 11. Variations in the interfacial tension (IFT) at the oil-water interface depending on the attachment position of the benzene sulfonate groups. The solid red line and open circles indicate the IFT results obtained from MD simulations. The dashed blue line and solid circles are IFT results from experiments (adapted from [52]).

Figure 12. Effect of the headgroup architecture of the surfactants on the interfacial performance at various interfaces. The red bold values indicate the minimum IFT. The effect of headgroup size on the IFT values of (a) ABS surfactants at the decane-water interfaces and (b) NPDS surfactants at the air-water interfaces (adapted from [65-67]).

Figure 13. Ratio effect of the individual surfactant in the surfactant mixtures on the interface formation energy. The black solid line and solid circles are the calculated interface formation energy (IFE) from MD simulations. The red dashed line indicates the minimum IFE [73,74].

By analyzing the distribution patterns of the mass density of the individual components, we can obtain the interfacial width at various interfaces, though there are different criteria [52,67-76]. A common practice for defining the interfacial width of the liquid-vapor interface is the "10-90" criterion [66,77], which is the distance between two positions where the density varies from 10% to 90% of the density of the bulk phase. However, it becomes more complicated when surfactants are introduced at liquid-liquid (e.g., oil-water) interfaces due to the presence of two sub-interfaces. In this case, the "90-90" criterion is suggested [52,76], which is the distance between two positions where the densities of water and oil are 90% of their bulk densities. The interfacial width generally increases monotonically with the number of surfactant molecules at the interface since the surfactant molecules become more tightly packed and the carbon tails become more vertically oriented relative to the interface [26]. When the interface is no longer flat, the definition of the interfacial width breaks down and no longer reflects the actual interfacial width; however, it does reflect the size of the undulations.

Figure 14. Final molecular configurations of IOS surfactant monolayers at the decane-water interface under the equilibrium state. Panels (a-f) indicate the conditions with different interfacial concentrations. The left column is the front view, and the right column is the side view (adapted from [26]).

Figure 15. A schematic diagram of the interfacial structures formed by betaine (i.e., zwitterionic) and anionic surfactants with different carbon tail lengths. (a) The anionic surfactant has a longer carbon tail than the betaine surfactant. (b) The anionic surfactant has a carbon tail with the same length as the betaine surfactant. (c) The anionic surfactant has a shorter carbon tail than the betaine surfactant. The headgroups and carbon tails of the surfactants are indicated by red and blue colors, respectively. The oil-water interface is indicated by the yellow line.

Figure 16. The spatial distribution function of water molecules surrounding the carboxylic acid group (left side) and sulfonate group (right side), respectively. Oxygen atoms are represented by red balls, the sulfur atom is indicated by the yellow ball, carbon atoms are indicated by green balls, and water molecules are indicated by cyan shading [86].

Figure 17. The RDF of the central atom (S) of the IOS surfactant headgroup with respect to the hydrogen (Hw) and oxygen (Ow) atoms of the surrounding water molecules and the sodium ions (Na+) in the water phase. In the inset, the RDF curve is a close-up of the S-Ow curve. The schematic diagram illustrates the hydration structure surrounding the headgroup. The pink arc and arrow indicate the bound water in the hydration shell. The green arc and arrow indicate the captured water in the hydration shell. Oxygen atoms are indicated by red balls, the sulfur atom is indicated by the yellow ball, and hydrogen atoms are indicated by silver balls.

Figure 18. Effects of different hydrophilic headgroups and spacers on the hydration numbers (indicated by bars) and interfacial tensions (indicated by the data points and lines) of the surfactant monolayers. As observed, a larger hydration number leads to lower IFT values (data are from [76]).

Figure 19. A diagram of the potential of mean force (PMF) between the surfactant headgroups and counterions at the interface. The PMF curve can be predicted using Equation (13).

Figure 20. Effect of counterions on the interfacial structures of surfactant monolayers. (a) Ionic surfactants at the oil-water interface. (b) Ionic surfactants and salt ions at the oil-water interface. (c) The interactions between ionic surfactants and salt ions. (d) Nonionic surfactants at the oil-water interface. (e) Nonionic surfactants and salt ions at the oil-water interface. (f) The interactions between nonionic surfactants and salt ions. The headgroups and tails of the surfactants are represented by red and blue colors, respectively, and salt ions are represented by green balls. The oil-water interface is indicated by the yellow line.

Figure 21. Effective alkyl tail length of sodium hexadecyl benzene sulfonate (SHBS-4C16). Oxygen atoms are indicated by red balls, the sulfur atom is indicated by a yellow ball, carbon atoms are indicated by cyan balls, and hydrogen atoms are indicated by silver balls (adapted from [52]).

Figure 22. Construction methods for a digital oil model. (a) Workflow for light crude oil and (b) workflow for heavy crude oil [118].
v3-fos-license
2020-05-07T15:27:13.713Z
2020-05-07T00:00:00.000
218527489
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://rbej.biomedcentral.com/track/pdf/10.1186/s12958-020-00604-0", "pdf_hash": "ff07ed0a01236675052f31d55edaa0fe1193d039", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46361", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "ff07ed0a01236675052f31d55edaa0fe1193d039", "year": 2020 }
pes2o/s2orc
Impact of human papillomavirus infection in semen on sperm progressive motility in infertile men: a systematic review and meta-analysis

Background: Human papillomavirus (HPV) has been considered one of the most common sexually transmitted viruses and may be linked to unexplained infertility in men. The possible mechanisms underlying the correlation between HPV infection and infertility could be related to altered sperm parameters. Current studies have investigated the effect of HPV seminal infection on sperm quality in infertile men, but have shown inconsistent results.

Methods: We systematically searched PubMed, Embase, Web of Science and CNKI for studies that examined the association between HPV seminal infection and sperm progressive motility. Data were pooled using a random-effects model. The outcome was the sperm progressive motility rate. Results are expressed as the standardised mean difference (SMD) with 95% confidence interval (CI). Heterogeneity was evaluated by the I-square (I2) statistic.

Results: Ten studies were identified, including 616 infertile patients with HPV seminal infection and 2029 infertile controls without HPV seminal infection. Our meta-analysis results indicated that sperm progressive motility was significantly reduced in HPV-infected semen samples compared with non-infected groups [SMD: -0.88, 95% CI: -1.17 to -0.59]. There was statistical heterogeneity (I2 value: 86%), and the subgroup analysis suggested that study region might be a cause of the heterogeneity.

Conclusions: HPV semen infection could significantly reduce sperm progressive motility in infertile individuals. There were some limitations in the study, such as differences in age, sample sizes and the number of HPV genotypes detected. Further evidence is needed to better elucidate the relationship between HPV seminal infection and sperm quality.

Introduction

Infertility is defined as the inability of a couple to conceive after 1 year of unprotected sexual intercourse, and it affects approximately one-fifth of couples of reproductive age [1]. Among them, male infertility contributes to roughly 50% of overall infertility cases [2]. Seminal infections are significant etiologic factors in male infertility and are often associated with impaired semen quality [3,4]. Chronic viral infection of the urogenital tract, especially human immunodeficiency virus (HIV) infection, may result in urethral inflammation and decreased fertility [5,6]. Hepatitis B virus (HBV) and hepatitis C virus (HCV) infections in semen can also adversely alter seminal parameters [7,8]. Human papillomavirus (HPV) is one of the most common sexually transmitted viruses in both males and females worldwide [9]. Some studies have reported that HPV can bind to the head of the sperm, decreasing male fertility or even causing infertility [10]. A significant association between seminal HPV infection and male fertility abnormality has been reported [11,12]. Also, recent research has suggested that HPV infection of semen represents a significant risk factor for infertility in men [13,14]. The possible mechanisms underlying the correlation between HPV seminal infection and infertility remain unclear [15]; one possibility is that HPV infection significantly lowers key sperm parameters [14]. Sperm progressive motility has conventionally been considered a good indicator of motility and a key functional parameter essential for fertilization.
The effects of HPV infection on sperm progressive motility in infertile men have been investigated, but the results are controversial [16]. Several researches indicated that HPV infection was closely related to male infertility with decreased sperm progressive motility [17][18][19][20][21][22][23][24][25], while Zheng et al. revealed that there was no significant difference of sperm progressive motility rate between infected and non-infected infertile subjects [26]. In this research, we performed a systematic review and meta-analysis to investigate the possible impact of HPV infection in semen on sperm progressive motility in infertile individuals. Literature search Two independent reviewers searched the PubMed, Embase, Web of Science and CNKI from inception until September 2019. The study type was not restricted. The following search terms were used in combination for search strategies: "human papillomavirus", "HPV", "infertility", "semen", "sperm quality", "sperm parameter" and "progressive motility". We also conducted manual searches of relevant additional references cited in review articles. Eligibility criteria Studies were included if sperm progressive motility could be directly extracted from the original article. Data should be expressed as mean ± standard deviation (SD). Studies were excluded if they were: 1) reports not focusing on infertile patients or participants with male accessory gland infection; 2) without SD value; 3) case reports or reviews. The inclusion criteria of infertile patients were at least 1 year of unprotected sexual intercourse without contraception, and healthy female partners (their tubal, uterine, cervical abnormalities, and ovarian disorders were excluded). Exclusion criteria were presence of antisperm antibodies, azoospermia, undescended testis, chromosome abnormalities and history of orchitis, epididymitis, epididymo-orchitis, varicocele and/or sexually transmitted infections in couples [27]. Study populations were separated into two groups: infertile patients with HPV seminal infection and infertile patients without HPV seminal infection. Diagnosed with HPV seminal infection in general population and fertile men were also excluded. Data extraction and risk of bias The data of all included articles were extracted independently by two investigators. Disagreements were discussed and resolved by consensus. Key variables of interest from each study included: first author, publication year, population characteristics (country of region, age, sample size), HPV genotype, sperm progressive motility in infertile patients with or without HPV semen infection. The Cochrane Handbook for Systematic Reviews was used to assess the risk of bias in each study. The inclusion criteria, risk of bias at the study level and data extraction were evaluated (Supplemental Figure S1 and Supplemental Figure S2). The primary outcome was the rate of sperm progressive motility. Statistical analysis The inputted data included sample sizes and outcome measures with mean and standard deviations. Outcome measures were converted into the SMD with 95% CI. Heterogeneity was evaluated by I 2 statistic to quantify the percentage of total variation across studies. If I 2 value was greater than 50%, the summary estimate was analyzed in a random effect model. Otherwise, a fixed effect model was used. Sensitivity analysis was conducted to estimate whether any single study influenced the stability of the meta-analytic results by sequentially removing individual included study. 
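The pooling procedure described above (standardised mean differences combined with inverse-variance weights, I-square to assess heterogeneity, and a random-effects model when I2 exceeds 50%) can be illustrated with a short calculation. The sketch below is not the authors' code and the study values are placeholders; it only shows the DerSimonian-Laird arithmetic that software such as RevMan performs.

```python
# Illustrative sketch (not the authors' code): inverse-variance pooling of
# standardised mean differences with a DerSimonian-Laird random-effects model.
# The (SMD, variance) pairs below are placeholders, not the included studies.
import math

studies = [(-0.9, 0.04), (-0.5, 0.02), (-1.2, 0.09)]

def pool(studies):
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1.0 / v for _, v in studies]
    smd_fixed = sum(wi * s for wi, (s, _) in zip(w, studies)) / sum(w)

    # Cochran's Q and the I^2 heterogeneity statistic
    q = sum(wi * (s - smd_fixed) ** 2 for wi, (s, _) in zip(w, studies))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # DerSimonian-Laird between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0

    # Random-effects weights, pooled SMD and 95% confidence interval
    w_re = [1.0 / (v + tau2) for _, v in studies]
    smd_re = sum(wi * s for wi, (s, _) in zip(w_re, studies)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    ci = (smd_re - 1.96 * se_re, smd_re + 1.96 * se_re)
    return smd_re, ci, i2

smd, ci, i2 = pool(studies)
print(f"Pooled SMD {smd:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], I2 = {i2:.0f}%")
```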
Publication bias was assessed by Egger's test and statistical analyses were performed using RevMan 5.3 and STATA 16.0. Study characteristics The initial literature search yielded 291 potentially relevant studies. Most ineligible studies were excluded based on information in the title or abstract and the remaining 32 eligible studies were reviewed in detail. The selection process was shown in Fig. 1. As a result, ten articles were included in the final meta-analysis, providing data on 616 HPV DNA positive men among 2645 participants from 3 countries. The main characteristics of the studies included in our meta-analysis were described in Table 1. Meta analysis To assess the effect of HPV seminal infection on sperm progressive motility, ten eligible studies including 616 infertile patients with HPV-infected in semen and 2029 non-infected infertile subjects were analyzed. According to the results of the heterogeneity test, the random effect model was chosen to estimate the SMD. A significant reduction of sperm progressive motility was found in semen samples of HPV-infected infertile patients compared with non-infected groups (SMD:-0.88, 95% CI:-1.17~− 0.59) (Fig. 2). A subgroup analysis was performed to differentiate the effect size based on study region. The pooled SMD was highest in China (− 0.59, 95% CI: − 0.73~− 0.45), followed by Italy (− 1.10, 95% CI: − 1.54~− 0.67) and Iran (− 1.26, 95% CI: − 2.02~− 0.49) (Fig. 3). There was no statistical heterogeneity in the subgroup of China. Sensitivity analysis None of an individual study significantly altered the overall significance of the combined SMD in the analyses relating to the impact of HPV seminal infection on sperm progressive motility in infertile individuals (Fig. 4). Publication bias Egger's test of publication bias of the seminal HPV infection on sperm progressive motility in infertile patients indicated a lack of publication bias (P = 0.84). Discussion HPV has been considered as an infectious factor that might be linked to unexplained infertility in men. Previous meta analyses have reported the prevalence of HPV in semen [28] and the risk for male infertility [13,14]. Laprise et al. [28] exhibited that the pooled HPV prevalence in semen was estimated at 16% for men seeking fertility evaluation/treatment and at 10% in general populations. Xiong et al. [14] and Lyu et al. [13] demonstrated that HPV semen infection was a risk factor for male fertility abnormality with an OR of 3.02 (95% CI: 2.11-4.32) and 2.93 (95% CI: 2.03-4.24) respectively. The issue whether HPV seminal infection has significance and consequence for sperm progressive motility in infertile men is controversial. The current study conducted a meta-analysis to evaluate the impact of HPV semen infection on sperm progressive motility in infertile subjects. The results showed that the prevalence of HPV detection in semen in infertile men ranged from 9.1% by Zheng et al. [26] to 67.7% by Yang et al. [25]. Sperm progressive motility reduced significantly in seminal HPV infected patients compared with non-infected groups. In the aspect of HPV genotypes distribution, the results showed that HPV-16, HPV-18/52, HPV-33, in decreasing order, were the most prevalent genotypes in semen of infertile group. Previous studies have shown that in semen the HPV were detected both in exfoliated cells [29] and in sperm surface, especially in the sperm head [17]. In an in vitro study, Carlo et al. 
[30] reported that HPV could infect human sperm and it localized at the equatorial region of sperm head through interaction between the HPV capsid protein L1 and syndecan-1. Moreover, HPV binding to sperm was tenacious [10,31] and conventional methods of sperm washing could not clear HPV DNA from sperm surface [32]. The pathogenic mechanism explicating the reduction of sperm progressive motility related to seminal HPV infection might be associated with anti-sperm antibodies (ASAs), glandular dysfunction and sperm DNA fragmentation. Firstly, several studies have shown that infertile patients with HPV semen infection had a high percentage of ASAs on sperm surface and the presence of HPV in semen was frequently related with ASAs of IgA and IgG classes, which suggested that the presence of HPV DNA on the sperm surface might represent an antigenic stimulus for ASA formation [17,33]. Although the role of ASAs is controversial, some mechanisms have been proposed affecting sperm quality: sperm agglutination and complement mediated sperm cytotoxicity occurring within the male genital tract [34]. Secondly, HPV seminal infection in infertility men may have altered proportions of secretory products mainly from prostate and seminal vesicles, which could have a negative impact on sperm motility [35]. Thirdly, HPV infection might result in the increased rate of sperm DNA fragmentation and apoptosis. In vitro study by Connelly et al. [36] indicated that sperm cells transfected with exogenous HPV E6/E7 DNA had higher percentages of breakages characteristic of apoptosis compared to the uninfected controls. In contrary, in vivo study by Cortes et al. [37] failed to find any association between HPV positive and sperm DNA fragmentation. Further evidence gathered through well-designed trials to confirm whether HPV-infected sperm is more susceptible to DNA damage is warranted. In fact, HPV-infected sperm maintained their ability to fertilize the oocyte, interfered with implantation and embryo development, thus affecting the outcome and safety of assisted reproduction techniques (ARTs) [38]. Henneberg et al. [39] demonstrated embryo stage-specific disruption effects of HPV on early development. Perino et al. [40] reported the lower pregnancy rate and increased percentage of abortions in ARTs with HPV positive in semen. In a cross-sectional clinical study [21], cumulative pregnancy rates recorded in noninfected and infected couples undergoing ART were, respectively, 38.4 and 14.2%. During the follow-up of these pregnancies, a significantly higher miscarriage rate (62.5% vs. 16.7% of noninfected) was observed in HPV-infected subjects. In particular, all pregnancy losses of the infected group took place very early (three at 5th and two at 6th gestational week). The results showed that I 2 -value was greater than 50%, which suggested that there was potential heterogeneity between studies. The heterogeneity might be attributed to differences in study region, sample size, the definition of male infertility and the number of HPV types detected. The inclusion criterion of the infertile group was at least 1 year or 2 years of unprotected sexual intercourse without conception. The study by Foresta et al. [23] included the infertile patients of case group only affected by HPV-16 semen infection and HPV-genotypes other than HPV-16 were all excluded. Multiple HPVgenotypes were detected in most of articles included in the present study and the genotype was not mentioned in one study [21]. 
The results of subgroup analysis showed that I 2 -value was equal to zero in the subgroup of China, which suggested that study region might be the causes of heterogeneity. In addition, some limitations of the present metaanalysis should be considered when interpreting the results. Firstly, though we performed an extensive literature search, potential selection bias could not be completely avoided because only articles published in Chinese and English were included. Secondly, some important confounding factors, such as male age and environmental exposures were not always noted. These factors might have confounding effects on the correlation between HPV semen infection and reduced sperm progressive motility. Thirdly, most articles were not prospective study and might therefore decrease the reliability of our results. Conclusions In summary, the current evidences suggest that HPV semen infection could significantly reduce sperm progressive motility in infertile individuals compared with non-infected infertile group. This information could make recommendations for reproduction diagnosis and treatment and could affect public health. However, this evidence is far from conclusive because of the small sample sizes and existing confounding factors of the currently available studies. Future studies with large sample size and rigorous design are necessary to elucidate the impact of HPV semen infection on sperm quality. Additional file 1: Figure S1. Assessment of risk of bias. Additional file 2: Figure S2. Assessment of risk of bias.
v3-fos-license
2019-04-12T13:57:44.495Z
2013-02-21T00:00:00.000
109075581
{ "extfieldsofstudy": [ "Mathematics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://downloads.hindawi.com/archive/2013/354789.pdf", "pdf_hash": "e6f5c76ba499a195fab9f419f9bed6225cd17443", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46363", "s2fieldsofstudy": [ "Engineering", "Physics" ], "sha1": "460abe727af40dd982c62b0550c6e19ae90b808e", "year": 2013 }
pes2o/s2orc
A Quasi-Yagi Antenna Backed by a Jerusalem Cross Frequency Selective Surface A quasi-Yagi antenna is developed to operate at 2.4GHz (ISM band) presenting a low profile and off-axis radiation when packaged over ametal ground plane.The off-axis radiation is realized by incorporating a Jerusalem cross frequency selective surface (JC-FSS) as the ground plane for the antenna. A JC-FSS is preferred because of its frequency stability in the operating band for a large angular spectrum (≈70) of TEand TM-polarized incident waves. In this research, the substrate of the antenna flush-mounted on top of the FSS is added to the JC-FSS model and allows for a smaller cell grid. The prepared quasi-Yagi antenna over the JC-FSS offered 260MHz of functional bandwidth and 54 of beam tilt towards the end-fire direction. To the best of the authors’ knowledge this is the first instance that these two structures are combined for off-axis radiation. Additionally, to support the preferred use of the JC-FSS, the quasi-Yagi is backed by a square patch (SP) FSS for comparison purposes. Introduction The work presented in this paper introduces a quasi-Yagi antenna over a metal reflector with off-axis radiation at 2.4 GHz (ISM band). The main application of interest for this antenna is on sensor nodes comprising a wireless sensor network inside a multipath rich environment such as an aircraft fuselage. In practice these antennas would be mounted atop the metal package of a sensor node and used to communicate preferentially toward the front or rear of the aircraft. The packaging of the antenna over a metal ground plane presents a challenge, however, as this configuration results in undesired phase reflections (1 180 ∘ ) and image currents from the ground canceling the current of the antenna, degrading its operational bandwidth, and tilting the beam away from the end-fire direction. A previous solution was proposed in [1] which consisted of displacing the metal reflector from the antenna by a suitable distance (0.19 = 7.5 mm). Though off-axis radiation of 40 ∘ was achieved, the arrangement resulted in an inherently high profile. The configuration proposed herein introduces a new alternative to [1] as it packages the quasi-Yagi antenna over a high impedance surface (HIS) or electromagnetic band-gap (EBG) structure. The HIS eliminates the out-ofphase reflections generated at the ground from radiating to the antenna and supports the radiation of leaky TE waves in the frequency region of high impedance. The preferred HIS configuration used here is the Jerusalem cross frequency selective surface (JC-FSS) from [2], because of its compact size, numerous parameters for tuning, and its stability over the band gap for a large angular (<70 ∘ ) spectrum of TE and TM incident waves. The unique feature in [2] is the addition of the antenna substrate layer into the JC-FSS model, allowing for a smaller FSS grid. The use of an HIS to back existing antenna technologies presents advantages such as preserving a considerable fraction of the antenna bandwidth while reducing profile, improving gain, reducing back radiation, and increasing radiation efficiency. Work described in [3] for an Archimedean spiral antenna over a square patch EBG (versus the conventional 0.25 grounded cavity) offered an overall antenna/EBG profile reduction >69% and preserved the bandwidth by 71% without losing directive gain. 
Another example, developed in [4], shows a wearable dual-band (2.4 and 5.1 GHz) coplanar patch antenna combined with a square patch EBG made of common clothing fabrics. The EBG used here improved antenna gain (3 dB) and reduced back radiation into the body (>10 dB). An equivalent design in [5] provides the same characteristics (high gain and low back radiation), but it used a nonconformal material and the EBG arrangement consisted of a Jerusalem cross-slot array. An example published in [6] shows a flared dipole with quasi end-fire radiation at 3 GHz over an HIS consisting of an array of hexagonal grids (thumbtacks). The amount of angular beam tilting towards the endfire direction depends on the impedance of the HIS, which can be tuned in real time by a variety of methods (e.g., electronically, using varactor diodes [7]). Finally, work presented in [8] illustrates a broadband diamond dipole antenna over a Jerusalem cross frequency selective surface. The bandwidth for the combined structure extends from 5 to 11 GHz with high gain (>6 dB) from 3.7 to 6.8 GHz and a nondisturbed (without unwanted nulls) directive pattern up to 7 GHz. As suggested above, the majority of existing antenna/EBG configurations present radiation in the broadside direction. The design prepared here combines for the first time a quasi-Yagi antenna with a JC-FSS over a metal ground with radiation directed toward the end-fire direction. The combined structure has an overall profile of 5 mm (0.125 ), functional bandwidth extending from 2.29 to 2.55 GHz, and 54 ∘ of beam tilt towards the end-fire direction. In comparison to [1] the new design provides 33% reduced profile and 14 ∘ of additional beam tilting. Also, in comparison with a conventional quasi-Yagi antenna of the same substrate thickness (5 mm) but lacking the HIS layer, the proposed design offers wider bandwidth (2.24-2.46 GHz versus 2.35 GHz) and 27 ∘ more beam tilting. The following sections present the antenna design, the derivation of the JC-FSS model with the antenna substrate, and comparisons between simulated and measured results for the combined structure. Also, the performance of the quasi-Yagi antenna when backed by a square patch FSS is shown to highlight the advantages of the JC-FSS. Design Characteristics. A quasi-Yagi antenna, such as that in [9], consists of an array of dipoles printed on a substrate and fed by a microstrip to coplanar strip-line (CPS) transition. The transition is used as a transformer to connect the unbalanced microstrip input line to the balanced (CPS) antenna feed line. In addition, the ground plane from the microstrip transition is used as the reflector element for the array, eliminating the need for a reflecting dipole and resulting in a more compact length (< 0 /2), along with direct compatibility with microstrip circuitry. Further advantages inherent in the design are mechanical support and planar transmission line compatibility due to the presence of a substrate. The use of a high permittivity substrate means that the antenna will be extremely compact in terms of free space wavelength ( 0 ). In regard to frequency of operation and radiation pattern, quasi-Yagi antennas are broadband (≈50%) and radiate in the end-fire direction. The new feature developed in this research, Figure 1, consists of shielding the antenna with a metal ground. To overcome out-of-phase reflection (1 180 ∘ ) from the metal ground and surface currents from shorting the director and driver dipoles, a JC-FSS is implemented. 
The ground planes of the FSS and the truncated microstrip are connected to the same potential through shorting vias. The distance separating the antenna and the FSS was determined based on antenna bandwidth requirements and commercially available substrate thickness options. The effect of this distance is accounted for as a superstrate during the derivation of the closed form equations for the JC-FSS model. The dimensions for the antenna elements in Figure 1 are optimized from [1] for end-fire radiation and to account for the added JC-FSS through simulations in Ansoft HFSS. The substrate material is RT/Duroid 6010 LM ( = 10.2). The overall size of the antenna is 58 mm × 86 mm. The optimized dimensions for the quasi-Yagi antenna are listed in Table 1. International Journal of Microwave Science and Technology The most significant adjustments are the reduction of the driver to director separation (30%) and the decrease of the overall substrate profile (33%). Jerusalem Cross Frequency Selective Surface. The JC-FSS implemented here was previously developed by the authors in [2]. The design offers in-phase reflection (1 0 ∘ ) for an operational band extending from 2.39 to 2.5 GHz at normal incidence. In addition, at the center frequency (2.45 GHz) the JC-FSS offers frequency stability for a large angular spectrum (>70 ∘ ) for both TE-and TM-polarized incident waves. As previously stated, the main feature from the FSS in [2] is the addition of the antenna substrate into the JC-FSS model, which decreases the center frequency of the high impedance band from 3.2 GHz to 2.45 GHz for the same FSS dimensions. Surface Waves on a Metal Surface versus a Textured Surface. The properties of surface waves on a metal surface versus a textured surface are compared herein, to explain the use of the latter in the quasi-Yagi antenna. If the radiating element is placed near the ground plane, it will generate currents that propagate along the metal sheet. Any break or discontinuity (e.g., the edge of board) on the flat surface will promote radiation from that location. The result is a destructive interference which cancels the radiation from the antenna and decreases the radiation efficiency. By adding a special texture to a metal surface, it is possible to suppress surface currents over a range of frequencies (band gap). As discussed in [10], the electromagnetic properties of the structure can be described by a single value, the surface impedance, if the period of the textured surface is much smaller than the wavelength in the dielectric media ( ). A smooth or flat conductive sheet has low surface impedance, while a textured surface can be engineered to have high surface impedance. The fields radiated by the quasi-Yagi antenna over the FSS are a combination of those produced by the antenna elements themselves and those that exist due to the presence of the FSS. For a quasi-Yagi antenna without a ground or underlying structure, the radiated fields are TE with respect to the substrate surface and the direction of propagation (end fire). If a conventional ground was placed beneath the antenna, TM surface wave propagation is possible; however, these waves are unlikely to be excited by the antenna elements due to their orientation. The main problem with this configuration is field cancelation due to image currents. With the textured FSS layer beneath the antenna both TE and TM wave propagations are possible. However, TE wave excitation is dominant because of the orientation of the antenna elements. 
When the surface impedance is large, these TE waves are leaky and radiate readily, causing the overall radiation pattern to tilt away from broadside. The low cross-polarization levels achieved with the antenna presented herein support the conclusion that TM surface wave radiation is not significant.

Derivation of the JC-FSS Model. The JC-FSS of this work is effectively modeled by a parallel resonant LC circuit, provided that the grid period is much smaller than the wavelength. The LC model consists of the parallel combination of the self-resonant grid impedance, which represents a strip, with the grounded dielectric slab impedance. Figure 2(a) shows that the grid impedance can be expanded into the series combination of the narrow strip impedance and the edge impedance between end loading strips. The narrow strip impedance is mostly inductive and is derived from Telegrapher's equations or from the stepped impedance equations in [11], giving the grid inductance of Equation (1). The impedance between end loading plates is mostly capacitive and is a result of the charge built up between plates [10]. This capacitance, given in Equation (2), depends on the effective permittivity including the superstrate layer, the length of an end loading plate, the gap between crosses, and the period between adjacent capacitive plates. As illustrated in Figure 2(b), the grounded slab impedance is mostly inductive and is derived from the TEM transmission line equation for a dielectric slab backed by a perfect electric conductor, Equation (3), in which k is the wavenumber ω√(μ0ε0εr), h is the dielectric height, and η is the intrinsic impedance of free space. From the parallel LC circuit in Figure 2(b), the equivalent surface impedance is calculated by Equation (4). The resonant frequency is then derived by equating the denominator of (4) to zero, which results in Equation (5). The bandwidth is obtained by dividing the equivalent impedance of the JC-FSS by the free-space impedance and following the criteria in [12,13] that the phase of the reflection coefficient should fall between ±0.25π. The dimensions of the JC-FSS are listed in [2]. Both dielectric layers have a high relative permittivity (εr = 10.2), offering better angular stability and smaller dimensions at low resonant frequencies. Each dielectric layer is relatively thick (2.5 mm), increasing the inductance of the equivalent surface impedance in (4). The use of a small gap (0.32 mm) between crosses leads to a larger edge capacitance in the surface impedance.

Impact of Superstrate on JC-FSS Model. The most attractive trait of the JC-FSS in [2], compared to others [13], is that the antenna substrate (or superstrate) is included in the JC-FSS model. Here, the impinging wave is excited on top of this layer (Figure 3). The equivalent reflection coefficient (Γ_IN) for the structure at oblique incidence for TE- and TM-polarized waves is the result of the combination of the reflected waves from the two dielectric layers and the ground. Also, the addition of the superstrate varies the effective permittivity (ε_eff) between layers from 7.2 to 9.7, which is considered during the derivation of the grid capacitance in (2). The reflection coefficient for TE and TM waves at the JC-FSS/superstrate boundary is calculated by combining the surface impedance in (4) with the boundary conditions at that interface; the incident and refracted angles have minimal effect on the phase of the TE and TM reflected waves over the frequency range in which the JC-FSS exhibits high impedance, resulting in angular stability.
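As a numerical illustration of the parallel-LC description above, the sketch below evaluates a generic high-impedance-surface model in which the grid impedance (series strip inductance and edge capacitance) is placed in parallel with the inductive impedance of the grounded, high-permittivity slab. It is not the authors' design code: the equation forms are the standard ones implied by the text, and the inductance and capacitance values are assumed placeholders rather than values derived from the JC-FSS geometry in [2].

```python
# Hedged sketch of the parallel-LC high-impedance-surface model described above.
# L_grid and C_grid are assumed placeholders; slab height and permittivity match the text.
import numpy as np

mu0 = 4e-7 * np.pi
eps0 = 8.854e-12
eta0 = np.sqrt(mu0 / eps0)                     # intrinsic impedance of free space

def surface_impedance(f, L_grid, C_grid, h, eps_r):
    """Grid impedance (series L and C) in parallel with a grounded dielectric slab."""
    w = 2 * np.pi * f
    Zg = 1j * w * L_grid + 1.0 / (1j * w * C_grid)      # strip inductance + edge capacitance
    k = w * np.sqrt(mu0 * eps0 * eps_r)                 # wavenumber in the slab
    Zd = 1j * (eta0 / np.sqrt(eps_r)) * np.tan(k * h)   # shorted slab, inductive below lambda/4
    return Zg * Zd / (Zg + Zd)

h, eps_r = 2.5e-3, 10.2            # values quoted in the text
L_grid, C_grid = 1.1e-9, 1.0e-12   # placeholders chosen to resonate near 2.4 GHz

f = np.linspace(1e9, 4e9, 3001)
Zs = surface_impedance(f, L_grid, C_grid, h, eps_r)

# Reflection phase of a normally incident plane wave; the useful band is
# commonly taken where this phase stays within +/-90 degrees
gamma = (Zs - eta0) / (Zs + eta0)
phase = np.degrees(np.angle(gamma))
band = f[np.abs(phase) < 90]
if band.size:
    print(f"In-phase reflection band: {band[0]/1e9:.2f} to {band[-1]/1e9:.2f} GHz")
```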
Next, the reflection coefficient at the free-space/superstrate boundary for TE and TM waves is derived. First the input impedance is found for the wave reflected from the FSS: and the results are included in The relation between the angle of incidence and refraction at the boundary of each layer is determined from Snell's law of refraction in [14]. This equation demonstrates that the addition of the superstrate layer reduces the angle of incidence for the wave impinging on the JC-FSS. For example, a travelling wave with an angle of incidence of 60 ∘ at the superstrate surface has an angle of refraction of 15 ∘ . This angle of refraction will be the angle of incidence for the FSS. A comparison is performed on the design of a JC-FSS including the superstrate versus a design without the superstrate. For simplicity the evaluation is carried out for a wave with normal incidence. At normal incidence Γ IN is independent of the polarization of the incident wave since the and fields are both tangential to the boundary. Simulation results in Figure 4 demonstrate that the superstrate shifts the center frequency down to the desired band from 3.2 to 2.45 GHz (750 MHz). This analysis supports the importance of accounting for any additional layer covering the JC-FSS during the FSS closed-form modeling to prevent undesired frequency shifts. In addition, an overall cell size reduction has been achieved by considering the extra layer; an equivalent cell size for 3 GHz, with no superstrate, is used at 2.4 GHz with superstrate. Simulation and Measurement Results A comparison between measured and simulated data on return loss and radiation pattern is presented in this section for the quasi-Yagi antenna backed by the JC-FSS. Additionally, these same results are compared against data for a conventional grounded quasi-Yagi antenna of the same substrate height (5 mm) and the quasi-Yagi design presented in [1]. Finally, a comparison is made between the JC-FSS design and one using a square patch FSS (SP-FSS). The simulated and measured reflection coefficients for the quasi-Yagi antenna over the JC-FSS are compared in Figure 5. The simulated data demonstrate an operational bandwidth from 2.24 to 2.46 GHz. However, the measured frequency band is shifted up in frequency, and the response exhibits undesired reflections in the 2.4 to 2.5 GHz frequency range. These effects are a result of the sensitivity of the JC-FSS to small air gaps from the adhesive used to attach the antenna substrate to the JC-FSS substrate. This explanation is confirmed through simulations, (Figure 6), where the bond line is approximated by a 1.5 mil air gap between layers. Furthermore, a small air gap will also affect eff , disturbing the impedances of the grid design. The results for the simulated and measured normalized H-plane patterns at 2.45 GHz are illustrated in Figures 7 and 8, respectively. The simulated and measured results demonstrate a copolarized H-plane (H-CPOL) pattern with beam tilt of 45 ∘ and 54 ∘ , respectively, towards the end-fire direction. At 45 ∘ the simulated beam peak is 1 and 3 dB larger than at the end-fire ( = 90 ∘ ) and broadside ( = 0 ∘ ) directions. Correspondingly the measured beam peak at 54 ∘ is 2 and 3 dB larger than at = 90 ∘ and 0 ∘ . The simulated and measured H-plane cross-polarization (H-XPOL) levels are −22.6 and −13 dB. 
The drastic increase in the measured H-XPOL level is attributed to measurement set-up tolerances, sensitivity of the JC-FSS to large angles of incidence, and to the air gap resulting from the adhesive. If the resonant frequency of the high impedance band moves up enough such that the frequency of interest (2.45 GHz) falls below the band gap, then the surface impedance is inductive and TM surface waves from the ground radiate readily thereby increasing the X-POL levels [10]. Simulations were performed on a quasi-Yagi antenna printed on a 5 mm thick grounded dielectric slab in order to assess the impact of including the JC-FSS layer. The comparison presented in Figure 9 shows a drastic improvement of 220 MHz in the return loss bandwidth for the design including the JC-FSS versus the design over the 5 mm thick grounded slab. A similar evaluation is shown in Figure 10 Previous work in [1] presented a quasi-Yagi antenna packaged over a grounded dielectric slab (7.5 mm thick) with an operational bandwidth from 2.36 to 2.55 GHz and off-axis radiation of 40 ∘ . In comparison to [1], the antenna backed by the JC-FSS presents a wider bandwidth extending from 2.29 to 2.55 GHz, 14 ∘ of additional beam tilt towards the end-fire direction and an overall profile reduction of 33%. Quasi-Yagi Backed by an SP-FSS. In this section the SP-FSS derived in [15] is realized as the HIS structure for the quasi-Yagi antenna. The objective is to assess the dependence of the quasi-Yagi on the chosen HIS to promote principal beam tilting towards the off-axis direction. The proposed SP-FSS consists of a periodic cell with length and width of 3 mm and a gap of 0.1 mm between adjacent cells. In addition, as suggested by [16], the size of the ground plane beneath the director/driver dipoles was kept unchanged from JC-FSS design to avoid the introduction of unwanted resonances in the return loss response. The SP-FSS design was fabricated and assembled using the same process used for the JC-FSS design, and the measured performance was compared to HFSS simulation data. Figure 11 demonstrates close agreement between the simulated and measured return loss with an operational bandwidth from 2.28 to 2.43 GHz. Figures 12 and 13 show the simulated and measured normalized H-plane patterns at 2.38 GHz. The simulated and measured results demonstrate a copolarized H-plane (H-CPOL) pattern with beam tilt of 38 ∘ and 35 ∘ , respectively, towards the off-axis direction. At 38 ∘ the simulated beam peak is 1.5 and 3.5 dB larger than at the endfire ( = 90 ∘ ) and broadside ( = 0 ∘ ) directions. Correspondingly the measured beam peak at 35 ∘ is 1.3 and 4.2 dB larger than at = 90 ∘ and 0 ∘ . The simulated and measured H-plane cross-polarization (H-XPOL) levels are −25 and −14 dB. The presented evaluation of the JC-FSS versus the SP-FSS has demonstrated that the JC-FSS provides additional beam tilting of 19 ∘ towards the off-axis direction. This is the result of the inherent angular stability and high inductance from the JC grid which makes it a preferable shielding candidate for antennas with off-axis radiation. Conclusion A new design for a quasi-Yagi antenna backed by a metal ground and with end-fire-like radiation has been proposed. The design consisted of packaging the antenna over a JC-FSS. This is the first time that these two structures are combined for end-fire operation. The results on return loss show an operational bandwidth from 2.29 to 2.55 GHz. The Hplane pattern showed beam tilt of 54 ∘ towards the end-fire direction. 
In comparison to a design of same substrate height (5 mm) but without the JC-FSS, the proposed design offers 220 MHz more bandwidth and 27 ∘ of extra beam tilting in the end-fire direction. Furthermore, when compared to the option previously proposed in [1] with the quasi-Yagi antenna placed over a thick grounded slab, the proposed design offers a profile reduction of 33% and 14 ∘ of additional beam tilt in the end-fire direction. Additionally, the presence of the superstrate above the JC-FSS reduces the physical size of the unit cells by 23%. Finally, the comparison carried out on the quasi-Yagi backed by a SP-FSS has demonstrated that selecting an FSS with inherent angular stability for oblique 8 International Journal of Microwave Science and Technology angles of incidence is preferred for antennas with radiation patterns towards the end-fire direction.
v3-fos-license
2021-05-10T00:04:11.747Z
2021-01-31T00:00:00.000
234026754
{ "extfieldsofstudy": [ "Environmental Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://traffic.fpz.hr/index.php/PROMTT/article/download/3582/561561870", "pdf_hash": "429c5a859bc3c4f51af83bd3f6c5d47d17a7efc4", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46364", "s2fieldsofstudy": [ "Business" ], "sha1": "715605a0929134a29dbbb241f48956008c5a7702", "year": 2021 }
pes2o/s2orc
MODELLING OF QUEUE LENGTH IN FREEWAY WORK ZONES – CASE STUDY
In this study, traffic parameters were collected from three work zones in Iran in order to evaluate the queue length in the work zones. The work zones were observed at peak and non-peak hours. The results showed that abrupt changes in Freeway Free Speed (FFS) and arrival flow rate caused shockwaves and created a bottleneck in that section of the freeway. In addition, acceleration reduction, abrupt change in the shockwave speed, abrupt change in the arrival flow rate and an increase in the percentage of heavy vehicles have led to extreme queue lengths and delay. It has been found that using daily traffic data for scheduling maintenance and rehabilitation projects could diminish the queue length and delay. Also, by determining a bypass for heavy vehicles, the delay can be significantly reduced, by more than three times. Finally, three models have been presented for estimating the queue length in freeway work zones. Moreover, the procedure shown for creating a queue length model can be used for similar freeways.

INTRODUCTION
Road maintenance and rehabilitation require the creation of work zones. In general, there are two types of work zones: a completely closed road or a partially closed road. In the case of a partially closed road, one or more lanes are closed. Work zones on freeways lead to traffic disturbances and delays, which cause increases in travel time, vehicle depreciation and fuel consumption [1]. On freeways and urban highways, maintenance and rehabilitation projects result in long queues, especially during rush hours. On the other hand, queueing is one of the major factors that reduce road safety and increase delay for all users. In the developed countries many studies have been conducted, or are being carried out, on the causes of queue formation [2,3,4], while in the developing countries this has received less attention. Considering the increasing number of vehicles and of maintenance and rehabilitation projects, the necessity of studying queue formation is increasingly emphasized [5]. This study aims to present queue length models for work zones and to analyse each of their constituent factors. Queue length is dependent on several factors, including speed limits, driver conditions, weather conditions, length of the work zone, position of the closed lane, lane width, time and duration of the maintenance and rehabilitation project, number of lanes open in the work zone, number of lanes closed in the work zone, type of road, slope of the work zone, percentage of heavy vehicles, and whether the work zone is intracity or suburban. Jian et al. [13] estimated the capacity and speed in the work zone. For this purpose, they estimated traffic flow parameters using video recording in 15-minute intervals and presented a model. The results of this study revealed that the obtained model underestimated the capacity but computed the speed with more accuracy. Von der Heiden and Geistefeldt [14] evaluated field information from 18 long work zones and 111 short work zones in Germany. For long work zones a distribution function of capacity was used, and for short work zones a multivariate non-linear regression was used for modelling. The obtained models presented more accurate results than the ones obtained by the other methods. Comert [15], using VISSIM software at intersections, showed that the delay resulting from numerical queueing theory has been estimated lower than its actual value.
Therefore, they proposed using a combination of simulations and queueing theory. Benekohal et al. [16] investigated traffic parameters in freeway work zones in the state of Illinois in the US. In this research, the delay in the work zones is composed of two components, which can be classified as delay due to queueing and delay due to lower speed. The delay due to lower speed is caused by the bottleneck and is influenced by a number of factors such as small lane width, small lateral clearance, small width of the hard shoulder and the speed limit. Delay due to queueing results from high demand at the work zone and is calculated from the cumulative arrival and departure numbers of vehicles over time, using the queueing theory of the Highway Capacity Manual (HCM 2000) [17]. If the demand exceeds the capacity of a work zone, a queue is created. As shown in Figures 1 and 2, the values of delay and queue length (number of vehicles in the queue) are calculated using this queueing theory, which is also used in the present study [16]. As shown in Figure 1, the queue is tracked hour by hour, where n_i+1 is the number of vehicles in queue at the end of the (i+1)-th hour and n_i is the number of vehicles in queue at the end of the i-th hour. The delay due to queueing, d_q [veh-hours], is then obtained from t, the number of hours of queueing, and the numbers of vehicles in queue n_i and n_i+1. Because of the multifaceted nature of queueing on freeways, determining the causes of long queues is generally difficult. In Iran, as a developing country, an authentic database has not been provided; therefore, field data collection is very important. One of the goals of this study is to present models of queue length in work zones. The results can be effective in keeping a balance between the traffic volume and delay. Data gathering included three parts. First, some sections of the freeway were selected at peak and non-peak traffic hours in order to compare the obtained results, and the required information was then extracted from the recorded videos. Next, the resulting diagrams of the traffic parameters such as speed, flow rate, density and queue length were evaluated. Finally, the obtained queue length models were compared with the actual data.

LITERATURE REVIEW
The review of the previous studies on work zones reveals that the delay has been investigated more than any other parameter and the focus has been on the costs of the users. Shibuya et al. [6] showed that the delay resulting from the rate of change of velocity makes up 35 to 40 percent of the total delay in work zones. Nam and Drew [7] presented a numerical model of the queue. The drawback of this model is that it predicts the delay to be less than the actual amount, and it has some shortcomings in comparison with the kinematic model. Son [8] presented a model for estimating the queue length; the results showed that the delay due to queueing was less than the delay due to lower speed. Migtez et al. [9] investigated speed limitation in work zones and revealed that reducing speed by a maximum of 16 km/hour reduces accidents. Ullman and Dudek [10] suggested a theoretical method for estimating the queue length. This method predicts the queue length and delays to be less than the actual amounts according to the field data, and they suggested a macroscopic model based on speed, density, and flow rate. Renata et al. [11] studied 17 speed reduction patterns in the work zone and showed that cars and heavy vehicles follow similar patterns. Weng and Meng [12] provided a model for the estimation of capacity in freeway work zones using field information from 18 US work zones.
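The cumulative arrival/departure bookkeeping used in the HCM-style queueing theory described above (Figures 1 and 2) can be written out as a short sketch. This is not the authors' implementation and the demand profile is invented; it only illustrates how the interval-by-interval queue n_i and the delay, taken as the area between the cumulative arrival and departure curves, are obtained when demand exceeds capacity.

```python
# Illustrative sketch with placeholder numbers (not the observed data):
# deterministic queueing, where the queue grows by the difference between
# arrivals and departures (capped by capacity) and delay is the area between
# the cumulative arrival and departure curves.
def queue_and_delay(arrivals, capacity):
    """arrivals: vehicles arriving per interval; capacity: max departures per interval."""
    queue = 0
    queue_series = []
    total_delay = 0.0                  # vehicle-intervals spent in queue
    for a in arrivals:
        departures = min(capacity, queue + a)
        prev_queue = queue
        queue = queue + a - departures             # n_{i+1} = n_i + arrivals - departures
        queue_series.append(queue)
        total_delay += 0.5 * (prev_queue + queue)  # trapezoidal area between the curves
    return queue_series, total_delay

# Hypothetical 1-minute demand profile (veh/min) against a work-zone capacity of 30 veh/min
demand = [25, 28, 35, 40, 38, 30, 26, 22]
series, delay = queue_and_delay(demand, capacity=30)
print("Queue at end of each interval:", series)
print(f"Total delay: {delay:.1f} vehicle-minutes")
```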
The Weng and Meng model has the ability to estimate the total capacity of the work zone after the speed reduction and queueing.

CASE STUDY - WORK ZONE LOCATIONS
The work zones were specifically created for this study on the Karaj-Tehran freeway, which carries an Annual Average Daily Traffic (AADT) of 217,084 vehicles. In this study, video recording was used to collect data. The data mainly contain the number of vehicles passing through the work zones, including cars, vans, buses, minibuses, trailers and trucks (light and heavy) per unit time, and the speed of vehicles over a specified distance [17,18]. This distance was set at 50 metres and counts were made in 1-minute intervals. The rush hours in the three areas are from 8:00 a.m. to 10:00 a.m. Figure 3 shows the location of the work zones. As shown in Figure 4, two lanes of the eight-lane freeway (four lanes in each direction) remained closed in the work zones. These areas were video recorded before, in the middle and at the end of the work zones. Several different cars were marked in each lane and the time they took to pass through the specified intervals was obtained using the image timer. The images of Work zones 1 and 2 are displayed in Figure 5. Moreover, Adobe Premiere software was used to analyse the three work zones. The image of the procedure for Work zone 3 is presented in Figure 6. The information on the three work zones is as follows:
Work zone 1: The first area was located on the Karaj-Tehran freeway in the west-east direction, before the Kalak Bridge. The camera recording was done on Saturday, 21 January 2017 from 9:00 a.m. to 10:00 a.m.
Work zone 2: The second area was located on the Karaj-Tehran freeway in the west-east direction, before the Chamran Blvd. The camera recording was done on Wednesday, 1 February 2017 from 10:45 a.m. to 11:45 a.m.
Work zone 3: The third area was located on the Karaj-Tehran freeway in the west-east direction, before the Sevvom Khordad Flyover. The camera recording was carried out on Thursday, 2 February 2017 from 9:35 a.m. to 10:35 a.m.

Speed change models
In order to measure the speed, the time required to pass a distance of 50 metres was measured [17,18]. Next, the space mean speed was calculated as V_s = (L × N) / Σt, where V_s is the space mean speed (km/h), L is the length of the area (km), N is the number of measuring times and t is the time for passing the distance (hour). At the peak traffic hours, if traffic demand exceeds the service capacity, it causes a shockwave and creates a bottleneck in that section of the freeway [17,18].

Speed - density models
Speed and density are inversely related. The normal shape of the speed-density relationship is a linear curve [18]. Since speed is reduced suddenly because of the necking of the freeway, the curves have two parts. When the density goes to zero, the mean speed goes towards the Freeway Free Speed (FFS) [17,18]. If the density goes to its highest value, the speed goes towards zero. The speed change curves, shown in Figures 7-9, display wavy changes in speed. These wavy curves are more intense in the arrival areas of the work zones. It indicates that the shockwaves were more effective in the transition areas.
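A minimal sketch of the space mean speed calculation referred to above is given below; the 50 m section length matches the text, but the passage times are invented for illustration.

```python
# Sketch of the space mean speed calculation described above: vehicles are timed
# over a fixed 50 m section and V_s = L * N / sum(t) is taken.
# The passage times below are invented for illustration.
def space_mean_speed(length_km, passage_times_h):
    n = len(passage_times_h)
    return length_km * n / sum(passage_times_h)   # km/h

# 50 m section; hypothetical passage times of five marked cars, in seconds
times_s = [2.4, 2.9, 3.6, 5.1, 4.2]
v_s = space_mean_speed(0.050, [t / 3600 for t in times_s])
print(f"Space mean speed: {v_s:.1f} km/h")
```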
Also as illustrated in in the first minutes, the differences of average speed between the transition and termination areas (∆S 1 ) for Work zones 1, 2 and 3 are 63, 29 and 59, respectively. It indicates that in Work zone 1, the effect of speed reduction plays a significant role in queue length increase. As presented in section "Queue length", Equation 6, the queue length model of Work zone 1, the effect of the acceleration reduction (a) on queue length is characterized. As result of the shockwave, according to Figures 9 and 14, the average speed reduction is about 58 kilometres per hour, while the flow rate of the work zone reaches from 1,550 to 2,300 vehicles per hour. In all curves, with the increasing flow rate, the average speed increases. Flow rate -speed models As shown in Figures 10-15, six fitting types were done on the flow rate -speed data and the equation and detection coefficient of the equation (R 2 ) are written on each one. In Figures 14 and 15 In Figures 19-21, the left part of the curve indicates the state at which the speed is relatively high (about 80 to 85 kilometres per hour). In this part, the shockwave speed (W u ) between every two points can be obtained by dividing the flow rate by density. As illustrated in Figure 21, high shockwave speed in the first minutes and afterwards, high change in the shockwave speed (W u ) caused an abrupt increase in the density and subsequently increased the queue length. As presented in Section "Queue length", Equation 8, the queue length model of Work zone 3; the effect of the shockwave speed (W u ) on queue length is characterized. The right part of the curve corresponds to the state at which the density has in Figures 7-9, the abrupt change in speed divides the curves of Figures 16-18 in two parts. In other words, significant speed reduction has caused abrupt increase in density. Flow rate -density models The relation between density and flow rate is as a second-degree function. If the density is zero, the flow rate is zero and if the density has its maximum value i.e. when the vehicles are stopped, the flow rate is zero. Also, if the density is zero, there is no vehicle on the road. In Figures 19-21, the flow rate change curves as a function of density have been shown. In these curves, the tangent on the curve gives the shockwave speed (W u ). The carried out fittings in all work zones are appropriate. As shown in Figures 19-21, the detection coefficients for these areas are 0.62, 0.53 and 0.7, respectively. In work zone 3, because of the shockwave, the curve has two parts. departing vehicles in the work zones have been presented. Using these tables, the curves in Figures 22-27 were drawn. Using Figures 22-24, and measuring the area between the two curves, it is possible to obtain the total delay by the queueing theory [11]. Figures 25-27 reveal the number of vehicles in the queue. In these figures, the positive and negative slopes of the curves indicate the increased and decreased queue length relative to the previous state, respectively. In all the three work zones the abrupt increase in arrival flow rate caused peak points of queue length curves. For instance, for Work zone 1, considering Figure 25 there are peak points in the 8 th and 16 th minute. That is because according to Figure 22, there are abrupt increases in the arrival increased and as result, the flow rate is decreasing, and the queue length is increasing. In Figure 21, the density is between 130 to 160 vehicles per kilometre. 
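Since the shockwave speed W_u is read from the slope of the flow-density curve, it can be approximated from successive (density, flow) observations as the ratio of the change in flow rate to the change in density. The sketch below uses made-up pairs rather than the values behind Figures 19-21.

```python
# Hedged sketch: shockwave speed W_u approximated by the secant slope of the
# flow-density curve between successive observations. The (density, flow) pairs
# are made up for illustration, not taken from Figures 19-21.
def shockwave_speeds(densities, flows):
    """densities in veh/km, flows in veh/h -> shockwave speeds in km/h."""
    speeds = []
    for (k1, q1), (k2, q2) in zip(zip(densities, flows), zip(densities[1:], flows[1:])):
        speeds.append((q2 - q1) / (k2 - k1))   # W_u = delta q / delta k
    return speeds

k = [20, 45, 80, 130, 155]          # veh/km
q = [1550, 2100, 2300, 2000, 1700]  # veh/h
print([round(w, 1) for w in shockwave_speeds(k, q)])  # negative values: backward-moving waves
```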
Thus, it can be found that the queue length is proportional to the density and shockwave speed. High changes in flow rate and low changes in density lead to increasing the queue length in the work zones. reduction of the percentage of heavy vehicles, the queue length will be reduced and as a result, the total delay of vehicles decreases. As seen in Table 2, the delay has decreased from 1,166 to 351 vehicles per minute (i.e. three times). Moreover, by considering Work zones 2 and 3, where the percentage of heavy vehicles is equal, it is observed that the delay in Work zone 3 is higher, because it is closer to the rush hours (about 8:00 a.m. to 10:00 a.m.). In this paper, using a non-linear multivariate regression model, the quantity of an unknown variable is determined using the variables defined. Backward Elimination method is used to select the variables. First, independent variables were entered into the regression equation. A summary of the data from each model is presented in Table 3. By these data, the appropriateness of the models has been evaluated. According to Then, by determining the bypass for heavy vehicles, one can obtain the reduction of queue length, delay and the cost of road users. Also, it has been shown that by determining the bypass for heavy vehicles, the delay can be significantly reduced; more than three times. -According to our investigations, each work zone in a specific freeway has its own queue length model, but the procedure shown for creating a queue length model can be used for similar freeways. -With increasing the flow rate at the arrival and departure sections of the work zones, the queue length decreased and as a result, the total delay decreased. This total delay reduction has caused the reduction of the travel time and increase of the speed at the departing section. -Abrupt changes in speed and arrival flow rate have caused shockwaves in the work zones. In general, with decreasing the shockwave speed, long queues are created. Therefore, it can be concluded that changing the working hours in the work zone to non-peak traffic hours causes an increase in the average speed. It is also possible to reduce the travel time and delay in freeways by controlling traffic so that the arrival flow rate from the upstream is less than the capacity level of the work zone. DISCUSSION This paper presents the methodology (that is built on the queueing theory mentioned earlier in the section on the literature review) for computing the queue length and delay due to queueing. The methodology is illustrated by a step-by-step procedure for computing the delay and queue length. This methodology can be used to compute the delays and queue length in work zones and to ensure that the maximum delays comply with the requirements of the jurisdiction. In addition, according to our investigations, each work zone in a specific freeway has its own queue length model, but the procedure shown for creating a queue length model can be used for similar freeways. On the other hand, the effects of the number difference between cars and heavy vehicles, peak and non-peak hours, acceleration reduction, abrupt change in the shockwave speed and abrupt change in arrival flow rate on the delay and queue length were evaluated. Further studies are recommended to collect the field data to quantify the queue length and delay due to the narrow lane widths, lateral clearances, speed enforcement, work intensity and other factors. 
Also, it is recommended that the queue length and delay be computed in a work zone on a specific part of a freeway with and without a designated bypass for heavy vehicles, and that the results be compared.

CONCLUSION
In the present study, after evaluating the traffic parameters and modelling the queue length in the Karaj-Tehran freeway work zones, the following conclusions have been drawn:
-The results have shown that factors such as acceleration reduction (a) and shockwave speed (Wu), which were most intense in Work zones 1 and 3 respectively, were also effective in their queue length models.
-The work zones that took place nearer the peak hour and with a higher percentage of heavy vehicles featured significantly more delay.
-At peak times, if the traffic demand exceeds the service capacity, shockwaves are generated and a bottleneck is created in that section of the freeway. Thus, by using daily traffic data when scheduling maintenance and rehabilitation projects, it is possible to reduce the queue length.
v3-fos-license
2020-05-28T09:08:48.253Z
2020-05-16T00:00:00.000
219156368
{ "extfieldsofstudy": [ "Environmental Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1016/j.scitotenv.2020.139481", "pdf_hash": "822d92cac878ff012666102231210113c71c62e0", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46365", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "af9c196f2995f20e62fed45e6d55c0fbfe9b62cf", "year": 2020 }
pes2o/s2orc
The impact of rainfall events, catchment characteristics and estuarine processes on the export of dissolved organic matter from two lowland rivers and their shared estuary
• Rapid transport of DOC from soils to river after rainfall events
• DOC and DON processing within estuary

Introduction
Dissolved organic matter (DOM) is an important source of carbon and nitrogen to aquatic ecosystems and a great deal has been learnt about the role of DOM in global biogeochemical cycles over the last few decades (e.g. Yamanaka and Tajika, 1997; Hansell and Carlson, 2014 and refs within). Human activity, in particular the type of land use, has been shown to determine both the source and the composition of DOM, and new processes such as atmospheric deposition have been recognised as important inputs of both carbon and nitrogen to nutrient cycles (e.g. Cornell et al., 2003; Muller et al., 2008). The role of rivers in the transport of DOM to estuaries and coastal seas is also becoming increasingly better understood (e.g. Raymond et al., 2016; Casas-Ruiz et al., 2017; Drake et al., 2018). Once considered simply to be passive transporters of terrestrially-derived DOM from soils to the sea, rivers are now recognised as dynamic systems where both production and loss processes can potentially alter both the concentration and the composition of DOM during transport (Battin et al., 2008; Bertuzzo et al., 2017; Graeber et al., 2018; Harris et al., 2018). The balance of such processes can differ, however, even between reaches of the same river and hence the flux of both dissolved organic carbon (DOC) and dissolved organic nitrogen (DON) is hard to predict (Wymore et al., 2018). The development of models and management tools to predict the composition of DOM entering rivers as well as to quantify the flux of carbon and nitrogen from riverine systems into estuaries has therefore been limited despite recent advances (e.g. Anderson et al., 2019; Yates et al., 2019). As it has also been demonstrated that terrestrial DOM is more bioavailable than previously believed (Autio et al., 2016; Wiegner et al., 2006), both DOC and DON are implicated as potential contributing factors to problems such as eutrophication and hypoxia within estuaries (Seitzinger and Sanders, 1997; Paerl et al., 1998; Wiegner et al., 2009). Stochastic rainfall events that lead to rapid and sustained increases in river flow are expected to have a disproportionate impact on these estuarine nutrient burdens. For example, much of the export of DOM from soils to streams occurs during brief periods of high river flows following intense rainfall events (Inamdar and Mitchell, 2007; Morel et al., 2009; Hitchcock and Mitrovic, 2013). As these events are expected to increase in temperate latitudes over the coming century and beyond (IPCC, 2014), it is important to understand now how they may impact upon concentrations of DOC and DON both within rivers and downstream in estuaries under different flow conditions. Much work has been done on the impact of rainstorms on DOC and DON concentrations in rivers and streams with various different types of watershed characteristics (e.g. Buffam et al., 2001; Inamdar and Mitchell, 2007; Morel et al., 2009). The majority of these studies, however, have focussed on systems that have low inorganic nutrient loads (nitrogen typically <100 μM) with few studies investigating the role DOC and DON will play in rivers and estuaries already burdened by high inorganic nutrient loads and at imminent risk of eutrophication.
The Christchurch Harbour Macronutrients Project was designed to investigate the impact of stochastic rainfall events on the transport and biogeochemical cycling of macronutrients in two temperate UK south coast rivers, the Hampshire Avon and the Stour, as well as in their shared estuary, Christchurch Harbour (UK). The Hampshire Avon has previously been identified as a river of national importance due to its predominantly agricultural watershed and elevated inorganic nutrient loadings (mean nitrate concentration ≈400 μM; Jarvie et al., 2005). We present here a unique data set of frequent observations from the lowest river gauging stations and from an estuary impacted by high nitrate concentrations. The goals of our study were to ascertain the annual variability of DON and DOC in the context of high inorganic nitrogen loads, to examine the impact of rainfall events leading to rapid but sustained increases in river flow on the potential fate of DOC and DON, and to investigate the role of the shared estuary in determining the organic and inorganic nutrient flux into coastal waters.

Study area

Christchurch Harbour is a shallow microtidal estuary on the south coast of England with a single outflow into the English Channel (Fig. 1). The mean tidal range during spring tides is 1.2 m, and the mean water depth outside of the main channel is approximately 0.5 m (Huggett et al., 2020). Two rivers, the Hampshire Avon (hereafter referred to simply as the Avon) and the Stour, drain into the estuary with a total catchment area of 2779 km². The mean flow of the Avon and Stour at the lowest gauging stations on each river is 19.5 m³ s⁻¹ and 13.8 m³ s⁻¹, respectively (Centre for Ecology and Hydrology (CEH), 2008). A third small river, the Mude, drains into the estuary near the outlet (Fig. 1) but only has a mean flow of 0.1 m³ s⁻¹ (CEH, 2008) and was not included in this study. The predominant land use types in the catchments of these two rivers are similar, with over 75% of each catchment being a mixture of grassland and arable/horticultural land, but also with some woodland and small areas (1-2%) of heathland and urban areas (Table 1; CEH, 2008). A catchment map with land use classification is available in the Supplementary information (Supplementary Fig. 1). The geology of the two catchments does differ, however, with the Avon draining from predominantly chalk and the Stour draining from a mixture of chalk (50%) and clay (30%). These geological differences are reflected in their Baseflow Indices (BFI): the Avon has a BFI of 0.90, indicating a high groundwater component in river discharge, and the Stour has a BFI of 0.65, suggesting that a lower proportion of river discharge originates from stored catchment sources (CEH, 2008).

Sampling regime

Water samples were collected from Environment Agency gauging stations at Knapp Mill (50.744 N, −1.782 W) on the Avon and Throop (50.764 N, −1.842 W) on the Stour, as well as at Mudeford Quay (50.724 N, −1.7409 W) at the mouth of the Christchurch Harbour estuary, at 5-8 day intervals between May 2013 and April 2014. The two river flow gauging stations were the closest to the estuary on each of the respective rivers, located 12.7 km (Throop) and 6.2 km (Knapp Mill) upstream from the mouth of the estuary. 
Surface water was collected using a clean bucket and immediately decanted into acid-cleaned HDPE bottles for later inorganic nutrient analysis or into combusted (450°C for a minimum of 4 h) glass bottles for total dissolved nitrogen (TDN) and dissolved organic carbon (DOC) analysis. At each site, surface water temperature and conductivity (salinity at the estuarine site) were measured in situ using an EXO2 multiparameter sonde (Xylem, UK). At the estuary site, water temperature and salinity were also measured at depth just above the sediment. Samples from the estuary mouth were collected at low tide. Boat transects were carried out at high tide in Christchurch Harbour fortnightly between 27th May 2014 and 4th September 2014 at 6 sites along a salinity gradient from the mouth of the estuary to an upstream site within the Stour (Fig. 1). A YSI 6600 sonde (Xylem, UK) was used to measure salinity and temperature profiles, and the depth of highest chlorophyll fluorescence was sampled using a 5 L Niskin bottle, except on 27th May 2014 when surface water was sampled using a clean bucket. On return to the lab, water samples for later nitrate (NO₃⁻) plus nitrite (NO₂⁻) analysis were filtered through a 25 mm diameter GF/F filter using an inline syringe unit, preserved with 0.015 M HgCl₂ (100 μL per 20 mL), and stored in the dark at room temperature (Kirkwood, 1996). Samples for DOC and TDN analysis were filtered through a combusted 47 mm diameter Whatman GF/F filter (nominal pore size 0.7 μm) on an acid-washed glass filter rig under low vacuum (<10 mmHg), and 20 mL of filtrate was stored in combusted glass vials with acid-washed Teflon septa. Phosphoric acid (60 μL of 50% (v/v)) was added to each sample vial before storage at 4°C (Badr et al., 2003). Samples for ammonium and urea analyses were also collected during this filtration and were pipetted into 25 mL push-top glass vials. Reagents for ammonium (NH₄⁺) and urea analyses were added immediately, and vials were incubated at room temperature in the dark for up to 24 h (ammonium) and for between 3 and 5 days (urea). Urea samples were not collected during the boat transects.

Analytical methods

Concentrations of nitrate plus nitrite were determined at the University of Portsmouth on a QuAAtro segmented flow nutrient analyser (SEAL Analytical, UK). Ammonium and urea concentrations were measured according to the methods of Holmes et al. (1999) and Goeyens et al. (1998), respectively, at all sites from late August 2013. Dissolved inorganic nitrogen (DIN) concentrations were calculated by adding nitrate plus nitrite and ammonium concentrations. Concentrations of DOC and TDN were measured with a TOC-V CPN analyser (Shimadzu, Japan) calibrated with a mixed standard of potassium hydrogen phthalate and glycine. Certified Reference Materials (DSR from University of Miami, USA) were used to validate results by comparing against the certified concentrations for DOC (41-44 μM; analytical mean 45.8 μM) and TDN (31-33 μM; analytical mean 30.5 μM). Concentrations of dissolved organic nitrogen (DON) were quantified by subtracting DIN concentration from TDN concentration. DON data from the first 5 sampling dates are not available.

Additional data and calculations

Daily mean river flow data for both river sites were provided by the Environment Agency, covering the study period as well as historic flows from the period 2000-2010 inclusive, from which an 11-year daily mean flow was calculated for each river. 
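The 11-year daily mean flow used for comparison can be computed as a day-of-year climatology; a minimal sketch in Python, assuming the historic record is held as a pandas Series of daily mean flows indexed by date (the synthetic record below is a placeholder, not Environment Agency data):

```python
import pandas as pd

def daily_flow_climatology(historic_flow_m3_s: pd.Series) -> pd.Series:
    """Mean flow for each calendar day across all years of a historic record
    (e.g. 2000-2010), giving the long-term daily mean against which
    study-period flows can be compared."""
    grouped = historic_flow_m3_s.groupby(
        [historic_flow_m3_s.index.month, historic_flow_m3_s.index.day])
    return grouped.mean()  # Series indexed by (month, day)

# Placeholder record; real values would come from the gauging stations.
idx = pd.date_range("2000-01-01", "2010-12-31", freq="D")
historic = pd.Series(20.0, index=idx, name="flow_m3_s")
eleven_year_daily_mean = daily_flow_climatology(historic)
```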
Limited data for DOC and DIN concentration during the sampling period were also available from the Environment Agency for the Knapp Mill site only and are presented for comparison purposes. Rainfall data were acquired from the Meteorological Office station at Bisterne, situated 6.3 km north of Knapp Mill on the Avon. The flushing time of the estuary was calculated using a simple tidal prism method as described by Huggett et al. (2020). Instantaneous fluxes for the river sites were calculated by multiplying the measured concentration of each nutrient by the daily mean flow for the same day at that site. For the estuary, the daily mean flows for both rivers were summed to determine the total daily mean flow and then multiplied by the nutrient concentrations measured at the estuary mouth. Annual fluxes were calculated by linearly interpolating between known concentration data points to obtain daily concentrations, calculating daily flux as above, and then summing all daily values for each site. Annual fluxes were divided by catchment size to allow direct comparison of annual yield with other rivers and estuaries. The baseflow contribution to daily river flow was estimated using the smoothed minima technique of Gustard et al. (1992), as detailed in Jordan et al. (1997). Nitrate and river flow data from the Avon weekly sampling campaign have previously been published in Pirani et al. (2016), where the data were combined with water quality and phosphate data to develop a model to determine past nutrient fluxes based on historical river flows.

River flow and estuarine flushing time

Water temperature in both rivers and the estuary followed a seasonal cycle, with minimum temperatures between 5.7°C and 6.2°C observed in winter and maximum summer temperatures reaching between 22.8°C and 23°C in July 2013 (data not shown). Daily mean flow in both rivers decreased from the start of sampling (April 2013) to a summer low-flow state equal to the estimated baseflow for each river by around mid-July 2013 and remained low until a sharp increase towards the end of October 2013 (Fig. 2a, b). This period of elevated flow lasted for approximately 3 weeks before flow decreased again during a dry period from mid-November to mid-December 2013, although it remained above the summer low-flow values. A second period of elevated flow started with sharp increases in flow in both rivers on the 16th December 2013, with sustained flows above both the 11-year mean and the highest flow values from the earlier period of elevated flow until the sampling finished on 10th April 2014 (total duration 150 d). Both periods of elevated flow were associated with increased rainfall locally (Fig. 2a). Flow data were unavailable at Knapp Mill for several days in early January 2014 after the gauging station was struck by lightning, but river flow rates both immediately before and after the loss of data were substantially elevated from background, and so the flow was assumed to remain elevated across the period of missing data. Mean flow rates were more variable in the Stour than in the Avon, but mean flow during low-flow periods was typically higher in the Avon. Daily mean flow rates for both rivers, as well as the summed daily river flow and daily rainfall for the period of the estuarine transect sampling in summer 2014, also show an increase in river flow after rainfall events (Fig. 2c). 
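The instantaneous and annual flux calculations described in the methods above can be sketched as follows. This is a simplified illustration, assuming concentrations in μM and daily mean flow in m³ s⁻¹; the molar mass, catchment area, and sample values are placeholders supplied by the caller, and no gap-filling of missing flow data is shown:

```python
import pandas as pd

def annual_flux_kg(sample_dates, conc_umol_per_l, daily_flow_m3_s,
                   molar_mass_g_mol, catchment_km2):
    """Instantaneous and annual flux from sparse concentrations and daily flow.

    Concentrations (umol L-1) are linearly interpolated in time to daily
    values, multiplied by the daily mean flow (m3 s-1), converted to kg per
    day, and summed over the record.  Yield is annual flux per unit catchment
    area.  Days before the first (or after the last) sample stay NaN and are
    skipped by the sum.
    """
    days = daily_flow_m3_s.index  # DatetimeIndex of daily mean flows
    daily_conc = (pd.Series(conc_umol_per_l, index=sample_dates)
                    .reindex(days.union(sample_dates))
                    .interpolate(method="time")
                    .reindex(days))
    # umol L-1 * m3 s-1 -> mol s-1 (1 m3 = 1000 L, 1 umol = 1e-6 mol)
    mol_per_s = daily_conc * daily_flow_m3_s * 1000 * 1e-6
    kg_per_day = mol_per_s * molar_mass_g_mol / 1000 * 86400
    annual_kg = kg_per_day.sum()
    return annual_kg, annual_kg / catchment_km2  # kg y-1 and kg km-2 y-1

# Toy example: three DOC samples against a year of synthetic daily flows.
flows = pd.Series(20.0, index=pd.date_range("2013-05-01", "2014-04-30"))
dates = pd.to_datetime(["2013-06-01", "2013-11-01", "2014-03-01"])
flux_kg, yield_kg_km2 = annual_flux_kg(dates, [250.0, 400.0, 300.0],
                                       flows, molar_mass_g_mol=12.01,
                                       catchment_km2=1700.0)
```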
In each river, the daily mean flow during the study period was significantly different (Mann-Whitney U test, Avon U = 60,449, Stour U = 58,501, both p < 0.05) from the historic daily mean river flow from 11 years of Environment Agency data (Fig. 2a, b). These differences are particularly evident in the second elevated flow period, where daily mean flows were up to 3 times greater in the Avon and 7 times higher in the Stour than the 11-year daily mean. Low rainfall in early December 2013 also led to a period where mean daily flow was as low as 12% (Stour) or 28% (Avon) of the 11-year daily mean. Flushing times within the estuary ranged from 0.1 days with combined river flows of 100 m³ s⁻¹ to 1.5 days with minimum summer combined flows of 10 m³ s⁻¹. The flushing time was consistently <1 day over the second elevated flow period from the 17th December 2013 until the end of sampling in April 2014.

Inorganic and organic nutrient concentrations

Concentrations of nitrate plus nitrite throughout the sampling period were higher in the Stour (mean 502 μM) than in the Avon (mean 381 μM) and at the estuary mouth (mean 328 μM; Fig. 3a-c). The highest concentrations at all sites were seen during the periods of decreasing flow following each of the elevated flow periods. The impact of low-nutrient coastal waters can be seen in the estuary at Mudeford over the periods of low river flow, when nitrate plus nitrite concentrations are lower than in either river (Fig. 3c). Spearman correlations revealed a significant relationship between DIN concentrations in the two rivers as well as between each river and the DIN concentration in the estuary (ρ = 0.533-0.606, p < 0.001 for all). Ammonium concentrations ranged from <1 μM to 9.8 μM and were lower on a weekly basis in the Avon (mean 2.4 μM) than in the Stour (mean 3.8 μM) or the estuary (mean 3.8 μM). Concentrations of ammonium were always <2.5% of DIN in the estuary and <1.8% of DIN in the rivers. DOC concentrations ranged from 167 to 486 μM (mean 249 μM) in the Avon, 156-1119 μM (mean 353 μM) in the Stour, and 162-676 μM (mean 273 μM) in the estuary (Fig. 3d-f). A general pattern of relatively low DOC concentrations (<300 μM) was observed at all three sites (Fig. 3d-f) between May and October 2013, before the first period of elevated river flow; subsequently, concentrations at all sites increased during elevated river flow and later decreased as flow declined. Again, Spearman correlations revealed a significant relationship between DOC concentrations in the two rivers as well as between each river and the DOC concentration in the estuary (ρ = 0.680-0.769, p < 0.001 for all). DON concentrations ranged from 0 to 83 μM (mean 32 μM) in the Avon, 0-155 μM (mean 40 μM) in the Stour, and 0-54 μM (mean 17 μM) in the estuary (Fig. 3g-i). There was no clear pattern in DON concentration at any one of the sites, and the response to elevated river flow events differed between sites and between events. Concentrations of DON increased in both the Avon and the Stour just prior to the start of the first elevated flow period, but this increase was not observed in the estuary. Over the course of this first elevated flow period, DON concentrations in both rivers decreased and then increased steadily again, but in the estuary concentrations remained low. 
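The non-parametric tests reported above (Mann-Whitney U for the flow comparison, Spearman rank correlation for paired concentration series) can be run with SciPy; a sketch only, with synthetic arrays standing in for the measured data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
study_flow = rng.gamma(2.0, 15.0, 365)     # placeholder daily mean flows, m3 s-1
historic_flow = rng.gamma(2.0, 10.0, 365)  # placeholder 11-year daily means

# Non-parametric comparison of study-period flow against the historic record.
u_stat, p_u = stats.mannwhitneyu(study_flow, historic_flow,
                                 alternative="two-sided")

# Rank correlation between paired concentration series (e.g. DOC in each river).
doc_avon = rng.normal(250, 40, 60)
doc_stour = doc_avon * 1.3 + rng.normal(0, 30, 60)
rho, p_rho = stats.spearmanr(doc_avon, doc_stour)
print(f"U={u_stat:.0f} (p={p_u:.3g}), rho={rho:.2f} (p={p_rho:.3g})")
```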
The greatest concentrations in the estuary and in the Avon occurred on the same date (7th March 2014) during the second elevated flow period, but the highest concentration of DON in the Stour was observed during the lower flow conditions between the two elevated flow periods in early December 2013 (Fig. 3h). Spearman correlation revealed a significant relationship between DON concentrations in the two rivers (ρ = 0.504, p < 0.005), but there was no relationship between either of the rivers and the DON concentration in the estuary. The proportion of DON in TDN ranged from 0 to 26% across the three sites, with a mean proportion of 5% in the estuary, 8% in the Avon, and 7% in the Stour. Urea concentrations ranged from <1 μM to 3.6 μM, but there was no clear pattern between sites. Mean urea concentration was 1.2 μM in the Avon, 1.2 μM in the estuary, and 1.4 μM in the Stour.

Relationships with river flow

Overall, there was a pattern of increasing DIN with river flow in both rivers up to a flow of approximately 25 m³ s⁻¹ in the Stour and 35 m³ s⁻¹ in the Avon, after which DIN concentration decreased as river flow increased further (Fig. 4a, b). DOC concentrations in both rivers increased to a peak as river flow increased at the lower range of river flows (up to approximately 25 m³ s⁻¹ in the Avon and up to approximately 50 m³ s⁻¹ in the Stour; Fig. 4c, d). Above these river flows there appears to be a positive relationship between river flow and DOC concentration. In the Avon the relationship between DON and river flow is complicated, with one peak in DON concentration below 20 m³ s⁻¹ and another peak at approximately 70 m³ s⁻¹ (Fig. 4e). DON concentration in the Stour was highest at low river flow, but there was a smaller second peak at approximately 60 m³ s⁻¹ (Fig. 4f). During dry periods the calculated baseflow is equal to or very close to the measured river flow, but rainfall events can decrease the proportion of measured flow contributed by baseflow (see Fig. 2). When DIN, DOC and DON are plotted against the proportion that baseflow contributes to measured flow, some relationships become clearer (Fig. 5). The concentration of DIN in both rivers appears to increase when the proportion of baseflow increases (Fig. 5a, b), whilst the opposite is true for DOC, with the highest concentrations observed when baseflow is contributing less to river flow (Fig. 5c, d). Again, the variability in DON concentrations does not appear to have a clear relationship with baseflow contribution (Fig. 5e, f).

Estuarine results

In the estuary it is clear that periods of elevated river flow restricted the inflow of more saline coastal waters (Fig. 6). At a combined river flow of 40 m³ s⁻¹ or greater, the salinity at the mouth of the estuary (both surface and bottom) was typically <5 and frequently <1, resulting in Christchurch Harbour becoming essentially a freshwater lake under very high river flow conditions. These total river flows correspond to all dates sampled from 17th December 2013 until the end of the weekly sampling programme and so reflect the second elevated flow period in its entirety. Surface salinity within the estuary decreased from the mouth to the upstream sites. There is a clear conservative relationship between nitrate plus nitrite concentration and salinity in both the weekly sampled data at Mudeford (crosses; Fig. 7a) and the estuarine transects (coloured circles). 
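A "conservative relationship" with salinity means the observations fall on the two-end-member mixing line between river water and seawater; a sketch of that check is given below, where the end-member concentrations and seawater salinity are illustrative assumptions rather than measured values:

```python
import numpy as np

def conservative_mixing(salinity, c_river, c_sea, s_sea=35.0):
    """Expected concentration if mixing between a freshwater end-member
    (salinity 0) and a marine end-member (salinity s_sea) is the only
    process acting; deviations from this line suggest in-estuary
    production or removal."""
    frac_sea = np.clip(np.asarray(salinity, dtype=float) / s_sea, 0.0, 1.0)
    return c_river * (1.0 - frac_sea) + c_sea * frac_sea

salinities = np.array([0.5, 5.0, 15.0, 25.0, 32.0])
predicted_no3 = conservative_mixing(salinities, c_river=400.0, c_sea=8.0)
# Compare 'predicted_no3' with observed transect concentrations to judge
# whether nitrate behaves conservatively along the salinity gradient.
```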
Ammonium concentration is relatively low (<3 μM) at a salinity of 20 or greater, but there is more variability at lower salinities (Fig. 7b). The dominance of nitrate and nitrite in this system is evident in the clear relationship between DIN and salinity (Fig. 7c). The relationships between DOC and salinity (Fig. 8a) and between DON and salinity (Fig. 8b), however, are not as clear. In the weekly DOC samples and some of the estuarine transect samples there is the suggestion of a conservative relationship, but there are other estuarine samples with high concentrations of DOC at each end of the salinity range (e.g. 7th August 2014) or with peaks in the mid-salinity range (e.g. 10th July 2014). Interpretation of the DON data is complicated by the large number of values at concentrations below the limit of detection, but a general relationship between salinity and DON concentration appears to be present, with lower DON at higher salinities (Fig. 8b, coloured circles).

DOC:nitrate ratios

Ratios of DOC:nitrate were <2.5 at all sites throughout the weekly sampling, and the highest ratios (1.5 to 2) were observed during high flow periods in late October and in December/January (Fig. 9a). DOC:nitrate ratios were also low across the estuarine transect sampling study, with a mean ratio of 1.87 (Fig. 9b). Only 4 samples had a DOC:nitrate ratio >3, and 3 of these samples occurred in the mid to low estuary on the same date (7th August 2014), when DOC concentrations were high and DIN concentrations were amongst the lowest observed. The maximum ratio observed on this date was 18.8 at a mid-estuary site.

Fluxes of DIN, DOC, and DON

When the instantaneous flux of DIN, DOC, or DON is calculated, the importance of increased flow events becomes evident (Fig. 10; Table 2).

Discussion

Two periods of sustained rainfall with associated elevations in river flow were captured during the year of sampling, with an initial wetting-up period in late October 2013 being followed by a prolonged period of several months' duration between December 2013 and March 2014. These events allow the dynamics of DIN, DOC and DON concentrations within the two rivers and the estuary to be examined under a range of hydrological conditions.

DIN dynamics and sources

Concentration of DIN in both rivers was consistently high (>290 μM) throughout the study period, reinforcing the status of these rivers and the shared estuary as an impacted system. A substantial proportion of each catchment (>35%) is used for arable or horticultural purposes, and thus a major source of DIN is likely to be agricultural in nature. Both Heppell et al. (2017) and Yates et al. (2019) report a positive relationship between the percentage of arable land use and nitrate or total nitrogen concentrations in the upper reaches of the Hampshire Avon catchment. Rainfall events should result in an increase in the flux of this diffuse-source DIN from land into the rivers (Withers and Lord, 2002). The finding that DIN concentrations are high when river flow is dominated by baseflow, however, implies that there are also point sources contributing to the DIN load (e.g. effluent from sewage treatment works), or that the groundwater may also be high in DIN, and finally that in-stream processes such as macrophyte and microalgal uptake at these relatively downstream sites use only a small portion of the total DIN. Jarvie et al. 
(2005) calculated that the effluent load of nitrate to the Avon at Knapp Mill was around 11% of river load and also identified that groundwater is a major source of nitrate in the Avon. Approximately 75% of nitrate in U.K. groundwater is believed to be from agricultural sources, although other sources such as atmospheric deposition, discharges or leaks from septic tanks and sewers, and the spreading of sewage sludge on land may also contribute (Rivett et al., 2007). In the Hampshire Chalk aquifer, isotopic analysis has shown that denitrification is an insignificant process within the unsaturated zone and, as a result, the nitrate concentrations in groundwater have been increasing since the 1970s (Rivett et al., 2007). Little has been published on the water chemistry of the Stour but, as the Stour has a lower BFI and is less dependent on groundwater contributions, the majority of the DIN is suspected to be from agricultural or sewage treatment sources. The conservative relationship between DIN and salinity again suggests that biological processing within the estuary has minimal impact on total DIN concentration with the result that Christchurch Harbour exports the majority of the DIN to the coastal seas of the English Channel. DOC dynamics and sources Whilst daily flow in the two rivers behaved differently with the Stour displaying greater variability and more rapid fluctuations in flow than the Avon, the DOC concentrations in both rivers behaved in a similar manner over the year as demonstrated by a significant Spearman correlation over time. Little variability in concentration was observed during the lowest flow summer months, potentially reflecting that instream production-loss processes at these downstream sites on each river counterbalance any terrestrial inputs of fresh DOC (Creed et al., 2015). This proposed excess in DOC was also observed in the estuarine concentrations over the same period with constant concentrations throughout the summer baseflow period being transported to the coastal zone. The DOC dynamics under elevated flow conditions throughout the system, however, show a different pattern. Rapid increases in DOC concentration in both rivers as river flows increased and during subsequent peaks within the longer period of prolonged high flows reflect flushing of terrestrial organic matter, which is transported rapidly downstream by the increased flows, subsequently escaping any significant upstream biogeochemical processing. The maximum DOC concentration measured during these pulses in the Stour was more than double that measured in the Avon, reflecting the different catchment characteristics of the two rivers. The Avon is groundwater-dominated in a predominantly chalk catchment (Jarvie et al., 2005;Yates et al., 2016) whilst the Stour has less permeable clay soils where surface runoff can result in rapid transport of DOC into the river. In contrast, groundwater DOC concentration in chalk aquifers is typically low. Rivett et al. (2007) reported a mean DOC concentration of 60.8 ± 19.2 μM from 1725 groundwater samples across the major Cretaceous Chalk aquifer in England, and Stuart and Lapworth (2016) determined the mean baseline concentration of groundwater DOC in Hampshire chalk at 67.8 μM. Point sources such as sewage treatment works (STWs) are present in both catchments with over 140 STWs and 30 fish farm discharges in the Avon catchment alone (Jarvie et al., 2005), and sources such as these are likely to have contributed to the rapid pulses in DOC observed. 
Other sources such as groundwater seepage and ditch drainage occur on longer timescales and may have contributed to the continued elevation of DOC concentrations during prolonged high river flow events (Morel et al., 2009). Atmospheric deposition from rainwater could also contribute to the DOC concentration, with studies reporting mean DOC concentrations in rainwater ranging from approximately 8 μM in mid-Wales (Wilkinson et al., 1997) to 50 μM in Greece (Pantelaki et al., 2018) and up to 120 μM in the coastal United States (Willey et al., 2000). When DOC concentrations are plotted against the number of days since rainfall last occurred, it can be seen that concentrations are high within the first two days and then stabilise at around 300 μM or lower at all sites (Fig. 11). This is further evidence that a major source of DOC to these rivers during rainfall events was the flushing of superficial soils. This is commonly seen in wetland systems rich in organic soils (Inamdar and Mitchell, 2007; Worrall et al., 2012) but has also been observed in more agricultural catchments similar to the one studied here (Royer and David, 2005; Morel et al., 2009).

Table 2. Annual yields of dissolved inorganic nitrogen (DIN), dissolved organic nitrogen (DON), and dissolved organic carbon (DOC) from study sites and a selection of streams and rivers across the UK, Europe, and the world. Units are kg C or N km⁻² y⁻¹.

Within the estuary there is some evidence of DOC production during the estuarine transect sampling, especially on 10th July 2014, where one mid-estuary sample stands out as higher than the surrounding samples. The chlorophyll a concentration on the same day at this site was 93 μg L⁻¹, and a dinoflagellate bloom was later confirmed using inverted microscopy. These production processes appear to remain relatively localised within the estuary, however, as DOC concentrations both upstream and downstream of the site were over 100 μM lower. On the 7th August 2014 DOC concentrations were elevated throughout the estuary (Fig. 8a, green circles), but the lowest concentration was observed at mid-salinity. There was no associated increase in either chlorophyll a or river flow (data not shown). Ammonium concentration was also relatively high on this date, especially at salinities between 10 and 15, which correspond to the upper estuary sites on that date. Discharge from Holdenhurst sewage treatment works on the lower Stour (Fig. 1) could be the cause of this elevated DOC and ammonium before mixing with the higher salinity waters (e.g. Maier et al., 2012). There may be evidence of localised DOC production near the mouth of the estuary on this date also, as there is an increase in DOC at a salinity of 32. There are shallow sand banks between sites 1 and 2 at the mouth of the estuary. Sand is a highly permeable sediment, and at depths of 1-2 m benthic production processes could be tightly coupled to the water column (Huettel et al., 2014). The source of this DOC peak could therefore be the sediments rather than water column processes, with the incoming tide or waves driving pore-water exchange and flushing the DOC into the water column (Huettel et al., 2014). The mean estuarine flushing time estimated for low summer flows is around 1.5 days, which may be too short a period for significant uptake of DOC, relative to the total concentration, to occur within the estuary before the DOC is exported to the coastal zone. 
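The "days since rainfall last occurred" variable underlying Fig. 11 can be derived from a daily rainfall series along these lines (a sketch; the 1 mm threshold used to define a rain day is an assumption):

```python
import pandas as pd

def days_since_rain(daily_rain_mm: pd.Series, wet_day_threshold_mm=1.0):
    """For each day, count the days elapsed since the most recent day with
    rainfall at or above the threshold (None before the first wet day)."""
    days_since = []
    counter = None
    for rain in daily_rain_mm:
        if rain >= wet_day_threshold_mm:
            counter = 0
        elif counter is not None:
            counter += 1
        days_since.append(counter)
    return pd.Series(days_since, index=daily_rain_mm.index)

rainfall = pd.Series([5.0, 0.0, 0.2, 12.0, 0.0, 0.0],
                     index=pd.date_range("2013-10-20", periods=6))
print(days_since_rain(rainfall))  # 0, 1, 2, 0, 1, 2
```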
Annual DOC yield was at the higher end of the ranges reported for global rivers and estuaries, excluding the Nushagak River (Table 2), and is comparable to the range of export values estimated by Worrall et al. (2012) and Jarvie et al. (2017) for rivers in the United Kingdom based on land use characteristics and predominant soil types.

DOC:nitrate relationship

As the accumulation of nitrate in aquatic ecosystems has been identified as a major environmental concern, the importance of carbon as an essential nutrient coupled to the microbial processing of nitrate has become more evident (Taylor and Townsend, 2010). The molar ratio of DOC:nitrate is increasingly being used as an indicator of the potential fate of nitrate within a system (e.g. Sandford et al., 2013; Wymore et al., 2016; Heppell et al., 2017), with heterotrophic nitrogen assimilation proposed to be carbon-limited at a DOC:nitrate threshold ratio of 3.5 across all systems (Taylor and Townsend, 2010). During the weekly sampling the DOC:nitrate ratio was <2.5 at all times and at all sites, and the ratio only increased above 3.5 in the estuarine transects on 2 sampling dates (25th June 2014 and 7th August 2014). This would imply that in-stream processing of nitrate by heterotrophic organisms is carbon limited throughout the majority of the year, and is further evidence that control of anthropogenic nitrogen inputs to this system is needed. Interestingly, the highest DOC:nitrate ratios during the weekly sampling were observed during the high flow events over the winter period, implying that the increased DOC delivery to the river during rainfall events may have the potential to relieve some of the carbon limitation if nitrate concentrations were decreased. Localised areas of DOC production within the estuary in summer months, such as algal blooms, are enough to raise the ratio and may result in areas of nitrate drawdown.

DON dynamics and sources

In contrast to the DOC dynamics, DON concentrations showed greater variability between sites, with a significant relationship observed between the two rivers but no relationship between either of the rivers and estuarine DON concentrations. Concentration-discharge plots (Fig. 4) for each river are suggestive of a point source, such as a sewage treatment works, acting as the major source of DON in the rivers, with concentration generally decreasing with increased flow. As flow in each river increases past ~40 m³ s⁻¹, however, concentrations increase, and this was particularly obvious in the Avon. This may reflect increased transport of DON from 'new' sources such as septic tanks connected via localised flooding. The highest concentration of DON in the Stour occurred at low river flow rates in early winter between the two high flow periods, at a time when relatively low water temperatures (7.4°C) would be expected to limit biological production processes within the river. This is further evidence for an external source contributing to DON concentrations within the Stour. DON concentrations were lower in the estuary over the dry summer months during the weekly sampling than in either river, reflecting the mixing of higher salinity, low-nutrient waters but also perhaps some DON removal in the lower reaches of the rivers and the estuary. The concentration range for DON in the estuary was higher during the summer boat transect work than during the weekly sampling campaign (Fig. 
8b, coloured circles), and whilst the overall pattern appears to be one of decreasing concentration as salinity increases, there are certain dates that show interesting patterns. For example, the samples on 12th June and 25th June 2014 (Fig. 8b, white and red circles) both appear to have higher DON concentrations in the upper estuary at the 3 sites in lower salinity waters than would be expected in a simple conservative mixing relationship. This would imply that there is removal of DON within the estuary between sampling sites 4 and 3 (Fig. 1). There are large areas of sand banks between these two sites and at the time combined river flows were relatively low at around 20 to 25 m 3 s −1 which would result in an increase in the estuarine flushing time. The estuary is shallow and so, in addition to water column processes such as phytoplankton uptake, bacterial respiration, and photooxidation (Seitzinger and Sanders, 1997;Wiegner et al., 2006;Badr et al., 2008), it is possible that these sandy sediments were acting as a sink for DON. Hopkinson et al. (1999) observed sediments, including sands, acting as a sink for DON in an estuary in Massachusetts with a degree of both temporal and spatial heterogeneity across a seasonal cycle, and Agedah et al. (2009) also report uptake of DON by sediments in the anthropogenically-impacted Colne estuary (UK). The rates reported by Agedah et al. (2009), however, were slow in relation to the residence time of the estuary and thus they proposed that the majority of DON in the system was exported to the coastal zone. In contrast Burdige and Zheng (1998) reported estuarine sediments in Chesapeake Bay to act as a source of DON to the water column. As DON concentrations were substantially lower than DOC or DIN concentrations throughout this study the impact of any potential removal or production process is more likely to be seen reflected in the total DON concentration at the estuary mouth. Unfortunately the data presented here is not sufficient to fully resolve the role of the sediments in the processing of DON in Christchurch Harbour, but further work is ongoing on sediment-water exchanges in this system. A key feature of the two rivers studied was the very high concentrations of nitrate plus nitrite (N290 μM) throughout the year. As the concentration of DON is derived by subtracting the DIN concentration from the corresponding total dissolved nitrogen concentration, it is possible to get values that are negative or below the limit of detection when DON concentrations are low relative to DIN. Vandenbruwane et al. (2007) found the likelihood of this occurring to be of particular importance in samples where the proportion of DON to TDN is ≤15%. The mean DON: TDN ratios in both rivers and the estuary were below 10% which may explain in part the number of DON samples that were found to be below the limits of detection in this study. This in turn may explain the lack of relationship observed between DON concentrations and flow. Yates et al. (2019) determined the proportion of DON to TDN in intensively farmed arable catchments underlain by chalk, such as our Avon catchment in particular, to be b10%. However, the magnitude of the river flows during rainfall events still result in thousands of kilograms of nitrogen being transported to the estuary and beyond on a daily basis as DON. Whilst this flux was an order of magnitude lower than the flux of DIN in this system it is still a considerable yield of nitrogen when compared to many other global estuaries (Table 2). 
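The sensitivity of difference-based DON estimates to high DIN, discussed above, can be made concrete with a small sketch (concentrations in μM; the flagging threshold shown is illustrative, not the method's actual detection limit):

```python
import numpy as np

def don_by_difference(tdn_um, no3_no2_um, nh4_um, flag_threshold_um=1.0):
    """Estimate DON as TDN minus DIN (nitrate + nitrite + ammonium).

    Because DON is derived by difference, values can come out negative or
    below a detection threshold when DIN dominates TDN; such values are
    flagged rather than silently kept.
    """
    din_um = np.asarray(no3_no2_um) + np.asarray(nh4_um)
    don_um = np.asarray(tdn_um) - din_um
    below_detection = don_um < flag_threshold_um
    return don_um, below_detection

# Illustrative numbers only (loosely in the range reported for the Avon).
tdn = np.array([420.0, 455.0, 390.0])
no3_no2 = np.array([381.0, 410.0, 395.0])
nh4 = np.array([2.4, 3.0, 1.9])
don, flagged = don_by_difference(tdn, no3_no2, nh4)
print(don, flagged)  # third value is negative and gets flagged
```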
There was no clear relationship found between DOC and DON at any of the sites or across all of the sites. The potential reasons for this are twofold: either the factors controlling DOC and DON dynamics in these systems are different, or the components of the DOM pool are utilised by the microbial community at different rates (Wiegner et al., 2006, 2009). Several studies have shown that the dynamics of DON and DOC within the same river can differ, with DON being cycled faster within rivers than DOC (Stepanauskas et al., 2000; Solinger et al., 2001; Wiegner et al., 2006; Inamdar and Mitchell, 2007). In addition, the ultimate fate of the organic carbon and nitrogen can differ, as it can be either incorporated into bacterial biomass or oxidised and excreted (Hopkinson et al., 1999).

The impact of rainfall events

The resolution of sampling in this study was not high enough to fully resolve the behaviour of DOC or DON within the Rivers Avon or Stour, or within Christchurch Harbour, during storms using hysteresis curves (e.g. Lloyd et al., 2016), but weekly sampling over the course of a year has demonstrated that local increases in rainfall result in sudden increases in both river flow and riverine DOC concentration, and thus flux. Elevated river flows also increase the DIN flux to the coastal zone (Supplemental Fig. 2). DON dynamics are more complicated to resolve, possibly due to the interplay of different sources as well as currently unidentified removal processes within the estuary. Whilst the river flows observed during the year of study could be considered atypical in comparison to the 11-year mean flows for each river, they are certainly relevant when considering the estimated fluxes of each river under future climate change conditions of drier summers and more frequent stochastic storm events (IPCC, 2014).

Conclusion

There are few studies detailing the yields of dissolved organic nutrients in rivers and estuaries that are already known to be anthropogenically impacted with elevated inorganic nutrient loads, despite their potential contribution to problems such as eutrophication and hypoxia. Annual yields of dissolved inorganic and organic nutrients at the lowest gauging stations of the Hampshire Avon and the Stour, as well as in their shared estuary of Christchurch Harbour, are comparable to and often greater than yields documented from other riverine and estuarine systems both within the UK and globally. Whilst the yield of DON was typically an order of magnitude smaller than the corresponding yield of nitrate plus nitrite at each site, the range of 118-198 kg N km⁻² y⁻¹ is still an ecologically important load of nitrogen potentially available to the aquatic microbial community. The processes controlling the dynamics of DOC and DON differed in both the rivers and the estuary, highlighting the importance of considering the component parts of DOM when investigating the role of DOM in aquatic systems.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
v3-fos-license
2016-10-31T15:45:48.767Z
2016-07-27T00:00:00.000
5665132
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2016.00102/pdf", "pdf_hash": "31223ac8992ec0b073c05b6e45ed2cf9dcc00847", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46367", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "sha1": "31223ac8992ec0b073c05b6e45ed2cf9dcc00847", "year": 2016 }
pes2o/s2orc
Changes in Skeletal Integrity and Marrow Adiposity during High-Fat Diet and after Weight Loss

The prevalence of obesity has continued to rise over the past three decades, leading to significant increases in obesity-related medical care costs from metabolic and non-metabolic sequelae. It is now clear that expansion of body fat leads to an increase in inflammation with systemic effects on metabolism. In mouse models of diet-induced obesity, there is also an expansion of bone marrow adipocytes. However, the persistence of these changes after weight loss has not been well described. The objective of this study was to investigate the impact of high-fat diet (HFD) and subsequent weight loss on skeletal parameters in C57Bl6/J mice. Male mice were given a normal chow diet (ND) or 60% HFD at 6 weeks of age for 12, 16, or 20 weeks. A third group of mice was put on HFD for 12 weeks and then on ND for 8 weeks to mimic weight loss. After these dietary challenges, the tibia and femur were removed and analyzed by micro computed-tomography for bone morphology. Decalcification followed by osmium staining was used to assess bone marrow adiposity, and mechanical testing was performed to assess bone strength. After 12, 16, or 20 weeks of HFD, mice had significant weight gain relative to controls. Body mass returned to normal after weight loss. Marrow adipose tissue (MAT) volume in the tibia increased after 16 weeks of HFD and persisted in the 20-week HFD group. Weight loss prevented HFD-induced MAT expansion. Trabecular bone volume fraction, mineral content, and number were decreased after 12, 16, or 20 weeks of HFD, relative to ND controls, with only partial recovery after weight loss. Mechanical testing demonstrated decreased fracture resistance after 20 weeks of HFD. Loss of mechanical integrity did not recover after weight loss. Our study demonstrates that HFD causes long-term, persistent changes in bone quality, despite prevention of marrow adipose tissue accumulation, as demonstrated through changes in bone morphology and mechanical strength in a mouse model of diet-induced obesity and weight loss.
Keywords: obesity, bone, marrow adipose tissue, marrow fat, weight loss, leptin, high-fat diet, fracture

Introduction

Over the past two decades, the prevalence of obesity has increased in Western countries (1,2). In the United States, currently ~68.6% of adults and approximately one-third (~31.8%) of children are overweight or obese (3). Obesity is associated with comorbidities including cardiovascular and metabolic disease, autoimmune disorders, and some cancers (1, 4-6). Recent work has suggested that obesity is also detrimental to bone health (7)(8)(9)(10)(11), with skeletal changes that can persist even after weight loss (10,12). Previously, it was assumed that obesity had a purely positive effect on bone mass (13)(14)(15); increased body weight provides mechanical stimulation, resulting in skeletal loading and bone accrual. However, juxtaposed to this, there is a newly recognized metabolic component, as the adipose tissue itself can exert a negative influence on bone (14). Indeed, increases in body mass index (BMI) have been associated with decreased bone mineral density (BMD) and increased fracture risk in obese adolescents and adults (9,16), and in obese children (17). The effect of obesity on fracture risk is site specific. The presence of soft-tissue padding from fat may contribute to decreased fracture risk in some areas (e.g., hip), while unprotected sites, such as the extremities (e.g., humerus and ankle), have increased risk (18)(19)(20). The cross-sectional nature of previous clinical studies can only identify associations between obesity and bone; thus, rodent models are widely utilized to explore the mechanisms underlying the relationship between obesity and the skeleton. It is well established that high-fat feeding of mice leads to a reduction in cancellous bone mass (7,12,21,22). This may be mediated by leptin-induced sympathetic tone, which has been implicated as a strong mediator of cancellous bone loss (23)(24)(25). By comparison, the cortical phenotype in response to high-fat diet (HFD) in rodents remains unclear, with some studies indicating an increase (11), no change (12,21,22,26), or a reduction in cortical bone mass (10,27). Located within the skeleton are the bone marrow adipocytes; recent studies suggest that marrow adipose tissue (MAT) expansion occurs during high-fat feeding (28,29). Whether MAT expansion and bone loss are somehow linked during obesity is still unclear; some studies suggest that these lineages are correlated (29-31), while Doucette et al. recently reported MAT expansion during diet-induced obesity that occurred independently of a bone phenotype (28). In addition to the effects of obesity on bone, weight loss interventions have also been shown to have detrimental effects on bone metabolism, as reviewed by Brzozowska et al. (32). 
There are a range of interventions, including calorie-restricted diets, exercise regimens, medications, and bariatric surgery (32,33). Each of these interventions aims to reduce body fat and improve metabolic disease; the full extent to which these processes may alter MAT and bone mass in the context of obesity is largely unknown. Bariatric surgical interventions (Roux-en-Y gastric bypass, laparoscopic adjustable gastric banding, and sleeve gastrectomy) have all been associated with a decline in bone mass despite improvements in metabolic health (32). In contrast to surgical weight loss, exercise has been shown to be quite beneficial for bone density due to increased muscle loading (34)(35)(36). The most common initial intervention clinically is calorie restriction, or "dieting." Few studies have looked at weight loss in rodent models through interventions of "switching" diet. One such study showed that switching back to a chow diet following high-fat feeding could rescue bone loss (12); however, the response of MAT and the interaction of MAT with bone loss in these models was not examined. The objective of this study was to investigate the interaction between MAT and bone in the context of high-fat feeding and to examine the response of these tissues to dietary weight loss. We demonstrate that high-fat feeding leads to excess peripheral adiposity, MAT expansion, a reduction in bone mass, and impaired bone strength. Weight loss led to a significant reduction in whole-body adiposity and blocked MAT expansion; however, it failed to completely rescue defects in skeletal morphology and biomechanics. This work begins to address the potential of adipose tissue within the skeleton to have an impact on bone: working, unlike peripheral fat, from the inside out.

Materials and Methods

Animals

Male C57Bl6/J mice (Jackson Laboratories) were given a normal chow diet (ND) (13.5% calories from fat; LabDiet 5LOD) or 60% high-fat diet (HFD) (Research Diets D12492) at 6 weeks of age for a duration of 12, 16, or 20 weeks. A third group of mice was put on HFD for 12 weeks and then on ND for 8 weeks [weight loss (WL) group]. Animals were housed in a specific pathogen-free facility with a 12-h light/12-h dark cycle at ~22°C and given free access to food and water. All animal use was in compliance with the Institute of Laboratory Animal Research Guide for the Care and Use of Laboratory Animals and approved by the University Committee on Use and Care of Animals at the University of Michigan. The tibia was selected for our longitudinal analyses since it can be used to simultaneously monitor changes in rMAT (proximal tibia) and cMAT (distal tibia) within one sample (37). To compare the changes in bone within the tibia to those in the femur, as reported previously (12), we also analyzed the femurs in the 20-week groups.

Micro-Computed Tomography

Tibiae were fixed in formalin for 48 h and then placed in phosphate-buffered saline (PBS). Specimens were embedded in 1% agarose and placed in a 19-mm diameter tube, and the length of the bone was scanned using a micro-computed tomography (microCT) system (μCT100, Scanco Medical, Bassersdorf, Switzerland). Scan settings were: voxel size 12 μm, medium resolution, 70 kVp, 114 μA, 0.5 mm AL filter, and integration time 500 ms. Density measurements were calibrated to the manufacturer's hydroxyapatite phantom. Analysis was performed using the manufacturer's evaluation software. 
Femurs were removed and frozen after wrapping in PBS-soaked gauze and then analyzed by microCT. Femora were scanned in water using cone beam computed tomography (explore Locus SP, GE Healthcare Pre-Clinical Imaging, London, ON, Canada). Scan parameters included a 0.5° increment angle, four frames averaged, an 80 kVp and 80 μA X-ray source with a 0.508 mm AI filter to reduce beam hardening artifacts, and a beam flattener around the specimen holder. All images were reconstructed and calibrated at an 18 μm isotropic voxel size to manufacturer-supplied phantom of air, water, and hydroxyapatite (38). Biomechanical assessment Following microCT scanning, femurs were loaded to failure in four-point bending using a servohydraulic testing machine (MTS 858 MiniBionix, Eden Prairie, MN, USA). All specimens were kept hydrated in lactated ringers solution-soaked gauze until mechanical testing. In the same mid-diaphyseal region analyzed by μCT, the femur was loaded in four-point bending with the posterior surface oriented under tension. The distance between the wide, upper supports was 6.26 mm, and the span between the narrow, lower supports was 2.085 mm. The vertical displacement rate of the four-point bending apparatus in the anterior-posterior direction was 0.5 mm/s. Force was recorded by a 50 lb load cell (Sensotec) and vertical displacement by an external linear variable differential transducer (LVDT, Lucas Schavitts, Hampton, VA, USA), both at 2000 Hz. A custom MATLAB script was used to analyze the raw force-displacement data and calculate all four-point bending parameters. Combining anterior-posterior bending moment of inertia data from μCT with mechanical stiffness from four point bending, the estimated elastic modulus was calculated using standard beam theory as previously described (38). The modulus of elasticity was derived based on previous methods with "L" set at 3.57 and "a" at 0.99 (39). Quantification of Trabecular and cortical Parameters with microcT Tibia. Regions of interest (ROI) was located for both cortical and trabecular parameters. Analyses were performed with MicroCT software provided by Scanco Medical (Bassersdorf, Switzerland). A mid-diaphyseal cortical ROI was defined as ending at 70% of the distance between the growth plate and the tibia/fibula junction. A ROI spanning 360 μm (30-slices) proximal to this region was analyzed with standard plugins using a threshold of 280. The trabecular ROI was defined as starting 60 μm (5-slices) distal to the growth plate and ending after 600 μm total (50-slices). Trabecular analyses were performed with standard Scanco plugins with a threshold of 180. Femur. ROI was located for both cortical and trabecular parameters. A diaphyseal cortical ROI spanning 18% of total femur length was located midway between the distal growth plate and third trochanter. Cortical bone was isolated with a fixed threshold of 2000 Hounsfield Units for all experimental groups. Parameters including cortical thickness, endosteal and periosteal perimeter, cross sectional area, marrow area, total area, anterior-posterior bending moment of inertia, and tissue mineral density (TMD) were quantified with commercially available software (MicroView v2.2 Advanced Bone Analysis Application, GE Healthcare Pre-Clinical Imaging, London, ON, Canada). A trabecular ROI 10% of total femur length was located immediately proximal to the distal femoral growth plate and defined along the inner cortical surface with a splining algorithm. 
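One common beam-theory estimate of the elastic modulus from four-point bending, corresponding in outline to the approach described in the biomechanical assessment above, is sketched below. It assumes stiffness is taken as the slope of the force-displacement curve with deflection measured at the loading points; the geometric constants of the cited methods may differ, and the numbers used here are placeholders only:

```python
def four_point_modulus(stiffness_n_mm, outer_span_mm, inner_span_mm, i_mm4):
    """Beam-theory estimate of elastic modulus from four-point bending.

    Uses the textbook relation for deflection at the loading points of a
    simply supported beam carrying two symmetric loads:
        E = S * a**2 * (3*L - 4*a) / (12 * I)
    where S is stiffness (N/mm), L the outer span, a the distance from an
    outer support to the nearer loading point, and I the bending moment of
    inertia from microCT (mm^4).  Returns E in MPa (N/mm^2).
    """
    a = (outer_span_mm - inner_span_mm) / 2.0
    return stiffness_n_mm * a**2 * (3.0 * outer_span_mm - 4.0 * a) / (12.0 * i_mm4)

# Placeholder inputs: stiffness from the force-displacement slope, spans from
# the fixture geometry, and Iyy from the microCT scan of the same region.
e_mpa = four_point_modulus(stiffness_n_mm=120.0, outer_span_mm=6.26,
                           inner_span_mm=2.085, i_mm4=0.15)
```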
Trabecular metaphyseal bone was isolated with a fixed threshold of 1200 Hounsfield Units.

Quantification of Marrow Adipose Tissue

Marrow adipose tissue volume within the tibia was assessed as described previously (37,40). After the initial microCT scan, bones were decalcified in 14% EDTA solution, pH 7.4, for 14 days at 4°C. Decalcified bones were stained with 1% osmium tetroxide solution in Sorensen's phosphate buffer, pH 7.4, at room temperature for 48 h. Osmium-stained bones were re-scanned using the Scanco microCT settings described above. For analysis of MAT within the tibia, four regions were defined as follows: (1) the proximal epiphysis, between the proximal end of the tibia and the growth plate; (2) the proximal metaphysis, beginning 60 μm (5 slices) distal to the growth plate and ending after 600 μm total (50 slices); (3) the growth plate to the tibia/fibula junction (GP to T/F J); and (4) the distal tibia, between the tibia/fibula junction and the distal end of the bone. MAT volume analyses were performed with standard Scanco plugins with a threshold of 500. These results were corrected for multiple comparisons using the Benjamini-Hochberg procedure as described previously (41). For comparisons in Figures 4, 6 and 7, a one-way ANOVA with Tukey's correction was applied. In Figure 8, linear regression was applied to test the significance of the correlations. Raw data for the skeletal morphology, marrow fat quantification, and biomechanical testing are available in Data Sets 1-3 in Supplementary Material.

Results

Increases in Body Mass with High-Fat Diet Are Rescued by Weight Loss

Mice were fed normal chow diet (ND) or 60% high-fat diet (HFD), starting at 6 weeks of age, for 12, 16, or 20 weeks. A separate group of mice received 12 weeks of HFD, followed by 8 weeks of ND to mimic weight loss (Figure 1A). Comparison of the 12-week HFD group to the WL group was used to determine if weight loss reversed changes that were already present at 12 weeks, or, rather, prevented further deterioration induced by continued HFD. Increases in body mass relative to ND control were apparent after 12 weeks of HFD (Figure 1B). Increases in body mass relative to ND control persisted at the 16- and 20-week time points (Figure 1B). Over time, body mass continued to increase from 12 to 16 weeks of HFD but stabilized between 16 and 20 weeks (Figure 1B). Body mass returned to normal after weight loss (Figure 1B). Weight gain was due, at least in part, to increases in liver, inguinal, and gonadal white adipose tissue (WAT) mass after 20 weeks of HFD (Figure 1C). With weight loss, liver mass returned to normal; however, WAT mass was only partially rescued (Figure 1C). Within the tibia, MAT expansion became significant, relative to ND control, after 16 weeks of HFD (Figures 2A,B). With HFD, changes in the proximal tibial epiphysis mimicked what was observed between the growth plate and tibia/fibula junction, with a 5.5- and 4.3-fold increase at 16 weeks, relative to 12 weeks, respectively (Figure 2B). The distal tibia was similar, though there was only a 2.1-fold increase between 12 and 16 weeks of HFD, likely owing to the higher baseline MAT in this region (Figure 2B). No additional MAT accrual in any region of the tibia was observed between 16 and 20 weeks of HFD (Figure 2B). In the weight loss group, the MAT in the regions of the proximal epiphysis and GP to T/F J was indistinguishable from that of the 20-week ND group (Figure 2B). 
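A minimal sketch of the Benjamini-Hochberg step-up procedure referenced in the statistics description above (the p-values are illustrative; in practice a library routine such as statsmodels' multipletests with method="fdr_bh" yields the same decisions):

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean array marking which hypotheses are rejected while
    controlling the false discovery rate at `alpha` (step-up procedure)."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Largest k with p_(k) <= (k/m) * alpha; reject hypotheses 1..k.
    thresholds = (np.arange(1, m + 1) / m) * alpha
    below = ranked <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.27, 0.74]))
```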
However, it was also similar in magnitude to the 12-week HFD group, suggesting that switching to ND was sufficient to block HFD-induced MAT expansion, rather than reversing MAT accrual that had already occurred. In the distal tibia, age-associated increases in MAT were noted from 12 to 16 weeks of age in the ND group. Prior to correction for multiple comparisons, MAT within the distal tibia in the WL group was higher than chow (p = 0.019) but less than HFD (p = 0.050). This suggests that weight loss blunted, but did not entirely prevent, HFD-induced MAT expansion in the distal tibia at 20 weeks (Figure 2B). Raw data for these comparisons are available as Data Set 1 in Supplementary Material.

[Figure 2 caption: Region-specific quantification of MAT in the proximal tibial epiphysis, between the growth plate and tibia/fibula junction (GP to T/F J), and the distal tibia. All graphs are mean ± SEM. N = 3-6 per group for the proximal epiphysis; low N is due to accidental removal/fracture of the proximal epiphysis during processing. N = 6-8 for all other groups. ND, normal chow diet; HFD, high-fat diet; WL, weight loss. "a", significant vs. 12-week on same diet; "b", significant vs. 16-week on same diet; "c", significant vs. 12-week HFD. *p < 0.050 for the indicated comparison.]

In the Tibia, Trabecular Bone Quality Decreases with High-Fat Diet and Partially Improves with Weight Loss

Consistent with previous reports (42,43), we observed an age-related decrease in trabecular bone volume fraction (BVF) and trabecular number, with a corresponding increase in spacing, in the proximal tibial metaphysis of ND control mice (Figures 3A,C,E,G). Relative to ND controls, mice fed a HFD had a significant decrease in trabecular BVF, bone mineral content (BMC), and number after 12 weeks of diet (Figures 3C,D,E). Structure model index and trabecular spacing were reciprocally increased (Figures 3F,G). Trabecular thickness remained unchanged (Data Set 1 in Supplementary Material). Loss of trabecular BVF and BMC with HFD, relative to control ND, persisted at the 16- and 20-week time points (Figures 3C-G). Weight loss partially rescued decreases in trabecular BVF, BMC, and number (Figures 3C-E) and increases in spacing (Figure 3G). Unlike loss of trabecular bone at 12 weeks, MAT volume was not significantly increased relative to ND control until 16 and 20 weeks of HFD (Figure 3B). Over time, MAT increased 3.5-fold from 12 to 16 weeks of age in the HFD group (Figure 3B). No further increases were present from 16 to 20 weeks of age. Weight loss completely prevented HFD-induced MAT accumulation from 12 to 20 weeks (Figure 3B).

In the Femur, Trabecular Bone Quality Decreases with High-Fat Diet and after Weight Loss

Relative to ND control, HFD caused MAT expansion within the femur (Figure 4A). MAT expansion was absent in the WL group (Figure 4A). The absolute amount of MAT in the distal femoral metaphysis was ~90% less than the proximal tibia (p < 0.0001, t-test) (Figures 3B and 4A). The magnitude of the loss of BVF with HFD at 20 weeks was comparable between femur and tibia (50 vs. 45%) (Figure 4B; Data Sets 1 and 2 in Supplementary Material). There was also a significant, HFD-induced decrease in trabecular number and increase in trabecular spacing (Figures 4D,F). Trabecular thickness and tissue mineral density remained unchanged (Figures 4C,E). With weight loss, there were no statistically significant differences in trabecular morphology, relative to HFD, in the femur (Figures 4B-F). 
There was a nonsignificant trend toward a decrease in trabecular spacing in the WL group relative to HFD (Figure 4F).

Changes in Cortical Bone after High-Fat Diet and Weight Loss
Within the tibia, there were no statistically significant changes in mid-diaphyseal cortical morphology after 12, 16, or 20 weeks of HFD or after weight loss (Figures 5A-F). However, slight differences may have been missed after statistical correction for multiple comparisons. For example, with a standard one-way ANOVA at the 20-week time point only, there was a slight decrease in cortical thickness in the 20-week HFD group relative to ND control (p = 0.045) (Figure 5D). Raw data are available in Data Set 1 in Supplementary Material. In the femur, 20-week HFD caused a significant decrease in cortical thickness relative to ND control (Figure 6D). Cortical tissue mineral density was also decreased with HFD (Figure 6F). With weight loss, cortical thickness and total mineral density improved relative to ND (Figures 6D,F). No differences in total area, marrow area, cortical area, or Iyy were noted (Figures 6A-F). Raw data are available in Data Set 2 in Supplementary Material.

Marrow Adipose Tissue Expansion Correlates with Bone Loss in the Tibia
In the control ND group, pooled over all ages (12, 16, and 20 weeks of diet), there was a significant inverse correlation between MAT volume in the proximal metaphysis and measures of trabecular morphology including BVF, trabecular thickness, and BMC, but not with trabecular number (Figures 7A-D). By contrast, in the HFD group, MAT volume was negatively correlated with trabecular BVF, thickness, and BMC in addition to trabecular number (Figures 7A-D). Cortical thickness was significantly negatively correlated with GP to T/F J MAT volume in the HFD group only (Figure 7E). Cortical TMD did not correlate with MAT volume in either group (Figure 7F).

High-Fat Diet Causes Persistent Decreases in Biomechanical Properties of the Femur
Four-point bending was performed to assess the biomechanical integrity of HFD and WL femurs. The femurs from the 20-week HFD and WL groups broke under a reduced maximum load relative to ND, trending toward less total work to induce fracture (Figures 8A,B). This indicates that, despite recovery of cortical thickness and mineral density with WL (Figure 6), bone quality remains impaired, leading to decreased fracture resistance and poor post-yield behavior. The yield load and post-yield work trended toward a decrease relative to ND in the HFD and WL groups, respectively (p < 0.1) (Figures 8D,E). The stiffness, post-yield displacement, and modulus of elasticity (39) were not significantly different between groups (Figures 8C,F).

DISCUSSION
To our knowledge, this is the first study that has measured, within the same bone, HFD-induced MAT expansion and changes in skeletal morphology. By incorporating a weight loss group, we were also able to inhibit MAT expansion, and thus examine the impact of HFD on bone quality in the absence of MAT accumulation. In this study, HFD caused increases in body mass at 12 weeks, indicating accumulation of peripheral adiposity. This occurred prior to increases in MAT, supporting the hypothesis that dysfunction of peripheral tissues (e.g., insulin resistance) occurs prior to HFD-induced MAT expansion. As adipocytes and osteoblasts arise from the same mesenchymal progenitor cell, the notion of "fate-switching," whereby one lineage is favored over the other, has been suggested (29)(30)(31).
The inhibition of osteoblast differentiation may subsequently lead to increased adipocyte production, thus presenting with a situation of reduced bone mass and increased MAT (44). It is of note that this concept fails to capture the complexity of skeletal progenitors -some of which have the capacity to differentiate into osteoblasts but not adipocytes (45). In our study we observed the well-documented inverse correlation between MAT volume and bone mass/density in the tibia (Figure 7). However, despite this correlation, our data do not support the hypothesis that MAT expansion is the sole mediator of bone loss with HFD. Specifically, deterioration of trabecular architecture occurred as early as 12-weeks after HFD in the tibia, while changes in MAT did not become statistically significant until 16 weeks (Figures 2 and 3). Furthermore, though switching from HFD to chow at 12-weeks completely prevented HFD-induced MAT accumulation in the WL group ( Figure 3B), loss of trabecular number and corresponding increases in trabecular spacing beyond that of controls still occurred (Figures 3E,G). Thus, in this context, inhibition of MAT expansion by weight loss was not sufficient to block HFD-induced decreases in trabecular bone within the tibial metaphysis. There are many MAT-independent effects with the potential to regulate bone during high-fat feeding, the presence of which may contribute to the cancellous and cortical bone loss observed in this model. Increased fat mass is associated with increased systemic markers of oxidative stress in both humans and mice (4). Increased peroxide (H2O2) and reduced endothelial nitric oxide synthase in a genetic model of obesity was associated with cancellous bone loss (46). Reactive oxygen species have been found to promote the association of the transcription factors FoxO with β-catenin, subsequently leading to a reduction in Wnt signaling and osteoblastic differentiation (47). Although the direct effects of leptin may also promote osteoblast proliferation and differentiation (25,48), the central effects of leptin have been shown to mediate the opposite effects, promoting cancellous bone loss via the sympathetic nervous system (23,24). Another central pathway involving increased neuropeptide Y (NPY) arising from leptin resistance during obesity is implicated in bone metabolism as mice with increased central NPY have concurrent obesity with bone loss (49) and NPY deficiency in ob/ob mice leads to improved cortical bone mass (50). Lastly, there is an increase in systemic inflammation with obesity that might directly affect bone marrow osteoclasts. A major source of obesity-induced inflammation stems from an increase in bone marrow macrophages and their progenitors (51). These bone marrow-derived macrophages during obesity mediate an inflammatory environment that has been shown to stimulate osteoclastogenesis and reduce osteoblast development (52,53), possibly due to the expansion of the common monocyte-osteoclast progenitor (54). Recently, Yue et al. have also demonstrated that leptin produced from obese adipose tissue can directly bind to leptin receptors on mesenchymal stem cells promoting differentiation of adipocytes and inhibiting osteoblast formation (29). Altogether there are a number of MAT-independent variables involved in coordinating the relationship between diet-induced obesity and bone. 
Though it is not the sole mediator of bone loss with HFD, our study does not rule out the possibility that MAT, particularly when present in large excess, may exert detrimental effects on bone. Indeed, the magnitude of cancellous bone loss in the (Figure 3) and cortical bone loss in the femur (Figure 6) was significantly greater in the 20-week HFD group with MAT expansion than the WL group in which MAT expansion failed to occur. Comparisons between the femur and tibia provide further clues as to this relationship. Consistent with a previous report, 12-weeks of HFD followed by 8-weeks of normal chow diet (WL group) did not prevent cancellous bone loss in the distal femoral metaphysis (12). By contrast, in the same animals, weight loss partially prevented HFD-induced deterioration of trabecular BVF and BMC in the tibia (Figure 3). It is possible that this discrepancy may be explained by differences in MAT. After 20-weeks of HFD, the volume of MAT in the metaphysis of the tibia was 9.8-fold greater than in the femur. This is similar to previous work by Halade et al., despite substantial differences in their model system (10% corn oil diet for 24-weeks in 12-month-old female mice) (44). Thus, it is possible that this increase in MAT contributed to additional bone loss in the tibia, beyond that observed in the femur, subsequently leading to a difference between the HFD and WL groups. However, given previous work, the nuances of this observation remain unclear (44). Direct interactions between MAT and bone may influence bone loss during high fat feeding. Recently, MAT was found to be a significant contributor of circulating adiponectin during calorie restriction (55), this emphasizes the potential of MAT to influence not only bone but also whole body homeostasis. Direct adipose-bone pathways have been demonstrated to influence bone mass; the main two adipokines implicated are leptin (29,48,(56)(57)(58) and adiponectin (59)(60)(61). More locally within the bone microenvironment, in vitro experiments have demonstrated that the release of free fatty acids from adipocytes inhibited osteoblast differentiation and promoted apoptosis through ROS production (62). Interestingly, co-cultures of osteoblasts and osteoclasts with adipocytes suggest that in addition to reducing osteoblastogenesis, osteoclastogenesis may be increased with increased adiposity, resulting in reduced bone mass (31). Biomechanically, weight loss after 12-weeks of HFD was insufficient to rescue impaired fracture resistance. Indeed, the maximum load endured by the HFD and WL femurs was nearly identical -despite almost complete recovery of body mass and prevention of MAT expansion in the WL group. Comparable stiffness and modulus of elasticity in the ND, WL, and HFD groups indicates that the elastic properties of the bone were not affected. However, the failure properties were similarly reduced in both the HFD and WL groups, despite differential rescue of tissue mineral density and cortical thickness, implying that femur architecture fails to explain the impaired biomechanics. This may point to dysfunction within the organic properties of the bone, such as impaired crosslinking of collagen (26), as a potential mediator of persistent HFD-induced fracture risk. Our study demonstrates that HFD causes long-term, persistent changes in bone quality. We started HFD at an age in which skeletal development is still highly active, likely contributing to impaired bone accrual during growth. 
Indeed, diet-induced obesity causes greater damage in growing bones (63). This is an important finding given the rise of obesity in pediatric populations (2,64). Furthermore, these data demonstrate that MAT is not necessary for HFD-induced bone loss; however, MAT expansion, when present, may contribute to additional skeletal deterioration. It is likely that changes within the bone microenvironment, including in the adipocytes themselves, also occur, but this was not examined in the current study (31,65) and will need to be evaluated with future mechanistic investigations. Given the rise in obesity across the age spectrum, this is a critical area of research, and future studies are needed to determine the effects of weight loss (dietary or surgical) on bone density and to understand the mechanisms that drive changes in bone health. Even with these limitations, a clear finding of this study is that some of the changes induced by HFD and followed by WL are reversible while others are permanent. Different regimens may be required to maintain bone health after WL, possibly with a focus on activity and diet (36).

AUTHOR CONTRIBUTIONS
ES and KS were involved in designing studies, completion of studies, data interpretation and analysis, and manuscript preparation. BK, KM, SK, KK, SA, and BZ were involved in completion of studies and data analysis, and reviewed the final manuscript.

FUNDING
This work was supported by grants from the National Institutes of Health, K99-DE024178 and R00-DE024178 (ES) and K08-DK101755 (KS).
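The statistical steps listed in the Methods above (Benjamini-Hochberg correction of multiple comparisons, one-way ANOVA with Tukey's correction for Figures 4, 6 and 7, and linear regression for the correlations in Figure 8) can be outlined in R. The sketch below is a minimal illustration rather than the authors' analysis code: the data frame, column names, and simulated values are placeholders.

# Minimal R sketch (not the authors' code) of the statistical workflow described
# in the Methods above; "mat_volume", "bvf", and the group labels are hypothetical.

# 1) Benjamini-Hochberg correction of a vector of raw p-values
raw_p <- c(0.003, 0.019, 0.050, 0.120)        # e.g., a set of pairwise comparisons
p_bh  <- p.adjust(raw_p, method = "BH")        # FDR-adjusted p-values

# 2) One-way ANOVA with Tukey's correction across diet groups
set.seed(1)
dat <- data.frame(
  group      = rep(c("ND", "HFD", "WL"), each = 8),
  mat_volume = c(rnorm(8, 1.0, 0.2), rnorm(8, 3.5, 0.6), rnorm(8, 1.2, 0.3))
)
fit <- aov(mat_volume ~ group, data = dat)
TukeyHSD(fit)                                  # all pairwise group comparisons

# 3) Linear regression to test the significance of a correlation
dat$bvf <- 0.25 - 0.03 * dat$mat_volume + rnorm(nrow(dat), 0, 0.01)
summary(lm(bvf ~ mat_volume, data = dat))      # slope p-value tests the correlation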
v3-fos-license
2017-08-03T00:50:46.101Z
2015-04-25T00:00:00.000
16189415
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12253-015-9926-7.pdf", "pdf_hash": "dd54acb23ccc078134648568a87f297d3f7491a9", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46368", "s2fieldsofstudy": [ "Biology" ], "sha1": "dd54acb23ccc078134648568a87f297d3f7491a9", "year": 2015 }
pes2o/s2orc
Exploring the Molecular Mechanism and Biomakers of Liver Cancer Based on Gene Expression Microarray Liver cancer is one of the most common cancers worldwide with high morbidity and mortality. Its molecular mechanism hasn’t been fully understood though many studies have been conducted and thus further researches are still needed to improve the prognosis of liver cancer. Firstly, differentially expressed genes (DEGs) between six Mdr2-knockout (Mdr2-KO) mutant mice samples (3-month-old and 12-month-old) and six control mice samples were identified. Then, the enriched GO terms and KEGG pathways of those DEGs were obtained using the Database for Annotation, Visualization and Integrated Discovery (DAVID, http://david.abcc.ncifcrf.gov/). Finally, protein-protein interactions (PPI) network of those DEGs were constructed using STRING database (http://www.string-db.org/) and visualized by Cytoscape software, at the same time, genes with high degree were selected out. Several novel biomarkers that might play important roles in liver cancer were identified through the analysis of gene microarray in GEO. Also, some genes such as Tyrobp, Ctss and pathways such as Pathways in cancer, ECM-receptor interaction that had been researched previously were further confirmed in this study. Through the bioinformatics analysis of the gene microarray in GEO, we found some novel biomarkers of liver cancer and further confirmed some known biomarkers. Introduction Liver cancer is one of the most common malignancies. It has a high morbidity and mortality, especially in sub-Saharan Africa and eastern Asia. The incidence of liver cancer has doubled or even more in the past 15 years [1]. However, the molecular mechanism of liver cancer is still largely unknown. For above reasons, an increasing number of researches on liver cancer have been conducted in recent years. Different molecular mechanism and various biomarkers related to liver cancer have been identified. Through qRT-PCR and Western blotting, JianXin et al. [2] have inferred that GOLPH3, which has higher expression level in gene and protein level of liver cancer patients compared with that of the normal population, is a new biomarker for liver cancer. Mah et al. [3] have found that the inflammation-related pathway NFkB plays an important role in liver cancer by analyzing the methylation profile of 59 liver cancer patients. Despite a great number of previous researches, molecular mechanism of liver cancer has not been fully grasped. Hence, further researches of molecular level, such as researches of gene or protein, are still needed to find out new molecular mechanism or biomarkers in an effort to improve the prognosis, diagnosis and treatment of liver cancer. Mdr2-knockout (Mdr2-KO) mice lack the liver-specific Pglycoprotein responsible for phosphatidylcholine transport across the canalicular membrane, which may result in dysfunctional phospholipid secretion [4]. Signs of inflammation are accompanied by an increase in plasma transaminase levels and followed by enhanced connective tissue storage and fibrosis progression. As a consequence of chronic inflammation and progressing fibrosis, Mdr2-knockout mice may develop liver cancer between the ages of 12 and 15 months [5]. With the rapid growth of microarry and its implication in cancer research, a lot of genes that are related to cancers (including liver cancer) have been verified. For example, Yang et al. [6] have found that Gα12 is an important therapeutic target for liver cancer through cDNA microarray analysis. 
Xu et al. [7] have verified, through microarray and RT-PCR technology, the role of CXCL5 in liver cancer migration and invasion. In this research, by analyzing the gene expression microarray of liver cancer in GEO database, we further confirmed the molecular mechanism and some biomarkers of liver cancer that had been investigated previously. Moreover, genes, which had not been researched but also had a great importance to liver cancer, were also included in this research. Also, most of the enriched GO terms and KEGG pathways of those genes were related to liver cancer, especially cell cycle, immune response, inflammatory response, pathways in cancer, MAPK signaling pathway, Cell adhesion molecules and etc. In conclusion, our finding can improve our understanding of liver cancer and provide potential therapeutic targets for further studies. Gene Expression Microarray Data In this study, the gene expression microarray data set GSE4612 was downloaded from the Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/). GSE4612 [8] is a gene expression profile data including six Mdr2 knockout (Mdr2-KO) mutant mice samples(3-month-old and 12-monthold) and six control mice samples(3-month-old and 12-monthold). The platform of this microarray data is GPL339 [MOE430A] Affymetrix Mouse Expression 430A Array. Preprocessing of the Microarray Data Unwanted noise of the raw microarray data was filtered out in the preprocessing stage. The normalization of raw data and background correction was conducted via affy [9] package in R. Moreover, multiple probes that corresponded to one gene symbol were summarized-taking the average expression values of those probes as the expression value of this gene. There were a total of 22,690 probes in the microarray and 13,687 gene symbols that had no duplicate before and after preprocessing. Get the Differentially Expressed Genes After the preprocessing, the critical step was to get the differentially expressed genes (DEGs) between the case samples and the control samples. The tool used in this study was the limma [10] package in R. t-test was conducted on the gene expression values between case samples and control samples and the genes with P value<0.05 and |log 2 (fold change)|>1 were selected out. According to those criteria, in the first step, the DEGs between the case samples and the control samples were selected out from 3-month-old mice and 12-month-old mice respectively, then the overlapped genes between those two list DEGs were selected out. The heatmap of the overlapped DEGs was obtained through gplots package in R to visualize their expression value in different samples. GO Enrichment and KEGG Pathway Analysis of the DEGs After getting the DEGs, GO enrichment and KEGG pathway analysis of the DEGs were conducted. Here, the tool used in this study was DAVID (http://david.abcc.ncifcrf.gov/) (Database for Annotation, Visualization and Integrated Discovery). It could be used to do functional annotation for a list of genes, gene functional classfication or gene ID conversion. In this study, the module used in this study was the functional annotation. First, we submitted the DEGs list into the database and selected Mus musculus in species column. Finally, the GO terms and the KEGG pathways with P value smaller than 0.05 and at least five genes were selected out as the enriched function of DEGs. 
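As an illustration of the preprocessing and DEG-selection steps described above, the following R sketch uses the affy and limma packages named in the Methods. The CEL-file path, sample labels, and design are hypothetical placeholders; the paper describes a t-test with P < 0.05 and |log2(fold change)| > 1, whereas the sketch uses limma's moderated statistics with the same thresholds, and the probe-to-gene-symbol averaging step is omitted.

# Minimal sketch (assumed file and sample names) of RMA preprocessing and
# limma-based DEG selection for one age-group comparison.
library(affy)
library(limma)

raw  <- ReadAffy(celfile.path = "GSE4612_CEL/")   # hypothetical path to CEL files
eset <- rma(raw)                                  # background correction + normalization
expr <- exprs(eset)

group  <- factor(c("KO", "KO", "KO", "CTRL", "CTRL", "CTRL"))  # hypothetical labels
design <- model.matrix(~ 0 + group)
colnames(design) <- levels(group)

fit  <- lmFit(expr, design)
fit2 <- eBayes(contrasts.fit(fit, makeContrasts(KO - CTRL, levels = design)))
tab  <- topTable(fit2, number = Inf)

degs_3m <- rownames(tab[tab$P.Value < 0.05 & abs(tab$logFC) > 1, ])
# Repeat for the 12-month-old comparison, then keep the overlap:
# degs_overlap <- intersect(degs_3m, degs_12m)

# Heatmap of the selected DEGs, as done with the gplots package
library(gplots)
heatmap.2(expr[degs_3m, ], trace = "none", scale = "row")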
Construct the PPI Network of DEGs To further investigate the molecular mechanism of liver cancer, PPI network of the DEGs was constructed through STRING database (http://www.string-db.org/). STRING is a database that infers the interaction between genes through analyzing the genomic data that comes from different sources, such as high-throughput experiments, coexpression data and the previous data and etc. Also, it has a unique scoring framework which assigns the interaction an integrated score to represent its confidence through combining the score of the different sources. Here, we selected the gene-gene interactions, whose integrated scores were bigger than 0.4 (the default threshold in the STRING database), to construct the PPI network and Cytoscape [11] was used for visualization. To select core genes (the genes that might be more likely involved in liver cancer) from PPI network, we analyzed the topological structure of the network and obtained the degree (the number of genes that directly interact with the gene) of each gene. Here, we selected the genes whose degree is beyond 10 as the core genes in the network. Differentially Expressed Genes (DEGs) There were 1898 DEGs in the 3-month-old mice and 864 DEGs in the 12-month-old mice between the case samples and control samples. A total of 380 overlapped DEGs between those two DEG lists were identified. From the heatmap (Fig. 1), we could get that the gene expression of Mdr2 knockout samples were distinguished from the control samples, meanwhile, the gene expression of 3-month-old samples were distinguished from the 12-month-old samples, indicating that obvious differences existed in these groups. Enriched GO Terms and KEGG Pathways of DEGs In this study, a total of 128 enriched GO terms and 23 KEGG pathways were obtained. The top 10 enriched GO terms of the DEGs according to P value were shown (Table 1). Table 1 indicated that the main enriched GO terms was the biological process of cell, such as cell adhesion, regulation of cell growth, regulation of cell cycle. Besides the cell biological process, there were also some enriched GO terms related to immune response, inflammatory response and etc. The enriched KEGG pathways of the DEGs were shown in Table 2. A few enriched KEGG pathways were directly related to cancer, such as Pathways in cancer, Small cell lung cancer, Bladder cancer. What's more, it was possible that other pathways had an important influence on the progression of cancer via some biological process, such as Toll-like receptor signaling, EMC-receptor interaction, MAPK signaling pathway and etc. The KEGG pathways and their corresponding gene number were shown in Fig. 2. PPI Network of the DEGs and Core Genes in the PPI Network The PPI (Fig. 3) network contained 244 nodes and 1053 edges. The nodes represented the DEGs and the edges represented the interactions between the DEGs. A great number of genes of higher degree, which were the core genes in the PPI network, might relate to liver cancer more closely. The core genes and their corresponding degree were shown in Table 3. Among those core genes, Ctss and Tyrobp had the highest degree and there were 28 genes whose degree was beyond 20. Discussion Although researchers have made considerable efforts in disclosing the mechanisms of liver cancer,current understanding of the genetic alterations associated with the progression of liver cancer has not yet to be elucidated. 
In this study, we conducted genome-wide gene expression analysis by a high throughput method to identify the DEGs from liver cancer compared with normal liver tissues. Here, a total number of 380 overlapped DEGs from original dataset of two groups (3month-old group and 12-month-old group) were identified, including 289 overexpressed genes, 66 down-regulated genes and 25 genes that had contradictory expression trend. GO analyses revealed that the significant ontology categories included immune response, cell adhesion, inflammatory response and so on. Immune effector process, nuclear division, cell division, mitotic cell cycle and positive regulation of cellular component organization were obviously overrepresented in the up-regulated genes according to the functional enrichment analysis. In the immune response, for example, TLR2 could enhance ovarian cancer stem cell self-renewal and eventually promote tumor repair and recurrence [12]. ICAM-1 is a transmembrane glycoprotein in the immunoglobulin superfamily, which participates in oral cancer progression and induces macrophage/SCC-cell adhesion [13]. Ciftci et al. [14] indicated that serum TGFB1 level might be elevated in breast cancer patients and had a favorable prognostic value. CDH1, involved in cell adhesion, can code the adhesion protein E-cadherin that plays a central part in the process of epithelial morphogenesis [15]. CCL5 belongs to the CCchemokine family and plays a pivotal role in the invasion and metastasis of human cancer cells. Huang et al. reported that CCL5 stimulation could increase lung cancer migration [16]. DEGs were then used in KEGG pathway analyses and 23 pathways were screened out, such as Cell adhesion molecules, Toll-like receptor signaling, EMC-receptor interaction, MAPK signaling pathway and etc. Previous researches reported that most of these pathways were involved in cancer progression. The immune system played a critical role in body defense system, and the dysfunction of immune system might result in cancer. Stimulation of various Toll-like receptors induced specific patterns of gene expression, which resulted in the activation of innate immunity and the development of antigen-specific acquired immunity [17]. Moreover, MAPK signal molecules participated in the amplification and specificity of the transmitted signals that finally activated a number of regulatory molecules in the cytoplasm and the nucleus to initiate cellular processes such as proliferation, differentiation, and development [18]. Furthermore, the topological structure analysis of PPI network suggested that Ctss, Tyrobp, Vim, Cdk1 were the top 4 core genes, which might be potential therapeutic targets for future research. Cathepsin S (Ctss), a key enzyme in major histocompatibility complex class II (MHC-II) mediating antigen presentation, might be involved in malignant progression of lung cancer [19]. CD47 positive liver cancer cells preferentially secreted cathepsin S (CTSS), which regulated liver tumor-initiating cells through the CTSS/protease-activated receptor 2 (PAR2) loop [20]. Shabo indicated that Tyrobp (DAP12) in breast cancer was associated with an advanced tumor grade and higher rates of skeletal and liver metastases [21,22]. Costa reported that Vim could associate with GDF15 and TMEFF2 to predict bladder cancer [23]. Overall, with a microarray data set from the GEO database, a range of DEGs were obtained in liver cancer and normal tissues. These genes might be functionally relevant to pathogenesis of liver cancer. 
Functional analysis revealed mitotic cell cycle, proteinaceous extracellular matrix and MAPK signaling pathway participated in biological processes as the significant items for liver cancer. These results could provide a valuable data base for further investigation of liver cancer research. Of course, further experiments are still needed to further confirm the potential function of these genes. Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
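The degree-based selection of core genes described in the "Construct the PPI Network of DEGs" subsection can be sketched as follows. The exported edge-list file and its column names are assumptions (a STRING web export is assumed); the 0.4 combined-score threshold and the degree > 10 cut-off follow the text, and visualization in Cytoscape is not reproduced here.

# Minimal sketch (assumed STRING export format) of building the PPI network
# and selecting core genes by node degree.
library(igraph)

edges <- read.delim("string_interactions.tsv")            # hypothetical export file
edges <- edges[edges$combined_score >= 0.4, c("node1", "node2")]

g <- graph_from_data_frame(edges, directed = FALSE)
g <- simplify(g)                                           # drop duplicate edges / loops

deg <- sort(degree(g), decreasing = TRUE)
core_genes <- names(deg[deg > 10])                         # "core genes" per the text
head(deg, 4)                                               # e.g., Ctss, Tyrobp, Vim, Cdk1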
v3-fos-license
2019-03-09T14:18:31.067Z
2013-12-29T00:00:00.000
72278629
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/archive/2013/920806.pdf", "pdf_hash": "9b543343c9b4c6442a66f2ec72f5feb87d03ef21", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46369", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "sha1": "b2153445dcd23f176a45744674ca7402eb217778", "year": 2013 }
pes2o/s2orc
Lesson Learned from the Emergence of Influenza Pandemic H 1 N 1 in 2009 in Indonesia : The Importance of Influenza-Like Illness ( ILI ) Surveillance Background. In 2009 there were outbreaks of influenza pandemic H1N1 in Indonesia that were caused by different virus from the previous circulated H1N1. Further, the influenza-like illness (ILI) surveillance plays an important role in the early detection of influenza outbreaks in outpatients. To understand the disease burden of ILI in the community at the time of H1N1 pandemic 2009, a sentinel-based survey was performed.Methods. The nasal and throat swabs were obtained from 20 primary health centers of ILI sentinel in Indonesia in 2009. Identification of virus influenza pandemic H1N1 was carried out by real-time RT-PCR using primers that are specific for influenza A. Results. Out of 3254 ILI cases from community-based ILI surveillance in 2009, 11.03% cases were Influenza A positive and 42.59% cases were influenza pandemic H1N1. The first influenza pandemic HINI case was detected at week 15 in April, a case from the province of Banda Aceh, reaching a peak in August and ending at week 44 in November of 2009. Conclusion. The influenza pandemic H1N1 outbreak was detected in ILI surveillance network in Indonesia.This outbreak lasted for eight months which was the final wave of the influenza pandemic H1N1 in the world. Background Influenza is a disease that can potentially become a pandemic.From the history of the various subtypes that have been detected, there are several subtypes that caused pandemic such as H1N1 and H3N2.Influenza viruses circulate throughout the year in Indonesia, with seasonal activity often peaking during the rainy season (December-January) [1]. In 2009 there were outbreaks of H1N1pdm09 in the world.During the pandemic period, more than 214 countries and parts of the world reported positive laboratory confirmation of cases H1N1pdm09 and recorded more than 2900 died in Europe [2].In mid-April 2009, the first cases of 2009 pandemic influenza A (H1N1) were identified in the United States [3].Viruses were detected in Italy in May 2009 on adult men who came from Mexico [4].On July 2, 2009, South East Asia had reported 1,866 cases [3].Since then more than 27,000 confirmed cases and 260 fatalities [2,4].The H1N1pdm09 case among outpatient cases in Indonesia was first detected through influenza-like illness (ILI) surveillance in Indonesia on April 13, 2009 in boys aged 2 years old with a fever the day before, cough, and sore throat and domiciled in Banda Aceh. The influenza A H1N1pdm09 virus originated from countries outside Indonesia that spread rapidly in the world.ILI surveillance is national influenza surveillance laboratories based on outpatient cases contained in each sentinel health center in 19 provinces in Indonesia.ILI surveillance in Indonesia plays an important role for the early detection of influenza H1N1pdm09 cases during outbreak.To have better estimation of the burden of ILI in the community at the time of the emergence of influenza H1N1pdm09 and observation on cases of potentially pandemic influenza in Indonesia, we conducted a sentinel-based survey in 20 sentinel sites by the relevant units in the Ministry of Health to assess the incidence of ILI caused by H1N1pdm09. 
Methods 2.1.Data and Specimens Collection.ILI cases were obtained from patients with several symptoms including sudden fever with a temperature ≥37.8 ∘ C and with a cough, runny nose, sore throat, muscle pain, and shortness of breath that came to the outpatient at primary health center.Nose and throat swabs were obtained from outpatients in 20 health centers of ILI sentinel surveillance network in 19 provinces in Indonesia Throat and nasal swabs from patients with ILI symptoms were homogenously collected using dacron swabs and obtained from January through December 2009.Swabs were placed into sterile hanks' balanced salt solution (HBSS) viral transport media (VTM) that contained gelatin, 100 U/mL penicillin, 100 g/mL streptomycin, and 25 U/mL fungizone and then transported in cold condition to the regional reference laboratory.Specimens were sent every week to the Virology Laboratory of the CBBTH through the expedition using "one day service" to maintain the cold chain delivery of specimens. Data Analysis. The data included in this paper underwent data extraction by trained study personnel and were organized by Microsoft Office Excel 2007.The linear regression test was used to determine the risk factor of infection and analyzed using Stata software version 09 (StataCorp). Laboratory Diagnosis. The specimens were extracted using QIAmp viral mini kit (Qiagen, Hilden, Germany) according to the manufacturer's instruction.Real-time Reverse Transcriptase-Polimerase Chain Reaction (RT-PCR) was performed according to a recommended protocol from the United States Centers for Disease Control and Prevention (US CDC) on an IQ5 Bio-Rad real-time PCR instrument (Bio-Rad, US) [1,5].RT-PCR was preformed using primers and probes that are specific for influenza A, influenza B, H1N1, H3N2, H5N1, and H1N1pdm09.Primers and probes were provided by the Center for Disease Control and Prevention, USA.All laboratory tests were performed at the Virology Laboratory Center for Biomedical and Basic Technology of Health (CBBTH), Jakarta. Each case of ILI was laboratory-confirmed as negative for influenza A or positive for pandemic influenza A (H1N1)2009, influenza A (H3N2) or unsubtypable influenza A, as appropriate according to the recommended protocol from US CDC.PCR results of influenza were sent to FluNet for regular report and feedback to the sentinel and programs. Results A total of 3254 ILI cases ( = 1595 females (50.9%) and = 1659 males (49.1%)) were obtained from the outpatient case in the health centers of ILI sentinel surveillance.From the results of the real-time RT-PCR, positive cases of influenza A were 359 cases (11.03%) and positive cases for H1N1pdm09 were 187 cases (52.08%).Influenza A unsubtypeable were 89 cases (24.79%).Influenza A (H3N2) and H1N1 human were 41 cases (11.42%) and 42 cases (11.69%), respectively.Table 1 showed that the highest ILI cases were 1311 cases (40.29%) at the age group 6-14 years, followed by age group 0-5 years with 908 cases (27.1%), whereas most cases of influenza A were also in the same age group in the amount of 186 cases (14.19%) and followed by the age group 15−24 years for 62 cases (17.51%). 
Based on ILI surveillance in 2009, the first H1N1pdm09 case was detected in April 2009. The number of patients with ILI symptoms increased from April onwards, and in July to August roughly four times as many ILI cases were reported as in January to March. The number of specimens collected also increased. Most of the ILI cases were detected as H1N1pdm09. H1N1pdm09 cases reached a peak of 71.5% in July to August, when 176 influenza H1N1pdm09 cases were reported, and decreased in September 2009. In 2009, H1N1pdm09 became the dominant subtype of influenza A: almost 50% of influenza A circulating in 2009 was H1N1pdm09 (187 H1N1pdm09 out of 359 influenza A in total). Although there was an H1N1pdm09 outbreak, low numbers of seasonal influenza A (H1) and A (H3) were still found, and influenza B cases remained stable. The numbers of ILI cases and influenza subtypes from January to December 2009 are shown in Table 2. The first H1N1pdm09 case was reported from Banda Raya health center in the Nanggroe Aceh Darussalam province in April 2009. The specimen was collected on April 13, 2009, but was confirmed as influenza H1N1pdm09 only in June 2009, after we received the new influenza H1N1pdm09 primers and probes in early May 2009 from US CDC, the World Health Organization Collaborating Centre (WHO CC) for influenza. In 2009, Banda Raya health center reported and collected specimens from 117 ILI cases, of which 20 were influenza positive and only 3 of these 20 were H1N1pdm09 (Table 3). In 2009, not all ILI sentinel sites reported H1N1pdm09: only 12 of the 20 ILI sentinel health centres reported patients infected with H1N1pdm09 (Table 3). Figure 1 shows the distribution of the ILI sentinel sites that reported H1N1pdm09 throughout Indonesia. Figure 2 shows that the first case of H1N1pdm09 appeared at week 15, reached a peak at weeks 28 to 30, decreased slowly, and ended at week 34. The virus was then detected again at weeks 41 to 44 in November. Until the end of 2009, the virus never appeared again. Risk factor analysis of influenza H1N1pdm09 infection among samples is described in Table 4. The proportion of males infected with influenza H1N1pdm09 was higher than that of females, 112 out of 190 (58.95%). The age group 15-24 years old had a higher risk of being infected by influenza H1N1pdm09 than other age groups: 47 out of 65 cases (75.81%) in this age group were infected by influenza H1N1pdm09, and the risk of infection with influenza H1N1pdm09 was increased 6.7 times (Table 4).

Discussion
Influenza viruses are by nature unstable, and therefore the occurrence of the next influenza pandemic remains unpredictable. Early detection of pandemic influenza at the national level is a public health concern. Special concern needs to be addressed to Indonesia as a tropical country where influenza viruses circulate throughout the year without a well-defined pattern. In Indonesia, influenza surveillance is the essential system for routinely monitoring influenza activity, especially when influenza cases with pandemic potential occur [6].
After the WHO announced the two first cases of confirmed influenza H1N1pdm09 on week 16 in 2009, the virus rapidly spread throughout the world.The specimens of these two cases were collected on March 30 and April 1 and confirmed on April 15 and 17, respectively [7].The US CDC as a WHO CC for influenza shared the primers and probes to detect the influenza H1N1pdm09 to National influenza Centres (NIC) including Indonesia NIC who conduct the influenza surveillance.As a NIC, we collect, identify, analyze, and also isolate influenza strain from the clinical specimens. When pandemic occurred in April 2009, there were some unsubtypeable flu A found in ILI samples in Indonesia (Table 3).Following the distribution of primers and probes for influenza H1N1pdm09 detection from US CDC that we received on May 2009, we retested the unsubtype flu A. We found that one of the case reported ILI symptoms with specimens collected on April 13, 2009 was confirmed positive of influenza H1N1pdm09.That was the only influenza H1N1pdm09 case found on April.On May, we did not find any cases of influenza H1N1pdm09 but from June to August the influenza H1N1pdm09 cases increased rapidly. WHO with the Global influenza Surveillance Network (GISN) has monitored influenza activity for more than 60 years [8].Indonesia as WHO member state is included in the network with the routine influenza surveillance activity for 14 years since 1999.As part of the network, we should provide data of influenza activity to WHO.There are some ways to provide the data and one of them is sharing data through FluNet, a web-based electronic data for reporting the influenza activity.Unfortunately, Indonesia did not share the influenza data through FluNet until 2010.Therefore, the influenza data in 2009 especially for influenza pandemic activity from ILI surveillance were not recorded globally in WHO system.This situation occurred not only in Indonesia as WHO reported that only 54% WHO state members reported the activity for influenza pandemic detection [8]. The growth of the international trade and travel across the world increases the risk of emergence pathogen including pandemic influenza.The international travel can enhance the spread of the influenza H1N1pdm09 [8].From the 20 sentinels of ILI surveillance, there were only 12 sentinels reporting influenza H1N1pdm09 from April to December 2009.The characteristics of the 12 health centers are almost the same where they are in the city of provincial capital with lots of travelers.Especially, in Banda Aceh, there were a lot of foreign volunteers for short visit who wanted to remedy Aceh people after the tsunami in early 2005. This study describes the influenza activity during H1N1 pandemic in 2009 where H1N1pdm09 cases were mostly found in young adult (15-24 years).Based on risk factor analysis, age group 15-24 years old has higher risk to be infected by H1N1pdm09.This finding is consistent with the previous study that has been conducted in 11 states where 75% of confirmed cases of influenza H1N1pdm09 infection were at age <30 years with a peak at 10-19 years of age [9]. 
Cross-reactive antibodies to the pandemic virus were detected more frequently in people aged >60 years than in younger adults and children. This supports the theory that people at a young age do not have immunity to antigenically distinct influenza viruses [10,11]. The highest percentage of H1N1pdm09 cases found in Indonesia occurred at the end of July 2009 (week 28), indicating the rapid and wide spread of this virus since its first occurrence in April 2009 in the US. However, influenza B, H1N1, and H3N2 viruses also remained detectable during the pandemic, as seen in other studies in the US [12]. Regarding the occurrence of influenza H1N1pdm09 in 2009, some countries imposed early mitigation efforts to avoid events involving large numbers of people. Unfortunately, in Indonesia early mitigation could only be carried out in a limited way, as laboratory testing for early detection was constrained by the availability of primers and probes. Monitoring the increasing number of H1N1pdm09 cases and of fatal cases caused by the H1N1pdm09 virus was the only effort during the 2009 outbreak. There were no special precautions at the health centres that reported confirmed cases of influenza H1N1pdm09. This is in accordance with the WHO recommendation for close monitoring; however, the recommendation does not control viral spread in relation to international travel [13].

Conclusions
Influenza A virus infection (H1N1pdm09), the cause of a worldwide pandemic in 2009, was also detected in Indonesia through the sentinels of ILI surveillance. After the H1N1pdm09 virus spread, the number of ILI cases increased and H1N1pdm09 became the dominant influenza virus subtype throughout 2009. The risk of infection with the new influenza A subtype H1N1pdm09 was increased in males and young adults. The detection of the new H1N1pdm09 virus through surveillance activity shows that ILI surveillance is very important as a sustained activity. ILI surveillance has crucial roles, especially in monitoring the patterns of the influenza viruses circulating in Indonesia and in the early detection of emerging novel influenza viruses.

Figure 1: Map of ILI sentinel sites and provinces in Indonesia.
Figure 2: Distribution of ILI cases in Indonesia with seasonal influenza, weekly, 2009.
Table 1: Characteristics of influenza A cases from ILI surveillance in 2009 in Indonesia.
Table 2: Case proportion, type, and subtype of seasonal influenza, monthly.
Table 3: Number of ILI cases, type, and subtype of influenza per sentinel site.
Table 4: Risk factors of influenza H1N1pdm09 infection.
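The risk-factor analysis summarized in Table 4 was run as a linear regression in Stata, according to the Data Analysis subsection above. As a hedged illustration of how sex- and age-specific infection risks could be estimated from individual-level surveillance records, the sketch below instead fits a logistic regression in R on a simulated data frame; the column names, age groupings, and simulated values are hypothetical, and this is not the authors' analysis.

# Hypothetical sketch: infection risk by sex and age group from individual-level
# ILI results (one row per ILI case; columns h1n1pdm [0/1], sex, age_group).
set.seed(2009)
ili <- data.frame(
  h1n1pdm   = rbinom(3254, 1, 0.06),
  sex       = sample(c("male", "female"), 3254, replace = TRUE),
  age_group = sample(c("0-5", "6-14", "15-24", "25+"), 3254, replace = TRUE)
)

fit <- glm(h1n1pdm ~ sex + age_group, data = ili, family = binomial)
summary(fit)
exp(coef(fit))   # odds ratios by sex and age group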
v3-fos-license
2021-02-02T21:13:01.398Z
2021-01-13T00:00:00.000
231744124
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://zookeys.pensoft.net/article/58759/download/pdf/", "pdf_hash": "53688df6f33a782a464ee967af20269662ecf5ac", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46372", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "sha1": "53688df6f33a782a464ee967af20269662ecf5ac", "year": 2021 }
pes2o/s2orc
Numerous new records of tropical non-indigenous species in the Eastern Mediterranean highlight the challenges of their recognition and identification Abstract New data on 52 non-indigenous mollusks in the Eastern Mediterranean Sea is reported. Fossarus sp. (aff. aptus sensu Blatterer 2019), Coriophora lessepsiana Albano, Bakker & Sabelli, sp. nov., Cerithiopsis sp. aff. pulvis, Joculator problematicus Albano & Steger, sp. nov., Cerithiopsis sp., Elachisina sp., Iravadia aff. elongata, Vitrinella aff. Vitrinella sp. 1 (sensu Blatterer 2019), Melanella orientalis, Parvioris aff. dilecta, Odostomia cf. dalli, Oscilla virginiae, Parthenina cossmanni, Parthenina typica, Pyrgulina craticulata, Turbonilla funiculata, Cylichna collyra, Musculus coenobitus, Musculus aff. viridulus, Chavania erythraea, Scintilla cf. violescens, Iacra seychellarum and Corbula erythraeensis are new records for the Mediterranean. An unidentified gastropod, Skeneidae indet., Triphora sp., Hypermastus sp., Sticteulima sp., Vitreolina cf. philippi, Odostomia (s.l.) sp. 1, Henrya (?) sp., and Semelidae sp. are further potential new non-indigenous species although their status should be confirmed upon final taxonomic assessment. Additionally, the status of Dikoleps micalii, Hemiliostraca clandestinacomb. nov. and H. athenamariaecomb. nov. is changed to non-indigenous, range extensions for nine species and the occurrence of living individuals for species previously recorded from empty shells only are reported. Opimaphora blattereri Albano, Bakker & Sabelli, sp. nov. is described from the Red Sea for comparison with the morphologically similar C. lessepsiana Albano, Bakker & Sabelli, sp. nov. The taxonomic part is followed by a discussion on how intensive fieldwork and cooperation among institutions and individuals enabled such a massive report, and how the poor taxonomic knowledge of the Indo-Pacific fauna hampers non-indigenous species detection and identification. Finally, the hypothesis that the simultaneous analysis of quantitative benthic death assemblages can support the assignment of non-indigenous status to taxonomically undetermined species is discussed. Introduction The Eastern Mediterranean Sea is a hotspot of non-indigenous species introductions. The opening of the Suez Canal in 1869 broke a long-standing biogeographic barrier and enabled hundreds of Red Sea species to enter the basin and establish populations (Por 1978;Galil 2009;Zenetos et al. 2010Zenetos et al. , 2017Zenetos and Galanidi 2020). These so-called Lessepsian species are now recorded from all countries bordering this basin west to Greece (Katsanevakis et al. 2009;Çinar et al. 2011;Ammar 2018;Zenetos et al. 2018; Bariche and Fricke 2020;Crocetta et al. 2020) and some have already reached the central Mediterranean, e.g., Tunisia (Ounifi-Ben Amor et al. 2015), Italy (Occhipinti-Ambrogi et al. 2011), and even France (Daniel et al. 2009;Bodilis et al. 2011). The introduction rate is an important metric to describe the invasion process. Genuine variation in this rate can result from changes in vector efficacy, connectivity between the native and introduced range, and environmental conditions in the recipient ecosystem. The introduction rate is often estimated from the discovery record (Solow and Costello 2004). However, even in well sampled and taxonomically well-known groups like mollusks, multi-decadal time lags between introduction and first detection have been quantified (Oliver 2015;Guy-Haim et al. 
2017;, suggesting that the detection rate is a poor proxy of the introduction rate. Indeed, although the discovery rate is increasing (Galil 2009;Raitsos et al. 2010), estimates of the introduction rate corrected for temporal variation in sampling effort for Lessepsian fishes showed that it was constant over ~ 1930~ -2010~ (Belmaker et al. 2009). Still, the most recent enlargement of the Suez Canal has raised concerns that the improved connectivity could increase the introduction rate of Lessepsian species . Additionally, rapid climate warming is particularly affecting the Eastern Mediterranean (Ozer et al. 2017), causing, on the one hand, the decline of native species and, on the other hand, more favourable conditions for the establishment of tropical species (Rilov 2016;Albano et al. 2021). To monitor a dynamic process such as the Lessepsian invasion, intensive fieldwork is mandatory. We indeed show that an intensive sampling effort coupled with identification at high taxonomic resolution and collaborative research among individuals and institutions enabled the detection of 23 new Lessepsian mollusks, another nine species which, upon further inspection, may prove to be new Lessepsian species, nine new records for Eastern Mediterranean countries, and new data for eleven already recognized non-indigenous species. We here describe these new findings, providing detailed collecting data, taxonomic comments, and comparisons with similar species. Origin of samples The studied material comes from three main sources. First, sampling on the Israeli Mediterranean shelf performed in the context of the project "Historical ecology of Lessepsian migration" (HELM), in progress at the University of Vienna. Second, benthic assemblage monitoring by the Israel Oceanographic and Limnological Research (IOLR). Third, smaller scale sampling by some of us, further detailed in the Results section. Sampling in the framework of the HELM project was conducted on soft substrates between 10 and 40 m depth with a van Veen grab, and on hard substrates between 5 and 30 m by diver-operated airlift suction sampling, using 0.5 mm mesh-size net bags. Samples were sieved with a 0.5 mm mesh and the retained material fixed in 95% ethanol. Both living individuals and empty shells were identified and counted. IOLR conducts regular monitoring of Israeli soft bottom benthic assemblages in the framework of the National Monitoring (NM) and focused sampling for environmental assessment (APM DAN, Shafdan, Via Maris). The NM, APM DAN and Via Maris projects sampled soft substrates with a 0.11 m 2 van Veen grab at depths between 6 and 12.5 m (NM), 22 and 26.5 m (APM DAN) and 18 and 26 m (Via Maris). Samples were sieved with a 250 μm mesh. During the Shafdan project, three replicate sediment samples were taken at each station from a different 0.062 m² box-corer launch (Ocean Instruments model 700 AL) twice a year in spring (May) and fall (October). The samples were sieved on board with a 0.5 mm mesh. All samples were preserved in 99% ethanol, stained with eosin solution (hence the pink hue that some specimens bear) and picked for living individuals. Finally, we included serendipitous findings by some of us or by colleagues within our extended network, from multiple localities. For each species, we provide detailed collecting data following the guidelines by Chester et al. (2019). 
Taxonomic assignment and non-indigenous status attribution The depth of taxonomic assignment varies across taxa, mostly reflecting the available knowledge on these groups in the Indo-Pacific province (the source pool of most nonindigenous species in the Eastern Mediterranean). For families like the Triphoridae, some of us (PGA, PAJB, and BS) have been conducting taxonomic research for a long time and we have thus been able to describe new species as we have robust knowledge of inter-and intraspecific variability and of type specimens (Albano et al. 2011(Albano et al. , 2017Albano and Bakker 2016). For other families, like the Eulimidae, we focused our attention on highlighting differences from native species and similarities with Indo-Pacific species, because a more thorough coverage would have required revising the taxonomy of entire Indo-Pacific species-groups, a task well beyond our objectives. In all cases, we strove to provide detailed and high-quality images as a basis to foster further research and enable the scientific community to refine our identifications. The use of qualifiers for species left in open nomenclature follows the recommendations of Sigovini et al. (2016). Acknowledging that an unsettled taxonomic status implies uncertainty in the assignment of non-indigenous status , we here tagged as nonindigenous only the species which: i) unequivocally belong to Indo-Pacific species; ii) belong to clades (genera or families) that do not occur in the Mediterranean Sea, even if left in open nomenclature; iii) belong to species whose diagnostic characters did not enable a clear attribution to a non-Mediterranean clade but that were found alive while not, or only very rarely, in the death assemblage (see Discussion). In contrast, we tagged as "potential" non-indigenous species those whose morphological characters did not allow for an unambiguous attribution to a non-Mediterranean clade and that were found mostly, or exclusively, as empty shells. Imaging and reporting Small specimens were photographed with a Zeiss SteREO Discovery.V20 stereomicroscope, larger ones with a Nikon D7200 camera mounted on a stand, using a Nikon Micro-Nikkor 60 mm lens. Photographs were stacked with Helicon Focus 6. Scanning electron microscope (SEM) images were shot with a Fei Inspect S50 at low-vacuum mode without coating. The internal shell morphology of Odostomia (s.l.) sp. 1, with a particular focus on the intorted protoconch, was visualized using a Phoenix v|tome|x s research edition computer tomographic (CT) scanner. The 3D-reconstruction and virtual sections through the shell were produced with VGSTUDIO MAX 2.1 software. The X-ray image stack, mesh files, virtual sections, and a video showing the interior of the shell are available from the Figshare repository (https://doi.org/10.6084/ m9.figshare.c.5215226). Plates were mounted with the image manipulation software GIMP 2. For each new non-indigenous species record, we report the size of at least one specimen (usually the one figured, unless otherwise stated). The systematic arrangement follows Bouchet et al. (2010Bouchet et al. ( , 2017. Table 1 summarizes the species treated in this work. Table 1. List of the taxa treated in this paper, with indication of the novelty of the records. New NIS for the Mediterranean Sea 12 Figure 6 Opimaphora blattereri Albano, Bakker & Sabelli, sp. nov Class Gastropoda Cuvier, 1795 Family unassigned (Caenogastropoda) Unidentified gastropod Figure 1 New records. 
Israel • 1 spcm; Haifa Bay; 32.8211°N, 35.0196°E; depth 11 m; 2 Aug. 2015; soft substrate; grab; NM project (sample HM27(c)); size: H 2.5 mm, W 1.6 mm. Remarks. We were not able to confidently assign this specimen to any family. The general characters suggest that it is a caenogastropod. This specimen has apparently traces of the animal inside and has thus been considered live collected. However, as it was found in Haifa Bay, we cannot exclude that it comes from freshwater or transitional ecosystems (the adjacent Kishon River and estuary) whose waters flow into the bay. An anatomical study of the soft parts, should another living specimen become available, will clarify the taxonomic placement of this intriguing species. Remarks. The species has already been reported from Israel and Turkey (Bogi and Galil 1999;Buzzurro and Cecalupo 2006), but only from empty shells. To the best of our knowledge, this is the first record of living individuals from the Mediterranean Sea. Based on our observations, it occurs rather frequently on shallow subtidal rocky substrates. The samples from Turkey led to the description of Parviturbo dibellai Buzzurro & Cecalupo, 2006, at that time supposed to be a native species, but this name was later recognized to be a synonym of the Indo-Pacific Fossarus eutorniscus (Rubio et al. 2015), attributed to Conradia by Janssen et al. (2011). Janssen et al. (2011) highlighted, however, that the Red Sea specimens have seven spiral cords instead of the five cited in the original description based on material from Karachi (Pakistan). Specimens with five spiral cords occur also in the Persian (Arabian) Gulf and rarely in the Red Sea (H. Dekker, pers. comm., November 2020). Further research is required to ascertain if these two morphologies belong to two different taxa. Remarks. This species has been recently described from sediment collected in 2016 at 33-45 m depth at Karpathos and Samos islands in the eastern Aegean Sea (Agamennone et al. 2020b). The authors discussed but declined the possibility that this is a Lessepsian species, but one of us (BS) observed non-distinguishable specimens from the Red Sea and we received reports of further indistinguishable specimens from the Persian (Arabian) Gulf (H. Dekker, pers. comm., November 2020). This is the first record for Israel; all specimens were live collected. Figure 3 New records. Israel • 1 sh;Akko;32.92°N, 35.07°E;depth 4 m;22 Oct. 1998; shell grit sample; size: H 0.6 mm, W 1.0 mm. Skeneidae indet. Remarks. This tiny gastropod (largest diameter 1 mm) is characterized by a small but solid shell, ~ 0.75 whorls of protoconch with axial costae visible near the prototeleoconch transition (more costae closer to the nucleus may be abraded), and two teleoconch whorls with numerous regular spiral cords. The shoulder is slightly angulated near the lip. Umbilicus open, large. Shell white, slightly translucent. No native Mediterranean species shares these features. Only Skenea catenoides (Monterosato, 1877) has a similarly solid shell with numerous regular spiral cords, but it can be distinguished easily by the three nodulose thicker spiral cords on the base and the lack of angulation at the shoulder. Both Mediterranean (e.g., Circulus striatus (Philippi, 1836) and Red Sea Circulus (e.g., C. novemcarinatus (Melvill, 1906a)) and C. octoliratus (Carpenter, 1856)) can be distinguished by the multispiral protoconch and the much more prominent spiral cords. 
It is most likely a new, probably still unnamed, Indo-Pacific species in the Mediterranean. New records. Israel Family Naticidae Guilding, 1834 Eunaticina papilla (Gmelin, 1791) Figure 5 New records. Israel Remarks. We here report the finding of a living individual of Eunaticina papilla from the Israeli Mediterranean shelf. This juvenile specimen can be assigned to E. papilla because of its overall shape, the sculpture of fine spiral cords, the large umbilicus and the morphology of the thin corneus operculum ( Figure 5D).The species has already been reported in the Mediterranean Sea from Iskenderun in eastern Turkey with a living individual (Öztürk and Bitlis Bakir 2013). An empty shell was collected near Shiqmona, Israel, in November 2019 and reported as Eunaticina linneana (Récluz, 1843) (Schechter and Mienis 2020), a name considered a junior synonym of E. papilla by Beu et al. (2004). Description. Color: protoconch light brown; first teleoconch whorls whitish, with the first spiral cord becoming brown after one to three whorls. The second spiral cord acquires this brown color only on the dorsal part of the last whorl. The fourth cord, visible only on the last whorl, is brown. The base is light brown. Teleoconch: 6 (holotype, paratype 1 and 2), 7.5 (Mediterranean specimen) whorls, height: 2.04 mm (holotype), 1.83 mm (paratype 1), 2.43 mm (paratype 2) and 2.95 mm (Mediterranean specimen). The tuberculate first and third spiral cords start simultaneously after the protoconch with the same size, the third later becomes progressively larger and more acute. The second spiral cord appears only on the last whorl and is smaller than the others in front view, becoming of similar size to the first dorsally. In the second half of the last whorl, a very thin smooth suprasutural cord is visible. The base shows a fourth rather smooth cord of the same color as the first, followed by a fifth and sixth cord that are smooth and very pale in color. Anterior siphonal canal short, tubular, and oblique; posterior siphonal canal a simple notch. Peristome without microsculpture and apparently without bifurcating spiral cords. The Mediterranean specimen is larger, has three white whorls after the protoconch and the second spiral cord appears on the seventh whorl, remaining still smaller than the others. Etymology. Named after the Lessepsian invasion (Por 1978), because we first found this Red Sea species on the Mediterranean Israeli shelf. The species epithet is an adjective in nominative singular feminine. Remarks. The Mediterranean specimen is larger and broader than the Red Sea ones. Triphorids do show a morphological dimorphism characterized by smaller and larger morphs and we think that we captured this dimorphism in our samples. See under Opimaphora blattereri Albano, Bakker & Sabelli, sp. nov Diagnosis. Shell cyrtoconoid of less than 3 mm with 11 (holotype) or 12 (paratype 2) whorls and multispiral protoconch. Nucleus with hemispherical granules. Sculpture of three spiral cords with round tubercles larger than their interspaces; the second cord appears only on the fourth whorl, initially as a thin smooth thread. Microsculpture absent on the teleoconch whorls, present on the peristome, which bears bifurcating spiral cords. Description. Color: protoconch brown; whitish first teleoconch whorls with the first spiral cord becoming brown after two whorls. Light brown irregular patches are randomly distributed on the teleoconch, usually covering one or, more frequently, two tubercles. 
The base background is white, with the color patches of the last whorl extending onto it. The tip of the anterior siphon is brown. The tuberculate first and third spiral cords start simultaneously after the protoconch with the same size, whereas the second cord appears from the fourth to the seventh teleoconch whorl, depending on shell size. This cord is initially thin and closer to the first one, it progressively increases its size until reaching that of the other two cords on the last whorl. On the second half of the shell, a very thin smooth suprasutural cord is visible. The second cord bifurcates on the peristome. The base shows a fourth rather smooth cord, and a fifth and sixth smooth ones; these cords become towards the peristome more granulated. On the peristome, below the third spiral thread, microsculpture is visible as fine spiral lines. Anterior siphonal canal tubular, short and oblique; posterior siphonal canal a simple notch. Etymology. This species is named after Hubert Blatterer, Austrian conchologist, in recognition of his work on Red Sea mollusks. Moreover, he contributed to our work on Lessepsian species by granting us access to the material he collected in the Red Sea and by donating the type series of O. blattereri and Coriophora lessepsiana. The species epithet is a noun in the genitive case. Remarks. We describe O. blattereri as new because of the similar color pattern to C. lessepsiana Albano, Bakker & Sabelli, sp. nov., even if it has not been reported from the Mediterranean Sea. The two species can be easily distinguished because C. lessepsiana has an monocarinated protoconch while O. blattereri has a bicarinated one; the second spiral cord of O. blattereri never becomes brownish as in C. lessepsiana; O. blattereri has a white background on the base and a distinct brown end of the anterior siphonal canal, whereas C. lessepsiana has a light brown base and the anterior siphon has not a colored end; the teleoconch of O. blattereri has irregular light brown patches, particularly evident on fresh specimens; this feature is totally absent in C. lessepsiana. We have seen specimens very similar to O. blattereri collected in Madagascar, New Caledonia, and French Polynesia. A revision of the group in the Indo-Pacific province is beyond the scope of this paper; however, this species likely has a broad distribution. Opimaphora blattereri and C. lessepsiana share their color pattern of brown to orange spiral cords on a white background with other Indo-Pacific species. Litharium bilineatum (Kosuge, 1962) (holotype illustrated by Higo et al. (2001)), Costatophora iniqua (Jousseaume 1898) (= Notosinister kawamurai Kosuge, 1962, type material illustrated by Higo et al. (2001) and ) and Aclophora albozonata Laseron, 1958 can be easily distinguished by having three fully developed spiral cords since the early teleoconch. Iniforis formosula (Hervier, 1898) and Mastonia peanites Jousseaume, 1898 (= Mastonia squamosa Kosuge, 1962, type material again illustrated by Higo et al. (2001) and ) have only two spiral cords, but the former has three or four dark brown lines on the last whorl, whereas the latter has a dark brown last whorl with lighter tubercles. Triphora fulvescens Hervier, 1898 also has a similar color pattern, but the second spiral cord remains a very fine thread even on the last whorl and the tubercles are whitish even on the first cord (on an orange background). 
Some species show a delayed appearance of the second spiral cord: Nototriphora regina (Hedley, 1903) has a brown tip of the anterior siphonal canal similarly to O. blattereri, but lacks the patches on the whorls and has an orange line on the third spiral cord on the last whorl; Coriophora tigris Laseron, 1958 has a paucispiral protoconch; Cautor similis (Pease, 1871) has larger and more densely arranged tubercles, a brown fourth spiral cord and white base. Last, a few species have a similar color pattern, but with an inverted pattern: the first spiral cord is white and the third orange to brown, like Mastonia cingulifera (Pease, 1861), which also has a dark yellow teleoconch, Mastonia funebris Jousseaume, 1884 and Mastonia tulipa Jousseaume, 1898 with a brown and white base, respectively. Triphora sp. Remarks. We found a single, adult, empty shell. It likely possesses a large paucispiral protoconch, but it is incomplete in our shell. The second spiral cord starts at midshell height, the fourth and fifth spiral cords are smooth, and the posterior siphonal canal is shallow. It is brown in color with darker spiral cords. We have not been able to assign it to a species so far, but it is distinctly different from all known Mediterranean species and most likely belongs to the Indo-Pacific fauna. Remarks. This species was first recorded from Israel by Steger et al. (2018) based on three living individuals from Palmachim, southern Israel. We here report multiple living individuals all along the Israeli coast, confirming its establishment. The species shows a broad distribution in the Eastern Mediterranean ranging from Greece to Turkey and Cyprus (Micali et al. 2017;Stamouli et al. 2017;Angelidis and Polyzoulis 2018;Chartosia et al. 2018). Its final taxonomic assignment requires the clarification of the relation between several other Viriola such as V. corrugata (Hinds, 1843), V. senafirensis (Sturany, 1903), and V. tricincta (Dunker, 1882) (Albano and Bakker 2016;Albano et al. 2017. Family Cerithiopsidae H. Adams & A. Adams, 1853 Cerithiopsis sp. aff. pulvis (Issel, 1869) Figure 9D-F New records. Israel Remarks. This species superficially resembles the Lessepsian Cerithiopsis pulvis but has a more cyrtoconoid shape and a greater ratio between the height of the last whorl and that of the shell. The base is not concave as in C. pulvis, bears a fourth spiral cord which is more prominently tuberculate, and an additional fifth tuberculate cord that is not present in typical C. pulvis. Additionally, the siphonal canal bears numerous fine cords. The color pattern is similar to C. pulvis which has orange bands on white background; in contrast, in C. aff. pulvis these are brown and yellowish, respectively. It is distinct from any native Mediterranean species and clearly belongs to an Indo-Pacific clade. It is here considered a new non-indigenous species. Cerithiopsis sp. Figure 10 New records. Israel Remarks. This beautiful species has almost eight teleoconch whorls bearing two strong spiral cords with oblong tubercles at the intersection with prosocline axial ribs. Interspaces between spiral cords are approximately as large as the cords themselves, and interspaces between the axial ribs are double the size of the ribs. A third smooth thick cord delimits the rather flat base and is visible above the suture throughout most of the teleoconch. 
The protoconch is smooth with very fine and extremely short axial riblets just below the suture; it is multispiral but broken in our specimen in which only the last two whorls are preserved. The slender shape, the two strong spiral cords and the smooth flat base distinguish it at once from all native Mediterranean species suggesting it is a new non-indigenous species in the basin. Among Indo-Pacific cerithiopsids, Synthopsis lauta Cecalupo & Perugia, 2013, described from Vanuatu, is among the few similar species we were able to trace. However, the interspace between the spiral cords is broader, the tubercles on the first spiral cord of the last whorl are larger than those on the second cord, and the teleoconch is shorter with just six whorls. Additionally, the color pattern with white tubercles, yellowish interspaces, deep brown suture and violet protoconch is strikingly different from the one of our shell. We have some reservations that S. lauta, as well as our specimen, belong to the genus Synthopsis Laseron, 1956 that was described as bearing three tuberculate spiral cords on the whole teleoconch (Laseron 1956). Pending a molecular phylogeny of the family, we consider this feature important at the genus level. Therefore, we assign our specimen to the nominotypical genus Cerithiopsis, in the wait of a better understanding of cerithiopsid systematics. The specimen identified as Horologica gregaria Cecalupo & Perugia, 2012 and illustrated in the recent revision of Cerithiopsidae from South Madagascar (Cecalupo and Perugia 2014b: fig. 8G) is also similar to ours; that specimen, however, has a distinct basal spiral cord which is absent in our specimen. The latter character, the prominence of the tuberculate spiral cords and the evident but rather flat third cord also raise some doubts that the specimen from South Madagascar is conspecific with the H. gregaria originally described from the Central Philippines (Cecalupo and Perugia 2011). Last, the Sudanese specimen of Horologica cf. taeniata Cecalupo & Perugia, 2013 illustrated by Cecalupo and Perugia (2016: fig. 1P-S) shares the general features of our shell but can be distinguished by the first spiral cord that tends to split into two separate cords, and by the color pattern of white teleoconch and orange base. Protoconch: composed of 3.5 whorls with no clear demarcation between protoconchs I and II, height: ~ 300 μm, width ~ 200 μm (holotype), but accurate measurement hampered by the last protoconch whorl being covered by the first teleoconch whorl. It appears smooth except for growth lines and fine pustules covering the lower half of the first whorl and sparsely present apically and abapically on the following whorls (only visible with scanning electron microscopy at high magnification). Teleoconch: 4 whorls (holotype), height: 1.4 mm (holotype). It bears three spiral cords of equal size, with tubercles at the intersection of orthocline axial ribs. The base is contracted and has two additional tuberculate spiral cords. Tubercles become oblong near the lip. Anterior siphonal canal short, reverted upwards, formed by a prong-like protrusion of the anterior outer lip ( Figure 11A); posterior siphonal canal notch-like. Etymology. The name problematicus refers to the difficult task of recognizing and identifying non-indigenous species belonging to groups whose taxonomy in the tropical seas is poorly known (see Discussion). The species epithet is an adjective in nominative singular masculine. Remarks. 
This species is characterized by its bulbous contour and constricted last whorl which justify its inclusion in the genus Joculator Hedley, 1909(Hedley 1909Marshall 1978). The Cerithiopsidae of the Indo-Pacific have been subject to numerous in-depth studies (Cecalupo and Perugia 2011, 2013, 2014a, b, 2016, 2017a, 2019a. Still, this species does not fit any of the known species. Among the most similar species in terms of shell shape and ornamentation, Joculator itiensis Cecalupo & Perugia, 2014 has one teleoconch whorl more and a different color pattern characterized by light brown first whorl and base, J. olivoideus Cecalupo & Perugia, 2018 can be distinguished by its clearly prosocline axial ribs and greyish tubercles, and J. sekensis Cecalupo & Perugia, 2018 has only two spiral cords and blunter axial ribs on the first teleoconch whorl, in addition to a blunter siphonal canal. There are several more species of small brown bulbous Joculator often distinguishable only by subtle character differences. Joculator priorai Cecalupo & Perugia, 2012 is corneous in color and has a pointed protoconch with one additional whorl; moreover, in our specimens the interspaces between the spiral cords are smaller. Joculator pupiformis Cecalupo & Perugia, 2012 has one protoconch and one teleoconch whorl more, the tubercles are oblong, and the base lacks a clearly visible fifth tuberculate spiral cord. Joculator fuscus Cecalupo & Perugia, 2012 has much broader interspaces between cords and a wide subquadrangular aperture which is, in contrast, quite small in our specimens. Joculator furvus Cecalupo & Perugia, 2012 has a neat abapical smooth cord on the protoconch, one teleoconch whorl less and a broader aperture. Joculator carpatinus Cecalupo & Perugia, 2012 has one protoconch whorl more, one teleoconch whorl less, a broader aperture and a fine abapical thread on the protoconch. Joculator caliginosus Cecalupo & Perugia, 2012 has one protoconch whorl more and one teleoconch whorl less, the basal fourth and fifth cords are only weakly tuberculate whereas they are neatly tuberculate in our specimens. Joculator coffeus and J. subglobosus, both Cecalupo & Perugia, 2013, have one clear abapical thread on the protoconch, one teleoconch whorl less, the shell has a more roundish shape and the lip does not reach anteriorly the siphonal canal, almost covering it, like in our specimens. The other representatives of Joculator include also other more elongated species that can be easily distinguished from our specimens. This species is superficially similar to the native Mediterranean Cerithiopsis ladae Prkić & Buzzurro, 2007, which, however, can be distinguished at once for not having the last protoconch whorl partially covered by the first teleoconch whorl and lacking the prong-like process of the anterior outer lip. Additionally, tubercles in C. ladae on the last whorl are more elongated, subrectangular, and the shell profile is less bulbous. Cerithiopsis greppii Buzzurro and Cecalupo, 2005, described from Turkey, has a rather oval profile, but not as bulbous as in our species; additionally, it has a paucispiral protoconch. Cerithiopsis micalii (Cecalupo and Villari, 1997), which also has a somewhat oval shell profile, can be quickly distinguished by its protoconch whose last two whorls bear strong axial ribs. Unfortunately, a revision of Red Sea Cerithiopsidae is lacking, but given that Joculator is a broadly distributed genus in the Indo-Pacific province, we consider J. 
problematicus another previously undescribed Indo-Pacific species recently introduced to the Mediterranean Sea. Remarks. The morphology of this species is unique among the native mollusks of the Mediterranean, which does not host any shallow water Elachisinidae. Therefore, we consider it a new non-indigenous species in the basin. The only Indo-Pacific Elachisina we are aware of is E. robertsoni Kay, 1979, which indeed shares the general characters of our species. However, it can be readily distinguished by the thicker and fewer spiral cords, less rounded whorls and sigmoid, rather than strongly prosocline, aperture profile. Elachisina sp. is more similar to the West-African E. tenuisculpta (Rolán and Gofas 2003), but the Israeli shells have more rounded whorls, a greater height/width ratio and smaller ratio between aperture and shell height. The closest match to our specimens is Iravadia elongata (Hornung & Mermod, 1928) which was described from material collected by Arturo Issel in the Red Sea off Massawa, Eritrea, at 30 m depth (Hornung and Mermod 1928). Compared to our material, however, the syntype of I. elongata is larger (height 3.9 mm vs. 2.8 mm in our largest shell) and has seven less convex whorls. Further, the apical part of its spire has a slightly concave profile and thus appears more tapered. According to Issel's description, the sculpture of I. elongata consists of spiral ridges (12 on the penultimate and 22 on the last whorl) as well as growth lines, although the latter are not indicated in the accompanying line drawing. This suggests that the axial component might be less evident in I. elongata than in our specimens, however, the poor preservation of the shell surface of the syntype of I. elongata did not allow a reliable comparison with our material. Slightly eroded shells very similar to our specimens have been collected from the Sudanese Red Sea ( Figure 13G, H), confirming that the material from Israel indeed represents an Indo-Pacific species rather than an undescribed Mediterranean taxon. Among Mediterranean iravadiids, our specimens superficially resemble only Ceratia proxima (Forbes and Hanley, 1850). This species, however, lacks axial sculpture. Interestingly, Hornung and Mermod (1928) also mention the presence of this latter species at Assab (Eritrea) and "île Saldadin" (Zeila, northern Somalia). While obviously based on a misidentification -C. proxima has an Eastern Atlantic-Mediterranean distribution (Bouchet and Warén 1993;Høisaeter 2009) -one might speculate that this record could be the result of a confusion of C. proxima with the Iravadia presented here. Remarks. This tiny gastropod defeated all our attempts to identify it. It consists of a protoconch and a teleoconch of ~ 1.5 whorls each. Sculpture is absent, except for two spiral ridges that run on the shoulder and on the base. A third ridge runs periumbilically ( Figure 14E). Broad umbilicus, roundish aperture. Our shell closely resembles the Vitrinella sp. 1 illustrated by Blatterer (2019: plate 127, fig. 12a-j) from the Dahab region in the northern Red Sea, which, however, apparently bears fine spiral threads in the umbilicus ( fig. 12e, and unpublished figures). SEM images of our shell show that its surface is taphonomically altered; additionally, Blatterer's specimens look slightly more mature, reaching 2 teleoconch whorls. The significance of these features should be re-assessed upon a satisfying revision of these tiny gastropods from the Indo-Pacific province. 
Another similar shell is illustrated by Janssen et al. (2011, plate 19, figs. 3a-b), which apparently has less conspicuous or absent spiral ridges as long as can be judged from the optical illustrations provided. It is worth mentioning that gastropods belonging to the family Clenchiellidae D.W. Taylor, 1966 share the small size, low spire, wide umbilicus and presence of strong spiral keels we observed in our specimen (Ponder et al. 2014); the latter, however, lacks the numerous finer spiral cords that characterize clenchiellids. Additionally, these gastropods occur in mangrove swamps or adjacent habitats in tropical estuaries, a kind of habitat that does not occur in Israel. The shell shape and sculpture (in particular the strong spiral keels) distinguish it at once from native Mediterranean species. The extreme similarity with the shell illustrated in Blatterer's book suggests that the species belongs to a Red Sea clade and is here considered a new non-indigenous species in the Mediterranean Sea. Hemiliostraca clandestina (Mifsud & Ovalis, 2019), comb. nov. Remarks. Sticteulima clandestina and S. athenamariae, both Mifsud & Ovalis, 2019, were described on specimens collected in Turkey (Mifsud and Ovalis 2019). However, both belong to species present in the Red Sea and were illustrated by Blatterer (2019) and Hemiliostraca athenamariae. This is the first record of H. clandestina in Israel, but the species has been recorded for Lebanon based on empty shells collected in 1999 (Crocetta et al. 2020). Consequently, it is likely present here since at least 1999, with a ~ 20 year time-lag in first detection as quantified also for other non-indigenous species in the Mediterranean Sea (Crooks 2005;Albano et al. 2018). This is also the first record of living individuals from the Mediterranean Sea. Despite the relatively large number of living individuals, we did not find any attached to an echinoderm host; this is consistent with the fact that some eulimids actively leave the host if disturbed (Warén 1984). Figure 17 New records. Israel • 5 spcms; Ashqelon; 31.6868°N, 34.5516°E; depth 12 m; 30 Apr. 2018; offshore rocky reef; suction sampler; HELM project (samples S12_1F, S12_1M, S12_3F); size: H 2.7 mm, W 1.0 (illustrated specimen) • 1 spcm; same collecting data as for preceding; depth 11 m; 31 Oct. 2018; HELM project (sample S58_2F) • 3 spcms; Ashqelon; 31.6891°N, 34.5257°E; depth 25 m; 2 May 2018; offshore rocky reef; suction sampler; HELM project (samples S16_1F, S16_2F) • 1 spcm; same collecting data as for preceding; depth 28 m; 31 Oct. 2018; HELM project (sample S59_1F). Melanella orientalis Agamennone, Micali & Siragusa, 2020 Remarks. This species can be distinguished from Mediterranean Melanella by its gently curved whorls, straight spire with fewer whorls and thinner shell than most species. It superficially resembles the Red Sea "Eulima" orthophyes Sturany, 1903 (type illustrated by Albano et al. (2017)), which can be distinguished because of its slightly bent apical whorls and the unusual pustulous sculpture of the protoconch. The species presented here is apparently already widespread in the Eastern Mediterranean (Agamennone et al. 2020a). We found only living individuals and no empty shells. Because of this, and the low likelihood that a so widespread species in shallow depths in the Remarks. The genus Parvioris Warén, 1981 was erected for a group of numerous conchologically very similar species of which many are still undescribed (Warén 1981). 
The species here reported is very similar in general shape and size to P. dilecta (Warén 1981: 146), especially the morphs illustrated here in Figure 18I-M. However, it has a multispiral protoconch of ~ 4.5 whorls ( Figure 18H), whereas P. dilecta has a paucispiral protoconch of ~ 1.5 whorls. The type of protoconch is considered to be related to the developmental mode, which was regarded a diagnostic character at the species level for most molluscan lineages (Hoagland and Robertson 1988;Bouchet 1989). Because Warén (1984) suggested that the number of protoconch whorls is rather constant within species in Eulimidae, we currently do not consider our material conspecific with P. dilecta, but only closely related (thus the "aff." notation). However, there is increasing evidence that poecilogony, the intraspecific variation in developmental mode, occurs in Caenogastropoda (McDonald et al. 2014), Neogastropoda (Russini et al. 2020), and Sacoglossa (Krug 1998;Ellingson and Krug 2006;Vendetti et al. 2012). Parvioris aff. dilecta can be easily distinguished from the native P. ibizenca because of a more arched apical part and because of the protoconch morphology: both have multispiral protoconchs, but P. ibizenca has shorter whorls and a distinct profile which inflates at the third whorl, in contrast with the more slender and regular profile of P. aff. dilecta. Our specimens are likely conspecific with those identified as Melanella sp. 1 by Blatterer (2019) from Dahab, Red Sea, suggesting that it is indeed a new Lessepsian species. An additional issue is whether the animal color is diagnostic at the species level like in other groups whose shells offer few diagnostic morphological characters, e.g., Mediterranean Granulina (Neogastropoda: Granulinidae) and Gibberula (Neogastropoda: Cystiscidae) (Gofas 1990(Gofas , 1992. Some of our live collected specimens show a light yellow-white color (e.g., Figure 18I-K) whereas others have a brownish animal (e.g., Figure 18K-M). The final attribution of our findings to a species requires a thorough revision of Parvioris, which is beyond the scope of this paper. The specimens reported as Parvioris sp. by Albano et al. (2020) from mesophotic reefs off northern Israel belong to this species. Sticteulima sp. Figure 19 New records. Israel • 1 sh; north of Atlit; 32.7820° N, 34.9466° E; depth 10 m; 21 Sep. 2016; sand; grab; HELM project (sample NG10_1F); size: H 1.4 mm, W 0.6 mm (illustrated shell, Figure 19A-F) • 1 spcm; Ashqelon; 31.6891°N, 34.5257°E; depth 28 m; 31 Oct. 2018; offshore rocky reef; suction sampler; HELM project (sample S59_1F). Remarks. We place this species in Sticteulima due to its small size, slender profile with high and rather flat whorls (Warén 1984). In contrast to the native S. jeffreysiana (Brusina, 1869) and the Lessepsian S. lentiginosa (A. Adams, 1861), it is colorless, also in live-collected specimens, and stouter. Further, this species does not match any of the known small-sized Mediterranean eulimids. It can be readily distinguished from Vitreolina curva (Monterosato, 1874) and Melanella levantina (Oliverio, Buzzurro & Villa, 1994) by the lack of the strongly arched apical whorls. This feature differentiates it at once also form other Red Sea small-sized eulimids (Blatterer 2019). Melanella petitiana (Brusina, 1869) is larger, has more numerous whorls (our Sticteulima has a fully thickened lip suggesting that it is an adult) and has a less prominent lip profile. Nanobalcis nana (Monterosato, 1878) (type illustrated by Appolloni et al. 
(2018)) has shorter whorls, especially the last one, which is also much broader than in this species. It can also be easily distinguished from Hemiliostraca athenamariae (Mifsud & Ovalis, 2019) by the lack of any color pattern and the more inflated lip profile with a deeper posterior sinus. Sticteulima sp. may be a new non-indigenous species in the Mediterranean Sea.

Remarks. This Vitreolina is extremely similar to the native V. philippi, but the animal is whitish with a yellowish digestive gland (Figure 16E, F), in contrast to the peculiar color pattern of typical V. philippi, which has a white background with red dots (Figure 16D). Vitreolina is known to be gonochorous (Warén 1984), but it is unclear whether this different color pattern, never reported from the Mediterranean, can be related to sex. We suspect that this could be another new Lessepsian species for the Mediterranean Sea, because we observed several Mediterranean-Red Sea species pairs that are morphologically extremely similar. If we are correct, the occurrence […]

Remarks. Conus fumigatus was first recorded from the Mediterranean Sea in Libya (Röckel 1986) but not recorded again for three decades until a recent report from Syria (Ammar 2018). This is the first finding in Israel, filling the distributional gap from the Suez Canal northward; only shells of juveniles have been found so far.

Family Murchisonellidae T.L. Casey, 1904

Henrya (?) sp.
Figure 21

New records. Israel

Remarks. We were unable to assign this species to any Mediterranean or Indo-Pacific species, despite its conspicuous combination of shell characters. Our single specimen has an elongated, pupoid shell with convex whorls, a narrow but deeply incised suture, and a heterostrophic protoconch of type B (diameter: 250 μm). The surface is glossy and smooth except for densely spaced, very fine growth lines. The latter are straight, slightly prosocline on the spire, becoming orthocline near the aperture. The aperture is drop-shaped with a simple, thin lip that is slightly reflected at the columella. An umbilical chink is present. The shell is translucid-white, ornamented with a single, broad, light brown spiral color band. The shell morphology is similar to species of the murchisonellid genus Henrya Bartsch, 1947. However, the three currently known species of that genus were described from the tropical West Atlantic (Florida, Bahamas, and Yucatan) (Bartsch 1947), and none of them has a brown color band. For these reasons, and given the lack of anatomical and molecular data and the fact that only a single specimen was available for study, we refrained from a definitive generic assignment. This species is potentially another non-indigenous one originating from the Indo-Pacific.

Family Pyramidellidae Gray, 1840

Odostomia cf. dalli (Hornung & Mermod, 1925)
Figure 22

New records. Israel

Remarks. The shell of this species is white and rather solid, with convex, unkeeled whorls and a deep, narrow suture. The columellar tooth is visible in frontal view; there are no lirae inside the aperture. The outer surface appears smooth at first sight but bears numerous very fine spiral lines. The protoconch is of type A2, tending to type B. In ethanol-preserved specimens, the soft body is yellowish-white, with the eyes well visible through the shell (Figure 22A). Odostomia cf.
dalli differs in its shell morphology from all known Mediterranean Odostomiinae, but bears close resemblance to the illustration of the type specimen of Odostomia dalli from Sarad Island ("Ile de Sarato"), Dahlak Archipelago, Eritrean Red Sea (Hornung and Mermod 1925). In contrast to our material, however, O. dalli was described as lacking both, spiral sculpture and a visible columellar fold, although a columellar tooth seems to be indicated in the line drawing accompanying the original description. A rigorous assessment of potential conspecificity between O. dalli and our material therefore awaits a thorough study of the type material of the former, but the close similarity suggests that this is a new non-indigenous species in the Mediterranean Sea. This interpretation is also supported by the lack of empty shells in death assemblages (see Discussion). Odostomia (s.l.) sp. 1 Figure 23 New records. Israel • 1 spcm; Haifa Bay; 32.8211°N, 35.0196°E; depth 11 m; 2 Aug. 2015; soft substrate; grab; NM project (sample HM27(c)); size: H 1.4 mm, W 0.7 mm Remarks. This species is characterized by a translucid-white, cylindrical shell with ~ 3 whorls, and an intorted protoconch of type C ( Figure 23I) whose columella is oriented at an angle of ~ 160° relative to the teleoconch axis (revealed by μCT-imaging, Figure 23H and additional scans available at https://doi.org/10.6084/m9.figshare.c.5215226). The growth lines are slightly prosocline on the spire while becoming almost orthocline on the body whorl; an extremely faint spiral microsculpture is present on the apical part of the whorls, but only visible in high-magnification SEM images ( Figure 23G). This species differs from Odostomia cf. dalli by its smaller size (height up to 1.4 mm), the more cylindrical shape, shallower suture, and the absence of a visible columellar tooth. Although this species, in terms of size and overall shape, somewhat resembles representatives of the fresh-and brackish water-dwelling family Hydrobiidae, the fact that numerous living specimens were found in a fully marine environment and its heterostrophic protoconch unambiguously identify it as member of the family Pyramidellidae. Odostomia sp. 1 does not resemble any known Mediterranean pyramidellid; considering the great number of confirmed introductions of Indo-Pacific microgastropods to the eastern Mediterranean Sea, we therefore suspect that also this taxon might be a Lessepsian species. Among Indo-Pacific Odostomiinae, O. bullula Gould, 1861(e.g., Johnson 1964Robba et al. 2004) is similar to our specimens, but differs by its more conical shape and larger size (height to 2 mm, width to 1 mm). Another similar species, O. decouxi Saurin, 1959, was suggested to be a junior synonym of O. bullula (Robba et al. 2004). Odostomia (s.l.) sp. 2 Figure 24 New records. Israel Bogi and Galil (2006)). Remarks. The first record of this species is based on five well-preserved shells found in a shell grit sample taken in 1995 on the beach of Yumurtalik, Adana, Turkey (Giunchi et al. 2001). In 2006, another beached shell was found at Neve Yam, northern Israel (Bogi and Galil 2006, re-illustrated in Figure 24 herein) and, according to these authors, the species was also found in Israel by J.J. van Aartsen. A specimen of Odostomia sp. 2, from the original lot from Yumurtalik, was recently figured by Giannuzzi-Savelli et al. (2014). Here, we report the first finding of a living individual of Odostomia sp. 
2 which was recovered from a sediment sample taken at the Soreq desalination plant, southern Israel. Since the first finding in Turkey 25 years ago, the identity of this most likely nonindigenous species has remained unresolved. It differs from all known Mediterranean Odostomiinae at first glance by the presence of two brown spiral bands. We are unaware of any Indo-Pacific pyramidellid resembling this taxon, and it may well represent an undescribed species. To aid the further study of this taxon and raise awareness of its presence and apparent spread in the Mediterranean, we here re-illustrate the wellpreserved shell from Neve Yam using light and scanning electron microscopy. Figure 25A, B New records. Israel Remarks. Oscilla virginiae is characterized by a small-sized, white, conical shell with a type A protoconch. The sculpture consists of thick, smooth spiral cords: the first and second whorl bear two cords; the upper cord is broadest and bifurcates on the third whorl, forming three cords on the last whorl, with the newly formed pair remaining positioned very close one to each other ( Figure 25A, B; Peñas et al., 2020). This species has just been described from the infralittoral of Jordan and also occurs in the Egyptian Red Sea (Peñas et al. 2020). It superficially resembles the Indo-Pacific O. appeliusi (Hornung & Mermod, 1925), and indeed, a juvenile shell from Dahab (Egypt) was fig. 17c, d) under this name. In contrast to O. virginiae, however, O. appeliusi bears spiral cords more similar in thickness which are spaced more equidistantly and closer to each other. Already on the second whorl, three cords are present, and the uppermost cord does not evidently bifurcate (Peñas et al. 2020). Lastly, the illustration of Hornung and Mermod (1925) suggests a greater number of spiral cords on the last whorl. To date, O. appeliusi has not been recorded from the Mediterranean Sea. Oscilla virginiae Peñas, Rolán & Sabelli, 2020 Within the Mediterranean, O. virginiae is superficially similar only to two other nonindigenous pyramidellids, Cingulina isseli (Tryon, 1886) and Miralda sp. (Figure 25C, D). The latter taxon has previously been reported under the name Oscilla jocosa Melvill, 1904, despite recent evidence by Peñas & Rolán (2017) that it is not conspecific with Melvill's (1904) type material. Oscilla virginiae differs from C. isseli by its broader, more conical shell, fewer whorls, smaller size (C. isseli reaches a height of ~ 3 mm), and the much less pronounced axial sculpture between the spiral cords. Compared to Miralda sp., O. virginiae differs by its smaller size (up to ~ 3 mm in Miralda sp.), the absence of beads on the two upper spiral cords ( Figure 25C Remarks. Parthenina cossmanni has an elongated-conical shell with flat-convex whorls and a protoconch of type C. The whorls of the spire have a subangular profile, whereas the body whorl in adult specimens is more convex and evenly rounded. The axial sculpture is made of strong orthocline ribs that become slightly flexuous on the body whorl in some specimens ( Figure 26A-E). The spiral sculpture on the spire consists of a single, thin, suprasutural cord; a second cord emerges on the penultimate whorl, and three cords are present on the last whorl. The columellar tooth is weak and deeply inset, and in some of the studied specimens hardly visible inside the aperture. The soft body of ethanol-preserved specimens is yellowish, with the eyes visible through the shell ( Figure 26G, H). The type material of P. 
cossmanni was collected from the Red Sea of Massawa (Eritrea) at a depth of 30 m (Hornung and Mermod 1924); the species was recently recorded from Dahab (Gulf of Aqaba, northern Egypt) by Blatterer (2019), and from Jordan by Peñas et al. (2020). Outside the Red Sea, it is known from Vietnam (Saurin 1959) and Thailand (Robba et al. 2004). Among native Mediterranean species, P. cossmanni superficially resembles Parthenina interstincta (J. Adams, 1797). The latter species, however, has only two spiral cords on the last whorl and a more developed columellar tooth. P. cossmanni is further similar to P. indistincta (Montagu 1808) (Figure 26I-L) which has a very weak, internal columellar fold (Warén 1991a: 96, fig. 29f ) and three (rarely four) spiral cords on the last whorl. Compared to P. cossmanni, however, the shell of P. indistincta is more elongated and has two spiral cords on the spire whorls. We suspect the two shells illustrated as P. indistincta in Öztürk et al. (2011: fig. 10A, B) might also be P. cossmanni, considering their broad shape and overall morphology. Öztürk et al.'s material was collected in 2009 from a mud bottom at 9 m depth in Mersin Bay (stn. 46, 36.7167°N, 34.8667°E), south-eastern Turkey; should our hypothesis be confirmed upon re-examination of these shells, this would suggest that P. cossmanni likely has a wider distribution in the southeastern Mediterranean Sea. Our finding of several living specimens on the Israeli shelf, together with the relative rarity of empty shells in the samples, suggests that this species might have established locally only rather recently. Figure 27 New records. Israel Remarks. This species is characterized by a straight, conical profile with flat whorls, separated by a deep, canaliculate suture; the abapical part of the whorl is angulated. The sculpture consists of straight axial ribs and a prominent suprasutural spiral cord; the base is smooth except for faint continuations of the axial ribs; an internal columellar fold is present and visible inside the aperture when slightly turning the shell to the left side. The protoconch is of type C and in the illustrated specimen it has a diameter of ~ 240 μm, which is slightly smaller than 270-290 μm stated by Peñas and Rolán (2017) for this species. Parthenina typica (Laseron, 1959) Parthenina typica (Laseron, 1959) was described from eastern Australia (Laseron 1959) and subsequently recorded from the Solomon Islands, Fiji and the Philippines at infralittoral to bathyal depths (Peñas and Rolán 2017). To our knowledge, it has not been reported from the Indian Ocean nor from the Red Sea, however, its absence could well represent an artifact of the limited knowledge of the micromollusk fauna of these regions. Among native species, the conchologically highly variable Parthenina interstincta (J. Adams, 1797) andP. monozona (Brusina, 1869) are most similar, however, they differ by having more rounded whorls and a greater number of axial ribs. 7917°N, 98.5500°E;depth 15 m;15 Feb. 1985;sand;A.J. Ferreira leg.;LACM 1985-14.2. Pyrgulina craticulata has been reported only from the Red Sea so far, however, a search of previously uncatalogued lots of pyramidellid shells in the LACM collection by one of us (PILF) yielded specimens from Madagascar, the Maldives and Thailand, confirming that this species has a much wider distribution in the Indian Ocean. Shells of P. 
craticulata seem indistinguishable from the illustration of Chrysallida tribulationis (Hedley, 1909), a taxon recorded from Australia and the western Japan Sea (Hedley 1909;Higo et al. 2001). In the light of a possible distribution of P. craticulata also in the West Pacific, we recommend an assessment of potential synonymy between C. tribulationis and P. craticulata. Here, we provide the first records of P. craticulata for the Mediterranean Sea. Several living specimens were found on hard substrates off southern and northern Israel, suggesting it is established in the region. In terms of shell size, shape and type of ornamentation, this species closely resembles the native Spiralinella incerta ( Figure 28J-N), but has pronounced spiral cords in the interspaces of the axial ribs ( Figure 28I vs. 28N) which enable a reliable segregation of these two species. In addition, the axial ribs are spaced more closely in S. incerta than in P. craticulata. Remarks. Until now, the first record of this species from the Mediterranean was considered to be by Öztürk and van Aartsen (2006), who reported on material obtained from shallow-water sediment samples collected along the Turkish Levantine (Viransehir, Mersin Bay) and Aegean coasts (Güllük Bay) in 1997 and 2000, respectively. However, already van der Linden and Eikenboom (1992) described and illustrated P. nana from the Levantine Sea (page 60, Figure 41), referring to it as Chrysallida spec. C in the lack of a species-level identification. Their material consisted of a single individual, likely an empty shell, but not specified by the authors, from Mersin (south-eastern Turkey) with unknown collecting date, housed in the collection of J. van der Linden (The Hague, The Netherlands). Although we were unable to examine this material, the excellent and detailed line drawing provided enabled an unambiguous assignment of Chrysallida spec. C to P. nana; thus, van der Linden and Eikenboom (1992) should be regarded the first Mediterranean record. Today, the known Mediterranean distribution of P. nana includes Turkey, Lebanon, and Israel (Bogi and Galil 2006;Giannuzzi-Savelli et al. 2014). To our knowledge, this is the first record of living individuals of P. nana from Israel; here, the species occurs along both the southern (Ashqelon) and northern coasts (west of Rosh HaNikra Islands) on rocky bottoms at 12-28 m depth. Figure 29 New records. Israel Saurin, 1959). Turbonilla funiculata de Folin, 1868 Remarks. Shells of Turbonilla funiculata are polymorphic with respect to their shape (Peñas and Rolán 2010 and our own observations) but the species can be readily distinguished from all pyramidellids in the Mediterranean -the most similar being the non-indigenous Turbonilla edgarii (Melvill, 1896), Turbonilla flaianoi Mazziotti, Agamennone, Tisselli, 2006 andPyrgulina fischeri Hornung &Mermod, 1925 -by the presence of a very marked subsutural constriction running along the whorls. This constriction separates the pronounced, almost orthocline axial ribs into a larger lower and a narrow upper, crown-like, portion. The ribs extend adapically beyond the suture of the preceding whorl, giving the transition zone between subsequent whorls a wavy appearance. The interspaces of the lower portion of the ribs bear several thin spiral lines, while those between the narrow upper parts of the ribs are smooth. The protoconch is helicoid and of type A. 
Turbonilla funiculata has been previously reported from Fiji, Hong Kong, Indonesia, New Caledonia, Thailand, the Solomon Islands and Vietnam from shore to 396 m depth (Robba et al. 2004;Peñas and Rolán 2010). As Robba et al. (2004) already pointed out, the shell figured as Pyrgiscus microscopica (Laseron, 1959) by Okutani (2000: 712, fig. 68) is most likely T. funiculata, confirming that the species also occurs in Japan. This interpretation is re-affirmed by another illustration of the very same Japanese specimen in Mazziotti et al. (2005: 81, fig. 1m, n), showing the subsutural constriction characteristic of T. funiculata. Shells of T. funiculata were found by one of us (PILF) also among hitherto unidentified lots of shells from Pakistan and Sri Lanka housed in the LACM collection, demonstrating that this species also lives in the Indian Ocean. Here, we report the first records of T. funiculata for the Mediterranean, where several living specimens were collected in northern Israel on hard substrates at 12 m depth. Family Cylichnidae H. Adams & A. Adams, 1854 Cylichna collyra Melvill, 1906 Remarks. We record here for the first time in the Mediterranean 13 living individuals of Cylichna collyra, a cephalaspidean originally described from the Gulf of Oman (Melvill 1906b). Cylichna collyra can be distinguished from the native Mediterranean C. cylindracea (Pennant, 1777) by its more elongated and slender shell, the more tapering apical part, the color pattern characterized by fine brown spiral lines apically and abapically, and the smaller size (C. cylindracea commonly reaches 1 cm in height whereas C. collyra attains approximately half that size). Cylichna villersii (Audouin, 1826), another non-indigenous species of Red Sea origin recorded from the Mediterranean coast of Israel (Bogi and Galil 2013a), is smaller (less than 2 mm), less slender, has a more rounded base and stronger growth marks (not visible in C. collyra), and bears two brown bands apically and abapically instead of the fine brown lines. Cylichna biplicata (A. Adams in Sowerby, 1850), a species occurring on the continental platform in the Indo-West Pacific, shares with our specimens the cylindrical shape and the color pattern of reddish-brown spiral bands apically and abapically (Valdés 2008), but is larger, more elongated anteriorly, with a stronger columellar tooth, and the colored spiral bands become a compact larger band apically. Cylichna collyra has not been recorded from the Red Sea yet (Dekker and Orlin 2000). Remarks. This species was first recorded in the Mediterranean Sea in 1974 with the finding of few empty shells in the Bardawil Lagoons in Egypt (Mienis 1976). The species has since been recorded also in Greece , Turkey (Çinar et al. 2011), Cyprus (Katsanevakis et al. 2009), and Lebanon (Crocetta et al. 2020). A single record is available from Israel based on shells collected in 2004 off Palmachim (Bogi and Galil 2006). We here report living individuals from Israel for the first time. This is also the first finding of living individuals in the Mediterranean Sea. Remarks. Atys angustatus was first recorded in the Mediterranean Sea in 1974, based on specimens collected at Haifa, Israel (Aartsen and Goud 2006). It has been reported from Mersin, Turkey, since 1986 (Aartsen and Goud 2006) and multiple records from Israel followed from several locations along its coast (Micali et al. 2016). To the best of our knowledge, this is the first record from Greece. Remarks. 
This species was first recorded in Israel in 1960 from Bat Yam with living individuals (Barash and Danin 1973). Further living individuals were recorded in the 1960s and early 1970s from Israel and Egypt (Bardawil Lagoons) (Barash and Danin 1971, 1973, 1977). Since then, however, no further living individuals were found, questioning the persistence of its populations in the Mediterranean. We report here the recent finding of two living individuals, suggesting that the species indeed still occurs in Israel.

Lioberus ligneus (Reeve, 1858)
Figure

Remarks. We here report Lioberus ligneus for the first time from Israel and Cyprus. The Israeli live-collected specimen comes from a patch of Galaxaura rugosa and Cystoseira sp., whereas the two empty but fresh shells from Cyprus were found attached vertically to Cystoseira shoots. These findings suggest that this species indeed prefers vegetated habitats in the shallow subtidal. This species has been previously reported from Lebanon based on shells collected in 1999-2000 (Crocetta et al. 2013), suggesting that it has likely occurred undetected throughout the Levantine Basin for long. The distinction of L. ligneus from the native Mediterranean Lioberus agglutinans (Cantraine, 1835) is not straightforward because both species share the elongated appearance, sculpture limited to concentric striae, and brown color. Moreover, they are morphologically variable. A fairly consistent character of L. ligneus in the samples we inspected from the Red Sea is the darker internal color, often shading into violet. The group deserves a taxonomic revision to unambiguously distinguish its species.

Musculus costulatus (Risso, 1826): Israel • 9 spcms, 4 vv; Ashqelon; 31.6868°N, 34.5516°E; depth 12 m; 30 Apr. 2018; offshore rocky reef; suction sampler; HELM project (samples S12_1F, S12_1M, S12_1L, S12_2F, S12_2M) • 3 spcms; Ashqelon; 31.6891°N, 34.5257°E; depth 25 m; 2 May 2018; offshore rocky reef; suction sampler; HELM project (samples S16_1F, S16_2F) • 1 spcm; same collecting data as for preceding; depth 28 m; 31 Oct. 2018; HELM project (sample S59_1M) • 5 spcms; west of Rosh HaNikra Islands; 33.0704°N, 35.0926°E; depth 12 m; 1 May 2018; rocky substrate; suction sampler; HELM project (samples S14_1L, S14_3F, S14_3L, S14_4F) • 56 spcms; same collecting data as for preceding; 29 Oct. 2018; HELM project (samples S52_1F, S52_1M, S52_1L, S52_2F, S52_2M, S52_2L, S52_3F, S52_3M, S52_3L) • 1 v; west of Rosh HaNikra Islands; 33.0725°N, 35.0923°E; depth 20 m; 1 May 2018; rocky substrate; suction sampler; HELM project (sample S13_3F) • 20 spcms; same collecting data as for preceding; depth 19 m; 29 Oct. 2018; HELM project (sample S53_2L).

Remarks. We report numerous living individuals of Musculus aff. viridulus from the Mediterranean Israeli coastline. It is the first record of this Indo-Pacific species in the Mediterranean Sea. This species can be readily distinguished from the native M. costulatus because the latter has a more oval outline and a much smaller number of riblets at the same overall shell size (Figure 36). These riblets are also much larger compared to M. aff. viridulus. The Atlanto-Mediterranean M. discors (Linnaeus, 1767) has a similarly fine posterior sculpture but at the same size is much higher and at all sizes bears much more prominent riblets anteriorly. The taxonomy of Musculus in the Indo-Pacific province is not settled and the available images of Red Sea M.
viridulus (Oliver 1992;Zuschin and Oliver 2003) show a more oval species, hence our dubitative identification. Still, we are confident that this is a Red Sea species because we examined indistinguishable specimens from the northern Red Sea ( Figure 36F-H). Blatterer (2019) illustrated these and other similar specimens (plate 10, fig. 18a, b) as Gregariella ehrenbergi (Issel, 1869). We recorded a morphologically distinct species as G. ehrenbergi from a buoy stranded on the Israeli coastline (Steger et al. 2018;Ivkić et al. 2019). Gregariella ehrenbergi type material is corroded by Byne's disease and the original description likely refers to a juvenile specimen; the identity of this species deserves further scrutiny. Remarks. The discrimination and identification of the species of Isognomon Lightfoot, 1786 is difficult due to their morphological plasticity that is related to their cryptic way of life. Still, the specimens recently reported from Astypalaia, in the Eastern Aegean Sea (Lipej et al. 2017;Angelidis and Polyzoulis 2018), show clear morphological differences from I. legumen (Gmelin, 1791), the established non-indigenous species in the Mediterranean Sea: the main features of the sculpture are radial rather than concentric and the shape of the shell can be very elongated rather than subquadrate. We here report a juvenile living individual and an empty shell from Greece and Cyprus (for which this is a new record), respectively, which are not distinguishable from those previously reported from Astypalaia. Juvenile shells (up to a size of ~ 5 mm) bear a sculpture of radial ribs adorned by tubular spines. We do not comment upon the choice of the australica name for this taxonomic entity by previous authors, lamenting the lack of a thorough revision of this genus in the Indo-Pacific province. Remarks. Pegophysema philippiana was first found in the Mediterranean Sea in 2018 as a single valve from south of Tel Aviv (Mienis 2019). The specimen here reported is much smaller, and we acknowledge that the identification of juvenile individuals of this genus can only be tentative. We do not consider it conspecific to native species such as Loripinus fragilis (Philippi, 1836) because this latter species is much more inflated at this size, nor Loripes orbiculatus Poli, 1795 which has a different valve profile. If the identification is confirmed, this would be the first live collected specimen of P. philippiana in the Mediterranean Sea. New records. Israel • 1 spcm;Palmachim;31.9292°N, 34.6405°E;depth 36.9 m;29 May 2004;soft substrate;grab;NM project (station 19); size: L 3.2 mm, H 3.2 mm. Remarks. We report a juvenile but living individual of Chavania erythraea, a lucinid occurring in the Red Sea, the Persian (Arabian) Gulf and the Arabian Sea (Glover and Taylor 2001). This is the first record of this species from the Mediterranean Sea. Adults develop a commarginal lamellar sculpture. Zorina, 1978 (now in Rugalucina too) from the Mediterranean coast of Israel by Steger et al. (2018), who also illustrated it. A recent molecular phylogeny showed that R. vietnamica is distinct from R. angela and that the non-indigenous species in the Mediterranean belongs to this latter species, which occurs in the Red Sea and northwest Indian Ocean (Taylor and Glover 2019). Remarks. This non-indigenous species has first been recorded in the Mediterranean Sea by Mifsud and Ovalis (2012) as Nudiscintilla cf. 
glabra Lützen and Nielsen, 2005, based on five living specimens collected at Yumurtalik, Adana (Turkey) in shallow water. Their tentative identification was primarily guided by the external morphology of the living animals, which had a smooth mantle surface. This feature is characteristic for the monotypic genus Nudiscintilla (hence the genus name), but unusual among scintilloid galeommatids in general. Although no observations on living individuals could be made by us, the shell morphology of our material well matches that of the specimen illustrated in Mifsud and Ovalis (2012: fig. 1), suggesting conspecificity. Our findings represent the first records of this species from Israel. However, the dentition of the right valve as seen in SEM images ( Figure 40C-E) clearly differs from that described by Lützen and Nielsen (2005) for Nudiscintilla: the latter has a single cardinal tooth in each valve and no lateral teeth. However, the studied right valve -the hinge of the single live-collected specimen was not examined to avoid damage -bears what appears to be two cardinal teeth ( Figure 40C) that are fused at their base ( Figure 40D, E), as well as a ridge posterior to the internal ligament which most likely is a lateral tooth. This ridge seems to correspond to the left of the two swellings indicated by a pair of arrows on the right hand side of Mifsud and Ovalis (2012: fig. 1e), while the right swelling might correspond to a narrow ridge visible also on the dorsal margin of our valve. Mifsud and Ovalis (2012) interpreted these features as aberrant shell growth, however, the presence of such ridges also in our right valve ( Figure 40D) speaks against this hypothesis. Furthermore, their living individuals had a small tentacle situated above the widely gaping anterior inhalant region (cf. Mifsud and Ovalis (2012: 8, fig. 2a), however, the illustration of N. glabra in Lützen and Nielsen (2005: 292, fig. 38a) shows a small tentacle in the posterior exhalant region of the reflected mantle. In the light of the poorly developed taxonomy and great species diversity of galeommatid bivalves in the Indo-Pacific, further observations on living specimens, thorough comparisons with the type material from Thailand and molecular analyses are required to definitely clarify the relationship of Mediterranean specimens with N. glabra. Figure 41 New records. Israel • 1 spcm; Haifa; depth 15 m; May 1999; biogenic sediment; B.S. Galil leg.; size: L 6.9 mm, H 5.2 mm. Scintilla cf. violescens Kuroda & Taki, 1961 Remarks. The single shell found is trapezoid-oval (L:H ratio = 1.33), slightly higher posteriorly than anteriorly, translucid-white, and has a glossy external surface. The valves are narrowly gaping, more widely in their posterior part. The umbones are prosogyrate, pointed and submedian. The commarginal sculpture consists of fine growth lines that are slightly wavy posteriorly, as well as irregular growth marks. Flat radial ribs are present in the posterior part of the shell; they are visible, upon close examination, also on the inside of the valves in the form of shallow markings. The inner surface, particularly of the right valve, is spotted by blister-like markings ( Figure 41C, D). The hinge of the right valve bears a single cardinal tooth, bent towards the anterior, and an elongated posterior lateral tooth. The left valve has two cardinals, but the anterior one is broken off ( Figure 41G); a posterior lateral is present. 
Lacking further material and observations on living individuals, which are of great diagnostic importance in galeommatids, we refrained from assigning a definitive specific name to our shell. However, the overall shape, hinge dentition and the presence of radial sculpture match well the descriptions of Scintilla violescens Kuroda & Taki, 1961 (Arakawa 1961; Kuroda and Taki 1961), a species recorded from the intertidal and shallow subtidal of Thailand and Japan (Huber 2015). In contrast to our shell, however, Kuroda and Taki (1961) mention the presence of radial sculpture on the entire surface of the valves of their type material. While our Mediterranean shell is less elongated than the specimens of S. violescens illustrated by Okutani (2000), Lützen and Nielsen (2005), and Huber (2015), it is very similar in outline to the shell shown by Arakawa (1961: fig. 5B) (which was identified by T. Kuroda and I. Taki). Scintilla violescens appears to be variable also with respect to shell size and coloration: specimens from Thailand (maximum length = 10.5 mm, n = 12 spcms) all were considerably smaller than the > 15 mm-long Japanese type (Kuroda and Taki 1961) and had a whitish instead of pale violet color (Lützen and Nielsen 2005), like the Israeli shell. Considering this great plasticity in shell characters, and the differences in living animal morphology observed for Thai vs. Japanese specimens of S. violescens by Lützen and Nielsen (2005), the question arises whether more than one biological entity might be involved. Irrespective of its unresolved specific affinity, the shell presented here clearly differs from all native Mediterranean galeommatids and thus cannot be confused with them; it can be easily distinguished from the non-indigenous Nudiscintilla cf. glabra (see above) by its less elongated shell, smaller L:H ratio and, most notably, the presence of radial sculpture on the valves. Apart from the present shell, which was found in 1999 in Haifa Bay, we know of no other material. Remarks. Ervilia scaliola was first recorded from Turkey based on material collected in 2013 by Zenetos and Ovalis (2014), who correctly described the complex taxonomic status of this genus in the Indo-Pacific. We here record the species from Israel for the first time. The complete shell from Ashqelon (Figure 42) is very fresh, and thus probably originates from an extant population. Remarks. We here report the finding of three beached valves of Iacra seychellarum in Kos, Greece. Iacra seychellarum can be readily distinguished from any Mediterranean semelid by its thick valves, large chondrophore and different sculpture in three zones: anterior with fine incised concentric lirations, median to posterior slope with fine incised oblique lines becoming strongly divaricate over the posterior slope, posterior area with closely spaced concentric incised lirations (Oliver 1992; Zuschin and Oliver 2003). It has been recorded from the northern Red Sea by Blatterer (2019) but has a broader distribution in the Indian Ocean (Oliver 1992). Because, to our knowledge, the finding has not been followed by others, we suggest that living individuals are needed to confirm the introduction and establishment of this species in the Mediterranean Sea. Figure 44 New records. Israel Remarks. We were unable to identify this peculiar bivalve beyond family level, despite a thorough search in the literature on Mediterranean and Indo-Pacific mollusks.
The shell of the single specimen found is roundly subtrigonal, fragile, with a rounded, steeply sloping anterior and a subacute to subtruncate posterior part; the ventral margin is slightly concave, the umbo submedian. The outer surface is smooth, glossy, and only sculptured by very fine growth lines. Although the small size and outline are reminiscent of certain galeommatoid genera such as Bornia Philippi, 1836, which is also represented in the Mediterranean Sea, the presence of both an external and an internal ligament, the latter situated on a well-developed resilifer, is typical for the family Semelidae (Oliver 1992; Beesley and Ross 1998; Huber 2010). The hinge of both valves bears two cardinals; the posterior cardinal of the left valve is becoming obsolete by encroachment of the internal ligament portion, which also extends vertically beyond the hinge line. Two well-developed laterals are present in the right valve, while in the left valve only a weak tooth-like ridge is present anteriorly, formed by the dorsal shell margin. Due to the extremely smooth and glossy interior of the valves, which renders even the adductor muscle scars hardly visible, it remains unclear whether a deep pallial sinus, another feature typical of semelids, is present in the studied specimen. Semelidae sp. Lonoa katoi Habe, 1976, a semelid from Japan, shares with our species the small size and irregular outline with an often concave ventral margin (related to its attached lifestyle); however, the outer shell surface of L. katoi bears rough lamellae and fine radial threads (Habe 1976; Okutani 2017), while that of the Israeli specimen is almost smooth. The most similar confamilial species from the Red Sea probably is Abra aegyptiaca Oliver and Zuschin, 2000; however, it differs from the specimen described here in shell shape, sculpture, the prosogyrate umbo, and features of the hinge such as the shape of the anterior lateral tooth of the right valve. Similar-sized juveniles of Mediterranean Abra spp., including A. alba, the most common species on the shallow Israeli shelf, have a hinge morphology comparable to our specimen, but differ in their outline and by having more protruding umbos (Scaperrotta et al. 2013). Abra tenuis is most similar in shape, and a teratological specimen might approach the outline of our shell; however, such a specimen would still differ from Semelidae sp. by the presence of commarginal lines on the early dissoconch (Scaperrotta et al. 2013; Oliver et al. 2020). Until the finding of further specimens, it remains open whether the present individual is a juvenile or an adult of a small-sized species. Considering that only a single specimen was found so far, the lack of known native Mediterranean species with a similar morphology, and the geographical proximity of the Israeli coast to the Suez Canal, we suspect that this species might be another Indo-Pacific taxon introduced to the southeastern Mediterranean. Remarks. Clementia papyracea has been recorded in the Mediterranean Sea since 1937, but findings of living individuals are very scarce and limited to samples collected in 1968 in El Arish, Egypt (Barash and Danin 1973), in 1975 in Haifa, Israel (Barash and Danin 1977), and in 2012 in Ashqelon, Israel (Crocetta et al. 2016). We here report a further living juvenile individual. Corbula gibba (…) Remarks.
Corbula erythraeensis is widespread in the northern Red Sea where it has been recorded from the gulfs of Suez (MacAndrew 1870;Oliver 1992) and Aqaba at Eilat (Edelman-Furstenberg and Faershtein 2010), as well as the northern Bay of Safaga (Egypt) (Zuschin and Oliver 2003); outside the Red Sea, its distribution ranges eastward to Pakistan (Huber 2010). Here we report the first findings of this species from the Mediterranean. Corbula erythraeensis was found sympatrically with the common native Mediterranean Corbula gibba but was always present in very low numbers. No empty shells have been found so far. While being similar in appearance to the morphologically variable C. gibba, C. erythraeensis has a convex anterior dorsal margin of the right valve (usually concave in C. gibba, particularly in smaller specimens), a more inflated umbonal region, a regular concentric sculpture on the left valve (Oliver 1992), and its color always is whitish-yellowish (C. gibba frequently has a rosy pattern). Juvenile individuals are more wedge-shaped than those of C. gibba. Massive reporting follows intensive fieldwork and broad cooperation We have covered 52 species, reporting the finding of 23 new Lessepsian mollusks, nine additional species that, upon final identification, may turn out to be further new Lessepsian species, nine new records for Eastern Mediterranean countries and new data for eleven already recognized non-indigenous species. Such a massive report is derived from three characteristics of this study which translate into recommendations for an effective approach to non-indigenous species detection and monitoring. First, the intensive sampling effort and the effective sampling techniques of the "Historical ecology of Lessepsian migration" (HELM) project and of the IOLR monitoring programs. The HELM project in particular targeted also hard substrates, poorly explored at this taxonomic resolution in the Eastern Mediterranean, by suction sampling. This technique has repeatedly proved to be a very effective method on compact (e.g., coral rubble, pebbles (Bouchet et al. 2002;Linnane et al. 2003;Ringvold et al. 2015;Evans et al. 2018)), seagrass (Bonfitto et al. 1998;Albano and Sabelli 2012;Albano and Stockinger 2019) or hard substrates (Templado et al. 2010) and has enabled here the collection of vast amounts of living micromollusks and their shells. Indeed, 26 out of the 52 (50%) species treated here and 17 out of 32 (53%) new and potentially new NIS came from suction samples on hard substrates. The IOLR surveys covered soft substrates along the whole Israeli Mediterranean coastline with a dense station network and multi-seasonal sampling that led to the detection of 16 out of 52 (31%) treated species, notwithstanding several findings had already been published (e.g., Galil 2013b, Aartsen et al. 2015;Lubinevsky et al. 2018). Second, the use of fine mesh sizes. The HELM samples were sieved with a 0.5 mm mesh and those of the IOLR monitoring programs were sieved with either a 0.5 mm or even a 250 μm mesh. Such small sizes enabled retaining the large majority of invertebrates, including mollusks, even those with very small and elongated shells, such as many Pyramidellidae. However, this approach requires an enormous effort when picking and sorting the samples and the availability of high-level taxonomic expertise, since most small-sized species belong to taxonomically challenging groups or represent juvenile individuals. Third, and importantly, cooperation among institutions and individuals. 
Although new records of non-indigenous species are often scattered across short papers in the literature and may become difficult to trace in the long term, recent efforts have demonstrated the value of cooperation to build up large datasets (e.g., Katsanevakis et al. 2020). It is also important to highlight the role that citizen scientists had in the detection of the new non-indigenous species treated here by contributing to sample sorting, species identification and their taxonomic study. The challenge of recognizing and identifying tropical non-indigenous species Taxonomic uncertainty is recognized as a major impediment to the reliable inventorying of non-indigenous species (McGeoch et al. 2012; Marchini et al. 2015; Katsanevakis and Moustakas 2018). Species whose identification is uncertain or whose taxonomic status is unresolved were suggested to be excluded from inventories. Taxonomic uncertainty may also imply an uncertain non-indigenous status: a species morphologically distinct from the native species pool can either be a yet undescribed native species or a newly introduced species (which may be undescribed too). If the new species belongs to a clade not occurring in the sampled range, then the attribution of the non-indigenous status is well supported. Still, only the finding of clearly conspecific individuals from the source pool would provide final evidence of the non-indigenous status. We exemplified these cases with two new taxa described here: Coriophora lessepsiana Albano, Bakker & Sabelli, sp. nov. clearly belongs to a tropical clade of Triphoridae, and the availability of material from the Red Sea enabled the unambiguous attribution of the species to the Red Sea pool (hence the species name lessepsiana). Joculator problematicus Albano & Steger, sp. nov. belongs to a genus absent from the Mediterranean Sea. Although multiple similar species have been recorded in the Indo-Pacific province, we have not been able to find conspecific Red Sea material (hence the species name problematicus, to highlight the uncertainty in attributing the non-indigenous status when solid taxonomic and faunistic knowledge is lacking). A more complex case is represented by species that have few diagnostic characters, hampering the unequivocal attribution to a tropical clade (e.g., within Eulimidae and Pyramidellidae). We here propose to exploit the properties of death assemblages to deliver a solid hypothesis of non-indigenous status. Death assemblages, the taxonomically identifiable, dead or discarded organic remains encountered in a landscape or seabed, such as molluscan shells, accumulate species richness over time (Kidwell 2013). This property implies that death assemblages are good archives of local species diversity and that species which have long occurred in a study site (like native ones) are likely to be found in a death assemblage even if they are too rare to be regularly detected in living censuses. Indeed, the inclusion of empty shells in field surveys increased the estimations of occupancy and detectability of native land snails (Albano et al. 2015). In contrast, a species which has recently established a population in a new area would be expected to be poorly detectable in the death assemblage because not enough time has elapsed to contribute a significant number of skeletal parts to it. Consequently, newly reported non-indigenous species found alive but not, or very rarely, in the death assemblage may represent newly introduced species; a minimal numerical sketch of this screening criterion is given below.
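The following is a purely illustrative sketch of the live/dead screening criterion just described; it is not part of the original study, and all species names, counts, and the rarity threshold are hypothetical assumptions chosen only for demonstration.

```python
# Hypothetical illustration of the live/dead-assemblage screening criterion.
# Species names, counts, and the 1% rarity threshold are assumptions for
# demonstration only; they are not data from the study.

from dataclasses import dataclass

@dataclass
class SpeciesRecord:
    name: str
    live_count: int   # individuals collected alive
    dead_count: int   # empty shells in the death assemblage

def flag_possible_new_introductions(records, dead_total, rarity_threshold=0.01):
    """Flag species found alive but absent or very rare among empty shells.

    A species contributing less than `rarity_threshold` of all shells in the
    death assemblage, while being present in the living assemblage, is a
    candidate recent introduction under the criterion described in the text.
    """
    flagged = []
    for r in records:
        if r.live_count == 0:
            continue  # criterion only applies to species found alive
        dead_share = r.dead_count / dead_total if dead_total else 0.0
        if dead_share < rarity_threshold:
            flagged.append((r.name, r.live_count, r.dead_count, dead_share))
    return flagged

if __name__ == "__main__":
    # Entirely hypothetical counts from a paired quantitative sample.
    sample = [
        SpeciesRecord("Species A (long-established)", live_count=12, dead_count=450),
        SpeciesRecord("Species B (candidate newcomer)", live_count=3, dead_count=1),
        SpeciesRecord("Species C (dead shells only)", live_count=0, dead_count=80),
    ]
    total_dead = sum(r.dead_count for r in sample)
    for name, live, dead, share in flag_possible_new_introductions(sample, total_dead):
        print(f"{name}: {live} live, {dead} dead ({share:.1%} of shells) -> possible recent introduction")
```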
This approach can be applied when living and death assemblages are sampled simultaneously with quantitative methods. The main limitation of this approach is that it is applicable only to organisms with hard skeletal parts like foraminiferans, ostracods and fishes (e.g., Agiadi and Albano 2020). With biological invasions constituting a major element of global change, there is understandable concern about the accuracy of non-indigenous species inventories (Katsanevakis and Moustakas 2018). However, the risk of underestimating the magnitude of biological invasions must also be considered. In this respect, the Lessepsian invasion is a special case. The salinity reduction of the Bitter Lakes (Galil 2006), the containment by dams of the annual Nile flood which used to modulate salinity off its delta (Rilov and Galil 2009), and the multiple enlargements of the Suez Canal have certainly enhanced connectivity between the Mediterranean Sea and the Indo-Pacific province during the 20th century. Additionally, in the most recent decades, rising seawater temperatures in the Mediterranean Sea (Ozer et al. 2017) make it increasingly suitable for the establishment of inherently thermophilic Red Sea species. These factors combined make it very likely that an increasing number of Red Sea species manages to cross the Canal and settle in the Mediterranean Sea. The proper understanding of this phenomenon and of its drivers requires a timely and detailed census of assemblages as well as a broad toolkit to overcome the limitations of the taxonomic challenges associated with tropical species. Authors' contributions PGA and JS conceived the study. PGA, JS, CB, MB, TGH, MFH, HL, MM, and MS contributed to fieldwork, sample sorting and data acquisition. PGA, JS, PAJB, CB, PILF, and BS identified the specimens and contributed to the taxonomic discussion. PGA, JS, and TGH contributed to the discussion. PGA, JS, PAJB, and MA prepared the figures. PGA and JS wrote the first draft of the manuscript, which then received contributions by all co-authors. Funding This research has been conducted in the context of the "Historical ecology of Lessepsian migration" project funded by the Austrian Science Fund (FWF) P28983-B29 (PI: P.G. Albano). Sampling in Crete was supported by the "Kurzfristige wissenschaftliche Auslandsstipendien" program of the University of Vienna to M. Stockinger, the non-profit organization Mare Mundi, and the diving school Dive2gether. Sampling in northeastern Cyprus was conducted in the framework of the project "Classification of coastal habitats of Northern Cyprus according to EUNIS protocol" (PI: F. Huseynoglu). The Faculty of Earth Sciences of the University of Vienna funded a citizen science project conducted in October 2019. MB was supported by an Ernst Mach fellowship of the OeAD (Österreichischer Austauschdienst).
v3-fos-license
2018-02-27T14:02:24.852Z
2018-02-27T00:00:00.000
3513088
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fchem.2018.00008/pdf", "pdf_hash": "b3dd43eb067dd1a34d326bd1f2ffe4ae2aa7b7d1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46374", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "b3dd43eb067dd1a34d326bd1f2ffe4ae2aa7b7d1", "year": 2018 }
pes2o/s2orc
Surprising Conformers of the Biologically Important A·T DNA Base Pairs: QM/QTAIM Proofs For the first time novel high-energy conformers–A·T(wWC) (5.36), A·T(wrWC) (5.97), A·T(wH) (5.78), and A·T(wrH) (ΔG = 5.82 kcal·mol−1) (See Graphical Abstract) were revealed for each of the four biologically important A·T DNA base pairs – Watson-Crick A·T(WC), reverse Watson-Crick A·T(rWC), Hoogsteen A·T(H) and reverse Hoogsteen A·T(rH) at the MP2/aug-cc-pVDZ//B3LYP/6-311++G(d,p) level of quantum-mechanical theory in the continuum with ε = 4 under normal conditions. Each of these conformers possesses substantially non-planar wobble (w) structure and is stabilized by the participation of the two anti-parallel N6H/N6H′…O4/O2 and N3H…N6 H-bonds, involving the pyramidalized amino group of the A DNA base as an acceptor and a donor of the H-bonding. The transition states – TSA·T(WC)↔A·T(wWC), TSA·T(rWC)↔A·T(wrWC), TSA·T(H)↔A·T(wH), and TSA·T(rH)↔A·T(wrH), controlling the dipole-active transformations of the conformers from the main plane-symmetric state into the high-energy, significantly non-planar state and vice versa, were localized. They also possess wobble structures similarly to the high-energy conformers and are stabilized by the participation of the N6H/N6H′…O4/O2 and N3H…N6 H-bonds. Discovered conformers of the A·T DNA base pairs are dynamically stable short-lived structures [lifetime τ = (1.4–3.9) ps]. Their possible biological significance and future perspectives have been briefly discussed. INTRODUCTION Investigation of the dynamics of the isolated DNA base pairs by both the experimental and especially theoretical methods is urgent biophysical task of exceptional importance (Keepers et al., 1982;Pechenaya and Volkov, 1984;Volkov, 1995;Auffinger and Westhof, 1999). At this, the researchers are convinced that exactly the intrinsic conformational dynamics of the DNA base pairs largely determines the functionally important dynamical behavior of DNA and this approach has no reasonable alternatives. Spontaneous thermal fluctuations or breathing of DNA enables the opening of the DNA base pairs, making reactive their chemical groups, that are normally hidden inside the DNA double helix, available for hydrogen exchange involving imino and amino groups, chemical modification (e.g., by formaldehyde, that is a toxic, mutagenic and carcinogenic compound leading to fatal consequences the open state of the DNA base pairs is and whether there is a barrier on the potential energy surface for providing its existence (Lavery, 1994;Stofer et al., 1999;Yang et al., 2015). It was also demonstrated by NMR experiment (Nikolova et al., 2011(Nikolova et al., , 2013 a Hoogsteen breathing consisting in the flipping of the Watson-Crick DNA base pair from the usual anti-conformation to the less favorable syn-conformation with probability ∼10 −2 , representing another pathway for the reaction of formaldehyde attack on DNA (Bohnuud et al., 2012). The modeling of the conformational heterogeneity of the Watson-Crick A·T DNA base pair allowing the existence of the semiopen states in DNA, which is associated with the presence of the weak C2H. . . O2 H-bond in it, and their support by the semi-empirical quantum-chemical MNDO/H (Hovorun, 1997) and PM3 (Kryachko and Volkov, 2001) methods presented in the papers (Hovorun, 1997;Kryachko and Volkov, 2001) seems attractive. Moreover, none of these interesting ideas has been confirmed by ab initio methods. 
Nowadays the literature does not present data confirming the presence of stable conformational states in the isolated Watson-Crick DNA base pairs, except the canonical ones (Lavery, 1994; Stofer et al., 1999). This is most likely connected with the lack of new ideas regarding both the structural features of the complementary bases and the nature of the intermolecular interactions, first of all of the H-bonds, responsible for the existence of conformers that differ from the classical ones. Thus, the reverse A·T(rWC) Watson-Crick or so-called Donohue DNA base pair (Donohue and Trueblood, 1960), which is formed by the rotation of one of the bases relative to the other by 180° around the N1-N3 axis of the Watson-Crick A·T(WC) DNA base pair, has been registered in the bioactive parallel-stranded DNA (Tchurikov et al., 1989; Parvathy et al., 2002; Brovarets', 2013a,b; Poltev et al., 2016; Szabat and Kierzek, 2017; Ye et al., 2017). The A·T(H) Hoogsteen base pair (Hoogsteen, 1963) is formed by the rotation by 180° of the A DNA base relative to the T DNA base around the C9-N9 axis from the anti (WC) to the syn (H) conformation, representing an alternative DNA conformation that is involved in a number of biologically important processes such as recognition, damage induction and replication and has been actively investigated in the literature (Hoogsteen, 1963; Brovarets', 2013a,b; Alvey et al., 2014; Nikolova et al., 2014; Yang et al., 2015; Zhou, 2016; Sathyamoorthy et al., 2017). In particular, in the canonical DNA double helix Watson-Crick base pairs exist in a dynamic equilibrium with sparsely populated (∼0.02-0.4%) and short-lived (lifetimes ∼0.2-2.5 ms) Hoogsteen base pairs (Zhou, 2016). At this, the reverse A·T(rH) Hoogsteen or so-called Haschemeyer-Sobell base pair (Haschemeyer and Sobell, 1963), which is formed by the rotation of one of the bases by 180° around the N7-N3 axis of the base pair relative to the other base (Brovarets', 2013a,b), also plays an important biological role (Liu et al., 1993; Sühnel, 2002; Zagryadskaya et al., 2003). Density Functional Theory Calculations of the Geometry and Vibrational Frequencies Geometries of the main and high-energy conformers and transition states (TSs) of their mutual conformational transformations, as well as their harmonic vibrational frequencies, have been calculated at the B3LYP/6-311++G(d,p) level of theory (Hariharan and Pople, 1973; Krishnan et al., 1980; Lee et al., 1988; Parr and Yang, 1989; Tirado-Rives and Jorgensen, 2008), using the Gaussian'09 package (Frisch et al., 2010). The applied level of theory has proved successful for calculations of similar systems (Brovarets' and Hovorun, 2010a,b, 2015c; Matta, 2010). A scaling factor equal to 0.9668 has been applied in the present work for the correction of the harmonic frequencies of all conformers and TSs of their conformational transitions (Palafox, 2014; Brovarets' and Hovorun, 2015c; El-Sayed et al., 2015). We have confirmed the local minima and TSs, localized by the Synchronous Transit-guided Quasi-Newton method (Peng et al., 1996), on the potential energy landscape by the absence or presence, respectively, of an imaginary frequency in the vibrational spectra of the complexes; a minimal numerical sketch of this check is given below. We applied standard TS theory for the estimation of the activation barriers of the tautomerisation reaction (Atkins, 1998).
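As a purely illustrative aside (not from the paper), the following minimal Python sketch shows the two bookkeeping steps just described: classifying an optimized structure as a minimum or a TS by counting imaginary harmonic frequencies, and applying the quoted scaling factor of 0.9668 to the real frequencies. The frequency lists are hypothetical.

```python
# Illustrative sketch (not from the paper): classify optimized structures as
# minima or transition states from their harmonic frequencies and apply the
# scaling factor quoted in the text. Frequency values below are hypothetical.

SCALING_FACTOR = 0.9668  # scaling factor used in the text for B3LYP/6-311++G(d,p)

def classify_stationary_point(frequencies_cm1):
    """Return 'minimum' for no imaginary modes, 'TS' for exactly one."""
    n_imag = sum(1 for f in frequencies_cm1 if f < 0)  # imaginary modes stored as negative
    if n_imag == 0:
        return "minimum"
    if n_imag == 1:
        return "TS"
    return f"higher-order saddle ({n_imag} imaginary modes)"

def scale_real_frequencies(frequencies_cm1, factor=SCALING_FACTOR):
    """Scale only the real (positive) harmonic frequencies."""
    return [f * factor for f in frequencies_cm1 if f > 0]

if __name__ == "__main__":
    hypothetical_conformer = [35.0, 60.2, 110.5, 1650.0, 3480.0]
    hypothetical_ts = [-9.4, 28.1, 95.3, 1648.7, 3475.2]  # one imaginary mode, cf. 9.4i cm-1
    for label, freqs in [("conformer", hypothetical_conformer), ("TS", hypothetical_ts)]:
        print(label, classify_stationary_point(freqs), scale_real_frequencies(freqs)[:2])
```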
All calculations have been carried out in the continuum with ε = 4, which adequately reflects the processes occurring in real biological systems without depriving the bases of their structural and functional properties in the composition of DNA and satisfactorily models the substantially hydrophobic recognition pocket of the DNA-polymerase machinery as a part of the replisome (Bayley, 1951; Dewar and Storch, 1985; Petrushka et al., 1986; García-Moreno et al., 1997; Mertz and Krishtalik, 2000; Brovarets' and Hovorun, 2014d,e). Single Point Energy Calculations We continued geometry optimizations with single-point electronic energy calculations at the MP2/aug-cc-pVDZ level of theory (Frisch et al., 1990; Kendall et al., 1992). The Gibbs free energy G for all structures was obtained as G = E_el + E_corr, where E_el is the electronic energy and E_corr the thermal correction. Evaluation of the Interaction Energies Electronic interaction energies E_int have been calculated at the MP2/6-311++G(2df,pd) level of theory as the difference between the total energy of the base pair and the energies of the isolated monomers. [Figure 1 caption, in part: see Table 2; carbon atoms are shown in light blue, nitrogen in dark blue, hydrogen in gray and oxygen in red.] Estimation of the Kinetic Parameters The time τ_99.9% necessary to reach 99.9% of the equilibrium concentration of the reactant and product in the system of the reversible first-order forward (k_f) and reverse (k_r) reactions was estimated by the formula (Atkins, 1998): τ_99.9% = ln(10³)/(k_f + k_r). The lifetime τ of the conformers has been calculated using the formula τ = 1/k_r, where the values of the forward k_f and reverse k_r rate constants for the tautomerisation reactions were obtained as (Atkins, 1998): k_f,r = Γ·(k_B·T/h)·exp(−ΔG_f,r/RT), in which the quantum tunneling effect has been accounted for by Wigner's tunneling correction (Wigner, 1932), successfully used for the double proton transfer reactions in DNA base pairs (Brovarets' and Hovorun, 2013, 2014c): Γ = 1 + (1/24)·(h·ν_i/(k_B·T))², where k_B is Boltzmann's constant, h is Planck's constant, ΔG_f,r is the Gibbs free energy of activation for the conformational transition in the forward (f) and reverse (r) directions, and ν_i is the magnitude of the imaginary frequency associated with the vibrational mode at the TSs. Calculation of the Energies of the Intermolecular H-bonds The energies of the intermolecular uncommon H-bonds (Brovarets' et al., 2013) in the base pairs were calculated by the empirical Espinosa-Molins-Lecomte (EML) formula based on the electron density distribution at the (3,−1) BCPs of the specific contacts (Espinosa et al., 1998; Matta, 2006; Matta et al., 2006b; Mata et al., 2011): E_HB = 0.5·V(r), where V(r) is the value of the local potential energy at the (3,−1) BCP. The energies of all other conventional AH···B H-bonds were evaluated by the empirical Iogansen formula (Iogansen, 1999): E_HB = 0.33·√(Δν − 40), where Δν is the magnitude of the frequency shift of the stretching mode of the AH H-bonded group involved in the AH···B H-bond relative to the unbound group. Partial deuteration was applied to minimize the effect of vibrational resonances (Brovarets' and Pérez-Sánchez, 2016a, 2017; Brovarets' et al., 2016, 2017a,b, 2018; Brovarets' and Hovorun, in press). The atomic numbering scheme for the DNA bases is conventional (Saenger, 1984).
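To make the quoted kinetic and energetic formulas concrete, here is a minimal, self-contained Python sketch (not part of the paper) that evaluates the Eyring-type rate constants with the Wigner correction, the conformer lifetime τ = 1/k_r, the equilibration time τ_99.9%, and the EML and Iogansen H-bond energy estimates. All input values (barriers, imaginary frequency, V(r), frequency shift) are hypothetical and chosen only so that the outputs fall in the same picosecond range as the lifetimes discussed below.

```python
# Illustrative numerical sketch (not from the paper) of the kinetic and
# H-bond-energy formulas quoted in the Methods. All input numbers below
# (barriers, imaginary frequency, V(r), frequency shift) are hypothetical.

import math

KB = 1.380649e-23        # Boltzmann constant, J/K
H = 6.62607015e-34       # Planck constant, J*s
C_CM = 2.99792458e10     # speed of light, cm/s
R_KCAL = 1.987204e-3     # gas constant, kcal/(mol*K)
HARTREE_TO_KCAL = 627.509

def wigner_correction(nu_imag_cm1, temp=298.15):
    """Wigner tunneling factor from the TS imaginary frequency (in cm-1)."""
    nu_hz = nu_imag_cm1 * C_CM
    x = H * nu_hz / (KB * temp)
    return 1.0 + x * x / 24.0

def rate_constant(dG_kcal, nu_imag_cm1, temp=298.15):
    """Eyring-type rate constant with Wigner correction, in s-1."""
    gamma = wigner_correction(nu_imag_cm1, temp)
    return gamma * (KB * temp / H) * math.exp(-dG_kcal / (R_KCAL * temp))

def eml_hbond_energy(v_r_au):
    """EML estimate: E_HB = 0.5*|V(r)| at the (3,-1) BCP, converted to kcal/mol."""
    return 0.5 * abs(v_r_au) * HARTREE_TO_KCAL

def iogansen_hbond_energy(delta_nu_cm1):
    """Iogansen estimate: E_HB = 0.33*sqrt(dnu - 40), kcal/mol (dnu in cm-1)."""
    return 0.33 * math.sqrt(delta_nu_cm1 - 40.0)

if __name__ == "__main__":
    # Hypothetical forward/reverse Gibbs activation barriers (kcal/mol) and imaginary frequency (cm-1).
    dG_f, dG_r, nu_i = 7.0, 1.5, 9.4
    k_f = rate_constant(dG_f, nu_i)
    k_r = rate_constant(dG_r, nu_i)
    lifetime_ps = 1.0e12 / k_r                          # tau = 1/k_r
    tau_999_ps = 1.0e12 * math.log(1.0e3) / (k_f + k_r) # tau_99.9% = ln(10^3)/(k_f + k_r)
    print(f"k_f = {k_f:.2e} s-1, k_r = {k_r:.2e} s-1")
    print(f"lifetime ~ {lifetime_ps:.1f} ps, tau_99.9% ~ {tau_999_ps:.1f} ps")
    print(f"EML H-bond energy ~ {eml_hbond_energy(-0.0150):.2f} kcal/mol")
    print(f"Iogansen H-bond energy ~ {iogansen_hbond_energy(150.0):.2f} kcal/mol")
```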
RESULTS AND THEIR DISCUSSION For the first time we have detected on the potential (electronic) energy surface of each of the four biologically important A·T(WC), A·T(rWC), A·T(H) and A·T(rH) DNA base pairs shallow local minima (ΔE < kT under normal conditions) corresponding to the dynamically stable A·T(wWC), A·T(wrWC), A·T(wH) and A·T(wrH) conformers, respectively, with shifted, wobble (w) architecture (Figure 1). These conformers possess a significantly non-planar structure (see Table 1 for the selected angles of non-planarity) and C1 point group of symmetry. At this, the pyramidalized amino group of the A DNA base is involved in the intermolecular H-bonding with the T base through two anti-parallel N6H…O4/O2 and N3H…N6 H-bonds in the A·T(WC)/A·T(rWC) base pairs and N6H′…O4/O2 and N3H…N6 H-bonds in the A·T(H)/A·T(rH) DNA base pairs. In all conformers and TSs without exception the N3H…N6 H-bonds with significantly increased ellipticity are weaker than the N6H/N6H′…O4/O2 H-bonds (Table 2). These interactions should be attributed to the weak and medium H-bonds according to the existing classification (Saenger, 1984). Their most important characteristics are presented in Table 2. [Table 2 | Electron-topological, geometrical and energetic characteristics of the intermolecular H-bonds in the investigated conformers of the A·T DNA base pairs and TSs of their conformational transformations obtained at the B3LYP/6-311++G(d,p) level of theory (ε = 4) (see Figure 1).] It should be noted that each of the four investigated A·T DNA base pairs in the basic plane-symmetric conformation is stabilized by the participation of three intermolecular H-bonds, one of which, namely the C2H/C8H…O4/O2, is non-canonical (Brovarets' et al., 2013). For all A·T DNA base pairs without exception the middle N3H…N1/N7 H-bonds are the strongest (∼7 kcal·mol−1). At this, the total energy of the intermolecular H-bonds in each complex constitutes only some part of the total electronic energy of the interaction between the bases (Figure 1, Table 2). The same regularity is observed for the other DNA base pairs (Brovarets' and Hovorun, 2015d,e,f,g, 2016b). For all conformers without exception the amino H or H′ atom of the A DNA base that directly takes part in the H-bonding with the T DNA base deviates significantly from the plane of the purine ring in comparison with the other H′ or H hydrogen atom (Table 1). In all cases the high-energy conformers of the biologically important A·T base pairs are more polar than the main conformers (Table 2). We have also localized the TSs of the A·T(WC)↔A·T(wWC), A·T(rWC)↔A·T(wrWC), A·T(H)↔A·T(wH) and A·T(rH)↔A·T(wrH) conformational transitions – TSA·T(WC)↔A·T(wWC), TSA·T(rWC)↔A·T(wrWC), TSA·T(H)↔A·T(wH) and TSA·T(rH)↔A·T(wrH), respectively – with low values of the imaginary frequency (7.1, 11.4, 9.4 and 14.6i cm−1). These wobble structures (Table 1), similarly to the high-energy conformers, are stabilized by the participation of the N6H/N6H′…O4/O2 and N3H…N6 H-bonds (Figure 1, Table 2). Characteristically, all revealed conformational transitions without exception are dipole-active, since they are accompanied by a change of the dipole moment between the initial and terminal base pairs. At this, the TS of each conformational transition has the maximal value of the dipole moment (Table 2). The main characteristics of the investigated conformational transitions are presented in Table 3. Analysis of these data indicates that the short-lived conformers are dynamically stable structures with lifetimes of (1.4-3.9)·10−12 s.
Indeed, for all of them the zero-point vibrational energy of the mode whose frequency becomes imaginary in the TS is less than the electronic energy barrier ΔE for the reverse conformational transition, and the Gibbs free energy barrier for the reverse conformational transition ΔG > 0 under normal conditions. Notably, the range of the six low-frequency intermolecular vibrations of the discovered conformers is significantly shifted toward the low-frequency region compared with the main conformational states. These data indicate that the revealed conformers are quite soft structures that could be easily deformed under the influence of external forces, in particular those caused by stacking interactions with the neighboring DNA bases. The methyl group of the T DNA base does not change its orientation during the conformational transformations. Moreover, the heterocycles of the bases remain planar, despite their ability for out-of-plane bending (Govorun et al., 1992; Hovorun et al., 1999; Nikolaienko et al., 2011). Special attention should be paid to the characteristic features of the A·T(WC)↔A·T(wWC), A·T(rWC)↔A·T(wrWC), A·T(H)↔A·T(wH) and A·T(rH)↔A·T(wrH) conformational transformations. These reactions are non-dissociative, since they are accompanied by the transformation of the H-bonds and the rupture of only some of them. The intermolecular N6H/N6H′…O4/O2 H-bonds exist along the entire intrinsic reaction coordinate, in contrast to the N3H…N1/N7 H-bonds, which initially weaken and then rupture with a time delay in order to transform into the N3H…N6 H-bond. In other words, during the conformational transformations the N3H group of the T DNA base, as a proton donor, remains for some time free from intermolecular H-bonding. This suggests that the discovered conformational transitions could be used to explain the occurrence of hydrogen-deuterium exchange in the A·T DNA base pairs. It cannot be excluded that the novel corridor of spontaneous thermal fluctuations of the A·T DNA base pairs revealed here, accompanied by the transformation of the base pair from the plane-symmetric geometry into the significantly non-planar wobble conformation, could be useful for explaining the blurriness of the pre-melting transition in DNA regions enriched in A·T base pairs, which cannot be explained in detail within the framework of the two-state model. We will continue to work toward elucidating the biological importance of the revealed unusual conformers of the biologically important A·T DNA base pairs. CONCLUSIONS In general, in this work at the MP2/aug-cc-pVDZ//B3LYP/6-311++G(d,p) level of theory in the continuum with ε = 4 we have revealed for the first time the A·T(WC)↔A·T(wWC), A·T(rWC)↔A·T(wrWC), A·T(H)↔A·T(wH) and A·T(rH)↔A·T(wrH) conformational transformations in the biologically important A·T DNA base pairs and characterized their structural, energetic, polar and dynamical features. These data open new perspectives for understanding the physicochemical mechanisms of the opening of the base pairs preceding DNA melting and also for describing in detail the experimentally registered breathing of DNA.
Moreover, it is also a subject for investigation using modern spectroscopic techniques such as two-dimensional fluorescence spectroscopy (2DFS) (Widom et al., 2013), time-resolved single-molecule fluorescence resonance energy transfer (smFRET), single-molecule fluorescence linear dichroism (smFLD) and THz spectroscopy (Alexandrov et al., 2013). AUTHOR CONTRIBUTIONS OB, performance of calculations, discussion of the obtained data, preparation of the text of the manuscript. DH, proposition of the task of the investigation, discussion of the obtained data, preparation of the text of the manuscript. KT, preparation of the numerical data for Tables and graphical materials for Figures, preparation of the text of the manuscript. All authors were involved in the proofreading of the final version of the manuscript.
v3-fos-license
2024-01-07T16:06:37.123Z
2024-01-05T00:00:00.000
266805132
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2023.1291261/pdf?isPublishedV2=False", "pdf_hash": "82deb74375bb3d6abef7dc43be0c5e34cbe14619", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46378", "s2fieldsofstudy": [ "Medicine" ], "sha1": "2670ebada36eb21cd09a8a589e5420f225f2d90a", "year": 2024 }
pes2o/s2orc
Determining the nurses’ perception regarding the effectiveness of COVID-19 protocols implemented in Eastern Province: Saudi Arabia Background The global impact of Coronavirus Disease 2019 (COVID-19) has been profound, affecting public health, the global economy, and overall human life. Past experiences with global pandemics underscored the significance of understanding the perception of HCWs and hospital staff in developing and implementing preventive measures. The World Health Organization (WHO) provided protocols to manage the spread of COVID-19 and assist healthcare workers and health systems globally in maintaining high-quality health services. Objective This study aims to assess nurses’ perception, awareness, and compliance regarding the implementation of COVID-19 protocols and explore factors influencing their perception. Methodology A quantitative cross-sectional survey-based study was conducted, distributing a constructed survey among nurses in the Eastern Province of Saudi Arabia. Results Out of 141 participants, most adhered to protocols such as hand sanitization, social distancing, and proper personal protective equipment (PPE) usage. The predominant age group among respondents was 31 to 40 years (n = 71, 50%). A significant portion of participants reported holding a bachelor’s degree (n = 86, 61%), with only 14% possessing advanced degrees (n = 19). Nearly a third of the nurses in the study had accumulated 6 to 10 years of professional experience (n = 49, 34.8%). A noteworthy percentage of nurses were engaged in daily shifts exceeding 8 h (n = 98, 70%). Gender differences were observed, with females exhibiting a higher tendency to avoid shaking hands and social gatherings. Saudi nationals were more inclined to shake hands and engage in gatherings. Non-Saudi nurses and those aged between <25 to 40 years demonstrated proper donning/doffing practices. Nurses with over 6 years of experience avoided social gatherings, while those working >8 h adhered better to PPE usage, proper donning/doffing, and disposal of PPE in designated bins. Conclusion Understanding COVID-19 protocols is crucial for tailoring interventions and ensuring effective compliance with COVID-19 preventive measures among nurses. More efforts should be made toward preparing the healthcare nursing to deal with the outbreak. Preparing healthcare nursing with the right knowledge, attitude, and precautionary practices during the COVID-19 outbreak is very essential to patient and public safety. Introduction The global impact of Coronavirus Disease 2019 (COVID- 19), stemming from severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2), has left an indelible mark on the world economy, public health, and individuals' quality of life.In a brief span, it has placed an unprecedented burden on the global healthcare industry, necessitating every healthcare worker (HCW) to be at the forefront in managing the disease (1).The response to previous global pandemics underscored the pivotal role of HCWs and hospital staff perception in shaping and implementing protocols to address health crises (2). 
In response to the COVID-19 outbreak, international health organizations like the World Health Organization (WHO) issued protocols to guide the management of COVID-19, aiding health systems and HCWs worldwide in maintaining the delivery of highquality health services (3).Additionally, governments around the world implemented national protocols to contain the spread and impact of the pandemic.The global management of COVID-19 highlighted the effectiveness of a combination of non-pharmaceutical interventions, including lockdowns, school closures, restrictions on social gatherings and international travel, and robust information campaigns (4,5).However, despite their effectiveness in curbing infections, these foundational protocols and policies fell short of completely halting the virus's spread and containing the disease (6). The timing of protocol implementation was crucial, with evidence demonstrating that earlier implementation significantly influenced virus control (7).HCWs exhibited outstanding performance in executing preventive protocols while addressing the clinical demands of the pandemic.Notably, there was a significant increase in HCWs' adherence to preventive measures such as hand hygiene and the proper use of personal protective equipment (PPE) (8,9).However, the swift adoption of protocols brought about various challenges. HCWs, lacking prior experience in handling such diseases, faced high levels of stress, mitigated to some extent by protocols such as "disinfection efforts and isolation measures" that facilitated their work and maintained focus (10).PPE and resource shortages emerged as primary concerns, particularly in developing countries where these shortages hindered the implementation of preventive protocols (11,12).Poorly designed infrastructure, including overcrowded Emergency Rooms (ER), hindered hospitals from implementing preventive measures such as social distancing (13,14).Inadequate training, especially for redeployed HCWs facing increased workloads or requiring proficiency in using PPE, added to the challenges of managing the pandemic (2,9,11). The spread of the pandemic has created drastic challenges and changes in all aspects of life, especially in health professionals' education.One of the most important challenges is the preparedness and willingness of health professional nursing to work in infectious disease outbreaks (13,14).Therefore, assessing knowledge, attitude, and practices of health professionals regarding any infectious outbreak has become a fundamental step to setting an effective plan related to their preparedness.The initial research on COVID-19 has demonstrated that during unexpected natural crises and infectious diseases, healthcare professionals will make every effort to participate in the efforts to control the outbreak and reduce the complications, but the less consciousness of the risk of the infection. Constant changes in suggested preventive guidelines and protocols further complicated matters, causing confusion among HCWs about which protocol to adopt and how to implement it, leading to potential errors in handling cases (2).Nurses in Madrid emphasized the importance of hospital management and leadership considering feedback from frontline HCWs during the COVID-19 pandemic (15).Effective communication emerged as a crucial factor in clarifying applied preventive measures, ensuring proper implementation, avoiding misconceptions, and supporting HCWs through this challenging period (1,2,15,16). 
Nationally, Saudi Arabia implemented extraordinary and stringent preventive measures to safeguard citizens, ensure well-being, and enhance awareness, influencing a strong commitment to applying preventive measures (17). Papers highlighted that HCWs in Saudi Arabia possessed sufficient knowledge and skills to manage the COVID-19 outbreak (18,19). Effective communication, leadership coordination, proactive planning, HCWs training, skill development, and the implementation of strict policies contributed to enhancing HCWs' attitudes toward controlling the pandemic (20). While published papers worldwide shared experiences in managing the COVID-19 outbreak, shedding light on various protocols advised and implemented by national and international agencies, including WHO and local governments, there remains a gap in research focusing on the perception of healthcare workers regarding the implementation of COVID-19 protocols in hospitals, particularly in Saudi Arabia. This paper seeks to address this gap by determining nurses' perception, awareness, and compliance regarding the implementation of COVID-19 protocols and exploring the factors influencing their perception. Methodology Research design This is a quantitative, survey-based, cross-sectional study among nurses in the Eastern Province of Saudi Arabia. The study used a validated survey developed by Agarwal et al. (21). The survey measures the nurses' perception, awareness, and compliance regarding the COVID-19 protocols and defines the barriers to implementing them at the hospital. Survey results were analyzed using the Statistical Package for the Social Sciences (SPSS). Study setting Healthcare organizations that operated during the COVID-19 pandemic in Saudi Arabia. Participants The participants were nurses who worked during the COVID-19 pandemic in hospitals located in the Eastern Province, Saudi Arabia. The sample size was 240. We distributed the survey through public social media accounts as well as public nurses' WhatsApp groups. Due to the scope of the study, we specifically targeted nurses who faced or contacted patients during that time, as they always have direct contact with patients and therefore might be more exposed to infection compared to other healthcare workers. Instruments An online survey was constructed based on a published validated survey by Agarwal et al. (21) to evaluate the implemented preventive measures against COVID-19 among healthcare workers in India. It has two sections, starting with Section A, which assesses awareness of and compliance with the preventive measures. In addition, Section B covers the barriers to implementing these measures. Both sections cover the following elements: "hand hygiene, social distancing, personal protective equipment (PPE), gadgets/fomites, lifestyle, and exposure." The elements of interest in this study are hand hygiene, social distancing, and PPE, which are implemented as COVID-19 protocols in hospitals in Saudi Arabia. Ethics and limitations Ethical approval was obtained from the Institutional Review Board of Imam Abdulrahman Bin Faisal University; IRB-PGS-2021-03-443. The main limitation of the study was its short duration, as the study was conducted in one semester, which resulted in a small sample size.
Analysis Numerical data were analyzed with the Statistical Package for the Social Sciences (SPSS) software. Descriptive analysis was performed to present the participants' characteristics, their reported practices in implementing the COVID-19 preventive protocols (Section A), and their perceived barriers toward implementing them (Section B). Furthermore, due to the small sample size (22), bivariate analysis was done using Fisher's Exact test to assess the association between the participants' characteristics and their perception and attitudes regarding the implemented COVID-19 protocols. Lastly, the significance of the results was based on the p-value (p < 0.05). Results Out of the 141 nurses who completed the survey, 118 were females (83.7%), and almost half of the participants were Saudis (n = 73). The majority of respondents were between 31 and 40 years of age (n = 71, 50%). Most of the participants indicated that they have a bachelor's degree (n = 86, 61%), while only 14% had higher degrees (n = 19). Almost a third of the nurses in the study had 6 to 10 years of experience (n = 49, 34.8%). A remarkable number of nurses worked more than 8 h a day (n = 98, 70%). Section I: adherence to prevention practices against COVID-19 infections among healthcare workers Half of the nurses in the study indicated that they rarely shook hands when encountering a colleague (n = 73); on the other hand, 22% of them always or mostly did (n = 31). Most nurses in the study adhered to sanitizing their hands after meeting patients or touching their surroundings (n = 120, 85%), compared to only 5% who occasionally or rarely did (n = 7). Additionally, more than two thirds of the nurses followed the appropriate steps when washing or sanitizing their hands (n = 112, 79%). A considerable number of nurses kept a distance of at least 1 meter when communicating with their colleagues (n = 96, 68%). Similarly, when asked about meeting colleagues at work for lunch gatherings, almost half of the nurses in the study mentioned that they occasionally or rarely did (n = 68), while 35% always or mostly did (n = 49). Most of the nurses in the study (more than 80%) followed the proper steps for donning and doffing the PPE as per the guidelines, wore adequate PPE during duty, wore masks inside the hospital premises, covered both their nose and mouth with the mask while wearing it, and indicated that they disposed of PPE in specified colored dustbins after use according to guidelines (Table 1). More than half of the nurses in the study mentioned that they changed their PPE and did not reuse it in a single shift (n = 70, 50%). More than a third of the nurses in the study mentioned that they always or mostly carried their face shields/gowns/PPE to their duty room in the ward before completely doffing (n = 55, 39%). Several variables are significantly associated with the participants' adherence to hand hygiene. Saudis have a significantly higher average adherence score for hand hygiene regulations compared to non-Saudis (t = 2.54, p = 0.012). Younger participants have a significantly better hand hygiene adherence score compared to older participants (F = 3.085, p = 0.049). Further, participants with less than 6 working hours are more adherent to hand hygiene compared to those with more than 8 working hours (t = 2.76, p = 0.006, Table 2). None of the variables in the study influenced adherence to social distancing except nationality: the results show that Saudis are significantly more adherent to social distancing compared to non-Saudis (t = 3.72, p < 0.001, Table 2).
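As a purely illustrative sketch (not the study's actual SPSS analysis), the following Python code shows the kind of bivariate comparison reported above: a Fisher's exact test on a 2×2 nationality-by-adherence table and an independent-samples t-test on adherence scores. All counts and scores are hypothetical and invented for demonstration only.

```python
# Hypothetical illustration of the bivariate analyses described above.
# The contingency counts and adherence scores are invented for demonstration;
# they are not the study's data (which were analyzed in SPSS).

import numpy as np
from scipy import stats

# 2x2 table: rows = Saudi / non-Saudi, columns = adherent / non-adherent (hypothetical)
contingency = np.array([[55, 18],
                        [38, 30]])
odds_ratio, fisher_p = stats.fisher_exact(contingency)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {fisher_p:.3f}")

# Independent-samples t-test on a hypothetical hand-hygiene adherence score
rng = np.random.default_rng(0)
saudi_scores = rng.normal(loc=12.0, scale=2.0, size=73)
non_saudi_scores = rng.normal(loc=11.0, scale=2.2, size=68)
t_stat, t_p = stats.ttest_ind(saudi_scores, non_saudi_scores)
print(f"t-test: t = {t_stat:.2f}, p = {t_p:.3f}")
```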
In addition, Saudi participants in the study appear to be significantly more adherent to Personal Protective Equipment compared to non-Saudis (t = 3.720, p = <0.001),and younger participants show significantly better adherence to PPE compared to older ones (f = 3.227, p = 0.043, Table 2). Section II: reasons for adherence or non-adherence to preventive practices from COVID-19 infection among healthcare workers In this section, most nurses chose "not applicable, " which indicates that they did not face any difficulty or had any reason for not adhering to the applied COVID-19 protocols.Thus, the primary reported barriers/reasons are presented below. Hand hygiene Out of the 141 nurses, 17.7% were unaware that COVID-19 spread through handshaking, and 11.3% were not convinced that it does.Others felt it was inappropriate to refuse to shake another's hand (17.7%), and some had difficulty changing their habits (12.8%).Moreover, 25.6% of respondents either were tired of continuously sanitizing their hands or did not have time due to their workload, whereas 10.6% faced a lack of sanitizers at their organizations.About following the sanitizing/handwashing steps, 17.7% did not find it crucial, 13.5% felt it was exhausting, and 12.1% did not have to follow all the steps (Table 3). Social distancing 14.2% of the nurses pointed out that lack of space hindered their ability to apply social distancing in hospitals and public places, and 17.0% found it hard to speak to others in public places.Additionally, 14.2% found it difficult to change their habits in the hospital, and equally, 14.2% did not see the necessity to keep a 1-meter distance for they wear their PPE all the time (Table 3). Personal protective equipment (PPE) Reasons for not wearing all the required PPE varied, starting with unavailability of PPE (10.6%), nurses feeling uncomfortable while wearing them (12.8%), or being unaware of the PPE guidelines (7.8%), and 8.5% were not convinced that the required PPE safeguard against COVID-19.Regarding wearing masks, 12.8% could not breathe easily, specifically when covering their nose and mouth (22.0%), 9.2% felt hot while wearing it, and 7.8% reported it sliding down from their nose.Moreover, nurses reused PPE due to its shortage in their organizations (19.9%) and long shifts (12.1%), whereas 14.2% did not see any risk in doing so (Table 3).16.3% of nurses reported the unavailability of a designated area to doff, in addition to the need for an assistant or mirror to ensure proper doffing (11.3%) as barriers to applying the appropriate steps, and only 10.6% found it unnecessary to follow the steps of donning and doffing.When asked about PPE disposal, fatigue led 17.8% of nurses to not dispose of PPE/masks in their appropriate bins.Besides, 29.0% of the nurses pointed out the lack of designated bins.However, 13.5% were unaware that masks should be disposed of separately, and similarly, 24.8% were confused about which bin they should throw the PPE in Table 3. Eighty-three percent of the respondents were female, in line with other studies (23) which indicate that 90% of the workforce during this COVID-19 crisis was female. 
In the present study, 69.5% of nurses spent longer working hours (>8 h), which might affect the efficiency and effectiveness of the workforce in delivering high-quality, safe care. A recent study in China about healthcare providers working longer hours due to the spread of COVID-19 conveyed high symptom rates of depression, insomnia, and work stress (24). An international study reported that when nurses wear personal protective equipment (PPE), they usually work 4-6 h without a break. This is very critical to nurses' well-being, since longer hours wearing PPE can cause fatigue, stress, and exhaustion, making healthcare providers prone to causing medical errors (25). Hence, nursing administration should organize staffing and scheduling to avoid mental and physical health impairment. Interestingly, in this study, demographics and work-related issues mattered. Female nurses had better preventive behaviors than male nurses, such as avoiding shaking hands with their colleagues. This distinction can be attributed to the fact that, in Saudi tradition and culture, females are more inclined to be healthcare providers than men (26). This result is consistent with a United Nations policy brief stating that women are more confident and have higher self-awareness about the impact of COVID-19 on women. Caution should be taken in interpreting this study, since only 16.3% of participants were male nurses, which means the findings cannot be generalized. Discussion In this study, we found that nationality, age, and working hours influence nurses' different perceptions regarding the effectiveness of COVID-19 protocols. Based on the perception of the nurses regarding the COVID-19 protocol, we found that 79.4% of the respondents followed the appropriate steps in washing their hands, and 85.1% wore their PPE according to the guidelines, compared with social distancing, wherein only 16.1% of the participants kept at least a meter when communicating. This means that participants perceive both hand washing and wearing PPE as effective protocols against the pandemic. Previous studies have concluded that nurses have demonstrated outstanding performance in conducting preventive protocols to meet the demands of the pandemic. Notably, nurses increasingly adhere to preventive measures such as hand hygiene and wearing PPE (8,9). Furthermore, the study reveals that more than 10% of the participants did not wear all the required PPE because of its unavailability. Previous researchers also noted that the shortage of PPE and resources is a major concern for healthcare workers (11,12). In other words, one of the main reasons nurses were not wearing their PPE despite having a positive perception of the protocol is that there is a shortage of this resource.
The earliness of implementation determined the effectiveness of the COVID-19 protocols implemented in 2020 to 2021 in response to the pandemic. We found that nurses trust PPE use as an effective COVID-19 protocol; once they wear it, they do not see the need to keep a 1-meter distance or practice social distancing anymore. Researchers have proven that the earlier the protocols are implemented, the more remarkable the impact (27). In addition, COVID-19 protocols are more effective when combined with non-pharmaceutical interventions such as lockdowns, restrictions on social gatherings and international travel, school closures, and strengthened information campaigns. As for healthcare workers, previous studies noted that effective communication between HCWs, patients, and leadership, team coordination, and the implementation of strict policies to avoid errors and control the pandemic are effective COVID-19 protocols (21). After comparing the social demographics with the nurses' perception of the COVID-19 protocol, the education factor was found not to influence nurses' different perceptions regarding the effectiveness of COVID-19 protocols. This finding is consistent with the study of Olum et al. (28), which revealed that there is no association between level of education and compliance with COVID-19 protocols. This can be justified by the fact that the level of knowledge about COVID-19 precautions might be similar irrespective of the level of education of healthcare workers (28). Although gender is not significantly associated with social distancing practice in the present study, females are more cautious about shaking hands with their colleagues, which means that they are more likely to social distance than their male counterparts. This can be supported by another study, which revealed that female nurses had significantly better hand hygiene practice than male nurses (24). It was suggested that the higher compliance rate of hand hygiene among females may also be associated with their propensity to practice socially acceptable behaviors (25). Moreover, non-Saudi nurses had a greater tendency to shake hands and attend gatherings with their colleagues than Saudi nurses. Therefore, non-Saudi nurses are less likely to adhere to social distancing protocols than Saudi nurses. In addition, Saudi nurses are more likely to follow the proper steps in hand hygiene compared with non-Saudis. Nurses who were 25 to 40 years of age avoided entering their duty rooms with their face shields and were more likely to perform donning appropriately than nurses 40 years of age and above, and nurses who were 31 years and above were less likely to reuse their PPE within a single shift. Moreover, younger participants have significantly better hand hygiene adherence scores compared to older participants. We also found that the more experienced the nurses are, the more they comply with the COVID-19 protocols. This finding is supported by another study conducted in Nigeria, which revealed that compliance with the preventive measures significantly increased as nurses' years of experience increased (29). We also found that, compared to nurses who worked more than 8 h, those who worked less than 8 h adhered more to hand hygiene. This shows that nurses who are overburdened are less likely to practice proper handwashing. Previous studies showed that not having previous experience handling certain diseases impacted the HCWs' perception and behavior regarding COVID-19 protocols; thus, experience influences how nurses handle the pandemic (29).
In the present study, the overall compliance with PPE usage and IPC measures among the nurses was 85.1%. However, the discrepancy in compliance rates reported in different studies might be attributed to the time factor, as some studies were conducted during the first wave of the COVID-19 pandemic (30,31).

A high level of perception, a good level of knowledge, and a high compliance rate were reported among nurses in this study. Similar findings were reported in the study by Abdel Wahed et al. conducted among HCWs in Egypt (32).

Nurses in this study reported higher preventive practices in dealing with COVID-19. These findings affirm a previous study among healthcare workers in Saudi Arabia and a recent study about COVID-19 in India among students and healthcare workers, in which, due to constant exposure and previous outbreak experience with a similar coronavirus disease, nurses were able to practice to their full clinical capacity and use preventive measures (25).

This study revealed that good compliance with PPE usage, hand hygiene, and IPC measures was independently predicted by nurses' risk perception and knowledge about PPE usage and hand hygiene. Likewise, the review by Brooks et al. (33) of 56 papers revealed evidence that staff with higher concern about the risk of infection were more likely to comply with the recommended measures. Similarly, the review by Webster et al. (34) found that accurate knowledge about the recommended practices, perception of susceptibility to and severity of infection, and perception of the benefits of compliance facilitate compliance.

The nurses did not see the availability of incorrect PPE sizes, or feeling uncomfortable and irritable when wearing PPE, as factors that influence their practice of preventive measures against COVID-19. It is vital to apply preventive measures and comply with PPE usage, hand hygiene, and IPC measures that could minimize the spread of the disease (35). Although a negative influence of PPE on nurses and some other psychological factors were reported by Chan (36), which was not the case in the present study, there is a need for improved knowledge through sufficient training in order to enhance compliance with COVID-19 preventive measures and stop improper practices that may spread the infection.

In accordance with the present study, the study by Al-Rawajfah et al. (37) revealed that the overall knowledge of healthcare students about COVID-19 was not optimal, as only about one-quarter of the sample scored more than 75% of the maximum score.

Variation in compliance rates reported across studies could also be explained by disparities in the studies' methodology: self-reporting might overestimate the real compliance rate, unlike assessment of observed practice. Similar results were revealed in the study by Al-Mugheed et al.
(26), who investigated the acceptance and attitudes of nursing students toward the COVID-19 vaccine booster dose in two Gulf Cooperation countries. That study showed that total attitude scores ranged from 28 to 35, with a mean score of 15.8 (SD = 2.5), representing 73% of the highest possible score, and 79.3% of students were classified as having a positive attitude toward the COVID-19 booster dose. Beliefs that the vaccine booster might cause infection or be ineffective, worry about adverse effects, and safety concerns were the major barriers influencing acceptance of the COVID-19 vaccine booster. However, preparing nursing students with a positive attitude toward the COVID-19 vaccine booster is very important for patient and community safety.

Conclusion

The purpose of the current research was to define nurses' perception, awareness, and compliance regarding COVID-19 protocol implementation and to explore the factors influencing their perception. Through a quantitative, survey-based, cross-sectional study, we identified nurses' perception of the COVID-19 protocol, determined the effectiveness of COVID-19 protocols implemented from 2020 to 2021, and compared nurses' social demographics with their perception of COVID-19 protocols. We found that nurses perceive hand hygiene and wearing of PPE as effective COVID-19 protocols and that nationality, age, and working hours influence nurses' perceptions regarding the effectiveness of COVID-19 protocols.

These findings suggest that different social demographic factors influence how nurses perceive COVID-19 protocols. Healthcare providers should consider these differences when training nurses and healthcare workers to adhere to COVID-19 protocols. For example, since non-Saudis are less likely to social distance than Saudis, more informative training should be given to non-Saudi nurses regarding the importance of social distancing. Since nurses who work more than 8 h a day are less likely to follow the protocols, they should be given more training, and their perceptions should be considered when implementing COVID-19 protocols in hospitals and healthcare centers, in order to ensure better adherence despite their busy schedules.

The study's strengths include filling the research gap on the perception of healthcare workers regarding the implementation of the COVID-19 protocol in hospitals, especially in Saudi Arabia. Moreover, we identified social demographic factors that affect nurses' perceptions of the protocols. However, the paper was based on a structured survey, which presents a limitation of the study. Future researchers can conduct interviews to confirm the study's findings and provide a more in-depth explanation of why nurses answered as they did. Indeed, further study is required to understand nurses' perceptions regarding the effectiveness of the COVID-19 protocols implemented in Saudi Arabia.

TABLE 1 Frequency (%) of responses to the adherence questionnaire.

TABLE 2 Association between the participants' characteristics and their perception toward the implemented COVID-19 protocols.

TABLE 3 Reasons for preventive practices among healthcare workers.
Australian alcohol policy 2001–2013 and implications for public health

Background: Despite a complex and multi-faceted alcohol policy environment in Australia, there are few comprehensive reviews of national and state alcohol policies that assess their effectiveness and research support. In mapping the Australian alcohol policy domain and evaluating policy interventions in each of the core policy areas, this article provides a useful resource for researchers. The implications for protecting public health emanating from this mapping and evaluation of alcohol policy are also discussed. Methods: This review considered data from published primary research; alcohol legislation, strategies and alcohol-related press releases for all levels and jurisdictions of Australian government; international publications by prominent non-governmental organisations; and relevant grey literature. These were organised and evaluated using the established framework offered by Thomas Babor and colleagues. Results: Findings indicated great variability in alcohol initiatives across Australia, many of which do not reflect what is currently considered to be evidence-based best practice. Conclusions: Research showing increasing alcohol-related harms despite steady levels of consumption suggests a need to pursue alcohol policy initiatives that are supported by evidence of harm reduction. Future initiatives should aim to increase existing alcohol controls in line with suggested best practice in order to protect public health in Australia.

Background

Alcohol-related health and social harms are well documented [1]. In Australia, recorded annual per capita alcohol consumption stands at 9.89 litres, and alcohol accounts for a significant proportion of the total burden of disease and injury (3.3% in 2003), second only to tobacco as a preventable cause of drug-related deaths and hospitalisation [2]. Indeed, alcohol accounts for a considerable number of preventable deaths (over 31,000 between 1992 and 2001) and hospitalisations (over half a million between 1993-94 and 2000-01) [3]. Despite efforts to reduce alcohol-related harms, as illustrated in this paper, recent evidence has indicated increasing alcohol-related harms in Australia; while population levels of consumption are relatively stable, there are changing patterns of drinking among sub-groups [4].

Given the burden of alcohol on public health, there is considerable debate on alcohol control policy. However, the depth of research on the effectiveness of some alcohol policies remains limited, especially in the Australian context (although the recently announced International Alcohol Control Study [5] will go some way to increasing understanding). As such, along with the World Health Organization's [6] identification of 'best buy' alcohol control policies for protecting public health (i.e., pricing strategies, limiting availability, controlling marketing), mapping the existing landscape can help inform alcohol policy development and strategy, such as the National Binge Drinking Strategy and components of the Taking Preventative Action national health strategy.

This article contributes to knowledge in three ways. Firstly, it maps the alcohol policy environment in Australia, providing a reference for alcohol researchers. Secondly, it evaluates alcohol policy in Australia according to the Babor et al. [7] framework for effective alcohol policies.
Finally, comparing and contrasting the findings with available evidence of best practice, it discusses the implications for future alcohol policy and for protecting public health in Australia.

Methods

This article presents the key findings from a systematic review of alcohol policy in Australia spanning all states, territories, and levels of government. The article provides a synopsis of Australian alcohol policy and an assessment of strengths and weaknesses in each policy area. The literature review followed the PRISMA protocol for conducting and reporting systematic reviews [8]. Identified literature was then categorised and assessed using the matrix offered by Babor et al. [7], outlining seven broad areas of alcohol policy: 1. pricing and taxation; 2. regulating physical availability; 3. modifying the drinking environment; 4. drink-driving countermeasures; 5. restrictions on marketing; 6. education and persuasion; and 7. treatment and early intervention. Importantly, Babor et al. [7] based their identification of seven key areas of alcohol policy on extensive consultation of the extant research literature and on the theoretical assumptions underpinning the seven broad areas; for example, for alcohol pricing and taxation the theoretical assumption is that increasing the economic cost of alcohol relative to alternative products will reduce demand. Therefore, the framework offers a useful, and the most established, tool for identifying and evaluating alcohol policy holistically. Currently, policy initiatives relating to each area are employed by all national, state, and territory governments in Australia. Local government in Australia usually has a limited, supporting role in relation to some of these (e.g. land use planning controls, enforcement of local laws).

Inclusion/exclusion criteria

To investigate alcohol policy across these seven areas, literature published from 2001 to 2013, in English only, consisting of published academic primary research and commentaries examining current or previous alcohol policy/strategy in Australia, its effectiveness, and/or correlates of change was included. In addition, grey literature such as published alcohol legislation, strategies and alcohol-related press releases for all levels and jurisdictions of Australian government was included. The review also included grey literature publications by prominent non-governmental organisations, government reports, stakeholder publications, and media reports. It should be acknowledged that there were important developments in Australian alcohol policy introduced prior to the date range for the present review, such as the introduction of random breath testing in 1982 and mid-strength beer during the 1990s. This review helps map how alcohol policy has developed during the past decade, identifying areas in which it is robust and areas in which it may be sub-optimal, and providing suggestions for future improvements to protect public health.

Search strategy

A systematic search for academic literature consisting of primary research or commentaries spanned five electronic databases (see Table 1). Search terms included, but were not limited to, alcohol*, Australia*, polic*, legislat*, pric*, tax*, regulat*, avail*, environ*, ban*, minimum, restrict*, density, training, code, enforce*, law*, test*, BAC, licen*, punish*, campaign, intervention, and treatment [a], and results were limited to records published between 2001 and 2013 [b] (see Table 1).
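As an illustrative aside, the sketch below shows how truncated (wildcard) terms of the kind listed above might be combined into a single Boolean query string with a publication-year limit. The grouping of terms, the Boolean operators, and the date-limit syntax are assumptions chosen for illustration; they do not reproduce the authors' exact database queries.

```python
# Illustrative sketch only: combining wildcard search terms into a Boolean query.
# The term groupings, operators, and date-limit syntax are assumed for illustration
# and do not reproduce the authors' exact search strings.

context_terms = ["alcohol*", "Australia*"]
policy_terms = ["polic*", "legislat*", "pric*", "tax*", "regulat*", "avail*",
                "ban*", "restrict*", "licen*", "campaign", "intervention", "treatment"]

def or_group(terms):
    """Join a list of terms into a parenthesised OR group."""
    return "(" + " OR ".join(terms) + ")"

# Require at least one context term AND at least one policy term,
# limited to the review's publication window (hypothetical syntax).
query = f"{or_group(context_terms)} AND {or_group(policy_terms)} AND PY=(2001-2013)"
print(query)
```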
From the 1196 results, 187 articles were identified for retrieval by reviewing titles and abstracts, and of these 126 articles were subsequently retained (after removing irrelevant records and duplicates). Following this, grey literature was identified by searching organisation websites and through Google searches using the same search terms used to search the academic databases. This process included identification of alcohol-related legislation, strategies and press releases from national, state, and territorial government websites, including all health departments, transportation authorities, liquor and gaming offices, and departments of […]. This supplemental search of the grey literature identified 78 additional sources. Inter-coder reliability checks were conducted by two researchers on a 20% sample of the literature to check for consistency in inclusion/exclusion and subsequent categorisation, with disagreements settled by a third reviewer.

Once the included literature had been identified, it was downloaded into bibliographic software and categorised according to the seven key policy areas identified by Babor et al. [7]. Data extraction was then conducted to enable a summary of Australian alcohol policy in each of the seven policy areas to be constructed. The protocol issued by Babor et al. [7] was then used to assess and rate the effectiveness, and research support thereof, of Australian alcohol policy for each of the seven policy areas, with inter-coder reliability checks being conducted during the entire process and disagreements resolved through majority decision (see Table 2). In Table 2, research support refers to the quantity and consistency of available evidence (0 indicates that no effectiveness studies have been undertaken; + indicates one or two well-designed effectiveness studies have been undertaken; ++ indicates several effectiveness studies have been undertaken, but no comprehensive reviews were available; +++ indicates sufficient studies conducted and a comprehensive review or meta-analysis was available); bolded initiatives indicate those that are not yet instituted in Australia but are imminent or subject to ongoing discussion. It should be acknowledged that the Babor et al. [7] protocol does have some limitations in that it favours policies for which there is empirical evidence of implementation and availability of peer-reviewed evaluation literature. This means that policy areas for which there is a paucity of evaluation research, or where the design and implementation of policy has been poorly executed, may be ranked lower. However, despite these limitations, this protocol offers the most appropriate framework for reviewing and evaluating alcohol policy in Australia. The following summarises the key findings that emerged [c].

Pricing and taxation

Pricing is one of the most effective alcohol harm minimisation policy levers available, with a strong evidence base suggesting that increases in the price of alcohol reduce consumption and associated harms. Although the price of alcohol in Australia is relatively high compared to other developed countries, this may be due to market conditions, high living costs, and the strong Australian dollar, rather than to policy effects. A number of factors affect the cost of alcohol, the most prominent of which is taxation. Australia's three national taxes on alcohol account for a large share of the estimated $6 billion the Australian Government receives as a result of alcohol-related production and consumption [9][10][11]. A volume-based Excise Tax (Excise Tariff Act 1921) that increases according to the strength of alcohol is applied to all beer, pre-mixed alcoholic beverages and spirits. A mix of deliberate concessions and some unintended loopholes has served to create numerous exceptions to the volumetric nature of this tax. Spirits, ready-to-drink (RTD) products, and flavoured cider are subject to excise based on the volume of alcohol in the products. Notably, some spirits (mainly brandy) are subject to a concessional rate (effectively creating differential pricing by beverage).
The excise rate for beer is lower than that on spirits and depends upon its packaging (e.g., draught or bottled). Wine and traditional cider, in contrast, are subject to a value-based Wine Equalisation Tax (WET; A New Tax System [Wine Equalisation Tax] Act 1999). Under the WET, wine is taxed at a flat rate of 29% of its wholesale value [11]. After the excise or WET rate has been applied, a further value-based GST (A New Tax System [Goods and Services Tax] Act 1998) is applied to all alcoholic beverages at a flat rate of 10% [10]. As these taxes are regulated and collected by the Commonwealth Government, alcohol taxation is one of the few areas of national uniformity in Australian alcohol policy.

Recent research has questioned the merits of this complex and inconsistent system of alcohol taxation, suggesting instead that significant health gains and cost savings could be achieved with a strictly volumetric taxation system, whereby all alcohol products would be taxed according to their alcohol content [12][13][14]. While public discussions on alcohol taxation reform have increased in recent years, these have not yet resulted in tangible policy change. Instead, the Australian government has favoured the adoption of special taxes to influence problematic patterns of alcohol consumption. These additional taxes have commonly been levied against alcoholic beverages identified as disproportionately consumed by youths or at-risk groups. A recent example is the passage of the Excise Tariff Amendment (2009 Measures No. 1) Bill, which addressed a loophole that meant that spirit-based, pre-mixed, ready-to-drink beverages (commonly termed 'alcopops') were taxed differently to spirits. The Bill targeted alcopops specifically due to their identification as a drink of choice for Australian youths, particularly young Australian women [9][10][11]. In support of these measures, research has suggested their ability to influence preferences and purchasing behaviours [15], to increase taxation revenue, and to reduce pure alcohol consumption [16].

In 2009, a national taskforce on preventative health recommended that the Australian government make further improvements to the alcohol taxation system and also explore the feasibility of setting a minimum price on alcohol. Public consultations for the development of a national minimum price per standard drink of alcohol were then undertaken in 2011 [17], consistent with Australia's National Alcohol Strategy 2006-2011 [18], which highlights the Australian Government's continued commitment to investigating price-related levers aimed at reducing harmful drinking practices. It is true that alcohol taxation is higher in Australia than in many other developed nations (such as the UK) for beer and spirits (but not wine), and there is taxation of specific products such as alcopops in place. However, further policy measures, such as the introduction of minimum unit pricing, hold the potential to further protect public health.

Although the taxation of alcohol occurs at the national level in Australia, individual states and territories influence alcohol pricing through their regulation of discounts and promotions. In most cases, these regulatory powers have specifically targeted promotional activities that promote unsafe or irresponsible consumption of alcohol, although this pattern of consumption is not clearly defined and its encouragement is almost impossible to prove.
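As a brief illustrative aside before returning to state-level regulation of discounts and promotions, the sketch below shows how the value-based WET and GST described above stack on a hypothetical wine product, compared with a volumetric excise (then GST) on a hypothetical spirit. Only the 29% WET rate and 10% GST rate come from the text; the prices, volumes, and excise rate per litre of pure alcohol are invented for illustration and are not the actual tariff figures.

```python
# Illustrative sketch only: stacking of value-based taxes (WET, then GST) on wine
# versus a volumetric excise (then GST) on spirits. The WET (29%) and GST (10%)
# rates are from the text; all prices, volumes, and the assumed excise rate are
# hypothetical.

WET_RATE = 0.29   # Wine Equalisation Tax: flat 29% of wholesale value
GST_RATE = 0.10   # GST: flat 10%, applied after WET or excise

# Hypothetical wine with a $10.00 wholesale value
wholesale = 10.00
wet = wholesale * WET_RATE
wine_with_taxes = (wholesale + wet) * (1 + GST_RATE)
print(f"Wine: ${wholesale:.2f} wholesale + ${wet:.2f} WET -> ${wine_with_taxes:.2f} incl. GST")

# Hypothetical spirit: 0.7 L at 40% ABV, pre-tax value $12.00,
# with an assumed (not actual) excise of $80 per litre of pure alcohol
pure_alcohol_litres = 0.7 * 0.40
excise = pure_alcohol_litres * 80.00
spirit_with_taxes = (12.00 + excise) * (1 + GST_RATE)
print(f"Spirit: ${excise:.2f} excise on {pure_alcohol_litres:.2f} L pure alcohol "
      f"-> ${spirit_with_taxes:.2f} incl. GST")
```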
Most states have legislated their right to prohibit promotional activities involving alcohol at reduced prices, although the specificities and types of measures to achieve these aims vary. For instance, New South Wales (NSW) specifically states that, in general, a discount promotion of over 50% should be undertaken with caution and with the risks properly assessed as to whether it will encourage immoderate consumption of alcohol [19]. Queensland prohibits discounted price promotions in licensed premises that may encourage immoderate consumption. Less specific stipulations prohibiting alcohol promotions from encouraging immoderate drinking are in place in Victoria and Western Australia. In the Northern Territory the Liquor Act includes a clause stating that it may be necessary to prohibit or limit promotional activities but provides no further clarification. In contrast, Tasmania and the Australian Capital Territory have no current legislation that explicitly prohibits promotional activities involving alcohol sold at reduced prices.

Overall, our assessment is that Australian policy in this area is stronger than in other developed countries such as the UK [7], but there are inconsistencies in taxation, and the level of control over price reductions and promotions is weak and difficult to enforce. Moving forward, consistent taxation policies and measures such as minimum unit pricing of alcohol could further strengthen the policy environment.

Regulating physical availability

Beyond taxation, regulation of advertising, and delivery of education and persuasion interventions, alcohol policy is largely legislated by state and territory governments, a diffusion of control that has rendered Australian alcohol policy varied and in a constant state of change. Contributing to this variability are differences in alcohol consumption, alcohol-related harms and the political context that ultimately motivates legislative change across individual states and territories [10]. To illustrate this diversity, several Australian states and territories have recently rewritten their alcohol legislation in the context of increasing rates of alcohol-related harm (i.e., ACT Liquor Act 2010, NSW Liquor Act 2007, NT Liquor Act 2010). Nevertheless, all Australian states and territories have altered their alcohol legislation to some extent in the past decade, with revisions commonly aiming to regulate the physical availability of alcohol as a means to reduce alcohol-related harms. However, other recent revisions have aimed to reduce the regulatory burden on business (see the red tape reduction bill in Queensland discussed later), which could potentially threaten public-health-oriented provisions in alcohol policy. Licensing is the most commonly used mechanism for regulating the availability of alcohol.
Licensing regulates who is able to sell alcohol and places conditions on where (the density of outlets), when (trading hours) and how (license conditions) alcohol can be sold, all of which have been shown to correlate with harmful consumption practices [13,[20][21][22][23][24][25][26]. All Australian states and territories currently have a licensing system in place, although they vary in the categories of licenses available (e.g., Victoria has 12 license types compared with the Northern Territory's five) and the conditions attached to each type of license. Although there is some consistency in licence conditions across Australia, such as the minimum legal purchase age (18 years) and mandatory responsible service of alcohol training, which from 2014 will be administered at the Commonwealth level [27], a number of fundamental differences exist. For instance, states and territories differ in their position on the consumption of alcohol by minors. Most states ban the consumption of alcohol by minors in licensed or public premises. However, Tasmania and South Australia (SA) permit minors to drink in non-dry public areas under the supervision of a responsible legal guardian. The Northern Territory (NT) extends this by permitting minors to consume alcohol in licensed premises with proper guardian supervision.

Another point of divergence is the normal trading hours attached as a condition to licenses. States and territories vary in: whether they have revised trading hours for Sundays (Queensland and Tasmania have not); the earliest time that establishments are permitted to serve alcohol (i.e., 5 am for NSW, Tasmania and SA; 6 am for Western Australia; 7 am for Victoria; 10 am for Queensland and NT); and the latest time an establishment is permitted to serve alcohol on a standard license (i.e., midnight for NSW, SA, Queensland, Western Australia and Tasmania; 11 pm for Victoria; 10 pm for NT). In addition, while most states and territories offer extended (e.g., NSW, SA, Victoria, Western Australia, Tasmania), or even 24-hour (e.g., SA), alcohol trading licenses, other states have taken a different route to alter their late-night trading environment. For example, in 2009 the Queensland Government placed a moratorium on granting licenses that extend the trade of alcohol beyond midnight or before 5 am [28]. However, a recent move in June 2013 to reduce red tape associated with obtaining a liquor licence in Queensland has seemingly relaxed the regulatory environment [27]. Similar measures in NSW mandate that all extended trading licensees maintain a minimum of six hours' continuous closure each day (NSW Liquor Legislation Amendment Act 2008). Similarly, SA recently proposed mandatory closure of all alcohol-licensed establishments between the hours of 4 am and 7 am (SA Liquor Licensing (Miscellaneous) Amendment Bill 2011), although this was ultimately defeated in favour of harsher penalties for alcohol-related transgressions [29]. States also place considerable focus on encouraging economic development, with South Australia and New South Wales recently introducing simplified small venue liquor licenses to encourage a small venue culture in Adelaide [30,31]. This focus on urban development may not always be congruous with protecting public health or controlling alcohol-related social problems.

Another common means of regulating the availability of alcohol is limiting the locations in which alcohol can legally be consumed.
In particular, Australian states and territories continue to differ in their treatment of alcohol consumption in public. While all states and territories have established some form of restricted-area, alcohol-free or dry zones (typically in areas such as public roads, parks and beaches), the extent of these provisions varies. In 2008, the NT placed a wholesale ban on alcohol in particular high-risk areas (e.g., the town of Katherine) [32]. In contrast, other states (Victoria and Tasmania) have given authority to local councils to impose these restrictions. Despite these various initiatives, research has suggested only moderate levels of compliance, perhaps related to a need to redirect current enforcement strategies from individual drinkers to the supplying establishment [33].

The topic of social/secondary supply of alcohol to minors in private premises has generated significant debate, and policy changes, in recent times. For example, supply of alcohol to minors on private premises by persons other than the minor's adult guardians has been prohibited in the Northern Territory since 2011, New South Wales since 2007, Queensland and Tasmania since 2009, and Victoria since 2011 [34]. In the Northern Territory, Queensland and Tasmania the legislation also prescribes that supply must occur responsibly and under supervision [35]. Yet in South Australia, the Australian Capital Territory, and Western Australia secondary supply of alcohol to minors in private premises is unregulated. Furthermore, there are some issues with these laws worth considering. It is not illegal to supply alcohol to your own children, or to others' children provided parental or guardian permission is granted, which means that the policy often does not prevent social supply of alcohol to minors. In addition, such laws are very difficult to police, as supply of alcohol to minors often takes place in private homes in which enforcement agencies are not present.

There is strong research support for controlling alcohol availability to reduce alcohol consumption and related harms, suggesting that this is an important area of focus in Australia. A number of changes in recent years in the area of regulating the physical availability of alcohol appear to have somewhat strengthened policy in this area, especially when compared with countries like the UK and much of the EU. However, our assessment is that policy in this area is inconsistent. The number of alcohol licenses and the duration of opening hours remain problems in urban areas, and enforcement of policies such as those pertaining to social supply of alcohol to minors is problematic. An emerging challenge is the growth in packaged (i.e. take-away) liquor sales, which now represent around 80 per cent of all alcohol consumed, because the bulk of this is consumed in the home or other unlicensed premises where there are relatively few controls on servers and drinkers. Overall, while there have been policy developments that have regulated alcohol availability to an extent, inconsistencies between states and territories suggest that further improvements could be made.

Modifying the drinking environment

Another function of licensing the sale of alcohol is to exercise control over the drinking environment, typically accomplished by placing conditions on alcohol licenses. Nearly all Australian states and territories require responsible service of alcohol (RSA) training for staff involved in the service of alcohol as a condition of alcohol licensure.
Some states extend this condition further, requiring this training for all staff and security (i.e., NSW, Victoria, Tasmania, Australian Capital Territory). In contrast, in SA only one 'approved responsible person' with RSA training is required to be on duty (SA Liquor Licensing Act 1997). Western Australia (WA) has legislated the ability to apply this training requirement flexibly as needed (WA Liquor Control Act 1988). From 2014, nationally administered and regulated RSA training will supersede state competencies in this area of policy [27].

Another common feature of alcohol legislation is the recent inclusion of server liability. In all states and territories it is now illegal for licensees and alcohol service staff to sell or supply alcohol to an intoxicated person [9]. Infractions are subject to a fine, carrying a maximum penalty usually in the thousands of dollars for staff and tens of thousands for licensees. In addition, most states have extended this liability to other patrons of the establishment who supply alcohol to an intoxicated person (usually incurring a fine approximately half that of staff), with the NT being a notable exception.

An additional measure aimed at modifying the drinking environment is the adoption of late-night lockouts. These lockouts aim to restrict the movement of patrons between establishments by setting a time (prior to closing) after which entry or re-entry is no longer permitted [9], yet the use of lockouts remains highly variable. For instance, Queensland has a state-wide 3 am lockout for all late-trading premises [36], whereas other states (NSW, SA) have various locally agreed and/or voluntary lockouts. Further, Victoria trialled and abandoned legislated 2 am lockouts in Melbourne in 2008, whereas WA has recently begun trialling them [37]. Although many states have supported their lockout plans with evidence of harm reduction [38,39], there is somewhat limited formal evidence of their effectiveness, due to a lack of comprehensive evaluation studies and given that lockout policies are often one component of a range of programmes aimed at curbing late-night alcohol-related problems [9]. Indeed, a recent study identified that although restrictions in opening hours were associated with a sustained lower assault rate in the Newcastle CBD, there was no evidence that lockouts in isolation from reductions in opening hours were effective in nearby Hamilton [40].

A number of additional powers are also currently in place to facilitate the enforcement of laws in licensed premises (e.g., barring orders, emergency closure of licensed premises, banned drinker registers, increased penalties for infractions). The success of these measures, however, is reliant upon their effective enforcement. One unique enforcement measure was the NT's creation of a banned drinker register in 2011, such that all individuals purchasing alcohol must have their ID scanned at liquor outlets [41]. Drinkers who had been banned for violations of the NT Liquor Act 2010 were refused service by alcohol service staff. However, the NT government recently scrapped the banned drinker register and introduced a draconian mandatory treatment system instead. In Victoria, new powers introduced in 2012 permit police officers, protective services officers, and gambling and liquor inspectors to seize and tip out alcohol from persons they reasonably believe are under the age of 18 years [42], and similar legislation is in place in NSW.
Powers of seizure of alcohol under circumstances that contravene the Liquor Control Act apply in Western Australia and Queensland, and powers of seizure of things by investigators apply in the Northern Territory and the Australian Capital Territory, although these do not specifically stipulate that they apply to the seizure and tipping out of alcohol in the possession of minors. Another measure is NSW's 'three strikes' legislation, which states that after three violations (from a prescribed list of offences) license conditions can be imposed, a license can be suspended or cancelled, and/or a moratorium on a new license can be invoked (NSW Liquor Amendment (3 Strikes) Bill 2011). NSW also runs the 'Alcohol Linking Program', an intervention that translates research into practice to enhance police enforcement of liquor laws through the use of data-based feedback to police and licensees about alcohol-related crime following drinking on specific licensed premises. Through a series of standard questions asked by police of people involved in an alcohol-fuelled incident, the programme effectively 'links' incidents where the offender, victim or driver may have consumed too much alcohol in a licensed premises before that incident, and then uses this information to map where offenders and victims consumed alcohol and to inform changes in serving practices and environmental design in licensed premises to potentially reduce alcohol-related harm [43]. Despite these measures, research suggests application or enforcement can be inconsistent, often targeting individual problem drinkers rather than the violating establishment [33].

Current Australian alcohol policy in relation to modifying the drinking environment is reasonably well developed, with national mandatory responsible server training (often not comprehensively present in other countries) and reasonably strong powers for police and licensing agencies. However, there is some evidence that enforcement is not consistent, and the research evidence for the effectiveness of strategies such as lockouts is mixed. Therefore, current policy in this area is only moderately robust and effective.

Drink-driving countermeasures

Given the serious risks associated with impaired driving, laws prohibiting driving under the influence of alcohol are entrenched within Australian alcohol policy. Despite states and territories each regulating their own roads and driver licensing, there is much consistency with respect to drink-driving legislation. For instance, all Australian states and territories require drivers to have a blood alcohol content (BAC) below .05. They also mandate that learner and provisional drivers have no alcohol in their system while operating a vehicle. All states and territories also enforce suspensions for first and subsequent drink-driving infractions, employ an alcohol interlock scheme for serial offenders (which requires a breathalyser test in order to start the car) [44], and utilise random breath testing to enforce these regulations.

Despite these similarities, there are also a number of points of divergence related to drink-driving legislation. One notable example is the BAC threshold leading to immediate license suspension. Immediate license suspension occurs on a first offence for a BAC of .08 in NSW, SA, NT and WA (reduced in 2011), .10 in Queensland (reduced from .15 in 2011), .07 in Victoria, and .05 in Tasmania and the Australian Capital Territory (ACT).
There is also a large degree of variability in the specific penalties for drink-driving infractions across Australia. While increasing monetary penalties for drink-driving offences has been a common mechanism to reduce alcohol-related road accidents, evidence suggests that this may in fact be ineffective [45]. For example, research indicates that after the doubling of drink-driving penalties in NSW in 1998, the rate of single-vehicle accidents increased and the aggregate level of road accidents remained largely unchanged. As a result, Australian governments have increasingly sought viable alternatives to reduce rates of drink driving. One notable example is Queensland's adoption of cumulative disqualification periods for repeat drink drivers in 2008, which requires that in the case of multiple charges (e.g., drink-driving, then driving while disqualified) the first charge must be served in full before the second charge begins [46]. This is in contrast to the typical system where multiple charges are served concurrently. Regardless of the specific measures adopted, the extensive government resources committed to combatting drink driving (e.g., multi-million dollar advertising campaigns) suggest that this will continue to be an area of priority for Australian governments.

The research evidence on the efficacy of drink-driving countermeasures is relatively strong, and current Australian policy is robust in this area. For instance, Australia has a lower BAC limit, tougher restrictions on drink driving, and stronger resultant penalties than other nations such as the UK. As such, our assessment is that alcohol policy in this area in Australia is strong.

Restrictions on marketing

Research has repeatedly shown the impact that alcohol marketing can have on preference and purchasing behaviours [15,25,47]. Despite this impact, there are currently no bans on alcohol advertising in Australia, although a number of laws and codes at the national level regulate its content and exposure (Trade Practices Act 1974; Commercial Television Industry Code of Practice 2010; Commercial Radio Code of Practice 2004; Outdoor Media Association's Code of Ethics 2011), in addition to state and territory fair trade legislation [9]. These laws and codes regulate when alcohol advertisements can be shown and contain requirements for honest and ethical advertising. For instance, the Commercial Television Industry Code of Practice 2010 dictates that alcohol advertisements may only be shown in M, MA, and AV classification periods, although they are also permitted during broadcasts of sporting events that occur on weekends and public holidays. Nevertheless, Australia's system for regulating alcohol advertising can best be described as quasi-regulatory, given that the centrepiece of this regulatory system is the Alcohol Beverages Advertising Code (ABAC), which is implemented, funded, and administered by the alcohol industry [9,48]. The stated aim of this code is to ensure that alcohol advertising presents responsible drinking [49], although there is growing evidence of code violations [48,50]. Features of this code include: not targeting children and adolescents; not encouraging excessive consumption of alcohol; not implying that alcohol consumption will lead to a significant change in mood; not depicting an association between alcohol consumption and vehicle operation; and not depicting the consumption of alcohol as contributing to various forms of success [49].
In addition to regulating television and radio advertisements, the ABAC scheme also pertains to Internet advertisements, retail advertisements, promotion of alcohol at events (in conjunction with state regulations), and the packaging of alcoholic beverages (in conjunction with national regulations). Although controlling alcohol marketing remains a point of focus for regulators, this has centred on guidelines and voluntary regulation rather than legislation. For instance, liquor promotion guidelines were introduced in NSW in 2013, with principles pertaining to not appealing to minors, not using indecent or offensive promotions, not encouraging immoderate drinking, not using emotive language to encourage consumption, not offering extreme price discounts, and not conducting promotions against the public interest, such as associating alcohol with discrimination, crime or violence [19]. However, these new guidelines largely mirror existing voluntary guidelines and have been criticised for their scope (i.e., they predominantly apply to licensed venues and, like price-discounting restrictions, are not applied to packaged alcohol outlets) and lack of enforcement (i.e., they rely on breaches being observed by authorities, and subsequently charged and prosecuted) [d]. Furthermore, more recent discussions have centred on health promotion activities such as warning labels on alcohol packaging (although the alcohol industry successfully lobbied for a delay on their introduction [51]), rather than on regulatory measures on alcohol promotion activities. Australia has a long history of industry working to pre-empt government restrictions on alcohol marketing by introducing voluntary measures; it has been argued that these measures serve to portray the industry as responsible while avoiding more effective policy interventions [52]. For example, governmental reviews of alcohol advertising regulation have typically been followed by industry 'modifications' to the self-regulatory code, which have subsequently been shown to be ineffective [53].

While there is some level of control in current Australian policy with regard to the level of exposure to alcohol marketing, the current codes are largely self-regulatory and strongly influenced by industry. Given the self-regulatory orientation of policy restricting alcohol marketing in Australia, and the research evidence suggesting that the regulatory system is ineffective, our assessment is that alcohol policy in this area is weak.

Education and persuasion

With respect to education and persuasion interventions on alcohol in Australia, it is important to acknowledge that such measures are massively dwarfed by pro-drinking messages from alcohol marketing. There has been significant public discourse regarding the inclusion of warning labels on alcohol packaging in Australia over the past five years, but so far only limited policy change. In 2011, Australia's largest brewers, wine makers and spirits producers opted to voluntarily place warning labels on their alcohol products [54]. These labels feature a pictogram indicating that pregnant women should not consume alcohol, as well as one of a number of interchangeable statements such as 'it is safest not to drink while pregnant', 'kids and alcohol don't mix', or 'is your drinking harming yourself or others?'
Given that these measures have been criticised as ambiguous and ineffective by some experts [55,56], the Commonwealth Government is deliberating over the need for legislation mandating the presence and form of these warnings, as well as the inclusion of a nutritional information panel listing energy levels in kilojoules [56].

Australian alcohol education strategies have also commonly featured large-scale mass media campaigns to emphasise the negative aspects of risky drinking behaviours. Common topics have included binge drinking, anti-violence, alcohol-related harms, drink-driving and underage drinking. For instance, the Australian Government's National Binge Drinking Strategy [57] committed $20 million to highlighting the consequences of excessive alcohol consumption. However, these campaigns have generally been found to be effective at raising awareness but ineffective at changing behaviour. Furthermore, notably absent from campaigns has been a focus on raising awareness of the National Health and Medical Research Council drinking guidelines. For example, the evaluation of the 'Drinking Choices' National Alcohol Campaign, which targeted teenagers aged 12-17 years and their parents, found high levels of awareness, few attitudinal effects, and no change in teen drinking behaviours (other than an increase in binge drinking among females). A focus on binge drinking is also evident at the state level. For instance, SA's 'Drink too much, you're asking for trouble' campaign and WA's 'Alcohol. Think Again' campaign focus on alcohol-related harm. Furthermore, a number of states have focused their campaign efforts on alcohol-related violence, such as Queensland's 'Know your limits' campaign, which included the use of YouTube clips aimed at raising awareness of the link between alcohol and violence, and Tasmania's 'Getting through the night without a fight' campaign, which involved the use of Facebook and a mobile app (Mate Minder) that allows users to track friends, ask friends to 'come find me', and be notified when friends arrive home safely.

While these campaigns primarily target the reduction of the most prevalent risky behaviours in current drinkers, Australian governments have also aimed at prevention in future drinkers through classroom education. In 2004 the Australian Government released the second edition of its Principles for School Drug Education framework [58], outlining best-practice principles for incorporating drug education into the school curriculum. Following this lead, all state and territory education departments have incorporated some form of drug and alcohol education into their curriculum. For instance, in NSW drug education is taught in every government primary and secondary school from Kindergarten until Year 10 [59]. The SA Department of Education and Community Services has provided staff with drug and alcohol education units to use in Year 5, 6, and 7 classrooms [60]. The WA Department of Education and Training has outlined its School Drug Education programme [61], which includes a dedicated website with curriculum materials and resources for teachers, parents and students. The programme also involves the parents of students, and local communities, thereby taking a more strategic and community development approach to alcohol education. Despite the variability in the exact requirements and specifications of these drug and alcohol education programmes, it is clear that these prevention efforts remain central to governments' harm-reduction strategies.
There is a reasonably high focus on alcohol education and persuasion campaigns in Australia, despite the limited evidence that these have a significant impact on reducing alcohol consumption and related harms. This may be changing, given that NSW recently closed the Alcohol and Other Drugs branch in the Department of Education and purchased an industry-developed curriculum. This influence of industry on alcohol education, in addition to the extant research evidence questioning the efficacy of such intervention strategies, leads us to evaluate Australian policy in this area as weak, although no formal evaluation of alcohol education and persuasion initiatives in Australia has yet been conducted. In sum, education and persuasion form a prominent pillar of Australian alcohol policy, but the evidence base for their effectiveness is weak, leading us to rank this as a prominent but not entirely evidence-based component of the current policy environment.

Treatment and early intervention

An additional means of reducing the negative effects of harmful drinking in alcohol-dependent drinkers, and preventing at-risk drinkers from experiencing further harm, is through the provision of various treatment, routine screening and brief intervention options. However, it has been suggested that this is an area in which Australia has not performed particularly well, given the various screening, early identification and intervention programmes that were trialled, yet ultimately abandoned, in the 1980s and 1990s [9]. Nevertheless, one area where these interventions have been consistently applied is the workplace. Workplace programmes commonly provide high-risk drinkers with an opportunity to access treatment options that have research-based evidence of effectiveness. However, the exact nature of these measures often varies from company to company, consistent only in the minimum practice standards laid out by occupational health and safety legislation.

Research on the impact of brief interventions by primary care providers suggests substantial health gains and cost savings [13,62]; however, legislators have appeared to minimise support for specific treatment or rehabilitation efforts. A prime example is the proliferation of sobering-up centres across Australia [9], which are used to temporarily hold publicly intoxicated individuals who are being disorderly. Of particular note is the short-term harm-reduction role this confers on enforcement officials, rather than providing intoxicated individuals with an opportunity for intervention [9]. However, in the Northern Territory, mandatory alcohol treatment has been introduced, through which people who are taken into police custody for intoxication three or more times in two months are referred to a compulsory programme of assessment and treatment. Strategies in the programme include treatment in secure residential facilities and community management, including income management, life skills and work readiness training [63]. This high-involvement programme may be a reflection of the prevalent alcohol problems in the Northern Territory. However, the policy has been criticised as being discriminatory against Aboriginal people and as ignoring problem drinkers who never enter the judicial system [64]. An updated programme of involuntary treatment for alcohol dependence was also introduced in NSW in 2012 [65].
Although the presence of sporadic alcohol-related treatment initiatives suggests that legislators recognise the need for treatment and early intervention for problem drinkers, it is unclear whether this area will become an increased priority in the future. The research support for alcohol treatment and early intervention, particularly brief interventions, is reasonably strong. However, formal national programmes of brief interventions have only recently been introduced in a limited number of countries. Therefore, it may be some time before evaluation research is available to comprehensively assess the effectiveness of this approach. While efforts have been made in Australia to strengthen policy in this area, provision of services can be sporadic, and there are inconsistencies in intervention approaches. Therefore, our assessment is that policy in this area is moderately strong in some respects (for example, workplace interventions) but could improve quite considerably through comprehensive resourcing and consistent design, delivery and evaluation of interventions.

Discussion

This paper maps the landscape of Australian alcohol policy between 2001 and 2013, illustrating the breadth of policies and initiatives across seven key policy areas [7]. Examination of this landscape suggests that the Australian policy environment is complex, with Commonwealth, state, and territory governments having competence over different policy areas. This has created considerable variation in policy throughout Australia. Although there is a requirement under a federal system to have a degree of flexibility in alcohol policy development, the current landscape does not optimise public health interests (see Table 2). There is ample evidence to guide governments in developing and implementing alcohol policies that will be effective in reducing public health harms. Yet, despite this, successive governments have been unwilling to introduce these evidence-based policies. There may be a number of reasons for this, including opposition to robust policies from the alcohol industry; a pro-drinking culture in Australian communities, facilitated by ubiquitous alcohol marketing, affordable pricing and easy access and availability; and a lack of national coordination, accountability, and strategic governance in relation to alcohol policy, particularly since the Ministerial Council on Drug Strategy was scrapped in 2011. It is encouraging that some states such as South Australia have now incorporated a public health provision in their alcohol legislation [66], but this is sporadic, and policy implementation that protects public health is often inconsistent. As an example, although many Australian states and territories have attempted to regulate the availability of alcohol for minors, alcohol consumption by minors is subject to fewer restrictions in Tasmania, SA and the NT (despite research suggesting that even adult-supervised alcohol consumption by minors results in higher levels of alcohol-related harm than zero-tolerance policies [67]). Furthermore, there is little evaluation of the efficacy of alcohol control policy in Australia, with limited research in some policy areas (e.g., marketing, education) suggesting current policy is ineffective.

Conclusions

This underscores the need for research on the impact of, and population responses to, alcohol control policy in Australia.
Relating to the WHO's 'best buy' policies of pricing, regulating availability, and marketing control [6], it is clear that Australia currently has moderate controls over alcohol pricing, some limitations on alcohol availability (although these are being weakened, as in the case of alcohol being sold in Victoria's supermarkets) [68], and limited controls over alcohol marketing. Therefore, policymakers' and regulators' attention should focus on strengthening policy in accordance with the evidence base and with the WHO best buy recommendations in order to protect public health in Australia. Indeed, recent research canvassing the opinions of alcohol experts in Australia identified pricing policies such as volumetric taxation and minimum unit pricing, and regulation of alcohol advertising, as top national priorities [69]. This may require incorporating public health provisions as a core pillar of alcohol policy, similar to countries like Scotland [70], to facilitate more effective outcomes. Furthermore, public support, which is currently lacking for some measures such as minimum unit pricing for alcohol [71], will also need to be gained for more robust and effective alcohol policy in Australia to emerge.

Endnotes

a. A full list of search terms is available from the authors upon request.

b. This article is based upon an earlier literature review covering 2001-2011 (a ten-year search span common in systematic review research). The date span in this article was expanded to 2013 to enable the inclusion of contemporary research and policy changes.

c. A full list of references identified in the literature search is available from the authors on request.

d. Liquor Promotion Guidelines apply to ALL licensed premises but state: "A distinction can be made between promotions offering alcohol to be consumed immediately on a licensed premises and promotions offering alcohol which may be stored for consumption later away from the premises. As a result, the extent to which each principle in this document applies to different licence types will vary accordingly".