added: 2018-12-12T19:50:37.397Z
created: 2017-01-01T00:00:00.000
id: 55091015
metadata:
{
  "extfieldsofstudy": ["Business"],
  "oa_license": "CCBY",
  "oa_status": "GOLD",
  "oa_url": "https://doi.org/10.1080/23322039.2017.1325117",
  "pdf_hash": "1c45ed965b446867fa7bc001ad44a2b4401fade4",
  "pdf_src": "MergedPDFExtraction",
  "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44429",
  "s2fieldsofstudy": ["Business"],
  "sha1": "fbf869a958e520f46abac37645fce7e7745f6209",
  "year": 2017
}
source: pes2o/s2orc
Women on boardroom: Does it create risk?
This study examines the impact of women's presence on corporate boards. It is believed that the presence of women on the board adds value to the company, as women bring different perspectives to the decision-making process and to the company's strategic plans. Using Islamic listed firms on the Indonesian Stock Exchange, this study employs a 7-year panel comprising 840 observations drawn from quarterly data (2009 to 2015) of 30 listed firms that comply with Islamic law. Generalized Least Squares is employed, and the results reveal that the presence of female CEOs lowers the firm's risk for all risk proxies. Female CEOs with a higher academic qualification, an overseas qualification, or a business degree tend to lower the firm's risk. The results also indicate that the younger the female CEO and the longer her tenure, the lower the firm's risk. Furthermore, more female directors in the boardroom have a significant impact on firm risk: the higher the number of female directors, the lower the firm's risk (CR and FCF). In addition, female chief financial officers tend to lower the firm's risk, as they are believed to be more conservative in dealing with financial issues. Subjects: Finance; Corporate Finance; Corporate Governance
ABOUT THE AUTHORS Fitriya Fauzi is a lecturer in Finance and Banking at the Faculty of Business, Curtin University Malaysia. She holds a PhD in Finance from the University of Waikato, New Zealand, and has published in refereed journals such as Global Business Review, among others. Her main research interests include corporate finance.
Abdul Basyith was a senior lecturer in Finance at the Faculty of Economics and Business, University of Muhammadiyah Palembang, Indonesia. He holds a PhD from the University of Pancasila, Indonesia. He has published in refereed journals such as Global Business Review, among others. His main research interests include corporate finance.
Poh-Ling Ho is an associate professor in Accounting at Curtin University, Malaysia. She holds a PhD in Accounting from Curtin University, Australia. She is a CPA (Australia) and a Chartered Accountant in Malaysia. She has published in refereed journals such as Pacific Accounting Review, Asian Review of Accounting, and Asian Journal of Business and Accounting, among others. Her main research interests include corporate governance, sustainability reporting, and entrepreneurship.
PUBLIC INTEREST STATEMENT
The inclusion of women on boards is often seen as a good business decision because women directors are hypothesized to bring benefits to the company through their performance as directors. Women directors possess distinctive attributes that increase financial performance, as diversity enhances independence, innovation, and governance. Diversity of thought results in better decision-making and, ultimately, better firm performance. Women are also expected to have a conscious commitment to assume their roles and responsibilities in the company. Thus, this study aims to examine the impact of female directors on firm risk. Specifically, it breaks down female directors' characteristics into several parts, such as their academic qualification, educational background, academic major, and their age and tenure. Using these education characteristics is compelling because they can help explain whether they reduce firm risk. Further, female chief financial officers are included in this study, as they are directly involved in the financial management of the company, through which risks could be altered.
Introduction
In the past, women were regarded as mere housewives and were strictly prohibited from working in the formal sector. However, this traditional practice has not withstood the changes in culture and technology of the last few decades. These changes have been driven not only by rising economic needs but also by the need to be recognized in society. Moreover, women are now more educated and competent, which widens the opportunities for women to be employed. Over the last decade, more women have taken part in the formal sector across all industries. Despite this significant transformation, equality between men and women still differs implicitly in terms of opportunity, income, and workplace conditions. The higher the participation rate of women in the labor force, the faster the economic growth; women now account for about 50% of the global labor force (Organization for Economic Cooperation & Development, 2009). If this increase is also accompanied by an increase in the quality of education for women, higher economic growth is more plausible (Organization for Economic Cooperation & Development, 2012). However, an increase in the quality of education for women does not always lead to better labor market outcomes (UN Women, 2016). Furthermore, not only has the participation rate become important, but the quality of the job has also become paramount. Although more women are working in the formal sector, they are still responsible for household chores and raising children, which can lead to a heavier burden and a higher likelihood of stress. Moreover, it is also believed that women cannot be separated from childcare issues and are unable to relinquish this responsibility.
Even though the number of educated women has risen over the last two decades, women still struggle to gain access to decent work. Often, women face the challenge of balancing working life and family life. More effort and time are needed in the workplace, particularly if they want to move up the corporate ladder. Generally, women work longer hours per day, in both paid and unpaid work, than men. According to the ILO (2016), employed women (whether in self-employment or paid employment) have longer working days on average than employed men, in both developing and developed countries. The juggling of work and family may influence how firms' management, especially in large companies, approaches employing women. Consequently, women tend to prefer working in micro and small enterprises that offer flexibility. According to the ILO (2016), more than one-third of women are employed in wholesale and retail trade services (33.9%), while about 12.4% of women are engaged in the manufacturing sector in upper-middle-income countries. The major source of employment for women in high-income countries is the health and education sectors, which account for 30.6% of the labor market. On the other hand, the agricultural sector is the main source of employment for women in low-income and lower-middle-income countries. Ironically, over 60.0% of women remain in the agricultural sector in developing countries, and they are remunerated poorly or not at all.
In contrast to developed countries, where women constitute 48.1% of total employment in the managerial, professional, and technician sectors (highly paid jobs), women in developing countries constitute over 60% of total employment in the clerical, services, and sales sectors (low-paid jobs) (ILO, 2016). Moreover, only 5.0% or fewer of women occupy the chief executive officer positions of the world's largest corporations, and only 30.0% of women own and manage businesses in the micro and small enterprises (ILO, 2016). Hence, the fact that only a few women occupy leadership positions is undeniable.
Although only about 40.0% of women in developing countries worked in the managerial, professional, and technician sectors (highly paid jobs), very few women were able to occupy top management positions. It is often the case that women occupying top management positions were selected on the basis of kinship or collegiality, and only a few progressed to the top through their professional career ladder. Hence, gender diversity in boardrooms has been a topic of corporate governance discussion, especially over the last decade.
Even in the USA, the percentage of women in the boardrooms is only about 15%, and this low composition has become a major concern as investors have pushed companies to disclose gender diversity.
Having gender diversity in the boardroom provides some benefits to the company. It is believed that diversity leads to higher financial performance, as it provides more independence, innovation, and good governance. Further, diversity of thought, higher competitive advantage, and diversity of skills are some of the advantages of having women in the boardroom. It is widely believed that diversity of thought results in better decision-making, which leads to better firm performance. Moreover, women are expected to have a conscious commitment to assume their roles and responsibilities in the company.
There is no particular law in Indonesia regulating the minimum number of women on a company's board. Further, there are no corporate governance guidelines for gender diversity under the Indonesian Stock Exchange's (IDX) code. However, in 2012 the Indonesian government enacted a regulation on the minimum number of women required as representatives for parliamentary seats, stating that at least 30% of all candidates fielded by political parties must be women. In spite of massive gender awareness and good corporate governance awareness campaigns, the proportion of women on boards in Indonesian companies remains rather low, at about 11.1% in 2016 (Korn Ferry Diversity Scorecard, 2016). This figure is slightly lower than the 11.6% reported by the Centre for Governance, Institutions & Organisations Report (2012). The low representation of women is likely due to the lack of institutional support. Despite pressure from various parties to increase the number of women in the boardroom, the gender composition of Indonesian boardrooms is still uncertain, and the risk of having more female directors on boards is yet to be known.
A recent study by Loukil and Yousfi (2015), which examined the impact of gender diversity on corporate risk-taking using Tunisian listed firms, found that gender diversity has no significant effect on risk as measured by liquidity and turnover ratios. They employed stock returns and trading volume to calculate the liquidity and turnover ratios used as risk proxies. Similar to Loukil and Yousfi (2015), who employed market-based measures as risk proxies, Sila, Gonzalez, and Hagendorff (2016) investigated the relationship between women on boards and firm risk. Their results reveal that firms with more female directors tend to have lower equity risk as measured by total risk, systematic risk, and idiosyncratic risk, all calculated from daily stock returns. Using daily stock returns to measure risk can be fruitful; however, stock prices vary daily, so the timing of female directors' appointments must be handled carefully to adjust for the timing gap, as it will affect stock price changes. Further, stock price changes can be affected by many factors, which may introduce noise into the results. Using a quarterly data-set of Islamic firms listed on the IDX spanning 2009 to 2015, this study aims to examine the impact of female directors on firm risk. Specifically, it breaks down female directors' characteristics into several parts, such as their academic qualification, educational background, academic major, and their age and tenure. Using these education characteristics is compelling because the results can be used to explain whether these characteristics have an impact on firm value. Further, female chief financial officers (CFOs) are included in this study, as they are directly involved in the financial management of the company, through which risks could be altered.
We note that the previous literature used market-based measures such as stock return movements as risk proxies (Loukil & Yousfi, 2015; Sila et al., 2016). This paper contributes to the current literature in several ways. First, we find that the presence of female CEOs lowers the firm's risk as proxied by a number of accounting-based measures, namely the cash ratio, debt ratio, and free cash flow (FCF) ratio. Second, the findings highlight that female directors with demographic attributes such as a higher academic qualification, an overseas qualification, and a business degree do lower firm risk. Third, this study documents that younger female directors with longer tenures contribute to a firm's financial stability, as the firm has lower risk measured by the cash ratio, debt ratio, and FCF ratio. In addition, this study supports the notion that female directors tend to be conservative toward risk when dealing with financial issues.
Literature review
Gender diversity has been extensively studied in recent years (Adams & Ferreira, 2009; Adams & Ragunathan, 2013; Berger, Kick, & Schaeck, 2014; Faccio, Marchica, & Mura, 2016; Loukil & Yousfi, 2015; Sapienza, Zingales, & Maestripieri, 2009). Although there has been a significant change in that the presence of women on boards is now acknowledged, and their appointment can increase firm financial performance and decrease firm risk, it is also believed that their appointment to board seats amounts to mere tokenism. The Fortune Report (2016) affirmed that women held roughly 16% of board seats in S&P 1500 companies over the period 1997-2014. The report also revealed that female directors tend to be younger, with an average age of 60, compared to male directors with an average age of 63. In addition, 42.1% of female directors tend to have shorter tenures of fewer than 5 years. Moreover, women are more likely to serve on more than two boards, which accounts for 18.9%. Tokenism appears to be present in boardrooms: once one woman is appointed to a board, there is a greater chance of finding other women appointed to director positions. According to the Fortune Report (2016), 29.0% of S&P 500 companies that previously had no women on their boards had added a woman; 15.0% of boards with one woman added an additional woman; and 6% of companies with two women on their boards added an additional woman in fiscal year 2014.
The appointment of women is considered beneficial, as women on boards increase the diversity of opinions and provide female role models (Catalyst, 1995), influence leadership styles and decision-making (Rosener, 1990), improve the firm's image (Mattis, 1997), provide strategic thinking (Bilimoria, 2000), increase the likelihood of the firm's survival and growth (Basyith, Idris, & Fitriya, 2014; Weber & Zulehner, 2010), and decrease leverage and earnings volatility (Faccio et al., 2016). However, there are two types of directors: executive directors and non-executive directors. If women are appointed to executive seats, this may indicate that they have distinguished capabilities, as their appointment follows the normal career advancement path; hence, only those with greater managerial advancement, longer experience, and higher education levels will be chosen. Female directors with higher academic qualifications increase firm performance (Smith, Smith, & Verner, 2006), and qualified and skillful board members can be considered a judicious resource that provides strategic linkages to different external resources (Ingley & van der Walt, 2001), ultimately adding value to the firm (Carpenter & Westphal, 2001). In contrast, non-executive female directors are appointed by invitation of the board chairman or a nominating committee, and they are more likely than executive directors to have been employed in higher occupation types, the public sector, or larger organizations. Their appointment is at the discretion of the chairman or nominating committee (Burgess & Tharenou, 2000); nevertheless, they also bring a different strategic direction (Selby, 2000) and a broader way of thinking (Fondas, 2000) to the company. Mohan (2014) asserted that communication and interpersonal skills are often seen in female leaders, while rationality and domination are generally exhibited by male leaders. Moreover, female directors can fill the gaps left by an insufficient number of competent male directors (Burke & Kurucz, 1998). In addition, it is believed that the behavior of directors in various situations depends on gender (Johnson & Powell, 1994). Women are considered to have different emotional reactions to risk compared to male directors (Croson & Gneezy, 2009). Risk is often associated with negative terms, and it is human nature to avoid risk. The extent to which people can bear risk has been studied extensively (Byrnes, Miller, & Schafer, 1999; Croson & Gneezy, 2009). It is also believed that certain traits distinguish men from women when it comes to decisions involving risk. Such traits include genetic differences (Saad & Gill, 2000), overconfidence (Barber & Odean, 2001; Bertrand, 2011; Niederle & Vesterlund, 2007), gender-stereotypic characteristics (Diekman, Eagly, & Kulesa, 2002), psychological and social considerations (Meier-Pesti & Penz, 2008), and power and compassion (Schwartz & Rubel, 2005).
Some studies have found that women are more risk averse than men (Barber & Odean, 2001; Beckmann & Menkhoff, 2008; Croson & Gneezy, 2009; Eckel & Grossman, 2008). A presumption that women are more risk averse than men would simply mean that appointments to positions involving higher risk-taking are prioritized for men (Croson & Gneezy, 2009). However, Adams and Funk (2012) stated that women who occupy higher positions differ from women in the general population, as they are mainly concerned with achievement and authority. Furthermore, it is still ambiguous whether women are more risk averse than men, particularly when they occupy higher positions in the boardroom. Elsaid and Ursel (2011) and Faccio et al. (2016) found that higher female representation on boards generates lower leverage, lower earnings volatility, and a higher chance of firm survival. However, some studies found no evidence of gender differences in risk-taking (Atkinson, Baird, & Frye, 2003; Cosentino, Montalto, Donato, & via, 2012). Hence, a general inference cannot be drawn, given the inconclusive results among scholars.
There are also studies that distinguish between female directors in the financial industry (Adams & Ragunathan, 2013; Berger et al., 2014; Sapienza et al., 2009) and in non-financial industries (Faccio et al., 2016). In terms of financial and investment decisions, some studies affirm that women tend to be risk averse (Ertac & Gurdal, 2012; Halko, Kaustia, & Alanko, 2012; Jianakoplos & Bernasek, 1998; Vandegrift & Brown, 2005). Sapienza et al. (2009) affirmed that female directors in non-financial industries are more risk averse than female directors in the financial services industry. Moreover, if the CFO is female, it is believed that she will be more conservative in dealing with accounting policies (Francis, Hasan, Park, & Wu, 2010), less likely to manipulate earnings statements (Chava & Purnanandam, 2010), and less likely to issue long-term debt and make significant acquisitions (Huang & Kisgen, 2013). Furthermore, female CFOs are more risk averse than male CFOs when making financial decisions (Huang & Kisgen, 2013). Female directors prefer to take less risk than male directors (Elsaid & Ursel, 2011; Martin, Nishikawa, & Williams, 2009; Muldrow & Bayton, 1979; Schmidt & Traub, 2002). The appointment of a new female CEO leads to a reduction in the firm's stock return volatility (Martin et al., 2009) and a reduction in the firm's risk profile (Elsaid & Ursel, 2011). In contrast, Adams and Ragunathan (2013) asserted that women in finance have the same risk preferences as men in finance. They found that during the financial crisis of 2007-2009, most firms in their sample had the same risk exposure even though some had higher female representation; further, having more female directors lowers firm performance (Fauzi & Locke, 2012). It can be concluded that men are largely uniform in how they respond to risk exposure, while women perceive it differently.
Compared to male directors, younger female directors are believed to bring some benefits to the company, such as new ideas and strategies (Burke, 1994; Ibrahim & Angelidis, 1994). They are associated with long-term company success and competitive advantage (Cassell, 1997), higher value-added through distinctive skills (Green & Cassell, 1996), greater sensitivity to corporate social responsibility (Ibrahim & Angelidis, 1994), increased profitability (Catalyst, 1995), and higher intellectual capital (Daily, Certo, & Dalton, 1999). However, Berger et al. (2014), who examined directors' characteristics in terms of gender, age, and education, found that younger executives and a higher proportion of female executives increase a firm's risk, while higher academic degrees among female executives decrease a firm's risk. They argued that this result may be caused by female directors being less experienced than male directors in their sample.
Recent studies by Lenard, Yu, and York (2014), Loukil and Yousfi (2015), and Sila et al. (2016) measured the effect of gender diversity on corporate performance in terms of firm risk using the variability of stock market returns, and they found that gender diversity on the board of directors affects firm risk by contributing to lower stock return variability. Their results are similar to those of Adams and Ferreira (2004) and Hillman, Shropshire, and Cannella (2007), in that firms with higher stock return variability and more complex compensation structures tend to have fewer female directors. As discussed earlier, using daily stock returns to measure risk can lead to misleading conclusions because of timing constraints during the study period; hence, using accounting-based measures of risk, this study examines the impact of women directors on firm risk, given that female directors prefer to take less risk than male directors (Berger et al., 2014; Elsaid & Ursel, 2011; Martin et al., 2009; Schmidt & Traub, 2002). Furthermore, Fauzi and Locke (2012) found that female directors lower firm performance. Moreover, Basyith (2016) found that directors' academic qualifications have a significant impact on improving firm performance. Based on the aforementioned theories and empirical evidence, the following hypotheses are formulated: H1: Female participation in the boardroom has a significant effect on risk-taking.
H2: Female directors' characteristics (academic qualification, academic institution, academic major, age, and tenure) have a significant effect on risk-taking.
Further, this study examines whether female CFOs have more influence on the firm's risk attitude than female CEOs, and the hypothesis is as follows: H3: Compared to female CEOs, female CFOs have more influence on the firm's risk attitude.
Data
The data for this study were obtained from the IDX database archive. This study employs quarterly data because of a drawback of annual financial data: interpreting annual data over time involves uncertainty about the timing of events, since many data series exhibit activities or movements that recur every year in the same quarter. For example, the timing of director appointments differs across companies: one company may appoint new directors at the beginning of the financial reporting period, while another may appoint them in the middle or at the end. Hence, to capture specific changes in board composition over time, quarterly data are more appropriate. Further, the cash ratio and FCF may change significantly every month, and these changes may be seasonal, as sales usually peak in December. Because many industries experience seasonal demand patterns, quarterly data can capture these dynamic changes in the data series over time. Further, although the quarterly reports are not verified by an external professional accountant, they are audited by the internal auditor and approved by the board of directors; hence, the quarterly data can be considered reliable.
This study uses 30 Indonesian listed firms categorized as Sharia-compliant, referred to as Jakarta Islamic Index (JII) firms, over a seven-year sampling period from 2009 to 2015; therefore, 840 observations of panel data are employed. The JII is an index created in July 2000 on the IDX to accommodate market needs. It includes only listed firms that (1) comply with Islamic law, (2) have an obligation-to-asset ratio of no more than 90%, (3) have the highest liquidity, and (4) have the highest market capitalization. It can therefore be expected that the majority of the firms included in this index would also be listed on the blue-chip index, referred to as the LQ45 index. The JII list is announced every six months, so 14 announcement lists are available during the period of this study, and more than 30 firms appear in the JII over the sampling period. A non-random, purposive sampling technique is employed, and this study uses only 30 firms in the analysis, selected based on the following criteria: (1) the selected firms should appear in the JII lists at least six times out of the 14 announcements; and (2) the selected firms should have all the information required for the analysis. Moreover, to mitigate the problem of missing values, this study uses multiple imputation, including weighted values to compensate for the missing values excluded from the model (Raghunathan, 2004).
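To make the purposive sampling rule concrete, the minimal sketch below (Python; the ticker symbols, list contents, and column names are hypothetical, not taken from the paper's data) filters firms that appear in at least six of the 14 JII announcement lists and have complete data:

```python
import pandas as pd

# Hypothetical inputs: jii_lists holds one set of tickers per semi-annual JII
# announcement (14 in total for 2009-2015; only two are shown here), and `firms`
# flags whether all variables required for the analysis are available.
jii_lists = [
    {"AALI", "ANTM", "ASII", "INTP"},   # announcement 1 (toy data)
    {"AALI", "ASII", "INTP", "KLBF"},   # announcement 2 (toy data)
]
firms = pd.DataFrame({
    "ticker": ["AALI", "ANTM", "ASII", "INTP", "KLBF"],
    "has_complete_data": [True, False, True, True, True],
})

# Count how many announcement lists each ticker appears in.
appearances = pd.Series([t for lst in jii_lists for t in lst]).value_counts()

# Criterion 1: listed in at least 6 of the 14 announcements.
# Criterion 2: all information required for the analysis is available.
MIN_APPEARANCES = 6
eligible = appearances[appearances >= MIN_APPEARANCES].index
sample = firms[firms["ticker"].isin(eligible) & firms["has_complete_data"]]

# With the truncated toy lists above no firm passes the threshold; with the full
# 14 lists this filter would return the 30 firms used in the study.
print(sample)
```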
Variables
Most studies of gender diversity in relation to risk employ stock return variability as the risk proxy (Adams & Ferreira, 2009; Hillman et al., 2007). Stock prices and trading volume are commonly used to calculate total risk, systematic risk, and idiosyncratic risk. Stock return variability as a risk proxy is categorized as a market-based measure. However, some studies employ liquidity measures (cash ratio and leverage ratio) as risk proxies; these are categorized as accounting-based measures. This study employs accounting-based measures as risk proxies for the dependent variable, namely the cash ratio, debt ratio, and FCF ratio; the measurement scale of these variables is a ratio. The cash ratio is measured as cash available over total current liabilities; the debt ratio is measured as total debt over total assets; and the FCF ratio is measured as total free cash flow over total assets. The CEO's characteristics, including gender, academic qualification, place of academic attainment, academic major, age, and years of experience, together with the percentage of female directors on the board and the gender of the CFO, are employed as explanatory variables. For the CEO's gender, place of academic attainment, academic major, and the CFO's gender, dummy variables are employed: if the CEO is female, the variable is set to "1", otherwise "0"; if the female CEO's qualification was awarded by a domestic university, the variable is set to "1", otherwise "0"; if the female CEO's academic major is a business major, the variable is set to "0", otherwise "1". The CEO's academic qualification is used only if the CEO is female, and there are four groups of academic qualification: high school graduate, undergraduate, graduate, and postgraduate. The measurement scale of this variable is nominal and is coded as follows: 1 if the CEO is a high school graduate, 2 if the CEO holds an undergraduate degree, 3 if the CEO holds a graduate degree, and 4 if the CEO holds a postgraduate degree. If the CFO is female, the variable is set to "1", otherwise "0". For the CEO's age and years of experience, the absolute number of years is used; the measurement scale of these variables is a ratio. The percentage of female directors is measured as the number of female directors over the total number of directors on the board. The control variables are firm size and ownership type (state-owned enterprises, domestic private firms, and foreign private firms).
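As a concrete illustration of the variable definitions above, the following sketch (Python/pandas; all raw column names are hypothetical, and the firm-size proxy is an assumption since the paper does not state its exact definition) constructs the risk proxies, the dummy variables, and the nominal qualification coding exactly as described in the text:

```python
import numpy as np
import pandas as pd

def build_variables(df: pd.DataFrame) -> pd.DataFrame:
    """Construct dependent, explanatory, and control variables for one firm-quarter per row."""
    out = df.copy()

    # Dependent variables (accounting-based risk proxies).
    out["cash_ratio"] = out["cash"] / out["current_liabilities"]
    out["debt_ratio"] = out["total_debt"] / out["total_assets"]
    out["fcf_ratio"] = out["free_cash_flow"] / out["total_assets"]

    # Dummy variables, coded as defined in the text.
    out["ceo_female"] = (out["ceo_gender"] == "F").astype(int)                           # 1 = female CEO
    out["ceo_domestic_degree"] = (out["ceo_degree_country"] == "domestic").astype(int)   # 1 = domestic university
    out["ceo_business_major"] = np.where(out["ceo_major"] == "business", 0, 1)           # 0 = business major
    out["cfo_female"] = (out["cfo_gender"] == "F").astype(int)                           # 1 = female CFO

    # Nominal coding of the female CEO's academic qualification (1-4).
    qual_map = {"high_school": 1, "undergraduate": 2, "graduate": 3, "postgraduate": 4}
    out["ceo_qualification"] = out["ceo_education"].map(qual_map)

    # Board composition and controls.
    out["pct_female_directors"] = out["n_female_directors"] / out["n_directors"]
    out["firm_size"] = np.log(out["total_assets"])  # log of total assets, a common size proxy (assumption)

    return out
```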
Model specification
This study examines the impact of the independent variables on the dependent variables using observable (numerical) data, and thus a quantitative method of analysis is employed. Furthermore, this study uses panel data methods, which offer advantages in moderating some issues in the regression model. Apart from eliminating unobservable heterogeneity for each observation in the sample, panel data allow multicollinearity among variables to be alleviated; both issues yield biased estimates resulting from spurious correlation with the dependent variable (Baltagi, 2005). A model for the regression of women on board is then:

$$y_{it} = \beta_0 + \beta_1 X_{1it} + \beta_2 X_{2it} + \cdots + \beta_8 X_{8it} + \mu_i + \lambda_t + v_{it} \qquad (1)$$

where $y_{it}$ is the firm's risk/value represented by the cash ratio, debt ratio, and FCF; $X_{1it}$ is the CEO's gender, $X_{2it}$ is the CEO's academic qualification, $X_{3it}$ is where the CEO's academic qualification was attained, $X_{4it}$ is the CEO's academic major, $X_{5it}$ is the CEO's age, $X_{6it}$ is the CEO's years of experience, $X_{7it}$ is the percentage of female directors, and $X_{8it}$ is the CFO's gender. $\mu_i$ denotes the unobservable individual effect, $\lambda_t$ denotes the unobservable time effect, and $v_{it}$ is the remainder stochastic disturbance term. This model describes parallel regression planes, which can differ in their intercepts. Hereafter, $X_{1it} \ldots X_{nit}$ will be referred to as $x_{it}$ (the set of regressors):

$$y_{it} = \alpha_i + x_{it}'\beta + u_{it} \qquad (2)$$

where $x_{it}$ is a $K \times 1$ vector of regressors, $\beta$ is a $K \times 1$ vector of parameters to be estimated, and $\alpha_i$ represents time-invariant individual nuisance parameters. Under the null hypothesis, $u_{it}$ is assumed to be independent and identically distributed (IID) over periods and across cross-sectional units.
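The paper estimates this model by Generalized Least Squares on the firm-quarter panel. One plausible way to run such an estimation, sketched below with the linearmodels package and the hypothetical column and file names from the earlier sketch (the authors' exact estimator settings are not reported, so this is an illustration rather than their code), is a random-effects (quasi-GLS) panel regression of each risk proxy on the explanatory and control variables:

```python
import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import RandomEffects

# Hypothetical panel: one row per firm-quarter, indexed by (firm, quarter).
df = pd.read_csv("jii_panel.csv", parse_dates=["quarter"]).set_index(["firm", "quarter"])

exog_cols = [
    "ceo_female", "ceo_qualification", "ceo_domestic_degree", "ceo_business_major",
    "ceo_age", "ceo_tenure", "pct_female_directors", "cfo_female",
    "firm_size", "state_owned", "foreign_private",
]
exog = sm.add_constant(df[exog_cols])

# Estimate equation (2) with a random individual effect for each risk proxy.
for dep in ["debt_ratio", "cash_ratio", "fcf_ratio"]:
    res = RandomEffects(df[dep], exog).fit()
    print(dep, float(res.params["ceo_female"]), sep="\t")  # sign of the CEO-gender coefficient
```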
Analysis
The sample for the study consists of 30 listed firms over the period 2009 to 2015. Quarterly data were collected over this 7-year period, which translates to 840 observations. Table 1 provides the descriptive statistics used in this study. The table reports the number of observations, mean, standard deviation, and minimum and maximum value of each variable. The explanatory variables are the CEO's gender, the CEO's academic qualification, where the CEO's qualification was attained, the CEO's academic major, the CEO's age, the CEO's years of experience, the percentage of female directors, and the CFO's gender; each risk proxy (dependent variable) is regressed on these explanatory variables.
The mean value of the cash ratio is 0.3461, with a range of 0.1666-3.5636, suggesting that most of the firms experienced low liquidity relative to their short-term financial liabilities. The cash ratio is the most conservative liquidity measure compared with other liquidity ratios, as only cash can be used to pay short-term liabilities when due. This is one reason many creditors rely on the cash ratio when assessing a debtor's creditworthiness. However, a large amount of cash retained in the company may also indicate poor asset utilization, as this idle capacity could be deployed in profitable investments.
The mean value of the debt ratio is 0.4105, with a range of 0.0567-5.8410, suggesting that most of the firms under observation have moderate leverage. Although higher debt indicates that a company has greater opportunities to grow in the future, it also indicates exposure to insolvency and/or bankruptcy. Most of the firms in the sample are asset-intensive, which is reflected in the higher debt ratio. Further, a debt ratio of around 0.4105 is considered moderate, as these firms tend to have stable cash flows.
The mean value of FCF is 0.0195, with a range of −3.1892 to 2.6124, suggesting that most of the firms under observation have moderate FCF. A positive FCF indicates the company's ability to generate cash that can be distributed to shareholders or used for re-investment, and such a firm may be categorized as healthy. However, a negative FCF does not always mean that the company is in a poor state, as it can be a sign of large re-investment whose returns may later be available for distribution to shareholders. Although the sample in this study exhibits a lower FCF, this does not mean that the firms in the sample are not profitable.
The mean value of CEO gender is 0.0714, with a range of 0-1, suggesting that only 7.14% of the CEOs in the sample are female. The low number of female CEOs may be due to the fact that: (1) most firms fill only the minimum number of female directors required by the Stock Exchange Commission (SEC); (2) most firms are owned by families, family groups, or large institutions; and (3) women's empowerment is still uncommon in Asian culture. Therefore, most firms tend merely to comply with the corporate governance rules.
The mean value of female CEOs' academic qualification is 1.9333, with a range of 1-4, suggesting that most female CEOs hold an undergraduate degree. The mean value of female CEOs' place of academic attainment is 0.4694, with a range of 0-1, suggesting that 53.06% of female CEOs graduated from a foreign university. The mean value of female CEOs' academic major is 0.0714, with a range of 0-1, suggesting that 92.86% of female CEOs graduated with a business major. The mean value of female CEOs' age is 38.5459, with a range of 28-51, suggesting that most female CEOs are in their productive years. The mean value of female CEOs' years of experience is 5.6416, with a range of 5-23, suggesting that most female CEOs are in the initial stage of their management career. The mean value of the percentage of female directors on the board is 0.1075, with a range of 0-1, suggesting that only 10.75% of directors on the board are female. The mean value of CFO gender is 0.2857, with a range of 0-1, suggesting that 28.57% of CFOs are female. Table 2 provides the summary of regression results for all risk proxies (cash ratio, debt ratio, and FCF). The CEO gender coefficient for the debt ratio is negative and statistically significant, suggesting that the presence of female CEOs lowers the firm's risk in terms of the debt ratio. The CEO gender coefficient for the cash ratio and FCF ratio is positive and statistically significant, suggesting that the presence of female CEOs has a positive impact on the firm's risk position in terms of the cash ratio and FCF ratio. This result is in line with Muldrow and Bayton (1979), Martin et al. (2009), Elsaid and Ursel (2011), and Schmidt and Traub (2002), who found that female directors prefer to take less risk than male directors. The CEO academic qualification coefficient for the debt ratio, cash ratio, and FCF ratio is positive and statistically significant, suggesting that the higher the academic qualification of female CEOs, the lower the firm's risk. As the risks are proxied by the debt ratio (DR), cash ratio (CR), and FCF ratio, the positive sign of the coefficient indicates that higher DR, CR, and FCF reduce the firm's risk. Although a higher debt ratio may indicate higher risk, most of the firms used in this study are relatively large, so a higher debt ratio may not be an indication of higher risk; rather, it may indicate that the firm has a high likelihood of stable future earnings given the prospective investments financed by the debt issued. The result is consistent with Smith et al. (2006), who found that female directors with higher academic qualifications increase firm performance. This may be due to the pool of talent in the boardroom required to govern the company and add firm value (Carpenter & Westphal, 2001). Academic qualifications can support board decision-making, as qualified and skillful board members can be considered a resource that provides a strategic linkage to different external resources (Ingley & van der Walt, 2001).
The CEO place-of-attainment coefficient for the debt ratio and FCF ratio is negative and significant, suggesting that CEOs who graduated from a domestic university have a negative impact on the firm's risk compared with CEOs who graduated overseas. Meanwhile, the place-of-attainment coefficient for the cash ratio is negative but not significant, suggesting that CEOs who graduated from a domestic university have no impact on the firm's risk. Further, the CEO academic major coefficient for all risk proxies (DR, CR, and FCF) is positive and significant, suggesting that CEOs with a business degree have a significant and positive impact on the firm's risk.
The CEO age coefficient for the debt ratio and FCF ratio is negative and not significant, suggesting that CEO age has no impact on the firm's risk (DR and FCF). Meanwhile, the CEO age coefficient for the cash ratio is positive and significant, suggesting that CEO age has a significant impact on the firm's risk, indicating that the younger the CEO, the better the firm's risk position. Further, the CEO years-of-experience coefficient for the debt ratio and cash ratio is negative and significant, suggesting that CEOs with longer tenure lower the firm's risk. Meanwhile, the years-of-experience coefficient for the FCF ratio is positive and not significant, suggesting that tenure has no impact on the firm's risk (FCF).
The percentage-of-female-directors coefficient for the debt ratio is negative and not significant, suggesting that a higher number of female directors in the boardroom has no impact on the firm's risk (debt ratio). Meanwhile, the coefficient for the cash ratio and FCF ratio is positive and significant, suggesting that a higher number of female directors in the boardroom has a significant impact on the firm's risk: the higher the number of female directors, the lower the firm's risk (CR and FCF). As explained previously, higher CR and FCF reduce the firm's risk because of a higher level of liquidity and the availability of funds for the firm's daily operations.
The CFO gender coefficient for the debt ratio is negative and significant, suggesting that a female CFO lowers the firm's risk in terms of the debt ratio. The CFO gender coefficient for the cash ratio and FCF ratio is positive but not significant, suggesting that female CFOs have no impact on the firm's risk in terms of the cash ratio and FCF ratio. The result is consistent with the notion that female CFOs are more conservative in dealing with accounting policies (Francis et al., 2010), less likely to manipulate earnings statements (Chava & Purnanandam, 2010), and less likely to issue long-term debt and make significant acquisitions (Huang & Kisgen, 2013). The finding is also consistent with Huang and Kisgen (2013), who found that female CFOs are more risk averse than male CFOs when making financial decisions. The firm size coefficient for all proxies is positive and significant, suggesting that bigger firms with female directors present have lower risk.
Comparing the firm-risk attitude of female CEOs and female CFOs, Table 2 shows that the coefficient for female CEOs is higher than the coefficient for female CFOs; hence, we reject the hypothesis that female CFOs have more influence on the firm's risk attitude. It can be concluded that female CEOs have more influence on the firm's risk attitude for all three risk proxies (debt ratio, cash ratio, and FCF ratio). Based on the debt ratio proxy, the coefficients for female CEOs and female CFOs are both negative and significant; therefore, the presence of female CEOs and female CFOs can minimize risk in the long term. For the short-term proxies (cash ratio and FCF ratio), the presence of female CEOs and female CFOs can increase the firm's liquidity and hence lower the firm's risk in the short term. Moreover, higher liquidity indicates that the firm is better at managing its daily operational financing activities.
Conclusion
In sum, the findings for all three categories of explanatory variables, along with their control variables, show only slight differences in coefficient values across the risk metrics (DR, CR, and FCF). Almost all coefficient signs and significance levels point in the same direction with similar significance. The results reveal that the presence of female CEOs lowers the firm's risk for all risk proxies. Female CEOs with a higher academic qualification, an overseas qualification, and a business degree tend to lower the firm's risk. The results also indicate that younger CEOs and CEOs with longer tenure tend to lower the firm's risk. A higher number of female directors in the boardroom has a significant impact on the firm's risk: the higher the number of female directors, the lower the firm's risk (CR and FCF). In addition, female CFOs tend to lower the firm's risk, as female CFOs are believed to be more conservative in dealing with financial issues.
version: v3-fos-license

added: 2020-02-13T09:12:37.080Z
created: 2020-02-11T00:00:00.000
id: 216170991
metadata:
{
  "extfieldsofstudy": ["Biology", "Medicine", "Materials Science"],
  "oa_license": "CCBY",
  "oa_status": "GREEN",
  "oa_url": "https://doi.org/10.1364/boe.389345",
  "pdf_hash": "8718a1ce8f947b7b43c0d36ab5d9c500e0b7d85d",
  "pdf_src": "BioRxiv",
  "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44430",
  "s2fieldsofstudy": ["Physics"],
  "sha1": "e7f5ce91a98dbbb91360aaeed1c895783b86dfb6",
  "year": 2020
}
source: pes2o/s2orc
Cell phone digital microscopy using an oil droplet
We introduce an accessible imaging method using droplets of microscope immersion oil and consumer-grade cooking oils together with a cell phone camera. We found that oil droplets were more resistant to evaporation than water droplets. We characterized the transverse magnification of oil and water droplets using cell phone screens and a resolution target. We further exploited the fact that the refractive index of cooking oils is close to that of immersion oil and demonstrated their use as lenses for cell phone microscopy. Our method enables stable droplet-based optical imaging without specialized setups or manufacturing processes.
Introduction
Cutting-edge optical microscopy is currently in high demand in the fields of medicine and biology research. Nevertheless, in low-resource settings where accessibility is limited, the ability to quickly assess the morphology and size of a biological specimen beyond what the human eye can see is of practical interest. In response to the demand for access to low-cost microscopy for educational and diagnostic purposes, several researchers have developed new microscopy devices by attaching lenses and other types of devices to smart phones to perform brightfield, darkfield, fluorescence, and polarized imaging [1][2][3][4][5][6][7]. An added benefit of smart phones is that they can be used to transmit high-quality images through Multimedia Messaging Services (MMS). As of 2007-2008, the percentage of people who had MMS in countries across Africa ranged from 1.5% to 92.2% [8]. This increasingly widespread cellular connectivity could be harnessed to facilitate more rapid scientific communication between individuals and to increase accessibility to microscopy for diagnostic and educational purposes on demand.
Previous studies have demonstrated the ability to capture clinically useful microscopic images using a ball lens placed in front of the camera lens of a cell phone [9,10]. One study developed on-demand lenses by heat-curing polydimethylsiloxane (PDMS) plano-convex lenses in order to conduct cell phone imaging without attachments such as ball lenses or accessory devices [11]. While uncured PDMS becomes too thin to function as an effective lens, heat-cured PDMS maintains its droplet shape well, which has been shown to enable a magnification of up to 120 times and a resolution of up to 1 micron [11]. Other studies have explored tunable liquid lenses whose focal lengths are adjusted through variations in the pressure distribution in a liquid-containing chamber using a temperature-sensitive or pH-sensitive hydrogel ring [12]. While these inexpensive lenses enable a vast range of imaging applications with ease of operation, the distribution of custom-made lenses to low-resource areas has become a major bottleneck. Moreover, a growing number of custom applications calls for the on-demand design of accessible, cost-effective lenses. Here, we evaluate the use of simple and accessible materials for optical imaging.
Water droplets are easy to make and do not require specialized fabrication processes, so they can serve as useful tools for microscopy [13]. Nevertheless, water droplets have two significant limitations for imaging applications. First, water droplets evaporate rapidly under ambient conditions, which changes their focal length over time. To achieve optical amplification, water droplets often have small volumes, i.e., less than 10 µl. Temperature, air flow, and humidity affect the evaporation rate, which quickly diminishes the optical magnification. Some studies have used methods to reduce the rate of water droplet evaporation, enabling longer imaging sessions while maintaining a consistent focal length. However, even with these methods, water droplet imaging is still very time-limited due to evaporation. One study used a plastic container with wet paper next to the water droplet to maintain a consistent water vapor pressure, which kept the focal length of the water droplet constant for two hours [13]. Another study used spherical water droplets at the tip of a syringe needle as lenses and coated them with silicone oil to reduce evaporation so that the water droplets could be used for an hour [14]. While each of these studies succeeded in developing a more flexible approach to water droplet microscopy, it may be difficult to conduct certain microscopy experiments within only a one- to two-hour working time. Therefore, one of our goals was to develop a method that allows liquid droplets to be used over a much longer period of time. In addition to evaporating quickly, water-based lenses display optical aberration due to the refractive index mismatch between water and glass. Since most biological specimens are mounted on a coverglass, which has a refractive index of 1.515 compared with 1.33 for water, optical refraction at the interface can degrade image quality. In this report, we investigate the use of oil droplets for smartphone microscopy to obtain images of biological samples.
We started by demonstrating the use of index-matched immersion oil droplets for stable optical imaging, and then extended the method to household cooking oils. We obtained the refractive index values of common household liquids from the International Gem Society [15]. For instance, safflower, peanut, and sesame oil have refractive indices around 1.47-1.48, close to the refractive index of immersion oil at 1.515 [16]. Palm oil has a slightly lower refractive index of 1.46-1.47 [17]. In this study, we decided to compare droplets made of corn oil, canola oil, and olive oil.
Droplet magnification analysis
To characterize the optical amplification of droplets, we first used cell phone screens for illumination and measured the amplified screen pixels through each droplet (Fig 1). To prepare a series of droplet "lenses," we used a micropipette to place droplets with volumes ranging from 1-5 µL in a row on a No. 1.5 (170±5 µm) coverglass. For precise pipetting of the oil droplets, we prewarmed the immersion oil (Nikon Immersion Oil Type F, index = 1.518) in a 37 °C water bath to reduce its viscosity. We lifted the coverglass above the cell phone screen on a stack of three glass slides measuring approximately 30 mm in total height. We then placed the cell phone camera approximately 80 mm above the droplet and captured pictures of the droplets against a solid white image displayed on the cell phone screen.
We imported the captured images into ImageJ and measured the size of the amplified pixels through each droplet lens. The pixel size was defined as the distance between adjacent red, green, or blue pixels near the center of the droplet. The dimensions of the image were calibrated using the width of the cell phone screen, obtained from the manufacturer's specifications. The physical size of each pixel was also derived from the pixels per inch (PPI) value given by the manufacturer. We then used the magnified pixel values to calculate the magnification factor produced by each droplet and plotted the magnification factor as a function of droplet volume. For the iPhone Xs, we determined the pixel dimension to be 55.46 µm and the screen width to be 6.22 cm. For the Huawei Honor 7X, we determined the pixel dimension to be 62.41 µm and the screen width to be 6.70 cm.
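The conversion from the manufacturer's PPI specification to a physical pixel pitch, and from the measured (apparent) pixel spacing to a magnification factor, is simple arithmetic; the short sketch below reproduces it (the PPI values correspond to the published screen specifications, and the measured spacing is a hypothetical example, not a value from the paper):

```python
# Transverse magnification of a droplet lens estimated from cell phone screen pixels.

def pixel_pitch_um(ppi: float) -> float:
    """Physical pixel pitch in micrometres from a pixels-per-inch rating."""
    return 25.4 / ppi * 1000.0  # 25.4 mm per inch

def magnification(apparent_pitch_um: float, ppi: float) -> float:
    """Magnification = apparent pixel spacing seen through the droplet / true pixel pitch."""
    return apparent_pitch_um / pixel_pitch_um(ppi)

print(pixel_pitch_um(458))        # iPhone Xs, 458 PPI      -> ~55.46 um
print(pixel_pitch_um(407))        # Huawei Honor 7X, ~407 PPI -> ~62.4 um
print(magnification(521.0, 458))  # hypothetical measured spacing -> ~9.4x (cf. the 2 uL oil droplet)
```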
Fig. 1.
Schematic illustration of a cell phone being used to capture an image of a water droplet that is magnifying a biological sample. Pixels of known size are used to quantify the optical magnification of the droplet.
USAF resolution target analysis
In order to study the resolution of an immersion oil droplet, we used a white light source and a Huawei Honor 7X cell phone to capture images of an oil droplet magnifying a Positive 1951 USAF test target (Thorlabs R1DS1P).
Making of cooking oil droplets
To keep the experiment simple and accessible, we also used cooking oil as a source of magnification. We obtained consumer-grade corn oil, canola oil, and olive oil.
Imaging biological samples using immersion oil
After we measured the resolution of the oil droplets, we used a white light source to illuminate two biological slides containing an onion epidermis and a zea stem cross section. We used an oil droplet to magnify the biological samples and captured images of the magnified samples using the Huawei Honor 7X cell phone. The cell phone images were compared with images taken using a Plan Apo λ 20x NA 0.75 objective on a Nikon Eclipse Ti-E2 microscope with a Photometrics Prime 95B back-illuminated sCMOS camera.
Comparison of smartphone images obtained using immersion and cooking oils
Once we compared the images we obtained using immersion oil to those captured with the Nikon microscope, we prepared a series of 1-5 µL droplets of immersion, canola, olive, and corn oil on glass coverslips. We then captured images of the zea stem cross section and onion epidermis using each set of oil droplets.
Oil droplets are more resistant to evaporation
Using a cellphone camera to acquire magnified images through a plano-convex lens formed by a water droplet practically constitutes a two-lens system. When using water as a droplet lens, evaporation causes constant change to the radius of curvature and effective focal length of the droplet. We compared the evaporation rates of droplets made of water, immersion oil, and corn oil on a glass coverslip. After 20 minutes at room temperature in an indoor laboratory setting, water droplets smaller than 5 µl completely evaporated, whereas both immersion and corn oil droplets maintained their shape and volume (Fig 2).
Fig. 2.
Comparison of droplets made of water, immersion oil, and corn oil at room temperature for 20 minutes.
Characterizing optical resolution using screen pixels and a resolution target
We utilized the known sizes of cell phone screen pixels to quantify the optical magnification. Fig 3A shows the amplified images of pixels on an iPhone Xs screen through the 5 µl and 2 µl droplets. The optical amplification of water and oil droplets on the coverglass was similar, in the range of 3-5x, when the droplet volume was greater than 3 µl (Fig 3B). Smaller oil droplets exhibited much higher magnification than water droplets: the 2 µl oil droplet achieved a magnification of 9.4, about 77% higher than the equivalent water droplet. The higher magnification is likely attributable to the higher surface tension of oil; the contact angle between the oil droplet and the glass is much larger than that between the water droplet and the glass. This property creates a higher-power positive lens and, together with the lower tendency to evaporate, renders immersion oil droplets excellent optical elements for magnifying biological specimens. At 1 µl, spherical aberration from the oil droplet is significant, making it challenging to quantify the magnification.
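As a rough way to see why a larger contact angle yields a stronger lens, one can model the droplet as a spherical cap acting as an idealized thin plano-convex lens, so that the cap's radius of curvature follows from its volume and contact angle and the focal length is approximately f = R/(n − 1). This model and the contact angles used below are our illustrative assumptions, not measurements or analysis from the paper:

```python
import math

def cap_radius_mm(volume_ul: float, contact_angle_deg: float) -> float:
    """Radius of curvature R (mm) of a spherical-cap droplet of given volume (1 uL = 1 mm^3)."""
    theta = math.radians(contact_angle_deg)
    # Spherical-cap volume: V = (pi/3) * R^3 * (1 - cos t)^2 * (2 + cos t)
    factor = (math.pi / 3.0) * (1.0 - math.cos(theta)) ** 2 * (2.0 + math.cos(theta))
    return (volume_ul / factor) ** (1.0 / 3.0)

def focal_length_mm(volume_ul: float, contact_angle_deg: float, n: float) -> float:
    """Thin plano-convex lens estimate: f = R / (n - 1)."""
    return cap_radius_mm(volume_ul, contact_angle_deg) / (n - 1.0)

# Hypothetical contact angles: a flatter droplet versus a taller, higher-contact-angle droplet.
print(focal_length_mm(2.0, 40.0, 1.33))   # water-like droplet,  n = 1.33  -> ~7.1 mm
print(focal_length_mm(2.0, 60.0, 1.518))  # immersion-oil-like,  n = 1.518 -> ~2.8 mm (stronger lens)
```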
To demonstrate the enhancement of optical resolution from the oil droplet, we compared two images of the USAF resolution target with and without the oil droplet. Fig 3C shows a typical image of the resolution target. The right panel shows an image of the same resolution target with an oil droplet, outlined by the dotted circle, above the group 5 elements. While the standard cell phone image failed to resolve any elements within this group, the image magnified by the oil droplet resolved several elements. The line profile plots of the vertical and horizontal elements (Fig 3D) demonstrate the ability of the oil droplet to resolve vertical element 4 and horizontal element 3. The line width of element 3 corresponds to an optical resolution of 12.40 µm. We note that this resolution is likely to be sensitive to the distance of the object from the droplet as well as the distance between the cell phone camera and the droplet. We demonstrate here a simple and versatile method to achieve enhanced optical resolution for bioimaging.
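The quoted 12.40 µm follows from the standard USAF 1951 target formula, reproduced in the short sketch below (this is the standard target convention, not an equation given in the paper): the resolution of group g, element e is 2^(g + (e − 1)/6) line pairs per mm, and the bar width is half the line-pair period.

```python
def usaf_lp_per_mm(group: int, element: int) -> float:
    """Resolution of a USAF 1951 element in line pairs per millimetre."""
    return 2.0 ** (group + (element - 1) / 6.0)

def usaf_line_width_um(group: int, element: int) -> float:
    """Width of a single bar (half the line-pair period), in micrometres."""
    return 1000.0 / (2.0 * usaf_lp_per_mm(group, element))

print(usaf_lp_per_mm(5, 3))      # ~40.3 lp/mm
print(usaf_line_width_um(5, 3))  # ~12.40 um, matching the resolved horizontal element 3
```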
Resolving cellular structures using an immersion oil droplet
After we examined the resolution of the oil droplets, we used them to obtain images of biological samples, including a zea stem and an onion epidermis. As shown in Fig 4, the images obtained using the cell phone and oil droplets showed the same structures as those obtained using a 20x/0.75 objective on a Nikon Eclipse Ti-E2 inverted microscope. While the Nikon image had a visibly higher resolution than our oil droplet images, the oil droplets enabled us to view biological structures that would have been impossible to see with the naked eye. For instance, when the onion epidermis was magnified with the oil droplet, the shapes of individual cells were visible. In the zea stem cross section, the xylem and phloem of the plant vascular structure were also visible, although they were less resolved in the oil droplet image than in the Nikon image.
Fig. 4.
A comparison between magnified oil droplet images and images obtained using a Nikon Eclipse Ti-E2 inverted microscope with brightfield illumination and a 20x/0.75 objective. The cell phone images were taken by illuminating an onion epidermis (top) and a zea stem cross section (bottom) with a white light source and using an oil droplet (outlined with a blue circle) to magnify the images. The middle images are high-magnification views of the images on the left, and the images on the right were captured using the Nikon microscope. The biological samples were obtained from AmScope.
Comparing immersion oil and cooking oil droplets for optical imaging
In spite of its ideal optical properties, immersion oil may present significant barriers in low-resource settings due to its cost and low accessibility. For this reason, we explored the use of consumer-grade oils, which were historically used for immersion microscopy before synthetic immersion oils became commercially available. Before synthetic oils became the standard, natural oils such as cedar tree oil and castor oil were typically used [18]. More recently, castor oil has been used to obtain immersion objective images of lymphocytes in metaphase, producing images comparable to those taken using synthetic immersion oil.
We obtained canola oil, olive oil blend, corn oil, and Nikon immersion oil to evaluate their performance for smartphone microscopy. Two sets of coverslips were prepared for each oil; one coverslip contained droplets of unknown volume and the other contained droplets of known volume. Approximately 1 mL of each oil was placed into four conical centrifuge tubes for preparation of the oil droplets. A pipette tip was then used to transfer small amounts of oil from the centrifuge tubes to the coverslips, forming three rows of oil droplets of unspecified volume. After the droplets of unknown volume were prepared, the centrifuge tubes containing each oil were heated to 37°C for ten minutes in a hot water bath to reduce the viscosity of the oils. We then prepared one coverslip for each heated oil onto which we placed droplets ranging from 2 to 4 µL using a micropipette.
Once our oil droplets were prepared, we used them to obtain a series of images of the zea stem and onion epidermis samples. Roughly similar color and resolution were achieved with cooking oil droplets and with immersion oil droplets. Generally, images taken using 4 µL droplets were less magnified and less resolved than images taken using 3 µL and 2 µL droplets, as expected (Fig 5). However, several images deviated from this trend; for instance, the 3 µL and 4 µL immersion oil droplets appeared to provide approximately equal resolution.
Discussion
In the future, liquid droplets could be used to design flexible, low-cost microscopy systems for educational or diagnostic purposes. Immersion oil, in particular, could serve as a semi-permanent lens for viewing microscopic structures, since it does not evaporate nearly as quickly as water and has the same refractive index as glass. As can be seen in Fig 2, while the row of water droplets evaporated almost entirely in about twenty minutes, the oil droplets did not decrease in size over the measured time. Therefore, if an oil droplet lens of a specific size were needed repeatedly, the same droplet could be reused across multiple imaging sessions, saving time and retaining the previously established magnification factor. Whereas glass lenses can be expensive and require extensive manufacturing processes, oil droplet lenses only require a coverslip and a plastic tool to transfer the oil. While we used a micropipette to create our oil droplets, a micropipette is only needed when exact volumes of oil are desired, which is not required to obtain qualitative images of biological samples. Furthermore, since several oil droplets can be made quickly, it is easy to prepare a series of droplets of increasing magnification spanning either a wide or a narrow range, depending on what type of images are being captured. Therefore, microscopy using oil droplets and cell phones for sample illumination and image capture is a promising yet simple approach that could be refined in the future to potentially conduct quantitative and qualitative analyses of biological samples.
In our experiments, we demonstrated that cooking oil may be a useful, cost-effective means of obtaining images of biological samples in a low-resource environment. We first showed that images obtained using immersion oil droplets resolve features down to 12.4 microns, a significant improvement over the naked eye. We then compared the immersion oil droplet images to images obtained using a conventional Nikon microscope, which showed that the structures viewed through immersion oil lenses were consistent with those seen through conventional lenses. To address cost-effectiveness in a low-resource setting, we experimented with cooking oils as droplet lenses, since oils such as castor oil have a history of use in immersion microscopy because their refractive indices are close to those of glass and of synthetic immersion oil. We found that images obtained using cooking oil droplets were similar to those obtained using immersion oil droplets, and that oil droplets could be prepared with or without a micropipette. While we used a pipette tip to transfer the oils when preparing droplets of non-specified volume, a less expensive alternative, such as a plastic stick, could be used in a low-resource setting. Although we were able to produce useful images with cooking oil droplets, the resolution and focus that we achieved were at times inconsistent. This was likely due to settings that are pre-programmed into cell phones, which make them less ideal imaging tools for scientific applications. This obstacle is a potential avenue for improvement that may be explored in future studies.
While cell phones are designed to be cost-effective and simple to use, cell phone imaging often sacrifices quality and versatility for simplicity. Studies have shown that cell phones have certain built-in limitations that reduce their accuracy and hinder their use as quantitative tools for diagnostic microscopy 19 . One of the key differences between scientific and cell phone microscopes is that scientific imaging systems allow users full control over the camera or microscope settings, whereas cell phone cameras are programmed with default settings, some of which cannot be changed easily. Many smartphones use autofocus, which can lead to inaccurate size quantification in microscopy because it can change the apparent size of a structure, and hence the magnification, by as much as 6% 19 . Another feature that complicates cell phone microscopy is the automatic image processing programmed into cell phones, such as noise reduction and image compression. These are useful for generating pleasing images in everyday photography, but they often lead to a loss of information and inconsistent imaging, and thus to inaccurate quantitative analysis in cell phone microscopy 19 . For instance, the image quality in Figs 3-5 is subject to the image processing algorithm within the cell phone. We expect the use of a calibration standard, such as the known pixel size of a cell phone screen, to mitigate these uncertainties and enable quantitative imaging.
There are several challenges that may emerge when working with cooking oil droplets; for instance, in an outdoor environment, droplets may attract dust particles, which could deteriorate image quality. While our method is only intended for short-term use, ranging from a few minutes to several days, it may be useful to take measures against possible contamination from dust or other particulates. This may be achieved by enclosing coverslips with oil droplets in a container, such as a petri dish. Another potential issue is that some vegetable oils, such as sesame oil and olive oil, are naturally colored, which may affect the apparent color of the biological samples in images obtained with a cell phone. In our experiments there was no noticeable color difference between the images obtained using olive oil and those obtained using other types of vegetable oil. Nevertheless, it may be valuable to conduct smartphone imaging experiments comparing different brands of several types of vegetable oils to determine whether color differences arise between them.
In the future, it may also be useful to consider cost-efficient methods to increase the tunability of vegetable oil droplet lenses, which would provide more flexibility for low-resource cell phone microscopy. For instance, surface properties such as polarity can affect the wetting and shape of the droplet, and these properties could be exploited in future studies to tune cooking oil droplet lenses. Taken together, the ease of operation of droplet-based bioimaging can extend discoveries from medical and biology researchers to the hands of field workers and educators.
Conclusion
We present an accessible imaging method using evaporation-resistant oil droplets and the cell phone camera. The attainable optical resolution enables direct observation of cellular structures in plant tissue samples. We further demonstrate the applicability of household oils for optical imaging. Combined with the versatility of capturing and sending digital images through the mobile network, our study lays the groundwork for an attractive optical technology for improving healthcare in low-resource settings with a minimal footprint.
Probabilistic Prognosis of Environmental Radioactivity Concentrations due to Radioisotopes Discharged to Water Bodies from Nuclear Power Plants
Because of the very low values involved, comparing the contribution of nuclear power plants (NPPs) to environmental radioactivity with modelled values is recognized as complex. In order to compare probabilistic prognoses of radioactivity concentrations with environmental measurements, an exercise was performed using public data on routine radioactive discharges from three representative Spanish nuclear power plants. Specifically, data on liquid discharges from three Spanish NPPs (Almaraz, Vandellós II, and Ascó) to three different aquatic bodies (a reservoir, the coast, and a river, respectively) were used. Results obtained with generic conservative models, together with Monte Carlo techniques for uncertainty propagation, were compared with radioactivity concentrations measured in the environment in the surroundings of these NPPs. Probability distribution functions were inferred for the source term, used as an input to the model to estimate the radioactivity concentrations in the environment due to discharges to the water bodies. Radioactivity concentrations measured in bottom sediments were used in the exercise because of their accumulation properties. Of all the radioisotopes measured in the environmental monitoring programs around the NPPs, only Cs-137, Sr-90, and Co-60 had positive values greater than their respective detection limits. Of those, Sr-90 and Cs-137 are easily measured in the environment, but a significant contribution from radioactive fall-out due to nuclear explosions in the atmosphere exists, and therefore their values cannot be attributed to the NPPs. On the contrary, Co-60 is especially useful as an indicator of radioactive discharges from NPPs because its presence in the environment can solely be attributed to the impact of the closest nuclear facilities. All the modelled values for Co-60 showed a reasonable correspondence with measured environmental data in all cases, being conservative in two of them. The most conservative predictions obtained with the models were the activity concentrations in the sediments of a lake (Almaraz), where, on average, values two times higher were obtained. For the case of rivers (Ascó), calculated results were adequately conservative, up to 3.4 times higher on average. However, the results for coasts (Vandellós II) were in the same range as the environmental measurements, with predictions that are at most only 1.1 times higher than measured values. Only for this specific case of coasts could it be established that the models are not conservative enough, although the results, on average, are relatively close to the real values.
Introduction
In carrying out prospective assessments of the radiological impact of routine releases from a variety of industrial facilities, including nuclear power plants (NPPs), the European Union (EU), the International Atomic Energy Agency (IAEA), and others have developed models which are widely accepted.
In particular, the IAEA published the Safety Report Series No. 19 (SRS-19) document [1], where a graded approach to be used for assessing the radiological impact of discharges to the environment is described. In this approach, the use of intentionally conservative simple models is recommended for the first stages of the assessment. These models use a very limited quantity of information on the characteristics of the releases or the affected environment. More complex models are also described in this document which can be used if needed. Additionally, SRS-19 provides many values to be used by default in some parameters of those models, which are assumed to be conservative in any situation. Here conservative means that there is a low risk of underestimating the concentration values which can be found in real measurements in the environment after routine discharges, within the degree of variability.
Models or parameters can be refined with the intention of achieving a better correspondence of modelled values with measurements. However, this refinement should ideally be complemented with an evaluation or a discussion of the uncertainty associated with the assessment. This uncertainty involves many components [2,3], one of them being the uncertainty associated with the parameters used in the mathematical models. This component of the uncertainty is usually represented by the shape and size of a probability distribution function (pdf) and is then evaluated as an intrinsic part of the estimations of effective dose to the representative person [4].
Measurements of the environmental concentrations of radionuclides are usually provided as a central value (typically a mean value) and its associated uncertainty, which includes the standard deviation and other uncertainties associated with the method of measurement, such as the uncertainty of the standard used for calibration or the variability of the measured magnitude, among others. Therefore, the values obtained in environmental monitoring programs around nuclear installations are usually described as ranges instead of single values, giving a probability (usually 95% for a coverage factor k = 2) of finding the real value within the range.
Those parametric uncertainties were also included in the results of the generic simple models, in order to analyse how they affect a comparison with the ranges of measured values of radionuclide concentrations in a specific compartment of the environment. For this, IAEA's SRS-19 models were used together with public data on discharges from some Spanish NPPs to obtain prospective modelled values. Finally, a comparison of those modelled results with the measured ranges obtained in the environmental monitoring programs performed in the surroundings of these NPPs was carried out. Specifically, an environmental compartment affected by a cumulative process (sedimentation) was selected, due to the very low level of discharges produced from these installations. Given the conservatism of the models used, it is expected that the modelled results will be significantly above measured concentrations.
The present paper shows the results of those comparisons.
Materials and Methods
Many possible scenarios could be taken into account for assessing the impact of Spanish NPPs on the environment. However, it was assumed that the main differences would appear in the water bodies into which each NPP releases its liquid effluents. Three different scenarios based on real NPPs were considered:
1. Release to a reservoir. The case of Almaraz NPP was taken as representative;
2. Release to coastal waters. Vandellós II NPP was taken as the example in this case; and
3. Release to a river. Ascó NPP was used as representative of this situation.
The data for the quantities of each radionuclide annually discharged to the atmosphere and to water bodies (the source term) from Spanish NPPs (see Table 1) were taken from the published European Union Radiation Protection Series Report No. 143 [5]. The dispersion models described in SRS 19 were calculated using the Código de cRiba para la evaluaciÓn de iMpacto (CROM) code [6,7]. CROM was designed to automate the calculation of radionuclide concentrations in different compartments of the environment and their transfer to the human food chain, as well as to estimate the effective dose for humans, using generic models for transport, dilution, and transfer from SRS 19. In order to estimate the radionuclide concentrations, the quantities and types of discharged radionuclides (the source term), the mode and characteristics of the discharge, and the receptor points need to be specified. The atmospheric dispersion model is a Gaussian plume model, accounting for the effects of buildings in the vicinity of the release and the effect of the roughness of the ground, designed to assess annually averaged radionuclide concentrations in the air. The surface water models account for dispersion in rivers, lakes, estuaries, and sea coasts. These aquatic models are based on analytical solutions of advection-diffusion equations describing radionuclide transport in surface water under steady-state uniform flow conditions. All the models contain many default values that can be used in the absence of local specific information. The terrestrial food chain models accept inputs of radionuclides from both the atmosphere and the hydrosphere. The processes of radioactive decay and build-up are taken into account. The estimated radionuclide concentrations in air, soil, sediment, food, and water (calculated for 30 years of discharge) are combined with the annual rates of intake, the occupancy factors, and the appropriate dose conversion coefficients to obtain the maximum human effective dose for the representative person. Version 8 of CROM [8] allows the propagation of parameter uncertainties in the models by using Monte Carlo methods. This latest version, CROM 8, implements a default database with data for 162 radionuclides.
The comparison of prospective modelled air concentrations resulting from atmospheric discharges with measured data was considered not to be feasible under the conditions of an NPP, as discharges to the atmosphere from nuclear power plants are usually very low and undergo additional dilution in the atmosphere, resulting in air concentrations below the usual detection levels for most radionuclides. The same can be said for the concentrations in water due to routine discharges from NPPs. However, concentration mechanisms, such as accumulation in bio-indicators, in soils, or in bottom sediments, allow some radionuclides to be measured in those media with the typical detection levels of nuclear measurement techniques. For this reason, the focus of this work was placed on prospective assessments and comparison with measured radioactivity concentrations in one of these compartments: bottom sediments, which result from cumulative environmental processes in water bodies. Moreover, in this particular case, the published measured concentration values were mostly above the detection limits for those three radionuclides.
For comparing the results of the prospective assessments with the values measured in the environment, data of measurements of radionuclide concentrations in sediments collected in the vicinity of selected Spanish NPPs were used, covering three different aquatic environments: reservoirs (or lakes), rivers, and coastal waters. Measurements in the environment, obtained from routine environmental radiological monitoring programs established around all of the Spanish NPPs, are published annually by the Spanish Nuclear Safety Council (CSN), including values for radionuclide concentrations in bottom sediments of those water bodies where liquid effluents are released. For the tests carried out in this study, the period 1999-2007 was used [9][10][11][12][13] (Table 1).
For prospective assessments, data on radioactive discharges (the source term) were defined as a stochastic variable for the input data. Information on releases was derived assuming triangular distributions for the input data of release rates. This assumption was made because the information used for the discharges from each NPP consists of averaged values of the discharges for each year during a period of five consecutive years. Triangular distribution is typically used as a subjective description of a population for which there is only a limited set of sample data where a range and best estimate of the value can be identified. For triangular distributions, the minimum and maximum values of releases in the period studied were taken as parameters, using the mean value as the central tendency estimator. Table 1 shows source term data assumed for each studied NPP.
For modelling the concentrations in bottom sediments, models and distribution coefficients (Kd) taken from SRS 19 [1] were used. In particular, for the case of freshwater scenarios (Almaraz-dammed reservoir and Ascó-river), Kd values were introduced as stochastic input parameters lognormally distributed, according to data provided by the reference. For the remaining parameters used for calculations of radionuclide concentrations in bottom sediments, default values provided in the SRS 19 were adopted.
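As a rough illustration of this propagation scheme (not the SRS-19 equations or the CROM defaults), the sketch below samples a triangular source term and a lognormal Kd and pushes both through a deliberately simplified one-box concentration model; all numerical values are placeholders.

```python
# Sketch of Monte Carlo propagation of source-term and Kd uncertainty into a
# bottom-sediment concentration. The "box model" below is deliberately toy-like
# and only stands in for the SRS-19 / CROM models described in the text.
import numpy as np

rng = np.random.default_rng(seed=1)
N = 100_000

# Annual liquid discharge (Bq/y): triangular distribution bounded by the minimum
# and maximum annual values, with the period mean as the mode (placeholder numbers).
release_bq_y = rng.triangular(left=1.0e8, mode=2.5e8, right=4.0e8, size=N)

# Sediment-water distribution coefficient Kd (L/kg): lognormal (placeholder parameters).
kd_l_kg = rng.lognormal(mean=np.log(1.0e3), sigma=0.5, size=N)

# Toy equilibrium box model: dilute the release in the receiving water body and
# partition onto sediment via Kd (units handled loosely for illustration).
flow_m3_y = 5.0e8                               # placeholder receiving-water flow
c_water_bq_m3 = release_bq_y / flow_m3_y        # Bq/m3
c_sed_bq_kg = kd_l_kg * c_water_bq_m3 / 1000.0  # L/kg * Bq/m3 * (1 m3 / 1000 L) = Bq/kg

print(f"median sediment concentration : {np.median(c_sed_bq_kg):.2f} Bq/kg")
print(f"95th percentile               : {np.percentile(c_sed_bq_kg, 95):.2f} Bq/kg")
```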
Reservoirs (Almaraz NPP)
The use of conservative assumptions in the models assures that prospective assessments of concentrations in the different environmental objects should be above measured values in any real situation. Table 2 collects the modelled results obtained for this case, given the assumptions explained above, together with the range of measured data in the bottom sediments of the reservoir receiving the discharges from this specific NPP. For the case of a reservoir or a lake, a single box model which assumes instantaneous and homogeneous mixing is used. Therefore, no dependence on the point of measurement is needed. Additionally, an equilibrium state is conservatively assumed. In this particular case, measured Sr-90 values in the environment were always below the detection limits, and therefore a comparison was not possible. Although the results of the prospective assessments were expected to be well above the measured values due to the conservatism of the models, this was not the case in all the comparisons. For Cs-137, calculated and measured values are very close. This effect can be explained by the contribution of radioactive precipitation (global fallout) to the activity concentration of Cs-137 in soils, and by the subsequent cumulative processes due to the wash-out and erosion of surface soils, which contaminate the river water, followed by the continuous sorption and deposition of sediments onto the bottom. The same results were observed by several authors, who provided a similar explanation for this observation [14,15].
On the other hand, Co-60 is a relatively short-lived radionuclide (T1/2 = 5.27 y); any past contributions have therefore almost disappeared, and any presence of this radionuclide in the environment can be solely attributed to the contribution of the NPP to that environmental compartment. Table 2 and Figure 1 show the results obtained for this particular radionuclide. An acceptable correspondence is observed, with the modelled values (probability distribution shown in the figure) being adequately conservative compared with the measured values (black segment), with differences of almost one order of magnitude, as expected.
Coastal Waters (Vandellós II NPP)
In this case, Sr-90 was also included in the assessments of liquid releases, as the radionuclide is reported in the measurements of activity concentrations in bottom sediments above the detection limit.
The model applied for calculating concentrations in coastal waters uses three parameters dependent on the location of both the release and the receptor points [1]. For the scenario covering this case (Vandellós II NPP), y0 = 0 was assumed (i.e., the release is produced on the shoreline, where the measurements are also performed at a certain distance x). Information on the location of the sediment sampling points used for monitoring was not provided in the public reports. Therefore, prospective calculations were carried out at four different points covering the applicability range of the model. The points were identified as Sea 1 to Sea 4 (see Table 3), the results at Sea 2 being the most conservative for all the radionuclides. Table 3 presents the modelled results obtained for radionuclide concentrations in bottom sediments (Cbs) due to liquid releases from Vandellós II NPP and the range of reported measurement values [12]. The accumulated Sr-90 deposition from global fallout reported in [16], corrected for 47 years of radioactive decay (T1/2(Sr-90) = 28.8 y), would return a value of around 130 Bq m-2. Using typical values for soil density (1300 kg/m3) and a 5 cm depth (usually used for considering the leaching of a deposition on the surface of a soil), a value of 1.99 Bq kg-1 is obtained. This value is in very good agreement with the measured values for this particular radionuclide (1-3 Bq kg-1), supporting the hypothesis.
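The consistency check behind this hypothesis is straightforward to reproduce: decay-correct an areal fallout deposition over 47 years using the Sr-90 half-life, then convert Bq m-2 to Bq kg-1 with the assumed soil density and mixing depth. In the sketch below, the initial deposition is back-calculated purely for illustration and is not the value given in reference [16].

```python
# Reproduce the fallout consistency check: decay-correct an areal Sr-90 deposition
# over 47 years, then convert Bq/m2 to Bq/kg with the soil density and 5 cm depth
# used in the text. The initial deposition is back-calculated for illustration only.
import math

T_HALF_SR90_Y = 28.8        # Sr-90 half-life (years)
ELAPSED_Y = 47.0            # years of decay assumed in the text
SOIL_DENSITY_KG_M3 = 1300.0
MIXING_DEPTH_M = 0.05       # 5 cm leaching depth

def decay(activity: float, years: float, half_life: float) -> float:
    """Radioactive decay of an activity over a number of years."""
    return activity * math.exp(-math.log(2.0) * years / half_life)

def areal_to_mass(activity_bq_m2: float) -> float:
    """Convert an areal activity (Bq/m2) to a mass activity (Bq/kg) in the top soil layer."""
    return activity_bq_m2 / (SOIL_DENSITY_KG_M3 * MIXING_DEPTH_M)

# Hypothetical initial deposition chosen so the decayed value matches ~130 Bq/m2.
initial_bq_m2 = 130.0 / math.exp(-math.log(2.0) * ELAPSED_Y / T_HALF_SR90_Y)
decayed_bq_m2 = decay(initial_bq_m2, ELAPSED_Y, T_HALF_SR90_Y)

print(f"decayed deposition : {decayed_bq_m2:.0f} Bq/m2")                 # ~130
print(f"soil concentration : {areal_to_mass(decayed_bq_m2):.2f} Bq/kg")  # ~2.0
```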
Although there are large uncertainties associated with the lack of precise information on the location of the sampling points, a reasonable correspondence is observed between calculated results and measured data for the case of Co-60 (see Table 3 and Figure 2). Again, the presence of this radioisotope in bottom sediments can only be associated with releases from the facility. In this case, however, the conservatism expected from the prospective model is not observed.
Rivers (Ascó NPP)
The model used for rivers [1] uses the distance downstream from the release point to the shore location, together with the consideration of whether the measurement point is located on the same shore as the discharge or on the opposite shore. In this case, information on the exact location of the bottom sediment sampling points was not provided. Five points at representative distances downstream (River 1 to River 5) were considered. Table 4 shows the distances used for those points together with the main results of the calculations and the range of the measured data. Table 4 presents the modelled results obtained for radionuclide concentrations in bottom sediments (Cbs) due to liquid releases from Ascó NPP and the range of reported measured values [12] (note: S, same shore; D, different shore). As the complete lateral mixing distance for the studied case is 14,000 m, significant differences are observed in the modelled values for points located on the opposite shore and on the same shore as the discharge (see Table 4). As the exact location of the sampling points is unknown, the most conservative assumption is to consider that they are located on the same shore where the discharge is produced. Therefore, the modelled values obtained under this assumption were compared with the measurement results. Additionally, in this particular case, for those radionuclides affected by global fallout (i.e., Cs-137 and Sr-90), modelled values were below measured values in all cases. Other studies [17] performed in the same river also concluded that the levels of Sr-90 and Cs-137 in water were unaffected by the presence of Ascó NPP, attributing this radioactivity to the fall-out from former nuclear weapons tests.
For the case of Co-60, the same arguments provided above are applicable. As can be seen in Table 4 and Figure 3, a reasonable correspondence between calculated and measured values was evidenced, with a reasonable degree of conservatism.
Conclusions
The comparison of prospective calculations of the environmental radioactivity concentrations that would be caused by radioactive discharges from NPPs with measured values is obviously of interest for ensuring an adequate degree of conservatism in the models commonly used for the assessment and radiological protection of people and the environment around nuclear power plants. However, due to the very low values of discharged radioactivity, the comparison of the contribution of NPPs to environmental radioactivity with modelled values is generally complex. In this study, IAEA-recommended models for the assessment of routine discharges, assumed to be appropriately conservative in all cases, were used.
Only measurements in some environmental compartments where cumulative processes occur were above detection limits in all the studied cases, and could therefore be used for comparisons. In particular, comparisons of prospectively modelled values with measured concentrations were only performed for bottom sediments. Studies were carried out in three Spanish NPPs considered to be representative of the different aquatic bodies where discharges can be produced: Almaraz, Vandellós II, and Ascó. Specifically, three radionuclides were considered in all the cases for both the source term and for the values measured in the environment: Cs-137, Sr-90, and Co-60.
For prospective assessments of radioactive concentrations in bottom sediments, data on the source term (quantities discharged from the NPP to the receiving aquatic body) published by the European Union, together with dispersion models accepted by the IAEA for this specific situation, were used. The CROM code, which implements those mathematical models together with Monte Carlo methods for uncertainty propagation, was used for the stochastic calculations. Conservative assumptions were also used for selecting the locations at which values were modelled.
The results from the radiological environmental monitoring programs around the Spanish NPPs, publicly distributed by the Spanish nuclear regulatory body (CSN), were used to obtain the measured radioactivity concentrations in the bottom sediments.
In the comparisons, modelled results for both Cs-137 and Sr-90 remained below or very close to measured values in all three cases. In fact, for one of them (Almaraz), the measured Sr-90 values were below the detection limits. This phenomenon has been attributed, in this study and also in others, to the ubiquitous presence of these radionuclides in the environment, caused mainly by global fall-out. A good agreement was obtained between the estimation of such fall-out from nuclear weapons tests at the locations of the NPPs and the reported measurements. In conclusion, the values obtained in the measurements of radioactivity concentrations in the environment around the NPPs cannot simply be attributed to the continuous discharges produced in normal operation. In other words, routine releases from nuclear facilities induce increases in the environmental concentrations of Cs-137 and Sr-90 which cannot be detected against the existing variability of those radionuclides in the environment.
Co-60 is the only radionuclide reported in all the measurements performed in the environment around all the studied NPPs which, due to its short radioactive half-life, is expected to be present in the environment exclusively as a result of discharges from nuclear facilities, in the absence of other sources discharging this radioisotope. Therefore, this is the only radionuclide that could be used as an indicator in this study.
Given the conservatism of the models used for the study, the results of the prospective assessments were expected to be above the concentrations measured in the environment (by around one order of magnitude). The modelled values for Co-60 showed a reasonable correspondence with measured environmental data in all cases, but were conservative in only two of them. The most conservative predictions obtained with the models were the activity concentrations in the sediments of a lake (Almaraz), where, on average, values two times higher were obtained (51 Bq kg-1 modelled against 26 Bq kg-1 measured). For the river (Ascó), calculated results were adequately conservative, up to 3.4 times higher on average (4.2 Bq kg-1 modelled against 1.25 Bq kg-1 measured). In this case of a river, the importance of location should be pointed out, as overestimation cannot always be assured when the location is not conservatively selected. Finally, for coasts (Vandellós II), prospective results were in the same range as the environmental measurements, with predictions at most 1.1 times higher than measured values. Therefore, for the coastal model it can be established that the models were not conservative enough, although the results, on average, were relatively close to the real values.
The scarcity of measured data and information on the location of sampling points did not allow a more precise comparison of prospective assessments with measured values. However, this was a good exercise for testing the degree of conservatism of generic models applied in specific conditions. Carrying out similar exercises using a greater number of measured values, lower detection limits, and precise information on the sampling would be beneficial for further comparisons.
Quality of Patients' Dying and Death Experience in Mansoura University Hospitals: Nurses' Perception
Death is fundamental to the nature of being human. Critical care nurses and oncology nurses care for dying patients daily. The process of dying in intensive care units (ICUs) and oncology departments is complicated, and research on the quality of end-of-life care and the dying experience is limited in Egypt. The main aim of the current study was to describe the quality of the dying and death experience of patients as perceived by nurses working in the oncology department and ICUs of Mansoura University Hospitals, and to compare nurses' perception in the two clinical settings. The sample involved 90 nurses (45 critical care nurses and 45 oncology nurses). Data were collected using a questionnaire sheet which gathered information about nurses' demographic characteristics, and the modified version of the Quality of Death and Dying questionnaire, which elicited nurses' perception of patients' dying experiences in ICUs and the oncology department. The majority of nurses reported that their patients were unable to feed themselves and did not spend enough time with their families during the end-of-life period. Nurses also reported that their dying patients suffered pain, nausea and/or vomiting. More than half of the nurses mentioned that their patients were not fully aware that they were dying and were not afraid of death. The findings of the study showed that dying cancer patients suffered significantly more pain and nausea than ICU patients. More patients in the oncology department than in the ICUs had their family members with them while dying. The findings of this study provide a rounded picture of the experience of dying patients in ICUs and the oncology department. Such information can be used as a guide to enhance dying patients' experiences and improve end-of-life care in Egyptian hospitals.
The quality of end-of-life care has become a major agenda for patients, families, and the loved ones of persons near death, as well as for the health care professionals, researchers, and policy makers who organize and provide care. (2) End-of-life care is defined by the World Health Organization (WHO 1998) (3) as "the active, total care of patients whose disease is not responsive to curative treatment". The philosophy of this care is to attain maximal quality of life through control of the myriad physical, psychological, social, and spiritual distress of the patient and family. (4) The quality of end-of-life care has been receiving an increasing amount of attention in Egypt, in response to an increasing number of deaths occurring in Egyptian hospitals. It has also become an important issue for nurses, particularly those who work in areas where the death rate is high, such as ICUs and the oncology department.
What happens at the end of life is receiving the attention of researchers, policy makers and the public at large, influenced by debates on physician-assisted suicide, by scientific and technological advances that can prolong life, and by the challenges of facing death and providing comfort in dying. (2) Furthermore, with the recent emphasis on clinical governance, patient-centered care and patient choice, coupled with an increase in the number of complaints about issues related to death in hospitals, much has been written about the quality of death and dying. (5) In parallel to this has been the recognition of those factors that make for a 'bad death' and those important for a 'good death'. Issues that contribute to the suboptimal care of patients dying in hospital are said to include a lack of open communication, difficulties in accurate prognosis and a lack of planning of end-of-life care. (6) In contrast, common themes contributing to a good death include control, autonomy and independence, not only for issues such as pain and symptom control, but also for the place of death, who should be present at the time of death and the maintenance of privacy. The importance of access is also stressed, not only to information and expertise, but also to spiritual and emotional support. (7) Ekiria Kikule (2003) (8) stated that a "good death" in a developing country occurs when the dying person is being cared for at home, is free from pain or other distressing symptoms, feels no stigma, is at peace, and has their basic needs met without feeling dependent on others. The quality of death is defined by the Committee on End-of-Life Care of the Institute of Medicine as "a death that is free from avoidable distress and suffering for patients, families, and their caregivers; in general accord with the patients' and families' wishes; and reasonably consistent with clinical, cultural, and ethical standards". (9) Quality of dying is a term that may be used to describe the quality of life for dying patients. Patrick and Curtis (2001) defined quality of dying and death as 'the degree to which a person's preferences for dying and the moment of death agree with observations of how the person actually died, as reported by others'. (2) Dying patients face common and unique challenges that warrant new approaches to the measurement of their quality of life at the end of life. Quality of dying is another patient-centered outcome measure that may also be affected by the quality of medical care. (10) According to the Institute of Medicine (1997), four basic elements are required for the care of dying patients: understanding the physical, psychological, spiritual, and practical dimensions of caregiving; identifying and communicating diagnosis and prognosis; establishing goals and plans; and fitting palliative and other care to these goals. There have been several studies which investigated patients' and their families' descriptions of the elements of high-quality end-of-life care. (11)(15) Staff members who were perceived as uncaring, unfriendly, insensitive, or disrespectful had a negative influence on families' experiences. (16) Dying patients and their families valued open and ongoing communication with health care providers, especially when it was timed to their needs and allowed them to ask questions and express their feelings and concerns. (13,14,16) environment. (23) Death in the ICU environment can be complicated and is often unnatural.
(24) Central to ensuring the quality of care at the end of life are nurses' knowledge and skills in caring for dying patients and their families, and their perception of end-of-life care. In a study conducted by Asch et al. (27) to investigate critical care nurses' perception of end-of-life care in the United States of America, the results showed that nurses wished they had more say in the care of dying patients. In this study, nurses expressed their belief that there was inconsistency in the way dying patients were cared for, and that the critical care environment did not adequately foster the compassion that dying patients need.
Similarly, Cartwright et al. (28) and healing. (29) As elsewhere, it is likely that many patients still die frightened, alone and without dignity, having lost all control, feeling abandoned by health-care professionals. (5) Although one would hope that those patients known to a palliative care team are well served, palliative care is relatively under-developed in many parts of our country and has yet to be recognized formally as a specialty in its own right.
Design
A cross-sectional descriptive comparative research design was used in this study.
Setting
This instrument (30) was adapted in the current study after making some modifications. A preliminary validation study suggested that this instrument had good reliability and validity characteristics.
Methods
• Official permission to conduct the study was obtained from the hospitals' responsible authorities after explaining the aim of the study.
• A self-administered structured questionnaire sheet was developed by the researchers.
• A jury of 5 experts in the field of nursing reviewed the tools to ascertain their content validity, and necessary modifications were made accordingly.
• A pilot study was carried out on ten nurses from the ICUs and oncology departments to ensure the clarity and applicability of the tools.
• The QODD questionnaire was tested for its reliability. Test-retest reliability was computed using a small sample of nurses (10 nurses), and it was satisfactory for the current research purposes (r = 0.87); a minimal illustration of this computation is sketched after this list.
• The researchers obtained oral consents from the participants after providing an explanation for the purposes of the study.
• Data were collected during the actual visit to each setting.
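The reliability figure mentioned in the list above can be illustrated with a minimal computation; the paired scores below are fabricated for demonstration and are not the study's data.

```python
# Minimal test-retest reliability computation (Pearson r) for a translated scale.
# The paired scores below are fabricated for demonstration; the study itself
# reported r = 0.87 on a sample of 10 nurses.
from scipy.stats import pearsonr

first_administration  = [3, 4, 2, 4, 3, 1, 4, 2, 3, 4]
second_administration = [3, 4, 2, 3, 3, 1, 4, 2, 4, 4]

r, p_value = pearsonr(first_administration, second_administration)
print(f"test-retest reliability: r = {r:.2f} (p = {p_value:.3f})")
```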
RESULTS
Table 1 shows the distribution of deaths. (32) Physical components of a patient's illness and the care a patient receives in preparation for death affect it. (10) Quality measures for end-of-life care include the timely assessment and effective treatment of physical symptoms, including pain and dyspnea. (33) In addition, the management of physical symptoms is considered a primary indicator of the quality of end-of-life care. (36) Also, Tse et al. (2007) reported that pain was documented in 46.8% of all patients, reflecting that pain was still an issue of concern at the end of life. (39)2) These 84% of dying patients suffer from pain. (43)6) Kellehear (1990) asserted that the conception of a good death includes an acknowledgement of the social life of the dying and the creation of an open climate about disclosure, with the patient being aware of their impending death. (47) There is a need to improve communication with dying patients and families about diagnosis and prognosis in order to ensure that optimal communication takes place and so-called blocking behavior is avoided. (48) As death approaches, many terminally ill patients want to prepare for the end of their lives. Preparation may involve a discussion of treatment choices, financial planning, psychological acceptance of death, or coming to peace with God. (10) In the current study, more than half of the nurses stated that their dying patients were not fully aware that they were dying; accordingly, they had no choice of where they would prefer to die and no chance to discuss their end-of-life wishes. This finding is in agreement with the report of Fallowfield, Jenkins, and Beveridge (2002) (49) that physicians worldwide underestimate the information needs of their patients and the negative impact of non-disclosure practice. This underestimation can lead to withholding relevant information from the patient. (50) Cahill et al. (2001).
In a Canadian study of 126 participants from three patient groups [dialysis patients (n = 48), patients with HIV infection (n = 40), and residents of a long-term care facility (n = 38)], the participants identified five domains of quality end-of-life care. These involved adequate pain and symptom management; avoiding inappropriate prolongation of dying; achieving a sense of control; relieving burden; and strengthening relationships with loved ones. (12) Families praised health care providers who showed concern and compassion; were sensitive and open; took time to listen; and treated dying patients and their families as individual human beings. Another study examined end-of-life care from the Australian critical care nurses' perspective. In this study, nurses reported the need for better pain control measures for dying patients, emphasized the necessity of improving communication between physicians and patients, and also between physicians and nurses, and thought of themselves as important advocates for patients. While many health care disciplines are concerned about improving care at the end of life, the nursing profession is particularly well suited to lead these efforts in view of the scope and standards of advanced practice. Nursing's social policy statement indicates that nurses "attend to the full range of human experiences and responses to health and illness", taking account of the patient's subjective experience, applying scientific knowledge to the processes of care, and providing a caring relationship that facilitates health and healing. Research on the process of dying is scant, and studies on patients' and families' preferences at the end of life are limited in Egypt. From their experience in clinical care settings, the researchers were very interested to look at the end-of-life care provided for dying patients and at how patients experience dying and death in hospitals. Therefore, this study aimed firstly to provide a rounded picture of the current quality of the dying and death experience of the patient as perceived by nurses in the oncology department and ICUs, and consequently to determine what is needed in order to improve the care of dying patients in hospitals. Hence, the current study used the Quality of Death and Dying questionnaire (QODD) to describe and compare the quality of the dying and death experience of the patient as perceived by nurses in the oncology department and ICUs in Mansoura University Hospitals, and to compare nurses' perception in the two clinical settings.
The modified version of the QODD questionnaire was translated into Arabic. In order to ensure the validity of the translation, a back translation technique was used, where the questionnaire was translated from English into Arabic, and then from Arabic back into English. The final version of the translation was reviewed by an assistant professor from the English Department, Faculty of Education, Mansoura University, and the suggested modifications were made accordingly. The questionnaire consists of 14 questions which address different aspects of dying patients' experience. It was used to describe the quality of the dying and death experience of the patients as perceived by nurses in the oncology department and ICUs at Mansoura University Hospitals. The perception of nurses in the two clinical areas was then compared. Nurses were asked to rate their responses to the QODD questions on a four-point scale: 'yes', 'uncertain', 'no', and 'not applicable'.
into Arabic by the researchers. A back translation technique was used to ensure the validity of the translation. The two versions of the translation were reviewed by an expert in translation.
The questionnaire took from 5 to 10 minutes.
Descriptive statistics were used to analyze the demographic data. Chi-square tests were used to compare the quality of patients' dying and death experiences between the ICUs and the oncology department. The level of statistical significance was set at less than 0.05.
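The group comparison described here can be illustrated with a small contingency-table example; the counts below are invented for demonstration only and simply cross-tabulate hypothetical answers on the four-point scale by clinical setting.

```python
# Illustrative chi-square comparison of one QODD item between ICU and oncology
# nurses. The counts are fabricated for demonstration and are not the study's data.
from scipy.stats import chi2_contingency

# Rows: ICU nurses, oncology nurses; columns: 'yes', 'uncertain', 'no', 'not applicable'.
observed = [
    [30, 5, 8, 2],   # ICU (n = 45)
    [38, 3, 3, 1],   # oncology (n = 45)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
print("significant at the 0.05 level" if p_value < 0.05 else "not significant at the 0.05 level")
```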
nurses according to their demographic characteristics. The sample consisted of 90 nurses. The majority (81.1%) of nurses were between the ages of 18 and 28 years, with a mean of 18.8 ± 4.4 years. A total of 62.2% of the sample had a nursing diploma, and 66.7% had between 1 and 5 years of experience, with a mean of 4.1 ± 5.4 years.
These findings are congruent with the results of the current study, which illustrated that the majority of dying patients in the ICUs and the oncology department suffered from pain during their dying experience, but that dying oncology patients suffered more from pain and nausea. These findings are also supported by the 2001 report of the National Cancer Policy Board (NCPB) of the Institute of Medicine, which stated that patients, their families and caregivers suffer from the inadequate care available to patients in pain and distress. The report also emphasized that too many patients with cancer suffer needlessly at the end of their life, and that focus on the cure too often has diverted attention from the care that patients actually need. (21) Madanagopalan et al. (2005) carried out a study to investigate the quality of dying in head and neck cancer patients. They found that
RECOMMENDATIONS
1. Individuals should be helped to prepare for death by facilitating any unfinished business, for example, signing wills, contacting loved ones, and appointing a power of attorney. Patients' wishes regarding practical issues such as parenteral feeding, antibiotics and IV fluids should be explored. Each individual should also be given the opportunity to voice their wishes regarding the desired place of death and who should be present at the time of death. Spiritual and religious support with an appropriate cultural focus should be offered both to the patient and to the family after death in the context of bereavement support. (51) Achieving a sense of control for persons who are dying, and respecting the wishes of patients and their loved ones, are considered among the important goals of high-quality end-of-life care. These processes of care are sometimes linked to desirable outcomes such as improved quality of life at the end of life, a notion that has currency for both lay persons and professionals. (5) Unfortunately, the findings of the current study indicate that dying patients appeared to lose control over what was going on around them. This was supported by Emanuel et al. (1999), who believed that one of the major issues in end-of-life care tends to be the loss of control. Loss of bodily control, including the inability to feed, bathe, and toilet oneself, is certainly a frequent concern. These losses of control are associated in many people's minds with indignity and shame. (52) For many people, spirituality plays a very important role in reducing fear and increasing hope. The search for meaning or spiritual comfort in the face of death is often guided by religious and philosophical beliefs. Communication with religious advisors, selected hospice volunteers, or others with special empathy and insight may enhance comfort. (53) Clinical chaplains (or religious men) in a palliative care unit provide strength and enlightenment to help patients transcend their fear of death and prepare for a good death. (54) Research has found that if patients had contact with clinical chaplains two days before death, their fear of death was lower than that of other patients. A correlation also exists between the degree of death fear experienced and the duration of contact with the clinical chaplains.
(55) The findings of the current study documented that nearly all dying patients in both clinical settings (the oncology department and ICUs) did not have any visits from a religious man. This is very interesting considering the fact that Egypt is an Islamic country where religion and spirituality play a major role in people's daily life.
CONCLUSION
According to the results of the present study, it is concluded that the increasing institutionalization of death and dying in Egyptian society poses a major challenge to physicians and nurses, as patients continue to die undignified deaths with uncontrolled symptoms. Efforts should be made to ensure that dying patients receive appropriate end-of-life care that reduces their suffering and allows a good death. The results of the current study shed light on important aspects related to end-of-life care, such as reducing dying patients' suffering, supporting spirituality, giving patients the opportunity to express their feelings and wishes, and promoting patients' autonomy. The findings of the current study provide baseline information to guide improvements in end-of-life care in oncology departments and ICUs. Practical guidelines for the health care team on cancer pain control are recommended, and physicians' barriers to pain management in hospitals should be further explored.
study was carried out in the
Table 4. Comparison of the quality of dying and death experience of dying patients as described using the QODD questionnaire and reported by nurses in ICUs and the oncology department. (Table columns: Question; Current area of work; Chi-square test.)
** Highly significant at p < 0.001; * significant at p < 0.05.
Musical Chairs on Temperate Reefs: Species Turnover and Replacement Within Functional Groups Explain Regional Diversity Variation in Assemblages Associated With Honeycomb Worms
Reef-building species are recognized as having an important ecological role and as generally enhancing the diversity of benthic organisms in marine habitats. However, although these ecosystem engineers have a facilitating role for some species, they may exclude or compete with others. The honeycomb worm Sabellaria alveolata (Linnaeus, 1767) is an important foundation species, commonly found from northwest Ireland to northern Mauritania, whose reef structures increase the physical complexity of the marine benthos, supporting high levels of biodiversity. Local patterns and regional differences in taxonomic and functional diversity were examined in honeycomb worm reefs from 10 sites along the northeastern Atlantic to explore variation in diversity across biogeographic regions and the potential effects of environmental drivers. While taxonomic composition varied across the study sites, levels of diversity remained relatively constant along the European coast. Assemblages showed high levels of species turnover compared to differences in richness, which varied primarily in response to sea surface temperatures and sediment content, the latter suggesting that local characteristics of the reef had a greater effect on community composition than the density of the engineering species. In contrast, the functional composition of assemblages was similar regardless of taxonomic composition or biogeography, with five functional groups being observed in all sites and only small differences in abundance in these groups being detected. Functional groups represented primarily filter-feeders and deposit-feeders, with the notable absence of herbivores, indicating that the reefs may act as biological filters for some species from the local pool of organisms. Redundancy was observed within functional groups that may indicate that honeycomb worm reefs can offer similar niche properties to its associated assemblages across varying environmental conditions. These results highlight the advantages of comparing taxonomic and functional metrics, which allow identification of a number of ecological processes that structure marine communities.
INTRODUCTION
Unraveling the processes that control how biodiversity is distributed over space and time is a central objective in macroecology and biogeography (Addo-Bediako et al., 2000;Ricklefs, 2004). While biodiversity was first observed to increase from the poles toward the tropics over 200 years ago (Hawkins, 2001), it is now recognized that latitude per se is not the main driver of spatial gradients in biodiversity, but rather a combination of variables and mechanisms that include ecological, evolutionary and historical processes (Hawkins and Diniz-Filho, 2004;D'Amen et al., 2017). For example, the rates of species diversification are thought to be greater in the tropics due to higher mutation rates in warmer regions (Rohde, 1992;Mittelbach et al., 2007). In addition, regions that have greater environmental stability over time may also host higher species diversity than areas that have suffered major environmental change, such as glaciations (Fine, 2015; but see also Fordham et al., 2019). Other variables shown to influence biodiversity include habitat or organism types, and organism properties, such as biomass, dispersal rates and physiology (Addo-Bediako et al., 2000;Hillebrand, 2004;Buckley et al., 2010;Rolland et al., 2015;Gaucherel et al., 2018). Given the multitude of factors that can affect diversity, understanding and predicting spatial and temporal patterns in biodiversity is challenging.
An important factor that influences biodiversity is whether a community is found on geogenic (of geological origin: sedimentary or rocky) or biogenic (of biological origin) substrate. Many communities occur in habitats that are created or altered by another living organism; such habitat-modifying organisms are called foundation species or ecosystem engineers. Foundation species not only build habitat (Dayton, 1972) but also control the availability of resources for other organisms (Sarà, 1986;Jones et al., 1994). Through habitat modification, foundation species can alter the realized niche of species, at times facilitating niche expansion by buffering environmental conditions so that they continue to be favorable within the engineered habitat (Bulleri et al., 2016). Heterogeneity in the engineered habitat can promote facilitation of a greater number of species, while dominance of the engineering species or homogeneity in the habitat can enhance competition and limit the number of associated taxa (Schöb et al., 2012;Bulleri et al., 2016). Given that community composition can vary greatly within biogenic habitats across environmental gradients (Boström et al., 2006;Boyé et al., 2017), investigating within-habitat diversity is essential for guiding conservation actions (Airoldi et al., 2008) and enhances our understanding of biodiversity over broad geographical scales.
Although the northeast Atlantic is amongst the most studied marine regions on Earth (Hawkins et al., 2019), spatial structure in its coastal marine assemblages remains poorly understood. Variation in diversity over broad spatial scales may be related to species distributions, but even broad biogeographic delimitations continue to be contentious. For example, broadly accepted marine biogeographic frameworks consider two biogeographic provinces in the northeastern Atlantic: Boreal and Lusitanian (Briggs and Bowen, 2012), or Northern European Seas and Lusitania (Spalding et al., 2007), separated by "Forbes' Line" (sensu Firth et al., 2021, after Forbes and Godwin-Austen, 1859). They differ, however, on boundary positions between provinces, and on whether or not the southern province includes the Mediterranean Sea. Further biogeographic subdivision has been proposed for the northeastern Atlantic such that four provinces could be recognized: Boreal, Boreal-Lusitanian, Lusitanian-Boreal, and Lusitanian (Dinter, 2001), but these finer-scale subdivisions are less often considered or employed in macroecology. The lack of consensus on biogeographic delimitations is partially due to competing criteria used for setting boundaries but may also reflect incomplete distributional knowledge of many marine species, especially for poorly studied invertebrates. If the majority of species have restricted distributions, then species turnover might be expected to be higher over a given spatial scale. Community structure may therefore be related to some extent to biogeographic partitioning, such that communities may be more similar within rather than between biogeographic regions.
To examine diversity in marine ecosystems, it is important to consider how diversity is quantified and described. Biodiversity is a multifaceted concept that includes several components (Whittaker, 1972). In order to better understand what mechanisms influence biodiversity, it may be helpful to consider each of these different facets. While local patterns in diversity (α diversity; Whittaker, 1972) are most commonly assessed, regional differences in diversity due to variation in richness or species composition (β diversity; Whittaker, 1972;Airoldi et al., 2008) can also provide important insights into the mechanisms driving community structure (Hewitt et al., 2005;Anderson et al., 2011;Villéger et al., 2013). Focusing on β diversity is especially important in the context of global change, where ecological communities are subject to large environmental fluctuations and disturbances (Mori et al., 2018). Furthermore, it is now widely recognized that the integration of functional information based on species traits provides a better understanding of community functioning (Díaz and Cabido, 2001;Anderson et al., 2011;Pavoine and Bonsall, 2011;Münkemüller et al., 2012;Mouillot et al., 2014). Thus, comparing taxonomic with functional diversity (α and β) provides a better understanding of the ecological processes that shape community composition (Swenson et al., 2011;Villéger et al., 2013;Mori et al., 2018) and the impact of biodiversity loss on ecosystem functioning (Cadotte et al., 2011;Burley et al., 2016). For example, selective processes, such as environmental filtering, lead to homogenization of traits in communities, since only species with a specific set of traits can survive and develop under certain abiotic conditions. As a result, the loss of species with unique functional characteristics may have more significant consequences for ecosystem functioning than the loss of species with characteristics that are more commonly expressed in the community (O'Connor and Crowe, 2005;Queirós et al., 2013). Nevertheless, the comparison of taxonomic and functional β diversity alone may not reveal the underlying ecological processes that structure communities (Baselga, 2010;Villéger et al., 2013;Legendre, 2014). This is in part because variation in species composition among sites (β diversity) results from two components: species turnover (i.e., replacement of species or functional strategies) and nestedness (i.e., dissimilarity associated with the loss of species or functional strategies, in which an assemblage is a strict subset of another). Partitioning β diversity into turnover and nestedness thus provides an additional facet for dissecting community assembly rules. In sum, a combination of tools and metrics, including taxonomic and functional α and β diversity (and their components), is essential for better understanding biodiversity in marine ecosystems.
The honeycomb worm, Sabellaria alveolata (Linnaeus, 1767), is a physical ecosystem engineer (Berke, 2010) commonly found along the European coast from northwest Ireland to northern Mauritania (Curd et al., 2020), where it builds biogenic structures of varying extent in the intertidal and shallow subtidal zones.
Honeycomb worms build what are considered Europe's largest biogenic reefs (Noernberg et al., 2010) and support a unique and rich assemblage of species (Dias and Paula, 2001;Dubois et al., 2006;Jones et al., 2018). Honeycomb worms play key functional roles in the ecosystems they support, by creating new three-dimensional habitat, which increases the physical complexity of the initial substrate, increases local biodiversity (Dubois et al., 2006;Jones et al., 2018), limits coastal erosion (Noernberg et al., 2010) and fashions biogenic structures (ranging from crusts and veneers to large reefs, hereafter "reefs" for simplicity) with high esthetic and recreational fishing value (Plicanti et al., 2016). Honeycomb worm reefs are broadly distributed across temperate Europe, however diversity investigations have only been carried out at local scales (Dias and Paula, 2001; Dubois et al., 2002, 2006; Schlund et al., 2016; Jones et al., 2018). It is therefore currently unknown how the biodiversity supported by these reefs varies over the species' range.
The present study examined the patterns of diversity of benthic marine macrofauna associated with honeycomb worm reefs from sites spanning the entire European distribution of the species (but excluding North Africa), in order to address the following questions: (i) Does taxonomic and functional diversity of communities associated with honeycomb worms vary over broad geographical scales, and if so, what environmental drivers best explain this variation? (ii) Does community composition within honeycomb worm reefs vary with respect to currently described biogeographic provinces? (iii) Are there regional differences in taxonomic and functional β diversity in assemblages associated with honeycomb worm reefs? (iv) If so, are they mainly due to differences in species richness or in turnover? Finally, can differences between taxonomic and functional diversity help identify the ecological processes that affect biodiversity on honeycomb worm reefs?
MATERIALS AND METHODS

Study Area and Sampling Methods
Ten sites along the coast of Europe were selected for quantifying the diversity of benthic macrofaunal assemblages: four in the United Kingdom, four in France and two in Portugal (Figure 1). Sampling was carried out in the summer (spring tides of June and August 2017) following a standard protocol at each site. The sampling strategy aimed to maximize the number of species collected by sampling in a variety of reef phases (prograding and retrograding, sensu Curd et al., 2019) within each site, as these are known to harbor different assemblages (Dubois et al., 2002;Jones et al., 2018). Reefs were sampled using eight PVC cores of 5 cm in diameter to a maximum depth of 15 cm. Since honeycomb worms occur within the first 15-20 cm of the reef (Gruet, 1986), only the living portion of the reef was sampled. At UK4, only veneer bio-constructions were available for sampling, and only five cores were collected because veneers were too scarce for further sampling. The contents of each core were preserved in 70% ethanol.
In the laboratory, cores were first weighed (wet weight, after removal of alcohol), then sieved on a 1 mm circular mesh. For a given core volume, the weight of the sediment provides a means for comparing the porosity or void fraction of a sample. Macrofauna was then extracted from the sediments and enumerated. Individuals were identified to the lowest taxonomic level, most often to the species level. All species names follow the World Register of Marine Species, and references used for taxonomic identification can be found in Supplementary Appendix 1. To ensure consistent taxonomic resolution across samples, the number of operators was limited (n = 4) and each uncertain identification was cross-verified by an expert in benthic taxonomy. However, due to the uncertainty regarding the morphological distinction at the species level between Mytilus edulis and Mytilus galloprovincialis, particularly at the juvenile stage (Jansen et al., 2007), and because hybridization occurs between the two (Daguin et al., 2001), all specimens sampled in the hybridization area (UK1, UK4, FR1, FR2, FR3, and FR4) were considered as Mytilus spp. (Wenne et al., 2020). All specimens are stored in the Laboratory of Coastal Benthic Ecology's collections at Ifremer (Plouzané, France).
Data Processing and Environmental Variables
Given that the reef-building species S. alveolata affects resource availability for the associated community (Dubois et al., 2002), all analyses considered the density of this species as an explanatory variable. Hence, S. alveolata abundance was removed from the species (response) matrix. Although some studies exclude rare species (i.e., represented by a single individual in one or two samples) for the calculation of similarity (see Clarke and Warwick, 2001), they may represent a non-negligible portion of a functional group, and are likely to have an impact on ecosystem functioning (Leitão et al., 2016). Rare species were therefore retained in the analysis in order to avoid reducing the functional richness of the communities (Mouillot et al., 2013;Jain et al., 2014).
To obtain information on the environmental conditions at each site, water and air temperature data were recorded using iButton temperature loggers (accuracy ±0.5°C, hourly measurements) (Lima and Wethey, 2009), deployed between August and October 2016 for a period of approximately 1 year prior to the sampling campaign. In order to mimic temperatures similar to those experienced within the reefs, the loggers were coated with sand from the reefs and fixed onto rocky substrate at a constant shore level (corresponding approximately to the mid-tide level where the majority of reefs develop).
Taxonomic Diversity
Multivariate analyses were used to test for differences in macrofaunal assemblages across four biogeographic provinces (as delimited by Dinter, 2001). Non-metric multidimensional scaling (nMDS) was used to plot sample stations on a two-dimensional ordination plane based on taxa composition dissimilarities and labeled with the corresponding collection site. nMDS was also run using species abundances averaged across all stations sampled in a given site. In addition, a hierarchical cluster analysis (HCA) was run on all samples using the Ward method (Ward, 1963). All analyses were carried out on the basis of Bray-Curtis similarity matrices. Abundance data were transformed by the log (x + 1) function to reduce the weight of the most abundant species.
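For readers who wish to reproduce this type of ordination and clustering workflow, the following is a minimal illustrative sketch in Python; the original analyses were run in dedicated statistical software, so the package choices, the hypothetical input file and variable names (e.g., `abund`), and the number of clusters extracted are assumptions rather than part of the study.

```python
# Illustrative sketch only: Bray-Curtis dissimilarities on log(x + 1)-transformed
# abundances, nonmetric MDS, and Ward hierarchical clustering.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.manifold import MDS

# `abund` is a hypothetical stations x taxa abundance matrix
abund = np.loadtxt("station_by_taxa_abundances.csv", delimiter=",")  # hypothetical file
d = pdist(np.log1p(abund), metric="braycurtis")     # log(x + 1) transform, Bray-Curtis

# Nonmetric MDS on the precomputed dissimilarity matrix (two-dimensional ordination)
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           n_init=20, max_iter=500, random_state=0)
coords = nmds.fit_transform(squareform(d))

# Hierarchical clustering (Ward method) and extraction of four groups,
# mirroring the four assemblage groups reported in the Results
Z = linkage(d, method="ward")
groups = fcluster(Z, t=4, criterion="maxclust")
```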
In order to examine regional variation in α diversity, each site was coded as belonging to one of four biogeographic regions: Boreal, Boreal-Lusitanian, Lusitanian-Boreal, and Lusitanian (Figure 1; Dinter, 2001). Differences in community composition within and among regions were tested using PERMANOVA with a two-factor design (4999 residual permutations under a reduced model), with region as the fixed factor and site as the nested random factor. The weight of sediment was included as a co-variable in all analyses. Paired tests between regions were performed where the main effect was significant (P < 0.05). Prior to the PERMANOVA, differences in within-site multivariate dispersion were examined using the PERMDISP routine. When significant differences in assemblage structure between regions were detected, a SIMPER analysis was performed to determine and rank the taxa responsible for the dissimilarities among sites and biogeographic regions. Variation in univariate assemblage metrics (i.e., abundance, species richness, the exponential of Shannon entropy (N1), and the inverse Simpson concentration (N2); Hill, 1973;Jost, 2006) was examined with permutational ANOVA, using the Euclidean distance in the PERMANOVA procedure (Anderson, 2017).
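As an illustration of how the univariate metrics named above are computed, the short sketch below (Python, assuming a simple vector of taxon counts for one core; not the software actually used in the study) returns richness, the exponential of Shannon entropy (N1), and the inverse Simpson concentration (N2).

```python
# Illustrative sketch: Hill diversity numbers from a single sample's taxon counts.
import numpy as np

def hill_numbers(counts):
    """Return (richness, N1, N2) for a vector of taxon abundances."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    p = counts / counts.sum()
    richness = p.size                    # N0: number of taxa
    n1 = np.exp(-np.sum(p * np.log(p)))  # N1: exponential of Shannon entropy
    n2 = 1.0 / np.sum(p ** 2)            # N2: inverse Simpson concentration
    return richness, n1, n2

print(hill_numbers([12, 5, 5, 1]))  # -> (4, ~3.1, ~2.7)

# A one-factor PERMANOVA on a Bray-Curtis matrix could be run with scikit-bio, e.g.
# skbio.stats.distance.permanova(DistanceMatrix(squareform(d), ids),
#                                grouping=regions, permutations=4999);
# the nested two-factor design with a sediment covariable used in the study
# requires dedicated software (e.g., PERMANOVA+ or comparable routines).
```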
To characterize the link between the environment and macrobenthic community structure at each site, a distance-based redundancy analysis (dbRDA; Legendre and Andersson, 1999) was carried out. In addition to the variables recorded in the field (density of the engineering species S. alveolata, weight of sediment, maximum water and air temperatures), mean monthly values spanning 2000-2014 were obtained for 30 variables from BioORACLE (Tyberghein et al., 2012;Assis et al., 2018). In order to eliminate multicollinearity among these environmental variables, Spearman rank correlations were calculated for all pairs of variables. Pairs with a Spearman correlation coefficient >0.7 were considered highly correlated. Only the following uncorrelated variables were kept in the analysis, in addition to the variables recorded in the field: mean surface water temperature (°C), mean chlorophyll a concentration (mg.m−3) and maximum current speed (m.s−1) (Dormann et al., 2013). Prior to analysis, all values for the environmental variables were standardized, then a DISTLM routine was used to obtain the most parsimonious model using a stepwise selection procedure and the adjusted R² selection criterion (McArdle and Anderson, 2001).
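The collinearity screening described above can be illustrated with a small sketch (Python/pandas); the file name, variable names, and the rule of keeping the first-listed variable of a correlated pair are illustrative assumptions, and the actual selection in the study may also have involved expert choice among correlated variables.

```python
# Illustrative sketch: drop one variable from each pair of predictors whose
# absolute Spearman rank correlation exceeds 0.7, then standardize the rest.
import pandas as pd

def drop_collinear(env: pd.DataFrame, threshold: float = 0.7) -> pd.DataFrame:
    corr = env.corr(method="spearman").abs()
    keep = list(env.columns)
    for i, a in enumerate(env.columns):
        for b in env.columns[i + 1:]:
            if a in keep and b in keep and corr.loc[a, b] > threshold:
                keep.remove(b)  # arbitrarily retain the first-listed variable of the pair
    return env[keep]

# `env` is a hypothetical sites x variables table of environmental predictors
env = pd.read_csv("environmental_variables.csv", index_col=0)  # hypothetical file
env_kept = drop_collinear(env)
env_std = (env_kept - env_kept.mean()) / env_kept.std()        # standardization prior to dbRDA/DISTLM
```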
Functional Diversity
To characterize the functional diversity at each site, a biological trait analysis (BTA) was conducted (Statzner et al., 1994). Eight biological traits (divided into 32 modalities) were selected (Table 1), providing information linked to the ecological functions performed by the associated macrofauna. The selected traits provide information on: (i) resource use and availability (through the trophic group of species, e.g., Thrush et al., 2006); (ii) secondary production and the amount of energy and organic matter (OM) produced, based on the life cycle of the organisms (including longevity, maximum size, and mode of reproduction, e.g., Cusson and Bourget, 2005;Thrush et al., 2006); and (iii) the behavior of the species in general (i.e., how these species occupy the environment and contribute to biogeochemical fluxes through habitat, movement, and bioturbation activity at different bathymetric levels, e.g., Solan et al., 2004;Thrush et al., 2006;Queirós et al., 2013). Species were scored for each trait modality based on their affinity using a fuzzy coding approach (Chevenet et al., 1994), in which multiple modalities can be attributed to a species where appropriate, allowing intraspecific variability in trait expression to be incorporated. The information concerning polychaetes was derived primarily from Fauchald and Jumars (1979) and Jumars et al. (2015). Information on other taxonomic groups was obtained from biological trait databases, from publications (Caine, 1977;Leblanc et al., 2011;Rumbold et al., 2012;Jones et al., 2018), and from the references listed in Supplementary Appendix 1.
Ordination of the functional trait data was done using a Fuzzy coded multiple Correspondence Analysis (FCA) (Chevenet et al., 1994). Then, a hierarchical clustering analysis based on the Ward algorithm (Ward, 1963) was carried out using Euclidean distances (Usseglio-Polatera et al., 2000) to define homogeneous functional groups comprising species with similar biological trait associations. The frequencies of the modalities of each trait were calculated in order to visualize the biological profiles of identified functional groups.
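The general logic of this step, reducing the fuzzy-coded species-by-modality table to a few synthetic axes and then clustering species on those axes, can be sketched as follows (Python). This uses a plain correspondence analysis computed by SVD as a stand-in for the FCA of Chevenet et al. (1994), and the input file, trait matrix, and the choice of five clusters are assumptions made for illustration only.

```python
# Illustrative sketch: correspondence-analysis-style ordination of a fuzzy-coded
# species x trait-modality table, followed by Ward clustering into functional groups.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def correspondence_axes(T, n_axes=4):
    """Row (species) coordinates on the first n_axes of a correspondence analysis."""
    P = T / T.sum()                                  # relative frequencies
    r = P.sum(axis=1, keepdims=True)                 # row masses
    c = P.sum(axis=0, keepdims=True)                 # column masses
    S = (P - r @ c) / np.sqrt(r @ c)                 # standardized residuals
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    return (U * s)[:, :n_axes] / np.sqrt(r)          # principal row coordinates

# `trait_matrix` is a hypothetical species x 32-modality array, fuzzy coded so that
# the modalities of each trait sum to 1 for every species
trait_matrix = np.loadtxt("fuzzy_traits.csv", delimiter=",")   # hypothetical file
axes = correspondence_axes(trait_matrix, n_axes=4)

# Euclidean distances on the ordination axes, Ward linkage, five functional groups
groups = fcluster(linkage(axes, method="ward"), t=5, criterion="maxclust")
```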
Partitioning of Taxonomic and Functional β Diversity

Regional differences in diversity (β diversity) were estimated from presence-absence data using Sørensen's (1948) dissimilarity. For each pair of cores, taxonomic β diversity and its two components, turnover and nestedness, were computed using the Baselga partitioning scheme (Baselga, 2017;Schmera et al., 2020). Functional β diversity was computed based on the multidimensional functional space from the Fuzzy Correspondence Analysis, where axes were synthetic components summarizing functional traits (Villéger et al., 2010). The first four axes were used for calculating Sørensen dissimilarity according to Villéger et al.'s (2013) equation for all pairwise comparisons between samples (1) belonging to the same region (within bioregion), or (2) belonging to different regions (among bioregion). Correlations between taxonomic and functional β diversity as well as between their respective components were tested using Mantel permutational tests (Villéger et al., 2013).
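To make the partitioning concrete, the sketch below (Python, written from the published Baselga formulas rather than from the code actually used in this study) computes pairwise Sørensen dissimilarity and its turnover and nestedness-resultant components from presence-absence data, together with a simple Mantel permutation test for comparing two dissimilarity matrices.

```python
# Illustrative sketch: Baselga-style partition of pairwise Sørensen dissimilarity
# into turnover (Simpson dissimilarity) and nestedness-resultant components,
# plus a basic Mantel permutation test.
import numpy as np

def sorensen_partition(x, y):
    """x, y: presence/absence vectors over the same taxon list (non-empty samples)."""
    x, y = np.asarray(x, bool), np.asarray(y, bool)
    a = np.sum(x & y)                         # shared taxa
    b = np.sum(x & ~y)                        # taxa only in x
    c = np.sum(~x & y)                        # taxa only in y
    beta_sor = (b + c) / (2 * a + b + c)      # total dissimilarity
    beta_sim = min(b, c) / (a + min(b, c))    # turnover component
    beta_sne = beta_sor - beta_sim            # nestedness-resultant component
    return beta_sor, beta_sim, beta_sne

def mantel(d1, d2, permutations=999, seed=0):
    """Pearson correlation between two square distance matrices, permutation p-value."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)
    obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    hits = 0
    for _ in range(permutations):
        p = rng.permutation(d2.shape[0])
        if np.corrcoef(d1[iu], d2[np.ix_(p, p)][iu])[0, 1] >= obs:
            hits += 1
    return obs, (hits + 1) / (permutations + 1)

print(sorensen_partition([1, 1, 1, 0], [1, 0, 0, 1]))  # -> (0.6, 0.5, 0.1)
```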
RESULTS

Taxonomic Diversity
A total of 129 taxa were observed in association with honeycomb worm reefs across the 10 sampled sites (77 stations, Supplementary Table 1). Taxon richness varied from two taxa (cores from UK3, FR2, and PO2) to 34 taxa per station (core from FR2). In all sites except FR3, S. alveolata was the dominant species, with densities ranging from 6,450 ind.m−2 at UK2 to 80,000 ind.m−2 at FR2 (Supplementary Figure 1A). The highest densities were observed in UK4 (130,000 ind.m−2) but this most likely corresponds to a recent recruitment event, as individuals were on average much smaller (diameter of the opercular crown less than 2 mm; Gruet, 1986). The ratio of individuals of S. alveolata to individuals of associated macrofauna showed a clear dominance of the engineering species at UK4 and UK3 (94 and 87%), while at other sites, this ratio varied from 43 to 68% (Supplementary Figure 1B). It was also at these two sites that the number of individuals of the associated macrofauna was the lowest, reaching up to 5,820 ind.m−2 at UK3 (with an average of 3,507 ± 2,313 ind.m−2) and 7,500 ind.m−2 at UK4 (with an average of 6,194 ± 1,304 ind.m−2; Supplementary Figure 1A). The fauna associated with honeycomb worm reefs consisted primarily of annelids and arthropods and, to a lesser extent, mollusks and nematodes (Supplementary Figure 2). However, community composition varied significantly among sites. Annelids were dominant in UK2, UK3, FR4, and PO2, where they represented between 36 and 56% of individuals. At UK1 and PO1, arthropods dominated the community, representing 47 and 51% of individuals, respectively. For the other sites, mollusks were dominant (53% of individuals in UK4 and 39% in FR2) as well as nematodes (36% of individuals in FR1 and 39% in FR3). Sites FR1 and FR3 had a higher abundance of nematodes and mollusks compared to other sites.
Taxon richness and abundance did not show any significant variation between sites within the same region but showed significant differences between regions ( Table 2). As for N1 and N2 indices, honeycomb worm reef communities were characterized by low values (Figure 1). For both metrics, there was no significant effect of site or region. Sediment weight had a significant effect, but only on the N2 index ( Table 2). The PERMANOVA also detected a significant effect of sediment weight on taxon richness but not on abundance ( Table 2).
Hierarchical cluster analysis based on species composition and abundances per station defined four groups of assemblages (Figure 2B). Group I included stations sampled in UK1 and UK4, plus two stations from UK3 and one station from FR1. Group II included all stations in UK2 and the remaining stations from UK3. Group III included all but one station in France (FR1, FR2, FR3, and FR4) and Group IV included stations in Portugal (PO1 and PO2). The nMDS indicated some degree of partitioning among sites, with two of the British sites (UK1 and UK4) being distinct from two of the southern sites (nMDS1 and nMDS2; Figure 2A). However, much overlap was observed in community composition among sites found in the center of the distribution, particularly FR1, FR3, and UK3, which exhibited considerable variability among stations within each site (F = 7.98, P < 0.001; PERMDISP; Figure 2A). PERMANOVA detected significant variability between regions and between sites within regions, as well as a significant effect of the sediment covariate (Table 2). Regional differences were driven by differences in intra-regional variability (F = 18.08, P < 0.001; PERMDISP; Figure 2A) but also by changes in mean species composition (Figures 2A,C). Paired tests between regions showed that assemblages from the Lusitanian Province (PO1 and PO2) were distinct from those from the Boreal province (UK1, UK2, and UK3) and the Lusitanian-Boreal province (FR1, FR2, FR3, and FR4) (Supplementary Table 2). SIMPER analysis indicated that the differences observed between regions were mainly due to a higher abundance of the polychaete Syllis armillaris (Müller, 1776) and the mussel M. galloprovincialis (Lamarck, 1819) in the Lusitanian Province compared to the Lusitanian-Boreal and Boreal Provinces (Supplementary Table 2).

TABLE 2 | Notes: Sediment weight provides a proxy for the porosity of the bioconstructions and was included as a covariable in the analysis. Permutations were based on a Bray-Curtis dissimilarity matrix generated from log (x + 1) abundance data. Results of univariate PERMANOVA to test for differences in assemblage-level univariate metrics in macrofaunal assemblages (taxon richness and total abundance) are also shown. Permutations for univariate analysis were based on the Euclidean distance matrix generated from untransformed diversity data. All tests used a maximum of 4999 permutations under a reduced model; significant effects (P < 0.05) are shown in bold. An underlined P-value indicates that PERMDISP detected significant differences in within-group dispersion between levels of that factor (P < 0.05). df, degrees of freedom; MS, mean squares; F, pseudo F-statistic; P, P-value.

FIGURE 2 | (A) nMDS representing the variability across sites and stations within sites. (B) Cluster constructed using the "Ward D2" method, showing the four groups of replicates. Height indicates the order in which the clusters were joined. (C) nMDS constructed from data averaged for each site, grouped by cluster groups. A-C were derived from a Bray-Curtis dissimilarity matrix constructed from log (x + 1) transformed abundance data.

Distance-based redundancy analysis indicated some degree of partitioning between regions. The first two axes represent 12.4 and 10.5% of the explained total variance, respectively (Figure 3). The species assemblages were structured along two gradients. The first was driven by the mean chlorophyll a and mean water temperature variables, which were negatively correlated, highlighting the differences between the northern and the southern assemblages. The second was driven by the amount of sediment in the cores and the maximal temperature of the air, which separated the middle range sites from the southern and northern sites, with higher values in the middle range sites (Figure 3). Note that the amount of sediment in the cores was also negatively correlated with maximal current velocity, which was higher in the northern sites. The DISTLM routine was used to determine links between environmental predictor variables and variability in assemblage structure (Table 3). Marginal tests showed that mean water temperature and the amount of sediment in the cores were, individually, the most important predictor variables. Surprisingly, the density of S. alveolata did not appear to be a structuring factor for these intertidal communities. The stepwise selection procedure indicated that the most parsimonious model included all environmental variables, explaining 38% (adjusted R² = 0.31) of the total observed variation in assemblage structure.

TABLE 3 | Notes: The best solution based on stepwise selection and adjusted R² is shown. Adj. R², adjusted R²; SS, sum of squares (trace); F, pseudo F-statistic; P, P-value; Prop, proportion of variation explained.

FIGURE 3 | Result of the distance-based redundancy analysis (dbRDA) using the Bray-Curtis dissimilarity matrix computed on the log-transformed abundance data and the seven selected environmental variables. Field data: "Water" and "Air (T°C max)" = maximum water and air temperatures, "Sediment" = amount of sediment in g.m−2, "Density SA" = density of S. alveolata (ind.m−2). Data extracted from BioORACLE: "Chla" = average chlorophyll level (mg.m−3), "Water (T°C avg.)" = average water temperature, and "Current (max)" = maximum current velocity (m.s−1). The first two axes capture 22.9% of the variation explained by these seven variables.
Functional Diversity
Changes in community structure were also analyzed in terms of functional diversity. Cluster analysis carried out on the FCA axes revealed five main groups of taxa with distinct trait combinations (Figure 4A). The most clearly delineated group (Group 1) was composed mostly of intertidal, suspension-feeding, small, long-lived organisms that live mostly fixed or in tubes and release their gametes into the water column (Figure 4B). This group was represented by 14 taxa, the majority of which were bivalve mollusks, sabellid polychaetes, and barnacles. The four other groups were largely composed of infaunal taxa. Group 2 was composed of very small, free-living and tube-dwelling, short-lived, sometimes annual, organisms. Most species in this group lay or incubate eggs and have no larval phase. Their contribution to sediment reworking was mainly at the sediment surface. This group was composed of 47 taxa that included amphipods and isopods but also pycnogonids and nematodes. Group 3 comprised large, average-lived organisms that freely release their gametes into the water column, with feeding modes mostly associated with scavenging and sub-surface deposit-feeding, living free or in burrows, and participating in sediment reworking, either as biodiffusors or through vertical sediment transport. This group included 23 taxa, belonging to the polychaete annelids Eunicidae, Lumbrinereidae, and Oenonidae (formerly part of Eunicidae) and Terebellidae. Group 4 and Group 5 comprised species with heterogeneous and intermediate trait characteristics relative to the more functionally homogeneous Groups 2 and 3, the former being represented by 14 taxa, mainly polychaete annelids belonging to the Spionidae, Capitellidae, and Cirratulidae families, while the latter included 23 taxa, most of them being polychaetes belonging to different families of the order Phyllodocida (Phyllodocidae, Nereidae, Syllidae, Glyceridae, and Polynoidae). It also included decapod crustaceans, gastropods, and oligochaetes (all groups are detailed in Supplementary Figure 3). The relative frequencies of the functional groups did not differ significantly with latitude (Supplementary Figure 4).
Taxonomic and Functional β Diversity
Taxonomic β diversity values for macrofauna associated with honeycomb worm reefs showed greater similarity on average within regions (19-51%; Figure 5A and Table 4) compared to among regions (9-33%; Figure 5B and Table 4). However, levels of similarity within regions remained low, indicating important heterogeneity across sites of a given region. On average, when considering pairs of assemblages within regions, 60% of the species were found in only one assemblage: 50% of them changed in terms of species identity (turnover) and 10% were unique to the richest assemblage (nestedness) (Figure 5A, within region; Table 4). For pairwise comparisons among different regions, differences were even more pronounced, with an average of 80% of species being found in only one assemblage, with 70% due to species turnover and 10% linked to nestedness (Figure 5B, among bioregion; Table 4; for all pairwise comparisons among regions, see Supplementary Figure 5 and Table 4). The contributions of nestedness to β diversity were on average similar within and among regions. Overall, variation in species composition within and between bioregions was primarily due to changes in species identity.

Functional β diversity values for macrofauna associated with honeycomb worm reefs showed a comparable range in similarity within regions (38-88%; Figure 5A and Table 4) and among regions (34-84%; Figure 5B and Table 4). This similarity within and among regions indicates high levels of overlap in functional space. On average, two assemblages shared 40% of their functional space, while functional β diversity was mostly driven by nestedness (i.e., by differences in the volume of the functional space filled by the assemblages; 24%) rather than by turnover (i.e., functional spaces not shared by the two assemblages; 15%) (Figures 5C,D and Table 4). The contributions of nestedness to functional β diversity were similar within and between regions (for all pairwise comparisons among bioregions, see Supplementary Figure 6).
DISCUSSION

Influence of Honeycomb Worm Reefs on Local Diversity
Honeycomb worm reefs host diverse invertebrate assemblages. Here we examined how multiple facets of diversity, including taxonomic and functional α and β diversity, vary over much of the Atlantic coast of Europe. In terms of local levels of diversity, no significant differences were observed in Hill diversity indices (including richness) over the 10 study sites. Only macrofaunal abundances were higher in the southern sites than in the northern sites. Our results are in agreement with a growing number of examples showing that there are many exceptions to the latitudinal diversity gradient described by Brown and Lomolino (1998) and Gaston and Chown (1999). Recent investigations have shown little or no relationship of diversity with latitude for the European marine benthos (Renaud et al., 2009;Hummel et al., 2017), particularly for soft sediment communities (Kendall and Aschan, 1993;Wilson et al., 1993;Kendall, 1996). Latitude is not a unidimensional environmental variable but a proxy for a number of primary environmental factors that interact and correlate with each other (Hawkins, 2003). For honeycomb worms, it appears that biotic and abiotic factors associated with the reef environment have contributed to maintaining constant levels of diversity over broad geographical scales, as discussed further below.

TABLE 4 | Notes: Contributions were calculated for comparisons between pairs of samples belonging to the same bioregion (within bioregion) or to different bioregions (among bioregion).
The assemblages sampled in our study showed a high diversity of macrofaunal organisms and are typical of the honeycomb worm reef assemblages reported in previous studies (Gruet, 1986; Dias and Paula, 2001; Dubois et al., 2002, 2006; Schlund et al., 2016). Mean species richness was comparable, but notably lower

FIGURE 5 | Triangular plots illustrating the geographical pattern of (A,B) taxonomic and (C,D) functional β diversity. Sorensen dissimilarity between the species composition (presence/absence data) of the 10 study sites was used to quantify their similarity, and the two components of their beta diversity, nestedness (i.e., influenced by the difference in number of species between the two communities) and turnover (i.e., species replacement between two communities). Contributions were calculated for comparisons between samples belonging either to (A,C) the same bioregion (within bioregion) or to (B,D) different bioregions (among bioregion). Red lines indicate the centroid value for each graph with its associated mean values for the three components of the Sorensen dissimilarity.
While species richness in assemblages associated with honeycomb worms remained within a narrow range throughout the coast of Europe, faunal composition did vary among sites. Two environmental variables were found to significantly structure assemblages: mean annual water temperature and the quantity of sediment in the cores. Mean annual temperature distinguished the United Kingdom sites and France sites from the Portugal sites and one site (FR4) in the Bay of Biscay. Thermal regimes affecting faunal composition appear to change within the Bay of Biscay, south of the Brittany peninsula, consistent with higher sea surface temperatures in the Bay of Biscay than in the surrounding areas connected by the Gulf Stream (Jenkins et al., 2008;Hummel et al., 2017). Sediment content was a key variable that structured communities, with sites that had higher sediment content, typically the France sites, being distinct from sites with lower sediment content, namely the United Kingdom and Portugal sites. Sediment content was negatively correlated with current velocity, such that sites with lower hydrodynamics accumulated more sediment, while sites with higher hydrodynamics had higher porosity within the reef. In areas with high current velocities, the reef-building activity of S. alveolata is challenged by wave erosion, generating higher porosity within reefs. Conversely, low current velocities allow reefs to grow homogeneously but also allow unconsolidated particles to settle within fissures in the reefs. Previous studies have reported that within the same site, dense sections of reef, where S. alveolata is in an active growth phase (prograding reef, sensu Curd et al., 2019), tend to host assemblages with lower abundances and diversity than parts of the reef that are more fragmented (retrograding reef) (Dubois et al., 2002;Jones et al., 2018). Sediment content has been found to be an important variable structuring communities at local scales. In the Bay of Mont-Saint-Michel, for example, higher sediment content in retrograding reefs explained the presence of many species typically belonging to muddy sandy bottom communities (Dubois et al., 2002, 2006). At the regional level, differences in hydrodynamic and sediment accumulation regimes may therefore lead to differences in reef density, which in turn affect assemblage composition along the Atlantic coast of Europe. Unlike other engineering species such as haploops (gregarious tube-dwelling amphipods) (Rigolet et al., 2014), the density of S. alveolata was not the main factor structuring communities. Unlike haploops, the reef structures developed by S. alveolata persist after the death of the individuals, such that characteristics of the reef (here sediment content) better explain variation in communities than the density of the engineering species. Our results are consistent with other studies that have shown that reef structure is more important for explaining community composition than the density of the engineer, such as in habitats built by Owenia fusiformis Delle Chiaje, 1841 (Fager, 1964), Spiochaetopterus bergensis Gitay, 1969 (Munksby et al., 2002;Hastings et al., 2007), and Lanice conchilega (Pallas, 1766) (Zühlke et al., 1998;Zühlke, 2001;Callaway, 2006;Rabaut et al., 2007;Van Hoey et al., 2008;De Smet et al., 2015).
We found five functional groups in association with honeycomb worm reef formations. Our results show that changes in taxonomic composition did not result in changes in ecological role, but rather that the same functional groups were found in association with honeycomb worm reefs throughout the coasts of Europe. Foundation species greatly influence the structure and functioning of species assemblages (Bruno et al., 2003). However, the effects of foundation species on biodiversity are not necessarily positive for all species, providing resources for some but excluding others (Rigolet et al., 2014). Through their tube-building activity, honeycomb worms transform unconsolidated sediment into a complex three-dimensional structure with properties that differ from both rocky shores and bare sediment. The reefs attract mostly soft sediment infauna (other polychaetes) and provide pockets of soft sediment for burrowers, but exclude taxa that require rocky substrate to settle upon or that compete for space with S. alveolata, such as barnacles and mussels (Holt et al., 1998; Dubois et al., 2002, 2006). Brown and red macroalgae cover on honeycomb worm reefs is reduced compared to rocky shores (Dubois et al., 2006), hence excluding a large set of herbivores and favoring deposit-feeders, as can be seen in the functional groups recovered here. Honeycomb worm reefs may therefore act as a biological filter for a given local pool of organisms. Similar results have been reported in different bioengineered habitats such as the communities associated with haploops in the bay of Concarneau, France (Rigolet et al., 2014). Contrary to adjacent sandy and muddy bottom communities, the establishment of haploops communities excluded or limited the colonization of other burrowers and tube-dwelling suspension-feeders, but attracted small mobile predators which possibly prey on haploops or other small associated organisms. The biological filtering that occurs in honeycomb worm and haploops habitats may therefore be applicable to other bioengineered habitats.
Relationship Between Taxonomic and Functional β Diversity
Our results show that while the taxonomic composition of the fauna associated with honeycomb worm reefs varied over broad geographical scales, species were replaced by others with the same functional role, such that only a few differences in functional groups occurred across the reefs of the northeastern Atlantic. Despite the high species turnover observed between biogeographic regions (70% on average), functional turnover was only 15% on average. As a result, high functional similarity was observed between regions and most functional changes across regions were due to one assemblage being a subset of another (24% of functional nestedness). For instance, the functional role performed by the isopod Lekanesphaera levii (Argano and Ponticelli, 1981), sampled in the northern reefs, is the same as that of another isopod, Dynamene bidentata (Adams, 1800), in the southern reefs. This is also the case for two species of Phyllodocidae: Eulalia clavigera (Audouin and Milne Edwards, 1833) and Eulalia ornata (Saint-Joseph, 1888). E. ornata is more abundant at Boreal and Boreal-Lusitanian sites but tends to be replaced by E. clavigera at Lusitanian-Boreal and Lusitanian sites.
Biogenic habitats tend to show little variation in functional groups over broad spatial scales with high levels of redundancy within each group (Hewitt et al., 2008;Barnes and Hamylton, 2015), although this depends somewhat on the foundation species (Boyé et al., 2019). High species turnover accompanied by constant taxonomic richness has also been observed in eelgrass assemblages (Boyé et al., 2017). Similarly, variation in taxonomic composition associated with eelgrass beds, mangroves, maerl beds, and coral reefs did not result in differences in functional trait composition across approximately 500 km of coastline from both sides of the Atlantic and from the Caribbean and coral seas (Hemingson and Bellwood, 2018;Boyé et al., 2019). Our results support previous work that has shown that biogenic habitats are important in structuring benthic assemblages at the regional scale. As with other biogenic habitats, communities supported by honeycomb worms show high functional redundancy that is thought to provide spatial insurance for benthic ecosystem functioning at local and broad spatial scales (Boyé et al., 2019). In addition, high functional redundancy may indicate that honeycomb worm reefs can offer similar niche properties to their associated assemblages across varying environmental conditions, as has been found for North Atlantic eelgrass, Caribbean mangroves or Indo-Pacific coral reefs (Cornell and Lawton, 1992;Boyé et al., 2017;Hemingson and Bellwood, 2018;Storch and Okie, 2019). Indeed, the reefs themselves provide protection from the physical elements, such as wind, waves, sun exposure, and desiccation, which may mean that they also act as environmental filters, buffering extremes in temperature or other environmental variables as is observed for many foundation species (Bertness and Callaway, 1994;Bruno et al., 2003;Bouma et al., 2009). As such, they may override the effect of large environmental gradients such as latitudinal temperature gradients (Jurgens and Gaylord, 2018), with important consequences for the spatial and temporal variation of their associated communities (Bulleri et al., 2018;Boyé et al., 2019).
Biogeographic Regions
Taxonomic differences were observed in macrobenthic faunal assemblages associated with honeycomb worm reefs along the coast of Europe. Hierarchical clustering analyses showed that assemblages were grouped together into four main clusters, two of which were found in the United Kingdom, a third represented by all assemblages in France, and a fourth representing the assemblages from Portugal. These clusters suggest there may be greater regional differences in macrobenthic communities than is currently recognized in biogeographic frameworks which consider only two main provinces in the northeastern Atlantic (Spalding et al., 2007;Briggs and Bowen, 2012). The assemblages observed in our study suggest important differences in species composition within both the Lusitanian (Portugal vs. France sites) and the Boreal/Northern European Seas provinces (UK1 + UK4 vs. UK2 + UK3). While our results show good agreement with assemblage differences that could correspond to differences in species distributions associated with Lusitanian-Boreal and Lusitanian province subdivisions (defined by Dinter, 2001), it is less clear whether the northern assemblages found in honeycomb worms support a Boreal and Boreal-Lusitanian subdivision. Nevertheless, the differentiation observed here does suggest a higher degree of partitioning of species within the northeastern Atlantic that may be in part related to biogeography, with many species having restricted distributions. Our findings for the four United Kingdom study sites are in agreement with patterns of community composition associated with another important temperate foundation species, the kelp Laminaria hyperborea (Teagle et al., 2018). While communities associated with holdfasts were fairly variable throughout the study area, six sites collected in northern and southern Scotland (Boreal province) were distinct from six sites in northwestern Wales and Southern England, supporting partitioning within the Boreal/Northern European Seas province.
In honeycomb worm assemblages, differences in Boreal and Boreal-Lusitanian regions were mainly driven by the occurrence or absence of M. edulis and potential hybrids of M. edulis × M. galloprovincialis in the two studied regions. Given the morphological similarities among the three taxa, identification to the species level was uncertain in UK1 and UK4, with most identifications being kept to the genus level. This may partly explain the similarity among the Scottish and southern England sites in our dataset. Molecular identification was not attempted here, but could help resolve some of these taxonomic uncertainties in future work, as has been done for Mytilus spp. (Wenne et al., 2020). Similarly, nematodes were identified only to the phylum level, but did contribute significantly to the observed differences in assemblages between the Lusitanian and Lusitanian-Boreal regions. Identification to the species level would likely further differentiate these two regions (Bhadury et al., 2006). Overall, our results indicate strong differences in community composition that are related to taxonomic turnover, which in turn indicates that species ranges may better correspond with finer-scale biogeographic partitioning within the northeastern Atlantic, as proposed by Dinter (2001). Future studies that take into consideration more extensive species inventories across phyla and habitats may help refine our understanding of marine biogeography in the northeastern Atlantic, and resolve some of the current discord in various frameworks.
CONCLUSION
Our results highlight the importance of considering various aspects of diversity in order to have a more comprehensive understanding of the ecological processes that shape marine communities, which will ultimately better inform conservation strategies (Meynard et al., 2011;Villéger et al., 2013;Loiseau et al., 2017). In the case of honeycomb worm reefs, the environmental filtering that excludes some functional groups from the reefs would not have been detected if only patterns in taxonomic diversity had been examined. Similarly, the high level of taxonomic turnover observed among and within biogeographic regions would have been overlooked if functional diversity had been considered alone. Examining multiple facets of community diversity has enhanced our understanding of the factors that shape assemblages associated with S. alveolata. Preserving taxonomic diversity is and will continue to be valuable for maintaining regional levels of species diversity (De Juan and Hewitt, 2011). However, while many conservation programs prioritize the conservation of local taxonomic community diversity (Socolar et al., 2016), also considering the functional complementarity of communities across broader spatial scales may prove more efficient in maintaining healthy ecosystems (Mori et al., 2018). The results presented here show that honeycomb worm reefs support high taxonomic diversity, but functional diversity is more limited, with some key functional groups (such as grazers) being absent from the community. Adjacent habitats may therefore host very different sets of species with equally important ecological roles. Benthic homogenization and loss of complexity of the sea floor may reduce overall functional diversity in the marine benthos (Airoldi et al., 2008). Protecting a diversity of benthic habitats may therefore be necessary for ensuring good ecosystem functioning in marine communities.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. The functional trait dataset for this study can be found in SEANOE (https://www.seanoe.org/; https://doi.org/10.17882/79817). Further inquiries can be directed to the corresponding author/s.
AUTHOR CONTRIBUTIONS
SD, LF, and FN conceived the study. AM led analysis and drafting of the manuscript. LB, AC, AD, SD, LF, FL, CM, FN, and RS carried out the field work. CP, AM, SD, and ND conducted species and trait identification. AB, CP, MM, and SD contributed to data analysis. All authors contributed to editing the final manuscript.
FUNDING
This work was supported by the Total Foundation (Grant No. 1512215 588/F, 2015). AM and AC were funded by Ph.D. grants from Ifremer, Region Bretagne, and the ISblue project (the Interdisciplinary graduate school for the blue planet, ANR-17-EURE-0015, co-funded by the "Investissements d'Avenir" program).
Chinese men
Background: Few epidemiological studies have explored the associations between occupational exposures and lung cancer in lifelong nonsmoking men. Methods: We obtained lifetime occupational history and other relevant information for 132 newly diagnosed lung cancer cases among nonsmoking Chinese men and 536 nonsmoking community referents. Unconditional multiple logistic regression analysis was performed to estimate the odds ratio (OR) of lung cancer for specific occupational exposures. Results: Significantly increased lung cancer risk was found for nonsmoking workers occupationally exposed to silica dust (OR=2.58, 95% confidence interval (CI): 1.11, 6.01), diesel exhaust (OR=3.47, 95% CI: 1.08, 11.14), spray painting (OR=2.81, 95% CI: 1.14, 6.93), and nonspray painting work (OR=2.36, 95% CI: 1.04, 5.37). Silica dust exposure was associated with a significantly increased risk of adenocarcinoma (OR=2.91, 95% CI: 1.10, 7.68). We observed a positive gradient of all lung cancers and of adenocarcinoma with duration of employment for workers exposed to silica dust and spray painting. Conclusion: This study found an increased risk of lung cancer among nonsmoking Chinese men occupationally exposed to silica dust, diesel exhaust, and painting work.
Tobacco smoking is the most important risk factor for lung cancer (IARC, 2004), contributing around 80% of lung cancer cases in European men and 58% in Chinese men (Tse et al, 2009a). Other risk factors include occupational exposures to suspected carcinogens, environmental tobacco smoke (ETS), residential radon exposure, and genetic susceptibility (Alberg et al, 2005). Numerous studies have examined the associations between occupational exposures and lung cancer risk, but only a few have explored these in lifelong nonsmoking men (Pohlabeln et al, 2000;Zeka et al, 2006). Smoking is such a strong risk factor for lung cancer (IARC, 2004) that its presence makes it difficult to examine the effects of occupational exposures with weak to moderate carcinogenicity. Moreover, cigarette smoking may modify the effects of occupational exposures on lung cancer risk (Liddell, 2001;Yu and Tse, 2006). As inadequate consideration of smoking can lead to inaccurate risk estimations, restriction of study subjects to lifelong nonsmokers offers the best way to examine the independent effects of occupational carcinogens. However, in most studies, lung cancers in lifelong nonsmokers are too few (<10%) to provide high statistical power (Peto et al, 2000). We used a subgroup of lifelong nonsmokers from a large population-based case-referent study to examine the independent effects of occupational exposures on the risk of all lung cancers combined, as well as of the histological subtype of adenocarcinoma, among Chinese nonsmoking men.
MATERIALS AND METHODS
The recruitment of cases and referents of this population-based study has been described elsewhere (Tse et al, 2009b). In brief, we recruited 1208 newly diagnosed and histologically confirmed primary lung cancer cases consecutively from the largest oncology centre in Hong Kong from 1 February 2004 to 30 September 2006, with a response rate of 96%. All the eligible cases were Chinese men aged 35-79 years. We recruited 1069 male community referents randomly selected from the same districts as the cases, with a response rate of 48%, which is comparable with that of other similar studies. Each community referent from the original larger study was frequency matched in 5-year age groups to a lung cancer case, as the age distribution of the subgroup of lifelong nonsmoking cases may not be comparable with that of the nonsmoking referents. All community referents had to have no history of physician-diagnosed cancer in any site. This study was approved by the ethics committees of both the Chinese University of Hong Kong and Queen Elizabeth Hospital.
Personal interviews with the cases and referents were carried out by trained interviewers using structured questionnaires immediately after informed consent was obtained. A lifestyle questionnaire obtained information on demographic data, sources of indoor air pollutants (i.e. exposure to residential radon and ETS, incense burning, use of mosquito coils, and years of cooking by frying), habits of tobacco smoking and alcohol drinking, dietary habits, past history of lung diseases, cancer history in first-degree relatives, and occupational exposures. Histological findings were retrieved from the hospital records. For this study, we selected the subgroup of never smokers, defined as subjects who had never smoked as many as 20 packs of cigarettes or 12 oz (342 g) of tobacco in their lifetime, or one cigarette a day or one cigar a week for 1 year (Ferris, 1978).
A complete work history of jobs held for at least 1 year was recorded for each case and referent. The work history included job title, job task description, and the beginning and end date of each job. Job titles and industries were coded according to the International Standard Classification of Occupations (ISCO) and the International Standard Industrial Classification of All Economic Activities (ISIC), respectively, for international comparison (International Labour Office, 1968;United Nations Publications, 1971). The whole process of coding was performed blinded to the disease status of subjects.
Additional information on each worker's regular exposure to specific individual agents or groups of agents in each workplace was captured based on a list of confirmed or suspected human carcinogens, including asbestos, arsenic, nickel, chromium, tars, asphalts, silica, spray painting, nonspray painting, pesticides, diesel engine exhaust, cooking fumes, welding fumes, and man-made mineral fibres. Regular exposure referred to exposure at least once a week for at least 6 months. We introduced the study as a general 'male health' study to both cases and referents to minimise the potential recall bias.
We performed unconditional logistic regression models to estimate the odds ratio (OR) and the 95% confidence interval (CI) for lung cancer related to occupations, industries, and exposure to specific agents or groups of agents. In building the model, we initially included various potential confounding factors into a 'base' model using a forward stepwise method; the variables that were statistically significant and finally retained in the 'base' model were age, place of birth, education level, residential radon exposure, past history of lung diseases, any cancer in first-degree relatives, and intake of meat. We then forced each occupational exposure (i.e. occupations, industries, and exposure to specific agents or groups of agents) into the 'base' model to estimate the adjusted OR. Preliminary analyses were performed for major ISCO and ISIC groups; if elevated ORs (ever vs never exposed) were suggested, further analyses were conducted in the submajor groups. Two reference groups were applied to each comparison: (1) workers who had never worked in that occupation or industry or had never been exposed to that defined agent (or group of agents) and (2) workers who had never been exposed to any of the confirmed or suspected human carcinogens in the list. Analyses were carried out separately for adenocarcinoma, the most common histological type among nonsmokers (89 cases, 69.4% of all lung cancers).
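As an illustration of how such adjusted ORs and CIs are obtained from an unconditional logistic model, a minimal sketch is given below (Python/statsmodels); the original analysis may have used other software, and the data file, column names such as `silica_dust`, and covariate coding are purely hypothetical.

```python
# Illustrative sketch: adjusted odds ratio and 95% CI for one occupational exposure
# from an unconditional logistic regression, with the 'base' model covariates forced in.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("nonsmoking_men.csv")  # hypothetical subject-level analysis file

covariates = ["age", "place_of_birth", "education", "radon",
              "prior_lung_disease", "family_cancer", "meat_intake"]  # illustrative names
# Categorical covariates would need dummy coding (e.g. pd.get_dummies) in practice.
X = sm.add_constant(df[["silica_dust"] + covariates])   # exposure added to the base model
model = sm.Logit(df["lung_cancer"], X).fit(disp=0)      # lung_cancer coded 1 = case, 0 = referent

odds_ratio = np.exp(model.params["silica_dust"])
ci_low, ci_high = np.exp(model.conf_int().loc["silica_dust"])
print(f"Adjusted OR = {odds_ratio:.2f} (95% CI: {ci_low:.2f}, {ci_high:.2f})")
```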
RESULTS
This study included 132 nonsmoking male cases and 536 nonsmoking male referents (Table 1). Cases were more likely to be alcohol drinkers, divorced, and exposed to ETS, but they were less educated and had lower family income; a statistical significance was only observed for family income. The mean age of cases was 2.2 years older than the referents (61.9 years) at the time of diagnosis. As described previously, we found that more cases were exposed to higher level of residential radon and had any cancer in first-degree relatives than the referents (Tse et al, 2009b).
The ORs of lung cancer for employment in major industries and occupations are presented in Table 2. After adjustment of residential radon exposure and other potential confounding factors, only workers ever employed as 'bricklayers, carpenters and other construction workers (ISCO code: 9 -5)' showed a significantly increased OR of 2.25 (95% CI: 1.11, 4.54). Elevated ORs were also suggested for some other industries and occupations, but no statistical significance was observed.
The ORs of lung cancer for occupational exposure to specific or group of agents are shown in Table 3. Significantly increased risk was associated with silica dust, painting, and diesel exhaust. Exposure to spray painting showed a 19% higher lung cancer risk than the nonspray painting work. An increased risk of lung cancer was associated with the increasing years of employment for workers exposed to silica dust and spray painting (Table 4).
Separate analyses were repeated for the risk of adenocarcinoma (Tables 3 and 4). We found that a significant OR (2.91, 95% CI: 1.10, 7.68) was retained only for workers exposed to silica dust, with an indication of an exposure-response relationship with duration of employment. A positive gradient was also observed for painting workers regardless of the process of spray. The risk estimates tended to be stronger when the reference group was replaced by a group of men who had never been exposed to any of the confirmed or suspected human carcinogens in the list (Tables 3 and 4). We further examined the correlation between occupational agents and found no obvious correlation between them (r = 0.01-0.35). No important effect modification by exposure to ETS was identified for the associations between these defined occupational exposures and lung cancer.
DISCUSSION
This population-based case-referent study aimed to identify occupational exposures related to elevated risk of lung cancer among lifelong nonsmoking Chinese men in Hong Kong. We
found that the groups with employment as 'bricklayers, carpenters, and other construction workers' or occupational exposure to silica dust, diesel exhaust, or painting were associated with a significantly increased risk of lung cancer. On account of the small number of subjects in each specific industry or occupation, we analysed risk in major industrial and occupational groups, and found an increased risk of lung cancer among men who had ever been employed as 'bricklayers, carpenters, and other construction workers'. In Hong Kong, workers employed in construction and related work account for around 9% of the local workforce, while employment as 'bricklayers, carpenters, and other construction workers' is the major occupation of the local construction industry, involving several job tasks, such as stone cutting, pneumatic drilling, caisson work, tunnelling, dynamiting, rock sand blasting (already banned), stone crushing, cement machine attendant, bricklayer, decoration work, truck driving or operating excavating machines, and unskilled labour (Yu et al, 2007). These job tasks are frequently linked to several occupational hazards, in particular silica dust, diesel exhausts, and painting work. All these occupational exposures are confirmed or suspected to be associated with an increased risk of lung cancer (IARC, 1989, 1997, 2009). Crystalline silica dust was reclassified as a human group 1 carcinogen by the International Agency for Research on Cancer (IARC) in 1997 (IARC, 1997), while its carcinogenicity has long been debated as potentially confounded by the effect of smoking (McDonald and Cherry, 1999; Checkoway and Franzblau, 2000; Hessel et al, 2000; Pelucchi et al, 2006). We estimated the independent effect of silica dust in nonsmokers, thus avoiding this problem. A recently published multicentre case-referent study in Europe found an OR of 1.76 (95% CI: 0.97, 3.21) in nonsmoking subjects who had ever been exposed to silica dust, and a higher OR in the longest duration of employment group (OR = 2.39, 95% CI: 1.11, 5.15, based on 223 cases of which 48 were male) after adjustment for age, sex, and study centre (Zeka et al, 2006). The same research group found an OR of 1.41 (95% CI: 0.79, 2.49) after redefining the nonexposure group as subjects not exposed to silica dust for >20 years before interview (Cassidy et al, 2007). We observed an OR of 2.58 among male workers ever exposed to silica dust and a positive association with increasing years of employment. We carried out a sensitivity analysis and found that the risk estimate was almost unchanged (OR = 2.55, 95% CI: 1.14, 5.73) after redefining nonsmoking status as in the European study, i.e. a man who smoked <100 cigarettes in his lifetime (Zeka et al, 2006). We further re-estimated the results by removing 'any cancer in first-degree relatives', 'past history of lung diseases', and 'meat intake' (these variables were not considered in Cassidy's study) from the model, and found that the OR was reduced by 6.2% (OR = 2.42, 95% CI: 1.07, 5.49), but it was still higher than those reported by Zeka et al (2006). Our study provides supportive evidence for an independent effect of crystalline silica on lung cancer risk among nonsmokers.
About 18% of our nonsmoking lung cancer cases had been involved in painting work, and the majority of them were assigned to renovation work in construction or car renewals in which spray painting is frequently required. Employment as a painter has been listed as a human group 1 carcinogen (IARC, 1989); two recent reviews support the conclusion of IARC after assessing reports on painters published since 1951, and a weak association with lung cancer risk (1.22 and 1.36) was shown after controlling for smoking (Bachand et al, 2010; Guha et al, 2010). Occupational exposure as a spray painter has been associated with an increased risk of urinary tract and testicular cancer, whereas its separate effect on lung cancer has not yet been reported (IARC, 1989). We found a slightly higher risk of lung cancer among painting workers using spray at work (OR = 2.81) than among general painters whose work never involved spraying (OR = 2.36), suggesting exposure differences. Painting workers are commonly exposed by inhalation of solvents and paint dusts (e.g. silica dusts, asbestos dusts, and heavy metals) (IARC, 1989), while spray painting workers may be
additionally exposed to a variety of suspected carcinogens in the form of aerosol or fine particles, which can be readily absorbed deep into the lungs (Sabty-Daily et al, 2005). Our positive association with years of employment as spray painters corroborates the IARC conclusion. Previous studies showed an average of 33% (95% CI: 24, 44%) excess risk of lung cancer among railroad workers and truck drivers occupationally exposed to diesel engine exhaust emissions, but were commonly criticised for the lack of reliable exposure assessment and inadequate control for smoking. The IARC evaluated diesel exhaust as a group 2A carcinogen because of the limited evidence of carcinogenicity in humans (IARC, 2009). We observed an OR of 3.47 among nonsmoking men occupationally exposed to diesel exhaust, which is much higher than previously reported, but may well be unreliable, as only six cases of lung cancer were exposed to diesel exhaust and there was no gradient with duration of employment.
There have been few studies of occupation and histological types of lung cancer among lifelong nonsmoking men. Our study numbers allowed us to explore only the risk of adenocarcinoma (the commonest histological type) among nonsmoking men, and we found a slightly stronger association with silica dust exposure (a significant OR was retained), but relatively lower ORs for occupational exposure to painting and diesel exhaust than for all lung cancer cases. The relatively wide CIs of many of the risk estimates indicate our limited power for investigating associations with adenocarcinoma risk, while the multiple comparisons point to the possibility that some significant results have occurred by chance.
Accuracy in recall of the nonsmoking status of our subjects (>0.95) and selection bias for the cases and community referents have been addressed in another paper about ETS and lung cancer (Tse et al, 2009b). Misclassification of self-reported occupational exposures is a concern because the workers might not accurately identify the specific hazards in their working environments, but these errors are likely to be nondifferential between cases and referents, resulting in under-estimation of risk. Also, it is difficult to disentangle the effects of different job tasks when workers were employed in several occupations during their lifetimes and thus potentially exposed to multiple chemical substances, which indeed may occur even if only one occupation was involved. We are aware that using 'ever exposure' might not be a good measurement to quantify the independent effect of an occupational exposure. This study is, therefore, only preliminary and needs confirmation.
To further evaluate the potential recall and/or interviewer bias, we interviewed a subgroup of 45 proxy respondents (e.g. spouse) 2 months after the initial interview and found the overall agreement on occupational exposures was excellent (κ = 0.72). The test-retest reliability for the same respondents was also very good for both cases (κ = 0.65) and referents (κ = 0.60). We further interviewed a special group of 64 inpatient referents (who had to undergo surgical operations for suspected lung cancer and were treated as lung cancer cases at the interviews, but eventually were diagnosed as not suffering from lung cancer) who showed a lower proportion of occupational exposures than the confirmed lung cancer cases, suggesting that any interviewer or recall bias was not a major issue.
Our study found that men employed as 'bricklayers, carpenters, and other construction workers' and those who had ever been occupationally exposed to silica dust, diesel exhaust, and painting work were associated with an increased risk of all lung cancers, and the effects were independent of smoking.
|
v3-fos-license
|
2021-09-01T15:06:00.471Z
|
2021-06-29T00:00:00.000
|
237427048
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1155/2021/4316238",
"pdf_hash": "c11269a9784a9fd1cafdc27492088548ef7d538e",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44437",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "f343d2f1806cbe33ed2206cbb9d2d20b3c63930e",
"year": 2021
}
|
pes2o/s2orc
|
Global Existence and General Decay of Solutions for a Quasilinear System with Degenerate Damping Terms
In this work, we consider a quasilinear system of viscoelastic equations with degenerate damping, dispersion, and source terms under Dirichlet boundary conditions. Under some restrictions on the initial data and standard conditions on the relaxation functions, we study the global existence and general decay of solutions. The results obtained here are a generalization of recent previous work.
By taking in which a > 0, b > 0, and 1 < κ < +∞ if n = 1, 2 and 1 < κ ≤ (3 − n)/(n − 2) if n ≥ 3: It is simple to show that where To motivate our problem (1), we can trace it back to the initial boundary value problem for the single viscoelastic equation of the form This type of problem appears in a variety of mathematical models in applied science. For instance, in the theory of viscoelasticity, physics, and materials science, problem (5) has been studied by various authors, and several results concerning blow-up and energy decay have been obtained for the case η ≥ 0. For example, Liu [1] studied a general decay of solutions for the case g(u, u_t) = 0. Messaoudi and Tatar [2] applied the potential well method to establish the global existence and uniform decay of solutions (with g(u, u_t) = 0 instead of Δu_t). Furthermore, the authors obtained a blow-up result for positive initial energy. Wu [3] studied a general decay of solutions for the case g(u, u_t) = |u_t|^m u_t. Later, Wu [4] studied the same problem for the case g(u, u_t) = u_t and discussed the decay rate of the solution energy. Recently, Yang et al. [5] proved the existence of a global solution and an asymptotic stability result, without restrictive conditions on the relaxation function at infinity, for the case f(u) = σ(x, t)W_t(t, x).
In the case g(u, u_t) = 0 and without the dispersion term, problem (5) has been investigated by Song [6], and a blow-up result for positive initial energy has been proved. For a coupled system, He [7] investigated the following problem where η > 0, j, s ≥ 2: The author proved general and optimal decay of solutions. Then, in [8], the author investigated the same problem without the damping term and established a general decay of solutions. Furthermore, the author obtained a blow-up of solutions for negative initial energy. In addition, for problem (1) in the case η = 0 and without the dispersion term, Wu [9] proved a general decay of solutions. Later, Pișkin and Ekinci [10] studied a general decay and blow-up of solutions with nonpositive initial energy for problem (1) (with a Kirchhoff-type operator instead of Δu and without the dispersion term).
In recent years, some other authors have investigated hyperbolic-type systems with degenerate damping terms (see [11-14]). The rest of the paper is arranged as follows: in Section 2, as preliminaries, we give the necessary assumptions and lemmas that will be used later and state the local existence theorem without proof. In Section 3, we prove the global existence of solutions. In the last section, we study the general decay of solutions.
Preliminaries
We begin this section with some assumptions, notations, lemmas, and theorems. Denote the standard L^2(Ω) norm by ‖·‖ = ‖·‖_{L^2(Ω)} and the L^p(Ω) norm by ‖·‖_p = ‖·‖_{L^p(Ω)}. To state and prove our results, we need the following assumptions: (A1) h_i : [0, ∞) → (0, ∞), (i = 1, 2), are C^1 functions satisfying and there exist nonincreasing differentiable positive C^1 functions ς_1 and ς_2 such that (A2) For the nonlinearity, we assume that (A3) Assume that η satisfies In addition, we present some notations: Lemma 1 (Sobolev-Poincaré inequality) [15]. Let q be a number with 2 ≤ q < ∞ (n = 1, 2) or 2 ≤ q ≤ 2n/(n − 2) (n ≥ 3); then there is a constant C_* = C_*(Ω, q) such that ‖u‖_q ≤ C_*‖∇u‖ for u ∈ H_0^1(Ω). Now, we state the local existence theorem, which can be established by combining the arguments of [7,10].
We define the energy function as follows: Also, we define By computation, we get
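The displayed definitions referenced here did not survive extraction. For orientation only, a typical total-energy functional for viscoelastic systems with two relaxation kernels h_1, h_2 has the form sketched below in LaTeX; this is a generic template under those assumptions, not necessarily the exact functional used for problem (1), which also involves the dispersion and degenerate damping terms.

    E(t) = \frac{1}{2}\bigl(\|u_t\|^2 + \|v_t\|^2\bigr)
         + \frac{1}{2}\Bigl(1 - \int_0^t h_1(s)\,ds\Bigr)\|\nabla u\|^2
         + \frac{1}{2}\Bigl(1 - \int_0^t h_2(s)\,ds\Bigr)\|\nabla v\|^2
         + \frac{1}{2}\bigl[(h_1 \circ \nabla u)(t) + (h_2 \circ \nabla v)(t)\bigr]
         - \int_\Omega F(u,v)\,dx,
    \qquad (h_i \circ w)(t) = \int_0^t h_i(t-s)\,\|w(t) - w(s)\|^2\,ds .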
Global Existence
In this part, in order to state and prove the global existence of solutions of problem (1), we first give two lemmas.
Theorem 5. Suppose that the conditions of Lemma 4 hold; then the solution of problem (1) is bounded and global in time.
Proof. We have Thus, where the positive constant C depends only on κ, l_1, l_2. This implies that the solution of problem (1) is global in time.
|
v3-fos-license
|
2019-04-29T13:13:00.767Z
|
2016-08-08T00:00:00.000
|
137829202
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scielo.br/pdf/mr/v19n5/1516-1439-mr-1980-5373-MR-2015-0449.pdf",
"pdf_hash": "8fb624e7e208ba2f9ea9dcb0293a35ce0e418054",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44439",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"sha1": "a20a4b2cfaa23e6b02a876fdb12c51cb6c5806f2",
"year": 2016
}
|
pes2o/s2orc
|
Mechanical Properties and Crystallographic Texture of Symmetrical and Asymmetrical Cold Rolled IF Steels
The crystallographic texture developed during cold rolling and subsequent annealing of interstitial free sheet steels aims to increase formability. For this, it is necessary to obtain partial α-fiber and continuous and homogeneous γ-fiber texture components. In this work, the influence of symmetric (SR) and asymmetric (AR) cold rolling on the crystallographic texture and mechanical properties of an interstitial free (IF) steel was investigated. Symmetric cold rolling yields α- and γ-fibers, which are enhanced as deformation increases. Moreover, α-fiber weakening occurs due to recrystallization, improving formability. The same fibers are produced by asymmetric cold rolling, but in this case the γ-fiber is slightly shifted in psi (one of the Euler angles in Roe's notation 1,2) and is more homogeneous than in symmetric rolling. The best mechanical properties were achieved by asymmetric cold rolling with about 80% deformation followed by annealing.
Introduction
Interstitial free steels (IF) are widely used in the automotive industry 3,4 thanks to their low tensile strength (100 to 350 MPa) and high formability. The industry tries to improve these properties even more, through appropriate thermomechanical processes such as cold rolling and annealing.
Cold rolling is one of the techniques used to obtain preferred orientations in the material that will improve its mechanical properties, especially the formability. Cold rolling may be symmetrical or asymmetrical. In the first case, the rolls have the same diameter, the same speed and the same friction coefficient. In the second, at least one of those conditions is different, leading to an additional shear strain, which may improve the mechanical properties and the formability 5 .
By definition, the normal anisotropy coefficient (r_m or r̄) is the planar average of the r value, which is the ratio of the width strain ln(w_0/w) to the thickness strain ln(t_0/t), i.e. r = ln(w_0/w)/ln(t_0/t), obtained by a simple tensile test 6 . In this equation, w_0 and t_0 are the initial sample width and thickness, respectively, and w and t its width and thickness after about 15% of plastic deformation. This parameter is generally used to verify the ability of a sheet to undergo deep drawing. The larger this coefficient, the better is the formability 7,8 . This can be seen in Equation (1), r_m = (r_0 + 2r_45 + r_90)/4, where r_0, r_45, r_90 are the r values obtained for samples taken at 0°, 45° and 90° from the rolling direction and r_m is the normal anisotropy coefficient.
If r_0, r_45 and r_90 exhibit significant differences, the "earing phenomenon" can occur, due to the planar anisotropy (Δr) described by Equation (2) 12 , Δr = (r_0 − 2r_45 + r_90)/2. The sheet formability is also influenced by the hardening coefficient (n), which, using the Hollomon equation 13 (σ = Kε^n), is the slope of the ln(σ) versus ln(ε) curve in the plastic regime and is numerically equal to the uniform elongation. This parameter is important for forming operations, because it measures the hardening ability of the material, i.e., its ability to homogeneously distribute deformation along the sheet surface before necking 14 .
This work aims to compare the effect of symmetrical and asymmetrical cold rolling in the texture and mechanical properties of an IF steel.
Symmetrical (SR) and asymmetrical (AR) cold rolling were performed in a FENN MFG. Co. model D51710 rolling mill to 70, 80 and 90% thickness reductions. The configuration for SR was two mill rolls with 133.70 mm diameter each (duo configuration). For the AR, upper and lower rolls with 40.18 mm and 31.72 mm diameter, respectively, were used. For analyzing the effect of annealing, after rolling, samples were annealed in a salt bath at 850 °C for 120 s and cooled in air. Finally, the samples were mechanically polished to half thickness and etched by a 5% hydrofluoric acid (HF) and 95% hydrogen peroxide (H2O2) solution for 20 s.
The X-ray measurements were performed on a PANalytical X'Pert PRO MRD diffractometer using Co-Kα radiation and yielded (110), (200) and (211) pole figures. These experimental pole figures were processed with the help of the popLA software and are presented for the 0° and 45° φ2 sections using the Bunge notation.
In order to evaluate the mechanical properties of the annealed materials, 5 rectangular samples with dimensions of 15 x 100 mm were cut for each processing condition, i.e., symmetric and asymmetric cold rolling followed by annealing at 850 °C for 120 s, at three different angles to the rolling direction (0°, 45° and 90°), totaling 90 samples. The tensile tests were conducted on an EMIC DL 10000 machine with a 3 mm/min loading rate, following the ABNT NBR 6892-1 15 standard, and the yield strength (σe) and tensile strength (σm) were evaluated. The ABNT NBR 16282 16 standard was followed for the r_m, Δr and work hardening coefficient (n) determinations. The highest and the lowest values of each measurement were discarded and the average of the others was taken as the final result.
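As a compact reference for the quantities defined in the Introduction, the sketch below computes r from the measured width and thickness strains and then the normal and planar anisotropy; the numerical values are placeholders for illustration, not measurements from this work.

    import numpy as np

    def r_value(w0, w, t0, t):
        # Lankford coefficient: ratio of width strain to thickness strain.
        return np.log(w0 / w) / np.log(t0 / t)

    def normal_anisotropy(r0, r45, r90):
        # Planar average r_m, Equation (1).
        return (r0 + 2.0 * r45 + r90) / 4.0

    def planar_anisotropy(r0, r45, r90):
        # Delta r, Equation (2); values near zero minimise earing.
        return (r0 - 2.0 * r45 + r90) / 2.0

    # Placeholder r values for the three sampling directions:
    r0, r45, r90 = 1.6, 1.2, 1.8
    print(normal_anisotropy(r0, r45, r90))   # 1.45
    print(planar_anisotropy(r0, r45, r90))   # 0.5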
Results and Discussion
The φ2 = 45° sections of the Orientation Distribution Function (ODF) for the symmetrical cold rolled and annealed samples are shown in Figure 1. α- and γ-fibers can be observed for the 70, 80 and 90% cold rolled samples. The volume fraction of α-fibers increases up to 90% deformation, but the volume fraction of γ-fibers is smaller for 90% deformation than for 80% deformation. This may be related to the development of a new <110>//RD component, which inhibits the development of γ-fibers. The effect of annealing is to decrease the volume fraction of α-fibers and to increase the volume fraction of γ-fibers, thus increasing the formability of IF steels 17 . The graphs of Figure 2 illustrate the development of α- and γ-fibers for all symmetrical cold rolled and annealed samples.
Since the deformation of asymmetrically rolled samples is not homogeneous over the sheet thickness, the texture was analyzed in three different sample positions: on the lower and upper surfaces and in the middle plane. Generally, the texture was weakest on the lower surface and strongest on the upper surface, where it was similar to that of symmetrically rolled samples. For comparison, Figure 3 shows the φ2 = 45° ODF sections for the middle plane, where no components are observed at Φ = 54.7°, the original position of the γ-fiber. In Figure 3, the ODF sections for the annealed 70, 80 and 90% asymmetrically cold rolled samples can be seen. This is taken as evidence of a shift of the position of the γ-fiber, as reported by Tóth et al 18 . The graphs of Figure 4 show the volume concentration of α- and γ-fibers in asymmetrically rolled samples, before and after annealing. It can be seen that the volume fractions of α- and γ-fibers are smaller than in symmetrically rolled samples and that annealing decreases the volume fraction of α-fibers and thus increases formability.
Stress-strain curves of annealed symmetrically and asymmetrically cold rolled samples are shown in Figure 5. It can be observed that the yield (σe) and tensile (σm) strengths increase with deformation for both symmetrically (SRXX-Y) and asymmetrically (ARXX-Y) rolled/annealed samples, where XX denotes the deformation degree and Y is the angle at which the sample was taken. Asymmetrical rolling yielded larger σe and σm values for all deformations and angles, except for the SR90-0 sample, whose σm was larger than that of the asymmetrically rolled/annealed samples.
The larger tensile strength (σm) of asymmetrically rolled samples may be due to the fact that the larger shear stress associated with asymmetrical deformation leads to grain breaking and, consequently, to a smaller grain size in the rolled sample 19 . Wauthier et al. 20 reported that this phenomenon could be observed by EBSD. The values of σm and σe in this work are larger than those reported by other authors 21,22 .
As shown in Table 1, the work hardening coefficient n decreases with increasing deformation, as might be expected if deformation becomes more and more nonhomogeneous as thickness reduction progresses. The decrease is approximately linear and, in the case of asymmetric rolling, can be described by Eq. 3, where %x is the percentage relative thickness reduction and the coefficient of determination is 0.95.
n = 0.9741 − 0.0095(%x)    (3)
Table 2 shows the normal and planar anisotropy of the annealed samples. The values are lower than expected for this type of material [23][24][25] . The largest value of r_m was 1.45, for the SR90 samples. Annealed asymmetrically rolled samples reached even lower r_m values, around 1.0. For all annealed symmetrically cold rolled samples and for the AR90 sample (annealed asymmetrically rolled), negative values of planar anisotropy were found, while positive values were found for the AR70 and AR80 samples. The ideal planar anisotropy from the point of view of formability should be around zero, but the lowest value found in this work was 0.26, for the AR80 sample. These Δr values indicate that deep drawing test samples will show "ears" at X and Y degrees with the rolling direction.
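A linear fit of this kind can be reproduced with a few lines of code; the (reduction, n) pairs below are placeholders standing in for the values in Table 1, used only to show the procedure and to evaluate the fitted relation of Eq. 3.

    import numpy as np

    # Placeholder (thickness reduction %, n) pairs, not the actual Table 1 data.
    reduction = np.array([70.0, 80.0, 90.0])
    n_measured = np.array([0.31, 0.21, 0.12])

    # Least-squares line n = intercept + slope*(%x), analogous to Eq. 3.
    slope, intercept = np.polyfit(reduction, n_measured, 1)
    print(f"n = {intercept:.4f} + ({slope:.4f})*%x")

    # Evaluating the fitted relation reported in the text, n = 0.9741 - 0.0095(%x):
    for x in (70, 80, 90):
        print(x, round(0.9741 - 0.0095 * x, 4))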
Conclusions
The results of the present work lead to the following conclusions: (i) The symmetrically cold rolled and annealed samples presented a texture typical of low carbon steels, consisting of partial α-fibers and continuous γ-fibers; (ii) The volume fraction of γ-fibers increased with deformation up to a thickness reduction of 80%. A decrease in the volume fraction of γ-fibers was observed for a 90% thickness reduction, possibly due to the appearance of new <110>//RD components; (iii) The texture induced by asymmetric rolling was weaker than the texture induced by symmetric rolling, but the volume fraction of α-fibers was considerably reduced by annealing, thus increasing formability; (iv) In the case of asymmetrically rolled/annealed samples, texture components were not found in the original position of the γ-fibers, Φ = 54.7°, but fibers were formed between 60° < Φ < 75°, suggesting a displacement of the γ-fibers; (v) All deformations increased the values of the yield and tensile strength; the largest increases of σe and σm were observed in asymmetrically rolled samples.
Table 1: Work hardening coefficient (n) of annealed symmetrically and asymmetrically cold rolled samples.
|
v3-fos-license
|
2020-05-28T09:09:06.605Z
|
2020-05-21T00:00:00.000
|
219523812
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2075-4701/10/5/673/pdf",
"pdf_hash": "6f65f3277d9cd4d898d055ed48d40494836e3e01",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44441",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "0c9e1ae4a3c9373d74ec019c0ee85f890e0660cd",
"year": 2020
}
|
pes2o/s2orc
|
Evaluation of Hot Deformation Behaviour of UNS S32750 Super Duplex Stainless Steel (SDSS) Alloy
The super-duplex stainless steel UNS S32750 consists of two main phases, austenite and ferrite, which differ not only in their morphology and physical and mechanical properties, but also in their deformation behaviour. A heterogeneous deformation can be obtained during thermomechanical processing, generating internal stresses and sometimes fissures or cracks on the sample lateral surfaces, due to the ferrite phase's lower potential to accommodate plastic deformation in comparison with the austenite phase. The research objective is to determine the optimum range of the applied deformation degree during hot deformation processing, by upsetting, of the super-duplex steel (SDSS) UNS S32750. In the experimental program, several samples were hot deformed by upsetting, applying a deformation degree between 5-50%, at 1050 °C and 1300 °C. The most representative hot-deformed samples were selected and analysed by scanning electron microscopy-Electron Backscatter Diffraction (SEM-EBSD) to determine the main microstructural characteristics obtained during thermomechanical processing. Considering the experimental results, the influence of the applied deformation degree on the microstructure has been evaluated. Microstructural features, such as the nature, distribution, morphology and relative proportion of the constituent phases, Grain Reference Orientation Deviation (GROD), and recrystallization (RX), were analysed in correlation with the applied deformation degree. Finally, it was concluded that the UNS S32750 alloy can be safely hot deformed, by upsetting, at 1050 °C and 1300 °C, with a maximum applied deformation degree of 20% at 1050 °C and of 50% at 1300 °C, respectively.
Introduction
Super Duplex Stainless Steels (SDSS) are defined as a class of stainless steels with a microstructure consisting of two phases: ferrite δ and austenite γ, in approximately equal proportions, containing enough Cr, Mo, and N to provide high resistance to pitting corrosion. SDSS successfully combine the properties of ferritic and austenitic stainless steels, possessing a good combination of strength, ductility, and corrosion resistance in different corrosive environments [1][2][3][4]. Although they represent a very small percentage from the total of stainless steels (approx. 1%), the SDSS are an industrial success, due to the exceptional combination between superior mechanical strength, high toughness, and increased corrosion resistance under critical conditions (a strong resistance to stress corrosion cracking, to pitting corrosion, etc.) [5,6].
The hot workability of a metallic material depends in a very complex manner on the material composition and on the thermomechanical processing parameters. However, these two categories of factors cannot be treated independently, as they are closely interconnected, since process parameters can induce important changes in properties. The ability of the material to withstand deformation,
Thermomechanical Processing Route
The material used in the experimental program had the same preliminary processing history as the one used in the previous research [25]. This condition is being considered as the initial (as-received) structural state. The used samples had the following geometric configuration: the diameter d = 18 mm and the height h = 27 mm (ratio h/d = 1.5). All of the samples were checked for pre-existing defects, such as micro-cracks, porosity, laps, etc., on the lateral surface, while using liquid penetrant inspection technique.
The samples were hot deformed at two different temperatures, 1050 °C and 1300 °C. The deformation temperatures were selected according to previous results [25]. After heating, the samples were hot deformed, by upsetting, in axial compression, with different deformation degrees ranging from 5% to 50%, in 5% steps, at a constant strain rate of approx. 0.37 s−1. Table 1 presents the performed experimental program for the UNS S32750 SDSS alloy.
All of the samples were heated using a NABERTHERM HTC 08-16 furnace (Nabertherm GmbH, Lilienthal, Germany), at a temperature higher by 20 • C than the desired deformation temperature, while taking the cooling of the samples from the moment of extraction from the furnace until the beginning of the upsetting processing into account. As deformation equipment, a 200 tf hydraulic press was used, using a constant crosshead speed of approx. 10 mm/s (strain rate of approx. 0.37 [s −1 ]). The holding duration, in the above furnace, at the prescribed temperatures was 20 min., to equalize the temperature over the entire volume of the sample. After hot deformation, by upsetting, the samples were water cooled to ambient temperature in order to freeze sample internal microstructure. After cooling, all of the samples were checked for micro-cracks/fissures, while using the liquid penetrant inspection technique.
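The quoted nominal strain rate follows directly from the crosshead speed and the initial sample height; a short check using the values stated above is:

    # Initial engineering strain rate = crosshead speed / initial sample height.
    v = 10.0    # mm/s, constant crosshead speed
    h0 = 27.0   # mm, initial sample height
    print(round(v / h0, 2))  # 0.37 1/s, as quoted in the text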
Microstructural Characterization
Several representative samples, marked by * in Table 1, were selected for SEM-EBSD analysis. Figure 1 schematically shows the reference system of the analysed samples. The samples were cut at mid-height and analysed in the LD-TD plane (Longitudinal Direction-Transverse Direction), in selected area 3 (2/3 from the sample centre), according to this reference system. The sample cutting was performed using a Metkon MICRACUT 200 (Metkon Instruments Inc., Bursa, Turkey) precision diamond cutting machine. The cut samples were embedded in conductive phenolic resin. Special attention was paid to the embedding operation, to avoid, as much as possible, overheating the sample and inducing any potential microstructural transformation. The samples were hot-embedded, in conductive phenolic resin, at a temperature of 138 °C, with a holding time of 10 min at the embedding temperature. The embedded samples were then prepared by polishing on a Metkon Digiprep ACCURA (Metkon Instruments Inc., Bursa, Turkey) equipment. After these operations, an additional super-final polishing was performed with a Buehler VibroMet2 (Buehler, Lake Bluff, Illinois, IL, USA) equipment in order to improve the quality of the sample surface. The polishing and super-final polishing steps are presented in detail in a previous paper [25].
SEM-EBSD microstructural analysis was performed with a scanning electron microscope (SEM), model TESCAN VEGA II-XMU (TESCAN, Brno, Czech Republic). This microscope was equipped with an EBSD detector, model BRUKER Quantax e-Flash (Bruker Corporation, Billerica, MA, USA). The analysis was performed in the LD-TD plane, at 1/2 height and 2/3 distance from the middle of the sample (see Figure 1). The SEM-EBSD analysis had the role of highlighting the main microstructural characteristics of the UNS S32750 SDSS alloy in the following microstructural states: initial (as-received) and hot deformed, by upsetting, with different deformation degrees, ε = 5-50%, at deformation temperatures of 1050 °C and 1300 °C.
In order to identify the microstructure constituent phases of the UNS S32750 SDSS alloy, the following phases were considered: γ-phase (austenite), indexed in the cubic system (225), space group Fm3m (with the lattice parameter a = 3.66 Å) and δ-phase (ferrite), indexed in the cubic system (229), space group Im3m (with the lattice parameter a = 2.86 Å). The following SEM-EBSD parameters were used: 200× magnification, 320 × 240 pixels resolution, 10 ms acquisition time/pixel, 1 × 1 binning size, and less than 3% zero solutions.
The nature, distribution, morphology, and proportion (weight fraction) of the constituent phases, structural homogeneity, grain size, and dynamic recrystallization were studied, in relation to the applied deformation temperature and deformation degree. Figure 2 illustrates a series of representative SEM-EBSD composite images, resulting from the microstructural analysis that was performed for the super-duplex steel in the initial structural state (see Table 1, sample 0). Figure 2a shows the distribution of the austenite (γ-phase) and ferrite (δ-phase) within the microstructure, Figure 2b,c show the distribution of size and shape of austenite and ferrite grains, and Figure 2d shows the GROD (Grain Reference Orientation Deviation) distribution of both austenite and ferrite phases in the analysed field.
SEM-EBSD Microstructural Analysis of As-Received UNS S32750 Alloy
The analysis of the images presented in Figure 2 revealed that the initial microstructure of UNS S32750 is homogeneous, with only two phases identified: ferrite (δ-phase), which acts as a metallic matrix, with large grains (up to 400 µm in size), and austenite (γ-phase), which is dispersed, showing elongated and irregular grains of different sizes between 50 µm and 200 µm, generally smaller than the ferrite grains. The proportion of constituent phases is about 50-52% ferrite (δ-phase) and 48-50% austenite (γ-phase). No other secondary phases were detected in the microstructure of the UNS S32750 alloy in the initial state.
The Grain Reference Orientation Deviation (GROD) map distribution was used as a tool to assess the accumulated deformation/strain at the microstructural level [26,27]. The GROD map distribution is based on the misorientation (MO) between a reference point and the other points of the considered grain. The average orientation of the considered grain can be taken as the reference point [28,29]. As shown, the GROD map distribution presents grains with deviations from the average grain orientation, deviations that occur either due to the accumulated strain induced by slip/twinning/rotation of the grains, or by other effects of deformation, such as strain hardening, dynamic recrystallization, etc.
The GROD distribution in the initial structural state (Figure 2d) shows that both ferrite and austenite have low-stressed grains; the maximum GROD was equal to 8° and was recorded in the case of ferrite. From the analysis of the distribution of the accumulated stress/strain in the ferrite and austenite phases, it can be observed that, generally, the austenite phase shows a more uniform distribution and a lower level of accommodated stress/strain in comparison with the ferrite phase (Figure 2d). Additionally, one can observe that high stress/strain areas (marked with red circles in Figure 2d) are present in both phases. Due to the low GROD, it can be assumed that these areas have a low susceptibility to generate micro-cracks. In the as-received state, the internal stressed/strained areas are most likely created by the previously applied thermomechanical processing procedure.
Hot Deformation, by Upsetting, of As-Received UNS S32750 Alloy
Figure 3 shows the specimens where cracks were observed on the sample lateral surfaces after hot deformation, by upsetting, at 1050 °C, for the following applied deformation degrees: ε = 35% (Figure 3a), ε = 40% (Figure 3b), ε = 45% (Figure 3c), and ε = 50% (Figure 3d). It can be observed that increasing the applied deformation degree leads to the development of larger cracks on the sample lateral surfaces, due to the limited plasticity of the UNS S32750 SDSS alloy at 1050 °C (mainly due to the limited plasticity of the ferrite). In the case of the samples deformed at 1300 °C, fissures or cracks were not observed on the sample lateral surfaces; the UNS S32750 SDSS alloy maintained its structural integrity even after upsetting with the highest deformation degree, i.e. ε = 50% (the maximum deformation degree applied within the experimental program).
SEM-EBSD Microstructural Analysis
Figure 4 shows a series of representative SEM-EBSD images resulting from the analysis of samples processed, by upsetting, at 1050 °C, with the following deformation degrees: ε = 10% (Figure 4a-d), ε = 20% (Figure 4e-h), ε = 30% (Figure 4i-l), and ε = 35% (Figure 4m-p) (see Table 1, samples 2, 4, 6 and 7). The following microstructural characteristics were analysed in order to determine the optimum deformation degree: constituent phases, distribution and proportion of phases, size and shape of grains, GROD distribution of the constituent phases, and occurrence of recrystallization (RX) of new δ-phase grains.
At 1050 °C, the SEM-EBSD analysis revealed only the presence of two phases, austenite and ferrite, in approximately equal proportions; the ferrite (δ-phase) acts as a metallic matrix and the austenite (γ-phase) is uniformly dispersed in the ferrite matrix. The austenite shows elongated and irregular grains of different shapes and sizes. No other secondary phases were detected in the UNS S32750 alloy processed at 1050 °C. Analysing the influence of the deformation degree on the microstructural characteristics, it can be seen that increasing the deformation degree leads to the fragmentation of both austenite and ferrite, with the fragmented grains showing a continuously decreasing average size (see Figure 4b,c,f,g,j,k,n,o). Starting even with small deformation degrees (ε = 10%), new small recrystallized ferrite grains are visible (see Figure 4c,g,k,o), indicating the occurrence of the dynamic recrystallization (DRX) mechanism in the ferrite phase (marked with white circles in Figure 4d,h,l,p). It can also be seen that increasing the applied deformation degree leads to a slight intensification of the RX phenomena, increasing the weight fraction of recrystallized ferrite grains (see Figure 4c,g,k,o). No RX process was observed in the austenite phase (see Figure 4b,f,j,n).
The GROD distributions, as illustrated in Figure 4d,h,l,p, indirectly show that the level of internal local strain is high for the UNS 32750 alloy, hot deformed at 1050 • C, in comparison with as-received/initial state (see Figure 2d). Areas with a higher strain level can be observed in ferrite grains as compared to austenite grains, even at small applied deformation degrees (i.e., ε = 10% and ε = 20%). Analysing the misorientation spread, one can observe that, in the case of ε = 10% (Figure 4d), the maximum recorded deviation was close to 25 • , while, in the case of ε = 20% (Figure 4h), the maximum recorded deviation was close to 38 • , which indicates a high increase of approx. 52%. This high increase in orientation deviation, from 25 • to 38 • of approx. 52%, shows that the added deformation, from ε = 10% to ε = 20%, induced intense effects on both austenite and ferrite grains (slip/twinning/rotation of grains, strain hardening, DRX, etc.). Analysing the misorientation spread in the case of ε = 30% (Figure 4l), the maximum recorded deviation was close to 40 • , while, in the case of ε = 35% (Figure 4p), when the first cracks/fissures are observed/detected on sample lateral surface, the maximum recorded deviation was close to 41 • . This small increase in maximum recorded deviation, from 38 • to 41 • , corresponding to a high increase in applied deformation degree, from 20% to 35%, shows, firstly, that the risk of cracks/fissures development arises if the maximum GROD value exceeds 38-40 • and, secondly, that the UNS S32750 SDSS alloy can be safely processed, at 1050 • C, up to an applied deformation degree of ε = 20%, if higher deformation degrees are used that the risk of cracking is arising. Figure 5 shows a series of representative SEM-EBSD images, which result from SEM-EBSD analysis of samples processed, by upsetting, at 1300 • C, with the following deformation degrees: ε = 10% (Figure 5a-d), ε = 20% (Figure 5e-h), ε = 30% (Figure 5i-l), ε = 40% (Figure 5m-p), and, ε = 50% (Figure 5q-t) (see Table 1-samples 12,14,16,18,20). By upsetting, the following microstructure characteristics were analysed in order to determine the optimum deformation degree: constituent phases, distribution and proportion of phases, size and shape of grains, GROD distribution of constituent phases, and occurrence of recrystallization (RX) of new δ-phase grains.
At 1300 °C, the SEM-EBSD analysis revealed only the presence of two main phases, the austenite (γ-phase) and the ferrite (δ-phase). The weight fraction of constituent phases averages 35% austenite and 65% ferrite, which shows that increasing the deformation temperature from 1050 °C to 1300 °C leads to an increase in the ferrite weight fraction, due to the dissolution of austenite within the ferrite matrix by the γ → δ phase transformation. Additionally, at 1300 °C, the ferrite acts as a metallic matrix and the austenite shows elongated and irregular grains of different shapes and sizes, evenly dispersed in the ferrite matrix. No other secondary phases were detected.
Analysing the influence of deformation degree on microstructure shows that increasing the applied deformation degree leads to the fragmentation of both austenite and ferrite phases, resulting in fragmented grains with a continuously decreasing average grain size (see Figure 5b,c,f,g,j,k,n,o,r,s). Starting even with small applied deformation degrees, new small ferrite grains are visible in the microstructure, indicating the occurrence of RX mechanism in the ferrite phase (in areas marked with white circles in Figure 5c,g,k,o,s). It can also be observed that the increase of the deformation degree leads to the intensification of the RX in ferrite phase, significantly increasing the weight fraction of the new recrystallized ferrite grains (see areas marked with white circles in Figure 5c,g,k,o,s). At 1300 • C, no RX process was observed in the austenite phase.
The GROD distributions of both austenite and ferrite grains, as illustrated in Figure 5d,h,l,p,t, indirectly show that the level of accumulated internal stress/strain is quite high, but lower than that observed in the case of hot deformation, by upsetting, at 1050 °C. Analysing the misorientation spread, one can observe that the maximum recorded deviation was close to 12° for ε = 10% (Figure 5d), close to 26° for ε = 20% (Figure 5h), close to 28° for ε = 30% (Figure 5l), close to 34° for ε = 40% (Figure 5p), and close to 30° for ε = 50% (Figure 5t). These maximum GROD deviations are below the critical range (38-40°), which explains why no cracks/fissures were induced on the sample lateral surfaces during hot deformation. The fact that the maximum GROD deviation remained close to 34°, below the critical range (38-40°), indicates that the UNS S32750 SDSS alloy can safely be processed, by upsetting, with a deformation degree of ε = 50% at 1300 °C.
In general, when trying to explain the deformation behaviour, the specific mechanisms and phenomena that occur during the thermo-mechanical processing of the material must be taken into account. If an attempt is made to explain all of the observations made using the SEM-EBSD microstructural analysis, the crystallographic characteristics of all of the observed phases must be first considered. In the case of UNS S32750 alloy, only two phases were observed, the δ-phase and the γ-phase. Phase δ (belonging to bcc-body centred cubic crystallographic system) and phase γ (belonging to fcc-face centred cubic crystallographic system) have different potentials to accommodate deformation, either by slip/twinning and/or rotations of grains [26][27][28][29]. If the main driving force for accommodating deformation is considered to be slip/twinning, then the main influential factor in accommodation the plastic deformation is the atomic density on the slip/twinning planes [30,31]. In fcc crystals, when considering the minimum activation energy criterion, the slip system most easy to activate is the primary system {111} <110> and the twinning system most easy to activate is the primary system {111} <112>. In bcc crystallographic systems, when considering the minimum activation energy criterion, the slip system most easy to activate is the primary system {110} <111>, while the twinning system most easy to activate is the primary system {112} <111> [26][27][28][29]32]. The analyse of the atomic density on the slip/twinning plane in both fcc and bcc systems revealed that a higher atomic density, almost double, is exhibited by fcc {111} atomic slip/twinning plane as compared to bcc {110} and {112} slip/twinning planes, indicating why fcc crystalline phases better accommodates deformation compared to bcc crystalline phases, for the same external stress level/processing conditions. In addition to the influence of slip/twinning mechanisms, the influence of other deformation mechanisms and phenomena must also be considered. Figure 6 shows the variation of maximum misorientation angle as a function of the deformation degree at 1050 • C (Figure 6a) and 1300 • C (Figure 6b), in the case of δ-phase. It can be seen that, when upsetting at 1050 • C, the maximum misorientation rapidly reaches a plateau value, close to (38-40 • ), starting even with a 20% deformation degree, plateau close to the critical value. As critical misorientation, one can consider when first cracks/fissures are observed/detected on sample lateral surface, in our case 41 • as maximum deviation. In the case of upsetting at 1300 • C, the maximum misorientation limit of (38-40 • ) is not reached, even for a 50% deformation degree. In the case of hot deformation, by upsetting, at 1300 °C, the maximum misorientation, around 34°, was recorded for an applied deformation degree of 40%, increasing the applied deformation degree to 50%, results in a decrease of the maximum misorientation to about 30°; this indicates that, for more intense deformations, besides primary slip/twinning systems, secondary slip/twinning systems may be activated, to accommodate increased deformation. An increased stress/strain level is necessary in order to activate secondary slip/twinning systems (characterised by higher Miller indices), possessing a lower atomic density on slip/twinning plane in comparison with primary ones. This necessary increased stress/strain level is assured by the increased applied deformation degree. 
Dynamic recrystallization (DRX) is another phenomenon that occurs during upset forging at high temperatures [33]. When analysing all of the microstructural states, it can be seen that both δ and γ phases show typical morphologies of strain-hardened microstructures, but only in the δ-phase can new RX grains be observed (see Figures 4 and 5). The small size of the new RX grains being due to the short duration of hot deformation, by upsetting, at both 1050 • C and 1300 • C. Figure 7 shows the variation of weight fraction of RX δ-phase grains, as a function of applied deformation degree at 1050 • C (Figure 7a) and 1300 • C (Figure 7b). One can consider that all of the new grains, with an average size bellow 10 µm, are RX grains, due to the short duration of hot deformation, by upsetting, at 1050 • C and 1300 • C. When the UNS S32750 SDSS alloy is hot deformed, by upsetting, at 1050 • C, the weight fraction of δ-phase RX grains shows a continuous increase up to a deformation degree of 20%, when it reaches a value that is close to 10.8%. A further increase of the applied deformation degree, up to 30%, will result in a small increase of the weight fraction of RX grains, up to 11.6%. At 35% deformation, when the first micro-cracks are observed, the weight fraction of RX grains shows a value close to 16.3%. This increase in RX δ-phase weight fraction shows that the RX mechanism is related to both deformation temperature and applied deformation degree. When the alloy is hot deformed, by upsetting, at 1300 • C, the weight fraction of the RX δ-phase grains shows a continuous increase up to 43.7%, which corresponds to an applied deformation degree of 50%, when almost half of the δ-phase is recrystallized. This high weight fraction of RX δ-phase grains explains the improved plasticity of δ-phase at 1300 • C. Dynamic recrystallization (DRX) is another phenomenon that occurs during upset forging at high temperatures [33]. When analysing all of the microstructural states, it can be seen that both and phases show typical morphologies of strain-hardened microstructures, but only in the -phase can new RX grains be observed (see Figures 4 and 5). The small size of the new RX grains being due to the short duration of hot deformation, by upsetting, at both 1050 °C and 1300 °C. Figure 7 shows the variation of weight fraction of RX -phase grains, as a function of applied deformation degree at 1050 °C ( Figure 7a) and 1300 °C (Figure 7b). One can consider that all of the new grains, with an average size bellow 10 m, are RX grains, due to the short duration of hot deformation, by upsetting, at 1050 °C and 1300 °C. When the UNS S32750 SDSS alloy is hot deformed, by upsetting, at 1050 °C, the weight fraction of -phase RX grains shows a continuous increase up to a deformation degree of 20%, when it reaches a value that is close to 10.8%. A further increase of the applied deformation degree, up to 30%, will result in a small increase of the weight fraction of RX grains, up to 11.6%. At 35% deformation, when the first micro-cracks are observed, the weight fraction of RX grains shows a value close to 16.3%. This increase in RX -phase weight fraction shows that the RX mechanism is related to both deformation temperature and applied deformation degree. When the alloy is hot deformed, by upsetting, at 1300 °C, the weight fraction of the RX -phase grains shows a continuous increase up to 43.7%, which corresponds to an applied deformation degree of 50%, when almost half of the -phase is recrystallized. 
This high weight fraction of RX -phase grains explains the improved plasticity of -phase at 1300 °C.
Conclusions
The main results of the present research can be summarized as follows:
(1) After hot deformation, by upsetting, at 1050 °C, the microstructure of the UNS S32750 SDSS alloy consists of approximately equal proportions of the γ-phase and δ-phase. However, after deformation at 1300 °C, the weight fraction of δ-phase increases up to about 65%, due to the initiation of the γ → δ phase transition.
(2) Development of cracks/fissures on the lateral surface of UNS S32750 SDSS alloy samples was observed only in the case of hot deformation, by upsetting, at 1050 °C with a deformation degree exceeding 30%, being related to the limited plasticity of the δ-phase at 1050 °C.
(3) Grain Reference Orientation Deviation (GROD) analysis showed that a limit/critical value, close to 38–40°, must be reached in order to develop cracks/fissures on the lateral surface of UNS S32750 SDSS alloy samples.
(4) Recrystallization (RX) of δ-phase grains is observed for all processed UNS S32750 SDSS alloy samples, both at 1050 °C and 1300 °C; a higher weight fraction of RX δ-phase grains is noticed in the case of hot deformation at 1300 °C in comparison with 1050 °C; the maximum weight fraction of RX δ-phase grains, close to 43.7%, was recorded for an applied deformation degree of 50% at 1300 °C.
As a general conclusion, the super-duplex stainless steel UNS S32750 can be safely hot deformed, by upsetting, at temperatures between 1050 and 1300 °C, provided that the applied deformation degree does not exceed 20% at 1050 °C and 50% at 1300 °C.
The authors are currently undertaking further optimization of the thermomechanical processing parameters for the UNS S32750 SDSS alloy, in order to obtain a favourable balance between plasticity, mechanical properties, and corrosion resistance.
|
v3-fos-license
|
2022-08-17T06:16:19.324Z
|
2022-08-16T00:00:00.000
|
251592728
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://research.aalto.fi/files/91632894/The_impact_of_the_COVID_19_pandemic_on_incident_cases_of_chronic_diseases_in_Finland.pdf",
"pdf_hash": "d4a6dee5737478abcfebb0496f1140add02e4c1b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44443",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "05cac1adac11dbba7aef9f934bca1fdd21944767",
"year": 2022
}
|
pes2o/s2orc
|
The impact of the COVID-19 pandemic on incident cases of chronic diseases in Finland
Abstract The coronavirus disease 2019 pandemic has caused changes in the availability and use of health services, and disruptions have been reported in chronic disease management. We aimed to study the impact of the pandemic on the incidence of chronic diseases in Finland using register-based data. Incident cases of chronic diseases decreased, except for cases of anxiety disorders. The annual reductions ranged from 5% in cases of cancers to over 16% in cases of type 2 diabetes. These findings may be due to diagnostic delays and highlight the importance of ensuring access to health care and the continuity of care in all circumstances.
Introduction
The coronavirus disease 2019 (COVID-19) pandemic has affected the provision of health services on a large scale and caused changes in the availability and use of services in several countries. 1 Also in Finland, the range of health services has been narrowed as resources have been shifted to deal with the COVID-19 pandemic. During the first wave of COVID-19, the Finnish Government declared a state of emergency from mid-March to mid-June 2020 and implemented several restrictive regulations and recommendations: public institutions were closed and inhabitants aged 70 years and older were asked to stay at home. During the second wave in the autumn of 2020, measures were less restrictive. However, the restrictive measures to reduce the spread of COVID-19 and protect those at risk limited people's access to healthcare, and some patients might have hesitated to seek care due to the fear of infection. 2 The management of chronic diseases requires regular monitoring and integrated care, but due to the pandemic, severe disruptions have been reported in the processes of routine care. The study by Coma et al. 3 analyzed the consequences of lockdown measures on the control of chronic diseases in primary care. They found a decrease in 9 out of 10 control indicators for patients in primary care, including a decrease in LDL cholesterol and blood pressure control in ischaemic heart diseases and in glycated haemoglobin A1c control in type 2 diabetes. Also, delays in the detection of chronic diseases, such as cancers and heart diseases, have been reported due to sub-optimal screening and testing during the pandemic. 4,5 The aim of this study was to examine the impact of the pandemic on incident cases of chronic diseases among the Finnish population during the first year of the pandemic.
Methods
The study population comprised all individuals aged 18 years or older who used Finnish health care services in 2019–20. The data were extracted from the Finnish Care Register, which covers the health information on the clients treated in health centres, hospitals and other institutions providing outpatient and inpatient care, as well as on home-nursing clients. The common chronic diseases were defined from the register using ICD-10 codes for diagnosis. Data on the incidence of type 2 diabetes (E11), asthma (J45, J46), ischaemic heart diseases (I20-I25), cerebrovascular diseases (I60-I69, G45.9), hypertension (I10), hyperlipidaemias (E78), back pain (M54), arthrosis (M15-M17), depression (F32, F33), anxiety disorders (F40, F41), gingivitis and periodontal diseases (K05) and cancers including in situ carcinomas (C00-C97, D00-D09) were included in the analysis. The numbers of newly diagnosed cases found in the registers in 2020 were compared with the cases found in the previous year, 2019. The comparison included all public healthcare providers and those private care providers whose data were complete for both years. Cases where a patient did not have any recordings of a diagnosis of interest from the previous years 2015–18 were regarded as incident cases.
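As an illustration of this case definition, the sketch below shows how incident cases and year-over-year changes could be derived from a register extract. It is only a minimal sketch under assumed column names (patient_id, icd10, year) and a simplified disease grouping; it is not the actual extraction code used with the Finnish Care Register.

```python
import pandas as pd

# Assumed register extract: one row per recorded diagnosis.
# Columns (hypothetical): patient_id, icd10, year
visits = pd.read_csv("care_register_extract.csv")

# Simplified example grouping of ICD-10 codes into disease groups
# (only two groups shown; the study used twelve).
groups = {
    "type 2 diabetes": ["E11"],
    "hypertension": ["I10"],
}

def incident_cases(df: pd.DataFrame, codes: list[str], year: int) -> int:
    """Count patients recorded with any of `codes` in `year` who have
    no recording of those codes in the look-back years 2015-18
    (the look-back window described in the text)."""
    has_code = df["icd10"].str.startswith(tuple(codes))
    prior = set(df.loc[has_code & df["year"].between(2015, 2018), "patient_id"])
    current = set(df.loc[has_code & (df["year"] == year), "patient_id"])
    return len(current - prior)

for name, codes in groups.items():
    n2019 = incident_cases(visits, codes, 2019)
    n2020 = incident_cases(visits, codes, 2020)
    change = 100 * (n2020 - n2019) / n2019 if n2019 else float("nan")
    print(f"{name}: {n2019} (2019) vs {n2020} (2020), change {change:+.1f}%")
```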
Results
In 2019, there were a total of 676 846 incident cases, of which 35 748 were type 2 diabetes cases, 29 620 asthma cases, 32 426 ischaemic heart diseases, 28 775 cerebrovascular diseases, 101 996 hypertension cases, 56 928 hyperlipidaemias, 90 044 back pain cases, 51 065 arthrosis cases, 42 248 depression cases, 38 039 anxiety disorders, 128 800 gingivitis and periodontal diseases and 41 157 cancer cases. In 2020, the total number of incident cases (n = 602 144) was 11% lower than in 2019. There were reductions in the numbers of new cases in all disease groups, except for the group of anxiety disorders where a slight increase was observed. The annual reductions ranged from 5 to 16%. Figure 1 shows the changes in the incident cases as percentages between the years.
Discussion
At the time of writing this short report, Finland has been in the middle of the COVID-19 pandemic for 2 years. The effect of the pandemic on incident cases of chronic diseases was assessed by examining the number of new cases reported to the Finnish Care Register during 2019 and 2020. In 2020, there were a total of 11% fewer cases reported to the register than in the previous year. The study by Sisó-Almirall et al. 6 analyzed the impact of prioritizing care for COVID-19 patients on the detection and care of chronic diseases and their risk factors managed in three primary care centres in Spain. Our findings are in line with this Spanish study, which observed significant reductions in the incidence rates of cardiovascular risk factors and diseases (e.g. hypercholesterolaemia and type 2 diabetes), chronic non-cardiovascular diseases (e.g. dementia and chronic obstructive pulmonary disease) and some cancers/tumours (e.g. melanoma and colon polyps) in 2020 compared with 2017–19. A decrease in the incidence of acute coronary syndromes during the COVID-19 lockdown has also been reported in the study by Uimonen et al. 7 covering the catchment area of three Finnish hospitals.
The decreasing trend in the numbers of new diabetes and cancer cases due to diagnostic delays during the pandemic has been reported from the UK. 5,8 It has been estimated that approximately 60 000 type 2 diabetes diagnoses were missed or delayed in the UK between March and December 2020. 8 Globally, the mental health burden has increased during the pandemic. 9 The systematic review by Santomauro et al. 9 indicated an increase of over 25% in cases of major depressive and anxiety disorders due to the COVID-19 pandemic. Consistent with the results of Sisó-Almirall et al., 6 we found a reduction in new cases of depression and an increase in the number of anxiety disorders in 2020 compared with 2019.
The main strength of our study is a rich data set including all primary and secondary health care visits of the Finnish adult population. Although some misclassification is always present in the registered data, it is a feasible and cost-effective way to study a large, nationwide population. In addition, the Finnish Care Register uses the ICD codes for diagnosis, and codes are recorded to the register by health care professionals. The coverage, accuracy and reliability of the Care Register for Health Care have been documented previously. 10 In this study, the availability of data from prior years was crucial in defining the actual new diagnoses for 2019 and 2020. However, the data were available only until the end of the year 2020. As more data become available, future work should build on our analyses to monitor the long-term effects of the pandemic on the detection and management of chronic diseases and to study the explanatory factors for the observed differences.
This study shows a temporal coincidence between the first year of the pandemic and the reduction of incident cases which may be partly explained by diagnostic delays due to changes in health services and measures limiting the access to healthcare. It is important to evaluate the impact of the pandemic on diagnostic delays after a longer follow-up period and evaluate their possible, long-term health consequences. From a public health point of view, the early detection of chronic diseases is important both during and after the COVID-19 pandemic. Access to health services or alternative services must be secured also in exceptional circumstances and restrictive measures must not be an obstacle for the diagnosis of diseases and the implementation of good care.
Funding
This study was partly funded by the Strategic Research Council of the Academy of Finland [project IMPRO 335524, 336325 and 336329].
|
v3-fos-license
|
2020-06-11T09:05:12.360Z
|
2020-06-02T00:00:00.000
|
225862749
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://lifescienceglobal.com/pms/index.php/JBS/article/download/7736/3992",
"pdf_hash": "0e904e91dd1a0fe6ff07398900994fea8e85c786",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44444",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "7e32a318cf4eb3b22a3908160150e24ecc81a9fd",
"year": 2020
}
|
pes2o/s2orc
|
Considerations on the Breeding and Weaning of Buffalo Calf
The buffalo calf adapts with more difficulty to the transition from breast milk to the substitutes used for weaning. Growth in the pre-weaning period is affected by the amount of reconstituted milk consumed. When the quantity of reconstituted milk consumed is low, the weaning weight is also low. The gap between the latter and the optimal weight will never be eliminated because the species cannot perform compensatory growth, as cattle do. There is a delay in reaching the optimal live weight to start puberty. The age at first birth is, in fact, lower in those countries that leave all the milk to the calf for meat production or, as happens in Italy, where a suitable milk substitute is available. In Italy, it has been verified that calves taking almost ad libitum quantities of cow's milk weigh more than 140 kg at 4 months and have their first birth at the age of 22-26 months. As adults, they have an almost zero percentage of vaginal or uterine prolapse. In further observations on 3672 heifers, it was possible to verify "ex-post" that the calves that had taken a larger quantity (150 kg vs. 105) of milk substitute had shown an age at first birth about 6 months earlier (28.5 versus 34). Future investigations should verify the effect of weaning on age at first birth and not just the cost of weaning. Age at first birth is not only an economic parameter, but it is also useful for an early evaluation of bulls in progeny tests.
INTRODUCTION
Interest in raising and weaning calves has increased with the demand for buffalo milk. In countries where the species is intended for meat production or work, the calf receives breast milk until weaning beyond 9 months [1].
On the contrary, in countries where milk production is the main objective, breeders have tried to reduce the weaning age and to use reconstituted milk at the lowest possible cost; these measures are intended to make milk available for sale earlier. The characteristics of the reconstituted milk, with the exception of some differences (copper less than 5 mg/kg) [2], are comparable to those of the milk replacers used for bovine calves.
The tendency to wean buffalo calves as early as possible depends on the fact that buffalo milk has always been paid from 2 times (in countries where it is used as drinking milk) to 3-4 times (in Italy where it is used for the production of a special cheese) more than cow's milk.
In Italy, until a few decades ago, buffalo calves could suckle from three, two, and one quarters of the udder during the 1st, 2nd, and 3rd months of life, respectively, and they were subsequently kept alive only to maintain the dam's milk production. To make sure the females did not stop producing milk when their calf died, the skin of the dead calf was placed on top of another calf to deceive the mother with its familiar smell. At the end of breastfeeding, the calves were left to pasture, and they received the food that the breastfeeding females left behind. With this technique, if pasteurellosis did not occur, survival was relatively high; however, 8-month-old subjects weighed less than 100 kg, and 400 kg of live weight was reached at the age of over 34 months.
STUDIES CARRIED OUT IN 1964
Ferrara et al. [3] showed that it was possible to wean buffalo calves using cow's milk; to obtain optimal growth (approximately 1 kg/day), it was necessary to administer 7.12 and 10.44 kg/day of buffalo milk and cow milk, respectively, up to the fourth month of age. Today there are still some farms where, after the colostral phase (between 3 and 7 days), weaning is carried out by suckler cows, which give milk, according to their production, to 2-4 calves that suck directly from the cows twice per day.
Recently (personal communication), on some farms, some cows are milked to supply milk to the buffalo calves almost ad libitum for a maximum of 4 months, by which time the subjects weigh more than 140 kg. These subjects give birth for the first time at the age of 22-26 months and have an almost zero percentage of vaginal or uterine prolapse.
In a previous contribution [4], it was reported that vaginal and uterine prolapse increased in buffalo herds in the late 1970s. This period coincided with the growing demand for buffalo milk and the ever-increasing use of reconstituted milk.
Buffalo calves' actual nutritional needs were not taken into account in formulating the various kinds of reconstituted milk. Artificial feeding with "less unsuitable" milk substitutes has intensified neonatal gastrointestinal diseases, which have also negatively influenced the absorption of minerals. In our opinion, this factor represents a non-negligible cause of the prolapse that occurs in adulthood. In countries where calves still take breast milk, vaginal prolapse is rare. Examining Mediterranean buffaloes (Egypt and Latin America) or any other breeds or crossbreeds in other countries in the past, what is striking is the remarkable diversity in the conformation of the pelvis. In the past, unlike the buffaloes bred in Italy, there were no sloping croups, and the relationships between the three widths (front, central, and rear) of the pelvis were more proportionate, although the growth of the buffaloes in these countries was slower.
The correct pelvic girdle architecture, and therefore a harmonic pelvis, is probably influenced by the minerals absorbable in the early stages of life. In formulating milk substitutes for buffalo calves, producers do not take into due account that buffalo milk has a higher Ca/P ratio (Ca 1.8–2 g/kg, P approximately 1.1 g/kg, ratio equal to 1.73) than cow's milk (Ca 1.1 g/kg, P 0.8 g/kg, ratio equal to 1.33) and that the intake capacity of a buffalo calf is lower than that of the bovine calf (2% vs. 2.4%–2.8% DM per 100 kg of live weight).
In buffalo milk, in addition to a different Ca/P ratio, there is a narrower Ca/protein ratio (0.35 and 0.42 in cattle and buffaloes, respectively), which leads to the hypothesis of a Ca function not targeted only at the coagulation process but also at other physiological needs. According to Ferrara and Intrieri [5], the percentage of colloidal calcium in buffalo milk is about 80% (1.625 g of colloidal calcium/kg out of 2.029 g of total calcium/kg), while in cattle milk it is about 67% [11]. The percentage of colloidal P is also about 66% [6] and about 45% [5] in buffalo and cow's milk, respectively. The greater presence of colloidal Ca and P testifies to particular nutritional needs of the buffalo calf compared to the bovine calf, as the buffalo calf has a more developed skeletal system which, by reducing its specific weight, allows it to swim more easily (river buffalo).
The buffalo calf's lower intake capacity (2% vs. 2.4% of the live weight for the buffalo and the bovine, respectively) in the first months of life leads to a 16.67% lower ingestion of DM by the buffalo [7]. If a buffalo calf is fed cow's milk and ingests a smaller amount of dry matter, it will take on a diet characterized by a lower content of all nutrients except lactose (Table 1).
It is known that milk reconstituted under the most favorable conditions is formulated with 60% of cow's milk proteins and, for the rest, with whey and vegetable proteins. It follows that the percentage of colloidal Ca and P is even lower. There are no substantial differences in weight between the two species, but in buffalo calves the skeletal system accounts for a higher percentage of the live weight. It is conceivable that the greater presence of Ca in buffalo milk, and its better absorbability thanks to the colloidal form, are aimed at the formation of the skeleton, as well as at guaranteeing an adequate Ca/casein ratio useful for promoting coagulation in the abomasum. These observations suggest that, to meet the needs of Ca and P during the period of formation of the pelvis, the buffalo calf must take an adequate amount of reconstituted milk, whose content of colloidal Ca and P is low. An alternative is represented by cow's milk, in which the presence of these elements in the colloidal form is certain, even if it remains lower than the needs. This claim is justified by the field observation that farms that use adequate quantities of cow's milk produce heifers whose age at first birth is 22–26 months and that have little or no incidence of vaginal or uterine prolapse.
It should be emphasized that vaginal and uterine prolapse often compromises production and subsequent reproductive career.
RECENT STUDIES
Among the various contributions relating to tests carried out on the weaning of the buffalo calf, we mention two carried out in Italy. Table 2 summarizes the main results of the two contributions.
At the X National Congress of the Scientific Association of Animal Production (ASPA), Palladino et al. [6] showed that a lower administration of acidified reconstituted milk does not change the total daily intake of dry matter between 10 and 80 days of life (Figure 1) between the subjects who had taken a larger (group A) and a smaller (group B) quantity of reconstituted milk, but it causes a lower daily weight gain (Figure 2) in the subjects of group B. This finding shows that, unlike what has been observed in the bovine calf, the buffalo calf does not increase the consumption of concentrate or hay when the administration of the dry matter of the reconstituted milk decreases (Figure 3).
The buffalo calf completes the eruption of the incisors later than the bovine one (about 25 days against 8 days), which shows that it is less precocious. This delay is accompanied by a higher sensitivity of the buccal mucosa; therefore, it is necessary to use a softer rubber teat than the one used for bovine calves. The sensitivity of the mucosa is a non-negligible cause of the delay in the intake of dry feed other than milk or its substitutes, and this makes weaning more difficult.
Vecchio et al. [7] have shown that it is possible to administer reconstituted milk once a day by doubling its concentration from 18% (group G1) to 36% (group G2) between 9 and 60 days of life, and from 14.5% to 29% between 61 and 90 days of life. The calves of the two groups received 85 kg of reconstituted milk between 6 and 90 days of life. The two techniques recorded overlapping performances, and it was therefore shown that it is possible to reduce the use of labor. The calves reached a weight of 101.9 ± 3.4 kg (group G1) against 104.3 ± 3.7 (group G2) at 90 days, while those of Palladino et al. [6], who received reconstituted milk between 10 and 60 days, weighed 93.2 kg (group A, which received 48.9 kg of reconstituted milk) and 84.9 kg (group B, which received 40.7 kg of reconstituted milk) at 80 days. At 90 days, considering the daily weight gain recorded between 70 and 80 days, it is probable that they would have reached weights of 96.9 and 89.7 kg, despite having received 40 kg of reconstituted milk less than those reported by other authors [7].
The daily weight gain between 6 and 90 days was on average 655 g ((633 + 677)/2) in the Vecchio et al. trial [7] and 666 g ((726 + 607)/2) between 10 and 80 days in that of Palladino et al. [6]. The total daily dry matter intake of the calves in the Vecchio et al. trial was between 2% (group G1) and 1.7% (group G2) per 100 kg of live weight, while in the Palladino et al. trial the values ranged from 1.65% to 1.75%. The total daily dry matter intake observed by Vecchio et al. [7] was, therefore, higher (Table 2). It appears that the amount of reconstituted milk affects the final live weight and, consequently, the daily weight gain.
Previous studies [8,9] showed that 300-day-old buffalo calves have an average daily weight gain of approximately 800 g/day. The diet administered was characterized by 0.89 MFU/kg dry matter (DM), which did not allow these animals to show compensatory growth.
A subsequent study was conducted on 240 male buffaloes, weaned at 80 kg when they were 90–100 days old. After weaning, the animals received an ad libitum diet characterized by 0.80–0.85 MFU/kg DM with a concentrate/forage ratio of 50:50. In this experiment, a diet characterized by 0.9 MFU/kg of dry matter (DM), 14% CP and a forage/concentrate ratio of 38:62 was then administered starting from an average age of 148, 218, 302, 320, 374 and 596 days for groups 1, 2, 3, 4, 5 and 6, respectively.
The weight gain was recorded monthly up to slaughter at about 400 kg. Unfortunately, due to the needs of the market, buffaloes were slaughtered at different weights and ages. For this reason, the performances obtained at 400 kg and at 550 days were recorded and analyzed. It is worth noting that the age, weight, and growth of buffaloes slaughtered after reaching 400 kg and 550 days were also recorded. The weight of subjects slaughtered after 550 days of age was obtained with appropriate interpolations between weight and age.
The buffaloes that received the diet at an average age of 148 days weighed 486 kg at 550 days. The age at which the diet was administered was inversely related (R = -0.689 ***) to the weight of 400 kg ( Table 3) and, consequently, directly related to the daily weight gain. The dry matter intake (kg/day) was lower in the animals who received the diet at the age of 302 days (groups 1, 2, and 3) than in those (groups 4, 5, and 6) who received the diet later. These differences were present both when considering live weight (400 kg) and when considering age (550 days).
Daily weight gain tended to increase between 350–400 days and 550 days in groups 1 and 2 (Table 3). In the other groups, it always increased after the administration of the diet, with the exception of group 3. It is interesting to note that in groups 3, 4, and 5, the daily weight gain (DWG) decreased after a period during which it was greater than 1 kg. Although the diet was administered ad libitum, a daily weight gain of 1.2–1.3 kg was only recorded in group 3 between 400 and 500 days. DM intake increased up to a live weight of 345 kg, while it progressively decreased in relation to body weight (from 2.8% to 1.6% of live weight). Similar to what was observed in a previous study [8], late administration of the diet did not allow for compensatory DWG, as usually demonstrated in cattle. This phenomenon may be due to the fact that DM intake is stable until animals reach a critical body weight, after which the DWG decreases. The results of this study have shown that, in order to properly evaluate growth parameters in the buffalo species, it is necessary to take growth from birth into account.
The daily weight gain tends to increase between 300 and 400 days in groups 1 and 2, about 6.5 months after the administration of diet A, characterized (Table 2) by a higher energy density (0.9 MFU/kg of dry matter (DM), 14% CP, and a forage/concentrate ratio of 38:62, against 0.80–0.85 MFU/kg DM and a forage/concentrate ratio of 50:50, both administered ad libitum). An improvement in daily weight gain was observed after 8–9 months in groups 3, 4, and 5 and after 11.5 months in group 6. Ultimately, the benefit was smaller the later diet A was started. This phenomenon can be explained by considering that the percentage intake of DM relative to live weight progressively decreases from 2.8% to 1.6% with the increase in live weight (from 120 to 510 kg).
Similar results were observed in a previous study (Zicarelli et al., 2005): the late administration of a better diet did not allow compensatory DWG, as usually happens in cattle. This phenomenon may be due to the fact that DM intake is stable until animals reach a critical body weight, after which the DWG decreases. The results of this study reinforced the thesis that, in order to correctly evaluate the growth parameters of the buffalo species, growth from birth must be considered.
CONSUMPTION OF MILK REPLACER AND AGE AT FIRST CALVING
The contributions reported so far (nor are we aware of other researchers having studied it) have not taken into account the influence of weaning and, in particular, of the quantity of reconstituted milk taken, on age at first calving. It therefore seemed useful to report a field observation obtained by processing the data of a company that owns five farms, for a total of about 9000 head.
The 10-year data were archived daily (personal communication), and it was possible to verify that, after weaning, the calves receive the same type of rationing. The weaning period and the quantity of reconstituted milk administered, verified on the basis of the invoiced amounts, differed between the five farms for various reasons. It was, therefore, possible to verify "ex-post" on 3672 first-calving buffaloes that those which had taken a larger quantity of reconstituted milk (150 kg vs. 105) had shown an age at first calving (Figure 4) about six months lower (28.5 against 34).
Ultimately, this advance was obtained with a surplus of 45 kg of reconstituted milk and, therefore, with an increase in costs of approximately €90, which corresponds to a quantity of 60 kg of buffalo milk, which for a career of 5 calvings corresponds to 12 kg of milk per calving. This higher cost is easily compensated by anticipating production by 6 months. However, it was observed that on the farms where the first calving occurred six months earlier, the first-calving females had produced 300 kg less than in previous years (Figure 5). This lower production was already recovered at the second calving (Figure 5).
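To make the cost comparison explicit, the back-of-the-envelope calculation below reproduces the figures quoted above; the per-kilogram prices are not stated in the text and are inferred here only for illustration.

```python
# Back-of-the-envelope check of the weaning-cost figures quoted above.
# The unit prices are inferred from the quoted totals (assumption),
# not reported values.
extra_replacer_kg = 150 - 105           # surplus of milk replacer per calf (kg)
extra_cost_eur = 90.0                   # quoted extra cost (EUR)
replacer_price = extra_cost_eur / extra_replacer_kg          # ~2.0 EUR/kg (inferred)

buffalo_milk_equivalent_kg = 60.0       # quoted buffalo-milk equivalent of 90 EUR
buffalo_milk_price = extra_cost_eur / buffalo_milk_equivalent_kg  # ~1.5 EUR/kg (inferred)

calvings_per_career = 5
milk_per_calving = buffalo_milk_equivalent_kg / calvings_per_career  # 12 kg/calving

print(replacer_price, buffalo_milk_price, milk_per_calving)
```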
CONCLUSION
The breeding and weaning of the buffalo calf present species-specific peculiarities that influence not only growth but also the age at first calving and the future reproductive career.
|
v3-fos-license
|
2018-12-25T22:09:19.518Z
|
2018-07-02T00:00:00.000
|
73591122
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/ijap/2018/3949675.pdf",
"pdf_hash": "89b0294eb39148a50a65b1b695fec6b72076f509",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44445",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"sha1": "89b0294eb39148a50a65b1b695fec6b72076f509",
"year": 2018
}
|
pes2o/s2orc
|
Codesigned Wideband High-Efficiency Filtering SIW Slot Antenna with High Selectivity and Flat Gain Response
A substrate integrated waveguide (SIW) slot antenna with wideband bandpass filtering performance is proposed in this paper. It consists of a SIW cavity, a transverse slot, three metal posts, and an SMA connector. The three metal posts split the SIW cavity into two TE110-mode resonators. A long nonresonant transverse slot is utilized to realize a half TE110-mode resonator and to generate radiation simultaneously, so as to reduce the size and avoid the use of an extra radiator. Three in-band resonance poles and a radiation null at both out-of-bands are obtained. A prototype is fabricated and measured. Measured results demonstrate that the proposed antenna has a center frequency of 4.26 GHz, a fractional bandwidth of 9.1%, a high efficiency of 93%, a flat gain response, and good skirt selectivity of 270 dB/GHz and 330 dB/GHz for the lower and upper out-of-band, respectively.
Introduction
Modern wireless communications demand the RF front-end system to be compact, lightweight, low cost, high efficiency, and multifunctional. In most RF front ends, the bandpass filter and the antenna are the key components whose performance will immediately affect the system performance. In traditional systems, they are usually designed separately and connected by a 50 Ω or 75 Ω transmission line, which not only increases the volume but may also degrade in-band performance due to the mismatch and extra insertion loss caused by the interconnections [1]. Recently, the concept of the filtering antenna has been proposed, integrating the bandpass filter and the antenna into a single component with filtering and radiating functions simultaneously. The proper integration of the antenna and filter turns out to be an efficient way to reduce the loss and enhance the efficiency of this functional block for a front-end system [2,3]. Substrate integrated waveguide (SIW) technology has been effectively applied to design high-performance filters and antennas due to its low insertion loss [4–6]. Some filtering antennas based on SIW technology have been reported in [7–12]. In [7,8], the antenna is planar and designed by cascading resonators and a radiator. In [9–12], 3D configurations obtained by placing the resonators under the radiator are applied. However, the reported works suffer from more insertion loss, as they all need an extra radiator cascaded with the filtering circuit, and the filtering circuit is realized by full-mode resonators, which leads to a large lossy circuit size. What is more, as they have no radiation null (transmission zero) in the gain response, or the radiation nulls are far away from the passband, the out-of-band skirt selectivity in the reported works is low.
To overcome these problems, a codesigned wideband filtering substrate integrated waveguide (SIW) slot antenna with high selectivity, high efficiency, and flat gain response is proposed in this paper. A long nonresonant transverse slot is utilized to realize a half-mode resonator and to generate radiation simultaneously, so as to reduce the size and avoid the use of an extra radiator. Three resonance poles are achieved to enhance the bandwidth and gain flatness. Two radiation nulls are introduced to increase the out-of-band skirt selectivity. The antenna work mechanism is explained, and the results and discussion are given. The design evolution of the proposed antenna is illustrated in Figure 2. Firstly, a transverse slot is etched on the upper metal layer of a SIW with the end shorted (Ant. 1). Then, three metal posts are introduced at the right side of the slot (Ant. 2). Finally, a SIW cavity is formed and an SMA connector is used as the feed (Ant. 3). It should be mentioned that the slot is nonresonant in the work band and its length is more than half a guide wavelength. The reflection coefficient S11 and peak realized gain of Ant. 1–3 are given in Figure 3. From Figure 3(a), it can be found that the proposed antenna Ant. 3 has three resonance poles fr1, fr2, and fr3. From Figure 3(b), it can be found that Ant. 1 has only one radiation null in the gain response, while Ant. 2 and Ant. 3 have two radiation nulls at almost the same frequencies. These phenomena will be explained in detail in the following section.
Antenna Mechanism of Radiation Nulls.
The configuration of a SIW transmission line with its end shorted is shown in Figure 4(a). When it is excited by the TE10 mode, the surface current I has a standing-wave distribution, as expressed by (1), where Imax is defined as the maximum current magnitude, βf denotes the phase constant at frequency f, and z means the distance from the shorted end. Figure 4(b) gives the surface current distribution at two different frequencies f1 and f2, which are labelled as case 1 and case 2, respectively. If a slot is etched on the top layer, whether it generates radiation or not depends on the current distribution at the position of the slot. It should be stated that there will be no radiation at a current null. Thus, if the slot is located at the position of a current null, as in case 1 in Figure 4(b), there will be no radiation generated from the slot. And if the slot is not located at the position of a current null, as in case 2, there will be radiation generated from the slot. Hence, a radiation null will occur at the frequency where the current at the slot satisfies the null condition in (2). That is the reason why one radiation null is generated in Ant. 1, which is also explained in [13].
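For orientation, a standard textbook form of the standing-wave current and of the no-radiation condition, consistent with the definitions used above (z measured from the shorted end, with the transverse slot interrupting the longitudinal surface current), can be sketched as follows; this is an assumed form, not necessarily identical to the authors' equations (1) and (2).

```latex
% Assumed standard form (not the authors' exact expressions):
% standing-wave surface current on an end-shorted guide and the
% no-radiation (radiation-null) condition for a transverse slot.
\begin{align}
  \lvert I(z)\rvert &= I_{\max}\,\bigl\lvert \cos(\beta_f\, z) \bigr\rvert ,
  && z \text{ measured from the shorted end,} \\
  \cos\!\bigl(\beta_{f}\, z_{\mathrm{slot}}\bigr) &= 0
  \;\Longleftrightarrow\;
  \beta_{f}\, z_{\mathrm{slot}} = (2n+1)\tfrac{\pi}{2},
  && n = 0, 1, 2, \dots
\end{align}
```

At a fixed slot position, the second relation picks out the frequencies at which the slot sits on a current null and therefore does not radiate.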
Actually, in Ant. 2 and Ant. 3, when the three metal posts are introduced, a resonator R1 is formed. The configuration of a SIW transmission line loaded by a resonator and its equivalent circuit model are illustrated in Figure 5(a). The resonator R1 can be simply modelled as a shunt inductor and capacitor, and it resonates at 4.25 GHz. The phase shift at the reference plane in the end-shorted structure is twice that of a two-port resonator model, as discussed in [14]. Thus, as shown in Figure 5(b), the phase shift φ1 of the end-shorted resonator R1 can be expressed as in (3). For Ant. 2 and Ant. 3, the resonator R1 can be considered as the load of the transmission line, as illustrated in Figure 6(a). The phase difference φ at the reference plane can be written as in (4). Here, l2 denotes the distance from the three metal posts. According to transmission line theory [15], the current strength at the reference plane can be expressed as in (5). Here, V0+ means the voltage of the incident wave, Z0 denotes the characteristic impedance of the transmission line, and Γ is the voltage reflection coefficient. According to (5), the minimum current strength (current null) satisfies condition (6). According to (3), (4), and (6), when a current null occurs, the minimum l2 should satisfy (7). It means that when l2 is about λg/4 (λg is the guide wavelength), two current nulls will occur at both sides of the resonance. Figure 6(b) gives the phase difference when l2 changes near λg/4 (13 mm). The phase difference is wrapped modulo 2π and limited to the [−180°, 180°] range. It can be found that when φ = 0, there exist two frequencies fn1 and fn2 at both sides of 4.25 GHz. From the abovementioned analysis, the frequency where φ = 0 is the frequency of a current null. It can also be seen from Figure 6(b) that the frequencies of the two current nulls shift downward when l2 increases from 11 mm to 15 mm. It means that the upper current null comes closer to the resonance, and the lower current null moves away from the resonance.
According to transmission line theory, the input impedance is defined as Zin = Vz/Iz. When the current is minimum, the input impedance is maximum. So the current null can be deduced from the behaviour of the input impedance. Figure 7 gives the simulated magnitude of the input impedance ZAA′ with the variation of l2. It can be found that one impedance peak is observed at each side of 4.25 GHz. The frequency of the impedance peak is the frequency of the current null. Due to the lossy material used in the simulation, when approaching the resonance, the loss is higher and the voltage reflection coefficient Γ becomes smaller. Meanwhile, the minimum current strength becomes larger according to (5). It means that the impedance peak becomes smaller near the resonance at 4.25 GHz, as can be seen in Figure 7. So the depth of the two radiation nulls in Ant. 2 and Ant. 3 can be estimated from Figure 7. A larger impedance peak means a deeper radiation null, as can be proved from the simulated peak realized gain of Ant. 3 with different l2 in Figure 8.
To achieve two symmetrical radiation nulls about 4.25 GHz, here in the proposed antenna, the slot is located at the position where l 2 is chosen to be 13 mm which is about a quarter phase wavelength at the resonant frequency of R 1 .
Antenna Mechanism of Resonance Poles.
As can be seen in Figure 3(a), three resonance poles fr1, fr2, and fr3 are generated in Ant. 3. Actually, when the three metal posts are introduced, the large cavity is divided into two TE110-mode resonators. What is more, when a long slot is cut, a half TE110-mode resonator is created. The three resonance poles are generated by the couplings between the three resonators, and they can be explained by basic resonant-mode superposition. Figure 9 gives the simulated H-field at different phases of the three resonance poles and their superposition modes. The modes created by the two TE110-mode resonators and their coupling can be considered as an even mode and an odd mode [16]. Thus, as illustrated in Figure 9(a), the first resonance pole fr1 can be considered as the superposition of a half TE110 mode and an even mode. As illustrated in Figure 9(b), the second resonance pole fr2 can be considered as the superposition of a half TE110 mode and a TE110 mode. As illustrated in Figure 9(c), the third resonance pole fr3 can be considered as the superposition of a half TE110 mode and an odd mode. It should be noticed that all the resonance poles are related to the half TE110 mode. This is because only the half TE110 mode can generate radiation through the slot, while for the even and odd modes, the slot is almost located at the position of a current null. It is innovative that, in the proposed design, the slot not only acts as a radiator but is also essential to the formation of the half-mode resonator. This merged structure is different from the commonly used method, which introduces an extra radiator.
Results and Discussion
The proposed antenna is fabricated and measured, and a photograph is shown in Figure 10. The simulated and measured reflection coefficient S11, realized gains, and total efficiencies are illustrated in Figure 11. It can be found that the measured bandwidth (S11 < −10 dB) is 9.1% (4.09–4.48 GHz), agreeing well with the simulated one of 10.1% (4.05–4.48 GHz).
The measured and simulated average realized gains over the work band are 6.2 dBi and 6.3 dBi, respectively. The gain response is flat over the work band with ripples of less than 0.4 dB, which is attributed to the nonresonant slot radiation. The bandpass filtering performance is remarkable, as the efficiency over the work band reaches a maximum as high as 93%, which means that an insertion loss of only about 0.3 dB is introduced, while the out-of-band efficiency approaches zero. The high efficiency benefits from the nonresonant slot radiation and the small lossy circuit area, as a half-mode resonator is realized and no extra radiator is utilized. As can be observed in Figure 11, two measured radiation nulls at 4.04 GHz and 4.56 GHz in the gain response are very close to the passband, so high skirt selectivity is achieved. The measured lower and upper out-of-band skirt selectivities are 270 dB/GHz and 330 dB/GHz, respectively.
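As a quick sanity check on the efficiency-to-insertion-loss conversion quoted above, a one-line calculation (a sketch, not part of the original measurement processing) gives roughly the stated 0.3 dB:

```python
import math

efficiency = 0.93                      # measured peak total efficiency
insertion_loss_db = -10 * math.log10(efficiency)
print(f"{insertion_loss_db:.2f} dB")   # ~0.32 dB, consistent with "about 0.3 dB"
```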
The simulated and measured radiation patterns at the three resonance poles 4.12 GHz, 4.26 GHz, and 4.46 GHz are shown in Figure 12. It can be seen that almost symmetrical radiation patterns are observed in the E/H planes. The slight tilts in the E-plane patterns are mainly caused by the off-centered slot position. Good front-to-back ratios and cross-polarization levels are also obtained. A comprehensive comparison with previous works utilizing SIW slots is summarized in Table 1. It can be found that our work has the advantages of wide bandwidth, high selectivity, and high efficiency (Eff.) compared to other works. The high efficiency can also be deduced from the lossy circuit area, which can be evaluated by size × layer. It is obvious that our work has the smallest value of size × layer.
Conclusion
A substrate integrated waveguide slot antenna with bandpass filtering performance in the gain response is presented in this paper. A long nonresonant slot is introduced to realize a half-mode resonator and generate radiation, so as to reduce the size and avoid the use of an extra radiator. Two radiation nulls are generated to enhance the selectivity. The measured results show that a wide bandwidth of 9.1%, high selectivity of 270/330 dB/GHz for the lower/upper out-of-band, high efficiency of 93%, and a flat gain response are obtained in the proposed antenna. The high performance indicates that it is a good candidate for a functional module integrating a filter and an antenna in an RF front-end system.
Figure 1 shows the configuration of the proposed filtering antenna. The antenna is designed on an F4B-2 substrate with a thickness of 6 mm, a relative permittivity of 2.485, and a loss tangent of 0.0018. It is composed of a SIW cavity with size W × L, a transverse slot with size ws × ls etched on the top metal layer, three metal posts in the cavity, and an SMA connector as the feed. The metal posts have a diameter of d and a spacing of xt. The antenna configuration is symmetrical about the y-axis. The antenna is designed and optimized at the centre frequency of 4.25 GHz. The detailed dimensions are as follows: L = 51, W = 49, m = 5.3, d = 2.4, xt = 13.8, l1 = 23, l2 = 13, ls = 29.4, ws = 2.65, a = 1.6, and s = 3.2 (unit: mm).
Figure 4: SIW transmission line with end shorted and surface current distributions. (a) Configuration of SIW transmission line with end shorted. (b) Surface current distributions at different frequencies.
Figure 6: Illustration of the analysis model and the results. (a) Analysis model of Ant. 2. (b) Simulated phase difference.
|
v3-fos-license
|
2017-07-11T08:17:32.050Z
|
2016-11-08T00:00:00.000
|
14659305
|
{
"extfieldsofstudy": [
"Psychology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://molecularautism.biomedcentral.com/track/pdf/10.1186/s13229-016-0107-7",
"pdf_hash": "c421d8f875282cd79c9667b314bb52d6f876bc28",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44447",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "c421d8f875282cd79c9667b314bb52d6f876bc28",
"year": 2016
}
|
pes2o/s2orc
|
Mimetic desire in autism spectrum disorder
Mimetic desire (MD), the spontaneous propensity to pursue goals that others pursue, is a case of social influence that is believed to shape preferences. Autism spectrum disorder (ASD) is defined by both atypical interests and altered social interaction. We investigated whether MD is lower in adults with ASD compared to typically developed adults and whether MD correlates with social anhedonia and social judgment, two aspects of atypical social functioning in autism. Contrary to our hypotheses, MD was similarly present in both ASD and control groups. Anhedonia and social judgment differed between the ASD and control groups but did not correlate with MD. These results extend previous findings by suggesting that basic mechanisms of social influence are preserved in autism. The finding of intact MD in ASD stands against the intuitive idea that atypical interests stem from reduced social influence and indirectly favors the possibility that special interests might be selected for their intrinsic properties.
Introduction
Reciprocal influence is an essential aspect of social behavior: individuals are influenced by others in their beliefs and preferences [1]. An essential element of this influence is mimetic desire (MD), which is the tendency to pursue goals pursued by others [16]. As an example, children often run after the same toy, even if other identical toys are available. MD is crucial for non-verbally sharing information about values (i.e., whether objects present in the environment are good or bad) without wasting time on trial-and-error learning and might therefore shape preferences during development. Two lines of reasoning led us to hypothesize that MD may be dysfunctional in autism spectrum disorder (ASD).
A first line of reasoning relates to clinical descriptions and cognitive investigations. Clinically, ASD is characterized by "deficits in social communication and interaction" and "restricted, repetitive behavior, interests or activities" [2]. It is also associated with altered social cognition [15] and atypical social motivation [11] including social anhedonia [9]. A lack of MD might underpin lower social influence on perceptual [8] and esthetic [11] judgments as well as learning [21] and donation decisions [20] associated with autism. An absence of MD would also compromise the sharing of desires and, hence, result in altered social interaction and possibly idiosyncratic preferences and atypical interests.
Another line of reasoning comes from neuroscience research. A recent study has empirically demonstrated MD in adults of the general population and revealed its neural basis [22]: visual objects are rated as more desirable once perceived as the goals of another agent's action. According to this study, MD might result from a modulation of the brain valuation system (BVS) by the mirror neuron system (MNS), since MNS-BVS functional connectivity predicts individual susceptibility toward mimetic desires. In line with disconnection theories of autism [17], this functional connectivity between MNS and BVS may be altered in autistic individuals, such that others' behavior would not affect their motivational system.
The present study aimed to assess whether MD is affected in autism, by testing the hypothesis that MD is (1) reduced in individuals with ASD relative to matched controls and/or (2) related to atypical social motivation and social cognition associated with ASD.
Methods
Participants
A power analysis using the "power.t.test" formula in the R package "stats" [26] was based on the reported MD amplitude (mean = 0.18 and sd = 0.17) in the general population [22]. It indicated that 9 participants in each group would be sufficient to find a difference using a one-sample t test, with a power of .9 at a .05 significance level. Twenty adults with ASD and 19 controls were included in the study. Intelligence quotients were determined by the Wechsler Adult Intelligence Scale. No significant differences in age and intellectual quotient (IQ) were found between individuals with ASD and controls. Participants' demographic characteristics are summarized in Table 1. All participants in the ASD group were recruited from the diagnostic clinic at Hôpital Rivière-des-Prairies, Montréal, Canada. All had been diagnosed by expert clinicians on the basis of DSM-IV (Diagnostic and Statistical Manual, fourth edition) criteria, using standardized instruments (Autism Diagnostic Observation Schedule-Generic (ADOS-G) [23] and Autism Diagnostic Interview-Revised (ADI-R) [28]). Control participants were recruited through advertisements.
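For readers who prefer Python, the power calculation described above can be reproduced along the following lines; this is a sketch assuming a one-sided, one-sample test (the confirmatory tests reported later are one-sided), not the authors' original R call.

```python
# Sketch of the sample-size calculation, assuming a one-sided one-sample t test
# with the MD effect reported in the general population (mean 0.18, sd 0.17).
from statsmodels.stats.power import TTestPower

effect_size = 0.18 / 0.17          # Cohen's d implied by the reported mean/sd
analysis = TTestPower()
n = analysis.solve_power(effect_size=effect_size,
                         alpha=0.05,
                         power=0.9,
                         alternative='larger')
print(round(n, 1))                  # roughly 9-10 participants per group
```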
Procedure
All participants were tested in a quiet room, using a previously validated paradigm [22]. In the previous study, 120 different pairs of objects (e.g., food, toys, clothes, and tools) were selected to build the initial stimuli set. To make the task shorter for the present study, we selected the 60 pairs that showed the largest goal vs non-goal contrast on desirability in the previous study. Details about the stimuli can be found in Lebreton et al. [22]. Objects of two different colors were presented in short videos either as the goal of an action or not (G and NG conditions) (see Fig. 1). The face of the agent in the G videos was never shown, to avoid desirability being directly conveyed by facial expression. Also, a subset of NG videos included controls for the quantity of movement (with the object moving by itself) and for the presence of a human agent (not acting upon the object).
Tasks
All tasks were programmed on a PC, using the Cogent 2000 (Wellcome Department of Imaging Neuroscience, London, UK) library of Matlab functions for presentation of stimuli. All participants took part in both a desirability-rating (test) task and a recognition (control) task. The desirability-rating task included 120 videos (60 object pairs), divided into two sessions. The two objects of a pair always appeared in the same session to limit the effects of temporal fluctuations and of session-wise rating scale anchors. Also, the presentation order of the different videos was randomized for each subject, with the constraint that the first and the second object of each pair should appear in the first and the second half of a session, respectively. To eliminate color preferences at the group level, colors were counterbalanced between subjects.
In the rating task, participants were instructed to rate "how much they would like to have the object." Every trial of the task started with a fixation cross displayed for 1.5 s and immediately followed by the video, which lasted between 2 and 5 s (see Fig. 1a). Next, the desirability scale appeared on the screen below the picture of the object to be rated (without a human agent). The scale was graduated from 0 (not desirable) to 10 (highly desirable). Participants could move the cursor by pressing a button with their right index finger to go left or with their right middle finger to go right. Rating was self-paced: subjects had to press a button with their left index finger to validate their response and proceed to the next trial. The initial cursor position on the scale was randomized to avoid confounding the ratings with the movements they involved. The total trial duration was almost 8 s on average (1500 ms of fixation + 3500 ms of video + 3300 ms of rating).
To allow interpretations of differences between groups in MD, we controlled for basic requirements of the task. First, to control for whether the subject paid attention to the objects, we administered a recognition task (see Fig. 1b), in which 60 pairs of pictures were presented. Each pair included an "old" picture, i.e., an object that the subject had seen during the rating task (in either a G or a NG video), and a "new" picture, i.e., the same object with a third and previously unseen color (which varied across object pairs). The order of the presentation was randomized for every subject. The two pictures of a pair were displayed side by side, following a 500-ms fixation cross. The relative position of the two pictures on the screen was also randomized.
Subjects were asked to select the picture they had already seen (the "old" one). The task was self-paced. MD may be associated with social motivation deficit in ASD, so we asked participants to complete questionnaires assessing social and physical anhedonia [5,9,13]. To control for possible confounding of motivation with depression, participants completed the Beck Depression Inventory [4]. As a measure of social cognition, participants also completed a test of social judgment on photographs [14].
Analysis
Desirability ratings were converted to session-wise z scores. The first goal of the study was to assess whether MD (the difference in standardized desirability rating between G vs NG conditions) was lower in ASD than control participants. We used t tests to compare MD, ratings in the desire attribution task, and performance in the recognition task between groups (ASD vs controls). The data of the recognition task was missing for one participant due to an error in testing.
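A minimal sketch of this computation is given below, assuming a long-format table of ratings with hypothetical column names (subject, group, session, condition, rating); it is meant only to illustrate the z-scoring and the MD contrast, not to reproduce the authors' analysis code.

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format data: one row per rated video.
# Columns (assumed): subject, group ('ASD'/'control'), session, condition ('G'/'NG'), rating
df = pd.read_csv("ratings.csv")

# Session-wise z-scoring of the desirability ratings within each participant.
df["z"] = df.groupby(["subject", "session"])["rating"].transform(
    lambda x: (x - x.mean()) / x.std(ddof=0)
)

# Mimetic desire per participant: mean z(G) minus mean z(NG).
md = (df.pivot_table(index=["subject", "group"], columns="condition", values="z")
        .assign(MD=lambda t: t["G"] - t["NG"])
        .reset_index())

# One-sided test that MD > 0 within each group, then a two-sample comparison.
for g, sub in md.groupby("group"):
    t, p = stats.ttest_1samp(sub["MD"], 0.0, alternative="greater")
    print(g, round(t, 2), round(p, 3))

t, p = stats.ttest_ind(md.loc[md.group == "ASD", "MD"],
                       md.loc[md.group == "control", "MD"])
print("group difference:", round(t, 2), round(p, 3))
```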
The second goal was to test whether MD was associated with clinical features of ASD. We looked for Pearson's correlations between MD in the ASD group and the following variables: social and physical anhedonia scores, and depression and social judgment scores. The p values of the correlations were not corrected for multiple testing since none even reached the uncorrected threshold.
Results
MD was present (i.e., MD > 0) both in the ASD (one-sided t test, t(19) = 2.08, p = .026, d = 0.46) and control (one-sided t test, t(18) = 1.99, p = .031, d = 0.46) groups (Table 2). The main reason for using one-sided t tests to assess MD is that the analysis is confirmatory, since MD has already been found positive in five independent samples of participants [22]. A two-sample t test showed no difference in MD between the control and ASD groups (t(37) = −0.02, p = .98, d = 0.006) (Fig. 2). These results replicate the previous finding that MD is present in the general population, in a different sample in a different country (Canada), and also suggest that MD is similarly present in individuals with ASD. A post hoc power study based on MD in the two groups indicated that a sample of more than 20,000 participants per group would have been required to show a difference with a power of .7. This suggests that the observed absence of difference between the groups was not caused by the small sample size but reflects a true lack of difference between the populations.
No between-group difference was found for recognition, suggesting that ASD and control participants paid equal attention to objects in the rating task. There were between-group differences in social judgment (higher performance by the control group), social and physical anhedonia (higher anhedonia in the ASD group), and a trend for higher depression scores in the ASD group; however, none of these factors were related to MD either in the entire sample (all r < .15, all t(37) < 1) or in the ASD subgroup (all r < .21, all t(18) < 1).
Discussion
We found that individuals with an ASD are prone to MD to a similar extent as individuals in the control group. We found no link between MD and the anhedonia or social judgment alterations associated with ASD. These results contradict the intuitive idea that the preferences of individuals with ASD are less prone to social influence. They contribute to the understanding of social influence in autism.

Fig. 1 a The desirability-rating task. Successive screens displayed in one trial are shown from left to right with durations in milliseconds. Participants were instructed to rate "how much they would like to get/have the object." Every trial of the task started with a fixation cross followed by the video. The desirability scale then appeared on the screen below the picture of the object to be rated (without human agent). The object was taken as the goal of an action in the G condition but not in the NG condition. Colors were counterbalanced at the group level. b The recognition task. Subjects had to select the "old" object, which meant the object that had been featured in the videos (either G or NG) shown during the rating task. Every choice contained one old and one "new" object. In the illustrated example, the correct answer would be green for the choice on the left and yellow for the choice on the right.
There is a large body of literature on deficits in social influence in autism consistent with the notion that MD would be affected [10,12]. A recent study [29] reported a failure to strategically use social cues so as to maximize payoff in situations of changing contingencies. However, some aspects of social influence have been found to be similar in individuals with and without autism. Individuals with autism share the stereotypes of their social group [7,18,19] and, like individuals without autism, can show better task performance in the presence of an observer [20]. Recent work indicates that both automatic and voluntary imitation of actions might be present [6] and even enhanced [30] in individuals with autism. Attention orienting by social cues appears to be preserved in such individuals (see [25] for a review, [27] for the exceptions). This suggests that at least some aspects of social influence are not abnormal in individuals with autism. Our investigation extends these observations by showing an absence of correlation between MD and any of the atypical social motivation and social cognition associated with autism.
Some basic mechanisms of social influence therefore seem to be intact in autism: the presence, goals, and representations of other people can influence a variety of behaviors, from gaze orientation to semantic associations and judgments about desirability. In contrast, differences have been described mostly in situations where control participants may strategically modulate their responses, notably to conform to social desirability through flattery or conformism: to confirm a statement (i.e., [8]), fawn, mask stereotypes [7], or appear more generous [20]. The current literature is thus consistent with the notion that individuals with ASD display less strategic behavior in social situations, but no basic deficit in social influence.

Our study has some limitations. One is that the study population consisted almost entirely of male adults, owing to availability for testing. This limits the generalization of the findings. A similar study with a sample of women and a larger age range would be useful. Indeed, it is plausible that the development of MD is delayed in, rather than absent from, individuals with ASD. Also, we did not investigate the underlying processes that may underpin MD. The absence of a significant difference in desirability ratings does not preclude that individuals with ASD might have used different strategies at other levels (e.g., eye movements or brain activity), although such speculation is not parsimonious.

[Table 2. Results: between-group comparison of scores and correlations between MD and depression, anhedonia, and social judgment.]

[Fig. 2. Comparison of MD in the ASD and control groups. Box plots show the minimum, first quartile, median, third quartile, and maximum of the MD effect (difference in desirability ratings between goal and non-goal objects) across individuals. MD was significantly positive in both groups, with no difference between groups.]
Conclusion
In conclusion, our study contributes to the understanding of social influence in ASD by showing that one of its core aspects, MD, is intact and not related to clinical or cognitive traits of ASD. Our findings suggest that the mechanisms at the neural level underlying MD (possibly the action of the mirror neuron system on the brain valuation system) are preserved in ASD. This weakens the notion that atypical interests in ASD stem from reduced social influence and therefore indirectly favors the idea that special interests might be selected by ASD individuals for their intrinsic properties [3,24].
|
v3-fos-license
|
2023-12-16T12:39:14.532Z
|
2023-12-11T00:00:00.000
|
266227341
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/13607863.2023.2286618?needAccess=true",
"pdf_hash": "6912e8e9eef2b205d3003b170c2c479c9f67728c",
"pdf_src": "TaylorAndFrancis",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44448",
"s2fieldsofstudy": [
"Medicine",
"Psychology",
"Sociology"
],
"sha1": "e419bd08753a0bfd1d00aa9f8fd6ebbb62e33cf6",
"year": 2024
}
|
pes2o/s2orc
|
Dyadic perspectives on loneliness and social isolation among people with dementia and spousal carers: findings from the IDEAL programme
Abstract Objectives This study aims to investigate the impact of self and partner experiences of loneliness and social isolation on life satisfaction in people with dementia and their spousal carers. Methods We used data from 1042 dementia caregiving dyads in the Improving the experience of Dementia and Enhancing Active Life (IDEAL) programme cohort. Loneliness was measured using the six-item De Jong Gierveld loneliness scale and social isolation using the six-item Lubben Social Network Scale. Data were analysed using the Actor-Partner Interdependence Model framework. Results Self-rated loneliness was associated with poorer life satisfaction for both people with dementia and carers. The initial partner effects observed between the loneliness of the carer and the life satisfaction of the person with dementia and between social isolation reported by the person with dementia and life satisfaction of the carer were reduced to nonsignificance once the quality of the relationship between them was considered. Discussion Experiencing greater loneliness and social isolation is linked with reduced life satisfaction for people with dementia and carers. However, having a positive view of the quality of the relationship between them reduced the impact of loneliness and social isolation on life satisfaction. Findings suggest the need to consider the experiences of both the person with dementia and the carer when investigating the impact of loneliness and social isolation. Individual interventions to mitigate loneliness or isolation may enhance life satisfaction for both partners and not simply the intervention recipient.
Introduction
Internationally, loneliness has been identified as a major public health problem by both third sector organisations and local and national governments. A range of different countries, including the UK and United States, have loneliness strategies and charities explicitly targeting loneliness, predominantly, but not exclusively, focused on older people (Prohaska et al., 2020). Across strategies and initiatives carers are identified as at high risk of loneliness and social isolation, with attendant risks to well-being (Department for Digital, Culture, Media and Sport, 2018; National Academies of Sciences, Engineering, and Medicine, 2020). Both people with dementia and their carers have been identified as an especially vulnerable group (National Academies of Sciences, Engineering, and Medicine, 2020; Victor et al., 2021). The current study aims to investigate: (a) the impact of loneliness and social isolation on life satisfaction in a large sample of people with dementia and their spousal or partner carers living in Great Britain and (b) the extent to which each person's loneliness or social isolation affects the other partner's life satisfaction.
Loneliness and social isolation are distinct but related concepts (Victor et al., 2008). Loneliness describes the discrepancy between expectations of the quantity and quality of relationships and the actuality (Perlman & Peplau, 1981), whilst social isolation is characterised as having few social contacts or limited integration of an individual into the wider social environment of family, friends, neighbours and the wider community (Victor et al., 2008). Loneliness is an evaluative concept which only individuals can assess, while social isolation is a more objective measure of social connectedness. Loneliness is independently associated with a range of physical and mental health outcomes, including depression (Courtin & Knapp, 2017), reduced well-being and life satisfaction (Golden et al., 2009; Shankar et al., 2015), cardiovascular disease (Courtin & Knapp, 2017; Valtorta et al., 2016) and mortality (Holt-Lunstad et al., 2015). Social isolation is associated with poorer mental health (Courtin & Knapp, 2017; Leigh-Hunt et al., 2017), reduced life satisfaction and well-being (Golden et al., 2009; Shankar et al., 2015) and increased mortality (Holt-Lunstad et al., 2015), and there is mixed evidence supporting it as a potential risk factor for poorer physical health outcomes (Leigh-Hunt et al., 2017). Both loneliness and social isolation are potential risk factors for cognitive impairment and dementia (Kuiper et al., 2015; Lara et al., 2019; Livingston et al., 2020). Greater life satisfaction is linked with better health and longevity (Diener & Chan, 2011). Besides good social connections and relationships, socio-demographic factors, such as age and educational level, are also important for life satisfaction in older people (Huppert, 2009).
There is limited evidence about the experience of loneliness or social isolation from the perspective of people with dementia (Balouch et al., 2019;Clare, Wu, Jones, et al., 2019;Dyer et al., 2020;El Haj et al., 2016;Holmén et al., 2000;Victor et al., 2020) or their carers (Beeson, 2003;Beeson et al., 2000;Brodaty & Donkin, 2009;Clare, Wu, Quinn, et al., 2019;Victor et al., 2021;Williams, 2005).Available evidence suggests that people with dementia experience comparable levels of loneliness to the general population of older people (Victor et al., 2020) but increasing social isolation over time (Dyer et al., 2020).Carers experience higher levels of loneliness than the general population (Beeson, 2003;Victor et al., 2021), as well as greater isolation (Brodaty & Donkin, 2009).Loneliness and social isolation may have a detrimental influence on the life satisfaction of people with dementia and their family carers (Clare, Wu, Jones, et al., 2019;Clare, Wu, Quinn, et al., 2019) whilst greater social engagement is associated with higher quality of life in people with dementia (Martyr et al., 2018).
A limitation of research focused on loneliness and social isolation in people with dementia and carers is that studies focus on individuals rather than investigating loneliness or social isolation of the caring dyad and how they inter-relate or are linked with life satisfaction.Prior research has demonstrated how carer experiences, such as higher carer stress, perceived social restrictions, and lower caregiving competence, can affect the quality of life of the person with dementia (Quinn et al., 2020), and how greater caregiving satisfaction can reduce the feelings of loneliness of a care-recipient (Iecovich, 2016).Given that dyads may have a close interpersonal relationship it is plausible that the loneliness or social isolation experienced by one member of the dyad may influence the other partner's life satisfaction as well as their own.
Few studies have looked directly at the dyadic association between loneliness or social isolation and life satisfaction.Relationship quality influences both loneliness and life satisfaction in dyadic studies (Carr et al., 2014;Stokes, 2017a), those focused on carers of people with Alzheimer's disease (Beeson et al., 2000) and those comprising older spousal couples (De Jong Gierveld et al., 2009).People with dementia are often cared for by people with whom they have a close existing relationship and dementia may change previously established roles, as one of the dyad adopts the role of the 'carer' who increasingly has to provide care for the other person (Quinn et al., 2009).In this study, we focus on spousal carers.The majority of carers are the spouse or the partner of the person with dementia (Brodaty & Donkin, 2009).Between 60% and 70% of carers for people with dementia in the UK are female, although this sex balance changes in the oldest age groups (85 years and over) where older carers are more likely to be male than female (Alzheimer's Research UK, 2015; Carers UK, 2019).Women are also more likely to provide care for a longer period of time than men.Some prior dyadic studies of loneliness have found sex differences (Segrin & Burke, 2015;Segrin et al., 2019) whilst others have not (Stokes, 2017b).
Two theoretical propositions may explain these potential inter-relationships and underpin the development of a dyadic approach to understanding loneliness and social isolation.Interdependence theory hypothesises that the behaviour or interactions of individuals in close relationships can affect the other person's behaviour or outcomes (Rusbult & Van Lange, 2003) whilst the theory that loneliness is 'contagious' (Cacioppo et al., 2009) suggests that loneliness experienced by one person can spread to, or influence, loneliness in others.There is some limited evidence to support this proposition.Studies of middle-aged and older married couples have indicated that loneliness experienced by one member of the couple impacts on the other member's experience of loneliness (Ayalon et al., 2013;Stokes, 2017b), sleep quality (Segrin & Burke, 2015) and relationship quality (Mund & Johnson, 2020;Stokes, 2017b).Given the close relationship between people with dementia and spousal carers it is plausible that the experiences of one partner will affect the other: if one partner is socially isolated or lonely this might make the other partner socially isolated or lonely.Understanding dyadic relationships in terms of loneliness and social isolation may be a useful approach to enhancing the life satisfaction of people with dementia and spousal carers.
Dyadic studies have predominantly used the Actor-Partner Interdependence Model (APIM) to examine the influence of a predictor as reported by both partners on their own outcome (actor effect) and that of the other member of the dyad (partner effect).Prior dyadic studies examining loneliness in caregiving dyads have found actor effects of loneliness on life satisfaction (Tough et al., 2018) and health-related quality of life (Segrin et al., 2019) but no partner effects.The samples in both these studies were small and focused on distinct health conditions and demographic groups, making direct comparison problematic.To our knowledge, there are no dyadic studies that have examined social isolation in dementia caregiving dyads.
In the present study we consider: (a) the impact of self- and partner-reported feelings of loneliness on the life satisfaction of the person with dementia and the carer; and (b) the impact of self- and partner-reported experiences of social isolation on the life satisfaction of the person with dementia and the carer.
Design and sample
We analysed data from people with dementia and carers participating in Time 1 (2014-2016) of the Improving the experience of Dementia and Enhancing Active Life (IDEAL) cohort study (Clare et al., 2014). Participants with dementia and their respective carers were recruited through 29 National Health Service (NHS) Clinical Research Network sites throughout England, Scotland, and Wales. The inclusion criteria at the time of enrolment were: a clinical diagnosis of dementia (any subtype) in the mild-to-moderate stages, as indicated by a Mini-Mental State Examination (MMSE) (Folstein et al., 1975) score of 15 or over, and living in the community. Family carers of the person with dementia were approached to take part in the study if the person they cared for had agreed to take part and had nominated them. A 'carer' was defined as the main family member or friend providing unpaid practical or emotional support to the person with dementia (Quinn et al., 2020). Overall, 1537 people with dementia and 1277 carers, of whom 1042 were spouses or partners, agreed to take part in the IDEAL study. Our analytical sample comprised 1042 caregiving dyads, of which 1034 were opposite-sex and eight were same-sex couples.
Loneliness
The revised six-item version of the De Jong Gierveld Loneliness Scale (De Jong Gierveld & Tilburg, 2006) was used. Total scores range from 0 to 6, with higher scores indicating more severe loneliness (Cronbach's α = 0.63 for people with dementia and 0.77 for carers). Scores of two or more are indicative of loneliness (Victor et al., 2020, 2021).
Social isolation
The six-item Lubben Social Network Scale was used to gauge social isolation by measuring perceived social support received from family and friends (Lubben et al., 2006). Total scores range from 0 to 30. A lower score is indicative of a higher risk of social isolation (Cronbach's α = 0.79 for people with dementia and 0.83 for carers), with a score of <12 the threshold for social isolation.
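A minimal sketch of how these published cut-points could be applied to dyad-level data is given below; the column names (`dejong_pwd`, `dejong_carer`, `lubben_pwd`, `lubben_carer`) are hypothetical and not taken from the IDEAL datasets.

```python
import pandas as pd

def flag_loneliness_isolation(dyads: pd.DataFrame) -> pd.DataFrame:
    """Apply the scale thresholds used in the text:
    De Jong Gierveld >= 2 -> lonely; Lubben < 12 -> socially isolated."""
    out = dyads.copy()
    for member in ("pwd", "carer"):  # person with dementia / carer
        out[f"lonely_{member}"] = out[f"dejong_{member}"] >= 2
        out[f"isolated_{member}"] = out[f"lubben_{member}"] < 12
    return out

# Example: proportion of lonely carers in the sample
# flag_loneliness_isolation(df)["lonely_carer"].mean()
```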
Life satisfaction
The Satisfaction with Life Scale (Diener et al., 1985) was used. This comprises five positively worded statements rated on a seven-point scale from 'strongly disagree' to 'strongly agree'. Scores range from 5 to 35, with higher scores indicating greater life satisfaction (Cronbach's α = 0.82 for people with dementia and 0.88 for carers).
Covariates
Demographic information was collected on age, sex, and education, based on the highest qualification achieved. The number of hours spent caregiving per day (caregiving hours), the dementia subtype of the person with dementia and their MMSE score were recorded. Self-rated health for both carers and people with dementia was collected. Current relationship quality was measured using the Positive Affect Index (Bengston & Schrader, 1982). Scores range from 5 to 30, with higher scores indicating better relationship quality between the carer and the person with dementia (Cronbach's α = 0.81 for people with dementia and 0.84 for carers). Both the person with dementia and the carer self-completed this measure.
Statistical analyses
Data were analysed using structural equation modelling. Models estimated actor and partner effects of loneliness and social isolation using the APIM framework (Kenny et al., 2006). APIM enabled us to investigate the influence of loneliness or social isolation as reported by both partners on their own life satisfaction and on that of the other member of the dyad. The influence of independent variables or predictors (in this case, loneliness or social isolation) on an individual's own outcome (life satisfaction) is referred to as the actor effect, and the influence of these predictors on the partner's outcome is known as the partner effect (Figure 1).
The first model tested the actor (i.e. own) and partner effects of loneliness and social isolation on life satisfaction. The second model added socio-demographic characteristics (age, sex and education); the third model added dementia subtype, MMSE score, self-rated health and caregiving hours; and the final model added actor- and partner-rated current relationship quality.
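The analyses reported here were run in Stata; purely as an illustration of the APIM structure described above, the sketch below specifies the base model (Model 1) in Python with the `semopy` structural equation modelling package, using hypothetical variable names. Covariates from Models 2-4 would be added as further predictors on each outcome.

```python
import pandas as pd
import semopy

# Hypothetical wide-format data: one row per dyad with columns
# lone_pwd, lone_carer, isol_pwd, isol_carer, ls_pwd, ls_carer.
# Each outcome is regressed on its own (actor) and the partner's predictors,
# and the two partners' outcomes share a residual covariance.
APIM_MODEL_1 = """
ls_pwd ~ lone_pwd + lone_carer + isol_pwd + isol_carer
ls_carer ~ lone_carer + lone_pwd + isol_carer + isol_pwd
ls_pwd ~~ ls_carer
"""

def fit_apim(dyads: pd.DataFrame):
    model = semopy.Model(APIM_MODEL_1)
    model.fit(dyads)        # maximum likelihood estimation by default
    return model.inspect()  # table of parameter estimates
```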
To account for missing data, the maximum likelihood with missing values estimation method was applied during the dyadic analyses. In over three-quarters of dyads (78%) both the person with dementia and the carer completed all measures. Missing values on individual measures are shown in Table 1. People with dementia and carers who had missing loneliness and social isolation data did not differ significantly on socio-demographic measures from participants with complete data. All data were analysed using Stata 14.2 (StataCorp LP, College Station, TX, USA) and version 7.0 of the IDEAL datasets.
A post hoc analysis was conducted to investigate whether the results differed according to the sex composition of the dyad. The final model was repeated, stratified by sex composition (e.g. whether the dyad comprised a male person with dementia and a female carer or a female person with dementia and a male carer). Same-sex couples were excluded from this analysis because numbers were too small for valid inference (eight same-sex couples; seven female and one male).
Results
Table 1 presents the characteristics of the 1042 spousal/partner dyads who participated in the study. Over half of the people with dementia had a diagnosis of Alzheimer's disease. The mean age of the people with dementia was 75.0 (SD = 7.8) and the mean age of the carers was 72.3 (SD = 8.3). A higher proportion of carers were female in comparison with people with dementia (p < .001). The mean loneliness score for the people with dementia (1.2, SD = 1.4) was lower (indicating less loneliness) than for carers (2.5, SD = 1.9; p < .001). Almost two-thirds of carers experienced loneliness compared with a third of people with dementia (p < .001). Mean social network sizes were 15.6 (SD = 6.2) for people with dementia and 17.7 (SD = 5.5) for carers (p < .001); levels of social isolation were 26.9% for people with dementia and 12.4% for carers; 96% of dyads had known each other for more than ten years.
Females with dementia had lower MMSE scores on average (p = .004) and lower levels of education (p < .001) in comparison with males. Female carers were more likely to be lonely (p < .001), spent more hours caregiving per day (p = .026), reported lower levels of education (p < .001), and had poorer relationship quality (p < .001), self-rated health (p = .032), and life satisfaction (p < .001) compared with male carers, but there were no significant differences in social isolation.
Actor and partner effects of loneliness and social isolation on life satisfaction
Results of the dyadic analyses for loneliness, social isolation, and life satisfaction are set out in Table 2. Model 1, after adjustment for both actor- and partner-rated loneliness and social isolation, shows both an actor and a partner effect of loneliness on the life satisfaction of people with dementia. For carers, only their own loneliness affected their life satisfaction. There were no actor or partner effects of social isolation on life satisfaction.
Adjustment for socio-demographic factors (Model 2) revealed a weak partner effect between the social isolation of the person with dementia and the life satisfaction of the carer.
Further adjustment for dementia diagnosis, MMSE score, self-rated health, and caregiving hours (actor and partner) had a more notable impact on the observed associations (Model 3). The strong actor effect of loneliness on the life satisfaction of people with dementia and carers remained, as did the partner effect of loneliness on the life satisfaction of the person with dementia and the partner effect of social isolation on the carer's life satisfaction. Following the addition of actor- and partner-rated relationship quality (Model 4), the actor effects of loneliness on life satisfaction were reduced but remained significant, indicating that greater loneliness was linked with poorer life satisfaction for both people with dementia and carers. Both the partner effect of loneliness on the life satisfaction of the person with dementia and the partner effect of social isolation on the life satisfaction of the carer were no longer significant. In the final model, being older and reporting better relationship quality and self-rated health were associated with greater life satisfaction (Supporting Information Table 3). For carers, being female and spending over ten hours per day caregiving were linked with poorer life satisfaction.
Analyses examining the sex composition of dyads
The analysis by sex composition of the dyad revealed similar overall results for loneliness but a notable difference for social isolation (Table 3). For both people with dementia and carers, loneliness affected their own life satisfaction whether they were male or female. There were no actor effects of social isolation on own life satisfaction for people with dementia or for carers.
There was only a partner effect of social isolation on the life satisfaction of the person with dementia in dyads where the carer was male and the person with dementia was female. Supplementary analyses showed that only the interaction of the carer's social isolation and the sex composition of the dyad was significant (Supporting Information Table 4).
Discussion
To the best of our knowledge, this is the first study to use an actor-partner interdependence model to investigate the impact of loneliness or social isolation on life satisfaction in dementia caregiving dyads. This study enhances our understanding of loneliness and isolation for these specific populations by investigating the interdependency between loneliness, social isolation, and life satisfaction. People with dementia had a higher prevalence of social isolation in comparison with carers, whilst carers reported greater feelings of loneliness in comparison with people with dementia, confirming previous IDEAL findings (Victor et al., 2020, 2021). The higher prevalence of social isolation amongst people with dementia may reflect loss of or reduction in connection with others following a diagnosis of dementia, whilst higher levels of loneliness in carers may reflect the impact of the caregiving role and the change in the relationship between partners (Vasileiou et al., 2017). Factors relating to the caregiving role, such as caregiving stress, have previously been shown to be associated with greater loneliness (Victor et al., 2021). There were strong actor effects of loneliness on life satisfaction for both people with dementia and their carers, with feeling lonely associated with reduced life satisfaction. Our hypothesis that there were partner effects of loneliness on life satisfaction was not upheld once social isolation and covariates were accounted for. This accords with previous research exploring actor and partner effects of loneliness in caregiver-care recipient dyads for people with breast cancer or spinal cord injury (Segrin et al., 2019; Tough et al., 2018). Our finding that greater loneliness was linked with poorer life satisfaction among people with dementia and spousal carers is consistent with existing evidence for older adults (Golden et al., 2009; Shankar et al., 2015). In contrast to previous studies of middle-aged and older couples (Ayalon et al., 2013; Segrin & Burke, 2015; Stokes, 2017b), there was no strong evidence to support the proposition that loneliness is contagious and can spread from one person to another (Cacioppo et al., 2009).
Our initial analysis suggested that social isolation experienced by the person with dementia had an impact on their carer's life satisfaction. However, this finding was reduced to nonsignificance once we accounted for relationship quality. Thus, the quality of the relationship between the two individuals plays a key role in mitigating the impact of social isolation. The quality of the relationship between the person with dementia and the carer also explained some of the actor effects of loneliness on life satisfaction and the initial partner effect of the loneliness experienced by the carer on the life satisfaction of the person with dementia. Prior studies of older adults have demonstrated that closeness with a partner or family member is an important predictor of both life satisfaction and loneliness (Carr et al., 2014; Shiovitz-Ezra & Leitsch, 2010; Yang, 2018). The closer a person was to their spouse or partner, the more this was protective against loneliness over time. Consequently, we might be able to increase the life satisfaction of people with dementia and carers by finding strategies or sources of support to enhance relationship quality and alleviate or prevent the impact of social isolation or loneliness. These findings offer limited support for interdependence theory (Rusbult & Van Lange, 2003), whereby the experiences or interactions of one partner may affect the other partner's behaviour. Interventions that help people with dementia to remain connected with the community or to have more social contact may help to reduce social isolation and, in turn, help to increase their life satisfaction and that of their carers.
The present study indicated that the sex of the carer and of the person with dementia may play a role in the development of loneliness and social isolation. The descriptive analysis indicated that whilst female carers reported greater loneliness, poorer self-rated health and lower life satisfaction in comparison with male carers, there were no significant differences in terms of social isolation. This partly accords with findings from a meta-analysis which found small differences in relation to health, depression and well-being but no sex differences in relation to social support (Pinquart & Sörensen, 2006).
Our analysis stratified by the sex composition of the dyad found significant partner effects for social isolation on life satisfaction only in dyads where the person with dementia was female and the carer was male. There are no comparable studies reporting life satisfaction in dementia caregiving dyads, and findings from previous studies have been mixed. Further research is needed to establish the veracity of our findings and to understand the reasons for these differences. We might hypothesise that the effect of isolation in dyads with a male carer and a female person with dementia reflects the impact of gendered roles in maintaining social networks with family, friends and the wider community (Neri et al., 2012; Willis et al., 2020). It is plausible that better support is needed to help male carers adapt to the change in the balance of their relationship with the person with dementia.
A key strength of the present study is the use of a large sample of people with dementia and their carers. To the best of our knowledge it is the first study to examine the dyadic relationship between loneliness, social isolation and life satisfaction in people with dementia and their carers, using a novel analytical method to evaluate the interdependence of loneliness and social isolation in caregiving dyads. However, our study has limitations that need to be considered. First, the current study is based on cross-sectional analyses; therefore, it is not possible to say with certainty whether or not loneliness or social isolation leads to poorer life satisfaction. However, it presents an important initial step in examining these dyadic relationships. As the IDEAL programme is longitudinal, it will be possible to identify and observe any actor or partner effects of loneliness and social isolation on life satisfaction for dyads who remain in the study. Second, the sample comprises people with mild-to-moderate dementia, and there may be changes to the perceived quality of social relationships and in the number of social contacts as dementia progresses (Dyer et al., 2020); that study found that greater dementia severity was associated with greater decline in social network size over time. Further, as the sample comprised almost entirely heterosexual couples, we were not able to consider any differences for same-sex couples in our analysis of the sex composition of dyads. Prior dyadic studies have suggested that there may be more concordance between gay and lesbian couples than heterosexual couples on indicators such as health and health behaviours, and that this may reflect differences in relationship dynamics (Holway et al., 2018). Finally, we were unable to identify the amount of support the dyad received from other family members or friends. Wider support and help received from others could have implications for social contact and may affect the quality of the relationship between the person with dementia and the carer.
The findings of the present study suggest that people with dementia have higher levels of social isolation in comparison with their carers. This could indicate that they are less socially integrated, and interventions could consider ways to keep people with mild-to-moderate dementia more socially involved. Carers, whilst reporting lower levels of isolation, have much higher levels of loneliness, and this has implications for the support they receive. Based on the findings of the present study, support and potential interventions should be developed in the context of dyadic caregiving relationships. Enhancing social connections may help to reduce social isolation and improve the life satisfaction of both dyadic partners. This study has indicated that further consideration of the dyadic relationship is important, and interventions should consider the experiences of both people with dementia and their carers.
Figure 1. Path diagram of the Actor-Partner Interdependence Model relating to loneliness and social isolation as predictors of life satisfaction for people with dementia and spousal carers.
Table 1. Descriptive characteristics of the participants with dementia and carers (N = 1042).
a Result of chi-squared test or t-test for difference between people with dementia and carers. *p < .05; **p ≤ .001.
Table 2. Actor and partner effects of loneliness and social isolation on life satisfaction.
Notes. CI: confidence interval. Model 1: actor- and partner-rated loneliness and social isolation. Model 2: Model 1 + age, sex & education (actor and partner paths). Model 3: Model 2 + dementia subtype, MMSE, self-rated health, & caregiving hours (actor and partner paths). Model 4: Model 3 + current relationship quality (actor and partner paths). Model 4 had a good fit (Root Mean Square Error of Approximation (RMSEA) 0.03, CI 0.01-0.05; Comparative Fit Index (CFI) = 0.99) and a Bayesian Information Criterion (BIC) of 63,331.
Table 3. Actor and partner effects of loneliness and social isolation on life satisfaction, stratified by sex composition of the dyad.
a There were eight same-sex couples (seven female and one male); this was too small a number to allow valid inference, and hence these couples were excluded from this analysis.
|
v3-fos-license
|
2019-02-20T23:40:38.794Z
|
2018-05-26T00:00:00.000
|
139463768
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2504-477X/2/2/32/pdf?version=1527303584",
"pdf_hash": "0e50c309c89c32f2cc13b2334e21909fde069313",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44450",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "0e50c309c89c32f2cc13b2334e21909fde069313",
"year": 2018
}
|
pes2o/s2orc
|
Study of Nano-Mechanical, Electrochemical and Raman Spectroscopic Behavior of Al6061-SiC-Graphite Hybrid Surface Composite Fabricated through Friction Stir Processing
Aluminium-based hybrid metal matrix composites (MMCs) are extensively utilized in automobile applications (engine cylinders, pistons, etc.) as they exhibit an excellent combination of properties. Here, a detailed study of the nano-mechanical, electrochemical and Raman spectroscopic behavior of a friction stir processed Al6061-SiC-graphite hybrid surface composite is presented. The effect of various tool rotational speeds was evaluated, along with monitoring of the variation in axial force. Microstructural changes with various tool rotational speeds are studied using a scanning electron microscope. Raman spectroscopy and X-ray diffraction studies are used for the spectroscopic characterization of the fabricated hybrid and mono surface composites. Residual stresses and crystal structure disorders of the reinforcement result in a significant change in intensity and a considerable shift in Raman peak positions. The nano-mechanical behavior of the fabricated composites with various reinforcements and tool rotational speeds is analyzed using nano-indentation. The nano-mechanical behavior of the hybrid composite fabricated with an optimum set of processing parameters is superior to that of the mono composites fabricated with the same processing parameters. The electrochemical behavior of the fabricated composites is also studied by a linear potentiodynamic polarization test. The Al6061-SiC-graphite hybrid surface composite reveals excellent nano-mechanical and electrochemical behavior when fabricated with an optimum set of processing parameters. The tool rotational speed has a pronounced effect on the dispersion of agglomerates and the grain refinement of the matrix material. The processing parameters extensively affect the Raman spectroscopic behavior of the hybrid composite. The hybrid surface composite shows better corrosion resistance than the mono composites when fabricated with an optimum set of processing parameters. Reduced intergranular as well as interfacial corrosion pits in the hybrid composite increase its resistance to corrosion.
Introduction
Aluminum-based hybrid metal matrix composites (MMCs) are widely utilized in automobile applications (engine cylinders, pistons, etc.) as they exhibit high strength, wear resistance, abrasion resistance, chemical stability, and dimensional stability at high temperature [1,2]. Friction stir processing (FSP) is a solid-state processing strategy used for the fabrication of surface composites. FSP was first established by Mishra et al. to impart high strain rate superplasticity to 7075 aluminum alloy [3]. In FSP, a non-consumable rotating cylindrical tool with a shoulder and probe is plunged into a substrate and then traversed along the surface of the workpiece. Frictional heat is generated by the rubbing action of the tool shoulder against the substrate and softens the material under the shoulder. The material under the shoulder also undergoes plastic deformation due to the high strain rate provided by the stirring action of the tool pin. The main applications of FSP for microstructural modification in metals include homogenization of reinforcement in MMCs [4], grain refinement through dynamic recrystallization [5][6][7], and superplasticity [8][9][10][11].
Al-SiC surface composites fabricated via FSP reveal better mechanical properties than the unreinforced Al alloy. A significant improvement of hardness with SiC reinforcement in the Al matrix has been observed by various researchers [12][13][14][15]. The higher hardness of the Al-SiC mono composite is mainly attributed to grain refinement during FSP and the homogeneous distribution of SiC particles in the Al matrix [13][14][15][16][17]. The friction stir processed composite shows a considerable increase in hardness with an increase in the percentage of reinforcement and a finer particle size [12,15]. The corrosion behavior of the Al-SiC composite has been reported contradictorily by different authors. Some reported that the Al-SiC composite shows better corrosion resistance than the Al alloy [18,19], while others reported lower corrosion resistance for the Al-SiC composite compared with the Al alloy [20][21][22]. Finer SiC particles give better corrosion resistance than bulky particles [18][19][20]. Corrosion resistance is also improved with an increased volume fraction of SiC particles [18,19,22].
On the other hand, graphite-reinforced Al metal matrix composites show excellent wear and tribological properties [23]. Graphite reinforcement in the Al matrix decreases the hardness, friction coefficient and coefficient of thermal expansion [24], while wear resistance increases considerably in comparison with the un-reinforced Al alloy [17,23,25,26]. The graphite provides a solid lubricant layer between the composite and the hard counter-surface [27]. This graphite layer helps to increase the wear resistance of the composite. However, Modi et al. [21] showed that graphite has a pronounced effect on the corrosion properties of the Al-graphite composite fabricated through the liquid metallurgy route. The increased rate of H2 evolution due to the high conductivity of graphite shifts the corrosion potential of the Al-graphite composite in the active direction. Also, the porous nature of graphite leads to sucking of electrolyte into localized regions and increases the corrosion current density, which ultimately increases the corrosion loss. Saxena et al. [28] have also shown that graphite reinforcement leads to higher corrosion loss than the base alloy and aluminum, due to the cathodic behavior of graphite particles relative to the matrix, which leads to an increased rate of galvanic corrosion. It is also reported that the graphite/Al matrix interface is the preferential site for the nucleation of corrosion in the Al-graphite composite [21,29].
Hybrid MMCs are engineering materials fabricated by reinforcing a substrate with a mixture of two or more different types/forms of particles to achieve the combined advantages of both. It has been reported in many works that the hybrid Al-SiC-graphite composite exhibits better wear resistance than composites reinforced only with SiC or graphite [30,31]. However, the nanomechanical and electrochemical behavior of the Al-SiC-graphite hybrid composite fabricated through FSP has not been studied to date. Also, the effect of the presence of SiC particles on graphite morphology has not previously been studied.
The objective of the present investigation is to study the effect of tool rotational speed on the nanomechanical and electrochemical behavior of both mono (Al-SiC and Al-graphite) and hybrid (Al-SiC-graphite) composites. A spectroscopic analysis is also conducted to study the morphological change of graphite due to the presence of SiC reinforcement in the hybrid composite.
Materials and Methods
An Al6061 plate of 6 mm thickness was used as the substrate material. SiC powder (~100 µm, Alfa Aesar, Haverhill, MA, USA) and graphite powder (~44 µm, Alfa Aesar, Haverhill, MA, USA) were used as reinforcement for both the mono and the hybrid composite (SiC:graphite ~1:1). The microstructure of all the raw materials is shown in Figure 1a-c. A viscous solution of reinforcement and polyvinyl alcohol (5 wt %) was prepared in a magnetic stirrer (Remi, 5MLH Plus, Remi Lab World, Maharashtra, India) at 1000 rpm and 70 °C. It was then filled into the grooves (3 mm × 2 mm) on the Al substrate (Figure 2a). The filled grooves were allowed to dry in a vacuum drying oven at 120 °C for 4 h. A friction stir welding machine (ETA Technology, Bangalore, India) with an inbuilt strain-gauge setup for force measurement was used to carry out FSP on the filled grooves, using an H13 tool with a 25 mm diameter flat shoulder and a square pin (5 mm × 5 mm × 5 mm) (see Figure 2b) at tool rotational speeds of 1800 rpm, 2200 rpm, and 2500 rpm. A constant tool traverse speed of 25 mm/min was used for all experiments. The shoulder plunge depth was varied as 0.2, 0.3 and 0.4 mm for 1800 rpm, 2200 rpm, and 2500 rpm, respectively. Many researchers have reported poor distribution of reinforcement particles at tool rotational speeds of around 1800 rpm [32][33][34]. Thus, to homogenize and study the particle dispersion, high tool rotational speeds were selected in the present study.
The nanomechanical behavior of the mono and hybrid composites was studied using a nano-triboindenter (Hysitron TI 950, Hysitron, Inc., Minneapolis, MN, USA) under a load of 5000 µN with 10 s each for loading, dwelling and unloading. Electrochemical behavior was analyzed by a linear potentiodynamic polarization test (Biologic SP150, Bio-Logic Science Instruments, Seyssinet-Pariset, France) in 3.5 wt % NaCl solution as the electrolyte. A Tafel fit was carried out to obtain the values of corrosion current and potential. Raman spectroscopy (Jobin Yvon Horiba T64000, HORIBA, Ltd., Kyoto, Japan) was used to study the morphology of graphite and SiC before and after FSP. Microstructural characterization was conducted by scanning electron microscopy (Zeiss) at 5-20 kV. The phase study was carried out using X-ray diffraction (PANalytical XPERT PRO, Malvern Panalytical Ltd., Royston, UK) with CuKα radiation over a 2θ range of 20°-120°, and the obtained data were analyzed with X'Pert HighScore software to identify the phases formed in the composite.
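The paper does not describe its fitting procedure beyond naming the Tafel fit. As a rough illustration of the idea, the sketch below fits straight lines to log10|i| versus E in user-chosen anodic and cathodic Tafel regions and takes their intersection as (Ecorr, icorr); the window offsets are assumptions and would need tuning to the actual polarization curves.

```python
import numpy as np

def tafel_fit(E, i, ocp, window=(0.05, 0.15)):
    """Estimate corrosion potential and current density from a
    potentiodynamic polarization scan.

    E      : electrode potential (V), 1-D array
    i      : measured current density (A/cm^2), signed, same length as E
    ocp    : open-circuit potential (V) used to locate the Tafel regions
    window : (min, max) distance from OCP (V) defining the fit regions
    """
    E = np.asarray(E)
    i = np.asarray(i)
    logi = np.log10(np.abs(i) + 1e-15)          # avoid log(0) near Ecorr
    lo, hi = window
    anodic = (E > ocp + lo) & (E < ocp + hi)
    cathodic = (E < ocp - lo) & (E > ocp - hi)
    # Linear fits log10|i| = a + b*E on each branch
    b_a, a_a = np.polyfit(E[anodic], logi[anodic], 1)
    b_c, a_c = np.polyfit(E[cathodic], logi[cathodic], 1)
    # Intersection of the two Tafel lines
    E_corr = (a_c - a_a) / (b_a - b_c)
    i_corr = 10 ** (a_a + b_a * E_corr)
    return E_corr, i_corr
```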
Axial Force Variation
The axial force applied during the fabrication of the various composites severely affects the machine setup and tool geometry. Thus, a critical assessment of its variation with tool rotational speed and type of reinforcement is required. Figure 3a shows the generalized curve obtained while processing unreinforced Al 6061. The overall processing consists of three regions where axial force assessment is necessary, namely (a) plunging, (b) dwelling, and (c) traversing. During plunging, the rotating tool comes into contact with the substrate material and keeps penetrating until the tool shoulder starts rubbing against the substrate. A sudden jump in axial force (peak load) is observed due to the resistance offered by the substrate. Plunging is immediately followed by a dwell of very short duration. The material under the shoulder heats up during this phase owing to the high friction generated between the tool shoulder and the substrate. This frictional heating softens the base metal, and the axial force decreases to almost 40% of the peak load. In the traversing phase, relative linear movement between the tool and the substrate takes place, and the force attains a steady value during further processing.
Figure 3b shows the mean axial force obtained at various tool rotational speeds for the various reinforced composites. The mean axial force in the case of un-reinforced Al decreases with the increase in tool rotational speed from 1800 rpm to 2500 rpm. This decrease in force is attributed to the decrease in material flow stress at elevated temperature, while in the case of the Al-graphite mono composite an opposite trend is noticed. This opposite trend is attributed to (a) the very high thermal conductivity of graphite, and (b) the reduction in the coefficient of friction (COF) between the FSP tool and the Al substrate caused by graphite. The high thermal conductivity of graphite is partially responsible for the increase in mean axial force with increasing tool rotational speed. Owing to the high thermal conductivity of graphite, the heat generated at the tool-metal interface is transferred rapidly to the area just ahead of the processing point and softens it. Thus, a cyclic interaction of the tool with hard and soft zones takes place, which results in a severe variation of force during traversing and ultimately increases the mean axial force. Graphite has an interlayer shear strength of ~0.48 MPa, which is less than the flow stress of Al during FSP. Because of this low interlayer shear strength, graphite is sheared out onto the Al surface as a very thin layer during FSP. This thin layer acts as a solid lubricant and reduces the COF between the FSP tool and the Al substrate, so seizure cannot take place during processing, thereby affecting the material flow and axial force during FSP [14]. This effect becomes more pronounced at higher tool rotational speeds, and thus the peak axial thrust is observed at 2500 rpm. However, the variation with tool rotational speed is the same as that of the un-reinforced Al alloy.
The axial force variation in the Al-SiC mono composite is quite random. A force of ~0.8 kN is observed at a tool rotational speed of 1800 rpm, which decreases to ~0.6 kN at 2200 rpm and then increases to ~0.9 kN at 2500 rpm. The higher axial force at 1800 rpm is due to insufficient material flow during FSP. The material flow increases at 2200 rpm due to higher heat generation. At 2500 rpm, excessive heat is generated, which reduces the flow stress of aluminium. Due to this reduction in flow stress, the fragmentation of SiC particles into smaller pieces is significantly reduced. Instead of fragmenting into smaller pieces, the large particles flow along with the material, come into contact with the FSP tool during this process, and increase the force.
In the Al-SiC-graphite hybrid composite, the axial force consistently decreases from 1800 rpm to 2500 rpm. The severe fluctuations observed in the Al-graphite mono composite do not arise here, due to the presence of hard abrasive SiC particles. These SiC particles assist the seizure between the FSP tool and the Al matrix even in the presence of graphite. As explained earlier, in the Al-SiC mono composite at a tool rotational speed of 2500 rpm the heat generated is excessive, due to which the flow stress decreases significantly and fragmentation of SiC particles cannot take place, resulting in high axial thrust. However, in the Al-SiC-graphite hybrid composite, owing to the high conductivity of graphite, the excessive heat generated is transferred quickly to neighbouring regions, and thus a considerable value of flow stress is maintained during processing. This higher flow stress enables the fragmentation of SiC particles into smaller pieces. Thus, the high conductivity of graphite, along with the presence of hard abrasive SiC particles, explains the force variation shown in Figure 3b.
Raman Spectroscopy
Raman spectroscopy is an important characterization technique to investigate the structure and morphology of carbonaceous reinforcements. Figure 4 presents the Raman spectra obtained for the various raw materials and fabricated composites. The raw graphite powder shows three broad peaks at 1340 cm−1 (D band), 1573 cm−1 (G band) and 2700 cm−1 (2D band). The peak identified at 1573 cm−1 is known as the G band and is an outcome of C-C in-plane vibrations. The size of the G band corresponds to the crystalline quality of the carbonaceous compound [35]. The Raman peak observed at 1340 cm−1 is a result of single-phonon scattering and the interaction of electrons with imperfections. It is termed the D band or defect band. The intensity of the D band signifies the degree of disorder present in the crystal structure [35,36]. The Raman peak at 2700 cm−1 corresponds to the two-phonon scattering mode and is known as the 2D band. The 2D band signifies the degree of graphitization present in the structure [37,38].
The axial force variation in Al-SiC mono composite is very random.The force of ~0.8 kN is observed at a tool rotational speed of 1800 rpm, which increases to ~0.6 kN at 2200 rpm and further increases to ~0.9 kN at 2500 rpm.The higher axial force at 1800 rpm is due to the insufficient material flow during FSP.The material flow increases at 2200 rpm due to higher heat generation.At 2500 rpm, excessive heat is generated which reduces the flow stress of aluminium.Due to this reduction in flow stress, the fragmentation of SiC particles into smaller pieces is significantly reduced.Instead of fragmenting into smaller pieces, the large particles flow along with the material and during this phenomenon come in contact with the FSP tool and increase the force.
In the Al-SiC-graphite hybrid composite, the axial force consistently decreases from 1800 rpm to 2500 rpm. The severe fluctuations observed in the Al-graphite mono composite do not arise here owing to the presence of hard, abrasive SiC particles. These SiC particles assist in the seizure between the FSP tool and the Al matrix even in the presence of graphite. As explained earlier, in the Al-SiC mono composite at a tool rotational speed of 2500 rpm the heat generated is excessive, due to which the flow stress decreases significantly and fragmentation of the SiC particles cannot take place, resulting in a high axial thrust. However, in the Al-SiC-graphite hybrid composite, owing to the high conductivity of graphite, the excess heat generated is transferred quickly to neighbouring regions, and thus a considerable flow stress is maintained during processing. This higher flow stress enables the fragmentation of the SiC particles into smaller pieces. Thus, the high conductivity of graphite along with the presence of hard, abrasive SiC particles explains the force variation shown in Figure 3b.
Raman Spectroscopy
Raman spectroscopy is an important characterization technique for investigating the structure and morphology of carbonaceous reinforcements. Figure 4 shows the Raman spectra obtained for the raw materials and the fabricated composites. The raw graphite powder shows three broad peaks at 1340 cm−1 (D band), 1573 cm−1 (G band) and 2700 cm−1 (2D band). The peak identified at 1573 cm−1 is known as the G band and is an outcome of C-C in-plane vibrations. The size of the G band corresponds to the crystalline quality of the carbonaceous compound [35]. The Raman peak observed at 1340 cm−1 is a result of single-phonon scattering and the interaction of electrons with imperfections. It is termed the D band or defect band. The intensity of the D band signifies the degree of disorder present in the crystal structure [35,36]. The Raman peak at 2700 cm−1 corresponds to the two-phonon scattering mode and is known as the 2D band. The 2D band signifies the degree of graphitization present in the structure [37,38]. During FSP, the graphite present in the composite is strained and the interatomic distance between the layers changes. This phenomenon introduces compressive residual stresses in the graphite and distorts its hexagonal symmetry. This behavior is identified by the low-intensity G band in the processed samples. The increase in the D band intensity of the Al-graphite composite corresponds to (i) an increase in the amount of disordered carbon in the sample, and (ii) a reduction in the graphite crystal size [39].
The ratio of D to G band intensity (I_D/I_G) increased from 0.15 in the powder sample to 0.74 in the Al-graphite mono composite, while the corresponding value for the hybrid composite is 0.40. This increase in the ratio signifies the increased amount of disorder in the composites as compared to the graphite powder sample [40]. FSP at 2200 rpm increases the edge-related disorder of graphite and, consequently, the amount of randomly oriented graphite crystals also increases. As the D band intensity directly reflects the level of imperfection, it increases in the composites. The combined effect of the above two phenomena results in the higher I_D/I_G ratio of the Al-graphite composite, while in the case of the hybrid composite the presence of hard, abrasive SiC particles restricts the edge disorder of graphite during processing. Thus, the low value of the I_D/I_G ratio in the hybrid composite is justified. The processing also tends to reduce the graphite crystal size, and thus the intense D band peak in the composites is justified. Tuinstra and Koenig [39] also showed that the D band intensity depends on the carbon grain size (L_g) through Equation (1):

I_D/I_G = C(λ)/L_g (1)
From Equation (1), the intensity of the D band is inversely related to the carbon grain size. The grain refinement caused by friction stir processing therefore justifies the intense D band peak in the processed Al-graphite composite. The 2D band intensity corresponds to the number of graphene layers present in the structure [41]. The intensity ratio I_2D/I_G increases from 0.28 in the powder sample to 0.38 in the composite. This increase in the I_2D/I_G ratio corresponds to the disintegration of graphite and the exfoliation of graphitic layers because of amplified residual stresses among the neighboring graphene layers during processing.
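As a rough illustration of how Equation (1) can be applied, the sketch below estimates the carbon crystallite size from the measured I_D/I_G ratios. The proportionality constant C(λ) depends on the excitation wavelength, which is not stated in the text, so the commonly quoted value of about 4.4 nm for 514.5 nm excitation is assumed here.

```python
# Estimate the in-plane carbon crystallite size L_g from the Raman intensity
# ratio via the Tuinstra-Koenig relation I_D/I_G = C(lambda) / L_g.
# C_LAMBDA_NM is an assumed constant (~4.4 nm for 514.5 nm excitation);
# the actual excitation wavelength is not reported in the text.

C_LAMBDA_NM = 4.4  # nm, assumption

ratios = {
    "graphite powder": 0.15,
    "Al-graphite mono composite": 0.74,
    "Al-SiC-graphite hybrid composite": 0.40,
}

for sample, id_ig in ratios.items():
    l_g = C_LAMBDA_NM / id_ig  # larger I_D/I_G -> smaller crystallite size
    print(f"{sample}: I_D/I_G = {id_ig:.2f} -> L_g ~ {l_g:.1f} nm")
```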
Figure 4 also shows the Raman spectrum of the SiC component in both the powder and the composites. The Raman peaks corresponding to SiC in the range 500-1000 cm−1 indicate the defects/disorder in the crystal lattice, lattice strain, impurities, and mobility. The present Raman spectra show three characteristic features: the peaks of E2 symmetry at ~770 and ~792 cm−1 correspond to the TO(Γ) phonon or transverse optic mode, and the low-intensity A1 peak at ~972 cm−1 corresponds to the LO(Γ) phonon or longitudinal optic mode. These characteristic features confirm the presence of SiC in the 6H-SiC polytype [21]. The high-intensity peaks at ~770 cm−1 and ~792 cm−1 correspond to the Si-C bond [21].
The peak observed at ~972 cm−1 resembles the second-order band of silicon.
It is observed that the width and intensity of the TO peak at ~790 cm−1 in the case of the powder sample are ~50 cm−1 and ~2100 au (arbitrary units), respectively, while in the case of the processed samples the values change to ~40 cm−1 and ~6000 au, respectively. The sharper and more intense TO peak in the case of the processed samples is ascribed to the high temperature generated during processing [22]. It is perceived that the LO(Γ) peak intensity is reduced from ~1665 au in the powder sample to ~1040 au in the processed samples. This is attributed to the heating effect produced during the processing. A downshift in the peaks is observed in the first as well as second order bands of the Al-SiC composite as compared to the SiC powder sample. This downshift is attributed to the high temperature generated during friction stir processing [23]. The higher temperature leads to thermal expansion and to strain being induced due to lattice incompatibility with the other phonons [23].
X-Ray Diffraction
A low-intensity peak of graphite is observed in the samples; this low-intensity peak corresponds to the (002) plane of carbon [42]. The major peaks of SiC in the powder samples were observed at 35.65°, 38.07°, 59.99°, 71.78° and 75.51°, while in the composites they were observed at 35.65° and 75.51°. The planes associated with these angles are (111), (200), (220), (311) and (222), respectively. However, the SiC (200) and other small-intensity peaks are not present. There are a couple of potential reasons for the absence of the SiC (200) peak and other peaks of low intensity in the processed samples. Firstly, due to the lower volume fraction of SiC, it is more likely that lower-intensity reflections become indistinguishable from noise. Secondly, preferred orientation during processing results in a decrease in the intensity of SiC (200) relative to other planes [43]. Lastly, Al possesses high-intensity peaks (~2500 au) due to its highly crystalline nature, while the intensity of the SiC (200) peak observed in the powder sample is much lower (~38 au). This large difference among the intensities makes the SiC (200) peak indistinguishable from noise. Also, due to the less crystalline behavior of SiC compared with Al, the long-range order for a specific set of indices is lost. It is visible from the graphs (Figure 5) that no new phases (Mg2Si or Al4C3) were formed apart from the added reinforcement powders, i.e., SiC and graphite. Since no shifting of the peaks is observed in the graphs with the reinforcements, it can also be inferred that no chemical reaction occurred between the SiC and graphite reinforcement powders during FSP, and thus there is no chance of formation of any intermetallic compound. During fabrication of Al-carbonaceous compound composites, the carbonaceous compound reacts with Al at grain boundaries and forms Al4C3. This phase becomes a point of brittle weakness and leads to poor mechanical properties. However, the kinetics of this reaction start at temperatures above 500 °C [44]. Since FSP is a solid-state processing strategy, the temperature rise during the processing ranges from 400-500 °C [45,46]. Thus, Al4C3 formation is avoided due to the unavailability of the temperature required for the reaction between Al and carbon to start. This presents an advantage of friction stir processing over other alloying methods such as thermal spraying, laser beam, and powder metallurgy routes. In those processes, the rise in temperature beyond the melting point can lead to the formation of different chemical compounds by reaction of the reinforcement particles among themselves, which is harmful to the properties of the desired composite [47].
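For reference, the reported 2θ positions can be converted to interplanar spacings with Bragg's law. The diffraction wavelength is not stated in the text, so Cu Kα radiation (λ ≈ 1.5406 Å) is assumed in the short sketch below; the plane assignments simply follow those quoted above.

```python
import math

# Convert the reported 2-theta peak positions to interplanar spacings with
# Bragg's law (n*lambda = 2*d*sin(theta), n = 1). Cu K-alpha radiation
# (lambda = 1.5406 angstrom) is an assumption; the source does not state it.

WAVELENGTH_A = 1.5406  # angstrom, assumed Cu K-alpha

peaks_2theta = {"(111)": 35.65, "(200)": 38.07, "(220)": 59.99,
                "(311)": 71.78, "(222)": 75.51}

for hkl, two_theta in peaks_2theta.items():
    theta_rad = math.radians(two_theta / 2.0)
    d_spacing = WAVELENGTH_A / (2.0 * math.sin(theta_rad))
    print(f"SiC {hkl}: 2-theta = {two_theta:.2f} deg -> d ~ {d_spacing:.3f} angstrom")
```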
Electrochemical Behaviour
Figure 6a-c shows the linear potentiodynamic polarization test results conducted to analyze the electrochemical (corrosion) behavior of the various mono and hybrid composites fabricated at different tool rotational speeds. The corrosion potential (ECORR) and corrosion current densities (ICORR) obtained by Tafel extrapolation at the various tool rotational speeds are listed in Table 1. Figure 6d shows the corrosion rate obtained for the various composites at the different processing parameters. The corrosion rate (CR) is calculated using Equation (2):

CR = (K × ICORR × EW) / (d × A) (2)

where CR is in millimetres per year (mmpy); ICORR is the corrosion current (in A); K is the constant that defines the units of the corrosion rate and should be 3272 mm/(A·cm·year) for CR to be in mmpy; EW is the equivalent weight (in g/equivalent); d is the density (in g/cm³); and A is the sample area (in cm²). As shown in Figure 6, the addition of reinforcements along with FSP shifts the corrosion potential (ECORR) in a more active direction, with the hybrid composite being more active than the mono composites. The Al-SiC mono composite fabricated at 1800 rpm shows a higher corrosion current density (ICORR) as compared to the base alloy. The higher ICORR at 1800 rpm is attributed to the presence of bulky SiC particles and a coarse grain structure, which provide potential cathodic sites for the electrolyte (see Figure S1 in Supplementary Materials). At these sites, the corrosion pits initiate and progress. The bulky SiC particles also lead to the breakdown of the Al2O3 layer formed on the matrix, which allows contact between the electrolyte and the matrix. This contact between electrolyte and composite ultimately increases the corrosion loss of the composite. At 2200 rpm, the optimum combination of grain size, particle size and uniform distribution of SiC particles leads to the minimum ICORR. At a higher tool rotational speed of 2500 rpm, grain coarsening due to high-temperature generation takes place [48]. This grain coarsening increases the corrosion loss even though the particles are more uniformly distributed in the matrix. The grain boundaries act as cathodic sites for galvanic corrosion, and thus grain coarsening leads to an increase in corrosion loss [18].
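A minimal numerical sketch of Equation (2) is given below. The input values are illustrative placeholders only (the measured corrosion currents are those reported in Table 1), and the helper function name is hypothetical.

```python
# Corrosion rate from the Tafel-extrapolated corrosion current, following
# Equation (2): CR = K * I_corr * EW / (d * A). The example numbers below are
# placeholders for illustration, not the measured values from Table 1.

K_MMPY = 3272.0  # mm/(A*cm*year), unit constant for CR in mmpy

def corrosion_rate_mmpy(i_corr_amp: float, eq_weight_g: float,
                        density_g_cm3: float, area_cm2: float) -> float:
    """Return the corrosion rate in millimetres per year (mmpy)."""
    return K_MMPY * i_corr_amp * eq_weight_g / (density_g_cm3 * area_cm2)

# Example with assumed values typical of an aluminium coupon:
print(corrosion_rate_mmpy(i_corr_amp=5e-6,     # 5 microamps (assumed)
                          eq_weight_g=8.99,    # Al equivalent weight (26.98/3)
                          density_g_cm3=2.70,  # Al density
                          area_cm2=1.0))       # exposed area (assumed)
```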
In the case of the Al-graphite mono composite, the maximum corrosion loss was observed as compared to the other fabricated composites and the unreinforced Al6061. The various oxidation and reduction reactions taking place on the Al surface are shown by Equations (3)-(6), respectively.
As seen from the above equations, there is a wide difference between the reversible potentials (E_REV) for the reactions. This wide difference in reversible potentials leads to the formation of galvanic coupling between matrix and reinforcement. The reinforcement acts as an inert electrode upon which O2 and H+ reduction occur [49]. The shifting of ECORR towards more negative values and the increase in ICORR signify the activation of aluminium by the presence of graphite [23,28]. Increased evolution of H2 due to the highly conducting nature of graphite shifts the corrosion potential of the Al/graphite mono composite in a more active direction than the as-received Al6061 [28]. The cathodic nature of graphite in NaCl solution is also responsible for the galvanic corrosion of the composite [49]. This cathodic behavior of graphite leads to the formation of localized galvanic cells, in which graphite particles act as a cathode and lead to the anodic dissolution of Al in the chloride solution [23,28]. These localized galvanic cells break the layer of Al2O3, and pitting of the Al matrix is initiated. The grain boundaries and the reinforcement-matrix interface also act as potential sites for galvanic cell formation [29]. This discussion concludes with the fact that the primary mechanism of corrosion in the Al-graphite composite is galvanic corrosion, whereas H2 evolution and intergranular corrosion contribute towards the increased corrosion loss. The minimum corrosion loss is noticed in the Al-SiC-graphite hybrid composite fabricated at a tool rotational speed of 2200 rpm, as shown in Figure 6b. The hybrid composite fabricated at 2200 rpm contains graphite in both particle and layer form (Figure 7d,f). This accomplishes two functions: (i) the reduction of galvanic corrosion due to the less concentrated graphitic regions in the absence of the particle form; and (ii) the protection of the Al matrix from the electrolyte due to the formation of a layer over the matrix. With the reduction in the particle form of graphite, the interfacial corrosion is also reduced due to a decrease in the number of potential cathodic sites. The crevice corrosion at the Al-SiC interface is also reduced due to the presence of a graphite layer between them. Increased grain refinement at 2200 rpm also reduces the possibility of intergranular corrosion.
This combined effect ultimately reduces the corrosion loss of the hybrid composite fabricated at 2200 rpm drastically. The higher corrosion loss at 1800 rpm is attributed to the non-uniform distribution of SiC particles due to high flow shear stress in the absence of sufficient heat. The agglomerated particles become potential sites for the formation of galvanic coupling and increase the corrosion loss. The intermediate value of current density at 2500 rpm is due to grain growth during processing, even though a uniform distribution of fine SiC and graphite particles is present. At higher tool rotational speeds, due to severe heat generation, the temperature rises and leads to coarsening of grains as well as chemical coalescence of particles (see Figure S2 in Supplementary Materials). This coarse grain structure provides the sites for galvanic corrosion.
Microstructural Characterization
Figure 7a-d show the scanning electron microscopy (SEM) micrographs of various composites fabricated at a tool rotational speed of 2200 rpm.Since the best mechanical and electrochemical properties are obtained at 2200 rpm, the micrographs of composites fabricated at 2200 rpm are only shown here.The presence of various reinforcements is confirmed by energy dispersive X-ray spectroscopy (EDX) analysis at corresponding points.Figure 7a shows the refined grain structure of unreinforced Al after processing.The distribution of SiC reinforcement along with some coarse particles and refined grains can be visualized in the Al-SiC mono composite, as shown in Figure 7b.The presence of SiC particles is also confirmed by EDX analysis (inset).The presence of graphite in both layer and particle forms is noticed in the Al-graphite mono composite, as shown in Figure 7c.This phenomenon is attributed to the plastic deformation at high tool rotational speed, which squeezes out the graphite and distributes it uniformly over the Al matrix.
The point EDX (inset) on the encircled region and line EDX at a-b (Figure 7e) confirm the presence of graphite in both particle as well as layer form, respectively.The presence of oxygen in line EDX at a-b signifies the formation of an Al 2 O 3 layer on the matrix.Figure 7d shows the SEM micrograph of Al-SiC-graphite hybrid composite.Here also, the graphite is present in both particle and layer form.The line (c-d) EDX analysis shown in Figure 7f confirms the presence of both the reinforcements along with Al 2 O 3 layer on the matrix.The grain size in all the above three fabricated composites is nearly the same.
Figure 8a-d show the SEM micrographs of the as-received Al sample, the Al-SiC mono composite, the Al-graphite mono composite and the Al-SiC-graphite hybrid composite after the potentiodynamic polarization test. The cathodic reaction for Al in an aerated, near-neutral pH solution is oxygen reduction, which is later followed by adsorption and the formation of an oxide layer. This oxide layer is composed of an adherent, compact and stable inner layer, while the outer layer is porous, less stable and thus more favorable for corrosion [23]. Due to this, an abrupt increase in current is observed (Figure 6) after increasing the applied potential. This increase in current leads to the attack of chloride ions on the flawed oxide layer, and a breakdown of the passive layer takes place, which ultimately results in pitting corrosion [50]. The reactions involved in this phenomenon explain the corrosion pits formed on the aluminium surface, as shown in Figure 8a. A sharp pitting region in the case of the Al-SiC composite can be seen from the potentiodynamic curves in Figure 6b. Also, Figure 8b shows corrosion pits near the SiC particles. It has been reported that the Al-SiC mono composite is more susceptible to corrosion pits than the unreinforced alloy [21]. The tendency of the SiC particles to assist the progression of pit growth is the reason behind this more corrosion-prone behavior [51]. The SiC particles cause the breakdown of the oxide film and provide a path for chloride ions to come into direct contact with the matrix [21]. Thus, the SiC/Al matrix interface becomes a potential site for galvanic corrosion [21]. However, pit formation does not start at the particle itself; instead, it starts from the Si-rich layer around the SiC particle [51]. The fine grain structure due to FSP controls the intergranular corrosion pits to a great extent, and thus the effect is mainly due to the presence of the SiC particles, whereas in the case of the as-received Al alloy, intergranular pitting is the primary reason for the higher current density.
Figure 8c shows the SEM micrograph of Al/graphite mono composite after the corrosion test.The interfacial pit at the Al/graphite interface can be seen from the micrograph.Also, the pit density of the Al/graphite mono composite is more than the unreinforced Al alloy.The increased pitting tendency was attributed to the increase in corrosion current due to the evolution of H 2 at the aluminium alloy and over the graphite surface.Drastic H 2 evolution at the graphite surface is due to the higher exchange current density for H + /H 2 on graphite [28].This H 2 evolution breaks the oxide layer formed on the surface and leads to galvanic corrosion.The Al/graphite interface is the preferential site for the corrosion pits and galvanic corrosion to occur (Figure 8c).
Figure 8d shows the SEM micrograph of the Al-SiC-graphite hybrid composite. It is seen from the micrograph that the presence of SiC and graphite in the near periphery reduces the tendency of interfacial pit formation. The reduced intergranular as well as interfacial corrosion pits result in the minimum corrosion loss in the case of the hybrid composite. Due to the presence of SiC particles, the proportion of graphite present in the form of a film is greater than that in particle form. Thus, H2 evolution decreases due to the unavailability of a concentrated graphite region, which decreases the corrosion loss. This thin film of graphite also protects the matrix from contact with chloride ions and avoids unnecessary pitting.
Nanomechanical Behaviour
The nanomechanical behavior of the composites fabricated at 2200 rpm and 2500 rpm tool rotational speeds was studied using the nanoindentation technique. Since the best electrochemical properties were obtained at these parameters, the study was conducted only for the best samples. Figure 9a-c shows the nanoindentation results obtained for the composites fabricated at 2200 rpm. The maximum nano-hardness (0.38 GPa) is obtained for the Al-SiC composite, whereas the Al-graphite mono composite and the hybrid composite show nano-hardness values of 0.30 GPa and 0.35 GPa, respectively. The increase in nano-hardness is attributed to the grain refinement due to dynamic recrystallization and the existence of compressive strains in the stir region [52]. Another possible reason for the increase in hardness is the generation of geometrically necessary dislocations (GNDs). During FSP, the discrepancy between the coefficients of thermal expansion and the elastic moduli results in the generation of these GNDs [53]. These GNDs restrict dislocation movement and contribute towards the higher strength of the composite. The hardness of the reinforced abrasive SiC particles also plays a significant role in increasing the nano-hardness of the composite. Figure 9c shows the load-displacement curve (p-h curve) for the composites fabricated at 2200 rpm. The figure shows that the resistance to deformation offered by the Al-graphite mono composite is significantly lower than that offered by the Al-SiC mono composite and the hybrid composite. The Al-SiC composite exhibits the hardest response, as evidenced by the minimum penetration depth achieved at the peak load. The curve obtained for the Al-SiC composite shows a sudden jump around 4500 µN. This sudden rise in slope can be attributed to the higher stiffness provided by the SiC composite [54].
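The text reports only the extracted nano-hardness and modulus values; the analysis used to obtain them from the p-h curves is not described. A common choice for such data is an Oliver-Pharr style evaluation, sketched below under stated assumptions (ideal Berkovich area function, β = 1.034, and placeholder inputs chosen only to be of the same order as the loads discussed here).

```python
import math

# A minimal Oliver-Pharr style estimate of nano-hardness and reduced modulus
# from a load-displacement (p-h) curve. This is NOT the authors' analysis:
# the tip-area function, beta factor and the example numbers are assumptions.

def oliver_pharr(p_max_un: float, h_max_nm: float, stiffness_un_per_nm: float,
                 beta: float = 1.034):
    """Return (hardness_GPa, reduced_modulus_GPa) for an ideal Berkovich tip."""
    h_c = h_max_nm - 0.75 * p_max_un / stiffness_un_per_nm  # contact depth
    area_nm2 = 24.5 * h_c ** 2                              # ideal Berkovich area
    hardness_gpa = (p_max_un / area_nm2) * 1e3              # uN/nm^2 -> GPa
    e_r_gpa = (math.sqrt(math.pi) / (2.0 * beta)) \
        * stiffness_un_per_nm / math.sqrt(area_nm2) * 1e3
    return hardness_gpa, e_r_gpa

# Placeholder inputs of the same order as the ~5000 uN peak loads mentioned:
print(oliver_pharr(p_max_un=5000.0, h_max_nm=800.0, stiffness_un_per_nm=230.0))
```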
Figure 10a-c shows the nano-hardness results obtained for the composites fabricated at 2500 rpm. From the p-h curve, the Al-graphite mono composite shows greater elastic recovery as compared to the other reinforced composites. The hybrid composite shows a pop-in during loading at around 3000 µN load. This pop-in is the result of debonding between the Al matrix and the reinforcement [55]. The twinning effect and slip dislocation also contribute towards this change of the p-h curve around 3000 µN load. The minimum depth of penetration in the Al-SiC composite indicates the higher hardness of the composite [55]. As explained earlier, at 2500 rpm grain growth takes place; but at the same time, due to higher heat generation, the particles are more uniformly distributed in the Al matrix. Thus, due to the dominating mechanism of uniform particle dispersion, a higher hardness as compared to the composites fabricated at 2200 rpm is observed. The soft phase of graphite starts yielding early during loading and leads to the lower hardness of the Al-graphite mono composite. This localized yielding is the result of residual stresses due to the shearing of material during FSP [55].
Comparing Figures 9a and 10a, it can be noted that the nano-hardness at a processing speed of 2500 rpm is greater than that at 2200 rpm. Although the hardness values are higher in the case of 2500 rpm, the variation in microhardness at different points (see Figure S3 in Supplementary Materials) is severe and lacks uniformity. On the other hand, the samples processed at 2200 rpm show lower hardness but greater uniformity.
Comparing Figures 9b and 10b, it can be noticed that the hardness of the composite is marginally higher than that of the base metal, but the reduced modulus decreases sharply. This is attributed to the fact that the modulus is an intrinsic property of the material and strongly depends on its atomic bonding. In the case of the as-received material, the atomic bonding is intact, while it is distorted by the addition of reinforcement and thus the modulus decreases. However, a similar phenomenon is not observed in the composites fabricated at 2500 rpm because, under high heat and intense plastic deformation, the Al and the reinforcements form a strong atomic bond.
Conclusions
The hybrid Al-SiC-graphite composite has been successfully fabricated through FSP. The effects of the various process parameters and reinforcements on the mechanical properties, electrochemical properties, and the morphology of the reinforcements were studied. The main conclusions can be listed as follows:
• The mean axial force during FSP is increased in the case of the Al-graphite mono composite due to the high thermal conductivity possessed by graphite;
• The presence of residual stresses in the fabricated composites is confirmed by a noteworthy Raman peak shift. The existence of edge disorder in the graphite crystals is also noticed. FSP also leads to the exfoliation of graphite towards single-layer graphene;
• The mechanical properties are improved due to particle reinforcement, and optimum uniform
Figure 3. Axial load variation: (a) Axial load vs. time curve for non-reinforced processed Al6061 at 2200 rpm; (b) axial load vs. tool rotational speed for different reinforcements.Graphite is abbreviated as Gr.
Figure 4. Raman spectrum of powder and various composites fabricated at 2200 rpm.Graphite is abbreviated as Gr.
Figure 5. X-ray Diffraction (XRD) micrographs of as-received samples and friction stir processed samples.
Table 1. The potentials and current densities obtained by Tafel extrapolation analyses for as-received Al6061 and the various composites fabricated at different tool rotational speeds.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2011-06-13T00:00:00.000
|
12517482
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://arthritis-research.biomedcentral.com/track/pdf/10.1186/ar3359",
"pdf_hash": "70982f8a5d0d90225e6f0e5c9526c3ad893e59e8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44451",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"sha1": "70982f8a5d0d90225e6f0e5c9526c3ad893e59e8",
"year": 2011
}
|
pes2o/s2orc
|
Improvement in multiple dimensions of fatigue in patients with fibromyalgia treated with duloxetine: secondary analysis of a randomized, placebo-controlled trial
Introduction Fatigue is one of the most disabling symptoms associated with fibromyalgia that greatly impacts quality of life. Fatigue was assessed as a secondary objective in a 2-phase, 24-week study in outpatients with American College of Rheumatology-defined fibromyalgia. Methods Patients were randomized to duloxetine 60-120 mg/d (N = 263) or placebo (N = 267) for the 12-week acute phase. At Week 12, all placebo-treated patients were switched to double-blind treatment with duloxetine for the extension phase. Fatigue was assessed at baseline and every 4 weeks with the Multidimensional Fatigue Inventory (MFI) scales: General Fatigue, Physical Fatigue, Mental Fatigue, Reduced Activity, and Reduced Motivation. Other assessments that may be associated with fatigue included Brief Pain Inventory (BPI) average pain, numerical scales to rate anxiety, depressed mood, bothered by sleep difficulties, and musculoskeletal stiffness. Treatment-emergent fatigue-related events were also assessed. Changes from baseline to Week 12, and from Week 12 to Week 24, were analyzed by mixed-effects model repeated measures analysis. Results At Week 12, duloxetine versus placebo significantly (all p < .05) reduced ratings on each MFI scale, BPI pain, anxiety, depressed mood, and stiffness. Improvement in ratings of being bothered by sleep difficulties was significant only at Weeks 4 and 8. At Week 24, mean changes in all measures indicated improvement was maintained for patients who received duloxetine for all 24 weeks (n = 176). Placebo-treated patients switched to duloxetine (n = 187) had significant within-group improvement in Physical Fatigue (Weeks 16, 20, and 24); General Fatigue (Weeks 20 and 24); Mental Fatigue (Week 20); and Reduced Activity (Weeks 20 and 24). These patients also experienced significant within-group improvement in BPI pain, anxiety, depressed mood, bothered by sleep difficulties, and stiffness. Overall, the most common (> 5% incidence) fatigue-related treatment-emergent adverse events were fatigue, somnolence, and insomnia. Conclusions Treatment with duloxetine significantly improved multiple dimensions of fatigue in patients with fibromyalgia, and improvement was maintained for up to 24 weeks. Trial registration ClinicalTrials.gov registry NCT00673452.
Introduction
Fibromyalgia is a chronic pain disorder that has been estimated to affect as many as 5 million individuals in the US, most of whom are women [1]. In addition to widespread pain, symptoms that may include sleep disturbances, fatigue, depression, anxiety, and problems with memory and concentration characterize fibromyalgia [2][3][4]. Among these, fatigue greatly impacts quality of life and has been identified as one of the most disabling symptoms associated with fibromyalgia [4]. Individuals with fibromyalgia report that their fatigue typically is not alleviated by sleep or rest [5] but is a physical tiredness, and these people have low energy and require increased effort to overcome inactivity and perform physical tasks [4,6]. Patients with fatigue report having decreased mental endurance and slowed thinking and feel overwhelmed [4]. Symptoms of fibromyalgia that may contribute to fatigue include pain [6][7][8], stiffness [8], sleep quality [6][7][8][9], and depression [6,7,10].
Medications currently approved for the management of fibromyalgia include duloxetine hydrochloride (hereafter referred to as duloxetine), pregabalin, and milnacipran. Duloxetine is a potent serotonin and norepinephrine reuptake inhibitor that has been approved by the US Food and Drug Administration for treatment of major depressive disorder (MDD) and generalized anxiety disorder (GAD) and for the management of pain associated with diabetic peripheral neuropathy, management of chronic musculoskeletal pain, and management of fibromyalgia. In past trials in fibromyalgia, the efficacy of duloxetine versus placebo on improvement in secondary measures of fatigue has not been consistent. Two of the fibromyalgia trials assessed fatigue as a secondary outcome by using the Multidimensional Fatigue Inventory (MFI) [11], which measures multiple domains of fatigue on five scales: General Fatigue, Mental Fatigue, Physical Fatigue, Reduced Activity, and Reduced Motivation. One of the studies reported significant between-treatment differences in only Mental Fatigue at the end of 6 months of treatment with duloxetine 60 to 120 mg given once daily (QD) [12]. The other study reported significant between-treatment differences in improvement with duloxetine 60 mg QD in Reduced Motivation at the 12-week endpoint and in Mental Fatigue at both the 12-and 24-week endpoints. In the same study, treatment with duloxetine 120 mg QD compared with placebo was associated with significant improvement in Reduced Motivation by 12 weeks and in Physical Fatigue, Mental Fatigue, Reduced Motivation, and Reduced Activity after 6 months of treatment [13].
More recently, treatment with duloxetine 60 to 120 mg QD for 12 weeks in comparison with placebo was found to significantly improve fatigue on each MFI domain [14]. In this secondary analysis, we report monthly changes in fatigue domains and in symptoms that may be related to or may contribute to fatigue, such as pain, depressed mood, anxiety, sleep, and stiffness across the entire study. In addition, changes in MFI scales were assessed in subgroups of patients who were pain responders, those who reported 'feeling much better', and those who required a dose escalation in the acute phase. The current analyses were performed to better characterize improvement in fatigue during the entire 24 weeks of the study.
Materials and methods
Details of the 12-week acute phase of the study (F1J-US-HMGB; trial registration NCT00673452) have been published [14]. Briefly, this was a 24-week, multicenter, randomized, double-blind, placebo-controlled trial in outpatients who were at least 18 years old and who had fibromyalgia as defined by the American College of Rheumatology. The purpose of the trial was to confirm the efficacy of flexibly dosed duloxetine 60 to 120 mg QD on patient-rated improvement in fibromyalgia. Double-blind dose adjustments via an interactive voice response system were allowed for patients who were not responding. Response was defined as an at least 50% reduction in pain as assessed by the Brief Pain Inventory (BPI) [15] 24-hour average pain item (referred to hereafter as BPI average pain). At weeks 4 and 8, non-responding patients in the duloxetine group had their dose increased from 60 to 90 mg QD. At week 8, patients who were not responding to 90 mg had their dose increased to 120 mg QD. If the patient could not tolerate the dose increase, it was reduced to the pre-escalation dose. After week 12, all patients remained on their current dose of duloxetine for the remainder of the study. Patients in the placebo group were transitioned to double-blind active treatment with duloxetine 60 mg QD after week 12.
Efficacy measures in this analysis were assessed at each study visit, which occurred every 4 weeks, and included the MFI, BPI average pain, and numerical rating scales assessing anxiety, mood, 'bothered by sleep difficulties', and musculoskeletal stiffness and Patient Global Impression of Improvement (PGI-I) [16]. The MFI scales each rate symptoms from 4 (low) to 20 (high). The BPI average pain item assesses pain with ratings from 0 (no pain) to 10 (pain as severe as you can imagine). The PGI-I is a categorical scale that patients use to rate their overall impression of how they are feeling since treatment began; ratings on the scale are as follows: 1 = very much better, 2 = much better, 3 = a little better, 4 = no change, 5 = a little worse, 6 = much worse, and 7 = very much worse. The numerical rating scales assessed patient-perceived severity of anxiety, mood, 'bothered by sleep difficulties', and musculoskeletal stiffness. These scales ranged from 0 ('not present/bothered by') to 10 ('extremely').
Response to treatment in the acute phase was defined as an at least 50% reduction from baseline in BPI average pain severity. Treatment-emergent adverse events were assessed for the incidence of events that might be associated with fatigue.
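For concreteness, the responder rule can be expressed directly on the pain scores. The sketch below uses hypothetical column names; the trial's actual data layout is not described in the text.

```python
import pandas as pd

# Flag acute-phase pain responders: at least a 50% reduction from baseline in
# BPI average pain severity. Column names are illustrative assumptions.

def flag_responders(df: pd.DataFrame) -> pd.Series:
    """True where endpoint BPI average pain fell by >= 50% versus baseline."""
    reduction = (df["bpi_baseline"] - df["bpi_endpoint"]) / df["bpi_baseline"]
    return reduction >= 0.50

example = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "bpi_baseline": [6.0, 7.0, 5.0],
    "bpi_endpoint": [2.5, 6.0, 2.0],
})
example["responder"] = flag_responders(example)
print(example)
```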
Statistical analyses were done on an intent-to-treat basis. All randomly assigned patients with a baseline visit and at least one post-baseline visit were included in the efficacy analyses, and all randomly assigned patients were included in the safety analyses. For the acute phase, these analyses included baseline to week 12. The extension-phase analyses used week 12 as the baseline and week 24 as the endpoint. All tested hypotheses were considered statistically significant if the two-sided P value was not more than 0.05 (unless otherwise specified). P values are provided where valid statistical inferences can be made.
A restricted maximum likelihood-based MMRM (mixed-effects model repeated measures) analysis was used on longitudinal changes from baseline for continuous efficacy measures. The model included the fixed categorical effects of treatment, investigator, visit, and treatment-by-visit interaction as well as the continuous, fixed covariates of baseline score and baseline score-by-visit interaction. An unstructured covariance matrix was used to model the within-patient errors. Significance tests were based on least-squares means and type III sums of squares. Efficacy results presented here are from the MMRM analysis unless otherwise noted. Last-observation-carried-forward (LOCF) changes from baseline to endpoint were analyzed by using an analysis of covariance (ANCOVA) model with the terms of treatment, investigator, and baseline scores. The term 'mean' refers to the least-squares mean, which is the estimated mean from a specific model (MMRM or LOCF ANCOVA).
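As an illustration of the endpoint analysis described above, the sketch below shows last-observation-carried-forward imputation and an ANCOVA on LOCF change scores with treatment, investigator, and baseline as terms. The column names are hypothetical, and the longitudinal MMRM with unstructured within-patient covariance is not reproduced; a statsmodels mixed model with a random intercept would only approximate it.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per patient per visit, with columns
# patient_id, week, treatment, investigator, baseline_score, score.

def locf(df: pd.DataFrame, value_col: str = "score") -> pd.DataFrame:
    """Carry each patient's last observed value forward to later visits."""
    df = df.sort_values(["patient_id", "week"]).copy()
    df[value_col] = df.groupby("patient_id")[value_col].ffill()
    return df

def endpoint_ancova(endpoint_df: pd.DataFrame):
    """ANCOVA on LOCF change from baseline at the Week-12 endpoint."""
    model = smf.ols(
        "change_score ~ C(treatment) + C(investigator) + baseline_score",
        data=endpoint_df,
    )
    return model.fit()  # least-squares fit; type III contrasts need extra steps
```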
Subgroup analyses were conducted on acute-phase mean changes in MFI scale ratings in pain responders and non-responders, patients with endpoint PGI-I of not greater than 2 or greater than 2, and before and after dose escalations in the acute phase. The models included baseline, treatment, investigator, subgroup, and treatment-by-subgroup interaction. Statistical analyses were performed with SAS software (SAS Institute Inc., Cary, NC, USA).
Results
A description of the patient population and the acute-phase results has been reported previously [14] and will be briefly summarized here. Overall, most of the patients (93.2% of 530) were women who were middle-aged (50.2 ± 11.1 years old) Caucasians (77.4%) or Hispanics (15.7%). About 18% of the study population had a diagnosis of comorbid current MDD, and about 8% had a diagnosis of comorbid current GAD. At the acute-phase baseline, patients reported having moderate to severe fatigue symptoms, moderately severe pain, and musculoskeletal stiffness and being bothered by sleep difficulties. Severity of anxiety and depressed mood was mild to moderate.
In the acute phase, there was a statistically significant mean reduction (improvement) versus placebo on each MFI domain scale rating and BPI average pain measures at weeks 4, 8, and 12 ( Figure 1). In addition, there was a statistically significant improvement versus placebo in patient ratings of anxiety, depressed mood, and musculoskeletal stiffness at weeks 4, 8, and 12 ( Figure 2), but ratings of being bothered by sleep difficulties were significant at weeks 4 and 8 only. In both treatment groups, acute-phase mean reductions from baseline in MFI scale ratings in patients who were pain responders were 2 to 3 points lower on average (showing improvement in fatigue) compared with a less than 1 point decrease in patients who were non-responders (Table 1). Acute-phase mean changes from baseline in MFI scale ratings in patients with acute-phase endpoint PGI-I ratings of not greater than 2 were at least two to three times greater than the mean changes in patients with endpoint PGI-I ratings of greater than 2, regardless of treatment received ( Table 2). Mean changes at endpoint in MFI scale ratings in patients who had a dose escalation are summarized in Table 3. A total of 122 patients not responding to duloxetine 60 mg were escalated to the 90 mg dose. Those patients who responded to the 90 mg dose (n = 59) experienced further reductions (improvement) across the MFI fatigue domains. Those patients who did not respond to the 90 mg dose and were escalated to the 120 mg dose (n = 63) experienced minimal improvement.
At the end of the acute phase, 363 patients entered the 12-week extension phase, and all of them received double-blind treatment with duloxetine. Patients who received duloxetine in the acute phase continued on their stable dose of 60, 90, or 120 mg QD in the extension phase and were referred to as the duloxetine/duloxetine group (n = 176). Patients in the placebo group (n = 187) who continued in the extension phase received duloxetine 60 mg QD and were referred to as the placebo/duloxetine group. Extension-phase baseline (week 12) and mean changes at study endpoint (week 24) on each secondary measure are summarized in Table 4. For patients with 24 weeks of treatment with duloxetine, there were continued statistically significant within-group improvements in MFI General Fatigue and Reduced Motivation, BPI average pain, and patient ratings of anxiety, depressed mood, 'bothered by sleep difficulties', and musculoskeletal stiffness. Placebo patients who were transitioned to duloxetine also experienced improvement in each measure, and after 12 weeks of treatment there were statistically significant within-group improvements observed for all but MFI Mental Fatigue and Reduced Motivation.
There were no significant between-treatment differences in the occurrence of fatigue-related treatment-emergent adverse events during the acute phase of the study, and these events became less frequent during the extension phase (Table 5). The most common events were fatigue, insomnia, and somnolence.
Discussion
Treatment of fatigue has become an important component of the overall management of fibromyalgia because it has been identified by patients as being particularly bothersome and contributes to reduced quality of life [4,17,18]. Assessing fatigue is possible with a single question; however, the type of response a patient gives would depend on the nature of the question and what kind of fatigue is being experienced by the patient at that time. The MFI provides more in-depth information across five domains, each of which has been validated against a single global fatigue question, such as the 'Tiredness' question on the Fibromyalgia Impact Questionnaire [19]. Each domain was significantly associated with this global fatigue question, and this supports the notions that fatigue is multidimensional and that different aspects of fatigue should be measured separately [20]. Because patients with fibromyalgia often report fatigue symptoms that are physical as well as mental in
nature, using the MFI allowed us to examine the effect of duloxetine across several dimensions of fatigue. In this study, mean MFI scale ratings at baseline were nearly twice as severe as those reported for healthy individuals in a large US population, whose ratings were all less than 9 points [21]. The severity of fatigue in the patients with fibromyalgia in the present study was clinically significant because each MFI domain rating was more than 3 points higher than those of healthy individuals [21]. Furthermore, the MFI domain ratings in these patients were as severe as those reported by others for fibromyalgia [20] as well as patients with other chronic diseases like chronic fatigue syndrome [21], Sjögren syndrome [22], and chronic low back pain [23] and cancer patients receiving radiation therapy [24].
Treatment with duloxetine versus placebo significantly improved fatigue across all of the MFI domains within 4 weeks and continued to improve at each visit thereafter, and improvement was maintained for up to 24 weeks. The magnitude of improvement across the MFI domains was similar (reduction of about 2 points each), and this suggests that treatment with duloxetine improves not only global fatigue symptoms but also both physical and mental aspects of fatigue. Across acute-phase treatment groups, patient global impression of feeling at least 'much better' was associated with a 2-to 3-point decrease in fatigue severity across MFI domains, suggesting that improvement in fatigue may be as important as improvement in pain in this patient population. Previous studies have suggested that there is an association between pain and fatigue. For instance, changes in pain and fatigue in patients with fibromyalgia have been found to be moderately correlated with patient ratings of feeling 'better' [25]. In addition, a review of studies in patients with various chronic pain disease states reported that fatigue decreases when pain improves [26]. Also, pain was noted to be a predictor of fatigue in patients with rheumatoid arthritis, osteoarthritis, or fibromyalgia [27], and individuals with greater pain severity report greater fatigue [23]. In the present study, improvement in fatigue across MFI domains was two to three times greater in patients who were pain responders compared with non-responders. In addition, duloxetine 90 mg QD was associated with further reduction in pain [14] and improvement across fatigue domains for those patients who responded to this dose.
Treatment with duloxetine was associated not only with reduction in pain and fatigue in this study but also with reduction in severity of anxiety, depressed mood, 'bothered by sleep difficulties', and musculoskeletal stiffness. Changes in these symptoms may have contributed to the improvement observed in fatigue because, across all of these measures, with the exception of 'bothered by sleep difficulties' for which the mean change at week 12 did not separate from placebo (P = 0.06), significant improvement was noted at weeks 4, 8, and 12. However, when LOCF analysis was used, 'bothered by sleep difficulties' reported in the primary analysis of this study was significantly improved with duloxetine treatment as compared with placebo (P = 0.05) [14]. Overall, these findings are consistent with a study that found that a moderate (30% to 50%) to substantial (> 50%) reduction in pain was associated with significant reductions in fatigue, sleep disturbance, depression, and anxiety in patients with fibromyalgia [28]. Several limitations to the present study may impact the interpretation of the results. First, a specified level of fatigue severity was not required for patients to be included in this study. In addition, the results of this study may not be generalizable to patients with some psychiatric comorbid disorders or unstable medical or comorbid pain disorders or to patients who were treatment-refractory or disabled, because patients with these conditions were excluded from the study. Lastly, the results of this study do not definitively show the relationship between fatigue and any of the other symptoms of fibromyalgia; this relationship requires further research.
Conclusions
Fatigue is a common and often disabling symptom associated with fibromyalgia. The MFI is a measure that captures the multidimensional nature of fatigue that is experienced by patients with fibromyalgia. This secondary analysis provides evidence for the efficacy of duloxetine in improvements in multidimensional fatigue domains across 24 weeks of treatment.
|
v3-fos-license
|
2020-03-26T10:16:46.361Z
|
2020-03-01T00:00:00.000
|
216504112
|
{
"extfieldsofstudy": [
"Political Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/3E3E182352F1AD555CBB788E2380E23F/S2071832220000140a.pdf/div-class-title-the-right-to-be-forgotten-in-the-digital-age-the-challenges-of-data-protection-beyond-borders-div.pdf",
"pdf_hash": "13640d2eabaad44265189d35b710654f8dd6b884",
"pdf_src": "Cambridge",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44452",
"s2fieldsofstudy": [
"Law"
],
"sha1": "ea54e7c10c76bf7d3a47936812cacde473d072ad",
"year": 2020
}
|
pes2o/s2orc
|
The Right to Be Forgotten in the Digital Age: The Challenges of Data Protection Beyond Borders
Abstract This article explores the challenges of the extraterritorial application of the right to be forgotten and, more broadly, of EU data protection law in light of the recent case law of the ECJ. The paper explains that there are good arguments for the EU to apply its high data protection standards outside its borders, but that such an extraterritorial application faces challenges, as it may clash with duties of international comity, legal diversity, or contrasting rulings delivered by courts in other jurisdictions. As the article points out from a comparative perspective, the protection of privacy in the digital age increasingly exposes a tension between efforts by legal systems to impose their high standards of data protection outside their borders – a dynamic which could be regarded as ‘imperialist’ – and claims by other legal systems to assert their own power over data – a dynamic which one could name ‘sovereigntist’. As the article suggests, navigating between the Scylla of imperialism and the Charybdis of sovereigntism will not be an easy task. In this context, greater convergence in the data protection framework of liberal democratic systems worldwide appears as the preferable path to secure privacy in the digital age.
which has been taken as a model by courts also at the national level, including in Germany. Among the data privacy rights developed by the ECJ, and now explicitly codified in EU law, is the right to be forgotten, namely the right of the data subject to request from data controllers, including online digital platforms, the erasure of personal data concerning him or her.
However, the scope of EU data protection law in general, and of the right to be forgotten in particular, has been increasingly facing a question of jurisdictional boundaries. One of the most debated features of EU data protection law is its capacity to apply beyond the borders of the EU. 4 Moreover, the recent introduction of harsher fines has led many foreign companies to comply with EU data protection law not only in relation to their European business, but on a global scale. 5 However, it has been a matter of debate and conflicting ECJ judgments whether the right to be forgotten and other requests to delist online content could be enforced worldwide, or if rather reasons of international comity restricted their effects within the borders of the EU.
This article explores the challenges of the extraterritorial application of the right to be forgotten, in particular, and of EU data protection law, more broadly, in light of the recent case law of the ECJ. The paper explains that there are good arguments for the EU to apply its high data protection standards outside its borders. As data are un-territorial, 6 only a global application of EU data protection law can guarantee an effective enforcement of privacy rights. However, the paper also highlights how such an extraterritorial application of EU data protection law faces challenges, as it may clash with duties of international comity and the need to respect diversity of legal systems, and could ultimately be nullified by contrasting rulings delivered by other courts in other jurisdictions.
As the article points out from a comparative perspective, however, this challenge is not unique to the EU legal system. Rather, it emerges in other jurisdictions as well, such as Canada and Australia. In fact, the protection of privacy in the digital age increasingly exposes a tension between efforts by legal systems to impose their high standards of data protection outside their borders, a dynamic which could be regarded as 'imperialist', 7 and claims by other legal systems to assert their own power over data, a dynamic which one could name 'sovereigntist'. 8 As the article suggests, navigating between the Scylla of imperialism and the Charybdis of sovereigntism will not be an easy task, particularly when claims to control the digital realm are made by authoritarian regimes, which are eager to exploit digital technology for their illiberal mission. 9 In this context, greater convergence in the data protection framework of liberal democratic systems worldwide appears as the preferable, albeit far from easy, path to secure privacy in the digital age.
The article is structured as follows. Section B presents the EU constitutional framework for data protection and the expanding case law of the ECJ in the field. Section C analyzes the right to be forgotten afforded to data subjects, originally developed by the ECJ and then codified in EU legislation. Section D illustrates how the EU framework for data protection has progressively extended its reach outside the jurisdiction of the EU, looking in particular at the recent case law of the ECJ in the field of the right to be forgotten and removal of content from online platforms. Section E, drawing a comparison with other jurisdictions, explores the rationale behind the extraterritorial application of EU data protection law and examines the challenges that this tendency poses. Section F concludes by suggesting that transnational cooperation among liberal democratic jurisdictions appears as the preferable path to navigate the emerging tension between data protection imperialism and digital sovereignty and to guarantee an elevated standard of protection of data privacy in the digital age.
7 See Oxford Learner's Dictionaries, "Imperialism" (defining imperialism as "1. A system in which one country controls other countries [...], 2. The fact of a powerful country increasing its influence over other countries through business, culture, etc."), https://www.oxfordlearnersdictionaries.com/definition/american_english/imperialism. 8 See Oxford Learner's Dictionaries, "Sovereignty" (defining sovereignty as "1. Complete power to govern a country. 2. The state of being a country with freedom to govern itself"), https://www.oxfordlearnersdictionaries.com/definition/english/sovereignty?q=sovereignty.
B. EU Data Protection Law and Jurisprudence
At the constitutional level, the EU abides by one of the most advanced standards for data privacy worldwide. The EU Charter of Fundamental Rights adopted in 2000 introduced a constitutional recognition of the right to data protection in the EU legal order. 10 Whereas Article 7 of the Charter (entitled "Respect for Private and Family Life") re-affirmed the content of Article 8 of the European Convention on Human Rights, proclaiming that "Everyone has the right to respect for his or her private and family life, home and communications," Article 8 of the Charter (entitled "Protection of Personal Data") introduced a new explicit recognition of the rights to data privacy by stating that "Everyone has the right to the protection of personal data concerning him or her. Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified. Compliance with these rules shall be subject to control by an independent authority." With the entry into force of the Lisbon Treaty in 2009, the Charter has acquired full legal value. 11 Moreover, the Lisbon Treaty introduced another provision confirming the centrality that the rights to data protection now play in the constitutional order of the EU. 12 Pursuant to Article 16 of the Treaty on the Functioning of the EU (TFEU), "Everyone has the right to the protection of personal data concerning them." The same provision empowers the European Parliament with the Council to "lay down the rules relating to the protection of individuals with regard to the processing of personal data by Union institutions, bodies, offices and agencies, and by the Member States when carrying out activities which fall within the scope of Union law, and the rules relating to the free movement of such data. Compliance with these rules shall be subject to the control of independent authorities." At the legislative level, then, the EU has been endowed with a comprehensive framework on data protection since the 1990s. The Data Protection Directive, adopted in 1995, 13 introduced a far-reaching obligation for the member states to "protect the fundamental rights and freedoms of natural persons, and in particular their right to privacy, with respect to the processing of personal data" 14 within their jurisdictions. 15 A parallel regulation extended this protection to individuals with regard to the processing of personal data by EU bodies, offices and agencies, 16 and also established the European Data Protection Supervisor (EDPS). 17 Moreover, selected pieces of EU legislation expanded the protection of data privacy in specific sectors, such as electronic communications, 18 and police and judicial cooperation in criminal matters. 19 Ultimately, in 2016, the European Parliament and the Council, on the basis of Article 16 TFEU, enacted the General Data Protection Regulation (GDPR), 20 and simultaneously adopted a Directive on the protection of natural persons regarding processing of personal data connected with criminal offences or the execution of criminal penalties. 21 The GDPR replaced the Data Protection Directive with measures directly and uniformly binding throughout the member states of the EU, with the aim of providing an even more advanced framework for data protection, updated to the challenges of globalization and rapid technological developments. 22
At the jurisprudential level, finally, the ECJ through its case law has championed data protection, wearing with confidence the role of a human rights court. 23 In particular, heavily drawing on the Charter of Fundamental Rights, the ECJ has expanded its prior jurisprudence 24 and enforced a high standard of data privacy protections: 1) vertically, i.e. vis-à-vis the member states; 2) horizontally, i.e. vis-à-vis the EU political branches; as well as 3) diagonally, i.e. vis-à-vis private companies which hold relevant power in the processing of personal data. First, the ECJ held that Article 8 of the Charter, and Article 16 TFEU, implied a need for data protection authorities to be fully independent and ruled against member states which had failed to secure this objective in their legislation, 25 and set aside national legislation introducing surveillance measures in breach of data protection rights. 26 Second, the ECJ found that Articles 7 and 8 of the Charter provided data subjects with a right to be protected from practices of systematic government surveillance and thus struck down as incompatible with EU primary law both the EU Data Retention Directive, which required the retention of personal data for law enforcement purposes, 27 as well as an international agreement concluded between the EU and Canada, which foresaw the collection of passenger name record (PNR) data. 28 Third, the ECJ has also applied a high standard of data protection vis-à-vis tech companies, subjecting IT providers offering services within the EU internal market to EU data protection laws, and expanding the protections afforded to data subjects. 29 It is in this context that the ECJ has also recognized a right to be forgotten, which was later codified in the GDPR and taken on board by a number of other courts.
C. The Right to Be Forgotten
The ECJ took a major step toward the recognition of the right to be forgotten in May 2014, in Google Spain SL v. Agencia Española de Protección de Datos (AEPD). 30 The case concerned the interpretation of the Data Protection Directive, which was then applicable in domestic proceedings between Google and the AEPD, the Spanish data protection agency. Pursuant to the application by a Spanish national, the AEPD had required Google to remove from its search engine links to information relating to the applicant, on the account that data protection law applied to it. Google had challenged the administrative decision in Spanish courts, which decided to refer several questions to the ECJ. In its judgment, the ECJ recognized a new right for data subjects to request removal of on-line content, and, correspondingly, an obligation for the operator of a search engine to remove from the list of results displayed following a search made on the basis of a person's name links to web pages, published by third parties and containing information relating to that person. 31 As a preliminary matter, the ECJ ruled that a search engine like Google must be classified as a processor and controller of personal data within the meaning of the Data Protection Directive. 32 On the substance, then, the ECJ, after recognizing that a name search through Google could provide a "more or less detailed profile of [the data subject]", 33 held that the operator of a search engine "is obliged to remove from the list of results displayed following a search made on the basis of a person's name links to web pages, published by third parties [...], also in a case where that name or information is not erased beforehand or simultaneously from those web pages, and even, as the case may be, when its publication in itself on those pages is lawful." 34 The judgment of the ECJ in Google Spain opened the door to a full-fledged codification of the right to be forgotten in EU law. The GDPR, in fact, enshrined in Article 17 a "Right to erasure (right to be forgotten)", stating that "The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay." The same provision clarifies that the right to erasure applies when: "(a) the personal data are no longer necessary in relation to the purposes for which they were collected or otherwise processed; (b) the data subject withdraws consent on which the processing is based [...]; (c) the data subject objects to the processing [...] (d) the personal data have been unlawfully processed." Moreover, pursuant to Article 17(2) GDPR, "Where the controller has made the personal data public and is obliged pursuant to paragraph 1 to erase the personal data, the controller, taking account of available technology and the cost of implementation, shall take reasonable steps, including technical measures, to inform controllers which are processing the personal data that the data subject has requested the erasure by such controllers of any links to, or copy or replication of, those personal data." 
While Article 17(3) GDPR indicates that the right to erasure "shall not apply to the extent that processing is necessary: (a) for exercising the right of freedom of expression and information; (b) for compliance with legal obligations [...] in the public interest" and for a number of other selected reasons related to public health, scientific or historical research and legal defense, the GDPR seemed to follow the ECJ's view that the data subject's right to request the removal of on-line content "override[s], as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in finding that information upon a search relating to the data subject's name." 35
32 Google Spain, supra note 30, at 41. 33 Id., at 80. 34 Id., at 88. 35 Id., at 97. 36 See also Jud Mathews, Some Kind of Right, in this Special Issue.
The case law of the ECJ in the field of the right to be forgotten has also become a model that national courts have looked at, including in Germany. 36 In November 2019, the Bundesverfassungsgericht,
Germany's federal constitutional court, delivered a judgment applying EU law on the right to be forgotten in a dispute between a private citizen and a broadcasting corporation regarding the request to delist links to online information on the applicant. 37 As the court pointed out, since the matter fell under legislation fully harmonized by EU law, the standards of EU fundamental rights protection applied and could be examined by the court. 38 Ultimately, the court rejected the constitutional complaint, ruling that the ordinary courts had correctly balanced competing rights. 39 In another judgment delivered on the same day, 40 however, the Bundesverfassungsgericht also articulated an autonomous, domestic standard of the right to be forgotten, holding that where EU law "allowed for different legislative designs at Member State level" German constitutional rights would be the standard used by the court in adjudicating constitutional complaints, unless it is exceptionally shown that EU law requires a uniform standard of fundamental rights protection, or that German constitutional rights do not meet the minimum standard of protection required by the Charter. 41 In the specific case, therefore, the court ruled that the request by a private citizen to obtain the erasure from the website of the newspaper Der Spiegel of articles concerning him had to be upheld in light of the constitutional right of personality, which includes a right to be forgotten. As the court clarified, the right to be forgotten had to be balanced with freedom of information and freedom of expression, 42 yet "the realities of information technology and the dissemination of information on the internet attach a new legal dimension to the requirement that time be considered as a relevant contextual factor characterizing information." 43 As such, the court concluded that the constitutional complaint was well founded, as "it would have been necessary to consider whether it was possible, and required, to impose an obligation on the media outlet sued before the ordinary courts to take reasonable precautions upon being notified by the complainant, to provide at least some protection against search engines retrieving the articles in question in the context of searches related to the complainant's name, without unduly restricting the general retrievability and accessibility of the articles as such." 44
41 Id. 42 Id., at II.2. 43 Id., at II.2.a). 44 Id., at II.4.
D. Extraterritorial Application of EU Data Protection Law
Over the past few years, the EU framework for data protection has progressively extended its reach outside the jurisdiction of the EU. On the one hand, the ECJ has reviewed the standard of data protection existing in third countries to decide whether this was sufficient to authorize the transfer of personal data from the EU to such third country, essentially pressuring the latter to raise its domestic standards to meet the EU benchmark. In the Schrems judgment, 45 in particular, the ECJ reviewed the European Commission Safe Harbor decision, which recognized US data protection standards as providing an adequate level of protection and therefore authorized private companies to transfer data across the Atlantic, 46 and struck that down, ruling that in light of the revelations of US mass surveillance, it appeared that law and practice in force in the US did not ensure an adequate protection of personal data. 47 The ECJ ruling, which was prompted by a Facebook user disgruntled with the limited protection that his data would receive in the US, forced the EU and the US to renegotiate further guarantees on the protection of personal data, including limitations on the access and use of personal data transferred for national security
purposes, as well as oversight and redress mechanisms that provide safeguards for those data to be effectively protected against unlawful interference and the risk of abuse, which were codified in a new Commission adequacy decision called Privacy Shield. 48 This has been challenged as insufficient, 49 but it likely represents a step forward compared to Safe Harbor, suggesting that EU data protection law can indeed create pressures on third countries to raise their standards through international negotiations. 50 On the other hand, the ECJ has directly subjected economic operators incorporated outside the EU to EU data protection rules when they deal with data collected within the EU. The point was already made in Google Spain: here the ECJ ruled that in light of the objective of EU data protection law "of ensuring effective and complete protection of the fundamental rights and freedoms of natural persons, and in particular their right to privacy, with respect to the processing of personal data, [the notion of establishment] cannot be interpreted restrictively" 51 and therefore concluded that Google, despite being incorporated in the US, was subjected to the Data Protection Directive, also because it operated a subsidiary in Spain, which managed advertising on a Spanish-localized search engine. In fact, the GDPR has further expanded this state of affairs, 52 as Article 3(2) (entitled "Territorial Scope") now foresees that "This Regulation applies to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to: (a) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or (b) the monitoring of their behaviour as far as their behaviour takes place within the Union." The extraterritorial reach of EU data protection law has led to important challenges, notably with regard to the right to be forgotten, as the ECJ has attempted to work out the circumstances in which requests to remove online content bind businesses established overseas, and with world-wide effect. In particular, the matter was at the heart of two recent ECJ judgments concerning Google and Facebook. In September 2019, in Google v. Commission Nationale de l'Informatique et des Libertés (CNIL), 53 the ECJ reviewed a sanction imposed on Google by the French data protection authority for failure to remove content worldwide, from all its website domains, in pursuance of a right to be forgotten request. Google had challenged the CNIL sanction claiming that the removal of online content exclusively on the French version of its search engine sufficed. In its ruling, the ECJ, also taking note of the geo-blocking technology put in place by Google, 54 upheld the challenge. The ECJ admitted that the GDPR objective "is to guarantee a high level of protection of personal data throughout the [EU]" 55 and that "a de-referencing carried out on all the versions of a search engine would meet that objective in full." 56 However, the ECJ emphasized that "numerous third States do not recognise the right to de-referencing or have a different approach to that right," 57 and claimed that it was not apparent from the GDPR that the intent of the EU legislator was "to confer a scope on the rights enshrined in those provisions which would go beyond the territory of the Member States and
[to impose on] Google [...] a de-referencing obligation which also concerns the national versions of its search engine that do not correspond to the Member States." 58 Hence, the ECJ concluded that "where a search engine operator grants a request for de-referencing pursuant to those provisions, that operator is not required to carry out that de-referencing on all versions of its search engine, but on the versions of that search engine corresponding to all the Member States, using, where necessary, measures which, while meeting the legal requirements, effectively prevent or, at the very least, seriously discourage an internet user conducting a search from one of the Member States on the basis of a data subject's name from gaining access, via the list of results displayed following that search, to the links which are the subject of that request." 59
54 Id., at 42. 55 Id., at 54. 56 Id., at 55. 57 Id., at 59.
Yet, if Google v. CNIL seemed to draw a limit to the extraterritorial effects of the right to be forgotten, the ECJ decision in Eva Glawischnig-Piesczek v. Facebook, delivered just a week later, in October 2019, 60 counter-balanced that. Although this case did not explicitly concern the right to be forgotten, it dealt with an analogous problem, namely the question of whether a digital platform could be forced to remove world-wide content posted online which was regarded as defamatory. Mrs Eva Glawischnig-Piesczek, an Austrian politician, had obtained a court order to remove insulting language against her posted on Facebook, but the latter had disabled access to the content initially published only in Austria, prompting the applicant to sue for breach of EU data protection law. In its judgment, the ECJ, after discussing the obligations of digital providers under the e-Commerce Directive, 61 examined whether EU law imposed "any limitation, including a territorial limitation, on the scope of the measures which Member States are entitled to adopt" vis-à-vis information society services, 62 and ruled that EU law "does not preclude those injunction measures from producing effects worldwide." 63 While the ECJ cautioned that "in view of the global dimension of electronic commerce, the EU legislature considered it necessary to ensure that EU rules in that area are consistent with the rules applicable at international level" 64 and that therefore "[i]t is up to Member States to ensure that the measures which they adopt and which produce effects worldwide take due account of those rules," 65 the ECJ judgment's consequence was to open the door for Austrian courts to impose on Facebook obligations "to remove information covered by the injunction or to block access to that information worldwide within the framework of the relevant international law." 66
E. The Challenges of Extraterritoriality in Comparative Perspective
The problem of extraterritorial application of domestic laws in the digital realm is not exclusive to the EU. In fact, as Jennifer Daskal has pointed out, there is now an increasing number of cases adjudicated by courts world-wide which raise "critically important questions about the appropriate scope of global injunctions, the future of free speech on the internet and the prospect for harmonization (or not) of rules regulating online content across borders." 67 In particular, other recent disputes involving US technology companies and decided in the jurisdictions of Canada and Australia have vividly exposed the challenges of an extraterritorial effect of data protection law.
In 2017, in the case Google Inc. v. Equustek Solutions Inc., the Canadian Supreme Court ordered Google to remove worldwide from its search engine the links to a company's website violating intellectual property rights. 68 Equustek, a Canadian IT company, had sued Google claiming that the search engine had failed to de-list from its browser the websites of a competitor, which had breached Equustek intellectual property rights by misappropriating its trademarks. In June 2017, the Canadian Supreme Court, deciding on the matter on appeal, ruled in favour of Equustek and granted the injunction sought, ordering Google to delist from its browser worldwide all the websites that harmed Equustek. According to the Court, a global enforcement of the delisting request was necessary to prevent harm to the plaintiff. 69 However, Google subsequently sought an injunction before the US District Court for Northern California to prevent enforcement in the US of the Canadian Supreme Court order as incompatible, among other things, with the US First Amendment guaranteeing freedom of speech and principles of international comity. In November 2017, the US District Court granted Google the injunction sought, effectively nullifying the effects of the Canadian Supreme Court ruling in the US. 70 However, despite the favourable ruling of the Californian court, in April 2018, Google was eventually unsuccessful in its claims before the Supreme Court of British Columbia. The Canadian court was adamant about its refusal to consider Google's demand to limit the scope of its delisting order. 71 Similarly, also in 2017, in the case X v. Twitter, the Supreme Court of New South Wales in Australia ordered the Californian company and its Irish subsidiary to remove, at the global level, confidential information posted by a troll. 72 The applicant X lamented the publication of confidential financial information leaked on Twitter by an anonymous troll from various accounts, including one that used the name of the company's CEO. Twitter was initially reluctant to suspend the offending accounts, but was eventually ordered by the court to provide the identity of the troll and to remove all illegal content published online. In contrast to the Canadian Supreme Court in the Google Inc. v. Equustek Solutions Inc. case, the Australian court did not consider principles of international comity nor did it carry out a comparative analysis of foreign law on breach of confidence. 73 Yet, in this case too, the Supreme Court of New South Wales did not hesitate to serve an extraterritorial injunction to remedy the detrimental situation of the domestic applicant.
Similarly to the Canadian and Australian courts, the ECJ in its recent decisions in Google v. CNIL and Glawischnig-Piesczek v. Facebook had to confront the question of how far removal orders should reach beyond national borders. From an EU perspective, such an extraterritorial application of EU law can be explained by the need to ensure an effective protection of fundamental rights and limit the risk of circumvention. 77 The enforcement of the right to be forgotten is exemplary. We now live in a global digital society, which transcends national boundaries. One's right to data protection may be violated even where a search engine shows a specific result in a country which is not that of residence of the data subject concerned. In principle, enforcing that right exclusively within the territory of the EU would not make any sense, given the ease with which data can be accessed world-wide. A violation of such a right would occur if an individual, for example residing in France, after lawfully requesting to delist specific search results, discovered that those links are still referenced not only in France but, say, also in Germany or in the US, with no difference. And this consideration implies that just as uniform standards of data protection should apply within the EU, EU data protection rights should also have extraterritorial effects outside the EU.
Nevertheless, the extraterritorial application of EU data protection law poses a series of challenges, which were vividly exposed in the Google Inc. v. Equustek Solutions Inc. case. Asserting domestic data protection standards outside a jurisdiction's borders may clash with duties of international comity and the need to respect diversity of legal systems. In fact, the balance between the right to be forgotten, freedom of information and free speech is struck differently in jurisdictions around the world, including states that share the same belief in democracy, the rule of law and human rights. Moreover, as the recent judgments of the Canadian and US courts point out, the enforcement of data protection standards outside a jurisdiction's borders may ultimately be nullified by opposing claims. In the Canadian Google litigation, in particular, the US federal district court blocked the application of the Canadian Supreme Court ruling, de facto limiting the application of the Canadian writ in the US jurisdiction.
In light of these risks, the recent judgments of the ECJ in Google v. CNIL and Glawischnig-Piesczek v. Facebook can be seen as a pragmatic solution, which tries to navigate between the Scylla of data protection imperialism and the Charybdis of digital sovereignty. In fact, it is clear that tensions between these opposing trends are only likely to increase. While criticism has been raised at the 'imperialist' attitude of EU data protection law, 78 other recent developments, including efforts by countries around the world to claim sovereign control over data, expose the risk of a fragmentation of the digital world. Different claims to digital sovereignty are emerging not only in the US 79 or the EU for that matter, 80 but also in illiberal regimes around the world, 81 potentially generating a progressive erosion of fundamental rights online. In this context, the development of transnational legal frameworks, at least among democratic regimes, seems to be the necessary path to preserve data protection rights beyond borders.
In 2017, China passed a new National Intelligence Law obliging companies to collaborate with Chinese intelligence agencies. The act de facto requires companies incorporated in China to disclose data that may have been collected and stored abroad to Chinese authorities: see Yi-Zheng Lian, supra note 9. In the context of the trade war with the US, the legislation produced strong criticism, the US lamenting that a similar obligation could put their national security in danger.
F. Conclusion
The EU is at the forefront of data protection worldwide. The GDPR represents the most comprehensive and advanced regulatory framework for data privacy to date, and the ECJ has developed a progressive case law to protect human rights in the digital age, including outlining a right to be forgotten. These EU law principles are increasingly being taken as a comparative example, including by national courts. For example, the German Bundesverfassungsgericht, as we have seen, recently introduced in German law a right to be forgotten modelled on the EU template, recognizing in this way, at least in principle, the role of EU law as a leading paradigm in the field of data protection. Yet, EU data protection law generally, and the right to be forgotten specifically, are increasingly facing a question of jurisdictional boundaries. From an EU perspective, the extraterritorial enforcement of EU fundamental rights is regarded as a way to guarantee a full and effective protection and prevent the risk of circumvention. However, the reach of EU data protection law beyond the EU borders also raises a series of challenges, clashing with the principles of international comity and respect for global diversity.
The issue of extraterritorial application of EU data protection law was at the heart of two recent judgments decided by the ECJ: in Google v. CNIL and Glawischnig-Piesczek v. Facebook, the ECJ dealt with the question of whether the right to be forgotten and the obligation to remove defamatory content applied worldwide or not. In the first case, the ECJ ruled that de-referencing was required only on the versions of the search engine corresponding to the member states, while in the second it held that EU law did not preclude a worldwide injunction. In both cases, however, the ECJ showed awareness of the cross-border implications of its decisions and of the need to recognize transnational diversity and international comity, thus finding pragmatic solutions to modulate the effects of EU data protection law beyond the EU borders.
As this article has shown, the challenges that the ECJ was facing were not unique to Europe. Other jurisdictions such as Australia and Canada were also confronted with the dilemma of how to protect digital rights across borders. Theoretically, contemporary digital society, being global, would require worldwide rules. However, the extraterritorial application of data protection standards raises significant challenges. In fact, the protection of privacy in the digital age increasingly exposes a tension between efforts by legal systems to impose their high standards of data protection outside their borders, which can be regarded as a form of 'imperialism', and sovereigntist claims by other legal systems to assert their own power over data.
In this context, states should seek to develop common international law frameworks, which promote transnational standards of data protection. Admittedly, this will not be an easy task. However, this is something that should be explored, particularly among liberal democracies, and at least in the transatlantic context. 82 Despite differences, jurisdictions such as the EU, Canada and Australia, but also the US, share a similar concern for the need to protect privacy, which puts them at odds with developments in other countries, such as China or Russia. Developing transnational rules for the protection of digital privacy, including outlining mutually acceptable claims to the right to be forgotten, represents therefore the best road forward to make sure that privacy remains a protected right, also in the digital era.
|
v3-fos-license
|
2022-01-13T16:16:45.046Z
|
2022-01-01T00:00:00.000
|
246078295
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1420-3049/27/2/444/pdf",
"pdf_hash": "c09cbf77287a3d79e7e0ea187e9f144d40796740",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44454",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "2762ec93ed5836c657518ca2be040da09ec19a3e",
"year": 2022
}
|
pes2o/s2orc
|
A Systematic Review of Orthosiphon stamineus Benth. in the Treatment of Diabetes and Its Complications
(1) Background: Orthosiphon stamineus Benth. is a traditional medicine used in the treatment of diabetes and chronic renal failure in southern China, Malaysia, and Thailand. Diabetes is a chronic metabolic disease and the number of diabetic patients in the world is increasing. This review aimed to systematically review the effects of O. stamineus in the treatment of diabetes and its complications and the pharmacodynamic material basis. (2) Methods: This systematic review was conducted following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), using the databases ScienceDirect, PubMed, and Web of Science. (3) Results: Thirty-one articles related to O. stamineus and diabetes were included. The mechanisms of O. stamineus in the treatment of diabetes and its complications mainly included inhibiting α-amylase and α-glucosidase activities, antioxidant and anti-inflammatory activities, regulating lipid metabolism, promoting insulin secretion, ameliorating insulin resistance, increasing glucose uptake, promoting glycolysis, inhibiting gluconeogenesis, promoting glucagon-like peptide-1 (GLP-1) secretion and antiglycation activity. Phenolic acids, flavonoids and triterpenoids might be the main components responsible for the hypoglycemic effects of O. stamineus. (4) Conclusion: O. stamineus could be an antidiabetic agent to treat diabetes and its complications. However, further study is needed on the pharmacodynamic material basis and the mechanisms of its effective constituents.
O. stamineus is a popular Chinese folk medicine and also a traditional medicine of the Dai nationality of Yunnan Province in China [15]. It has a long history of use in the treatment of diabetes and some kidney diseases. Modern pharmacological studies show that O. stamineus has many pharmacological activities, including antioxidant, anti-inflammatory, kidney-protective, antibacterial, anti-tumor, immunoregulatory and, especially, effective antidiabetic activities [15,16]. It has been used clinically for the treatment of diabetes and chronic renal failure, and it is also reported to have good therapeutic effects on some diabetic complications. In this paper, the research on O. stamineus was reviewed, providing a reference for the application of O. stamineus and further research in the treatment of diabetes and its complications.
Literature Search Results
After searching in the three databases by using the chosen keywords, a total of 281 studies were obtained. Of the 281 records, 181 were from ScienceDirect, 35 from PubMed, and 65 from Web of Science. Then, 153 records were removed for the following reasons: duplicate studies, reviews, book chapters, patents, meeting papers, and non-English language papers. By reviewing the titles and abstracts, 88 records were excluded because they had no relevance to the scope of this review. The remaining 40 records were read fully, and 31 were included in this systematic review. The flowchart of the literature search and selection process is shown in Figure 1 and the 31 articles are summarized in Table 1.
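As a quick arithmetic check on the selection process described above, the following short Python sketch reproduces the record counts reported in this section; the variable names and the script itself are purely illustrative and are not part of the original review.

```python
# Sanity check of the record counts reported for the literature search
# (counts taken from the text of this review; variable names are illustrative).
sciencedirect, pubmed, web_of_science = 181, 35, 65

identified = sciencedirect + pubmed + web_of_science       # 281 records identified
screened = identified - 153                                # 128 left after removing duplicates, reviews, etc.
full_text = screened - 88                                  # 40 records read in full
included = 31                                              # articles included in the synthesis
excluded_full_text = full_text - included                  # 9 excluded at the full-text stage

print(identified, screened, full_text, excluded_full_text)  # 281 128 40 9
```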
Hypoglycemic Activity
Hyperglycemia is a main symptom of diabetes, and could cause damage to organs and tissues in the body. It has been proved in some studies that different extracts of O. stamineus could decrease blood glucose levels.
In a recent study, the 95% ethanol elution fraction (95% EEF) of the 80% ethanol extract (0.68 g/kg, 0.34 g/kg and 0.17 g/kg) reduced blood glucose levels in an oral glucose tolerance test in normal C57BL/6J mice after 10-day administration of the extract [39]. The ethanol extract of O. stamineus (0.2 and 0.4 g/kg) obviously reduced the fasting blood glucose level in high-fat-diet (HFD) C57BL/6 mice after 8-week administration of the extract [40]. In another study, rats were administered the 50% ethanol extract orally and, after ten minutes, they were loaded with starch or sucrose. The extract (1 g/kg) reduced blood glucose levels significantly after starch loading in both normal and diabetic rats. The same dose of the extract also lowered blood glucose levels significantly after sucrose loading in normal rats [41]. Rats administered the chloroform extract and its sub-fraction 2 (1 g/kg) orally and then loaded subcutaneously with glucose one hour later showed significantly reduced blood glucose levels in normal animals [42]. The same sub-fraction (1 g/kg) also caused a significant decrease in blood glucose levels in diabetic rats after 14-day administration of the sub-fraction [34]. Normal and diabetic rats were administered the aqueous extract orally and, after ten minutes, they were loaded with glucose. In normal rats, the aqueous extract (0.5 g/kg and 1.0 g/kg) reduced the plasma glucose concentration by 15% and 34%, respectively, after one hour of glucose loading. The maximum reduction of the extract (0.5 g/kg and 1.0 g/kg) in diabetic rats was 21% and 24% after 210 min of glucose loading. Besides, diabetic rats treated with the extract (0.5 g/kg) for 14 days also showed a reduction in plasma glucose concentration [43].
Antioxidant Activity
Hyperglycemia and excessive free fatty acids can lead to the production of large amounts of free radicals, such as reactive oxygen species (ROS) and reactive nitrogen species (RNS). These free radicals can cause oxidative stress, impair the structures and functions of islet β-cells, and cause insulin secretion deficiency. Besides, they can also lead to insulin resistance by affecting multiple insulin signaling pathways. The antioxidant activity of O. stamineus is related to protecting islet cells and reducing insulin resistance. Antioxidant activity is commonly tested by the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical-scavenging assay, the ferric ion reducing antioxidant power (FRAP) assay, and the 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulphonate) (ABTS) assay. The activity of superoxide dismutase (SOD) and the level of malondialdehyde (MDA) are also used to determine antioxidant activities. SOD can scavenge free radicals and MDA is the end product of lipid oxidation [44,45].
From these studies, it could be seen that the aqueous extract, ethanol extract, 70% ethanol extract, methanol extract, 50% methanol extract, 70% acetone extract, and chloroform extract all had free radical-scavenging activities in different assays.
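For readers unfamiliar with how such radical-scavenging results are expressed, the sketch below shows the conventional calculation of percent DPPH scavenging from control and sample absorbances (commonly read at 517 nm). The absorbance values are hypothetical and are not taken from the studies cited above.

```python
def radical_scavenging_pct(a_control: float, a_sample: float) -> float:
    """Percent radical-scavenging activity from DPPH absorbance readings.

    a_control: absorbance of the DPPH solution without extract
    a_sample:  absorbance of the DPPH solution with extract
    """
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical absorbance values read at 517 nm
print(f"{radical_scavenging_pct(0.85, 0.32):.1f}% scavenging")  # -> 62.4% scavenging
```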
O. stamineus ethanol extract (200 and 400 mg/kg) enhanced SOD activity and reduced MDA level in the liver homogenate of the high-fat diet group. Thus, O. stamineus extract might counteract oxidative stress in the liver [40]. The 50% ethanol extracts of O. stamineus roots, stems, and leaves (50 µg/mL) scavenged intracellular ROS and significantly increased cell viability under oxidative stress in IPEC-J2 cells. They could also decrease the MDA level in jejunal homogenates compared to the high-fat group. The extracts of roots and leaves significantly increased the jejunal SOD activity of mice [53].
Anti-Inflammatory Activity
In the pathogenesis of diabetes, inflammatory factors, such as interleukin (IL)-1β, IL-8, tumor necrosis factor (TNF)-α, and inducible nitric oxide synthase (iNOS), are important factors related to insulin sensitivity. They interfere with insulin signal transduction by participating in the insulin signaling pathway, leading to insulin resistance. They also possibly damage islet β-cells. In addition, inflammatory factors also interact with oxidative stress, further aggravating insulin resistance. Therefore, anti-inflammatory activity is essential to attenuate the inflammatory response, protect islet cells, and improve insulin resistance. It is typically assessed through the levels of inflammatory factors and the inhibition of nitric oxide (NO) production in cells [54,55].
Auricular swelling was inhibited by treatment with the ethanol extract, ethyl acetate (EtOAc) fraction, and aqueous fraction in mice with acute inflammation induced by xylene. The inhibition ratios were 48.2%, 63.3%, and 46.0% at the dose of 200 mg/kg. Some compounds isolated from the EtOAc fraction, orthosiphol M, orthosiphonone A, orthosiphol B, neoorthosiphol A, orthosiphol D, fragransin B1, sinensetin and 5,6,7,4′-tetramethoxyflavone, also markedly repressed the observed auricular swelling at the dose of 50 mg/kg. Besides, some of these compounds inhibited pro-inflammatory cytokine production, such as the levels of TNF-α, IL-1β, and IL-8, in lipopolysaccharide (LPS)-induced HK-2 cells [56]. The isolated compounds clerodens A-D were studied for anti-inflammatory activity against LPS-induced NO production in RAW264.7 macrophages. The results showed that clerodens A-D had inhibitory activities with IC50 values of 18.9 ± 1.2, 14.7 ± 0.48, 12.4 ± 1.5, and 6.8 ± 0.92 µmol/L, respectively, slightly higher than that of the positive control aminoguanidine [16]. Neoorthosiphonone A, isolated from O. stamineus, showed obvious inhibitory activity on NO production in LPS-activated macrophage-like J774.1 cells with an IC50 value of 7.08 µmol/L, which was more potent than the positive control NG-monomethyl-L-arginine (L-NMMA) [57]. The isolated siphonols A-E also inhibited NO production in LPS-activated macrophage-like J774.1 cells [58].
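The IC50 values quoted above are typically estimated by fitting a sigmoidal dose-response model to percent-inhibition data. The following Python sketch illustrates one common approach using a four-parameter logistic (Hill) curve; the concentrations and inhibition values are invented for illustration and do not come from the cited experiments.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) model: % inhibition as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

# Hypothetical dose-response data: concentration (µmol/L) vs. % inhibition of NO production
conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
inhibition = np.array([8.0, 22.0, 55.0, 80.0, 93.0])

params, _ = curve_fit(four_param_logistic, conc, inhibition,
                      p0=[0.0, 100.0, 10.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 ~ {ic50:.1f} µmol/L (Hill slope {hill:.2f})")
```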
Regulate Lipid Metabolism
Diabetic patients often have abnormal lipid metabolism. In the pathogenesis of diabetes, disorders in lipid metabolism increase the levels of free fatty acids and total triglycerides (TG), damaging islet β-cells and leading to insulin resistance in other tissue cells. Because of insulin resistance, the serum levels of TG, total cholesterol (TC), and low-density lipoprotein cholesterol (LDL-C) increase, while the level of high-density lipoprotein cholesterol (HDL-C) decreases [59]. In addition, leptin and adiponectin, which are secreted from adipocytes, are also associated with insulin resistance. Leptin can antagonize insulin and produce insulin resistance, while adiponectin can improve insulin sensitivity by increasing fatty acid oxidation and glucose uptake in skeletal muscle cells [60,61].
The inhibitory effect of O. stamineus ethanol extract against pancreatic lipase in vitro was determined by using orlistat as the positive control. The IC50 value of the extract was 5.7 mg/mL, compared to the value of orlistat (0.1 mg/mL). In an in vivo study, mice were fed an HFD. The ethanol extract reduced the serum levels of TG, TC, LDL-C, and lipase. It also decreased the leptin level and increased the adiponectin level. Histological examination showed that the extract also attenuated excessive accumulation of fat in liver tissues. These results all showed that the extract might regulate lipid metabolism in adipocytes and downregulate lipid accumulation in the liver [40]. The aqueous extract lowered the TC level and increased the ghrelin level in diabetic rats [62]. The aqueous extract also lowered the TG level and increased the HDL-C level in diabetic rats [43]. 3-Hydroxybutyrate (3-HBT) and acetoacetate are representative metabolites of fatty acid metabolism, so their levels might be related to lipid metabolism in the liver. In the 1H-NMR spectroscopic analysis of urine in Azam's study, the aqueous extract showed a remarkable drop in acetoacetate and 3-HBT levels. The reason might be that the extract inhibited the abnormal lipid and fatty acid metabolism and re-established energy metabolism [63].
Inhibit the Activities of α-Amylase and α-Glucosidase
α-Amylase and α-glucosidase are the two key enzymes in the digestion and absorption of carbohydrates in the body. α-Amylase breaks down long-chain carbohydrates, and α-glucosidase hydrolyzes glucoside bonds to release glucose. They are directly involved in the metabolism of starch and glycogen. Therefore, inhibiting the activities of α-amylase and α-glucosidase can reduce the release of glucose from carbohydrate hydrolysis, slow down the absorption of glucose in the small intestine, and effectively lower the postprandial blood glucose level [44,64,65]. The inhibitory activities of these enzymes are usually tested in vitro.
Promote Insulin Secretion, Ameliorate Insulin Resistance, Enhance Insulin Sensitivity
Insulin is a hormone secreted by islet β-cells. It can control blood glucose level and regulate glucose and lipid metabolism. Insulin promotes glucose uptake and utilization in the liver, muscle, and adipose cells to reduce postprandial blood glucose level. However, these functions can be achieved only by combining with insulin receptors (IR). IRs are widely distributed in the body. Muscle, fat, and liver are all insulin target organs or tissues. Insulin resistance occurs when insulin receptors become less sensitive to insulin due to various factors [69]. Normally, glucose is transported and utilized mainly under the stimulation of insulin through a variety of insulin signaling pathways, such as the phosphoinositide 3-kinase/protein kinase B (PI3k/Akt) pathway. Insulin binds to IRs on the cell membrane, causing tyrosine phosphorylation of insulin receptor substrates (IRS), activating the PI3k/Akt signaling pathway and increasing glucose uptake. Any abnormality in insulin signaling pathway may lead to insulin resistance [70,71]. In addition, protein tyrosine phosphatase 1B (PTP1B) is also associated with insulin resistance. High PTP1B activity can lead to the dephosphorylation of IR and IRS tyrosine and weaken insulin signal transduction, leading to insulin resistance [72,73]. In some investigations, it has been proved that the extract of O. stamineus and its active components promoted insulin secretion, improved insulin resistance, and enhanced insulin sensitivity.
Inhibition of PTP1B activity might improve IR and IRS, leading to the improvement of insulin resistance and enhancement of insulin sensitivity. Hence, five diterpenes isolated from O. stamineus were tested for PTP1B inhibitory activity. The IC50 values of siphonol B, orthosiphols B, G, I, and N were 8.18 ± 0.41, 9.84 ± 0.33, 3.82 ± 0.20, 0.33 ± 0.07, and 1.60 ± 0.17 µmol/L, respectively, compared to the positive control, ursolic acid (3.42 ± 0.26 µmol/L). The inhibition types of these five diterpenes on PTP1B were mixed-competitive, non-competitive, non-competitive, competitive, and uncompetitive, respectively [74]. The hexane fraction of 70% ethanol extract slightly increased insulin secretion in both basal and glucose-stimulated states, and also elevated the mRNA expression of insulin and pancreatic duodenal homeobox-1 (PDX-1) in INS-1 cells under normal and high-glucose conditions. PDX-1 is an essential transcription factor for insulin gene expression. Its main functions are to promote the proliferation of islet β-cells, inhibit the apoptosis of islet β-cells, and regulate the transcription of insulin genes. The fraction also increased p-PI3K levels and Akt phosphorylation in INS-1 cells [75]. The ethanol extract reduced the levels of homeostasis model assessment of insulin resistance (HOMA-IR) index in HFD-induced rats [40].
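Because the HOMA-IR index is referred to above without its definition, the following minimal Python helper shows the standard Matthews formula on which the index is based; the example numbers are hypothetical and are not drawn from the cited rat study.

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """HOMA-IR = (fasting glucose [mmol/L] x fasting insulin [µU/mL]) / 22.5 (Matthews et al.)."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

# Hypothetical example: glucose 7.0 mmol/L and insulin 15 µU/mL give HOMA-IR ~ 4.67,
# well above the value of about 1.0 expected for a normoglycemic, insulin-sensitive subject.
print(round(homa_ir(7.0, 15.0), 2))
```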
From these studies, it could be seen that the hexane fraction of 70% ethanol extract could promote insulin secretion and enhance insulin sensitivity. Besides, the ethanol extract and five diterpenes isolated from O. stamineus could both enhance insulin sensitivity.
Reduce the Absorption of Intestinal Glucose, Increase Glucose Uptake by Peripheral Cells
Hyperglycemia is a typical characteristic of diabetes. Carbohydrates are absorbed by intestinal epithelial cells in the form of glucose after digestion by enzymes. The uptake and utilization of glucose mainly exist in peripheral tissues or cells, such as liver, muscle, and adipose cells. Therefore, reducing the absorption of intestinal glucose and promoting glucose uptake by peripheral cells are very important to reduce blood glucose [76].
Promote Glycolysis, Inhibit Gluconeogenesis
Gluconeogenesis and glycolysis are two metabolic mechanisms to ensure glucose homeostasis. Glycolysis is the process of breaking down glucose to produce pyruvate, which is one of the most important pathways of glucose metabolism in the body. Increasing the expression of glucokinase and pyruvate kinase can promote glycolysis and reduce blood glucose. Gluconeogenesis is the process of converting non-sugar substances into glucose. Liver is the main organ for gluconeogenesis. Both insulin and glucagon can regulate liver gluconeogenesis through different signaling pathways [79,80].
In 1 H-NMR spectroscopic analysis of urine of diabetic rats, aqueous extract increased the levels of pyruvate, succinate, and citrate compared to the model group. Pyruvate is an end product of glycolysis, and it can enter tricarboxylic acid (TCA) cycle. High glucose level inhibits glycolytic enzymes and decreases the generation of pyruvate, thereby reducing the TCA cycle activity, and thus may contribute to mitochondrial dysfunction. Mitochondrial dysfunction may induce diabetes by affecting insulin secretion of islet β-cells and aggravating insulin resistance. Citrate and succinate are the TCA cycle intermediates. Thus, the increased levels of pyruvate, citrate, and succinate showed that the aqueous extract might reduce blood glucose level by increasing glycolysis and decreasing gluconeogenesis, and it might also modulate TCA cycle and improve mitochondrial dysfunction [63].
Increase the Level of GLP-1
GLP-1 is released from intestinal cells and maintains blood glucose homeostasis by increasing insulin secretion and inhibiting glucagon secretion [81]. The aqueous extract of O. stamineus (0.1 g/100 g of body weight) increased the GLP-1 level in diabetic rats, both nonpregnant and pregnant [62].
Mechanisms of O. stamineus in the Treatment of Diabetic Complications
Chronic hyperglycemia may cause damage to vessels and microvessels, and also damage tissues and organs in the body, leading to diabetic nephropathy, diabetic retinopathy, diabetic foot, diabetic peripheral neuropathy, and diabetic cardiovascular complications.
These diabetic complications are related to oxidative stress, nonenzymatic glycation of protein, and inflammatory factors [82].
In addition to antioxidant and anti-inflammatory activity, O. stamineus also has antiglycation effects. The glycation process begins with the formation of Amadori products through chemical reactions between amino acid residues in proteins and reducing sugars. These products are transformed into advanced glycation end products (AGEs) by dehydration and rearrangement reactions. The accumulation of AGEs is toxic to cells and tissues, leading to diabetic complications. The aqueous extract of O. stamineus had an inhibitory capacity (more than 70%) on the formation of AGEs in a bovine serum albumin (BSA)-glucose system [51].
Diabetic nephropathy (DN) is one of the main complications of diabetes. It may lead to renal failure. The O. stamineus aqueous extract lowered the 24 h urine albumin excretion rate (UAER), glomerular filtration rate (GFR), the index of kidney weight to body weight and MDA level in kidney tissues of diabetic rats. It also improved the activity of SOD in renal tissues. Under a light microscope, O. stamineus obviously improved the lesions of renal tissues. The protective effect of O. stamineus on diabetic rats may be related to antioxidative activity, anti-inflammatory activity, and inhibition of the proliferation of mesangial cells [83].
Toxicity
Even though most traditional herbal medicines are generally recognized as safe, their safety and toxicity still need to be evaluated. Toxicology studies have led to a better understanding of human physiology and drug interactions with the body.
There was no cytotoxicity effect of O. stamineus aqueous extract on 1.1B4, 3T3-L1, and WRL-68 cells viability during 24 h treatment at a concentration of 1.0 mg/mL. In fish embryo acute toxicity (FET) test on zebrafish, there was also no mortality on zebrafish embryos at 1.0 mg/mL [50].
Several studies have examined the possible toxicity of O. stamineus in rats. In an acute toxicity study, the aqueous, 50% ethanol and ethanol extracts of O. stamineus (5000 mg/kg) were administered orally to rats for 14 days. In other acute studies, the methanol extract and the 50% ethanol extract were also administered to rats. In the subchronic toxicity study, the 50% ethanol extract was administered orally at doses of 1250, 2500, and 5000 mg/kg for 28 days. There was no mortality or any sign of toxicity during the experimental periods. There was also no significant difference in body weight, organ weights, haematological parameters, or microscopic appearance of the organs from the treatment groups. Thus, the extracts at these doses would not cause any acute or subchronic toxicity or organ damage in rats. The oral median lethal dose (LD50) might be more than 5000 mg/kg body weight [84][85][86].
The O. stamineus aqueous extract (0, 250, 500, 1000, and 2000 mg/kg/day) did not change pregnancy body weight gain, food and water consumption, and caused no other sign of maternal toxicity in pregnant rats on gestation days 6-20. There was no embryo lethality and prenatal growth retardation either [87].
The genotoxicity of O. stamineus aqueous extract was evaluated by the Salmonella/ microsome mutation assay and the mouse bone marrow micronucleus test. The result showed that O. stamineus extract was not toxic to Salmonella strains and did not have any potential to induce gene mutations in Salmonella strains. The aqueous extract was also not toxic to the mouse bone marrow. Thus, the use of O. stamineus aqueous extract had no genotoxic risk [88].
Clinical Applications
The medicinal plant O. stamineus has been used clinically in China for many years to treat kidney diseases and to improve renal function, including in diabetic nephropathy, chronic nephritis, and chronic renal failure [89].
In a clinical study, the effective rate of the prescription of Cordyceps sinensis and O. stamineus on diabetic nephropathy was 76.7% among 30 patients. The prescription could decrease the levels of fasting and postprandial blood glucose, glycosylated hemoglobin (HbA1c), urinary protein and serum creatinine, and increase the endogenous creatinine clearance rate [90]. In another clinical study, the effective rate of the capsule of Cordyceps sinensis and O. stamineus on diabetic nephropathy was 83.3% among 30 patients. The capsule could decrease the levels of urine protein, serum creatinine, and urea nitrogen. O. stamineus might therefore have a good effect on diabetic nephropathy by improving renal function [91]. The Chongcaoshencha capsules used in the literature were prepared by Heilongjiang University of Chinese Medicine and contained 1 g Cordyceps sinensis, 40 g raw Astragalus membranaceus, 2 g leeches, 10 g rhubarb, 15 g Alpinia katsumadai, and 20 g O. stamineus. Each capsule was 0.45 g [92,93].
Phenolic Acids
There are almost 50 phenolic acids and their derivatives isolated from O. stamineus up to now. The structures of antidiabetic phenolic acids are summarized in Figure 2 and the mechanisms of these compounds are summarized in Table 2. Ferulic acid, methyl caffeate, vanillic acid, protocatechuic acid and rosmarinic acid lower blood glucose level in vivo [94][95][96]. Salvianolic acid C and rosmarinic acid have been proved to have inhibitory activity on α-glucosidase [97,98]. Vanillic acid and rosmarinic acid are both antioxidants [94,99]. Rosmarinic acid also has anti-inflammatory activity, reducing NO production and the levels of pro-inflammatory cytokines such as TNF-α, IL-1β and IL-6 [100,101]. Protocatechuic acid and rosmarinic acid regulate the lipid metabolism in diabetic animals. Protocatechuic acid lowers TC, TG and LDL-C levels and increases HDL-C level [102,103]. Methyl caffeate increases hepatic glycogen level and reduces gluconeogenesis through lowering glucose-6-phosphatase activity. It also increases glucose uptake by higher GLUT4 expression [96]. Rosmarinic acid also increases the glucose uptake of muscle cells through activation of adenosine 5′-monophosphate-activated protein kinase (AMPK) phosphorylation and glucose transporter-4 (GLUT4) expression. It promotes insulin secretion and improves insulin resistance by inhibiting dipeptidyl peptidase-4 (DPP-4) and PTP1B [104,105]. Methyl caffeate and rosmarinic acid could protect islet β-cells [96]. Ferulic acid and rosmarinic acid also have anti-glycation effects that decrease the formation of AGEs [95,106]. For diabetic complications, vanillic acid ameliorates diabetic liver dysfunction by lowering the levels of aspartate aminotransferase (AST) and alanine aminotransferase (ALT), and it also decreases the levels of urea, uric acid, and creatinine in the kidney [106]. Protocatechuic acid and rosmarinic acid reduce histological changes in kidney tissues in diabetic nephropathy animals [103,107]. Ferulic acid and protocatechuic acid increase the activity of SOD in cardiac tissues and decrease cardiomyocyte apoptosis to treat diabetic cardiomyopathy [95,108]. For diabetic retinopathy, lithospermic acid B improves oxidative stress in retinal tissues, and prevents vascular leakage and basement membrane thickening in retinal capillaries [109].
Flavonoids
To date, more than 20 flavonoids have been isolated from O. stamineus. Most of them are flavones, especially polymethoxy substituted flavones. The structures of antidiabetic flavonoids are summarized in Figure 3 and the mechanisms of these compounds are summarized in Table 3. Isoquercitrin, baicalein, and naringenin lower blood glucose level in vivo. They also increase SOD activity, lower MDA level, and regulate lipid metabolism [112][113][114]. Sinensetin and prunin have inhibitory activity on α-glucosidase [68,115]. Prunin improves insulin resistance through inhibitory activity against PTP1B and the expression of Akt and PI3K [115]. Isoquercitrin and baicalein increase mRNA expression of IR, Akt, and PI3K to enhance insulin sensitivity [113,116]. Prunin and isoquercitrin increase glucose consumption of hepatocytes [115,116]. Baicalein promotes glucose uptake and glycolysis by inhibiting the expression of glucose-6-phosphatase, and inhibits gluconeogenesis of hepatocytes [112]. Naringenin increases the expression of GLUT-4 to promote glucose uptake [117,118]. Besides, isoquercitrin lowers DPP-IV mRNA levels and increases GLP-1 levels. Isoquercitrin and naringenin protected pancreatic tissues in a histopathological study and improved pancreatic necrosis [116,119].
Table 3. Effects and mechanisms of antidiabetic flavonoids isolated from O. stamineus.
1. Baicalein
- Diabetes: lowers blood glucose and MDA levels; inhibits gluconeogenesis of hepatocytes; decreases the expression of glucose-6-phosphatase; increases SOD activity; promotes glucose uptake and glycolysis; increases the expression of PI3K and Akt; increases hepatic glycogen level [112,113,127,128]
- Diabetic nephropathy: lowers HOMA-IR level; restores normal renal function; mitigates renal oxidative stress; lowers the level of NF-κB; ameliorates the structural changes in renal tissues; normalizes the levels of serum pro-inflammatory cytokines and liver function enzymes [122]
2. Isoquercitrin
- Diabetes: lowers blood glucose, serum HOMA-IR and DPP-IV mRNA levels; increases glucose uptake of hepatocytes; increases mRNA expression of Akt and PI3K; increases SOD, HDL-C, insulin and GLP-1 levels; improves pancreatic atrophy and necrosis [116]
- Diabetic liver dysfunction: reduces serum ALT and AST levels; protects hepatocyte architecture and prevents hepatic necrosis; suppresses apoptosis and promotes regeneration of hepatocytes
3. Naringenin
- Diabetes: lowers blood glucose, MDA and glycosylated hemoglobin levels; lowers the activities of ALT and AST in serum; increases serum insulin levels; increases the expression of GLUT-4; protects the pancreatic tissues in histopathological study; normalizes lipid concentrations in the serum [114,117-119]
- Diabetic liver dysfunction: decreases lipid peroxidation level in liver; decreases the number of vacuolated liver cells and degree of vacuolisation [120]
- Diabetic nephropathy: decreases the 24 h urinary protein, kidney index and glomerular area; increases creatinine clearance rate; decreases lipid peroxidation level in kidney tissue; increases the activity of SOD; decreases renal IL-1β, IL-6 and TNF-α levels; lowers NF-κB p65 expression in kidney; improves kidney histology; reduces apoptosis [120,121,123,129]
- Diabetic retinopathy: increases levels of neuroprotective factors, tropomyosin-related kinase B and synaptophysin in diabetic retina; ameliorates the levels of apoptosis regulatory proteins in diabetic retina [126]
Triterpenoids
There are almost 20 triterpenoids isolated from O. stamineus. The structures of antidiabetic triterpenoids are summarized in Figure 4 and the mechanisms of these compounds are summarized in Table 4. α, β-Amyrin, arjunolic acid, betulinic acid, tormentic acid, oleanolic acid, and ursolic acid lower blood glucose level in vivo. Among them, oleanolic acid and ursolic acid have an inhibitory activity on α-glucosidase [130,131]. Arjunolic acid, oleanolic acid, and ursolic acid have antioxidant activities to scavenge free radicals, while oleanolic acid and ursolic acid also have anti-inflammatory activities [100,[132][133][134]. α, β-Amyrin, arjunolic acid, tormentic acid, oleanolic acid, and ursolic acid lower the levels of TC, TG, LDL-C and leptin, increase serum HDL-C level to regulate the lipid metabolism [133,[135][136][137][138]. Maslinic acid, oleanolic acid, and ursolic acid improve insulin resistance and enhance insulin sensitivity respectively by a higher expression of IR, IRS, Akt, and PIP1B inhibitory activity [132,133,139]. Tormentic acid promotes glucose uptake by increasing the levels of phospho-AMPK and GLUT4 in skeletal muscle [136]. Oleanolic acid inhibits gluconeogenesis by decreasing expression of glucose-6-phosphatase [133]. Maslinic acid and ursolic acid increase the hepatic glycogen accumulation [135,139]. α, β-Amyrin, arjunolic acid, and betulinic acid protect islet cells and decrease cell death [138,140,141]. Oleanolic acid has anti-glycation effects to inhibit the formation of AGEs products [142]. In diabetic liver dysfunction, arjunolic acid reduces the secretion of ALT and the overproduction of ROS and RNS [141]. While oleanolic acid decreases ROS production, NF-κB expression and IL-1β, IL-6 and TNF-α levels in liver, and increases the activity of SOD [133]. Arjunolic acid and tormentic acid both reduce histological changes in liver tissues [136,141]. With regard to diabetic nephropathy, arjunolic acid, ursolic acid, and betulinic acid improve the lesions of renal tissues [143]. Maslinic acid, ursolic acid, and oleanolic acid decrease ROS and MDA levels and increase SOD activity in renal tissues [133,144,145]. Arjunolic acid, ursolic acid, and betulinic acid reduce the ratio of kidney weight to body weight, the levels of blood urea nitrogen (BUN), and creatinine. Ursolic acid also lowers urine albumin excretion [141,146,147]. Maslinic acid also increases Na + excretion rate and glomerular filtration rate, and decreases creatinine level [145,148]. For diabetic cardiomyopathy, ursolic acid decreases the levels of AGEs, TNF-α, IL-1β, and ROS, increases the activity of SOD in myocardium [149]. Arjunolic acid reduces histological changes in cardiac tissues and reduces the number of apoptotic cells [137]. 
Table 4 (excerpt). Effects and mechanisms of antidiabetic triterpenoids isolated from O. stamineus.
- Inhibitory activity on α-glucosidase and on the formation of Amadori products, which are early products of nonenzymatic glycosylation [150]
6. Maslinic acid
- Diabetes: increases hepatic glycogen accumulation; inhibits glycogen phosphorylase activity; induces the phosphorylation of IRβ and Akt [139]
- Diabetic nephropathy: increases the activity of antioxidant enzymes in renal tissues; increases Na+ output, Na+ excretion rates and fractional excretion of Na+; increases glomerular filtration rate; decreases plasma aldosterone and creatinine levels; diminishes the expression of GLUT1 and GLUT2 in the diabetic kidney [145,148]
7. Oleanolic acid
- Diabetes: lowers blood glucose, LDL and free fatty acid levels; increases insulin level; inhibitory activity on α-glucosidase, α-amylase and PTP1B; inhibits the formation of AGE products; improves insulin tolerance; inhibits gluconeogenesis; increases serum HDL level; decreases the levels of IL-1β, IL-6 and TNF-α; increases the activity of SOD; improves glycogen level by increasing the expression of Akt and decreasing the expression of glucose-6-phosphatase; increases the expression of IR and IRS-1 [131,133,142,151]
- Diabetic liver dysfunction: decreases the levels of IL-1β, IL-6 and TNF-α in liver; decreases the expression of NF-κB; decreases ROS production; increases the activity of SOD [133,152]
8. Tormentic acid
- Diabetes: lowers blood glucose, leptin and total lipid levels; increases the protein contents of phospho-AMPK and GLUT4 in skeletal muscle [136]
- Diabetic liver dysfunction: reduces histological changes in liver tissues; decreases the mRNA level of glucose-6-phosphatase in liver tissues; increases the protein content of hepatic phospho-AMPK
9. Ursolic acid
- Diabetes: lowers blood glucose, MDA and LDL levels; inhibits α-amylase and α-glucosidase activity; increases SOD activities; decreases TNF-α and IL-1β levels; increases liver glycogen level; decreases the expression of PTP-1B protein; increases the expression of IRS-2 protein [130,132,135]
- Diabetic cardiomyopathy: decreases levels of AGEs, TNF-α, IL-1β and ROS; increases the activity of SOD in myocardium [149]
- Diabetic nephropathy: lowers the levels of BUN, creatinine and MDA; lowers urine albumin excretion, renal oxidative stress level and NF-κB activity; prevents the expression of JNK; improves renal structural abnormalities [144,146,147]
Discussion
O. stamineus is a potential natural product to treat diabetes and its complications. The mechanisms of O. stamineus in the treatment of diabetes and its complications are summarized in Figure 5. The antioxidant activity, anti-inflammatory activity, anti-glycation activity and lipid metabolism regulation are all related to antidiabetic activity. O. stamineus protects the islet cells, enhances insulin sensitivity, and improves diabetic complications by lowering the levels of free radicals and inflammatory factors. It also improves insulin resistance by lowering the levels of free fatty acids and leptin. The lower level of AGEs is able to improve diabetic complications. Besides, O. stamineus enhances insulin sensitivity and improves insulin resistance through other pathways, such as the PI3k/Akt signaling pathway, the AMPK pathway, and the JNK pathway (summarized in Figure 6) [153][154][155]. The PTP1B activity might also be related to the PI3k/Akt pathway. Some diterpenes isolated from O. stamineus had inhibitory activity on PTP1B. The hexane fraction of the 70% ethanol extract and some flavonoids (prunin, isoquercitrin, baicalein) can increase the expression of PI3K and Akt. Rosmarinic acid and tormentic acid could increase the expression of phospho-AMPK. Arjunolic acid and ursolic acid prevent the expression of JNK. In addition, O. stamineus reduces glucose absorption from the small intestine by inhibiting the activities of α-amylase and α-glucosidase, promotes insulin secretion by elevating PDX-1 level, and raises GLP-1 level. It could also promote glycolysis and inhibit gluconeogenesis by inhibiting glucose-6-phosphatase. However, some current experiments have only studied the antidiabetic effects and outcomes of O. stamineus, such as reducing blood glucose level, improving insulin level, increasing glucose uptake, and reducing glucose absorption, without further exploration of its mechanisms and pathways. The relationship between O. stamineus extracts and the AMPK and JNK pathways should be further studied. Until now, investigations on the antidiabetic effects and mechanisms of O. stamineus have concentrated mainly on the effects of extracts, especially the 50% ethanol extract and the aqueous extract. The effects of extracts might differ because the levels of some metabolites vary in plants from different places. Through literature research, it was seen that phenolic acids, flavonoids, and triterpenoids might be the main active components for treating diabetes and its complications. To identify the major bioactive compounds responsible for the antidiabetic effects, bioassay-guided isolation should be used. The mechanisms of the pure compounds also require further study, and there might be synergistic effects between these constituents.
In China and some southeastern Asian countries, O. stamineus has been used as traditional medicine for the treatment of diabetes and some kidney diseases for a long time. In recent years, by means of modern science and techniques, there have been more and more investigations in the mechanisms of O. stamineus in the treatment of diabetes and diabetic complications. However, most experiments are in vitro or using experimental animal models in vivo, which may be different from the effects and mechanisms of O. stamineus in the human body. In addition, clinical research is very limited. O. stamineus was only used to treat chronic renal diseases in clinical, such as chronic glomerulonephritis [156]. But because O. stamineus might be a good antidiabetic candidate to reduce blood glucose levels and alleviate kidney injury, it could also be designed to study the clinical treatment of diabetic nephropathy in the future.
At present, diabetes is treated with oral hypoglycemic drugs and insulin injections. The glucose-lowering drugs include α-glucosidase inhibitors (acarbose, miglitol), insulin sensitizers (metformin, thiazolidinediones, biguanides), insulin secretagogues (sulfonylureas), etc. However, most of these medications may have side-effects, including hypoglycemia, weight gain, liver damage, gastrointestinal disturbance, lactic acidosis, edema, headache, dizziness, anemia, nausea, and even death. Besides, long-term use of insulin may decrease insulin receptor sensitivity, resulting in insulin resistance [157]. In the future, glucoselowering drugs might be combined with O. stamineus to find out if they can reduce these side-effects and increase antidiabetic effects. Besides, some other natural products with antidiabetic activities can also be used with O. stamineus to test the combined effects in the treatment of diabetes and diabetic complications. Cordyceps sinensis, Astragalus membranaceus, Rheum officinale, and leech have been combined with O. stamineus to treat diabetic nephropathy as a Chinese traditional medicine prescription [91].
Methods
This review was performed and reported according to PRISMA guidelines [37,38]. The flowchart of selected articles is shown in Figure 1.
Search Strategy
Three databases (ScienceDirect, PubMed and Web of Science) were used to search relevant articles using the terms "((Clerodendranthus spicatus) OR (Orthosiphon stamineus) OR (Orthosiphon aristatus)) AND ((diabetes) OR (antidiabetic) OR (hypoglycemic) OR (diabetic complications))". No time restriction was used. The initial search included 281 articles. The results of ScienceDirect, PubMed, and Web of Science were respectively exported as RIS, NBIB, and ISI files. All obtained files were then imported into EndNote X9 to generate a library.
Eligibility Criteria
The research included in this review met the following criteria: 1. the study reported hypoglycemic activity or the treatment of diabetes and its complications of O. stamineus extract or its isolated compounds, 2. the study reported other biological activities related to diabetes treatment, such as antioxidant and anti-inflammatory activities, of O. stamineus extract or its isolated compounds, 3. the study reported the toxicity of O. stamineus extract or its isolated compounds.
The exclusion criteria of this review were as follows: 1. reviews, book chapters, patents, meeting papers, 2. non-English language papers, 3. lack of access to the full-text of the paper, 4. no relevance to the plant O. stamineus or the field of diabetes and its complications. Besides, duplicate articles were also removed.
Data Extraction
Thirty-one studies met the criteria, and the data were extracted into Microsoft Excel 2007 sheet and inserted to Table 1. The information gathered from the studies included: 1. name of the first author, 2. publication year, 3. the tested substance, 4. the study design and protocol, 5. main results.
Conclusions
In conclusion, O. stamineus is a potential agent to treat diabetes and diabetic complications. The extracts of O. stamineus, including 50% ethanol extract, chloroform extract, aqueous extract, and hexane extract, could be used to treat diabetes through mechanisms including inhibiting the activities of α-amylase and α-glucosidase, antioxidant and anti-inflammatory activities, regulating lipid metabolism, promoting insulin secretion, ameliorating insulin resistance, enhancing insulin sensitivity, increasing glucose uptake, promoting glycolysis, inhibiting gluconeogenesis, promoting the secretion of GLP-1, and antiglycation effects. The mechanisms of insulin resistance might also be related to the PI3k/Akt signaling pathway, the AMPK pathway, and the JNK pathway. The aqueous extract could also be used for diabetic nephropathy treatment. Besides, some main active components, such as rosmarinic acid, ferulic acid, methyl caffeate, vanillic acid, protocatechuic acid, isoquercitrin, baicalein, naringenin, arjunolic acid, betulinic acid, tormentic acid, oleanolic acid, ursolic acid, maslinic acid, siphonols B, orthosiphols B, G, I, and N also had good effects in the treatment of diabetes and its complications. However, it needs further study on pharmacodynamic substance basis and the mechanisms of effective constituents.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2017-08-10T23:21:14.528Z
|
2012-09-26T00:00:00.000
|
6606070
|
{
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
],
"oa_license": "CCBYSA",
"oa_status": "GREEN",
"oa_url": "https://basepub.dauphine.psl.eu/bitstream/123456789/6307/3/max_vertex_k-coveragecah_2.pdf",
"pdf_hash": "8970f8d727ff75ddfd17fc7293cc65dfd8261044",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44457",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"sha1": "ca183432f4a6055a136afbd4d1a80cd9f1918303",
"year": 2012
}
|
pes2o/s2orc
|
Efficient Algorithms for the max k -vertex cover Problem
We first devise moderately exponential exact algorithms for max k-vertex cover, with time-complexity exponential in n but with polynomial space-complexity, by developing a branch and reduce method based upon the measure-and-conquer technique. We then prove that there exists an exact algorithm for max k-vertex cover with complexity bounded above by the maximum among c^k and γ^τ, for some γ < 2, where τ is the cardinality of a minimum vertex cover of G (note that max k-vertex cover ∉ FPT with respect to parameter k unless FPT = W[1]), using polynomial space. We finally study approximation of max k-vertex cover by moderately exponential algorithms. The general goal of the issue of moderately exponential approximation is to catch up on polynomial inapproximability, by providing algorithms achieving, with worst-case running times importantly smaller than those needed for exact computation, approximation ratios unachievable in polynomial time.
Introduction
In the max k-vertex cover problem, a graph G(V, E) with |V| = n vertices 1, ..., n and |E| edges (i, j) is given together with an integer value k < n. The goal is to find a subset K ⊂ V with cardinality k, that is |K| = k, such that the total number of edges covered by K is maximized. In its decision version, max k-vertex cover can be defined as follows: "given G, k and ℓ, does G contain k vertices that cover at least ℓ edges?". max k-vertex cover is NP-hard (it contains the minimum vertex cover problem as a particular case), but it is polynomially approximable within approximation ratio 3/4, while it cannot be solved by a polynomial time approximation schema unless P = NP. The interested reader is referred to [19,30] for more information about approximation issues for this problem.
In the literature, we often find this problem under the name partial vertex cover problem. It is mainly studied from a parameterized complexity point of view (see [17] for information on fixed-parameter (in)tractability). A problem is fixed-parameter tractable with respect to a parameter t if it can be solved (to optimality) with time-complexity O(f(t)p(n)), where f is a function that depends on the parameter t and p is a polynomial in the size n of the instance. In what follows, when dealing with fixed-parameter tractability of max k-vertex cover, we shall use the notation max k-vertex cover(t) to denote fixed-parameter tractability with respect to parameter t. Parameterized complexity issues for max k-vertex cover were first studied in [3], where it is proved that partial vertex cover is fixed-parameter tractable with respect to parameter ℓ, next in [28], where it is proved that it is W[1]-hard with respect to parameter k (another proof of the same result can be found in [9]), and finally in [31], where the fixed-parameter tractability results of [3] are further improved.
Let us also quote the paper by [24], where it is proved that in apex-minor-free graphs, partial vertex cover can be solved with complexity that is subexponential in k.
The seminal Courcelle's Theorem [13] (see also [21,20] as well as [37] for a comprehensive study around this theorem) assures that decision problems defined on graphs that are expressible in terms of monadic second-order logic formulae are fixed-parameter tractable when the treewidth (see footnote 1) of the input graph G, denoted by w, is used as parameter. Courcelle's Theorem can also be extended to a broad class of optimization problems [1]. As max k-vertex cover belongs to this class, it is fixed-parameter tractable with respect to w. In most cases, "rough" application of this theorem involves very large functions f(w) (see the definition of fixed-parameter tractability given above).
In [34], it is proved that, given a nice tree decomposition, there exists a fixed-parameter algorithm (based upon dynamic programming) with respect to parameter w that solves max k-vertex cover in time O(2^w k(w^2 + k) · |I|), where |I| is the number of nodes of the nice tree decomposition, and in exponential space. In other words, max k-vertex cover(w) ∈ FPT, but the fixed-parameter algorithm of [34] uses exponential space. Let us note that in any graph G, denoting by τ the size of a minimum vertex cover of G, it holds that w ≤ τ. So, max k-vertex cover(τ) ∈ FPT too, but through the use of exponential space (recall that, as adopted above, max k-vertex cover(τ) denotes the max k-vertex cover problem parameterized by the size τ of a minimum vertex cover).
Very frequently, a serious problem with fixed-parameter tractability with respect to w is that it takes too much time to compute the "nice tree decomposition" that also derives the value of w. More precisely, this takes time O*(1.7549^n) (notation O*(·) ignores polynomial factors) by making use of exponential space, and time O*(2.6151^n) by making use of polynomial space [25]. Note that the problem of deciding if the treewidth of a graph is at most w is fixed-parameter tractable and takes time O(2^(O(w^3)) n) [33].
Footnote 1: A tree decomposition of a graph G(V, E) is a pair (X, T) where T is a tree on vertex set V(T), the vertices of which we call nodes, and X = ({X_i : i ∈ V(T)}) is a collection of subsets of V such that: (i) ∪_{i∈V(T)} X_i = V, (ii) for each edge (v, w) ∈ E, there exists an i ∈ V(T) such that {v, w} ⊆ X_i, and (iii) for each v ∈ V, the set of nodes {i : v ∈ X_i} forms a subtree of T. The width of a tree decomposition (X, T) is max_{i∈V(T)} |X_i| − 1; the treewidth of a graph G is the minimum width over all tree decompositions of G.
Dealing with the solution of max k-vertex cover by exact algorithms with running times (exponential) functions of n, let us note that a trivial optimal algorithm for max k-vertex cover takes time O*(C(n, k)) = O*(n^k) and polynomial space, producing all the subsets of V of size k. This turns into a worst-case O*(2^n) time (since C(n, k) ≤ 2^n, with equality for k = n/2). An improvement of this bound is presented in [9], where an exact algorithm with complexity O*(n^(ω⌈k/3⌉+O(1))) was proposed, based upon a generalization of the O*(n^(ωt)) algorithm of [35] for finding a 3t-clique in a graph, where ω = 2.376. This induces a complexity O*(n^(0.792k)), but exponential space is needed. As far as we know, no exact algorithm with running time O*(γ^n), for some γ < 2, is known for max k-vertex cover.
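To make this baseline concrete, the following Python sketch (our illustration, not code from the paper) enumerates all subsets of size k and counts the covered edges, i.e., the trivial O*(C(n, k)) algorithm mentioned above:

```python
from itertools import combinations

def max_k_vertex_cover_bruteforce(n, edges, k):
    """Trivial exact algorithm: try every subset K of size k and return the
    one covering the most edges. O*(C(n,k)) time, polynomial space."""
    best_set, best_covered = None, -1
    for K in combinations(range(n), k):
        chosen = set(K)
        covered = sum(1 for (u, v) in edges if u in chosen or v in chosen)
        if covered > best_covered:
            best_set, best_covered = chosen, covered
    return best_set, best_covered

# Example: a 4-cycle; k = 2 (two opposite vertices) covers all 4 edges.
print(max_k_vertex_cover_bruteforce(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 2))
```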
In this paper, we first devise an exact branch and reduce algorithm based upon the measure-and-conquer paradigm of [22] (Section 2), requiring running time O*(2^(((∆−1)/(∆+1))n)), where ∆ denotes the maximum degree of G, and polynomial space. The algorithm is then tailored to graphs with maximum degree 3, inducing a running time O*(1.3339^n) (Section 4). In Section 3, we devise a fixed-parameter algorithm with respect to parameter τ where, as mentioned above, τ is the cardinality of a minimum vertex cover of G, that works in time O*(2^τ) and needs only polynomial space. By elaborating a bit more on this result, we then show that the time-complexity of this algorithm is indeed either O*(γ^τ) for some γ < 2 or O*(c^k), for some c > 2. In other words, this algorithm either works in time better than 2^τ or it is fixed-parameter with respect to the size k of the desired cover. Finally, we show that the technique used for proving that max k-vertex cover(τ) ∈ FPT can be used to prove inclusion in the same class of many other well-known combinatorial problems. A corollary of the inclusion of max k-vertex cover(τ) in FPT is that max k-vertex cover in bipartite graphs can be solved in time O*(2^(n/2)) ≃ O*(1.414^n). Finally, in Section 5, we address the question of approximating max k-vertex cover within ratios "prohibited" for polynomial time algorithms, by algorithms running with moderately exponential complexity. The general goal of this issue is to cope with polynomial inapproximability, by developing algorithms achieving, with worst-case running times significantly lower than those needed for exact computation, approximation ratios unachievable in polynomial time. This approach has already been considered for several other paradigmatic problems such as minimum set cover [7,15], min coloring [2,6], max independent set and min vertex cover [5], min bandwidth [16,26], etc. Similar issues arise in the field of FPT algorithms, where approximation notions have been introduced, for instance, in [10,18]. In this framework, we particularly quote [32], where it is proved that, although not in FPT, max k-vertex cover(k) is approximable by an FPT (with respect to k) approximation schema, where the function f(k) (in the time-complexity of this schema) is quite large, i.e., around something like O*(k^(2k^2)).
2 An O*(2^(((∆−1)/(∆+1))n))-time polynomial space algorithm in general graphs
In what follows, we denote by α_j the total number of vertices adjacent to j that have been discarded in the previous levels of the search tree. We denote by d_j the degree of vertex j and by N(j) the set of vertices adjacent to j, that is, the neighborhood of j. Notice that, whenever a branch on a vertex j occurs, for each l ∈ N(j), if j is selected then d_l is decreased by one unit, as edge (j, l) is already covered by j. Alternatively, j is discarded: correspondingly, d_l is not modified and α_l is increased by one unit. We propose in this section a branch and reduce approach based on the measure-and-conquer paradigm (see for instance [22]). Consider a classical binary branching scheme on some vertex j where j is either selected or discarded. Contrarily to the classical branch-and-reduce paradigm, where for each level of the search tree we define as fixed those vertices that have already been selected or discarded and as free the other vertices, when using measure-and-conquer we do not count in the measure the fixed vertices, namely the vertices that have been either selected or discarded at an earlier stage of the search tree, and we count the free vertices h with a weight w_h. The vertex j to be branched on is the one with the largest coefficient c_j, where the weights of the vertices are strictly increasing in their c_j coefficients.
We so get recurrences on the time T (p) required to solve instances of size p, where the size of an instance is the sum of the weights of its vertices.Since initially p = n, the overall running time is expressed as a function of n.This is valid since when p = 0, there are only vertices with weight w [0] in the graph and, in this case, the problem is immediately solved by selecting the k − γ vertices with largest α j (if γ < k vertices have been selected so far).Correspondingly free vertices j with no adjacent free vertices receive weight w [0] = 0.
We claim that max k-vertex cover can be solved with running time O*(2^(((∆−1)/(∆+1))n)) by the following algorithm, called MAXKVC: select j such that c_j is maximum and branch according to the following exhaustive cases: 1. if c_j ≥ 3, then branch on j and either select or discard j; 2. else, c_j ≤ 2 and MAXKVC is polynomially solvable.
Proof.To prove the above statement, we first show that the branch in step 1 can be solved with complexity O * (2 ) and then we show that step 2 is polynomially solvable.Consider step 1.We always branch on the vertex j with largest c j = c max ∆ where c j 3 and either we select or discard j.If we select j, vertex j is fixed and c max vertices (the neighbors of j) decrease their degree (and correspondingly their coefficient) by one unit.Similarly, if we discard j, vertex j is fixed and c max vertices (the neighbors of j) decrease their coefficient as their degree remains unchanged but their α parameter is increased by one unit.Hence, the recurrence becomes: By constraining the weights to satisfy the inequality: the previous recurrence becomes in the worst-case: As c max ∆, where the equality occurs when α j = 0, the above recurrence becomes, in the worst-case, Summarizing, to handle graphs with maximum degree ∆, we need to guarantee that the recurrences )), ∀i ∈ 3, . . ., ∆ (as c j 3), and the constraints: are satisfied simultaneously.This corresponds to a non linear optimization problem of the form: We so get performances 1.4142 n , for ∆ = 3, 1.5157 n , for ∆ = 4, 1.5866 n , for ∆ = 5, 1.6405 n , for ∆ = 6, 1.6817 n , for ∆ = 7, or 1.7143 n , for ∆ = 8.Interestingly enough, for all these values of ∆, the complexity corresponds to O * (2 ∆−1 ∆+1 n ).Indeed, this is not accidental.By setting: (5) we can see that constraints (2) and ( 3) are satisfied.To see that inequalities (2) are satisfied, notice that: For the general recursion with i 4, we have to show that w Also, to see that inequalities (3) are satisfied, notice that equations (4) imply: while equations ( 5) and ( 6) imply Finally, notice that such values of w [j] s satisfy constraints (1) that now correspond to ∆ − 2 copies of the inequality α 2 where the minimum value of α is obviously given by 2 We consider now step 2. For c j = c max 2, max k-vertex cover can be seen as a maximum weighted k-vertex cover problem in an undirected graph G where each vertex j has a weight α j and a degree d j = c j and the maximum vertex degree is 2.But this problem has been shown to be solvable in O(n) time by dynamic programming in [36].
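As an illustration of the select/discard branching that MAXKVC is built on, the following sketch branches on a free vertex of maximum selection gain d_j + α_j; it deliberately omits the measure-and-conquer weighting and the polynomial base case used in the analysis above, so it only mirrors the branching structure, not the paper's exact algorithm or running-time guarantee:

```python
def max_cover_branching(adj, k):
    """Simplified select/discard branch-and-reduce for max k-vertex cover.
    adj: dict vertex -> set of neighbours. Returns the maximum number of
    edges coverable by k vertices. Per free vertex we keep its free degree
    and alpha, the number of already-discarded neighbours."""
    def rec(free, alpha, budget):
        if budget == 0 or not free:
            return 0
        # Branch on the free vertex with the largest selection gain.
        j = max(free, key=lambda v: len(free[v]) + alpha[v])
        gain = len(free[j]) + alpha[j]
        if gain == 0:          # no coverable edge is left
            return 0
        # Select j: all edges from j to free and to discarded vertices get covered.
        free_sel = {v: nbrs - {j} for v, nbrs in free.items() if v != j}
        alpha_sel = {v: a for v, a in alpha.items() if v != j}
        take = gain + rec(free_sel, alpha_sel, budget - 1)
        # Discard j: its free neighbours keep their edges but see alpha grow by one.
        free_dis = {v: nbrs - {j} for v, nbrs in free.items() if v != j}
        alpha_dis = {v: a + (1 if v in free[j] else 0)
                     for v, a in alpha.items() if v != j}
        leave = rec(free_dis, alpha_dis, budget)
        return max(take, leave)

    return rec({v: set(ns) for v, ns in adj.items()}, {v: 0 for v in adj}, k)

# Example: path 0-1-2, k = 1 -> selecting vertex 1 covers both edges.
print(max_cover_branching({0: {1}, 1: {0, 2}, 2: {1}}, 1))  # 2
```

Each call fixes one vertex, so the recursion explores the binary select/discard tree whose size the measure-and-conquer weights are designed to bound.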
3 max k-vertex cover and fixed-parameter tractability
Denote by (a − b − c), a branch of the search tree where vertices a and c are selected and vertex b is discarded.Consider the vertex j with maximum degree ∆ and neighbors l 1 , . . ., l ∆ .As j has maximum degree, we may assume that if there exists an optimal solution of the problem where all neighbors of j are discarded, then there exists at least one optimal solution where j is selected.Hence, a branching scheme (called basic branching scheme) on j of type: can be applied.Hence, the following easy but interesting result holds.
Proposition 1. The max k-vertex cover problem can be solved to optimality in O*(∆^k).
Proof.Consider vertex j with maximum degree ∆ and neighbors l 1 , . . ., l ∆ where the basic branching scheme of type ] can be applied.Then, the last two branches can be substituted by the branch (l 1 − l 2 − . . .− l ∆−1 − j) as, if all neighbors of j but one are not selected, any solution including the last neighbor l ∆ but not including j is not better than the solution that selects j.Now, one can see that the basic branching scheme generates ∆ nodes.On the other hand, we know that in each branch of the basic branching scheme at least one vertex is selected.As, at most k nodes can be selected, the overall complexity cannot be superior to O * (∆ k ).
Corollary 1. max k-vertex cover(k) in bounded degree graphs is in FPT.
Note that Corollary 1 can also be proved without reference to Proposition 1. Indeed, in any graph of maximum degree ∆, denoting by ℓ the value of an optimal solution for max k-vertex cover, ℓ ≤ k∆. Then, taking into account that max k-vertex cover(ℓ) ∈ FPT, Corollary 1 is immediately derived. Now, let V′ ⊂ V be a minimum vertex cover of G and let τ be the size of V′, that is, τ = |V′|. Correspondingly, let I = V \ V′ be a maximum independent set of G and set α = |I|. Notice that V′ can be computed, for instance, in O*(1.2738^τ) time by means of the fixed-parameter algorithm of [12], using polynomial space. Let us note that we can assume k ≤ τ. Otherwise, the optimal value ℓ for max k-vertex cover would be equal to |E|: one could compute a minimum vertex cover V′ in G and then arbitrarily add k − τ vertices without changing the value of the optimal solution.
Theorem 2. The following two assertions hold for max k-vertex cover: 1. there exists an O*(2^τ)-time algorithm that uses polynomial space; 2. there exists an algorithm running in time O*(max{γ^τ, c^k}), for two constants γ < 2 and c > 4, and needing polynomial space.
Proof. For proving item 1, fix some minimum vertex cover V′ of G and consider some solution K for max k-vertex cover, i.e., some set of k vertices of G. Any such set is distributed over V′ and its associated independent set I = V \ V′. Fix now an optimal solution K* of max k-vertex cover and denote by S′ the subset of V′ that belongs to K* (S′ can eventually be the empty set) and by I′ the part of K* belonging to I. In other words, K* = S′ ∪ I′ with S′ ⊆ V′ and I′ ⊆ I. Note that once S′ (of size k′ = |S′|) is fixed, it can be completed into K* in polynomial time. Indeed, for each vertex i belonging to I we simply need to compute (in linear time) the total number e_i of edges (i, j) for all j ∈ V′ \ S′. Then, I′ is obtained by selecting the k − k′ vertices of I with largest e_i value. So, the following algorithm can be used for max k-vertex cover: 1. compute a minimum vertex cover V′ (using the algorithm of [11]); 2. for every subset S′ ⊆ V′ of cardinality at most k, take the k − |S′| vertices of V \ V′ with the largest degrees to V′ \ S′; denote by I′ this latter set; 3. return the best among the sets S′ ∪ I′ so-computed (i.e., the set that covers the maximum number of edges).
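The following sketch mirrors these three steps; the minimum vertex cover routine here is only a brute-force placeholder standing in for the exact O*(1.2738^τ) algorithm cited above, and all function names are ours:

```python
from itertools import combinations

def edges_covered(S, edges):
    return sum(1 for (u, v) in edges if u in S or v in S)

def min_vertex_cover(vertices, edges):
    """Placeholder for step 1: smallest vertex cover by brute force.
    The paper uses the exact O*(1.2738^tau) algorithm here instead."""
    for size in range(len(vertices) + 1):
        for C in combinations(vertices, size):
            C = set(C)
            if all(u in C or v in C for (u, v) in edges):
                return C
    return set(vertices)

def max_k_cover_via_vertex_cover(vertices, edges, k):
    """O*(2^tau)-style algorithm: enumerate S' inside a minimum vertex cover V',
    then complete greedily from the independent set I = V \\ V'."""
    V_cover = min_vertex_cover(vertices, edges)
    I = [v for v in vertices if v not in V_cover]
    best, best_val = None, -1
    for size in range(min(k, len(V_cover)) + 1):
        for S in combinations(V_cover, size):
            S = set(S)
            # Each vertex of I only sees edges towards V' \ S (I is independent).
            def degree_to_rest(i):
                return sum(1 for (u, v) in edges
                           if (u == i and v in V_cover - S) or (v == i and u in V_cover - S))
            I_sorted = sorted(I, key=degree_to_rest, reverse=True)
            K = S | set(I_sorted[:k - size])
            val = edges_covered(K, edges)
            if val > best_val:
                best, best_val = K, val
    return best, best_val

# Example: star on {0,1,2,3} plus edge (3,4); a minimum vertex cover is {0,3}.
print(max_k_cover_via_vertex_cover([0, 1, 2, 3, 4],
                                   [(0, 1), (0, 2), (0, 3), (3, 4)], 2))
```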
Step 1 takes time O*(1.2738^τ), while step 2 has total running time O*(Σ_{i=1}^{k} C(τ, i)) ≤ O*(2^τ). Note that, from item 1 of Theorem 2, it can be immediately derived that max k-vertex cover can be solved to optimality in O*(2^(((∆−1)/∆)n)) in graphs of maximum degree ∆. Indeed, if a graph G has maximum degree ∆, then for the maximum independent set we have α ≥ n/∆. Also, we can assume that G is not a clique on ∆ + 1 vertices (note that max k-vertex cover is polynomial in cliques). In this case, G can be colored with ∆ colors [8]. In such a coloring the cardinality of the largest color is at least n/∆ and, a fortiori, so is the cardinality of a maximum independent set (since each color is an independent set). Consequently, τ ≤ n − n/∆ = ((∆ − 1)/∆)n. In what follows, we improve the analysis of item 1 and prove item 2, which claims, informally, that the instances of max k-vertex cover that are not fixed-parameter tractable (with respect to k) are those solved with running time better than O*(2^τ).
For this, observe that the running time of the algorithm in the proof of item 1 is O*(Σ_{i=1}^{k} C(τ, i)). As mentioned above, k can be assumed to be smaller than, or equal to, τ. Consider some positive constant λ < 1/2. We distinguish the following two cases: τ ≥ k ≥ λτ and k < λτ.
If τ ≥ k ≥ λτ, then τ ≤ k/λ. As λ < 1/2, k/λ > 2k and, since i ≤ k, we get, using Stirling's formula, a bound of the form O*(c^k) for some constant c that depends on λ and is fixed if λ is so. If k < λτ, then, by the hypothesis on λ, 2k < τ and, since i ≤ k, the expression Σ_{i=1}^{k} C(τ, i) is bounded above by k·C(τ, k). In all, using also Stirling's formula, if k < λτ, then max k-vertex cover can be solved in time at most O*(γ^τ), for some γ that depends on λ and is always smaller than 2 for λ < 1/2. Expressions (7) and (8) derive the claim and conclude the proof. In Table 1 the values of c and γ are given for some values of λ.
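As a rough numeric illustration of this case analysis (our own back-of-the-envelope bounds, not the values of Table 1): when τ ≤ k/λ one can bound Σ_{i≤k} C(τ, i) ≤ 2^τ ≤ (2^(1/λ))^k, suggesting c ≈ 2^(1/λ), and when k < λτ the standard entropy bound C(τ, λτ) ≤ 2^(H(λ)τ) suggests γ ≈ 2^(H(λ)), which is smaller than 2 for λ < 1/2 and consistent with the constant c > 4 of Theorem 2:

```python
from math import log2

def case_constants(lam):
    """Assumed back-of-the-envelope constants for the two cases (not the
    paper's exact Table 1 values): c ~ 2**(1/lam) when tau <= k/lam,
    gamma ~ 2**H(lam) when k < lam*tau, with H the binary entropy."""
    H = -lam * log2(lam) - (1 - lam) * log2(1 - lam)
    return 2 ** (1 / lam), 2 ** H

for lam in (0.1, 0.2, 0.3, 0.4, 0.49):
    c, gamma = case_constants(lam)
    print(f"lambda={lam:.2f}  c~{c:8.2f}  gamma~{gamma:.4f}")
```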
Let us note that the technique of item 1 of Theorem 2, which consists of determining a decomposition of the input graph into a minimum vertex cover and a maximum independent set, then taking a subset S′ of a minimum vertex cover V′ of the input graph and completing it into an optimal solution, can be applied to several other well-known combinatorial NP-hard problems. We sketch here some examples:
- in min 3-dominating set (dominating set in graphs of maximum degree 3), the set S′ is completed in the following way:
  • take all the vertices in I \ Γ_I(S′) (in order to dominate vertices in V′ \ S′);
  • if there remain vertices of V′ \ S′ not yet dominated, solve a min set cover problem considering Γ_I(S′) as the set-system of the latter problem and assuming that a vertex v ∈ Γ_I(S′), seen as a set, contains its neighbors in V′ \ S′ as elements; since Γ_I(S′) is the neighborhood of S′, the degrees of its vertices to V′ \ S′ are bounded by 2, which induces a polynomial min set cover problem ([27]);
- in min independent dominating set, S′ is completed by the set I \ Γ_I(S′), where Γ_I(S′) is the set of neighbors of S′ that belong to I;
- in existing dominating clique, min dominating clique (if any), max dominating clique (if any) and max clique, S′ can eventually be completed by a single vertex of Γ_I(S′).
Theorem 3. min independent dominating set, existing dominating clique, min dominating clique, max dominating clique, max clique and min 3-dominating set can be solved in time O*(2^τ) using polynomial space.
4 Tailoring measure-and-conquer to graphs with maximum degree 3
Let us note that, as proved in [23], for any ε > 0, there exists an integer n_ε such that the pathwidth of every (sub)cubic graph of order n > n_ε is at most (1/6 + ε)n. Based upon the fact that there exists for max k-vertex cover(w) an O*(2^w)-time exponential space algorithm [34], and taking into account that in (sub)cubic graphs w ≤ (1/6 + ε)n, the following corollary is immediately derived.
Corollary 2. max k-vertex cover in graphs with maximum degree 3 can be solved in time O*(2^(n/6)) = O*(1.123^n) using exponential space.
In this section we tailor the measure-and-conquer approach developed in Section 2 to graphs with ∆ = 3, in order to get an improved running-time algorithm for this case needing polynomial space.The following remark holds.
Remark 1. The graph can be cubic just once. When branching on a vertex j of maximum degree 3, we can always assume that it is adjacent to at least one vertex h that has already been selected or discarded. Indeed, the situation where the graph is 3-regular occurs at most once (even in case of disconnection). Thus, we make only one "bad" branching (where every free vertex of maximum degree 3 is adjacent only to free vertices of degree 3). Such a branching may increase the global running time only by a constant factor.
Lemma 1. Any vertex i with d_i ≤ 1 and α_i = 0 can be discarded w.l.o.g.
Proof. If d_i = α_i = 0, then i can obviously be discarded. If d_i = 1 and α_i = 0, then i is adjacent to another free vertex h. But then, if h is selected, i becomes of degree 0 and can be discarded. Alternatively, h is discarded, but then any solution with i but not h is dominated by that including h instead of i.
Lemma 2. Any vertex i with α_i ≥ 2 and d_i = 3 can be selected w.l.o.g.
Proof.If α i = 3, then i can be obviously selected.If d i = 3 and α i = 2, then i is adjacent to another free vertex h.But then, if h is discarded, we have α i = 3 and i can be selected.Alternatively, h is selected, but then any solution with h but not i is dominated by that including i instead of h.
To solve max k-vertex cover on graphs with ∆ = 3, consider the following algorithm, called MAXKVC-3.
Select j such that c_j is maximum and branch according to the following exhaustive cases: 1. if c_j = 3, assume, w.l.o.g., that j is adjacent to free vertices i, l, m with c_i ≥ 2 (see [14]) and c_i ≥ c_l ≥ c_m, and branch on j according to exhaustive subcases whose detailed case analysis is given in [14]; 2. else c_j ≤ 2 and MAXKVC-3 is polynomially solvable.
The following Theorem 4 holds in graphs with maximum degree 3 (due to space constraints, the proof is omitted; it can be found in [14]).
Theorem 4. Algorithm MAXKVC-3 solves max k-vertex cover on graphs with maximum degree 3 with running time O*(1.3339^n) and using polynomial space.
5 Approximating max k-vertex cover by moderately exponential algorithms
We now show how one can get approximation ratios non-achievable in polynomial time using moderately exponential algorithms with worst-case running times better than those required for an exact computation (see [4,5] for more about this issue). Denote by opt(G) the cardinality of an optimal solution for max k-vertex cover in G and by m(G) the cardinality of an approximate solution.
Our goal is to study the approximation ratio m(G)/ opt(G).
In what follows, we denote, as previously, by K* the optimal solution for max k-vertex cover. Given a set K of vertices, we denote by C(K) the set of edges covered by K (in other words, the value of a solution K for max k-vertex cover is |C(K)|; also, according to our previous notation, opt(G) = |C(K*)|). We first prove the following easy lemma that will be used later.
Lemma 3. For any λ ∈ [0, 1], the subset H* of λk vertices of K* covering the largest amount of edges covered by K* covers at least λ·opt(G) edges.
Proof. Indeed, if the λk "best" vertices of K* cover less than λ·opt(G) edges, then any disjoint union of 1/λ subsets of K*, each of cardinality λk, covers less than opt(G) edges, a contradiction. Now, run the following algorithm, called APPROX in what follows: 1. fix some λ ∈ [0, 1] and optimally solve max λk-vertex cover in G (as previously, let H* be the optimal solution built and C(H*) the edge-set covered by H*); 2. remove H* and C(H*) from G and approximately solve max (1 − λ)k-vertex cover in the surviving graph (by some approximation algorithm); let K′ be the obtained solution; 3. output H* ∪ K′. It is easy to see that if T(p, k) is the running time of an optimal algorithm for max k-vertex cover, where p is some parameter of the input graph G (for instance, n or τ), then the complexity of APPROX is T(p, λk). Furthermore, APPROX requires polynomial space.
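A schematic version of APPROX is sketched below; the exact and the polynomial-time approximate subroutines are passed in as callables (for instance, any of the exact algorithms above for the first and the known polynomial 3/4-approximation algorithm for the second), and the function names are ours:

```python
def approx_max_k_cover(vertices, edges, k, lam, exact_solver, approx_solver):
    """Schematic APPROX: solve max (lam*k)-vertex cover exactly, strip the
    covered part, then approximate max ((1-lam)*k)-vertex cover on the rest.
    exact_solver / approx_solver: callables (vertices, edges, k) -> set of vertices."""
    k1 = int(lam * k)
    H = set(exact_solver(vertices, edges, k1))                 # step 1 (exponential part)
    covered = {(u, v) for (u, v) in edges if u in H or v in H}
    rest_edges = [e for e in edges if e not in covered]
    rest_vertices = [v for v in vertices if v not in H]
    K_prime = approx_solver(rest_vertices, rest_edges, k - k1) # step 2 (polynomial part)
    return H | set(K_prime)                                    # step 3
```

Only step 1 is (moderately) exponential, which is exactly where the running time T(p, λk) in the analysis below comes from.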
Theorem 5. If T(p, k) is the running time of an optimal algorithm for max k-vertex cover, then, for any ε > 0, max k-vertex cover can be approximated within ratio 1 − ε with worst-case running time T(p, (1 + 2√(1 − 3ε))k/3) and polynomial space.
Proof. Denote by K* an optimal solution of max k-vertex cover in G, by G_2 the induced subgraph G[V \ H*] of G, and by opt_(1−λ)(G_2) the value of an optimal solution for max (1 − λ)k-vertex cover in G_2. Suppose that E′ edges are common between C(H*) and C(K*). This means that the edges of C(K*) \ E′ are in G_2 and are exclusively covered by the vertex-set L* = K* \ H* that belongs to G_2. Set ℓ* = |L*| and note that ℓ* ≤ k and ℓ* ≥ (1 − λ)k.
According to Lemma 3, the (1 − λ)k "best" vertices of L * cover more than Taking into account ( 9), the fact that K ′ in step 2 of APPROX has been computed by, say, a ρ-approximation algorithm and the fact that |E ′ | |C(H * )|, we get: Using once more Lemma 3, |C(H * )| λ opt(G), and combining it with (10), we get: Setting ρ = 3 4 in (11), in order to achieve an approximation ratio m(G)/ opt(G) = 1 − ǫ, for some ǫ > 0, we have to choose an λ satisfying λ = (1 + 2 √ 1 − 3ǫ)/3, that completes the proof of the theorem.For Corollary 3, just observe that the running-times claimed for the first two entries are those needed to optimally solve max λk-vertex cover (the former due to [9] and the latter due to item 1 of Theorem 2).Note that the second term in the min expression in the corollary is an FPT approximation schema (with respect to parameter τ ).Observe also that for the cases where the time needed for solving max k-vertex cover is given by the c k expression of item 1 of Theorem 2, this represents an improvement with respect to the FPT approximation schema of [32].Note finally that the result of Theorem 5 is indeed a kind of reduction between moderately exponential (or parameterized) approximation and exact (or parameterized) computation for max k-vertex cover in the sense that exact solution on some subinstance of the problem derives an approximation for the whole instance.Finally, let us close this section and the paper by some remarks on what kind of results can be expected in the area of (sub)exponential approximation.All the algorithms given in this section have exponential running time when we seek for a constant approximation ratio (unachievable in polynomial time).On the other hand, for several problems that are hard to approximate in polynomial time (like max independent set, min coloring, . . .), subexponential time can be easily reached for ratios depending on the input-size (thus tending to ∞, for minimization problems, or to 0, for maximization problems).An interesting question is to determine, for these problems, if it is possible to devise a constant approximation algorithm working in subexponential time.An easy argument shows that this is not always the case.For instance, the existence of subexponential approximation algorithms (within ratio better than 4/3) is quite improbable for min coloring since it would imply that 3-coloring can be solved in subexponential time, contradicting so the "exponential time hypothesis" [29].We conjecture that this is true for any constant ratio for min coloring.Anyway, the possibility of devising subexponential approximation algorithms for NP-hard problems, achieving ratios forbidden in polynomial time or of showing impossibility of such algorithms is an interesting open question that deserves further investigation.
Table 1. The values of c and γ for some values of λ.
|
v3-fos-license
|
2021-05-27T05:22:00.097Z
|
2021-05-01T00:00:00.000
|
235197021
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-6643/13/5/1562/pdf",
"pdf_hash": "01adc57932e46d754d9dac87484523a8baac8006",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44458",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "01adc57932e46d754d9dac87484523a8baac8006",
"year": 2021
}
|
pes2o/s2orc
|
Enteral Nutrition by Nasogastric Tube in Adult Patients under Palliative Care: A Systematic Review
Nutritional management of patients under palliative care can lead to ethical issues, especially when Enteral Nutrition (EN) is prescribed by nasogastric tube (NGT). The aim of this review is to determine the current status of the management of EN by NG tube in patients under palliative care, and its effect on their wellbeing and quality of life. The following databases were used: PubMed, Web of Science (WOS), Scopus, Scielo, Embase and Medline. After inclusion and exclusion criteria, as well as quality screening, were applied, a total of three articles published between 2015 and 2020 were selected from the 403 articles initially identified. The use of NGT was associated with fewer diarrhea episodes and more restrictions than in the group that did not use NG tubes. Furthermore, the use of tubes increased attendance at the emergency department, although no contrast was found between NGT and PEG devices. No statistical difference was found between the use of tubes (NGT and PEG) and no use with respect to the treatment of symptoms, level of comfort, and satisfaction at the end of life. Nevertheless, tube use improved hospital survival compared with other procedures, and differences were found in hospital stays in relation to the use of other tubes or devices. Finally, there are not enough quality studies to provide evidence that the use of EN through NGT improves the health status and quality of life of patients receiving palliative care. For this reason, decision making in this field must be carried out individually, weighing the benefits and harms that these interventions can cause to the quality of life of the patients. Subjects: Finance; Corporate Finance; Corporate Governance
Introduction
Initially, the aim of Palliative Care (PC) was to relieve suffering at the end of life. However, it is nowadays considered as a model to follow in patients in whom there is no curative treatment, and is therefore being implemented at earlier stages. Initially, PC was focused on cancer patients, but it currently covers other conditions such as advanced dementia, HIV/AIDS, heart disease, etc. [1].
Every year, 40 million people need palliative care, but only 3 million have access to such special attention [2]. Currently, the goal of PC is to promote comfort and to maintain an optimal quality of life for patients and their families under palliative care [3] through prevention and management of physical, psychosocial and spiritual issues in these patients [4]. It should not be forgotten that quality of life evaluates the subjective perception that each patient has around alterations or limitations that the disease undertakes in the physical, psychosocial and spiritual aspects of their lives [5].
Nutrition and hydration are basic elements for maintaining life, and they are considered signs of health in our society [6]. Occasionally, patients fail to maintain an adequate oral intake to meet their nutritional needs [7], and this can lead to physical and psychosocial issues such as anxiety and distress [8]. Therefore, it may be necessary to commence Artificial Nutrition (AN). In 2008, Cochrane published a review regarding the use of AN in adult patients receiving palliative care. The authors concluded that there was not enough evidence to guide the development of guidelines for practice [9]. Six years later, the update of this review presented the same results; therefore, there are no new quality studies regarding this subject [7].
If the patient takes less than 50% of their nutritional requirements and there are no contraindications or bronchoaspiration risks, and their life expectancy is less than 6 weeks [10], Enteral Nutrition (EN) must be prescribed through a nasogastric (NG) tube [11]. This is a widely used and easily accessible technique, although in the case of patients with advanced dementia who receive PC, evidence supporting the use of NG tube is limited, and this technique may have a negative impact on the quality of life of these patients [12]. The use of tubes in patients with advanced dementia does not improve survival, prevent aspiration [13], or improve their functional status. In addition, the use of tubes for artificial nutrition has been associated with agitation, increased physical restrictions, and complications related to the tubes [14,15].
The use of EN through NG tubes in patients under PC continues to be a controversial subject [16], since there is little evidence on the role of nutritional support and whether its implementation improves quality of life. In addition, it affects the psychological sphere of patients, because it can influence their social relationships and the way they interact with others. However, Mitchell et al. reported that more than a third of nursing home residents with dementia had been subjected to a feeding tube [17]. Decisions and/or choices may confront patients, family members, and health professionals. Therefore, having a good knowledge of the benefits and harms of the use of this technique is paramount in order to reduce ethical conflicts and to understand how the use of this technique can influence the physical, psychological and spiritual spheres, and therefore, the quality of life of patients receiving PC. Accordingly, the goal of the present study is to understand the current state of the management of EN using NG tubes in patients receiving palliative care, along with its effect on health status and quality of life.
Materials and Methods
A systematic review of the literature was carried out. The results were obtained by direct online access through the following databases: PubMed, Web of Science (WOS), Scopus, Scielo, Embase and Medline. The aim of this review was to address the following question: Is the use of EN by NG tube appropriate in patients under palliative care?
To define the research question, the PICOS criteria (Table 1) were used. The articles reviewed were published in any country, by any institution or individual investigator, and written in Spanish or English. The search was limited to articles published in the last 5 years (between 2015 and 2020).
For the documentary retrieval, the following MeSH descriptors were used: "palliative care", "enteral nutrition", "terminal care" "terminally ill". Neither Subheadings nor Entry Term classifiers were used. The search strategy was: ("Palliative Care" OR "Terminal Care" OR "Terminally ill") AND "Enteral Nutrition". The final choice of articles was made following the inclusion criteria: (a) studies published in journals indexed in international databases subject to peer review, (b) published between 2015 and 2020, and (c) written in English or Spanish; and the exclusion criteria were: (a) studies based on pediatric age, (b) expert reports, editor's letters, books, monographs, clinical narratives or reviews. Due to the large number of articles found in the first search, and as a quality assessment, two screenings were carried out. The first was based on the title and summary, eliminating those articles that dealt with a topic other than the one proposed. In the second screening, review articles, editor's letters, etc., were eliminated.
To carry out the critical reading and evaluation of the articles found, the STROBE (Strengthening the Reporting of Observational studies in Epidemiology) statement was used for the observational studies [18] and the CONSORT guide (Consolidated Standards of Reporting Trials) for randomized clinical trials [19].
Once the first screening was applied based on title and abstract, 168 articles were eliminated. After the second screening, nine articles were eliminated. The number of articles selected was three, all of which were observational studies, to which the STROBE statement was applied. All of these articles fulfilled 90% of the items of the statement. The parameters of PRISMA (Preferred Reporting Items for Systematic Review and Meta-Analyses) were followed (Figure 1). The results obtained showed different study parameters in the approach to the proposed topic (Table 2). No studies were found that addressed the use of NGT versus not using a feeding tube, but there was always a third group representing the use of either Percutaneous Endoscopic Gastrostomy (PEG) or esophageal stent. Therefore, the results relating the NG tube group to the other groups were extracted.
In the study carried out by Bentur et al. 2015 [20], three groups were compared: subjects without feeding tubes, subjects with NG tubes and another group carrying Percutaneous Endoscopic Gastrostomy (PEG) tubes. The results related to the use of an NG tube versus the non-use of a feeding tube or the use of PEG were taken as the reference for this review. They concluded that the use of a feeding tube in people with advanced dementia in the community was associated with negative outcomes and increased caregiver burden. The use of an NG tube caused fewer diarrhea episodes and more restrictions than the group that did not carry a feeding tube. The use of feeding tubes increased attendances to the emergency department, although no distinction was made between NGT and PEG. No statistical difference was found between feeding tube use (NG tube and PEG) and non-use with respect to the treatment of symptoms at the end of life, comfort, or satisfaction at the end of life.
Yang et al., in 2015, compared hospital stays and survival among patients with esophageal obstruction and a short life expectancy in subjects with EN by tube, with esophageal stent placement, and with nutritional support without oral intake. The results showed that patients with an NGT or an esophageal stent had a shorter hospital stay (19 and 12 days, respectively) and a longer median survival (p < 0.01) than the group with nutritional support alone. The authors concluded that enteral feeding by NG tube in palliative care was safe, inexpensive, and had a low complication rate [21].
The multicenter study carried out by Shinozaki et al. in 2017 in Japan, found that 74.6% of patients in the terminal phase required EN.
These authors suggest that the nutritional intake route may play a role in quality of life. No significant difference was found in quality of life between the different study groups. However, the mean hospitalization period was significantly shorter for gastrostomy-fed patients than for nasogastric tube-fed patients (21 vs. 64 days). Patients with PEG had a shorter period between study prescription and death than patients fed through an NG tube [22]. Table 2. Studies included in the systematic review.
Bentur et al., 2015 [20]
To examine the prevalence of feeding tube use among older people with advanced dementia (OPAD) living in the community, and to evaluate the characteristics, quality of care, and the burden on caregivers.
13% of patients carried NG tubes. The use of this type of device caused fewer diarrhea episodes than in subjects not using any feeding tube (6.6% vs. 32.5%) and more restrictions (60.0% vs. 9.9%, p < 0.05). Subjects with feeding tubes (NG tube or PEG) attended the emergency department at least once during the day more often (40% vs. 34.2%, p < 0.05), and on more occasions (2.92 ± 1.68 vs. 1.6 ± 0.9 times during the day and 2.9 ± 1.6 vs. 1.4 ± 0.5 times during the night, p < 0.05). No statistically significant differences were found between the use or non-use of feeding tubes on the SM-EOLD, SWC-EOLD and CAD-EOLD scales, although a difference was found in the wellbeing subscale of the latter between subjects with and without feeding tubes, either NGT or PEG (6.9 ± 2.3 vs. 5.2 ± 2.0, p < 0.05, respectively), with no differences between the types of feeding tubes.
Yang et al., 2015 [21]
To compare the clinical results of EN by tube and the placement of an esophageal stent in patients with malignant esophageal obstruction and a short life expectancy.
Retrospective observational study of 31 patients diagnosed with advanced-stage esophageal cancer, divided into three groups: patients with an NG tube (n = 12), with an esophageal stent (n = 10), and patients with nutritional support but without oral intake (n = 9).
The average duration of hospital admission was 19 days in the NGT group, 12 days in the esophageal stent group, and 39 days in the group receiving nutritional support without oral intake (p = 0.01). The median survival after the diagnosis of malignant esophageal obstruction was 122 days in the NGT group, 133 days in the esophageal stent group and 51 days in the group without oral intake. The most common complication in the group using feeding tubes was aspiration pneumonia (58%), although this was lower than in the group without oral intake (100%).
Shinozaki et al., 2017 [22]
To examine the quality of life and functional state in terminal patients with brain and neck cancer.
Prospective and multicenter observational study with 11 oncology centers and hospitals in Japan. The survey EORTC QLQ-C15-PAL was used weekly formed by 15 items related with health wellbeing and quality of life. The sample was formed by 100 patients.
74.6% of patients required EN. Those with NGT showed longer hospital admissions than patients using PEG (64 compared to 21 days, p < 0.05). Patients using PEG presented shorter periods between the study prescription and death, compared to those fed by NGT. No significant difference was found in quality of life, between the starting point and week 3 of the study, among the different study groups.
Discussion
The results obtained show the limited bibliography in the field of EN through NG tube in patients receiving palliative care. There are studies on the use of tube feeding in these patients, but without distinction between the NG tube and PEG, so it was not possible to obtain individual and differentiated results between both routes of administration.
The articles in this research can be found to represent a low level of evidence, since they are observational studies, and no randomized clinical trials (RCTs) were performed. These results coincide with those reported by other studies, such as the systematic reviews carried out by Good et al. in 2008 and later in 2014 [7,9].
Malnutrition leads to increased comorbidities and decreased performance status and quality of life [10]. Therefore, nutritional support should be integrated into palliative care, and its implications with respect to quality of life and life expectancy should be assessed [23]. Within such nutritional support is included the use of nutrition through a tube, although its use remains controversial, especially in the case of the NG tube. The emergence of research and guidelines on the management of patients under palliative care has managed to reduce the use of tube feeding by 50% [24].
Some studies report that the use of enteral tube feeding is effective for improving the quality of life of patients [25], since it may improve physical, psychosocial and spiritual aspects. Although the quality of life of patients with NGT was not directly studied by Bentur et al. in 2015, they did find that these patients presented more physical restrictions (although fewer diarrhea episodes), which can affect the physical and even the psychosocial sphere and could therefore influence the quality of life of these patients.
Even though there was no distinction between patients with NG tube and PEG, it was concluded that these patients attended the emergency department more times than those who did not carry any type of feeding tube, which also negatively influences their quality of life, since they present more comorbidities, making it necessary for them to go to a health center more frequently, and causing changes in their daily life, as reflected in the well-being subscale of CAD-EOLED [20]. Another aspect that can negatively influence quality of life is the increase in the number of hospital stays and the decrease in survival. The use of NGT may decrease hospital stays and improve survival in patients receiving palliative care, and thus improve the quality of life perceived by these patients [21]. However, Shinozaki et al. concluded that subjects presenting NG tube had longer hospital admissions than those using PEG. Even though the survival period was longer, no significant differences in quality of life were found among the various groups [22]. This may be due to the choice of the measurement interval, since it was performed in patients with a short life expectancy. It should be noted that the perception of quality of life is related to reality and expectations. In patients receiving palliative care, the expectations for improvement are sometimes low, especially when their life expectancy is short [26]. Perhaps for this reason, no differences were found in quality of life in these investigations. The scant evidence on this topic has led to different interpretations and approaches in these patients.
The Ethics Work Group of the Spanish Society of Parenteral and Enteral Nutrition (SENPE) recently (2019) published a statement confirming that the placement of feeding tubes in patients with advanced dementia is a futile treatment that only contributes to prolonging suffering, and concluded that health care professionals should not make wide use of EN by tube [27]. Schwartz et al. considered that EN by tube could improve quality of life, but that the benefits in the last days of life were limited and did not outweigh the burdens [28].
Furthermore, there may be discrepancies between health professionals and patients when prescribing nutritional support through an NG tube. For example, Amano et al. found that 78.6% of subjects in their study did not wish to receive artificial nutrition by feeding tube, even though their intake was insufficient [29]. In the study undertaken by Pengo et al. in 2017, it was found that the numbers of doctors and nurses who agreed with the use of AN declined as life expectancy decreased [30]. These decisions can create ethical dilemmas and are related to feelings, thoughts and beliefs [31]. Sometimes, it is the patients themselves who do not wish to receive EN by NG tube [28]. Therefore, it is necessary to make an individualized decision, even though no other contraindications may be found. This respects the principles of autonomy, beneficence and non-maleficence [32]. Furthermore, the team of health care professionals looking after such patients should establish what the aims and benefits of such treatment are, whether these are achievable, and any possible harm that may be encountered [33]. In addition, the principle of autonomy recognizes the right and the capacity of a person to make their own personal decisions. Self-determination includes the right to reject EN, although this refusal may be difficult to understand for family members and healthcare professionals [3]. Perhaps the means to avoid ethical conflicts and future dilemmas is the use of advance directives, where patients can record their decisions regarding future treatments or techniques, although the prevalence of patients who make use of such mechanisms is very low [34].
Among the limitations in this review are the lack of studies with a large enough sample to be able to describe the results, and the subjectivity of the results.
Although it is a difficult field of research, conducting higher-quality research could result in the provision of recommendations or guidance to aid patients and healthcare professionals in decision making.
The results obtained lead us to consider the need to create a clinical practice guideline on the nutritional management of these patients, which includes the use of EN by NGT. Education must continue to advance so that such differences in practice disappear and clinical practice becomes consistent among all nurses. The benefits and risks of the use of EN by NGT in these patients should be investigated in order to provide evidence-based care. Clear evidence would help to reduce variability in the management of these patients.
Conclusions
There are not enough quality studies to provide evidence regarding the benefits for wellbeing and quality of life in patients under palliative care receiving EN through an NG tube.
For this reason, decision making in this field must be carried out individually, weighing the benefits and damages that they can cause in the quality of life of the patients.
Conflicts of Interest:
The authors declare no conflict of interest.
An adaptive hybrid approach: Combining genetic algorithm and ant colony optimization for integrated process planning and scheduling
Optimization algorithms can differ in performance for a specific problem. Hybrid approaches, exploiting this difference, may give higher performance in many cases. This paper presents a hybrid approach of Genetic Algorithm (GA) and Ant Colony Optimization (ACO) specifically for Integrated Process Planning and Scheduling (IPPS) problems. GA and ACO have given different performances in different cases of IPPS problems: in some cases GA has outperformed, and in other cases ACO has. The hybrid method can be constructed either as (I) GA improving ACO results or (II) ACO improving GA results, based on the performance of the algorithm pair on the given problem scale. The proposed hybrid GA-ACO approach (hAG) runs both GA and ACO simultaneously, and the better-performing one is selected as the primary algorithm in the hybrid approach. hAG also avoids premature convergence by resetting the parameters that cause the algorithms to converge to local optimum points; with this avoidance strategy the algorithm can obtain more accurate solutions. The new hybrid optimization technique (hAG) merges a GA with a local search strategy based on the interior point method. The efficiency of hAG is demonstrated by solving a constrained multi-objective mathematical test case. The benchmarking results of the experimental studies with AIS (Artificial Immune System), GA, and ACO indicate that the proposed model has outperformed other non-hybrid algorithms in different scenarios.
Introduction
Both process planning and scheduling are of paramount importance in many industrial processes, especially in manufacturing systems. In this field, process planning determines the production steps according to the product specifications, while scheduling determines how resources are used according to the process plan. Primitive and traditional optimization methods normally handle this problem sequentially. On the other hand, with the increase in the computational capacity of processors, handling process planning and scheduling simultaneously in a hybrid method has become more popular.
Many scheduling problems in real-world applications cannot be solved exactly in polynomial time with any known algorithm. These types of problems do not belong to the P complexity class. Therefore, metaheuristic algorithms have been widely preferred for real scheduling problems in industry from the 1970s up to now [1]. As one of the most popular optimization methods, the Genetic Algorithm (GA) and its hybrid variants, whose crossover and mutation mechanism is inspired by natural selection, are widely used for integrated planning and scheduling systems [2]. Ant Colony Optimization (ACO) is another algorithm for solving Integrated Process Planning and Scheduling (IPPS) problems to minimize the maximum completion time [3]. This type of optimization method is inspired by ants searching for food on a graph.
In this field, there are various experimental and theoretical studies in the literature.Some valuable studies are listed in a chronological order below.Firstly, Allahverdi and Aldowaisan presented several new heuristic algorithms for multi-machine non-waiting flow-type problems, taking into account the total completion time, and show that these new approaches are better in terms of error performance than known approaches, including the newly developed GA [4].
Tseng and Lin have presented a hybrid GA to solve the non-wait flow-type scheduling problems with the goal of completion time.This presented algorithm combines a new local search scheme with GA.The local search algorithm combines Insertion Search with Cut-and-Repair algorithms [5].
Li et al. have investigated two different data mining techniques for their study.These techniques are artificial neural networks and binary logistic regression methods.They have evaluated their approach to graphically based hyper-intuitive solutions proposed for test scheduling problems.Time complexity analysis has shown that artificial neural networks and binary logistic regression method accelerate the study.They have assisted in the development of more sophisticated information-based decision support systems [6].
Araujo and Nagano have investigated scheduling problems with the aim of minimizing the execution time. This problem is well known to be NP-hard, and their study made a small contribution to it. They proposed a new constructive heuristic method named GAP Heuristics, based on a structural property [7]. The proposed approach builds on two well-known methods in the literature, the TWOs method proposed by Bianco, Dell'Olmo and Giordani [8] and the TRIPS (Triple) heuristic proposed by Brown, McGarvey, and Ventura [9], and is superior in terms of required computational time.
Chaundry and Mahmood have developed an unprecedented flow type scheduling using genetic algorithms.Non-standing flow type scheduling is a limited flow type schedule that is commonly found in manufacturing systems.In this research, it is considered that the total completion time is minimized for N number of jobs processed in M machines using general purpose table based GA.The proposed approach solution is compared with the problems already published in the literature.The proposed approach produces the most appropriate solution for all situations.It also demonstrates that an objective function can be minimized by using the same model without changing the general conception of the GA [10].
Prot et al. model an industrial workshop scheduling problem in their article as a multi-modal production line type workshop. In the problem they deal with, there are additional constraints such as sequence-dependent preparation times and delivery dates. The decision makers' problem is to minimize the largest delay. To solve the problem, a tabu search procedure was introduced, together with a valid lower bound to assess this tabu search procedure [11].
Gamma and Singhal have tried to find the ideal table order with GA for flow type scheduling problems involving M machines and N jobs with time-dependent and job partitioned preparations in order.Authors who focus on two types of case studies, both traditional and general, have shown that the optimized time-to-completion value can be accessed with multiple different business sequences instead of one, and can help reduce the time to completion in the scheduling process [12].
Vidal et al. have contributed a component-based heuristic approach to the development of an efficient, applicable and general-purpose algorithm for vehicle route problems and the determination of the challenges in this area.As a result of extensive computational experiments, the method has demonstrated a remarkable performance as well as the most successful problem-oriented algorithms in the literature, or better than them [13].
Pacini et al. have investigated distributed job scheduling efforts for Parameter Scan Experiments (PSE) with bio-inspired techniques in their work.They have created a taxonomy for organizing and analyzing the investigated materials.They point out the strengths and weaknesses of the present experiments.This area describes the work that can be done in the future [14].
Burdett and Kozan have addressed the problem of creating train timings in their work.Train time-table creation is a complicated problem in terms of delays and facilities.They have developed numerically efficient algorithms to define the delay effect in terms of the affected operations.The adjustments and delay values of the affected investigations are spread by the differential graphical model of train surveys.The results of the proposed sensitivity analysis were used to determine program integrity.The analyzes provided information that could be used as part of the proactive scheduling approach.Affected processes can be used to develop meta-intuitive approaches to the chart [15].
Pugazhenthi and Xavior address the primary goal of minimizing the completion time of flow-type scheduling problems with N jobs on M machines. In order to solve the flow-type scheduling problem in a modern production framework, they proposed metaheuristic approaches called EPDT (Extended Prim-Dijkstra Tradeoff) and BAT (Bat). They applied these two algorithms together with GA for further improvement in achieving the minimum execution time. To measure the performance of these new heuristic approaches, Taillard benchmark problems of different sizes were solved in MATLAB. The GA-applied EPDT heuristic approach for flow-type problems and the GA-applied BAT metaheuristic approach are effective in finding a better set of solutions to scheduling problems and in reducing the completion time [16].
Laha and Sapkal propose a heuristic algorithm to minimize the total flow time in no-wait flow-type scheduling. In experiments, the proposed heuristic approach outperformed well-known heuristics, except in terms of time complexity. Statistical significance tests proved the superiority of the method [17].
Dey et al. have proposed metaheuristics to make multilevel thresholding faster. They used quantum mechanics concepts to propose six different quantum-inspired metaheuristic methods. The results of the six proposed quantum metaheuristic methods are discussed in order to establish consensus results. Quantum-inspired particle swarm optimization was superior to the other methods. The computational complexities of the proposed methods are analyzed in order to determine their time efficiency [18].
Kianfar and colleagues have worked on a flexible flow-type system with non-deterministic dynamic job arrivals and sequence-dependent preparation times. The problem is to specify a schedule that minimizes the average delay of the intended tasks. Since the problem class is NP-hard, a new dispatching rule and a hybrid GA have been developed. The two new methods included in their research and the most commonly used dispatching rules in the literature were combined in a simulation model. The results show that the methods they propose are better than the traditional dispatching rules [19].
Li and Gao lately have published a book, summarizing a series of extended researches study on IPPS.They have focused on details of novel solution techniques, discussing the properties, and applications of process planning and scheduling under different environments [20].
In the present work, various algorithms that aim to solve scheduling problems, such as GA, the Artificial Immune System (AIS), and ACO, are combined, and an HTML page and open-source JavaScript library are developed as an interface that allows users to compare their algorithms with others graphically. Users can create various types of scheduling problems and solve them with these algorithms in this application. In addition, the parameters of GA are optimized for scheduling problems, and using these parameters a hybrid algorithm is developed from ACO and GA.
In this study, a hybrid approach using GA and ACO called as hAG has both theoretically and empirically presented.The basic approach of the proposed system hAG is to solve one of the two optimization methods first, then to try to improve the solution with the other one.For this reason, the starting algorithm should be chosen wisely at the initial state for better performance.Namely, this approach suggests to run both algorithms simultaneously first, then monitoring their performance to differentiate the better one.This selection criterion makes this proposed approach unique when compared with the others.
This paper has five main sections. The first section, above, introduces this study together with a literature review of studies related to the proposed method. The second section explains IPPS problems. The third section gives the details of the suggested technique. In the fourth section, experimental studies are presented, and in the last section the contributions are summarized.
IPPS problem definition
IPPS problems are optimization problems which include both process planning and scheduling.In this study, operations can work on different machines with possibly different running times.This is known as operation flexibility or machine flexibility.There are J jobs and a job consists of P operations which have to be done sequentially.Also, there is M non-identical machine for assigning operations according to their performance.The aim is basically to find minimum makespan [21,22].
The sequence of jobs is alterable but in a specific job, the sequence of operations of a job should be in given order.Any process of any job can operate in any machine which is allowed in the given table.Our main objective is minimizing total makespan.
Minimize F = max_{∀ i,j,k} { x_{ijk} · d_{ijk} }

subject to:

x_{ijk} − x_{imn} ≥ p_{ijk},  ∀ i, j, k, m and n : j > m  (1)

x_{ijk} − x_{lmk} ≥ p_{ijk},  ∀ i, j, k and m such that the j-th process of the i-th job runs after the m-th process of the l-th job on the k-th machine  (2)

x_{ijk} ≥ p_{ijk},  ∀ i, j and k such that the j-th process of the i-th job runs on the k-th machine  (3)

As seen above, there are some limitations related to IPPS problems. The first constraint (1) implies that the operations of every job are handled in the required priority order. Constraint (2) guarantees that any two operations belonging to the same job cannot be processed at the same time. Constraint (3) guarantees that only a single resource is selected for every activity. Finally, constraints (4) and (5) imply nonnegativity and integrality of the corresponding variables [23].
Figure 1 shows a randomly generated IPPS problem variable table. This table shows the machines' performance with respect to the product and machine specifications. According to this table, all jobs have three operations which have to be done sequentially, and each operation can be done on one of five machines; that is, J = 4, P = 3, and M = 5. If Operation 1 of Job 1 is performed on Machine 1, it costs 198 time units.
Figure 2 shows an example Gantt diagram of a solution to the given scenario. In this solution, three operations run on Machine 1: the 2nd operation of Job 4, the 2nd operation of Job 2 and the 3rd operation of Job 1. The makespan of this solution is 580 because the 3rd machine finishes last. Figure 2 also shows the priority of jobs: the 2nd operation of Job 4 waited for the 1st operation of Job 4 on Machine 5. The adaptive GA and ACO parameters for the solution given in Figure 2 are as shown in Table 1.
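To make this representation concrete, the sketch below encodes a small instance as a processing-time table and evaluates the makespan of a candidate schedule. It is written in Python for brevity (the authors' implementation is in JavaScript), and only the single value mentioned in the text (198 for Operation 1 of Job 1 on Machine 1) is taken from the description; all other times, and the example schedule, are made-up placeholders.

```python
# Minimal sketch: evaluating the makespan of an IPPS schedule.
# times[j][p][m] = processing time of operation p of job j on machine m
# (hypothetical values; the real instance of Figure 1 is not reproduced here).
times = [
    [[198, 150, 170, 160, 155], [120, 110, 130, 140, 125], [90, 95, 85, 100, 105]],
    [[140, 160, 150, 170, 145], [100, 120, 110, 115, 130], [80, 70, 95, 85, 90]],
    [[130, 125, 160, 150, 140], [115, 105, 125, 135, 120], [75, 85, 95, 80, 70]],
    [[150, 145, 155, 165, 135], [110, 100, 130, 120, 115], [95, 90, 85, 100, 80]],
]

def makespan(sequence, assignment):
    """sequence: (job, op) pairs in scheduling order, ops of each job in order.
    assignment[(job, op)]: machine chosen for that operation."""
    job_ready = [0] * len(times)             # finish time of each job's last scheduled op
    machine_ready = [0] * len(times[0][0])   # finish time of the last op on each machine
    for job, op in sequence:
        m = assignment[(job, op)]
        start = max(job_ready[job], machine_ready[m])  # precedence + machine availability
        finish = start + times[job][op][m]
        job_ready[job] = machine_ready[m] = finish
    return max(machine_ready)

# Example usage with an arbitrary (hypothetical) schedule:
seq = [(j, p) for p in range(3) for j in range(4)]          # round-robin over jobs
assign = {(j, p): (j + p) % 5 for j in range(4) for p in range(3)}
print(makespan(seq, assign))
```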
In GA, there are several mutation types, such as bit-flip mutation, swap mutation, scramble mutation, and inversion mutation, each with its own specific technique. Similarly, there are several crossover types, such as single-point crossover, two-point crossover, uniform crossover, and arithmetic crossover. Generally, these parameters are chosen experimentally in order to achieve better performance. In the empirical studies, swap mutation and single-point crossover were chosen as the best-fitting operators.
As these optimization methods may outperform each other depending on the selected parameters, all these values were monitored during the tests.
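Swap mutation and single-point crossover are standard GA operators; a minimal sketch of both, independent of the paper's exact implementation, could look as follows.

```python
import random

def swap_mutation(chromosome, rate=0.1):
    """Swap two randomly chosen genes with the given probability."""
    child = chromosome[:]
    if random.random() < rate:
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child

def single_point_crossover(parent_a, parent_b):
    """Cut both parents at one random point and exchange the tails."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:], parent_b[:point] + parent_a[point:]

# Example:
a, b = [1, 2, 2, 1, 2, 1, 2, 1], [2, 1, 1, 2, 1, 2, 1, 2]
print(single_point_crossover(a, b))
print(swap_mutation(a))
```

Note that for the ordering genes of an IPPS chromosome, a repair step or a permutation-preserving operator would typically be needed after crossover; that detail is omitted in this sketch.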
Proposed hybrid approach (hAG)
It is useful to recall the general aspects of the ACO, GA, and AIS techniques before explaining the proposed method.
Ant colony optimization (ACO)
This proposed hybrid approach (hAG) uses the ACO algorithm design proposed in [24]. This ACO design treats the IPPS problem as a graph. In that graph, every process on every machine that can execute that process is a node. Every node that represents a process of a job has a directed arc to every node that represents the next process of that job, and an undirected arc to any process of other jobs. In addition, a node located at the start point has directed arcs to the nodes that represent the first processes of all jobs. Ants start here and move along arcs, respecting direction constraints. If an ant arrives at a node, any other nodes that represent the same process on other machines are deleted. When there is no unvisited node left in the graph, the ant has finished its journey and has a solution for the IPPS problem. Using an equation that combines the next nodes' processes' makespan values and the pheromone levels on the arcs, an ant decides the next node to visit as it moves around. The pheromone level on the arcs is the key to this algorithm. In the first iteration, every arc has the same pheromone level, but after each iteration the ant that obtained the best result increases the pheromone levels of the arcs it visited. Therefore, ants in the next iteration follow the winning ant's path with higher probability.
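A minimal sketch of the two core ACO ingredients described above, probabilistic next-node selection from pheromone and heuristic information, and reinforcement of the best ant's path, could look like this; the parameter names and default values are illustrative, not the paper's.

```python
import random

def choose_next(current, candidates, pheromone, heuristic, alpha=1.0, beta=2.0):
    """Pick the next node with probability proportional to
    pheromone[(current, n)]^alpha * heuristic[(current, n)]^beta."""
    weights = [(pheromone[(current, n)] ** alpha) * (heuristic[(current, n)] ** beta)
               for n in candidates]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for node, w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return node
    return candidates[-1]

def reinforce_best_path(pheromone, best_path, best_makespan, evaporation=0.1, q=100.0):
    """Evaporate all trails, then deposit extra pheromone on the best ant's arcs."""
    for arc in pheromone:
        pheromone[arc] *= (1.0 - evaporation)
    for arc in zip(best_path, best_path[1:]):
        pheromone[arc] = pheromone.get(arc, 1.0) + q / best_makespan
```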
Genetic algorithm (GA)
The hAG approach uses the GA design proposed in [25]. There are (job number × process number × 2) genes in this chromosome model. The first (process number × job number) genes indicate on which machine each process must run. The second (process number × job number) genes show in which order the operations should be scheduled in that solution.
For instance, when there are 2 jobs, 2 processes and 2 machines, the chromosome [1 2 2 1][2 1 2 1] means that the first job's first process must run on the 1st machine and its 2nd process on the 2nd machine, while the 1st process of the 2nd job must run on the 2nd machine and its 2nd process on the 1st machine. The scheduling order is: first the 1st process of the 2nd job, then the 1st process of the 1st job, then the 2nd process of the 2nd job, and finally the 2nd process of the 1st job. GA runs on this chromosome model.
Table 1. Hyperparameters of the Genetic Algorithm and Ant Colony Optimization.
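To make the encoding concrete, a decoding sketch for the two-job, two-process, two-machine example above is shown below; the helper is hypothetical and written in Python rather than the authors' JavaScript.

```python
def decode(chromosome, jobs=2, processes=2):
    """Split the chromosome into a machine-assignment part and a priority part,
    following the encoding described above (1-based job/process indices as in the text)."""
    half = jobs * processes
    machines, priority = chromosome[:half], chromosome[half:]
    assignment = {}                       # (job, process) -> machine
    g = 0
    for job in range(1, jobs + 1):
        for proc in range(1, processes + 1):
            assignment[(job, proc)] = machines[g]
            g += 1
    return assignment, priority

assignment, priority = decode([1, 2, 2, 1, 2, 1, 2, 1])
print(assignment)  # {(1, 1): 1, (1, 2): 2, (2, 1): 2, (2, 2): 1}
print(priority)    # [2, 1, 2, 1] -> schedule job 2 first, then job 1, then job 2, then job 1
```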
Proposed hybrid approach (hAG)
This proposed hybrid approach hAG basically uses one of the two algorithms, GA and ACO, to improve the results initially obtained by the other. In this approach, the first phase is to select one of the two methods, GA or ACO, as the starting algorithm. The selection of the starting and following algorithm is an important issue: the algorithms' variable level of success in different problem types drives us to select the starting and following algorithm dynamically at running time.
First, both algorithms run simultaneously and, at the end of the time limit or any other stop condition, the supervisor detects which one gave better solutions. After this, the algorithm with the better solution continues and the other one is stopped. If GA runs first, after it stops ACO takes GA's good solutions as pheromone updates on its graph. On the other hand, if ACO runs first, after it stops GA takes ACO's good solutions as individual chromosomes. Both ACO and GA have a convergence-avoidance mechanism: if, throughout a constant number of iterations, the algorithms cannot find a better solution, the parameters that cause them to get stuck in local optimum areas are reset. Hence, the second algorithm has more solutions to improve instead of one.
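A minimal sketch of this supervisor logic is given below. It is an illustration rather than the authors' pseudocode: the `ga` and `aco` objects, and their run, best_makespan, best_solutions, seed and reset_parameters methods, are hypothetical interfaces assumed for the sketch.

```python
def run_hag(ga, aco, trial_budget, total_budget, stagnation_limit=50):
    """Hybrid GA-ACO supervisor, sketched from the description in the text."""
    # Phase 1: run both algorithms simultaneously for a trial period.
    ga.run(trial_budget)
    aco.run(trial_budget)

    # Phase 2: keep the better-performing algorithm and seed it with the
    # other's good solutions (chromosomes for GA, pheromone updates for ACO).
    leader, follower = (ga, aco) if ga.best_makespan() <= aco.best_makespan() else (aco, ga)
    leader.seed(follower.best_solutions())

    # Phase 3: continue the leader; reset the parameters that cause premature
    # convergence whenever no improvement is found for a while.
    best, stagnant = leader.best_makespan(), 0
    for _ in range(total_budget - 2 * trial_budget):
        leader.run(1)
        if leader.best_makespan() < best:
            best, stagnant = leader.best_makespan(), 0
        else:
            stagnant += 1
        if stagnant >= stagnation_limit:
            leader.reset_parameters()
            stagnant = 0
    return best
```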
The inner mechanism of the introduced approach follows the procedure described above. As mentioned before, the adaptive system performs according to the structure of the test case; the procedural steps of the proposed approach are handled by the decision mechanism, which determines which of the two optimization approaches is executed in the next step.
Artificial immune systems (AIS)
AIS is a rule-based machine learning system inspired by the structure of the immune systems of living creatures and is typically modeled on the immune system's characteristics. AIS algorithms have been used in scheduling problems for more than 20 years. The basic approach is to create random antibodies that represent solutions and then to try to improve them using various mutations. The antibody design of the algorithm is basically the same as in the Genetic Algorithm. AIS does not take part in the proposed hybrid approach but is used as a comparison algorithm in Section 4.3. The AIS algorithm in this paper was constructed using the papers of Engin & Doyen and of Nhu Binh Ho and others [26,27].
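For comparison, a bare-bones clonal-selection loop of the kind AIS implementations typically use (random antibodies improved by mutation) is sketched below; it is a generic illustration with user-supplied evaluate, random_antibody and mutate functions, not the implementation of [26,27].

```python
def ais(evaluate, random_antibody, mutate, pop_size=30, clones=5, iterations=200):
    """Minimal clonal-selection sketch: keep a population of antibodies,
    clone and mutate the best ones, and rebuild the population from the
    elite plus the best mutated clones."""
    population = [random_antibody() for _ in range(pop_size)]
    for _ in range(iterations):
        population.sort(key=evaluate)              # lower makespan = higher affinity
        elite = population[: pop_size // 3]
        offspring = [mutate(ab) for ab in elite for _ in range(clones)]
        offspring.sort(key=evaluate)
        population = elite + offspring[: pop_size - len(elite)]
    return min(population, key=evaluate)
```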
Experimental results
In the experimental results, the algorithms are compared on three problem sets: a small-scale problem, a large-scale problem, and a set of shuffled problems. For the first two problems, each algorithm is executed 5 times to obtain a clearer result. In the third problem set, the proposed hAG algorithm is compared with ACO, GA, and AIS on 5 different problems, each with 5 jobs, 4 processes and 3 machines. A table and a graphical method are used for comparing the algorithms. The table uses three parameters for comparison. The first is the Success Rate (SR), which indicates the percentage of all trials in which the algorithm found the best solution. The next is the Average Relative Percentage Deviation (ARPD), which shows by how many percent the makespan of the algorithm's solutions is worse than the best solution for that trial. The last parameter is how much Central Processing Unit (CPU) time the algorithm needs for running. In addition, a graphical method is used to compare the algorithms: this graphic shows every improved result found by an algorithm in any iteration, so the algorithms' progress with respect to time can be monitored with this tool.
All algorithms are written in JavaScript and executed in Google Chrome browser in a MacBook Pro with 3 GHz Intel Core i7 processor and 8 GB 1600 MHz DDR3 memory.
Problem with 4 jobs - 4 processes - 4 machines
In this section, the algorithms are tested on a problem that has 4 jobs, 4 processes, and 4 machines. All algorithms were run 5 times; the average results are shown in Table 2. The hAG model and GA found the best results in 60% of trials, while ACO found the best results in only 20% of trials. In addition, the hAG model's ARPD was slightly higher than GA's, and it used more CPU time than the other algorithms. Also, Figure 3 shows when each algorithm found each solution across all trials of this problem.
Problem with 6 jobs - 6 processes - 6 machines
In this section, all the presented algorithms are tested on a problem with 6 jobs, where every job has 6 processes and processes have to be assigned to 6 machines. All algorithms were executed 5 times and the average results are shown in Table 3. In all evaluations, the hAG model found the best solution. The ARPD of GA was about 40% and the ARPD of ACO was 14%. The hAG model used significantly more time than the other two algorithms. In addition, Figure 4 shows each algorithm's solution-finding times for all trials combined.
Shuffled 5 problems with ACO-GA-AIS and hAG
In this section, the algorithms are compared with hAG. The algorithms are tested on 5 different problems with 5 jobs, 4 operations, and 3 machines; the operations' running times on the machines are shuffled among the problems. The results can be seen in Table 4. The hAG model found the best solutions in all tests but used more time than any other algorithm.
Conclusion
In this paper, a hybrid Ant Colony Optimization (ACO) - Genetic Algorithm (GA) approach is presented. The introduced hAG model has a better performance rate than the other algorithms on large-scale test problems. In small-scale test cases, the introduced model has a success rate (SR) similar to the genetic algorithm but better than ant colony optimization. In both types of problems, the proposed hybrid approach needs more CPU time for execution. Additionally, this study has experimentally demonstrated that in large-scale problems ant colony optimization is better than the genetic algorithm, while in small-scale problems the genetic algorithm is better than ant colony optimization.
Table 3. Algorithm success table (computational time in s) on the 6j/6p/6m problem.
Impact of hemodilution on flow cytometry based measurable residual disease assessment in acute myeloid leukemia
Measurable residual disease (MRD) measured in the bone marrow (BM) of acute myeloid leukemia (AML) patients after induction chemotherapy is an established prognostic factor. Hemodilution, stemming from peripheral blood (PB) mixing within BM during aspiration, can yield false-negative MRD results. We prospectively examined hemodilution by measuring MRD in BM aspirates obtained from three consecutive 2 mL pulls, along with PB samples. Our results demonstrated a significant decrease in MRD percentages between the first and second pulls (P = 0.025) and between the second and third pulls (P = 0.025), highlighting the impact of hemodilution. Initially, 39% of MRD levels (18/46 leukemia-associated immunophenotypes) exceeded the 0.1% cut-off, decreasing to 30% (14/46) in the third pull. Additionally, we assessed the performance of six published methods and parameters for distinguishing BM from PB samples, addressing or compensating for hemodilution. The most promising results relied on the percentages of CD16dim granulocytic population (scarce in BM) and CD117high mast cells (exclusive to BM). Our findings highlight the importance of estimating hemodilution in MRD assessment to qualify MRD results, particularly near the common 0.1% cut-off. To avoid false-negative results by hemodilution, it is essential to collect high-quality BM aspirations and preferably utilizing the initial pull for MRD testing.
Unfortunately, approximately 30% of MRD-negative patients still experience relapse [13].Although MRD is not the only determinant of relapse occurrence, a factor that may contribute to false-negative MRD results is hemodilution, caused by the admixing of PB during aspiration of the highly vascular BM [14].This effect was discovered by retracing radioactively labelled erythrocytes in BM aspirates that were injected intravenously [15,16].Hemodilution may lead to an underestimation of the MRD percentage and thus to a false-negative result, due to different proportions of leukemic blasts in PB compared to BM [17,18].Consequently, both qPCR and MFC are influenced by hemodilution, making them less reliable when a high volume of PB is aspirated with the BM.
In recognition of this problem, the United States Food and Drug Administration (FDA) advises to take hemodilution into account when assessing MRD and requests that investigators use the first BM pull for MRD assessments [19]. Practical challenges arise due to the required amounts of patient material for the different routine diagnostic tests (flow cytometry, qPCR) that need to be performed, or potential obligations related to sending BM for a clinical trial. Therefore, in practice, it is difficult to adhere to the aforementioned advice to use first pull BM aspirates only and, thus, second or later pulls could result in hemodilution. The European LeukemiaNet (ELN) encourages laboratories to explore strategies for assessing hemodilution, especially when MRD is used for clinical decision-making [3,10,20-22].
Various formulas and approaches have been proposed to quantify or compensate for hemodilution in AML and other hematologic diseases [8,16,[23][24][25][26][27][28].These formulas may have additional requirements for laboratory procedures, such as acquisition of paired BM-PB samples or the inclusion of markers such as CD16 that are not typically part of an MRD AML panel.Another way to mitigate hemodilution effects could be by using the primitive marker based MRD assessment (PM-MRD) as a denominator instead of CD45-expressing cells, as this equation is expected to be less influenced by changes in cell proportions [29].Furthermore, another potential solution is using PB as an alternative specimen to BM.However, despite some smaller studies demonstrating a correlation between the two specimens, the reduced sensitivity of PB-MRD and the lack of validation in large-scale prospective studies make it unlikely that this approach will be the definitive solution in the near future [30][31][32][33].Nevertheless, the exact impact of hemodilution on MRD measurement results remains largely unknown.
In this study, we prospectively assessed the impact of hemodilution by dividing the regular 6 ml of BM into three separate 2 ml pulls, numbered according to their collection order. These three BM samples, along with a PB sample collected on the same day, were individually analyzed and compared at each time point for all patients. Furthermore, we evaluated the sensitivity and specificity of previously established hemodilution formulas by applying them to the four samples of each patient and comparing the outcomes. To facilitate this analysis, a new flow cytometry tube incorporating all necessary antigens was utilized, including CD16 to identify the CD16dim granulocytic population that is virtually absent in BM, and CD117 to identify CD117high mast cells. Finally, we examined the changes in cell populations across subsequent BM pulls and PB to determine the most effective method for distinguishing between the samples.
Patients and treatment
This prospective study included AML patients aged 18 years or older undergoing high-dose chemotherapy following HOVON-SAKK guidelines at the Department of Hematology of Amsterdam University Medical Center (UMC).Patients with acute promyelocytic leukemia (APL) were excluded.Eligible patients in complete remission (CR) at the time of BM aspiration provided written informed consent.BM samples were collected after one or two cycles of chemotherapy.The study adhered to the Declaration of Helsinki (2013) and Medical Research Involving Human Subjects Act (WMO).Additionally, it derives, in part, from the trial registered under the identifier NL9690.
Dilution series
Standard practice guidelines for BM aspiration were followed [34].After aspirating 1 mL of BM for inspection of spicules and morphology, three additional tubes were filled (max 3 mL per pull) following a "four eye" principle.The aspiration needle remained stable.PB samples were collected using heparin tubes within three hours of the aspiration.Given that the initial 1 mL of BM was reserved for morphology examination, the first pull dedicated to MRD measurement, referred to as Pull 1 in this manuscript, corresponds to the second pull in the sequential order.Despite aligning with clinical practice, we chose to designate it as Pull 1 for clarity purposes.
Multiparameter flow cytometry MRD assessment
Before MFC measurement, the white blood cells (WBCs) were counted in all four samples (pull 1, pull 2, pull 3 and PB). If ≥1,000,000 cells were present, then all samples were stained with the full four-tube eight-color AML-MRD panel, which has been prospectively used in large clinical trials [21,35]. An additional tube (designated P6) for hemodilution analysis was used, containing monoclonal antibodies against CD10, CD16, CD38 and CD138, which are necessary to validate the previously published formulas for detecting hemodilution. A comprehensive overview of the previously published formulas can be found in Table 1. The panel composition of the four-tube eight-color panel can be found in Supplementary Table S1. When an insufficient number of cells was available for the entire panel (four MRD tubes and P6 tube), priority was given to the tube containing the LAIP at diagnosis, followed by the P6 hemodilution tube, and subsequently the remaining tubes in the order of their number. The procedure for measuring MRD was as previously described [10,34,36]. For previously published hemodilution parameters, the gating strategy utilized in the original publication was replicated where possible. To avoid intra-instrument and inter-operator differences, all samples from one time point were measured on the same FACSCanto II Flow Cytometer (Becton Dickinson, San Jose, CA, USA) by the same operator. In addition, all samples were gated by the same expert to minimize the inter-gating variability. MRD was determined as the proportion of LAIP-positive cells relative to the total WBC count.
Reference samples from both BM and PB, previously measured in other studies, were utilized as controls [31]. The blast percentage is calculated based on the cells expressing CD45 and either CD34, CD117, or CD133. This percentage was derived from the LAIP or, if no LAIP was identified, from the highest among the three markers. A LAIP consists of the CD45 marker, a primitive marker (CD34, CD117, or CD133), and an aberrant marker. In cases where multiple LAIPs were identified within a sample, they were not combined; only the LAIP with the highest quantity was documented. A LAIP percentage exceeding 0.1% of the total WBC was classified as MRD-positive. In parallel with the LAIP method, the Different-from-normal (DfN) approach was employed; nevertheless, none of the samples yielded MRD-positive results using this method [37]. MRD results from the first pull were reported back to the clinic. This study exclusively focused on flow-based MRD measurements, and molecular assays were not concurrently used for MRD assessment in subsequent pulls.
Statistical analyses
A Friedman test compared percentages of blasts, MRD, and PM-MRD among pulls and PB, followed by Dunn-Bonferroni tests for pairwise comparisons. Differences in outcomes across specimens were assessed using Friedman's ANOVA test. Wilcoxon tests compared two groups (e.g., pull 1 vs. PB). To determine which population discriminated best between BM pull 1 and PB, the Chi-squared test was used. Additionally, the ability of individual features and/or populations to discriminate between BM and PB samples was evaluated in a binary classification task using the area under the receiver operating characteristic curve (ROC-AUC). For every potential threshold, the true- and false-positive rates were determined using the scikit-learn package (v1.2.2) in Python (v3.9.10), with the optimal threshold determined as the threshold where the difference between the true- and false-positive rate was the smallest. Statistical significance was defined as a p-value < 0.05. Analyses were performed using GraphPad Prism® Version 5.00 (GraphPad Software, San Diego, CA), R with the ggplot2 package, and Python (Python Software Foundation).
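A sketch of how such a ROC-AUC and threshold selection could be computed with scikit-learn is shown below. The input arrays are placeholders, and the threshold rule implemented is the common Youden-style choice (maximizing TPR minus FPR); this is an assumption on our part rather than necessarily the exact rule used in the study, whose wording is ambiguous.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# y_true: 1 = BM sample, 0 = PB sample; score: the candidate feature,
# e.g. the mast cell percentage (values below are placeholders).
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
score = np.array([0.030, 0.012, 0.004, 0.020, 0.001, 0.002, 0.000, 0.001])

fpr, tpr, thresholds = roc_curve(y_true, score)
print("ROC-AUC:", auc(fpr, tpr))

# Youden-style choice: threshold maximizing TPR - FPR
# (assumed here; the exact rule used in the study may differ).
best = np.argmax(tpr - fpr)
print("Selected threshold:", thresholds[best], "TPR:", tpr[best], "FPR:", fpr[best])
```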
RESULTS
We analyzed 30 patients (median age: 62, range: 19-75), with relevant patient characteristics in Supplementary Table S2.We collected 40 paired BM and PB samples, post-cycle 1 (n = 14) and cycle 2 (n = 26).Each BM sample had three pulls, totaling 160 samples.All had enough cells for the tube containing the LAIP at diagnosis, except one pull 3 and two PB samples from three patients.Of the 160 included samples, 157 had sufficient cells for diagnosing the LAIP-containing tube.The complete four-tube panel could be measured for 141 samples (88.1%), each with at least 1,000,000 WBC per tube.For 10 samples from three patients, only two tubes were measurable, while for 6 samples from two patients, only three tubes could be measured.Eleven samples had no detectable LAIP above 0.01% MRD in the first pull.Among the 30 samples, we identified 46 distinct LAIPs above the 0.01% MRD threshold in pull 1.
Consecutive pull analysis
We observed significant decreases in the primitive blast-and median MRD% between different pulls and PB samples (Supplementary Table S3).The median MRD% in pull 1 was 0.055%, which was significantly higher compared to pull 2 (0.045%), pull 3 (0.040%), and PB (0.01%) (P < 0.001).Pairwise comparisons, adjusted for multiple testing, revealed significant differences between pull 1 and pull 2 (P = 0.025), pull 1 and pull 3 (p < 0.001), and pull 2 and pull 3 (P = 0.025) (Fig. 1A).However, there was no significant difference in the primitive-marker MRD (PM-MRD) levels among the sample pulls and PB (Fig. 1C).Decreases in MRD percentages between consecutive pulls differed among samples (Fig. 1D).Using a 0.1% cut-off, we found that 18 out of 46 leukemia-associated immunophenotypes (LAIPs) (39.1%) were positive in the first pull, compared to 16 out of 46 (34.8%) in pull 2 and 14 out of 46 (30.4%) in pull 3.In PB samples, 4 out of 46 (8.7%) LAIPs were above the 0.1% cut-off, of which all samples were also MRD-positive in BM (Fig. 1E).A sample was considered MRD-positive if at least one LAIP was above the cut-off.Regardless of the decrease in MRD% observed between pull 1 and 2, 9 (22.5%) were considered MRD-positive in both the first and second pulls.One sample became MRD-negative in the third pull (20% MRD-positive or 8 out of 40), and 3 out of 40 PB samples were MRD-positive using the 0.1% cut-off.Based on the data, among the 30 patients, 9 were classified as MRD positive in the first pull.Pull 3 yielded only one "false negative" result.However, when the pulls would be pooled by taking the median of the three pulls, no differences were found compared to the first pull.
Validation of hemodilution markers
The required P6 tube could only be measured in 28/40 samples because the tube was not available at the start of the study and ten samples had insufficient cell numbers.This tube contained the CD-markers that were not present in our standard four-tube assay but that were necessary to validate the previously published formulas for detecting hemodilution.Of the six formulas, only the one proposed by Holdrinet et al. [16] could not be tested due to the necessity to measure erythrocytes, which are lysed during our regular sample processing steps.
Peripheral blood contamination index
The PB contamination index (PBCI) formula consists of three different cell populations: CD10+ neutrophils, CD34+ cells and CD138+ CD38+ plasma cells (Table 1) [23]. The assumption behind this formula is that CD34+ cells and plasma cells are almost absent in PB, while neutrophils are primarily present in PB. We observed a statistically significant increase in CD10+ granulocytes (Fig. 2A) and a significant decrease in CD34+ cells (Fig. 2B) and plasma cells (Fig. 2C) between pull 1 and pull 3/PB, but not between pull 1 and pull 2. Combining these changes, the PBCI was calculated for all samples, and a significant increase in PBCI was observed from pull 1 to all other samples (Fig. 2D). Applying the published threshold of 1.2 PBCI, which distinguishes contaminated samples from those of good quality, three samples from pull 2 and two samples from pull 3 exceeded this cut-off.
Calculating PBCI in PB showed that only 16/26 PB samples also exceeded this cut-off.
Predicted bone marrow purity
The formula proposed by Aldawood et al. [24], aimed at determining BM purity, was used to normalize the blast population and not for MRD optimization.Despite this, we applied the formula to our samples as it addresses hemodilution and estimates BM purity.According to the formula, lymphocytes, primarily derived from PB, can be used as a surrogate to estimate pure BM proportions.Analysis of the lymphocyte population in the three BM pulls and PB revealed a significant increase in lymphocytes between pull 1 and PB, but not between the other pulls (Supplementary Fig. S2A).When assessing BM purity for all samples according to the Aldawood formula, a modest but progressive reduction was observed from pull 1 to pull 3 but this did not reach statistical significance (Supplementary Fig. S2B; P = 0.20).
Normalized blast count
The normalized blast count (NBC) formula, originally designed to evaluate and correct blast counts, was used to correct for an estimated general degree of hemodilution, based on a comparison of the proportion of mature myeloid cells (designated as CD16dim cells) to immature blast cells [25]. Calculating the NBC showed no significant differences from the original blast counts in any of the pulls (Supplementary Fig. S3).
Mature neutrophil contamination

The ELN addressed the issue of hemodilution in their 2018 MRD guidelines [8]. They recommend estimating PB contamination by assessing the percentage of mature neutrophils (CD16dim cells) within the total white blood cell (WBC) population. An increase in the percentage of mature neutrophils to >90% would indicate significant hemodilution. We observed a significant increase in the percentage of mature neutrophils with each pull (Fig. 3). The median percentage changed from 74.05% in pull 1 to 79.68% in pull 2 (P = 0.030), 80.02% in pull 3 (P = 0.016), and 97.96% in PB (P < 0.001). Using the proposed cut-off of 90%, two samples from pull 1, five samples from pull 2, and four samples from pull 3 would be identified as hemodiluted. In two PB samples the mature neutrophil percentage was <90%.
Mast cell based blood contamination estimation
As mast cells (CD117high) are solely present in the BM, a decreased percentage (⩽0.002%) can suggest blood contamination [26]. Mast cell populations were measured in all samples, and a decrease was observed between pull 1 and pull 2 (P = 0.076), pull 1 and pull 3 (P < 0.001), and pull 1 and PB (P < 0.001) (Fig. 4A). Applying the 0.002% cut-off, four samples from pull 1, ten from pull 2, and 13 from pull 3 were designated hemodiluted. All PB samples had mast cell levels ⩽0.002%. All BM samples with mast cell populations ⩽0.002% in pull 1 or pull 2 remained below this limit in subsequent pulls (Figs. 4B and 5C).
Concordance between methods
By combining the three formulas that use a cut-off level, we assessed the concordance between methods (Fig. 5). Concordance was best between the recommended ELN method, which evaluates CD16dim cells, and the mast cell population method. All samples marked as diluted by the ELN method, except for one pull 1 sample, were also marked as diluted based on the mast cell threshold. The mast cell population method consistently labeled the highest number of samples as diluted in all successive pulls (Fig. 5C).
Retrospective re-analysis of samples

Among the previously published formulas, only the mast cell formula could be tested retrospectively in previously measured samples, since CD117 is a backbone marker in the fixed four-tube panel. We validated mast cells as an indicator of hemodilution in borderline (0.06-0.09%) MRD-negative samples from the HO102 and HO132 prospective phase 3 trials (n = 18, Supplementary Table S4) [21, 35]. These samples were analyzed after two cycles of chemotherapy to identify potential cases of hemodilution and subsequent relapse. Among these, four samples had mast cell percentages below the threshold, with three patients relapsing within two years, suggesting potential false-negative MRD reports. Furthermore, a sample measured both at the treating center and at the central lab showed how mast cells could quantify hemodilution (Supplementary Fig. S4). At the treating center, a CD45+ CD13+ CD7+ LAIP comprising 0.18% of WBCs was detected, while the central lab observed the same LAIP at 0.05%, reporting it as MRD-negative. Retrospective analysis of the mast cell percentages showed a level of 0.024% in the MRD-positive sample at the treating center, compared to 0.001% in the MRD-negative sample measured at the central lab, indicating hemodilution as the likely cause of the disparity.
Proposed hemodilution indicator
In addition to the previously published formulas, we assessed various individual cell populations to determine how they changed between the successive pulls and PB. The ability to differentiate between BM and PB samples based on these parameters was evaluated by comparing the Area Under the Curve (AUC) of a Receiver Operating Characteristic (ROC) curve. Four parameters (CD10+ granulocytes, plasma cells, CD16dim cells, and mast cells) and the PB contamination index showed AUCs >0.9 (0.956, 0.949, 0.940, 0.924, and 0.905, respectively). Notably, the optimal cut-off for mast cells (0.002%) is the same as the threshold proposed by Flores-Montero et al. [26]. An overview can be found in Fig. 6.
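The AUC comparison above can be reproduced for any single candidate marker with a simple pairwise (Mann-Whitney-style) computation. The sketch below is illustrative only: the marker values are hypothetical placeholders rather than the study's data, and the language choice is arbitrary.

```csharp
using System;

// Sketch: AUC for discriminating BM from PB on a single parameter
// (e.g., mast cell percentage). The values below are hypothetical placeholders.
class AucSketch
{
    // AUC = fraction of (BM, PB) pairs in which the BM value exceeds the PB value,
    // with ties counted as 0.5 (the pairwise form of the Mann-Whitney statistic).
    static double Auc(double[] bm, double[] pb)
    {
        double score = 0.0;
        foreach (double b in bm)
            foreach (double p in pb)
                score += b > p ? 1.0 : (b == p ? 0.5 : 0.0);
        return score / (bm.Length * (double)pb.Length);
    }

    static void Main()
    {
        // Hypothetical mast cell percentages; higher values are expected in BM.
        double[] bm = { 0.012, 0.008, 0.020, 0.003, 0.015 };
        double[] pb = { 0.001, 0.000, 0.002, 0.001, 0.000 };
        Console.WriteLine($"AUC = {Auc(bm, pb):F3}");
    }
}
```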
DISCUSSION
Hemodilution is a crucial factor that poses a significant challenge to reliable MRD assessment, especially near the 0.1% threshold. Despite previous proposals for formulas to detect or quantify hemodilution, there is currently no widely used method or consensus on a standard approach. In our cohort, we observed a significant decrease in MRD percentages between the first 2 ml of BM (pull 1) and subsequent pulls, leading to shifts from MRD-positive to MRD-negative, although the effect may be smaller when the first 6 ml is pooled and the median is taken, which is closest to our clinical practice for determining the MRD status. This finding is concerning, considering that our study design was relatively conservative, only subdividing the first 6 ml of BM, while the effect might persist in further pulls. Hence, hemodilution prevention or quantification is crucial.

Fig. 2 Individual cell populations used to calculate the peripheral blood contamination index (PBCI), and PBCI in subsequent samples. A CD10+ neutrophils were not statistically different between pull 1 and pull 2, but were significantly different between pull 1 and pull 3 and significantly higher in PB. B The CD34+ population significantly decreased between pull 1 and pull 3/PB. C Plasma cells decreased with subsequent pulls, resulting in a statistically significant difference between pull 1 and pull 3/PB, but no difference between pull 1 and pull 2. D Calculated PBCI showing a significant increase between pull 1 and subsequent pulls. When the 1.2 threshold is applied, three samples from pull 2, two from pull 3 and 15 of the PB samples would be marked as diluted.
The safest and easiest way to prevent hemodilution is to follow the European LeukemiaNet (ELN) recommendation B3, which suggests taking only 5 mL of BM aspirate from the first pull of the syringe for MRD assessment [3]. However, this option may not be feasible when BM also needs to be collected for different assays such as MFC, qPCR and possibly NGS. Another proposed solution is to reposition or reinsert the needle after the first aspiration, although its impact on MRD results remains uncertain. Since it is often not possible to prevent hemodilution, the use of formulas to detect or quantify hemodilution appears necessary to warn clinicians of possibly unreliable MRD results. However, the formulas we tested have both advantages and limitations. The PBCI, relying on the CD10, CD38, and CD138 markers that are not commonly included in MRD assays, showed good discrimination between BM and PB (AUC: 0.905), but this was achieved using the optimal cut-off in this cohort of 0.354, which is lower than the proposed 1.2. When using the proposed 1.2 cut-off, 10/26 PB samples would still not be designated as hemodiluted, thus providing lower sensitivity for hemodilution. The degrading impact of sample aging on plasma cells and their CD138 expression is noteworthy; however, it was not a concern in this study since all samples were processed within 24 h. Nevertheless, it is important to acknowledge that this factor could potentially affect the reliability of the formula. The CD10+ granulocyte and plasma cell populations by themselves have good AUCs of above 0.9, apart from their use in a formula (Fig. 6). Another formula, the normalized blast count formula, which requires an additional CD marker (CD16), showed moderate normalization of the blast count and may not be sufficient for hemodilution detection. In addition, the lymphocyte and leukocyte compartments were not considered valuable enough for hemodilution detection. The formula based on the percentage of mature neutrophils (CD16dim cells) within the total WBC population, as recommended by the ELN guidelines, performed well in discriminating BM from PB samples [8]. However, in this smaller cohort the optimal cut-off was not 90% as proposed, but 95.94%. With both thresholds, only two PB samples would be marked as not diluted and two pull 1 samples would be marked as diluted. Implementing this formula could be a practical way to quantify hemodilution, although the use of CD16 as a marker may not be standard in all panels. The mast cell population, which depends on the CD117 marker (a backbone marker), proved to be the easiest formula to use and showed good performance, with none of the PB samples exhibiting a mast cell concentration above the proposed threshold of 0.002%.
In accordance with the standard protocol, the initial ml of BM was reserved for morphology analysis and assessment of BM quality. This could potentially account for the characterization of pull 1 samples as diluted and lead to an underestimate of the effect of hemodilution. Another possibility is that the mast cell test might be overly sensitive, as the first 2 ml pull of BM samples contained insufficient mast cells, as illustrated in Fig. 5. Nevertheless, we recommend implementing the mast cell population as a quick indicator of hemodilution. In cases of borderline MRD-negative samples (MRD between 0.07% and 0.1%), the mast cell concentration can provide additional information to determine whether the negative result is most likely truly negative or possibly affected by hemodilution. If hemodilution is suspected, clinicians can be notified that MRD levels may not be reliable and a new BM aspiration should be advised.

Fig. 3 CD16dim expression in successive BM samples and PB. CD16dim cells as a proportion of the total WBC significantly increased, with a median of 74.05% in pull 1, 79.68% in pull 2 (P = 0.030), 80.02% in pull 3 (P = 0.016) and 97.96% in PB (P < 0.001). When the proposed 90% cut-off is used, two samples from pull 1, five samples from pull 2 and four samples from pull 3 would be marked as hemodiluted. For comparison, CD16dim expression in PB is shown.

Fig. 4 Mast cell population. A Mast cell populations (CD117high) were measured in all samples, with a decrease between pull 1 and pull 2 (P = 0.076), pull 1 and pull 3 (P < 0.001) and pull 1 and PB (P < 0.001). When the 0.002% cut-off was applied, four samples from pull 1, ten from pull 2 and 13 from pull 3 were designated as hemodiluted. All PB samples showed CD117high percentages <0.002%. B All samples marked as diluted due to a low mast cell population in the first pull were also marked as diluted in the subsequent pulls and PB.

Fig. 5 Concordance of samples marked as diluted. A Using the 1.2 threshold of the peripheral blood contamination index (PBCI), three samples from pull 2 and two from pull 3 were marked as diluted (in red). A sample that could not be measured is shown as X. B The European LeukemiaNet (ELN) recommends marking samples with a CD16dim population of >90% as diluted. With this cut-off, two samples from pull 1, six samples from pull 2 and three samples from pull 3 were marked as diluted. C A sample with ⩽0.002% mast cells was considered to be diluted. Five pull 1 samples, 12 pull 2 samples and 15 pull 3 samples met this criterion. D Combination of all three formulas that use a cut-off, for assessment of concordance between techniques. Concordance was most profound between the ELN method (CD16dim population) and the mast cell population, where all samples marked as diluted by the ELN method, except for one pull 1 sample, were also marked as diluted based on the mast cell threshold. At all time points, most samples were marked as diluted based on the mast cell population. For comparison, PB results are given in (A, B, C, D).
Remarkably, the outcomes of PM-MRD analysis exhibited no statistically significant differences across the successive samplings (Fig. 1C). This observation suggests an increased stability of PM-MRD against hemodilution in comparison to the conventional MRD methodology, but further investigation is needed.
There are still several unresolved questions regarding the impact of hemodilution on MRD outcomes. Prior research has indicated similar blast counts between BM aspirations and biopsies in patients with AML, suggesting that malignant cells are not aspirated in higher proportion than non-malignant cells [38, 39]. However, discrepancies in the aspiration of different cell types can arise in specific cases involving markers such as LAIPs with CD56, or other adhesion molecules, potentially due to the adhesive properties of malignant cells or their interactions with the BM microenvironment [40-42]. Therefore, some LAIPs may be more susceptible to hemodilution than others. Nevertheless, it is imperative to emphasize that the available dataset currently lacks the requisite scale and scope to definitively address this intricate question.
Limitations include the small sample size of only 30 patients, which may limit the generalizability of the findings. Throughout the study design, we took measures to minimize differences between samples, such as a four-eyes principle to ensure the right amount was aspirated, using paired samples on the same flow cytometry machine, and having all analyses performed by the same lab technician. However, variability can still arise during processing, including pipetting or gating, which may explain some of the observed variability between pulls, particularly when dealing with small differences of 0.01%.
Future studies should prospectively investigate the relevance of the mast cell formula in particular and correlate the results with clinical outcomes, to see whether correcting for low mast cell percentages can decrease false-negative MRD results. At this point, however, the formulas can only be used to identify hemodilution and not to correct for it. Once validated, these formulas could be implemented as a standard comment on sample quality in MRD reporting. The differences between BM and PB cell populations should serve as the basis for hemodilution detection formulas. Additionally, with the increasing use of automated gating, an automated hemodilution index could be developed and added to the MRD assessment process, as recently proposed by Hoffman et al. [27]. However, caution must be exercised to avoid script-mediated errors when comparing data sets [43]. Furthermore, detecting hemodilution may also be important in other hematological diseases such as acute lymphoblastic leukemia (ALL) or multiple myeloma (MM), where MRD assessment from BM is critical and hemodilution may also lead to false-negative results. Therefore, we advise validating these formulas in these diseases as well [44].
In conclusion, hemodilution is a concern even after minimal BM aspiration and warrants consideration in MRD assessment. We recommend incorporating a hemodilution formula, focusing on CD16dim or mast cell (CD117high) populations. Additionally, to emphasize the importance of the first pull for MRD measurement, BM tubes should be numbered in order of aspiration, with strong advice to send the first pull to the MRD lab; if this is impossible, the tube number should be included, in combination with the mast cell percentage, in the final MRD report. In cases of uncertainty, advising repetition of the BM aspiration is prudent, especially for MFC-MRD between 0.07% and 0.09% in later pulls.

Fig. 6 Discrimination of factors between bone marrow (BM) and peripheral blood (PB) based on the receiver operating characteristic (ROC) curve. Four parameters (of which the mast cell population is also proposed as a formula) and the PB contamination index formula were able to correctly identify BM and PB samples with an AUC above 0.9: CD10 (AUC: 0.956), plasma cells (AUC: 0.949), CD16dim (AUC: 0.940), mast cells (AUC: 0.924) and the PB contamination index (AUC: 0.905).
Fig. 1 Differences in measurable residual disease (MRD) between samples. A Differences in MRD percentages, given as a percentage of the total white blood cell (WBC) count, between pull 1 (first 2 ml of bone marrow (BM)), pull 2 (second 2 ml of BM), pull 3 (third 2 ml of BM) and peripheral blood (PB). Boxes represent the samples between 10% and 90% of the total. Differences between pull 1 and pull 2 (P = 0.025), pull 1 and pull 3 (P < 0.001) and pull 2 and pull 3 (P = 0.025) were statistically significant. All pulls had a significantly higher MRD percentage compared to the paired PB samples. B Differences in primitive blasts (CD45+ cells with a primitive marker, being CD34+, CD117+ or CD133+) between the three pulls and PB. A significant difference was found between pull 1 and pull 2/pull 3, but not between pull 2 and pull 3. C Differences in primitive-marker MRD (PM-MRD), depicted as the percentage of LAIP cells with the primitive cells as denominator, showed no statistical differences between the pulls, nor between BM and PB. D Consecutive MRD results of the individual successive pulls and PB. Colours indicate the level of absolute decrease between pull 1 and pull 3. E In the 40 paired samples, a total of 46 different leukemia-associated immunophenotypes (LAIPs) were identified. Based on the 0.1% cut-off, 18/46 LAIPs (39.1%) were positive in the first pull, compared to 16/46 (34.8%) in pull 2 and 14/46 (30.4%) in pull 3. In the PB samples, only 4/46 (8.7%) of the LAIPs were above the 0.1% cut-off.
Table 1. Overview of previously published formulas to calculate hemodilution.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2010-06-22T00:00:00.000
|
14338314
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcbiol.biomedcentral.com/track/pdf/10.1186/1741-7007-8-89",
"pdf_hash": "2fc5fbfdd6542845a300ffd34384dbd0734ff663",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44462",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "2fc5fbfdd6542845a300ffd34384dbd0734ff663",
"year": 2010
}
|
pes2o/s2orc
|
Characterization of a heat-resistant β-glucosidase as a new reporter in cells and mice
Background Reporter genes are widely used in biology and only a limited number are available. We present a new reporter gene for the localization of mammalian cells and transgenic tissues based on detection of the bglA (SYNbglA) gene of Caldocellum saccharolyticum that encodes a thermophilic β-glucosidase. Results SYNbglA was generated by introducing codon substitutions to remove CpG motifs as these are associated with gene silencing in mammalian cells. SYNbglA expression can be localized in situ or detected quantitatively in colorimetric assays and can be co-localized with E. coli β-galactosidase. Further, we have generated a Cre-reporter mouse in which SYNbglA is expressed following recombination to demonstrate the general utility of SYNbglA for in vivo analyses. SYNbglA can be detected in tissue wholemounts and in frozen and wax embedded sections. Conclusions SYNbglA will have general applicability to developmental and molecular studies in vitro and in vivo.
Background
A fundamental technique in biological research is the use of reporter genes to track cells or tissues in developmental studies, to quantify or recognize gene expression from defined cis-regulatory elements and to normalise for differential uptake of DNA or delivery vectors in transfection experiments in vitro [1][2][3][4]. The most frequently used reporters are the Escherichia coli lacZ gene encoding β-galactosidase (βgal), the green fluorescent protein (GFP) of Aequorea victoria and, to a lesser degree, human placental alkaline phosphatase [3,5,6]. In transgenic studies GFP tends to be the reporter of choice for studies at single cell or intracellular resolution or where viable cells need to be isolated by fluorescence activated flow sorting (FACS). Histochemical detection of lacZ is still widely used at the single cell/tissue level of resolution especially where visualization is in wholemounts of tissues or embryos. Here we present a new reporter protein for cellular and whole organism studies that is validated in vitro and by generating a Cre-reporter mouse in which the reporter is detected in histological sections following induction of Cre recombinase. This new reporter gene, termed SYNbglA, is based on the bglA gene (GenBank: Accession X12575) of the thermophilic bacterium Caldocellum saccharolyticum that encodes a β-glucosidase (βglu) thermostable to 85°C [7].
Results
A mammalian expression construct in which the subcloned C. saccharolyticum bglA gene is regulated from the human elongation factor 1α (EF1α) promoter was generated to create pEFbglA. For comparison, lacZ was also subcloned to generate pEFlacZ. The EF1α promoter was chosen as, unlike powerful viral promoters such as the cytomegalovirus immediate early promoter (CMV IE1), it does not tend to undergo silencing over time [8]. Following transient transfection cells expressing thermostable βglu could be detected using BCI-glu in fixed cultures with or without heat-treatment (65°C for 20 min; data not shown). Colonies of NIH 3T3 cells transfected with pEFbglA and pEFlacZ were isolated. βglu expression remained detectable after heat treatment unlike βgal which was heat inactivated (Figure 1a-d). To establish if βgal and βglu can be co-localized stable clones separately expressing either reporter were derived and mixed in 1:1 ratio and cultured together for 1-2 days prior to fixation and staining with BCI-glu and Magenta-gal at 37°C. Individual cells in the co-cultures stained with one substrate only and showed no cross-reactivity demonstrating the potential to visualize these two reporters simultaneously ( Figure 1e).
After several passages, cultures derived from bglA+ clones in which all cells stained positive for βglu activity exhibited large numbers of unstained cells. Such a 'silencing effect' has been reported in experiments with the E. coli lacZ gene [9]. Silencing of lacZ is ameliorated by changing its sequence to minimize the number of CpG dinucleotides that are targets for methylation in mammalian cells [9]. The bglA coding sequence contains 109 CpG dinucleotides. Consequently, we undertook to resynthesize bglA such that the nucleotide sequence was depleted of CpG dinucleotides and the codon sequence was biased towards mammalian usage. A nuclear localization signal was also added as a 5' fusion (Figure 2a). The reporter gene thus generated was termed SYNbglA (GenBank: Accession AY528410).
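The first step of such a redesign, locating the CpG dinucleotides that act as methylation targets, amounts to scanning the coding sequence for "CG" pairs. The sketch below shows only that counting step, on a made-up placeholder sequence (it is not the bglA sequence); the subsequent synonymous codon substitutions are only indicated in a comment.

```csharp
using System;

// Sketch: count CpG (5'-CG-3') dinucleotides in a DNA coding sequence.
// The sequence below is an arbitrary placeholder, not bglA.
class CpGCount
{
    static int CountCpG(string seq)
    {
        int count = 0;
        for (int i = 0; i + 1 < seq.Length; i++)
            if (seq[i] == 'C' && seq[i + 1] == 'G')
                count++;
        return count;
    }

    static void Main()
    {
        string seq = "ATGCGTCCGGAACGTTAA"; // placeholder ORF fragment
        Console.WriteLine($"CpG dinucleotides: {CountCpG(seq)}");
        // In an actual redesign, each CpG would be removed by a synonymous
        // codon substitution chosen to match mammalian codon-usage preferences.
    }
}
```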
SYNbglA was subcloned to generate pCMV-SYNbglA that was cotransfected with pSV2neo into 293 cells and G418 resistant clones obtained. The CMV promoter was selected as a more robust test of the resistance of SYNbglA to silencing. Four clones were expanded and all expressed the altered βglu (SYNβglu) strongly and homogenously as evaluated by BCI-glu staining. The clones were serially passaged (cultures approaching confluency were split 1:4 to 1:8 every 2 or 3 days) and continuously evaluated for expression by staining with BCI-glu. No reduction in the proportion of cells stained or in their intensity was observed up to passage 40, the highest analysed (Figure 2b). We conclude that SYNbglA is not prone to the gene-silencing phenomenon observed with bglA and reported with unmodified lacZ [9,10].
βgal expression can be quantified by detecting cleavage of O-nitro-phenyl-galactoside conjugated sugars in colorimetric assays. We confirmed that SYNβglu expression is quantifiable with ONP-glucopyranoside (not shown). This method was then used to gain insight into the functional stability of SYNβglu compared to βgal. Both proteins were expressed inducibly in stably derived clones of 293EcR cells (engineered to express a heterodimeric transcriptional transactivator that only binds to the DNA binding domain present in an inducible promoter in the presence of Pronesterone A, an ecdysone analogue) [11]. The rate at which reporter enzyme activity decayed following withdrawal of Pronesterone A was determined by harvesting treated cells daily for 6 days. Following the cessation of induced transcription, the rate of decay for SYNβglu is similar to that of βgal, with a 50% reduction in activity observed at 2.65 days and 2.33 days, respectively, suggesting that they have similar stabilities within cells ( Figure 3).
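As a hedged aside, if one assumes simple first-order decay (an assumption the text does not state explicitly), the reported times to 50% activity loss translate into decay constants as follows:

\[ A(t) = A_0 e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}} \approx \frac{0.693}{2.65\ \mathrm{d}} \approx 0.26\ \mathrm{d^{-1}}\ (\mathrm{SYN\beta glu}), \qquad \frac{0.693}{2.33\ \mathrm{d}} \approx 0.30\ \mathrm{d^{-1}}\ (\beta\mathrm{gal}). \]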
In contemporary studies of gene function in vivo, genes are not only deleted or overexpressed constitutively but conditionally mutagenized using site-specific recombinases such as Cre [12,13]. Reporter lines for recombinase activity are necessary to determine the timing, spatial regulation and tissue specificity of recombinase expression [14]. Consequently, we decided to generate an alternative reporter line for assessing Cre activity by targeting the ubiquitously expressed murine ROSA26 locus with SYNbglA placed downstream of a floxed STOP cassette to create a line named R26(SYNbglA)R (Figure 4). Frozen sections and intestinal wholemounts from adult R26(SYNbglA)R and non transgenic animals were analysed for βglu activity using BCI-glu. Tissues from R26(SYNbglA)R and non transgenic animals showed identical staining patterns, with all tissues (including liver, oesophagus, bladder, pancreas, muscle and heart) except small intestine showing no staining due to background enzymatic activity. In the small intestine, in both cases, the positive staining was associated with the epithelial brush border and could be prevented by incubation at 65°C for 20 min prior to incubation in BCI-glu (Figure 5a, b). These results indicate that there is no background expression from the targeted allele and that detectable endogenous βglu is only present in the small intestine.
In order to functionally test the R26(SYNbglA)R line it was mated to two different Cre expressing lines, PGKcre m and Ahcre. PGKcre m is expressed from a maternally inherited transgene that is expressed during the diploid phase of oogenesis resulting in complete recombination at loxP flanked cassettes even where PGKcre m is not inherited [15]. Male R26(SYNbglA)R mice were crossed with PGKcre m females and the offspring analysed around the time of weaning. Tissues were excised, fixed and stained as wholemounted tissues in BCI-glu for 48 h. Tissues from age-matched mice not crossed to PGKcre m mice were also prepared. In all tissues of mice obtained from the R26(SYNbglA)R/PGKcre m intercross, including heart, thymus, spleen, pancreas, kidney, skeletal muscle, liver, stomach and brain, there was intense staining with BCI-glu, where there was none in the control tissue. This demonstrates that SYNβglu expression is sustained in diverse tissue types following cre-mediated activation (Figure 5c-i).
Ahcre is a mouse line in which Cre recombinase is conditionally expressed from the rat cytochrome P450 IA1 promoter in several gastrointestinal tissues, following treatment with the inducing agent β-napthoflavone [16]. The Ahcre line was originally validated with the R26R reporter line, in which lacZ is expressed from the ROSA26 locus following cre-mediated recombination, allowing the known pattern of recombination to be compared with that obtained in Ahcre/R26(SYNbglA)R mice. Thus, Ahcre/R26(SYNbglA)R were either left untreated or treated with five daily intraperitoneal injections of 80 mg/kg β-napthoflavone to activate transcription of cre and mediate excision of the stop cassette allowing expression of SYNbglA (Figure 5j, k). There was a very low level of background recombination in the target tissues of untreated adult animals (Figure 5j). After induction, there was near complete recombination with extensive expression of SYNβglu that could be detected with BCI-glu in small intestinal wholemounts (Figure 5k). In the colon of induced Ahcre/R26(SYNbglA)R mice there was extensive recombination that was maximal proximally and became increasingly mosaic towards the anus (Figure 5l-o). In both untreated and treated mice the pattern of recombination was identical to that observed previously in Ahcre/R26R mice although overall the extent of recombination as determined by expression of SYNbglA seems greater in more distal regions of the intestine [16].
Increasingly, transgenic experiments require the simultaneous application of two or more reporter genes. We wanted to establish if SYNβglu can be co-localized with βgal in tissues. In order to achieve this we chose to analyse the intestinal epithelium and exploit the known clonality of intestinal crypts [17]. Ahcre and reporter strains R26R and R26(SYNbglA)R were intercrossed to generate animals carrying all three modifications (Ahcre/R26R/R26(SYNbglA)R mice). These were injected with a single dose of β-napthoflavone to induce Cre submaximally such that mosaic patterns of recombination resulted. The intestines were then analysed for expression of both βgal and SYNβglu after 12 weeks, a time by which the process of crypt monoclonal conversion is largely complete [18]. Intestinal wholemounts or cryostat sections were stained first for βgal (6 h, 37°C) and then, after heat-treatment, for SYNβglu as described in the Methods section using BCI-gal and Mag-glu, respectively. Individual and clusters of stained crypts could be clearly identified and could be related to either reporter expressed alone or occasionally both together (Figure 6a-e).
In order to determine if the introduced SYNβglu could be detected in sections processed for histology in paraffin wax blocks, different fixatives and protocols were tested. Liver, pancreas, bladder and small intestinal samples from Ahcre/R26(SYNbglA)R mice induced with β-napthoflavone 1-6 weeks previously were fixed in various fixatives for 1-6 h, processed through an ascending series of ethanols, xylene and into paraffin wax at 65°C for embedding. Five micrometer sections were cut, dewaxed in xylene and rehydrated before incubation with BCI-glu at 37°C. Clear nuclear localized histochemical product was found in patterns identical to those observed with cryostat sections as described above (Figure 7a and 7b). The main determinant of staining intensity was the fixation protocol, with the best results achieved after 1 h fixation with 2% formalin/0.2% glutaraldehyde.
In order to test whether or not SYNβglu can be localized with other markers, we performed immunohistochemistry specific for intestinal cell types in tissue sections already stained for BCI-glu (either for 2 days at 37°C or overnight at 65°C). Villus enterocytes were easily identified in such sections on the basis of their positive staining for villin as were intestinal goblet cells on the basis of staining for muc2 (Figure 7c and 7d).
Discussion
The βglu encoded by SYNbglA is an easily detected and stable protein that seems an ideal cell marking reporter molecule. It has potential application to cell and molecular studies where gene expression has to be localized. Its potential applications in vivo include studies during development and in the adult where gene expression from defined promoter elements have to be detected or for the fate mapping of tissues or individual cells, for example in clonal studies [19,20].
Like βgal expression levels of SYNβglu can be quantified using colorimetric substrates. Such assays are routinely used for normalizing for vector uptake in transfection experiments with the stability of βgal making it suitable for this purpose. However, such colorimetric assays are relatively insensitive and, as has been pointed out previously for βgal, chemiluminescent substrates can greatly increase sensitivity [3]. Appropriate chemiluminescent substrates are available for the detection of SYNβglu but, to date, we have not attempted to apply them. However, the ultimate limit on sensitivity for βgal is the presence of background enzymatic activity from endogenous, mammalian β-galactosidases and it is likely that the ability to destroy such background by heat inactivation will mean that SYNβglu has an enhanced sensitivity over that of βgal [21].
SYNβglu may have advantages over E. coli βgal for cellular localization in some experimental settings. The thermostability of SYNβglu allows it to be visualized in high-resolution sections from paraffin wax embedded tissues. The size of the coding sequence for SYNbglA is 1.2 kb compared to 3.1 kb for lacZ. This smaller size is advantageous for viral delivery vectors where cloning of large inserts is problematic.
The similarity of SYNβglu and βgal, both in terms of processing requirements for visualization and the relative stabilities of the two proteins, together with the observation that they can be co-localized, suggests that they may be used in tandem in compound genetically modified transgenic animals where gene expression changes are being effected. The compatibility of SYNβglu and βgal for dual detection may become especially significant as transgenic analyses become more elaborate. For example, a stem cell marker gene (Lgr5) has been validated in mice by localizing stem cells using an enhanced yellow fluorescent protein targeted to the Lgr-5 locus along with a Tamoxifen activated Cre recombinase [22]. Recombination mediated by the latter is recognized in crosses to R26R mice as clones of cells expressing βgal. A recent study of neuronal function combines use of GFP and human alkaline phosphatase to allow simultaneous detection of cell bodies, neurites and presynaptic sites and envisages the potential to also detect βgal in crosses to mutant strains [23]. In this regard, SYNbglA will add to the limited platform of available reporters. The R26(SYNbglA)R mice described here will further permit detailed analysis of the pattern of Cre activity in mouse models in which recombination is restricted to specific cell types and tissues and to different stages of development.
Conclusions
SYNβglu is an easily detected reporter protein that has a variety of applications in vitro and in vivo where cell tracking, accurate localisation and high sensitivity is required. SYNβglu may offer some advantages to the E. coli βgal but the ability to use these two reporter systems together suggests that they will complement each other and will be used in tandem. The Cre-reporter animals described here demonstrate the applicability of SYNβglu to transgenic tissues and in the analysis of Cre mediated recombination.
Plasmid cloning
The bglA sequence was initially PCR amplified from pNZ1065 (gift of Dr D Love) and subcloned into pCR3 (Invitrogen) using primers (5' ttccatggGGATCCtaagtttcccaaaaggatttttgtgg 3' and 5' ttAGATCTgtcgacttacgaattttcctttatatactg 3') designed to introduce a consensus translation start sequence (bold) and flanking restriction sites (caps) for subsequent cloning [7,24]. This bglA fragment was subcloned into pEF1alpha [25] to generate an expression construct (pEFbglA) containing the human type 1α elongation factor promoter and SV40 polyadenylation sequence. In order to allow a comparison, the equivalent lacZ construct (pEFlacZ) was also made by conventional subcloning (cloning details available on request). The SYNbglA and lacZ cassettes were also subcloned into pIND (Invitrogen, CA, USA) containing five ecdysone response elements upstream of a Drosophila minimal promoter. Additional cloning details are available upon request.
In order to generate the final gene targeting construct, conventional cloning was performed to make a cassette comprising loxP-PGKneo.pA-loxP SYNbglA.pA which was subcloned into pROS-MCS-13 [26] containing the two arms of ROSA26 locus homology. Additional cloning details are available upon request.
Cell lines
Mouse NIH 3T3 and human 293-EcR cells were obtained from the American Type Culture Collection and Invitrogen, respectively, and were maintained in standard tissue culture media (Dulbecco's Modified Eagle's Medium) containing 10% fetal calf serum. Cells were transfected using the Stratagene MBS kit. Where cotransfection was required for the purpose of antibiotic selection the test plasmid was cotransfected (10:1 ratio) with pSV2neo (Clontech, CA, USA) followed by G418 selection at 1 mg/ mL.
Gene targeting
Embryonic stem cell manipulation procedures were performed by the Gene Targeting Service, Babraham Institute (Cambridge, UK). E14 129Ola ES clones were selected with G418 and colonies initially screened by polymerase chain reaction using primers anchored 5' to the shorter targeting arm (R26TOPV: 5' ggtagtggggtcgactagatgaaggagagcc 3') and at the introduced splice acceptor (R26SAmut: gtcctcaaccgcgagctgtg) which amplified a unique 4kb band. Two clones were selected following further screening by Southern blotting using a probe located 5' to the targeting vector as described previously [26]. These clones, C8 and C10, were microinjected into blastocysts and the resultant chimeras used to establish R26(SYNbglA)R mice. Both clones were evaluated for expression of reporter following induction of Cre in double transgenic Ahcre/R26(SYNbglA)R mice and were found to behave identically.
Protein stability determination
The colorimetric assay for βgal and βglu was carried out using a commercially available kit (No. E2000; Promega, WI, USA) as per the manufacturer's instructions, except that the substrate containing solutions [o-nitrophenyl-β-D-galactopyranoside (ONP-gal), Sigma No. N1127, and o-nitrophenyl-β-D-glucopyranoside (ONP-glu), Sigma No. N8016] at a concentration of 1.33 mg/mL were prepared independently. The absorbance of the cleaved substrates at 420 nm was determined on a Tecan SpectraFluor Plus plate reader.
The commercially available ecdysone inducibility system was obtained including 293EcR cells that stably express the VgRXR receptor (a heterodimer of ecdysone receptor and retinoid X receptor) (Invitrogen, Scotland, UK). In 293EcR cells the heterodimer binds to an ecdysone response element in pIND in the presence of Pronesterone A. Clones of 293EcR cells were obtained by selection with G418 following transfection with pINDSYNbglA or pINDlacZ that contain a neo selection cassette. Clones were screened for conditional expression of βglu and βgal, respectively, in the presence of Pronesterone A (5 μM). For each reporter one clone was plated out at low density in 2.5 cm 2 replicated tissue culture wells which were incubated in media containing Pronesterone A (5 μM) for 24 h. Triplicate wells for each clone were harvested in lysis buffer (Promega, No. 397A) at 24 h intervals after PBS wash (x3). Cells harvested at the end of Pronesterone A treatment were designated day 0. For each lysate total protein concentration was determined using a Pierce BCA kit (PerBio, No. 23227) and the volumes analysed in the enzyme assay were normalized for total protein content.
Treatment of animals
Homozygous Ahcre mice were crossed with R26(SYNbglA)R animals and offspring carrying both transgenes selected for subsequent experiments. Ahcre mice were genotyped as described and R26(SYNbglA)R mice by polymerase chain reaction using a primer combination (5' cagaaaggtagacggatttagcc 3'; 5' gggatacagaagaccaatgcaga 3'; 5' tcctcaaccgcgagctgtg 3') giving a 440 bp and 350 bp product for the wild type and targeted R26 loci, respectively [16]. For induction of the Ah promoter, mice received intraperitoneal injections of 80 mg/kg β-napthoflavone (βNF; Sigma) dissolved in corn oil (8 mg/mL) at the frequencies stated and controls received either no treatment or corn oil only.
Tissues and immunohistochemistry
Whole tissues for BCI-glu staining were dissected from 3-week-old mice and were sliced to present a cut facet for histochemistry. Tissues were fixed in 4% paraformaldehyde for 2 h, washed in PBS and incubated in BCI-glu at 50°C for 48 h. Intestinal wholemounts from Ahcre/R26(SYNbglA)R animals were prepared lumenal side up as described previously [16] except that they were fixed in ice cold 2% formaldehyde/0.2% glutaraldehyde in PBS (pH 7.4) for 1 hour prior to overnight incubation in BCI-glu (as described above) substrate at room temperature. For frozen sections small pieces of intestine were snap frozen in liquid nitrogen and cryostat sectioned. Slides were air dried and fixed for 5-10 min in ice cold 2% formaldehyde/0.2% glutaraldehyde in PBS (pH 7.4) before transferral to BCI-glu. For heat inactivation sections were incubated in PBS at 65°C for 10-20 min prior to incubation in BCI-glu. Tissues processed for histology were immersed in the fixatives for the length of time stated and processed into paraffin wax blocks using a Citadel tissue processor (ThermoShandon, Cheshire, UK) with freshly prepared dehydrating ethanols (x1 70%, 30 min; x1 90%, 30 min; x3 100%, 30 min each), xylene (x3, 20 min each) and wax (x2, 30 min each). Sections were cut at 3-5 μm, dewaxed and rehydrated into PBS prior to immersion into BCI-glu, prepared as above, and incubated at 37°C (48 h) or 65°C (overnight).
|
v3-fos-license
|
2020-04-09T09:13:33.402Z
|
2020-04-01T00:00:00.000
|
225943465
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.techrxiv.org/articles/preprint/Improvised_learning_for_pre-primary_students_using_augmented_reality/12056046/files/22155654.pdf",
"pdf_hash": "8354c3537c59069303f160086f7986064ffd1ca5",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44463",
"s2fieldsofstudy": [
"Education",
"Computer Science"
],
"sha1": "369508533ed13e12a5558fc3074ec2c34c76035a",
"year": 2020
}
|
pes2o/s2orc
|
Improvised Learning for pre-primary students using Augmented Reality
We live in an age of digital advancement in which technologies change in a fraction of time. From the abacus, which made tutoring math easy millennia ago, to the word processor, which changed the way research papers are written and presented, every era of technological advancement has not only shaped education but transformed it. There was a time when the educational world of black on white changed to projected presentations. In this paper, however, we aim to go beyond two-dimensional space and create a whole new educational world for children. Augmented Reality (AR) has successfully made classroom learning more interactive and engaging for students, and has helped teachers deliver their lectures. AR is the combination of the real world with a computer-generated world and is one of the most rapidly emerging fields in computer science. The conventional approach to learning can be stressful and, to a certain extent, less effective for some students. We therefore propose a system in which students use smart devices like tablets, mobile phones, etc. as an alternative to dull supporting textbooks. We also plan to develop an application consisting of two modules, interactive learning and fun examination: a hybrid of the traditional approach and innovative practical illustrations of complicated concepts, taking education into another dimension.
I. INTRODUCTION
Augmented Reality can superimpose real-time computer animation onto the real world by combining the perceived physical elements around us with computer-based graphics, video, audio, GPS data, etc. The experience is enriched by adding virtual data to the real world we sense but, unlike virtual reality, augmented reality does not usually require a separate room or confined area to create an immersive environment. What makes this developing technology a strong tool for providing toddlers with education is the way it is delivered. What some may call unsurprising, almost nine out of ten people own smartphones, so why aren't teachers utilizing these in the classroom or on campus? To elaborate further, Augmented Reality makes use of smart technologies and smart devices to convey knowledge. According to the cone of experience theory, learners remember only 10% of what they read but 90% of what they say as they perform an action, by seeing and doing in a simulation experience (Chih, 2007). So it is rightly argued that audio-visual material catalyzes the process of learning. Imagine Augmented Reality in a field like science: it can be used across a range of topics, allowing students to be taken on guided tours of places from outer space to the depths of ocean life. Moreover, if implemented in the field of mathematics, it can play a major role in subjects like geometry and calculus, creating its own world of numbers. Augmented Reality in gaming uses the existing environment and creates a playing field within it; AR games are typically played on devices like smartphones, tablets and portable gaming systems. Our approach merges these two concepts, gaming and education, letting children interact with the real world in digital form while experiencing the real-life environment, giving them a euphoric experience and room for greater innovation.
A game can be implemented in many important fields such as medical training, retail, repair and maintenance, the military, and education, and implementing it in education opens new possibilities. With the benefits of Augmented Reality as well as gaming, education can be more varied, unique, personalized and interactive. It helps turn complex and boring content into something easy, fun and creative. We therefore attempted to implement our research by developing an augmented reality-based education application using the Unity 3D platform and the Vuforia SDK, targeting an audience of preschool students (4-9 years). There are many platforms for making AR applications and many SDKs to choose from, but for our research Unity and Vuforia made a dynamic combination. Unity is a 2D and 3D game creation environment that supports social media integration, excellent graphical support and multiplatform deployment, and Unity has its own Asset Store. Vuforia is the most widely used platform for AR development, with support for leading phones, tablets, and eyewear. The motive of our application is to use augmented reality to make learning more interactive and fun, and to keep the application scalable and executable over a wide range of devices. We have developed two modules: Learning and Examination. In the Learning module, children can learn interactively. When users display the flashcard of one of the alphabet or number pictures defined in the application, they see the respective moving 3D augmented object of a fruit. This can help the students remember better and makes learning fun. In the Examination module, testing is designed to be stress-free. For each question, users display the answer flashcard which they think is correct. If the student chooses the correct flashcard, it is shown on-screen as correct. This is expected to help young minds capture the alphabets or numbers in their memory easily.
II. LITERATURE REVIEW
Although AR is one of the technologies that can change the scenario in education these days, the significance of AR in learning environments remains unclear and under-appreciated [2]. Furthermore, various types of AR applications exist in educational environments, which may differ regarding their benefits towards educational outcomes [1]. In the context of this paper, we refer to educational environments as any setting in which people acquire knowledge in a structured and controlled process.
Yuen et al. [1] classify AR applications into five groups (the Five Directions), which we referred to before carrying out our research in AR. Firstly, AR can be used in applications that enable Discovery-based Learning: a user is provided with information about a real-world place while simultaneously considering the object of interest. Secondly, AR can also be used in Objects Modeling applications. Such applications allow students to receive immediate visual feedback on how a given item would look in a different setting. Thirdly, AR Books are books that offer students 3D presentations and interactive learning experiences through AR technology; the books are augmented with the help of technological devices such as special glasses. Moreover, one idea for supporting the training of individuals in specific tasks is described by Skills Training; mechanical skills in particular are likely to be supported by AR Skills Training applications. Lastly, video games offer powerful new opportunities for educators which have been ignored for many years [3]. Nowadays, educators have recognized and often use the power of games in educational environments, and AR technology enables the development of games which take place in the real world and are augmented. In one of the studies we referred to, children were taught using flashcards of different animals on which they see a defined 3D character and can even hear its voice. This was an effective way to bring two-dimensional education into the 3D world, which not only provides education in an audio-visual format but also creates a whole new virtual world of learning [9]. Discussing how this work was implemented, the authors used Unity with the Vuforia SDK, which can be compiled for different platforms such as Web, iOS, Android and Windows Phone without any infrastructure changes.
The benefit of Augmented Reality is described in quotations such as "the AR-style gameplay successfully enhanced intrinsic motivation towards the self-learning process" [6], "Participants using the AR books appeared much more eager at the beginning of each session compared with the NAR group" [4], and "students have been satisfied and motivated by these new methodologies, in all cases" [5]. The benefit is further described by findings such as users being "more proactive" [7, 8] or willing to continue learning using AR technology after class. A more detailed description is found in Iwata et al. [6], where physical interaction is explicitly identified as a driver of enhanced emotional engagement.
Since various types of AR applications exist in educational environments, they may differ regarding their benefits towards educational outcomes. For the context of this research, we refer to educational environments as any scenario in which pre-school children can acquire knowledge in a structured and controlled process. We have also seen the idea of developing scientific literacy using augmented reality in preschool education [10]. Several models of data collection are referred to there, including collecting the opinions, beliefs, explanations and knowledge of the subjects involved through semi-structured interviews and focus group discussions. That paper also relies on unstructured observation, with both direct and indirect observations used to collect research data. Selective coding, continuous comparison, and open coding methods are used to analyze the collected data. Conceptual labels were assigned to individual events, cases and other occurrences of a given phenomenon. By categories, we mean classes of concepts that were identified when a comparison of concepts seemed to indicate they belonged to a similar phenomenon [10]. A lot of research material in the form of video recordings, pictures and photographs was also collected in that research. In indirect observation, video recordings of educational activities were watched several times, looking for significant elements supplying evidence of the development of science literacy of pre-school students using augmented reality.
One of the studies we went through [11] focuses on enhancing the use of AR for the learning experience of kindergarten students while addressing parents' concern that long-term usage of electronic devices may affect their child's health. The authors developed an AR mobile application prototype to teach kindergarten students English vocabulary interactively and attractively. It allows kindergarten students to learn English vocabulary in any place and at any time using a mobile device [11]. To address the parents' concern about health, they integrate a monitoring system into the application, which allows parents to monitor their child's usage and stop the application in real time online. A spelling game using AR is included to enhance students' motivation to learn; they can practice spelling, reading and speaking the words through the AR game [11]. Parents can set a time limit to automatically stop the application, manually stop the application online, and track their kid's learning progress. Such monitoring features let parents control their child's application usage and alleviate concerns about their child's health.
After scrutinizing and analyzing the important studies above, we decided to focus on the education domain for pre-primary students by building an AR application using Unity and Vuforia.
III. DESIGN METHODOLOGIES
Our application is developed for pre-primary students and aims to make learning fun and examination stress-free. We used the Unity 3D game engine platform and the Vuforia SDK to develop this application. The Unity 3D game engine is developed by Unity Technologies for creating video games. It supports cross-platform development, i.e. games can be made for Android, iOS, PC, PlayStation 4 and many other platforms. Unity also has an Asset Store that provides many inbuilt 3D and 2D models which can be used directly in the game development process, besides tools to add new features to the engine. The Vuforia SDK is a software development kit created by Qualcomm. It uses computer vision algorithms to recognize an object or image or to reconstruct the real world. It supports different development environments such as Unity, MS Visual Studio, Apple XCode and Android Studio, and it supports many devices such as smartphones, tablets and AR smart glasses. It also supports building Universal Windows Platform applications for Intel-based Windows 10 devices, and it provides a good user interface. Fig. 1 shows a GUI overview of our application, which has two modules, "Learning" and "Examination". One of the most important features implemented for this research is the interactive learning module. With the help of this module, children can gain knowledge about new objects, and visualizing these objects as 3D models is the best approach for them to learn. The teacher can simply run the application and move the mobile device over the flashcards. These flashcards can simply carry terms that children are expected to learn, such as alphabets or numbers. When the camera is focused on a flashcard, the application recognizes the image and then displays the related virtual 3D model. At that point, children can explore the real object based on the flashcard image. Thus, they can later point to the correct items when they are asked to describe an object based on the alphabets. The flashcard images used for the implementation are shown in the corresponding figure. Vuforia made it easy to build this module by providing a set of features that we could apply after integrating Vuforia into Unity 3D. Vuforia lets you define the image targets which will be recognized to render the associated models, and it allows selecting a 3D model from a list of different packages. In our research, we have used the fruits package to link alphabets and numbers on one side with fruits on the other side, to make the process easier for children to understand. Fig. 3 shows the sequence of the Learning module activities.
A. Learning Module
We wanted our application to be fun and interesting, as the audience for this application is children. For this purpose, we made the augmented game object rotate and move.
• Rotation: We used the "transform.Rotate" inbuilt function of Unity. A game object can rotate around the X, Y or Z axis. We used a rotation of 20 degrees around the Y axis at the default speed.
• Movement: For moving the augmented object we used the "transform.Translate" function of Unity, which moves the transform by the given direction and distance. We moved our object forward along its Z axis (Z: 0.5) at a default speed of 1 unit/second. A minimal script along these lines is sketched below.
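The sketch below illustrates the rotation and movement described above. It uses the transform.Rotate and transform.Translate calls quoted in the text, while the exact speed values are illustrative placeholders rather than the project's actual settings, and the class name is invented for the example.

```csharp
using UnityEngine;

// Sketch: spin the augmented fruit model around its Y axis and drift it
// slowly forward along its local Z axis, as described in the text.
// Speed values are illustrative placeholders.
public class FruitMotion : MonoBehaviour
{
    public float rotationSpeed = 20f;  // degrees per second around Y
    public float moveSpeed = 0.5f;     // units per second along local Z

    void Update()
    {
        // Rotate around the Y axis, frame-rate independent.
        transform.Rotate(0f, rotationSpeed * Time.deltaTime, 0f);

        // Move forward along the object's local Z axis.
        transform.Translate(Vector3.forward * moveSpeed * Time.deltaTime);
    }
}
```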
B. Examination Module
The examination module, similar to the learning module, is divided into two sub-categories: alphabets and numbers. Since the application is for pre-school children, the question level is kept simple. The alphabet section handles basic English language questions, whereas the number section covers preschool maths. Fig. 4 shows the sequence of the examination module activities.
Each target image has a tag attached to it. This tag helps in identifying the target image and the model associated with it. The trackable behaviour from Vuforia helps with getting the associated tag; once the tag is received, we simply compare the received tag with the tag for the correct answer. The image shows a 3D model regardless of whether the answer is correct or not; however, an event is raised if the target tag matches the required tag. For our research, we simply display a label once the event is triggered (a minimal sketch of this check is given below). The questions are customizable and can be changed to manage the level of difficulty. The user interface is simple, as shown in Fig. 5. The home screen contains two buttons for our two modules, Learning and Examination. Inside these two modules, there are three buttons: alphabet, number, and a back button. The back button returns to the home screen. The alphabet section, when clicked, activates the AR camera and the alphabet target models, whereas the number section activates the AR camera and the number target models. The difference between the learning and examination modules is that the learning module does not contain questions, whereas the examination module does.
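A hedged sketch of this answer check follows. The detection hook (OnTargetFound) is a hypothetical stand-in for whatever callback the tracking SDK raises when a flashcard target is recognized, and the tag name and label text are placeholders; only the comparison logic described above is illustrated.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of the examination check. OnTargetFound is a hypothetical hook,
// assumed to be invoked with the GameObject of the recognized flashcard target;
// only the tag comparison and label update are shown.
public class AnswerChecker : MonoBehaviour
{
    public string correctAnswerTag = "Apple_3";  // tag of the correct flashcard (placeholder)
    public Text resultLabel;                     // UI label shown to the child

    public void OnTargetFound(GameObject recognizedTarget)
    {
        // The 3D model is shown regardless; only the label depends on the tag.
        bool correct = recognizedTarget.CompareTag(correctAnswerTag);
        resultLabel.text = correct ? "Correct!" : "Try again";
    }
}
```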
IV. RESULTS
Our augmented reality application has been developed as per the design. Fig. 6 shows an example of learning numbers with the application: once the camera is focused on the flashcard with the number 3, the application shows three apples, which helps the child grasp the quantitative meaning of the number. Figs. 7 and 8 show a few basic questions; the trackable behaviour from Vuforia supports the examination.
V. CONCLUSION
Augmented reality is an emerging technology that has been around for years and is still growing at a remarkable pace. AR is likely to reshape research and technology in almost every domain in the near future. The research we propose is primarily based on the education domain and aims to make learning fun for children. It covers not only the learning aspect of education but also the examination aspect, and the innovative part of our implementation is that it takes the form of a game. Moreover, to make the game available on as many platforms as possible, we use the combination of Unity 3D and Vuforia. Augmented reality technology is still in its infancy, yet it is revolutionary and can take the educational system to a whole new level. Our proposed research in AR is an approach suited to this modern era, surpassing the limitations of conventional systems.
Morbidity and Mortality Weekly Report Vital Signs: Improving Antibiotic Use among Hospitalized Patients
Background: Antibiotics are essential to effectively treat many hospitalized patients. However, when antibiotics are prescribed incorrectly, they offer little benefit to patients and potentially expose them to risks for complications, including Clostridium difficile infection (CDI) and antibiotic-resistant infections. Information is needed on the frequency of incorrect prescribing in hospitals and how improved prescribing will benefit patients. Methods: A national administrative database (MarketScan Hospital Drug Database) and CDC's Emerging Infections Program (EIP) data were analyzed to assess the potential for improvement of inpatient antibiotic prescribing. Variability in days of therapy for selected antibiotics reported to the National Healthcare Safety Network (NHSN) antimicrobial use option was computed. The impact of reducing inpatient antibiotic exposure on incidence of CDI was modeled using data from two U.S. hospitals. Results: In 2010, 55.7% of patients discharged from 323 hospitals received antibiotics during their hospitalization. EIP reviewed patients' records from 183 hospitals to describe inpatient antibiotic use; antibiotic prescribing potentially could be improved in 37.2% of the most common prescription scenarios reviewed. There were threefold differences in usage rates among 26 medical/surgical wards reporting to NHSN. Models estimate that the total direct and indirect effects from a 30% reduction in use of broad-spectrum antibiotics will result in a 26% reduction in CDI. Conclusions: Antibiotic prescribing for inpatients is common, and there is ample opportunity to improve use and patient safety by reducing incorrect antibiotic prescribing. Implications for Public Health: Hospital administrators and health-care providers can reduce potential harm and risk for antibiotic resistance by implementing formal programs to improve antibiotic prescribing in hospitals.
Introduction
Antibiotics offer tremendous benefit to patients with infectious diseases and are commonly administered to patients cared for in U.S. hospitals. However, studies have demonstrated that treatment indication, choice of agent, or duration of therapy can be incorrect in up to 50% of the instances in which antibiotics are prescribed (1). One study reported that 30% of antibiotics received by hospitalized adult patients, outside of critical care, were unnecessary; antibiotics often were used for longer than recommended durations or for treatment of colonizing or contaminating microorganisms (2).

Incorrect prescribing of antibiotics exposes individual patients to potential complications of antibiotic therapy, without any therapeutic benefit. One such complication is infection with Clostridium difficile, an anaerobic, spore-forming bacillus that causes pseudomembranous colitis, manifesting as diarrhea that often recurs and can progress to sepsis and death; CDC has estimated that there are about 250,000 C. difficile infections (CDI) in hospitalized patients each year (3). Other complications related to unnecessary use of antibiotics include infection with antibiotic-resistant bacteria (4) and complications from adverse events (5).

Evidence is accumulating that interventions to optimize inpatient antibiotic prescribing can improve patient outcomes (6). To assist health-care providers to reduce incorrect inpatient prescribing, information is needed regarding how frequently incorrect prescribing occurs in hospitals and how improving prescribing will benefit patients. In this report, current assessments of the scope of inpatient antibiotic prescribing, the potential for optimizing prescribing, and the potential benefits to patients are described.
Methods
The objectives of this evaluation were to 1) describe the extent and rationale for antibiotic prescribing in U.S. acute care hospitals, 2) present data illustrating the potential for improving prescribing in selected clinical scenarios, and 3) estimate the potential reductions in CDI among patients when antibiotic use is improved. For this report, antibiotics include parenteral, enteral, and inhaled antibacterial agents.

The first objective was accomplished using proprietary administrative data from the Truven Health MarketScan Hospital Drug Database (HDD) and data from CDC's Emerging Infections Program (EIP). EIP is a network of state health departments, academic institutions, and local collaborators funded by CDC to assess the effect of emerging infections and evaluate methods for their prevention and control.* Antibiotic prescribing data and patient demographics were obtained from HDD, which contains individual billing records for all patients from a large sample of U.S. hospitals.† Antibiotic agents and doses provided were identified for all patients discharged during 2010. Age group-specific proportions of hospitalizations during which antibiotics were prescribed were calculated by antibiotic group. In 2011, EIP performed an antibiotic use prevalence survey in acute care hospitals within the 10 EIP sites. Each hospital selected a single day on which to conduct the survey on a random sample of inpatients. EIP data collectors gathered information on antibiotics given to patients and determined the rationale for antibiotic use.

For the second objective, additional data from the EIP were used to determine the frequency of opportunities to improve prescribing for selected urinary tract infections (UTIs) and prescribing of intravenous vancomycin. In addition, data reported during October 2012-June 2013 to the National Healthcare Safety Network (NHSN) Antimicrobial Use Option were analyzed; key percentile distributions of usage rates and differences in usage (between usage at the 90th percentile and at the 10th percentile) were calculated. This difference should be small when comparing usage rates among patient care locations caring for similar types of patients.

The third objective was accomplished through development of a dynamic model that was used to interpret the findings of an observational study and predict changes in CDI with changes in antibiotic use. First, a retrospective cohort study was conducted to quantify the relative risk for CDI using hospital discharge data and pharmacy data from two large academic centers, in New York and Connecticut, linked to active population-based CDI surveillance data from the EIP (6). The primary outcome was hospital-associated CDI (CDI >2 days after hospital admission and ≤180 days after discharge). Primary exposure of interest was receipt of inpatient broad-spectrum antibiotics (i.e., 3rd and 4th generation cephalosporins, beta-lactam/beta-lactamase inhibitor combinations, and fluoroquinolones) during hospitalization. A multivariate logistic model was used to estimate an adjusted risk ratio controlling for age, sex, Gagne comorbidity score (7), hospital, and hospital CDI rates. A stochastic, compartmental model of hospital CDI that represented distinct states of infection (uncolonized, colonized, and symptomatic) was constructed. Antibiotic use was classified with respect to type (high- and low-risk) and where the patient was in the treatment pathway (untreated, treated, and post-treatment). The model was calibrated based on the results of the epidemiologic analyses described in this report and drew other parameter estimates from stochastic distributions based on a previously published agent-based model (8).§
Results
In 2010, based on data obtained from all 323 hospitals by MarketScan HDD, 55.7% of patients received an antibiotic during their hospitalization, and 29.8% received at least 1 dose of broad-spectrum antibiotics (Figure 1). The EIP evaluated 11,282 patients in 183 hospitals in 2011, of whom 4,189 (37.1%) had received one or more antibiotics to treat active infections; half (49.9%) of all treatment antibiotics were prescribed for treatment in one or more of three scenarios: lower respiratory infections, UTIs, or presumed resistant Gram-positive infections (Table 1). Prescribing scenarios at a convenience sample of 36 hospitals across eight EIP sites were reviewed. Reviews of 296 instances of treatment in two specific scenarios (UTIs in patients without indwelling catheters, and treatment with intravenous vancomycin) identified that antibiotic use could potentially have been improved in 37.2% (39.6% of 111 UTI patients, 35.7% of 185 vancomycin patients); improvement opportunities mostly involved better use of diagnostic testing (Table 2).

NHSN began receiving antibiotic use data in 2012. Among the 19 hospitals reporting to the NHSN Antimicrobial Use Option that had completed data validation and submitted antibiotic use data from one or more patient care locations, results were reported for 266 patient care locations. Among the six most common types of patient locations, critical care units reported higher rates of antibiotic use (median = 937 days of therapy/1,000 days-present) compared with ward locations (median = 549 days of therapy/1,000 days-present). The variability in usage rates within any one patient location type was highest (threefold difference between 90th and 10th percentile) among combined medical/surgical wards (i.e., 26 wards categorized as caring for a mixture of medical and surgical patients). When limiting the comparisons within combined medical/surgical wards, differences in usage were eightfold for fluoroquinolones, sixfold for antipseudomonal agents, threefold for broad-spectrum agents (antibiotics considered high risk for subsequent CDI), and threefold for vancomycin (Figure 2). Overall, in the cohort study, the risk for CDI among patients unexposed and exposed to antibiotics was 6.8 and 24.9 per 1,000 discharges, respectively. Multivariate modelling adjusting for covariates, for all ages combined, estimated the adjusted relative risk for development of CDI within 180 days after inpatient exposure to broad-spectrum antibiotics to be 2.9 (95% confidence interval = 2.3-3.5). The dynamic model, which accounts for both direct and indirect effects, predicted that a 30% decrease in exposure to broad-spectrum antibiotics in hospitalized adults would lead to a 26% decrease in CDI (interquartile range = 15%-38%). Such a reduction in broad-spectrum use equates to an approximately 5% reduction in the proportion of hospitalized patients receiving any antibiotic.
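As a rough arithmetic check (added here for orientation, not part of the original analysis), the unexposed and exposed rates above imply a crude, unadjusted relative risk of

\[ \mathrm{RR}_{\text{crude}} = \frac{24.9\ \text{per}\ 1{,}000}{6.8\ \text{per}\ 1{,}000} \approx 3.7, \]

which is somewhat higher than the reported adjusted estimate of 2.9 because the multivariate model controls for age, sex, comorbidity score, hospital, and hospital CDI rates.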
Conclusions and Comment
Antibiotics are prescribed for the majority of patients hospitalized in U.S. acute care hospitals, usually to treat infections. This post-prescription review of two common prescribing scenarios for treating suspected infections identified opportunities to improve 37.2% of prescriptions, often by timely use of diagnostic tests or documentation of symptoms. This observation is similar to results of older studies (1) and a recent study (2) documenting that about 30%-50% of prescribing might be incorrect. Although the aspect of prescribing that could be improved has varied between studies, it usually involves the wrong dose or wrong duration (2). The EIP review focused on relatively objective criteria, including established standards around diagnostic testing and documentation of symptoms supporting the presence of infection. A threefold difference in overall antibiotic use in the most common patient care location, where more similar usage rates would be expected considering that similar types of patients are being cared for in these locations, is additional evidence of opportunities for improvement. This difference is a conservative measure made by comparing usage reported at the 90th percentile distribution with that at the 10th percentile distribution, among locations caring for similar types of patients. The magnitude of differences seen in some antibiotic groups might be the result of differences in formulary or clinical practice guidelines in place at different institutions. However, within similar location types, twofold differences were consistently measured. Although some of these differences might be attributable to differences in the mix of patients within these similar patient care locations, it is likely some might be explained by differences in prescribing practices. This type of monitoring system, which involves antibiotic use measurement to inform quality improvement activities, has been cited as an urgent need by a recent government report (10).

* Data provided by Truven Health MarketScan Hospital Drug Database.
† Antibiotics from these three groups, which are considered to place patients at high risk for developing Clostridium difficile infection, were administered to 29.8% of the patients.

The data in this report confirm the findings of several previous studies demonstrating that antibiotic prescribing in hospitals is common and often incorrect. In particular, patients are often exposed to antibiotics without proper evaluation and follow-up. Misuse of antibiotics puts patients at risk for preventable health problems. These include immediate complications; antibiotics are among the most frequent causes of adverse drug events among hospitalized U.S. patients (11), and near-term complications, such as CDI, which can be severe and even deadly (9). The analysis of risk for CDI from exposure to broad-spectrum antibiotics during hospitalization found an exposed patient was at three times greater risk than a patient without this exposure. Elevated risks of similar magnitude were observed in previous studies (12,13). An estimated 30% reduction in use of these broad-spectrum antibiotics (which would reduce overall antibiotic use by only 5%) would prevent 26% of CDI related to inpatient antibiotic use. Reductions in CDI of this magnitude could also have additional positive effects in reducing transmission of C. difficile throughout the community.
An additional near-term complication of the unnecessary and incorrect use of inpatient antibiotics is the growing problem of antibiotic resistance in U.S. hospitals, creating treatment challenges not only for patients who are exposed to the antibiotics, but for other patients to whom these resistant bacteria spread (3). Some hospitalized patients now have infections for which there are no available antibiotic treatments (14). Urgent action is required to address this growing public health crisis. Improving the prescribing of antibiotics in hospitals is one important part of a broader strategy to counter the increase in antibiotic resistance. The CDC report, Antibiotic Threats in the United States, 2013, addresses other priority needs to reduce antibiotic resistance, including preventing infections and the spread of resistance, tracking resistance patterns, and developing new antibiotics and diagnostic tests (3).

Programs dedicated to improving antibiotic prescribing in hospitals are commonly referred to as antibiotic stewardship programs. Such programs serve to ensure optimal treatment for hospitalized patients with infection and reduce unnecessary antibiotic use to minimize harm to patients and prolong the length of time antibiotics are effective (15). Variability in the types of patients and available resources and expertise between hospitals calls for flexibility in how these programs are implemented. However, experience demonstrates that these programs can be successful in a wide variety of hospital types to reduce overall and incorrect antibiotic prescribing, decrease drug costs, prevent adverse events caused by antibiotics, and reduce CDI rates and antibiotic resistance locally (6,15). Although cost savings from these programs will vary depending on the size of the facility and the extent to which interventions are implemented, published studies from mostly larger settings have consistently shown significant annual savings ($200,000-$900,000) (1). Correct antibiotic treatment (e.g., prompt treatment of sepsis) is critical to saving lives of hospitalized patients with certain infectious diseases. Given the proven benefit of hospital stewardship programs to patients and the urgent need to address the growing problem of antibiotic resistance, CDC recommends that all hospitals implement an antibiotic stewardship program. CDC has developed guidance that can assist hospitals in either starting or expanding a program to improve antibiotic prescribing (16). Central to this guidance are seven core elements that have been critical to the success of hospital antibiotic stewardship programs (Box). In addition to highlighting these key elements for success of stewardship programs, the CDC guidance also provides background information on the proven benefits of improving antibiotic prescribing in hospitals and more details on the structural and functional aspects of successful programs. To accompany the guidance, CDC also has developed a stewardship assessment tool that includes a checklist to help facilities assess the status of their efforts to improve antibiotic prescribing and point out potential areas for further improvement (16).
Key Points
• Antibiotics are commonly prescribed in hospitals.
• Evidence of incorrect prescribing and observed variability in current usage patterns suggest that improvements are needed and will benefit patients.
• CDC recommends that all hospitals implement antibiotic stewardship programs that include, at a minimum, seven core elements: 1) leadership support; 2) accountability through a single physician lead; 3) drug expertise through a single pharmacy lead; 4) action including at least one intervention, such as an "antibiotic timeout," to improve prescribing; 5) tracking prescribing and resistance patterns; 6) reporting local prescribing and resistance information directly to clinicians; and 7) education for clinicians.
• Urgent action is needed to promote correct antibiotic prescribing to ensure these lifesaving drugs work in the future.
• Additional information is available at http://www.cdc.gov/vitalsigns.

FIGURE 1. Percentage of hospital discharges with at least one antibiotic day, by antibiotic group - 323 hospitals, United States, 2010*

FIGURE 2. Rate of antibiotic use, by antibiotic group, class, or specific agent, among medical and surgical patients in 26 wards at 19 acute care hospitals - National Healthcare Safety Network Antimicrobial Use Option, October 2012-June 2013*

TABLE 2. Assessment of antibiotic prescribing among inpatients in 36 hospitals treated for urinary tract infection (UTI) without indwelling catheter or treated with intravenous vancomycin - Emerging Infections Program health-care-associated infections and antimicrobial use prevalence survey, United States, 2011
Abbreviation: SSTI = skin and soft tissue infection.

BOX. Seven core elements critical to the success of hospital antibiotic stewardship programs
• Leadership commitment: Dedicating necessary human, financial, and information technology resources.
• Accountability: Appointing a single leader responsible for program outcomes. Experience with successful programs has shown that a physician leader is effective.
• Drug expertise: Appointing a single pharmacist leader responsible for working to improve antibiotic use.
• Action: Implementing at least one recommended action, such as systemic evaluation of ongoing treatment need after a set period of initial treatment (i.e., "antibiotic time out" after 48 hours).
Twenty Years of Entropy Research: A Bibliometric Overview
Entropy, founded in 1999, is an emerging international journal in the field of entropy and information studies. In 2018, the journal celebrated its 20th anniversary, and therefore, it is quite reasonable and meaningful to conduct a retrospective as its birthday gift. In accordance with Entropy's distinctive name and research area, this paper creatively provides a bibliometric analysis method to not only look back at the vicissitude of the entire entropy topic, but also witness the journal's growth and influence during this process. Based on 123,063 records extracted from the Web of Science, the work in sequence analyzes publication outputs, highly-cited literature, and reference co-citation networks, in the aspects of the topic and the journal, respectively. The results indicate that the topic has now become a tremendous research domain and is still roaring ahead with great potential, widely researched by different kinds of disciplines. The most significant hotspots so far are suggested to be the theoretical or practical innovation of graph entropy, permutation entropy, and pseudo-additive entropy. Furthermore, with its rapid growth in recent years, Entropy has attracted many dominant authors of the topic and shows a distinctive geographical publication distribution. More importantly, in the midst of the topic, the journal has made enormous contributions to major research areas, particularly being a spearhead in the studies of multiscale entropy and permutation entropy.
Introduction
Entropy, making its debut in 1999, is a monthly open access journal that mainly focuses on the studies of entropy and information. As a member of the Multidisciplinary Digital Publishing Institute (MDPI), it has a full-scale departmental structure and often publishes Special Issues to keep in step with research hotspots. Based on its years of hard work, the journal has gradually become one of the well-known journals in the academic world, indexed by the Science Citation Index Expanded of the Web of Science (WoS) since 2009 and ranked 22nd out of 78 journals in the "Physics, Multidisciplinary" category according to the latest Journal Citation Reports (at the time of writing).
From its inception to the year of 2018, exactly 20 years, Entropy had already published 3881 documents, including 3544 articles, 176 reviews, 54 editorials, 18 letters, etc. In particular, 3147 of them are related to the topic of entropy in accordance with WoS criteria, remarkably making the journal a miniature of this distinctive research domain. Therefore, at this special time, it is quite reasonable and interesting to carry out a retrospective overview in commemoration of Entropy's 20th anniversary.
At present, while there are plenty of disciplines widely used to summarize and analyze the literature, bibliometric methodologies play an irreplaceable role because of their preciseness and wide applicability. According to Broadus [1], bibliometrics originated as an interdisciplinary study concerned with the quantitative analysis of published literature. In consideration of the journal's special name, publication features, and remarkable performance, the work might as well put forward a brand-new bibliometric method to deal with this issue by combining the above two analysis types together. In other words, this study adopts a two-pronged strategy to not only introduce the entire entropy topic, evaluating its publication situations, influential papers, evolutionary path, and hotspots, but also appraise Entropy's influence and characteristics in the meantime.
Setting the word "entropy" as the topic, 123,063 records of publications (only articles and reviews) are directly collected from all the indexes in the WoS Core Collection database or Entropy's official website, in the range of 1999-2018 (from 1 January 1999-31 December 2018, to be more precise). CiteSpace was chosen as the major software for visualization, because, in practice, it is easier to customize and can provide more valuable information than others, widely adopted for bibliometric studies over the world [37].
After this brief introduction as Section 1, Section 2 presents annual publication trends and productive authors and especially uses nation and fund distributions to illustrate geographical differences between the topic and the journal. Section 3 introduces the most cited articles in detail, and journal categories' information is presented for exemplifying the evolution of research areas relevant to the topic. Then, Section 4 synthesizes the reference co-citation networks to explore not only the evolving process and the hotspots of the entire topic, but also the journal's status and impacts, followed by Section 5, which summarizes the major conclusions of this study.
Features of Publication Outputs
In this section, we provide the annual quantity of the topic's and the journal's publications and then introduce their most productive authors. Furthermore, according to the results above, the rest of this part uses nation and fund distributions to illustrate Entropy's publishing features.
Annual Distribution of Publications
As can be seen in Figure 1, the topic included almost 3000 documents in 1999, and the annual number of publications has enjoyed a continuous increase for the past 20 years. To be specific, the annual quantity has risen from 2997 to 12,470, and especially has been larger than 5000 since 2008 and more than 10,000 since 2016. Nearly half of the articles were published from 2013 onward.
This result is quite significant and meaningful in bibliometrics because, first, the topic had a tremendous base of publications from the very beginning, so even its modest growth can easily have a big effect on the scientific world. In addition, according to bibliometric studies such as Price [38,39], the annual publication quantity of an area generally grows exponentially over time, rising slowly and then quickly, especially at the early stage of its life, and if the majority of papers are published in recent years, the research area is considered to be entering its vigorous period. It is therefore reasonable to expect that the topic's ascent is far from over and that it will continue to influence the science and technology world.

Figure 2 depicts the publishing trends of the top-five most productive journals related to the topic, and particularly marks several historical moments of Entropy. The annual publication number of Entropy was less than 50 until 2008, but it has dramatically rocketed since 2012, surpassing that of the others with great rapidity. However, the other four prestigious, physics-related journals are largely in stagnation or even on the wane in recent years. Combining Figure 2 with Figure 1, the share of Entropy in the entire topic's output has also moved onto a gradual upward arc over the last 20 years, accounting for 0.23% in 1999 and for 6.77% in 2018. All this information not only demonstrates the fast growth of Entropy, but also indicates the development and expansion of the entire topic.
Most Productive Authors
As an important part in traditional bibliometric analysis, productive authors are considered to be introduced in detail, because they are the major dedicators and may even lead the directions of their domains. Table 1 lists the top-20 prolific authors of the topic in the descending order of publication quantity. Indicators from left to right are rank, name, institution, country or region, total publications (TP), total citations (TC), total citations per publication (TC/TP), h-index, and citation thresholds. The nations are judged by the locations of institutions written in the documents, not the authors' real nationality.
Apparently, most of the institutions are located in Asia, America, and Europe. Over half of the authors are Chinese, and four of them are working at the Chinese Academy of Sciences, a linchpin for researching high technology and natural sciences, suggesting that Chinese scholars have played a vital role in this domain.
The top-three authors are Lingen Chen, Fengrui Sun, and Angelo Plastino. Chen and Sun are both professors at People's Liberation Army of China (PLA) Naval University of Engineering. Chen specializes in energy and power engineering and modern thermodynamics, and Sun is an expert in energy and power engineering and engineering thermophysics. Due to their working relationship and similar research domains, the two have collaborated in research for a long period of time, producing a great number of articles relevant to the entropy topic. For example, in Table 1, Chen and Sun shared a paper [40] having more than 500 citations together. This paper forecasted the future direction of finite thermodynamics by reviewing the study's historical background, research development, and theories. The third author is Angelo Plastino, an emeritus professor and physicist at National University La Plata. He is mainly interested in information theory, statistical mechanics, and quantum information, showered with innumerable honors and prizes.
Moreover, in Table 1, a paper authored by Jienwei Yeh [41] has been cited more than 2000 times. This paper, a landmark in materials science and engineering, provided a new method for designing nanostructured high-entropy alloys. Jienwei Yeh is a professor working at National Tsing Hua University in Taiwan, China, having considerable findings on materials science, especially high-entropy alloys.
The most productive authors of Entropy are listed in Table 2. Obviously, the two tables contrast sharply with each other. In the first place, there are some familiar figures appearing again, like Lingen Chen, Angelo Plastino, and Vijay P. Singh, both in Tables 1 and 2. Besides, the countries and institutions listed in Table 2 are more plentiful than in Table 1, which indicates that the journal has a distinctive geographical distribution. This phenomenon is meaningful and is worth being further investigated because as discussed by Liang and Zhu [42], it might illustrate the spatial differences of publication quantities and cooperation. Thus, the work visualizes nation and fund distribution networks to deal with this issue in the latter part of this section.
The top-three prolific authors of the journal are Dumitru Baleanu, Vijay P. Singh, and Angelo Plastino, from Turkey, the USA, and Argentina, respectively. Dumitru Baleanu is a professor interested in fractional dynamics and its applications, fractional differential equations, mathematical physics, and so on. He is productive in various fields, writing or participating in over 200 articles. Vijay P. Singh is a distinguished hydrologist at Texas A&M University, specialized in biological and agricultural engineering with plenty of honors and awards. His current interests include surface-water hydrology, groundwater hydrology, hydraulic engineering, irrigation engineering, etc.
The most attractive author in Table 2 is Yudong Zhang, who has two articles each receiving more than 100 citations. Zhang is a professor now working at the University of Leicester, mainly focusing on knowledge discovery and machine learning. As for the two papers, the first one [43], honored as a highly-cited paper by WoS, proposed a new automatic system of computer-aided diagnosis that is more accurate for magnetic resonance brain images, and the other [44] presented a new approach for image segmentation by creatively employing Tsallis entropy rather than Shannon entropy.
To sum up, there are plenty of researchers devoting themselves to the entropy topic, especially to physics-related areas, which has made tremendous impacts on the scientific world. Entropy has attracted large numbers of celebrities and key scholars in different areas and has gained acceptance worldwide. Nevertheless, on the one hand, Entropy's citation situation is relatively weak, which will be further discussed when talking about the most cited papers in Section 3. On the other, through investigation, or it also can be partly told from the content above, the topic's research areas seem a little different to that of the journal, as if they have diverse taste and interests for publications. This difference will be explored and studied in Section 4, since data here could not provide an overall landscape.
Nation Distribution Analysis
The nation relationships of the topic and the journal are portrayed by CiteSpace. According to CiteSpace textbooks, a node represents a country, and its radius is in proportion to the country's publication quantity. A line linking two nodes symbolizes the cooperation between the two countries. A country is considered to play a pivotal role in cooperation if its node is surrounded by a purple ring. Colors reflect the chronological order by changing from dark to light. Specifically, the different colors of a node indicate the country's different publication years, and a line's color presents the first year that the two countries cooperated with each other. Due to CiteSpace's design, not all of the articles can be identified and visualized, so in this article, we selected the top-300 and the top-100 most cited papers in each year for the topic's and the journal's visualization, respectively. Figure 3 displays the top-20 productive countries or regions of the topic. Most of them are in Europe (9), Asia (8), and North America (2). The USA and China are the top-two prolific countries in history and have still maintained their high productivity of late in terms of their thick, light tree-rings. Germany ranks in third place, followed by France, India, England, and Italy, successively. Iran, Brazil, India, and Russia have published plenty of literature in recent years, while the speed of Japan, Spain, Italy, and Canada is slowing down to some extent, which points to the fact that developing countries are growing more quickly by comparison. This phenomenon also appeared in the analysis of EJOR [14].
Surprisingly, the nodes of the USA and China do not have purple rings, whereas those of Germany and France do. Three reasons can mainly explain this result. To begin with, there are numerous researchers and institutes in China and the USA, so that it is easy for researchers in both countries to find domestic partners working on the same topic. Comparatively, scholars in small- or medium-sized countries are more likely to seek cooperation internationally. Secondly, the entropy topic has already penetrated into many disciplines, especially physics and engineering, which are traditional and powerful domains in Germany and France. As a result, the two countries could easily win popularity in international cooperation. At last, through further investigation, the majority of countries contributing to the topic are in Europe. Therefore, the influence exerted by Germany and France might be relatively strong and durable.

The journal's top productive countries or regions (Figure 4) differ from the topic's in several respects. First, they compose a greater geographic scope. Second, China has replaced the USA, becoming the most productive country, and the countries from the third to the fifth are displaced by Italy, Spain, and Germany. Note that Japan and Spain are active in the journal, while they are both on the decline in the entire topic. Taiwan, a region ranking 19th out of 20 and almost invisible in Figure 3, has climbed to the tenth position in Figure 4. All the evidence suggests that Entropy is particularly attractive among Asian scholars.
Furthermore, Saudi Arabia has a purple ring in Figure 4, which is somewhat unexpected because of its late start in education and research. By survey, in Entropy, there are in total 88 papers (including but not limited to the topic) authored by Saudi Arabia from 1999-2018; the first one was in 2011, and 79 of them were written by international collaboration, enjoying near the highest collaborative rate among major countries. Unlike other countries preferring to team up with developed countries, Saudi Arabia is more willing to cooperate with its neighbors, like China, Pakistan, Turkey, Romania, Iran, etc. Presumably in recent years, Saudi Arabia tried to employ or collaborate with foreign researchers in an effort to enhance its research reputation. Generally, such high-level international cooperation can inflate research development rapidly, but may also cause severe problems in its scientific infrastructure, which should be a concern.
Despite all this, the result still indicates that major research communities in Entropy are around Germany, Italy, the USA, and Saudi Arabia, located in Europe, North America, and the Middle East, respectively. China, however, is still without a purple ring. Although the country has numerous publications and plays a key role in this area, its international cooperation needs to be improved as soon as possible for further development.
Fund Distribution Analysis
In order to deal with this issue, we first survey the topic's funds. There are over 50,000 funds participating in the topic, mainly located in Europe, North America, and China, and the top-20 are largely governed by Europe (5), the USA (5), China (4), and Canada (2). The National Natural Science Foundation of China (NSFC) is the dominant one, publishing more than 12,000 documents during the last 20 years, followed by the National Science Foundation (NSF, USA, 4910), the Fundamental Research Funds for the Central Universities (China, 1437), the Engineering and Physical Sciences Research Council (U.K., 1322), and the National Institutes of Health (NIH, USA, 1143), in descending order by publications.
The major funding agencies of the topic and the journal are quite similar to each other, mainly including natural science foundations granted by governments at national or ministerial levels, whereas the biggest difference is that in Entropy, there are many smaller, regional Chinese funds emerging at the top of the rank. To be specific, there are in total 1922 foundations supporting the studies in Entropy, and the majority of them are established by China (32.62%), the USA (22.94%), Spain (12.33%), Germany (7.28%), and Italy (5.57%). Concerning that most of the funds cannot be visualized due to the great mismatch of publication quantity, we only depict the top-20 most productive foundations of Entropy by CiteSpace in Figure 5. In the figure, a node represents a foundation, and other basic instructions are the same as in Section 2.3. As can be seen, all the funds are in China (10), Europe (7), South America (3), and the USA (2). Funds in Europe and the Americas are at the upper right and left, respectively. There are two USA foundations, i.e., NSF and NIH. Spain is the most active country in Europe, owning two foundations in the figure.
Nearly half of the foundations are organized by Chinese administrations, occupying the entire lower part of the depiction. Still, NSFC is the most prolific one, which acts as a tremendous hub among Chinese funds and is even the key to push Entropy moving forward. Small-sized Chinese foundations include the Ministry of Science and Technology Taiwan, the Nanjing Normal University Research Foundation for Talented Scholars, the Jiangsu Key Laboratory of 3D Printing Equipment and Manufacturing, etc., which are managed by local governments and schools.
This difference may be principally ascribed to Chinese research policies and open access journals' business model. In China, the promotion and the payment of researchers, especially those who are young and have no standing in the academic world yet, depend heavily on their number of papers per year. For them, small funds are easy to apply for, and the quality and publishing speed of open access journals can meet their needs appropriately, even though they have to pay some money. With this in mind, it can also be expected that Chinese researchers will play a more and more important role in open access journals, as Chinese local academies and foundations have been increasing rapidly in recent years.
Most Cited Publications
Papers receiving more citations than others may contain the imperative or fundamental knowledge of their domains, which needs to be carefully studied. Therefore, Section 3 is arranged in order to find out the topic's and the journal's most attractive documents and to catch a glimpse of the major research areas. Table 3 lists the topic's most cited documents. At first, two facts should be explained. The mean publication year was about 2003, and this result is quite reliable because articles always need a long time to be widely accepted, and 10 more years is usually needed to accumulate sufficient citations, according to bibliometrics research [45]. In addition, more than a third of the papers in the list are reviews. Actually, this is natural and reasonable since high-quality reviews can provide comprehensive information and research tendencies, which are extremely useful to scholars.
Most Cited Literature of the Topic
The first article [46], a milestone astonishingly gaining more than 10,000 citations, provided an abstract framework and an information operator for the research of compressed sensing of objects. Its author, David L. Donoho, is a professor of statistics and of humanities and sciences at Stanford University. As a mathematician, David L. Donoho has immensely contributed to statistics, signal processing, and harmonic analysis, especially his algorithms, which have considerably promoted the maximum entropy principle. The second article [47] used the maximum entropy method in species geographic distributions because of its simplicity and accuracy. The third one [48], also a proceedings paper, proposed the holographic principle, a breakthrough in physics. Juan Maldacena, the author, is a theoretical physicist working at the Institute for Advanced Study in America, famous for his research on the holographic principle. Then, through further studies, many articles and scholars of the topic are highly related to physics, ecology, and computer science. Therefore, the work might as well list the most attractive papers in recent years for reference, since these areas are growing or upgrading at a fast rate. Table 4 lists the topic's top-20 attractive documents over the last five years. In contrast, there are more reviews appearing in Table 4, which demonstrates that this publication type is becoming more and more popular and in demand with time. Furthermore, the average values of NA, NI, and NR of Table 4 are much higher than those of Table 3, regardless of the first article [49], which was written by a collaboration including 244 authors and 99 institutions. Usually, an article authored by a large number of authors, institutions, as well as countries is considered to be more complex, difficult, and extensive, and it might be profound and detailed when referring to plenty of references.
Nevertheless, the most intriguing thing might be that the categories in Table 4 are more diverse than those in Table 3, and multidisciplinary, materials, and physics do appear more frequently in the former. In bibliometrics, this phenomenon may exemplify the evolution of research trends, which is worthy of an in-depth analysis.
Firstly, from a quantity perspective, Figure 6 portrays the proportion change of the top-10 categories relevant to the topic with a 20-year timespan. Apparently, the categories have not yet changed from beginning to end, while their proportions have varied in part. Over the past two decades, materials science multidisciplinary is the fastest growing category, which has risen by about 12%, followed by physics applied (2.8%), physics multidisciplinary (2.2%), and engineering electrical electronic (2.1%), yet the shares of astronomy astrophysics, physics particles, and biochemistry molecular biology have declined by more than 7%, 7%, and 4%, respectively. Furthermore, we investigated the first year that each category appeared in the topic and found several typical time intervals. For example, physics-and math-related categories appeared frequently for the entire 20 years. Even though these depictions are relatively rough and abstract, they still indicate that the change of research areas really exists. Combined with the information in Section 2, this analysis also explains the reason why Entropy has published quickly since the last seven years. By survey, Entropy has conducted a series of reforms and innovations in an effort to deal with the boom of various kinds of entropy articles. For example, professor and doctor Kevin H. Knuth has taken over as Editor-in-Chief since 2012; six sections were launched in 2014; and four new sections started to operate in 2018. All these improvements diversified its publications and improved its ability to better accommodate the change of the topic.
In short, all the evidence discloses the fact that entropy research is becoming more and more extensive, sophisticated, and interdisciplinary, so that research cooperation should be valued and emphasized unprecedentedly, and how to get general and specific research trends in a scientific way has increasingly become a key issue for domain experts.
Most Cited Literature of Entropy
The top-20 most influential documents in Entropy are listed in Table 5. There are three articles gaining over 200 citations. The most cited one [50] was published in 2014 by six authors from the Research Laboratory governed by the U.S. Air Force, proposing a high-entropy alloys design and evaluation method, which is mainly applied to transportation and energy industries. The second article [51] celebrated the 10th anniversary of permutation entropy by summarizing its theoretical foundations and major applications in areas of economical markets and biomedicine. The third paper [52] is a review that held that maximum entropy can be useful in recognizing or distinguishing wild animal's distributions and habitat selection and introduced the model's advantages, disadvantages, as well as future improvements in great detail. This review cited many references, like [47], the second most cited article of the topic over time, and can be considered as an overall retrospect about relevant research. These three articles above are all labeled as highly-cited papers by WoS. Then, through further survey, the journal has inherited and improved several areas of the topic and also has its own research favorites, which will be vividly demonstrated in Section 4.
Moreover, three facts should be explained. Above all, three quarters of the journal's documents are published in Special Issues. This well-targeted publication method could help researchers to find their interests in a more rapid and precise way, reflecting that the journal has been successful in its Special Issue development and really has a keen eye for research hotspots. Next, the average values of publication years, NA, NI, and NR in Table 5 are all larger than those in Table 3. As mentioned above, higher values of these indicators are often coupled with better performance. Although this can be partly ascribed to Entropy's publication burst in recent years, the fact still displays the journal's gradual improvement. Thirdly, it cannot be denied that there is a large gap between Entropy and prestigious journals, as no article of Entropy ranks on Tables 3 and 4. This conclusion seems to contradict Entropy's good performance and high impact factor.
After a comprehensive survey, two reasons can largely account for this result. First, the performance of Entropy can be attributed to not only its continuous advancement, but also open access journals' publication policies. Papers submitted to these journals would be quickly reviewed, and everyone has the equal right to read and cite them freely when they are accepted and published online, which has contributed to Entropy's high popularity. Second, highly-cited papers on the topic (including, but not limited to publications in Tables 3 and 4) are always related to theoretical and practical breakouts, while Entropy likes to issue reviews or articles that are easy to read and not so technical. Honestly, researchers in large numbers are more willing to publish their masterpieces in journals that are time-tested and technical-oriented, even though they may wait for a long time. Therefore, in such cases, Entropy is not their best choice.
Reference Co-Citation Networks
Along with the analyses above, the work not only witnesses the achievements of the topic and the journal, but also finds some evidence that indicates the topic's evolution and Entropy's publication interests. As a consequence, this section is aimed to further investigate these phenomena and to explore the hotspots by using reference co-citation networks.
Co-citation, proposed by Henry Small [5] in 1973, is a commonly-used measure in bibliometric research, appearing when two articles are cited together by any other papers. Conspicuously, the more citations two articles obtain together, the more related they should be. Over time, these relationships would be gradually synthesized as a huge network, which can vividly symbolize publications' evolutionary process, research areas, and hotspots.
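To make the counting rule concrete, the following is a small, self-contained C# sketch (our own illustration, not taken from the paper or from CiteSpace) that derives co-citation frequencies directly from citing papers' reference lists; the input structure and the identifiers in the example are assumptions.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch: two references are co-cited whenever they appear together
// in the reference list of the same citing paper, so counting every unordered
// pair per citing paper yields the co-citation frequencies.
class CoCitationCounter
{
    static Dictionary<(string, string), int> Count(IEnumerable<List<string>> referenceLists)
    {
        var counts = new Dictionary<(string, string), int>();
        foreach (var refs in referenceLists)
        {
            for (int i = 0; i < refs.Count; i++)
                for (int j = i + 1; j < refs.Count; j++)
                {
                    // Order the pair so (A, B) and (B, A) are counted together.
                    var pair = string.CompareOrdinal(refs[i], refs[j]) < 0
                        ? (refs[i], refs[j])
                        : (refs[j], refs[i]);
                    counts.TryGetValue(pair, out int c);
                    counts[pair] = c + 1;
                }
        }
        return counts;
    }

    static void Main()
    {
        // Toy example: two citing papers with overlapping reference lists.
        var papers = new List<List<string>>
        {
            new List<string> { "Small1973", "Donoho2006", "Phillips2006" },
            new List<string> { "Small1973", "Donoho2006" }
        };
        foreach (var kv in Count(papers))
            Console.WriteLine($"{kv.Key}: co-cited {kv.Value} times");
    }
}
```

In practice, tools such as CiteSpace derive these counts from the WoS records and then cluster and visualize the resulting network, but the counting step itself is conceptually this simple.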
Co-Citation Networks of the Topic
Visualized by CiteSpace, Figure 7 presents the co-citation network of the topic from 1999-2018, depicting 11 clusters, which are more stable and significant. According to Table 6, these clusters account for more than 42.13% of the entire references, from which the panorama of the topic could be fairly displayed.
Technical instructions should be introduced at first. In co-citation networks, clusters are ranked by their sizes, and their labels are keywords extracted from the references by log-likelihood ratio (LLR) algorithm. For completeness, the work also labels the clusters by two other famous algorithms, i.e., term frequency-inverse document frequency (TF-IDF) and mutual information tests (MI). A node, representing a reference, is colored in deep red if it has a citation burst, which reflects that the reference was significantly active at one point in history. Furthermore, it should have a dark purple legend composed of the author's name and publication year since the reference is highly cited in its cluster over time. A line linking two nodes symbolizes the two papers' co-citation relationship, and its thickness is in proportion to the frequency that the two paper have been co-cited. Colors reveal the chronological order, similar to the criteria listed in Section 2.
Cluster #1, for example, is the largest one, unanimously named graph entropy by the three algorithms. It has a 0.992 silhouette score in Table 6. The silhouette score reflects a cluster's homogeneity or consistency: the closer it is to one, the more homogeneous the cluster, and a score of 0.992 is usually considered extremely high.
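For reference, the paper does not spell out the formula, but the silhouette value commonly used for this purpose is defined for a cluster member $i$ as

\[ s(i) = \frac{b(i) - a(i)}{\max\{a(i), b(i)\}}, \]

where $a(i)$ is the average distance from $i$ to the other members of its own cluster and $b(i)$ is the lowest average distance from $i$ to the members of any other cluster; a cluster's score is the mean of $s(i)$ over its members, so values close to one indicate a tight, well-separated cluster.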
References highly cited or with citation bursts in Cluster #1 include [53][54][55][56], publishing in 2006, 2009, 2011, and 2008, respectively. The first one [53] is a fundamental textbook named Elements of Information Theory, which introduced nearly all of the essential knowledge in information theory. Needless to say, its author, Thomas M. Cover, is a great information theorist and past president of the IEEE Information Theory Society, dedicating his entire life to promoting the mix of information theory and statistics. Subsequently, the second paper [54] analyzed entropy-based molecular descriptors in chemical use. The third document [55] offered an overall introduction about the measures of graph entropy from a historical view, and the fourth article [56] provided a general structure of graph entropy and explored relationships among several kinds of graph entropies. The articles from the second to the fourth are all authored by Matthias Dehmer, a professor now teaching at the University of Applied Sciences Upper Austria, who has plenty of interests like data science, bioinformatics, machine learning, information theory, computational statistics, etc. In simple terms, Cluster #1 is a big family about entropy, graph entropy, and information theory, as well as their applications, reflecting entropy's, especially graph entropy's, development and practicability.
Then, we further investigate the most cited papers of Cluster #1 to explore the hotspots of this cluster, as in an area, citing papers is always the latest or representative extension of cited papers. These papers include [57][58][59][60][61], etc., mainly talking about new types of graph entropy measures and their extremal properties. Apparently, the research of Cluster #1 is still at the initial stage, i.e., theory development, and how to design graph entropy measures and prove their extremal values are the principal works at present. Figure 8 displays the relationships among the clusters in terms of timeline to make up the simplistic structure of Figure 7. As can be seen, Cluster #4, labeled as Clausius entropy, composability, and q-exponential distribution, is the oldest one existing from 1997-2006, which includes many papers having citation bursts. Therefore, by speculation, this cluster had declined and almost perished years ago after previous prosperities. Nevertheless, Cluster #4 closely connects with the sources of Clusters #2, #11, and #12, suggesting that it might largely affect these clusters or be the knowledge base of them. According to their life spans, the bars of Clusters #6, #7, and #11 are in dark color, relatively short, and without citation bursts, and even do not last to the present, which illustrates that these clusters are outdated and short of research value. However, Clusters #6 and #8 are highly related, so presumably Cluster #8 has carried forward the studies of Cluster #6. Definitely, the most important messages in Figure 8 are that Clusters #1, #3, and #12 still continued their strong performance in recent years, which should be surveyed in great detail because they may still maintain their tendencies currently.
Cluster #3 can be titled as permutation entropy, multiscale entropy, or detecting weak abrupt information, enjoying the longest lifetime from 2005-2017. Its papers that are highly cited or with citation bursts include [51,62,63]. The first paper [62] is a review presenting the theoretical bases and major applications of permutation entropy. The second one [51] provided a concept named composite multiscale entropy, which is more suitable for practical use because it has overcome the handicap of traditional multiscale entropy. The third article [63] applied the multiscale entropy method to human heartbeat fluctuations in order to verify the method's capability for biological signals' measurement. In short, this cluster basically refers to multiscale entropy, permutation entropy, and their applications. Wu [62] and Zanin [51] were issued by Entropy.
The most cited papers of Cluster #3 include [64][65][66][67], etc. These papers referred to theoretical and practical reviews of entropy methods [64] and permutation entropy [65], new entropy types based on permutation entropy [66], and a new algorithm for accelerating entropy computation [67], respectively. Unlike Cluster #1, Cluster #3 is relatively mature, and now, its theoretical innovation and optimization might be worthy of attention.
Additionally, in Figure 7, [51] is strongly co-cited with Sharma [68], a paper in Cluster #15. Sharma [68] employed a large number of entropy-based algorithms for electroencephalogram (EEG) signal evaluation, and Cluster #15, labeled ordinal pattern, sample entropy, and multiscale permutation, can be considered the practical extension of Cluster #3. Specifically, the milestones in Cluster #15 include [68][69][70][71][72], which mainly discuss the applications of permutation entropy, especially in clinical and medical use. Note that Sharma [68] and Unakafova [70] were also published by Entropy, which demonstrates the journal's standing and acceptance in this field. Cluster #12 is labeled pseudo-additive entropy, Tsallis entropy, and different entropy formalism. This cluster mainly concerns Tsallis entropy, Tsallis statistics, and their applications, including foundational articles such as [73][74][75]. The first paper [73] provided important and basic results about Tsallis entropy; its author, Constantino Tsallis, is a well-known theoretical physicist and the originator of Tsallis entropy and Tsallis statistics. In the second article [74], the author employed Tsallis entropy and Kaniadakis entropy to build a minimal entropy martingale for semi-Markov regime switching interest rate models, and the third reference [75] classified entropies according to their asymptotic scaling.
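As a brief illustration of the quantity at the heart of Cluster #12, the following sketch computes the Tsallis entropy of a discrete distribution; it uses the standard textbook definition (with k_B = 1), not any result specific to [73-75].

```python
import math

def tsallis_entropy(probs, q):
    """Tsallis entropy S_q = (1 - sum_i p_i**q) / (q - 1), with k_B = 1.

    In the limit q -> 1 it reduces to the Shannon (Boltzmann-Gibbs) entropy,
    and it is pseudo-additive: S_q(A,B) = S_q(A) + S_q(B) + (1-q)S_q(A)S_q(B)
    for independent systems A and B.
    """
    if abs(q - 1.0) < 1e-12:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

p = [0.5, 0.25, 0.25]
print(tsallis_entropy(p, q=2))  # (1 - 0.375) / 1 = 0.625
print(tsallis_entropy(p, q=1))  # Shannon entropy in nats, about 1.0397
```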
The most cited papers of Cluster #12 include [76][77][78][79][80], etc. Apart from a review of entropy applications in mathematics and science [80], these papers mainly introduce theoretical innovations or extensions of different kinds of entropies, especially those related to entropy functionals [76,78]. Cluster #12 appears to be mature and well integrated with other entropy studies, so work cutting across different entropy areas, and the applications that follow from it, may hold research potential.
Co-Citation Networks of Entropy
Similarly, Table 7 and Figures 9 and 10 present the co-citation information of Entropy, including 13 clusters.
The most immediately visible feature of Table 7 is that the labels of each cluster are more consistent, suggesting that Entropy's clustering result is more stable and clearer. However, in Figure 10, nodes and lines are intertwined almost in a crisscross pattern, so the journal's clusters appear strongly related, closely matched, and difficult to distinguish.
Several reasons can account for this paradox. Firstly, Entropy has 12 independent publishing sections; its research areas therefore grow side by side, and the corresponding references can be grouped accurately. Secondly, as an important part of the journal, interdisciplinary publications unavoidably draw on many different disciplines, which adds to the complexity of the co-citation relationships. Thirdly, an entire topic is even more complicated than a journal belonging to it, so the topic's labels are harder to summarize. Finally, as mentioned before, CiteSpace applies a number of data sifting and slicing criteria, so not all nodes and lines are visible when visualized, which makes the topic's networks look relatively simple. Consequently, this apparent paradox is quite plausible and incidentally reveals Entropy's interdisciplinary nature. By comparison, some clusters in Tables 6 and 7 are exactly the same, such as maximum entropy and black holes, if all kinds of labels are considered. Furthermore, Entropy has some distinct areas of its own, including transfer entropy, discrete wavelet entropy, etc., which seemingly reflects the journal's contribution and innovation within the topic. However, perhaps because of publication volume, Entropy's labels relate more to specific theory extensions or applications, while those of the topic are highly relevant to the basic concepts of entropy. In this respect, the role Entropy plays can only be partly demonstrated, and more detailed analysis is still needed.
Here, we use Cluster #0 to further discover Entropy's potential influence. Described as EEG signal or fault diagnosis, Cluster #0 has the most articles with citation bursts, including [51,62,68,72] and [81][82][83][84][85], and enjoys nearly the longest lifespan, from 2004-2018. Briefly speaking, Cluster #0 refers to the theoretical and practical extensions of multiscale entropy and permutation entropy, especially involving clinical medicine and EEG signal applications. As introduced before, the works [51,62,68,72] in Entropy's Cluster #0 are also the mainstays of the topic's Clusters #15 and #3, and the works [51,62,68] were published by Entropy. All of this indicates that Entropy's Cluster #0 plays a dominant role in the studies of multiscale entropy and permutation entropy, especially in EEG signal applications.
In summary, during the last 20 years the mainstream research has changed remarkably, and several strong and active domains remain that deserve to be studied and followed closely. Moreover, the contrastive analysis reveals that Entropy not only puts forward some distinctive research areas, but also plays a significant role in several cutting-edge areas of the topic.
Conclusions
In 2018, Entropy celebrated its 20th anniversary, and this work was intended to provide a bibliometric overview in commemoration of that milestone.
Through document investigation, the work proposed a new bibliometric analysis method to respond to the journal's features by analyzing the topic and the journal together. Based on the data from WoS in the range of 1999-2018, this review successively introduced the entropy topic's publication situations, influential papers, evolutionary path, and hotspots, and in this context, Entropy's impacts and inner patterns have been uncovered step by step. Major conclusions and comparisons are listed in Table 8.
In short, the entropy topic has already influenced or penetrated various disciplines, demonstrating its strength and great potential to the academic world. According to the geographical distributions, the USA and China are the two most productive nations, whereas European countries, especially Germany and France, play a pivotal role in international cooperation. The majority of the funds supporting the topic are natural science foundations established by European, North American, and Chinese governments at national or ministerial levels. Through decades of evolution, the research areas have varied significantly, and the hotspots mainly include graph entropy, permutation entropy, pseudo-additive entropy, etc. Specifically, for graph entropy, the key task is to improve its theoretical structure; for permutation entropy, theoretical innovation and optimization matter greatly; and for pseudo-additive entropy, cross-over studies and applications might be the first priority.
Besides, Entropy has experienced astonishing growth in recent years, attracting large numbers of scholars who are central to the topic. By comparison, Entropy is more popular among Asian researchers, and its major cooperation communities are located in Europe, North America, and the Middle East, making it more diversified and international. The funding situation of Entropy resembles that of the topic, but the journal is more favored by smaller, regional foundations in China. Moreover, with respect to research domains, the journal has contributed much to the topic, especially in leading the trends of multiscale entropy and permutation entropy.
Table 8. Main comparisons between the topic and Entropy.
Feature | Entropy Topic | Entropy Journal
Publication Trend | Owning a tremendous base; at the origin of its exponential increase. | Experiencing an exponential increase with a sharp slope; the most productive entropy-related journal since 2013.
Most Productive Author | Principally working in Asia and North America, especially China, which has 9 authors ranking in the top 20. | Attracting many celebrities of the topic; involving more diversified institutions and nations.
Most Productive Country | Including the USA, China, Germany, France, etc.; largely located in Europe. | Including China, the USA, Italy, Spain, etc.; but enjoying a more extensive geographical distribution.
International Cooperation | Mainly promoted by European countries, where Germany and France play a pivotal role. | Enjoying a global cooperation network in which the USA, Italy, Germany, and Saudi Arabia exert major influence.
Most Productive Fund | Countless; primarily natural science foundations in Europe, North America, and China. | Similar to the topic's situation, except that more small-sized Chinese funds come to the fore.
Most Cited Paper | Including many reviews; spanning plenty of disciplines, especially physics, ecology, and computer science. | Relatively new; with good bibliometric indicator performance; mainly published in Special Issues.
Hotspots | Changed significantly during the last 20 years; mainly including three areas at present, i.e., graph entropy, permutation entropy, and pseudo-additive entropy. | Playing a leading role in multiscale entropy and permutation entropy; having several innovative areas, like transfer entropy and discrete wavelet entropy.
|
v3-fos-license
|
2021-10-18T17:59:45.708Z
|
2021-09-30T00:00:00.000
|
239468382
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2304-8158/10/10/2326/pdf",
"pdf_hash": "28865260f7f821b79f79a7025a584b59ff430302",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44470",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "ec3d95d44bed9ebe17ff85a11a606ad65154a140",
"year": 2021
}
|
pes2o/s2orc
|
Solar Cookers and Dryers: Environmental Sustainability and Nutraceutical Content in Food Processing
This work reviewed the state of the art concerning solar cookers and dryers used in food processing. A general description of solar cookers and dryers was presented, with specific attention to equipment in which cooking takes place with the contribution of direct sunlight. Some insights into the history of the design and development of devices that use solar light to process food were provided. The possibility of storing the heat produced by solar light using Phase Change Materials was analyzed. Moreover, some "case studies" were reviewed and discussed, in which solar light is efficiently used to dry or cook food, focusing on the quality of the food in terms of nutraceutical content. The analyzed literature points out the need for further research on the effects produced by direct solar rays on different foods. Reliable data on this aspect will allow assessment of the quality of food transformation by solar cookers and dryers, adding a strong incentive to the development of such devices, up to now primarily motivated by energy-saving and environmental issues.
Introduction
In the last decade, research has given a lot of space and attention to the development of materials, instruments and production processes that take into account and allow environmental protection and sustainability [1][2][3][4][5][6]. Certainly, the development of new technologies and materials is moving in this green direction, as evidenced by the fervent scientific production on this issue. Furthermore, the use of clean and renewable energy represents a new possibility of development for many companies increasingly involved in the ecological transition. In this context, solar energy is enjoying great success, both in terms of investments for research and development, and also in terms of facilities for companies that are able to modify and/or implement processes using this source of energy [7][8][9][10].
In fact, solar energy is clean and safe and guarantees use without negative impact on the environment and society. Historically, solar energy has always been present in daily activities, such as cooking, drying food and clothes and heating water. Moreover, following innovative research activities, new fields of application of solar energy have been developed, which today are also used for drying, steam and energy production, water distillation and desalination, heating, cooling and refrigeration [11]. The solar food cookers (SCs) and dryers (SDs), even if they require an upfront expense, provide better taste and safe, marketable nutritious food. The food industry is also fully involved in these applications, both in the production and in the processing of food (cooking and dehydration) [12][13][14][15][16][17]. Numerous studies have shown that solar cooking and drying can
The History of Solar Cookers and Dryers
The use of solar light to cook or dry food is not only a recent development. The greenhouse can be considered the ancestor of the SC box and SDs, as both retain solar heat within a confined space. Building on this concept, in 1767 Horace-Bénédict de Saussure, a Swiss scientist, built the first real SC in history, with the aim of studying the collection of heat inside glass boxes. Another figure fundamental to the study of solar thermal energy was the French inventor Augustin Mouchot, who invented a glass boiler that, when exposed to the sun, brought water to the boil; the steam then powered a small steam engine [18,19].
Around the end of the nineteenth century the American Aubrey Eneas, retracing the work of Mouchot, built a large parabolic reflector in the USA and set up the first company involved in the production of solar devices. Frank Shuman (1862-1918) established the Sun Power Company and built the world's first solar power plant in Maadi, Egypt, consisting of parabolic mirrors powering a 60-70 hp engine capable of pumping 23,000 L of water per minute from the Nile to the adjacent cotton fields [20]. Unfortunately, even the results of Shuman's research did not reach the market, since they proved economically uncompetitive with coal and oil, which, still being cheap, remained the least expensive energy sources on the market.
In the same period, Alessandro Annibale Battaglia (1842-unknown) was the first to realize that, to concentrate much more energy, the receiver (the oven box) must be separated from the mirrors [21].
An important contribution to modern solar cooking was made by Maria Telkes [22]. During the 1920s she invented a plastic solar still capable of producing, by solar heating, a few liters of fresh water from sea brine for castaways. In 1948, Maria Telkes directed the construction of the first solar-heated experimental house, followed by thermoelectric generators for space and terrestrial uses. From 1959 she dedicated herself to the construction of a solar kitchen, a structure for outdoor use consisting of a central body in which food was placed and a series of reflective aluminum panels arranged to capture the heat of the sun. These reflectors, popularly known as the "Telkes kitchen", were among the best solar cookers and reached a temperature of 225 °C.
Another pioneer of solar energy technology was George Löf (1913-2009), director of the Industrial Research Institute at the University of Denver, Colorado. In 1950, he experimented with a parabolic SC which he nicknamed the "Umbroiler", as its shape resembled that of an umbrella [23]. This project proved economically unsuccessful at the time, although he subsequently managed to distribute other models of SCs in various third world countries thanks to UNESCO.
A further impetus to solar cooking came during the 1980s, due to the incipient oil crisis, with considerable experimentation in Europe and the USA. It is no coincidence that groups such as ULOG (Switzerland), EG Solar (Germany) and Solar Cookers International (USA) were born in those years. It was also in the 1980s that Barbara Kerr tested the efficiency of various types of SCs, especially wall-mounted SCs and solar panel ovens, and it is this last type that proved very successful.
In the same period, a large event was held in the Bolivian highlands, an area heavily affected by timber shortage and deforestation, to raise public awareness of solar cooking. The event was organized by the Pillsbury Corporation and Meals for Millions, which held cooking demonstrations and lectures on building SCs with local materials. In 1988, the Pillsbury Corporation, in partnership with Foster Parents (now Save the Children), sponsored a similar project in Guatemala. These were the first international projects in the field of solar cooking, and they paved the way for other similar projects that continue to this day.
Finally, the first mass utilization of solar cookers took place in India, where a special day involving a huge number of students from several schools is organized every year; during this event, the practical use of solar cookers is demonstrated.
The history of purpose-built dryers is more recent. Solar drying has been a simple and inexpensive way to process and store food since ancient times: this treatment removes the water or moisture present in various ingredients and prevents fermentation. In 1976, Everitt and Stanley made the first solar dryer that avoided the problems of drying in the open sun [24]. It was a simple box-shaped unit that allowed sunlight to enter through a transparent cover. The main purpose of this invention, covered by a United States patent, was to provide a novel method for overcoming the problems of open-sun drying.
In the following decades, solar drying technology has been the subject of research and improvement, both with natural and forced circulation and with regard to heating from auxiliary sources [15,24,25]. Sun drying is still the most common method used to preserve agricultural products in many countries of the world: if advanced techniques are not available, farmers spread their products in thin layers on open ground or on mats, where they are exposed to the sun and wind to dry. Several authors report that significant losses can occur during natural sun drying, mainly due to rodents, birds, insects, rain, storms and microorganisms [26]. It has been calculated that about 40% of food loss in less-developed countries derives from post-harvest handling, including bad or incomplete drying practice. For this reason, in many of these countries organizations and researchers are developing and comparing drying systems to obtain better performance in the food drying process from both an economic [27] and a nutraceutical point of view.
Solar Cooker and Dryer: Classifications
Here, we clarify the concept of solar cookers, because this term can cover several different types of device. We may refer to classifications such as the one by Aramesh (Figure 1) [28], which is based on how thermal energy from the sun is transferred to the cooking vessel. It is important to clarify that in this context the "cooking method" is "indirect" when the sun's energy is captured by a part of the system and then transferred to the cooking box by means of a specific fluid having a high thermal capacity (Figure 2).
However, since we are focusing on food transformation, we are more interested here in how energy is transferred to the food during the cooking process. Here, we may distinguish two different types of energy transfer, both of which use concentration by mirrors:
1. Indirect transfer of the sun's energy to the food (Figure 3B,C). In this case there is always at least a pot, pan, or stovetop in the middle. The mirrors take the form of a parabola and the receiver sits at its focus. Energy is concentrated on the pot, which then transfers it to the food. With this approach the cooking depends only on the temperature of the pot, and the food receives the same treatment as in conventional cooking with the heat source placed below the pot.
2. Direct light onto the food (Figure 3A). With this approach the sunlight is concentrated onto the food, and the cooking process depends on both sunlight and heat.
Within type (1), it is important to underline the advantages of the arrangement in Figure 3C, where the sunlight is focused below a stovetop through a second reflection. The focus is far from the parabola and the receiver is completely separated from the capturing system. This gives two advantages: (a) the size of the capturing system, and thus the power of the oven, can be increased; (b) to follow the sun, only the parabola needs to be moved, which keeps the focus always below the stovetop. Movement of the parabola can be automated, as suggested by Wolfgang Scheffler [29].
With regard to type (2), the approach based on sunlight concentrated onto the food (Figure 3A), it is also important to note that the food is cooked with a strong contribution from visible light, so the wavelengths involved differ from the irradiation used in standard ovens. Most SCs also have glass at the top of the box through which the light passes, since this glass (also called "float" glass) is transparent to visible, near-UV and near-infrared radiation. As is well known, the microwave oven cooks in a different way from a standard oven heated by electricity or fossil fuel, as shown in Figure 4; as far as solar cookers are concerned, the sunlight wavelength range is close to the IR but far from the microwave range.
Comparing the overall cooking efficiency of different SCs is not an easy task, because this parameter is strongly dependent on the test conditions: it is given by the ratio between the amount of solar energy transferred to the test load and the pot and the solar energy collected by the cooker. The value generally spans from 10 to 30%. In a recent paper, the overall cooking efficiency of funnel solar cookers was calculated by Apaolaza-Pagoaga et al. [30], with values spanning from 10.2 to 11.8%; these can be considered very good results for low-cost cookers. Higher efficiencies were found by El-Sebaii et al. [31], where a box-type cooker was tested: it was able to cook most kinds of food with an overall utilization efficiency of 26.7%. Recently, a SC with a high-performance light-concentrating lens was tested [32]; overall efficiencies were found to be from 7 to 11% when silicone oil was the load and from 11 to 26% when water was the load.
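As a purely illustrative aid, the sketch below turns the efficiency definition just given (heat gained by the load and the pot divided by the solar energy intercepted) into a small calculation. The numbers for masses, specific heats, irradiance, aperture area, and test duration are made up for the example and do not come from any of the cited tests.

```python
def overall_cooking_efficiency(m_load, c_load, m_pot, c_pot,
                               delta_T, irradiance, aperture_area, time_s):
    """Overall utilization efficiency of a solar cooker (dimensionless):
    sensible heat gained by the test load and the pot, divided by the
    solar energy intercepted by the cooker aperture during the test."""
    energy_absorbed = (m_load * c_load + m_pot * c_pot) * delta_T  # J
    energy_collected = irradiance * aperture_area * time_s         # J
    return energy_absorbed / energy_collected

# Illustrative numbers: 2 kg of water plus a 0.5 kg aluminium pot heated by
# 60 K under 800 W/m2 on a 1.5 m2 aperture over a 45 min test.
eta = overall_cooking_efficiency(2.0, 4186, 0.5, 900, 60, 800, 1.5, 45 * 60)
print(f"{eta:.1%}")  # about 16%, within the 10-30% range quoted above
```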
SDs (see their classification in Figure 5) are similar to SCs, but with two main differences: 1. the maximum temperature is around 60 °C [33]; 2. there is good ventilation to remove humidity from the air in the cabinet during the drying process.
SDs can differ in structure (Figure 6) but, in general, they allow food to be dried without problems of bad weather, pollution, or insect contamination. They should be easy to build, even in rural zones, and have a low cost of use [34]. In Figure 5 the "Controlled Drying" branch includes all these devices. A SD can be built simply as a greenhouse dryer (solar tunnel), in which sun rays selectively penetrate through glass or transparent polyethylene foils, or as a cabinet with many trays inside a rigid wood or metal structure closed by glass or transparent plastic (passive mode, Figure 5). Air circulation through perforated zones can eliminate moisture by internal convection (Figure 6A). In a higher-performance version, external pipes further heat the air by sunlight before letting it enter the cabinet (Figure 6B). In other cases, the cabinet allows no direct penetration of sun rays onto the food, and the sun is only used to heat the air or to produce energy through an absorber plate (Figure 6C). Lastly, hybrid sun dryers can use both sunlight and electricity (photovoltaic or from other sources) to heat the air and enhance the drying process (mixed mode, Figure 5) [27]. Natural or forced air flow removes most of the moisture from fruits and vegetables.
Developments in Technology: Heat Storage
Many SC configurations are widespread and the search for improved performance has continued over the years but has recently become much more pronounced. The classification of SCs is therefore a rather complicated task (Figure 1) [35]. As discussed above, today most SCs without thermal storage generally fall into two main groups based on how heat is transferred to the cooking unit: direct and not direct, when heat is collected directly for cooking or via a fluid, respectively.
Typical examples of direct solar cookers are box and concentrating models. In the box-type, transparent glass covers a well-insulated box while multiple reflectors generally help to direct the sun rays towards the box ( Figure 3A). Concentrating SCs, on the other hand, are based on optical principles that allow solar energy to be concentrated on the base of a pan or pot without intermediate obstructions, allowing it to reach very high temperatures ( Figure 3B).
Not-direct SCs, conversely, are somewhat more elaborate devices as they comprise a collector and a cooking unit. The collector gathers the thermal energy, while heat exchange between the cooking unit and the collector is achieved by means of an intermediate transfer fluid (Figure 3C).
In the case of SDs, due to their widespread use in rural zones, their structure is generally simple, as already presented (Figure 6), even if their use in the food industry is increasing [24].
As SCs and SDs rely on sunlight, their main limitation is that they cannot be used, or lose performance, when the intensity of sunlight is low or absent. Another important limitation is the typically rather long cooking time, which could expose the user to significant solar radiation. For SDs the process can extend over time, even for a few days; as a result, temperature fluctuations during night hours can slow down the drying process. To avoid these drawbacks, thermal energy storage (TES) is generally considered the best option. Among the commonly adopted techniques for storing thermal energy, three main categories can be pointed out: heat storage can be achieved thermochemically, by sensible heat exchange, or by latent heat exchange [36,37]. Since thermochemical storage is characterized by rather low controllability, which makes its use difficult, this storage method will not be discussed here.
Sensible Heat Storage
Sensible storage materials can be found in both liquid and solid states. In order to compare them, it is necessary to estimate their thermal conductivities, heat capacities and densities. Among liquids, vegetable oils (coconut [38] and sunflower) and mineral oils (Mobiltherm [39], Shell Thermia C and Shell Thermia B) have been studied and compared in recent articles, with sunflower oil [40] providing the best performance.
Among solids, particular attention was paid to cast iron and granite [41], while two-phase oil-pebble beds were also studied [42,43].
In the case of sensible heat storage, the amount of heat stored is generally rather low [44,45]. It can be stated that solid sensible heat storage materials offer better energy density and thermal diffusivity than their liquid counterparts.
Latent Heat Storage
Materials involved in storage using latent heat exchange are called Phase Change Materials (PCM). In other words, PCM are materials whose phase change is used to store and release heat, and according to recent studies they are able to store approximately 5 to 13 times more thermal energy per unit mass than materials in which only sensible heat is stored [46,47]. When a given temperature is reached, which varies from material to material, the absorption of solar heat from the environment leads to a phase transition of the material from solid to liquid. When the temperature drops, e.g., because of clouds or in the evening, the material solidifies and releases the stored heat. The stored thermal energy can then be used to extend the cooking process.
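The difference between the two storage modes can be illustrated with a back-of-the-envelope calculation; the material properties below (an oil for sensible storage and an erythritol-like PCM) are rough, assumed values chosen only to make the comparison concrete.

```python
def sensible_heat(mass, c_p, delta_T):
    """Heat stored purely by raising a material's temperature (J)."""
    return mass * c_p * delta_T

def pcm_heat(mass, c_solid, c_liquid, latent_heat, T_start, T_melt, T_end):
    """Heat stored by a PCM heated through its melting point (J)."""
    return mass * (c_solid * (T_melt - T_start)
                   + latent_heat
                   + c_liquid * (T_end - T_melt))

# 1 kg of material heated from 90 to 130 °C.
# Assumed values: oil c_p ~2.1 kJ/(kg K); PCM melting at ~118 °C with
# latent heat ~340 kJ/kg and c_p ~1.4 (solid) / 2.8 (liquid) kJ/(kg K).
oil = sensible_heat(1.0, 2100, 130 - 90)                # ~84 kJ
pcm = pcm_heat(1.0, 1400, 2800, 340_000, 90, 118, 130)  # ~413 kJ
print(f"{oil/1e3:.0f} kJ vs {pcm/1e3:.0f} kJ")  # roughly a five-fold gain
```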
Even though all types of phase transition are in principle suitable for a PCM, transitions involving gas phases are generally not exploited for these applications because of the large volumes or high pressures required to store the heat. On the basis of the requirements that a PCM must meet to serve as thermal storage, among the different phase transitions the solid-liquid one appears to be the most interesting for these purposes.
The main requirements are:
• phase change temperatures should be in the operating temperature range of the specific application;
• the enthalpy of phase change should be as high as possible;
• the phase change conditions should be reproducible;
• specific heat and thermal conductivity should be as high as possible;
• the volume should show minimal variations during phase transitions.
In addition, the material should remain chemically stable over repeated cycles, be inexpensive and readily available, and be non-corrosive, non-toxic and non-flammable [48].
A phenomenon that is quite common in some PCM and should be avoided is supercooling, i.e., the release of the latent heat stored during solidification at a temperature other than the melting temperature [49]. Generally, a high nucleation rate helps avoid supercooling of the liquid phase.
In direct SCs, the PCM is placed in contact with the cooking unit (typically under the absorber plate) [35] and heats the food by conduction and convection while it gradually solidifies. It has been verified that heat diffusion between the PCM and the cooking vessel is very slow and therefore takes a long time, particularly for evening cooking [50].
In not-direct SCs, the pan is connected to a collector via pipes, and during sunshine hours the solar radiation falls on an absorber (Figure 2). The heat transfer fluid passes through the absorber, gathering the heat and transferring it to the PCM, where it is stored and later released to the cooking unit. During the day, the heat transferred from the PCM is used directly for cooking, while the stored heat is used for cooking during the evening [51].
In SDs the PCM can be put in a latent heat storage tank external to the drying chamber [52] or in a flat plate collector connected on the top of a drying system structurally analogous to that represented in Figure 6B [53]. The energetic performances of various PCM solar dryers are extensively analyzed in the recent review of Mofijur [47].
Comparing organic with inorganic PCM, organic materials generally show larger latent heat, even if inorganic ones have higher heat capacities, densities and thermal conductivities, where comparable. Another important difference lies in the melting temperatures, which are generally lower than 120 °C for organic materials. However, it has to be considered that the PCM analyzed for solar cookers are only a small part of the available materials.
A wide range of potential PCM is presented in [63,64]. Furthermore, since cooking food generally requires temperatures above 100 °C, PCM with high phase change temperatures are necessary. For this reason, research on appropriate high-temperature PCM for solar cooker applications is very active, also bearing in mind that the cooking power is greatly influenced by the thermal diffusivity of the storage medium and by the TES design.
When comparing sensible and latent heat storages, it is evident that latent heat storage produces smaller temperature changes than sensible heat but higher energy storage capacity and lower energy losses during phase change. Moreover, the exploitation of this storage mode generally allows good control [44,[65][66][67]. For all these reasons, latent heat storage is the most exploited thermal storage technique in solar cookers.
As a final comment, it can be stated that, when possible, the best option is the combined use of both thermal storage methods: while the stored latent heat helps in cooking, the sensible heat material supports the performance of the PCM.
Processed Food by Sun: Ovens and Dryers
Cooking over a bonfire, or even dry dung, is an inexpensive and routine way of cooking in poor countries, while the charcoal barbecue is still much appreciated in the USA and Europe. These cooking methods enhance the production of heterocyclic aromatic amines (HAA) and Polycyclic Aromatic Hydrocarbons (PAH), which are extremely dangerous toxicants with mutagenic activity [68]. The use of a SC could significantly improve the health of billions of poor people and protect the environment while, in rich Western countries, it could be a substitute for barbecues to obtain healthier food at summer parties, or an option in calamities where gas or oil is temporarily inaccessible. Unfortunately, the dissemination of SCs depends on their adaptation to the social and cooking requirements of the people who use them, and a multilevel approach (economic, social, cultural and political) seems the only way to succeed [69]. In a report sponsored by the European Union, the use of solar cooking in Lebanon has been extensively scrutinized from an engineering, practical, social and economic point of view by Touma [70]. The study listed the opportunities that a SC could bring to poor families and even refugee communities.
The simplest SC, "Cookit" (www.solarcookers.org (accessed on 7 October 2020)), can be made of cardboard covered by aluminum foil, folded so that the sun converges on a pot enclosed in a transparent plastic bag. These very inexpensive devices (around 5 USD) can reach temperatures up to 135 °C, which permit water and food pasteurization along with food cooking. Touma reported the information present in the manual of this simple SC. Due to the low temperature reached, the cooking time for a 2 kg meal can vary from 1 to 8 h depending on the food (Table 1, Entry 1). With these cooking times, this kind of SC is considered an aid to everyday meal preparation but, where it is used, it has not completely eliminated traditional cooking methods. It can be useful for cooking rice, boiling eggs, or preparing potatoes, together with various dishes that do not need too much cooking time. With this aim, it can be used not only in rural households but also in hospitals and hostels of sunnier countries [34], or at campsites and outdoor lunches. The countries in which SCs have been most widely introduced are those with long periods of dry sunny days, so the majority of the literature comes from researchers in equatorial countries. A "combined solar baking oven" was developed by Mekonnen et al. [71] to solve the cooking problems of Ethiopian people, still largely accustomed to using biomass fuel in food preparation. The heating is obtained by sun rays directed onto the oven either from surrounding mirrors or from one parabolic mirror under the cooking zone. A rectangular, easily extractable tray was used to hold the bread during baking. With this device, the bread's baking time drops to 50 min, fast enough for everyday preparation without requiring the cook to attend the oven under the sun for long periods (Table 1, Entry 2).
As a matter of fact, cooking with less expensive direct solar devices requires checking the position of the oven during preparation, which often forces the cook to stay near the oven, under the sun, during the hottest hours of the day. To address this inconvenience, Singh [72] developed an indirect solar cooking system suitable for indoor cooking using a heater plate connected to an external parabolic collector. The energetic analysis was combined with the cooking of commonly used foods. During the cooking experiments the maximum temperature reached was 109 °C and the cooking times of the dishes were reasonable, spanning from 45 min for Maggi (noodles) to 90 min for pulses (Table 2, Entry 3).
As we have shown, a drawback of SCs is the difficulty of keeping the temperature high (180-250 °C) during the cooking time regardless of solar radiation. In a long cooking process, the maintenance of a stable, high temperature is important for the homogeneous cooking of the dish. In Nicaragua, the use of the SC to cook tortillas, a traditional and widely consumed dish, proved ineffective due to the difficulty of keeping the temperature high enough for a relatively long time to obtain a good result [74,75].
For these reasons, the introduction of PCM into SC designs, to ensure the completion of long cooking processes even without sunlight, is a great stimulus for their diffusion. Bhave and Kale [73] developed a solar storage cooking device consisting of a special pot that could be heated thanks to an outer PCM layer charged by solar energy. The system involves a separate outdoor absorption point where the PCM, a mixture of NaNO3/KNO3 (60:40) salts, absorbs heat from a parabolic solar collector. Afterwards, the pot is carried to the cooking point, allowing cooking inside the kitchen. The system was tested by cooking potatoes and rice (Table 1, Entry 4). Two batches of rice (125 g each) in boiling water (200 mL) were cooked in 20 + 20 min. Furthermore, 250 g of potatoes were fried in peanut oil in 17 min, at an almost constant temperature (150-170 °C), with very good results. Normally, the impossibility of frying with a SC, due to the need for high constant temperatures, was considered a drawback to its adoption given the cooking habits in many countries [70]. With the device developed by Bhave and Kale, the problem seems partially solved.
In wealthier countries the use of the SC is more connected with environmental and social concerns or with chefs' experimentation. Many internet sites present recipes specifically designed for SCs, and famous chefs take up the challenge of cooking delicious dishes using this alternative way of cooking.
However, even if in recent decades a great deal of effort has been devoted to improving the technical performance of SCs to permit their broad use, few research groups have investigated the impact of solar cooking on the flavor, taste, and nutraceutical properties of the food so prepared. This is partially because many SCs are used as a traditional oven, cooking food only by heat irradiation; in these cases, the effects of baking are similar to those of a traditional oven used at analogous temperatures.
The heating of food at a high temperature (generally 180-250 °C) triggers loss of water, denaturation of proteins, gelation of collagen and depolymerization of complex sugars. The main visible effects are the changes in texture and colour, together with the generation of aroma and flavour derived from the Maillard reaction between proteins and carbohydrates [76]. This is especially pleasant in roasted meat due to the increase in tenderness, with consequent ease of chewing and digestion. Obviously, many other effects are less desirable, such as faster oxidation of lipids, degradation of vitamins and carotenoids, loss of antioxidants [77] and the production of various derivatives such as hydroxymethylfurfural (HMF) and acrylamide [68].
Generally, every kind of cooking method produces a variation in vitamin content, even considering the weight change due to water loss. For instance, in the case of the hydrosoluble vitamin C, research has demonstrated that the higher the quantity of water used in cooking (boiling, blanching, steaming and microwaving) the higher the loss detected. In this case, microwave cooking can almost completely retain vitamin C content [78]. On the other hand, during the cooking process vitamin E is retained more in green vegetables than in roots, while beta-carotene content is completely preserved or even enhanced by the partial disruption of its complexes with proteins [79].
The use of direct solar irradiation on food, together with heating, could further modify its nutraceutical properties. UV light is helpful in preventing bacterial spoilage and fungal infection [80], but it can activate photolytic radical reactions that further decrease the content of antioxidants, vitamins and other nutraceuticals that are fundamental for the beneficial impact of food on our diet. A few studies have analyzed the effect of direct solar irradiation on cooked food; however, many research groups have studied the effect of the sun on food, seeds and vegetables dried using SDs, and these studies can give an idea of the possible effects of UV light in direct solar cooking. In many developing countries SDs are increasingly used for the production of dried local fruits, vegetables and spices. Drying has traditionally been used to prevent microbial and fungal attack on harvested products: food dehydrated to a safe extent has a longer storage time and can be more easily used and exported, contributing to healthier lives for farmers and to economic development [27]. The various drying processes analyzed in this paper are reported in Table 2 with their final moisture content and dehydration time. For instance, a SD can be filled with up to a 50 kg load of wheat flour or with up to 4 kg of curry leaves, and the drying process may reach the required water content in 4-6 h (0% moisture in flour, and up to 4% in curry leaves) (Table 2, Entry 1) [12]. As SDs are the most used devices for processing food by the sun, many research groups have analyzed different drying methods (natural solar desiccation, SD, electric oven, etc.) to compare nutritional and sensory qualities, sometimes also considering the economic sustainability of the technique for farmers [89].
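For readers less familiar with how the moisture figures in Table 2 are expressed, the following sketch computes wet-basis moisture content from sample masses during a drying run; the masses are invented purely for illustration.

```python
def moisture_content_wet_basis(current_mass, dry_matter_mass):
    """Wet-basis moisture content (%) of a sample during drying."""
    return 100.0 * (current_mass - dry_matter_mass) / current_mass

# Illustrative run: 10 kg of fresh produce containing 2 kg of dry matter.
dry_matter = 2.0
for mass in (10.0, 5.0, 2.5, 2.08):
    mc = moisture_content_wet_basis(mass, dry_matter)
    print(f"{mass:5.2f} kg -> {mc:4.1f}% moisture")
# 80.0%, 60.0%, 20.0% and ~3.8%: the last value is in the range quoted
# above for well-dried curry leaves.
```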
Mohammed et al. compared traditional desiccation systems for mango and pineapple with an improved SD, a solar cabinet connected to a series of empty tubes that enhance air heating by the sun (see Figure 6B) [81]. This SD enhanced the organoleptic attributes, for instance increasing the sugar content (in mango: 14.3 ± 0.3 vs. 9.1 ± 0.3 mg/100 g) and the mineral content (Ca, Zn, P, Fe, Mn and Cu) of the dried fruits (Table 2, Entry 2) with respect to the fresh fruit. Furthermore, even if all drying processes lowered the total phenolics, Vitamin C and Vitamin A in comparison with fresh fruit, the improved SD produced smaller losses than open solar drying. Compared with traditional processes, in mango dried with this improved SD the total phenolic content (TPC) was 0.75 ± 0.03 vs. 0.44 ± 0.06 g/100 g, Vit C 35.6 ± 0.4 vs. 27.5 ± 0.4 mg/100 g and Vit A 849 ± 6 vs. 820 ± 4 mg/100 g [81].
Many studies have shown that fruits lose nutraceuticals in direct relation to the heating temperature and duration of the drying process [12]. In general, the higher the temperature and the longer the drying time, the higher the loss of antioxidants and vitamins [81]. Similar studies on eggplant drying confirm that the TPC, beta-carotene content and antioxidant capacity were affected by all desiccation methods as a function of the temperature used [90]. It must be kept in mind that too rapid a desiccation should be avoided, because it could lead to incomplete elimination of moisture in the core of the fruit pieces, producing storage and health problems.
Vangdal et al. [83] also verified that not all plum cultivars behave in the same way when exposed to various drying processes (Table 2, Entry 4). In a comparison between conventional and organic plums, the anthocyanin, neo-chlorogenic acid (NCA) and ascorbic acid contents turned out to differ after oven drying (OD), SD or freeze drying (FD). Although FD always proved to be the technique that preserves the largest amount of antioxidants and vitamins, the level of NCA and the Folin Ciocalteu index were less depleted by OD, while SD produced a smaller loss of anthocyanins and Vit C, above all in Jubileum cultivar plums.
The data of Deus et al. [33] go against the general trend just highlighted: in a comparison of different cocoa bean drying methods, the antioxidant activity, together with the phenolic and methylxanthine content, was better preserved by the traditional open-air sun drying method. In particular, catechins (0.02 ± 0.00 vs. 0.037 ± 0.00 mg·g−1), epicatechins (0.09 ± 0.05 vs. 1.037 ± 0.02 mg·g−1), caffeine (1.60 ± 0.06 vs. 2.33 ± 0.02 mg·g−1) and theobromine (11.14 ± 0.59 vs. 14.96 ± 0.55 mg·g−1) were all less concentrated after the drying process using a solar cabinet ( ), confirming the lower antioxidant depletion obtained by traditional exposure to direct sun. These data could be due to the different temperatures of the seed drying processes. Generally, direct sun drying works at lower temperatures than artificial dryers and, for this reason, the loss of antioxidants and alkaloids could be reduced by using traditional methods. Furthermore, the cocoa seeds could be less sensitive to UV light than fruits, due to the stronger structure of the tegument in comparison with fruit peel.
Many desiccation methods have also been compared to find more eco-friendly and economical techniques for the preparation of flours, dry leaves or spices useful to the food industry. In the study by Kiharason et al., the nutrient integrity of pumpkin flour dehydrated using the SD (Table 2, Entry 6) was compared with electric oven and open sun drying. All desiccation processes enhanced the content of beta-carotene (fresh fruit: 16.6 µg/g; dried flour: 74.84 µg/g) and proteins (fresh fruit: 2.6%; dried flour: 13.8-16.5%), while lowering the level of calcium and iron by half, and of zinc by a fifth, compared to the fresh fruit. Interestingly, direct solar drying produced a lower concentration of beta-carotene than solar cabinets because of the partial photo-degradation of carotenoids [84].
On the other hand, in the drying of Stevia R. leaves using a SD with natural (Table 2, Entry 7) or forced air flow, with or without mesh protection, the analysis of color change showed that the temperature is more crucial than the UV exposure [85]. Analogously, lemongrass dried with a SD (50.6 °C, Table 2, Entry 8) showed a greater variation of color and pH (6.25 vs. 5.9) with respect to direct sun drying (34.7 °C), probably owing to the higher thermal stress suffered [86].
In addition, spices have been studied to compare the effects of OD and SD processes. In the case of hihatsumodoki fruits (Piper retrofractum Vahl), an Asiatic pepper, the highest content of piperine (21.7 ± 3.2 mg/gdm) was obtained by solar drying (41.9 °C, Table 2, Entry 9), while antioxidants were better preserved by oven drying at 60 °C than by the SD, as evidenced by the Total Phenolic Content data (SD vs. OD = 18.5 ± 3.8 vs. 15.2 ± 2.2 mg GAE/gdm) [87]. Similarly, ginger powder obtained by desiccation under shade at room temperature outperformed both sun (Table 2, Entry 10) and oven drying. In particular, when using the SD the moisture content was lowered more (3.5 ± 0.08% instead of 3.8 ± 0.08%), but the beta-carotene (0.68 ± 0.02 vs. 0.81 ± 0.01 mg/100 g DM) and ascorbic acid (2.2 ± 0.08 vs. 3.8 ± 0.07 mg/100 g DM) contents suffered the greatest losses [88].
As we have shown, research has either investigated the cooking times of foods to analyze the energy performance of the various SCs, or focused on the variation of nutraceuticals only in SDs. To the best of our knowledge, no general studies are available on the effect of direct sunlight on cooked food. Certainly, this gap should be filled in order to gain a deep and complete knowledge of the potential and drawbacks of these devices.
Perspectives
Today, the use of solar cookers is not widespread globally. To our knowledge, they are present in a few areas of developing countries, or in refugee camps thanks to humanitarian associations, while in Europe and the USA SCs are only occasionally used in camping areas or by amateur groups. The development of SCs has two different aims: (i) to be low-cost and durable, to help poor people cope with fuel shortages; and (ii) to keep the quality high and ease the cooking process with a device that does not require continuous repositioning of the oven.
From an engineering perspective there are several improvements that could help achieve even better results with the cooking process. As for the first aim, many researchers are studying low-cost solar cookers that withstand use better than the simple panel model, in comparison with other cooking systems [91].
As for the second aim, at the moment it is not possible to regulate the temperature accurately. This is a limit experienced by skilled chefs who have started to use the solar cooker in several events worldwide. Taking the Scheffler SC model as a reference, it is technically possible to automate the mirrors so as to concentrate the sun by moving each of them, or a group of them, according to the instantaneous temperature in the oven.
In this way the oven could maintain a constant temperature and the cook could focus only on the dish.
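Purely as an illustration of the kind of control loop such automation might use (none of the function names, interfaces, or numbers come from the cited works), a toy proportional controller could look like the following sketch:

```python
import time

def control_mirrors(read_oven_temp, set_mirror_fraction,
                    target_temp=180.0, band=5.0, period_s=10.0):
    """Toy proportional controller for a Scheffler-type cooker.

    Every `period_s` seconds the fraction of mirrors aimed at the oven is
    adjusted so that the oven temperature stays near `target_temp` (°C).
    `read_oven_temp` and `set_mirror_fraction` are hypothetical callables
    standing in for the sensor and the mirror actuators.
    """
    while True:
        temp = read_oven_temp()
        # Fully focused when far below the target, progressively
        # defocused as the temperature approaches or exceeds it.
        error = target_temp - temp
        fraction = min(1.0, max(0.0, 0.5 + error / (2 * band)))
        set_mirror_fraction(fraction)
        time.sleep(period_s)
```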
Regarding the use of thermal energy storage systems in SCs and SDs, it is important to point out that PCM show both positive aspects and drawbacks. A thermal storage such as a PCM can prevent temperature gradients by imposing, during the cooking or drying procedure, the melting temperature of the material, which is supposed to remain constant during the phase change. By identifying the proper material and improving its thermal performance through suitable triggering processes, the time available for cooking can generally be extended by up to 3-4 times compared to SCs with no thermal storage, while the drying process can be shortened correspondingly. Moreover, the appropriate choice of PCM and, consequently, of its melting temperature, is equivalent to setting the cooking temperature of the food, which is an advantage for the cook, who does not have to waste time managing this parameter as well.
On the other hand, inserting a thermal store means adding a load that can delay cooking. In addition, the storage material should be one that does not damage the food in case of leakage. With this in mind, sugar alcohols, which can be obtained from fruit and plants, seem particularly appropriate.
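For a sense of scale, the back-of-the-envelope sketch below estimates the latent heat stored by a sugar-alcohol PCM such as erythritol and how long that reserve could sustain a modest cooking power; the mass, latent heat, melting point, and power values are approximate assumptions, not data from the reviewed studies.

```python
# Back-of-the-envelope estimate of latent-heat storage in a sugar-alcohol PCM.
# All values are approximate assumptions for illustration, not measured data.

pcm_mass_kg = 2.0                 # assumed PCM charge inside the cooker
latent_heat_kj_per_kg = 340.0     # approx. latent heat of fusion of erythritol
melting_point_c = 118.0           # approx. melting point of erythritol
cooking_power_w = 150.0           # assumed average power drawn by the pot

stored_kj = pcm_mass_kg * latent_heat_kj_per_kg
usable_minutes = stored_kj * 1000.0 / cooking_power_w / 60.0

print(f"Stored latent heat: {stored_kj:.0f} kJ at ~{melting_point_c:.0f} degC")
print(f"Could sustain ~{cooking_power_w:.0f} W of cooking for "
      f"~{usable_minutes:.0f} minutes after the sun is gone")
```

With these assumed numbers, roughly 680 kJ is stored, i.e., on the order of an hour of low-power cooking after irradiation stops, which is why the stored melting temperature effectively fixes the cooking temperature.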
Conclusions
In conclusion, this review provides a broad overview of SCs and SDs applied to food processing. The aspects explored range from the technical description of SCs and SDs, to the possibility of storing the heat produced by solar irradiation in PCMs and then using it for time-consuming cooking or drying procedures, to the analysis of the composition of cooked and dried food, with special attention to the nutraceutical content after drying. To summarize the main results of this study, we would like to highlight some strong points and open issues in the analyzed literature, which are also useful for opening challenging perspectives for future studies in the field.
Concerning the effect of SCs and SDs on food, no complete and exhaustive research is available on the effects of solar cooking on food, while studies on solar drying have evidenced a heterogeneous behavior depending on the type of processed food, direct or indirect irradiation, and drying temperature and time. The reported data confirm the role of direct UV rays in the depletion of vitamins and antioxidants in fruits and vegetables. In some cases, the use of polyethylene foils that let only specific radiation through has been reported to overcome, or at least reduce, this problem.
While plenty of literature has analyzed the effect of sun drying on the nutraceuticals of food, the nutritional properties of food cooked using direct solar radiation have not yet been analyzed, and this could be a boost to the use of SCs in more developed countries, where a healthy lifestyle is increasingly valued.
It certainly appears essential to carry out further research in this field, in order to deepen the knowledge of the effect of the various radiations on nutraceuticals during solar cooking as well, and eventually to find solutions that selectively let only harmless wavelengths through. The final aim is to obtain foods that are not only tasty but also healthy with solar cookers, while preserving environmental sustainability.
Recent Advances in pH-Responsive Freshness Indicators Using Natural Food Colorants to Monitor Food Freshness
Recently, due to the enhancement in consumer awareness of food safety, considerable attention has been paid to intelligent packaging that displays the quality status of food through color changes. Natural food colorants show useful functionalities (antibacterial and antioxidant activities) and obvious color changes due to their structural changes in different acid and alkali environments, which could be applied to detect these acid and alkali environments, especially in the preparation of intelligent packaging. This review introduces the latest research on the progress of pH-responsive freshness indicators based on natural food colorants and biodegradable polymers for monitoring packaged food quality. Additionally, the current methods of detecting food freshness, the preparation methods for pH-responsive freshness indicators, and their applications for detecting the freshness of perishable food are highlighted. Subsequently, this review addresses the challenges and prospects of pH-responsive freshness indicators in food packaging, to assist in promoting their commercial application.
Introduction
Packaging is usually used to protect goods from environmental pollution and other effects (such as odors, vibrations, dust, physical damage, temperature, light, and humidity) and to provide consumers with commodity information such as production date, shelf life, nutrients, and usage [1]. It is crucial for guaranteeing food quality and safety and for contributing to prolonging shelf life and reducing food loss and waste [2][3][4]. Recently, with the improvement of living standards and the gradual enhancement of consumer health awareness, food safety issues have attracted considerable attention, further promoting the development of new packaging technologies. Traditional food packaging is mainly used as a protective barrier to resist external forces and to promote sales [5]. Oxidation, microbial growth, and enzymatic decomposition are the main causes of spoilage in many foods (such as grapes, pears, fish, pork, and shrimp) during production, transportation, processing, storage, and sale [6,7]. These processes are directly linked to the loss of food quality and safety, affecting consumers' health and the overall economy of the food industry [8]. Hence, protecting food quality is an important research orientation since it directly affects the goal of improving quality of life. Additionally, the improvement in consumers' awareness of food safety leads to higher requirements for food safety. Intelligent packaging improves product quality and safety while promoting product sales and enhancing the influence of a company [9,10]. Intelligent packaging involves intelligent devices that can detect the quality of packaged food or internal packaging environment factors such as temperature, pH, gas composition, spoilage metabolites, etc., providing consumers with chemical, physical, microbiological, and other quality information [11]. It has the ability to detect and record variations in the food or the packaging environment. Many of the natural food colorants used in such indicators also display antibacterial activity [27]. Most natural pigments can be obtained from agricultural waste and food-processing plant wastes such as fruit peel, pressed residue, etc. Consequently, research into pH-responsive freshness indicators not only promotes the rational utilization of waste but also provides a green substitute for synthetic dyes for monitoring food quality.
Methods for Detecting Food Freshness
Detection methods for food freshness include conventional methods (such as microbiological methods, physical and chemical detection methods, and sensory evaluation) and rapid nondestructive testing technologies [28]. Conventional detection methods are time-consuming, destructive, and costly [26]. During freshness monitoring, most freshness indicators need to be evaluated in conjunction with indicators of the food-quality change process (e.g., total bacterial count, TVBN value, weight loss, etc.) to further improve their application [29].
Nondestructive testing methods refer to the detection of the internal and external properties, state, and structure of the object by physical means without destroying the object. Traditional chemical detection methods generally require destructive pretreatment of analytes. Unlike traditional detection methods, nondestructive rapid detection techniques possess the advantages of not damaging the samples, fast detection speed, less contamination, and low analysis cost [30]. In food analysis and detection, nondestructive rapid detection techniques can be divided into optical methods, mechanical and acoustic methods, X-ray methods, electromagnetic methods, sensor methods, and other detection methods (microbial-indication and enzyme-indication freshness indicators), depending on the detection mechanism. Figure 1 shows the specific classification scheme. Electronic nose and tongue technology, biosensors, and pigment sensors have important applications in the identification of the quality of food products [31]. In food analysis, visible/near-infrared spectroscopy is mainly used to analyze the composition and quality of food products. Zhang et al. (2010) [32] used headspace solid-phase microextraction (HS-SPME) and gas chromatography-mass spectrometry (GC-MS) to investigate the volatile substances in different seafood products at different storage stages. Afterwards, the characteristics of the volatile substances were statistically interpreted using methods such as normalization and principal component analysis (PCA), thus establishing a systematic method for evaluating the freshness of stored seafood. Computer vision is a technology that can objectively obtain image information from the food being inspected through optical imaging sensors and then mine the food-quality characteristics contained in the image through image processing [33]. Xu et al. [33] used a PLS (partial least squares) algorithm to obtain data and developed a quality testing model based on the minimum error probability. The detection accuracy of the method was over 93%, and the classification efficiency was 5400/h, which indicated that the method was feasible for grading salted eggs. Meanwhile, electronic nose and electronic tongue technology has also been gradually applied to the quality identification of food. Han et al. [34] developed a new method for nondestructive detection of fish freshness using electronic nose and electronic tongue techniques combined with chemometric methods. A three-layer radial basis function neural network model was developed for the qualitative discrimination of fish freshness via PCA analysis of the electronic nose and electronic tongue data.
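As a concrete illustration of the normalization-plus-PCA workflow mentioned above, the short sketch below uses purely synthetic data standing in for e-nose or GC-MS feature vectors; it is not the pipeline used in the cited studies, only a minimal example of the technique.

```python
# Minimal chemometrics sketch (synthetic data): normalize feature vectors
# and project them with PCA, as in normalization + PCA workflows.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical dataset: 30 samples x 8 volatile-compound features,
# half "fresh" and half "spoiled" (spoiled samples shifted upward).
fresh = rng.normal(loc=1.0, scale=0.2, size=(15, 8))
spoiled = rng.normal(loc=1.6, scale=0.3, size=(15, 8))
X = np.vstack([fresh, spoiled])
labels = ["fresh"] * 15 + ["spoiled"] * 15

X_scaled = StandardScaler().fit_transform(X)   # normalization step
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)           # PC1/PC2 score-plot coordinates

print("explained variance ratio:", pca.explained_variance_ratio_)
for label, (pc1, pc2) in zip(labels[:3] + labels[-3:],
                             np.vstack([scores[:3], scores[-3:]])):
    print(f"{label:8s} PC1={pc1:+.2f} PC2={pc2:+.2f}")
```

In practice, the score plot (or a classifier trained on the scores) is what allows fresh and spoiled samples to be discriminated.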
Nondestructive detection techniques have attracted considerable attention, as they are nondestructive with respect to samples and environmentally friendly, with no contamination and a fast detection speed. With the improvement of requirements for food safety, nondestructive detection techniques have been developed in the directions of simplicity, miniaturization, portability, specificity, and high sensitivity. Hence, pH-responsive freshness indicators based on natural food colorants represent an indispensable development direction for non-destructive detection techniques.
Overview of pH-Responsive Freshness Indicators
Color, pH, and smell, along with variations in the internal and external properties of packaged food, are all important signs of food spoilage. These changes are mainly related to the decomposition of proteins, fats, and sugars by microorganisms and endogenous enzymes, as well as to the type of food (fish, meat, milk, vegetables, fruits, etc.) and the storage conditions [35]. In the process of growth and reproduction, microorganisms release various metabolites (hydrogen sulfide, amines, carbon dioxide, water, etc.), resulting in changes in the acidity or alkalinity of the food and of the environment around the food. For this reason, pH-sensitive colorants have been introduced into the preparation of freshness indicators. During changes in packaged food freshness, the pH-responsive freshness indicator reacts with the metabolites of microorganisms on the food and shows a color change due to the change in pH value [36,37]. Hence, pH-responsive freshness indicators convert qualitative or quantitative changes in the concentration of one or more substances associated with food spoilage in a package into a perceptible signal that can be visually detected by the consumer [38]. Color changes in pH indicators are usually attributed to the protonation or deprotonation of carboxylic acid or amino functional groups. Additionally, previous research has found that, when only one type of dissociable group exists, the degree of dissociation changes from about 20% to 80% over roughly two pH units [39]. Hence, the color range of the freshness indicator is mainly affected by the number of dissociable groups.
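This pH window can be made concrete with a simple Henderson-Hasselbalch calculation; the snippet below is an illustrative sketch (the pKa value is an arbitrary assumption, not taken from the cited studies) showing how the deprotonated fraction of a single dissociable group, and hence the color transition, spreads over roughly two pH units.

```python
# Degree of deprotonation of a single dissociable group vs. pH
# (Henderson-Hasselbalch form). The pKa value is an arbitrary example.

def deprotonated_fraction(ph, pka):
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

PKA = 5.0  # hypothetical pKa of the color-determining group
for ph in [PKA - 1.0, PKA - 0.5, PKA, PKA + 0.5, PKA + 1.0]:
    f = deprotonated_fraction(ph, PKA)
    print(f"pH {ph:.1f}: {100 * f:5.1f}% deprotonated")
# The fraction rises from roughly 9% to 91% across pKa +/- 1, i.e., most of the
# color transition of a one-group indicator occurs within about two pH units.
```

Indicators with several dissociable groups stack such transitions, which is why their usable color range is wider.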
Water-vapor permeability (WVP) and oxygen permeability (OP) are key barrier parameters in evaluating polymeric packaging films for food protection and shelf-life extension. Most of the prepared pH-responsive freshness indicators are composed of hydrophilic natural colorants and bio-based materials [40]. They contain more hydrophilic groups (e.g., hydroxyl, carboxyl, etc.), resulting in poor water-vapor permeability of the membrane. Numerous studies have found that electrostatic interactions between substrates in films and cross-linking between natural colorants and film substrates reduce the availability of hydrophilic groups in the substrate, thereby reducing the affinity for water molecules [41,42].
However, pH-responsive freshness indicators still have higher oxygen/moisture permeability than commercial films such as LDPE, polypropylene, and polyvinyl chloride. Hence, future studies may consider the use of multilayer films or the addition of other materials to further reduce the WVP of the freshness indicator.
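For context, WVP is usually derived from a gravimetric cup test; the minimal sketch below shows the standard calculation (water-vapor transmission rate multiplied by film thickness and divided by the vapor-pressure difference), with all numerical values chosen as hypothetical examples rather than measured data.

```python
# Minimal sketch of the usual gravimetric WVP calculation (hypothetical numbers):
# WVTR = (mass gain per unit time) / film area; WVP = WVTR * thickness / delta_p.
mass_gain_g = 0.45            # water mass gained by the desiccant cup over the test
test_time_s = 24 * 3600       # 24 h test
film_area_m2 = 3.0e-3         # exposed film area (about 30 cm^2)
thickness_m = 60e-6           # film thickness, 60 micrometers
delta_p_pa = 1753.0           # water-vapor partial-pressure difference across the film

wvtr = (mass_gain_g / 1000.0) / (test_time_s * film_area_m2)   # kg m^-2 s^-1
wvp = wvtr * thickness_m / delta_p_pa                          # kg m m^-2 s^-1 Pa^-1
print(f"WVTR = {wvtr:.3e} kg m^-2 s^-1")
print(f"WVP  = {wvp:.3e} kg m m^-2 s^-1 Pa^-1")
```

Lowering the hydrophilic-group availability through cross-linking, as described above, shows up directly as a smaller WVP in this kind of calculation.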
Synthetic Colorants
As one of the most striking features of food, color is one of the most important standards for recognizing and perceiving the quality and appearance of food, and it directly affects the choices, acceptance, and consumption tendencies of consumers [43]. Synthetic colorants are often used as additives in the food industry, especially in confectionery, and in indicators/sensors, due to their good stability, vivid color changes, and low cost. Synthetic colorants including phenolphthalein, bromocresol purple, bromocresol green, and methyl orange are widely applied in pH-responsive freshness indicators, showing red, blue, purple, yellow/orange, green, and other colors in different pH environments [44][45][46]. The prepared freshness indicators present a reversible color change in contact with gases, liquids, and semisolids of different pH values [47]. Chemical barcode sensors can be prepared by using bromocresol green as a pH-sensitive pigment. By studying the characteristics of such a sensor and its response to standard ammonia, it was found that these sensors show a visible color response to volatile nitrogen compounds [48]. Researchers have conducted extensive tests on cod and other fish to confirm that the sensor response is related to the growth pattern of bacteria in aquatic products and seafood, thus enabling real-time monitoring of changes in the freshness of various perishable products. Additionally, other researchers have reported studies with similar principles using sensors prepared from synthetic colorants in intelligent packaging to monitor food freshness [49][50][51]. To establish a more accurate freshness monitoring index system, an indicator based on mixed pH-sensitive pigments was also proposed as a "chemical barcode" to monitor deterioration in desserts and skinless chicken breasts [52,53]. Furthermore, pH-responsive freshness indicators consisting of two or more pH-sensitive pigments, such as bromocresol green and methyl red or bromothymol blue and methyl orange, can still be implemented as a single sensor. To overcome the difficulty that a single dye can hardly monitor food freshness accurately, Kuswandi et al. [54] proposed a packaged sensor label in which two synthetic dyes were used to prepare dual indicators to detect meat freshness. Since the freshness indicator used two synthetic dyes, the cross-referencing of colors could effectively avoid false positives, resulting in more accurate freshness testing. Applying the dual-dye freshness indicator to the monitoring of beef, they found a good correlation between the color variation of the indicator and the sensory evaluation, TVBN, and bacterial growth of the packaged beef. The freshness indicator was successfully applied to monitor the real-time freshness of beef at room and refrigerated temperatures. Hu et al. [55] prepared a pH-responsive antibacterial film by adding aminoethyl-phloretin to a mixture of polyvinyl alcohol and polyacrylic acid for smart food packaging. The film has strong antibacterial activity against Listeria monocytogenes and Staphylococcus aureus, so it can not only monitor the freshness of pork but also prolong its storage time. In general, synthetic pigments are considered to be potentially teratogenic, carcinogenic, or mutagenic compounds that might pose a risk to humans and other organisms and endanger the environment [56]. Consequently, current food research trends and consumers are more inclined towards natural food colorants.
The advantages of natural colorants, i.e., vivid colors, nontoxicity, environmental friendliness, and versatility, have encouraged researchers to utilize them as pH-sensitive colorants for freshness indicators [57].
Natural Colorants
Natural colorants have gradually become substitutes for synthetic colorants because they are generally non-toxic or of low toxicity, environmentally friendly, and easy to extract. Most natural colorants are polyphenolic compounds, which are widely found in the roots, stems, leaves, and fruits of plants. According to the chemical structure of their chromogenic groups, natural colorants can be classified into colorants with conjugated chromophores (e.g., carotenoids and betalains) and metal-ligated porphyrins (chlorophyll, myoglobin, and their derivatives) [58]. Natural colorants are usually extracted by solution extraction: organic solvents are used for lipophilic colorants, while water and low-carbon alcohols are used for water-soluble colorants [59,60]. New technologies such as ultrasound, microwaves, and pulsed electric fields can also be applied to extract natural colorants [61]. Natural colorants are used in the preparation of sensors and intelligent packaging systems as well as being used as food colorants, and they can exhibit distinctive color changes in response to acid-base variations in the surrounding environment [62]. The mechanism of pH-responsive freshness indicators based on natural colorants depends on the protonation/deprotonation tendency of the colorants in acidic/alkaline environments [63]. However, the quality of food can also be monitored by other types of sensors that show color changes for specific gases released by food [64]. The main colorants used in the preparation of pH-responsive freshness indicators are anthocyanins, curcumin, alizarin, and betalains. Among these, anthocyanins have been widely studied due to their wide color range, easy availability, and fast response speed [65,66].
Anthocyanin
Anthocyanins are flavonoid compounds formed from glycosylated polyhydroxy or polymethoxy derivatives of 2-phenylchromen, a natural colorant that reflects light from red to blue in the visible spectrum [67]. More than 600 types of anthocyanins have been found in the environment. There are mainly six types of anthocyanins in plants: cyanidin, petunidin, delphinidin, malvidin, peonidin, and pelargonidin [68,69]. The color-changing function of anthocyanins as pH-sensitive colorants depends on the structural changes caused by the acid-base properties of the environment [70]. When the environment is strongly acidic, the structure of anthocyanin is mainly in the form of red flavylium cations [71]. With an increase in pH value, the flavylium cation structure is destroyed and rapidly hydrated on C-2 to form a colorless carbinol pseudobase, and the red color becomes pale. Under neutral and alkaline conditions, a large number of purple quinone base structures and blue quinone base structures are produced and gradually transformed into pale-yellow chalcone structures [72]. The stability and color of anthocyanin is greatly influenced by external conditions such as pH value, temperature, enzymes, metal ions, and so on, which have influences on the application as a freshness indicator [73]. A large number of studies have indicated that the interaction between anthocyanins and bio-based materials plays an important role in enhancing the stability of anthocyanins, which may be controlled by electrostatic interactions [74]. Other studies have proved that intermolecular forces can further extend the π-π conjugate system of anthocyanins and further enhance their color-changing effect [75]. The color changes and mechanism for anthocyanins are shown in Figure 2A.
Figure 2. Color changes and mechanisms of (A) anthocyanins [5], (B) curcumin [76], and (C) alizarin [77] at different pH values. Source: reprinted with permission from Liu et al. [5], 2021, Elsevier; Ezati et al. [76], 2020, Elsevier; and Roy et al. [77], 2021, ACS.
Curcumin
Curcumin is a diketone compound extracted from the rhizome of zingiberaceae, which has excellent anticancer and anti-inflammatory effects. The main curcumin compounds in turmeric are curcumin, demethoxycurcumin, didemethoxycurcumin, and cyclocurcumin [78]. Curcumin has been recognized as a powerful antioxidant due to the presence of an O-methoxyl group, and its multi-dimensional therapeutic effect on numerous chronic diseases has been well proved. It can be used in the preparation of food packaging films on account of its antibacterial and antioxidant properties [76]. Meanwhile, due to its structure change under different pH conditions, curcumin can present visible color variations in different acid-base environments. Curcumin possesses an ordered crystal structure composed of β-diketone groups consisting of two aromatic rings with a methoxyl group and a phenolic hydroxyl group. Enol and keto are two possible tautomeric forms of curcumin, which can be transformed by pH changes in the environment. In polar, acidic, and neutral media, the keto form is dominant, while in non-polar and alkaline environments, the enol forms appear [79]. The structural features of curcumin are illustrated in Figure 2B. Curcumin usually has poor solubility in aqueous solutions, and its stability decreases under strong acid and alkaline conditions [80]. Hence, the extraction method for curcumin mainly uses organic solvents for liquid extraction [81]. Research on freshness indicators based on curcumin remains surprisingly scarce due to the poor stability and solubility of curcumin in strong acid and alkali environments.
Alizarin
Alizarin (1,2-dihydroxyanthraquinone, C14H8O4) is an orange crystalline dye derived from the root of Rubia officinalis. It is often used as a fabric colorant in industry [82]. Via the transfer of protons, the hydroxyl group of alizarin interacts with the carbonyl group to form hydrogen bonds, with a color change from yellow to purple [83]. At low pH values, the remaining charged molecules appear yellow due to ionization of the phenolic hydroxyl groups on alizarin and the effect of the azo groups in azobenzene dyes. With an increase in pH value, primary and secondary dissociations of the phenolic hydroxyl groups occur under the action of a resonance effect, leading to the appearance and accumulation of single anions in the solution; thus, the solution changes from yellow to purple [77]. Additionally, alizarin also possesses anti-ultraviolet and antibacterial properties. Since microbial growth and enzymatic decomposition release volatile alkaline compounds that change the pH of food, the pH-responsive discoloration of alizarin can be used as a freshness indicator for packaged food [84]. Alizarin has been used in the preparation of meat freshness indicators. Figure 2C shows the color change and mechanism for alizarin.
Betalain
Betalains are water-soluble colorants found in amaranth, beets, prickly pear, and dragon fruit plants [85]. Currently, they are used as food additives in various foods such as meat, dairy products, poultry, soft drinks, and so on. Structurally, betalains can be classified into yellow/orange betaxanthins and red/purple betacyanins [86]. Red/purple betacyanins are composed of cyclo-3,4-dihydroxyphenylalanine (cyclodopa) and betalamic acid (the chromophore). Yellow/orange betaxanthins are condensation products of betalamic acid with an amino acid or amine [87]. The structure of betalains is relatively stable in neutral and acidic environments, so they are often used as additives in acidic food. The structure and color of betalains change with an increase in environmental alkalinity and with changes in temperature and light. In strong alkali solutions, betalains can gradually degrade into colorless cyclo-DOPA 5-O-(malonyl)-β-glucoside and yellow betaxanthins [88]. Recently, a number of experiments have proved that betalains possess antibacterial, anticancer, lipid-lowering, and antidiabetic properties [89]. They can be used in the preparation of active packaging and intelligent packaging due to their antibacterial, antioxidant, and pH-dependent color-changing properties. Betalains are polyfunctional pigments that can be used in the preparation of smart indicators due to their color diversity in different acid-base environments.
A large number of studies have revealed that natural colorants can be used for monitoring food freshness. Natural colorants are generally non-toxic or of low toxicity, with antioxidant and antibacterial properties that contribute to prolonging the shelf life of food in active packaging. However, natural colorants have certain limitations, including strong water solubility and degradation under strong light, extreme temperatures, and non-neutral pH conditions. Consequently, the future development of natural colorants should address these shortcomings, in order to further expand their scope of application.
Shikonin
Comfrey is a herb with a wide range of pharmacological activities, including wound-healing, antibacterial, anti-inflammatory, and antitumor activities. Shikonin is a natural naphthoquinone colorant extracted from comfrey; it is commonly immobilized in polysaccharide carriers such as agar, whose main chain consists of alternating 1,3-linked D-galactose and 1,4-linked 3,6-anhydro-L-galactose units and which has a complex multiphase structure [90]. This naphthoquinone pigment is also a pH-sensitive dye with a more stable structure than the hydrophilic anthocyanins. In addition, the structure of shikonin contains more hydrophobic groups, so its use in pH-responsive freshness indicators can enhance the hydrophobicity of the indicator and thus expand its range of application. Huang et al. [91] incorporated shikonin into agar to prepare freshness indicator films and applied them to the freshness monitoring of fish. The color response of the freshness label was found to be consistent with the deterioration thresholds of the total viable count (TVC) and the total volatile basic nitrogen (TVBN) content in fish samples. The indicator film provided a nondestructive and convenient way to assess the freshness of fish in storage. Dong et al. [90] prepared a novel hydrophobic colorimetric film using cellulose and shikonin to improve the mechanical properties and hydrophobicity of the colorimetric sensing membrane. The colorimetric film could monitor the freshness of shrimp and pork under storage conditions of 20 °C, 4 °C, and −20 °C. Further findings revealed that the performance of the novel colorimetric film for freshness monitoring of meat products was consistent with the current Chinese standard. Huang and Dong also investigated the stability of shikonin and found a slight red shift in its UV spectrum under acidic conditions, indicating that its structure is more stable under acidic conditions [90,91]. However, the absorption peak intensity in the UV spectrum increased with increasing alkalinity, indicating that the chromophore structure of shikonin changes under alkaline conditions.
Polymer Support
Generally, the polymer support used to immobilize pigments is a key component of pH-responsive freshness indicators. Polymer supports can be divided into synthetic polymers and biopolymers (proteins and polysaccharides), and they must meet some basic requirements for the preparation of pH-responsive freshness indicators: (1) the polymer should be water-based, which helps fix water-soluble natural pigments; (2) it should be almost colorless, to avoid masking the color of the natural dyes and interfering with monitoring; (3) it should ensure the stability of the natural pigments at low or high pH values; and (4) it should possess sufficient mechanical strength. Numerous synthetic polymers and biopolymers have been used in the preparation of freshness indicators, e.g., filter paper, polyethylene, starch, polyvinyl alcohol, chitosan, cellulose, and κ-carrageenan [92,93].
Synthetic Polymers
Synthetic polymer supports are mainly based on petroleum polymer, which possesses excellent physical, chemical, and mechanical properties for resisting external temperature, microbial, and physical/chemical damage. Pacquit et al. [48] first obtained a smart sensor by coating a polyethylene terephthalate (PET) film with bromocresol green. The smart sensor detects changes in fish freshness quickly and shows great potential as a food-quality indicator. Since the dye migrates during application and its response to pH changes is susceptible to temperature, Kuswandi et al. (2012) [94] developed a novel colorimetric method based on polyaniline (PANI) films that could be used to monitor changes in fish freshness in real time, while being reusable after acid solution treatment. Wang et al. (2018) [95] prepared a functional reproducible colorimetric indicator based on polyaniline that could be used for fish freshness monitoring. Due to the toxic and carcinogenic problems of synthetic dyes, Zhai et al. (2020) [96] prepared a non-toxic, low-cost colorimetric gas sensor using curcumin, a natural dye, co-extruded with low-density polyethylene (LDPE). With good stability and accurate monitoring of the TVBN gas associated with meat spoilage, the sensor has good prospects for smart packaging applications. Although petroleum-based polymer products are convenient, there are numerous environmental and health issues associated with them, so the search for suitable green materials to replace petroleum-based polymers is urgent.
Biopolymers
Recently, biopolymers have gained widespread interest due to their biodegradability and because of the accumulation of petroleum-based packaging polymers in the environment. Although petroleum-based supports possess excellent physical, chemical, and mechanical properties for resisting external temperature, microbial, and physical/chemical damage, their widespread application and lack of degradation have a hugely harmful impact on the environment. Consumers are increasingly inclined to choose environmentally friendly polymers, owing to the implementation of government environmental policies worldwide and increased consumer awareness of environmental protection. Many materials are available on the market to replace petroleum-based polymers, and the use of renewable resources to develop environmentally friendly and biodegradable materials could ameliorate the health and environmental problems associated with petroleum-based materials in food packaging [97].
Biopolymers are natural polymers derived from living organisms. They consist of covalently bonded monomer units and degrade naturally in the environment [98]. Biopolymers such as lipids, polysaccharides, and proteins have been used in the preparation of packaging films. Polysaccharide biopolymers can form hydrogen-bonding and ionic interactions with pH-responsive colorants, enhancing their color-changing effect and reducing interference from the external environment. Hence, biopolymers such as chitosan, sodium alginate, cellulose, and pectin have been widely used in the preparation of pH-responsive freshness indicators [99,100]. Additionally, fruit and vegetable processing wastes such as fruit, peel, and residue could become a rich source of cellulose and polysaccharide biopolymers [101]. Biopolymers are widely used in active packaging and intelligent packaging due to their non-biotoxicity, biodegradability, and excellent compatibility. Biopolymers are often mixed with other polymers or modifying materials to overcome their high water solubility and their poor mechanical and barrier properties.
Preparation of pH-Responsive Freshness Indicators
The preparation methods for pH-responsive freshness indicators include casting, blending extrusion, compression molding, electrostatic spinning, coating, adsorption, and electrochemical etching. Figure 3 shows the process for preparing freshness indicators. The process of freshness indicator formation is influenced by several factors such as the molecular structure and compatibility of biopolymers and the particular application. Different preparation methods have an influence on the monitoring effect and stability of the freshness indicator. Hence, in this section, we discuss the preparation methods and deficiencies of pH-responsive freshness indicators.
Freshness Indicator Preparation by Solvent Casting
Solvent casting or flow drying is a common method of preparing freshness indicators at laboratory or pilot scales. The preparation for casting is mainly divided into the following three processes: dissolution, casting, and drying/molding. Firstly, the biopolymer, additives, and other components are dissolved in a suitable solvent to develop the mixed solution. The preparation conditions of the mixed solution mainly depend on the structure and application scenarios of the material. Then, the mixed solution is cast in a specific mold. The final step is drying, which promotes interactions among the polymeric molecules. It is a necessary step for obtaining excellent performance from the freshness indicator. Drying temperatures usually range from 20 °C to 60 °C. The drying temperature determines the drying time, which usually ranges from 6 h to 3 days.
Biodegradable materials such as polyvinyl alcohol, polysaccharide (cellulose, glue, chitosan, sodium alginate, etc.), and protein are often used in the preparation of freshness indicators. The hydrogen bonds between the polysaccharide polymer and the natural colorants can further promote their interaction, stabilize the natural colorants, and improve the effect of the color change in the slow casting process [102]. Compared with other polysaccharides or phenols, pectin and natural colorants have the highest affinity, which might be affected by electrostatic interaction and anthocyanin accumulation [103]. Accordingly, most of the studies on freshness indicators are related to pectin [77]. It is found that hydrophilic materials help the freshness indicator to absorb more water in the application process and react directly with volatile basic nitrogen to form NH4+, thus accelerating its color variation [104] (Table 1).
Freshness Indicator Preparation by Extrusion
The disadvantage of preparing freshness indicators by casting is that the evaporation process of the solution cannot be controlled, which leads to a residue of toxic substances in the freshness indicator, harming human health in the process of application [96]. The mechanical properties and barrier properties of freshness indicators prepared by the casting method are also poor.
Extrusion has become one of the main processing methods for petroleum-based polymers, including ethylene vinyl alcohol copolymer (EVOH) and polypropylene (PP). The working temperature for extrusion molding is generally 180 °C-290 °C, while the processing temperature of petroleum-based materials is above 200 °C, owing to their excellent thermal stability. Because the polymer material is easily degraded by moisture during processing, the water content of the film matrix must be kept very low. The extrusion process is mainly composed of three parts: (1) the feeding zone, where the polymer is evenly mixed under pressure and the action of the screw and moved into the next region; (2) the kneading zone, where the mixture is further homogenized and air is removed under the action of the screw and high temperature; and (3) the equalization zone, where the mixture, now a molten viscous fluid, is extruded quantitatively from the die head [110].
Mills et al. [105] first proposed preparing a hydrophobic gas-sensitive film by embedding colorants in polymer plastic by extrusion (Table 1). The main operational steps include feeding, melt plasticizing, extruding the film tube, blowing, shaping, and so on. For hydrophobic polymers, extrusion possesses numerous advantages such as fast preparation speed, ease of control, and high throughput. Bromophenol blue was embedded in low-density polyethylene (LDPE) by extrusion to prepare a hydrophobic NH3-sensitive film, which was successfully applied in smart packaging to monitor the real-time freshness of fish [64]. Zhai et al. [96] prepared a hydrophobic biogenic-amine-sensitive film by encapsulating curcumin in LDPE using a melt extrusion blowing method, with curcumin as the indicator (Table 1). The film exhibited excellent mechanical and barrier properties, together with potential for application in food freshness monitoring.
Freshness Indicator Preparation by Electrospinning
The high temperatures and pressures of extrusion can affect and deactivate the colorants in the freshness indicator. Electrospinning is an effective and versatile method for the preparation of nonwoven and continuous polymer nanofibers, with additional advantages with respect to orientation, excellent porosity, and fiber uniformity. Thanks to these splendid and interesting properties, electrospun nanofibers have been used in the preparation of food packaging materials [111]. In the process of electrospinning, the polymer solution or melt at the tip of the needle changes from a sphere to a cone (Taylor cone) and extends from the tip of the cone, giving fiber filaments under the action of a strong electric field. Generally, a high-voltage electric field, a nozzle, and a metal collection plate are the crucial components of the electrostatic spinning process. Specifically, the spun yarn ejected from the spinning needle is drawn and split by the external constant high-voltage electric field, while the solvent in the spun yarn evaporates rapidly. Due to electrostatic repulsion and stretching coupled with solvent volatilization, the fluid forms fibers with a small diameter. After curing, the fibers are arranged in a disordered fashion on the collection device to form a fiber felt similar to nonwoven fabrics [112]. The spinning voltage, polymer solution concentration, solvent volatility, and extrusion speed are the main factors affecting the performance of electrostatically spun fibers.
According to the different forms of raw materials, electrospinning technology can be classified into solution electrospinning and melt electrospinning [113,114]. It can also be split into needleless electrostatic spinning, coaxial or triaxial electrostatic spinning, and multi-jet electrostatic spinning, according to the design of the nozzle [115,116]. Electrostatic spinning technology is mostly applied to food preservation and antibacterial packaging and is less used in the study of freshness indicators. Liu et al. [117] prepared films using curcumin-containing maize alcoholic protein by forming fibers using electrostatic spinning. The films exhibited excellent antibacterial and antioxidant activities. Yildiz et al. [106] prepared a pH-halochromic sensor based on electrospinning nanofibers, utilizing curcumin, chitosan (CS), and polyethylene oxide (PEO) for detecting chicken freshness (Table 1). Their experimental results showed that curcumin nanofibers met the application expectations for providing visualization for detecting chicken spoilage.
Freshness Indicator Preparation by Compression Molding
Compression molding is a common method used in polymer processing to prepare continuous polymer materials by melting the polymer matrix at high temperature and pressure. Additionally, compression molding is a simple, fast, and low-cost method. Recently, Uranga et al. [107] prepared bio-based films based on anthocyanin and fish gelatin from food processing waste using compression molding and used them for active packaging (Table 1). Using compression molding, it is easier and faster to produce films in large quantities. Further investigation showed that the films prepared by the molding process were homogeneous. The addition of anthocyanins changed the optical properties of the film, making the film surface rougher, but improved its antioxidant properties, suggesting that the colorimetric film could be used as an active film to extend the shelf life of food products. In another study, Andretta et al. [62] prepared starch-based films by compression molding using blueberry pomace as a pH-sensitive colorant. They found that the addition of blueberry residues resulted in poor uniformity of the starch-based film but had no significant effect on the water content, moisture permeability, or mechanical properties of the film. Colorimetric analysis of the film revealed that it exhibited visually perceptible color changes in buffers with different pH values, indicating the potential for application in smart packaging. However, anthocyanins can be degraded by exposure to high temperatures during compression molding and, in food packaging applications, by interactions with visible light. Gaviria et al. [108] developed pH-indicator films based on cassava starch, Laponite, and jambolan (Syzygium cumini) fruit using compression molding (Table 1). Due to the presence of colorants such as anthocyanins in jambolan fruit, the film appeared purple. The addition of Laponite and jambolan fruit affected the chemical structure, the crystallinity, and the distance between the starch molecular chains of the starch-based film. When applied to monitoring steak freshness at different temperatures, the film exhibited significant color changes. Therefore, compression molding can be used for the mass production of freshness indicators by changing the working conditions of the compression molding process and adding other protective materials to the indicator preparation.
Freshness Indicator Preparation by Other Methods
Most of the early freshness indicators were prepared using synthetic dyes and filter paper, with the dyes fixed on the filter paper by adsorption [118]. Nevertheless, the freshness indicators prepared in this way performed poorly in monitoring and were prone to dye migration, threatening human health. Printing was used to improve the preparation of freshness indicators in follow-up studies. Three-dimensional (3D) printing technology has been widely used in the packaging, food, medical, and other industries due to the advantages of time-saving processes, good shaping, and convenient operation [119]. Building on the rapid development of 3D printing and food packaging, some researchers have proposed that 4D printing could offer a response over time to different environmental factors. A recent study reported the utilization of mashed potatoes and anthocyanins extracted from purple sweet potatoes to prepare, using a 4D printing method, a sample that could change color in different acidic and alkaline environments [109] (Table 1). Consequently, natural colorants might be promising raw materials for 4D printing of food packaging in the future, due to their spontaneous color changes under pH stimulation. To date, there has been little work focused on 4D printing for smart food packaging.
Application of Natural-Colorants-Based pH-Responsive Freshness Indicators
Evaluation of food quality is commonly achieved by destructive and time-consuming methods such as chemical and microbiological methods, which are usually used in the laboratory and require a great deal of time for analyzing the results. Changes in food quality begin initially and continue to occur in transportation, storage, and distribution. The pH-responsive freshness indicator, together with the properties of nondestructive packaging, could allow the freshness of the food to be presented in real time. Table 2 shows the application of pH-responsive freshness indicators in different food-quality monitoring applications.
The application of pH-responsive freshness indicators for food freshness monitoring has shown that their color changes as the food deteriorates. Nevertheless, the effectiveness of different pH-responsive freshness indicators for food freshness detection is directly related to the source and concentration of natural colorants, the carrier matrix, and the preparation method [132]. Researchers should design and develop indicators suitable for monitoring the freshness of food based on the freshness variation characteristics. Table 3 presents examples of color changes in different pH-responsive freshness indicators for different food freshness monitoring applications. Most studies have not only described the relationship between the color variation of the pH-responsive freshness indicator and the microbial spoilage but also linked it to the shelf life of foods. In the case of high-protein foods (e.g., pork, shrimp, fish, beef), researchers have found that these foods should not be consumed when the volatile basic nitrogen value (TVBN) is greater than the current standard value and the microbial count is higher than 7 lg CFU/g [133,134].
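To illustrate how such studies typically relate indicator color change to spoilage markers, the sketch below pairs hypothetical CIELAB readings of a label (converted to a CIE76 total color difference, ΔE) with hypothetical TVB-N values and an assumed rejection threshold; none of the numbers come from the cited works.

```python
# Illustrative sketch (synthetic numbers, not data from the cited studies):
# pairing total color difference (Delta E) readings of a pH-responsive label
# with TVB-N measurements, as freshness studies commonly correlate indicator
# color change with spoilage markers.
import math

def delta_e(lab_ref, lab_now):
    """CIE76 total color difference between two CIELAB readings."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab_ref, lab_now)))

# Hypothetical storage experiment: one label reading and one TVB-N value per day.
lab_fresh = (62.0, 35.0, -8.0)                      # label color at day 0 (L*, a*, b*)
lab_readings = [(62.0, 35.0, -8.0), (60.5, 30.0, -2.0),
                (58.0, 22.0, 6.0), (55.0, 12.0, 15.0), (53.0, 4.0, 22.0)]
tvbn_mg_100g = [8.0, 11.0, 15.0, 21.0, 28.0]        # hypothetical TVB-N values
TVBN_LIMIT = 15.0                                   # assumed rejection threshold

des = [delta_e(lab_fresh, lab) for lab in lab_readings]

# Pearson correlation between Delta E and TVB-N (computed by hand).
n = len(des)
mx, my = sum(des) / n, sum(tvbn_mg_100g) / n
cov = sum((x - mx) * (y - my) for x, y in zip(des, tvbn_mg_100g))
sx = math.sqrt(sum((x - mx) ** 2 for x in des))
sy = math.sqrt(sum((y - my) ** 2 for y in tvbn_mg_100g))
print(f"Pearson r between Delta E and TVB-N: {cov / (sx * sy):.3f}")

for day, (de, tvbn) in enumerate(zip(des, tvbn_mg_100g)):
    status = "reject" if tvbn > TVBN_LIMIT else "acceptable"
    print(f"day {day}: Delta E = {de:5.1f}, TVB-N = {tvbn:4.1f} mg/100 g -> {status}")
```

A strong, monotonic relationship of this kind is what allows the visible color change of the label to stand in for the destructive TVB-N assay.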
Freshness Monitoring of Meat and Seafood Products
Meat and seafood products contain large amounts of protein, fat, and free amino acids, which are susceptible to spoilage and deterioration due to the action of microorganisms and enzymes during storage, transportation, and marketing, resulting in changes in their pH and the production of volatile basic nitrogen (TVBN) species such as ammonia, methylamine, dimethylamine, trimethylamine, and other similar compounds [139]. Among the current smart food packaging materials, indicators (freshness indicators, gas indicators, and time and temperature indicators) have been widely studied for their ability to provide qualitative or semi-quantitative indicators of food properties through color changes. A smart packaging system based on a pH-responsive freshness indicator could provide consumers with quality information on various foods by utilizing a pH-dependent color variation that is perceptible with the naked eye [140,141].
Zhang et al. [66] developed a colorimetric pH-sensing film by immobilizing natural colorants extracted from Chinese redbud flowers with chitosan, which changed color from red to green in different acidic and alkaline environments. They evaluated the response time, stability, and reproducibility with respect to storage time of pH-sensing films. As the pH values of pork or fish samples were related to their freshness, pH-sensing films were identified as a rapid, nondestructive, and intuitive way to estimate the change in pork and fish freshness in different environments. Othman et al. [142] found that the colorants in hibiscus changed color in response to environmental changes in acidity and alkalinity. They developed a pH-based detection system using degradable materials such as chitosan, corn starch, and hibiscus extract to monitor the freshness of chicken breasts. Choi et al. [15] prepared a novel colorimetric film by adding purple sweet potato anthocyanin to a mixture of agar and potato starch, and the results indicated that the film could indicate the pH changes and spoilage of pork. Luchese et al. [121] developed biodegradable smart films using blueberry residues, cassava starch, and glycerol as an alternative to petroleum-based plastics. The results of their study indicated that the prepared biodegradable film containing agro-industrial residues had potential as a freshness indicator. A similar study was carried out by Dudnyk et al. [143], who developed pectin films containing red cabbage anthocyanins used for monitoring the freshness of various high-protein perishable foods. Due to the release of volatile amines during microbial growth, the color of the film altered sharply from purple to yellow as the alkalinity of the environment increased. Another study was carried out by Zhang et al. [144], utilizing roselle extracts and biodegradable polymers to prepare an intelligent colorimetric film to detect the freshness of packaged pork. They prepared three types of films, utilizing starch, polyvinyl alcohol, and chitosan. It was found that the film based on polyvinyl alcohol/chitosan/roselle extracts possessed the highest sensitivity to ammonia vapor and could be used as a visual indicator of pork freshness at room temperature (25 °C). The film was initially red, gradually changing to green or yellow over a certain period of time, showing that the freshness of the pork changed over time. Liu et al. [5] developed a novel colorimetric film by mixing sodium carboxymethyl cellulose/polyvinyl alcohol and red cabbage to detect the freshness of pork. They found that the electrostatic interaction between the mixed film and anthocyanins improved the sensitivity and color stability of the film. The poor mechanical strength and hydrophilicity of smart colorimetric films are the main problems that prevent their large-scale application. Researchers have prepared a pH-sensing film with excellent mechanical properties using biodegradable cellulose and naphthoquinone colorants (AENDs) extracted from comfrey [90]. The colorimetric film has promising applications in preparing smart labels with excellent mechanical properties and hydrophobicity, for freshness detection in shrimp.
To address the problems of the poor hydrophobicity and stability of the freshness indicators and the migration of natural dyes, Zhang et al. [145] designed a novel freshness indicator. The indicator consisted of a sensing layer and a hydrophobic protective layer, which had been verified as capable of monitoring the freshness of food while having good hydrophobicity. Zhang et al. [146] microencapsulated mulberry extracts using a microencapsulation technique, compounding them with psyllium seed gum to prepare pH-responsive films. The films acted as a type of pH-sensitive food packaging material, while ensuring the stability of the natural colorants.
Freshness Monitoring of Milk and Dairy Products
Milk and dairy products are nutrient-rich foods that are highly susceptible to decomposition by microorganisms and enzymes during storage, which can negatively impact their quality attributes and safety [147]. Hence, intelligent packaging is critical in identifying the expiration date and quality of these foods during transportation and consumption. There has been a great deal of research on active packaging for dairy products in the past, but only a few studies have focused on freshness indicators based on natural dyes.
Stefani et al. [148] prepared a freshness indicator based on polyvinyl alcohol/chitosan incorporating anthocyanins obtained from red cabbage. The color changes of this indicator provided an inexpensive and simple way to present a variation in the chemical composition of the milk. The color of the indicator gradually changed from gray to pink, showing that the milk had deteriorated. In a different experiment, Ma et al. [129] developed colorimetric indicators to detect milk freshness by incorporating grape-skin extracts into a mixture of cellulose nanocrystal (CNC)/tara gum (TG). The color changed from red (acidic environment) to slightly green (alkaline environment) when the indicator was placed in different buffers. The indicator exhibited significant color changes during the variation in milk freshness, indicating that it could be applied for detecting the freshness of dairy products. Liu et al. [149] developed intelligent films based on starch/polyvinyl alcohol (PVA) that monitored pH changes and inhibited the growth of harmful microorganisms in food. Their water resistance and mechanical properties were enhanced by modifying the matrix with sodium trimetaphosphate and boric acid. The films could detect changes in the freshness of milk and prolong its shelf life under the action of anthocyanins (ANT) and limonene (LIM). Zhai et al. [150] also prepared a pH-sensitive film for smart food packaging utilizing gellan gum, gelatin, and carrot extracts as the main raw materials. The composite film presented a color variation from orange to yellow in different pH environments. As shown in Figure 4, the film showed visible color variation, indicating that the food had spoiled during the application of fish freshness detection, while the written pattern was retained. Consequently, the colorimetric film could be used as part of a smart packaging system for monitoring dairy products. In similar research, Yong et al. [29] prepared an active smart packaging film using chitosan and purple or black eggplant extracts (PEE and BEE). PEE and BEE improved the mechanical properties, oxidation resistance, and pH sensitivity of the film. The authors found that an intelligent film with high anthocyanin content could be used to detect milk spoilage, with good application prospects in the field of food freshness detection. Bandyopadhyay et al. [131] developed intelligent films consisting of polyvinyl pyrrolidone-carboxymethyl cellulose-bacterial cellulose-guar gum (PVP-CMC-BC-GG) and anthocyanins derived from red cabbage to detect cheese quality. Initially, the pH-responsive film appeared slightly pink due to the presence of lactic acid and other acids in the cheese. During the freshness monitoring process, the anthocyanins in the responsive film were converted to a reddish yellow molten-salt positive-ion structure due to the production of large amounts of organic acids by microorganisms in the cheese. The films exhibited significant color changes in cheese freshness detection applications.
Freshness Monitoring of Fruits and Vegetables
Fresh-cut fruits and vegetables are highly perishable foods, whose quality and safety may deteriorate during storage due to biochemical processes resulting from pests, microbial contamination, or respiration [151]. Hence, some researchers have developed smart and active packaging materials for quality protection of fresh-cut fruits and for monitoring their freshness changes to reduce food-borne diseases [152]. Chen et al. [153] prepared pH-sensitive labels using a mixture of methyl red and bromocresol blue dyes. During pepper freshness monitoring, the label showed the change in pepper freshness through a visible color change that was due to the increase in CO 2 concentration as a result of pepper respiration. Consequently, labels made from methyl red and bromothymol blue could be an easy-to-use indicator to detect the freshness of packaged peppers. Smart films made by incorporating anthocyanin-rich blackberry extract into carboxymethyl cellulose (CMC) have also been developed to prolong the shelf life of tomatoes [154]. Overall, there are only a few studies on using natural colorants to prepare pH-responsive films for monitoring the freshness and prolonging the shelf life of fresh-cut vegetables and fruits. Researchers typically spray or macerate fresh fruits and vegetables with a variety of edible materials to create a semi-permeable coating on their surface for controlling populations of natural bacteria, molds, yeasts, and food-borne pathogens [155].
Currently, numerous studies have been conducted on freshness indicators based on various biomaterials and natural pigments, where the color changes of indicators can detect the real-time quality changes in food during storage. In particular, as awareness of environmental protection and food safety increases, the requirement for indicator films utilizing eco-friendly biomaterials and natural pigments to replace petroleum-based materials and synthetic pigments is increasing in the smart packaging sector.
Conclusions and Future Perspective
This document provided a classification and overview of pH-responsive freshness indicators based on natural colorants. Smart packaging is growing in importance in the production and distribution of food. Smart packaging based on natural colorants can increase the safety and quality of packaged foods by informing consumers about real-time freshness and extending the shelf life of foods.
Nonetheless, pH-responsive freshness indicators based on natural dyes are still a long way from commercial application. In addition, pH-responsive freshness indicators based on natural colorants have obvious disadvantages, for example: (1) natural colorants have lower pH sensitivity than synthetic dyes; (2) natural colorants have poor color stability and good water solubility; (3) the existing process is not suitable for large-scale processing; and (4) there is a low matching degree with food quality indicators in packaging. There are still limitations and gaps in the large-scale use of natural colorants for food freshness indication. To overcome these problems, the following recommendations are proposed: (1) finding natural colorants with more stable performances and excellent heat resistance or improving the thermal stability of natural colorants by encapsulation and immobilization; (2) modifying the carrier matrix to reduce the water solubility of the carrier matrix and improve compatibility with colorants; and (3) improving the pH sensitivity of freshness indicators by adding nanomaterials or other new additives.
Consequently, the following perspectives are available for efficient intelligent packaging based on pH-responsive indicators. (1) A more pH-sensitive freshness indicator should be prepared for detecting subtle changes in freshness, because less alkaline gas is produced at the initial stage of food freshness change. (2) The types and amounts of volatile nitrogenous compounds produced by different foods during changes in freshness vary from food to food. The developed pH-responsive freshness indicators are only suitable for freshness detection in certain types of foods. Hence, the follow-up development of efficient intelligent packaging should be in the direction of convenience, speed, and real-time display of food quality changes, in order to reduce health problems caused by poor food quality and safety.
|
v3-fos-license
|
2023-12-16T17:20:46.677Z
|
2023-12-06T00:00:00.000
|
266292408
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ejournal.usm.my/aamj/article/download/aamj_vol28-no2-2023_6/pdf",
"pdf_hash": "dde44293255a481dd1b68eeb40049027915f9656",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44472",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"sha1": "43340ba6995a484d3e4bc91469159f2a7b35bf1e",
"year": 2023
}
|
pes2o/s2orc
|
ROLE OF COMMITTEES IN BANK VALUATION: EVIDENCE FROM AN EMERGING MARKET
The purpose of this study is to examine the effect of corporate governance mechanisms on bank performance in general and the effect of board-constituted committees on bank performance in particular. Primarily, two questions are addressed in the context of the banking sector of India. First, do corporate governance mechanisms reduce the quantum of non-performing assets (NPAs)? Second, does the internal committee affect bank performance? Hence, this paper determines whether independent directors strengthen corporate boards and whether committees affect bank performance. Panel data ordinary least squares regression analysis is used for this study. We also use logistic regression models for various committees to find their relationship with bank performance and NPAs. Tobin's Q is used as a proxy for bank performance. Independent variables are board size (BSIZE), proportion of independent directors on the board (PERIND), number of board meetings per year (BMEET), size of the audit committee (AUC)
INTRODUCTION
Several studies have suggested that a well-functioning banking system spurs economic growth (Levine et al., 2000; Claessens & Laeven, 2003), particularly in economies where capital markets are not well developed. An underdeveloped capital market causes commercial enterprises to have limited access to inexpensive funds. Banks serve as an intermediary between lenders and borrowers in such a financial setup. Efficient mobilisation and allocation of funds by banks reduce the cost of capital to banks and firms, thus accelerating capital accumulation and productivity and effectively resulting in economic growth. Moreover, effective application of sound governance mechanisms leads to inexpensive raising of capital, efficient allocation of society's savings, and exertion of sound governance over the firms they fund (Caprio et al., 2007). Andres and Vallelado (2008) indicated that good corporate governance is essential for operating a sound financial system and improving a country's economic development. However, high-profile businesses, such as Lehman Brothers, Enron Corp., WorldCom Inc., Global Crossing Ltd., and Satyam Computer Ltd., have failed globally. These corporate mismanagements jolted investors' confidence and attracted the attention of regulators and other stakeholders alike. Such incidents have eroded public confidence in corporate governance structures and raised questions regarding the ability of corporate boards and various committees to monitor and control management's behaviour.
A substantial amount of literature is available on corporate governance, but very few studies have focused on corporate governance in banks (e.g., Adams & Mehran, 2005; Andres & Vallelado, 2008; Caprio et al., 2007; Levine, 2004; Macey & O'Hara, 2003). The vital characteristics of corporate governance can be applied to the banking system too. The banking sector is one of the most regulated sectors. Hence, banks have to mandatorily abide by the prescribed regulatory requirements. According to Levine (2004), board members play a vital role in governance.
Opacity exists in banks' lending process, where the role of the board becomes more important because other small stakeholders would be incapable of enforcing effective governance themselves. The governance mechanism plays a crucial role in mitigating opportunistic and unlawful activities. To make governance systems robust enough to meet these challenges, various measures and regulations have been implemented from time to time by regulators, taking cues from various reports by corporate governance committees worldwide. Governance codes differ from country to country. In India, corporate governance mechanisms were introduced through Clause 49, 1 which borrowed heavily from the report of the Cadbury committee. Apart from the board of directors, companies also constitute other types of committees to measure internal controls.
One such important committee is the audit committee (AUC), which plays an essential role in monitoring internal controls. Furthermore, the board of directors also oversees internal controls as part of its fiduciary duties. Regulators, for their part, seek to reduce systemic risk, which may arise from conflicts with the main goal of shareholders.
Non-Performing Assets and Gross Non-Performing Assets of Indian Banks
The Indian economy and banking industry witnessed a drastic change after the implementation of financial reforms in the 1990s. The Reserve Bank of India 2 (RBI) introduced several reforms, such as the deregulation of interest rates, the reduction of reserve requirements, the strengthening of bank supervision, the introduction of prudential norms, and the improvement of the competitiveness of the banking system through the entry of private banks (Narasimham, 1991). During the 1990s, the Indian banking industry grew tremendously (i.e., effective mobilisation of deposits).
The second Narasimham Committee Report (1998) stressed two features of banking regulation, namely the capital adequacy ratio, and asset classification and resolution of non-performing assets (NPAs) and gross non-performing assets (GNPAs). The RBI introduced various measures for the early identification of asset quality problems, timely restructuring of debt, and recovery of loans. In addition, the RBI introduced the Basel III norms of minimum capital requirements to improve the overall health and strength of the Indian banking industry. The NPA and GNPA levels of the Indian banking system marginally decreased from 2005 to 2011 but substantially increased after 2011. A significant increase in NPAs and GNPAs and a decline in the return on assets (ROA) in the Indian banking industry created major challenges not only for the regulator and the Indian government but also for other stakeholders because of the huge capital losses experienced by banks. However, the regulator encounters numerous challenges from political parties, businesses, and economic interest groups in handling various concerns and issues. 3 Tripathi and Brahmaiah (2018) documented that NPAs and GNPAs negatively affect bank performance (see Figures 1 and 2). In view of these developments and the lack of studies on the NPAs of Indian banking institutions, corporate governance mechanisms, and the effect of internal committees on bank performance, we intend to explore this topic. This paper contributes by extending the literature on bank board governance in a major emerging economy. Most of the extant studies have focused on developed economies and indicated a significant role of corporate governance in banking performance (Adams et al., 2010; Adams & Mehran, 2012; Denis & McConnell, 2003; Levine, 2004; Macey & O'Hara, 2003). The mechanism and effectiveness of bank governance in India are considerably different from those in other economies. The difference is mainly due to the fact that India is an emerging economy and is witnessing the implementation of several regulations after the economy opened in the early 1990s. Finally, this paper provides new evidence on the effect of various internal committees on bank performance and NPAs in India.
LITERATURE REVIEW AND HYPOTHESES DEVELOPMENT
The board of directors monitors management on behalf of shareholders, overseeing the approval of major business decisions and corporate strategies such as disposals of assets, investments or acquisitions, and tender offers made by acquirers. The board is also in charge of executive compensation, risk management, and audits. Boards operate through committees such as compensation, nominating, and audit committees (Tirole, 2010; Zingales, 1998). However, the boards of banks are different from the boards of non-financial firms. De Andres et al. (2012) indicated that boards in the banking sector are bigger and more independent than those in the non-financial sector. Furthermore, boards in the banking sector are accountable to all stakeholders and are liable to respond to all regulators on crises or unlawful activities because individual bank failures can exert a cascading effect on other related banks.
In the banking industry, major complexities occur because the quality of loans cannot be evidently observed, intricate financial statements are not produced with transparency, and significant information travels only between managers and insiders (James & Joseph, 2008; Alexander et al., 2013). From a cross-country perspective, studies on NPAs have focused on several useful perspectives. Researchers have investigated the relationship between bank performance and NPAs and have found that banks' profitability and efficiency are negatively associated with NPAs (Berger & DeYoung, 1997; Podpiera & Weill, 2008). Some studies have documented that higher credit growth also leads to NPAs (Hess et al., 2009; Keeton, 1999; Salas & Saurina, 2002). In the same direction, Louzis et al. (2012) reported a negative relationship between NPAs and profitability. They also found that well-capitalised banks have lower NPA issues; however, these banks maintain a low credit risk level at the time of extending loans to borrowers (Bhatia et al., 2012; Gonzalez-Hermosillo et al., 1997). A bank's efficiency and management have a significant effect on its NPAs (Breuer, 2006; Drake & Hall, 2003). The extant literature has shown a negative relationship between the cost to income ratio, credit to deposit ratio, and loans to expense ratio with NPAs (Hanweck, 1977; Karim et al., 2010; Kwan, 2006; Pantalone & Platt, 1987).
Other researchers investigating the relationship between loan growth and NPAs have shown that banks with a high loan growth rate had higher NPAs. Therefore, high and liberal credit growth led to higher NPAs in banks (Borio et al., 2001; Clair, 1992; Hess et al., 2009). A high NPA level adversely affects not only banks' efficiency and loan growth but also banks' capital. Therefore, banks with higher capital are less inclined to undertake excess credit risk because a higher level of capital results in higher loss absorption capacity and a lower level of NPAs (Das & Ghosh, 2005; Greenidge & Grosvenor, 2010; Rajaraman et al., 1999). Meacci (1996) examined the NPAs of various banks in Italy and reported that an increase in the riskiness of loan assets is rooted in a bank's lending policy, attributable to relatively unselective and inadequate assessment of sectoral prospects. Muniappan (2002) concluded that the problem of NPAs is related to several internal and external factors confronting borrowers. Ranjan and Dhal (2003) reported that the probability of default decreases during favourable macroeconomic conditions because borrowers want to maintain their creditworthiness. Banks' lending policy can exert a crucial effect on NPAs (Reddy, 2004). The literature shows a negative relationship between NPAs and banks' profitability and between bank size and NPAs (Thiagarajan et al., 2011). In the same direction, Kent and D'Arcy (2000) examined the cyclical lending performance of banks in Australia and argued that banks experience substantial losses on their advances, which increase during the peak of the expansion phase of the economy. Although the risk inherent in banks' lending portfolios peaks at the top of the cycle, this risk tends to be realised during the contraction phase of the business cycle, during which an increase in banks' NPAs negatively affects their profits.
Board Size and Bank Performance
The role of the board of directors in the soundness and safety of the banking system is well established globally through the Basel Committee on Banking Supervision (2006). In the Indian context, it is described in the Clause 49 listing agreements. Good governance by the board provides benefits through greater access to financing and reduces the cost of capital (Adams & Mehran, 2012; Liang et al., 2013; Claessens & Yurtoglu, 2013). Germain et al. (2014) reported that bigger boards can delegate more human resources to supervise and advise on managers' decisions, in line with the resource-based theory. However, a large board can be ineffective due to coordination issues and free-riding concerns. Some studies have indicated a negative relationship between board size and bank performance (Hermalin & Weisbach, 2001; Liang et al., 2013; Rowe et al., 2011; Yermack, 1996). To control adverse situations, firms pay high coordination costs, leading to a negative effect on bank performance (James & Joseph, 2015). Accordingly, we hypothesise the presence of a negative relationship between board size and bank performance.
H1: Board size impacts bank performance.
Board Independence and Bank Performance
Harris and Raviv (2008) asserted that independent directors possess the knowledge and abilities to monitor, discipline, and advise managers, thus enabling the directors to resolve conflicts of interest between insiders and shareholders. Likewise, Andres and Vallelado (2008) reported that the balance between executive and non-executive directors can result in efficient advising without overlooking the monitoring function, less conflict of interest when monitoring managers, and a positive relationship between non-executive directors and bank performance. Berger and Bowman (2013) also reported that independent directors can facilitate information exchange because they have the advantage of accessing other firms' privately owned data, which can help in unravelling favourable prospects for the bank. The outside knowledge and better experience of independent directors on the board create unique resources for the bank, leading to a positive relationship between board independence and bank performance. Therefore, we hypothesise that greater board independence enables its members to take prudent decisions and also to compensate management according to their performance.
H2: Board independence positively impacts bank performance.
Board Meetings and Bank Performance
An empirical study on the relationship between bank value and board composition would be inadequate if it did not consider the internal functioning of the board. Several factors can affect the functioning of boards; in particular, one of them is the frequency of board meetings (Vafeas, 1999). The author indicated that board meetings are an important channel through which the board of directors delivers its duties and plans future strategies that can lead to enhanced firm performance (Vafeas, 1999). The activity of the board and the frequency of board meetings are a measure of the power and efficacy of the board of directors (Conger et al., 1998; Lipton & Lorsch, 1992). Board activities through regular meetings help to appraise managers and are also an important platform to address any arising issue in a timely and effective manner (Vafeas, 1999). A positive relationship has been found between board meetings and firm performance (Mangena et al., 2012). In contrast, a negative association has also been found between board meetings and firm performance (Andres & Vallelado, 2008; El Mehdi, 2007). Overall, the extant literature is inconsistent with regard to the effect of board meetings on firm performance. We hypothesise that more board meetings indicate more discussion and implementation of companies' operations and strategies.
H3: Board meetings positively impact bank performance.
NPAs and Bank Performance
Bank NPAs and performance are inversely related to each other (Berger & De Young, 1997; Podpiera & Weill, 2008; Tripathi & Brahmaiah, 2018). Louzis et al. (2012) also found a negative relationship between NPAs and profitability and reported that well-capitalised banks have lower NPA issues. The bank's management exerts a significant effect on the NPAs of banks (Breuer, 2006; Drake & Hall, 2003). Hence, we hypothesise (alternate hypotheses) the following:
H4: Audit committee negatively impacts bank performance.
H5: NPA committee positively impacts bank performance.
H6: Risk management committee positively impacts bank performance.
Corporate Governance, Committees and NPAs
When a firm engages in excessively risky projects, it is likely to face negative results. Diamond and Rajan (2009) documented that banks with high-quality corporate governance introduce appropriate incentives and controls to align the risk-taking practices of the banks to increase shareholder value. The corporate governance mechanism of banks is essential because banks play a crucial role in the mobilisation and allocation of capital and in growth. Hence, when banks implement good governance structures, bank managers allocate capital efficiently and improve market conditions (Levine, 2004). Banks with better governance make effective decisions that can reduce losses due to bad loans (Graham & Narasimhan, 2004). Furthermore, the ineffective corporate governance of a firm negatively affects the entire financial system directly and indirectly. Therefore, risky projects of banks have different effects on markets compared to risky projects of non-financial firms. Tarchouna et al. (2017) reported that poorly governed banks, as measured by governance proxies, are positively related to NPAs. They asserted that when banks have excessive liquidity, they invest in risky projects. To test the following hypotheses, we consider total NPAs, board size, board independence, AUC, the NPA committee (NPAC), and the risk management committee (RSKC). Hence, we hypothesise (alternate hypotheses) the following:
H7: Audit committee inversely impacts the quantum of NPAs.
H8: NPA committee inversely impacts the quantum of NPAs.
H9: Risk management committee inversely impacts the quantum of NPAs.
METHODOLOGY
Panel data allow analysing bank performance when the sample is a mix of cross-sectional and time series data. Incorporating the temporal dimension of the data enhances the accuracy of the results of the study. The panel data structure permits us to consider constant and unobservable heterogeneity, which is an explicit construct of each bank (such as management style and quality and business strategy). Pooled ordinary least squares (OLS) estimation produces estimators that are biased and inconsistent when the unobserved effect is correlated with the independent variables. This econometric challenge can be eliminated with the use of the first-differences or fixed effects (within) estimators. However, Hermalin and Weisbach (2003) reported that it is rational to consider that the board creates endogeneity problems. Hence, it is necessary to use an econometric method that can deal with endogeneity issues along with the existence of unobservable fixed effects, which are connected with each bank.
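As an illustration of the estimation choices discussed above, the following minimal sketch contrasts pooled OLS with an entity fixed-effects (within) specification on a bank-year panel. It is not the authors' code: the file name, the column names (tbq, bod, perind, bmeet, lnta, loan), and the use of the linearmodels package are assumptions made purely for illustration.

import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import PooledOLS, PanelOLS

# Hypothetical bank-year panel with one row per bank and year.
df = pd.read_csv("bank_panel.csv").set_index(["bank", "year"])

y = df["tbq"]
X = df[["bod", "perind", "bmeet", "lnta", "loan"]]

# Pooled OLS: biased and inconsistent if the unobserved bank effect is
# correlated with the regressors.
pooled = PooledOLS(y, sm.add_constant(X)).fit(cov_type="clustered", cluster_entity=True)

# Entity (and time) fixed effects absorb constant, unobservable bank
# heterogeneity such as management style and business strategy.
within = PanelOLS(y, X, entity_effects=True, time_effects=True).fit(
    cov_type="clustered", cluster_entity=True
)
print(pooled.summary, within.summary, sep="\n")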
Dependent variables are Tobin's Q (TBQ), as a proxy for bank valuation, and asset quality (NPAs). Independent variables are board size (BOD), proportion of independent directors on the board (PERIND), number of board meetings per year (BMEET), size of audit committee (AUC), and two measures of the bank business (asset size and loan), controlled for time.
When panel data are used in the empirical study, both the individual, represented by the subscript i, and the time point, represented by t, must be considered. Additionally, the error term is decomposed into two parts: the combined effect (μ_it), which varies between individuals and time periods, and the individual effect (η_i), which is a characteristic of each individual (bank). This term varies among individuals but is constant over time. The regression models (Equations 1-4) are used to test the hypotheses with a non-linear relationship for corporate governance proxies and other bank attributes. The regression models (Equations 5-6) use dummy variables for the establishment of various committees, because some banks have established that particular committee whereas others have not.
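Equations 1-6 themselves are not reproduced in this text; purely as a point of reference, a generic form consistent with the error decomposition described above (a sketch, not the paper's exact specification) is:

y_{i,t} = \alpha + \beta' x_{i,t} + \eta_i + \mu_{i,t}

where y_{i,t} is TBQ or GNPARAT for bank i in period t, x_{i,t} collects the governance and control variables, \eta_i is the time-invariant bank effect, and \mu_{i,t} is the remaining idiosyncratic error.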
When the strict exogeneity condition fails, both the first-differences and fixed effects (within) estimators are inconsistent and have different probability limits. The general approach for estimating models that do not satisfy strict exogeneity is to use a transformation to eliminate unobserved effects and instruments to deal with endogeneity (Wooldridge, 2002). Thus, the aforementioned models are empirically estimated by applying the generalised method of moments (GMM) estimator proposed by Arellano and Bond (1991). The GMM approach can control for endogeneity problems that may appear in the models. Although endogeneity problems can also be controlled by using simultaneous equation estimators, such as the maximum likelihood and two- or three-stage least squares estimators, the choice is based on consistency concerns (De Miguel et al., 2005). Although those estimators are more efficient than GMM, they are not consistent and thus generate biased results, because they do not eliminate the unobservable firm-specific heterogeneity that gives rise to particular behaviour. These differences between individuals (banks in this case) are potentially correlated with the explanatory variables (they are also called individual specific effects), are invariant over time, and thus directly influence corporate decisions (entrepreneurial capacity, corporate culture, etc.).
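To make the instrumenting logic concrete, the sketch below runs a first-difference instrumental-variables regression in which the differenced board-size variable is instrumented by its own lagged level, in the spirit of Arellano and Bond (1991). This is an illustrative Anderson-Hsiao-style simplification rather than the full Arellano-Bond estimator (which uses the complete matrix of lagged-level instruments and two-step weighting); the file and variable names are hypothetical, not taken from the paper.

import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("bank_panel.csv").sort_values(["bank", "year"])

# First differences remove the time-invariant bank effect eta_i.
cols = ["tbq", "bod", "perind", "bmeet", "lnta", "loan"]
for col in cols:
    df[f"d_{col}"] = df.groupby("bank")[col].diff()

# Lagged level of the (assumed endogenous) board-size variable as instrument.
df["bod_lag2"] = df.groupby("bank")["bod"].shift(2)

est = df.dropna(subset=[f"d_{c}" for c in cols] + ["bod_lag2"])

iv = IV2SLS(
    dependent=est["d_tbq"],
    exog=est[["d_perind", "d_bmeet", "d_lnta", "d_loan"]],
    endog=est[["d_bod"]],
    instruments=est[["bod_lag2"]],
).fit(cov_type="robust")
print(iv.summary)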
Data Collection
The financial data and corporate governance information are extracted from the Prowess database (a comprehensive database on Indian companies maintained by the Centre for Monitoring Indian Economy) and banks' annual reports for the financial years 2005-2018. We obtain data on board independence, BOD, AUC, NPAC, RSKC, the IT strategy committee, and the credit approval committee, as well as the sundry financial data of commercial banks. The establishment of an AUC is mandatory for all banks, but that of the other committees is discretionary. We obtain 100%, 29.4%, 80.80%, 53.10%, and 26.90% firm-year data for the AUC, NPAC, RSKC, IT strategy committee, and credit approval committee, respectively.
Variable Construction
The construction of the variables used for the study is explained as follows: Tobin's Q: Bank value is measured using Tobin's Q (TBQ). TBQ is the ratio of market to book value. It is computed as the sum of market capitalisation and the book value of debt over total assets. Previous studies (Adams & Mehran, 2005; Andres & Vallelado, 2008; Bhagat & Black, 2002; Caprio et al., 2007; Fernandez & Weinberg, 1997; Hermalin & Weisbach, 1991; Yermack, 1996) have used TBQ.
PERIND:
The measure of independent directors on the board is taken as the ratio of number of independent directors to that of the total size of the board.
AUC:
The measure of the size of audit committee is the number of members on the audit committee.
BMEET:
The number of board meetings conducted during the year as a proxy for the functioning of the boards of directors.
NPAC:
The non-performing asset committee is a dummy variable that takes a value 1 if it is present in the company and 0 otherwise.
RSKC:
The risk management committee is a dummy variable that takes a value 1 if it is present in the company and 0 otherwise.
IT Strategy:
The IT strategy committee is a dummy variable that takes a value 1 if it is present in the company and 0 otherwise.
Credit Approval Committee:
The credit approval committee is a dummy variable that takes a value 1 if it is present in the company and 0 otherwise.
LNTA:
The size of banks is considered as the total assets of banks (natural logarithms of total assets, LNTA).
LOAN:
The magnitude of loan disbursal by the bank is calculated by the proportion of loans to total assets.
ROA:
The return on assets (ROA) is used as a measure of bank performance to test the analysis. ROA is calculated as profit after tax divided by total assets.
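As a compact illustration of how the variables defined above can be constructed from raw financial data, the following sketch uses hypothetical raw column names (market_cap, total_debt, total_assets, gross_npa, total_loans, profit_after_tax); it is not the authors' code.

import numpy as np
import pandas as pd

fin = pd.read_csv("bank_financials.csv")  # hypothetical raw bank-year data

fin["tbq"] = (fin["market_cap"] + fin["total_debt"]) / fin["total_assets"]  # Tobin's Q
fin["gnparat"] = fin["gross_npa"] / fin["total_assets"]                     # GNPA ratio
fin["lnta"] = np.log(fin["total_assets"])                                   # bank size
fin["loan"] = fin["total_loans"] / fin["total_assets"]                      # loan intensity
fin["roa"] = fin["profit_after_tax"] / fin["total_assets"]                  # return on assets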
Table 1 shows the descriptive statistics for all the variables in this study. Note: TBQ = Tobin's Q; ROA = return on assets; GNPA = gross non-performing assets divided by total assets; BOM = number of board members on the board; PERIND = proportion of independent directors on the board; AUC = audit committee members; PERAUIND = proportion of independent directors on the audit committee; BMEET = board meetings conducted during the year; LNTA = natural logarithm of total assets of the bank; LOAN = total loan amount given divided by total assets; NPACD and RSKCD = dummy variables for the establishment of the non-performing assets committee and the risk management committee. These descriptive statistics are for unbalanced panel data with 480 bank-year observations.
Board Characteristics and Bank Performance
The average TBQ ratio is > 1, the average ROA is 0.7%, and the average NPA is 4.5% of total advances. The average board size is 13.46 directors, higher than the average board size of 12 directors for non-financial firms (Rosenstein & Wyatt, 1997; Klein, 1998; Vafeas, 1999; Andres et al., 2005; Yermack, 1996) but less than the 17 directors reported by Adams and Mehran (2005) in their study of financial institutions for the period 1995-1999. PERIND is an average of 43.2%, which is less than that reported by Adams and Mehran (2005) and Andres and Vallelado (2008).
The AUC is mandatory for all banks according to Clause 49 of the listing agreement; the average size of the AUC is 6.59 directors. The average number of board meetings is 5.22, which is lower than the average of 8.48 reported by Adams and Mehran (2005) and 10.45 reported by Andres and Vallelado (2008). We find 29.4% firm-years for the NPAC and 80.80% firm-years for the RSKC.
Models I and II of Table 2 show GMM estimators where the dependent variable is TBQ. We find that the F test of the model is statistically significant at the 1% level, and the statistical test does not reject the validity of our model. The variance inflating factor (VIF) 4 for each coefficient is < 3, which indicates that the model is free from multicollinearity 5 problems. The adjusted R² ranges from 0.1149 to 0.2396. We find negative coefficients for PERIND in Models I and II, and both coefficients are significant at the 1% level.
Table 3 shows the empirical relationship between RSKC and TBQ (Models V and VI). The data show 80.8% bank-years for RSKC out of a total of 480 bank-years.
Examining the effect of RSKC on bank performance and GNPARAT, we find that the coefficients are positive and statistically significant at the 1% level. Models VII and VIII exhibit the linkage between RSKCD and GNPARAT. The coefficients of RSKCD in both models are negatively related to GNPARAT. All the coefficients are statistically significant at the 1% level.
DISCUSSION
Table 2 lists the empirical findings of the GMM estimations for the dependent variables TBQ and GNPARAT. To control for potential endogeneity problems with board characteristics, we use the GMM estimation developed by Hansen (1982) and White (1982). The GMM with adjusted standard errors takes into account the unobservable heterogeneity, transforming the original variables into first differences, and the endogeneity of the independent variables by using instruments. In GMM, one way to alleviate the bias caused by endogenous variables is to use instrumental variables (variables that can predict the endogenous variable but are not themselves endogenous).
Models I and II of Table 2 show GMM estimators where the dependent variable is TBQ. We find no serial correlation in the residuals by performing the first- and second-order correlation tests (AR1 and AR2, respectively) and confirm both the absence of second-order serial correlation and the validity of the instruments used to avoid the endogeneity problem.
We find a positive and statistically significant relationship between BOD and TBQ; this finding is in line with those of previous studies (Dalton et al., 1999; Lipton & Lorsch, 1992; Singh et al., 2018; Veprauskaite & Adams, 2013). However, we find a negative and statistically significant relationship between BODSQ (the square of board size) and TBQ. This result demonstrates a non-linear relationship between BOD and TBQ. Our empirical findings confirm the hypothesised inverted U-shaped relationship between BOD and TBQ (Figure 3). Adams and Mehran (2005) indicate that the addition of new directors may positively affect bank performance, although the rise in performance shows diminishing marginal growth. Therefore, the negative and significant coefficient of BODSQ (the square of board size) indicates that there is a point after which adding a new director reduces bank value.
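The implied turning point follows mechanically from the quadratic specification. Written with generic coefficients (a sketch, not the paper's estimated equation), the model and its maximum are:

TBQ_{i,t} = \alpha + \beta_1 BOD_{i,t} + \beta_2 BOD_{i,t}^2 + \gamma' X_{i,t} + \eta_i + \mu_{i,t}

\frac{\partial TBQ}{\partial BOD} = \beta_1 + 2\beta_2 BOD = 0 \;\Rightarrow\; BOD^{*} = -\frac{\beta_1}{2\beta_2}

With the reported positive coefficient on BOD (\beta_1 > 0) and negative coefficient on BODSQ (\beta_2 < 0), BOD^{*} is positive and marks the board size beyond which adding a director reduces bank value.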
For banks in the sample, this value of board size is between 9 and 17 directors.
We find negative coefficients for PERIND in Models I and II, and both coefficients are significant at the 1% level. This finding indicates that a high proportion of independent directors may not increase bank performance. A negative relationship between PERIND and bank performance has been reported by several researchers (Beasley, 1996; Fosberg, 1989; Grace et al., 1995; Hermalin & Weisbach, 1991; Molz, 1988; Vafeas, 2000) (see Figure 4). In terms of board meetings (BMEET), the coefficients are positively related to TBQ. These empirical findings support our hypothesis that board meetings play a vital role that is more proactive than reactive. Our findings are consistent with the extant research conducted by Mangena et al. (2012). These results are in line with agency theory, which recommends that board meetings provide solid monitoring activities to advise and monitor management and enhance performance (Vafeas, 1999). Thus, regular meetings should be conducted to implement strategic decisions to improve firm value and also to develop cohesiveness among board members (Lipton & Lorsch, 1992). In summary, more board meetings can result in solid monitoring, leading to an improvement in firm performance (see Figure 5). Overall, it can be concluded that a relationship exists between TBQ and the corporate governance mechanisms in India. We also find a negative relationship between bank performance and the gross non-performing asset ratio (GNPARAT) in Models I, II, V, and VI of Table 2. Moreover, all the coefficients are statistically significant at the 1% level. These empirical results are in line with our hypothesis, and the extant literature shows a negative relationship between GNPAs and bank performance (Berger & De Young, 1997; Podpiera & Weill, 2008; Tripathi & Brahmaiah, 2018). Hence, bank management should reduce their GNPAs to the minimal level.
Board Characteristics and GNPAs
Models III and IV of Table 2 list the empirical results where the dependent variable is GNPARAT and the explanatory variables are the board's characteristics. The coefficients of BOD and PERIND are negative and statistically significant for GNPARAT. These results suggest that the corporate governance mechanisms of India can reduce the level of GNPAs. These empirical results are in line with the findings of Zagorchev and Gao (2015) and Mayur and Saravanan (2017). The findings also suggest that a medium board size and approximately 50%-80% board independence can maintain good asset quality or reduce GNPAs.
Board Characteristics and AUC
Models V and VI of Table 2 report the effect of the AUC on bank performance. Therefore, the major variables of interest are AUC and PERAUIND. We find that both the AUC and PERAUIND coefficients are negatively related to TBQ, and both are statistically significant at the 1% level. These empirical results are in line with H4. Figures 6a to 6g exhibit that whenever the AUC has between 3 and 6 members and the proportion of independent directors is between 80% and 100%, bank performance increases. Hence, we suggest that the AUC should be constructed with more independent directors, which can increase the performance and the quality of financial information and decisions (Carcello & Neal, 2000; Dechow et al., 1996; McMullen, 1996; Tirole, 2010; Zingales, 1998).
AUC and GNPAs
Models V and VI of Table 2 report the linkage between GNPARAT and the explanatory variables AUC and PERAUIND. The findings show that a negative relationship exists between AUC and GNPARAT and between PERAUIND and GNPARAT, and all the coefficients are statistically significant at the 1% level. We assert that the AUC takes decisions that mitigate the probability of loan defaults (Graham & Narasimhan, 2004). Finally, our empirical results suggest that the AUC plays a crucial role in minimising losses from loan defaults and improving bank performance if the committee has between 4 and 8 members with greater independence.
NPA Committee, Bank Performance and GNPAs
Table 3 shows the GMM empirical results in Models I-IV, which examine the effect of NPAC on TBQ and of NPAC on GNPARAT. A few studies have examined the effect of committees on bank performance and asset quality/NPAs, but no study has included NPAC and RSKC to explore the relationship between these two committees and bank performance and NPAs. All the models shown in Table 3 are statistically significant at the 1% level. Setting up an NPAC is not mandatory; nevertheless, some banks have done so. We find 29.4% bank-years for NPAC out of a total of 480 bank-years. To examine the effect of NPAC on bank performance and GNPARAT, we use a dummy variable of 1 for banks that have established an NPAC, otherwise 0 (NPACD). We find a negative relationship between NPACD and TBQ in Models I-II, statistically significant at the 1% level. The results show that the existence of the committee does not increase firm performance. Models III and IV illustrate the empirical relationship between NPACD and GNPARAT. We find a negative relationship between NPACD and GNPARAT; all the coefficients are statistically significant at the 1% level. The results indicate that banks that establish an NPAC can improve their asset quality or reduce the GNPA level compared with their competitors. These findings support our hypothesis. Thus, we conclude that NPAC improves good governance in the Indian banking system. Tarchouna et al. (2017) reported that poorly governed banks, as measured by governance proxies, are positively related to NPAs. We recommend that NPAC should be mandatory for all banks to evaluate the asset quality of banks from time to time.
RSKC, Bank Performance, and GNPAs
Table 3 shows the empirical relationship between RSKC and TBQ (Models V and VI). Again, setting up an RSKC is not mandatory; however, some Indian banks have done so. The data show 80.8% bank-years for RSKC out of a total of 480 bank-years. To examine the effect of RSKC on bank performance and GNPARAT, we use a dummy variable of 1 for banks that have established an RSKC, otherwise 0 (RSKCD). We find that the coefficients are positive and statistically significant at the 1% level. The results show that the RSKC does not initiate highly risky projects and takes up projects with typical risks and a low probability of default; thus, the firm can improve its performance and enhance the wealth of shareholders. In the same direction, Diamond and Rajan (2009) indicated that banks with good corporate governance introduce appropriate incentives and controls to prevent risk-taking practices.
Models VII and VIII exhibit the linkage between RSKCD and GNPARAT. The coefficients of RSKCD in both models are negatively related to GNPARAT.
All the coefficients are statistically significant at the 1% level. Thus, the results indicate that the establishment of an RSKC can more effectively and significantly reduce the GNPA level compared with the other committees in this study. The empirical results are in line with our hypothesis. We assert that RSKC improves governance and asset quality in the Indian banking system.
Steps Taken by Government of India to Mitigate NPAs
One of the primary reasons for such an insurmountable amount of NPAs was the aggressive lending policy adopted by public sector banks. A loan is classified as an NPA if the principal or interest or both are due for repayment for over 90 days. The amount of advances lent by public sector banks from 2008 to 2014 almost tripled, from INR18,000 billion to INR52,000 billion. 6 The government of India has proposed a 4R strategy to reduce NPAs. One of the important steps taken under this strategy to reduce the NPAs of public sector banks is the Insolvency and Bankruptcy Code, which can now revoke the control of a defaulting company from its promoters/owners, debarring wilful defaulters from the resolution process and from raising funds from the market.
In 2015, the process of asset quality review was initiated by the RBI. It forced banks towards transparency in the recognition and classification of NPAs across the board. It helped both in obtaining a real picture of the NPA situation and subsequently in its reduction. Banks started making the required provisions and restructuring existing loans. The banks would have to take a "haircut" for some time until the NPA situation is under control.
Robustness Check
Table 4 shows that our results are robust to changes in the dependent variables with GMM estimations. The TBQ ratio is the most common measure of valuation in corporate governance studies. We redo this study using an accounting variable (ROA) and another market-related variable, namely the average quarterly return from the market to shareholders (RETQTR), which is a market performance variable. ROA measures actual performance but might be biased by earnings management.
In our study, the two alternative models measuring bank performance and market performance indicate that the main coefficients are consistent and statistically significant. Hence, both the inclusion of new directors on the board (BOD) and a higher PERIND show a positive and statistically significant relationship with ROA and RETQTR. However, the coefficients of the number of board meetings are negative and have a statistically significant relationship with ROA and RETQTR.
We study the relationship between various committees and bank performance/valuation. Hence, we perform a robustness test for all the committees included in this study. We find that the coefficients of all the committees, namely AUC, PERAUIND, NPACD, and RSKCD, are statistically significant and positively related to ROA and RETQTR.
In conclusion, bank boards and committees efficiently take up the challenge of improving the corporate governance of banks. Our empirical results indicate that bank boards and the various established committees provide an effective platform to address the weaknesses of other corporate governance mechanisms when these mechanisms are introduced to financial institutions. An efficient board and active committees are significant for all stakeholders and play a major role in developing an economic system. Sound governance of banks is a necessary condition to safeguard both the health of financial intermediaries and the business and economic development of a country.
CONCLUSION
This study finds a positive and significant relationship between board independence and bank performance. Performance also increases with an increase in board size (measured as the total number of members on the board), but after a point the curve declines, forming an inverted U-shaped curve. This indicates that the application of sound corporate governance measures would lead to banks performing better, resulting in an overall expansion of the economy, and vice versa. The mandatory internal committees have a crucial role to play, which is demonstrated by their effect on the reduction of NPAs.
Banks must comply with the laid-down regulations both in letter and in spirit, and the board of directors has to be objective in its scrutiny. One way of achieving this is to compose a board with a majority of independent directors and to constitute internal committees, with a majority of independent directors, for matters of vital importance.
A limitation of this study is that it does not compare its findings with those of other developing economies, which could provide more extensive findings and conclusions. Future research can extend the analysis to different committees and can also examine the impact of COVID-19 on the banking industry.
Theoretical and Practical Implications
This study has crucial implications for emerging economies. Banks perform the role of a depository for lenders with excess capital and are the primary source of capital disbursement for commercial enterprises. The significance of a well-functioning board and internal committees in discharging their fiduciary duties is highlighted in this study. Various internal committees other than the mandatory committees, such as the AUC, can be constituted depending on the requirement (committees such as the grievance redressal committee and the compensation committee). An internal committee comprising a majority of independent directors is found to positively affect the performance of banks. Such committees can help managers disburse good-quality loans and keep a check on risk-laden ventures. Further, the risk committee should anticipate its value-at-risk proportion on the outstanding loan amount and create sufficient provisions to offset the risk.
3. The RBI monitors all the financial functions of the country very closely. Being the central bank of the country, it reviews monetary and fiscal policies from time to time, also implementing them into the system very effectively as and when required.
4. The VIF of a determinant is computed as 1 divided by 1 minus the coefficient of determination of that determinant. The coefficient of determination is generated from an auxiliary regression of the determinant on the remaining determinants. Strong multicollinearity, which makes OLS estimators unreliable, is indicated by VIF values > 2.
5. All the models used for this paper are free from multicollinearity problems.
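As a hedged illustration of the VIF definition in note 4 (a sketch, not code from the study), the following Python function computes VIFs via auxiliary regressions; the matrix of determinants and its column order are assumptions:

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF_j = 1 / (1 - R2_j), with R2_j from regressing column j of X
    on the remaining columns (plus an intercept), as described in note 4."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    vifs = []
    for j in range(k):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # auxiliary OLS fit
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()               # coefficient of determination
        vifs.append(1.0 / (1.0 - r2))
    return vifs
```

With determinants such as BOASIZE, PERIND and BMEET stacked as the columns of X, values above the study's threshold of 2 would flag the collinearity referred to in note 5.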
Figure 1. Percentages of NPAs and GNPAs with total advances.
Table 1
Descriptive statistics
Table 2
Board characteristics, AUC and value creation: GMM estimations. Note: The table reports the GMM estimations. The dependent variables are TBQ and the gross non-performing assets ratio (GNPARAT). The t-values of coefficient significance are in brackets. Statistically significant at 1%.
Table 3
Non-performing assets committee and risk management committee and value creation: GMM estimations. Note: The table reports the GMM estimations. The dependent variables are TBQ and GNPARAT. The t-values of coefficient significance are in brackets. Statistically significant at 1%.
Table 4
GMM estimation: Board and committees characteristics and alternative measures of bank performance (ROA) and (RETQTR). Note: The table reports the GMM estimations. ROA is the dependent variable. RETQTR is the average quarterly return to shareholders. Explanatory variables are: board size (BOASIZE), proportion of independent directors (PERIND), meetings per year (BMEET), audit committee size (AUC), proportion of independent directors in the audit committee (PERAUIND), a dummy for the NPA committee (NPACD), a dummy for the risk management committee (RSKCD), and control variables that measure bank business (log of bank total assets, LNTA; the ratio of loans to total assets, LOAN). The t-values of coefficient significance are in brackets; statistically significant at 1%.
1. In 1996, the Confederation of Indian Industry formed a task force, which was headed by Rahul Bajaj, a leading industrial entrepreneur. The report was titled "Desirable Corporate Governance: A Code," and it was submitted in April 1998. Furthermore, to improve corporate governance mechanisms, the regulator of the securities and commodity market in India, the Securities and Exchange Board of India (SEBI), established additional committees in 1998; one of them was the Birla Committee headed by Kumar Mangalam Birla. The Birla Committee submitted its report in early 2000. In March 2001, SEBI initiated the recommendations of the Birla Committee report by introducing Clause 49, The Listing Agreement (Clause 49 hereafter). The implementation of Clause 49 is a leading milestone in the transformation of corporate governance actions in India (Chakrabarti et al., 2008). It was implemented in three phases. In the first phase, Group 1 firms were instructed to follow the recommendations of Clause 49 by 31 March 2001. In the second phase, Group 2 companies were instructed to follow the recommendations of Clause 49 by 31 March 2002. In the third phase, Group 3 companies were instructed to follow the recommendations of Clause 49 by 31 March 2003. Several major key features/disclosures recommended in Clause 49 are mandatory for companies. The major mandatory recommendations of Clause 49 are to appoint the board of directors, set up the audit committee and other important committees, and report the corporate governance practice in the annual report.
2. The Reserve Bank of India was established on 1 April 1935, in accordance with the provisions of the Reserve Bank of India Act, 1934. The Central Office of the Reserve Bank was initially established in Calcutta (now Kolkata) but was permanently moved to Mumbai in 1937. The Central Office is where the Governor sits and where policies are formulated. Although originally privately owned, after nationalisation in 1949 the Reserve Bank is fully owned by the Government of India. It is the central bank of India.
6. As per the statement of the Minister of State for Finance and Corporate Affairs given in a written reply in parliament. He stated that a combination of various factors caused the increase in NPAs, including aggressive lending practices, wilful default/loan frauds/corruption in some cases, and economic slowdown. He further stated that, primarily as a result of the transparent recognition of stressed assets as NPAs, gross NPAs of PSBs, as per RBI data on global operations, rose from INR2,790.16 billion as of 31 March 2015 to INR6,847.32 billion as of 31 March 2017 and to INR8,956.01 billion as of 31 March 2018. As a result of the government's 4R strategy of recognition, resolution, recapitalisation, and reforms, they have since declined by INR1,060.32 billion to INR1,060.32 billion as of 31 March 2019 (provisional data reported by RBI on 2 July 2019).
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2014-05-20T00:00:00.000
|
5179727
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0096768&type=printable",
"pdf_hash": "20a1c97fa8fccc7160450013e69b1ba9269b8da8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44473",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"sha1": "20a1c97fa8fccc7160450013e69b1ba9269b8da8",
"year": 2014
}
|
pes2o/s2orc
|
Correction of Body-Mass Index Using Body-Shape Perception and Socioeconomic Status in Adolescent Self-Report Surveys
Objectives To propose a simple correction of body-mass index (BMI) based on self-reported weight and height (reported BMI) using gender, body shape perception and socioeconomic status in an adolescent population. Methods 341 boys and girls aged 17–18 years were randomly selected from a representative sample of 2165 French adolescents living in Paris surveyed in 2010. After an anonymous self-administered pen-and-paper questionnaire asking for height, weight, body shape perception (feeling too thin, about the right weight or too fat) and socioeconomic status, subjects were measured and weighed. BMI categories were computed according to Cole’s cut-offs. Reported BMIs were corrected using linear regressions and ROC analyses and checked with cross-validation and multiple imputations to handle missing values. Agreement between actual and corrected BMI values was estimated with Kappa indexes and Intraclass correlation coefficients (ICC). Results On average, BMIs were underreported, especially among girls. Kappa indexes between actual and reported BMI were low, especially for girls: 0.56 95%CI = [0.42–0.70] for boys and 0.45 95%CI = [0.30–0.60] for girls. The regression of reported BMI by gender and body shape perception gave the most balanced results for both genders: the Kappa and ICC obtained were 0.63 95%CI = [0.50–0.76] and 0.67, 95%CI = [0.58–0.74] for boys; 0.65 95%CI = [0.52–0.78] and 0.74, 95%CI = [0.66–0.81] for girls. The regression of reported BMI by gender and socioeconomic status led to similar corrections while the ROC analyses were inaccurate. Conclusions Using body shape perception, or socioeconomic status and gender is a promising way of correcting BMI in self-administered questionnaires, especially for girls.
Introduction
Obesity is responsible for numerous health complications, chronic diseases and increased risk of mortality [1], and it contributes to health inequalities because it mainly concerns poor families [2]. Its prevention and treatment account for a large share of health budgets in Western countries (1.5%-4.6% of the annual expenditures in France [3]). Obesity occurs very early in life and has psychological and social consequences among young people, such as discrimination [4] and bullying [5]. Monitoring the adolescent population is thus an important public health objective, especially because there are indications in some countries that the social gradient in childhood obesity may be increasing over time [6].
The most widely-used indicator of obesity is the Body Mass Index (BMI) based on height and weight [7], although other indicators may be more reliable [8]. Unfortunately, employing technicians to measure these data is often impossible in large-scale surveys, and alternative indicators such as waist circumference and waist-to-hip ratio require a certain amount of training to be reliable.
Further to this, "reported" BMI (based on self-reported height and weight) is not reliable for estimating the true prevalence of obesity in a population. In adult populations, under-reporting for weight and BMI and over-reporting for height are common, although the extent of under-reporting varies between men and women [9]. For example, in a representative sample collected in the general population of France in 2002-2003, on the basis of self-report, 32.2% of subjects who were actually obese were misclassified as non-obese (BMI < 30) whereas only 0.9% of actually non-obese individuals were misclassified as obese [10], showing that there was a strong BMI-related bias in these reports; a more recent analysis conducted in 2006-2007 reported a similar bias [11]. Other biases also exist, such as the occupational category [12], socioeconomic status [13], age and ethnic origin [14]. In an adolescent population, a recent literature review indicated that the sensitivity of reported BMI for screening for actual overweight ranged from 55% to 76%, and that reported overweight prevalence was 0.4% to 17.7% lower than actual prevalence [15]. As among adults, there was a strong weight-related bias in the under-reporting of weight.
Efforts have therefore been made to remedy this situation and correct reported height and weight by means of additional information such as age, gender, reported diabetes mellitus or smoking habits [16], waist-to-hip ratio, health status, or healthcare variables [17]. This is difficult to generalise and requires a battery of measures or variables. Following a different approach, a recent publication [18] provided new thresholds for screening for obesity with self-reported measures, but only in the adult population, in Switzerland. The threshold found cannot therefore be used in adolescents or young adults.
Besides gender, two major characteristics are linked to the bias in reported BMI: social status [19,20] and actual weight. The assessment of the social status of their parents by adolescents is often difficult: it requires numerous questions, and responses are frequently missing or inaccurate because of ignorance, misunderstanding or a desirability bias, but it is nevertheless generally judged sufficiently reliable [21]. Its association with the bias in reported BMI seems not to be systematic in adolescence [11,22]. Regarding weight, it is by definition unknown, but, as shown in [23], one way of bypassing this difficulty may be to use body shape perception because it helps to predict the bias in self-reported weight: adolescents who regard themselves as too fat may more readily underestimate their BMI [22], whereas people satisfied with their body image are less prone to under-reporting their weight [17]. This approach is all the more promising because the social norms for thinness differ across social classes, gender and age groups [17,24]. As noted by de Saint-Pol [25], the "ideal" BMI, defined as the BMI that represents a balance of judgments on one's body shape, is lower among women than men and is lower among higher social categories: body dissatisfaction and desire for slimness are common in high socio-economic environments across the world [26]. Using body perception could thus be efficient for correcting BMI. This approach has been successfully used in a representative German adolescent population [27]. It is also supported by a recent French study on adolescents exploring a wide range of socioeconomic and health-related factors, which showed that BMI under- or over-reporting (compared with measured data) were mainly influenced by age, gender, the father's occupation, actual BMI, and body image perception [28].
More generally, if BMIs were adequately corrected, epidemiologists would be able to estimate the prevalence of underweight, over-weight, obesity, and normal weight. This would be useful to obtain a more accurate picture of the distribution of body shapes in the population as well as to provide early warning to screen for anorexia nervosa (which concerns around 0.3% and 0.9% of adolescents [29,30]) in large-scale surveys.
The aim of this study is to propose strategies that could be easily tested in different populations in order to compute corrected BMI from self-administered adolescent surveys. They are based on the use of gender and a variable related to body shape perception (BSP) or socioeconomic status (SES).
Sample and Protocol
The ESCAPAD survey (Survey on health and behaviour) is regularly carried out by the French Monitoring Centre for Drugs and Drug Addiction with the National Service department during the national defence preparation day (JAPD). Attendance at this one-day session of civic and military information is compulsory for all French adolescents when they reach their 17 th birthday. The ESCAPAD survey takes place in March in all the 300 civilian or military centres across the country. Participants are guaranteed complete confidentiality and anonymity and the completion of the pen-and-paper self-administered form is entirely voluntary: this is explicitly stated by the staff before the distribution of the questionnaire. The survey has gained the Public Statistics general interest seal of approval from the National Council for Statistical Information (2008X713AU) as well as the approval of the ethics commission of the National Data Protection Authority (CNIL). A complete description has been published elsewhere [31,32].
In 2010, a specific ESCAPAD survey was conducted in the city of Paris (n = 2,165) from 6 th October to 6 th December. The questionnaire was completed in the morning. Adolescents attending the day (whether or not they completed the questionnaire) were informed that a random sample would attend an additional face-to-face interview in the afternoon, but this announcement did not mention that they would be measured and weighed during the session. The four interviewers in the afternoon were members of the Survey and Sampling Department of the National Institute for Demographic Studies (INED: www. ined.fr), and all are specialists in qualitative research and interviewing on sensitive topics. Training was organised at INED and work meetings were conducted each week.
Ethics
Based on an examination of the protocol and questionnaire of the Paris survey, the approval of the CNIL did not require written consent of the participants nor that of the parents of minors over 16 years old.
Questionnaire
Reported height and weight (in centimetres and kilograms) were used to compute reported BMI. Body shape perception (BSP) ("How do you feel about your body?": "much too thin", "a bit too thin", "about the right weight", "a bit too fat", "much too fat") was recoded into three categories by combining the upper and lower response categories, giving too thin, about the right weight, and too fat. This question is commonly used this way in studies on obesity and mental health to identify adolescents who "feel fat" [33] or (using a simple dichotomisation) to identify adolescents who perceive themselves as overweight [34].
Material and Setting
The interviewers were trained in the protocol and use of equipment [39]. The scales were electronic, with automatic calibration and a precision of 0.1 kg, which was checked regularly. The height gauges (precision 0.5 cm) were fixed to the wall: a small chair was used to read the height of the tallest individuals. Before the measurements, the adolescents were asked to remove their jackets, pullovers, watches, jewelry and shoes and to empty their pockets. A correction of 0.6 kg for the weight of the remaining clothes worn during weighing was applied. The references for BMI categories were taken from the study by Cole et al. [40,41].
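As a minimal illustration of this measurement step (not the authors' code), the following Python sketch computes measured BMI, assuming the 0.6 kg clothing correction is subtracted from the scale reading; Cole's cut-offs themselves are not reproduced here:

```python
def measured_bmi(scale_weight_kg: float, height_cm: float) -> float:
    """Measured BMI after the 0.6 kg correction for remaining clothes."""
    corrected_weight = scale_weight_kg - 0.6   # assumed direction of the correction
    height_m = height_cm / 100.0
    return corrected_weight / height_m ** 2

# Example: measured_bmi(68.2, 175.0) -> about 22.1 kg/m^2
```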
Statistical Analysis
Differences in categorical (resp. continuous) variables were tested with Pearson chi-squared tests (resp. t-tests). Two models based on linear regressions were used to correct reported BMI, where names in italics are dummy binary variables coding for BSP (resp. SES) categories and e is a random term.
Model 1: actual BMI = constant + a·(too thin) + b·(too fat) + [c·(too thin) + d·(about the right weight) + f·(too fat)]·reported BMI + e. Model 2 is specified analogously, with the SES dummies replacing the BSP dummies. All regressions were computed separately for boys and girls. The quality and predictive power of each model was assessed using R² and the root mean square of errors (RMSE). These analyses were first conducted on the full sample with no missing values: a cross-validation was then conducted using the leave-one-out method and the resulting RMSE values were computed (the lower, the better). Then, missing values for actual and reported BMI (7.6%) and BSP (1.5%) were handled by multiple imputation regressions using the Monte Carlo Markov chain for BMI and logistic regression for BSP: 5 imputations were produced this way [42] to compute regression coefficients. The impact of non-response was assessed by comparing the sets of coefficients obtained in the respondent and imputed datasets (the closer, the better). SES had no missing values.
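A minimal sketch of how Model 1 could be fitted and used to correct reported BMI, assuming a pandas DataFrame per gender with hypothetical columns actual_bmi, reported_bmi and bsp (the study itself used SAS V9.3.2):

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_model1(df: pd.DataFrame):
    """Intercept shifts per BSP category (coefficients a, b) plus
    BSP-specific slopes on reported BMI (coefficients c, d, f)."""
    formula = "actual_bmi ~ C(bsp) + reported_bmi:C(bsp)"
    return smf.ols(formula, data=df).fit()

# Usage (per gender): fit on respondents with measured BMI, then predict
# corrected BMI for the whole sample from reported BMI and BSP.
# boys_fit = fit_model1(boys_df)
# boys_df["corrected_bmi"] = boys_fit.predict(boys_df)
```

Model 2 would swap the BSP column for the SES dummies; the leave-one-out cross-validation and the multiple imputations described above are omitted from this sketch.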
The final indicator of the accuracy of the models was the agreement between the categories of corrected BMI and actual BMI. For this purpose, we computed weighted Kappa indexes [43] and Intraclass correlation coefficients (ICC) with 95% confidence intervals.
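For the agreement statistics, a hedged sketch of a weighted Kappa between actual and corrected BMI categories could look like this (linear weighting is an assumption, since the exact scheme of [43] is not restated here):

```python
from sklearn.metrics import cohen_kappa_score

def weighted_kappa(actual_cat, corrected_cat):
    """Linearly weighted Kappa between ordinal BMI categories
    (e.g. 0 = underweight, 1 = normal, 2 = overweight, 3 = obese)."""
    return cohen_kappa_score(actual_cat, corrected_cat, weights="linear")
```

The intraclass correlation coefficients would be computed on the continuous BMI values with a separate routine.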
Finally, as in the work by Dauphinot et al. [18], ROC analyses (receiver operating characteristics) [44] were used, to provide a corrected threshold for obesity, overweight and underweight for the two genders. This analysis was computed only on the initial dataset without imputing the missing values. We then applied the corrections to the whole Paris sample in the ESCAPAD survey (n = 2,165, age = 17, 18) from which the present subsample of individuals was taken. The significance level was set at 0.05. All analyses were conducted using SAS V9.3.2.
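The ROC step could be sketched as follows; choosing the cut-off by Youden's J is an assumption, since the paper does not state its exact criterion, and the study's own analyses were run in SAS:

```python
import numpy as np
from sklearn.metrics import roc_curve

def reported_bmi_cutoff(is_obese, reported_bmi):
    """Candidate reported-BMI threshold for screening actual obesity.

    is_obese: 0/1 labels derived from measured BMI with Cole's cut-offs;
    reported_bmi: continuous self-reported BMI, one gender at a time.
    """
    fpr, tpr, thresholds = roc_curve(is_obese, reported_bmi)
    best = np.argmax(tpr - fpr)                        # Youden's J = Se + Sp - 1
    return thresholds[best], tpr[best], 1.0 - fpr[best]  # cutoff, Se, Sp
```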
Results
The initial random sample comprised 176 boys and 165 girls aged 17-18, of whom 85.4% were still at school, 6.6% were in vocational training, and 8.0% were unemployed or working. This is consistent with the results found in the whole Paris sample.
Non-response
In all, 7.6% of the individuals refused to report their height or weight (n = 26), with a non-significant difference between boys and girls (6.3% vs 9.1%, p = 0.323). Among them, 5.0% refused to measure their height, while 6.5% refused to report their weight (without significant difference between genders), and 5.3% refused to measure both their height and weight (n = 18), girls more often than boys (8.5% vs 2.3%, p = 0.010). In the whole sample, both reported and measured BMI was obtained for 88.9%. Only 1.5% (5 subjects) refused to answer the question concerning their BSP and their reported BMI was also missing. There were no missing values for SES. The final sample represented 88.5% of the initial sample: it comprised 163 boys and 140 girls with no missing values for BSP, SES, actual and reported height and weight. The imputed dataset comprised 174 boys and 161 girls (6 individuals with missing actual and reported BMIs were removed). Table 1 shows that 16.3% of the subjects found themselves (much) too thin (23.3% of the boys, 7.9% of the girls) while 24.1% found themselves (much) too fat (14.7% of the boys, 35.0% of the girls). Using Cole's criteria, the proportions of overweight/obese individuals using reported BMI were much lower than those obtained using actual BMI, especially among girls (6.4% and 1.4% vs 12.9% and 3.6%). In addition, girls tended to present themselves as thin (16.4% instead of 5.7% for the actual BMI).
Validity of Reported BMI
The agreement between reported and actual BMI categories was moderate for boys (kappa = 0.56, 95%CI = [0.42–0.70]) and lower for girls (kappa = 0.45, 95%CI = [0.30–0.60]).
Regression Analyses
In the raw dataset (Table 2), the comparison of the slopes c, d and f showed they were generally different, and differed between boys and girls. These results suggest the need for separate analyses by gender. The R² values were lower for boys than girls, whereas the RMSE values were higher for boys, showing that the models are more efficient among girls. The lowest RMSE was obtained for model 1 (BSP) for boys and girls. The cross-validation RMSE values were only slightly above the initial values and the comparison of the coefficients obtained in the raw and imputed datasets shows that the results are similar, especially for the slopes c, d, f. These last two findings suggest that the results are robust. Table 3 shows mean corrected BMI and categories in the non-imputed dataset. Compared to the situation with uncorrected data, the agreement between the actual and the corrected BMI categories improved. Considering the results of model 1 as the most balanced for boys and girls, we found that the proportion of underweight girls based on reported BMI was reduced by 57%, while the proportions of overweight and obese girls increased by 67% and 50% and were much closer to the actual values (but still underestimated). The corrected proportion of underweight boys was still below actual values, but the proportion of overweight or obese boys was close to the actual values.
We also conducted a ROC analysis (in the dataset without imputation) to compute the optimal thresholds for reported BMI to screen for obesity, underweight, and overweight. For obesity screening among boys, the best threshold was 27.7 for reported BMI, the corresponding sensitivity (Se) and specificity (Sp) were 100.0% and 98.8% .5% for girls. One reason was the poor correction for obesity, especially among girls: the procedure led to corrected proportions of "obese" individuals that were 2.5% instead of 1.2% among boys and 11.4% instead of 3.6% among girls. Table 4 shows the estimated prevalences of corrected BMI categories when applying models 1 and 2 to the whole Paris sample from the ESCAPAD survey (n = 2,165). The effect on BMI average, underweight and overweight/obese BMI categories was considerable, especially among girls. For example, using model 1 (with BSP), the mean corrected BMI was 22.04 instead of 21.34 among boys and 21.57 instead of 20.29 among girls. The corrected proportions of overweight and obese were 10.5% and 3.2% among boys (compared to 7.8% and 1.9% before correction) and the corresponding values were 9.8% and 1.9% among girls (compared to 4.3% and 1.2% before correction).
Discussion
This study aimed to propose a simple correction of BMI obtained in a self-administered adolescent survey, using only self-reported BSP or SES as external information. Two linear models were used to correct reported BMI: 1/ based on body-shape perception (BSP); 2/ based on socioeconomic status (SES). The robustness of the corrections was evaluated through a cross-validation and multiple imputations. Model 1 gave the best and most balanced Kappas and ICCs for both genders (Kappa = 0.63, ICC = 0.67 for boys, Kappa = 0.65 and ICC = 0.74 for girls). Both strategies improved the estimation of BMI for both genders, and especially for girls. Using model 2 (SES) instead of model 1 (BSP) led to an overestimation of numbers of underweight boys and an underestimation of numbers of obese boys. For girls, the main difference between models 1 and 2 was that model 1 rated more girls as underweight, but the Kappas and ICCs were very close. By comparison, a ROC analysis used to determine the optimal thresholds of reported BMI for screening actual underweight, overweight and obesity yielded less accurate results.
Comparison with other Studies
Most studies focus on comparing reported and measured BMI in order to determine which characteristics most influence the reporting bias, some introducing numerous variables into the analysis [16,17]. Despite a greater potential accuracy, this strategy produces less reproducible studies, as the number of additional variables would need to be large in order to correct the values reported in self-administered questionnaires. The method used here is comparatively more parsimonious and easier to implement [45]. ROC analyses based on the study by Dauphinot et al. [18] have also been applied to determine the optimal cut-offs for reported BMI that predicts real obesity. The fact that our ROC analyses led to inaccurate results may be due to the restricted numbers of subjects, especially obese subjects, but also to the fact that it did not consider auxiliary variables.
Our actual and corrected BMI categories can be compared to other French studies. In a regional sample of schoolchildren aged 6-11 (n = 1000) surveyed in 2004 in the south of France, the prevalences of overweight and obesity, also based on Cole's criteria, were found to be 17.3% and 3.3%, respectively, with similar prevalences in boys (15.8%, 2.9%) and girls (18.8%, 3.7%) [46]. Using a national sample of adolescents aged 14-17, Lobstein and Frelut found that measured overweight/obesity was around 16% in France in 2003, according to Cole's BMI categories [47]. For 11-14 year-olds, the reported prevalences of overweight and obesity were found to be 13.1% and 2.1% in 2006-2007, with no significant differences between genders [48]. In a national representative study of pupils aged 11-15 years conducted in 2006, the prevalence of reported overweight/obesity was similar [49], while a representative school survey in one French eastern region found higher actual obesity prevalence, and under-reporting was twice as common as over-reporting [28]. These differences in overweight/obesity prevalence can be explained by differences in age, regional dietary and lifestyle habits and socioeconomic status [28,49]. But overall, no substantial change in the prevalence of overweight and obese children and adolescents was noted in France between 2000 and 2007, this stability being partly due to large-scale health and obesity prevention campaigns in the context of the National Nutritional Health Programme (Programme National Nutrition-Santé), according to certain authors [50]. Results concerning underweight are comparatively scarce: in a sub-sample of 83 18-29 year-olds taken from a national survey among 18-74 year-olds conducted in 2006-2007, Julia et al. found that 8.4% were underweight (actual BMI < 18.5) [11], which is close to the results in Table 4.
Limitations
First, our sample size is small, so its statistical power is limited. This is particularly true for the extreme categories of BSP. In these categories, the regression statistics suggest that the correction strategies are rather ineffective. A larger sample would be required to confirm whether this result is due to our small number of subjects or rather reflects particular individual variability. Second, the procedure was limited to subjects aged 17-18 years old who were interviewed using a pen-and-paper self-administered questionnaire. Different results might also be obtained if a different data collection mode were used or if an interviewer was to be included in the process [51].
More importantly, our method is based on the assumption that social background (or parental social status) (SES) and body-shape perception (BSP) are confounders of the bias in self-reported BMI for the two genders because of related social body norms. Self-reported social background is subject to bias, such as social desirability or ignorance regarding the parents' occupation, but it can be considered independent from BMI. This may not be the case for BSP, which could vary with SES even when actual BMI is controlled for. We checked that no interaction of this kind was significant, as found by [52]. Nevertheless, the use of our item regarding BSP raises some questions. It is exactly the same as the one used in [33,34]; it is related to "feeling fat" rather than to "being fat" and we used the answer "feeling (much) too fat" as a proxy for the perception of overweight, as in the study by Perrin et al. [34]. As underlined by Allen et al. [53], there are differences between over-concern with weight and shape and body dissatisfaction, which our measure of "feeling fat" tends to mix together. But unlike most of the studies that either aimed to disentangle the components of body image or tried to quantify the effects of each of these components on mental health, we were only interested in the corrective potential of this subjective measure.
Perrin et al. [34] found that the perception of true overweight varies with actual BMI: the proportion of adolescents who perceived themselves as overweight was positively linked with the BMI percentile category and was highest among the actually overweight individuals, especially among girls. For boys, this proportion ranged from 2.7% among those in the 0–60% BMI category, to 23.9% among those in the 75%-85% BMI category and finally to 60.9% among those in the ≥85% BMI category (i.e. overweight). For girls, the corresponding values were: 3.0% among those in the 0-20% BMI category, 46.5% among those in the 60%-85% BMI category and finally 82.4% among those in the ≥85% BMI category. This clearly supports the fact that a correct perception of overweight is much more likely among those who are actually overweight. Our correction strategy is based on this result. Nevertheless, the study by Perrin et al. shows that a large proportion of the girls misperceived their shape, as they (wrongly) thought they were overweight, especially among those in the 60–85% BMI category (46.5%). Our correction strategy may therefore lead to inaccurate corrections among normal-weighted girls, in particular among those whose BMI is close to the overweight category. This finding should be related to the fact that, as found in [54,55], girls are more influenced by the media, parents and peers than boys to engage in strategies to lose weight. For them, body dissatisfaction, body importance and the feeling of being fat are more markedly the result of social pressures. It is because of these worries about weight and shape that the correction of BMI is better for girls than boys, despite some inaccurate corrections for normal-weighted girls. The fact that the correction is less efficient among boys could also be related to the fact that for them, muscles may to some extent have greater importance than fat [56,57]. Incorporating a measure of body dissatisfaction linked to the perceptions of muscle would be interesting.
Conclusions
Using body shape perception and objective socioeconomic status is a promising way of correcting BMI based on reported height and weight. Replications of this study are needed.
|
v3-fos-license
|
2017-06-25T22:39:41.612Z
|
2012-09-26T00:00:00.000
|
8692065
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://bmcclinpathol.biomedcentral.com/track/pdf/10.1186/1472-6890-12-18",
"pdf_hash": "f92943d790e3503893b15ba89582268a72e7c8a7",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44475",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "53a52f3daa2067f91e417d3e5b4469fec3c6537d",
"year": 2012
}
|
pes2o/s2orc
|
Triple-negative breast cancer is associated with EGFR, CK5/6 and c-KIT expression in Malaysian women
Background Triple-negative breast cancer (TNBC) is a heterogeneous subgroup of breast cancer characterized by the lack of estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2) expression. This subgroup of refractory disease tends to have aggressive clinical behavior, a high frequency of metastasis and a lack of response to current hormonal or targeted therapies. Despite numerous studies reporting the clinicopathological features of TNBC and its association with the basal-like phenotype in the Western population, only limited data are available in the Asian population. Therefore, the aim of this study was to investigate the clinicopathological characteristics of TNBC and its association with epidermal growth factor receptor (EGFR), cytokeratin 5/6 (CK5/6) and mast/stem cell growth factor receptor (c-KIT or CD117) expression in Malaysian women. Methods A total of 340 patients diagnosed with primary breast cancer between 2002 and 2006 in Malaysia were reviewed and analyzed. Results The incidence of TNBC was 12.4% (42/340). Bivariate analysis revealed that TNBC was strongly associated with a younger age, higher grade tumor and p53 expression. Further immunohistochemical analysis suggested that TNBC in Malaysian women was strongly associated with EGFR, CK5/6 and c-KIT expression with a high Ki-67 proliferation index. Conclusion In conclusion, our study confirms the association of TNBC with basal-like marker expression (EGFR, CK5/6 and c-KIT) in Malaysian women, consistent with studies in other populations.
Background
Recent advances in DNA microarray technology have enabled the classification of breast cancer into subgroups based on the gene expression profile [1]. Based on the study of these profiles, breast cancer can be divided into five subtypes: luminal A, luminal B, basal-like, normal-like and the human epidermal growth factor receptor 2 (HER2)-overexpressing subtype [1][2][3]. Of particular importance is the basal-like subtype, which accounts for 15 to 20% of all breast cancers and confers a markedly poor prognosis compared to other subtypes [1,4,5].
The majority of basal-like breast cancers exhibit a "triple-negative" phenotype, characterized by the lack of expression of the estrogen receptor (ER) or the progesterone receptor (PR) and a lack of HER2 amplification. They also often have a high frequency of p53 mutations [6,7].
Although the clinicopathologic characteristics of the basal-like carcinomas, compared with other subtypes, have recently been reported in the Korean and Singaporean populations [14,15], the true relationship between triple-negative breast cancer and tumors showing basal-like expression markers has not been clearly elucidated. Thus, our study aimed to investigate the pathology of TNBC in Malaysian women and to understand the relationship between TNBC and basal-like breast cancer (BLBC) in our population.
Tissue and patient data
Patients diagnosed with primary breast cancer at the Gleneagles Intan Medical Centre (GIMC), Malaysia, between 2002 and 2006 were included in the study. Clinicopathological parameters including age, tumor size, histological grade, histological subtype, associated ductal carcinoma in situ (DCIS), lymphovascular invasion and nodal status were evaluated. ER and PR statuses were determined using a standard immunohistochemistry (IHC) staining protocol on initial diagnostic material using a proteinase K antigen retrieval method followed by mouse anti-human ERα monoclonal antibody (clone 1D5; DAKO, Denmark) and mouse anti-human PR monoclonal antibody (clone PgR 636; DAKO, Denmark). ER or PR positivity was defined as the presence of 1% or more positively-stained tumor cells, as described previously [16,17]. HER2 expression was determined using the DAKO HercepTest Kit (Dako, Carpinteria, CA, USA) and scored according to international guidelines [18]. HER2 scores of 0 and 1+ were considered negative. HER2 scores of 2+ and 3+ were considered as HER2 overexpression [16]. All results were available from the original pathology reports except for HER2 amplification. Written informed consent for use of all human specimens in this study was obtained at the time of enrollment.
Determination of proliferation indices
To estimate the growth rate of tumors, the percentage of tumor cells expressing the proliferation marker Ki-67 was measured. A proliferation index was calculated for each tumor lesion by counting the total number of tumor cell nuclear profiles and the number of Ki-67-positive nuclear profiles in randomly and systematically selected fields as described previously [20][21][22]. On average, 500 nuclear profiles were counted per tumor lesion.
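As a small illustration of the index described above (not the authors' code), the proliferation index is simply the percentage of Ki-67-positive nuclear profiles among all counted profiles:

```python
def ki67_proliferation_index(positive_profiles: int, total_profiles: int) -> float:
    """Percentage of Ki-67-positive nuclear profiles (about 500 counted per lesion)."""
    return 100.0 * positive_profiles / total_profiles

# Example: ki67_proliferation_index(230, 500) -> 46.0
```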
Statistical analysis
The Fisher's exact test was used to analyze the correlation between the triple-negative phenotype and EGFR, CK5/6 or c-KIT expression. The Student's t-test and Mann-Whitney test were used to compare the Ki-67 proliferation index of TNBC and non-TNBC. All statistical analyses were performed using SPSS for Windows (Version 11). A P value of less than 0.05 was considered statistically significant.
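A hedged sketch of the tests named above (the study itself used SPSS; inputs here are placeholders supplied by the caller, not data from the paper):

```python
from scipy import stats

def tnbc_association_tests(table_2x2, ki67_tnbc, ki67_non_tnbc):
    """Fisher's exact test for a marker-by-TNBC 2x2 table, plus t-test and
    Mann-Whitney test comparing Ki-67 indices between the two groups."""
    _, p_fisher = stats.fisher_exact(table_2x2)
    _, p_ttest = stats.ttest_ind(ki67_tnbc, ki67_non_tnbc)
    _, p_mw = stats.mannwhitneyu(ki67_tnbc, ki67_non_tnbc, alternative="two-sided")
    return {"fisher": p_fisher, "t_test": p_ttest, "mann_whitney": p_mw}
```

P values below 0.05 would be taken as statistically significant, mirroring the threshold stated above.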
Results
Triple-negative breast cancer is associated with a younger age and high tumor grade in Malaysian women
The clinicopathological characteristics of the cohort are summarized in Table 1. All cases were further stratified based on ER, PR and HER2 statuses. A total of 42 cases (12.4%) were identified as TNBC and the remaining 298 cases (87.6%) expressed at least one of the markers and were classified as non-TNBC cases. Among all the non-TNBC cases, a total of 111 (37.2%) cases were ER/PR+ and HER2+, 106 (35.6%) cases were ER/PR+ and HER2-, and 81 (27.2%) cases were ER/PR- and HER2+ (Table 2). Of note, the majority of patients diagnosed with TNBC were of a younger age (below 40 years), with a mean age of 45.3 ± 10.3 years versus 50.0 ± 10.4 years in the non-TNBC cases (Student's t-test, P = 0.0029). In addition, most of the TNBC cases were high grade tumors, with 76.2% of the cases diagnosed as grade 3 versus 50.7% in the non-TNBC group.
Although the tumor size in the TNBC cases was slightly larger (2.8 ± 1.6 cm) compared to the non-TNBC cases (2.5 ± 1.4 cm), the difference was not statistically significant (Student's t-test, P = 0.153). Similarly, no differences in histology (IDC vs DCIS) (Fisher's exact test, P = 0.322) or lymph node infiltration rate (Fisher's exact test, P = 0.177) were observed between TNBC and non-TNBC cases. Thus, the major differences between TNBC and non-TNBC were age and tumor grade: TNBC patients were younger and had higher grade tumors compared to non-TNBC patients.
Triple-negative breast cancer is strongly associated with EGFR, CK5/6 and/or c-KIT expression
Based on the available clinical data, tissue samples from a total of 36 patients were reviewed and retrieved for EGFR, CK5/6 and c-KIT staining. Of the 36 samples, 18 were TNBC and 18 were non-TNBC based on the prior ER, PR and HER2 staining. All cases were age- and grade-matched as closely as possible and the majority were grade 3 tumors. The clinicopathological features of the cases included in the current study are summarized in Table 3.
Triple-negative breast cancers have higher Ki-67 indices
To further characterize the phenotypes of breast cancers in Malaysian women, we also analyzed the Ki-67 proliferation index in TNBC and non-TNBC cases in the current cohort. Thirty-six out of 38 specimens (16 TNBC and 18 non-TNBC cases) were stained with a Ki-67-specific antibody (clone MIB-1) and the proliferation index was estimated as the percentage of Ki-67-positive nuclear profiles in randomly and systematically selected fields. As shown in Figure 2, TNBC had a significantly higher Ki-67 index than non-TNBC tumors in Malaysian women (Student's t-test, P = 0.003 and Mann-Whitney test, P < 0.004). The mean proliferation indices for TNBC and non-TNBC tumors were 47.48 ± 17.55 and 31.43 ± 11.81, respectively. The median proliferation indices were 45.98 and 32.39 for TNBC and non-TNBC, respectively. These results suggest that TNBC has a higher proliferation rate than non-TNBC in Malaysian women.
Discussion
Breast cancer is a heterogeneous group of diseases that can be characterized into clinically, morphologically and biologically distinct subgroups [14,23]. By gene expression profiling and IHC markers, breast cancers can be classified into five major subtypes: luminal A, luminal B, basal-like, normal-like and HER2-overexpressing tumors [1,8,12,24,25]. Of particular importance is the basal-like subtype, which accounts for 15 to 20% of all breast cancers and confers a markedly poor prognosis [1,4,5]. Recent studies have shown that basal-like breast cancers are likely to be mitotically active high-grade invasive tumors that are associated with a younger patient age [4,26,27]. A population-based study also identified this subtype as more prevalent in premenopausal African American women and highly associated with BRCA-1 mutation [4,12,26,27]. Due to their lack of ER, PR and HER2 expression, basal-like breast cancers are also unlikely to respond to anti-estrogen hormonal therapies or trastuzumab [26,28].
To date, the gold standard for identifying basal-like breast cancers is based on gene expression profiling. However, cost and technical issues have rendered gene expression profiling impractical as a routine diagnostic tool in the clinical setting. In the Western population, approximately 70 to 90% of "triple-negative" breast cancers (ER-, PR-, HER2-) express basal markers, resulting in the triple-negative subtype commonly used as a surrogate marker for the basal-like subtype [1,4,8,18,[29][30][31][32][33][34][35][36][37]. Relatively little is known about this disease entity within Asian populations, and in particular Malaysian populations.
Within the small cohort of 340 breast cancer patients described in this study, a total of 42 cases (12.4%) were identified as triple-negative. This proportion was slightly lower than that reported in recent studies of the Malaysian, Japanese, Chinese and Korean populations, which estimated the prevalence of TNBC to be around 15 to 20% [23,[38][39][40][41]. Consistent with earlier studies, our results showed that TNBC in Malaysian women was strongly associated with a younger age and high grade tumors compared to non-TNBC [5,10,14,15,38,42]. However, no significant differences in tumor size, histology (IDC vs DCIS) and lymph node infiltration rates were observed between TNBC and non-TNBC in the current study.
Further analysis was carried out to investigate the expression of EGFR, CK5/6 and c-KIT in TNBC and non-TNBC specimens. Due to the lack of information on HER2 amplification, only tumors with HER2 scores of 0 were included in the TNBC cohort. Our results demonstrated that TNBC in Malaysian women was strongly associated with EGFR, CK5/6 and c-KIT expression. Our results also showed that TNBC had a significantly higher Ki-67 proliferation index than non-TNBC, suggesting that TNBC could be more progressive.
Numerous studies have also shown that basal-like breast cancer can be specifically identified using IHC surrogate panels including ER, PR and HER2 negativity and either EGFR or CK5/6 positivity (ER-, PR-, HER2-, CK5/6+ and/or EGFR+) [8,19,26,43,44]. Using such surrogates, our study showed that 78% (14/18) of TNBC can be categorized as basal-like breast cancers. This proportion is consistent with previous studies that also show that 71.5% of TNBC are basal-like by gene expression profiling [30].
Conclusions
In conclusion, the incidence of TNBC in this small, predominantly Asian cohort is comparable to reported data in other populations. Consistent with other studies, TNBC in Malaysian women is associated with a younger age and a higher tumor grade, as well as p53 expression in bivariate analysis. Our findings also confirm that TNBC in Malaysian women strongly correlates with EGFR, CK5/6 and c-KIT expression and has a higher proliferation rate.
|
v3-fos-license
|
2021-07-27T00:05:35.527Z
|
2021-05-26T00:00:00.000
|
236394020
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.researchsquare.com/article/rs-494018/v1.pdf?c=1631897850000",
"pdf_hash": "091cdaccc450789b6a2a123a2b584f24f035fbbd",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44477",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"sha1": "99cca9cc3c0479b0455c5b63dc2a55e925cc0a62",
"year": 2021
}
|
pes2o/s2orc
|
Research on the Influence of Production Technologies on the Positioning Accuracy of a Robotic Arm for Low-Handling Weights
The subject of the paper is the research of production technologies’ influence on positioning accuracy of a robotic arm. The aim was to find out whether different production technologies (additive and conventional) and the related design differences of the robotic arm affect its operational functionality. In the research, positioning accuracy of a robotic arm formed by three partial arms was specifically investigated, while the first partial arm, Arm I, was manufactured by two different technologies. On the robotic arm, the research was carried out in such a way that the first partial arm, Arm I, was being continuously changed and was available for research purposes in two variants. Each of the Arm I variants was manufactured using a different technology (additive and conventional) while, at the same time, the individual variants also differed in construction. The design differences of both variants were related to the production technology used. The measurement of positioning accuracy was performed with the use of two methods. Specifically, a contact and a non-contact method were used. The contact method was implemented on a 3D-measuring machine, RAPID, and the second contactless method was performed using an inductive sensor.
Introduction
The use of robots for handling and positioning, welding, subtractive and additive production has been expanding in recent years in line with the Industry 4.0 concept [1]. A robotic arm can be a separate mechanism or part of a more complex robot. It is a type of mechanical hand that is usually programmable and has similar functions to a human hand [2].
At present, we encounter the use of various production technologies in the production of robotic arm components. CNC machining technologies are the most widespread [3]. Covaciu and Filip [4] designed and validated a robotic arm made with two CNC machines. Gordaninejad and Vaidyaraman are among the first researchers to compare the positioning accuracy of robotic arms made of conventional metallic materials and advanced composites [5].
Until recently, it was not possible to find a published use of an additive manufacturing component in an industrial robotic arm. However, the great expansion of 3D printing technology is creating increasingly suitable conditions for this [6]. A conventional 3D printer uses layering in a horizontal plane to produce a 3D printed part. Ishak and Larochelle [7] proposed the integration of existing additive manufacturing process technologies with the arm of an industrial robot to create a multi-plane layered 3D printer. Their approach allows printing to be layered in multiple planes, while existing conventional 3D printers are limited to one plane of the toolpath. The integration of the robotic arm and the extruder allows the movement of several planes of the tool path to be used in the production of structural parts.
Hajash et al. [8] introduced a fast liquid printing device that can freely print in any direction, rather than layer by layer, depositing liquid material in a granular gel to create 3D structures. Recently, however, several scientific papers have appeared on the additive manufacturing of robotic arm components. Mick et al. [9] proposed a prototype of a robotic arm made using 3D printing, which is also economically more advantageous compared to the price of an industrial robot thanks to conventional drives. Junia Santillo Costa et al. [10] implemented and validated a 3D printed open source robotic arm with 6 degrees of freedom made of ABS (Acrylonitrile Butadiene Styrene) material, due to higher mechanical strength. Ismail et al. [11] designed and developed a robotic arm for lifting light parts with 4 degrees of freedom. They used a 3D printing method to make the robotic arm components, which provided more accurate dimensions and time and cost savings. Wang et al. [12] proposed a special holder for the InnoMotion robotic arm, where the main components of the holder were made by 3D printing using plastic material and are fully compatible with the MR and CT system and the robotic arm.
The positioning accuracy of the robotic arm can vary widely in the workspace, which is influenced by various factors. Therefore, researchers propose various methods and methodologies to improve the reliability and repeatability of positioning accuracy [13]. One of the parameters influencing the accuracy of the robotic arm is vibration. Elvira-Ortiz et al. [14] proposed a methodology to improve the estimation of kinematic parameters on industrial robots by correctly suppressing the vibrational components present on the signals obtained from the two primary sensors: the accelerometer and the gyroscope. Their results prove that the sensor fusion technique, accompanied by correct vibration suppression, provides a better estimate of the kinematic parameters than other proposed techniques. The accuracy of the robotic arm is also affected by sensing the position of each joint with a high-resolution optical coding device that cannot detect certain mechanical deformations, thus reducing the accuracy of the robot's positioning and orientation. Research in this area has been addressed by Rodriguez-Donate et al. [15], who developed an intelligent processor using Kalman filters to filter and fuse information from a network of sensors. They used two primary sensors: an optical encoder and a 3-axis accelerometer. Calibration of a robot arm is an important factor in the accuracy of robot positioning. A simple, low-cost calibration procedure that improves the surface positioning accuracy of a SCARA (Selective Compliance Assembly Robot Arm) double-arm robot was published by Joubair et al. [16]. One of the key problems in examining the positioning accuracy of robotic arms is the working temperature. Kluz et al. [17] analyzed the influence of temperature on the positioning accuracy of the robot arm. The results obtained were subjected to statistical analysis using the Shapiro-Wilk test, which confirmed that the three-sigma rule can be used to calculate the value of the total positioning error of the robot arm.
The use of cooperating robotic solutions is also supported by the current trend of automation and data exchange in production technologies, the so-called Industry 4.0 [18]. The goal of Industry 4.0 is ultimately to achieve efficiency, reduce costs and increase productivity through integrated automation. In Industry 4.0, production systems are characterized by individualized products in conditions of highly flexible mass production. A literature review of current standards for human-robot collaboration shows they can be used in a wide range of different regimes [19]. The field of application of human-robot collaborative systems includes handling, welding, assembly and the automotive industry [20]. Nowadays, human-robot collaboration systems are widely used in manufacturing companies that operate according to the concept of Industry 4.0. Due to the necessity of meeting the needs of individual customers, interest in applications for knowledge transfer support is growing. Ballestar et al. [21] provided knowledge related to the interconnection of industrial robotics and productivity of work in small and medium enterprises (SMEs). Patalas-Maliszewska and Krebs [22] developed a knowledge transfer approach to select the best characteristics of a knowledge worker in order to improve the effective use of the application for knowledge transfer support among employees. This approach is based on survey data obtained from Polish production plants.
Research so far has mostly focused on positioning accuracy control, control systems, control methods and positioning, but we have not been able to find publications examining the relationship between the positioning accuracy of a robotic arm and the manufacturing technologies with which it was produced. The main research intention of the presented paper builds on this gap; its aim is to expand the field of knowledge concerning the accuracy of robotic arm positioning in relation to the production technologies used.
The subject of the paper is the research of production technologies impact on positioning accuracy of a robotic arm. The aim was to find out whether different production technologies (additive and conventional) and the related design differences of the robotic arm affect its operational functionality. In the research, positioning accuracy of a robotic arm formed by three partial arms was specifically investigated, while the first partial arm Arm I was manufactured by two different technologies. On the robotic arm, the research was carried out in such a way that the first partial arm, Arm I, was being continuously changed and was available for research purposes in two variants. Each of the Arm I variants was manufactured using a different technology (additive and conventional) while, at the same time, the individual variants also differed in construction. The design differences of both variants were related to the production technology used. The measurement of positioning accuracy was performed with the use of two methods. Specifically, a contact and a non-contact method were used. The contact method was implemented on a 3D-measuring machine, RAPID, and the second contactless method was performed using an inductive sensor. The maximum working load of the robotic arm was 2 kg, therefore the positioning accuracy was examined at three degrees of operating load equal to 0, 50 and 100% of the maximum workload.
The research result shows that the technology of robotic arm production does not have a direct influence on its positioning accuracy. This result is based on mathematical-statistical analysis. However, production technology affects the arm's design. This can be considered a secondary aspect, which can nevertheless affect positioning accuracy. The hypothesis that emerged from the research needs to be investigated further. Within the presented research results, inaccuracies in the positioning of the robotic arm were observed that are to be attributed to the different constructions resulting from the production technology used.
Materials and Methods
A robot developed with the most widely used angular structure, with rotational movements in 3 axes (Figure 1), was used for the experiment.
The robot is in the shape of a human arm with swivel joints. The working space consists of spherical areas. Such robots are suitable for a wide range of activities involving the use of three rotary motion axes. The robot itself consists of a robot base and three arms. The base of the robot is usually anchored horizontally; Arm I is mounted on it and rotates around the vertical axis Z. The remaining two axes of rotation are horizontal and parallel to each other. They correspond to Arm II and Arm III. Arm III can work in proximity to the Z axis. The individual arms of the robot are connected to each other by gear mechanisms driven by servo drives. As the arms move, elastic deformation occurs, which also affects the positioning accuracy of the end of the robotic arm with the manipulation effector. Positioning accuracy research was carried out on two otherwise identical versions of Arm I, which differed in the production technology used.
Arm I-Made by Additive Manufacturing (AdM)
Production of Arm I was realized by a 3D printer Xline 2000R [23], and the material used was AlSi10Mg (Figure 2).
Technical specification:
• layer height 0.06 mm,
• support structure layer height 0.12 mm,
• total time of finishing and removing of the support structure, finishing and sandblasting 32 h.
Y, Z, X in Figure 2 represent the Cartesian coordinate system, i.e., the orientation of Arm I for the individual measurement methods, which is identical with the manipulation of the robotic arm with the load in operation.
Arm I-Made by CNC (Computer Numerical Control) Milling (CvM)
Production of Arm I was realized by the CNC (computer numerical control) machine Pinnacle VMC 650S; the material used was AlMg4.5Mn, DIN 1732, with dimensions 150 mm × 150 mm × 450 mm (Figure 3). Its chemical composition is given in Table 1 and its physical-mechanical properties in Table 2.
Contact Measuring Method (CoM) of Robotic Arm's Position
The contact measuring method (CoM) on a 3D-measuring machine can be implemented with a ruby-ball stylus either manually or by programming the stylus for repeated measurements. In our case, stylus programming by the learning method was used. Prior to the measurement itself, the coordinate system of the 3D-measuring machine and the coordinate system of the measured-scanned body were identified. An NC (numeric control) scanning program was created in the control system of the 3D-measuring machine RAPID THOME Präzision [24], which ensured scanning of positions during repeated measurements. A stylus with a 4 mm diameter ruby ball was used for scanning, as in Figure 4. The defined scanning sensitivity of the 3D-measuring machine, 0.001 mm, was achieved by observing the design and operating conditions. A holder was mounted on the robotic arm, which allowed mounting and exchange of the weight; at the same time, a part of it was machined so that three perpendicular planes could be read, which, after alignment with the reference coordinate system of the 3D-measuring machine, were used to sense the position.
The measuring chain for CoM is shown in Figure 5. After setting the reference coordinate system of the 3D-measuring machine (it consists in setting a fixed zero point on the base frame from which all measured values are read), the program for positioning the robotic arm is initialized and then adjusted to the position with maximum reach. The 3D-measuring machine is then initialized. The stylus with ruby ball automatically senses the end position of the robotic arm [mm] and loads it into the PC application Metrolog XG. In this application, the end position of the robotic arm is evaluated.
Non-Contact Measuring Method (NcM) of Robotic Arm's Position Deviation
In the second method of measuring position deviation, NcM was used with the proximity sensor MTN/EP080 Probe [25], as shown in Figure 6. The defined sensing sensitivity of the proximity sensor, 0.01 mm, was achieved by adhering to the design and operating conditions.

A holder with a replaceable weight, machined in three mutually perpendicular planes, was reused on the robotic arm. A stable stand made of non-magnetic material with a non-conductive proximity sensor holder was mounted on the base frame of the 3D-measuring machine. The holder with a replaceable weight is used (after setting the switching position of the proximity sensor) to sense the incremental position deviation in the direction of the X, Y and Z axes in micrometers.

In the measuring chain (Figure 7), the power supply to the proximity sensor is initialized first. Subsequently, the program for positioning the robotic arm is activated. This is followed by positioning the robotic arm to the position with maximum reach. After reaching this position, a graphical recording of the maximum reach of the robotic arm measurement is performed. The incremental value of position deviation for the maximum reach of the robotic arm, in µm, is read from the graphic record in the LabVIEW software.

The robotic arm with Arm I-AdM and with Arm I-CvM (Figure 8) is shown for NcM at maximum reach. For this method of measurement, the base frame of the 3D-measuring machine was used only to fix the position of the robotic arm while maintaining a constant position of the base frame of the robotic arm. At the same time, the base frame of the 3D-measuring machine was used to place and fix the proximity sensor holder.
Measurement Procedure and Evaluation of Deviations in the Position of the Robotic Arm
The robotic arm was at a maximum reach of 609 mm, according to Figure 1, programmed by the learning method for Arm I-AdM and Arm I-CvM. For CoM and NcM, the same program was used in the robotic arm control system. At the beginning of the measurement, the CoM of robotic arm's position was selected, and n measurements were performed with Arm I-AdM in individual directions of the X; Y; Z coordinate system. NcM of robotic arm's position deviation followed. After its completion, the disassembly of Arm I-AdM and assembly of Arm I-CvM followed. With this arm, n measurements were performed using NcM of robotic arm's position deviation in each direction of the X; Y; Z coordinate system. Then the CoM of robotic arm's position followed. The position of the proximity sensor was always adjusted to each scanned position of the robotic arm in the corresponding coordinate system. The data obtained were recorded and statistically processed for n measurements, for each axis of the coordinate system, for CoM, NcM, AdM and CvM, while the individual measurements were marked with the order index i.
The proposed procedure for measuring and evaluation of the robotic arm's position deviation is shown in Figure 9.
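The measurement plan described above (two Arm I variants, two measurement methods, three loads and three measuring axes, with repeated readings in each configuration) can be organised as a simple bookkeeping loop. The sketch below is only an illustration of that bookkeeping: the acquisition function and the label format are placeholders standing in for the real readings from Metrolog XG (CoM) and the LabVIEW record (NcM), and 11 repetitions per set follows the statistical evaluation reported later.

```python
import itertools

ARMS    = ["AdM", "CvM"]     # Arm I production technology
METHODS = ["CoM", "NcM"]     # contact / non-contact measurement
LOADS   = [0, 1, 2]          # kg
AXES    = ["X", "Y", "Z"]
N_REPEATS = 11               # repeated measurements per set

def acquire_position(arm, method, load, axis):
    """Placeholder for a single reading from the measuring chain."""
    return 0.0  # the real value comes from Metrolog XG (CoM) or LabVIEW (NcM)

datasets = {}
for arm, method, load, axis in itertools.product(ARMS, METHODS, LOADS, AXES):
    # e.g. AdM_CoM_1_X; the paper's CoM labels use doubled axis letters such as Xx.
    label = f"{arm}_{method}_{load}_{axis}"
    datasets[label] = [acquire_position(arm, method, load, axis)
                       for _ in range(N_REPEATS)]

print(len(datasets), "measurement sets of", N_REPEATS, "readings each")
```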
Ishikawa Diagram
The generated Ishikawa diagram (Figure 10) defines, investigates and detects the effects of several influences and causes which result in the variability of the robotic arm position, and it summarises the expected sources of variability in deviations of the robotic arm position. The source of variability which is the subject of the experiment is the accuracy of the position of the robotic arm with respect to the production technology used. However, in addition to the expected sources, other factors also affect the accuracy of the robotic arm's position. The causes are divided into categories and represent potential sources of variability for the measured position deviations of the robotic arm.
Measuring apparatus is a factor influencing the complexity of measurement and the methodology design, requiring the experience of the person performing the measurements. It also affects the time aspect.
Robotic arm construction influences the production, operation, method of measurement, the possibility of using the apparatus, the choice of measuring members, the choice of the material of the robotic arm, while directly affecting its mechanical properties.
The human factor is the sum of characteristics of the person performing the measurement. In it, physical properties can be included, e.g., the promptness and measurement speed. Furthermore, experience that can be not only theoretical but also practical. Experience, within the researched issue, means practical experience in the field of measurement, metrology, creation of measurement methodology, planning of experiments with the aim of their harmonization for the measurement of required quantities.
Position is a factor reflecting the location of the robotic arm. It includes external influences that can be mechanical or physical. Their effect on the position of the robotic arm can be in terms of duration: temporary or permanent. In terms of severity, it can be minimal, neutral or serious.
Course of measurement is a factor that affects the method of measurement, the accuracy of results, their reliability, the equipment used and the complexity of the implementation.
Other sources of variability are undesirable in the experiment and require correction using the following methods: • elimination-the conditions of the experiment will ensure that this source of variability does not occur in the experiment at all; • minimization-targeted reduction of variability so that the rest will be part of the experimental error; • part of the experimental error-we know about this source, it is impossible to treat, thus will be reflected in a random error in the calculations.
As part of ensuring the conditions for carrying out the experiment, each known source of variability was corrected accordingly.
The measuring apparatus for measuring a robotic arm's positioning accuracy by NcM, using a proximity sensor, consisted of a voltage source, connecting electrical cables, an A/D (an analog-to-digital) converter and a proximity sensor. The measuring apparatus for measuring of robotic arm's positioning accuracy by CoM consisted of a 3D-measuring machine, RAPID, and a transducer with a complete connection of sensors.
Statistical Evaluation of Measurement Results
Statistical evaluation of the measurement results was performed for both Arm I-AdM and Arm I-CvM with the use of CoM and NcM. In each case, 11 measurements described in Section 2.5 were performed, while each group of 11 measurements formed one set. The results were processed according to Figure 9 as follows:
a. Verification that the selection of measured values comes from a population with a normal distribution.
• Check for outliers by comparing the distance of the minimum and maximum from the first and third quartiles (a sketch of this screening is given after the list).
b. Descriptive statistics of the measurement groups for each setting.
• Position statistics-average, median, minimum, maximum. The median is the middle value in the data sorted by size, and together with the minimum and maximum values, they give a view of how densely the data are distributed.
• Statistics on the variability of results-range, standard deviation. While range is the difference between the minimum and maximum value and only expresses the width of the data occurrence, standard deviation refers to the data density around the average.
• Other statistics show the shape of the frequency curve: skewness about the location of the furthest sampling value from the arithmetic mean, and kurtosis about the density of data around the mean.
c. Comparison of measurement pairs.
• For Arm I-AdM and Arm I-CvM, where the other mode settings were the same, the two-sample t-test was used for the null hypothesis (difference in means is equal to 0) µ_1 = µ_2. To calculate the test statistic t, it was necessary to verify whether equal variances (var.equal) can be assumed. This was preceded by an F test comparing the two variances against a ratio of variances equal to 1.
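A minimal Python sketch of the outlier screening in step (a). The paper compares the distance of the minimum and maximum from the first and third quartiles but does not state the exact threshold, so the 1.5 × IQR factor and the example readings below are assumptions used for illustration only.

```python
import numpy as np

def remove_outliers(values, k=1.5):
    """Drop values lying more than k*IQR outside the first/third quartile.

    The study compares the distance of the minimum and maximum from Q1 and Q3;
    the factor k = 1.5 is an assumption, the exact rule is not given in the text.
    """
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

# One set = 11 repeated position readings for a given arm/method/load/axis (hypothetical mm values).
adm_com_0_xx = [608.91, 608.93, 608.92, 608.95, 608.94,
                608.93, 608.92, 609.40, 608.93, 608.94, 608.92]
clean = remove_outliers(adm_com_0_xx)
print(f"kept {len(clean)} of {len(adm_com_0_xx)} measurements")
```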
Validation of Results by Computer Simulations
During the research implementation, verification of measured values was also carried out. FEM (finite element method) was used to verify them. The results were compared with the results of experimental calculations. As an example, we present the Z-axis deformation magnitudes in AdM_NcM determined by experimental measurements, the values of which are given in Table 3. Then, the calculation of robotic arm load was performed using FEM, and the magnitude of the deformation was determined. An example of the result of the calculation is shown in Figure 11.
The comparison of the experimentally measured deformation value with the FEM calculation shows a difference of 8.5%. Verification was performed in the same way for the CvM_NcM arm. The magnitude of the measured deformations is given in Table 4. Comparing the experimentally measured deformation value with the FEM calculation for this arm gives a difference of 1.4%.
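The reported 8.5% and 1.4% agreements are simple relative differences between the measured and FEM-computed deformations. The sketch below illustrates the computation with hypothetical deformation values (the real values are in Tables 3 and 4, which are not reproduced here); the choice of the measured value as the reference is also an assumption.

```python
def relative_difference(measured, fem):
    """Percentage difference between a measured deformation and the FEM value,
    taking the measured value as the reference (an assumption)."""
    return abs(measured - fem) / abs(measured) * 100.0

# Hypothetical deformations in mm, chosen only to illustrate the formula.
print(f"AdM_NcM: {relative_difference(0.200, 0.183):.1f} %")
print(f"CvM_NcM: {relative_difference(0.140, 0.138):.1f} %")
```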
Verification That the Selection of Measured Values Comes from a Population with a Normal Distribution
In this part, it was necessary to verify the hypothesis of a normal population distribution. The hypothesis of a normal population distribution (Table 5) was not rejected, i.e., the measurements were performed correctly, and the results of measurements can be further used for statistical evaluation using parametric tests.
Tables 5 and 6 present the results for groups of measurements without outliers. Since a comparison using non-parametric methods was not possible for these measurement results, a comparison of measurements in the direction of the Y axis was not performed for the affected sets, namely the pairs CoM_0_Yy, CoM_1_Yy and CoM_2_Yy. For practice, the statistical conclusions show that in the measurements in the Y-axis direction there was a dimensional anomaly during the exchange of the arms, caused by non-compliance with the assembly procedures and the prescribed tightening torques of the screws securing the arm.
The hypothesis of a normal population distribution for all NcM measurements in Table 6 was not rejected, so the measurements were performed correctly, and the measurement results can be further used for statistical evaluation using parametric tests. When replacing the arms and ensuring the correct assembly procedures and the prescribed tightening torques of the screws, measurements were achieved that also met the statistical requirements.
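The text does not name the specific normality test that was applied before the parametric F-test and t-test; the sketch below uses the Shapiro-Wilk test purely as an illustrative stand-in, on hypothetical readings.

```python
from scipy import stats

# Hypothetical set of 11 repeated readings (mm) after outlier removal.
sample = [608.91, 608.93, 608.92, 608.95, 608.94,
          608.93, 608.92, 608.94, 608.93, 608.94, 608.92]

# The paper does not state which normality test was used; Shapiro-Wilk is a
# common choice for small samples and is used here purely as an illustration.
w_stat, p_value = stats.shapiro(sample)
if p_value > 0.05:
    print("normality not rejected -> parametric tests (F-test, t-test) may be used")
else:
    print("normality rejected -> a non-parametric comparison would be needed")
```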
Descriptive Statistics
In Tables 5 and 6, the number n of values that remained after excluding the outliers is given for every measurement. The descriptive statistics used for the robotic arm position were: mean, median, minimum and maximum. Additional descriptive statistics of robotic arm position variability were used in the article: range and standard deviation, together with descriptive statistics of shape: skewness and kurtosis. Descriptive statistics are calculated separately for CoM and NcM due to the different measurement methodologies.
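The position, variability and shape statistics listed above map directly onto standard library calls; a small sketch, again on hypothetical readings, of how one measurement set could be summarised.

```python
import numpy as np
from scipy import stats

def describe(values):
    """Position, variability and shape statistics of the kind reported in Tables 7 and 8."""
    v = np.asarray(values, dtype=float)
    return {
        "n": v.size,
        "mean": v.mean(),
        "median": np.median(v),
        "min": v.min(),
        "max": v.max(),
        "range": v.max() - v.min(),
        "sd": v.std(ddof=1),           # sample standard deviation
        "skewness": stats.skew(v),
        "kurtosis": stats.kurtosis(v), # excess kurtosis
    }

# Hypothetical CoM readings (mm) for one measurement set.
print(describe([608.91, 608.93, 608.92, 608.95, 608.94,
                608.93, 608.92, 608.94, 608.93, 608.94, 608.92]))
```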
CoM Descriptive Statistics
Descriptive one-dimensional position and variability statistics of CoM for Arm I-AdM and Arm I-CvM are given in Table 7. As follows from the CoM principle (Section 2.3), the results in Table 7 are given in mm.
For measurements in the direction of the X axis (Figure 13), for Arm I-AdM and Arm I-CvM, a significant decrease in the median position of the arm was evident when changing the load from 0 kg to 1 kg, and a slight decrease in the median position is still visible when changing the load from 1 kg to 2 kg. The variability of measurements in the direction of the X axis, evaluated by range and standard deviation sd, is highest for the measurement AdM_CoM_1_Xx. With both types of arm, the robot arm decreases in the X-axis direction as the load increases.

For the measurement in the direction of the Y axis (Figure 14), for Arm I-AdM, there is a significant increase in the median position when changing the load from 0 kg to 1 kg and a slight decrease in the median position when changing the load from 1 kg to 2 kg. Arm I-CvM shows a significant decrease in the median position when changing the load from 0 kg to 1 kg, and a slight decrease in the median position when changing the load from 1 kg to 2 kg. The highest range for measurements in the direction of the Y axis is for CvM_CoM_2_Yy, namely a range of 0.44 mm and a standard deviation sd of 0.14 mm.

For measurements in the direction of the Z axis (Figure 15), for Arm I-AdM, there is a significant increase in the median position when changing the load from 0 kg to 1 kg and a slight decrease in the median position when changing the load from 1 kg to 2 kg. Arm I-CvM shows a slight decrease in the median position when changing the load from 0 kg to 1 kg and a slight decrease in the median position when changing the load from 1 kg to 2 kg. The range in the direction of the Z axis is the largest for measurement AdM_CoM_2_Zz, namely a range of up to 0.71 mm and a standard deviation sd of 0.20 mm.

Figures 13-15 show that both Arm I-AdM and Arm I-CvM, without load, reach a different position than when measuring with load. The resulting increase in the position of the robotic arm in the Z-axis direction using Arm I-AdM with respect to Arm I-CvM is caused by springing of the Arm I-AdM structure.
NcM Descriptive Statistics
Descriptive one-dimensional NcM position and variability statistics for Arm I-AdM and Arm I-CvM are in Table 8; in line with the NcM principle, the results in Table 8 are given in µm.

From Table 8 and Figure 16, for Arm I-AdM measurements in the direction of the X axis, it is possible to observe a higher value of the mean and a lower range and standard deviation sd when the load is increased to 1 kg; the range and standard deviation sd increased when the load was increased to 2 kg and the median shifted. For Arm I-CvM in the direction of the X axis, it is possible to observe a decrease in the median value as well as the mean when the load increases, and at the same time a decrease of range and standard deviation sd. This follows from the above behavior of the position of the robotic arm with Arm I-AdM and Arm I-CvM and is due to the rigidity of the sensor holder.

From Table 8 and Figure 17, for Arm I-AdM measurements in the Y-axis direction, an increase in the median value can be observed as the load increases, while the range and standard deviation sd change only slightly. For Arm I-CvM in the direction of the Y axis, it is possible to observe an increase in the median and the mean with increasing load, while the range and standard deviation sd are higher at higher loads. Failure to meet the condition of normal distribution of measured values is also confirmed by the anomalies in Figure 17; at the same time, the stress caused by the construction of Arm I-AdM needs to be considered.
From Table 8 and Figure 18, for Arm I-AdM measurements in the direction of the Z axis, it can be observed that the median value always decreased and the range changed slightly due to the increase in load. For Arm I-CvM in the direction of the Z axis, it is possible to observe that the median value and the mean oscillate with increasing load and the range is not always the same. The expected course, met by Arm I-AdM, is caused by the interruption of the Arm I-AdM suspension, and at the same time suspension of the sensor holder. The increase in Arm I-CvM at 2 kg load was again caused only by the springing of the sensor holder.
Presented results indicate a statistical significance of differences in positions of the robotic arm endpoint for Arm I-AdM and Arm I-CvM, in position and variability. Since values in the table and the graph are in micrometers, these differences are negligible from a practical point of view. For further investigation, it would be appropriate to plan experimental measurements to better examine whether the differences are random or caused by a factor that has not yet been considered. This was affected by: arm flexibility, loading time, movement time of the robotic arm, sensitivity of the proximity sensor and rigidity of the sensor holder.
Due to the fact that the production documentation of Arm I-AdM and Arm I-CvM contained dimensional tolerances in micrometers and was observed, the position shift could occur during the mutual exchange, disassembly and assembly of Arm I-AdM for Arm I-CvM.
Table 9 shows the results of the F test and the t-test for CoM. It is clear from Table 9 that no match of the mean values is confirmed for any pair. In four cases, for the CoM_0_Zz, CoM_1_Zz, CoM_2_Xx and CoM_2_Yy modes, the agreement of the variances is confirmed. Table 10 shows the results of the F test and the t-test for NcM. It is clear from Table 10 that no agreement of the mean values is confirmed for any pair. In six cases, for the NcM_0_Y, NcM_0_Z, NcM_1_Y, NcM_2_Y, NcM_2_Z and NcM_2_X modes, the variance agreement is confirmed. The test results in Tables 9 and 10 confirm that Arm I-AdM and Arm I-CvM are not interchangeable in this study.
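The pairwise comparison reported in Tables 9 and 10 (an F test for equality of variances followed by a two-sample t-test, with the equal-variance assumption set according to the F test) can be reproduced as follows. The var.equal wording in the text suggests R's var.test/t.test were used; the Python sketch below is an equivalent illustration on hypothetical readings.

```python
import numpy as np
from scipy import stats

def f_test(x, y):
    """Two-sided F-test of H0: var(x)/var(y) = 1 (analogous to R's var.test)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    f = x.var(ddof=1) / y.var(ddof=1)
    dfx, dfy = len(x) - 1, len(y) - 1
    p_one_sided = stats.f.sf(f, dfx, dfy) if f > 1 else stats.f.cdf(f, dfx, dfy)
    return f, min(2 * p_one_sided, 1.0)

def compare_pair(adm, cvm, alpha=0.05):
    """F-test first, then a two-sample t-test with equal_var set accordingly."""
    _, p_var = f_test(adm, cvm)
    equal_var = p_var > alpha                      # homoscedasticity not rejected
    t, p_mean = stats.ttest_ind(adm, cvm, equal_var=equal_var)
    return {"equal_var": equal_var, "p_var": p_var, "t": t, "p_mean": p_mean}

# Hypothetical position readings (mm) for the same mode on the two arm variants.
adm = [608.91, 608.93, 608.92, 608.95, 608.94, 608.93, 608.92, 608.94, 608.93, 608.94, 608.92]
cvm = [608.71, 608.73, 608.72, 608.74, 608.73, 608.72, 608.73, 608.74, 608.72, 608.73, 608.74]
print(compare_pair(adm, cvm))
```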
Conclusions
The proposed methodology for measuring of robotic arm positioning accuracy was verified on its construction consisting of three arms Arms I, II, III, while Arm I was manufactured by two different technologies AdM and CvM. A series of repeated measurements was performed for each robotic arm configuration with two different measurement methods, CoM and NcM. The results of measurements were verified by statistical methods, based on which unsatisfactory values of measurements were excluded from the evaluation.
The use of proposed methodology is not only in the field of metrology and testing, but also to verify the interchangeability of components in construction of robotic systems. The methodology provides a detailed view of a dimensional chain quality of the robotic arm structure and determines conditions for maintaining accuracy during disassembly and reassembly of individual components of robotic system structures.
Statistical evaluation of the results verified that the data obtained were measured correctly and have a normal distribution.
For CoM with the same load of Arm I-AdM and Arm I-CvM: • homoscedasticity was confirmed for: CoM_0_Zz, CoM_1_Zz, CoM_2_Xx and CoM_2_Yy; • the conformity of the mean values has not been confirmed.
This means that for an interchangeable Arm I it is necessary to modify the design of Arm I-AdM.
For NcM with the same load of Arm I-AdM and Arm I-CvM: • homoscedasticity was confirmed for: NcM_0_Y, NcM_1_Y, NcM_2_Y, NcM_0_Z, NcM_2_Z; • the conformity of the mean values for NcM_2_X has been confirmed.
This means that for mutually interchangeable Arm I-AdM, Arm I-CvM, to increase its rigidity, the structural design of the sensor holder must be modified.
Considering the results obtained, the paper follows up the work of Zhang and Wei [26]. The authors addressed the accuracy of robotic arm positioning in their work, too. However, they did it from the point of view of its control system. A similar issue is also covered in the work of Clitan and Ionut [27]. The authors also focus on accuracy of robotic arm positioning. Specifically, they dealt in detail with its payload. Another paper focusing on this issue is by Visan et al. [28]. Even in this case, the problem of robotic arm positioning accuracy was solved in detail. In this case, however, the research was focused on stepper motors that implement the positioning of the robotic arm. It follows from the above examples that the presented research brings a new area of research to the issue of positioning the robotic arm, which needs to be further addressed in order to further expand the knowledge that can be transferred to the application.
Joubair et al. [16] is another paper that dealt with positioning accuracy. The authors took a closer look at the calibration procedure, where a simple low-cost procedure improves the accuracy of surface positioning. This is the issue similar to one presented in the article. The difference is mainly that the authors used a two-arm robot, compared to our researched single-arm robotic arm.
The following contributions also deal with the calibration of the robotic arm. In our paper, the proximity sensor MTN/EP080 Probe was used for NcM. Aoyagi et al. [29] used a laser-tracking system to calibrate the kinematic parameters of the robotic arm, which achieves high positioning accuracy using a genetic algorithm.
Švaco et al. [30] used a contactless method to perform measurements of calibration points in space, using a stereovision system attached to the robot arm. This method is very similar to the NcM for which the MTN/EP080 Probe proximity sensor was used. Points (represented as spheres localized by a stereo system) are projected by the authors as circles in two planes of image capturing, regardless of the angle of view. The positioning error after calibration has been reduced to 1.29 mm.
Peng et al. [31] proposed their own method of geometric parameters' calibration of a kinematic model of a robotic arm, based on monocular vision. Similar to our paper, they used the contactless method, while in measuring accuracy of the robotic arm positioning in our paper, the CoM was also used, in which it was necessary to set the reference-calibration Cartesian coordinate system. Peng et al., to determine the kinematic parameters, first used the classical Denavit-Hartenberg (D-H) modeling method. Subsequently, they implemented nonlinear optimization and parameter compensation. Their method improved the absolute accuracy of positioning the end of the robotic arm while being universal and effective, similar to the CoM method in our paper.
Due to confirmed differences in positions for the same measured axis in different types of arm, we assume that Arm I-AdM, Arm I-CvM or an arm made by different technology may have dimensional deviations. From previous observations and calculations, for the research of other factors influencing the localization of the robotic arm position, these deviations in dimensions need to be eliminated.
Based on the results, we can conclude that production technology does not affect the positioning accuracy of the robotic arm, but the design of Arm I-AdM needs to be changed regardless of the operating load.
To further investigate, it will be appropriate to plan experimental measurements to better examine whether differences in a position of the robot arm are random or caused by loads or other factors that have not been taken into account (arm flexibility, load time, transport time, programmed trajectory of the robot arm etc.). The answer to the question will be known after analysis of accuracy of measured positions for Arm I-AdM and Arm I-CvM, depending on the measurement directions (X, Y, Z) and the load.
In the future, to increase the accuracy of measurement process, it will be necessary to use the proximity sensor MTN/EP080 Probe with higher sensitivity and to increase rigidity of the proximity sensor holder. Furthermore, we will need to ensure the same position of the sensor holder for Arm I-AdM, Arm I-CvM on the robotic arm.
Conflicts of Interest:
The authors declare that they have no conflict of interest.
Notation of the measurement sets (production technology_measurement method_load_measuring axis), for example: CvM_CoM_2_Zz = made by CNC milling, CoM measurement method, load 2 kg, measuring axis Z; AdM_NcM_0_X = made by additive manufacturing, NcM measurement method, load 0 kg, measuring axis X; CvM_NcM_2_Z = made by CNC milling, NcM measurement method, load 2 kg, measuring axis Z.
|
v3-fos-license
|
2017-07-21T16:47:56.000Z
|
2017-07-21T00:00:00.000
|
5013006
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.aclweb.org/anthology/D17-1064.pdf",
"pdf_hash": "db6c2901a61435478fccf5f007094b90609a7dee",
"pdf_src": "ArXiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44478",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"sha1": "43e24832ac3e5081c263d88b1071578b7b9ef4ec",
"year": 2017
}
|
pes2o/s2orc
|
Split and Rephrase
We propose a new sentence simplification task (Split-and-Rephrase) where the aim is to split a complex sentence into a meaning preserving sequence of shorter sentences. Like sentence simplification, splitting-and-rephrasing has the potential of benefiting both natural language processing and societal applications. Because shorter sentences are generally better processed by NLP systems, it could be used as a preprocessing step which facilitates and improves the performance of parsers, semantic role labellers and machine translation systems. It should also be of use for people with reading disabilities because it allows the conversion of longer sentences into shorter ones. This paper makes two contributions towards this new task. First, we create and make available a benchmark consisting of 1,066,115 tuples mapping a single complex sentence to a sequence of sentences expressing the same meaning. Second, we propose five models (vanilla sequence-to-sequence to semantically-motivated models) to understand the difficulty of the proposed task.
Introduction
Several sentence rewriting operations have been extensively discussed in the literature: sentence compression, multi-sentence fusion, sentence paraphrasing and sentence simplification.
We propose a new sentence simplification task, which we dub Split-and-Rephrase, where the goal is to split a complex input sentence into shorter sentences while preserving meaning. In that task, the emphasis is on sentence splitting and rephrasing. There is no deletion and no lexical or phrasal simplification, but the systems must learn to split complex sentences into shorter ones and to make the syntactic transformations required by the split (e.g., turn a relative clause into a main clause). Table 1 summarises the similarities and differences between the five sentence rewriting tasks.
Like sentence simplification, splitting-and-rephrasing could benefit both natural language processing and societal applications. Because shorter sentences are generally better processed by NLP systems, it could be used as a preprocessing step which facilitates and improves the performance of parsers (Tomita, 1985; Chandrasekar and Srinivas, 1997; McDonald and Nivre, 2011; Jelínek, 2014), semantic role labelers (Vickrey and Koller, 2008) and statistical machine translation (SMT) systems (Chandrasekar et al., 1996).
In addition, because it allows the conversion of longer sentences into shorter ones, it should also be of use for people with reading disabilities (Inui et al., 2003) such as aphasia patients (Carroll et al., 1999), low-literacy readers (Watanabe et al., 2009), language learners (Siddharthan, 2002) and children (De Belder and Moens, 2010).
Contributions. We make two main contributions towards the development of Split-and-Rephrase systems.

Our first contribution consists in creating and making available a benchmark for training and testing Split-and-Rephrase systems. This benchmark (WEBSPLIT) differs from the corpora used to train sentence paraphrasing, simplification, compression or fusion models in three main ways.

First, it contains a high number of splits and rephrasings. This is because (i) each complex sentence is mapped to a rephrasing consisting of at least two sentences and (ii) as noted above, splitting a sentence into two usually imposes a syntactic rephrasing (e.g., transforming a relative clause or a subordinate into a main clause).

Second, the corpus has a vocabulary of 3,311 word forms for a little over 1 million training items, which reduces sparse data issues and facilitates learning. This is in stark contrast to the relatively small corpora with very large vocabularies used for simplification (cf. Section 2).

Third, complex sentences and their rephrasings are systematically associated with a meaning representation which can be used to guide learning. This allows for the learning of semantically-informed models (cf. Section 5).

Our second contribution is to provide five models to understand the difficulty of the proposed Split-and-Rephrase task: (i) a basic encoder-decoder taking as input only the complex sentence; (ii) a hybrid probabilistic-SMT model taking as input a deep semantic representation (Discourse Representation Structures, Kamp 1981) of the complex sentence produced by Boxer (Curran et al., 2007); (iii) a multi-source encoder-decoder taking as input both the complex sentence and the corresponding set of RDF (Resource Description Format) triples; (iv, v) two partition-and-generate approaches which first partition the semantics (set of RDF triples) of the complex sentence into smaller units and then generate a text for each RDF subset in that partition. One model is multi-source and takes the input complex sentence into account when generating, while the other does not.
Related Work
We briefly review previous work on sentence splitting and rephrasing.
Sentence Splitting. Of the four sentence rewriting tasks (paraphrasing, fusion, compression and simplification) mentioned above, only sentence simplification involves sentence splitting. Most simplification methods learn a statistical model (Zhu et al., 2010; Coster and Kauchak, 2011; Woodsend and Lapata, 2011; Wubben et al., 2012; Narayan and Gardent, 2014) from the parallel dataset of complex-simplified sentences derived by Zhu et al. (2010) from Simple English Wikipedia and the traditional one.

For training Split-and-Rephrase models, this dataset is arguably ill suited as it consists of 108,016 complex and 114,924 simplified sentences, thereby yielding an average number of simple sentences per complex sentence of 1.06. Indeed, Narayan and Gardent (2014) report that only 6.1% of the complex sentences are in fact split in the corresponding simplification. A more detailed evaluation of the dataset by Xu et al. (2015) further shows that (i) for a large number of pairs, the simplifications are in fact not simpler than the input sentence, (ii) automatic alignments resulted in incorrect complex-simplified pairs and (iii) models trained on this dataset generalised poorly to other text genres. Xu et al. (2015) therefore propose a new dataset, Newsela, which consists of 1,130 news articles each rewritten in four different ways to match 5 different levels of simplicity. By pairing each sentence in that dataset with the corresponding sentences from simpler levels (and ignoring pairs of contiguous levels to avoid sentence pairs that are too similar to each other), it is possible to create a corpus consisting of 96,414 distinct complex and 97,135 simplified sentences. Here again, however, the proportion of splits is very low.

As we shall see in Section 3.3, the new dataset we propose differs from both the Newsela and the Wikipedia simplification corpus in that it contains a high number of splits. On average, this new dataset associates 4.99 simple sentences with each complex sentence.
Rephrasing. Sentence compression, sentence fusion, sentence paraphrasing and sentence simplification all involve rephrasing.
Paraphrasing approaches include bootstrapping approaches which start from slotted templates (e.g., "X is the author of Y") and seeds (e.g., "X = Jack Kerouac, Y = 'On the Road'") to iteratively learn new templates from the seeds and new seeds from the new templates (Ravichandran and Hovy, 2002; Duclaye et al., 2003); systems which extract paraphrase patterns from large monolingual corpora and use them to rewrite an input text (Duboue and Chu-Carroll, 2006; Narayan et al., 2016); statistical machine translation (SMT) based systems which learn paraphrases from monolingual parallel (Barzilay and McKeown, 2001; Zhao et al., 2008), comparable (Quirk et al., 2004) or bilingual parallel (Bannard and Callison-Burch, 2005; Ganitkevitch et al., 2011) corpora; and a recent neural machine translation (NMT) based system which learns paraphrases from bilingual parallel corpora (Mallinson et al., 2017).

Most sentence compression approaches focus on deleting words (the words appearing in the compression are words occurring in the input) and therefore only perform limited paraphrasing. As noted by Pitler (2010) and Toutanova et al. (2016), however, the ability to paraphrase is key for the development of abstractive summarisation systems since summaries written by humans often rephrase the original content using paraphrases or synonyms or alternative syntactic constructions. Recent proposals by Rush et al. (2015) and Bingel and Søgaard (2016) address this issue. Rush et al. (2015) proposed a neural model for abstractive compression and summarisation, and Bingel and Søgaard (2016) proposed a structured approach to text simplification which jointly predicts possible compressions and paraphrases.
None of these approaches requires that the input be split into shorter sentences so that both the corpora used, and the models learned, fail to adequately account for the various types of specific rephrasings occurring when a complex sentence is split into several shorter sentences.
Finally, sentence fusion does induce rephrasing as one sentence is produced out of several. However, research in that field is still hampered by the small size of datasets for the task, and the difficulty of generating one (Daume III and Marcu, 2004). Thus, the dataset of Thadani and McKeown (2013) only consists of 1,858 fusion instances, of which 873 have two inputs, 569 have three and 416 have four. This is arguably not enough for learning a general Split-and-Rephrase model.

In sum, while work on sentence rewriting has made some contributions towards learning to split and/or to rephrase, the interaction between these two subtasks has never been extensively studied, nor are there any corpora available that would support the development of models that can both split and rephrase. In what follows, we introduce such a benchmark and present some baseline models which provide some interesting insights on how to address the Split-and-Rephrase problem.
We derive a Split-and-Rephrase dataset from the WEBNLG corpus presented in Gardent et al. (2017).
The WEBNLG Dataset
In the WEBNLG dataset, each item consists of a set of RDF triples (M) and one or more texts (T_i) verbalising those triples.

An RDF (Resource Description Format) triple is a triple of the form subject|property|object where the subject is a URI (Uniform Resource Identifier), the property is a binary relation and the object is either a URI or a literal value such as a string, a date or a number. In what follows, we refer to the set of triples representing the meaning of a text as its meaning representation (MR). Figure 1 shows three example WEBNLG items with M_1, M_2, M_3 the sets of RDF triples representing the meaning of each item, and {T_1^1, T_1^2}, {T_2} and {T_3} listing possible verbalisations of these meanings.

The WEBNLG dataset consists of 13,308 MR-Text pairs, 7,049 distinct MRs, 1,482 RDF entities and 8 DBpedia categories (Airport, Astronaut, Building, Food, Monument, SportsTeam, University, WrittenWork). The number of RDF triples in MRs varies from 1 to 7. The number of distinct RDF tree shapes in MRs is 60.
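The structure of a WEBNLG item (a meaning representation made of RDF triples plus one or more verbalisations) can be mirrored in a small data structure. The sketch below is illustrative only: the triples and the verbalisation are invented stand-ins, not taken from the actual corpus.

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List, Tuple

Triple = Tuple[str, str, str]          # (subject, property, object)

@dataclass
class WebNLGItem:
    mr: FrozenSet[Triple]                                # meaning representation: a set of RDF triples
    texts: List[str] = field(default_factory=list)       # alternative verbalisations of that MR

item = WebNLGItem(
    mr=frozenset({
        ("John_Blaha", "birthDate", "1942-08-26"),       # illustrative triples only
        ("John_Blaha", "occupation", "Fighter_pilot"),
    }),
    texts=["John Blaha, born on 1942-08-26, served as a fighter pilot."],
)
print(len(item.mr), "triples,", len(item.texts), "verbalisation(s)")
```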
Creating the WEBSPLIT Dataset
To construct the Split-and-Rephrase dataset, we make use of the fact that the WEBNLG dataset (i) associates texts with sets of RDF triples and (ii) contains texts of different lengths and complexity corresponding to different subsets of RDF triples. The idea is the following. Given a WEBNLG MR-Text pair of the form (M, T) where T is a single complex sentence, we search the WEBNLG dataset for a set {(M_1, T_1), . . ., (M_n, T_n)} such that {M_1, . . ., M_n} is a partition of M and T_1, . . ., T_n forms a text with more than one sentence. To achieve this, we proceed in three main steps as follows.
Sentence segmentation We first preprocess all 13,308 distinct verbalisations contained in the WEBNLG corpus using the Stanford CoreNLP pipeline (Manning et al., 2014) to segment each verbalisation T i into sentences.
Sentence segmentation allows us to associate each text T in the WEBNLG corpus with the number of sentences it contains.This is needed to identify complex sentences with no split (the input to the Split-and-Rephrase task) and to know how many sentences are associated with a given set of RDF triples (e.g., 2 triples may be realised by a single sentence or by two).As the CoreNLP sentence segmentation often fails on complex/rare named entities thereby producing unwarranted splits, we verified the sentence segmentations produced by the CoreNLP sentence segmentation module for each WEBNLG verbalisation and manually corrected the incorrect ones.
Pairing. Using the semantic information given by the WEBNLG RDF triples and the information about the number of sentences present in a WEBNLG text produced by the sentence segmentation step, we produce all items of the form (M_C, C), {(M_1, T_1), . . ., (M_n, T_n)} where:
• C is a single sentence with semantics M_C.
• T_1 . . . T_n is a sequence of texts that contains at least two sentences.
• The disjoint union of the semantics M_1 . . . M_n of the texts T_1 . . . T_n is the same as the semantics M_C of the complex sentence C, that is, M_C = M_1 ⊎ . . . ⊎ M_n.
This pairing is made easy by the semantic information contained in the WEBNLG corpus and includes two subprocesses depending on whether the complex and split sentences come from the same WEBNLG entry or not.

Within entries. Given a set of RDF triples M_C, a WEBNLG entry will usually contain several alternative verbalisations for M_C (e.g., T_1^1 and T_1^2 in Figure 1 are two possible verbalisations of M_1). We first search for entries where one verbalisation T_C consists of a single sentence and another verbalisation T contains more than one sentence. For such cases, we create an entry of the form (M_C, T_C), {(M_C, T)} such that T_C is a single sentence and T is a text consisting of more than one sentence. The second example item for WEBSPLIT in Figure 1 is a WEBSPLIT item associating the complex sentence (T_1^1) with a text (T_1^2) made of three short sentences.
Across entries.
Next we create (M, C), {(M_1, T_1), . . ., (M_n, T_n)} entries by searching for all WEBNLG texts C consisting of a single sentence. For each such text, we create all possible partitions of its semantics M_C and, for each partition, we search the WEBNLG corpus for matching entries, i.e., for a set S of (M_i, T_i) pairs such that (i) the disjoint union of the semantics M_i in S is equal to M_C and (ii) the resulting set of texts contains more than one sentence. The first example item for WEBSPLIT in Figure 1 is a case in point. C (= T_1^1) is the single, complex sentence whose meaning is represented by the three triples M. T_2, T_3 is the sequence of shorter texts C is mapped to. And the semantics M_2 and M_3 of these two texts form a partition over M.
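The across-entries pairing boils down to enumerating the partitions of a complex sentence's triple set and keeping those partitions whose blocks are all verbalised somewhere in the corpus. A simplified sketch of that search is shown below; it assumes a hypothetical corpus index mapping frozensets of triples to lists of verbalisations, and it picks a single verbalisation per block rather than producing all combinations as the full benchmark construction does.

```python
def partitions(items):
    """Yield every partition of a list of items; each partition is a list of blocks."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        # Put `first` into each existing block in turn...
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        # ...or give it a block of its own.
        yield [[first]] + smaller

def split_candidates(complex_mr, complex_sentence, corpus_index):
    """corpus_index: hypothetical dict mapping frozenset-of-triples -> list of verbalisations."""
    results = []
    for part in partitions(sorted(complex_mr)):
        if len(part) < 2:                       # a single block cannot yield a split
            continue
        blocks = [frozenset(b) for b in part]
        if all(b in corpus_index for b in blocks):
            texts = [corpus_index[b][0] for b in blocks]   # one verbalisation per block
            results.append((complex_sentence, texts))
    return results
```

The ordering of the blocks, described next, is not handled in this sketch.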
For each entry produced in the preceding step, we determine an order on T_1 . . . T_n as follows. We observed that the WEBNLG texts mostly follow the order in which the RDF triples are presented. Since this order corresponds to a left-to-right depth-first traversal of the RDF tree, we use this order to order the sentences in the T_i texts.
Results
By applying the above procedure to the WEBNLG dataset, we create 1,100,166 pairs of the form (M_C, T_C), {(M_1, T_1), . . ., (M_n, T_n)} where T_C is a complex sentence and T_1 . . . T_n is a sequence of texts with semantics M_1, . . ., M_n expressing the same content M_C as T_C. 1,945 of these pairs were of type "Within entries" and the rest were of type "Across entries". In total, there are 1,066,115 distinct T_C, T_1 . . . T_n pairs with 5,546 distinct complex sentences. Complex sentences are associated with 192.23 rephrasings on average (min: 1, max: 76,283, median: 16). The number of sentences in the rephrasings varies between 2 and 7, with an average of 4.99. The vocabulary size is 3,311.
The Split-and-Rephrase task can be defined as follows. Given a complex sentence C, the aim is to produce a simplified text T consisting of a sequence of texts T_1 . . . T_n such that T forms a text of at least two sentences and the meaning of C is preserved in T. In this paper, we propose to approach this problem in a supervised setting where we aim to maximise the likelihood of T given C and model parameters θ: P(T|C; θ) (Eqn. 1). To exploit the different levels of information present in the WEBSPLIT benchmark, we break the problem down in the following ways: conditioning on the input semantics, P(T|C; M_C; θ) (Eqn. 2), and decomposing the task into splitting and generating, P(T|C; M_C; θ) = P(M_1, . . ., M_n|C; M_C; θ) · P(T|C; M_1, . . ., M_n; θ) (Eqn. 3), where M_C is the meaning representation of C and M_{1−n} is a set {M_1, . . ., M_n} which partitions M_C.
Split-and-Rephrase Models
In this section, we propose five different models which aim to maximise P (T |C; θ) by exploiting different levels of information in the WEBSPLIT benchmark.
A Probabilistic, Semantic-Based Approach
Narayan and Gardent (2014) describe a sentence simplification approach which combines a probabilistic model for splitting and deletion with a phrase-based statistical machine translation (SMT) system and a language model for rephrasing (reordering and substituting words). In particular, the splitting and deletion components exploit the deep meaning representation (a Discourse Representation Structure, DRS) of a complex sentence produced by Boxer (Curran et al., 2007).
Based on this approach, we create a Split-and-Rephrase model (aka HYBRIDSIMPL) by (i) including only the splitting and the SMT models (we do not learn deletion) and (ii) training the model on the WEBSPLIT corpus.
A Basic Sequence-to-Sequence Approach
Sequence-to-sequence models (also referred to as encoder-decoder) have been successfully applied to various sentence rewriting tasks such as machine translation (Sutskever et al., 2011; Bahdanau et al., 2014), abstractive summarisation (Rush et al., 2015) and response generation (Shang et al., 2015). They first use a recurrent neural network (RNN) to convert a source sequence to a dense, fixed-length vector representation (encoder). They then use another recurrent network (decoder) to convert that vector to a target sequence.
We use a three-layered encoder-decoder model with LSTM (Long Short-Term Memory; Hochreiter and Schmidhuber, 1997) units for the Split-and-Rephrase task. Our decoder also uses the local-p attention model with feed input as in Luong et al. (2015). It has been shown that the local attention model works better than the standard global attention model of Bahdanau et al. (2014). We train this model (SEQ2SEQ) to predict, given a complex sentence, the corresponding sequence of shorter sentences.
The SEQ2SEQ model is learned on pairs C, T of complex sentences and the corresponding text.It directly optimises P (T |C; θ) and does not take advantage of the semantic information available in the WEBSPLIT benchmark.
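The following is a minimal sketch of such an encoder-decoder in PyTorch. It is not the system actually used for the experiments (those rely on Zoph and Knight's implementation with local-p attention and input feeding); attention is omitted here and the hyper-parameters are placeholders.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal LSTM encoder-decoder; attention is omitted for brevity."""
    def __init__(self, vocab_size, emb_dim=500, hidden=500, layers=3, dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, layers, dropout=dropout, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden, layers, dropout=dropout, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Encode the complex sentence C into the final LSTM states.
        _, state = self.encoder(self.embed(src_ids))
        # Decode the rephrasing T conditioned on those states
        # (teacher forcing: the gold previous token is fed at each step).
        dec_out, _ = self.decoder(self.embed(tgt_ids), state)
        return self.out(dec_out)  # logits over the target vocabulary

# Training maximises P(T|C; θ) via cross-entropy between these logits and T.
```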
A Multi-Source Sequence-to-Sequence Approach
In this model, we learn a multi-source model which takes into account not only the input complex sentence but also the associated set of RDF triples available in the WEBSPLIT dataset.That is, we maximise P (T |C; M C ; θ) (Eqn.2) and learn a model to predict, given a complex sentence C and its semantics M C , a rephrasing of C.
As noted by Gardent et al. (2017), the shape of the input may impact the syntactic structure of the corresponding text. For instance, an input containing a path (X | P1 | Y)(Y | P2 | Z), equating the object of a property P1 with the subject of a property P2, may favour a verbalisation containing a subject relative ("x V1 y who V2 z"). Taking into account not only the sentence C that needs to be rephrased but also its semantics M_C may therefore help learning.
We model P(T|C; M_C; θ) using a multi-source sequence-to-sequence neural framework (we refer to this model as MULTISEQ2SEQ). The core idea comes from Zoph and Knight (2016) who show that a multi-source model trained on trilingual translation pairs ((f, g), h) outperforms several strong single-source baselines. We explore a similar "trilingual" setting where f is a complex sentence (C), g is the corresponding set of RDF triples (M_C) and h is the output rephrasing (T).
We encode C and M_C using two separate RNN encoders. To encode M_C using an RNN, we first linearise M_C by doing a depth-first left-to-right RDF tree traversal and then tokenise it using the Stanford CoreNLP pipeline (Manning et al., 2014). As in SEQ2SEQ, we model our decoder with the local-p attention model with feed input as in Luong et al. (2015), but now it looks at both source encoders simultaneously by creating a separate context vector for each encoder. For a detailed explanation of multi-source encoder-decoders, we refer the reader to Zoph and Knight (2016).
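The sketch below illustrates the idea of one context vector per source encoder. It uses plain dot-product attention rather than the local-p attention with input feeding used in the experiments, so it is a simplification for exposition only.

```python
import torch
import torch.nn as nn

class MultiSourceAttention(nn.Module):
    """One decoder step attending over two encoders (complex sentence C and
    linearised RDF triples M_C), with a separate context vector per source."""
    def __init__(self, hidden=500):
        super().__init__()
        self.combine = nn.Linear(3 * hidden, hidden)  # [decoder state; ctx_C; ctx_M]

    def attend(self, dec_state, enc_outputs):
        # Dot-product attention producing one context vector for one encoder.
        scores = torch.bmm(enc_outputs, dec_state.unsqueeze(2))   # (B, L, 1)
        weights = torch.softmax(scores, dim=1)
        return (weights * enc_outputs).sum(dim=1)                 # (B, H)

    def forward(self, dec_state, enc_C, enc_M):
        ctx_C = self.attend(dec_state, enc_C)  # context from the sentence encoder
        ctx_M = self.attend(dec_state, enc_M)  # context from the RDF encoder
        combined = torch.cat([dec_state, ctx_C, ctx_M], dim=-1)
        return torch.tanh(self.combine(combined))
```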
Partitioning and Generating
As the name suggests, the Split-and-Rephrase task can be seen as a task which consists of two subtasks: (i) splitting a complex sentence into several shorter sentences and (ii) rephrasing the input sentence to fit the new sentence distribution. We consider an approach which explicitly models these two steps (Eqn. 3). A first model P(M_1, . . ., M_n|C; M_C; θ) learns to partition a set M_C of RDF triples associated with a complex sentence C into a disjoint set {M_1, . . ., M_n} of sets of RDF triples. Next, we generate a rephrasing of C as follows: P(T|C; M_1, . . ., M_n; θ) ≈ ∏_i P(T_i|C; M_i; θ) (Eqn. 4) ≈ ∏_i P(T_i|M_i; θ) (Eqn. 5), where the approximation from Eqn. 4 to Eqn. 5 derives from the assumption that the generation of each short text T_i depends only on its own semantics M_i. We propose a pipeline model to learn the parameters θ. We first learn to split and then learn to generate from each RDF subset generated by the split.
Learning to split. For the first step, we learn a probabilistic model which, given a set of RDF triples M_C, predicts a partition M_1 . . . M_n of this set. For a given M_C, it returns the partition M_1 . . . M_n with the highest probability. We learn this split module using items
Experimental Setup and Results
This section describes our experimental setup and results.We also describe the implementation details to facilitate the replication of our results.
Training, Validation and Test sets
To ensure that complex sentences in the validation and test sets are not seen during training, we split the 5,546 distinct complex sentences in the WEBSPLIT data into three subsets: Training set (4,438, 80%), Validation set (554, 10%) and Test set (554, 10%).
Table 2 shows, for each of the 5 models, a summary of the task and the size of the training corpus. For the models that directly learn to map a complex sentence into a meaning-preserving sequence of at least two sentences (HYBRIDSIMPL, SEQ2SEQ and MULTISEQ2SEQ), the training set consists of 886,857 (C, T) pairs with C a complex sentence and T the corresponding text. In contrast, for the pipeline models which first partition the input and then generate from RDF data (SPLIT-MULTISEQ2SEQ and SPLIT-SEQ2SEQ), the training corpus for learning to partition consists of 13,051 (M_C, M_1 . . . M_n) pairs while the training corpus for learning to generate contains 53,470 (M_i, T_i) pairs.
Implementation Details
For all our neural models, we train RNNs with three-layered LSTM units, 500 hidden states and a regularisation dropout with probability 0.8. All LSTM parameters were randomly initialised over a uniform distribution within [-0.05, 0.05]. We trained our models with stochastic gradient descent with an initial learning rate of 0.5. Every time the perplexity on the held-out validation set increased relative to the previous check, we multiplied the current learning rate by 0.5. We performed mini-batch training with a batch size of 64 sentences for SEQ2SEQ and MULTISEQ2SEQ, and 32 for SPLIT-SEQ2SEQ and SPLIT-MULTISEQ2SEQ. As the vocabulary size of the WEBSPLIT data is small, we train both encoder and decoder with the full vocabulary. We randomly initialise word embeddings at the beginning and let the model train them during training. We train our models for 20 epochs and keep the best model on the held-out set for testing purposes. We used the system of Zoph and Knight (2016) to train both the simple sequence-to-sequence and the multi-source sequence-to-sequence models, and the system of Narayan and Gardent (2014) to train HYBRIDSIMPL.
Table 3: Average BLEU scores for rephrasings, average number of sentences in the output texts (#S/C) and average number of tokens per output sentence (#Tokens/S). SOURCE are the complex sentences from the WEBSPLIT corpus.
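A minimal sketch of the learning-rate schedule described above (halving whenever held-out perplexity increases); the function name and the example perplexity values are made up for illustration.

```python
def update_learning_rate(lr, val_perplexities):
    """Halve the learning rate whenever held-out perplexity stops improving."""
    if len(val_perplexities) >= 2 and val_perplexities[-1] > val_perplexities[-2]:
        return lr * 0.5
    return lr

# Example: perplexity increased between the last two checks, so lr is halved.
lr = 0.5                                             # initial learning rate
lr = update_learning_rate(lr, [12.4, 11.9, 12.1])    # -> 0.25
```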
Results
We evaluate all models using multi-reference BLEU-4 scores (Papineni et al., 2002) based on all the rephrasings present in the Split-and-Rephrase corpus for each complex input sentence. As BLEU is a metric for n-gram precision estimation, it is not an optimal metric for the Split-and-Rephrase task (sentences even without any split could have a high BLEU score). We therefore also report the average number of output simple sentences per complex sentence and the average number of output words per output simple sentence. The first measures the ability of a system to split a complex sentence into multiple simple sentences and the second measures the ability to produce shorter simple sentences.
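The sketch below shows one way these three numbers could be computed with NLTK's multi-reference corpus BLEU; the whitespace tokenisation and '.'-based sentence splitting are simplifying assumptions, not the evaluation script used for the reported results.

```python
from nltk.translate.bleu_score import corpus_bleu

def evaluate(outputs, references):
    """outputs: list of system texts; references: list of lists of reference
    rephrasings (all rephrasings available for each complex input sentence)."""
    hyps = [out.split() for out in outputs]
    refs = [[r.split() for r in ref_set] for ref_set in references]
    bleu = corpus_bleu(refs, hyps)  # multi-reference BLEU-4 by default

    split_outputs = [[s for s in out.split('.') if s.strip()] for out in outputs]
    n_sents = sum(len(ss) for ss in split_outputs)
    sents_per_output = n_sents / len(outputs)                       # #S/C
    tokens_per_sent = sum(len(s.split()) for ss in split_outputs
                          for s in ss) / max(1, n_sents)            # #Tokens/S
    return bleu, sents_per_output, tokens_per_sent
```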
Table 3 shows the results. The high BLEU score for complex sentences (SOURCE) from the WEBSPLIT corpus shows that using BLEU is not sufficient to evaluate splitting and rephrasing. Because the short sentences have many n-grams in common with the source, the BLEU score for complex sentences is high, but the texts are made of a single sentence and the average sentence length is high. HYBRIDSIMPL performs poorly; we conjecture that this is linked to a decrease in semantic parsing quality (DRSs) resulting from complex named entities not being adequately recognised. The simple sequence-to-sequence model does not perform very well, and neither does the multi-source model trained on both complex sentences and their semantics. Typically, these two models often produce non-meaning-preserving outputs (see the example in Table 4) for inputs of longer length. In contrast, the two partition-and-generate models outperform all other models by a wide margin. This suggests that the ability to split is key to a good rephrasing: by first splitting the input semantics into smaller chunks, the two partition-and-generate models permit reducing a complex task (generating a sequence of sentences from a single complex sentence) to a series of simpler tasks (generating a short sentence from a semantic input). Unlike in the neural machine translation setting, multi-source models in our setting do not perform very well. SEQ2SEQ and SPLIT-SEQ2SEQ outperform MULTISEQ2SEQ and SPLIT-MULTISEQ2SEQ respectively, despite using less input information than their counterparts. The multi-source models used in machine translation have as a multi-source two translations of the same content (Zoph and Knight, 2016). In our approach, the multi-source is a complex sentence and a set of RDF triples, e.g., (C; M_C) for MULTISEQ2SEQ and (C; M_i) for SPLIT-MULTISEQ2SEQ. We conjecture that the poor performance of multi-source models in our case is due either to the relatively small size of the training data or to a stronger mismatch between RDF and complex sentences than between two translations. Table 4 shows an example output for all 5 systems, highlighting the main differences. HYBRIDSIMPL's output mostly reuses the input words, suggesting that the SMT system doing the rewriting has limited impact. Both the SEQ2SEQ and the MULTISEQ2SEQ models "hallucinate" new information ("served as a test pilot", "born on Nov 18, 1923"). In contrast, the partition-and-generate models correctly render the meaning of the input sentence (SOURCE), perform interesting rephrasings ("X was born in Y" → "X's birth place was Y") and split the input sentence into two.
Conclusion
We have proposed a new sentence simplification task which we call "Split-and-Rephrase". We have constructed a new corpus for this task which is built from readily-available data used for NLG (Natural Language Generation) evaluation. Initial experiments indicate that the ability to split is a key factor in generating fluent and meaning-preserving rephrasings because it permits reducing a complex generation task (generating a text consisting of at least two sentences) to a series of simpler tasks (generating short sentences). In future work, it would be interesting to see whether, and if so how, sentence splitting can be learned in the absence of explicit semantic information in the input.
Another direction for future work concerns the exploitation of the extended WebNLG corpus. While the results presented in this paper use a version of the WebNLG corpus consisting of 13,308 MR-Text pairs, 7,049 distinct MRs and 8 DBpedia categories, the current WebNLG corpus encompasses 43,056 MR-Text pairs, 16,138 distinct MRs and 15 DBpedia categories. We plan to exploit this extended corpus to make available a correspondingly extended WEBSPLIT corpus, to learn optimised Split-and-Rephrase models and to explore sentence fusion (converting a sequence of sentences into a single complex sentence).
Figure 1: Example entries from the WEBNLG benchmark and their pairing to form entries in the WEBSPLIT benchmark.
John Clancy is a labour politician who leads Birmingham, where architect John Madin, who designed 103 Colmore Row, was born. Labour politician John Clancy is the leader of Birmingham. John Madin was born in this city. He was the architect of 103 Colmore Row.
Table 2: Tasks modelled and training data used by Split-and-Rephrase models.
in the WEBSPLIT dataset by simply computing the probability P(M_1, . . ., M_n|M_C). To make our model robust to an unseen M_C, we strip off named entities and properties from each RDF triple and only keep the tree skeleton of M_C. There are only 60 distinct RDF tree skeletons, 1,183 possible split patterns and 19.72 split candidates on average for each tree skeleton in the WEBSPLIT dataset. Learning to generate. For the second step, (i) we learn a multi-source sequence-to-sequence model which generates a text T_i given a complex sentence C and a set of RDF triples M_i ∈ M_C; and (ii) we approximate P(T_i|C; M_i; θ) by P(T_i|M_i; θ) and learn a simple sequence-to-sequence model which, given M_i, generates a text T_i. Note that, as described earlier, the M_i's are linearised and tokenised before we input them to the RNN encoders. We refer to the first model as SPLIT-MULTISEQ2SEQ and to the second model as SPLIT-SEQ2SEQ.
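A minimal counting-based sketch of the split model described above: skeletons and split patterns are estimated by relative frequency over the training items. The triple representation and the skeleton canonicalisation used here are simplified placeholders, not the exact procedure used in the paper.

```python
from collections import Counter, defaultdict

def skeleton(triples):
    """Strip named entities and properties from (subject, property, object)
    triples, keeping only the tree shape as re-indexed (subject, object) links."""
    ids = {}
    def idx(x):
        return ids.setdefault(x, len(ids))
    return tuple(sorted((idx(s), idx(o)) for s, _, o in sorted(triples)))

# For every skeleton seen in training, count how often each split pattern occurs.
split_counts = defaultdict(Counter)

def observe(m_c, partition):
    """partition: tuple of sets of triples (M_1 ... M_n) from a training item."""
    pattern = tuple(sorted(skeleton(m_i) for m_i in partition))
    split_counts[skeleton(m_c)][pattern] += 1

def best_split(m_c):
    """Return the most probable split pattern for M_C and its probability."""
    counts = split_counts[skeleton(m_c)]
    if not counts:
        return None, 0.0
    pattern, count = counts.most_common(1)[0]
    return pattern, count / sum(counts.values())
```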
SOURCE: Alan Shepard was born in New Hampshire and he served as the Chief of the Astronaut Office.
HYBRIDSIMPL: Alan Shepard was born in New Hampshire and he served as of the the chief astronaut office.
SEQ2SEQ: Alan Shepard's occupation was a test pilot. Alan Shepard was born in New Hampshire. Alan Shepard was born on Nov 18, 1923.
MULTISEQ2SEQ: Alan Shepard served as a test pilot. Alan Shepard's birth place was New Hampshire.
SPLIT-MULTISEQ2SEQ: Alan Shepard served as the Chief of the Astronaut Office. Alan Shepard was born in New Hampshire.
SPLIT-SEQ2SEQ: Alan Shepard served as the Chief of the Astronaut Office. Alan Shepard's birth place was New Hampshire.
Table 4: Example outputs from different models.
|
v3-fos-license
|
2018-08-09T01:00:48.991Z
|
2015-02-01T00:00:00.000
|
51963060
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.journals.aiac.org.au/index.php/alls/article/download/622/547",
"pdf_hash": "d00d02bf3db85c9702becbd0ec99511c67c07090",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44480",
"s2fieldsofstudy": [
"Education"
],
"sha1": "d00d02bf3db85c9702becbd0ec99511c67c07090",
"year": 2015
}
|
pes2o/s2orc
|
Analyzing EFL Teachers' Initial Job Motivation and Factors Affecting Their Motivation in Fezalar Educational Institutions in Iraq
Teacher motivation is one of the primary variables of students’ high performance. It is experienced that students whose teachers are highly motivated are more engaged in the learning process. Therefore, it’s mostly the teacher who determines the level of success or failure in achieving institution’s goal in the educational process. Thus, teachers are expected to demonstrate a high job motivation performance by administrations. However, some teachers seem naturally enthusiastic about teaching while others need to be stimulated, inspired and challenged. There are several factors that provide teachers with necessary motivation driven by which they can work effectively. These factors can be emotional, financial, physical or academic. This study is an attempt to find out what motivates teachers to enter this profession, since the reasons of entering this job has significant influence on their commitment to the job, investigate factors which are responsible for high or low motivation of language teachers in Fezalar Educational Institutions (FEI), which is a Turkish private institution that operates in Iraq, and ascertain the degree to which intrinsic and extrinsic motivational factors impact teachers in their work situation. Based on the review of the recent researches of motivation, in general, and of language teacher motivation, in particular, and relying on the qualitative and quantitative study of the issue, a detailed analysis of some aspects of foreign language teacher motivation is presented in the article.
Introduction
Human resources are the most significant and the most expensive value for any organization.As for the educational organization, teacher's role is vital to the success of the most educational institution.It's mostly the teacher who determines the level of success or failure in achieving institution's goal in the educational process.It's the teacher who gives the institution its credibility and determines its character (Wicke, 1964).The teacher is considered to be a vehicle that presents subjects to the students and he\she has the greatest impact on the lives of the students.Moreover, the teacher is a major role model in the lives of the students who idealize them and through whom they develop their worldview.
Teachers are expected to demonstrate a high job performance by administrations.Thus, it is challenging to be a teacher in any school or university nowadays!Even the best teachers reach moments of frustration or burnout.Perhaps as a result of these circumstances many good teachers leave teaching in the first three years (Frase 1992).Obviously, administrators must find ways to keep teachers in the profession and keep them motivated.Some teachers seem naturally enthusiastic about teaching while others need to be stimulated, inspired and challenged.Many educational institutions organize different kinds of social and academic events to motivate teachers and encourage them to become self-motivated.There are several factors that provide teachers with necessary motivation driven by which they can work effectively.Those factors can be emotional, financial, physical or academic.Many educators are unanimous that teachers are primarily motivated intrinsically by a sense of accomplishment, self-respect and responsibility.However, for maintaining teachers' enthusiasm extrinsic factors should also be taken into account by the administrations.This study is an attempt to find out factors which are responsible for high or low motivation of language teachers in Fezalar Educational Institution (FEI), which is a Turkish private institution that operates in Iraq, and ascertain the degree to which intrinsic and extrinsic motivational factors impact teachers in their work situation.The main objective of this paper is to identify the motivating and demotivating factors and thus suggest possible recommendations to all persons interested to help improve the motivational level of teachers in FEI.This study aims to investigate two research questions: 1.The reason that motivated EFL teachers to enter this job; 2. To find out to which level they are motivated and investigate motivating factors.
The study is highly significant of its kind since it may help administrative bodies, principals, educators and parents plan how to work for the improvement of teacher motivation by enhancing the positive factors found in this study.
Definition and Theories of Motivation
Scholars have approached the concept of motivation from different perspectives.The term motivation is complex and difficult to define; therefore a precise definition of this concept is elusive as the notion comprises the characteristics of an individual and a situation as well as the perception of that situation by an individual (Ifinedo 2005, Rosenfield andWilson 1999).Golombiewski (1973, p.597) discusses motivation as the degree of readiness of an organization to pursue some designated goal and implies the determination of the nature and locus of forces inducing the degree of readiness.To Gibson, Ivancevich and Donnelly (2000) motivation is a word used to describe forces acting on or within a person to initiate and guide behavior . Dessler, (2001) defines motivation as the intensity of the person's desire to engage in some activity.Harmer (2001:51) refers to motivation as "some kind of internal drive which pushes someone to do things in order to achieve something".From the definitions above, it can be summarized that motivation is something that starts, moves, directs, energizes and maintains human behavior.A motivated employee is easy to spot by his/her agility, dedication, enthusiasm, focus, zeal and general performance and contribution to organizational objectives and goals (Ifinedo, 2003).
To Williams and Burden (1997:111) "interest, curiosity, or a desire to achieve" are the key factors that compose motivated people.However, they believe that arousing interest is not enough to be motivated.This interest should be nourished.A highly motivated person will work hard to achieve the performance goal; on the other hand, an unmotivated person will not work so hard which will in turn cause low productivity.Schultz, Sono and Werner (2001) point out that motivation is intentional as well as directional.A motivated person always knows that a specific goal must be achieved and he\she uses most of his\her energy and effort to achieve this goal even in most difficult times.Abraham Maslow's theory of hierarchy of needs (1943) is one of the mostly recognized motivation theories.According to Maslow there are five basic needs of each individual: physiological needs which include pay, food, shelter, etc., security needs which include job security, protection against threats, safety, etc. affiliation needs which include the need of love and affection, esteem needs include the needs for respect, autonomy, achievement, recognition, etc.; self actualization needs include realizing one's full potential and ability.According to Maslow, once a person's need is satisfied it is no longer a need, so the need at the next level of the hierarchy motivates employees.Many empirical studies have supported the motivational force of physiological, safety, love and esteem needs; however, the same studies failed to discover hierarchical arrangement.There is some evidence that opposes the order of needs in Maslow's model.For example, in one culture some people might place social needs before any others.Despite the lack of scientific support, this theory is very well-known since it's the first theory of motivation.
According to Johnson (1986), there are three theories of motivation and productivity that employees' motivation is based on: Expectancy theory-Individuals are more likely to try in their work if they expect a reward worth working for, such as a bonus or a promotion, than if there is none.Equity theory-Individuals are dissatisfied if they are unfairly treated for their efforts and accomplishments.Job enrichment theory-When the work is varied and challenging, employees become more productive.The first two theories represent extrinsic factors which offer a financial reward for a teacher who achieves the preordained goals.This indicates a link between the effort and reward.People are thought to be more motivated if their effort is rewarded and they are not productive if their efforts are not equally compensated.Consequently, equity and fairness at work lead to job satisfaction and high motivation.Educational administrators should consider these theories which link teachers' needs satisfaction and job performance and they should make sure that teachers should be rewarded, fairly paid and professionally challenged.
However, Odden and Kelley (1997), having reviewed recent research and experience, concluded that individual merit and incentive pay programs do not work and, in fact, are often detrimental. Many studies indicated that merit pay might not be appropriate for organizations such as schools which need cooperative and collaborative work (Lawler, 1983). On the other hand, a merit pay or performance-based system can be productive mostly in the business community.
There are primarily two types of motivation: extrinsic motivation and intrinsic motivation (Herzberg 1959)
Extrinsic motivation
Extrinsic factors are related to the context or environment in which the job is performed (Herzberg, Mausner and Snyderman, 1959).Extrinsic motivation occurs as a result of external environment, external to a job and it is usually created by others.Extrinsic motivation is not related to the task which is performed by people.Teachers can be motivated extrinsically by means of salary, bonus, pension, insurance, promotion, days-off, praise, etc., all of which might contribute to motivation to teach and job satisfaction as well.An example of this would be a school administrator rewarding a teacher whose students achieve the highest score in the national test.
Intrinsic motivation
Intrinsic motivation stems from the internal factors and it is generated by the doer him/herself.Certain behavior is performed by a person because it gives him\her pleasure and a person gets a psychological reward rather than physical. Ellis (1984) discusses that "teachers are primarily motivated by intrinsic rewards such as self-respect, responsibility, and a sense of accomplishment."To Frase (1992), teachers choose to become teachers to help young people to learn and their most gratifying reward is accomplishing this goal.An example of this would be a teacher whose students successfully graduate from the college.This teacher gets the sole satisfaction as he\she sees his\her students at the graduation ceremony in their graduation gowns.
Teacher Motivation
After doing something for a while, our curiosity gradually fades.Some of us lose the enthusiasm and pleasure about the work we do; some of us continue to preserve the long term benefits.Others try to change the job in order to get back their enthusiasm.This scenario is known by many organizations and educational institutions.Some teachers might be highly motivated, others not.This issue is one of the primary challenges for many school administrators because teacher motivation determines the level of success or failure in achieving an institution's goal in the educational process.
Since researchers thought that language success was connected mostly with students' motivation, there were not many studies conducted on teacher motivation in the past.However, since the 1990s researchers started to investigate teacher motivation as they had realized that teacher motivation had a great impact on student motivation.Pennington seems to be the pioneer of this kind of research with her articles related to teacher motivation in 1991, 1992.In her articles she writes about ESL teachers' work satisfaction and dissatisfaction and the role of teachers' motivation to improve their performance.Another significant figure on teacher motivation is Dörney.In 2001 he mentioned the importance of teacher motivation by stating "The teachers' level of enthusiasm and commitment is one of the most important factors that affects the learners' motivation to learn" (p.156).Pennington (1995) states that to improve teacher motivation employers need to address and eliminate de-motivating factors in their teaching environment.Those factors can be stress, heavy workloads, work hours, job stability, disagreement with teaching methods, etc. Teacher motivation and classroom efficacy can be increased if those factors mentioned above are removed from the work environment.Doyle and Kim (1999) listed factors that cause dissatisfaction among ESL and EFL teacher: • "Lack of respect from administration • Lack of advancement opportunities • Lack of long term employment and job security • Overly heavy work loads • Separation and alienation of teachers • Lack of rewards for creativity • The malfunctioning of the educational system • Lack of funding for projects • Lack of autonomy in the teaching and evaluation • Lack of appropriate teaching environment • Over-commercializing textbooks • Discrepancies in teaching philosophies
• Lack of teacher training
• Institution of team teaching and foreign assistant teacher" Teachers who work in these conditions will lose their motivation which will in turn create less effective learning environment.Most teachers consider these factors as an important barrier which hinders effective teaching.Many teachers even quit their profession as a result of one or some of those factors listed above.
Both extrinsic and intrinsic motives do motivate everyone not just teachers.Promotion opportunities, good working conditions, job security, etc. are external factors which increase teacher motivation.However, some studies reveal that students who were taught by an extrinsically motivated teacher demonstrated lower engagement in tasks and lower interests in learning; on the other hand, students who were taught by an intrinsically motivated teacher showed a great interest and engagement in the learning process.Sergiovanni (1967) found that teachers obtain their greatest satisfaction not through external factors but through a sense of achievement in reaching and affecting students, experiencing recognition, and feeling responsible.Wild T. C., Enzle M. E. and Hawkins W. L. (1992) found that teachers who were perceived more intrinsically motivated were more willing to experiment and explore their fields of study.In a study conducted by Tardy and Snyder (2004), teachers who were intrinsically motivated by a strong connection and a sense of accomplishment in their English lessons, demonstrated a greater desire to teach in order to feel the same kind of success.
Research Methodology
The present research aims to investigate EFL teachers' reasons of choosing this job, to find out the most important sources of teacher motivation.The research incorporates methods of both qualitative and quantitative studies.The findings are backed up by self-observations of the author as an EFL teacher himself.Moreover, observations made during years of educating L2 learners, namely, of the dynamics of transformation, maintenance and fluctuation of motivation among colleagues is retrieved to further confirm the results of the study.The population of the study is English language teachers who are mostly from Turkey and who work in Fezalar Educational Institution (FEI).This educational institution consists of secondary schools, high schools and a university in different cities of Iraq.A description of participants, the instruments of the study and data collection methods and its analysis are presented below.
Participants
The participants consist of 37 English language teachers who were chosen randomly from the various schools of FEI.
The respondents of this study are mainly from the high schools, which account for 40.5%; the other 32.4% are university lecturers, the second largest group, and the remaining 27% are secondary school teachers. The study involves 64.9% (no. 24) male and 35.1% (no. 13) female teachers. As for the age of the respondents, 8.1% (no. 3) of them are below 25, 56.8% (no. 21) are between 26 and 30 years old, which is the majority of the respondents, 29.7% (no. 11) are between 31 and 40, and the remaining 5.4% (no. 2) are over 41 years old.
Among the teachers who participated in this study, 40.5% have been working as teachers of English for 6 to 10 years, 35.1% have less than 5 years of teaching experience, 21.6% have been teaching for 11 to 20 years and only 2.7% have over 21 years of teaching experience. In terms of qualification, more than half of the participants (51.4%) have a BA degree, another 40.5% have a Master's degree and only 8.1% of them are PhDs.
Instrument and data collection
The research instrument titled "Teachers' Job Satisfaction and Motivation Questionnaire" (TEJOSAMOQ) was adapted for this study. A questionnaire consisting of close-ended and open-ended questions was developed and later tested with a small number of teachers. Moreover, the instrument was checked by several experts to establish its content validity. It was then sent to 40 teachers by email. We received completed questionnaires from 37 of them; the rest declined to contribute. The questionnaire consisted of four parts. In the first part, biographical information was obtained, for example: gender, age, teaching experience and the establishment they teach English at. The second part of the questionnaire contained statements with yes/no answers to elicit answers to research question one. We asked respondents to give the reasons that motivated them to choose teaching as a career, since we think that there is a close relationship between the reasons behind opting for this job and their current job motivation. In the third part, we asked questions related to their current level of motivation. This part contained questions mainly on extrinsic motivating factors such as salary, feedback from supervisors, professional development opportunities, etc. In the fourth part, we tried to elicit respondents' attitudes towards the intrinsic factors that keep them motivated, and this part was measured on a five-point scale, from 1 (strongly disagree) to 5 (strongly agree). In this part we tried to investigate which intrinsic factors mostly motivate teachers of English in FEI. For the statistical analysis of the data we used the Statistical Package for the Social Sciences (SPSS) 22 program.
Data Analysıs And Results
Table 1. Reasons for choosing teaching as a career.
In the second part of the questionnaire we tried to investigate teachers' reasons for choosing this career. It can be seen from Table 1 that most teachers of FEI preferred this career due to altruistic motives such as the potential of changing students' lives (70%), contributing to society (65%) and thinking they were born to teach (32%). However, there are other factors which had an effect on choosing this job. Among the respondents, 67.6% agreed that teaching fitted their lifestyles. 55.1% of teachers claimed that the social status of a teacher was another reason for entering this job, since teaching is still highly valued in this culture. 32.4% of respondents claimed that they entered this career because of job security, since it is fairly certain to find teaching jobs in most countries, and there is even a chance of working in different cities or countries. 32.4% of teachers stated that having autonomy in the classroom was another reason why they opted for this job: once a teacher closes the classroom door, he/she decides what to do, and not many jobs can guarantee this sort of autonomy. 29.7% of respondents agreed that summers off were another reason for choosing this career, since they might provide teachers with other jobs in summer or several months to relax. 21.6% of respondents agreed they were pressurized to select this job, while 18.9% agreed they didn't get the admission they wanted; these are the least powerful motivators we found in this study.
In the third part of the questionnaire we tried to investigate teachers' general level of current job satisfaction with different variables such as their perception of the salary, relationship with administrations, etc.It reveals that teachers in FEI are mostly satisfied with the salary, relationship with the administration and holidays /educational leaves.The results are presented in the tables and analysed using frequency and percentage principles.The majority (83,8%) of English language teachers in FEI agreed that they are satisfied with their job, only 16,2% (no.6) stated that they are not satisfied.
Table 3 shows that 83,8% (31) of teachers claimed that they receive a reasonable salary.It can be explained that FEI is a private educational institution and it pays well enough to the employees.Thus, most of them are satisfied.Table 2 and Table 3 reveal that the percentage of the teachers who agreed they are satisfied with the job and who agreed that they receive reasonable salary is the same -83,8%.This indicates that there is some correlation between the salary and job satisfaction in FEI; however, it doesn't necessarily mean a reasonable salary leads to job satisfaction.On the other hand, many other studies proved that a low salary might be a cause of job dissatisfaction.
In Table 4 it can be seen that 81,1%(30) of respondents agreed that they have a good rapport with their administration while 18,9% (no.7) of them didn't think similarly.During our interviews we recorded that many teachers stated that administrative factors help them get motivated.They noted that whenever they are asked to participate in the decision making process, it gives teachers an increased sense of belonging to the institution which consequently results in a better job performance and higher motivation.They also noted that when their administrators or supervisors celebrate important events or their achievements or celebrate the end of a busy week with food or beverages, it creates a positive climate which affects teachers' motivation as well.Table 5 demonstrates that 75,7% of teachers are satisfied with the holidays and educational leaves provided by FEI, while only 24,3% of them expressed their dissatisfaction.
In the fourth part of our questionnaire we investigated teachers' attitudes towards the intrinsic and extrinsic factors that keep them motivated in FEI. The factors which are directly related to this study are measured on a five-point scale, from 1 (strongly disagree) to 5 (strongly agree), and given in Table 6 (n = 37; scale: 1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, 5 = strongly agree). To ascertain mean values, the collected data was analyzed using descriptive statistics, and the results revealed that the most important motivational factor for teachers of FEI is "having good relationships with students", which has a mean of 4.51 with SD 0.7, while "my students' language learning success" scored second with a mean of 4.45 with SD 0.7, which is very close to the first one. The sense of achievement or success and the possibility of improving professional skills took the 3rd and 4th positions respectively, with means of 4.32 and 4.24. These mean values suggest that training programs for professional development are one of the most important motivational factors that raise teachers' performance. "My work itself" and "working conditions" had means of 3.7 and 3.6. Authority and independence obtained a mean of 3.59, while receiving praise from the administration, parents and students obtained a mean of 3.34. Relationship with colleagues and fringe benefits are seen as the least significant sources of motivation in the list, with a mean of 3.35 each. The above results demonstrate that most teachers in FEI agreed that mostly intrinsic factors motivate them.
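As an illustration of how the per-item means and standard deviations reported in Table 6 can be computed, the short sketch below uses pandas on hypothetical Likert responses; the values shown are not the survey data.

```python
import pandas as pd

# Hypothetical 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree)
# for two of the motivation items discussed above; the real values come from the survey.
responses = pd.DataFrame({
    "good relationship with students": [5, 4, 5, 4, 5],
    "students' language learning success": [4, 5, 4, 5, 4],
})

summary = responses.agg(["mean", "std"]).T  # mean and SD per item, as in Table 6
print(summary)
```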
Discussion And Conclusion
A motivated teacher is a key factor that importantly affects students' success in the educational process and he\she guides and shapes their worldview.Thus, motivation of teachers is highly important since it directly influences the learning process.Moreover, it maintains the interest of students to the course.Whatever degree of motivation a teacher brings to the classroom is transformed for better or for worse.A motivated teacher will work harder, try new approaches and do a lot for the sake of students which will in turn contribute to effective learning.Moreover, a motivated teacher will strive for the excellence and growth of the institution.Motivating teachers is one of the primary challenges for many school administrators because teacher motivation determines the level of success or failure in achieving an institution's goal in the educational process.
This study aimed to explore two main questions.First, what motivated English teachers at FEI to choose this career and second, to find out to which level FEI teachers are satisfied and to investigate their attitutes towards the intrinsic and exrtinsic factors that keep them motivated in FEI.
It has been a common question in teacher motivation researches to pinpoint the causes of opting for the teaching profession as the reasons behind entering it have a significant influence on educators' commitment to the job.The literature on teacher motivation demonstrates that new teachers' willingness to enter this job can be defined by intrinsic, extrinsic and altruistic factors.The results obtained in this study also demonstrate that there are intrinsic, extrinsic and altruistic factors that motivated teachers to enter this job.However, the most important reasons found in this study are intrinsic and altruistic factors that significantly motivated FEI teachers to make such a decision.These findings are in consonance with Wadsworth, (2001) in a research involving 914 public and private school teachers in the USA, she reports that 96% of them joined this profession due to intrinsic reasons.2000) and Hettiarachchi, (2010) which strongly support the findings of this study.In this study, among the intrinsic and altruistic reasons: teaching as an indispensible part of my life (67.6%),potential of changing students' lives (70%), contributing to society (65.5%), ranked as the most important motivating key factors.These findings are in agreement with Frase (1992), who claims that people decide to become teachers to help young people learn and their most gratifying reward is accomplishing this goal.It indicates that teachers chose this job on the basis of intrinsic and altruistic reasons and they volunteered in teaching profession because of these reasons.Although many studies conducted by different researchers in different cultures disclose that the most common motivators are intrinsic reasons (Wadsworth, 2001, Watt H. M. G. andRichardson P. W., 2008), it might not be true to the same extend in all contexts.The survey also revealed that some of the participants had extrinsic and other reasons to select the teaching profession.The reasons mostly include other factors such as: a social status of a teacher, job security, autonomy in the classroom, summers-off and a profession suitable for women.Among these reasons, the most common was the social status of a teacher since it was stated by 55.1 % of the participants.Job security (32.4 %) is another motive that drives teachers to enter this job since it's fairly certain to find teachers' jobs in some countries.Another extrinsic reason is "summers-off" (32.4%) because not many jobs might provide several months to relax.We can see similar findings in the study conducted by Bastick (2000), where she studied teacher motivation in Jamaica and she detected that extrinsic reasons such as: the job with the most holidays, job security, opportunity for earning extra money, and a social status for teachers were among the factors which motivated teachers to join this profession.As this study revealed the teachers, who had not entered this profession because of intrinsic or altruistic reasons, had other drives in their lives for selecting teaching.They either could not reach other professional goals or somebody recommended teaching as a better choice.The study revealed that some teachers entered this profession because they were urged to do so by their parents.During the interview one female teacher mentioned how she had been pressurized by her parents to choose this job.This confirms that in some cultures a teaching profession is considered to be the best job for women.Some other teachers also stated that they didn't get the admission they wanted, so they 
resorted to teaching.One teacher mentioned that he wanted to study law but he didn't get the adequate points in the national test so he picked teaching.The explanation is that in some countries like Turkey, Iraq, etc. graduates with the lowest grades are admitted to colleges of education.Thus, the declining status of teachers can be a de-motivator for many new teachers in such cultures.
The second main question of this study was to find out to which level FEI teachers are satisfied and to investigate their attitutes towards the intrinsic and exrtinsic factors that keep them motivated in FEI.The study reveals that the majority of teachers in FEI seem generally satisfied with various aspects of their job such as the job itself, the salary and the relationship with the administrators.However, some teachers express their dissatisfaction with the relationship they have with their administrators which is considered a dissatisfying source for teacher motivation.This study implies that there is some interdependence between the salary and job satisfaction in FEI, however, it doesn't necessarily mean a reasonable salary leads to job satisfaction.On the other hand, many other studies indicated that a low salary might be a cause of job dissatisfaction.Similar results (Evans 1998, Maenpaa 2005) were also found in other researches.Evans (1998) found that many teachers wanted to leave their jobs as a result of low salary.Doyle G and Kim Y. M. (1999) carried out a research on teacher motivation and found that salary, teacher-administrator relationship, curriculum, course materials, heavy workload, lack of job security and autonomy were the causes of teachers' demotivation.Another study conducted by Connie (2000) demonstrated similar results.In her research involving 98 Mexican English teachers, she found that demotivating factors included a low salary, the lack of teaching materials, an inflexible curriculum, a heavy workload and the lack of enthusiasm in teaching.However, in a recent study Hettiarachchi (2010) found that poor relationship among colleagues and teacher transfer were among the highest-rated demotivating factors.All these studies support our finding by stating that salary and the relationship with administrators, supervisors or colleagues might be the causes of job dissatisfaction which will in turn lead to low motivation.
In terms of motivating factors, most teachers agreed that primarily intrinsic factors motivate them.Teachers value having good relationships with the students and it is the most important motivator while the students' language learning success is the second important drive.Positive relationships with students will certianly lead teachers to better deal with students' problems or needs which will in turn lead to improved teacher motivation.Similiar findings were also discovered by Colodarci (1992).These findings also agree with Tardy and Snyder (2004) who found that teachers who were intrinsically motivated by a strong connection and a sense of accomplishement in their English lessons, demonstrated a greater desire to teach in order to feel the same kind of success . Dinham S. and Scott C. (2000) obtained similar results, in their study when they discovered students' academic achievements as being one of the common motivators for teachers among others.Moreover, most teachers feel motivated when their students achieve success and when they perform desired tasks successfully.Another study that supports these findings was conducted by Connie (2000) involving Maxican EFL teachers, she found that the most important factors that motivate teachers are better performance among students and students' own motivation.This study makes evident that the realtionship between teachers and students is a crucial source of motivation or de-motivation.Students' lack of commitment to the subject might also make teachers feel burnout, demotivated or stressed.Thus, students' willingless to learn is one of the most important factors for teacher motivation.Vanderberge and Hubberman (1999) also confirmed the importance of student-teacher relationship as an enhancing factor for teacher job satisfaction.Additionally, Barnaus M., Wilson A. and Gardner R. C. (2009) analyzed the relationship between the teachers' and students' motivation and its influence on students' accomplishements.They found that the close interconnection between them facilitated students' achievements.The findings of this study revealed that, regardless of culture in which teachers teach, most of them obtain their motivation both from teaching itself and from their students.
Possibility of improving professional skills is another motivating factor found in this study.It is suggested that training programs are one of the most important motivational aspects that boost teachers' performance.This finding is in consonance with Woodward (1992) who considered the training as a motivational force and with Qayyum A. and Siddique M. (2003) who found in their research that teacher's competency motivated them to perform well.Dinham S. and Scott C. (2000) also found that self-growth opportunities and mastery of professional skills were among the common motivators for teachers.However, participants of this study, in particular, of the interviews, were unhappy with the lack of opportunities available for their professional development.They complained about having limited possibilities for professional development in FEI.This is a common finding in teacher motivation study that teachers often lose their motivation if they are not provided with the opportunity of professional development.
The study also suggests that many teachers feel motivated when their efforts are fairly recognized and praised by the administration, parents and students.This finding supports the Expectancy Theory.Individuals will respond favorably if they perceive their goals are realistic, achievable and a reward comes with them.teachers are known to be more motivated when their efforts are rewarded and they are not sufficiently productive if their attempts are not equally compensated.Thus, teachers are more likely to be motivated if their goals seem achievable and a particular "prize" is expected.We also believe that teachers should be provided with feedbacks to realize their weaknesses and strengths and positive efforts should be rewarded by the administration.If sufficient feedback is not given about the task performed by teachers, it might affect teacher's performance negatively because teachers should be aware of the results of any task implemented in the classroom (Mufflin, 1995).
Although most of the findings are in agreement with the results of teacher motivation research in other countries, the results of this survey about the teacher pay demonstrated different findings.In most studies conducted in different countries incentive pays and fringe benefits were among the primary de-motivators for teachers (VSO, 2002).However, in this study the majority of teachers don't regard fringe benefits and incentive pay in this way which is in agreement with Odden and Kelley (1997) who reviewed various researches and experiences concluded that individual merit and incentive pay programs do not work and, in fact, are often detrimental.Thus, fringe benefits and merit pay programs are not suitable for schools and universities where teachers are supposed to work cooperatively and in harmony.The study, conducted by Ramachnadran et al. (2005) in India, revealed similar results about concerning teacher salary, where teachers mentioned their contentment with the salary.Other teacher motivation studies have demosntrated as well that the amount of salary does not play a significant role in teacher motivation.The findings of this study confirm that a salary doesn't seem to be as significant as many beleive.This could be primarily because, teachers in a private sector a receive reasonable salary.The results of this study also demonstrate that working conditions might impinge on teacher motivation and satisfaction.In teacher motivation research, the studies revealed that physical conditions of classrooms, insufficient teaching materials and heavy workloads might be the utmost source of demotivation and they are likely to negatively affect teacher commitment.
Teacher motivation is also concerned with job satisfaction.It is important to keep teachers' intrinsic motivation high, since the higher intrinsic motivation educators have the more satisfied they will be with their work (Davis J. and Wilson S. M., 2000).We believe that a motivated teacher can certainly have a significant influence on students' perception and desire to learn.It is experienced that students whose teachers are highly motivated seem to be more engaged in the learning process.Positive interaction and rapport with learners lead teachers to better work with their students' needs individually, and thereby create teacher efficacy and improve teacher motivation (Colladarci, 1992).Most teachers feel motivated when their students are able to achieve success or when they perform desired tasks successfully.Since motivation is contextual and it can be influenced by administrative, social, economic and personal issues, the school/ university administrators must find ways to deal with teachers' low motivation.In order to raise the motivation of a teacher they should carefully distinguish between the intrinsic and extrinsic rewards as well.
Table 2 .
Are you generally satisfied with your job as a teacher?
Table 4 .
Do you think that you have good relationship with your administrators or supervisors?
Table 6 .
Source of motivation for FEI teacher.Descriptive Statistics
|
v3-fos-license
|
2021-06-03T00:35:31.881Z
|
2021-01-01T00:00:00.000
|
235286613
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1088/1755-1315/768/1/012086",
"pdf_hash": "393ff8317765af64c210f482866c3def932fb614",
"pdf_src": "IOP",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44484",
"s2fieldsofstudy": [
"Geology"
],
"sha1": "393ff8317765af64c210f482866c3def932fb614",
"year": 2021
}
|
pes2o/s2orc
|
Study on the application of excess pore water pressure in analyzing the effect of dynamic compaction for the subgrades filled with aeolian sand and gravel soil underwater
This study examines subgrades filled underwater in the Taitema Lake section of an expressway project in Xinjiang. Underwater deep treatment of aeolian sand and gravelly soil subgrades was carried out by dynamic compaction, and the excess pore water pressure was monitored during compaction. The consolidation effects after dynamic compaction were compared. The results showed that, after dynamic compaction, the excess pore water pressure in subgrades of aeolian sand and gravel soil increases with the number of tamping blows and dissipates quickly after tamping. The subgrade strength of aeolian sand and gravel soil is improved markedly after dynamic compaction, but the overall reinforcement effect of the gravel soil subgrade is better than that of the aeolian sand subgrade, so gravel soil should be chosen first when the subgrade is filled underwater.
Introduction
There is a lot of aeolian sand and Gobi gravel soil in Xinjiang and other western regions of China. Aeolian sand and Gobi gravel soil can be used as subgrade filling when building roads in these areas. Research [1~2] shows that aeolian sand is not a good natural backfill, whereas Gobi gravel soil is a good natural backfill material that has been widely used for subgrade filling in the western region. However, there are subgrades filled underwater in the Taitema Lake section of an expressway project in Xinjiang; abundant aeolian sand is available near the project, but the haul distance for gravel soil is long and the construction cost is high if gravel soil is used to fill the subgrade. Therefore, it is meaningful to study whether aeolian sand can replace gravel soil for underwater filling of the subgrade.
At present, scholars [3~4] have conducted a large number of effective studies on the engineering characteristics of aeolian sand and it has been successfully applied to some highways. The research on the treatment method of aeolian sand mainly focuses on the shallow surface treatment of the aeolian sand roadbed by vibration rolling, while the research on the deep or underwater treatment of the aeolian sand filled roadbed is still rare. As for the deep treatment of gravel soil under water, the research [5~6] found that the deep treatment of the gravel soil underwater filling can be carried out by dynamic compaction. At the same time, aeolian sand also has a loose grain structure and has strong water permeability, and it can also be treated by dynamic compaction. In this text, the test study on the deep treatment of subgrade filled with aeolian sand and Gobi gravel soil by dynamic compaction is carried out, and excess pore water pressure of that was monitored during compaction, the consolidation effects after dynamic compaction were compared by excess pore water pressure.
Test location and subgrade layer distribution
The K328+90~K328+140 section of aeolian sand backfill subgrade and the K333+640~K333+690 section of Gobi gravel soil backfill subgrade in the Taitema Lake area were selected for the dynamic compaction test. The test focuses on underwater subgrade treatment by dynamic compaction; the treatment depth below the water surface is about 5 m.
Tamping point plan layout and construction parameters
Based on experience, the grid spacing between dynamic compaction points is generally 1.5 to 2.5 times the tamper diameter, that is, 3.0~5.0 m. This test adopts two grid spacings of 3.5 m and 4.5 m. Each test group is divided into 2 trial areas, and the construction parameters are shown in Table 1.
Ironing tamping
After point tamping is completed, ironing tamping is performed. The ironing tamping energy is 1000 kN·m, with 2 drops per point and 1 tamping pass; the overlap between adjacent tamper prints is 1/4 of the tamper print.
Arrangement and quantity of monitoring and effect testing
Monitoring of excess pore water pressure
The excess pore water pressure generated by dynamic compaction reflects the influence range of compaction both in depth and in the horizontal direction; monitoring its change after compaction also shows how the excess pore pressure dissipates in the reinforced soil. Two sets (4 gauges in total) of pore water pressure gauges were buried near tamping point A3 of trial area 1 and tamping point B3 of trial area 2 in each test group. The layouts are shown in Figure 2.
Figure 2. Layouts of pore water pressure gauges (unit: mm): (a) test group of aeolian sand; (b) test group of gravel soil.
Test of foundation
Heavy dynamic penetration tests were carried out on the foundation to evaluate the effect of reinforcement. In each test area, 2 heavy dynamic penetration points were tested before and after dynamic compaction: one at the center of a tamping point and the other at the midpoint of the line connecting two tamping points.
Analysis of test results
Excess pore water pressure in the horizontal direction
The monitoring results at tamping points A3 (aeolian sand) and B3 (gravel soil) were selected, and comparison curves of excess pore water pressure versus time were drawn at a depth of 3 m for aeolian sand and 2.5 m for gravel soil, as shown in Figure 3 (excess pore water pressure, in kPa, versus relative time, in minutes, at distances of 3.5 m and 4.5 m from the tamping point for each fill). The abscissa "relative time" refers to the time relative to the first drop. When the excess pore water pressure reaches its peak, the settlement of the tamping point meets the standard, and the subsequent part of the curve is the dissipation process. According to Figure 3, at the same depth under the two energies, the excess pore water pressure of gravel soil at a distance of 4.5 m from the center of the tamping point is basically the same as that at 3.5 m, while for aeolian sand the excess pore water pressure at 4.5 m is greater than that at 3.5 m. This indicates that the soil at a distance of 4.5 m from the center of the tamping point can be effectively reinforced for both types of filler. Figure 3 also shows that, under the same energy, the excess pore water pressure of the gravel soil backfill at the same distance from the tamping point is greater than that of the aeolian sand, indicating that the reinforcement effect of the gravel soil backfill in the horizontal direction is better than that of the aeolian sand backfill under the same energy.
The dissipation curves of the excess pore water pressure in Figure 8 show that most of the excess pore water pressure in the aeolian sand and gravel soil backfill layers dissipates rapidly after tamping is completed, and the dissipation then becomes slower. After about 3 hours, the degree of dissipation exceeds 80%, so when there are multiple passes in dynamic compaction, both types of backfill can be continuously tamped without waiting time.
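As a concrete illustration of how the monitored gauge records can be reduced to a dissipation degree, the short sketch below (an illustrative example only; the gauge readings and the helper name `dissipation_degree` are hypothetical, not data or code from this study) treats the peak reading after the last drop as 100% and expresses later readings as the fraction of that peak already dissipated.

```python
import numpy as np

def dissipation_degree(time_min, u_kpa):
    """Degree of excess pore water pressure dissipation relative to the peak.

    time_min : monitoring times in minutes (relative to the first drop)
    u_kpa    : excess pore water pressure readings in kPa
    Returns the times after the peak and the dissipation degree (0-1).
    """
    time_min = np.asarray(time_min, dtype=float)
    u_kpa = np.asarray(u_kpa, dtype=float)
    i_peak = int(np.argmax(u_kpa))           # peak is reached when tamping stops
    u_peak = u_kpa[i_peak]
    post_t = time_min[i_peak:]
    post_u = u_kpa[i_peak:]
    degree = (u_peak - post_u) / u_peak       # fraction of the peak already dissipated
    return post_t, degree

# Hypothetical gauge record: pressure builds during tamping, then dissipates.
t = [0, 2, 4, 6, 30, 60, 120, 180, 240]
u = [5, 18, 32, 40, 28, 20, 12, 7, 5]
times, deg = dissipation_degree(t, u)
for tt, dd in zip(times, deg):
    print(f"t = {tt:5.0f} min  dissipation = {dd:5.1%}")
```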
Excess pore water pressure in the depth direction
The monitoring results for aeolian sand and gravel soil were selected and comparison curves of excess pore water pressure versus time were drawn, as shown in Figure 4. Figure 4 shows that, at the same distance from the tamping point under the two energies, the excess pore water pressure of the gravel soil at a depth of 2.5 m is basically the same as that at 5 m; the soil at the two depths has the same reinforcement effect, and this effect hardly decreases with depth. For the aeolian sand backfill, however, the excess pore water pressure at a depth of 3.0 m is greater than that at 5 m: the upper soil is reinforced better than the lower soil, and the reinforcement effect gradually decreases with depth. The comparison curves also show that, under the same energy, the excess pore water pressure of the gravel soil backfill is higher than that of the aeolian sand backfill, indicating that the consolidation effect of the gravel soil backfill in the depth direction is better than that of the aeolian sand backfill, especially for the lower soil.
Reinforcement effect test
After dynamic compaction, two groups of heavy dynamic penetration tests were carried out on the aeolian sand and gravel soil (2 test points in each group: the center of a tamping point and the midpoint between 2 tamping points), as shown in Figure 5. From Figure 5 it can be seen that the blow counts of the heavy dynamic penetration tests for gravel soil, at both the tamping point center and the midpoint between tamping points, are greater than those of the aeolian sand backfill under the same energy. This shows that the reinforcement effect of gravel soil is better than that of aeolian sand under the same energy.
Conclusions
The consolidation effects of aeolian sand and gravel soil after dynamic compaction were compared on the basis of excess pore water pressure. The main test results are summarized as follows.
(1) Most of the excess pore water pressure in the aeolian sand and gravel soil backfill layers dissipates rapidly after tamping is completed, and the dissipation then becomes slower. After about 3 hours, the degree of dissipation exceeds 80%, so both types of backfill can be continuously tamped without waiting time when there are multiple passes in dynamic compaction.
(2) Based on analysis of excess pore water pressure, the dynamic consolidation effect of gravel soil in both horizontal and depth directions is better than that of aeolian sand. The reinforcement effect test results show that the strength of the gravel soil after dynamic compaction is better than that of aeolian sand, and the reinforcement effect is more obvious.
(3) For gravel soil, the reinforcement effect hardly decreases with depth, while for aeolian sand the upper soil is reinforced better than the lower soil and the effect gradually decreases with depth.
(4) Gravel soil should be preferred for underwater filling. In areas rich in aeolian sand, however, aeolian sand filling may be considered if its quality is strictly controlled and the foundation strength after dynamic compaction meets the design requirements; otherwise it should be used with caution.
|
v3-fos-license
|
2021-05-08T00:04:38.111Z
|
2021-02-09T00:00:00.000
|
233905838
|
{
"extfieldsofstudy": [
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.mdpi.com/2624-8549/3/1/18/pdf?version=1612863871",
"pdf_hash": "4ac9bf6373762ff970872f08a75111e0163a430f",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44487",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"sha1": "d010bd6cd746cdcc614f1227c0587ccd065b10b1",
"year": 2021
}
|
pes2o/s2orc
|
Effects of Substituents on Photophysical and CO-Photoreleasing Properties of 2,6-Substituted meso-Carboxy BODIPY Derivatives
Carbon monoxide (CO) is an endogenously produced signaling molecule involved in the control of a vast array of physiological processes. One of the strategies to administer therapeutic amounts of CO is the precise spatial and temporal control over its release from photoactivatable CO-releasing molecules (photoCORMs). Here we present the synthesis and photophysical and photochemical properties of a small library of meso-carboxy BODIPY derivatives bearing different substituents at positions 2 and 6. We show that the nature of substituents has a major impact on both their photophysics and the efficiency of CO photorelease. CO was found to be efficiently released from π-extended 2,6-arylethynyl BODIPY derivatives possessing absorption spectra shifted to a more biologically desirable wavelength range. Selected photoCORMs were subjected to in vitro experiments that did not reveal any serious toxic effects, suggesting their potential for further biological research.
It has been found that, despite its overt toxicity when present in high concentrations, carbon monoxide (CO) as an endogenously produced signaling molecule can have beneficial effects in various physiological processes and cellular functions, including apoptosis, proliferation, and inflammation at sub-micromolar concentrations (≈0.2 µM) [19][20][21].
When considering its therapeutic applications, sufficiently low concentrations of CO must be administered in a controlled manner to avoid its toxic effects. One CO delivery strategy is based on metal carbonyl complexes that release a weakly bound CO by simple hydrolytic ligand exchange upon dissolution in aqueous media [22,23]. Another approach is the use of photochemically activatable CO-releasing molecules (photoCORMs), which allow precise spatial and temporal control over its release in tissues [24][25][26][27][28]. Visible/NIR light has a reduced tissue penetration due to high optical scattering and strong absorbance by endogenous chromophores, such as hemoglobin or melanin [29,30]; therefore, photoCORMs activatable in this wavelength range are highly desirable. Most of the reported visible/NIR light-absorbing photoCORMs are metal carbonyl complexes, but several transition-metal-free (organic) photoCORMs, such as xanthene-9-carboxylic acid [31] or flavonol derivatives [32][33][34][35], have also been designed and studied. Some of us have introduced meso-carboxy BODIPY-based photoCORM derivatives 1a and b that can release CO upon irradiation at wavelengths of up to ≈750 nm (Scheme 1) [36]. The density functional theory (DFT) calculations suggested that a strained α-lactone intermediate, formed upon irradiation via a triplet biradical, is responsible for the subsequent CO liberation. Scheme 1. Carbon monoxide photorelease from meso-carboxy BODIPY derivatives.
In this work, we prepared several meso-carboxy BODIPY derivatives of 2, compounds 3-10 (Figure 1), as potential new BODIPY-based photoCORMs, and studied their photophysical and photochemical properties. We evaluated the effects of various electron-donating and electron-withdrawing substituents at positions 2 and 6 on their absorption and emission properties, and the efficiencies of CO release and singlet oxygen production upon irradiation. The selected derivatives with the highest yields of CO were subjected to cell culture experiments to determine possible cytotoxic effects to assess their potential for future therapeutic use.
Results and Discussion
Synthesis: Methyl and benzyl esters of carboxylic acid derivative 2, compounds 11 and 12, were prepared from 2,4-dimethylpyrrole and the corresponding chlorooxalate in 34 and 31% chemical yields (Scheme 2), respectively, using a modified procedure previously described [37]. Compound 12 was treated with POCl3 in DMF to give aldehyde 13 as a synthetic precursor, which was subsequently converted to carboxylic acid 14 in the presence of NaClO2 and NH2SO3H in 55% yield (Scheme 3). Compound 13 was used for the preparation of oxime 15 by the reaction with hydroxylamine hydrochloride and sodium hydroxide in ethanol (52% yield), and then the resulting 15 was treated with oxalyl chloride in acetonitrile to give 2-cyano derivative 16 in 78% yield (Scheme 3).
Scheme 4. Synthesis of 2,6-diphenyl (18) and 2,6-diethynyl (19-21) esters.
The target meso-carboxy BODIPY derivatives 2 and 4-10 were obtained by the deprotection of the corresponding methyl esters using lithium iodide or by catalytic hydrogenation from the benzyl esters in good isolated yields (65-92%; Scheme 5). 2,6-Dibromo analog 3 was prepared by the reaction of 2 with NBS in 49% yield (Scheme 5).

Both λ abs max and λ em max of 6, bearing an electron-withdrawing group at position 2, are slightly hypsochromically shifted. The molar absorption coefficients (ε) were found to be in the range of 3.2-5.8 × 10 4 M -1 cm -1 , which is common for BODIPY chromophores [11]. The type of the solvent (methanol, Table 1, and PBS, Table 2) had only a marginal effect on the λ abs max , λ em max , and ε values. The solvent and properties of the substituents at positions 2 and 6 had a substantial effect on the fluorescence quantum yields (Φ f ). Except for compound 2, all Φ f values were found to be higher in methanol; indeed, increasing the solvent polarity leads to a low fluorescence efficiency of BODIPY derivatives [12]. Besides, the Φ f was found to be relatively small for bromo- and iodo-derivatives 3 and 4 due to the presence of heavy atoms (i.e., an efficient ISC). An efficient nonradiative decay must also be responsible for the small values of Φ f in the case of 2-cyano (6) and 2-phenylethynyl (9 and 10) derivatives.
This is consistent with the low Φ f found for 2-phenylethynyl [38,41] or 2,6-arylethynyl [40] BODIPY derivatives. Analogous meso-alkenyl substituted BODIPYs have also been reported to be practically non-fluorescent [39,42]. It has been explained by the effect of large stabilization upon excitation along with the bending of the fused BODIPY core, and the accessible S 1 /S 0 conical intersection point [16]; the small geometrical evolution prompts the nonradiative relaxation to the ground state [43]. We also cannot exclude fluorescence quenching by internal charge transfer (ICT) in derivatives 9 and 10 [42,44,45]. Different amounts of DMSO were added to PBS to dissolve the BODIPY derivatives because, except for 10 (the p-phenyl polyethylene glycol substituents were introduced in 10 to improve its solubility in aqueous solutions), they are only partially soluble in aqueous solutions. The pK a of non-substituted meso-carboxy BODIPY derivatives has previously been determined to be 4.7; the absorption band of the conjugate acid is bathochromically shifted by ≈100 nm [36]. The pH titration of 9 in a PBS/methanol (95:5, v/v) solution showed that the signal of the conjugate acid appears at ≈680 nm and a pH below 5 ( Figure S99). BODIPY derivatives were present only as conjugate base forms in solutions under the experimental conditions used in this work. Time-resolved spectroscopy: We performed nanosecond transient absorption (TA) spectroscopy of selected BODIPY derivatives in both aerated and degassed PBS/DMSO mixtures (c ∼ 10 −5 M; λ exc = 532 nm) to identify long-lived intermediates. Derivatives 3 and 4 gave strong transient signals with λ max at ≈436 nm (Figures S102-S111). A prominent ground state bleach with λ max = 506 and 530 nm (Figures S102, S103, S107 and S108) for 3 and 4, respectively, was also observed. The kinetic traces were fitted to a first-order rate law (the oxygen concentration in an aerated solution at 20 • C (∼2.7 × 10 −4 M [48]) was almost 30 times higher than that of a BODIPY derivative) to provide the lifetimes of 320 ns and 25 µs for 3 and 256 ns and 11.40 µs for 4 in aerated and degassed solutions, respectively (Table S1), which is in good agreement with the lifetimes reported for analogous BODIPY systems [36]. We assigned these signals to the triplet states [49]. Derivatives 2 and 7 did not show any apparent signals in the range typical for the triplet-excited BODIPYs; only fluorescence and ground-state bleach signals were detected (Figures S100, S101, S112 and S113); thus, triplet state concentrations were below the detection limit of our TA spectroscopy setup.
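As an illustration of how such kinetic traces yield the reported lifetimes, the sketch below fits a single-exponential, first-order decay to a transient-absorption trace; the trace is synthetic and the decay constant and noise level are assumed for the example, not taken from the measurements above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical kinetic trace of a triplet-triplet absorption signal: delta-OD
# decaying after the laser flash (not data from the paper).
t_ns = np.linspace(0, 2000, 200)                  # time after the flash, ns
true_tau = 320.0                                   # ns, e.g. an aerated solution
rng = np.random.default_rng(0)
dOD = 0.05 * np.exp(-t_ns / true_tau) + rng.normal(0, 0.001, t_ns.size)

def first_order(t, a, tau, c):
    """Single-exponential decay: dOD(t) = a*exp(-t/tau) + c."""
    return a * np.exp(-t / tau) + c

popt, _ = curve_fit(first_order, t_ns, dOD, p0=(0.05, 200.0, 0.0))
print(f"fitted triplet lifetime: {popt[1]:.0f} ns")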
Intersystem crossing efficiency and singlet oxygen production: As the photochemical release of CO from meso-carboxy BODIPYs was reported to occur from the triplet-excited state [36], we evaluated the ISC efficiency (Φ ISC ) by quantitative analysis of the transient optical density [50,51] using nanosecond TA spectroscopy. Triplet state transient signals of 3 and 4 in methanol were sufficiently strong to obtain the Φ ISC values of 0.66 and 0.83, respectively. Besides, we evaluated quantum yields of the singlet oxygen production (Φ ∆ ) for selected derivatives in methanol using 1,3-diphenylisobenzofuran (DPBF) as a 1 O 2 trap (Table 1). The high Φ ∆ values of 0.59 and 0.72 for 3 and 4 ( Figures S95 and S96), respectively, bearing heavy halogen atoms directly attached to the BODIPY core, match those of analogous BODIPY derivatives with halogen or chalcogen atoms in various core positions [8,11,14,15]. These values are only slightly lower than those of Φ ISC , thus Φ ∆ can be used advantageously to estimate the lower limit of Φ ISC . As anticipated, a small Φ ∆ was found for compounds 2, 5, 6, 8, and 9 ( Figures S97 and S98).
Photorelease of CO: We evaluated the quantum yields of photochemical degradation (Φ r ) of selected derivatives in a PBS/DMSO mixture ( Table 2). In all cases, the efficiencies were greater in the absence of oxygen because of quenching of the reactive triplet state. Besides, the Φ r value for 3 (58 × 10 −4 ) was larger than that of 8 by two orders of magnitude (0.7 × 10 −4 ), indicating that the triplet state is considerably more photochemically active, as previously discussed [36]. Because the efficiency of photodegradation of all BODIPY derivatives in methanol was too low (irradiation of 8 in methanol under the same conditions led to a ≈5% conversion in 12 h, for example), we have not investigated their photochemistry in this solvent in detail.
Compounds 2-10 in PBS/DMSO mixtures were found to produce different amounts of CO upon irradiation at the corresponding λ abs max (Table 2; Figure 2 and Figures S77-S94). The chemical yields obtained upon exhaustive photolysis were rather small for BODIPYs 2-7 (<15%) but considerably higher for phenethynyl derivatives 8-10 (up to 45% upon complete conversion in a degassed solvent). As expected (see above), the yields dropped approximately by half in aerated solutions. However, compounds 3 and 4 with an enhanced ISC, which could promote a more efficient CO release from the triplet state [36], did not liberate CO in high yields. Besides, lower chemical yields of CO found in degassed rather than aerated solutions for compounds 4 and 6 (Table 2) suggest that at least one of the competing degradation pathways is independent of the presence of oxygen. Therefore, we analyzed the photoproducts formed upon irradiation of 10 in a PBS/DMSO mixture by high-resolution mass spectrometry (HRMS). Based on the proposed photoproduct structures (Supplementary Materials), we suggest that the CO production competes with bleaching of the starting material with the generated singlet oxygen [36] (for example, by oxidative cleavage of the triple bond or ring opening of the BODIPY core, reported before in [52]), or the compound undergoes a photoinduced attack of water as a nucleophile at position 3 [53] or an exchange of fluorine(s) by the OH group(s). Figure 2 shows that a photoproduct formed upon irradiation of 10, characterized by a slight hypsochromic shift of the main absorption band, is further consumed upon continuing irradiation. Because the absorption maximum of this intermediate is similar to that of the starting material, the compound must still retain a BODIPY or BODIPY-like chromophore (Supplementary Materials).
The introduction of an electron-withdrawing group (6) or an additional carboxy group (5) at position 2 did not improve the CO yields. According to the proposed mechanism of the CO release from meso-carboxy BODIPYs 1a, b (Scheme 1), electron-withdrawing groups should enhance electron transfer from the carboxylate (with pK a of ≈4.7 [36]; the acids are fully dissociated at pH = 7.4) to the triplet-excited BODIPY core. However, we could not verify this hypothesis because the competing dye degradation in both the presence and absence of oxygen was more efficient than the CO release. A low CO yield in 2 was also unexpected because analogous BODIPY derivatives with the unsubstituted 1,7-positions (1, Scheme 1) were reported to liberate CO in up to 87% yields in degassed solutions [36]. The methyl substituents at positions 1 and 7, which were introduced to improve the stability in the dark (Figures S74-S76) and simplify the synthesis of meso-carboxy BODIPYs, thus must play a detrimental role in the CO photorelease. This behavior may be related to an out-of-plane geometry of the BODIPY core induced by the steric hindrance of a meso-substituent and the 1,7-dimethyl groups that are responsible for a more efficient nonradiative decay via "butterfly motion" discussed for analogous substituted BODIPY derivatives [24,54].
Fortunately, π-extended derivatives 8-10 not only provided improved chemical yields of the CO formation but their absorption bands were bathochromically shifted toward the biologically desired longer wavelengths. Compounds 8 and water-soluble 10 were therefore selected for the assessment of the cellular cytotoxicity before their use in further biological studies.
Cellular toxicity: To determine the cellular toxicity of compounds 8 and 10 and their corresponding photoproducts, an MTT test was used to assess cell viability on the human hepatoblastoma HepG2 cell line. HepG2 cells were incubated with selected photoCORMs or their respective photoproducts for 2, 6, and 24 h. Compound 8 displayed no cytotoxicity within the concentration range of 12.5-100 µmol L -1 (Figure S116), whereas compound 10 showed no effects on cell viability up to the concentration of 50 µmol L -1 (Figure S117). Their photoproducts were found to be non-cytotoxic in the concentration range of 12.5-200 µmol L -1 (Figures S117 and S118). These cytotoxicity data are comparable to those observed for meso-carboxy BODIPY photoCORMs 1a, b (Scheme 1) [36].
Experimental Section
Materials: Reagents and solvents of the highest purity available were used as purchased, or they were purified/dried using the standard methods when necessary.
Methods: The lowest possible intensity of incident light was used in the spectroscopic identification of the samples to prevent their photodegradation. All measurements were accomplished using fresh solutions prepared in the dark. 1 H and 13 C NMR spectra were obtained in CDCl 3 , CD 2 Cl 2 , DMSO-d 5 , or CD 3 OD on 75, 125, 300, and 500 MHz spectrometers (Bruker AVANCE III (300 MHz) and AVANCE III HD (500 MHz) spectrometers). 1 H chemical shifts are reported in ppm relative to the tetramethylsilane signal (TMS, δ = 0.00 ppm) using the residual solvent signal as an internal reference. 13 C NMR chemical shifts are reported in ppm relative to the solvent signal as an internal standard. High-resolution mass spectra (HRMS) were recorded on an Agilent 6224 Accurate-Mass TOF LC-MS instrument using ESI or APCl techniques. Absorption spectra and the molar absorption coefficients were obtained on a UV-vis spectrometer with matched 1.0 or 0.1 cm quartz cells. Molar absorption coefficients were determined from the absorption spectra; the average values were obtained from three independent measurements with solutions of different concentrations. No dependence of the molar absorption coefficient on the sample concentration was observed in the range from 1 × 10 −4 to 1 × 10 −6 M. All glassware was oven-dried before use. Purification procedures were performed using silica gel (Merck 60; 230-400 mesh) columns or by recrystallization.
Determination of CO yields: A solution of compounds 2-10 (100-10 mM) in a PBS (I = 0.1 M, pH ≈ 7.4)/DMSO solution was irradiated with LEDs emitting at the corresponding wavelengths (λ max = 490, 525, or 545 nm) in closed GC vials fitted with PTFE septa until the conversion of the starting material was complete. The CO released into the vial headspace was determined by a GC-Agilent 5973 Mass Selective Detector headspace technique, which was calibrated using the photoreaction of a cyclopropenone photoCORM [55] (50-500 µL, c ≈ 5 × 10 -4 M, in methanol).
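A minimal sketch of how a headspace calibration of this kind can be turned into a chemical yield of CO is given below; the peak areas, CO amounts, and the `co_yield` helper are hypothetical and only illustrate the linear-calibration arithmetic, not the actual instrument response used in this work.

```python
import numpy as np

# Hypothetical calibration: GC-MS headspace peak areas for known amounts of CO
# photogenerated by the cyclopropenone actinometer (values are illustrative).
co_nmol_cal = np.array([25, 50, 100, 200, 250])
peak_area_cal = np.array([1.1e4, 2.3e4, 4.5e4, 9.2e4, 1.14e5])

slope, intercept = np.polyfit(co_nmol_cal, peak_area_cal, 1)   # linear calibration

def co_yield(peak_area, n_photocorm_nmol):
    """Chemical yield of CO: released CO (from the calibration) over photoCORM loaded."""
    co_nmol = (peak_area - intercept) / slope
    return co_nmol / n_photocorm_nmol

print(f"CO yield ≈ {co_yield(3.6e4, 200):.0%}")   # hypothetical sample vial
```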
Decomposition quantum yields: The quantum yields of the decomposition of BODIPY derivatives in both aerated and degassed (purged with argon for 20 min) PBS/DMSO solutions were determined at λ irr = 525 nm (LEDs) using the BODIPY derivative 22 [36] as an actinometer dissolved in PBS (I = 0.1 M, pH = 7.4) according to the published procedure [36]. All quantum yield measurements were repeated five times with independently prepared samples.
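The relative actinometry described above can be summarized by a simple calculation: the sample quantum yield equals the actinometer quantum yield scaled by the ratio of converted amounts and corrected for the fraction of light each solution absorbs. The sketch below illustrates this with assumed, hypothetical numbers (the actinometer quantum yield and the absorbances shown are placeholders, not values from this work).

```python
import numpy as np

def fraction_absorbed(absorbance):
    """Fraction of incident light absorbed at the irradiation wavelength."""
    return 1.0 - 10.0 ** (-absorbance)

def relative_quantum_yield(phi_act, n_sample, n_act, A_sample, A_act):
    """Photodegradation quantum yield of a sample relative to an actinometer
    irradiated under identical conditions (same light source, geometry and time).

    n_sample, n_act : amounts converted (same units for both)
    A_sample, A_act : absorbances at the irradiation wavelength
    """
    return phi_act * (n_sample / n_act) * (fraction_absorbed(A_act) / fraction_absorbed(A_sample))

# Hypothetical numbers (not taken from the paper):
phi = relative_quantum_yield(phi_act=2.0e-3, n_sample=1.2e-8, n_act=4.5e-8,
                             A_sample=0.35, A_act=0.40)
print(f"decomposition quantum yield ≈ {phi:.1e}")
```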
Singlet oxygen production quantum yields: Quantum yields for the singlet oxygen production, sensitized by BODIPY derivatives in methanol at 525 (compounds 2, 3 and 4), 490 (compounds 5 and 6), and 545 nm (compounds 8 and 9), were determined by monitoring the photooxidation of 1,3-diphenylisobenzofuran (DPBF) using rose bengal (RB) [46] as a reference sensitizer. For derivatives 5 and 6, compound 3 was used as a reference. A solution of DPBF (c = 5 × 10 -5 M) and either BODIPY (c = 1 × 10 -5 M) or RB (c = 5 × 10 -6 M) sensitizers in methanol was prepared. The stirred solution (3 mL) in a 1 cm quartz cell was irradiated using LEDs at a selected wavelength, and the UV-vis spectra were recorded periodically. The irradiation time was chosen to reach approximately 10% conversion of DPBF. The procedure was repeated five times.
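For clarity, the relative DPBF method can be reduced to the following sketch: the initial DPBF decay slopes under the sample and the reference sensitizer are compared after correcting for the fraction of light each absorbs. The traces and the reference quantum yield below are assumed for illustration (0.76 for rose bengal in methanol is a commonly quoted literature value, not a number reported here).

```python
import numpy as np

def dpbf_decay_slope(time_s, dpbf_absorbance):
    """Initial photooxidation rate of DPBF: (negative) slope of its absorbance vs time."""
    return -np.polyfit(time_s, dpbf_absorbance, 1)[0]

def singlet_oxygen_qy(phi_ref, slope_sample, slope_ref, A_sample, A_ref):
    """Relative singlet-oxygen quantum yield, corrected for the fraction of
    light absorbed by each sensitizer at the irradiation wavelength."""
    f = lambda A: 1.0 - 10.0 ** (-A)
    return phi_ref * (slope_sample / slope_ref) * (f(A_ref) / f(A_sample))

# Hypothetical DPBF decay traces under the sample and the rose bengal reference.
t = np.array([0, 30, 60, 90, 120])                    # s
dpbf_sample = np.array([1.00, 0.93, 0.86, 0.80, 0.73])
dpbf_ref = np.array([1.00, 0.90, 0.81, 0.72, 0.63])
phi_delta = singlet_oxygen_qy(phi_ref=0.76,           # rose bengal in methanol (assumed)
                              slope_sample=dpbf_decay_slope(t, dpbf_sample),
                              slope_ref=dpbf_decay_slope(t, dpbf_ref),
                              A_sample=0.10, A_ref=0.10)
print(f"singlet oxygen quantum yield ≈ {phi_delta:.2f}")
```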
Fluorescence quantum yields: Fluorescence quantum yields were determined on an Edinburg Instrument FLS920 fluorimeter as the absolute values using an integrating sphere. The quantum yields were measured three times and were averaged for each sample. The solution concentrations were kept low (A < 0.1). The fluorescence quantum yields were determined in methanol or a PBS (pH = 7.4, 10 mM, I = 0.1 M)/DMSO mixture.
Transient spectroscopy: The nanosecond laser flash spectroscopy setup was generally operated in a right-angle arrangement of the pump and probe beams. Laser pulses of ≤170 ps or 700 ps duration at 532 nm (20-240 mJ) were obtained from an Nd:YAG laser. The laser beam was dispersed onto a 40 mm long and 10 mm wide modified fluorescence cuvette held in a laying arrangement. An overpulsed Xe arc lamp was used as a source of the probe light. Kinetic traces were recorded using a photomultiplier. Transient absorption spectra were obtained using an ICCD camera equipped with a spectrograph. The samples were degassed by three freeze-pump-thaw cycles under reduced pressure (0.01 Torr). Absorption spectra of the sample solutions were measured regularly between laser flashes to test for possible photodegradation of the solution components using a diode-array spectrophotometer [56].
Intersystem crossing quantum yield: The ISC efficiency (Φ ISC ) was evaluated by a quantitative analysis of the transient optical density (see Supplementary Materials for details) [50,51] using nanosecond TA spectroscopy for selected derivatives in degassed methanol solutions (three freeze−pump−thaw cycles) at three different concentrations (3.0, 6.0, and 10.0 × 10 −6 M).
Cellular toxicity experiments: The human hepatoblastoma HepG2 cell line (ATCC, Manassas, VA, USA) was used to test the cytotoxicity. The cells were grown in supplemented MEM media in 96-well plates according to the manufacturer's instructions. The cells were kept at 37 °C and 5% CO 2 atmosphere during the experiment. The stock solution was prepared by dissolving a BODIPY derivative in DMSO; the final solutions were prepared by dilution of the stock solutions with Minimal Essential Medium (MEM). The final concentration of DMSO in media did not exceed 1%. The corresponding photoproducts were prepared by exhaustive irradiation of the solutions at λ irr = 505 nm for 24 h. Viability was determined using an MTT test (a colorimetric assay based on the reduction of a yellow tetrazolium salt, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) as described before [57]. All experiments were performed at least in triplicates.
Synthesis of meso-carboxy BODIPYs: general procedure A. A methyl ester of meso-carboxy BODIPYs (1 equiv.) was dissolved in dry ethyl acetate and LiI (10 equiv.) was added. The reaction mixture was heated to reflux for 16 h under a nitrogen atmosphere. TLC was used to monitor the reaction. When the reaction was finished, the mixture was cooled to room temperature, and a small amount of HCl (0.2 mL) was added to quench the reaction. Water was added, and the mixture was extracted with ethyl acetate (3 × 10 mL). The combined organic layers were washed with water (20 mL), dried over anhydrous sodium sulfate, filtered, and concentrated to dryness under reduced pressure. The compounds were purified by flash chromatography on silica gel.
Conclusions
We report the synthesis and photophysical and photochemical properties of a series of 2,6-substituted meso-carboxy BODIPY derivatives designed as photoactivatable CO-releasing molecules (photoCORMs). The results provide valuable insights into the structural and electronic factors that affect their photoreactivity. We show that the methyl substituents at positions 1 and 7, originally introduced to improve the chemical stability of BODIPY derivatives, play an unfavorable role in the release of CO. Neither the enhancement of intersystem crossing by heavy-atom substituents nor the decrease in the electron density thanks to electron-withdrawing substituents improved the CO yields. However, CO was efficiently released from π-extended 2,6-arylethynyl BODIPY derivatives with absorption spectra shifted toward a more biologically desirable wavelength range. Subsequently, in vitro cytotoxicity experiments with the most potent meso-carboxy BODIPY photoCORMs and their photoproducts did not reveal any major toxic effects, which justifies them for further biological studies.
|
v3-fos-license
|
2019-03-06T14:06:59.379Z
|
2013-10-30T00:00:00.000
|
67986784
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.47795/zjai9676",
"pdf_hash": "70f98c6be7deefd6cd134f37b22b29f5bf585fea",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44488",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "b3c3d0e8371ffd50949c2722b5b0ee20489369a3",
"year": 2013
}
|
pes2o/s2orc
|
The Use of Virtual Reality in Assisting Rehabilitation
Using virtual reality (VR) to assist with rehabilitation is an attractive option for many reasons.
With the desire to increase the intensity and frequency of therapy sessions whilst maintaining or cutting costs, the use of VR provides a feasible and efficient method of delivering therapy. VR systems are based on three-dimensional, computer-generated simulations of the real world. Interacting with these simulations creates compelling perceptual illusions which allow the user to behave in the virtual world in a similar way to how they behave in the real world. The capability that this type of interaction affords means that VR systems have many advantages in rehabilitation settings; they can provide safe environments which can be tailored to meet the individual's needs, they can mimic real situations, they can make boring repetitive tasks more engaging and interesting, they enable detailed monitoring of performance to be taken and they allow specific and measurable goals to be set. VR also offers a variety of mechanisms for therapeutic gain including the repetitive practice of movements, engaging in problem solving, memory and attention tasks and exposure to anxiety provoking stimuli or events. Until recently, though, the use of VR in rehabilitation has been described as 'more virtual than real' 1, but with rapid developments in affordable software and hardware this is changing rapidly. Today the number and nature of computer-based interactive tasks that can be used for rehabilitation is growing, and their use is becoming more commonplace. However, in reviewing the current state of play it is clear that whilst the potential for VR-based therapies is significant we still have some way to go before they are embedded in everyday clinical practice or the home.
VR-based therapies have been used for a variety of conditions including movement disorders, pain management 2, cognitive deficits 3 and anxiety disorders 4, but the most commonly reported and assessed neurorehabilitation applications have been in postural control 5,6 and stroke rehabilitation. Assessing the efficacy and effectiveness of VR-based therapies is not straightforward though, as the literature on the use of VR in stroke rehabilitation exemplifies. A Cochrane review 7 carried out in 2011 that evaluated the effects of virtual reality and interactive video gaming on upper limb, lower limb and global motor function after stroke revealed only 19 randomised controlled trials that met the inclusion criteria, and 12 of these had sample sizes of less than 25 participants. Whilst the conclusions of this review were favourable for the use of VR and interactive video gaming in improving arm function and activities of daily living in stroke rehabilitation, there was insufficient data to draw more conclusions. This lack of empirical evidence also extends to which aspects of VR-based therapies will be the most important for different groups of patients, and whether the benefits of VR-based therapies are maintained in the long term.
Similarly, a meta-analysis published in 2011 8 to determine whether VR-based therapies provide additional benefits for arm motor recovery after stroke included 12 studies, of which only five were randomised controlled trials. When pooled, the data showed that the patients who were randomised to the VR-based therapy were 4.9 times more likely to improve their motor strength compared to patients in the control conditions. However, there were no large studies which compared conventional therapy to VR-based therapy, and a large and varied number of outcome measures were used in the different trials included in this review. This poor evidence base for the efficacy of VR-based therapies reflects a number of difficulties. The cost of equipment and the need for skilled programmers to create bespoke virtual environments has restricted research programmes in the past, although this is improving. Greater difficulties lie in the designing of informed games-based tasks and in understanding the nature of how the intervention could or should be delivered. Lange et al. 9 described seven core elements that a VR-based intervention should address, including specifying the precise tasks to be targeted for rehabilitation and adjusting the levels of difficulty as the person progresses. This indicates that clinicians and therapists have critical roles to play in designing and implementing VR interventions, and the importance of this was raised by Levac et al. 10, who pointed out that VR systems are tools whereas VR-based therapies involve making decisions about the appropriateness of the VR system in terms of the person's ability to interact with it, the types of VR tasks to be used, frequency of use, rates of progression etc. The role of the therapist in ensuring the clarity of instructions and objectives, and helping with the initial interactions with the virtual world, has been documented in a qualitative study of stroke patients' experiences of VR-based therapy 11, but unfortunately within the current quantitative literature the processes and procedures surrounding how interventions were delivered are generally not well described.
The rapid evolution of the technology in this field has seen different forms of VR systems come on the market, ranging from fully immersive room-sized systems to the more common non-immersive experience of using a games console or a computer and monitor. The range of ways in which individuals can interact with virtual environments has also expanded with the invention of haptic and force feedback devices which provide tactile sensations and allow the user to grasp and feel objects in the virtual world. Recent advances in augmented reality (where the user wears a head-mounted display and views the real world, but with the addition of computer generated information overlaid onto the scene) may also prove to be useful in rehabilitation settings. Alongside these developments the games industry is also making an impact on rehabilitation with products such as the Nintendo Wii being incorporated into therapies. However, viable concerns are being raised about games that have been designed for entertainment being used in therapeutic settings 12. Studies that have classified the content of games 13 will certainly help clinicians decide the appropriateness of the game, but knowing whether playing the game will generate the most appropriate movement pattern or behaviour is more challenging. For example, the mapping of a patient's movement amplitude and direction to the movements of an avatar in the game may not be sufficiently sensitive to provide adequate feedback 13, and when patients have been asked about how they played the games some have admitted to 'cheating' by making proscribed rather than prescribed movement patterns in order to gain more points in the game 11.
Overall, there is good evidence for the feasibility of using VR-based therapies in neurorehabilitation, although consideration needs to be given to the kinds of devices used since some have the potential to cause cybersickness (nausea, eyestrain, blurred vision etc.) 14. However, robust evidence for the effectiveness and efficacy of this type of therapy is yet to emerge, although the signs are promising. Clearly much more work needs to be done and future studies will need to explore not only the functional outcomes of VR-based therapies but also the extent to which they influence cortical reorganization. Some progress has already been made on this; for example, a preliminary report using fMRI to assess changes in five patients with hemiparetic stroke who had received VR training daily for five weeks indicated that following VR training there was a decrease in the ipsilateral activation and an increase in contralateral activation of the sensorimotor cortex when moving the affected limb 15. Future work also needs to consider the extent to which there is transfer from the virtual to the real world, and a greater understanding of the mechanisms that promote change in VR-based rehabilitation settings will aid this. These challenges are likely to be met soon since this rapidly developing field has seen the creation of numerous research laboratories and companies in recent years and the formation of an International Society for Virtual Rehabilitation (www.isvr.org/).
The Use of Virtual Reality in Assisting Rehabilitation
Madeleine A. Grealy, Bilal Nasser
School of Psychological Sciences and Health, University of Strathclyde, Glasgow, UK; Department of Biomedical Engineering, University of Strathclyde, Glasgow, UK
|
v3-fos-license
|
2018-12-26T22:16:46.067Z
|
2009-12-01T00:00:00.000
|
59491146
|
{
"extfieldsofstudy": [
"Geography"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://revistas.inia.es/index.php/fs/article/download/1070/1067",
"pdf_hash": "5eceb94f8c8bbc4af3e8cf592957506f976778fb",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44489",
"s2fieldsofstudy": [],
"sha1": "5eceb94f8c8bbc4af3e8cf592957506f976778fb",
"year": 2009
}
|
pes2o/s2orc
|
Combining spectral mixture analysis and object-based classification for fire severity mapping
This study shows an accurate and fast methodology for evaluating fire severity classes of large forest fires. A single Landsat Enhanced Thematic Mapper multispectral image was utilized with the aim of mapping fire severity classes (high, moderate and low) using a combined approach based on a spectral mixture model and object-based image analysis. A large wildfire in the Northwest of Spain was used to test the model. Fraction images obtained by Landsat unmixing were used as input data in the object-based image analysis. A multilevel segmentation and a classification were carried out by using membership functions. This method was compared with other, simpler approaches in order to evaluate their suitability for distinguishing between the three fire severity classes mentioned above. McNemar's test was used to evaluate the statistical significance of the difference between the approaches tested in this study. The combined approach achieved the highest accuracy, reaching 97.32% with a kappa index of agreement of 95.96%, and improving the accuracy of individual classes.
Introduction
Large forest fires are becoming more frequent in Mediterranean areas due to climatic factors and changes in lifestyles and economic conditions. They are one of the most important causes of environmental alteration and land degradation in the Mediterranean Basin, because of the post-fire exposure of bare soil to rainfall (Leone and Lovreglio, 2005). The main consequences of fire on vegetation depend largely on fire severity. In this study, the term fire severity is defined as the conditions resulting from fire, which can be described by the degree of mortality in above ground vegetation (Ryan and Noste, 1985; Patterson and Yool, 1998; Morgan et al., 2001; Rogan and Franklin, 2001; Key and Benson, 2002; Miller and Yool, 2002; Van Wagtendonk et al., 2004; Doerr et al., 2006). Fire severity maps may complement ecosystem management, providing foresters with baseline data on fire severity and extent required for fire management. Data from these maps may be used to identify areas that have experienced differing fire severity, to plan and monitor restoration and recovery activities, to provide a method for updating current vegetation maps and information for future pre-fire planning (Patterson and Yool, 1998; Brewer et al., 2005). It is therefore important to have techniques available to efficiently evaluate fire effects in burned areas.
Considering the extremely broad spatial extent of, and often limited access to, areas affected by fires, satellite remote sensing provides an important means of gathering information about a burned area in a timely and consistent manner (Rogan and Yool, 2001). Optical satellite imagery from Landsat Enhanced Thematic Mapper (ETM+) has been chosen for this work because the mid-infrared reflectance of vegetation is strongly related to important vegetation canopy characteristics relative to fire effects. It was decided to employ only one post-fire image, as it is considered of great interest to find a quick and affordable methodology for obtaining fire severity maps while avoiding the use of pre-fire images. In doing this, money and time would be saved in terms of obtaining, correcting and normalising images. Landsat missions such as Multi Spectral Scanner (MSS), Thematic Mapper (TM) and ETM+ have been widely used for mapping fire severity (Ryan and Noste, 1985; Milne, 1986; Chuvieco and Congalton, 1988; White et al., 1996; Key and Benson, 1999; Key et al., 2002; Key and Benson, 2004; Roldán-Zamarrón et al., 2006; De Santis and Chuvieco, 2007; González-Alonso et al., 2007; Miller and Thode, 2007; Wimberly and Reilly, 2007; Hoy et al., 2008; Verbyla et al., 2008; Norton et al., 2009).
A lack of spectral contrast is partly responsible for the classical errors related to post-fire classifications of burned areas (Koutsias et al., 1999): confusion of burned areas with dark land covers (water, dark forests), confusion between slightly burned and sparsely unburned vegetation (problem of the mixed pixels), difficulties in discriminating severity of burning, and confusion between burned vegetation and non-vegetated categories, such as urban areas. To minimize these problems, it has usually been necessary to combine diverse remote sensing systems and a variety of image processing techniques (Justice et al., 1993). The range of methods dealing with level-of-damage mapping using post-fire satellite data includes, among others: (1) vegetation indices (White et al., 1996; Key and Benson, 1999; Key et al., 2002; Díaz-Delgado et al., 2003; Chafer et al., 2004; Van Wagtendonk et al., 2004; Epting et al., 2005), (2) linear transformation techniques such as principal components (PC) analysis and the Kauth-Thomas transform (KT) (Patterson and Yool, 1998), (3) spectral unmixing (Roldán-Zamarrón et al., 2006), etc.
Among the large number of techniques applied for the characterization of burned areas, only a few studies have quantitatively compared their accuracies (Chuvieco and Congalton, 1988; Koutsias et al., 1999), offering little information about the potential and limitations of each technique. To address this issue, this study focuses on a quantitative comparison of four approaches for mapping fire severity using a case study of a large fire that burned in Northwest Spain in 1998. We are particularly interested in finding synergies by combining a subpixel-based approach such as Spectral Mixture Analysis (SMA) and Object-Based Image Analysis (OBIA). The SMA approach has been widely used due to its ability to cope better with the problem of the mixed pixel and to minimize the effects of topography on satellite data (Caetano et al., 1994; Caetano, 1995; Caetano et al., 1996; Cochrane and Souza, 1998; Rogan and Franklin, 2001; Rogan et al., 2002). SMA has the potential of producing results that are directly related to post-fire land management (Caetano et al., 1994; Cochrane and Souza, 1998; Roldán-Zamarrón et al., 2006). In the case of post-fire assessment, the potential of spectral unmixing relies on the sub-pixel analysis of the materials of a burned area, and it has been considered advantageous over vegetation index-based methods due to its improved capability to distinguish burns from other bare or sparsely vegetated areas (Caetano et al., 1996; Díaz-Delgado et al., 2001). Object-based classifications are also increasingly being used. In comparison with pixels, image objects carry much more useful information and, therefore, can be characterised by far more properties, such as form, texture, neighbourhood or context, than pure spectral or spectral-derivative information (Baatz and Shäpe, 1999). Object-based classification models have been developed and applied on Landsat TM (Mitri and Gitas, 2002; Mitri and Gitas, 2004a; Mitri and Gitas, 2004b), NOAA-AVHRR images (Gitas et al., 2004), and IKONOS images (Mitri and Gitas, 2006; Mitri and Gitas, 2008), resulting in the accurate mapping of burned areas in Mediterranean areas.
The main objective of our research is to demonstrate that the accuracy obtained using a combined approach (SMA plus OBIA) for fire severity mapping with a medium-resolution remote sensing image is superior to that obtained by more traditional approaches.
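As background to the comparisons reported later, the sketch below illustrates, with a made-up confusion matrix and made-up discordant counts (not the study's results), how overall accuracy, the kappa index of agreement, and McNemar's test statistic can be computed when two classification approaches are evaluated on the same reference sites.

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix (rows = reference, cols = map)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return po, (po - pe) / (1 - pe)

def mcnemar_chi2(b, c):
    """McNemar's statistic (with continuity correction) from the two discordant counts:
    b = sites correct with method 1 only, c = sites correct with method 2 only."""
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical 3-class confusion matrix (high/moderate/low severity) for one approach.
cm = [[50, 3, 1],
      [4, 45, 5],
      [0, 2, 40]]
acc, kappa = overall_accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.2%}, kappa = {kappa:.3f}")
print(f"McNemar chi-square = {mcnemar_chi2(12, 3):.2f}")    # hypothetical discordant counts
```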
Study site description
The study site, 'Tabuyo del Monte', is located in the Sierra del Teleno, in Northern Spain (figure 1). It is a small mountain chain in the South-East (SE) of León province (Spain) with a SE aspect, a maximum slope of 11% and elevations ranging between 850 and 2,100 m above sea level.
The climate is Mediterranean, with an average annual rainfall between 650 and 900 mm and two or three months of dryness in the summer time. Soil in this area is very sandy and acidic (pH = 5.5) (Calvo et al., 1998). Currently vegetation is a large natural Pinus pinaster Ait. community covering 11,500 ha. The Spanish Vegetation Map shows that, within the fire scar, roughly 78% was covered by pineland, 18% by shrubs and 4% by Pyrenean oak. Fires have occurred frequently in this community, generally affecting small areas and mostly caused by dry spring-summer storms. However, in September 1998 there was a large fire, presumably caused by a military manoeuvre, which burned more than 3,000 ha during four days (between September 13 and 17). This fire is the object of this study.
Remotely sensed data
No Landsat cloud-free scenes close to the wildfire date were found, so the first scene available corresponded to September 16, 1999. Van Wagtendonk et al. (2004) also used post-fire Landsat ETM+ imagery acquired one year after the fire occurred for fire detection. Key (2005) pointed out that extended assessment (EA), which occurs during the first growing season after fire, may provide a more complete representation of actual fire effects. It captures first-order effects that include survivorship and delayed mortality of vegetation present before fire. The former is detected by regrowth from roots and stems of vegetation that burns but remains viable (McCarron and Knapp, 2003; Safford, 2004). Most other first-order effects, such as char, scorch and fuel consumption, are expected to persist until the next growing season, with two exceptions. Areas prone to surface erosion from wind or precipitation may show a decrease in ash cover and an increase of newly exposed mineral soil. Also, canopy foliage that is heat scorched or dies from girdling may drop to ground litter over the interval before EA. Since such effects are more or less complementary in regards to severity assessment, these delayed responses are not expected to significantly alter the remotely sensed magnitude of change detected between initial and extended assessment. In addition, mortality is by then complete, so the extent of perimeters and the distribution of severity represent final conditions.
Preprocessing of remotely sensed images is a preparatory phase that, in principle, improves image quality for further analyses.In this study only a geometric correction was performed.Atmospheric correction was not necessary since only one post-fire image was used to map the fire severity into the fire scar and it was cloud free.In addition, there is a likelihood that uneven implementation of corrections would not necessarily provide a better representation of the mixing space of the SMA model (Elmore et al., 2000).
For the geometric correction, a set of 22 Ground Control Points (GCPs), selected using the National Topographic Map (Instituto Geográfico Nacional, IGN) at 1:50,000 (UTM zone 30T, European Datum 1950), and a 25 m-grid digital elevation model (DEM) were used. A first-order polynomial warp function was applied and a nearest-neighbour resampling protocol was then used to preserve original pixel values (Jensen, 1996; Lillesand and Kiefer, 2000). The root mean square error (RMSE) of the transformation was less than 1 pixel. In general, road intersections were used as GCPs since they could be easily located in the image. An illumination correction was performed with the C-correction (Teillet et al., 1982), which lessens the effects of shadows caused by elevation variations in the landscape.
Data analysis
The development of the main proposed methodology involved two cascaded image analysis techniques: linear spectral mixture analysis (SMA) and object-based image analysis (OBIA).
Image objects were extracted from the fraction images (obtained from the SMA algorithm) in the segmentation procedure prior to classification (4th approach). In order to emphasize the benefits achievable using the adopted approach, quantitative evaluations and comparisons with the other approaches (1st, 2nd and 3rd) were provided (figure 2).
(1) First approach: data analysis for the Pixel-Based Method (ETM+ISODATA). The first approach is a pixel-based unsupervised image classification applying the Iterative Self-Organizing Data Analysis Technique (ISODATA) (Sunar and Özkan, 2001; Miller and Yool, 2002) to the ETM+ image. ISODATA is a clustering algorithm that compares the radiometric value of each pixel with a predefined number of cluster attractors and shifts the cluster mean values so that the majority of the pixels belong to a cluster. In this case, we interacted with the procedure at the beginning, indicating the number of predefined clusters to be created and the iterations to be carried out, and at the end, deciding which class represents which surface objects and merging or rejecting the classes with non-realistic representatives. We masked the satellite image with the official fire perimeter polygon in order to estimate the fire severity categories within the fire scar.
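For readers who wish to reproduce this clustering step, a minimal sketch is given below. ISODATA itself is not available in common open-source Python libraries, so k-means (which lacks ISODATA's merge/split logic) is used here only as a simplified stand-in; the array names, cluster count and parameters are illustrative, not those of the study.

```python
# Illustrative sketch: unsupervised clustering of masked ETM+ bands into 3 classes.
# k-means is used as a simplified stand-in for ISODATA; inputs are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

def cluster_fire_severity(bands, fire_mask, n_classes=3):
    """bands: (n_bands, rows, cols) array; fire_mask: boolean (rows, cols) array."""
    rows, cols = fire_mask.shape
    pixels = bands[:, fire_mask].T                     # (n_pixels_in_scar, n_bands)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(pixels)
    classified = np.full((rows, cols), -1, dtype=int)  # -1 = outside the fire perimeter
    classified[fire_mask] = labels
    return classified
```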
(2) Second approach: data analysis for the Subpixel-Based Method (SMA+ISODATA). Because the spatial resolution of Landsat ETM+ imagery is 30 by 30 m, the materials in a given picture element (pixel) are rarely represented by a single physical component. Therefore, in the first stage of this approach (figure 3), a linear spectral model was used, based on the assumption that the image spectra are formed by a linear combination of n pure spectra, such that:

DN_b = Σ_{i=1}^{n} F_i · DN_{i,b} + ε_b    (1)

where DN_b is the digital number in band b, DN_{i,b} is the digital number for endmember i in band b, F_i is the fraction of endmember i, and ε_b is the residual error for each band.
Field data
In addition to the optical satellite image data, field data were collected in the autumn of 1999 from 72 random plots for fire severity in the Tabuyo burned area (figure 1).Two different datasets were used: one for training and developing the classification rules, and another one for assessing the accuracy of the classification.Separate and independent data were used for training and for accuracy assessment.Despite this time lag, sufficient material remained in the field (scorched leaves on branches and the ground, char on the tree trunks, etc.) for an adequate, qualitative estimate of the degree of severity.Resprouting green leaves did not interfere with these observations.
The field survey plots were large, with an average area of 0.78 ha (100 m diameter). Plot locations were co-registered with the ETM+ image using a global positioning system. The plots were randomly located within pre-selected large areas with homogeneous fire severity levels and low slope gradients, identified by interpreting a 0.7 m-pixel post-fire colour aerial photograph in order to locate representative fire severity categories in the fire scar (scale 1:25,000, digital images orthorectified, mosaicked and examined on-screen in a GIS, captured in October 1998).
Classification of each field plot was determined by visual inspection, based on the observed majority fire severity class within each plot. Three possible fire severity categories were defined according to the degree of scorching of the vegetation (figure 2). We considered high-severity, moderate-severity and low-severity classes, as used by other researchers (e.g., Jakubauskas et al., 1990; Turner et al., 1994; DeBano et al., 1998; Patterson and Yool, 1998; Brown and Smith, 2000; Rogan and Yool, 2001; Arno and Fiedler, 2005). The different fire severity classes were defined as follows: (1) low: areas where shrubs up to 2 m burned and the canopy was not scorched or only partially scorched; (2) moderate: areas where shrubs were incinerated and the canopy scorched; (3) high: areas where shrubs were incinerated and the canopy completely burned and apparently dead, even though some plants may still be able to sprout.
The most common approach is to assume linear unmixing (Shimabukuro et al., 1991), although non-linear mixing can occur (Adams et al., 1993; Roberts et al., 1993). Smith et al. (2005) tested which mixing model (linear or non-linear) is most appropriate for fire severity estimation; whether the optical mixing was linear or non-linear was largely controlled by the size of the particles present in the ash.
Endmember selection is the most important step in SMA. It determines how accurately the mixture model can represent the spectra. The endmember selection must accommodate the dimensionality of the mixing space; it involves determining the number of endmembers and the methods used to select them. The number of possible endmembers, however, is restricted to the number of bands of the image data plus one (Hill, 1993; Small, 2004). The Landsat ETM+ sensor has sufficiently low noise that the inherent dimensionality of spectrally diverse images is generally equal to the full six dimensions. We limited this analysis to bands 3, 4, 5, and 7, as White et al. (1996) did in their burn mapping work, and the spectral response of the selected endmembers was visually verified using local knowledge (Goodwin et al., 2005).
Shade can be included either implicitly (fractions sum to 1 or less) or explicitly as an endmember (fractions sum to 1). In our case it was included implicitly: the constraint ΣF_i = 1.0 was not included in the equation system of the unmixing model (unconstrained solution).
The least-squares solution is the method most often used for solving the linear mixture model (Smith et al., 1990; Shimabukuro and Smith, 1991; García-Haro et al., 1996) due to its simplicity and ease of implementation. As the results from the unconstrained solution do not reflect the true abundance fractions of endmembers, the root-mean-square error (RMSE) was used to assess the fit of the model (Adams et al., 1993; Roberts et al., 1998a), as shown in equation (2), where m is the number of bands:

RMSE = √( (1/m) Σ_{b=1}^{m} ε_b² )    (2)
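A minimal sketch of the unconstrained least-squares unmixing of equation (1) and the per-pixel RMSE of equation (2) is given below, assuming the endmember spectra and the observed pixel spectra are already arranged as NumPy arrays; variable names are illustrative.

```python
# Minimal sketch of unconstrained linear unmixing (eq. 1) and per-pixel RMSE (eq. 2).
# E: (m_bands x n_endmembers) matrix of endmember DNs; spectra: (n_pixels x m_bands)
# observed DNs. Names and shapes are illustrative assumptions.
import numpy as np

def unmix(spectra, E):
    # Least-squares fractions for every pixel: solve E @ F ≈ DN, no sum-to-one constraint.
    fractions, *_ = np.linalg.lstsq(E, spectra.T, rcond=None)   # (n_endmembers, n_pixels)
    residuals = spectra.T - E @ fractions                       # ε_b per band and pixel
    m = E.shape[0]
    rmse = np.sqrt((residuals ** 2).sum(axis=0) / m)            # per-pixel RMSE
    return fractions.T, rmse
```

Applying such a routine to every pixel inside the fire perimeter would yield the fraction images and the RMSE image discussed below.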
The ISODATA classifier was used to classify fraction image into fire severity categories: high, moderate and low (Sunar and Özkan, 2001;Miller and Yool, 2002;Roldán-Zamarrón et al., 2006).Fraction image was masked by fire perimeter polygon before performing unsupervised classification.
The definition of appropriate spectral endmembers may be done either using reference endmembers from spectral libraries or from the image itself (image endmembers). As appropriate reference endmembers were not available for the study site, an approach to extract pure pixels from the image was applied to retrieve image endmembers. For most SMA applications, image endmembers are utilized because they can be easily obtained and represent spectra measured at the same scale as the image data (Roberts et al., 1998a). A minimum noise fraction (MNF) technique (essentially two cascaded principal components transformations) was used to determine the inherent dimensionality of the image data, to segregate noise in the data, and to reduce the computational requirements for subsequent processing (Boardman and Kruse, 1994). The data space could be divided into two parts: one part associated with large eigenvalues and coherent eigenimages, and a complementary part with near-unity eigenvalues and noise-dominated images. By using only the coherent portions, the noise was separated from the data, thus improving spectral processing results (ENVI, 2000). It was possible to run an inverse MNF transform using a spectral subset including only the good bands, or smoothing the noisy bands before the inverse. Separating purer from more mixed pixels reduced the number of pixels to be analysed for endmember determination and made the separation and identification of endmembers easier. The four new MNF-transformed bands were then analysed to find the most spectrally pure (extreme) pixels in the image using a pixel purity index (PPI) classifier. The PPI image was the result of several thousand iterations of the PPI algorithm. The higher values indicated pixels that were nearer to the corners of the n-dimensional data cloud, and were thus relatively purer than pixels with lower values. After the purer pixels were identified in the n-dimensional scatter plot, an inverse MNF transform was applied to obtain the endmember spectra.
(a) Image segmentation
Segmentation is a prerequisite to object-based classification; it is the subdivision of an image into separate regions or objects by gathering together many pixels in a certain way. In comparison to pixels, image objects carry much more useful information and can therefore be characterized by far more properties (such as form, texture, neighbourhood or context) than pure spectral or spectral-derivative information (Baatz and Schäpe, 1999).
The segmentation used in this study was a bottom-up region-merging process, starting with one-pixel objects. Throughout the segmentation procedure, the whole image was segmented and image objects were generated based upon several criteria of homogeneity in colour and shape (compactness and smoothness). In a subsequent step, smaller image objects (Level 1, the fine scale used to define fire severity categories within the fire scar) were merged into bigger ones (Level 2, the coarse scale used to define the object boundaries of the fire scar). The scale parameter was set to 5 and 20 at levels 1 and 2, respectively. The composition of the homogeneity criterion was set as follows: colour 0.8 and shape 0.2; within the shape criterion, smoothness was 0.1 and compactness was 0.9.
This process is called multiresolution segmentation, and it was used to construct a hierarchical network of image objects that simultaneously represented the image information at different spatial resolutions (levels 1 and 2).
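The multiresolution segmentation above was performed in proprietary software (eCognition). The sketch below is only a rough stand-in using SLIC superpixels from scikit-image to illustrate the idea of grouping pixels into image objects at a fine and a coarse scale; it is not the algorithm used in the study, and the parameter values are hypothetical.

```python
# Rough stand-in for multiresolution segmentation: SLIC superpixels (scikit-image >= 0.19).
# This is NOT the eCognition algorithm; it only illustrates producing image objects at
# two scales (fine and coarse). Parameter values are hypothetical.
from skimage.segmentation import slic

def segment_two_levels(fraction_image):
    """fraction_image: 2-D array, e.g. the burned-vegetation fraction (values 0-1)."""
    level1 = slic(fraction_image, n_segments=2000, compactness=0.1, channel_axis=None)  # fine
    level2 = slic(fraction_image, n_segments=200, compactness=0.1, channel_axis=None)   # coarse
    return level1, level2
```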
(b) Object-based classification
The classification of the image objects was performed by using membership functions based on fuzzy theory combined with user-defined rules.A membership function ranges from 0 to 1 for each object's feature values with regard to the object's assigned class (Navulur, 2007).Spectral, shape, and statistical characteristics as well as relationships between linked levels of the image objects can be used in the rule base to combine objects into meaningful classes (Benz et al., 2004).The fuzzy sets were defined by membership functions that identify those values of a feature that are regarded as typical, less typical, or not typical of a class.
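A membership function of this kind can be illustrated with a simple trapezoidal form, as sketched below; the breakpoints are hypothetical and do not correspond to the thresholds actually used in the study.

```python
# Simple trapezoidal fuzzy membership function of the kind used to score an object
# feature (e.g. mean burned-vegetation fraction) against a class. Breakpoints are
# illustrative only.
import numpy as np

def trapezoidal_membership(x, a, b, c, d):
    """Returns 0 outside [a, d], 1 inside [b, c], and linear ramps in between."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - a) / max(b - a, 1e-12), 0.0, 1.0)
    falling = np.clip((d - x) / max(d - c, 1e-12), 0.0, 1.0)
    return np.minimum(rising, falling)

# Example: degree to which a fraction value of 0.72 is "typical" of a hypothetical class
membership = trapezoidal_membership(0.72, a=0.5, b=0.6, c=1.0, d=1.1)  # -> 1.0
```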
In our combined approach, an object-based image classification was performed using fraction images obtained in the second approach of this study as input of the model.
The burned vegetation fraction image gave the best result for fitting the image objects at both the first and second segmentation levels. The scale parameter was set to 1 and 5 at levels 1 and 2, respectively. The composition of the homogeneity criterion was set as follows: colour 0.9 and shape 0.1; within the shape criterion, smoothness was 0.2 and compactness was 0.8. As in the third approach, a fuzzy set was defined by membership functions that identified those values of a feature that were regarded as typical, less typical, or not typical of a class.
Classification accuracy was evaluated using ground referenced data.To ensure independence, no training data were used for the validation.Ground referenced data in this context means having been derived from a presumably more accurate data source than the thematic map, in this case from ground visits.The same set of ground data was used in the assessment of the accuracy of the thematic maps obtained by different classifiers in order to compare their suitability in fire severity mapping.
The accuracy assessment was based on confusion matrices, Overall Accuracy (OA), Producer's Accuracy (PA), User's Accuracy (UA) and the Kappa Index of Agreement (KIA) statistic (Congalton, 1991). Error matrices were formed with data from the thematic map and ground data (Congalton and Green, 1999). McNemar's test was selected to determine significant differences among classifications. Foody (2004) stated that for dependent samples the statistical significance of the difference between two proportions may be evaluated using McNemar's test. It is a non-parametric test based upon confusion matrices that are 2 by 2 in dimension, with attention focused on the binary distinction between correct and incorrect class allocations. The McNemar test is based on the standardized normal test statistic

z = (f_12 − f_21) / √(f_12 + f_21)    (3)

in which f_ij indicates the frequency of ground data lying in confusion matrix element (i, j); f_12 and f_21 are the numbers of pixels correctly classified by one method but incorrectly classified by the other.
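The sketch below illustrates these accuracy measures: overall accuracy and kappa computed from a confusion matrix, and the McNemar z statistic of equation (3). The inputs are illustrative.

```python
# Sketch of the accuracy measures described above. cm: square confusion matrix with
# rows = reference (ground) classes and columns = classified classes.
import numpy as np

def overall_accuracy_and_kappa(cm):
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement (OA)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa

def mcnemar_z(f12, f21):
    """f12/f21: samples correct under one classifier but wrong under the other (eq. 3)."""
    return (f12 - f21) / np.sqrt(f12 + f21)
```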
Spectral unmixing was performed using the four endmembers derived from the image data, leading to four fraction images and a root-mean-square error (RMSE) image. The RMSE was calculated for all image pixels; the error image was therefore used to assess whether the endmembers were properly selected and whether the number of selected endmembers was sufficient. The RMSE must be lower than the level of noise in the system in order to guarantee the viability of the results; the Landsat ETM+ signal-to-noise value is approximately 2 DN. The unmixing model results were evaluated as proposed by Adams et al. (1995). First, we evaluated the RMSE image; our final model showed low RMSE (<2 DN). Typically, a reasonable mixing model results in an overall RMS error threshold of 2.5 DN for an image (Roberts et al., 1998a). Next, the fraction images were evaluated and interpreted in terms of field context and spatial distribution. In this study, final fractions were allowed to be negative or greater than one (Román-Cuesta et al., 2005).
Fraction images derived from different combinations of image endmembers were evaluated with visual interpretation and the error extent and distribution in the error fraction image.The criteria used to identify the best suitable fraction images were based on: (1) high-quality fraction images in the fire scar, and (2) relatively low errors in the fire scar.The best results of the spectral mixture analysis for the Tabuyo fire scar are shown in figure 6.
Bright values in these images indicated areas of high fractional abundance for the endmember in question.Bright values on the RMS error image indicated areas that were poorly modelled by the least squares algorithm (values were greater than the 2.5 threshold).A cross-check with the Spanish vegetation map (1:50,000) revealed that these areas with a high RMS error were represented by crops at the time of image acquisition.
General results by approach
The different image processing methods employed and the classification techniques applied with either ISODATA or OBIA yielded varied results (figure 4).
When the unsupervised classification (ISODATA) was applied directly to the satellite image (first approach), the best results for mapping fire severity were obtained using bands 3-5 and band 7, 5 iterations and forcing 3 clusters. This approach could be considered the most cost-effective method, since no image processing technique was applied to the digital number data, but it produced the worst results among the tested approaches.
Regarding the second approach, a final dataset of four endmembers was obtained, formed by soil, two kinds of vegetation (veg1 and veg2) and burned vegetation. The final vegetation 1 (veg1) endmember was extracted from the canopy of pine stands (Pinus pinaster Ait.), the vegetation 2 (veg2) endmember was mainly derived from the canopy of Quercus pyrenaica Willd., while the soil endmember was located on agricultural areas and the burned vegetation endmember was extracted from the fire scar (figure 5). The rest of the image presented a random error distribution. Given that the areas with high error fell outside the fire scar, we assumed that the chosen endmembers had produced robust and representative image fractions of burned areas, soil and the two kinds of vegetation. In this method, the fire severity map was produced by applying the ISODATA classifier to the burned vegetation fraction image (figure 4).
For the third approach, an object-based analysis was applied to the ETM+ image. Visual interpretation of different image segmentation results showed that it was extremely beneficial to use band 4 (NIR) and band 7 (SWIR), since they are related to wildfire reflectance values. The official fire perimeter polygon was used as a thematic layer in the segmentation in order to better delineate the fire scar boundaries (figure 7).
Two different levels of image objects representing different scales were created: a fine scale to capture fire severity categories and a coarser scale to define the burned area. Classification at level 2 included the following classes: not burned and possibly burned. This level provided a context for detecting the burned area in the image, and it was used as super-object information for level 1. Features based on object spectral information (image DN) as well as object contextual information, such as the relation to super-objects, were used in the classification. The features based on object spectral information were brightness and the B5/B7 ratio at level 2, and the Normalized Burn Ratio (NBR = (B4 − B7)/(B4 + B7)). The existence of super-objects was used as the contextual feature. The object NBR was calculated from the NBR values of all n pixels forming an image object. Membership functions were adapted for each chosen classification feature. Aerial photos and field notes were used to help interpret the satellite image and select burn thresholds.
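As an illustration, the per-pixel NBR and its aggregation to the object level can be computed as sketched below; the array names are illustrative.

```python
# Per-pixel Normalized Burn Ratio from ETM+ band 4 (NIR) and band 7 (SWIR), and its
# aggregation to the mean NBR of an image object. Inputs are illustrative NumPy arrays.
import numpy as np

def nbr(b4, b7, eps=1e-6):
    return (b4 - b7) / (b4 + b7 + eps)

def object_mean_nbr(nbr_image, object_labels, object_id):
    """Mean NBR over all pixels belonging to one image object."""
    return float(nbr_image[object_labels == object_id].mean())
```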
For the fourth approach (the combined approach using SMA and OBIA), two different levels of image objects representing different scales were also created with the aim of capturing fire severity categories. The same class hierarchy as developed in the third approach was adopted (figure 7). Fraction images were used to extract features that were not well distinguishable in the multispectral image. Fuzzy membership functions, which are the knowledge-based part of the classification methodology in the eCognition® software, were used to apply a fuzzy range to the selected features (which separates a class, for instance a fire severity class, from other classes). Because it is necessary to choose the feature to which the membership function is applied, the feature space was explored to determine which feature(s) best separate the problem classes (fire severity classes). Features based on object information (abundance values from the fraction images) as well as object contextual information, such as neighbourhood and relation to super-objects, were used in the classification (figure 8). The features mean burned vegetation, mean difference to scene burned vegetation and mean soil were useful for differentiating possibly burned from not burned at level 2. At level 1, features such as the existence of super-objects and mean burned vegetation were the best at separating the fire severity classes (low, moderate and high) from each other. Membership functions were adapted for each chosen classification feature by interactively finding the lower and upper limits of the fuzzy ranges on segmentation level 1 for the classes low, moderate and high fire severity (figure 9).
Fire severity class areas
The area of the fire severity categories varied among the four approaches analyzed in this study. The total area mapped by each approach depends on the combined ability of the classifier and the potential of each technique to separate the fire severity classes.
Accuracy assessments
The classification accuracy between sites visited on the ground and the final fire severity classification displayed different results (table 2). To ensure independence, no training data were used for validation. The remote sensing fire severity values at the pixel level of the plots were then compared to the field survey (100 m diameter) fire severity classes (table 4).
Regarding the KIA (Kappa Index of Agreement) statistics, the highest values corroborated the overall accuracy (OA) results, with the object-based approaches (3 and 4) showing the highest accuracies, in contrast to the pixel-based approaches. As happened with the overall accuracy, KIA values improved when fraction images were introduced.
A test of significant differences among the KIA statistics, based on equation (3), was calculated and is shown in Table 3. The large differences in accuracy observed between the classifications, expressed in terms of proportions of correctly allocated pixels, were statistically significant at the 0.1% level of significance. This led to the conclusion that the classification accuracies derived from the four approaches were distinctly different, and that the advantage of approach 4 over the rest of the approaches was significant.
An error matrix for each approach was also produced.Based on the User's, Producer's and KIA per class accuracy, individual class accuracies revealed diverse differences among methodologies (table 4).
Discussion
Confusion among classes occurred for all approaches even though the classes were supposedly quite different in their spectral responses. For the approaches that used ETM+ data instead of fraction imagery as input data, the moderate and low severity classes were confused in the visible and NIR bands, but were well separated in the SWIR range. In this regard, White et al. (1996) reported that Landsat TM band 7 (SWIR) data are useful for distinguishing among different burn severity classes. The approaches using fraction images displayed high severity areas characterized by a large amount of burned vegetation and low amounts of vegetation and soil, the opposite trend for the low severity areas and an intermediate trend for the moderate severity areas (figure 10). The incorporation of the fraction image into the classification procedure increased the accuracy for both the subpixel- and object-based approaches. By employing the fraction image, the overall accuracy of the fire severity classification was improved by 18.30% in the subpixel-based approach and by 12.65% in the combined approach (Table 2). Several authors have reported that using fraction images in a classification produces higher accuracy than classifying the single sensor bands (Smith et al., 1990; Settle and Drake, 1993; Caetano et al., 1994; Ustin et al., 1996; Huguenin et al., 1997; Cochrane and Souza, 1998; Settle and Campbell, 1998; Aguiar et al., 1999; Elmore et al., 2000; Riaño et al., 2002; Theseira et al., 2002).
The methods that used ISODATA as the classifier mapped more area as moderate and low severity, whereas the OBIA methods (object-based classification) classified more area as high and moderate. Approach 1 (pixel-based) displayed the highest values for the low severity class, almost twice as large as the object-based approaches, and the lowest values for the high severity class. This is because the ISODATA clustering algorithm only had a one-dimensional space in which to separate pixels into classes, and the class means therefore tended to be uniformly distributed along that one-dimensional space (Miller and Yool, 2002). The better results obtained by the approaches that included object-based image analysis are mainly due to the ability of a context-based classification to reduce speckle in the classification. The object-based classification, which first extracts homogeneous regions and then classifies them, avoids the salt-and-pepper effect typical of spatially fine, pixel-based classification results. Moreover, the combined approach (SMA/OBIA) performed better at the individual class level. It dealt satisfactorily with the problems of class confusion (burned vegetation and non-vegetated areas, slightly burned and unburned vegetation) by using the information contained in the fraction imagery within the object-based classification, and it had an advantage over the other approaches tested by providing the opportunity to combine contextual and subpixel information (the contribution of each surface material to each mixed pixel) in the classification, which enhanced the accuracy. The KIA per class, PA and UA reached high accuracy values, indicating that confusion between problematic classes such as moderate and high was minimized. This implies that introducing the fraction image and object-based classification may be helpful for improving separability between classes (Table 4).
The proposed method shows potential for further applications such as the mapping of land cover changes, mining activities, etc. Nevertheless, future studies will examine alternative approaches for analyzing images of different resolutions.
Conclusions
Fire severity mapping is an important step in providing operational information for post-fire restoration. This paper investigated the use of satellite imagery and image processing techniques to derive fire severity information. Different approaches were tested in order to obtain accurate fire severity maps. The results showed that fraction images generated by unmixing a Landsat ETM+ post-fire image can be used as input to an object-based classification, improving the resulting accuracy. The results complement the findings of a small number of previous studies that support the use of SMA in mapping fire severity due to its ability to produce fractions representative of subpixel components directly related to fire severity. The accuracy of the fire severity categories was better when combining SMA and OBIA than for the rest of the approaches tested. McNemar's test was used to evaluate the statistical significance of the difference between the four methods. The difference in accuracy, expressed in terms of proportions of correctly allocated pixels, was statistically significant at the 0.1% level, which means that the thematic mapping result using the combined approach (SMA/OBIA) achieved a much higher accuracy than the rest of the approaches.
Figure 1. Location of study area in Spain; and location of field samples in the fire scar.
Figure 4. Final classified images obtained by means of ISODATA (approaches 1 and 2) and object-based classification (approaches 3 and 4). Colours corresponding to each class are indicated in the legend.
Figure 5. Image endmembers used in the spectral mixing model, expressed in image radiance (DN) for the 3, 4, 5 and 7 reflective ETM+ bands.
Figure 7. A section of the study area showing the segmentation results on level 1. (a) Approach 3. (b) Approach 4. (c) Class hierarchy created for both approaches 3 and 4.
Figure 8. Membership functions of levels 2 and 1 for the fourth approach.
Figure 9. Membership functions of level 1 for the fourth approach.
Figure 10. Mean values for each fire severity class (high, moderate and low), for approaches using ETM+ data (left) and for those using fraction imagery (right). Left graphic bars represent the utilized bands: B3, B4, B5 and B7. Right graphic bars represent the different endmembers: vegetation 1, vegetation 2, burned vegetation, soil and error term (RMS). Each sub-section in the right graphic shows the three selected classes: high, moderate and low severity.
Table 1. Severity area estimated for each approach.
Table 2. Overall accuracies and KIA statistics for each considered approach.
DESIGN AND OPERATIONAL INNOVATIONS IN ADAPTING THE EXISTING MERCHANT RIVER FLEET TO COST-EFFECTIVE SHIPPING
Modernisation of the existing river fleet adapted for the local conditions of the Middle and Lower Vistula can be considered as a solution to slow down the progressive decrease of river transport in this area. The implementation of technical improvements, smart technologies and enhancement of transport performance may partially solve the problem of growing demand for multimodal transport of containers and oversized loads in a shorter perspective than the expected period of planned revitalisation of the river. The paper presents investigations on the modernisation of river convoys adapted to the current navigational conditions of the Lower Vistula. The different options have been discussed by the authors with river fleet operators and the best recognised solution was agreed to be the use of river convoys combining modernised motor barges and the pushed barges previously used in this area. Improvement of the transport profitability, reduction of fuel consumption, air pollution and noise can be achieved at minimum costs by modernisation of the main power-propulsion systems of outdated motor barges and the implementation of innovative steering systems on pushed barges. The demand for power-propulsion and manoeuvring performance of modernised convoys is discussed in the paper.
INTRODUCTION
The ten-year perspective of inland waterways modernisation in Poland anticipated in the assumptions made by the Ministry of the Maritime Economy and Inland Navigation [6] is the economic justification for the modernisation of the existing means of transport before the expected navigational requirements are satisfied.
The technological and economical study presented here aimed to determine technical assumptions for inland waterborne transport units intended for navigation on the Lower Vistula, currently used mainly for the individual transport of bulk and oversized loads.
The growing demand for container transport via inland waterways is the reason for the development of technical solutions related to ecological river training, planning river ports with logistic centres and transport means adapted to the local conditions within the framework of European programmes [3,10], as well as feasibility studies commissioned by the Polish government. The systemic approach to waterborne inland transport development should include comprehensive logistic planning [7].
The ship-owners currently operating river vessels consider the construction of a new river fleet for the current river conditions to be out of the question. However, modernisation of the existing fleet, adapted for local conditions, should be considered and should slow down the further decrease of river transport. The developments should be based on the types of barges currently being used.
The latest study on the implementation of smart ships in waterborne transportation [21] presents the view that the first fully autonomous ships will be put into service in less than five years, so new designs and developments of existing waterborne transport units should take into account the rapid introduction of smart technologies in river transport.
The wide implementation of ICT technologies will increase the flexibility and efficiency of operations and enable areas previously not available for manned vessels to be used for safe and efficient transport [21].
DEVELOPMENT OF SMART RIVER TRANSPORT IN EUROPE
The main tendencies in modern waterborne transport development are the implementation of smart technologies and economies of scale.
SMART TECHNOLOGY APPLICATIONS
The levels of ship autonomy range from 0 to 6 [21]: level 0 means manual operation; level 1, automatic control over the set route; level 2, a calculated route that can be updated by an external system; level 3, navigation and ship operation decisions calculated by the system and controlled by the operator in cases of uncertainty; level 4, decisions worked out by the system that must be approved by a human operator; level 5, monitored autonomy that needs a human response only in situations that are uncertain for the system; and level 6, full autonomy based on artificial intelligence.
Remote control of operations requires the automation of all the main systems on board, and their integration into a single communication channel [11,12]. The transition from level zero to level one of autonomy means, as a first step, the integrated control of steering and propulsion devices.
DNVGL class guidelines regarding autonomous and remotely operated ships, introduced in September 2018, recommend that onboard "systems and components supporting the propulsion function shall be arranged with redundancy and capacity sufficient to ensure that the vessel can maintain a navigable speed in case of potential failures of single systems and components" [9].
The conditions of smart inland waterborne transport development should be included in the design assumptions for both the new builds and modernised units.
ECONOMY OF SCALE OF RIVER TRANSPORT UNITS
The well-known development of economies of scale with respect to maritime transport cannot be simply transferred to the inland waterborne transport environment. However, in the last ten years the tendency of a growing tonnage of transport units has been observed within the main inland fleets in Western Europe.
Inland waterway transport is energy-efficient, as an inland vessel is able to transport one tonne of cargo almost four times further than a truck using the same consumption of energy (370 km as against 300 km by rail and 100 km by truck). The transport cost is competitive and the unit cost decreases over long distances.
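The "almost four times further" figure follows directly from the distances quoted; the short script below simply reproduces that arithmetic.

```python
# Distance travelled by one tonne of cargo for the same energy consumption,
# using the figures quoted in the text.
distances_km = {"inland vessel": 370, "rail": 300, "truck": 100}
baseline = distances_km["truck"]
for mode, km in distances_km.items():
    print(f"{mode}: {km} km, {km / baseline:.1f}x the distance of a truck")
# inland vessel: 3.7x -> "almost four times further" than a truck
```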
Transporting goods on inland waterways is advantageous, as convoys of pushed barges can transport more goods per distance unit (tkm) than any other type of land transport and could help to reduce road traffic. In the first quarter of 2018, the transport performance on European inland waterways reached 34.9 billion tkm [5].
The observed tendencies of changes in the size of river units towards greater tonnages since 2005 are presented in Fig. 1 [15].
The waterborne inland transport of dry goods in Western Europe is dominated by Dutch vessels. These account for 49% of West European vessels and 56% of their tonnage. There are around 10000 inland vessels operating on the Rhine and more than 3000 vessels operating on the Danube. 75% of the waterborne transport means on the Rhine are dry cargo self-propelled units or dumb barges, while 15% are tanker vessels. Push and tug boats account for 13%. 7% of the total number of inland vessels in the Danube countries are tankers, while push and tug boats account for 18% of the total vessels [5].
The transport performance on European inland waters presented an increase of 4% in the first quarter of 2018 compared to the first quarter of 2017. In the same period, the waterborne inland transport performance in Poland presented a 27% decrease.
POSSIBILITIES OF COST-EFFECTIVE CARGO TRANSPORT ON LOWER VISTULA
The number of transport units operated by Polish shipowners has not changed much in recent years. The number of barges BP-500 was 509 and decreased by 7 units. The fleet of pushers and tug boats was 219 units and increased by 5 units in 2017. The total number of barges BM-500 was 89, a decrease of 2 units from 2017 [8].
PERSPECTIVES OF DEVELOPMENT OF RIVER NAVIGATION
The Polish Ministry of Maritime Economy and Inland Navigation has carried out several feasibility studies related to the modernisation of Polish rivers. The detailed scope of investment in the short-term perspective includes the following tasks: the building of a new dam below Wloclawek, a feasibility study and investment documentation of Lower Vistula cascades [6].
The Kujawsko-Pomorskie Voivodeship together with the City of Bydgoszcz carried out, within the EMMA European Project, a location study for the construction of a multimodal platform: a river port and logistics centre in the area between Bydgoszcz and Solec Kujawski [10].
The inland waterway of the Vistula River, planned within the II priority of Assumptions for the plans of inland waterways development in Poland in 2016/2020, with the perspective to 2030 [6], and the proposed location of a new river port and logistics terminal in Otorowo [10] are presented in Fig. 2.
The main navigational limits on the Gdansk-Otorowo section are related to the water depth and air draft. The transit depth is 1.8 m, so the maximum mean draft of vessels is 1.5 m. The maximum lengths and breadths of convoys are limited by the dimensions of the lock at Przegalina: 188.37 m length and 11.91 m breadth.
Transport units
The river convoys operated on the Middle and Lower Vistula are different combinations of pushing and towing trains [18].
The main parameters of the barges BP-500 and BM-500 are presented in Table 1.
The fleet characteristics influencing transport costs are the fleet age and operational parameters, including the power-propulsion performance. The development of river navigation on the basis of modernisation of the existing fleet, including self-propelled barges BM-500 and pushed barges BP-500, has been discussed with river fleet operators as a best possible solution in the 10-15 year period of the planned river revitalisation.
The improvement of transport units' profitability can be achieved at minimum costs by modernising the outdated main power-propulsion and steering systems to reduce fuel consumption, air pollution and noise.
The hybrid diesel electric propulsion and integrated steering system, including the bow hydrodynamic rotors and dynamic coupling of barges, have been considered.
The new river push train proposed for operation on the Lower Vistula combines modernised barges: a motor barge BM-500 pushing one or two BP-500 barges will have a greater transportation capacity and efficiency than a pushed train combining a pusher and barges. The available power of the modernised motor barge's power-propulsion system should be not less than the power of the pusher [13].
The convoy of a motor barge BM-500 and pushed barges BP-500 with dynamic coupling and a bow steering system on the pushed barge has already been tested with respect to resistance and manoeuvrability using the physical scale model and CFD simulations [1].
TRANSPORT CAPACITY OF RIVER CONVOYS BASED ON MODERNISED BP-500 AND BM-500 BARGES
The presented analysis of possible transport units and convoys for the Lower Vistula has been carried out on the basis of previous experience of river fleet operators. It has been assumed that the analysed convoys operate during a navigational season of 250 days on not less than II class waterways. The port operations time was assumed to be 24 hours with 12 working hours a day. For the considered Gdansk-Otorowo section of the Lower Vistula, 198 km in length, the corresponding maximum air draft has been limited to two layers of containers.
RIVER CONVOYS CONFIGURATIONS
The single BM-500 motor barge and three configurations of river convoys were selected to compare the transport capacity:
• single motor barge BM-500,
• pusher with two pushed barges: pusher Bizon and 2 BP-500 barges,
• motor barge BM-500 pushing one pushed barge BP-500,
• motor barge BM-500 pushing two pushed barges BP-500.
The operational parameters of motor barge BM-500 are presented in Table 2.
The main operational parameters of the convoys are presented in Tables 3-5. The configuration of the convoy of barges BM-500 and BP-500 is presented in Fig. 3.
The proposed configuration of the convoy of BM-500 and 2 BP-500 barges is presented in Fig. 4.
ECONOMICALLY EFFECTIVE TRANSPORT CAPACITY OF RIVER CONVOYS
Transport capacities in the navigational season in relation to operations between Gdansk and Otorowo estimated for the considered convoys are presented in Table 6. The assumed numbers of round voyages were as follows: 32 for a BM-500 barge, 20 for the pusher with 2 BP-500 barges, 31 for a BM-500 pushing a BP-500 barge, 31 for a BM-500 pushing 2 BP-500 barges.
The unit costs of transport and time of a round voyage without time of port operations and night breaks are presented in Table 7. The costs include fuel costs, personnel costs, depreciation costs, maintenance costs and overheads due to conducting business -assumed as 35%.
The unit cost of transport of the convoy of a modernised BM-500 and two BP-500 barges is estimated to be less than the unit cost for a pusher and two pushed barges. It is about 26 PLN/t for bulk cargo and 490 PLN/TEU for containers. The difference is 10% for bulk cargo and 7% for containers; however, it is dependent on the operational cost of the modernised barges. The number of round cruises in the average navigational period of 250 days may be increased due to a decrease of port operations time.
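The unit costs in Table 7 result from dividing seasonal operating costs by the tonnage carried. The sketch below shows the general form of such a calculation; all input values are hypothetical placeholders rather than the study's figures, and the treatment of the 35% overhead (applied here to the sum of the direct costs) is only one possible reading of the assumption stated above.

```python
# Schematic unit-cost calculation of the kind summarised in Table 7. All inputs are
# hypothetical placeholders; each round voyage is assumed to carry one full payload.
def unit_cost_per_tonne(fuel, personnel, depreciation, maintenance,
                        round_voyages, payload_tonnes, overhead_rate=0.35):
    direct = fuel + personnel + depreciation + maintenance   # seasonal direct costs
    total = direct * (1.0 + overhead_rate)                   # add business overheads (35%)
    carried = round_voyages * payload_tonnes                 # tonnes carried per season
    return total / carried

# Example with made-up numbers (PLN)
cost = unit_cost_per_tonne(fuel=400_000, personnel=500_000, depreciation=150_000,
                           maintenance=100_000, round_voyages=31, payload_tonnes=1000)
```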
REQUIREMENTS WITH RESPECT TO MANOEUVRABILITY OF A RIVER CONVOY
The pushed convoy of a BM-500 and BP-500 has worse manoeuvring characteristics than the convoy consisting of a pusher and barge BP-500 [1,13]. The analysis of new solutions of river convoys based on a motor barge and pushed barges for coal transport on the Oder Waterway was presented in 2012 by Kulczyk [13]. Due to the manoeuvrability required, the authors proposed a Schottel pump jet as a bow rudder. The Schottel pump jet is an azimuth thruster that can be operated in shallow water conditions -with 0.3 m under-keel clearance. The pump jet used as an auxiliary propulsion unit greatly increases the possibility of convoy control, but it is expensive, limits the cargo space in the bow hold and generates thrust streams that influence the river environment [16].
The innovative solution of the bow steering system presented in [1], installed on the bow of the pushed barge, and flexible coupling between the motor barge and pushed barge, can significantly improve the manoeuvrability of the convoy.
With respect to the Polish Register of Shipping rules [17], the push train manoeuvring characteristics should satisfy the criteria for pushed convoys based on trials performed in deep water conditions and shallow water conditions with the water depth to draft ratio in the range of 0.5-1.2 (Table 8).
For practical ship design for operation and for safety, the local environmental conditions should be taken into accountespecially the possible widening of the safe manoeuvring area due to strong wind [14]. This is important for convoys with big windage areas carrying oversized goods or containers.
The results of model tests presented in Fig. 6 [1, 2] allowed the estimation of the turning ability of a motor barge and pushed barge convoy of 100 m length equipped with bow rotors and dynamic coupling.
Tab. 8. Manoeuvrability criteria for river convoys [17]. Stopping distance over the ground in shallow water, for convoys having length equal to or less than 110 m and beam equal to or less than 11.45 m:
- shall be no greater than 480 m in flowing water with a current velocity of 1.5 m/s in the direction of flow, until the speed over ground is 0 m/s,
- shall be no greater than 305 m in standing water.
A turning speed criterion also applies (Fig. 5).
The parameters of the turning trial performed using different combinations of steering devices are presented in Table 9.
The differences between the performance of the convoy with and without dynamic coupling were 10% in advance and 20% in transfer distances.
The stopping distance over the ground for the push barge should be less than 305 m in standing water. If the turning is used as an anti-collision manoeuvre, the advance for the push barge without bow rotors is equal to 300 m and is only a little less than the stopping distance. The use of bow rotors decreases the advance to 270 m. The difference is equal to one third of the push barge length.
The turning performance of the BM-500 and BP-500 barges convoy is presented in Fig. 7.
The turning performance of the convoy of a BM-500 and 2 BP-500 barges is presented in Fig. 8. Turning using the bow steering system and dynamic couplings between barges give numerous possibilities of push train handling in winding rivers.
The necessary developments of modernised convoys are hybridisation and electrification of shipboard systems.
The replacement of the conventional drive used on the BM-500 (with main engine power 2 x 88-100 kW) with a hybrid diesel-electric drive with increased power corresponding to the pusher (Bizon: 2 x 118 kW; Koziorozec 2 x 120 kW) should provide the necessary power for the convoy [13].
The power required by the convoy should also take into account changing river depth [19,20]. It has been confirmed by CFD calculations [1] that the predicted resistance of a 100 m length convoy at 10 km/h speed in deep water is 25.5 kN; the resistance at 15 km/h in 5 m deep water, which means a 0.6 shallow water Froude number, is 70 kN.
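The 0.6 depth Froude number quoted above can be verified directly from the speed and water depth given; the short calculation below reproduces it.

```python
# Shallow-water (depth) Froude number: Fr_h = V / sqrt(g * h),
# with V = 15 km/h and h = 5 m, as quoted in the text.
import math

V = 15 / 3.6          # convoy speed in m/s
h = 5.0               # water depth in m
g = 9.81              # gravitational acceleration in m/s^2
Fr_h = V / math.sqrt(g * h)
print(f"{Fr_h:.2f}")  # ~0.59, i.e. the ~0.6 value given in the text
```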
Diesel-electric drive and automatic steering accompanied by the integration of all the main on-board control systems [11] introduces the possibility of applying remote operation via a single communication channel.
CONCLUSIONS
The three configurations of river convoys presented in the paper were compared with respect to their transport capacity over the navigational season. The capacity for the pusher Bizon with two pushed barges was estimated to be 31% less than the capacity of the modernised motor barge BM-500 in convoy with a barge BP-500. The capacity of the motor barge BM-500 pushing two BP-500 barges was estimated to be 15% greater than the capacity of the modernised motor barge BM-500 pushing one BP-500 barge.
The design and operational innovations presented in the paper have been proposed for adapting the existing merchant river fleet of BM-500 and BP-500 barges, built in the middle of the last century and operating on the Lower Vistula River, to cost-effective modern shipping. Improvement of the manoeuvrability and reduction of the required manoeuvring area for the push train can be obtained by installing new steering devices, including a dynamic coupling system and bow rotors on the pushed barge. The model tests confirmed the increase of the turning capacity of a modernised BM-500 barge dynamically coupled with a BP-500 barge and equipped with bow rotors. The transfer during convoy turning for the dynamically coupled barges with rotors is 0.9 L less than for the conventional convoy using stern rudders only. The system should be flexible to allow for different levels of autonomy, depending on location, congestion or emergencies. The integrated control of onboard systems should be a step forward in the implementation of smart technologies, satisfying the requirement that the navigation system should be able to maintain the ship's route, adapt it to the changing river conditions, avoid collisions and operate the ship efficiently.
Treatment of mild to severe acne with 1726 nm laser: A safe alternative to traditional acne therapies
Acne is the most common reason for dermatology consultation in adolescents and young adults. Consultation is often delayed despite unsuccessful self‐treatment. Postponing effective treatment places acne sufferers at higher risk for permanent acne scars and post‐inflammatory pigment changes.
unemployed when compared to age-matched controls.6 Appearance bias may be more than just cultural; there is research to suggest evolutionary and subconscious influences.7,8 Acne is the most common reason for dermatology consultation in adolescents and young adults,9 but even moderate to severe acne sufferers may not seek professional advice for a year or longer despite unsuccessful self-treatment.10,11 Lack of early effective treatment places acne sufferers at higher risk for permanent skin changes such as erythema, post-inflammatory hyperpigmentation (PIH), and atrophic or hypertrophic scar formation.10,11 Current treatment guidelines12,13 rely on topical and systemic therapies that are neither effective nor well tolerated in all patients.
Interest in the potential of light-based therapies as alternative treatments is growing.This review discusses clinical challenges with present therapeutic options for acne treatment and the role of a 1726 nm laser device for acne.
| METHODS
Seven physicians plus an advisor with international scientific experience (the authors) met to discuss the literature on acne treatments.
All physicians are dermatologists who were chosen for their clinical experience and medical expertise in treating acne-affected people of all ages.Following a discussion of the limitations of medical therapy and literature reviews of light-based therapy, the panel discussed the best practice use of a newly FDA-approved 1726 nm laser for acne treatment.The panel's expert opinions regarding the use of the 1726 nm laser to optimize acne outcomes were summarized.
| Literature review
A literature search of the National Library of Medicine PubMed Database for studies published from 2010 to February 2023 was conducted.Randomized controlled trials of novel acne treatments using light-based and laser therapy to prevent disease progression were reviewed.Inclusion criteria were English language studies, consensus papers, and other reviews that focused on light-based therapy designed to diminish the impact of the pilosebaceous inflammation that leads to acne.Exclusion criteria were articles with no original data (unless a review was deemed relevant) or published in a language other than English.Also excluded were studies that combined medical treatment with light-based modalities (such as isotretinoin, platelet-rich plasma, growth factors, facial peels, and others), or studies designed to measure improvement in acne scars.
| Medical treatment for acne
A current understanding of the pathophysiology of acne helps to grasp the rationale for therapy. Acne comedones, papules, pustules, and nodules are primarily the result of four factors: hyperkeratinization of the pilosebaceous duct resulting in duct obstruction; ductal colonization with Cutibacterium acnes (C. acnes); increased sebum production; and the release of inflammatory mediators.13-15 Standard topical and systemic therapies target one or more of these features.
The decision to treat with one medication or multiple medications depends on acne severity, though there is no standardized acne-grading method.13 Comedonal acne is noninflammatory.
Inflammatory acne, which can be mild, moderate, or severe, refers to papules, pustules, nodules, and cysts.Acne severity can be rated clinically on a scale of 0-4, with 0 indicating "clear"; 1 is comedonal acne; 2 is mild-moderate papulopustular acne; 3 is severe papulopustular acne or moderate nodular acne; and 4 is severe nodulocystic acne or conglobate acne. 13
| Topical therapy
Improvement in Type 1 and Type 2 acne can be achieved with topical treatment. A single topical agent, such as benzoyl peroxide (BPO) or a low-strength retinoid,12 may be sufficient. The bactericidal activity of BPO is especially effective in controlling C. acnes, which plays a pivotal role in acne. C. acnes increases the proliferation and differentiation of keratinocytes; it activates innate immunity via toll-like and protease-activated receptors, which trigger the production of pro-inflammatory cytokines.16 Retinoids work by normalizing keratinization and reducing inflammation. Type 2, or mild papulopustular acne, may require a combination of two topical agents such as BPO plus a retinoid, or fixed combinations of a topical antibiotic + BPO, or a topical antibiotic + a retinoid.12,13 Once control has been achieved, topical retinoids are ideal as monotherapy for long-term maintenance in all types of acne, as they have the unique ability to prevent the formation of microcomedones.17
| Systemic therapy
Antibiotics are the most frequently added systemic therapy for acne that has not responded well to topical remedies. More antibiotics are prescribed by dermatologists than by any other specialty.18 For moderate to severe acne, the tetracycline-derived antibiotics (minocycline and doxycycline) are most often used in combination with topical agents. Although their use is intended to reduce C. acnes numbers, oral tetracycline derivatives have anti-inflammatory qualities that add to their efficacy.19 Due to increasing levels of bacterial resistance, oral antibiotics should be used for short periods of time, 3-4 months or less.12,13,16 Hormonal therapies are effective acne treatment adjuncts for women, yet oral antibiotics are prescribed more frequently.20 Hormonal therapy can be achieved with spironolactone or oral contraceptives, which downregulate the effects of androgens on sebum production. Spironolactone, originally developed as a diuretic, has antagonistic effects on androgen and progesterone receptors. Spironolactone is helpful for adult-onset acne in women and in women with acne due to polycystic ovary disease. It can be safely used long-term in healthy women.21,22 Oral contraceptives that contain estrogen and progestin reduce free testosterone, which diminishes sebum production. In the United States, a few specific estrogen/progesterone combination oral contraceptives21-23 have been approved for acne treatment. Improvement with these agents may take several months. Progestin-only oral contraceptives, and progestin-containing long-acting implants or depot products, can make acne worse.23
Isotretinoin
Isotretinoin should be considered when a patient with moderate to severe acne fails combination topical therapy plus a systemic agent. It can be started as first-line therapy in a severely affected patient.12,13,24 Isotretinoin normalizes follicle keratinization, inhibits C. acnes, reduces inflammation, and reduces sebum secretion by shrinking sebaceous glands. It is the only disease-modifying acne therapy. When dosing and length of treatment are sufficient, there is the likelihood of durable shrinkage of the pilosebaceous unit that persists once the drug is discontinued,25 resulting in prolonged and often permanent acne resolution. Isotretinoin does not permanently remove all sebaceous gland function. Sebaceous gland function will renormalize to levels sufficient to sustain sebum production but not enough for excessive bacterial proliferation.
| LIMITATIONS OF CURRENT ACNE TREATMENTS
Available acne medications are effective, but adverse effects and delayed onset of action limit their use. The demographic with the most acne, adolescents and young adults, may become impatient with their perceived slow progress. Topical products such as BPO and low-strength retinoids routinely cause dry, flaking skin that, although temporary, prompts many patients to discontinue therapy before optimal effects are achieved. In a study of 250 patients with a mean age of 18.6,26 45% abandoned therapy before an adequate therapeutic trial. Lack of response was cited by 62%, and 38% reported unacceptable side effects. Patients with severe acne were more likely to quit topical treatment early due to a lack of response.26 Beyond their skin irritant effects, topical retinoids may pose additional risks in adolescent and young adult women, groups with high pregnancy rates. Adapalene and tretinoin are both pregnancy category C, meaning animal data suggest fetal risks though human pregnancy data are lacking. Tazarotene is pregnancy category X, indicating it should not be used during pregnancy. The newest topical retinoid, trifarotene, has not been assigned a pregnancy-risk category.
Pregnancy or plans to become pregnant also limit use of tetracycline-derived antibiotics. Minocycline can cause skin and mucosal pigment changes, and it has been associated with a lupus-like syndrome with a higher incidence in young women. 27 Common side effects of tetracycline-derived antibiotics are photosensitivity, gastrointestinal upset, dizziness, and headaches. Pseudotumor cerebri (PTC) is a risk with this class of antibiotics, and it can occur in children. 28,29 Subtle symptoms and early fundoscopic signs of increased intracranial pressure can be missed, allowing unrecognized PTC to progress to visual impairment. 28,29 Systemic strategies with hormonal therapy have considerable side effect potential. Spironolactone leads to dose-dependent menstrual irregularities in 15%-30% of patients. 21 In lab animals, spironolactone has been shown to feminize a male fetus. 21 Oral contraceptives are often associated with nausea, breast tenderness, and breakthrough bleeding. Even the small chance of a thromboembolic event may discourage oral contraceptive use.
Despite decades of use, controversies still exist regarding the use of isotretinoin. As with oral tetracyclines, isotretinoin poses an increased risk of PTC, 29 and the two should not be used together. Almost all patients treated with isotretinoin develop mucocutaneous and eye dryness that can be severe. Myalgias, liver enzyme abnormalities, and elevated triglycerides have been reported. 24,25 Severe depression has been reported, though there is controversy about whether this is drug-related or due to the severe acne for which isotretinoin is prescribed. 30 Retinoid embryopathy is a known side effect, and the use of isotretinoin involves the added administrative burden of enrollment in the iPledge program, 31 an FDA-mandated safety program intended to diminish the risk of isotretinoin's teratogenicity. Based on histological data that demonstrate a drastic decrease in the size, shape, and lipid content of sebaceous glands during isotretinoin treatment, 32,33 patients who can tolerate a long course of isotretinoin are rewarded with dramatic disease improvement.
| ENERGY DEVICES FOR ACNE
Energy-based devices fill a therapeutic need for patients who cannot tolerate or who do not respond to conventional acne therapy.
A 2016 Cochrane review of randomized controlled trials of light treatments for acne concluded that a reliable synthesis of the data could not be made because of differences in patient selection, the different wavelengths used, varying total doses and numbers of sessions, and a lack of standardized outcome measures. 40 More than half of the studies in the Cochrane review were industry sponsored. Of the energy device trials reviewed, photodynamic therapy (PDT) was the most widely studied and was thought to have some usefulness. 40 In an evidence-based review of photodynamic therapy, 38 moderate to severe inflammatory and non-inflammatory acne responded to red light as a light source when the skin was pretreated with a photosensitizer for several hours before light exposure. PDT appears to be effective, but it can be painful, and the durability of response over time is unclear. Light-based therapies, their mechanisms of action, and common side effects are listed in Table 1.
| Novel 1726 nm laser for acne
Table 1 illustrates that reduction of sebum production is the primary mechanism of action of light-based therapies. Energy delivered by lasers increases the skin's temperature because water is abundant in skin and water's absorption coefficient is significant at infrared wavelengths. 41 Sebaceous glands are injured by the increased temperature, but discomfort and collateral damage to other skin structures cannot be avoided. 41 A better approach is to selectively deliver energy to a specific chromophore within the sebaceous gland that has a higher absorption coefficient than water. Sebum in the sebaceous gland is a favorable target because it has a narrow absorption peak, higher than water, at 1726 nm. 41,42 In 2012, Sakamoto et al. demonstrated that optical pulses with wavelengths between 1700 and 1720 nm could destroy sebaceous glands in ex vivo human facial skin with minimal damage to surrounding tissues. 42 A novel infrared diode laser device with a nominal wavelength of 1726 nm was designed to generate a significant and rapid temperature rise inside the sebaceous glands to heat sebum, producing a controlled thermal injury of the glands. 41 As heat causes pain, the addition of skin cooling minimizes discomfort. Thermal protection for the epidermis and superficial dermis is provided by a temperature-controlled skin-contact cooling window. 41 In an in vivo model, human facial skin from around the ear was exposed to the energy settings determined to be ideal. The treated area was excised 5 days later. Total necrosis of the sebaceous gland was seen, with sparing of the epidermis and of the follicular epithelium. 41

Performance testing to assess the safety and efficacy of the new 1726 nm laser was done in an open-label, prospective, multicenter clinical study prior to FDA approval. 43 A total of 104 patients, 57% female and 43% male, aged 16-40, with mild (n = 1), moderate (n = 81), or severe (n = 22) acne were enrolled. More than 20% of those enrolled had severe acne, and 28% of all patients were males aged 16-19. The study's primary objective was to show that ≥50% of subjects attained treatment success, defined as a reduction of ≥50% in inflammatory acne lesions 12 weeks after the final treatment compared to baseline. Treatment consisted of a total of three 30-min laser facial treatments spaced 2-5 weeks apart. Photographs taken throughout the study were sent to a panel of three trained expert physicians for Investigator Global Assessment (IGA) grading. Noninflammatory and inflammatory acne lesion counts (ILC) were performed by lesion counters who were blinded to the study design. Results obtained at 12 weeks after the last treatment are summarized in Table 2.
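To make the study's success criterion concrete, the short sketch below computes each subject's percent reduction in inflammatory lesion count at 12 weeks versus baseline, flags treatment success at the ≥50% reduction threshold, and reports the proportion of responders that the primary objective compares against 50%. The lesion counts are hypothetical and are not taken from the trial; only the endpoint definition comes from the text.

```python
# Hypothetical illustration of the trial's primary endpoint logic:
# treatment success = >=50% reduction in inflammatory lesion count (ILC)
# at 12 weeks after the final treatment, compared with baseline.

def percent_reduction(baseline: int, week12: int) -> float:
    """Percent reduction in inflammatory lesion count from baseline."""
    return 100.0 * (baseline - week12) / baseline

# Hypothetical (baseline ILC, week-12 ILC) pairs for a handful of subjects.
subjects = [(42, 12), (30, 18), (55, 20), (25, 14), (60, 8)]

reductions = [percent_reduction(b, w) for b, w in subjects]
successes = [r >= 50.0 for r in reductions]

responder_rate = 100.0 * sum(successes) / len(successes)
print(f"Per-subject reductions (%): {[round(r, 1) for r in reductions]}")
print(f"Responder rate: {responder_rate:.0f}%  "
      f"(primary objective: responder rate >= 50%)")
```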
Subjects were also assigned to subgroups by age, gender, Fitzpatrick skin type, and baseline IGA. Responder rates and device-related adverse events were also examined within these subgroups. Pre-procedural numbing was not performed. There were no differences in discomfort levels by skin type, with median discomfort levels rated as 5.0-5.1 (0-10 scale). 43 The results of this report, which led to FDA approval, are similar to those published by Goldberg et al. in 2022. 44 In a study of 17 patients, statistically significant reductions in inflammatory lesions were seen beginning 4 weeks after treatment. Improvement continued well beyond the initial study. Subjects who continued follow-up showed progressive improvement, with a 97% reduction in inflammatory lesion counts at 2 years. 44

(Table 1. Summary of light-based procedures that have been used for inflammatory acne.)
| DISCUSSION
Nonadherence with topical and systemic therapy is high due to complicated, multistep treatments, topical and systemic side effects, and administrative burdens. Undertreatment of acne leads to pigment changes and scarring. Permanent acne scarring occurs across all levels of acne severity, a critical finding for practitioners who treat acne. 11 Of those affected by acne scarring, 20% fear that scarring will affect their employability. 11 Until now, the benefits of laser therapy for acne came at the cost of undesirable skin discomfort and skin damage due to a lack of selectivity for sebaceous glands. Selective photothermolysis using the new 1726 nm laser offers effectiveness and safety in mild to severe acne while managing discomfort with a contact cooling system. The 1726 nm laser was studied and is approved for ages 16 and above.
Unlike in the dozens of laser trials using non-selective wavelengths, pigment changes were not observed with the 1726 nm laser. Melanin is not a clinically meaningful absorbing chromophore at 1726 nm, so the new laser is safe for all skin types.
Adequate, early treatment is the key to acne remission without physical, social, or emotional sequelae. There are many effective topical products to treat acne and three options for systemic treatment. Of all available medications, only isotretinoin is disease-modifying. The 1726 nm laser may have a similar long-term effect on sebaceous glands as isotretinoin. Both treatments have histological data showing a marked decrease in the size, shape, and lipid content of sebaceous glands in human skin after treatment. 44 However, long-term follow-up of patients treated with the 1726 nm laser compared with those treated with oral isotretinoin may provide more conclusive evidence.
The new 1726 nm laser may represent a paradigm change in acne treatment, providing safe, effective, and convenient treatment for mild to severe acne in all skin types.
| Limitations
The review discusses clinical challenges with present therapeutic options for acne treatment and the role of a 1726 nm laser for acne. The review was limited to exploring the role of the 1726 nm laser in view of currently used treatments; a review comparing various laser treatments is outside the scope of this paper. Evidence-based guidelines rate the quality of evidence supporting treatment options, whereas clinical consensus recommendations rely on expert opinion about which treatments work well in particular situations. Laser treatment studies are challenging to design because of the difficulty of assembling a control group; patients often serve as their own controls. The 1726 nm laser was approved only recently, in 2022, which limits widespread real-world clinical experience.
| Future directions
A clinical trial (ClinicalTrials.gov identifier: NCT05430464) is recruiting participants to assess the benefits of the 1726 nm laser versus sham laser treatment.
AUTHOR CONTRIBUTIONS
DJG and AA performed the research for this manuscript. DJG, AA, ACB, MHG, ABL, MSL, JHM, and AR contributed to the development of the manuscript, reviewed it, and agreed with its content.
FUNDING INFORMATION
This work was supported by an unrestricted educational grant from Cutera.
DATA AVAILABILITY STATEMENT
The data supporting this manuscript's findings are available in
Iodoxybenzoic Acid Supported on Multi-Walled Carbon Nanotubes as Biomimetic, Environmentally Friendly Oxidative Systems for the Oxidation of Alcohols to Aldehydes
Iodoxybenzoic acid (IBX)-supported multi-walled carbon nanotube (MWCNT) derivatives have been prepared as easily recyclable solid reagents. These compounds have been shown to mimic the alcohol dehydrogenase- and monooxygenase-promoted oxidation of aromatic alcohols to the corresponding aldehydes. Their reactivity was found to depend on the degree of functionalization of the MWCNTs as well as on the chemical properties of the spacers used to bind IBX to the surface of the support. Au-decorated MWCNTs and longer spacers provided the optimal experimental conditions. A high conversion of the substrates and high yields of the desired products were obtained.
Introduction
The oxidation of alcohols to the corresponding carbonyl compounds is one of the most fundamental and important processes in synthetic organic chemistry. Although a variety of methods and reagents have been developed, they all suffer from the difficulty of selectively oxidizing primary alcohols to aldehydes without the concomitant formation of carboxylic acids and other over-oxidation products [1]. The oxidation of alcohols to aldehydes is usually performed in the presence of stoichiometric reagents [2], including the Dess-Martin oxidation [3], the Swern and Corey-Kim reactions [4], and the Burgess reagent [5]. Heavy metal reagents have also been used in catalytic procedures, for instance hydrogen-transfer reactions (Ru, Rh, Ir) [6] and Oppenauer oxidations (Al, Zr, lanthanides) [4]. On the other hand, metal-free oxidations are desirable processes in the context of green chemistry because of the known toxicity and high environmental impact of metal species. In this context, biotechnological applications of oxidative enzymes, e.g., alcohol dehydrogenases and monooxygenases, are of particular interest.
Preparation of oxMWCNTs I
In a round-bottomed flask equipped with an egg-shaped magnetic stirring bar, MWCNTs and a mixture of concentrated H2SO4-HNO3 (3:1) were stirred for 4.0 h at r.t. and for an additional 12 h at 40 °C. The reaction mixture was cooled down to r.t. and cold H2O (400 mL) was poured into the reactor. The mixture was washed by centrifugation at 4000 rpm (30 min), and the supernatant was removed. The remaining solid was further washed with deionized H2O (200 mL). At each washing step, the mixture was centrifuged (4000 rpm for 30 min), filtered using GH Polypro membrane filters (0.2 µm), and the supernatant was removed. The resulting oxidized MWCNTs (oxMWCNTs I) were dried in vacuo and used without further purification.
Preparation of Oxidizing Solid Reagents VIII A-B
MWCNTs (100 mg) were sonicated in 100 mL of ethanol for 2 h. Afterwards, 8.5 mL of a 0.1 M HAuCl4 ethanolic solution was added. In order to obtain Au particles, reduction with 300 mg of NaBH4 was carried out by stirring for about 30 min. Au-MWCNTs V were then isolated by centrifugation, filtered using GH Polypro membrane filters (0.2 µm), washed several times with ethanol, and dried at 80 °C. 2-Amino-1-ethanethiol (for NH2-Au-MWCNTs VI A) or 6-amino-1-hexanethiol (for NH2-Au-MWCNTs VI B) was dissolved in a mixture of water (20 mL) and 1.0 M HCl (3.0 mL). Au-MWCNTs V (30 mg) and ethanol (3.0 mL) were added, and the mixture was left under magnetic stirring for 24 h. After that time, the product was isolated by centrifugation, washed three times with 0.01 M NaOH and ethanol, and filtered using GH Polypro membrane filters (0.2 µm). The resulting NH2-Au-MWCNTs VI A-B were dried under an argon stream. NH2-Au-MWCNTs VI A-B (200 mg) were suspended in DMF (0.8 mg/mL) and treated with DIC (790 mg, 6 mmol) and DIPEA (2.1 mL, 12 mmol) in a 500 mL round-bottomed flask with an egg-shaped magnetic stirring bar. Thereafter, IBA (1.5 g, 6 mmol) was added to the solution, and the mixture was stirred for 8 h at 30 °C. The resulting IBA-Au-MWCNTs VII A-B were washed with DMF and H2O by centrifugation (4000 rpm, 20 min) and filtered using GH Polypro membrane filters (0.2 µm). IBA-Au-MWCNTs VII A-B were suspended in H2O (125 mg/250 mL) in a round-bottomed flask, then Oxone® (950 mg, 1.5 mmol) and methanesulfonic acid (100 µL, 1.5 mmol) were added and the mixture was stirred for 8 h at r.t. Thereafter, IBX-Au-MWCNTs VIII A-B were washed with DMF (5 × 10 mL) and H2O (3 × 10 mL) and filtered using GH Polypro membrane filters (0.2 µm).
Preparation of Oxidizing Solid Reagent VIII-C
11-Mercapto-1-undecanol was dissolved in a mixture of water (20 mL) and 1.0 M HCl (3.0 mL). Au-MWCNTs V (30 mg) and ethanol (3.0 mL) were added, and the mixture was left under magnetic stirring for 24 h. After that time, the product was isolated by centrifugation, washed three times with 0.01 M NaOH and ethanol, and filtered using GH Polypro membrane filters (0.2 µm). The resulting OH-Au-MWCNTs VI C was dried under an argon stream. OH-Au-MWCNTs VI C (200 mg) was suspended in DMF (0.8 mg/mL) and treated with DIC (790 mg, 6 mmol) and DIPEA (2.1 mL, 12 mmol) in a 500 mL round-bottomed flask with an egg-shaped magnetic stirring bar. Thereafter, IBA (1.5 g, 6.0 mmol) was added to the solution and the mixture was stirred for 8 h at 30 °C. The resulting IBA-Au-MWCNTs VII C was washed with DMF and H2O by centrifugation (4000 rpm, 20 min) and filtered using GH Polypro membrane filters (0.2 µm). IBA-Au-MWCNTs VII C was suspended in H2O (125 mg/250 mL) in a round-bottomed flask, then Oxone® (950 mg, 1.5 mmol) and methanesulfonic acid (100 µL, 1.5 mmol) were added and the mixture was stirred for 8 h at r.t. Thereafter, IBX-Au-MWCNTs VIII C was washed with DMF (5 × 10 mL) and H2O (3 × 10 mL) and filtered using GH Polypro membrane filters (0.2 µm).
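As a quick sanity check on the coupling and activation stoichiometry quoted above, the short sketch below converts the stated masses and volumes of DIC, DIPEA, IBA, Oxone®, and methanesulfonic acid into millimoles. The molar masses and densities are standard reference values supplied here for illustration, not values taken from the paper, so treat the printed equivalents as approximate.

```python
# Approximate mmol check for the reagent quantities quoted in the procedure.
# Molar masses (g/mol) and densities (g/mL) are standard reference values,
# not values reported in the paper.

solids = {
    # name: (amount_g, molar_mass_g_per_mol)
    "DIC (790 mg)":                (0.790, 126.20),
    "IBA (1.5 g)":                 (1.500, 248.02),
    "Oxone, triple salt (950 mg)": (0.950, 614.76),
}

liquids = {
    # name: (volume_mL, density_g_per_mL, molar_mass_g_per_mol)
    "DIPEA (2.1 mL)":                (2.1, 0.742, 129.25),
    "Methanesulfonic acid (100 uL)": (0.1, 1.48, 96.11),
}

for name, (grams, mw) in solids.items():
    print(f"{name}: {1000 * grams / mw:.1f} mmol")

for name, (ml, rho, mw) in liquids.items():
    print(f"{name}: {1000 * ml * rho / mw:.1f} mmol")

# The printed values come out close to the quoted 6, 6, 1.5, 12, and 1.5 mmol,
# confirming the internal consistency of the procedure.
```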
Transmission Electron Microscopy (TEM), Scanning Electron Microscopy (SEM), and X-Ray Photoelectron Spectroscopy (XPS) Analyses
For transmission electron microscopy (TEM), samples were suspended in bi-distilled water. Droplets of the sample suspensions (10 µL) were placed on formvar-carbon coated grids and allowed to adsorb for 60 s. Excess liquid was removed gently by touching with filter paper. Samples were observed with a JEOL 1200 EX II electron microscope (Waltham, MA, USA). Micrographs were acquired with an Olympus SIS VELETA CCD camera equipped with iTEM software (Waltham, MA, USA). For scanning electron microscopy (SEM), the sample suspensions (50 µL) were allowed to adsorb onto carbon tape attached to aluminum stubs and air dried at 25 °C. The observation was made with a JEOL JSM 6010LA electron microscope (Waltham, MA, USA) using Scanning Electron (SE) and Back Scattered Electron (BSE) detectors. Energy Dispersive Spectroscopy (EDS) analysis was carried out to reveal the chemical elements. X-ray photoelectron spectroscopy (XPS) analysis was done in an ultrahigh-vacuum PHI 1257 system equipped with a hemispherical analyzer, operating in the constant pass energy mode (with a total energy resolution of 0.8 eV) and using a non-monochromatized Mg Kα radiation source. The distance between the sample and the anode was about 40 mm, the illumination area was about 1 × 1 cm², and the analyzed area was 0.8 × 2.0 mm², with a take-off angle between the sample surface and the photoelectron energy analyzer of 45°. The energy scale was calibrated with reference to the binding energy of the C 1s at 284.8 eV with respect to the Fermi level. Survey scans of the III-B, IV-B, VII-A, and VIII-A compounds acquired in the range of 0-1100 eV (not shown here) displayed the contributions of the main elements involved in the reaction process for all of the samples: carbon, nitrogen, oxygen, sulfur, gold, and iodine. No contaminant species were observed within the sensitivity of the technique.
Inductively Coupled Plasma Mass-Spectrometry (ICP-MS) Analysis
The samples were weighed (from 1.6 to 6.9 mg) and transferred into fluorinated ethylene propylene (FEP) vials, previously washed to avoid any kind of external contamination. Aqua regia was chosen for the mineralization as it combines the oxidizing capacity of HNO3 with the complexing capacity of chlorides toward the I2 produced during digestion. In particular, 750 µL of HCl and 150 µL of HNO3 were added and the solution was heated to 80 °C for 3 hours. The volume was adjusted to 5.0 mL and then diluted a further 10 times before the ICP-MS analysis. The analysis was performed with an Agilent 7500 ICP-MS instrument (Palo Alto, CA, USA). Four standards at 10, 20, 50, and 100 ppb of iodine and gold were used to calibrate the instrument.
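The quantification step described above reduces to a linear calibration followed by a dilution-corrected back-calculation. The sketch below fits a line through the four iodine standards, converts a measured signal into ppb, and corrects for the 5.0 mL digestion volume and the additional 10-fold dilution to recover the total iodine in the weighed sample. The instrument responses and the example signal are invented for illustration; only the standard concentrations and the dilution scheme come from the text, and real samples may need a different dilution to stay within the calibration range.

```python
import numpy as np

# Calibration standards for iodine (ppb = ng/mL), as described in the text.
std_conc = np.array([10.0, 20.0, 50.0, 100.0])
# Hypothetical instrument responses (counts per second), for illustration only.
std_signal = np.array([1.55e4, 3.10e4, 7.60e4, 1.52e5])

slope, intercept = np.polyfit(std_conc, std_signal, 1)  # linear calibration

def iodine_in_digest_ug(sample_signal: float,
                        digest_volume_ml: float = 5.0,
                        dilution: float = 10.0) -> float:
    """Total micrograms of iodine in the digested sample from an ICP-MS signal."""
    conc_ppb = (sample_signal - intercept) / slope       # ng/mL in the measured solution
    total_ng = conc_ppb * digest_volume_ml * dilution    # undo dilution and volume
    return total_ng / 1000.0                             # ng -> ug

# Example: a hypothetical signal of 9.0e4 cps for a 3.2 mg sample.
ug_iodine = iodine_in_digest_ug(9.0e4)
print(f"Iodine found: {ug_iodine:.1f} ug "
      f"({ug_iodine / 3.2:.2f} ug per mg of sample)")
```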
Oxidation of Aromatic Alcohols
The oxidation of alcohols 1-8 (1.0 mmol) in EtOAc (10 mL) was performed by adding the appropriate solid reagent (IV A-B or VIII A-C, 1.2 eq) to a single-neck round-bottomed flask equipped with a water condenser under magnetic stirring at reflux (ca. 80 °C) for 24 h. At the end of the oxidation, IV A-B and VIII A-C were filtered off using GH Polypro membrane filters (0.2 µm) and washed with EtOAc (5 × 10 mL). The yield of aldehydes 9-16 was determined by GC-MS analysis using n-dodecane (0.1 mmol) as an internal standard. The reactions were performed in triplicate. GC-MS was performed using a VF-5ms column (30 m, 0.25 mm, 0.25 µm) with the following program: injection temperature 280 °C, detector temperature 280 °C, gradient 50 °C for 2 min then 10 °C/min for 60 min, carrier (helium) flow velocity 1.0 mL min−1. To identify the structures of the products, two strategies were followed. First, the spectra of identifiable peaks were compared with commercially available electron mass spectrum libraries such as that of the National Institute of Standards and Technology (NIST-Fison, Manchester, UK); in this case, spectra with at least 98% similarity were chosen. Secondly, GC-MS analysis was repeated using commercially available standard compounds. The original mass spectra of compounds 9-16 are reported in Figure S1 (Supporting Information).
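For readers unfamiliar with internal-standard quantification, the sketch below shows the arithmetic typically used to turn GC-MS peak areas into a yield against the 0.1 mmol of n-dodecane added. The peak areas and the relative response factor are placeholders that the paper does not report, so this is only the general form of the calculation, not the authors' actual numbers.

```python
# Internal-standard quantification of an aldehyde product by GC-MS (generic form).
# Area ratios are converted to mole ratios through a relative response factor (RRF)
# determined beforehand from standard mixtures of the aldehyde and n-dodecane.

def gc_yield_percent(area_product: float, area_istd: float,
                     rrf: float, n_istd_mmol: float,
                     n_theoretical_mmol: float) -> float:
    """Yield (%) of product relative to the theoretical amount of substrate."""
    n_product = (area_product / area_istd) / rrf * n_istd_mmol
    return 100.0 * n_product / n_theoretical_mmol

# Placeholder numbers: 1.0 mmol substrate, 0.1 mmol n-dodecane internal standard,
# hypothetical peak areas, and an assumed RRF of 0.85.
print(f"Yield of aldehyde: "
      f"{gc_yield_percent(4.2e6, 5.5e5, 0.85, 0.1, 1.0):.0f}%")
```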
Preparation of IBX Supported MWCNTs and MWCNTs-Au Oxidizing Solid Reagents
The immobilization of IBX on MWCNTs was first based on the formation of an amide-type linkage between spacer-functionalized MWCNTs and 2-iodobenzoic acid (IBA), followed by activation of IBA to IBX (Scheme 1). In particular, commercially available MWCNTs were oxidized with HNO3/H2SO4 to oxMWCNTs I with the aim of increasing the amount of polar moieties (alcoholic and acidic groups) on the surface [23]. Next, oxMWCNTs I were functionalized with selected alkyl diamino spacers (1,2-diaminoethane and 1,6-diaminohexane) by coupling with N,N'-diisopropylcarbodiimide (DIC) and 1-hydroxybenzotriazole (HOBt) in DMF at room temperature for 24 hours to yield the intermediates II A-B. The effectiveness of the coupling procedure was confirmed by Fourier Transform Infrared Spectroscopy (FTIR) analysis for II-A as a selected example. In particular, the peak at 1649 cm−1, corresponding to the stretching vibration of the carboxylic groups in oxMWCNTs I (Figure S2), was shifted to 1633 cm−1 in II-A as a consequence of the amide formation, in accordance with data previously reported for the functionalization of MWCNTs (Figure S3) [24]. The intermediates II A-B were successively suspended in DMF and treated with IBA at room temperature for 24 hours in the presence of DIC and HOBt to afford IBA-MWCNTs III A-B. The formation of the new amide linkage was again confirmed by the shift of the amide peak from 1633 cm−1 to 1627 cm−1 (Figure S4). Finally, III A-B were activated to IBX-MWCNTs IV A-B by reaction with Oxone® and methanesulfonic acid. In this latter case, only a slight shift of the amide peak toward 1606 cm−1 was observed (Figure S5) [20].
Scheme 1. Preparation of IBX supported MWCNTs oxidizing solid reagents IV A-B.
As an alternative, Au-decorated Au-MWCNTs V were used instead of oxMWCNTs I as anchorage supports. Briefly, Au-MWCNTs V [25] were treated with selected alkyl mercapto-amino spacers (2-amino-1-ethanethiol and 6-amino-1-hexanethiol, respectively) in an acidic water/ethanol mixture (pH 2, HCl 1.0 M) to afford the intermediates NH2-Au-MWCNTs VI A-B by formation of covalent Au-sulfur bonds (Scheme 2). These intermediates were successively suspended in DMF and treated with IBA at room temperature for 24 h in the presence of DIC and HOBt to yield IBA-Au-MWCNTs VII A-B. Finally, IBX-Au-MWCNTs VIII A-B were obtained through the reaction of VII A-B with Oxone® and methanesulfonic acid [20]. The TEM images of IV B and VIII B, as selected samples, are reported in Figure 1 (Panels A and C). In VIII B, the black spots represent the Au particles, whose presence was unambiguously confirmed by SEM associated with BSE analysis (Figure S6). Note that the structural integrity of the MWCNTs was retained after the loading of IBX.

Moreover, VIII C was prepared using a longer thio-alcohol spacer (11-mercapto-1-undecanol), with the aim of binding IBA through the formation of an ester bond instead of an amide bond (Scheme 3). Briefly, Au-MWCNTs V was treated with 11-mercapto-1-undecanol in HCl 1.0 M and EtOH to afford the intermediate VI C by formation of covalent Au-sulfur bonds (Scheme 3). This intermediate was successively treated with DIC, DIPEA, and IBA to yield VII C. Finally, VII C was suspended in H2O and treated with Oxone® and methanesulfonic acid to afford VIII C [20].

Figure 2 presents the detailed spectra of the C 1s, O 1s, N 1s, S 2p, Au 4f, and I 3d peaks of III B, IV B, VII A, and VIII A. All spectra were normalized to C 1s, which corresponds to the signal of the MWCNT support, allowing the different peaks to be compared. XPS analysis clearly confirmed the presence of iodine and gold in the analyzed samples. From the intensity of the XPS peaks (Figure 2), a slight leaching of Au and I was observed after the last step of the sample preparation (III B → IV B and VII A → VIII A). The C 1s spectra were fitted by the sum of five components assigned to C atoms belonging to: aromatic ring carbons (C=C/C-C, 284.8 eV), hydroxyl groups (C-OH, 285.9 eV), epoxy groups (C-O-C, 286.9 eV), carbonyl groups (C=O, 288.2 eV), and carboxyl groups (C=O(OH), 289.3 eV); the hump at 290.6 eV was assigned to a π-π* shake-up satellite, in line with [20]. The O 1s spectra were fitted by the sum of three components: OH-C (533.4 eV), C-O-C (532 eV), and O=C (530.4 eV) [26]. Electron binding energies of the peak positions of N 1s, S 2p3/2, Au 4f7/2, and I 3d5/2 for all samples are listed in Table 1.
Determination of the Iodine Loading Factor by ICP-MS Analysis
The iodine Loading Factor (LF) for IV A-B and VIII A-C, defined as mmol of iodine per gram of support, was measured by Inductively Coupled Plasma Mass Spectrometry (ICP-MS) analysis (Table 2). As reported in Table 2, IV B showed an LF significantly higher than IV A (entry 2 versus entry 1), highlighting the easier immobilization of IBA in the presence of the longer spacer (that is, 1,6-diaminohexane versus 1,2-diaminoethane) [27]. VIII A and VIII B showed LF values of 0.4 and 0.7, respectively, while for VIII C the iodine LF was found to be 0.3 (Table 2, entries 3-5).
The LF values found for IV A-B and VIII A-C were of the same order of magnitude, and higher than those previously reported for solid reagents based on the immobilization of IBX on both polymer resins and GO [14,20,25]. Moreover, the higher amount of Au with respect to iodine measured for VIII A-C proved that the initial linkage of the mercapto-containing spacers was not quantitative with respect to the Au binding sites available on the support (Table 2, entries 3-5).
Oxidation of Aromatic Alcohols with IV A-B and VIII A-C
The mechanism of the oxidation of aromatic alcohols with IBX is reported in Scheme 4. The oxygen atom transfer from IBX to the substrate requires the initial addition of the alcohol on activated iodine followed by water elimination and disproportionation with the displacement of the aldehyde [14]. IV A-B and VIII A-C were applied for the oxidation of a large panel of aromatic alcohols, including benzyl alcohols 1-6 and phenethyl alcohols 7,8 (Scheme 5, Tables 4 and 5).
Scheme 5. Oxidation of alcohols 1-8 with IV A-B and VIII A-B.
The reactions were performed by treating the appropriate alcohol (1.0 mmol) with a slight excess of IV A-B or VIII A-C (1.2 IBX equivalents, calculated on the basis of the specific LF value) in EtOAc (10 mL) at 80 °C for 24 h. Attempts to perform the oxidation in other reaction solvents usually applied for IBX transformations (e.g., dimethyl sulfoxide (DMSO) and water) were unsuccessful.
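The amount of solid reagent charged per reaction follows directly from the loading factor. The helper below is illustrative only; the function name is ours and the LF values simply echo the range reported in Table 2. It converts a target of 1.2 IBX equivalents for 1.0 mmol of alcohol into the mass of supported reagent to weigh out.

```python
def solid_reagent_mass_mg(substrate_mmol: float, equivalents: float,
                          loading_factor_mmol_per_g: float) -> float:
    """Mass of supported IBX reagent (mg) delivering the desired equivalents."""
    required_mmol_ibx = substrate_mmol * equivalents
    return 1000.0 * required_mmol_ibx / loading_factor_mmol_per_g

# 1.2 equivalents of supported IBX for 1.0 mmol of alcohol,
# for loading factors spanning the reported range (mmol I per g of support).
for lf in (0.3, 0.4, 0.7):
    print(f"LF = {lf:.1f} mmol/g -> weigh ~{solid_reagent_mass_mg(1.0, 1.2, lf):.0f} mg")
```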
Temperatures lower than ca. 80 °C were not effective, while at temperatures higher than 80 °C the reagents showed low stability, affording only complex mixtures of reaction products. The reactions were analyzed by gas chromatography-mass spectrometry (GC-MS) through comparison with authentic standards. Mass-to-charge ratio (m/z) values of aldehydes 9-16 are reported in Table 3 (the original MS fragmentation spectra are in Figure S1). Under optimal conditions, aromatic aldehydes 9-16 were detected as the only recovered products aside from unreacted substrates (Tables 3 and 4). In the case of the oxidation of benzyl alcohol 1, the reactions with commercially available IBX and with IBX supported on polystyrene (sIBX) were performed as references (Table 3, entries 1 and 2).

(Table 3. Mass-to-charge ratio (m/z) values and abundances of the mass spectral peaks of compounds 9-16.)

Homogeneous IBX showed a reactivity higher than the supported reagents in the oxidation of benzyl alcohol 1, probably as a consequence of the diffusional barriers for the access of the substrate to the active iodine atom, with the only exception of VIII-B, which showed a comparable efficacy (Table 4, entry 1 versus entry 11). On the other hand, IV A-B and VIII A-B oxidized benzyl alcohol 1 to aldehyde 9 in a higher yield with respect to sIBX, suggesting a beneficial role of MWCNTs as a support with respect to the organic resin (Tables 4 and 5). Irrespective of the experimental conditions, VIII-C was totally ineffective in the oxidation of 1 and was not further investigated (Table 5, entry 19). Probably, the low reactivity of VIII-C was ascribable to the detrimental effect of the ester linkage, with respect to the amide counterpart, on the stability of the iodine(V) active species [28]. As a general trend, benzyl alcohol derivatives 1-6 were more reactive than phenethyl alcohols 7,8. Moreover, benzyl alcohols bearing electron-donating substituents 2-5 were more reactive than 1 (Tables 4 and 5), in accordance with previously reported data on the role of the electron density at the benzylic position in the rate-determining step of IBX-mediated oxidations [14]. The dimension of the spacer also played a significant role, with IV-B and VIII-B, bearing the longer spacer chains, being the most reactive systems. The effect of the spacer on the reactivity of supported reagents has been previously investigated, the increase of the chain length always being related to an increase in the low-energy conformational changes attainable by the reagent and to a reduction of the diffusional barrier for the substrates [29]. Finally, Au-MWCNT-based reagents VIII A-B were generally more reactive than the MWCNT counterparts IV A-B, most likely due to the increased electron-transfer properties of the support as a consequence of the increased conductance of the nanotubes at the Au-carbon junctions [30]. The recyclability of the supported IBX was evaluated for the more reactive VIII B reagent in the oxidation of benzylic alcohol 1. After the first run, the reagent was recovered by filtration, washed with EtOAc, dried, and restored to its active form by treatment with Oxone® and methanesulfonic acid. VIII B retained the same reactivity, affording aldehyde 9 in quantitative yield for at least five successive runs.
The absence of leaching of IBX from VIII B was confirmed by testing the oxidative capacity of the organic solution recovered after filtration of an EtOAc suspension of VIII B that had been kept at reflux under the same experimental conditions applied for the oxidation. No oxidation capacity was observed.
Compounds IV B and VIII B retained their morphological structural integrity after the oxidation of alcohol 1, as highlighted by the TEM analysis of the recovered samples ( Figure 1, panels B and D, respectively).
Conclusions
The preparation of a series of IBX-based reagents supported on MWCNTs, as heterogeneous biomimetic systems for the selective oxidation of primary alcohols to the corresponding aldehydes under mild conditions, has been described. Two different types of carbon structures have been investigated, namely oxidized MWCNTs or, as an alternative, Au-decorated MWCNTs. The immobilization of the iodine active reagent was realized by exploiting the direct formation of an amide linkage with IBX, mediated by spacers of different length, or through the high binding affinity of sulfur-containing linkers in the case of the Au-decorated MWCNTs. In general, the benzyl alcohol derivatives were shown to be more reactive than the corresponding phenethyl alcohols, confirming the prominent role exerted by the electron density at the benzylic carbon in the rate-determining step of the IBX-mediated oxidative process [29]. In accordance with this hypothesis, benzyl alcohols bearing electron-donating substituents showed the highest reactivity. The dimension of the spacer incorporated between the IBX fragment and the carbon nanotube surface also played a significant role. Indeed, the reagents bearing the longer spacer chains showed higher LF values and better oxidation performance. Regarding the LF values, the longer spacer may reduce steric hindrance during the IBA functionalization of MWCNTs, increasing the number of groups involved in the multipoint covalent attachment [31]. Similarly, the better oxidation performance measured in the presence of the longer spacer is in accordance with previously reported data on the role that the spacer length can play in allowing a certain mobility of the active species [32]. Interestingly, Au-MWCNT-based systems behaved as the more reactive reagents, consistent with their increased electron-transfer properties ascribable to the presence of electroactive Au-carbon junctions [33] in comparison with the simple MWCNT counterparts. The novel IBX-supported reagents were easily recoverable from the reaction mixture and were successfully reused for further runs after a simple reaction with the primary oxidant. These novel reagents can be applied in large-scale processes, overcoming drawbacks associated with the use of oxidizing enzymes. Moreover, their metal-free structure, associated with the biocompatibility of MWCNTs, ensures the novel reagents high eco-compatibility and low environmental impact.
Long-term outcome and effect of maintenance therapy in patients with advanced sarcoma treated with trabectedin: an analysis of 181 patients of the French ATU compassionate use program
Background The long-term outcome of advanced sarcoma patients treated with trabectedin outside of clinical trials and the utility of maintenance treatment have not been reported. Methods Between 2003 and 2008, patients with advanced sarcoma failing doxorubicin could be treated within a compassionate use program (ATU, Temporary Use Authorization) of trabectedin in France using the standard 3-weekly regimen. Data from 181 patients (55%) were collected from 11 centres and analyzed. Results Trabectedin was given in first, second, third or fourth line in the metastatic phase in 6%, 37%, 33% and 23% of patients respectively. With a median follow-up of 6 years, median PFS and OS were 3.6 months and 16.1 months respectively. The median number of cycles was 3 (range 1–19). Best responses were partial response (PR, n = 18, 10%), stable disease (SD, n = 69, 39%), progressive disease (PD, n = 83, 46%), and non-evaluable (NE, n = 9, 5%). Thirty patients (17%) had to be hospitalized for treatment-related side effects. Independent prognostic factors in multivariate analysis (Cox model) were myxoid LPS and line of trabectedin for PFS, and myxoid LPS and retroperitoneal sarcomas for OS. Patients in PR or SD after 6 cycles continuing treatment had a better PFS (median 5.3 vs 10.5 months, p = 0.001) and OS (median 13.9 vs 33.4 months, p = 0.009) as compared to patients who stopped after 6 cycles. Conclusions In this compassionate use program, trabectedin yielded similar or better PFS and OS than in clinical trials. Maintenance treatment beyond 6 cycles was associated with improved survival.
Background
Soft tissue sarcoma (STS) constitutes a heterogeneous group of rare cancers, with diverse clinical presentations, histological subtypes and molecular alterations [1]. The established standard of care for unresectable STS in first line is doxorubicin-based chemotherapy, with typical response rates ranging from 10% to 30% [1][2][3][4]. For patients who relapse or develop resistance, other therapeutic options were limited before the availability of trabectedin [5]. For these patients, progression-free survival (PFS) and overall survival (OS) rarely exceed 6 months and 1 year respectively [5].
Trabectedin is a tetrahydroisoquinoline alkaloid isolated from the marine organism Ecteinascidia turbinata, a tunicate originally collected in the Caribbean Sea. Its complex mechanism of action involves a covalent bond to the minor groove of double-stranded DNA, resulting in an inhibition of gene activation and of the nucleotide excision repair (NER) mechanism, and also inducing lethal DNA double-strand breaks and cell cycle arrest in S and G2 phases [6][7][8][9][10][11]. In vitro, trabectedin has shown potent cytotoxic activity against a variety of human STS cell lines, and antitumor activity against a variety of human xenografts, including sarcomas, with limited cross-resistance between trabectedin and other cytotoxic agents [11][12][13][14][15].
In clinical trials, single-agent trabectedin has shown activity in a variety of tumor types, including sarcomas, breast cancer, and ovarian cancer [16][17][18][19][20][21][22][23][24][25][26][27][28]. The clinical activity of single-agent trabectedin has been demonstrated in heavily pretreated patients with advanced STS, with a median duration of response of 9 to 12 months and 6-month PFS rates ranging from 24% to 29% [16][17][18][19][20][21][22][23][24], as well as in first-line patients, either as a single agent or in combination with doxorubicin [25,26]. The STS201 randomized, open-label study was conducted in adult STS patients with unresectable/metastatic liposarcoma or leiomyosarcoma, after failure of prior conventional chemotherapy including anthracyclines and ifosfamide. Patients were randomly assigned to one of two trabectedin regimens (given intravenously at a dose of 1.5 mg/m2 as a 24-h infusion every 3 weeks or at a dose of 0.58 mg/m2 as a 3-h infusion weekly for 3 weeks of a 4-week cycle). The study met its primary endpoint with a median TTP of 3.7 months in the 24-h arm vs. 2.3 months in the 3-h arm (p = 0.0302), showing a statistically significant 27% reduction in the risk of progression with the 24-h trabectedin arm [27]. On the basis of these results, trabectedin was approved in September 2007 in the European Union for patients with advanced STS after failure of anthracyclines or ifosfamide or for those who are unsuited to receive such agents. Before that date, a compassionate use program (ATU) had been set up in France, in which 328 patients were included from 2003. The efficacy of trabectedin in compassionate use programs may differ from that obtained in clinical trials, because patients with less favorable clinical characteristics are included.
We report here the results of a retrospective study analyzing the outcome of patients included in this compassionate use program. The survival and response rates of these patients were comparable to those reported in clinical trials. Interestingly, maintenance treatment beyond 6 cycles was associated with improved PFS and OS compared with treatment discontinuation after 6 cycles.
Centres
From 2003 to 2008, 87 centres in France enrolled at least one patient in the ATU ("Autorisation Temporaire d'Utilisation") program, a compassionate use program for STS patients matching the inclusion criteria (see below). Requests for participation were sent to all 43 centres that had included more than 1 patient. Only centres that had included more than 5 patients actually contributed to this retrospective study. These centres treated 252 patients in total, among which 181 patient files (71%) were collected and updated as of March 20th, 2012. 181 of the 328 (55%) patients of the ATU program are therefore included in this report. Trabectedin was given at the standard schedule of 1.5 mg/m2 as a 24-h continuous infusion every 21 days, as previously reported, with dose adaptations similar to those applied in the protocols [19][20][21][22][23].
Objectives
The primary objective of this study was to evaluate progression-free survival, while secondary endpoints were response rates, duration of response, overall survival, toxicity leading to hospitalization, description of the patient population, and the impact of treatment duration on treatment efficacy. Because of its retrospective nature, only very simple clinical parameters were collected.
Inclusion criteria for the retrospective study
These criteria were those of the EORTC trial [23], the largest of the single-arm phase II studies with trabectedin. Patients had to have documented progressive disease at inclusion. No concurrent antitumor therapy was allowed. Other eligibility criteria were: age older than 18 years; performance status 0 or 1; no functionally important cardiovascular disease; no prior cancer (except adequately treated in situ carcinoma of the cervix or basal cell carcinoma); presence of measurable lesions not previously irradiated; no central nervous system metastases; adequate bone marrow reserve (neutrophils > 2,000/mm3, platelet count > 100,000/mm3); and adequate renal and hepatic functions: serum creatinine less than 120 μmol/L or calculated creatinine clearance (Cockcroft method) greater than 60 mL/min, bilirubin below 30 μmol/L, AST and ALT less than 1.5 times the upper limit of normal (<2.5 times in case of liver metastases), alkaline phosphatase less than 2.5 times the upper limit of normal, and albumin > 25 g/L. Mesothelioma, chondrosarcoma, neuroblastoma, osteosarcoma, Ewing's sarcoma, embryonal rhabdomyosarcoma, and dermatofibrosarcoma were excluded.
Case report form
A simple Case Report Form with 22 items was used to collect patients' characteristics and outcome. Information was collected on an Excel spreadsheet, consolidated in an Excel database, and then analyzed using SPSS 12.1 software by the institutional data manager. Collected information included the following: anonymized patient identity, centre, date of birth, gender, date of diagnosis, histotype, grade, date of metastasis, description of first-, second- and later-line treatments, best response, duration, date of the first trabectedin course, ECOG PS at that date, metastatic sites at that date (lung, liver, local, soft part, bone, or other), number of cycles, best response to trabectedin, toxicity of trabectedin requiring re-admission, date of last course of trabectedin, date of progression after trabectedin, treatment after trabectedin and best response, and date of death. Optional data were the number of available pathology tissue blocks and contact information for the pathology department where the diagnosis was made and which held the pathology samples.
Descriptive analysis and statistics
Baseline demographics and clinical outcome statistical analyses were based on all data available up to the cut-off date of December 31st, 2011. Descriptive statistics were used to depict the distribution of variables. Follow-up was calculated from course 1 of trabectedin. Progression-free survival (PFS) was defined as the interval between the date of the first trabectedin cycle and the date of disease progression, death, or last follow-up contact. The interval between the date of the first cycle of trabectedin and the time of death or last follow-up defined overall survival (OS). PFS and OS rates were estimated using the Kaplan-Meier method and were compared using the log-rank test. Univariate analyses included the following variables: age; sex; performance status; grade; histological subtype; disease location; myxoid liposarcoma histology; translocation sarcoma; treatment line; hospitalization for toxicity; and liver/lung metastases. Responses were determined retrospectively using RECIST 1.1. All statistical tests were 2-sided, and a p-value below 0.050 was considered statistically significant. This study was approved by the local institutional review board at each participating institution.
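As an illustration of the survival methodology described above, the sketch below shows how Kaplan-Meier estimates, a log-rank comparison, and a Cox proportional hazards model of this kind could be reproduced in Python with the lifelines package. The column names and the toy data frame are our own and do not come from the study; with only a handful of rows the Cox fit is purely demonstrative.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Toy data frame mimicking the variables described in the methods
# (times in months; 'event' = 1 for progression/death, 0 for censoring).
df = pd.DataFrame({
    "pfs_months": [3.6, 2.1, 10.5, 5.3, 7.8, 1.9, 12.0, 4.4],
    "event":      [1,   1,   0,    1,   1,   1,   0,    1],
    "myxoid_lps": [0,   0,   1,    0,   1,   0,   1,    0],
    "line":       [2,   3,   1,    2,   1,   4,   2,    3],
})

# Kaplan-Meier estimate of PFS for the whole cohort.
kmf = KaplanMeierFitter()
kmf.fit(df["pfs_months"], event_observed=df["event"], label="all patients")
print("Median PFS (months):", kmf.median_survival_time_)

# Log-rank test: myxoid liposarcoma versus other histologies.
a, b = df[df["myxoid_lps"] == 1], df[df["myxoid_lps"] == 0]
lr = logrank_test(a["pfs_months"], b["pfs_months"],
                  event_observed_A=a["event"], event_observed_B=b["event"])
print("Log-rank p-value:", lr.p_value)

# Multivariate Cox proportional hazards model (all remaining columns as covariates).
cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="event")
cph.print_summary()
```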
Population and patient characteristics
Between 2003 and 2008, 87 centres have included 328 patients in the ATU program. The present study was performed on 181 patients from 11 centres having treated at least 5 patients who agreed to participate. This represents 55% of the total cohort of 328 ATU patients. Inclusion criteria of the ATU were those of the EORTC trial. The only difference was that no restrictions were imposed on the previous number of lines. The median number of patient per centre was 17 (range 5 to 32). Patients' characteristics are described in Table 1. 29% had translocation-related sarcomas. At diagnosis, grade 3, 2 and 1 STS represented 44%, 28% and 9% of the tumors respectively. Median line of therapy was third line for trabectedin, with a range of 1 to 4 lines. 56 of 181 patients (31%) received 6 cycles or more. During the course of treatment, 30 (17%) of the patients had to be re-admitted for treatment-related adverse events.
Survival
With a median follow-up of 64 months after initiation of trabectedin treatment, the median PFS was 3.6 months, with a 6-month PFS rate of 39%. Median OS was 16.1 months, with 3-, 4-, and 5-year OS rates of 23%, 15%, and 4% respectively (Figure 1A and 1C). PFS and OS were superior in patients treated in 1st and 2nd line, but prolonged survival (>24 months) was observed in all subgroups (Figure 1B and 1D). Both PFS and OS were superior in patients with myxoid liposarcomas, retroperitoneal sarcomas, and grade 1 tumors (Table 1). In multivariate analysis, the only two independent prognostic factors identified for PFS were the histological subtype of myxoid LPS and the line of treatment. For OS, the two favorable prognostic factors in multivariate analysis were the histological subtype of myxoid liposarcoma and a retroperitoneal location of the primary disease (Table 2).
Response to treatment
Partial response (PR), stable disease (SD), and progressive disease (PD) were recorded as the best response in 10%, 39%, and 46% of the patients respectively, with 5% of patients being non-evaluable. No significant difference was observed according to the line of trabectedin administration (Table 3, p = 0.17). Myxoid liposarcoma had better response and stable disease rates (21% and 54% respectively) as compared to other histological types (8% and 36% respectively) (p = 0.002), with no significant difference between other translocation-related sarcomas and the remaining group of sarcomas (not shown). The median duration of response was 10.5 months (95% CI: 5.4-15.6). Overall survival of partial responders and patients with SD was similar in the first years, but only partial responders were long-term survivors beyond 5 years (44%). Overall survival of patients in the PR and SD subgroups was equivalent, and both were superior to that of patients with progressive disease or non-evaluable disease as best response (Figure 2).
Maintenance therapy after 6 cycles
A total of 56 (31.1%) patients were in SD or PR after 6 cycles. In 16 patients the treatment was stopped, whereas in 40 patients it was continued beyond 6 cycles for a median of 9 cycles (range 7-19). The subgroup of patients treated with 7 or more cycles had a significantly better PFS (median 10.5 vs 5.3 months, p = 0.001) and OS (median 33.4 vs 13.9 months, p = 0.009) than the subgroup stopping at 6 cycles (Figure 3).
Maintenance therapy was associated with a better PFS and OS in this series analyzed retrospectively.
Discussion
The objective of this retrospective study was to assess the outcome of STS patients treated in the French ATU compassionate use program and to compare it with that of published clinical trials. Between 2003 and 2008, this program enabled the treatment of patients failing doxorubicin with trabectedin 1.5 mg/m2/21d. The inclusion criteria were the same as those of the EORTC trial, with the exception that all lines of treatment were allowed. Not all patients could be retrospectively collected: only centres that had treated at least 5 patients contributed to this analysis, which therefore includes 181 patients. This series consequently represents a selected subgroup of patients treated mostly in reference centres for sarcoma and in centres experienced with trabectedin usage. Among the 11 centres participating in the study, 5 had participated in the phase II EORTC trial, reflecting the experience of the centres with this agent. This is therefore a selected subgroup of the ATU series, but this selection arguably makes the comparison with phase II data more relevant. It would have been of interest to compare this series with that of patients treated in non-expert centres, but these data could not be obtained. In this group of 181 heavily pretreated STS patients, either resistant or relapsing, trabectedin was received as second-line therapy by the majority, and some patients received the treatment in fourth line. This is a more heavily pretreated patient population than that of the EORTC trials. Despite this, the response rate (10%), stable disease rate (39%), PFS (median 3.6 months) and OS (median 16.1 months) were comparable to those observed with trabectedin in the phase II trials. Trabectedin therefore qualifies as an active treatment according to the EORTC-STBSG (European Organisation for Research and Treatment of Cancer-Soft Tissue and Bone Sarcoma Group) criteria, because the 3-month progression-free rate was largely superior to 40% and the 6-month progression-free rate was superior to the threshold used to define an active treatment [5]. It is however challenging to compare the present series with the EORTC database of the pre-trabectedin era [2,3], published since 1999, for several reasons: 1) the former series included mainly first-line patients, while the present series gathers patients in all lines (from first-line metastatic patients pretreated in the adjuvant setting to fourth-line patients); 2) histological classifications and inclusion criteria varied considerably between the two series; for instance, GIST were mixed amongst leiomyosarcomas in the former series, and the exhaustive histological reviews of the former series were not performed with the 2002 or 2013 classifications.
Possibly the best comparison can be obtained with the subsequent paper by Van Glabbeke et al., which reported second-line and later patients separately. In that series, the median PFS was 2.3 months, with a 1-year progression-free rate of 7% in the whole series and 12% in the subset of patients treated with "active agents" [5]. The results observed in the present ATU series, with a median PFS of 3.6 months and a 12-month PFS rate close to 30%, therefore compare favorably with these historical controls, despite all these limitations.
Detailed side-effects of trabectedin treatment were not collected in this retrospective study. Only toxicity leading to hospitalization was documented; it remained limited, affecting only 17% of patients. As expected, these patients had a smaller number of cycles delivered and, perhaps as a consequence, a worse PFS. Overall survival was, however, not significantly different from that of patients without toxicity-related hospitalization.
Figure 2. Overall survival according to the best response to trabectedin: partial response (light brown), stable disease (purple), progressive disease (green), or non-evaluable (blue). Log-rank p value, p < 0.0001.
Because most toxicities do not lead to rehospitalisation, no formal conclusion can be proposed on a possible lack of correlation with therapeutic efficacy, considering also the limited number of patients in this series. As previously described, patients with myxoid liposarcomas had better response rates, PFS and OS, and this histology was an independent prognostic factor for survival. The number of lines of chemotherapy administered before trabectedin also correlated significantly with PFS in the Cox model, but not with OS. Conversely, a retroperitoneal location was associated with improved survival, possibly because of the low grade and loco-regional behavior of these tumors; most are liposarcomas, a subset associated with a better outcome in large retrospective datasets as well as in the present series [2]. Interestingly, the OS and PFS of translocation-related sarcomas excluding myxoid liposarcoma were not different from those of other sarcoma types. Similar observations were made for the response rate (not shown).
Thirty percent of the patients in the present study received more than 6 cycles of trabectedin, which underlines an acceptable toxicity profile allowing prolonged treatment. Long-term treatment is feasible with trabectedin, whereas it is not feasible with doxorubicin or ifosfamide because of cumulative cardiac and renal toxicities. Prolonged trabectedin treatment thus allows testing the importance of maintenance treatment. Interestingly, among the 56 patients who were not progressing after 6 cycles, the 40 who continued treatment had a significantly better PFS and, more surprisingly, OS, with more than a doubling of the median OS. The retrospective nature of the study implies potential biases in these observations, and they therefore cannot be considered as evidence of the utility of prolonged treatment. However, these observations strengthen the rationale of the trial randomizing treatment maintenance versus interruption after 6 cycles, currently ongoing within the French Sarcoma Group (NCT01303094). It has previously been shown in a large randomized clinical trial (SUCCEED) that maintenance with an mTOR inhibitor prolongs PFS, but not OS (Demetri et al., submitted for publication). Maintenance therapy may be a strategy worth exploring further in patients with advanced STS.
Conclusion
In conclusion, this retrospective analysis of 55% of the advanced sarcoma patients treated in France in the compassionate use program shows that the use of trabectedin in routine clinical practice, in large-volume centres, yields an outcome similar to that previously observed in clinical trials. Trabectedin is confirmed as an active and safe agent for the treatment of advanced STS patients who have failed standard therapies. Patients treated beyond 6 cycles of trabectedin had a significantly better survival, pointing to a potential role of maintenance treatment. An exhaustive retrospective study collecting information on all patients treated with trabectedin since its approval could be very helpful to describe the outcome of patient populations in low-volume centres. A prospective study is ongoing to evaluate the efficacy of maintenance after 6 cycles.
Figure 3. Progression-free and overall survival according to maintenance after 6 cycles: no maintenance (treatment interruption after 6 cycles, blue); maintenance (treatment beyond 6 cycles, green). Log-rank p-value for PFS, p = 0.007; log-rank p-value for OS, p = 0.0002.
|
v3-fos-license
|
2018-08-06T13:07:18.741Z
|
2018-07-20T00:00:00.000
|
49894332
|
{
"extfieldsofstudy": [
"Medicine",
"Geology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41467-018-05313-2.pdf",
"pdf_hash": "ef063fd59f103e3ca771beca056a3154db9efb94",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44498",
"s2fieldsofstudy": [
"Geology"
],
"sha1": "aec198356125f03452235e0dce9cff88a473b711",
"year": 2018
}
|
pes2o/s2orc
|
Arc-like magmas generated by mélange-peridotite interaction in the mantle wedge
The mechanisms of transfer of crustal material from the subducting slab to the overlying mantle wedge are still debated. Mélange rocks, formed by mixing of sediments, oceanic crust, and ultramafics along the slab-mantle interface, are predicted to ascend as diapirs from the slab-top and transfer their compositional signatures to the source region of arc magmas. However, the compositions of melts that result from the interaction of mélanges with a peridotite wedge remain unknown. Here we present experimental evidence that melting of peridotite hybridized by mélanges produces melts that carry the major and trace element abundances observed in natural arc magmas. We propose that differences in nature and relative contributions of mélanges hybridizing the mantle produce a range of primary arc magmas, from tholeiitic to calc-alkaline. Thus, assimilation of mélanges into the wedge may play a key role in transferring subduction signatures from the slab to the source of arc magmas.
S ubduction zones are widely studied because they are a major locus of volcanic and seismic hazards. In particular, the compositions of arc magmas have been used to understand the magmatic processes operating in the deep Earth. During subduction, hydrated oceanic crust and sediments are subducted and recycled back into the Earth's interior. Although the fate of subducted sediments is uncertain, their signature is imprinted in the chemistry of most arc magmas around the world 1 . Sediments are globally enriched in many trace elements (e.g., K, Rb, Th, rare earth elements) relative to peridotite mantle 2 , thus small volumes of sediments can drastically shift the trace element budget of the mantle wedge. Arc magmas are also characteristically enriched in fluid-mobile large-ion lithophile elements (LILE) such as Ba and Sr, and depleted in high field strength elements (HFSE) such as Nb, relative to mid-ocean ridge basalt (MORB) 3 . The LILE enrichment has usually been attributed to mantle wedge metasomatism by slab-derived fluids 4 produced during dehydration of the subducting slab. The HFSE depleted character, on the other hand, has been attributed to different processes such as a 'pre-subduction' mantle depletion 5,6 , selective retention of HFSE by accessory phases (e.g., rutile, sphene, and perovskite) stabilized in the mantle wedge and/or in the slab 7,8 , and preferred partitioning of HFSE into orthopyroxene during melt-rock reaction 9 . Although extensive geochemical studies have suggested that arc magma chemistry reflects variable contributions from a depleted MORB mantle (DMM), altered oceanic crust (AOC) and sediments 10,11 , experimental studies have faced challenges to simultaneously reproduce both the major and trace element characteristics of tholeiites and calc-alkaline melts, the most common types of arc magmas. In addition, the processes by which typical trace element signatures are produced and transferred to arc magmas remain a matter of debate. In particular, it has been recently argued that the trace element and isotope variability of global arc magmas could not be reconciled with the classic model of arc magma genesis, which invokes hybridization of the mantle wedge by discrete pulses of melted sediments and aqueous fluids from dehydrating AOC. Instead, the trace element and isotope data of global arcs can only be reconciled if physical mixing of sediments + fluids + mantle takes place early on in the subduction process before any melting occurs 12 . This prerequisite redefines the order of events in subduction zones and supports an important role for mélange rocks in arc magmatism. However, the trace and major element chemistry of melts that would result from the interaction of natural mélange rocks with a peridotitic mantle in subduction zones has never been investigated experimentally and remains unknown. Such information is critical to determine whether mélange rocks are viable contributors to arc magmatism worldwide.
Mélange rocks are observed in field studies worldwide 13 and are believed to form by deformation-assisted mechanical mixing, metasomatic interactions and diffusion at different P-T conditions along the slab-mantle interface during subduction [13][14][15][16] . Mélanges are hybrid rocks composed of cm- to km-sized blocks of altered oceanic crust, metasediments, and serpentinized peridotite embedded in mafic to ultramafic matrices 14,17,18 . These matrix rocks include near-monomineralic chlorite schists, talc schists, and jadeitites with variable amounts of Ca-amphibole, omphacite, phengite, epidote, and accessory minerals (e.g., titanite, rutile, zircon, apatite, monazite, and sulfides), among others. Although the volumes of mélange rocks at depth are poorly constrained, several km-thick low-seismic-velocity regions observed at the slab-top in subduction zones worldwide indicate the persistence of hydrated rocks (mélange zones) at the slab-mantle interface 14,15,19 . This km-scale estimate of mélange rocks from seismic observations is corroborated by numerous field studies of exhumed high-pressure terranes reporting thicknesses ranging from several hundreds of meters up to several kilometers 14,17,[20][21][22] . Mélange rocks display significant spatial heterogeneity, but detailed field observations indicate that chemical potential gradients between juxtaposed lithologies (e.g., metasediments, eclogite, and serpentinized peridotites) may be reduced to homogeneous matrices through diffusion and fluid advection processes as mélanges mature 14,23 . For the purpose of this first study, we will assume that mélange matrices are broadly representative of the bulk composition of the mélange and provide a relevant first-order approximation of mélange compositional variability, as they form at the expense of, and reflect chemical contributions from, their protoliths 15,21 . Although more compositions will be studied in the future, the mélange matrix samples used here reflect two contrasting members in the range of mélange materials that we use to explore possible melt compositions produced by mélange-peridotite interaction.
Laboratory 24 and numerical simulations of the subduction process [25][26][27][28] have shown that hydration and partial melting may induce gravitational instabilities at the slab-mantle interface, which can develop into diapiric structures composed of partially molten materials. Although these diapirs have not been unambiguously imaged in active subduction zones, we note that along-arc geophysical studies are rare, that the current resolution of seismic techniques may not be appropriate to image mixed mélange-peridotite lithologies, and that the magnetotelluric approach, sensitive to interconnected free fluids, would not easily detect the presence of mélanges, where most of the water may be crystallographically bound. With their intrinsic buoyancy, mélange diapirs have been predicted to form at the slab-top, migrate into the overlying mantle 25,26 , and transfer the compositional signatures of slab-derived rocks to the source region of arc magmas 23,29 . In particular, physical mixing and homogenization of viscous mélange diapirs and sub-solidus mantle peridotites is predicted in the hot zones of the mantle wedge 30 . Recent findings on ophiolitic zircon grains also support the idea that material can be transported in the wedge via cold plumes 31 . However, as stated previously, the major and trace element compositions of melts that would be produced by melting of a mélange-hybridized mantle wedge remain unexplored.
Here we present the first experimental study on the generation of arc-like magmas by melting of mélange-hybridized mantle sources. We perform piston-cylinder experiments at 1.5 GPa and 1150-1350°C and simulate a scenario where mélange materials rise as a bulk 26,32 into the hot corner of the wedge and homogenize with the peridotite mantle (Fig. 1). Using powder mixtures of DMM-like natural peridotite (LZ-1, Supplementary Fig. 1; 85-95 vol.%) and natural mélange rocks from a high-pressure terrane (SY400B, SY325; 5-15 vol.%), we show that the experimentally produced glasses display the major and trace element characteristics typical of arc magmas (e.g., high Ba contents, high Sr/Y ratios, and negative Nb anomaly). Our study provides evidence that the compositional signatures of sediments and fluids, initially imparted to mélange rocks during their formation at the slab-mantle interface, can be delivered to the source region of arc magmas by mixing of mélange materials with mantle wedge peridotites, and variably enhanced during melting of the mélange-hybridized peridotite source. We show that, depending on the types and relative contributions of mélange materials that hybridize the mantle wedge, the compositions of the melts vary from tholeiitic to calc-alkaline. We further discuss how lithological heterogeneities observed in supra-subduction ophiolites and arc xenoliths could represent direct evidence for peridotite-mélange interactions.
Results
Experimental techniques. We performed piston-cylinder experiments to investigate the composition of melts produced by partial melting of a natural DMM-like peridotite hybridized by small proportions of natural mélange matrix. We used two starting mixes that consisted of homogenized 'peridotite + sediment-dominated mélange matrix' (PER-SED mix) and homogenized 'peridotite + serpentinite-dominated mélange matrix' (PER-SERP mix). Both mélange matrices are fine-grained chlorite schists from Syros (Greece) with estimated water contents between 2-3 wt. %. These two types of natural mélange matrices span a range of compositions that reflect the first-order variability of global mélange rocks in terms of mineralogy (Supplementary Data 1), immobile element chemistry (Fig. 2), and trace element chemistry ( Supplementary Fig. 6). As mélange rocks should be volumetrically small compared to peridotite in the mantle wedge, we only added limited volumes (5-15%) of natural mélange matrix to a natural lherzolite powder (85-95%). We note that mélange rocks would not necessarily represent 5-15 vol.% of the sub-arc region at all times because of the 3-D nature of mélange diapirs. Certain regions of the wedge could be hybridized by different amount of mélange materials at different times. Although this experimental design is more challenging because it produces small melt pools, it simulates a more realistic scenario. Experimental melts were collected using glassy carbon spheres placed at the top of Au-Pd capsules. The natural peridotite (LZ-1; from Lherz, France) displays modal proportions and major and trace element compositions similar to DMM ( Supplementary Fig. 1). The PER-SED and PER-SERP starting materials were partially melted at 1.5 GPa and 1280-1350°C, conditions applicable to arc magmatism 33,34 . In addition, near-solidus (1230°C ) and solidus (1150°C) experiments were performed to better constrain the solid phase assemblage at the beginning of and before melting, respectively. The quenched, dendrite-free glasses were analyzed for major elements using electron microprobe (EPMA) at the Massachusetts Institute of Technology. In addition, chemical maps for major elements were acquired on all experiments ( Fig. 3 and Supplementary Fig. 2). Trace element compositions of glass pools were analyzed using a Cameca 3 F secondary ion mass spectrometer (SIMS) at the North East National Ion Microprobe Facility (Woods Hole Oceanographic Institution). Backscattered electron (BSE) images and energy dispersive spectroscopy (EDS) maps were acquired on all experiments using a Hitachi tabletop SEM-EDS TM-3000. The major and trace element compositions of starting mixes and experimental melts are summarized in Supplementary Data 1 and 2, respectively. We assessed approach to equilibrium by performing a time-series of experiments at 1.5 GPa and 1280°C, with run durations ranging from 3 h to 96 h. The capsules were preconditioned to minimize Fe loss, although we still observed a decrease in FeO T (total iron) with increasing run duration. We observed that melt compositions performed between 72 and 96 h were indistinguishable in terms of SiO 2 , Al 2 O 3 , MgO, Na 2 O, CaO, K 2 O, MnO, and TiO 2 , within 1 s.d. value (Supplementary Fig. 4). Thus, a 72-h run duration was chosen to closely approach equilibrium in those experiments. Mass balance calculations yielded a sum of squared residuals <0.39 (FeO excluded), attesting for a close system for all other major oxides. 
Phase proportions for each experiment were calculated from mass balance calculations and are reported in Supplementary Data 3. Additional information is provided in the Supplementary Information. The alkali contents of melts produced from PER-SED experiments are higher than those from PER-SERP experiments at similar temperatures, due to the higher alkali contents of PER-SED starting material (Supplementary Data 1 and Supplementary Fig. 6). The FeO T contents of peridotite-mélange melts are lower than global arc data due to some limited Fe loss to the capsule ( Supplementary Fig. 7). We now compare the major element compositions of experimental melts ( Fig. 4 and Supplementary Fig. 7) with fractionation-corrected global arc data 35 (normalized to MgO = 6 wt.%), primitive arc melts compilations 33,34 , and previous experimental studies (Supplementary Data 4). Experimental hydrous peridotite melt compositions 36 match well the major element compositions of global arcs, although alkali contents are expectedly lower than in most arc magmas. Experimental melts from mantle hybridized by slab melts 37 are lower in CaO, and higher in TiO 2 , Na 2 O, K 2 O, and SiO 2 compared to arc datasets. Experimental melts from olivine hybridized by sediment melt 38 are lower in CaO, and higher in Na 2 O and K 2 O compared to arc datasets. Experimental mélange-type 1 and type 2 melts 27,39 are both lower in CaO and MnO and higher in K 2 O and SiO 2 compared to arc datasets. Interestingly, the major element compositions of experimental mélange-type 2 melts 40 , which are partial melts from the sediment-dominated mélange material used in this study (SY400B), plot in the continuity of PER-SED experiments but with higher elemental abundances. Experimental melts from mantle hybridized by sediment melts 41 are higher in K 2 O compared to arc datasets. Conversely, partial melts of peridotite hybridized by mélange materials produced in this study plot within or near the compositional field defined by arc datasets for SiO 2 , MgO, Na 2 O, K 2 O, TiO 2 , P 2 O 5 , and CaO. In terms of alkali contents, lower degree melts (10-19%) of PER-SED experiments are slightly higher than global arcs but plot within the field of global arcs at higher degree of melting (25-31%). Overall, partial melts of peridotite hybridized by mélange materials are similar to partial melts of hydrous peridotites and match well the alkali and major element compositions of typical arcs magmas.
Experimental melts from PER-SED experiments range from the boundary between the tholeiitic and calc-alkaline fields to the high-K calc-alkaline field (Fig. 5). On the other hand, experimental melts from PER-SERP experiments plot tightly within the tholeiitic field. Experimental mélange-type 2 melts 40 show a strong enrichment in K2O and plot in the ultrapotassic shoshonitic field. Our results, along with the experimental data of Cruz-Uribe et al. 40 , highlight a continuum in alkali enrichment from tholeiitic melts produced by melting of mantle hybridized by serpentinite-dominated mélange, to calc-alkaline melts produced by melting of mantle hybridized by sediment-dominated mélange materials, to ultrapotassic shoshonitic melts from melting of pure sediment-dominated mélange materials.
Trace element composition of the melts. The trace element compositions of hybrid peridotite-mélange melts are presented in N-MORB-normalized spider diagrams ( Fig. 6) along with global arc data 35 , with emphasis on the dominant primitive arc magma types 33 (i.e., calc-alkaline and tholeiite), and published experimental studies that provided both major and trace element contents of experimental melts (Supplementary Data 4). Primitive calc-alkaline arc magmas are geochemically characterized by up to two orders of magnitude higher trace element concentrations compared to primitive arc tholeiites. Pure sediment melts 7 and melts from olivine hybridized by sediment melts 38 have higher trace element concentrations than global arc magmas and display elemental fractionations that are different from global arcs (e.g., Ba/Th, Sr/Nd). Other previous studies 37,40,42 display trace element abundances that plot in the highest range for natural arc magmas, but with major element compositions that are missing CaO or reflect ultra-potassic melts (high K 2 O). Here we show that, compared to N-MORB, partial melts of hybrid peridotitemélange materials display enrichment in LILE (e.g., Ba, Th, Sr, K), high LREE/HREE (e.g., Ce/Yb), high LILE/HFSE (e.g., high Th/Nb, Ba/Nb, and K/Ti), and plot tightly within the trace element fractionation range defined by global arc data (Fig. 7). Experimental melts from PER-SED experiments record elevated trace element concentrations and show fractionations that are characteristic of primitive calc-alkaline magmas. Sr/Nd ratios still fall within the range of global arcs (Fig. 7), although within the lower range of values. Experimental melts from PER-SERP experiments display trace element concentrations that are an order of magnitude lower than melts from PER-SED experiments, and show a slight enrichment in Sr relative to Ce and Nd. In PER-SED experiments, Zr-Hf are slightly enriched compared to Sm and Ti, whereas in PER-SERP experiments, Zr-Hf are not fractionated from Sm and Ti. Trace element concentrations in the melts generally decrease with increasing temperature, consistent with dilution at higher melting extents in the absence of accessory phases that would retain trace elements in the residue. Overall, melts produced from melting of a peridotite source hybridized by mélange rocks (this study) carry trace element signatures typical of natural arc magmas. In particular, peridotite hybridized by serpentinite-dominated and sediment-dominated mélanges produced melts that carry the trace element characteristics of arc tholeiites and calc-alkaline magmas, respectively.
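The spider diagrams and ratio comparisons described above reduce to simple arithmetic: each melt concentration is divided by the corresponding N-MORB reference value, and the diagnostic ratios (Ba/Th, Sr/Y, Th/Nb, K/Ti, Zr/Sm_N, Ce/Yb) are computed directly from the concentrations. The sketch below illustrates that bookkeeping only; the concentrations and the N-MORB values are placeholders, not the measured data or the normalization values used in the study.

```python
# Illustrative only: melt concentrations (ppm) and N-MORB reference values are placeholders.
melt = {"Ba": 120.0, "Th": 1.1, "Nb": 1.4, "K": 4500.0, "Sr": 380.0,
        "Y": 16.0, "Zr": 70.0, "Sm": 2.6, "Ti": 5200.0, "Ce": 18.0, "Yb": 1.5}
n_morb = {"Ba": 6.3, "Th": 0.12, "Nb": 2.33, "K": 600.0, "Sr": 90.0,
          "Y": 28.0, "Zr": 74.0, "Sm": 2.6, "Ti": 7600.0, "Ce": 7.5, "Yb": 3.05}

# N-MORB-normalized pattern (the values plotted on a spider diagram)
normalized = {el: melt[el] / n_morb[el] for el in melt}

# Characteristic arc-like fractionation indices discussed in the text
ratios = {
    "Ba/Th": melt["Ba"] / melt["Th"],
    "Sr/Y":  melt["Sr"] / melt["Y"],
    "Th/Nb": melt["Th"] / melt["Nb"],
    "K/Ti":  melt["K"] / melt["Ti"],
    "Zr/Sm_N": normalized["Zr"] / normalized["Sm"],   # >1 or <1 flags a Zr-Hf excess or deficit
    "Ce/Yb": melt["Ce"] / melt["Yb"],
}

for name, value in ratios.items():
    print(f"{name}: {value:.2f}")
```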
Discussion
Geodynamic models of rising mélange diapirs have predicted an uneven distribution of mélange rocks in the mantle wedge that involves both complete and incomplete mixing of mélange rocks and peridotites 30 . Our experiments investigate a scenario where the peridotite mantle wedge and limited volumes of mélange rocks are fully mixed and form a new hybrid source that partially melts (Fig. 1). Although the extent and volumetric significance of mélange rocks at the slab-mantle interface are still debated, a growing number of studies support their ubiquitous occurrence and importance at the slab-mantle interface. Petrologic modeling 43 , numerical instability analysis of subduction zones 44,45 , and metamorphic P-T-t histories of exhumed high-pressure mélange terranes [46][47][48] strongly support the possibility of exhumation of high-pressure rocks through diapirism within the mantle wedge. Further experiments will model how the path of mélange materials would be affected by the thermal structure of individual subduction zones, but these are beyond the scope of the current study.
For the purpose of this study, we consider that the two end-member mélange matrices from Syros (Fig. 2) offer compositions that represent a reasonable first-order approximation of global mélange variability. Our choice of using natural chlorite schist matrix from Syros (Greece) was guided by the fact that the Syros mélange records the mechanical and metasomatic interactions at P-T conditions appropriate for the slab-mantle interface at depths of about 50-60 km in subduction zones 16,21,23 . In addition, the chlorite ± talc-dominated assemblage in global mélange matrices (including the Syros mélange) is relatively insensitive to pressure 49,50 , making them a reasonable proxy for the type of mélange extending down to sub-arc depths 14,15,51 . Importantly, our natural starting mélange materials record minimal late-stage modification and overprinting during their exhumation, making their mineralogy, elemental, and volatile concentrations 21 closely approximate the in-situ compositions of mélange rocks at the slab-mantle interface. Thus, the present study offers a reasonable approximation of subduction dynamics in which mélange rocks formed at 1.6-2.2 GPa detach from the slab and homogenize with peridotite in the hot zones of the mantle wedge at 1.5 GPa (~45 km depth). Results from our experiments support the idea that primary melts in arcs are not only limited to MgO-rich (up to 15.9 wt.%) basalt but may also resemble trachyandesite and basaltic trachyandesite with MgO contents of around 7 wt.% (Supplementary Data 2), covering the MgO range of primitive arc magmas 33 . All of our experiments display CaO, K2O, Na2O, TiO2, and P2O5 contents that more accurately reproduce the chemistry of global arc magmas compared to previous studies that simulated hybridization of the wedge by discrete slab melts or discrete sediment melts. The fact that the hybrid source is largely peridotite-like (85-95%) explains why realistic, arc-like major element compositions can be produced in our experiments. Indeed, the large dominance of mantle-equilibrated arc magmas from different subduction zones should reflect the fundamental control of mantle peridotites on the major element compositions of primary arc melts 34,52 .
The presence of small mélange components within the mantle wedge significantly affects the trace element budget of melts generated by melting of a mélange-hybridized mantle source. Although hydrous melting of peridotite would typically produce melts that display a MORB-like trace element pattern 53,54 , the trace element compositions of peridotite-mélange melts show striking similarity with global arc magmas, with enriched LILE such as Ba, Th, and K, and depleted HFSE such as Nb and Ti. Previous experimental studies on mantle hybridization by slab melts 37 and sediment melts 42 also produce melts enriched in LILE and depleted in HFSE (Fig. 6d); however, their major element compositions mostly reflect (ultra-)potassic shoshonitic melts (high K2O) that are less widespread in subduction zones worldwide. Traditionally, melts with a high Sr/Y signature have been interpreted as slab melts due to the presence of garnet in the melting residue 55 , while a high Th/Nb signature was interpreted to record a contribution from sediment melts, as Th can be mobilized more efficiently in sediment melts 56 . In addition, high Ba contents have traditionally been ascribed to the addition of fluids 57 . The peridotite-mélange melts plot tightly within the range defined by global arcs for ratios that have traditionally required discrete sedimentary, slab melt, and/or AOC fluid addition to the arc magma source 57 . In particular, the peridotite-mélange melts carry arc-like Sr/Y, Th/Nb, Ba/Th, and K/Ti ratios, among others (Fig. 7).
In nature, there exists a large compositional variability in primitive arc magmas, ranging from arc tholeiites to calc-alkaline and shoshonites. However, such compositional variability and their spatial distributions (or the lack thereof) have not been satisfactorily explained. Primitive arc tholeiites are usually thought to be produced by decompression style melting (similar to MORB), whereas the classically invoked model for the formation of primitive calc-alkaline magmas envisages their production by flux melting of the mantle induced by the addition hydrous slab components. These slab components are responsible for the up to two orders of magnitude higher trace element concentrations of primitive calc-alkaline magmas relative to N-MORB. For instance, the elevated Th-Zr-TiO 2 concentrations of primitive calc-alkaline magmas reflects higher slab contributions in their sources 33 . We have shown that melts produced from melting of a mantle hybridized by sediment-dominated mélanges (PER-SED) strongly resembled primitive calc-alkaline magmas whereas melts produced from melting of a mantle hybridized by serpentinite-dominated mélanges (PER-SERP) strongly resembled primitive arc tholeiites, both in terms of major (e.g., K 2 O, TiO 2 ) and trace element abundances (e.g., Ba, Th, Zr) and in terms of fractionation characteristics (Fig. 6).
It is critical to determine whether those abundances and fractionations are simply inherited from the starting material or whether they are enhanced during melting of the mélange-hybridized peridotite. We make several important observations regarding elemental abundances and fractionations in the melt compared to the starting materials. The bulk starting compositions of PER-SED 95-5 and PER-SERP 85-15 experiments (the two types of experiments that are dominated by an ultramafic component, either peridotite or serpentine) fall either outside of the global arc range or within the lower range of values observed in arcs (Fig. 6a, c). It is thus clear that melting plays an important role in producing elemental abundances that are similar to values observed in global arc magmas.
The bulk composition of PER-SED 85-15 experiments (more strongly influenced by a sediment-dominated mélange) is already similar to global arcs for most elements (Fig. 6b), and less surprisingly, melting produces melts that are also similar to arcs. Yet, regardless of abundances, characteristic element ratios acquire a slightly enhanced "arc-like" signature for most elemental ratios during melting of mélange-hybridized peridotite. Specifically, Ba/ Th, Sr/Y, Zr/Hf, Zr/Sm, and K/Ti slightly increased in melts compared to the starting materials; Ba/Nb, Sr/Nd, and Sm/Nd stay relatively unchanged whereas Th/Nb and Th/Zr slightly decreased compared to the starting materials (Fig. 7). Experimental melts produced from PER-SED experiments have higher Ba than melts produced from PER-SERP experiments because the sediment-dominated mélange matrix initially had a higher Ba content than the serpentine-dominated mélange matrix (Supplementary Figs. 6 and 8). Still, melts that are produced during melting of PER-SED and PER-SERP starting materials have slightly higher Ba/Th, Sr/Y, Zr/Hf, Zr/Sm, and K/Ti and slightly lower Th/Nb and Th/Zr ratios (compared to starting materials), and thus are not only inherited from the starting materials.
In Supplementary Fig. 9, we show that primitive arc magmas mainly record Nb/Ce N < 1 (normalized to N-MORB 58 ), but their Zr/Sm N can be below or above 1 and is unrelated to the magma type. In addition to Nb depletion and low Nb/Ce ratios, depletion in Zr and Hf is seen for example in shoshonites from Sulawesi and Fiji, and in calc-alkaline basalts from Solomon and Bismarck 33 . However, Zr, Hf, and Zr/Hf are actually variable in natural primitive arc magmas. Elevated Zr-Hf and Zr/Sm N (>1) observed in low-degree melts from PER-SED experiments are features that are observed in natural arc magmas such as calc-alkaline andesites from Japan and New Zealand, calc-alkaline basalts from Mexico, and depleted andesites from Izu-Bonin. Meanwhile, low-degree melts from PER-SERP experiments have Zr/Sm N < 1 that overlap with some HFSE-depleted arc magmas such as tholeiitic basalts and andesites from the Japan, Cascades and Tonga arcs. We note that in PER-SED experiments, elevated Zr/Sm (and Hf/Sm) does not reflect inheritance from the mélange matrix (Supplementary Fig. 6). Instead, the variability in Zr-Hf contents and Zr/Hf in natural mélange matrices most likely reflects some Zr-Hf mobility in the absence/destabilization of zircon 15 . Overall, the trace element characteristics of our experimental melts plot well within the range of primitive arc magmas (Fig. 7). Thus, these experiments do not only reproduce elemental abundances (major and trace) but also the elemental fractionations observed in global arc magmas. In addition, we show that although the trace element compositions of peridotite-mélange melts are partly inherited from the mélanges themselves (i.e., some characteristic subduction signatures may be already imprinted at the slab interface), those arc-like abundances and fractionation signatures can be readily produced and variably enhanced during melting of a mélange-hybridized mantle source (i.e., additional fractionation should occur in the mantle wedge).
Using chemical maps and high-resolution BSE images, we did not observe accessory phases, unlike what had been found in pure mélange melt residues 40 . Our results indicate that elements that have similar incompatibilities during pure peridotite melting can still be slightly fractionated during mélange-hybridized peridotite melting. Also, we did not observe HFSE- or REE-compatible accessory phases that could retain these elements in the residue. Niobium depletion was in part inherited from the starting bulk compositions (Supplementary Fig. 6), but we hypothesize that it was enhanced by the preferential partitioning of Nb into orthopyroxene 9 . In particular, the presence of an opx-rich reaction zone in all 72-h experiments could have contributed to Nb depletion in the melts. The opx-rich band is likely due to reaction of hydrous melts with the peridotite assemblage, as has been observed in previous studies 59 . Natural pyroxenites, including orthopyroxenites, have been ubiquitously found in exhumed mantle sections. Previous experimental 27,59 and field-based studies [60][61][62] have pointed out that orthopyroxenites should form as reaction products of hydrous melts and mantle minerals. The ubiquitous occurrence of orthopyroxenites exposed in supra-subduction zone ophiolites such as the Josephine 62 and Coast Range ophiolites 63 and the UHP Maowu Ultramafic Complex 64 , and sampled in arc-related xenoliths 65 , may also potentially record the hybridization of the mantle wedge by mélange materials 31 . Thus, the incorporation of mélange diapirs into the mantle wedge may also have implications for the formation of mineralogical and lithological heterogeneities in the mantle. This study has important implications for the understanding of subduction zone magmatism. During subduction, mélange diapirs may propagate and dynamically mix with the overlying mantle. Our study shows that, depending on the nature and relative contributions of the hybridizing mélange materials in the source of arc magmas, a large variety of primary magmas with characteristic arc-like signatures can be produced. As LILE-enriched shoshonitic melts are expected to form from melting of pure sediment-dominated mélange materials 40 , our study shows that both primitive arc tholeiites and primitive calc-alkaline magmas, which are the two most abundant magma types in subduction zones worldwide, can be produced by melting of mantle hybridized by serpentinite-dominated and sediment-dominated mélange materials, respectively. The rarer occurrence of ultrapotassic shoshonites as compared to tholeiites and calc-alkaline magmas likely reflects the volumetric significance of peridotites in the wedge and the dilution effect due to mixing of mélange materials with mantle wedge peridotites. The absence of systematic along- and across-arc spatial distributions of primitive tholeiitic and calc-alkaline arc magmas is consistent with the complexity involved in mélange-diapir ascent paths, and their eventual distributions and mixing with peridotite in the mantle wedge.
In summary, this experimental study provides unique constraints for the role of mélange materials in arc magmatism, as invoked in previous studies. We have shown that melting of a mélange-hybridized peridotite represents a mechanism to generate melts with major element, trace element and trace element ratios characteristic of tholeiitic and calc-alkaline arc magmas. In these experiments, the compositions of starting materials, P-T conditions, and melting degrees were designed to be as realistic as possible compared to natural processes in the mantle wedge. Where mélanges can form and ascend into the wedge, variations in their compositions, thicknesses, and relative contributions in the arc magma source will likely result in the formation of compositionally diverse primary arc melts and can result in the formation of lithological heterogeneities in the mantle. Mélange transfer from the subducting slab to the mantle wedge may be one of several mechanisms by which arc magmas are produced, but we emphasize that both major and trace element of experimental melts need to be reported to better assess how closely we can simulate arc processes. Although further experiments will help decipher the type and amount of mélange materials that could be involved in individual subduction zones, we show that hybridization of peridotite by buoyant mélange rocks is a viable process to transfer crustal signatures from the slab surface to arc magmas.
Methods
Starting material preparation. Alteration-free, natural peridotite (LZ-1; type locality in Lherz, France) was ground to a fine powder using an agate ball mill. The bulk composition of LZ-1 is similar to DMM 66 in major and trace element compositions (Supplementary Fig. 1) and is here considered to be representative of the peridotite mantle wedge. Following grinding, the LZ-1 powder was loaded into a nickel bucket and preconditioned in a 1-atm vertical gas-mixing furnace at 1100°C with fO2 maintained at the FMQ buffer (Fayalite-Magnetite-Quartz buffer) for 96 h. This fO2 was adjusted by changing the partial pressures of CO and CO2 gases in the furnace, and is within the range of estimated fO2 for the sub-arc mantle 67,68 .
Fig. 7 Trace element ratios of experimental melts compared to natural arc magmas. Trace element fractionations of experimental peridotite-mélange melts (a-c) compared to the bulk starting compositions (yellow, green, and red lines) and global arc ratios defined by the Turner and Langmuir database 35 (white rectangles).
Two chlorite schist matrices from Syros (Greece) were selected to represent two end-member compositions of global mélange rocks: the sediment-dominated mélange matrix (SY400B) and the serpentinite-dominated mélange matrix (SY325). Both natural mélange matrices contain water contents of ~2-3 wt.%. We acknowledge that there exists a wide range in chemical and mineralogical compositions of exhumed mélange rocks worldwide and that there is no single rock material that can represent such wide variability. In order to account for this and capture its first-order variability, we selected two mélange matrix rocks from Syros (Greece) based on mineralogical assemblages (Supplementary Data 1), immobile element chemistry (Fig. 1), and trace element chemistry (Supplementary Fig. 5). In Supplementary Data 1, the mineralogical assemblages of SY400B and SY325 are consistent with being derived from sediment-like and ultramafic/serpentinite-like protoliths, respectively. Using immobile element systematics, Fig. 1 shows a mixing trend between serpentinites and sediment/upper crustal rocks to account for the range in global mélange variability, where mélange material SY400B plots close to the GLOSS composition while SY325 plots close to the DMM composition. In Supplementary Fig. 5, the trace element composition of SY400B closely resembles the GLOSS composition while that of SY325 broadly resembles the DMM-like peridotite, with the exception of some highly fluid-mobile elements (e.g., U, K). SY400B and SY325 from Syros record minimal late-stage modification and overprinting during their exhumation, making their mineralogy, elemental and volatile concentrations 21 closely approximate the in-situ compositions of mélange rocks at the slab-mantle interface. Taken together, the mineralogy, immobile element (Cr vs Cr/Al) systematics and trace element chemistry strongly support the use of mélange materials SY400B and SY325 as representative of the first-order variability in global mélange compositions. Since the Syros mélange is one of the most studied and well-constrained exhumed high-pressure mélange terranes in terms of its metamorphic P-T-t conditions 69,70 and mélange formation 21,71,72 , we have better control on the conditions to which our starting materials have been subjected and on the processes that led to their formation. These natural mélange materials were ground to fine powders using an agate ball mill.
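The bulk compositions of the PER-SED and PER-SERP starting mixes follow from straightforward linear mixing of the peridotite and mélange matrix powders. The sketch below illustrates the arithmetic for 95-5 and 85-15 mixes; the oxide values are hypothetical stand-ins for the measured LZ-1 and mélange compositions, and mass fractions are used here for simplicity although the study quotes the proportions in vol.%.

```python
# Hypothetical oxide values (wt.%) standing in for the measured LZ-1 and mélange matrix
# compositions; only the mixing arithmetic is meant to be illustrative.
peridotite  = {"SiO2": 44.8, "Al2O3": 3.5, "FeOT": 8.2, "MgO": 39.5, "CaO": 3.2, "K2O": 0.03}
melange_sed = {"SiO2": 52.0, "Al2O3": 16.0, "FeOT": 7.0, "MgO": 9.0, "CaO": 6.0, "K2O": 2.5}

def mix(comp_a, comp_b, frac_b):
    """Linear mixing of two bulk compositions; frac_b is the fraction of comp_b
    (treated here as a mass fraction, whereas the experiments quote vol.%)."""
    return {ox: (1.0 - frac_b) * comp_a[ox] + frac_b * comp_b[ox] for ox in comp_a}

for f in (0.05, 0.15):   # 5% and 15% mélange, as in the PER-SED 95-5 and 85-15 mixes
    bulk = mix(peridotite, melange_sed, f)
    print(f"{int(100 * (1 - f))}-{int(100 * f)} mix:",
          {k: round(v, 2) for k, v in bulk.items()})
```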
Experimental setup. Partial melting experiments were performed in a 0.5″ end-loaded solid-medium piston-cylinder device 73 at the Woods Hole Oceanographic Institution (USA). The starting mixes were packed in Au80Pd20 capsules conditioned (Fe-saturated) to minimize Fe loss 36 . The Au80Pd20 capsules were conditioned by packing MORB-like basalt powder (AII92 29-1) in the capsules and firing them at 1250°C in a 1-atm vertical gas-mixing furnace with fO2 maintained at 1 log unit below the FMQ buffer for 48 h. Ideally, we would have used the actual starting materials to condition the capsules, but due to limited quantities of starting materials we considered that a MORB-like basalt would provide enough Fe to saturate the capsules. The silicate glass was removed from the Au80Pd20 capsules using a warm HF-HNO3 bath.
When loading the starting material into the conditioned Au 80 Pd 20 capsules, approximately 35-45 mg of the starting mix was first packed in the capsule and then topped with 3.5-4 mg of vitreous carbon spheres (80-200 µm in diameter) to act as melt entrapments. The capsule was triple-crimped and welded shut. All the experiments were performed in a CaF 2 pressure assembly with pre-dried crushable MgO spacers, straight-walled graphite furnace and alumina sleeves. The sealed capsule was strategically positioned in the assembly such that the top portion of the capsule is the side that contains the vitreous carbons spheres to facilitate easy migration of melt during the experiment. Silica powder was placed in between the sealed capsule and alumina sleeve to fill up the space and maintain the capsule's position. Lubricated Pb foils were used to contain the friable CaF 2 assembly and to provide lubrication between the assembly and the bore of the pressure vessel.
The experiments were performed at 1280-1350°C and 1.5 GPa, relevant to arc magma generation 74,75 . Run duration was set at 72 h after verifying approach to equilibrium from a 3 h to 96-h time-series (see paragraph below). Pressure was applied using the cold piston-in technique 76 where the experiments were first raised to the desired pressure before heating them at desired temperature at the rate of 60°C/min. The friction correction was determined from the Ca-Tschermakite breakdown reaction to the assemblage anorthite, gehlenite, and corundum 77 at 12-14 kbar and 1300°C and is within the pressure uncertainty ( ± 50 MPa). Thus, no correction was applied on the pressure in this study. Temperature was monitored and controlled in the experiments using W 97 Re 3 /W 75 Re 25 thermocouple with no correction for the effect of pressure on thermocouple electromotive force. Temperatures are estimated to be accurate to ±10°C and pressures to ±500 bars, and the thermal gradient over the capsule was <5°C. The experiments were quenched by terminating power supply and the run products were recovered. The capsules were longitudinally cut in half before mounting in epoxy. All the mounted capsules were polished successively on 240-to 1000-grit SiC paper before the final polishing on nylon/velvet microcloth with polycrystalline diamond suspensions (3-0.25 µm) and 0.02 µm colloidal silica. Vacuum re-impregnation of capsules with epoxy was performed to reduce plucking-out of the vitreous spheres during polishing.
Approach to equilibrium. Approach to equilibrium was evaluated by performing a time-series of experiments using the PER-SED (95-5) starting material at 1.5 GPa and 1280°C, with run durations ranging from 3 h to 96 h (Supplementary Fig. 4). It has been shown experimentally that hydrous melting of peridotite produces melts with lower FeO* contents (~6 wt.%) than anhydrous equivalents (~8 wt.%) 36 , but we also observed a decrease in FeOT with increasing run duration, which suggests Fe loss. We speculate that this Fe loss/depletion reflects one or a combination of the following causes: (1) Fe diffusion into the Au80Pd20 capsule due to incomplete Fe saturation during conditioning; (2) formation of an orthopyroxenite reaction zone, which could have further contributed to Fe depletion. Other observations that indicate a close approach to equilibrium in our experiments are the homogeneous distribution of minerals in the matrix away from the reaction zone, and the homogeneous major element compositions within a single capsule.
Another way of assessing equilibrium between the melt and minerals, and of checking whether the experiment behaved as a closed system, is based on the quality of the mass balance calculations performed for all the major elements. Using the MS-Excel optimization tool Solver, we obtained low values for the sum of squared residuals (<0.39) for all the major elements except Fe, attesting to a close approach to equilibrium for all other major oxides in our experiments, and confirming a small amount of Fe loss to the capsule walls. Phase proportions estimated from the mass balance calculations were verified visually in every experiment.
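The Solver-based mass balance described above is a small constrained least-squares problem: find non-negative phase fractions whose weighted phase compositions best reproduce the bulk composition. A minimal Python equivalent is sketched below; the phase and bulk compositions are hypothetical placeholders, not the EPMA data from these experiments, and scipy's non-negative least squares stands in for the Excel Solver setup actually used.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: candidate phases (olivine, opx, cpx, spinel, melt); rows: oxides.
# All numbers are hypothetical placeholders, not measured compositions.
phases = np.array([
    # ol     opx    cpx    sp     melt
    [40.8,  55.0,  52.0,   0.3,  50.5],   # SiO2
    [ 0.1,   3.5,   5.5,  55.0,  17.0],   # Al2O3
    [ 9.5,   6.0,   3.5,  12.0,   6.5],   # FeOT
    [49.0,  33.0,  17.0,  20.0,   9.0],   # MgO
    [ 0.3,   1.8,  20.0,   0.1,  10.0],   # CaO
])
bulk = np.array([45.0, 4.0, 8.0, 37.5, 3.6])   # hypothetical bulk starting mix

# Non-negative least squares: phase mass fractions minimizing the misfit to the bulk,
# playing the same role as the Solver-based mass balance described in the text.
fractions, residual_norm = nnls(phases, bulk)
fractions /= fractions.sum()                    # renormalize so the fractions sum to 1

for name, frac in zip(["olivine", "opx", "cpx", "spinel", "melt"], fractions):
    print(f"{name}: {frac:.3f}")
print("sum of squared residuals:", round(residual_norm**2, 3))
```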
Electron microprobe analysis. Major element compositions of the quenched melts and coexisting silicate minerals from all experimental run products were analyzed using JEOL JXA-8200 Superprobe electron probe micro-analyzer at Massachusetts Institute of Technology. Analyses were performed using a 15 kV accelerating potential and a 10 nA beam current. The beam diameter varied depending on the target point. For quenched melt pools, beam diameters varied between 3 μm to 10 μm (mostly 5 μm) depending on the size of the melt pools. For silicate minerals, a focused beam (1 µm) was used. Data reduction was done using CITZAF package 78 . Counting times for most elements were 40 s on peak, and 20 s on background. In order to prevent alkali diffusion in glass, Na was analyzed first for 10 s on peak and 5 s on background. All phases (melt and coexisting minerals) were quantified using silicate and oxide standards. The compositional maps for different major elements were performed at similar instrumental setup using a focused beam. Major element compositions of melts and minerals are reported in Supplementary Data 2 and 5, respectively.
Secondary ion mass spectrometry. Concentrations of selected trace elements in melt pools (usually <30 µm diameter) were obtained using a Cameca IMS 3f ion microprobe at the Northeast National Ion Microprobe Facility (NENIMF) at the Woods Hole Oceanographic Institution (WHOI). Analyses were done using 16 O − primary ion beam with 8.4 keV voltage, 500 pA to 1 nA current and~10 µm beam diameter. No raster was used in the beam. Positive secondary ions are accelerated to a nominal energy of 4.5 keV. The energy window of the mass spectrometer was set to 30 eV. 30 Si was set as the reference isotope and ATHO-G, T1-G, StHs6/80-G glasses were used as standards 79 . Analyses were carried out in low mass resolution (m/δm = 330) with an energy offset of −85 V. Secondary ions were counted by an electron multiplier. A 1800 µm diameter field aperture size was used for most of the measurements. We did not use the field aperture to block any of the ion image from the sample since the spot was already very small. Each measurement consists of four minutes of pre-sputtering, then five cycles with an integration of 10 s/cycle for 30 Si and 10 s/cycle for elements 88 Sr, 89 Y, 90 Zr, 93 Nb, 138 Ba, and 30 s/cycle for 140 Ce, 143 Nd, 147 Sm, 174 Yb, 180 Hf, 232 Th, and 238 U. Th concentrations are reported if 1SE error is above detection limit. 1SE error for U is below detection limit for all measurements so U is not reported. In total, each analysis spot requires a total analysis time of approximately 60 min. Reduced trace element concentrations of melts obtained by SIMS are reported in Supplementary Data 2. Internal errors from analyses (2 SE) and error from calibration curves (2SE) have been propagated and are incorporated in the total 2SE error reported in the figures and Datasets.
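The total 2SE quoted for the SIMS data combines the internal counting error with the calibration-curve error. A common way to propagate two independent error sources is to add their relative errors in quadrature; the sketch below assumes that convention (the study's exact propagation may differ) and uses made-up numbers.

```python
import math

def combined_2se(concentration, internal_2se, calibration_2se_percent):
    """Combine an internal 2SE (same units as the concentration) with a relative
    calibration 2SE (percent), assuming the two error sources are independent
    and therefore add in quadrature. This mirrors, in spirit, the propagation
    described in the text; the exact procedure used in the study may differ."""
    rel_internal = internal_2se / concentration
    rel_calib = calibration_2se_percent / 100.0
    rel_total = math.sqrt(rel_internal**2 + rel_calib**2)
    return concentration * rel_total

# Hypothetical example: Ba = 120 ppm with a 4 ppm internal 2SE and a 5% calibration 2SE.
print(round(combined_2se(120.0, 4.0, 5.0), 1), "ppm total 2SE")
```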
X-ray fluorescence technique (XRF). Whole-rock elemental concentrations of LZ-1, SY400B, and SY325 were analyzed using the X-ray fluorescence technique for major elements and inductively coupled plasma mass spectrometry for trace elements at the GeoAnalytical Laboratory at Washington State University. As stated before, the whole-rock compositions (major and trace elements) of LZ-1, SY325, and SY400B are reported in Supplementary Data 1.
Major element variability of residual phases. Major element compositions of residual minerals are homogeneous through the capsule in individual experiments, and vary between experiments due to differences in temperature and starting compositions (Supplementary Fig. 10). They are within the range of values observed in peridotites worldwide, although Fe loss probably artificially increased the Mg# of the minerals (93-96 in olivine; 91-95 in clinopyroxene; 92-95 in orthopyroxene). Temperature (1280-1350°C) has a variable effect on mineral compositions. For example, with increasing temperature, olivines display a slight decrease in Al2O3, a constant CaO, and a slight increase in MgO. D(MgO) ol/melt decreases with increasing temperature. Orthopyroxenes display a slight decrease in TiO2 and Al2O3 with increasing temperature, whereas SiO2 and MgO increase, and CaO is constant. As predicted experimentally, D(Al2O3) opx/melt decreases with increasing temperature and D(Na2O) opx/melt increases with increasing temperature 80
|
v3-fos-license
|
2018-12-28T07:05:50.978Z
|
2013-09-28T00:00:00.000
|
122264162
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2013/190785.pdf",
"pdf_hash": "6680ff2d874a963988f78b3fa335614b7a231cb5",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44500",
"s2fieldsofstudy": [
"Engineering",
"Mathematics"
],
"sha1": "6680ff2d874a963988f78b3fa335614b7a231cb5",
"year": 2013
}
|
pes2o/s2orc
|
Coordinated Control for a Group of Interconnected Pairwise Subsystems
Chen Ma and
The implementation of pairwise decomposition is discussed for an interconnected system with uncertainties. Under the concept of system inclusion, two systems with the same expanded system, obtained through the same expansion transformation, are considered approximations of each other. It is proven that a coordinated controller can be found that stabilizes both systems. This controller is contracted from the coordinated controller of the expanded system, with each pairwise subsystem having an information structure constraint taken into consideration. Finally, this controller design process is applied to a four-area power system treated as a group of subsystems with information structure constraints.
Introduction
Complex systems in the real world are usually composed of a large group of interconnected subsystems. The interconnections among the subsystems commonly appear in the dynamics, and not only their weight values but also their connections with other subsystems keep evolving over time. Decentralized control is an ideal control strategy to handle such structural perturbations. The inclusion principle [1][2][3] is widely used as a general mathematical framework for decomposition, for example, in automatic generation control (AGC) for a four-area power system [4][5][6][7][8], formation control of unmanned aerial vehicles [9], and structural vibration control of tall buildings under seismic excitations [10][11][12].
In particular, the pairwise decomposition provided in [4][5][6][7][8] can make full use of the interconnections in the system by treating each pair of subsystems with an information structure constraint as a basic connected unit. Based on the inclusion principle framework, the system is expanded into a much bigger space in a recurrent reverse order, so that the system is completely decomposed. Then a pairwise coordinated controller for the expanded system is constructed by achieving coordinated consensus of each pairwise subsystem in parallel. After proper compensation, the controller can be contracted into the original space to control the original system. However, to apply the pairwise decomposition methodology, an explicitly defined overall system model in a particular superposition form is needed, and this condition may not always be satisfied due to system complexity. The work of this paper is to present an implementation approach of pairwise decomposition for an interconnected system with state uncertainties. As the basis of system expansion and contraction, adequate knowledge of the interconnection structure between pairwise subsystems is necessary, and this is the presumption for applying pairwise decomposition in this paper. For a system whose model is uncertain, the inclusion principle cannot achieve its expansion exactly. But under the circumstance that the interconnection structure of the system is available, an approximate expanded system can be constructed instead. Motivated by the idea that the whole system can achieve high performance only if each part is consistent, an expanded system can be constructed that comprises all pairwise subsystems with information structure constraints of the original system, and this expanded system is treated as an approximate expansion of the original. According to the inclusion conditions, the expanded system can be contracted to the original space. A contraction dual to the expansion can always be found, so that the contracted system and the original system are approximate in their state dynamics. In this way, the coordinated controller that can stabilize the contracted system is also suitable for the original system. Similar to the system level, the coordinated controller of the contracted system is also established by properly contracting that of the expanded system. In fact, this contracted controller can be used directly on the original system. As long as the dynamics of the original system are adequately included in the expanded system, the contracted controller can be used as a suboptimal controller of the original system. The approximation in this paper mainly represents how well the expanded system includes the original system state. However, it is difficult to describe the approximation without a comparison of control performances. Considering the uncertainties of the system state, a static state feedback controller at each subsystem is designed to robustly stabilize the system dynamics. This control design process mainly depends on the decentralized system form; it suits a group of systems which can only use local information, for example, multiagent systems. Moreover, just as with ordinary pairwise decomposition, this process is also able to deal with variations in the information structure constraints.
The organization of this paper is as follows. In the next section, preliminaries of the permuted inclusion principle and system contraction are provided. The main result is presented in Section 2, where the approximate expansion under the concept of system inclusion is discussed, as well as the controller design procedure. In Section 3, a simulation example is provided to illustrate the proposed method on a group of subsystems with information structure constraints.
Preliminaries
The controller design process provided in this paper mainly relies on the permuted inclusion principle [6,7] and system contraction [7,13] that are presented in the following.
Permuted Inclusion Principle
Suppose that system S is a group of N interconnected subsystems and each subsystem S_i is connected to every other counterpart. Then system S can be decomposed into the expanded space of N(N − 1)/2 pairwise subsystems, with pair subscripts arranged in a recurrent reverse order: S_12, S_23, S_13, S_34, S_24, S_14, . . . . Notice that the pairwise subsystems are arranged by a reverse order of the subscript i, and this unnatural order enables the last one or several subsystems of the sequence to disconnect from, or on the contrary connect to, the overall system without impact on the remaining order. This is convenient for representing variations of the system's information structure constraints.
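As a side illustration (not part of the original derivation), the recurrent reverse ordering of the pair subscripts can be generated with a few lines of Python; the function name and the 1-based indexing below are our own choices.

```python
def recurrent_reverse_pairs(n):
    """Enumerate pairwise-subsystem subscripts (i, j) in the recurrent
    reverse order S12, S23, S13, S34, S24, S14, ... described above."""
    pairs = []
    for j in range(2, n + 1):          # subsystem S_j joining the network
        for i in range(j - 1, 0, -1):  # its partners, taken in reverse order
            pairs.append((i, j))
    return pairs

# For N = 4 this yields the N(N - 1)/2 = 6 pairs in the stated order.
print(recurrent_reverse_pairs(4))
# [(1, 2), (2, 3), (1, 3), (3, 4), (2, 4), (1, 4)]
```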
The expected pairwise subsystem order is established by both row and column permutation matrices, which are composed of a series of basic permutation matrices representing a special case of nonsingular transformations. Assume that a subidentity matrix corresponds to each subsystem S_i, as provided in [6,7], where the signs "←" and "→" indicate right and left directional multiplying operations and the basic permutation matrices act on the i-th and (i+1)-th groups of adjacent columns and rows, respectively. The Ñ in (3) indicates the number of subsystems in the expanded system, and here Ñ = N(N − 1). The literature [8] provides an alternative, matrix-position-based form to construct this permutation matrix more simply. Using (p, q) to denote the block position of the subidentity matrices corresponding to pairwise subsystem S_ij, the positions follow from (4). Example 1. Consider an expansion for system S with full network structure and N = 3; its pairwise subsystems can be ordered as S_12, S_23, S_13. According to (4), the block positions of the subidentity matrices corresponding to each S_ij can be obtained; this is equivalent to the result of (2) when N = 3.
Consider an interconnected system S in compact form and its expanded system S̃, where x(t) ∈ R^n, u(t) ∈ R^m, and y(t) ∈ R^l are the state, input, and output vectors of the system S, and x̃(t) ∈ R^ñ, ũ(t) ∈ R^m̃, and ỹ(t) ∈ R^l̃ are those of system S̃. It is supposed that n ≤ ñ, m ≤ m̃, and l ≤ l̃.
For the input-state-output inclusion principle mentioned in [14,15], a definition of the permuted inclusion principle is given.
Call the system S a contraction of system S̃. It is supported by the inclusion principle that all information about the behavior of S is included in S̃, such as stability and optimality. One of the necessary and sufficient conditions for the inclusion is restriction; the following theorem considers the restriction type (d) [2,3,7]. Theorem 3. The system S is a typical restriction of the system S̃ if there is a triplet of full-rank transformation matrices such that the restriction conditions (9) hold. Proof. The proof follows directly from the results in [6,7,14,15]. The systems S and S̃ are related by the transformation matrices together with complementary matrices of proper dimensions. See [6,7] for details.
System Contraction.
One of the difficulties in applying system contraction by the inclusion principle is that the conditions may be too restrictive, and a complete contraction from the given expanded system S̃ to system S will not always exist. It is indicated by the restriction conditions of (9) that the expanded system completely includes S if and only if it is uncontrollable. A natural way to resolve this problem is to introduce an incomplete contraction as an approximation. Split the permuted state matrix Ã into two parts: the part that can be contracted as (8) implies, and a complementary matrix M of proper dimension standing for the remnant after contraction from the expanded space. System S is then a reduced-order model of system S̃ according to the restriction conditions in (9) and (10); taking the state matrix as an example, this incomplete system contraction requires that the contracted part satisfies the restriction while the remnant is absorbed by M. There are arbitrary choices of the expanding transformation matrix. Since this paper is based on the pairwise decomposition methodology, the transformation matrix is chosen in the same form as that in [6,7], which will be presented in the next section. In any case, once the transformation matrix is confirmed according to the inclusion condition, the contraction follows. To satisfy the restriction condition, the contracted state matrix must be chosen so that ||M|| is minimum, and this results in the minimal-norm solution.
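The incomplete contraction can be illustrated numerically. The sketch below assumes the standard restriction condition ÃV = VA and takes the minimal-norm (least-squares) choice A = V⁺ÃV; the restriction residual ÃV − VA is printed as a rough stand-in for the remnant that the text absorbs into the complementary matrix M. The matrices here are random placeholders rather than the pairwise-decomposition matrices of (24).

```python
import numpy as np

n, n_exp = 3, 6                      # original and expanded state dimensions (illustrative)
rng = np.random.default_rng(0)
A_exp = rng.standard_normal((n_exp, n_exp))  # expanded state matrix (placeholder)
V = rng.standard_normal((n_exp, n))          # full-column-rank expansion matrix (placeholder)

# Minimal-norm contraction: A minimizes ||A_exp @ V - V @ A|| in the least-squares sense.
A = np.linalg.pinv(V) @ A_exp @ V

# Restriction residual, a rough proxy for the remnant the text calls M.
residual = A_exp @ V - V @ A
print("contracted A:\n", A)
print("||residual|| =", np.linalg.norm(residual))
```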
Pairwise Decomposition for a Group of Interconnected Subsystems
Assume that system S is composed of a group of N interconnected subsystems as the coordinated control target, S = {S_i}, with each subsystem described by a linear state equation ẋ_i = A_i x_i + B_i u_i and output y_i = C_i x_i, and let S_ij denote a pairwise subsystem with basic interconnection.
The time-varying parameters e_ij(t) and e_ji(t) in (17) describe the dynamic weight values between the connected subsystems S_i and S_j. They represent the information structure constraints of the interconnected system and play a very important role in the system dynamics. In the literature [16], a fundamental interconnection (adjacency) matrix (ē_ij) ∈ R^{N×N} is defined in order to describe the normal structure of a given system graph. This notation can also be used here to indicate whether there is an information structure constraint between a subsystem pair, by the rule that ē_ij = 1 indicates an interconnection from subsystem S_j to S_i, and ē_ij = 0 indicates that there is none. This binary interconnection matrix will be used later in the inclusion principle framework. If one of e_ij and e_ji is equal to 0, the pairwise subsystem S_ij is half connected; the original information structure of S_ij would be changed by using the coordinated control mentioned earlier. In this case, the sequential LQ optimization provided in [3,17] can be consulted to keep the information structure of S_ij. Moreover, note that e_ij and e_ji may both be valued 0 under some circumstances, which means that the pairwise subsystem S_ij will be disjointed. The disconnected modes have been discussed in [6,7]. In particular, when e_ij and e_ji evolve dynamically and force S_ij to disconnect and then reconnect, the discussion is provided in [8].
Theoretically, any existing control technique can be applied to the coordinated control of this pairwise subsystem S_ij. Take pairwise subsystem S_ij in the compact form with x_ij = [x_i^T, x_j^T]^T, u_ij = [u_i^T, u_j^T]^T, and y_ij = [y_i^T, y_j^T]^T, and call the resulting gain the basic coordinated controller if it can stabilize the closed-loop pairwise subsystem. For every pair of subsystems S_ij with information structure constraints, i = j − k, j = 2, 3, . . . , N, k = 1, 2, . . . , j − 1, their basic coordinated controllers can be constructed in this way.
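Since the text leaves the choice of design method open, the following sketch shows one admissible choice of a basic coordinated controller: an LQ state-feedback gain computed for a small pairwise subsystem. The matrices and weights are arbitrary placeholders, not the power-system model used later in the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder pairwise subsystem: states of S_i and S_j stacked, one input per subsystem.
A = np.array([[ 0.0,  1.0,  0.2,  0.0],
              [-1.0, -0.5,  0.0,  0.1],
              [ 0.1,  0.0,  0.0,  1.0],
              [ 0.0,  0.2, -2.0, -0.3]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])
Q = np.eye(4)   # state weight
R = np.eye(2)   # input weight

P = solve_continuous_are(A, B, Q, R)   # Riccati solution
K = np.linalg.solve(R, B.T @ P)        # u = -K x, a candidate basic coordinated controller

# A basic coordinated controller must render the closed-loop pairwise subsystem Hurwitz.
print(np.linalg.eigvals(A - B @ K).real.max() < 0)   # expected: True
```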
As the fundamental idea of pairwise decomposition, a given system should first be expanded following the recurrent reverse order, so that a coordinated controller can be designed to stabilize all of the pairwise subsystems and then contracted to the original space. However, restricted by the mathematical framework of the inclusion principle, it is difficult to expand a system with uncertainties in its dynamics. Considering the procedure of pairwise decomposition, the original states can be almost included in the block-diagonal expanded system which is composed of the state functions of all pairwise subsystems, S̃ = {S_12, S_23, S_13, S_34, S_24, S_14, . . . , S_2N, S_1N}. This block-diagonal system is a reasonable approximate expansion of the original. To achieve this form, the interconnection structure of system S should be available, and this is also the restriction in using the inclusion principle. The interconnection structure is supposed to be given by the fundamental interconnection matrix (ē_ij). By expanding the original space of system S into the bigger space of system S̃ in recurrent reverse order and taking the state matrix as an example, the transformation matrices of the pairwise decomposition can be selected as in (24), with the remaining transformation matrices having the same structure as their counterparts. Notice that there are arbitrary choices of these transformation matrices, and their forms are bound up with the inclusion form. Since the structure of the expanded system S̃ is confirmed, the transformation matrices are also fixed, just as in (24). Considering the permuted inclusion principle, the transformation matrices are permuted accordingly. Therefore the state matrices of systems S and S̃ are related by (12), and the relationship of the state, input, and output vectors can be obtained by Definition 2. At the same time, a virtual system can be constructed as another contraction of system S̃. The state dynamics of the virtual system are in a certain form, and it is raised as an estimation of the original system S. According to the contraction condition, it is possible to use the transformation matrix such that system S̃ can be contracted to the virtual system by (12) after an appropriate compensation. This process will also lead to the same relationship as (26). In this way, system S and the virtual system may share the same state, input, and output vectors, since they have the same expanded system S̃ which is calculated by the same transformation matrices. It can be concluded that system S and the virtual system represent a pair of systems with approximate dynamics, and the bias between them is mainly reflected in the compensation M of the contraction procedure.
Suppose that the expanded system S̃ comprises every state function of pairwise subsystem S_ij in system S, and the pairwise subsystems are arranged in the recurrent reverse order as in (23). Each S_ij is stabilized by the basic coordinated controller (21); then the coordinated controller for S̃ can be constructed in a block-diagonal form. It is clear that a redundant control set is established with all pairwise controllers, which contains all necessary coordinated information for both system S and the virtual system. When the structural form of the estimator is determined, the coordinated controller of system S can be obtained by contracting K together with a proper compensator M. The contraction is checked by the following theorem. Theorem 4. For the systems mentioned above, system S̃ is the expansion for both system S and the virtual system. The state feedback controller u = −Kx can stabilize the closed-loop system of S if the controller of system S can be contracted from the block-diagonal controller and it satisfies (28). Proof. Since the virtual system is a contraction of system S̃, supported by the contraction condition (12), its state function can be rewritten accordingly, and it apparently indicates the controller form of (5). According to the inclusion principle, the corresponding relations follow; moreover, the approximation between system S and the virtual system implies that their state and input vectors coincide, so that the controllers of the two systems are related as in (30) and (31), which conclude that the contracted controller is obtained from the block-diagonal gain K and the compensator M.
Remark 5. The literature [6] provides a sufficient condition of connective stability. But since far more information might be accumulated in the largest singular values of the subsystem matrices, the criterion of connective stability might be somewhat conservative.
The virtual system is used as the estimator of system S, and it may have many possible forms. This diversity mainly impacts the controller design process in determining the compensator M. One of the most challenging problems in controller design for multiagent systems is the estimation of the information structures among agents. Further research on the implementation of pairwise decomposition in systems with dynamic information structure constraints, as well as on the estimation of the interconnection structure, is ongoing. This issue is based on the inclusion principle for time-varying systems [18] and the method for dealing with structure perturbations under the concept of pairwise decomposition [8]. However, in the particular case when the state function of each subsystem satisfies the linear superposition principle, there is a way to determine the structure of M much more easily.
Considering the mathematical framework of the permuted inclusion principle, this position information can be concluded from (8) by using the block row-order of the subidentity matrices. Considering that system S is in full network structure, the row-order of a particular pairwise subsystem S_ij is given by (32). Besides, the row-order of every pairwise subsystem can also be concluded in this way, as in (33), so that the complementary matrix M can be constructed by the following lemma.
Lemma 6. Suppose that system S is in full network structure; the row-order of the subsystems in each pairwise subsystem is concluded as in (32) and (33), and M complements the information-structure-constraint bias between system S and its estimator. Then M can be presented by the information structure of the corresponding pairwise subsystem S_ij as in (34). This matrix structure-based lemma is convenient for calculation, especially for real-time control in practice.
Example 7. Consider again the system S with N = 3 subsystems, whose recurrent reverse order is presented as in (5). According to (32) and (33), the row-orders are computed, and the matrix M can then be constructed by Lemma 6.
Automatic Generation Control (AGC) for a Four-Area Power System
A four-area power system is shown in Figure 1; assume that areas 1, 2, and 3 contain reheat-turbine-type thermal units and area 4 contains a hydro unit. Each pairwise subsystem is interconnected by a tie line indicated by solid lines, and its information structure constraint is indicated by a dotted ellipse. Details of the system description can be found in [19,20]. References [4-8] implement the pairwise decomposition methodology in the procedure of coordinated control for this four-area power system AGC. As a counterpart, the controller design procedure of the new pairwise decomposition modality in this paper is presented here. Considering the system dynamic bias between the original system and its estimator as the approximation, each pairwise subsystem is robustly stabilized in terms of linear matrix inequalities (LMI) [21,22]. Suppose that the system graph is undirected and e_ij(t) = e_ji(t) = 1 for convenience of description. The pairwise subsystem model is provided in (17) and (38); further details of this robust control procedure can be found in [21,22].
According to the permuted inclusion principle, the expanded system S̃ is supposed to contain the pairwise subsystems S_12, S_23, S_13, S_34, S_24, S_14. Choose the transformation matrices by (24); the permutation matrix can then be constructed by (2), where the dimensions of the subidentity blocks are determined by the system state and control input vectors, respectively. In this simulation example, the state blocks are 6 × 6 and the input dimension is 1. Use (34) to construct the complementary matrix M. Figures 2 and 3 illustrate the frequency and tie-line power perturbations of the group of subsystems. The response curves are very similar to those of [4-7].
Conclusion
This paper presents a theoretical study of pairwise decomposition, which can be seen as a reverse modality of this methodology. The proposed approach is able to coordinate an interconnected system with uncertainties, and it can achieve high-quality control performance as well. Moreover, this process is convenient for a group of interconnected subsystems without a superposition-form overall system model, which is the case when only local information is available. Further research is ongoing; one task is to determine the structure of the virtual system as an estimator of the original system S. For this purpose, an update law that fits the features of pairwise decomposition is needed, as well as a calculation framework to deal with structure perturbations effectively enough. The proposed approach can also motivate the application of pairwise decomposition to nonlinear time-variant systems.
Figure 1: Schematic diagram of the four-area power system.
Figure 2: Deviations of frequency among the power system.
Figure 3: Deviations of tie-line power among the power system.
|
v3-fos-license
|
2024-05-26T15:54:37.358Z
|
2024-05-22T00:00:00.000
|
270015147
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/journals/microbiology/articles/10.3389/fmicb.2024.1390371/pdf",
"pdf_hash": "cec7a0bd1ef1e6ae1f49022fae36e970501b6fd4",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44501",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"sha1": "2a9bcdd47e30cbcf8a7311c6b782e3400e181c15",
"year": 2024
}
|
pes2o/s2orc
|
Identification of a putative α-galactoside β-(1 → 3)-galactosyltransferase involved in the biosynthesis of galactomannan side chain of glucuronoxylomannogalactan in Cryptococcus neoformans
The cell surface of Cryptococcus neoformans is covered by a thick capsular polysaccharide. The capsule is the most important virulence factor of C. neoformans; however, the complete mechanism of its biosynthesis is unknown. The capsule is composed of glucuronoxylomannan (GXM) and glucuronoxylomannogalactan (GXMGal). As GXM is the most abundant component of the capsule, many studies have focused on GXM biosynthesis. However, although GXMGal has an important role in virulence, studies on its biosynthesis are scarce. Herein, we have identified a GT31 family β-(1 → 3)-galactosyltransferase Ggt2, which is involved in the biosynthesis of the galactomannan side chain of GXMGal. Comparative analysis of GXMGal produced by a ggt2 disruption strain revealed that Ggt2 is a glycosyltransferase that catalyzes the initial reaction in the synthesis of the galactomannan side chain of GXMGal. The ggt2 disruption strain showed a temperature-sensitive phenotype at 37°C, indicating that the galactomannan side chain of GXMGal is important for high-temperature stress tolerance in C. neoformans. Our findings provide insights into complex capsule biosynthesis in C. neoformans.
Introduction
Cryptococcus neoformans, a basidiomycete yeast, is the primary pathogen responsible for cryptococcosis, a globally prevalent disease that affects immunocompromised individuals, specifically those infected with HIV (Maziarz and Perfect, 2016; Altamirano et al., 2020). Upon pulmonary infection, C. neoformans disseminates to the central nervous system and causes severe meningoencephalitis with a high mortality rate.
Herein, we report that a putative glycosyltransferase belonging to the GT31 family in C. neoformans is involved in GXMGal biosynthesis. Using the cap59 disruption strain (a GXM-deficient strain of C. neoformans) as the parental strain, we constructed disruption strains of GT31 family glycosyltransferases (ggt1, ggt2, and ggt3). A nuclear magnetic resonance and methylation gas chromatography-mass spectrometry analysis of the structure of GXMGal produced by the cap59 and ggt2 double-disruptant strain revealed that the galactomannan side chain was reduced or almost completely lost. The ggt2 disruption strain exhibited a temperature-sensitive (Ts) phenotype at 37°C. These results indicate that, in C. neoformans, the galactomannan side chain of GXMGal has an important role in high-temperature stress tolerance.
Strains and medium
The C. neoformans strains used in this study are listed in Supplementary Table S1. The C. neoformans var. grubii H99 strain was obtained from the Fungal Genetics Stock Center (Kansas City, USA).
Construction of the cap59 disruption strain
CAP59 (CNAG_00721) was disrupted in C. neoformans H99 by inserting NEO using the CRISPR/Cas9 system (Huang et al., 2022). A gene replacement cassette encompassing a 50-bp homology arm at the 5′ end and a 50-bp homology arm at the 3′ end of CAP59 was amplified by recombinant PCR using pNEO_6xHA (So et al., 2017) as a template and the cap59-del-F-cap59-del-R primer pair (Supplementary Table S2). The Cas9 expression cassette was amplified by PCR using pBHM2403 as a template and the M13-F-M13-R primer pair. The sgRNA expression cassette was amplified in two PCR steps. An sgRNA scaffold containing a 20-bp target sequence and the U6 promoter and an sgRNA scaffold containing a 20-bp target sequence and the U6 terminator were amplified by PCR using pBHM2329 as a template and the M13-F-cap59-gRNA-R1 and cap59-gRNA-F2-M13-R primer pairs. The PCR fragments were combined by fusion PCR using the U6-F-U6-R primer pair. All PCR fragments were introduced into C. neoformans by electroporation using a Gene Pulser II (Bio-Rad, Hercules, CA), yielding the cap59Δ strain. Transformants were selected using YPD agar plates supplemented with 200 μg/mL G418. The introduction of NEO into each locus was confirmed by PCR using the cap59-comf-F-cap59-comf-R primer pair (Supplementary Figure S1).
Construction of the cap59, ggt2, and ggt3 triple disruption strain

GGT3 was disrupted in the C. neoformans cap59Δggt2Δ strain by inserting NAT. A gene replacement cassette encompassing a 50-bp homology arm at the 5′ end and a 50-bp homology arm at the 3′ end of GGT3 was amplified by recombinant PCR using pNAT_mCherry (So et al., 2017) as a template and the ggt3-del-F-ggt3-del-R primer pair (Supplementary Table S2). Transformants were selected on YPD agar plates supplemented with 100 μg/mL nourseothricin sulfate. The introduction of NAT into each locus was confirmed by PCR using the ggt3-comf-F-ggt3-comf-R primer pair (Supplementary Figure S2).
Complementation of the ggt2 disruption strain with wild-type GGT2
For complementation analysis of GGT2, a gene replacement cassette encompassing a homology arm at the 5′ end of GGT2, wild-type GGT2 containing the 3′-UTR, the hygromycin B resistance gene (hph), and a homology arm at the 3′ end of GGT2 was constructed by recombinant PCR using H99 genomic DNA and pNAT_mCherry as templates and the ggt2-comp-1-ggt2-comp-2, ggt2-comp-3-ggt2-comp-4, and ggt2-comp-5-ggt2-comp-6 primer pairs. The resultant DNA fragment was amplified with the ggt2-comp-1-ggt2-comp-6 primer pair. An sgRNA scaffold containing a 20-bp target sequence and the U6 promoter and an sgRNA scaffold containing a 20-bp target sequence and the U6 terminator were amplified by PCR using pBHM2329 as a template and the M13-F-HYG-gRNA-R1 and HYG-gRNA-F2-M13-R primer pairs. All PCR fragments were introduced into C. neoformans ggt2Δ and cap59Δggt2Δ by electroporation. Transformants were selected on YPD agar plates supplemented with 100 μg/mL nourseothricin sulfate. The introduction of NAT into each locus was confirmed by PCR using the ggt3-comp-comf-F-ggt3-comp-comf-R primer pair (Supplementary Figure S1).
Measurement of capsule size
C. neoformans strains were cultured in 3 mL of YPD liquid medium at 30°C for 24 h. The cells were then collected by centrifugation, washed three times with sterile phosphate-buffered saline (PBS), suspended in 2 mL of 10% Sabouraud liquid medium, and incubated at 30°C for 24 h to induce capsule production. The culture medium was diluted with PBS, mixed at a ratio of 1:1 with India ink (Syogeikuretake Shikon BB1-18; Kuretake Co., Ltd., Nara, Japan), and incubated for 15 min. Images of stained cells were acquired using a microscope equipped with a digital camera. The diameters of the cells and capsules were measured immediately (50 cells), and the average diameter was calculated.
Preparation of the GXMGal fraction
Purification of GXMGal was performed as described (Rocha et al., 2015). Briefly, cap59Δ strains were cultivated in 1 L of 10% Sabouraud medium at 30°C with shaking (160 rpm) for 5 days. The culture supernatant was collected by centrifugation, mixed with an equal volume of phenol:chloroform, and centrifuged. The collected supernatant was dialyzed overnight at 4°C using a Visking Tube (Nihon Medical Science, Inc., Japan). The polysaccharides were powdered by lyophilization. GXMGal powder was dissolved in 3% cetyltrimethylammonium bromide solution in 1% borate at pH 9.5. The GM fraction was collected as a precipitate, washed with 75% ethanol, dialyzed against water, and lyophilized.
Methylation GC-MS and nuclear magnetic resonance spectroscopy
Glycosidic linkages were analyzed as previously described (Klutts and Doering, 2008; Katafuchi et al., 2017). Briefly, GXMGal was separately dissolved in dimethyl sulfoxide, followed by NaOH addition. After stirring for 3 h, methyl iodide was added, and the suspension was stirred for 24 h. The methylated products were extracted in chloroform and washed using dH2O. Then, the methylated samples were hydrolyzed using 2 M trifluoroacetic acid, reduced, and acetylated. The partially methylated alditol acetates were analyzed by GC-MS using a capillary column (30 m × 0.25 mm; DB-5, Agilent, CA) with helium as the carrier gas and a gradient temperature program of 210°C-260°C at 5°C/min. The GC-MS analyses were performed using a JMS-K9 mass spectrometer (JEOL, Tokyo, Japan). NMR experiments were performed as previously described (Klutts and Doering, 2008; Katafuchi et al., 2017). The NMR spectra were recorded using a JNM-LA600 spectrometer (JEOL) at 45°C. Proton and carbon chemical shifts were referenced relative to internal acetone at δ 2.225 and 31.07 ppm, respectively.
Identification of candidate galactosyltransferases involved in GXMGal biosynthesis in C. neoformans
First, we searched the C. neoformans H99 genome for candidate genes encoding enzymes that transfer the β-galactosyl residue to the hydroxyl group at position 3 of the α-galactosyl residue in GXMGal. S. pombe Pvg3 is a glycosyltransferase that exhibits similar enzymatic activity. Therefore, we selected candidate genes by a PSI-BLAST search using Pvg3 as a query. Two homologous proteins were selected (CNAG_01050 and CNAG_01385). CNAG_01385 is Ggt1, an α-mannoside β-(1 → 6)-galactosyltransferase involved in GIPC biosynthesis in C. neoformans. Then, a BLASTp search was performed using CNAG_01050 as a query to select candidate genes. CNAG_06918 was selected in addition to CNAG_01385. Therefore, we named CNAG_01050 and CNAG_06918 as Ggt2 and Ggt3, respectively, the two α-galactoside β-(1 → 3)-galactosyltransferase candidates. Ggt2 and Ggt3 are members of the GT31 family. Ggt2 has an amino acid sequence homology of 23% (in the 434-645 amino acid region of Ggt1) and 33% (in the 122-320 amino acid region of Ggt3) with Ggt1 and Ggt3, respectively (Figure 1). Ggt1 does not have a transmembrane domain, but analysis using DeepLoc 2.0 (Thumuluri et al., 2022) predicted a transmembrane region at 338-360 aa. Ggt2 and Ggt3 were predicted to be Golgi-localized type II membrane proteins, each with one transmembrane region at N-terminal residues 35-57 and 21-43, respectively, according to DeepLoc 2.0 (Figure 1).
Phenotypic analysis of GGT mutants
To clarify the physiological role of GGT in C. neoformans cells, ggt1, ggt2, and ggt3 single-disruptant strains were constructed using H99 as the parental strain. As uge1Δ and ugt1Δ cannot supply UDP-Gal to the Golgi, they are deficient in GXMGal and GIPC biosynthesis and exhibit a Ts phenotype at 37°C (Moyrand et al., 2008; Li et al., 2017). ggt1Δ lacks GIPC and exhibits a Ts phenotype at 37°C. Therefore, we observed the growth of the GGT disruptants at 37°C (Figure 3). At 30°C, ggt1Δ showed slightly delayed growth compared with the wild type, whereas ggt2Δ and ggt3Δ showed growth similar to that of the H99 strain. By contrast, at 37°C, ggt1Δ and ggt2Δ showed dramatically delayed growth, indicating a Ts phenotype. The growth of ggt3Δ and H99 was similar. These results indicate that the glycan structure synthesized by Ggt2 is important for high-temperature stress tolerance in C. neoformans.
Drug resistance of ggt2 disruptant strain
We examined the growth of ggt2Δ on media containing various drugs. ugt1 disruptant strains show sensitivity to NaCl, Congo red, H2O2, and sodium dodecyl sulfate (SDS) (Li et al., 2017). Therefore, we tested the sensitivity of ggt2Δ to these drugs (Figure 4). ggt2Δ was not significantly sensitive to any of the drugs. Conversely, the Ts phenotype of ggt2Δ was completely rescued by complementation with wild-type GGT2 and slightly rescued by 1 M sorbitol, which exerts high osmotic pressure. These results indicate that the galactomannan side chain of GXMGal biosynthesized by Ggt2 is important for high-temperature stress tolerance in C. neoformans.
Capsule productivity of ggt2 disruptant strain
To further analyze the ggt2Δ phenotype, the capsule structure was stained with India ink for microscopic observation (Supplementary Figure S3). Quantification of cell and capsule sizes revealed no significant differences between ggt2Δ and the wild-type strain. These findings indicated that Ggt2 absence did not affect GXM production.
Role of Ggt2 in GXMGal biosynthesis
To examine the role of Ggt2 in GXMGal biosynthesis, we complemented wild-type GGT2 in cap59Δggt2Δ and analyzed the structure of GXMGal. The structure of GXMGal produced by each strain was analyzed by 1H-NMR (Figure 5). Sharp doublet peaks at 4.98 ppm of the α-galactan backbone seen in cap59Δggt2Δ disappeared due to GGT2 complementation, and the Man-derived chemical shift seen in wild-type GXMGal reappeared.
Phylogenetic analysis of Pvg3, Ggt1, Ggt2, and Ggt3 family proteins belonging to the GT31 family
Sequences of Pvg3, Ggt1, Ggt2, and Ggt3 family proteins were used to construct an evolutionary phylogenetic tree (Figure 7). The data set for the analysis was obtained from FungiDB (https://fungidb.org) using the amino acid sequences of S. pombe Pvg3 and C. neoformans Ggt1, Ggt2, and Ggt3 as search queries. The protein sequences were clearly divided into distinct clades.

FIGURE 3: Phenotype of ggt1, ggt2, and ggt3 single disruptants. Colony morphology of H99, ggt1Δ, ggt2Δ, and ggt3Δ cultured on YPD agar at 30°C and 37°C for 3 days, respectively. The agar medium was inoculated with 10-fold serial dilutions of cells adjusted to 10^6 cells.
This study aimed to identify glycosyltransferases involved in the biosynthesis of C. neoformans capsules. We identified a putative β-(1 → 3)-Gal transferase belonging to the GT31 family that plays a role in GXMGal biosynthesis (Figure 8). Ggt1 is conserved in a wide range of species in the phylum Basidiomycota, whereas Ggt2 is only conserved in certain basidiomycete yeasts, such as Pucciniomycetes and Tremellomycetes (Figure 7). Multiple alignments of Ggt2 homologs revealed that amino acids in the GT-A domain are highly conserved (Supplementary Figure S4). Therefore, Ggt2 homologs may be responsible for synthesizing important glycan structures, including GXMGal. The enzymes involved in capsule biosynthesis have attracted attention as novel antifungal drug targets due to their contribution to virulence (Almeida et al., 2015). Many putative glycosyltransferases involved in GXM biosynthesis have been identified as CAP genes, because they can be easily screened based on phenotypes such as India ink-negative staining (Fromtling et al., 1982). However, identification of glycosyltransferases involved in capsule biosynthesis is challenging, and only a few glycosyltransferases have been identified: the α-(1 → 3)-Man transferase Cmt1 (Doering, 1999; Sommer et al., 2003) is involved in GXM biosynthesis, and the β-(1 → 2)-Xyl transferases Cxt1 and Cxt2 are involved in GXMGal biosynthesis (Klutts et al., 2007; Klutts and Doering, 2008; Reilly et al., 2009; Wang et al., 2018). We believe that our discovery of Ggt2 will contribute to studies on capsule biosynthesis in C. neoformans.

FIGURE 4: Drug sensitivity of the ggt2 disruptant. Colony morphology of H99, ggt2Δ, and ggt2Δ + GGT2 on YPD agar supplemented with or without 1 M sorbitol, 1 M NaCl, 1 mg/mL calcofluor white (CFW), 1 mg/mL Congo red (CR), 5 mM H2O2, and 0.005% sodium dodecyl sulfate (SDS) at 30°C and 37°C for 3 days. The agar medium was inoculated with 10-fold serial dilutions of cells adjusted to 10^6 cells.

Methylation GC-MS and NMR analyses revealed the detailed role of Ggt2 in GXMGal biosynthesis (Table 1; Figure 6). The loss or severe reduction of the galactomannan side chain of GXMGal in the ggt2 disruptant strain indicates that Ggt2 is the only α-Gal β-(1 → 3)-Gal transferase involved in GXMGal biosynthesis (Figure 2), because Ggt1 and Ggt2 accept α-mannoside and α-galactoside as receptor substrates, respectively. This is a logical result, because the structure of the sugar chains involved in their biosynthesis suggests that Ggt1 and Ggt2 are likely to use α-mannoside and α-galactoside as their acceptor substrates, respectively. The GXMGal structure between the ggt3Δ and wild-type strains was not notably different, and ggt3Δ did not exhibit a Ts phenotype, suggesting that Ggt3 is not involved in GXMGal or GIPC biosynthesis (Figures 3, 4). The function of Ggt3 must be analyzed in detail. Interestingly, methylation GC-MS analysis detected methyl-esterified sugars that may have originated from glycans other than GXM and GXMGal. These indicate the presence of unknown glycan structures, such as N- or O-glycans or glycolipids. Considering how GXMGal was first discovered in the culture supernatant of a mutant strain lacking GXM (Cherniak et al., 1982), C. neoformans may possess unknown glycan structures. Thus, the detailed structures of these glycans should be clarified in the future.
Phenotypic analysis of the gene disruptant strains revealed the physiological functions of Ggt2. ggt2Δ, like ggt1Δ, exhibited a Ts phenotype at 37°C, indicating that the galactomannan side chain of GXMGal, like GIPC, is important for high-temperature stress tolerance in C. neoformans. Notably, GXMGal is important for high-temperature stress tolerance in C. neoformans, although it is less abundant than GXM. In C. neoformans, Saccharomyces cerevisiae, and Aspergillus fumigatus, O-glycan deficiency leads to reduced high-temperature stress tolerance and disruption of cell wall integrity (Wagener et al., 2008; Kadooka et al., 2022; Thak et al., 2022). However, ggt2Δ was insensitive to Congo red and calcofluor white, which are inhibitors of cell wall synthesis, suggesting that loss of the galactomannan side chains of GXMGal does not affect cell wall integrity in C. neoformans. Additionally, ugt1Δ is sensitive to Congo red, NaCl, and SDS (Li et al., 2017), but ggt2Δ is not, suggesting that the phenotype of ugt1Δ is not due to the loss of the galactomannan side chains of GXMGal but rather due to the loss of GIPC. Consistently, GXM-deficient mutants were sensitive to NaCl and SDS (Li et al., 2018a,b). The phenotypic differences among polysaccharide-deficient mutants are interesting and should be analyzed further.
We used a bacterial heterologous expression system to generate recombinant Ggt2 and measured its β-Gal transfer activity to 4-methylumbelliferylated α-Gal. However, we could not detect glycosyltransfer activity (data not shown). This may be characteristic of the substrate specificity of Ggt2. The galactomannan side chain of GXMGal is added in succession, probably because Ggt2 recognizes di- or trisaccharide α-galactooligosaccharides as substrates and can only transfer β-Gal to a certain location. Another hypothesis is that Ggt2 does not exhibit glycosyltransferase activity by itself. For example, S. pombe Pvg3, which belongs to the GT31 family, is not active alone but exhibits glycosyltransferase activity by forming a complex with several proteins (Fukunaga et al., 2023). The enzymatic features of Ggt2 may be essential for the formation of the unique GXMGal structure and should be studied in more detail.
In conclusion, we have partially identified the mechanism of GXMGal biosynthesis in C. neoformans. Our findings will contribute substantially to understanding the structure and biosynthesis of the fungal cell wall and to developing anticryptococcal agents.

FIGURE 7: Phylogenetic analysis of Ggt1, Ggt2, and Ggt3 homologs in basidiomycetes and Pvg3 homologs in basidiomycetes and fission yeasts. Protein sequences were downloaded from FungiDB. The phylogenetic tree was drawn using iTOL. Alignment and phylogenetic tree inference were performed using MAFFT and RAxML, respectively, included in ETE v3.

Publisher's note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
FIGURE 1: Schematic of the Ggt1, Ggt2, and Ggt3 proteins. The vertical black bars indicate transmembrane (TM) domains of Ggt1 (338-360 aa), Ggt2 (35-57 aa), and Ggt3 (21-43 aa). The gray bars indicate GT-A fold domains of Ggt1, Ggt2, and Ggt3. The dark gray bar indicates an unknown domain of Ggt1. The identities of the amino acid sequences of Ggt1 and Ggt2, and of Ggt2 and Ggt3, in the GT-A fold domain are indicated.
TABLE 1: Methylation analysis of GXMGal from the ggt2 disruptant. b: n.d. means none detected.
|
v3-fos-license
|
2021-08-02T00:05:21.377Z
|
2021-05-17T00:00:00.000
|
236552232
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://opendermatologyjournal.com/VOLUME/15/PAGE/36/PDF/",
"pdf_hash": "cd56eb67343df4853d6b8404bd19e0bb4aed1240",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44502",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "2afbd962c1c8373d3e5cceaffb38be5ac37d283a",
"year": 2021
}
|
pes2o/s2orc
|
Case Report: Successful Treatment of Giant Condyloma Acuminata with Intralesional Injection of Purified Protein Derivative
Condyloma acuminata can be treated with different modalities, including topical and destructive procedures. However, treatment of large recalcitrant lesions tends to be difficult with the risk of pain, scarring and recurrence. Here we report a case of a large foul-smelling condyloma acuminata, treated successfully with intralesional purified protein derivative (PPD) injections.
INTRODUCTION
Warts are the most common cutaneous infection caused by the human papillomavirus (HPV) via direct or indirect contact. Condyloma acuminata, large cauliflower-like genital warts, are manifestations of anogenital human papillomavirus infection. They may be associated with pain, foul-smelling discharge and negative psychological effects. They spread through direct skin-to-skin contact, usually during oral, genital, or anal sex with an infected partner [1]. Treatments include topical creams, destructive methods, and immunotherapy. There is increasing evidence for using intralesional immunotherapy for recalcitrant, recurrent, and extensive genital warts. It is non-destructive, easy to use and less painful [2]. Although several vaccines have been used to clear warts, few studies have assessed the effect of intralesional (IL) purified protein derivative (PPD) in treating genital warts, and we found hardly any study or case report assessing the effect of IL PPD injections in treating or debulking large condyloma acuminata. The advantages of PPD over other immunotherapies are its ubiquity, ease of access and low cost.
CASE
A 26-year-old man with no significant past medical history presented with multiple large warty lesions on the pubic area of more than 5 months' duration. The lesions had been rapidly progressive, coalescing to form a giant mass with a warty surface. There was a history of multiple unprotected heterosexual encounters with multiple partners. Examination revealed multiple large warty lesions with foul-smelling discharge on the pubic area. There were no urethral discharge, genital ulcers or lymph node enlargement.
Routine blood investigations were normal and serological screening tests for HIV, HBV, HCV and syphilis were negative. In view of the size of the lesion, destructive treatment modalities were discussed, but the patient declined. Therefore, intralesional immunotherapy with purified protein derivative (PPD) was administered in a biweekly dose of 0.2-0.3 ml distributed over two to three different sites, at least two centimeters apart. No topical or local anesthesia injections were needed as the patient felt very little discomfort with this procedure. More than 98% clearance was achieved after 7 sessions (Figs. 1 and 2). Subsequently, four sessions of cryotherapy and topical podophyllin were given to treat the few remaining small warty papules. The lesions were successfully eradicated with no recurrence ten months after the last visit (Fig. 3).

Fig. (3). Condyloma acuminata successfully eradicated after 4 sessions of cryotherapy and podophyllin for the remnant warty papules, with slight hypopigmentation and no recurrence for ten months.
DISCUSSION
Condyloma acuminata refers to anogenital warts caused by the human papillomavirus (HPV), which is the most common sexually transmitted infection. HPV 6 and 11 are the most common strains that cause anogenital warts [3]. Genital warts are usually asymptomatic and can be found most commonly on the cervix, vagina, perineum, penis, scrotum and perianal skin. Most genital warts are seen in people between the ages of 16 and 29 years, which is similar to other sexually transmitted diseases such as gonorrhea, syphilis and genital herpes simplex [4,5].
Topical and systemic immunotherapies have now found a significant place in the treatment of warts because of their non-destructive nature, ease of use and promising results [2]. They are becoming more popular, especially in the treatment of refractory cutaneous and genital warts. They act by enhancing or inducing the cell-mediated immune system to target and destroy the infected cells. Some of these agents are injected intralesionally and include PPD, BCG vaccine, MMR vaccine, candida antigen and trichophyton antigen [6]. Intralesional immunotherapy can be used in different age groups, even in children, as Nofal and coworkers mentioned in their study [7].
Many studies have used IL PPD alone or alternating with other types of immunotherapy in treating different types of warts [8,9]. It can be used as a valuable first-line treatment in difficult-to-treat sites such as palmoplantar and periungual warts. In addition, immunostimulation with IL PPD provides increased chances of treating warts at distant sites and attaining a retained immune response for life [10,11]. Riza and his colleagues found that immunostimulation with IL PPD is dose-dependent and that multiple injections may be used for faster clearance [12].
Although many studies have been performed to detect the effect of immunotherapy in treating different types of warts, sparse reports exist on the effect of IL PPD in treating condyloma acuminata. In 2011, Eassa and his colleagues published a study showing that 85% of pregnant women with anogenital warts improved after receiving weekly intradermal PPD injections [2]. This study showed that PPD is safe in pregnancy compared to other immunotherapies like MMR. Another study, done in 2005 by Metawea, showed a significant response of condyloma acuminata to topical applications of BCG vaccine [13]. Recently, Gupta published the first case of a female patient with extensive genital warts successfully treated with intralesional immunotherapy in the form of Bacillus Calmette-Guérin (BCG) vaccine [14].
CONCLUSION
In this case report, we present a case of giant condyloma acuminata treated successfully with intralesional PPD injections. More than 98% clearance was seen after 7 sessions of treatment. This promising option can be used as a primary treatment modality and should be considered before deciding to expose the patient to destructive methods or wide local surgical excision for giant condyloma acuminata.
ETHICS APPROVAL AND CONSENT TO PARTICIPATE
This study was approved by the Forces Medical Services Ethics Committee under approval code FMC-EMC 001/2021.
HUMAN AND ANIMAL RIGHTS
No Animals were used in this research. All human research procedures followed were in accordance with the ethical standards of the committee responsible for human experimentation (institutional and national), and with the Helsinki Declaration of 1975, as revised in 2013.
CONSENT FOR PUBLICATION
Informed consent was obtained from the patient involved in the study.
STANDARDS OF REPORTING
CARE guidelines have been followed.
AVAILABILITY OF DATA AND MATERIALS
The data supporting the findings of this study are available within the article.
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2016-03-03T00:00:00.000
|
18657979
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1371/journal.pone.0150526",
"pdf_hash": "9e72e650c1959b6ae68c308d1d3b403df32b0e52",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44503",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"sha1": "9e72e650c1959b6ae68c308d1d3b403df32b0e52",
"year": 2016
}
|
pes2o/s2orc
|
Mode of Action of the Sesquiterpene Lactones Psilostachyin and Psilostachyin C on Trypanosoma cruzi
Trypanosoma cruzi is the causative agent of Chagas' disease, which is a major endemic disease in Latin America and is recognized by the WHO as one of the 17 neglected tropical diseases in the world. Psilostachyin and psilostachyin C, two sesquiterpene lactones isolated from Ambrosia spp., have been demonstrated to have trypanocidal activity. Considering both the potential therapeutic targets present in the parasite and the several mechanisms of action proposed for sesquiterpene lactones, the aim of this work was to characterize the mode of action of psilostachyin and psilostachyin C on Trypanosoma cruzi and to identify the possible targets for these molecules. Psilostachyin and psilostachyin C were isolated from Ambrosia tenuifolia and Ambrosia scabra, respectively. Interaction of sesquiterpene lactones with hemin, the induction of oxidative stress, the inhibition of cruzipain and trypanothione reductase, and their ability to inhibit sterol biosynthesis were evaluated. The induction of cell death by apoptosis was also evaluated by analyzing phosphatidylserine exposure detected using annexin-V/propidium iodide, decreased mitochondrial membrane potential assessed with Rhodamine 123, and nuclear DNA fragmentation evaluated by the TUNEL assay. Both STLs were capable of interacting with hemin. Psilostachyin increased the generation of reactive oxygen species in Trypanosoma cruzi about 5-fold after a 4 h treatment, unlike psilostachyin C, which induced an increase in reactive oxygen species levels of only 1.5-fold. Only psilostachyin C was able to inhibit the biosynthesis of ergosterol, causing an accumulation of squalene. Both sesquiterpene lactones induced parasite death by apoptosis. Upon evaluating the combination of both compounds, an additive trypanocidal effect was observed. Despite their structural similarity, both sesquiterpene lactones exerted their anti-T. cruzi activity through interaction with different targets. Psilostachyin accomplished its antiparasitic effect by interacting with hemin, while psilostachyin C interfered with sterol synthesis.
Introduction
American Trypanosomiasis, or Chagas' disease, is caused by the protozoan parasite Trypanosoma cruzi. This parasitosis is endemic in 21 countries of Latin America and about 7 to 8 million people are affected worldwide. The parasite is transmitted to humans mainly by the faeces of triatomine bugs known as "kissing bugs", by blood transfusion, organ transplantation, vertically and, to a lesser extent, by food contaminated with T. cruzi. The disease has two clinical stages: the acute stage, in which 5% of children die, and a chronic stage. In the chronic phase, up to 30% of patients suffer from cardiac disorders and up to 10% suffer from digestive, neurological or mixed alterations. The outcome of this phase may sometimes be related to sudden death or heart failure due to progressive destruction of the heart muscle [1,2]. Currently available drugs to treat this parasitosis, benznidazole and nifurtimox, have side effects that can lead to therapy discontinuation.
In recent years, important progress has been made in the knowledge of the biology and biochemistry of T. cruzi. These efforts have led to the identification of potential targets for Chagas' disease chemotherapy. The ergosterol biosynthesis and trypanothione pathways, the cysteine protease (cruzipain, CP) and thiol-dependent redox metabolism are considered the most promising biochemical targets for rational drug design [3].
Natural products have played an important role in the drug discovery process [4]. Regarding the treatment of parasitic diseases, the sesquiterpene lactone (STL) artemisinin and the alkaloid quinine, and their derivatives, are currently being used for the treatment of malaria [5]. STLs are an important group of natural compounds with pharmaceutical potential [6]. These compounds are mainly found in species of the Asteraceae family and have shown significant activity against trypanosomatids such as Trypanosoma spp. and Leishmania spp. [7][8][9].
In previous reports we have described the isolation of two sesquiterpene lactones (STLs) (Fig 1), psilostachyin (Psi) and psilostachyin C (PsiC), from species of the genus Ambrosia (Asteraceae), and we have described their antiprotozoal activity [10-14]. In this work we have evaluated the effects of Psi and PsiC on different targets and metabolic pathways of T. cruzi. The assays were selected taking into account some of the targets for the development of new trypanocidal drugs and some of the antiprotozoal mechanisms of action proposed for STLs, such as hemin interaction, ergosterol biosynthesis, generation of oxidative stress and apoptosis induction.
Test compounds and reagents
Psi and PsiC have been isolated from Ambrosia tenuifolia and Ambrosia scabra, respectively [5,8]. The purity of Psi and PsiC was 96.8 and 95.5%, respectively as confirmed by high-performance liquid chromatography (HPLC) analysis.
Standard solutions of these compounds were prepared in dimethyl sulphoxide (DMSO) at a final concentration that never exceeded 0.5%. Hemin, artemisinin, NADPH, Rh123 and H2DCFDA were obtained from Sigma Chem. Co. (Saint Louis, MO, USA). Yeast extract, tryptose, powdered beef liver and brain heart infusion were from Difco Laboratories (Sparks, MD, USA). Benznidazole (Bnz) was kindly provided by Roche (Argentina). All other chemicals were of the highest purity commercially available.
In vitro assays for anti-Trypanosoma cruzi activity
To evaluate growth inhibition of T. cruzi epimastigotes, the percentage of inhibition (I%) and IC 50 values (50% inhibitory concentration) for Psi and PsiC were estimated by counting the parasites using a Neubauer chamber, as previously described [11].
The variations in the IC50 values of both STLs were analyzed for parasites cultured in the presence of different hemin concentrations, and also to evaluate the potential interaction between Psi and PsiC.
Hemin binding assay
The interaction of the STLs with hemin was measured spectrophotometrically under reducing and non-reducing conditions by monitoring the Soret absorption band of hemin, using the methodology described by Taylor et al. (2004) with slight modifications [15]. The assay system under non-reducing conditions consisted of 0.23 M sodium phosphate buffer pH 7.4, 1% SDS, 7.5 μM hemin (protoporphyrin IX-Fe(III)) and different concentrations of Psi and PsiC (7.5 to 45 μM). Sodium dithionite (14 mM) was added to the solution to evaluate the interaction between the compounds and heme (protoporphyrin IX-Fe(II)). The absorption spectra were recorded using a Hewlett Packard 8452 diode array spectrophotometer. The absorbance ratio at 430 and 400 nm (A430/A400) was used to quantify changes in the shape of the Soret band. Artemisinin was used as positive control.
Cruzipain inhibition assay
A partially purified fraction containing cruzipain (CP) was obtained from a cell-free extract of T. cruzi epimastigotes by ConA-Sepharose affinity chromatography, as previously described [16]. The enzymatic activity was assayed with the synthetic chromogenic substrate Bz-Pro-Phe-Arg-pNA [17]. The reaction was monitored spectrophotometrically at 410 nm. The E-64 protease inhibitor was used as reference drug.
Trypanothione reductase inhibition assay
A partially purified fraction containing trypanothione reductase (TryR) was obtained from a cell-free extract of T. cruzi epimastigotes as previously described [18]. The enzymatic activity was determined following NADPH oxidation at 340 nm, at 25°C [19]. The corresponding non-enzymatic conversion controls were performed.
Intracellular oxidative activity assay
The induction of intracellular oxidative stress was assessed using the oxidant-sensitive fluorescent probe H2DCFDA. T. cruzi epimastigotes growing in logarithmic phase were incubated with Psi or PsiC (35 μM) for 4, 8 or 24 h. Treated parasites were harvested and stained for 45 min in the dark with 10 μM H2DCFDA at 37°C. As positive control, parasites were treated with 0.2 mM H2O2. The fluorescence intensity of dichlorofluorescein (DCF) in cells was then analyzed in a Becton Dickinson FACSCalibur flow cytometer with an excitation wavelength of 480 nm and an emission wavelength of 530 nm. Results were expressed as the ratio Gm_t/Gm_c, where Gm_t and Gm_c correspond to the geometric means of the histograms obtained for treated and untreated (control) cells, respectively [20].
Electrochemical behaviour
Cyclic voltammograms for Psi and PsiC dissolved in 1% methanol were carried out using an EQMAT instrument with an EQSOFT Processor at a sweep rate of 0.2 V/s under a nitrogen atmosphere at room temperature and employing lithium perchlorate as supporting electrolyte. A three-electrode cell was used: a working electrode equipped with vitreous carbon; a gold wire as auxiliary electrode and a saturated calomel reference electrode [21].
Analysis of ergosterol biosynthesis
Trypanosoma cruzi epimastigotes previously treated with 35 μM Psi or PsiC or 50 μM terbinafine (positive control) for 24 h were harvested by centrifugation at 10,000 × g for 10 min and then washed once with 0.05 M sodium phosphate buffer pH 7.4. Cells were resuspended in 2.0 mL chloroform:methanol (2:1, v/v). Lipid extraction was completed after the suspension was sonicated in a Soniprep 150 (MSE Ultrasonic Power) using two cycles of 30 s each and heated at 50°C for 30 min. After centrifugation at 500 × g for 5 min, the organic phase was separated and the extraction was repeated twice with 1 mL of chloroform:methanol (2:1, v/v). The organic phases were then pooled, washed with 0.25 volume of 0.88% KCl and evaporated. Residues were dissolved in chloroform and analyzed by TLC employing silica-gel 60 plates (Merck) developed in two runs, using first hexane (to separate squalene from ergosterol) and then hexane:EtOAc (8:2, v/v) as eluents. Chromatograms were obtained by staining the plates with 1% CuSO4 in 8% H3PO4 and heating at 100°C. Ergosterol, lanosterol and squalene standards were run in parallel. Relative band intensities were determined by densitometry using the Scion Image software (Scion). Results were expressed in arbitrary units [17].
Assays to evaluate cell death
These assays were performed with T. cruzi epimastigotes (2.5 × 10^7 cells/mL) treated with Psi or PsiC (35 and 350 μM) for 8-72 h. Treated cells were harvested and washed, and the following assays were carried out. For the evaluation of parasite death, cell viability and phosphatidylserine (PS) exposure were measured. Annexin V-fluorescein isothiocyanate (FITC) (Invitrogen™) and propidium iodide (PI) staining were performed following the manufacturer's instructions. As positive control, epimastigotes exposed to 30% fresh human serum for 2 h at 28°C were used. Parasite death was assessed by flow cytometry, acquiring 20,000 events per sample [21].
The mitochondrial membrane potential was assessed by Rh123 staining. After treatment with Psi and PsiC, parasites were suspended in PBS (2 × 10^6 cells/mL) with 10 mg/L Rh123 and incubated for 15 min at 37°C. Trifluoromethoxy carbonyl cyanide phenylhydrazone (FCCP) (250 nM) was used as positive control. Samples were analysed by flow cytometry with an excitation wavelength of 480 nm and an emission wavelength of 530 nm. A total of 20,000 events were acquired and the variations in fluorescence were quantified using an index of variation (IV) calculated as IV = (Gm_t − Gm_c)/Gm_c, where Gm_t and Gm_c correspond to the geometric means of the histograms obtained for treated and untreated (control) cells, respectively. Negative IV values correspond to depolarization of the mitochondrial membrane [22].
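For readers unfamiliar with these flow-cytometry indices, the arithmetic is simple once the geometric means of the histograms are available; the sketch below uses made-up fluorescence values purely for illustration.

```python
import numpy as np

def geometric_mean(values):
    """Geometric mean of per-cell fluorescence intensities (all values must be > 0)."""
    values = np.asarray(values, dtype=float)
    return float(np.exp(np.mean(np.log(values))))

# Hypothetical Rh123 intensities for control and treated parasites.
gm_c = geometric_mean([520, 480, 610, 550, 500])   # untreated (control)
gm_t = geometric_mean([260, 300, 240, 280, 310])   # treated

# Index of variation: negative values indicate mitochondrial membrane depolarization.
iv = (gm_t - gm_c) / gm_c
print(f"IV = {iv:.2f}")

# The oxidative stress experiments instead report the simple ratio Gm_t/Gm_c.
print(f"Gm_t/Gm_c = {gm_t / gm_c:.2f}")
```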
To analyze DNA fragmentation, a terminal deoxynucleotidyltransferase-mediated fluorescein dUTP nick end-labeling technique (DeadEnd Fluorometric TUNEL System, Promega, Madison, USA) was carried out following the manufacturer's instructions. Parasites were pretreated for 10 min at room temperature with 10 IU/mL DNase I prior to the TUNEL for positive control. A negative control was performed in the absence of the terminal transferase. Samples were incubated with 1 μg/mL 4,6-diamidino-2-phenylindole (DAPI) for DNA labeling, which allows visualization of the parasites' nuclei. Samples were mounted in triplicate and examined immediately using an Olympus microscope [23].
Drug interaction experiments
To evaluate the combined effect of Psi and PsiC, the IC50 of each compound was determined in the presence of different concentrations of the other. The fractional inhibitory concentration (FIC) of each compound was calculated as the ratio of its IC50 in combination to its IC50 alone. The FIC index (FICI) for the two compounds was the FIC of Psi plus the FIC of PsiC. FICI values ≤0.5 were considered to indicate synergy, values >4.0 antagonism, and values between 0.5 and 4.0 no interaction [24].
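As a worked illustration of the FIC/FICI arithmetic described above, the short sketch below computes the index and applies the stated cut-offs. The IC50 values used in the example are invented for demonstration and are not results from this study.

```python
def fic_index(ic50_a_combo: float, ic50_a_alone: float,
              ic50_b_combo: float, ic50_b_alone: float) -> float:
    """FIC of each compound = IC50 in combination / IC50 alone;
    FICI = FIC(compound A) + FIC(compound B)."""
    return ic50_a_combo / ic50_a_alone + ic50_b_combo / ic50_b_alone

def interpret_fici(fici: float) -> str:
    """Cut-offs as stated in the text: <=0.5 synergy, >4.0 antagonism."""
    if fici <= 0.5:
        return "synergy"
    if fici > 4.0:
        return "antagonism"
    return "no interaction"

# Hypothetical IC50 values in micromolar (illustration only):
fici = fic_index(ic50_a_combo=4.0, ic50_a_alone=7.0,
                 ic50_b_combo=3.0, ic50_b_alone=5.0)
print(f"FICI = {fici:.2f} -> {interpret_fici(fici)}")
```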
Statistical analysis
Results are representative of three to four separate experiments, performed in duplicate or triplicate. Data are expressed as means ± standard errors of the mean (SEM). To calculate the IC50 values, inhibition percentages (I%) were plotted against the log of drug concentration (μM) and fitted to a sigmoidal curve by non-linear regression (SigmaPlot 12 software). The significance of differences was evaluated using Student's t test or one-way ANOVA; p values < 0.05 (*) and < 0.01 (**) were considered significant. Flow cytometry data were analyzed employing WinMDI 2.9 software.
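A sigmoidal fit of inhibition versus log concentration, as performed here in SigmaPlot, can be sketched as below using a four-parameter logistic model. The dose-response values are illustrative only, and this parameterization is one common convention rather than necessarily the exact model used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(log_conc, bottom, top, log_ic50, hill):
    """Four-parameter logistic: inhibition (%) vs log10 drug concentration."""
    return bottom + (top - bottom) / (1 + 10 ** ((log_ic50 - log_conc) * hill))

# Hypothetical dose-response data (concentrations in uM, inhibition in %):
conc = np.array([1, 3, 10, 30, 100], dtype=float)
inhibition = np.array([8, 22, 55, 80, 95], dtype=float)

popt, _ = curve_fit(sigmoid, np.log10(conc), inhibition,
                    p0=[0, 100, np.log10(10), 1.0])
ic50 = 10 ** popt[2]
print(f"IC50 ~ {ic50:.1f} uM")
```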
Hemin binding
Psi and PsiC belong to the same phytochemical group as artemisinin, a STL currently used as an antimalarial drug. Since artemisinin has been demonstrated to exert its antiprotozoal activity through intracellular heme binding [9], it was of great interest to investigate the interaction between these STLs and hemin.
It is known that in the metabolism of heme-deficient parasites, hemin [protoporphyrin IX-Fe(III)] is a product of the digestion of hemoglobin. Hemin is then converted to hemozoin, a polymerized non-toxic form of heme [19]. To evaluate whether heme could be a target of Psi and PsiC, the affinity of both STLs to hemin was determined (Table 1). These STLs showed a considerable interaction with hemin, even higher than that shown by artemisinin. In fact, higher hemin:artemisinin ratios were required to achieve similar Soret band shifts to those exerted by Psi and PsiC. Under non-reducing conditions, the affinity of PsiC for hemin was higher than that of Psi. Under reducing conditions, PsiC decreased its affinity for hemin, unlike Psi, whose affinity for hemin was found to be slightly increased.
Trypanosoma cruzi growth inhibition under different hemin concentrations
Taking into account that hemin is essential for parasite survival and considering its interaction with Psi and PsiC, the IC50 values for these STLs were calculated in the presence of different hemin concentrations (0-20 mg/L) (Fig 2). At the hemin concentration yielding optimum growth (5 mg/L), the antiparasitic effect shown by both STLs was similar. In the absence of hemin added to the medium, PsiC showed the highest inhibitory activity, whereas high levels of hemin were required to obtain the lowest IC50 for Psi. It has been reported that hemin itself (20 mg/L) has an inhibitory effect on T. cruzi epimastigote growth [18]. We observed this effect, since 20 mg/L hemin decreased by about 40% the optimal growth obtained with 5 mg/L hemin. When 10 μM Psi or PsiC was added to the culture medium containing either 5 or 20 mg/L hemin, inhibitory effects over 40% were observed, reaching values of 87% and 72%, respectively (data not shown).
Our results showed that Psi and PsiC can bind to both protoporphyrin IX-Fe(III) (hemin) and protoporphyrin IX-Fe(II) (heme). The inhibition of hemin detoxification could lead to oxidative stress within the parasite. PsiC showed a higher affinity for hemin than Psi did, as evidenced by the affinity test performed without parasites. The higher affinity of PsiC for hemin would explain the IC50 values obtained at 0 and 20 mg/L hemin. The absence of hemin in the medium would increase the availability of PsiC inside the parasite, where its high affinity for intracellular hemin would allow this compound to display its maximum inhibitory capacity (IC50 4.74 μM). In contrast, Psi showed a significant reduction in its IC50 value in the presence of 20 mg/L hemin, indicating that high levels of hemin in the medium are required for Psi to exert its maximum inhibitory activity.
Oxidative stress induction
Taking into consideration that these compounds could exert their antiparasitic effect by inhibiting hemin detoxification, oxidative stress should be generated inside the parasite. To test this hypothesis, concentrations and times of treatment were selected in order to prevent parasite death or reversion of the response. When evaluating the electrochemical behavior of Psi and PsiC, neither cathodic nor anodic peaks, corresponding to reduction and oxidation reactions respectively, were observed at the tested concentrations and potentials (data not shown). The absence of cathodic and anodic peaks was observed even in the presence of glutathione (GSH), added at GSH:STL ratios of 1:1 and 2:1 (data not shown). These results suggest that the modification of the intracellular oxidative state of the parasites cannot be attributed to the reduction of Psi and PsiC.
CP and TryR inhibition
CP and TryR enzymes are present only in trypanosomatids and absent in mammalian cells. CP is a cysteine protease, relevant for the parasite metabolism, which is considered an important candidate for the development of new trypanocidal drugs [25].
TryR is an oxidoreductase, also specific for trypanosomatids, but parasites can survive with as little as 10% of TryR activity [26]. For this reason, a good correlation between enzyme inhibition and antiprotozoal efficiency has not always been observed, possibly due to limited entry of the drug into the parasite. Nevertheless, according to these authors, this enzyme can still be considered a potential target for drug design. A possible mechanism of action of STLs as trypanocidal agents could involve the trypanothione redox system, since STLs contain an α,β-unsaturated-γ-lactone moiety in their structure. Taking these considerations into account, the effects of Psi and PsiC on CP and TryR were evaluated. Neither Psi nor PsiC inhibited these enzymes at the assayed doses (10, 20 and 50 μM) (data not shown). Our results are in accordance with those reported by other authors [27], indicating that TryR would not be a target of this class of compounds.
Sterol biosynthesis inhibition
The biosynthesis of ergosterol has proved to be crucial for the growth of T. cruzi. Lipids were extracted from epimastigotes and analyzed by TLC after 24 h of incubation in the presence or absence of Psi or PsiC (35 μM) (Fig 4). No accumulation of lanosterol (Lan) was observed for any treatment with the compounds. A decrease in ergosterol (Erg) levels was observed in PsiC-treated parasites. A squalene (Sq)/Erg ratio of 8.58 ± 0.43, 4-fold higher than that of the untreated control (C), was found for these parasites, while terbinafine-treated parasites (50 μM, positive control) presented a Sq/Erg ratio of 4.10 ± 0.50. The different lipid profiles obtained for parasites treated with Psi or with PsiC suggest that the biosynthesis of Erg could be a target of PsiC, probably due to an inhibition of Sq epoxidase activity.
Apoptosis induction
STLs have been shown to induce programmed cell death in trypanosomatids [10]. Since apoptosis has been proposed as one of the mechanisms by which STLs exert their antiprotozoal activity, we evaluated the effect of Psi and PsiC on the induction of apoptosis in T. cruzi.
To evaluate cell death and mitochondrial damage, parasites treated with Psi or PsiC at 35 μM for 8, 24 and 48 h were used. Annexin V-FITC/PI staining was used to detect apoptotic cells. Results demonstrated that the number of apoptotic cells increased during treatment with both STLs in a time-dependent manner (Fig 5a and 5b). Mitochondrial membrane depolarization was evident at 24 h of treatment, with IV values of -0.45 (Psi) and -0.72 (PsiC), increasing by 51% (Psi) or remaining at similar levels (PsiC) up to 48 h (Table 2). Depolarized cells reached 73% (Psi) and 85% (PsiC) of the evaluated cells after 48 h of treatment, vs 33% for the untreated control (data not shown).
Finally, apoptotic cells were evaluated by the TUNEL assay. After 72 h, both STLs were able to induce nuclear fragmentation in 82.9% (PsiC) and 72.2% (Psi) of the parasites when high concentrations (350 μM) were employed. PsiC was found to be a stronger apoptosis inducer, since a lower concentration (35 μM) stimulated cell death in 25.5% of the treated T. cruzi epimastigotes. Moreover, after 24 h of treatment (350 μM), this compound induced a significant proportion of apoptotic cells (7.9 ± 1.4%) (Fig 6). Treatment with either Psi or PsiC induced time-dependent changes in the exposure of PS on the outer surface of the plasma membrane and in the mitochondrial membrane potential (early apoptotic events). Although DNA fragmentation could be seen by TUNEL staining for both compounds, Psi required higher concentrations and longer times of exposure than PsiC.
Considering that cell death by apoptosis was observed when high concentrations of STLs were used and after a long period of treatment, it is unlikely that the trypanocidal effect of Psi and PsiC could be mediated by programmed cell death induction.
Drug interaction experiments
Nowadays, the combination of drugs is a useful therapeutic approach to improve efficacy and to reduce side effects. To evaluate a possible interaction between Psi and PsiC, the combined effects of both STLs were investigated. The FICI was 1.11 ± 0.05 (Fig 7). The association of Psi and PsiC revealed an additive trypanocidal effect, which could be related to the two compounds interacting with different targets to exert their antiparasitic activity.
Conclusions
In conclusion, this work provides a characterization of the mode of action of the STLs Psi and PsiC on T. cruzi. Although there are structural similarities between both STLs, our results have shown that they exert their anti-T. cruzi activity by acting on different targets. This work suggests that interaction with heme is one of the mechanisms of action of Psi, while PsiC would act by inhibiting the synthesis of sterols. The combination of Psi and PsiC produced an additive effect, supporting the previous findings indicating the existence of different mechanisms of action for these compounds. This association may be further investigated as a potential new therapeutic modality for the treatment of Chagas' disease.
African Swine Fever Virus Uses Macropinocytosis to Enter Host Cells
African swine fever (ASF) is caused by a large and highly pathogenic DNA virus, African swine fever virus (ASFV), which provokes severe economic losses and expansion threats. Presently, no specific protection or vaccine against ASF is available, despite the high hazard represented by the continued occurrence of the disease in sub-Saharan Africa, the recent outbreak in the Caucasus in 2007, and the potential dissemination to neighboring countries. Although virus entry is an attractive target for the development of protection tools, knowledge of the ASFV entry mechanism is still very limited. Whereas early studies have proposed that the virus enters cells through receptor-mediated endocytosis, the specific mechanism used by ASFV remains uncertain. Here we used the ASFV virulent isolate Ba71, adapted to grow in Vero cells (Ba71V), and the virulent strain E70 to demonstrate that entry and internalization of ASFV includes most of the features of macropinocytosis. By a combination of optical and electron microscopy, we show that the virus causes plasma membrane perturbation, blebbing and ruffles. We have also found that internalization of the virions depends on actin reorganization, activity of Na+/H+ exchangers, and signaling events typical of the macropinocytic mechanism of endocytosis. The entry of virus into cells appears to directly stimulate dextran uptake, actin polarization and EGFR, PI3K-Akt, Pak1 and Rac1 activation. Inhibition of these key regulators of macropinocytosis, as well as treatment with the drug EIPA, results in a considerable decrease in ASFV entry and infection. In conclusion, this study identifies for the first time the whole pathway for ASFV entry, including the key cellular factors required for the uptake of the virus and the cell signaling involved.
Introduction
ASFV is a large DNA virus of approximately 200 nm that infects different species of swine, causing acute and often fatal disease [1][2][3]. Infection by ASFV is characterized by the absence of a neutralizing immune response, which has so far hampered the development of a conventional vaccine. A strong hazard of ASFV dissemination through EU countries from Caucasian areas has recently emerged, making progress in knowledge and tools for protection against this virus urgent.
Analysis of the complete DNA sequence of the 170-kb genome of the Ba71V isolate, adapted to grow in Vero cells, has revealed the existence of 151 genes, a number of enzymes with functions related to DNA replication, gene transcription and protein modification, as well as several genes able to modulate virus-host interaction [4][5][6][7][8][9][10][11][12].
ASFV replicates within the host cell cytosol, although a nuclear step has been reported [13,14]. Discrete cytoplasmic areas are reorganized into viral replication sites, known as factories, during the productive virus cycle. In this regard, we have recently described ASFV replication as fully dependent on the cellular translational machinery, since it is used by the virus to synthesize viral proteins. Thus, during infection, factors belonging to the eukaryotic translation initiation complex eIF4F are phosphorylated and then redistributed to the periphery of the ASFV factory. Furthermore, ASFV late mRNAs, ribosomes and the mitochondrial network are also located in these areas [15]. Such phosphorylation events and redistribution movements suggest, first, a reorganization of the actin skeleton induced by ASFV, and second, virus-dependent kinase activation mechanisms. Several other critical steps of the infection, probably including virus entry and trafficking, might also be regulated by phosphorylation of key molecules targeted by the virus.
As the first step of replication, entry into the host cell is a prominent target for impairing ASFV infection and for potential vaccine development. Endocytosis is a major pathway of pathogen uptake into eukaryotic cells [16]. Clathrin-mediated endocytosis is one of the best studied receptor-dependent pathways, characterized by the formation of clathrin-coated pits of 85-110 nm in diameter that bud into the cytoplasm to form clathrin-coated vesicles. Relatively small viruses, such as Vesicular stomatitis virus, Influenza virus, and Semliki forest virus, all enter their host cells using this mechanism [17][18][19]. On the other hand, the caveolae-mediated pathway is dependent on small vesicles termed caveolae (50-80 nm) enriched in caveolin, cholesterol, and sphingolipid. It has been implicated in the entry of other small viruses such as Simian virus 40 [20].
Macropinocytosis is another important type of endocytic route used by several viruses to enter host cells. It is defined as an actin-dependent endocytic process associated with vigorous plasma membrane activity in the form of ruffles or blebs induced by activation of kinases and Rho GTPases. This pathway involves receptor-independent internalization of fluid or solutes into large uncoated vesicles of 0.5-10 μm called macropinosomes [21,22]. In recent years, it has been reported that macropinocytosis is responsible for virus entry of Vaccinia virus (VV) [23,24], Coxsackievirus [25], Adenovirus-3 [26], and Herpes simplex virus [27][28][29], and is required for other viruses to promote viral internalization after entry by a different endocytic mechanism [30][31][32].
Regarding ASFV entry, preliminary studies were reported many years ago by our lab describing this process as temperature, energy, cholesterol and low pH-dependent, and also showing that ASFV strain Ba71V enters Vero cells by receptor-mediated endocytosis [33][34][35][36][37]. However, the cellular molecules involved and the precise mechanisms for ASFV entry remain largely unknown.
A recent paper [38] reported that ASFV uses dynamin- and clathrin-dependent endocytosis to infect cells. However, it is noteworthy that this work employed the expression of ASFV early proteins as a readout of virus entry, which is not equivalent to virus uptake, since several post-entry events could be involved in virus early protein expression. Hence, several controversial points, such as the larger size of ASFV (200 nm) compared to that of clathrin-coated pits (85-110 nm), or the existence of several other possible roles for dynamin in addition to virus entry [39], are not discussed in that work.
In the present work we have characterized the mechanisms of entry of the ASFV-Ba71V and ASFV-E70 strains in either Vero cells or swine macrophages, as representative models for ASFV infection. By means of a combination of pharmacological inhibitors, specific dominant-negatives and confocal and electron microscopy, we show that ASFV is taken up predominantly by macropinocytosis. Therefore, we provide evidence, for the first time, that ASFV entry requires the sodium/proton exchanger (Na+/H+), activation of EGFR and PI3K, and phosphorylation of Pak1 kinase together with activation of the Rho-GTPase Rac1, and relies on actin-dependent blebbing/ruffling formation, all events fully linked with macropinocytosis activation.
Cell culture, viruses and infections
Vero (African green monkey kidney) cells were obtained from the American Type Culture Collection (ATCC) and grown in Dulbecco's Modified Eagle's Medium (DMEM) supplemented with 5% fetal bovine serum (Invitrogen Life Technologies). IPAM cells (porcine macrophage-derived cell lines) were kindly provided by Dr. Parkhouse (Fundação Calouste Gulbenkian - Instituto Gulbenkian de Ciência, Oeiras, Portugal) and grown in RPMI 1640 medium supplemented with 10% fetal bovine serum. Cells were grown at 37°C under a 7% CO2 atmosphere saturated with water vapour in culture medium supplemented with 2 mM L-glutamine, 100 U/ml gentamicin and nonessential amino acids. The Vero-adapted ASFV strain Ba71V and isolate E70 were propagated and titrated by plaque assay on Vero cells, as described previously [40,41]. In brief, subconfluent Vero cells were cultivated in roller bottles and infected with ASFV at a multiplicity of infection (MOI) of 0.5 in DMEM with 2% fetal bovine serum. At 72 h post infection the cells were recovered and centrifuged at 3000 rpm for 15 min, and the cellular pellet was discarded. The supernatant containing viruses was clarified at 14,000 rpm for 6 h at 4°C, and the purified infectious virus was resuspended in medium and stored at −80°C. Vero cells were infected with the Ba71V isolate and IPAM cells with E70 or Ba71V, as indicated. The MOI used ranged from 1 to 3000 pfu/cell, as explained.
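For orientation, the plaque-assay titer and MOI arithmetic implied above can be summarized in a small sketch; the plaque counts, dilution and volumes shown are hypothetical examples, not values from this work.

```python
def titer_pfu_per_ml(plaque_count: int, dilution_factor: float,
                     inoculum_volume_ml: float) -> float:
    """Virus titer (pfu/mL) = plaques / (dilution factor x inoculum volume)."""
    return plaque_count / (dilution_factor * inoculum_volume_ml)

def moi(titer: float, inoculum_volume_ml: float, cell_number: float) -> float:
    """Multiplicity of infection = total pfu added / number of cells."""
    return titer * inoculum_volume_ml / cell_number

# Hypothetical count: 42 plaques from 0.2 mL of a 1e-6 dilution of the stock.
t = titer_pfu_per_ml(42, 1e-6, 0.2)          # -> 2.1e8 pfu/mL
print(f"titer = {t:.2e} pfu/mL, MOI = {moi(t, 0.1, 2e7):.1f}")
```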
Viral adsorption to cells was performed at 4°C (synchronic infection) or at 37°C (asynchronic infection) for 90 min (or 60 min when indicated), followed by one wash with cold PBS and a shift to 37°C to allow the infection to proceed until the indicated times.
Plasmids construct
GFP-tagged versions of wild-type actin (pEGFP-actin) and Rac1 (pEGFP-Rac1) were kindly provided by Dr. J. Mercer (ETH Zurich, Institute of Biochemistry, Zurich, Switzerland).

Author Summary

ASFV is a highly pathogenic zoonotic virus, which can cause severe economic losses and bioterrorism threats. No vaccine against ASFV is available so far. A strong hazard of ASFV dissemination through EU countries from Caucasian areas has recently emerged, making it urgent to acquire knowledge and tools for protection against this virus. Despite that, our understanding of how ASFV enters host cells is very limited. A thorough understanding of this process would enable the design of targeted antiviral therapies and support vaccine development. The present study clearly defines key steps of ASFV cellular uptake, as well as the host factors responsible for permitting virus entry into cells. Our results indicate that the primary mechanism of ASFV uptake is a macropinocytosis-like process that involves cellular membrane perturbation, actin polarization, activity of Na+/H+ membrane channels, and signaling events typical of the macropinocytic mechanism of endocytosis, such as the Rac1-Pak1 pathway and PI3K and tyrosine kinase activation. These findings help in understanding how ASFV infects cells and suggest that disturbance of macropinocytosis may be useful in the impairment of infection and in vaccine development.
ASFV uptake and infection assays
To analyze ASFV uptake, Vero cells were pretreated with the pharmacological inhibitors listed above at 37°C for 60 min in serum-free medium. Ba71V synchronic infection was carried out at a MOI of 10 pfu/cell in the presence of the drugs. After binding, cells were washed once with cold PBS, followed by the addition of drug-containing medium, and infection was allowed to proceed for 60 min at 37°C. After infection, cells were fixed and prepared for either Fluorescence Activated Cell Sorting (FACS) or Confocal Laser Scanning Microscopy (CLSM) analysis.
The specific effect of the drugs on virus entry and post-entry steps was analyzed by incubating the cells with each drug either 60 min before or 60 min after virus addition, and viral infection was allowed to proceed in the presence of the drugs at 37°C in each case. Ba71V or E70 asynchronic infection was carried out for 16 or 48 h at a MOI of 1 pfu/cell or at a MOI of 5 pfu/cell to analyze viral proteins by Western blot or the number of infected cells by CLSM, respectively.
To analyze Akt phosphorylation upon ASFV infection, Vero cells were infected at a MOI of 10 pfu/cell and viral adsorption was allowed for 60 min at 37°C. Actin distribution was analyzed at different times after virus addition at 37°C at MOI 50. Rac1 distribution and Pak1 phosphorylation were measured after synchronic infection at a MOI of 10 pfu/cell. At the indicated times, cells were prepared for Western blot or CLSM analysis.
Viral production assays
Vero cells were pretreated with DMSO or pharmacological inhibitors for 60 min at 37°C. Asynchronic infection was carried out at a MOI of 1 pfu/cell for 48 h in the presence of the inhibitors, and the supernatant was recovered. The number of productive viral particles was titrated by plaque assay on Vero cells as described in [41].
Field Emission Scanning Electron Microscopy (FESEM)
Cells were grown on glass coverslips, serum starved for 24 h, infected synchronously (MOI 50) and, at the indicated times post infection, fixed in 2.5% glutaraldehyde and 4% paraformaldehyde in 0.1 M phosphate buffer (pH 7.4) for 3 h at 4°C. They were washed three times in phosphate buffer, postfixed in 2% OsO4/water at RT for 60 min, washed in water, dehydrated in acetone, critical-point dried for 2 h and coated with graphite-gold in a sputter coater. The samples were analyzed with a JSM-6335-F (JEOL) Field Emission SEM (Electron Microscopy National Center, UCM, Madrid, Spain).
Transmission Electron Microscopy (TEM)
Vero cells were serum starved for 24 h and virus binding was allowed for 90 min at 4°C with Ba71V (MOI 3000). Cells were fixed with 2% glutaraldehyde and 4% paraformaldehyde in 0.1 M phosphate buffer (pH 7.4) for 3 h at 4°C. Sections of infected cells were prepared as described [43] and analyzed in a JEOL 100B electron microscope.
Phase Contrast Microscopy and Nomarski
In order to study real-time live imaging of ruffle formation induced by ASFV infection, Vero cells were serum starved for 24 h and virus binding was allowed for 90 min at 4°C at MOI 100. After binding, cells were washed with cold PBS and images were collected for 30 min with an Orca R2 digital camera (Hamamatsu) on a wide-field microscope (Leica DMI6000B, Leica Microsystems) with a controlled environmental chamber (37°C and 5% CO2 humidified atmosphere). Images were captured with LAS AF version 2.6.0 software (Leica Microsystems) at a resolution of 1344 × 1024 pixels using a 20×, 0.40 NA objective with a 1.6× magnification-changer, and analyzed with Image J software.
To analyze bleb formation, IPAM cells were infected synchronously (MOI 50) and, at different times post infection, fixed with 4% paraformaldehyde for 20 min. Images were taken with a CCD monochrome camera (Hamamatsu) on an inverted microscope (Axiovert 200, Zeiss) using a 63× objective and analyzed with the Image J program.
Fluorescence Activated Cell Sorting (FACS)
Mock-infected or infected cells in the presence of pharmacological inhibitors were detached with trypsin-EDTA at 60 min post infection (mpi), fixed with 2% paraformaldehyde for 30 min at 4°C and then permeabilized with PBS-Staining buffer (PBS 1×, 0.01% sodium azide, 0.5% BSA) containing 0.2% saponin for 15 min at RT. Detection of infected cells was performed by incubation with an anti-p72 monoclonal antibody (17LD3) (diluted 1:100 in PBS-Staining buffer with 0.2% saponin) for 20 min at 4°C, followed by incubation with an anti-mouse Alexa Fluor-488 antibody (diluted 1:500 in PBS-Staining buffer with 0.2% saponin) under the same conditions. Finally, 2 × 10^4 cells were analyzed in a FACSCalibur flow cytometer (BD Science) to determine the percentage of infected cells. All FACS analyses were performed at least in triplicate and displayed as the average percentage of infected cells relative to control infection in the absence of a pharmacological inhibitor. Error bars represent the standard deviation between experiments.
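The normalization described above (average percentage of infected cells relative to the untreated control, with the standard deviation between experiments as error bars) can be expressed compactly as below; the triplicate percentages are invented for illustration.

```python
import numpy as np

def percent_infected_relative_to_control(percent_drug: np.ndarray,
                                         percent_control: np.ndarray):
    """Average percentage of infected cells relative to the untreated control,
    with the standard deviation across replicate experiments."""
    relative = 100.0 * percent_drug / percent_control
    return relative.mean(), relative.std(ddof=1)

# Hypothetical triplicate values (% p72-positive cells), not data from the study:
control = np.array([38.0, 41.0, 40.0])
inhibitor = np.array([15.0, 17.0, 14.0])
mean_rel, sd_rel = percent_infected_relative_to_control(inhibitor, control)
print(f"{mean_rel:.1f}% of control +/- {sd_rel:.1f}")
```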
Confocal Laser Scanning Microscopy (CLSM)
Cells were grown on glass coverslips and, at the indicated times post infection, were fixed with 4% paraformaldehyde for 20 min and permeabilized with PBS-0.2% Triton X-100 for 30 min at RT. Viral particles or infected cells were stained with an anti-p72 monoclonal antibody (17LD3) (diluted 1:250 in PBS-5% BSA) for 60 min at RT, followed by incubation with an anti-mouse Alexa Fluor-488 or an anti-mouse Alexa Fluor-555 antibody for the same time. Alexa Fluor-488 phalloidin (dilution 1:100) or TRITC-phalloidin and Topro3 (dilution 1:500) were used to stain actin filaments and cell nuclei, respectively. Goat anti-Rock1 was used at a dilution of 1:50.
To analyze virus binding to the cellular membrane, viral adsorption was allowed for 90 min at 4°C (MOI 10) and, 60 min after virus addition, cells were incubated with Alexa Fluor 594-WGA for 30 min. Cells were washed twice with cold PBS-0.1% BSA buffer and incubated with anti-p72 monoclonal antibody (17LD3) and Alexa Fluor-488 for 60 min at 4°C. Finally, cells were fixed with 4% paraformaldehyde at RT for 20 min.
Samples were analyzed by CLSM (Zeiss LSM510) with a 63× oil immersion objective. To investigate ASFV uptake as well as actin, Rock1 and Rac1 distribution, z-slices were collected per image and displayed as a maximum z-projection of vertical slices (x-z plane) and/or a maximum z-projection of horizontal slices (x-y plane). For presentation of images in the manuscript, LSM images were imported into Image J software for brightness and contrast enhancement. In all instances one image is representative of three independent experiments. ASFV uptake in the presence of inhibitors was analyzed automatically with a macro algorithm in the Image J program (developed by the CBMSO Confocal Microscopy Service, Spain), in which the Intermode threshold was used to count the number of virions inside the cells.
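The ImageJ macro itself is not reproduced here, but a minimal sketch of the underlying idea (threshold the p72 channel and count connected spots within the cell area) is shown below. The fixed intensity threshold and the synthetic image are assumptions for illustration; the actual macro applies ImageJ's Intermode thresholding to real confocal z-projections.

```python
import numpy as np
from scipy import ndimage

def count_particles(p72_channel: np.ndarray, cell_mask: np.ndarray,
                    threshold: float) -> int:
    """Count connected fluorescent spots (putative virions) lying inside
    the cell mask after intensity thresholding."""
    spots = (p72_channel >= threshold) & cell_mask
    _, n_spots = ndimage.label(spots)
    return n_spots

# Synthetic 2D image standing in for a maximum z-projection of the p72 channel.
rng = np.random.default_rng(0)
img = rng.normal(10, 2, size=(256, 256))
for y, x in [(50, 60), (120, 130), (200, 40)]:   # three bright "virions"
    img[y:y + 3, x:x + 3] = 80
mask = np.ones_like(img, dtype=bool)             # whole field treated as "cell"

print(count_particles(img, mask, threshold=40))  # -> 3
```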
Fluid phase uptake assays
Vero cells were serum starved for 24 h and pretreated with DMSO or EIPA. After 60 min at 37°C the cells were synchronously infected (MOI 10) or treated with PMA (200 nM) at 37°C for 30 min. Fifteen min prior to harvesting or fixation, cells were incubated with 0.5 mg/ml 10 kDa 647-dextran or 3 kDa Texas Red-dextran (Invitrogen) at 37°C. Dextran uptake was stopped by placing the cells on ice and washing three times with cold PBS and once with low-pH buffer (0.1 M sodium acetate, 0.05 M NaCl, pH 5.5) for 10 min. The cells were then prepared for FACS or CLSM analysis. In FACS experiments dextran uptake was displayed as the mean fluorescence of three independent experiments. Error bars represent the standard deviation between experiments. Cells not washed with the acid buffer were included as a control.
PI3K activation assay
Vero cells were serum starved for 24 h and treated with DMSO or LY294002 for 60 min at 37°C in serum-free medium. Asynchronic infection (viral adsorption for 60 min) was carried out at a MOI of 10 pfu/cell in the presence of the drug at 37°C until the indicated times. The PI3K subunit p85 was immunoprecipitated from lysed cells and PI3-kinase activity was measured as PI(3,4,5)P3 production with an ELISA activation kit, following the manufacturer's recommendations (Kit #1001s, Echelon).
Rac1 activation assays
Vero cells were serum starved for 24 h before synchronic infection at a MOI of 10 pfu/cell. The cells were washed once with cold PBS, shifted to 37°C and harvested at the indicated times post infection. Rac1 activation was measured with a G-LISA activation kit (Kit #BK128, Cytoskeleton, Inc.) and by immunoblotting after a pull-down step with Pak1-PBD-agarose beads (Upstate), following the manufacturer's recommendations. Bound Rac1-GTP was detected by incubation with an anti-Rac1-specific antibody followed by a secondary antibody conjugated to HRP and a detection reagent. The signal was read by measuring absorbance at 490 nm using a microplate reader and by autoradiography.
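Fold activation from G-LISA absorbance readings is essentially a ratio against the mock-infected signal after blank subtraction; a minimal sketch is given below with invented absorbance values (the kit's own blank-handling and normalization may differ).

```python
import numpy as np

def fold_activation(a490_infected, a490_mock, a490_blank: float = 0.0):
    """Rac1-GTP fold activation from G-LISA absorbance at 490 nm,
    relative to mock-infected cells after blank subtraction."""
    infected = np.asarray(a490_infected, dtype=float) - a490_blank
    mock = np.asarray(a490_mock, dtype=float) - a490_blank
    ratios = infected / mock.mean()
    return ratios.mean(), ratios.std(ddof=1)

# Hypothetical absorbance readings (not data from the study):
mean_fold, sd_fold = fold_activation([0.62, 0.58, 0.65], [0.24, 0.26, 0.25],
                                     a490_blank=0.05)
print(f"Rac1 activation: {mean_fold:.1f}-fold +/- {sd_fold:.1f}")
```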
Acid-mediated endocytosis by-pass assay
To check whether the EIPA inhibitor was specifically blocking virus entry and not a downstream process such as early gene expression, we induced the fusion of the viral membrane with the plasma membrane (PM) by lowering the pH of the medium [23]. The cells were pretreated with EIPA for 60 min at 37°C in serum-free medium. Viral adsorption was allowed at MOI 1 for 90 min at 37°C at neutral (7.4) or acid (5.0) pH. Cells were washed once with cold PBS and infection was allowed to proceed for 16 h at 37°C in the presence of the inhibitor at neutral pH. Samples were prepared for Western blot analysis.
Transfection assays
Vero cells were transfected with 1 μg of the specific expression plasmids per 10^6 cells using LipofectAMINE Plus Reagent (Invitrogen) according to the manufacturer's instructions, mixing in Opti-MEM (Invitrogen) in a 6-well plate. Cells were incubated at 37°C for 4 h in serum-free medium, washed and incubated at 37°C. At 16 or 24 h post transfection the cells were infected at the indicated MOI and either lysed and analyzed by Western blot, or fixed and prepared for CLSM analysis.
P72 protein detection in purified viruses
In order to analyze the localization of p72 in the viral particle, we carried out an experimental procedure as described in [44]. In brief, purified virus was treated with different buffers (
Toxicity analysis by Trypan Blue
To check cell viability after treatment with the inhibitors, cells were stained with Trypan Blue and dead cells were counted in a hemocytometer as blue cells.
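The viability readout is simple counting arithmetic; the sketch below shows it with a hypothetical hemocytometer count.

```python
def percent_viability(blue_cells: int, total_cells: int) -> float:
    """Viability (%) = unstained (live) cells / total counted cells x 100;
    Trypan Blue-positive (blue) cells are scored as dead."""
    return 100.0 * (total_cells - blue_cells) / total_cells

# Hypothetical hemocytometer count (not data from the study):
print(f"{percent_viability(blue_cells=7, total_cells=200):.1f}% viable")
```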
Densitometry analysis
After Western blot analysis, bands developed by ECL chemiluminescence were digitized by scanning and quantified with Fujifilm Multi Gauge V3.0 software. Data were normalized after subtracting background values and expressed as factors relative to the highest or lowest positive value obtained. All quantifications represent the mean of three independent experiments.
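The densitometry normalization described above (background subtraction, then expression of each band as a factor of the highest, or lowest, positive value) can be sketched as follows; the band intensities are illustrative.

```python
import numpy as np

def normalize_band_intensities(raw, background, against: str = "max") -> np.ndarray:
    """Background-subtract densitometry values and express each band as a
    factor of the highest (or lowest) positive value in the series."""
    corrected = np.asarray(raw, dtype=float) - np.asarray(background, dtype=float)
    positive = corrected[corrected > 0]
    reference = positive.max() if against == "max" else positive.min()
    return corrected / reference

# Illustrative band intensities for one blot (arbitrary units):
raw = np.array([1200.0, 950.0, 400.0, 150.0])
background = np.array([100.0, 100.0, 100.0, 100.0])
print(normalize_band_intensities(raw, background))   # factors relative to max
```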
ASFV induces membrane ruffles and blebs to enter host cells
Macropinocytosis mainly differs from other endocytic processes in the requirement of extensive actin cytoskeleton restructuring and formation of blebs or ruffling in the cellular surface, through which the specific cargo enters the cell [22]. These rearrangements are coupled to an external-induced formation of plasma membrane extensions. Several viruses have been described to use macropinocytosis for entry, including Vaccinia virus [23,45,46], Ebola virus [47] and Kaposi's sarcoma-associated herpesvirus [29,48].
Receptor-mediated endocytosis has been postulated in classic studies as the most likely mechanism for ASFV entry into Vero cells [33][34][35]. Yet the specific characteristics of the viral entry process have not been elucidated. To analyze the possible perturbation of the cellular membrane induced by ASFV, the virus strain Ba71V was used to synchronously infect Vero cells at MOI 50, and the induction of ruffling and bubble-like perturbations at 10, 60 and 90 min after ASFV uptake was analyzed by Field Emission SEM (FESEM). The results are shown in Figure 1A, where a maximum level of ruffle-like membrane perturbation appears in ASFV-infected Vero cells between 10 and 60 mpi, decreasing after 90 mpi, indicating that ASFV-induced macropinocytosis is a transient event.
On the other hand, Figure 1B shows that ASF virions internalize in Vero cells adjacent to retracting ruffles, indicating that uptake of the viral particles seems to occur as part of the macropinocytic process.
Finally, we have analyzed in vivo in real-time the membrane protrusions observed during Ba71V infection in Vero cells. Figure 1C shows the sequence of images during the first minutes of the infection (Video S2), illustrating the ASFV-induced ruffling, and in concordance with the data shown in Figure 1A. For comparison to Mock-infected Vero cells, see Video S1.
To assess whether ASFV entry also induces membrane perturbation in swine macrophages, the natural target cell of ASFV infection in vivo, the virulent strain E70 was used to synchronously infect IPAM cells at MOI 50. As early as 10 mpi, strong membrane protrusions were observed by FESEM analysis (Figure 2A). To better characterize these membrane rearrangements, IPAM cells were synchronously infected with the E70 strain at MOI 50 for 30, 45 and 60 min. IPAM cells were then fixed and analyzed by optical microscopy. Figure 2B shows images compatible with blebs induced by ASFV infection in swine macrophages from 30 mpi. To prove this point, we performed an additional experiment showing the inhibition of virus entry with different doses of blebbistatin, an inhibitor of blebbing and macropinocytosis [23,29,49-51]. Western blot analyses showed that blebbistatin impairs the entry of the virus into IPAM cells, as the drug inhibits the expression of ASFV proteins when preincubated before virus addition. In contrast, when blebbistatin was added 60 min after virus addition, a much weaker inhibition of viral protein expression was observed, indicating that blebbistatin acts on early steps of virus entry. Results are presented in Figure 2C.
Last, by using a specific anti-Rock1 antibody as a marker of blebs [52], we have shown that Rock1 colocalizes with virus particles on blebs in IPAM cells from 30 min after ASFV uptake ( Figure 2D), revealing the close relation between bleb and viral particle.
Taken together, these data strongly indicate that ASFV induces a vigorous plasma membrane activity during the first steps of the infection, both in Vero and IPAM cells, well-matching with macropinocytosis-mediated entry.
ASFV entry is dependent on Na+/H+ membrane exchangers and stimulates uptake of fluid phase markers

With the membrane perturbation pattern shown above, it was likely that ASFV was using macropinocytosis to enter cells. Macropinocytosis is dependent on the Na+/H+ exchanger [21], and thus amiloride and its analogue 5-(N-ethyl-N-isopropyl)amiloride (EIPA) are frequently used as the main diagnostic test to identify macropinocytosis, because this drug has been shown to be specific for this endocytic pathway without affecting others [53][54][55]. Consequently, to further assess the involvement of macropinocytosis in ASFV entry, the effect of EIPA was investigated. When tested on Vero cells, EIPA had no significant cytotoxic effect as assessed by cell monolayer integrity and trypan blue cell viability assessment (Table S1).
It has been previously described that by 60 mpi more than 90% of the ASF viral particles are located inside the cell [34]. Furthermore, viral uncoating does not completely occur before 2 hours post infection (hpi) [34]. According to these data, we measured viral uptake by using the specific antibody 17LD3 against p72, the major protein of the ASFV capsid [42,56] (see Materials and Methods and Figure S1A, B and C). Interestingly, EIPA at 40 to 60 μM caused a significant reduction (60%) in the uptake of ASFV infective particles at 60 mpi (Figure 3A), suggesting that ASFV entry depends on Na+/H+ exchanger activity.
To further visualize the effect of EIPA on virus uptake, the Ba71V strain was added to Vero cells previously treated with DMSO or 60 μM EIPA. Sixty min after infection, the cells were incubated with the anti-p72 antibody 17LD3 to stain the virus. Confocal microscopy analysis revealed a noticeable drop in virus particles incorporated into the cells incubated with EIPA, as compared to those incorporated into DMSO-incubated cells (Figure 3B, bottom panels). Images were taken as a maximum z-projection (x-y plane). For clarification, individual channels are shown in Figure S2A. Moreover, we also analyzed images of a maximum z-projection of vertical slices to determine whether viral particles could be embedded in the membrane in the presence of the inhibitor. As shown in the Figure 3B upper panels, a different distribution of viral particles was observed in cells infected in the presence of EIPA compared to that found in cells infected in the absence of the drug. This last result strongly suggests that in EIPA-treated cells the virus can bind to the membrane but is not able to internalize. This could explain the percentage of cells that were positive for the 17LD3 antibody detected in Figure 3A. The total number of virions in the confocal images was automatically quantified using a macro algorithm in the Image J program (Figure S3).
In regard to this, it is also remarkable that, although a small amount of viral particles can still be detected inside the cells in the presence of EIPA, neither early, p32, nor late ASFV proteins, p17, p24, p12 and p72 [57][58][59][60] could be detected by Western blot in the presence of the drug ( Figure 3C). Hence, it is likely that EIPA is mainly affecting virus uptake since when drug is added 60 min after virus uptake, it does not affect the viral protein synthesis ( Figure S4A). As expected, no viral factories detected by using anti-p72 antibody (green) and Topro3 (blue) for viral and cellular DNA, were found after EIPA treatment by confocal microscopy ( Figure 3D). Separate channels are shown in Figure S2B and a morphological detail of an ASFV factory is shown in Figure S1C. Consequently, viral production was also strongly inhibited by the drug ( Figure 3E). Finally, and to fully ascertain if EIPA was specifically blocking ASFV entry and not a downstream step, we performed the infection by using the acid-mediated fusion of plasma membrane. Briefly, in the presence of acid pH, endocytosis is subverted and virions fused with the plasma membrane and then directly carried into the cytosol. When an inhibitor blocks virus endocytosis, inhibition of viral protein synthesis in the presence of drug can be bypassed through fusion. If membrane fusion could not rescue viral gene expression, the blocking would most probably occur at a post-entry step [23]. By using this method, we find that when the viral adsorption is performed in the presence of EIPA in acidic pH, p72 viral synthesis is clearly recovered in relation to the infection developed at neutral pH ( Figure 3F).
Next, we investigated the dextran uptake during ASFV infection, since it has been described that macropinocytosis activation induces a transient increase of this fluid phase marker [61,62]. To achieve this, Vero cells were treated with EIPA for 60 min and then infected synchronously with Ba71V for 30 min, or stimulated with PMA as a positive control. Fifteen minutes before stopping the infection, cells were pulsed with dextran and prepared for FACS analysis. As indicated in Figure 3G, ASFV infection induces dextran uptake during the virus entry and this action is inhibited by EIPA. Moreover, to reinforce the hypothesis that ASFV entry occurs mainly by macropinocytosis, we developed an experiment to assess the colocalization between the virus particles and the macropinocytosis marker dextran. These results are included in Figure 3H.
All together, these data strongly indicate that ASFV induces activation of macropinocytosis to enter cells.
Chemical disruption of actin cytoskeleton inhibits ASFV entry
Macropinocytosis is a very specific actin-dependent endocytic process, since it depends on actin rearrangements to induce membrane ruffle formation, and inhibitors of actin microfilaments, such as Cytochalasin D (Cyto D) [63,64], Latrunculin A [65] and Jasplakinolide [66], are commonly used to inhibit this process.
To demonstrate whether ASFV depends on actin to enter cells, we used Cyto D, which binds to the positive end of F-actin, impairing further addition of G-actin and thus preventing growth of the microfilament [67]. Vero cells were pretreated with Cyto D at a concentration of 8 μM and ASFV uptake (MOI 10) at 60 mpi was then analyzed by FACS. As shown in Figure 4A, disruption of actin dynamics by the inhibitor reduced ASFV entry by about 50%. To assess whether the drug impairs the synthesis of viral proteins, Vero cells were untreated or treated with Cyto D (4 μM) and then infected with Ba71V at MOI 1. After 16 hpi, we used a specific antiserum against both early and late ASFV proteins (generated in our lab) to analyze viral protein expression. As expected, Cyto D treatment markedly reduced both the synthesis of p32, one of the main ASFV early proteins, and the synthesis of p12, p17 and p72, three typical late proteins of the ASFV cycle (Figure 4B). In agreement with this, both virus production and viral factories clearly diminished, as shown in Figure 4C and 4D, respectively. However, it is noteworthy that even in the presence of Cyto D, a number of virions seem able to enter the cell and induce a productive infection, suggesting that the actin cytoskeleton is involved in ASFV entry and also in subsequent post-entry steps, as shown in Figure S4B.
To further assess the importance of actin microfilaments in the first steps of ASFV entry, we examined whether ASFV infection causes rearrangements of the actin cytoskeleton in Vero cells, by using phalloidin in confocal microscopy experiments. Data are presented in Figure 4E, showing the change of the actin pattern at 10 and 30 min after virus uptake at MOI 50. Furthermore, to reinforce these data, Vero cells were transfected with the pEGFP-actin plasmid (kindly gifted by Dr. J. Mercer) and infected with Ba71V at MOI 50. Figure 4F shows the redistribution of GFP-actin into aggregates in transfected Vero cells, similar to those observed when endogenous actin was analyzed. Moreover, viral particles (red) were found together with the actin aggregates for both endogenous and ectopically expressed actin.
Since it has been described that blebs and ruffles contain actin, Rac1 and cortactin [23,68], it is likely that these actin spots correspond to membrane-active sites where ASFV-induced ruffling occurs, suggesting that actin dynamics is an important factor for ASFV to mediate cell-wide plasma membrane ruffling in the host cell.
Another component of the cytoskeleton that has been reported to be involved in several virus entry processes is the microtubule system, although the importance of microtubules specifically in the macropinocytosis pathway is controversial [69]. With respect to ASFV infection, whereas it has been reported that nocodazole (a specific inhibitor of the microtubule system [70]) does not affect viral DNA replication [71], a report from Heath et al. [72] describes that nocodazole produces a decrease in the expression of the p72 and p12 late proteins, but not in the early proteins of ASFV. To investigate whether the microtubule system has a role in ASFV entry, Vero cells were treated with different concentrations of nocodazole and then infected with ASFV at MOI 1. Microtubule disruption had no effect on early viral protein synthesis and barely affected the synthesis of late proteins such as p12 and p72 (Figure S5). Therefore, we conclude that the microtubule system is likely not significant for ASFV entry.
ASFV induces EGFR and PI3K-Akt pathway activation
Macropinocytosis is typically started by external stimulation. This stimulation is usually associated with growth factors that trigger activation of receptor tyrosine kinases (RTKs). These molecules then activate signaling pathways that induce changes in the dynamics of actin cytoskeleton and disturb plasma membrane [21]. Among them, epidermal growth factor receptor (EGFR) has been connected with actin rearrangement and activation of Rho family GTPases, and its activation is known to trigger macropinocytosis [45,73].
Besides the membrane perturbations and actin remodeling observed following ASFV uptake, we have found that EGFR activation was essential for ASFV infection, since 324674, the specific inhibitor of this receptor tyrosine kinase [74], efficiently inhibited ASFV uptake in a dose-dependent manner as assessed by FACS experiments in Vero cells. Accordingly, ASFV entry relies on tyrosine kinases activity, as preincubation of the cells with genistein (tyrosine kinase inhibitor [75]) also inhibited ASFV infection ( Figure 5A).
Figure 2 legend (fragment): viral proteins were detected with an anti-ASFV antibody, with β-actin as a loading control, and fold induction was determined by densitometry (mean ± S.D.); (D) Rock1 colocalizes with ASFV in blebs (arrows): cells were infected (Ba71V, MOI 50), fixed at 30 min after infection and stained with anti-Rock1 (red), anti-p72 (green) and Topro3 (blue) to label blebs, virus and nuclei, respectively; images were taken by CLSM as maximum z-projections (x-y plane) and Nomarski, with magnifications of blebs containing Rock1 and virions shown in the bottom panels. S.D., standard deviation. doi:10.1371/journal.ppat.1002754.g002

The PI3K/PDK1/Akt/mTORC1 pathway regulates vital cellular processes that are important for viral replication and propagation, including cell growth, proliferation, and protein translation [76]. Concerning macropinocytosis, it has been described that PI3K and its effectors induce the formation of lipid structures in ruffles and macropinocytic cups involved in cytoskeleton modulation [77][78][79]. In recent years, it has been reported that several viruses use the PI3K-Akt pathway to support entry into cells and early events of the infection [23,80]. In order to investigate the importance of this pathway in ASFV entry, we performed, after different times of ASFV uptake, an ELISA test that directly measures the activity of PI3K by analyzing phosphorylation of its specific substrate PI(4,5)P2. The results (Figure 5B) show an increase of substrate phosphorylation from 5 min after virus uptake, reaching a maximum after 30 min of infection. Importantly, the presence of the PI3K inhibitor LY294002 (LY) [81] strongly impaired kinase activation by the virus.
It has been reported that Akt is the major downstream effector of the PI3K pathway and is commonly used as readout of PI3K activation [82], since Akt phosphorylation has been considered to be a direct consequence of PI3K activation pathway [83][84][85]. To analyze the effect of virus uptake on Akt phosphorylation, Vero cells were serum starved for 4 h and then infected with Ba71V (MOI 10) from 5 to 90 min. Figure 5C shows that Akt is phosphorylated from 5 min after virus uptake, reaching a maximum at 30 min. It has been established that Akt phosphorylation of Thr308 is a direct consequence of PI3K activation pathway [83] while phosphorylation of Ser473 depends on mTORC2 [84,85]. Since phosphorylation in both residues of Akt is required for its complete activation, we measured the ASFV-induced Akt phosphorylation with two different antiphospho antibodies. Figure S6 shows that Akt is phosphorylated both in Thr308 and in Ser473 early after ASFV infection, suggesting that ASFV entry fully activates this pathway in the infected cell.
To further investigate whether the PI3K activation observed early during ASFV infection is involved mainly in upstream entry steps, we pretreated Vero cells with LY at a concentration of 60 μM. Cells were then infected with Ba71V at MOI 10, and virus uptake was analyzed by FACS at 60 mpi. Figure 5D shows that virus uptake decreased to about 45% in treated Vero cells with respect to DMSO-treated cells, indicating that PI3K activation is involved in virus entry. Moreover, we also found that the activation of this kinase has a key role in the progression of the infection since, as shown in Figure 5E, the presence of 20 μM LY severely impairs the synthesis of both early and late ASFV proteins. Recently, our group has described that ASFV regulates the cellular machinery of protein synthesis to guarantee the expression of its own proteins [15]. Since it has been reported that one of the main roles of PI3K is regulating the translational machinery through the PI3K-Akt-mTOR pathway [86], the strong effect of LY on ASFV protein synthesis is not surprising (Figure S4C). Finally, to confirm the role of PI3K in ASFV infection, we performed experiments to analyze the number of cells presenting viral factories in the presence of LY. As shown in Figure 5F, a dramatic decrease of infected cells was observed after 16 hpi (MOI 5) when the infection was performed in the presence of the inhibitor. Similarly, virus production was diminished by about 3 log units by the effect of LY after 48 hpi (Figure 5G).
ASFV triggers Rac1 activation to enter host cells
Since activation of the Rac1 GTPase has been involved in the regulation of macropinocytosis by triggering membrane ruffling in the cell [87], we investigated the activation status of Rac1 during the first steps of ASFV entry in Vero cells. Ba71V was used to synchronously infect cells (MOI 10), and Rac1 activation was measured with the G-LISA activation kit following the manufacturer's instructions. The results showed that Rac1 activation is a very fast and strong event during ASFV entry, reaching a maximum (2.5-fold) at 10 mpi compared to mock-infected cells (Figure 6A). It has been shown that Rac1 controls macropinocytosis by interacting with its specific effectors, the p21-activated kinases (Paks), thus modulating actin cytoskeleton dynamics [88,89]. It is also known that Rac1 binds and activates Pak1 only in its active, GTP-bound form. To confirm the results obtained by G-LISA, we further analyzed Rac1 activation during ASFV entry by performing a pull-down assay using Pak1-PBD-agarose beads, which carry the Pak1 PBD that binds Rac1-GTP. As shown in Figure 6B, Rac1-GTP was pulled down with the Pak1-PBD-agarose beads at 10 min post ASFV infection, diminishing slightly by 30 min after infection. This result further corroborates that ASFV entry induces the formation of the active Rac1 conformation. Since it has been described that Rac1 is contained in blebs and ruffles [22,23,90] and, as shown above, ASFV induces these types of structures when it infects cells, we next analyzed the localization of Rac1 during the process of ASFV entry. To achieve this, Vero cells were first transfected for 24 h with pEGFP-Rac1 (kindly given by Dr. J. Mercer) and then infected with Ba71V at MOI 10. As shown in Figure 6C, we found clusters of the GTPase as early as 10 min after infection. In accordance with the experiments shown above, this effect was clearly perceptible at 30 mpi, demonstrating, first, that ASFV infection induces accumulation of active Rac1 in ruffling areas, and second, that this event takes place mainly during ASFV entry.
The effect of Rac1 inhibition on virus uptake was next investigated. Cells were pretreated with 200 μM Rac1 inhibitor [91] and virus uptake was measured at 60 mpi by FACS analysis, using the specific antibody against the ASFV capsid protein p72, as described in Materials and Methods. Figure 6D shows the dramatic decrease of virus uptake when the infection is performed in the presence of the pharmacological inhibitor of Rac1. Furthermore, we analyzed the effect of the inhibitor on ASFV uptake by CLSM experiments, using the same conditions as above. The images were taken as maximum z-projections of horizontal and vertical slices. As Figure 6E (bottom panels) indicates, a strong inhibition of virus uptake could be observed in the presence of the Rac1 inhibitor, since the number of ASFV particles in the cell (green) is visibly lower in the presence of the drug. Moreover, as shown in the upper panels of Figure 6E, virus (green) colocalized (yellow) with cortical actin (red), indicating that the drug immobilizes the virions embedded in the plasma membrane and impairs their entry into the cell. Separate channels are also shown in Figure S2D.
Figure 4 legend (fragment): (C) virus titers at 48 hpi (MOI 1) from cells treated with 8 μM Cyto D (n = 3, mean ± S.D.); (D) viral factories (arrowheads) analyzed by CLSM after 8 μM Cyto D treatment and 16 h of infection (MOI 5), with fixed cells stained with Topro3 (blue), TRITC-phalloidin (red) and anti-p72 (green) to visualize cell nuclei, actin filaments and viral factories, and the percentage of infected cells from three independent experiments shown as mean ± S.D.; (E-F) ASFV infection induces rearrangements of the actin cytoskeleton: cells were infected at a MOI of 50 pfu/cell (E) or transfected with pEGFP-actin for 16 h and then infected at MOI 50 (F), fixed at the indicated times post infection and stained with Alexa Fluor 488-phalloidin (E), anti-p72 and Topro3 (E and F); z-slice images were taken by CLSM and represented as maximum z-projections. S.D., standard deviation; Cyto D, Cytochalasin D; *, unspecific cellular protein detected by the antibody. doi:10.1371/journal.ppat.1002754.g004

Alternatively, and to reinforce the role of Rac1 in ASFV infection, we studied the level of ASFV protein synthesis in Vero cells previously transfected with the mutant pGFP-Rac1-N17 (a kind gift from Dr. R. Madrid). The expression of this inactive form of Rac1 strongly inhibited the expression of the ASFV early p32 protein (Figure 6F). As expected, the synthesis of viral late proteins was also affected by treatment with the inhibitor (Figure S7). Moreover, when the Rac1 inhibitor was added 60 min after virus addition, the level of viral protein synthesis was completely recovered, thus reinforcing the role of Rac1 in virus entry (Figure S4D).
Hence, the role of Rac1 in ASFV morphogenesis and virus production was investigated. To achieve this, Vero cells were treated with the Rac1 inhibitor and then infected for 16 h at MOI 5. Cells were fixed and stained with anti-p72 to visualize the viral factories by CLSM, and the percentage of infected cells in the presence or absence of the inhibitor is represented in the graph (Figure 6G). As observed, the number of cells containing ASFV factories decreased by about 65% in the presence of the Rac1 inhibitor compared to the untreated controls (separate channels are shown in Figure S2E). In line with these results, viral production at 48 hpi decreased strongly when the activity of the Rac1 GTPase was inhibited (Figure 6H). Finally, since Rac1 has been reported to be an important component of ruffles [22,23,90], we used the Rac1 inhibitor to assess its involvement in the inhibition of these membrane perturbations and therefore, indirectly, the role of ruffles in ASFV uptake. To achieve this, we performed FESEM assays in Vero cells treated with 200 μM Rac1 inhibitor for 60 min prior to virus addition. As shown in Figure 6I, the Rac1 inhibitor strongly decreases the ASFV-induced ruffles, in accordance with the decrease in virus uptake (Figure 6D), viral infection (6G) and virus production (6H) previously observed.
Taken together, these results demonstrate the significant role of Rac1 on ASFV entry.
Pak1 activation has a key role in ASFV infection
The p21-activated kinase 1 (Pak1), a serine/threonine kinase activated by Rac1 or Cdc42 [89] is one of the most relevant kinases related to several virus entry processes since it is involved in the regulation of cytoskeleton dynamics and is needed during all the stages of macropinocytosis [88,92,93]. Among the different residues to be phosphorylated in Pak1 activation, the Thr423 plays a central role because its phosphorylation is necessary for full activation of the kinase [94].
To determine whether Pak1 was activated during ASFV entry, we first analyzed the phosphorylation on Thr423 in Vero cells synchronously infected (MOI 5) with Ba71V. At different times post infection, samples were collected and analyzed by immunoblotting using an anti-phospho-Pak1 Thr423 antibody. As early as 30 mpi, phosphorylation of Pak1 could be detected, increasing until 120 mpi ( Figure 7A).
IPA-3 has been identified as a direct, noncompetitive and highly selective Pak1 inhibitor. In the presence of IPA-3, Thr423 phosphorylation is inhibited because the Pak1 autoregulatory domain is targeted by the inhibitor [95]. To assess the role of Pak1 activation in ASFV uptake, we measured by FACS analysis the p72 levels detected in Ba71V-infected Vero cells (MOI 10) after 60 mpi. As shown in Figure 7B, the p72 levels incorporated into the cells in the presence of 30 µM IPA-3 were significantly lower (70%) than those obtained in the absence of the inhibitor. These results indicate that Pak1 activation is involved in the first stages of ASFV entry, since phosphorylation of the kinase occurs at very early times after virus addition and, even more importantly, the uptake of the virus into the host cells is strongly dependent on Pak1 activity.
Apart from the role played by Pak1 in viral entry, the sensitivity of ASFV infection to IPA-3 was investigated in Ba71V-infected Vero cells by Western blot. Using specific antibodies against both early and late ASFV proteins, the effect of the inhibitor from 1 to 10 µM on viral protein synthesis was evaluated. Figure 7C shows the strong dose-dependent inhibition by IPA-3 of the most important early (p32) and late proteins (p72, p24, p17 and p12). To reinforce the role of Pak1 in ASFV entry, a similar experiment performed by incubation with IPA-3 for 60 min after virus addition is shown in Figure S4E. These data indicate that the drug mainly affects virus entry, as it does not induce important inhibition of viral protein synthesis when added after virus uptake. Moreover, the virus titer was reduced by 1.5 log units in cells pretreated with 5 µM IPA-3 and then infected with Ba71V (MOI 1) in the presence of the inhibitor for 48 h (Figure 7D).
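For readers less familiar with log-scale titers, a 1.5 log10 reduction corresponds to roughly a 32-fold drop in infectious virus. A minimal check is shown below; the control titer value is hypothetical.

```python
# A 1.5 log10 reduction in titer corresponds to 10**1.5 ~ 31.6-fold fewer pfu/ml.
import math

control_titer = 1e7                      # hypothetical pfu/ml, untreated control
log_reduction = 1.5                      # reported drop with 5 uM IPA-3
treated_titer = control_titer / (10 ** log_reduction)

print(f"Fold reduction: {10 ** log_reduction:.1f}x -> treated titer ~ {treated_titer:.2e} pfu/ml")
print(f"Check in log units: {math.log10(control_titer / treated_titer):.2f}")
```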
To corroborate the significant role of Pak1 during ASFV infection, we used different Pak1 constructs affecting Pak1 activation (see Materials and Methods). Vero cells were transfected for 24 h with pEGFP, pEGFP-Pak1-WT, pEGFP-Pak-AID and pEGFP-Pak1-T423E (all of them kindly gifted by Dr. J. Chernoff) and infected for 16 h with ASFV at a MOI of 1 pfu/cell. As shown in Figure 7E, the construct containing the Pak1 autoinhibitory domain (AID) inhibited p12 and p32 viral protein expression, whereas cells transfected with the wild-type (WT) form showed the same protein levels as infected control cells. It is noteworthy that the constitutively active Pak1 construct T423E (even though it was only briefly expressed in the transient transfection process) induced a remarkable enhancement of the expression of the ASFV early protein p32, indicating that increasing Pak1 activity intensifies early protein synthesis, probably due to its effect on virus entry. Numeric values of these data are shown in Figure 7F.
These data, together with those on Rac1 activation explained above, strongly support our hypothesis that ASFV triggers the Rac1-Pak1 pathway during virus entry.
Role of dynamin and clathrin during ASFV entry and infection
Dynamin is an essential cellular GTPase which plays an important role in cellular membrane fission during vesicle formation [96]. It is likely involved in Rac1 localization and function, since it has been shown that Rac1-dependent macropinocytosis is blocked by the dynamin-2 (DynK44A) dominant-negative mutant [39].
[Figure 6 legend: Rac1 plays a critical role in ASFV entry in Vero cells. A–B) Activation of Rac1 during ASFV entry. Vero cells were infected (MOI 10) and Rac1 activation was measured by A) a Rac1 activation assay kit (n = 3; mean ± S.D.) and B) Pak1 PBD-agarose bead pull-down assay. Fold induction was determined by densitometry (mean ± S.D.). C) ASFV infection induces clustering of Rac1. Cells were transfected with pEGFP-Rac1, infected (MOI 10) and stained with Topro3 (blue) and anti-p72 (red). Images analyzed by CLSM are represented as a maximum z-projection. D–E) Rac1 inhibitor blocks viral entry. Pretreated cells (200 µM Rac1 inhibitor) were infected (MOI 10) for 60 min. D) The graph shows the percentage of virus entry relative to the DMSO control, measured as p72 signal analyzed by FACS (n = 3, performed in duplicate; mean ± S.D.). E) Cells were incubated with Topro3 (blue), TRITC-phalloidin (red) and anti-p72 (green). Images are represented as a maximum z-projection of the x-y plane (bottom panels) and x-z plane (upper panels). F) Expression of an inactive form of Rac1 reduces viral infection. Cells transfected with pcDNA or pGFP-Rac1-N17 were infected (MOI 1) for 16 h. Viral protein synthesis was analyzed by immunoblotting with an anti-p32 antibody. GFP and β-actin levels were measured as a control.]

Since, as we demonstrated above, Rac1 is important for ASFV entry, we analyzed whether the dynamin-2 pathway plays a role in either ASFV entry or infection. To this end, we first investigated the effect of Dynasore (Dyn), a reversible inhibitor of dynamin GTPase activity [97], on ASFV uptake. After 60 min of pretreatment with 100 µM Dyn, Vero cells were infected with Ba71V at MOI 10 and virus uptake was measured by FACS using the specific antibody against the capsid viral protein p72. The result showed that treatment with Dyn partially inhibited virus uptake (35%) (Figure 8A). A stronger effect of the inhibitor on ASFV entry could not be found using different experimental conditions (data not shown), further indicating the partial involvement of dynamin in virus uptake. Moreover, the role of clathrin-mediated endocytosis was examined in parallel using chlorpromazine (CPZ), which inhibits the assembly of coated pits at the plasma membrane and is considered a specific inhibitor of clathrin-mediated endocytosis [98]. Using parallel experimental conditions, and in contrast with the data obtained after treatment with Dyn, we observed that virus uptake was not appreciably affected in the presence of 20 µM CPZ (Figure 8A). These data indicate that whereas dynamin is to some extent involved in ASFV entry, in accordance with its role in macropinocytosis [39], clathrin is not related to ASFV uptake in Vero cells.
In order to investigate whether other steps downstream of ASFV entry were affected by Dyn and CPZ, Vero cells were separately pretreated with the inhibitors and then infected with ASFV (MOI 1). At the indicated times after infection, the synthesis of both early and late ASFV proteins was analyzed by Western blot. Treatment with 100 µM Dyn strongly inhibited p72 and p32 expression from early times post infection (Figure 8B), indicating that dynamin is required for both the early and late course of ASFV infection. As Figure 8C shows, CPZ had an effect similar to Dyn on both ASFV early and late protein synthesis, in concordance with the data from Hernaez et al. [38], in which the expression of the viral protein p32 depends on clathrin function. Higher amounts of CPZ could result in an inhibition of p72, but this effect is likely due to the cytotoxic effect of the drug, as reported in Table S1. Taken together, our data showed that whereas the effect of Dyn on viral protein synthesis is probably due to dynamin participation in ASFV entry events, the clathrin inhibition does not involve virus uptake but only viral protein synthesis, thus indicating a role for clathrin function merely in post-entry events. Future experiments are planned to address this point more specifically. Finally, and as expected, both inhibitors had an important effect on viral production measured after 48 hpi (MOI 1) in Vero cells (Figure 8D).
Discussion
Endocytosis constitutes an efficient way for viruses to cross the physical barrier represented by the plasma membrane and to pass through the underlying cortical matrix. Knowledge of the specific pathway of virus entry, and of the precise mechanisms regulating it, is key to understanding viral pathogenesis, since virus entry into the host cell is the first major step in infection. Whereas there is ample evidence showing that ASFV enters cells through endocytosis in a pH-dependent manner and that saturable binding sites on the plasma membrane mediate the productive entry of the virus into Vero cells and swine macrophages [33,34], the specific endocytic and signaling pathways used by the virus are largely unknown.
In this report, by combining different and independent approaches, we have achieved an exhaustive analysis of the ASFV endocytic pathway. We have obtained a precise picture of how ASFV enters the cell and have identified the main cellular proteins required. Careful assessment of specificity and functionality of each pathway was performed and correlated with infection and virus uptake.
Many recent reports have shown that viruses can directly use macropinocytosis as an endocytic route for productive infection [21,23–29], and also to promote the penetration of viral particles that enter by other endocytic mechanisms [31,32]. Macropinocytosis activation is related to significant cell-wide membrane ruffling mediated by activation of actin filaments. These structures may have different shapes: lamellipodia, circular-shaped membrane extrusions (ruffles) or large membrane extrusions in the form of blebs.
Here we have illustrated by FESEM that ASFV strain Ba71V induced prominent membrane protrusions compatible with ruffles after 10 mpi. Transmission electron microscopy images further support this result by showing that ASF virions internalize adjacent to retracting ruffles, likely indicating that uptake of viral particles occurs as part of the macropinocytic process. Moreover, we found that inhibition of Rac1, an important component of ruffles, markedly impaired ASFV uptake, thus implicating the formation of these membrane perturbations in virus entry.
Moreover, and in parallel to the data obtained in Vero cells, we found that the virulent E70 strain induced a type of membrane protrusion similar to blebs a few minutes after infection of the swine macrophage line IPAM. This last result is important, since macrophages are probably the natural target cell of the infection in vivo, and it suggests that different macropinocytic programs can be used by different ASFV strains, as has been published for other viruses such as Vaccinia [45]. Because of this, we have carefully characterized these structures. First, we showed the inhibition of virus entry with different doses of blebbistatin, and second, we demonstrated that Rock1 (a marker of blebs [52]) colocalized with virus particles on blebs in IPAM cells from 30 min after virus uptake.
Apart from characteristic membrane perturbations, macropinocytosis is also distinguished from other entry pathways by features that include actin-dependent structural changes in the plasma membrane, regulation by PI3K, PKC, Rho family GTPases, Na+/H+ exchangers and Pak1, as well as ligand-induced upregulation of fluid-phase uptake. In this regard, our work demonstrates that EIPA, a potent and specific inhibitor of the Na+/H+ exchanger [23,53,54,99], severely impairs ASFV infection and entry. By using FACS analysis we found that EIPA treatment caused a significant dose-dependent reduction (more than 60%) in the uptake of ASFV infective particles. Confocal microscopy analysis also revealed an evident drop in virus particles incorporated into cells incubated with EIPA. It is important to note that macropinocytosis is the only endocytic pathway susceptible to inhibition of the Na+/H+ exchangers. Thus, these results strongly indicate the involvement of macropinocytosis in ASFV entry.
Actin plays a central role in the formation and trafficking of macropinosomes. Cyto D, which binds to the plus end of F-actin (impairing further addition of G-actin and preventing growth of the microfilament [67]), reduced ASFV entry by approximately 50% and inhibited the synthesis of both early and late viral proteins, together with viral morphogenesis. However, it is remarkable that virions that escape the action of Cyto D induce a productive infection, thus suggesting that the actin cytoskeleton is mainly involved in ASFV entry, although it could have a role in successive post-entry steps.
Corroborating this hypothesis, we have observed that ASFV infection causes rearrangements of endogenous actin cytoskeleton in Vero cells as early as 10 min post infection. These data were reinforced by overexpression of GFP-actin that was concentrated in aggregates in virus-infected cells. Together, these data provide evidence for a role of actin in ASFV entry and suggest that the virus can actively promote localized actin remodeling to facilitate its uptake through macropinocytosis or a similar mechanism.
The first reports describing the endocytic entry of viruses into their host cells presumed that incoming viruses took advantage of ongoing cellular endocytosis processes [16]. However, it is now clear that several viruses are not merely passive cargo but activate their own endocytic uptake by eliciting cellular signaling pathways. The activation of these pathways significantly depends on the interaction of the virus with cellular receptors specific to the type and activation status of the host cell [100,101]. ASFV, like Vaccinia virus [21,45], seems to belong to the viruses that actively trigger their endocytic internalization. In this respect, we have found that entry of ASFV is dependent on signaling through tyrosine kinases such as EGFR, and on activation of PI3K together with Rho GTPases such as Rac1, all of which have been described as important regulators of macropinocytosis [69].
Concerning the function of the PI3K pathway, activation of this kinase early after virus uptake was confirmed by analyzing the phosphorylation of its specific substrate PI(4,5)P2. Also, phosphorylation of both residues Thr308 and Ser473 of Akt was observed early after ASFV infection. Besides, pretreatment of Vero cells with the specific PI3K pharmacological inhibitor LY strongly inhibited virus uptake at 60 mpi. We also found that activation of this kinase has an important role in the infection, since the presence of LY severely impairs the synthesis of both ASFV early and late viral proteins. In this regard, our group has recently described [15] that ASFV uses the cellular machinery of protein synthesis to express its own proteins. Since it has been reported that one of the main roles of PI3K is to regulate the translational machinery through the Akt-mTOR pathway [86], the strong effect of LY on ASFV protein synthesis is very much expected.
We have also demonstrated that Rac1, a regulatory guanosine triphosphatase upstream of Pak1, was activated during ASFV entry. The Rac1 protein belongs to the Rho family of small guanosine triphosphatases, a subgroup of the Ras superfamily of GTPases [102]. In recent years, several viruses have been described to target Rho GTPase activation to enter host cells, such as Vaccinia virus [23,45], Ebola virus [80], Echovirus [92] or Adenovirus type 2 [103], among others. Through interaction with its specific effector Pak1, Rac1 modulates actin cytoskeleton dynamics and controls macropinocytosis [88,89]. Consistent with the data reported by Mercer and Helenius, 2008 [23], showing that active Rac1 is contained in virus-induced membrane perturbations, our results show that ASFV induces clusters of this GTPase as early as 10 min after infection. Hence, Rac1 accumulates in ruffling areas very early during the process of ASFV entry, suggesting that ASFV targets Rac1 to enter host cells. In agreement with this hypothesis, a strong inhibition of virus uptake, in parallel with ruffle formation, was observed in the presence of the Rac1 inhibitor. Moreover, by performing CLSM experiments, we showed that the drug immobilized the virus particles embedded in the plasma membrane, thus impairing their entry into the cell. Taken together, these results demonstrate the significant role of Rac1 in ASFV entry. Our data strongly contrast with a recent study [104], which reported that, although Rac1 is activated by ASFV infection, it is not involved in either ASFV entry or viral protein synthesis. In that study by Quetglas et al. [104], Rac1 would be responsible for a downstream process that only affected viral production. The discrepancies about the role of Rac1 in ASFV entry and infection might be explained by the fact that the Rac1 inhibitor concentration used does not match the amounts usually employed to analyze the role of Rac1 in virus uptake [80], and it is likely too low to disturb ASFV entry or viral protein synthesis. Moreover, confocal microscopy images to measure ASFV uptake were taken as a mid z-section, in contrast to our procedure, which includes several z-sections and allows us to count the total virus particles inside the cells. Finally, important information regarding the effect of the dominant-negative Rac1-N17 on viral protein synthesis was not shown in that study, in contrast to our results described in Figure 6F. Therefore, the limitations of that work [104] make it difficult to reach any conclusions about the function of Rac1 in ASFV entry and infection. Furthermore, in support of our data, we should note that we have found an important role for Pak1 in Ba71V entry into Vero cells. Pak1 is a serine/threonine kinase activated by Rac1 or Cdc42, involved in the regulation of cytoskeleton dynamics and needed during all stages of macropinocytosis [88,93,105]. Our results indicate that Pak1 activation is involved in the first steps of ASFV entry, since phosphorylation of the kinase occurs at very early times after virus addition and, even more importantly, the uptake of the virus into the host cells is strongly dependent on Pak1 activity. However, our preliminary studies using the E70 strain did not show a clear effect of the Pak1-specific inhibitor IPA-3 on the synthesis of ASFV proteins (data not shown), either in IPAM cells or in alveolar swine macrophages.
These data suggest that ASFV may activate other pathways in macrophages, or that IPA-3 may not be efficient enough to inhibit Pak1 if this kinase is constitutively activated in these cells [106,107]. Nevertheless, the synthesis of viral proteins was strongly inhibited in macrophages after EIPA and LY treatments, indicating that Na+/H+ exchangers and the PI3K pathway are involved in macropinocytosis-mediated ASFV entry into these cells (Figure S8).
In conclusion, the involvement of the EGFR and PI3K, the nature of the signaling pathway, the involvement of Rac1, Pak1 and Na + /H + exchangers, and the actin-cytoskeleton rearrangements, all support a macropinocytosis-driven endocytic process for ASFV entry. In addition, ASFV caused significant induction of dextran uptake (a specific fluid phase marker of macropinocytosis), and colocalization of the internalized ASF virus particles with dextran was also observed.
The ASFV genome encodes several glycoproteins [108], whose role in host-cell binding and entry has not yet been described. However, it has been shown that glycoproteins and lipids are required for several virus binding and entry steps to the host cells [23,109,110]. It has been also reported that cellular partners that bind to specific regions of viral glycoproteins translocate from intracellular compartments to regulate the susceptibility of different cells to the infection [111]. These kinds of mechanisms could explain the differences found among ASFV viral isolates and their ability to infect different host cells. Future experiments are planned to study the role of both ASFV glycolipids and the putative host partners involved in the mechanisms of ASFV entry and infection of different cell populations.
Dynamin is a large GTPase that is involved in the scission of newly-formed endocytic vesicles at the plasma membrane [112–114]. Although we have shown that Dynasore partially inhibits virus entry, we have found no evidence for a role of clathrin in ASFV entry despite the use of multiple approaches. The fact that in our hands dynamin was only partially involved in ASFV entry further argues against clathrin- or caveolae-mediated pathways, as both require dynamin activity. Therefore, our data contrast with a recent study concluding that clathrin-mediated endocytosis is the major entry pathway for ASFV [38]. The key concern about the conclusion of that work is that virus entry was merely measured by the synthesis of ASFV early proteins in the presence of chlorpromazine, and not by specific analysis of virus uptake. Moreover, it is important to note that whereas chlorpromazine disrupts clathrin-coated pits, it may also interfere with the biogenesis of large intracellular vesicles such as phagosomes and macropinosomes [115].
Here, by combining different and separate strategies we have carried out a precise analysis of each key endocytic pathway concerned, obtaining, for the first time, a relatively complete description of the mechanism by which ASFV enters into a cell, including identification of several cellular molecules and routes. We have carefully evaluated the specificity and functionality of each pathway and correlated them with virus uptake and infection.
Two different strains of ASFV, the virulent E70 and the Vero cell-adapted Ba71V, were used to study the virus entry mechanism in swine macrophages and Vero cells, respectively.
Several drugs were used to inhibit specific pathways, but their specificity was evaluated by testing the function of the main pathways after treatment. Furthermore, highly specific dominant-negative mutants were used to confirm the data obtained with pharmacological inhibitors. More importantly, throughout this work, sensitive FACS-based or confocal virus entry assays were used to discriminate blockage of virus entry from blockage of downstream steps of the infection cycle. This is particularly relevant when using drugs that frequently affect multiple cellular functions in addition to entry.
Overall, our data provide strong evidence that ASFV entry takes place by a process closely related to macropinocytosis, adding new and valuable information regarding endocytosis mechanisms in the context of ASFV entry (plotted in Table 1).
The evidence presented demonstrates for the first time, that ASFV utilizes a macropinocytosis-like pathway as the primary means of entry into IPAM and Vero cells. However, we cannot state that virus entry occurs exclusively by this pathway, especially in swine macrophages. But our data clearly show that its disruption blocks the greater part of infection and particle uptake. Our work also indicates that clathrin-mediated endocytosis plays at most a minor role in ASFV entry. However, and in accordance with the data of Hernaez et al. [38], we found that CPZ diminishes both ASFV early and late protein synthesis, together with viral production. Thus, our data demonstrate a role for clathrin function merely in post entry events.
A serious risk of ASFV spreading from Sardinia and the Caucasus region to EU countries has recently emerged, making progress in knowledge of, and tools for protection against, this virus urgent. Infection by ASFV is characterized by the absence of a neutralizing immune response, which has so far hampered the development of a conventional vaccine. Therefore, our findings are relevant as they not only provide a detailed understanding of the ASFV entry mechanism, but also identify novel cellular factors that may provide new potential targets for therapies against this virus. In parallel, further studies are planned to characterize viral factors that may interact with components of the macropinocytosis pathway, which will probably be useful for vaccine development.
Supporting Information
Figure S1 Specificity of the p72 antibody and analysis of ASFV infection. A) Distribution of the p72 protein in the virus particle. Purified virus was treated with different buffers as explained in Materials and Methods. The supernatant (SP) and pellet (P) of the different treatments were analyzed by immunoblotting and the p72 protein was detected with a monoclonal antibody (17LD3). B) The monoclonal antibody 17LD3 recognizes viral particles bound to the cell surface. Viral adsorption to cells was allowed for 90 min at 4°C at a MOI of 10 pfu/cell. Sixty min after virus addition, cells were stained for 30 min with 594-WGA to label the edge of the plasma membrane. Cells were stained with the anti-p72 monoclonal antibody without permeabilization and finally fixed with paraformaldehyde. Images were analyzed by CLSM and represented as a mid z-section. C) Monoclonal anti-p72 antibody 17LD3 is a useful tool to follow the infection at early and late times post infection. Vero cells were mock-infected or infected synchronously for 60 min or 16 h at a MOI of 10 pfu/cell or 5 pfu/cell, respectively. At the indicated times post infection the cells were fixed with paraformaldehyde, permeabilized and stained with Topro3 (blue), TRITC-phalloidin (red) and monoclonal anti-p72 (17LD3) (green) to stain cell nuclei, actin filaments and viral particles (middle panels) or the viral factory (bottom panels, arrowheads), respectively. Images were taken by CLSM and represented as a mid z-section. (TIF) Figure S2 Separate channels of CLSM experiments. A–E) Vero cells were pretreated with DMSO or different pharmacological inhibitors and infected with Ba71V for 60 min or 16 h, as indicated in the principal figure legends. Virus uptake or viral factory formation was analyzed by CLSM, staining the cell nuclei with Topro3 (blue), actin filaments with TRITC-phalloidin (red) and virus particles or viral factories with anti-p72 antibody (green). Images were taken by CLSM and represented as a mid z-section or maximum z-projection as indicated. A) Figure 3B; B) Figure …
A Comparison of Spatial and Movement Patterns between Sympatric Predators: Bull Sharks (Carcharhinus leucas) and Atlantic Tarpon (Megalops atlanticus)
Background Predators can impact ecosystems through trophic cascades such that differential patterns in habitat use can lead to spatiotemporal variation in top down forcing on community dynamics. Thus, improved understanding of predator movements is important for evaluating the potential ecosystem effects of their declines. Methodology/Principal Findings We satellite-tagged an apex predator (bull sharks, Carcharhinus leucas) and a sympatric mesopredator (Atlantic tarpon, Megalops atlanticus) in southern Florida waters to describe their habitat use, abundance and movement patterns. We asked four questions: (1) How do the seasonal abundance patterns of bull sharks and tarpon compare? (2) How do the movement patterns of bull sharks and tarpon compare, and what proportion of time do their respective primary ranges overlap? (3) Do tarpon movement patterns (e.g., straight versus convoluted paths) and/or their rates of movement (ROM) differ in areas of low versus high bull shark abundance? and (4) Can any general conclusions be reached concerning whether tarpon may mitigate risk of predation by sharks when they are in areas of high bull shark abundance? Conclusions/Significance Despite similarities in diet, bull sharks and tarpon showed little overlap in habitat use. Bull shark abundance was high year-round, but peaked in winter; while tarpon abundance and fishery catches were highest in late spring. However, presence of the largest sharks (>230 cm) coincided with peak tarpon abundance. When moving over deep open waters (areas of high shark abundance and high food availability) tarpon maintained relatively high ROM in directed lines until reaching shallow structurally-complex areas. At such locations, tarpon exhibited slow tortuous movements over relatively long time periods indicative of foraging. Tarpon periodically concentrated up rivers, where tracked bull sharks were absent. We propose that tarpon trade-off energetic costs of both food assimilation and osmoregulation to reduce predation risk by bull sharks.
Introduction
Because movement promotes energy flow across habitat boundaries [1,2], ecological and evolutionary processes are inherently linked to movement, including ecosystem function and biodiversity [3]. Predicting organismal movement is central to establishing effective management and conservation strategies, such as restoring degraded habitats, reducing exploitation rates, preventing the spread of invasive species, and protecting wildlife (i.e. "movement ecology" [4]). A key aspect of movement ecology is interactions among species, especially predators and prey. Dynamics between predators and prey are often complex when considered across relevant spatial and temporal scales [4,5]. However, growing evidence reveals that predators can regulate ecosystem structure and function via trophic cascades arising through both consumption and predator-induced modifications in prey behavior [6]. Therefore, studies of predator movement patterns are becoming increasingly important for predicting the ecosystem consequences of their declines, especially for marine species that are experiencing significant population declines due to overfishing [7–9]. Consequently, further studies of marine predator movements and habitat use are needed to identify and prioritize areas for protection (e.g. feeding and natal grounds) as well as to generate sufficient data for modeling how changes in their habitat use can affect sustainability and potentially alter community dynamics [10–12].
Bull sharks (Carcharhinus leucas Müller & Henle, 1839) are apex predators in tropical and subtropical seas [13–15]. In the western Atlantic, the species grows to a relatively large size [>340 cm, >230 kg; 34] and occurs from the northeastern United States to Brazil. Within this geographic range, bull sharks are common in coastal, estuarine, lagoon and fresh waters, especially certain large lakes and rivers [16,17]. Bull sharks are unique among elasmobranchs for their ability to inhabit brackish or freshwater systems for relatively prolonged periods due to unique physiological adaptations that permit osmoregulation in low-salinity environments [17–19]. Studies from Florida and the Gulf of Mexico have found that young-of-the-year and juvenile bull sharks regularly occupy inshore rivers as nursery habitats [20,21], but transition out of these areas once they reach about 160-180 cm TL [16]. Gravid females likely return to these areas to pup [16]. Only two papers [15,22] have reported on movement patterns of large (>150 cm TL) bull sharks using archival satellite tags, the latter being the first to describe movements of adult bull sharks in the Gulf of Mexico and southeastern United States. These studies found that adult bull sharks exhibit high site fidelity and primarily utilize shallow coastal zones [16,22]. Atlantic tarpon (Megalops atlanticus Valenciennes, 1847) are highly mobile mesopredators and very popular sportfish [23,24]. Satellite tagging of Atlantic tarpon in the southeastern United States, Gulf of Mexico and Florida Keys has revealed that, similar to bull sharks, tarpon also tend to utilize inshore coastal, estuarine and freshwater areas where the two species co-occur [25–28]. Bull sharks are commonly observed preying upon tarpon at popular fishing locations in the Florida Keys, southern Florida and Gulf of Mexico during recreational catch-and-release angling [26]. Examination of bull shark stomachs from this region has shown that, in addition to tarpon, the sharks feed on mullet Mugil cephalus, menhaden Brevoortia patronus and ladyfish Elops saurus, all favored food items of the Atlantic tarpon [28]. Given similarities in spatial and trophic niches, tarpon may be susceptible to bull shark predation while foraging.
Here we conducted a joint tagging study of bull sharks and Atlantic tarpon in southern Florida to describe their spatial distribution, habitat use and movement patterns relative to one another. Our first goal was to describe the seasonal abundance and general movement patterns of bull sharks and tarpon. Our second goal was to identify core areas of bull shark activity and then examine the movement patterns and swimming behaviors (speed, tortuosity) of tarpon relative to these core areas of bull shark habitat use. We used these data to address four general questions. First, how do the seasonal abundance patterns of bull sharks and tarpon compare? Second, how do the movement patterns of bull sharks and tarpon compare, and what proportion of time do their primary ranges overlap? Third, do tarpon movement patterns (e.g., straight versus convoluted paths) and/or their rate of movement (ROM) differ in areas of low versus high bull shark abundance? Finally, given the potential for predator-prey interactions, can any general conclusions be reached concerning whether tarpon may mitigate risk of predation by sharks when they are in areas of high bull shark abundance?
Bull Sharks
Between October 2009 and May 2012, standardized surveys were conducted to capture and tag sharks as part of an ongoing shark abundance and movement study in the Florida Keys (Biscayne Bay, Key Largo, Islamorada, Dry Tortugas) and southeastern Gulf of Mexico (Florida Bay, Everglades National Park, Fort Myers). Sharks were captured using baited circle-hook drumlines as described by Hammerschlag et al [29]. Briefly, sets of 5 drumlines were deployed and left to soak for 1.0 hour before being checked for shark presence. Upon capture, shark sex was recorded, total length (TL) in cm was measured and thereafter, sharks were marked with an identification tag and then released back into the water. Catch per unit effort (CPUE) of drumlines (for all years combined, averaged by season) was used to determine if there were seasonal changes in occurrence and size (TL). CPUE was expressed as the number of bull sharks caught per set and average size of bull sharks caught per set within each season (Winter: Dec, Jan, Feb; Spring; Mar, Apr, May; Summer; Jun, Jul, Aug; Fall: Sep, Oct, Nov).
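A minimal sketch of how such a seasonal CPUE summary could be computed is shown below; the data frame, column names and values are hypothetical and merely stand in for the survey records described above.

```python
# Minimal sketch of the seasonal CPUE summary described above.
# Column names and values are hypothetical, not the study's actual data file.
import pandas as pd

sets = pd.DataFrame({
    "date": pd.to_datetime(["2010-01-15", "2010-04-02", "2010-07-20", "2010-10-05"]),
    "bull_sharks_caught": [3, 1, 0, 2],
    "mean_TL_cm": [210.0, 195.0, float("nan"), 188.0],
})

season_map = {12: "Winter", 1: "Winter", 2: "Winter", 3: "Spring", 4: "Spring", 5: "Spring",
              6: "Summer", 7: "Summer", 8: "Summer", 9: "Fall", 10: "Fall", 11: "Fall"}
sets["season"] = sets["date"].dt.month.map(season_map)

# CPUE = bull sharks per set; size = mean TL of bull sharks per set, averaged within season
summary = sets.groupby("season").agg(cpue=("bull_sharks_caught", "mean"),
                                     mean_TL_cm=("mean_TL_cm", "mean"))
print(summary)
```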
If a large bull shark (>150 cm TL) was captured during a survey, a satellite telemetry tag was affixed to the shark's first dorsal fin. We used Smart Position and Temperature Transmitting (SPOT) tags (SPOT5, Wildlife Computers; www.wildlifecomputers.com) because they provide relatively detailed horizontal movements that can be analyzed at a much higher resolution than light-based position data derived from pop-up archival satellite tags [30]. SPOT tags were coated with Propspeed, a non-toxic, nonmetallic anti-fouling agent, to minimize biofouling [31,32]. Transmitters were attached using titanium bolts, neoprene and steel washers, and high-carbon steel nuts to prevent any metallic corrosion from contacting the fin as well as to ensure that the steel nuts corroded, resulting in eventual tag detachment [33].
Tarpon
Data on seasonal abundance patterns for tarpon were obtained from two sources: (1) the Cooperative Tagging Center (CTC) based at NOAA's National Marine Fisheries Service, Southeast Fisheries Science Center, Miami, Florida [34,35]; and, (2) creel surveys of professional fishing guides in Everglades National Park acquired from the National Park Service, Homestead, Florida [36]. Release locations of conventionally tagged tarpon from 1962 to 2004 derived from the CTC database were plotted with ArcGIS by season (Winter: Dec, Jan, Feb; Spring; Mar, Apr, May; Summer; Jun, Jul, Aug; Fall: Sep, Oct, Nov) and by size (weight in kg). In addition, numbers of tarpon caught by recreational fishers in ENP by month, averaged for the period 1980-2006, were extracted from the creel survey database.
Between March 2011 and June 2011, tarpon were captured for satellite tagging using standard hook-and-line gears on chartered recreational fishing boats in the southern Florida Keys (Islamorada, Bahia Honda), Biscayne National Park (Broad Key), Everglades National Park (Whitewater Bay), Boca Grande and southeastern Gulf of Mexico. Upon capture, tarpon fork length (FL) and girth (G) were measured in cm and weight in kg was computed with the algorithm of [37]; thereafter, a SPOT tag was attached to the tarpon's body via a 40 cm long stainless steel wire tether to a titanium anchor dart. The anchor dart was inserted into the flank of the tarpon about 15-20 cm anterior to the dorsal fin and roughly 5-10 cm above the lateral line.
Movement Data Analysis
The geographic locations of satellite-tagged sharks and tarpon were determined by Doppler-shift calculations made by the Argos Data Collection and Location Service (www.argos-system.org) whenever a passing satellite received signals from a tag at the surface. To improve location accuracy, we processed all Doppler-derived data using Kalman filtering (KF). Argos provides the following radius of error for each KF-derived location class (LC): LC 3 < 250 m; 250 m < LC 2 < 500 m; 500 m < LC 1 < 1500 m; Argos states that the median error for LC 0, A and B ranges from 1 to 3 km [38]. Class Z indicates that the location process failed and estimates of position are highly inaccurate. All transmitted locations were filtered to remove positions with LC Z, those on land, and those implying a speed exceeding 2 m/s (following Weng et al. [39]). Argos-derived locations were plotted using ESRI ArcGIS 9.3.
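The sketch below illustrates the two simplest parts of this filter — rejecting LC Z fixes and removing positions that imply speeds above 2 m/s — on hypothetical positions; the on-land test, which needs a coastline layer, is omitted.

```python
# Sketch of the Argos location filter described above (LC classes, 2 m/s speed cap).
# The land-mask step is omitted; positions and timestamps are hypothetical.
import math
from datetime import datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

fixes = [  # (timestamp, lat, lon, location_class)
    (datetime(2011, 5, 1, 6, 0), 25.10, -81.10, "2"),
    (datetime(2011, 5, 1, 9, 0), 25.12, -81.08, "B"),
    (datetime(2011, 5, 1, 9, 30), 26.50, -80.00, "Z"),   # rejected: LC Z
    (datetime(2011, 5, 1, 12, 0), 27.90, -81.00, "1"),   # rejected: implies > 2 m/s
]

kept = []
for t, lat, lon, lc in fixes:
    if lc == "Z":
        continue
    if kept:
        t0, lat0, lon0, _ = kept[-1]
        dt = (t - t0).total_seconds()
        if dt > 0 and haversine_m(lat0, lon0, lat, lon) / dt > 2.0:
            continue
    kept.append((t, lat, lon, lc))

print(f"{len(kept)} of {len(fixes)} positions retained")
```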
We performed utilization distribution analyses on the position data using fixed kernel density metrics. Kernel density estimates quantify the core regions of occupancy within an animal's home range or activity space [20,40]. Kernel density values are accumulated from the highest to the lowest density areas to create kernel density contours. Thus, the 25% contours represent the areas of highest observed densities, while the 95% contours represent up to 95% of the density. These metrics were calculated according to the equations provided by Worton [41] and plotted using Interactive Data Language (IDL, www.ittvis.com) software. Following Domeier and Nasby-Lucas [40] and Weng et al. [42], kernel density estimates were calculated for all sharks grouped together as species-specific habitat utilization rather than as individual home ranges.
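As a rough illustration of the idea behind these volume contours (not the Worton [41] implementation used in the study), the following sketch estimates a 2-D density with a generic Gaussian KDE and finds the density thresholds that enclose 25%, 50% and 95% of the probability mass; the positions are simulated.

```python
# Rough stand-in for the fixed-kernel utilization distribution: estimate a 2-D
# density on a grid, then find the density thresholds enclosing 25/50/95% of
# the total probability mass (i.e., the kernel "volume" contours).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
lon = rng.normal(-80.9, 0.2, 500)   # hypothetical tag positions
lat = rng.normal(25.1, 0.1, 500)

kde = gaussian_kde(np.vstack([lon, lat]))
gx, gy = np.meshgrid(np.linspace(lon.min(), lon.max(), 200),
                     np.linspace(lat.min(), lat.max(), 200))
dens = kde(np.vstack([gx.ravel(), gy.ravel()]))

order = np.sort(dens)[::-1]                 # densities from highest to lowest
cum = np.cumsum(order) / order.sum()        # cumulative probability mass
levels = {p: order[np.searchsorted(cum, p)] for p in (0.25, 0.50, 0.95)}
print({f"{int(p * 100)}%": float(v) for p, v in levels.items()})
```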
Kernel estimates cannot be conducted on SPOT-derived raw data because of the irregular sampling intervals at which data are acquired, gaps in data, and autocorrelation due to successive locations [43]. To account for these biases, filtered tracks were regularized to a frequency of 12 hour intervals (midnight and noon), using piecewise Bézier interpolation methods similar to Tremblay et al. [44], but modified with the algorithm by Lars Jenson (http://ljensen.com/bezier/). We employed the modified algorithm to eliminate unnatural loops in the tracks that occur with the Bézier method used in [44]. Interpolating track sections with large temporal gaps increases uncertainty (reduces confidence) in the data. To explicitly deal with this, we did not interpolate gaps in the data that exceeded three days, following the methods of [42].
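A simplified stand-in for this regularization step is sketched below: positions are resampled to fixed 12 h timestamps and gaps longer than three days are left uninterpolated. Plain linear interpolation is used purely for illustration; the study itself used the piecewise Bézier scheme described above, and the track values here are hypothetical.

```python
# Simplified stand-in for the track regularization step: resample filtered
# positions to fixed 12 h timestamps, skipping any gap longer than three days.
import pandas as pd

track = pd.DataFrame({
    "time": pd.to_datetime(["2011-05-01 03:00", "2011-05-01 20:00",
                            "2011-05-06 10:00", "2011-05-06 22:00"]),
    "lat": [25.10, 25.15, 25.60, 25.65],
    "lon": [-81.10, -81.00, -80.70, -80.65],
}).set_index("time")

out = []
for i in range(len(track) - 1):
    t0, t1 = track.index[i], track.index[i + 1]
    if (t1 - t0) > pd.Timedelta(days=3):
        continue                      # do not interpolate across long gaps
    for t in pd.date_range(t0.ceil("12h"), t1.floor("12h"), freq="12h"):
        w = (t - t0) / (t1 - t0)      # fraction of the way from t0 to t1
        out.append({"time": t,
                    "lat": track["lat"].iloc[i] * (1 - w) + track["lat"].iloc[i + 1] * w,
                    "lon": track["lon"].iloc[i] * (1 - w) + track["lon"].iloc[i + 1] * w})

regular = pd.DataFrame(out).drop_duplicates("time").set_index("time")
print(regular)
```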
To describe potential interactions between sharks and tarpon, the ROM and tortuosity of tarpon movements were compared relative to bull shark core areas of occupancy (i.e., shark kernel densities) by applying generalized linear models [45]. ROM was calculated as the linear distance traveled in 12 hours. We used the VFractal d [46] as a metric of movement tortuosity. VFractal d values were calculated as a function of the turning angle for each pair of consecutive movements, as described in detail by Nams [46]. ROM and VFractal d were calculated from the filtered, interpolated positions. VFractal d is related to, but distinct from, fractal d (it is computed for each point rather than for each track); fractal d can be estimated as the mean of the VFractal d values of all location points of a tarpon movement track [46]. In this study, we used only the VFractal d values, not fractal d.
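The sketch below computes the two raw ingredients of these metrics from hypothetical regularized positions: the ROM over each 12 h step and the turning angle between consecutive steps. It does not reproduce the full VFractal d estimator of Nams [46], only the per-step quantities it is built from.

```python
# Sketch of the movement metrics: rate of movement (ROM) over each 12 h step,
# and the turning angle between consecutive steps (the input to VFractal d).
import numpy as np

# Hypothetical regularized positions (12 h apart), projected to metres (x, y)
xy = np.array([[0.0, 0.0], [8000.0, 2000.0], [15000.0, 5000.0], [15500.0, 4500.0]])
dt_s = 12 * 3600.0

steps = np.diff(xy, axis=0)                      # displacement per 12 h interval
rom = np.hypot(steps[:, 0], steps[:, 1]) / dt_s  # m/s

headings = np.arctan2(steps[:, 1], steps[:, 0])
turns = np.degrees(np.abs(np.diff(headings)))
turns = np.where(turns > 180, 360 - turns, turns)  # wrap to [0, 180] degrees

print("ROM (m/s):", np.round(rom, 3))
print("Turning angles (deg):", np.round(turns, 1))
```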
Results
During shark surveys, we deployed 1,382 standardized sets, in which 3,699 individual drumlines were deployed. During these sets, 815 sharks were caught, of which 56 were bull sharks, ranging in size from 142-269 cm TL (average 200 cm TL). Bull shark … (Table 1). Accuracies of spatial locations ranged from <250 m to ~3 km (Table 2). No sharks within the dataset moved into inshore rivers.
Bull sharks exhibited high site fidelity, primarily restricting movements to shallow inshore areas where they were tagged (Fig. 2, 3). Only one shark (194 cm female, # 68483, Table 1) made a relatively long-distance migration. Initially tagged in Everglades National Park (17 miles west of Islamorada), this shark traveled northwest into the Gulf of Mexico over the course of 10 days and after approximately one month, it returned to the Florida Keys. Over the next month, the shark moved northward along the Florida Keys crossing the Straits of Florida to the Bahamas, swimming to the vicinity of Bimini. The shark then traveled southeast, again crossing the Straits of Florida before entering Biscayne Bay when transmissions ceased a month later. The minimum straight line distance of this 68 day trip was approximately 1,200 km.
The fixed kernel results for tagged bull sharks, displayed as volume contours, showed a core area of 670 km² (25% kernel contour) centered on the northwestern region of Florida Bay (Fig. 4). The 50% kernel contour (2,260 km²) indicates that the areas of moderate use extended over most of Florida Bay, the Florida Keys and Biscayne Bay (Fig. 4). The 95% kernel contour (18,042 km²) shows the areas of 95% habitat utilization by our tagged bull sharks. We consider areas where bull shark kernel densities exceeded 50% to be "high-density" zones, whereas areas where kernel densities were less than 50% were "low-density" zones.
Tarpon were captured year-round by recreational anglers in southern Florida waters; however, strong seasonal differences in catch rates and sizes of animals caught were found (Fig. 5, 6). Large mature fish (>45.4 kg) appear to be virtually absent from the region in winter (early December-late March). The bulk of the migratory front arrives in late spring (mid- to late-April) and departs the area (going northward) by early summer (late June) (Fig. 5). There is a secondary surge in catch rates in fall as tarpon travel southward through the area during the October to mid-November period (Fig. 5). Other tarpon caught during the year are largely immature fish that tend to use the local rivers and estuaries. Creel data derived from surveys of anglers fishing in Everglades National Park showed the same bi-modal pattern in catches and catch rates (Fig. 6). Tarpon catches were lowest from November through February; highest from April to a peak in June, declining from July through September, and then with a secondary peak again in October (although to a much lesser extent than in early summer).
Tarpon that were satellite-tagged ranged in size from 150-199 cm FL (average 169.4 cm FL, Table 3). Accuracies of spatial locations were similar to those for sharks (Table 2). Of the 10 tarpon tracks, three were on the east coast of Florida, two in the Florida Keys, and five along the west coast of Florida (Fig. 4, 7). The first three tracks (T-176, T-177, T-178) were relatively short due to apparent tag failures. The other three short tracks (T-180, T-182, T-188) were most likely a result of shark attack (Fig. 4, see discussion for more details). Relatively few tarpon tracks, in relation to bull shark tracks, were distributed over open or deep waters (Fig. 4, 7). In contrast, tracks of tarpon, relative to bull sharks, were clustered around shallow Keys and passes. Moreover, tarpon tracks were also concentrated up rivers, where tracked bull sharks were absent (Fig. 4, 7). Tarpon ROM were highest (>1 m/s) where bull shark kernel densities were highest (<50% kernel contour) and ROM were slowest (<0.5 m/s) where shark kernel densities were lowest (>50% kernel contour, Fig. 4). Tagged tarpon spent most of their time (>90%) swimming at relatively low ROM (<0.5 m/s, Fig. 4, 8), coinciding with areas where shark kernel densities were lowest (kernel contours >50%, i.e., "low-density" zones). In contrast, tarpon spent little time (<4%) swimming at high ROM (>1 m/s), coinciding with areas where shark kernel densities were high (kernel contours <50%, i.e., "high-density" zones, Fig. 4, 8). This is statistically supported by the positive correlation from the regression model of tarpon ROM dependent on bull shark kernel density (Fig. 9a). To inspect the data at different levels, two statistical analyses were conducted: (1) with all tarpon data (black and red dots in Fig. 9b) overlain on the bull shark distribution; and (2) with the bottom 25% of ROM values (red dots) for each 0.1 bin of bull shark kernel density. In these analyses, bull shark kernel densities were rescaled from 0 to 1.0 for low to high density and ROM values were log10-transformed. In both analyses, the correlations from the regressions were statistically significant: for all data, the correlation coefficient (r) was 0.1035 … . The tortuosities along tarpon movement tracks were negatively correlated with bull shark kernel density (Fig. 9b). Similar to the ROM analysis, two additional analyses were conducted for the VFractal d data: one with all tarpon data (black and red dots in Fig. 9b) overlapped with the bull shark distribution range; the other with the top 25% of VFractal d values (red dots) for each 0.1 bin of bull shark kernel density. In both analyses, these correlations were statistically significant: for all data, r = −0.093 (P < 0.005, β0 = 1.3021, β1 = −0.2917); and, for the top 25% data, r = −0.5887 (P < 0.0001, β0 = 1.7762, β1 = −0.8009). These results indicate that tarpon generally used low-tortuosity (i.e., straight-line) movement patterns in shark high-density zones, and highly tortuous movement patterns in shark low-density zones.
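To make the analysis concrete, the sketch below reproduces the logic of these regressions on simulated data: shark kernel density rescaled to [0, 1], ROM log10-transformed, an OLS fit on all points, and a second fit restricted to the bottom 25% of ROM values within each 0.1 density bin. The data and fitted coefficients are hypothetical, not the study's results.

```python
# Sketch of the ROM-vs-shark-density regressions described above (simulated data).
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
density = rng.uniform(0, 1, 400)                           # rescaled shark kernel density
log_rom = -1.0 + 0.5 * density + rng.normal(0, 0.4, 400)   # log10(ROM), hypothetical

full = linregress(density, log_rom)
print(f"All data: r = {full.rvalue:.3f}, slope = {full.slope:.3f}, p = {full.pvalue:.3g}")

# Bottom 25% of ROM values within each 0.1 bin of shark kernel density
keep = np.zeros(density.size, dtype=bool)
for lo in np.arange(0.0, 1.0, 0.1):
    in_bin = (density >= lo) & (density < lo + 0.1)
    if in_bin.sum() >= 4:
        cutoff = np.quantile(log_rom[in_bin], 0.25)
        keep |= in_bin & (log_rom <= cutoff)

sub = linregress(density[keep], log_rom[keep])
print(f"Bottom 25% per bin: r = {sub.rvalue:.3f}, slope = {sub.slope:.3f}")
```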
Discussion
Our study had several key findings. First, bull sharks were present in the ecosystem year-round, but abundance was generally higher in the winter. In contrast, tarpon catches were highest in early summer with a secondary peak in the early fall. However, the presence of the largest bull sharks (>230 cm) coincided with peak tarpon abundance. Second, bull sharks and tarpon generally occupied different aquatic habitats despite similar trophic niches. Bull sharks preferred shallow marine habitats close to the coast of Florida, while tarpon preferred estuarine and riverine regions, with only occasional forays into deeper marine waters where bull shark abundance was greatest. Third, the locomotor behavior and ROM of tarpon also differed notably between inland riverine habitats and the more open coastal marine habitats. Specifically, tarpon approximately doubled their average ROM in marine coastal regions where bull sharks appeared to concentrate. Finally, tarpon also had straighter and more direct paths in areas of high bull shark patch use and more convoluted paths in areas of low bull shark use. We propose several hypotheses relating to optimal foraging strategies of both tarpon and bull sharks to explain these observed patterns. At a regional scale, tarpon migration is likely driven principally by water temperatures and prey abundance [25,26]. Tarpon migrate characteristically with the 26°C isotherm, for example, which passes northward through southern Florida waters during the period of mid-April to late May each year. The timing of large mature tarpon movement into Florida Bay and the Florida Keys is coincident with the spawning event (i.e., specifically the process of building the gonad just before spawning, and ensuring survivorship of the fertilized eggs to larvae via biophysical factors) and feeding (building the soma for survivorship, and preparing for the long northward migrations ahead) [25,26].

[Figure 10 legend, in part: Examples of recovered tags from tagged tarpon that had likely fallen prey to sharks: (c) PAT tag; and (d) SPOT tag; both bear the tell-tale teeth marks (based on spacing and serration) of a shark. Although we cannot identify the species of shark by the bite marks on the tag, it seems plausible that a bull shark was responsible given that the other large shark species are relatively rare in the region, whereas the attack occurred at the location of highest bull shark density in the area. doi:10.1371/journal.pone.0045958.g010]
The core area of bull shark activity found within northwestern area of Florida Bay is likely driven by the high abundance of teleost prey concentrated there. By conducting shark and fish surveys throughout Florida Bay, Torres et al [47] found that the abundance of seven species of sharks (including bulls) in the northwestern area of Florida Bay was highly correlated with the abundance of 45 teleost species. Given tarpon feeding habits, we would have similarly expected tarpon habitat use to have also been relatively high within the northwestern area of Florida Bay. However, tarpon movements were suggestive of avoiding this area (low residence and high rate of movement in directed lines). In contrast, tarpon exhibited highly tortuous movements over relatively long time periods along the outskirts of Florida Bay as well as in adjacent rivers, which is indicative of foraging, although prey abundance patterns are relatively low in these areas compared to the northwestern area of the Bay.
Productive habitats that contain the greatest food resources are often inherently dangerous for prey, thus creating the need for prey to modify their locomotor behavior and habitat use in response to the threat of predation [48–50]. The observed movements by tarpon in Florida Bay are suggestive of a food-risk trade-off. For example, studies with lizards and rodents [51] have each shown that they tend to use a bimodal distribution of locomotor speeds, with slower speeds in more protected, safer habitats and faster speeds in more open, risky habitats. Desert lizards (Uma scoparia) move slowly along convoluted paths underneath vegetation when undisturbed, which likely shields them from both overheating and predators, but they move rapidly in direct lines in open areas [52]. Because prey can elude predators by escaping into a refuge, moving through exposed habitats results in dramatically increased locomotor effort [53–55]. This pattern is consistent with the alterations in speed by tarpon in areas of high and low bull shark density observed in this study.
We suggest that another trade-off may be associated with the additional metabolic costs incurred by tarpon that occupy brackish or freshwater zones where bull shark density is low. Generally, the energetic costs of osmoregulation in teleost fish are higher in freshwater than seawater (e.g., Febry and Lutz [56]). The energetic expense occurs because of the need to maintain fluid volume balance by excreting the extra water, while at the same time, trying to conserve internal ionic balance, a biological process which is energetically expensive ( [56]; G. Anderson, personal communication). The fact that tarpon spend relatively little time in what would appear to be more optimal coastal marine habitats (from both a food and osmotic perspective), and move so quickly through them, further suggests that these habitats may be risky for them.
It is worth noting that our own anecdotal observations indicate threat of predation mortality to tarpon in areas of high bull shark use. For example, Tarpon T-182 was tagged and released on May 23, 2011, in an area of low bull shark density. The tarpon moved southward through Florida Bay and into a bull shark high density area, at which point it was likely attacked and consumed by a shark on May 28th (Fig. 4, 10). This presumption is based on two factors. First, the depth and light-level data derived from the recovered tag is indicative of being ingested (Fig. 10 a,b). Additionally the recovered tag displayed scratch marks that appear to have been inflicted by a shark based on tooth spacing and serration (Fig. 10 c,d). Although we cannot identify the species of shark by the bite marks on the tag, we believe only tiger (Galeocerdo cuvier), hammerhead (Sphyrna sp.) and bull sharks are likely candidates for attacking a large tarpon (and severing the tag's stainless steel tether). However, it seems plausible that a bull shark was responsible given that the former two species are relatively rare in the region, whereas the attack site represents the location of highest bull shark density in the area.
Critical examination of bull shark diet from the region is limited [28] and, although tarpon have been found in bull shark stomach contents, there is little evidence of bull sharks routinely targeting tarpon as preferred prey. In contrast, bull sharks are commonly observed preying upon tarpon in the region during recreational catch-and-release angling [26]. Therefore, we hypothesize that a behaviorally mediated indirect interaction (BMII; reviewed by [57]) may be occurring between sharks and tarpon. Specifically, we speculate that the higher shark abundance in the northwestern area of Florida Bay is largely driven by the relatively high teleost abundance (preferred prey) there [47], which in turn indirectly causes tarpon to reduce their use of this productive area when foraging, to minimize their risk of potential mortality from sharks. A similar BMII has been described in Shark Bay, Western Australia, among tiger sharks, dugongs (Dugong dugon), dolphins (Tursiops aduncus), turtles (Chelonia mydas) and cormorants (Phalacrocorax varius) [58]. There, the seasonal presence of dugongs (preferred prey of tiger sharks) in shallow waters during summer results in peak tiger shark abundance in these habitats. This, in turn, causes dolphins, turtles and cormorants (species not routinely attacked by sharks) to reduce their use of these productive habitats during summer to minimize the risk of potential predation [58]. That said, our hypotheses outlined above require significant investigation by increasing tracking efforts and gathering further ecological data for sharks, tarpon and their potential prey. For example, greater confidence in our hypotheses would be achieved if changes in the spatial and/or temporal movements of sharks corresponded with compensatory adjustments in tarpon swimming behavior and distribution in areas previously occupied by sharks [59]. Because movement patterns in animals are complex and can be influenced by many different variables, our study cannot directly reveal whether the movements of tarpon or bull sharks influence one another per se. Tarpon seasonal migrations are likely cued to changes in water temperature in combination with the movement and distribution of prey [25]. Therefore, the observed tarpon swimming behavior could also be driven by other factors, or a combination of them, such as environmental preferences (temperature and salinity), feeding needs, and reproductive behaviors [25].
Although the use of SPOT tags provided spatial data at higher resolution than archival tags, the major limitation of using Argos-derived data from SPOT tags is the need for animals to surface for long enough to allow successive transmissions for obtaining accurate positions and, therefore, estimating fine-scale measurements of speed and fractal values. This is problematic because sharks and tarpon surface irregularly and thus can generate gaps in data acquisition and autocorrelation due to consecutive positions [43]. To overcome this issue, we used filtered tracks that were regularized to a frequency of 12 hour intervals using interpolation. Ideally, it would be better to use higher-resolution temporal data (i.e. <12 hrs) if sharks and tarpon transmitted frequently; however, we found that a 12 hr interval was optimal in this study based on the frequency of transmissions received. Further, given the limitations in estimating tarpon versus bull shark density, results were strongly influenced by several high shark density values; however, these data were not outliers, although the analysis (and its interpretation) would benefit from a larger data set. We are aware that it would have been ideal to analyze potential overlap in kernel densities between tarpon and sharks. However, since tarpon were concentrated up inland rivers, the kernel density estimates calculated would have indicated primary activity space over land, thereby negating such a comparison. Additionally, kernel density estimates for bull sharks could have been biased towards the site of tagging, and although this cannot be ruled out, we believe it is unlikely since sharks were tagged throughout the middle Keys on both the Atlantic and Gulf coasts (where they also transmitted). Further, restricting focus to data derived from Florida Bay, where shallow water depths likely favored transmission, would not impact the general conclusions drawn from this work. Another potential shortcoming of this study worthy of consideration is that the tracking period and duration for tarpon were shorter than for sharks, making our discussion of predator-prey interactions somewhat speculative. Also, the positional data used varied in accuracy from less than 250 m up to 3 km. However, we believe that this error scale, when compared to the scale of shark and tarpon movements, was sufficient to describe the spatial habitat use patterns observed.
Investigating the movements and fine scale foraging behaviors of marine predators presents several formidable biological and logistical challenges. Future investigations of this kind in marine systems will benefit from employing multiple types of animal-borne instrumentation and sensors (e.g. video, accelerometers, satellite and acoustic telemetry, etc.) to better understand and quantify dynamic interactions among marine predators and between highly mobile fishes and their prey [60]. Given their relatively high site fidelity in shallow nearshore waters, both bull sharks and tarpon may be disproportionately vulnerable to coastal fishing and other anthropogenic impacts including reduced water quality, pollution, reductions in their prey, and habitat modifications. Accordingly, further studies of the movement patterns of these and other marine predators are needed to identify and prioritize areas for protection as well as for predicting how anthropogenic-driven changes in their habitat use may impact ecosystem dynamics and vice versa.
Initial Experience of the Feasibility of Single-Incision Laparoscopic Appendectomy in Different Clinical Conditions
Introduction. Single-incision laparoscopic surgery (SILS) is a new technique developed for performing operations without a visible scar. Preliminary studies have reported the use of the technique mainly in cholecystectomy and appendectomy. We evaluated the feasibility of the technique in various clinical conditions, including children, fertile women, and obese patients. Materials and Methods. The SILS technique was used in a random sample of patients hospitalised for suspected appendicitis. Ordinary diagnostic laparoscopy was performed and the appendix was removed if needed. The ligation of the appendix was performed by thread loop, absorbable clip or endoscopic stapler. Details regarding the recovery of patients were collected prospectively. Results. Ten SILS procedures were performed without conversions or complications. The patient series included uncomplicated and complicated appendicitis patients. The mean age of the patients was 37 years (range 13–63), mean BMI was 26 (range 18–31), mean operative time was 40 minutes (range 23–50), and mean postoperative stay was 2 days (range 1–5). Conclusions. The SILS technique is feasible for obese patients and for uncomplicated and complicated appendicitis, as well as for exploratory laparoscopy. The most common methods of appendiceal ligation are feasible with the SILS technique. The true benefit of the technique should be assessed by randomised controlled trials.
Introduction
During the era of laparoscopic surgery, the common trend has been towards less invasive techniques, and a natural extension of this trend is to perform operations without scars. The most prominent techniques representing scarless surgery are transumbilical single-incision laparoscopic surgery (SILS) and natural orifice transluminal endoscopic surgery (NOTES). As the latter is still struggling with some technical and equipment-related difficulties, SILS seems more ready for wider use in the surgical community. Reliable and simple equipment is available for SILS procedures, and the operative technique, although different from conventional laparoscopy, is probably easier to learn than the NOTES technique.
Several operations have thus far been performed by the SILS technique, including, for example, cholecystectomy, appendectomy, splenectomy, and sleeve gastrectomy. The most numerous publications present results of SILS cholecystectomy [1][2][3][4] and results obtained in pediatric surgery [5][6][7]. All these reports have indicated that the SILS technique is safe and feasible in these surgical populations and that the operative time with this new technique is reasonable.
Appendectomy is the most common abdominal operation performed on an emergency basis in the western world [8]. The advantage of the laparoscopic technique over the conventional technique has been proven especially in fertile women and obese patients [9][10][11]. SILS appendectomy may be even more advantageous to patients by eliminating scars and potentially diminishing postoperative pain. However, the role of SILS appendectomy is still evolving, since all published reports of the technique should be regarded as preliminary [5][6][7][12]. More studies evaluating the technique in different clinical situations, as well as randomised controlled trials, are needed in order to assess the real benefits of SILS appendectomy in general surgical practice.
The aim of the present study was to evaluate the feasibility of SILS diagnostic laparoscopy and appendectomy in a heterogeneous patient population presenting with symptoms suggestive of appendicitis. The suitability of different devices for appendiceal ligation was also evaluated, as well as the learning curve of the procedure.
Materials and Methods
This report is a case series of 10 patients admitted to Päijät-Häme Central Hospital due to right lower abdominal pain suggestive of appendicitis. All patients were clinically deemed to have a high suspicion of appendicitis and were scheduled for emergency single-incision laparoscopy and subsequent appendectomy, if needed. The intention was to recruit a heterogeneous patient population to the procedure including, for example, children, fertile women, and obese patients. The operation was performed transumbilically using the SILS port (Covidien, Norwalk, CT, USA). First, an intraumbilical vertical skin incision was made and the umbilicus was detached from the fascia. The fascia was opened (2-3 cm) and the SILS port was introduced into the abdomen. After that, three 5 mm trocars were placed through the port and the pneumoperitoneum was induced. A 5 mm 30-degree optic was used in all operations. One straight and one curved grasper (Roticulated endo grasp, Auto Suture, Norwalk, CT, USA) were introduced into the abdomen, the right lower abdominal quadrant was explored, and the operation was continued according to the findings. When deemed necessary, the appendix was removed. In all patients, the mesoappendix was divided with bipolar electrocautery and laparoscopic scissors. If extensive dissection was needed, a monopolar dissecting hook was additionally used. The ligation of the appendix was performed by thread loop (Endoloop, Ethicon, Somerville, NJ, USA), absorbable clip (Lapro-clip, Auto Suture, Norwalk, CT, USA), or endoscopic stapler (Endogia, Auto Suture, Norwalk, CT, USA). When a clip or endoscopic stapler was used, one of the 5 mm ports was replaced by a 12 mm port (Versastep, Auto Suture, Norwalk, CT, USA). The appendix was extracted with a pouch (Endocatch Gold, Auto Suture, Norwalk, CT, USA). If the appendix proved to be normal, standard diagnostic laparoscopy was performed, including examination of 100 cm of the distal ileum, female genital organs, ascending colon, sigmoid colon, and gallbladder. At the end of the procedure, the fascia was closed with a continuous absorbable suture, the umbilicus was refixed to the fascia, and the skin was closed with absorbable sutures. After discharge, intraoperative and postoperative data were recorded.
Results
Altogether 10 patients were operated on by the SILS technique. There were 5 men and 5 women. Nine patients had appendectomy while one patient with sigmoid diverticulitis had only diagnostic laparoscopy. The mean age of the patients was 37 years (range 13-63), mean BMI was 26 (range 18-31), and the mean operative time was 40 minutes (range 23-50). The mean postoperative stay was 2 days (range 1-5). There were no conversions, no wound complications, or other complications among patients. The operative findings, operative times, and other clinical details of the patients are shown in Table 1. All types of appendicitis, from uncomplicated disease to disease with diffuse peritonitis, were represented in our patient series. The patient with perforated appendicitis and diffuse peritonitis made an uneventful recovery, although she spent 5 days in the hospital due to the therapy for diffuse peritonitis. Another patient with a dense local inflammatory reaction and incipient abscess formation could be operated on by the SILS technique and recovered normally. The method was also suitable for the most obese patient in our series. In the young female patient with a ruptured ovarian cyst, exploratory laparoscopy with therapeutic intervention could be performed without difficulty by the SILS technique.
Discussion
Appendectomy is the most common abdominal emergency operation in the western world. More and more appendectomies are currently performed laparoscopically because the technique offers advantages to patients in terms of more accurate diagnosis, diminished wound infections, and more rapid recovery [9]. Compared to traditional laparoscopy, SILS appendectomy certainly results in better cosmesis, but additional benefits, for example in terms of more rapid recovery, have not been proven scientifically. Therefore, randomised controlled clinical trials are urgently needed to define the role of SILS appendectomy in the modern surgical armamentarium.
Whenever a new technique is introduced to the surgical community, the focus should be on the feasibility, safety, and clinical advantage of the method. Further, safety is highly dependent on how easily the new technique can be learned by average surgeons. It is well acknowledged that the implementation phase of new techniques is associated with an increased risk of complications, emphasizing the importance of thorough training and education. The SILS technique differs remarkably from the traditional laparoscopic technique in the use of the grasping and dissecting instruments. Due to the vicinity of the ports at the fascial plane, the operative technique necessitates crossing of the instruments (or specially designed instruments), making the procedure more challenging and initiating a new learning curve for the surgeon. Thus, the transition from conventional laparoscopy to SILS is demanding, initiates a new learning curve for surgeons, and increases initial operative time, as shown in a previous study [12]. Firstly, the most common conventional laparoscopic technique for appendectomy uses three ports, meaning that the removal of the appendix by the SILS technique is performed principally in the same way as in traditional laparoscopy. Secondly, appendectomy is a relatively easy operation performed in a relatively safe abdominal area, decreasing the risk of disastrous complications that may happen, for example, in cholecystectomy. Further, SILS appendectomy can be performed properly with one straight instrument and one curved instrument, making the procedure easier compared to the use of two curved instruments. When performing appendectomy, one must be prepared for different abdominal findings. The appendicitis may be oedematous, gangrenous, perforated with a varying degree of peritonitis, or even associated with a peritoneal abscess. The technique chosen to treat the patients should be suitable for all these clinical situations. In the present patient series there were both uncomplicated and complicated cases, with different degrees of peritonitis. All our patients could be operated on by the SILS technique without conversions or additional ports, and they had an uneventful recovery. Further, the mean operating time was 40 minutes, comparing well with the operating time of conventional laparoscopic appendectomy in our hospital (mean 43 minutes, range 18-103) and in a recent Cochrane review (mean 23.5-102 minutes) [9]. According to our experience, although limited, the SILS technique seems to be suitable for a variety of appendiceal infections.
Another issue is the feasibility of the SILS technique for performing exploratory laparoscopy when the surgeon encounters a normal appendix and the nature of the disease must be determined. According to our experience, a proper diagnostic laparoscopy can be performed by the SILS technique relatively easily and rapidly. The examination of the distal ileum, female genital organs, and other organs situated in the pelvic area could be accomplished without difficulty.
We intentionally tried different techniques for ligation of the appendix in order to find out how feasible they are. Probably the most common methods to ligate the appendiceal stump are the thread loop, absorbable clip, and endoscopic stapler. All these options seemed to be suitable for SILS appendectomy. However, the easiest and fastest method in our hands was the endoscopic stapler, which has been suggested to lower the risk of postoperative intra-abdominal surgical-site infection and the need for readmission to hospital [13], although a recent systematic review did not support this view [14].
According to the literature, obese patients in particular benefit from laparoscopic appendectomy compared to the open approach, and laparoscopy should be the preferred technique for these patients [9][10][11]. It is, thus, important that new minimally invasive operative techniques are suitable for this patient population too. As shown in Table 1, the technique was also feasible for the obese patients in our series [15]. As the main advantage of the SILS technique is that a visible scar can be avoided, further studies evaluating this issue are urgently needed. Conventional laparoscopic appendectomy produces relatively small scars, and the superiority of SILS in that respect remains to be shown. Further, the importance of an abdominal scar may be age related, since a limited survey among scrub nurses in our hospital revealed that young nurses would opt for a scarless operation if it were available, whereas older ones did not consider the issue as important.
Although the SILS technique looks promising and offers some potential benefits for patients compared to conventional laparoscopy, two possible disadvantages should be considered. The SILS technique may be associated with an increased risk of hernias. The technique necessitates a fascial incision through the abdominal midline, which has been considered to be prone to hernia formation. Further, the fascial incision is more traumatic compared to the 5 or 12 mm trocar wounds made with dilating trocars. The second possible disadvantage is the additional cost caused by the procedure-specific port and instruments. These extra operative costs should be taken into account in the current trend towards cost-effectiveness in healthcare.
Conclusions
The SILS technique is feasible for a variety of appendiceal inflammatory conditions and for exploratory laparoscopy. The technique is well suited to obese patients, and different technical methods for appendiceal ligation can be easily used. Appendectomy is a suitable procedure for training in the SILS technique. The technique may have a few disadvantages, and its true benefits should be assessed in randomised controlled trials.
Setting priorities for knowledge translation of Cochrane reviews for health equity: Evidence for Equity
Background A focus on equity in health can be seen in many global development goals and reports, research and international declarations. With the development of a relevant framework and methods, the Campbell and Cochrane Equity Methods Group has encouraged the application of an ‘equity lens’ to systematic reviews, and many organizations publish reviews intended to address health equity. The purpose of the Evidence for Equity (E4E) project was to conduct a priority-setting exercise and apply an equity lens by developing a knowledge translation product comprising summaries of systematic reviews from the Cochrane Library. E4E translates evidence from systematic reviews into ‘friendly front end’ summaries for policy makers. Methods The following topic areas with high burdens of disease globally were selected for the pilot: diabetes/obesity, HIV/AIDS, malaria, nutrition, and mental health/depression. For each topic area, a “stakeholder panel” was assembled that included policymakers and researchers. A systematic search of Cochrane reviews was conducted for each area to identify equity-relevant interventions with a meaningful impact. Panel chairs developed a rating sheet which was used by all panels to rank the importance of these interventions by: 1) Ease of Implementation; 2) Health System Requirements; 3) Universality/Generalizability/Share of Burden; and 4) Impact on Inequities/Effect on equity. The ratings of panel members were averaged for each intervention and criterion, and interventions were ordered according to the average overall ratings. Results Stakeholder panels identified the top 10 interventions from their respective topic areas. The evidence on these interventions is being summarized with an equity focus and the results posted online, at http://methods.cochrane.org/equity/e4e-series. Conclusions This method provides an explicit approach to setting priorities by systematic review groups and funders for providing decision makers with evidence for the most important equity-relevant interventions. Electronic supplementary material The online version of this article (10.1186/s12939-017-0697-5) contains supplementary material, which is available to authorized users.
Background
The number of reports of systematic reviews of research has increased from about 80 a year in the late 1980s to more than 8000 a year today [1]. This makes it very difficult for decision makers to keep abreast of the latest evidence. The Campbell and Cochrane Equity Group is committed to finding ways of helping decision makers access and use the evidence on interventions that have an impact on health inequities. Health inequities are avoidable differences in health outcomes [2]. The importance of equity in health, wellbeing and wealth is increasingly accepted globally, and it underpins research, global development goals and reports, and international declarations [3][4][5][6][7][8][9]. The Campbell and Cochrane Collaborations, and other groups, such as the Alliance for Health Policy and Systems Research and the International Initiative for Impact Evaluation (3ie), publish systematic reviews of the evidence for what works and what does not. There has been an increased emphasis on health equity in systematic reviews with the establishment of a Campbell and Cochrane Equity Methods Group (Equity Methods Group), whose members have provided a framework [10] and methods [11,12] for applying an 'equity lens' to systematic reviews.
However, there is an ongoing need for dissemination and integrated knowledge translation of systematic reviews, to make users aware of knowledge and facilitate its use to improve health and health systems [13][14][15][16][17]. A number of initiatives are currently addressing this challenge, such as the following:
Evidence Aid review summaries for major healthcare emergencies, including disasters (www.evidenceaid.org/) [18];
Supporting Policy-relevant Reviews and Trials (SUPPORT) evidence summaries of health systems interventions in low- and middle-income countries, which are based on a simplified version of the Cochrane Summary of Findings Tables (www.supportsummaries.org/);
Evidence summaries developed by the International Initiative for Impact Evaluation (3ie) in the areas of health, nutrition and population, which emphasize photographs and text and are exploring the use of expert commentaries (http://www.3ieimpact.org/en/inform-policy/health-nutrition-and-population/);
Syntheses of research evidence about governance, financial and delivery arrangements within health systems, and about implementation strategies that can support change in health systems (www.healthsystemsevidence.org/); and
Countdown to 2030, which produces thematic or country-specific briefing notes for policymakers on topics related to maternal, newborn, and child survival (http://countdown2030.org/reports-and-articles/briefing-notes).
These websites and databases include varying amounts of information related to health equity, such as 'what works' for disadvantaged individuals and groups. We developed this Evidence for Equity (E4E) project to focus specifically on equity-relevant interventions. E4E applies an equity lens to systematic reviews through a knowledge translation product comprising summaries of systematic reviews from the Cochrane and Campbell libraries. E4E translates evidence from systematic reviews into "friendly front-end" summaries for policy makers. Building on these other collections of summaries, E4E aims to summarize evidence on interventions that may reduce inequities. The aim of this special collection of systematic review summaries is to provide policy makers, clinicians, and other practitioners, particularly those working in resource-limited settings, with easily accessible, high quality evidence on relevant interventions.
Despite the increased recognition of the importance of knowledge translation of systematic reviews, which summarize the totality of the evidence, little has been done to prioritize topics for focused knowledge translation efforts. The objective of this study was to identify which systematic reviews were of highest priority for knowledge translation, with a focus on promoting health equity, in collaboration with policymakers and program managers.
Take Home Messages
1. For policy makers and program managers in high- or low-/middle-income countries who want to make evidence-based decisions on equity-focused interventions, it is challenging to find evidence on interventions that are effective.
2. This pilot project assessed priority setting methods to identify priority interventions from Cochrane systematic reviews for which there is evidence of a benefit in five topic areas: diabetes/obesity, HIV/AIDS, malaria, nutrition, and depression.
3. This paper presents criteria for priority setting for systematic review groups and funders which may help identify the most important equity-relevant interventions.
Methods
A steering group of individuals with extensive experience with systematic reviews and knowledge translation methods met face-to-face in London, England in February of 2012. During a two-day meeting, the group decided to focus on a combination of priorities using the Millennium Development Goals as a starting point and expanding on these to also include non-communicable diseases. This resulted in the selection of the following pilot topic areas, each of which has a high burden of disease globally, as indicated by associated disability-adjusted life years (DALYs): diabetes/obesity, HIV/AIDS, malaria, nutrition, and mental health/depression [20]. In 2015, the United Nations created the global Sustainable Development Goals (SDGs), a group of 17 goals to be met by 2030 [21]. The topic areas listed above are still relevant to the SDGs. Goal number 3 addresses all health priorities and includes reproductive, maternal and child health; communicable and noncommunicable diseases; as well as access for all to safe, effective, and affordable medicines and vaccines [21]. In addition, goal number 10 is to reduce inequalities within and between countries and focuses on eliminating inequities based on age, sex, disability, race, ethnicity, origin, religion, and socioeconomic or other status.
Systematic reviews on these five topic areas were retrieved through a search in the Cochrane Library (via Wiley (http://www.cochranelibrary.com/), up to 2013, Issue 6) using relevant key words in the title field and limiting the start date to 2008. The exact search strategies are reported in Additional file 1.
Two independent screeners reviewed the results section (Data and Analysis) of the Cochrane reviews to identify: a) any statistically significant difference in mortality; b) for any other categorical morbidity outcomes besides mortality, an Odds Ratio (OR) or Relative Risk (RR) greater than 2 or less than 0.5 [22]; and c) all statistically significant continuous morbidity outcomes (SMD, MD) that, when transformed into ORs, were greater than 2. Surrogate outcomes and non-statistically significant effects were excluded. Details of the population, intervention, comparisons, outcomes, and effect size were extracted.
Negative effect sizes that demonstrated benefit were converted to a positive value, by reversing the scale for continuous outcomes or by taking the inverse of dichotomous outcomes (i.e., so that the OR was >1). All effect sizes were converted to odds ratios to allow for comparison across reviews, using the formulae provided in the Cochrane Handbook [23]. The results are described as the "converted effect size and confidence interval".
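As a worked illustration of this conversion step (not taken from any of the screened reviews), the sketch below re-expresses a standardized mean difference as an odds ratio using the π/√3 factor described in the Cochrane Handbook, and inverts beneficial effects so that they read as OR > 1; the numeric value is hypothetical.

```python
import math

def smd_to_or(smd):
    """Convert a standardized mean difference to an odds ratio: ln(OR) = SMD * pi / sqrt(3)."""
    return math.exp(smd * math.pi / math.sqrt(3))

def benefit_as_or_above_1(odds_ratio):
    """Express a beneficial effect on the OR > 1 side of the scale."""
    return odds_ratio if odds_ratio >= 1 else 1 / odds_ratio

smd = -0.5                                  # hypothetical continuous outcome favouring the intervention
converted = benefit_as_or_above_1(smd_to_or(smd))
print(round(converted, 2))                  # ~2.48, i.e. above the OR > 2 screening threshold
```

The same inversion applies to dichotomous outcomes reported as an OR or RR below 1.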
Five "Stakeholder Panels" were assembled to participate in the priority-setting exercise, each addressing one of the five condition-related topic areas listed above. For each panel, a chair(s) was recruited based on their expertise in one or both conditions and in conducting systematic reviews. The chair(s) helped identify and approach five other policy makers and researchers (stakeholders) to join the panel. Members of these panels were purposefully selected to ensure a variety of policymakers (e.g. national, regional, civil society, NGO) from both HIC and LMIC, with responsibility in the topic area of their panel and with interest in evidence-based policy making.
Stakeholder panel chairs reviewed the initial list of potential interventions and outcomes and eliminated: a) those which are no longer used; b) those which could not be implemented globally due to prohibitive costs, especially in resource-constrained settings; and c) interventions whose outcomes were not meaningfully important.
Chairs collaborated on the development of a rating sheet which was used by other panel members to rank the interventions on a scale from 0 to 4, with 4 denoting an optimal intervention, for four criteria. These criteria were developed based on the Child Health and Nutrition Research Initiative (CHNRI) priority setting exercise [24].
A. Ease of Implementation: Ease with which the intervention can be implemented. Consider whether there is sufficient capacity to implement the intervention.
B. Health System Requirements: Potential effect on the health system. Consider the level of difficulty with intervention delivery, the infrastructure required (human resources, facilities, etc.). Consider the resources available and whether the intervention is affordable.
C. Universality/Generalizability/Share of Burden: Relevance of the intervention to other settings. Is the intervention relevant to most countries? Consider whether the intervention poses safety concerns and whether these may be different in different settings. Rank lower for a less generalizable intervention, or one that applies only to a specific population.
D. Impact on Inequities/Effect on equity: Does the distribution of the disease burden mainly affect the disadvantaged? Are the disadvantaged most likely to benefit from the intervention? Will the intervention improve equity in disease burden distribution long-term? Rank lower for interventions that may increase inequities.
Stakeholder panel members were also asked to note any safety concerns. Finally, they were asked to give an overall rating for each intervention (from 1 to 4 where 1 was the least important intervention and 4 was the most important intervention). Instructions given to stakeholder panel members are provided in Additional file 2.
Lastly, the ratings of all panel members were averaged for each intervention and criterion. We converted the average rating into a score out of 100 for ease of interpretation. This step differs from the CHNRI method, which calculates the scores divided by the number of received answers to obtain a percentage of agreement [24]. We ordered interventions according to the average overall rating. We provided these rank-ordered lists to all panel members.
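A minimal sketch of this aggregation step is shown below. The intervention names and ratings are hypothetical, and the rescaling of the 0–4 average to a score out of 100 (average/4 × 100) is an assumption about the exact conversion used.

```python
import pandas as pd

# Hypothetical ratings: one row per (panel member, intervention, criterion).
ratings = pd.DataFrame([
    {"member": "A", "intervention": "Intervention X", "criterion": "overall", "rating": 3},
    {"member": "B", "intervention": "Intervention X", "criterion": "overall", "rating": 4},
    {"member": "A", "intervention": "Intervention Y", "criterion": "overall", "rating": 2},
    {"member": "B", "intervention": "Intervention Y", "criterion": "overall", "rating": 3},
])

# Average each intervention/criterion pair, then rescale the 0-4 average to a 0-100 score.
scores = (ratings.groupby(["intervention", "criterion"])["rating"]
                 .mean()
                 .mul(100 / 4)
                 .reset_index(name="score"))

# Rank-order interventions by their average overall rating.
ranked = scores[scores["criterion"] == "overall"].sort_values("score", ascending=False)
print(ranked)
```

In practice the same table would carry one row per criterion (A–D) plus the overall rating for each panel member and intervention.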
Results
Each stakeholder panel consisted of at least six members, including the panel chair plus five or more additional experts. The characteristics of our stakeholders are listed in Table 1.
Eligible systematic reviews reaching criteria for important effects
We reviewed all systematic reviews in the areas of depression, malaria, nutrition, diabetes/obesity and HIV in the Cochrane Library from 2008 to 2013. Of these, 96 reviews met the criteria for being relevant to current practice, having an odds ratio > 2 for morbidity, and/or for having a meaningful impact on mortality.
Consensus ratings
Stakeholder panel members reported that the wide range of interventions and outcomes made ranking difficult and in some cases reported that they gave more priority to interventions with which they were more familiar. We needed to provide additional information for some panel members to complete their rankings. Panel members also reported having some difficulty judging the intervention for some of the criteria without having a particular context or without more details about the intervention (e.g. frequency, delivery method). Additional judgment was needed where interventions may be provided in combinations that may differ depending on the local context. In such situations, panel members were encouraged to think of the real-life practicalities in one of the countries with a high burden of the condition of interest.
Panel members used the full range of the scale from 1 to 4 for each criterion. We did not find evidence of bimodal distributions in the scores that would suggest disagreement within the panel ratings. Furthermore, panel members reached consensus on the top 10 interventions in each panel easily. Table 2 shows the prioritisation results for diabetes/obesity; its footnotes define the 0–4 rating scales used for each criterion. See Additional file 3 for the same tables for the other 4 conditions. These show the ratings by the panels on the degree to which these systematic reviews merited focus for knowledge translation based on their importance for improving the health of the disadvantaged, based on the four criteria of health system effects, generalizability, impact on health equity and ease of implementation.
Discussion
With the realisation that single studies, however large, should not drive policy because they may not be replicable [25], there has been an exponential increase in systematic reviews. Research community members, especially those working on reducing health inequities, have a responsibility to inform policymakers and their advisors who make decisions on which systematic reviews should be prioritized for knowledge translation for the benefit of the most vulnerable members of their populations. Such global exercises need to be sensitive to major regional differences in needs and perceived priorities.
Our approach differs from other priority-setting exercises because we chose to focus on prioritizing knowledge translation of completed systematic reviews that have the potential to promote health equity. We also involved those who need and use this evidence, together with researchers and publishers, in order to meet the information needs of those making decisions related to equity. The intent is to provide an international platform to deliver summaries from systematic reviews on interventions that impact on health in disadvantaged populations. The target audience includes policymakers, clinicians, regulators, and the general public. This E4E initiative addresses the criticism that Cochrane reviews fail to draw useful conclusions [26] and instead call for more research, by prioritizing reviews with potential for health equity impact for knowledge translation and broad dissemination. Consensus was successfully achieved in identifying the top group of equity-relevant interventions in each of the five pilot areas. The intent was not to focus on specific ranking; rather, it was to provide a matrix across these five criteria to highlight the importance of health equity in decisions on identifying priority interventions given limited resources. The next step is to meet with the relevant Campbell and Cochrane review groups, and other interested systematic review groups, and explore with them whether and how this process can be incorporated into their own priority-setting processes for knowledge translation, as the Cochrane and Campbell Collaborations are both currently developing knowledge translation strategies for their reviews.
Many Cochrane systematic reviews are focused on intervention efficacy, and equity concerns are often more related to intervention implementation and delivery. Therefore, the evidence in a review may not reflect its actual importance in practice. To address this issue, we asked our stakeholders to consider the feasibility of the intervention, deliverability, universality, and effects on health equity. We did not include non-experimental data on harms in this exercise but will include this information in future updates, when available.
Our methods for this priority-setting exercise are similar to those used by other groups, such as the Child Health and Nutrition Research Initiative (CHNRI) [24] and the James Lind Alliance, which uses priority-setting partnerships to develop priorities for ten intervention uncertainties for consideration by research funders [27]. Our approach also aligns with guidance provided by Lavis et al. for health decision makers (policy and programs), which includes using explicit criteria based on the underlying problem, the burden of disease, and intervention options [28]. Other papers similarly describe priority-setting exercises for research. These methods include surveys, face-to-face consultations, and evidence mapping [29].
Informing decision makers should involve providing an easily understood 'Friendly Front-End' [13]. Firstly, this derivative summary must provide information not only on the relative effect or statistical significance alone, but also on the absolute magnitude of the benefits as well as potential harms, where relevant. Secondly, for those interventions with meaningful, substantive benefit, policymakers also require guidance on: a) ease of implementation of the intervention, including the available capacity and human resources; b) health system requirements and effects on the health system; and c) universality, i.e., the magnitude of the burden of illness in the country of interest. Finally, policymakers should be informed about whether the intervention will reduce health inequities. There is very little research available on the types of policy summaries and their impact on policy-makers' knowledge and decision-making [30].
Strengths
Each topic area was co-led by an internationally recognized "content leader" in the respective content field (i.e., depression, diabetes/obesity, HIV, malaria, and nutrition). Each content leader was teamed up with a Cochrane methodologist with expertise in performing systematic reviews in the same area. Each team was composed of a mixture of researchers and policymakers. The explicit focus on equity was helped by the delineation of the three additional criteria: a) the ease of implementation of the intervention, including the available capacity; b) health system requirements and effects on the health system; and c) universality, or the magnitude of the burden of illness in the country of interest. Consensus was achieved remarkably easily on the assessment criteria. Also, disaggregation of the components contributing to the total score did not show any one component driving the total score. This may well be different for a specific country or program where there are political factors and competing programs.
Challenges/weaknesses
Building the teams was challenging, as both leaders and team members are in great demand; they are all very busy and typically do not attend Cochrane or other systematic review meetings. There was neither financial payment nor academic reward beyond this publication. We initially planned to hold teleconferences, but the logistics proved daunting, so although we did meet in person or electronically with the leaders, the completion of the worksheets was done asynchronously, with the understanding that if there were major disagreements we would set up a teleconference to resolve them; however, these were not needed. We had some difficulty getting agreement on the criteria and definitions from our stakeholder panel chairs. As mentioned above, some stakeholder panel members reported that the wide range of interventions and outcomes made ranking difficult. If the stakeholder panels had included different stakeholders, this could have changed the priority ranking. However, since our panels included diverse individuals and were based on consensus, we feel that the priority lists would have remained similar. Another limitation of our exercise is that we were mostly limited to Cochrane reviews, although the nutrition exercise included some non-Cochrane systematic reviews because the nutrition stakeholder panel chair identified these as interventions with important effects. The other topic areas used only Cochrane systematic reviews. Had additional reviews been included, the results of the exercise may have differed. However, for this exercise we aimed to conduct a pilot based primarily on Cochrane reviews.
Strategies to Obtain Designer Polymers Based on Cyanobacterial Extracellular Polymeric Substances (EPS)
Biopolymers derived from polysaccharides are a sustainable and environmentally friendly alternative to the synthetic counterparts available in the market. Due to their distinctive properties, the cyanobacterial extracellular polymeric substances (EPS), mainly composed of heteropolysaccharides, emerge as a valid alternative to address several biotechnological and biomedical challenges. Nevertheless, biotechnological/biomedical applications based on cyanobacterial EPS have only recently started to emerge. For the successful exploitation of cyanobacterial EPS, it is important to strategically design the polymers, either by genetic engineering of the producing strains or by chemical modification of the polymers. This requires a better understanding of the EPS biosynthetic pathways and their relationship with central metabolism, as well as to exploit the available polymer functionalization chemistries. Considering all this, we provide an overview of the characteristics and biological activities of cyanobacterial EPS, discuss the challenges and opportunities to improve the amount and/or characteristics of the polymers, and report the most relevant advances on the use of cyanobacterial EPS as scaffolds, coatings, and vehicles for drug delivery.
Introduction
Biopolymers are macromolecules produced by different organisms or derived from natural resources [1]. Owing to their biocompatibility, non-toxicity, flexibility, functionality, biodegradability, and possibility to be recycled by biological processes, they constitute a sustainable alternative to petrochemical-derived polymers [1][2][3]. Polysaccharides are a highly abundant and diverse group of biopolymers that can be found in all domains of life [4]. In fact, the most abundant biopolymers, such as cellulose and chitin, are polysaccharides.
Due to the limited structural information available for cyanobacterial EPS, the relationship between their structures and biological activities is far from being understood. However, the available data suggest that the negative charge and the presence of sulfate groups contribute significantly to the antiviral activity displayed by several polymers [29,30,[49][50][51],59]. These effects are likely due to inhibition of fusion of the enveloped virus with its target membrane, either by impairing the virus-cell attachment or by direct interaction of the negative charges of the polymer with positive charges on the virus surface [60,61]. The antiviral activity of the polymers seems to be mainly dependent on the number of negative charges and the molecular weight [60]. In the case of the sulphated polymer calcium spirulan, isolated from Arthrospira platensis, it was suggested that the presence of sulfate groups provides an additional contribution to the antiviral activity of these polymers by chelating calcium ions, which helps to retain the molecular conformation of the polymer [51].
The antimicrobial activity of cyanobacterial products is also well documented in the literature (reviewed in [28]). However, many of the available data were obtained using crude extracts [62,63], and thus, it is not always easy to uncouple the effects of the EPS from those resulting from the other molecules. Despite these constraints, it was demonstrated that the EPS produced by Synechocystis sp. R10 and Gloecapsa sp. Gacheva 2007/R-06/1 display antimicrobial activity against a broad spectrum of the most common food-borne pathogens [31]. Extracts of EPS released by the cyanobacterium Arthrospira platensis also showed antimicrobial activity against both Gram-positive and Gram-negative bacteria. Importantly, different EPS extracts showed different activities, indicating the presence of different components that differ in their solubility in the solvents employed [52].
A strong correlation between the sulfate content of cyanobacterial polymers and their antioxidative and anticoagulant activities was also found [26,32,33,53], and the immunomodulatory effects of specific cyanobacterial EPS have been demonstrated [54]. The presence of sulfate has also been associated with the antitumor activity displayed by some EPS [34,55], although further studies are required to unveil the exact contribution of the sulfate groups. The mechanism of selective cytotoxicity displayed by different EPS with antitumor properties is also being evaluated. Studies performed with EPS isolated from Aphanothece halophytica, Nostoc sphaeroides, Aphanizomenon flosaquae, and Synechocystis ∆sigF revealed that the antitumor effect of these polymers is due to the induction of apoptosis in the tumor cells [27,34,57,58].
The vast range of biological activities displayed by cyanobacterial EPS opens a new set of possibilities for their use. However, for this to be viable, it is necessary to complement these investigations with efforts aimed at optimizing polymer yield and tailoring polymer composition for specific applications.
Strategies to Optimize Cyanobacterial EPS Production and/or Polymer Characteristics
Due to their minimal nutritional requirements, cyanobacteria constitute a sustainable platform for polymer production. Moreover, depending on the environmental conditions (e.g., favorable and regular conditions), their photosynthetic metabolism allows large-scale cultivation outdoors, either in closed systems or open ponds, minimizing the costs of energy supply compared to the cultivation of e.g., heterotrophic bacteria [64]. Nevertheless, it is important to take into consideration that some cyanobacterial strains can produce toxins, and although these strains are not used for EPS production, it is essential to monitor possible contamination of cultures and/or polymers with these substances, particularly in open systems.
Despite the advantages of using cyanobacteria for EPS production, to achieve economic viability it is necessary to optimize the production process by (i) evaluating the best cultivation system and/or photobioreactor geometry, (ii) determining the most favorable growth conditions, including nutrients (carbon, macroelements, microelements), temperature, light, and gas exchange, (iii) establishing a zero-waste value chain by re-utilizing waste biomass, and (iv) optimizing downstream processing, including extraction and purification of the EPS. These parameters may vary significantly depending on the strain, as already well established for the effect of growth conditions on EPS production [12,13,15,16,65], and will not be discussed here. It is, however, important to emphasize that, depending on the strain, changes in the cultivation/growth conditions can affect both the amount and the composition of the EPS. Metabolic engineering approaches also provide an opportunity to optimize the amount of EPS produced and/or the polymers' characteristics in order to meet industrial demands [11,66]. However, the limited information available on the cyanobacterial EPS biosynthetic process has limited the use of this approach, although the information available in the literature can provide important clues guiding future actions.
Metabolic Engineering of EPS-Producing Strains
The connection between central metabolic pathways and EPS biosynthesis has been elucidated for several bacteria, opening the way for the successful optimization of EPS-producing strains such as the xanthan-producing Xanthomonas campestris [1,66]. More recently, the mechanisms of EPS production by cyanobacteria started to be unveiled, mainly using the model strain Synechocystis sp. PCC 6803 (hereafter Synechocystis) [67][68][69][70][71]. Nevertheless, more studies are necessary to fully understand this process in cyanobacteria.
Studies performed in several bacteria point out that, regardless of the variety of surface polysaccharides produced, their biosynthetic pathways are relatively conserved [72]. Generally, the EPS biosynthetic pathway starts with the activation of monosaccharides and their conversion into sugar nucleotides; then, the monosaccharides are sequentially transferred from the sugar nucleotide donors to carrier molecules and assembled as repeating units. Finally, the EPS are exported to the exterior of the cell [1,72]. These steps require the participation of three groups of proteins, namely (1) enzymes involved in the biosynthesis of the sugar nucleotides, (2) glycosyltransferases to transfer the sugars to specific acceptors, and (3) proteins involved in EPS assembly, polymerization, and export [1,73,74] (Figure 1).
Figure 1. Sequence and compartmentalization of the events leading to the production of bacterial extracellular polymeric substances (EPS). EPS assembly, polymerization, and export usually follow one of three main mechanisms: the Wzy-, ABC transporter- or Synthase-dependent pathways. Adapted from [69].
All steps of the biosynthetic process offer opportunities for optimizing the amount of EPS produced and/or its quality through genetic manipulation [11]. Here, we discuss the opportunities to improve cyanobacterial EPS production/characteristics by targeting carbon availability, synthesis of sugar nucleotide precursors, assembly of the repeating unit, and polymerization and export of the polymer.
Carbon Availability
The production of polysaccharides is a carbon-intensive and energy-demanding process that competes with the cell's growth for available carbon resources. Thus, one of the strategies to improve EPS production consists of increasing the carbon pool of the cells, either by boosting the photosynthetic efficiency and/or the inorganic carbon intake. Previously, it was shown that the overexpression of the endogenous Synechocystis bicarbonate transporter BicA led to an increase in EPS production [75], and that high CO2 pressure boosts the generation of these polymers in Synechococcus sp. PCC 8806 [76]. Carbon availability can also be increased by eliminating carbon sinks and competing pathways, such as the production of glycogen, sucrose, and compatible solutes (e.g., glucosylglycerol). The branching points between Synechocystis' primary metabolism and the sugar nucleotide, glycogen, sucrose, and glucosylglycerol pathways are depicted in Figure 2. Glycogen is a glucose storage polymer that, in cyanobacteria, can accumulate to levels of more than 50% of the cellular dry weight, depending on the growth conditions [77]. A Synechocystis mutant (∆glgC) unable to produce glycogen possesses a higher energy charge and produces more organic acids [78]. The overexpression of the glycogen debranching enzyme GlgP also results in a massive decline of the glycogen content [79], compensating the carbon drain in an ethanol-producing Synechocystis mutant [79]. Although these studies unequivocally demonstrate that glycogen depletion increases the availability of carbon, it remains to be shown if this carbon surplus can be efficiently redirected towards EPS production. Regarding sucrose metabolism, the overexpression of Ugp (responsible for converting uridine triphosphate (UTP) and glucose-1-phosphate into uridine diphosphate (UDP)-glucose, which serves as a substrate for sucrose and EPS synthesis) inhibited sucrose accumulation in Synechocystis under salt stress [80], raising the hypothesis that this effect may be due to a shift of carbon flux towards the synthesis of the exopolysaccharides [81]. A relationship between glucosylglycerol metabolism and EPS synthesis in Synechocystis was also found. In this case, a mutant in a glucosylhydrolase (GghA) released higher amounts of polysaccharides (RPS) to the medium, suggesting a function of glucosylglycerol degradation via GghA in the synthesis and/or attachment of EPS to Synechocystis cells [82].
Synthesis of Sugar Nucleotide Precursors
A common bottleneck in microbial EPS production is insufficient levels of sugar nucleotides [66,74]. This aspect is particularly relevant in Gram-negative bacteria, as these precursors are also required for the production of other surface polysaccharides, including the O-antigen of the lipopolysaccharides (LPS) and the S-layer glycans [85,86]. Thus, another strategy to increase cyanobacterial EPS production consists of increasing the levels of sugar nucleotide precursors. However, the success of this approach is still controversial [74], since it is necessary to balance the carbon supply for sugar nucleotide synthesis with glycolysis [66,74]. Higher levels of sugar nucleotides can be achieved by overexpressing enzymes such as Ugp, involved in the branching point between the cell's primary metabolism and the sugar nucleotide pathway [10,66,74,87], as previously suggested (Figure 2) [80]. It is also necessary to consider the energetic requirements of sugar nucleotide synthesis. Availability of high-energy compounds such as adenosine triphosphate (ATP) and UTP may limit sugar nucleotide production, and therefore strategies to increase the levels of cellular energy may also be advantageous for EPS production [66]. Finally, increasing or decreasing the synthesis of a certain type of nucleotide sugar precursor may have an impact on the monosaccharidic composition of the EPS [74]. Targeted modifications to obtain improved EPS for different applications include increasing the content of uronic acids (e.g., by targeting UDP-glucose dehydrogenase) and amino sugars (e.g., through modification of UDP-N-acetylglucosamine pyrophosphorylase). Enrichment in rare sugars such as rhamnose and fucose can also be advantageous to confer unique physical and bioactive properties to the polymers [8]. Recently, Synechocystis mutants in the tyrosine kinase Sll0923 (Wzc homologue) and/or the low molecular weight tyrosine phosphatase Slr0328 (Wzb homologue) were shown to produce EPS enriched in rhamnose [70]. Similar results had been obtained for a mutant in the ATP-binding component (Sll0982; KpsT homologue) of an EPS-related ABC transporter [68], raising the hypothesis that rhamnose metabolism is closely associated with the last steps of EPS production. This is further supported by the presence of slr0985, encoding a dTDP-4-dehydrorhamnose 3,5-epimerase, in close proximity to wzc and kpsT [70].
Assembly of the Repeating Unit
Genetic engineering of glycosyltransferases offers a great opportunity for the optimization of the polymers' composition and structure [74]. Overexpression of a native glycosyltransferase may increase the incorporation of the substrate sugar, provided that sufficient amounts of the sugar nucleotide are available. Alternatively, new monosaccharides may be introduced into the polymer by heterologously expressing the corresponding glycosyltransferase genes [66]. New insights into the mechanism and structure of these enzymes will enable approaches to broaden the substrate specificity and/or to swap substrate and acceptor domains from different glycosyltransferases [66,88]. However, further knowledge on this class of enzymes is necessary, as most of the cyanobacterial glycosyltransferases identified have not been characterized biochemically, making it difficult to understand their exact role in the synthesis of EPS [15]. The enzymes responsible for methylation, acetylation and pyruvylation of the EPS can also be targeted to modulate the rheological behavior of the polymers [66]. Interestingly, a Synechocystis mutant in a putative methyltransferase (Slr1610) displayed differences in both the molecular weight and monosaccharidic composition of its EPS compared to the wildtype [68]. Despite the significant contribution of the sulfate groups for the biological activities of the polymers, genetic engineering strategies aiming to tailor the sulfate levels in cyanobacterial EPS remain unexplored. This could be achieved by targeting the sulfotransferases responsible for the transfer of sulfate to the polymers.
Polymerization and Export of the Polymer
A clear understanding of the last steps of EPS production and the structure/function of the proteins that participate in this process is essential to enable the rational design of engineering strategies (e.g., enzyme engineering, random mutagenesis and/or site-directed evolution) aiming at improving EPS production and/or tailoring the polymer length [10,88,89]. This last aspect is important to determine the rheological properties of the polymers as well as their potential for the production of biomaterials [66]. Therefore, targeted modification of the molecular weight by engineering the proteins involved in the polymerization, export, or degradation of the polymer (e.g., synthases, polymerases, glucosidases) represents a possibility to obtain new polymer variants [88], as successfully shown for xanthan gum and bacterial alginate [90,91].
Although the knowledge on the last steps of EPS production in cyanobacteria is limited, these mechanisms seem to be relatively conserved throughout bacteria, with the polymerization and export of the polymers usually following one of three main mechanisms: the Wzy-, ABC transporter-, or synthase-dependent pathways [88]. However, a phylum-wide analysis of cyanobacterial genomes revealed that most strains harbor genes encoding proteins related to the three pathways but often not the complete set defining a single pathway, implying a more complex scenario than that observed for other bacteria [69]. This complexity raises the hypothesis of functional redundancy, either owing to the existence of multiple copies for some of the EPS-related genes/proteins and/or a crosstalk between the components of the different assembly and export pathways [69,70]. In agreement, mutational analyses showed that proteins related to both the Wzy- and the ABC-dependent pathways operate in Synechocystis' EPS production, although their exact roles have only recently started to be elucidated [67,68,70]. Further knowledge is required to identify the bottlenecks in polymer export and pinpoint the best candidates for chain length regulation in cyanobacteria. Despite that, it was recently shown that the truncation of the C-terminal region of the Synechocystis' polysaccharide copolymerase Wzc leads to an increase of the EPS attached to the cell [70] and that the deletion of a monooxygenase involved in polysaccharide degradation and recycling results in increased levels of RPS [92]. More studies are necessary to determine if these or similar modifications affect the length of the polymers obtained.
Isolation, Purification, and Functionalization of Cyanobacterial EPS
The isolation and purification of the polymers must be cost-effective, scalable, and easy to perform. It is also important to take into consideration that the methods selected influence the polymers' yield and quality [15] and, thus, it may be necessary to adapt the protocols to the characteristics of the polymers and their final application [93]. One of the main aspects to consider is whether the EPS are attached to the cells or released to the culture medium (RPS). In the case of the EPS attached to the cells, detachment can be achieved using formaldehyde, glutaraldehyde, ethylenediaminetetraacetic acid (EDTA), sodium hydroxide, sonication, heating, cell washing with water, complexation, or ionic resins [15,16]. To select one of these methods, it is important to evaluate not only the yield, but also the levels of contamination of the polysaccharides with other cellular components. In contrast, RPS are much easier to recover, being usually separated from cells by filtration and/or centrifugation. Once isolated, polymers are usually precipitated using ice-cold absolute alcohols such as methanol, ethanol, or isopropanol and recovered [16,93]. The polarity of the alcohol and the low temperatures used have an impact on the yield of the polysaccharides and on the co-precipitation of impurities [16]. Despite the efficiency of selective alcohol precipitation, the costs and requirement of large amounts of precipitating agents led to the search for alternative techniques more suitable at the industrial scale, such as tangential ultrafiltration [16,94]. However, this methodology may need to be improved to minimize the problems of high viscosity of polymer solutions resulting in membrane clogging [16]. Tangential ultrafiltration can also be used to obtain a concentrated polymer solution before precipitation or spray-drying of the polymers, thus increasing the efficiency of these processes.
After isolation of the EPS, contaminants such as inorganic salts, heavy metals, proteins, polyphenols, endotoxins, nucleic acids, or cell debris may still be present in the polymer solution. However, it is necessary to have polysaccharides with high purity levels to accurately determine their structure and composition and to obtain reproducible results for therapeutic applications [95]. Inorganic salts, monosaccharides, oligosaccharides and low molecular weight non-polar substances can be removed by dialysis. The choice of device, the molecular weight cut-off, and the duration of the dialysis are very important to determine the success of this method. However, at an industrial scale, dialysis may not be a viable option. An alternative way to remove inorganic salts is through ion exchange resins, normally in the form of beads [96]. Removal of peptides and proteins can be achieved using different methods, including protease (e.g., pronase) treatment or the Sevag method (usually less efficient) [96,97]. Trichlorotrifluoroethane and trichloroacetic acid can also be used to remove proteins from the polysaccharide's solution. However, it is necessary to consider that the former is highly volatile and, thus, has to be employed at 4 °C, limiting its use, while trichloroacetic acid is widely used but its acidity can damage the polymer structure [96,97]. The levels of polyphenol contaminants are usually reduced with charcoal washes and centrifugations, the hydrogen peroxide method, or functionalized resins with imidazole and pyridine [95,98]. The selection of the best purification methods depends on the characteristics of the polymers, the methods used for their isolation, and the envisaged application.
The presence of endotoxins is one of the major issues to be addressed before any biomaterial is considered safe to be used. Endotoxins are mainly due to the presence of LPS, with lipid A being responsible for most of the biological activity of these contaminants [99]. Endotoxins can significantly affect the biological effects of the polymers by eliciting a wide range of cellular responses that compromise cell viability [100,101]. Therefore, limits are imposed by regulatory entities ([102], pp. 171-175, 520-523). As an example, the Food and Drug Administration (FDA) adopted the US Pharmacopoeia endotoxin reference standard, limiting the amount of endotoxins in eluates from medical devices to 0.5 Endotoxin Units (EU)/mL [103]. Endotoxins are highly heat-stable and not easily destroyed by standard autoclave programs [104]. However, they can be removed by other techniques including ultrafiltration, two-phase extraction, and adsorption [99], although the efficiency of these methods depends on the characteristics of the polymer.
Depending on the application, it may be necessary to isolate fractions of the polymers with specific molecular weights. Fractionation is usually achieved by ultracentrifugation, with the added advantage of simultaneous elimination of contaminants [105]. Filtration and ultrafiltration are also popular alternatives; however, depending on the material of the filter membrane, the polysaccharides can be retained in the filter, decreasing the yield of the purification [106]. Other methods include affinity chromatography, gel chromatography, anion exchange chromatography, cellulose column chromatography, quaternary ammonium salt precipitation, graded precipitation methods, and preparative zone electrophoresis (reviewed in [96]).
The development of polysaccharide-based biomaterials often requires the chemical functionalization of the polymers. In this context, the characteristics of cyanobacterial EPS offer a vast range of opportunities for targeted modifications (Figure 3). Successful examples of these functionalization reactions have already been described for other bacterial EPS [107][108][109][110][111][112][113][114]. The hydroxyl groups present in hexoses, pentoses, deoxyhexoses, uronic acids, and amino sugars can act as nucleophiles in base-catalyzed esterification reactions in the presence of anhydrides, esters, or carboxylic acids (Figure 3A-C). This strategy has been successfully used to fabricate photocrosslinkable hydrogels based on dextran and hyaluronic acid [108,110,114]. Another approach consists of the oxidation of diols in the presence of sodium periodate to generate reactive aldehydes, which can further react with primary amines in reductive amination reactions (Figure 3D) to produce hydrogels [107,112]. Hydroxyl groups can also undergo free radical polymerization reactions to generate graft copolymers for drug delivery (Figure 3E), as previously demonstrated for xanthan gum [109]. The carboxylic groups present in uronic acid residues allow the polymers' functionalization through esterification or carbodiimide reactions, with the latter being particularly interesting for bioconjugation (Figure 3F) [111,113]. On the other hand, free amino groups from glucosamine residues can react with anhydrides and carboxylic acids to form amides (Figure 3G,H), or with aldehydes to form Schiff bases, which can be further reduced to secondary amines. Overall, these chemical modifications are valuable strategies to obtain designer polymers with improved properties suitable for the development of novel biomaterials. For their use in biomedical applications, the polymers and/or derived biomaterials have to be biocompatible, i.e., be able to "perform with an appropriate host response in a specific application" [115]. Biocompatibility is usually evaluated in vitro by assessing the effects that biopolymers or biomaterials have on living cells [116]. Several guidelines are described in international standard protocols, with the material's toxicity (defined as cytotoxicity) being the most common and widely used parameter evaluated (ISO 10993-5) [117]. Depending on the application, biotolerability, i.e., "the ability to reside in the body for long periods of time with only low degrees of inflammatory reaction", is an important issue to consider. This property is particularly important for non-degrading or slow-degrading implant materials [115].
Other important biosafety tests include the evaluation of the mutagenic and carcinogenic potential [118,119].
Development and Possible Applications of Cyanobacterial EPS-Based Biomaterials
Over the past few years, the development of biomaterials for therapeutic applications has become a rapidly expanding multidisciplinary field of research, with an increasing interest in uncovering novel polysaccharide-based scaffolds, coatings, and drug carriers [120]. Despite the potential of the cyanobacterial EPS and the vast range of opportunities to further improve the characteristics of the polymers by genetic engineering and/or chemical modification, the number of studies reporting their use as biomaterials is still very limited. Nevertheless, the available data represent an important step to validate the potential of cyanobacterial EPS.
The RPS produced by the cyanobacterium Trichormus variabilis VRUC 168 were combined with diacrylated polyethylene glycol to produce photopolymerizable hybrid hydrogels [35]. These gels were stable over time and resistant to dehydration and spontaneous hydrolysis, being successfully used as matrices for the active form of the enzyme thiosulfate:cyanide sulfur transferase, as well as for 3D culture system of human mesenchymal stem cells (hMSCs). In another study, the RPS produced by Nostoc commune were combined with glycerol to prepare biopolymeric films suitable for the development of new materials, including coatings and membranes [38]. Importantly, the simple and effective methodology developed allows control of the films' thickness and mechanical properties, thus expanding the repertoire of applications in the food and biomedical industries. The polymer produced by the strong RPS producer Cyanothece sp. CCY 0110 [14] was also shown to be a promising vehicle for topical administration of therapeutic macromolecules. This polymer was able to spontaneously assemble with functional proteins into a new phase with gel-like behavior, and the proteins were released progressively and structurally intact near physiological conditions, primarily through the swelling of the polymer-protein matrix. The release kinetics could be modulated by the addition of divalent cations, such as calcium [37]. The same polymer combined with arabic gum was also used to generate microparticles capable of encapsulating vitamin B12 [36]. More recently, the RPS isolated from this Cyanothece strain was used to produce an anti-adhesive coating, obtained by spin coating (for details see [39]). This coating efficiently prevents the adhesion of relevant etiological agents, even in the presence of plasma proteins, being an important step towards the establishment of a new technological platform capable of preventing medical device-associated infections [39].
Conclusions and Future Perspectives
Owing to their characteristics and biological activities, the EPS produced by cyanobacteria are a promising platform for biotechnological and biomedical applications, including the development of novel biomaterials for therapeutic applications. However, their successful exploitation largely depends on combined efforts to optimize the amount of EPS produced and tailor their characteristics. The recent advances in the knowledge of cyanobacterial EPS biosynthetic pathways pave the way for the generation of genetically modified strains. However, there are still challenges to address, including (i) a better understanding of the relationship between central metabolism and the synthesis of sugar nucleotides, (ii) the identification and characterization of other key components of the EPS production machinery, and (iii) elucidation of the regulatory networks of the EPS production process. Further studies, taking into account high throughput data obtained from systems biology approaches and structural information of both proteins and polymers, will be crucial to address these issues. Moving beyond cellular processes, the chemical functionalization of the polymers can also significantly increase the repertoire of cyanobacterial EPS suitable for targeted applications. The implementation of this strategy is currently limited by the lack of knowledge on the structure of cyanobacterial polymers. However, the advent of new technologies and approaches will help to overcome this bottleneck. The results obtained in the (yet limited number of) studies reporting the use of cyanobacterial EPS-based biotechnology validate their potential, encouraging future endeavors. Funding: This work was financed by FEDER-Fundo Europeu de Desenvolvimento Regional funds through the COMPETE 2020 - Operacional Programme for Competitiveness and Internationalisation (POCI), Portugal 2020, and by Portuguese funds through FCT-Fundação para a Ciência e a Tecnologia/Ministério da Ciência, Tecnologia e Ensino Superior in the framework of the project POCI-01-0145-FEDER-028779, contract DL57/2016/CP1327/CT0007 and fellowship SFRH/BD/119920/2016.
Conflicts of Interest:
The authors declare no conflict of interest.
Satisfactory immediate spontaneous correction may not mean satisfactory final results for moderate TL/L curves after selective thoracic fusion in AIS patients
Background Few studies have focused on the chronic spontaneous behavior of the unfused TL/L curve during follow-up. The purpose of the present study was to explore the behavior of the unfused TL/L curve during a long-term follow-up to identify the risk factors for correction loss. Methods Sixty-four age-matched female AIS patients undergoing selective thoracic fusion were enrolled. Patients were divided into 2 groups according to whether there was correction loss. Risk factors for correction loss of the unfused TL/L curves were analyzed. The relationship and difference between the immediate postoperative thoracic and TL/L Cobb angles were explored. Results The TL/L Cobb angle was 28.17° before surgery, 8.60° after surgery, and 10.74° at the final follow-up, with a correction loss of 2.14°. Each subgroup contained 32 cases. A smaller postoperative TL/L Cobb angle was the only risk factor that was independently associated with TL/L correction loss. In the LOSS group, there was a significant difference and no correlation between the immediate postoperative TL/L and the thoracic Cobb angle. In the NO-LOSS group, there was a moderate correlation and no difference between them. Conclusion A smaller immediate postoperative TL/L Cobb angle may have been associated with TL/L correction loss during the long-term follow-up. Thus, good immediate postoperative spontaneous correction may not mean a satisfactory outcome at the final follow-up after STF. Mismatch between thoracic and TL/L Cobb angles immediately after surgery may also be related to correction loss of the unfused TL/L curves. Close attention should be paid in case of deterioration. Supplementary Information The online version contains supplementary material available at 10.1186/s12891-023-06591-8.
Background
Adolescent idiopathic scoliosis (AIS) is a three-dimensional (3D) deformity of the spine that predominantly affects individuals aged 10 to 17. Fusion level selection in the surgical treatment of AIS patients with structural major thoracic (MT) and secondary thoracolumbar or lumbar (TL/L) curves remains a great challenge [1][2][3][4].
In nonselective fusion, instrumentation of both curves sacrifices the mobile segments of the spine. But in selective thoracic fusion (STF), progression of the uninstrumented lumbar curve or coronal imbalance may occur [5]. STF dates back to the era of Harrington instrumentation with the purpose of sparing lumbar motion [4]. At present, pedicle screws are predominantly used because of their powerful corrective force [6].
Various studies have focused on the prognosis or prediction of the unfused lumbar curve after STF [2,5,7,8] and the change from preoperation to the final follow-up. However, few studies have focused on the chronic spontaneous behavior of the unfused TL/L curve during follow-up (the change from immediate postoperation to the final follow-up) or the risk factors for its correction loss. In selective TL/L fusion, our previous study showed that higher flexibility and better immediate correction were risk factors for correction loss of the unfused thoracic curve during the follow-up [9]. Therefore, the purposes of the present study were to explore the behavior of the unfused TL/L curve after STF during the two-year follow-up and to identify the risk factors for its correction loss.
Our specific goals were to (1) evaluate the radiographic outcome of STF, (2) compare the difference between two age- and sex-matched subgroups, (3) identify the risk factors for correction loss of the unfused lumbar curve, and (4) explore the influence of immediate postoperative mismatch between thoracic and TL/L curves.
Patient selection
After the institutional review board (IRB) approved the study, patients with Lenke 1 AIS were identified retrospectively. The Lenke classification [3] criteria were utilized and confirmed with another independent surgeon. It was considered selective fusion for AIS patients with MT and secondary TL/L curves if the TL/L curves were unfused. The inclusion criteria were as follows: patients diagnosed with Lenke 1 AIS with a minimal follow-up of 2 years; underwent posterior STF. The exclusion criteria were as follows: incomplete data or poor radiographic images that do not allow measurement; age and sex were not matched between subgroups according to the Subgroup Analysis section.
Surgical technique
During preoperative planning, the last substantially touched vertebra (LSTV) [10] was selected as the lower instrumented vertebra (LIV). The patient was placed prone on a radiolucent spinal frame after general anesthesia. After surgical exposure, the pedicle screws were placed and the posterior elements were released if necessary. Then the rods were placed. The curve was corrected with direct apical vertebra rotation, rod rotation and compression and/or distraction. Then, the bone graft was applied. Intraoperative neurophysiological monitoring was used.
Radiographic measurements
Radiographic measurements were performed on the Surgimap (Nemaris) by 2 independent staff members on standing whole-spine posteroanterior and lateral radiographs taken before surgery, 1 month after surgery, and at the most recent follow-up. Postoperative X-rays were taken 2 weeks after surgery, instead of at the first erect instance, to rule out the influence of postoperative pain and to allow the patients to recover their physiological balance [11]. Before surgery, supine side-bending films were also taken. Coronal parameters included MT and its convex side-bending Cobb angle, TL/L and its convex side-bending Cobb angle, lower instrumented vertebra tilt (LIV Tilt), global coronal balance and apical vertebral translation (AVT) as previously described [9]. Sagittal alignments included global sagittal balance or sagittal vertical axis, thoracic kyphosis, lumbar lordosis, and thoracolumbar junction. The correction rate was defined as (preoperative Cobb angle - immediate postoperative or final Cobb angle)/preoperative Cobb angle. The correction loss was defined as the final Cobb angle - immediate postoperative Cobb angle. The Cincinnati correction index was calculated as the immediate postoperative correction rate/preoperative flexibility [6].
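For readers who prefer a computational statement of these definitions, the sketch below expresses them in Python. It is purely illustrative: the function names are ours, preoperative flexibility is computed with the usual convention ((preoperative Cobb angle − convex side-bending Cobb angle)/preoperative Cobb angle), which the text implies but does not spell out, and the example values reuse the cohort means reported under Surgical Outcomes rather than any individual patient's data.

```python
# Illustrative Python restatement of the correction metrics defined above.
# Function names and the flexibility convention are assumptions for illustration.

def correction_rate(pre_op: float, later: float) -> float:
    """(preoperative Cobb angle - later Cobb angle) / preoperative Cobb angle."""
    return (pre_op - later) / pre_op

def correction_loss(final: float, post_op: float) -> float:
    """Final Cobb angle minus immediate postoperative Cobb angle (positive = loss)."""
    return final - post_op

def cincinnati_correction_index(pre_op: float, post_op: float, side_bending: float) -> float:
    """Immediate postoperative correction rate divided by preoperative flexibility."""
    flexibility = (pre_op - side_bending) / pre_op   # assumed flexibility convention
    return correction_rate(pre_op, post_op) / flexibility

# Example using the TL/L cohort means reported below (degrees):
pre, post, final = 28.17, 8.60, 10.74
print(f"immediate correction rate: {correction_rate(pre, post):.1%}")          # ~69.5%
print(f"correction loss at follow-up: {correction_loss(final, post):.2f} deg")  # 2.14
```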
Subgroup Analysis
According to TL/L correction loss, all cases were divided into 2 age- and sex-matched subgroups. If the TL/L Cobb angle improved or was maintained during the follow-up with a negative or no correction loss, the case was allocated to the NO-LOSS group. If the TL/L Cobb angle deteriorated with a positive correction loss, the case belonged to the LOSS group. Comparison and correlation analyses were performed to explore the difference between these two subgroups and the risk factors for correction loss of the unfused TL/L curve.
Statistical analysis
We presented summary statistics by means and standard deviations (SDs) for continuous variables and frequencies for categorical variables. Paired or independent t tests were used for continuous variables obeying a normal distribution. Nonparametric tests were utilized if the data did not obey a normal distribution. A multivariate binary logistic regression model with forward stepwise elimination (Conditional) was created to evaluate the adjusted association of each potential risk factor predicting correction loss of the unfused TL/L curves. We considered variables with a univariate significance level of less than 0.05 for inclusion in the multivariate analysis. For regression models, the adjusted odds ratio and their subsequent 95% confidence interval (CI) were reported. Pearson correlation was employed to examine the relationship between immediate postoperative MT and TL/L Cobb angles. The strength of the correlation was defined by the r value: negligible correlation (r < 0.3), weak correlation (0.3 < r < 0.5), moderate correlation (0.5 < r < 0.7), strong correlation (0.7 < r < 0.9) and very strong correlation (r > 0.9). We performed all analyses using SPSS (version 23.0, IBM Corp., USA). A p value < 0.05 was considered significant.
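As an informal illustration of this workflow, the sketch below reproduces the correlation-strength bands and the shape of the logistic model in Python (scipy/statsmodels) rather than SPSS. The synthetic arrays, variable names and the two predictors shown are placeholders, not the study data; the sketch only shows the form of the analysis.

```python
# Hedged sketch of the statistical workflow described above, using Python
# libraries instead of SPSS. All data below are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr
import statsmodels.api as sm

def correlation_strength(r: float) -> str:
    """Map |r| to the qualitative bands used in the text."""
    r = abs(r)
    if r < 0.3:
        return "negligible"
    if r < 0.5:
        return "weak"
    if r < 0.7:
        return "moderate"
    if r < 0.9:
        return "strong"
    return "very strong"

rng = np.random.default_rng(0)
mt_post = rng.normal(12, 4, 64)                 # hypothetical postoperative MT Cobb angles
tl_post = 0.5 * mt_post + rng.normal(0, 3, 64)  # hypothetical postoperative TL/L Cobb angles
r, p = pearsonr(mt_post, tl_post)
print(f"r = {r:.2f} ({correlation_strength(r)}), p = {p:.3f}")

# Binary logistic regression of correction loss (1 = LOSS) on candidate predictors.
loss = (rng.random(64) < 0.5).astype(int)
X = sm.add_constant(np.column_stack([mt_post, tl_post]))
fit = sm.Logit(loss, X).fit(disp=False)
print("adjusted odds ratios:", np.exp(fit.params[1:]))
```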
General Information
We identified 73 cases of AIS in our database, and 9 patients were excluded because they were unmatched for age and sex. Finally, the remaining 64 patients were age-matched females with an average age of 14.3 years (range, 11-19 years). The follow-up duration averaged 36.9 months (range, 24-61 months). (Table 1)
Surgical Outcomes
General coronal and sagittal measurements are shown in Table 2. Only 1 patient in the LOSS group underwent revision surgery to fuse the progressive TL/L curve. The TL/L Cobb angle was 28.17 ± 5.99° before surgery and 8.60 ± 6.28° immediately after surgery (p < 0.001). At the final follow-up, it had deteriorated significantly to 10.74 ± 5.34° (p: 0.045), with a correction loss of 2.14 ± 6.71°.
Risk factors for correction loss
TL/L curves did not deteriorate after spontaneous correction in 32 cases in the NO-LOSS group, while they deteriorated in 32 cases in the LOSS group. The correction losses were −3.43 ± 3.91° (range −14°-0°) and 7.71 ± 3.42° (range 2°-14°), respectively. Comparisons were made using the univariate analysis (Table 3). General conditions (including age, Risser signs and follow-up duration) and preoperative Cobb angles, especially the TL/L Cobb angle, its convex side-bending Cobb angle and flexibility, were not significantly different. After surgery, patients in the LOSS group had a smaller immediate postoperative MT Cobb angle (p: 0.044), smaller immediate postoperative TL/L Cobb angle (p: 0.008), higher TL/L immediate spontaneous correction rate (p: 0.014), and higher immediate postoperative coronal balance (p: 0.027) than those in the NO-LOSS group. However, after a long-term follow-up, the patients in the LOSS group had a larger TL/L Cobb angle (p < 0.001), but the MT Cobb angle was not significantly different (p: 0.155). In the multivariate analysis, a smaller TL/L postoperative Cobb angle was the only risk factor that was independently associated with TL/L correction loss (odds ratio = 1.417; 95% CI: 1.160-1.731; p < 0.001) (shown in Table 4). Typical cases are shown in Figs. 1 and 2.
Mismatch between MT and TL/L Curves
Furthermore, we explored the relationship and difference between the immediate postoperative TL/L and MT Cobb angles. In the total group, the TL/L Cobb angle had a weak correlation with the MT Cobb angle (p: 0.023) and was not significantly different from the MT Cobb angle (p: 0.230). In the LOSS group, the TL/L Cobb angle had no correlation with the MT Cobb angle (p = 0.749) and was significantly different from the MT Cobb angle (p = 0.011). However, in the NO-LOSS group, the TL/L Cobb angle had a moderate correlation with the MT Cobb angle (p = 0.008) and was not significantly different from the MT Cobb angle (p = 0.420). (Table 5)
Mechanism and indications
The theoretical basis of STF is that following correction of the MT curve, forces are transmitted to the lumbar spine, inducing spontaneous lumbar correction [12,13]. A thoracic:lumbar curve ratio of more than 1.2 is generally considered an indication for STF [3,14]. Lumbar curve magnitude/flexibility and coronal balance are also taken into consideration [15].
Clinical outcomes
STF for AIS has achieved satisfactory outcomes with pedicle screw constructs. Gebrelul et al. [5] reported 102 AIS patients undergoing STF using all-screw constructs, and the average rate of spontaneous correction of the TL/L curve was 43% at the 2-year follow-up. Chen et al. [16] showed a spontaneous correction rate of more than 70% of the TL/L curves for Lenke 1 and 2 AIS patients. Similar results were reported over a wide range of studies [7,8,12,14,17-19]. In the present study, after STF, the TL/L curve was corrected from 28.17 ± 5.99° preoperatively to 8.60 ± 6.28° postoperatively and remained at 10.74 ± 5.34° at the final follow-up, which was comparable to previous studies.
Characteristics of lumbar compensation
After STF, progression of the residual TL/L curve may not only exacerbate coronal imbalance or shoulder imbalance [20] but may also be associated with diminished patient self-image [21]. Therefore, the behavior of the unfused TL/L curve has gained focus over the years. Bachmann et al. [1] from the USA found that selective fusion had a limited ability to change the lower lumbar vertebral segments, including the lumbosacral takeoff angle (the angle between the central sacral vertical line and a best-fit line through the center of S1, L5, and L4). They explained that the limited correction of the lower lumbar segments made worsening of coronal balance more likely with selective fusion. Therefore, spontaneous correction occurred mainly at the upper part of the unfused lumbar curve. Similar results were noted by researchers in China. Chen et al. [16] found that when choosing L1 as the LIV, the distal unfused lumbar segments' compensation tended to decrease from the proximal end to the distal end, suggesting that the L1/2 and L2/3 discs significantly contributed to this compensation. These two studies focused on the difference in compensation between the upper and lower lumbar segments, but neither identified the risk factors for correction loss during the long-term follow-up nor explored the relationship between thoracic and TL/L curve magnitude.
Risk factors for lumbar curve progression
The primary focus was the prediction or prognosis of the unfused lumbar spine. A wide range of risk factors or predictors have been recognized. In 2011, the preoperative lumbar Cobb angle and lumbosacral takeoff angle were reported to be predictors of the 2-year postoperative lumbar Cobb angle, and a predictive formula was calculated [22]. Then, the formula was tested in 2019 [1]. Koller et al. [23] found that the preoperative TL/L Cobb angle and preoperative convex-bending TL/L Cobb angle were significant predictors for the final TL/L Cobb angle. Mason et al. [24] also developed a formula including the preoperative TL/L Cobb angle, preoperative MT Cobb angle and its convex-bending Cobb angle. Most of the identified factors were preoperative, and most previous literature focused on the change from preoperation to the final follow-up. Few studies have focused on correction loss of the unfused TL/L curve during the long term, from immediate postoperation to final follow-up. In the present study, we recognized four risk factors for correction loss of the unfused TL/L spine in the univariate analysis, including a smaller postoperative MT Cobb angle, a smaller postoperative TL/L Cobb angle, a higher postoperative spontaneous correction rate of the lumbar curve and a larger postoperative coronal balance. Furthermore, in the multivariate analysis, a smaller postoperative TL/L Cobb angle was identified as an independent risk factor for lumbar correction loss during follow-up (p < 0.001, odds ratio: 1.417, 95% confidence interval: 1.160-1.731). Therefore, a smaller immediate postoperative TL/L curve may be associated with correction loss of the unfused TL/L curve. The potential explanation for the above result was similar to our report in selective TL/L fusion [9]: the preoperative TL/L Cobb angles were similar between the LOSS and NO-LOSS groups (p = 0.501), but the postoperative TL/L Cobb angle was significantly smaller in the LOSS group than in the NO-LOSS group (p = 0.008). Thus, a higher spontaneous correction rate in the LOSS group caused a larger change in curve magnitude. This may increase the tension of the concave soft tissues, which contained more fibrosis and fatty involution [25], and thus exacerbate the tendency toward curve progression during the follow-up. Additionally, the flexible unfused TL/L segments were susceptible to this tension. On the other hand, in the NO-LOSS group, a smaller spontaneous correction rate may have led to relatively low soft tissue tension on the concave side of the unfused TL/L curve, so there was a lower risk of progression. Another reason may be that a smaller postoperative TL/L Cobb angle contributes to the mismatch between the MT and TL/L Cobb angle, which may be related to correction loss, as we discussed below. These explanations were our speculation and require further verification.
Mismatch between MT and TL/L Curves
The correction of the TL/L curve was said to echo the correction of the thoracic curve after STF. Although some authors have reported that there is no relationship between the correction of the thoracic and TL/L curves after STF with the Harrington system and sublaminar wiring [26], many studies have found an apparent relationship between the MT curve and TL/L curve using more modern instrumentation. Mizusaki et al. [27] retrospectively concluded that overcorrection of the MT curve might result in less satisfactory results after STF in lumbar modifier B. This means that overcorrection of the MT curve may exacerbate the mismatch between the MT curve and TL/L curve. Ishikawa et al. [28] found that the final Cobb angle of the TL/L curve was significantly correlated with the immediate postoperative MT Cobb angle, which meant that the MT and TL/L Cobb angles matched each other. Jansen et al. [29] found a significant correlation between the relative corrections of the MT curve and the lumbar curve after STF. Similarities were noted in the present study. Comparison and correlation analyses between the postoperative MT and TL/L Cobb angle were performed. In the total group, the postoperative TL/L Cobb angle was weakly correlated with the postoperative MT Cobb angle (r: 0.350, p: 0.023), and there was no significant difference between them (p: 0.230). Going further in the subgroup analysis, in the LOSS group, the postoperative TL/L Cobb angle was not correlated with the postoperative MT Cobb angle (r: 0.074, p: 0.749), and a significant difference was found between them (p: 0.011). On the other hand, in the NO-LOSS group, the postoperative TL/L Cobb angle was moderately correlated with the postoperative MT Cobb angle (r: 0.561, p: 0.008), and no significant difference was noted between them (p: 0.420). Therefore, if the postoperative MT and TL/L Cobb angle were matched, as in the NO-LOSS group, the risk of TL/L correction loss was relatively low. If there is a mismatch between them, TL/L correction loss may occur. Nevertheless, this finding needs multicenter studies and a larger sample size for further verification.
Limitations
First, the sample size was relatively small, but it is not easy to identify a large sample for an age- and sex-matched comparative study. A multicenter study with a larger sample may be helpful. Second, this radiographic study did not evaluate the patient's self-assessment/satisfaction. Our next step is to explore the relationship between our findings and health-related quality of life. Third, most of the TL/L curves were moderate, and our conclusions may not be applicable to larger curves, which may not yield satisfactory outcomes after STF.
Strengths
Our study has several major strengths. First, few studies have focused on the risk factors for TL/L correction loss following STF. This is the first study focusing on the correction loss of the unfused TL/L curve during a long-term follow-up. Second, although the relationship between MT and the TL/L curve was reported, this is the first study reporting its association with correction loss. Finally, our conclusions are meaningful for clinical practice. Good immediate postoperative spontaneous correction does not mean a satisfactory outcome at the final follow-up after STF, and close observation is needed.
Conclusions
Posterior selective thoracic fusion is an effective treatment for AIS patients with major thoracic and secondary TL/L curves. A smaller immediate postoperative TL/L Cobb angle may be associated with TL/L correction loss during a long-term follow-up. Thus, good immediate postoperative spontaneous correction may not mean a satisfactory outcome at the final follow-up after STF. Mismatch between major thoracic and TL/L Cobb angles immediately after surgery may also be related to correction loss of the unfused TL/L curves. Although these findings were radiographic and patients were asymptomatic, close attention should be paid to smaller unfused TL/L curves and their relationship with the thoracic curve in case of deterioration.
Fig. 2 A typical case in the LOSS group: A 13-year-old female AIS patient underwent posterior selective thoracic fusion. The TL/L Cobb angle was 26.8° before surgery (a-b) and was corrected to 8.2° (c-d). After a follow-up of 32 months, the TL/L Cobb angle was 15.0°, with a correction loss of 6.8° (e-f)
Table 1
Demographic Details of the Patients
Table 2
Comparison of Coronal and Sagittal Parameters
Parameters; Pre-op; Post-op; Follow-up; p value (Pre-op vs. Post-op, Pre-op vs. Follow-up, Post-op vs. Follow-up)
*TM: major thoracic curve, LIV Tilt: lower instrumented vertebra tilt, TL/L: thoracolumbar or lumbar curve, SVA: sagittal vertical axis, * means significant difference
Table 3
Univariate Analysis of Risk Factors for Correction Loss of TL/L curves
Table 4
Multivariate Analysis of Risk Factors for Correction Loss of TL/L curves MT: major thoracic curve, TL/L: thoracolumbar or lumbar curve, * means significant difference
Table 5
Comparison and Relationship between Post-op MT and TL/L Cobb Angle
Joint Implementation Initiatives in South Africa: A Case Study of Two Energy-Efficiency Projects
This paper explores the issues pertinent to Joint Implementation in South Africa by examining two prototype potential projects on energy efficiency with the potential for reducing greenhouse gas emissions. The first is an energy-efficient lighting project based on the public electricity utility, Eskom's plan for a compact fluorescent lighting program in the residential sector. The analysis indicates that the CFL program could avoid emissions of up to 243 thousand tons of carbon over the first five years, at negative cost (that is, with a positive economic return). The second project involves the delivery of passive solar, energy-efficient housing to a low-income township in the Western Cape Province, at an incremental capital cost of approximately $2.5m for the 6000 houses. In this case, the avoided GHG emissions over the first five years amount to between 14 and 20 thousand tons of carbon, and over the 50-year life-span of the project it will result in 140 to 200 thousand tons of avoided emissions at a cost of $13 to $17 per ton. The housing project has significant non-GHG benefits such as savings on energy bills and health, which accrue to the low-income dwellers. Examination of both projects concludes that capacity-building is critical to ensure that the technology being transferred balances efficiency concerns. Finally, assessment and evaluation, monitoring and verification criteria and institutions are called for to guarantee measurable, long-term environmental, economic and other non-GHG-related benefits of potential JI projects.
The three years since the first democratic elections in April 1994 have seen positive real rates of economic growth of around 3 percent per annum, although these followed a decade of economic decline in the mid-1980s and early 1990s, so that real income levels are considerably lower than they were in the early 1980s. Unemployment is a major problem, with about 40 percent of the work force lacking formal employment; consequently, the informal economy provides the basis for survival for a large part of the population.
The new government has pursued a cautious fiscal and monetary policy aimed at achieving two primary goals: more vigorous economic growth, reaching 6 percent per annum by the turn of the century, and, at the same time, rapid investment in social infrastructure such as housing, electricity, water and education. This is in a context where the economy has been liberalized considerably since the re-integration of the country into the international economy after the demise of apartheid.
Several features stand out in relation to South Africa and the climate change issue: firstly, it is classified as a non-Annex 1 country, and so does not face any immediate greenhouse gas (GHG) abatement targets flowing out of international negotiations. Secondly, GHGs are very significant in its economy. At an aggregate level, South Africa accounts for only 1.4 percent of global carbon dioxide emissions or 1.2 percent of total GHG emissions, according to a national inventory study undertaken with 1988 data (Scholes & van der Merwe 1995). Similarly, the country produces less economic output per unit of carbon dioxide emitted than most countries. While South Africa has taken no position on Joint Implementation (JI), it has come out in support of the concept of Activities Implemented Jointly (AIJ). This is a strategic move on South Africa's part, which hopes to gain experience through a finite and voluntary pilot phase which will be used to formulate a position on JI, while firmly aligning itself with the Africa Group, G-77 & China, SADC and Valdivia, of which South Africa is a member. South Africa's participation in the AIJ phase is conditional on the following (NCCC 1997):
• Projects must contribute to national development programs, specifically the objectives of mass housing, water provision, education and electrification as outlined in the Reconstruction and Development Program.
• Evaluation, reporting and monitoring performance of AIJ projects must be transparent.
• Projects must contribute to the achievement of the objective of the UNFCCC by aiming to bring about in a cost-effective manner real, measurable and long-term environmental benefits related to the mitigation of climate change that would not have occurred in the absence of such activities.
• Funding for AIJ projects should be additional to all existing funding and technology transfer provided for under the UNFCCC.
• The AIJ pilot phase must be used to develop capacity in South Africa.
The South African Minister of Minerals and Energy signed a Statement of Intent in early 1996 with his United States counterpart committing both parties to investigate joint projects which produce global environmental benefits. Although JI and AIJ were not explicitly mentioned, it was clear that AIJ projects fell within the scope of the statement. More recently, several AIJ project proposals have been put forward, including the energy-efficient housing project discussed here which anticipates USIJI secretariat approval early in 1998. A steel industry sector energy-efficiency project is also under consideration by the AIJ Working Group of the NCCC, which is acting as the interim clearing-house for registry and assessment of proposed AIJ projects as a part of the approval process.
Given its position both as a relatively high emitter of GHGs and as a middle-income country with the potential for significant economic growth, South Africa is likely to develop a special interest in JI projects. On one hand, South Africa's energy- and GHG-intensive economy presents many potential prospects as a host country for investors seeking credit for GHG reductions. On the other hand, because it is a relatively advanced and (high per capita GHG-emission) developing country which could face abatement commitments of its own in the future (Rowlands 1996), this raises the stakes in the JI debate. Against this background, it is pertinent to consider two possible JI case studies as a means of mapping out the benefits and challenges which would arise if a more aggressive JI regime were to be instituted globally.
AN ENERGY-EFFICIENT LIGHTING PROGRAM AND AN ENERGY-EFFICIENT HOUSING PROJECT
This section explores the issues and concerns arising out of two potential Joint Implementation projects in South Africa as a means of mapping out the benefits and challenges which would arise if a greenhouse gas tradable credits scheme, currently under international discussion, were instituted.
The first project is an energy-efficient lighting project based on the plan of the country's public electricity utility (Eskom) for a compact fluorescent lighting program for the residential sector. The second project involves the delivery of passive solar, energy-efficient housing to a low-income township in the Western Cape Province based on a proposal by a non-governmental organization, the International Institute for Energy Conservation (IIEC), and a consulting firm (PEER Africa) for piggy-backing the project on an existing government housing program. Conclusions are drawn regarding the main institutions, policies and research requirements needed to implement an energy-efficiency-related project in South Africa.
Background
In mid-1996, Eskom launched a major resource plan referred to as the 'Integrated Electricity Plan', which included various demand- and supply-side components, one of which is a residential demand-side management (RDSM) program. The three main reasons put forth by Eskom justifying the introduction of this program are:
• to sustain the decline in the real price of electricity;
• to increase electricity's competitiveness in the small-customer energy market; and
• to contribute towards environmental conservation and awareness (Eskom 1996b).
Climate change and greenhouse gas emissions do not feature in any explicit way in the rationale for the RDSM, and it is probably fair to say that Eskom is more concerned with national environmental problems than global ones. The RDSM program has therefore not been designed with a view to JI projects. This paper evaluates the program from a JI perspective.
Within the RDSM, Eskom has identified a number of programs with potential, such as time-of-use tariffs, water heating load management, appliance labeling, thermal efficiency of dwellings, limited supply capacity, consumer education and efficient lighting. It is the last of these which is the focus of this case study. Due to the fact that energy-efficient lighting is already a part of Eskom's business plan, the question of additionality becomes pertinent. However, while Eskom has stated its intent, the pilot phase of the project has been repeatedly delayed and implementation has yet to occur.
Eskom's energy-efficient lighting project was born largely out of a concern for the increasing peak to base load ratio of Eskom's residential power supply, and the negative effect this has on the cost of supplying electricity. While residential consumption accounts for only 15 percent of South Africa's national electrical energy consumption, it constitutes 75 percent of the national variable load (Nauda & Lane 1996). Furthermore, the accelerated electrification program threatens to increase the impact of residential peaks on the national load profile. Since the launch of the national electrification program in 1991, over 2 million additional household electricity connections have been made by Eskom and other municipal power distributors (National Electricity Regulator 1996), thus negatively affecting the utility's load profile (van Horen et al 1993).
The main rationale for the utility's compact fluorescent lighting (CFL) program is to mitigate the impact on the peak of demand growth by existing and new consumers. While lighting contributes a relatively small proportion to Eskom's load profile (less than 10 percent), peak use of lighting coincides with the peaks of cooking, space heating and water heating. Furthermore, newly electrified households use electricity predominantly for lighting, with few base load appliances, thereby contributing disproportionately to these peaks.
Eskom has set ambitious goals in its energy-efficient lighting program. Due to the highly differentiated nature of the South African market, the program is targeted towards lights that contribute significantly to the total lighting load. Three consumer groups have been identified for the CFL program:
• in the high-income household sector, Eskom plans to replace 1.25 million incandescent light bulbs with CFLs over the five-year program period;
• in low-income households, the utility aims to install 576,000 CFLs in existing readyboards over a period of five years;
• in low-income households which will be electrified in coming years, Eskom aims to install 2 million CFLs over five years.
The CFL lamps being used in Eskom's pilot projects have an expected life span of 5,000 to 8,000 hours, and cost in the region of $10 to $14 each. Eskom aims to aggressively promote CFLs over an implementation period of five years. Thereafter, it is expected that sales of CFLs will continue at the same momentum with reduced marketing efforts.
At present, South Africa has no capacity to produce CFLs domestically and so importation of the lamps will be necessary, at least in the short term. It is possible, however, that demand from consumers in this country will grow to the extent that sufficient economies of scale will be present for a local producer to establish production capabilities; discussions along these lines were held between the Department of Trade and Industry and potential investors in early 1996, although no immediate investments were forthcoming.
Direct Project Impacts
Project Costs
Preliminary estimations of total costs of the project, based on the utility's contribution to the cost of the CFLs (ranging between U.S. $10 and $14) installed over the five years of the program and the direct marketing and support costs associated with the dissemination of the lights, are between $45 and $65 million.
Project-specific economic impacts
The main economic impacts from Eskom's CFL program include, on the cost side, the incremental capital costs of the CFLs as compared to normal incandescent light bulbs, and the promotional and marketing costs to support the dissemination of the lights. The benefits include the reduced operating costs of CFLs and the avoided costs of meeting peak demand. To date, experience with these costs and benefits has been fairly limited and thus only a rough approximation of net economic impacts can be made.
Based on assumptions and data about capital and operating costs of various light bulbs, as well as avoided costs of peak capacity, the net economic benefit of installing one CFL can be calculated (see Table 2). Based on the assumptions listed, the net present value derived from replacing an incandescent with a CFL will be between $33 and $38, based on a time period of 8,000 hours and a real discount rate of 8 percent. By applying these net economic values to the CFL installation scenarios (915,200 CFLs per annum over a period of five years), it is possible to calculate the aggregate economic effects. Over five years, the net economic value of the proposed CFL Program, taking into account estimated marketing and support costs of $1 million per annum, amounts to between $119 and $135 million in net present value terms (see Table 3). Clearly, this calculation is based on a number of variables, which may change as the program proceeds, notably the capital cost of CFLs, but it gives an indication of the scale of expected benefits from the CFL program.
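A rough back-of-the-envelope sketch of this aggregate calculation is given below. The per-lamp net present values ($33-$38), the installation rate, the marketing allowance and the 8 percent real discount rate come from the text; treating each year's installations and marketing spend as a single end-of-year cash flow is our simplifying assumption, so the output only approximates the $119-$135 million range reported.

```python
# Back-of-the-envelope approximation of the CFL program's aggregate NPV.
# End-of-year cash-flow timing is an assumption made for illustration only.

def program_npv(per_cfl_benefit: float,
                cfls_per_year: int = 915_200,
                years: int = 5,
                marketing_per_year: float = 1_000_000,
                discount_rate: float = 0.08) -> float:
    npv = 0.0
    for t in range(1, years + 1):
        annual_net = cfls_per_year * per_cfl_benefit - marketing_per_year
        npv += annual_net / (1 + discount_rate) ** t
    return npv

for benefit in (33, 38):   # low and high per-CFL net present values from Table 2
    print(f"per-CFL benefit ${benefit}: program NPV ~ ${program_npv(benefit) / 1e6:.0f} million")
```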
GHG benefits
Over five years, the CFL program would reduce electricity production by about 1,002 GWh, based on the assumptions listed above. Eskom's emissions of CO2 in 1994 were 142.9 million tons (Eskom 1995) which, based on electricity output of 160,293 GWh, yields an average emission factor of 891 tons of CO2 per GWh produced. Significantly, however, this average probably overstates the amount of GHG emissions which would be avoided by the CFL program, since it will reduce the amount of peaking power that has to be generated. Eskom's supply mix is such that base load is met by its coal and nuclear power stations, while peak power needs are met with pumped storage hydro schemes and gas turbines. The pumped storage schemes are, in turn, effectively powered by base load stations during off-peak periods and so a reduction in peak demand would indeed lead to reduced CO2 emissions. A practical difficulty remains, however, insofar as there is no clear way of matching reduced peak demand with reduced generation, especially when there are similar DSM programs being implemented simultaneously.
At most, therefore, the CFL program would avoid 892,782 tons of C0 2 over its first five years.
This represents just 0.62 percent ofEskom's total emissions for 1994 alone, or about 0.1 percent of its expected emissions over the same five-year period. Over a ten-year period, the same CFL program would reduce emissions by about 1.9 percent ofEskom's 1994levels, or 0.2 percent of its expected total emissions over that period.
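The arithmetic behind these figures can be checked directly from the values quoted above; the only adjustment is that the text rounds the emission factor to 891 t/GWh before multiplying.

```python
# Verification of the grid emission factor and the avoided-CO2 upper bound quoted above.
emissions_1994_t = 142.9e6     # Eskom CO2 emissions in 1994, tons
output_1994_gwh = 160_293      # electricity output in 1994, GWh
avoided_gwh = 1_002            # electricity saved by the CFL program over five years

factor = emissions_1994_t / output_1994_gwh        # ~ 891 tons of CO2 per GWh
avoided_co2 = avoided_gwh * round(factor)          # 1,002 GWh x 891 t/GWh = 892,782 t
share_of_1994 = avoided_co2 / emissions_1994_t     # ~ 0.62 % of one year's emissions

print(f"{factor:.0f} t/GWh; {avoided_co2:,.0f} t CO2 avoided; {share_of_1994:.2%} of 1994 emissions")
```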
From this, it is clear that the GHG benefits of the CFL program do not feature prominently in relation to the direct economic effects. Nonetheless, because the CFL project has a positive economic return, it will be one of the first GHG abatement projects to be implemented, whether as a JI project or not.
Like the CFL program, the proposed energy-efficient housing project discussed here is aimed at reducing CO2 emissions for the residential sector. Known as the Guguletu Eco-Homes (Energy Cost Optimized) Project, the 6,000-home planned development is located in the Cape Town metropolitan area in the Western Cape province. The project was submitted for review and approval to the U.S. Initiative on Joint Implementation (USIJI) for certification as an AIJ pilot-phase project. The proposed project involves the use of thermally-efficient design measures in a new low-income housing program. Measures such as the optimization of dwelling solar orientation, correct window sizing and positioning, provision of wall and ceiling insulation, and energy-efficient lighting could reduce carbon dioxide emissions from heating, cooking and lighting activities (Parker 1997).
The primary energy sources for these services are currently kerosene and electricity in the Cape Town region. The project, which will be funded by the South African government's income-scaled housing subsidy, seeks AIJ accreditation to help overcome the existing barriers to thermally-efficient, low-income homes in South Africa. Homes built to date through the government's Reconstruction and Development Program (RDP) do not incorporate energy-efficient design measures, often resulting in homes only marginally better in terms of energy consumption, emission reduction and habitability than the shacks they are replacing. Less than 20 percent of these homes include a ceiling, and a negligible few percent have made provision for insulation (IIEC 1997). Institutional barriers to energy-efficient housing development currently existing in South Africa include, but are not limited to:
• An incentive system that rewards developers who forsake home quality by minimizing investment in energy-efficient options. There is no incentive to include even the simplest of thermal efficiency measures since contractors are paid by the government only after project completion, and are not held to any set government housing standards.
• Lack of awareness of the potential for cost-effective energy-efficient measures and technologies;
• Lack of interest by international technology providers and material suppliers in the low-income sector in developing countries and emerging economies;
• Lack of an implementation process and techniques to achieve both cost and environmental goals;
• Lack of domestic financial instruments (affordable public and private credit facilities) for low-income housing; and
• Lack of incentives by the power company and municipal suppliers who would benefit from avoided new capacity installation. This attitude is partly due to the existence of over-capacity in the power system for the last decade.
The housing delivery process proposed by PEER Africa, a civil and environmental engineering consulting firm which is the U.S. project participant and which has a proven track record in a similar housing development outside Kimberley, is designed to help overcome these barriers. The proposed project seeks to demonstrate that cost-effectiveness and energy-efficiency are not incompatible, and can be delivered within the existing RDP housing subsidy. The other two participants in the project are the Community of Guguletu (a limited trust development company), and IIEC (a US-registered non-profit organization) (IIEC 1997).
Project costs
An RDP subsidy of up to 17,000 Rand (U.S. $3,900) per family will be the primary funding source for the proposed AIJ project. The subsidy is intended to help finance municipal services, land ownership clearance, project management, and the housing structure itself. To the extent possible, small amounts of additional funding such as utility rebates for energy-efficient lighting may also be sought (see Section 2.1).
While municipal services and land ownership clearance must be secured in any type of housing development, some of the immediate costs associated with project management and housing construction will be higher for an energy-efficiency housing project. The majority of interventions associated with the proposed Eco-home project construction are no-cost measures, such as building orientation and window sizing. Other energy-efficiency measures such as insulation, ceilings and CFLs do involve an incremental cost, with positive financial returns as indicated by the CFL example in Section 2.1. Consultancy costs related to the thermal efficiency aspects of project management, such as technical expertise and awareness-raising, will add approximately U.S. $2.5 million to the project cost beyond that of standard contractor-built homes. Table 4 lists the activities requiring investment by PEER Africa with their associated costs (core activities in shaded rows). The success of these activities will determine whether the proposed project would achieve energy savings of either 50 or 70 percent.
Based on the cost data presented here, assuming that these activities are adequate for implementing the whole project, the cost of the energy-efficiency component of the proposed project is about U.S. $425 per Eco-house. This figure is the maximum cost estimate since some of the activities listed in Table 4 would have been partially or fully funded from the subsidy even if the homes built were not thermally efficient. Therefore, the maximum total cost of the 6,000-home project is the $24 million covered by the subsidy, plus the $2.5 million additional investment by PEER Africa targeted for energy efficiency, or $26.5 million.
Table 4. Activities requiring investment by PEER Africa (each costed in person-days and travel): training of construction teams; project management and supervision; arranging bulk purchasing agreements; establishment of local industrial parks; and behavioral training on optimizing the Eco-Home. Total: US$2,547,200.
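As a quick arithmetic check of these figures, the sketch below (variable names are illustrative) divides PEER Africa's energy-efficiency investment across the 6,000 homes and adds the subsidy; note that 6,000 homes at U.S. $3,900 each gives roughly $23.4 million, which the text rounds to $24 million.

```python
# Back-of-envelope check of the per-house and total cost figures cited above.
energy_efficiency_investment = 2_547_200   # PEER Africa's training/consultancy costs, US$ (Table 4)
homes = 6_000
subsidy_per_home = 3_900                   # RDP subsidy per family, US$ (about 17,000 Rand)

cost_per_ecohome = energy_efficiency_investment / homes                  # ~ US$425 per Eco-house
total_cost = homes * subsidy_per_home + energy_efficiency_investment     # ~ US$26 million

print(f"~US${cost_per_ecohome:.0f} per house, ~US${total_cost / 1e6:.1f} million in total")
```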
Project-specific economic impacts
The capital cost of the proposed Eco-Homes project will be reimbursed by the government subsidy, as indicated above. Economic benefits of the project would accrue primarily to the homeowner in a number of direct and ancillary ways.
Homes built under the current government-contractor arrangement fail to incorporate even the simplest thermal measures, supplying residents with little improvement over their previous shacks that left them cold in the winter and hot in the summer. Building in technologies that make homes responsive to the local climate is significantly less expensive when performed at the time of construction, and results in dwellings that are affordable, more healthy, and that significantly reduce CO2 emissions.
Benefits incurred from the proposed Eco-Home project include improvements in family health, economic well-being, comfort, employment, safety, and opportunities for women. If the energy requirement for space heating is reduced by as much as 70 percent, the paraffin-using households (the majority of non-electrified houses) would save about 100 liters each winter, worth about U.S. $40. Since electrification is being extended to the 2.5 million houses that are currently unelectrified, and one million low-cost homes are planned in any case, it is assumed that the overall impact on the environment can be reduced by introducing energy-efficiency measures, even if this means an increased take-back in total electricity use. However, these benefits can be only partially quantified economically. For example, in addition to the direct annual treatment costs of respiratory disease of about U.S. $75 million due to exposure to coal combustion in South Africa (van Horen 1996), indirect costs such as losses in productivity and quality of life prove more difficult to quantify.
Capital costs of purchasing fuel such as kerosene and electricity have a bearing on the access to employment and economic opportunities in communities. Numerous studies have found that the poorest households (those eligible for the RDP housing subsidy) pay the largest portion of household income on meeting basic energy needs such as heating and cooking, amounting to about 11% (Simmonds and Mammon, 1996). Improving the affordability of these energy services will result in improved payment for services, liberated household income to use for small business development or other priority investment, and ability to meet other basic needs.
The proposed project will also raise employment in the community. In contrast to a standard contractor-built project that brings in outside professionals and leaves only 2 percent of the housing subsidy with the community, PEER Consultants is committed to shifting 30 percent of the subsidy to the local economy by training unemployed people, including women, and putting them to work on the job site. In another housing project at Kutlwanong (IIEC, 1997), construction of 2,300 units created 120 local jobs, 10 percent of which went to women. If the same assumptions pertain to Guguletu, the project would create more than 300 paid jobs in construction, a significant impact for a community with 80 percent unemployment.
Furthermore, the project generates other employment in the housing material supply sector. It is not easy to discern, however, how many of these jobs would have been created with an RDP project lacking any improved thermal performance, though experience in other standard contractor arrangements suggests very little local job creation occurs, since outside professionals are brought in and local labor is used only for unskilled tasks (PEER and IIEC, personal communication).
Reducing the amount of energy required to maintain comfort in the home will also reduce the incidence of three chief safety concerns related to energy use: poisonings, burns and fires. Over
GHG benefits
CO2 savings for the proposed project will be realized from the reduced use of electricity and kerosene for space heating and lighting provided by the improved thermal performance of the Eco-Home and promotion of the use of energy-efficient lighting. Space heating and lighting each account for about 30 percent of annual energy consumption for low-income homes in Cape Town (IIEC 1997). CO2 savings can be claimed upon habitation of the Eco-Home, and maintained for the estimated 50-year life span of the project, adjusting for an increased take-up in energy use for the first 15 years. Table 5 shows baseline estimates for energy consumption for space heating and lighting in the standard informal and formal housing stocks in the low-income housing sector in Cape Town.
Space heating in the region is accomplished through the combustion of kerosene and the use of electricity, depending on household access to appliances, preferences, and ability to afford fuels.
The proportion of energy used for space heating is assumed to remain constant over the transition (5-15) years of the project. The proportion of electricity consumption dedicated to lighting is also assumed to remain constant for the duration of the study (data sources: Simmonds and Mammon, 1996; EDRC, 1996; Scholes and van der Merwe, 1994). Under the baseline scenario, it is assumed that energy-use patterns will at first be similar to the present informal shacks that predominate in the area. A transition to formal dwelling energy use is expected for the next 10 years as standard contractor-built homes are delivered (where the values of energy consumption for the informal and formal sectors are interpolated over a ten-year period). Finally, energy use is conservatively assumed to remain constant in the formal sector for the remaining 35 years of the comparison period. Table 6 compares CO2 emissions projections for a baseline project of 6,000 standard-built homes versus 6,000 Eco-Homes in Guguletu, based on 50 percent savings (low efficiency scenario) and 70 percent (high efficiency scenario). Projections combine emissions from both kerosene and electricity use, with an assumed 50% usage rate of CFL lighting for the Eco-Homes (in the absence of social acceptability). The cost of carbon reduction is generated from the energy-efficiency investment data as shown in Table 4, not the total project costs. Over the life of the project, the CO2 savings is an estimated 7 tons per house in the low-energy-savings scenario, and 9 tons in the 70 percent energy-savings projection. Therefore, the total GHG-avoidance for all of the proposed 6,000 Eco-Homes is between 40,000 and 55,000 tons of CO2 (IIEC 1997). However, the actual GHG savings critically depend on the accuracy of the baseline projections. Projects with such a long life-span as the Guguletu housing project (50 years) carry more uncertainty than shorter-term projects, since a number of exogenous changes in real income levels, income distribution, urban housing patterns, building standards and styles, fuel use, etc. can take place and make it much more difficult to isolate the JI-relevant credits.
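To make the structure of this projection concrete, the sketch below implements the baseline described above (informal-to-formal interpolation over ten years, then constant use) and applies a flat savings fraction to it. The per-house figures are placeholders, since the real kerosene and electricity values come from Table 5, which is not reproduced here, and the simplification ignores the take-back adjustment and the fact that savings apply only to the space-heating and lighting shares of consumption.

```python
"""Sketch of the 50-year baseline-versus-Eco-Home comparison (placeholder numbers)."""

def baseline_emissions(informal_t, formal_t, years=50, transition=10):
    """Per-house annual CO2 (tons): interpolate informal -> formal, then hold the formal level."""
    series = []
    for year in range(years):
        if year < transition:
            frac = year / transition
            series.append(informal_t + frac * (formal_t - informal_t))
        else:
            series.append(formal_t)
    return series

def ecohome_emissions(baseline, saving_fraction):
    """Eco-Home scenario: a fixed fraction of baseline emissions is avoided each year."""
    return [x * (1 - saving_fraction) for x in baseline]

informal, formal = 0.20, 0.18      # hypothetical tons of CO2 per house per year
base = baseline_emissions(informal, formal)
for saving in (0.50, 0.70):
    eco = ecohome_emissions(base, saving)
    saved_per_house = sum(base) - sum(eco)
    print(f"{saving:.0%} scenario: ~{saved_per_house:.1f} t CO2 per house, "
          f"~{saved_per_house * 6_000:,.0f} t for 6,000 homes")
```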
For analytical purposes, this ambiguity about the baseline projection calls for putting more weight on the near-term impacts than on those in later years. At the practical level, the length of these projects may require that the actual credits be assigned periodically, say every 5 years, after a thorough verification of the savings. The downside of this approach is that it complicates the decision process for investing in the most cost-effective JI projects, though a probabilistic assessment could be used to increase the comparability of projects with varying life-spans.
As a high emitter of GHG and as a country with an economy in transition, South Africa is well positioned to seek JI opportunities such as the potential energy-efficiency projects detailed above. The benefits of the CFL program are more economic than GHG-saving, while the proposed energy-efficient housing project promises significant GHG reduction with numerous no- and low-cost measures. The benefits of both scenarios accrue to residential home dwellers in a number of direct and indirect ways (particularly in the Eco-Home case), as well as to the utility Eskom in the form of avoided costs of meeting peak demand. Both the CFL program and the lighting component of the proposed housing project require the increased capital costs of the CFLs themselves, as well as direct marketing and support costs associated with dissemination of the lights. Both projects are well positioned for implementation as JI-type projects.
At the national level, electricity and housing delivery are high government priorities, currently being implemented without energy efficiency components due to economic and institutional barriers. Preliminary steps by the government to address climate change concerns indicate an interest in cost-effective mitigation measures such as JI.
JI-SPECIFIC ISSUES AND CONCERNS
This section addresses a number of generic concerns commonly associated with JI projects in relation to South Africa's CFL program and the proposed Eco-Home housing project in Guguletu.
Additionality of Funds from Bilateral Sources
In principle, pilot JI projects are not supposed to detract from conventional development-oriented financial assistance, but are meant to attract additional sources of finance. If the present CFL program were structured as a JI project, it would be in the interests of a foreign JI investor to invest, since the project yields GHG savings at negative cost (net benefit). A condition for this to occur would be that Eskom compensates the JI investor for a portion of its own avoided costs, and Eskom would logically be prepared to pay up to the amount of those avoided costs. Of course, in practice, account would have to be taken of the risks to both parties and other transaction costs but, in principle, it would seem that this would be an attractive project from a JI investor's perspective.
However, as already mentioned, the JI activity should also be additional to Eskom's own business plans. The fact that energy-efficient lighting is already part of Eskom's Integrated Energy Plan suggests that the project is not economically unattractive to Eskom and that it would go ahead with the project with or without JI investment. The question therefore arises, how would project implementation differ if it were a JI project? For example, could JI investment overcome financial barriers to participation in the energy-efficient lighting program in the low-income sector that would not be overcome if Eskom were to implement the program alone? This question is difficult to answer at this stage as Eskom is yet to define its strategies for implementing the program.
In the energy-efficient housing case, the issue of additionality is less of a factor as technical assistance, rather than direct outside funding, is being sought. The interested U.S. participant, in this case PEER Africa, would be investing technical (energy-efficient design) knowledge, project management experience and housing development expertise in the host country, South Africa, in return for a portion of theoretical carbon credits (discussed in Section 3.2). In contrast to JI-type projects involving tree planting or a large infrastructure development, an energy-efficient housing project in South Africa provides a 'one-off', or no-regrets, opportunity to include CO2-saving measures in the project design, where they would not otherwise occur under the current housing scheme.
Sharing of Carbon Credits
The sharing of carbon credits is likely to be one of the most important issues for South Africa in the climate change debate generally and the JI debate specifically. Given its status as a relatively significant source of GHG emissions, coupled with its middle-income status, the prospect of future emission control targets being imposed on South Africa means that the cost of relinquishing low-cost GHG abatement options could grow in the future. Eskom, for one, is hesitant to engage in JI projects because of its potential vulnerability on the GHG issue (Lennon 1996) and would therefore be very cautious before entering such agreements without clear criteria for the sharing of any carbon credits. At present, South Africa does not hold a formal position on sharing of carbon credits and it is hoped that the AIJ pilot phase will lead to greater understanding in South Africa of the implications of credit sharing.
Having said that, the different credit-sharing options discussed in the literature include:
• Total emissions reductions could be shared on the basis of the percentage of initial investment made by the host and the investor countries. This is, however, not considered fair as the host country will not share significantly in the benefits of the avoided GHGs.
• South Africa could establish a policy which sets or fixes the credit-sharing ratio. For example, some countries have been calling for a 50/50 split of the total emissions reductions for all projects (Chatterjee & Fecher 1997). Predetermining the credit-sharing ratio may, however, discourage certain types of investment.
• Total emissions reductions could be shared on the basis of a percentage of initial investment and avoided costs, including avoided consumer power costs, avoided capital cost of generation and avoided cost of abatement abroad.
Table 7 demonstrates the implications of a range of hypothetical credit-sharing scenarios for the cost and NPV per ton of carbon. Clearly, more research is required to determine the most appropriate and fair carbon credit-sharing scenario for these projects. Such research would need to include calculations of:
• The up-front or initial investment made by the host and investor countries.
• The net cash flows to each party.
• The division of credits based on the above and determined by an agreed framework.
The lack of any formal framework for the distribution of carbon credits led the participants in the proposed energy-efficient housing project to experiment with creative applications of emissions trading, in the hope of producing positive environmental and developmental outcomes.
The three participants agreed on the following voluntary assignment of theoretical emissions credits, which can be capitalized once an international carbon trading mechanism is established (a simple allocation of these percentages is sketched after the list):
• 45 percent to PEER Africa for future carbon trading potential.
• 45 percent to the Community of Guguletu to help fund further sustainability projects within the community. The credits would be disbursed either communally to purchase a shared resource, or used to establish a revolving loan structure that would be accessible to individual families.
• IIEC would receive the remaining 10 percent of the theoretical credits, which will either be "retired" in order to achieve environmental gains beyond the stipulated emissions reductions, or used to fund further climate change-related projects and thus extend the GHG emissions impact of the Guguletu project (IIEC 1997).
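Applying the agreed percentages to the project's estimated emission-avoidance range of 40,000 to 55,000 tons of CO2 gives the scale of each party's theoretical allocation; the snippet below simply performs that multiplication.

```python
# Voluntary 45/45/10 split of theoretical credits, applied to the estimated CO2-avoidance range.
shares = {"PEER Africa": 0.45, "Community of Guguletu": 0.45, "IIEC": 0.10}

for total_tons in (40_000, 55_000):          # low and high energy-savings scenarios
    allocation = {party: share * total_tons for party, share in shares.items()}
    print(total_tons, "t CO2:", allocation)
```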
Non-GHG effects
Two categories of non-GHG effects are pertinent here: firstly, economic effects, and secondly, other environmental effects.
Income distribution inequities are especially stark in South Africa's current energy situation.
Most newly electrified households have not shared significantly in the country's wealth and have low incomes, with the result that energy expenditures account for a high proportion of their spending. As indicated previously, low-income households spend approximately 11 percent of their monthly household expenditure on energy, compared with wealthier households which spend in the region of 4 percent (Simmonds & Mammon 1996). Other estimates have put the proportion of energy expenditure as high as 20 to 40 percent of total household income (IIEC 1994). Consequently, a reduction in the monthly energy bill resulting from the use of more efficient lighting will release scarce financial resources for other household needs. Poor households experiencing energy poverty may use these freed resources to meet other household energy needs (the 'take back' effect). Thus, although there would probably not be any major increase in savings rates in poor households, the effect of the CFL program would nonetheless be positive insofar as expenditure could be re-directed towards other needs. For example, low-income households currently spending U.S. $15 per month on meeting their cooking, lighting, media and space- and water-heating needs may continue to spend U.S. $15 per month on energy even after their lighting costs have been reduced through CFL use, but will achieve a higher level of energy service. It should be noted, however, that this implicitly assumes that the increased disposable income resulting from energy-efficient interventions is not expended again on items with higher GHG impacts. In practice, of course, this is extremely difficult to estimate.
A further non-GHG economic effect of the programs outlined here is their potential to achieve economies of scale in the production of energy-efficient and passive solar technologies, thus reducing the initial capital outlay to incorporate such measures.
Secondly, both the CFL program and the energy-efficient housing project would bring about a reduction in electricity generation. To the extent that electricity generation leads to negative health and environmental costs due to air pollution emissions and occupational hazards, any reductions in electricity generation would have positive effects in that regard, as discussed above in relation to benefits of the Eco-Home project. A recent study has estimated some of the external effects and found them to be an order of magnitude higher than the direct effects, especially in the case of coal power stations (van Horen 1996).
It would, however, be misleading to count these avoided external costs as a benefit of either of the programs, because of second-round substitution effects which would more than likely lead to a shift in consumption patterns towards non-lighting demands. Thus avoided emissions caused by the use of CFLs and high thermal-performance homes may be offset by increased emissions due to higher demand for other energy services. The pertinent question is, therefore, whether the net effect is positive or negative. Ideally, the analysis should calculate the net GHG savings by offsetting against gross GHG savings the incremental consumption which results from increased demand for other services. In the absence of information about individual households' consumption profiles and their corresponding GHG-intensity, the comparison could be based on the GHG-intensity of the average consumption basket of the relevant consumer sectors; or at an even higher level of aggregation, the average GHG emission intensity for the economy as a whole (14.2 tC/$1000 GDP).
Lack of Assessment Methods for JI Projects
This is a generic problem in JI projects, and apart from broader questions about baseline levels of GHG emissions, more specific measurement and assessment problems arise in the case of the CFL program. Some of these were alluded to previously, and include the difficulty of apportioning avoided electricity generation to the various power plants with their consequent GHG emissions. While lighting services mostly coincide with the peak periods, it is difficult to say whether a CFL program would reduce generation from gas, pumped storage, coal or nuclear plants, especially when other DSM programs are causing similar effects during peak periods.
Assessment of the impacts of the energy-efficient lighting project requires an understanding of the total demand now and in the future, the corresponding lighting demand with and without the lighting program, and the planned generation expansion to meet the future demand. Further research is required to calculate the embedded emissions of pumped storage.
In the housing case, project-specific measurement and assessment challenges arise mainly out of the long-term nature of the project. A monitoring plan has been developed for the proposed project in which data on direct GHG and ancillary benefits would be collected for both baseline homes (the control group) and new Eco-Homes. Household interviews would be conducted, along with collection of data from suppliers of energy services to verify figures given by the households. Documentation of baseline values would begin one year prior to construction of the Eco-Homes to establish energy use within new homes that lack thermal efficiency measures.
After construction of the proposed Eco-Homes, data related to household energy usage and GHG emissions would be gathered quarterly for the first year, biannually for the second and third years, and annually thereafter, with an emphasis on the winter months. Such rigorous monitoring and verification methods, by enabling long-term tracking and comparison between different types of low-income housing, are expected to alleviate much of the uncertainty related to the 50-year time scale of the project.
Inadequate Financing and Unknown Macroeconomic Impacts
In the case of the CFL project, if it were presented as a potential JI opportunity, it would be unlikely to suffer from inadequate financing due to its favorable economic returns. Provided Eskom was prepared to share these benefits with investors, it would most likely, in turn, attract investors keen to make the project succeed.
In the housing project, since the government subsidy reimburses the basic housing construction and management costs, the only additional financing required is about $425 per house, or $17 and $13 per ton of C saved in the low and high energy-savings scenarios respectively, for the technical expertise, training and other consultancy investment associated with the thermal efficiency aspects of the project. The few additional costs related to the structure itself (such as the CFLs and wall insulation) all have positive medium- to long-term economic returns.
With respect to macroeconomic impacts, the main risk in the case of the CFL program would concern the exchange rate and importation of CFL products. The South African Rand depreciated by some 25 percent in 1996, with the result that balance of payments pressures grew and foreign reserves declined to low levels. Whilst this situation improved during 1997, a CFL program would nevertheless involve the importation of large quantities of the lamps, at least in the initial years, and this would have potentially negative consequences. In the longer term, however, it could be possible to reverse this effect, particularly if foreign (and local) investments were made in local CFL production capabilities, with the potential even for export growth.
A CFL program and the lighting aspect of the proposed Eco-Home project are also likely to have a negative impact on South Africa's incandescent lighting industry. More than half the CFLs are planned for the new demand for electrification, and as such the program will reduce the potential growth of the incandescent industry. Those CFLs which are planned to replace incandescent lighting in both high- and low-income households will have an impact on the existing market share of the domestic incandescent lighting industry. The scale of the impact is, however, likely to be small. Due to the longer life of CFLs, the unit sales of CFLs will always be small relative to the sales of incandescent bulbs. To illustrate, it has been estimated that if half the light sockets in the world held CFLs, they would still account for only 5 percent of bulbs sold (Clarke 1997).
Having said this, the competition for market share that the domestic incandescent industry will face from imported CFLs may provide a platform for the incandescent industry to lobby for higher import tariffs. Furthermore, the impact of a CFL program on the domestic incandescent lighting industry needs to be weighed against any future potential to establish a local CFL industry. The instability experienced in South Africa's foreign currency markets and foreign reserve holdings during the last few years underlines the importance of these macroeconomic questions.
The issues raised above related to the CFL program are also relevant for the lighting aspect of the Eco-Home project. For housing construction, however, locally-attained suppliers and labor will be used as much as possible, generating long-term economic growth for the community. As for the macroeconomic effects of successful energy-efficient housing development on the standard housing delivery industry, it is possible that other developers may benefit from a trained local work force and regionally-produced materials. Such benefits are expected to bring housing costs down so that energy-efficiency measures will be more cost-effective to introduce (IIEC 1997). Expectations for quality, energy-efficient housing will also likely rise among community members, perhaps putting pressure on other developers to incorporate thermal measures as well.
Dumping of Old Technology
An important concern emerging out of the African literature on JI is that JI will provide industrialized countries with the opportunity to dump old technology on the developing world (Maya 1995; Gupta et al 1996). If the technology provided by the investor country is inferior, then it is likely that the host country will be compelled to replace this technology in the future and the potential benefits of participating in JI will not be realized.
South Africa shares the African group position. There is a real concern that South Africa's knowledge of the international market is inadequate and, therefore, its capacity to assess the technology in terms of whether it is state-of-the-art is limited or, at the very least, has to be built up at some cost. To this end, one of South Africa's conditions for acceptance of the AIJ pilot phase is that it must build capacity in South Africa so that full local understanding of issues relating to the implementation of the UNFCCC via JI is achieved.
Having said this, CFLs have a relatively short life span compared to other capital equipment and this high turnover reduces the risk for the host country. As long as the fittings for CFLs remain the same, South Africa will be able to adopt and promote new, more advanced technologies as they emerge. It must be noted, however, that dumping of inferior quality CFLs is likely to cause irreparable damage to people's perceptions of energy-efficient lighting, jeopardizing the long-term global benefits of the CFL program and the lighting component of the energy-efficient housing development.
Conversely, the thermal-efficient design measures of the Eco-Home project have a relatively long life span of approximately 50 years. In this case, however, the energy-efficient measures proposed are fairly "low-tech," minimizing host country risk since the design measures employed, such as low overhanging roofs and window positioning, will continue to deliver benefits for as long as they are properly maintained. Dumping of obsolete technology is not a concern for energy-efficient housing construction as long as capacity-building takes place within the community. Training both workers and home dwellers about the energy-saving properties of the house is critical to any type of energy-efficient housing project to ensure the long-term CO2-saving and comfort benefits of the development.
The success of the program is also dependent on the appropriateness of the technology to the South African context. Technologies that are developed in other countries tend to be developed within the socio-cultural context of those countries and may be inappropriate to another setting. Eskom's analysis suggests that the higher-priced CFLs with higher specifications are not necessarily the most appropriate for South Africa. There is a need to choose an appropriate CFL technology which balances efficiency, cost and quality, in relation to the specific context in which electricity is supplied in South Africa. This is particularly relevant with regard to the lower-income residential sector in South Africa.
Care must also be taken to design thermal-efficient housing measures appropriate to the needs of the community being served, in this case a low-income, urban residential area in a temperate climate. The challenge of the proposed Eco-Home project is to employ energy-efficient technology that meets the emissions-reducing and comfort-raising criteria of Guguletu residents in the most cost-efficient manner possible. Again, a balance between efficiency, cost and quality must be struck, and this can only be done between parties with an in-depth understanding of both the current needs of the community and the economic, political, and cultural context in which it exists.
High Technology Costs
Technology costs do not represent a major barrier for potential investors in either the CFL program or the proposed energy-efficient housing project. Although more expensive than incandescent light bulbs, CFL costs would probably not present major difficulties for project financiers. One of the key goals of the Eco-Homes project is to demonstrate that thermal-efficient design measures can be incorporated with minimal additional cost over the RDP housing subsidy allotment.
As far as consumers are concerned, however, the higher capital costs of purchasing and replacing CFL bulbs would almost certainly represent a major barrier to their more widespread use, and thus innovative financing schemes would be essential. These could include, as has occurred in other countries, leasing programs or recovery of CFL costs through the electricity tariff over their useful life span (or a shorter period if risks are perceived to be higher). This factor would have to be designed into the project for it to succeed, particularly in the lower-income household sector.
Sustainability of the Program/Project
The JI project must bring about measurable and long-term environmental benefits related to CO2 reduction. With the CFL project, the question arises whether households will continue to use CFLs once the program is over. This is dependent on the perceived benefit, availability, cost and associated mechanisms of financing.
The 50-year life span of the proposed energy-efficient housing project promises long-term environmental benefits in the form of GHG emissions avoidance. The question in this case is whether Eco-Home residents will maintain the structure so that its benefits will continue over the life of the project. As indicated in Section 3.2, capacity-building in the form of training workers and home dwellers would encourage, but not guarantee, consistent CO2 savings. Maintaining long-term benefits is dependent on the length of time families live in the homes, available funds for necessary repairs (such as broken windows or weather-stripping replacement), availability of financing mechanisms for rebuilding energy-efficient homes in the case of fire or severe weather damage, etc. Rigorous monitoring of emissions must be conducted and recorded for both the energy-efficient house and standard-built homes to ensure CO2 savings and to mitigate hidden costs and uncertainties that arise. Should the AIJ phase of an energy-efficient housing project such as the Guguletu development move forward, emissions and efficiencies data collected will be invaluable to determining the feasibility of potential JI opportunities for energy-efficient housing in South Africa in the future.
Lack of Institutions to Assess, Evaluate and Monitor Projects
At the global level, there is presently no institutional structure which can monitor and evaluate JI projects. Within South Africa, however, both Eskom and PEER have considerable institutional capacity to play a role in this process. Unlike most electricity suppliers in the region, Eskom has a strong financial position with a large and skilled work force. Given its role as the main local stakeholder in the hypothetical CFL JI project, this is an important advantage. PEER Africa has established a presence in the South African housing sector since 1996, and offers extensive experience in construction management, worker training programs and monitoring/reporting services. Clearly, however, Eskom and PEER's role would be limited and they could not act as "player" and "referee" simultaneously. Furthermore, JI projects need to be assessed and evaluated not only in terms of their emissions reductions and avoided cost achievements, but also in terms of their technical appropriateness, their social content and their contribution to national development priorities (Asamoah & Grobbelaar 1996). Neither the host nor the investor industry may be the appropriate party to assess or evaluate these components of the projects. There is clearly a need for capacity-building in national governments to ensure that they are able to evaluate projects on this basis.
Responsibility for monitoring the GHG and other impacts of the JI projects once they are approved should be conducted or determined by the project participants, under standard guidelines set by an international body. Energy usage should be recorded for both baseline and energy-efficiency scenarios, and data relating to ancillary benefits collected. Independent, local organizations should be sought to verify the GHG and economic magnitude of such projects prior to assigning credits to interested parties. It is therefore imperative to identify institutions and mechanisms for (a) evaluating the appropriateness of proposed JI projects by the host country, (b) monitoring the GHG and other impacts of JI projects, and (c) verifying the GHG and economic magnitudes of such projects prior to assigning credits to interested parties.
Lack of an Acceptance Process for JI Projects
This point is related to the previous one, insofar as there is no regulatory body which oversees the processing of potential JI projects, and as a governance framework for GHG trading has still to be developed internationally. This generic point obviously applies to South Africa as it does to other JI actors.
As South Africa has only recently ratified the UNFCCC, its procedural mechanisms for evaluating and accepting potential AIJ projects are still in their infancy. As an interim measure, the AIJ Working Group of the NCCC has the mandate from the DEAT to act as the 'clearing-house' for the acceptance of potential pilot-phase projects, with input from the broader NCCC.
However, no formal criteria exist against which projects can be evaluated and accepted. The AIJ Working Group is at present guided by the broad criteria set out in South Africa's position statement on AIJ, the most significant of which are that the AIJ projects must dovetail into developmental priorities of South Africa and must bring about real and measurable long-term environmental benefits related to the mitigation of climate change that would not have occurred in the absence of such activities, and that the funding for AIJ projects must be additional to all existing funding and technology transfer. It is clear that these position statements are too broad to allow for the effective screening of AIJ projects. Without a more detailed set of criteria, South Africa runs the risk of adopting a random project approach, which fails to address the country's developmental needs in a sustainable manner.
To reduce the risks to both the investor and the host countries, AIJ projects must be scrutinized by a 'clearing-house' and approved by national government with a clearly defined set of criteria.
The institutional culture of consultation and participation in South Africa also necessitates that AIJ be owned and operated by a broad spectrum of involved persons including representatives from Government, research organizations, labor, community, environmental organizations and industry (Asamoah & Grobbelaar 1996).
CONCLUSION
This section draws out the main institutions, policies and research requirements to implement an energy-efficiency-related JI project in South Africa.
Institutional Concerns
South Africa already has an interim clearing-house for the acceptance of JI projects in the pilot phase, in the form of the AIJ Working Group of the NCCC. However, the role of this group has not been fully clarified and the lines of authority have yet to be established. To reduce risk to both the host and the investor countries, a formal national acceptance institution must exist, with clear lines of responsibility to both the government and stakeholders.
While the capacity exists in South Africa, specifically in Eskom and PEER, to monitor and evaluate the projects in terms of their costs, benefits and specific environmental impacts, there is a need for a national institution which is not involved in the implementation of projects to evaluate the reported project results and to assess projects in terms of their contribution to national development priorities. There may also be a need to have an independent institution (local or external) to verify the GHG impacts of the project/program.
Policy
This paper has highlighted several JI/AIJ issues and concerns which need to be addressed through policy. These include:
• Refine the selection criteria for JI projects in order to ensure a programmatic approach that ensures that South Africa's national development needs are met in a sustainable manner.
• Build capacity to assess, monitor and evaluate projects in terms of their CO2 reduction achievements, avoided costs and social development impacts. Specifically, there is a need to develop a pool of professionals who can offer technical support for the monitoring and evaluation of projects, and institutions which can assess the project results in terms of meeting national development needs.
• Establish policy on credit-sharing. To date there has been limited debate on the sharing of credits due to the fact that JI is still in its pilot phase and AIJ projects are not credited. Given South Africa's relatively advanced economic position, however, it is necessary for South African officials to start debating and considering the implications of different credit-sharing scenarios.
• Establish standardized methodologies for the assessment, evaluation and monitoring of projects to track the 'sustainability' of emissions reductions. The CFL and Eco-Home projects have highlighted some of the difficulties associated with monitoring and evaluating the results of such energy-efficiency projects in South Africa. Methods to determine a scheme for apportioning the avoided electricity generation to different power plants need to be explored in order to determine the extent of emission reduction and the associated costs.
Research Needs
There are several uncertainties that have arisen in the potential JI/AIJ projects that require further exploration. These include:
• The impact of the 'take back' effect and the extent to which it decreases the total emissions reductions of the projects.
• The viability of a local CFL manufacturing sector in the longer term and the impact of importing CFLs on the balance of payments in the short term.
• The impact of the projects on the local incandescent lighting industry and South Africa's national priority of job creation.
• The real potential for sustained penetration levels after the projects are complete and the impact of this on long-term emissions reductions.
• The embedded emissions of pumped storage.
• The magnitude of transaction costs associated with long-term monitoring and verification responsibilities of a project.
• The stability of the RDP housing subsidy program.
• Barriers to widespread adoption and acceptability of such energy-efficiency measures by local communities and suppliers.
Explicit mitigation of the above uncertainties at the proposal stage of any energy-efficiency project will likely reduce the risk and increase the desirability for potential JI investors.
|
v3-fos-license
|
2021-06-03T13:20:18.708Z
|
2021-06-03T00:00:00.000
|
235305670
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2021.672894/pdf",
"pdf_hash": "0278026bdae97521b2c6b8104adaff6458c307da",
"pdf_src": "Frontier",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44525",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "0278026bdae97521b2c6b8104adaff6458c307da",
"year": 2021
}
|
pes2o/s2orc
|
Tuberculosis Risk Stratification of Psoriatic Patients Before Anti-TNF-α Treatment
Psoriasis is a skin inflammatory condition for which significant progress has been made in its management by the use of targeted biological drugs. Detection of latent M. tuberculosis infection (LTBI) is mandatory before starting biotherapy that is associated with reactivation risk. Together with evaluation of TB risk factors and chest radiographs, tuberculin skin tests (TST) and/or blood interferon-γ-release assays (IGRA), like the QuantiFERON (QFT), are usually performed to diagnose M. tuberculosis infection. Using this approach, 14/49 psoriatic patients prospectively included in this study were identified as LTBI (14 TST+, induration size ≥ 10mm, 8 QFT+), and 7/14 received prophylactic anti-TB treatment, the other 7 reporting past-treatment. As the specificity and sensitivity of these tests were challenged, we evaluated the added value of an IGRA in response to a mycobacterial antigen associated with latency, the heparin-binding haemagglutinin (HBHA). All but one TST+ patient had a positive HBHA-IGRA, indicating higher sensitivity than the QFT. The HBHA-IGRA was also positive for 12/35 TST-QFT- patients. Measurement for 15 psoriatic patients (12 with HBHA-IGRA+) of 8 chemokines in addition to IFN-γ revealed a broad array of HBHA-induced chemokines for TST+QFT- and TST-QFT- patients, compared to a more restricted pattern for TST+QFT+ patients. This allowed us to define subgroups within psoriatic patients characterized by different immune responses to M. tuberculosis antigens that may be associated to different risk levels of reactivation of the infection. This approach may help in prioritizing patients who should receive prophylactic anti-TB treatment before starting biotherapies in order to reduce their number.
INTRODUCTION
Psoriasis is a frequent skin inflammatory condition with a worldwide prevalence of 3%, characterized by erythematous and scaly plaques that may affect any part of the body (1,2). Psoriatic patients may develop comorbidities, such as psoriatic arthritis and cardiovascular diseases, leading to the concept of a systemic immune-mediated inflammatory disease (IMID) (3). Significant progress has been made in the management of psoriasis by the use of targeted biological drugs, initially limited to tumor necrosis factor-α (TNF-α) inhibitors (4). Patients receiving TNF-α-targeted therapies have an increased risk of reactivation of a latent Mycobacterium tuberculosis infection (LTBI), and although there are few and discrepant specific reports in psoriatic patients (5), the risk of active tuberculosis (aTB) is, according to a recent meta-analysis, doubled for patients treated with anti-TNF-α (6). The use of biological drugs to treat psoriasis was further extended to other therapeutic agents targeting the interleukin (IL)-23/IL-17 axis (7,8), but their potential risk of reactivation of LTBI is not yet firmly established (9).
Classically, M. tuberculosis infection in humans is thought to present either as aTB or as LTBI defined by the presence of immunological responses to mycobacterial antigens in absence of clinical symptoms of disease (10,11). LTBI subjects are thought to present a life-long risk of reactivation of the infection, with 5 to 15% of them developing aTB during their lifetime (11). Recent data however challenged this concept and indicated that LTBI comprises a range of infection outcomes associated with different bacterial persistence and host containment, from cleared infection to low-grade TB (10). It became evident that these last individuals are probably more at risk to reactivate the infection compared to other LTBI subjects.
In view of the higher risk of psoriatic patients to reactivate LTBI when receiving TNF-α-targeted therapies, detection of LTBI before initiating biotherapies is mandatory and essential to provide preventive anti-TB treatment (9,12). This detection is nowadays based on the classical definition of LTBI, e.g. on the detection of memory T cell responses to mycobacterial antigens, revealing the presence of host sensitization to these antigens (11). The tuberculin skin test (TST) has been the gold standard for this detection for decades, in spite of possible false-positive results in Bacillus Calmette-Guérin (BCG)-vaccinated subjects and in non-tuberculous mycobacteria (NTM)-infected patients (13), and despite possible lower sensitivity in patients suffering from IMID with an immune-suppressive treatment history (14). Therefore, TST has been replaced in several countries by interferon-gamma (IFN-γ) release assays (IGRAs). These blood tests measure the IFN-γ secretion within whole blood or by peripheral blood mononuclear cells (PBMC), upon in vitro stimulation with peptides from the mycobacterial antigens early-secreted antigenic target-6 (ESAT-6), culture filtrate protein-10 (CFP-10), and sometimes TB7.7 (9). These IGRAs, commercially available as the QuantiFERON (QFT) (Qiagen, Hilden, Germany) or the T-SPOT.TB (Oxford Immunotec, Oxford, United Kingdom), are more specific for M. tuberculosis infection than TST, as the antigens used for in vitro stimulation are absent from BCG and most NTM. In addition, they both include positive and negative controls to identify possible false negatives. However, they were reported by several authors to have lower sensitivity than initially thought to detect immune responses to M. tuberculosis antigens (14), so that in Belgium, a low TB incidence country (<10 new cases/100,000 inhabitants/year) with a low BCG vaccination coverage, IGRAs are recommended only in case of doubtful TST results, or to increase sensitivity in patients already receiving immunosuppressive drugs (www.fares.be).
Using either TST or IGRA to detect LTBI before the initiation of TNF-α-targeted agents is however not optimal, as prophylactic anti-TB treatment in these selected patients did not provide them complete protection from developing aTB (15). Therefore, in addition to a careful evaluation of the patient's risk factors for LTBI and chest X-ray radiography to exclude aTB, a dual strategy performing both tests (TST and IGRA) is now largely recommended to reduce any possible risk of developing aTB. The positivity of any of these tests for the diagnosis of LTBI should be considered (16). Unfortunately, neither the TST nor the IGRA allows detection of the patients with the highest risk of reactivation, as they cannot differentiate the newly recognized different stages within the spectrum of LTBI and are positive both in LTBI subjects and in patients with aTB (17).
Given the limitations of the TST and the commercial IGRAs to diagnose LTBI in patients with IMID, and their inability to select among LTBI subjects those who have the highest risk to reactivate the infection, we evaluated in this study the added value of an IGRA based on the latency-associated antigen heparin-binding haemagglutinin (HBHA), reported to detect LTBI with high sensitivity and specificity (18), and we compared the results of the HBHA-IGRA to those of the TST and of the QFT.
Study Population
Forty-nine adult patients suffering from psoriasis were prospectively recruited from the outpatient clinic of the Dermatology department at the "hôpital Erasme" as part of their evaluation before starting biotherapy (Ethics Committee 021/406, P2012/082). TB screening performed for all participants included TST (0.1 ml tuberculin PPD RT23 2 TU, SSI, Copenhagen, DK), chest X-ray, and QFT. TST were read after 72 hours and the results were assessed in the context of the patient's individual TB risk factors. In the absence of TB risk factors, TST-QFT- patients without chest X-ray signs suggesting aTB were considered as non-infected with M. tuberculosis. QFT+ and/or TST+ patients (induration size ≥ 15 mm) were considered as LTBI after exclusion of aTB. In the context of patients at risk to reactivate LTBI, patients with a TST positivity between 10 and 14 mm were also considered as being LTBI (www.fares.be). Four patients were treated with methotrexate at the time of inclusion. Ten others already received anti-TNF-α antibodies and were included in this study before changing their biotherapy. When they were initially evaluated for possible LTBI before their first anti-TNF-α treatment, 3/10 were considered LTBI and received at that time prophylactic anti-TB treatment.
A QFT result was considered positive when the IFN-γ concentration was ≥ 0.35 IU/ml in response to the mycobacterial peptides, after subtraction of the concentration obtained for the unstimulated condition, with a result > 25% of the unstimulated condition.
HBHA-IFN-γ Release Assay (IGRA)
PBMC were isolated from fresh blood samples and in vitro stimulated during 24 hours at 37°C under 5% CO2 with 2 µg/ml HBHA, left unstimulated in culture medium (negative control) or stimulated with 0.5 µg/ml staphylococcal enterotoxin B (SEB, Sigma-Aldrich, Bornem, Belgium) (positive control). IL-7 was added in the culture medium at 1 ng/ml to increase the sensitivity of the 24 hrs assay (19). HBHA was purified from Mycobacterium bovis BCG culture supernatants by heparin-Sepharose chromatography (Sepharose CL-6B; Pharmacia LKB, Piscataway, NJ) (20). The bound material was eluted by a 0-500 mM NaCl gradient and was further passed through a reverse-phase high-pressure liquid chromatography (HPLC; Beckman Gold System), using a Nucleosil C18 column (TSK gel Super ODS; Interchim) equilibrated in 0.05% trifluoroacetic acid. Elution was performed by a linear 0-80% acetonitrile gradient and HBHA eluted at 60% acetonitrile (21). The HPLC chromatogram revealed a single peak and analysis by SDS-PAGE showed a single band after Coomassie-blue staining, indicating the absence of contamination of HBHA with other proteins.
Cell culture supernatants were frozen at −20°C until measurement of secreted cytokines/chemokines. IFN-γ concentrations were measured by ELISA (19). IFN-γ concentrations < 50 pg/ml in the non-stimulated condition and > 200 pg/ml in the positive controls were required for further analysis of the results. When detectable, IFN-γ concentrations obtained under non-stimulated conditions were subtracted from those obtained in response to HBHA. A positive HBHA-IGRA was defined as an IFN-γ concentration ≥ 50 pg/ml, as previously determined by ROC curve analysis comparing results obtained for LTBI subjects to those of non-infected controls (19).
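As an illustration only, the quality-control and positivity rules just described can be expressed as a short procedure in Python; the function and argument names are hypothetical, and the 50 pg/ml and 200 pg/ml values are the ones quoted above.

def interpret_hbha_igra(ifng_unstim, ifng_seb, ifng_hbha):
    """Apply the QC and positivity criteria described above (IFN-gamma in pg/ml)."""
    if ifng_unstim >= 50 or ifng_seb <= 200:
        return "not interpretable"  # QC failure: high background or weak positive control
    net_hbha = max(ifng_hbha - ifng_unstim, 0)  # subtract the unstimulated condition when detectable
    return "positive" if net_hbha >= 50 else "negative"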
Multiple Cytokine/Chemokine Measurements
Based on our previous experience with M. tuberculosis-infected patients, 8 cytokines/chemokines were measured in addition to IFN-γ in the 24 h culture supernatants of HBHA-stimulated PBMC from 12 HBHA-IGRA+ psoriatic patients and from 3 HBHA-IGRA− patients taken as negative controls: granulocyte-macrophage colony-stimulating factor (GM-CSF), IFN-γ, IL-1β, IL-2, IL-6, IL-10, IL-17A, macrophage inflammatory protein 1α (MIP-1α), and TNF-α. The cytokine/chemokine concentrations were measured by Milliplex human cytokine/chemokine kits (Merck, Belgium) according to the manufacturer's instructions, with supernatant dilution factors specific for each analyte to obtain concentrations within the standard curves. Results were analyzed with a Bio-Plex® MAGPIX™ Multiplex reader, Bio-Plex Manager™ MP Software and Bio-Plex Manager 6.1 Software (Bio-Rad Laboratories, Nazareth Eke, Belgium). When detectable, the analyte concentrations in the antigen-free conditions were subtracted from those obtained with antigen stimulation. Concentrations below the detection limit were allocated an arbitrary value of 5 pg/ml, whilst results exceeding the assay's upper limit of detection were attributed the concentration corresponding to this limit. For each marker, the positivity limit was arbitrarily determined as being minimum 4 times the detection limit or maximum 2 times the median concentration obtained for non-infected patients when cytokines/chemokines were detectable. A grey zone of doubtful positivity, defined as ± 20% of the cut-off value, was established for each analyte. A scale representing the intensity of cytokine/chemokine concentrations was established for each analyte, from negative values to doubtful, low and strong cytokine/chemokine concentrations.
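A minimal Python sketch of the reporting rules described above is given below; the per-analyte cut-off is taken as an input because the text defines it separately for each marker, and the boundary used here between "low" and "strong" is an arbitrary illustration rather than a value from the study.

def grade_analyte(conc, detection_limit, upper_limit, cutoff):
    """Clip a multiplex readout and grade it against a per-analyte cut-off (pg/ml)."""
    if conc < detection_limit:
        conc = 5.0  # arbitrary value allocated below the detection limit, as described above
    elif conc > upper_limit:
        conc = upper_limit  # saturated readings set to the assay's upper limit of detection
    if conc < 0.8 * cutoff:
        return "negative"
    if conc <= 1.2 * cutoff:
        return "doubtful"  # grey zone of +/- 20% around the cut-off value
    return "strong" if conc >= 4 * cutoff else "low"  # illustrative split between low and strong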
Statistical Analysis
Differences between several groups were assessed by the non-parametric Kruskal-Wallis test, followed by the non-parametric Dunn test. Differences between HBHA-induced IFN-γ concentrations at two different time points were evaluated by the paired Wilcoxon test. A value of p < 0.05 (*) was considered significant. All results were obtained with the GraphPad Prism software, version 4.0.
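For readers who want to reproduce this kind of analysis, a minimal Python sketch using SciPy is shown below; Dunn's post hoc test is not part of SciPy and is assumed here to come from the third-party scikit-posthocs package, and the arrays are placeholders rather than study data.

import numpy as np
from scipy import stats
import scikit_posthocs as sp  # assumed third-party package providing Dunn's test

# Placeholder IFN-gamma concentrations (pg/ml) for three hypothetical groups
group_a, group_b, group_c = np.random.default_rng(0).lognormal(5, 1, (3, 12))
h_stat, p_kw = stats.kruskal(group_a, group_b, group_c)   # global comparison between groups
p_dunn = sp.posthoc_dunn([group_a, group_b, group_c])     # pairwise post hoc comparisons

# Paired comparison of HBHA-induced IFN-gamma at two time points in the same patients
before, after = np.random.default_rng(1).lognormal(5, 1, (2, 12))
w_stat, p_wilcoxon = stats.wilcoxon(before, after)

print(f"Kruskal-Wallis p = {p_kw:.3f}; paired Wilcoxon p = {p_wilcoxon:.3f}")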
Prevalence of M. tuberculosis Infection in Psoriatic Patients According to Standard Criteria
In Belgium, a low TB incidence country, forty-nine adult patients suffering from psoriasis were prospectively recruited from the outpatient clinic of the Dermatology department (Hôpital Erasme), as part of their evaluation before starting biotherapy. The main demographic and clinical characteristics of these patients are reported in Table 1. Eleven patients had a positive TST (≥ 15 mm) and, in the absence of clinical and/or radiological signs of aTB, were classified as LTBI. Three had a TST induration size between 10 and 14 mm and, in the context of a future biotherapy, they were considered as LTBI as recommended in Belgium, and 35 patients had a negative TST (Figure 1). TST results were probably not influenced by previous BCG vaccination, recorded for 2/14 TST+ and for 4/35 TST− patients (Table 1). To avoid possible false negative TST results in patients with abnormal cellular immune responses due to their pathology and/or their treatment, QFT was performed on all patients, as now largely recommended. The QFT was positive for 8/49 patients, all of them having a positive TST (≥ 10 mm) (Figure 1). TST and QFT results were not correlated (Figure 2A), and the presence of LTBI risk factors was higher in the QFT+ (6/8 = 75%) than in the QFT− (3/6 = 50%) LTBI patients (Table 1). Altogether, this resulted in a pre-selection of 14/49 patients (28.6%) for prophylactic anti-TB treatment.
Thirty-eight of the 49 included patients received anti-TNF-α (n=29) or anti-IL-23/12 (n=9) antibodies after inclusion in this study, and none of them developed aTB during a 2-year follow-up. An alternative therapeutic option was chosen for the other 11 patients. Among the 14 patients pre-selected for an initial prophylactic anti-TB treatment, 7 did not receive it because they reported a past treatment for LTBI (n=5, less than 5 years before their inclusion) or for aTB (n=2, without radiological sequelae) (Figures 1 and 2A, with open symbols for patients with a past treatment).
Added Value of the HBHA-IGRA
As the QFT was recently reported to be less sensitive for detecting LTBI subjects than previously thought, even in a healthy population (14), and as the sensitivity of the HBHA-IGRA was reported to be higher than that of the QFT and to help stratify LTBI subjects into different subgroups (18,22), we evaluated the sensitivity of the HBHA-IGRA to detect M. tuberculosis infection in this cohort of psoriatic patients. Among the 14 TST+ patients, 13 had a positive HBHA-IGRA result, indicating that the HBHA-IGRA was better correlated with the TST than the QFT (Figures 1 and 2B). The only TST+ patient with a negative HBHA-IGRA was a patient with a TB risk factor (nurse) on immunosuppressive treatment (methotrexate), with a TST induration size of 16 mm in spite of a negative QFT (Figure 2B). Among the 13 TST+ HBHA-IGRA+ patients, only 8 had a positive QFT (represented by open symbols in Figure 2B). The results of the HBHA-IGRA were not considered for the decision to provide or to avoid prophylactic anti-TB treatment, as this test was still under investigation in these potentially immunocompromised patients (Figure 1). The HBHA-IGRA was also positive for 12/35 patients who were negative for both TST and QFT (Figures 1 and 2B). The demographic and clinical characteristics of these patients were not different from those of the TST+ patients (Table 1). A trend for lower HBHA-induced IFN-γ concentrations in these patients compared to the TST+ patients was observed, but the differences were not significant (Figure 3). These results indicate that within the whole cohort of psoriatic patients, 51% developed an IFN-γ response to the mycobacterial antigen HBHA.
Serial HBHA-IGRA During Biotherapy
Twelve patients with a positive HBHA-IGRA were re-tested after one or two years of treatment with anti-TNF-α (n=7) or anti-IL-23 antibodies (n=5). Six of them were initially TST+ (with 5 QFT+ and 2/5 prophylactically treated for TB before starting the biotherapy), whereas the other 6 were TST− QFT−. They all remained persistently positive in the HBHA-IGRA, and for 10/12 patients, the HBHA-induced IFN-γ concentrations were even higher during biotherapy than before treatment (p=0.002) (Figure 4). One patient, initially TST− QFT−, had a very strong increase in the HBHA-induced IFN-γ concentration between the two IGRAs (from 231 pg/ml to 39,919 pg/ml), and the QFT became positive at the second blood sampling (13.83 IU/ml) (open circle on Figure 4). This patient reported professional contact with a patient with aTB in the months preceding the second IGRA, so that following this contact, and after exclusion of aTB, he received prophylactic anti-TB treatment for LTBI.
HBHA-Induced Chemokines
To further characterize the HBHA-induced immune responses in psoriatic patients, we analyzed a panel of selected cytokines/chemokines induced by HBHA in 12 HBHA-IGRA+ patients, and compared the results to those obtained for 3 HBHA-IGRA− TST− QFT− patients included as controls. Among the HBHA-IGRA+ patients, 5 were TST+ QFT+, 3 were TST+ QFT−, and 4 were TST− QFT− (Supplementary Figure 1). No HBHA-induced cytokine/chemokine was detected for the 3 psoriatic patients with no identified immune response to M. tuberculosis (TST− QFT− HBHA-IGRA−) and hence considered as non-infected (Figure 5). In contrast, the 12 HBHA-IGRA+ patients were characterized by various profiles of HBHA-induced cytokines/chemokines. A restricted profile of HBHA-induced cytokines characterized TST+ QFT+ psoriatic patients, compared to the TST+ QFT− patients (Figure 5). Most TST+ QFT+ HBHA-IGRA+ patients secreted IL-2, TNF-α and IL-10 in response to HBHA, in addition to IFN-γ, whereas the proportion of these patients secreting GM-CSF, IL-17A, IL-1β, IL-6 and MIP-1α in response to HBHA was very low (Figure 5B), with low concentrations of these chemokines when they were detected (Figure 5A). TST+ QFT− HBHA-IGRA+ patients also secreted IFN-γ, IL-2 and TNF-α in response to HBHA, but they all additionally secreted GM-CSF, IL-6 and MIP-1α, and most of them also secreted IL-10 and IL-1β, and 1/3 secreted IL-17A. All these chemokines were secreted at high concentrations (Figure 5A). These HBHA-induced chemokine profiles were not a consequence of psoriasis but rather reflected the LTBI status of the patients, as they share similar profiles with LTBI subjects who did not suffer from psoriasis (V.C., unpublished). Finally, all HBHA-IGRA+ patients with negative TST and QFT secreted TNF-α, IL-10, IL-1β and IL-6 in addition to IFN-γ in response to HBHA. Most of them secreted GM-CSF and MIP-1α, and 1/4 secreted IL-17A, whereas only 50% of them secreted low concentrations of IL-2 (Figure 5). The profile of HBHA-induced cytokines/chemokines was thus similar in HBHA-IGRA+ patients who were TST− QFT− and in those who were TST+ QFT−.
DISCUSSION
Using the recommended strategy to detect LTBI among psoriatic patients eligible to receive biological treatment, i.e. combining TST and IGRA after evaluation of the patients' risk factors and chest X-ray results, we identified here 28.6% of LTBI psoriatic patients. This is a high proportion of LTBI patients for a low TB incidence country like Belgium, where the prevalence of positive TST among healthy unexposed adolescents is 0.2% (V. Sizaire, FARES, personal communication). Among 54 adults with Crohn's disease evaluated before anti-TNF-α treatment, we found only 3.7% of TST+ QFT+ patients (2/54), a prevalence which remains quite low (V. Corbière, personal communication), whereas the prevalence of positive TST among TB contacts reaches 30% in Belgium (V. Sizaire, FARES, personal communication). Even if 6/50 patients mentioned a contact or a possible contact with a TB patient, the results of this study indicate that the prevalence of LTBI among psoriatic patients is elevated, in agreement with some previous reports also applying TST and/or IGRA-based guidelines for LTBI screening of psoriatic patients before anti-TNF-α treatment (15,23). Based on TST only, 50% of patients with psoriasis who were candidates for biological therapy were treated for LTBI in Greece (23), and up to 20% in Spain (24). In these two studies, the TST cut-off level was ≥ 5 mm, which could at least partially explain the high prevalence of possible LTBI. Based on T-SPOT.TB only, 20% of psoriatic patients screened before anti-TNF-α treatment were treated for LTBI (25). These authors recommended basing the diagnosis of LTBI on T-SPOT.TB only rather than on TST, as most of their patients were BCG vaccinated and as they reported a strong association between the T-SPOT.TB results and the presence of risk factors for LTBI (25). The high prevalence of LTBI among psoriatic patients as defined by a positive TST in the Greek and Belgian studies may be attributed to possible false positive TST results due to previous BCG vaccination or immune responses to NTM. The proportion of BCG-vaccinated patients in our cohort was however low (12%), as systematic BCG vaccination is not recommended in Belgium, and the proportion of TST+ (≥ 10 mm) results attributable to BCG is very low (1%) if tested ≥ 10 years after BCG vaccination (13). Concerning a possible interference of NTM with the positivity of the TST, it remains unlikely even if it cannot formally be excluded. As nicely analyzed by Farhat et al. (13) in an extensive review of the literature and meta-analysis estimating the false positive TST results between 10 and 14 mm due to NTM, this proportion ranged from 0.1% in Montreal or France to a maximum of 2.3% in India (13). False-positive TST results due to immune responses to NTM in our study are unlikely, as only two QFT− patients had a TST induration size < 15 mm (between 10 and 14 mm): one of them reported an active TB history during infancy, and the other was previously treated for LTBI. All the other patients considered as LTBI had a TST induration size ≥ 15 mm. Finally, we cannot formally exclude that false positive TST in psoriatic patients could occur as a result of the pro-inflammatory state of their skin (26). However, if we consider only the QFT results, the incidence of LTBI in our patient cohort reached 16%, which remains higher than in the general population in Belgium. We therefore conclude that psoriatic patients evaluated for LTBI when eligible for a biotherapy are characterized by a high incidence of LTBI.
As previously suggested by Ramagopolan (27), this might be due to a predisposition of patients with past TB to develop an IMID such as psoriasis, as 3 patients reported a past history of TB.
As LTBI is now recognized as covering a heterogeneous group of individuals with different risks of reactivation of the infection, it is widely accepted that different subgroups should be identified based on different immune responses, with the aim of identifying those who are most likely to reactivate the infection (10,11). In view of the high proportion of LTBI patients detected among psoriatic patients by classical tests, this is of utmost importance within these cohorts of patients to avoid unnecessary and potentially toxic preventive anti-TB treatment. By evaluating here, in addition to the QFT, the IFN-γ response to a latency-associated mycobacterial antigen, HBHA, and by also analyzing a panel of other chemokines induced by this antigen, we identified different subgroups of psoriatic patients based on their immune responses to mycobacterial antigens. The HBHA-IGRA was positive in all but one TST+ patient, and may therefore eventually be proposed to replace the TST, which is difficult to perform in psoriatic patients with extensive skin lesions. Among the 13 TST+ HBHA-IGRA+ patients, only 8 had a positive QFT, defining two different groups of patients with an immune response to mycobacterial antigens. The analysis of a large array of chemokines and cytokines induced by HBHA in these psoriatic patients further allowed us to substantiate the existence of two clearly distinct subgroups. Whereas TST+ HBHA+ QFT− patients secreted several chemokines (IL-1β, IL-6, MIP-1α, GM-CSF), as well as IL-2, TNF-α, and for some of them IL-17A, reported to play a role in protection against TB (28), TST+ HBHA+ QFT+ patients had a more restricted profile of cytokines induced by HBHA. As HBHA was reported to be a protective antigen against TB in mouse models of vaccination with HBHA followed by a challenge with M. tuberculosis (29,30), and as, in humans, HBHA-immune responses are more common in LTBI subjects and in treated aTB patients than in untreated patients with aTB (18,21,29), our results suggest that the broad array of HBHA-induced chemokines associated with a negative QFT may identify patients with a lower risk of reactivation of the M. tuberculosis infection. QFT+ patients are, in contrast, probably those with a higher risk of reactivation, as they also have increased frequencies of M. tuberculosis antigen-induced regulatory T-cell subsets (31), known to be preferentially elevated in patients with aTB (32).
Combining the results of the HBHA-induced immune responses with those of the QFT may therefore help to stratify the LTBI psoriatic patients into different subgroups and to identify patients who should be prioritized to receive prophylactic anti-TB treatment before starting biotherapy, namely those with a positive QFT, and not those with an isolated positive HBHA-IGRA, who are better protected by their immune responses against an eventual reactivation of M. tuberculosis infection. This proposed attitude would have resulted in the prophylactic treatment of only 2/49 patients (4%) instead of 7/49 (14%) in our cohort of psoriatic patients. We further identified a third group of psoriatic patients with positive immune responses to mycobacterial antigens. A subgroup of patients had a positive HBHA-IGRA in spite of a negative TST and a negative QFT. Similarly to the results obtained in TST+ patients, these HBHA-IGRAs were persistently positive, often with higher responses after 1 or 2 years of biotherapy than before treatment. This suggests the existence in these patients of mycobacteria-specific memory immune responses and is consistent with a rise in intensity of IGRA responses reported previously during biotherapies (33). These HBHA-IGRA+ TST− patients had less frequent LTBI risk factors than the TST+ QFT+ LTBI patients, and we cannot formally exclude a possible interference from immune responses to M. avium in these patients, as HBHA is produced by this NTM as well (34). However, HBHA proteins produced by different mycobacteria differ in their structure and activity (34), and the importance of the precise amino acid sequence and of the methylation pattern of HBHA for its recognition by T cells from LTBI subjects was demonstrated (35). Interestingly, the HBHA-induced chemokine and cytokine profiles in these HBHA-IGRA+ TST− patients were very similar to those found for TST+ QFT− HBHA-IGRA+ patients. The induction by HBHA of IL-1β and IL-6 secretion in both TST+ QFT− and TST− QFT− patients further suggests the possible presence in these patients of innate memory cells, as described in association with trained immunity induced by previous BCG vaccination (36,37). These HBHA-induced immune responses do not, however, imply that all these psoriatic patients have an enhanced risk of TB reactivation. On the contrary, these HBHA-induced immune responses may contribute to a better protection of these patients against a reactivation or a new infection with M. tuberculosis. The development of LTBI (TST+ QFT+) after exposure to a TB index case, reported here in a psoriatic patient under anti-TNF-α treatment who initially had an immune response to HBHA with a negative TST, supports this hypothesis and suggests that this patient was at least partially protected against the development of aTB disease.
We conclude that the incidence of LTBI in psoriatic patients is high, even in a low TB incidence country, and that sensitive immunological tests should be used to detect it. Combining different immunological tests may help to select patients who should be prioritized to receive prophylactic anti-TB treatment before starting biotherapies. Based on the indirect evidence of protective immune responses against aTB induced by HBHA in humans and on direct evidence in animal models, we propose that HBHA-IGRA+ QFT− patients should not be prioritized to receive anti-TB prophylaxis before anti-TNF-α treatment, but that the persistence of their protective anti-HBHA immune response during treatment should be monitored. However, more information on the predictive value of HBHA-induced immune responses for protection against aTB development in psoriatic patients is still needed.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The study involving human participants was reviewed and approved by the Comité d'éthique hospitalo-facultaire Erasme-ULB (021/406). The patients/participants provided their written informed consent to participate in this study.
|
v3-fos-license
|
2016-03-14T22:51:50.573Z
|
2015-12-23T00:00:00.000
|
1904362
|
{
"extfieldsofstudy": [
"Materials Science",
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-666X/7/1/1/pdf",
"pdf_hash": "73ea6ffca37cb37c3caa393e3e2bc4f88a99aec6",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44526",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"sha1": "73ea6ffca37cb37c3caa393e3e2bc4f88a99aec6",
"year": 2015
}
|
pes2o/s2orc
|
Miniature Microwave Notch Filters and Comparators Based on Transmission Lines Loaded with Stepped Impedance Resonators (SIRs)
In this paper, different configurations of transmission lines loaded with stepped impedance resonators (SIRs) are reviewed. This includes microstrip lines loaded with pairs of SIRs, and coplanar waveguides (CPW) loaded with multi-section SIRs. Due to the high electric coupling between the line and the resonant elements, the structures are electrically small, i.e., dimensions are small as compared to the wavelength at the fundamental resonance. The circuit models describing these structures are discussed and validated, and the potential applications as notch filters and comparators are highlighted.
Introduction
Stepped impedance resonators (SIRs) were proposed in the late 1970s as electrically small semi-lumped (planar) resonant elements useful for the realization of microwave filters [1–3]. These resonators are typically (although not exclusively) implemented by means of a tri-section structure where a narrow strip (high impedance section) is sandwiched between two wide (and hence low impedance) sections. The typical topology is depicted in Figure 1a, whereas Figure 1b shows the topology of the folded-SIR. At the fundamental resonance, both topologies exhibit an electric wall at the bi-section plane of the resonator (indicated in the figures), and there is an electric dipole moment orthogonal to this plane at such resonance frequency. Thus, both structures can be excited by means of a time-varying electric field, with a non-negligible component in the direction of the electric dipole moment. The folded-SIR, however, can be driven not only electrically, but also by means of a time-varying magnetic field applied orthogonal to the plane of the resonator, since there is also a magnetic dipole moment in that direction [4]. Folded SIRs are electrically small and can be useful as an alternative to split ring resonators (SRRs) [5] for the implementation of negative effective permeability metamaterials [4]. SIRs (including meandered SIRs and multi-section SIRs) and folded-SIRs have found numerous applications in microwave engineering, where size reduction has been a due [1–3,6,7].
In most of the previous applications, the resonators are coupled or attached to a host transmission line. A high level of miniaturization has been achieved in SIR-based coplanar waveguide (CPW) structures, where elliptic filters [6] and radiofrequency (RF) barcodes (or spectral signatures) have been demonstrated.
In this paper, we will review some of the applications of transmission lines loaded with SIRs. Specifically, the focus is on asymmetric structures, or symmetric structures that can be made asymmetric by appropriately loading the SIRs. We will consider both microstrip and CPW transmission lines loaded with SIRs, and the applications include dual-band microwave notch filters and comparators (i.e., structures able to detect defects or abnormalities in samples, as compared to a reference). In Section 2, the microstrip line loaded with a pair of SISS is analyzed and modeled, and the model is validated experimentally. Section 3 deals with SIR-loaded CPWs, where two different structures are considered: a 5-section SIR-loaded CPW, where the central (wide) section of the 5-SIR is capacitively coupled to the CPW, and the same structure (5-SIR) but directly connected to the central strip of the CPW through metallic vias. In both cases, the 5-SIR is etched in the back substrate side of the CPW transmission line. The models of both structures are also presented and validated. In Section 4, the applicability of these SIR-based structures to dual-band notch filters and comparators is demonstrated. Finally, the main conclusions are highlighted in Section 5.
Microstrip Line Loaded with Pairs of SISSs
Figure 2a depicts a microstrip line section loaded with a pair of SISS. Assuming that the microstrip line is electrically short, and that there is a high impedance contrast between the narrow and wide sections of the SISS, the structure can be described by the lumped element equivalent circuit shown in Figure 2b [9]. The model considers the general case of an asymmetric structure, where the SISS are modeled by the inductances L1,2 and the capacitances C1,2, and the microstrip line section is accounted for by the capacitance C and the inductance L. The coupling (magnetic) between the two SISS cannot be neglected and is modeled through the mutual inductance M (such coupling is negative because the currents in the inductances flow in opposite directions). Losses are not considered in the model.
The transmission zeros of the structure are given by those frequencies that null the reactance of the series branch, Equation (1). Expression (1) can be easily inferred from the transformed model of Figure 2b, depicted in Figure 3a. If the structure is symmetric (i.e., L1 = L2 ≡ Lr and C1 = C2 ≡ Cr), the mathematical solutions of Equation (1) are ω+ and ω−. However, ω− is not actually a physical solution since it nulls the denominator of the reactance (obviously, for the symmetric case only one notch at the fundamental frequency of the SISS is expected, as results from the circuit model depicted in Figure 3b). Thus, the mutual coupling between the two inductors of the two SISSs has the effect of increasing the notch frequency (symmetric case).
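Although the equations are not reproduced here, the symmetric-case result can be sketched from the lumped model just described. Assuming a simplified model in which the two identical Lr–Cr stubs load the same node of the electrically short line and their inductors share the (negative) mutual inductance M, the notch occurs where the net shunt reactance vanishes; the LaTeX below is a reconstruction consistent with the qualitative statements above, not necessarily the exact expressions of the original analysis.

\omega_{+} = \frac{1}{\sqrt{C_r\,(L_r + M)}} \;>\; \frac{1}{\sqrt{L_r C_r}} \qquad (M < 0), \qquad \omega_{-} = \frac{1}{\sqrt{C_r\,(L_r - M)}},

with ω− coinciding with a pole of the reactance, so that only ω+ produces a notch; the negative magnetic coupling therefore shifts the single notch of the symmetric structure upwards in frequency, as stated above.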
The validation of the model has been done by comparing full wave electromagnetic simulations with circuit simulations with extracted parameters of different structures. To extract the parameters, we have first considered microstrip lines loaded with a single SISS, following a procedure reported in [9], and similar to that reported in [10]. Then M has been obtained in the structures loaded with pairs of SISS by curve fitting. The agreement is good, as depicted in Figure 4, where the responses of three different structures are presented. One of such structures is symmetric, whereas the other two are obtained from the first one by increasing or decreasing one of the capacitances, as indicated. The responses of the microstrip lines loaded with single SISS are also indicated, so that the positive shift of the transmission zero for the symmetric structure can be appreciated. Note also that the agreement with the responses of the fabricated structures is also good (except by the effect of losses, not considered in the model, and fabrication related tolerances).
CPW Loaded with 5-SIRs
Figure 5a depicts a CPW line section loaded with a 5S-SIR, etched in the back substrate side. The equivalent circuit model is depicted in Figure 5b [11], where L and C are the inductance and capacitance of the CPW line section, and L1,2 and C1,2 describe the inductances and capacitances of the middle and external sections, respectively, of the 5S-SIR. The 5-SIR is electrically coupled to the line through Cc, the broadside capacitance between the central strip of the CPW and the central section of the 5S-SIR. Finally, the magnetic coupling between the two inductances of the resonator is accounted for by M (negative, for the reasons explained in reference to the SISS-loaded microstrip line of the previous section). Since the considered structure is electrically short, it is reasonable to assume, to a first order approximation, that the slot mode is not generated (the ports in the electromagnetic simulation and the connectors in the measurement act as air bridges, effectively connecting the two ground plane regions).
In this case, the transmission zero frequencies are again given by Equation (1), with the parameters of the series branch redefined for the 5S-SIR-loaded CPW. If the structure is symmetric (i.e., L1 = L2 ≡ Lr and C1 = C2 ≡ Cr), the mathematical solutions are of the same form, ω+ and ω−. However, ω− is not actually a physical solution since it nulls the denominator of the reactance. Thus, the mutual coupling between the two inductors of the 5-SIR has the effect of increasing the notch frequency for the symmetric case, i.e., a behavior identical to the one of the microstrip line loaded with a pair of SISS.
A variation of the previous CPW structure consists of a direct connection (through vias) of the 5-SIR to the central strip of the CPW, as depicted in Figure 6. This effectively shorts the capacitance C c , and the resulting circuit model is identical to the one depicted in Figure 2b.
The validation of the models of these CPW loaded structures has been also carried out by comparison between the frequency responses inferred from full wave electromagnetic simulation and the responses derived from circuit simulation with the parameters conveniently extracted. For the structure of Figure 5a the parameter extraction method is more complex (as compared to the one of the previous section) since we have an additional parameter, namely, C c (the details can be found in [11]). Indeed, the procedure first considers the structure with vias, so that all the parameters, except C c , are determined; then C c is determined by curve fitting.
Three different structures have been considered, shown in Figure 7 (dimensions and substrate parameters are indicated in the caption). One is symmetric and the other two are asymmetric, where the two asymmetric structures are derived from the symmetric one by increasing or decreasing the area of one of the external patch capacitors, while the other external patch capacitors of these two asymmetric structures keep the same dimensions as in the symmetric one. The element values of the circuit model for the symmetric structure are L = 3.49 nH, C = 1.21 pF, Lr = 4.20 nH, Cr = 2.65 pF, M = −1.08 nH, and Cc = 3.62 pF. The comparison of the electromagnetic simulation (using Keysight Technologies Momentum, Keysight Technologies Inc., Santa Rosa, CA, USA) and circuit simulation of the symmetric structure is shown in Figure 8 (the measurement data is included as well), where good agreement can be appreciated, pointing out the validity of the proposed model.
For the asymmetric cases, the small external patch inductance and capacitance of the 5-SIR have been found to be 4.30 nH and 0.97 pF, and the big external patch inductance and capacitance of the 5-SIR have been found to be 4.26 nH and 4.53 pF, whereas the mutual inductances for these two cases have been found to be −1.19 nH and −1.06 nH, respectively, i.e., very similar values, and also similar to the value corresponding to the symmetric structure. This indicates that M is scarcely dependent on the dimensions of the patch capacitances of the 5S-SIR, as expected. The resulting middle patch capacitances of the small and big structures are 3.61 pF and 3.75 pF, respectively. The agreement between the electromagnetic simulation, circuit simulation and measurement for the two asymmetric cases (Figures 9 and 10) is reasonable.
For the structures with vias, good agreement between circuit and electromagnetic simulation has also been obtained, as Figure 11 reveals (these structures have not been fabricated).
Figure 9. Electromagnetic simulation, circuit simulation and measurement response for the asymmetric structure of Figure 7b. Reprinted with permission from [11].
Figure 10. Electromagnetic simulation, circuit simulation and measurement response for the asymmetric structure of Figure 7c. Reprinted with permission from [11].
Application to Microwave Notch Filters and Comparators
According to the results of the previous subsections, the symmetric structures can be used as single broadband notch filters. Dual-band functionality is achieved by asymmetrically loading the line, and expression Equation (1) can be used to set the position of the notches. Note that the SISS-loaded microstrip line and the 5-SIR-loaded CPW without vias exhibit a wideband notch and a very narrow notch, whereas for the 5-SIR-loaded CPW with vias, the width of both notches is comparable. The reason is that in the CPW with vias there is not a coupling capacitance (Cc) in the shunt branch, and this favors sensitivity (also influenced by the mutual inductance, M). Thus, depending on the application (i.e., notch width requirement), one structure or the other may be more convenient.
In order to use the structures as microwave comparators, the SIR or SISS loaded lines must be symmetric. If line loading (dielectric or metallic) is symmetric, then the structure is expected to exhibit a single notch in the frequency response, whereas if the loading is asymmetric, two notches
In order to use the structures as microwave comparators, the SIR or SISS loaded lines must be symmetric. If line loading (dielectric or metallic) is symmetric, then the structure is expected to exhibit a single notch in the frequency response, whereas if the loading is asymmetric, two notches separated a distance depending on the level of asymmetry are expected. Thus, the reported structures are useful to determine differences between a sample under test (SUT) and a reference sample (i.e., compare the two samples). To demonstrate the potential of these structures as comparators, the symmetric structure of Figure 7a has been loaded with a dielectric load (consisting of a small piece of Rogers RO3010 substrate with the copper removed from both substrate sides) placed on top of one of the patch capacitances. The measured response, shown in Figure 12, exhibits two notches, indicative of the asymmetric loading. Then, we have repeated the experiment by using the same piece of substrate but keeping the metal layers (metallic loading). The measured response is also included in Figure 12, where it can be seen that the depth of the first notch is superior (as compared to dielectric loading), since the structure is more sensitive to the effects of a metallic layer placed on top of one of the patch capacitances. The reason is that adding a metal increases more effectively the capacitance of the patch, as compared to dielectric loading.
As sensors, the SIR-based structures discussed in this paper belong to the category of resonance frequency splitting sensors. However, there is also another type of sensing structures based on symmetry properties: Coupling modulated resonance based sensors [12]. In this case, the sensor is based on a transmission line loaded with a single (symmetric) resonant element, the symmetry plane of the line and resonator are aligned, and these planes are of different electromagnetic nature. One of them is a magnetic wall, and the other one is an electric wall. Under these conditions, the resonator is not coupled to the line. However, by truncating symmetry, line to resonator coupling arises, producing a notch in the transmission coefficient, and the depth of this notch depends on the level of asymmetry, since it determines the coupling level. Several sensing structures based on these principles have been proposed (several of them by the authors) [13][14][15][16][17]. For instance, split ring resonant elements or complementary split rings have been used for sensing purposes. By using SIRs, the sensors are small (this extends also to notch filters) since the coupling with the host line is broadside. This is the main advantage over other sensors of this type based on other resonant elements. Also, ground plane etching is avoided (contrary to sensors based on complementary resonant elements).
Figure 12. (a) Measured responses of the structure of Figure 7a with asymmetric dielectric loading and metallic loading; (b) fabricated prototypes loaded with dielectric loading and metallic loading, respectively. Reprinted with permission from [11].
Concerning the demand of SIR based sensors or comparators in microwaves, applications include sensors for dielectric characterization, quality control, and microfluidics, among others. In the paper, proof-of-concept demonstrators are presented. Since the electric field below the patches is high, significant sensitivity can be potentially achieved by using multilayer structures and arrangement of the structures under test in those regions.
Conclusions
In summary, it has been shown that miniature microwave notch filters and comparators can be implemented by means of transmission lines loaded with stepped impedance resonators (SIRs), including stepped impedance shunt stubs (SISS) in microstrip technology, and 5-section SIRs (5-SIRs) in coplanar waveguide (CPW) technology. The lumped element equivalent circuit models of these electrically small planar structures have been proposed and validated, and an analysis that has led us to find the position of the transmission zero frequencies has been carried out. Finally, a proof of concept of microwave comparators has been presented.
|
v3-fos-license
|
2020-12-17T09:06:22.736Z
|
2020-11-13T00:00:00.000
|
230643548
|
{
"extfieldsofstudy": [
"Political Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.31149/ijie.v3i11.842",
"pdf_hash": "ccd87e069e6416d0a2f1d90818bf4dba39c1cbea",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44528",
"s2fieldsofstudy": [
"Education"
],
"sha1": "b66497a71a7034b372effeef5555c5c51bc8661b",
"year": 2020
}
|
pes2o/s2orc
|
ASSESSMENT AND EVALUATION STRATEGIES FOR BOOSTING TEACHING AND LEARNING IN NIGERIA SECONDARY SCHOOLS
This paper discussed secondary education as well as governments' rationale for its establishment. The paper also discussed assessment and the various strategies for the assessment of teaching-learning processes at this level. Moreover, the paper discussed evaluation of teaching and learning, and the strategies that could be employed in executing evaluation in schools, with explicit key differences between assessment and evaluation highlighted. In all, the paper concluded that assessment and evaluation are vital procedures for boosting teaching and learning activities in secondary schools in Nigeria; and suggested that assessment and evaluation strategies be enshrined in the secondary school curriculum, that teachers be regularly trained and re-trained in the art of assessment and evaluation, that government and education ministries should provide the needed tools and instruments for implementing assessment and evaluation of the teaching-learning processes, and that education inspectors should frequently visit secondary schools to ascertain teachers' level of compliance with government policies on assessment and evaluation.
Introduction
Secondary education is the conventional link between the primary and tertiary levels of education. It is the level of education designed for the training of young minds who, at the time, undergo varying emotional, physiological and psychological changes. As cited in Molagun (2006), Taiwo (1985) defined secondary education as an institution where students are admitted after the satisfactory completion of their primary education as prescribed by the government education syllabi and curriculum. It is the education received by children between the ages of eleven-plus and fifteen to twenty-two years. The secondary school is a means through which violence, chaos and conflict-oriented tendencies can be curtailed among students. It is also an avenue through which such negative social behaviours can be prevented or eradicated completely (Molagun, 2006).
The rationale for government establishment of secondary education, especially in Nigeria, according to Taiwo (1985) were: (a) To provide quality education for students regardless of their social background; (b) To diversify the curriculum in meeting students' needs and in catering for their talents; (c) To instill versatility, industry and self-reliance among students; (d) To enlighten students on their duties, obligations and privileges as citizens of Nigeria; (e) To teach, develop and project the Nigerian culture, languages and arts among students; (f) To develop a sense of spiritual and moral values, integrity and uprightness among students; (g) To raise citizens with high thinking ability, respect for the views and feelings of others and respect for the dignity of labour; (h) To produce students who will foster the unity of Nigeria. Learning can be referred to as the process of knowledge, attitude and skills acquisition, involving behavioral changes in an individual. Students' learning is enhanced by their active involvement in the learning activity; providing positive reward reinforcements; stimulating fresh experiences among students; providing a rich and varied environment for learning; designing learning in a more structured pattern, and where the knowledge obtained is transferable and applicable (Gbamaja, 1991). Secondary school students learn a lot of subjects at the same time, as their learning can take the form of regular or elective learning based on subjects.
The compulsory subjects are mandatory for all students in their different levels, whereas the elective ones are mandatory throughout the school year for those students who opt for them. Usually, in the Nigeria secondary school setting, students' learning is divided into the sciences, the social sciences, the arts and the commercials. Students aspiring to pursue a career from the aforementioned are to mandatorily enroll and attend the classes so planned. In all, a minimum of seven subjects and a maximum of eleven subjects are allowed for students.
The teacher is fundamental for the effective implementation of the curriculum. Teaching is an activity done deliberately in a specialized manner, leading to positive changes in the learner (Dorgu, 2015); it is the art of instilling knowledge and a way of helping the learner obtain the right attitude and skills through a series of planned activities (Buseri & Dorgu, 2011). Awotua-Efebo (2001) defined teaching as an interface between a teacher and a student under the teacher's guidance, in order to bring about the expected change in students' behaviour. For teachers to teach well, they should be guided by the necessary rules or principles of teaching and learning, which will determine students' learning outcomes. Teachers at the secondary school level teach a number of different classes made up of students of different age groups, abilities, attitudes and experiences; and these teachers, more often than not, have the opportunity to teach subjects reflecting their area of interest and specialisation. Teaching, according to Omieibi-Davies (2011), can bring about an increase in the understanding of and information on the subject matter, and the acquisition of psychomotor skills, habits and abilities. For teaching and learning to be exciting, enriching and accessible, the teacher should be creative and hard working, funny, less demanding, inspiring and absorbing. A successful secondary school teacher should also be career driven, resilient and an excellent time manager, who is able to work well under pressure either alone or as part of a team. Secondary school teachers should possess the following skills, amidst others: be a relationship builder, have excellent communication skills, be a good role model, be brilliant and intelligent, be versatile, be accommodating, be a problem solver, plan his or her teaching, and be able to guide students in attaining the objectives of the subject matter, to mention but a few.
These teachers have freedom with regard to achieving the set educational goals by adopting any of these teaching methods: cognitive development methods (talk chalk/recitation method, discussion method, field trip/excursion method, questioning/Socratic method, team teaching method); affective development methods (simulation method, modelling method, simulation games, role-playing method, dramatic method) and psychomotor development methods (discovery method, laboratory/experimentation method, inquiry method, process approach method, project method, programmed learning method, demonstration method, Dalton plan/assignment method, mastery learning method, microteaching method). For learning and teaching to be effective and efficient, assessment and evaluation strategies must be entwined into them. Hence, the purpose of this paper is to expose the various strategies for assessing and evaluating teaching and learning activities in Nigeria secondary schools.
Assessing students' learning is significant in the teaching-learning process (Earl, 2012). It is a procedure for ascertaining the nature of teaching as well as directing the extent to which students achieve in their learning endeavours (Wiliam, 2011). According to Tulu et al. (2018), assessment is the process of measuring students' knowledge within a subject context through a quiz, test or assignments. It concerns how students perform at the end of an instruction. Usually, classroom assessment is expected to promote additional improvement in student learning by factoring in learning experiences and procedures during instruction, as classroom-oriented assessments are likely to assist learners to know their areas of weaknesses and strengths (Linn & Gronlund, 2005). Assessment for learning is more teacher-centered and provides a platform for determining how to improve students' engagement, learning and performance (Black, Harrison, Lee, Marshall & Wiliam, 2007).
Assessment Strategies are techniques applied to teaching and learning activities and are for gathering information which can help teachers get the needed insight and feedback into their own teaching and into students' learning activities (Black & Wiliam, 2005). The role of assessment is to ascertain students' learning and to serve as a tool of reflection for teachers for the purpose of improving their teaching. The students' achievement information that is gathered from an assessment process, if accurate and valid, will be an advantage towards effective instruction while helping the teachers in providing appropriate feedback (Martínez, Stecher & Borko, 2009; Earl & Katz, 2006). Studies have shown that assessment of students is advantageous in encouraging them to take up learning responsibilities and to be proactive about learning, fostering good interactions between them and their fellow students and teachers, providing opportunities for students' self- and peer assessment, and helping students understand their next steps of learning.
Assessment Strategies for Secondary Schools
i. Diagnostic assessment: This assessment provides information on the strengths and weaknesses of students within a learning activity. The teacher can also apply this information in adapting to better teaching practices that meet students' needs. It provides school inspectors with information to understand the needs of the schools within their districts or locality, enabling them to provide relevant support to the teaching staff and for their professional development. The information can also be shared with students' parents with the aim of making them participate in the learning activities of their wards. Diagnostic assessment is sometimes referred to as pre-assessment.
ii. Portfolio assessment: Portfolios refer to a collection of samples of student work from classroom activities and can document a broad range of students' competencies, as they provide exact evidence of what a student knows and can do, rather than time-pressured tests. Portfolio assessment assists students in critically evaluating their work, abilities, growth and progress in learning.
iii. Formative assessment: A formative assessment is applied in the initial stage of instruction planning and development. It is aimed at monitoring students' learning for feedback purposes. It helps in identifying gaps in teachers' instructional plans. This assessment provides students with the timely, specific feedback that they need to make adjustments to their learning. It is also known as assessment for learning.
iv. Summative assessment: Summative assessment measures the effectiveness of learning, how students react to the instruction and the benefits accruing from the teaching-learning activities. It is an assessment aimed at measuring the extent to which the teaching and learning objectives are achieved. It provides information about student achievement and is also known as assessment of learning.
v. Confirmative assessment: Confirmative assessment is carried out after instruction and is meant to find out if the teacher's instruction technique is still a success after a given time frame, such as a term, a session, etc. It is an extension of the summative assessment.
vi. Norm-referenced assessment: This is an assessment aimed at comparing students' performance against an average norm (e.g. a national, state or local government norm). It could also be when a teacher compares the average grade of his or her students against the average grade of the entire school.
vii. Criterion-referenced assessment: This assessment measures students' performances against a fixed set of predetermined criteria or learning standards. It checks what students are expected to know and be able to do at a specific stage of their learning.
viii. Ipsative assessment: This is an assessment in which a student's present performance is measured against his or her previous performances.
ix. Interim assessment: This assessment is administered during instruction and is designed to evaluate students' knowledge and skills relative to a specific set of goals, in order to inform decisions in the classroom and beyond.
x. Peer and self-assessment: This assessment is aimed at supporting students' metacognitive skills. It is a situation where students engage in measuring their prior knowledge while using it for new learning. Self-assessment helps students develop critical awareness and reflexivity (Dearnley & Meddings, 2007). Students use this information for making adjustments, improvements and necessary changes (Kajander-Unkuri, Meretoja, Katajisto, Saarikoski, Salminen, Suhonene, & Leino-Kilpi, 2013). Peer assessment, on the other hand, is a process where individuals of similar status evaluate the performance of their peers and provide feedback; it can also help students develop a critical attitude towards their own work and that of others (Mass, Sluijsmans, Van der Wees, Heerkens, Nijhuis-van der Sanden, & van der Vleuten, 2014).
Evaluation of teaching and learning activities in secondary school is also necessary for the efficiency of the educational system. It can take the form of teachers' appraisal, school and system evaluation. Evaluation can be defined as the process of determining the worth of a program or an intervention. The main rationale for evaluation is to proffer a valid and reliable judgment for decision making. Various strategies of evaluation are available depending on the information that needs to be assessed at any point in time. These are outlined below:
i. Formative Evaluation: Formative evaluations are evaluations that occur during the process or implementation stage of a program (e.g. a learning activity). These evaluations are used to measure how well the process is proceeding and if changes are necessary. Formative evaluation is continuous and diagnostic, and focuses on what and where students are doing well and areas where they need to improve in regard to their future performance (Gaberson, Oermann & Scellenbarger, 2015).
ii. Outcome Evaluations: Outcome evaluations measure the short-term impact of implementing programs. The evaluation gives information on how well the program is reaching its target audience.
Differences between Assessment and Evaluation
The significant differences between assessment and evaluation are discussed below:
i. Assessment entails collecting, reviewing and using information for the purpose of improving a current performance, while evaluation is the process of passing judgment, based on set criteria and evidence.
ii. Assessment is diagnostic in nature, as it tends to identify areas of improvement, while evaluation is judgmental, as it aims at providing an overall grade.
iii. Assessment provides feedback on performance and ways to enhance performance in future, while evaluation ascertains whether the standards so set are met or not.
iv. Assessment is to increase quality, whereas evaluation is to judge the quality of a programme.
v. Assessment is concerned with process, while evaluation focuses on product.
vi. In assessment, feedback is based on observation, but evaluation feedback relies on the level of quality as per the set standard.
vii. In assessment, the relationship between the assessor and the assessee is reflective (internally defined), while in evaluation, the evaluator and evaluatee share a prescriptive relationship (externally defined).
viii. Assessment criteria are determined by the parties involved, whereas in evaluation, the criteria are set by the evaluator.
ix. In assessment, the measurement standards are absolute, but in evaluation they are comparative (Surbhi, 2016).
Conclusion
Assessment and evaluation are vital procedures for boosting teaching and learning activities in Nigeria secondary schools. The various assessment and evaluation strategies as detailed in this paper can be applied in the assessment and evaluation of teaching-learning endeavours at this level. Teachers, learners and all other education stakeholders should be familiar with the strategies for implementing teaching and learning assessment and evaluation.
In view of these, the paper hereby suggests that:
i. Vivid assessment and evaluation strategies be enshrined in the secondary school curriculum;
ii. Teachers be regularly trained and re-trained in the art of assessment and evaluation;
iii. Government and education ministries should, as a matter of urgency, provide the various tools and instruments needed for the actual execution of teaching and learning assessment and evaluation;
iv. Officials and inspectors from the education ministries should make it a point of duty to always visit secondary schools in their states and districts so as to ascertain teachers' level of compliance with government policies on assessment and evaluation.
|
Evaluation of Full-Length Versus V4-Region 16S rRNA Sequencing for Phylogenetic Analysis of Mouse Intestinal Microbiota After a Dietary Intervention
The composition of microbial communities is commonly determined by sequence analyses of one of the variable (V) regions in the bacterial 16S rRNA gene. We aimed to assess whether sequencing the full-length versus the V4 region of the 16S rRNA gene affected the results and interpretation of an experiment. To test this, mice were fed a diet without and with the prebiotic inulin and from cecum samples, two primary data sets were generated: (1) a 16S rRNA full-length data set generated by the PacBio platform; (2) a 16S rRNA V4 region data set generated by the Illumina MiSeq platform. A third derived data set was generated by in silico extracting the 16S rRNA V4 region data from the 16S rRNA full-length PacBio data set. Analyses of the primary and derived 16S rRNA V4 region data indicated similar bacterial abundances, and α- and β-diversity. However, comparison of the 16S rRNA full-length data with the primary and derived 16S rRNA V4 region data revealed differences in relative bacterial abundances, and α- and β-diversity. We conclude that the sequence length of 16S rRNA gene and not the sequence analysis platform affected the results and may lead to different interpretations of the effect of an intervention that affects the microbiota. Supplementary Information The online version contains supplementary material available at 10.1007/s00284-022-02956-9.
Introduction
The composition of the gut microbiota has been associated with a variety of pathophysiological conditions, including obesity, low-grade inflammation, and overt disease [1][2][3]. We [4][5][6] and others [7,8] have exploited possibilities to beneficially affect microbiota using probiotics or dietary compounds that affect the composition and/or activity of the gut bacteria. To determine the success of an intervention, the composition of the gut microbiota is commonly determined by massive parallel sequencing of one of the variable (V) regions of the bacterial 16S rRNA gene [9]. Sequence analysis of the V-region of 16S rRNA has proven to be a powerful tool to describe the composition of bacterial communities [10,11]. However, the resolution of the taxonomic description of the communities is limited by the uniqueness of the V-region sequences and available reference databases [12]. Numerous different bacterial species have almost identical V-region sequences, which makes distinguishing these bacteria based on a single V-region impossible. The currently available 16S reference databases that are used for taxonomic classification of 16S sequencing data are still quite limited and do not contain a reference sequence for all experimentally obtained 16S sequences [13,14]. Therefore, some 16S V-region sequences can only be assigned up to the family and/or genus level or cannot be assigned at all.
Massive parallel sequencing of 16S rRNA V-regions has been made possible by the development of next-generation sequencing technology (NGS). A typical NGS run on an Illumina MiSeq will provide several million 250 bp paired-end reads per flow cell. The advantage of high throughput is countered by the relatively short reads that are produced by NGS. Although many of the limitations of short-read sequencing can be addressed using computational approaches, it is extremely challenging, if not impossible, to assemble longer sequences composed of highly homologous parts. Examples of this are repeated sequences in the human genome, but also the repeated sequences in the genomes of various bacteria that constitute the microbiota. A number of so-called third-generation sequencing technologies have been developed to overcome these limitations by sequencing very long amplicons. One such approach is developed by Pacific BioSciences (PacBio) and is termed single-molecule real-time (SMRT) sequencing [15].
We aimed to assess whether sequencing the full-length 16S rRNA gene using SMRT sequencing affected the results and interpretation of a dietary intervention compared to sequencing only the V4 region of this gene. This study included two experimental conditions; a Western-type diet (WTD) and a WTD complemented with the fibre inulin. Inulin is a fructose polymer that can only be degraded by intestinal microbiota and therefore strongly favours the expansion of specific intestinal microbiota [16][17][18][19]. To compare the effects of the dietary intervention measured on either the PacBio or Illumina MiSeq platform, we performed taxonomic analysis and diversity analysis on primary and derived data sets.
Cecum Samples
Cecum content was collected for microbial analysis. The cecum samples used in this study were obtained in the context of a larger study of which the results were published recently [5].
PacBio Sequencing
16S rRNA full-length amplification was performed using degenerate primers containing 5' M13 universal tail sequences (Table S1). The 16S locus was amplified using LA Taq polymerase (Takara) with 400 μM dNTPs, 50 ng DNA template, and 400 nM of each primer in 1 × LA buffer + magnesium with 30 cycles of PCR (20 s 94 °C, 30 s 48 °C, 2 min 68 °C). PCR reactions were size selected using 0.65 × AMPure XP beads (Beckman Coulter). Amplicons were barcoded in a second PCR reaction containing universal tail oligos complementary to the M13 universal tail sequences (Table S1). Barcodes were added using Herculase II Taq polymerase (Agilent) with 250 μM dNTPs, 2 μl of purified PCR product, and 400 nM of each primer in a 1 × reaction buffer with 5 cycles of PCR (20 s 95 °C, 20 s 58 °C, 2 min 72 °C). The barcoded amplicons were size selected using 0.65 × AMPure XP beads (Beckman Coulter). 500 ng of barcoded amplicons were prepared for sequencing using the amplicon template preparation protocol, 2015 release (Pacific Biosciences), including DNA damage repair and SMRTbell adapter ligation. Libraries were sequenced on the Pacific Biosciences RSII using MagBead loading with 6 h of movie time and P6-C4 chemistry.
Illumina Sequencing
Genomic DNA was sent to the Broad Institute of MIT and Harvard (Cambridge, USA). Microbial 16S rRNA was amplified targeting the hyper-variable V4 region using forward primer 515F (5′-GTG CCA GCMGCC GCG GTAA-3′) and the reverse primer 806R (5′-GGA CTA CHVGGG TWT CTAAT-3′). The cycling conditions consisted of an initial denaturation of 94 °C for 3 min, followed by 25 cycles of denaturation at 94 °C for 45 s, annealing at 50 °C for 60 s, extension at 72 °C for 5 min, and a final extension at 72 °C for 10 min. Sequencing was performed using the Illumina MiSeq platform generating paired-end reads of 175 bp in length in each direction. Overlapping paired-end reads were subsequently aligned. Details of this protocol have previously been described [21].
Sequencing Data Analysis
All three data sets were analysed using the operational taxonomic unit (OTU) approach. This was done by using the QIIME pipeline [22]. We used the SILVA 132 QIIME release as the reference OTU taxonomy database. Prior to OTU picking, each data set was quality filtered by sickle version 1.33 and low-quality reads were discarded. An open reference OTU picking strategy with 97% sequence similarity and a minimum OTU size of two reads was used. The α-diversity metric based on observed OTUs was calculated continuously from 50 reads/sample up to 3300 reads/sample with increasing steps of 50 reads, with 10 × rarefaction. Unweighted UniFrac distances, with 10 jack-knifed replicates, were measured at a rarefaction depth of 3000 reads per sample, based on the unfiltered OTU table, and relative bacterial abundance was determined. Prior to relative abundance visualization, rare taxa that were present at less than 0.1% were filtered. Sequence data are submitted to the SRA database and are accessible with BioProject accession number PRJNA786882.
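As an illustration of the rarefaction step described above, the sketch below computes observed-OTU α-diversity for one sample at increasing depths with repeated subsampling. The OTU count vector is hypothetical and the actual analysis was performed with the QIIME pipeline; this is only a minimal re-implementation of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical OTU table row for one sample: read counts per OTU (not real data).
otu_counts = rng.poisson(lam=3, size=500)

# Expand the count vector into a list of individual reads labelled by OTU id.
reads = np.repeat(np.arange(otu_counts.size), otu_counts)

def observed_otus(reads, depth, n_rarefactions=10):
    """Mean number of unique OTUs observed when subsampling `depth` reads."""
    values = []
    for _ in range(n_rarefactions):
        subsample = rng.choice(reads, size=depth, replace=False)
        values.append(np.unique(subsample).size)
    return float(np.mean(values))

# Rarefaction curve from 50 reads up to 3300 reads in steps of 50,
# mirroring the depths used in the paper.
for depth in range(50, 3301, 50):
    if depth > reads.size:
        break
    if depth % 1000 == 50:  # print only a few points for brevity
        print(depth, observed_otus(reads, depth))
```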
Sequencing Depth
Cecum content from mice fed a WTD without or with 10% inulin for 11 weeks was collected (n = 2 per group) and genomic DNA was extracted. The full-length 16S rRNA gene was amplified for PacBio sequencing, and the V4 region of the bacterial 16S rRNA gene was PCR amplified for Illumina short-read sequencing. To determine platform bias in the data sets obtained from the PacBio and Illumina platforms, a 16S rRNA V4 region data set was generated in silico from the full-length 16S rRNA PacBio data set (V4 PacBio). Table S2 shows that the read counts obtained by PacBio and Illumina sequencing are in the range of a typical run for the platforms, and the reads have the correct mean read length for the full-length 16S rRNA (approx. 1500 bp) and V4 region (approx. 250 bp). Interestingly, the V4 PacBio read count for individual samples is approximately 50% of the read count for the full-length 16S rRNA PacBio data it was derived from (Table S2). The V-ripper script, in combination with the primer sequences used, apparently does not recognize 50% of the full-length 16S rRNA sequences.
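The in silico extraction of the V4 region can be illustrated with the small sketch below, which locates the 515F primer and the reverse complement of the 806R primer (the primer sequences given in the Illumina Sequencing section) in a full-length read and returns the intervening region. The actual V-ripper script used in the study may differ; this only shows the general primer-matching idea, and the example read is a made-up string.

```python
import re

# IUPAC degenerate bases expanded to regex character classes.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "M": "[AC]", "R": "[AG]",
         "W": "[AT]", "S": "[CG]", "Y": "[CT]", "K": "[GT]", "V": "[ACG]",
         "H": "[ACT]", "D": "[AGT]", "B": "[CGT]", "N": "[ACGT]"}
COMP = {"A": "T", "C": "G", "G": "C", "T": "A", "M": "K", "R": "Y", "W": "W",
        "S": "S", "Y": "R", "K": "M", "V": "B", "H": "D", "D": "H", "B": "V",
        "N": "N"}

FWD_515F = "GTGCCAGCMGCCGCGGTAA"    # 515F
REV_806R = "GGACTACHVGGGTWTCTAAT"   # 806R

def revcomp(seq):
    return "".join(COMP[b] for b in reversed(seq))

def to_regex(primer):
    return "".join(IUPAC[b] for b in primer)

def extract_v4(read):
    """Return the region between 515F and the reverse complement of 806R, or None."""
    fwd = re.search(to_regex(FWD_515F), read)
    rev = re.search(to_regex(revcomp(REV_806R)), read)
    if fwd and rev and fwd.end() < rev.start():
        return read[fwd.end():rev.start()]
    return None  # primer site not found, as happened for ~50% of reads (Table S2)

# Toy full-length read with the two primer sites flanking a short "V4" stretch.
toy_read = ("AAAA" + "GTGCCAGCAGCCGCGGTAA" + "TTTTCCCCGGGGAAAA"
            + revcomp("GGACTACTAGGGTATCTAAT") + "AAAA")
print(extract_v4(toy_read))
```

Reads in which one of the two primer sites deviates from the degenerate primer patterns return None, which is one plausible mechanism for the roughly 50% drop in read count noted above.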
Sequencing Data Analysis
For operational taxonomic unit (OTU) picking, open reference OTU picking strategy with 97% sequence similarity and minimum OTU size of two reads was used. The minimum OTU size of at least two sequences/OTU ensured that singletons are excluded from the data. Table 1 shows the number of OTUs for individual samples and the number of sequences that these OTUs contained. In the 16S rRNA full-length PacBio data set, proportionally more reads were discarded in the OTU picking step compared to both 16S rRNA V4 data sets. These discarded reads were singletons and sequences that failed to align with the reference database. Furthermore, sequencing full-length 16S rRNA resulted in a higher percentage of unassigned taxa (2.9-8.4% of total reads) compared to both V4 data sets (0.05-0.6% of total reads; Table 1). These were reads without any reference sequence available in the reference database. The number of unassigned reads in the full-length 16S rRNA data set was in particular higher for samples of inulin-fed mice compared to samples of control mice.
Full-Length 16S rRNA Results into Higher α-Diversity
The OTU richness was assessed by plotting α-diversity versus sequencing depth. The α-diversity expressed as number of unique observed OTUs was calculated continuously from 50 reads/sample up to 3300 reads/sample with increasing steps of 50 reads, with 10 × rarefaction. Already at a sequencing depth of 300 reads/sample, α-diversity of 16S rRNA full-length PacBio samples was increased compared to both V4 PacBio and V4 Illumina data sets for control and inulin-fed samples (Fig. 1), while α-diversity of the V4 PacBio and V4 Illumina data sets were comparable. These data show that sequencing the full-length 16S rRNA resulted in a higher number of unique OTUs, already at a relatively low sequencing depth.
Using Full-Length 16S rRNA Reveals a Different Bacterial Phylogeny as Compared with V4 Region
The between-sample diversity, or β-diversity, was determined by calculating unweighted UniFrac distances. This is a validated and widely used quantitative distance metric for studying microbial community clustering that takes the phylogeny of communities into account [23,24]. Principal coordinate analysis was performed and the variation explained by the first two principal coordinates is plotted in Fig. 2. Principal coordinate (PC)1, which explains 34.8% of the data, clearly separates the full-length 16S rRNA PacBio data from the V4 amplicon data. The unweighted UniFrac distance for the V4 PacBio data set was comparable with the UniFrac distance of the Illumina V4 regions, indicating limited sequencing platform bias in determining β-diversity. In order to assess the robustness of the UniFrac distance, 10 × jack-knifing at 3000 reads/sample was performed for all samples. The jack-knifing variance, indicated by the ellipsoids around the data points, was smaller for the full-length 16S rRNA sequenced samples compared to both V4 data sets (Fig. 2). This indicates that a longer amplicon length provided a more robust UniFrac distance assignment.
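The ordination shown in Fig. 2 is a principal coordinate analysis (PCoA) of the UniFrac distance matrix. The sketch below shows only the PCoA (classical multidimensional scaling) step on an arbitrary symmetric distance matrix; computing unweighted UniFrac itself requires the OTU table and a phylogenetic tree and was done with QIIME in the study, so the input matrix here is a hypothetical stand-in.

```python
import numpy as np

def pcoa(D):
    """Classical MDS / principal coordinate analysis of a distance matrix D."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred Gram matrix
    eigval, eigvec = np.linalg.eigh(B)
    order = np.argsort(eigval)[::-1]             # largest eigenvalues first
    eigval, eigvec = eigval[order], eigvec[:, order]
    keep = eigval > 1e-12                        # keep positive eigenvalues only
    coords = eigvec[:, keep] * np.sqrt(eigval[keep])
    explained = eigval[keep] / eigval[keep].sum()
    return coords, explained

# Hypothetical 4x4 distance matrix standing in for unweighted UniFrac
# distances between samples C1, C2, In1, In2 (not the study's values).
D = np.array([[0.00, 0.20, 0.70, 0.80],
              [0.20, 0.00, 0.75, 0.70],
              [0.70, 0.75, 0.00, 0.30],
              [0.80, 0.70, 0.30, 0.00]])
coords, explained = pcoa(D)
print("PC1/PC2 coordinates:\n", coords[:, :2])
print("variance explained:", explained[:2])
```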
Using Full-Length 16S rRNA Gene Results in a Different Bacterial Composition and Relative Abundance
In addition to diversity analyses, we aimed to study if sequence length affected the taxonomic analysis outcome. We hypothesized that a longer amplicon length increased the resolution of the analysis by detecting additional taxa which would not be observed by sequencing the V4 region only. Therefore, we compared the full-length 16S rRNA PacBio samples with the V4 PacBio and V4 Illumina data sets. In this way, we could exclude platform bias and detect the effects of amplicon length on taxonomic analysis after a dietary intervention. Genus level is considered the maximum resolution of 16S sequencing. Therefore, we compared relative abundance of bacterial taxa in the three data sets at genus level. Sequencing the full-length 16S rRNA gene showed a different relative abundance at genus level compared to both V4 data sets, both for samples of control and inulin-fed mice (Fig. 3). Bacterial relative abundances of the V4 PacBio and V4 Illumina data sets were comparable for control samples. For inulin-fed mice, sample In2 showed variation in relative abundance for several taxa between the V4 PacBio and V4 Illumina data sets (Fig. 3). Interestingly, relative abundance of the genus Faecalibaculum, which blooms with inulin intervention, was higher in the full-length 16S rRNA data set compared to both V4 data sets. Relative abundance of the uncultured genus of the Muribaculaceae family, which increases with inulin intervention, was lower in the full-length 16S rRNA data set compared to both V4 data sets (Fig. 3). Relative abundance of the Bacteroides genus, which decreases with inulin intervention, was higher in the full-length 16S rRNA data set compared to both V4 data sets (Fig. 3). Remarkably, the genus Lactobacillus was detected in the V4 PacBio and V4 Illumina data sets for both dietary conditions, but was completely absent from the full-length 16S rRNA PacBio data set for both dietary conditions. After inulin intervention, other taxa like GCA-900066575, Lachnospiraceae-UCG006, an uncultured Lachnospiraceae genus, Oscillibacter and Ruminiclostridium 9 were detected in both V4 data sets, and were almost or completely absent in the full-length 16S rRNA PacBio data set. Taken together, this taxonomic analysis shows that sequencing the full-length 16S rRNA gene results in a different bacterial composition and relative abundance of bacterial species, both for control and inulin-fed mice, compared to determining the sequence of the V4 region only.
(Fig. 1 caption: α-diversity. The observed-species metric was calculated continually for control and inulin-fed mice (n = 2) with 10 × rarefaction from 50 reads/sample up to 3300 reads/sample with steps of 50 reads. Each line represents one individual sample. FL, 16S rRNA full-length PacBio.)
Discussion
We hypothesized that sequencing the full-length 16S rRNA gene would provide a higher resolution in terms of diversity and taxonomic analyses compared to sequencing a single short amplicon of the 16S rRNA marker gene such as the V4 region.
Our results show that in the in silico-extracted V4 PacBio data set, individual samples have approximately 50% of the read count of the full-length 16S rRNA PacBio data set. This reduction in read count after in silico isolation of the V4 sequences from the full-length 16S rRNA data set might be caused by variability in the primer sequences. It is known that primer choice for sequencing hypervariable regions of 16S rRNA influences sequencing outcome, due to the fact that primers do not cover the 16S rRNA V4 flanking region for all bacteria [25][26][27]. These data could indicate that a proportion of the taxa that are identified by full-length 16S rRNA gene sequencing are not detected by sequencing the V4 region only. Alternatively, although the circular consensus sequencing approach of PacBio has a very low error rate, this could also explain a proportion of the V-regions that could not be extracted using the V-ripper script. However, since PacBio sequencing errors are random, this would have no consequences on the distribution and phylogenetic assignment of the extracted sequences.
(Fig. 2 caption: β-diversity, unweighted UniFrac distances. Unweighted UniFrac distances for individual samples were calculated for control and inulin-fed mice (n = 2) using the PacBio and Illumina MiSeq platforms. Identical sample names in the graphs indicate individual mouse samples studied using different approaches. 10 × jackknifing at 3000 reads/sample was performed. C1 and C2 are individual samples from the control group; In1 and In2 are individual samples from the inulin group. FL, 16S rRNA full-length PacBio.)
In addition to primer choice, other factors including the DNA extraction method and choice of the 16S V-region may affect experimental outcome and introduce biases to the diversity and taxonomic analysis. DNA extraction method: Mackenzie et al. studied the effects of different DNA extraction methods, including commercially available DNA isolation kits and the phenol: chloroform: isoamyl alcohol method [28]. Different DNA isolation methods resulted in different DNA yield, DNA quality, and relative abundance of taxon-assigned OTUs. Other studies addressing microbial DNA extraction methods report similar issues [29,30]. These results emphasize that it is important, if at all possible, to be consistent in the use of a DNA extraction method. Choice of 16S V-region: Sequencing the V4 region in combination with the Illumina MiSeq platform has been widely used for taxonomic and diversity analysis [11,31]. More recently, a combination of two regions like the V2-V3 or V3-V4 region has been used for this purpose [32]. Burkin et al. compared V2-V3 with V3-V4 regions in water samples and reported that V2-V3 sequencing has higher resolution for lower-rank taxa [32]. Abellan-Schneyder et al. conducted an extensive study including six different combinations of the V-regions on human gut and mock samples [33]. They recommended sequencing of V3-V4 regions for human gut samples, but also mentioned that primer choice has a significant influence on the resulting microbial composition [33]. Since there seems to be no consensus on which V-regions provide the best results, investigators should consider the choice of their desired V-region carefully based on the experimental design and sample type. The cecum samples used in this study were obtained in the context of a larger study of which the results were published, as mentioned in the Materials and Methods section [5]. In order to maintain comparability with previously obtained data, we have used the V4 region in this current study.
Diversity analyses and taxonomic analysis are based on OTUs. An OTU is described as a cluster of sequences with a minimum amount of sequence identity; in the case of genus level the threshold for sequence identity is set at 97% similarity [9]. Since OTU picking is based on sequence identity, sequence length can thus affect the number and composition of OTUs in a given data set. The α-diversity metric of observed OTUs showed an increased number of unique OTUs for the full-length 16S rRNA PacBio data set compared to both V4 data sets.
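To illustrate what OTU picking at a 97% identity threshold means operationally, the toy sketch below greedily clusters equal-length sequences by pairwise identity. Real pipelines such as QIIME align sequences and use far more sophisticated algorithms; the reads here are invented, and the identity function assumes equal-length, pre-aligned sequences.

```python
def identity(a, b):
    """Fraction of identical positions for two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def greedy_otu_clustering(seqs, threshold=0.97):
    """Assign each sequence to the first cluster seed it matches at >= threshold identity."""
    seeds, assignments = [], []
    for s in seqs:
        for i, seed in enumerate(seeds):
            if identity(s, seed) >= threshold:
                assignments.append(i)
                break
        else:
            seeds.append(s)              # start a new OTU with this sequence as seed
            assignments.append(len(seeds) - 1)
    return seeds, assignments

# Toy equal-length "reads": two nearly identical sequences and one divergent one.
reads = ["ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTAC",
         "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTAT",  # 1 mismatch (98% identity)
         "TTTTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTAC"]  # 3 mismatches (94% identity)
seeds, assignments = greedy_otu_clustering(reads)
print("number of OTUs:", len(seeds))
print("cluster assignment per read:", assignments)
```

With longer amplicons, more positions can differ between two sequences, so a fixed 97% cut-off can split reads into more clusters, which is consistent with the higher observed-OTU counts reported above for the full-length data set.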
In addition, our results showed that β-diversity is affected by the sequence length. β-diversity analysis was performed by calculating the unweighted UniFrac distances. The unweighted UniFrac distance is a qualitative distance metric which takes the phylogeny of the sample into account [24]. The PCoA plot of unweighted UniFrac distance is based on the number of shared and unshared branches of the phylogenetic tree of the samples and is therefore a measure of heterogeneity of the bacterial population [23,24]. Since 16S rRNA full-length PacBio and V4 Illumina sequenced samples are separated in the PCoA plot, we can conclude that these samples had different phylogenetic trees which reflected different bacterial compositions. As samples of the V4 PacBio data set and the V4 Illumina clustered together, we can conclude that the difference in phylogenetic trees and thus bacterial composition is not due to platform bias (PacBio vs Illumina), but caused by the difference in sequence length. Furthermore jack-knifing variance, which determines how often the cluster results are recovered using random subsets of the data, was smaller for the full-length 16S rRNA PacBio samples compared to both V4 data sets and shows that sequencing full-length 16S rRNA resulted in increased robustness of the data [24]. It has previously been shown that the PacBio platform can be used for studying microbiota communities [34,35]. Based on our findings and the fact that β-diversity metric UniFrac can distinguish bacterial communities at a depth of 50 reads/sample [23], we suggest that the PacBio platform can be used to study intestinal microbial communities at a lower sequencing depth. This allows multiplexing multiple samples on a single-molecule real-time (SMRT) cell in order to reduce resources and sequencing costs.
In addition to diversity analysis, interpretation of experimental outcome requires insight into the bacterial composition of a sample to understand e.g. which bacterial species are able to convert a dietary compound. Taxonomic analysis of the three data sets showed that sequencing full-length 16S rRNA resulted in a different bacterial composition, as relative abundances of taxa were increased or decreased with 16S rRNA full-length PacBio after inulin intervention compared to both V4 data sets. Interestingly, the genus Lactobacillus was completely absent in the full-length 16S rRNA PacBio data set, while being detected in both V4 data sets. This difference in taxa detection is of major importance for the interpretation of biological data. It should be mentioned that in our previous article, which relied exclusively on 16S rRNA V4 region sequencing by Illumina, we reported that the genus Allobaculum bloomed after inulin intervention [5]. However, here we report that Faecalibaculum bloomed after inulin intervention. Faecalibaculum is closely related to Allobaculum with 86.9% sequence similarity and was recently isolated from laboratory mice [36]. Microbial data of our initial article were analysed using the Greengenes 13.8 reference database and for the current work we used the SILVA 132 reference database, which likely explains this discrepancy in annotation.
Sequencing the full-length 16S rRNA gene resulted in the detection of a higher percentage of unassigned reads compared to sequencing the V4 regions only. Interestingly, in our study the percentage of unassigned reads was higher in samples of inulin-fed mice. This finding might suggest that at least part of the bacterial taxa blooming on inulin are in this unassigned fraction of the data. Since we cannot assign these reads, we cannot fully utilize the advantage of full-length 16S rRNA gene sequencing compared to V4 sequencing.
Conclusion
Taken together, we conclude that sequencing the full-length 16S rRNA gene provides a different view regarding bacterial relative abundance, within-sample diversity, and between-sample diversity, as compared to V4 sequencing, regardless of the sequence analysis platform. This clearly has implications for the interpretation of biological data after a dietary intervention.
Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1007/s00284-022-02956-9. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
|
Cellular transcripts regulated during infections with Highly Pathogenic H5N1 Avian Influenza virus in 3 host systems
Background: Highly Pathogenic Avian Influenza (HPAI) virus is able to infect many hosts and the virus replicates at high levels in the respiratory tract, inducing severe lung lesions. The pathogenesis of the disease is actually the outcome of the infection as determined by complex host-virus interactions involving the functional kinetics of large numbers of participating genes. Understanding the genes and proteins involved in host cellular responses is therefore critical for the elucidation of the mechanisms of infection. Methods: A study of differentially expressed transcripts regulated in H5N1 infections of whole lung organ of chicken, an in-vitro chick embryo lung primary cell culture (CeLu) and a continuous Madin Darby Canine Kidney (MDCK) cell line was undertaken. An improved mRNA differential display technique (GeneFishing™) using annealing control primers that generates reproducible, authentic and long PCR products that are detectable on agarose gels was used for the identification of differentially expressed genes (DEGs). Seven of the genes were selected for validation using a TaqMan® based real-time quantitative PCR assay. Results: Thirty-seven known and unique differentially expressed genes from lungs of chickens, CeLu and MDCK cells were isolated. The genes isolated and identified include heat shock proteins, Cyclin D2, Prenyl (decaprenyl) diphosphate synthase, IL-8 and many other unknown genes. The quantitative real-time RT-PCR assay data showed that the transcription kinetics of the selected genes were clearly altered during infection by the Highly Pathogenic Avian Influenza virus. Conclusion: The GeneFishing™ technique has allowed, for the first time, the isolation and identification of sequences of host cellular genes regulated during H5N1 virus infection. In this limited study, the differentially expressed genes in the three host systems were not identical, thus suggesting that their responses to the H5N1 infection may not share similar mechanisms and pathways.
Background
Avian Influenza virus (AIV) is a member of the Orthomyxoviridae family of negative-stranded, segmented RNA viruses and represents a particularly attractive model system, as viral replication strategies are closely intertwined with normal cellular processes including the host defense and stress pathways [1]. Over the course of evolution, Influenza virus has developed translational control strategies that utilize cap-dependent translation initiation mechanisms. This causes the host cell to preferentially synthesize viral proteins and prevents the activation of the antiviral response. Translational regulation is a critical component of the cellular response to a variety of stimuli, including growth-promoting and growth-repressing signals. Similarly, the cellular response to stress, such as viral infection, nutrient deprivation, accumulation of misfolded proteins and ER stress, and finally heat shock, involves translational control mechanisms that function to activate and repress mRNA translation depending on environmental conditions. For example, during Influenza virus infection, there is a dramatic shutoff of cellular protein synthesis and the selective translation of viral mRNAs [1][2][3]. Concurrently, in heat-shocked or stressed cells, there is similarly a disruption of 'normal' cellular protein synthesis and a subsequent redirection of translation to heat shock mRNAs [4][5][6]. This clearly shows that Influenza virus infections of cells are closely intertwined with normal cellular processes, including host defense and stress pathways.
The widespread distribution of highly pathogenic avian H5N1 Influenza A viruses in wild birds and, in particular, in domestic poultry populations continues to pose a threat to public health. Severe respiratory disease and a high case-fatality rate have become a hallmark of H5N1 infection in humans as well as in other mammalian species [7][8][9][10]. To develop efficient therapeutics against this virus, understanding how the virus interacts with the host in natural infection is necessary. Having insights into the host's responses to influenza (H5N1) would help define targets for therapeutic intervention [11].
One way to do this is to elucidate the mechanisms of virus pathogenesis in chickens; however, how host cells interact and the molecular mechanisms underlying the pathophysiologic process of HPAIV infection in chickens are still poorly understood. Also still lacking is first-hand information on the molecular changes in the host induced by the virus to promote its replication, and on the pathways triggered in the host that result in immunity and/or clearance of the viral infection [11]. The outcome of the infection is determined by complex host-virus interactions with a large number of altered transcriptional and translational rates, and the functional kinetics of participating genes. Overviews of host responses to AIV at the transcriptional level in the trachea and lungs induced by H9N2, H3N2 and H1N1 infection have described the involvement of many genes involved in innate immunity, interleukin activity and vesicle trafficking such as endocytosis and phagocytosis during virus entry [12][13][14][15].
We undertook the present study as a preliminary work to understand the selective transcriptome which were up regulated and down regulated during the time of infection with 3 different types of hosts i.e. MDCK cells, primary CeLu cells and lung tissues of infected chickens. We employed a new differential display GeneFishing™ PCR technique to compare the gene expression in normal and infected cells and tissues. This sensitive technique is based on the determination of multiple expression patterns of pre-determined sequences and we also combined it with the use of annealing control primer (ACP)™ technology in order to provide a primer with annealing specificity to the template, and allow only targeted product to be amplified without any false artifacts [16,17]. The other great advantage of this technique is that the bands can be isolated and the genes cloned in a vector for sequence identification and stored for further use.
Viruses
Avian Influenza virus, isolate A/chicken/Malaysia/5744/2004 H5N1, was provided by the Veterinary Research Institute, Ipoh, Perak, Malaysia. This virus was confirmed to be highly pathogenic in chickens via the intracerebral pathogenicity test (conducted at the OIE World Avian Influenza Reference Centre of the Australian Animal Health Laboratory, Geelong, Australia). The pathogenicity of the virus was also confirmed by the demonstration of a multiple basic amino-acid sequence at the cleavage site of the HA gene. The viruses were initially isolated and passaged in Madin-Darby canine kidney (MDCK) cells. The virus stock was aliquoted and titrated to determine the 50% tissue culture infectious dose (TCID50) in MDCK cells. The experiments were carried out in a Biosafety Level 3 (BSL-3) facility at the Veterinary Research Institute, Ipoh, Perak, Malaysia.
Chickens and virus infection
Six specific pathogen free (SPF) chickens of one week of age were each infected intranasally with allantoic fluid containing 10⁴ EID50/100 μl of H5N1 virus. The chickens were kept in an isolator within a BSL-3 facility of the Veterinary Research Institute, Ipoh, Malaysia, with food and water available ad libitum. Control chickens of the same batch and age were treated with the PBS pH 7.2 used for virus dilution. As this virus is highly pathogenic to chickens, previous studies have shown that the virus was able to kill the chickens within 24-48 hrs post infection, depending on the dose. In this study, however, chickens were euthanized at 32 hrs post infection and the lungs harvested. Both the control and infected chickens were sacrificed at the same time. The lung tissues were kept at -80°C until used for total mRNA extraction. All animal studies were performed according to protocols approved by the Animal Ethics Committee of the Veterinary Research Institute (VRI) Malaysia, Department of Veterinary Sciences Malaysia.
Cells and virus infection
Primary chicken embryo lung cells (CeLu) were prepared from 19-20 day old embryos from SPF eggs. Cells were seeded at a density of 10⁶ cells/ml in Dulbecco's Modified Eagle Medium (DMEM; GIBCO), supplemented with 10% fetal bovine serum (FBS; GIBCO), 100 units/ml penicillin (GIBCO), and 100 mg/ml streptomycin (GIBCO). MDCK (Madin Darby Canine Kidney) cells were also grown as monolayers in Minimum Essential Media (MEM; GIBCO), supplemented with 10% fetal bovine serum (FBS; GIBCO) and the same concentration of antibiotics as above. Confluent monolayer cells were infected with the isolate A/chicken/Malaysia/5744/2004 H5N1 at a multiplicity of infection of 5. Infected cells showing 5-10% cytopathic effect (within 32 hours) were harvested. Control and infected monolayer cells were washed twice with PBS pH 7.2, scraped into a 15 ml conical tube and centrifuged at 1,500 rpm for 15 mins. The cell pellets were stored at -80°C or used immediately for total mRNA extractions.
Messenger RNA isolation
mRNA was extracted from the infected and uninfected (control) MDCK and primary CeLu cells, and from the lungs of infected and control chickens, using the RNeasy® mini kit (QIAGEN Inc., Valencia, CA), according to the manufacturer's instructions.
Annealing control primer™-based GeneFishing™ PCR
Differentially expressed genes (DEGs) were screened by the annealing control primer (ACP) ™-based PCR method using the GeneFishing™ DEG kits (Seegene, Seoul, South Korea) [17]. The GeneFishing™ PCR technique involved an ACP™ system that had a unique tripartite structure in that its distinct 3'-end target core sequence and 5'-end nontarget universal sequence portions were separated by a regulator, it used primers that annealed specifically to the template, and it allowed only genuine products to be amplified; this process eliminates false positive results. Second-strand cDNA synthesis and subsequent PCR amplification were conducted in a single tube. Briefly, second-strand cDNA synthesis was conducted at 50°C (low stringency) during one cycle of first-stage PCR in a final reaction volume of 49.5 μl containing 3-5 μl (about 50 ng) of diluted first-strand cDNA, 5 μl of 10× PCR buffer plus Mg (Roche Applied Science, Mannheim, Germany), 5 μl of dNTP (each 2 mM), 1 μl of 10 μM dT-ACP2, and 1 μl of 10 μM arbitrary primer preheated to 94°C ( Table 1). The tube containing the reaction mixture was held at 94°C, while 0.5 μl of Taq DNA Polymerase (5 U/μl; Roche Applied Science) was added to the reaction mixture. The PCR protocol for second-strand synthesis was one cycle at 94°C for 1 min, followed by 50°C for 3 min, and 72°C for 1 min. After the completion of second-strand DNA synthesis, 40 cycles were performed. Each cycle involved denaturation at 94°C for 40s, annealing at 65°C for 40 s, extension at 72°C for 40 s, and a final extension at 72°C to complete the reaction. The amplified PCR products were separated in 1.5-2% agarose gel stained with ethidium bromide.
Cloning and sequencing
The differentially expressed bands were extracted from the gel using the QIAquick® Gel Extraction kit (QIAGEN Inc., Valencia, CA), and directly cloned into a TOPO TA® cloning vector (Invitrogen) according to the manufacturer's instructions. The cloned plasmids were sequenced with an ABI PRISM® 3100 Genetic Analyzer (Applied Biosystems, Foster City, CA). Complete sequences were analyzed by searching for similarities using the Basic Local Alignment Search Tool (BLAST) search program at the National Center for Biotechnology Information GenBank [18].
Quantitative reverse transcription-polymerase chain reaction (qRT-PCR)
All RT-PCR reactions were set up in 96-well optical plates using 50 ng of extracted uninfected and infected RNA from CeLu, MDCK cell lines and chicken lung tissue, 10 μl TaqMan Universal PCR Master Mix (Applied Biosystems, Foster City, CA, USA), and 1 μl of primers/probe set containing 900 nM of forward and reverse primers and 300 nM probe, added to a final volume of 20 μl per reaction. All samples were tested in triplicate. The RT-PCR program consisted of incubation at 48°C for 30 min, and 40 cycles at 95°C for 10 min, 95°C for 15 sec, and 60°C for 1 min with the Step One Plus Real-Time PCR System® (Applied Biosystems). A non-template control and an endogenous control (eukaryotic 18s rRNA) were used for the relative quantification. All quantitations (threshold cycle [CT] values) were normalized to that of 18s rRNA to generate ΔCT, and the difference between the ΔCT value of the sample and that of the reference (uninfected sample) was calculated as ΔΔCT. The relative level of gene expression was expressed as 2^-ΔΔCT [19]. Primers for qRT-PCR were designed using Primer3 software (http://frodo.wi.mit.edu/cgi-bin/primer3/primer3.cgi) with these parameters: amplicon length, 95-100 bp; primer length, 18-27 nucleotides; primer melting temperature, 60-64°C; primer and amplicon GC content, 20-80%; difference in melting temperature between forward and reverse primers, 1-2°C. Primers were synthesized by Integrated DNA Technologies (Coralville, IA, USA). Primer information is listed in Table 2.
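A minimal sketch of the relative quantification described above is given below: ΔCT is the target CT normalised to 18S rRNA, ΔΔCT is referenced to the uninfected calibrator, and expression is reported as 2^-ΔΔCT. The CT values are invented for illustration and are not the study's measurements.

```python
import numpy as np

# Hypothetical triplicate CT values (target gene and 18S rRNA endogenous control).
ct_target_uninfected = np.array([24.1, 24.3, 24.0])
ct_18s_uninfected    = np.array([12.0, 12.1, 11.9])
ct_target_infected   = np.array([22.0, 22.2, 21.9])
ct_18s_infected      = np.array([12.1, 12.0, 12.0])

# Delta CT: normalise each sample's target CT to the endogenous control.
dct_uninfected = ct_target_uninfected - ct_18s_uninfected
dct_infected   = ct_target_infected - ct_18s_infected

# Delta-delta CT: reference the infected samples to the uninfected calibrator.
ddct = dct_infected - dct_uninfected.mean()

# Relative expression (fold change) as 2^-ddCT, reported as mean +/- SD.
fold_change = 2.0 ** (-ddct)
print(f"fold change = {fold_change.mean():.2f} +/- {fold_change.std(ddof=1):.2f}")
```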
Statistical analysis
Results expressed as 2^-ΔΔCT were reported as mean ± standard deviation and analyzed using a paired Student's t-test. P values < 0.05 were considered statistically significant.
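For the significance test mentioned above, a paired Student's t-test on the 2^-ΔΔCT values of the two groups can be run as in the sketch below; scipy's ttest_rel is used here, and the numbers are illustrative only, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical 2^-ddCT values for triplicate measurements (not the study's data).
uninfected = np.array([1.00, 1.10, 0.92])
infected   = np.array([3.70, 3.85, 3.80])

t_stat, p_value = stats.ttest_rel(uninfected, infected)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference in relative expression is statistically significant")
```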
Cellular transcripts from 3 different host systems regulated during infection with HPAI H5N1
Differentially expressed mRNAs from the 3 different types of host systems infected with the H5N1 virus were isolated, cloned and sequenced using a combination of 20 arbitrary primers and two anchored oligo (dT) primers (dT-ACP1 and dT-ACP2). For each of the host systems, more than 100 transcripts were observed; however, only distinct up-regulated or down-regulated transcripts as observed on the agarose gels were chosen, i.e. after exclusion of poor bands and bands which did not show much difference in intensity between the control and infected cells. The selected transcripts for the CeLu, MDCK and chicken lung H5N1-infected systems are listed in Tables 3, 4 and 5. BLASTn searches in GenBank revealed that the differentially expressed genes displayed significant similarities with known genes or expressed sequence tags (ESTs).
Quantitative reverse transcription-PCR (qRT-PCR)
A real-time RT-PCR assay was developed to validate the mRNA differential display expression data based on the use of a TaqMan probe. We chose 7 genes (CCND1, CCND2, CCND3, RTN4, L14, Hsp60 and IL8) that were analyzed using qRT-PCR. Initially, we had intended to only measure the relative quantitation of the down-regulated CCND2 (from the mRNA differential display results); however, we thought that it would be interesting to investigate whether H5N1 down-regulates other cyclin D family members (CCND1 and CCND3) as well. For optimal relative quantification of the 7 selected genes, the fold difference of ΔCT (2^-ΔΔCT ± SD) between the study groups (uninfected and infected) was calculated (Figure 4A). These results indicate that there is a significant decrease (P < 0.05) in the transcription of these 5 genes due to H5N1 infection. On the contrary, the resulting values for the Hsp60 gene were 1.00 ± 0.10 in uninfected cells and 3.78 ± 0.15 in infected cells, and for the IL-8 gene 1.00 ± 0.13 in uninfected cells and 4.18 ± 0.12 in infected cells (Figure 4B). These results indicate that there is a significant increase (P < 0.05) in the transcription of these 2 genes due to H5N1 infection.
Discussion
Difficulty often arises in identifying a gene responsible for a specialized function during a certain biological stage because the gene is expressed at low levels, whereas the bulk of mRNA transcripts within a cell are highly abundant [20]. To screen DEG transcripts in low concentrations while minimizing false positive results, it was reasonable to use a PCR-based technique. Moreover, it was possible to detect GeneFishing™ technology reaction products easily on ethidium bromide-stained agarose gels. This was supposed to greatly assist studies searching for genes that are expressed differentially in cells under various physiological stages or experimental conditions. In the past, several approaches have been used to compare levels of gene expression, e.g. RT-PCR and northern blot analysis. These approaches were limited to the analysis of one gene at a time, whereas other methods, such as subtractive hybridization or variations in the differential display techniques, can determine multiple expression patterns of predetermined sequences; the latter technique is very sensitive, but not quantitative [21]. Large numbers of expressed genes can also be investigated using nucleic acid microarrays. These arrays allow for scanning of large numbers of genes rapidly; however, these techniques have the relative disadvantage of being suitable only for analysis of a fixed number of predetermined gene sequences [22,23]. In this study, the mRNA differential display technique was utilized to screen for regulated cellular transcripts during H5N1 infection in 3 different host systems. MDCK cells, derived from a continuous (immortal) neoplastic cell line, are one of the best mammalian cell models to study the infection of avian influenza virus, as they have all the necessary receptors for virus attachment and can also be propagated in large amounts. It is interesting, though, to compare the responses to the same virus infections of two cell systems, one a continuous mammalian cell line and the other a primary chicken cell culture. Cultured primary CeLu cells provide a more physiologically relevant environment for the molecular target under examination than that same target expressed in an "artificial" immortalized cell environment. This is notably the case with primary chicken embryo lung cells, for which the complex interplay of endogenously expressed ion channels, second messengers and other cell signaling proteins can be better recapitulated than in transformed immortalized cell lines. It was hypothesized that intact lungs, which are the primary organ for the proliferation of the virus, would probably generate some DEGs similar to those of the CeLu cells due to their similar origin. However, the physiological environment and complex immune interactions of lungs from infected chickens reveal genes that are relevant in a physiological setting involving host response components to infection. One of these genes is IL-8. Hundreds of genes (Figures 1, 2 and 3) of the 3 host systems were expressed, out of which 37 up- and down-regulated DEGs were isolated. Overall, our studies have demonstrated that the responses of the three host systems to infections with the same virus were substantially different from each other. The functional roles, sequence similarities and characterization of the differentially expressed transcripts are summarized in Tables 3, 4 and 5. In this study, ACP-based RT-PCR results showed that most of the differentially expressed genes exhibited significantly higher sequence similarity (90 to 100%) with known coding regions of genes.
Munir et al. (2005) showed that the regulation of host cell transcription following infection with a virus is an intricate phenomenon and may or may not be shared among viral agents, indicating that viruses that are evolutionarily closely related may not share similar mechanisms for regulating host gene expression and likely have their own signature patterns of altering host physiology during replication. In our study, we can similarly conclude that the host cell responses, or the regulation of host cell transcription, of different permissible cell or host systems following infection with the same virus may have their own specific patterns of altering host gene expression and may not share similar mechanisms and pathways.
One drawback of this technique is the small number of DEGs generated that could be captured on agarose gels, in comparison to the large numbers of expressed genes that can be investigated using nucleic acid microarrays. Despite this, we found several interesting genes that were predominantly and consistently regulated only in infected cells, both in the cell lines and in the lung tissues: the Hsp60 small heat shock protein, cyclin D2 and interleukin-8.
Hsp60 is a common cellular protein that assists in the correct folding of proteins and stabilizes unfolded labile proteins [24]. These functions maintain the activities of some cellular proteins and facilitate enzymatic maturation. The former is a well-known function of Hsp60 under stress conditions, and an example of the latter is the activation of procaspase-3 and prion protein through conformational change by Hsp60 [25][26][27]. Functioning as a chaperonin in eukaryotes, Hsp60 assembles into a heptamer and has ATPase activity for the release of bound protein [25,28]. Hsp60 is an essential factor for the activation of human Hepatitis B Virus polymerase, enabling it to function inside the cellular environment [29]. Apart from that, Hsp60, as a mitochondrial protein, has also been shown to be involved in the stress response. The heat shock response is a homeostatic mechanism that protects cells from damage by up-regulating the expression of genes that code for Hsp60 [30]. The up-regulation of Hsp60 production allows for the maintenance of other cellular processes occurring in the cell, especially during stressful times. Infection and disease are extremely stressful to the cell. When a cell is under stress, it naturally increases the production of stress proteins, including heat shock proteins such as Hsp60. In order for Hsp60 to act as a signal, it must be present in the extracellular environment. This explains the up-regulation of various types of heat shock proteins and of the interleukin-8 cytokine that we found during infection in our study. In addition, IL-8 is a potent chemo-attractant and stimulus of neutrophils and plays a pivotal role in inflammatory diseases. Hsp60 has also been found to be involved in the signal transduction cascade of the immune response when cells are under environmental stress, in this case viral attack. It acts as a signaling molecule, indicating the action of other immune molecules such as cytokines (interleukins, interferons and tumor necrosis factor) [31][32][33][34]. Further investigation of Hsp60 is also essential, especially as it may shed some light on its role in the cytokine storm that occurs in most H5N1 infections and is fatal to the host. We also found the cyclin D2 transcript to be down-regulated only during infection in MDCK cell lines, which mimic a mammalian model of study. This cyclin forms a complex with and functions as a regulatory subunit of CDK4 or CDK6, whose activity is required for the cell cycle G1/S transition. This protein has been shown to interact with and be involved in the phosphorylation of the tumor suppressor protein Rb. G1 cell cycle regulators are often targets for deregulation in cancers [35,36]. Cyclin D2 is up-regulated in many cancers, including breast cancer, and its role is to increase cellular proliferation. In addition to cancer, the cyclin Ds are often seen deregulated in viral infections. For instance, cyclin D2 is up-regulated in Epstein-Barr virus (EBV) and Hepatitis B virus (HBV) infected cells [37,38], and cyclin D1 is up-regulated in Simian virus 40 (SV40) transformed cells [39]. It is interesting to note that EBV and HBV, both of which increase cyclin D2 levels, are associated with cancers. Many oncogenic viruses contain a viral protein, such as the SV40 T antigen and the human papillomavirus (HPV) E6 and E7 proteins, which aid in tumorigenesis by altering cell cycle progression [40,41]. Interestingly, in our study, we found that all 3 of the cyclins, D1, D2 and D3, were down-regulated.
Very little is known about the crosstalk between influenza A virus and the cellular machinery that regulates the cell cycle; thus, our finding opens up new avenues of research to determine whether alteration of cell cycle progression is a strategy used by the H5N1 avian influenza virus to replicate better in host cells.
Conclusion
The use of functional genomics methods, led by the mRNA differential display technique, has significantly advanced our understanding of organ- and cell-specific transcriptomes, especially when comparisons are made between infected and non-infected samples. We identified and isolated 37 authentic genes that were up- and down-regulated in this study. The findings of this preliminary study with MDCK and CeLu cell lines and lung tissues open up new avenues of research, particularly into exploring and elucidating the functions of several interesting candidate genes, such as Hsp60, cyclin D2, IL-8 and the many unknown genes, and any role they might play in the virulence or pathogenicity of the virus. We have also shown that host systems infected by the same virus may have their own specific patterns of altering host gene expression and may not share similar mechanisms and pathways. Further identification of these novel genes, together with the availability of sequence data for some of the unknown genes, would provide resources for further research into their use as markers or inhibitors in the development of novel biologics and reagents for diagnostic and anti-viral therapies.
|
v3-fos-license
|
2019-10-23T13:06:38.110Z
|
2019-10-01T00:00:00.000
|
204832516
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3390/pharmaceutics11100542",
"pdf_hash": "39d9b186da7842e64774c05235be08fadc392a0d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44531",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"sha1": "977fe30ae13f7e96b889ff8270241c6e1b8423a0",
"year": 2019
}
|
pes2o/s2orc
|
Utilizing a Kidney-Targeting Peptide to Improve Renal Deposition of a Pro-Angiogenic Protein Biopolymer
Elastin-like polypeptides (ELP) are versatile protein biopolymers used in drug delivery due to their modular nature, allowing fusion of therapeutics and targeting agents. We previously developed an ELP fusion with vascular endothelial growth factor (VEGF) and demonstrated its therapeutic efficacy in translational swine models of renovascular disease and chronic kidney disease. The goal of the current work was to refine renal targeting and reduce off-target tissue deposition of ELP–VEGF. The ELP–VEGF fusion protein was modified by adding a kidney-targeting peptide (KTP) to the N-terminus. All control proteins (ELP, KTP–ELP, ELP–VEGF, and KTP–ELP–VEGF) were also produced to thoroughly assess the effects of each domain on in vitro cell binding and activity and in vivo pharmacokinetics and biodistribution. KTP–ELP–VEGF was equipotent to ELP–VEGF and free VEGF in vitro in the stimulation of primary glomerular microvascular endothelial cell proliferation, tube formation, and extracellular matrix invasion. The contribution of each region of the KTP–ELP–VEGF protein to the cell binding specificity was assayed in primary human renal endothelial cells, tubular epithelial cells, and podocytes, demonstrating that the VEGF domain induced binding to endothelial cells and the KTP domain increased binding to all renal cell types. The pharmacokinetics and biodistribution of KTP–ELP–VEGF and all control proteins were determined in SKH-1 Elite hairless mice. The addition of KTP to ELP slowed its in vivo clearance and increased its renal deposition. Furthermore, addition of KTP redirected ELP–VEGF, which was found at high levels in the liver, to the kidney. Intrarenal histology showed similar distribution of all proteins, with high levels in blood vessels and tubules. The VEGF-containing proteins also accumulated in punctate foci in the glomeruli. These studies provide a thorough characterization of the effects of a kidney-targeting peptide and an active cytokine on the biodistribution of these novel biologics. Furthermore, they demonstrate that renal specificity of a proven therapeutic can be improved using a targeting peptide.
Introduction
Elastin-like polypeptides (ELP) are a class of protein biopolymers composed of repeating five-amino acid units (VPGxG, where x is any amino acid except proline) with unique physical properties [1] and many advantages as drug carriers [2]. ELPs are relatively biologically inert, having little to no cytotoxicity [3][4][5] and low immunogenicity [6,7]. Also, being proteins rather than chemically synthesized polymers, they degrade in vivo into non-toxic natural amino acids [8]. ELPs also have a unique physical property of being thermally responsive. ELPs are highly soluble in aqueous solution below a distinct transition temperature, and they form coacervates and precipitate above the transition temperature. This aggregation process is fully reversible, and the transition temperature at which it occurs can be precisely tuned by changing the hydrophobicity of the guest residue in the VPGxG repeat or by changing the number of repeats [9]. This tunable phase transition makes ELPs extremely versatile as drug delivery platforms via three major strategies: ELPs with transition temperatures below body temperature can be used as slow-release drug depots, ELPs with transition temperatures just above body temperature can be used for thermally targeted drug delivery, and ELPs with high transition temperatures (above body temperature) can be used as soluble protein carriers for therapeutics.
ELPs designed with transition temperatures below body temperature form coacervates at the injection site after delivery in vivo. This was utilized to achieve slow-release drug depots [10,11]. For example, an ELP fusion with multiple copies of glucagon-like peptide 1 with a transition temperature below body temperature achieved slow release of GLP-1 over the course of 10 to 17 days in mice and monkeys [12], and controlled plasma glucose levels for five to ten days in multiple mouse models [10,12,13] of diabetes. In another application, an ELP-lacritin fusion protein with a low transition temperature formed a depot after injection into the lacrimal gland and enhanced tear production in mice [14]. It is also possible to design di-block ELPs containing a hydrophobic, low-transition temperature block and a hydrophilic, high-transition temperature block, which form nanoparticles at physiologic temperatures [15]. These constructs were used for chemotherapy and anti-cancer peptide delivery [16,17], ocular drug delivery [18], and immunotherapy [19], among others. If the transition temperature is tuned just above body temperature, ELPs can be used as carriers for thermally targeted drug delivery. In these applications, ELP-fused drugs were administered systemically and circulate as soluble proteins. However, at a target site (most development in this area was regarding tumor targeting) where external mild hyperthermia was applied, the ELPs formed coacervates and accumulated, resulting in enhanced delivery [20][21][22][23][24]. Finally, ELPs with transition temperatures well above body temperature can be used as soluble carriers for many types of therapeutic agents. ELP fusion to small peptides, proteins, or small molecule drugs can increase solubility [25,26], protect the cargo from degradation [27,28], slow plasma clearance [29], and alter biodistribution [29], overall resulting in improved bioavailability. In addition to drug or therapeutic attachment, ELPs can be modified with targeting agents [4,30,31] or cell-penetrating peptides [5] to alter biodistribution [4] and intracellular distribution [32]. The pharmacological properties of soluble ELP-based biologics are also tunable. The plasma half-life of soluble ELPs is directly proportional to their molecular weight [33,34], allowing unique control over the in vivo clearance time.
Our lab focused on the development of in vivo soluble (high transition temperature) ELP fusion proteins as novel biologics. In one application, we tested an ELP fusion with vascular endothelial growth factor (VEGF), a pro-angiogenic cytokine [29,35,36]. VEGF is a potent inducer of angiogenesis and a stimulator of endothelial cell function [37], and reduced VEGF availability has been implicated in several disease states, including ischemic renal diseases [38,39] and preeclampsia [40]. VEGF supplementation therapy, however, is limited by the rapid plasma clearance of the small protein [41], the need for direct infusion into target tissues [42,43], and the ubiquitous nature of this cytokine that may lead to off-target effects. To overcome some of these limitations, we first developed an ELP fusion protein with human VEGF-A 121 [29], the smallest, non-heparin-binding form of VEGF-A [44,45]. In extensive preclinical testing, the therapeutic potential of the ELP-VEGF fusion protein was demonstrated for treatment of kidney disease, including renal artery stenosis-induced renovascular disease [36,46,47] and chronic kidney disease [48]. These models, as observed in humans with these diseases, displayed a progressive loss of renal function associated with extensive microvascular rarefaction and renal fibrosis. ELP-VEGF was capable of targeting the kidney after either direct intra-renal injection [36] or systemic intravenous injection [46], improved microvascular density, induced increases in renal blood flow and the glomerular filtration rate, and reduced renal fibrosis. Furthermore, ELP-VEGF also improved renal outcomes in renovascular disease when used in combination with renal angioplasty and stenting [47].
Given the promise of ELP-VEGF for renal therapy, we are currently working to further optimize the therapeutic protein for renal disease treatment. Pasqualini et al. described a series of peptides that homed preferentially to the kidney or to the brain [49]. Using a phage display screen in mice, two peptides were identified that were enriched three-to five-fold in the kidney relative to the brain. The authors found that phages expressing one of these peptides were present at high levels in the glomerulus and between tubules [49], likely reflecting their target-binding in the renal vasculature. Previously, we showed that this kidney-targeting peptide (KTP), discovered by Pasqualini et al., increased the renal deposition of ELP when fused at ELP's N-terminus about five-fold in both rats and pigs, while not affecting ELP levels in other organs [4]. The addition of KTP also slowed the plasma and whole-body clearance of ELP. Hence, we aimed to further improve the kidney-targeting of ELP-VEGF by incorporating the KTP technology. In this study, a kidney-targeted form of the ELP-VEGF protein was generated by fusing KTP to its N-terminus. All control proteins lacking each domain (ELP, KTP-ELP, and ELP-VEGF), plus non-ELP-fused VEGF, were generated in order to determine the effects of each domain on the in vitro activity and in vivo pharmacology of the molecule. After purifying all proteins, primary human glomerular microvascular endothelial cells were used to assess the in vitro potency of each protein, and the in vivo pharmacology of each protein (pharmacokinetics, biodistribution, kidney concentrations, and intrarenal distribution) was assessed in a mouse model.
Protein Expression and Purification
The ELP domain used in this study was ELP-[V1G7A8]-160, an ELP containing 160 VPGxG repeats, where x is Val, Gly, or Ala in a 1:7:8 ratio [20] (previously referred to as ELP2 [20,22,50], and referred to throughout this manuscript simply as ELP). This ELP has a molecular weight of approximately 61 kDa and a transition temperature of approximately 60 °C, making it a mid-sized ELP that does not aggregate in vivo and is therefore ideal as a soluble drug carrier. To generate ELP fusion proteins, the ELP coding sequence was modified by fusing an E. coli codon-optimized coding sequence for human VEGF-A121 in frame at the ELP C-terminus (as described in [29]) and/or fusing a coding sequence for a short kidney-targeting peptide [49] at the N-terminus (as described in [4]). The resulting constructs (ELP, KTP-ELP, ELP-VEGF, and KTP-ELP-VEGF) were expressed in E. coli and purified by inverse transition cycling, as previously described [29,50]. Free human VEGF-A121 was purchased from ProSpec (East Brunswick, NJ, USA).
Determining the Transition Temperature of ELP Fusion Proteins
Each ELP fusion protein was dissolved in phosphate-buffered saline at a final concentration of 10 µM. Turbidity of the ELP protein solutions was measured by monitoring optical density at 350 nm (OD350) using a UV-visible spectrophotometer with a Peltier-controlled temperature block (Cary 100, Agilent, Santa Clara, CA, USA). The temperature was increased from 20 °C to 90 °C at a rate of 0.5 °C per minute, and data were collected every 0.5 °C with an averaging time of 2 s. Turbidity data were plotted as the percentage of the maximum OD350 after correcting the baseline to zero at 20 °C. A plot of the first derivative of the turbidity profile was generated using GraphPad Prism (GraphPad Software, Inc., San Diego, CA, USA). The transition temperature (Tt) was defined as the peak in the first derivative plot of the aggregation curve.
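As a rough illustration of this analysis step, the sketch below estimates a transition temperature from a turbidity ramp by locating the peak of the first derivative; the temperature and OD values are synthetic and the normalization details are an assumption, not part of the published protocol.

```python
import numpy as np

def transition_temperature(temps_c, od350):
    """Estimate Tt as the temperature at which d(OD350)/dT is maximal."""
    od = np.asarray(od350, dtype=float)
    temps = np.asarray(temps_c, dtype=float)
    # Normalize to percent of maximum after zeroing the baseline at the start
    od = od - od[0]
    od = 100.0 * od / od.max()
    # First derivative of the turbidity profile with respect to temperature
    dod_dt = np.gradient(od, temps)
    return temps[np.argmax(dod_dt)]

# Synthetic sigmoidal turbidity curve centered near 52 °C (illustrative only)
temps = np.arange(20.0, 90.5, 0.5)
od = 1.0 / (1.0 + np.exp(-(temps - 52.0) / 1.5))
print(f"Estimated Tt: {transition_temperature(temps, od):.1f} °C")
```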
Cell Culture
Human glomerular microvascular endothelial (HGME) cells were purchased from Cell Systems (Kirkland, WA, USA) and subcultured according to the manufacturer's recommendations using Attachment Factor TM (Cell Systems, Kirkland, WA, USA) and complete classic medium supplemented with Culture Boost TM (Cell Systems, Kirkland, WA, USA). Cells in passage 4-13 were used for all experiments. Human renal proximal tubular epithelial cells (HRPTEpC) were purchased from Cell Applications, Inc. (San Diego, CA, USA) and subcultured according to the manufacturer's recommendations using RenaEpi Growth factor media. Cells in passage 2-4 were used for all experiments. Human podocyte cells were purchased from Celprogen (Torrance, CA, USA) and subcultured according to the manufacturer's recommendations using human podocyte cell culture media plus serum. The cells were seeded in ECM-coated flasks or Microtiter plates purchased from Celprogen. Cells in passage 9-13 were used for all experiments. All cells were maintained at 37 • C in a humidified incubator at 5% CO 2 .
Western Blotting and Silver Staining
ELP, KTP-ELP, ELP-VEGF, and KTP-ELP-VEGF proteins were electrophoresed using SDS-PAGE (4%-20%) Stain-Free TM gels (Bio-Rad, Hercules, CA, USA). After electrophoresis, one of two identical gels was imaged by Bio-Rad stain-free imaging or processed for silver staining (Pierce), and the second gel was used for Western blotting. The proteins were transferred to a nitrocellulose membrane using 1-Step transfer buffer (Thermo Scientific, Rockford, IL, USA) with a Pierce G2 Fast Blotter (Thermo Scientific, Rockford, IL, USA). Following the transfer, the membrane was blocked with 5% dry milk in PBS-T for 1 h at room temperature. At the end of the incubation, the membrane was probed with anti-VEGF (A20) antibody (Santa Cruz, SC 152) at 1:200 dilution overnight at 4 • C. Following incubation, the membrane was washed with PBS-T. After the wash, the membrane was incubated with goat anti-rabbit poly-HRP antibody (Pierce) at a 1:10,000 dilution for 1 h at room temperature. Following the incubation, the membrane was washed with PBS-T, incubated with SuperSignal West Femto substrate (ThermoFisher, Waltham, MA, USA), and the bands were visualized using chemiluminescence. Blot imaging was performed using the Bio-Rad Universal Hood Gel Doc System.
In another set of experiments, cell lysates of HGME, HRPTEpC, and human podocytes were prepared using radioimmunoprecipitation assay (RIPA) buffer. Cell lysates at equal concentrations were electrophoresed on SDS-Page (4%-20%) Stain-Free TM gels (Bio-Rad). After electrophoresis, the proteins were transferred to a nitrocellulose membrane. The membranes were blocked with 5% dry milk in PBS-T for 1 h at room temperature. The membranes were probed for anti-VEGF R1 (Abcam, ab 23152, Cambridge, MA, USA) and anti-VEGF R2 (D8, Santa Cruz Biotechnology, Santa Cruz, CA, USA) at 1:1000 and 1:200 dilutions, respectively. The membranes were incubated with primary antibodies overnight at 4 • C. At the end of incubation, the membranes were washed with PBS-T, followed by incubation with anti-rabbit poly-HRP at 1/10,000 dilution for 1 h at room temperature. Blots were visualized as described above, then re-probed with anti-GAPDH antibody (Millipore, MAB374).
Proliferation Assay
HGME cells were seeded at 10,000 cells/well in 96-well plates and incubated at 37 • C in a humidified incubator with 5% CO 2 overnight. The cells were serum and growth factor starved for 2-3 h before treatment. After starvation, 100 µL of each protein (ELP, KTP-ELP, ELP-VEGF, KTP-ELP-VEGF, and free VEGF) was added to basal media to make final concentrations of 1, 10, and 100 nM, and incubated for an additional 72 h. Viable cells were detected using MTS cell proliferation assay (Promega). The data shown represent the mean ± standard error of the mean (s.e.m.) of three independent experiments each performed in quadruplicate.
Tube Formation Assay
A 48-well plate, which was sterile and non-tissue culture treated, was coated with growth factor-reduced Matrigel (BD Biosciences). HGME cells were serum-and growth factor-starved for 2-3 h before seeding them over Matrigel-coated wells at 30,000 cells per well in 5% complete media containing 0.1 mg/mL of heparin in the absence or presence of a final concentration of 100 nM of the proteins ELP, KTP-ELP, ELP-VEGF, KTP-ELP-VEGF, or free VEGF. The cells were incubated at 37 • C in a humidified incubator with 5% CO 2 for 5 h. At the end of the incubation, the cells were imaged with an inverted microscope using bright field illumination and 10× magnification. Five non-overlapping fields per well were imaged, and the tubes between two cell nodes were counted for each field, averaged for each well, and expressed relative to untreated wells. The data represent the mean ± s.e.m. of three independent experiments.
Migration Assay
Corning BioCoat growth factor-reduced Matrigel Invasion Chambers (Corning Biocoat) were warmed to room temperature, and the interior of the inserts were rehydrated with basal media (Cell Systems) for 2 h in a humidified incubator at 37 • C with 5% CO 2 . HGME cells at 30,000 cells per well in basal media containing 1% fetal bovine serum and 0.1 mg/mL heparin were added to the interior of the inserts in 500 µL volume. ELP, KTP-ELP, ELP-VEGF, KTP-ELP-VEGF, and free VEGF at a final concentration of 100 nM in a final volume of 750 µL was added to the same media in the wells of a 48-well tissue culture-treated plate. The inserts were gently placed into each designated well, taking care to avoid air bubbles. The cells were incubated for 16-18 h in a humidified incubator at 37 • C with 5% CO 2 . After incubation, any cell suspension left in each insert was removed, the inserts were rinsed with DPBS, and non-invading cells were scrubbed from the upper surface of the membrane using a cotton swab. The cells on the lower surface of the membrane were stained with 0.1% crystal violet in 10% ethanol at room temperature for 30 min. The inserts were rinsed with water and air dried for an additional 60 min. Membranes were photographed using an inverted microscope and 10× magnification objective on five independent fields per membrane. The number of cells per field were counted and averaged for each well. The data represent the mean ± s.e.m. of three independent experiments.
Flow Cytometry
HGME, HRPTEpC, and human podocyte cells were seeded at 300,000 cells per well in 6-well plates (ECM-coated plates were used for human podocyte cells) and incubated at 37 • C in a humidified incubator with 5% CO 2 overnight. The cells were washed and treated with fluorescein-labeled ELP, KTP-ELP, ELP-VEGF, and KTP-ELP-VEGF at a final concentration of 10 µM and incubated at 37 • C in a humidified incubator with 5% CO 2 overnight. At the end of the incubation, the cells were washed with DPBS twice and 500 µL of cell-stripper buffer (Corning Mediatech, Tewksbury, MA, USA) was added to each well, followed by the addition of 1 mL of DPBS. The cell suspension was removed, placed in fresh polystyrene tubes, and centrifuged at 400× g. The cell pellets were resuspended in 400 µL of DPBS. The relative green fluorescence intensity of the cells was measured using flow cytometry (Gallios, Beckman Coulter, Indianapolis, IN, USA). Forward versus side scatter was used to gate viable cells, and the mean fluorescence intensity was determined. The mean fluorescence intensity was corrected for autofluorescence (determined by analyzing untreated cells) and normalized by correcting for differences in labeling efficiency among the various proteins. Independent experiments were performed in duplicate and repeated thrice.
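The correction of mean fluorescence intensity described above amounts to simple arithmetic; the sketch below shows one plausible way to apply it, where the autofluorescence value and per-protein labeling efficiencies are placeholder numbers rather than measured values from this study.

```python
def corrected_mfi(raw_mfi, autofluorescence_mfi, labeling_efficiency):
    """Subtract untreated-cell autofluorescence and normalize by the
    fluorophore-to-protein labeling ratio so proteins are comparable."""
    return (raw_mfi - autofluorescence_mfi) / labeling_efficiency

# Placeholder values: raw MFI of treated cells and dye/protein labeling ratio
samples = {
    "ELP":          (5200.0, 1.10),
    "KTP-ELP":      (14800.0, 0.95),
    "ELP-VEGF":     (21000.0, 1.25),
    "KTP-ELP-VEGF": (19500.0, 1.05),
}
autofluorescence = 800.0  # MFI of untreated cells (placeholder)
for name, (mfi, label_eff) in samples.items():
    print(name, round(corrected_mfi(mfi, autofluorescence, label_eff), 1))
```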
In Vivo Biodistribution Studies
All animal studies were approved by the Institutional Animal Care and Use Committee of the University of Mississippi Medical Center (Approval number: 1379B, 25 April 2019), and the experiments were performed according to the Guide for the Care and Use of Laboratory Animals [51]. For acute tissue biodistribution studies, SKH-1 Elite hairless mice (female, Charles River) were anesthetized with 3% isoflurane, followed by administration of rhodamine-labeled ELP, KTP-ELP, ELP-VEGF, and KTP-ELP-VEGF (20 mg/kg) by intravenous injection (IV) into the femoral vein. Four hours after the injections, the mice were euthanized while still under anesthesia, and their organs were collected for whole-organ fluorescence biodistribution analysis (n = 4 mice per protein). All organs were imaged using an in vivo imaging system (IVIS Spectrum, Perkin Elmer, Foster City, CA, USA), followed by embedding in freezing medium (Tissue-Plus optimal cutting temperature (O.C.T.), Fisher scientific, Waltham, MA, USA) and flash freezing in dry ice/isopentane for further analysis.
For longer-term pharmacokinetic and whole-body fluorescence experiments, SKH-1 Elite hairless mice (n = 4 mice per protein) were injected with rhodamine-labeled ELP, KTP-ELP, ELP-VEGF, and KTP-ELP-VEGF intravenously (20 mg/kg, femoral vein), as above, and blood was sampled intermittently after injection by nicking the tail vein. Whole-animal fluorescence images of the live animals were collected at regular intervals for 120 h using an IVIS Spectrum (Perkin Elmer).
IVIS images were collected using 535 nm excitation and 580 nm emission filters, auto exposure, and small binning. Regions of interest (ROIs) were drawn over the entire organ (tissue biodistribution studies) or animal (whole-body clearance studies), and the mean radiant efficiency was determined. Standard curves of each protein were pipetted into a black 96-well plate, which were subsequently imaged with identical imaging parameters. The mean tissue fluorescence was fit to these standard curves to correct for any differences in labeling levels among the polypeptides.
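Fitting organ fluorescence to a pipetted standard curve, as described above, is essentially a linear calibration; a minimal sketch is shown below, assuming the standards behave linearly over the measured range. The radiant-efficiency numbers are invented for illustration only.

```python
import numpy as np

# Standard curve: known protein amounts (nmol) vs. measured mean radiant efficiency
std_amounts = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
std_signal = np.array([120.0, 950.0, 1800.0, 3500.0, 7100.0])

# Least-squares line through the standards (signal = slope * amount + intercept)
slope, intercept = np.polyfit(std_amounts, std_signal, 1)

def signal_to_amount(mean_radiant_efficiency):
    """Invert the calibration line to convert an organ ROI signal to protein amount."""
    return (mean_radiant_efficiency - intercept) / slope

# Example: ROI signal measured over a kidney (placeholder value)
print(f"Estimated protein in ROI: {signal_to_amount(2600.0):.2f} nmol")
```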
Plasma samples from each blood collection were prepared by centrifugation, and 2 µL of each plasma sample was used to measure the fluorescence intensities for each labeled protein using a fluorescence plate reader and a NanoQuant plate (Tecan, Männedorf, Switzerland) with an excitation wavelength of 535 nm and an emission wavelength of 585 nm. The relative fluorescence measurements of the plasma samples were compared to a standard curve for each protein with known concentrations to determine the exact plasma concentrations of each protein at each time point. Plasma clearance data were fit to a two-compartment pharmacokinetic model, as described in [22].
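For the plasma clearance analysis, a two-compartment model corresponds to a biexponential decay; the sketch below fits such a model to concentration–time data with SciPy, using made-up plasma concentrations and initial guesses rather than the actual values from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_compartment(t, a, alpha, b, beta):
    """Biexponential plasma concentration: distribution + elimination phases."""
    return a * np.exp(-alpha * t) + b * np.exp(-beta * t)

# Made-up plasma concentrations (µM) sampled after a bolus IV injection
t_h = np.array([0.083, 0.25, 0.5, 1, 2, 4, 8, 24, 48])
conc = np.array([95.0, 70.0, 52.0, 35.0, 22.0, 14.0, 9.0, 2.4, 0.3])

params, _ = curve_fit(two_compartment, t_h, conc,
                      p0=[60.0, 2.0, 30.0, 0.1], maxfev=10000)
a, alpha, b, beta = params
# The terminal half-life is governed by the slower of the two rate constants
terminal_half_life = np.log(2) / min(alpha, beta)
print(f"Terminal half-life: {terminal_half_life:.1f} h")
```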
Frozen kidneys from the acute biodistribution cohort described above were sectioned into 16 µm mid-hilar sections using a cryomicrotome. Standards of known quantities of each labeled protein were also frozen and sectioned to the same thickness, as described in [52]. For the quantitative histology assay, kidney sections and standards were scanned with a fluorescence slide scanner as described [52,53], and the mean fluorescence intensities of the sections were fit to the standard curves to determine the intra-renal concentrations of each protein. Next, the sections were fixed and processed for fluorescence histology by co-staining for CD31, an endothelial cell marker, or synaptopodin, a podocyte marker, and imaged by confocal microscopy, as described in [4].
Statistical Analysis
In vitro experiments were plated in replicates, as indicated above, and repeated at least three times independently. Data were analyzed using one-way ANOVA with post-hoc Tukey's multiple comparison to detect differences among proteins or two-way ANOVA with post-hoc Tukey's multiple comparison to detect differences among protein treatment and dose, as appropriate, using Graphpad Prism. Biodistribution data were analyzed using two-way ANOVA with factors for protein treatment and organs, and a post-hoc Tukey's multiple comparison correction was used.
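As an illustration of this statistical workflow, the snippet below runs a one-way ANOVA followed by Tukey's multiple comparison on a small made-up dataset using SciPy and statsmodels; the group labels and values are placeholders, and the original analyses were performed in GraphPad Prism.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder replicate measurements (e.g. relative proliferation) per treatment
groups = {
    "untreated":    [1.00, 1.05, 0.97, 1.02],
    "ELP":          [1.03, 0.99, 1.06, 1.01],
    "ELP-VEGF":     [3.8, 4.1, 3.9, 4.2],
    "KTP-ELP-VEGF": [3.9, 4.0, 4.3, 3.7],
}

# One-way ANOVA across all groups
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc Tukey HSD to identify which pairs of groups differ
values = np.concatenate([np.array(v) for v in groups.values()])
labels = np.concatenate([[name] * len(v) for name, v in groups.items()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```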
Production of ELP Fusion Proteins
ELP, KTP-ELP, ELP-VEGF, and KTP-ELP-VEGF proteins were purified using inverse transition cycling. All of the proteins were obtained at high purity, as assessed by SDS-PAGE and stain-free or silver staining ( Figure 1A, left panel). Yields of the proteins from the bacterial expression system varied according to the construct. ELP and ELP-VEGF were readily purified in mg to gram quantities, with yields >10 mg of protein per liter of bacterial culture. In contrast, KTP-ELP and KTP-ELP-VEGF were expressed by bacteria at much lower levels, typically less than 1 mg of protein per liter of bacterial culture. Stain-free imaging was used to visualize ELP and KTP-ELP proteins (lanes 1 and 2), as they did not stain with Coomassie and stained very poorly with silver, but were highly visible using Bio-Rad Stain-Free TM technology. Silver staining worked well on the VEGF-modified proteins (lanes 3 and 4). Gel imaging revealed that all of the proteins were highly pure and electrophoresed at the expected molecular weights. Western blotting was used to confirm the presence of the VEGF moiety. A duplicate gel was probed for VEGF, and the ELP-VEGF and KTP-ELP-VEGF lanes produced strongly reactive bands that matched well to the silver-stained bands, with no VEGF reactivity in the ELP or KTP-ELP lanes. A higher molecular weight VEGF-reactive band was also visible in the ELP-VEGF and KTP-ELP-VEGF samples, and its molecular weight indicated that it represented a disulfide linked covalent dimer of ELP-VEGF or KTP-ELP-VEGF (due to the lack of complete reduction prior to electrophoresis). However, these dimer bands were a very minor component of the total protein, as assessed by silver staining. There also appeared to be a VEGF-reactive band at a molecular weight just below the full length KTP-ELP-VEGF band in the Western blot. The identity of this band was unknown, but it was not visible in the silver staining, suggesting that it was a very minor component of the total protein. These data illustrated the integrity and purity of the proteins and confirmed the presence of the VEGF moiety in the ELP-VEGF and KTP-ELP-VEGF proteins.
The transition temperature of all proteins was determined by monitoring the turbidity of the solutions with increasing temperature. As shown in Figure 1B, all ELP-containing proteins underwent a temperature-induced phase transition, resulting in the production of polypeptide coacervates. The Tt of each protein was defined as the peak of a first-derivative plot of the turbidity curve. The unmodified ELP had a transition temperature of 71.8 °C (Figure 1C). Modification of the ELP carrier with the KTP or VEGF moieties caused large decreases in the Tt. The Tt of KTP-ELP was 52.3 °C, and the Tt of both ELP-VEGF and KTP-ELP-VEGF was 50.4 °C. Importantly, for all ELP fusion proteins used in this study, the Tt was well above physiologic temperature, indicating that when injected in vivo, all of the ELP fusion proteins described here remained present as soluble proteins and did not undergo coacervation.
ELP-Fused VEGF Constructs Stimulate Angiogenic-Like Activity in Human Glomerular Microvascular Endothelial Cells
In order to determine if the ELP-VEGF and KTP-ELP-VEGF fusion proteins maintained their VEGF signaling activity, their ability to stimulate proliferation, tube formation, and extracellular matrix invasion in HGME cells and their potency relative to free VEGF-A121 were assessed. VEGF is a potent mitogen and a chemokine for endothelial cells. When HGME cells were exposed to VEGF over the course of a 72 h experiment, proliferation was significantly stimulated in a dose-dependent manner ( Figure 2). There were four-fold more viable cells after 72 h of exposure to VEGF versus unstimulated cells. Similar to free VEGF, ELP-VEGF and KTP-ELP-VEGF both induced HGME proliferation. The dose response was similar for all three proteins, as there were no statistically significant differences among the levels of stimulation induced by each protein within each dosage. In contrast, the ELP protein alone and the KTP-ELP protein that lacked the VEGF domain had no effect on HGME proliferation. These data were consistent with our previous results regarding ELP-VEGF in human umbilical vein endothelial cells [29] and HGME cells [36], clearly showing that fusion of VEGF to ELP or KTP-ELP carriers did not affect its potency to stimulate proliferation in endothelial cells.
In addition to mitogenic activity, the pro-angiogenic activity of the proteins was determined by assessing their ability to induce HGME tube formation on growth factor-reduced Matrigel. Without protein treatment, HGME cells poorly formed tube-like structures on the growth factor-reduced matrix (Figure 3A). ELP and KTP-ELP had no effect on the number of tube-like structures (Figure 3B,C). ELP-VEGF, KTP-ELP-VEGF, and free VEGF, on the other hand, strongly induced tube formation (Figure 3D-F). The average number of tubes per field was not different between the cells treated with ELP-VEGF, KTP-ELP-VEGF or free VEGF at an equimolar dose (Figure 3G). The chemokine activity of the proteins was tested using HGME cells in a Boyden chamber Matrigel invasion assay. As shown in Figure 4, ELP-VEGF, KTP-ELP-VEGF, and free VEGF all induced significant invasion of HGME cells through the matrix toward the protein-containing chamber. No difference in the number of cells was seen between the three VEGF-containing protein groups. In contrast, ELP and KTP-ELP induced no endothelial cell Matrigel invasion.
Cell Binding/Uptake of Polypeptides in Primary Human Renal Cells
After demonstrating that the ELP-fused VEGF fusion proteins maintained the ability to stimulate endothelial cells, primary human tubular epithelial cells and podocytes, as well as endothelial cells, were used to determine the contribution of each portion of the protein to cell-type binding. The addition of KTP to ELP increased its binding to all three renal cell types (Figure 5A), which was consistent with our previous results [4]. The proteins that contained the VEGF moiety, ELP-VEGF and KTP-ELP-VEGF, showed the most binding to endothelial cells. This was consistent with the VEGF moiety mediating interaction with VEGF receptors, which were expressed by the endothelial cells (Figure 5B). Interestingly, KTP-ELP, ELP-VEGF, and KTP-ELP-VEGF all bound strongly to or were internalized by tubular epithelial cells, which did not express VEGF receptors (Figure 5A,B). This binding may have been due to interaction with other membrane proteins, including receptors responsible for protein reabsorption, which are a subject of ongoing studies. Binding to podocytes was increased by the addition of KTP to the ELP protein in the case of KTP-ELP, but not in the case of KTP-ELP-VEGF. The VEGF-containing proteins did not bind to podocytes at levels higher than the ELP control, despite these cells expressing high levels of VEGF receptors. The reason for the lack of binding of the VEGF-containing proteins to podocytes was unclear, but it was consistent with the intra-renal distribution of these proteins (shown below).
Pharmacokinetics and Biodistribution of ELP, KTP-ELP, ELP-VEGF, and KTP-ELP-VEGF
The pharmacokinetics, whole-body clearance, and biodistribution of all proteins were determined in SKH-1 Elite hairless mice following a single bolus intravenous (IV) injection (20 mg/kg injected via the femoral vein). The plasma clearance kinetics were not significantly different among the polypeptides (Figure 6A). The terminal half-lives were 6.0, 8.0, 6.2, and 3.2 h for ELP, KTP-ELP, ELP-VEGF, and KTP-ELP-VEGF, respectively. Qualitative differences were seen in the whole-body clearance rates, which were determined by in vivo imaging of the hairless mice at each time point after polypeptide injection (Figure 6B). ELP peaked in the tissue most rapidly among the polypeptides tested, within 1 h of injection, then started to clear from the body. KTP-ELP peaked much later after injection (5 h) and remained at higher in vivo levels than ELP for approximately two days, which was consistent with our previous results in rats [4]. ELP-VEGF and KTP-ELP-VEGF also showed delayed peak tissue concentration times relative to ELP and were similar to each other, with a peak at 4 h after injection. ELP-VEGF and KTP-ELP-VEGF cleared from the body with similar kinetics over the course of about three days. The slowed whole-body clearance of KTP-ELP, ELP-VEGF, and KTP-ELP-VEGF relative to the unmodified ELP was likely related to their binding to target molecules and/or extravasation in tissues. These data demonstrated that whole-body clearance of KTP-ELP or ELP-modified VEGF proteins occurred over the course of two to three days, and they suggested that daily or every-other-day dosing would be ideal for future therapeutic applications if a repeated dosing regimen was required for a given disease indication.
A separate cohort of mice was given an identical injection protocol, but these individuals were sacrificed 4 h after drug administration in order to determine organ biodistribution. Ex vivo whole-organ imaging (Figure 7A) revealed that, consistent with previous work in rats and pigs [4,23], ELP accumulated most in the kidney, and the addition of KTP to ELP increased renal deposition approximately five-fold (Figure 7B). The liver was a distant second in terms of organ levels of ELP and KTP-ELP, although KTP-ELP levels in the liver were significantly higher than ELP liver levels. ELP-VEGF demonstrated a different biodistribution profile. Its levels were highest in the liver, and it also accumulated at high levels in the kidney. The deposition of ELP-VEGF in the liver was consistent with our previous mouse study [29], although the biodistribution of ELP-VEGF appeared to be dependent on the species. In mice after a bolus IV injection, both in this study and in our previous work [29], ELP-VEGF accumulated at higher levels in the liver than in the kidney. However, in rats (after continuous intraperitoneal infusion [35], IV injection, or subcutaneous injection (unpublished data)) and in pigs after intravenous injection [46], ELP-VEGF accumulated in the kidneys at higher levels than in the liver. Interestingly, the addition of KTP to the ELP-VEGF construct redirected the protein toward the kidneys. Though the liver levels of KTP-ELP-VEGF remained high, KTP-ELP-VEGF was also present at equally high levels in the kidney. All polypeptide levels were low in the brain, heart, lung, and spleen, and there were no differences in these organs among the various proteins. These results demonstrated the ability of KTP to increase deposition of the ELP carrier in the kidney and to re-direct ELP-VEGF from the liver to the kidney. Future work will evaluate the effect of adding KTP to ELP-VEGF in other species where ELP-VEGF does not accumulate to such high levels in the liver. In order to obtain more quantitative measures of intra-renal polypeptide concentrations and to determine intra-renal polypeptide distribution, the kidneys were cryosectioned and analyzed using direct fluorescence detection of the labeled polypeptides. Slice imaging of mid-hilar sections revealed that all polypeptides accumulated predominantly in the renal cortex (Figure 8A). Using a quantitative fluorescence histology assay [52], intra-renal concentrations were determined (Figure 8B). ELP levels were the lowest among the four proteins, with levels reaching approximately 12 µg/mL at this time point with this dosage. KTP-ELP, ELP-VEGF, and KTP-ELP-VEGF levels were all higher than ELP levels, ranging from 58-93 µg/mL at this dose and time point.
In contrast to the whole-organ imaging, this more accurate assay did not reveal ELP-VEGF kidney levels to be lower than KTP-ELP or KTP-ELP-VEGF levels.
Intra-renal histology was conducted with all proteins and overlaid with cell-type staining for podocytes (marked by synaptopodin staining, Figure 9A) or endothelial cells (marked by CD31 staining, Figure 9B). All of the polypeptides accumulated at high levels in the tubules. ELP and KTP-ELP were present at much lower levels in the glomeruli. However, ELP-VEGF and, to a lesser extent, KTP-ELP-VEGF were seen in punctate foci in the glomeruli (Figure 9A, insets). The focal glomerular staining of ELP-VEGF and KTP-ELP-VEGF did not directly overlay with synaptopodin staining, likely reflecting the presence of these proteins in the glomerular capillaries but not directly interacting with the podocytes. When endothelial cells were co-stained, all of the proteins were visible in the walls of larger vessels (Figure 9B) in both the endothelial and vascular smooth muscle cell layers. Tubular staining of all proteins was also readily visible in these sections. To control for autofluorescence or non-specific antibody staining, sections from an animal injected with saline were stained using the same protocol, but with no primary antibody, and imaged with identical parameters. Autofluorescence was observed in the red channel at the settings used to detect the ELP proteins, especially in the tubules (Figure 9B, bottom panel).
However, the autofluorescence did not reach the level of signal seen in the protein-treated animals, indicating that there were indeed ELP proteins present in the blood vessels and renal tubules. No staining occurred with the antibody control, thereby validating the specificity of the podocyte and endothelial cell markers. These data indicated that ELP proteins were present at high levels in the kidney in both the renal blood vessels and tubular epithelial cells. Additionally, the VEGF-containing proteins were present in focal regions within the glomeruli, consistent with the glomerular capillaries.
Discussion
The ELP molecule was shown to be a versatile drug carrier with many therapeutic advantages, including improved therapeutic targeting, controlled pharmacokinetics and/or drug release, and the ability to deliver many types of therapeutic cargo [2]. Our group has used ELPs extensively for the delivery of growth factors. We are currently developing ELP-fused growth factors, including members of the VEGF family, as therapeutics for kidney disease [54] and preeclampsia [55,56]. This work expanded on our previous studies, in which we demonstrated the ability of ELP fusion to facilitate VEGF purification [29], stabilize VEGF for in vivo delivery [29,36,46], and improve the efficacy of renal therapeutic angiogenesis [36,46,47]. In this work, we modified our previously-used ELP-VEGF fusion protein with a kidney-targeting peptide, and we determined the effects of each domain of the KTP-ELP-VEGF protein on its in vitro activity and cell binding and its in vivo pharmacokinetics and biodistribution.
In vitro studies in renal microvascular endothelial cells consistently showed that fusion of VEGF to the ELP carrier did not hamper its ability to stimulate angiogenic-like activities. Since the potency of endothelial cell proliferation, invasion, and tube formation were not affected by the fusion of VEGF to ELP carriers, we suspected that the chimeric proteins were able to engage with VEGF receptors at near-native affinity. Ongoing work will determine the affinity constants of the ELP-fused VEGF proteins for the Flt-1 and Flk-1 VEGF receptors, and compare the affinity to unmodified VEGF-A. Cell-binding studies also revealed interesting and somewhat unpredicted effects of each of the protein's domains on cellular interaction. The VEGF-containing proteins were the highest binders to the endothelial cells, which was expected given the expression of VEGF receptors by these cells. Also consistent with expectations and previous studies was the ability of KTP to improve binding of the proteins to all renal cell types. The target of the kidney-targeting peptide is not yet known, but ongoing proteomic studies are attempting to determine a binding partner for this peptide. More unexpected was the high binding of all proteins, including those that possess the VEGF domain, to tubular epithelial cells. This binding was not due to VEGF-receptor interactions, as these cells did not express either Flt-1 or Flk-1. Rather, this binding was possibly mediated by more generic protein reabsorption receptors, such as megalin or cubilin [57], and examining these potential interactions is also the subject of ongoing work. The in vitro binding to the tubular epithelial cells was consistent with the in vivo distribution of all of the proteins within the kidneys, which were highly concentrated in the renal tubules. Also surprising was the lack of the podocyte binding by VEGF-containing proteins, as these cells expressed high levels of VEGF receptors. However, the lack of binding to podocytes was also consistent with the in vivo distribution, as these proteins were mostly excluded from glomeruli or present only in glomerular capillaries.
The in vivo pharmacology studies revealed that the content of the proteins had strong effects on their pharmacokinetics and biodistribution. Whole-animal clearance studies showed that all additions to the core ELP molecule slowed its in vivo clearance, likely by mediating binding to target molecules. ELP and KTP-ELP predominantly accumulated in the kidneys, and KTP significantly enhanced renal deposition, which was consistent with its function as a kidney-targeting agent. ELP-VEGF, on the other hand, was present at high levels in the kidneys and in the liver. The accumulation of ELP-VEGF in the liver appeared to be species-specific, as this was shown in our previous mouse study [29] but not in rats or swine, where the kidney is the main organ of ELP-VEGF accumulation [4,23,46]. Most importantly, KTP was still effective for directing ELP-VEGF to the kidney, as kidney levels of KTP-ELP-VEGF were strongly increased relative to ELP-VEGF. We are currently evaluating the KTP-ELP-VEGF molecule in our swine model. This ongoing study will determine the therapeutic efficacy of KTP-ELP-VEGF to protect or restore the microvasculature and improve renal function in a translational model of chronic renovascular disease.
Conclusions
The ELP protein is a valuable drug carrier, and it can be modified with targeting domains and therapeutic cargo to achieve optimal drug delivery. This study demonstrated that ELP-fused cytokines retained functionality as large chimeric proteins. It also demonstrated the utility of a targeting agent to modulate the biodistribution of an ELP-fused therapeutic. Delivery of the ELP carrier and ELP-fused VEGF to the kidney was improved by the utilization of a kidney-targeting peptide. The KTP-ELP-VEGF fusion protein described here may have efficacy for treatment of multiple types of ischemic renal diseases.
Patents
The intellectual property described in this manuscript is protected by US patent number 10,322,189 and by additional pending US and worldwide patent applications.
Conflicts of Interest: G.L.B.III is the owner of Leflore Technologies, LLC, a private company working to commercialize ELP-based drug delivery technology. The company had no role in the design of the study, in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. G.L.B.III and A.R.C. are inventors of patents related to this work. The funders had no role in the design of the study, in the collection, analyses, or interpretation of data, in the writing of the manuscript, or in the decision to publish the results.
|
v3-fos-license
|
2020-06-11T09:09:53.171Z
|
2020-05-19T00:00:00.000
|
225879752
|
{
"extfieldsofstudy": [
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.9744/ced.22.1.1-5",
"pdf_hash": "005994c25e4be44dcabcd6b2152ae95b30369559",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44532",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"sha1": "16d8b248b80e60c50083629096b69b526bed677a",
"year": 2020
}
|
pes2o/s2orc
|
Promoting Precipitation Technique using Bio-Chemical Grouting for Soil Liquefaction Prevention
The applicability of bio-chemical grouting as an environmentally friendly method for liquefaction remediation was evaluated. Several combinations of organic and inorganic precipitation methods were tested to obtain the optimum grouting solution. The organic precipitation method, which employs the urease enzyme as a bio-agent, promotes the precipitation of calcite crystals, whereas the inorganic method uses chemical compounds only, without the bio-agent. Unconfined compression strength tests were conducted to assess the applicability of the grouting solutions for improving soil strength. The experimental results showed that organic precipitation produced a large precipitated mass and resulted in a significant improvement in soil strength: the precipitated material within the soil grains generated a strength of 272 kPa. The results of this study indicate that the organic precipitation method, composed of reagents and the urease enzyme, may be an alternative soil-improvement technique for reducing liquefaction susceptibility.
Introduction
Liquefaction commonly occurs in saturated granular soils, such as sandy and silty soils. When such soil is subjected to cyclic loading during an earthquake, the pore water pressure increases under cyclic undrained loading, which leads to a decrease in the effective confining pressure and, in turn, to a significant loss of shear strength in the soil [1-4]. This phenomenon is able to trigger several types of damage to engineering structures, such as building toppling, settlement (floating), soil deformation, sand boils, and other failures. Damaging effects of liquefaction were reported during the Niigata earthquake in 1964, the Hyogoken-Nambu earthquake in 1995, and the Tohoku-Japan earthquake in 2011 [2,5,6].
Several soil improvement techniques have been developed to enhance soil resistance to liquefaction and reduce possible damage, such as densification; solidification by cement, epoxy, silicates, or other chemical compounds; and bio-grouting methods using calcite precipitation techniques [4,7-9]. Zen [10] conducted a premixing method using cement to increase the cohesion of sandy soil. It was found that mixing in 5.5% cement improves the cohesion to 98 kPa, which is equivalent to an N-SPT of 15-20 and a UCS of about 100 kPa [10,11]. The same study reported that premixing with a 5.5% cement content appears sufficient to prevent the onset of liquefaction [10,11].
The calcite-induced precipitation method (CIPM) may be one of the innovative and emerging methods for liquefaction mitigation [8,12,13]. Many CIPM studies have used bacteria to dissociate urea into NH4+ and CO3^2- [14-17], which are then precipitated as calcite crystals in the presence of calcium ions. In this technique, the grouting solution, which produces calcite, is injected into the sand sample. The precipitated material in sandy soil may create bonds between the sand grains, limiting their movement and thus enhancing the soil strength. The deposited calcium carbonate fills the voids, thereby reducing the permeability and porosity [15,18,19].
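For reference, the underlying reactions (standard urea-hydrolysis calcite precipitation chemistry, not written out in this excerpt) can be summarized as:

CO(NH2)2 + 2H2O → 2NH4+ + CO3^2- (hydrolysis of urea, catalysed by bacterial ureolysis or by the urease enzyme)
Ca2+ + CO3^2- → CaCO3 (precipitation of calcite in the presence of calcium ions)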
The use of microorganisms in the calcite precipitation method involves some complexities; for example, the incubation of bacteria may be difficult to control and requires special treatment [18]. Yasuhara et al. [18] introduced a promising variant among the calcite precipitation techniques, called enzyme-mediated calcite precipitation (EMCP) [18,20]. In this method, the urease enzyme is used instead of microorganisms to hydrolyze urea into NH4+ and CO3^2-. Using the urease enzyme is more straightforward than using microorganisms because biological handling does not need to be considered [18,19]. The mixture of enzyme and reagents, which produces the precipitated calcite, is applied to the soil samples; thus, the cultivation and fixation of the enzyme are not required [18,20]. The efficacy of the EMCP method has already been assessed in previous studies, in which strengths ranging from 400 kPa to 1600 kPa were obtained [14,21-23].
In the present work, the efficacy of bio-chemical grouting, using enzyme-induced calcite precipitation and chemical compounds, was evaluated for liquefaction mitigation. The optimum combinations, in terms of the mass of precipitated minerals and the chemical reaction, were determined by test-tube experiments. Sand samples were prepared in PVC molds and treated with the selected combinations of bio-chemical grouting. The improvement in the strength of the treated samples was then assessed using unconfined compression strength (UCS) tests. Finally, the applicability of the grouting materials for enhancing soil resistance to liquefaction was evaluated.
Silica sand with a maximum void ratio (emax) of 0.899, a minimum void ratio (emin) of 0.549, a coefficient of uniformity (Cu) of 1.550, and a specific gravity (Gs) of 2.653 was used to evaluate the effect of the bio-chemical grouting on soil strength [23]. Based on the analysis of its particle size distribution, it is categorized as a soil with high liquefaction potential [24,25]. The grain size distribution curve of the silica sand and the liquefaction potential limits are shown in Figure 1 [25].
Precipitation Test
Precipitation tests using transparent tubes were conducted to evaluate the efficacy of several combinations of the candidate bio-chemical grouting in producing precipitated material as a cementing agent. Precipitation tests were performed for both organic and inorganic precipitation. Organic precipitation refers to the utilization of a bio-agent (i.e., the urease enzyme) as a catalyst in the chemical process. The sample preparation procedure developed by Putra et al. [19] was adopted in this work. Firstly, the urease enzyme was mixed into distilled water and filtered using filter paper (pore size of 11 μm) to remove the undissolved particles of urease. Secondly, urea, calcium chloride, and magnesium sulfate were each mixed thoroughly with distilled water separately. Finally, the CaCl2-MgSO4-CO(NH2)2 solution and the filtered enzyme were mixed thoroughly to obtain a total volume of 30 mL and allowed to react for a 3-day curing time. The procedure of the precipitation test is illustrated schematically in Figure 2 [19]. Inorganic precipitation was performed without the bio-agent. Several combinations of sodium dihydrogen phosphate (NaH2PO4), sodium bicarbonate (NaHCO3), magnesium nitrate (Mg(NO3)2), calcium chloride (CaCl2), and sodium hydroxide (NaOH) were mixed to obtain the optimum combinations based on the amount of precipitated material. NaH2PO4, NaHCO3, Mg(NO3)2, CaCl2, and NaOH were each mixed with distilled water separately to make the chemical solutions; then, all the solutions were mixed to obtain a total volume of 30 mL and allowed to react for a 3-day curing time.
After curing, the grouting solution was filtered through filter paper. The particles deposited on the filter paper and the residue in the tubes were dried and weighed to obtain the precipitated mass. Finally, the optimum combinations obtained from the inorganic and organic precipitation tests were selected for application to the soil samples. Two tests were performed for each case to check reproducibility. The experimental conditions for the precipitation tests are shown in Table 1. OC and OS refer to organic precipitation with the reagent combinations CaCl2-MgCl2 and CaCl2-MgSO4, respectively, and IP refers to inorganic precipitation.
Unconfined Compression Strength Test
Unconfined compression strength (UCS) tests were performed to assess the improvement in the strength of the treated sand. The experimental procedures developed by Putra et al. [19] were followed in this study. A PVC cylinder (5 cm in diameter and 10 cm in height) was used to prepare the sand specimens. Firstly, 300 g of dry silica sand was poured into the cylinder to obtain a relative density of 50%. Secondly, 75 mL (i.e., 1 pore volume (PV)) of the optimum grout solution was applied to the sand sample. After the curing time, the treated sample was removed from the PVC cylinder and its surface was flattened before testing. The mechanical properties of the treated samples were evaluated by UCS tests of the specimens in wet conditions [19,26]. Two tests were conducted for each condition to check reproducibility. The procedure of the UCS test is illustrated schematically in Figure 3 [19].
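As a rough cross-check of the injected volume per pore volume, the void space of one specimen can be estimated from the sand properties reported above using standard soil-phase relationships. The sketch below is illustrative only; the variable names are ours and a water density of 1 g/cm3 is assumed.

```python
# Minimal sketch: estimate the pore volume of one UCS specimen from the sand
# properties reported above (emax = 0.899, emin = 0.549, Gs = 2.653, Dr = 50%,
# 300 g of dry sand in a 5 cm x 10 cm PVC mold). Assumes a water density of 1 g/cm3.
import math

e_max, e_min, Gs = 0.899, 0.549, 2.653
Dr = 0.50                                  # target relative density
m_sand = 300.0                             # g of dry silica sand

e = e_max - Dr * (e_max - e_min)           # void ratio at Dr = 50%   -> ~0.724
V_solids = m_sand / Gs                     # cm3 of solid grains      -> ~113 cm3
V_voids = e * V_solids                     # cm3 of voids (1 PV)      -> ~82 cm3

V_mold = math.pi * (5.0 / 2) ** 2 * 10.0   # ~196 cm3, consistent with V_solids + V_voids
print(round(e, 3), round(V_voids, 1), round(V_mold, 1))
```

The estimated pore volume of roughly 80 mL is broadly consistent with the 75-78 mL per PV used in the tests.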
Results and Discussion
The precipitation test results for several combinations of the organic and inorganic precipitation methods were evaluated, and a summary is shown in Figure 4. In the organic precipitation series OC, the utilization of 0.10 MgCl2 (i.e., OC-2) produced a higher amount of precipitation than OC-1; the measured masses were 1.52 g and 1.65 g. In the case of OS, the use of 0.10 MgSO4 promoted a higher mass of precipitated material than 0.05 MgSO4. In the inorganic precipitation, the precipitated masses obtained from IP-1 and IP-3 were similar and higher than that of IP-2. This result indicates that the presence of Mg(NO3)2 and NaOH contributes more to increasing the precipitated mass than NaH2PO4. The optimum conditions from each case of the precipitation tests were selected for application to the prepared sand samples; the selected grouting solutions are summarized in Table 2. UCS tests were then performed to evaluate the impact of the grouting material on the strength of the treated sand. The injected volume of the grouting solution was controlled by the number of pore volumes (PV); all samples were treated with 1 or 2 PV, one PV being ~78 mL. The UCS test results are shown in Figure 5. The strength of the treated sand obtained with 1-2 PV of grouting solution varied in the range of 28-272 kPa. Significant improvements in soil strength were obtained in the cases of OC-2 and OS-2, with the greatest improvement in the case of OS-2, where the strength increased from 67 kPa to 272 kPa. In contrast, in the cases of IP-1 and IP-3, further treatment had no significant effect on the strength of the treated soil. These results show that the utilization of 2 PV of grouting solution OS-2, composed of calcium chloride, magnesium sulfate, urea, and the urease enzyme, is a promising method for liquefaction mitigation since, as mentioned by Zen [10], a UCS (strength) of 100 kPa is enough to prevent the onset of liquefaction [10,11].
Conclusions
The applicability of several combinations of bio-chemical grouting has been evaluated for possible application in liquefaction mitigation. Organic and inorganic precipitation tests were conducted to evaluate the mass of the precipitated material, and the optimum grouting solutions were selected. UCS tests were then performed using the materials selected from the precipitation test results: the selected grouting solutions were applied to the soil samples, and the improvement in strength was evaluated.
The results of this study show that organic precipitation promoted a higher amount of precipitated material than inorganic precipitation. The additional treatment also had a significant effect on the improvement of soil strength: a strength of 272 kPa was obtained in the case of OS-2 treated with 2 PV. This result reveals that the grouting solution OS-2 may be a potential method to prevent the onset of liquefaction.
|
v3-fos-license
|
2018-04-03T01:12:45.700Z
|
2017-05-26T00:00:00.000
|
2439033
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0178683&type=printable",
"pdf_hash": "d769c148ba69866dc271299e5047e09564350d39",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44533",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "d769c148ba69866dc271299e5047e09564350d39",
"year": 2017
}
|
pes2o/s2orc
|
Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network
Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions – matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model’s performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN’s spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
Introduction
A popular hypothesis states that neural circuits operate near a second-order phase transition, a so-called critical point [1,2]. A critical state has also been argued to possess maximal information processing capacities, including computational performance during classification tasks [3].
Recurrent network model
The model we used belongs to the self-organizing recurrent network (SORN) family of models [32] and was almost identical to the model introduced in [35], differing slightly in the synaptic normalization rule: in addition to the normalization of incoming excitatory connections, we added a separate normalization of incoming inhibitory connections, in agreement with experimental evidence [40]. It is important to point out that both the model in [35] and our SORN model had three additional features compared to the original SORN model [32]: the action of inhibitory spike-timing dependent plasticity (iSTDP), a structural plasticity mechanism (SP), and the addition of neuronal membrane noise. These features are described in detail in the following paragraphs.
Our SORN was composed of a set of threshold neurons divided into N^E excitatory and N^I inhibitory units, with N^I = 0.2 × N^E. The neurons were connected through weighted synapses W_ij (going from unit j to unit i), which were subject to synaptic plasticity. The network allowed connections between excitatory neurons (W^EE), from excitatory to inhibitory neurons (W^IE), and from inhibitory to excitatory neurons (W^EI), while connections between inhibitory neurons and self-connections were absent. Each neuron i had its own threshold, which did not vary with time for the inhibitory neurons, T^I_i, and was subject to homeostatic plasticity for the excitatory neurons, T^E_i(t). The state of the network at each discrete time step t was given by the binary vectors x(t) ∈ {0, 1}^(N^E) and y(t) ∈ {0, 1}^(N^I), corresponding to the activity of excitatory and inhibitory neurons, respectively. A neuron fired ("1" state) if the input received during the previous time step, a combination of recurrent synaptic drive, membrane noise ξ^(E/I)_i, and external input u^Ext_i, surpassed its threshold; otherwise it stayed silent ("0" state). For the excitatory units this can be written as

x_i(t+1) = Θ( Σ_j W^EE_ij(t) x_j(t) − Σ_k W^EI_ik(t) y_k(t) + ξ^E_i(t) + u^Ext_i(t) − T^E_i(t) ),

with an analogous update for the inhibitory units, in which Θ is the Heaviside step function. Unless stated otherwise, ξ represents the unit's independent Gaussian noise, with mean zero and variance σ² = 0.05, and was interpreted as neuronal membrane noise due to the extra input from other brain regions not included in this model. The external input u^Ext was zero for all neurons, except during the external input experiment and the learning tasks, in which subsets of units received supra-threshold input at specific time steps. Each time step in the model represented the time scale of STDP action, roughly in the 10 to 20 ms range. The synaptic weights and neuronal thresholds were initialized identically to previous works [35,38]: W^EE and W^EI started as sparse matrices with connection probabilities of 0.1 and 0.2, respectively, and W^IE was a fixed, fully connected matrix. The three matrices were initialized with synaptic weights drawn from a uniform distribution over the interval [0, 0.1] and normalized separately for incoming excitatory and inhibitory inputs to each neuron. The thresholds T^I and T^E were drawn from uniform distributions over the intervals [0, T^I_max] and [0, T^E_max], respectively, with T^I_max = 1 and T^E_max = 0.5. After initialization, the connections and thresholds evolved according to five plasticity rules, detailed below. It is important to highlight that the connectivity between excitatory neurons varied over time due to the action of plasticity on W^EE.
First, excitatory-to-excitatory connections followed a discrete spike-timing dependent plasticity (STDP) rule [41]: the weight W^EE_ij was increased by a fixed small quantity η_STDP every time neuron i fired one time step after neuron j, and decreased by the same amount if neuron i fired one time step before neuron j, i.e.

ΔW^EE_ij(t) = η_STDP [ x_i(t) x_j(t−1) − x_i(t−1) x_j(t) ].

Negative and null weights were pruned after every time step.
Second, inhibitory-to-excitatory connections were subject to a similar rule, the inhibitory STDP (iSTDP). It played a role in balancing the increase of activity due to STDP and regulating the overall network activity. Every time an inhibitory neuron j fired one time step before an excitatory neuron i, the connection W^EI_ij, if existent, was increased by η_inh/μ_IP, where μ_IP is the desired average target firing rate for the network (given as a parameter to the model). However, if the synapse was successful (i.e., if neuron j firing kept neuron i silent in the next time step), W^EI_ij was reduced by the larger value η_inh. Third, both W^EE and W^EI were subject to yet another form of plasticity, synaptic normalization (SN). It adjusted the incoming connections of every neuron in order to limit the total input a neuron could receive from the rest of the network, thus limiting the maximum incoming recurrent synaptic signal. This rule did not regulate the relative strengths of the connections (shaped by both STDP and iSTDP), but the total amount of input each neuron receives: at each time step, after all other synaptic plasticity rules, each weight was divided by the sum of incoming weights of the same type, W_ij(t) ← W_ij(t) / Σ_j W_ij(t), applied separately to W^EE and W^EI. Fourth, structural plasticity (SP) added new synapses between unconnected neurons: at each time step, a random directed connection between two unconnected neurons was created with a small probability p_SP, simulating the creation of new synapses in the cortex. The probability was set to p_SP(N^E = 200) = 0.1 for a network of size N^E = 200, and p_SP scaled with the square of the network size. New synapses were set to a small value η_SP = 0.001, and while most of them were quickly eliminated due to STDP action in the subsequent time steps, the life-times of active synapses followed a power-law distribution [35].
Last, an intrinsic plasticity (IP) rule was applied to the excitatory neurons' thresholds. To maintain an average firing rate for each neuron, the thresholds adapted at each time step according to a homeostatic plasticity rule, keeping a fixed target firing rate H_IP for each excitatory neuron. The target firing rate, unless stated otherwise, was drawn from a normal distribution H_IP ~ N(μ_IP, σ²_IP). For simplicity, however, it could be set to the network average firing rate μ_IP, thus being equal for all neurons [35]. We set μ_IP = 0.1 and σ²_IP = 0, or equivalently, 10% of the excitatory neurons (on average) fire per time step. Assuming one time step equals 10 to 20 ms, these constants resulted in an average firing rate in the 5-10 Hz range.
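To make the interplay of these rules concrete, the sketch below applies one update of the STDP, iSTDP, SN, and IP rules as described above. It is illustrative only: the learning rates eta_stdp, eta_inh, and eta_ip are placeholder values not given in this excerpt, the IP update is written in its usual incremental form, and structural plasticity is omitted.

```python
import numpy as np

def sorn_plasticity_step(x_prev, x, y_prev, W_EE, W_EI, T_E,
                         eta_stdp=0.004, eta_inh=0.001, eta_ip=0.01, mu_ip=0.1):
    """One plasticity update (STDP, iSTDP, SN, IP) following the description above.

    x_prev, x : 0/1 vectors of excitatory activity at t-1 and t
    y_prev    : 0/1 vector of inhibitory activity at t-1
    """
    # STDP: potentiate W_EE[i, j] when j fired at t-1 and i fires at t; depress the
    # reverse order. Only existing synapses are updated (assumption); negatives are pruned.
    mask = W_EE > 0
    W_EE = W_EE + eta_stdp * (np.outer(x, x_prev) - np.outer(x_prev, x)) * mask
    W_EE[W_EE < 0] = 0.0

    # iSTDP: if inhibitory j fired at t-1 and excitatory i still fires, strengthen
    # W_EI[i, j] by eta_inh/mu_ip; if i stays silent ("successful" inhibition),
    # weaken it by the larger amount eta_inh.
    W_EI = W_EI - eta_inh * np.outer(1.0 - x * (1.0 + 1.0 / mu_ip), y_prev) * (W_EI > 0)
    W_EI[W_EI < 0] = 0.0

    # Synaptic normalization: incoming excitatory and inhibitory weights of each neuron
    # are rescaled (separately) to a fixed total.
    for W in (W_EE, W_EI):
        s = W.sum(axis=1, keepdims=True)
        np.divide(W, s, out=W, where=s > 0)

    # Intrinsic plasticity: nudge each excitatory threshold toward the target rate mu_ip.
    T_E = T_E + eta_ip * (x - mu_ip)
    return W_EE, W_EI, T_E
```

In the full model these updates would be interleaved with the state update given above and with the structural plasticity rule.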
Phases of network development
As observed before [35], the spontaneous activity of the SORN showed three different self-organization phases regarding the number of active excitatory-to-excitatory synapses when driven only by Gaussian noise (Fig 1A). After being randomly initialized, the number of active connections fell quickly during the first 10^5 time steps (the decay phase) before slowly increasing (growth phase) until stabilizing after around two million time steps (stable phase), where only minor fluctuations are present. In order to avoid possible transient effects, we concentrated our analyses only on the stable phase, discarding the first 2 × 10^6 time steps. In this sense, we measured neuronal avalanches in the regime into which the SORN self-organizes driven only by membrane noise and its own plasticity mechanisms.
Neuronal avalanches definition via activity threshold
It is important to highlight that the SORN is fundamentally different from classical self-organizing critical models such as the Bak-Tang-Wiesenfeld Sandpile model [42] or branching processes regarding the lack of separation of time scales, i.e. no pause is implemented between any two avalanches [43] (see also the discussion in [18]). Importantly, such a separation of time scales also does not apply to neural activity in vivo. Each SORN neuron could receive input from other neurons, the noisy drive ξ, and an additional input (during the extra input experiment), all of which occurred at every time step.
Motivated by those fundamental differences, a distinct definition of neuronal avalanches based on thresholding the neural activity has been used in a previous model [25]. Similarly, we introduced here a threshold θ for the network activity a(t): a constant background activity θ was subtracted from a(t) for all time steps t, allowing for frequent silent periods and for the measurement of neuronal avalanches. θ was set to half of the mean network activity ⟨a(t)⟩, which by definition is ⟨a(t)⟩ = μ_IP = 0.1. For simplicity, θ was rounded to the nearest integer, as a(t) can only assume integer values. Each neuronal avalanche can be described by two parameters: its duration T and its size S. An avalanche started when the network activity went above θ, and T was the number of subsequent time steps during which the activity remained above θ. S was the sum of spikes exceeding the threshold at each time step during the avalanche (Fig 1B, red area); for an avalanche starting at time step t_0,

S = Σ_{t = t_0}^{t_0 + T − 1} ( a(t) − θ ).

As the activity included all the network's neurons, subsampling effects [19,44,45] could be ruled out. Furthermore, as the target firing rate was H_IP = 0.1, 10% of the excitatory neurons were active, on average, at every time step, which made quiescent periods a rare occurrence.
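A direct implementation of this avalanche definition is sketched below (variable and function names are ours); it returns the duration T and size S of every supra-threshold excursion of the activity trace.

```python
import numpy as np

def avalanches(a, theta):
    """Durations T and sizes S of avalanches in the activity trace a, given threshold theta."""
    durations, sizes = [], []
    t, n = 0, len(a)
    while t < n:
        if a[t] > theta:
            start = t
            while t < n and a[t] > theta:
                t += 1
            durations.append(t - start)                                   # T: steps above theta
            sizes.append(int(np.sum(a[start:t]) - theta * (t - start)))   # S: activity above theta
        else:
            t += 1
    return np.array(durations), np.array(sizes)

# theta is half the mean activity, rounded to the nearest integer:
# T, S = avalanches(a, theta=int(round(a.mean() / 2)))
```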
External input
In order to study the effects of external input on the SORN's self-organization, we chose an adapted version of a condition previously designed to investigate neural variability and spontaneous activity in the SORN [32,34]. The condition consisted of presenting randomly chosen "letters" repeatedly to the network (i.e., at each time step, a random "letter" was chosen with equal probability and presented to the network). In our case, we chose a total of 10 different letters. Each letter gave extra input to a randomly chosen, non-exclusive subset of U^E = 0.02 × N^E excitatory neurons, closely following a previous probabilistic network model [29]. The subsets corresponding to each letter were fixed at the beginning and kept identical until the end of each simulation. Neurons which did not receive any input had u^Ext_i(t) = 0 for all t, while neurons matched with a specific letter received a large additional external input u^Ext_i(t) = 10^7 at the time step in which the letter was presented, making sure that the neuron spiked.
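As described, each letter maps to a fixed subset of excitatory units that receives a supra-threshold drive; a possible sketch (the seed, names, and structure are ours) is:

```python
import numpy as np

rng = np.random.default_rng(0)
N_E, n_letters = 200, 10
U_E = int(0.02 * N_E)                     # 4 input units per letter for N_E = 200

# fixed, non-exclusive subsets of excitatory neurons, one per "letter"
letter_units = [rng.choice(N_E, size=U_E, replace=False) for _ in range(n_letters)]

def external_input(letter, amplitude=1e7):
    """Supra-threshold drive u_Ext for the units coding the presented letter."""
    u = np.zeros(N_E)
    u[letter_units[letter]] = amplitude   # forces these neurons to spike this time step
    return u

# at each time step, a letter is drawn with equal probability:
# u = external_input(rng.integers(n_letters))
```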
We followed the approach introduced in a previous experimental procedure in the turtle visual cortex [29]: the SORN was initially simulated up until the stable phase (2 × 10^6 time steps), when external input was turned on, and neuronal avalanches were measured during a transient period and after readaptation. A single neuronal avalanche was considered part of the transient period if it started during the first 10 time steps after external input onset. According to our time step definition based on STDP action, this transient window was roughly in the 100-200 ms range, approximately the same time window employed for the experimental data [29]. After the transient period, neuronal avalanches were again measured for 2 × 10^6 time steps after readaptation.
Learning tasks
We analyzed the network performance and the occurrence of the aforementioned criticality signatures in two simple learning tasks, which allowed us to compare our results to previous work [32].
The first task was a Counting Task (CT), introduced in [32], in which a simpler SORN model (without the iSTDP and SP mechanisms and membrane noise) has been shown to outperform static reservoirs. The CT consisted of a random alternation of structured inputs: sequences of the form "ABBB...BC" and "DEEE...EF". Each sequence was shown with equal probability and contained n + 2 "letters", with n repetitions of the middle letters "B" or "E". Each letter shown to the network represented the activation of a randomly chosen, non-exclusive subset of U^E excitatory neurons at the time step in which it was shown.
The second task, which we call the Random Sequence Task (RST), also consisted in the reproduction of "letters" of a long sequence of size L, initially chosen at random from an "alphabet" of A^S different letters. The same random sequence was repeated during a single simulation, but different simulations received different random sequences as input. This task definition allowed not only for the description of the SORN's learning abilities under a longer, more variable input but also, in the case of large L, for the analysis of criticality signatures under an approximately random input.
For both tasks, the SORN performance was evaluated as in [32]. Starting from the random weight initialization, we simulated the network for T_plastic = 5 × 10^4 time steps with all plasticity mechanisms active. The performance was then evaluated by training a layer of readout neurons for T_train = 5000 time steps in a supervised fashion (using the Moore-Penrose pseudo-inverse method) and measuring the correct prediction of the next input letter. The input at time step t was predicted based on the network internal state x′(t), calculated similarly to Eq (1) but ignoring the u(t) input term. The performance was calculated based on a sample of additional T_test = 5000 time steps for both tasks. For the CT, however, we ignored the first letter of each sequence during the performance calculation, as the two sequences are randomly alternated.
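A minimal sketch of such a supervised readout, assuming the internal states collected during training are stacked row-wise in a matrix X (shape T_train × N_E) and the next-letter targets are one-hot rows of Y (names and shapes are ours):

```python
import numpy as np

def train_readout(X, Y):
    """Least-squares readout weights via the Moore-Penrose pseudo-inverse."""
    return np.linalg.pinv(X) @ Y           # shape (N_E, n_letters)

def predict_letters(X_test, W_out):
    """Predicted letter index for each test time step."""
    return np.argmax(X_test @ W_out, axis=1)

# performance = fraction of correctly predicted next letters over T_test steps:
# acc = np.mean(predict_letters(X_test, W_out) == true_letters)
```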
The additional free parameters included in the simulation of the learning tasks were chosen based on previous SORN implementations: U^E = 0.05 × N^E and A^S = 10. The membrane noise was kept as Gaussian noise, with standard deviation σ = 0.05. Additionally, for the CT, we also looked at the performance in the case of no membrane noise (σ = 0) and of no iSTDP and SP action, in order to have a direct comparison between this model and the original SORN model [32].
Power-law fitting and exponents
The characterization of power-law distributions may be affected by large fluctuations, especially in their tails, which leads to common problems ranging from inaccurate estimates of exponents to false power-laws [46]. In our model, to fit the neuronal avalanche distributions of duration T and size S, assumed to follow f(T) ∝ T^(−α) and f(S) ∝ S^(−τ), and to calculate their exponents α and τ, we relied on the powerlaw python package [47]. The package fits different probability distributions using maximum likelihood estimators. We used exponential binning when plotting the avalanche distributions, with exponential bin size b_s = 0.1 (the measurement of the exponents did not depend on the particular bin choice). Additionally, even though the left cut-offs of our data were f(T) = 1 and f(S) = 1, those points were not visible in the plots due to the binning, which considered the bin centers. We compared different distributions provided by the package, of which pure power-laws provided the best fit, but for simplicity only pure power-laws and power-laws with exponential cutoffs are shown in the results (see S1 Table for a comparison of parameters). In order to account for finite size effects in the pure power-law fits, the exponents for duration (α) and size (τ) were estimated between a minimum X_min and a maximum X_max cutoff, with X ∈ {T, S}. For the majority of our results (SORN with N^E = 200 and N^I = 40), we used the following parameters: T_min = 6, T_max = 60, S_min = 10, S_max = 1500, chosen based on the goodness of the power-law fit. The maximum cutoff was scaled accordingly for bigger networks. For the power-law with exponential cutoff, f(X) ∝ X^(−α*) exp(−β* X), we kept the same X_min and removed X_max, with α* being the power-law exponent and β* the exponential cutoff.
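For illustration, a fit of the kind described could be run with the powerlaw package roughly as follows (here T is assumed to be the array of measured avalanche durations; the cutoffs are those quoted above):

```python
import powerlaw

# pure power-law fit of avalanche durations with the cutoffs used for N_E = 200
fit = powerlaw.Fit(T, xmin=6, xmax=60, discrete=True)
alpha = fit.power_law.alpha                          # exponent of f(T) ~ T^-alpha

# loglikelihood ratio R > 0 favors the power-law over the compared distribution
R, p = fit.distribution_compare('power_law', 'exponential')

# power-law with exponential cutoff: keep xmin, drop xmax
fit_cut = powerlaw.Fit(T, xmin=6, discrete=True)
R_cut, p_cut = fit_cut.distribution_compare('power_law', 'truncated_power_law')
```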
The ratio between the power-law distributions' exponents, (α − 1)/(τ − 1), is also predicted by renormalization theory to be the exponent of the average size of avalanches with a given duration, ⟨S⟩(T), i.e. ⟨S⟩(T) ∝ T^((α−1)/(τ−1)). This positive power-law relation is obeyed by dynamical systems exhibiting crackling noise [48] and has also been found in in vitro experiments [13].
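As a quick worked example, the exponents reported in the Results below (α ≈ 1.45, τ ≈ 1.28) predict an exponent of (α − 1)/(τ − 1) ≈ 0.45/0.28 ≈ 1.6 for ⟨S⟩(T), which can be compared with the exponent γ_data ≈ 1.3 obtained by fitting ⟨S⟩(T) directly to the avalanche raw data.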
Results
As criticality has been widely argued to be present in biological neural systems, we first identified the presence of its most common signature in a recurrent network shaped by biologically inspired plasticity mechanisms. We showed that neuronal avalanches with power-law distributed durations and sizes appear after network self-organization through plasticity.
We then described how synaptic plasticity and units' membrane noise are necessary for the emergence of the criticality signatures. In agreement with experimental evidence [29], we also verified that while random external input can break down the power-laws, subsequent adaptation is able to bring the network back to a regime in which they appear. Last, we showed that the same power-laws break down under simple structured input of sequence learning tasks.
SORN shows power-law avalanche distributions
We simulated a network of N^E = 200 excitatory and N^I = 40 inhibitory neurons for 5 × 10^6 time steps. The neuronal avalanches were measured after the network's self-organization into the stable phase, and the activity threshold θ was fixed at half of the mean activity of the SORN. Both the neuronal avalanche duration T (Fig 2A) and size S (Fig 2B) distributions were well fitted by power-laws, but over different ranges. For the size, the power-law distribution fitted approximately two orders of magnitude, while the duration was only well fitted for approximately one order of magnitude before the cut-off. The faster decay observed in the distributions' tails could not be fitted by a power-law with exponential cut-off and was hypothesized to be the result of finite size effects. Indeed, with increasing network size the power-law distributions extended over larger ranges (Fig 2D-2F), and the exponents remained roughly the same (avalanche duration: α ≈ 1.45; avalanche size: τ ≈ 1.28). Thus, both for simplicity and for a reduced simulation time, we kept the SORN size constant for the rest of the results (N^E = 200, N^I = 40).
The relation between the scale exponents (α − 1)/(τ − 1) expected from Eq (13) and inferred from the power-law fitting, however, did not match the exponent obtained from the avalanche raw data (Fig 2C), although the average avalanche size did follow a power-law as a function of avalanche duration, with exponent γ_data ≈ 1.3. It is worth noting that, although the predictions were not compatible, our numerical exponent γ_data agreed with the one calculated directly from experimental data of cortical activity in a previous experimental study [13].
The activity threshold θ, which defines the start and end of avalanches, should in principle affect the avalanche distributions, since the slope of the power-laws might depend on its precise choice. Small thresholds should increase the avalanches' duration and size while reducing the total number of avalanches. Large thresholds are expected to reduce the avalanche durations and sizes while also reducing the number of avalanches. An adequate threshold θ has been suggested as half of the network mean activity ⟨a(t)⟩_t [25], which we have been using so far in this work. While different thresholds resulted in different exponents (see S2 Table for the range of estimated exponents for T and S), power-law scaling was robust for a range of θ values, roughly between the 10% and 25% activity percentiles (Fig 3). This window contained the previously used half mean activity ⟨a(t)⟩_t/2 (roughly the 10% activity percentile for a network of size N^E = 200). Therefore, we could verify that the avalanche definition in terms of θ was indeed robust enough to allow for a clear definition of power-law exponents. The unusual left cut-off for the avalanche size, observed independently of the threshold value, was arguably a consequence of our avalanche size definition, Eq (9). In particular, removing the explicit
Criticality signatures are not the result of ongoing plasticity
We investigated the role of the network plasticity in the signatures of criticality. The first question we asked was whether plasticity is necessary to drive the SORN into a regime where it shows signatures of criticality, or whether these signatures also appear right after random initialization. Thus, we compared our results to a SORN with no plasticity action, which is equivalent to a randomly initialized network. The avalanche distributions observed in the random networks, for both duration and size, did not show power-laws, as shown in Fig 4A and 4B (red curves), resembling exponential distributions rather than power-laws and indicating that plasticity was indeed necessary for the self-organization.
After verifying that the combination of plasticity mechanisms was indeed necessary to drive the network from a randomly initialized state towards a state in which the power-laws appear, we asked whether this result is purely due to the continued action of such mechanisms. If the power-laws appear only when plasticity is active, they could be a direct result of the ongoing plasticity. If the power-laws hold even when all plasticity is turned off after self-organization, this supports the interpretation that the plasticity mechanisms drive the network structure to a state where the network naturally exhibits criticality signatures. We therefore compared our previous results with the distributions found for a frozen SORN: a network where all plasticity mechanisms were turned off after self-organization.
The SORN was simulated up until the stable phase, when the simulation was divided into two: a normal SORN and the frozen SORN. We used the same random seed for the membrane noise in both cases (Gaussian noise with mean zero and variance σ² = 0.05), so that differences due to noise were avoided. Furthermore, initialization bias could also be ruled out, as the networks had the same initialization parameters and thus were identical up to the time step when plasticity was turned off. The frozen SORN resulted in virtually identical power-law distributions for durations and sizes (Fig 4, top row), and the only significant differences were observed in their tails. With frozen plasticity, an increase in the number of large avalanches was observed. This effect can be partly explained by the absence of the homeostatic mechanisms that control network activity in the normal SORN. Likewise, freezing individual mechanisms (such as, for example, the IP) did not affect the overall avalanche duration and size distributions (Fig 4, bottom row, and S2 Fig), indicating that they were not the result of the continued action of any particular plasticity rule of the model. Taken together, these results showed that the SORN's plasticity mechanisms allowed the network to self-organize into a regime where it showed signatures of criticality. However, the continued action of the plasticity mechanisms was not required for maintaining these criticality signatures once the network had self-organized.
Noise level contributes to the maintenance of the power-laws
The standard deviation σ of the membrane noise ξ was one of the model parameters that influenced the SORN's dynamics. Therefore, our next step was to investigate whether the criticality signatures depend on the distribution of ξ and its standard deviation σ.
As expected, we found that the avalanche and activity distributions suggested three different regimes, represented here by three different levels of noise. In the case of high noise levels (σ² = 5), the neurons behaved as if they were statistically independent, thus breaking down the power-laws and showing binomial activity centered at the number of neurons expected to fire at each time step (i.e., the mean of the firing rate distribution H_IP). Low noise levels resulted in a distribution of avalanche sizes resembling a combination of two exponentials, while the activity occasionally died out completely for periods of a few time steps. A close look at the raster plots of excitatory neuronal activity (Fig 5E-5G) also revealed that large bursts of activity only happened at intermediate noise levels, while low noise levels resulted mostly in short bursts and high noise levels resulted in Poisson-like activity. Therefore, we concluded that, together with the plasticity mechanisms, the noise level determined the network's dynamical regime. The activity distribution (Fig 5B) supported the hypothesis of a phase transition, as it went from a binomial distribution for high noise levels to a distribution with faster decay and a maximum near zero for lower noise levels.
To further investigate the contribution of noise to the maintenance of the criticality signatures, we tested whether other types of noise could have a similar effect on the network's dynamical regime, and how diffuse this noise needed to be in order to allow for the appearance of the power-laws. First, we switched from Gaussian noise to random spikes: each neuron received input surpassing its threshold with a small spiking probability p_s at each time step. Using p_s as a control parameter in the same way as the Gaussian noise variance, we could reproduce all the previous findings: three different distribution types and a transition window in which the power-law distributions of neuronal avalanches appear (Fig 5C).
Last, we found that limiting the noise action to a subset of units, while keeping all plasticity mechanisms on, abolishes the power-laws completely (see S3 Fig). Different subset sizes were compared (10%, 5%, and 0% of the excitatory units were continuously active), and the activity threshold θ was set again to ⟨a(t)⟩_t/2, but now excluding the subset of continuously active units. We concluded that the power-laws require not only a specific noise level, but also a distribution of the noise across the network units.
Network readaptation after external input onset
We tested whether the onset of external input is able to break down the power-laws we have measured so far. Experimental evidence suggests a change in power-law slope in the transient period after onset of an external stimulus [29]. This work proposed that network readaptation due to short term plasticity brings the criticality signatures back after a transient period, implying self-organization towards a regime in which power-laws appear.
Our version of external input consisted of random "letters", each of which activated a subset of U^E excitatory neurons. We compared neuronal avalanche distributions in two different time periods: directly after external input onset and after network readaptation by plasticity. The activity threshold θ was kept the same for both time periods.
The results agreed with the experimental evidence (Fig 6): external input resulted in flatter power-laws (Fig 6, red curve), in agreement with experimental observations (Fig 1 in [29]). As in the experiment, we also observed a readaptation towards the power-laws after a transient period (Fig 6, cyan curve). Furthermore, the flatter power-laws and the subsequent readaptation also appeared under weaker external inputs (u^Ext_i ~ 1). This finding supported the hypothesis that plasticity was responsible for driving the SORN towards a critical regime, even after transient changes due to external stimulation.
Absence of criticality signatures under structured input in simple learning tasks
So far, we have observed criticality signatures in the model's spontaneous activity and in its activity when submitted to a random input. We focus now on the activity under the structured input of two learning tasks: a Counting Task (CT) [32] and a Random Sequence Task (RST); for details on their implementation, see the Learning tasks subsection. In the CT, the external structured input consisted of randomly alternated sequences of "ABBB...BC" and "DEEE...EF", with n repeated middle letters. Differently from the former random external input, these sequences were presented during the whole simulation, one letter per time step. First, we measured the avalanche distributions for duration and size and verified that the power-laws did not appear in this case, independently of n (Fig 7A and 7B), although the distributions appeared smoother and more similar to power-laws for large values of n. This finding suggested that structured input did not allow for the appearance of the power-laws, and in this case our plasticity mechanisms could not drive the network towards the supposed critical regime. Second, we measured the performance of the SORN in the CT by training a readout layer and calculating its performance in predicting the input letter of the next time step. We found that our model was capable of maintaining a performance higher than 90% when the membrane noise was removed (σ = 0), which is consistent with the results obtained in the original SORN model for the same task [32]. With the addition of membrane noise (σ = 0.05), however, we saw a decay in the overall performance, particularly for long sequences.
In the RST, a different form of external input was used: at the beginning of each simulation, we defined a random sequence of size L, which was then repeated indefinitely. We observed that under this type of input the power-laws again did not appear (Fig 7E and 7F), but, as observed in the CT, longer sequences showed smoother curves. The performance, however, stayed above ~88% for L up to 100, demonstrating that our SORN implementation is capable of learning random sequences.
In summary, both learning tasks highlighted our model's learning abilities and showed that the addition of plasticity mechanisms (iSTDP and SP) to the original SORN [32] does not break down its learning abilities. The presence of membrane noise, however, diminished the overall model performance on the CT. Furthermore, we showed that the structured input of both learning tasks was sufficient to break down the power-law distributions of avalanche size and duration.
Discussion
The hypothesis of criticality in the brain as discussed here, which states that neural circuits possess dynamics near a phase transition, is largely based on experimental measurements of power-law distributed neuronal avalanches. This hypothesis, however, is still very controversial, in particular because power-law distributions can be generated by a number of mechanisms other than criticality [43], for example by thresholding the activity of certain kinds of stochastic systems or by superpositions of exponentials [49,50]. Thus, power-law scaling of physical quantities is not sufficient to demonstrate criticality. For that reason, our avalanche analysis alone is not sufficient to prove that the SORN self-organizes towards a critical point. Instead, we highlight that the combination of plasticity mechanisms in the model is sufficient to produce the same criticality signatures typically observed in experiments, independently of the question of whether these systems are critical or not. Our results suggest that the combination of biologically inspired Hebbian and homeostatic plasticity mechanisms is responsible for driving the network towards a state in which power-law distributed neuronal avalanches appear, but such plasticity action is not required for the maintenance of this state. The power-law distributions of avalanche durations and sizes in the SORN's spontaneous activity replicate a widely observed phenomenon, from cultured cortical networks [1,13,51] to awake animals [18,52,53]. Notably, the network also reproduces the short transient period with bigger and longer neuronal avalanches and the subsequent readaptation after external input onset, which has been observed in the visual cortex of the turtle brain [29]. Our results are also in line with previous observations of power-laws in the externally driven case [54].
Previous studies have also identified plasticity mechanisms that tune a network to criticality. For example, networks of spiking neurons with STDP [8,24] and a model of anti-Hebbian plasticity [55] showed critical dynamics. The earliest example of self-organization towards criticality in plastic neural networks is probably the network by Levina et al., who made use of dynamical synapses in a network of integrate-and-fire neurons [7,23]. Furthermore, it is known that networks without plasticity can be fine-tuned to a critical state, where they show favorable information processing properties, both in deterministic [5,22,56] and stochastic [12,25,57] systems, or they can attain states close to criticality, e.g. operate on a Widom line [27] or in a Griffiths phase [58]. Those models are very important for describing the properties of a network already in a critical state. Beyond those results, here we have shown for the first time criticality signatures arising in a network model designed for sequence learning via a combination of Hebbian and homeostatic plasticity mechanisms.
The SORN's criticality signatures, in the form of avalanche distributions, were best fit by power-laws (see S1 Table). The measured exponents for duration and size, α = 1.45 and τ = 1.28, were both smaller than those expected for random-neighbor networks (2 and 3/2, respectively). This discrepancy may be due to the fact that the SORN has a complex dynamic topology that differs from a random network after self-organization. The power-laws typically spanned one or two orders of magnitude for the durations and sizes, respectively, which is comparable to experimental data. Before and after the power-law interval, the size distribution often showed a right and a left cutoff. While the right cutoff typically arises from finite size effects [59], the left cutoff is not characteristic for classical critical systems such as the branching network [60], possibly being the result of our avalanche definition based on thresholding the network activity. However, left cutoffs have been observed for neural avalanche distributions in cortex (e.g. [18,29]). Therefore, the SORN avalanche distributions are indeed compatible with experimental ones.
The SORN was initially conceived combining biologically inspired plasticity mechanisms (STDP, IP and SN) and has been shown to outperform static reservoirs in sequence learning tasks such as the Counting Task (CT) [32]. We showed that the addition of two other plasticity mechanisms (iSTDP and SP) [35] not only reproduced the previous results but also increased the performance on the CT for long sequences. The addition of membrane noise, however, lowered the overall performance, particularly for longer sequences in this particular task. Interestingly, previous work has shown that a SORN model with such an addition is capable of solving a challenging grammar learning task [36]. In our experiments, even though a specific level of membrane noise led to the appearance of criticality signatures (σ ≈ 0.05), the same noise level did not increase the model's learning abilities for simple tasks when compared to the noise-free case.
While our model showed criticality signatures in its spontaneous activity, the activity under structured external input when performing learning tasks did not lead to power-law distributions of avalanche size and duration, arguably driving the network away from a critical regime.
Despite the computational advantages of critical dynamics in models, subcritical dynamics may be favorable in vivo (see discussion in [18]), because in vivo subcriticality allows for a safety margin from the unstable, supercritical regime, which has been associated with epileptic seizures [21]. Interestingly, it seems that learning of patterns and structured input may bring a network to such a regime that does not show power-law distributed neuronal avalanches, which has also been observed for cortical activity of behaving animals [18].
Note that here the term 'criticality signatures' refers to power-law scaling for avalanche durations and sizes, a notion of criticality inspired by Bak, Tang & Wiesenfeld [42] and widely observed in experiments [1]. This 'avalanche criticality' may differ from other critical phase transitions, e.g. the transition between order and chaos [18]. It is remarkable that, nonetheless, our results are consistent with those of perturbation analyses of the SORN that also suggested that with learning of structured input the network deviates from a critical state [32].
The extent to which the criticality signatures may be important for the development of learning abilities in recurrent networks is a topic for future studies. It has been argued that criticality is beneficial for information processing [3,56], which suggests that this state may also have advantages for learning. However, our finding that the level of membrane noise necessary for the occurrence of power-laws leads to suboptimal performance in simple learning tasks suggests that the relationship between criticality and learning may be more complex.
S1 Table. Fit parameters for Fig 2 (N^E = 200, N^I = 40). Comparison between exponential and power-law fits for the curves in Fig 2A and 2B (raw data from 50 independent SORN trials with 10^6 time steps each). The goodness of fit R is the loglikelihood ratio between power-laws and the indicated distributions (a positive R means that the data are more likely power-law distributed, while a negative R means the compared distribution is more likely a better fit). For further details, see the powerlaw package documentation [47].
S2 Table. Power-law exponents α and τ for duration and size for the different activity thresholds θ described in Fig 3 (N^E = 200, N^I = 40). R_exp is the goodness of fit (loglikelihood ratio between a power-law and an exponential fit) in each case [47].
|
v3-fos-license
|
2021-12-24T16:21:23.460Z
|
2021-12-22T00:00:00.000
|
245428578
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://hrcak.srce.hr/file/387900",
"pdf_hash": "9f7204f2688fed950b33716a4184746238085e4f",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44534",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"sha1": "93986fd1c8d99c4089b192ea09e305fc8a9fe0e1",
"year": 2021
}
|
pes2o/s2orc
|
Cross-border freight movement between Thailand-Malaysia-Singapore: utilising border based dry ports for effective inland transaction
Trade plays an important role in economic growth; hence, smooth cross-border transactions between Thailand, Malaysia, and Singapore have significant implications for international trade. Currently, cross-border transactions face several issues at the borders between these countries, specifically during the movement of cargo. A very rigid documentation process within customs clearance and the resulting severe congestion affect the trade flow in this zone. Inconsistency of freight transaction documents at the cross-border also makes the transaction procedure more complicated and affects manufacturers' competitiveness. Thus, this paper explores the current issues at the borders involving Thailand, Malaysia, and Singapore. This paper also attempts to identify the challenges and some key success factors in modelling efficient cross-border transactions amongst these countries. A qualitative approach has been adopted to answer the proposed research questions. The initial results stress that congestion, thorough and repetitious documentation procedures, the involvement of many documents, and the time-consuming clearance of documents are key issues encountered during cross-border freight movement. This situation has caused several problems, such as delays in freight delivery, losses in tax collection due to delays, reluctance to share information, and effects on the competitiveness of the freight supply chain. Developments in infrastructure, information sharing, regulations, logistics performance, and customs clearance procedures can overcome the problems during cross-border Thailand-Malaysia-Singapore activities. The model outcome is expected to smooth the administrative process during customs clearance and to reduce costs efficiently.
Introduction
International trade refers to the transaction of goods and services across national borders, generally including import and export trades. International trade can adjust the domestic utilisation rate, improve international supply and demand, adjust economic structure, and increase fiscal revenue. To enhance international trade and supply chains, the One-Belt-One-Road (OBOR) project was proposed and further promoted by Premier Li Keqiang whilst visiting Asia and Europe (Huynh, 2019). Thereafter, US President Joe Biden suggested founding an initiative from democratic countries to compete with the OBOR project (Tan, 2021). From this scenario, cross-border transactions have become vital to boosting international trade inland. However, some issues have been encountered during trade transactions, specifically related to differing standard documentation requirements during customs clearance. Snitbhan, et al. (2004) highlighted that the major issues during cross-border freight movement were congestion, limited space availability, a preference for manual procedures, and a lack of the necessary facilities to support customs clearance operations. Therefore, this paper was initiated to explore the challenges and key success factors during cross-border freight movements between Thailand-Malaysia-Singapore.
Cross-border transaction
This section discusses the global trends and challenges of current issues in cross-border transactions, whereas customs clearance and documentation, transport procedures, and legal aspects are the key factors for successful cross-border trade. Besides that, the motivating factors for cross-border movement amongst Malaysia, Thailand, and Singapore are also explored in detail.
Global trend in current cross-border transactions
The main international convention relating to cross-border road freight transport is the agreement on the contract for international road freight transport introduced in 1956 (Hodgkinson, 2016). The consignment note is made out in three copies signed by the sender and by the transporter, with typically a fourth copy for the carrier. The first copy (red) is handed over to the sender after the transporter has received the goods. The second copy (blue) is handed over to the consignee when the goods have arrived at their destination. The third copy (green) is for the carrier. The fourth copy (black) is also for the carrier. The bill of lading, packing slip, commercial invoice, Canada customs invoice, and certificate of origin are the USA-Canada freight cross-border shipping documents. Any of these documents may be listed as a general document for the transportation of goods at the cross-border. Meanwhile, in Europe, border barriers have been removed so that transporting goods has become easier and more efficient. However, this has affected countries within Europe as they have lost tariff collection, and this has reduced the income of these countries, because collecting tariffs is one of the main sources of income for them (Brooks, 2005).
The International Labour Organisation (2006) reported that customs documentation and transport procedures often caused delays and long waiting times at cross-border trade areas in Southern Africa. For example, trucks have been delayed for up to five days at the Rwanda-Uganda border due to customs clearance issues (Chibira and Mdlankomo, 2015), as the relevant documents from the revenue office in Kampala were misplaced and not delivered to the related party. A study has been carried out on cross-border corridors in East Africa, in which Hanaoka, et al. (2019) noted that proper conditions, facilities such as roads, railways, and ports, and institutions are important to reduce transportation costs and time spent in landlocked developing countries. Stakeholders from Kenya, Tanzania, Uganda, Rwanda, and Burundi highlighted that they preferred a reduction in operating costs, increased volumes of cargo handled, a fast clearance process, harmonisation of documentation, and a reduction in cross-border charges. However, they complained that too many weighbridges in road corridors involved more time, more costs, and corruption (Adzigbey, et al., 2007). Meanwhile, international trade amongst China, Myanmar, and Vietnam has been studied by considering geographical elements, and the geographical factors affecting cross-border logistics route choice were analysed (Li, et al., 2020). They highlighted that customs clearance efficiency was one of the deciding factors for seaport selection decision-making.
Dezan and Associates (2019) noted that import and export cargoes are subject to relevant customs clearance standards under Law No. 54/2014/QH13, and certain types of cargo, such as imported pharmaceuticals, need to go through customs inspections. The import and export company needs to obtain the company's business registration certificate and import/export business code registration certificate. Besides that, imported cargo requires the bill of lading, import goods declaration form, import permit, certificate of origin, cargo release order, commercial invoice, customs import declaration form, inspection report, packing list, delivery order, technical standard/health certificate, and terminal handling receipts. On the other hand, exported cargo requires the electronic export customs declaration, bill of lading, contract information, certificate of origin, commercial invoice, customs export declaration form, export permit, packing list, and technical standard/health certificate. Export processing may be completed on the same day, but import processing may take one to three days for full container loads (FCL) and less than container loads (LCL).
The challenges of cross-border transactions
According to Chibira and Mdlankomo (2015), the challenges facing transport operators at the cross-borders of the Southern African Development Community (SADC) are delays, longer travel times, fewer return trips, high operating costs, and reduced reliability of services, which result in reduced productivity and capital efficiency. Next, the challenges for the administrative authority are insufficient funding and assets to fully operationalise the mandate and implement high-impact fit-for-purpose interventions, insufficient capabilities, a lack of ability to react to incidents, obsolete systems, restricted innovation, and a lack of ability to manage industry matters. Similar to the SADC, Malaysia has the same issue, as Mustakim and Saud (2018) reported pauses in the processing of documentation at the Malaysia-Thailand border. Snitbhan, et al. (2004) said that the current customs procedures still rely primarily on printed documentation, as the Electronic Data Interchange (EDI) is still not fully functioning. Indeed, much of the delay at border customs arises from problems with documentation. Thai customs declaration forms are, however, more complex than the Malaysian forms. Customs procedures require documents such as (1) one of nine different customs declaration forms depending on the mode of transport and the type of goods being imported, (2) 11 different forms/documents relating to the relevant import duties and the payment of those duties, and (3) six forms/documents relating to tariff privileges or tax returns. In addition, both customs offices have limited space and a lack of equipment for unloading and reloading goods and containers at the border (Snitbhan, et al., 2004).
The haulier needs to be registered in both Malaysia and Thailand and needs to carry two license plates because the two countries do not share information. Snitbhan, et al. (2004) noted that only trucks operating on both sides of the border, in both Thailand and Malaysia, carry two license plates. This is because different authorities who do not share information may subject an imported product to independent random checks.
In addition, congestion has occurred at the borderlines, mainly due to inefficient management of processing and flow. The costs of cross-border inland transport have been examined in relation to the China Railway Express; for inland customers, the cost of customs duties at local dry ports is lower than at seaports since the additional cost of remote operations is eliminated (Lam & Gu, 2016). Three categories of border challenges have been reported: regulation, infrastructure, and information (Brooks, 2005). From a legislative viewpoint, the North American Free Trade Agreement (NAFTA) uses committees to create new rules and dispute resolution procedures. For example, the Land Transportation Standards Subcommittee made significant progress on legislation to regulate drivers and equipment in the trucking industry in North America. However, NAFTA has failed to deliver on all its promises, especially for Mexican trucking cross-border access (Brooks, 2005).
According to the International Labour Organisation (2006), international trade border-crossing procedures involve documentation processing, cargo inspection, and checking by different parties. However, discrepancies occur amongst the different border-crossing services in the same country. They also stated that cross-border delays are due to the inefficiency of control procedures, insufficient application of computerised procedures, time consumption without risk management techniques, complex procedures for weighing vehicles, illegal migration control, implementation of veterinary and phytosanitary controls, lack of coordination between customs administrations, lack of cooperation amongst the authorities responsible for controls, non-compliance with Transport Internationaux Routiers (TIR) procedures, failure to provide information to professionals and the private sector, changes to procedures without notice, compulsory convoys, compulsory paid services, and a lack of transparency in rules for payment in some cases.
On the other hand, Thomson Reuters (2016) conducted a survey to analyse the legal aspects of cross-border transactions based on key economic hubs around the world. Four trends were found in cross-border transactions: cross-border work is attractive and likely to increase in volume; legal complexity limits transaction volumes; deals and drafting are increasingly standardised internationally; and reliable sources of information and insight are hard to find, although online resources are becoming important. These trends present both challenges and opportunities for cross-border trade between Malaysia-Thailand-Singapore.
Motivations to explore the cross-border freight mobility
Cross-border delays impact the cost of transport services as well as the cost of products traded in the region. When a delay occurs, there are fewer turnaround trips than in scenarios where delays are reduced, which affects the economy. Reducing delays will increase trade volumes and per-corridor earnings. Delays in the corridors reduce the productive time available for cross-border road transport, which reduces the region's potential for trade, international integration, and economic facilitation. It has been concluded that these challenges affect the sustainability, productivity, efficiency, quality, and cost of the transport services, as well as the costs of the products traded in the region (Chibira & Mdlankomo, 2015). Next, the Padang Besar Container Terminal (PBCT) is not well organised, and this has prevented fast clearance. Limited customs facilities at the border between Malaysia and Thailand and the heavy congestion caused by carriers obstruct customs and dry port personnel from quickly clearing the transporters. Limitations in the PBCT space and infrastructure, and certain border policies, make cross-border activity more difficult. Transporters from Thailand to Malaysia, for example, are restricted to within two kilometres of the border with Malaysia, and this causes inefficiency in rail deck container arrangements. Nonetheless, the situation in Thailand is different, as Malaysian hauliers can move more than 55 kilometres from Thailand's border (Othman, et al., 2016).
The landbridge train service runs in both directions (see Figure 1), southbound from Thailand to Malaysia and northbound from Malaysia to Thailand; however, the trade flows are not balanced and, in general, more goods are carried southbound. The quantities of containers transported by the landbridge train service have varied over the years, and there is presently a rising trend in the quantities of containers being transported. The control authorities present at the Padang Besar (Malaysia) railway station are the Malaysian Immigration Department (Security and Passport Division) for security and immigration control, the Royal Malaysian Customs Department authorities, and Quarantine and other authorities if necessary for specific types of goods. The Auxiliary Police are present at the gate check in the adjacent Padang Besar Terminal.
In addition, border congestion gives transport operators negative perceptions of the delays. Transport issues at the Malaysia-Thailand cross-border can be classified into procedural and system-related issues involving human capital, organisations, institutions, documentation, infrastructure, and facilitation of transportation. By using the relevant facilities, the productivity of trucking and containerised vehicles in moving cargo could improve the characteristics and professionalism of cross-border administration (Mustakim Melan, 2018). Furthermore, the problem of smuggling inadvertently occurs, with subsidised items as a target. It is even more difficult if integrity issues or elements of corruption are found in connection with smuggling activities. Therefore, the Malaysian Anti-Corruption Commission (MACC) needs to be more aware and address this issue more effectively (Ismayatim, 2019). Lastly, the inconsistency of freight transaction documents at the cross-border between Malaysia and Thailand makes the transaction procedure more complicated (Beck, 2016). The documents do not match the standard freight documentation. Owing to bureaucratic redundancy, the transactions at the cross-border become more complicated and inefficient. This can affect the receiving time of goods by the client, congestion, the image of the Padang Besar cross-border, and the volume of trade.
Other than that, in order to grow the economies of Malaysia, Thailand, and Singapore, many challenges must be faced, especially in the process of exchanging goods at the cross-border. According to Koh (2018), the Singapore-Malaysia land crossing is described as one of the busiest in the world. Johor authorities reported an average of 296,000 pedestrians per day in 2015. The figure did not include motorcyclists (about 100,000 registered for auto-release), cars, vans, trucks, and buses. The data are a bit stark, but reports have shown 126,000 vehicles daily (including about 4,000 trucks entering Singapore) on the causeway built in 1923 alone. Therefore, crossing the border is a problem for customs clearance between the two countries. In this regard, easing customs and other administrative procedures will increase efficiency and reduce the costs and financing of moving goods internationally.
In addition, industries and municipalities in Singapore and Malaysia have produced significant increases in demand for both passenger and freight transportation. Therefore, the busy Malaysia-Singapore road conditions, with the increasing number of passengers travelling between the two countries, also raise other issues, where a lack of regulations or boundaries during border checks and effective customs inspections is often seen as a possible obstacle to better transport networks (Barter, 2006). Each country has its own rigid procedure for crossing its borders because it wants to prevent or curb smuggling activities that are not authorised by the government or that involve goods whose taxes are paid by third parties to governments, such as cigarettes and drugs. Cross-border transport transactions involve two different countries and, of course, the structures of their operating procedures also differ and adhere to different technical standards for transport between the two countries. This is supported by Rodrigue (2017), who found that these differences impeded the continuation of the cross-border delivery process. For example, for the shipment of food to Singapore, there are many requirements that a company must adhere to, such as the quality of the food. This can be ensured through the listing of food processing or exporting programmes in Singapore, which is one of the activities under the official control of the Ministry of Health (MOH), and the purpose of the programme is to ensure that the exported food meets the quality desired by Singaporeans.
Furthermore, the obstacles encountered at the border are administrative burdens for transit because they often add costs to shipping. As such, they affect transit goods, as there are various direct transit charges and customs charges for transit countries, some of which must be paid in advance and along prescribed routes. These government regulations entail high costs and heavy procedures involving the bureaucracy to manage all the steps involved in transit through the country (Faye, et al., 2004).
The last problem is inconsistent transport documentation across the border. This problem can affect a country's effectiveness, as it can cause delays and will increase a company's transportation costs. For example, Singapore only requires a few documents, such as import or export permits with supporting documents like the bill of lading, packing list, and invoices, to be submitted to customs. In contrast, some other countries, such as China, India, and Malaysia, require extensive customs clearance documentation such as the bill of lading, shipping instruction, invoice, certificate of origin, import/export permit, packing list, and freight quotation.
The challenges of the crossings at the Malaysia, Thailand, and Singapore borders were discussed previously; however, the research to date has not provided robust evidence on how to translate these challenges into improvements in the efficiency of the crossings at the Thailand-Malaysia-Singapore borders. Therefore, this research aims to close the gap by studying the challenges and the key success factors for overcoming these problems.
Research Method
In this paper, telephone interviews were conducted to collect information, especially on the challenges at each border, the implications arising from these limitations, and suggestions to improve the performance of cross-border transactions between Malaysia-Thailand-Singapore. A total of fourteen (14) respondents were invited to participate, but only seven (7) of them participated, amounting to fifty (50) per cent of the population. They were selected from manufacturers, inland terminal operators, and freight forwarders operating at these two different borders. Four (4) of them were from the Malaysia-Thailand border and the remaining participants were from the Malaysia-Singapore border. The questionnaire for the telephone interviews was carefully designed and consisted of three parts, the first covering the key challenges that normally occur during cross-border transactions between Thailand-Malaysia-Singapore. The second covered the implications that emerged due to the limitations during cross-border freight transactions. The final section was about the key success factors that can be proposed to enhance the efficiency of cross-border transactions in these three different countries. The data collection was conducted during the COVID-19 pandemic and some difficulties were faced, especially in securing appointments with the respondents. Due to this limitation, a convenience sampling strategy was implemented to enhance the participation of the respondents in this research. Convenience sampling selects eligible participants who are willing and available to be interviewed (Klassen, et al., 2012). The gathered data were analysed using a systematic design based on grounded theory. This method is suitable for a case study as it enhances the validity of qualitative research (Parker and Roffey, 1997). A procedural design is used as it creates themes through familiarisation, reflection, transparent coding, axial coding, and selective coding of the data, as illustrated in the sketch below. Data categorisation or themes were generated using a systematic design, which is important for focusing the meaning in the context of the research as well as being understandable to an external audience (Jeevan, et al., 2019).
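A minimal, illustrative sketch of how coded interview excerpts could be grouped into broader themes during axial and selective coding is shown below. The respondent labels, codes, and theme names are hypothetical examples, not the study's actual data or coding scheme.

```python
from collections import defaultdict

# Codes assigned to excerpts from each respondent (assumed examples).
open_codes = {
    "R1": ["congestion", "long ICQS hours", "manual documents"],
    "R4": ["congestion", "passport stamping delay"],
    "R5": ["schedule missed", "congestion"],
}

# Axial coding: map related codes to broader categories (assumed mapping).
axial_map = {
    "congestion": "border congestion",
    "long ICQS hours": "clearance procedures",
    "passport stamping delay": "clearance procedures",
    "manual documents": "documentation",
    "schedule missed": "delivery delays",
}

# Group respondents under each emerging theme.
themes = defaultdict(set)
for respondent, codes in open_codes.items():
    for code in codes:
        themes[axial_map[code]].add(respondent)

for theme, respondents in themes.items():
    print(theme, "->", sorted(respondents))
```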
Result and Discussion
This section presents the results and discussion of the paper, which comprise the contributions of a seaport towards the dry port and vice versa. Besides that, the outcomes of the survey on the proposed research questions are discussed with supporting statements.
The following sections reveal the challenges at the cross-borders of Thailand-Malaysia-Singapore, the implications at the TMS borders arising from current limitations, and strategies to improve the efficiency of the TMS cross-borders.
Challenges at the cross-borders of Thailand-Malaysia-Singapore
Based on the interview participants' responses (R1, R2, & R3), cross-border activities at Bukit Kayu Hitam (BKH) suffer from severe congestion and experience more delays than Padang Besar (PB). They indicate that freight movement in PB is more efficient than in BKH due to the existence of multimodal options in PB. The co-existence of various transportation options, such as road and rail, at PB enhances the modal shift and reduces over-dependency on road or rail freight transportation. According to R4, 'BKH suffers from congestion due to a slow and unsatisfactory level of legal documentation processes'. For example, the Thai Customs Department requires a detailed procedure, especially for vehicles coming from Thailand to Malaysia, which requires the drivers to get off their vehicles to have their passports stamped by the Thai Immigration officers. Due to this procedure, the drivers have to wait for 6 hours to complete their documentation procedures at the immigration, customs, quarantine, and security (ICQS) checkpoints. According to R3 and R4, severe congestion at the Malaysia-Thailand checkpoint is exacerbated by inadequate infrastructure and equipment, lack of resources, stringent inspections, immigration checks, and delays in document declaration. Moreover, R1 indicates that the operating hours for the ICQS clearance procedure at the Malaysia-Thailand border are 16 hours (6 am to 10 pm) in Padang Besar and 12 hours (7 am to 7 pm) in BKH. The longer operating hours in PB provide more opportunities for economic development compared to BKH.
The majority of the participants (R1, R2, R3, & R4) mentioned that the freight transaction documents required by Thai customs for shipments from Malaysia are the delivery order, customs forms, packing slip, trucking bill, invoice, and tax-exempt documentation, which will be shared by both countries. To ensure trouble-free clearance of goods, the required documentation must be properly prepared and provided to the local customs. The documents to be provided to Malaysian customs for shipments from Thailand are invoices, packing lists, delivery orders, leaflets, catalogues, insurance certificates, bill of lading, credit letters, permits or licenses, the ticket form, and customs tax documentation. Almost 11 documents are required by Malaysia and about half that number by the neighbouring country (see Table 2); an illustrative checklist is sketched below. The lengthy documentation procedure from Malaysia to Thailand will affect the efficiency of cross-border transactions between these two regions.
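The document lists below are a simple data-structure sketch of the checklists summarised above and in Table 2; they follow the interview summary and should be read as an approximation, not an official customs specification.

```python
# Documents for shipments from Malaysia into Thailand (per the interview summary).
docs_malaysia_to_thailand = {
    "delivery order", "customs forms", "packing slip",
    "trucking bill", "invoice", "tax-exempt documentation",
}

# Documents for shipments from Thailand into Malaysia (per the interview summary).
docs_thailand_to_malaysia = {
    "invoice", "packing list", "delivery order", "leaflet/catalogue",
    "insurance certificate", "bill of lading", "credit letter",
    "permit or licence", "ticket form", "customs tax documentation",
}

def missing_documents(submitted: set, required: set) -> set:
    """Return the required documents that have not yet been submitted."""
    return required - submitted

# Example: a partially prepared consignment heading into Malaysia.
submitted = {"invoice", "packing list", "bill of lading"}
print(missing_documents(submitted, docs_thailand_to_malaysia))
```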
There is a reason behind this strict procedure in Malaysia. The issue of trespassing smugglers has become very serious between these two countries. According to R1 and R2, smugglers have cut through the border fence and brought out subsidised goods, whilst security personnel appeared to take no action despite that fact. They also emphasised that the smugglers exploit the gap during the shift changes of security personnel, as early as 6 am to 9 am. The modus operandi of the smuggling is to remove a sack from a vehicle and quickly transport it to a section of border fence that has been cut, where it is handed over to waiting members in the neighbouring country.
During the data collection, R5 mentioned that the challenges they faced were congestion at the Malaysia-Singapore border and frequently missing the schedule to reach the destination on time. This has frequently happened during the transport of cargo from Singapore to Port Klang in particular. The consequences include the inability to deliver the cargo on time and failure to meet schedule integrity at seaports and inland terminals, as well as with clients. R6 mentioned that congestion at the Malaysia-Singapore border occurs because a huge number of labourers and freight vehicles move across the Malaysia-Singapore border throughout the day. The statement from R6 was echoed by Herng and Zhang (2019), whereby almost 300,000 people and 145,000 vehicles crossed the Malaysia-Singapore border daily.
From the perspective of documentation procedures, R5, R6, and R7 mentioned that Singapore applies a simple procedure, accepting some basic documentation such as the invoice, quotation, and bill of lading, and all of this information is passed on to the freight forwarding agents. These respondents also agreed that Malaysian freight transporters face very limited difficulties whilst delivering cargo to Singapore due to the less restrictive standard procedures of Singaporean customs. This is possible because they are under pressure to complete all shipments and perform tasks accordingly at the respective seaports.
On the other hand, the same respondents also criticised the thorough procedures that have been implemented (see Table 1). They agreed that these were mostly manual procedures and that errors in the information in the documentation worsened the efficiency of freight transactions at this busy border. Further, R6 mentioned that incorrect documentation may also cause delays in shipment and delays in payment for exported goods, and incorrect documents may also result in violations of export regulations. As a result of such errors, high costs are incurred as the company has to bear all the costs resulting from the delay (Khaslavskaya and Roso, 2019).
The implications at TMS borders from current limitations
Based on the respondents' views (R1, R2, R3, & R4), the main implication of inefficiency at the cross-border is delays during freight deliveries. They came to the consensus that the bureaucratic nature of the border is the main cause of the delays, especially for Thailand-Malaysia cross-border users, and they agreed that truck drivers are the group most significantly affected. These opinions are well aligned with the argument by Mustakim Melan (2018), who agreed that the bureaucratic nature of the border between Malaysia and Thailand results in delays in the processing of documents, which eventually affects freight transportation from this border. In general, most of the cargo from this border is transported to Penang Port and Port Klang. Owing to the documentation delays, which cause the movement of freight to fall behind schedule, schedule integrity at the seaport has been affected, blank sailings have arisen, and the attractiveness of the seaport has been affected. During the pandemic and due to these delays, container rotations within or between regions have become another issue.
The paper-based or manual documentation procedure is still used for freight during cross-border clearance. Lam and Gu (2016) argued that using simplified and computerised documentation enhances the connection between vehicle documentation and cargo documentation, and its effect is reduced vehicle delays. According to R3 and R4, the additional time taken to release freight from Thailand to Malaysia has caused the Malaysian government a loss in tax collection. Due to this lengthy and time-consuming procedure, doubt and scepticism have arisen amongst the traders in Thailand, and this has made them reluctant to share detailed information about the cargo, they added. As a result, this condition has caused congestion at the border, opportunities for misplacing documentation, and, certainly, damage to the healthy and friendly trading system between neighbouring regions. According to Khaslavskaya and Roso (2019), overlapping documentation information may be prevented by providing well-organised documents according to the requirements. However, to kick-start a systematic documentation procedure, a hybrid procedure can be implemented by complementing the functionality of both manual and computerised procedures; a minimal pre-check of this kind is sketched below. This approach can be a paradigm shift to enhance the efficiency of cross-border transactions between Thailand and Malaysia.
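As a hedged sketch of the hybrid idea, a simple computerised pre-check could flag fields that appear in several of the submitted documents with inconsistent values, so that overlapping information is reconciled once before manual clearance. The field names and document contents below are hypothetical.

```python
from collections import defaultdict

# Hypothetical contents of the documents accompanying one consignment.
documents = {
    "invoice":      {"consignee": "ABC Sdn Bhd", "weight_kg": 12000},
    "packing_list": {"consignee": "ABC Sdn Bhd", "weight_kg": 11950},
    "customs_form": {"consignee": "ABC Sdn. Bhd.", "tariff_code": "8471.30"},
}

# Collect every value declared for each field across the documents.
values_by_field = defaultdict(set)
for doc, fields in documents.items():
    for field, value in fields.items():
        values_by_field[field].add(value)

# Flag fields declared in several documents with inconsistent values.
for field, values in values_by_field.items():
    if len(values) > 1:
        print(f"check '{field}': inconsistent values {values}")
```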
Meanwhile, at the Malaysia-Singapore border, R5 and R6 agreed that delays and congestion still happen due to unavoidable reasons and are much less often caused by documentation issues. Here, the appointed agents play a very critical role in explaining the situation at the border and making the clients understand what happens at the Malaysia-Singapore border. They also added that an agent with sufficient negotiation skills plays a major role in delivering the message to the clients and finding an alternative solution to 'pull out' their cargo from being stranded in the congestion. Therefore, the selection of agents who are capable of solving documentation issues during cross-border inspections is crucial. According to R5, a skilful agent can handle all aspects of delivery, and the company can offer its importers a comprehensive and timely service with goods delivered right to their door.
Participant R5 also added that the congestion at the Malaysia-Singapore border does not affect transportation costs as many of the clients possess their own transport. Having their own transport services makes it more reliable for the company to handle their cargo, either for the pick-up or the delivery process. In the meantime, issues arise if the owner does not have any transport facilities and needs to rely on transport agents. In that case, the dry port operators can take the lead by providing transport, storage, and delivery of the cargo to the clients and by picking up the empty containers from them to ease the container rotation procedure.
Surprisingly, the majority of the participants (R5, R6, & R7) provided the same answer in that they had never experienced duplication of information during their submission of documents. Further, they clarified that information sharing is not an issue, as all their businesses run smoothly and successfully because they assign agents to transmit information between the two parties. Some of them utilise transport drivers who are well versed in the clearance procedure as intermediaries between the company and the customer. They emphasised that the person who carries the documents (agents or drivers) needs to be well versed in the procedures at the border to avoid unnecessary delays. According to Alexander, et al. (2017), establishing a platform for information sharing between companies and customers enables the benefits of comprehensive search capabilities, where customers can find answers to their questions using the platform. Besides information sharing, borders that utilise hybrid approaches (combining manual and computerised procedures) should assign a reliable agent or train their truck drivers to multitask as agents during the cross-border procedure, which would be significant whilst waiting for the whole procedure to become digitalised.
Utilising dry port services for cross-border transactions between TMS
In Malaysia, there are two main dry ports which are mainly dedicated to cross-border transactions, especially from Singapore and Thailand. Dry ports in Malaysia possess the capacity to execute cross-border transactions between Thailand-Malaysia-Singapore. These dry ports can be utilised for perishable goods and cold freight transactions which require fast delivery via immediate clearance. In addition, two major functionalities of dry ports in Malaysia, namely the transport and logistics function and the information processing function, are focused on cross-border transactions between the nations (Jeevan, et al., 2015). For example, the transport and logistics function mainly focuses on regional freight transactions, spatial capacity for containers, a cross-border container transhipment centre through intermodal nodes, and connecting manufacturers for on-time delivery. Meanwhile, the information processing function mainly concerns documentation clearance for domestic and cross-border transactions. This situation indicates that Malaysian dry ports have been developed to support not only regional economic development but also international 'inland' transactions, especially between borders.
In this cross-border transaction involving three nations, dry ports in Malaysia, located in between the other two nations, can be utilised to ease cross-border trade. In this paper, congestion, thorough documentation procedures, repetition in documentation procedures, the involvement of many documents, and time consumption during document clearance are some of the major issues that have been identified in cross-border trade procedures between Thailand-Malaysia-Singapore. Therefore, the dry ports, which remain underutilised in this nation, can be used especially for a modal shift which will reduce the congestion at the border. The application of a modal shift encourages the use of both transport modes, especially road and rail, for freight distribution within and outside the nation. However, the domination of road freight in Malaysian freight distribution may slow down this process.
On the other hand, the establishment of the East Coast Railway Link (ECRL) which connects the west and east coast may release the domination of road freight and equalise the proportion between road and rail. This situation may utilise the dry ports in the region, especially for the modal shift between road and rail or vice versa. Another role of the dry ports is providing document clearance services away from seaports and borders. This situation may release the burden of the clearance terminal at the border to execute multiple tasks during the freight clearance procedure. Dry ports as a centre of documentation clearance may overcome some issues, especially reducing a thorough process during documentation clearance, avoiding repetition in the documentation procedure, as well as reducing time consumption during document clearance. With that, dry ports may also lead to proposed standard documents for cross-border transactions. This situation may prevent the involvement of many documents, especially from Singapore, Malaysia, and Thailand (see Table 2).
Secondly, delays in freight delivery, loss in tax collection due to delays, reluctance to share information, and reduced competitiveness of freight supply chains are some of the implications caused by the inefficiency of cross-border transactions amongst these three nations. Again, dry ports are proposed to reduce these drawbacks, especially by enabling the modal shift to reduce delays in freight delivery. The role of this intermodal terminal as a consolidation and deconsolidation node, combined with modal shift activities, will expedite the freight delivery process within the region and outside the nation. Through this modal shift, dry ports will also be able to proceed with last-mile delivery by employing multimodal operators to convey the delivery, especially via artery connectivity. Other than that, the capacity of dry ports to connect with various players in the freight chain is also an added advantage of this node, especially for information sharing between the players. Since dry ports have a significant connection with seaports, the Port Community System (PCS) can be utilised for information sharing amongst the players in the supply chain. Besides that, the availability of spatial, temporal, and intermodal transport options may increase the competitiveness of the supply chain as well as determine whether the cargo will remain attractive at the final destination.
Thirdly, dry ports in this country are proven to enhance the competitiveness of the seaports. Therefore, there is a possibility that these dry ports may enhance the efficiency of cross-border trade. Although the participants have provided some recommendations to enhance the efficiency of cross-border trade, the involvement of a dry port may expedite the procedure or generate a sustainable solution. In addition, the integration of dry ports as a medium to enhance cross-border efficiency might be more significant than the suggestions provided by the participants. For example, the respondents have suggested providing an additional lane for heavy vehicles, assigning relevant agencies to expedite the procedure by maximising import/export counters, providing an automated procedure for document clearance, establishing a Special Border Economic Zone for a flexible and cost-efficient procedure, and merging customs clearance between TMS to improve the efficiency of trade transactions at the borders. However, all these suggestions can be addressed by utilising the current dry ports to execute all the aforementioned functions. In addition, the nature of dry port investment is based on the Public-Private Partnership (PPP).
Utilising this entity in the cross-border trade procedure may encourage the involvement of the seaport authority, local city and state governments, and the railway department, as they are the current investors in Malaysian dry ports, to actively participate in the trade at the borders. Hence, the assimilation of dry ports into cross-border trade activity may enhance the efficiency of the trading system, particularly between Thailand-Malaysia-Singapore. It also encourages utilising the existing facilities and prevents insignificant investment, which would involve a massive financial commitment. Alternatively, this investment can be channelled into connectivity development, modal split facilities, the application of the 4th industrial revolution, as well as capacity enhancement at dry ports.
Key success factors/strategies to improve the efficiency in TMS cross-borders
Based on the limitations and drawbacks that have occurred in border transactions at the borders of Malaysia, Thailand, and Singapore, the participants have also provided some significant strategies to improve the effectiveness of border freight transactions amongst these countries. At the Thailand-Malaysia border, the upgrading plan has been segregated into three main dimensions: infrastructure, information, and regulation. From an infrastructure perspective, participants (R1, R3, and R4) suggested that opening all lanes to heavy vehicles, as well as facilities to accommodate the staff of all the relevant agencies under one roof to expedite the freight movement and documentation procedures, are crucial. They added that these facilities are necessary to overcome congestion issues at the Malaysia-Thailand cross-border due to the increasing volume of vehicles utilising the Thailand-Malaysia border every year. Besides that, the respondents suggested maximising the export and import counters at all entrances to enhance the volume of imports and exports between these two countries. They added that import and export counters need to be added to hasten the import and export procedures between these two countries. These views are supported by Chibira and Mdlankomo (2015), who reported that an enabling environment is needed to make cross-border transactions more efficient. Therefore, increasing infrastructure and facilities is favourable to smooth the traffic and processes.
From the information perspective, the recent survey shows that congestion at the Malaysia-Thailand border occurs not only during public holidays and festive seasons but all year round. Respondents (R1, R2, and R3) argued that information sharing between countries and the establishment of a border community centre which connects all players, including seaports, inland terminals, freight forwarders, customs, immigration, and health departments within the countries, need to be enforced immediately. Information sharing between related parties can be enhanced by utilising automated online documentation; a hypothetical sketch of such a shared record is given below. Lam and Gu (2016) highlighted that automating documentation can improve the procedure of transporting goods at the land cross-border, which would allow one-stop processing controls and a combination of processing procedures for all border agencies.
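The following is a hypothetical sketch of a shared consignment record for a "border community centre" style single window of the kind the respondents describe. The schema, identifiers, and place names are assumptions for illustration, not the format of any existing system.

```python
import json
from datetime import datetime, timezone

# One shared record visible to all border agencies (hypothetical schema).
consignment = {
    "consignment_id": "TMS-2021-000123",
    "origin": "Padang Besar (MY)",
    "destination": "Hat Yai (TH)",
    "documents": ["invoice", "packing list", "customs declaration"],
    "status_history": [
        {"agency": "customs", "status": "declared",
         "time": datetime.now(timezone.utc).isoformat()},
    ],
}

def add_status(record: dict, agency: str, status: str) -> None:
    """Append a status update so every agency sees the same clearance history."""
    record["status_history"].append(
        {"agency": agency, "status": status,
         "time": datetime.now(timezone.utc).isoformat()}
    )

add_status(consignment, "immigration", "driver cleared")
print(json.dumps(consignment, indent=2))
```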
Meanwhile, from the regulatory perspective, the Malaysian government has agreed to establish the Special Border Economic Zone (SBEZ) project at the Malaysia-Thailand border, which is the main foreign trade gateway (Halid, 2018). According to the respondents (R3 and R4), this systematic regulation will assist in reducing congestion at this border. Under this systematic regulation, both sides can be flexible in their businesses as they can pick the right time to ship their goods and ensure that the logistics are more cost-efficient. This strategy was echoed by Halid (2018), who declared that this project will be a catalyst for the business development plan between Malaysia and Thailand. Further, Malaysia and Thailand will cooperate to establish a project providing world-class facilities for the manufacturing and commercial sectors, including free trade zones to aid bilateral trade between Asian countries. It is also predicted to broaden regional trade on the global market, especially between China and India.
Infrastructure, logistics performance, and clearance procedures are the main indicators for improving cross-border freight transactions between Malaysia and Singapore. Furthermore, cross-border improvements in infrastructure are an important element of the trade sector due to the reliance on quality management through public-private partnerships (PPP). According to the participants (R6 and R7), infrastructure quality is another constraint for developing countries in improving the logistics performance index (LPI). Companies that participate in cross-border activities recognise that the quality of information technology and telecommunications infrastructure is substantial for the provision of high-quality services. Besides that, simplifying and computerising documentation would speed up the transport process (Lam and Gu, 2016) and would also support the information technology and telecommunications infrastructure.
Secondly, improving logistics performance has also been a major development objective, as it has a significant impact on cross-border economic activity. Respondent (R7) indicated that logistics performance is crucial to ensuring a smooth cross-border transaction. It requires significant collaboration amongst the various players in the freight supply chain. The logistics performance is strongly related to trade growth, diversification of exports, the ability to attract direct investment, and economic growth (Ojala and Celebi, 2015).
Thirdly, enhancement of cross-border efficiency can be incorporated into the Malaysia-Singapore customs clearance procedure. Respondents (R5 and R7) argued that merging customs clearance between Malaysia and Singapore would be an effective approach as it would reduce the repetition of customs procedures. Customs clearance could be undertaken jointly on each side of the border and the time savings would be even greater (Snitbhan, et al., 2004). Although Malaysia has different methods and procedures for inspecting goods at the cross-border compared to Singapore, this will bring about new issues, such as the level of national trust due to the differences in the way goods are checked in the neighbouring country. These respondents suggested that information sharing, trust in cross-border trade transactions, confidence in their trading partners, cooperation to boost the application of IR 4.0 in both countries to meet international standards, and concern for freight competitiveness are key indicators for executing similar procedures at the Malaysia-Singapore border. Trade and transport developments are crucial for competing in the global market amongst countries. Therefore, international companies need to work on projects that comply with international standards to determine whether their service levels can withstand local conditions. Hence, efficiency during cross-border transactions is crucial, as companies must bring their services and assets across the country's borders while reducing costs and, at the same time, operating in real time.
Conclusion
This research presents the challenges of cross-border freight transactions (Thailand-Malaysia-Singapore), the implications of cross-border transactions, and strategies for improving the efficiency of cross-border transactions. The challenges faced at the Thailand-Malaysia cross-border are congestion and delays due to inadequate infrastructure, equipment, lack of resources, stringent inspections, immigration checks, and delays in document declaration. At the same time, trespassing smugglers are also a serious issue facing Malaysia and Thailand. On the other hand, the Malaysia-Singapore border faces congestion and missed schedules to reach the destination on time. This is due to the huge number of labourers and freight vehicles moving across the Malaysia-Singapore border throughout the day. Furthermore, incorrect documentation has also caused delays, and the companies have been forced to bear the costs. The main implication at the Thailand-Malaysia cross-border is its bureaucratic nature, which has caused delays in processing documentation.
However, the majority of the participants stated that they had never experienced duplication of information during the submission of documents for the Malaysia-Singapore cross-border. Thereafter, the participants suggested several strategies to improve the efficiency of Thailand-Malaysia-Singapore cross-border activities. For the Thailand-Malaysia border, infrastructure, information, and regulation are proposed to overcome the problems. The infrastructure can be enhanced by opening all lanes to heavy vehicles and providing the maximum number of export and import counters to resolve the congestion and delays. Besides that, an information sharing system amongst Thailand-Malaysia and the related players should be considered. A Special Border Economic Zone (SBEZ) at the Thailand-Malaysia border and a free trade zone amongst the Asian countries are encouraged for international export-import trading activities. On the other hand, for the Malaysia-Singapore border, information, logistics performance, and clearance procedures are important to conquer the challenges being faced there. Effective logistics procedures combined with software solutions using the latest technology (blockchain, artificial intelligence, machine learning) will help the customers maximise their operational cost savings. Cross-border freight mobility at the Thailand-Malaysia-Singapore borders requires a significant amount of product knowledge and coordination amongst the three neighbouring countries, such as an efficient customs documentation process using single-window online access to the authorities, which will accelerate the seamless processing of the correct documentation of cargo, warehouse and distribution capacity, and coordination and communication amongst the stakeholders for accurate and on-time delivery. A public-private partnership (PPP) programme, information technology and telecommunications infrastructure, and merging customs clearance between Malaysia and Singapore would ease the passage of shipments during cross-border movement.
|
v3-fos-license
|
2017-08-03T02:07:43.333Z
|
2006-01-27T00:00:00.000
|
8976513
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://retrovirology.biomedcentral.com/track/pdf/10.1186/1742-4690-3-8",
"pdf_hash": "f4239ce64f924f2b0079fad8c70fa6106bad5bd6",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44535",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "29fd733c653a77c7121b1e3c4a36ffd9797678be",
"year": 2006
}
|
pes2o/s2orc
|
Reservoir cells no longer detectable after a heterologous SHIV challenge with the synthetic HIV-1 Tat Oyi vaccine
Background Extra-cellular roles of Tat might be the main cause of maintenance of HIV-1 infected CD4 T cells or reservoir cells. We developed a synthetic vaccine based on a Tat variant of 101 residues called Tat Oyi, which was identified in HIV infected patients in Africa who did not progress to AIDS. We compared, using rabbits, different adjuvants authorized for human use to test on ELISA the recognition of Tat variants from the five main HIV-1 subtypes. A formulation was tested on macaques followed by a SHIV challenge with a European strain. Results Tat Oyi with Montanide or Calcium Phosphate gave rabbit sera able to recognize all Tat variants. Five on seven Tat Oyi vaccinated macaques showed a better control of viremia compared to control macaques and an increase of CD8 T cells was observed only on Tat Oyi vaccinated macaques. Reservoir cells were not detectable at 56 days post-challenge in all Tat Oyi vaccinated macaques but not in the controls. Conclusion The Tat Oyi vaccine should be efficient worldwide. No toxicity was observed on rabbits and macaques. We show in vivo that antibodies against Tat could restore the cellular immunity and make it possible the elimination of reservoir cells.
Background
The HIV-1 Tat protein plays important roles in the virus life cycle and in the maintenance of HIV-1 infected CD4+ T cells [1,2]. It is a trans-activating regulatory protein that stimulates efficient transcription of the viral genome, which requires structural changes of Tat to bind to an RNA stem-loop structure called TAR [3,4]. However, Tat differs from other HIV-1 regulatory proteins because it is rapidly secreted by CD4+ T cells following HIV-1 infection, and extra-cellular Tat is suspected to be directly involved in the collapse of the cellular immune response against HIV-infected cells [2] and to contribute directly to the pathology of AIDS [5]. Extra-cellular Tat inhibits macrophage responses by binding to the Fas ligand membrane receptor [6] and inhibits cytotoxic T cell (CTL) responses due to its ability to cross cell membranes and induce apoptosis of uninfected T cells [7,8] via interaction with tubulin [8][9][10]. In addition, a number of studies have shown that the presence of antibodies against Tat blocks the replication of HIV-1 in vitro and is related to non-progression to AIDS [11][12][13]. Moreover, it has been shown that an HIV-1 Tat-specific cytotoxic T lymphocyte response is inversely correlated with rapid progression to AIDS [14]. Further studies have emphasized the hypothesis that anti-Tat CTLs are important in controlling virus replication early after primary infection [14,15].
The discovery of the extra-cellular functions of Tat in the inhibition of the cellular immune response against HIV-infected cells constitutes the rationale for developing a vaccine against HIV targeting Tat [16]. However, the development of a Tat vaccine may face the same problems encountered with HIV-1 envelope proteins, as Tat exists in different sizes (86 to 101 residues) and mutations exist that induce structural heterogeneity [17]. The 2D NMR studies of two active Tat variants from Europe and Africa confirmed this structural heterogeneity, although a similar folding appears to exist among Tat variants [18][19][20]. Currently, there are five main HIV-1 subtypes in the world: subtypes A (25%) and C (50%) are predominant and are found mainly in Africa, India and South America; subtype B (12%) is found mainly in Europe and North America; subtype D (6%) is found in Africa; and subtype E (4%) (a recombinant form known as CRF_01AE) is found mainly in South East Asia [21]. Tat variability follows this geographical diversity, with mutations of up to 38% observed among Tat variants from the A, B, C, D and E HIV-1 subtypes that do not alter Tat functions but do not allow cross-recognition with Tat antibodies [22].
Up to now, the two main vaccine strategies against Tat use a recombinant protein corresponding to a short 86 residue version of a subtype-B European Tat variant that is either inactivated [11] or has full activity [23]. These two approaches were tested on macaques followed by a homologous SHIV challenge [24,25]. A significant decrease of viremia was observed in these two studies carried out respectively on Cynomolgus [24] and Rhesus macaques [25], without showing complete protection during primary infection. A recent study showed long term control of infection following homologous SHIV challenge on Tat-vaccinated Cynomolgus macaques [26]. However, immunization with a subtype B Tat variant of 86 residues does not stimulate an efficient response against subtype A and C Tat variants [27]. Moreover, most Tat variants found in the field are of 101 residues [4].
Over the last 20 years, several HIV vaccine studies have been tested using a homologous SHIV/macaque model and some have met with success [28]. However, these were not followed by success in clinical trials [29], possibly due to the high genetic diversity of HIV-1. This is why heterologous SHIV challenge in macaques, using a genetically distinct virus, is now recommended to determine if a vaccine can be effective against HIV-1 infection in humans and corresponds to the most significant in vivo experiment after clinical trials [28].
The interest in developing a Tat vaccine rose with the discovery that seropositive long-term non-progressor (LTNP) patients had a higher level of Tat antibodies than seropositive rapid progressor (RP) patients [13]. However, LTNP patients are unable to eradicate HIV since they still have HIV released from reservoir cells. Another category of patients, the highly exposed persistently seronegative (HEPS), appears to be more interesting since they were in contact with the virus, developed a strong cytotoxic T lymphocyte (CTL) response against viral proteins and retro-converted to become seronegative [30]. There is a very low prevalence of HEPS among adults and it is possible that the HEPS phenotype is due to innate immunity [31].
Although HEPS patients normally have no detectable virus, it was possible to isolate and clone an HIV-1 strain from patients in a cohort in Gabon [32] who could now be classified as HEPS. This strain, called HIV-1 Oyi, has genes similar to regular HIV-1 strains except for the tat gene, which had mutations never found in other Tat variants [16]. The epidemiological survey was carried out on a sample of 750 pregnant women and 25 were identified as seropositive [32]. Of these 25 seropositive women, 23 rapidly retro-converted and became HEPS. All the HEPS women were infected with HIV-1 Oyi. The high proportion of the HEPS phenotype in this cohort (92%) indicated that the retro-conversion was probably due to an acquired immunity and not an innate immunity. Ten years after the publication of this epidemiological survey, the 23 women were in good health and HIV was no longer detectable in their blood [22]. Immunization with Tat Oyi raises antibodies in rabbits that are able to recognize different Tat variants even with mutations of up to 38%, which is not possible with other Tat variants [22]. Tat Oyi appears to induce a humoral immune response against three-dimensional epitopes that are conserved in Tat variants in spite of 38% mutations [22]. Moreover, Tat Oyi has a structure similar to active Tat but is unable to trans-activate [20].
This study is the first step of the pre-clinical studies of a vaccine using a synthetic protein of 101 residues. Synthetic vaccines have been developed for many years because they could be safer than biological vaccines, i.e. vaccines made from inactivated pathogens or recombinant proteins. However, most of the vaccines commercially available up to now have a biological origin. Very few synthetic vaccines have been able to demonstrate their efficacy in vivo against a pathogen such as a bacterium or virus, due to the short size of the peptides, which can constitute only linear epitopes, while 3D epitopes are the most likely to trigger an immune response that neutralizes a pathogen. This is why one of the objectives of this study was to determine a vaccine formulation suitable for human use to prepare clinical trials, as a previous study with Tat Oyi was carried out using complete Freund adjuvant [22]. We evaluated the antibody responses raised in rabbits by Tat Oyi complemented with adjuvants authorized for human use and we determined formulations providing results similar to those previously obtained with the Freund adjuvant [22]. Vaccination with Tat Oyi of seven Rhesus macaques provided an excellent model to test in vivo the efficacy of this synthetic vaccine before clinical trials. Furthermore, the vaccinated
Results and discussion
We selected four adjuvants (calcium phosphate, Montanide, Adju-Phos and Alhydrogel) to develop different vaccine formulations with our synthetic protein Tat Oyi. The usual dose of aluminium for human vaccines is around 0.5 mg [33] and, at this concentration, approximately 90% of 100 µg of Tat Oyi adsorbed to both aluminium-containing adjuvants (Adju-Phos and Alhydrogel). For these two reasons, we decided to carry out our inoculations at 0.5 mg Al per dose of vaccine for both Adju-Phos and Alhydrogel. For the calcium phosphate gel, we achieved 92% adsorption using 1 mg Ca per 500 µl dose, whereas only 62% adsorption was achieved using 0.5 mg Ca in the same volume.
Montanide adjuvant (70%) was used because it is a metabolizable oil that can be used for human vaccination and has chemical properties similar to those of the Freund adjuvant used in our first vaccination studies [22].
Twelve rabbits were immunized with the four formulations (three rabbits for each formulation) and we analyzed the antibody responses against five Tat variants representative of subtypes A, B, C, D, and E (Table I). No antibody response was observed using the calcium phosphate gel or the aluminium phosphate adjuvant at 60 days post-inoculation. However, at 90 days post-inoculation, a strong antibody response against the five Tat variants was observed using these two adjuvants (Table I). The best humoral response against Tat Oyi was obtained using Montanide ISA720 (titer: 128,000 against Tat Oyi) at both 60 and 90 days post-inoculation. Montanide ISA720 and calcium phosphate appear to be the most suitable adjuvants to complement the synthetic protein Tat Oyi, owing to the absence of toxicity and the heterologous immunity against different Tat variants observed after vaccination (Table I).
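As an illustration of how an endpoint titer such as the 128,000 value above can be derived from a serial-dilution ELISA, the following minimal Python sketch computes the reciprocal of the highest dilution whose signal exceeds a cutoff; the dilution series, optical densities, and cutoff shown are hypothetical values used purely for illustration, not data from this study.

```python
# Minimal sketch: endpoint titer from a serial-dilution ELISA readout.
# Dilution factors, OD values, and the cutoff are hypothetical examples.

def endpoint_titer(dilutions, od_values, cutoff):
    """Return the reciprocal of the highest dilution with OD above the cutoff."""
    positive = [d for d, od in zip(dilutions, od_values) if od > cutoff]
    return max(positive) if positive else None

# Two-fold dilution series starting at 1:1000 (reciprocal dilutions).
dilutions = [1000 * 2**i for i in range(9)]                     # 1000 ... 256000
od_values = [2.1, 1.9, 1.6, 1.2, 0.9, 0.6, 0.45, 0.35, 0.1]
cutoff = 0.3                                                    # e.g. mean blank + 3 SD

print(endpoint_titer(dilutions, od_values, cutoff))             # -> 128000
```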
A heterologous SHIV-BX08 challenge was carried out on seven macaques vaccinated with Tat Oyi/Montanide ISA720 and four control macaques vaccinated with β-galactosidase, which were also used as controls for another vaccine trial [34]. Figure 1 shows the viremia, as revealed by SHIV RNA copy number in the sera of macaques, after SHIV challenge. Similarly to what is observed in humans a couple of months after HIV infection, both Tat Oyi vaccinated macaques and controls had undetectable viremia 63 days after the SHIV challenge (Fig 1). In addition, virus isolation and cytoviremia were measured by co-cultivation of PBMCs with non-infected human cells on the day of challenge and each week afterwards, allowing the level of reservoir cells to be estimated (Fig 2). Five of seven Tat Oyi vaccinated macaques showed better control of viremia compared to control macaques (Fig 1). Reservoir cells were no longer detectable at 56 days post-challenge in all Tat Oyi vaccinated macaques, but remained detectable in the controls (Fig 2).
It has been shown in SHIV challenges that plasma viremia at the first peak does not correlate with survival, whereas plasma viremia levels at the second peak, at or about six weeks post-infection, are highly predictive of relative survival [35]. In our vaccine trial, panel C of Figure 1 shows that plasma viral RNA levels were significantly lower in the vaccinated macaques compared to the controls at nine weeks post-infection (p = 0.009, Mann-Whitney test). While we did not observe major differences in the level of CD4 cells between vaccinated and non-vaccinated macaques (data not shown), we did observe an increase in the number of CD8 lymphocytes in Tat Oyi vaccinated macaques (Fig. 3). However, we did not determine whether these CD8 cells were HIV-specific CTL. It is interesting to observe that, before the SHIV challenge, control macaques had a higher level of CD8 cells compared to Tat Oyi vaccinated macaques. Control macaques were immunized with the Semliki Forest Virus (SFV) lacZ vector expressing β-galactosidase, which boosts the CD8 response [34]. These high levels of CD8 cells in control macaques were not HIV-specific, and the control macaques had no antibodies against Tat. Therefore, we think that the decreased level of CD8+ cells in control macaques after the SHIV challenge could be due to extracellular Tat, since the SHIV infection should have increased the CD8 response, as observed for SFV.
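A group comparison of this kind can be reproduced with a non-parametric two-sample test; the sketch below uses scipy's Mann-Whitney U implementation on hypothetical RNA copy numbers, since the individual per-animal values are only shown graphically in Figure 1C.

```python
# Minimal sketch: Mann-Whitney U test comparing plasma viral RNA levels
# (copies/ml) at nine weeks post-infection. The numbers below are
# hypothetical placeholders, not the measured values from Figure 1C.
from scipy.stats import mannwhitneyu

vaccinated = [400, 900, 1500, 2500, 5000, 8000, 20000]   # 7 Tat Oyi macaques
controls   = [150000, 300000, 600000, 1200000]           # 4 control macaques

stat, p_value = mannwhitneyu(vaccinated, controls, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")
```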
All Tat-vaccinated macaques, with the exception of macaque 969, developed a strong anti-Tat antibody response (Fig 4), which correlated with an efficient reduction in viremia at nine weeks post-infection (Fig 1C). This was best demonstrated by macaque 965, which had a strong anti-Tat antibody titer and a significantly reduced viremia nine weeks post-infection despite a high viremia in the primary phase (Fig 1C). To a lesser extent, macaque 9711 showed the same relationship between the level of anti-Tat antibody and the viremia at nine weeks (Fig 1C). Moreover, the control of viremia in Tat Oyi vaccinated macaques was not due to antibodies raised against the HIV envelope proteins, since the four SHIV-challenged control macaques had high anti-gp120 antibody titers. Overall, gp120 antibody titers were similar in control and Tat Oyi vaccinated macaques (Fig 5).
Macaque 966 reacted differently from the other Tat Oyi vaccinated macaques and is the most interesting. It was the only one to have an almost complete immunity against SHIV BX08, with a viremia peak around 300 RNA copies per ml, whilst most of the other macaques had viremia peaks between 100,000 and 3,000,000 RNA copies per ml (Fig 1). Interestingly, almost no antibodies against gp120 were detectable and no virus could be isolated from cultured PBMCs (Fig 2). To verify this strong immunity, macaque 966 was challenged a second time with another heterologous virus, SHIV 162P3.2, seven weeks after the SHIV BX08 challenge (Roger Legrand, personal communication). This second challenge explains its higher viremia peak at nine weeks post-infection compared to the other Tat Oyi vaccinated macaques (Fig 1C), which rapidly decreased to an undetectable level. It is also interesting to note that antibodies against gp120 were observed in macaque 966 following the second SHIV challenge and that they also rapidly declined (Fig 5). The results observed with macaque 966 are very important and constitute the best proof of concept for the Tat Oyi vaccine and its rationale as previously described [22]. Macaque 966 had the highest titer of anti-Tat antibody (Fig 4), the lowest viremia (Fig 1) and no detectable virus from cultured PBMCs (Fig 2). Macaque 965 had a nearly identical level of anti-Tat antibodies but was not able to control its viremia as well as macaque 966. It is possible that innate immunity helped macaque 966, but it is interesting to note that its antibodies against gp120 disappeared rapidly (Fig 5), similarly to what was observed with the patients infected by HIV-1 Oyi in Gabon [32] and HEPS patients [30].
Figure 2. HIV-infected CD4 T cells (reservoir cells) in rhesus macaques vaccinated with Tat Oyi (panel A) and control macaques vaccinated with β-gal (panel B) following SHIV challenge.
Conflicting results appear among Tat vaccine studies in non-human primate viral challenge models, ranging from no protection [34,36-38] to significant [39,24,25] or long-term protection [26]. Although these conflicting results could be explained by differences in immunization regimen, viral stock, route of viral challenge and animal species, the opposite conclusions of two studies using similar viral vectors expressing Tat, Env and Gag are puzzling [36,39]. One study showed the efficacy of vectored Tat but not Gag and Env [39], while the other showed efficacy of vectored Gag and Env but not Tat [36]. These conflicting results could be due to a homologous challenge in the first study [39] and a heterologous challenge in the second study, since the second study used the Tat Jr sequence instead of the homologous Tat Bru sequence for the vaccine [36]. HIV-1 Jr and HIV-1 Bru are both B subtypes, but their Tat sequences have non-conservative mutations inducing conformational changes [16]. The mutations between the vaccine and the challenge virus might explain the lack of efficacy of the Tat vectored vaccine in the second study [36]. Of course, the second study more closely resembled reality, since a vaccinated person is unlikely to be exposed to a homologous virus infection. It is possible that the study by Silvera et al. would have had a different outcome had heterologous gag and env genes been used in the SHIV challenge [36]. These studies outline how mutations can affect Tat cross-recognition, as shown in former studies [22,27].
Conclusion
Three adjuvants authorized for human use triggered an immune response with Tat Oyi similar to that observed with the complete Freund adjuvant in a former study [22]. No local or systemic toxicity or adverse effects were observed in rabbits or macaques with vaccine doses higher than those planned for clinical trials. Furthermore, the synthetic protein Tat Oyi is pharmacologically stable in solution for at least one month, which is a requirement for mass vaccination (data not shown). Although a low viremia was not achieved in all macaques, reservoir cells were no longer detectable 56 days after a heterologous challenge. Taken together, these results suggest that the Tat Oyi synthetic protein could be an excellent component of a vaccine targeting HIV-1 and could provide an appropriate treatment against HIV-1 in both developing and industrialized countries. From a fundamental point of view, the decreased level of CD8 cells in the control macaques suggests an important role of extracellular Tat in the immunodeficiency induced by HIV-1. We hope to be able to confirm in a phase I/II clinical trial with seropositive patients that a therapeutic effect can be obtained from Tat Oyi vaccination. This therapeutic effect might result, first, in a reduced viremia and stable CD4 cell levels following an interruption of antiretroviral treatment. We believe this vaccine will not prevent seronegative people from HIV infection; however, it could prevent the collapse of cellular immunity, and therefore a therapeutic effect could be expected, with the eradication of virus titres and the viral reservoir, as is observed in HEPS patients. This vaccine could also be the only affordable therapy for millions of seropositive patients who have no access to antiretroviral treatment.
Tat variants and adjuvant formulations
Tat variants were assembled by solid-phase synthesis with an ABI 433A peptide synthesizer using FASTMoc chemistry according to the method of Barany and Merrifield [40], as previously described [20,41]. The calcium phosphate gel adjuvant was obtained from Brenntag Biosector (Denmark). The adjuvant based on a metabolizable oil with a mannide mono-oleate emulsifier, called Montanide ISA720, was obtained from SEPPIC Ltd (Paris, France). The two aluminium-containing adjuvants, aluminium hydroxide (Alhydrogel 2%, Superfos Biosector a/s) and aluminium phosphate (Adju-Phos, Superfos Biosector a/s), were kindly provided by Vedbaeck (Denmark). Experiments were conducted to assess the presence of soluble antigen in the supernatant liquid of adsorbed experimental vaccines. Tat Oyi was added to the gel and gently shaken for 24 h at room temperature. Samples were centrifuged at 313 g for 15 min at room temperature. The supernatant was aspirated and its protein concentration was determined using Bradford reagent. Protein adsorption by aluminium-containing adjuvants was studied in 500 µl suspensions containing a quantity of adjuvant equivalent to 0.7, 0.5 or 0.3 mg Al.
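The percent adsorption values reported in the Results can be obtained from such a supernatant assay as the fraction of antigen that did not remain in solution; the short sketch below illustrates the arithmetic, with the input amounts being hypothetical examples rather than measured values.

```python
# Minimal sketch: percent adsorption of antigen to an adjuvant gel,
# inferred from the protein left in the supernatant after centrifugation.
# Input values are hypothetical examples.

def percent_adsorbed(total_antigen_ug, supernatant_conc_ug_per_ml, supernatant_volume_ml):
    """Antigen not recovered in the supernatant is assumed adsorbed to the gel."""
    free_antigen_ug = supernatant_conc_ug_per_ml * supernatant_volume_ml
    return 100.0 * (total_antigen_ug - free_antigen_ug) / total_antigen_ug

# e.g. 100 µg Tat Oyi mixed with adjuvant in a 0.5 ml suspension,
# with 16 µg/ml protein measured in the supernatant by Bradford assay.
print(f"{percent_adsorbed(100.0, 16.0, 0.5):.0f}% adsorbed")   # -> 92% adsorbed
```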
Immunization protocols for rabbits and macaques
Twelve specific pathogen-free New Zealand rabbits (Elevage Scientifique des Dombes, Romans, France) were immunized with 100 µg of Tat Oyi in four different formulations (three rabbits for each formulation): aluminium hydroxide (0.5 mg of Al) in 20 mM phosphate buffer, pH 6.5; aluminium phosphate (0.5 mg of Al) in 20 mM sodium acetate buffer, pH 6.5; calcium phosphate gel (1 mg of Ca) in 20 mM phosphate buffer, pH 7; and Montanide ISA720 (70%) in 20 mM phosphate buffer, pH 6.5. Each rabbit was boosted three times, at 20, 40 and 75 days after the first immunization. Sera were collected before immunization, and then 60 and 90 days after the first immunization. No deaths or injuries were observed during or as a consequence of the immunization for the full duration of the experiment. The study on macaques included eleven rhesus macaques of Chinese origin. These macaques were housed at the Primate Research Center at Rennemoulins (Institut Pasteur, France) and handled under ketamine hydrochloride anesthesia (Rhone-Mérieux, Lyon, France) according to European guidelines for animal care (Journal Officiel des Communautés Européennes, L358, 18 December 1986). The animals were checked to be virus-isolation negative, as well as seronegative for SIV and simian retrovirus type D, before entering the study. Seven macaques were immunized subcutaneously with Tat Oyi (100 µg) and the adjuvant Montanide ISA720. Boosts were given at 1, 2 and 3 months after the first immunization. The control group consisted of four macaques immunized with the Semliki Forest Virus lacZ vector expressing β-galactosidase [34]. No deaths or injuries were observed during or as a consequence of the immunization for the full duration of the experiment.
SHIV challenge
The seven macaques vaccinated with Tat Oyi were included in a SHIV challenge assay called RIVAC, sponsored by the ANRS. The purpose of the RIVAC assay was to compare ten vaccine approaches, each on five to seven macaques, with the same SHIV challenge model. Only the results obtained with three vaccine approaches have been published [34]. The challenge strain was SHIV-BX08, derived from SIVmac239 [34]. This is a hybrid virus expressing the gp120 subunit of the R5, clade B, primary HIV-1 isolate BX08 and the gp41 subunit of HIV-1 LAI [42]. The tat and rev genes are also from HIV-1 LAI, whereas the gag, pol, vif, vpx and nef genes are from SIVmac239. The animals were challenged intra-rectally (IR) seven months after the first immunization. The virus stock used for challenge was amplified on human PBMC and 10-fold serial dilutions were used for inoculation of rhesus macaques. The undiluted challenge dose contained 337 ± 331 AID50 for IR administration, as determined by the method of Spouge [43]. Tat vaccinated and control animals were sedated with ketamine hydrochloride (10 mg/kg i.m.).

Figure 4. Antibody response against Tat for the seven macaques vaccinated with Tat Oyi. The 965 (white square), 966 (no symbol), 969 (black circle), 975 (black square), 9611 (white circle), 9711 (white triangle) and 9712 (black triangle) macaques are the Tat Oyi vaccinated macaques. Macaque 966, at the top, had the best response against Tat and turned out to have the best control of viremia, with no reservoir cells detected (Fig 1 and 2). The left axis shows the OD of a 1/100 serum dilution.

Figure 5. Antibody titers against gp120. Antibodies against gp120 appear not to have played a role in the elimination of reservoir cells. This is well illustrated by macaque 966 (Panel A), which had no antibody against gp120 after the first SHIV challenge and a low level of antibodies after its second SHIV challenge.
Urban Development and Water Management in the Yangtze River Delta
Throughout world history, the development of cities has been related to large water systems and the ocean. Where rivers were abundant, trade and regional centres could form. However, along with the prosperity of water-cities, massive urban construction and environmental issues pose enormous challenges to human development. "Scientific" urban planning, the "Sponge City", the "Resilient City", and regional and urban culture and characteristics are receiving more and more attention. The theme of "water and city" is clearly of great historical value and practical significance for new resilient urban and water management strategies. This paper summarizes the geographical, historical, socio-cultural and political characteristics of metropolitan deltas, as well as the historical governance and recent developments of the Yangtze River Delta. It introduces urban development and water management in four water cities: the canal and the city (Yangzhou), the river and the city (Nanjing), the lake and the city (Suzhou) and the sea and the city (Shanghai). It then analyzes the inner motivation of the interaction between water and cities in the Yangtze River Delta. Furthermore, learning from successful historical experiences, the paper provides suggestions for future sustainable urban development.
Introduction
The Chinese government has issued a series of documents to manage and guide urban planning. In March 2014, it promulgated "The state's new urbanization development plan (2014-2020)", pointing out that some historical cities in China do not pay attention to the excavation and inheritance of history and culture, that "constructive" destruction is spreading, and that natural and cultural personalities are being destroyed. In view of these problems, the plan suggests that urban construction should adhere to cultural heritage, highlight the characteristics of different regions according to their natural history and cultural endowment, reflect regional differences, promote diversity, and develop beautiful towns with historical memory, cultural context, geographical features, and ethnic characteristics, forming a realistic, distinctive urbanization development model.
For thousands of years, most people have chosen to live near water. The historical and cultural heritage of many cities is also closely related to coastal shorelines, from China's Jiangnan water country to the seaside ports of Europe, and to the current global transformation and regeneration of urban waterfront space. In many cases, the waterfront is located in the heart of the city and has a rich industrial heritage and a unique waterfront landscape. Metropolitan waterfronts have begun to undergo a process of transformation and regeneration: former industrialized riversides have been transformed into post-industrial places of residence, work and leisure, and are again favored by residents, investors and tourists [1].
The theme of "water and city" is clearly of great historical value and practical significance.How could we protect the pattern of traditional water towns in today's conditions, integrate the historical landscape of the water and realize the sustainable planning of the ecological environment?And whether we could re-examine the relation of "water and city" from the historical development process, and draw the historical experience to provide a reference for the further sustainable progress.Thus, city research should be based on the history and look to the future.
Through the illustration of ancient city maps, literature reading, information collection and other means, this paper takes the Yangtze River Delta as an example and Yangzhou, Nanjing, Suzhou and Shanghai as case cities, introducing the influence of water systems (natural and artificial) on urban development. It then analyzes the delta governance of the Yangtze River Delta in different periods. Moreover, learning from successful historical experiences, the paper tries to provide suggestions for future sustainable development.
Water Cities in the Yangtze River Delta
The Yangtze River Delta presents seven spatio-cultural stages with regard to water-city interaction: early physical geography and settlement development (before 4000 BC), the formation of the city (before 200 BC), the first canal-urban system (250 BC-600 AD), the Grand Canal urban system (550-950 AD), water-cities (900-1400 AD), diversification of urban development (1350-1850 AD), and the rise of railways and the urban national industry. In the Three Kingdoms period (220-280 AD), Guangling (Yangzhou) became the military stronghold in the area between the Huai River and the Yangtze River.
Canals Thrived the City
Being originally a military stronghold, Yangzhou became a main hub of China's north-south traffic thanks to the digging of canals. It became not only a trading centre for food, salt, money, and iron, but also a window for international exchange. Merchants and emissaries gathered in Yangzhou from all over the world.
The abundant resources and prosperity of the city triggered the establishment of factories and handicraft workshops, and the commerce of Yangzhou reached an unprecedented prosperity. To the south of the Han city's location, along the canal, businessmen's communities formed, called the "Luo city". Together with the former area of the city, a twofold city pattern came into being in Tang Dynasty Yangzhou.
However, its military function also remained strong. For sufficient defence, the city of Yangzhou formed a special pattern consisting of "Zhou City", "Bao City" and "Jia City" in the Song Dynasty. Yangzhou shifted from a commercial city and economic centre into a strategically important frontier, marking Yangzhou's prominent military status.
In the Ming and Qing Dynasties, however, the convenient grain transport by water and the brisk salt trade once again brought dazzling prosperity to Yangzhou. A "New city" was constructed on the southwestern areas of Yangzhou's Song Dynasty location. A new business centre of docks, warehouses, hotels, restaurants and the private homes of many rich merchants was concentrated in this region.
The canals had a significant influence on the urbanization of Yangzhou, affecting not only the changes of the city's location but also urban development and city life. Owing to the transport activity, many new types of functional architecture were built along the canals, such as warehouses, shaoguan and piers. The shaoguan was part of the canal revenue system, charging taxes on the boats that passed through. Many professional streets in the "New city" formed spontaneously because of the canals and became the communities of particular industries, as can be seen from the streets' names, such as Colour Clothing Street and Sesame Lane. The canals also brought prosperity to Yangzhou's salt industry and business. The salt merchants lived along the canal and constructed a large number of guild halls and markets (Figure 1).

As a large-scale and populous capital, Nanjing relied on the rich resources of the Taihu Lake basin as its main supply area; the Yangtze River and the Qinhuai River (the Qinhuai River was far wider than it is now) were the main channels for transport. In 1393, in order to shorten the long delivery routes, Emperor Zhu Yuanzhang ordered the digging of the Yanzhi River, so that the Qinhuai River and Shijiu Lake communicated directly, avoiding the risks of the waves of the Yangtze River (Figure 2).
Suzhou: From "a City" of Jiangdong Area to the National Economic Centre
Suzhou lies at the intersection of the north-south canal and the Lou River. It was connected with the plains comprising the middle and lower reaches of the Huanghe River, and its commodity economy was also well developed: its handmade products and crops were exported to the North. At the same time, the canals better coordinated the city's inner water system, greatly improving the urban environment.
From the middle of the Tang Dynasty (618-907 AD) until the end of the Northern Song Dynasty (960-1127 AD), Suzhou received a large number of migrants from the North and slowly developed into a "metropolis". Relying on the city's water system and channels, buildings were erected along the rivers and distributed regularly. Bridges linked most of these buildings, which became a significant feature of Suzhou's urban development. Urban construction formed a "double checkerboard" pattern of land and water transport. In addition, the many gardens, education institutions, markets and lanes stimulated the building of restaurants, tea houses and other amusement facilities. Another important feature of this period of urban construction was that military facilities were significantly enhanced; the wall and moat became much stronger. The city map of the Southern Song Dynasty called the "Pingjiang map", presenting the city's prosperity, is the oldest city map in ancient China. Later on, the city attracted retired officials and wealthy businessmen to build gardens, so that small and exquisite gardens spread throughout the city; the garden art reached a historical peak and beautified the urban environment (Figure 3).
Shanghai: From a Small Fishing Village to a Cosmopolitan City
Shanghai was actually a small fishing village during the Spring and Autumn Period. In 1927, the Nanjing National Government of the Republic of China was founded, and Shanghai was instituted as a special municipality, after which a series of urban plans followed. By the late 1920s, Shanghai's transport network was becoming more and more developed, especially the Shanghai-Nanjing Railway.
Under the joint action of internal and external forces, Shanghai gradually became the biggest metropolis and an industrial and commercial centre in China (Figure 4).
Delta Governance of the Yangtze River Delta
Water systems play an important role in the process of urban development and have shaped various forms of urban material space in different periods. The factors of urban governance are the dynamic mechanism that influences the development of cities. In the early period, nature and the military "manufactured" waterways (natural and artificial), bringing about the emergence of cities, as seen in the birth of Yangzhou and Suzhou. Early human settlements evolved around the Yangtze River and Tai Lake, with a humid climate suitable for growing rice. The areas at the intersections of rivers and on the high ground of plains became ideal locations for regional central cities. Driven by military and political needs, many regional canals were dug, promoting the birth and growth of cities. The main purpose of each canal was hegemony and defence, but the canals were also beneficial to irrigation and promoted the growth of surrounding cities. The mileage of canals and the dredging of waterways became more and more important, and the canals extended from regional to national scope. In addition, from the Qin Dynasty until the Qing, the canals extended from the capital to the target regions of hegemony, promoting the development and prosperity of the cities along them. In these times there were more than thirty major economic metropolises in China, eleven of which were located along the line of the Grand Canal [4]. Furthermore, the shape of the urban spatial landscape was also affected by the economy. The main factors that determined the changes of the city of Yangzhou were socio-economic development and the natural environment.
When the canals were used only by the military, the city was not located at the edge of the canals. When the function of the canals was converted to economic use, the city developed along the banks of the canals. In the south of Jiangsu Province, market towns emerged, strengthened by their urban economic functions. The streets were no longer flanked by walls, but by stores, restaurants, and taverns. The buildings facing the streets were usually shops with a residential yard at the back, or multi-storey buildings adjacent to each other. Water transport and the establishment of the national canal transport agency headquartered in the city of Huai'an promoted economic development, contributing to the city's growth into a regional political, military and trade centre. The salt commissioner station of Lianghuai had the same influence on Yangzhou [5].
Besides the economy, institutions also influenced the form and layout of the cities. In the Spring and Autumn Period and the Warring States Period (770-221 BC), the "Water Order" was China's first irrigation management system.
During the Han Dynasty (202 BC-220 AD), Sima Qian created the "Historical Records: Waterways", the first book describing the brief history of Chinese water conservancy. It systematically introduced ancient Chinese water conservancy and its impact on the national economy and people's livelihood [6]. The "Regulations of Water Conservancy" was the earliest extant national water legislation, and in the Song Dynasty (960-1279 AD) a large number of water conservancy science and technology works appeared, such as the "Irrigation and Water Conservancy Constraint" and the "River Defense Order" on flood control.
This also induced a water culture that affected urbanization. In the Yangtze River Delta area of Jiangsu Province, Wu Culture, Jiangnan Culture, Liuchao Culture and others were the internal dynamic mechanisms of urban development, influencing city life and the appearance of functional architecture.
The temples, gardens, business halls and so forth were the embodiment of culture in physical form. These factors, actors and institutions interplayed with each other, jointly creating the urban morphology of the Yangtze River Metropolitan Delta.
Learning about Future Challenges
Water has served to consolidate city sites, promote industrial and commercial development, and improve the living environment. In the farming period, the water network of the Yangtze River Delta basin was densely distributed.
Taihu Lake, the Yangtze River, the canals and the natural rivers formed a networked water transportation system. When the Yangtze River Delta entered the industrial era, with the springing up of modern industry and commerce, cities became centralised along the railways and highways, and the status of water transport gradually declined. At present, the meaning of the Yangtze River Delta is more an economic one, and its regional relations and scope are more extensive. It has formed the core area of Shanghai, the Nanjing metropolitan area, the Suzhou-Wuxi-Changzhou metropolitan area, the Hangzhou metropolitan area and the Ningbo metropolitan area. The Yangtze River Delta is taking shape as an urban network pattern supported by the high-speed road and railway network, and these areas are growing into a highly integrated giant metropolis [2].
To restore the age-old water-adaptive capacity of ancient urban formations and communities, we should learn from successful historical experience and take approaches of socio-cultural resilience in order to deal with future challenges. From history, we should respect the regional landscape pattern and develop a profound understanding of the city and its surrounding geographical features, so as to pursue naturally sustainable development, integrating the city and environment into the same regional system and establishing an ecological relationship between the city and the suburbs. History will be a resource for achieving progress in future urban development. The water environment, as a linear historical context and an important part of the historical landscape, together with landmark historical buildings and rich human activities, will form a successional and operational historical landscape. In the planning and design of waterfront areas, urban construction should turn back to face the water, restore historical memory and socio-cultural resilience, and combine with modern planning and human activities in line with urban ecological development requirements, realizing "resilient" development.
Figure 2. Map of Nanjing in the Ming Dynasty. Source: Historical Atlas of China, Chinese Culture University, 1980.
Figure 4. Map of Shanghai in 1504 AD and map of Shanghai in 1917. Source: Complete Atlas of Shanghai Antiquated Maps, 2017.
Yangzhou: From a Small Canal City to the Southeast Metropolis
2.1.1. Han Canal Constructed the City
The water cities of Yangzhou, Nanjing, Suzhou and Shanghai witnessed the historical development of the Yangtze River Delta. The water environment promoted and affected these cities' development in different times.
From 211 AD, Nanjing was the capital of the Sunwu period, then called Jianye. The Qinhuai River lay to the south; between the Qinhuai River and the Yangtze River, Hou Lake, the Chao Canal, the Jinchuan River, the Qing River and the Yundu Canal composed Nanjing's water network, contributing to the city's water supply and transport. Nanjing was then the capital of the successive dynasties of Eastern Jin, Song, Qi, Liang, and Chen, called Jiankang. Because of the large waves of the Yangtze River, the Pogang Canal and the Shangrong Canal were dug in the Dongwu (222-280 AD) and Liang periods respectively.
Redefining IL11 as a regeneration-limiting hepatotoxin and therapeutic target in acetaminophen-induced liver injury
Inhibition of IL11 signaling limits drug-induced liver damage and promotes hepatic regeneration in a mouse model. A matter of species specificity Acetaminophen (APAP) overdose can cause liver injury; effective therapies for treating APAP poisoning beyond 8 hours after ingestion are lacking. Recombinant human interleukin 11 (rhIL11) protected rodents from liver injury; however, recent studies produced results that question the underlying mechanism. Here, Widjaja et al. used a mouse model of APAP-induced liver injury and showed that species-matched IL11 was detrimental in mice, causing hepatocyte cell death. Genetic IL11 deletion protected mice from liver damage and administration of an antibody targeting IL11 receptor reduced APAP-induced toxicity even when administered 10 hours after APAP. The results suggest that IL11 might be detrimental for hepatocytes. Additional studies will clarify the translational potential of targeting IL11 for treating liver injury. Acetaminophen (N-acetyl-p-aminophenol; APAP) toxicity is a common cause of liver damage. In the mouse model of APAP-induced liver injury (AILI), interleukin 11 (IL11) is highly up-regulated and administration of recombinant human IL11 (rhIL11) has been shown to be protective. Here, we demonstrate that the beneficial effect of rhIL11 in the mouse model of AILI is due to its inhibition of endogenous mouse IL11 activity. Our results show that species-matched IL11 behaves like a hepatotoxin. IL11 secreted from APAP-damaged human and mouse hepatocytes triggered an autocrine loop of NADPH oxidase 4 (NOX4)–dependent cell death, which occurred downstream of APAP-initiated mitochondrial dysfunction. Hepatocyte-specific deletion of Il11 receptor subunit alpha chain 1 (Il11ra1) in adult mice protected against AILI despite normal APAP metabolism and glutathione (GSH) depletion. Mice with germline deletion of Il11 were also protected from AILI, and deletion of Il1ra1 or Il11 was associated with reduced c-Jun N-terminal kinase (JNK) and extracellular signal–regulated kinase (ERK) activation and quickly restored GSH concentrations. Administration of a neutralizing IL11RA antibody reduced AILI in mice across genetic backgrounds and promoted survival when administered up to 10 hours after APAP. Inhibition of IL11 signaling was associated with the up-regulation of markers of liver regenerations: cyclins and proliferating cell nuclear antigen (PCNA) as well as with phosphorylation of retinoblastoma protein (RB) 24 hours after AILI. Our data suggest that species-matched IL11 is a hepatotoxin and that IL11 signaling might be an effective therapeutic target for APAP-induced liver damage.
INTRODUCTION
Acetaminophen (N-acetyl-p-aminophenol; APAP) is a commonly used over-the-counter drug, but APAP poisoning is a major cause of drug-induced liver injury and failure (1). The antioxidant N-acetylcysteine (NAC) is beneficial for patients presenting early with APAP poisoning (2), but there is no drug-based treatment beyond 8 hours after ingestion and death can ensue if liver transplantation is not possible (3,4).
In hepatocytes, APAP is metabolized to N-acetyl-p-benzoquinone imine (NAPQI), which depletes cellular glutathione (GSH) and damages mitochondrial proteins, leading to reactive oxygen species (ROS) production and c-Jun N-terminal kinase (JNK) activation (5). ROS-related JNK activation results in a combination of necrotic and other forms of hepatocyte cell death (1,6,7). JNK and mitogen-activated protein kinase kinase kinase 5 (MAP3K5; also known as ASK1) inhibitors have partial protective effects against APAP-induced liver injury (AILI) in mouse models, but toxicities limit their translation to the clinic (8,9). Similarly, although caspase cleavage is seen in AILI, pan-caspase inhibitors have proven ineffective and hepatocyte apoptosis is not thought to play a major role (10). Failure of caspase inhibitors could reflect caspase cleavage occurring downstream of multiple forms of cell death, making it a biomarker of cellular demise rather than of a specific type of cell death.
Liver regeneration after hepatic injury can be profound in both rodents and humans, as seen after partial hepatic resection (11,12). In the setting of AILI, liver regeneration is suppressed, resulting in permanent injury. Targeting the pathways that hinder the liver's regenerative capacity may trigger natural regeneration, which could be specifically useful in AILI (13,14).
Interleukin 11 (IL11) is a cytokine that is of central importance for myofibroblast activation across organs (15)(16)(17)(18). It is known that IL11 is secreted from APAP-injured hepatocytes in mice and that IL11 can be detected at very high concentration in the serum of the mouse model of AILI, where its expression is considered compensatory and cytoprotective (19). In keeping with the idea that IL11 is beneficial in the liver, administration of recombinant human IL11 (rhIL11) is effective in treating the mouse model of AILI and also protects against liver ischemia, endotoxemia, or inflammation (19)(20)(21)(22)(23)(24). As recently as 2016, rhIL11 has been proposed as a treatment for patients with AILI (25).
During our recent studies of nonalcoholic steatohepatitis (NASH), we found that IL11 appears to be detrimental for hepatocyte function, at least in some contexts (15,26). The apparent discrepancy with the previous literature prompted us to look in more detail at the effects of IL11 in the mouse model of AILI, where endogenous mouse IL11 is largely up-regulated and rhIL11 is protective (19).
IL11 drives APAP-induced hepatocyte cell death
As reported previously (19), we confirmed that AILI is associated with largely elevated concentrations of IL11 in the serum of mice (Fig. 1A). We addressed whether the elevated IL11 serum concentration in the mouse AILI model originated in the liver and found that APAP largely up-regulated hepatic Il11 expression (35-fold, P < 0.0001) (Fig. 1B). Bioluminescent imaging of a reporter mouse with luciferase cloned into the start codon of Il11 indicated IL11 expression throughout the liver (Fig. 1C and fig. S1, A and B). Western blotting confirmed IL11 up-regulation across a time course of AILI (Fig. 1D). Experiments using a second reporter mouse with an enhanced green fluorescent protein (EGFP) reporter construct inserted into the 3′ untranslated region (UTR) of Il11 (fig. S1C) showed that, after APAP, IL11 is highly expressed in necrotic centrilobular hepatocytes, the pathognomonic feature of AILI, coincident with cleaved caspase 3 (Cl. CASP3) ( Fig. 1E and fig. S1, D and E).
Having identified hepatocytes as a source of Il11 during AILI in vivo, we conducted in vitro experiments. Exposure of primary human hepatocytes to APAP resulted in the dose-dependent secretion of IL11 (Fig. 1F). Hepatocytes highly express the IL11 receptor subunit (IL11RA), and we have observed that IL11 can be hepatotoxic (15), which we confirmed in adult human hepatocytes from additional donors (fig. S2, A and B). IL11 activates extracellular signal-regulated kinase (ERK) in some cell types (15); hence, we explored the effect of IL11 on ERK and JNK activation in hepatocytes. IL11 induced late and sustained ERK and JNK activation that was concurrent with CASP3 cleavage (Fig. 1G). Flow cytometry-based analyses showed dose-dependent IL11-induced hepatocyte cell death (Fig. 1H and fig. S2C).
To explore the potential role of IL11 signaling in APAP-induced hepatocyte death, we used a neutralizing antibody against IL11RA (X209) (15). We further validated X209 as reactive and specific for mouse IL11RA by Western blot using recombinant protein from two different sources. In functional studies, we found that X209 dose-dependently reduced mouse hepatocyte cell death [median inhibitory concentration (IC50) = 54 ng ml−1] and inhibited hepatocyte ERK and JNK activation (Fig. 1, I and J, and fig. S2, J and K). Although these data confirm the up-regulation of IL11 in AILI, they challenge the perception that this effect is compensatory and protective.
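An IC50 such as the 54 ng ml−1 value above is typically obtained by fitting a sigmoidal dose-response curve to the cell-death readout; the sketch below shows one common way to do this with a four-parameter logistic model in Python, using made-up antibody concentrations and responses purely for illustration.

```python
# Minimal sketch: estimating an IC50 by fitting a four-parameter logistic
# (Hill) curve to dose-response data. Concentrations and responses are
# hypothetical placeholders, not data from this study.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Antibody concentrations (ng/ml) and fraction of dead hepatocytes (example values).
conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
dead_fraction = np.array([0.58, 0.56, 0.50, 0.40, 0.25, 0.15, 0.12])

params, _ = curve_fit(four_pl, conc, dead_fraction, p0=[0.1, 0.6, 50.0, 1.0])
bottom, top, ic50, hill = params
print(f"Estimated IC50 ~ {ic50:.0f} ng/ml")
```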
Species-specific effects of rhIL11
rhIL11 is reported as protective across multiple rodent models of human diseases, including mouse/rat models of liver damage (tables S1 and S2), which stimulated the administration of rhIL11 to patients in the hope of therapeutic effect (table S3). Yet, our studies suggested that rhIL11 has the opposite effect on human hepatocytes in vitro (Fig. 1). This prompted us to test for potential inconsistencies when rhIL11 protein is used in a foreign species, as human and mouse IL11 share only 82% protein sequence homology.
First, we compared the effects of rhIL11 versus recombinant mouse IL11 (rmIL11) in mouse hepatocytes. The species-matched rmIL11 stimulated ERK and JNK phosphorylation and induced CASP3 cleavage, but rhIL11 had no effect ( Fig. 2A). Similarly, rmIL11 induced mouse hepatocyte cell death, whereas rhIL11 did not (Fig. 2B). In reciprocal experiments in human hepatocytes, we found that rhIL11 stimulated ERK and JNK signaling and hepatocyte death, whereas rmIL11 did not ( fig. S3, A and B).
This showed that the role of IL11 signaling in hepatocyte death is conserved across species, but that recombinant IL11 protein has species-specific effects and does not activate the same pathways in other species. We tested this hypothesis in vivo by injecting either rmIL11 or rhIL11 into mice, at doses previously used by others ( Fig. 2C) (22). Injection of rmIL11 resulted in liver damage with elevated serum concentrations of alanine transaminase (ALT) and aspartate aminotransaminase (AST) as well as ERK and JNK activation (Fig. 2, D and E, and fig. S3, C and D). In contrast, rhIL11 injection into mice had no effect on ERK or JNK phosphorylation and was associated with lower serum concentrations of ALT and AST at 24 hours (ALT, P = 0.018; AST, P = 0.0017) (Fig. 2, D and E, and fig. S3, C and D). Both rmIL11 and rhIL11 equally activated signal transducer and activator of transcription 3 (STAT3) at 30 min after injection, which represents a species-agnostic effect of recombinant IL11 when injected at high dose to the mouse (Fig. 2E).
To follow up on the published protective effect of rhIL11 in the mouse, we performed a protocol similar to a previous AILI study (22), where rhIL11 was injected into the mouse before APAP dosing (Fig. 2F). We found that rhIL11 reduced the severity of AILI in mice (reduction: ALT, 52%, P = 0.0001; AST, 39%, P < 0.0001). However, and of central importance, species-matched rmIL11 was not protective ( Fig. 2G and fig. S3E). The therapeutic effect of rhIL11 was accompanied by a reduction in hepatic ERK and JNK activation (Fig. 2H), which suggests that rhIL11 blocks endogenous mouse IL11-driven signaling pathways in the liver similar to IL11RA antibody effect in vitro (Fig. 1I).
Using surface plasmon resonance (SPR), we found that rhIL11 binds to mouse IL11 receptor chain 1 (mIL11RA1) with a K D (dissociation constant) of 72 nM, which is similar to the rmIL11:mIL11RA1 interaction (94 nM) and close to that reported previously for rhIL11:hIL11RA (50 nM), which we reconfirmed (Fig. 2I and fig. S3F) (27). We then performed a competition enzyme-linked immunosorbent assay (ELISA) and found that rhIL11 competed with rmIL11 for binding to mIL11RA1 and was an effective blocker of rmIL11 (Fig. 2J).
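The reported dissociation constants can be translated into expected receptor occupancy with the standard single-site binding relation, fraction bound = [L]/([L] + K D); the short sketch below illustrates this relation, and a simple competitive-binding variant, for hypothetical ligand concentrations chosen only for illustration.

```python
# Minimal sketch: single-site receptor occupancy and simple competition,
# using the equilibrium relation fraction_bound = [L] / ([L] + Kd).
# Concentrations below are hypothetical examples.

def fraction_bound(ligand_nM, kd_nM):
    """Fraction of receptor occupied by a single ligand at equilibrium."""
    return ligand_nM / (ligand_nM + kd_nM)

def fraction_bound_with_competitor(ligand_nM, kd_nM, comp_nM, comp_kd_nM):
    """Occupancy by the ligand when a competitor shifts its apparent Kd."""
    apparent_kd = kd_nM * (1.0 + comp_nM / comp_kd_nM)
    return ligand_nM / (ligand_nM + apparent_kd)

# rmIL11 (Kd ~94 nM) occupancy of mIL11RA1 alone, and with a large excess
# of rhIL11 acting as a competitor (Kd ~72 nM).
print(fraction_bound(100.0, 94.0))                                 # ~0.52
print(fraction_bound_with_competitor(100.0, 94.0, 1000.0, 72.0))   # much lower
```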
In mouse hepatocytes, rhIL11 acted as a dose-dependent inhibitor of rmIL11-induced signaling pathways and cytotoxicity (Fig. 2, K and L, and fig. S3G). In addition to mouse hepatocytes, rhIL11 inhibited rmIL11-driven ERK and JNK signaling and matrix metalloproteinase 2 (MMP2) production in mouse kidney fibroblasts, heart fibroblasts, skin fibroblasts, and hepatic stellate cells ( fig. S3, H and I). Thus, rhIL11 seems to act as a neutralizer of mouse IL11 in various cells from across mouse tissues.
Hepatocyte-specific expression of Il11 causes spontaneous liver damage
To test the effects of endogenous mouse IL11 secreted from hepatocytes in vivo, we expressed an Il11 transgene in hepatocytes by injecting Rosa26 Il11/+ mice (16,17) with adeno-associated virus vector serotype 8 (AAV8) virus encoding an albumin (Alb) promoter-driven Cre construct [Il11-transgenic (Tg) mice; Fig. 3A]. Three weeks after transgene induction, Il11-Tg mice had atrophied livers (38% smaller, P < 0.0001), whereas other organs were unaffected (fig. S4A). Il11-Tg mice also had mildly elevated serum concentrations of ALT and AST, as compared to control mice (Alb-Null) (Fig. 3, B to D, and fig. S4B). Histologically, infiltrates were seen around the portal triad and the portal veins were nonspecifically dilated (P < 0.0001) (fig. S4, C and D). Molecular analyses of Il11-Tg livers revealed activation of ERK, JNK, and CASP3 cleavage along with increased pro-inflammatory gene expression (Fig. 3E and fig. S4, E and F). These data support a maladaptive effect of species-matched IL11 secreted from uninjured hepatocytes but do not inform as to the role of IL11 in the context of APAP toxicity, which we examined subsequently in loss-of-function experiments.
In primary human hepatocytes, ROS dose dependently induced IL11 secretion and cell death ( fig. S5, A and B) and IL11 also stimulated ROS production, which was diminished, in part, by NAC (fig. S5, C and D). We observed an additive effect of H 2 O 2 -derived ROS and IL11 on hepatocyte death (flow cytometry, ALT) and maladaptive signaling (JNK, CASP3, and NOX4) ( fig. S5, E to H). IL11 dose dependently stimulated hepatocyte GSH depletion that mirrored ERK and JNK activation and NOX4 up-regulation (Figs. 1G and 3, H and I). As expected, only species-matched IL11 induced NOX4 up-regulation and lowered the amount of GSH ( Fig. 3J and fig. S6, A to D). APAP stimulated NOX4 and ROS up-regulation as well as GSH depletion, all of which were dependent, in part, on IL11 signaling (Fig. 3, K and L, and fig. S6E). These data link APAP toxicity in hepatocytes with IL11-stimulated, NOX4-dependent ROS production in a feed-forward manner downstream of APAP-induced mitochondrial ROS.
We reconsidered the effect of rhIL11 in inhibiting endogenous mouse IL11-induced mouse cell death and observed a dose-dependent effect of rhIL11 on restoring GSH concentrations in rmIL11-stimulated mouse hepatocytes (fig. S7A). Similarly, in vivo, rhIL11 was associated with improved GSH concentrations in APAP-treated mice, whereas rmIL11 was not (fig. S7B). GKT137831, a NOX1/NOX4 inhibitor, prevented IL11-stimulated GSH depletion, ERK, JNK, and CASP3 cleavage, as well as hepatocyte death (ALT), in a dose-dependent manner (Fig. 3, M to O, and fig. S8A). The specificity of inhibition of NOX4 was confirmed using small interfering RNA (siRNA) against NOX4, which prevented IL11-induced hepatotoxicity (fig. S8, B to E), and we also showed no effect of IL11 on NOX1 expression (fig. S8F). Together, these data show that IL11-stimulated NOX4 activity is important for GSH depletion in hepatocytes.
Hepatocyte-specific deletion of Il11ra1 prevents APAP-induced liver failure
Previous studies have shown that mice with global germline deletion of Il11ra1 are not protected from AILI (19), which we confirmed (fig. S9, A to C). Shortcomings of germline gene deletion relating to off-target effects (and/or developmental compensation) are recognized, and we used RNA sequencing (RNA-seq) to examine the expression of genes at the targeted locus in the Il11ra1 null mouse (30). This revealed that, in addition to Il11ra1, the expression of C-C motif chemokine ligand (Ccl) 27a was also disrupted at the locus (fig. S9, D and E). Given this potential confounding factor, we decided to use conditional and temporal deletion of Il11ra1 to better address the impact of Il11ra1 loss of function in mouse hepatocytes in AILI.
We created Il11ra1 conditional knockouts (CKOs) by injecting AAV8-Alb-Cre virus to mice homozygous for LoxP-flanked Il11ra1 alleles (Il11ra1 loxP/loxP ), along with wild-type controls. Three weeks after viral infection, control mice and CKOs were administered APAP (400 mg kg −1 ) (Fig. 4A). At baseline, both control and CKO groups had equivalent expression of hepatic cytochrome P450 2E1 enzyme (CYP2E1), a key enzyme responsible for the conversion of APAP to its active hepatotoxic metabolite, NAPQI (fig. S10A). One hour after APAP dosing, both CKO and control mice had equivalent plasma concentrations of APAP and a range of APAP metabolites, including NAPQI ( Fig. 4B and fig. S10B). Correspondingly, both strains had large depletion of hepatic GSH, the molecular fingerprint of NAPQI-mediated oxidative stress ( fig. S10C). Thus, Il11ra1 deletion in hepatocytes does not affect APAP metabolism or GSH depletion.
The day after APAP administration, gross anatomy revealed small and discolored livers in control mice, whereas livers from APAP-treated CKO mice looked similar to livers from mice receiving saline injection (Fig. 4C). Histology showed typical and extensive centrilobular necrosis in control mice, which was lesser in CKOs ( Fig. 4D and fig. S10D). It was striking that CKO mice had markedly lower serum concentrations of ALT and AST, as compared to controls and GSH concentrations that had largely returned to baseline (Fig. 4, E to G). ERK, JNK, and CASP3 activation was observed in control mice but not in the CKOs (Fig. 4H). Deletion of Il11ra1 in hepatocytes reduced cytokine/chemokine markers and increased F4/80 expression but had no effect on cluster of differentiation (Cd) 68 or Cd11b expression ( Fig. 4I and fig. S10E).
Mice deleted for IL11 are protected from AILI
We recently found that, although Il11ra1 KO mice have similarities with a globally deleted Il11 mouse (Il11 KO), they also have differences in some phenotypes (31) and also in expression of Ccl27a ( fig. S9, D and E). To investigate further the effects of AILI in a second model of genetic loss of function in IL11 signaling, we subjected Il11 KO mice to AILI (Fig. 4J). We found that Il11 KO mice are protected from liver damage and that the injured livers phenocopied the signaling patterns seen in the CKO mice (Fig. 4, K to M). Thus, germline loss of function of Il11 or hepatocyte-specific deletion of Il11ra1 in the adult is protective against AILI.
Anti-IL11RA given early during AILI is beneficial
We next tested if therapeutic inhibition of IL11 signaling was effective in reducing AILI by administering either anti-IL11 (X203) or anti-IL11RA (X209) (15,17). Initially, we used a preventive strategy by injecting X203, X209, or control antibody (20 mg kg−1) 16 hours before APAP (Fig. 5A) and found both X203 and X209 to be protective (fig. S11, A and B). X209 proved most effective in protecting the liver, as seen previously in NASH studies (15), and was prioritized for subsequent experiments. We quantified APAP, NAPQI, and other APAP metabolites in plasma of the immunoglobulin G (IgG)- or X209-treated mice 1 hour after APAP by mass spectrometry and found equivalent concentrations (Fig. 5B and fig. S12A). Despite normal APAP metabolism and large acute GSH depletion (fig. S12, A and B), mice receiving X209 had lower serum markers of liver damage, largely restored hepatic GSH concentrations, and lesser centrilobular necrosis by 24 hours after APAP (Fig. 5, C and D, and fig. S13, A and B).
Next, we gave anti-IL11RA therapy in a therapeutically relevant mode at 3 hours after APAP, a time point by which APAP metabolism and toxicity is established and at which most interventions have no effect in the mouse model of AILI (Fig. 5E) (9). X209 (2.5 to 10 mg kg −1 ) inhibited all aspects of AILI with dose-dependent improvements in the degree of hepatocyte death (ALT and AST), ERK/JNK activation, GSH concentrations, and extent of centrilobular necrosis (Fig. 5, F to I, and fig. S13C).
We also determined whether inhibiting IL11 signaling had added value when given in combination with the current standard of care, NAC, 3 hours after APAP dosing (Fig. 5E). Administration of NAC alone reduced serum concentrations of ALT and AST (Fig. 5, F and G). However, NAC, in combination with X209, was even more effective than either NAC or X209 alone (ALT reduction: NAC, 38%, P = 0.0007; X209, 47%, P < 0.0001; NAC + X209, 75%, P < 0.0001). The degree of ERK and JNK inhibition with NAC or NAC together with X209 mirrored the magnitude of ALT and AST reduction in the serum and the restoration of hepatic GSH concentrations (Fig. 5, F to H and J). As such, anti-IL11RA therapy has added benefits when given in combination with the current standard of care in this mouse model.
Effects of X209 on AILI across mouse strains
There are instances in the literature where a pharmaceutical or genetic intervention has been associated with protection against AILI but has been difficult to replicate in follow-on studies (32). This may reflect the fact that liver phenotypes are susceptible to microbiota-, strain-, and sex-associated differences, with some of these factors having profound effect (33,34).
To study putative strain-specific factors in AILI, we performed an additional set of blinded experiments in female C57BL/6NTac (InVivos) mice and in male and female mice from four additional mouse strains (C57BL/6J, B6.129S1, 129X1/SvJ, and C3H/HeNTac). These studies revealed notable strain-related variation in the degree of liver injury after APAP ( fig. S14 and table S4). This said, in all experiments, we found that inhibition of IL11 signaling using X209 reproducibly reduced AILI in both male and female mice (Fig. 5, C and K; fig. S14; and table S4).
Last, we studied C57BL6/NTac mice from a second provider to assess for within-strain variation, as genetic drift and/or differences in the microbiome or pathogen load might also influence AILI severity or the response to inhibition of IL11 signaling in AILI. As compared to the C57BL6/NTac mice used throughout this manuscript (InVivos), the degree of AILI in the additional C57BL6/NTac strain (Taconic Biosciences) was much greater even at a lower APAP dose (300 mg/kg) (~3-fold, males; ~20-fold, females) (Fig. 5C, fig. S14, and table S4). Despite this, administration of X209, as compared to IgG, significantly reduced liver damage in both female and male mice (ALT reduction: male, 41%, P < 0.0001; female, 21%, P = 0.0127), although the magnitude of effect was diminished as compared to other strains, notably in female mice (Fig. 5, C and K; fig. S14; and table S4).
Liver regeneration with anti-IL11RA dosing
For patients presenting to the emergency room 8 hours or later after APAP poisoning, there is no effective treatment. This prompted us to test anti-IL11RA 10 hours after APAP (400 mg kg −1 ) administration to mice (Fig. 6A). Analysis of gross anatomy, histology, and serum concentrations of IL11, ALT, and AST revealed that X209 reversed liver damage by the second day after APAP, whereas IgG-treated mice had sustained liver injury (Fig. 6, B to E, and fig. S15, A and B). X209 effectively blocked ERK and JNK activation throughout the course of the experiment, and this preceded a reduction in Cl. CASP3 at 24 hours (Fig. 6F and fig. S15C).
Interventions promoting liver regeneration have been suggested as a new approach for treating AILI (13), and we assessed the status of genes important for liver regeneration. Inhibition of IL11 signaling was associated with a signature of regeneration with up-regulation of proliferating cell nuclear antigen (PCNA), cyclin D1/D3/E1, and phosphorylation of retinoblastoma protein (RB) (Fig. 6F), as seen during regeneration after partial hepatectomy (11).
EdU (5-ethynyl-2′-deoxyuridine) injection and histological analyses showed large numbers of nuclei with evidence of recent DNA synthesis in X209-treated mice as compared to controls (Fig. 6G and fig. S15D). Effects of X209 administration on cytokine gene expression were variable, whereas inflammatory cell markers (Cd68, Cd11b, and F4/80) were generally increased ( fig. S15E). We reassessed the adjunctive effects of X209 and NAC given 3 hours after APAP to see whether regeneration was also associated with inhibition of IL11 signaling at earlier time points. This proved to be the case, and the combination of X209 and NAC was more effective than NAC alone, notably for cyclin D1 and D3 (Fig. 6H).
We then administered X209 (20 mg kg −1 ) 10 hours after a higher and lethal acetaminophen dose (550 mg kg −1 ) at a time point when mice are moribund and livers undergo fulminant necroinflammation (Fig. 6I). X209-treated mice recovered and had a 90% survival by the study end. In contrast, IgG-treated mice did not recover and succumbed with a 100% mortality within 48 hours (Fig. 6J). On day 8 after the lethal dose of APAP, X209-treated mice appeared healthy with normal liver morphology and serum ALT concentrations were comparable to controls that had not received APAP (Fig. 6K and fig. S16, A and B).
Taking our data together, we propose a mechanism for APAP toxicity whereby NAPQI damage of mitochondria results in ROS-related IL11 up-regulation, subsequent IL11-dependent NOX4 expression, and further ROS production (fig. S17). This drives dual pathologies: killing hepatocytes via activation of downstream signaling and preventing hepatocyte regeneration, through mechanisms yet to be defined.
DISCUSSION
APAP poisoning is common, with up to 50,000 individuals attending emergency departments every year in the United Kingdom, some of whom develop liver failure requiring transplantation (1). Here, we show that IL11, previously reported as protective against APAP-induced liver failure (19,22), liver ischemia (20,23), endotoxemia (24), or inflammation (21), is a hepatotoxin and of importance for APAP-induced liver failure.
The observation that species-matched IL11 is pathogenic is surprising, as more than 30 publications have reported cytoprotective, anti-inflammatory, and/or anti-fibrotic effects of rhIL11 across a range of rodent models of human disease. Here, we show that, unexpectedly and paradoxically, rhIL11 is a competitive inhibitor of mouse IL11 binding to its cognate receptor. Furthermore, after binding to murine IL11RA1, rhIL11 does not stimulate the maladaptive signaling seen with species-matched IL11 (NOX4, JNK, and Caspase3 cleavage) but instead transiently activates STAT3. Although it could be argued that activation of STAT3 is protective in itself, we think this unlikely as rmIL11 also activates STAT3 when injected to mice but is hepatotoxic; thus, the STAT3 effect may be a bystander/nonspecific event.
The fact that rhIL11 turns out to be an inhibitor of mouse IL11 activity challenges our understanding of the role of IL11 in AILI and in disease more generally, as we found that the inhibitory effect of rhIL11 on IL11 signaling in the mouse is conserved across cells and tissues. This implies that IL11 signaling may be relevant for a range of diseases where rhIL11 has had protective effects in mouse models, which include rheumatoid arthritis (35) and colitis (36) (tables S1 and S2). We highlight that rhIL11 has been administered to patients in clinical trials for diseases where rhIL11 was found protective in mouse models of disease (table S3).
Although mice globally and germline deleted for Il11ra1 (30) are not protected from AILI (19), which we confirmed, we believe that this may, in part, be explained by off-target effects at the Il11ra1 locus, which we documented. Furthermore, there are unique features seen in the Il11ra1 KO mouse that are not apparent in Il11 null mice (31). Our studies advise caution against using a single genetic model on one genetic background for the study of AILI, and we suggest that loss- and gain-of-function approaches across genetic models are preferred, especially if complemented by specific pharmacologic interventions.
It is apparent from the published literature that the effect of APAP on liver damage in mice can vary across strains, which reflects influences of genetic background, microbiome, and pathogen load (32)(33)(34). We documented notable variability in the severity of AILI across strains and observed surprisingly large within-strain differences in AILI in genetically identical mouse strains from two different sources. The magnitude of effect of anti-IL11RA dosing also varied across and within strains, with some strains showing far greater reductions in ALT concentrations as compared to others (ALT reduction in female mice: C57BL6/NTac, 21%; C3H/HeNTac, 93%). Thus, the power to detect an effect associated with inhibition of IL11 in AILI is dissimilar between mouse strains, and this is an important experimental consideration.
Our study stimulates questions and has a number of limitations. We show that ERK is co-regulated with JNK in APAP-injured livers, yet ERK's role in AILI is not well characterized. The role of apoptosis in AILI is contentious, and although IL11 stimulates caspase cleavage in hepatocytes, the functional relevance of this is not clear and studies of IL11 in hepatocyte lipotoxicity suggest that IL11 is important for more than one form of cell death (26). The mechanism by which anti-IL11 administration stimulates liver regeneration and the nature of the replicating cells remain unknown. The effect of IL11 on cell types other than hepatocytes in AILI was not dissected, and we did not address the role of IL11 on the immune response, which is an important issue that requires further study. Whether variation in the microbiome and/or pathogen load affects the IL11 axis in AILI, which is inferred from our within-strain studies if genetic drift is excluded, appears profound but has not been studied here. Although we believe that it is unlikely that rhIL11-stimulated STAT3 activity plays a role in the protective effects of rhIL11 in AILI, which we suggest instead reflects competitive inhibition of mouse IL11 binding to IL11RA1, this was not formally excluded. Why the anti-IL11RA approach was more effective in reducing liver damage as compared to anti-IL11 dosing was not determined but is consistent with other studies of liver disease in the mouse (15,37).
We point out that although the toxic effects of IL11 appear conserved in mouse and human hepatocytes, we did not study human biospecimens from patients with AILI. Measuring IL11 expression in humans is difficult, as the concentration of IL11 in the serum is very low and liver biopsy is not part of routine clinical care in patients with AILI. Whether or not inhibition of IL11 signaling is beneficial in patients with AILI can only be tested formally in randomized and blinded clinical trials, which might be envisaged now that therapeutic molecules are being developed.
We end by noting that because IL11 neutralizing therapies are not dependent on altering APAP metabolism and stimulate liver regeneration, they could be useful for patients presenting late with AILI, in addition to having added value when given together with NAC. Overall, our studies question the premise that IL11 is protective in the liver and suggest instead that species-matched IL11 is a hepatotoxin.
Study design
In this study, we used primary human and mouse hepatocytes and in vivo mouse experiments to investigate the effects of IL11 in hepatocytes and of inhibiting IL11 signaling (genetically and therapeutically) in a mouse model of AILI. Animal procedures were performed according to the protocols approved by the SingHealth Institutional Animal Care and Use Committee (IACUC). IL11 RNA and protein expression was examined in primary human hepatocyte supernatants and in liver and serum from the mouse model of AILI by quantitative polymerase chain reaction (qPCR), Western blotting, and ELISA. Further confirmation of IL11 protein expression in liver was assessed by bioluminescent imaging and immunofluorescence analysis of liver tissue from Il11-luciferase and Il11-EGFP reporter mice, respectively. The effects of species-specific IL11 on hepatocyte injury and death were examined by gain-of-function approaches in vitro in primary human and mouse hepatocyte cultures and in vivo by systemic administration of rhIL11/rmIL11 and in hepatocyte-specific Il11-expressing Tg mice (AAV8-Alb-Cre-Rosa26 Il11/+ ; Il11-Tg). Binding of human/mouse IL11 to mouse IL11RA was assessed by competitive ELISA, and their binding affinities were determined by SPR. Loss-of-function experiments were performed in Il11 and Il11ra1 global KO mice and in hepatocyte-specific Il11ra1-deleted mice to investigate the effects of genetic inhibition of Il11 or Il11ra1 in AILI. Binding of neutralizing antibody against IL11RA (X209) to mouse IL11RA was validated by immunofluorescence analysis and Western blotting of primary mouse hepatocytes and mouse IL11RA protein from two different commercial sources. X209 specificity to mouse IL11RA was evaluated by immunohistochemistry of liver tissue isolated from wild-type and Il11ra1 KO mice. We used X209 in primary hepatocyte experiments and in pharmacologic prevention and reversal studies in the AILI mouse model. Colorimetric assays (ALT, AST, GSH measurement) and Western blot analysis were performed on hepatocytes or hepatocyte supernatants as well as liver/serum samples from mice subjected to AILI. Plasma concentrations of APAP and APAP metabolites were assessed by liquid chromatography-tandem mass spectrometry. Sample size for cell-based assays was determined on the basis of sample availability and technical needs. The in vivo experiments were designed to detect differences between treatment groups or genotype-dependent effects at 80% power (α = 0.05), but sample sizes may vary depending on animal availability. Outlier tests were performed using the ROUT method (GraphPad Prism). Sample sizes are detailed in figure legends. Mice were randomly assigned to experimental groups on the day of the treatment, except for KO mice in which randomization was assigned within the same genotypes. For in vitro experiments, investigators were not blinded to group allocation during data collection and analysis. For in vivo experiments, investigators were not blinded other than for the gender and strain studies, which were performed double blind, for the large part. Histological analysis of liver tissue samples was performed blinded to treatments and genotypes. Further details are described in Supplementary Materials and Methods.
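As a point of reference for the 80% power criterion mentioned above, a two-group sample-size calculation can be sketched as follows; the effect size used is a placeholder assumption rather than a value taken from this study.

```python
# Hypothetical sketch of a two-group sample-size calculation at 80% power
# and alpha = 0.05; the effect size below is an assumed placeholder, not a
# value reported in this study.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
n_per_group = power_analysis.solve_power(
    effect_size=1.5,          # assumed standardized difference (Cohen's d)
    alpha=0.05,               # two-sided significance level
    power=0.80,               # desired statistical power
    alternative="two-sided",
)
print(f"Approximate sample size per group: {n_per_group:.1f}")
```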
Statistical analysis
Statistical analyses were performed using GraphPad Prism software (version 8). Datasets were tested for normality with Shapiro-Wilk tests. For normally distributed data, statistical significance between control and experimental groups was analyzed by two-sided Student's t tests or by one-way analysis of variance (ANOVA) as indicated in the figure legends. P values were corrected for multiple testing according to Dunnett's method (when several experimental groups were compared to a single control group) or Tukey's method (when several conditions were compared to each other within one experiment). Nonparametric tests (Kruskal-Wallis with Dunn's correction in place of ANOVA and Mann-Whitney U test in place of the two-tailed t test) were conducted for non-normally distributed data. Comparison analysis for two parameters from two different groups was performed by two-way ANOVA and corrected with Sidak's multiple comparisons when the means were compared to each other. Survival curves were analyzed by the Gehan-Breslow-Wilcoxon test. The criterion for statistical significance was P < 0.05.
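The decision flow described above (normality check, then a parametric or nonparametric two-group comparison) can be illustrated with the minimal SciPy sketch below; the arrays are placeholder values, and multiple-comparison corrections such as Dunnett's are omitted for brevity.

```python
# Minimal sketch of the two-group testing logic described above:
# Shapiro-Wilk for normality, then Student's t test or Mann-Whitney U.
# The data arrays are placeholders, not study values.
import numpy as np
from scipy import stats

control = np.array([1.2, 1.4, 1.1, 1.3, 1.5, 1.2])
treated = np.array([2.1, 2.4, 1.9, 2.2, 2.6, 2.0])

normal = all(stats.shapiro(g).pvalue > 0.05 for g in (control, treated))

if normal:
    # Two-sided Student's t test for normally distributed data
    res = stats.ttest_ind(control, treated)
else:
    # Mann-Whitney U test for non-normally distributed data
    res = stats.mannwhitneyu(control, treated, alternative="two-sided")

print(f"P = {res.pvalue:.4f} (significant at P < 0.05: {res.pvalue < 0.05})")
```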
The Effect of Peer-Assisted Mediation vs. Tutor-Intervention within Dynamic Assessment Framework on Writing Development of Iranian Intermediate EFL Learners
Dynamic assessment originates in the Zone of Proximal Development (ZPD), and practicing dynamic assessment necessarily requires working within the ZPD. This study aimed to investigate the effect of peer-assisted mediation vs. tutor-intervention within a dynamic assessment framework on the writing development and attitude of Iranian intermediate EFL learners. To do so, a quasi-experimental design and a questionnaire survey were carried out. After a pilot study, a language proficiency test was administered to homogenize the two intact groups of 30 students each. The writing part of the proficiency test also served as the pretest. The two groups were then asked to fill out the pretest attitude questionnaire. In the peer-assisted mediation group, the writing assignments were assessed by peers, who also provided feedback; in the tutor-intervention group, the tutor assessed the assignments and provided feedback during instruction. At the end, both the posttest of writing and the questionnaire were administered. Comparing the posttests indicated that the peer-assisted mediation group outperformed the tutor-intervention group. Although peer-assisted mediation had a significant effect on the learners' writing, the effect of tutor-intervention was not significant when comparing the pre- and post-tests within groups. The study also revealed that both peer-assisted mediation and tutor-intervention significantly changed the learners' attitude towards writing between the pre- and post-test questionnaires, though the difference between their effects on attitude was not significant when comparing the posttest questionnaires.
Problem and Purpose
Today, the role of assessment in teaching and language learning has become crucial at all stages. In past decades, assessment shifted from traditional views to a new approach called dynamic assessment. As the name suggests, it involves some alteration of the traditional one. Poehner (2008) states that "dynamic assessment posits a qualitatively different way of thinking about assessment of how it is traditionally understood by classroom teachers and researchers" (p. 1). Dynamic assessment is based on the Zone of Proximal Development (ZPD): the difference between what a learner achieves by herself and what she achieves with the assistance of others refers directly to the ZPD. Development in the learning process is aligned with sociocultural theory, as noted by Swain, Kinnear and Steinman (2010). Sociocultural theory considers theory of mind and cultural interactions. Peterman (2005) believes that sociocultural theory assumes learning happens when an individual participates in a cultural context and is supported initially by a more knowledgeable person. Dynamic assessment (DA) is an approach to assessment within the domains of psychology, language, or education that emphasizes the ability of the learner to respond to intervention and mediation (Haywood & Lidz, 2006). Adopting the interactionist approach to DA, this study attempted to investigate the effect of peer-assisted mediation and tutor-intervention on writing development. It also sought to discover the effect of these types of assessment on the learners' attitude towards their writing development. The present study highlights the role of dynamic assessment in learning the writing skill; it also underlines the role of the assistant and mediator (peer or tutor) in the learning process and reveals the effect of providing feedback in the writing process.
Research Questions
1) Does peer-assisted mediation have any significant effect on writing development of Iranian intermediate EFL learners?
2) Does tutor-intervention have any significant effect on writing development of Iranian intermediate EFL learners?
3) Is there any significant difference between the effect of peer-assisted mediation and tutor-intervention on writing development of Iranian intermediate EFL learners?
4) Does peer-assisted mediation have any significant effect on the attitude of Iranian intermediate EFL learners towards writing?
5) Does tutor-intervention have any significant effect on the attitude of Iranian intermediate EFL learners towards writing?
6) Is there any significant difference between the effect of peer-assisted mediation and tutor-intervention on the attitude of Iranian intermediate EFL learners towards writing?
Review of the Related Literature
Traditionally, assessment is described as an information-gathering activity (Bailey, 1996). For instance, McNamara (2004) explains that assessment aims to reach an understanding of pupils' knowledge or their learning ability. Based on this viewpoint alone, it is not obvious why teachers, including second language teachers, often refer to assessment as a necessary part of the teaching and learning process. One might expect that the data gained through assessment procedures would be eagerly welcomed and viewed as a vital constituent of better teaching.
Dynamic assessment considers mediation and provides constant feedback during the process of learning, together with the responses to this feedback, so feedback is a very important factor in effective student learning. The benefits of successful feedback, set in the context of learning outcomes, are many: it builds self-assurance in students, stimulates students to improve their learning, provides students with presentation improvement information, corrects errors, and recognizes strengths and weaknesses.
Dynamic assessment presupposes giving feedback and responding to that feedback, so feedback is crucial in successful learning. Dockrell (2001) believes assessment should provide feedback to students on their progress towards the achievement of learning outcomes. Feedback enables students to realize where they have done well and indicates what they could improve on. It also justifies the grade or mark of comprehensive assessments.
It is important that feedback is timely. Cheng (2005) notes that if feedback is provided too soon, it may disrupt the student's reflective process; however, it is far more common for feedback to be provided too late, when it is no longer salient to the student. Feedback should not be held off until the end of a year or semester, as the student is unlikely to benefit from it once the task is complete and they have moved on to a new one.
Trends in the teaching of writing in ESL have shifted over past decades. Teachers learned more and more about how to teach fluency rather than accuracy and how to use authentic texts and contexts in the classroom. Process writing is one of the modern approaches to the writing skill. Process writing helps writers to understand their own composing process, gives students time to write and rewrite, lets students discover what they want to say and write, gives students feedback throughout the composing process, and encourages feedback from both teacher and peers (Brown, 2007).
The effect of peer mediation with young children on the autonomy behavior of children mediated by trained peers was examined by Shamir and Steven (2005). The results indicated that children who received instruction in peer mediation with young children outperformed children who received general preparation for peer-assisted learning. Also, a higher level of mediational techniques and higher cognitive modifiability were associated with autonomy.
Another study investigated improving oral reading fluency with a peer-mediated intervention. It examined the effects of an experimentally derived, peer-delivered reading intervention on the oral reading fluency of a first-grade student who had been referred for poor reading fluency. Results indicated that reading improvements were obtained through appropriate and efficient peer intervention, with peers acting as mediators of learning reading comprehension (Duke & Daly, 2011).
Peer-assisted learning strategies for promoting word recognition, fluency, and reading comprehension in young children were investigated by Douglas and Lynn (2005), who summarized a good portion of the comprehension research program on reading in the early grades. First, they described investigations conducted in kindergarten, where their focus was on the development of decoding and word recognition. Then, they discussed studies conducted in first grade, where they continued to emphasize decoding and word recognition but expanded their focus to include fluency and comprehension. The findings showed that peer-assisted learning strategies are useful for fluency and reading comprehension.
The use of tutor mediation within a DA framework to support business students in the context of open and distance education was investigated by Shrestha and Coffin (2012). The study explored the value of tutor mediation in the context of academic writing development among undergraduate business students studying in open and distance learning, following DA. The analyses of the interactions suggested that DA could help to identify and respond to the areas in which students need the most support. Finally, they argued that a learning theory-driven approach such as DA could contribute to undergraduate students' academic writing development. The results also showed that traditional assessment methods were unable to sufficiently support students, whereas DA's focus on learning and development helped to identify participants' evolving writing abilities.
A simple process and framework for teaching English writing to Iranian intermediate EFL learners based on the principles of dynamic assessment (DA) was introduced by Azarizad and Ghahremani (2013). Reflections on and results of the research reiterated that the dialogic way of teaching is of great help in enhancing learners' writing interest and improving their writing competence.
A case study introducing DA and producing a simple framework (or process) for English writing instruction based on the principles of DA was carried out by Xiaoxiao and Yan (2010). The results of the study showed that the dialogic way of teaching was of great help in enhancing learners' writing interest and improving their writing competence.
The regulatory scale offered by Aljaafreh and Lantolf (1994) was applied to Iranian EFL learners' writing ability by Isavi (2012). In that study, the learners responded differently to the same types of errors they had made at the pretest stage after the introduction of mediation by the teacher. The regulatory scale applied in the intervention stage uncovered the fact that individual learners had different developmental levels. The result of the study showed that a DA approach to EFL learners' writing ability could be useful.
All of the studies reviewed above have been in some sense mediational, but there are other approaches to assessment that include intervention and response to intervention and that are not mediational; these would still fit within the broad definition of DA. In this study, however, both tutor intervention and peer-assisted mediation were mediational, with a focus on moving the learners beyond their ZPD in their writing ability.
Method
While peer-assisted mediation and tutor-intervention were considered the independent variables, writing skill and the attitude of the learners were the dependent variables of the study.
Participants
Participants of the main study comprised 60 female learners studying English as a foreign language at the intermediate level, based on a proficiency test taken from the American English File test pack. The sample was not randomly assigned to groups; rather, the intact classes were used. Therefore, the sample was assigned to conditions in a nonrandomized way, permitting the researcher to choose conditions based on presumed needs.
Language Proficiency Test
A language proficiency test was administered to test the homogeneity of the participants. The test consisted of different parts, namely grammar, vocabulary, pronunciation, reading, writing and listening, aimed at evaluating the homogeneity of the participants, and its writing section was also used as the pretest. The language proficiency test was taken from the American English File test pack used by the institutes. Its grammar section had 20 items, the vocabulary section had 20 items, the pronunciation section had 10 items, the reading section had 10 items, the listening section had 10 items, and the writing section included a paragraph writing task. The test was scored out of 100, with each item worth 1 point. Normally, proficiency tests do not include a pronunciation section, but since the test was packaged by the institutes and the validity of the test had been reported in advance, this section was not eliminated. The writing section of the test was used as the pretest. The pretest of writing involved writing a cover letter to apply for a job based on an advertisement which was provided to the participants. The students had to write 120-150 words.
Attitude Questionnaire
The questionnaire was adapted from the writing skill questionnaire in Community of Writers (Elbow & Belanoff, 2002). The questionnaire was related to the variables of the present study in order to measure the outlook of the participants towards learning and the development of writing through the treatment. It had 5 parts: attitude towards general writing, attitude towards generating ideas, attitude towards mediation, attitude towards feedback, and attitude towards collaboration. It consisted of 24 items that participants answered with yes, no, or sometimes.
Rating Scale of Writing
A rating scale was used in this study to assess the writing assignments of the learners at each session. It belongs to DA and is known as the RECIPROCITY rating scale. It was devised by Van der Aalsvoort and Lidz (2002), as cited in Poehner (2008). It focuses on bidirectional interaction between mediator and learner. It also signifies the role of documentation, which is revealed through comments and written feedback. Ten scores were assigned for taught points, which were repeated in subsequent assignments, since this kind of rating scale, as a part of DA, focuses on removing errors during a course of study.
Post Test of Writing
The posttest of writing was quite different from the pretest. It consisted of a paragraph of around 120 to 150 words on a topic taken from the American English File series. The students had to write a postcard to a friend they hadn't seen or spoken to for a long time, based on the provided instruction. The writing points taught in the treatment sessions, regarding organization of the paragraph, punctuation, capitalization, descriptive paragraphs, using linking words and adverbs in narratives, and connecting sentences with relative pronouns, were all considered. The scores of the posttests of writing were calculated out of 10, the same way the pretests were scored.
Design
This quantitative research employed a quasi-experimental design together with a survey questionnaire.
Procedure
At the outset of the study, a pilot study was conducted on a small scale with 10 participants having the same characteristics as those of the main study, over three sessions. While KR-21 showed a reliability of 0.86 for the proficiency test with 100 items, the Cronbach alpha index was estimated at 0.85 for the reliability of the questionnaire with 24 items. The treatments, feedback provisions, and assistance were piloted as well.
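For illustration, Cronbach's alpha, which is closely related to the KR-21 index used for the proficiency test, can be computed as sketched below; the small response matrix is invented and does not reproduce the 0.85 or 0.86 values reported above.

```python
# Illustrative computation of Cronbach's alpha for an item-response matrix
# (rows = respondents, columns = items). The responses below are invented,
# not the pilot-study data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([
    [1, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```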
The writing points were taught, followed by paragraph writing. Then, the participants were taught how to provide feedback on their peers' writing at the end of each session. Also, the rating scale was presented to all participants as a checklist to assess the paragraphs.
In all three sessions, as in the main study, participants were divided into pairs. The pairs switched their assignments. At the first stage, one read the other's paragraph once without correcting it. Second, the peer spotted two important points. Then, the peer gave comments in full sentences in the margin; the rating was on a 1-10 basis. When this was finished, the assignments were switched back, and the pairs talked about and negotiated the problems. On the other hand, the tutor read the paragraphs once and then identified the errors by providing feedback, writing comments, and using abbreviations and phrases in order to highlight the problems. The tutor then returned the assignments to the participants, and the tutor and participants talked about problems and solutions. The tutor rated the assignments on a 1-10 basis.
In the main study, first, a language proficiency test and an attitude questionnaire were administered to the two groups. Then, the treatment sessions started. The following writing points were taught in the treatment sessions in each group.
Taught Writing Points
1) Capitalization and Organization
2) Punctuation: Period (.), Question Mark (?), Comma (,)
3) Linking ideas in narrative: (and, but, so, because)
4) Writing a letter based on the presented format
5) Writing a postcard based on the given format
6) Using adverbs in narratives, for example: suddenly, therefore, at last, at the end, then, now, soon
7) Conjoining sentences by which, who, where, …
8) Descriptive paragraphs
9) Writing a paragraph with examples
10) Coherence
The participants in both groups were asked to write a paragraph based on the given topics and apply the taught writing points.
Topics for Writing Assignments
1) Write a paragraph on the first day of school.
2) Write how movies or television influence people's behavior.
3) Write a paragraph about your first trip and use at least three linking words.
4) Write a letter to a friend and invite him or her to an occasion.
5) Write a postcard to a friend and tell him about a beautiful place.
6) Tell a story and use at least five adverbs in it.
7) Write a paragraph about what you need for a trip, what and why; use at least three connectors.
8) Describe your dream house.
9) Write some important qualities of a good boss.
10) Is homework harmful or helpful?
In each session, the researcher gave the rating scale in printed form to the peer participants and explained and trained them in how to rate the written paragraphs based on the topics and how to provide feedback on their assignments. Then, in the peer-assisted mediation group, participants were divided into pairs. The pairs switched their assignments. At the first stage, a peer as a mediator read the other's paragraph once without correcting it. Then, the peer as a mediator spotted two important errors. After that, the peer gave comments in full sentences in the margin and asked the other to correct them all. Finally, the assignments were switched back, and the pairs talked about and negotiated the mistakes based on the provided feedback.
Rating
On the other hand, in the tutor-intervention group, the tutor as a mediator read the paragraphs once and then corrected the errors by providing feedback, writing comments, and using abbreviations and phrases in order to highlight the problems. At last, the tutor rated the assignments in the same way. When the tutor returned the assignments to the participants, both the tutor and the participants talked about the problems based on the provided feedback.
Negotiation made the participants aware of their errors through the feedback provided by the tutor or peers as mediators.
They tried to overcome their mistakes and avoid repeating them in subsequent assignments. Since the ZPD is the essence of dynamic assessment, a learner who was not able to find writing problems independently could do so through interaction with mediators (peer or tutor). In fact, through mediation, assistance provided by the tutor or peers through interaction, rechecking of previous problems, and negotiation, the actual level of the participants developed. This was evident from checking their previous errors in the following assignments, which moved the students beyond their ZPD. After the treatment sessions, both the posttest of writing and the attitude questionnaire were administered in the two groups to test the research hypotheses.
As DA is a collaborative approach that supplements assessment within the domain of psychology, it focuses on the learner's development in response to intervention, mediation, assistance, reaction or feedback. On this basis, during the present study, the feedback provided by the peers or tutor on the writing difficulties of the learners seemed to be helpful enough to remove their problems and to move them beyond their ZPD.
The rating scale used for the writing posttest belongs to DA and is called the RECIPROCITY rating scale. It was devised by Van der Aalsvoort and Lidz (2002, see Poehner, 2008). This rating scale is based on bidirectional interaction between mediator and learner. It also signifies the role of documentation, which is revealed through comments and written feedback. Because this rating scale is to some extent qualitative, to make it quantitative and to assess clearly, 10 scores were also used for the posttest. Since this rating scale, as a part of DA, focuses on removing errors during a course of study from pretest to posttest, each writing point score was repeated in the other sessions; for example, punctuation was repeated from the first session to the last session and even in the posttest. At the end, all posttests were rated and scored by two raters.
Results
As displayed in Table 1, the K-R21 reliability index for the general language proficiency test was .86. An independent t-test was run to compare the two groups' mean scores on the pretest of general language proficiency in order to show that both groups enjoyed the same level of general language proficiency prior to the administration of the treatments. As displayed in Table 2, the peer-assisted mediation (M = 90.23, SD = 5.48) and tutor-intervention (M = 89.16, SD = 8.06) groups showed almost the same means on the pretest of general language proficiency. The results of the independent t-test (t(58) = .526, P > .01, R = .069, which represents a weak effect size) (Table 3) indicated that there was not any significant difference between the two groups' mean scores on the pretest of general language proficiency. Thus, it can be concluded that the two groups were homogeneous. It should be noted that the assumption of homogeneity of variances was met (Levene's F = .000, P > .01); that is why the first row of Table 3, i.e. "equal variances assumed", was reported.
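A sketch of the group-comparison steps reported here (Levene's test for equality of variances, an independent t-test, and the effect size r derived from t) is shown below with placeholder score vectors rather than the study data.

```python
# Sketch of the group-comparison steps reported above: Levene's test for
# equality of variances, an independent t test, and the effect size
# r = sqrt(t^2 / (t^2 + df)). The score vectors are placeholders.
import numpy as np
from scipy import stats

peer_group = np.array([91, 88, 95, 90, 87, 92])
tutor_group = np.array([89, 86, 93, 88, 90, 85])

levene = stats.levene(peer_group, tutor_group)   # P > .05 -> equal variances
ttest = stats.ttest_ind(peer_group, tutor_group,
                        equal_var=levene.pvalue > 0.05)

df = len(peer_group) + len(tutor_group) - 2      # pooled-variance df
r = np.sqrt(ttest.statistic**2 / (ttest.statistic**2 + df))

print(f"Levene P = {levene.pvalue:.3f}, t({df}) = {ttest.statistic:.3f}, "
      f"P = {ttest.pvalue:.3f}, r = {r:.3f}")
```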
Hypothesis One
A paired-samples t-test was run between the pretest and posttest of writing to see the effect of peer-assisted mediation on the writing development of Iranian intermediate EFL learners. Table 4 shows the peer-assisted mediation group on the posttest (M = 8.81, SD = 0.341) and pretest (M = 7.70, SD = 0.174). As displayed in Table 5, the probability associated with the t-observed value (.000) was lower than the significance level of .05. Therefore, peer-assisted mediation had a significant effect on the writing development of Iranian intermediate EFL learners.
Hypothesis Two
To investigate the effect of tutor-intervention on the writing development of Iranian intermediate EFL learners, a paired-samples t-test was run between the pretest and posttest of writing. Table 6 shows the tutor-intervention group on the posttest (M = 6.77, SD = 0.52) and pretest (M = 6.14, SD = 0.22). As displayed in Table 7, the probability associated with the t-observed value (.142) is higher than the significance level of .05.
Hypothesis Three
An independent t-test was run on the writing posttests of the two groups to test for a significant difference between peer-assisted mediation and tutor-intervention on writing development. As displayed in Table 8, the peer-assisted mediation group (M = 8.81, SD = 1.87) outperformed the tutor-intervention group (M = 6.77, SD = 2.85) on the posttest of writing. The results of the independent t-test (t(58) = 3.27, P < .01, R = .582, which represents a large effect size) (Table 9) indicated that there was a significant difference between the peer-assisted mediation and tutor-intervention groups' mean scores on the posttest of writing. It should be noted that the assumption of homogeneity of variances was met (Levene's F = 3.406, P > .01); that is why the first row of Table 9, i.e. "equal variances assumed", was reported.
A multivariate analysis of variance (MANOVA) was run to compare the two groups' means on the five sections of the pretest questionnaire in order to check their homogeneity in attitude towards writing. Before discussing the results, it should be mentioned that the assumptions of equality of covariance matrices and homogeneity of variances were met. Based on the results in Table 10 (P = .135 > .01), it can be concluded that the assumption of homogeneity of covariance matrices was met. The assumption of homogeneity of variances, tested through Levene's test, assumes that the groups did not differ significantly in terms of their variances. As displayed in Table 11, the Levene's F-values for the attitudinal questionnaire were non-significant (P > .01), which showed homogeneity of variances of the groups on the pretests of attitude. Because the assumption of normality was met, non-parametric data were converted to parametric ones. Based on the results displayed in Table 12 (F(5, 54) = 1.59, P > .01, partial η² = .129, which represents a moderate to large effect size), it can be concluded that there was no significant difference between the means of the two groups on the attitudinal questionnaire before the treatment.
Hypothesis Four
To test whether peer-assisted mediation had any effect on the attitude of Iranian intermediate EFL learners, the mean scores of the pretest and posttest questionnaires were compared within the group. Based on the results, the peer-assisted mediation group showed a more positive attitude towards writing on the posttest (M = 13.92) than on the pretest (M = 11.82) (Table 15) (t(29) = 2.91, P < .05, R = .47, which represents an almost large effect size) (Table 16). It can be concluded that peer-assisted mediation had a significant effect on the attitude of the learners and positively changed their attitude towards the writing skill.
Hypothesis Five
To test the effect of tutor-intervention on the attitude of Iranian intermediate EFL learners, the mean scores of pre and post test of questionnaires were compared within the group.
Based on the results, the tutor-intervention group performed significantly better on the posttest (M = 14.42) than on the pretest (M = 12.28) (Table 17) (t(29) = 3.87, P < .05, R = .58, which represents a large effect size) (Table 18). It can be concluded that tutor-intervention had a significant effect on the attitude of the learners and positively changed their attitude towards the writing skill.
Hypothesis Six
To test for a significant difference between peer-assisted mediation and tutor-intervention on Iranian intermediate EFL learners' attitude towards writing, a multivariate analysis of variance (MANOVA) was run to compare the peer-assisted mediation and tutor-intervention groups' means on the five sections of the posttest questionnaire.
Before discussing the results, it should be mentioned that the assumptions of equality of covariance matrices and homogeneity of variances were met. Based on Table 19 (F = 1.45, P = .113 > .01), it can be concluded that the assumption of homogeneity of covariance matrices was met. The assumption of homogeneity of variances, tested through Levene's test, assumes that the groups did not differ significantly in terms of their variances. As displayed in Table 20, the Levene's F-values for the post-tests of attitude were non-significant (P > .01), which showed the homogeneity of variances of the groups on the post-tests of the attitude questionnaires. It can be concluded that there was not any significant difference between the means of the two groups on the post-tests of the questionnaire. Although both peer-assisted mediation and tutor-intervention had a significant effect on the attitude of the learners toward writing, the difference between their effects was not significant. To show the nature of dynamic assessment more clearly, the ten writing assignments assessed in the two groups during the courses were also compared. Comparing the mean of the means of the 10 writing assignments, the t-value of 2.65 exceeded the t-critical value of 2.00 (α = 0.05). This showed that the peer-assisted group outperformed the tutor-intervention group on the writing assignments during the courses.
Discussion and Conclusion
This study concluded that peer-assisted mediation proved to be more effective for the learners' writing development. It also concluded that although both peer-assisted mediation and tutor-intervention positively affected the attitude of the learners, the difference between their effects was not significant. The applied dynamic assessment, both in the form of peer-assisted mediation and of tutor-intervention, led to positive development in writing skill during the treatment sessions; however, the writing development of the peer-assisted mediation group was higher.
As DA is a collaborative approach that supplements assessment within the domain of psychology, it focuses on the learner's development in response to intervention, mediation, assistance, reaction or feedback. In the present study, the feedback provided by the peers or tutor on the writing difficulties of the learners seemed helpful enough to remove their problems to a great extent. Therefore, the study concluded that both peer-assisted mediation and tutor-intervention were effective for the learners' writing development during the course of instruction; however, peer-assisted mediation was more efficient on the posttest.
Learner attitude has been an essential area of inquiry in language acquisition, and it is related to internal behaviors that can be affected by external ones; this supports the conclusion of the study that both peer-assisted mediation and tutor-intervention positively affected the learners' attitude toward writing. Since attitude is usually affected positively by some external factors, in the present study these factors were peer assistance, tutor assistance, and mediation, in line with feedback provision, which led to a positive change in the learners' attitude toward writing.
The findings of the study were in line with Shamir and Steven (2005), who found that mediators and learners received significantly higher scores on autonomy behavior criteria, which displayed the significant role of peer mediation. The findings were also compatible with the results of Azarizad and Ghahremani (2013); both studies emphasized the influential role of dynamic assessment in writing development.
Regarding the attitude of the learners, the findings were somewhat aligned with the result obtained by Johnson and Douglas (1976) that cooperative learning, compared to individualized learning, resulted in a greater ability to take the affective perspective of others, more altruism, a more positive attitude towards classroom life, and higher achievement.
The conclusion of the study supported studies which have found traditional assessment methods unable to sufficiently support students. DA, with its focus on learning and development, helps assessors identify the participants' evolving writing abilities. All of these studies concluded that feedback was of great help in enhancing learners' writing interest and improving their writing ability. The present study also concluded that a DA approach to EFL learners' writing ability could prove to be useful, and that appropriately designed mediation played a significant role in promoting learners' writing ability and developing their learning potential within the ZPD.
The findings once again supported the idea that dynamic assessment can unify instruction with assessment to provide learners with mediation in order to promote their latent learning potential during the assessment. The conclusion of the study also signified the role of attitude in enhancing the learning process, supported by other studies and by the idea that if learners are reluctant to learn or do not have a positive attitude, they do not produce any result, as language learning is stimulated by attitude.
EFL learners and tutors can benefit from the results of this study. Peers can mediate the learning process of writing through negotiation on writing assignments, participation in pairs or groups, and providing comments and feedback on the writing assignments of their peers. Moreover, the learners' attitude toward writing could change positively using peer and tutor assistance while the course is going on. Attitude can be defined as a set of beliefs developed over time in a sociocultural setting, and having a positive attitude certainly facilitates learning. Therefore, by positively changing the learners' attitude towards writing, even those who are reluctant to learn would become keen on learning.
EFL tutors may promote techniques of dynamic assessment through peer-assisted mediation, making the peers mediators by teaching them how to provide feedback to remove errors. This may also establish a friendly and challenging atmosphere which facilitates the learning process and in turn enhances cooperative and collaborative learning. Teachers should assist the learners and help them to think and generate ideas by providing timely feedback. Peer-assisted mediation can reduce the tutors' responsibility in some cases, so that they are able to manage the class more efficiently. Peer-assisted mediation leads to a decrease in complications in educational settings, enhancement of learners' self-esteem, improvement of their attendance, and encouragement of the learners in problem-solving situations to find more novel solutions. Accordingly, tutors can detect the learners' writing difficulties, and their timely interventions show that they not only help learners write but also encourage them to think. Similarly, the tutors make ongoing feedback available on the writing learning process to support the learners at each stage.
Dynamic assessment needs instruction, intervention, and feedback to promote L2 development by means of the ZPD. How exactly Vygotsky's ZPD triggers a change from dependent performance to the process of maturing and performing independently in all aspects of L2 learning via dynamic assessment still needs further investigation.
Table 2. Descriptive statistics for the pretest of general language proficiency of the two groups
Table 3. Independent t-test between the means of the two groups on the general language proficiency test
Table 4. Paired sample statistics for the peer-assisted mediation group between the pre- and post-tests of writing
Table 6. Paired sample statistics for the tutor-intervention group on the pre- and post-tests of writing. It can be concluded that tutor-intervention had no significant effect on the writing development of Iranian intermediate EFL learners; although the students performed better on the posttest, the difference from the pretest was not significant.
Table 8. Descriptive statistics of writing for the two groups on the posttests
Table 9. Independent t-test of the writing posttests for the two groups
Table 10. Assumption of equality of covariance matrices of the attitude questionnaire before treatment
Table 11. Homogeneity of variances assumption; pretests of attitude
Table 12. Multivariate tests; pretests of the attitude questionnaire of the two groups. Table 13 displays the descriptive statistics for the two groups on the pretests of attitude. The largest difference lay in their attitude towards general writing (MD = 2.67) and the smallest difference belonged to generating ideas (MD = .07).
Table 13. Descriptive statistics for the pretest attitude questionnaires of the two groups. As displayed in Table 14, the Cronbach alpha reliability indices for the pre- and post-tests of the attitudinal questionnaire towards writing were .75 and .77, respectively.
Table 14. Cronbach alpha reliability indices for the pretest and posttest of the attitude questionnaire towards writing
Table 15. Descriptive statistics of the pre- and post-test attitude questionnaire of the peer-assisted mediation group
Table 17. Descriptive statistics of the pre- and post-test attitude questionnaire of the tutor-intervention group
Table 18. Paired samples t-test for the tutor-intervention group between the pre- and post-tests of the attitude questionnaire
Table 19. Assumption of equality of covariance matrices; posttests of the attitude questionnaires of the two groups
Table 21. Multivariate tests for the posttest attitude questionnaires of the two groups
Table 22. t-values of the 10 writing assignments of the two groups
Figure 1. Development of the groups across the 10 treatment-session assignments
Bleeding from band ligation-induced ulcers following the treatment of oesophageal varices: a retrospective case–control study
Background: Band ligation (BL) plays a vital role in the treatment of oesophageal varices; however, the procedure carries a considerable risk of band slippage, variceal site ulcer formation and post-treatment bleeding. Our study aimed to explore the incidence of post-BL ulcer bleeding and to identify possible associated factors. Methods: We retrospectively reviewed the records of patients with oesophageal varices who underwent endoscopic haemostasis by BL at our institution between 2015 and 2020. We statistically compared the patients with post-BL ulcer bleeding and those without (controls). The outcome variable was the development of BL-induced ulcer bleeding. The patients' demographics, clinical and laboratory parameters, BL procedure outcomes and experts' opinions were used as the independent variables and possible associated factors. Results: Of the 4579 eligible patients, 388 (8.5%) presented with post-BL ulcer bleeding. Proton pump inhibitor (PPI) use was associated with a lower risk of post-BL ulcer bleeding (odds ratio, 0.77; 95% confidence interval [CI]: 0.603–0.983). The presence of high-risk stigmata indicated a 1.276 times higher risk of bleeding (CI: 1.024–1.592), and a greater number of varices was associated with an increased risk of post-BL ulcer bleeding (P = 0.007). The use of fewer bands per variceal site was associated with fewer bleeding incidents (P = 0.008), while lower haemoglobin levels were associated with a higher probability of bleeding (P = 0.007). Conclusions: The overall incidence of post-BL ulcer bleeding was 8.5%. The presence of high-risk stigmata and a higher number of varices and bands per variceal site were associated with an increased risk of bleeding. Adequate haemoglobin levels and the use of adjuvant PPIs were protective factors.
Background
Oesophageal varices result from portal hypertension as a frequent manifestation of liver cirrhosis [1].
About 60%-80% of liver cirrhosis patients develop gastrointestinal varices, with oesophageal varices constituting 17% [2,3]. The frequency of developing oesophageal varices is firmly attributed to the severity of liver disease. Up to 40% and 85% of Child-Pugh class A and C liver disease patients, respectively, are affected [4]. Oesophageal varices are more prevalent in males (i.e. 60%) than females, and the risk of developing them increases by 8% in the second year following the diagnosis of chronic liver disease and 30% in the sixth year post diagnosis [2].
Up to 50% of patients with oesophageal varices will present with bleeding at some point. The incidence of bleeding per year is 15% for large varices and 5% for small varices [1]. Following an episode of bleeding from oesophageal varices, 10%-20% of patients will not survive beyond six weeks [1]. Of the patients who survive the first episode of bleeding, 60%-80% will rebleed in less than a year [5]. Rebleeding episodes are associated with a fatality rate of around 33% [5,6].
While a small chance of spontaneous bleeding stoppage has been reported in an older study [7], the current guidelines recommend medical interventions [5]. Common interventions include pharmacological (i.e. somatostatin, octreotide, proton pump inhibitors [PPIs] and beta-blockers) and procedural interventions, such as cyanoacrylate injection, balloon tamponade, embolization coils, sclerotherapy and band ligation (BL), among others [3,5].
BL or rubber-band ligation represents one of the oldest techniques in the treatment of gastrointestinal varices [8]. It is inferior to cyanoacrylate injection and embolization coils in terms of the overall success rate and rebleeding risk [3]. However, BL is cost-effective and less technically demanding than the aforementioned techniques. It is therefore the most commonly used technique globally.
A typical BL procedure involves using an endoscope to suction the variceal site. A rubber band is then wrapped around the base of the sac, thereby strangulating the area from the blood supply. The strangulated variceal sac eventually falls off as a result of ischemia and necrosis, creating a small scar [9]. Seemingly practicable, the procedure carries a considerable risk of band slippage, variceal site ulcer formation and post-treatment bleeding [10,11]. Our study aimed to explore the incidence of post-BL ulcer bleeding and to identify possible associated factors using a large study sample.
Study design and setting
This retrospective case-control study was conducted at the Gastroenterology Department of the Third Xiangya Hospital of Central South University, Changsha, Hunan Province, People's Republic of China. We reviewed all the patient records with oesophageal varices who attended our department between February 2015 and February 2020.
Study population
Our study included participants who met the three main inclusion criteria: (1) older than 18 years, (2) having oesophageal varices from any aetiology and (3) underwent emergent or elective BL as a treatment or prophylaxis. These eligible participants were categorized into either the case group or the control group. The case group comprised participants who met the three inclusion criteria and had endoscopically proven bleeding from a BL-induced ulcer (i.e. post-BL ulcer bleeding) without any other cause of bleeding to explain the symptom. In contrast, the control group did not have endoscopically proven bleeding from a BL-induced ulcer. We excluded patients who (1) underwent BL in combination with other haemostasis procedures such as the use of coils or cyanoacrylate injection, (2) died within the first two days following the BL haemostasis procedure, (3) were lost during the follow-up period, (4) had missing data and (5) did not consent to participate in the study.
Band ligation procedure
The Speedband Superview Super 7 TM Multiple Band Ligator (Boston Scientific) was used to tie (i.e. strangulate) high-risk varices or actively bleeding varicose veins. Adjuvant pharmacological treatments (i.e. PPIs and antibiotics) and other treatments, such as blood transfusion, hydration, balloon tamponade, transjugular intrahepatic portosystemic shunt and transplantation, were performed according to the hospital's guidelines [5] at the time and the gastroenterologist's discretion. All the BL procedures were performed by consultant gastroenterologists, consultant surgeons or specialist trainees under supervision. All the methods were carried out in accordance with relevant guidelines and regulations.
Data collection process
Before collecting the data, we sought approval from the Institutional Review Board of the Third Xiangya Hospital, Central South University, and were assigned approval number 2019-S475.
We utilized the hospital's electronic patient database to identify eligible patients. The data collected from the eligible patients included demographics, clinical and laboratory parameters and BL procedure outcomes. We independently collected and recorded the data in Microsoft Excel spreadsheets before cross-checking the data for correctness.
Outcomes and variables
The outcome variable was bleeding from a BL-induced ulcer, and the independent variables were obtained from the patients' demographics, clinical and laboratory parameters and BL procedure outcomes. Further independent variables were identified by reviewing the literature and obtaining experts' opinions. One BL procedure was considered per participant.
The continuous variables included age, MELD score, duration of admission (in days), time to the first endoscopy (in hours), number of varices, number of bands per variceal site, number of blood units transfused and laboratory investigation results [2,6]. The categorical variables comprised sex (i.e. male or female), Child-Turcotte-Pugh score (i.e. A, B or C), aetiology of cirrhosis (i.e. alcoholic liver disease, nonalcoholic fatty liver disease, viral, alcoholic liver disease plus viral, autoimmune liver disease or other), haemostasis treatment urgency (i.e. elective or emergent) and adjuvant use of PPIs [12,13]. The other categorical variables included high-risk stigmata (i.e. yes/no), history of variceal bleeding (i.e. yes/no), use of antiplatelets (i.e. yes/no), use of anticoagulants (i.e. yes/no), reflux oesophagitis (i.e. yes/no) and comorbidities (i.e. hepatic encephalopathy, spontaneous bacterial peritonitis, hepatorenal syndrome, portal vein thrombus and others) [10,14].
Bias mitigation
Both authors independently performed the data collection followed by data cross-checking to ensure the accuracy of the data. Moreover, we utilized the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) tool [15] customized for case-control studies in the write-up of this manuscript to reduce reporting bias.
Data analysis
We initially analyzed the patients' demographic characteristics using means (with standard deviations) and proportions. For the categorical variables, we used either Pearson's chi-square or Fisher's exact test to compare the case and control groups, depending on the respective tests' assumptions. We performed post-hoc tests with Bonferroni adjustment for the cross-tabulation of significant categorical variables and calculated the odds ratios. An odds ratio of <1 or >1 corresponded with reduced or increased odds, respectively, of a post-BL bleeding event, while an odds ratio of 1 suggested no association.
For the continuous variables, we utilized either the independent t-test or the Mann-Whitney U-test to compare the case and control groups, depending on whether the variables demonstrated normal or non-normal distributions, respectively. We used the Shapiro-Wilk test to assess normality. A P-value of <0.05 was considered statistically significant.
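To make the comparison pipeline above concrete, the sketch below (illustrative only; the counts, group sizes and variable names are hypothetical and are not the study's data) shows how a categorical exposure such as PPI use could be compared between cases and controls with a chi-square test and an odds ratio with a normal-approximation confidence interval, and how a continuous variable could be routed to a t-test or Mann-Whitney U test after a Shapiro-Wilk check.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: rows = PPI use (yes / no), columns = group (case / control).
table = np.array([[250, 3100],   # PPI users: cases, controls
                  [138, 1098]])  # non-users: cases, controls

# Chi-square test of independence (stats.fisher_exact would be the fallback
# when expected cell counts are too small for the chi-square assumptions).
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Odds ratio and a 95% CI via the log-odds normal approximation.
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
se = np.sqrt((1.0 / table).sum())
ci_low = np.exp(np.log(odds_ratio) - 1.96 * se)
ci_high = np.exp(np.log(odds_ratio) + 1.96 * se)
print(f"chi-square p = {p_chi2:.3f}, OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")

# Continuous variable: Shapiro-Wilk normality check, then t-test or Mann-Whitney U.
rng = np.random.default_rng(0)
case_hb = rng.normal(85, 15, 388)      # hypothetical haemoglobin values (g/L)
control_hb = rng.normal(95, 15, 4191)
normal = all(stats.shapiro(s[:500]).pvalue > 0.05 for s in (case_hb, control_hb))
stat, p_val = (stats.ttest_ind(case_hb, control_hb) if normal
               else stats.mannwhitneyu(case_hb, control_hb))
print(f"group comparison p = {p_val:.4f}")
```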
Additional analysis
We explored the number and causes of death in relation to post-BL ulcer bleeding status. We compared the case and control groups in this respect using Fisher's exact test and calculated the odds ratio.
Results
A total of 4579 patients were included in our study. The cohort was followed up for six weeks, and 388 (8.5%) patients presented with bleeding from BL-induced ulcers (i.e. case group), while 4198 (91.5%) patients did not (i.e. control group) (Figure 1). The mean time to the occurrence of post-BL ulcer bleeding was 11.4 ± 2.3 days. Table 1 summarizes the baseline characteristics of the 4579 patients who participated in our study. There was no statistically significant difference between the case and control groups with respect to age, sex, MELD or Child-Pugh scores. However, cirrhotic aetiology demonstrated statistical significance (P = 0.03). Table 2 presents a comparison of the clinical parameters between the case and control groups. The incidence of BL-induced ulcer bleeding was 9.5% (i.e. 318 events) and 8.2% (i.e. 70 events) for elective and emergent BL, respectively. The use of PPIs was associated with a lower risk of BL-induced ulcer bleeding (odds ratio, 0.77; 95% confidence interval [CI]: 0.603-0.983). Patients with high-risk stigmata observed during endoscopy had a 1.276 times higher risk of bleeding (95% CI: 1.024-1.592).
The case group had a higher mean number of varices than the control group (P = 0.007), indicating that a higher number of varices was associated with BL-induced ulcer bleeding in our study. Similarly, more bands were used in the case group (mean, 3.1 ± 0.6) than in the control group (mean, 2.9 ± 0.6), suggesting that the use of fewer bands was associated with a lower incidence of BL-induced ulcer bleeding. Moreover, the case group had lower haemoglobin levels than the control group (P = 0.007), indicating that lower haemoglobin levels were associated with a higher probability of bleeding from BL-induced ulcers.
Mortality
Twenty-seven patients died during the follow-up period, 11 of whom were in the case group. The patients in the case group had a higher risk of death compared to those in the control group (odds ratio, 7.6; 95% CI: 3.508-16.524). Figure 2 is a radar chart summarizing the causes and frequency of death in the case and control groups.
Discussion
BL plays a vital role in the treatment of oesophageal varices. However, the procedure carries a small risk of band slippage, variceal site ulcer formation and post-treatment bleeding. Our study aimed to explore the incidence of post-BL ulcer bleeding and to identify possible associated factors.
After a mean follow-up time of 11.4 ± 2.3 days, the incidence of post-BL ulcer bleeding was 8.5%. This finding is higher than those previously reported by Jamwal et al. (3.6%) [14] and Cho et al. (7.7%) [16]. The differences may be attributable to methodological differences between the studies. Our study included 10 times more participants than that of Cho et al. [16], while Jamwal et al. followed up their participants for twice as long as we followed up ours. Older studies reported up to a 15% incidence of post-BL ulcer bleeding [17]; however, this may reflect the less advanced techniques available at the time compared to the present.
In our study, the mean time for the occurrence of post-BL ulcer bleeding was 11.4 ± 2.3 days. This finding is higher than that previously reported by Cho et al. [16], who described a mean of 8.5 ± 5.1 days. As with the incidence of post-BL ulcer bleeding, the difference could be attributed to the use of different methodologies. Our finding, however, was in line with that of Jamwal et al., who reported a range of 10-13 days. While Jamwal et al. used a longer follow-up time compared to that in our study, both studies utilized large sample sizes.
While there was no association between the bleeding event and BL treatment urgency (P = 0.162) in our study, there was a higher incidence of post-BL bleeding following elective BL (9.5%) than emergent BL (8.2%). This finding is in contrast to those of previous studies [16,18], in which the incidence was higher for emergent BL. While the reason may again lie in methodological differences between the studies, we recommend that robust studies be undertaken to explore this finding.
In our results, the use of PPIs was associated with a reduced risk of post-BL ulcer bleeding (odds ratio, 0.77; 95% CI: 0.603-0.983). This finding is in agreement with those of several other studies [13,19]. The use of PPIs to reduce the size of an ulcer and lower the risk of reflux oesophagitis and post-prophylactic BL bleeding has been established [12,20]. However, a study by Wu et al. [21] contradicted this finding. The difference could be ascribed to the patient population. While our study included patients with both elective and emergent BL, Wu et al. included only emergent BL patients in their study.
The presence of high-risk stigmata was associated with a 1.276 times higher risk of post-BL ulcer bleeding (95% CI: 1.024-1.592) in our study. While previous studies have identified numerous risk factors linked with rebleeding after haemostatic procedures, we could not locate any studies reporting an association between high-risk stigmata and post-BL ulcer bleeding. Our finding may be explained by the fact that weak mucosa (i.e. high-risk stigmata) provides an unstable site for band placement, which could result in premature band detachment and subsequently ulcer formation and bleeding [22].
The case group in our study had a statistically significantly higher mean number of varices, and more bands were utilized per variceal site, compared to the control group. While it might seem logical that the more varices there are, the greater the chance of variceal-related complications, a study by Shaheen et al. [23] demonstrated no relationship between these variables. Notwithstanding, Shaheen et al. used a smaller sample size compared to that in our study. On the other hand, our findings correlated with those of a previous study with regard to the number of bands utilized [24].
The control group in our study had a statistically significantly higher mean haemoglobin level before BL treatment than the case group. This suggests that lower haemoglobin levels are associated with an increased risk of post-BL ulcer bleeding. Singh et al. [25] found that haemoglobin levels decrease spontaneously with the increasing severity of liver disease. This may suggest that our case group had more severe liver cirrhosis and therefore more severe varices than the control group. It could also explain the higher risk of death (odds ratio, 7.6) observed in the case group.
Study limitations and strengths
The present study was retrospective in design and therefore less robust than a prospective study.
It also involved only Chinese patients, which may limit the generalizability of the findings. On the other hand, our study had a large sample size and thus provides more information, less uncertainty and more reliable results.
Conclusion
The overall incidence of post-BL ulcer bleeding was 8.5% in our study. The presence of high-risk stigmata and an increased number of varices and bands per variceal site were associated with an increased risk of post-BL ulcer bleeding. In contrast, the use of adjuvant PPIs and adequate haemoglobin levels were associated with a lower risk of bleeding from a post-BL ulcer.
|
v3-fos-license
|
2019-10-25T13:03:15.255Z
|
2019-10-25T00:00:00.000
|
204860977
|
{
"extfieldsofstudy": [
"Psychology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fbioe.2019.00290/pdf",
"pdf_hash": "c3df1a4f876414c0fedf7cbafba4b9804e33619a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44540",
"s2fieldsofstudy": [
"Art"
],
"sha1": "c3df1a4f876414c0fedf7cbafba4b9804e33619a",
"year": 2019
}
|
pes2o/s2orc
|
Kinematic Analysis of Postural Stability During Ballet Turns (pirouettes) in Experienced and Novice Dancers
Turning is an important but difficult movement, often performed in ballet choreography. Understanding the postural sway during ballet turns is beneficial to both dancers and dance teachers alike. Accordingly, this study evaluated the postural sway angle during ballet turns in female novice and experienced ballet dancers by means of the inclination angle, determined from the center of mass (COM) and center of pressure (COP). Thirteen experienced dancers and 13 novice dancers performed ballet turns (pirouettes). The COM-COP inclination angle was measured during the preparatory, double-leg support, and single-leg support phases of the turn. The novice dancers exhibited significantly greater ranges of the COM-COP inclination angle in the anterior-posterior (AP) and medial-lateral (ML) directions during the preparatory (AP direction, p < 0.001; ML direction, p = 0.035), double-leg support (AP direction, p < 0.038; ML direction, p = 0.011), and ending phases (AP direction, p < 0.001; ML direction, p = 0.024). Moreover, during the preparatory phase, the novice dancers failed to adjust their posture in a timely manner, and therefore showed overshooting errors. Finally, during the ending phase, the novice dancers showed a greater standard deviation of the COM-COP inclination angles and performed continual postural adjustments, leading to a less smooth movement than that of the experienced dancers. In conclusion, it is suggested that novice dancers focus on COM-COP adjustment during both the preparatory and ending phases.
INTRODUCTION
Ballet turns, known as pirouettes, require whole body rotation on the support of a single leg. Pirouettes are a complex movement and require extensive training and accumulated experience. Consequently, the quality of their performance is a function of the dancer's skill level. Although several studies have investigated various characteristics of ballet turns, such as coordination between the upper and lower trunk (Laws, 1978;Owen and Whiting, 1989;Sugano and Laws, 2002;Golomer et al., 2009), the whole body postural sway at different phases of the pirouette movement is still unclear.
It is essential for ballet dancers to maintain dynamic stability of the whole body with an appropriate posture in ballet choreography. A previous study found that the dancers who have the ability to complete multiple-turn pirouettes allow their bodies to make adjustments throughout the turn instead of maintaining a rigid trunk (Lott and Laws, 2012). When performing a pirouette, dancers raise the heel of the supporting leg to the single-leg demi-pointe position (i.e., standing on the ball of the foot) to reduce friction during turning. During the actual turning phase (i.e., single-leg support), executing the turn requires a proper control of the center of mass (COM) over a small base of support. Specifically, a precise vertical alignment of the COM with the center of pressure (COP) is required to prevent unexpected torque and subsequent loss of balance. Regulation of proper torque and the coordination of each body segment is important for a high-skilled turn (Imura and Yeadon, 2010). Achieving this postural control requires the activation of the muscles around the torso (Winter, 2005) and the muscles affect ground reaction force. However, postural stability must be disrupted as the dancer transits from the initial double-leg support state to the single-leg support state. Consequently, the ability of the dancer to rebuild equilibrium during this transition phase is of critical importance in determining the success of the movement.
The COP displacement is an important parameter in assessing postural stability in many static standing tasks (Lyon and Day, 1997;Hiller et al., 2004;McKeon and Hertel, 2008;Catena et al., 2009). Generally speaking, a larger COP displacement indicates a more unstable posture. However, in dynamic tasks, a larger COP displacement does not necessarily indicate a greater instability. Previous studies find that dancers regulate their reaction force to minimize the COM horizontal velocity to make vertical alignment within the base of support as the number of turns increase (Zaferiou et al., 2016a,b). Thus, it is important to consider not only the COP parameters, but also the COM parameters when exploring postural control during dynamic movements. Additionally, many studies have shown that maintaining postural stability becomes more difficult as the distance between the COP and COM in the horizontal plane increases (Hahn and Chou, 2004;Hsue et al., 2009a,b). The relative arrangement of the COM and COP provides an effective means of gauging the risk of falls in the elderly and children with balance dysfunction (Corriveau et al., 2001;Hahn and Chou, 2004;Hsue et al., 2009a,b). A previous study investigated the angle between the vertical vector and the vector of COP to COM in the turning phase of the single and double pirouette and the average angles of COM-COP inclination angle was reported 4 degrees in both single and double turns in their study (Zaferiou et al., 2016b). Furthermore, the angle of the line connecting the COM and the COP with respect to the fixed global vertical axis, the inclination angle, is strongly related to the postural stability condition in dynamic movements (Chen and Chou, 2010). This may further suggest that the greater inclination angle indicates greater risk of falls and less stability in pirouette.
Regarding the differences between different skill levels of dancers in ballet turns, the novice dancers spend more preparation time to initiate a turning movement and lack of head spotting technique during pirouette (Laws, 1978;Lin et al., 2014). Blanco et al. (2019) presents that a higher correlation between ballet jump and regular jump was found as skill level of dancers increases. Studies also present that dancers had better performance during one-leg stance or during walking than the novice dancers or untrained ones, and thus suggested that longer years of ballet training may be a factor leading to experienced dancers having a superior ability in postural control (Lung et al., 2008;Kilroy et al., 2016). However, the literature lacks information regarding differences in postural stability of dancers with different skill levels during ballet turns. Such information of postural stability is of great interest to dance educators in understanding how best to improve postural stability in novice dancers and to design their training programs accordingly. Therefore, the present study compares the postural stability of experienced and novice dancers at different phases of the ballet turn, using the COM-COP inclination angle and COM-ankle inclination angle as performance indicators. In conducting the investigation, it is hypothesized that experienced dancers exhibit smaller ranges of both inclination angles than novice dancers during all phases of the pirouette movement.
Participants
Thirteen experienced female dancers (age: 17.8 ± 3.4 years, height: 159.3 ± 4.2 cm, weight: 51.54 ± 4.66 kg) and 13 novice female dancers (age: 12.0 ± 1.9 years, height: 151.9 ± 11.5 cm, weight: 43.81 ± 9.68 kg) participated in the study. The inclusion criteria for the experienced group were specified as follows: (1) a minimum ballet training history of 6 years (8.7 ± 3.3 years); (2) a minimum of 3 h routine ballet training per week; and (3) the ability to perform double-revolution turns (or more) on single-leg support. The inclusion criteria for the novice group were specified as: (1) a ballet training history of 2-5 years (3.2 ± 1.7 years); (2) a minimum of 1.5 h routine ballet training per week; and (3) the ability to perform complete single-revolution turns on single-leg support. Dancers with vestibular or balance problems, or lower back and lower extremity injuries, were excluded from both groups. Before participating in the study, each participant read and signed an informed consent form approved by the Institutional Review Board of the University Hospital.
Instrumentation
A real-time motion capture system (200 Hz) with eight Eagle CCD cameras (Motion Analysis Corporation, Santa Rosa, CA, USA) was used to collect the three-dimensional (3D) trajectories of a modified Helen Hayes marker set consisting of 43 reflective markers. The markers were placed on the forehead, top head, rear head, sternal notch, xiphoid process, 7th cervical spinal process (C7), sacrum, midpoints of each arm and forearm, lateral epicondyles of both humeri, radial styloid process, third metacarpal head, both sides of the anterior superior iliac spine (ASIS), midpoints of each thigh and shank, greater trochanters, lateral knee joint lines, lateral malleoli, midpoints between 1st and 5th metatarsal heads, and heel posteriors, respectively. The markers were attached either to the participant directly or to the leotard or soft shoes. Two static standing trials with an additional eight markers were conducted before the dynamic trials in order to calculate the joint centers. The additional markers were placed bilaterally on the medial humeral epicondyles, ulnar styloid processes, medial knee joint lines and medial malleoli, and were removed during the dynamic trials. The markers were placed by the same individual for all the trials and participants.
The ground reaction force (GRF) during the pirouette turn was measured using two 60 × 40 cm force plates (9281B, Kistler Instrument Corp., Winterthur, Switzerland) synchronized with the motion capture system and sampled at a frequency of 1,000 Hz. To ensure the accuracy of the GRF measurements, the force plates were physically isolated from their surroundings and the performance area was cleaned and expanded by wooden plates. Moreover, the performance area was covered with vinyl to simulate the floor condition in a typical ballet classroom.
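The article does not state how the 1,000 Hz force-plate signals were aligned with the 200 Hz marker data. One common approach, sketched below purely as an assumption rather than the authors' actual procedure, is to decimate the GRF to the camera frame rate with an anti-aliasing filter.

```python
import numpy as np
from scipy import signal

def downsample_grf(grf_1000hz: np.ndarray, factor: int = 5) -> np.ndarray:
    """Reduce 1,000 Hz force-plate samples to 200 Hz with an anti-aliasing FIR filter."""
    return signal.decimate(grf_1000hz, factor, ftype="fir", axis=0)

# Example: 3 s of synthetic vertical GRF at 1,000 Hz resampled to 200 Hz.
t = np.linspace(0, 3, 3000, endpoint=False)
fz = 500 + 50 * np.sin(2 * np.pi * 1.5 * t)  # hypothetical vertical force (N)
fz_200 = downsample_grf(fz)
print(fz_200.shape)  # (600,) samples, i.e. 3 s at 200 Hz
```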
Procedures
Each participant performed five single-revolution pirouette en dehors using the dominant leg as support (Figure 1). Note that the dominant leg was defined as the leg used by the participants to kick an object, and was found to be the right leg in every case. The reason for choosing the dominant leg as the support leg was because the differences between novice and experienced dancers during ballet turns were greater in the dominant leg support. That means ballet turning with the dominant leg is more difficult for dancers. The marker trajectories and GRF data were recorded continuously throughout the turn. At the beginning of each trial, the participants were requested to adopt the ballet fourth position with the gesture leg behind the supporting leg. In response to an auditory cue, the participants flexed their knees as preparatory movement and raised the gesture leg to the ballet retire position (i.e., the foot of the gesture leg placed near the medial knee joint line of the supporting leg). The participants then performed a single-revolution pirouette en dehors. Upon completion of the pirouette, the participants landed in the ballet fifth position (i.e., the gesture leg placed closely behind the supporting leg) and returned to the upright position.
Data Analysis
The pirouette en dehors movement was subdivided into three phases, namely preparatory, turning and ending (Figure 1 and Table 1). The turning phase was further divided into three subphases, i.e., turning with double-leg support, turning with single-leg support in pre-swing, and turning with single-leg support in mid-swing. The duration of each phase was determined manually from the images captured by the CCD cameras. In analyzing the movement, single-leg support in the pre-swing phase was assumed to begin when the gesture leg came off the force plate, and continued until the retire position was reached. The retire position was determined by the least distance between the toe marker on the gesture leg and the virtual marker representing the medial knee joint line on the supporting leg.
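As a minimal sketch of the retire-detection rule described above, assuming the toe and virtual medial-knee trajectories are available as N x 3 arrays sampled at 200 Hz (the array and function names are illustrative, not the study's own variables), the frame of least inter-marker distance can be found as follows.

```python
import numpy as np

def find_retire_frame(toe_xyz: np.ndarray, medial_knee_xyz: np.ndarray) -> int:
    """Return the frame index at which the gesture-leg toe marker is closest
    to the virtual medial knee joint marker of the supporting leg."""
    distances = np.linalg.norm(toe_xyz - medial_knee_xyz, axis=1)
    return int(np.argmin(distances))

# Example with synthetic 200 Hz trajectories (2 s of data).
rng = np.random.default_rng(42)
toe = rng.normal(size=(400, 3))
knee = rng.normal(size=(400, 3))
retire_frame = find_retire_frame(toe, knee)
retire_time_s = retire_frame / 200.0  # convert frame index to seconds at 200 Hz
print(retire_frame, retire_time_s)
```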
The COP during the pirouette en dehors movement was calculated as:

COP_net = (COP_1 x F_z1 + COP_2 x F_z2) / (F_z1 + F_z2)    (1)

where F_z is the vertical GRF and the lower-case suffixes denote the 1st and 2nd force plates, respectively (Winter, 2005). Note that the COP positions are all expressed with respect to the global coordinate system. The whole body COM was calculated using a 13-segment model consisting of the head-neck, upper arms, trunk, forearm-hands, pelvis, thighs, shanks and feet. The estimated COM of each segment was determined from the 3D locations of the respective markers and the anthropometry data provided in Dempster's model (Winter, 2005). The calculated whole body COM position was then transformed to a local coordinate (pelvic coordinate) frame constructed in accordance with the markers on the sacrum and bilateral ASISs. The COM-COP inclination angles in the anterior-posterior and medial-lateral directions, relative to the pelvic orientation, were calculated from the relative positions of the COM and COP and the vertical height of the COM:

inclination_ML = arctan[(COM_ML - COP_ML) / COM_height]    (2)
inclination_AP = arctan[(COM_AP - COP_AP) / COM_height]    (3)

The medial-lateral inclination angle (inclination_ML, Equation 2) was used to evaluate the sway angle in the frontal plane, while the anterior-posterior inclination angle (inclination_AP, Equation 3) was used to evaluate the sway angle in the sagittal plane [Figure 2; (Chen and Chou, 2010)]. Note that the medial direction was measured toward the side of the supporting leg, while the lateral direction was measured toward the side of the gesture leg.
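The short sketch below illustrates Equations 1-3 numerically, assuming per-frame COM and COP positions are available; the column ordering, sign conventions and function names are our own assumptions, not the authors' processing code.

```python
import numpy as np

def net_cop(cop1_xy, fz1, cop2_xy, fz2):
    """Net COP from two force plates (Equation 1), weighted by each plate's vertical force."""
    return (cop1_xy * fz1[:, None] + cop2_xy * fz2[:, None]) / (fz1 + fz2)[:, None]

def inclination_angles(com_xyz, cop_xy, com_height):
    """COM-COP inclination angles (degrees) in the AP and ML directions (Equations 2-3).

    com_xyz    : (N, 3) whole-body COM positions, columns ordered (AP, ML, vertical)
    cop_xy     : (N, 2) net COP positions on the floor, columns ordered (AP, ML)
    com_height : (N,)   vertical distance from the COP plane to the COM
    """
    ap = np.degrees(np.arctan2(com_xyz[:, 0] - cop_xy[:, 0], com_height))
    ml = np.degrees(np.arctan2(com_xyz[:, 1] - cop_xy[:, 1], com_height))
    return ap, ml

# Tiny synthetic example: 5 frames with the COM drifting forward over the COP.
com = np.column_stack([np.linspace(0.00, 0.04, 5), np.zeros(5), np.full(5, 0.9)])
cop = np.zeros((5, 2))
ap_angle, ml_angle = inclination_angles(com, cop, com[:, 2])
print(ap_angle, ml_angle)
```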
Some participants were observed to use a leaping strategy in the single-leg support phase of the pirouette movement. Consequently, both feet left contact with the force plates, and hence, the COM-COP relationship could not be applied to evaluate the postural sway. Hence, for this particular phase of the movement, the COP position was substituted by the midpoint position between the ankle joint center and the metatarsal marker. In other words, for all participants, the COM-COP inclination was measured in the double-leg support phases of the pirouette en dehors movement (i.e., the preparatory, double-leg support, and ending phases), while the COM-ankle inclination angle was measured in the single-leg support phases (i.e., the pre-swing and mid-swing phases). For all of the phases, the COM and COP angle data were time normalized to 100% with 101 time points. Each participant performed five pirouettes and was awarded a score for each trial by each participant and a judge with extensive ballet choreography experience. The scores were assigned in the range of 1-5; with a value of 1 indicating a poor performance and 5 an excellent performance. For each participant, the trials awarded the three highest scores were taken for subsequent analysis purposes. The ranges of the medial-lateral and anterior-posterior direction of COM-COP angles in the preparatory, turning with double-leg support and ending phases, and the ranges of COM-ankle inclination angles in the single-leg support phases were calculated. Moreover, the maximum inclination angle was detected in the three turning phases (i.e., double-leg support, pre-swing and mid-swing). In addition, the COM-ankle inclination angles at the retire position were also measured. Finally, for each participant, the standard deviations of the COM-COP inclination angles were calculated during the ending phase. All of the variables were analyzed using standard SPSS 17.0 statistical software (SPSS for Windows, Chicago, IL, USA). The Cohen's d effect size was calculated by dividing the mean difference by their pooled standard deviation. Significant differences between the two groups were detected by performing independent t-tests with a significance level of α < 0.05.
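The sketch below illustrates the time normalization to 101 points and the Cohen's d calculation described above, with SciPy used for the independent t-test; the group values are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

def time_normalize(signal: np.ndarray, n_points: int = 101) -> np.ndarray:
    """Resample a phase of arbitrary length to 101 points (0-100% of the phase)."""
    x_old = np.linspace(0.0, 1.0, len(signal))
    x_new = np.linspace(0.0, 1.0, n_points)
    return np.interp(x_new, x_old, signal)

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Mean difference divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * group_a.var(ddof=1) +
                  (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
    return (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

# Hypothetical ranges of the COM-COP inclination angle (degrees) per dancer.
rng = np.random.default_rng(7)
novice = rng.normal(8.0, 2.0, 13)
experienced = rng.normal(5.5, 1.5, 13)
t_stat, p_value = stats.ttest_ind(novice, experienced)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d(novice, experienced):.2f}")
```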
RESULTS
The novice group exhibited significantly greater ranges of COM-COP inclination AP and inclination ML during the preparatory, double-leg support and ending phases. No significant difference was observed in the range of COM-ankle inclination AP for the two groups in the pre-swing phase. However, in the midswing phase, the range of COM-ankle inclination AP for the novice group was significantly larger than that of the experienced dancers. Finally, no statistical difference was noted between the two groups in the range of COM-ankle inclination ML during the pre-swing and mid-swing phases ( Table 2).
The novice dancers showed significantly greater maximum anterior, medial and lateral inclination angles than the experienced dancers during the turning phase (i.e., doubleleg support, pre-swing, and mid-swing). However, the maximum posterior angle was similar for the two groups (Table 3). In the retire position; the novice dancers exhibited a greater range of COM-ankle inclination angle in the medial direction but a smaller range of COM-ankle inclination angle in the posterior direction than the experienced dancers ( Table 4).
DISCUSSION
The present findings show that the novice dancers performed pirouette en dehors with greater inclination angles than the experienced dancers. In particular, the COM-COP inclination AP angles of the novice group were significantly greater than those of the experienced group in the preparatory, double-leg support, mid-swing and ending phases; while the COM-COP inclination ML angles of the novice group were significantly greater than those of the experienced group in the preparatory, double-leg support and ending phases.
Preparatory Phase
During the preparatory phase, both groups of dancers lowered their COM by flexing the hips and knees and dorsiflexing the ankles. The dancers started with a center COM, then shifted the weight forward in preparation for single leg stance on the front leg, causing a slight anterior shift in the COP and also in the COM. However, the COP responded faster than the COM, so that a greater anterior shift of the COP than the COM was found at the transition of initiation. As a result, for the 20-55% period of the phase, the COM-COP inclination angle, which already had a slight posterior inclination (COM was behind the COP), increased slightly in the posterior direction (Figure 3). In response, the experienced dancers adjusted their COM-COP inclination angle toward the anterior direction, in order to prepare for the initiation of the turn (55-65% of the phase duration) by shifting their weight toward supporting leg (front leg). However, the novice dancers were less efficient in adjusting their postural sway, and exhibited overshooting errors (at 65% of the phase duration). As a result, the novice dancers exhibited a greater range of COM-COP inclination AP angle than the experienced dancers. Note that this finding is consistent with that of a previous study, which showed that novice golfers have less accuracy in putting than experts due to a poorer recalibration ability (van Lier et al., 2011). The recalibration refers to the ability of perceiving external changes and adjusting accordingly, and this ability can be achieved through enhanced neuromuscular control training (Kiefer et al., 2011). Experienced dancers often have better perceptual sensitivity, and thus better ability to perceive external changes and response to the changes. Ballet practice with continuous inputs of recalibration (i.e., seeing themselves in the mirror) and perceived changes (i.e., dance educators' cues) influences postural control in dancers.
Turning With Double-Leg Support Phase
For the first 80% of the double-leg support phase, the COM-COP inclination AP angle ( Figure 4A) and inclination ML angle ( Figure 4B) remained relatively stable in both groups. This finding suggests that most of the movement in the double-leg support phase of pirouette en dehors is contributed mainly from motion of the upper extremities and axial rotation of the upper trunk. In a previous study, Kim et al. (2014) showed that dancers apply a twisting motion of the trunk relative to the pelvis, in order to generate additional angular momentum when initiating the turning phase of pirouette en dehors (Kim et al., 2014). Thus, both studies confirm the importance of axial trunk motion in performing ballet turns.
At the end of the double-leg support phase (90-100% of the phase), the COM-COP inclination angle moved toward the anterior direction in both groups. This tendency can be attributed to two main factors. First, a ballet fourth foot position was requested at the beginning of the task [i.e., the gesture leg (back leg) placed behind the support leg (front leg) with a foot distance apart and the heel of the front leg should be in line with the toes of rear leg]. However, following rotation of the trunk through approximately half a turn, the supporting leg (back leg) was positioned behind the gesture leg (front leg). Note that the front and back legs were defined based on the reference of dancer's trunk. To prepare for gesture leg takeoff, the dancers gradually reduced the weight bearing on the gesture leg; causing the COP to move in a posterior direction toward the supporting leg, and hence the COM-COP inclination angle to increase in the anterior direction. Second, the COM position also contributes to the increased anterior COM-COP inclination angle since the COM is still located anteriorly at the end of the support phase due to its slower response than the COP (Winter, 2005). The greater range of COM-COP inclination AP angle, during the double-leg support phase in the novice group, may result from the use of a longer preparatory distance (anterior-posterior distance between feet) in performing the pirouette en dehors (novice: 190.0 ± 68.0 mm, experienced: 164.6 ± 46.1 mm). The greater distance increases the difficulty to maintain balance during turning because the longer distance between COM and COP needs to be overcome. The results presented in Figure 4B show that the experienced dancers responded more quickly to the subtle change in the lateral direction than the novice dancers. In other words, it appears that the experienced dancers have more precise COM trajectory during double stance that less correction was needed during the turning phase than novice dancers.
Turning With Single-Leg Support in Pre-swing Phase
Although no significant difference was observed between the two groups in the range of inclination AP during the preswing phase, the difference in the absolute degrees of the inclination AP angle (Figure 5A) may denote the use of different adjustment strategies. For example, while both groups centralize their COM during the transition from double-leg support to single-leg support, some differences between the two groups may result from an adjustment of the upper extremities and trunk. These differences, and the impact of the upper extremities, require further investigation in a future study. Figure 5B shows that the novice group applied a greater medial COM-ankle inclination angle than the experienced group during the second half of the pre-swing phase. This suggests a different ability to handle the perturbation from the gesture leg in the two groups. Dancers used ankle plantarflexor moment, knee extensor moments, and hip flexor and abductor moments at the push leg (gesture leg) to initiate a turn (Zaferiou et al., 2017). The forces generated from the push leg (gesture leg) may further interfere with the dynamic balance of single-supporting leg in the frontal plane. Therefore, during pre-swing phase, dancers have to cope with the generated force by gesture leg and maintain balance in this transition phase.
Retire Position
At the transition point between the pre-swing and mid-swing phases (i.e., the retire position), the novice dancers showed a significantly smaller absolute COM-ankle inclination AP angle than the experienced group (Table 4). However, while the novice dancers had good postural stability, their retire performance was not as aesthetically pleasing as that of the experienced dancers. In general, the novice dancers showed a larger distance between the toe marker on the gesture leg and the medial knee marker on the supporting leg. In other words, the novice dancers were less accurate in their foot placement; perhaps as a result of a reduced proprioception of the lower extremity or a lower muscle effort by the hip abductors and external rotators (Bronner and Ojofeitimi, 2006). The increased hip muscle strength after ballet training (Bennell et al., 2001) suggests that the experienced dancers, who have a longer duration of ballet training, may have better hip muscle strength to maintain their pelvis stability in the retire position compared with the novice dancers. Furthermore, dancers with ballet training had a greater turnout angle than those who were not trained (Sutton-Traina et al., 2015). Thus, the experienced dancers may have a greater range of hip external rotation angle to maintain a laterally oriented thigh in the retire position.
The novice dancers showed a smaller posterior COM-ankle inclination angle but greater medial inclination angle in the retire position than the experienced dancers ( Table 4) due to a lower lateral thigh orientation of the gesture leg. In the ballet retire position, the thigh of the gesture leg should be as laterally oriented as possible in order to satisfy ballet aesthetics. A lower lateral orientation of the thigh leads to a greater mass transfer in the anterior direction, and thus results in a higher anterior and medial COM-ankle inclination angle in the novice dancers. Therefore, the present results suggest that novice dancers require reinforced stability training on singleleg support, with particular emphasis on appropriate thigh orientation and foot placement of the gesture leg in the retire position. In addition, a greater hip extensor and abductor moment of the support leg in pirouettes suggests that sufficient gluteal muscle strength is necessary for turning movements (Zaferiou et al., 2017).
Turning With Single-Leg Support in Mid-swing Phase
The placement of the gesture leg in the retire position affects the inclination angle in the first half of the subsequent mid-swing phase. Specifically, the lower lateral thigh orientation of the gesture leg in the novice dancers results in a greater anterior COM-ankle inclination angle (Figure 6A). During the second half of the mid-swing phase, the dancers position the foot of the gesture leg ready to perform the ending posture (i.e., ballet fifth position, with the gesture leg closely placed behind the supporting leg). Thus, for both groups, the COM-ankle inclination angle moves toward the posterior direction. As shown in Figure 6B, the experienced dancers showed both a smaller range and a smaller absolute value of the COM-ankle inclination ML angle in the medial-lateral direction than the novice dancers almost throughout the entire mid-swing phase. This finding suggests that experienced dancers have an improved ability to align the COM vertically with the ankle joint while stabilizing their whole body in preparation for landing. Also, dancers who had the ability to execute greater numbers of turns in a pirouette allow body segment adjustments throughout the turn, instead of maintaining the trunk as a rigid body (Lott and Laws, 2012). This is because the coordination and adjustments of the body segments are essential for a high-skilled turn (Imura and Yeadon, 2010) and may again present better postural stability in preparation for landing.
Ending Phase
To satisfy ballet aesthetics, dancers are required to land gracefully at the end of the pirouette. However, the novice group exhibited a relatively large standard deviation of the COM-COP inclination angle in the ending phase (Figure 7); indicating a continuous adjustment of their body posture during landing. A greater postural adjustment was also observed in novice dancers during ellipse-drawing with an unloaded leg (Thullier and Moufti, 2004). A previous study shows that dancers used hip strategy to regain their balance as the difficulty of the task increases (Lott and Laws, 2012). The hip strategy is a way to maintain the COM over the base of support from a relatively large perturbation by using the hip as a fulcrum and bending the trunk. Thus, the novice dancers who had greater inclination angles in this study may take this hip strategy to maintain their balance in the ending phase. These findings, again, suggest that novice dancers are less skillful in performing ballet pirouette.
Application
The present results show that in the preparatory phase of pirouette en dehors, novice dancers had slower responses than experienced dancers in modifying their posture in order to initiate the turning movement. Thus, it is suggested that novice dancers require specific training to ensure correct posture preparation prior to initiation of the turn. Furthermore, novice dancers require additional training to improve their speed of response to postural changes during the transition period from double-leg support to single-leg support, in order to improve their stability in this particular phase of the movement. Finally, in the retire position, a trend of lower lateral orientation of the thigh segment in novice dancers results in a greater anterior inclination angle. Consequently, specific training aimed at improving hip flexibility, hip extensors, abductors and external rotators strength, and foot placement is required.
Limitations
In the present study, the experienced dancers had an age of 17.8 ± 3.4 years, whereas the novice dancers had an age of 12.0 ± 1.9 years. In practice, the age difference between the two groups is not easily avoided since ballet dancers generally begin at an early age; with the result that experienced dancers inevitably tend to be older than novices. In addition, some of the dancers adopted a leaping strategy trying to place their base of support under the COM in the single-leg support phase of the pirouette. While this study attempted to address this tendency by substituting the COP position with the midpoint position between the ankle joint center and the metatarsal marker during single-leg support, the effect of the leaping strategy on the present experimental findings cannot be precisely quantified. In the present study, the actions of the upper extremities were ignored as the initial training of ballet turns focused more on stability of the trunk and lower extremities. However, a previous study has shown that trail arm motion has contributed to generating angular momentum in pirouette en dehors (Kim et al., 2014). Another study looking at a ballet turn with relatively high technique, fouetté turn, suggested the contribution of upper extremities to the torso control (Imura and Yeadon, 2010). Future study should look into the influence of upper extremities on the ballet performance. Finally, the effects of age, gender and ethnicity on the moments of inertia of the different body segments of the dancers were not quantified in the present study.
CONCLUSION
The movement strategies adopted by novice and experienced dancers in performing single-revolution ballet turns differ in terms of the COM-COP inclination angle and the COM-ankle inclination angle. Compared with experienced dancers, novice dancers exhibit overshooting errors and a greater COM-COP inclination angle during the preparatory phase. In addition, novice dancers maintain a lower lateral thigh orientation of the gesture leg (refers to less hip external rotation) in the ballet retire position, which is likely to result in a greater anterior inclination angle during the late pre-swing phase, retire position, and early mid-swing phase. Finally, novice dancers apply continuous postural adjustment during the ending phase compared with experienced dancers.
|
v3-fos-license
|
2021-09-27T19:59:11.376Z
|
2021-08-09T00:00:00.000
|
239622186
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBYSA",
"oa_status": "HYBRID",
"oa_url": "http://www.insightsociety.org/ojaseit/index.php/ijaseit/article/download/12543/3010",
"pdf_hash": "69a6b39197ffebf0088928b4b19306326af80bfc",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44543",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "d4fdf8e8b8b00eb36e99f79872307e2a3df84d58",
"year": 2021
}
|
pes2o/s2orc
|
Tuba Root (Derris elliptica Benth.) Biopesticide Potential Assay to Control Brown Planthopper (Nilaparvata lugens Stal.) on Rice Plant (Oryza sativa L.)
Brown planthopper (Nilaparvata lugens Stal.) is one of the pests of rice plants, attacking from the nursery to the harvest stage. Control carried out by farmers generally relies on synthetic insecticides. To reduce the impact caused by synthetic insecticides, an alternative that can be used to control the brown planthopper is the botanical insecticide tuba root. Tuba root plants have been widely reported to control pests and contain the active ingredient rotenone, which acts as a stomach poison and a selective contact poison. This study aims to examine the ability of tuba root plant part extracts (leaves, branches, and roots) prepared with an organic solvent to control brown planthopper pests in rice plants. The study was conducted from February to April 2019 at the Plant Pest Laboratory, Faculty of Agriculture, University of Riau. The study was conducted experimentally using a Completely Randomized Design (CRD) with three treatments and six replications, giving 18 experimental units. The tuba root plant treatments consisted of three levels: root extract, branch extract, and leaf extract with an organic solvent. The parameters observed were the time of first death of brown planthopper (hours), median lethal time (LT50) (hours), daily mortality (%), and total mortality (%). The results showed that the application of root extract caused a first death at 2.33 hours after application and an LT50 of 17.33 hours after application, with a total mortality rate of 100%. Application of the tuba root botanical insecticide is effective for controlling brown planthopper pests in rice plants because it causes brown planthopper mortality above 80%.
I. INTRODUCTION
Rice plants are one of the primary food sources for Indonesian society. The need for rice continues to increase every year, along with the growing population. Riau is one of the rice-producing provinces in Indonesia. According to the Central Statistics Agency of Riau Province [1], the harvested rice area in Riau Province in 2018 reached 93,755 ha, with a productivity of 3.90 tons/ha and a production of 365,293 tons of milled dry grain.
The problems faced by farmers in producing high grain yields are increasingly diverse. Factors contributing to these problems include the shrinking area of land due to the conversion of rice fields to plantations, as well as pest attacks. One of the pests that attack rice plants is the brown planthopper (Nilaparvata lugens Stal.).
Brown planthopper is the main and most important insect pest of rice in Indonesia. Brown planthopper populations in Asia are grouped into three biotypes: East Asian, Southeast Asian, and South Asian. The brown planthopper found in Indonesia belongs to the Southeast Asian biotype, which is also found in the Philippines, Thailand, Myanmar, Laos, Cambodia, and Malaysia [2].
Brown planthopper attacks rice plants at all stages of growth, from seedling to harvest. These pests cause direct damage by piercing and sucking plant fluids [3] from the phloem tissue [4], thereby reducing chlorophyll and leaf protein content and reducing the rate of photosynthesis [5]. Attacked plants become yellowed and withered, eventually showing symptoms of hopperburn, or drying death [6]. Brown planthopper can cause very severe damage under field conditions in a short time because it is capable of inflicting maximum damage per individual insect and has high fecundity. The third and fourth instar nymphs of brown planthopper have been reported to be the most devastating among the different life stages [2]. Brown planthopper is also a vector of grassy stunt and ragged stunt diseases, whose damage can be greater than that of the brown planthopper attack itself [7], [8].
An attack by four brown planthopper imago per rice hill over a 30-day period can reduce yield by 77%; attacks during the booting stage reduce yield by 37%, and during the grain-ripening period by 28% [9]. Brown planthopper attacks in Indonesia in 2010 and 2011 reached 137,768 ha and 218,060 ha, respectively. The damage included crop failure ("puso"), causing an average yield loss of 1-2 tons/ha [6]. The area affected by brown planthopper attacks in Riau Province alone reached 77.2 ha in 2015 [10]. Therefore, it is necessary to control the brown planthopper pest.
Control commonly carried out by farmers to suppress the brown planthopper population relies on chemical insecticides. However, the continuous and unwise use of chemical insecticides has negative impacts, including environmental pollution, secondary pest outbreaks, death of natural enemies, resistance, and resurgence [11].
Sutrisno [12] reported that brown planthopper in Indonesia has become resistant to the insecticides BPMC, carbofuran, MIPC, and imidacloprid. Melhanah et al. [13] also reported that brown planthopper from Central Java Province, selected in the laboratory for four generations, was resistant to the chemical insecticide fipronil. Brown planthopper has also developed resistance to the chemical insecticide imidacloprid at 13-234 times the recommended dose, and sublethal doses of imidacloprid tend to increase brown planthopper populations in the field [14]. Nanthakumar et al. [15] also stated that brown planthopper resurgence is associated with a shorter development and growth period and an increase in the proportion of macropterous forms. Given these damaging effects, the use of chemical insecticides to control brown planthopper pests must be reduced. Another, more environmentally friendly alternative for controlling brown planthopper pests is needed, namely the use of botanical insecticides.
A botanical insecticide is an insecticide whose primary ingredients come from plants. Botanical insecticides, which are made from plants' active secondary metabolites, can provide one or more biological activities, influence aspects of pest physiology and behavior, and meet the requirements for use in controlling plant pests [16]. One plant with potential as a source of botanical insecticide is the tuba root plant (Derris elliptica Benth.) [17].
Tuba root plants belong to the Fabaceae (Leguminosae) family [18]; their leaves, roots, and branches can be used as botanical insecticides. The effectiveness of a plant as a source of botanical insecticide is influenced by which part of the plant is used [19]. Different parts of plants have different toxicity to pests. Active compounds contained in tuba roots include dehydrorotenone, deguelin, elliptone, and rotenone [20]. Rotenone is distributed in all parts of the tuba root plant, such as the branches, stems, leaves, and, mostly, the roots [21], [22]. The rotenone content of tuba roots is 0.3-12% [23]. Rotenone has been widely reported in agriculture as an insecticide because it acts as a contact poison and a stomach poison against insect pests [23], [24].
The active ingredient rotenone works as a stomach poison and a selective contact poison against insects [23]. As a stomach poison, the insecticide kills the target insect by entering the digestive tract through ingested food. The insecticide enters the digestive organs of the insect, is absorbed by the intestinal wall, is transported to the insect's nerve centre and respiratory organs, and poisons the stomach cells [25].
Several studies have reported the effectiveness of tuba root extracts with water as the solvent in controlling pests that attack several cultivated agricultural commodities. Application of tuba root extract at a concentration of 1 g/l of water can cause 100% mortality of snail pests [26], a tuba root concentration of 30 g/l of water causes 95% total mortality of Paracoccus marginatus mealybug nymphs [27], and application of 0.6% tuba root extract effectively controlled the aphid Aphis glycines by 91.66% [28]. Tuba root extract at a concentration of 10 g/l of water can control the brown planthopper pest in rice plants by 90% in the laboratory [29]. Meanwhile, a study on tuba root extract with an organic solvent was conducted by Kinansi et al., who showed that an ethanol extract of tuba root was effective in killing 50% of Periplaneta americana within 6.505 hours at a concentration of 3 g/100 ml of water [30].
The botanical insecticides were prepared by the maceration method, which aims to obtain plant extracts using a particular solvent, in this case methanol. The use of a methanol solvent aims to accelerate the release of the extractive substances contained in the plant. Atun [31] states that methanol has the advantage of a lower boiling point, so it evaporates quickly at lower temperatures.
Research on the utilization of tuba root as a botanical insecticide with water as the solvent has been widely reported. However, tuba root biopesticides with organic solvents for controlling the brown planthopper pest in lowland rice plants have not been widely reported. Therefore, it is necessary to conduct studies and research related to the potential of tuba root (Derris elliptica Benth.) as a botanical insecticide to control the brown planthopper (Nilaparvata lugens Stal.) in rice plants (Oryza sativa L.).
II. MATERIAL AND METHOD
The study was conducted at the Pest Laboratory of the Faculty of Agriculture, University of Riau, Pekanbaru City, Riau Province. This research was conducted for three months, from February to April 2019. The materials used are rice seeds of IR-42 variety, brown planthopper imago, manure, extracts of tuba root plants (from leaves, branches and roots), methanol, water, sterile aquadest, and 1000 ml volume plastic cups. The tools used in this study are analytical scales, rotary evaporator, thermohygrometer, stir bar, Whatman filter paper, container size 26 x 20 cm, 500 ml hand sprayer and 1000 ml Erlenmeyer, label paper, aspirator, knife, filter, scissors, gauze, roll tissue, flashlight, camera, and stationery.
The study was conducted experimentally using a Completely Randomized Design (CRD) with three treatments and six replications, giving 18 experimental units. The treatments consisted of tuba leaf extract, tuba root extract, and tuba branch extract.
A. Research Implementation 1) Feed Provision: The rice seed used was the IR-42 variety, obtained from the Indonesian Center for Rice Research in Subang, West Java. Seeding of the rice seeds was carried out in 26 x 20 cm plastic containers. The rice seeds were planted in a container filled with water until it was moist and then left until the seeds germinated. Seeding was carried out for 14 days. 14-day-old rice seedlings that already had 3-4 leaves were ready to be used as brown planthopper feed [32].
2) Propagation of brown planthopper: Brown planthopper imago were taken from infested paddy rice plants in Jaya Pura Village, Bunga Raya District, Siak Regency, Riau Province. The brown planthopper were collected directly at the base of the rice stems using an aspirator (Fig. 1). Brown planthopper taken from the field were propagated in a plastic container containing IR-42 rice seedlings aged 14 days after seeding. The rice seedlings were used as brown planthopper hosts and maintained to obtain offspring until the number was sufficient for the treatments, namely 180 individuals; water was added as needed every day. Propagation of brown planthopper was carried out until 1-day-old imago were obtained, and was continued until the F2 generation (within 2 months). 3) Making tuba root extracts with organic solvents: The tuba root plant parts used as sources of extracts were the leaves, roots and twigs, taken from community gardens in Tapung District, Kampar Regency, Riau. The leaves, roots and branches were taken to the Plant Pest Laboratory and then dried for 1 week. The dried leaves, roots, and branches were then cut into small pieces of ± 2 cm and ground using a blender. The fine tuba root flour was filtered using a 0.5 mm mesh sieve (Fig. 2), and the flour obtained was stored. Extraction of the tuba root powder was carried out using methanol (a polar solvent) with the maceration method. In the maceration process, the flour from each tuba root plant part (leaves, twigs and roots) was put into an Erlenmeyer flask with methanol at a ratio of 1:4 and stirred using a magnetic stirrer for 6 hours, then macerated (soaked) for 24 hours [33]. The mixture was then filtered using a Buchner funnel lined with filter paper, and the filtrate was evaporated using a rotary evaporator at a temperature of 78°C to obtain 100% tuba root extract. The extraction results were further diluted using distilled water to obtain each treatment concentration (Fig. 3). The extraction process is shown in the flowchart (Fig. 4).
Fig. 3 The process of making extracts from tuba root plant flour
4) Application of tuba root extract treatments from different plant tissues: Extracts of the tuba root plant parts, at 1% each, were applied 12 hours after the brown planthopper imago infestation. Before the application, a calibration was carried out by filling a 100 ml hand sprayer with water until full and then spraying it evenly on the rice plants. The volume of water left in the hand sprayer was measured, and the volume of water before spraying minus the volume left in the hand sprayer gave the spray volume. Calibration was repeated 3 times and averaged (Fig. 5a). Each tuba root plant part extract, at the treatment concentration of 1% and a spray volume of 4 ml, was sprayed on all parts of the rice plants that had been infested with brown planthopper in the laboratory (Fig. 5b). After the application, observations were made every hour for 72 hours.
B. Observations
1) First time of death of brown planthopper (hour): Observations were made by recording the time at which the first brown planthopper imago died after application in each experimental unit.
2) Median lethal time (LT50) (hour): Observations were made by calculating the time needed for each treatment to kill 50% of the brown planthopper imago population. Observations were made every hour after treatment until 50% of the brown planthopper imago population in each experimental unit had died.
3) Daily mortality (%):
Observations were made by counting the number of brown planthopper imago that died every day after being given treatment. According to Natawigena [34] the percentage of daily mortality can be calculated using the following formula:
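Natawigena's daily-mortality formula is not reproduced in the text; a standard form, assumed here, is

$$\text{Daily mortality (\%)} = \frac{n_t}{N} \times 100,$$

where $n_t$ is the number of imago found dead on day $t$ and $N$ is the number of imago infested per experimental unit.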
4) Total Mortality (%):
Observations were made by calculating the percentage of the total population of brown planthopper imago that died until the end of the observation. According to Natawigena [34] the percentage of total mortality can be calculated using the following formula:
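The total-mortality formula is likewise not reproduced; under the same assumption it takes the form

$$\text{Total mortality (\%)} = \frac{\sum_t n_t}{N} \times 100,$$

where $\sum_t n_t$ is the cumulative number of dead imago over the whole observation period.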
C. Data Analysis
Observational data were analyzed statistically using analysis of variance (ANOVA) with the F test at the 5% level. When a treatment effect was significant, means were compared using the Least Significant Difference (LSD) test.
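A minimal sketch of this analysis pipeline (one-way ANOVA at the 5% level followed by Fisher's LSD) is shown below; the treatment labels and mortality values are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder total-mortality data (%) for three tuba root plant parts
groups = {
    "root":   np.array([100.0, 100.0, 100.0]),
    "branch": np.array([80.0, 76.7, 83.3]),
    "leaf":   np.array([73.3, 70.0, 76.7]),
}

# One-way ANOVA (F test, alpha = 0.05)
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Fisher's LSD, using the pooled within-group mean square from the ANOVA
    k = len(groups)
    n_total = sum(g.size for g in groups.values())
    mse = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / (n_total - k)
    names = list(groups)
    for i in range(k):
        for j in range(i + 1, k):
            a, b = groups[names[i]], groups[names[j]]
            lsd = stats.t.ppf(0.975, n_total - k) * np.sqrt(mse * (1 / a.size + 1 / b.size))
            diff = abs(a.mean() - b.mean())
            verdict = "significant" if diff > lsd else "n.s."
            print(f"{names[i]} vs {names[j]}: diff = {diff:.2f}, LSD = {lsd:.2f}, {verdict}")
```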
III. RESULTS AND DISCUSSION
A. First Time of Death of Brown Planthopper (Hour)
The analysis of variance showed that botanical insecticides from several parts of the tuba root plant significantly affected the initial time of death of brown planthopper on rice plants in the laboratory. The LSD test results at the 5% level can be seen in Table 1, which shows that the botanical insecticides from the roots, branches, and leaves caused initial deaths of brown planthopper that differed significantly among all treatments. The treatments influenced the initial death of brown planthopper within a span of 2.33-7.33 hours, showing that botanical insecticides from the roots, branches, and leaves kill brown planthopper at different times.
The botanical insecticide from the root part caused the earliest death of brown planthopper, at 2.33 hours after application, significantly different from the branch and leaf treatments. In the branch treatment the first death occurred at 4.83 hours after application, significantly different from the leaf treatment, whose first death occurred at 7.33 hours after application. The root-derived insecticide therefore killed brown planthopper faster, presumably because the secondary metabolites of the tuba root plant are more abundant in the roots than in the leaves and branches. According to Kuncoro [21], rotenone is distributed in all parts of the tuba root plant, such as branches, stems, leaves, and roots, but most of it is found in the roots [22]. Rotenone is chemically classified in the flavonoid group [25]. In addition to rotenone, tuba roots also contain deguelin, toxicarol, alkaloids, saponins, and polyphenols [20]. The rotenone content of tuba roots is 0.3-12% [23].
Based on the results obtained, morphological and behavioural changes of brown planthopper imago could be observed after application of the tuba root botanical insecticide. The initial symptoms of death were inactivity (insects that before treatment would fly off when touched instead fell), and imago that died turned black. According to Kardinan [35], the effects of insecticide poisoning on insects are evaluated by observing the physical response and behaviour of test insects after contact with the applied insecticide. In this study, the secondary metabolites in the tuba root botanical insecticide killed brown planthopper as contact poisons and stomach poisons, i.e. through spraying on the rice plants and on the brown planthopper. This is supported by Kardinan [23], who reported that tuba roots act as a contact poison and stomach poison against insect pests.
B. Lethal Mean Time (LT50) (hour)
The analysis of variance of the lethal mean time (LT50) showed that botanical insecticides from several parts of the tuba root plant significantly affected the time needed to kill 50% of the brown planthopper pests. The LSD test results at the 5% level can be seen in Table 2, which shows that botanical insecticides from different parts of the tuba root significantly affected the lethal mean time (LT50) of brown planthopper, with values ranging from 17.33 to 33.50 hours after application. The botanical insecticide from the root part was the fastest treatment in killing 50% of the brown planthopper (17.33 hours after application) and was significantly different from the other treatments. The botanical insecticide from the branch part (LT50 of 26.00 hours after application) was significantly different from the leaves (33.50 hours after application).
The tuba root botanical insecticide from the root part needed less time to kill 50% of the brown planthopper than the other treatments. This is presumably because the amount of secondary metabolite compounds in the root extract is higher than in the branch and leaf extracts. Martono et al. [36] stated that the effectiveness of a plant-based material used as a botanical insecticide depends strongly on the material used. The higher level of secondary metabolites in the root part therefore shortens the lethal mean time for brown planthopper.
C. Daily mortality (%)
The observed percentages of daily mortality of brown planthopper differed among the botanical insecticides from different parts of the tuba root plant. Daily mortality of brown planthopper can be seen in Figure 6, which shows that the application of botanical insecticides from the tuba root plant caused different daily mortality in each treatment. Daily mortality ranged from 33-55% on the first day, 25.33-40% on the second day, and decreased to 5-15% on the third day. On the first day, the botanical insecticide from the root part killed 55% of the brown planthopper, followed by the branch and leaf treatments with mortalities of 40% and 33%, respectively.
The differences in daily mortality are due to the different parts of the tuba root plant used as the source of the botanical insecticide. The treatment from the root part caused the highest daily mortality on the first day because the rotenone content of the roots is higher than that of the branches and leaves, so it works optimally as a contact poison and stomach poison. According to Kardinan [23], the active ingredient rotenone works as a stomach poison and contact poison that is selective to insects.
The brown planthopper is a plant- and sap-sucking insect [5]. A stomach-poison insecticide kills the target insect by entering the digestive tract with food; the insecticide is absorbed by the intestinal wall, transported to the nerve centres and respiratory organs of the insect, and poisons the stomach cells [25]. This is also supported by the lethal mean time (LT50) of 17.33 hours after application for the root treatment, faster than the other treatments, so that daily mortality on the first day was higher.
Daily mortality on the second and third days decreased in all treatments, because the number of surviving brown planthopper decreased and because botanical (plant-based) insecticides are readily biodegradable. Dadang and Prijono [16] note that botanical insecticides have some shortcomings, including low persistence, so repeated applications are needed when the pest population is high.
D. Total mortality (%)
The analysis of variance of the total mortality of brown planthopper showed that botanical insecticides from several parts of the tuba root plant significantly affected total mortality. The LSD test results at the 5% level can be seen in Table 3: the botanical insecticide from the root part killed 100% of the brown planthopper and was significantly different from the branch and leaf treatments. The branch treatment showed total mortality of 80%, significantly different from the leaf treatment, which reached 73.33% by the end of the observation. This reflects the toxicity and the brown planthopper's response to the root-derived botanical insecticide, which has a higher rotenone content than the other treatments. This is supported by Yoon [20], who reported that the most important toxic substance in the tuba root plant is rotenone, alongside deguelin, toxicarol, alkaloids, saponins, and polyphenols.
Gunawaty [37] showed that application of a water-solvent tuba root powder extract at a concentration of 10% caused 98% mortality of the rice stink bug. A tuba root extract at a concentration of 10 g controlled the brown planthopper pest on rice plants by 90% in the laboratory [29]. Kinansi et al. [19] also showed that an ethanol extract of tuba plant roots effectively killed 50% of P. americana at 6.505 hours at a concentration of 3 g/100 ml, while LT90 was reached at 11.372 hours at a concentration of 9 g/100 ml. Adharini [38] also showed that spraying an ethanol extract of tuba plant roots on termites at a concentration of 5% gave results as good as a 10% concentration, because termite mortality reached 100% in both cases.
The application of the botanical insecticide from the root part is effective in controlling brown planthopper, because the total mortality of brown planthopper reached 100%. This result is consistent with Dadang and Prijono [16], who consider a botanical preparation effective as a pesticide if treatment with the extract results in a mortality rate above 80%.
IV. CONCLUSION
The results of this study show that the root part of the tuba root plant is the best to use as a botanical pesticide against brown planthopper pests on rice plants. Application of the root extract caused the first deaths 2.33 hours after application, a lethal mean time (LT50) of 17.33 hours after application, and a total mortality of 100%.
|
v3-fos-license
|
2018-04-03T00:21:10.543Z
|
1996-11-08T00:00:00.000
|
11226270
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.jbc.org/content/271/45/28277.full.pdf",
"pdf_hash": "700e06d9faeec92141b4a71403bc96d54b764291",
"pdf_src": "Highwire",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44544",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"sha1": "1bf91ca686b650eff3e7834eead699d9e9a06b42",
"year": 1996
}
|
pes2o/s2orc
|
Mutations in the B2 bradykinin receptor reveal a different pattern of contacts for peptidic agonists and peptidic antagonists.
The B2 bradykinin receptor, a seven-helix transmembrane receptor, binds the inflammatory mediator bradykinin (BK) and the structurally related peptide antagonist HOE-140. The binding of HOE-140 and the binding of bradykinin are mutually exclusive and competitive. Fifty-four site-specific receptor mutations were made. BK's affinity is reduced 2200-fold by F261A, 490-fold by T265A, 60-fold by D286A, and 3-10-fold by N200A, D268A, and Q290A. In contrast, HOE-140 affinity is reduced less than 7-fold by F254A, F261A, Y297A, and Q262A. The almost complete discordance of mutations that affect BK binding versus HOE-140 binding is surprising, but it was paralleled by the effect of single changes in BK and HOE-140. [Ala9]BK and [Ala6]BK are reduced in receptor binding affinity 27,000- and 150-fold, respectively, while [Ala9]HOE-140 affinity is reduced 7-fold and [Ala6]HOE-140 affinity is unchanged. NMR spectroscopy of all of the peptidic analogs of BK or HOE-140 revealed a β-turn at the C terminus. Models of the receptor-ligand complex suggested that bradykinin is bound partially inside the helical bundle of the receptor with the amino terminus emerging from the extracellular side of helical bundle. In these models a salt bridge occurs between Arg9 and Asp286; the models also place Phe8 in a hydrophobic pocket midway through the transmembrane region. Models of HOE-140 binding to the receptor place its β-turn one α-helical turn deeper and closer to helix 7 and helix 1 as compared with bradykinin-receptor complex models.
Several peptidic B 2 bradykinin antagonists have been identified; these compounds reduce pain and inflammation (8 -12). Peptidic antagonists also reduce death from experimental shock (13)(14)(15). Bradykinin receptor antagonists are potentially useful in the treatment of pain, acute and chronic inflammation, shock, allergic or infectious rhinitis, and asthma. The peptidic antagonists are useful tools; but, to date, these compounds have made poor human therapeutic agents because of their poor bioavailability and formulation difficulties (16,17). The discovery of a nonpeptide antagonist of bradykinin would improve the prospects of treating bradykinin-instigated inflammation, pain, or edema. Thus, we have focused on a molecular understanding of the bradykinin receptor ligand binding site, believing that this information may help in the discovery and design of nonpeptidic antagonists.
We used the results from molecular modeling studies of B 2 bradykinin receptors and NMR studies of peptidic agonists and antagonists to generate a number of models for agonist binding to the B 2 BKR. These models were tested by site-directed mutagenesis of the receptor and by making single amino acid changes in agonist and antagonist peptides. The data reveal a disparity between the way peptidic agonists and antagonists bind to the BKR. We attempt to reconcile the disparities by proposing new models of the BKR-ligand complex.
Materials
The radioligand was synthesized as described (18,19). The product was HPLC-purified and then characterized by HPLC and mass spectrometry as to its chemical purity (≥96%), identity, and specific activity (56.5 Ci/mmol). Media and other cell culture additives were from Life Technologies, Inc. Biochemicals and enzymes were from Boehringer Mannheim. Common reagents were from Sigma.
Methods
Standard molecular biological and cell culture methods were used except as specified (20,21).
Mutagenesis-BKR mutants were made by a modification of the polymerase chain reaction mutagenesis method (22). The mutagenesis used the BglII/PvuII fragment of the rat cDNA (4) as the polymerase chain reaction template. The full mutant receptor was obtained by
replacing the BglII/PvuII fragment of the wild type BKR in the expression plasmid pSRF-159 with the mutant fragment (pSRF-159 is a derivative of the SRα promoter vector pcDLSRα296 (23)). The oligonucleotides used to create mutations incorporated the desired mutation and, when possible, an additional protein coding-silent mutation to yield a new restriction site. The "silent restriction sites" were used to rapidly screen candidate mutant cDNAs for the presence of the mutation. All mutagenesis cassettes were sequenced on an ABI-373A sequencer using the dye terminator method; only results from mutants in which the desired sequence was confirmed are reported.
Transient Transfection and Stable Cell Lines-COS-7 cells, 95-100% confluent (1 × 10⁷ cells/T-162 flask), were washed with phosphate-buffered saline and then transfected using 15 µg of DNA and 300 µg of Lipofectin. The Lipofectin and DNA were mixed prior to the addition to the cells in polystyrene tubes and added, dropwise, to the cells covered by 15 ml of Opti-MEM medium. After 6-7 h, the DNA/Lipofectin/medium mixture was supplemented with 20 ml of growth medium. Twenty-four hours after transfection the cells were split, 1:3. Membranes were prepared 60-72 h after transfection; these membranes usually contained 0.9 fmol of receptors/µg of protein (31,000 receptors/cell).
Stable cell lines of the following receptor mutants were made: wild type, N200A, N204A, F261A, T265A, D286A, and Q290A. Transfection of the CHO cells began as above and used five parts of the appropriate cDNA in the pSRF-159 vector along with 1 part of a neomycin resistance plasmid, pSV2-neo (Stratagene). After 24 h, the cells were split, 1:5, 1:10, 1:20, and 1:40, into medium containing 500 µg/ml G418. The cells were transferred to medium containing 250 µg/ml G418 after 5-7 days and cloned 6-10 days later. After clone expansion, 20-60 clones were assayed for [³H]Phe⁵-HOE-140 binding sites. A single clone was selected for all subsequent work. The clones ranged from 0.44 to 2.32 fmol of receptor/µg of membrane protein (16,000-87,000 receptors/cell).
Saturation binding assays contained the following components in a total volume of 550 µl: 50 µl of [³H]bradykinin (final concentrations were 0.001-20 nM) or [³H]Phe⁵-HOE-140 (final concentrations were 0.001-6 nM), 200 µl of membranes at 0.25 mg/ml, and 300 µl of binding buffer. The nonspecific binding was measured in a separate mixture that contained 100 µl of 5.5 µM nonradioactive ligand dissolved in binding buffer, 1 µM final concentration, and 200 µl of additional binding buffer. Both competition and saturation assays were done in triplicate using 12 concentrations, and usually each assay was repeated at least twice. Incubation, filtration, and scintillation counting were as in Krstenansky et al. (24). Saturation data were calculated by nonlinear curve fitting using a one-site model, B = (B_max × L/(K_d(l) + L)) + ((m × L) + b), where B is the amount of ligand bound, B_max is the maximum specific ligand binding, L and K_d(l) have the same meaning as in Krstenansky et al. (24), and (m × L) + b is a line describing the nonspecific binding component. Nonspecific binding calculated by the above method was found to agree well with the nonspecific binding measured by the addition of 1 µM nonradioactive ligand.
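For illustration, the one-site model quoted above, B = (B_max × L/(K_d + L)) + (m × L + b), can be fitted by nonlinear least squares as sketched below; the ligand concentrations and binding values are invented placeholders, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(L, Bmax, Kd, m, b):
    """Specific (hyperbolic) binding plus a linear nonspecific component."""
    return Bmax * L / (Kd + L) + m * L + b

# Placeholder saturation data: 12 ligand concentrations (nM) and bound ligand
L = np.array([0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 6.0, 10.0, 15.0, 20.0])
B = np.array([0.02, 0.05, 0.15, 0.38, 0.78, 1.25, 1.70, 1.98, 2.15, 2.30, 2.45, 2.60])

popt, pcov = curve_fit(one_site, L, B, p0=[2.0, 0.5, 0.01, 0.0])
Bmax, Kd, m, b = popt
print(f"Bmax = {Bmax:.2f}, Kd = {Kd:.3f} nM, nonspecific slope = {m:.4f}, intercept = {b:.3f}")
```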
Molecular Modeling-Models of the BKR were based on the electron cryomicroscopic picture of bacteriorhodopsin developed by Henderson et al. (25). Each of the seven helices was built using the BUILDER module of INSIGHT. 2 For all GPC-7TM receptors the exact termination point of each helix is not known; thus, an extra turn was added to the extracellular side of each helix in order to prevent missing potential interactions. Our assumption that the binding site lies at least in part in the helical regions was based on structural and mutagenic studies of rhodopsin and mutagenic studies of amine-transmitter GPC-7TM receptors. Each helix was superimposed using the bacteriorhodopsin structure, and strong overlaps were removed by manual manipulation of side chain torsional angles. The ensemble of helices was minimized using the AMBER force field in DISCOVER-2.7, 2 using 100 steps of steepest descent; this was followed by 1500 steps of conjugate gradient minimization. To stabilize the ensemble, short loops (TM-2 to TM-3, TM-3 to TM-4, and TM-6 to TM-7) were added to the ensemble using the loop builder in SYBYL. 3 Longer loops and termini (NH2 terminus, TM-1 to TM-2, and the COOH terminus) cannot be accurately modeled and thus were omitted. The TM-5 to TM-6 loop (19 residues) has been postulated to be two α-helices connected by a spacer; thus, this loop was initially arranged as two helices connected by a turn region with unconstrained regions between the ends of TM-5 and TM-6 and the loop. The ensemble was again energy-minimized, with the atoms in the backbone of the transmembrane helices fixed and all other atoms free.
Subsequent to the energy minimization, the model was subjected to 30 ps of molecular dynamics at 100 K, 15 ps at 200 K, and 75 ps at 300 K. The dynamics were limited by distance constraints on backbone hydrogen bonds within transmembrane helices. The lowest energy conformer occurring during the last 50 ps of dynamics was selected and energy-minimized using 1000 iterations of conjugate gradient minimization.
Bradykinin-receptor complexes were modeled with bradykinin constrained in a COOH-terminal β-turn. This model was manually docked with the receptor using INSIGHT II. The torsional angles of the peptide were then manually adjusted to maximize contacts with receptor residues suggested by the mutagenesis experiments. Finally, the complex was energy-minimized using 100 steps of steepest descent followed by 500 steps of conjugate gradient minimization. The complex minimizations were constrained by a term that preserves the α-helix backbone hydrogen bonds; no other constraints were used. Data recording conditions and methods were as described in Krstenansky et al. (24). Amide temperature coefficients were measured between 0 and 30°C for the samples in aqueous solution and between 25 and 40°C for the samples in Me2SO and SDS solutions. A combination of NOEs, amide temperature coefficients, and spin-spin coupling constants were used to analyze each peptide's conformation.
Peptide Synthesis-The peptides were synthesized using standard Boc chemistry protocols. For HOE-140 and analogs, Boc-Arg(tosylamido)-phenylacetamidomethyl resin was used (0.58 mmol/g; Applied Biosystems, Inc.). Boc-dTic-OH and Boc-Oic-OH were purchased from Synthetech, Inc. After synthesis and cleavage of the linear peptides from the resin using liquid HF containing 5% anisole, the peptides were purified by reverse-phase HPLC. The peptides were then analyzed by analytical HPLC, amino acid analysis, and liquid secondary ionization mass spectroscopy. All peptides were >93% pure as assessed by reverse-phase HPLC. The composition of each amino acid in each peptide was consistent with the expected value, ±15%. The mass spectra were all consistent with the desired structure, ±450 ppm.
RESULTS
To begin our effort to understand the bradykinin receptor's ligand binding site we addressed the question of the relationship of the bradykinin and HOE-140 binding sites. We asked whether BK and Phe 5 -HOE-140 were competitive at the BKR binding site (Fig. 1). It is important to note in these assays that both ligands and the receptor are incubated together for 1-1.5 h to allow for equilibrium. Equilibrium is attained in 45 min (data not shown). The data clearly demonstrate that the maximum binding of each ligand is not depressed by the other ligand, which indicates that the interaction is competitive.
To continue our study of the receptor-ligand complex we examined the structure of bradykinin and HOE-140 by NMR. In aqueous solution BK does not have a single conformation but instead undergoes rapid motion. The NOE cross-peaks from the NOESY spectrum are relatively weak at room temperature but significantly increase at lower temperatures (0°C). There is a relatively strong NOE between the Arg 9 and Phe 8 amide protons, which provides supporting evidence for a type-II β-turn involving residues 6-9. In addition, the Arg 9 amide temperature coefficient is small (it shifts the least with temperature for this peptide, −4.9 ppb/degree kelvin), suggesting that the Arg 9 amide proton is somewhat protected from solvent exchange. The NOE pattern and amide temperature coefficients for the rest of the peptide are inconclusive as to the preferred conformations for these regions. However, it is interesting to note that the Phe 5 amide proton temperature coefficient is the next lowest (−5.7 ppb/degree kelvin, reported by Lee et al. (26)), possibly due to a partial hydrogen bonding interaction. The Phe 5 amide proton has been observed to be involved in a type II β-turn under other solvent conditions (see below).
In Me2SO, BK adopts a fairly stable conformation. The NOE cross-peaks in the NOESY spectrum were significantly more intense than those observed in aqueous solution. The amide temperature coefficients for Gly 4 , Phe 5 , and Arg 9 are all within the range where these protons are significantly protected from solvent exchange. The NOE pattern in combination with these exchange data indicates that there are two type-II β-turns involving residues 2-5 and 6-9. These two β-turns have also been observed by Mirmira and co-workers (27). The Arg 9 amide proton hydrogen bonds to the Ser 6 carbonyl oxygen, and the Phe 5 amide proton hydrogen bonds to the Pro 2 carbonyl oxygen. The orientation of these two β-turns relative to each other is uncertain. The Gly 4 amide proton is protected from solvent exchange, but it is not clear to which carbonyl this amide proton is hydrogen bonding.
In SDS solution, BK adopts a similar conformation to that observed in Me2SO. Again the NOE cross-peaks in the NOESY spectrum were significantly more intense than those observed in aqueous solution. Both of the β-turns observed in the Me2SO solution are present in the SDS solution. As was observed in the Me2SO study, the relative positioning of the two turns is still uncertain. In contrast to the Me2SO study, however, the Gly 4 amide proton is not protected from solvent exchange in this case.
In aqueous solution, HOE-140 undergoes rapid motion. The NOE cross-peaks were relatively weak at room temperature but became significantly more intense at 0°C. At the lower temperature, there is good evidence for a COOH-terminal type II′ β-turn being present a significant amount of the time. [Figure legend: the curves were all collected at 12 concentrations of radioactive ligand, and the concentration of competing drug is indicated by the key; the inset shows the continued linear relationship over a 5-fold expanded scale; the data, 12 points, are fit to a line using a weighting of 1/y⁴ (69).]
The value of the amide temperature coefficient for Arg 9 (−1.0 ppb/degree kelvin, Table II, part A) and the NOE pattern confirm this assignment. The NOE pattern and amide temperature coefficients for the rest of the peptide are inconclusive as to the preferred conformations for the NH2-terminal region.
As was found for bradykinin, the NOE intensities for HOE-140 increased significantly in the SDS solution. HOE-140 forms two β-turns: a type II β-turn between residues 2-5 and a type-II′ β-turn between residues 6 and 9 (type-II′ due to the D-amino acid in position 7). This conformation is very similar to that reported by Guba and co-workers (28).
NMR was used to assess the effect of the single amino acid change on the COOH-terminal hydrogen bond (Table II, part A). As a base line for the NMR studies, we measured the exchange rate of a non-hydrogen-bonded amide, the amide proton of Gly 5 . The temperature dependence of the Arg 9 amide proton exchange ranged from −6.6 to −1.0 ppb/degree kelvin, which is greater than for the Gly 5 amide proton, −8.7 ppb/degree kelvin (Table II, part A). We interpret these results to mean that all of the single residue-substituted bradykinin and HOE-140 analogues retained some Ser 6 -Arg 9 β-turn conformers. Finally, we examined other NOEs in single residue-changed BK and HOE analogs for other alterations of the solution conformation. No significant alterations were present (not shown).
To begin our study of the ligand binding site of the BKR we made physical models and simple computer graphics models by consideration of the bacteriorhodopsin structure, ignoring nonhelical loops (25). In our first efforts we made a simple residue-by-residue side chain replacement of the rhodopsin helical sequences with the bradykinin receptor sequence. The only deviation from the simple one-to-one replacement was the extension of most of the helices (helices were assigned as shown in McEachern et al. (4)) by about one helical turn. This allowed for the uncertainties in assigning helix beginning and ending points and the uncertainties of the electron diffraction model (25).
Our modeling efforts also incorporated the results from extensive site-directed mutagenesis of rhodopsin and β-adrenergic receptors and other members of the G-protein-coupled receptor superfamily (29-31). All of these studies implicate the intrahelical bundle regions of residues in TM-3, TM-5, and TM-7 as important for binding of the catechol agonist and the closely related catechol antagonist to the adrenergic receptor and binding of the retinal to rhodopsin. We initially hypothesized that at least part of the bradykinin peptide interacts with intrahelical bundle regions in a manner similar to catechols and retinal. Thus, we decided to place the well-defined bradykinin 6-9 β-turn in the intrahelical bundle region.
We placed several different models of bradykinin conformation in the intrahelical bundle cavity and performed energy minimization and dynamic simulations on the surrounding receptor structure. Some of the models that emerged are shown in Fig. 2. These computationally derived models of bradykinin bound to the receptor all retained a β-turn and have variable conformations of the amino terminus and variable distances of insertion of the β-turn into the intrahelical bundle. Three of the models, 2, 3, and 4, have an ionic interaction between Asp 286 and the guanido function of Arg 9 . Model 1 has an alternative arrangement with the guanido of Arg 1 interacting with Asp 286 . Models 1, 2, and 3 have the bradykinin β-turn inserted to about the same depth into the transmembrane helices, while in model 4 bradykinin is approximately 1 helical turn, ~5 Å, higher than in the other models. In all of these models Phe 8 of bradykinin occupies a hydrophobic pocket composed of residues Trp 157 from TM-4, Tyr 117 from TM-3, and Phe 261 and Trp 258 from TM-6. This hydrophobic pocket contains residues in positions analogous to others proposed to make a hydrophobic pocket in amine binding receptors (29) and rhodopsin (32). In two of the models (2 and 3) Phe 261 would form one wall of the hydrophobic pocket surrounding an Arg 9 -Asp 286 ionic interaction; this is the same role performed by the phenylalanine at the position analogous to Phe 261 in amine binding receptors.
We asked whether TM-3 played a central role in ligand binding as it does for the amine hormone binding receptors and in rhodopsin. We mutated every hydrogen bond donor or acceptor in TM-3 and mutated the only charged residue, Arg 106 , to alanine. The mutations were made, expressed, and tested for their ability to bind bradykinin, an agonist, and Phe 5 -HOE-140, an antagonist ( Table I). The only charged residue in TM-3, Arg 106 , was changed to alanine with no alterations in bradykinin or Phe 5 -HOE-140 affinity; thus, Arg 106 does not contribute an ionic interaction to either ligand-receptor complex. The rest of the TM-3 mutants (Table I) show that TM-3 is not a major contributor of dipole interactions with bradykinin since every potential hydrogen bond donor or acceptor in TM-3 was changed to alanine with no alteration in agonist or antagonist binding affinity.
The pair of mutations N200A and N204A, located at the top of TM-5, were made because they are in positions identical to two serines implicated as hydrogen bond partners with the catechol hydroxyls of adrenergic agonists. These two mutations reduced bradykinin affinity by 4.8-and 2.7-fold, respectively. These very modest reductions in affinity are much smaller than the 10 -50-fold reductions in affinity seen by loss of the catechol hydroxyl hydrogen bonds in the adrenergic receptor (33). Neither mutation affected the binding of Phe 5 -HOE-140 to the receptor. Thus, the participation of Asn 200 and Asn 204 in a strong receptor ligand hydrogen bond seems unlikely; however, weaker polar interactions remain a distinct possibility. Several of the model receptor-ligand complexes suggested that residues in TM-6 and TM-7 might play important roles in ligand binding. These residues, particularly those in TM-6, included residues in positions analogous to residues implicated by mutagenesis and by biophysical and cross-linking studies in amine hormone receptors and in rhodopsin (32,34,35). Thus, we choose several of the sites in TM-5, -6, or -7 for further mutagenesis studies (Table I).
Each mutation in TM-5, -6, or -7 that altered bradykinin affinity was paired with another mutation three or four residues (one helical turn) away that also caused an alteration in bradykinin binding affinity. Thus, altered affinities were observed for N200A and N204A, F261A, T265A, and D268A, and D286A and Q290A (Table I and Fig. 3A). Furthermore, Q262A and S246A, mutations between Phe 261 and Thr 265 and thus on the opposite side of a helix, caused no alterations in bradykinin affinity. These results suggest that the prediction of a helical conformation in these regions is warranted.
F261A and T265A caused the largest reductions in bradykinin affinity, 2000- and 240-fold, respectively, and may reflect the importance of these two residues in the agonist-ligand interaction (Table I). These two residues are one and two helical turns above and on the same helical face as a conserved tryptophan, Trp 258 , which is part of the FXXCWXP motif found in TM-6 of most G-protein-coupled receptors (36). Aromatic residues in positions analogous to Phe 261 and Phe 258 have been shown to interact with the ligand by forming a hydrophobic pocket around the organic amine hormones or retinal in rhodopsin in their cognate receptors (32,34,35).
D268A and D268N caused modest reductions, 3.5- and 3.6-fold respectively, in bradykinin binding affinity (Table I). Asp 268 is one turn above and on the same helical face of TM-6 as Thr 265 and Phe 261 . Thus, the combined results of these mutations suggest strongly that this part of TM-6 is indeed helical and composes part of a ligand binding site. Asp 268 could be participating in a salt bridge between the receptor and ligand. Such an interaction may occur, since both arginines at positions 1 and 9 of bradykinin are important for ligand-receptor interactions (Table II). However, a 3.5-fold reduction in affinity corresponds to a free energy of interaction of only 0.8 kcal/mol, which is small for a full ionic interaction. Furthermore, changing Asp 268 to asparagine causes a 3.6-fold reduction in affinity. Since neither alanine nor asparagine can form an ionic bond, the data do not support an ionic interaction with Asp 268 .
D286A and Q290A caused reductions in bradykinin binding affinity of 60-and 11-fold, respectively. The 60-fold reduced affinity caused by D286A corresponds to 2.5 kcal/mol, which may reflect a moderately strong ionic interaction with the ligand. The smaller effect of the Q290A mutation may reflect a simple hydrogen bond. Several bradykinin structure activity studies, including those shown in Table II, had suggested that the guanido functions of bradykinin, Arg 1 and Arg 9 , might be important for receptor-bradykinin interaction, presumably by forming a salt bridge or bridges. In two cases, Asp 286 was changed to a residue other than the simple loss of side chain change, D286A; these were D286R and D286K. These changes decrease the affinity of BK with D286R by 2000-fold and with D286K by 5000-fold; however, our ability to interpret these findings is limited by the large change in side chain volume and charge of these mutations. DISCUSSION The observed competition of bradykinin and HOE-140 on the B 2 receptor is in accord with other reports of competitive behavior (37,38) but in contrast to several reports that in certain biological assays HOE-140 appears to be noncompetitive (38,39). The finding of noncompetitive behavior for HOE-140 was surprising in view of the structural relationship of HOE-140 and earlier peptidic antagonists that are competitive (39). In the functional assays, the rapid response of the biological systems precludes attainment of equilibrium, thus making it difficult to determine the competitive nature of a ligand. The binding assay is not confounded by regulation of the receptor or postreceptor biological events. Thus, HOE-140 and bradykinin bind to the B 2 BKR in a competitive, mutually exclusive manner, and we speculate that the previous reports of noncompetitive behavior may have resulted from slow equilibrium of HOE-140 or other postbinding regulatory events.
The solution conformation of bradykinin has been extensively studied by NMR spectroscopy (26, 27, 40-49). Here we systematically studied the conformation of BK, HOE-140, and several single residue-altered analogs in aqueous solution and aqueous solution with SDS micelles. In both aqueous solution and an aqueous solution containing SDS micelles there is considerable evidence that bradykinin and HOE-140 have a type-II β-turn between residues 6 and 9. Both molecules may also have a weaker tendency to form a type II β-turn between residues 2 and 5. Thus, we believe that the 6-9 β-turn is important in the receptor-bound conformation of the peptides. Indeed the addition of the dTic-Oic pair was designed to strengthen this β-turn of HOE-140 (50, 51).
FIG. 2. Two-dimensional representations of four models of BK bound to the B 2 BKR. In models 1 and 2 the NH2 terminus and COOH terminus of bradykinin are close together, while models 3 and 4 are models in which the NH2 and COOH terminus are further apart. Each model has a β-turn between residues 6 and 9 (dotted line). Other intraligand hydrogen bonds are also represented by dotted lines. Proposed contacts between the ligand and the receptor are shown by solid arrows (dipole-dipole and ionic interactions), and solid lines with no arrowheads are potential aromatic-aromatic interactions. Because of the nature of these two-dimensional drawings, the distances between residues are not accurately depicted, e.g. the distance between the NH2 and COOH termini of model 4 is much smaller than the drawing depicts. The inset is a perspective low-resolution view of bradykinin in the model 3 or 4 conformation docked into the receptor.
TABLE I Affinity of bradykinin, an agonist, and Phe 5 -HOE-140, an antagonist
Mutant receptors were expressed transiently in COS-7, and the affinity was measured by saturation analysis. The columns are symmetrically arranged around two central columns that show 1) the mutation, designated by the one-letter amino acid code for the wild type residue followed by the residue position and the one-letter amino acid code for the mutant residue, and 2) the helical location of the mutation. The outer two sections of data are for bradykinin (left section) and for Phe 5 -HOE-140 (right section). These sections have three columns; these are, starting from the center, 1) the ratio of the mutant receptor affinity to the wild type receptor affinity (M/WT), 2) the affinity, K d , of the mutant receptor ± S.D. of the measurement, and 3) the number of replicate determinations of the affinity, N. N.T., not tested. The shading highlights the transmembrane location of each mutation. The underlined ratios are more than 3-fold altered.
** Because of their lowered affinity, the competition method was used to determine the affinity of the designated mutants. * Mixtures of data from transient transfection of COS-7 cells and stable CHO cell lines. No difference in the affinity of the receptor expressed in COS-7 or CHO cells was detected for wild type or mutant receptors (not shown). FIG. 3. A, summary diagram showing the positions and magnitudes of mutations that affect bradykinin binding affinity (circles) and HOE-140 binding affinity (triangles). B, stereoimage of one model of bradykinin binding to the BKR incorporating data from these studies. A two-dimensional representation of this model is shown in model 3 of Fig. 2. The identities of the helices are shown for the right image. The plane at the top shows the approximate extracellular boundary of the membrane.
The NMR results confirm the presence of the β-turn, and the micelle results suggest that hydrophobic-hydrophobic interactions, such as those that may occur when the peptides interact with helical regions of the bradykinin receptor, will stabilize the turn.
The modeling of GPC-7TM receptor structure is a qualitative effort due to the small amount of structural data available about this class of receptors. Our models were designed to give ideas for further mutagenesis experiments and possible binding modes of agonists and antagonists. Our models differ somewhat from those previously described (52,53). The major difference centers on each group's choice for the ends of the α-helices, particularly TM-6 and TM-7. We chose to include one extra turn in the α-helices because of the uncertainties inherent in current methods of assigning helix start and end points; as a consequence of the differences in helical end points, our models suggest that Asp 286 is at the end of TM-7, while Kyle et al. predicted it is in a loop. We believe that the Gln 290 mutation four residues or one helical turn from Asp 286 suggests a helical conformation for this region. Current computation algorithms tend to preserve secondary structure present in the starting structure. This sensitivity of the current computational methods to initial starting conditions contributes greatly to the qualitative nature of the models.
Our results in combination with those of Novotny et al. (54) and Maradone and Hogan (55) provide mutations of all of the acidic residues in extracellular domains 3 and 4 and the extracellular half of TM-4, TM-5, TM-6, and TM-7 of the B 2 receptor. Only residues Glu 199 (4.7-fold reduced in bradykinin affinity (54)), Asp 268 (3.5-fold reduced in affinity, Table I) and Asp 286 (60-fold reduced in affinity, Table I) are potentially interacting by an ionic bond. Of these potential interactions, only the D286A mutation seems strong enough (2.5 kcal/mol) to warrant clear assignment as an ionic interaction.
To further test the idea that Asp 286 provides an ionic interaction with bradykinin, we attempted a double switch, wherein Asp 286 was changed to arginine, D286R, and the Arg 9 of bradykinin was changed to aspartic acid, [Asp 9 ]BK, or glutamic acid, [Glu 9 ]BK (Table II, part B). We examined [Asp 9 ]BK and [Glu 9 ]BK for the presence of a COOH-terminal hydrogen bond by measuring the proton exchange rate of the residue-9 amide hydrogen. The COOH-terminal β-turn is apparently still intact, since the exchange rates for [Asp 9 ]BK and [Glu 9 ]BK were −5.0 ppb/degree kelvin and −6.3 ppb/degree kelvin, respectively; thus, the poor affinity of [Asp 9 ]BK is not due to the loss of the COOH-terminal turn (Table II, part B). These data do not confirm the Asp 286 -Arg 9 ionic interaction hypothesis. However, other alterations in receptor-bound peptide structure or receptor structure could prevent Arg 286 from a successful interaction with [Asp 9 ]BK; thus, a residue 286 ionic interaction is still possible.
One of the most notable results of our mutagenesis study is the lack of effect of most of these mutations on antagonist, Phe 5 -HOE-140, binding (summarized in Fig. 3A). The simple side chain elimination mutations (changes to alanine) that affected bradykinin affinity had no effect on Phe 5 -HOE-140 affinity (Table I). For example, T265A, D286A, and Q290A caused large decreases in BK binding affinity, while Phe 5 -HOE-140 affinity was unaltered. F261A did cause a small 5.8-fold effect on Phe 5 -HOE-140 affinity, but the magnitude of this effect is much less than its effect on BK affinity, 2200-fold.
The apparent separation between mutations that affect bradykinin binding from mutations that affect HOE-140 binding led us to make a series of single site changes in bradykinin and HOE-140 (Table II, part A). As in the receptor mutation studies, bradykinin and HOE-140 did not change in parallel. For example, removal of the arginine 9 side chain from bradykinin, alanine 9 bradykinin, caused a 27,000-fold decrease in affinity toward the receptor; in contrast, removal of the arginine side chain from HOE-140 caused only a 7-fold decrease in receptor affinity. All of the peptides were found to have NOEs and Arg 9 amide exchange times consistent with a COOH-terminal β-turn. These results suggest that the affinity changes observed for the mutant ligands are not due to a gross ligand conformation change caused by the change in peptide sequence. Other mutation pairs that illustrate the different behavior of agonists and antagonists include [Ala 6 ]BK and [Ala 6 ]HOE-140 or [Ala 1 ]BK and [des-Arg 0 -Ala 1 ]HOE-140. Possibly the most surprising result is the effect of the removal of the ninth residue, [des-Arg 9 ]BK and [des-Arg 9 ]HOE-140. These molecules cannot form a 6-9 β-turn, since residue 9 is missing; yet, [des-Arg 9 ]HOE-140's affinity is only reduced 41-fold on the rat receptor and 450-fold on the human receptor. In contrast, [des-Arg 9 ]BK is reduced in affinity 37,000-fold on the rat receptor and 150,000-fold on the human receptor.
A model was built incorporating the receptor mutagenesis data and the ligand analog and NMR data; this model summarizes our best hypotheses of how BK and HOE-140 interact with the receptor and is shown in Fig. 3. This model is very similar to two-dimensional model 3 in Fig. 2. The main features of the model include an Asp 286 -Arg 9 salt bridge and a hydrophobic pocket composed of Tyr 117 , Trp 157 , Phe 261 , Trp 261 , and Trp 258 ; these residues surround phenylalanine 8. The amino terminus of bradykinin emerges from the top of the receptor in the TM-4 and TM-5 region (Fig. 3B). Involvement of the top of TM-4 and TM-7 in agonist (BK) binding is also supported by the observed inhibition of bradykinin binding by antibodies directed to the amino-terminal half of extracellular domain 3 and the carboxyl-terminal half of extracellular domain 4 (56). We are still uncertain of the relationship of the NH2 and COOH termini of bradykinin but tend to favor molecules with the termini close together because of the observation that ε-amino cyclokalladin is an agonist, albeit with 1000-fold reduced affinity (24,57).
Comparisons of the mutations that affect HOE-140 binding with those that affect BK binding suggest that the binding pocket for HOE-140 might be one turn deeper in the transmembrane regions and closer to TM-7 than is the bradykinin binding site. In these models, a HOE-140 hydrophobic pocket composed of Phe 261 and Trp 258 from TM-6 and Tyr 297 from TM-7 may contribute to interactions with the large β-turn-forcing residues, Tic and Oic. Models built with this arrangement for the HOE-140 binding pocket also suggest that Asp 286 does not contact Arg 9 of HOE-140 as it does in the BK models. One possible arrangement has the Arg 1 of HOE-140 making a salt bridge with Asp 286 . However, the recent observation that antibodies directed to the carboxyl half of extracellular domain 4 and the top of TM-7 are unable to inhibit HOE-140 binding suggests that HOE-140 interacts weakly with the top of TM-7 (56).
The lack of parallelism of receptor mutations and ligand changes on the BK and Phe 5 -HOE-140 affinity was unexpected. Furthermore, a number of alanine amino acid replacements were found that affected HOE-140 binding but not BK binding; these included F254A (6.8-fold), Q262A (4.1-fold), and Y297A (6.6-fold). The size of these effects is small compared with the size of mutations that affect BK binding. These differences in magnitude suggest that BK makes a few specific and strong contacts to the receptor, while HOE-140 makes many weaker contacts on the receptor. These data imply that although these similar compounds are classified as competitive, it cannot be automatically assumed that they make the same atomic interactions within the receptor; furthermore, the data suggest that each may have distinct binding interactions. These interpretations suggest that in spite of the structural similarity of bradykinin and HOE-140 and their competitive behavior, they may occupy sites that are one helical turn removed from each other. This observation may fit with the widely noted observation by medicinal chemists that agonists and antagonists frequently have divergent structure-activity relationships (58,59).
The results and the derived models for bradykinin's interaction with its receptor present a picture that is somewhat different from the picture presented for substance P, angiotensin, and interleukin-8 when these GPC-7TM receptors contact their ligands (60 -64). These ligands all have several contacts with extracellular regions of the receptor, albeit for most of these cases the number of transmembrane mutations that have been made is small. It is possible that the bradykinin receptor ligand binding site represents an exceptional peptide receptor whose site is more akin to the small amine and rhodopsin ligand pockets; however, a more unifying view might be that peptides bind to their receptors using the extracellular sequences to gain most of their binding energy (affinity) and specificity while a small part of the ligand interacts with helical regions near TM-5, TM-6, and TM-7. This kind of a model has been presented for the carboxyl end of the 74-residue C5a (65) and is implicated for the amino end of chemokines, such as interleukin-8 and monocyte chemotactic peptide-1 (66 -68). These ligands all appear to have many high affinity contacts with extracellular regions and a small region of the ligand, which contributes minimal binding energy but confers the ability to be an agonist. These results and the size of bradykinin, 9 amino acids, suggest that the bradykinin binding site may be similar to other peptide binding sites using both extracellular sequences and helical sequences for affinity; however, BK may have more of the affinity-determining sequences in helical regions because of its small size and compact conformation.
|
v3-fos-license
|
2019-07-25T22:59:44.366Z
|
2019-07-24T00:00:00.000
|
198493766
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://genesandnutrition.biomedcentral.com/track/pdf/10.1186/s12263-019-0643-9",
"pdf_hash": "ca556eada8b3ef26333c714107f1a8f897d1f84e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44545",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"sha1": "ca556eada8b3ef26333c714107f1a8f897d1f84e",
"year": 2019
}
|
pes2o/s2orc
|
Mechanism of continuous high temperature affecting growth performance, meat quality, and muscle biochemical properties of finishing pigs
Background The mechanism of high ambient temperature affecting meat quality is not clear till now. This study investigated the effect of high ambient temperature on meat quality and nutrition metabolism in finishing pigs. Methods All pigs received the same corn-soybean meal diet. A total of 24 Landrace × Large White pigs (60 kg BW, all were female) were assigned to three groups: 22AL (fed ad libitum at 22 °C), 35AL (ad libitum fed at 35 °C), and 22PF (at 22 °C, but fed the amount consumed by pigs raised at 35 °C) and the experiment lasted for 30 days. Results Feed intake, weight gain, and intramuscular fat (IMF) content of pigs were reduced, both directly by high temperature and indirectly through reduced feed intake. Transcriptome analysis of longissimus dorsi (LM) showed that downregulated genes caused by feed restriction were mainly involved in muscle development and energy metabolism; and upregulated genes were mainly involved in response to nutrient metabolism or extracellular stimulus. Apart from the direct effects of feed restriction, high temperature negatively affected the muscle structure and development, energy, or catabolic metabolism, and upregulated genes were mainly involved in DNA or protein damage or recombination, cell cycle process or biogenesis, stress response, or immune response. Conclusion Both high temperature and reduced feed intake affected growth performance and meat quality. Apart from the effects of reducing feed intake, high temperature per se negatively downregulated cell cycle and upregulated heat stress response. High temperature also decreased the energy or catabolic metabolism level through PPAR signaling pathway. Electronic supplementary material The online version of this article (10.1186/s12263-019-0643-9) contains supplementary material, which is available to authorized users.
Introduction
Continuous high temperature, especially in summer in tropical or subtropical countries, is an unfavorable factor in swine production. Persistent exposure to high temperature decreases feed intake [1], growth performance [2], and meat quality [3,4]. For example, high temperature reduced intramuscular fat (IMF) deposition [5,6] and changed the pH value of the meat [3,7]. These alterations were traditionally believed to result from the decreased feed intake, but more recent studies have shown that heat stress per se also reduced metabolic rates and altered post-absorptive metabolism, regardless of decreased feed intake [8,9]. Heat stress also changed expression of some genes related to oxidative metabolism, through adaptive physiological mechanisms, to reduce thermogenesis [7,10]. Although inferior meat quality induced by heat stress has been intensively studied, the molecular mechanisms underlying the pathophysiological changes remain to be defined. As heat stress does decrease feed intake, it remains unclear which changes are dependent on, and which are independent of, low nutrient availability. Gene expression profiles of longissimus muscle (LM) have been used here to further examine how heat stress affects meat quality and the extent to which it is dependent on reduced feed intake.
Animals and diets
A total of 24 Landrace × Large White pigs (60 kg BW) were assigned randomly to three groups with eight pigs per group. Pigs were housed individually in wire cages (139 × 67 × 115 cm) in one of three temperature-controlled rooms at the Institute of Animal Science, Guangdong Academy of Agricultural Sciences. After adaptation for 1 week, pigs were treated as follows: a control group of pigs had ad libitum access to feed at 22°C (RT) (22AL); the heat-stressed group had ad libitum access to feed at 35°C (35AL); and pair-fed pigs at 22°C (22PF) were fed the amount consumed by pigs raised at 35°C. All pigs were fed twice daily with a typical corn-soybean meal-based diet for finishing pigs (the diet formula is available as Additional file 1: Table S1). The temperature in one room was increased from 22 to 35°C within approximately 2 h and then remained at 35°C for the 30-d experimental period; the other rooms were maintained at 22°C. Water was available ad libitum for all pigs.
Feeding, slaughter procedure, and sample collection
All aspects of the experiment, including transport and slaughtering procedures, were carried out in accordance with the Chinese guidelines for the use of experimental animals and animal welfare [11] and approved by the Animal Experimental Committee of the Institute of Animal Science, Guangdong Academy of Agricultural Sciences. Pigs were weighed at the beginning and feed intakes were recorded to determine average daily gain (ADG), average daily feed intake (ADFI), and feed to gain ratio (F:G). At the end of the experiment, all pigs were fasted for 14 h and then slaughtered; whole LM samples were taken immediately after slaughter for testing meat quality, and samples at the last thoracic vertebra were collected and stored at −80°C for subsequent analyses.
Meat quality measurements
The pH of muscle samples was measured at 45 min, 24 h, and 48 h postmortem using a pH meter (HI 8242C, Beijing Hanna Instruments Science & Technology, Beijing, China). Meat color CIELAB values (L*, lightness; a*, redness; b*, yellowness) were determined on the transverse surface of the LM after it was cut and allowed to bloom for 45 min, at the same times postmortem, using a colorimeter (CR-410, Minolta, Suita-shi, Osaka, Japan), as described by Mason et al. [12]. Shear force was measured using an Instron Universal Mechanical Machine (Instron model 4411; Instron, Canton, MA, USA), and drip loss was measured as the weight loss over 24 h at 4°C in a plastic bag, also as described by Mason et al. [12]. The IMF content was measured by petroleum ether extraction of powdered, lyophilized muscle using the Soxtec 2055 fat extraction system (Foss Tecator AB, Höganäs, Sweden), according to the Association of Official Analytical Chemists method [13].
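Drip loss from the bag method described above is conventionally expressed as a percentage of the initial sample weight; this formula is an assumption, as the source does not state it explicitly:

$$\text{Drip loss (\%)} = \frac{W_0 - W_{24}}{W_0} \times 100,$$

where $W_0$ is the sample weight before suspension and $W_{24}$ the weight after 24 h at 4°C.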
RNA extraction and target labeling
Total RNA was isolated from LM using the TRIzol reagent (Invitrogen, Carlsbad, CA, USA) and purified using a QIAGEN RNeasy® Mini Kit (QIAGEN, Chatsworth, CA, USA) according to the manufacturer's instructions. The RNA quality was checked with a spectrophotometer (ND-1000, NanoDrop Technologies, Wilmington, DE, USA). Each RNA sample was annealed with a primer containing a poly-dT and a T7 polymerase promoter. Reverse transcriptase produced primary and secondary cDNA strands. T7 RNA polymerase was then used to create cRNA from the double-stranded cDNA by incorporating cyanine-3-labeled CTP according to the labeling kit recommendations (Agilent Technologies, Santa Clara, CA, USA). The quality of the labeled cRNA was again verified.
Hybridization, scanning, and feature extraction
A total of 24 pigs were used; four cRNA pools of two pigs each per treatment were hybridized to 12 microarrays (4 pools × 3 treatments) using a Gene Expression Hybridization Kit (Agilent) at 60°C for 17 h on whole pig genome arrays (Pig 4x44K Gene Expression Microarrays v2, Agilent). The arrays were washed, stabilized, and dehydrated as recommended, then examined on a G2565BA microarray scanner (Agilent), and the data were compiled using Feature Extraction software (FE).
Data analysis of the mRNA microarrays
Array normalization and error detection were carried out using Silicon Genetics' GeneSpring GX v11.5.1 (Agilent) via the enhanced FE import preprocessor, and the data were then normalized using the supplied algorithms. A final quality control filter was applied to eliminate transcripts with excessive biological variability, and GeneSpring was then used to reveal genes differing significantly in expression among the three treatments. Differentially expressed genes with statistical significance between pairs of treatments were those passing Volcano Plot filtering (fold change ≥ 2.0, P value ≤ 0.05).
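The Volcano Plot filter described above keeps transcripts that change at least two-fold in either direction with P ≤ 0.05. The pandas sketch below reproduces that filtering logic under assumed, generic column names; it is not the GeneSpring output format.

```python
import pandas as pd

# Hedged sketch of the fold-change / P-value filter; column names are assumptions.
def volcano_filter(df, fc_col="fold_change", p_col="p_value",
                   fc_cut=2.0, p_cut=0.05):
    up = (df[fc_col] >= fc_cut) & (df[p_col] <= p_cut)          # upregulated
    down = (df[fc_col] <= 1.0 / fc_cut) & (df[p_col] <= p_cut)  # downregulated
    return df[up | down].copy()

# Usage: degs = volcano_filter(pairwise_comparison_table)
```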
Real-time PCR analysis
Total RNA, extracted as described above, was used to prepare cDNA (four pools of two pigs per treatment) with PrimeScript RT reagent kits (Takara, Otsu, Japan) according to the manufacturer's instructions. Real-time PCR was carried out on a CFX Connect Real-Time System (Bio-Rad, Hercules, CA) using the iTaq Universal SYBR Green Supermix (Bio-Rad Laboratories) with gene-specific primers. The primer sequences (listed in Table 4) were designed using Primer Express 5. The PCR protocol was as follows: denaturation at 95°C for 30 s, followed by 40 cycles of 95°C for 20 s and 60°C for 20 s, then 72°C for 30 s. The relative abundance of transcripts was expressed as 2^(−ΔΔCt), where the Ct (threshold cycle) value for each reaction was used to calculate gene expression, which was then normalized to the abundance in the 22AL control animals, given a value of 1.
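The 2^(−ΔΔCt) quantification used here normalizes each target gene to a reference gene and then to the 22AL calibrator. A minimal sketch of that arithmetic, with hypothetical Ct values and a placeholder reference gene, is given below.

```python
# Minimal sketch of the 2^-ΔΔCt method; the Ct values below are hypothetical.
def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    d_ct_sample = ct_target - ct_reference               # ΔCt of the sample
    d_ct_calibrator = ct_target_cal - ct_reference_cal   # ΔCt of the 22AL calibrator
    dd_ct = d_ct_sample - d_ct_calibrator                 # ΔΔCt
    return 2.0 ** (-dd_ct)                                # fold change vs. calibrator (= 1)

print(relative_expression(24.1, 18.0, 25.0, 18.2))
```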
Statistical analysis
The effect of treatment was examined by one-way ANOVA and, where appropriate, means were compared using Fisher's least significant difference (LSD) post-hoc tests (SPSS v22). Differences were considered significant at P < 0.05. Microarray analyses were conducted using GeneSpring GX v11.5.1 via the enhanced FE import preprocessor and normalized using the supplied algorithms. Significant GO ID pathways were selected at an adjusted P value < 0.05.
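As an illustration of the statistical workflow (one-way ANOVA followed by Fisher's LSD), the sketch below approximates the SPSS procedure with scipy; it uses pairwise t-tests based on the pooled within-group variance and is not the authors' script.

```python
import numpy as np
from scipy import stats

# Sketch of the analysis described above: one-way ANOVA across the three groups,
# followed by Fisher's LSD (pairwise t-tests on the pooled within-group variance).
def one_way_anova_lsd(groups, alpha=0.05):
    f_stat, p_value = stats.f_oneway(*groups)
    out = {"F": f_stat, "p": p_value, "pairwise": {}}
    if p_value < alpha:          # LSD is only protected when the ANOVA is significant
        n_total = sum(len(g) for g in groups)
        k = len(groups)
        mse = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups) / (n_total - k)
        for i in range(k):
            for j in range(i + 1, k):
                diff = np.mean(groups[i]) - np.mean(groups[j])
                se = np.sqrt(mse * (1 / len(groups[i]) + 1 / len(groups[j])))
                p_ij = 2 * stats.t.sf(abs(diff / se), n_total - k)
                out["pairwise"][(i, j)] = p_ij
    return out
```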
Growth performance and meat quality
The performance and meat quality data are summarized in Table 1. The final BW, ADFI, and ADG of the control pigs (22AL) were much higher than those of the heat-stressed (35AL) or pair-fed pigs (22PF) (P < 0.05), but the backfat thickness at the first rib in the control was lower than that in the heat-stressed or pair-fed pigs (P < 0.05), the latter having the lowest F:G ratio. There were no differences in loin eye area or leaf fat weight among the three groups (P > 0.05). The pH value of the LM in 35AL pigs was higher than that in the 22PF pigs at 45 min postmortem (P < 0.05), higher than that of the other two treatments at 24 h (P < 0.05), and exceeded that of the 22AL pigs at 48 h (P < 0.05). There were no differences in drip loss at 24 h or 48 h postmortem, nor in meat color (a, b, and L* values), except that L* at 45 min postmortem was lowest in the 22AL controls (P < 0.05). Both high temperature and feed restriction decreased the IMF content in the LM (P < 0.05). The shear force value was greatest in the 22PF pigs (P < 0.05), with no difference between the 35AL and 22AL pigs.
To define the biological pathways potentially associated with the effects of heat stress on meat quality and growth performance, KEGG pathway analysis was performed (Table 3). Compared with the 22AL controls, the significantly upregulated pathways in 35AL were associated with ribosomes, the p53 signaling pathway, and the adipocytokine signaling pathway, whereas the downregulated pathways were related to arginine and proline metabolism, glycolysis/gluconeogenesis, PPAR signaling, the phagosome and endocytosis pathways, the citrate cycle (TCA cycle), the dilated cardiomyopathy pathway, fatty acid metabolism, valine, leucine and isoleucine degradation, pyruvate metabolism, and the leukocyte transendothelial migration pathway. Compared with the 22AL controls, metabolism of xenobiotics by cytochrome P450, bile secretion, and the PPAR signaling pathway were upregulated in 22PF pigs, and dilated cardiomyopathy, phagosome, and antigen processing and presentation were downregulated. Selected differentially expressed genes (Table 2) were chosen to verify that their relative transcript abundance determined by the microarrays could be confirmed by real-time PCR (qPCR).
With the exception of PPARGC1, there were high correlations between the two methods (Fig. 2). Based on these results, 11 proteins (DECR1, FABP3, HSPA1L, LPIN1, PFKM, TNNT3, TNNI1, PPARGC-1, MSTN, FASN, and PCK1) were selected, and the correspondence between protein abundance, determined by Western blots, and transcript abundance, determined by qPCR, was examined. The results (Fig. 3) showed that the relative transcript abundance of 13 genes (DECR1, FABP3, GHR, HSPA1L, LPIN1, LPL, MB, MSTN, PFKM, TNNT1, TNNT3, FASN, and PCK1) correlated well with the mRNA array results, whereas PPARGC1 correlated poorly. Among the 11 proteins, the DECR1, FABP3, LPIN1, and TNNT1 protein levels were downregulated and the PCK1 protein level was upregulated in the 35AL and 22PF groups compared with the control, in agreement with their mRNA results, but the TNNT3, MSTN, and FASN protein levels in the 35AL and 22PF groups did not differ from those in the 22AL group, which did not agree with their mRNA results. The HSPA1L protein level was upregulated and PFKM was downregulated in 35AL compared with the 22PF and 22AL groups, consistent with their mRNA results. Both the PPARGC-1 protein level and the mRNA level were downregulated in the 35AL group, which is not consistent with the mRNA array result (Table 4).
Discussion
The present study showed that high temperature decreased growth performance, which is consistent with others' findings [1][2][3]. Le Dividich et al. [6] reported that high temperature decreased feed intake and that the decreased energy intake was the main reason for the decreased IMF deposition. The feed restriction caused here by exposure to high temperature also decreased IMF content, indicating that the effect of high temperature on IMF content is explained mainly by the decreased feed intake. In addition, many genes related to lipid metabolism, such as FABP3, LPL, SCD, DECR, and LPIN1, were downregulated, which decreased fat synthesis and deposition. High temperature also increased the pH value and decreased the glycogen content in muscle, which is consistent with Hu et al. [15]. The shear force in the feed-restricted group was greater than in the control group because of the lower intramuscular fat. As high temperature decreased calpastatin expression [16], possibly enhancing postmortem proteolysis and offsetting any decrease in tenderness from the lower IMF content, the shear force in the high-temperature group was not different from that of the control.
In this study, mRNA array analysis of the LM was used to explore the mechanisms underlying the effects of high temperature on meat quality and growth performance. High temperature has previously been shown to cause DNA damage [16,17] and protein misfolding [18], and heat stress increases the expression of heat shock proteins, especially the heat shock protein 70 family [19,20]. In the present study, the GO analysis showed that high temperature per se upregulated the expression of genes involved in the cellular stress reaction, including GTF2H4, ERCC1, HSPA1L, POLL, mutL homolog 1 (MLH1), GADD45A, and CDKN1A. HSPA1L belongs to the 70-kDa heat shock proteins (Hsp70s), which have housekeeping functions and assist a wide range of folding processes, including the folding and assembly of newly synthesized proteins, refolding of misfolded and aggregated proteins, membrane translocation of cellular and secretory proteins, and control of the activity of regulatory proteins [21,22]. In addition, heat shock proteins also adjust the redox balance and relieve oxidative stress by inhibiting the activity of NADPH oxidase [23], one of the important sources of the oxidative stress caused by high temperature. Compared with the 22PF pigs, the 35AL animals had increased expression of the heat shock protein genes HSPCB (heat shock protein 90) and HSPH1. The GTF2H4 and ERCC1 genes are also involved in DNA repair-related biological processes, reflecting an enhanced self-protecting mechanism under high temperature stress. In addition, genes of ncRNA and rRNA metabolism, protein translation, and translation elongation were also upregulated, which might be required for the synthesis of heat stress proteins.
Heat stress also affected the expression of genes related to muscle structure. Many studies have reported that high temperature decreases carcass muscle weight [2,24,25]. High temperature here decreased the relative expression in the LM of the actin genes ACTA2, ACTC1, and ACTA1, myosin heavy chain IIb (MYH4), the myofibrillar connecting protein desmin (DES), the protein connecting the membrane and myofibrils (SGCA), the troponins TNNT3 and TNNI1, the caveolins (CAV2, CAV3), β-tropomyosin (TPM2), and integrin β1 (ITGB1), all involved in muscle structure, fiber development, or muscle contraction. In addition, the high-temperature treatment induced a fiber transformation from slow type to fast type and downregulated the CaMK and PPAR signaling pathways, whereas limited feed intake did not. The change in fiber type is largely caused by reduced CaMK and PPAR cell signaling. For example, Gibala et al. [26] reported that over-expression of PPARγ induced a cell transformation from type II to type I. High temperature therefore decreased muscle development here, independently of the reduced feed intake.
High temperature also affected the expression of genes related to energy metabolism. Rinaldo and Le Dividich [7] previously reported that high temperature decreased key enzymes of oxidative and glycolytic metabolism, including lactate dehydrogenase, beta-hydroxyacyl coenzyme A dehydrogenase, citrate synthase, and cytochrome oxidase. Weller et al. [10] then reported that high temperature (34°C) decreased the gene expression of NADH dehydrogenase 1 (ND1), NADH dehydrogenase 2 (ND2), the cytochrome C oxidase complex, ATP5, ATP6, and other components of the electron transport chain. These results implied that long-term exposure to high temperature reduces the level of energy metabolism in muscle. Consistent with those findings, high temperature here decreased the expression of phosphofructokinase (PFKM), glucose-6-phosphate isomerase (GPI), pyruvate kinase (PKM2), triosephosphate isomerase 1 (TPI1), isocitrate dehydrogenase 3 (IDH3A), and other genes involved in carbohydrate and glucose metabolism and related biological processes. The downregulated expression of these genes indicates that glycolytic and oxidative metabolism was decreased under the high-temperature environment, and this would be expected to affect meat quality. High temperature also downregulated genes involved in the TCA cycle, likely indicating reduced total energy metabolism in muscle, and it reduced the expression of genes involved in ATP and amino acid synthesis and decomposition. These changes likely provide an adaptive, reduced thermogenic response to heat stress. In the amino acid synthesis and decomposition category, aspartate aminotransferase (GOT) and creatine kinase (CKM) both relate to intramuscular energy, creatine, and ADP phospho-exchange, and therefore to the generation of the hydrogen ions that determine postmortem pH [27]. Kwasiborski et al. [28] reported that high levels of CKM in muscle increase the rate of muscle metabolism in the early stages after slaughter, so that ultimate pH is lower. High temperature here decreased CKM expression, which may be one reason for the higher pH. In connection with the decreased IMF content, high temperature altered the expression of genes involved in lipid metabolism. For example, the catalytic subunit of calcineurin (PPP3CB), signal transducer and activator of transcription 5A (STAT5A), and the ATP synthase beta subunit (ATP5B) were downregulated in animals exposed to high temperature, as were fatty acid binding protein 3 (FABP3), long-chain acyl coenzyme A dehydrogenase (ACADL), medium-chain acyl coenzyme A dehydrogenase (ACADM), palmitoyl coenzyme A oxidase (ACOX1), ketoacyl-CoA thiolase (LCTHIO), and carnitine acyltransferase 1B (CPT1B), which are involved in fatty acid transport and beta-oxidation-related processes. A decline in oxidative capacity may be another adaptation to high temperature that reduces thermogenesis. Wu et al. [29] found that high temperature decreased IMF content by decreasing the activities of acetyl coenzyme A carboxylase (ACC) and fatty acid synthase (FAS); it also inhibited beta-oxidation of fatty acids by decreasing hydroxyacyl-CoA dehydrogenase (HAD) in skeletal muscle. FASN, involved in fatty acid synthesis, was upregulated in the mRNA array data, but qPCR and Western blots in the present study showed no differences in FASN protein among the three groups, possibly because fatty acid synthesis was attempting to compensate for the very low IMF content.
In the 22PF animals, the effects of reduced intake were separated from heat stress per se; genes involved in fatty acid decomposition, including DECR1, LPIN1, FABP3, and CPT1A, were downregulated, and the GO analysis showed that muscle cell development and cellular component biogenesis were also downregulated. Some genes related to lipid metabolism, such as apolipoprotein C3 (APOC3), cytochrome P450 2C34 (CYP2C34), cytochrome P450 2E1 (CYP2E1), sulfotransferase 2A1 (SULT2A1), phosphatidylserine decarboxylase proenzyme (PISD), serum retinol binding protein 4 (RBP4), and PEP carboxykinase 1 (PCK1), were not influenced by restricted feed intake. Although the responses to nutrient levels, vitamins and proteins, and the lipid metabolic process were upregulated, growth performance and IMF were lower than in the 22AL group, which implies that nutrient and energy intake did not meet the pigs' needs; the pigs attempted to synthesize fat and other nutrients, but intake was too low to allow it. By contrast, APOC3, CYP2E1, SULT2A1, and PISD were downregulated in animals exposed to high temperature, and the GO analysis showed that the heat stress response, DNA damage, and negative regulation of the cell cycle were upregulated, implying that high temperature damaged cell function so that these nutrients could not be synthesized; pigs in the 35AL group did not respond to the nutrient level but did respond to the heat stress. This finding clearly shows differences between high temperature and limited feed intake in their effects on transcripts related to energy metabolism in muscle.
The most important signaling pathways, identified by KEGG, are discussed below.
p53 signaling pathway
Skeletal muscle atrophy is reflected in the number of nuclei [30], which is reduced mainly through apoptosis and the p53 signaling pathway, activated by stress signals including DNA damage, oxidative stress, and the induction and activation of oncogenes. The p53 protein regulates the transcriptional activation of many genes, mainly involved in cell cycle arrest, cell senescence, and apoptosis [31]. Nitta et al. [32] reported that high temperature arrests cell cycles in a manner dependent on the p53 signaling pathway. Exposure of pigs to high temperature in the present study increased the expression of genes of the p53 signaling pathway, apoptosis, and skeletal muscle atrophy; the p53 signaling pathway was upregulated only in the 35AL group compared with the 22AL and 22PF groups, which implies that its upregulation was a direct temperature effect. Genes such as p21 (CDKN1A), Bid, and Fas were involved, and signaling complexes including FADD, Caspase 8, and Caspase 10 are known to induce apoptosis [33]. Yamada et al. [34] reported that human fast muscle fibers may be more susceptible to apoptosis, and the same is true for the rat [35]. It can therefore be deduced that high temperature induced more fast-type muscle fibers through upregulation of the p53 pathway.
PPAR signaling pathway
PPAR-alpha and PPAR-delta, members of the nuclear hormone receptor superfamily [36], influence the transcription of genes of fatty acid oxidation, such as FABP3, CPT1, and ABCA1, in skeletal muscle [37][38][39]. In the present study, genes of the PPAR signaling pathway were downregulated in pigs exposed to high temperature, as were ACADL, ACADM, ACOX1, ACSL1, and CPT1B, all related to fatty acid beta-oxidation. The pathway was also downregulated in 35AL compared with 22PF, but with a very high FDR (FDR = 1.64); the FDR of the PPAR signaling pathway in 22PF vs. 22AL was also very high (FDR = 0.78). It was deduced, therefore, that high temperature decreased energy metabolism through the PPAR signaling pathway.
Adiponectin signaling pathway
Adiponectin is an adipocytokine that plays an important role in insulin sensitivity and in glucose and lipid metabolism, especially in skeletal muscle; it activates the AMPK, p38 MAPK, and PPAR signaling pathways [40] and affects fatty acid metabolism. The adipocytokine signaling pathway is also involved in adaptive responses to heat stress [19], being an important regulator of energy homeostasis, food intake, and insulin action. In the present study, the adiponectin signaling pathway was downregulated in the muscle of pigs exposed to high temperature, reducing energy expenditure as an adaptive response to heat stress.
Combining the results of GO and KEGG pathway analyses, the genes downregulated by reduced feed intake were mainly involved in muscle contraction, muscle development, muscle system process, or differentiation, while the upregulated genes were mainly involved in response to nutrient levels or extracellular stimuli. Downregulated genes caused by high temperature were mainly involved in muscle structure and development, energy, or catabolic metabolism, while upregulated genes were mainly involved in DNA or protein damage or recombination, or processes of cell cycle, biogenesis, and stress and immune responses. The comprehensive analyses of the transcriptome of porcine skeletal muscle provided here indicate some of the molecular basis for direct effects of exposure to high temperature on traits related to meat quality, distinct from indirect effects resulting from depressed feed intake.
Conclusions
Both high temperature and reduced feed intake affected growth performance and meat quality. Apart from the effects of reduced feed intake, high temperature had a direct effect on growth performance and meat quality that involved negative regulation of the cell cycle, protein and DNA damage, cell apoptosis, and the heat stress response. High temperature also decreased energy and catabolic metabolism through the PPAR signaling pathway (Fig. 4).
Parameter Estimation of SAR Signal Based on SVD for the Nyquist Folding Receiver
The Nyquist Folding Receiver (NYFR) is a novel ultra-wideband (UWB) receiver structure that can realize wideband signal monitoring with fewer components. The NYFR induces a Nyquist zone (NZ)-dependent sinusoidal frequency modulation (SFM) by a modulated local oscillator (LOS), and the intercepted linear frequency modulated (LFM) synthetic aperture radar (SAR) signal will be converted into an LFM/SFM hybrid modulated signal. In this paper, a parameter estimation algorithm is proposed for the complicated NYFR output signal. According to the NYFR prior information, a chirp singular value ratio (CSVR) spectrum method based on singular value decomposition (SVD) is proposed to estimate the chirp rate directly before estimating the NZ index. Then, a fast search algorithm based on golden section method for the CSVR spectrum is analyzed, which can obviously reduce the computational complexity. The simulation shows that the presented algorithm can accurately estimate the parameters of the LFM/SFM hybrid modulated output signal by the NYFR.
Introduction
Ultra-wideband (UWB) receivers [1] need to achieve a high-probability reception of input signals over an extremely wide bandwidth. In addition to using ultrafast sampling methods or devices, UWB can be implemented with sub-Nyquist sampling techniques, such as modified Gegenbauer system [1,2], Xampling [3,4], and analog-to-information convertors (AIC) [5][6][7][8]. Among them, the AIC architecture based on compressive sensing (CS) theory allows sampling of only useful information at lower sampling rates without large hardware scale and has proven to be an effective sampling method for sparse signals. The Nyquist Folding Receiver (NYFR) has recently been proposed as a special type of AIC that performs radio frequency (RF) spectrum compression via a periodic non-uniform local oscillator (LOS) and induces a Nyquist zone (NZ)-dependent modulation on the received signal. Unlike the Gegenbauer polynomial method [1,2] or most other CS schemes [7][8][9][10] that require full signal reconstruction, the NYFR substantially preserves signal structure, and the information reconstruction can be relatively simple using conventional signal analysis methods without sparse recovery.
As the most mature signal of synthetic aperture radar (SAR), linear frequency modulated (LFM) signal has the advantage of good concealment, wide bandwidth, low peak power, and strong anti-interference, which makes it widely used in a variety of radar systems [11]. Parameter estimation of LFM signal has been a typical issue in the field of radar and reconnaissance. Conventional processing algorithms can be introduced to the NYFR to deal with intercepted SAR signals and avoid computationally expensive reconstruction. However, existing research on the NYFR have mainly focused on CS methods [12][13][14], and algorithms of signal parameter estimation for the NYFR output signal still require further research due to its special signal type. In conventional NYFR utilizing sinusoidal frequency modulation (SFM) LOS, the intercepted LFM signal will be converted into LFM/SFM hybrid modulated signal. This brings difficulties for back-end data processing.
In Reference [15], analytic wavelet transform was used for NYFR information recovery from the hybrid modulated signal. However, the performance of this algorithm needs further improvement. In Reference [16], a spectrum peak method was proposed for the NYFR, that is, to construct a multi-channel structure and estimate the NZ index by comparing the spectrum peak value of each channel. However, the performance of this algorithm will deteriorate along with the increase of signal bandwidth. In Reference [17], the NYFR output signal was processed based on a complicated time-frequency diagram method. This method allows us to calculate some instantaneous frequencies of the LFM/SFM hybrid modulated signal to get the upper and lower boundary lines of the time-frequency curve. The chirp rate and NZ index can then be estimated by the slope and the NZ-dependent SFM bandwidth. This time-frequency curve algorithm is intuitive but requires a higher signal-to-noise ratio (SNR) as it does not make full use of all the sampled data. Based on the periodicity of SFM, the authors of Reference [18] looked at the use of autocorrelation to process LFM/SFM hybrid modulated signal. The study found that if the interval is exactly the SFM period, a sinusoidal signal with a chirp rate-dependent carrier frequency can be obtained by autocorrelation. Therefore, we can first estimate the chirp rate like fast dechirp algorithm [19], then convert the signal into SFM signal, and lastly estimate the NZ by the spectrum peak method. However, the autocorrelation method is limited to the signal type and requires a higher SNR.
In addition to these methods, to avoid the complicated processing of SFM signal, the authors of Reference [20] adopted periodic LFM signal as the modulated LOS, and the NZ index was estimated via the chirp rate of the NYFR output signal. The method is easy to implement due to the maturity of LFM modulation technology and provides a new design for the NYFR. However, for LFM signal input, the chirp rate of NYFR output signal is derived from the input signal itself and the LFM LOS. Consequently, conventional methods are not applicable for this case, and no better algorithm has been reported. In Reference [21], an improved dual-channel NYFR architecture was proposed to reduce the difficulty of NYFR output signal processing. However, this scheme introduced an auxiliary channel to the NYFR prototype structure, which doubled the hardware size. Thus, it violated the original intention of NYFR to monitor more bandwidth with less hardware.
In this paper, a parameter estimation algorithm based on singular value decomposition (SVD) [22] is proposed for LFM signals intercepted by the NYFR, which makes use of the LOS periodicity to estimate the chirp rate before estimating the NZ index. To reduce the computational complexity, the usability of golden section method and the reasonable step of its rough search are analyzed. Simulation results verify the efficiency of the proposed algorithm for the hybrid modulated signal in the NYFR. This algorithm is also suitable for the NYFR with periodic LFM LOS.
NYFR Architecture and Intercepted LFM Signal
The NYFR architecture [6] is shown in Figure 1. The input analog signal x(t) is first filtered by a UWB preselect filter and then mixed with a non-uniform radio frequency (RF) LOS p(t), which is controlled by the zero-crossing rising (ZCR) voltage times of an RF sample clock. The mixer output is then filtered by an interpolation low-pass filter (LPF) with pass band (−f_s/2, f_s/2), yielding y(t) as the output of the NYFR. The signal y(t) contains the LOS modulation information that can be measured to determine the original RF band. Next, y(t) is sampled by an analog-to-digital convertor (ADC) to obtain the discrete NYFR output, and the sampling rate f_s is equal to the LOS carrier frequency. Finally, the information of the original signal x(t) is recovered by the corresponding parameter estimation method. For ease of deduction, all signals are expressed in complex form. We define (−f_s/2, f_s/2) as the 0-th NZ; hence, (l·f_s − f_s/2, l·f_s + f_s/2), l ∈ {1, . . . , L}, is the l-th NZ.
In Figure 1, θ(t) represents a narrow-band LOS modulation. According to [16], if SFM is selected as the modulation, the non-uniform LOS p(t) can be normalized and rewritten as: where m f is the modulation coefficient, f sin is the modulation frequency, and ϕ LOS is the LOS initial phase. In essence, p(t) consists of a set of SFM signals located at the center of their respective NZ. Additionally, if periodic LFM is selected as the modulation, the m f sin(2π f sin t) in (1) should be replaced by a LFM signal type.
Here, we denote LFM signal as the NYFR input, and it can be expressed as: where A, f 0 , µ 0 and ϕ 0 are the amplitude, start frequency, chirp rate and initial phase, respectively, and w(t) is the additive white Gaussian noise distributed in the monitoring frequency band. For simplification, it is assumed that the frequency range of the LFM signal does not cross the NZ junctions, i.e., l f s . After mixing, low-pass filtering, and sampling, the discrete expression of NYFR output [16] can be given by: where l NZ is the NZ index indicating the original carrier frequency of the input signal, n = 0, 1, . . . , N − 1, T s = 1/ f s , and l NZ = round[( f 0 + µ 0 t)/ f s ]. w(nT s ) is the additive noise that is modulated according to the original NZ position, and its power spectrum is folded into (− f s /2, f s /2). y(nT s ) is a LFM/SFM hybrid modulated signal to be processed for parameter estimation. Figure 2 shows the time-frequency diagram of an LFM signal sampled by the NYFR. The time-frequency curve of the input LFM signal is superimposed on the sinusoidal frequency modulation of the LOS. The modulation parameters of the SFM part correspond to the values of l NZ , which demonstrates the frequency band where the input signal is located. The main parameters estimated in this paper are the chirp rate and the NZ index, which correspond to the slope of the time-frequency curve and bandwidth of the SFM in Figure 2.
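For readers who want to reproduce the signal model, the sketch below simulates a noise-free NYFR output of the kind described by Equation (3): a folded LFM phase plus an NZ-scaled sinusoidal phase term. The exact phase convention and every parameter value are assumptions chosen for illustration, not the paper's simulation settings.

```python
import numpy as np

# Illustrative simulation of the LFM/SFM hybrid modulated NYFR output; the phase
# convention and all parameters below are assumptions, not the authors' values.
fs = 2.5e9                  # LOS carrier frequency = sampling rate, Hz (assumed)
f_sin = 10e6                # LOS modulation frequency, Hz
m_f = 0.5                   # LOS modulation coefficient (assumed)
f0, mu0 = 5.2e9, 200e12     # LFM start frequency (Hz) and chirp rate (Hz/s), assumed
l_nz = int(round(f0 / fs))  # Nyquist zone index (signal assumed not to cross NZ edges)

n = np.arange(4096)
t = n / fs
phase = (2 * np.pi * (f0 - l_nz * fs) * t                 # folded carrier term
         + np.pi * mu0 * t**2                             # LFM (chirp) term
         + l_nz * m_f * np.sin(2 * np.pi * f_sin * t))    # NZ-dependent SFM term
y = np.exp(1j * phase)                                    # noise-free NYFR output samples
```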
Parameter Estimation Based on SVD
Intercepted by the NYFR with an SFM LOS, the LFM signal is converted into an LFM/SFM hybrid modulated signal. For the SFM part, only the NZ modulation index l_NZ is unknown. Unlike the existing algorithms [16,18], which treat the LFM/SFM hybrid modulated signal as a simple modulation signal and regard the other modulation as a redundant part, we make full use of its modulation characteristics and estimate the chirp rate directly using the LOS periodic information.
The LOS modulation period can be calculated as 1/f_sin, and the number of points in one LOS modulation period is N_sin = f_s/f_sin. In addition, f_sin and f_s are prior parameters of the NYFR structure. Thus, we can set N_sin = f_s/f_sin ∈ Z+ and M_c = ⌊N/N_sin⌋, where ⌊·⌋ denotes the floor operation and M_c ∈ Z+. This setting implies that the number of signal points used in this section is M_c·N_sin; if the input data length N > M_c·N_sin, we select M_c·N_sin points and omit the remaining ones.
For simplification, we omit the noise part temporarily, and abbreviate nT s as n. According to the LOS periodic characteristic, we first observe the relationship between two adjacent LOS periods.
It can be observed that when Equation (4) has no LFM modulation part, the quotient of elements separated by one LOS period is a constant. Thus, we can dechirp the signal y(n) in Equation (3) by multiplying it by an auxiliary LFM signal s_µ(n) = exp[−jπµ(nT_s)²], where µ is the trial argument. The result can be reshaped into an M_c × N_sin matrix, denoted Y(µ).
The singular value decomposition (SVD) of Y(µ) can be computed as UΣV^H [21], where Σ is an M_c × N_sin diagonal matrix, called the singular value matrix. The singular values are λ_1, λ_2, ..., λ_Mc, with λ_1 ≥ λ_2 ≥ ... ≥ λ_Mc. Considering a noise-free situation, if µ = µ_0, Y(µ) becomes an SFM signal matrix whose rows are proportional to the data in one LOS modulation period; therefore, the first singular value λ_1 reaches its maximum and the remaining singular values are zero. If µ ≠ µ_0, the periodic characteristic of the LOS in each row of Y(µ) is weakened by the residual LFM modulation and, consequently, the other singular values of Y(µ) are non-zero. Based on this characteristic, we define the chirp singular value ratio (CSVR) spectrum [23] as in Equation (7). We can then search the peak of the CSVR spectrum in Equation (7), whose argument is the chirp rate, and the estimated chirp rate is given by Equation (8). Figure 3 shows the relationship between the CSVR spectrum and µ under different SNRs. It is worth noting that this algorithm is also suitable for the NYFR with periodic LFM LOS. Figure 4 shows the time-frequency curve of an LFM signal intercepted by the NYFR with periodic LFM LOS. The intercepted LFM signal can be processed in the same way, and the CSVR spectrum is shown in Figure 5. The chirp rate can also be estimated by searching the peak of the CSVR spectrum, which provides a feasible method for processing LFM signals intercepted by the NYFR with periodic LFM LOS.
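A compact numerical sketch of the CSVR computation is given below: the signal is dechirped with a trial chirp rate, reshaped into an M_c × N_sin matrix, and decomposed by SVD. Since Equations (7) and (8) are not reproduced here, the spectrum value is taken, as an assumption, to be the ratio of the largest singular value to the sum of the remaining ones, maximized over µ.

```python
import numpy as np

# Hedged sketch of the CSVR spectrum: dechirp, reshape, SVD, singular value ratio.
def csvr(y, fs, f_sin, mu):
    n_sin = int(round(fs / f_sin))            # samples per LOS modulation period
    m_c = len(y) // n_sin                     # whole LOS periods available
    t = np.arange(m_c * n_sin) / fs
    dechirped = y[:m_c * n_sin] * np.exp(-1j * np.pi * mu * t**2)
    s = np.linalg.svd(dechirped.reshape(m_c, n_sin), compute_uv=False)
    return s[0] / max(np.sum(s[1:]), 1e-12)   # avoid division by zero when mu == mu0

def csvr_spectrum(y, fs, f_sin, mu_grid):
    return np.array([csvr(y, fs, f_sin, mu) for mu in mu_grid])

# Chirp rate estimate: mu_hat = mu_grid[np.argmax(csvr_spectrum(y, fs, f_sin, mu_grid))]
```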
Fast Search Algorithm
We can obtain an accurate estimate of µ_0 by scanning the CSVR spectrum, and the search interval is not limited by the data length. However, the computational complexity of the proposed method may be larger than that of the existing methods for two main reasons. First, the complexity of each SVD is high. Second, the total number of SVD computations may be very large, especially if high precision is required and the search step is small. For the first point, there are some fast SVD methods [24,25] that can reduce the computational complexity of each SVD decomposition and enhance the speed.
To reduce the computational complexity of the complete search of the chirp rate, this section discusses the applicability of the golden section search method to the CSVR spectrum algorithm. Although the dichotomy method has a faster convergence rate than the golden section method under certain circumstances, the dichotomy requires high symmetry around the peak point to ensure that the iteration proceeds correctly. Therefore, we choose the golden section method, which has a lower but more stable convergence rate, to search for the exact value of µ_0.
An important prerequisite for the golden section method is to find the position near the maximum point in order to avoid iteration to the local extreme point. This can be achieved by a rough search within a reasonable interval. For the rough search, a small interval will lead to a considerable computation, while a large interval will lead to the absence of the main lobe around the true value of µ 0 , and achieve a wrong estimation. Hence, we resort to the width of the maximum main peak of the CSVR spectrum to determine the maximum available search interval.
According to the Jacobi-Anger expansion, we have Equation (9), where J_m(·) is the m-th Bessel function. According to Equation (9), the SFM part of y(n) in Equation (3) consists of a series of frequency components with a fundamental frequency f_sin, so f_sin is the key parameter that determines the periodicity of the CSVR spectrum. Here we directly give the expression of the periodic fluctuation interval of the CSVR spectrum as Equation (10), and verify it by simulation:

∆µ = f_sin²   (10)

Figure 6 shows two CSVR spectra, corresponding to f_sin = 10 MHz and f_sin = 20 MHz respectively. It can be seen that the periodic fluctuation intervals of the CSVR spectrum are 100 MHz/µs and 400 MHz/µs respectively, conforming to Equation (10). The main lobe width of the CSVR spectrum in Figure 6 is 2∆µ = 2 f_sin². In order to improve the accuracy of the CSVR spectrum peak search under low SNR, we set the step of the rough search to 1/4 of the main lobe width, i.e., f_sin²/2. Thus, by a rough search of µ over the possible range of chirp rate, we can ensure that the maximum of the results is located within the main lobe of the CSVR-µ curve.
As Figure 7 shows, the peak point after the rough search is µ_peak. Additionally, we can ensure that the maximum of the CSVR spectrum is located between the points to the left and right of µ_peak, that is, in the area between the two upright purple lines. We choose the neighboring positions of the rough-search peak point, namely µ_a and µ_b, as the initial boundary of the golden section search. Within the initial interval (µ_a, µ_b), the CSVR-µ curve has only one local extreme point, which is also the global maximum point. After setting the precision requirement, an accurate estimate of the chirp rate can finally be obtained through iterative search.
It is assumed that the possible range of µ is from −1000 MHz/µs to 1000 MHz/µs, the required estimation accuracy is 10⁻³ MHz/µs, and the modulation frequency is f_sin = 10 MHz. A complete direct search of the chirp rate requires calculating a total of 2 × 10⁶ CSVR values. By contrast, this fast search method requires only 41 calculations in the rough search and about 23 in the iterative search to reach the same precision. Compared with the direct search, the proposed search scheme reduces the amount of computation by more than 99%.
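The two-stage search can be sketched as follows, reusing the csvr() helper from the previous sketch: a rough scan with step f_sin²/2 locates the main lobe, and a golden-section iteration refines the estimate to the required tolerance. The search range and tolerance correspond to the example values quoted above.

```python
import numpy as np

# Sketch of the two-stage search; csvr(y, fs, f_sin, mu) is the helper defined earlier.
def estimate_chirp_rate(y, fs, f_sin, mu_min, mu_max, tol):
    step = f_sin**2 / 2                      # quarter of the main-lobe width
    grid = np.arange(mu_min, mu_max + step, step)
    peak = grid[np.argmax([csvr(y, fs, f_sin, m) for m in grid])]
    a, b = peak - step, peak + step          # bracket containing the global maximum
    gr = (np.sqrt(5) - 1) / 2                # golden ratio factor, ~0.618
    c, d = b - gr * (b - a), a + gr * (b - a)
    while b - a > tol:
        # (probe values could be cached to halve the number of CSVR evaluations)
        if csvr(y, fs, f_sin, c) > csvr(y, fs, f_sin, d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return (a + b) / 2

# Example call matching the quoted setup:
# mu_hat = estimate_chirp_rate(y, fs, 10e6, -1000e12, 1000e12, 1e-3 * 1e12)
```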
After estimating the chirp rate, we can dechirp the LFM/SFM hybrid modulated signal in Equation (3) to obtain an SFM signal, and the NZ index can then be estimated by the conventional spectrum peak method [16].
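This final step can be sketched, under assumptions about the spectrum peak method of [16], as wiping off the estimated chirp and then testing candidate NZ-scaled SFM phase terms; the candidate that collapses the signal to the sharpest spectral peak is taken as the NZ index.

```python
import numpy as np

# Hedged sketch of NZ index estimation after dechirping; the multi-candidate
# spectrum peak comparison follows the idea of [16], with details assumed.
def estimate_nz_index(y, fs, f_sin, m_f, mu_hat, l_max):
    t = np.arange(len(y)) / fs
    sfm_part = y * np.exp(-1j * np.pi * mu_hat * t**2)   # remove the LFM term
    peaks = []
    for l in range(l_max + 1):
        # wipe off the candidate NZ-scaled SFM phase; a correct l leaves a tone
        ref = np.exp(-1j * l * m_f * np.sin(2 * np.pi * f_sin * t))
        peaks.append(np.max(np.abs(np.fft.fft(sfm_part * ref))))
    return int(np.argmax(peaks))
```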
To sum up, for the LFM signal intercepted by the NYFR, the proposed parameter estimation algorithm can be divided into four steps:
1. Rough search of the chirp rate based on the CSVR spectrum with step f_sin²/2 to get its possible range.
2. Fast iterative search of the chirp rate based on the golden section method to get the accurate estimation result.
3. Dechirp the LFM/SFM hybrid modulated signal to get the SFM part.
4. NZ index estimation based on the spectrum peak method.
Numerical Experiments
In this section, simulation experiments were conducted to verify the performance of the proposed method using the parameters listed in Table 1. We compared the proposed CSVR method with the spectrum peak method [16], the autocorrelation method [18], and the time-frequency curve method described in Section 1. Figure 8 illustrates the normalized root mean square error (NRMSE) of the chirp rate estimation, and Figure 9 illustrates the performance of NZ index estimation, evaluated by the probability of correct decision (PCD).
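For reference, the two evaluation metrics can be computed as in the sketch below; the normalization of the RMSE by the true chirp rate is an assumption about the authors' definition.

```python
import numpy as np

# Sketch of the evaluation metrics used in Figures 8 and 9 (assumed definitions).
def nrmse(estimates, true_value):
    estimates = np.asarray(estimates)
    return np.sqrt(np.mean((estimates - true_value) ** 2)) / abs(true_value)

def pcd(estimated_nz, true_nz):
    estimated_nz = np.asarray(estimated_nz)
    return np.mean(estimated_nz == true_nz)   # probability of correct decision
```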
It is clear from the figures that the proposed method outperforms the other three methods. With the proposed architecture, the estimation performance of the chirp rate is outstanding and stable when SNR ≥ −10 dB. The correct ratio of NZ index estimation is greater than 90% when SNR ≥ −11 dB and reaches 100% when SNR ≥ −10 dB. This is because the proposed CSVR method effectively utilizes the characteristics of the hybrid modulated signal and can achieve super resolution through iteration. By contrast, the spectrum peak method estimates NZ index disregarding the effect of chirp rate, resulting in a low PCD of NZ index estimation and therefore, the performance of chirp rate estimation is affected by the NZ result. The autocorrelation method results in loss of SNR due to autocorrelation, so the accuracy of chirp rate estimation is poor at low SNR. The time-frequency method only uses the instantaneous frequency values of several moments and does not make full use of all the sampling data, so its performance is also not good.
On the basis of the LOS periodicity property, a parameter estimation algorithm based on SVD of matrix is proposed for LFM signals intercepted by the NYFR. We make full use of the LOS prior information to estimate the chirp rate before estimating the NZ index. Then, an effective fast search scheme based on the golden section method is put forward to reduce the computational complexity. Simulation results demonstrate the superior performance of the proposed algorithm compared to the existing algorithms for the LFM/SFM hybrid modulated signal in the NYFR. Besides, this algorithm is also suitable for the NYFR with periodic LFM LOS. It should be noted that in Figure 8, after correctly estimating the NZ index when SNR ≥ 5 dB, the spectrum peak method can obtain a higher estimation accuracy of the chirp rate by means of high precision algorithms such as Fractional Fourier Transform (FrFT) [26], which is especially suitable for simple LFM signal. However, it is obvious that the spectral peak method needs more computation, requires much higher SNR threshold, otherwise fails under low SNR. Among the four methods, the proposed SVD-based parameter estimation algorithm has the best robustness to low SNR and the most stable estimation performance.
Conclusions
On the basis of the LOS periodicity property, a parameter estimation algorithm based on SVD of matrix is proposed for LFM signals intercepted by the NYFR. We make full use of the LOS prior information to estimate the chirp rate before estimating the NZ index. Then, an effective fast search scheme based on the golden section method is put forward to reduce the computational complexity. Simulation results demonstrate the superior performance of the proposed algorithm compared to the existing algorithms for the LFM/SFM hybrid modulated signal in the NYFR. Besides, this algorithm is also suitable for the NYFR with periodic LFM LOS.
Author Contributions: T.L. contributed to the conceptualization, methodology and writing of this article. Q.Z. contributed to the simulations, validation, review and editing. Z.C. contributed to the supervision, review and editing.
Geology and Mineralogy of Rare Earth Elements Deposits and Occurrences in Finland
Rare earth elements (REE) have critical importance in the manufacturing of many electronic products in the high-tech and green-tech industries. Currently, mining and processing of REE is strongly concentrated in China. Substantial growth in global exploration for REE deposits has taken place in recent years and has resulted in considerable advances in defining new resources. This study provides an overview of the mineralogical and petrological peculiarities of the most important REE prospects and the metallogeny of REE in Finland. There is particularly good potential for future discoveries of carbonatite-hosted REE deposits in the Paleozoic Sokli carbonatite complex, as well as in the Paleoproterozoic Korsnäs and Kortejärvi and Laivajoki areas. This review also provides information about the highest known REE concentration in the alkaline intrusions of Finland in the Tana Belt and other alkaline rock-hosted occurrences (e.g., Otanmäki and Katajakangas). Significant REE enrichments in hydrothermal alteration zones are also known in the Kuusamo Belt (Uuniniemi and Honkilehto), and occurrences of REE-rich mineralisation are also present in granite pegmatite bodies and greisens in central and southern Finland (the Kovela monazite granite and the Rapakivi Granite batholith at Vyborg, respectively). REE minerals in all of the localities listed above were identified and analyzed by scanning electron microscopy (SEM) and electron microprobes (EMPs). In localities of northern and central Finland, both primary rock-forming and epigenetic-hydrothermal REE minerals were found, namely phosphates (monazite-Ce, xenotime-Y), fluorcarbonates (bastnäsite-Ce, synchysite), hydrated carbonates (ancylite-Ce), hydrated aluminium silicates (allanite-Ce, Fe-allanite, cerite, chevkinite), oxides (fergusonite, euxenite), and U-Pb-rich minerals. The chondrite-normalized REE concentrations, the La/Nd ratios and the REE vs. major element contents in several types of REE-bearing minerals from prospects in Finland can be used to identify and define variable REE fractionation processes (carbonatites), as well as to discriminate deposits of different origins.
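As a purely illustrative aside on the data handling mentioned above, chondrite normalization and La/Nd ratios can be computed as in the following sketch; the chondrite reference values are placeholders, and published values (e.g., CI chondrite) should be substituted.

```python
# Illustrative sketch (not from the paper): chondrite-normalizing REE analyses
# and computing La/Nd ratios used to compare fractionation between deposit types.
CHONDRITE_PPM = {"La": 0.237, "Ce": 0.613, "Nd": 0.457, "Sm": 0.148, "Yb": 0.161}  # placeholders

def normalize_ree(sample_ppm):
    return {el: sample_ppm[el] / CHONDRITE_PPM[el]
            for el in sample_ppm if el in CHONDRITE_PPM}

def la_nd_ratio(sample_ppm):
    return sample_ppm["La"] / sample_ppm["Nd"]
```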
Introduction
The demand for rare earth elements has spiked in recent years due to their increasing usage in numerous high-technology applications that touch many aspects of modern life and culture. Specific REEs are used individually or in combination to make phosphors (substances that emit luminescence) for many types of ray tubes and flat panel displays, in screens that range in size from smart phone displays to stadium scoreboards. Some REEs are used in fluorescent and LED lighting. Yttrium, europium, and terbium phosphors are the red-green-blue phosphors used in many light bulbs, panels, and televisions.
Enrichment of the REE may occur through primary processes such as magmatic processes and hydrothermal fluid mobilization and precipitation, or through secondary processes that move REE minerals from where they originally formed, such as sedimentary concentration and weathering. Natural rare earth element deposits and occurrences may thus be divided into primary (high-temperature) and secondary (low-temperature) deposit types. The most important primary deposits with high grade and tonnage are typically associated with alkaline-peralkaline igneous rocks and carbonatites formed in extensional intracontinental rifts [1,2].
Today, almost all (~98%) of the world's REE supply comes from China, with 40%-50% of this production contributed by the giant Fe-REE-Nb deposit at Bayan Obo [3,4]. These rich ores are dominated by light rare earth elements (e.g., [2][3][4][5][6]). Reserves are estimated at more than 40 million tons of REE minerals grading at 3-5.4 wt % REE (70% of the world's known REE reserves), 1 million tons of Nb2O5 and 470 million tons of iron. The deposit also contains an estimated 130 million tons of fluorite [2,6]. REE deposits associated with alkaline igneous rocks are typically lower grade but with larger tonnage and a higher content of HREE [2]. The other sources are minor and, in addition to the Mountain Pass bastnäsite and Russian loparite-eudialyte deposits, include placers, where monazite and xenotime are extracted as by-products from ilmenite-zircon sands (India, Brazil, Malaysia). Since 2003, none of these sources have contributed more than 3.5 thousand metric tons (kt) of rare earth oxides (REO) (i.e., <3% of global output).
The main REE metallogenetic provinces in Europe are those areas where extensional tectonics and the introduction of enriched mantle melts into shallow crustal levels have produced alkaline silicate and carbonatite intrusions. The major REE deposits are currently known in those areas where these intrusions have been exposed by erosion [7]. The most notable REE deposits are found in the Mesoproterozoic Gardar Province of south-west Greenland [8], and in the Protogine Zone, a major, multiply reactivated, in part extensional structure in southern Sweden [9]. The majority of the Kola Alkaline Province lies in Russia, where it contains significant REE deposits in the Lovozero and Khibiny intrusive complexes [10]. Mineral deposits of REE in the nepheline syenites and foidolites of the Lovozero plutons have 7.1 million tons (Mt) of resources with 1.12 wt % average REO content, whereas in the apatite-nepheline rocks of the Khibiny complex, 5.5 Mt of REO resources with 0.40 wt % average REO content are known [10,11]. The westernmost part of the province falls within the Finnish border, with two main intrusions: the Sokli phoscorite-carbonatite complex and the Iivaara alkaline complex (Figure 1). The approximately 360-380 Ma old Sokli carbonatite complex in north eastern Finland hosts a deeply weathered and unexploited phosphate deposit, which is enriched in niobium (Nb), tantalum (Ta), zirconium (Zr), REE and uranium (U). The Sokli complex shows many important similarities to other alkaline complexes of the Devonian Kola alkaline province, especially to the Kovdor and Vuorijärvi complexes (in Russia). These similarities include occurrences of early stage ultramafic cumulate rocks, well-developed phoscorite-carbonatite associations and late stage carbonatites that represent the evolution of carbonate magmas from dolomite carbonatites to final stage light rare earth element (LREE)-Sr-Ba dolomite carbonatite pulses. However, a critically important difference at Sokli is the vastly greater abundance of carbonatite relative to the cumulate ultramafic rocks. Exploration for REE by the Geological Survey of Finland in the Sokli complex has focused on the fenite aureole and associated late-stage, crosscutting carbonatite dikes that seem to have the highest potential for REE mineralization [12]. The ore reserve in the "soft-rock" phosphate-rich materials is about 114 Mt with 15 wt % P2O5 content, with an additional approximately 75 Mt of resources in the weathered bedrock containing about 5.6 wt % P2O5. Iivaara is the type locality of ijolite, which is a common rock type in carbonatite-bearing alkaline complexes. The phosphorus potential of the Iivaara intrusion is very high, but the REE potential is still under study. The Iivaara intrusion shows many similarities with the Lovozero alkaline massif in Russia. Results of previous mineralogical studies of samples from historic drill cores indicate that the Iivaara nepheline-syenite contains apatite and a small amount of allanite. The samples contain only 1%-5% P2O5 and 200 ppm REE [13]. Other examples of alkaline magmatic rock occurrences include the c. 2600 Ma Siilinjärvi carbonatite complex [14,15] and the c. 2050 Ma Katajakangas alkaline gneiss [12,16]. Apatite is currently mined at Siilinjärvi as a phosphate resource. The main rock types at Siilinjärvi are enriched in REE, and REE-hosting minerals in the carbonatite and associated "glimmerite" include monazite-(Ce), pyrochlore-group minerals, LREE-bearing strontianite and REE-bearing Ti, Nb-phases [17]. The Katajakangas gneiss contains mineralised layers, which are rich in zircon, bastnäsite, columbite, and thorite, and an informal resource estimate indicates 0.46 Mt of ore with 2.4% average total rare earth oxides [16].
This paper presents an overview of the main REE occurrences and prospects in Finland, and identifies areas with the most significant potential for future exploration and development on the basis of their geological suitability. In addition to a detailed geological description of the potential areas, the mineralogy of the occurrences and prospects is also fully characterized, as beneficiation methods have to be tailor-made for each deposit and depend on properties such as mineralogy, textures, and grain size of the ore. Results of geochemical and mineralogical studies on samples from new drillings completed by the Geological Survey of Finland between 2009 and 2016 are also included in this review. Mineralogical studies of samples from the new drill cores were focused on the following exploration targets: (1) the fenitic zone surrounding the Sokli carbonatite intrusions at Jammi and Kaulus; (2) the Pb-REE occurrence hosted by calcsilicate rocks and dikes of carbonatite at Korsnäs; (3) the Kortejärvi carbonatite and Laivajoki silicocarbonatite complexes; (4) the alkaline gneissic granite in the Otanmäki area; (5) the REE-Au mineralization at Mäkärä and Vaulo in the Tana Belt; (6) the metasomatic-hydrothermal Au-Co-Cu-Fe-U and REE deposits with intense albitization in the Kuusamo belt; (7) the Kovela monazite granite in southern Finland; and (8) the Vyborg rapakivi granite batholith in south eastern Finland. Here we classify these deposits and occurrences on the basis of their origin and geological settings (Figure 1), and summarize their characteristics into four REE deposit types on the basis of age, host rock type and mineral associations (Table 1). An integral part of the study was the textural and mineral-chemical comparison of REE, Y, Th, U-rich mineral assemblages in different geological environments.
Materials and Methods
The first step of this study was the extension of the previous geological, mineralogical, petrological and geochemical databases with new observations on the recently completed drill cores. On the basis of the results of this revision, drill cores were selected for re-logging and re-sampling for the purpose of detailed petrography and mineralogical studies. Polished thin sections were prepared from representative samples in order to investigate the mineralogical-textural characteristics of REE minerals and their host rocks by transmitted and reflected light polarizing microscopy and by scanning electron microscopy (SEM). For the purpose of high resolution SEM studies, the polished thin sections were carbon coated to a thickness of 25 nm using an EMITECH 960 L evaporation-coating unit. Backscattered electron imaging and qualitative compositional characterization of the samples were performed using a JEOL JSM 5900 LV high-resolution scanning electron microscope (GTK Electron Optics and Microanalysis Laboratory, Espoo, Finland) fitted with an Oxford Instruments X-MAX large area (50 mm²) silicon drift detector (SDD) energy-dispersive X-ray microanalysis (EDXA) system, run with Oxford Instruments' INCA v.4 software. SEM analyses were particularly suited to the complexity and very fine grained nature of the mineral assemblages. Compositions of REE minerals were analyzed using a CAMECA SX100/LKP electron microprobe (GTK Electron Optics Laboratory). During the analyses, the accelerating voltage was 15 keV with a beam current of 20 nA. The beam diameter was 1 µm in the analyses without fluorine and 5 µm when fluorine was included in the analyzed set of elements. A detailed analytical method is presented in Appendix A, and electron probe microanalyzer (EMPA) data are presented in the tables of the Supplementary Material.

The Sokli carbonatite complex (c. 360-380 Ma) in north eastern Finland is a part of the Kola alkaline magmatic province [20], and hosts an unexploited phosphate deposit enriched in REE, Nb, Ta, Zr and U [21,22]. The complex has a concentrically zoned structure that can be divided into two major zones (Figure 2). The inner zone is built up by multiple intrusions of carbonatites and phoscorites, whereas the outer zone mainly consists of ultramafic rocks which were largely transformed into carbonate-rich metasomatites by CO2-rich fluids derived from the late stage injections of carbonatite melts. The relict minerals and bulk compositions indicate that the ultramafic rocks were mostly pyroxenites [23,24]. The internal zones of carbonatite are surrounded by syenite and fenite aureoles. The outer boundary of the fenitic zone is up to 2 km away from the central part of the complex. The late stage carbonatite dikes penetrate not only the complex, but also the fenitic aureole and country rocks up to 1.3 km from the contact. The country rocks consist of ultramafic and mafic volcanic units and tonalitic gneiss (Figure 2).
Compositions of REE minerals were analyzed using a CAMECA SX100/LKP electron microprobe (GTK Electron Optics Laboratory). During the analyses, the accelerating voltage was 15 keV with a beam current of 20 nA. The beam diameter was 1 µm in the analyses without fluorine and 5 µm when fluorine was included in the analyzed set of elements. A detailed analytical method is presented in Appendix A and electron probe microanalyzer (EMPA) data are presented in the tables of the Supplementary Material.

Geological and Mineralogical Characteristics of Major REE Deposits and Occurrences in Finland

Sokli Carbonatite Complex

The Sokli carbonatite complex (c. 360-380 Ma) in north eastern Finland is a part of the Kola alkaline magmatic province [20], and hosts an unexploited phosphate deposit enriched in REE, Nb, Ta, Zr and U [21,22]. The complex has a concentrically zoned structure that can be divided into two major zones (Figure 2). The inner zone is built up by multiple intrusions of carbonatites and phoscorites, whereas the outer zone mainly consists of ultramafic rocks which were largely transformed into carbonate-rich metasomatites by CO2-rich fluids derived from the late stage injections of carbonatite melts. The relict minerals and bulk compositions indicate that the ultramafic rocks were mostly pyroxenites [23,24]. The internal zones of carbonatite are surrounded by syenite and fenite aureoles. The outer boundary of the fenitic zone is up to 2 km away from the central part of the complex. The late stage carbonatite dikes penetrate not only the complex, but also the fenitic aureole and country rocks up to 1.3 km from the contact. The country rocks consist of ultramafic and mafic volcanic units and tonalitic gneiss (Figure 2). There is a strong connection between the complexity of fenite textures and associated mineralization due to the enrichment of REE in the intermediate and late stage carbonatite magma generations, which produced multiple pulses of fenitizing fluids [21,25]. The presence of brecciated zones within the carbonatite and alkaline complex is the result of an explosive release of fluids and volatiles; thus brecciation indicates development of more evolved magma generations and increased potential of Nb and REE enrichments in the source intrusion [25,26].
Geochemical data from the drill cores R301 and R302 drilled by GTK in 2006 in the Jammi area (Figure 2) show that the carbonatite dikes in the fenitic aureole of the Sokli Complex are enriched in incompatible elements, such as P2O5 (19.9 wt % max), Sr (1.9 wt %), Ba (6.8 wt %) and Zn (0.3 wt %). The total REE content of samples is from 0.11 to 1.83 wt %, with dominance of LREE (0.11-1.81 wt % LREE and 0.002-0.041 wt % HREE contents) [25]. Similar dikes also occur within the carbonatite intrusion in the Kaulus area (Figure 2). According to the results of whole rock geochemical analyses, most of the dikes are varieties of carbonatite ranging from silico-carbonatite to ferro- and calcio-carbonatite. Calcio-carbonatite dikes are rich in calcite (15-84 vol %) with subordinate amounts of dolomite, but locally contain up to 50 vol % apatite.
The REE minerals in the carbonatite dikes at Jammi and Kaulus (Figure 2) are usually LREE-rich and their assemblages consist of ancylite-(Ce), monazite-(Ce), bastnäsite-(Ce) and allanite.In the main mass of the Sokli Carbonatite, HREE rich minerals such as xenotime-(Y) and pyrochlore are more common.In general, ancylite is the most widespread REE mineral in the whole complex.This mineral occurs in close association with strontianite, barite and bastnäsite.These minerals commonly form complex intergrowths in the carbonate matrix of dikes, but also fill up veinlets and they can also be found as euhedral crystals in cavities of the carbonatite (Figure 3a).
Monazite-(Ce) [(Ce,La,Nd,Th)PO4] is the second important REE mineral in the carbonatite dikes of the Sokli complex. This mineral most commonly occurs in the form of microcrystalline, sporadic, and isolated equidimensional crystals in the carbonate matrix. Monazite also often occurs in a round, sunflower-like form. In the centre of the latter precipitations, strontium-apatite is often present (Figure 3b,c). The total RE2O3 contents are in the range of 40-50 wt %, with systematic differences in chemical composition according to the different stages of monazite crystallization in different assemblages. The monazite crystals have higher Ce2O3 contents (29.5-45.0 wt %) than the sum of La2O3 and Nd2O3 (e.g., La2O3 = 21.10-24.66 wt %, Nd2O3 = 5.81-7.56 wt %).
Bastnäsite-(Ce) [(Ce,La)(CO3)F] has been reported as a relatively common product of alteration of allanite and Sr- or F-apatite [25] in the Sokli complex. Individual crystals of bastnäsite and allanite in the carbonatite dikes appear to be acicular or needle-like and they form either radial accumulations or intricate cross-cutting grids within a variety of minerals such as albite and dolomite (Figure 3d). Allanite-(Ce) [(Ce,Ca,Y)2(Al,Fe3+)3(SiO4)3(OH)] also occurs as both stubby and acicular crystals, generally in contact with bastnäsite and, at some places, with xenotime (Figure 3e). Representative electron-microprobe results of bastnäsite, allanite and xenotime are shown in the Supplementary Materials Table S1. Pyrochlore [(Na,Ca)2Nb2O6(OH,F)] is one of the tantalum/niobium oxides (commonly comprising a U-Ta-rich and Nb-bearing rutile) that typically occurs as rare, sector-zoned, prismatic crystals up to 50-100 µm in size (Figure 3f).
Korsnäs Pb-REE Deposit
The Korsnäs Pb-REE deposit in western Finland was operated by the Outokumpu Oy mining company between 1961 and 1972. The deposit comprises a network of narrow carbonatite veins and dikes in an area of 10 km2 and one larger carbonatite dyke (Svartören) that hosts the main ore body measuring 5-30 m in width, up to 1.5 km in length and extending down to a depth of around 350 m (Figure 4). The Korsnäs dyke swarm is hosted by a north-south trending fracture zone dipping to the east at an angle of 40-60°. The host rock of the dikes is the c. 1.9 Ga migmatitic biotite paragneiss of the South Ostrobothnian Schist Belt, and the carbonatite dikes are dated at c. 1.83 Ga [27]. The average total REE oxide grade is 0.91 wt % [28]. The ore bodies consist of highly altered intrusive phases, including coarse-grained pegmatite and carbonatite dikes or calcareous scapolite-diopside-barite veins that may contain significant REE grades. The REE contents are correlated with the abundance of apatite (more than 6 wt %), with a slight excess of HREE with respect to the presence of monazite and allanite [29,30].
Samples for the current geochemical and mineralogical studies were selected from the drill holes SÖ-66 and SÖ-104, which were drilled by the Geological Survey of Finland in 1955 (Figure 4). The total REE content of samples ranges from 0.7 to 2.2 wt %, with LREE dominating the REE budget. The Eu content is high, from 66 to 242 ppm, whereas the Th contents range from 107 to 604 ppm. Ba and Sr are also enriched (3600-3800 ppm and 2200-3400 ppm, respectively), mainly in the carbonatite dikes [13,17]. The accessory mineral assemblages in the carbonatite dikes of the Korsnäs zone are virtually identical, and include REE-bearing apatite, monazite, bastnäsite, ancylite, britholite, calcite and barite. Representative microprobe analyses of ancylite, bastnäsite and monazite are presented in the Supplementary Materials Table S2. The polycrystalline aggregates of ancylite-(Ce) occurring in some samples as disseminations in the carbonate matrix (Figure 5a) are characterized by a high LREE content (~45 wt % Ce2O3, ~25 wt % La2O3 and ~12 wt % Nd2O3). Ancylite also forms fine-grained aggregates with bastnäsite. Figure 5b shows one example of their typical intergrowths, in which bastnäsite cores are overgrown by ancylite. Bastnäsite contains 55.52-59.27 wt % REE2O3, 3.40-5.63 wt % F and 6.31-6.90 wt % CaO. Monazite is the main REE mineral with up to 60 wt % REE2O3. This mineral occurs as small, discrete crystals or as fine-grained inclusions in apatite within the zones of the weathered rocks (Figure 5d-f). Based on the EMPA data (Supplementary Materials Table S2), monazite
grains contain an average of 29 ± 1.4 wt % Ce2O3, 11 ± 1.2 wt % La2O3, 8 ± 0.5 wt % Nd2O3 and 2 ± 0.5 wt % Pr2O3, with considerable amounts of ThO2 (6.5 ± 0.5 wt %) and UO2 (3.2 ± 0.6 wt %). Barite occurs as a secondary, fracture- and vug-filling phase together with the REE mineralization (Figure 5e,f). Barite is composed mainly of BaO (54.7-62.0 wt %), SO3 (19.6-33.1 wt %) and SrO (0.4-4.5 wt %). The barite also carries appreciable REE (0.3-18.0 wt % Ce2O3) and P (0.1-3.9 wt % P2O5), possibly because it is partially replaced by various assemblages of REE-Sr-Ba minerals (Supplementary Materials Table S1).
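The summary values quoted here (e.g., 29 ± 1.4 wt % Ce2O3) are means and standard deviations over individual EPMA spot analyses in the supplementary tables. A minimal Python sketch of that calculation follows; the spot values are placeholders, not the actual Table S2 data.

```python
# Illustrative only: compute mean ± standard deviation of oxide wt % values
# from a set of electron-microprobe (EPMA) spot analyses, as used for the
# summary figures quoted in the text. The numbers below are placeholders.
import statistics

monazite_spots = {
    "Ce2O3": [28.1, 29.5, 30.2, 28.7],   # wt %, hypothetical spot analyses
    "La2O3": [10.2, 11.8, 12.1, 10.9],
    "ThO2":  [6.1, 6.9, 6.4, 6.6],
}

for oxide, values in monazite_spots.items():
    mean = statistics.mean(values)
    std = statistics.stdev(values)
    print(f"{oxide}: {mean:.1f} ± {std:.1f} wt %")
```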
Kortejärvi and Laivajoki Carbonatites
The 1.88-1.85 Ga [30] Kortejärvi and Laivajoki carbonatite dikes in the Kuusamo belt, north central Finland (Figures 1 and 6), were emplaced into early Paleoproterozoic mafic volcanic rocks along a crustal-scale fault zone. The Kortejärvi carbonatite dyke is approximately 60 m thick and 2 km long and dips steeply to the east, whereas the Laivajoki carbonatite dyke is about 20 m thick and 4 km long and dips 60° to the southeast [31]. Both dikes consist of calcite-carbonatite and dolomite-carbonatite. The Kortejärvi dyke also contains glimmerite and olivine magnetite rocks, whereas silicocarbonatite and glimmerite occur in the Laivajoki dyke. Total REE contents are 210-1644 ppm and 443-892 ppm in the Laivajoki and in the Kortejärvi carbonatite, respectively [13]. Three samples were selected from the available drill cores of the Kortejärvi carbonatite dyke for the purpose of mineralogical studies. The major rock forming minerals are calcite, dolomite and apatite. Apatite is interstitial between calcite and dolomite crystals, and commonly rimmed by overgrowths of monazite (Figure 7a-c). This kind of monazite-apatite association is also quite common in carbonatite related deposits. A similar rim of monazite also occurs around calcite and dolomite (Figure 7b). Apatite has 53-54 wt % CaO content without significant concentrations of those trace elements (e.g., REEs, Y, Sr, Th, and U) that may substitute for Ca in the crystal lattice. Apatite crystals are very rich in halogens, with higher than 2 wt % F and over 1 wt % Cl concentrations (Supplementary Materials Table S3).
The most important REE-bearing accessories are monazite, allanite, synchysite and bastnäsite. These minerals associate with calcite, fluorapatite and quartz in fracture infillings within carbonate minerals and in aggregates of radiating individual crystals with bastnäsite (Figure 7c). Allanite grains are commonly altered into REE fluorcarbonate minerals, most often into bastnäsite (Figure 7c), pointing to the F-rich nature of the overprinting fluids. Bastnäsite forms radial or irregular aggregates of thin tabular crystals with up to ~50 µm sizes in cavities and cracks of altered allanite crystals. Monazite has low (from 0.03 to 3.05 wt %) CaO content with P2O5 concentrations between 27.92 and 29.91 wt % (Supplementary Materials Table S3). Six representative samples of the silicocarbonatite rocks from cores of drillings conducted by the Rautaruukki Corporation in the 1970s were chosen for mineralogical studies from the Laivajoki dyke. Mineral assemblages in the samples consist primarily of quartz, calcite, dolomite, plagioclase, biotite and iron oxide minerals. Allanite and chevkinite are the most abundant REE-bearing minerals. Allanite occurs in the form of coarse-grained aggregates associated with silicate and iron oxide minerals. Allanite also contains inclusions of magnetite and chevkinite (Figure 7d). The grain size of allanite varies between 300 µm × 500 µm and 1000 µm (Figure 7e). Many crystals display a network of fractures with narrow strips of epidote (Figure 7d,e). Chevkinite fills fractures in iron oxide minerals and titanite (Figure 7f). REE-rich fluorocarbonate minerals (bastnäsite and synchysite) are commonly associated with each other, but bastnäsite is more abundant in most of the studied samples. Bastnäsite also occurs as tiny anhedral inclusions up to 20 µm across in synchysite. Synchysite also forms radial or irregular aggregates of thin tabular crystals with up to ~50 µm sizes in cavities and cracks of altered allanite crystals, or in association with calcite, dolomite and Ti-bearing phases in some places (Figure 7f). The secondary carbonate minerals (mainly synchysite and bastnäsite) replace substantial parts of former allanite crystals (Figure 7f).
Results of electron microprobe analyses of polycrystalline allanite (Aln) grains from Laivajoki are given in Supplementary Materials Table S3. Allanite is characterized by a relatively uniform composition with REE2O3 contents between 24.67 and 27.09 wt %, small Y contents (0.08-0.13 wt % Y2O3), and from 11.05 to 12.36 wt % Al2O3, from 11.26 to 13.52 wt % FeOtotal, from 10.41 to 12.22 wt % CaO and from 3.98 to 4.99 wt % MgO. ThO2 and MnO2 contents are very small, not exceeding 0.5 wt %. In places, a phase with intermediate allanite-epidote composition, showing elevated Al (~20 wt % Al2O3) and Ca (~15 wt % CaO), smaller Fe (~12 wt % FeOtotal) and (REE,Y)2O3 contents of 15 wt %, was also found in fractures of allanite. In the samples from the Laivajoki carbonatite dyke, the majority of bastnäsite grains are strongly enriched in LREE (from 67.29 to 77.98 wt % REE2O3) with up to 33 wt % Ce2O3, 12 wt % La2O3, 18 wt % Nd2O3, and smaller contents of the remaining REE. Fluorine is the dominant anion (from 3.1 to 4.7 wt % F) and is partly substituted by hydroxyl. Synchysite-(Ce) [Ca(REE)(CO3)2F] reveals a significantly higher Ca content (from 11.74 to 17.32 wt % CaO) in comparison to bastnäsite, but LREE still predominate in the composition, with Ce3+ being the dominant REE cation (from 23.15 to 30.94 wt % Ce2O3). The F content shows small variation from 3.1 to 4.8 wt % and the ThO2 content is just over 1 wt %.
Chevkinite-(Ce) has the general formula A4BC2D2(Si2O7)2O8, where the predominant cations in each site are: A = Ca, REE, Th; B = Fe2+; C = Fe2+, Fe3+, Ti; and D = Ti. Oxide totals of microprobe analyses of this phase are rather low (from 94.9 to 96.3 wt %), presumably because total Fe is presented as Fe2+ (see below), and perhaps due to secondary hydration during metamictization. Chevkinite from Laivajoki has notably low Ca contents (~1 wt % CaO) and correspondingly high REE contents (48.62-50.74 wt % REE2O3). Thorium contents are very low (<0.10 wt % ThO2). Thus, REE occupy more than 98% of the A-site. Divalent Fe is the dominant cation at the C-site, with FeO concentrations ranging upward from 9.27 wt %.
The Otanmäki Katajakangas Nb-REE Deposit
The Otanmäki area consists mainly of Archean granitic gneisses intruded by alkali-granite and gabbro-anorthosite of c. 2.05 Ga age (Figure 8) [32][33][34]. The gabbro-anorthosite intrusions host Fe-Ti-V deposits which were mined from the 1950s to the 1980s by the Otanmäki Oy mining company. In total, 30 Mt of ore grading 32-34% Fe, 5.5-7.6% Ti and 0.26% V was mined [35]. The processing plant in Otanmäki produced 7.6 Mt magnetite, 3.8 Mt ilmenite and 0.2 Mt sulphur concentrates, as well as 55,454 t vanadium pentoxide [36]. The alkaline gneiss contains 0.7-1.5 wt % Zr and 0.1-0.2 wt % Th [37].
To the west of the old Otanmäki mine, the Nb-REE mineralisation at Katajakangas occurs in narrow lenses or layers with a width of only a few metres (Figure 8). These ore bodies are mostly composed of pervasively sheared and foliated, fine-grained, reddish-grey quartz-feldspar gneiss with riebeckite and alkaline pyroxene, with some hydrothermal veins. The zones with Nb-REE mineralization in the alkaline gneiss contain high concentrations of Nb, Zr, Y, Th and REE, with an estimated Nb + Y + REE resource of 0.46 Mt at 2.4 wt % RE2O3, 0.31 wt % Y2O3 and 0.76 wt % Nb2O5 contents [7]. Representative EPMA analyses of the REE-bearing minerals from the Otanmäki-Katajakangas deposit are listed in the Supplementary Materials Table S4. Fergusonite and columbite are the major hosts for Nb, Th, U and Y, whereas REE is mainly hosted by allanite. Fergusonite-(Y) [YNbO4] commonly occurs as rounded to irregular discrete crystals varying in size from 100 µm up to more than 500 µm, in association with allanite, columbite and zircon. Fergusonite is chemically heterogeneous: many crystals display growth zoning with enrichments in U and Th in the cores of crystals (Figure 9a,b). The bright domains on SEM images of these cores are characterized by high contents of U (up to 9.5 wt % UO2), Th (up to 3.1 wt % ThO2), Y (up to 26 wt % Y2O3) and Nb (46.3 wt % Nb2O5) (Supplementary Materials Table S4). Textures observed in SEM and EPMA data suggest the presence of sub-micrometre intergrowths between fergusonite and allanite in most of the studied samples (Figure 9a,b). They are interpreted as early magmatic REE phases. Columbite [(Fe,Mn)(Nb,Ta)2O6] is the most common niobium-bearing mineral in many of the studied samples. Columbite occurs in the form of irregular to sub-rounded microphenocrysts with up to ~50 µm sizes in association with allanite (Figure 9c). Columbite is also present as elongated (250 µm), subhedral to euhedral crystals, which are commonly intergrown with euhedral to anhedral rutile. Columbite and rutile grains are dispersed throughout the matrix of the host rock. The columbite crystals are typically poorly zoned, but some parts of the crystals are usually richer in uranium (Figure 9f).
The primary magmatic allanite forms euhedral to subhedral crystals, ~100 to ~300 µm across. It is associated with albite, biotite, quartz, zircon, fluorapatite, and titanite (Figure 9c). Allanite crystals are also partly to fully replaced and overgrown by secondary REE carbonates (bastnäsite and synchysite). The Katajakangas allanite has a total REE content between 23.6 and 55.7 wt % (average 39.6 wt %). Cerium (Ce) is dominant (with an average Ce2O3 content of 20.7 wt %), followed by Nd (10.7 wt %) and La (7.7 wt %). Concentrations of HREE are commonly below the detection limit of the electron microprobe (Supplementary Materials Table S4). The allanite-bastnäsite-synchysite association forms aggregates with more or less hexagonal outlines within the surrounding matrix (Figure 9d). The aggregates consist of radiating individual crystals.
In the hydrothermal veins, synchysite and bastnäsite form well-developed fibrous crystals and aggregates of radiating acicular crystals.The REE-carbonates replace and/or encompass allanite crystals in these veins (Figure 9e).
The Mäkärä Vaulo Area in the Tana Belt
The Tana Belt is located on the southern side of the Lapland Granulite Belt in northern Finland (Figure 1).It is characterized by prominent REE anomalies in till, but lithogeochemical data for various types of rocks also display enrichments in REE [38][39][40][41].High La and Y concentrations in both till and arkose gneisses of the bedrock occur in an elongated, 200 × 200 km 2 area (Figure 10).
The Mäkärä region within the Tana Belt consists of Archean gneiss (3.1-2.6 Ga) and Paleoproterozoic supracrustal rocks (2.3-2.0 Ga), which were thrusted together with the Lapland Granulite Belt onto the Central Lapland Greenstone Belt at c. 1.9 Ga during the Svecofennian orogeny [42][43][44]. The bedrock of the study area is poorly exposed, and largely covered by saprolite, weathered regolith and Quaternary till with variable thicknesses, as well as by dense vegetation. This area is characterized by Au and REE anomalies in regional till and bedrock samples. Au anomalies can be related to occurrences of hydrothermal hematite-pyrite-quartz veins, whereas weathering of arkose gneiss has played a role in the formation of the REE-bearing kaolinitic saprolite. The REE enrichment in the kaolinitic saprolite ranges from 0.04 to 0.1 wt % REE2O3, with LREE/HREE ratios of 2.7 to 15 in the saprolite sediment samples.
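The LREE/HREE ratios and total REE contents quoted for the saprolite follow directly from the grouped element sums. A minimal Python sketch with hypothetical whole-rock values (LREE taken as La-Eu, HREE as Gd-Lu); the concentrations shown are illustrative, not measured Mäkärä data.

```python
# Minimal sketch: deriving total REE content and the LREE/HREE ratio from a
# whole-rock analysis. Element groupings follow common usage; the values
# below are hypothetical placeholders.
lree = ["La", "Ce", "Pr", "Nd", "Sm", "Eu"]
hree = ["Gd", "Tb", "Dy", "Ho", "Er", "Tm", "Yb", "Lu"]

sample_ppm = {  # hypothetical whole-rock analysis, ppm
    "La": 180, "Ce": 350, "Pr": 40, "Nd": 150, "Sm": 25, "Eu": 5,
    "Gd": 20, "Tb": 3, "Dy": 15, "Ho": 3, "Er": 8, "Tm": 1, "Yb": 7, "Lu": 1,
}

lree_sum = sum(sample_ppm[e] for e in lree)
hree_sum = sum(sample_ppm[e] for e in hree)
total_ree_wt_pct = (lree_sum + hree_sum) / 10_000  # ppm -> wt % (element basis)

print(f"Total REE : {total_ree_wt_pct:.3f} wt %")
print(f"LREE/HREE : {lree_sum / hree_sum:.1f}")
```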
The arkose gneiss is mainly composed of quartz, K-feldspar, plagioclase, amphibole and mica (muscovite and biotite) phenocrysts embedded in a fine-grained groundmass. Lamprophyre dikes of 1775 ± 10 Ma [44] intrude the gneissic rocks at Mäkärä. The dikes are mainly composed of plagioclase, amphibole, biotite and K-feldspar. Remnants/fragments of lamprophyre also occur in the kaolinitic saprolite. The average total REE content of the dikes is 0.05 wt % (max 0.1 wt %).

Detailed electron-microprobe studies of the arkose gneisses from the Tana Belt revealed extensive sub-solidus alteration of primary magmatic allanite to ferriallanite. Moreover, the alteration of ferriallanite resulted in the development of complex pseudomorphs and overgrowths containing mainly REE carbonate phases such as synchysite and bastnäsite. Allanite and REE carbonate phases often occur as large anhedral and spheroidal aggregates with up to 200 µm in diameter (Figure 11a-c). Occasionally, these aggregates also fill up vugs. Ferriallanite is often altered to allanite in the rims and to bastnäsite in the centres of the aggregates (Figure 11a-c). Allanite does not show significant zoning in SEM imaging but contains REE-rich rims due to sub-micron-sized intergrowths of allanite with a bastnäsite-type mineral phase. At some places, the secondary carbonate minerals (mainly synchysite) occur in cavities of feldspar and in association with monazite and apatite (Figure 11e).

In the lamprophyre dikes, enrichments of REE are hosted by apatite, allanite, bastnäsite, cerite and xenotime. Rutile contains inclusions of xenotime and iron oxides. Allanite crystals are partly or fully replaced and overgrown by secondary minerals, mainly REE carbonate phases (bastnäsite and synchysite), chlorite, albite, quartz, and more rarely by xenotime, titanite and apatite (Figure 11a-d). Chemical compositions of selected allanite and associated accessory minerals from the lamprophyre dikes at Mäkärä are presented in the Supplementary Materials Table S5. The oxide totals are in the range 96 to 99 wt %. Part of the shortfall can be attributed to calculation of the total iron as FeO. The Ca content ranges between 0.89 and 6.17 wt % CaO and the sum of Y + REE varies from 21.44 to 55.16 wt %, with the dominance of Ce (11.38-47.37 wt % Ce2O3). The concentration of Th in allanite ranges from 0.020 to 0.80 wt % ThO2 and usually predominates over U (0.17 wt % UO2). Generally, M sites of allanite are dominated by Al (14.37-15.83 wt % Al2O3) and Fe (8.79-15.16 wt % FeO). The Mg contents (0.12-2.46 wt % MgO) as well as the Si and F concentrations (19.36-31.24 wt % and 0.23-1.10 wt %, respectively), together with the high-end Fe contents (up to 15.16 wt % FeO), can be attributed to the presence of a significant amount of the ferriallanite molecule in the composition of allanite (Supplementary Materials Table S5). Synchysite reveals a significantly higher Ca content (5.62 to 9.14 wt % CaO) in comparison to bastnäsite. Again, LREE predominate, with Ce3+ the dominant REE cation (21.79 to 30.67 wt % Ce2O3). The F content of bastnäsite is highly variable (from 3.40 to 6.36 wt % F), whereas it ranges between 0.62 and 1.26 wt % F in synchysite.

Transformation of allanite to secondary REE carbonate minerals (mainly bastnäsite and synchysite) has been described in numerous cases in various magmatic and metamorphic rocks e.g., [45][46][47][48][49]. However, such alteration of allanite is usually associated with clay minerals (kaolinite, montmorillonite), and also with thorite, fluorite, and magnetite without carbonate minerals in some cases [45,50]. In contrast, allanite in the samples from the Tana Belt was replaced mainly by bastnäsite and synchysite, and rarely also by ferriallanite and chlorite. This type of alteration of allanite points to a highly alkaline and oxidizing nature of the overprinting fluids [51][52][53][54].
Cerite [Ce 9 Fe 3+ (SiO 4 ) 6 (SiO 3 )(OH) 4 ] occurs as fine-grained crystals filling fracture-veins in quartz and albite (50 µm in length and 5-15 µm in width) as seen in the studied lamprophyre dikes (Figure 11f).Cerite is strongly enriched in the LREE and also contains relatively low concentrations of Y and HREE (Supplementary Materials Table S5).The sum of the oxides of La, Ce, Pr, Nd, Sm, Gd, Dy and Y ranges from 71.15 to 81.60 wt %.Cerium-oxide contents are from 67.79 to 77.49 wt%.Cerite also contains from 12.19 to 17.86 wt % SiO 2 , from 1.18 to 2.53 wt % Al 2 O 3 , from 0.02 to 2.41 wt % FeO and minor amounts of Ca (0.06-0.39 wt % CaO).Fluorine is invariably present in concentrations ranging from 1.21 to 1.73 wt %.Analytical totals between 94.3 and 99.6 wt % suggest the presence of minor amount of H 2 O and other elements below their detection limits.
A few disseminated anhedral xenotime grains (diameter = 15 × 40 µm and 20 × 25 µm) have also been found in the lamprophyre dikes of the Mäkärä area. Xenotime also forms inclusions within ilmenite and is surrounded by quartz and K-feldspar (Figure 11f). Xenotime [YPO4] grains contain significant amounts of Y2O3 (41.08-47.33 wt %) and extremely high HREE concentrations. The Gd2O3 contents are up to 4.5 wt % and the Dy2O3 contents are up to 4.8 wt %.
Uuniniemi (Kuusamo Belt)
The Kuusamo schist belt (KSB) is a fold and thrust belt which is considered a part of the Central Lapland greenstone belt, the most significant Paleoproterozoic orogenic gold province in northern Europe. The KSB consists of an epiclastic-volcanic sequence deposited during multiple rifting events affecting the Archean basement between 2.4 and 2.0 Ga. The lithological units (Figure 6) were metamorphosed and folded during the Svecofennian orogeny (1.9-1.8 Ga). Syenite and carbonatite dikes with strong albitization are widespread in the KSB. Albitization is related to late magmatic and metamorphic hydrothermal processes [55][56][57][58].
The zone of REE enrichment at Uuniniemi is composed of metasediments, syenite and metasomatic carbonatite dikes [57], which were subjected to intense albitization, Fe-metasomatism, carbonatization and brecciation.The metasomatic carbonatite dyke sampled in outcrops contains 2.8 wt % P 2 O 5 , 0.43 wt % REE 2 O 3 and 256 ppm Nb.In albitite and albite carbonate rocks, the average total REE content is 0.1 wt % [13].The albite carbonate rock samples are composed of sodic plagioclase with minor carbonate (calcite and dolomite), quartz and mica (biotite), but are also highly enriched with apatite and REE minerals such as monazite, allanite, euxenite, Fe-columbite and thorite.
Monazite in the albite carbonate rocks (albitite) mostly occurs as grains in composite mineral inclusions in apatite. In such inclusions, the irregular shaped monazite grains are typically 5-20 µm large and occupy 50-70 wt % of the total volume of the inclusions (Figure 12a). Monazite also occurs as relatively large (50-150 µm), oval or anhedral individual grains that are strongly porous and fractured in the matrix of the rock (Figure 12b). Monazite contains much more REE (~66 wt % total REE2O3) than allanite (20.4 wt %), and has lower Th and Ca contents (0.22 wt % ThO2 and 0.33 wt % CaO).
Euxenite occurs as subhedral to anhedral, commonly 200-400 µm large, but occasionally up to 500 µm large grains (Figure 12c). Electron-microprobe analyses (Supplementary Materials Table S6) of euxenite grains show lower amounts of total REE (from 0.96 to 1.21 wt % REE2O3) and higher amounts of Nb (36.10-37.56 wt % Nb2O5), Y (19.49-19.93 wt % Y2O3), and Ca (4.16-4.25 wt % CaO), compared with analyses of monazite and bastnäsite. Small amounts of W (2.50-3.61 wt % WO3) and U (4.15-4.62 wt % UO2) and trace amounts of Th (0.49-0.63 wt % ThO2) were also detected. Thorite commonly occurs not only as isolated grains but also as inclusions within monazite and apatite. Thorite is characterized by large agglomerations of anhedral to round crystals with up to 200 µm in diameter (Figure 12d), and contains 53.19 wt % ThO2. Ferrocolumbite [(Fe,Mn)(Nb,Ta)2O6] crystals occur in the metasomatic carbonatite dikes of Uuniniemi in the form of subhedral to anhedral crystals and range in size from 50 to 200 µm (Figure 12d). The major oxides in ferrocolumbite are FeO (19.36 wt %) and MnO (1.68 wt %) at the A sites of the crystal structure, whereas Nb2O5 (67.26 wt %) and Ta2O5 (1.73 wt %) occupy the B-sites. Also, minor amounts of Ti, Th, U, Y and LREE were observed as substitutions (Figure 12d). Apatite occurs as discrete subhedral grains in association with calcite and dolomite. Davidite and rutile are the principal opaque minerals. Davidite is present as large (up to 2 mm in diameter) zoned grains (Figure 12e).
Honkilehto (Kuusamo Belt)
The hydrothermal Au-Co-Cu-U-REE occurrence at Honkilehto in the central part of KSB is hosted by albitised, carbonatised and sulphidised sericite quartzite [57], which are intruded by numerous sills of albite diabase and by some minor metasomatic carbonatite dikes [59].The carbonatite dikes are characterized by high content of P (2.8 wt % P 2 O 5 ) and REE (0.45 wt % total REE 2 O 3 ) and host several REE-bearing minerals such as monazite-(Ce), allanite-(Ce), ancylite-(Ce), bastnäsite-(Ce) and xenotime-(Y).The albite diabase dikes are characterized by high content of U-rich minerals (uraninite, davidite), all of which are associated with bastnäsite and allanite [27].Coexisting bastnäsite and uraninite with up to 100 and 50 µm in size, respectively (Figure 12e), occur as accessory minerals in the host albitised rocks.Uraninite is partly or wholly replaced by bastnäsite and allanite.Bastnäsite is furthermore common in fractures, indicating that it is a paragenetically late mineral (Figure 12e).The pristine parts of the uraninite grains (Supplementary Materials Table S6) contain 71.8-77.5 wt % UO 2 and the Pb content varies from 1.3 to 15.8 wt % PbO.Concentrations of Si are from 0.03 to 6.1 wt % SiO 2 , and the Fe content is from 0.5 to 2.5 wt % FeO.The Y 2 O 3 and REE 2 O 3 are from 0.5 to 5.5 wt % and from 0.05 to 4.5 wt %, respectively.
Details of REE mineralogy were checked in two drill core samples of albitised and carbonatised sericite-quartzite at Honkilehto. These samples are composed mainly of albite, calcite, biotite, quartz and sericite. Apatite occurs as discrete subhedral grains in association with calcite and dolomite. Davidite is isostructural with the crichtonite-group minerals; it is associated with rutile, and the two are the principal opaque minerals. Davidite is present as up to 2 mm large zoned grains (Figure 12f). About 20 microprobe analyses of davidite have been carried out (Supplementary Materials Table S6). Its chemical composition is governed by Ti, V, Cr, and Fe oxide, accompanied by smaller amounts of REE, U and Pb. The major chemical components are TiO2 (41.55-47.36 wt %) and FeO (15.38-19.66 wt %), and the U content ranges from 2.56 to 9.41 wt % UO2. The Th content is much lower, ranging from <0.01 to 0.19 wt % ThO2. Maximum amounts of other elements that were consistently detected include 0.92 wt % Sc2O3, 3.31 wt % V2O3, 12.06 wt % Cr2O3, 0.68 wt % Y2O3, 3.42 wt % La2O3, 3.13 wt % Ce2O3, 0.42 wt % CaO, 0.26 wt % MgO, 1.06 wt % SrO and 2.70 wt % PbO. Backscattered electron images of davidite display the presence of patchy compositional zoning and intense cracking of grains (Figure 13). It is not known whether the cracking is related to radiation damage induced volume expansion or to fracturing by tectonic processes. Reconnaissance EPMA traverses show that the compositional zoning, at least in part, reflects the distribution of Ti, Y, REE, U and Pb. Uranium, Cr, Ti and Pb tend to be relatively concentrated toward the central (more altered) portion of grains at the expense of Y and REE. Lighter areas thus have maximum U, Pb and minimum Ti, Y and REEs (Figure 13). REE appear to co-exist or substitute, respectively, for Pb and U, as indicated by the corresponding decreasing trends of Ti, Y, REE with increasing trend of Pb + U.
Kymi Granite Complex
The rapakivi granite in the Kymi Complex hosts several greisen zones which are characterized by silicification with fine grained masses of sericite, muscovite and chlorite. Fluorite, topaz, zircon, epidote, apatite, genthelvite, iron-titanium oxides and sulphide minerals (sphalerite, galena, and chalcopyrite) are the most abundant accessory minerals. Geochemically, high F, Li, Rb, Ga, Sn and Nb concentrations and depleted Mg, Ti, Zr, Ba, Sr and Eu contents distinguish zones of greisen from the fresh granite [61]. Some of the greisens in the rapakivi granite are also enriched in indium and rare earth elements, with roquesite (CuInS2) being the major indium bearing mineral (Figure 15a), whereas monazite, allanite, bastnäsite, xenotime and thorite are the main REE-rich minerals (Figure 15b-f).
The total REE content of greisens ranges from 100 to 1025 ppm with an average of 500 ppm [62], whereas the indium content is commonly from 70 to 200 ppm, but it may reach values of up to 800 ppm [54]. Previous studies concluded that the enrichment of REE is due to a combination of magmatic and hydrothermal processes [60,63-65]. The 1.64-1.63 Ga Kymi Granite Complex can be divided into three varieties of rapakivi granite: (1) granite with ovoid alkali feldspar phenocrysts larger than 3 cm in diameter, mantled by sodic plagioclase. In this variety, the feldspar crystals have been partially re-melted, and subsequently altered; the alteration produced the rim of greenish sodic plagioclase; (2) granite with quartz-feldspar porphyritic texture and angular phenocrysts of potassium feldspar (microcline), plagioclase and quartz; (3) hornblende bearing granite with porphyritic rapakivi texture.
Detailed mineralogical observations on REE-bearing minerals were completed in 12 samples from the greisenised Kymi granite stocks and the surrounding country rocks (Figure 14). Results of electron microprobe analyses of those minerals are reported in the Supplementary Materials Table S7. Monazite occurs as euhedral to subhedral accessory crystals (20-50 µm diameter), is often intergrown with zircon, xenotime, apatite and fluorite (Figure 15b), and also fills fractures in apatite (Figure 15c). Monazite grains usually coexist with xenotime (Figure 15b). Contents of the major elements in monazite are as follows: P2O5 from 27.7 to 29.3 wt %, F from 1.0 to 1.6 wt %, Ce2O3 from 31.4 to 34.4 wt %, La2O3 from 11.7 to 15.4 wt %, Nd2O3 from 11.3 to 13.5 wt %, Sm2O3 from 1.3 to 2.7 wt %, Gd2O3 from 1 to 2.2 wt % and Pr2O3 from 3.0 to 3.6 wt %. The ThO2 contents vary from 0.53 to 3.1 wt %, whereas UO2 has low concentrations from 0.01 to 0.24 wt % and SiO2 varies between 0.5 and 3.2 wt % (Supplementary Materials Table S7). Bastnäsite is mostly present as disseminations of acicular and needle-shaped crystals and flakes (100 µm × 300 µm). It also occurs in thin fractures and vugs of fluorite, biotite and quartz as round-hexagonal crystals (Figure 15c-e). Bastnäsite is commonly associated with allanite and thorite. At some places, bastnäsite appears to replace allanite as it forms rims around corroded allanite cores (Figure 15f). Bastnäsite is rich in LREE (~65 wt %), with Ce2O3 from 30.6 to 33.7 wt %, La2O3 from 12.9 to 17.9 wt %, Nd2O3 from 7.3 to 11.6 wt %, Sm2O3 from 0.8 to 2.0 wt %, Gd2O3 from 0.7 to 1.7 wt % and Pr2O3 from 2.5 to 3.2 wt % (Supplementary Materials Table S7). The fluorine contents are between 6.7 and 9.3 wt %. Bastnäsite grains also contain significant amounts of total HREE (from 0.5 to 1.5 wt %), as well as ThO2 (from 0.3 to 1.3 wt %) and Y2O3 (from 1.7 to 3.4 wt %).
Kovela Granitoid Complex
The Kovela Granitoid Complex is located in the Uusimaa Belt (1.85-1.79 Ga) in southern Finland (Figure 1). The complex has a well-developed zonal structure: the central part consists of coarse-grained equigranular to K-feldspar porphyritic biotite monzogranite and small granodiorite bodies, whereas the marginal zones of the complex consist of pyroxene gneiss. Garnet-cordierite gneiss as well as marginal pegmatite occur along the outer contact of the complex (Figure 16). The strongly radioactive granitic pegmatite bodies in the central part of the complex have a predominantly peraluminous and S-type character. Most of the granitic pegmatite dikes run roughly in a NW-SE direction and they are named according to their locations: S dikes in the southern part and N dikes in the northern part of the complex. These dikes are usually 5-10 m wide and 60-70 m long and dip gently to the west. Among them, the S dikes are more radioactive than the N dikes. They consist of perthitic K-feldspar, quartz, plagioclase, biotite, garnet, sillimanite, cordierite and staurolite. Monazite, zircon, xenotime, magnetite (oxidized), titanite, ilmenite and chlorite are the most common accessories. The complex underwent granulite facies metamorphism at temperatures of 700-820 °C and pressures of 2.6-5.8 kbar [66].
Monazite and Th-rich monazite are the most abundant accessories in the pegmatite dikes, in addition to the commonly observed zircon, xenotime and huttonite/thorite. Monazite grains (100-1000 µm) are usually elongated or sometimes rounded and fractured. Their colours range from light yellow to yellowish brown. Exceptionally, some euhedral to subhedral prismatic crystals are as long as 3 mm (Figure 17a). Monazite is mostly enclosed in quartz and K-feldspar and attached to garnet (Figure 17b). Backscattered SEM imaging revealed complex sector-, concentric- and patchy-zoning, as well as normal growth zoning in monazite, and the presence of abundant thorite inclusions in some grains (Figure 17e-f). Growth zoning only rarely occurs, whereas concentric and patchy zoning have been recognized in monazites from most of the studied samples. Representative microprobe analyses of monazite compositions from the Kovela monzogranite and granite samples are given in Supplementary Materials Table S8. The zoning reflects the variations between the abundances of LREE and Th (+U + Ca and Si). Dark zones on SEM images of monazite predominantly occur as outer rims and as rare inner cores (Figure 17e). These zones have lower Th (+Ca and Si) and higher REE contents compared to the brighter zones. This negative correlation between Th (+Ca and Si) and REE occurs in all monazite grains and reflects the typical coupled substitution mechanisms in monazite: (Th,U)4+ + Si4+ ↔ REE3+ + P5+ (e.g., [67]) and (Th,U)4+ + Ca2+ ↔ 2REE3+ [68]. The A-site cation sites in the crystal structure of monazite are principally occupied by Ce as the dominant REE (from 24.90 to 28.56 wt % Ce2O3). The La content varies from 8.30 to 10.56 wt % La2O3 and the Nd2O3 contents are from 8.98 to 10.54 wt %. The amount of Th, representing the huttonite molecule as a solid solution in monazite, varies from 14.9 to 20.1 wt % ThO2. The U content is low and ranges from 0.47 to 1.47 wt % UO2. The Ca content, representing the cheralite molecule as a solid solution in monazite, varies from 0.71 to 1.69 wt % CaO. Skeletal and porous intergrowths of 10-20 µm large crystals of thorium-silicate composition (probably huttonite or thorite) with monazite were also observed by SEM (Figure 17f). The huttonite grains contain from 52.7 to 66.0 wt % ThO2 as the main component, with several weight percent of LREE (1.3-7.4 wt % Ce2O3, 0-1.6 wt % La2O3 and 0.7-2.1 wt % Nd2O3; Supplementary Materials Table S8). The P and U contents in huttonite grains vary from 3.2 to 6.5 wt % P2O5 and from 0.9 to 2.9 wt % UO2, respectively. The percentages of monazite, cheralite and huttonite components in these thorium-silicate grains range between 5.0-18.2 wt %, 13.0-27.0 wt %, and 62.1-74.4 wt %, respectively.
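The coupled substitutions above can be checked quantitatively by recasting an oxide analysis into cations per formula unit. Below is a minimal Python sketch of that arithmetic, assuming a hypothetical, simplified Th-rich monazite analysis normalised to 4 oxygens; it is not the calculation scheme actually used for Table S8.

```python
# A minimal sketch (not the authors' scheme) of recasting an EPMA oxide
# analysis of monazite into cations per formula unit (4 oxygens), so the
# coupled substitutions (Th,U)4+ + Si4+ <-> REE3+ + P5+ and
# (Th,U)4+ + Ca2+ <-> 2 REE3+ can be checked. Oxide values are hypothetical.

# oxide: (molar mass g/mol, cations per oxide, oxygens per oxide)
OXIDES = {
    "P2O5":  (141.94, 2, 5),
    "SiO2":  (60.08, 1, 2),
    "CaO":   (56.08, 1, 1),
    "ThO2":  (264.04, 1, 2),
    "UO2":   (270.03, 1, 2),
    "Ce2O3": (328.24, 2, 3),
    "La2O3": (325.81, 2, 3),
    "Nd2O3": (336.48, 2, 3),
}

analysis_wt = {  # hypothetical, partial Th-rich monazite analysis, wt %
    "P2O5": 26.0, "SiO2": 2.5, "CaO": 1.0, "ThO2": 17.0, "UO2": 1.0,
    "Ce2O3": 26.0, "La2O3": 9.0, "Nd2O3": 9.5,
}

# moles of cations and oxygens contributed by each oxide
cations, oxygens = {}, 0.0
for ox, wt in analysis_wt.items():
    mm, n_cat, n_ox = OXIDES[ox]
    mol = wt / mm
    cations[ox] = mol * n_cat
    oxygens += mol * n_ox

scale = 4.0 / oxygens  # normalise to 4 oxygens per formula unit
apfu = {ox: n * scale for ox, n in cations.items()}

th_u = apfu["ThO2"] + apfu["UO2"]
si_ca = apfu["SiO2"] + apfu["CaO"]
print({k: round(v, 3) for k, v in apfu.items()})
print(f"(Th+U) = {th_u:.3f} apfu vs charge-balancing (Si+Ca) = {si_ca:.3f} apfu")
```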
Thorite forms inclusions within monazite and zircon (Figure 17a,b). Two types of zircons were found: (1) small (<50 µm) grains with round to subhedral habits as inclusions in garnet and monazite (Figure 17a-c); and (2) coarse, from 100 to 400 µm large crystals with mostly euhedral to subhedral prismatic shapes. Type 2 zircons often show oscillatory zoning on BSE images (Figure 17c). The elevated Th contents in the Kovela complex most probably resulted from a complex, multi-stage process which involved primary magmatic crystallization and superimposed metamorphic-hydrothermal alteration. Förster and Harlov [69], Overstreet [70], Mohr [71] and Kelly et al. [72] suggested that metamorphism may generate the breakdown of primary monazite to Th-rich minerals in the monazite-huttonite series by depletion of the Y, HREE, Ca and P contents parallel with the relative enrichment of Th, Si and LREE.
Compositions of REE Minerals with Different Origins in Finland
Box-plot diagrams of REE concentration data according to the main REE-bearing phases from the studied deposits and occurrences in Finland are shown in Figure 18. As expected, and as also demonstrated by many other studies [63,64], LREE are generally enriched in monazite, allanite and carbonate minerals, whereas xenotime concentrates HREE regardless of the conditions of crystallization. The highest average LREE concentrations, at around 60 wt %, occur in monazite, bastnäsite and synchysite, but the range of these values is highly variable. However, comparison of mantle-normalized [73] REE distributions in different types of REE-rich minerals from carbonatite and alkaline rocks, hydrothermal alteration zones and granitoids/greisens with elevated REE contents reveals significant and systematic variation in both absolute and relative abundances (Figure 19). The REE distribution patterns of bastnäsite, allanite and ancylite are similar to those of monazite. They are characterized by higher LREE over HREE, and negative Y anomalies. In each type of mineral, except xenotime, strong enrichment in LREE relative to MREE and HREE can be clearly seen. The normalized plots for most monazite exhibit similar patterns in each type of deposit/occurrence, but monazite from granitoid-related magmatic-hydrothermal (e.g., pegmatite and greisen) systems has higher REE contents in comparison to the metasomatic-hydrothermal and carbonatite-related environments (Figure 19a). Allanite and ancylite show similar REE patterns to monazite, with the lowest concentrations of REE in alkaline rocks and carbonatite in comparison to the other types of geological environments (Figure 19b). Similarly to monazite and allanite, bastnäsite and synchysite also display steeply right-inclined normalized REE profiles with depleted Y contents (Figure 19c,e), but bastnäsite from pegmatite/greisen shows lower REE concentrations in comparison to the other geological environments.
HREE concentrations above the detection limits of the EPMA analyses characterize the pegmatite/greisen-hosted monazite, allanite and bastnäsite only. However, the absolute HREE concentrations are just around or even lower than those of the average mantle [72]. Strong enrichment of HREE over LREE characterizes xenotime (Figure 19f). This reflects that xenotime preferentially incorporates HREE into its structure [74].
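As an illustration of how the normalized values behind Figure 19 are obtained, the conversion from EPMA oxide wt % to a mantle-normalized value can be sketched as follows; the mantle abundances used here are placeholders rather than the values of the compilation cited above.

```python
# Minimal sketch (not from the paper): converting EPMA REE oxide wt % to element ppm
# and dividing by mantle reference abundances to obtain normalized values of the kind
# plotted in Figure 19. The element-fraction factors are stoichiometric; the mantle
# abundances below are illustrative placeholders, not the values of ref. [73].

ELEMENT_OF = {"La2O3": "La", "Ce2O3": "Ce", "Nd2O3": "Nd"}
ELEMENT_FRACTION = {"La2O3": 0.8527, "Ce2O3": 0.8538, "Nd2O3": 0.8573}
MANTLE_PPM = {"La": 0.65, "Ce": 1.68, "Nd": 1.25}   # assumed reference values

def mantle_normalized(oxide_wt_percent):
    normalized = {}
    for oxide, wt in oxide_wt_percent.items():
        element = ELEMENT_OF[oxide]
        ppm = wt * 1.0e4 * ELEMENT_FRACTION[oxide]   # wt % oxide -> ppm element
        normalized[element] = ppm / MANTLE_PPM[element]
    return normalized

# Example: a monazite analysis (wt % oxides) yields strongly LREE-enriched values
print(mantle_normalized({"La2O3": 9.4, "Ce2O3": 26.7, "Nd2O3": 9.8}))
```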
The differences in REE enrichments and CaO contents according to the origin of the REE-rich phosphate minerals are also evident in the LREE-HREE + Y-CaO ternary plots (Figure 20). Monazite from the carbonatite has a relatively wide range in the proportions of the different HREE + Y and LREE, as well as total REE and CaO contents (Figure 20a). This variability in composition probably reflects the cumulative effect of fractionation of REE during crystallization of carbonatite melts with different composition, as well as the effect of segregation of magmatic fluids. Under magmatic conditions, dolomite has lower partition coefficients for REE than calcite [3], and precipitation of rock-forming carbonate minerals results in enrichment of LREE in the residual liquid. La-Ce and Sm-Nd also fractionate into magmatic fluids in different proportions [75]. In contrast, monazite from hydrothermal and granite-related magmatic-hydrothermal environments appears to show less variability in composition, with dominance of LREE (Figure 20a).
Xenotime in carbonatites, alkaline rocks and granite-related magmatic-hydrothermal systems also exhibits limited, although specific, variations in the relative concentration of HREE + Y to LREE (Figure 20b). The source of elements for the formation of xenotime-(Y) in the studied granitic rocks is the leaching of P, REE and Y mainly from zircon and apatite. Monazite and apatite associated with xenotime related to the breakdown of primary (Fe, U, Y, REE, Ca, Si)-(Nb, Ta) oxide phases both tend to have slightly higher LREE compared to most xenotime grains. The latter type is also clearly higher in the HREE and lower in Y.

In the REE deposits and occurrences of Finland, bastnäsite and synchysite are the most common Ca-REE fluorocarbonates and they also show compositional variation according to the conditions of their crystallization (Figure 21a). Based on the EPMA data, bastnäsite has low Ca + Sr contents, ranging from 0.1 to 1.1 wt % CaO + SrO (hydrothermal), 2.0 to 7.3 wt % CaO + SrO (carbonatite), 0.1 to 6.2 wt % CaO + SrO (granitic) and 3.6 to 6.1 wt % CaO + SrO (alkaline), whereas synchysite has higher Ca, varying between 11.7 and 18.0 wt % CaO + SrO (hydrothermal), 11.8 and 17.6 wt % CaO + SrO (carbonatite), and 8.6 and 13.5 wt % CaO + SrO (alkaline). Both minerals are classified as fluorocarbonates with F dominant in the F-OH-Cl site. Compositions of the Ca-REE fluorocarbonates (bastnäsite and synchysite) also discriminate formation conditions in the LREE-HREE-CaO ternary diagram, but in a different and less straightforward way (Figure 21a). Hydrothermal and carbonatite bastnäsite are richer in LREE in comparison to synchysite of similar origin, whereas the compositional differences considering the LREE and CaO contents are less pronounced in the pegmatite/greisen- and alkaline-rock-hosted occurrences. The less pronounced differences among REE carbonates from different geological environments most probably reflect the complex effect of temperature-pressure and CO2 concentration parameters [48]. On the basis of our investigations at the REE occurrences and prospects in Finland, we suggest that high mole fractions of CO2 in a hydrothermal fluid also enhanced such fractionations, again due to the greater solubility of neutral complexes in fluids of low dielectric constants.
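The CaO + SrO ranges quoted above suggest a simple screening rule for separating bastnäsite-group from synchysite-group analyses; the sketch below is illustrative only, the threshold is an assumption taken between the reported ranges, and intermediate compositions (e.g., parisite or syntaxial intergrowths) would need petrographic confirmation.

```python
# Minimal sketch (not from the paper): a screening rule separating bastnäsite-group
# from synchysite-group Ca-REE fluorocarbonates using EPMA CaO + SrO (wt %).
# The 8 wt % threshold is an assumption taken between the ranges reported above
# (bastnäsite up to ~7 wt %, synchysite ~9-18 wt %).

def classify_fluorocarbonate(cao_wt, sro_wt=0.0, threshold=8.0):
    ca_sr = cao_wt + sro_wt
    if ca_sr < threshold:
        return "bastnasite-group"
    return "synchysite-group"

# Example analyses (wt % CaO, wt % SrO), values chosen for illustration:
for sample, (cao, sro) in {"hydrothermal bastnasite": (0.6, 0.1),
                           "carbonatite synchysite": (13.2, 0.5)}.items():
    print(sample, "->", classify_fluorocarbonate(cao, sro))
```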
Figure 21b shows the compositions of REE silicate minerals (allanite-ferriallanite and chevkinite) of different origin in the (Y + REE)-(Al + Fe)-(Ti) ternary. Most of the allanite in the studied REE occurrences and prospects belongs to the allanite-ferriallanite series (up to 40% allanite-(Ce)), and a smaller part to the ferriallanite end-member (Figure 21b). The CaO content of allanite-(Ce) is unusually high and appears to be a characteristic feature of allanite from the arkose gneiss in the Tana Belt.
Chevkinite, which has the same crystal system as allanite, contains more titanium than allanite and ferriallanite, and this Ti-enrichment of allanite-type silicates characterizes chevkinite grains in carbonatites and albitite alteration zones in hydrothermal systems. The distribution of allanite and chevkinite in the studied carbonatite dikes suggests that the composition of the magmas is an important control on the stability fields of these minerals, a fact that is supported by the presence of primary allanite in rocks formed by processes of mixing magmatic fluids and a later hydrothermal fluid. Bagiński et al. [76] and Macdonald et al. [77] documented the alteration of chevkinite-(Ce) to a phase strongly enriched in Ti and depleted in Si, REE and Fe compared to the unaltered phase. In contrast to these studies, chevkinite did not alter to a member of the epidote group, possibly because the REE + Y abundance in chevkinite (43-50 wt %) is almost twice that observed in allanite.
Monazite from the carbonatites is characterized by a wide range of the proportions of lanthanides (La + Ce + Pr/ΣREE At %) and (La/Nd)cn ratios compared to other monazite samples from different occurrences (Figure 22a). This probably reflects fractionation of REE between melts, fluids and monazite during the crystallization of the parent carbonatite melts, similarly to the fractionation trend observed by Smith et al. [3] at Bayan Obo. Monazite from granite-related environments (e.g., pegmatite and greisen zones) in Finland is characterized by relatively narrow (La/Nd)cn and (La + Ce + Pr)/(ΣREE) ratios (from about 2 to 1.5 and less than 0.75, respectively), close to the values of the average upper crust (e.g., 1 and 0.68, respectively). These ratios are similar to those for monazite from REE deposits occurring in alkaline rocks, pegmatite and hydrothermal veins outside of Finland (Figure 22a). Allanite from hydrothermal occurrences in Finland is lanthanum-enriched relative to other allanite of different origin (Figure 22b). Compositions of allanite from carbonatites, alkaline rocks and greisens are characterized by relatively low (La/Nd)cn ratios (between 3 and 1) and low La + Ce + Pr/ΣREE ratios (less than 0.85) and fall into the compositional trend defined by the average data from the Nechalacho (alkaline rock hosted), Olserum-Djupedal (hydrothermal) and Creek Pass (pegmatite) deposits.
Bastnäsite from different types of REE deposits and occurrences in Finland shows the widest ranges in (La/Nd)cn, starting from about crustal values, as shown in granitoid/greisen rocks ((La/Nd)cn = 0.80 to 2.0), and extending to the maximum ratios encountered, as in hydrothermal metasomatic rocks ((La/Nd)cn = 8.5 to 12.84). The rapidly changing conditions that would prevail in high-temperature metamorphic-hydrothermal fluids with highly variable CO2 content provide an explanation for the large compositional variations found in the bastnäsite compositions from different REE deposits. The REE distributions in bastnäsite-(Ce) from the carbonatite deposits, granitoids/greisens and alkaline arkosic rocks in Finland have similar patterns of variation compared to the alkaline rocks at Nechalacho and the hydrothermal deposits at Olserum-Djupedal, but plot below the fractionation trend observed in the Bayan Obo carbonatite (Figure 22c; Table 2).
Synchysite from the Finnish carbonatite-, alkaline-rock-hosted and hydrothermal deposits has relatively wide ranges of (La/Nd)cn ratios (from around 3.4 to 1.5) and La + Ce + Pr/ΣREE ratios (between 0.85 and 0.6), and these ratios largely correspond to those from the different deposits in Finland (Figure 22d).
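The two ratios used in Figure 22 are straightforward to compute from analytical data; a minimal sketch is given below, in which the chondrite normalizing values are assumptions from a commonly used compilation and may differ slightly from the normalization adopted in this study.

```python
# Minimal sketch (not from the paper) of the two ratios plotted in Figure 22:
# the chondrite-normalized La/Nd ratio and the light-lanthanide proportion
# (La + Ce + Pr)/ΣREE computed on an atomic basis. The chondrite values are
# assumed normalizing values, not necessarily those used in the study.

CHONDRITE_PPM = {"La": 0.237, "Nd": 0.457}      # assumed normalizing values

def la_nd_cn(la_ppm, nd_ppm):
    return (la_ppm / CHONDRITE_PPM["La"]) / (nd_ppm / CHONDRITE_PPM["Nd"])

def light_lanthanide_fraction(atomic_ree):
    """atomic_ree: dict of REE -> atomic proportions (e.g., cations p.f.u.)."""
    light = sum(atomic_ree.get(e, 0.0) for e in ("La", "Ce", "Pr"))
    return light / sum(atomic_ree.values())

# Illustrative monazite-like values (ppm La and Nd, and atomic REE proportions):
print(la_nd_cn(la_ppm=80000, nd_ppm=85000))
print(light_lanthanide_fraction({"La": 0.18, "Ce": 0.42, "Pr": 0.05,
                                 "Nd": 0.20, "Sm": 0.03, "Gd": 0.02}))
```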
Discussion
Despite the existence of several areas with high REE potential and many ongoing exploration projects, the territory of Finland is still underexplored for REE. However, if demand is to be met in the future, continued research into, and further exploration of, Finnish REE resources will be needed. The REE exploration potential in Finland may be evaluated by comparing the geochemical databases and known REE occurrences with global deposit types. In terrains which are largely covered by glacial or other young unconsolidated sediments, such as the area of Finland, mostly geochemical and geophysical methods are used for localization of areas with REE potential. The most probable pathway to REE production in Finland is via by-production alongside the extraction of other commodities such as P, Nb, Ta and Au. Enrichment of the REE may occur through primary processes such as magmatic processes and hydrothermal fluid mobilization and precipitation, or through secondary processes that move REE minerals from where they originally formed, such as sedimentary concentration and weathering.
Several REE deposits and occurrences in various geological settings across Finland have been subjected to detailed geological and mineralogical studies. These REE resources are intimately associated with a variety of rock types such as late-stage carbonatite dikes, epigenetic-hydrothermal systems and granite pegmatites and greisens. The main features of the major REE-bearing minerals from the REE-bearing deposits obtained in this study have been summarized in Table 1. Primary magmatic REE enrichments occur in late-stage carbonatite dikes in the Sokli carbonatite complex (1-10 wt % RE2O3), in carbonatite veins at the Korsnäs deposit (0.9 wt % RE2O3), in alkaline gneiss at the Otanmäki-Katajakangas deposit (2.4 wt % RE2O3) and in monazite granite at Kovela (0.5-4.3 wt % RE2O3). Hydrothermal REE deposits are also of interest, although typically low-grade; they include Fe-oxide-Cu-Au class deposits in the Paleoproterozoic Kuusamo Belt (Au, Cu, Mo, Ni, REE and U) and the Tana Belt in northern Finland. These deposits could be potential sources of REE as a by-product of Au and other metals. Higher concentrations of the REE are found in the Kortejärvi and Laivajoki carbonatite intrusions, whereas A-type granite intrusions within the Vyborg rapakivi batholith display high average levels of Sn, Ga, Nb, Be, In and REE [12,60]. According to the existing mineralogical and geochemical data, the Eurajoki and Kymi stocks seem to have the most potential for REE mineralisation among the rapakivi granites.
The late-stage carbonatite dikes in the Sokli complex (360-380 Ma), located in the Finnish part of the Devonian Kola alkaline province, contain one of the most diverse assemblages of REE minerals described so far from carbonatites and provide an excellent opportunity to track the evolution of late-stage carbonatites and their sub-solidus (secondary) changes. More than ten different types of rare earth minerals have been analysed in detail and compared with analyses published from other deposits. Most of these minerals are those common in carbonatites (e.g., Ca-REE fluorocarbonates and ancylite-(Ce), plus monazite, xenotime and Sr-REE-apatite), while some of them are very rare Ba-REE fluorocarbonates. Mineralogical and mineral chemical evidence demonstrates that hydrothermal and metasomatic reworking processes were responsible for the REE mineralization in the Sokli carbonatite and confirms that such processes are predominant in the formation of REE minerals in carbonatites. During late-stage processes, apatite and carbonate minerals were replaced by various assemblages of REE-Sr-Ba minerals. Another example is the Korsnäs Pb-REE-bearing carbonatite dyke, which intrudes Palaeoproterozoic mica-gneisses of the Pohjanmaa schist belt and hosts REE-bearing apatite, monazite, allanite, calcio-ancylite, and bastnäsite. The deposit consists of mineralised zones in pegmatite and carbonatite or calcareous scapolite-diopside-barite-bearing skarn rocks. The wall rock of the dyke has been kaolinised in the strongly sheared ore zone. Apatite and monazite are heterogeneously distributed in the ore but occur together with galena. Monazite forms either anhedral grains or larger grain clusters that occur as inclusions within apatite phenocrysts.
In contrast to carbonatites, other notable examples of REE-rich products of the peralkaline rocks include the REE-P mineralization of the Otanmäki-Katajakangas deposit south of Lake Oulujärvi in central Finland. It is not surprising that, in addition to monazite and allanite, which are typical LREE hosts in the studied rocks, the peralkaline varieties contain such HREE minerals as fergusonite-Y and fergusonite-U. Other notable heavy rare-earth element (HREE) hosts include primary zircon and columbite. Hydrothermal reworking and autometasomatic processes have been reported to produce HREE mineralization (e.g., fergusonite-Y and euxenite-Y) arising from the decomposition of allanite and other primary Nb-Ti minerals [81,82]. On the other hand, rare-metal minerals are hosted by the granites of both the older and younger phases and were produced by a combination of anatexis (principally silicate minerals, such as allanite) and later hydrothermal processes (mostly oxide minerals, such as fergusonite). Furthermore, fractionation of rare-earth elements in allanite-(Ce) and monazite-(Ce) led to magma enrichment in middle and heavy rare earths, leading to the crystallization of fergusonite-Y and euxenite-Y during the late magmatic stages.
Hydrothermal alteration zones associated with Au-Co-Cu ± U mineralization in the central part of the Kuusamo belt are characterized by REE- and U-rich minerals. There are two stages in the evolution of the Au-Co-Cu ± U deposits within the Kuusamo Belt. The first conforms to a classic magmatic scheme; the second is postmagmatic and is characterized by multiphase metasomatism and metamorphic-hydrothermal processes. In the postmagmatic stage, albitization, sericitization, chloritization, carbonation, and silicification have been traced [16,57]. Albitization is the most extensive alteration type and apparently preceded the establishment of peak metamorphic conditions of the Svecofennian orogeny. Several phases of mineralization appeared in conjunction with albitization, resulting in enrichments in niobium, yttrium, uranium, thorium and REE. Niobium is mainly situated in pyrochlore, fergusonite, and titanite. The uranium content is high in some minerals (uraninite, with 78.6 to 80.5 wt % UO2, davidite, euxenite, columbite), while thorium is concentrated in others (thorite, monazite, allanite). Albitization is followed by chloritization, which is closely related to gold mineralization and is indicated by the formation of chlorite, tremolite-actinolite, magnetite, talc and Fe sulphides. Chloritized rocks exhibit enrichments in heavy REE, are relatively strongly enriched in Mg, strongly depleted in Si and large ion lithophile elements (such as U, Th, Nb, F), and slightly depleted in light REE. The next stage is phyllic alteration, indicated by biotite and sericite ± pyrite, additional gold mineralisation and ductile deformation. Sericitized rocks were previously silicified; they are very strongly enriched in K-Rb-Cs-Ba, very strongly depleted in Na-Ca-Sr-Eu, and slightly depleted in light REE relative to the albitization stage. This is followed by a stage of carbonation and silicification, where calcite, allanite-(Ce), ancylite-(Ce), and bastnäsite-(Ce) replace monazite-(Ce) after apatite. Alterations are localized in permeable zones such as fractures, flow tops, discordant breccia dikes, and conformable breccia horizons.
In addition to peralkaline granites (see above), their late-orogenic peraluminous counterparts in post- or anorogenic settings may be associated with the Vyborg granites and granite pegmatite (Kovela) with high Th-LREE contents. The Vyborg rapakivi batholith (1.64 Ga) in south-eastern Finland (Kymi) is one of a few regions in the Fennoscandian Shield with several documented occurrences of REE, indium and Zn-Cu-Pb sulphide mineralization. The late-orogenic peraluminous Kovela monazite granite shows up as a strong positive aeroradiometric gamma radiation anomaly in the Svecofennian Uusimaa Belt. Monazite is the dominant REE mineral; accessory minerals include zircon, xenotime, and thorite.
Late-stage carbonatite dikes in the Sokli carbonatites are important sources of REE in Finland. Less pronounced LREE enrichment is found in the metasomatic carbonatite bodies at Sokli and in calc-silicate rocks (skarn) from Korsnäs. On the basis of variations in REE distributions and mineral chemistry, strong differentiation of REE contents in monazite can be distinguished in the carbonatite complexes in Finland and elsewhere. Other REE-rich minerals from the magmatic and post-magmatic stages of carbonatite complexes show less pronounced variations in REE contents.
Alkaline rocks (Otanmäki) have potential for occurrences of significant REE deposits. The host alkaline granitic rocks are typically formed by progressive fractional crystallization of a parental magma, and the deposits typically represent two periods of mineralization. The first, primary magmatic period is associated with crystallization of highly fractionated magma rich in REE. The minerals of this period are commonly overprinted during the second period by late-magmatic to hydrothermal fluids that enriched the primary mineralization and remobilized REE into hydrothermal veins.
The epigenetic hydrothermal alteration and associated REE enrichments are documented in the Kuusamo Belt. Several phases of REE mineralization can be distinguished. Subtle compositional differences among REE fluorocarbonates (bastnäsite, synchysite), monazite and allanite define a spectrum from relatively La-enriched to (Ce + Nd)-enriched phases. The La-rich character of the mineralization in the Kuusamo Belt may be related to the high CO2 contents of high-temperature metamorphic-hydrothermal fluids, as high-coordination-number complexes of La are predicted to be more strongly associated than those of the other REE [3].
The pegmatite dikes within the Kovela granitic complex, southern Finland, are characterized by the presence of Th-enriched monazite, and its specific compositional variation (huttonite/thorite-monazite) appears to represent a multi-stage process which involved primary magmatic crystallization and late-stage hydrothermal alteration. The recrystallization during both the high-T and lower-T magmatic-hydrothermal events led to monazite becoming depleted in Y, HREE, Ca and P, and relatively enriched in Th, Si and LREE, to form the Th-rich minerals of the monazite-huttonite series.
In the greisens of the rapakivi granite in the Kymi stock, LREE are carried by monazite, bastnäsite and allanite, and the HREE by xenotime and zircon. The REE mineralization was also formed by the combination of magmatic and post-magmatic processes.
The fractionation processes leading to high variation in the lanthanum contents of monazite in carbonatite-hosted REE-rich deposits (e.g., Bayan Obo) were also observed in the Finnish deposits. Results of studies in Finnish deposits and occurrences suggest that the origin of REE-bearing minerals can be discriminated on the basis of their REE distribution patterns and mineral chemistry.
Figure 1. Simplified geological map of Finland with locations of the main REE deposits and occurrences. Modified after Korsman et al. and Nironen et al. [18,19], respectively.

Figure 2. Geological map of the Sokli massif in northern Finland. Modified after Vartiainen [20].

Figure 4. Geological map of the bedrock in the Korsnäs area according to the electronic DigiKP map sheet of the Geological Survey of Finland. Coordinates correspond to the Finnish National Coordinate System ETRS-TM35FIN.

Figure 6. (a) Locations of the Kortejärvi-Laivajoki carbonatite intrusions and metasomatic-hydrothermal REE enrichments at Uuniniemi and Honkilehto in the Kuusamo belt; (b) a magnetic ground survey map showing the location of the drill holes; (c) cross section in the Kortejärvi carbonatite showing P2O5 contents in drill cores R10 and R11 (modified from [13]). Coordinates correspond to the Finnish National Coordinate System ETRS-TM35FIN.

Figure 8. Geology of the Katajakangas and Otanmäki area on the basis of the electronic DigiKP map available at the Geological Survey of Finland. Coordinates correspond to the Finnish National Coordinate System ETRS-TM35FIN.

Figure 10. Geological map of the Tana belt in the area of the REE mineralization at Mäkärä and Vaulo. Modified from Salmirinne et al. [38].

wt % CaO and the sum of Y + REE varies from 21.44 to 55.16 wt %, with the dominance of Ce (11.38-47.37 wt % Ce2O3). The concentration of Th in allanite ranges from 0.020 to 0.80 wt % ThO2 and usually predominates over U (0.17 wt % UO2). Generally, the M sites of allanite are dominated by Al (14.37-15.83 wt % Al2O3) and Fe (8.79-15.16 wt % FeO). The Mg contents (0.12-2.46 wt % MgO), as well as the Si and F concentrations (19.36-31.24 wt % and 0.23-1.10 wt %, respectively), together with the high-end Fe contents (up to 15.16 wt % FeO), can be attributed to the presence of a significant amount of the ferriallanite molecule in the composition of allanite. The bastnäsite compositions in lamprophyre dikes show dominance of light REE (LREE; La to Nd) over other REE, with Ce > La > Nd > Pr abundances, and 33.5 to 38.5 wt % Ce2O3, while calcium reaches only around 1.3 to 2.9 wt % CaO (Supplementary Materials Table S5).

Figure 13. Back-scattered electron image of patchy chemical zoning of davidite (sample R307/3.85), a crichtonite-group mineral. EPMA line scans across davidite zones correspond to variation in the TiO2, PbO + UO2, Y2O3 and REE contents.

Figure 16. Geological sketch map showing the major lithological units of the Kovela granitic complex in southern Finland (also see Figure 1) on the basis of the electronic map sheet in the DigiKP database of the Geological Survey of Finland. Coordinates are given according to the Finnish National Coordinate System ETRS-TM35FIN.

Figure 18. Box plots showing the distributions of REE concentrations in REE-bearing phases of the studied rocks: (a) distribution of total LREE; (b) distribution of total HREE.

Figure 22. Composition plots showing (La/Nd)cn ratios against (La + Ce + Pr/ΣREE) ratios in the studied REE minerals of different origin in Finland and the averages of these ratios from REE deposits outside of Finland. Sources of data for deposits outside of Finland are referred to in the text. The average upper crust data are from [71]. (a) Monazite; the fractionation trend observed by [3] at Bayan Obo is highlighted by the black ellipse. (b) Allanite. (c) Bastnäsite; the fractionation trend observed by [3] at Bayan Obo is highlighted by the black ellipse. (d) Synchysite.
It contains Ce2O3 as the most abundant REE oxide, with concentrations ranging from 35.44 to 38.53 wt %, followed by La2O3 from 14.43 to 20.98 wt %, and Nd2O3 from 9.61 to 13.46 wt %. ThO2 displays low concentrations, from 0.02 to 1.07 wt %, and in places it is under the detection limit of the microprobe analyses. Allanite-type minerals are characterized by high REE (43.43-47.70 wt % REE2O3), Fe (8.03-11.05 wt % FeO), and Ti (2.93-3.65 wt % TiO2) contents, together with low contents of Mn (from 0.07 to 0.13 wt % MnO) and Mg (from 0.66 to 0.93 wt % MgO). The majority of bastnäsite grains in the studied samples are strongly enriched in LREE, with approximately 70 wt % total REE. The average Ce2O3 is 33.35 wt %, whereas the average La2O3 is 19.1 wt % and the average Nd2O3 is 8.3 wt %.
Synchysite reveals a significantly higher Ca content (5.62 to 9.14 wt % CaO) in comparison to bastnäsite. Again, LREE predominate here, with Ce3+ the dominant REE cation (21.79 to 30.67 wt % Ce2O3). The F content of bastnäsite is highly variable (from 3.40 to 6.36 wt % F), whereas it ranges between 0.62 and 1.26 wt % F in synchysite.
Artemisinin inhibits neutrophil and macrophage chemotaxis, cytokine production and NET release
Immune cell chemotaxis to the sites of pathogen invasion is critical for fighting infection, but in life-threatening conditions such as sepsis and Covid-19, excess activation of the innate immune system is thought to cause a damaging invasion of immune cells into tissues and a consequent excessive release of cytokines, chemokines and neutrophil extracellular traps (NETs). In these circumstances, tempering excessive activation of the innate immune system may, paradoxically, promote recovery. Here we identify the antimalarial compound artemisinin as a potent and selective inhibitor of neutrophil and macrophage chemotaxis induced by a range of chemotactic agents. Artemisinin released calcium from intracellular stores in a similar way to thapsigargin, a known inhibitor of the Sarco/Endoplasmic Reticulum Calcium ATPase pump (SERCA), but unlike thapsigargin, artemisinin blocks only the SERCA3 isoform. Inhibition of SERCA3 by artemisinin was irreversible and was inhibited by iron chelation, suggesting iron-catalysed alkylation of a specific cysteine residue in SERCA3 as the mechanism by which artemisinin inhibits neutrophil motility. In murine infection models, artemisinin potently suppressed neutrophil invasion into both peritoneum and lung in vivo and inhibited the release of cytokines/chemokines and NETs. This work suggests that artemisinin may have value as a therapy in conditions such as sepsis and Covid-19 in which over-activation of the innate immune system causes tissue injury that can lead to death.
Hydrogen peroxide (H2O2) is known to act as a potent immune cell chemoattractant 6,7. In previous work we have shown that the TRPM2 ion channel, which is activated by H2O2, mediates the chemotactic action of H2O2 by preferentially inducing a calcium influx at the neutrophil leading edge 8. We initially searched for inhibitors of neutrophil chemotaxis by screening a natural compound library, using neutrophil chemotaxis towards H2O2 as the assay. Figure 1A shows the forward migration index (FMI), the ratio of linear distance travelled in the direction of the H2O2 gradient to the total distance travelled, which gives an index of the directionality of cell movement. Interestingly, capsaicin, a TRPV1 agonist 9, and eugenol, a TRPV3 agonist 10, both significantly potentiated directional chemotaxis, perhaps because they have a weak agonist action at TRPM2. Of the compounds that caused an inhibition of chemotaxis, five were identified as interesting for further investigation (red boxes in Fig. 1A), based on a significant reduction in FMI together with a significant reduction in average speed of migration (Supplementary Fig. 1A). The dose-response relations of four of these compounds, beta-carotene, curcumin, ferulic acid and N-acetylcysteine, were similar, suggesting a common action, while artemisinin was more potent (Supplementary Fig. 1B). The four compounds showing a similar potency are antioxidants, so we investigated the possibility that they may act indirectly by dissipating the gradient of H2O2. To test this idea, we used a gradient of adenosine diphosphate ribose (ADPR), which like H2O2 is a potent neutrophil chemoattractant 8,11. ADPR directly activates TRPM2 at an intracellular location 12-14, while H2O2 does not directly activate TRPM2 but acts by increasing intracellular levels of ADPR 13,14. When a gradient of ADPR was used to activate neutrophil chemotaxis, none of the four antioxidant compounds was able to inhibit chemotaxis (Fig. 1B), showing that their action was indeed to dissipate the H2O2 gradient rather than to directly inhibit chemotaxis. Artemisinin, on the other hand, inhibited neutrophil chemotaxis towards both ADPR and H2O2 (Fig. 1B,C, respectively), demonstrating that its action in abolishing chemotaxis is independent of any effect on the gradient of H2O2. We next compared the ability of artemisinin to inhibit neutrophil chemotaxis with that of other well-established antimalarial compounds. Artemisinin was the only antimalarial that inhibited neutrophil migration (Fig. 1C) and therefore has a unique mechanism of action. We found that artemisinin has no effect on neutrophil viability, ruling out the possibility of a toxic action of artemisinin as a basis for its inhibition of neutrophil chemotaxis (Supplementary Fig. 2). The SARS-CoV-2 spike protein was also a potent chemoattractant for neutrophils in our in vitro assay, and artemisinin also strongly inhibited chemotaxis up a gradient of SARS-CoV-2 (Fig. 1D). Figure 1E shows that artemisinin and artesunate (an artemisinin analogue) are both highly potent inhibitors of neutrophil chemotaxis driven by H2O2, with IC50 ≈ 0.3 nM. Artemisinin and artesunate also strongly inhibit chemotaxis towards a diverse range of other chemotactic signals, including the chemokine CXCL2 (Fig. 1E), the complement factor C5a and the bacterial cell wall component lipopolysaccharide (LPS) (Supplementary Fig. 3).
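The FMI described above reduces each tracked cell to a single directionality score: the net displacement along the gradient divided by the accumulated path length. A minimal sketch of this calculation is given below; the track coordinates and the assumption that the gradient points along +x are illustrative, not data from the study.

```python
# Minimal sketch (not from the paper): forward migration index (FMI) for one cell
# track, defined here as net displacement along the gradient direction divided by
# the total path length. Track format (list of (x, y) positions per time point,
# gradient along +x) is an assumption for illustration.

import math

def forward_migration_index(track, gradient=(1.0, 0.0)):
    gx, gy = gradient
    norm = math.hypot(gx, gy)
    gx, gy = gx / norm, gy / norm
    path_length = 0.0
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        path_length += math.hypot(x1 - x0, y1 - y0)
    # net displacement projected onto the gradient direction
    dx, dy = track[-1][0] - track[0][0], track[-1][1] - track[0][1]
    forward = dx * gx + dy * gy
    return forward / path_length if path_length > 0 else 0.0

track = [(0, 0), (2, 1), (3, -1), (6, 0), (8, 1)]   # toy neutrophil track (µm)
print(forward_migration_index(track))               # ≈ 0.8 -> strongly directional
```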
In each case, the values of IC50 for artemisinin and artesunate were close to 0.3 nM, with none being significantly different. Supplementary Fig. 4 shows that macrophage chemotaxis was also potently inhibited by artemisinin and artesunate, in a similar way to the effects of these compounds on neutrophils. This work identifies artemisinin as a potent inhibitor of neutrophil and macrophage chemotaxis driven by a wide variety of chemoattractant agents.

Figure 1. (A) Forward migration index (FMI, vertical axis), the mean ratio of distance travelled in the direction of the chemoattractant gradient to total distance travelled, is a measure of chemoattraction (see details in ref 8). Thirty-one compounds from a natural compound library were tested at 10 µM. Five compounds (red boxes) were selected on the basis that they caused the greatest inhibition of FMI, together with the greatest reduction in speed of movement (Supplementary Fig. 1A). Each bar shows mean ± SEM from n = 3 mice. Statistical analysis: for comparison with Control: **p < 0.01, ***p < 0.001, ****p < 0.0001 (one-way ANOVA and Tukey-Kramer post-hoc test). (B) Out of the five compounds inhibiting chemoattraction towards H2O2, only artemisinin inhibited migration up a gradient of ADPR, a direct activator of TRPM2. All neutrophils in each experiment were from the same batch; FMI in the absence of inhibitor is consistent within batches but the maximum value varies somewhat between batches. Each bar shows mean ± SEM from n = 3 mice. Statistical analysis: comparison with ADPR alone: ***p < 0.001, ns = not significant (one-way ANOVA and Tukey-Kramer post-hoc test). (C) Artemisinin (10 μM) completely inhibits neutrophil migration up a 10 nM gradient of H2O2 (FMI not significantly different from that in the absence of an H2O2 gradient), while the antimalarials pyrimethamine, hydroxychloroquine, mefloquine and lumefantrine (all 10 μM) have no inhibitory effect. Each bar shows mean ± SEM from n = 3 mice. Comparison with H2O2: ***p < 0.001, ns = not significant (one-way ANOVA and Tukey-Kramer post-hoc test). (D) Covid spike protein (SARS-CoV-2, 100 nM) is a potent neutrophil chemoattractant and chemoattraction is inhibited by artesunate (10 μM).
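The IC50 values quoted above can be obtained by fitting a Hill-type inhibition curve to FMI measurements made at a series of inhibitor concentrations. The sketch below shows one way to do this; the concentration-response values are invented placeholders, not data from the study.

```python
# Minimal sketch (not from the paper): estimating IC50 by fitting a Hill-type
# inhibition curve to FMI measured at several inhibitor concentrations.
# Requires SciPy; the data points are invented placeholders.

import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(conc, fmi_max, ic50, n):
    return fmi_max / (1.0 + (conc / ic50) ** n)

conc_nM = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])        # placeholder doses
fmi     = np.array([0.58, 0.55, 0.45, 0.30, 0.15, 0.06, 0.02])    # placeholder FMI

params, _ = curve_fit(hill_inhibition, conc_nM, fmi, p0=[0.6, 0.3, 1.0])
print(f"IC50 ≈ {params[1]:.2f} nM, Hill coefficient ≈ {params[2]:.2f}")
```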
Mechanism of inhibition of chemotaxis by artemisinin. A number of active analogues of artemisinin
have been developed for use as antimalarials, including arteether, artemether and artesunate (structures shown in Supplementary Fig. 5). All are rapidly metabolised in vivo to dihydroartemisinin (DHA), a more metabolically stable analogue with a longer in vivo half-life (~1.3 h) than any of its precursors 15,16. All of these analogues, including the stable metabolite DHA, showed an equally high potency in inhibiting neutrophil chemotaxis towards a range of chemotactic signals (Fig. 1E,F and Supplementary Fig. 3; IC50 ≈ 0.3 nM for all analogues). These experiments show that none of the chemical modifications in these artemisinin analogues impacts on a site critical for the inhibitory action of artemisinin on chemotaxis. An unusual feature of artemisinin is the endoperoxide 1,2,4-trioxane ring (top left in Supplementary Fig. 5). We found that deoxyartemisinin, which lacks the peroxide bridge but is otherwise identical to artemisinin, is completely inactive in inhibiting neutrophil chemotaxis (Fig. 2A), showing that the presence of the peroxide bridge is essential for the action of artemisinin on chemotaxis. The critical role of the peroxide suggests that artemisinin may inhibit its protein target by oxidation. It has been known for many years that hydrogen peroxide can oxidise the sulfhydryl group in cysteine, and that this reaction depends on free ferrous iron 17. We therefore investigated whether the action of artemisinin on neutrophil chemotaxis also depends on iron. Removing ferrous iron with the specific chelator desferrioxamine completely abrogated the ability of both artemisinin and artesunate to inhibit neutrophil chemotaxis at all concentrations (Fig. 1E,F). Antagonism by desferrioxamine of the inhibition of chemotaxis by artemisinin was independent of whether H2O2, a chemokine, C5a or lipopolysaccharide (LPS) was used as the chemoattractant (Supplementary Fig. 6). These observations suggest that artemisinin and its active derivatives may inhibit their protein target not by reversible antagonist binding, as has previously been supposed 18,19, but instead by covalent modification of a cysteine residue, catalysed by Fe2+. Artemisinin and its analogues have been shown to be capable of alkylating both cysteine itself 20 and the central cysteine residue in a cysteine-containing tripeptide, glutathione 21, by oxidising and combining with the cysteine sulfhydryl (Supplementary Fig. 5B,C).

Figure 2. (A) Each bar shows mean ± SEM from n = 3 mice. Statistical analysis: for comparison with H2O2: **, p < 0.01; ns = not significant (one-way ANOVA and Tukey-Kramer post-hoc test). (B) Artemisinin does not block TRPM2 ion channels. Patch clamp recording of membrane current from a TRPM2-transfected HEK293 cell at +80 mV (orange) and −80 mV (blue); TRPM2 ion channels were activated by the inclusion of 1 mM ADPR in the intracellular patch clamp solution. The moment of breaking through to whole-cell mode is shown by the arrow. Artemisinin (10 μM) has no effect on membrane current (fractional current change 0.99 ± 0.04 at −80 mV, 0.97 ± 0.03 at +80 mV, neither significantly different from 1.0, n = 6), while the known TRPM2 inhibitor N-(p-amylcinnamoyl) anthranilic acid (ACA, 20 μM) suppresses membrane current at both membrane voltages (fractional current change 0.02 ± 0.01 at −80 mV, 0.06 ± 0.01 at +80 mV, both significantly different from 1.0, p < 0.0001, n = 6). (C) Neutrophil forward migration index (FMI) in a gradient of H2O2 (10 nM, bar 2) and CXCL2 (10 nM, bar 5) is abolished by artesunate (10 μM) and by the selective SERCA inhibitor thapsigargin (50 nM). Each bar shows mean ± SEM from n = 4 experiments with neutrophils from 4 mice. Statistics: ***, p < 0.001 compared to DMEM control; ####, p < 0.0001 compared to H2O2 or CXCL2 (p < 0.001, one-way ANOVA and Tukey-Kramer post-hoc test). (D) Application of the SERCA inhibitor thapsigargin (black trace, 50 nM) to a neutrophil releases calcium from intracellular stores (ratio measurement with fura-2, see Methods). Calcium influx from the external medium was prevented with 0 Ca2+/2 mM EGTA (application time shown by bar at top). A similar dose-dependent release of intracellular store calcium is seen with artemisinin (pink, 10 μM; green, 100 nM; blue, 1 nM), showing that artemisinin is a SERCA inhibitor. The increase of calcium on readmission of external Ca2+ is due to activation of store-operated calcium entry (SOCE) following store discharge and is similar in all cases, showing that artemisinin does not affect SOCE. Calcium release by artemisinin (10 μM) is inhibited by the Fe2+ chelator desferrioxamine (DesF, light pink, 50 μM), but DesF has no effect on calcium release by thapsigargin.
SERCA is the cellular target of artemisinin.
Neutrophil chemotaxis depends on the ability of chemoattractants to generate leading-edge calcium "pulses" that determine the direction of cell migration 8 . Supplementary Video 1 shows the generation of calcium pulses in a neutrophil migrating up a gradient of H 2 O 2 (left-hand video), and the complete suppression of calcium pulses, together with chemotaxis, in the presence of artemisinin (second-left video). In the presence of a gradient of ADPR, calcium pulses drive chemotaxis in a similar way to H 2 O 2 , and artemisinin also inhibits both calcium pulses and chemotaxis (pair of videos on right). These experiments suggest that artemisinin prevents chemotaxis by inhibiting the generation of leading-edge calcium pulses.
In previous work we showed that chemotaxis driven by H 2 O 2 depends on activation of the TRPM2 ion channel 8 . The importance of a calcium influx via TRPM2 for chemotaxis driven by H 2 O 2 is shown in Supplementary Fig. 7. In this experiment, neutrophils were loaded with the calcium chelator BAPTA, which completely suppressed the intracellular calcium increase caused by activation of TRPM2 by H 2 O 2 ( Supplementary Fig. 7A). In the absence of this TRPM2-mediated calcium increase, neutrophil chemotaxis towards H 2 O 2 was abolished ( Supplementary Fig. 7B).
We next carried out patch-clamp experiments on TRPM2 heterologously expressed in HEK293 cells in order to test whether artemisinin might inhibit chemotaxis by blocking TRPM2. TRPM2 was activated by alternate positive and negative voltage pulses (Fig. 2B). Artemisinin had no significant effect on the current carried by TRPM2, in contrast to the known TRPM2 blocker ACA, which caused prompt and near-complete current inhibition.
A second reason for discarding TRPM2 as a target is that artemisinin inhibits, with equal potency, chemotaxis towards H2O2, ADPR and cyto/chemokines. Chemotaxis activated by H2O2 and ADPR depends on activation of TRPM2 8, but chemotaxis activated by cyto/chemokines depends on a separate pathway not involving TRPM2 8. The schematic diagram in Supplementary Fig. 12 (steps 1-4) shows how H2O2 activates calcium influx through TRPM2, which in turn generates leading-edge calcium "pulses" that steer chemotaxis 8. Leading-edge calcium pulses generated by cyto/chemokines and chemoattractants such as LPS 8, on the other hand, depend on a separate pathway independent of TRPM2 (Supplementary Fig. 12, steps 6, 7). The ability of artemisinin to inhibit chemotaxis activated by each of these two distinct pathways implies that the action of artemisinin must be at a point common to both pathways, such as the sarcoplasmic and endoplasmic reticulum calcium ATPase (SERCA) that is responsible for refilling subcellular calcium stores, or the store-operated calcium entry mechanism (SOCE) that mediates calcium entry and store refilling following discharge of subcellular stores 22. Both SERCA and SOCE have been shown to be functional in neutrophils 23.
Thapsigargin, a potent and selective SERCA blocker 24 , completely inhibited neutrophil chemotaxis towards both H 2 O 2 and the chemokine CXCL2, in a similar way to the inhibition caused by artemisinin (Fig. 2C), consistent with the idea that both thapsigargin and artemisinin exhaust the internal calcium stores that are necessary to drive chemotaxis 8 . Thapsigargin evoked an increase in internal calcium concentration in neutrophils in the complete absence of external calcium (Fig. 2D, black trace), that must be due to release from internal stores because no calcium influx across the surface membrane is possible. The calcium release was followed by a return to baseline levels as cytoplasmic calcium was extruded by surface membrane calcium pumps. When intracellular calcium stores had been exhausted, readmission of external calcium caused a sustained calcium increase, attributable to store-operated calcium entry (SOCE) carried via activation of calcium-selective Orai channels in the surface membrane 25 . The protocol shown in Fig. 2D therefore shows a way of separating a potential inhibitory action of artemisinin on SERCA and on SOCE.
Artemisinin evoked a dose-dependent increase in neutrophil intracellular calcium similar to that seen with thapsigargin ( Fig. 2D), showing that artemisinin, like thapsigargin, acts to release calcium from intracellular stores of neutrophils and therefore may be a SERCA inhibitor. Artemisinin was effective down to a concentration of 1 nM in releasing calcium from intracellular stores, consistent with the high potency of artemisinin in inhibiting neutrophil chemotaxis (IC 50 ≈ 0.3 nM, Fig. 1E,F). The intracellular calcium release evoked by thapsigargin was unaffected by the Fe 2+ chelator desferrioxamine, but calcium release by artemisinin was completely suppressed ( Fig. 2D), results that echo the effect of Fe 2+ chelation on chemotaxis (Fig. 1E,F). At all concentrations of artemisinin, the profile of SOCE following readmission of calcium was similar to that caused by thapsigargin, showing that artemisinin does not interact with SOCE. These experiments are consistent with SERCA being the downstream target of artemisinin in neutrophils.
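The protocol of Fig. 2D separates the two calcium signals quantitatively: the transient evoked in Ca2+-free medium reports store release (SERCA inhibition), and the sustained rise after Ca2+ readmission reports SOCE. A minimal sketch of this quantification from a fura-2 ratio trace is given below; the synthetic trace, sampling rate and time windows are illustrative assumptions.

```python
# Minimal sketch (not from the paper): quantifying store release and SOCE from a
# fura-2 340/380 ratio trace, each measured relative to the pre-drug baseline.
# The trace, sampling rate and time windows are invented placeholders.

import numpy as np

def store_release_and_soce(ratio, t, baseline_win, zero_ca_win, readd_win):
    """ratio: fura-2 ratio samples; t: time (s); windows: (start, end) in s."""
    def mean_in(win):
        lo, hi = win
        return float(np.mean(ratio[(t >= lo) & (t < hi)]))
    baseline = mean_in(baseline_win)
    release_peak = float(np.max(ratio[(t >= zero_ca_win[0]) & (t < zero_ca_win[1])]))
    soce_plateau = mean_in(readd_win)
    return release_peak - baseline, soce_plateau - baseline

t = np.arange(0, 600, 1.0)                                  # 10 min, 1 Hz
trace = np.full_like(t, 0.8)                                # placeholder baseline ratio
trace[(t > 120) & (t < 180)] = 1.4                          # store-release transient
trace[t > 400] = 1.2                                        # SOCE after Ca2+ readmission
release, soce = store_release_and_soce(trace, t, (0, 100), (100, 300), (450, 600))
print(f"store release Δratio = {release:.2f}, SOCE Δratio = {soce:.2f}")
```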
Artemisinin irreversibly inhibits SERCA3. Thapsigargin is toxic to mammals 26 , while artemisinin has an excellent clinical safety record as an antimalarial, a difference that could arise from selective inhibition by artemisinin of a non-critical mammalian SERCA isoform, in contrast to the known ability of thapsigargin to inhibit all three SERCA isoforms equally 24 . SERCA1 is critical for muscle contraction, while SERCA2 is widely expressed in many essential organs 27 . Inhibition of either isoform would therefore be likely to cause significant toxicity. SERCA3, on the other hand, has a more limited expression pattern, which includes expression in immune cells 27 . These considerations suggest that SERCA3 may be the target of artemisinin in neutrophils.
In Fig. 2E-G we overexpressed mammalian SERCA1, 2 or 3 in HEK293 cells and then used the protocol shown in Fig. 2D to test for SERCA inhibition by thapsigargin or artemisinin. Thapsigargin released calcium from intracellular stores with a similar time course when applied to all SERCA isoforms, consistent with its ability to inhibit all isoforms equally 24. Artemisinin, in contrast, did not release calcium from intracellular stores in cells transfected with SERCA1 and 2 (Fig. 2E,F) but released calcium with a similar time course to thapsigargin in cells transfected with SERCA3 (Fig. 2G). Removal of Fe2+ with desferrioxamine did not affect the ability of thapsigargin to inhibit SERCA3, but completely prevented inhibition of SERCA3 by artemisinin (Fig. 2G). Thapsigargin released calcium from intracellular stores of naïve HEK293 cells but artemisinin did not (Fig. 2H), consistent with expression of SERCA2 in HEK293 cells, which are derived from the kidney, where SERCA2 is the principal isoform 27. As shown in Fig. 2D, artemisinin releases calcium from intracellular stores of neutrophils, consistent with the known expression of SERCA3 in cells of the immune system 27. These experiments show that SERCA3 is the mammalian target of artemisinin. Thapsigargin inhibits SERCA isoforms by binding reversibly to a location between membrane-spanning helices 3 and 7, deep within the motile machinery of the calcium pump 28. The experiments above show that inhibition of SERCA3 by artemisinin depends, on the other hand, on its unusual peroxide bond, not present in thapsigargin, together with the presence of Fe2+ as a probable catalyst, suggesting a different mechanism involving irreversible covalent binding, likely to a cysteine residue. In Fig. 2I we used the protocol shown in Fig. 2D to compare the reversibility of SERCA inhibition by thapsigargin and artemisinin. Following exhaustion of calcium stores by thapsigargin, and consequent calcium influx via SOCE on readmission of external calcium, the intracellular calcium level returned slowly to its normal level over the 20 min following removal of thapsigargin, showing that SERCA had reactivated and intracellular stores had refilled, thus switching off SOCE. Readmission of thapsigargin again released calcium from intracellular stores, followed by reactivation of SOCE when extracellular calcium was readmitted, confirming the reversibility of thapsigargin binding to SERCA. However, when the same experiment was repeated using artemisinin, elevated calcium levels due to activation of SOCE persisted after store discharge, showing that stores had not refilled and that SERCA3 inhibition had therefore been maintained. On reapplying artemisinin in zero calcium, very little calcium release was observed, consistent with the lack of store refilling (Fig. 2I). This experiment confirms that inhibition of SERCA3 by artemisinin is essentially irreversible on the time scale used, in contrast to the reversible inhibition by thapsigargin.
Artemisinin and analogues suppress in vivo neutrophil invasion in response to H 2 O 2 .
The potent action of artemisinin and its analogues in suppressing neutrophil chemotaxis in vitro suggests that these compounds may have a similar action in vivo, and therefore may potentially be useful as therapeutics in conditions such as ARDS and Covid-19 where excess immune cell invasion is an important driver of the pathology. We measured neutrophil invasion into mouse peritoneum following intraperitoneal injection of 10 μM H 2 O 2 , a concentration that we have found in previous work to have a maximal effect in activating neutrophil chemotaxis in vitro 8 . The time course of neutrophil invasion in response to i.p. H 2 O 2 is shown in Fig. 3A. In this experiment total cell counts are shown; the background level of c. 2 × 10 6 cells (lower dotted line) is attributable to the presence of tissue-resident macrophages 8 . Following injection of H 2 O 2 , neutrophil invasion causes the cell count to rise rapidly, reaching a peak of 6.5 × 10 6 cells at 60 min, a level that is maintained until 120 min, followed by a return to baseline over a further 90 min. Injection of artesunate s.c. 30 min prior to injection of H 2 O 2 largely suppressed the neutrophil invasion up to 120 min, at which time the effect diminished owing to the short in vivo lifetime of artesunate and its active metabolite dihydroartemisinin 15,16 . In agreement, Supplementary Fig. 8 shows that 10 μM H 2 O 2 i.p. strongly activated an influx of neutrophils, and that neutrophil invasion was largely suppressed by injections of either artemisinin or artesunate at 28 mg/kg s.c., 30 min prior to injection of H 2 O 2 , with a slightly lesser effect at 6 mg/kg, a dose close to a typical clinically-used dose for artesunate of 2.4 mg/kg i.v. The similar in vivo inhibition by artemisinin and artesunate mirrors the similar actions of these two analogues in inhibiting neutrophil chemotaxis in vitro (Fig. 1E,F).
Excess release of cytokines/chemokines is thought to be critical in the pathology of conditions such as Covid-19 in which immune cell invasion plays an important role 1,2 . In Fig. 3B-E we used ELISA to measure the concentration of two pro-inflammatory cytokines, IL-1β and IL-6, and two chemokines, CXCL1 and CXCL2. In each case, the profile of increase following injection of H 2 O 2 is similar to the profile of neutrophil invasion, rising from a low level to a broad peak at 60-120 min, followed by a return to undetectable levels by 210 min, a time at which the level of invading neutrophils had declined back to baseline. The suppression caused by prior injection of artesunate is striking, with the cytokine/chemokine increase near-completely abolished in all cases up to 120 min. An increase is seen at 150 min, in line with the recovery of neutrophil chemotaxis as the effect of artesunate wears off (Fig. 3A).
The release of neutrophil extracellular traps (NETs) from neutrophils may also augment the damaging effect of cytokines 3,5,29 . In Fig. 3F we examined the release of NETs by assaying cell-free DNA release. The profile is broadly similar to the release of cytokines; NET release shows a broad peak at 30-150 min, followed by a decline to low levels by 210 min as neutrophil invasion reverses. Artesunate completely inhibits NET release at times earlier than 150 min. An alternative assay of NET release using fluorescence microscopy showed a similar increase in NETs in response to in vitro application of LPS, also abolished by artemisinin (Supplementary Fig. 9).
A similar experiment carried out with infusion of H 2 O 2 into the lung shows that neutrophil invasion, cytokine and chemokine release and NET release are all strongly suppressed by artesunate, as was found in the peritoneum (Supplementary Fig. 10). In summary, the release of cyto/chemokines and NETs in response to H 2 O 2 parallels neutrophil invasion in both peritoneum and lung, and the ability of artesunate to inhibit neutrophil invasion has a striking effect in preventing the release of proinflammatory cytokines/chemokines and NETs.
Artemisinin and analogues suppress in vivo neutrophil invasion in response to LPS and SARS-CoV-2 spike protein. Lipopolysaccharide (LPS), a constituent of the cell wall of gram-negative bacteria, plays a critical role in the interactions of many bacterial pathogens with the innate immune system 30 (Supplementary Fig. 12). In the experiment shown in Supplementary Fig. 11, we tested the ability of LPS to induce invasion of neutrophils into the peritoneum and the effect of artemisinin on this invasion. Neutrophil invasion into the peritoneum in response to LPS was activated more slowly than that induced by H 2 O 2 , so we sampled invasion at 5 h, and gave three doses of artemisinin s.c. at intervals of 2 h to maintain systemic levels of artemisinin throughout this time. LPS activated a neutrophil invasion that was similar in magnitude to that induced by H 2 O 2 , and the invasion was also largely suppressed by artemisinin (Supplementary Fig. 11A). The production of cytokines IL1-β and IL-6 and chemokines CXCL1 and CXCL2 was also strongly suppressed by artemisinin (Supplementary Fig. 11B-E), as was NET release (Supplementary Fig. 11F).
The invasion of neutrophils into the lung has been proposed to be critical for the pathogenesis of Covid-19 1-3,5 . We therefore tested whether artemisinin and its analogues are effective in suppressing neutrophil invasion into the lung, and what effect these treatments have on cytokine/chemokine and NET release. Figure 4A shows that lung neutrophil invasion in response to LPS was strongly suppressed by artesunate at both 28 mg/kg and 6 mg/kg, the latter dose being close to the clinically used dose of 2.4 mg/kg. Production of the pro-inflammatory cytokines IL1-β and IL-6, and chemokines CXCL1 and CXCL2, was strongly suppressed (Fig. 4B-E). In addition, the release of NETs, as assayed from DNA release, was also inhibited (Fig. 4F).
A similar experiment was conducted using the SARS-CoV-2 spike protein as chemoattractant (Fig. 5). We found that the peak of neutrophil invasion in response to the SARS-CoV-2 spike protein was delayed compared to LPS, so we assayed neutrophil invasion and the release of cyto/chemokines and NETs at 24 h and maintained levels of artesunate throughout this period by regular injections (see legend to Fig. 5). As was seen with LPS injection, artesunate reduced the invasion of neutrophils into the lungs and also almost totally abolished the release of pro-inflammatory cyto/chemokines and NETs. The dose of 6 mg/kg, close to the dose of 2.4 mg/kg used clinically for malaria, gave approximately the same level of suppression as a higher dose of 28 mg/kg, suggesting that the clinical dose regime used for malaria would also be adequate for treating conditions such as ARDS and Covid-19.
In summary, these experiments show that artemisinin and its analogues potently suppress neutrophil invasion into both peritoneum and lung in response to a wide range of pathological stimuli, and also almost totally inhibit release of cytokines, chemokines and NETs, suggesting that artemisinin may be useful therapeutically in treating conditions such as ARDS and Covid-19 in which cyto/chemokine and NET release are important contributors to morbidity.
Artemisinin directly suppresses release of cytokines, chemokines and NETs.
A notable feature of the data presented in Figs. 3-5 and Supplementary Figs. 10 and 11 is that the inhibition by artemisinin of cyto/chemokine and NET release is in every case greater than the inhibition of neutrophil entry, suggesting that artemisinin may have a dual action: to suppress neutrophil chemotaxis, and in addition to directly suppress release of cyto/chemokines and NETs. In the experiment shown in Fig. 6 we examined the action of the chemoattractants H 2 O 2 and LPS on isolated neutrophils in order to investigate the possibility of a direct action of artemisinin, independent of inhibition of neutrophil chemotaxis.
A concentration of 10 μM H 2 O 2 , which maximally activates chemotaxis 8 , caused a small but significant enhancement of release of IL-1β, IL-6, CXCL1 and NETs (Fig. 6A-E). The enhancement caused by LPS (10 ng/ml), however, was in each case 1-2 orders of magnitude greater (Fig. 6F-J). In each case the enhanced release caused by both H 2 O 2 and LPS was completely suppressed by artemisinin, and the action of artemisinin was in turn completely antagonised by the ferrous iron chelator desferrioxamine. These experiments highlight a second action of artemisinin, distinct from its action of inhibiting neutrophil chemotaxis, in directly suppressing release of cyto/chemokines and NETs. The mechanism of this action is currently unknown but appears to be distinct from the action on chemotaxis, suggesting the existence of a second target of artemisinin that controls the release of inflammatory mediators from neutrophils. A second target for artemisinin would not be surprising, as previous studies have also shown artemisinin to have broad effects on a number of systems in malarial parasites, including glycolytic pathways, haemoglobin degradation, antioxidant defence and protein synthesis 31,32 .
Figure 3 legend (panels A-F): neutrophil invasion into the peritoneum following i.p. injection of H 2 O 2 (10 µM in PBS, 10 μl/g body weight), peaking at 60 min and reversing by 210 min, and its suppression by artesunate (6 mg/kg s.c., 30 min before H 2 O 2 ); IL-1β, IL-6, CXCL1 and CXCL2 in peritoneal lavage measured by ELISA and NET release quantified with the PicoGreen kit; mean ± SEM from n = 3-4 mice; ANOVA with Bonferroni post-hoc correction.
Discussion
The work described here shows that artemisinin and its active analogues are potent inhibitors of mammalian neutrophil and macrophage chemotaxis. We find that artemisinin inhibits chemotaxis by blocking the generation of leading-edge calcium signals that are required for innate immune cell chemotaxis. The target of artemisinin in inhibiting chemotaxis is the SERCA3 calcium pump isoform that is responsible for filling neutrophil intracellular stores with calcium, with the effect that intracellular stores are emptied and leading-edge calcium signals can therefore no longer be generated. Artemisinin inhibits only one isoform, SERCA3, out of the three mammalian SERCA isoforms, a selectivity that explains the lack of toxicity of artemisinin when used clinically as an antimalarial. We also find that artemisinin and its analogues are highly effective at reducing neutrophil chemotaxis and inhibiting cytokine/chemokine and NET release both in vivo and in vitro, and in both peritoneum and lung. There are several reasons for thinking that the mechanism of action of artemisinin is the same for inhibition of neutrophil chemotaxis and killing of malaria parasites. In both cases the potency is high (IC 50 ≈ 5 nM for malarial killing 33 vs. IC 50 ≈ 0.3 nM for inhibition of neutrophil chemotaxis, see Fig. 1); efficacy is completely abolished in both cases by replacing the unusual peroxide bridge with a single oxygen (ref. 18 and Fig. 2A); and the action depends in both cases on low micromolar concentrations of free ferrous iron as a catalyst (refs 18,34 and Fig. 1E,F). Thus, discovering the mechanism of action of artemisinin in inhibiting neutrophil chemotaxis is likely to give clues to the mechanism of action in killing malaria parasites. Understanding the molecular basis of the anti-malarial action of artemisinin will open up the possibility of designing novel antimalarials based on the artemisinin scaffold, which may become essential in the face of growing malarial resistance to artemisinin and its analogues. Previous work has identified multiple targets of artemisinin in the malaria parasite that are covalently modified by artemisinin 31,32 , but in these studies the artemisinin-derived probes were used at a concentration three orders of magnitude or more above the IC 50 value of 0.3 nM for mammalian SERCA3 found in the present study, so the possibility of a more selective effect at lower concentrations of artemisinin cannot be excluded. The lack of toxicity of artemisinin in mammals, which express three SERCA isoforms, is explained because the critical isoforms SERCA1 and 2 are insensitive to artemisinin (Fig. 2E-G). Malaria parasites, on the other hand, express a single SERCA isoform (also known as PfATP6) 18 . Malarial SERCA was proposed some years ago to be the target of artemisinin 18 , but subsequent studies did not confirm this work 35,36 and the idea has remained controversial in the field.
The work in the present paper suggests that malarial SERCA is indeed likely to be the target of artemisinin, as was originally proposed 18 . How can artemisinin achieve selective inhibition of SERCA3 but not the closely-related isoforms SERCA1 and SERCA2? Alkylation of a specific cysteine residue in SERCA3 could be achieved if a high-affinity binding pocket for artemisinin was located adjacent to the target cysteine residue in SERCA3 but not in other isoforms. The SERCA pump undergoes large structural rearrangements during its active cycle 37 , and it is therefore plausible that the addition of a bulky residue such as artemisinin, coupled irreversibly to a cysteine residue in a critical location, could be responsible for inhibiting the calcium transporter function.
Here we also show that artemisinin and its analogues are potent inhibitors of neutrophil invasion into peritoneum and lung in vivo in response to chemoattractants such as H 2 O 2 , LPS and the SARS-CoV-2 spike protein from the virus that causes Covid-19. The knowledge that an important target of artemisinin is SERCA3 gives a molecular basis for past empirical studies using artemisinin in rodent models of lung inflammation and sepsis in vivo [38][39][40][41][42][43][44][45] . These studies have shown that artemisinin and its analogues inhibit cytokine release, reduce lung pathology and significantly enhance survival in response to insults such as lung infusion of lipopolysaccharide or bleomycin, exposure to cigarette smoke, and inflammation caused by systemic sepsis, and moreover that artemisinin appeared to have no adverse effects, even at large doses [38][39][40][41][42][43][44][45] . Our results complement these studies by showing that artemisinin and its analogues inhibit cytokine/chemokine release following injection of both LPS, a bacterial cell wall component, and the SARS-CoV-2 spike protein. Moreover, they also show a striking effect in inhibiting NET release.
Is inhibition of neutrophil chemotaxis the only mechanism by which artemisinin blocks the release of cyto/chemokines and NETs? While simply preventing the entry of neutrophils into organs such as lung or peritoneum undoubtedly makes an important contribution to inhibiting the release of pro-inflammatory factors such as cytokines and NETs in vivo, the work shown here suggests that a more direct inhibition also makes an important contribution, for two reasons: the inhibition of neutrophil chemotaxis in vivo is less complete than the inhibition of release of pro-inflammatory factors; and artemisinin has a potent effect on release of pro-inflammatory factors in vitro. An important second target of artemisinin, whose inhibition blocks synthesis or release of pro-inflammatory factors, therefore remains to be discovered.
Together with previous work, the results presented here suggest that artemisinin may have value in enhancing survival in conditions such as sepsis, ARDS and Covid-19. Much of the work presented in this paper formed the basis of a proposal to the World Health Organisation (WHO) for the use of artesunate as a therapy for patients seriously ill with Covid-19. This idea is now in clinical trials as part of the 'SOLIDARITY' initiative [46][47][48] .
Materials and methods
Animals. Black C57BL/6 WT mice (6-8 weeks old) were purchased from Charles River Inc.
Isolation of mouse peritoneal neutrophils and macrophages. In vitro chemotaxis experiments. Mice were injected i.p. with 3% thioglycolate solution (10 μl/g) and, after 4 h (for neutrophils) or 4 d (for macrophages), were euthanised by cervical dislocation. The peritoneal-covering skin was removed and 5 ml PBS was injected into the peritoneal cavity, which was massaged gently for 60 s to dislodge cells. The peritoneal fluid was gently extracted by syringe and centrifuged for 10 min at 200 RCF. The supernatant was discarded and cells were resuspended in DMEM + 10% FBS. These methods generated cell suspensions containing > 90% of either neutrophils or macrophages, identified through a fast-acting modified version of the May-Grünwald-Giemsa staining and subsequent cell type identification as shown in ref. 8 (neutrophils) and Supplementary Fig. 3 (macrophages). For in vivo experiments, peritoneal fluid was extracted as above, and samples of the suspensions were immediately spun down onto glass slides using a cytocentrifuge (Sigma 2-7 Cyto, Shandon, Germany; as described below) and leukocytes (neutrophils, macrophages) identified through a fast-acting modified version of the May-Grünwald-Giemsa staining and subsequent cell type identification as shown in ref. 8 . The remaining cell suspension was then centrifuged for 10 min at 200 RCF and supernatants were collected and frozen at − 20 °C for cyto/chemokine analysis by ELISA and cf-DNA (NET) quantification using the Quant-iT PicoGreen kit (Thermo Fisher).
Isolation of mouse BALF neutrophils.
The nostrils of briefly anaesthetised mice were infused with H 2 O 2 , LPS or SARS-CoV-2 spike protein and, at experimental time points (see lung methods below), mice were euthanised by destruction of the brain. The mice were placed in the supine position, limbs were secured and the skin around the neck was removed. Salivary glands were separated to reveal the sternal hyoid muscle and forceps used to incise the muscle around the trachea. A cotton suture was then threaded under the tracheal tissue. A needle was then used to puncture the middle of the trachea between two cartilage rings and a pre-made plastic catheter was inserted ~ 0.5 cm into the tracheal lumen and stabilised with the suture. A syringe, loaded with 1 ml PBS, was then attached to the catheter and PBS slowly injected. The thorax was massaged gently for 60 s, before BAL fluid was aspirated. This was repeated 3 times to maximise the BAL fluid recovery. Samples of the BAL fluid were immediately spun down onto glass slides using a cytocentrifuge (as described below) and neutrophils identified through a fast-acting modified version of the May-Grünwald-Giemsa staining and subsequent cell type identification as shown in ref. 8 . The remaining cell suspension was then centrifuged for 10 min at 200 RCF and supernatants were collected and frozen at − 20 °C for cyto/chemokine analysis by ELISA and cf-DNA (NET) quantification using the Quant-iT PicoGreen kit (Thermo Fisher).
Cell identification in peritoneal and BALF extracts.
Cell suspension was isolated from peritonea/lungs of WT mice as above, spun down onto glass slides using a cytocentrifuge at 400 RPM for 5 min and left to air-dry overnight. A modified version of the May-Grünwald-Giemsa staining was used to identify cell types (RAL DIFF-QUIK kit, RAL diagnostics). Slides were suspended in RAL Diff-Quik fixative solution (a methanol-based solution to stabilize cellular components) for 1 min, in RAL Diff-Quik solution I (Xanthene solution; a buffered solution of Eosin Y) for 1 min and in RAL Diff-Quik solution II (a buffered solution of thiazine dyes, consisting of methylene blue and Azure A) for 1 min. Nuclei were meta-chromatically stained red/purple and cytoplasm pink/yellow (see ref 8 and Supplementary Fig. 4).
Neutrophil and macrophage chemotaxis assays. Ibidi µ-slide chemotaxis assay chambers, precoated with collagen IV along the central migration strip, were purchased from Thistle Scientific Ltd (Uddingston, Glasgow, UK). Neutrophils or macrophages, isolated as above from peritonea of WT mice, were re-suspended within 30 min of collection in DMEM + 10% FBS at a concentration of 5 × 10 5 cells per ml and 6 µl was seeded along the central migration strip of an Ibidi µ-slide chamber as per the manufacturer's instructions. Slides were incubated for 1 h at 37 °C in humidified 95% air/5% CO 2 , to allow neutrophil/macrophage adherence to the central migration strip. DMEM (without added FBS) with and without added chemoattractant was then added to the wells on opposite sides of the central migration strip. DMEM was from Thermo Fisher Scientific Cat. No. 41966-029. For experiments in which effects of compounds were to be tested, equal concentrations were added to both DMEM + chemoattractant and DMEM wells. Slides were pre-incubated at 37 °C in 95% air/5% CO 2 for 20 min to allow the generation of a gradient of chemoattractant across the 1 mm wide × 70 μm deep central cell migration strip. Live-cell time-lapse microscopy was then conducted using a 10 × lens and dark-field illumination on a Nikon Eclipse Ti-E inverted microscope equipped with the Nikon Perfect Focus System (PFS). The microscope was housed in a temperature-controlled Perspex box (Solent Scientific) at 37 °C, with slides housed in a stage-mounted block in humidified 95% air/5% CO 2 . A maximum of 12 individual chambers (4 individual slides, 3 chambers per slide) could be imaged per experiment by using a motorized stage. Stage movement, lens focus and image acquisition were controlled by Nikon NIS Elements software. Experiments were conducted over 2 h for neutrophils and 1 h for macrophages, with images of each assay compartment taken every 2 min. The ImageJ Fiji TrackMate plug-in was employed to track individual neutrophils/macrophages. A chemotaxis and migration plug-in, provided by Ibidi, was used to calculate speed and forward migration index (FMI) data from the neutrophil/macrophage tracks. For further details see ref 8 .
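The speed and forward migration index can be illustrated with a short computation on cell tracks. The following Python sketch is not the Ibidi plug-in itself; the track layout, sampling interval and gradient orientation are assumptions made for illustration only.

```python
import numpy as np

def track_metrics(track, dt_min=2.0, gradient_axis=1):
    """Mean speed and forward migration index (FMI) for one cell track.

    track: (N, 2) array of x, y positions in micrometres, sampled every dt_min minutes.
    gradient_axis: coordinate axis assumed to point up the chemoattractant gradient
                   (depends on chamber orientation; an assumption here).
    """
    track = np.asarray(track, dtype=float)
    steps = np.diff(track, axis=0)                        # frame-to-frame displacements
    path_length = np.sum(np.linalg.norm(steps, axis=1))   # accumulated distance (um)
    total_time = dt_min * (len(track) - 1)                # observation time (min)
    speed = path_length / total_time                      # mean speed (um/min)

    # FMI: net displacement along the gradient divided by accumulated path length
    net_forward = track[-1, gradient_axis] - track[0, gradient_axis]
    fmi = net_forward / path_length if path_length > 0 else 0.0
    return speed, fmi

# Example with a synthetic track that drifts up the gradient (illustrative data only)
rng = np.random.default_rng(0)
synthetic = np.cumsum(rng.normal([0.5, 2.0], 1.0, size=(60, 2)), axis=0)
print(track_metrics(synthetic))
```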
Calcium imaging of neutrophils. Neutrophils, isolated as above from the peritonea of WT mice, were re-suspended in DMEM + 10% FBS at a concentration of 5 × 10 5 per ml. Neutrophils were plated onto a collagen-coated 13 mm round glass coverslip and incubated at 37 °C in 95% air/5% CO 2 for 1 h to allow neutrophils to adhere. Fura2-AM (5 µM in DMEM) was then added to the cells on the coverslip for 30 min at 37 °C in 95% air/5% CO 2 . Solutions were changed as shown in the figures and fluorescence was measured during alternating illumination at 340 nm and 380 nm (OptoScan; Cairn Research Inc, Kent, UK) every 2 s using a Nikon Eclipse Ti inverted microscope with a 40 × lens and iXon 897 EM-CCD camera controlled by WinFluor 3.2 software. F 340/380 ratios were obtained using FIJI (ImageJ) and converted to calcium concentrations using the equation given by Grynkiewicz et al. with values R max = 2.501, R min = 0.103, both determined experimentally.
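As a hedged illustration of that conversion, the sketch below applies the standard Grynkiewicz ratiometric equation using the R min and R max values quoted above; the Fura-2 dissociation constant and the scaling factor beta are illustrative assumptions, not values reported in this section.

```python
def fura2_ratio_to_ca(R, R_min=0.103, R_max=2.501, Kd_nM=224.0, beta=1.0):
    """Convert a Fura-2 F340/380 ratio to [Ca2+] (nM) with the Grynkiewicz equation:

        [Ca2+] = Kd * beta * (R - Rmin) / (Rmax - R)

    R_min and R_max are the experimentally determined values quoted in the text;
    Kd_nM (Fura-2 dissociation constant) and beta (Sf2/Sb2 scaling factor) are
    assumptions used for illustration only.
    """
    if not (R_min < R < R_max):
        raise ValueError("Ratio outside calibrated range")
    return Kd_nM * beta * (R - R_min) / (R_max - R)

print(round(fura2_ratio_to_ca(0.5), 1))   # resting-like ratio
print(round(fura2_ratio_to_ca(1.5), 1))   # stimulated-like ratio
```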
For experiments in which calcium signals during chemotaxis up a gradient of chemoattractant were to be recorded (as in Supplementary Video 1), 1 µl of Fura-2 AM solution (50 µg Fura-2 AM + 10 µl pluronic F-127 + 10 µl DMSO) was added to 500 µl of peritoneal neutrophil suspension and incubated for 1 h at 37 °C in 95% air/5% CO 2 . Fura-2 loaded cells in suspension were seeded into Ibidi chambers as described above and imaged in a Nikon Ti-E microscope with a 40 × phase contrast lens. Fast-moving neutrophils located in the middle of the central cell migration strip were selected, with typically only one cell imaged per field. Calcium ratio images were obtained with alternating 340 nm and 380 nm epi-illumination supplied by stable LED light sources (Fura-LED, Cairn Research), at 500 ms intervals. All images were filtered by a broad-band 510 nm filter and captured with a Photometrics Prime 95B sCMOS camera. Stage movement, focus and image acquisition were controlled by Nikon NIS Elements software. The ImageJ Fiji RatioPlus plug-in was used to generate F 340/380 ratio images and a rainbow look-up table (LUT) was applied to the ratio images to indicate the level of calcium. For further details see ref 8 .
Loading cells with BAPTA-AM.
To determine the effect of intracellular calcium chelation on intracellular calcium levels and chemotaxis induced by H 2 O 2 , extracted mouse peritoneal neutrophils were re-suspended in DMEM + 10% FBS at a concentration of 5 × 10 5 per ml and, for chemotaxis experiments, were incubated with or without BAPTA-AM (50 µmol/l, Stratech Scientific Ltd) for 30 min. To measure the effect of BAPTA on intracellular calcium levels, neutrophils were also incubated with Fura2-AM as described above.
Transfection of HEK293 cells. Human embryonic kidney HEK293 cells were split at a confluency of 80%, resuspended in media to a concentration of 7 × 10 4 cells per ml and 0.5 ml was plated into a four-well plate containing 13 mm glass coverslips pre-coated with poly-d-lysine (1 mg/ml), ready for transfection the following day. Cells were transfected with 0.5 µg of a plasmid containing cDNA for SERCA1, 2 or 3 using a modified calcium-phosphate protocol, as previously described 49 . Cells were used for calcium imaging 2 d post-transfection. Rat SERCA1a (pMT2) was a gift from Jonathan Lytton (Addgene plasmid # 75182; http://n2t.net/addgene:75182; RRID: Addgene_75182) 50 .
Patch clamp. Transfection of HEK293 cells with TRPM2, a kind gift from Prof Y. Mori, University of Kyoto, Japan, was carried out as described above. Manual whole-cell patch clamp recording was carried out as previously described 53 . TRPM2 ion channels were activated by the inclusion of 1 mM ADPR in the intracellular patch clamp solution.
XTT cell viability assay. Peritoneal neutrophils, isolated as above, were seeded into four individual 96-well plates (2 × 10 5 /well) and incubated for 1 h at 37 °C in 95% air/5% CO 2 to allow adherence. Artemisinin was then added to half of the wells on all plates at a 10 µM concentration. Following incubation for 0, 12, 24 or 48 h, respectively, 50 µL of XTT/PMS solution was added to all wells, and plates were incubated for a further 2 h, before absorbance was analysed on a FLUOstar Omega microplate reader (BMG LABTECH, Buckinghamshire, UK) at 450 nm.
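A minimal sketch of how viability could be expressed relative to untreated controls from the 450 nm absorbances is given below; the replicate layout and blank correction are assumptions, since the text does not specify how the readings were summarised.

```python
import numpy as np

def percent_viability(a_treated, a_control, a_blank=0.0):
    """Viability of artemisinin-treated wells relative to untreated controls.

    a_treated, a_control: 450 nm absorbances from replicate wells;
    a_blank: absorbance of medium-only wells (assumed available; not stated in the text).
    """
    treated = np.mean(np.asarray(a_treated) - a_blank)
    control = np.mean(np.asarray(a_control) - a_blank)
    return 100.0 * treated / control

# Illustrative values only, not measured data
print(round(percent_viability([0.82, 0.79, 0.85], [0.84, 0.81, 0.86], a_blank=0.05), 1))
```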
In vivo peritoneal H 2 O 2 chemotaxis experiments. WT mice were injected s.c. with either sham or artemisinin/artesunate (either 28 mg/kg or 6 mg/kg for both) 30 min prior to being injected i.p. with H 2 O 2 (10 µM in PBS, 10 µl/g body weight) or PBS alone for the control baseline group. Mice were then euthanised at time points from 10 to 210 min and peritoneal lavage was extracted and cell types identified as described above, before supernatants were analysed for cytokines/chemokines by ELISA and for NETs by cf-DNA quantification.
In vivo peritoneal LPS chemotaxis experiments. WT mice were injected s.c. with either sham or artemisinin (28 mg/kg) 30 min prior to being injected i.p. with LPS (30 ng/cavity) or PBS alone for the control group. Further sham/artemisinin s.c. injections were administered at 90 and 210 min, before mice were euthanised at 300 min and peritoneal lavage was extracted and cell types identified as described above, before supernatants were analysed for cytokines/chemokines by ELISA and for NETs by cf-DNA quantification.
Lung BALF H 2 O 2 chemotaxis experiments. WT mice were injected s.c. with either sham or artesunate (28 mg/kg or 6 mg/kg) 30 min prior to having H 2 O 2 (10 µM in PBS) or PBS alone for the control group infused into both nostrils. Mice were euthanised after 60 min and bronchoalveolar lavage fluid (BALF) was extracted and cell types identified as described above, before supernatants were analysed for cytokines/chemokines by ELISA and for NETs by cf-DNA quantification (see below).
Lung BALF LPS and SARS-CoV-2 spike protein chemotaxis experiments. WT mice were injected s.c. with either sham or artesunate (28 mg/kg or 6 mg/kg) 30 min prior to having LPS (300 ng in PBS per lung), SARS-CoV-2 spike protein (25 μg in PBS per lung) or PBS alone for the control group infused into both nostrils. Further sham/artesunate s.c. injections were administered at 90 and 210 min, before mice were euthanised at 300 min and BALF was extracted and cell types identified as described above, before supernatants were analysed for cytokines/chemokines by ELISA and for NETs by cf-DNA quantification.
Analysis of cytokines and chemokines in peritoneal and lung fluid. At the indicated times after injection of the stimuli (H 2 O 2 , LPS or SARS-CoV-2 spike protein), animals were terminally anesthetized and the peritoneal lavage or BALF was collected in PBS. IL-6, IL-1β, CXCL1 and CXCL2 concentrations were measured by enzyme-linked immunosorbent assay (ELISA) using commercial kits (DuoSet; R&D Systems) as previously described 54 . The results are expressed as pg/mL of each cytokine/chemokine. As a control, concentrations of these cytokines/chemokines were measured in mice injected with vehicle (PBS).
Quantification of cell free DNA (NETs) in peritoneal and lung fluid. Peritoneal lavage or BALF were collected at different time points after injection of stimuli (H 2 O 2 , LPS or SARS-CoV-2 spike protein) and the amount of cell free DNA (cf-DNA) was quantified using the Quant-IT™ PicoGreen® kit (Thermo Fisher) according to the manufacturer's instructions. The fluorescence intensity (excitation at 488 nm and emission at 525 nm wavelength), a measure of the amount of dye bound to DNA, was quantified by a fluorescence reader (FlexStation 3 Microplate Reader, Molecular Devices, CA, USA) as previously described 55 . The results are expressed as ng/mL of cf-DNA.
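For illustration, the sketch below fits a four-parameter logistic standard curve and interpolates sample concentrations, which is the usual way absorbances from sandwich ELISAs are converted to pg/mL; the curve model and the standard values shown are assumptions, as the text does not state how the kit standards were fitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic: absorbance as a function of standard concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** (-hill))

def fit_standard_curve(std_conc, std_abs):
    """Fit the 4PL model to the kit standards (pg/mL vs absorbance)."""
    p0 = [min(std_abs), max(std_abs), float(np.median(std_conc)), 1.0]
    params, _ = curve_fit(four_pl, std_conc, std_abs, p0=p0, maxfev=10000)
    return params

def interpolate(sample_abs, params):
    """Invert the fitted curve to obtain a sample concentration in pg/mL."""
    bottom, top, ec50, hill = params
    return ec50 * ((top - bottom) / (sample_abs - bottom) - 1.0) ** (-1.0 / hill)

# Illustrative standards (pg/mL) and absorbances, not real kit data
std_conc = np.array([15.6, 31.2, 62.5, 125, 250, 500, 1000])
std_abs = np.array([0.08, 0.15, 0.28, 0.50, 0.85, 1.30, 1.75])
params = fit_standard_curve(std_conc, std_abs)
print(round(float(interpolate(0.6, params)), 1), "pg/mL")
```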
Imaging of NETs. Extracted mouse peritoneal neutrophils were re-suspended in DMEM + 10% FBS at a concentration of 5 × 10 5 per ml and incubated with or without 10 ng/ml LPS (4 h). To examine the effect of artemisinin, neutrophils were pre-treated with 10 µM artemisinin (30 min before LPS incubation). Samples were then incubated for 1 h with Sytox green nucleic acid stain (5 µM) (Thermo Fisher Scientific). Cells were plated onto coverslips and illuminated using 488 nm wavelength light at 10 × or 60 × magnification to visualise release of DNA from the neutrophils as NETs. Cells were classed as having released NETs if the diameter of the fluorescent area was > 2 × the average for untreated cells.
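The size criterion for NET-positive cells can be written as a short classification step; the sketch below assumes that equivalent diameters of the Sytox-positive areas have already been measured from the images, which is not described in the text.

```python
import numpy as np

def classify_nets(diameters_um, untreated_diameters_um):
    """Flag cells as NET-positive if their fluorescent-area diameter exceeds
    twice the mean diameter of untreated cells (the criterion stated above).

    Extraction of diameters from the Sytox green images (e.g., by thresholding and
    measuring equivalent diameters) is assumed to have been done upstream.
    """
    threshold = 2.0 * np.mean(untreated_diameters_um)
    flags = np.asarray(diameters_um) > threshold
    return flags, 100.0 * flags.mean()   # per-cell flags and % NET-positive

# Illustrative diameters (um), not measured data
flags, pct = classify_nets([9, 11, 25, 30, 10], untreated_diameters_um=[8, 9, 10, 11])
print(flags, round(pct, 1), "% NET-positive")
```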
|
v3-fos-license
|
2020-01-30T09:04:21.815Z
|
2019-01-01T00:00:00.000
|
214183146
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/0354-9542/2019/0354-95421948135S.pdf",
"pdf_hash": "dcb1d65722b9442e055342eebfd8914961476bd3",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44549",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "961486325b1e81578a760935403f0d460c7ad990",
"year": 2019
}
|
pes2o/s2orc
|
The influence of the length of fattening and gender of the lambs on the thickness of the subcutaneous fatty tissue
In this research, three groups of 12 lambs (6 male and 6 female) of the Pirot improved breed were examined. The first group of lambs was fattened for 60 days, the second for 120 and the third for 180 days. Until weaning (40 days), the lambs were fed their mothers' milk. After 40 days, they were switched to a pelleted concentrate (18% protein) and quality hay, both offered ad libitum. At the end of fattening, the lambs were slaughtered using the standard technique. The objective of this study was to determine the influence of the length of fattening and the gender of the lambs on the thickness of the subcutaneous fatty tissue. Differences in subcutaneous fat measured dorsally, medially and laterally at the intersection between the 12th and 13th vertebrae were significant (P<0.01) in both male and female lambs across all three groups. At the intersection on the lateral side between the 12th and 13th vertebrae there were significant differences (P<0.01) between the first and second and between the first and third groups in both genders. The subcutaneous fatty tissue of females was thicker than that of male lambs at all measured locations. However, significant differences were found in the thickness of breast tissue (P<0.05) and dorsally between the 12th and 13th vertebrae (P<0.01) for lambs of the second group. Female lambs of the third group also had thicker subcutaneous fatty tissue, dorsally and medially between the 12th and 13th vertebrae (P<0.05). Received 5 December 2019. Accepted 18 December 2019. Acta Agriculturae Serbica, Vol. XXIV, 48 (2019); 135-142.
Introduction
Fat content is important given its impact on the price of the carcass (Díaz et al., 2001). Some of the measurements for this criterion are the thickness of the dorsal fat, the weight of renal pelvic fat, and the visual assessment of the fat content of the carcass (Díaz et al., 2002;Carrasco et al., 2009).
Another variable used as a general indicator of the quality of the carcass is its conformation (Díaz, 2001), which involves a visual assessment and objective measurements such as the width and depth of the thorax, length of legs, width of the rump or the area of the rib eye, among others (Díaz, 2001).
The influence of intramuscular fat (IMF) on tenderness and juiciness varies depending on the study and the species studied (Wood et al., 2008). With sheep, meat with more marbling or IMF is more valued by sensory panels (Fisher et al., 2000;Wood et al., 2008). Similarly, meat with a higher IMF level has a lower shear force value, which nevertheless does not directly relate the IMF level to the degree of tenderness (Sañudo et al., 2000). The fatty acid composition of the meat is very important given its implications for human health (Givens, 2005) in relation to heart disease and cancer (Wood et al., 2003). The fatty acid composition affects characteristics of the meat like juiciness, flavor, shelf life, and firmness of the fat (Wood et al., 2003).
Material and methods
The experiment included a total of 36 lambs of the Pirot improved breed, divided into 3 groups (6 males and 6 females each) according to the duration of the fattening period, as follows: group I, 60 days of fattening; group II, 120 days of fattening; and group III, 180 days of fattening. The experiment was carried out only on lambs born as singles, at the Djumruk farm on Vlasina Lake, Republic of Serbia, located at an altitude of 1250 m.
The daily ration of the suckling ewes from the beginning of the experiment to the 40th day consisted of hay (1.8 kg per head), silage (1.5 kg per head) and concentrate (0.5 kg per head). In the first 10 days, the mothers' milk was present in the diet of the lambs, and from the 11th day until the end of fattening, all three groups of lambs had a pelleted concentrate and quality hay available ad libitum, whose consumption was monitored and recorded every day. The suckling period of the lambs ended on the 40th day of their life. The diet of all three groups of lambs consisted of the pelleted concentrate and quality hay until the end of the experiment; no group was put out to pasture or received any other feed.
After fattening, the lambs were slaughtered at the Jugokop slaughterhouse in Bujanovac, Republic of Serbia, an export-approved facility, meaning that all necessary prerequisites for processing and storage of the meat were met in accordance with strict European standards. Each group of lambs was transported from the farm to the slaughterhouse by truck. Twelve hours before slaughter, feed was withheld from the lambs, while water was available until loading onto the truck. Immediately after the lambs were unloaded into the livestock depot, a visual inspection was carried out by the veterinary inspectorate, which concluded that all the lambs were in good condition and good health and could proceed to slaughter.
The slaughter of the lambs was carried out according to the standard technological procedure, in the following phases: preparation for slaughter; hoisting onto the rail; bleeding; skinning; evisceration; and chilling. After the linear measures were taken, the left half was cut into its basic parts, which were then weighed. The yields of individual tissues (meat, fat, bone) in the main parts of the carcass were then calculated. After slaughter, primary processing and chilling, the halves were cut into the main parts. The carcasses were cut into the following main parts: round, loin, back, shoulder, neck, breast, ribs, foreshank, belly and lower leg.
The carcasses were taken out of the cold room and the depth of soft tissue was measured with a sharpened metal rule. Each joint was dissected, and the lean meat, intermuscular fat and subcutaneous fat were carefully separated from the bones.
Statistical analysis was performed using the analysis of variance for a two-factorial experiment (3 × 2), according to Sokal and Rohlf (1995). Differences between mean values were tested with the Tukey test. A minimal software sketch of this analysis is given below.
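The sketch illustrates a 3 × 2 (fattening length × gender) analysis of variance with Tukey comparisons in Python; the long-format table, column names and file name are hypothetical, and the original analysis followed Sokal and Rohlf (1995) rather than this particular software.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Long-format table assumed: one row per lamb, with fattening group (60/120/180 days),
# gender and the measured fat thickness (mm) at one location.
df = pd.read_csv("fat_thickness.csv")   # hypothetical file name

# Two-factorial (3 x 2) ANOVA with interaction
model = ols("thickness ~ C(group) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey test for pairwise differences between fattening groups
print(pairwise_tukeyhsd(df["thickness"], df["group"]))
```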
Results and discussion
The thickness of the subcutaneous fatty tissue for male lambs (mm) is given in Table 1 and for female lambs (mm) in Table 2, while Table 3 presents the thickness of the subcutaneous fatty tissue according to the gender of the lambs (mm). On the breast and tail, there is a significant difference (P<0.01) between the first and third groups, as well as between the second and third groups, in both male and female lambs. Differences in subcutaneous fat dorsally, medially and laterally at the intersection between the 12th and 13th vertebrae are significant (P<0.01) in both male and female lambs in all three groups. At the intersection on the lateral side between the 12th and 13th vertebrae there are significant differences (P<0.01) between the first and second and between the first and third groups in both genders.
Various authors (Díaz et al., 2002; Cañeque et al., 2003; Karim et al., 2007; Ekiz et al., 2012) reported a higher fatness level, and a concomitant increase in dressing percentage, in lambs fed concentrate in the sheepfold than in lambs fed on pasture. Peña et al. (2005) also reported an increase in the fatness level of the carcass with increasing slaughter weight of lambs. Table 3 shows that the fatty tissue of females is thicker than that of male lambs at all measured locations. However, significant differences were found in the thickness of breast tissue (P<0.05) and dorsally between the 12th and 13th vertebrae (P<0.01) for lambs of the second group. Female lambs of the third group also have thicker subcutaneous fatty tissue, dorsally and medially between the 12th and 13th vertebrae (P<0.05). Table 3. The thickness of the subcutaneous fatty tissue according to gender of the lambs (mm). * - The gender differences are significant at the level P<0.05; ** - The gender differences are significant at the level P<0.01; ns - The gender differences are not significant. The percentage of fat trimmings in the carcass and the tissue composition of the sample cut were influenced by a significant interaction between age-class and sex (P<0.05): in males the age-class never affected the tissue composition of the sample cut, whereas in females the muscle and fat percentages increased with age while the bone percentage decreased. The fat content of loin meat increased with age in females (P<0.05) and decreased in males (P<0.05). The poly-unsaturated fatty acid (FA) content of loin meat was higher in males than in females (P<0.001), with saturated FA and mono-unsaturated FA revealing significant interactions between age-class and sex (P<0.05) (Sabbioni et al., 2016).
Conclusion
Results showed that on the breast and tail there was a significant difference (P<0.01) between the first and third groups, as well as between the second and third groups, in both male and female lambs. Differences in subcutaneous fat dorsally, medially and laterally at the intersection between the 12th and 13th vertebrae are significant (P<0.01) in both male and female lambs in all three groups. At the intersection on the lateral side between the 12th and 13th vertebrae there are significant differences (P<0.01) between the first and second and between the first and third groups in both genders. The subcutaneous fatty tissue of females is thicker than that of male lambs at all measured locations. However, significant differences were found in the thickness of breast tissue (P<0.05) and dorsally between the 12th and 13th vertebrae (P<0.01) for lambs of the second group. Female lambs of the third group also have thicker subcutaneous fatty tissue, dorsally and medially between the 12th and 13th vertebrae (P<0.05).
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2015-10-05T00:00:00.000
|
15893035
|
{
"extfieldsofstudy": [
"Psychology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/bn/2015/707625.pdf",
"pdf_hash": "0ad98419195e041fdd1e920ed3f8ed3713027d74",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44550",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "a5a740cffaa3fbd17a8572b9c2c2c3622d88fbd1",
"year": 2015
}
|
pes2o/s2orc
|
The Influence of Music on Prefrontal Cortex during Episodic Encoding and Retrieval of Verbal Information: A Multichannel fNIRS Study
Music can be thought of as a complex stimulus able to enrich the encoding of an event thus boosting its subsequent retrieval. However, several findings suggest that music can also interfere with memory performance. A better understanding of the behavioral and neural processes involved can substantially improve knowledge and shed new light on the most efficient music-based interventions. Based on fNIRS studies on music, episodic encoding, and the dorsolateral prefrontal cortex (PFC), this work aims to extend previous findings by monitoring the entire lateral PFC during both encoding and retrieval of verbal material. Nineteen participants were asked to encode lists of words presented with either background music or silence and subsequently tested during a free recall task. Meanwhile, their PFC was monitored using a 48-channel fNIRS system. Behavioral results showed greater chunking of words under the music condition, suggesting the employment of associative strategies for items encoded with music. fNIRS results showed that music provided a less demanding way of modulating both episodic encoding and retrieval, with a general prefrontal decreased activity under the music versus silence condition. This suggests that music-related memory processes rely on specific neural mechanisms and that music can positively influence both episodic encoding and retrieval of verbal information.
Introduction
Episodic memory can be defined as a neurocognitive system, uniquely different from other memory systems, which enables human beings to remember past experiences [1]. Numerous studies have investigated the factors that can boost this system. According to the encoding specificity principle [2], the memory trace of an event and hence the properties of effective retrieval cues are determined by the specific encoding operations performed by the system on the input stimuli. Craik and Lockhart [3] first proposed that the durability of the trace depends on the "depth" of encoding processing, deeper semantic processing allowing better encoding of the target information. Furthermore, it has been demonstrated that the encoding context of an event plays a crucial role in successful memory performance. For instance, a rich context given by stimuli with a high (positive or negative) emotional valence can enhance the encoding of contextual information associated with an item [4]. In this scenario, music could offer a perfect example of an enriched context. Indeed, given its complexity as a stimulus that evolves through time and has a strong emotional impact [5], music is likely to enrich the encoding context of an event, thereby improving subsequent memory performance. The evocative power of music is fascinating and undisputed: it can evoke both emotional states and personal events from the past [6]. Several studies have revealed a specific episodic memory for music, showing how it depends largely on emotion [7] and revealing the existence of specific related neural processes [8]. Nevertheless, the question of whether music as an encoding context can enhance episodic memory performance, especially concerning verbal material, remains debatable and controversial. Several studies have shown that music, presented either as background or as sung text, can enhance verbal learning and memory in both healthy and clinical populations [9][10][11][12][13]. However, several authors have recently claimed that music can also draw attention away from the to-be-remembered information, thus interfering in memory performance [14][15][16]. The key to solving this question seems to rely on a better understanding of the processes involved: improving our knowledge of how music can boost memory performance at both behavioral and functional (i.e., neuronal) levels could shed new and essential light on the most efficient music-based paradigms and interventions.
In a series of functional near-infrared spectroscopy (f NIRS) studies, we previously showed that background music during the episodic encoding of verbal material can improve item and source memory performance and modulate prefrontal cortex (PFC) activity [10,11]. More specifically, f NIRS studies have found that music leads to decreased activation (i.e., decrease in oxyhemoglobin-O 2 Hb and deoxyhemoglobin-HHb increase) in the dorsolateral prefrontal cortex (DLPFC), known to be important for organizational, associative, and memory encoding [17]. In view of f NIRS studies showing decreased PFC activity during verbal learning in which subjects were helped during their performance [18,19], we hypothesized that music could modulate episodic encoding by modifying the need of extra organizational and strategic encoding usually attributed to the DLPFC [20] and facilitating the creation of richer associative bindings crucial for subsequent retrieval [10,11]. However, both methodological and theoretical caveats raise important issues. The present work therefore aims to increase our knowledge of music-related memory processes by extending investigations of background music and verbal memory through three main questions arising from these previous studies.
First, existing f NIRS data are limited to the encoding phase, raising the question of which mechanisms are involved during episodic retrieval. Research on episodic memory has clearly demonstrated that in order to understand how memories are formed, we need first to understand many cognitive and neurobiological processes involved in both encoding and retrieval, as well as the interactions among these phases [21]. Furthermore, in the light of the contrasting results in the literature, it is crucial to know whether the music facilitation reflected in decreased PFC activation is also found in the retrieval phase or whether by contrast it shows a more demanding pattern in line with the interference hypothesis. Therefore, in the present study, f NIRS prefrontal data were acquired during both encoding and retrieval of words in order to test the hypothesis that the PFC disengagement found during memory formation is also found during the retrieval phase.
Secondly, previous f NIRS acquisitions were limited to eight channels covering the bilateral DLPFC, thus hindering the possibility of ascertaining the music effect throughout the lateral prefrontal cortex, which is crucial during episodic memory processes [22][23][24]. Ventrolateral and dorsolateral regions of the PFC have been shown to implement different controls that provide complementary support for long-term memory encoding and retrieval. More specifically, during the encoding phase, ventrolateral prefrontal cortex (VLPFC) regions contribute to the ability to select goal-relevant item information and strengthen the representation of goal-relevant features, while DLPFC regions contribute to memory enhancement by allowing associations among items in long-term memory during encoding [17]. Concerning the retrieval phase, several studies on episodic memory retrieval have found a fronto-parieto-cerebellar network, in which several bilateral frontal regions seem to mediate processes that act in the output of episodic retrieval operations (see [22] for a review). It is therefore important to understand whether the observed PFC deactivation is restricted to the dorsolateral region or whether it includes the whole lateral prefrontal cortex. While a delimited prefrontal deactivation would suggest that music specifically modulates certain cognitive processes, a decrease throughout the PFC during the music condition would indicate an overall PFC disengagement and suggest that music-related memory processes rely on music-specific and unusual neural mechanisms. Hence, in the present study, a multichannel (i.e., 48 measurement points) f NIRS system was used to monitor the whole PFC during episodic encoding and retrieval.
The third important point concerns a behavioral issue. Our previous behavioral and functional results led us to explain the findings in terms of associative bindings. A musical context may afford efficient mnemonic strategies allowing the creation of interitem and item-source associations that can help subsequent retrieval. These mnemonic strategies would result in PFC deactivation [18,19]. If confirmed, this would be an important contribution to the existing debate about the complex music-memory issue. However, previous studies used judgment memory tasks, whereby subjects were presented with a copy of old items and had to retrieve and judge whether or not each item had been presented previously (item memory) and, if so, in which context (source memory). Using this paradigm, it was not possible to investigate possible associative processes. Therefore, in the present study we used a free recall task in order to investigate if subjects adopted particular strategies during the retrieval phase.
To extend our knowledge of music-related memory processes and contribute to the current debate, the present study used multichannel f NIRS to test lateral prefrontal activations during music-related encoding and retrieval (i.e., free recall). We asked subjects to memorize lists of words, presented with a background of either music or silence, and to retrieve as many words as possible after an interference phase. We used a 48-channel f NIRS system to monitor their PFC activity bilaterally. Based on the hypothesis that a background of music would modulate PFC activity throughout the memory processing stages, we expected to find less cortical activation during both the encoding and the retrieval phases under the music condition, in line with our previous studies on verbal encoding with music [10,11]. Furthermore, clustered retrieval of previously encoded words for the music condition when compared to the silence condition would suggest that music helps encoding through the implementation of associative strategies.
Participants.
Nineteen young healthy students at the University of Burgundy (13 female, mean age 21.65 ± 3.2 years) took part in the experiment in exchange for course credits. All the participants were right-handed, nonmusicians, and native French-speakers and reported having normal or correctedto-normal vision and hearing. None were taking medication known to affect the central nervous system. Informed written consent was obtained from all participants prior to taking part in the experiment. The study was anonymous and fully complied with the Helsinki Declaration, Convention of the Council of Europe on Human Rights and Biomedicine.
Experimental Procedure. Subjects were seated in front of a computer in a quiet, dimly lit room. After the 48 f NIRS probe-set had been fitted on the forehead overlying the PFC (see f NIRS section below for a detailed description), the in-ear headphones inserted and the sound recorder placed, subjects were informed that they would be presented with different lists of words with two auditory backgrounds: music or silence. They were asked to memorize the lists of words and were told that, after a brief backward counting task, they should mentally recall the previously seen words and then say as many as they could.
Verbal stimuli consisted of 90 taxonomically unrelated concrete nouns selected from the French "Lexique" database ( [25]; http://www.lexique.org/). Words were randomly divided into six lists (15 words per list, 45 words for each condition), matched for word length and occurrence frequency. In the music condition, the background music used in all blocks was the instrumental jazz piece "Crab walk" (by Everything But The Girl, 1994). This excerpt was chosen after a pretest among a list of 8 pieces representing different musical genres (such as classical, jazz, new age) preselected by the authors. All the excerpts were instrumental in order to avoid possible interference between the lyrics and the verbal material to be encoded. The excerpts were evaluated by nonmusician participants in terms of arousal, emotional valence, and pleasantness using a 10-point scale. Participants were also asked to report if the music was familiar or not. The selected piece was chosen for its positive valence, medium arousal quality and for being rated as pleasant and unfamiliar.
Three blocks for each condition (music or silence) were presented to each subject, giving a total of 6 experimental blocks. Each block consisted of three phases, namely, encoding, interference, and retrieval. In the encoding phase, 15 words were displayed successively against a background of music or silence. The auditory stimulation started 15 s before the first word was displayed, continued during the sequential display of words, and ended 15 s after the last word. Words in each block were displayed for 3 s per word, amounting to 45 s for the sequential presentation of 15 words. Each encoding phase therefore lasted 75 s (15 s background, 45 s words, and 15 s background). Verbal stimuli were visually presented in white on black background in the middle of the screen. Auditory stimuli were presented using in-ear headphones, and the overall loudness of the excerpts was adjusted subjectively to ensure a constant loudness throughout the experiment.
Prior to the retrieval phase, subjects were asked to count down from a given number displayed on the screen until the word "stop" appeared. The interference phase lasted 30 seconds.
The retrieval phase was divided into 15 s of a "search for" phase, in which the previous encoding background was presented in the earphones and subjects were asked to mentally recall the previously seen words, and 30 s of a "free recall" phase, in which subjects were asked to say aloud as many words of the previously encoded list as possible. There were two reasons for this procedure: first, to resituate the subjects in the source of encoding, enabling good memory performance (see, e.g., [26,27]); secondly, to have a control condition for possible voice-movement artifacts. A sound recorder was used to record subjects' free recall performance. The retrieval phase lasted 45 s. The total duration of each block was 3 minutes. Each block was followed by a 30 s rest (silent) (Figure 1).
Figure 1: Representation of one encoding-interference-retrieval block between two 30 s rest blocks. Each block consisted of 15 s of context (+ on the screen) alone (music or silence in the earphones), then 45 s of context and word encoding (with either background music or silence), and then again 15 s of context (+) alone. After 30 seconds of interference phase (counting), subjects were asked to search for previously encoded words (search for, with either background music or silence, 15 s) and then to recall as many words as they could (free recall, 30 s).
The order of music/silence blocks was counterbalanced, as were the order of word lists and the order of words within the lists. During the rest periods, subjects were instructed to try to relax and not to think about the task; in contrast, during the context-only phases of the blocks, participants were instructed to concentrate on a fixation cross on the screen and to focus on the task. Presentation of task instructions and stimuli was controlled by E-Prime software (Psychology Software Tools, Inc.) using a laptop with a 15-inch monitor. The entire experimental session, including the fNIRS recording, lasted about 20 minutes.
fNIRS Measurements.
A 48-channel fNIRS system (Oxymon Mk III, Artinis Medical Systems B.V., The Netherlands) was used to measure the concentration changes of O2Hb and HHb (expressed in μM) using an age-dependent constant differential path-length factor given by 4.99 + 0.0067 × age^0.814 [28]. Data were acquired at a sampling frequency of 10 Hz. The 48 fNIRS optodes (24 emitters and 24 detectors, Figure 2(a)) were placed symmetrically over the lateral PFC. The distance between each emitter and detector was fixed at 3 cm. For each hemisphere, fNIRS channels measured the hemoglobin concentration changes at 24 measurement points in a 12 cm² area, with the lowest optodes positioned along the Fp1-Fp2 line and the most central optodes 2 cm from the Cz line [29], in accordance with the international 10/20 system [30]. From top to bottom, these measurement points were labeled 1-24 (see Figure 2(a)). To optimize the signal-to-noise ratio during the fNIRS recording, the 48 optodes were masked from ambient light by a black plastic cap that was kept in contact with the scalp with elastic straps, and all cables were suspended from the ceiling to minimize movement artifacts [31] (Figure 2(b)). During data collection, O2Hb and HHb concentration changes were displayed in real time, and the signal quality and the absence of movement artifacts were verified.
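For illustration, the age-dependent differential path-length factor mentioned above could be computed as in the minimal Python sketch below; the function name and the example age are hypothetical, and the constant is reproduced exactly as printed in the text.

```python
def dpf(age_years: float) -> float:
    """Age-dependent differential path-length factor, as cited above.

    The constant 0.0067 is kept verbatim from the text; the commonly cited
    Duncan et al. formula uses 0.067, so check the original reference
    before reusing this value.
    """
    return 4.99 + 0.0067 * age_years ** 0.814

print(dpf(21.65))  # DPF for the sample's mean age
```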
Data Analysis
Behavioral Data. Memory performance was calculated for each subject under both conditions by computing the total number of correctly retrieved words. Incorrectly retrieved items were considered as intrusions. Paired t-tests were used to compare the free recall memory and intrusion scores in the silence and music conditions. Subjects' possible associative strategies at encoding were examined using cluster analysis, in which the chunks created at retrieval indicated the level of interitem associations at the encoding phase. We therefore calculated the number of items presented in a row (i.e., one following the other) during the encoding phase that were retrieved in chunks, identifying 2-, 3-, 4-, 5-, and 6-word chunks produced by each subject under each condition (e.g., if the subject encoded "bottle", "fork", "match", "coat", and "pool" in the encoding phase and then subsequently retrieved "fork", "match", and "coat" during the free recall task, this constituted a 3-word chunk; if the subject retrieved "bottle", "match", and "coat", this constituted a 2-word chunk). Paired t-tests then compared the ratio of the most consistent chunks (>2 words) to the total number of chunks between the silence and music conditions.
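As a rough illustration of the chunk-scoring logic described above, the sketch below counts maximal runs of recalled words that were presented consecutively at encoding. Variable names and the exact treatment of non-adjacent recalls are assumptions, since the paper does not give a formal algorithm.

```python
def count_chunks(encoded, recalled):
    """Count chunks: maximal runs of recalled items that appeared
    consecutively (one following the other) in the encoding list.
    Returns a dict mapping chunk length (>= 2) to number of such chunks."""
    pos = {w: i for i, w in enumerate(encoded)}   # encoding positions
    chunks = {}
    run = 1
    for prev, curr in zip(recalled, recalled[1:]):
        if prev in pos and curr in pos and pos[curr] == pos[prev] + 1:
            run += 1
        else:
            if run >= 2:
                chunks[run] = chunks.get(run, 0) + 1
            run = 1
    if run >= 2:
        chunks[run] = chunks.get(run, 0) + 1
    return chunks

# Examples from the text
encoded = ["bottle", "fork", "match", "coat", "pool"]
print(count_chunks(encoded, ["fork", "match", "coat"]))    # {3: 1}
print(count_chunks(encoded, ["bottle", "match", "coat"]))  # {2: 1}
```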
fNIRS Data.
In order to eliminate task-irrelevant systemic physiological oscillations, the O2Hb and HHb signals were first low-pass filtered (5th-order digital Butterworth filter with a cut-off frequency of 0.1 Hz) for each of the 48 fNIRS measurement points.
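This filtering step could be reproduced along the lines of the following SciPy sketch; the use of zero-phase filtfilt and the placeholder data are assumptions not stated in the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10.0        # fNIRS sampling rate (Hz), as reported above
cutoff = 0.1     # low-pass cut-off frequency (Hz)
b, a = butter(N=5, Wn=cutoff / (fs / 2), btype="low")  # 5th-order Butterworth

# signals: (n_channels, n_samples) O2Hb or HHb time courses (placeholder data)
signals = np.random.randn(48, 6000)
filtered = filtfilt(b, a, signals, axis=-1)  # zero-phase filtering along time
```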
To determine the amount of activation during the encoding phase for the two conditions, data in each of the 6 experimental blocks were baseline-corrected using the mean of the O2Hb and HHb signals during the last 5 s of the rest phase. We then sample-to-sample averaged (i.e., 10 samples/s) the baseline-corrected signals over the 3 blocks of each condition, yielding one average music and one average silence O2Hb and HHb signal per participant for both the encoding phase and the retrieval phase (both "search for" and "free recall" tasks). We then computed the maximum O2Hb and the minimum HHb delta-from-baseline values over the 45 s (encoding), 15 s ("search for" retrieval), and 30 s ("free recall" retrieval) stimulus windows, for both the music and silence average blocks of each participant and for each channel (see Figure 4). Delta values were then statistically analyzed using a repeated-measures ANOVA with 2 (music/silence condition) × 2 (left/right hemisphere) × 24 (optodes) repeated factors. Paired t-tests were also used to compare each channel, as well as the means of left and right activity, for the silence and music conditions and for each phase of the memory task [31] (see Figure 4).
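A minimal NumPy sketch of the baseline correction and delta-from-baseline extraction described above is given below; array shapes, index conventions, and the prior averaging over blocks are simplified assumptions.

```python
import numpy as np

FS = 10  # fNIRS samples per second

def block_delta(block, stim_onset, stim_len_s, kind="max"):
    """Baseline-correct an averaged block using the mean of the last 5 s of
    rest (the 5 s preceding stim_onset), then return the maximum (O2Hb) or
    minimum (HHb) delta-from-baseline within the stimulus window, per channel.
    `block` is a (n_channels, n_samples) array; indices are hypothetical."""
    baseline = block[:, stim_onset - 5 * FS:stim_onset].mean(axis=1, keepdims=True)
    corrected = block - baseline
    stim = corrected[:, stim_onset:stim_onset + stim_len_s * FS]
    return stim.max(axis=1) if kind == "max" else stim.min(axis=1)

# e.g., encoding phase (45 s window): max O2Hb and min HHb deltas per channel
# o2hb_delta = block_delta(o2hb_avg_block, stim_onset=150, stim_len_s=45, kind="max")
# hhb_delta  = block_delta(hhb_avg_block,  stim_onset=150, stim_len_s=45, kind="min")
```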
Furthermore, in order to ascertain the PFC activation during the entire block of the music/silence encoding and retrieval conditions, we ran a complete group time-series analysis in which we averaged O2Hb, HHb, and total Hb (THb) concentrations over 5 s windows (i.e., one average point for each 5 s) across the encoding, interference, "search for", and free recall phases of each block, yielding 35 successive concentration measures. Time-series data were then analyzed using a repeated-measures ANOVA with 2 (music/silence condition) × 2 (left/right hemisphere) × 24 (optodes) × 35 (points in time) within-subject factors.
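The 5-s window averaging for this group time-series analysis could look like the following sketch; it assumes the concatenated block signal is long enough to yield the 35 reported points.

```python
import numpy as np

FS, WIN_S, N_POINTS = 10, 5, 35   # sampling rate (Hz), window length (s), points per block

def window_average(signal):
    """Average a (n_channels, n_samples) signal over successive 5-s windows,
    returning a (n_channels, N_POINTS) array of mean concentrations."""
    win = WIN_S * FS
    trimmed = signal[:, :N_POINTS * win]          # keep exactly 35 windows
    return trimmed.reshape(signal.shape[0], N_POINTS, win).mean(axis=2)
```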
Behavioral Results.
Paired t-tests on the free recall memory performance and intrusion scores revealed no differences in the total number of correctly retrieved words and false-alarm rates between the music and silence conditions (t(18) = 1.17, p > .05). However, cluster analysis revealed a significant difference between the two conditions concerning the number of chunks created at retrieval. While t-tests on the total number of words retrieved in chunks did not reveal a significant difference between the two conditions (t(18) = −.165, p > .05), a significant difference was found for cluster creation, the data revealing that subjects created more consistent chunks (>2 words) in the music than in the silence condition (t(18) = 2.23, p = .02) (Figure 3).

fNIRS Results.

Figure 4 shows a channel-level analysis of O2Hb delta-to-baseline values for each phase of the memory task (encoding, "search for", and free recall). The repeated-measures ANOVA on O2Hb delta-to-baseline values during the encoding phase showed a main effect of condition, with the whole PFC significantly less activated during encoding with music than with silence, F(1, 18) = 9.78, p = .006. For the retrieval phase, statistical analysis showed similar results for the "search for" and "free recall" tasks. Repeated-measures ANOVA on the "search for" phase revealed a main effect of condition, with higher O2Hb concentrations for retrieval with silence than with music (F(1, 18) = 9.62, p = .006), which was also confirmed in the "free recall" phase (F(1, 18) = 8.75, p = .008). The decreased PFC activation under the music retrieval condition was also supported by higher HHb values (based on the balloon model, see, e.g., [32]) for the music condition (F(1, 18) = 6.93, p = .017 for the "search for" phase, F(1, 18) = 3.56, p = .075 for the "free recall" phase). These results were also confirmed by paired t-tests comparing the mean values of left and right channels for the two conditions (Figure 4).

Figure 4: O2Hb Δ-to-baseline values during the encoding, "search for", and free recall phases. Red = more activated; green = less activated. The whole prefrontal cortex was significantly less activated in the music condition during the three phases. **, *, and (*) on the music channels show statistically significant differences (resp., p < .01, p < .05, and .05 < p < .09) between the two conditions for each channel. The difference between the two conditions is also shown at the sides of the figure, with black and grey bars representing O2Hb Δ-to-baseline mean values for the silence and music conditions, respectively, in the right and left hemisphere (** for p < .01 from paired t-test comparisons).

Time-Series Analysis.
Discussion
Extending previous studies of verbal memory encoding and music [10,11], the present work investigated music-related episodic encoding and retrieval processes using multichannel fNIRS to monitor cortical oxygenation changes over the lateral PFC during both episodic encoding and retrieval of verbal information.
One of the main findings of this study is that activity decreased under the music condition as compared to the silence condition. In line with our previous experiments, fNIRS results during word encoding revealed that the PFC was significantly more active under the silence condition than under the music condition [10,11]. In the light of fNIRS studies showing PFC deactivation when subjects' memory performance was improved by given strategies or pharmacological stimulants [18,19], we previously interpreted the decreased DLPFC activity during music encoding as a music-related facilitation process. More specifically, we postulated that background music, unlike silence, required less involvement of the DLPFC for organizational [17] and relational interitem processing [33] during verbal episodic encoding. The first new finding of the present study is that the decreased activity under the music condition extended across the entire lateral PFC. As shown in Figure 4, the analysis of musical encoding and retrieval revealed lower O2Hb values in almost all channels. As mentioned in the introduction, the DLPFC and VLPFC are jointly recruited to guide the processing of interitem relational information in working memory, which promotes long-term memory for this relational information [20,34]. In particular, the VLPFC is involved in both relational and item-specific memory formation, and it seems to select goal-relevant features during episodic encoding, thus contributing to subjects' ability to select relevant item information to remember [17,34,35]. Although the limitations of fNIRS channel localization make it hard to identify which lateral prefrontal regions are specifically involved across the memory task, these results suggest that the facilitative effect of a musical background also relies on its capacity to disengage the most ventral part of the PFC from its goal-relevant selective functions. In other words, music may affect the encoding state, not only by disengaging the PFC during specific interitem relational strategies (related to DLPFC activity), but also and more generally by affecting episodic prefrontal functions, namely, the capacity to select the relevant information to remember and strategically organize it for successful memory formation.
Another crucial finding of the present study is that this reduced PFC activation continued into the retrieval phase. Figure 5 shows an example time course of the THb signal across a whole block of encoding and retrieval: although the retrieval phase showed increased activation in both conditions, especially in the most ventral channels (Figure 4), this was always less pronounced for the music condition. In our opinion, this is important for two main reasons. First, the fact that the music-related PFC decrease was observed during both the "search for" phase (with background music) and the "free recall" phase (without background music) excludes the possibility that the observed PFC modulation was due to the presence of auditory stimulation rather than to a specific music effect. Secondly, music provides a less demanding way of modulating the recruitment of PFC areas crucial for long-term manipulation of information and active strategic retrieval [36][37][38], indicating a long-lasting effect. This is particularly important in view of the divergent results in the literature. Indeed, if music constituted a dual-task interference [14,15], we should have observed the highest increase in neural activity for the music condition in at least one of the memory phases, as previously observed in fNIRS studies investigating dual-task situations [39,40]. On the contrary, our results suggest that music-related memory processes rely on specific neural mechanisms underlying a less demanding prefrontal engagement throughout the stages of memory formation and retrieval.
In the light of previous fNIRS studies on memory [18,41,42], our results should also be viewed in terms of the contribution of fNIRS to understanding the role of the PFC in long-term memory processes. Unlike our previous studies, we did not find a main effect of lateralization during word encoding. However, a more thorough time-series analysis revealed a condition × laterality interaction, suggesting higher left and right hemisphere engagement for the silence and music conditions, respectively. Furthermore, a specific lateralization became evident at the retrieval "search for" phase, where we found a left and right lateralization for the silence and music conditions, respectively. This condition by laterality interaction related to the presence of music when subjects tried to retrieve previously encoded words can be interpreted in the light of studies showing that the lateralization of PFC activity during retrieval depends on the availability of verbal codes, with left hemispheric involvement for verbally coded information and right hemispheric activation for nonverbally coded information [43].
A major criticism of PFC fNIRS data concerns the task-evoked changes occurring in forehead skin perfusion [44][45][46][47]; interpretations of PFC activity must therefore be taken with caution. Nevertheless, our findings not only confirm that fNIRS is a good tool for noninvasive investigation of long-term memory [41,48,49], but can also help shed new light on music-related prefrontal episodic memory processes. In particular, we suggest that music is able to modulate all stages of memory processing in a state-dependent manner, enabling the creation of relational links that may constitute efficient mnemonic strategies, as well as the successful retrieval of relevant information. Accordingly, less PFC activity is required to put these strategies to use during either encoding or retrieval. Importantly, this explanation is supported by our behavioral results. Indeed, cluster analysis revealed that participants created significantly more chunks (i.e., formed by >2 words) during the free recall of words previously encoded with music [50]. This would indicate that subjects found it easier to adopt relational-associative strategies to create interitem (and possibly item-source) links during memory formation, which were then used as mnemonic strategies for successful retrieval. However, although we previously found that a musical background can boost item [10] and source [11] memory in recognition tasks, this was not the case for the free recall task, where no difference between music and silence was found in the number of correctly retrieved words. This suggests that behavioral paradigms often fail to characterize a reliable effect of music on memory performance, even when imaging methods are able to detect a music-related effect.
Considering that many authors claim that music hampers encoding and leads to negative results, while other studies report positive behavioral outcomes, it remains important to discuss when and how music can help memory performance. In our opinion, it is crucial to note that many kinds of paradigms using many kinds of music stimuli exist in the literature and can hence lead to contrasting results. In the present study, we used a pleasant musical background with a positive emotional valence and medium arousal quality, with the specific idea that music can constitute a helpful encoding context. The results can therefore be discussed in the frame of an enriched context (see, e.g., [51,52]) given by the presence of the music, in which many mechanisms (arousal-mood modulation, emotions, and reward) intervene to orchestrate the final music-related positive effect. From this perspective, the observed music-dependent prefrontal modulation raises new questions about the interpretation of this specific pattern of decreased PFC oxygenation and the related facilitation. The mechanisms underlying PFC deactivation are a matter of debate and can reflect several neural processes. Some explanations can be found in relation to BOLD signal decreases, which usually correspond to an O2Hb decrease and an HHb increase in the fNIRS signal. A BOLD decrease is usually interpreted as a deactivation that reflects a focal suppression of neural activity [53,54], and several explanations have been proposed to clarify such deactivation. For instance, Harel et al. [55] claimed that a BOLD decrease can be due to the diversion ("stealing") of blood from less active regions toward areas with the highest cerebral blood flow demand. Therefore, the observed fNIRS prefrontal pattern could reflect a higher activation in other brain regions. The present multichannel fNIRS paradigm in part elucidated this question by investigating not only the DLPFC [10,11], but the entire PFC. Considering the different tasks attributed to the different regions of the PFC for episodic encoding and retrieval [17], it was reasonable to think that music could be more demanding for regions surrounding the DLPFC. Results revealed a prefrontal decrease in almost all the fNIRS channels, suggesting a widespread and coherent prefrontal disengagement. However, such disengagement could be related to a greater activation in other (i.e., nonprefrontal) regions [55] that therefore needs to be further investigated. Raichle and colleagues [54] proposed that such a reduction of neuronal activity might be mediated through a reduction in thalamic inputs to the cortex during attention-demanding cognitive tasks or through the action of diffuse projecting systems like dopamine (see also [56]). fNIRS studies showing deactivation in nonverbal tasks (e.g., video games) have tried to interpret it in terms of attention-demanding tasks [57]. Nevertheless, this hypothesis seems in conflict with other fNIRS studies investigating prefrontal responses to attention tasks. Indeed, several authors have shown that alertness or attention states significantly increase rather than decrease PFC activation [58,59]. The dopamine system may also be responsible for PFC deactivation [54]. Dopamine is a neurotransmitter strongly associated with the reward system: it is released in regions such as the ventral tegmental area (VTA), nucleus accumbens, or PFC as a result of rewarding experiences such as sex and food, but also music [60,61].
Therefore, if reduced prefrontal activation can be related to the action of diffuse dopamine systems, and the positive effect of music may likewise be related to reward-emotional responses, it is possible that music-related reward mechanisms play a crucial role in helping subjects engage successful verbal memory processes, reflected in PFC disengagement.
Another crucial point to consider concerns the strong relationship between music and language, which has been clearly shown at both the behavioral and neurophysiological levels (see, e.g., [62]). It is therefore possible that, among the possible general mechanisms discussed above, more language-specific processes may directly intervene during the encoding of verbal material with music. More specifically, our findings suggest that semantic-associative mechanisms may be activated more easily in the presence of a musical background, thus resulting in greater clustering during the free recall task. A good example is represented by what participants reported in an informal post-task metacognitive follow-up: when asked how difficult they found the task, many of the subjects suggested that music helped them in creating stories (i.e., bindings) among items and between items and music. For example, if the words "pool" and "glass" were presented in succession and music was present, participants reported that these words were easier to remember because of the creation of a little story in their mind (e.g., "I imagined myself drinking a glass of wine while playing pool in a bar with a jazzy atmosphere"). In this case, the musical context may help in creating new connections between the items and the source itself, namely, new episodes that participants can then retrieve during their subjective mental "time travel" [1], as reflected by the behavioral findings. Further neurophysiological investigations (e.g., investigating gamma and theta oscillations, shown to bind and temporally order perceptual and contextual representations in cortex and hippocampus [63]) could in this case elucidate possible item-source binding processes, and further research is therefore needed in this domain.
Taken together, our results can be seen in the general framework of the music and memory literature, supporting the idea that music can help verbal memory processes and that associative strategies facilitated by the presence of a musical background may explain memory enhancement. Given the increasing need to understand when and through which mechanisms music is able to stimulate cognitive functions, these results offer, in our opinion, an important contribution to the existing literature and open interesting perspectives on music-based rehabilitation programs for memory deficits.
Conclusions
The aim of this study was to focus on the prefrontal processes involved in music-related episodic memory. More specifically, we wanted to extend previous findings of prefrontal deactivation in the encoding phase of verbal material to the whole prefrontal cortex and also to the retrieval phase.
Overall, fNIRS findings show that music may specifically act on and modify normal cortical activity; namely, it can modulate activity across the entire lateral PFC during both encoding and retrieval in a less demanding way. In particular, our results suggest that music-related strategic memory processes rely on specific neural mechanisms recruited throughout the stages of memory formation and retrieval. These findings are supported by behavioral evidence indicating music-related associative strategies in the recall of verbal information and offer interesting perspectives for music-based rehabilitation programs for memory deficits.
The Influence of Form- and Meaning-Based Predictions on Cortical Speech Processing Under Challenging Listening Conditions: A MEG Study
Under adverse listening conditions, prior linguistic knowledge about the form (i.e., phonology) and meaning (i.e., semantics) helps us to predict what an interlocutor is about to say. Previous research has shown that accurate predictions of incoming speech increase speech intelligibility, and that semantic predictions enhance the perceptual clarity of degraded speech even when exact phonological predictions are possible. In addition, working memory (WM) is thought to have a specific influence over anticipatory mechanisms by actively maintaining and updating the relevance of predicted vs. unpredicted speech inputs. However, the relative impact on speech processing of deviations from expectations related to form and meaning is incompletely understood. Here, we use MEG to investigate the cortical temporal processing of deviations from the expected form and meaning of final words during sentence processing. Our overall aim was to observe how deviations from the expected form and meaning modulate cortical speech processing under adverse listening conditions and investigate the degree to which this is associated with WM capacity. Results indicated that different types of deviations are processed differently in the auditory N400 and Mismatch Negativity (MMN) components. In particular, MMN was sensitive to the type of deviation (form or meaning) whereas the N400 was sensitive to the magnitude of the deviation rather than its type. WM capacity was associated with the ability to process phonological incoming information and semantic integration.
HIGHLIGHTS:
-Mismatch Negativity amplitude reflects the difficulty in phonological sensory perception.
INTRODUCTION
The predictive brain hypothesis (Friston, 2009; Clark, 2013) describes the brain as an anticipatory organ that can generate predictions about the causal structure of the external world, based on the top-down influence of knowledge stored in long-term memory (Bar, 2007; Winkler et al., 2009; Friston, 2012). At the neural level, this is made anatomically possible by plastic corticopetal-corticofugal loops through which changes in activity at higher levels of the brain affect neural coding at lower levels, including subcortical nuclei (Khalfa et al., 2001). In the domain of language comprehension, this predictive mechanism is thought to be crucial given that the speed and perceived ease with which complex speech signals are processed are influenced by the extent to which linguistic and contextual predictions have been preactivated (for a review, see Federmeier, 2007). Even though it has been observed in several studies that predictions can be generated at multiple levels (e.g., phonological, semantic) during language comprehension (for a review, see Kuperberg and Jaeger, 2016), it remains unclear whether deviations from expectations at different levels have a different impact on speech processing (for a review, see Nieuwland, 2019). Numerous studies have shown that predictions about the form (i.e., phonology) and the meaning (i.e., semantics) of speech increase both its intelligibility (e.g., Miller et al., 1951; Davis and Johnsrude, 2007; Zekveld et al., 2011, 2013) and its perceptual clarity (Wild et al., 2012; Signoret et al., 2018; Signoret and Rudner, 2019). This facilitative effect could explain the enhanced perception of a speech event for which we already have knowledge stored in long-term memory, a phenomenon that leads to improved speech detection at a phonological level (see the "speech detection effect" in Signoret et al., 2011), better speech recognition at a lexical level (see the "word detection effect" in Signoret et al., 2011), and facilitated speech categorization at a semantic level (Daltrozzo et al., 2011; Rönnberg et al., 2019). Additionally, predictions about form and meaning have been shown to have an additive and independent facilitative effect on speech perception in that the meaning can still enhance the perceptual clarity of degraded speech even when total reliance on the form is possible (Signoret et al., 2018; Signoret and Rudner, 2019), suggesting that predictions about the form and the meaning could have different kinds of impact on neural speech processing.
Meaning-Based Prediction Effects on Speech Processing
Several studies have indicated that meaning-based predictions play an important role in speech perception (for a review, see Van Petten and Luka, 2012), especially under adverse listening conditions (Obleser et al., 2007; Sheldon et al., 2008). It has even been proposed that predictions about meaning have a stronger impact than predictions about form (see, for example, Ito et al., 2016). Indeed, recent behavioral results showed a facilitative effect of meaning-based predictions on speech comprehension and learning, but no effect of form-based predictions (see Experiments 1 and 2 in Corps and Rabagliati, 2020). Meaning-based predictions were also shown to be more robust than form-based predictions in a visual word experiment monitoring eye fixations (Ito et al., 2018). Participants fixated more often on picture targets and meaning-related pictures than on form-related or unrelated pictures after hearing sentences in which the final word was correctly expected. These behavioral observations were corroborated at a neural level with effects indexed by the N400 component, which is an evoked potential originally observed in EEG studies typically between 200 and 600 ms after stimulus onset (for a review, see Kutas and Hillyard, 1980; Kutas and Federmeier, 2011) in a distributed network including at least the left posterior part of the middle temporal gyrus (Brouwer and Hoeks, 2013). This component is modulated by the processing of meaning-based predictions, with larger N400 amplitudes observed in response to unexpected or less expected words in a sentence than in response to highly expected words (see, for instance, Lau et al., 2009, 2013; Obleser and Kotz, 2010; Wang et al., 2012; Maess et al., 2016). In an EEG study investigating the temporal decay of meaning- and form-based predictions of final words in a sentence reading task (Ito et al., 2016), N400 amplitudes were larger for unrelated (i.e., deviant) stimuli than for stimuli whose meaning could be predicted, irrespective of the time allowed to generate the prediction. Moreover, N400 amplitudes were also larger for unrelated stimuli than for stimuli whose form could be predicted, but only when participants had a long time (i.e., 700 ms) to predict the final word, suggesting that meaning-based predictions could be generated faster than form-based predictions.
Form-Based Prediction Effects on Speech Processing
Considering that knowledge-based predictions pre-activate representations about the form of an upcoming word (DeLong et al., 2005), it is likely that form-based predictions can bias processing toward a limited set of phonological combinations (Ylinen et al., 2016). Although Nieuwland et al. (2018) were unable to replicate the N400 effect demonstrated in the study by DeLong et al. (2005), Nieuwland (2019) suggested that pre-activation of form is apparent in earlier brain responses. This hypothesis is in line with previous results showing that the perceived clarity of speech was greater when contingent on form-based predictions rather than meaning-based predictions, especially under adverse listening conditions (see Signoret et al., 2018; Signoret and Rudner, 2019). The difference in speech processing between form- and meaning-based predictions might then be observed in early neural activity, such as in Mismatch Negativity (MMN) amplitudes. MMN effects are elicited by any deviation from standard, expected events and reflect an automatic expression of change detection in neural predictions with regard to incoming auditory stimuli (for a review, see Näätänen et al., 2007). MMN amplitude modulation has been observed, for example, in phoneme discrimination (Näätänen et al., 1997) and localized to the auditory cortex (Poeppel et al., 1997). The MMN is reported to have a larger amplitude for unexpected than expected events at a mean latency of about 160-170 ms (Schwade et al., 2017) and is most prominent in the left hemisphere (Shestakova et al., 2002). The MMN effect is thus considered a viable index of predictive coding (Friston, 2012) and useful for the study of form-based representations in the brain.
The Role of Working-Memory in Speech Processing
Current models of language understanding, such as the Ease-of-Language Understanding (ELU) model (Rönnberg et al., 2008), emphasize the integration of stimulus-driven and knowledge-based processes (McClelland and Elman, 1986; Hickok and Poeppel, 2007) when processing speech under adverse listening conditions. Such conditions are regularly encountered in everyday life situations where the perceived quality of speech signals can be affected by external factors in the form of background noise (e.g., in supermarkets, train stations, or classrooms) and signal distortion (e.g., phone calls) or by internal factors such as hearing impairment. There is an inverse relationship between the quality of the speech signal and reliance upon knowledge-based predictions (see, for instance, Rogers et al., 2012; Rönnberg et al., 2013; Peelle, 2018) such that, the more the speech signal is degraded, the more the brain needs to rely on knowledge stored in long-term memory to predict the contents of incoming speech signals (Rönnberg et al., 2008). This reliance is reflected at a cognitive level through the engagement of working memory (WM), which is where knowledge-based predictions are likely to be formed and maintained during language understanding.
Predicted speech events are processed with ease and make few demands on explicit WM processing (cf. the prediction role, Rönnberg et al., 2019), while unpredicted or mispredicted (i.e., deviant) events require more explicit processing and load on WM capacity (cf. the postdiction role, Rönnberg et al., 2019). A central role of WM is therefore to compare relevant knowledge-based contents active in memory with stimulus-driven processing for monitoring prediction error (Friston, 2012). This process may explain a variety of findings that correlate WM capacity with speech processing proficiency in adverse listening conditions, where higher WM capacity is associated with better performance (Akeroyd, 2008; Besser et al., 2013; Rudner and Signoret, 2016). As phonology is proposed to be the bottleneck of lexical access in implicit and rapid information processing (Rönnberg et al., 2008), the reliance upon WM is probably greater when the form of the perceived content is deviant. At a neural level, WM capacity is plausibly reflected in the N400 component, with higher WM capacity associated with smaller N400 effects (Kim et al., 2018) in processing deviant compared to expected stimuli.
Overview of the Current Study
The purpose of the present study was to explore how deviations from expectations modulate cortical speech processing under adverse listening conditions. Based on the literature outlined above, we designed an experiment in which we explored how the MMN and N400 components were affected by deviations from form- and meaning-based expectations. For this purpose, MEG recordings of ongoing brain activity were obtained while participants listened to familiar spoken sentences presented in background noise. We compared cortical responses to form- and/or meaning-based deviations from an expected final word in familiar sentences. In addition, we explored the extent to which processing of form and/or meaning deviations could be associated with WM capacity.
We experimentally varied the degree to which the final word of each sentence was related to the remainder of the sentence. The final word was either an expected word (i.e., the final word matched the prediction both in form and meaning, e.g., "The nearest doctor is so far, we'll have to drive there in your car") or a deviant word (see Table 1). Such deviants belonged to one of three categories: meaning deviants (deviating in meaning but related in form, e.g., "The nearest doctor is so far, we'll have to drive there in your jar"), form deviants (deviating in form but related in meaning, e.g., "The nearest doctor is so far, we'll have to drive there in your bus"), or unrelated deviants (deviating in both form and meaning, e.g., "The nearest doctor is so far, we'll have to drive there in your plus").
This experimental design utilizes the characteristics of the N400 and MMN responses, where the literature has shown that the magnitude of deviation from a prediction has a positive relationship with the magnitude of modulation of the response components, so that an increase in response amplitude follows an increase in the magnitude of deviation from predictions. Accordingly, we phrase our hypotheses from the perspective that the modulation magnitude of a particular response (such as MMN or N400) following a particular type of deviation (e.g., in form or meaning) will reveal whether that response component is sensitive or not to that particular type of deviation. Based on this general perspective, we hypothesize the following from our experimental design: (1) Difference in amplitudes between expected and deviant final words: N400 as well as MMN amplitudes are larger for deviant than for expected final words under adverse listening conditions. (2) Differences in amplitudes between the types of deviant: Unrelated deviants generate a larger N400 effect compared to meaning deviants. Form deviants generate a larger MMN effect compared to meaning deviants. (3) Higher WM capacity is associated with better performance in processing final words, especially form deviants, and associated with smaller N400 effects.
Participants
Twenty-one young adults recruited from Linköping University participated in this study (thirteen males, mean age = 25.2 years, SD = 5.50). All participants were native Swedish speakers with no history of hearing impairment or neurological disease. To assess hearing according to the American National Standards Institute (ANSI, 2004), hearing thresholds at frequencies of 0.125-8 kHz were tested with an AC40 audiometer. Handedness was tested with the Edinburgh Handedness Inventory (Oldfield, 1971), and safety for MEG inclusion was checked with a detailed questionnaire. After reading an information letter, all participants provided written informed consent to the study, which was conducted in accordance with the guidelines of the Declaration of Helsinki and approved by the Regional Ethics Committee in Linköping (2015/158-31). Participants were compensated with 500 SEK for their contribution to the study.
Working Memory Test
To assess WM capacity participants completed a Swedish version of the Reading Span (RS) test (Daneman and Carpenter, 1980;Rönnberg et al., 1989). The RS test is composed of three-word sentences visually presented in blocks of 2-6 sentences. The sentences were presented word-by-word on a computer screen at a rate of one word per 800 ms. The sentences were grammatically correct, but half of the sentences made sense (such as "the tractor works well") while the other half did not (such as "the fox reads poetry"). After reading each sentence, the participant had 5,000 ms to decide whether the sentence was absurd or not by pressing "yes" for absurd sentences and "no" for normal sentences. After a block, the participants were asked to recall either the first or the final words (determined randomly) of each sentence in their correct serial presentation order. There were two blocks per sentence list and the maximal available RS-score was 40 correctly recalled words.
Sentences
The sentence task consisted of two main conditions: expected vs. deviant final words. All final words in a sentence consisted of one syllable of three phonemes. In the expected condition, final words (48 words in total) were congruent with the remainder of the sentence both in form and meaning (see Supplementary Appendix), for example: "The nearest doctor is so far, we'll have to drive there in your car." Such final words were validated by 21 participants from Linköping University (12 males; mean age = 23.3 years, SD = 2.15 years), who had to end the sentence with the best adapted final word in a sentence completion test. These expected final words were evaluated by 10 other participants from Linköping University (5 males; mean age = 24.1 years, SD = 1.73 years) in an experiment in which they had to evaluate if the final word was the word they expected (yes/no response). The final words with the highest cloze probability scores (M = 0.95, SD = 0.003) were chosen as expected final words.
Final words in the deviant conditions belonged to one of three categories (of which there were 48 words in each): deviating in either form, meaning, or both. For the meaning deviants, violating predictions in meaning but related in form, the first phoneme was different from that of the expected final word, while the second and third phonemes were identical to the correct final word. Meaning deviants were also semantically absurd in relation to the first part of the sentence, for example: "The nearest doctor is so far, we'll have to drive there in your jar." For the form deviants, violating expectations in form but related in meaning, all the phonemes were different from the expected final word but the word was otherwise semantically similar to the predicted final word, for example: "The nearest doctor is so far, we'll have to drive there in your bus." For the unrelated deviants, all phonemes were different from the expected final word and the word was also semantically absurd in relation to the first part of the sentence, for example: "The nearest doctor is so far, we'll have to drive there in your plus." In order to obtain the same number of expected vs. deviant final words within the experiment, the expected trials were repeated three times. A correct final word was thus presented in half of the trials (i.e., 144 trials), and a deviant final word was presented in the remaining half (i.e., 144 trials). In total, the same first part of the sentence was randomly repeated six times for each participant: three times with an expected final word and three times with a deviant final word (form, meaning, or unrelated deviants). As such, all 48 final words were used as expected, form deviant, meaning deviant, and unrelated deviant final words in accordance with the first part of the sentence within participant. In doing so, we were able to achieve perfect counterbalancing between the different experimental conditions (see Table 1). This design ensured that the observed effects could only be due to the final word's relationship with the first part of the sentence and not dependent upon word characteristics.
To load on WM during speech processing, sentence materials were presented in a background of continuous white noise. The loudness level of the speech material was set at 80% intelligibility (i.e., +1 dB SNR) for the first part of the sentences, to give enough information to the listener for predicting the expected final word, and at 50% intelligibility (i.e., −5 dB SNR) for the final words (see "words in context", Figure 2 in Malmberg, 1970, p. 121), to load on WM and avoid ceiling effects. After the experiment, participants were asked to evaluate the sentence cloze for each presented final word with respect to its associated sentence on a five-point Likert scale (from 1 = not natural at all to 5 = very natural).
Procedure
Before the MEG experiment, the participants received instructions to read the 48 sentences pertaining to the expected condition (i.e., sentences ending with the predicted final words) at home, so they became familiar with the sentence material. After providing written informed consent to the study, participants were prepared for the MEG experiment. During the preparation, the experimenter checked that participants had read the sentence list at home and asked them to read the sentence list once again. This familiarization procedure was used to ensure that the participants knew the expected final word of each sentence, which was correct both in form and meaning. Throughout the experiment, participants listened to each sentence and assessed whether the final word was the "expected one, " i.e., the word appearing in the sentence list they had read (see experimental paradigm in Figure 1). Each trial began with a background of auditory white noise together with a white fixation cross visually centered on a black screen. As the first part of the sentences did not have identical durations, the onset of the sentence varied such that the offset of the first part of each sentence (i.e., before the presentation of the final word) occurred 6,400 ms after the beginning of the trial. This was followed by a delay period with a fixed duration of 1,600 ms, which was enough time to generate and maintain the knowledge-based linguistic predictions in WM. The final word of each sentence had an onset at 8,000 ms after the beginning of the trial. To ensure that motor activity would not be present in the MEG recording of linguistic processing, the participants had to delay the motor response to 2,800 ms after the onset of the final word. The participants were also instructed not to blink during the prediction delay or the presentation of the final word. The longest final word duration was 1,240 ms. When the background noise faded to silence, the fixation cross was replaced by the appraisal question "Was the final word the correct one? (i.e., the one that you had read before)." Participants had 2,000 ms to provide a motor response (by pressing yes/no buttons with the index or middle finger of the same hand, respectively). The response hand was counterbalanced across participants (i.e., 50% used the left hand and 50% used the right). Participants were instructed that they could blink at the time they responded. The inter-trial interval was 1,000 ms.
After the MEG experiment, a cognitive test battery including the RS test was administered to the participants, who also filled in the sentence cloze evaluation. The testing after the MEG experiment took approximately 40 min, and the duration of the entire experiment, including breaks, was approximately 2 h.
MEG Acquisition
The data were collected at The National Facility for Magnetoencephalography (NatMEG), Department of Clinical Neuroscience, Karolinska Institutet. Neuromagnetic data were recorded on the Elekta Neuromag TRIUX, a 306-channel whole-scalp system (sampling rate: 2,000 Hz; 0.1-660 Hz online bandpass filter), in a magnetically shielded, sound-proofed room (MSR; model AK3b from Vakuumschmelze GmbH, Hanau, Germany). Head position was monitored using four head-position indicator (HPI) coils together with subject-specific scalp measurements using a 3D digitizer (FASTRAK; Polhemus, Inc.) relative to three anatomical fiducial points: nasion, left pre-auricular, and right pre-auricular points. Ocular activity was monitored via bipolar horizontal and vertical electrooculography (EOG). Cardiac activity was monitored with bipolar electrocardiography (ECG), with electrodes attached below the left and right clavicle.
Stimulus presentation was synchronized with MEG recordings and behavioral responses using Presentation software (Version 18.1, Neurobehavioral Systems, Inc., Berkeley, CA). Auditory stimuli were presented through ear-tubes (model ADU1c, KAR Oy, Helsinki, Finland) to both ears. Visual instructions were projected onto a screen inside the magnetically shielded room (black background, white text). All 288 trials were presented in randomized order in one session including seven short breaks to allow participants to rest as long as they needed and to ask questions. During these breaks, participants were asked to evaluate their alertness on the Karolinska Sleepiness Scale (KSS; Akerstedt and Gillberg, 1990) from 1 (= extremely alert) to 9 (= very sleepy). Total recording time was approximately 1 h.

FIGURE 1 | MEG experimental paradigm: at the beginning of each trial, a white fixation cross appeared on a black screen with a background of white noise. Sentences were presented at 80% intelligibility 1,000-3,840 ms after trial onset such that the first part of the sentence always ended 6,400 ms from the trial onset. After a prediction delay of 1,600 ms, the critical final word was presented at 50% intelligibility (by manipulating the loudness level of the final word and keeping the background noise level constant). Motor responses were collected 10,800 ms after trial onset, with the longest final word ending 9,240 ms from the trial onset.
MEG Preprocessing
Using MaxFilter v2.2 (Taulu and Simola, 2006), data from the MEG sensors (204 planar gradiometers and 102 magnetometers) were processed using temporal Signal Space Separation (tSSS) with a correlation limit of 0.95 and segment length of 10 s (Taulu et al., 2005;Taulu and Simola, 2006) to suppress noise sources, to compensate for head motion, and to reconstruct any bad sensors.
Subsequent processing was done in FieldTrip (Oostenveld et al., 2011) software implemented in MATLAB R2017b (The MathWorks, Inc., Natick, MA). The data segments were extracted from 200 ms before final word onset up to 1,500 ms after final word onset. Only trials obtaining a correct answer (hits in the correct condition and correct rejections in the deviant conditions) were included. Segments containing system-related artifacts or muscular activity were identified based on signal variance. Identified segments were inspected visually and rejected if contamination with artifacts was confirmed. The remaining data were subsequently resampled at 300 Hz, low-pass filtered below 40 Hz, and baseline-corrected by demeaning using the mean activity in the 200 ms leading up to the stimulation. Subsequently, independent component analysis (ICA) was performed (Makeig et al., 1996). Components explaining horizontal and vertical eye movements, eye blinks, and ECG were discarded based on visual inspection. On average, 1.85 components were excluded per participant. Sensor-level time series were reconstructed from the remaining components. After preprocessing, visual inspection of all the remaining segments was performed, and the number of remaining trials varied from 134 to 236 per participant (on average 174.65 ± 35.16). The minimum number of remaining trials per condition was 23 (out of 48), which was judged sufficient for inclusion in the analysis. Time-locked analyses were then used to calculate the average responses for each participant, so-called event-related fields (ERFs), for the correct and deviant conditions and then for each deviant condition (form, meaning, and unrelated) separately.
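For readers who work in Python rather than MATLAB, a rough analogue of this FieldTrip pipeline in MNE-Python might look as follows; file names, event codes, and the number of ICA components are hypothetical, and this is not the authors' actual code.

```python
import mne

# Hypothetical file already cleaned with MaxFilter (tSSS), as described above
raw = mne.io.read_raw_fif("sub01_task_tsss.fif", preload=True)

# Epoch from -200 ms to 1,500 ms around final-word onset; event id is hypothetical
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"final_word": 1},
                    tmin=-0.2, tmax=1.5, baseline=(-0.2, 0.0), preload=True)

# Resample to 300 Hz and low-pass filter below 40 Hz, as in the text
epochs.resample(300)
epochs.filter(l_freq=None, h_freq=40.0)

# ICA to remove ocular and cardiac components (selected by visual inspection in the paper)
ica = mne.preprocessing.ICA(n_components=30, random_state=0)
ica.fit(epochs)
ica.exclude = [0, 1]          # hypothetical component indices
epochs = ica.apply(epochs)

# Event-related fields (here a single averaged condition as an example)
evoked = epochs["final_word"].average()
```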
Behavioral Performance
Behavioral analysis was conducted with Statistica analysis software (v.13; Hill and Lewicki, 2005). Signal Detection Theory (Green and Swets, 1974; Macmillan and Creelman, 2005) was used to analyze final word assessments. Hits were defined as participants answering "yes" when the expected final word was presented, and false alarms were defined as participants answering "yes" when a deviant final word was presented. Correct rejections were defined as participants answering "no" when a deviant final word was presented, and omissions were defined as participants answering "no" when an expected final word was presented. We expected to obtain 50% hits (i.e., answering "yes" in the expected condition) as the intelligibility level was set to 50%. It was of greater interest to investigate whether deviants were identified as deviants. The d-prime measure was used to assess task performance, with higher d-prime values corresponding to better task performance. A single d-prime score was obtained for each deviant type (form-related, meaning-related, and unrelated), and d-prime scores were compared by way of a within-subject ANOVA. Reaction times related to each deviant type were also investigated with a within-subject ANOVA. Differences in cloze scores of the final words in the post-experiment evaluation were also analyzed by way of a within-subject ANOVA on the factor Final Word, including all cloze conditions (correct, meaning-related, form-related, and unrelated). To highlight the involvement of WM capacity, Spearman correlations were calculated for WM capacity (i.e., RS scores) and false alarm percentage, as well as the mean amplitude of N400 components for each deviant type (minus the expected condition) on clusters showing significant differences. An alpha level of 0.05 was used as the significance level.
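A small sketch of the d′ computation used to score each deviant type is given below; the log-linear correction for extreme rates is an assumption, as the paper does not specify how rates of 0 or 1 were handled, and the example counts are hypothetical.

```python
from scipy.stats import norm

def d_prime(hits, misses, fas, crs):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction
    (add 0.5 to counts, 1 to totals) to avoid infinite z-scores at 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# One d' per deviant type: hits come from the expected condition,
# false alarms from "yes" responses to that deviant type (counts are hypothetical).
print(d_prime(hits=24, misses=24, fas=10, crs=38))
```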
MEG Sensor-Level Analysis
The sensor-level analysis was performed on gradiometers and magnetometers over the whole epoch length (i.e., 0-1,500 ms) with a non-parametric cluster-based permutation statistical test (Maris and Oostenveld, 2007) to highlight (1) processing differences between expected and deviant final words. A two-sided paired t-test ("cfg.statistics = ft_statfun_depsamplesT") was used for the generation of clusters with a threshold of 5% ("cfg.alpha = 0.05"). The likelihood of these clusters under the null hypothesis that the data are exchangeable was investigated using Monte Carlo randomizations ("cfg.method = 'montecarlo'", "cfg.numrandomization = 1000", "cfg.correctm = 'cluster'"). The same procedure was used to highlight (2) processing differences between the types of deviants. The sensor-level analysis was also run on gradiometers and magnetometers between the different deviant conditions (i.e., form, meaning, and unrelated deviants), focusing on later components such as the auditory N400 component (i.e., 200-600 ms post stimulus), but also on early responses such as the MMN (i.e., 120-200 ms, focusing on the peak at about 160 ms and not overlapping with later effects). Grand-averaged ERFs were calculated for sensors that were part of the clusters found in the cluster-based permutation analysis.
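Conceptually, the dependent-samples cluster-based permutation test can be reproduced in MNE-Python along these lines; the authors used FieldTrip, so the shapes, adjacency, and thresholds here are placeholders rather than the actual analysis.

```python
import numpy as np
from mne.stats import permutation_cluster_1samp_test

# X: per-subject ERF differences (deviant minus expected),
# shape (n_subjects, n_times, n_channels); placeholder data here.
X = np.random.randn(20, 450, 102)

# adjacency=None is only a placeholder; a real sensor adjacency
# (e.g., from mne.channels.find_ch_adjacency) should be supplied.
t_obs, clusters, cluster_pv, h0 = permutation_cluster_1samp_test(
    X, n_permutations=1000, tail=0, adjacency=None, out_type="mask")

significant = [c for c, p in zip(clusters, cluster_pv) if p < 0.05]
```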
Head-Modeling and Dipole Analysis
To localize which areas were involved in the differences observed between form and meaning deviants, source analysis was planned. Since gradiometers have a better signal-to-noise ratio on the Elekta TRIUX system, source modeling was based on data from the MEG gradiometer sensors. Head modeling was performed using a whole-brain 3D volume from the Centre for Medical Image Science and Visualization (CMIV) at Linköping University, Sweden. The T1-weighted anatomical image was acquired using a Philips Ingenia 3.0 Tesla MRI scanner with a standard eight-element head coil. The following pulse sequence parameters were used: a voxel size of 1 × 1 × 1 mm³, TR = 25 ms, TE = 4.6 ms, 175 sagittal slices.
The first step in source modeling is to create a forward model indicating how sources in the brain connect to the sensors in the sensor array. To do this, the MR image and the MEG sensor array were co-registered using a two-step procedure. First, the three fiducial points were found on the MR image (rough alignment). Afterward, an iterative closest point (ICP) algorithm was used to optimize the co-registration by minimizing the distance between the digitized head points (nasion, left pre-auricular, and right pre-auricular points) and the head surface. The co-registered image was subsequently segmented into brain, skull, and scalp tissue. From the brain compartment a surface mesh was created, from which a single-compartment volume conductor was constructed. The volume conductor indicates how magnetic fields spread for sources inside it. A source space was created by defining a regular grid of sources centered on the volume conductor. For each of the sources inside the volume conductor, a lead field was estimated, indicating how each source would be seen by each of the sensors.
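An illustrative sketch of these forward-modeling steps (single-compartment volume conductor, volumetric source grid, lead fields) using MNE-Python is shown below; subject names, file paths, and grid spacing are hypothetical, and the authors' actual pipeline was FieldTrip-based.

```python
import mne

subject, subjects_dir = "sub01", "/data/subjects"   # hypothetical paths

# Single-compartment (brain-only) volume conductor
surfs = mne.make_bem_model(subject, ico=4, conductivity=(0.3,),
                           subjects_dir=subjects_dir)
bem = mne.make_bem_solution(surfs)

# Regular volumetric grid of candidate sources inside the conductor
src = mne.setup_volume_source_space(subject, pos=10.0, bem=bem,
                                    subjects_dir=subjects_dir)

# Lead fields: how each grid source is seen by each MEG sensor
fwd = mne.make_forward_solution("sub01_final_word-epo.fif",
                                trans="sub01-trans.fif",
                                src=src, bem=bem, meg=True, eeg=False)
```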
The second step was to do the inverse modeling, estimating which source configuration best explained the activity pattern on the sensors. For the early component (i.e., MMN, 120-200 ms), we chose to do a symmetrical dipole fit, fitting two dipoles at the same time under the assumption that they were symmetrical around the x-axis, i.e., ear-to-ear. This thus assumes two focal sources in the brain, which fits well with the expectation that there should be bilateral activity in the auditory cortices at such early latencies (Shahin et al., 2007). However, for the later component (i.e., N400, 200-600 ms), such dipole analysis was not performed since this component is observed in a distributed network (see, for example, Maess et al., 2006). The two dipoles were fitted for the activity in the time window of interest. A grid search was used, going through sources one by one, to find the optimal starting position. Subsequently, gradient descent was used to optimize the dipole on six parameters, i.e., the xyz-parameters of the position of the dipole and the xyz-parameters of its moment. The optimization finished when the difference between the sensor activity pattern produced by the two fitted dipoles and the actual sensor activity pattern could not be reduced any further. We fitted the two dipoles based on the gradiometer data and all four conditions collapsed. To get the time courses for each condition separately, separate dipoles were estimated with the position fixed, just estimating the xyz-parameters of the moment using gradient descent.
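In the same notation, the symmetric two-dipole fit described above can be read as a least-squares problem (a standard formulation; the exact cost function used by the software is not spelled out in the text):

\[ (\hat{\mathbf{r}},\hat{\mathbf{q}}_1,\hat{\mathbf{q}}_2) = \arg\min_{\mathbf{r},\,\mathbf{q}_1,\,\mathbf{q}_2}\; \sum_{t\in[120,200]\,\mathrm{ms}} \bigl\|\mathbf{b}(t) - \mathbf{L}(\mathbf{r})\,\mathbf{q}_1(t) - \mathbf{L}(\mathbf{r}^{\ast})\,\mathbf{q}_2(t)\bigr\|^{2}, \]

where \(\mathbf{r}^{\ast}\) is the position mirror-symmetric to \(\mathbf{r}\) about the ear-to-ear axis; the grid search supplies the starting position, gradient descent refines the six free parameters, and the moments are then re-estimated per condition with the position held fixed.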
Sentence Experiment
Overall performance on the recognition task was 64.32% (SD = 13.40). As expected, performance (i.e., hits) was 50.33% (SD = 20.95) for the expected final words. In the deviant conditions, performance (i.e., correct rejections) was higher: 69.74% (SD = 7.02) for meaning deviants, 85.12% (SD = 10.02) for form deviants, and 75.75% (SD = 11.62) for unrelated deviants. Reaction times were significantly longer for meaning (M = 502.01 ms, SD = 136.79 ms) than for form (M = 461.96 ms, SE = 115.21 ms) deviants, while no significant difference was observed between these two conditions and the unrelated deviants (M = 482.34 ms, SE = 131.56 ms; all ps > 0.45). Using d′-scores, the ANOVA revealed a main effect of Deviant Type [F(2, 40) = 44.11; p < 0.001; partial η² = 0.688], showing higher d′-scores for form (d′ = 1.16; SE = 0.21) than unrelated (d′ = 0.76; SE = 0.19) deviants, which in turn had higher d′-scores than meaning deviants (d′ = 0.54; SE = 0.14) (all ps < 0.006, Bonferroni corrected, see Figure 2). The within-subject ANOVA on the post-experiment cloze ratings showed significantly higher cloze scores for the expected rather than the form deviant word (p < 0.001), and for form rather than meaning (p < 0.001) or unrelated (p < 0.001) deviants. No statistically significant difference was observed between meaning and unrelated deviants (p = 0.098). These findings confirm that final words pertaining to the expected condition had the most natural sentence cloze and that meaning deviant or unrelated final words were judged as less natural than form deviant final words.
Cortical Responses
Data from one participant was not included in the analysis because of too much movement (∼6 cm from origin), reducing the group to 20 participants (12 males, mean age = 25.4 years, SD = 5.6 years).
Differences Between Expected and Deviant Final Word Processing
Over the entire epoch length, the cluster-based permutation test indicated that there was a significant difference between ERFs related to word processing of expected and deviant words (see Figure 3A). A negative cluster most pronounced over left frontocentral magnetometers extended from approximately 283-783 ms (p < 0.002, see Figure 3B) while a positive cluster most pronounced over right frontal sensors extended from approximately 260-813 ms (p < 0.004, see Figure 3C), reflecting N400 effects on frontal sensors. Analysis on gradiometers showed comparable results with a positive cluster extended from 240 to 680 ms (p < 0.002) while a negative cluster extended from 260 to 840 ms (p < 0.004), localized predominantly over left sensors, and also over right fronto-central sensors. These findings extend results previously observed with higher N400 amplitudes for deviant vs. expected final words (Maess et al., 2016) from clear speech to adverse listening conditions.
N400 effects
Testing for N400 effects between the type of deviants (see Figure 4 showing mean magnetometer activity for each deviant type in the negative cluster reported for the entire epoch length), the cluster-based permutation test revealed a significant difference between unrelated deviants and the other types of deviants. A positive cluster over left temporal sensors revealed a larger N400 amplitude to unrelated vs. form deviants both for magnetometers (p = 0.048) and gradiometers (p = 0.002). Similarly, a larger N400 amplitude was found for unrelated compared to meaning deviants, although this difference was statistically significant only for magnetometers (p = 0.024). However, no significant difference was observed between form and meaning deviants (p > 0.05).
MMN effects
Testing for an MMN effect between the type of deviants, the cluster-based permutation test revealed a significant difference between form deviants and meaning deviants (see Figure 5A) in a negative cluster over left parietal magnetometer (p = 0.020, see Figure 5B) and left fronto-temporo-parietal gradiometer (p = 0.006) sensors. The cluster-based permutation test also showed a significant difference between meaning and unrelated deviants in a positive cluster over left temporoparietal magnetometer (p = 0.014) and right middle parietal gradiometer (p = 0.020) sensors, revealing higher activity for meaning than unrelated deviants. No significant difference was observed between form and unrelated deviants (p > 0.05).
To localize the observed differences at sensor level in MMN amplitudes between form and meaning deviants, we used dipole analysis on gradiometers to model these responses at the anatomical source level. In 17 out of 20 participants, the results showed bilateral dipole activity (120-200 ms) in the auditory cortex. On average for the whole time window (120-200 ms), the dipole in the left hemisphere showed a significantly higher response amplitude following meaning compared to form deviants (t(16) = −3.34; p = 0.004, see Figure 6, upper panel). No such difference was found for the dipole in the right hemisphere (t(16) = −1.16; p = 0.264, see Figure 6, lower panel), nor was there a significant general difference between the dipoles in the left and right hemispheres (t(16) = 1.29; p = 0.216).
Working Memory Performance
One participant did not want to complete the WM test, reducing the group to 20 participants for behavioral data and 19 participants for MEG data. A significant correlation was observed between RS scores (M = 19, SD = 4.6) and false alarms for meaning deviants (r s = −0.499; p = 0.025), but not for form deviants or unrelated deviants. This negative correlation indicated that participants with higher WM capacity experienced fewer false alarms (i.e., better performance) when processing meaning deviants which were rhyming with the expected final word (see Figure 7).
Testing for correlations between WM capacity and the mean amplitude of the N400 effects for each deviant type (minus the expected condition) in significant clusters, RS scores were negatively associated with the N400 effect for meaning deviant [r s = −0.622; t(19) = −3.27; p = 0.004] but not form deviant [r s = 0.093; t(19) = 0.38; p = 0.706] or unrelated [r s = −0.002; t(19) = −0.011; p = 0.991] final words in the positive cluster. This negative correlation indicated that participants with higher WM capacity had smaller N400 effects in response to meaning deviants compared to participants with lower WM capacity. No other significant correlation was found.
DISCUSSION
The present study investigated how cortical processing of degraded speech is affected when form-based and/or meaning-based predictions about the incoming speech are violated. Participants familiarized themselves with the sentence material corresponding to the expected final word before testing. This allowed us to observe participants' neural responses to prediction deviations by replacing final words of the familiar sentences with final words that deviated from the first part of the sentence either in form, in meaning, or in both. Results showed that, under adverse listening conditions, meaning deviants elicited a higher false alarm rate and larger neural activity in the left auditory cortex compared to form deviants, suggesting that meaning deviants were more difficult to process. Moreover, deviant final words evoked larger N400 amplitudes than expected final words, but no significant difference in N400 amplitude was found between final words that deviated in form and those that deviated in meaning. WM also appeared to play a significant role in the processing of final words, as higher WM scores were associated with better rejections and smaller N400 effects for meaning deviants.
Behavioral Results and Limitations
Final words were presented in a background of white noise at a level of 50% intelligibility to induce adverse listening conditions loading on WM. This was confirmed by the performance level in the correct condition, in which participants recognized the expected final word in relation to the pre-familiarized material in 50% of cases. Under deviant conditions, performance levels were much higher (77% overall across types of deviant), indicating that it was easier to reject the final word when it did not match knowledge-based predictions than to accept it when it did. However, correct rejections proved harder to make for meaning deviants than form deviants, so that participants responded slower and made more errors when processing meaning deviants compared to form or unrelated deviants. Bearing in mind that the meaning deviants are semantically incorrect but phonologically related to the expected final word, this result indicates that performance is lower when the final word rhymes with the expected final word. However, it is worth mentioning here that although the meaning deviants were phonologically related to the expected final word, they did not exactly match the expected final word on phonology. This could have induced difficulties in phonological processing. Interestingly, the rate of false alarms to meaning deviants was negatively associated with WM capacity. As a lower false alarm rate reflects better task performance, this result suggests that individuals with greater WM capacity were less likely to incorrectly classify final words phonologically related to the expected word as correct. In other words, individuals with greater WM capacity are less susceptible to phonological lures when listening to speech under challenging conditions (for a discussion see p. 2 in Rudner et al., 2019). Furthermore, participants with higher WM capacity had smaller N400 effects in response to final words with deviant meaning compared to participants with lower WM capacity, indicating that the processing of phonologically related final words requires fewer neural resources for listeners with higher WM capacity. Plausibly, this finding indicates that WM is involved in the phonological analysis of the unfolding speech. However, this finding is limited by the experimental context of the study, which differs from everyday listening conditions in the sense that listeners knew the sentences in advance (which is unlikely in everyday language comprehension). What is remarkable, however, is that WM was specifically involved in phonological processing but not semantic processing of the final words of a known sentence, which is in accordance with the assumption about lexical access being mediated by phonology in implicit and rapid information processing (see Rönnberg et al., 2008, 2013, 2019). According to the ELU model (Rönnberg et al., 2008), knowledge-based predictions are held in WM until they have served their purpose. In the current experiment, knowledge-based predictions required both phonological and semantic knowledge to determine whether the final word in the sentence was the expected target. These findings suggest that WM capacity sets a limit for the retention of semantic information required to reject a phonologically matching word when listening to speech under adverse conditions. It might be suggested that WM capacity was particularly involved in the processing of phonological matching due to the specific design of our sentence materials.
Given that for every sentence, the expected final word of the second clause always rhymed with the ending word of the first clause, and that a prediction delay of 1,600 ms was added between the first part of the sentence and the final word, we may have created a task-related bias toward phonology. WM involvement, in this case, may reflect the active maintenance of the rhyming sound which was possible to generate due to the prediction delay (see Ito et al., 2016). Initially, the rhyming design was intended to restrain the number of possible candidates for the final word to only one. Consequently, however, this phonological dimension may have resulted in greater difficulty to correctly identify and reject phonologically related final words. Nonetheless, when evaluating the sentence cloze with all the four possible final words (i.e., expected, form deviant, meaning deviant or unrelated deviant), it clearly appears that participants preferred (i.e., rated with higher scores) final words that matched the semantic context of the sentence over final words that phonologically rhymed with the sentence. Although form deviants made more sense than meaning deviants for participants when judging sentence cloze offline, meaning deviants still induced more recognition errors during online processing. This discrepancy in performance results between the online recognition task and the offline cloze task suggests that phonological predictions in noise may override semantic predictions under adverse listening conditions. Alternatively, these results may also be explained by the fact that the two tasks (i.e., the recognition task and the sentence cloze task) did not involve the same type of knowledge-based predictions. During the recognition task, participants may have relied more on their phonological knowledge in order to facilitate the processing of degraded speech as they have to listen to spoken sentences, whereas in the cloze task, participants needed only to rely upon their semantic knowledge to judge the naturalness of the sentence cloze.
N400 Effects
Higher amplitude was observed over fronto-lateral sensors between 200 and 600 ms for deviant vs. expected conditions. This finding is in line with previous results showing that deviating words elicit larger N400 amplitudes than expected words (Maess et al., 2016) in quiet listening conditions. The present result extends this previous finding to speech perception under adverse listening conditions in which the intelligibility of the speech signal is compromised by background noise (see also Strauß et al., 2013 for other types of noise degradation). Participants were, therefore, more likely to rely on their knowledge-based predictions than on the word characteristics of the upcoming stimulus to perform the recognition task on the final word. Our findings suggest that preactivations of linguistic representations associated with unfolding speech are necessary for efficient speech processing under adverse listening conditions. Furthermore, form deviants elicited smaller N400 amplitudes than unrelated deviants (on both gradiometers and magnetometers), and meaning deviants also elicited smaller N400 amplitudes than unrelated deviants (only on magnetometers). These findings seem to contradict previous studies showing that meaning deviants and unrelated final words elicited similar N400 effects, especially in high-cloze sentences with prediction delays (Ito et al., 2016). But in our study, knowledge-based predictions had a strong phonological dimension due to the construction of the sentence material using a rhyming clause, while the degree of semantic constraint was similar across conditions. The first part of the sentence was the same across all four experimental manipulations, producing, as a consequence, a similar constraint from knowledge-based predictions toward the upcoming final word, both on phonological and semantic characteristics. Then, participants had a long prediction delay (i.e., 1,600 ms) before hearing the final word, which was plenty of time for generating expectations at both phonological and semantic levels. This could explain why meaning and form deviants elicited smaller N400 effects than unrelated deviants, which comprised both phonological and semantic anomalies. It should be noted that we did not observe differences between N400 effects related to form and meaning deviants, which had only one type of deviation (either phonological or semantic). These results suggest that the N400 response likely reflects integration processes modulated by the strength of phonological or semantic deviation, where accumulated deviation from semantic and phonological expectations results in larger N400 amplitudes. Thus, N400 effects do not reflect prediction cost (as discussed in Luke and Christianson, 2016) but more probably the amount of matching between the predictions and the actual processed word (Kuperberg et al., 2020). This is also probably why WM capacity was associated with smaller N400 effects in processing final words with deviant meaning: it is possible that listeners with higher WM capacity processed meaning deviants more easily than participants with lower WM capacity. These results support the model proposed by Chen and Mirman (2012), which stipulates that phonological and semantic representations are activated simultaneously, and that precise phonological predictions will constrain the amount of all possible semantic predictions (Chen and Mirman, 2015).
Taken together, these findings are in line with recent results suggesting that the N400 effects reflect a combination of prediction and integration processes (Nieuwland et al., 2020).
Early Effects
The most interesting result of this study is the modulation of the MMN amplitudes by the type of prediction deviation since the observed MMN is related to early activity in the auditory cortex, and especially in the left hemisphere. In line with our hypothesis, higher amplitudes on left temporo-parietal sensors were observed for meaning compared to form deviants, both for gradiometers (between 120 and 200 ms), and for magnetometers (with peak activity around 180 ms). Additionally, these effects were localized to the left auditory cortex. This means that the left auditory cortex showed higher amplitudes in response to final words that are phonologically related to but semantically deviant from the expected final word. Because phonological language processing is usually left lateralized in the primary auditory cortex (Shestakova et al., 2002;Näätänen et al., 2007), this finding supports the notion that the left auditory cortex is preferentially prepared to respond to incoming phonological information. Since our study used the very same final words across sentences in all four experimental conditions, there were no differences in terms of acoustics or item characteristics between the different experimental conditions. Our carefully counterbalanced experimental design thus assured that any observed effect in this study was strictly due to the relationship between the final word and the knowledge-based expectations that were generated from the first part of the sentence. However, the downside of using such well-counterbalanced material is that the unrelated deviant also rhymed with the form deviant final words. This is probably the reason why we did not observe differences in early neural responses between unrelated deviants and form deviants. Instead, a significant difference in early cortical activity was observed between meaning deviants and unrelated deviants, which further supports the notion that the left auditory cortex has a preference for phonological information.
From the perspective of the predictive coding theory (Friston, 2009), the MMN could be related to an early neural prediction error reflecting a discrepancy between the pre-activated neural memory trace of an expected stimulus and the phonological characteristics of the incoming speech sound. Thus, it could be proposed that phonological expectations primed the left auditory cortex via top-down influence. This result is in line with the assumptions proposed by the ELU model that considers phonology as the key for accessing the mental lexicon, and there is accumulating evidence showing that phonological expectations can be observed in early cortical responses, before the N400 component (for a review, see Nieuwland et al., 2020). Nieuwland's review (2020) shows that effects in the early time window referred to as the N200 (which includes several components such as the MMN or Phonological Mismatch Negativity) are increased by deviation from phonological predictions and are not differentiable from subsequent N400 effects. The author also concluded that further research is needed to disentangle N400 effects from earlier activity. In our study, we did not observe the same significant difference in the MMN and N400 time windows: the difference in processing meaning and form deviants was significant for the MMN time window but not for the N400 time window, while the effect of processing unrelated deviants was significantly larger compared to the effect of processing meaning deviants only for the N400 time window. The meaning deviants, which rhyme with the expected final words but have a different meaning, are also the deviants which induce most errors in the behavioral task, suggesting that they are the most difficult to separate from the expected final words. It is probably this difficulty in sensory processing that is reflected in early time windows, suggesting that the effects observed in the MMN time window are more likely related to sensory processing and focused on phonological processing in a comparison stage, while N400 effects are more likely related to cognitive processing in an integration stage, modulated by WM capacity in its postdiction role.
CONCLUSION
The present study aimed to investigate how the nature of knowledge-based predictions influences cortical speech processing under adverse listening conditions and whether this influence is associated with WM capacity. By manipulating the phonological and/or semantic relationship between a sentence and its final word, our results suggest that the left auditory cortex may have been primed to preferentially respond to phonologically expected features of the incoming speech. In addition, WM appeared to play a role in the phonological processing of upcoming words. The results of this experiment provide support for an early neural mechanism responsible for comparing knowledge-based predictions with incoming speech signals. Taken together, these results suggest that the early effect could be related to the difficulty in sensory perception while the later effect could be related to integration processing in the sentence context.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Regional Ethics Committee in Linköping (2015/158-31). The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
CS, JR, MR, ÖD, and DL designed the experiment. CS and RB collected the data. CS and LA analyzed the data. All authors were involved in interpreting the results and writing the manuscript.
FUNDING
This work was supported by grant no. 2017-06092 from the Swedish Research Council. The NatMEG facility was supported by Knut and Alice Wallenberg (KAW 2011.0207).
The effects of physical disturbance and sediment refuge on the growth of young native freshwater mussels Elliptio complanata (Eastern Elliptio)
Native freshwater mussels form a critical component of benthic foodwebs, but are endangered worldwide, making their study an important conservation issue. Many unionids live in shallow environments where they are potentially vulnerable to natural disturbances, but the impact of physical forces on their growth and the role of sediments as a refuge is poorly understood. Here, we validate the use of two types of shell internal lines (nacreous, prismatic) as indicators of physical disturbance and shell coloration as an indicator of sedimentary habitat. We use these indicators to test (1) whether the sediments provide an effective refuge for juvenile and young adult mussels from natural disturbances and (2) whether disturbance events affect their growth. Elliptio complanata (Eastern Elliptio) emerge from the sediments when they are 20–50 mm in size and 2.5–7 years old. Juvenile and young adults lay down more disturbance lines at more exposed nearshore sites, but also in small lake basins with dense mussel populations. Disturbance lines are produced during both endo- and epibenthic growth periods, but in contrast to adults, they are not associated with growth anomalies. Sediments accumulating in shallow nearshore areas of lakes provide an imperfect but effective refuge for native mussels that warrant protection.
Introduction
Native freshwater mussels provide a wide range of ecosystem services worldwide (Strayer, 2014; Vaughn & Hoellein, 2018). Their filter feeding is a key process in many benthic foodwebs that transfers organic matter from the water column to the sediments through the production of feces and pseudofeces (material filtered from the water column but not ingested) (Vaughn & Spooner, 2006; McCasker & Humphries, 2021). At high density, mussels can play an important role in sediment dynamics and biogeochemical cycling, including nutrients (Strayer, 2014; Vaughn & Hoellein, 2018). However, populations of many native freshwater mussels are threatened due to pressures from habitat alteration and fragmentation, climate change, invasive predators and competitors, and pollution (Lopes-Lima et al., 2017; Ferreira-Rodríguez et al., 2019). At least 37 of the unionid species present in the 1800s in North America have since gone extinct and more than 100 of the remaining ~ 300 species are endangered (Williams et al., 2017; Vaughn & Hoellein, 2018). Given freshwater mussels' ecological significance and the myriad threats they face, understanding the factors that influence their growth and development is important for conservation (Ferreira-Rodríguez et al., 2019). Native freshwater mussels live in soft sediments and are vulnerable to physical forces that can dislodge and transport both the sediments and benthic organisms (Strayer, 1999; Allen & Vaughn, 2010). Several studies have shown that various measures of hydrodynamic forces can help us predict the local distribution of mussels in rivers and nearshore lake areas (Daraio et al., 2010; Cyr et al., 2012, 2017b; Lopez & Vaughn, 2021). For example, landscape variables (e.g., stream size) and fine-scale hydrological variables (e.g., bed shear stress, Reynolds number) are useful predictors of the spatial distribution and community composition of unionids in rivers (Daraio et al., 2012; French & Ackerman, 2014; Lopez & Vaughn, 2021). In lakes, unionids are most abundant in shallow sedimentary areas and at greater depths in large lakes with deep wave-mixed layers and along steep slopes (Cyr, 2008; Cyr et al., 2017b). Hydrodynamic forces can thus play a major role in delimiting unionid distributions.
Mussel growth is also influenced by hydrodynamics. Water flow is required to supply food (Vanden Byllaardt & Ackerman, 2014; Mistry & Ackerman, 2018) and remove wastes, and higher flows are related to higher bivalve growth rates (Grizzle & Morin, 1989; Menge et al., 1997; Dycus et al., 2015). In shallow nearshore areas of lakes, mussel growth declines with increasing exposure to wind-driven physical forces (Cyr, 2020a, 2020b), but the presence of sediments provides a refuge for unattached freshwater mussels (Balfour & Smock, 1995; Schwalb & Pusch, 2007; Cyr, 2009). The role of physical forces and disturbances in freshwater ecosystems is understudied compared to marine ecosystems (but see Cyr et al., 2017b and Cyr, 2020b for lakes; Lopez & Vaughn, 2021 for rivers).
Mussel shells record disturbances (Haag & Commens-Carson, 2008;Cyr, 2020b). The manipulation of unionid mussels has long been known to result in the production of disturbance lines in their shells (Coker et al., 1921;Negus, 1966;Haag & Commens-Carson, 2008). Disturbance lines also appear in wild mussels collected from the field (e.g., Veinott & Cornett, 1996;Cyr, 2020b), suggesting that mussel shells record natural physical disturbances as well. The intensity and characteristics of natural disturbances that result in the production of disturbance lines are not known, although wind-driven waves and thermocline seiching are two probable candidates (Cyr, 2020b). A disturbance line is thought to be produced when a disturbed mussel retracts its mantle, detaching it temporarily from the edge of the shell (Haag & Commens-Carson, 2008). When growth resumes, a dark line is laid down that comes up through the nacreous (interior) layer (e.g., DL in Fig. 2). These "nacreous" disturbance lines have been associated with decreased growth (Haag & Commens-Carson, 2008), although Cyr (2020b) reported increased growth in adult lake mussels exposed to natural physical disturbances.
Another type of internal line which may indicate disturbance are the prismatic lines (intra-shell periostracal layer) described by Checa (2000). The formation of prismatic lines occurs when the mussel reinitiates shell growth following a period of inactivity, and full prismatic lines are a normal part of annual growth lines (Checa, 2000). However, we commonly observe orphan prismatic lines that extend down from the periostracum and partially or fully through the prismatic layer, but contrary to growth lines, do not extend through the nacreous layer (e.g., PL in Fig. 2). It is unknown whether these orphan prismatic lines indicate disturbances.
This research tests how natural physical disturbance affects ecologically important native freshwater mussels, whether the sediments provide an effective refuge for them, and what kind of impact natural disturbance has on the early development and growth of young mussels. More specifically, we test whether nacreous and prismatic disturbance lines in Elliptio complanata (Lightfoot, 1786) (Eastern Elliptio) are (1) more common at wind- and wave-exposed compared to sheltered nearshore sites in lakes, (2) limited to growth periods after young adults emerge from the sediments, and (3) associated with anomalous shell growth rates in juveniles and young adults, as reported for adults (Haag & Commens-Carson, 2008; Cyr, 2020b). Mussels have indeterminate growth so factors affecting their early growth can have long-term effects on their populations (Haag, 2012).
We worked with E. complanata, a unionid mussel that inhabits a wide variety of substrates in shallow nearshore areas of lakes (Cyr, 2008). This species is locally abundant and widely distributed in lakes and rivers of the Atlantic Slope drainage of North America (Graf & Cummings, 2007). As a result, we can study their growth without having an impact on their population, something that cannot be done with threatened or endangered species.
Study site
Lake Opeongo is a multi-basin oligo-mesotrophic lake located in Algonquin Provincial Park on the Canadian Precambrian Shield, Ontario, Canada (45°42'N, 78°22'W) (Cyr, 2020b). This lake has been protected from human development since the park was established in 1893 (St. Jacques et al., 2005). We sampled nearshore areas in South Arm and East Arm, two large stratified basins of similar size (surface area: 22.1 and 18.1 km², mean depth: 14.6 and 16.3 m, maximum depth: 50 and 44 m, respectively), and in Sproule Bay and Deadman, two small shallow polymictic basins (surface area: 2.1 and 0.3 km², respectively, maximum depth: 7 m for both; Fig. 1). Both large basins have an elongated shape with their main axis aligned with the predominant W-SW winds. This allowed us to select nearshore sampling sites exposed to a wide range of wind-generated physical forces (waves, currents) within and across basins. The sediments at our sampling sites ranged with site exposure, from deep fine and organic sediments at sheltered sites to small pockets of sediments between boulders at very exposed sites (Table 1) (Cyr, 2009; Cyr et al., 2012). Much of the shoreline, including all sampling sites, has sparse or no aquatic vegetation. Elliptio complanata is abundant and is the only unionid species in this lake.

Fig. 1 Location of sampling sites in Lake Opeongo, where small mussels (17 sites; black circles), snails (4 sites; open blue circles), and sediments (10 sites; brown open triangles) were collected. Wind rosettes show the proportion of wind blowing from each direction at the South Arm and Sproule Bay weather stations (stars). Depth contours at 10-m intervals.

Table 1 Characteristics of sampling sites and of the mussels collected at each site. n mussels is the number of mussels collected from the sediment surface at each site, TL is the total shell length of these mussels, n shells is the number of thin shell cross sections analyzed at each site and the estimated (est.) age of these mussels. Sampling sites are ordered by basin and are shown in Fig. 1. (a) Could not find smaller mussels despite extensive search. (b) Estimated from sediment organic content (see Methods).

Fig. 2 Thin shell cross sections of young mussels from three different sites: a W8-6 (TL = 39.7 mm), b T9S-2 (TL = 60.9 mm), c T11E-1 (TL = 65.9 mm), showing shell coloration, transition to clear nacre toward the tip, growth lines (GL), nacreous disturbance lines (DL), and prismatic disturbance lines (PL; p-PL is a partial PL at the top of the prismatic layer). The three layers of mussel shells mentioned in the text are identified in (a): periostracum, prismatic layer, and nacreous layer. Full series of annual growth rates for these three mussels are shown in Fig. 3. Location of sampling sites shown in Fig. 1.
Sampling
Mussels were sampled from 17 shallow nearshore sites (eight sites in East Arm, six in South Arm, two in Sproule Bay and one in Deadman; Table 1, Fig. 1). Mussels were collected between 6 and 10 July 2010, during the peak of E. complanata emergence (Matteson, 1948; Cyr, 2009). At each site, a snorkeler swam slowly along the bottom, parallel to shore at 2 m depth, to collect all mussels found at the sediment surface. Most mussels were clearly visible, but some mussels were fully buried with only their open siphon visible at the sediment surface. These mussels were brought back to the boat, where their total shell length (TL; longest axis from anterior to posterior end) was measured with calipers. Where possible, we selected 10 small mussels per site (ideally TL < 60 mm) covering as wide a range of (small) sizes as possible. Fewer mussels (6-9) were collected at three rocky sites (T9E, T2E, T2S), where mussels were sparse and where we could only find adult-size mussels despite extensive searches (Table 1). Sediment depth was measured to quantify the availability of a refuge. These data were originally collected for two different studies (Cyr, 2009; Cyr et al., 2012), using slightly different, but comparable, sampling designs. A diver inserted a 0.8-cm-diameter plastic-coated metal rod as deep as possible into the sediments at 5-m intervals along a 30-m transect in East Arm (n = 7), and at 2-m intervals along a 10-m transect in South Arm and Sproule Bay (n = 5). Where resistance was not felt, sediment depth was recorded as the length of the rod (60 cm; i.e., sediment depth was underestimated). Two divers compared their sediment depth measurements at two sites and were usually within 5-10 cm of each other (median difference = 6.5 cm, range = 0-26 cm). Large differences in measurements could be due to differences in the force applied to the sampling rod by different divers, but also to small-scale variability in sediment structure, including the presence of buried rocks and branches. We calculated geometric mean sediment depth (Zseds) for each site. Deeper sediments are found in sediment accumulation areas and tend to have finer particles and higher organic content than shallow erosional areas. Sediment depth was not measured at site D1, so we used our measurements of sediment organic content at that site to predict sediment depth with a model developed in Lake Opeongo that includes sediment data from six sites in Sproule Bay and 20 sites in South Arm (linear regression, R² = 0.58, P < 0.00001).
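A minimal R sketch of these two calculations (geometric mean depth per site, and the regression used to predict depth at site D1 from organic content); all values and the simple linear form are hypothetical stand-ins for the actual data and model:

```r
# Geometric mean sediment depth for one site (rod measurements in cm; hypothetical)
depths_site <- c(12, 35, 60, 8, 22)
zseds <- exp(mean(log(depths_site)))                  # geometric mean depth

# Predicting sediment depth from sediment organic content (hypothetical data)
dat <- data.frame(organic = c(2, 5, 9, 14, 20, 28),   # % organic content
                  depth   = c(4, 9, 18, 30, 41, 55))  # measured depth (cm)
fit <- lm(depth ~ organic, data = dat)
predict(fit, newdata = data.frame(organic = 12))      # depth estimate for site D1
```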
Site exposure was quantified using effective fetch measurements, which account for the effect of predominant winds (Håkanson & Jansson, 1983). We measured the distance of open water in front of each site (fetch, F, in m) along eight cardinal directions (d). Effective fetch (F_eff) is then calculated by weighting the distance of open water by the average wind speed (in m s⁻¹) blowing from each cardinal direction (w_d):

F_eff = Σ_d (w_d × F_d) / Σ_d w_d

Winds were measured at weather stations maintained by the Ontario Ministry of Natural Resources and Forestry (OMNRF; stars in Fig. 1). We used wind data collected on South Arm from May-October 2001-2009 (10-min intervals) to calculate fetch in South Arm, East Arm, and Deadman. Winds measured in Sproule Bay over two summers (May-October 2003) were used for that basin.
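Assuming the weighted-average form given above, the calculation for one site can be sketched in R (fetch distances and wind speeds below are hypothetical):

```r
# Effective fetch for one site: open-water distance in eight directions,
# weighted by mean wind speed from each direction (hypothetical values)
fetch_m <- c(N = 300, NE = 800, E = 2500, SE = 1200,
             S = 400, SW = 5200, W = 4800, NW = 900)   # fetch (m)
wind_ms <- c(N = 1.2, NE = 1.0, E = 1.5, SE = 1.8,
             S = 2.0, SW = 3.6, W = 3.1, NW = 1.4)     # mean wind speed (m/s)

f_eff <- sum(wind_ms * fetch_m) / sum(wind_ms)         # effective fetch (m)
f_eff / 1000                                           # in km, as used later in the GAM
```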
Processing of mussel shells
Thin shell sections were prepared as explained in Cyr (2020a). Internal growth lines were identified as those extending from the periostracum to the base of the nacreous layer (GL in Fig. 2). These are considered annual growth lines based on δ 18 O profiles in Lake Opeongo mussels (Cyr, 2020a). Annual growth was measured as the curved distance along the shell surface between successive growth lines measured at 20X magnification with an image analysis software (Infinity Analyze 6.5, Teledyne Lumenera, Ottawa, Canada).
We also identified two types of disturbance lines: nacreous disturbance lines (DL) and prismatic lines (PL). DL are dark lines coming up through the nacre that often do not reach the top of the nacreous layer (Veinott & Cornett, 1996;Haag & Commens-Carson, 2008) (Fig. 2b-c). Dark lines closely associated and merging with a (light) growth line were also considered nacreous disturbance lines (e.g., DL close to the tip in Fig. 2b; Cyr 2020a, b). PL are short and usually light lines coming down from the periostracum that run partially or fully through the prismatic layer and occasionally extend into the top of the nacreous layer, as described by Checa (2000;Fig. 2a, c). We identified and counted both types of disturbance lines in thin shell sections over as many growth periods as possible.
Emergence of young mussels from the sediments

We used two indicators to estimate when young mussels emerged from the sediments. First, we measured the δ 15 N signature of all mussels to test whether their δ 15 N signature shifts with increasing body size. Cyr (2020a) showed that endobenthic mussels (in one of the basins studied here) have a more depleted δ 15 N signature than (larger) mussels collected on the sediment surface. Sediment δ 15 N becomes more depleted with increasing sediment depth (Kohzu et al., 2011) and provides an interesting tracer of endobenthic life. All (small) mussels in the present study were collected from the sediment surface, but it takes time for the isotopic signature of an organism to shift, depending on the turnover rate of its tissues (Vander Zanden et al., 2015). Therefore, we expect recent emergence to be reflected in a mussel's isotopic signature.
The second indicator of emergence is the internal shell coloration observed in the earliest years of mussel growth that disappears in later years as their growth drops to low adult growth rates (Fig. 2). We noticed a more or less sudden shift in shell coloration in the hundreds of shell cross sections analyzed in previous studies. In this study, we determined visually the last year of shell coloration in as many mussel cross sections as possible. We did not attempt to quantify the intensity of coloration since it varies with thickness and quality of the shell cross sections.
Isotopic signatures
The mantle tissues from each mussel were dissected, dried, and prepared for stable isotope analysis (δ 13 C, δ 15 N) (Cyr, 2020a). We also used tissue samples from previous studies for comparison: whole body mussel samples collected on 28-29 September 2004 at four sites in South Arm (Griffiths & Cyr, 2006) and mantle tissue samples collected on 6 June and 26-27 September 2006 at sites Sp1 and Sp4 in Sproule Bay (Cyr, 2020a).
Baseline isotopic signatures were measured from plankton, benthic primary consumers (snails), and sediments (Cyr et al., 2017a). Offshore plankton were collected with four to five vertical tows of a 100 μm Wisconsin net through the epilimnion and metalimnion of South Arm, East Arm, and Sproule Bay on two dates (16-17 June, 22-23 July 2009). We have no plankton baseline data from Deadman. For comparison, we also used plankton samples collected at two South Arm offshore sites on 29 September 2004 (Griffiths & Cyr, 2006) and on 5 July and 26 September 2006 in Sproule Bay and South Arm (Cyr, 2020a).
Herbivorous snails were collected from nearshore sites (~ 2 m depth) in Sproule Bay in early July 2006 (Physella, Gyraulus) and in South Arm on 21 July 2009 (Helisoma; open blue circles in Fig. 1). We have no benthic baseline data from East Arm or Deadman. The snails were left at least 30 min in lake water to clear their gut, dried at 60 °C, and the soft body of three individuals per site was analyzed separately after removing their shell and operculum.
Sediments were collected at five of our sampling sites in South Arm (SW3, SW8, SE6, SE9, SE10) and at site Sp1 in Sproule Bay during summer 2006, and at three sites in East Arm and at site D1 in Deadman during summer 2009 (open triangles in Fig. 1). Sediment samples were collected with handheld Lexan corers (5.6 cm internal diameter) and the surface 1 cm was extruded with a piston. In East Arm, samples of the top 5 cm of sediments were also collected at two sites for comparison. The sediments were wet-sieved and the isotope signature of the finest (most organic) size fraction analyzed (< 63 μm in South Arm, Sproule Bay and Deadman; < 110 μm in East Arm).
All samples for isotopic analysis were dried at 60 °C for at least 24 h, ground to powder with mortar and pestle, and analyzed for δ 13 C and δ 15 N at the Environmental Isotope Laboratory, University of Waterloo, Ontario, Canada. Average precision (median standard error) of replicates is 0.09‰ for both δ 13 C and δ 15 N (n = 41).
Statistical analyses
We used simple linear regression analysis to test for relationships between δ 15 N and shell size (TL) at each of our 17 sampling sites. P values were corrected for multiple comparisons using False Discovery Rate (FDR; Benjamini & Hochberg, 1995).
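A minimal R sketch of this step, assuming a data frame 'mussels' with columns site, d15N, and TL (the object and column names are hypothetical):

```r
# Per-site regressions of delta-15N on shell length (TL), with FDR correction
p_by_site <- sapply(split(mussels, mussels$site), function(d) {
  fit <- lm(d15N ~ TL, data = d)
  summary(fit)$coefficients["TL", "Pr(>|t|)"]   # slope p value for one site
})
p_fdr <- p.adjust(p_by_site, method = "BH")     # Benjamini & Hochberg (1995)
```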
We tested whether disturbance lines (DL, PL) occurred equally in colored and clear growth periods using ANOVA. This analysis was restricted to mussel shells that contained disturbance lines. We used linear mixed-effect models (Zuur et al., 2009) in the nlme package in R (Pinheiro et al., 2019) to compare the proportion of growth periods in individual mussels that had disturbance lines in colored vs clear growth periods (fixed factor). Heteroscedasticity in the data was accounted for where appropriate by adding a random effect term in the model (Zuur et al., 2009).
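One plausible form of this model in nlme, sketched with hypothetical object and column names ('prop_data' holds one row per mussel and period type, with the proportion of growth periods containing lines):

```r
library(nlme)

# Proportion of growth periods with disturbance lines, colored vs clear periods;
# a random intercept per mussel links the two rows from the same individual and
# varIdent allows unequal residual variances between period types.
m_prop <- lme(prop_lines ~ period_type,
              random  = ~ 1 | mussel_id,
              weights = varIdent(form = ~ 1 | period_type),
              data    = prop_data)
summary(m_prop)
```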
Fig. 3 Time series of annual growth rates for the three young mussels shown in Fig. 2: a W8-6 (TL = 39.7 mm), b T9S-2 (TL = 60.9 mm), c T11E-1 (TL = 65.9 mm). Solid black lines are modeled trends in growth used to calculate growth anomalies. Note that these trends span from the earliest maximum growth and do not include data from the partial 2010 growth period (open circles). Red vertical lines are nacreous disturbance lines (DL, dashed) and prismatic disturbance lines (PL, dotted) observed during these growth periods. Shading indicates growth periods with shell coloration. In panel (a), the double red lines in 2008 mean there were two PL during that growth period (see Fig. 2a). Numbers in parentheses are estimated ages during the first measurable growth period in each shell. Figure 2 shows the most recent portions of thin shell cross sections (i.e., from the tip) for these mussels.

Our first main objective was to determine whether nacreous/prismatic disturbance lines were more common at more exposed study sites. To address this question, we calculated disturbance line density in each mussel by dividing the number of disturbance lines counted by the number of annual growth periods visible, and averaged across all mussels from a given sampling site. We then used Generalized Additive modeling (GAM; Zuur et al., 2009) to test for a relationship between mean disturbance line density (D_DL/PL) and site exposure (two measures: Zseds, sediment depth in cm; F_eff, effective fetch in km) in different basins (three categories: small basins, South Arm, East Arm) with the following model:

D_DL/PL = a × F_eff + b × Basin + c × (F_eff × Basin) + s(Zseds) + ε    (1)

In this model, we test for a non-linear relationship with Zseds without imposing a particular shape to this relationship (fitted with a thin plate regression spline, s). a-c are fitted parametric coefficients for the other variables and their interaction. We fitted the model with maximum likelihood and removed variables sequentially using the highest Akaike's Information Criterion (AIC; Burnham & Anderson, 2002). We then used restricted maximum likelihood estimation to calculate all fitted coefficients in the final (optimal) model (Zuur et al., 2009). The analysis was done using the mgcv package (Wood, 2022) in R.
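The model in Eq. (1) might be fitted in mgcv along the following lines; the data frame 'site_means' and its column names are hypothetical, and the final model shown is only one possible outcome of the AIC-based simplification:

```r
library(mgcv)

# Full model of Eq. (1): smooth of sediment depth plus parametric fetch, basin,
# and their interaction; ML fitting allows AIC comparison of fixed terms.
g_full    <- gam(dl_density ~ s(Zseds) + Feff * basin,
                 data = site_means, method = "ML")
g_nofetch <- gam(dl_density ~ s(Zseds) + basin,
                 data = site_means, method = "ML")
AIC(g_full, g_nofetch)            # drop the term whose removal lowers AIC

# Refit the retained (optimal) model with REML for the reported coefficients
g_final <- gam(dl_density ~ s(Zseds), data = site_means, method = "REML")
summary(g_final)
plot(g_final)                     # non-linear shape of s(Zseds), as in Fig. 7a, c
```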
Our second main objective was to determine whether the presence of disturbance lines (DL, PL) in a growth period was associated with growth anomalies. Growth anomalies were calculated by fitting a linear or exponential model (best fit) through the time series of growth measured in each mussel, from the earliest growth period with maximum growth to the most recent full growth period (i.e., excluding 2010; Fig. 3). In old mussels (age = 14-37, n = 5), we excluded all growth periods after the mussel reached low adult growth rates (< 1 mm). Trends were only fitted in mussels with at least three growth periods matching these restrictions. Growth anomalies were calculated as the difference between the observed growth in each growth period and the trend line. We then used a nested analysis of variance (ANOVA) to compare growth anomalies in periods with and without disturbance lines. This was done using Generalized Linear Mixed modeling (GLMM) and was restricted to mussels with at least one disturbance line. As a result, some sites were excluded entirely from the analysis (sites SE9, SE10, SW5, T11S excluded from DL analysis). The GLMM was fitted with the nlme package in R, with growth anomaly in each growth period (described above) as the response variable, the presence/absence of disturbance lines as a fixed factor, and sampling site (13 sites for DL, 17 sites for PL) and basin (3 levels: small basins, South Arm, East Arm) as random factors to account for the hierarchical structure of the data. Heteroscedasticity in the data was accounted for by adding random effects in the model (Zuur et al., 2009).
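A compact R sketch of this two-step procedure, with a hypothetical data frame 'growth' (one row per mussel and growth period, with columns growth_mm, year, has_DL, mussel_id, site, basin); the choice between the linear and exponential trend is made here by residual sum of squares on the original scale, which is one reasonable reading of "best fit":

```r
library(nlme)

# Step 1: per-mussel growth trend (linear or exponential, whichever fits better),
# and growth anomaly = observed growth minus the trend
growth$anomaly <- NA
for (id in unique(growth$mussel_id)) {
  rows <- growth$mussel_id == id
  d    <- growth[rows, ]
  lin  <- lm(growth_mm ~ year, data = d)
  expo <- lm(log(growth_mm) ~ year, data = d)           # exponential trend via log scale
  pred_lin <- fitted(lin)
  pred_exp <- exp(fitted(expo))
  pred <- if (sum((d$growth_mm - pred_lin)^2) <= sum((d$growth_mm - pred_exp)^2))
    pred_lin else pred_exp
  growth$anomaly[rows] <- d$growth_mm - pred
}

# Step 2: hierarchical model restricted to mussels with at least one DL
# ('growth_dl'), with site nested within basin as random factors and
# unequal variances allowed between periods with and without lines
m_anom <- lme(anomaly ~ has_DL,
              random  = ~ 1 | basin/site,
              weights = varIdent(form = ~ 1 | has_DL),
              data    = growth_dl)
summary(m_anom)
```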
Results
We collected as wide a range of juvenile and young adult-size mussels (< 60 mm TL) as possible at each site, except for three rocky sites in East Arm (T9E, T2E, T2S) where we could not find mussels smaller than 45-50 mm (Table 1). Internal growth lines confirmed that most mussels were young (Table 1) and were still growing faster than adults in this lake (Cyr, 2020b; ~ 70% of mussels with > 2 mm growth in the last growth period before sampling). Eggs were found in the gills of four (out of 164) mussels, all of which were larger than 50 mm (TL).
Internal shell coloration in early growth periods

All shell cross sections that could be analyzed, except one, showed some internal coloration, and more than half showed coloration throughout the shell up until they were sampled in 2010 (80 of 148 mussels; TL = 24-68 mm, est. age = 2-13 years). In the other mussels, we typically observed translucent brown shell coloration during early growth periods with a more or less sharp transition to clear shell cross sections in more recent growth periods (Fig. 2). This transition occurred as growth declined from high juvenile to low adult growth rates (Figs. 3, 4c), in mussels with median shell length 20-50 mm (Fig. 4a) that were on average 2-7 years old (Fig. 4b).

Fig. 4 Boxplots showing shell size (TL, total shell length) during the last colored growth period, estimated age during the last colored growth period, and growth during the last colored growth period (brown shading) and the first clear growth period (white) in mussels from each basin. Boxplots show median (thick line), 25th-75th percentiles (box), minimum and maximum without outliers (whiskers) and outliers (open circles, > 1.5 × interquartile range). D: Deadman, Sp: Sproule Bay, SA: South Arm, EA: East Arm.
Shell coloration also varied within individual growth periods, usually starting with clear nacre secreted in the lower nacreous layer below the previous year's growth, followed by fully colored nacre later in the growth period (e.g., Fig. 2b). A few mussels showed more complex banding patterns in some growth periods but these were all recorded as "colored."
Isotopic signatures
The mussels within each lake basin had very similar δ 13 C signatures, both between individuals at a given site (small error bars) and between sites (Fig. 5). The δ 13 C signatures of mussels were most similar to those of plankton, slightly more depleted than sediments and much more depleted than benthic primary consumers (snails; Fig. 5). Mean mussel δ 13 C signature was more enriched in the two large basins (−26.6‰ ± 0.1 in East Arm, Fig. 5c; −28.1‰ ± 0.1 in South Arm, Fig. 5b) compared to the two small basins (−29.5‰ ± 0.3 in Sproule Bay, −31.0‰ in Deadman; Fig. 5a), as observed in zooplankton across lakes of different sizes (Post, 2002).
Mussel δ 15 N signatures were not related to their shell length (regressions by site, all FDR-corrected P > 0.4), but varied with internal shell coloration (Fig. 6). Mussels with their most recent full growth period (2009) showing internal coloration had the most depleted δ 15 N signature, most similar to δ 15 N in the sediments (Fig. 6a, c-d). Mussel δ 15 N signatures increased with increasing time since their last colored growth period, and mussels with more than three to four recent clear growth periods reached the δ 15 N signature of plankton. These changes are consistent with the difference in δ 15 N measured between endobenthic and epibenthic mussels collected in Sproule Bay in 2006 (Fig. 6b).
Disturbance lines
Both types of disturbance lines (DL, PL) were equally common in the colored and clear portion of small mussel shells. Nacreous disturbance lines (DL) were present in 25% of the (early) colored growth periods and in 27% of the (late) clear growth periods per mussel, on average (ANOVA, P > 0.1; n = 43 and 28 mussels with colored and light growth periods, respectively), and there was rarely more than one DL per growth period. Prismatic disturbance lines (PL) were found in 50% of the colored growth periods and in 62% of the clear growth periods per mussel, on average (ANOVA, P > 0.09; n = 84 and 44 mussels, respectively) and there were often multiple PL per growth period. This suggests that disturbance lines are produced during early growth periods when the mussels were presumably spending most of their time in the sediments. Mussels at different sites had an average (mean) of 0 to 0.3 DL per growth period (Fig. 7a, b). DL density declined non-linearly with sediment depth (Fig. 7a), but was not significantly related to effective fetch (GAM model, P > 0.6; Fig. 7b, Table 2). The non-linear relationship with sediment depth could be interpreted as a threshold, where mussels at sites with very little fine sediments (≤ 3 cm) have three times as many disturbance lines as mussels with access to deep sediments (> 10 cm; Fig. 7a). Interestingly, the four sites with ≤ 3 cm of fine sediments are the only ones located on small islands and DL density at these four sites appears to increase with increasing effective fetch (high points in Fig. 7b).

Fig. 6 Boxplots comparing the δ 15 N signature of mussels with their last colored growth period observed in different years prior to sampling in: a small basins (Sproule Bay, Deadman), c South Arm and d East Arm. Panel b shows 2006 data from surface (SFC) and endobenthic (ENDO) mussels collected at Sp1 and Sp4. Horizontal reference lines for plankton (blue dash for June, dash-dot for July) and sediments (brown; solid line for mean ± dotted lines for standard error). Boxplots as in Fig. 4. Number of samples shown in parentheses.
Prismatic disturbance lines (PL) were much more common than DL, and were also related non-linearly to sediment depth (Fig. 7c), but not to effective fetch (Fig. 7d, Table 2). Interestingly, PL density was much higher in the two small lake basins (open triangles in Fig. 7) compared to the large basins.
Contrary to expectations, the presence of disturbance lines (DL, PL) did not affect growth in a systematic way. Growth anomalies during periods with disturbance lines showed no significant difference compared to growth periods without disturbance lines (hierarchical ANOVA, p ≥ 0.5 for DL and PL; Fig. 8). We found no evidence that the presence of disturbance lines in juvenile and young adult mussels is related to stunted or enhanced shell growth.
Sediments as habitat
Our data clearly show that juveniles spend many years in the sediments and that there is variability in the size and age at which they emerge from the sediments (Fig. 4). Juvenile mussels in Lake Opeongo lay down colored nacreous material during early growth periods, which likely reflects high organic content in the surrounding sediment porewater. We confirmed that shell coloration is consistent with changes in the δ 15 N signature of their soft bodies. Juveniles with fully colored shells had δ 15 N signatures similar to the sediments and to endobenthic mussels, whereas young mussels that laid down clear shell material over the last three to four growth periods had δ 15 N signatures progressively approaching that of plankton (Fig. 6). Given the rapid growth of juveniles, we expect the nitrogen in mantle tissues to turn over rapidly (Dubois; Kasai et al., 2016). Therefore, the slow (multi-year) change in juvenile δ 15 N signature we observed after they emerge from the sediments (i.e., time since last colored growth period) is likely due to a slow change from deposit feeding in the sediments to full suspension feeding on plankton (Araujo et al., 2018; Lavictoire et al., 2018). Using shell coloration as an indicator of habitat, we determine that juvenile E. complanata emerge from the sediments when they are on average (median) about 20-50 mm (TL) in size and approximately 2.5-7 years old. These results are consistent with independent estimates of shell size when mussels first emerge from the sediments (30-50 mm; Cyr, 2020a) and when they mature (45-50 mm; Downing et al., 1993; H. Cyr pers. obs.). Matteson (1948) also reported maturation "at least as early as the end of the third growing season." The isotopic data show that while in the sediments, juvenile and young adult mussels feed on material of planktonic origin, either directly from the water column or by selective feeding in the sediments. The δ 13 C signature of small mussels was most similar to plankton δ 13 C, with very little variability across a range of body sizes and between sampling sites in each basin (Fig. 5). Small mussel δ 13 C signatures were also more depleted than the δ 13 C signature of the sediments where they feed. Juvenile mussels have the capacity to feed selectively (Beck & Neves, 2003; Fung & Ackerman, 2020), allowing them to use higher quality material (e.g., algae) needed for growth (Gatenby et al., 1997). Cyr (2020a) also reported that small endobenthic mussels feed on material of planktonic origin and our results here extend this finding from one shallow lake basin (Sproule Bay) to nearshore areas across lake basins of different sizes. This result is also consistent with findings of higher concentrations of planktonic algae in river sediments compared to the overlying water column, particularly in depositional areas (e.g., behind boulders), which resulted in efficient feeding by endobenthic juvenile mussels (Fung & Ackerman, 2020).

Fig. 7 Relationships between mean number of disturbance lines per growth period and two measures of nearshore site exposure: sediment depth (left panels) and effective fetch (right panels). Top panels for nacreous disturbance lines (DL), bottom panels for prismatic disturbance lines (PL). Each point is the mean number of disturbance lines per growth period in mussels from one site (n shells listed in Table 1) and different symbols identify the basin (filled squares: East Arm, shaded circles: South Arm, open triangles: small basins). Panels a, c: solid lines (± standard error) are fitted GAM models (Table 2). Panels b, d: no significant relationship with effective fetch.
Our data also suggest changes in shell coloration over the growing season. The growth periods we labeled as "colored" were rarely uniform in color. The most common pattern of coloration was clear nacre laid down early in the growth period in the lower portion of the nacreous layer, which darkened through the growth period (Fig. 2b). Other banding patterns were observed (e.g., clear-dark-clear, dark-clear), but were relatively rare. Assuming that shell coloration is due to colored dissolved organic matter (CDOM) in surrounding sediment porewater, there are several possible explanations for this seasonal change in shell coloration. One possibility is seasonal changes in CDOM concentration in the sediment porewater, due to seasonal changes in organic matter inputs and degradation (microbial, photochemical; Clark et al., 2014) or to water exchange between sediment porewater and the overlying water column. Water exchange is most likely in coarse permeable sediments (Rocha, 2000; Janssen et al., 2005) but was not detected in the shallow nearshore sediments of Lake Opeongo (Cyr, 2012). We did not measure CDOM, but we expect seasonal changes in porewater CDOM concentration to produce similar shell coloration patterns in all buried mussels, whereas in any given year we observed mussels with different coloration patterns, casting doubt on this explanation. A second possible explanation is that endobenthic mussels move to different depths in the sediments at different times of the year. Juveniles are usually found in the upper few mm-cm of sediments, but they are quite motile and can quickly position themselves vertically in the sediments in response to oxygen and surface disturbances (Sparks & Strayer, 1998; Bílý et al., 2021; Hyvärinen et al., 2021). Seasonal patterns of vertical migration are known in adults (Amyot & Downing, 1997; Cyr, 2009), but not in juveniles (Bílý et al., 2021). A third possible explanation is seasonality in juvenile growth rate, with "dilution" of CDOM during periods of rapid growth resulting in lighter shell color. Interestingly, Negishi & Kayaba (2010) reported earlier initiation of growth in young mussels compared to adults, with high growth rates early in the growing season followed by lower growth. It is unclear whether the changes in shell coloration we observed indicate seasonal shifts in environmental conditions within the sediments or in juvenile growth.
Table 2 (notes): Smoothing functions for sediment depth, s(Zseds), are fitted splines without parametric coefficients and with approximate partial P values (Zuur et al., 2009); their non-linear shapes are plotted in Fig. 7. Effective fetch was not significant (partial P > 0.6; Fig. 7b).
Sediment refuge and disturbance lines
We found that nacreous (DL) and prismatic (PL) lines both become more abundant at exposed nearshore sites with little or no sediment refuge (Zseds < 2-3 cm). Mussels in sediments deeper than ~9 cm had low DL density; given that the mussels we sampled were less than 7 cm in length, sediments this deep allowed them to bury themselves entirely in the substrate. This supports the hypothesis that DL and PL are both indicators of natural physical disturbances and that nearshore sediments provide a refuge against physical disturbances. Interestingly, we observed both types of disturbance lines in growth periods with a colored nacreous layer, when juveniles presumably spend most of their time buried in the sediments. Moreover, the density of DL we measured in juveniles and young adults was similar to that observed in adults (0.1-0.4 DL per growth period; Cyr, 2020b), who spend much more time above the sediment surface. Juvenile and adult mussels have both been observed to bury quickly (within minutes) when exposed to stressful conditions (Schwalb & Pusch, 2007; Cyr, 2009; Kemble et al., 2020), so sediments provide a refuge for mussels throughout their life. The existence of disturbance lines in young juveniles suggests that the sediments are an effective but imperfect refuge from natural disturbances. Prismatic (PL) disturbance lines are much more abundant than nacreous (DL) disturbance lines, but were related in a similar non-linear fashion to sediment depth (Fig. 7a, c), suggesting PL are formed under more benign conditions than DL. However, PL were much more abundant in the small, relatively sheltered basins than in the large basins, so natural physical disturbances cannot be the only cause of their formation. Several authors have suggested that direct interactions could be stressful at high mussel densities (e.g., competition for food, interference competition; Peterson, 1982; Allen & Vaughn, 2009). In lakes, mussel density varies in a unimodal fashion with depth of the water column and mussels reach maximum density at greater depths in larger lake basins (Cyr, 2008; Cyr et al., 2017b). At our 2-m sampling depth, mussel density in the two small basins (Sproule Bay and Deadman; mean density = 73 mussels m−2, range = 59-82) was more than an order of magnitude higher than in South Arm (mean = 3.6 mussels m−2, range = 0.4-6.1), and more than two orders of magnitude higher than in East Arm (mean = 0.5 mussels m−2, range = 0.07-1.1; early July data from Cyr, 2009; Cyr et al., 2012 and unpublished). We hypothesize that benign physical disturbances in small lake basins and direct interactions between mussels at high densities are not as disruptive as the natural physical disturbances at exposed sites in large basins that cause nacreous disturbance lines (DL). Mussels in small basins would experience frequent partial mantle retractions producing prismatic lines (PL), but few full mantle retractions producing nacreous disturbance lines (DL).
Fig. 8 (caption): Comparison of Growth Rate (GR) anomalies between growth periods with or without (a) nacreous (DL) and (b) prismatic (PL) disturbance lines in different basins. There are no significant differences for either type of disturbance line (hierarchical ANOVA, P ≥ 0.5). The number of growth periods included in each category is shown in parentheses. Box plots as in Fig. 4.
Disturbance and mussel growth
The distribution and growth of juvenile and of adult E. complanata in shallow nearshore areas of lakes are related to wind exposure (fetch) and to sediment characteristics (Cyr, 2020a, b). Mussels are more abundant and grow faster at more sheltered shallow nearshore sites, but also at wind-exposed sites with fine sediments. Mussel growth in nearshore areas of lakes is therefore limited by wind-driven physical forces, but it is unclear whether this is a direct effect of physical disturbances or is indirectly related to other factors.
The presence of disturbance lines is usually thought to indicate lower shell growth. Disturbance lines are produced after a mussel is exposed to stressful conditions that cause the mantle to retract and detach from the edge of the shell, resulting in a temporary cessation of growth. If growth stops for long enough and cannot be compensated over the remaining growth period, these disturbances will result in lower annual growth. This was confirmed by Haag & Commens-Carson (2008), who found slightly lower growth in mussels that were handled, marked with small notches carved into the shell, and subsequently produced disturbance lines. In contrast, Cyr (2020b) found that the presence of disturbance lines (DL) in adults exposed to natural physical disturbances was related to higher, not lower, growth. Mussels quickly respond to stressful conditions by burying into the sediments but also resume their activities soon after these events (Neves & Moyer, 1988; H. Cyr pers. obs.), so under natural conditions we would expect them to easily compensate for short periods of inactivity. However, adults living at exposed nearshore sites have shorter periods of activity over the growing season than those at sheltered sites (Cyr, 2009), and mussels that remain active longer at these exposed sites would acquire more food and grow better, but would also increase their risk of being exposed to stressful events. In the present study, we find that juveniles and young adults produce disturbance lines but that these have no detectable effect on annual growth, suggesting that small mussels compensate for temporary cessations of growth. Juveniles may also be more plastic than adults in their tolerance to stress (Gleason et al., 2018).
We found no evidence that natural disturbances have a negative impact on the growth of juvenile and small adult mussels in shallow nearshore areas of small to intermediate-size lakes. It is unclear how these results extend to mussels living in more dynamic benthic environments, such as streams and rivers. In highly dynamic marine intertidal areas, mussels and clams grow better at wave-exposed than at sheltered sites, in large part due to higher food influx (Grizzle & Morin, 1989; Menge et al., 1997). Native freshwater mussels are usually unattached to their substrate, so accumulated sediments in shallow nearshore areas of lakes, and possibly in flow refuges of streams and rivers (Strayer, 1999), provide an important but imperfect refuge from natural physical disturbances. Given that previous studies have also found that nearshore fine sediments can host high juvenile mussel densities and growth even in areas highly exposed to natural physical disturbance (Cyr, 2020a), it is clear that these zones of sediment accumulation warrant protection. The expansion of land development prohibitions, which has already been proposed in the EU to preserve the habitat of endangered riparian freshwater mussels (Dobler et al., 2019), is one option worthy of further investigation. Action to preserve these zones is all the more necessary given that these important areas for benthic organisms are directly exposed to nearshore industrial and recreational development, and are impacted by large-scale anthropogenic and climate-related hydrological changes (Pip, 2006; Strayer & Dudgeon, 2010; O'Neill & Thorp, 2011).
|
v3-fos-license
|
2021-02-03T06:16:48.532Z
|
2021-01-22T00:00:00.000
|
231759135
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1660-4601/18/3/969/pdf",
"pdf_hash": "0a6c785365198e70793a0aa6f395b6401fe88fe6",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44559",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "e79e4bd1efcd7e135a1b9f5f179a53dc0fcec4e5",
"year": 2021
}
|
pes2o/s2orc
|
Contribution of Subway Expansions to Air Quality Improvement and the Corresponding Health Implications in Nanjing, China
With China’s rapid economic development, particularly its accelerated urbanization, air pollution has become one of the most serious environmental issues across China. Many major cities in China are expanding their subway systems to help address this problem. This study examines both the long- and short-term effects of subway expansions on air quality, and the corresponding health implications, using a network density-based time series analysis and a distance-based difference-in-differences analysis. Daily and hourly monitor-level air quality data for Nanjing from 13 May 2014 to 31 December 2018, combined with corresponding weather variables, are used to quantify the effect on local air pollution of the eight new subway lines opened in Nanjing. The results reveal that subway expansions result in a statistically significant decrease in the air pollution level; specifically, the air pollution level experiences a 3.93% larger reduction in the areas close to subway lines. Heterogeneity analysis across air pollutants indicates that the air pollution reduction effect of subway expansions is more significant in terms of particulate matter (PM2.5) and CO. A back-of-the-envelope analysis of the health benefits from this air quality improvement shows that the total number of yearly averted premature deaths is around 300,214 to 443,498. A set of alternative specifications confirms the robustness of our results. These results provide strong support for putting more emphasis on the environmental effect of subway expansions in the cost-benefit analysis of subway planning.
Introduction
In recent years, many cities in China have experienced deteriorating environmental quality, which not only poses a threat to residents' health and life but also challenges sustainable urban development. Zhao et al. [1] found that anthropogenic PM 2.5 (fine particulate matter with a diameter of 2.5 micrometers or less) exposure in China resulted in 1.08 million premature deaths in 2012. In addition to PM 2.5, other air pollutants such as nitrogen oxides (NOx), sulfur dioxide (SO 2), carbon monoxide (CO), and photochemical smog and their derivatives formed through physical and chemical reactions also induce serious health problems in China. A large body of literature has confirmed the link between vehicular emissions and air pollution [2,3] and the corresponding health problems [4]. For instance, Li and Yin [5] calculated the share of traffic-related air pollution in total urban air pollution emissions and found that 63% of CO and 37% of NOx were generated by urban vehicular emissions in China. With the increase of private vehicles in China, traffic-related air pollution may become more and more serious if appropriate measures are not adopted. In addition to evidence from any one single city, research based on cross-city comparisons of the relationship between expanded subways and local air quality also suggests the existence of the substitution effect. Based on 45 newly opened rail lines in 14 Chinese cities, Liang and Xi [18] found that improved rail transit may induce more commuters to shift away from cars, especially taxis, which contributes to the improvement in air quality. Gu et al. [19] examined the relationship between transit-oriented development and air quality in 37 Chinese cities, and also found that rail-based transit-oriented development contributes to better air quality. However, some researchers have also found the substitution effect to be insignificant in China. Zhang et al. [20] found that improved rail transit has a small and statistically insignificant effect on the travel mode of residents with cars, based on an analysis of survey data on travel energy consumption in Beijing. Wang et al. [21], using a PSM-DID method, found that the short-term impact of subway expansion on PM 10 tends to be positive, while the longer-term impact is negative.
In summary, the transport-air pollution literature predicts that the final impact of subway expansions on air quality could go in either direction. Although some researchers have examined this relationship in the context of China, most have chosen Chinese megacities like Beijing and Shanghai as the target of interest [22,23], with little focus on medium-sized cities. What is the specific effect of subway expansions on local air quality, particularly in the context of Chinese medium-sized cities? Is there significant heterogeneity among different types of air pollutants in terms of the degree of influence? How should the endogeneity concern be dealt with? The dearth of attention to these questions indicates that a more integrative understanding of subway development and environmental pollution is indispensable.
Therefore, this paper presents a case study of the impact of subway expansion on air quality in Nanjing, using a distance-based DID approach to address the endogeneity concern. Its objectives are to (1) identify the relationship between subway construction and local air pollution, and (2) achieve a comprehensive understanding of the air pollution reduction mechanism of urban rail transit by considering the heterogeneous effects across different types of air pollutants and different time intervals based on hourly air pollution data. Our study may contribute to the transportation infrastructure-air quality literature in the following four areas. First, this study adopts high-frequency air quality data at the monitor level together with subway lines in the same city to identify the causal relationship between subway expansions and air pollution. Second, we construct a continuous subway density indicator to reflect changes in subway network density, which allows us to identify the marginal impact of subway expansions instead of only focusing on the magnitude of air quality change due to the first subway line. Third, a distance-based DID approach that considers the endogeneity in subway density is adopted to handle this concern. Last, heterogeneity tests for different types of air pollutants and different time intervals within a day are performed using hourly air pollutant emission data.
The remainder of this paper is structured as follows. In the second section, we discuss the empirical background in the context of Nanjing, including the situation of air pollution and subway development in Nanjing. Section 3 describes the key data, followed by the empirical strategy. Section 4 presents the main empirical results, and then, on this basis, Section 5 presents a back-of-the-envelope analysis of the health benefits. Section 6 proposes corresponding policy suggestions and Section 7 concludes. Last, Section 8 presents a brief discussion of future research on this topic.
Air Pollution in Nanjing
Over the past decades, sustained and rapid economic growth has been accompanied by an increasingly deteriorated environment in China. Nanjing, as the capital of Jiangsu Province, is a highly urbanized and industrialized city as well as the central city in the northwestern Yangtze River Delta (YRD). Nanjing has also experienced gradually deteriorating air quality along with its rapid economic development. Figure 1 shows the daily change in PM 2.5 concentrations in Nanjing from 13 May 2014 to 31 December 2018. The concentration level on most days during this period is above the World Health Organization (WHO) 24 h guideline value, and the yearly average level is up to 52.4 µg/m3, which is about two-thirds of the Chinese annual standard. Air pollution represents the largest single environmental health risk in the world according to the WHO [24]. A 2013 assessment by the WHO's International Agency for Research on Cancer (IARC) also concluded that the association between ambient air pollution and an increase in cancer incidence had been confirmed, especially lung cancer and cancer of the urinary tract/bladder (see https://www.who.int/news-room/fact-sheets/detail/ambient-(outdoor)-air-quality-and-health for more information). Power plants, vehicles, and industrial activities all contribute to the increase in outdoor air pollution, though identifying the specific share of transport-related sources is challenging because vehicular emissions usually induce secondary air pollutants after physical or chemical processes. However, according to the statistics of China's National Bureau of Statistics (CNBS), the annual average concentration of PM 2.5 in Nanjing was 74 µg/m3 in 2014 but only 43 µg/m3 in 2018. Its subway system, meanwhile, has developed into a well-connected transport network, and trips by public transport have come to account for 50% of total travel in the urban area. Whether a link between these two changes exists needs further empirical evidence.
Subway System of Nanjing
Before 2014, Nanjing had only two subway lines in operation, with 55 stations, and the time gap between the first subway line and the second one was five years (Figure 2), which suggests a slow stage of development. Through hosting the second Youth Olympic Games (YOG) in 2014, Nanjing entered an era of accelerated subway construction. From 2014 to 2018, 8 new subway lines were constructed, reaching a total of 378.38 km. This operating kilometrage makes Nanjing rank No. 4 among China's 37 cities with a subway, and Nanjing was also the first city in China to achieve full coverage of access to subway services across all districts.
Data Sources and Summary Statistics
Different from previous research based on city-level air pollution data, this study collects Nanjing's air quality data at the monitoring station level (the geographic distribution of the 9 monitors is shown in Figure 4). Given the limits of data availability, we collected daily and hourly monitor-level Air Quality Index (AQI), SO 2, NO 2, CO, PM 10, PM 2.5, and ozone (O 3) readings from 13 May 2014 to 31 December 2018. During this period, 8 new subway lines with 104 stations were constructed, accounting for almost two-thirds of the total in operation by the end of 2018. The AQI is based on the abovementioned six atmospheric pollutants and measures daily air quality on a scale of 0 to 500: the higher the value, the greater the air pollution and health risks. The main explanatory variable in this study is an indicator that reflects the opening dates and locations of subway lines. Figure 4 shows the layout of monitoring stations and subway stations in Nanjing in 2018; most of the high-supply areas of subway services lie in the urban area around the city center. Considering that almost all stations on the same line opened on the same day, we chose seven major opening dates during the sample period (as shown in Figure 3) as the dates of interest.
In addition, we also constructed a continuous variable to reflect the subway density:
Subway_density_it = Σ_{j=1}^{N_t} 1 / Dist²_ij, (1)
where i, j, and t represent air quality monitors, subway stations, and days, respectively. N_t indicates the number of currently operational subway stations at time t, and Dist²_ij indicates the square of the distance from monitor i to subway station j at time t. This measure can be regarded as a transformation of the gravity model in physics, reflecting the number of subway stations centered around a specific monitoring station. More subway lines indicate higher subway density; moreover, monitors that are closer to new subway lines will have a higher level of density. Therefore, these monitoring stations are expected to record better air quality, as commuters nearby are more likely to substitute the subway for cars. Figure 5 reports the subway density change of each block (calculated based on Equation (1)) across the city with the subway expansion. In general, the subway density of each block decreases with the distance between the block of interest and the urban center. Both air quality and subway datasets are aggregated to the monitor and daily level. We also included daily weather variables, including daily maximum and minimum temperature, average wind speed, and binary variables indicating rain or snow and the same wind direction from day to night, as local weather conditions are important determinants of air quality [7]. First, sun and high temperatures can function as catalysts for chemical processes of air pollutants. Second, air pollutants will be removed from the atmosphere by precipitation. In view of the fact that the unit of analysis is based on daily data, we also added a set of time-fixed effects (year, month, weekend, season, and holidays). The meteorological data come from the China Meteorological Administration (CMA). The industrial scale and structure, investment in infrastructure, vehicle fleet, and other socioeconomic factors are not controlled due to their unavailability at the monthly or seasonal level. Given this potential problem of missing variables, we adopted a DID method with a 60-day time window around the opening dates of new subway lines to handle it, and we assume that erratic fluctuations of economic activities will not appear in the short term. Table 1 summarizes the descriptive statistics for the main variables of our analysis.
Table 1 (notes): i = 1, . . . , 9 denotes an air quality monitor; t denotes a specific day from 13 May 2014 to 31 December 2018. The standardized density in the vicinity of a given monitor refers to the Subway_density_it calculated from Equation (1) divided by its standard deviation (SD). The unit of observation is monitor-day.
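To make the construction of this density indicator concrete, the sketch below computes it for a toy monitor-station panel. This is not the authors' code; the coordinates, dates, and column names are assumptions introduced here purely for illustration of Equation (1) and the standardization reported in Table 1.

```python
# A minimal sketch (not the authors' code) of the density measure in Equation (1):
# for each monitor-day, sum 1 / Dist^2 over all subway stations already in
# operation on that day. Toy coordinates, dates, and column names are assumed.
import pandas as pd

monitors = pd.DataFrame({
    "monitor": ["M1", "M2"],
    "x": [0.0, 5.0],        # projected coordinates in km (hypothetical)
    "y": [0.0, 2.0],
})
stations = pd.DataFrame({
    "station": ["S1", "S2", "S3"],
    "x": [1.0, 4.0, 9.0],
    "y": [0.5, 1.0, 8.0],
    "open_date": pd.to_datetime(["2014-07-01", "2015-04-01", "2017-01-18"]),
})

def subway_density(monitor_row, date):
    """Sum of 1 / Dist^2 over the stations operational on `date`."""
    operational = stations[stations["open_date"] <= date]
    d2 = (operational["x"] - monitor_row["x"]) ** 2 + (operational["y"] - monitor_row["y"]) ** 2
    return (1.0 / d2).sum()

dates = pd.date_range("2014-05-13", "2015-05-13", freq="D")
panel = pd.DataFrame([
    {"monitor": m["monitor"], "date": d, "subway_density": subway_density(m, d)}
    for _, m in monitors.iterrows() for d in dates
])
# Standardized density as used in Table 1: divide by its standard deviation.
panel["density_std"] = panel["subway_density"] / panel["subway_density"].std()
print(panel.head())
```

In this construction, a new line opening raises the density of every monitor, but by much more for monitors located close to the new stations, which is the variation the identification strategy relies on.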
Econometric Model
In this section, we first employ subway network density as the key independent variable to examine the effect of subway expansions on urban air pollution. This measure relies on the spatial and temporal variation of the network expansion during the data period, reflecting a long-term change in the subway network. Then, we use the DID method, focusing on a shorter time window, as an alternative strategy to confirm the robustness of our results and investigate the heterogeneous effects of different air pollutants. All regressions are performed using Stata 16.
The Relationship between Subway Density and Air Quality
In our primary econometric exercise, we examined changes in air pollution around the air quality monitors and estimated the conditional correlation between air pollution and subway density in Nanjing over time. We specify a reduced-form model to quantify the effects of a marginal increase in subway supply on equilibrium air quality in the vicinity of a monitor. For each pollutant, p ∈ {AQI, PM 2.5, PM 10, CO, NO 2, SO 2, O 3}, recorded by monitor i at time t:
ln Air_pollution_pit = θ0 + θ1 Subway_density_it + Monitor_i + Trend_it + Weather_t + Time fixed effects_t + δ_it, (2)
where ln Air_pollution_pit, the dependent variable, is the daily air pollution level recorded by each monitor, and each pollutant series is logarithmically transformed to mitigate non-normality and heteroscedasticity. Monitor_i indicates the monitor fixed effects that control for unobserved location attributes that may affect air quality; these attributes do not change over time but vary across monitors. Trend_it is a monitor-by-day variable (i.e., the interaction of the dummy for monitor i and the linear time trend t) that picks up time-varying monitor-specific trends, alleviates endogeneity concerns regarding the location of subway lines, and helps to address the spurious regression problem. Weather_t is a vector of weather covariates including dummies for daily snow and rain (rain or snow = 1, otherwise = 0) and fixed wind direction (i.e., the corresponding dummy is set to 1 if the wind direction of day and night remains consistent, otherwise 0), wind speed, as well as daily maximum and minimum temperature. Time fixed effects_t is a set of temporal fixed effects that capture time-varying unobservables, including year, season, weekend (weekend = 1, weekday = 0), and holiday fixed effects (holiday = 1, otherwise = 0). δ_it is an error term.
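The paper reports that these regressions were run in Stata; the sketch below is an illustrative Python translation of the specification in Equation (2), not the authors' code. The column names (aqi, tmax, tmin, wind_speed, rain_snow, fixed_wind_dir, and the density_std variable carried over from the earlier sketch) are assumptions.

```python
# Illustrative sketch of the reduced-form model in Equation (2): log pollution
# regressed on standardized subway density with monitor fixed effects,
# monitor-specific linear trends, weather controls, and time fixed effects,
# clustering standard errors by day. All column names are assumed.
import numpy as np
import statsmodels.formula.api as smf

def estimate_density_model(df, pollutant="aqi"):
    df = df.copy()
    df["ln_pollution"] = np.log(df[pollutant])
    df["trend"] = (df["date"] - df["date"].min()).dt.days   # linear time trend
    formula = (
        "ln_pollution ~ density_std"
        " + C(monitor) + C(monitor):trend"                  # monitor FE and monitor-specific trends
        " + tmax + tmin + wind_speed + rain_snow + fixed_wind_dir"
        " + C(year) + C(season) + weekend + holiday"        # time fixed effects
    )
    return smf.ols(formula, data=df).fit(cov_type="cluster",
                                         cov_kwds={"groups": df["date"]})

# Example use on a hypothetical daily panel:
# res = estimate_density_model(daily_panel, pollutant="pm25")
# print(res.params["density_std"])   # theta_1: approximate % change per SD of density
```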
The empirical approach above focuses on the change in subway density and its impact on air quality, with measurement at the monitoring-station level rather than the city-wide level. Therefore, we expect significant differences in this impact between monitoring stations with denser subway networks nearby and those that are far away from subway lines.
Difference-in-Difference Specification
A simple time-series regression of the causal relationship between subway density and air pollution has a potential endogeneity problem. That is, the locations of new subway lines are not placed randomly but are strongly correlated with active economic activities, which usually imply worse traffic congestion nearby. This non-random placement may lead to a downward bias in the estimated coefficient on subway density in Equation (2). Moreover, cities with subway lines are usually accompanied by serious traffic congestion and air pollution, and the general Ordinary Least Squares (OLS) method may suffer from reverse causality and self-selection bias [25]. Omitted variable bias may also result in endogeneity, which weakens the robustness of the estimated results [26,27]. To address concerns of endogeneity, we also estimate a DID specification. Specifically, we use the OLS method to estimate the following DID model:
ln Air_pollution_pit = β0 + β1 (Subwayn_it × Post_t) + β2 Subwayn_it + β3 Post_t + Weather_t + Time fixed effects_t + Monitor_i + Trend_it + ε_it, (3)
where Subwayn_it is an indicator variable that takes a value of one for monitor i if it is within 2 km of any operational subway stations that were opened on a given date, od (od − 60 ≤ t ≤ od + 60), and a value of zero otherwise. Post_t is a dummy variable that takes a value of one for all 60 days after these new subway stations become operational (od ≤ t ≤ od + 60), and a value of zero otherwise. W indicates the corresponding unit vector; after interacting Post_t with the unit vector W, one obtains a vector of treatment indicators for each monitor on each date. Our coefficient of interest, β1, measures the change in air pollution due to a subway opening for areas near new subway stations over the 60-day period following the opening date. The rest of the control variables are the same as in Equation (2). We set 2 km as the cut-off value following Li et al. [17], who assume that typical walking and biking distances are 1 and 3 km, respectively; the mean of these is taken as the radius within which a subway station can influence commuters' choice of travel mode. The 60-day windows on either side of the 7 opening dates of new subway lines allow us to examine the impact of each new subway line on air quality separately and avoid overlap between each new line's time window.
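A hedged sketch of this distance-based DID for a single opening date is shown below. It is not the authors' implementation: the helper mapping each monitor to its distance from the newly opened stations, and every column name, are assumptions introduced for illustration.

```python
# Sketch of the distance-based DID in Equation (3) for one opening date `od`:
# monitors within 2 km of the newly opened stations are treated, Post switches
# on for the 60 days after the opening, and the coefficient on the interaction
# is the effect of interest. `dist_to_new_station` (monitor -> distance in km
# to the nearest station opened on `od`) and all column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def did_for_opening(df, od, dist_to_new_station, cutoff_km=2.0, window=60,
                    pollutant="aqi"):
    od = pd.Timestamp(od)
    win = df[(df["date"] >= od - pd.Timedelta(days=window)) &
             (df["date"] <= od + pd.Timedelta(days=window))].copy()
    win["subwayn"] = (win["monitor"].map(dist_to_new_station) <= cutoff_km).astype(int)
    win["post"] = (win["date"] >= od).astype(int)
    win["ln_pollution"] = np.log(win[pollutant])
    formula = ("ln_pollution ~ subwayn:post + subwayn + post"
               " + tmax + tmin + wind_speed + rain_snow + fixed_wind_dir"
               " + C(monitor) + weekend + holiday")
    res = smf.ols(formula, data=win).fit(cov_type="cluster",
                                         cov_kwds={"groups": win["date"]})
    return res.params["subwayn:post"]   # beta_1: DID estimate for this opening

# effects = {od: did_for_opening(daily_panel, od, dist_km[od]) for od in opening_dates}
```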
Before the analysis with the DID approach, a test of parallel trends in pre-treatment periods should be conducted: the air quality in treated and untreated subclasses should follow similar trends before the subway opening date. We decompose the study window into 20-day bins and estimate a set of regressions in which the treatment indicator is interacted with dummies for each 20-day bin m, and we set the 20-day window before the new subway lines' opening dates (i.e., m = 0) as the base interval. The coefficient ∂_m captures the differences in trend between pre-opening and post-opening observations. The results are shown in Table 2. There are no significant changes in air quality between the treated and untreated subclasses in any pre-opening interval compared with the base interval. In contrast, we find a significant air pollution improvement effect in all three post-opening intervals in the specification that does not control for the monitor-specific time trend. Therefore, the treatment and control groups satisfy the parallel trends assumption for the DID analysis.
Table 3 presents the results from OLS estimates based on Equation (2) (the original regression data and the corresponding estimation codes are available upon request). This analysis seeks to estimate the conditional correlation between subway density and air quality; columns (1), (2), and (3) report the results with AQI as the dependent variable. We only control for weather variables and time-fixed effects in column (1), and the coefficient suggests a positive connection between subway density and air pollution. The main reason may be that areas with a denser network are usually located in the city center, where more pollutants tend to concentrate. After adding the monitor fixed effect in columns (2) and (3), this positive correlation disappears. Considering the endogeneity of subway line locations (i.e., subway locations are usually closely related to economic activities), column (3) adds monitor-specific time trends to the specification. A downward bias may be generated in the absence of this control variable, and the difference between the estimated results in columns (1) and (2) confirms this. We see that the magnitude of the subway effect on air quality, θ1, is about −0.071 and is different from zero at the 1% level after controlling for the monitor-specific trends. That is, a one standard deviation increase in subway network density reduces the air pollution level by about 7.1 percent, and this result is statistically significant at the 1% level.
Table 3. The effect of subway network density on air pollutants: Ordinary Least Squares (OLS).
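Relating to the parallel-trends test described at the start of this subsection, the sketch below shows one way such an event-study regression could be set up, reusing the hypothetical column names and helpers from the earlier sketches; it is an illustration under those assumptions, not the authors' code.

```python
# Sketch of the parallel-trends (event-study) check: split the +/-60-day window
# around an opening into 20-day bins, interact the treated indicator with bin
# dummies, and use the bin just before the opening (bin 0) as the base interval.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def event_study(df, od, dist_to_new_station, cutoff_km=2.0, pollutant="aqi"):
    od = pd.Timestamp(od)
    win = df[(df["date"] >= od - pd.Timedelta(days=60)) &
             (df["date"] <= od + pd.Timedelta(days=60))].copy()
    win["treated"] = (win["monitor"].map(dist_to_new_station) <= cutoff_km).astype(int)
    # 20-day bins relative to the opening date; bin 0 is the last pre-opening bin.
    win["bin"] = (np.floor((win["date"] - od).dt.days / 20) + 1).astype(int)
    win["ln_pollution"] = np.log(win[pollutant])
    res = smf.ols("ln_pollution ~ treated * C(bin, Treatment(reference=0))"
                  " + C(monitor) + tmax + tmin + wind_speed + rain_snow"
                  " + weekend + holiday",
                  data=win).fit(cov_type="cluster", cov_kwds={"groups": win["date"]})
    return res   # pre-opening interaction terms should be indistinguishable from zero
```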
Estimated results concerning the other air pollutants in Table 3 suggest a similar relationship, except for O 3. This positive effect on O 3 pollution is possibly related to its generation process: O 3 is the reaction product of NOx and volatile organic compounds (VOCs) catalyzed by high temperatures and ultraviolet light [28]. It is more related to human activities, which are positively associated with the density of subway stations. All the estimates for the weather variables are consistent with our intuitive judgments. High temperature contributes to an increase in the air pollution level, while strong wind, constant wind direction, low temperature, and rain or snow all help the dispersion of air pollutants.
Additional Evidence Based on a Difference-in-Differences Analysis
Considering the potential endogeneity concerns with the time-series correlation estimated above, in this section we use a DID method to confirm the robustness of our results. Table 4 shows the estimation results using the DID method specified in Equation (3). We sequentially added weather variables, time, and monitor fixed effects as control variables from column (1) to (5), and all report results similar to those in Table 3; that is, there exists a negative correlation between subway openings and air pollution levels. Specifically, column (5) suggests that a subway opening reduces the air pollution level by 3.93 percent for monitors near new stations (≤ 2 km) compared with those far away from these stations. We also chose 4 km (Subwayn4_it) as the cut-off radius distinguishing the treatment group from the control group. The estimated coefficients are reported in column (6) of Table 4. The results change very little compared to the results in Table 3, which confirms the robustness of our estimation. We also collected hourly air quality data for the same sample period, and the results are reported in column (7) of Table 4, which are similar in sign and magnitude to those in Table 3.
The DID regression results above focus on average effects over the 60 days before and after a subway opening, while different time windows may affect the estimation results. Therefore, we used 10, 20, 30, 40, 50, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, and 180-day windows as alternative specifications, and the results are reported in Table 5. These estimates are broadly consistent with Table 4: subway expansions have a negative effect on air pollution levels during each 40-day or longer period following a subway opening. In the short term, commuters may need some time to switch to rail transit, which results in a positive subway effect on air pollution. When the time window is extended to 40 days, the subway effect on air pollution starts to be negative; however, this effect fades away during the 120-day and longer post-opening windows. With the improvement of Nanjing's subway, the daily average passenger volume by subway reached 1.47 million in 2014 and 3.07 million in 2018, more than doubling within four years. Meanwhile, over the same 4 years (2014-2018), Nanjing added 586,890 private vehicles, reaching 2.07 million in 2018, while the population experienced slow growth during the period. This implies that the reincorporation of these latent drivers is likely to counteract the substitution of public transit trips for auto trips and the improvements in air quality. Therefore, we infer that improved transit due to the subway expansion may induce some latent drivers to travel by car, which weakens the subway effect on air pollution reduction in the long term. In addition, considering that people's choice of travel mode may be endogenously related to air quality (for instance, people tend to travel by subway during "red alert" days), we omit the samples with good (AQI ≤ 50) or poor (AQI > 300) air quality, and the estimation result is reported in column (18) of Table 5. The results are again very similar to those in Table 3.
Table 5 (notes): Robust standard errors in parentheses clustered at the day level, *** p < 0.01, ** p < 0.05. All columns include the following controls: the daily weather fixed effect, time fixed effects, and dummies for monitors and the interactions with the time trend. od represents opening dates.
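A compact way to organize this window-length robustness exercise is sketched below, reusing the hypothetical did_for_opening() helper from the earlier DID sketch; it is an illustration under those assumptions rather than the procedure actually used to produce Table 5.

```python
# Sketch of the window-length robustness exercise: re-estimate the same DID
# while varying the pre/post window from 10 to 180 days and average the
# opening-specific coefficients for each window length.
import numpy as np

def window_robustness(df, opening_dates, dist_by_opening,
                      windows=range(10, 190, 10)):
    return {
        w: np.mean([did_for_opening(df, od, dist_by_opening[od], window=w)
                    for od in opening_dates])
        for w in windows
    }

# Expected Table 5-style pattern: positive or null coefficients at very short
# windows, negative coefficients from roughly 40 days, fading beyond ~120 days.
```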
Heterogeneous Effect
Regression results so far focus on daily average air quality, and intra-day heterogeneity has not been considered. High levels of tailpipe emissions usually appear at peak travel times, while the subway is out of service between 11:00 pm and 5:00 am on any given day. Whether there are distinct patterns for the subway effect on air pollution at different intervals within a day needs further investigation.
Before the empirical examination, we plot the change in hourly pollutant concentrations in Figure 6. AQI shows a relatively stable trend within a day; the slow upward trend from 6:00 am to 12:00 pm may result from the increase in CO, PM 10, and PM 2.5 concentrations over the same period. Moreover, the CO concentration shows a volatile trend, and its daily maximum typically occurs at about 9:00 am, coinciding with the rush hour; this trend reflects the contribution of tailpipe emissions to the deterioration of air quality. For NO 2, we can see a downward trend from 8:00 am to 3:00 pm and an upward trend from 4:00 pm to 2:00 am. As the NO 2 concentration is closely associated with vehicle emissions and industrial production activities, its variation keeps pace with commuting and with factory emissions at night. PM 10 and PM 2.5 show a similar trend within a day, that is, a slow increase during the 6:00 to 10:00 am and 4:00 to 11:00 pm stretches, reflecting their response to the morning and afternoon travel peaks. The minimum PM 2.5 and NO 2 recorded in the afternoon may result from the combined effects of plant photosynthesis, meteorological factors, and commuting times. O 3 shows a decreasing and then increasing trend, with peak values at about 7:00 pm; this is more likely related to its generation process discussed above [28]. As the main contributor to SO 2 is industrial emission, it does not show clear time-varying characteristics; hence, SO 2 is the least volatile air pollutant. Based on the descriptive analysis above, we estimate Equation (3) for three different time intervals in a day (i.e., subway service time: 6:00 am to 11:00 pm; travel peak time: 7:00 to 9:00 am and 5:00 to 7:00 pm; and subway out-of-service time: 1:00 to 5:00 am). The results are reported in Table 6, from which we find that the negative subway effect on air pollution still holds, except for NO 2 and O 3. The subway opening causes a statistically significant reduction in PM 2.5 and CO concentrations, which are also major components of vehicle exhaust. This finding is consistent with Wei's [29] estimation results using panel data of 16 major cities in China from 2014 to 2017. This reduction effect is significant both during subway service time and travel peak time, indicating that subway expansion contributes to the reduction in air pollution levels.
Table 6 (notes): Robust standard errors in parentheses clustered at the day level, *** p < 0.01, ** p < 0.05, * p < 0.1. All columns control for the daily weather fixed effect, time fixed effects (year, holiday, weekend, season, hour), and dummies for air pollution monitoring stations and the interactions with the time trend. The dependent variable is the pollution concentration of each pollutant for a specific hour of a day.
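A simple way to organize the interval-specific estimation described above is sketched below, reusing the hypothetical did_for_opening() helper from the earlier DID sketch; the interval boundaries follow the text, while the hour column and hourly pollutant columns are assumptions.

```python
# Hedged sketch of the intra-day heterogeneity exercise: re-estimate the DID of
# Equation (3) on hourly observations falling in each of the three intervals.
intervals = {
    "service":        list(range(6, 23)),   # 6:00 am to 11:00 pm
    "peak":           [7, 8, 17, 18],       # 7:00-9:00 am and 5:00-7:00 pm
    "out_of_service": [1, 2, 3, 4],         # 1:00 to 5:00 am
}

def did_by_interval(hourly_df, od, dist_to_new_station, pollutant="pm25"):
    return {
        name: did_for_opening(hourly_df[hourly_df["hour"].isin(hours)],
                              od, dist_to_new_station, pollutant=pollutant)
        for name, hours in intervals.items()
    }
```

The out-of-service interval then acts as the placebo discussed below: if the estimated coefficients there are as large as those during service hours, the "subway effect" is likely picking up something other than substitution away from cars.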
The main source of NO 2 is vehicle emissions, and its concentration is particularly elevated near major roadways [30], which is also where the monitors in the treatment group of this study are located. Commuters' demand for the subway is higher during rush hours, when pollutant concentrations are also elevated, especially near those major roadways. In contrast, within the 1:00 to 5:00 am stretch, the parameters for NO 2 and PM are statistically insignificant. In addition to the subway, demand for buses is also higher during rush hour periods accompanied by heavy congestion, and buses generally emit pollutants at a higher rate than auto travel on a per vehicle-mile basis [14,15]. Therefore, air pollution levels may be higher during subway service hours, which may be captured by the DID approach. Our DID analysis focuses on the subway effect on air quality in the short term based on hourly concentration data, which may capture a distinct pattern from the method based on the whole data period. Columns (2), (8), (14), and (20) of Table 6 show that subway expansion is associated with a statistically insignificant decrease in PM 10 concentrations, which further confirms this assumption. In terms of O 3, the positive coefficient is related to its generation process described above. The insignificant coefficient of SO 2 also confirms the findings of the descriptive analysis above. One concern with our research design may be that the subway opening is picking up other systematic unobservable variables. We should not see a significant reduction effect when the subway is out of service, and we probed this claim via a placebo test. Columns (13) to (18) show the subway effect on air quality from 1:00 to 5:00 am. Except for CO and O 3, the other coefficients become statistically insignificant, which is consistent with our main model. Columns (19) to (24) report the results based on hourly data for the whole day.
Health Implication of Subway Expansions
This section presents a back-of-the-envelope analysis of the health benefits from the air quality improvement attributable to subway expansions. We follow He et al. [31] to predict the mortality reduction resulting from the air quality improvement:
Mortality_i = ∆AQL_i × Elasticity × BaseMR_i × Popu_i,
where Mortality_i indicates the estimated prevented deaths in city i during the sample period. ∆AQL_i represents the estimated change in the air quality level in city i during the sample period, which we calculate based on Equation (3) with ln(PM 2.5) as the dependent variable. Elasticity represents the sensitivity of mortality to a one-unit change in air quality. Since its estimate is not the focus of interest in this study, we use estimates from existing research on the effect of air pollution on human health. Because the impacts of air pollution on human health vary over time, research in this field based on data from earlier years may be a less reliable reference. Moreover, considering the greater credibility of estimates based on quasi-experimental studies compared with those based on associational regression models [32], we used Web of Science, Google Scholar, Scopus, and other databases to identify academic articles and book chapters that meet these criteria, and found that eligible articles in this field are not abundant. Fan et al. [33] found that a 10 µg/m3 increase in PM 2.5 resulted in a 2.2 percent increase in the mortality rate based on a Regression Discontinuity (RD) analysis, and He et al. [34] proposed that this rate can reach over 3.25 percent through an estimation of the effect of straw burning on air pollution and health in China; these were chosen as the two main references. Therefore, we set the range of the mortality increase at 2.2-3.25% following a 10 µg/m3 increase in PM 2.5. BaseMR_i denotes the annual mortality rate in city i in the base year, and Popu_i indicates the population of city i in the base year. In view of data availability, we set 2018 as the base year for the mortality rate and population. If we assume that Nanjing is a representative city in China, the same air quality improvement could be achieved through a greater public transport supply in other cities of China. The results show that the total number of yearly averted premature deaths is around 300,214 to 443,498 due to the air quality improvement caused by the development of public transport infrastructure. This reflects the enormous social costs of air pollution in normal times.
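The arithmetic behind this back-of-the-envelope calculation can be sketched as follows. The elasticity range (2.2-3.25% per 10 µg/m3 of PM 2.5) comes from the cited studies; the PM 2.5 reduction, population, and baseline mortality rate used below are placeholder values for a single hypothetical city, not the paper's actual inputs.

```python
# Back-of-the-envelope sketch of the averted-mortality calculation described
# above, with placeholder inputs for one hypothetical city.
def averted_deaths(delta_pm25, elasticity_per_10ug, base_mortality_rate, population):
    """Prevented deaths = (PM2.5 reduction / 10) * elasticity * baseline deaths."""
    baseline_deaths = base_mortality_rate * population
    return (delta_pm25 / 10.0) * elasticity_per_10ug * baseline_deaths

delta_pm25 = 2.0             # ug/m3 reduction attributed to subway expansion (assumed)
population = 8.4e6           # placeholder population
base_mortality_rate = 0.007  # placeholder crude death rate (7 per 1,000)
low = averted_deaths(delta_pm25, 0.022, base_mortality_rate, population)
high = averted_deaths(delta_pm25, 0.0325, base_mortality_rate, population)
print(f"Averted premature deaths per year: {low:.0f} to {high:.0f}")
```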
Policy Implications
Understanding how subway expansion affects air quality based on the high-frequency air pollution data is essential for crafting efficient environmental policies to alleviate the negative effects that air pollution brings on public health and welfare from the perspective of public transportation. In this connection, we propose the following policy recommendations to deal with the public transport-air pollution nexus and take full advantage of public transport's potential in improving urban air quality.
First, planners and policymakers should take more integrated measures that consider the cost of subway construction and the social value of it, especially taking the environmental effects into account simultaneously when conducting a cost-benefit assessment of new subway lines [6].
Second, the government should tighten tailpipe emissions standards and encourage lower-emission vehicles. Meanwhile, attention should also be paid to vehicle-related supporting infrastructure, particularly the expansion of road surface area for public transport and bicycle lanes. Both measures will contribute to solving urban traffic congestion and developing a fast, safe, and convenient transportation system. However, one-size-fits-all approaches, such as driving restriction policies and license-plate lottery policies, should be avoided during this process. Otherwise, these executive orders may distort consumers' car-purchasing behavior and lead to a skewed distribution of high-polluting vehicles [35], and they may restrain residents' travel demand, resulting in a decrease in social welfare [36]. Moreover, a wider road surface area for bicycle lanes encourages daily exercise by cycling from home to school or work, or by walking to and between stations of public transportation, all of which contribute to reducing urban traffic and air pollution as well as improving individual health. In addition to the expansion of road surface area, providing connections to public transit in the periphery of Nanjing through park-and-ride facilities with charging stations for electric cars, and serving larger areas with attractive bicycle parking at metro stations, can be considered to fully unlock the subway's potential to reduce pollution. These measures could also reduce commuting by car to and from suburbs and surrounding areas, and thereby reduce air pollution.
Third, other modes of public transport, such as shared bikes, should be encouraged to connect subway stations and destinations, ultimately forming an integrated transportation system. The shared bike, as an easy and low-cost mode of transportation, contributes to the reduction of air pollution, noise, and traffic congestion, and has been a popular travel mode in China since it first entered the public arena in late 2016 [37]. Although some management problems, such as disorderly bicycle parking and the malfunction of the deposit-refund system, have arisen in the development of market-oriented bike sharing, the average number of commuters reached more than 40 million a day by 2019 [38]. Therefore, there is much room left for improvement to drive the rapid and healthy development of this green travel mode.
Last, policymakers should improve public awareness of the effectiveness and benefits of public transport in reducing air pollution, and help residents form the habit of green travel. It is worthy of note that this transition cannot happen overnight; thus, it will require patient guidance and encouragement along the way.
Conclusions
Given the deteriorating air quality across cities in China, central and local governments have taken the improvement of transportation infrastructure as an effective countermeasure. However, it is hard to know the specific effectiveness or benefits of this measure without information on the magnitude of the air quality improvement caused by public transportation. Previous research in this area generally focuses more on the congestion relief function of public transport [11,12,39], and less research has been devoted to understanding the subway effect on air pollution. This paper took both the short-term and long-term effects of subway expansion on local air quality into consideration based on a network density-based time series analysis and a distance-based DID approach with a 60-day window. Different from previous research on similar topics, which mainly focused on first-tier cities in China [40,41], this study chose Nanjing, a medium-sized city, as the study area, which allowed us to gain a comprehensive understanding of the pollution abatement effect of subway expansions. To shed light on how subway expansion affects local air quality, we used daily and hourly monitor-level air quality data for Nanjing from 13 May 2014 to 31 December 2018, combined with corresponding weather variables. We examined the change in air pollution concentrations caused by the 8 new subway lines opened during the sample period in Nanjing. The results showed a positive effect of subway expansion on local air quality; specifically, the air pollution level experienced a 3.93% larger reduction in the areas close to subway lines. This decline is similar to the result of Gendron-Carrier et al. [7] based on the analysis of 39 cities across the world with aerosol optical depth (AOD) from satellites as the air quality indicator. However, this effect was not consistent among different types of air pollutants: the pollution abatement effect is more significant in terms of PM 2.5 and CO. Chen and Whalley's [6] analysis of Taipei also found a significant reduction in carbon monoxide due to the opening of the Taipei Metro. Liang and Xi's [18] research based on 14 Chinese cities also confirmed the significant reduction in CO and particulate concentrations following a subway system opening. A back-of-the-envelope analysis of the health benefits from this air quality improvement showed that the total number of yearly averted premature deaths is around 300,214 to 443,498. The reduced mortality is larger than that in Liang and Xi's [18] work, which set the environmental effects of the subway in Beijing as the benchmark when calculating the potential health benefits.
Future Research and Expectations
This study confirms the environmental effects of subway expansions across urban areas in China. However, our analysis focused on Nanjing only, and since the precise effects of subway expansions depend on many factors, which may differ across areas, future research could replicate these results in other contexts and unpack the channels through which subways affect air pollution, so that more city-specific policy recommendations could then be proposed. Since the data period in this study is relatively short, future research could extend the data period and examine the environmental effect of subway expansion over a longer time horizon. A non-linear effect of subway expansion on air quality may occur in the long run, so corresponding non-linear models could be considered. Besides, future research could also examine the impact of subway expansion on health outcomes directly, to analyze the role of underlying health conditions in the choice of travel mode. Obtaining reliable and high-frequency data on mortality and on the impact of exposure to air pollution on mortality in different cities, combined with their unique traffic patterns, geography, and economic structures, would contribute to a more credible estimation of the health benefits of subway expansion in future research.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2021-05-01T02:42:07.124Z
|
2021-03-26T00:00:00.000
|
233467550
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.pjms.org.pk/index.php/pjms/article/download/3944/882",
"pdf_hash": "064e839f2b25de6c278b6ac0ec88b9aaca75bb66",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44560",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "064e839f2b25de6c278b6ac0ec88b9aaca75bb66",
"year": 2021
}
|
pes2o/s2orc
|
Frequency of iron deficiency anemia (IDA) among patients with Helicobacter pylori infection
Background and Objective: Helicobacter pylori (H. pylori) is a widespread infection across the globe, with a high prevalence in developing countries. Iron deficiency is anticipated to be the most prevalent micronutrient deficiency globally and the most frequent cause of anemia. Our objective was to determine the frequency of iron deficiency anemia (IDA) among patients with H. pylori gastritis. Methods: This was a cross-sectional prospective study. Patients fulfilling the inclusion criteria were enrolled at Liaquat National Hospital, Karachi, Pakistan. Blood samples were taken for serum iron, transferrin saturation, ferritin, and total iron-binding capacity, and H. pylori was assessed by urea breath test, stool antigen test, rapid urease test, or histopathology. Results: 112 patients with H. pylori infection and anemia were included; 53 (47.3%) were males and 59 (52.7%) were females, with a mean age of 38.4464 ± 9.00634 years. Iron deficiency anemia was seen in 42 patients (37.5%). Conclusion: IDA was noted in 37.5% of cases. H. pylori infection is a frequent cause of iron-deficiency anemia of previously unidentified origin among adults.
INTRODUCTION
H. pylori is a chronic microbial infection that is highly prevalent around the globe, especially in developing countries. The worldwide prevalence of H. pylori is recorded to be about 50%. Though high variation has been associated with age, geography, and socioeconomic status, its overall prevalence is high in developing countries for many reasons.1 H. pylori infection affects people from all across the globe, but its prevalence differs from one region to the other.1 Usually acquired in early childhood, it can become chronic if untreated.2 Most people who acquire this infection do not show many symptoms, which has led to the hypothesis that some H. pylori strains are not harmful or are even beneficial3 and lead to illness in only a very small number of adults.4 It can be a causative factor for multiple upper gastrointestinal diseases like gastritis and gastric or duodenal ulceration, and it even augments the risk of gastric malignancy.5 As per the study conducted by Ford AC et al. on the epidemiological aspects of H. pylori and their implications for public health, the important risk factors proposed for infection include growing age, shorter height, male sex, obesity, tobacco usage, poor socioeconomic conditions, and low educational standing of the parents in studies conducted among children.6 Multiple diagnostic modalities with varying sensitivity and specificity are available for assessing H. pylori infection. These include serology, the urea breath test (UBT), the rapid urease test (RUT), biopsy with histopathology, and culture. The most specific way to establish the diagnosis of infection remains the isolation of the microbe from gastric biopsies. Rasool et al. conducted a study in 2007 which showed that H. pylori was diagnosed by rapid urease test and histology in 61 (65%) and 66 (70%) patients, respectively, while 14C UBT helped diagnose the infection in 63 (67%) patients. The UBT's accuracy was found to be 93% in comparison with histology, with a high positive predictive value of 97% and a negative predictive value of 84%.7 Anemia, defined as a reduction in the quantity of red blood cells (RBCs) or the hemoglobin (Hb) concentration below established cut-off levels, is an international public health issue. According to the World Health Organization Database on Anemia (1993-2005), almost a quarter of the world's population is anemic.8 Active H. pylori infection has been independently related to iron deficiency and the resultant anemia,9 and there are also studies showing a poor response of anemia to oral iron replacement with coexistent active H. pylori infection.10 Valiyaveetil et al. conducted a randomized controlled study in 2004 that suggested that treatment of H. pylori infection may enhance the response to oral iron therapy.10 Eradicating H. pylori results in an enhanced response to oral iron replacement among infected pregnant female patients with iron deficiency anemia.11 This study evaluated the frequency of IDA among anemic patients with H. pylori infection. Multiple studies point toward a positive linkage between H. pylori infection and anemia secondary to iron deficiency.12-14 However, the evidence is still insufficient in a Pakistani population. The results of this study will aid clinicians in identifying patients who are at increased risk of developing anemia secondary to iron deficiency.
Early detection and proper management will therefore protect patients from anemic heart failure, a complication of chronic anemia, and will also improve patients' quality of life by alleviating signs and symptoms of anemia such as lethargy and easy fatigability.
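To make the quoted test-performance figures easier to interpret, the following minimal sketch shows how accuracy, positive predictive value, and negative predictive value follow from a 2 × 2 comparison of an index test against a reference standard; the counts used here are hypothetical and are not the data of the cited study.

```python
# Illustrative sketch: how accuracy, PPV, and NPV of a diagnostic test
# (e.g. a urea breath test compared against histology) follow from a 2x2 table.
# The counts below are hypothetical, not the counts of the cited study.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return accuracy, sensitivity, specificity, PPV and NPV."""
    total = tp + fp + fn + tn
    return {
        "accuracy":    (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

if __name__ == "__main__":
    # Hypothetical 2x2 table for a breath test against a histology gold standard
    metrics = diagnostic_metrics(tp=61, fp=2, fn=5, tn=26)
    for name, value in metrics.items():
        print(f"{name}: {value:.2%}")
```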
METHODS
Adopting a cross-sectional approach, this study was conducted after hospital ethics committee approval (Ref: App#0486-2019-LNH-ERC, dated June 3, 2019) in the Department of Gastroenterology, Liaquat National Hospital, Karachi, from July 29th, 2019 till January 28th, 2020. Enrolled patients were those attending the in-patient or out-patient facilities of the Gastroenterology Department at Liaquat National Hospital, Karachi, with H. pylori antigen present in a stool test, a positive urea breath test, or chronic gastritis due to H. pylori on endoscopy and gastric biopsy, together with anemia. For all patients included in this study, the following information was collected: age, gender, nutritional history, and menstrual history in female patients. Patients were excluded if they had any other source of chronic blood loss. Blood samples were collected for serum iron and ferritin concentrations, transferrin saturation, and total iron-binding capacity (TIBC). Patients were labeled as having iron deficiency anemia when the hemoglobin concentration was less than 12 g/dL in males and less than 11 g/dL in females, and further serum studies showed a ferritin level of < 30 ng/mL with a raised total iron-binding capacity greater than 450 μg/dL, a serum iron level less than 50 μg/dL, and a reduced transferrin saturation of less than 20%. Clinical history along with demographics was recorded by the principal investigator as per the predesigned pro forma, and documented informed consent was obtained before enrolling each patient in the study. To avoid confounding, the inclusion and exclusion criteria were strictly adhered to. Statistical analysis: SPSS version 22 was used for data analysis. Percentages and frequencies were recorded for categorical variables such as gender, education level, socioeconomic status, hemoglobin level, and other parameters such as serum iron, ferritin, transferrin saturation, TIBC, and iron deficiency anemia. Values were calculated as mean ± standard deviation for continuous variables such as age. Effect modifiers such as age, gender, education level, socioeconomic status, and Hb level were addressed via stratification. The chi-square test was applied, and P ≤ 0.05 was considered the level of significance.
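The labeling rule and the stratified analysis described above can be summarized in a short sketch; the laboratory thresholds follow the text, while the contingency table is invented solely to illustrate the chi-square call and does not reproduce the study data.

```python
# Minimal sketch of the IDA labelling rule described above and of the
# stratified chi-square comparison. Thresholds follow the text; the
# contingency table is hypothetical, not the study data.
from scipy.stats import chi2_contingency

def has_iron_deficiency_anemia(sex: str, hb: float, ferritin: float,
                               tibc: float, serum_iron: float,
                               tsat: float) -> bool:
    """Apply the study's IDA criteria to one patient's laboratory values."""
    anemic = hb < 12.0 if sex == "male" else hb < 11.0
    iron_deficient = (ferritin < 30.0 and tibc > 450.0
                      and serum_iron < 50.0 and tsat < 20.0)
    return anemic and iron_deficient

# Example chi-square test of IDA status across two age strata (hypothetical counts)
table = [[10, 15],   # IDA present in the two age strata
         [32, 55]]   # IDA absent in the same strata
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")
```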
RESULTS
A total of 112 patients infected with H. pylori and presenting with anemia were registered for this study. The mean age was 38.4464 ± 9.00634 years. The age distribution is shown in Graph-1, and the descriptive statistics in relation to age are shown in Table-. The frequencies of age groups, gender, education level, and socioeconomic status were calculated according to iron deficiency anemia status.
The results are shown in Table-III. In this study, anemia secondary to iron deficiency was significantly associated with age (p-value = 0.042), while no significant association was observed with gender.
DISCUSSION
In this study, iron deficiency anemia was noted in 42 patients (37.5%) with H. pylori infection. This is comparable to the results of the Monzón et al 14 study, which reported that 38% of patients may have iron deficiency anemia due to H. pylori infection; it also suggests that H. pylori gastritis can be a common etiological reason for IDA among adult patients with iron deficiency or iron refractoriness in whom the routine work-up for the cause of IDA yields no obvious result. One previous study stated that a large proportion of patients with atrophic body gastritis also have IDA, and of these, 61% were diagnosed with H. pylori infection. 15 A Korean study on adolescents (n=937) showed the seropositivity rate for H. pylori among those with iron deficiency to be 35.3%. 16 In the Monzón et al study, 14 eradication of H. pylori was linked with resolution of IDA without any additional iron replacement therapy and with a relapse-free period over a mean follow-up of approximately 24 months. These results support the association of H. pylori infection with iron deficiency anemia. The odds ratio (OR) for H. pylori infection as the causative reason for IDA was as high as ten times in the second group as compared to the first one.
In this study, IDA was noted in 17% of male patients and 20.5% of female patients, compared to an earlier study that reported the prevalence of iron deficiency anemia among dyspeptic patients to be 26.9% overall, with 35.2% of cases in men and 64.8% in women. The prevalence of anemia among patients with H. pylori gastritis was 30.9%, versus 22.5% among those who were not infected. 12 Thus, a hypothesis was put forth that the association of H. pylori with anemia was a result of reduced iron absorption in the context of hypochlorhydria. 13 The mean hemoglobin level in this study was 11.830 ± 1.695 g/dl and the mean transferrin saturation was 27.693 ± 12.695%. Patients having both H. pylori gastritis and iron deficiency anemia are more prone to have corpus gastritis than those who have H. pylori infection but not anemia. 15 Because of corpus gastritis, reduced gastric acid secretion and raised intragastric pH may ensue, which impairs iron absorption. 15 However, gastric acid secretion may normalize after eradicating H. pylori. Likewise, another significant consequence of H. pylori gastritis that results in decreased absorption of iron is a decrease in gastric juice ascorbic acid concentration, as ascorbic acid aids iron absorption from the gut by reducing it to the ferrous form. 17 Another mechanism that has been hypothesized to explain the relation between iron deficiency and H. pylori gastritis is iron uptake by the bacterium itself. Various microorganisms use iron as a growth factor, and H. pylori is one of them; it contains a 19-kDa iron-binding protein resembling ferritin that may play a pivotal role in the storage of excess iron by H. pylori. 18 A further possible mechanism explaining the reduced availability of iron is the sequestration of iron by lactoferrin in the gastric mucosa. H. pylori sequesters iron from human lactoferrin through a receptor-mediated mechanism 19 , and gastric mucosal lactoferrin secretion appears to be affected by H. pylori 20 . Lactoferrin levels in the gastric wall are reported to be considerably higher in H. pylori-positive IDA patients than in persons who are non-anemic and H. pylori-negative, non-anemic but H. pylori-positive, or H. pylori-negative with IDA. This indicates that lactoferrin possibly plays an important role in iron deficiency anemia. 16 In this study, 52.7% of patients were females and IDA was predominant in the female gender. The results of the Monzón et al 14 study on premenopausal women disagree with the earlier results of Annibale et al 15 , in that 92% of the patients, mainly premenopausal females, recovered from anemia at one year of follow-up after H. pylori eradication; the discrepancies have more to do with the definition of response.
There may be certain other factors that are responsible for iron deficiency anemia in otherwise healthy normal premenopausal females. These mainly include increased blood loss during menstrual flow, pregnancy induced higher iron demands, dietary insufficiency, and breastfeeding. 21 Menstrual blood loss may be reduced by approximately 50% by hormonal contraceptive therapy. This may help in females with average or mildly above-average blood loss 22 . Monzón et al 14 study showed that this therapy was also helpful in resolving IDA in those premenopausal females in whom the requirements of iron were increased despite of eradication of H. pylori.
H. pylori infection may also result in latent iron deficiency, which may improve after the infection has been eradicated 23,24 . However, it is not known whether H. pylori-infected patients who simultaneously have latent iron deficiency are at higher risk of developing IDA.
In conclusion, the results of this current study show that H. pylori infection is a common cause of IDA among females and patients with lower education levels.
Limitation of the Study:
The main limitations were the relatively small sample size and the fact that improvement in anemia following H. pylori eradication was not assessed. Additional studies with larger sample sizes are therefore suggested.
|
v3-fos-license
|
2022-12-29T16:15:11.401Z
|
2022-12-24T00:00:00.000
|
255217393
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-0067/24/1/305/pdf?version=1671875721",
"pdf_hash": "fee4eab0ce7c3e03ac4cac8767c81d09a785b2c1",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44561",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "f9262631973bd7b2cdc1008cbb7c6b4966349842",
"year": 2022
}
|
pes2o/s2orc
|
PEO Coatings Modified with Halloysite Nanotubes: Composition, Properties, and Release Performance
In this work, the properties of the coatings formed on the Mg-Mn-Ce alloy by plasma electrolytic oxidation (PEO) in electrolytes containing halloysite nanotubes (HNTs) were investigated. The incorporation of halloysite nanotubes into the PEO coatings improved their mechanical characteristics, increased thickness, and corrosion resistance. The studied layers reduced corrosion current density by more than two times in comparison with the base PEO layer without HNTs (from 1.1 × 10−7 A/cm2 to 4.9 × 10−8 A/cm2). The presence of halloysite nanotubes and products of their dihydroxylation that were formed under the PEO conditions had a positive impact on the microhardness of the obtained layers (this parameter increased from 4.5 ± 0.4 GPa to 7.3 ± 0.5 GPa). In comparison with the base PEO layer, coatings containing halloysite nanotubes exhibited sustained release and higher adsorption capacity regarding caffeine.
Introduction
Most modern engineering areas face a demand for lightweight construction materials. Magnesium alloys combine high performance regarding mechanical properties with low density, which explains their wide application in the design of car engine blocks, gearboxes, steering columns, rotorcraft control-system levers, plane light-weight structure frames, and other machinery parts with a weight-sensitive application area [1][2][3][4][5].
Beyond the industrial sector, biomedical engineering considers the potential use of magnesium alloys for lightweight biodegradable orthopedic implants that eliminate the necessity of a second surgical procedure for implant removal after bone healing [6][7][8][9]. The load-bearing capacity, harmless products of degradation [8], and the essential role of Mg in the human metabolism [10] make this metal and some of its alloys prospective biomaterials.
However, the high susceptibility of magnesium alloys to corrosion and their poor wear resistance inhibit their application in moist and adverse conditions as well as implant material. These issues can be effectively alleviated by the surface engineering techniques aimed at creating protective coatings with various functional properties [11][12][13]. One of the most promising methods of surface modification is plasma electrolytic oxidation (PEO) [14], which induces the formation of thick oxide layers on the surfaces of the valve metals under supercritical conditions [15,16]. PEO became widely investigated due to the multifunctional performance of the formed coatings and the low requirement for the treated surface quality. Depending on the content of the obtained layer, the protective properties of the PEO coatings can be accompanied by photocatalytic activity [17,18], antibacterial properties [19,20], improved hardness [21], etc. The possibility of managing coatings properties through a wide range of adjustable process parameters [22] enables the formation of PEO layers for various fields, including mechanical [23,24] and biomedical engineering [13,25].
The PEO technique allows the formation of coatings with a bioactive composition and a convoluted morphology, providing both the biocompatibility [26][27][28] and osteointegration [29][30][31] of a modified surface, which makes PEO coatings formed on magnesium alloys an appropriate basis for biodegradable implant design. The implant setting could be assisted by the controllable release of pharmaceuticals from a modified PEO layer that, depending on the substance used, would prevent implant-associated infections and inflammatory processes, and accelerate bone tissue recovery.
HNTs are relatively inexpensive and accessible nanoparticles of natural origin; their deposits are quite common for lake sites in Australia, New Zealand, the USA, and Russia [47]. HNTs present the chemical formula of Al 2 Si 2 O 5 (OH) 4 •nH 2 O, the outer surface of the nanotubes is formed by siloxane groups (Si-O-Si), while the inner surface consists of aluminol groups (Al-OH) [48,49]. The encapsulating ability of the HNTs is mostly attributable to the interaction of their lumen aluminol groups with guest molecules by hydrogen bonding, whose strength increases with the dipole moment of the loaded substances [32,50,51].
Many papers have been devoted to the adsorption properties of HNTs [42,52,53]; however, information on the preservation of the adsorption capacity of HNTs embedded in PEO coatings, as well as on the possibility of single-step modification of PEO layers by HNTs loaded with active molecules, is still insufficient. The key issues that require investigation are the oxidizing effect of PEO on organic substances and its destructive effect on HNTs.
Release tests of the samples without HNTs and containing pre-loaded and raw HNTs allow evaluation of the preservation of the adsorptive properties of the HNTs and guest molecules in their lumen throughout the PEO process, as well as the relationship between the surface adsorption of the obtained coatings and the presence of HNTs.
The development of corrosion-resistant coatings modified by nanocontainers on magnesium alloys meets the current needs of modern science, medicine, and technology in lightweight products robust to different operating conditions and environments. The determination of the influence of strong electrical fields, extreme temperatures, and pressures realized during the PEO process on the physical and chemical behavior of incorporated HNTs has great scientific and practical significance for the further development of the theoretical framework behind multifunctional PEO coatings.
Morphology and Composition of the Coatings
The SEM images of the samples' surfaces are presented in Figure 1. The surface morphology of the obtained samples is typical for PEO coatings with volcano-like or crater-like surface structures [15,54,55]. The addition of HNTs to the electrolyte elicited the formation of a more rugged and irregular surface compared to the base PEO coating. The clusters and agglomerates of the nanoparticles can be observed for the H10, H20, H30, and H40 samples. The SEM images of the coatings cross sections are shown in Figure 2. The HNTs strongly affect the internal porosity and thickness of the formed layers. It may be seen that the thickness increases gradually from H0 up to H40 (Table 1), which can be explained by the influence of nanoparticles' addition on the electrical response of the system. Porosity and thickness parameters are closely interrelated, and both depend on the kinetics of the PEO process. The sintering of the HNTs contributes to the creation of layers with a more heterogeneous and porous morphology, affecting the energy and duration of the further plasma discharges. As a result, the porosity of the oxide layers grows proportionally to the HNTs concentration (Table 1). At the same time, porous coatings undergo frequent breakdowns in the weak dielectric points [15,[54][55][56][57], which leads to the formation of more thick and less dense coatings. As can be seen from the high-magnification SEM images of the samples' surfaces (Figure 3), the occurrence of irregularities increases with the concentration of nanoparticles in the electrolyte. Large clusters of nanoparticles become clearly distinguishable for H20-H40 samples. Individual HNTs can be observed for all samples containing nanoparticles, while the base PEO coating possesses a much smoother appearance.
More detailed SEM images of the incorporated nanoparticles and agglomerates on the surface of the H40 sample are shown in Figure 4. The red frames within the pictures represent areas that were magnified and analyzed. The incorporated particles retain their characteristic tubular shape under the strong conditions of the PEO process, which positively affects the adsorptive capacity of the HNTs and the formed layers.

Based on the obtained results, it can be proposed that two main mechanisms of the HNTs' incorporation take place. One part of HNTs is adhered to the bottom of the pores by electrophoretic force and sintered directly throughout the plasma discharge treatment, while the other part of HNTs, from the electrolyte in affinity to the substrate surface, is seized mechanically by the molten oxide layer. Moreover, as can be seen from the SEM images of the pore (Figure 5), part of the HNTs deposited onto the pore bottom maintained their original tubular structure as well. The nanoparticles are present at the bottom of the pores and discharge channels due to both the incorporation mechanism and the high surface energy of these locations mentioned by other scholars [46,58].

The results of surface topography, thickness, and porosity measurements are presented in Table 1. The increasing roughness parameter over the samples indicates HNTs introduction. As can be seen from the 3D surface maps, the roughness increases steadily from the H0 to the H40 sample (Figure 6). The thickness of the layers is directly proportional to the HNTs concentration in the electrolyte. The H40 sample has a coating with an utmost thickness of 62 ± 8 µm, which is 1.2 times higher than the value obtained for the base PEO coating. Element distribution maps obtained by EDX confirm aluminum presence on the coating surface (Figure 7). In view of the fact that this element is a component of the HNTs only (the base PEO layer does not contain Al), its presence corresponds with nanoparticles incorporation. The presence of silicon indicates both the incorporation of the nanoparticles and substrate reaction with silicate ions during the PEO treatment. Oxygen is assigned to such components of the formed layers as oxides, silicates, and HNTs. The presence of sodium is attributable to the cation's sorption from the electrolyte on the coating's surface.
The presence of magnesium is caused by the substrate oxidation into MgO, Mg2SiO4, and other derivatives [55]. The discrepancy in aluminum concentration between the outer layer and the inner one can be seen on the cross-sectional element distribution map: its presence is higher on the surface of the sample in comparison with the internal layer (Figure 8). In particular, clusters with a high aluminum content can be observed on the surface of the H40 sample. This can be explained by the outward and inward coating growth that was proved in different research works [54,55]. Even though the first stage of the coating development is accompanied by the inclusion of nanoparticles [58], the access of halloysite nanotubes to the inner layer of the coating on the further stages of inward coating growth is limited by the size of the particles. On the contrary, HNTs readily reach and incorporate into the forming outer layer of the PEO coating, which leads to the difference in Al presence between the outer and inner layers of the coating.

The HNTs incorporation was also confirmed by the X-ray fluorescence spectrometry results (Table 2). The increase in aluminum concentration on the surface of the studied samples is obvious for all coatings obtained in the electrolyte containing HNTs, which conforms to the previously obtained results.

The X-ray diffraction data of the used nanomaterial are represented in Figure S1. Considering the specificity of the XRD method and X-ray penetration depth, the diffractograms obtained for the PEO samples have no discernible peaks corresponding to halloysite. The intense peaks of a magnesium substrate overlap with halloysite peaks due to a low relative content of the nanoparticles in the samples.
Therefore, diffractograms for the obtained samples are not presented. Figure 9 illustrates the XPS survey and high-resolution spectra of the HNTs and PEO coating obtained in the electrolyte containing 40 g/L of the HNTs. In the represented survey spectra, binding energies associated with the elements were observed previously in element maps and discussed in the EDX analysis results. The acute reduction in the sodium and fluorine content after the etching can be observed, which confirms adsorption of the water-soluble electrolyte components on the coatings surface (Table 3). While Na content is completely dependent on the surface adsorption capacity and decreases substantially after the etching, part of the F − ions react with the substrate with conversion into MgF 2 during the PEO process [55,59], and therefore, the fluorine concentration changes to a lesser extent. According to the calculated data, oxygen forms non-metal and metal oxides. As it can be seen from the high-resolution spectra, oxygen remained in the same states in both the raw powder of HNTs and the obtained coatings.
According to the deconvolution of the XPS spectra, the silicon was found bound in two forms: in the common 4+ oxidation state and in a less oxidized state (peak of E b about 101 eV), which presumably separates quartz and products of chemical or plasma-chemical reactions from the Si in its aluminosilicate state [60,61]. The spectrum presented in Figure 9a is distinguished from the spectrum shown in Figure 9e with a more distinct peak in the lower energy region. These changes in the intensity are attributable to the components of the electrolyte, namely metasilicate. The magnesium substrate reacts with silicate ions during the PEO process and forms forsterite with Si 4+ [62,63]. Accordingly, the peak of this silicon state is more intense for the PEO coating compared to the raw material.
The Al 2p peak of the obtained PEO coating can be deconvoluted into two peaks, which indicates the presence of aluminum in two chemical states. The predominant peak in the lower energy region presumably corresponds to aluminum, which is part of the aluminosilicate (halloysite) [64,65]. The higher energy component presumably corresponds to the dehydroxylation product of the HNTs: Al 2 O 3 . Since the plasma discharge temperature in the PEO process reaches about 4500-10,000 K [56], the reactive incorporation of halloysite nanotubes, accompanied by their thermochemical transformations, is taking place. The HNTs conversion involves the segregation of aluminum oxide, which was mentioned in the work of Kissinger [66] and some other recent papers [67,68]. It is worth noting that the higher-energy component is absent in the spectrum of the raw nanoparticles, which confirms the assumption of the dehydroxylation of the HNTs under PEO. The formation of the secondary phase of Al 2 O 3 is of particular interest, as it factors in the wear and corrosion resistance of the formed coatings.
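As an illustration of the kind of deconvolution described above, the sketch below fits two Gaussian components to a synthetic high-resolution peak; the binding energies, widths, and data are assumptions chosen only for demonstration and do not reproduce the measured Al 2p spectra.

```python
# Illustrative two-component Gaussian deconvolution of a high-resolution
# XPS peak (e.g. an Al 2p region split into an aluminosilicate and an Al2O3
# contribution). Synthetic data; positions and widths are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(e, amp, center, sigma):
    return amp * np.exp(-(e - center) ** 2 / (2 * sigma ** 2))

def two_peaks(e, a1, c1, s1, a2, c2, s2):
    return gaussian(e, a1, c1, s1) + gaussian(e, a2, c2, s2)

# Synthetic Al 2p region: one component near 74.3 eV, a second near 75.5 eV
energy = np.linspace(72, 78, 200)
signal = two_peaks(energy, 1.0, 74.3, 0.5, 0.4, 75.5, 0.6)
signal += np.random.default_rng(0).normal(0, 0.01, energy.size)

p0 = [1.0, 74.0, 0.5, 0.5, 75.5, 0.5]            # initial guesses
popt, _ = curve_fit(two_peaks, energy, signal, p0=p0)
areas = [popt[0] * popt[2], popt[3] * popt[5]]   # proportional to amp * sigma
print("relative component areas:", np.round(np.array(areas) / sum(areas), 2))
```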
Electrochemical Properties of the Coatings
To provide a deeper insight into the influence of nanoparticles incorporation on the characteristics of the coatings, the electrochemical performance of the obtained samples was assessed by electrochemical impedance spectroscopy and potentiodynamic polarization techniques. The change in corrosion properties of coatings after the HNTs incorporation is evident from the analysis of the polarization curves represented in Figure 10 and the calculated performance specified in Table 4.
All samples containing nanoparticles, except the H40 sample, demonstrated a decrease in the corrosion current density in comparison with the base PEO layer. The highest corrosion resistance was demonstrated by the H20 sample; it showed icorr being more than two times lower than the value observed for the base PEO coating (from 1.1 × 10−7 to 4.9 × 10−8 A/cm2). Additionally, a distinct increase in the polarization resistance for the H10, H20, H30 samples in 1.3-1.8 times compared to the samples obtained in the electrolytes without nanoparticles could be observed. The H20 sample exhibited the highest polarization resistance of 1.2 × 10 6 Ω·cm 2 , which is almost two times higher than the Rp value for the H0 sample.
These characteristics can be explained by the incorporation of the HNTs, which led to partial pore sealing with the chemically stable HNTs and products of their thermal conversion. As was already noted for the results of the XPS test of the obtained coatings, they presumably include quartz and aluminum oxide, which contribute to the electrochemical behavior of the coatings. Moreover, the detailed SEM images of the pore demonstrated sintering of the HNTs to the bottoms of the pores and the infilling of incompletely closed channels with sintering products ( Figure 5). Thus, HNTs seal the surface defects of the coatings and improve their chemical stability.
Once the concentration of the HNTs in the electrolyte reaches a value of 30 g/L, the anticorrosive properties of the formed coatings start to deteriorate. This tendency can be explained by an increase in the heterogeneity and porosity of the coatings, which result in the penetration of the aggressive environment toward the magnesium substrate through the defects and increase corrosion current density.
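As a quick arithmetic check of the improvement factor quoted above, the corrosion current densities reported in the text can be compared directly; the snippet below only restates those two values.

```python
# Quick check of the reduction in corrosion current density quoted above;
# the two values are those reported in the text for the H0 and H20 samples.
i_corr_base = 1.1e-7   # A/cm^2, base PEO coating (H0)
i_corr_h20 = 4.9e-8    # A/cm^2, H20 sample
print(f"reduction factor: {i_corr_base / i_corr_h20:.1f}x")  # ~2.2, i.e. more than two times
```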
The experimental data obtained by EIS are presented in the Bode plots ( Figure 11) as dependencies of the impedance modulus (|Z|) and phase angle (θ) on frequency (f).
For the impedance spectra fitting the appropriate equivalent, electric circuits (EECs) were used. The Bode plots have two low-and high-frequency bends (Figure 11b,d), which are responsible for the capacitance of the whole coating (CPE 1 ) and the resistance of the porous sublayer (R 1 ) and the capacitance and resistance of the non-porous sublayer (CPE 2 and R 2 ), R e is a resistance of the electrolyte. Therefore, these two time constants presented in experimental spectra are accountable for the two different sublayers, which can be modelled with two series-parallel R-CPE-chains (Figure 12a). It should be noted that the behavior of the H40 sample after 24 h exposure to corrosive media can be modelled by the EEC with the one time constant presented in Figure 12b, where the single R-CPE-chain is responsible for charge transfer through the PEO layer with high porosity.
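The impedance of an equivalent circuit of this kind can be simulated directly; the sketch below assumes a series arrangement of the electrolyte resistance with two parallel R–CPE pairs and uses hypothetical parameter values, so it illustrates the structure of the model rather than the fitted values reported in Table 5.

```python
# Minimal sketch of the impedance of a two-time-constant equivalent circuit:
# Re in series with two parallel R-CPE pairs. The exact nesting and the
# parameter values below are assumptions for illustration only.
import numpy as np

def z_cpe(freq, q, n):
    """Impedance of a constant phase element, Z = 1 / (Q * (j*omega)^n)."""
    omega = 2 * np.pi * freq
    return 1.0 / (q * (1j * omega) ** n)

def z_parallel(z1, z2):
    return z1 * z2 / (z1 + z2)

def z_circuit(freq, re, r1, q1, n1, r2, q2, n2):
    """Re + (R1 || CPE1) + (R2 || CPE2), evaluated over an array of frequencies."""
    return (re
            + z_parallel(r1, z_cpe(freq, q1, n1))
            + z_parallel(r2, z_cpe(freq, q2, n2)))

freq = np.logspace(-2, 6, 200)   # 10 mHz .. 1 MHz, as in the EIS test
z = z_circuit(freq, re=30, r1=1e5, q1=1e-7, n1=0.85, r2=1e6, q2=5e-7, n2=0.8)
print(f"|Z| at 0.01 Hz ≈ {abs(z[0]):.3e} Ohm*cm^2")
print(f"phase at 0.01 Hz ≈ {np.degrees(np.angle(z[0])):.1f} deg")
```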
Figure 11. Bode plots (dependences of impedance modulus |Z| and phase angle θ on frequency) for the obtained samples. Spectra were acquired after exposure to the corrosive medium for 2 h (a,b) and 24 h (c,d). Impedance spectra are presented by experimental data (scatter plot) and fitting curves (solid lines).

The constant phase element (CPE) was used in this work instead of the capacitance because of the heterogeneity of the coating sublayers. The impedance of the CPE is calculated in accordance with Equation (4):

Z_CPE = 1/(Q(jω)^n), (4)

where Q is the frequency-independent parameter, j is the imaginary unit, ω is the angular frequency, and n is the exponential coefficient.

The results of the fitting of the experimental impedance spectra are presented in Table 5. The Re value according to the calculations was constant for all the studied samples and approximately equal to 30 Ω × cm 2 .

Table 5. Calculated parameters of equivalent electrical circuits (the units of R and |Z| f = 0.01 Hz are Ω × cm 2 ; Q are S × cm −2 × s n ) for the coatings after 2 h immersion in 3.5 wt.% NaCl aqueous solution. |Z| f = 0.01 Hz was measured at the frequency f = 0.01 Hz.

All samples with incorporated HNTs showed a higher impedance modulus at the lowest frequency compared to the base PEO coating. In the set of the samples, the increase in the |Z| f = 0.01 Hz can be observed up until the H30 sample, then the impedance modulus begins to decrease. The H20 sample showed the highest value of |Z| f = 0.01 Hz (1.26 × 10 6 Ω × cm 2 ) after 2 h of exposure, which is 10 times higher than the value obtained for the base PEO coating (1.21 × 10 5 Ω × cm 2 ).
The increase in the R 1 values, especially for the H20 sample, shows the growth of the porous sublayer resistivity as a consequence of the incorporation of HNTs, as was indicated previously for the SEM images and potentiodynamic polarization tests. The R 1 value for the H20 sample is 4 times higher compared to the one for the sample with the base PEO coating (due to more narrow pores), while the R 2 value is 8 times higher for the same matter. This behavior points to the remarkably thicker nonporous sublayer of the H20 sample among other samples, which determines its high anticorrosive properties.
The Q 1 values significantly contribute to the corrosion inhibition rate and are related to the protective properties of the coatings as a whole. The clear decrease in Q 1 and Q 2 magnitudes is obvious for the sample obtained in electrolyte with a concentration of halloysite nanotubes of 20 g/L due to the H20 sample's optimal combination of porosity and sublayer thickness. Table 6 shows an overall reduction in the anti-corrosive properties of the samples after exposure to the corrosive medium for 24 h. This result is a consequence of the corrosive medium reaching the substrate through the defects in the coating. The order in the set of the studied samples remains unchanged: the H20 sample possesses the highest |Z| f = 0.01 Hz . The protective properties of the coatings were also tested by a 28-day immersion in 3.5 wt.% NaCl. The least number of defects was found on the H20 and H30 samples, while for the base PEO coating, pitting corrosion was observed (Figure 13).

Figure 13. The appearance of H0, H10, H20, H30, H40 samples after 28 days of exposure to 3.5 wt.% NaCl solution.
Mechanical Properties of the Coatings
Beyond the chemical resistance, silicon and aluminum oxides are expected to improve the mechanical properties of the coatings, which were assessed using a DUH-W201 tester for the calculation of microhardness and Young's modulus (Table 7). The coatings formed in the electrolytes containing 10 and 20 g/L of HNTs demonstrated the highest microhardness and Young's modulus among all samples, which is attributable to their low porosity and the presence of the HNTs dihydroxylation products. The results presented in Table 7 illustrate that the presence of Al2O3 enhances the microhardness of the coatings containing HNTs in comparison with the base PEO coating by 1.3-1.5 times. This parameter begins to decrease for the samples obtained in electrolytes with the addition of HNTs above 20 g/L, which apparently stems from their porosity.

Figure 14 represents the images of the studied samples after the scratch testing. All samples containing nanoparticles demonstrated LC3 magnitudes exceeding those for the base PEO coating (Table 8). The highest LC2 and LC3 parameter values were demonstrated by the H30 sample. Microhardness, thickness, and porosity factored crucially into the adhesion strength of the tested coatings; therefore, the LC3 parameter began to decrease for the highly heterogeneous and porous H40 sample.
Release Tests
According to the results of the provided studies, the optimal mechanical and electrochemical characteristics were demonstrated by the samples obtained in the electrolyte containing 20 g/L of HNTs; therefore, release tests were carried out using the samples obtained in electrolytes with this concentration of HNTs.
Since aluminum ions are essentially toxic to the human body [69][70][71], we conducted an experiment aimed at determining the possibility of Al 3+ release from the coatings. The samples were immersed in a solution imitating human blood plasma by ionic composition (SBF) for 28 days, after which the solution was analyzed by atomic adsorption spectroscopy (AAS). According to the data obtained, the concentration of aluminum ions in the solution is below the detection limit of the AAS method.
The H20-P samples were prepared using HNTs pre-loaded with caffeine, which allowed us to estimate the applicability of the PEO process for the formation of the coatings with a sustained release of active molecules and assess the maintenance of such molecules in the loaded HNTs lumen throughout the PEO process.
The H20-E samples obtained with pristine HNTs were immersed in caffeine-containing electrolytes, washed, and tested. These samples allowed us to account for the adsorption of caffeine from the electrolyte by the sample surface and compare release rates of the pre-loaded and raw HNTs.
The H0-O samples were obtained in a caffeine-containing electrolyte without HNTs, which allowed us to assess the adsorption capacity of the base oxide layer itself and estimate the possible seizure of the caffeine from the electrolyte by the forming oxide layer.
To assess the capability of the loading of the formed coatings containing HNTs with active molecules and their adsorptive properties, as well as the possibility of the application of such coatings for sustained release of the substances, the release tests for samples that were exposed to the caffeine solution (H20-C, H0-C) were performed. Both H20-C and H0-C were obtained in caffeine-free electrolyte and then exposed to the concentrated caffeine solution, which allowed us to compare and estimate the adsorption capacities of the samples without pretreatment.
As can be seen from Figure 15, the participation of the pre-loaded HNTs in the PEO process elicits prolonged release from the H20-P sample in comparison with the H20-E and H0-C samples. The values exhibited by the H20-E and H0-C coatings correspond to a fluctuation process in the region of a certain equilibrium value of the caffeine concentration, while for the H20-P sample, the concentration of the active molecules increased with the exposure time. The dynamics of the release for coatings with caffeine-loaded HNTs favorably differs from those exhibited by the H20-E and H0-O samples and indicates that part of the caffeine remains in the nanotubes' lumen after the PEO.
Figure 15. Concentration of caffeine in the release medium for the H20-P, H20-E, and H0-C samples. Data are represented as means ± SD (n = 3).

Figure 16 demonstrates the release curves for the H20-C and H0-C samples that were exposed to the saturated caffeine solution. The samples containing HNTs exhibited higher concentrations of released caffeine compared to the H0-C, which can be explained by the adsorption activity of embedded HNTs and the more developed surface of the H20 sample, as it was noted for the SEM images and profilometric analysis of the coatings (Figures 3 and 9).
Release Performance of the Coatings
The release of loaded organic molecules from a coating containing HNTs proceeds in two stages, including a fairly rapid desorption of molecules attached to the oxide layer and HNTs by van der Waals forces and a slower stage of the release from the inner cavity of the nanotubes [72][73][74], where they are held by hydrogen bonds (Figure 17).
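The two-stage behavior just described is often approximated by a biexponential (burst plus sustained) model; the sketch below fits such a model to synthetic cumulative-release data and is intended only to illustrate the functional form, not the measured caffeine concentrations.

```python
# Illustrative sketch: the two-stage release described above (fast surface
# desorption followed by slower release from the nanotube lumen) can be
# approximated by a biexponential model. Data and rate constants are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def two_stage_release(t, a_fast, k_fast, a_slow, k_slow):
    """Cumulative fraction released: fast burst plus slow sustained stage."""
    return a_fast * (1 - np.exp(-k_fast * t)) + a_slow * (1 - np.exp(-k_slow * t))

t_hours = np.array([0.5, 1, 2, 4, 8, 24, 48, 96])
released = np.array([0.18, 0.26, 0.33, 0.38, 0.45, 0.58, 0.70, 0.82])  # synthetic

p0 = [0.3, 1.0, 0.7, 0.02]   # initial guesses for the two amplitudes and rates
popt, _ = curve_fit(two_stage_release, t_hours, released, p0=p0, maxfev=5000)
print("fitted (a_fast, k_fast, a_slow, k_slow):", np.round(popt, 3))
```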
Figure 17. Interaction mechanism between HNTs lumen surface and caffeine.
Caffeine is a rather polar compound, whose molecular structure facilitates its retention in HNTs through hydrogen bonding. According to the modelling of caffeine molecule hydration, its O2, O6 and N9 atoms are prone to serve as hydrogen bond acceptors due to their partial negative charge [75]. Carbonyl moieties contain O2 and O6 atoms with two pairs of non-bonding electrons that interact electrostatically with positively charged hydrogen of hydroxy groups in HNTs lumen [49,76]. The N9 atom might provide weak hydrogen bonding with its pair of electrons as well [75].
The mobile structure and high dipole moment make it possible for the caffeine molecule to intercalate between the HNTs layers by breaking the hydrogen bonds, dipole-dipole interactions, and van der Waals forces holding the aluminosilicate layers together [51]. A high degree of interlayer hydration of halloysite-10 Å (Figure S1) favors complex formation [51,77] due to the expanded interlayer space, into which the caffeine solution penetrates. However, complexation requires both donor and acceptor functional groups as two bonding sites [51,78,79], while caffeine has only acceptor groups. Therefore, this type of interaction is controversial and needs to be confirmed.
Another point to consider is the electronic properties of the HNTs, in particular, the negative charge of the inner surface of nanoparticles that presumably implies some electrostatic interactions with the caffeine dipole, contributing to the adsorption capacity of the coatings besides hydrogen bonding [80].
As schematically represented in Figure 17, caffeine molecules diffuse down the concentration gradient after the hydrogen bonds are broken by thermally activated perturbations or natural fluctuations [81]. Then caffeine accumulates in sinuous channels in which seized HNTs are located. A complex structure of channels retards the fast penetration of the release medium and decelerates the ingress of caffeine into the bulk of the medium, which supports a sustained release. Apparently, the phenomenon of the prolonged release exhibited by the H20-P samples can be attributed to the gradual release of caffeine from channels of the coating, where loaded HNTs are distributed (Figure 17), whereas equilibrium concentrations of caffeine for the H20-E and H0-C are reached rapidly due to the desorption of the intercalating agent from the coatings surface.
Moreover, the H20-C and H20-P coatings containing the HNTs demonstrate higher adsorption capacity due to hydrogen bonding with active molecules, compared to the H0-C and H0-O that adsorb caffeine by the weaker van der Waals forces with their oxide layer.
Samples Preparation
The rectangular specimens of 20 mm × 15 mm × 2 mm in size made of Mg-Mn-Ce magnesium alloy (Mn 1.30; Ce 0.15; Mg bal. (wt.%)) were used as a substrate. The specimens were mechanically ground with a sanding paper of various grits (P600, P800, and P1200), cleaned in an ultrasonic bath Sonorex RK100H (Bandelin, Germany) filled with deionized water. Then samples were degreased with isopropanol and air-dried.
Coatings Formation
Based on the positive results of previous studies [21,82], the solution containing sodium fluoride (5 g/L) and sodium silicate (20 g/L) was chosen as the base electrolyte. The conductivity of this electrolyte was equal to 16-17 mS/cm, and the pH was equal to 10.7-10.8.
In this work, we used halloysite nanotubes (Halloysite Ural, Russia) with a length of 1-3 µm, an outer diameter of 50-70 nm and a lumen diameter of 15-30 nm ( Figure S2). The nanoparticles were dispersed in the base electrolyte using a Sonopulse HD 3200 ultrasonic homogenizer (Bandelin, Germany).
An anionic surfactant (NaC 12 H 25 SO 4 , sodium dodecyl sulfate) was used for stabilization and intensification of the electrophoretic migration of the HNTs dispersed phase. The surfactant concentration in the electrolyte was 0.25 g/L. The concentration of the HNTs in the prepared electrolyte was 0, 10, 20, 30, and 40 g/L (Table 9). Reagent grade chemicals were used in this research. The process of coatings formation was carried out using the plasma electrolytic oxidation unit. The methodology of the PEO process was described elsewhere [21]. During the PEO, the polarizing pulse frequency was equal to 300 Hz. All samples were processed in the two-stage bipolar PEO mode. During the first stage (200 s), the anodic and cathodic components were in galvanostatic (0.36 A/cm 2 ) and potentiostatic (−30 V) modes, respectively. For the second stage (600 s), the anodic component remained galvanostatic (0.36 A/cm 2 ), while the cathodic one changed potentiodynamically from −30 V up to −10 V. The electrolyte temperature was maintained at 10 • C by a recirculating water chiller Smart H150-3000 (LabTech, Italy).
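For reference, the processing parameters listed above can be collected into a single configuration structure; the values are taken from the text, while the representation itself is only an illustrative convenience and not part of the original work.

```python
# Compact representation of the PEO processing parameters described above.
# Values follow the text; the data structure is purely illustrative.
from dataclasses import dataclass

@dataclass
class PEOStage:
    duration_s: int
    anodic_mode: str        # "galvanostatic"
    anodic_value: float     # A/cm^2
    cathodic_mode: str      # "potentiostatic" or "potentiodynamic"
    cathodic_volts: tuple   # (start, end) in V

BASE_ELECTROLYTE = {"NaF_g_per_L": 5, "Na2SiO3_g_per_L": 20, "SDS_g_per_L": 0.25}
HNT_CONCENTRATIONS_G_PER_L = [0, 10, 20, 30, 40]   # samples H0 .. H40

PEO_MODE = [
    PEOStage(200, "galvanostatic", 0.36, "potentiostatic", (-30, -30)),
    PEOStage(600, "galvanostatic", 0.36, "potentiodynamic", (-30, -10)),
]
PULSE_FREQUENCY_HZ = 300
ELECTROLYTE_TEMPERATURE_C = 10

if __name__ == "__main__":
    for i, stage in enumerate(PEO_MODE, start=1):
        print(f"stage {i}: {stage}")
```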
Morphology and Composition Characterization
The study of the surface topography was conducted by the optical laser profilometry method using an OSP370 device installed on an M370 workstation (Princeton Applied Research, TN, Oak Ridge, USA). Image analysis was performed using Gwyddion 2.45 software. The surface topography was characterized by the most common roughness parameters: R a (arithmetical mean deviation of the profile), and R z (ten-point height of irregularities).
The microphotographs of the surface of the samples were obtained using a Sigma 300 scanning electron microscope (SEM) (Carl Zeiss, Munich, Germany). The elemental composition of the surface layers was determined by energy dispersive spectroscopy (EDS) using an INCA X-act EDS analyzer integrated into the SEM (Oxford Instruments, MA, Concord, USA) and an energy dispersive X-ray fluorescence spectrometer EDX-800HS (Shimadzu, Kyoto, Japan).
The coatings thickness was measured using SEM images of the samples cross-section. The porosity (P) was determined by digital processing of the SEM images using ImageJ software (National Institutes of Health, MD, Bethesda, USA). The proportion of the area occupied by pores on the entire visible surface of the coating was estimated according to Equation (1):

P = (Σ_i (S_p)_i / S_0) · 100%,   (1)

where (S_p)_i is the i-th pore area, and S_0 is the area of the analyzed surface. X-ray photoelectron spectroscopy (XPS) was used for the analysis of the chemical composition of the investigated coatings. The XPS measurements were carried out using a SPECS device (SPECS, Germany) with a 150 mm hemispherical electrostatic analyzer. Ionization was carried out with non-monochromatized Al Kα radiation. The transmission energy of the analyzer was 50 eV. The step was 0.1 eV for high-resolution spectra and 1 eV for survey spectra. The scale was calibrated using the C 1s hydrocarbon peak (E_b = 285.0 eV). An Ar⁺ ion source with E_k = 5000 eV was used to etch the samples; the etching time was 5 min, and the average etching rate was 10 Å/s. The X-ray diffraction (XRD) technique was used for the phase analysis of the HNTs and of the obtained coatings. XRD was performed on a Bruker D8 ADVANCE diffractometer (Bruker). The diffraction patterns were recorded in the 4–80° (2θ) range using monochromatic Cu Kα radiation with a step size of 0.02° and a speed of 1 s per step, operating at 40 kV and 40 mA.
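For readers who prefer a numerical illustration, the porosity estimate of Equation (1) amounts to summing pore areas over the analyzed field of view; a minimal Python sketch is given below (function and array names are hypothetical, and it assumes a binary pore mask has already been segmented from the SEM image, e.g. by thresholding in ImageJ or scikit-image).

import numpy as np

def porosity_percent(pore_mask: np.ndarray, pixel_area_um2: float = 1.0) -> float:
    """Estimate porosity P (%) from a binary mask of pores (True/1 = pore pixel).

    Equation (1): P = sum_i (S_p)_i / S_0 * 100, where the pore areas are summed
    over the whole analyzed surface S_0 (here, the full image field of view).
    """
    pore_area = pore_mask.astype(bool).sum() * pixel_area_um2   # sum of (S_p)_i
    total_area = pore_mask.size * pixel_area_um2                # S_0
    return 100.0 * pore_area / total_area

# Example with a synthetic 100 x 100 pixel mask containing a 10 x 10 pore:
mask = np.zeros((100, 100), dtype=bool)
mask[20:30, 40:50] = True
print(porosity_percent(mask))  # -> 1.0 (%)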
Electrochemical Measurements
The electrochemical tests were carried out using a VersaSTAT MC (Princeton Applied Research, TN, Oak Ridge, USA). Potentiodynamic polarization was performed at room temperature using a three-electrode K0235 FlatCell (Princeton Applied Research, TN, Oak Ridge, USA). The samples were studied in 3.5 wt.% NaCl solution. The counter electrode was a platinized niobium mesh. The saturated calomel electrode (SCE) was used as a reference electrode. The area of contact of the sample with the electrolyte was equal to 1 cm². For potentiodynamic polarization tests, samples were immersed in the electrolyte for 60 min to stabilize the electrode potential. The sweep rate for potentiodynamic polarization was equal to 1 mV/s. The samples were polarized from E_corr − 0.15 V up to E_corr + 0.5 V, where E_corr is the corrosion potential.
Values of the corrosion potential (E_corr), corrosion current density (i_corr), and the cathodic and anodic Tafel slopes (β_c and β_a, respectively) were calculated by fitting the polarization curves with the Levenberg–Marquardt (LEV) approach according to Equation (2):

i = i_corr · (10^((E − E_corr)/β_a) − 10^(−(E − E_corr)/β_c)).   (2)

The polarization resistance (R_P) was calculated in accordance with Equation (3); for this, the specimens were polarized from E_corr − 0.02 V up to E_corr + 0.02 V at a sweep rate of 0.167 mV/s:

R_P = (ΔE/Δi) as i → 0.   (3)

The electrochemical impedance spectroscopy (EIS) test was carried out in a frequency range from 1 MHz to 10 mHz, using a 10 mV amplitude sinusoidal voltage. EIS measurements were conducted after 2 and 24 h of exposure to the corrosive medium (3.5 wt.% NaCl). Impedance spectra were acquired at a logarithmic sweep of 10 points per decade. EIS spectra were fitted using appropriate equivalent electrical circuits.
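As an illustration of the fitting step, the sketch below shows how a Tafel-type model of the kind described in Equation (2) could be fitted to measured polarization data with a Levenberg–Marquardt routine; it is a minimal example using scipy, with hypothetical function and variable names, and the data arrays are assumed to be already-measured E–i pairs rather than the authors' data.

import numpy as np
from scipy.optimize import curve_fit  # uses Levenberg-Marquardt for unbounded problems

def tafel_model(E, i_corr, E_corr, beta_a, beta_c):
    """Net current density as a function of potential E (Tafel-type model)."""
    return i_corr * (10 ** ((E - E_corr) / beta_a) - 10 ** (-(E - E_corr) / beta_c))

# E (V vs. SCE) and i (A/cm^2) are assumed to come from the potentiodynamic scan.
E = np.linspace(-1.55, -0.90, 200)
i = tafel_model(E, 1e-7, -1.25, 0.12, 0.18) + np.random.normal(0, 2e-9, E.size)

p0 = (1e-7, -1.25, 0.1, 0.1)  # initial guesses: i_corr, E_corr, beta_a, beta_c
popt, _ = curve_fit(tafel_model, E, i, p0=p0, method="lm")
i_corr, E_corr, beta_a, beta_c = popt
print(f"i_corr = {i_corr:.2e} A/cm^2, E_corr = {E_corr:.3f} V")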
The laboratory-based immersion corrosion test was carried out by immersing the obtained samples in a 3.5 wt.% NaCl solution for 28 days, followed by a visual inspection.
Mechanical Properties Characterization
The study of mechanical properties (the microhardness and elasticity modulus) was carried out using a DUH-W201 dynamic ultra-micro hardness tester (Shimadzu, Japan). The universal microhardness H µ was measured on the samples cross-section using a Berkovich indenter at a load of 100 mN.
The adhesive properties of the surface layers were investigated by scratch testing using a Revetest Scratch Tester (CSM Instruments, Switzerland). A Rockwell diamond indenter was used for scratch testing. The experiments were carried out at a track length of 5 mm with a gradual increase of the applied load from 1 to 20 N at a rate of 9.5 N/min. The following parameters were determined for each coating: L C2 is the load at which the beginning of peeling of coating areas was observed, and L C3 is the load at which abrasion of the coating to the substrate occurs.
Release Tests
An effective adsorption of organic molecules by HNTs' interlayer depends on the dipole moment of the intercalating agent [83][84][85]. Therefore, caffeine was chosen, due to its relatively high dipole moment [86] and the feasibility of its minor amount determination by high-performance liquid chromatography (HPLC). Caffeine was purchased from Sigma-Aldrich (99%). The HPLC system consisted of a Shimadzu LC-20AD HPLC pump, a Shim-pack FLC-ODS column, a Shimadzu SPD-M20A detector (all Shimadzu, Japan), and a water/acetonitrile (80/20) mobile phase was used at flow rate of 0.4 mL/min.
The adsorption capacity was assessed through a comparison of the release rates of caffeine from coatings that were loaded by the different procedures described in Table 10. The PEO process parameters were the same as previously described for all samples used in the release tests. The H20-P coatings were obtained in the electrolyte containing 20 g/L of the HNTs pre-loaded with caffeine. The loading of the HNTs was performed in a saturated water solution of caffeine (2.1 g of caffeine per 100 mL of deionized water). A total of 20 g of the nanoparticle powder was suspended in 100 mL of the caffeine solution and kept under 100 Pa pressure for 10 min using an Epovac vacuum impregnator (Struers, Germany) [87]. The cyclic vacuum treatment was repeated three times. The suspension was then left under atmospheric pressure for 48 h at room temperature (25 °C) under continuous stirring. After the exposure, HNTs were separated from the solution by decantation, followed by filtration. The excess caffeine was washed off with 100 mL of deionized water. To prevent an unintended release of caffeine from the HNTs cavity during the washing, cold water (10 °C) was used. The caffeine-loaded nanoparticles were dried at 25 °C and then used in the PEO process.
The H20-E samples were obtained by exposure of the H20 samples to the base PEO electrolyte with the addition of 7 g/L of caffeine. This caffeine concentration was chosen because it is close to the saturation point at the working temperature (10 °C) [88]. The exposure time was equivalent to the PEO process duration (800 s).
The H0-C coatings were obtained in the base electrolyte, which contained 7 g/L of caffeine. According to several studies, the release rate from HNTs is naturally high and the release peak occurs within 3–4 days [33,35,37]; therefore, one of the release tests lasted for 5 days. Each of the studied samples was placed in a tube filled with 20 mL of deionized water and kept at 37 °C, and the concentration of caffeine in the release medium was measured every day by HPLC.
In a separate 24 h experiment, the H0 and H20 samples were immersed in a concentrated solution of caffeine and then tested. The samples were placed for 1 h in 100 mL of water containing 2.1 g of caffeine, which is close to its saturation point (at 25 °C) [88]. After the exposure, the samples were washed with cold deionized water (10 °C). Then the samples were placed in tubes filled with 20 mL of deionized water and maintained at 37 °C in a circulation thermostat for 1 h. After 1 h, the samples were washed with deionized water and placed in another tube for 2 h; this procedure was then repeated in a similar way for 4, 8, 16, and 24 h. The concentration of caffeine in the release medium was measured by HPLC.
The data were considered to be significantly different at p < 0.05. The release test data are presented as mean values with standard deviation (mean ± SD).
Conclusions
The influence of HNTs on the structure, mechanical properties, and electrochemical properties of the PEO coatings was investigated in the present work. The increase in porosity and heterogeneity of the formed layers is directly proportional to the concentration of the HNTs in the electrolyte. The composition analyses showed that the obtained coatings include not only the HNTs, but also the products of their dehydroxylation under the plasma discharge conditions. The HNTs and the products of their plasma-chemical reactions contribute to the mechanical and anticorrosive properties of the PEO coatings.
A 2-fold decrease in corrosion current density was observed for the samples obtained in the electrolyte with an HNTs concentration of 20 g/L in comparison with the base PEO coating. These coatings also demonstrate the highest polarization resistance and impedance modulus after both 2 and 24 h of immersion in the corrosive medium. The achieved results can be explained by the thickness of the nonporous sublayer of the H20 sample and by the filling of the pores with the chemically stable nanoparticles along with the products of their thermochemical transformation. Moreover, the H20 and H30 samples have the highest microhardness values, which are 1.5 times higher than that of the base PEO layer. The results of our analysis allow us to conclude that the presence of aluminum oxide makes a major contribution to the increase in the protective properties of the coatings.
It was found that coatings with nanocontainers that exhibit sustained release of the organic molecules can be obtained by the incorporation of both pristine and pre-loaded halloysite nanotubes during the PEO.
The pre-loaded HNTs retain guest molecules in their lumen throughout the PEO and enable the release of the hosted substance from the coatings containing such nanocontainers. As shown in this research, these coatings demonstrate at least a 5-day-long release of caffeine. Furthermore, coatings with pristine HNTs display higher concentrations of released caffeine in comparison with the base PEO layer. In both cases, the higher adsorption of caffeine and the sustained release are presumably attributable to the retention of caffeine in the HNTs lumen assisted by hydrogen bonds.
The obtained results lead us to conclude that HNTs retain their shape and adsorptive properties after incorporation into the PEO coatings, which provides the opportunity for the one-step formation of protective coatings on magnesium alloys with an active molecule-delivery property.
|
v3-fos-license
|
2024-07-06T06:17:11.999Z
|
2024-07-04T00:00:00.000
|
270971680
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "bf671302d8fc21993a458a855662665970faf807",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44562",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "6820fb13196ff7ffb6eccde421a611792b74aaea",
"year": 2024
}
|
pes2o/s2orc
|
Social coevolution and Sine chaotic opposition learning Chimp Optimization Algorithm for feature selection
Feature selection is a hot problem in machine learning. Swarm intelligence algorithms play an essential role in feature selection due to their excellent optimisation ability. The Chimp Optimisation Algorithm (CHoA) is a new type of swarm intelligence algorithm. It has quickly won widespread attention in the academic community due to its fast convergence speed and easy implementation. However, CHoA has specific challenges in balancing local and global search, limiting its optimisation accuracy and leading to premature convergence, thus affecting the algorithm’s performance on feature selection tasks. This study proposes Social coevolution and Sine chaotic opposition learning Chimp Optimization Algorithm (SOSCHoA). SOSCHoA enhances inter-population interaction through social coevolution, improving local search. Additionally, it introduces sine chaotic opposition learning to increase population diversity and prevent local optima. Extensive experiments on 12 high-dimensional classification datasets demonstrate that SOSCHoA outperforms existing algorithms in classification accuracy, convergence, and stability. Although SOSCHoA shows advantages in handling high-dimensional datasets, there is room for future research and optimization, particularly concerning feature dimensionality reduction.
Related work
High-dimensional optimization problems are widely found in engineering applications and scientific computing, for example, wind turbine fleet optimization 24 and automobile side impact optimization 25. However, swarm intelligence algorithms mainly suffer from poor solution quality and a tendency to fall into local optima in high-dimensional optimization problems. Therefore, many researchers have proposed improvement strategies to avoid falling into local optima, find globally optimal solutions, and accelerate the convergence speed. Table 1 lists some swarm intelligence algorithms for solving high-dimensional optimization problems.
Neggaz 26 proposed ISSAFD, which relies on the use of the sine cosine algorithm and perturbation operators to improve the performance of the slap swarm algorithm.Hussain 28 proposed the SCHHO algorithm, which enhances the development by fusing the sine-cosine algorithm and Harris Hawks optimisation to dynamically adjust the candidate solutions to avoid the problem of solution stagnation in HHO.Braik 27 proposed Chaotic sequence and Lévy flight with BCSA (CLBCSA).CLBCSA combines chaotic sequences and Lévy flights to enhance the algorithm's local exploitation capabilities while maintaining its global search capabilities.This combination strategy aims to improve the algorithm's ability to avoid falling into local optima and to converge quickly to the global optimum.Yang 35 proposed the Binary Golden Eagle Optimizer algorithm combined with the Initialization of Feature Number Subspace (BGEO-IFNS).With the IFNS approach, BGEO-IFNS can initially generate higher-quality populations, improving the algorithm's ability to search in a high-dimensional search space and the final optimisation performance.Nadimi-Shahraki 36 proposed the E-WOA algorithm, which solves the feature selection problem using a pooling mechanism and three effective search strategies.Finally, the E-WOA algorithm was applied to COVID-19 disease feature selection.Rajalaxmi 30 proposed the BIGWO algorithm.First, the optimal solution is solved by the GWO algorithm; then, the optimal subset of features is obtained by binary conversion of the optimal solution with V-and S-shaped functions.Gad 31 proposed the iBSSA algorithm, firstly, to improve the local exploration capability using a local search algorithm; secondly, to improve the global search capability using a roaming agent approach; and finally, to obtain the optimal feature subset by a binary transformation of the optimal solution using V-and S-shaped functions.Wang 32 proposed the ABGWO algorithm.First, an adaptive coefficient is introduced to improve the local exploration capability and global search capability of the GWO algorithm.Finally, the optimal feature subset is obtained by binary conversion of the optimal solution by a Sigmoid transformation function.Long 33 proposed LIL-HHO.First, the escape energy E is improved by a sinusoidal function to achieve a good transition from the exploration phase to the exploitation phase.Second, the search accuracy is enhanced by introducing the individual's best position for each eagle.Third, crystal imaging learning is used to eliminate the local optimum and thus obtain the global optimum solution.Finally, experiments prove that this algorithm outperforms the comparison algorithm.Peng 44 proposed the EHHO algorithm.First, the optimal solution is obtained by optimizing the HHO algorithm through a hierarchical structure.Then, the optimal subset of features is obtained by binary conversion of the optimal solution through a V-transformation function.Chang 45 proposed by the MSGWO algorithm.First, a Random Opposition-based Learning (ROL) strategy is applied to improve the population quality in the initialisation phase.Secondly, the convergence factor is adjusted nonlinearly to reconcile global exploration and local exploitation capabilities.Finally, a twostage mixed-variance operator is introduced to increase population diversity and balance the exploration and exploitation capabilities of GWO.Houssein 46 proposed the mSTOA algorithm.The algorithm uses a balanced exploration/exploitation strategy, an adaptive control parameter strategy, 
and a population reduction strategy to solve the problem of poor convergence and improve classification accuracy. Duan 47 proposed the cHGWO-SCA algorithm. First, the SCA algorithm is used to update the position of the head wolf; second, the grey wolf is guided to search for prey using moderate value weights and individual optimal positions to obtain the global optimal solution. Nadimi-Shahraki 43 proposed the MFO-SFR algorithm, which improves the performance of the search process through the stagnation finding and replacing (SFR) strategy. Secondly, archives are used to enrich the diversity of the population. Finally, experiments prove that the algorithm is effective.
Wang 29 proposed the BChOA algorithm.First, the optimal solution is found by the ChOA algorithm.Then, the optimal subset of features is obtained by binary transformation of the optimal solutions by V-and S-type functions.Pashaei 37 proposed the BCHoA-C algorithm.Firstly, the MRMR algorithm ranks the feature set and filters a subset of features with high relevance and low redundancy.Secondly, the CHoA algorithm finds the optimal solution.Finally, the optimal subset of features is obtained by binary conversion of the optimal solution using V-type and Sigmoid conversion functions.Khishe 49 proposed OBLChOA.This algorithm gets the global optimal solution using a greedy search and backward learning strategy.Jia 39 suggested EChOA, which firstly initializes the population using polynomial mutation; secondly, calculates the gap between the lowest social status chimp and the leader chimp via Spearman's rank correlation coefficient; and finally, uses the beetle's tentacle operator to jump out of the local optimum to obtain the global optimum solution.Liu 40 proposed ULChOA, an algorithm that updates the location of prey using a generic learning mechanism that provides a dynamic balance between the exploration and exploitation phases.The algorithm was finally demonstrated to be effective through experiments.Kaur 34 proposed the SChoA algorithm.The algorithm solves the slow convergence by improving the Chimp's search and updating the equation with a sine cosine function to obtain the optimal solution.Gong 38 proposed the NChOA algorithm, which uses niching techniques, individual optimal techniques for PSO, and local search techniques to improve search efficiency and increase convergence speed.Wang 41 proposed AChOA, initialising the population through a Tent chaotic mapping.Secondly, it uses an adaptive non-linear convergence factor and adaptive weight coefficients to improve population diversity.Finally, a Lévy flight strategy is applied to jump out of the local optimum.The method is experimentally proven to be effective.Fahmy 42 proposed ECH3OA, which obtains the global optimal solution by combining a fusion of the enhanced Chimp Optimization Algorithm (ChOA) and Harris Hawkes Optimization Algorithm (HHO).Bo 48 proposed the GSOBL-ChOA 48 Greedy choices and oppositional learning algorithm.Firstly, the convergence rate is accelerated by applying the OBL technique in the exploration phase of ChOA.Second, a greedy selection strategy is used to find the optimal solution.Although the swarm intelligence algorithms mentioned above improve search efficiency and increase convergence speed, they still suffer from unbalanced exploration and exploitation, poor solution quality, and tend to fall into local optimality.According to our study, enhancing local exploration, increasing population diversity, and finding globally optimal solutions have become essential for studying swarm intelligence algorithms in high-dimensional optimization [50][51][52] .Therefore, this paper focuses on the location update equations and global optimization mechanisms in the CHoA algorithm.It proposes a Chimp optimization algorithm with a coevolutionary strategy and Sine chaotic opposition learning and also applies it to the high-dimensional classification feature selection problem.
Chimp Optimization Algorithm
The CHoA algorithm is a swarm intelligence optimization algorithm proposed to simulate the hunting behaviour of chimps in nature. The chimp hunting process is generally divided into chasing and attacking the prey. The standard CHoA algorithm selects an attacker (first-best solution), a barrier (second-best solution), a chaser (third-best solution), and a driver (fourth-best solution) to discover potential prey locations jointly. In the search space, the chimp group mainly uses these four best-performing chimps to guide the other chimps toward promising regions, while the attacker, barrier, chaser, and driver predict the possible location of the prey during the continuous iterative search, thereby guiding the search for the global optimal solution. The mathematical model of a chimp chasing prey during the search process is as follows:

d = |C · X_prey(t) − m · X_chimp(t)|,   (1)
X_chimp(t + 1) = X_prey(t) − a · d.   (2)

In Eqs. (1) and (2), X_prey is the position vector of the prey, X_chimp is the position vector of the current individual chimp, t is the number of the current iteration, and a, C, m are coefficient vectors, which are calculated as follows:

a = 2 · f · r_1 − f,   (3)
C = 2 · r_2,   (4)
m = Chaotic_value.   (5)

Among them, r_1 and r_2 are random numbers between [0, 1]. f is the convergence factor, whose value decreases non-linearly from 2.5 to 0 as the number of iterations increases up to the maximum number of iterations t_max. a is a random vector that determines the distance between the chimp and the prey, with values between [−f, f]. m is the chaotic vector generated by a chaotic mapping. C is the control coefficient for chimp expulsion and prey chasing, and its value is a random number between [0, 2].
The mathematical model for the chimp attack on prey is as follows:

d_attacker = |C_1 · X_attacker − m_1 · X(t)|,  d_barrier = |C_2 · X_barrier − m_2 · X(t)|,   (6)
d_chaser = |C_3 · X_chaser − m_3 · X(t)|,  d_driver = |C_4 · X_driver − m_4 · X(t)|,   (7)
X_1 = X_attacker − a_1 · d_attacker,  X_2 = X_barrier − a_2 · d_barrier,   (8)
X_3 = X_chaser − a_3 · d_chaser,  X_4 = X_driver − a_4 · d_driver,   (9)
X(t + 1) = (X_1 + X_2 + X_3 + X_4) / 4,   (10)
X_chimp(t + 1) = X(t + 1) if u < 0.5, or Chaotic_value if u ≥ 0.5.   (11)

In Eqs. (6) to (11), X(t) is the position vector of the current chimp; X_attacker, X_barrier, X_chaser, and X_driver are the position vectors of the attacker, barrier, chaser, and driver, respectively; and X_chimp(t + 1) is the updated position vector of the current chimp. Chaotic_value is the value of the chaotic mapping used to update the position of the solution. From Eq. (10), it is clear that individual chimp positions are estimated from the four best individuals, while the other chimps update their positions around them. From Eq. (11), to simulate the social behaviour of chimps attacking their prey, u is a random number between [0, 1]: when u < 0.5, Eq. (10) is used for the position update; when u ≥ 0.5, the position is updated using the chaotic mapping, so the chimp's attack behaviour is determined randomly.
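To make the two-phase update concrete, the following minimal Python sketch (illustrative only; the variable names and the simplified chaotic term are assumptions, not the authors' reference code) updates one chimp's position from the four leaders in the spirit of Eqs. (6)–(11).

import numpy as np

def choa_update(x, leaders, f, rng):
    """One ChOA position update for a single chimp.

    x       : current position vector of the chimp, shape (D,)
    leaders : list of the 4 leader positions [attacker, barrier, chaser, driver]
    f       : convergence factor, decreasing non-linearly from 2.5 to 0
    rng     : numpy random Generator
    """
    candidates = []
    for x_lead in leaders:
        a = 2.0 * f * rng.random(x.shape) - f   # Eq. (3), a in [-f, f]
        c = 2.0 * rng.random(x.shape)           # Eq. (4), C in [0, 2]
        m = rng.random(x.shape)                 # Eq. (5), placeholder chaotic value
        d = np.abs(c * x_lead - m * x)          # Eqs. (6)-(7)
        candidates.append(x_lead - a * d)       # Eqs. (8)-(9)
    mean_pos = np.mean(candidates, axis=0)      # Eq. (10)

    u = rng.random()
    if u < 0.5:                                  # Eq. (11): leader-guided update
        return mean_pos
    return rng.uniform(-1.0, 1.0, x.shape)       # Eq. (11): chaotic update (simplified here)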
Proposed improved chimp optimization algorithm
The traditional CHoA has several limitations, such as falling into local optima, slow convergence, and imbalanced development.Therefore, our work aims to develop new variants of CHoA.The proposed algorithm does not affect the basic framework of the CHoA algorithm.Still, it only introduces a social coevolution strategy into the CHoA location equation to overcome the blindness of search and dynamically adjust the balance between local exploration and global exploitation.The Sine chaotic opposition learning mechanism improves the full search capability, enabling the algorithm to jump out of the local optimum solution.This is described in detail below.
Social coevolution strategy
From Eq. (11), individual chimp positions are determined jointly by the attacker, barrier, chaser, and driver, or by chaotic mapping. This update equation has the following disadvantages:
• When the four key individuals in the population (the attacker, barrier, chaser, and driver) are all caught in a local optimum, the entire population risks converging towards a locally optimal solution, significantly constraining the algorithm's global search capability.
• If the attacker, barrier, chaser, and driver fall into the confines of a local optimal solution during the iterative process, the whole chimp population may quickly fall into the trap of this local optimum. This severely limits the algorithm's convergence efficiency and slows its exploration towards the global optimum.
• Chaotic_value, as a randomly generated vector, carries a certain degree of randomness in its triggering mechanism, with about half the probability of being activated. However, this randomness also leads to a lack of stability. Although Chaotic_value allows individuals to escape from a local optimal solution to a certain extent, it does not fully consider the interactions and information exchanges within the population during the search. In particular, Chaotic_value fails to fully utilise the potential of learning and acquiring positional information from other individuals in the population, which limits its efficacy in improving search efficiency and finding global solutions.
Therefore, to address the defects in the above algorithm and to enhance the local exploitation capability of the chimp optimization algorithm and the ability of chimp individuals to communicate, this paper proposes to update the chimp individual positions using a social coevolution strategy, with the following equation:

X_chimp_i(t + 1) = X_chimp_i(t) + r_3 · (X_attacker − C · R).   (12)

In Eq. (12), r_3 is a random number between [0, 1]; C = (X_chimp_i + X_chimp_{i−1}) / 2 is a co-occurrence quantity, which represents the relationship characteristics of chimp i and chimp i − 1 in the chimp population; and R is the benefit factor. This representation of the benefit factor R allows an adequate representation of whether individual chimps benefit partially or fully from the interaction. When R = 1, chimp i and chimp i − 1 gain a small benefit from interacting with each other. When R = 2, chimp i and chimp i − 1 benefit greatly from interacting with each other.
The term r_3 · (X_attacker − C · R) is the social co-evolution component, which not only allows the optimal chimp (X_attacker) to exchange information with ordinary chimps but also allows each chimp to exchange information with its neighbouring chimp. This approach means that a chimp no longer searches only around the circle defined by the attacker, barrier, chaser, and driver. Furthermore, Eq. (12) leads the individual chimp to converge steadily to the optimal value, which improves the algorithm's search accuracy and speed and yields the desired search results.
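A minimal Python sketch of this update is given below; it is an illustrative reading of Eq. (12) under the assumptions stated above (the co-occurrence quantity is taken as the mean of two neighbouring chimps and the benefit factor R is drawn from {1, 2}; these are assumptions, not the authors' exact formulation).

import numpy as np

def social_coevolution_update(x_i, x_prev, x_attacker, rng):
    """Social co-evolution position update for chimp i.

    x_i        : current position of chimp i, shape (D,)
    x_prev     : position of the neighbouring chimp i-1, shape (D,)
    x_attacker : position of the best chimp (attacker), shape (D,)
    """
    r3 = rng.random(x_i.shape)               # random number in [0, 1]
    C = (x_i + x_prev) / 2.0                 # assumed co-occurrence (mutual) quantity
    R = rng.integers(1, 3)                   # benefit factor: 1 (small) or 2 (large benefit)
    return x_i + r3 * (x_attacker - C * R)   # social co-evolution component added to x_i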
Sine chaotic mapping strategy
Chaos 53 is a stochastic, non-periodic, and non-convergent behaviour found in non-linear dynamical systems. In mathematics, chaotic systems are a source of randomness. The main idea is to exploit the random and ergodic nature of chaotic motion by mapping variables into the value interval of chaotic variables and finally linearly transforming the resulting solution back into the space of optimized variables. The standard chaotic mappings in the optimization field are the logistic mapping 54, the Tent mapping 55, etc. Sine chaotic mapping can help the algorithm jump out of the boundaries of local extreme points due to its ability to search over a wide range. Therefore, using this advantage of sine chaotic mapping, the algorithm can explore the solution space more deeply and reduce the risk of falling into sub-optimal regions, improving the solution quality and the overall performance of the optimisation process 56. The Sine mapping is calculated as follows:

S(x^j_i) = a · sin(π · x^j_i).   (13)

In Equation (13), a ∈ (0, 1] is the control parameter and S(x^j_i) ∈ [−1, 1] is the chaotic sequence value.
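The sketch below generates a sine-chaotic sequence under the reading of Eq. (13) given above (iterating the map is an assumption for illustration; parameter names are hypothetical).

import numpy as np

def sine_chaotic_sequence(x0: float, a: float, n: int) -> np.ndarray:
    """Iterate the sine map S(x) = a * sin(pi * x), starting from x0 in [-1, 1]."""
    seq = np.empty(n)
    x = x0
    for k in range(n):
        x = a * np.sin(np.pi * x)  # Eq. (13) applied iteratively
        seq[k] = x
    return seq

print(sine_chaotic_sequence(x0=0.7, a=1.0, n=5))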
Opposition-based Learning
Opposition-Based Learning (OBL) is a mathematical method proposed by Tizhoosh 57, the essential principle of which is to select the better solution for the next iteration by estimating and comparing the feasible solution with its opposite solution. Rahnamayan 58 proposed an opposition learning strategy based on the neighbourhood centre of gravity, allowing the particle swarm to take in the group search experience and increasing population diversity. Yin 59 showed that introducing opposition-based learning competition for local search into the basic particle swarm algorithm can improve the algorithm's performance in solving high-dimensional optimization. All of these studies use the opposition-based learning approach to allow the opposite solution to reach the vicinity of the optimal solution more accurately and thereby improve intelligent optimization algorithms. Thus, the computational model of opposition learning is specified as follows:

x̄^j_i = lb_j + ub_j − x^j_i,   (14)

where X_i = (x^1_i, x^2_i, ..., x^j_i, ..., x^D_i), i = 1, 2, ..., N; j = 1, 2, ..., D; N is the population size and D is the dimension of the search space. X_i is a point in the D-dimensional space, X̄_i is the opposite of X_i, and lb_j and ub_j are the lower and upper bounds of dimension j.
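A minimal Python sketch of the opposition operator in Eq. (14) follows; the dimension-wise bounds are assumed to be taken as the current population's minimum and maximum, as in the convergence proof later in the paper.

import numpy as np

def opposite_population(X: np.ndarray) -> np.ndarray:
    """Compute the opposite of each individual, dimension by dimension (Eq. (14)).

    X : population matrix of shape (N, D); bounds are taken per dimension as
        lb_j = min_i x_i^j and ub_j = max_i x_i^j.
    """
    lb = X.min(axis=0)
    ub = X.max(axis=0)
    return lb + ub - X  # broadcast over all N individuals

X = np.array([[0.2, 0.9], [0.5, 0.1], [0.8, 0.4]])
print(opposite_population(X))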
Sine chaotic oppositional learning
From the ChoA algorithm description 14, in performing global exploration the Chimp algorithm first updates the dimensional information of the solution. Subsequently, it evaluates the objective (fitness) function. Next, the fitness value of the current position is compared to the fitness of the previous position to determine whether that position is used for the next iteration. However, as the dimensionality increases, the algorithm may face a decrease in population diversity at a later stage of the iteration, which increases the risk of falling into a local optimum. This diversity reduction directly affects the algorithm's convergence speed and the accuracy of the final solution. At the same time, it is clear from the descriptions in "Sine chaotic mapping" and "Opposition-based Learning" that Sine chaotic mappings are random and can search globally, while opposition learning can increase the diversity of the population and speed up the algorithm's convergence.
Therefore, this paper proposes a strategy combining Sine chaotic mapping and opposition learning. The first goal is to reduce the mutual interference between dimensions. The second is to increase the diversity of the algorithm's search positions and help the algorithm expand the exploration area so that it gains the ability to escape local extremes. The computational model is given in Eq. (15). Compared with general opposition learning, this paper uses Sine-based opposition learning to perturb the ChoA algorithm, enhancing population diversity, increasing the likelihood of the algorithm jumping out of a local optimum and, to a certain extent, reducing the likelihood of the algorithm falling into a local optimum, thus improving its optimization efficiency.
Although an opposite solution is generated by Eq. (15), this opposite solution is not necessarily better than the original solution. Therefore, a greedy selection strategy is introduced to decide whether to replace the original solution with the opposite solution, i.e. the replacement is made only if the opposite solution has a better fitness value. This approach allows the better position to be carried into the next iteration, with the following computational model:

X_i(t + 1) = X̄_i(t), if f(X̄_i(t)) < f(X_i(t)); otherwise X_i(t + 1) = X_i(t).   (16)

Through Eqs. (15) and (16), it can be seen that the Sine dimension-by-dimension opposition learning strategy generates opposition solutions far from the local extrema when the algorithm falls into a local optimum. The greedy strategy then selects the individual with better fitness among the original and opposite solutions, thus producing chimp individuals with better positions. This effectively avoids the decline of population diversity in the late iterations and enhances the algorithm's global optimum-finding ability. At the same time, a progressively smaller search space is obtained through the dynamic boundary search mode employed by the Sine dimension-by-dimension opposition learning. This facilitates the evolution of the CHoA algorithm towards the target position according to different requirements during the iterative process, allowing the algorithm to achieve a better convergence rate.
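The sketch below illustrates one plausible implementation of the Sine chaotic opposition step followed by the greedy selection of Eq. (16); the exact form of Eq. (15) is not reproduced in the extracted text, so the opposition formula used here (sine-weighted, dynamic-boundary opposition) is an assumption for illustration only.

import numpy as np

def sine_opposition_greedy(X, fitness, a=1.0, rng=None):
    """Dimension-by-dimension sine chaotic opposition with greedy replacement.

    X       : population matrix, shape (N, D)
    fitness : callable mapping a position vector to a scalar (lower is better)
    a       : sine-map control parameter in (0, 1]
    """
    rng = rng or np.random.default_rng()
    lb, ub = X.min(axis=0), X.max(axis=0)        # dynamic per-dimension boundaries
    S = a * np.sin(np.pi * rng.random(X.shape))  # sine chaotic weights (assumed form)
    X_opp = S * (lb + ub) - X                    # assumed reading of Eq. (15)

    X_new = X.copy()
    for i in range(X.shape[0]):                  # greedy selection, Eq. (16)
        if fitness(X_opp[i]) < fitness(X[i]):
            X_new[i] = X_opp[i]
    return X_new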
SOSCHoA implementation step
Through the above description, this paper combines the social coevolution strategy, chaotic mapping theory, and the dimension-by-dimension opposition learning strategy to improve the optimization-seeking efficiency and the stability of the algorithm, so that better optimization results can be expected in each iteration. Combining the above improvement methods, the SOSCHoA pseudo-code is given in Algorithm 1 (SOSCHoA: the social coevolution and Sine chaotic opposition learning chimp optimization algorithm). Compared with the basic CHoA algorithm, the SOSCHoA algorithm has the following features, and a compact loop skeleton is sketched after this list:
• the SOSCHoA algorithm does not change the framework of the basic CHoA algorithm but only introduces new operators;
• the SOSCHoA algorithm updates the attack/prey position through a social coevolution strategy to enhance the local exploration ability;
• the current optimal individual performs a dimension-by-dimension Sine chaos-based opposition learning strategy, enhancing the diversity of the population and reducing the probability of the algorithm falling into a local optimum;
• a greedy mechanism allows the better position to lead the search towards the global optimal solution.
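A high-level Python skeleton of the main loop, assembled from the components described above, is given below. It is a reconstruction for illustration, not the authors' Algorithm 1; the helper functions correspond to the sketches given earlier, and the convergence-factor schedule is an assumed non-linear decay from 2.5 to 0.

import numpy as np

def soschoa(fitness, lb, ub, n=30, dim=20, t_max=100, seed=0):
    """Skeleton of the SOSCHoA loop: CHoA update + social coevolution +
    sine chaotic opposition learning with greedy selection."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))                    # initialize population
    for t in range(t_max):
        f_factor = 2.5 * (1 - (t / t_max) ** 2)          # assumed decay of f from 2.5 to 0
        fit = np.array([fitness(x) for x in X])
        order = np.argsort(fit)
        leaders = [X[order[k]] for k in range(4)]        # attacker, barrier, chaser, driver

        for i in range(n):
            x_cand = choa_update(X[i], leaders, f_factor, rng)           # Eqs. (6)-(11)
            x_cand = social_coevolution_update(x_cand, X[i - 1],
                                               leaders[0], rng)          # Eq. (12)
            X[i] = np.clip(x_cand, lb, ub)

        X = sine_opposition_greedy(X, fitness, rng=rng)  # Eqs. (15)-(16) with greedy choice
    fit = np.array([fitness(x) for x in X])
    return X[np.argmin(fit)], fit.min()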
Proof of convergence of the SOSCHoA algorithm
Similar to the convergence analysis of most metaheuristic algorithms, we use the deterministic derivation of the SOSCHoA algorithm to analyze its convergence.It is important to note that the convergence proof does not necessarily guarantee that the algorithm converges to the global optimal solution.Since the CHoA algorithm is an intelligent population algorithm, the following theorem follows.
Theorem 1 If the CHoA algorithm based on general opposite learning converges, then the SOSCHoA algorithm is also convergent.
Proof. Let X_i(t) and X̄_i(t) be the current and opposite solutions in generation t, and let x^j_i(t) and x̄^j_i(t) be their values in dimension j. Let the complete (optimal) solution of the problem be x*. By the conditions of Theorem 1, for the solution x^j_i(t) in generation t of the population, Eq. (17) holds. Since lb_j(t) = min_i x^j_i(t) and ub_j(t) = max_i x^j_i(t), Eq. (18) follows. At generation t, the current opposite solution generated by the Sine chaotic opposition learning strategy is given by Eq. (19). When t → ∞, Eq. (20) follows from Eq. (19). From Eq. (20), when x^j_i(t) converges to x*_j, the opposite solution based on the Sine chaotic opposition learning strategy also converges to x*_j. Therefore, if the CHoA algorithm based on the general opposite solution converges, the SOSCHoA algorithm also converges.
Time complexity analysis of the SOSCHoA algorithm
The time complexity indirectly reflects the algorithm's convergence speed. In the CHoA algorithm, let the time required to initialize the parameters (population size N, search-space dimension D, coefficients a, m, f, etc.) be α_1, the time required to update the position of each chimp individual in each dimension according to Eq. (11) be α_2, and the time required to evaluate the target fitness function be f(D). The time complexity of ChOA is then

O(ChOA) = O(α_1 + t_max · N · (D · α_2 + f(D))).

In the SOSCHoA algorithm, the time required to initialize the parameters is the same as in the standard ChOA. In the loop phase of the algorithm, let the time required to execute the social coevolution strategy be α_3, the time required to execute the dimension-by-dimension Sine chaotic opposition learning strategy be α_4, and the time required to execute the greedy mechanism be α_5. The time complexity of SOSCHoA is then

O(SOSCHoA) = O(α_1 + t_max · N · (D · α_2 + D · α_3 + D · α_4 + α_5 + f(D))),

which is of the same order as that of the basic ChOA.
In summary, the improvement strategy proposed in this paper for ChOA does not increase the complexity of the time.
SOSCHoA based feature selection
The feature selection problem for high-dimensional datasets is a binary optimization problem 34; the solution space is limited to {0, 1}. For SOSCHoA, it is first necessary to convert continuous optimization values to binary. A feature selection solution can be represented as a search individual in the SOSCHoA algorithm; the individual's dimension equals the number of features in the original dataset, and each component x^j_i ∈ {0, 1}. The coding rule is: when x^j_i = 1, feature j of individual i is selected; when x^j_i = 0, feature j of individual i is not selected. For example, Table 2 represents a feature selection solution with an individual dimension of 9, corresponding to an original dataset with nine feature attributes. In this example, x^1_i = x^2_i = x^4_i = x^7_i = x^8_i = 1 indicates that individual i selected features 1, 2, 4, 7, and 8 in the optimal feature subset solution, while x^3_i = x^5_i = x^6_i = x^9_i = 0 indicates that features 3, 5, 6, and 9 were not selected. The classifier will use features 1, 2, 4, 7, and 8 as classification data 60.
At the same time, SOSCHoA converts the continuous optimized form to binary form using a transfer function, given as Eq. (24), where x^j_i is the value of position j in individual i. The feature selection problem for a dataset is also a multi-objective optimization problem, requiring the maximum possible data classification accuracy while minimizing the number of selected features. To balance the number of selected features (minimization) and the classification accuracy (maximization), the fitness function is defined as:

Fitness = α · γ_R(D) + β · |Selected| / |ALL|,   (25)

where γ_R(D) denotes the classification error rate (in this paper, the K-Nearest Neighbor algorithm (KNN, k = 5) is used to evaluate the classification accuracy of the selected feature subset), |Selected| denotes the number of selected features, and |ALL| denotes the number of original features. α denotes the weighting factor, α ∈ [0, 1], and β = 1 − α. Since the classification accuracy term in Eq. (25) plays a large role when the SOSCHoA algorithm searches for the optimal feature subset, α is set to 0.99.
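A minimal Python sketch of this fitness evaluation is given below, using scikit-learn's KNN with k = 5. The binary transfer step is shown with a sigmoid-style threshold purely as an illustrative assumption, since the exact transfer function of Eq. (24) is not reproduced in the extracted text.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def to_binary(x_continuous, rng):
    """Illustrative S-shaped transfer: map continuous positions to a 0/1 feature mask."""
    prob = 1.0 / (1.0 + np.exp(-x_continuous))   # assumed sigmoid transfer (Eq. (24) analogue)
    return (rng.random(x_continuous.shape) < prob).astype(int)

def feature_fitness(mask, X_data, y, alpha=0.99):
    """Eq. (25): alpha * error rate + beta * |Selected| / |ALL| (lower is better)."""
    beta = 1.0 - alpha
    selected = np.flatnonzero(mask)
    if selected.size == 0:                        # no feature selected: worst fitness
        return 1.0
    knn = KNeighborsClassifier(n_neighbors=5)
    acc = cross_val_score(knn, X_data[:, selected], y, cv=5).mean()
    error_rate = 1.0 - acc                        # gamma_R(D)
    return alpha * error_rate + beta * selected.size / mask.size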
Experimental validation and analysis
To verify the degradation and classification performance improvement of SOSCHoA for high-dimensional classification data.This section conducts a series of comparison experiments, and the detailed description of the high-dimensional classification dataset used is shown in Table 3.The settings of the comparison algorithms used are presented in Table 4. Second, the classification performance is analyzed, and the number of features in SOSCHoA is investigated.Third, experimental results on classification performance, number of features, and running time are analyzed and evaluated for SOSCHoA versus other heuristic algorithms.Finally, the convergence performance of the compared algorithms and the Wilcoxon rank sum test is verified.
Description of the experimental dataset
The experimental datasets were selected from the internationally well-known ASU high-dimensional dataset (https:// jundo ngl.github.io/ scikit-featu re/ datas ets.html).Table 3 briefly describes these datasets, with the number of samples ranging from 62 to 210, the number of features ranging from 325 to 22,283, and the number of class labels ranging from 2 to 11.When the number of class labels is two categories, it is considered dichotomous.When the number of class labels is more significant than two classes, it is considered multiclassification.
Experimental settings
To evaluate the impact of the proposed strategy mechanism on the classification performance of high-dimensional microarray data during feature selection, three sets of comparison experiments were designed as follows.
In the first set of comparison experiments, the classification performance of SOSCHoA was compared with that of the CHoA algorithm 14 and the DLFCHOA algorithm 17 .In the second set of comparison experiments, SOSCHoA was compared with PIL-BOA 18 , BBOA 19 , LMRAOA 20 , VGHHO 21 of different opposing learning element heuristics for comparison of fitness values and classification performance.In the third set of comparison experiments, SOSCHoA was compared with FA 22 , FPA 10 , WOA 12 , HHO 13 , MRFO 23 for comparison of fitness values and classification performance.The experimental framework is shown in Fig. 1.
Figure 1 shows that SOSCHoA is run on the training dataset to generate a subset of candidate features. Secondly, the training and test sets are transformed into new training and test sets by removing unselected features. Finally, the test dataset is fed into the classifier to verify the classification performance of the selected feature subset against the feature subsets selected by the comparison algorithms. The strategies introduced above improve the efficiency and effectiveness of group collaboration; in addition, the SOSCHoA algorithm can help the group jump out of local optimal solutions and search further for the global optimal solution.
Comparison of SOSCHoA with CHoA and DLFCHOA classification performance
In Table 5, AccMean (%), maxAcc (%), and SD denote the average classification accuracy, best classification accuracy, and standard deviation for each algorithm over 30 independent runs on each classification dataset.In 6, d and time(/s) denote the average number of features selected and the average running time for each algorithm over 30 independent runs.
As seen from Table 5, SOSCHoA achieves higher average classification accuracy on all test datasets than the CHoA algorithm.Also, compared to the DLFCHOA algorithm, SOSCHoA achieves higher average classification accuracy on all test datasets except for the nci9 dataset.Regarding standard deviation, SOSCHoA is optimal compared to CHoA on all test datasets except warpPIE10P.SOSCHoA is optimal compared to DLFCHOA on all test datasets except on Carcinom.In conclusion, SOSCHoA showed better performance than DLFCHOA and ChoA algorithms in terms of both average classification accuracy and robustness.
As can be seen from Table 6, SOSCHoA has the highest average number of features selected among the three algorithms at 66.66, which is 12.31 and 21.59 higher than DLFCHOA and ChoA, respectively.This indicates that SOSCHoA still needs to improve its feature selection capability and optimise the number of selected features.
Analysis of CHoA algorithm improvement strategies
The data in Table 3 were selected for classification accuracy and adaptation value experiments to analyse the improved strategies' impact on the algorithms' performance.The CHoA algorithm that only employs the social coevolution strategy (SOCHoA) is compared with the CHoA algorithm that escapes the local optimal solution using the Sine chaotic opposing learning strategy (SCHoA).The parameters of the above two algorithms are the same as in "Experimental settings".
The comparison results from Table 7 show that the operator's classification accuracy and average adaptation value using the social Coevolution strategy are significantly better than the SOSCHoA algorithm on the warp-PIE10P, Carcinom and nci9 datasets.The operator's classification accuracy and average adaptation value using the Sine chaotic Opposing learning strategy are significantly better than the SOSCHoA algorithm on the lung and Lung_Cancer datasets.Meanwhile, by combining the results in Tables 5 and 7, it can be seen that SOCHoA and SCHoA classification accuracy and average adaptation value perform poorly on the lung_discrete dataset, which suggests that only adopting the Social Coevolution strategy or only the Sine Chaos Opposing Learning strategy can be of significant help in improving the performance of the CHoA algorithm.
In conclusion, the results of SOSCHoA are better than the two sub-algorithms of SOCHoA and SCHoA.The comparison results show that both improvement strategies play a role in improving the algorithm, and their promotion can be effectively combined without being suppressed by either operator, which confirms the effectiveness of the improvement strategies for the algorithm.Therefore, the SOSCHoA algorithm can improve the CHoA algorithm, strengthen its global investigation and local mining ability, accelerate the convergence speed, eliminate the local optimum, and achieve higher classification accuracy and smaller optimal adaptation value.
Analysis of the impact of opposing learning strategies on classification performance
To verify the superiority of SOSCHoA, algorithms with different opposing learning strategies were selected to compare and validate the classification performance of the test data, specifically PIL-BOA, BBOA, LMRAOA, and VGHHO.The algorithms were tested for classification comparison by using the 12 test datasets given in Table 3.Each algorithm was run 30 times to obtain the average classification values, and the comparison results are shown in Table 8.
From the results in Table 8, the classification performance of SOSCHOA was only better than that of VGHHO on the lung.Regarding carcinoma, the classification performance of SOSCHOA was the worst.For all other datasets, the classification performance of SOSCHOA was better than that of the other metaheuristics.This indicates that SOSCHOA has a significant advantage over the different algorithms in terms of classification performance.Also, the running time of the SOSCHOA algorithm is well within the acceptable range.To further demonstrate the effectiveness of the SOSCHoA algorithm, it was compared with the five different heuristic optimization algorithms.Table 9 shows the average classification accuracy of these five algorithms.Table 10 indicates the number of features selected for these five algorithms.Table 11 shows the average running time of these five algorithms.Table 9 shows that on the warpPIE10P dataset, WOA classification accuracy was the best, and SOSCHoA classification accuracy ranked third.On the lung and Lung_Cancer datasets, FA classification accuracy was the best, and SOSCHoA classification accuracy ranked second.For the Carcinom and nci9 datasets, HHO classification accuracy was the best, and SOSCHoA classification accuracy ranked second.SOSCHOA's classification performance for all other datasets was better than that of the other metaheuristics.This indicates that SOSCHOA has a significant advantage over the different algorithms in terms of classification performance.
As seen from Table 10, the number of features selected by SOSCHoA is lower on all test datasets compared to the five algorithms, FA, FPA, WOA, MRFO, and HHO.From Tables 9 and 10, it can be seen that the SOSCHoA algorithm is the most efficient.
As seen from Table 11, the running time of the SOSCHoA algorithm is still relatively long due to the larger search space in high-dimensional data.However, the running time of the SOSCHOA algorithm is well within the acceptable range.
In summary, the SOSCHoA algorithm has a robust search capability and can find a relatively small and high-quality subset of features. Secondly, the results show that the SOSCHoA algorithm can improve the classification accuracy of the selected feature subset. Finally, they also indicate that the feature subset chosen by the SOSCHoA algorithm still has room for further reduction and for improvement in classification accuracy. This provides a feasible direction for subsequent research and for the design of new mechanisms to reduce the size of the feature subset and further improve the model's classification performance. Among the six compared algorithms, similar results were observed in the lung dataset, and in the Leukemia_1 dataset SOSCHoA, WOA, and MRFO were found to be identical overall. These results show that the SOSCHoA algorithm usually provides statistically significant performance improvements. However, we also note that on specific datasets the performance of SOSCHoA is similar to that of the other algorithms. This may be due to the characteristics of these datasets or to the inherent advantages of different algorithms in dealing with specific problems.
Conclusion
When dealing with high-dimensional classification data, the complex interactions between features pose higher challenges to feature selection algorithms.The traditional CHoA has limitations in fast convergence and accurate optimization search, and it is difficult to identify and eliminate irrelevant and redundant features efficiently.To overcome these limitations and improve the global search capability and convergence efficiency of the algorithm, after an in-depth study of the core mechanism of CHoA, this paper proposes a new algorithm: Social Coevolution and Sine Chaotic Oppositional Learning Chimp Optimization Algorithm (SOSCHoA).The improvements of the SOSCHoA algorithm are mainly reflected in the following aspects: • Introducing the social coevolution strategy, which enhances the information exchange between individu- als, extends the search subspace and dynamically adjusts the balance between local exploration and global exploitation.• Using a sine chaotic opposition learning increases the diversity of the population.It improves the ability of the algorithm to jump out of the local optimum and approach the global optimal solution.• Experimental results show that SOSCHoA significantly outperforms existing algorithms such as CHoA, DLFCHOA, PIL-BOA, BBOA, VGHHO, FA, FPA, WOA, HHO, and MRFO in terms of convergence rate, classification accuracy, and feature approximation ability.These results confirm the significant advantages of SOSCHoA in improving classification accuracy and reducing the number of features.However, regarding reducing the number of feature dimensions, the SOSCHoA algorithm still needs to catch up on datasets such as warpPIE10P, lung, Carcinom and nci9.
Future research will focus on further optimizing the position update equation and the global exploration mechanism to improve the high-dimensional classification optimization capability of SOSCHoA, especially when dealing with datasets with higher feature dimensions.
Figure 2. Variation of SOSCHoA classification accuracy versus the number of selected features.
Figure 4. Comparison of the convergence curves of the SOSCHoA algorithm with the other eleven compared algorithms.
Figure 5. Comparison of the convergence curves of the SOSCHoA algorithm with the other eleven compared algorithms.
Table 1. Research on meta-heuristic algorithms for high-dimensional data.
Table 3. Name of dataset, number of samples, number of features, number of classification labels.
Figure 1. Experimental framework.
Table 4. Comparison algorithm parameter settings.
Table 5. Comparison of the classification performance of SOSCHoA with CHoA and DLFCHOA.
Table 6. Comparison of the number of selected features and running time (/t) for SOSCHoA with CHoA and DLFCHOA.
Table 7. Comparison of classification accuracy and average fitness value test results of algorithms.
Table 8. Analysis of the running time (/s) and classification accuracy of SOSCHoA with different opposing learning strategy algorithms.
Table 9. Average classification accuracy performance of SOSCHoA and the other four heuristic optimization algorithms.
Table 10. Average number of features selected for SOSCHoA and other heuristic optimization algorithms.
|
v3-fos-license
|
2021-12-12T17:52:40.051Z
|
2021-12-06T00:00:00.000
|
245060669
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2075-4701/11/12/1960/pdf",
"pdf_hash": "7822cab8e1b75f80673061af8f9342655a8ef4f1",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44564",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"sha1": "efc5eb5c3aabc63ea7fc93db166eae142097134e",
"year": 2021
}
|
pes2o/s2orc
|
Practical Approbation of Thermodynamic Criteria for the Consolidation of Bimetallic and Functionally Gradient Materials
: This study concerns the key problem of determining the conditions for the consolidation or fracture of bimetallic compounds and high-gradient materials with different coefficients of thermal expansion. The well-known approach to determining the strength is based on the assessment of the critical energy release rates during fracture, depending on the conditions of loading (the portion of shear loading). Unfortunately, most of the experimental results cannot be used directly to select suitable fracture toughness criteria before such a connection is made. This especially applies to the region of interphase interaction, when it is required to estimate the internal energy of destruction accumulated during the preparation of the joint in the adhesion layer within the range of 20–50 µ m. Hence, criteria for the adhesive consolidation of bimetallic compound layers were obtained on the basis of the thermodynamics of nonequilibrium processes. The analysis of the quality of the joint using the obtained criteria was carried out on the basis of the calculation of isochoric and isobaric heat capacities and coefficients of thermal expansion of multiphase layers. The applicability of the criteria for the qualitative assessment of the adhesion of layers is demonstrated in the example of bimetallic joints of steel 316L—aluminum alloy AlSi10Mg obtained by the SLM method at various fusion modes.
Introduction
Analysis of trends in the modern industry development indicates that an effective solution to the problem of obtaining specific, often incompatible characteristics in materials is the development and creation of composite materials. Among the composite materials, we can distinguish functional-gradient materials.
In a functional-gradient material (FGM), both the composition and the structure gradually change in volume, which leads to corresponding changes in the properties of the material [1,2]. A fairly complete overview of modern trends in the creation of FGM can be found in the works [3].
In the case of a sharp difference between the chemical compositions of the FGM phases, one can speak of functional bimetallic materials. The concept of functional bimetallic materials (FBM) was proposed in 1984 in Japan as a means of obtaining materials for a thermal barrier [4]. FBM is an advanced material that can achieve a transition gradient or sudden transition from one material to another for various materials [2]. In the early stages of FBM production, welding was the main technology for combining dissimilar metals [5], explosive welding [6,7] and laser welding [8] were particularly successful. Among other technologies, laser additive manufacturing is an ideal technology for producing FBM [8].
Functional graded structures (FGS) are another type of composite materials that occupy an intermediate position between FGM and FBM. In the study [9], based on the analysis of technologies for building FGS, two methods of their production from CrNi and Al-powders using additive technologies are compared; these are direct metal deposition (DMD) and selective laser melting (SLM), as presented in Figure 1.
Functional graded structures (FGS) are another type of composite materials that occupy an intermediate position between FGM and FBM. In the study [9], based on the analysis of technologies for building FGS, two methods of their production from CrNi and Al-powders using additive technologies are compared; these are direct metal deposition (DMD) and selective laser melting (SLM), as presented in Figure 1. The LDMD method for FGS manufacturing is schematically presented in Figure 1 and was proposed earlier [10]. The layers were formed from Ni (Diamaloy) and Al based powders on a related substrate according to the following strategy: the first two layers were pure NiCr, the next two were 70% of NiCr + 30% of Al, the third pair of layers was 50% of NiCr + 50% of Al, and finally the upper 7th and 8th layers had a ratio of 30% of NiCr + 70% of Al. For the Fe-Al system, such a system was successfully tested in [11].
FBM and FGS-materials must have strong interlayer bonds, which are preserved during further technological processing and under operating conditions. It is assumed that the material retains its macroscopic continuity up to the initiation of an interphase crack. Delamination is considered as a process of initiation and development of continuity microdefects, leading to the formation of interlayer cracks. Delamination is formed due to a combination of two or three main delamination mechanisms (modes): normal opening mode I (a), sliding shear mode II (b), and scissor shear mode III (c) [6,12]. There are numerous models of damage mechanics within the framework of the phenomenological approach [13][14][15][16], which are applicable for a monolithic material and separate components of layered materials; most of them do not allow assessment of the fracture in the joint zone, as they do not take into consideration the inhomogeneity of thermomechanical properties between different phases interfaces.
It should be mentioned that materials with poor thermal conductivity obtained by fused layer deposition of metal powder are prone to cracking. The first reason for the occurrence of cracks is an increased level of residual stresses, which are formed due to uneven heating during the synthesis of layers, during which the upper layers undergo significant tensile stresses during solidification [9]. The presence of a certain number of pores and structural defects, from which the development of cracks begins, is the second reason for the tendency to crack formation. Fundamental criteria for the initiation and propagation of fracture can be obtained using the concept of energy balance at the crack front, which, for an equilibrium crack, can be expressed as the equality of the available energy and the energy required to create a unit area of the new crack surface [17].
Failure analysis is often used during the design phase of composite structures, which requires accurate and reliable determination of material properties. For adhesive joints, these properties are strength parameters and the critical energy release rate (CERR), which characterizes the toughness of the material. In this case, CERR is the most defining parameter [12]. It is advisable to determine the criteria that allow one to find CERR as a function of the ratio of modes of the involved separation mechanisms (I, II, III). A number of studies have been devoted to this issue [12,18,19]. It should also be noted that the cohesion law [12,20], which is based on the universal law of binding energy proposed by Rose et al., is applicable to the interface of the bimetallic material [21]. Most macroscopic fracture theories are based on the principles of solid mechanics and classical thermodynamics [1]. With regard to additive technologies, the existing energy approaches can be expanded if we consider the conditions for the consolidation of a multiphase material, in particular FBM, from the point of view of the thermodynamics of nonequilibrium processes.
Theoretical Foundations of the Research Method
The process of additive synthesis of FBM is high-temperature, and energy exchange in a local volume at the interface boundary of a bimetallic compound can be so intense that separation is possible. The purpose of this study is to identify the conditions for the consolidation of phases of a multiphase medium with different thermophysical properties characteristic of bimetallic materials, from the conditions of the balance of the thermal and stress-strain states, as well as phase equilibrium in the interface. Consolidation in this context means the absence of interphase separation under conditions of thermodynamic equilibrium. In this regard, to solve the key problem of finding conditions for the consolidation of a multiphase material from the point of view of thermodynamics, the heat transfer equation at the interface was considered, reflecting the interphase mechanical interaction.
Modeling of the bimetallic compound interface was carried out on the basis of the state analysis determined by the thermodynamics of irreversible processes. A similar approach at the macrolevel was used in [22] in relation to a medium consisting of deformable grains. All macroscopic processes in a heterogeneous medium were considered by the methods of continuum mechanics using averaged or macroscopic parameters.
As a result, it was possible to obtain the criteria for the consolidation of two phases K 1 and K 2 , dependences (1) and (2), which can be considered as necessary conditions for the formation of a stable adhesive bond from the point of view of thermodynamics: where c v Ω , c w Ω are the molar isochoric heat capacities of layers v and w, c v σ , c w σ are the molar isobaric heat capacities of the layers, α v , α w are the linear coefficients of thermal expansion, and k v = s v Ω /s v σ , k w = s w Ω /s w σ are the coefficients inverse to the polytropic indicator.

To find criteria (1) and (2) at the interface boundary, an analytical method was used to determine all thermodynamic quantities included in them. Isobaric and isochoric heat capacities of a pure substance from the composition of each phase were calculated according to Debye's law of molar heat capacity [23], Equation (3), where θ is the Debye temperature, defined in Equation (4). In Equations (3) and (4), h is Planck's constant, k is Boltzmann's constant, ν is the vibration frequency of atoms, x is the parameter determined on the basis of the solid-state theory [23], and T is temperature (all calculations are made for room temperature, T = 298 K). The characteristic Debye temperatures of substances are known from the literature, for example, [24].
This definition of the Debye temperature is valid for a pure substance; for a substance in a compound (as part of a phase), the Debye temperature is calculated using Koref's equation [24]. According to Koref's rule, data on the melting point of a compound, together with the melting points and Debye temperatures of the pure substances outside the compound, make it possible to obtain the Debye temperatures of these substances in the compound, according to the corresponding dependence. Here θ* and θ are the characteristic Debye temperatures of the element in the compound (with the other elements of the phase) and of the element outside the compound of the phase, and T*m, Tm are the melting temperatures of the entire phase and of the element outside the compound of the phase. The isochoric heat capacity (s Ω ) values are determined from θ* using the Debye equation separately for each phase component. Then, summing them up according to the Neumann-Kopp rule, the isochoric heat capacity of the compound is determined. For the A_l B_m D_k compound, the isochoric heat capacity can be found from the dependence given in [24,25]. The recalculation of the isochoric heat capacity to the isobaric heat capacity was carried out according to the Magnus-Lindemann equation [24], where n is the number of atoms in the compound (n = l + m + k), and T*m is the melting point of A_l B_m D_k .
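To make the heat-capacity chain described above concrete, the sketch below evaluates the Debye integral numerically and sums atomic contributions in the spirit of the Neumann-Kopp rule. It is an illustration only: the square-root form assumed for the Koref scaling and all numerical inputs are placeholders of this sketch rather than values from the paper, and the Magnus-Lindemann recalculation to the isobaric heat capacity is omitted.

```python
import numpy as np
from scipy.integrate import quad

R = 8.314  # universal gas constant, J/(mol*K)

def debye_cv(theta, T=298.0):
    """Molar isochoric heat capacity of one mole of atoms from Debye's law, J/(mol*K)."""
    integrand = lambda x: x**4 * np.exp(x) / (np.exp(x) - 1.0) ** 2
    integral, _ = quad(integrand, 0.0, theta / T)
    return 9.0 * R * (T / theta) ** 3 * integral

def koref_theta(theta_pure, tm_pure, tm_compound):
    """Debye temperature of an element inside a compound (assumed square-root Koref scaling)."""
    return theta_pure * np.sqrt(tm_compound / tm_pure)

def neumann_kopp_cv(components, T=298.0):
    """Isochoric heat capacity of a compound A_l B_m D_k as the sum of atomic contributions.

    components: iterable of (stoichiometric index, Debye temperature of the element in the compound).
    """
    return sum(n * debye_cv(theta, T) for n, theta in components)

# Illustrative call for a hypothetical Al-Fe-Si phase; the Debye temperatures are placeholders.
cv_compound = neumann_kopp_cv([(3, 390.0), (1, 470.0), (2, 640.0)])
print(f"c_v of the compound ~ {cv_compound:.1f} J/(mol*K)")
```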
The usual approach to assessing the properties of an FGM material is to apply the rule of mixtures. Although these are not really physical or mathematical rules, these relationships can be used to approximate the thermal or mechanical properties of a composite material in terms of individual properties and relative amounts of components. The simplest is the classical linear rule of mixtures (Voigt's estimate) for two constituent materials, based on the assumption of uniform strain or stress of the composite structure [1]. The upper Voigt bound [26,27] for the effective coefficient of thermal expansion α is provided by the expression: where C 1 , α 1 , E 1 are the volumetric concentration, coefficient of thermal expansion (CTE), and modulus of elasticity related to the first component (phase) of the composite substance, C 2 , α 2 , E 2 , to the second component. According to [1], two-phase composite material dependences for calculating the CTE are more accurately and experimentally confirmed in works [28][29][30][31][32].
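For the rule-of-mixtures estimate of the effective CTE, one common modulus-weighted (uniform-strain) form consistent with the quantities listed above is sketched below; whether this matches the exact expression of [26,27] is an assumption of this sketch, and the 316L/AlSi10Mg property values in the example are rough illustrative figures, not measured data.

```python
def effective_cte(c1, alpha1, e1, c2, alpha2, e2):
    """Modulus-weighted (Voigt-type) estimate of the effective CTE of a two-phase composite."""
    return (c1 * e1 * alpha1 + c2 * e2 * alpha2) / (c1 * e1 + c2 * e2)

# Example with illustrative values: 60 vol.% steel 316L and 40 vol.% AlSi10Mg.
alpha_eff = effective_cte(0.6, 16e-6, 193e9, 0.4, 21e-6, 70e9)
print(f"effective CTE ~ {alpha_eff:.2e} 1/K")
```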
Materials and Methods
To determine the applicability of the thermodynamic criteria (1) and (2) for the consolidation of phases (layers) of a bimetallic material to the analysis of delamination, a series of experiments was carried out to create a bimetallic compound of AlSi10Mg and steel 316L by the method of selective laser melting [33]. Due to the different specific energy of fusion supplied to the surfacing area with a change in the scanning speed, we achieved a different level of mixing of the phase components in the region of formation of the interlayer interface. The elemental composition of the interface was determined by energy dispersive analysis. Thermodynamic consolidation criteria were calculated for each sample. Bimetallic samples were subjected to mechanical testing to assess the adhesion strength of the interlayer interface. The results of mechanical tests were compared with the calculated value of the thermodynamic criterion for phase consolidation.
For the surfacing material, we used powder of aluminum alloy AlSi10Mg. The results of X-ray microscopy study of the morphology and chemical composition of the powder are presented in Table 1. Fusing of aluminum powder was carried out on a pre-cleaned substrate: a plate of steel 316L 2.0 mm thick with the chemical composition, as presented in Table 2. Fusing of layers of aluminum powder was carried out on an SLM 280 installation in strips 70 × 15 mm in size. Up to 10 layers were deposited sequentially on 5 strips (samples) using technological scanning parameters (Table 3). Analysis of the microstructure of bimetallic samples was carried out on thin sections of cross-section using a Zeiss Axio Vert A1 Mat optical microscope (Carl Zeiss Microscopy GmbH, Jena, Germany): with ×200 and ×500 magnification for each sample. To improve the visibility of the grain boundaries, a gray filter was used in a bright field. The etching of the samples was carried out in a solution of following acids: H 2 SO 4 -HCl-HNO 3 -HF in a proportion of 180-180-120-30 mL, respectively, by immersion for 5 min. The processing of the obtained images of the microstructure was carried out in the specialized software system SIAMS 800. (version 800, OOO "SIAMS", Ekaterinburg, Russia).
To determine the chemical composition, a Phenom ProX electron microscope (Phenom-World, Eindhoven, The Netherlands) was used with an attachment for energy dispersive analysis. The chemical composition was measured at the boundary of two materials with a step of 18-20 µm (5 measurements along and 10 measurements across the boundary), at a magnification of ×1000. The measurement results along the border were averaged and processed by statistical methods. The mechanical tests were carried out to compare the adhesion strength of the interface zone for all samples. The adhesion strength was determined by comparing the wear resistance of the samples to external influences. The samples were blown with steel microballs with a diameter of up to 0.3 mm and at a speed of up to 50 m/s. During blowing, compressive stresses arise in the AlSi10Mg surface layer, and tensile stresses arise in the interface zone, which contribute to delamination. Blowing was carried out until visible signs of wear appeared on all samples in the form of exfoliated AlSi10Mg particles.
Experimental Results
The microstructure of the bimetal boundary is presented in Figure 2. On all samples, three zones can be distinguished that are formed during SLM: a zone of deposited material, a heat-affected zone, and a substrate (from bottom to top). With an increase in the energy density, the depth of the heat-affected zone in the substrate increases from 65-80 µm (modes 1-3) to 120-180 µm (modes 4-5). The thickness of the deposited layer decreases with increasing energy density: mode 1: 180-240 µm, mode 2: 120-200 µm, mode 3: 60-100 µm, modes 4-5: 20-40 µm.
The distribution of chemical elements along the boundary of the bimetal was determined from the results of energy dispersive analysis. The content of the key elements Fe, Cr, Ni, Al, and Si was measured at a distance of 20 µm on both sides of the interface between the bimetal layers. Measurement values, averaged over five points, are presented in Table 4.

Steel 316L crystallizes first and, since it belongs to the austenitic class, it does not undergo phase transformations below the solidus point [34]. The ratio of the components in the entire temperature range for γ, α + γ-phases corresponds to the values in Table 2. The quantitative analysis of other phases in the interface area was carried out according to the following algorithm.
At the first stage, according to the data in Table 3, the percentage content of the elements Fe, Al, Si was recalculated proceeding from the condition that their total content was 100%. Further, according to the diagram of the ternary state of Fe-Al-Si [35] (Figure 3) and Table 4, the possible composition of the phases of the system was determined.
Figure 3. Fusibility diagram Fe-Al-Si (the percentage of elements is given by weight; τ2-Al12Fe6Si5; τ3-Al9Fe5Si5; τ4-Al3FeSi2; τ5-Al15Fe6Si5; τ6-Al4FeSi); data from [35].
In addition to the solid solution γ, α + γ-phases of 316L, possible phases corresponding to the content of elements in Table 4 at the boundary of bimetal layers are presented in Table 5. Table 5. Possible phases at the boundary of bimetal layers.
Place of Phase Separation | Phase Designation and Its Formula | Melting Temperature Range, °C
From the side of 316L | τ1 (Al3Fe3Si2), τ2 (Al12Fe6Si5) | 935-940
From the side of AlSi10Mg | τ5 (Al15Fe6Si5), τ6 (Al4FeSi) | 615-620
(The corresponding invariant reactions are indicated in Figure 3.)

At the second stage, the quantitative content of each phase from Table 5 and of the 316L phases was determined by the method of nonlinear programming. The problem of finding the content of phases in accordance with the law of mixtures is reduced to an optimization problem with linear constraints, which are presented in Tables 6 and 7. The variables x i on the left side of Tables 6 and 7 indicate the percentage of the i-th phase, and the coefficients for the variables x i are the weight percentages of the element in the i-th phase. The restrictions on the total content of each element in all phases for each sample are taken from Table 4. The content of each phase will be a solution to the inequality systems in Tables 6 and 7 (for example, x Al15Fe6Si5 + x Al4FeSi + x 316L + x (Al)+(Si) → 100). The error of such calculations is the residual 100 − ∑ x i . The results of the quantitative analysis of the phase composition are presented in Table 8.

In the interface area of the grown bimetallic samples, tensile stresses that promote delamination were additionally created by blowing metal balls onto the upper layer. The view of the samples after blowing is presented in Figure 4.
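The second-stage phase quantification described above amounts to finding non-negative phase fractions x_i that reproduce the measured element contents under the law of mixtures. A minimal sketch of such a constrained fit is given below; the element-in-phase percentages and the measured interface composition are placeholders for illustration and do not reproduce the values of Tables 4, 6 and 7. Bounded least squares is used here instead of a general nonlinear programming routine, which is sufficient because the mixing constraints are linear.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Rows: elements (Fe, Al, Si); columns: phases (tau1, tau5, 316L, (Al)+(Si)).
# Entries are wt.% of the element in each phase -- placeholder numbers for illustration.
W = np.array([
    [33.0, 28.0, 68.0,  0.0],   # Fe
    [45.0, 48.0,  0.0, 90.0],   # Al
    [22.0, 24.0,  1.0, 10.0],   # Si
])
measured = np.array([40.0, 42.0, 18.0])  # measured wt.% of Fe, Al, Si at the interface (placeholder)

# Solve W x ~= measured with 0 <= x_i <= 100; the residual 100 - sum(x) is the calculation error.
result = lsq_linear(W / 100.0, measured, bounds=(0.0, 100.0))
x = result.x
print("phase contents, %:", np.round(x, 1), " residual:", round(100.0 - x.sum(), 1))
```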
Calculation of the Consolidation Criteria
The results of calculations of the heat capacities and consolidation criteria according to Equations (1)- (8) are summarized in Table 9.
For further analysis, let us determine the deviations of the calculated values of the consolidation criteria from Table 9 from their ideal values, which are equal to 1. Deviations of the criteria from the ideal value (the smaller the deviation, the better) are presented in Figure 5. The criteria for the SLM scanning mode 5 turned out to be closest to the ideal value. If we refer to Figure 4, with photographs of delamination of samples subjected to mechanical stress, we can note a qualitative similarity between the picture of delamination and the values of the consolidation criteria. Comparing the values of the consolidation criteria and the magnitude of their discrepancy with the desired values (Figure 5) with the delamination of the samples (Figure 4), it can be noted that: (i) the most informative criterion for consolidation, reflecting the destruction, is criterion (2), and (ii) the values of the consolidation criteria do not correlate with each other.
It should also be noted that the values of the consolidation criteria for each specific case do not yet serve as indicators of destruction; however, their significant deviation from 1 by more than 25-30%, as the studies demonstrate, indicate an increased likelihood of delamination.
Conclusions
The calculation results of the consolidation criteria (1) and (2), according to the methodology and dependencies outlined in Section 2, are presented. The proposed criteria should be used to determine the probability of cracking. The ideal values of the criteria are equal to 1. To determine the applicability of the proposed criteria, a series of experiments were carried out to create a bimetallic compound AlSi10Mg, steel 316L, by the method of selective laser melting with different energy densities of fusion. The results of the criteria calculations were compared with the results of the tests on adhesive strength and demonstrated an acceptable correlation with the test results. As demonstrated by the test results, the significant difference in the calculated criteria values by more than 25-30% from optimum designates an increased likelihood of delamination.
The consolidation criteria (1) and (2) do not at all pretend to fully reflect the physical phenomena occurring in the fusion area, even from the thermodynamic point of view. However, if the phase composition in the interface region is presumably known, then these criteria can serve as indicators of possible destruction.
|
v3-fos-license
|
2022-06-12T15:03:23.741Z
|
2022-06-01T00:00:00.000
|
249587007
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1660-4601/19/12/7062/pdf?version=1654758780",
"pdf_hash": "50d8edc0e4649292b168321d899675961ec7fab9",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44565",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "256e122992c5c0c2b1496e755b11124bfe7d17e6",
"year": 2022
}
|
pes2o/s2orc
|
Differential Eating Behavior Patterns among the Dark Triad
There is little extant empirical literature examining the associations between Dark Triad (DT: Machiavellianism, narcissism, and psychopathy) and eating behaviors. The current study (n = 361) investigated the associations between Dark Triad and restrained eating, uncontrolled eating, and emotional eating in a sample drawn from the general population. The results from the study indicate that (a) despite expected sex differences in narcissism and primary psychopathy, no sex differences were found in Machiavellianism, secondary psychopathy, and eating behaviors; (b) among women, Machiavellianism was a protective factor against uncontrolled eating behaviors; (c) the sex of the participant moderated the narcissism–uncontrolled eating behaviors and narcissism–emotional eating behaviors relationships, with the negative correlation being stronger for men than that for women; (d) secondary psychopathy, rather than primary psychopathy, was associated with higher uncontrolled eating behaviors in both sexes, and associated with higher emotional eating behaviors for men only. The implications of these findings are interpreted and discussed.
Introduction
Eating disorders are defined with special regard to body weight/shape and associated behaviors, such as dieting, binge eating, purging, and excessive exercising. On the basis of body weight, the DSM-5 divides eating disorders into two main types: anorexia nervosa and bulimia nervosa [1].
Clinicians and researchers have suggested that eating disorders are closely associated with personality traits. In a recent review, Farstad and colleagues revealed that higher perfectionism, neuroticism, avoidance motivation, sensitivity to social rewards, and lower extraversion and self-directedness are common in all eating disorder diagnoses [2]. Further, among eating disorders, greater impulsivity is common in bulimia nervosa, avoidance and obsessive-compulsive personality disorders are the most common in restricting anorexia nervosa (one subtype of anorexia nervosa), and borderline and paranoid personality disorders are common in binge eating/purging anorexia nervosa (another subtype of anorexia nervosa), bulimia nervosa, and eating disorders not otherwise specified.
Eating disorders were found to occur almost exclusively in women [3]. In women with eating disorders, empirical studies have identified three personality subtypes: the high functioning/perfectionist, overcontrolled, and dysregulated subtypes [1,4,5]. Among the three, the high-functioning/perfectionist subtype and overcontrolled subtype are associated with either anorexia nervosa or bulimia nervosa, while the dysregulated subtype is most closely associated with bulimia nervosa [1]. Accordingly, it seems that Cluster C personality traits are the most prevalent among those with anorexia nervosa and bulimia nervosa.
Although a certain study involved the association between bulimia nervosa and the personality constructs of narcissism [6], few studies to date have explored the relationship between Dark Triad traits (i.e., Machiavellianism, narcissism, and psychopathy) and eating disorders/behaviors. The Dark Triad refers to a term that was introduced by Paulhus and Williams to describe three overlapping but distinct personality traits that are socially undesirable and comprise malevolent characteristics [7]. Machiavellianism is characterized by a belief in the effectiveness of manipulative tactics in dealing with other people, a cynical view of human nature, and a moral outlook that puts expediency above principle [8][9][10]. Narcissism is marked by exaggeration of self-worth and importance, superiority over others, bragging, attention and admiration seeking, and manipulation [11,12]. Lastly, psychopathy is defined by impulsivity and sensation seeking, callousness, a lack of remorse, and antisocial behaviors [10,11,13]. Research has suggested that lower agreeableness, lower honesty-humility, interpersonal manipulation, and callous affect were the common characteristics shared by the Dark Triad [14,15].
Recently, three important studies addressed the links between Dark Triad and eating behaviors. Specifically, Sariyska and colleagues first underlined the role of Dark Triad in the context of eating style, and found that the group of omnivores scored higher on Machiavellianism, narcissism, and psychopathy than the group of vegans/vegetarians did [16]. In 2020, Mertens and colleagues examined the relationship between Dark Triad, and meateating justification and meat consumption in Germany, and noted that Machiavellianism was partly able to explain gender differences in meat-eating justification strategies and behaviors [17]. More recently, Mertens and colleagues examined the relationship between Dark Triad and eating behaviors in a large sample of German population, and also found evidence that Machiavellianism plays an important role in explaining gender differences in meat-eating justification strategies [18].
Although the works of Sariyska et al. and Mertens et al. are influential in establishing associations between the Dark Triad and eating behaviors [16][17][18], the three studies have several limitations. First, they assessed psychopathy globally and did not differentiate the forms of psychopathy (i.e., primary and secondary psychopathy) [10]. Second, participants were all recruited from Germany, so cross-cultural evidence is needed to confirm the findings. Third, the three studies have ignored the links between the Dark Triad and disordered eating behaviors (behaviors that fall between eating disorders and a normal eating style), such as emotional eating.
Given the extent to which each Dark Triad trait influences behaviors [19], given the close associations between Dark Triad and fast life history strategies (a fast life history strategy is reflective of reproductive efforts over somatic efforts and mating efforts over parental effort, and affects various aspects of human psychology, including disordered eating style) [15,20,21], and given the sex differences in eating disorders [3], it seems reasonable to extend previous findings to the links between Dark Triad and disordered eating behaviors. On the basis of theory analysis, the current study aims to gain a deeper understanding of the associations between Dark Triad traits and disordered eating behaviors. We predict that (a) Dark Triad traits are positively associated with disordered eating behaviors and its dimensions; (b) each Dark Triad trait could uniquely contribute to the prediction of eating behaviors, and psychopathy (more specifically secondary psychopathy) would be most closely associated with disordered eating behaviors, such as uncontrolled and emotional eating behaviors; and (c) the sex of the participant could moderate the associations between the Dark Triad and eating behaviors, and these associations are especially strong for women.
Participants
To reach a large number of participants, an online questionnaire was used for data collection. The link to the survey was distributed via several social media platforms in July 2019. After receiving informed consent, participants were assured that their answers were confidential and anonymous. In total, 378 individuals started the online questionnaire; after dropping incomplete and invalid data, 361 respondents remained. The final sample consisted of 248 (68.7%) women and 113 (31.3%) men aged 18-56 (M = 24.83, SD = 7.45). Among these participants, 0.8% had a junior high school degree, 8% had a high-school degree and technical secondary school qualifications, 75.3% had a college degree, 22.4% had a master's degree, and 2.2% had a doctorate degree. After fulfillment of the research requirement, participants received CNY 10 (approximately USD 1.5).
Machiavellian Personality Scale (MPS)
MPS is a 16-item, self-rating, and validated measure designed to assess four dimensions of Machiavellianism: (a) amorality (e.g., "I am willing to be unethical if I believe it will help me succeed"), (b) desire for control (e.g., "I like to give the orders in interpersonal situations"), (c) desire for status (e.g., "status is a good sign of success in life"), and (d) distrust of others (e.g., "people are only motivated by personal gain") [22,23]. Each item was rated on a 5-point Likert scale anchored by 1 (strongly disagree) and 5 (strongly agree). All items were summed to create a total score (range 16-80), and a higher score was indicative of higher levels of Machiavellianism. This scale was previously used among Chinese samples with satisfactory reliability and validity [24]. In this study, Cronbach's alpha was 0.868 for entire scale, 0.809 for amorality, 0.845 for desire for control, 0.768 for desire for status, and 0.819 for distrust of others. Due to varying factorial structures of Machiavellianism construct [16], we only used the total score.
Narcissistic Personality Inventory-Brief Version (NPI-16)
NPI-16 is a 16-item, self-rating, and validated measure designed to assess individual differences in levels of narcissism, which was validated in the Chinese sample [25,26]. NPI-16 has a dichotomous, forced-choice response format. Each item on the scale presents two statements, one indicative of narcissism and the other not (e.g., A: "I think I am a special person" or B: "I am no better or no worse than most people"). Participants were asked to indicate which best described themselves, scored 1 = narcissistic response, 0 = non-narcissistic response. All items were summed to create a total score (range 0-16), and a higher score was indicative of higher levels of narcissism. In this study, Cronbach's alpha was 0.818 for entire scale.
Levenson Self-Report Psychopathy Scale (LSRP)
LSRP is a 26-item, self-rating, and validated measure designed to assess two factors: (a) primary psychopathy (e.g., "I enjoy manipulating other people's feelings") and (b) secondary psychopathy (e.g., I have been in a lot of shouting matches with other people). Each item was rated on a 4-point Likert scale anchored by 1 (strongly disagree) and 4 (strongly agree) [27]. This scale was previously validated among Chinese samples [28]. The score for each factor is generated by adding the scores of items within that factor, all items are summed to create a total score (range 26-104), and a higher score is indicative of higher levels of psychopathy. In this study, Cronbach's alpha was 0.857 for the entire scale, 0.811 for primary psychopathy, and 0.730 for secondary psychopathy.
Three Factor Eating Questionnaire-R18 (TFEQ-R18)
TFEQ-R18 is an 18-item, self-rating, and validated measure designed to assess three different aspects of eating behaviors: (a) restrained eating (i.e., conscious restriction of food intake aimed to control body weight and/or to promote weight loss), (b) uncontrolled eating (i.e., the tendency to eat more than usual due to a loss of control over intake with a subjective feeling of hunger), and (c) emotional eating (i.e., the inability to resist emotional cues, eating as a response to different negative emotions) [29]. Each item was rated on a 4-point Likert scale anchored by 1 (definitely true) and 4 (definitely false). The score for each factor was generated by adding the scores of items within that factor, and a higher score was indicative of lower levels of restrained eating, uncontrolled eating, or emotional eating. The Chinese version of TFEQ-R18 was obtained by conducting a translation and back-translation, without any overlap across the members who performed the translation and back-translation. The original and back-translated items were compared for nonequivalence of meaning, and discrepancies were revised. The process continued until no semantic differences were noticed between the original version and the Chinese version. In this study, the results of the CFA revealed that the 18-item three-factor model fitted the data well (χ 2 /df = 2.066, RMSEA = 0.054, NFI = 0.931, CFI = 0.963, GFI = 0.924); Cronbach's alpha was 0.868 for the entire scale, 0.847 for restrained eating, 0.900 for uncontrolled eating, and 0.865 for emotional eating.

Table 1 shows the means and standard deviations for all variables. As expected, men reported higher levels of narcissism and primary psychopathy than those of women. The sex difference in Machiavellianism was not significant, although men scored marginally higher than women did. As for eating behaviors, the present study failed to find any sex differences in its three factors.
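The internal-consistency coefficients reported for the scales above follow the standard Cronbach's alpha formula; the sketch below shows the computation, using randomly generated placeholder responses rather than the study data (random answers give a near-zero alpha).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_variances.sum() / total_variance)

# Placeholder: 361 respondents answering the 18 TFEQ-R18 items on a 1-4 scale.
rng = np.random.default_rng(0)
scores = rng.integers(1, 5, size=(361, 18))
print(round(cronbach_alpha(scores), 3))
```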
Correlations
There is evidence that the Dark Triad has different predictors in the two sexes [19]; we then separately examined correlations between Dark Triad and eating behaviors for men and women. Results are presented in Table 2. In neither sex was Machiavellianism associated with restrained eating, uncontrolled eating, and emotional eating. For men only, narcissism was negatively and significantly associated with uncontrolled eating and emotional eating. In both sexes, both primary psychopathy and secondary psychopathy were negatively and significantly associated with uncontrolled eating. When these correlations were assessed across the sexes, only two differed significantly. The correlations between narcissism and uncontrolled eating (r = −0.389, p < 0.01 for men; r = −0.087, p > 0.05 for women; Fisher's z = 2.818, p < 0.01), emotional eating (r = −0.308, p < 0.01 for men; r = −0.011, p > 0.05 for women; Fisher's z = 2.678, p < 0.01) were stronger in men than those in women, thereby suggesting that the impact of narcissism on uncontrolled eating and emotional eating may differ significantly in two sexes.
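The cross-sex comparison of correlations relies on the Fisher r-to-z transformation for independent samples; the sketch below applies it to the reported narcissism-uncontrolled eating correlations and reproduces a z of about 2.8 in absolute value.

```python
import numpy as np
from scipy.stats import norm

def fisher_z_compare(r1, n1, r2, n2):
    """Two-sided test that two independent correlations are equal (Fisher r-to-z)."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    return z, 2.0 * norm.sf(abs(z))

# Narcissism-uncontrolled eating: men (r = -0.389, n = 113) vs. women (r = -0.087, n = 248).
z, p = fisher_z_compare(-0.389, 113, -0.087, 248)
print(round(z, 3), round(p, 4))
```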
Regression Analyses
Multiple regression analyses were performed separately for men (coded = 1) and women (coded = 2) to explore differential eating behavior patterns among Dark Triad traits [30]. We controlled for age and educational degree in the regression analyses by entering them in Step 1, followed by Dark Triad in Step 2. As shown in Table 3, in both sexes, Dark Triad had no correlation with restrained eating. As shown in Table 4, Machiavellianism was associated with lower uncontrolled eating for women only, narcissism was associated with higher uncontrolled eating for men only, and secondary psychopathy was associated with higher uncontrolled eating in both sexes. As shown in Table 5, narcissism and secondary psychopathy were associated with higher emotional eating for men only. In addition, for women only, age was associated with lower uncontrolled eating and emotional eating, and strikingly, higher educational degree was associated with higher levels of emotional eating.
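The two-step entry used in these analyses (controls first, Dark Triad second) can be sketched as follows; the data frame and column names are placeholders rather than the original variable names, and the incremental R-squared quantifies the variance added by the Dark Triad block.

```python
import statsmodels.formula.api as smf

def hierarchical_steps(df):
    """Step 1: age and education; Step 2: adds the Dark Triad scales (placeholder column names)."""
    step1 = smf.ols("uncontrolled_eating ~ age + education", data=df).fit()
    step2 = smf.ols(
        "uncontrolled_eating ~ age + education + machiavellianism"
        " + narcissism + primary_psychopathy + secondary_psychopathy",
        data=df,
    ).fit()
    delta_r2 = step2.rsquared - step1.rsquared  # incremental variance explained by the Dark Triad
    return step1, step2, delta_r2

# Run separately for men and women, mirroring the split reported in Tables 3-5:
# hierarchical_steps(df[df["sex"] == 1]); hierarchical_steps(df[df["sex"] == 2])
```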
Moderating Effect of Sex
Because narcissism has different predictors in the two sexes, we conducted formal moderation analysis to confirm whether the sex of the participant moderated the associations between narcissism and eating behaviors [31]. After controlling for age and educational degree, the sex-narcissism interaction term was negatively and significantly associated with uncontrolled eating (β = −0.197, t = 2.91, p < 0.01) and emotional eating (β = −0.196, t = 2.85, p < 0.01). These results mean that the link between narcissism and uncontrolled/emotional eating was more substantial for men than it is for women (Figures 1 and 2).
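A minimal sketch of this moderation test, assuming mean-centered predictors and placeholder column names, is given below; a significant product term corresponds to the sex-by-narcissism interactions reported above, and simple slopes can then be probed within each sex.

```python
import statsmodels.formula.api as smf

def moderation_by_sex(df):
    """OLS with a sex x narcissism interaction, controlling for age and education."""
    d = df.copy()
    d["narc_c"] = d["narcissism"] - d["narcissism"].mean()
    d["sex_c"] = d["sex"] - d["sex"].mean()
    model = smf.ols(
        "uncontrolled_eating ~ age + education + narc_c + sex_c + narc_c:sex_c",
        data=d,
    ).fit()
    return model  # inspect model.params["narc_c:sex_c"] and its p-value
```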
Discussion
The Dark Triad is a hot topic in personality psychology, clinical psychology, and evolutionary psychology. Researchers have examined various intrapersonal, interpersonal, and behavioral correlates. In current study, we conducted an exploration study to examine the associations between the Dark Triad and eating behaviors.
Consistent with previous studies [32][33][34], the results of sex differences in the Dark Triad indicate that men scored significantly higher than women did on narcissism and primary psychopathy. Men scored higher than women did in Machiavellianism, but the sex difference was slight and not significant. Additionally, no sex difference was found in three aspects of eating behaviors in present study.
The fact that Machiavellianism was associated with lower uncontrolled eating behaviors deserves attention. Among women, when shared variance between the traits of Dark Triad was controlled in multiple regression, Machiavellianism (β = 0.197, p < 0.05) uniquely predicted uncontrolled eating, thus suggesting that it was a protective factor against uncontrolled eating behaviors.
Machiavellianism and even Dark Triad, may tap into a fast life strategy [35][36][37][38]. Life history theory is a midlevel evolutionary theory about resource allocation that describes the adaptive choices made by people to optimize survival and reproduction on account of ecological and/or social environments [20]. The fast life history strategy is produced by harsh or unpredictable environments encountered in childhood [20], is reflective of reproductive efforts (an early age of reproduction and a preference for immediate benefits at the expense of long-term benefits) over somatic efforts (people devoted to their own continuing survival and development), and is adaptive under adverse circumstances [21]. For example, an experimental study has shown that information associated with harsh environments encourages behaviors consistent with a fast life history strategy, unconsciously leading participants to seek and consume more filling and high-calorie foods [39] that they believe will sustain them for a long time. Therefore, the relationship between Machiavellianism and relevant eating behaviors is very pertinent in view of the results.
However, some studies found that those high in Machiavellianism have strategic planning and a longer-term orientation [40,41]. Perhaps these characteristics may promote a slow life strategy [42] and thereby diminish uncontrolled eating behaviors. Another possible hypothesis is that dieting may work primarily as a female strategy in mating and status competition [1]. Dieting and the resulting thinness can increase one's attractiveness and enhance status in female groups, especially when cultural and fashion emphasis on thinness is strong. From an evolutionary perspective, the psychological mechanisms that underlie dieting behaviors are fundamentally adaptive [1].
Both multiple regression and moderation analyses indicate that only men showed a significant narcissism-uncontrolled eating/emotional eating slope, thus demonstrating that the sex of the participant could moderate the simple relationships between narcissismuncontrolled/emotional eating. These results suggest that narcissism uniquely predicted reckless eating behaviors in men. An explanation for the obtained results is that narcissism is positively associated with impulsivity. For example, Crysel and colleagues found that, of the Dark Triad traits, narcissism was most consistently associated with behavioral risk tasks, and may be driving the observed relationships between the Dark Triad and risk behaviors [35]. Lau and Marsee also found that narcissism showed the strongest associations with behavioral dysregulation and emotional dysregulation among the Dark Triad traits [43].
Another possibility for the obtained results is that sensation seeking is characteristically higher in men than that in women [44]. Regarding impulsivity and sensation seeking, the two key behaviors correlating to the fast life history strategy in humans are coupled with entitlement and overconfidence (i.e., narcissism), and men appear motivated to engage in reckless eating behaviors.
With respect to psychopathy, the present study found that secondary psychopathy was associated with higher uncontrolled eating behaviors in both sexes, and associated with higher emotional eating behaviors for men only. These results should be taken as modest support for our hypotheses, and allow for us to further discriminate secondary psychopathy from primary psychopathy. Previous research has revealed that, compared to primary psychopathy (emotionally stable psychopathy), secondary psychopathy (neurotic psychopathy) is a better predictor of uncontrolled behaviors such as substance abuse, aggression, and criminality [45]. Therefore, from a mental health perspective, skills in emotion regulation should be included when reckless eating behaviors are the focus of an intervention program.
Incidentally, the current study noted that those men who scored higher on the Dark Triad, especially narcissism, showed more reckless eating behaviors than their women counterparts did, while others found that eating disorders occur almost exclusively in females [3]. The resolution of these apparently contradictory findings appears to be associated with the hypothesis that, as a female strategy in mating and status competition, women's dieting behaviors are fundamentally adaptive and may lead to maladaptive outcomes, such as anorexia nervosa [1].
There are some limitations of the current study that should be considered. First, only self-report measures were used; therefore, the present study may be subject to monoinformant biases. Future studies may benefit from additional data sources, such as parents, teachers, peers, and close friends. Second, although Machiavellianism appears to be onedimensional, both narcissism and psychopathy are multidimensional [46,47]. In this study, one characteristic limitation is that it tended to consider overall scores on the narcissism trait. Future research may examine the associations between eating behaviors and different types of narcissism, that is, grandiose and vulnerable narcissism. Third, while the study was available online, recruitment was reliant on Chinese-speaking populations. The culture from which participants are recruited may impact on personality, eating behaviors, and their willingness to provide socially desirable responses [48]. Future research should consider a more diverse population such as Western populations. In addition, the current work is based on a small sample of participants, so future researchers should attempt to replicate these findings in larger samples to gain a more reliable result. Fourth, although it is important to study eating behaviors in a subclinical sample, it limits the generalizability of the results to a population from clinical samples. Future research should extend it to clinical sample in order to determine whether similar associations between Dark Triad traits and eating behaviors emerge. Lastly, although the reliability and validity of the eating questionnaire in this study were satisfactory, the questionnaire had not been validated in the Chinese context before, and its psychometric properties need to be further tested in the future.
Conclusions
This study is explorational research to examine the associations between Dark Triad traits and eating behaviors. The results showed that (a) despite expected sex differences in narcissism and primary psychopathy, no sex differences were found in Machiavellianism, secondary psychopathy, and eating behaviors; (b) among women, Machiavellianism was a protective factor against uncontrolled eating behaviors; (c) the sex of the participant moderated the narcissism-uncontrolled eating and narcissism-emotional eating relationships, with the negative correlation being stronger for men than that for women; (d) secondary psychopathy, rather than primary psychopathy, was associated with higher uncontrolled eating behaviors in both sexes, and associated with higher emotional eating behaviors for men only.
Author Contributions: L.S.: conceptualization, investigation, data curation, writing-original draft preparation; S.S.: conceptualization, methodology, writing-review and editing; Y.G.: writingoriginal draft preparation, writing-review and editing, supervision, project administration, funding acquisition. All authors have read and agreed to the published version of the manuscript. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2021-09-27T19:03:16.366Z
|
2021-08-15T00:00:00.000
|
238709562
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-4433/12/8/1046/pdf?version=1629710025",
"pdf_hash": "75470099da765e78167803b8279da5de1217e032",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44566",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "7451729d600eb216e73d5e8e9f0db7f5366bf356",
"year": 2021
}
|
pes2o/s2orc
|
On the Problem of Critical Electric Field of Atmospheric Air
It is traditionally accepted to define the dielectric strength of air as an electric field corresponding to the balance between the rates of impact ionization and electrons' attachment to neutrals. Its reduced value is known to be about 110 Td regardless of the altitude above the mean sea level. In this study, the altitude profile of the critical electric field of atmospheric air in the 0–40 km altitude range is specified. Unlike the conventional approach, a wide range of additional plasma-chemical processes occurring in atmospheric air, such as electron detachment from negative ions and ion-ion conversion, is taken into account. Atmospheric air is considered to be a mixture of N 2 :O 2 = 4:1 containing a small amount of chemically active small gas components, such as water vapor, atomic oxygen, ozone, and several types of nitrogen oxides. It is shown that the dielectric strength of air falls noticeably compared to its conventional value. The results of the study can be important to solve the problems of initiation and propagation of lightning discharges, blue starters, and blue jets.
Introduction
The breakdown electric field, which separates the dielectric state of the medium from the ionized one, is an important property of atmospheric air. It is traditionally accepted to define the breakdown threshold E b taking into account only ionization (production of electrons) and attachment (the loss of electrons) processes. This concept involves a single equation for the electron concentration [e] temporal evolution:

d[e]/dt = (ν i − ν a )[e], (1)

where ν i and ν a are the ionization and attachment frequencies, respectively, which are both sharp functions of the electric field [1]. In the framework of Equation (1), electron multiplication is impossible when ν a > ν i and the threshold of their number density exponential growth is determined from the relation:

ν i (E b ) = ν a (E b ). (2)

In atmospheric air, which in the first approximation can be considered as a nitrogen-oxygen mixture, there are two main ionization reactions:

e + N 2 → 2e + N 2 + , (3)
e + O 2 → 2e + O 2 + . (4)

In the lower atmosphere, the key process responsible for the loss of electrons is their attachment to oxygen molecules:

e + O 2 → O − + O, (5)
e + O 2 + O 2 → O 2 − + O 2 . (6)

Under normal conditions, the attachment frequency ν a varies from 10 7 s −1 (for three-body attachment (6), which prevails for reduced electric fields smaller than 55 Td [2]) to 10 8 s −1 (for two-body attachment (5), which prevails for reduced electric fields higher than 55 Td [2]). Table 4 reported in [3] allows one to estimate the electron lifetime as 10-100 ns. When attached to neutrals, electrons form negative ions, the relatively low mobility of which significantly complicates further ionization. The air breakdown field E b at the sea level found from Equation (2) varies from 2.6 to 3.2 MV/m [1] and exponentially decreases with increasing height because of the reduction in the number density of air molecules. It must be noted that the balance relation (2) works well at times that do not exceed ν −1 a , while at larger time scales an additional consideration of slower processes, the main of which is detachment of electrons from negative ions, is needed.
There are numerous studies (see [2,[4][5][6][7] and references therein) discussing the influence of electron detachment from negative ions on the critical breakdown field of air. In [4], two additional (with respect to impact ionization and attachment) plasma-chemical processes were employed: detachment of electrons from O2− ions and conversion of O− ions into O2− ones. A joint consideration of reactions (3)-(5), (7) and (8) allowed the authors to compile a linear system of three differential equations describing the temporal evolution of electron, O−, and O2− concentrations and to derive the formula for the effective ionization frequency, which becomes positive below the conventional threshold E b. In [2], reactions (3)-(5) and (7) were supported by associative detachment and three-body conversion reactions. Similarly to [4], it was shown that the resultant system of four differential equations that involves the number densities of electrons and ions loses stability when ν i < ν a. This result was later refined in [5] with additional consideration of reactions (6) and (8) along with detachment processes involving O atoms. The authors of [6] developed a simple model with reactions (3)-(5) and (9) to show that in the upper atmosphere electrons multiply under electric fields significantly below the conventional breakdown threshold because, at high altitude (low pressure), the electron associative detachment from atomic oxygen ions counteracts the effect of dissociative attachment. In a recent study [7] devoted to the problem of lightning initiation in a thundercloud, it was shown that the involvement of detachment reactions (7), (9), and (13) together with the conversion ones (8) and (10) provides a significant (15-30%, see their Figure 1(a)) reduction in the critical electric field compared to the traditionally accepted value E b. In their calculations, the authors first considered the process of ion loss to hydrometeors, which can be important for intracloud conditions, and analyzed the 0-20 km altitude range. Among other results of [7], there is the fact that the gap between the reduced critical field E c /N, which falls with the altitude above the mean sea level (AMSL) h, and the reduced conventional breakdown field E b /N, which does not noticeably depend on h, increases with increasing values of h, reaching approximately 35 Td at the height of 20 km.
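The effect described in [4] can be illustrated with a toy linearized system for the electron, O−, and O2− number densities, in which attachment removes electrons and detachment from O2− returns them; all frequencies below are placeholders rather than the actual field- and pressure-dependent rates, and recombination and ion losses are ignored as in the linearized treatment.

```python
import numpy as np

def effective_ionization_frequency(nu_i, nu_a2, nu_a3, nu_det, nu_conv):
    """Largest eigenvalue of a toy linear system for [e], [O-], [O2-] (all rates in s^-1, placeholders).

    nu_i    : impact ionization
    nu_a2   : two-body (dissociative) attachment producing O-
    nu_a3   : three-body attachment producing O2-
    nu_det  : electron detachment from O2-
    nu_conv : conversion of O- into O2-
    """
    A = np.array([
        [nu_i - nu_a2 - nu_a3, 0.0,      nu_det ],  # d[e]/dt
        [nu_a2,               -nu_conv,  0.0    ],  # d[O-]/dt
        [nu_a3,                nu_conv, -nu_det ],  # d[O2-]/dt
    ])
    return np.max(np.linalg.eigvals(A).real)

# Even with nu_i below the total attachment frequency, the largest eigenvalue (the effective
# ionization frequency) stays positive once detachment feeds electrons back into the system.
print(effective_ionization_frequency(nu_i=5e7, nu_a2=6e7, nu_a3=1e7, nu_det=1e6, nu_conv=1e7))
```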
In the present study, the value of the critical electric field of atmospheric air is refined. The advantage of our model is that it considers both a wide range of plasma-chemical reactions (72 in total, see Appendix A) and the presence of chemically active small gas components (SGCs), such as H 2 O, O, O 3 , NO, NO 2 , NO 3 , and N 2 O, inhomogeneously distributed over the analyzed 0-40 km altitude range.
The content of the paper is as follows. Section 2 describes the composition and thermodynamic properties of virgin atmospheric air in the considered 0-40 km altitude range (Section 2.1) and the numerical scheme used to define the altitude profile of its reduced critical electric field and to analyze its ion composition under suprathreshold conditions (Section 2.2). Section 3 presents the model results, which are further discussed in Section 4. The main findings of the study are formulated in Section 5. Appendices A and B provide the list of considered plasma-chemical reactions and the components of the system evolution matrix, respectively.
Materials and Methods
The main purpose of the study is to specify the critical electric field whose exceedance ensures exponential growth of charged particle concentrations in atmospheric air, and to analyze the system behavior in fields slightly exceeding this value. In this section, the composition and properties of the analyzed medium, which is atmospheric air in the 0-40 km altitude range, are described, and the numerical approach used is discussed.
Ambient Conditions
In this study, the basic parameters of the atmospheric air correspond to the standard atmosphere approximation, which is widely used in solving various technical and thermophysical problems and implies averaged values of air pressure and temperature, i.e. values not attached to some specific conditions. Altitude distributions of atmospheric air parameters and composition were obtained by digitizing data from the following sources:

• air temperature (T) and pressure (p) altitude distributions (Table 1).

The air number density N, which determines the reduced electric field E/N, was calculated as

N = p / (k_B T),  (15)

where k_B = 1.38 × 10^-23 J/K is the Boltzmann constant. In the altitude range of 0-40 km AMSL, the air can be considered a mixture of N2:O2 = 4:1 (see Figure 1 in [8]) containing SGCs whose number densities are many orders of magnitude smaller than N = [N2] + [O2]. Altitude distributions of the discussed quantities in the considered range of 0-40 km AMSL are presented in Figure 1.

In a recent study [7], it was shown that the frequency of ion loss to hydrometeors is ν_h = 0.1-1 s^-1 in thundercloud conditions. It is unknown how ν_h depends on the altitude AMSL beyond the cloud volume. On the other hand, it follows from general considerations that ν_h must fall when moving away from the cloud center, where the concentration of hydrometeors is maximal. Because of this, it is conditionally assumed in the study that ν_h has a maximum at the 5-km altitude and falls off with a characteristic scale of 10 km (Equation (16)). In Equation (16), the 5-km altitude, where ν_h has a maximum, approximately corresponds to the peak of the used water vapor altitude profile (see Figure 2 in [8]), while the characteristic scale of 10 km is comparable to the vertical extent of a thundercloud, where the vast majority of hydrometeors is located.
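The following minimal sketch shows the calculation of Equation (15) and of the reduced field E/N; the pressure and temperature values are rough standard-atmosphere numbers used only for illustration, not the digitized profiles used in the paper.

```python
# Minimal sketch: air number density N = p / (k_B * T) and the reduced field E/N.
K_B = 1.38e-23  # J/K, Boltzmann constant

def number_density(p_pa, t_k):
    """Air number density in m^-3 from pressure [Pa] and temperature [K]."""
    return p_pa / (K_B * t_k)

def reduced_field_td(e_v_per_m, p_pa, t_k):
    """Reduced electric field E/N in Townsend (1 Td = 1e-21 V*m^2)."""
    return e_v_per_m / number_density(p_pa, t_k) / 1e-21

# Example: sea level (~101325 Pa, 288 K) and roughly 20 km altitude (~5500 Pa, 217 K)
for label, p, t in [("0 km", 101325.0, 288.0), ("20 km", 5500.0, 217.0)]:
    n = number_density(p, t)
    print(f"{label}: N = {n:.3e} m^-3, E/N at 3 MV/m = {reduced_field_td(3e6, p, t):.1f} Td")
```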
Evolution Matrix
Under conditions in which the electric field exceeds the air breakdown threshold, an increasing number of electrons and ions appears. In this study, a wide range of atmospheric positive and negative ions is considered. The model includes several types of plasma-chemical processes (see Appendix A), such as ionization, attachment of electrons to neutrals and their detachment from negative ions, and ion-ion conversions. The set of considered reactions can be presented in the form of an evolution matrix A (see Appendix B):

dx/dt = A x,  (17)

where x is a vector of variables that includes the concentrations of electrons and positive and negative ions. Since in sub-threshold electric fields the ambient concentrations of electrons and atmospheric ions are negligible compared to those under suprathreshold conditions, the equilibrium state of the system can be considered zero. This circumstance allows one to neglect quadratic recombination processes (at least when the electric field is not far from the ionization threshold) and to operate with the linearized system of Equation (17). As in previous studies (for example, [2,4-7]), the sought critical electric field E_c is defined as the field at which the first positive eigenvalue λ+ of the matrix A, which can also be called an effective ionization frequency ν_eff, appears. It should be noted that some components of the evolution matrix A are sharp functions of both the reduced electric field E/N and the air temperature T. The components of the eigenvector x+ corresponding to the eigenvalue λ+, which characterize the ion composition of atmospheric air, also depend on the electric field and the altitude AMSL. The described numerical method allows one to answer the following questions (see Section 3); a small numerical sketch of the eigenvalue-based search for E_c is given after the list:
1. How does the critical electric field of atmospheric air depend on the altitude AMSL?
2. How does the presence of SGCs influence the critical electric field altitude profile?
3. How does the effective ionization frequency depend on the electric field and the altitude AMSL?
4. How does the composition of charged particles (electrons and ions) vary with both the electric field and the altitude AMSL?
5. What is the ratio of detachment frequency to ionization frequency at different electric fields and altitudes AMSL?
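The sketch below illustrates the eigenvalue-based search for E_c on a deliberately small toy system (electrons, O−, and O2− only). All rate expressions, including the assumed ion-loss frequency ν_h, are placeholders; the actual model uses the full 18-component vector and the 72 reactions of Appendix A.

```python
import numpy as np
from scipy.optimize import brentq

# Toy 3-component sketch (electrons, O-, O2-) of the linearized system dx/dt = A x.
# All rate expressions are placeholders chosen only to illustrate the method.
def rates(E_over_N):
    nu_i  = 1e8 * np.exp(-250.0 / E_over_N)  # ionization (assumed form)
    nu_a2 = 1e7 + 60.0 * E_over_N            # two-body attachment -> O-   (assumed)
    nu_a3 = 5e6                              # three-body attachment -> O2- (assumed)
    nu_d  = 3e4 + 2.0 * E_over_N             # detachment from O2-         (assumed)
    nu_c  = 1e5                              # conversion O- -> O2-        (assumed)
    nu_h  = 1e4                              # ion loss, e.g. to hydrometeors (assumed)
    return nu_i, nu_a2, nu_a3, nu_d, nu_c, nu_h

def evolution_matrix(E_over_N):
    nu_i, nu_a2, nu_a3, nu_d, nu_c, nu_h = rates(E_over_N)
    # state vector x = [electrons, O-, O2-]
    return np.array([
        [nu_i - nu_a2 - nu_a3,  0.0,            nu_d          ],  # d[e]/dt
        [nu_a2,                -(nu_c + nu_h),  0.0           ],  # d[O-]/dt
        [nu_a3,                 nu_c,          -(nu_d + nu_h) ],  # d[O2-]/dt
    ])

def nu_eff(E_over_N):
    """Effective ionization frequency: largest real part of the eigenvalues of A."""
    return np.linalg.eigvals(evolution_matrix(E_over_N)).real.max()

# The critical field E_c is where the leading eigenvalue first becomes positive.
Ec = brentq(nu_eff, 20.0, 200.0)
print(f"Toy-model critical reduced field: {Ec:.1f} Td")
```

In this toy setting the detachment and conversion channels return attached electrons to the free population, so the zero crossing of the leading eigenvalue occurs noticeably below the conventional balance ν_i = ν_a, which is the qualitative effect discussed in the paper.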
Results
Figure 2 presents altitude profiles of the reduced conventional (E_b/N) and critical (E_c/N) breakdown electric fields obtained for different atmospheric air compositions. The values of E_b were obtained from Equation (2) with ν_i = ν_i1 + ν_i2 and ν_a = ∑ν_ak, k = 1…12 (see Appendix A). It follows from Figure 2 that the field E_c, whose calculation involves a wide set of plasma-chemical reactions (see Appendix A), is significantly smaller than the conventional field E_b, the concept of which assumes that only ionization and attachment processes are significant. This is because, in agreement with previous studies [2,4-7], the role of electron detachment from negative ions cannot be neglected. Conversions between different types of negative ions are also important because each detachment reaction involves a specific sort of negative ion. It is also seen that the gap between E_b and E_c increases with increasing altitude AMSL, from 15% at the ground level to 50% at the height of 40 km. Possible factors behind this are the reduced role of three-body reactions with decreasing molecule concentration and the non-monotonic altitude profile of the air temperature T(h) (see Figure 1). The latter is important because the temperature of neutrals T influences both the rates of some plasma-chemical reactions (see Appendix A) and the altitude dependence of the molecule concentration N(h) (see Equation (15)).

Further, it is seen from Figure 2 that the role of SGCs in determining the altitude profiles of E_b and E_c is generally not significant. For the reduced conventional breakdown field E_b/N, the presence of water vapor provides insignificant growth (about 2.5 Td) at altitudes below 10 km because of the attachment reaction (2l) (see Appendix A), while its influence on the critical electric field E_c is negligible. For the nonconventional breakdown field E_c, the role of SGCs becomes noticeable above approximately 25 km altitude. In particular, an exclusion of ozone results in a significant (more than 10 Td at the 40 km altitude) reduction of E_c/N at altitudes above 35 km because the presence of ozone provides the attachment processes (2f)-(2h) (see Appendix A). Calculations show that NO2, NO3, and N2O molecules do not noticeably affect the breakdown field, at least at the considered altitudes.

Figure 3 presents several examples of the dependence of the model-predicted effective ionization frequency ν_eff on the reduced electric field E/N for several altitudes and on the altitude AMSL h for different values of the electric field E. It follows from Figure 3 that the rapid growth of the increment ν_eff at electric fields and altitudes slightly exceeding the critical levels quickly transfers into a mode of smoother growth. The higher the altitude AMSL, the smaller the rate of growth of ν_eff. This feature partially compensates for the critical electric field reduction with increasing altitude (see Figure 2). In this and all the following figures, (1) the presented model results were obtained with all the SGCs taken into account; (2) for panels with fixed values of h, the upper reduced electric field limit of 111 Td corresponds to the conventional breakdown field E_b; (3) for panels with fixed values of E, the upper altitude limits correspond to the electric field E being equal to E_b. Thus, in our model results we do not touch upon the area of E > E_b.
The presented fractions correspond to normalized components of the eigenvector x+ of the matrix A conjugated to the eigenvalue λ+ = ν_eff (see Section 2.2). It is seen from Figures 4 and 5 that at reduced electric fields (altitudes) corresponding to the conventional breakdown threshold E_b/N ≈ 111 Td (the heights where E = E_b), the system already contains a sufficiently large amount not only of ions but also of free electrons, which are very important for the breakdown development. Near the critical threshold E_c (the altitude at which E = E_c), negative charge exists predominantly in the form of negative ions, while electrons do not survive under these conditions because of rapid attachment to neutrals. As the electric field (altitude) increases, the role of detachment grows rapidly, which is accompanied by a decay of the relative fraction of negative ions in the "community" of negatively charged particles and a release of electrons. As a result, the balance gradually changes in favor of the latter.

Figure 6 shows how the ratio of the effective detachment frequency to the attachment frequency, ν_d^eff/ν_a, depends on the reduced electric field E/N for several altitudes and on the altitude AMSL h for several fixed values of the electric field E. The total attachment frequency is ν_a = ∑ν_ak, k = 1…12 (see Appendix A), while the effective detachment frequency ν_d^eff is calculated taking into account the relative contributions of the negative ions involved in reactions (3a)-(3h) from Table A1, where the weights x2, x3, x4, and x7 are the components of the vector of variables x corresponding to O−, O2−, O3−, and NO2− ions (see Appendix B) and vary with the electric field and the altitude AMSL (see Figures 4 and 5). It follows from Figure 6 that the role of the detachment process quickly becomes significant at electric fields exceeding the critical threshold E_c, especially at high altitudes. Knowledge of this ratio is important because, if the conditions can be considered quasi-equilibrium, it characterizes the balance between the electron (n_e) and atmospheric negative ion (n_n) number densities.
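A possible way to form the ion-fraction-weighted effective detachment frequency and its ratio to ν_a is sketched below. Because the exact weighted form used in the paper is not reproduced here, the weighting scheme, the detachment frequencies, and the eigenvector components are all assumed illustrative values.

```python
# Sketch of an ion-fraction-weighted effective detachment frequency and the
# ratio nu_d_eff / nu_a.  All numbers below are invented placeholders.
nu_det = {"O-": 5.0e4, "O2-": 3.0e4, "O3-": 1.0e3, "NO2-": 5.0e2}  # s^-1, assumed

# Normalized components of the leading eigenvector for the detaching ions
# (in the real model they vary with E/N and altitude AMSL).
x = {"O-": 0.10, "O2-": 0.55, "O3-": 0.25, "NO2-": 0.10}           # assumed

nu_a = 1.2e7  # total attachment frequency, s^-1 (assumed)

# Weight each ion's detachment frequency by its share of the negative charge.
nu_d_eff = sum(nu_det[ion] * x[ion] for ion in nu_det) / sum(x.values())
print(f"nu_d_eff = {nu_d_eff:.3e} s^-1, nu_d_eff/nu_a = {nu_d_eff / nu_a:.3e}")
```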
Discussion
The concept of a critical electric field of air breakdown is closely related to the problem of lightning initiation, which heads the list of the ten top questions in the physics of lightning [15]. Indeed, maximal electric fields measured in clouds are about an order of magnitude lower than the conventional breakdown value at the same altitude (see, for example, Table 3.2 in [16] and Table 3.1 in [15]), which means that there must be some physical mechanisms making electrical breakdown possible under smaller electric fields. In this study, we developed a numerical model which takes into account a wide list of plasma-chemical processes and the presence of atmospheric SGCs to show that the critical electric field, at which charged particle multiplication begins, is noticeably lower than the conventional breakdown threshold and that the gap between their values increases with increasing altitude (see Figure 2). It was also shown that at electric fields higher than E_c there is a certain amount of free electrons (see Figures 4 and 5), which are the key element of any electrical breakdown. The fact that, even in electric fields smaller than the conventional breakdown threshold, the air contains some amount not only of ions but also of free electrons sheds some light on how lightning initiation in sub-breakdown intracloud conditions is possible at all (see [7] for more details).
In our model, we deal with a linearized system of differential equations and neglect higher-order processes, the most significant of which are electron-ion and ion-ion recombination. This is valid because near the critical threshold E_c the degree of air ionization is low. For electric fields significantly higher than E_c, recombination becomes noticeable, which makes the approach used here inapplicable. That is why, in the presentation of our model results, we limit ourselves to electric fields ranging from the critical E_c to the conventional E_b breakdown threshold. Production and chemical transformations of SGCs are also not taken into account, under the assumption that their concentrations do not differ significantly from those of the virgin air. Effective generation of SGC molecules (some reactions are shown in Appendix A) is possible at relatively high concentrations of charged particles, which is not the case near their multiplication threshold. Regarding the transformations between neutrals, the rate constants of their reactions are functions of the air temperature [17]. As no significant air heating is expected in the considered electric field range, SGC number densities should not change significantly. Although the described model limitations can be crucial far above E_c, it is believed that they do not significantly affect the model predictions described in the paper.
In this study, the altitude profiles of nitrogen oxides were taken, for lack of anything better, from particular experiments [11-14] conducted at certain times and places, which can potentially be a source of inaccuracy in the model results. On the other hand, it follows from Figure 2 that the most "important" SGC noticeably influencing the nonconventional breakdown field is ozone, whose averaged number density altitude profile is relatively well known. For the conventional case, the most "influential" SGC is water vapor, which is also well measured near the ground. Thus, the use of locally measured altitude profiles of nitrogen oxides is believed to be justified.
The altitude range of 0-40 km AMSL is considered in the study. At higher altitudes, the atmosphere becomes strongly ionized by cosmic rays (see Figure 1.3 in [16]) regardless of the electric field. Because of this, it can hardly be considered as a dielectric medium which significantly complicates the concept of its breakdown field. For the considered altitude range, we suppose that below the critical electric field concentrations of all the charged particles are negligible which allows us to work with a zero equilibrium state.
Conclusions
The concept of a critical breakdown field of atmospheric air is refined in this study, taking into account a wide range of plasma-chemical reactions and the presence of SGCs. In addition, the model results allow one to analyze the dynamics of the charged-component composition and the relative share of free electrons among negatively charged particles near the critical breakdown threshold. The main findings of the study are the following:
1. The critical electric field of atmospheric air, at which the multiplication of charged particles begins, is significantly smaller than the conventional value, mostly due to electron detachment from negative ions. The gap between the conventional and nonconventional thresholds increases with increasing altitude AMSL, from 15% at the ground level to 50% at the height of 40 km.
2. The presence of SGCs does not significantly influence the critical electric field.
3. Close to the critical threshold, the effective ionization frequency is a sharp function of the reduced electric field. The rate of its growth decreases with increasing altitude AMSL, which partially compensates for the critical electric field reduction.
4. Above the critical electric field, ionized air contains some amount of free electrons. Their relative share in the "community" of negatively charged particles, which can be expressed via the ratio of the effective detachment frequency to the attachment frequency, generally increases with increasing reduced electric field.
The results of the study testify that discharge development in atmospheric air actually begins in electric fields significantly smaller than the conventional breakdown threshold, which is important for the lightning initiation problem.

Funding: Foundation (project 19-17-00183).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Acknowledgments: The authors express their gratitude to N.A. Popov for productive discussions on the subject of the study.
Conflicts of Interest: The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in the manuscript:
AMSL: Above mean sea level
SGC: Small gas component
Appendix A. Model Reactions
In this appendix, we present the list of reactions included in our model. They are all "linear" in the sense that their left-hand sides contain only one considered charged component, because in our simplified approach we neglect quadratic recombination reactions. In Table A1, M stands for N2 or O2 and [M] = [N2] + [O2] is the number density of atmospheric air. The dependence of the electron temperature T_e on the reduced electric field E/N was taken from [18]. The temperatures of air (T) and electrons (T_e) are expressed in kelvins. The reactions of Table A1 are grouped into the following categories: ionization; attachment; electron detachment from negative ions; ion-ion conversion without nitrogen oxides; and ion-ion conversion involving nitrogen oxides [22].
Appendix B. Evolution Matrix Components
In this appendix, we present the non-zero components of the linearized evolution matrix A from Equation (17) that result from the list of plasma-chemical reactions shown in Appendix A. To define the positions of the frequencies of the considered reactions in matrix A, we first set the components of the vector of variables x = {x1, x2, ..., x18}, where each component corresponds to the concentration of one charged species (in particular, x2, x3, x4, and x7 correspond to O−, O2−, O3−, and NO2− ions). One can then attribute the frequencies of the plasma-chemical reactions from Table A1 to the components of matrix A, grouped for simplicity by matrix line; for example, Line 4 collects the terms of d[O3−]/dt and Line 8 those of d[NO3−]/dt. Knowledge of the components of matrix A allows one, if necessary, to write evolution equations for the considered charged components, such as the equation following from the 15th line for the corresponding ion concentration.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2006-09-14T00:00:00.000
|
1837574
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcimmunol.biomedcentral.com/track/pdf/10.1186/1471-2172-7-22",
"pdf_hash": "4c0cad9ea2964fc7d6f67b28fbd178c3361518e7",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44567",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "4c0cad9ea2964fc7d6f67b28fbd178c3361518e7",
"year": 2006
}
|
pes2o/s2orc
|
Expression of P2 receptors in human B cells and Epstein-Barr virus-transformed lymphoblastoid cell lines
Background Epstein-Barr virus (EBV) infection immortalizes primary B cells in vitro and generates lymphoblastoid cell lines (LCLs), which are used for several purposes in immunological and genetic studies. Purinergic receptors, consisting of P2X and P2Y, are activated by extracellular nucleotides in most tissues and exert various physiological effects. In B cells, especially EBV-induced LCLs, their expression and function have not been well studied. We investigated the expression of P2 receptors on primary human B cells and LCLs using the quantitative reverse transcriptase-polymerase chain reaction (RT-PCR) method for revealing the gene expression profile of the P2 receptor subtypes and their changes during transformation. Results The mRNA transcripts of most P2 receptors were detected in primary B cells; the expression of P2X3 and P2X7 receptors was the lowest of all the P2 receptors. By contrast, LCLs expressed several dominant P2 receptors – P2X4, P2X5, and P2Y11 – in amounts similar to those seen in B cells infected with EBV for 2 weeks. The amount of most P2 subtypes in LCLs or EBV-infected B cells was lower than in normal B cells. However, the amount of P2X7 receptor expressed in LCLs was higher. Protein expression was studied using Western blotting to confirm the mRNA findings for P2X1, P2X4, P2X7, P2Y1, and P2Y11 receptors. ATP increased the intracellular free Ca2+ concentration ([Ca2+]i) by enhancing the Ca2+ influx in both B cells and LCLs in a dose-dependent manner. Conclusion These findings describe P2 receptor expression profiles and the effects of purinergic stimuli on B cells and suggest some plasticity in the expression of the P2 receptor phenotype. This may help explain the nature and effect of P2 receptors on B cells and their role in altering the characteristics of LCLs.
Background
B cells synthesize and secrete large quantities of soluble immunoglobulin antibodies and thus, play a key role in humoral immunity. An infection with the Epstein-Barr virus (EBV) easily transforms resting primary B cells in vitro from human peripheral blood cells into B-blast-like proliferating lymphoblastoid cell lines (LCLs) [1]. This infection is used routinely in the laboratory to generate LCLs from B cells [2]. LCLs are widely used in various types of studies, including those involving the disciplines of immunology, cellular biology, and genetics. This transformation results in changes in certain cellular properties, including gene expression [3], cell surface phenotyping, and cytokine production [4]. Extracellular nucleotides -e.g., adenosine 5'-triphosphate (ATP), adenosine 5'-diphosphate, uracil 5'-triphosphate, and uracil 5'-diphosphate -have various physiological effects in many cells, such as exocrine and endocrine secretion, neurotransmission, cell proliferation, cell differentiation, and programmed cell death that are mediated by P2 receptors, consisting of P2X and P2Y receptors [5]. P2X receptors are ligand-gated cation channels, of which seven receptor subtypes (P2X 1 to P2X 7 ) have been identified and cloned [6]. P2Y receptors, which are G-protein-coupled metabotropic structures, consist of eight cloned and functionally distinct subtypes: P2Y 1 , P2Y 2 , P2Y 4 , P2Y 6 , P2Y 11 , P2Y 12 , P2Y 13 , and P2Y 14 [5,7].
Blood cells express P2 receptors which regulate such responses as cell proliferation, differentiation, chemotaxis, cytokine release, immune and inflammatory responses [5,8]. In lymphocytes, ATP induces an increase in membrane permeability for cations and larger molecules [9,10], as well as cellular proliferation [11] and cell death through P2 receptors [12,13]. The precise nature of the expression and function of the P2 receptor subtypes have been investigated [14][15][16].
P2 receptors expressed in B cells have been investigated using electrophysiological, pharmacological, and immunocytochemical techniques, which have revealed the existence of P2 receptors [17], especially P2X [14,18]. However, the researchers in these studies failed to perform a quantitative analysis of P2 mRNA and used B cells from chronic lymphocytic leukemia (CLL) or LCLs, rather than pure B cells. Recently, the mRNA profile of the lymphocyte P2 receptor was subjected to quantitative analysis, but the B cells were not separated and not all subtypes were targeted [15,16].
In this study, we investigated the expression of P2 receptors in human B cells and in LCLs using quantitative reverse transcriptase-polymerase chain reaction (RT-PCR), Western blotting, and fluorimetric techniques to measure intracellular free Ca 2+ concentration ([Ca 2+ ] i ). We were able to determine the profile of the P2 receptor mRNA in these cells and monitor changes in [Ca 2+ ] i in response to P2 receptor activation. Our findings indicate the plasticity of P2 receptors in B cells during their transformation into LCLs.
Results
IgD and CD38 are cell-surface molecules that have been used widely to identify the B-cell phenotype during B-cell development. Like germinal center B cells, most EBVtransformed B cells were positive for CD38 but not for IgD [19,20]. The expression of IgD and CD38 molecules on primary B cells and EBV-transformed LCLs was evaluated by fluorescence-activated cell sorter (FACS) analysis. To generate LCLs, we cultured isolated B cells with the active EBV supernatant for 4 to 6 weeks, as described in Methods section. The primary B cells expressed IgD, but not CD38, and the LCLs expressed CD38, but not IgD (data not shown). This result is consistent with our previous findings [20].
P2 receptor mRNA quantification
The expression of P2 receptors in B cells and LCLs was determined using quantitative RT-PCR. The expression of the P2 receptor subtypes was compared among B cells, LCLs, and peripheral blood mononuclear cells (PBMCs). P2X 1 or P2Y 1 were used as a calibrator (i.e. the P2X receptor was expressed as a ratio of P2X 1 and the P2Y receptor as P2Y 1 ) in order to illustrate the expression of P2 receptors relative to each other. All P2X and P2Y receptor subtypes were detected in the B cells. Most of the P2 receptor subtypes had similar rates of expression within 1-or 2fold of each other with the exception of the P2X 3 and P2X 7 receptors, which were expressed in lower quantities (Figure 1, n = 4). P2X 7 receptor expression was significantly low compared to other P2X receptors (p < 0.05), with the exception of P2X 3 . The presence of P2X-and P2Y-receptor mRNA in the B cells is in agreement with the findings of previous lymphocyte studies using RT-PCR [15,16]. EBVinfected B cells were also examined because an in vitro transformation might alter the expression of receptors. The most abundant P2 receptor subtypes were P2X 4 , P2X 5 , and P2Y 11 ( Figure 2; n = 4). The expression of P2X 5 receptors in LCLs was significantly higher than other P2X receptors (p < 0.05; Figure 2). The P2 receptors in B cells were compared with those expressed in LCLs ( Figure 3, P2X 1 of B cells used as a P2X calibrator and P2Y 1 as a P2Y calibrator). The expression of the P2X 1 through to P2X 6 receptors and P2Y receptors in LCLs and B cells that had been infected with EBV for 2 weeks was significantly lower than in noninfected B cells (p < 0.01; Figure 3). However, the LCLs expressed a significantly larger number of P2X 7 receptors than B cells (p < 0.01; Figure 3). The expression of EBV-infected LCLs, which had been infected for more than 4 weeks and EBV-infected B cells, which had been infected for 2 weeks, yielded similar profiles and quantities. As a control, P2 receptors in PBMCs were quantified and these showed a different expression profile. In PBMCs, which are mainly monocytes and lymphocytes, P2X 4 , P2Y 6 , P2Y 11 , and P2Y 13 were the predominant P2 receptor subtypes (Figure 4, n = 4), and the expression rates for P2X 4 and P2Y 6 were significantly higher than other P2X or P2Y subtypes (p < 0.001). In addition, P2X 4 , P2X 7 , P2Y 6 , P2Y 11 , and P2Y 13 expression was significantly higher in PBMCs compared with B cells (p < 0.05), which may be the result of T cell/monocyte contamination in the PBMC preparation [15,16]. Therefore, the up-regulated P2X 7 receptor can be expected to have a physiological role during the transformation of B cells into LCLs.
Western blotting for P2 receptors
To investigate the correlation of mRNA with protein, we carried out Western blot analysis for P2X 1 , P2X 4 , P2X 7 , P2Y 1 , and P2Y 11 receptors, all of which had varying amounts of mRNA during EBV transformation (n = 4). The distribution of P2 receptors in B cells and LCLs is shown on the left panel ( Figure 5). The bands representing P2X 1 (60-kDa), P2X 4 (65-kDa), P2Y 1 (66-kDa), and P2Y 11 (50-kDa) receptors were more prominent in B cells than in LCLs, which correlates with the results of the mRNA quantitative analysis. As for the P2X 7 receptor, it was represented by a prominent 68-kDa band in LCLs and a faint band in B cells. This is consistent with the results of RT-PCR, which indicated that the expression of P2X 7 is higher in LCLs than in B cells. To compare protein loading, the blot was re-probed with anti-glyceraldehyde-3phosphate dehydrogenase (GAPDH) antibody (40-kDa) ( Figure 5, right).
Effect of ATP on intracellular free Ca 2+ concentration
Extracellular ATP is an effective modulator of [Ca2+]i, and its activities are mediated through P2 receptors [17]. To determine whether the P2 receptors examined by RT-PCR are functional, changes in [Ca2+]i in response to ATP were measured in B cells and LCLs. ATP increased [Ca2+]i in a dose-dependent manner in both cell types, and the potency of ATP was similar in both cells (Figure 6C). The mechanism leading to the intracellular Ca2+ response was examined further by repeating these experiments under Ca2+-free conditions. The B cells were treated with 1 mM ATP under Ca2+-free conditions, and the [Ca2+]i remained at or near the pre-agonist levels (Figure 7A). The peak [Ca2+]i was mostly abolished in B cells (n = 14, p < 0.0001) and in LCLs (n = 12, p < 0.0001) in the absence of Ca2+ (Figure 7B). These data suggest that an influx of Ca2+ is the major route by which B cells and LCLs respond to ATP stimulation.

Figure 1. Relative expression of P2 receptors in B cells. The relative expression of the P2X receptor genes (upper; P2X1, P2X2, P2X3, P2X4, P2X5, P2X6, and P2X7) and the P2Y receptor genes (lower; P2Y1, P2Y2, P2Y4, P2Y6, P2Y11, P2Y12, P2Y13, and P2Y14) in B cells is presented (n = 4). Expression was normalized to GAPDH. P2X receptors were calibrated by P2X1 and P2Y receptors were calibrated by P2Y1. Data are the mean ± SEM.
Discussion
In this study, we determined and compared mRNA expression levels for all known P2X and P2Y receptor subtypes on human B cells and LCLs. Quantitative RT-PCR was used to determine the gene expression profile for P2 receptors. This method was selected because selective agonists and antagonists for most of the P2 receptor subtypes are absent and real-time PCR has advantages over other methods, such as requiring only a small number of cells and being one of the most reliable methods of determining the amount of RNA. This is the first study to show the expression of P2 receptors using mRNA from healthy human B cells. In these cells, most of the P2X and P2Y receptor subtypes had 2- fold expression with the exception of P2X 3 and P2X 7 receptors. In the studies of the P2X receptor, the P2X 1 , P2X 2 , P2X 4 , and P2X 7 receptors were found in human B cells by an immunocytochemical assay [14] and the nondesensitizing cation channels activated by ATP, which is a feature of P2X 7 receptor, were measured using electrophysiological methods [18]. The different results of P2X subtype expression might be due to the different B cells, or variations in P2X receptor expression [21]. B cells transformed by EBV [18] or malignant B cells [14] were used in previous studies, while normal B cells were used in the present study. In addition, it is possible that there might be differences in the transcription, translation, and function of P2X receptors. The different P2X 7 expression levels may be because P2X 7 receptor might be up-regulated in CLLs [10] and that some lymphoid cells do not express P2 receptors (P2X 1 , P2X 4 , P2X 7 , P2Y 1 , and P2Y 11 ) compared by Western blotting in B cells and LCLs [11]. In addition, B cells did not undergo the typical increase in membrane permeability to ATP and were not susceptible to ATP-mediated cytotoxicity [8,22]. Although the P2Y receptors in B cells were investigated, it was not enough to compare the expression of subtypes. P2Y subtypes were detected by RT-PCR in previous studies, albeit only in lymphocytes [15,16].
In LCLs and B cells infected by EBV for 2 weeks, the predominant P2 receptor subtypes were P2X 1 , P2X 4 , P2X 5 , P2X 7 , and P2Y 11 . The expression of most P2 receptors was suppressed during the EBV-induced B-cell transformation into LCLs, however, the suppression of P2X 1 , P2X 4 , P2X 5 and P2Y 11 receptors was not as great as for other subtypes. Only P2X 7 receptor was significantly up-regulated. Western blotting showed similar patterns for P2X 1 , P2X 4 , P2X 7 , P2Y 1 , and P2Y 11 , as well as for P2X 2 , P2X 5 , P2Y 2 , and P2Y 6 (data not shown). Our results suggest that there is some plasticity in P2-receptor expression in B cells. This possibility has been investigated in many tissues and cells, including the urinary bladder, heart, vessels, gut, neurons, and cancer cells [5]. In immune cells, plasticity in P2Y 2receptor expression was studied during myeloid leukocyte differentiation [23]. Sensitivity to ATP in thymocytes changes with the stage of maturation [24,25], and P2X 7receptor expression can be modulated by diverse stimuli [26]. The plasticity of P2 receptors may be due to changes in their exposure to ATP or EBV-induced changes in gene expression. In vivo, ATP is often released by blood cells into the extracellular environment through nonlytic mechanisms. Some leakage of cytoplasmic ATP may also occur as a consequence of damage to the cell or acute cell death. Platelet-dense granules comprise another relevant source of ATP [8]. In vitro, however, the sources of ATP for B cells are limited to nonlytic mechanisms or leakage of cytoplasmic ATP. The EBV-induced transformation of B cells into LCLs results in some B cells dying, which results in ATP being released into the extracellular compartment, where it continually degrades. Thus, the concentration of ATP may be high in the early stages of in vitro transformation and lower in later stages. The expression of P2 receptors may be affected by this fluctuation in environmental ATP.
In PBMC populations that include lymphocytes and monocytes, the dominant P2 receptor subtypes were P2X 4 , P2Y 6 , P2Y 11 , and P2Y 13 . An mRNA expression assay revealed that the P2Y 1 , P2Y 2 , P2Y 4 , and P2Y 6 receptors were expressed in lymphocytes and monocytes and that the P2Y 6 receptor was expressed in relatively higher amounts than the other P2Y receptor subtypes [16]. P2X 4 and P2Y 12 receptors were expressed in relatively large amounts in lymphocytes and P2X 4 , P2Y 2 , and P2Y 13 receptors in monocytes [15]. The expression of P2X 4 , P2Y 6 , and P2Y 13 receptors correlated with the findings of previous studies; however, the expression of the P2Y 11 receptor was somewhat different. It is possible that other lymphocytes or monocytes expressed these subtypes predominantly. Alternatively this may reflect a variation in cohorts or contamination with other types of blood cells.
To date, these blood cells have not been investigated well enough to compare P2 receptor subtypes, although some of them have been surveyed [5,8,16,27,28]. Because the P2 receptor profiles of blood cells are not completely known, it is difficult to determine which P2 receptor subtypes have been expressed dominantly in PBMCs until now.
Although the P2Y 8 and P2Y 10 receptors were examined with other subtypes, the findings for these subtypes were omitted because they are not included among the classical P2Y receptor subtypes in humans. We found the mRNA for these subtypes in B cells, LCLs, and PBMCs, indicating that they are prominent in these cells. In previous studies of human P2 receptors, the P2Y 8 and P2Y 10 receptors were expressed in HL60 [29] and included in the human genome [30].
Conclusion
In this study, the expression of P2X and P2Y receptors in human B cells and LCLs was investigated. P2-receptor expression was suppressed during the EBV-induced transformation of B cells, except for the P2X 7 subtype, which was up-regulated. Extracellular ATP induced an increase in [Ca 2+ ] i in B cells and LCLs via P2 receptors. Therefore, these findings reveal the exact P2 receptor profiles and the effects of purinergic stimuli on B cells and suggest some plasticity in the expression of the P2 receptor phenotype.
This will help us explain the nature and effect of P2 receptors on B cells and their role in altering the characteristics of LCLs.
B-cell purification and generation of EBV-transformed LCLs
Ten 240-mL packs of blood were obtained from the Central Red Cross Blood Center (Seoul, Korea). This blood was not appropriate for transfusion because of slightly elevated alanine aminotransferase levels. We used it to isolate PBMCs, using Ficoll-Hypaque gradient centrifugation (Amersham Biosciences, Uppsala, Sweden) and B cells, which were purified (>95% CD20 + ) using a B-cell isolation kit and a MACS separator (Miltenyi Biotec, Bergisch Gladbach, Germany). The immortalization of B cells was achieved by EBV infection [2,[32][33][34]. The B95-8 supernatant was added to the purified B cells in a culture flask (1 × 10 6 cells/mL). Following a 2-hour incubation period at 37°C, the same volume of medium and 0.5 μg/mL cyclosporine A [35] were added. The cultures were incubated for 4 to 6 weeks until clumps of EBV-infected B cells were visible. EBV-transformed LCLs were cultured in RPMI-1640 medium (GIBCO/BRL, Grand Island, NY, USA) supplemented with 10% heat-inactivated fetal bovine serum (FBS) (BioWhittaker, Walkerville, MD, USA) and 1% (v/v) antibiotics/antimycotics that included penicillin G (100 IU/mL), streptomycin (100 μg/mL), and amphotericin B (0.25 μg/mL). The cells were cultured in a humidified atmosphere of 5% CO 2 and 95% air at 37°C. The EBV stock was prepared from an EBV-transformed B95-8 marmoset cell line. These cells were grown in an RPMI-1640 medium supplemented with 10% FBS, and infectious culture supernatants were harvested and stored at -80°C until needed. Thus, each pack of blood was used to produce B cells, EBV-infected B cells, LCLs, and PBMCs for use in this experiment. The study was approved by the Institutional Review Board at the National Institute of Health, Korea Center for Disease Control and Prevention.
Quantitative real-time RT-PCR
The total cellular RNA was collected from human B cells, LCLs, and PBMCs. RNA was extracted using the RNeasy mini kit (Qiagen, Valencia, CA, USA) according to the manufacturer's instructions and stored at -80°C until used. Quantitative RT-PCR was performed to determine the expression of the P2 receptor genes (Table 1). To generate cDNA, we induced reverse transcription of the total RNA using oligo-dT15 (Roche Diagnostics GmbH, Mannheim, Germany) and reverse transcription polymerase (Promega, Madison, WI, USA). Oligonucleotide primers (Bioneer, Daejeon, Korea) were designed using Primer Express software (Applied Biosystems, Foster City, CA, USA), based on sequences obtained from the GenBank database, and tested for quality and efficiency. Primer efficiency was established to ensure optimal amplification of our samples. Serial dilutions of synthetic cDNAs were carried out according to the supplier's instructions to define relative changes in quantity. Real-time PCR was performed using SYBR Green PCR Master mix (Applied Biosystems) in an ABI PRISM 7900 HT Sequence Detection System (Applied Biosystems). The amplification program included activation of AmpliTaq Gold at 95°C for 10 minutes, followed by 45 cycles of 2-step PCR with denaturation at 95°C for 15 seconds and annealing/extension at 60°C for 1 minute. Amplifications were followed by a melting curve analysis. A negative control (no cDNA template) was run simultaneously with every assay. The PCR from each cDNA sample was run in triplicate. Constitutively expressed GAPDH was selected as an endogenous control to correct any potential variation in RNA loading or in the efficiency of the amplification reaction. Results are presented as relative fold changes by using GAPDH as a reference and P2X1 or P2Y1 as a calibrator and applying the formula 2^−ΔΔCt [36].

Fluorescent video images were averaged, digitized (0.3-1.0 Hz), and analyzed using Metafluor acquisition and analysis software (Universal Imaging Corp, West Chester, PA, USA). Individual cells in the field of view were selected, and paired 340/380 images were subtracted from the background. The Fura-2 fluorescence ratios, indicative of changes in [Ca2+]i, were calculated, and their changes were extracted over time. All experiments were performed at room temperature, and the external solution and drugs were perfused at a rate of 2 mL/min by gravity. Data were expressed as the ratio of fluorescence due to excitation at 340 nm and at 380 nm (F340:380). In some experiments, a nominally Ca2+-free medium was used, which was identical in composition except for the omission of CaCl2.
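A minimal sketch of the 2^−ΔΔCt relative-quantification step described in the RT-PCR paragraph above is given below; the Ct values are hypothetical, with GAPDH as the endogenous reference and P2X1 as the calibrator, as in the text.

```python
# Minimal sketch of relative quantification with the 2^-ΔΔCt method.
def ddct_fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative expression of a target gene vs. a calibrator gene (2^-ΔΔCt)."""
    d_ct_sample = ct_target - ct_ref              # ΔCt of the gene of interest
    d_ct_calibrator = ct_target_cal - ct_ref_cal  # ΔCt of the calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Example with hypothetical Ct values: P2X4 relative to the P2X1 calibrator,
# both normalized to GAPDH in the same sample.
fold = ddct_fold_change(ct_target=24.1, ct_ref=18.0,        # P2X4, GAPDH
                        ct_target_cal=25.3, ct_ref_cal=18.2)  # P2X1, GAPDH
print(f"P2X4 expression relative to P2X1: {fold:.2f}-fold")
```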
Statistics
Data are presented as the mean ± SEM, and n indicates the number of independent experiments or the number of cells used to measure [Ca 2+ ] i . Statistical significance was determined using one-way ANOVA or Student's t test; p < 0.05 was considered significant.
|
v3-fos-license
|
2020-09-17T13:06:16.341Z
|
2020-09-01T00:00:00.000
|
221747544
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-3425/10/9/634/pdf",
"pdf_hash": "8e36513a275a1d309997260a448e29b27a618db3",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44569",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"sha1": "598660efca728f3cd290b8e2c292c203b87d70cd",
"year": 2020
}
|
pes2o/s2orc
|
Levetiracetam Reduced the Basal Excitability of the Dentate Gyrus without Restoring Impaired Synaptic Plasticity in Rats with Temporal Lobe Epilepsy
Temporal lobe epilepsy (TLE), the most common type of focal epilepsy, affects learning and memory; these effects are thought to emerge from changes in synaptic plasticity. Levetiracetam (LEV) is a widely used antiepileptic drug that is also associated with the reversal of cognitive dysfunction. The long-lasting effect of LEV treatment and its participation in synaptic plasticity have not been explored in early chronic epilepsy. Therefore, through the measurement of evoked field potentials, this study aimed to comprehensively identify the alterations in the excitability and the short-term (depression/facilitation) and long-term synaptic plasticity (long-term potentiation, LTP) of the dentate gyrus of the hippocampus in a lithium–pilocarpine rat model of TLE, as well as their possible restoration by LEV (1 week; 300 mg/kg/day). TLE increased the population spike (PS) amplitude (input/output curve); interestingly, LEV treatment partially reduced this hyperexcitability. Furthermore, TLE augmented synaptic depression, suppressed paired-pulse facilitation, and reduced PS-LTP; however, LEV did not alleviate such alterations. Conversely, the excitatory postsynaptic potential (EPSP)-LTP of TLE rats was comparable to that of control rats and was decreased by LEV. LEV caused a long-lasting attenuation of basal hyperexcitability but did not restore impaired synaptic plasticity in the early chronic phase of TLE.
Introduction
Epilepsy is one of the most common and widespread neurological disorders, affecting more than 65 million people around the world [1]. Epilepsy is characterized by spontaneous and recurrent seizures that reflect temporary disruption of brain function due to excessive abnormal discharge of cortical neurons [2]. Temporal lobe epilepsy (TLE), the most common type of focal epilepsy, is the form in which epileptogenic activity originates from structures of the temporal lobe.

Animal care and experimental procedures followed the Mexican official norm for the care and use of laboratory animals (NOM-062-ZOO-1999) published in 2001. The protocol was approved by the Comité Institucional para el Cuidado y Uso de los Animales de Laboratorio (CICUAL) for animal experimentation (CICUAL INP-064/2015). The following experimental procedures were employed (Figure 1): SE (see Section 2.1) was induced in the EPI groups (EPI and EPI + LEV groups), while nonepileptic controls (CONT and CONT + LEV groups) were kept under similar conditions as the EPI rats, except that SE was not induced. Three weeks after SE induction, animals from the EPI and EPI + LEV groups were video monitored to record the first spontaneous behavioral seizure (see Section 2.2). After one seizure was detected in the video monitoring, EPI + LEV and CONT + LEV animals were implanted subcutaneously with an osmotic minipump (see Section 2.3). The minipumps delivered LEV (300 mg/kg/day, 10 µL/h) for 7 days. After that period, the minipumps were surgically removed. Electrophysiological recordings were conducted during the washout period, i.e., four days after the end of LEV treatment (see Section 2.4). The electrophysiological measurement procedure began with the positioning of the stimulation and recording electrodes, followed by a period of stabilization (120 min). After that, input/output (I/O) curves, paired-pulse (PP) and LTP induction protocols were implemented for the electrophysiological measurements.

Figure 1. Experimental design. At time 0, status epilepticus (SE) was induced in male Wistar rats via lithium-pilocarpine administration. Three weeks after SE induction, rats were video monitored until the first spontaneous behavioral seizure was detected and then treated with levetiracetam (300 mg/kg/day) for one week. Four days after the end of the treatment, electrophysiological experiments were conducted; finally, animals were intracardially perfused, and brain dissection was performed.
Pilocarpine-Induced Status Epilepticus
SE was induced by systemic injection of pilocarpine, as previously described [35][36][37]. Briefly, animals were pretreated with lithium chloride (127 mg/kg, i.p.; Sigma-Aldrich, Ciudad de México, México) 22 h before pilocarpine administration. On the day of SE induction, animals were injected with scopolamine methyl-bromide (1 mg/kg, i.p.; Sigma-Aldrich, México) to avoid peripheral cholinomimetic effects, and 30 min later, they received a single dose of pilocarpine hydrochloride (30 mg/kg, i.p.; Sigma-Aldrich, México). Behavioral seizures were scored according to the Racine scale [38]; SE was defined as sustained convulsive behavior (stage 4 or 5 on the Racine scale) for more than 30 min [20]. Ninety minutes after SE began, rats were administered an intramuscular (i.m.) injection of 5 mg/kg of diazepam (PISA, Ciudad de México, México) and were placed on an ice bed for 1 h to reduce the hyperthermia produced by SE. CONT rats received saline solution (NaCl 0.9%) instead of pilocarpine. A second dose of diazepam was administered eight hours later, and then the rats received a rehydrating injection of saline solution (5 mL, 0.9%, subcutaneous (s.c.)) on the first night. Finally, the rats were housed overnight in a room at 17 ± 0.5 °C. Beginning one day after SE, the room temperature was restored to 22 ± 2 °C.
Video Monitoring of Spontaneous Seizures
In this model, spontaneous seizures appear approximately three weeks after SE [37]. Therefore, video monitoring of seizures began 3 weeks after SE induction. Animals were housed in individual polycarbonate cages and were video monitored to record the occurrence of spontaneous seizures. Video monitoring was performed with four cameras (Steren Model CCTV-970, Mexico City, México), and the recordings were collected during the light period (08:00 to 17:00 h) [36,39]. The videos were analyzed by trained observers using the fast-forward (6×) component of the system. When seizure-like activity was detected, the video was reversed to the start of the behavior and examined at real-time speed. An animal was considered to have a seizure when the Racine score reached 4 or 5 points [37,38].
Levetiracetam Treatment
Two days after the first behavioral spontaneous seizure was observed, an ALZET ® osmotic minipump was implanted subcutaneously for one week to provide subchronic treatment with LEV (300 mg/kg/day). The LEV dose was chosen based on previous experiments in EPI rats [32,36]. LEV treatment via this route has been shown to lead to adequate LEV concentration in blood and proper LEV washout after removing the osmotic minipump [36]. To fill the osmotic minipump chambers, LEV was extracted from tablets (Keppra ® ). Briefly, two tablets of LEV were dissolved in 3 mL of saline solution (0.9%). Then, the mixture was sonicated and centrifuged for 15 min at 3000 rpm (1400× g centrifuge, Hermle Labnet 2326 K, rotor 220.72), the supernatant was filtered with a Corning 28 mm syringe filter of 0.45 µm, and finally, the osmotic minipumps were filled according to the manufacturer's instructions.
Electrophysiological Recordings
On the recording day, four days after the removal of the osmotic minipumps, rats were anesthetized with urethane (1.3 g/kg; i.p.) and, after complete loss of reflexes, placed on a stereotaxic apparatus. A heating pad was used to maintain body temperature at 37 ± 0.5 °C. To register local field potentials, a stainless steel concentric electrode with a tip diameter of 250 µm was placed in the dorsal DG (anteroposterior (AP) −3.5 mm, mediolateral (ML) 2.0 mm and dorsoventral (DV) −3.0 to −3.4 mm from the dura), and another stainless steel concentric bipolar stimulation electrode was placed in the perforant path (AP −7.2 mm, ML 4.1 mm and DV −2.4 to −3.2 mm from the dura) [40] (Figure 2). Final adjustments in the DV coordinates of both electrodes were made to produce an evoked potential of optimal morphology; then, a stabilization protocol with a duration of 2 h was performed: series of 5 single square pulses (0.1 ms in duration, 1500 µA intensity) were delivered to the perforant path at 15 min intervals using a Grass S-88 stimulator and a PSIU-6 constant current isolator (Grass Technologies, West Warwick, RI, USA). Evoked local field potentials were amplified (gain: 200), digitized (10 kHz sampling rate) and stored in a Biopac MCE 100C system (Biopac Systems Inc., Goleta, CA, USA). Briefly, evoked local field potentials recorded in the DG were composed of an initial positive element that corresponded mainly to the excitatory postsynaptic potential (EPSP) and a negative component that represented the sharp population spike (PS) (Figure 3, top right). The EPSP magnitude is associated with synaptic dendritic activity, and the PS magnitude represents the number of granule cells producing action potentials as a result of the EPSPs (Figure 3, top right). The magnitude of the EPSP was measured as the slope of the rising phase of the potential prior to PS onset. The PS amplitude was calculated as the vertical distance from the lowest value of the negative component to a line connecting the two positive peaks (Figure 3, top right) [41,42].

Input/Output Curves
I/O curves were constructed using stimulation intensities ranging from 50 to 1500 µA (monophasic pulse duration: 0.1 ms). Averaged evoked potentials (n = 5) for each intensity were used for quantification. Only animals in which the maximum applied intensity evoked EPSPs of 3 mV or higher were used for further analysis. Both EPSP and PS I/O curves reflect the excitability of the circuit under basal conditions by evaluating the relationship between the intensity of the stimulus supplied to the perforant path and the magnitude of the electrophysiological responses in the DG. Additionally, the latencies of the EPSP and PS, at intensities that produced 50% or 100% of the maximal PS, were measured from the stimulation artifact to EPSP onset, PS onset, PS peak or EPSP peak (Figure 5, right). We define EPSP onset as the point at which the amplitude increases from the baseline recording; PS onset was defined as the point at which the amplitude decreases from the ascending component of the EPSP [41,43].
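To make the two waveform measurements concrete, the sketch below extracts an EPSP slope and a PS amplitude from a synthetic evoked potential. The waveform, the fitting window, and the analysis window are illustrative assumptions, not the acquisition settings of the recording system described above.

```python
import numpy as np

# Sketch of the two measurements described above, applied to one evoked potential.
fs = 10_000                         # sampling rate, Hz (as in the text)
t = np.arange(0, 0.03, 1 / fs)      # 30 ms sweep (assumed length)

def epsp_slope(v, t, t_start, t_stop):
    """EPSP slope (V/s): linear fit over the rising phase before PS onset."""
    idx = (t >= t_start) & (t <= t_stop)
    return np.polyfit(t[idx], v[idx], 1)[0]

def ps_amplitude(v, t, window):
    """PS amplitude: distance from the trough to the line joining the two
    positive peaks that flank it, as described in the Methods."""
    idx = (t >= window[0]) & (t <= window[1])
    seg, ts = v[idx], t[idx]
    i_min = np.argmin(seg)                 # trough of the population spike
    i_p1 = np.argmax(seg[:i_min])          # positive peak before the trough
    i_p2 = i_min + np.argmax(seg[i_min:])  # positive peak after the trough
    # value of the connecting line at the trough time
    line_at_trough = np.interp(ts[i_min], [ts[i_p1], ts[i_p2]],
                               [seg[i_p1], seg[i_p2]])
    return line_at_trough - seg[i_min]

# Synthetic example: an EPSP envelope with a superimposed negative-going PS
v = 3e-3 * np.exp(-((t - 0.008) / 0.004) ** 2)    # EPSP envelope (V)
v -= 2e-3 * np.exp(-((t - 0.008) / 0.0008) ** 2)  # population spike (V)
print(f"EPSP slope: {epsp_slope(v, t, 0.003, 0.005):.3f} V/s")
print(f"PS amplitude: {ps_amplitude(v, t, (0.005, 0.012)) * 1e3:.2f} mV")
```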
Paired-Pulse Depression and Facilitation
Field potentials were evoked by PP stimulation in the perforant path and recorded in the DG at intensities that produced 20, 50 and 100% of the maximal PS from the I/O curve, with interpulse intervals (IPIs) of 10, 20, 30, 70 and 250 ms. PP stimulation examines both facilitation (PPF) and depression (PPD) [41]. Data are represented as a PP percentage ((pulse 2 amplitude/pulse 1 amplitude) × 100); a percentage < 100 reflects depression, and a percentage > 100 reflects facilitation.
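The paired-pulse percentage defined above can be computed as in the following sketch; the amplitudes and interpulse intervals are invented example values.

```python
# Sketch of the paired-pulse measure: PP% = (pulse 2 / pulse 1) * 100.
def paired_pulse_percent(amp_pulse1, amp_pulse2):
    return 100.0 * amp_pulse2 / amp_pulse1

# Hypothetical PS amplitudes (mV) at three interpulse intervals
for ipi_ms, a1, a2 in [(10, 4.0, 1.2), (70, 4.1, 5.0), (250, 4.0, 4.2)]:
    pp = paired_pulse_percent(a1, a2)
    state = "depression" if pp < 100 else "facilitation"
    print(f"IPI {ipi_ms:>3} ms: PP = {pp:5.1f}%  ({state})")
```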
Long-Term Potentiation
To induce LTP, high-frequency stimulation was delivered to the perforant path, and the response was examined in the DG. Tetanic stimulation consisted of three train pairs at 400 Hz, four stimuli per train, separated by 200 ms between each four-pulse burst and 10 s between each train pair, at a maximal stimulation of 1500 µA. Pre- and post-train stimuli at an intensity that produced 50% of the maximal PS were delivered to evaluate LTP; responses were recorded 15 min before and 5 min after tetanic stimulation, and LTP measurements continued every 15 min for 125 min (modified from Sánchez-Huerta et al. [41]). The EPSP slope and the PS magnitude are expressed as percentages of baseline (pre-train) values.
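Expressing post-train responses as a percentage of the pre-train baseline, as described above, amounts to the simple normalization sketched below with invented example values.

```python
import numpy as np

# Sketch of normalizing post-train responses to the pre-train baseline.
baseline_ps = np.array([2.1, 2.0, 2.2, 1.9])       # PS amplitude pre-train (mV), assumed
post_ps     = np.array([3.4, 3.1, 2.9, 2.8, 2.6])  # PS amplitude at post-train time points (mV), assumed

baseline_mean = baseline_ps.mean()
ps_percent_of_baseline = 100.0 * post_ps / baseline_mean
print("PS-LTP (% of baseline):", np.round(ps_percent_of_baseline, 1))
```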
Histological Verification
At the end of the electrophysiological experiments, animals were transcardially perfused with saline solution (0.9%) followed by buffered paraformaldehyde (4%). Then, the rats were decapitated, and their brains were dissected and postfixed at room temperature for 12 h in the same fixative medium. Next, the brains were transferred to sucrose (30%; diluted in phosphate buffer) until complete infiltration. Finally, serial coronal sections (150 µm thick) were cut with a vibratome (Electron Microscopy Sciences USA, model OTS-4000). The slices were mounted on gelatinized slides and stained with cresyl violet (Sigma-Aldrich). Cytoseal (Electron Microscopy Sciences, Hatfield, PA, USA) was added, and the brain sections were observed using a clear field microscope (Olympus BX51) equipped with a digital video camera (mbf CX9000) to verify the location of the electrodes. Data were collected and processed only from those animals in which the stimulation and recording electrodes were correctly placed.
Statistical Analysis
Latencies were analyzed using a two-way analysis of variance (ANOVA) with two between-subject factors: EPI condition and treatment. Linear regression analysis of PS amplitude as a function of the EPSP slope curve was performed, and Student's t-test was used to compare slopes. Statistical analyses were performed with SigmaStat 3.5 software (SigmaStat 3.5 software, Systat Software, Inc. San Jose, CA, USA). Data from the I/O curve, PPF, PPD and LTP were analyzed by three-way repeated-measures (RM) ANOVAs (an ad hoc Excel worksheet was used), with one within-subject factor (intensity, IPI or time, respectively) and two between-subject factors (EPI condition and treatment). When appropriate, a Student-Newman-Keuls (S-N-K) test was used as a post-hoc comparison test. Finally, p-values of 5% or lower were considered to be statistically significant. All data are expressed as the mean ± standard error of the mean (S.E.M.).
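For the comparison of regression slopes mentioned above, one common formulation of the Student's t-test (assumed here, since the exact procedure is not specified) compares the two fitted slopes using their standard errors. The sketch below applies it to invented stand-in data for the PS-amplitude-versus-EPSP-slope curves.

```python
import numpy as np
from scipy import stats

# Sketch: compare two regression slopes with t = (b1 - b2) / SE(b1 - b2),
# df = n1 + n2 - 4 (one standard formulation; data below are invented).
def slope_and_se(x, y):
    res = stats.linregress(x, y)
    return res.slope, res.stderr, len(x)

def compare_slopes(x1, y1, x2, y2):
    b1, se1, n1 = slope_and_se(x1, y1)
    b2, se2, n2 = slope_and_se(x2, y2)
    t = (b1 - b2) / np.sqrt(se1 ** 2 + se2 ** 2)
    df = n1 + n2 - 4
    p = 2 * stats.t.sf(abs(t), df)
    return b1, b2, t, p

rng = np.random.default_rng(0)
epsp = np.linspace(0.5, 3.0, 10)                      # EPSP slope values (illustrative)
ps_cont = 1.0 * epsp + rng.normal(0, 0.1, epsp.size)  # control-like excitability curve
ps_epi  = 2.5 * epsp + rng.normal(0, 0.1, epsp.size)  # epileptic-like excitability curve
b1, b2, t, p = compare_slopes(epsp, ps_cont, epsp, ps_epi)
print(f"slopes: {b1:.2f} vs {b2:.2f}, t = {t:.2f}, p = {p:.4f}")
```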
Basal Excitatory Synaptic Transmission: Input/Output Curve
The evoked local field potentials were recorded in the DG of the hippocampus (Figure 2), and they were measured as the EPSP slope and the PS amplitude ( Figure 3, top). Analysis of the I/O curve over a range of stimuli represents the state of synaptic basal excitatory transmission [41]. At all intensities tested, the magnitude of the EPSP slope, associated with synaptic dendritic activity, was not affected by TLE or treatment ( Figure 3, top), and three-way RM ANOVA did not reveal significant differences in the main effects of treatment or EPI condition. However, as a consequence of the stimulus-response relation, a significant difference in the within-subject factor (intensity) was found (F 15,330 = 52.63, p < 0.01). Nevertheless, the statistical analysis did not reveal a significant interaction among treatment, EPI condition and intensity factors.
In contrast, the three-way RM ANOVA for the I/O curve of PS amplitude revealed significant differences in the EPI condition (F 1,22 = 24.63, p < 0.01), treatment (F 1,22 = 6.86, p < 0.05) and intensity (F 15,330 = 39.19, p < 0.01) factors, without an interaction among these three factors. Post-hoc S-N-K tests revealed that the EPI and EPI + LEV rats showed increased PS amplitude compared with the CONT and CONT + LEV groups. This increase in PS magnitude was significant at stimulus intensities from 100 to 1500 µA for EPI rats and from 200 to 1500 µA for the EPI + LEV group (Figure 3, bottom). Interestingly, when the EPI and EPI + LEV groups were compared, a significantly higher PS magnitude at intensities from 800 to 1500 µA was observed in the EPI rats vs. the EPI + LEV group (Figure 3, bottom).
PS amplitude as a function of EPSP slope is represented in Figure 4; the excitability curves were similar for CONT and CONT + LEV rats. The linear regression analysis revealed significant correlations with positive slopes in both groups (CONT: b = 1.00, R 2 = 0.87, p < 0.01; CONT + LEV: b = 0.96, R 2 = 0.86, p < 0.01), without a significant difference in their slopes. EPI groups also showed significant correlations (EPI: b = 2.47, R 2 = 0.97, p < 0.01; EPI + LEV: b = 1.91, R 2 = 0.94, p < 0.01) without statistical significance in their slopes. Interestingly, the EPI condition corresponded to a shift to the left of the excitability curve compared with the control condition; higher slope values were found in EPI animals than in CONT animals (EPI vs. CONT, t 8 = 5.06; p < 0.01; EPI + LEV vs. CONT, t 8 = 2.34; p < 0.05). Although LEV treatment did not prevent this shift in the curve in EPI rats, the higher PS amplitudes observed in the EPI group were absent in EPI + LEV rats ( Figure 4).
The latencies of EPSP onset and the EPSP peak were similar among all groups (Figure 5) at intensities that produced 50% or 100% of the maximal PS according to the I/O curve. Two-way ANOVA failed to detect any main effects of treatment on EPSP onset at 50% and 100% and for the EPSP peak at 50% and 100% of maximal PS from the I/O curve. There were also no significant main effects of the EPI condition factor or the treatment × condition interaction for EPSP onset or EPSP peak latencies at either intensity tested. In contrast, two-way ANOVA revealed significant main effects of EPI condition on the latencies of PS onset (50%: F 1,25 = 27.33, p < 0.01; 100%: F 1,25 = 18.65, p < 0.01) and PS peak (50%: F 1,25 = 15.13, p < 0.01; 100%: F 1,25 = 11.61, p < 0.01) for the 50% and 100% maximal responses. The post hoc S-N-K tests detected significant differences: the EPI and EPI + LEV groups showed shorter latencies for PS onset and PS peak than both the CONT and CONT + LEV groups (Figure 5). There were no significant main effects of treatment or significant effects of the treatment × condition interaction for the PS onset and PS peak latencies at either intensity tested.
Short-Term Plasticity: Paired-Pulse Facilitation and Depression
Paired stimulation of the perforant path produces, depending on the IPI, facilitation or depression of the evoked field potentials recorded in the DG. These synaptic changes are referred to as short-term plasticity, where PPF reflects the pre-and postsynaptic modulation; in turn, PPD reveals the integrity of the local inhibitory circuits (GABAergic interneurons) [44]. Although PP stimuli were delivered at intensities that produced 20, 50 and 100% of the maximal PS according to the I/O curve, the profiles of facilitation and depression were better established at 100% of the maximal response; thus, this intensity was chosen for analysis.
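As a point of reference, facilitation and depression are typically quantified as a paired-pulse ratio, the second response expressed as a percentage of the first. The short sketch below illustrates this convention with invented amplitude values; it is not derived from the recorded data.

```python
# Illustrative computation of the paired-pulse ratio (PPR): the second response
# as a percentage of the first, with PPR > 100% read as facilitation (PPF) and
# PPR < 100% as depression (PPD). Amplitudes below are invented examples.
def paired_pulse_ratio(ps1_mv, ps2_mv):
    return 100.0 * ps2_mv / ps1_mv

example = {10: (4.0, 2.0), 30: (4.0, 5.2), 70: (4.0, 6.0), 250: (4.0, 3.4)}  # IPI (ms) -> (PS1, PS2)
for ipi, (ps1, ps2) in example.items():
    ppr = paired_pulse_ratio(ps1, ps2)
    label = "PPF" if ppr > 100 else "PPD" if ppr < 100 else "no change"
    print(f"IPI {ipi:>3} ms: PPR = {ppr:5.1f}% ({label})")
```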
Paired stimulation at different IPIs resulted in a facilitation/depression pattern in the CONT and CONT + LEV groups ( Figure 6). The three-way RM ANOVA revealed significant differences in the EPI condition, the between-subject factor (F 1,25 = 9.69, p < 0.01) and in IPI, the within-subject factor (F 4,100 = 35.65, p < 0.01), but not in the treatment factor or in the IPI × treatment × EPI condition interaction. The S-N-K test revealed significant differences in CONT groups compared with the EPI groups at short (10 and 20 ms) and intermediate (30 and 70 ms) IPIs ( Figure 6). In the CONT group, PPD was observed at short IPIs (10 and 20 ms) and in the longer interval (250 ms), and PPF was observed at intermediate intervals (30 and 70 ms), reaching the maximal facilitation with 70 ms IPI. This triphasic pattern (PPD-PPF-PPD) was not observed in EPI animals. EPI rats did not show PPF at intermediate IPIs; instead, PPD was observed at 30 ms intervals, and neither facilitation nor depression was observed at an IPI of 70 ms. Interestingly, the EPI group showed a higher PPD at short IPIs (10 and 20 ms) than the CONT groups. This altered pattern in short-term plasticity caused by epilepsy was not modified by LEV treatment, as EPI + LEV rats showed the same profile of short-term plasticity as EPI rats.
Long-Term Plasticity
LTP is a long-lasting increase in the strength of synaptic transmission [45]. Here, LTP was elicited in the DG by high-frequency stimulation of the perforant path. After delivering tetanic stimulation, a modest (~25%) increase in the EPSP slope was apparent in the CONT, CONT + LEV and EPI groups (Figure 7, top), and this increase was of smaller magnitude in the EPI + LEV group (main effect of treatment: F 1,22 = 36.97, p < 0.01 and S-N-K test). This mild potentiation of the evoked EPSP slope decayed over time and was statistically significant for all groups at 95, 110 and 125 min compared with 5 min post-train (time factor: F 8,176 = 5.24, p < 0.01 and S-N-K test). Three-way RM ANOVA did not reveal any significant differences for the main effects of the EPI condition or the EPI condition × treatment × time interaction.
In contrast to the LTP of the EPSP slope, the CONT and CONT + LEV groups had robust (~250%) LTP in the PS magnitude after the train; both EPI groups also had PS-LTP, albeit to a lesser degree (~180%; Figure 7, bottom). This difference between the CONT groups and the EPI groups was significant, and three-way RM ANOVA revealed significant main effects of EPI condition (F 1,21 = 11.17, p < 0.01). The LTP of PS magnitude also declined over time in all groups (time factor: F 8,168 = 7.86, p < 0.01), and the S-N-K test revealed significant differences from 50 to 125 min after tetanic stimulation compared with 5 min post-stimulation. However, LEV treatment was not able to prevent the alteration in PS-LTP caused by epilepsy, and no significant differences were detected by three-way RM ANOVA for the main effects of treatment or the epileptic condition × treatment × time interaction.
Discussion
In this study, we characterized the alterations in basal excitability, facilitation, depression and LTP in the DG of the hippocampus of rats with early chronic epilepsy and determined the long-lasting effect of LEV treatment. The main findings of this research were that, in the I/O curve, EPI animals presented an increase in the amplitude of PS and a reduction in the onset- and peak-PS latencies with respect to nonepileptic groups. Interestingly, LEV treatment partially reduced this increase but did not lower PS amplitude to CONT levels. In turn, TLE caused an augmentation in PPD without showing PPF. Nevertheless, LEV treatment in EPI rats did not alleviate such alterations. Finally, animals in the CONT, CONT + LEV and EPI groups showed mild EPSP-LTP with a decrease in the EPI + LEV-treated group. With respect to PS-LTP, nonepileptic groups showed a robust response that was reduced in EPI animals; LEV treatment was not able to restore this reduction in EPI rats. It is important to mention that all our results reflect the semipermanent changes caused by LEV in the neurochemistry of the system, since the recordings were performed four days after the cessation of LEV treatment. This long-lasting effect has been previously reported for LEV and some other antiepileptic drugs, but not for all the electrophysiological parameters studied here [32,36,46,47].
Hyperexcitability Caused by TLE in the DG Is Attenuated by LEV Treatment
The effects of LEV on basal synaptic transmission in the hippocampal DG area were examined before the PPF, PPD and LTP experiments. I/O curves represent basal synaptic transmission, reflecting not only the level of presynaptic neurotransmitter release but also postsynaptic processes [44]. Rises in stimulation intensity typically result in an increased EPSP slope and an elevated PS amplitude, as observed for all groups. However, TLE caused a significant augmentation of PS amplitude and a reduction in the onset- and peak-PS latencies, indicating clear signs of hyperexcitability in the DG in the early chronic phase of epilepsy. This finding is consistent with previous reports of an increase in PS amplitude in the DG area of post-SE anesthetized rats [32] and of a reduction in the onset- and peak-PS latencies in freely moving kainate-induced TLE animals [33]. Interestingly, when PS amplitude was plotted as a function of EPSP slope, the EPI condition caused a shift to the left in the excitability curve compared with controls, showing that coupling between the EPSP and PS reflects the final result of the neuronal synaptic response [48]; our EPSP-PS data reinforce the idea of a hyperexcitable state in the granule cells of the EPI brain. The hyperexcitability of dentate granule cells could be a consequence of pathological rearrangements of neuronal circuitry in which an initial loss of hilar mossy cells denervates granule cell dendrites; this triggers the formation of abnormal recurrent excitatory connections among normally unconnected granule cells (mossy fiber sprouting, Figure 8a) [49]. Additionally, there is a combination of GABAergic hilar interneuron loss and connectivity alterations in the remaining interneurons (Figure 8a,b) [2,49,50]. This imbalance between excitation and neuronal inhibition may be the origin of the hyperexcitable and hypersynchronous neuronal activity observed.
(Figure 8, partial caption: "[48,51,52]. All this together would promote a hyperinhibitory environment; this could explain the changes in the short- and long-term synaptic plasticity in TLE rats, such as strong depression (PPD) and the absence of facilitation (PPF) by paired pulses, and the decrease in PS long-term potentiation (PS-LTP). (c) The partial recovery of PS amplitude in basal excitatory transmission and the decrease in excitatory postsynaptic potential (EPSP)-LTP slope by LEV could be associated with the potentiation of GABAergic signaling through the increase in the release of γ-aminobutyric acid (GABA), suggesting that LEV may act as an effective antiseizure agent that suppresses the firing of glutamatergic neurons in the DG." OML: outer molecular layer; MML: middle molecular layer; IML: inner molecular layer; GCL: granule cell layer; pp: perforant pathway (red axon).)
Our data also demonstrated that LEV treatment partially reversed the hyperexcitability of the DG in the chronic phase of epilepsy; these results extend upon the findings of Margineanu et al. [32], who reported that LEV treatment inhibited the development of hippocampal DG hyperexcitability in the epileptogenic phase of pilocarpine-induced epileptic rats. The mechanism of action through which LEV is able to reduce DG excitability remains unknown; however, the mechanism may involve effects on excitatory and inhibitory neurotransmitter release [29,53] since the LEV primary target is SV2A protein, which is expressed in all nerve terminals independently of their neurotransmitter content and is involved in modulation of the vesicular cycle [21,24,30]. Nevertheless, data suggest that LEV has a selective effect on the DG inhibitory system, as SV2A protein is strongly coexpressed with GABAergic markers under healthy conditions [24,54] and preferentially regulates vesicular γ-aminobutyric acid (GABA) release in the hippocampus [55,56]. In addition, under EPI conditions, SV2A is increased and coexpressed with GABAergic, but not glutamatergic, markers in the hilar interneurons, suggesting that SV2A specifically regulates GABAergic neurotransmission in the hilus as a compensatory antiseizure mechanism [54,57]. Furthermore, this scenario is consistent with the results reported by Pichardo-Macías et al. [36], who showed that LEV treatment might re-establish the balance in the glutamate/GABA ratio, increasing the vesicular release of GABA in the chronic phase of TLE induced by lithium-pilocarpine treatment. Therefore, LEV may act as an effective antiseizure agent that potentiates inhibitory transmission, enhancing GABA release and suppressing the firing of glutamatergic neurons in the DG (Figure 8c).
TLE Caused Alterations in Short-Term (Facilitation/Depression) Synaptic Plasticity, and LEV Did Not Reverse Them
As previously mentioned, PPD reflects the integrity of GABAergic circuits, which are constituted by different populations of interneurons innervating the granule cells of the DG [58]. Our results showed that EPI rats exhibit increased PPD; this observation is in line with previous reports in kainate-induced SE rats [33,59] and supports the existence of an elevated GABAergic tone in the DG of EPI animals during the chronic phase of this disease. The mechanism associated with the hyperactivation of GABAergic networks remains intriguing. Although there is a consensus that chronic epilepsy is associated with a loss of different subtypes of GABAergic interneurons in the hilus of rodents and humans (Figure 8a,b) [51,52,60], in addition to the death of some interneurons, plastic changes in the inhibitory networks and altered postsynaptic responses of the remaining neurons could also influence GABAergic tone (Figure 8b). In this regard, it has been reported that calbindin-immunoreactive interneurons show enlargement of their cell bodies, growth of numerous spines and elongation of dendrites (Figure 8b) [52]. Additionally, during mossy fiber sprouting, some interneurons are targeted by axons of granule cells, establishing aberrant networks (Figure 8b) [49,51]. Moreover, patch-clamp recordings have shown postsynaptic alterations, evidencing that tonic GABA currents, mediated by lower affinity GABA A receptors, are enhanced in granule cells of EPI rats [61]. These structural and functional changes could partially explain the increased GABAergic tone in the DG of EPI rats. Nevertheless, the functional significance of this alteration needs to be addressed; specifically, elevated inhibitory activity could be responsible for synchronizing granule cells, thus contributing to the generation of seizures, or this alteration could simply control the efficacy of excitatory inputs and thereby limit synaptic plasticity [62].
On the other hand, our results indicate an absence of PPF in rats with TLE. PPF reflects a short duration increase in synaptic transmission derived from pre-and postsynaptic modulation [44]. It has been reported that facilitation may depend on several factors, such as residual calcium, vesicular readily releasable pool increases, properties of postsynaptic receptors, and synaptic activity frequency [63]. Although not all these mechanisms have been studied in EPI rats, Upreti et al. [64] observed an increase in the number of vesicles of the readily releasable pool and more vesicular release and endocytosis in granule cells of EPI rats. These changes are consistent with an augmentation in glutamate release in EPI rats under basal and depolarizing conditions [36,65], which in turn could modify the expression of glutamate transporters, increasing the uptake of that neurotransmitter in the hippocampal region [66,67]. These data suggest persistent glutamatergic activity in the presynaptic compartment, which could explain the greater PS magnitude observed in this study; however, such exacerbated activity could exhaust neurotransmitter availability when PP is applied at short IPIs. Furthermore, glutamatergic activity could explain not only the loss of facilitation by PP but also the persistence of GABAergic tone in interneurons, which causes a hyperpolarized environment that prevents an increased response to the second stimulus. A third possibility is a combination of these processes.
The EPI + LEV group did not show any change with respect to the EPI group, as PPD reflects the influence of the inhibitory postsynaptic potentials that are produced by the GABA released from synaptic vesicles of interneurons, the subsequent activation of postsynaptic GABA A receptors and the entry of Cl− ions, which increases the negative charge inside the postsynaptic neuron [2,44]. The resultant augmentation in membrane conductance and hyperpolarization underlies what is known as phasic inhibition [68,69]. GABA can also activate other receptors on presynaptic terminals or at neighboring synapses, causing persistent tonic activation of GABA A receptors [69,70]. This kind of activation may affect the magnitude and duration of the response to a stimulus, reducing the probability that an action potential might be generated [69]. Since PPD and PPF are typically measured with PS amplitude, the continuous activity of interneurons could provoke the absence of PS in the PP procedure and explain the strong augmentation of PPD and the absence of PPF in EPI rats. The lack of effect of LEV treatment in restoring PPD and PPF in EPI rats could be due to a "floor effect": EPI rats exhibit an elevated GABAergic tone in the DG that appears to have reached its maximum level, thereby occluding any effect of LEV.
TLE Reduced PS-LTP, and LEV Did Not Correct This Alteration
Likewise, our results showed that in the DG of the hippocampus of early chronic EPI rats, the EPSP-LTP slopes were comparable to nonepileptic animals; however, there was a decrease in PS-LTP amplitude. Early LTP is a process that depends on glutamate release, the repeated activation of the AMPA receptors and subsequent activation of NMDA channels that allow Ca 2+ influx and the activation of some kinase enzymes (e.g., PKC, CAMKII and Fyn) [44,71]. In chronic epilepsy, the vesicular release of glutamate is substantially increased in the DG, resulting in elevated extracellular levels of this neurotransmitter after high-K + stimulation [36,65,72]. Nevertheless, the NR2B subunit of the NMDA receptor is minimally expressed in hippocampal synaptosomes, an effect that is accompanied by reduced expression of GluA1/GluA2 heteromers and an increase in GluA1/GluA1 homomers of the AMPA receptor [73]. These modifications in the ratio of glutamate receptors might contribute to the generation of the EPSP-LTP slope but prevent the proper functioning of NMDA receptors and the activation of second messengers. Furthermore, evidence has consistently shown that chronic epilepsy is associated with low levels of PKCγ in the DG [74,75]; this isotype is activated by Ca 2+ and is required for LTP induction [76]. Moreover, a recent study described high levels of Fyn in the hippocampus of EPI rats [77]; although Fyn is commonly recognized as an inductor of LTP [78], the authors suggest that this enzyme may be able to impede hippocampal LTP when the HTR6/ERK1/2 pathway is activated [77]. Overall, the evidence suggests that the exacerbated glutamatergic neurotransmission in the DG of chronic EPI rats could not necessarily trigger robust LTP since several postsynaptic alterations in glutamate receptors and signaling pathways linked to LTP seem to be compromised in this pathology.
Regarding EPI animals treated with LEV, our data showed a decrease in EPSP-LTP. These results conflict with those reported by Sanchez et al. [34] and Ge et al. [18], who showed that LEV reverses the decrease in EPSP-LTP in the DG of an Alzheimer's model and in the CA1 hippocampal region of EPI rats. The differences between these reports and our results may be due to differences in the animal model, disease progress, region registered or treatment scheme. However, our results are consistent with the hyperinhibition hypothesis ( Figure 8). It has been reported that under pathophysiological conditions, LEV increased GABA release (Figure 8c) [36,79]. In addition, in EPI DG, the inhibitory interneurons, but not the principal cells, are primarily immunoreactive to the neuronal activity marker c-Fos [49]. Furthermore, SV2A protein was substantially increased and coexpressed with the GABA marker in the cell bodies and dendrites of hilar interneurons of mice with PTZ-induced seizures [54]. These three factors, the pathophysiology of neural tissue, neuronal activity and SV2A expression, might have a selective inhibitory effect that influenced the decrease in the EPSP-LTP slope of EPI animals treated with LEV ( Figure 8c). Therefore, the high activity of GABAergic interneurons, augmented by LEV, could keep granular cells hyperpolarized and not allow the generation of EPSPs.
On the other hand, EPI rats treated with LEV exhibited a reduction in PS-LTP with respect to the nonepileptic animals, but it was not significant with respect to EPI rats. As we mentioned before, the postsynaptic alterations in glutamate receptors and signaling pathways could impede PS-LTP generation under EPI conditions, and therefore, LEV may not have had an effect. Otherwise, our PS-LTP data could explain previous reports with respect to the cognitive impairment presented in animal models and patients with TLE [80][81][82], as it has been postulated that the LTP corresponds to the cellular bases of memory processes. Although our results do not show positive effects on short-term plasticity and PS-LTP in rats treated with LEV, it cannot be overlooked that the drug may enhance cognitive processes in the long term, as was observed in a study conducted in children who were treated with LEV for benign epilepsy and showed an improvement in cognitive abilities [83].
Conclusions
Taken together, our results showed that TLE provoked profound changes in the basal excitability and synaptic plasticity of the DG and provide evidence that LEV has a long-lasting effect, reducing the basal excitability of granule cells in TLE rats. Although most alterations were not reestablished by LEV treatment, our study suggests that LEV may act as an effective antiseizure agent that potentiates inhibitory transmission, enhancing GABA release and suppressing the firing of glutamatergic neurons in the DG. This view is supported by EPSP-LTP data that reveal that LEV may have an effect on the hyperpolarization of granule cells in EPI rats. Our results do not support the role of LEV as a drug able to reestablish synaptic plasticity in TLE rats; nevertheless, further investigation is required to determine whether other factors (e.g., treatment duration or doses) could influence the effectiveness of LEV treatment. Moreover, epilepsy is a complex disorder that involves both neuronal inhibition and neuronal excitation; therefore, further studies are needed to better understand the complete mechanism of action of LEV.
Conflicts of Interest:
The authors declare no conflicts of interest, financial or otherwise.
|
v3-fos-license
|
2023-07-19T06:18:52.776Z
|
2023-07-01T00:00:00.000
|
259948995
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0010987&type=printable",
"pdf_hash": "4cc1cfc5244572cad4a0a818a1310887b6381cbb",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44570",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "cbc70d9c6e46bae784405d2730cc9ea7841fd753",
"year": 2023
}
|
pes2o/s2orc
|
A long-term observational study of paediatric snakebite in Kilifi County, south-east Kenya
Introduction Estimates suggest that one-third of snakebite cases in sub-Saharan Africa affect children. Despite children being at a greater risk of disability and death, there are limited published data. This study has determined the: population-incidence and mortality rate of hospital-attended paediatric snakebite; clinical syndromes of snakebite envenoming; and predictors of severe local tissue damage. Methods All children presenting to Kilifi County Hospital, Kenya with snakebite were identified through the Kilifi Health and Demographic Surveillance System (KHDSS). Cases were prospectively registered, admitted for at least 24-hours, and managed on a paediatric high dependency unit (HDU). Households within the KHDSS study area have been included in 4-monthly surveillance and verbal autopsy, enabling calculation of population-incidence and mortality. Predictors of severe local tissue damage were identified using a multivariate logistic regression analysis. Results Between 2003 and 2021, there were 19,606 admissions to the paediatric HDU, of which 584 were due to snakebite. Amongst young children (≤5-years age) the population-incidence of hospital-attended snakebite was 11.3/100,000 person-years; for children aged 6–12 years this was 29.1/100,000 person-years. Incidence remained consistent over the study period despite the population size increasing (98,967 person-years in 2006; and 153,453 person-years in 2021). Most cases had local envenoming alone, but there were five snakebite associated deaths. Low haemoglobin; raised white blood cell count; low serum sodium; high systolic blood pressure; and an upper limb bite-site were independently associated with the development of severe local tissue damage. Conclusion There is a substantial burden of disease due to paediatric snakebite, and the annual number of cases has increased in-line with population growth. The mortality rate was low, which may reflect the species causing snakebite in this region. The identification of independent predictors of severe local tissue damage can help to inform future research to better understand the pathophysiology of this important complication.
Introduction
Snakebite is a neglected tropical disease that affects 5 million people each year, with the greatest burden falling on rural populations of the tropics and sub-tropics [1]. Snakebite disproportionately affects children living in low-income countries, who are likely to be at a greater risk of disability and mortality. In sub-Saharan Africa it has been estimated that 30% of people affected by snakebite are children [2]. Typical activities that may bring children into contact with snakes include outdoor play, agricultural work, and walking to school. The burden of snakebite in sub-Saharan Africa has been estimated at over 1 million DALYs (disability-adjusted life years) per year [3]. Much of this is accounted for by deaths and limb amputations in the young, who disproportionately contribute to disability-adjusted life years and years of life lost [3].
The limited available evidence suggests that children are twice as likely to be administered antivenom following snakebite and have an increased risk of death [4][5][6]. It is hypothesised that children are particularly vulnerable as they receive a higher dose of venom relative to their body weight. Despite this, limited studies have described the burden of paediatric snakebite in sub-Saharan Africa. Two observational studies in South Africa (with samples sizes of 51 and 72 children) identified a substantial burden of painful progressive swelling, with one in four cases undergoing debridement or fasciotomy [7,8]. In a retrospective study of 28 consecutive cases of paediatric snakebite in The Gambia, the mortality rate was 14% [9]. In a community
Study site
The KHDSS hospital surveillance system has collected data on paediatric admissions (including snakebite cases) to Kilifi County Hospital since 2003. Paediatric care in Kenya is provided to children aged ≤12-years. To avoid misclassifying cases with delayed onset of clinical envenoming, local policy stipulates that all cases of paediatric snakebite are admitted for ≥24-hours observation on the paediatric high dependency unit (HDU). This is regardless of disease severity and even applies to cases with no features of envenoming. The paediatric HDU is funded and staffed by the KEMRI-Wellcome Trust Research Programme and a standardised protocol for the management of snakebite is in place. The resources and quality of care available at this paediatric HDU are higher than typical government healthcare facilities in much of Africa, facilitating an approach to snakebite management that aligns with recommended standards in high-income settings. Antivenom is administered in accordance with WHO guidelines [12]. Once paediatric cases with snakebite have been managed on the paediatric HDU for 24-hours, the clinical team decide whether it is appropriate for the child to remain on the paediatric HDU, be stepped down to the Kilifi County Hospital paediatric ward, or to be discharged home. Cases transferred to the paediatric ward continue to have data collected as part of the KHDSS hospital surveillance study, and cases discharged home are followed-up by the KHDSS community surveillance system.
Hospital surveillance data are linked to the KHDSS community surveillance data. The study area is 891 km 2 and Kilifi County Hospital is the only hospital with inpatient paediatric services in the study area [11]. The KHDSS study area was defined based on the lowest number of administrative sublocations that were the site of residence of greater than 80% of the paediatric inpatients at Kilifi County Hospital over a three-year period (1998-2000) [11]. The study area, including all dwellings, has been GPS mapped. In 2021, the KHDSS included 92,063 households and 309,228 residents. Most of the study population reside in rural dwellings and the local economy is predominantly centred on subsistence farming [11].
Identification and eligibility of cases
All cases of snakebite affecting children aged ≤12 years, attending from January 2003 until December 2021, were eligible for inclusion in this study. These cases were routinely enrolled into the KHDSS hospital surveillance study over this period. At admission and on discharge from Kilifi County Hospital, clinical-research staff prospectively assigned a diagnostic code which was recorded on the KHDSS hospital database. Specific diagnostic codes were in place to classify snakebite, as follows: 'snake venom' and 'snake bite.' As a precaution, to avoid missing cases that may have been incorrectly classified, database search terms also included the following diagnostic codes, which were recorded at admission and on discharge: (1) "snake venom"; (2) "snake bite"; or (3) "acute animal bite." In addition to searching diagnostic codes, free-text sections of the database were searched for the following terms: (1) "snake"; or (2) "venom". All cases identified through this database search were screened by an academic clinician (MA) and non-snakebite cases, for example dog bites, were excluded. For a case to be included there had to be a specific and clear reference to snakebite being the cause of the admission, in the database or the clinical records. In cases where the diagnosis of snakebite was uncertain, the paper notes were scrutinised by the study team.
Data extraction
The following data were prospectively recorded on the KHDSS database at the time of hospital admission by research staff: demographics; date and time of admission; date and time of discharge; weight; admission vital signs; diagnosis code; mortality; and date of death. The following clinical laboratory results from admission samples were extracted from the KEMRI-Wellcome Trust Research Programme laboratory database: full blood count (including differential); serum sodium; serum potassium; and serum creatinine.
To supplement the above prospective data that were collected in the KHDSS, retrospective data from the paper case notes were extracted by a team of research nurses using a standardised case report form. The following retrospective data were extracted: residence; geographic location of bite; circumstances of bite; date and time of bite; anatomical location of bite; use of traditional treatments; clinical features of local and systemic envenoming; clotting time; antivenom administration; indication for antivenom; antivenom associated adverse events; adjunctive treatments; complications of envenoming; and discharge destination. The antivenom product that was administered was not documented in the medical records. However, from the hospital pharmacy records it was possible to identify the antivenom product available during each study year.
Deaths were identified by searching the KHDSS hospital database, the paper medical records, and the KHDSS community database. Verbal autopsy was routinely conducted for all deaths that occurred within the KHDSS study area. This was conducted using the 2007 World Health Organization (WHO) verbal autopsy tools [13], as described previously [14].
To calculate the population-incidence of hospital-attended snakebite, census data from the KHDSS community database were used. Full details of this surveillance system have previously been published [11]. Community interviewers visited every household in the study area on a 4-monthly basis. A single resident was interviewed, from whom information pertaining to each resident was collected. The identity of all residents was confirmed and any newly born children, in-migrations, deaths, and out-migrations, since the previous enumeration round, were recorded. Person-years of observation were stratified by sex, age and 41 geographic sublocations.
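As an illustration of how the denominator is built, person-years can be accumulated from each child's entry into and exit from the surveillance area. The sketch below uses hypothetical column names and is not the KHDSS code; it simply shows the bookkeeping involved.

```python
# Rough sketch of person-years of observation: each child contributes time from
# entry (birth or in-migration) until exit (death, out-migration, ageing out of
# paediatric care, or the end of follow-up). Column names are illustrative.
import pandas as pd

def person_years(df: pd.DataFrame, study_end="2021-12-31") -> float:
    """Sum each child's follow-up time (in years) within the surveillance area."""
    end = pd.Timestamp(study_end)
    entry = df[["birth_date", "inmigration_date"]].max(axis=1)      # later of birth / in-migration
    age13 = df["birth_date"] + pd.DateOffset(years=13)              # ageing out of paediatric care
    exit_ = df[["death_date", "outmigration_date"]].min(axis=1).fillna(end)
    exit_ = pd.concat([exit_, age13], axis=1).min(axis=1).clip(upper=end)
    days = (exit_ - entry).dt.days.clip(lower=0)
    return days.sum() / 365.25
```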
Statistical analysis
Clinical data were described using summary statistics including means, medians, and proportions. Population-incidence of hospital-attended snakebite was calculated with 95% confidence intervals. Population incidence was calculated separately for young children (0-5 years inclusive) and older children (6-12 years inclusive). The hospital surveillance system did not consistently capture all admissions until 2006, thus, to avoid underestimation, incidence estimates have only been calculated for the period of 2006-2021.
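For transparency, incidence estimates with exact (Poisson) 95% confidence intervals can be computed as in the sketch below; the case count and person-years shown are placeholders rather than the study figures.

```python
# Sketch of the incidence calculation: cases per 100,000 person-years with an
# exact (Poisson) 95% confidence interval. Inputs are placeholder values.
from scipy.stats import chi2

def incidence_with_ci(cases, person_years, per=100_000, alpha=0.05):
    rate = cases / person_years * per
    lower = chi2.ppf(alpha / 2, 2 * cases) / 2 / person_years * per if cases > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (cases + 1)) / 2 / person_years * per
    return rate, lower, upper

rate, lo, hi = incidence_with_ci(cases=30, person_years=103_000)
print(f"{rate:.1f} (95% CI {lo:.1f}-{hi:.1f}) per 100,000 person-years")
```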
A logistic regression analysis was conducted to identify variables associated with severe local tissue damage. Severe local tissue damage was defined as any case developing local skin necrosis or requiring surgical intervention, criteria that are congruent with the recently established snakebite core outcome measurement set [15]. The following variables were included in a univariate analysis, and those with a significance value of p ≤ 0.10 were selected for inclusion in the multivariate analysis: age, site of bite (dichotomised as upper limb and lower limb), elapsed time from bite to admission, MUAC (mid-upper arm circumference, which was measured away from the bite site)-for-age z-score (using the zscorer package in R), vital signs on admission (pulse rate, respiratory rate, systolic blood pressure, capillary refill time, axillary temperature, and oxygen saturations), admission full blood count (haemoglobin, white cell count, granulocyte count, lymphocyte count, and platelet count), and admission serum biochemistry (sodium, potassium, and estimated glomerular filtration rate [eGFR]). The eGFR was calculated using the Schwartz equation [16]. Multiple imputation, using the mice package in R, was undertaken to replace missing values [17]. R version 4.2.2 (R Foundation for Statistical Computing) was used for all analyses.
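The original analysis was run in R (mice for imputation and standard logistic regression). A rough Python analogue of the same workflow, with illustrative variable names only, is sketched below; it is not the study code.

```python
# Rough Python analogue of the workflow above: impute missing covariates, fit a
# logistic regression, and report odds ratios with 95% CIs. A single stochastic
# imputation is used here for brevity, whereas the paper pooled multiple
# imputed datasets. Column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def fit_severe_damage_model(df: pd.DataFrame, outcome: str, covariates: list):
    X = pd.DataFrame(IterativeImputer(random_state=0).fit_transform(df[covariates]),
                     columns=covariates)
    model = sm.Logit(df[outcome].values, sm.add_constant(X)).fit(disp=False)
    return pd.DataFrame({
        "OR": np.exp(model.params),
        "CI_low": np.exp(model.conf_int()[0]),
        "CI_high": np.exp(model.conf_int()[1]),
        "p": model.pvalues,
    })

# Example call with a hypothetical data frame `cases`:
# print(fit_severe_damage_model(cases, "severe_damage",
#                               ["upper_limb_bite", "wbc", "systolic_bp", "sodium", "hb"]))
```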
Incidence of hospital-attended paediatric snakebite
During the study period there were 78,038 paediatric admissions to Kilifi County Hospital (children aged ≤12-years), and 19,606 admissions to the paediatric HDU. The diagnostic code search of the KHDSS hospital database identified 724 potential paediatric snakebite cases. Following manual review of clinical data, 72 were excluded as they were not cases of snakebite (these were predominantly bites by other animals). Further exclusions are detailed in the CONSORT diagram (Fig 1). There were 584 children aged 12-years or under that presented with snakebite to Kilifi County Hospital between January 2003 and December 2021. Snakebite thus represented 2.98% of all admissions to the paediatric HDU over the study period. Details of the proportion of admissions to Kilifi County Hospital for snakebite by year are available in Table A in S1 Text. The median age was 8 years (IQR 5-10 years) and 47.6% were female. Clinical records were available for 472 (80.8%) participants, and most were resident in the KHDSS study area (N = 399; 68.3%).
The population-incidence of hospital-attended snakebite was calculated using the number of admissions per year amongst children that resided in the KHDSS study area (numerator) and annual age-specific census data from the KHDSS (denominator). As young children had a substantially lower risk of snakebite, population-incidence was stratified between the ages 0-5-years and 6-12-years. For children aged ≤5-years, the average population-incidence between 2006 and 2021 was 11.3/100,000 person-years; for children aged 6-12-years, the average population-incidence was 29.1/100,000 person-years. Fig 2 demonstrates the annual population-incidence for young children and older children, with 95% confidence intervals. Although there is variability between study years, the incidence remained broadly consistent until 2020, with a decline in 2021. As there has been a substantial increase in the number of people residing in the KHDSS study area, absolute numbers presenting with snakebite have increased over the study period. In 2006 there were 98,967 person-years of follow-up amongst children ≤12 years of age; by 2021 this had increased to 153,453 person-years.
Population-incidence of hospital-attended snakebite was calculated by year of age, as shown in Fig 3. There was a substantial increase in incidence with age: from 3.6/100,000 person-years at age 1-year, to 35.9/100,000 person-years at age 9-years. With increasing age above 9-years incidence fell, reaching 30.0/100,000 person-years by age 12-years.
Clinical features
The circumstances of the snakebite were available in the clinical records in 307 (52.6%) cases. Most snakebites occurred outdoors and near to the child's home (131 cases; 42.7%) or in the
(Fig 2 caption: The bands represent the 95% confidence intervals. Population incidence is stratified by age category.)
Traditional therapies that were sought prior to admission included application of a 'black stone' in 110 (23.3%) cases, application of a tourniquet in 48 (10.2%) cases, and cutting the skin at the bite site in 47 (10.0%) cases (Table 1). Two or more types of traditional therapy were sought prior to admission in 56 (11.9%) cases (S1 Fig). The median elapsed time from bite until admission was 6-hours and 45-minutes (IQR 3-15-hours; range 10 minutes-17 days). The median elapsed time from admission until antivenom administration was 2-hours and 50-minutes (IQR 1-9 hours). Children who had received traditional therapies took a median of 3.6 hours longer to present to hospital, and this difference was statistically significant (median 9.4 hours and 5.8 hours; p = 0.003).
Most children had local swelling at presentation, being present in 399 (84.5%) cases (Table 1). There were six cases with systemic bleeding, and two with neurotoxic envenoming. There were no features of envenoming in 51 cases (10.8%). Age-adjusted tachycardia, hypotension, and tachypnoea were present in 311 (53.3%), 21 (4.6%), and 345 (59.3%) cases, respectively. The 20-minute whole blood clotting test (20-WBCT) was documented in only 18 cases. Many were conducted incorrectly and followed a procedure akin to the Lee-White clotting time (with repeated checks of the sample before 20-minutes had elapsed). Two of the 18 cases where a bedside clotting test was documented were prolonged over 20-minutes.
Full blood count and serum biochemistry were routinely undertaken for all children presenting with snakebite. In cases where insufficient blood sample volumes were obtained, due to challenging access, the full blood count was performed in preference to biochemistry. These demonstrated anaemia (haemoglobin <8.2 g/dL) in 51 (9.6%) cases, leukocytosis in 314 (59.5%) cases, and reduced eGFR in 12 (5.0%) cases (Table B in S1 Text). The clinical laboratory results, stratified by severe local tissue damage, have been depicted in Fig 5. The numbers of children with age-adjusted abnormal clinical laboratory results have been summarised in Table B in S1 Text.
The mean duration of hospital stay was 6.3 days (SD 17.8 days). There was an average of 195 bed-days occupied per year due to paediatric snakebite admissions. The mean duration of hospital stay was significantly prolonged (55.0 days vs 4.2 days; p<0.001) in cases with severe local tissue damage (defined as developing skin necrosis or undergoing local surgery). Amongst the 25 cases with severe local tissue damage, 20 (80.0%) were admitted for ≥7-days.
Predictors of severe local tissue damage
Severe local tissue damage developed in 25 cases (4.3%). Necrosis at the site of the bite developed in 22 cases (3.8%), and 19 (3.3%) required surgery. Ten cases underwent debridement, seven had a fasciotomy, four underwent skin grafting, and three had an amputation. The three cases fulfilling the criteria for severe local tissue damage that did not have skin necrosis had all undergone fasciotomy. All the cases that underwent amputation (n = 3) were preceded by the development of skin necrosis, and none were preceded by surgical fasciotomy.
Following multiple imputation, the following covariates were assessed in a univariate logistic regression analysis to identify potential predictors of severe local tissue damage: age, MUAC-for-age z-score, site of bite (upper vs lower limb), time from bite to admission, vital signs on admission (axillary temperature, pulse rate, respiratory rate, systolic blood pressure, capillary refill time in seconds, and oxygen saturations), serum sodium, serum potassium, eGFR, white blood cell count, granulocyte count, lymphocyte count, platelet count, and haemoglobin.
Lymphocyte count and granulocyte count were omitted from the multivariate logistic regression model, as each were positively associated with severe local tissue damage and, therefore, the total white blood cell count was selected (to avoid multicollinearity). The rate of severe local tissue damage was similar between bites to the arm and the hand, although the event rate was small. Two participants (13.3%) with arm bites developed severe local tissue damage, five (11.9%) with hand bites, six (4.3%) with leg bites and seven (3.3%) with foot bites (Fig 4). Therefore, the covariate of upper limb bite (hand or arm) was entered into the multivariate analysis. The following statistically significant predictors were identified from the multivariate analysis: upper limb bite site (OR 3.27; 95% CI 1.17-9.17; p = 0.03); white cell count (OR 1.14; 95% CI 1.06-1.22; p<0.01); systolic blood pressure (OR 1.03; 95% CI 1.00-1.07; p = 0.04); serum sodium (OR 0.9; 95% CI 0.82-0.99; p = 0.03); and haemoglobin (OR 0.72; 95% CI 0.56-0.92; p = 0.01).
Snakebite associated mortality
Nine of the children in this study have died. Four of these deaths were unrelated to the snakebite and occurred years later during separate hospital episodes. The causes of death in these cases were epilepsy, accidental fall, acute respiratory infection associated with HIV/AIDS, and seizures secondary to previous bacterial meningitis, and these deaths occurred 13-, 5-, 5-, and 4-years after the snakebite incident, respectively. Of the five snakebite associated deaths, two were due to neurotoxic envenoming, one had cardiovascular instability (hypotension, bradycardia and respiratory distress), one developed antivenom associated anaphylaxis, and one infant had a general deterioration of an uncertain nature, which culminated in cardio-pulmonary arrest (Table 3). The five snakebite associated deaths occurred within one day of the hospital admission.
Discussion
This study represents one of the most comprehensive analyses of paediatric snakebite in Africa and demonstrates the substantial burden of this disease. The concerning trend of rising cases of paediatric snakebite in Kilifi, in parallel with population growth, underscores the need for strengthened targeted prevention strategies, improved training of healthcare providers, and increased availability of antivenom treatments. There is also an urgent need for similar studies on the epidemiology of paediatric snakebite envenoming to be conducted, particularly in sites with access to established health and demographic surveillance systems (HDSS) in Africa [18].
There was a substantial fall in hospital-attended snakebite incidence in 2021, which may have been caused by the SARS-CoV-2 pandemic altering health seeking behaviour [19]. One quarter of paediatric snakebite cases were given antivenom, which was most frequently indicated for local envenoming. Most recently (years 2019-2022), Inoserp Pan-Africa (Inosan Biopharma) polyvalent antivenom has been used, although, in 2022, it was withdrawn from the Kenyan market after failing a risk-benefit assessment conducted by the World Health Organization [20]. Since 2016, intermittent stocks of SAIMR (South African Vaccine Producers) polyvalent antivenom have been available through charitable donation by the Bio-Ken Snake Farm in Watamu, which tends to be reserved for more severe cases, given its evidence of pre-clinical efficacy [21]. Although the frequency of severe allergic reactions was low, there was one case that died as a direct result of antivenom induced anaphylaxis. Despite local envenoming being the most frequent indication for administering antivenom in much of Africa, its effectiveness for this indication is unproven, particularly if it is given late, and clinical trials are urgently needed [22]. Novel oral small molecule therapeutics may hold promise, particularly if they can be administered in rural clinics and thus reduce the time to treatment [23][24][25].
(Table 3 excerpt — Features of envenoming: bite-site swelling and neurotoxicity. Clinical narrative: snakebite to the lower limb was followed by difficulty in swallowing and blurred vision; respiratory arrest at hospital was managed with mechanical ventilation; spontaneous breathing returned the following day and mechanical ventilation was ceased; copious haematemesis followed by cardiac arrest developed subsequently; the child was managed with cardiopulmonary resuscitation and adrenaline but died shortly after.)
Bleeding was the most common sign of systemic envenoming. Despite this, measures of coagulopathy, such as the 20WBCT, were rarely documented in the case files, limiting the early detection of coagulopathy. It was not possible to describe the predominant biting species in this study. It is believed that the puff adder (Bitis arietans), spitting cobras (Naja spp.), and burrowing asps (Atractaspis spp.) are the predominant medically important species in this region, but the relative contribution of these, and other less medically important species, is unknown. Mambas (Dendroaspis spp.) and non-spitting cobras (Naja haje, N. subfulva) also inhabit this region of Kenya. Although there were only two cases of neurotoxic envenoming in this study, both were fatal.
Delayed presentation to hospital was frequent and often prolonged. As most cases resided within the KHDSS study area, which is near to Kilifi County Hospital, it is likely that there is a delay in the decision to attend hospital. It is concerning that a large proportion of children received traditional therapies prior to presenting to hospital, particularly as this was associated with a statistically significant prolongation of the bite to admission time. The most frequently sought traditional therapy was application of a 'black stone,' which has been used in many geographic settings despite its lack of efficacy [26].
Most children in this study received antimicrobials. The majority had cloxacillin, although broader spectrum agents such as ceftriaxone and gentamicin were also used. Unlike other animal bites, snakebite rarely results in infection and routine antimicrobial prophylaxis is not recommended [27,28].
Severe local tissue damage developed in 4.4% of cases and was often associated with admissions that were weeks or even months long. Low haemoglobin was associated with severe local tissue damage. The direction of causality is uncertain, and it is feasible that children with anaemia may have other comorbidities that put them at risk of local tissue damage. Snakebite can cause anaemia as a result of thrombotic microangiopathy, although this is usually associated with thrombocytopaenia, which was uncommon in this study [29]. A raised white cell count on admission was also associated with severe local tissue damage, which has been demonstrated in other settings [30,31]. This is likely to be a bi-directional process, with activation of the innate immune system causing collateral damage at the bite site, and damage of local tissues triggering an immune response. Children that had sustained a snakebite to the upper limb were more likely to develop severe local tissue damage, the reason for which is uncertain. It is regarded that children are at a greater risk of envenoming, compared to adults, as they receive a higher dose of venom relative to their body weight; therefore, it may follow that the small upper limb of a child is particularly at risk. Increasing systolic blood pressure and lower serum sodium were associated with severe local tissue damage, although the small effect size and borderline statistical significance make the clinical relevance of these associations uncertain. Ultimately, further studies of local envenoming in sub-Saharan Africa are needed to confirm whether the predictors of severity identified in this single site study are reproducible.
A limitation of this study was that paediatric snakebite cases that did not attend hospital were missed, and therefore the true burden of disease has been under-estimated. A household survey is needed to further define the epidemiology of snakebite in Kilifi. Although the KHDSS study enabled reliable identification of consecutive cases of paediatric snakebite, with routine data collection and clinical laboratory analyses, the KHDSS study was not specifically designed to study snakebite. Thus, many important datapoints, such as whether antivenom was administered, needed to be retrospectively collected from the hospital records. Nevertheless, documentation on the paediatric HDU tended to be detailed and accurate, with standardised admission and discharge case report forms and contemporaneous daily documentation during admission. All cases were managed on a paediatric HDU which is supported and staffed by the KEMRI-Wellcome Trust Research Programme. There were missing data, particularly for biochemistry laboratory results and for items with variable documentation in the clinical records, such as the use of traditional therapies. The risk of bias due to missing data was partially mitigated using multiple imputation.
In conclusion, this study demonstrates the substantial burden of snakebite envenoming amongst children in rural Kenya. This is traumatic for children, interrupts schooling and development, is disruptive for families, places a substantial burden on healthcare facilities, and can lead to permanent disability or death. There is an urgent need for improved community awareness, with particular focus on preventative strategies, appropriate first aid, and the importance of early presentation to hospital. Many children in Kilifi receive antivenom for local envenoming, and it is important to assess whether this is effective.
Supporting information S1 Text. Table A
|
v3-fos-license
|
2020-03-19T10:54:17.164Z
|
2020-03-12T00:00:00.000
|
214816242
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://advancesindifferenceequations.springeropen.com/track/pdf/10.1186/s13662-020-02570-8",
"pdf_hash": "f0b9a25c99d5fca55053f569aea8749004211191",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44572",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "e20aabe30442f27e2ffb513d35e87cca544b3c84",
"year": 2020
}
|
pes2o/s2orc
|
Existence uniqueness and stability of mild solutions for semilinear ψ-Caputo fractional evolution equations
In this paper, we study the local and global existence, and uniqueness of mild solution to initial value problems for fractional semilinear evolution equations with compact and noncompact semigroup in Banach spaces. In particular, we derive the form of fundamental solution in terms of semigroup induced by resolvent and ψ-function from Caputo fractional derivatives. These results generalize previous work where the classical Caputo fractional derivative is considered. Moreover, we prove the Mittag-Leffler–Ulam–Hyers stability result. Finally, we give examples of time-fractional heat equation to illustrate the result.
Introduction
Fractional differential equations have been applied in many fields, such as economics, engineering, chemistry, physics, finance, aerodynamics, electrodynamics of complex media, polymer rheology, and control of dynamical systems (see [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16]). Research on fractional calculus has become a focus of study because some dynamical models can be described more accurately with fractional derivatives than with integer-order derivatives. In particular, it has been shown that fractional calculus provides more realistic models demonstrating hidden aspects of a spring pendulum [13], the free motion of a particle in a circular cavity [11] and some epidemic models [15,17].
Several researchers are interested in investigating various aspects of fractional differential equations, such as existence and uniqueness of solutions, exact solutions, stability of solutions, and methods for explicit and numerical solutions [17][18][19][20]. The common techniques used to establish the existence and uniqueness of solutions are fixed point theorems, upper and lower solutions, iterative methods and numerical methods. For stability of solutions, there is a concept of data dependence, which has become one of the significant topics in the analysis of fractional differential equations, called Ulam-Hyers stability (see [21][22][23]).
One of the main research focuses in fractional calculus is the theory of fractional evolution equations, since they are abstract formulations for many problems arising in engineering and physics. Evolution equations are commonly used to describe systems that change or evolve over time. A number of studies have been conducted on the existence and uniqueness of solutions for fractional evolution equations based on semigroup and fixed point theory (see [24][25][26][27][28][29][30][31][32]). On the other hand, there have been some studies on the fundamental solution for homogeneous fractional evolution equations [33,34]. Recently, [19] applied the homotopy analysis transform method (HATM) for solving time-fractional Cauchy reaction-diffusion equations. In addition, Wang and Zhou [35] presented four kinds of stability of the mild solution of the fractional evolution equation in Banach space, namely Mittag-Leffler-Ulam-Hyers stability, generalized Mittag-Leffler-Ulam-Hyers stability, Mittag-Leffler-Ulam-Hyers-Rassias stability and generalized Mittag-Leffler-Ulam-Hyers-Rassias stability.
There is variation in the definition of fractional differential operators found in the literature, including Riemann-Liouville, Caputo, Hilfer, Riesz, Erdelyi-Kober, and Hadamard [2,36] operators. The common definitions that triggered attention from many researchers are Riemann-Liouville and Caputo fractional calculus. In Riemann-Liouville fractional differential modeling, the initial condition involves limit values of fractional derivatives, which is difficult to interpret. The Caputo fractional derivative has the advantage of being suitable for physical models with initial condition because the physical interpretation of the prescribed data is clear and it is in general possible to provide these data by suitable measurements [37].
Almeida [38] generalized the definition of Caputo fractional derivative by considering the Caputo fractional derivative of a function with respect to another function ψ and studied some useful properties of the fractional calculus. The advantage of this new definition of the fractional derivative is that a higher accuracy of the model could be achieved by choosing a suitable function ψ.
Recently, Jarad and Abdeljawad [39] introduced the generalized Laplace transform with respect to another function and the inverse version of the Laplace transform with respect to another function. This can be used to solve some fractional differential equations in the framework of generalized Caputo fractional derivative.
Motivated by the work of [25,39], we consider the following fractional evolution equation in a Banach space E:

C_0 D^α_ψ u(t) = Au(t) + f(t, u(t)), t ∈ [0, T], u(0) = u_0, (1)

where 0 < α < 1, T < ∞, A is the infinitesimal generator of a C_0-semigroup of uniformly bounded linear operators {T(t)}_{t≥0} on E, u_0 ∈ E and f : [0, ∞) × E → E is a given function. The fractional derivative C_0 D^α_ψ considered in this work is in the sense of the Caputo fractional derivative with respect to a function ψ, which gives a more general framework than the results in the literature. Moreover, this problem is more general than the work in [39], where we consider the evolution operator A instead of a constant.
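As a concrete numerical illustration of problem (1) in the simplest scalar, classical-Caputo case (ψ(t) = t, E = ℝ, the operator A replaced by a constant λ), the following sketch integrates the equation with the standard L1 discretisation of the Caputo derivative. The scheme, the parameter values and the function names are illustrative assumptions and are not taken from the paper.

```python
import math
import numpy as np

def solve_caputo_l1(alpha, lam, f, u0, T, n_steps):
    """Solve C_0 D^alpha u(t) = lam*u(t) + f(t, u(t)), u(0) = u0, on [0, T]
    with the L1 scheme (psi(t) = t, scalar case); f is treated explicitly."""
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    u = np.empty(n_steps + 1)
    u[0] = u0
    c = dt ** (-alpha) / math.gamma(2.0 - alpha)      # L1 scaling factor
    w = (np.arange(1, n_steps + 1) ** (1.0 - alpha)
         - np.arange(0, n_steps) ** (1.0 - alpha))    # L1 weights w_0, w_1, ...
    for n in range(1, n_steps + 1):
        # memory term: sum_{j=1}^{n-1} w_{n-j} * (u_j - u_{j-1})
        hist = np.dot(w[n - 1:0:-1], np.diff(u[:n])) if n > 1 else 0.0
        rhs = c * u[n - 1] - c * hist + f(t[n], u[n - 1])
        u[n] = rhs / (c - lam)                        # lam*u(t_n) taken implicitly
    return t, u

# Illustrative relaxation-type example with a constant source term.
t, u = solve_caputo_l1(alpha=0.6, lam=-1.0, f=lambda t, u: 1.0,
                       u0=0.0, T=5.0, n_steps=500)
print(u[-1])  # should approach the steady state u = 1 as t grows
```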
In this paper, we aim to establish a mild solution for the problem (1) in terms of semigroup depending on a function ψ from the generalized Caputo derivative. In addition, we prove the existence and uniqueness results of mild solution for the problem (1) in local and global time under the condition that {T(t)} t≥0 is both compact and noncompact operator. The results obtained in this work are in the abstract form which can be applied for further investigation such as the evolution equations with perturbation, delay and nonlocal term. This paper will be organized as follows. In Sect. 2, we will briefly recall some basic definitions and some preliminary concepts about fractional calculus and auxiliary results used in the following sections. We then construct a mild solution by using semigroup for the problem in Sect. 3. We prove the existence and uniqueness of mild solutions of the problem (1) under compact and noncompact analytic semigroup by the Schauder fixed point theorem in Sects. 4 and 5, respectively. In Sect. 6 we present Mittag-Leffler-Ulam-Hyers stability result for the problem (1). Finally, we give some examples to illustrate the application of the results obtained in Sect. 7 and our conclusion in Sect. 8.
Preliminaries
In this section, we introduce preliminary background which is used throughout this paper.
Let E be a Banach space with the norm · and let C(J, E) be the Banach space of continuous functions from J to E with the norm u C = sup t∈J u(t) .
The ψ-Riemann-Liouville fractional integral operator of order α of a function f is defined by

(I_a^{α;ψ} f)(t) = (1/Γ(α)) ∫_a^t ψ'(s) (ψ(t) − ψ(s))^{α−1} f(s) ds. (2)

It is obvious that when ψ(t) = t, (2) is the classical Riemann-Liouville fractional integral.
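For intuition, definition (2) can be evaluated by direct quadrature; the midpoint rule below keeps the evaluation away from the weakly singular endpoint s = t. The helper name, the choice of ψ and the test function are illustrative assumptions, not part of the paper.

```python
import math

def psi_fractional_integral(f, psi, dpsi, a, t, alpha, n=2000):
    """Approximate (I_a^{alpha;psi} f)(t) =
    1/Gamma(alpha) * int_a^t dpsi(s) * (psi(t)-psi(s))**(alpha-1) * f(s) ds
    by the composite midpoint rule (avoids the singularity at s = t)."""
    h = (t - a) / n
    total = 0.0
    for k in range(n):
        s = a + (k + 0.5) * h   # midpoint of the k-th subinterval
        total += dpsi(s) * (psi(t) - psi(s)) ** (alpha - 1.0) * f(s)
    return total * h / math.gamma(alpha)

# Sanity check: for f = 1 the exact value is (psi(t)-psi(a))**alpha / Gamma(alpha+1).
psi, dpsi = (lambda s: s ** 2), (lambda s: 2.0 * s)
approx = psi_fractional_integral(lambda s: 1.0, psi, dpsi, a=0.0, t=1.0, alpha=0.5)
exact = (psi(1.0) - psi(0.0)) ** 0.5 / math.gamma(1.5)
print(approx, exact)
```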
Proposition 2.14 ( [43,44]) The Wright function φ α is an entire function and has the following properties: Next, we introduce the definition for Kuratowski measure of noncompactness, which will be used in the proof of our main results.
The following properties of the Kuratowski measure of noncompactness are well known ([45, 46]). Let E be a Banach space and let U, V ⊂ E be bounded. Then the noncompactness measure has the following properties:
Lemma 2.23 (Schauder's fixed point theorem) Let E be a Banach space and D ⊂ E a convex, closed and bounded set. If T : D → D is a continuous operator such that T(D) is relatively compact, then T has at least one fixed point in D.
Next, we give some facts about the semigroups of linear operators. These results can be found in [51,52].
For a strongly continuous semigroup (i.e., a C_0-semigroup) {T(t)}_{t≥0}, the infinitesimal generator A is defined by Au = lim_{t→0+} (T(t)u − u)/t, and we denote by D(A) the domain of A, that is, the set of all u ∈ E for which this limit exists. Lemma 2.24 ([51, 52]) Let {T(t)}_{t≥0} be a C_0-semigroup; then there exist constants C ≥ 1 and a ≥ 0 such that ‖T(t)‖ ≤ Ce^{at} for all t ≥ 0.
Lemma 2.25 ([51, 52]) A linear operator A is the infinitesimal generator of a C 0 -semigroup if and only if
Throughout this paper, let A be the infinitesimal generator of a C_0-semigroup of uniformly bounded linear operators {T(t)}_{t≥0} on E. Then there exists M ≥ 1 such that sup_{t≥0} ‖T(t)‖ ≤ M.
Representation of mild solution using semigroup
According to Definition 2.5 and Theorem 2.7, it is suitable to rewrite the Cauchy problem in the equivalent integral equation Proof Let λ > 0. Applying the generalized Laplace transforms to (8), we have It follows that We consider the following one-sided stable probability density in [53]: Using (10), we get Then we get Now, we can invert the Laplace transform to get is the probability density function defined on (0, ∞).
For any u ∈ E, define operators S α ψ (t, s) and T α ψ (t, s) by
Lemma 3.2
The operators S α ψ and T α ψ have the following properties: (iii) If T(t) is compact operator for every t > 0, then S α ψ (t, s) and T α ψ (t, s) are compact for all t, s > 0.
(iv) If S α ψ (t, s) and T α ψ (t, s) are compact strongly continuous semigroup of bounded linear operators for t, s > 0, then S α ψ (t, s) and T α ψ (t, s) are continuous in the uniform operator topology.
Proof The proof follows the argument of [26].
Before starting and proving the main results, we introduce the following hypotheses.
Existence and uniqueness of mild solution under compact analytic semigroup
In this section, we begin by proving a theorem concerning the existence and uniqueness of mild solution for the problem (1) under the condition of compact analytic semigroup. The discussions are based on fractional calculus and Schauder fixed point theorem. Our main results are as follows.
Proof For any r > 0, let Ω_r = {u ∈ C([0, T], E) : ‖u‖_C ≤ r}. Step 1: We will prove that K : Ω_r → Ω_r, that is, there exists r > 0 such that K(Ω_r) ⊂ Ω_r. Suppose, on the contrary, that for each r > 0 there exist u_r ∈ Ω_r and t ∈ [0, T] such that ‖(Ku_r)(t)‖ > r. According to Lemma 3.2(i) and (H_3), we obtain an estimate of ‖(Ku_r)(t)‖. Dividing both sides by r and taking the limit superior as r → ∞, we obtain a contradiction. Therefore K : Ω_r → Ω_r.
Step 3: We will prove that K(Ω r ) is equicontinuous. For any u ∈ Ω r and 0 ≤ t 1 < t 2 ≤ T, we have
By Lemma 3.2, it is clear that I 1 → 0 as t 1 → t 2 and we obtain α and hence I 2 → 0 and I 3 → 0 as t 2 → t 1 . For t 1 = 0 and 0 < t 2 ≤ T, it easy to see that I 4 = 0. Then, for any ε ∈ (0, t 1 ), we have It follows that I 4 → 0 as t 2 → t 1 and ε → 0 by Lemma 3.2(iv) and (iii). Therefore, which means that K(Ω r ) is equicontinuous.
Obviously, K(0) is relatively compact in E. Let 0 ≤ t ≤ T be fixed. Then, for every ε > 0 and δ > 0, let u ∈ Ω r and define an operator K ε,δ on Ω r by Then, by the compactness of T(ε α δ) for ε α δ > 0, we see that the set K ε,δ (t) = {(K ε,δ u)(t) : u ∈ Ω r } is relatively compact in E for all ε > 0 and δ > 0. Furthermore, for any u ∈ Ω r , we have Therefore, there are relatively compact sets arbitrarily close to the set K(t) for t > 0. Hence, K(t) is relatively compact in E.
Therefore, by the Arzelá-Ascoli theorem K(Ω r ) is relatively compact in C([0, T], E).
Thus, the continuity of K and the relative compactness of K(Ω_r) imply that K is completely continuous. By the Schauder fixed point theorem, we see that K has a fixed point u* in Ω_r, which is a mild solution of (1). The proof is complete.
Remark 4.2 From Theorem 4.1, we notice that if ψ is a bijective function, then the problem (1) has at least one mild solution provided that
Theorem 4.3 Assume (H 4 ) holds. Then the problem (1) has a unique mild solution.
Proof Let u_1 and u_2 be solutions of the problem (1) in Ω_r. Then, for each i ∈ {1, 2}, the solution u_i satisfies the mild solution representation. Then, for any t ∈ [0, T], we obtain an estimate for ‖u_1(t) − u_2(t)‖, where k* = sup_{0≤t≤T} |k(t)|. By using the Gronwall inequality (Lemma 2.11), we conclude that u_1 ≡ u_2. Therefore, the problem (1) has a unique mild solution u* ∈ Ω_r.
Theorem 4.4 Suppose that conditions (H_1)-(H_3) hold. Then, for any u_0 ∈ E, the problem (1) has a mild solution u on a maximal interval of existence [0, T_max).
Proof We notice that a mild solution u of the problem (1) defined on [0, T] can be extended to a larger interval. Therefore, repeating the procedure and using the method of steps in Theorem 4.1, we can prove that there exists a maximal interval [0, T_max) of existence of the mild solution u of the problem (1). We want to prove that if T_max < ∞ then lim_{t→T_max} ‖u(t)‖ = ∞. First, we will prove that lim sup_{t→T_max} ‖u(t)‖ = ∞. Assume by contradiction that lim sup_{t→T_max} ‖u(t)‖ < ∞.
Similar to Step 3 of Theorem 4.1, we can prove that ‖u(t′) − u(t″)‖ → 0 as t′, t″ → T_max. Therefore, by the Cauchy criterion we see that lim_{t→T_max} u(t) = u_1 exists. By the first part of the proof, there exists a δ > 0 such that the solution can be extended to [0, T_max + δ), and we know that the fractional evolution equation has a mild solution on [T_max, T_max + δ). This means that the mild solution of the problem (1) can be extended to [0, T_max + δ), which contradicts the maximality of [0, T_max). Hence, lim sup_{t→T_max} ‖u(t)‖ = ∞. Now, we will prove that if T_max < ∞, then lim_{t→T_max} ‖u(t)‖ = ∞. If this is not true, then there exist a constant K > 0 and a sequence t_n → T_max such that ‖u(t_n)‖ ≤ K for all n. Since t → u(t) is continuous and lim sup_{t→T_max} ‖u(t)‖ = ∞, we can find a sequence a_n such that a_n → 0 as n → ∞, ‖u(t)‖ ≤ M(K + 1) for t_n ≤ t ≤ t_n + a_n and ‖u(t_n + a_n)‖ = M(K + 1) for all n sufficiently large. But we have M(K + 1) = ‖u(t_n + a_n)‖ ≤ ‖S^α_ψ(a_n, 0)u(t_n)‖ + ∫_{t_n}^{t_n+a_n} (ψ(t_n + a_n) − ψ(s))^{α−1} ‖T^α_ψ(t_n + a_n, s) f(s, u(s))‖ ψ′(s) ds, and the last integral is bounded by a constant multiple of ∫_{t_n}^{t_n+a_n} (ψ(t_n + a_n) − ψ(s))^{α−1} ψ′(s) ds, which implies that M(K + 1) ≤ MK as a_n → 0, a contradiction. Therefore, we find that if T_max < ∞, then lim_{t→T_max} ‖u(t)‖ = ∞.
Next, we discuss the existence of a global mild solution for the problem (1). To this end, we need to replace the assumption (H_3) by (H_5). Proof It is clear that (H_5) implies (H_3). Therefore, by Theorem 4.4 we know that the problem (1) has a mild solution u on a maximal interval of existence [0, T_max). By the proof of Theorem 4.4, we can see that the problem (1) has a global mild solution if ‖u(t)‖ is bounded for every t in the interval of existence of u. It suffices to show that ‖u(t)‖ is bounded for every t ∈ [0, T_max) with T_max < ∞.
Then for any 0 ≤ t ≤ T max we have and By Corollary 2.12, we obtain which means that u(t) is bounded for every t ∈ [0, T max ).
Existence and uniqueness of mild solution under noncompact analytic semigroup
In this section, we will prove the existence of mild solution for the problem (1) under the condition of a noncompact analytic semigroup.
Proof For any r > 0, let Ω_r = {u ∈ C([0, T], E) : ‖u‖_C ≤ r}. Then Ω_r is a bounded, closed and convex subset of C([0, T], E). Define the operator K as in Theorem 4.1. Using the same argument as in Theorem 4.1, we obtain that K : Ω_r → Ω_r is continuous and K(Ω_r) is equicontinuous. Then it is sufficient to prove that K : Ω_r → Ω_r is condensing.
where Co is the closure of convex hull. Then, by Lemma 2.21 we obtain Co K(Ω r ) ⊂ Ω r is bounded and equicontinuous. Now, we will prove that K : D → D is a condensing operator. For any D ⊂ Co K(Ω r ), by Lemma 2.17, we see that there exists a countable set D 0 = {u n } ⊂ D such that By the equicontinuity of D, we know that D 0 ⊂ D is also equicontinuous. Therefore, by Lemma 2.20, we have Since K(D 0 ) ⊂ D is bounded and equicontinuous, we obtain by Lemma (2.18). It follows that Thus, K : D → D is a condensing operator. Therefore, by Lemma 2.22, K has at least one fixed point u * in Ω r , which is a mild solution of (1). The proof is complete.
Remark 5.2 From Theorem 5.1, we notice that if ψ is a bijective function, then the problem (1) has at least one mild solution provided that the corresponding condition holds. Proof The proof uses the same argument as in Theorem 4.5.
Proof
Let v ∈ C 1 ([0, T], ∞) be a solution of inequality (16). Then we get for all t ∈ [0, ∞). Let us denote by u ∈ C([0, T], ∞) the unique mild solution of the Cauchy problem We have By Corollary 2.12, we obtain The proof is complete.
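The Mittag-Leffler function underlying the Mittag-Leffler-Ulam-Hyers stability concept used above can be evaluated, for moderate arguments, by a truncated power series. The sketch below is an illustrative utility and not an algorithm from the paper; robust evaluation for large arguments requires dedicated methods.

```python
import math

def mittag_leffler(z, alpha, beta=1.0, tol=1e-14, max_terms=160):
    """Truncated series E_{alpha,beta}(z) = sum_{k>=0} z**k / Gamma(alpha*k + beta).
    Adequate for small-to-moderate |z| only."""
    total = 0.0
    for k in range(max_terms):
        x = alpha * k + beta
        if x > 170.0:                 # math.gamma overflows beyond ~171
            break
        term = z ** k / math.gamma(x)
        total += term
        if abs(term) < tol * max(1.0, abs(total)):
            break
    return total

# Consistency check: E_{1,1}(z) reduces to exp(z).
print(mittag_leffler(1.0, alpha=1.0), math.exp(1.0))
```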
Examples
In this section, we give examples of fractional differential equations for the compact and noncompact semigroup cases. The main results can be applied to a larger class of Caputo fractional derivatives with respect to ψ. In particular, our results can be reduced to the examples in [25,32] when ψ(t) = t.
Then, for t ∈ [0, 1], we have the corresponding settings, where C²(0, 1) is the set of all continuous functions defined on (0, 1) which have continuous partial derivatives of order less than or equal to 2, and H¹₀(0, 1) is the completion of C¹(0, 1) with respect to the norm ‖u‖_{H¹(0,1)}.
Conclusion
We construct a mild solution for the fractional evolution equation based on the Laplace transform with respect to the ψ-function. We obtain the local and global existence and uniqueness of the mild solution for the problem with ψ-Caputo fractional derivative, which can be reduced to the classical Caputo fractional derivative considered in previous work. Furthermore, the form of the fundamental solution obtained in this work is a foundational result for further investigation, such as problems with perturbation, delay and nonlocal terms.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2010-07-16T00:00:00.000
|
17592804
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2010.00014/pdf",
"pdf_hash": "7497d2f5f7d3fb8b9e7814c047825889e5888f55",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44573",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "80ea221795561c8cf64dd9035fc91d0b3af7e7a6",
"year": 2010
}
|
pes2o/s2orc
|
Carotid Endarterectomy vs. Carotid Stenting: Fairly Comparable or Unfairly Compared?
Carotid revascularization with carotid endarterectomy (CEA) has been shown to be superior to medical therapy for stroke prevention in symptomatic and asymptomatic patients with moderate to severe stenosis who meet well defined medical and surgical selection criteria. The benefit of CEA is significantly higher in symptomatic compared to asymptomatic patients. Carotid artery stenting (CAS) has emerged as an alternative in patients who are considered high surgical-risk due to co-existent medical co-morbidities or anatomical high-risk features. Since its development in the early 1990’s, the technique of endovascular carotid revascularization has been undergoing a continuous maturation process mainly due to a change from the initial use of balloon expandable stents to self-expanding stents, the introduction of and continuously improving array of emboli prevention devices (EPD's) and last but not least increasing operator experience. This culminated in the randomized SAPPHIRE trial of protected CAS [i.e., CAS performed with EPD] vs. CEA in high surgical-risk patients, that showed that CAS was non-inferior to CEA with lower peri-procedural complication rates as well as lower rates of restenosis (Yadav et al., 2004). Furthermore, increased experience with this technique has led to the realization that just like with CEA, there are patients (e.g., older age, excessive vascular tortuosity or calcification) who are high-risk for CAS. (Chaturvedi et al., 2010).
The question of whether CAS is an alternative to CEA in patients without high surgical-risk is addressed by the results of three randomized European studies comparing CEA to CAS in patients without high surgical-risk medical or anatomical features (Mas et al., 2006; Ringleb et al., 2006; Ederle et al., 2010) (EVA-3S, SPACE, ICSS). More recently, the North American CREST study results were presented at the International Stroke Conference in 2010 and subsequently published. The results of these trials were discrepant, with two (EVA-3S and ICSS; Mas et al., 2006; Ederle et al., 2010) showing worse outcomes with CAS, one failing to prove non-inferiority (SPACE; Ringleb et al., 2006) and one showing equivalence (CREST). Why were the results so different? To better understand these discrepancies, it is important to understand the differences in trial methodology.
The first and most important methodologic difference was operator experience across the four studies. In EVA-3S, operators had to have performed at least five carotid stent procedures or be supervised by a physician who was qualified (Mas et al., 2006). Following publication of the EVA-3S manuscript it was revealed that only 16% of patients were treated by operators with more than 50 CAS cases of experience and 39% of patients were treated by physicians in training (Clark, 2010). In SPACE, the operator had to have performed 25 percutaneous angioplasty or stent procedures, without a specific requirement for carotid procedures. Operators who had insufficient CAS experience (10 cases) could enroll patients if they had the assistance of a tutor (Ringleb et al., 2006). In ICSS, the requirement was 50 stent procedures, of which a minimum of 10 were required to be carotid artery procedures (Ederle et al., 2010). As with EVA-3S and SPACE, inexperienced operators could have the assistance of a tutor. The trend across all of these studies is that many operators may have had some experience with peripheral stent placement, but this experience was not necessarily acquired in the carotid arteries. As aortic arch tortuosity is emerging as one of the critical factors determining procedural risk with CAS, the lack of proof of experience with carotid catheterization as a prerequisite for participation, seen across all the European studies, is arguably the most important factor responsible for the overall high rates of stroke reached in these studies. By contrast, prospective CAS registries in North America, which preceded CREST, required a higher level of experience with brachiocephalic catheterization and carotid interventions and have reported rates of stroke that are significantly less than those reported in the European studies. Table 1 summarizes the differences in selection criteria for stenting across the recent randomized trials. Site selection may also have been an issue; in ICSS, two centers were found to have an extraordinarily high rate of complications and were removed from the study after 5 of 11 patients experienced disabling stroke or death. The inclusion of inexperienced physicians and allowing them to perform the procedure in the presence of a tutor as part of a randomized trial casts doubt on the validity of these trials.
By contrast, the vetting process for the CREST study was more rigorous and required a minimum experience of 10-30 carotid stent procedures with 0.014″ wire systems, experience with EPD, and a documented 30-day stroke and death rate of "6-8%". In addition, after admittance into the study there was a required lead-in phase of up to 20 patients designed to ensure operators had adequate experience and acceptable complication rates prior to randomizing patients. The standards of rigorous vetting for proceduralists performing carotid revascularization were set by NASCET and ACAS, the first trials to show benefit of CEA compared to medical therapy (North American Symptomatic Carotid Endarterectomy Trial, 1991; Endarterectomy for asymptomatic carotid artery stenosis, 1995), in which only experienced surgeons chosen according to strict criteria were allowed to participate. As opposed to the stenting arm operators, carotid surgeons in the European randomized trials of CAS vs. CEA were more experienced compared to their interventionalist counterparts: no inexperienced surgeon was allowed to perform the procedure whether or not a tutor was present.

The second major protocol difference was the use of peri-procedural dual anti-platelet medications. In the ICSS and EVA-3S studies, the use of dual anti-platelet medications was "recommended". In EVA-3S, 17% of patients were not on dual anti-platelet medications prior to the procedure and nearly 15% did not have these medications post procedure (Mas et al., 2006). Data regarding peri-procedural anti-platelet medications were surprisingly not reported in ICSS (Ederle et al., 2010).

The issue of anti-platelet therapy with carotid stenting is of great importance because following stent implantation, especially within the first week, there is activation of platelets and an increase of ADP-induced platelet aggregation (Szapary et al., 2009). Vast experience from the coronary literature, confirmed by carotid stenting studies, has shown that the use of dual anti-platelet therapy is essential in the prevention of peri-procedural ischemic events and acute stent thrombosis (Bhatt et al., 2001). In the CREST study, the use of dual anti-platelet therapy was required as part of the protocol. An issue not addressed in any of the hitherto conducted studies and which requires further study is that of biochemical resistance to anti-platelet agents. The recently described association of the cytochrome P450 2C19 genotype with reduced effectiveness of clopidogrel (Shuldiner et al., 2009) and associated cardiovascular death raises concerns that may be of great relevance for carotid stenting; adverse thromboembolic events following carotid stenting may be reduced in the future by tailoring peri-procedural antithrombotic agents to patient-specific response to the drug.
The third consideration was the lack of exclusion criteria for stenting. By contrast, high surgical-risk criteria precluding randomization were present for the CEA arms in all the above randomized trials. The EVA-3S trial did not include angiographic exclusion criteria for stenting, which, combined with low operator experience, could account for the significant rate of perioperative stroke and death seen in the CAS arm, a rate not reported since the very first series of carotid stenting reported in the late 1990's (Naylor et al., 1998). What has become clear is that, akin to CEA, patient selection for CAS is key to minimizing peri-procedural risks. Consistent with this experience gained from CEA, the CREST trial had rigorous angiographic exclusion criteria such as severe tortuosity and calcification, intraluminal thrombi and large, bulky plaques (Roubin et al., 2006), which further explains these discrepant results.

Fourthly, ICSS, EVA-3S and SPACE allowed the use of different stent and EPD types. By allowing operators to select the stent and EPD type, there may have been unfamiliarity with the devices, particularly when there is a lack of consistent use. In contrast, the CREST study utilized one single stent and EPD system, the Acculink™ stent and Accunet™ EPD (Abbott Vascular, Santa Clara, CA, USA), for each patient, including for those patients treated within the lead-in phase. This allowed the operator to become familiar with the particularities of one single device.
Lastly, a potentially critical consideration was the inconsistency of EPD use across all of the European studies. EPDs were used in 27%, 72% and 91% of patients in SPACE, ICSS and EVA-3S respectively (Mas et al., 2006; Ringleb et al., 2006; Ederle et al., 2010). Although there has been controversy with regards to the benefit of EPD in preventing ischemic stroke associated with CAS, the lack of a consistent protocol represents a significant shortcoming. The CREST study protocol required the use of an EPD for all patients enrolled. To what extent this requirement contributes to the lower periprocedural risk of stroke of 4.1% observed in CREST vs. 7.5-8.8% in the European studies is not entirely clear. Although no randomized trials have compared protected CAS vs. unprotected CAS, several large series, including a registry comprising >12,000 patients comparing the two approaches, have shown an approximately 50% reduction in perioperative stroke risk with their use (Wholey et al., 2003). The ICAROS trial also showed a significant reduction of events with the use of EPD in patients with echolucent plaques (Biasi et al., 2004). Although MRI-based studies have not shown a difference in the number of DWI lesions when comparing protected to unprotected CAS, the studies were not powered to detect if a clinical difference may exist. Moreover, the use of an EPD may be a surrogate for operator experience. Despite the controversy surrounding EPDs, the lack of a consistent protocol with regards to EPDs likely accounts for some of the peri-procedural stroke rates noted amongst the trials.
The inclusion of asymptomatic and symptomatic carotid stenosis patients in the CREST and SAPPHIRE trials as opposed to only symptomatic carotid stenosis in the other trials is another important difference that raises the question regarding its contribution to the inconsistent results amongst the trials. Registries for CAS have shown differences in major adverse clinical event rates when comparing symptomatic lesions to asymptomatic stenosis. The CREST trial and SAPPHIRE trials included 47% and 71% of patients respectively with asymptomatic stenosis. This may also account for the differences in the overall 30-day event rates, but does not account for the final results of the North American studies compared to their European counterparts.
In conclusion, the CREST study showed non-inferiority of protected CAS to CEA. The ICSS, EVA-3S and SPACE studies failed to reach the same conclusion due to higher stroke rates in the CAS group. For several reasons outlined above but particularly because of insufficiently rigorous vetting standards used for carotid stent operators, the latter studies showed that inexperienced operators without a defined protocol will achieve inferior results to CEA performed by experienced operators. Available data from CREST a trial with vigorous standards for operator experience allow us to reach the conclusion that in addition to aggressive medical therapy there are two equivalent treatment options for symptomatic or asymptomatic carotid stenoses: CEA and protected CAS. Therefore, the task ahead and one that should be pursued in daily practice by endovascular specialists and carotid surgeons as part of multidisciplinary teams is to determine which procedure is best suited for each individual patient according to patient specific anatomical and medical considerations.
|
v3-fos-license
|
2020-03-19T06:24:02.544Z
|
2020-03-18T00:00:00.000
|
213183787
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.js.20200802.13.pdf",
"pdf_hash": "3451448134eadd2ad12e0a2019ccb928c589de6b",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44576",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "3451448134eadd2ad12e0a2019ccb928c589de6b",
"year": 2020
}
|
pes2o/s2orc
|
Surgical Approach of Open Reduction and Internal Fixation for Proximal Humeral Fracture in the Elderly
Background: The incidence of proximal humeral fracture is increasing gradually, and many patients are treated with open reduction and internal fixation. As surgical techniques and concepts have matured, attention has turned to optimizing the surgical result through the choice of surgical incision, aiming at less trauma, less bleeding, fewer postoperative complications and faster postoperative recovery. However, owing to the complexity of shoulder anatomy, scholars have created different surgical approaches from different perspectives. Objective: Although the effect of open reduction and internal fixation is established, there are still some differences in outcome between surgical approaches. A summary of the surgical approaches for proximal humeral fracture and of the related research progress is therefore needed, as it facilitates selection of the optimal incision and thereby improves the prognosis. Method: Selective literature review. Result: At present, common surgical approaches include the lateral approach, the anteromedial approach, the anterolateral approach, small incision approaches and other approaches. This paper describes and compares the advantages and disadvantages of each approach, so that the best approach can be chosen for different fracture types. Conclusion: Because of the complexity of the anatomical relationships of the shoulder joint and the displacement and classification of the fracture, the proximal humerus can be opened and exposed from different angles. Choosing a safe surgical approach is one of the key steps of the whole operation and plays an important role in the postoperative result. In this paper, the common and newer approaches for open reduction and internal fixation of proximal humeral fracture are reviewed, providing new ideas for the design of the surgical plan.
Introduction
With the acceleration of population aging, the proportion of middle-aged and elderly people is increasing. Proximal humeral fracture is one of the four most common fractures in the elderly [1]. The incidence rate is increasing year by year and accounts for 7-9% of all fractures [2,3], and the incidence is higher in women than in men, which is related to the higher prevalence of osteoporosis in elderly women [4]. The fracture often results from a low-energy injury caused by a fall onto level ground from a standing position with the arm extended [5,6]. Although the previous literature has pointed out that about 60-80% of non-displaced or slightly displaced fractures can be treated conservatively [7], open reduction and plate internal fixation is usually required to restore the normal and stable anatomical structure of the proximal humerus for displaced and multi-part fractures [8]. In addition, compared with the past, the scope of surgical indications has expanded and the threshold for surgery has been lowered [9]; many complex proximal humeral fractures are also treated with open reduction and internal fixation. Most surgeons focus on restoration of the anatomical position of the joint, strong internal fixation, repair of the damaged rotator cuff tissue and early restoration of the patient's function, while discussion and summary of the surgical approach are scarce in both the literature and practice. Although the surgical technique is mature and the treatment effect is reliable, there are still some differences in surgical results, postoperative rehabilitation and postoperative complications when different surgical approaches are chosen. Hence, it is necessary to understand the research progress and the available surgical approaches, and to select the optimal approach, in order to develop the surgical plan and improve the prognosis.
Anatomy
Due to the complex anatomical relationships among the muscles, blood vessels and nerves of the shoulder joint, the choice of surgical approach is relatively difficult and requires the surgeon to have a thorough grasp of the anatomy of the shoulder and of the displacement of the fracture. The posterior cord of the brachial plexus continues as the axillary nerve. The projection of the axillary nerve trunk lies 5.0-7.4 cm from the lateral edge of the acromion [10]. It initially lies lateral to the radial nerve and behind the axillary artery, and it innervates the anterior, middle and posterior parts of the deltoid, running along the deep surface of the muscle. The deltoid is divided into three heads by muscular intervals and wraps around the shoulder joint from the front, the side and the back [11]. It arises from the lateral third of the clavicle, the lateral acromion and the scapular spine, and its fibres converge into a tendon that inserts on the deltoid tuberosity of the lateral humerus. The fracture fragments can damage the axillary nerve, resulting in paralysis of the deltoid and limitation of shoulder abduction. The subclavian artery continues as the axillary artery, which continues as the brachial artery along the teres major tendon and the lower margin of the latissimus dorsi. The posterior humeral circumflex artery, deep to the deltoid, passes around the surgical neck and anastomoses with the anterior humeral circumflex artery. The anterior and posterior humeral circumflex arteries provide most of the blood supply of the humeral head and the great tubercle of the humerus [12].
Surgical Approach
Lateral approach: A longitudinal incision is made from the lateral margin of the acromion to the lateral side of the upper humerus (Figure 1). The deltoid is bluntly separated along the muscle fibre interval down to the axillary nerve. The assistant pulls the deltoid to both sides, fully exposing the broken end of the fracture. The axillary nerve around the surgical neck of the humerus should be protected when exposing the surgical area. This surgical approach is simple and quick, and the risk of axillary nerve injury is low. The operative field of vision is clear, and the lateral surface of the proximal humerus is well exposed. Compared with the anteromedial approach, the lateral approach is more convenient for addressing displaced fractures. Plate placement is relatively easy and convenient [13], and functional recovery is good. The risk of axillary nerve palsy is low and no serious complications have been found [14,15]. Korkmaz et al. found that, for AO/ASIF type B and C fractures, the lateral approach achieved better reduction of the humeral head and the great tubercle of the humerus than the anteromedial approach, with a higher postoperative shoulder function score. During the operation, 270° reduction and fixation of the proximal humeral fracture, as well as reduction and fixation of posterior fragments, are more convenient, and clear exposure of the axillary nerve can reduce iatrogenic injury [13]. However, in order to better expose the plate area during the operation, it is easy to over-pull the muscle and strip too much soft tissue. The decreased curative effect in some patients may be related to excessive destruction of soft tissue and impairment of the blood supply of the fracture [16]. In addition, the anteromedial fracture fragment is difficult to expose, which is not conducive to its manipulation. Therefore, if the lateral approach is chosen, surgical skill is required to reduce overstretching of the muscle and stripping of soft tissue, especially for complex comminuted fractures, and more attention should be paid to exposure of the surgical field to reduce the incidence of complications. Anteromedial approach: A longitudinal incision is made at the front of the shoulder joint, starting from the coracoid process, running medially and ending at the deltoid. After the skin incision, the cephalic vein is found, which is the anatomical landmark of the interval (Figure 2). The space between the deltoid and pectoralis major is bluntly separated; if necessary, the origin of the anterior deltoid tendon is released, and the fracture and everted tissue are fully exposed. However, this can easily aggravate the damage to the blood supply of small fracture fragments and to the musculocutaneous nerve, which may affect abduction and forward flexion of the shoulder joint. During the separation, damage to the blood supply of the joint capsule and the rotator cuff should be minimized, and the cephalic vein should be protected. It is the most familiar and classical surgical approach for most surgeons [17][18][19], and it provides the largest surgical exposure compared with the other two classic approaches. Harmer et al. also concluded that this approach improves visualization of the surgical field by comparing its quantitative exposure surface area with that of the anterolateral approach [20]. The conclusion is the same as that of other authors [16]. But there are some limitations to this approach, because the deltoid is fan-shaped and spreads around the upper humerus.
It is difficult to enter the posterolateral side of the shoulder joint when the fracture of the great tubercle of the humerus is reduced or the implant is placed [21][22][23]. It is often necessary to dissect the lateral humeral tissue from the inside out, but this manoeuvre can easily damage the anterior humeral circumflex artery, with a risk of ischemic necrosis of the humeral head [24][25][26][27]. According to Cardet's literature, the incidence of ischemic necrosis of the humeral head was 37% [28]. This may be related to injury of the blood vessels. Therefore, surgeons should pay attention to protection of the artery and its branches during the operation, which plays an important role in the prevention of ischemic necrosis of the humeral head [29]. However, Hettrich's study found that the posterior humeral circumflex artery is the main blood supply of the humeral head, while the anterior humeral circumflex artery mainly supplies the great tubercle of the humerus [30]. This also explains why, when a proximal humeral fracture destroys the anterior humeral circumflex artery, the humeral head does not necessarily undergo ischemic necrosis. We should therefore pay attention to the posterior humeral circumflex artery: there should not be too much stripping of the posteromedial tissue, and during reduction care should be taken not to damage the artery, in order to reduce the risk of ischemic necrosis of the humeral head. Anterolateral approach: According to Gardner's description, this approach opens and exposes the lateral humerus and the great tubercle of the humerus between the anterior and middle bundles of the deltoid. It is beneficial for indirect reduction of the humeral head, reduces soft tissue stripping and damage to the blood supply, and reduces damage to the deltoid [23,[30][31][32][33]. The interval between the muscle bundles is an avascular area, which also facilitates placement of a fixed-angle implant [16]. However, attention must be paid to the anatomical location of the axillary nerve (Figure 3). The main risk of this approach is injury to the axillary nerve, so it is necessary to dissect the axillary nerve to reduce the risk of iatrogenic nerve injury [10,34,35]. No iatrogenic injury to the axillary nerve occurs when the longitudinal incision does not exceed 6 cm and the axillary nerve is not retracted more than 1 cm from the bone cortex [31]. This approach has the advantages of less trauma, shorter operating time and less bleeding compared with other approaches. However, it is more difficult to use this approach than other approaches in a subsequent second operation. Because the incidence of proximal humeral fracture is high in the elderly, if there is no particular discomfort there is no need for a second operation to remove the plate, thereby avoiding the risk of reoperation [36]. Isiklar et al., comparing this approach with the anteromedial approach, found that patients in the anterolateral approach group showed better stability scores in the early postoperative period and better restoration of the humeral head and the great tubercle of the humerus [37]. In terms of complications, compared with the anteromedial approach, Benjamin assessed the incidence of complications to be about the same, but the distribution of complications was different: the complications of the anterolateral approach mainly affect the head region of the humerus.
This may also be due to the relatively small incision of the anterolateral approach and the insufficient field of vision to expose the head, which affects the reduction and fixation of the head fracture block. However, it is easier to fix the plate on the shaft of the humerus [38]. The bone density in the posterior, inferior and medial areas of the humeral head is higher [39]. Therefore, the placement of plates and screws to the best position during operation can promote fracture healing and reduce the risk of internal fixation loosening. Do not place the steel plate too high or too partial medial is to avoid subacromial impingement syndromet and affect the function of the internal rotation of the shoulder joint. Small incision approach: The traditional approach for proximal humerus fracture has a series of disadvantages, such as single incision, long incision, large trauma, more bleeding, long postoperative rehabilitation time, which affect the surgical effect and postoperative rehabilitation. Some doctors have improved on the traditional incision. In order to reduce the length of the original traditional approach incision and achieve minimally invasive combined with MIPPO technology to complete the open reduction and internal fixation of fracture, Li et al. compared with the traditional anteromedial approach, the small incision group was superior to the traditional approach group in terms of intraoperative blood loss, operation time and postoperative function score [40]. In the same way, it can reduce the soft tissue peeling and damage through the small incision of the anterolateral approach combined with MIPPO technology. The combination of the anterolateral approach with MIPPO technology to reduce soft tissue dissection and injury, accelerate postoperative wound recovery and relieve pain, improve fracture end healing [41]. Because of the risk of injury to the axillary nerve, the distal screw should not be used on the internal plant [42]. However, no damage to the nerve is found in the small incision approach [43]. In the view of the potential risk of damaging the axillary nerve [44], Buecking et al. explored the axillary nerve with their fingers in the subdeltoid capsule and marked its course on the surface [38]. Ruchholtz et al. used five hole steel plate during the operation, and the top of the steel plate contacted with the bone. Three holes at the distal end of the plate were fixed with screws to avoid the axillary nerve [45]. Additionally, the small incision approach also has some limitations. Some studies have shown that the incidence of complications is related to the professional experience of doctors [45]. For small hospitals and young doctors, the learning curve is tortuous. Because of the small incision, the exposure field of the fracture is not enough, so it is necessary to fully evaluate the fracture before an operation. During the operation, repeated fluoroscopy is needed to understand the reduction and fixation. Both patients and doctors need to bear X-ray radiation repeatedly. However, the poor reduction effect or complex proximal humeral fracture can easily lead to malunion of fracture and ischemic necrosis of fracture block [46]. But we can also foresee that with the gradual improvement of the minimally invasive approach and the improvement of surgical techniques, the small incision approach combined with MIPPO can also be used to achieve the good surgical effects for complex fractures.
Other incision approaches: Extended anterolateral approach: Mackenzize reported an extended anterolateral approach for shoulder replacement, through which the operator can safely expose the anterolateral proximal humerus; no axillary nerve injury was found in any patient. However, we note that the method of evaluating the axillary nerve was unusual and that the evaluation methods used for different patients were not the same [47]. Gardner also exposed the axillary nerve through this approach, and no structural damage was found, verifying its validity and safety. Robinson et al. also found that this approach can avoid iatrogenic injury of the axillary nerve caused by muscle retraction and blind reduction [22]. Mouraria, combining the related literature, indicated that the extended anterolateral approach can reduce the risk of iatrogenic axillary nerve injury, with good postoperative functional recovery [48].
Deltoid lift approach: For some complex fractures, the surgical field is often not exposed enough. Ting et al. proposed a new incision approach for transverse incision from the medial 3cm of the acromioclavicular joint of the cadaveric body. The skin is cut along the shoulder joint and the forearm lateral to below the deltoid stop. The tension of the axillary nerve decreased and retracted outward. According to quantitative measurement, the surface area of the exposed surgical field is relatively large, with an average of 38cm²-53cm². It can not only keep the main nerve and blood vessels of the deltoid, but also displaying the key anatomical signs needed by operation [49]. Double incision approach: Gallo et al. proposed a double incision approach for complex proximal humeral fractures with tuberculum majus displacement. The anteromedial incision is used to expose the humeral head and shaft, and the lateral small incision is used to restore the tuberculum majus. The selection of the approach can make the steel plate pass through the injured side to fix the fracture block. It can not only reduce the peeling of soft tissue and the damage of blood supply, but also reducing the injury of the deltoid and the iatrogenic injury of the nerve caused by excessive traction, so as to better restore the great tubercle of the humerus [50]. The potential complication of this approach is postoperative joint stiffness, so strengthening the rehabilitation exercise of patients can be significantly improved within one year.
Conclusion
Pain, deformity and limited movement of the shoulder joint have a great influence on quality of life. Proximal humeral fracture is one of the common osteoporotic fractures in the elderly. With the progress of orthopaedic surgery and the improvement of quality of life, more and more patients choose open reduction and internal fixation. However, the operative approach is constrained by the anatomical relationships and the displacement of the fracture, and a thorough command of the anatomy of the shoulder joint is the starting point for exposure of the fracture site. Based on these anatomical relationships, a variety of surgical approaches have been designed for clinicians to select in specific circumstances. From the research progress reviewed above, we find that each approach has its own advantages and disadvantages. Building on the original techniques, incisions from all angles, from the traditional long incision to small incisions combined with minimally invasive technology, and from single to double incisions, are continuously being reconsidered and innovated in order to provide new ideas and methods for clinical application, to select an optimal surgical approach for each patient, and to achieve the best surgical effect.
|
v3-fos-license
|
2018-11-15T01:31:23.332Z
|
2016-08-04T00:00:00.000
|
53702558
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.scirp.org/journal/PaperDownload.aspx?paperID=69654",
"pdf_hash": "278fc4304a46915e6ffe7048a055efb7b078d31e",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44578",
"s2fieldsofstudy": [
"Mathematics",
"Biology"
],
"sha1": "278fc4304a46915e6ffe7048a055efb7b078d31e",
"year": 2016
}
|
pes2o/s2orc
|
Convergence Properties of Piecewise Power Approximations
We address the problem of convergence of approximations obtained from two versions of the piecewise power-law representations arising in Systems Biology. The most important cases of mean-square and uniform convergence are studied in detail. Advantages and drawbacks of the representations as well as properties of both kinds of convergence are discussed. Numerical approximation algorithms related to piecewise power-law representations are described in the Appendix.
Introduction
For a given positive function v(x), defined in a domain Ω ⊂ R^n_+, let us calculate its partial derivatives in the logarithmic space:

f_j(x) = ∂ ln v(x) / ∂ ln x_j, j = 1, ..., n, (1)

where x = (x_1, ..., x_n). In this paper we study piecewise constant approximations of the quantities (1) or, in other words, nonlinear approximations of the function v by piecewise power functions.
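A minimal numerical illustration of the quantities in (1): the kinetic order of v with respect to x_j is the slope of ln v in ln x_j, which can be estimated by a central difference in logarithmic coordinates. The Michaelis-Menten rate used below as a test function, and all names, are illustrative assumptions; its exact kinetic order K/(K + x) provides a check.

```python
import math

def kinetic_order(v, x, j, rel_step=1e-4):
    """Estimate f_j(x) = d ln v / d ln x_j at the point x (a list of positive values)
    by a central difference in the logarithmic space."""
    x_up, x_dn = list(x), list(x)
    x_up[j] *= (1.0 + rel_step)
    x_dn[j] *= (1.0 - rel_step)
    return ((math.log(v(x_up)) - math.log(v(x_dn)))
            / (math.log(x_up[j]) - math.log(x_dn[j])))

# Test: Michaelis-Menten rate v = Vmax * x / (K + x); its kinetic order is K / (K + x).
Vmax, K = 2.0, 0.5
v = lambda x: Vmax * x[0] / (K + x[0])
x0 = [1.0]
print(kinetic_order(v, x0, 0), K / (K + x0[0]))   # both approximately 1/3
```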
This study is first of all motivated by applications in Systems Biology, where many networks can be described via compartment models

ẋ_i = V_i^+(x_1, ..., x_n) − V_i^−(x_1, ..., x_n), i = 1, ..., n, (2)

with the influx and efflux functions V_i^+ ≥ 0 and V_i^− ≥ 0, respectively.
For instance, in a typical metabolic network used in Biochemical Systems Theory the index i (i = 1, ..., n) refers to the n internal metabolites x_i ≥ 0. The influx V_i^+(x_1, ..., x_n) ≥ 0, resp. the efflux V_i^−(x_1, ..., x_n) ≥ 0, accounts for the rate (velocity) of production (synthesis), resp. degradation, of the metabolite x_i. Another important example is gene regulatory networks, which in many cases can be described as a system of nonlinear ordinary differential equations of the form

ẋ_i = F_i(z_1, ..., z_n) − G_i(z_1, ..., z_n) x_i, i = 1, ..., n, (3)

where x_i(t) is the gene concentration (i = 1, ..., n) at time t, while the regulatory functions F_i and G_i depend on the response functions z_k = z_k(x_k), which control the activity of gene k and which are assumed to be sigmoid-type functions [1].
The derivatives f_j(P) in the logarithmic space are very important local characteristics of biological networks. In Biochemical Systems Theory these derivatives are known as the kinetic orders of the function v, while in Metabolic Control Analysis (see e.g. [2]) they are called elasticities. From the mathematical point of view, these quantities measure the local response of the function v to changes in the dependent variable (for instance, the local response of an enzyme or other chemical reaction to changes in its environment). Thus, they describe the local sensitivity of the function v, the terminology which is widespread in e.g. engineering sciences.
If all influx and efflux functions in (2) have constant kinetic orders, one obtains the so-called "synergetic system", or briefly "S-system":

ẋ_i = α_i ∏_{j=1}^n x_j^{g_ij} − β_i ∏_{j=1}^n x_j^{h_ij}, i = 1, ..., n, (4)

where the exponents g_ij, h_ij represent all the (constant) kinetic orders associated with (4). The right-hand side of an S-system thus contains power functions, and analysis based on S-systems is therefore called the "Power-Law (PL) Formalism", see e.g. [3]-[7].
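The sketch below integrates a small two-variable S-system of the form (4) with scipy; the rate constants and kinetic orders are arbitrary illustrative values, not parameters of any model discussed in the paper, and the system is assumed to keep positive concentrations along the trajectory.

```python
import numpy as np
from scipy.integrate import solve_ivp

# S-system: dx_i/dt = alpha_i * prod_j x_j**g[i, j] - beta_i * prod_j x_j**h[i, j]
alpha = np.array([2.0, 1.5])
beta  = np.array([1.0, 1.0])
g = np.array([[0.0, -0.5],     # production of x_1 is inhibited by x_2
              [0.5,  0.0]])    # production of x_2 is activated by x_1
h = np.array([[0.5,  0.0],     # each variable degrades with kinetic order 0.5
              [0.0,  0.5]])

def s_system(t, x):
    prod_g = np.prod(x ** g, axis=1)   # influx terms V_i^+
    prod_h = np.prod(x ** h, axis=1)   # efflux terms V_i^-
    return alpha * prod_g - beta * prod_h

sol = solve_ivp(s_system, (0.0, 20.0), [0.5, 0.5],
                t_eval=np.linspace(0.0, 20.0, 200))
print(sol.y[:, -1])   # approximate steady state (about x_1 = 4/3, x_2 = 3)
```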
The Power-Law Formalism has been successfully applied to a wide number of problems, for example, to metabolic systems [8], gene circuits [9], and signalling networks [10]. Such systems are very advantageous in biological applications, as the systems' format considerably simplifies mathematical and numerical analysis such as steady state analysis, sensitivity analysis, stability analysis, etc. For instance, calculation of steady states for S-systems is a linear problem (see [7]). For these and other biological and mathematical reasons, it was suggested in [11] to classify such systems as "a canonical nonlinear form" in systems biology.
In many models, however, the kinetic orders may vary considerably. A typical example is a model coming from the Generalized Mass Action (GMA) formalism,

ẋ_i = Σ_r μ_ir γ_r ∏_j x_j^{f_rj}, i = 1, ..., n, (5)

where the power functions describe the rates of the process no. r, while μ_ir is a stoichiometric factor that stands for the number of molecules of x_i produced or consumed in this process. Aggregating the processes in (5) into a net process of synthesis V_i^+ (positive terms) and a net process of degradation V_i^− (negative terms) results in an aggregated system (2), which is not an S-system.
Another example of generic systems with non-constant kinetic orders stems from the Saturable and Cooperativity Formalism [12], reflecting two essential features of biological systems, which gave the name to this formalism (see [13] for more details). In this case, the power-law rates in (5) are replaced by saturable rate expressions involving real parameters n_j, m_j, K_j and L_j (i = 1, ..., n). Another version of the Saturable and Cooperativity Formalism, also mentioned in [12], involves real parameters n_j, m_j, b_i, c_i, α and β.
In the case of gene regulatory networks (3), the sensitivities (1) are non-constant as well, even if one considers the functions F_i and G_i to be multilinear in z_k. In addition, the usage of non-multilinear functions is also known in this theory [14].
Taking into account the importance of kinetic orders/elasticities/sensitivities (1) in Systems Biology, on the one hand, and the convenience of the well-developed analysis of S-systems (stability theory [7], parameter estimation routines [15], software packages), on the other, a new kind of generic representation of compartment systems (2) was suggested in [16] (see also [17] for further applications of this representation). According to this idea, the entire operating domain is divided into partition subsets where all kinetic orders can be viewed as constants. In other words, the system (2) is approximated by a set of S-systems, each being only active in its own partition subset. This way of representing (2) is called the "Piecewise Power-Law Formalism" [18].
From the biological point of view, piecewise power-law representations are useful in many respects, when compared to other ways of approximation, as they take into account biologically relevant characteristics (kinetic orders) rather than the standard partial derivatives. Therefore, piecewise S-systems preserve important biological structures and, at the same time, do not destroy the relatively simple mathematical structure of plain S-systems. For this reason, approximations of a general target function by piecewise power approximations may be of great importance for biological and other modelling. A rigorous mathematical justification of the idea of piecewise power-law approximations is the main purpose of the present paper. More precisely, we consider mean-square and uniform convergence of approximations by piecewise power functions to the target function, provided that the associated partitions of the operating domain Ω satisfy some additional assumptions. One of the challenges is that partitions of the operating domain Ω may not be chosen freely in applications. For instance, the partitions may directly stem from biological properties of the model [17]. Other ways of constructing partitions can be dictated by optimality-oriented algorithms. In the Appendix (see also [18]) we describe such a method, which goes back to the paper [19] and which is based on an automatic procedure allowing one to obtain simultaneously the best possible polyhedral partition and the respective best possible piecewise linear approximation in the logarithmic space.

The main results of the paper are presented in Section 3 (mean-square convergence of piecewise power approximations) and in Section 4 (uniform convergence of piecewise power approximations). Several auxiliary results are proved in Appendices A.1-A.3, while Appendix A.4 presents an approximation algorithm which provides an automated partition and the respective best possible approximation in the logarithmic space for a given number of subdomains. Finally, in Appendix A.5 we explain by example why a direct piecewise power-law fitting is ill-posed.
Preliminaries
Throughout the paper we use the notation summarized in Table 1 (LS stands for least-squares). The operating domain Ω, which we call Cartesian, is assumed to be a closed and bounded (i.e. compact) subset of ℝ^n. Let Δ be its image in the logarithmic space ℝ^n, and let {Ω_i^N}_{i=1}^N be a partition of Ω for any natural N. In some results and algorithms Δ will be a polyhedral domain in the logarithmic space and {Δ_i^N}_{i=1}^N will be a polyhedral partition; in this case Ω_i^N is the preimage of Δ_i^N under the inverse logarithmic transformation.

Table 1. Overview of the basic terminology and notation used in the paper (LS = least-squares): for each object the Cartesian-space quantity and its logarithmic-space counterpart are listed (the domains Ω and Δ, the partitions {Ω_i^N} and {Δ_i^N}, the target functions v and ψ = log v, and the approximations V^N and Ψ^N).

We also put ψ = log v. Letting V^N be a least-squares (LS) power-law fitting to the function v on x ∈ Ω, we consider the piecewise power function obtained from Ψ^N, an LS piecewise linear approximation to the function ψ on Δ, via V^N(x) = exp Ψ^N(y), y = log x. We recall that the parameters c^N and g_{ij}^N of the linear functions Ψ^N are uniquely obtained from the corresponding minimization criterion in the logarithmic space. Alternatively, one can define approximations of the target function v by power functions minimizing the distance in the space Ω.
Our last minimization criterion looks similar to (6), but is, in fact, very different, as the minimum here is taken over all polyhedral partitions {Δ_i^N}_{i=1}^N of the polyhedral domain Δ and all corresponding linear functions. The main advantage of the criteria (8) and (10) is their linearity, which provides the uniqueness of the solution and also makes the process of finding the solution computationally cheap, as it is based on explicit matrix formulas. On the other hand, the use of the logarithmic transformation requires caution: the influence of the data values will change, as will the error structure of the model. Moreover, while the criterion (8) only requires a standard linear regression, the criterion (10) requires a special regression algorithm, still linear, but much more involved (see Appendix A.4 for details).
The criterion (9) gives the best possible approximation in terms of the LS error in the Cartesian space. However, a nonlinear regression algorithm has to be used in this case, which is less advantageous, especially when the number of estimated parameters is large. In addition, the nonlinear regression may have other drawbacks, one of which is ill-posedness (see Appendix A.5).
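A minimal numerical sketch of the two fitting routes just contrasted — linear regression in the logarithmic space versus nonlinear power-law regression in the Cartesian space — is given below. The synthetic data and noise level are arbitrary, and only the standard NumPy/SciPy least-squares routines are used; the sketch is meant to show how the Cartesian-space errors of the two fits can be compared, not to reproduce the criteria (8) and (9) exactly.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(0.5, 5.0, 200)
v = 3.0 * x**1.7 * (1.0 + 0.05 * rng.standard_normal(x.size))  # noisy power-law data

# Log-space route: ordinary linear regression of log v on log x
g_log, c_log = np.polyfit(np.log(x), np.log(v), 1)
a_log = np.exp(c_log)

# Cartesian route: nonlinear least squares directly on v(x) = a * x**g
power = lambda x, a, g: a * x**g
(a_cart, g_cart), _ = curve_fit(power, x, v, p0=(1.0, 1.0))

for name, a, g in [("log-space LS", a_log, g_log), ("Cartesian LS", a_cart, g_cart)]:
    sse = np.sum((v - a * x**g) ** 2)   # error measured in the Cartesian space
    print(f"{name:15s} a={a:.3f} g={g:.3f} Cartesian SSE={sse:.3f}")
```

The Cartesian fit typically gives the smaller Cartesian sum of squared errors, while the log-space fit is cheaper and always has a unique solution, which is precisely the trade-off discussed above.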
Mean-Square Convergence of Piecewise Power Approximations
The results of this section provide the mean-square convergence (L2-convergence) of piecewise approximations by power functions. The involved parameters may be obtained, e.g., according to one of the minimization criteria (8) or (9).

The main technical challenges stemming from the nature of these minimization algorithms can be summarized as follows: 1) the L2-convergence of the approximations in the logarithmic space may not imply the L2-convergence of their images in the Cartesian space (and vice versa); 2) it is not evident that automatic dissections of the operating domain, as e.g. in the algorithms based on the minimization criterion (10), make the diameters of the partition subsets go to zero even if the number of partition subsets tends to ∞.
Three propositions below deal with L2-convergence in the logarithmic domain.

Proposition 1. Let the target function v > 0 be measurable and bounded on Ω and ψ = log v. Suppose that the measurable partitions {Ω_i^N}_{i=1}^N are given. To prove this proposition we need the following lemma, the proof of which can be found in Appendix A.1.

Lemma 1. Let v be measurable and let the measurable partitions {Ω_i^N}_{i=1}^N be given.

Proof of Proposition 1. We use the sequences from Lemma 1, which both converge in the L2-sense in the respective domains. Since Ψ^N is the LS piecewise linear approximation in Δ, we obtain the convergence as N → ∞.

In the next proposition we do not assume that the target function is bounded.

Proposition 2. Let Δ be a polyhedral domain in ℝ^n, the function ψ be square integrable in Δ and {Δ_i^N}_{i=1}^N be the optimal polyhedral partition of Δ obtained by the algorithm described in Appendix A.4. Then the corresponding LS approximations converge.

Proof. Evidently, for the L2-function ψ there exists a sequence of polyhedral partitions {Δ_i^N}_{i=1}^N with the required property; for the optimal polyhedral approximation the error can only be smaller, so that the convergence holds as N → ∞. In particular, the assumption on ψ is fulfilled if the target function v is measurable and bounded on Ω.

The case of the L2-convergence of the approximations V^N in the Cartesian space requires additional notation. Given a partition subset Δ_i^N containing a distinguished point, let A_i^N be the symmetric n × n matrix with entries defined through the measure of Δ_i^N. Below we fix a matrix norm ‖·‖; all matrix norms are equivalent. One of the norms is the Euclidean norm, which is defined via the maximal eigenvalue; in the case of symmetric, positive definite matrices (like A_i^N above) it equals the largest eigenvalue.

We say that the sequence of partitions {Δ_i^N}_{i=1}^N satisfies the property (Δ) if the diameters of the partition subsets are controlled by the matrices A_i^N; if the chosen norm is Euclidean, the latter estimate can be rewritten in terms of λ_i^N, the least (positive) eigenvalue of the matrix A_i^N. Informally speaking, this property means that the partition subsets cannot be too different from each other in shape. Assume, for instance, that the partition sets are enclosed in rectangular boxes. The result below says that if the ratio of the longest and the shortest edges of the boxes is bounded above, i.e. the boxes are not "too thin", then the sequence of such boxes satisfies the property (Δ); here a_N (resp. b_N) denotes the length of the smallest (resp. biggest) edge of the box P_N.
We fix N and the Nth rectangular box P_N.

1) If the measurable partitions {Ω_i^N}_{i=1}^N and the associated LS piecewise linear approximations Ψ^N satisfy the criterion (10) for each N.

Proof. To prove the first part of the theorem, we apply Lemma 1 and observe that V^N(x) is the LS piecewise power approximation in Ω. In the second part of the theorem, we use either Proposition 1 or Proposition 2, which yields the L2-convergence of the LS approximations Ψ^N to the function ψ = log v. Applying Lemma 2 we obtain the uniform boundedness of the approximations by some M for any N = 1, 2, …. Then we have the desired convergence.
Uniform Convergence of Approximations
In the previous section we studied the convergence of LS approximations in the L2-norm. In many applications, however, it is desirable to consider their uniform convergence. This may, for instance, be of interest if we include the obtained approximations in models based on differential equations, as it is well known that convergence of (approximations of) solutions is only guaranteed by the uniform convergence of (approximations of) the right-hand sides.
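To make the role of the right-hand-side approximation concrete, the sketch below integrates a scalar compartment equation once with an exact saturating rate and once with a single power-law surrogate fitted in log space, and compares the resulting trajectories. The model, initial value and time horizon are illustrative choices only, and the single (non-piecewise) power law is used just to keep the example short.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Exact rate and a power-law surrogate v(x) ~ a * x**g fitted in log space
x_grid = np.linspace(0.2, 5.0, 200)
v_exact = lambda x: 2.0 * x / (1.0 + x)          # saturating rate, non-constant kinetic order
g, c = np.polyfit(np.log(x_grid), np.log(v_exact(x_grid)), 1)
v_power = lambda x: np.exp(c) * x**g

# dx/dt = influx - rate(x), integrated with both right-hand sides
rhs = lambda v: (lambda t, x: 1.0 - v(x[0]))
sol_exact = solve_ivp(rhs(v_exact), (0.0, 20.0), [0.5], max_step=0.1)
sol_power = solve_ivp(rhs(v_power), (0.0, 20.0), [0.5], max_step=0.1)

print("final states:", sol_exact.y[0, -1], sol_power.y[0, -1])
```

The closer the surrogate is to the exact rate in the uniform sense over the range actually visited by the trajectory, the closer the two solutions stay; this is the practical content of the uniform-convergence results of this section.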
The main result of this section is formulated in terms of the kinetic orders of the target function v and its piecewise power approximations w^N.

Theorem 5. Let the target function v > 0 be a C^1-function (i.e. differentiable with continuous partial derivatives). Let the sequence of partitions {Ω_i^N}_{i=1}^N of Ω have the following two properties: 1) the closure of each Ω_i^N coincides with the closure of its interior; 2) the diameters of the partition subsets tend to zero as N → ∞. Assume, in addition, that the equalities (14) hold for every i. Then w^N → v uniformly on Ω as N → ∞.

Proof. We fix N and consider the corresponding partition. By assumption, the equalities (14) hold. On the other hand, the mean value theorem provides an intermediate-point representation of the increments of ψ. The uniform continuity of the continuous vector function ∇ψ(y) on Δ and the property that the diameters of the partition subsets tend to zero imply that, given an ε > 0, the required estimate is fulfilled for sufficiently large N. Since (15) holds for every i, we also obtain the corresponding estimate for sufficiently large N. As the uniform convergence of the sequence {Φ^N} implies its uniform boundedness, there is an M bounding the approximations. This gives the uniform convergence of w^N to v as N → ∞.

Our last result shows that the LS approximations converge uniformly in the scalar case. This is due to the fact that in the scalar case the equalities (14) are always fulfilled.

Corollary 1. Let the target function v be continuous on [a, b]. Then for the corresponding LS power approximations V^N and v^N we have uniform convergence. The proof follows directly from the previous theorem and the following lemma, the proof of which is given in the Appendix.
Discussion and Conclusions
Piecewise power-law representations may be very useful as practical approximations to target functions which are defined analytically or numerically. However, a strict mathematical justification of these approximations is not always paid attention to. Unfortunately, such an analysis is not always straightforward, especially if one puts additional a priori assumptions on the approximations, which is quite common in many applications.
We showed in the present paper that, under additional assumptions, power approximations do converge to the target function. We studied least-squares and uniform convergence, both of which are widely used (explicitly or implicitly) in applications.

Our analysis dealt with two types of regression: linear regression in the logarithmic space and power-law regression in the Cartesian space. The first procedure has all the advantages of linear regression, but the transformation back to the Cartesian space distorts the error structure of the problem; the least-squares error for the resulting piecewise power-law fitting is in general less accurate than the corresponding error for a power-law regression of the original data. As a partial remedy, it may be advantageous to apply power-law regression to the original data over each of the partition subsets back in the Cartesian space. Yet, being a nonlinear regression, this procedure is essentially ill-posed. Thus, both kinds of regression have their strong and weak sides, so that the choice between them must be guided by modelling considerations.

In many cases, it may also be advantageous to use the classical linear regression in combination with optimal partitions of the operating domain. In the logarithmic space this procedure is again linear and can be automated, but it may also cause several technical problems when proving the convergence of the corresponding approximations.

In the present paper, we offered a partial mathematical justification of the analysis based on piecewise power approximations, stemming from both kinds of regression, by verifying their convergence in the mean-square (L2) and uniform sense. Uniform convergence is, e.g., important if target functions are included in differential equations, as it is the uniform, and not the L2-convergence, which is inherited by the solutions of the equations. However, a comprehensive analysis of the convergence of solutions of differential equations approximated by piecewise S-systems is beyond the scope of this paper and will be discussed in a separate publication.
Consider the space L(Δ_i^N) consisting of all linear functions and equipped with the scalar product (·,·). One basis is given by the set (11); however, this set is not necessarily orthogonal. First of all, we choose e_0 = 1 and observe that its norm is equal to 1. Using the description (11) of the basis functions, in the proof below we often omit one of the variables in e_l^{N,i}(y), that is either l or y, depending on a particular interpretation of this basis. Writing e^{N,i}(y) means that we regard it as a vector for each particular y; omitting y (writing e_l^{N,i}) means that we treat e_l^{N,i}(y) as a function of y for a given l, i.e. as an element of the space L(Δ_i^N), and we require the corresponding constraints on the coefficients. The final step in the proof of the lemma uses the explicit representation of the LS approximation; this implies also the uniform boundedness of the approximations V_i^N(x) on Ω. The proof of the lemma is complete.

A.3. Proof of Lemma 3

Let us first prove the existence of the point y_0. Assume the converse, i.e. that the graphs do not meet, where Δ_i are all polyhedral sets defined by (17) and defining a partition of the logarithmic domain Δ, with scalar weights and numbers such that, if the representation holds in the domain Ω for all j, then v clearly is a power function in Ω of the form (1). The converse direction is more involved; the reason for that is that the L2-convergence of the sequence in question has to be carried over.

Proposition 3. A sequence of rectangular boxes {P_N} satisfies the property (Δ) if and only if the ratio of its edge lengths remains bounded. The latter estimate is due to the uniform Lipschitz continuity of the function exp(u) on the interval under consideration; this estimate proves the L2-convergence of the LS approximations V^N to the target function v.

Lemma 3. Let a linear function l: [a, b] → ℝ be the LS approximation of a given function θ. With e_0 = 1 defined via the center of mass, we directly deduce from (12) that e_0 = 1 is orthogonal to any linear combination of the other basis functions. The challenge is therefore to estimate the norms of linear combinations in the further considerations. Here ‖·‖ is the Euclidean norm in ℝ^n and a·b is the scalar product of two vectors, subject to the stated constraint. Diagonalization of the symmetric, positive definite matrix A_i^N with the help of an orthogonal matrix Q gives the matrix containing the eigenvalues, which is evidently an upper estimate for the functions (11) on the partition subset Δ_i^N. The maximum value of the expression is attained at the minimal eigenvalue of the matrix A_i^N. Due to the condition (Δ) we get that the constant c_0 does not depend on i and N.

This contradicts the definition of the least-squares approximations. In the remaining case we prove that the graph of the scalar linear function l(y) intersects the graph of θ(y) in at least two points from the interval [a, b]. From the first part of the proof we know that at least one intersection point does exist. Assume that there is exactly one such point (some of the sets introduced below may be empty). Consider a new linear approximation l_1 chosen in such a way that the graphs of the functions θ and l_1 meet only in the point d (by construction). It is easy to see that such a δ does exist. Indeed, in a vicinity U of the point d the difference is controlled in U; outside U, i.e. inside the compact set [a, b] \ U, the continuous function Θ is non-zero, so that the graphs of the functions θ and l_1 meet only in d. We now complete our analysis of the scalar case observing that such a δ leads to a contradiction.

A polyhedral partition of the logarithmic domain gives rise to a partition of the original domain Ω: applying the inverse logarithmic transformation, we obtain the function ψ and the corresponding partition {Δ_i}; below the weights are collected in a vector. The aim of the piecewise linear regression is: given a function ψ, a number of partition subsets N and a natural number c, find a piecewise linear function Ψ and the polyhedral partition {Δ_i^N}_{i=1}^N.

This quantity also dominates the asymptotics of the diameter. Therefore the condition (Δ) is fulfilled for the given sequence of rectangular boxes if and only if the sequence {n b_N} satisfies the corresponding bound.
v3-fos-license | 2022-04-13T15:04:31.412Z | 2022-04-11T00:00:00.000 | 248106212 | {"extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/20550324.2022.2057661?needAccess=true", "pdf_hash": "13c171fbde0559ca2cfe30760b0d886dd7631a02", "pdf_src": "TaylorAndFrancis", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44579", "s2fieldsofstudy": ["Materials Science", "Engineering", "Chemistry"], "sha1": "1840b35c2bacabe0595da4bcb72094f37dfa79a6", "year": 2022} | pes2o/s2orc |
Sandwich-structured electrospun all-fluoropolymer membranes with thermal shut-down function and enhanced electrochemical performance
Abstract High safety and rate capability of lithium-ion batteries (LIBs) remain challenging. In this study, sandwich-structured poly(vinylidene fluoride)/poly(vinylidene fluoride-co-hexafluoropropylene)/poly(vinylidene fluoride) (PVDF/PVDF-HFP/PVDF) membranes with thermal shut-down function were successfully prepared through electrospinning. The effects of different weight ratios of PVDF and PVDF-HFP in a composite membrane on the physical and electrochemical properties of the membrane were explored. It was found that the composite membrane with 36 wt% PVDF-HFP (P/H2/P) showed excellent electrolyte absorption (367%) and ionic conductivity (2.5 × 10−3 S/cm). Half-cell with P/H2/P as separator exhibited higher discharge capacity and better cycle performance than commercial PP membrane. More importantly, thermally stable high melting-temperature PVDF was chosen as outer layer, while low melting-temperature PVDF-HFP was used as inner layer. Self-shutdown function of this separator was achieved when heated at 140 °C, providing a safety measure for LIBs. These results indicate that PVDF/PVDF-HFP/PVDF composite membrane is a promising separator candidate in high performance LIBs applications. Graphical Abstract
Introduction
Due to their obvious advantages of high energy density, long cycle life and environmental friendliness, lithium-ion batteries (LIBs) have been widely used in smart electronic devices, electric vehicles (EVs) and energy storage systems [1,2]. However, current safety issues and poor rate capability severely impede their further application and the development of new-generation high-performance LIBs [3,4]. In recent years, the safety of LIBs has been improved by adding electrolyte additives [5,6], composite electrodes [7] and modified separators [8][9][10], etc. As an inert component, separators do not participate in electrochemical processes but determine the electrochemical performance and safety of LIBs, providing an effective and less sacrificial way to improve the safety of LIBs [11]. Commercial polyolefin membranes including polyethylene (PE), polypropylene (PP) and PP/PE/PP have been widely used as LIB separators due to their excellent mechanical properties and low cost. However, their relatively low porosity and insufficient electrolyte wettability severely affect the rate capability of LIBs [12]. Besides, low thermal stability may cause internal short circuits at elevated temperatures and lead to uncontrollable thermal runaway, limiting their applications in the next generation of batteries [13]. There have been studies on modifying commercial membranes, such as coating inorganic particles onto polyethylene terephthalate (PET) nonwoven fabrics [14] or electrospinning PVDF on PP nonwoven fabrics [15]; yet the coated membranes suffer from pore blockage and performance deterioration [16]. On the other hand, poor compatibility between different polymer materials in a composite membrane may result in inferior interfacial bonding and premature interlayer failure. Therefore, a method to fabricate a durable membrane with shut-down function, high porosity and excellent ionic conductivity remains a challenge.
Electrospinning has been widely used to prepare various functional fiber membranes with high porosity, large specific surface area and interconnected porous structure [17,18]. Electrospun membranes have been made from a variety of polymers including polyacrylonitrile (PAN) [19,20], polyimide (PI) [21,22], thermoplastic polyurethane (TPU) [23,24], polyvinylidene fluoride (PVDF) [25,26], polyvinylidene fluoride-hexafluoropropylene (PVDF-HFP) [27,28], etc. In particular, PVDF and PVDF-HFP have attracted significant attention because of their excellent mechanical properties and thermal stability, chemical inertness in electrolyte and good electrolyte affinity [29,30]. Kim et al. confirmed that a crosslinked PVDF-HFP membrane provided highly efficient ionic conducting pathways, resulting in higher discharge capacity of the assembled LIBs compared with PP membranes [31]. Khalifa et al. demonstrated a nonwoven PVDF/halloysite nanotube (HNT) nanocomposite membrane with high ionic conductivity and low thermal shrinkage [32]. Although single-layer polymer fiber membranes with embedded nanoparticles can gain higher thermal stability, they are not able to achieve thermal shut-down under heat-accumulation conditions; therefore, three-layer membranes based on electrospun fibers have been further developed to improve the safety and electrochemical performance of LIBs. Wu et al. reported a novel sandwich-structured PI/PVDF/PI composite membrane, in which the high melting-temperature PI component improved thermal stability while the low melting-temperature PVDF component melted to shut down ion pathways at elevated temperatures [33]. Pan et al. fabricated ultrathin SiO2-anchored layered PVDF/PE/PVDF porous fiber membranes. The membranes exhibited a highly porous structure with high electrolyte uptake capability, and the unique layered structure was beneficial to arrest heat accumulation by cutting off Li+ diffusion channels [34]. However, most polymer pairs are thermodynamically immiscible, which results in poor interfacial adhesion and therefore inferior mechanical and electrochemical properties [35].
In this study, we chose PVDF homopolymer and PVDF-HFP copolymer to prepare an all-fluoropolymer composite membrane, aiming to achieve better interlayer adhesion and lower lithium-ion transfer resistance. Moreover, the outer PVDF microfiber layers with better thermal stability were selected as a support to avoid short circuits, and the intermediate PVDF-HFP microfiber layer with a low melting temperature was selected to realize thermal shut-down under high-temperature conditions in order to improve battery safety. By controlling a total spinning time of 18 h, membranes with different time ratios were obtained (Figure 1), i.e. PVDF:PVDF-HFP:PVDF = 1:1:1, 1:2:1 and 1:3:1, denoted as P/H1/P, P/H2/P and P/H3/P, respectively. The influences of different weight ratios of the two polymers in the composite membranes on ionic conductivity, electrolyte uptake, thermal stability, mechanical properties and electrochemical properties of assembled half-cells were investigated. The thermal shut-down function was also examined in a simulated high-temperature situation. A commercial membrane (Celgard 2500) was used as a reference for comparison purposes. PVDF binder (Arkema 500) was purchased from Arkema, France, and super P, used as a conductive additive, was purchased from Tianchenghe Technology Co. Ltd. (Shenzhen, China).
Membranes fabrication
The PVDF/PVDF-HFP/PVDF microfibrous membranes were prepared by electrospinning, as shown in Figure 1. Before preparing the spinning solutions, the PVDF and PVDF-HFP powders were dried at 60 °C for 12 h. The PVDF and PVDF-HFP powders were then dissolved in a mixed solvent of DMF:acetone = 7:3 (v:v), respectively, and mechanically stirred at 50 °C for 3 h to obtain a 14 wt% PVDF solution and a 20 wt% PVDF-HFP solution.

The as-prepared solutions were then electrospun into fibers at a tip-to-collector distance of 19 cm, a voltage of 18 kV, a flow rate of 0.02 ml min⁻¹ and a collector speed of 150 rpm. A total spinning time of 18 h was controlled to obtain membranes with different time ratios, i.e. PVDF:PVDF-HFP:PVDF = 1:1:1, 1:2:1 and 1:3:1, denoted as P/H1/P, P/H2/P and P/H3/P, respectively. The as-prepared membranes were dried in a vacuum oven at 80 °C for 12 h to remove the solvent. Finally, the membranes were hot pressed at 120 °C under a pressure of 4 MPa for 1 h to consolidate them into PVDF/PVDF-HFP/PVDF composite membranes.
Characterizations
The microscopic morphology of the membranes was observed by a scanning electron microscope (SEM JSM-6390LV, Japan) at an accelerating voltage of 15 kV.
The porosity of the membranes was measured as follows. Samples with a diameter of 18 mm were cut from the membranes by a membrane-punching machine. They were washed and dried at 50 °C for 6 h. The porosity was then calculated using Eq. (1) [36]:

P = (1 − ρ/ρ0) × 100%    (1)

where P is the porosity of the sample, ρ0 is the density of PVDF and PVDF-HFP (1.78 g cm⁻³) and ρ is the density of the sample. In order to evaluate the electrolyte affinity of the microporous membranes, the electrolyte uptake ratio (EU) of the membranes was calculated by Eq. (2). The membranes were soaked in the electrolyte (1 M LiPF6 in EC:DEC) for 2 h:

EU = (W − W0)/W0 × 100%    (2)

where W0 and W are the mass of the membrane before and after absorbing the electrolyte, respectively. Tensile tests were carried out using a tensile tester (UTM4104X, SUNS, Shenzhen, China) at a testing speed of 20 μm s⁻¹. The gauge length of the samples was 20 mm and the width was 8 mm.
The thermal behavior of the membranes was characterized by differential scanning calorimetry (DSC 250, TA Instruments, US). The membranes were heated from 25 °C to 200 °C at a rate of 10 °C min⁻¹ under a nitrogen atmosphere. The crystallinity of the membranes was calculated from the DSC data using Eq. (3):

χc = ΔHm/ΔH100 × 100%    (3)

where ΔHm is the enthalpy of the melting peak in the DSC curve and ΔH100 is the apparent enthalpy of fusion per gram of totally crystalline PVDF/PVDF-HFP (ΔH100 = 104.7 J g⁻¹) [37]. Samples with a diameter of 18 mm were cut from the membranes by a membrane-punching machine and treated at 130 °C for 0.5 h to compare the thermal stability of these membranes. The thermal shrinkage ratio (TS) was calculated by Eq. (4):

TS = (S0 − S1)/S0 × 100%    (4)

where S0 and S1 represent the surface area of the membranes before and after thermal treatment, respectively. Cells with a configuration of stainless steel (SS)/membrane/SS were assembled, and electrochemical impedance spectroscopy (EIS) was used to measure the bulk resistance (Rb) of the membranes at frequencies between 1 Hz and 500 kHz at an amplitude of 5 mV. The ionic conductivity was then calculated by Eq. (5) [38]:

σ = L/(Rb × A)    (5)

where σ is the ionic conductivity, L is the thickness of the membrane, Rb is the bulk resistance and A is the effective area of the membrane. The cathode materials were prepared by blending LiFePO4 powder (80 wt%), super P (10 wt%) and PVDF binder (10 wt%). A coin-type cell (CR2032) with a configuration of Li/membrane/LiFePO4 was used to investigate the rate capability, cycling performance and EIS. Galvanostatic charge/discharge C-rate capabilities were examined in the voltage range of 2.5-4.2 V in a Neware Battery Test System (BTS-4000, China) at 0.2, 0.5, 1, 2 and 4 C, a 1 C rate meaning that the selected discharge current discharges the battery in 1 h. The cycling performance was investigated at 1 C for 100 cycles.
In order to simulate its thermal shut-down behavior at high temperature, the P/H2/P membrane was sandwiched between two steel plates and treated at 140 °C for 30 min (denoted as SD-P/H2/P). The half-cell assembled with SD-P/H2/P was charged and discharged at 2 C and 4 C.
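For readers who want to reproduce the derived quantities, the helper functions below implement the porosity, electrolyte-uptake, crystallinity, thermal-shrinkage and ionic-conductivity relations described around Eqs. (1)-(5). The numerical values at the bottom are round numbers chosen only to show the units; they are not measured data from this study.

```python
def porosity(rho_sample, rho_polymer=1.78):
    """Eq. (1): porosity (%) from sample density vs. bulk polymer density (g cm^-3)."""
    return (1.0 - rho_sample / rho_polymer) * 100.0

def electrolyte_uptake(w_dry, w_wet):
    """Eq. (2): electrolyte uptake (%) from membrane mass before/after soaking (g)."""
    return (w_wet - w_dry) / w_dry * 100.0

def crystallinity(dH_m, dH_100=104.7):
    """Eq. (3): crystallinity (%) from the melting enthalpy of the sample (J g^-1)."""
    return dH_m / dH_100 * 100.0

def thermal_shrinkage(area_before, area_after):
    """Eq. (4): thermal shrinkage (%) from membrane area before/after heating."""
    return (area_before - area_after) / area_before * 100.0

def ionic_conductivity(thickness_cm, r_bulk_ohm, area_cm2):
    """Eq. (5): ionic conductivity (S cm^-1) from the EIS bulk resistance."""
    return thickness_cm / (r_bulk_ohm * area_cm2)

# Illustrative round numbers only (not measured data)
print(porosity(rho_sample=0.55))                                             # %
print(electrolyte_uptake(w_dry=0.020, w_wet=0.093))                          # %
print(ionic_conductivity(thickness_cm=0.015, r_bulk_ohm=3.0, area_cm2=2.0))  # S cm^-1
```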
Results and discussion
The microscopic morphology of the P/H2/P composite membrane was observed by SEM, as shown in Figure 2. P/H2/P had interconnected fibrous structures leading to high porosity, which can facilitate the absorption and retention of electrolyte and lithium-ion transportation. After the time-controlled electrospinning process, the P/H2/P membrane had a 72 μm-thick layer made of PVDF-HFP microfibers at the center and two 40 μm-thick layers made of PVDF microfibers at the top and bottom. After hot pressing at 120 °C, the PVDF-HFP microfibers were partially melted (Figure S1) and consolidated well with the PVDF microfiber layers (Figure 2b). It is noticed from the magnified image in Figure 2c that the partially melted and consolidated PVDF-HFP formed numerous microfibrils in the interfacial regions between the PVDF and PVDF-HFP layers, resulting in better interfacial bonds between these two layers. Furthermore, it is observed from Figure 2b and c that a crack formed during the freeze-fracture preparation of these samples occurred in the outer PVDF layer instead of in the interlayer regions between the PVDF and PVDF-HFP layers. This intralayer crack indicates that good interfacial bonding between the PVDF and PVDF-HFP layers was obtained. The cohesive and physically connected interfacial regions of these all-PVDF composite membranes can promote the integrity of the composite membranes and therefore their mechanical properties.

Excellent tensile properties of separators are necessary to improve the safety and prolong the service life of LIBs, but the tensile strength and modulus of electrospun membranes are generally low because of their nonwoven features [21]. Figure 2d shows the stress-strain curves of the different samples. The maximum tensile strengths of the single-layered PVDF-HFP and PVDF membranes were only 3.3 MPa and 4.4 MPa, respectively. Besides, the PVDF-HFP membrane was brittle and less tough than the PVDF membrane. The tensile properties of the sandwich-structured membrane without hot pressing (denoted as nHP-P/H2/P in Figure 2d) were also investigated. This nHP-P/H2/P membrane exhibited a similar brittleness to the PVDF-HFP membrane. However, the sandwich-structured membranes after hot pressing showed much higher tensile strengths (~400% maximum increment) and tensile moduli (~1400% maximum increment) than the single-layered and non-hot-pressed membranes. During the hot-press process, PVDF-HFP microfibers with a low melting point (T_m, ~130 °C) partially melted and consolidated with the PVDF microfibers to form cohesive interfacial adhesions, which led to a significant increase in tensile strength. Inspiringly, the superior interfacial adhesion between microfibers in the interfacial regions, similar to the adhesion between fibers and matrix in all-polymer composites, is a key factor for the improvement in tensile properties [39][40][41][42][43][44][45][46][47]. Meanwhile, as the content of PVDF-HFP in the composite membrane increased, the tensile strength of the membrane improved successively because more interfacial regions with cohesive adhesions were created.
Thermal behavior of PVDF/PVDF-HFP/PVDF composite membrane was characterized by DSC and shown in Figure 3a. It is observed that the composite membrane showed endothermic peaks at around 135 C and 172 C corresponding to the melting temperatures of PVDF-HFP and PVDF components, respectively. By calculating from DSC curves [48], the actual weight ratios of three layers in these composite membranes were 1:0.6:1, 1:1.1:1 and 1:2.1:1 for P/H1/P, P/H2/P and P/H3/P respectively. The crystallinity of PVDF-HFP and PVDF components was calculated by Eq. (3) and shown in Figure 3b. As the PVDF-HFP content increased from 23 wt% (P/H1/P) to 51 wt% (P/H3/P), the crystallinity of PVDF-HFP component only increased from 4.1% to 5.5%, revealing the crystallization of PVDF-HFP copolymer chains with strong steric hindrance is kinetically unfavorable. Meanwhile, when the content of PVDF component in these composite membranes decreased from 77 wt% to 49 wt%, the crystallinity of PVDF component decreased sharply by 58.8%. The total crystallinity of composite membranes was reduced to below 20% for P/H2/P and P/H3/P. Because electrolyte uptake and retention occur through a swelling process in the amorphous regions in separators [49], lower crystallinity is therefore beneficial to improve electrolyte affinity and ionic conductivity.
Additionally, porosity and electrolyte uptake ratio are two evidently important indicators for evaluating the performance of membranes. Figure 3c shows that the composite membranes exhibited significantly higher porosity and electrolyte uptake ratios than the PP membrane, attributed to the three-dimensional fibrous network structure of the electrospun microfiber membranes and their lower crystallinity. In particular, due to the effective combination of the PVDF and PVDF-HFP layers through hot pressing, the P/H2/P membrane had a high porosity (69%) and electrolyte uptake ratio (367%), which is beneficial for improving the ionic conductivity and subsequent cell performance.
Severe thermal shrinkage of the membrane may cause a short circuit within the battery and increase the risks of spontaneous combustion and explosion during heat accumulation; therefore, good thermal stability of the membrane is crucial to the safety performance of LIBs. Figure 3d shows photographs of the membranes before and after heat treatment at 25 °C and 130 °C for 0.5 h. The thermal shrinkage ratios of the PP, P/H1/P, P/H2/P and P/H3/P membranes were calculated to be 11.9%, 1.1%, 2.1% and 10.1%, respectively. The thermal shrinkage ratio of the membranes increased as the PVDF-HFP content increased, due to the relatively lower thermal stability of PVDF-HFP compared with PVDF. The thermal shrinkage ratio of PP was the highest among these membranes; obvious shrinkage and a shape change from round to rolled-up in the machine direction (uniaxial stretching direction) were observed for PP. Meanwhile, P/Hx/P showed uniform shrinkage in all directions, which is advantageous for effectively improving the safety of LIBs at high temperatures. Figure 4a shows the Nyquist plots of the SS/membrane/SS cells. In the high-frequency region, the intercept of the Nyquist plot on the real axis represents the bulk resistance (Rb) of the membrane. The ionic conductivity values were hence calculated and presented in Table S1. Due to their high porosity and electrolyte uptake ratio, the composite membranes demonstrated lower Rb and much higher ionic conductivity than the PP membrane (0.8 × 10⁻³ S cm⁻¹). Obviously, the Rb of the P/H2/P membrane was lower than those of the P/H1/P and P/H3/P membranes; therefore, it showed the highest ionic conductivity (2.5 × 10⁻³ S cm⁻¹) among these composite membranes, indicating rapid migration of lithium ions during the charge-discharge process.
The compatibility of liquid electrolyte-soaked porous membrane with commercial electrode materials was characterized by EIS of Li/membrane/ LiFePO 4 half cells. All the Nyquist plots in Figure 4b show a semicircle in high-and medium-frequency region indicating the charge-transfer resistance (R ct ) in interfacial regions, as well as straight line in low-frequency region indicating the diffusion of lithium ions in cathode materials [50]. The R ct values of composite membranes were significantly lower than that of PP membrane (Table S1), particularly, P/H2/P membrane displayed the lowest charge-transfer resistance, due to its high porosity, better wettability to electrolyte and low crystallinity. The low resistance of P/H2/P composite membrane would improve the compatibility between electrode and electrolyte-soaked P/H2/P membrane, revealing that the transportation of lithium ions between electrode and electrolyte interfaces is more efficient.
The electrochemical performances of the Li/membrane/LiFePO4 half-cells are shown in Figure 4c-f. The type and structure of the membranes influence lithium-ion transport through the electrolyte-soaked membranes. The initial discharge capacities of the half-cells with composite membranes at 0.2 C were higher than that of the cell with PP (147.0 mAh g⁻¹), as shown in Figure 4c. This is correlated with the fact that these electrospun microfibrous membranes had higher porosity and electrolyte retention, leading to lower interfacial resistances. Moreover, the P/H2/P membrane showed a higher initial discharge capacity (157.1 mAh g⁻¹) than the P/H1/P membrane (152.2 mAh g⁻¹) and the P/H3/P membrane (150.0 mAh g⁻¹), due to its higher ionic conductivity than the other two composite membranes. Figure 4d shows the rate capability of the cells assembled with the various membranes. At the same C-rate, cells with PVDF/PVDF-HFP/PVDF composite membranes exhibited higher capacities than the cell with PP due to their lower interfacial resistances. Meaningfully, the capacity-decay values of P/H1/P, P/H2/P and P/H3/P were 41.4%, 36.1% and 42.0%, respectively, lower than that of PP (53.0%), when the charge-discharge rate increased from 0.2 C to 4 C, indicating that the composite membranes can help to reduce the ohmic polarization of the cell. The P/H2/P membrane had higher porosity and liquid electrolyte absorption and, consequently, superior ionic conductivity, which can facilitate rapid lithium-ion transportation between the electrodes and improve the high-rate performance of the battery. Therefore, the cell assembled with P/H2/P had the highest discharge capacity, especially at higher rates (Figure 4e). Figure 4f shows the cycling performance of the cells with different membranes at 1 C. The cells with PVDF/PVDF-HFP/PVDF membranes had no apparent capacity loss after 100 cycles, indicating that the composite membranes had excellent cycle stability.
The shut-down behavior of the composite membrane was further investigated. P/H2/P was sandwiched between two steel plates and treated at 140 °C for 30 min (denoted as SD-P/H2/P). The microscopic morphology of the SD-P/H2/P composite membrane is shown in Figure 5a-c. It is noticed from Figure 5a that pore blockage clearly occurred. Because 140 °C does not reach the melting temperature of the PVDF layers, it is supposed that the melted PVDF-HFP microfibers penetrated into the PVDF outer layers and consolidated to cause pore blockage. Figure 5b and c indicate that the melted and consolidated PVDF-HFP wrapped the fibers of the PVDF layers to form a dense membrane. It is expected that the as-formed dense membrane would be able to effectively block the ion transfer channels and therefore shut down the electrochemical reactions between the electrodes to prevent further heat accumulation.

The charge-discharge tests of the Li/membrane/LiFePO4 half-cell with the SD-P/H2/P membrane were performed at 2 C and 4 C; the results are shown in Figure 5d. The discharge capacity of this cell was almost zero at both 2 C and 4 C, indicating that the dense middle layer of the SD-P/H2/P membrane blocked the transport channels of lithium ions [51] and demonstrating that the P/H2/P composite membrane can perform a thermal shut-down function in a high-temperature situation [33]. Currently, polypropylene (PP) and polyethylene (PE) monolayer membranes are mostly used in the LIB industry, but they suffer from low thermal stability and high safety risks [10]. Although commercial PP/PE/PP membranes (e.g. Celgard 2320) can provide a thermal shut-down function at 135 °C (T_m of PE), their thermal shrinkage ratio is as high as 20% at 130 °C [51] due to their low thermal stability and poor interfacial compatibility between the different polymer layers. The current PVDF/PVDF-HFP/PVDF composite membrane can therefore provide a promising alternative separator solution for high-safety LIBs.

In summary, a sandwich-structured all-fluoropolymer composite membrane was prepared by a time-controlled electrospinning and hot-pressing process. The all-fluoropolymer composite membrane exhibited excellent tensile properties due to the cohesive interfacial adhesion between microfibers in the interfacial regions. The effects of different weight ratios of the two polymers in the composite membrane on the physical and electrochemical properties of the membrane were explored. The P/H2/P composite membrane showed a superior electrolyte absorption ratio (367%), ionic conductivity (2.5 × 10⁻³ S/cm) and lower interfacial resistance (119.3 Ω). Hence, the cell with the P/H2/P composite membrane demonstrated a high discharge capacity (157.1 mAh g⁻¹) at 0.2 C and a low capacity decay of 36% from 0.2 C to 4 C. More importantly, P/H2/P can perform a thermal shut-down function in a high-temperature situation to prevent heat accumulation and decrease the risk of thermal runaway. Therefore, the all-fluoropolymer composite membrane is a promising separator candidate to improve the safety and electrochemical properties of LIBs.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes on contributors
Rongyan Wen received her BEng degree in 2019 and she is now a postgraduate student working on electrospun polymeric functional membranes for LIBs. Zhihao Gao received his BEng degree in 2019 and he is now a postgraduate student working on functional composite membranes for LIBs.
Lin Luo received his BEng degree in 2020 and he is now a postgraduate student working on functional composite membranes for LIBs.
Xiaochen Cui received his BEng degree in 2020 and he is now a postgraduate student working on carbon-based materials for energy storage applications.
Prof. Jie Tang is a managing researcher in advanced lowdimentional nanomaterials group in National Institute for Materials Science, Tsukuba, Japan. Her research interest is in the design, fabrication, characterization and applications of one-or two-dimensional nanostructured materials.
Dr. Zongmin Zheng received her PhD degree in Chemistry from Xiamen University and joined in Qingdao University in 2017. Her research focuses on materials for energy storage applications.
Dr. Jianmin Zhang is now an Associated Professor in Qingdao University. She received her PhD degree in Materials Science from Queen Mary University of London in 2009. Then she worked for AVIC and Simens in Beijing. In 2015 she joined in Qingdao University. Her research interests include polymeric functional membranes, carbon-based materials for energy storage applications.
v3-fos-license | 2020-10-29T09:06:19.539Z | 2020-10-27T00:00:00.000 | 261593669 | {"extfieldsofstudy": ["Medicine"], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.researchsquare.com/article/rs-96133/v1.pdf?c=1637261855000", "pdf_hash": "51972ca352c15fd92526c4699defd6e7cc2fb1ce", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44580", "s2fieldsofstudy": ["Medicine", "Environmental Science"], "sha1": "3890e4d389e2789ba5fe3eec6f03c49f2e043d27", "year": 2020} | pes2o/s2orc |
Effects of a Long-Term Administration of Aqueous Extract of Prunella vulgaris L. on Survival, Spontaneous Thyroid Carcinoma and Neoplastic C-cell Hyperplasia in Rats
Background: The continued global rise in thyroid carcinoma calls for alternative prevention and treatment strategies. Prunella vulgaris L. (PV) is a herbaceous plant with a medicinal property in the treatment of thyroid gland dysfunction, but its influence on thyroid carcinoma is unclear so far. This study was designed to investigate the effects of aqueous extract of PV on survival, spontaneous thyroid carcinoma and its preneoplastic lesion in rats.Methods: A total of 552 Wistar rats (half female and half male) were randomly assigned into 4 groups and given one of the following diets for 24 months: chow diet (control), 2.5 (low), 8.25 (middle) and 25 (high) g/kg bw PV diets. After intervention, serum metabolic parameters including indicators of liver and renal function, glucose and lipid profiles were measured. Histological examination was conducted to confirm the types of thyroid carcinoma and its preneoplastic lesion. Results: After intervention, serum aspartate transaminase of male rats in high PV group decreased significantly. No statistical differences among groups in terms of survival, body weight and other metabolic parameters were detected. In the control, low, middle and high PV groups, 14, 14, 15 and 8 rats developed thyroid carcinoma, respectively. Medullary thyroid carcinoma (MTC) emerged as the most common histological type in both sexes. Although PV failed to decrease risk of total thyroid carcinoma or each histological type, the incidence rates of neoplastic C-cell hyperplasia (CCH, a preneoplastic lesion of hereditary MTC) in PV groups were lower than that of control, and the lowest was observed in high PV group, manifesting as 5.25-time decrease in female rats and 5.5-time decrease in male rats.Conclusion: Our results suggested for the first time that, a long-term administration of aqueous extract of PV decreased the incidence of neoplastic CCH without impairing survival and metabolic parameters.
Background
Thyroid carcinoma is the most common malignant tumor of the endocrine system. With the development of ultrasound scanning technology, increased exposure to radiation and changes in body mass index, the incidence of thyroid carcinoma has been rising rapidly and consistently worldwide [1][2][3][4]. Data from Chinese tumor surveillance indicated that the number of patients diagnosed with thyroid carcinoma increased by 137% from 2010 to 2013. Although the overall prognosis of thyroid carcinoma, especially well-differentiated thyroid carcinoma, is favorable, the adverse side effects caused by regular treatment cannot be ignored. For instance, radioactive iodine therapy is associated with an increased risk of secondary primary malignancy [5], while thyroxine suppression may lead to arrhythmias and bone loss [6]. Additionally, quality of life in thyroid carcinoma survivors is always lower as compared with the general population [7]. Thus, a more effective and safer therapy is needed for thyroid carcinoma.
A multitude of plants and plant extracts have been used as adjuvant drugs for the treatment of carcinoma, due to their remarkable anti-tumor activity and low toxicity. Prunella vulgaris L. (PV), also known as self-heal, is a perennial herbaceous plant that is widely distributed in Asia and Europe. A previous study demonstrated that PV is rich in flavonoids, sterols, organic acids, triterpenoids, polysaccharides and phenolic acids [8]. In traditional Chinese medicine, PV has been applied to treat thyroid gland dysfunction, goiter and neck lumps for more than one thousand years, and today its clinical application extends to headache, dizziness, herpetic keratitis and certain cancers [8][9][10][11].

Unfortunately, the literature regarding the effect of PV on thyroid carcinoma is extremely limited, even though consumption of PV benefits thyroid function [12]. The only published paper showed that PV was able to induce apoptosis in papillary thyroid carcinoma (PTC) cells and follicular thyroid carcinoma (FTC) cells in vitro, indicating a potential inhibitory effect of PV on thyroid carcinoma [13], but this should be further confirmed by more in vivo research. In the present study, Wistar rats were administered different doses of aqueous extract of PV for 2 years to investigate the impacts of PV on survival, metabolic parameters, spontaneous thyroid carcinoma and its associated preneoplastic lesion. The findings of this research will provide preliminary data for the use of PV as a remedy for the prevention and treatment of thyroid carcinoma.
Materials And Methods
Preparation of aqueous extract of PV. Dried spikes of PV (lot number Q17201) were purchased from Guangzhou Caizhilin Pharmaceutical Co. Ltd (Guangzhou, Guangdong, CHN), and a voucher specimen (voucher number 0065456) was deposited at Guangdong Provincial Center for Disease Control and Prevention (Guangzhou, Guangdong, CHN). The aqueous extract of PV spikes was prepared by Wanglaoji Pharmaceutical Co. Ltd. (Guangzhou, Guangdong, CHN). Briefly, 76.9 g of PV spikes were boiled twice in 1 L of distilled water, 1.5 hours each time. The filtrate of the decocted solution was concentrated under reduced pressure at 55 ~ 60℃, followed by ethanol (70%) precipitation for more than 24 hours. After ethanol distillation, the supernatant of the extract was concentrated again and freeze-dried to produce a powder. One g of PV powder was equivalent to approximately 20 g of spikes. High Performance Liquid Chromatography (HPLC) (Santa Clara, CA, USA) was used to measure the PV characteristic component rosmarinic acid (RA). The result showed that the content of RA in PV was 8.34 mg/g, which was in accordance with the requirement of the Chinese pharmacopoeia (2015 edition).
Dosage information
Different doses of PV aqueous extract-derived powder were added into chow diet to prepare special PV diets. The final concentrations of PV in the diets were 2.5, 8.25 and 25 g/kg body weight (g/kg bw), corresponding to 10, 33 and 100 times the maximal daily dose of PV in adults recommended by the Chinese pharmacopoeia [14]. Both the rat chow diet and the special PV diets were produced and purchased from Guangdong Medical Laboratory Animal Center (Guangzhou, Guangdong, CHN).

A total of 552 4-week-old Wistar rats, half female and half male, were provided by the experimental animal center of Southern Medical University (Guangzhou, Guangdong, CHN) and housed individually in a standard environment (20 ~ 26℃, 40 ~ 70% humidity and 12-hour light/dark cycle). After acclimatization for 1 week, rats were randomly assigned into 4 groups (69 female + 69 male per group) according to body weight, and given free access to drinking water and one of the following diets for 24 months: chow diet (control), 2.5 g/kg bw PV diet (low PV), 8.25 g/kg bw PV diet (middle PV) and 25 g/kg bw PV diet (high PV). Body weight and feed intake were recorded. At month 12, 24 rats (12 female and 12 male) in each group were euthanized, and the rest were euthanized at month 24. Fasting abdominal aorta blood and thyroid tissue were collected. The experimental approach was reviewed and approved by the animal experiment ethics committee of Guangdong Provincial Center for Disease Control and Prevention.
Histological examination
Thyroid tissue was fixed in 10% neutral buffered formalin at 4℃ overnight and then embedded in paraffin. Sections (4 ~ 6 µm) of tissue were stained with hematoxylin and eosin (H&E) for the histological examination. Histological types of tumor and pre-cancerous lesion were confirmed by two pathologists using light microscopy.
Statistical analysis
Data are presented as means ± SD and were analyzed by SPSS 17.0 (Chicago, IL, USA) and SAS 9.1.3 (Cary, NC, USA). Kaplan-Meier survival analysis and the log-rank test were applied to compare the survival of rats among groups. Body weight and feed intake were analyzed by repeated-measures analysis of variance. Differences in metabolic parameters among groups were identified using one-way ANOVA followed by the Least Significant Difference (LSD) test. Incidence rates of thyroid carcinoma as well as neoplastic CCH were compared by Fisher's exact test or the χ2 test. Statistical significance was taken at P<0.05.
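The study ran these tests in SPSS and SAS; an equivalent check of group incidence rates in Python/SciPy would look like the snippet below. The 2 × 2 counts used here are hypothetical placeholders inserted only to make the call signatures concrete, not the study's data.

```python
from scipy.stats import fisher_exact, chi2_contingency

# Hypothetical 2x2 table: rows = control vs. high-PV group,
# columns = rats with / without neoplastic CCH (placeholder counts only)
table = [[21, 36],
         [4, 53]]

odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, _expected = chi2_contingency(table)
print(f"Fisher exact p = {p_fisher:.4f}, chi-square p = {p_chi2:.4f} (dof = {dof})")
```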
Results
Long-term consumption of PV did not impair survival of rats

No death was observed at month 12. After the intervention for 2 years, 66, 69, 71 and 71 rats were alive in the control, low PV, middle PV and high PV groups, respectively. Although there was no statistically significant difference in survival among groups (Fig. 1), a trend toward an increase could be found in male rats supplemented with middle and high doses of PV (52.6% in the control, 61.4% in the middle and high PV groups).
Effects of PV on feed intake and body weight
Fluctuations in the feed intake of rats were detected in all groups over the intervention period (Fig. 2A-B). However, this did not lead to a marked influence on body weight, because the changes in body weight in all groups were extremely similar, especially for male rats (Fig. 2C-D).
Effects of PV on metabolic parameters
At the end of the intervention, the levels of metabolic parameters including ALT, TP, ALB, BUN, CREA, glucose, TG and TC demonstrated no significant differences among groups (Table 1). AST of male rats in the high PV group decreased remarkably as compared to the control (P<0.05), whereas a similar result was not found in female rats, indicating that the potential protective effect of PV on liver function might be dependent on sex. Taken together, long-term administration of PV would not interfere with liver and renal function or with glucose and lipid metabolism.

Three histological types of spontaneous thyroid carcinoma were identified, namely follicular variant of papillary thyroid carcinoma (FVPTC, Fig. 3A), medullary thyroid carcinoma (MTC, Fig. 3B) and poorly differentiated thyroid carcinoma (PDTC, Fig. 3C), among which MTC was the most common one in both female and male rats, while poorly differentiated thyroid carcinoma was detected only in male rats (Table 2). No secondary carcinoma was detected in the thyroid. Unfortunately, PV failed to decrease the incidence of either total thyroid carcinoma or any histological type.

Long-term consumption of PV decreased the incidence of neoplastic C-cell hyperplasia (CCH)

It is noteworthy that the number of rats with neoplastic CCH (Fig. 3D), a preneoplastic lesion of hereditary MTC [15], was significantly different among groups (P<0.05, Table 2). The incidence rates of neoplastic CCH in all PV groups were lower than that of the control, and the lowest was observed in the high PV group, manifesting as a 5.25-fold decrease in female rats and a 5.5-fold decrease in male rats.
Discussion
PV consumption did not lead to remarkable changes in survival, body weight or the majority of metabolic indicators (ALT, TP, ALB, BUN, CREA, glucose, TG and TC) in either female or male rats. Meanwhile, a significant decrease in AST was detected only in male rats fed the high dose of PV, which fitted well with the findings of Qu et al. [16] suggesting that PV has the potential to protect liver function, and this effect may differ by sex. In disagreement with previous animal studies [17][18][19], the hypoglycemic, hypolipidemic and renal-protective functions of PV were not observed in the present study. The discrepancy was possibly due to the different animal model, because all the aforementioned benefits were found in diabetic rodents. It seems that rodents with metabolic abnormality are more sensitive to PV treatment. Considering that PV did not induce any impairment in survival or metabolism after the 2-year intervention, its long-term administration is supposed to be safe.

Although PV is a medicinal plant with protective effects against thyroid dysfunction and several types of cancer [9,11,12,20], there is no proven benefit of PV as a means to inhibit thyroid carcinoma. In the present study, a total of 3 histological types of thyroid carcinoma were found in rats, namely FVPTC, PDTC and MTC, among which MTC was the most frequent. Unfortunately, the incidence rate of each carcinoma was not reduced after PV intervention. Based on these results, however, it was difficult to confirm that PV was completely ineffective in preventing and treating thyroid carcinoma in humans, because PTC and FTC, which account for more than 90% of total thyroid carcinoma in humans [3], did not occur spontaneously in rats fed the chow diet. Furthermore, PV was shown to be capable of inducing apoptotic cell death in both PTC and FTC cell lines by regulating the B-cell lymphoma-2/Bcl-2-associated X protein/caspase-3 signaling pathway [13]. Thus, the effects of PV on PTC and FTC are yet to be examined using specific animal models, and only after that can we conclude whether PV is helpful in fighting thyroid carcinoma.

Of particular interest in our study was the observation that the incidence of neoplastic CCH in the PV groups, especially the high PV group, was obviously lower as compared with the control. CCH is classified into 2 types: physiologic CCH and neoplastic CCH. As for neoplastic CCH, even though there is no consensus on its role in the development of MTC so far [21,22], most researchers are inclined to recognize neoplastic CCH as the precursor of hereditary MTC (e.g. familial MTC and type 2 multiple endocrine neoplasia) [15] and of sporadic MTC in some cases [23]. As a result, PV was supposed to be capable of reducing the risk of hereditary MTC, but we noted that the incidence rates of MTC among all groups lacked statistical difference after PV intervention. This was possibly because the MTC observed in the present study was closer to the sporadic type rather than the hereditary type. More research, especially human research, pertaining to the potential protective effect of PV on hereditary MTC is warranted.
The components in PV extract that were responsible for the inhibition of neoplastic CCH were unclear.
Abundant studies have yielded robust evidence for a causal role of RET proto-oncogene mutation in the development of neoplastic CCH and the subsequent hereditary MTC [24][25][26]. Activating transcription factor 4 (ATF4) has been shown to suppress MTC by targeting the RET gene in vivo and in vitro [27]. Notably, RA, the characteristic component of PV, was previously reported to activate ATF4 when potentiating the therapeutic effect of MG132 on hepatocellular carcinoma [28]. Therefore, RA was likely a functional component of the PV extract for suppressing neoplastic CCH, but this needs to be further confirmed.
Conclusion
In summary, the current study showed for the first time that long-term administration of an aqueous extract of PV decreased the incidence of neoplastic CCH without impairing survival or metabolic parameters in both female and male rats. PV is a promising drug resource with the potential to reduce the risk of hereditary MTC.

28. Ozgun GS, Ozgun E. The cytotoxic concentration of rosmarinic acid increases MG132-induced cytotoxicity, proteasome inhibition, autophagy, cellular stresses, and apoptosis in HepG2 cells. Human & Experimental Toxicology 2020;39(4):514-523.

Figure 1. Effects of PV on survival in female (A) and male (B) rats. Rats were fed chow diet (control), low, middle and high PV diets for 24 months. Survival of rats was analyzed by Kaplan-Meier survival analysis and log-rank test.
Figure 2. Daily feed intake and body weight of rats during the feeding period. Daily feed intake and body weight of female (A, C) and male (B, D) rats fed chow diet (control), low, middle and high PV diets were analyzed by repeated-measures analysis of variance. Data are presented as means ± SD. *P<0.05 versus control, **P<0.01 versus control.
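The survival comparison named in the Figure 1 caption (Kaplan-Meier estimates with a log-rank test) can be sketched as follows. This is not the authors' analysis script: the group labels, follow-up times and event flags below are purely illustrative, and the lifelines package is assumed to be available.

```python
# Sketch of the Kaplan-Meier / log-rank comparison described in Fig. 1,
# assuming each rat has a follow-up time (months) and an event flag (1 = died).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical layout: one row per rat, with its diet group (values illustrative only).
df = pd.DataFrame({
    "group":  ["control"] * 3 + ["high_PV"] * 3,
    "months": [18.0, 22.5, 24.0, 20.0, 24.0, 24.0],
    "event":  [1, 1, 0, 1, 0, 0],          # 0 = still alive at the end of the study
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["months"], event_observed=sub["event"], label=name)
    print(name, "median survival:", kmf.median_survival_time_)

ctrl = df[df["group"] == "control"]
high = df[df["group"] == "high_PV"]
res = logrank_test(ctrl["months"], high["months"],
                   event_observed_A=ctrl["event"], event_observed_B=high["event"])
print("log-rank p-value:", res.p_value)
```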
|
v3-fos-license
|
2018-12-07T06:37:04.315Z
|
2015-10-01T00:00:00.000
|
55139046
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1016/j.jclepro.2015.05.033",
"pdf_hash": "b61c0d3b68b6a2f93b7365e2d3084d7a6700b72a",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44582",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "84e52dd016779ab7e647afc7f789e40d5c4a7bf8",
"year": 2015
}
|
pes2o/s2orc
|
The use of hydrogen to separate and recycle neodymium–iron–boron-type magnets from electronic waste
The rare earth metals have been identified by the European Union and the United States as being at greatest supply risk of all the materials for clean energy technologies. Of particular concern are neodymium and dysprosium, both of which are employed in neodymium–iron–boron based magnets. Recycling of magnets based on these materials and contained within obsolete electronic equipment, could provide an additional and secure supply. In the present work, hydrogen has been employed as a processing agent to decrepitate sintered neodymium–iron–boron based magnets contained within hard disk drives into a demagnetised, hydrogenated powder. This powder was then extracted mechanically from the devices with an extraction efficiency of 90 ± 5% and processed further using a combination of sieves and ball bearings, to produce a powder containing <330 parts per million of nickel contamination. It is then possible for the extracted powder to be re-processed in a number of ways, namely, directly by blending and re-sintering to form fully dense magnets, by Hydrogenation, Disproportionation, Desorption, Recombination processing to produce an anisotropic coercive powder suitable for bonded magnets, by re-melting; or by chemical extraction of the rare earth elements from the alloy. For example, it was shown that, by the re-sintering route, it was possible to recover >90% of the magnetic properties of the starting material with significantly less energy than that employed in primary magnet production. The particular route used will depend upon the magnetic properties required, the level of contamination of the extracted material and the compositional variation of the feedstock. The various possibilities have been summarised in a flow diagram. © 2015 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Introduction
Rare earth magnets based upon neodymium–iron–boron (NdFeB) are employed in many clean energy and high tech applications, including hard disk drives (HDDs), motors in electric vehicles and electric generators in wind turbines.
In recent years, the supply of rare earth metals has come under considerable strain. China currently provides over 85% of rare earth metals to the world market but, in recent years, began to impose export quotas. This resulted in dramatic price fluctuations for the rare earth metals, in particular, neodymium, praseodymium and dysprosium, the rare earth constituents of NdFeB magnets. According to the EU Critical Materials list (2010,2014) and the US Department of Energy's energy critical element list (2010), the rare earth metals are classified as at greatest risk of supply shortages compared to those of all other materials used for clean energy technologies.
There are several ways in which these material shortages could be addressed including: (a) opening rare earth mines in countries outside of China, (b) using alternative technologies which do not contain rare earths (c) reducing the amount of rare earth metal used in particular applications such as magnets or (d) recycling the existing stock of NdFeB magnets contained within various types of equipment. It should, perhaps be emphasised that, none of these options are mutually exclusive.
However, with regard to option (a), the mining, beneficiation and separation of rare earth elements is energy intensive, results in toxic by-products from acid leaching processes, and the primary ores are nearly always mixed with radioactive elements such as thorium (Jordens et al., 2013; Zhu et al., 2015). If alternative technologies are employed, as in option (b), this can often lead to a drop in efficiency compared to permanent magnet machines, which will affect CO2 production (Widmer et al., 2015). It is possible to reduce the content of some of the more scarce rare earth elements in magnets, as in option (c); however, at present it is not possible to completely eliminate rare earths from magnets and achieve the same performance.
Despite the obvious need for a viable recycling route for rare earths, at present less than 1% of rare earths are currently recycled (Binnemans et al., 2013). With regard to recycling, option (d), much of the current stock of scrap NdFeB magnets is contained in obsolete HDDs, as the magnets used in electric vehicles and wind turbines are expected to be in service for at least 10 and 25 years respectively, and are therefore, as yet, unlikely to be available in significant quantities. Rademaker et al., 2013, predicted that in 2015 the HDD industry could source 64% of its NdFeB requirement from recycled HDD sources, which equates to approximately 11% of total NdFeB demand. However, Sprecher et al., 2014b, suggest lower figures of 57% and 3% respectively for the year 2017. In order to recycle NdFeB magnets from redundant electrical devices, several challenges need to be addressed, including the following: the collection and sorting of devices containing NdFeB; the identification of NdFeB in moving waste streams; the separation of NdFeB from the devices; the purification of the separated NdFeB; and the re-processing of the extracted materials into useful forms such as new magnets. This paper outlines some possible methods for the separation, purification and re-processing of NdFeB magnets.
The work at the University of Birmingham has focussed primarily on HDDs as they are: (1) relatively easy to identify, (2) already separated from the device (such as a computer), (3) there is a rapid turnover of computers (~5 years) and (4) they are the largest single application of NdFeB magnets in electronic-type goods.
One of the biggest challenges associated with the recycling of NdFeB magnets from HDDs and any other form of electronics is how to separate the magnets efficiently from the other components. There are two types of NdFeB magnet in a HDD; usually 2 fully dense sintered magnets in the voice coil motor (VCM) assembly and a resin-bonded magnet in the spindle motor (see Fig. 1). The work outlined in this paper has focussed on the higher value sintered NdFeB magnets which, in total, typically weigh between 10 and 20 g for a 3½" HDD and 2.5 g for a 2½" HDD (Sprecher et al., 2014). It should be noted that the application of the HPMS process (Hydrogen Processing of Magnetic Scrap) allows the NdFeB-type magnets to be extracted without damaging the remainder of the device with regard to the possible recycling of the other components according to the WEEE legislation.
Manual separation of the sintered NdFeB magnets from the HDD would involve the removal of 8–10 security screws. The magnets are also coated with Ni (and occasionally with Ni–Cu–Ni), glued into position between the plates of the VCM, and they are in the fully magnetised state. At present, a large majority of HDDs are shredded in order to destroy any data on the disk. However, the magnets, being extremely brittle, break up into granules/powder which remains permanently magnetised. Consequently, this powder is attracted to the other ferrous material, including the shredder itself, and is therefore very difficult to remove. The presence of this magnetised powder can adversely affect the operation of the shredder.
In the present work, hydrogen was employed, using the HPMS process, as a processing agent in order to extract selectively NdFeB magnets from HDDs. Hydrogen is already used to process cast NdFeB in the Hydrogen Decrepitation (HD) process. The HD process is used extensively to reduce bulk (or strip) cast NdFeB ingots to friable, hydrogenated NdFeB granules/powder, prior to the production of jet milled powder which is then aligned, compressed and sintered to form fully dense sintered magnets (McGuiness et al., 1986). The hydrogen is then removed during the vacuum sintering process. Previous work at the University of Birmingham has shown that hydrogen can be employed to re-process uncoated scrap sintered magnets into powder which was then re-sintered to produce aligned, fully dense sintered magnets (Zakotnik et al., 2009) or subjected to further HDDR processing to produce bonded magnets (Sheridan et al., 2012).
Materials and methods
The magnets employed in this study were (a) VCM magnets from the Philips factory (based in Southport, UK) with a composition of Nd13.78Fe75.51B6.30Dy0.66Al0.76 (minor constituents not included) or (b) the magnets contained within obsolete 3½" HDDs sourced from redundant computers at the University of Birmingham (dating from 1996 to 2006). The HDDs were pre-processed by cutting off the corners, close to the VCM (using an industrial cropper). The sectioned HDDs were also distorted prior to hydrogen processing by pressing one side of the sectioned HDD in a uniaxial press in order to fracture the magnet into a few pieces (see Fig. 2).
Both the sectioned HDDs and the VCM magnets were processed in hydrogen at a pressure of 2 bar gauge (for between 2 and 4 h) and at room temperature (initially). Although, initially, the magnets were at room temperature, the absorption of hydrogen is an exothermic process and consequently there will be an increase in temperature. After hydrogen processing the sectioned HDDs were rotated in a porous drum in order to liberate the decrepitated magnetic powder. A combination of sieving and mechanical processing was employed in order to increase the fraction of NdFeB in the extracted materials. Ion coupled plasma (ICP) spectroscopy and oxygen and carbon analysis (all performed at Less Common Metals, UK) were used to assess the composition of the extracted materials.

Fig. 1. Manually separated HDD, highlighting the voice coil motor assembly containing 2 sintered NdFeB magnets and the spindle motor containing a resin bonded NdFeB magnet.

Fig. 2. Distorted VCM assembly.
Results and discussion
Initial HPMS trials were performed on Ni-coated VCM magnets provided by Philips. It was evident that, on processing at room temperature and up to 10 bar hydrogen, and in the space of 4 h, on average only 4 in 10 magnets decrepitated. This is likely to be the result of the number and character of pin holes in the electroplated coatings and/or of possible variations in the surface conditions of the underlying magnets. In order to guarantee that 100% of the magnets would react in hydrogen at 10 bar pressure, either the magnets needed to be heated to above 170 °C or, prior to hydrogen processing, the coating had to be ruptured. In the present work the magnets were fractured by distorting the VCM assemblies as shown in Fig. 2. The steel plates which surround the magnets are ductile whereas the NdFeB magnets are extremely brittle. Thus, on distortion of the VCM assembly within the HDD, the magnets break into several pieces. This was sufficient to create fresh surface without significantly increasing the surface area of the magnets and thus the tendency to oxidise. In this condition it was shown that the VCM magnets could be exposed to air in the laboratory atmosphere for over 30 days and still all react on subsequent exposure to hydrogen.
Crucially, on reaction with hydrogen the NdFeB magnets become demagnetised (Harris and McGuiness, 1991), thus allowing the powder to be separated much more readily. During the HD process, the Ni coating is converted to flake-like particles with a wide range of particle sizes (150 µm to 3 mm). However, in the case of the Ni–Cu–Ni coatings, a markedly different behaviour was observed whereby the coating did not fragment but separated as coiled sheets (as shown in Fig. 3) approximately 1 cm in length.
Processing of hard disk drives
Before the HPMS process was applied to HDDs, the HDDs were sectioned across the voice coil end of the device (Fig. 4). This had the dual effects of concentrating the NdFeB fraction of the waste product and opening up the HDD to provide a ready exit route for the hydrogenated powder. The remaining fraction of the HDD can therefore be shredded to destroy the data on the disk and subsequently recycled with other WEEE to recover the remaining valuable and/or critical material such as aluminium and components of the printed circuit boards.
Ten sectioned and distorted HDDs were placed in a hydrogen decrepitation vessel containing a rotating porous drum (Fig. 5). This was adapted from a unit designed for the production of sintered NdFeB-type magnets by the HD process (McGuiness et al., 1986). The diameters of the holes in the drum were between 2 and 3 mm. The HDDs were processed in hydrogen for 2 h at room temperature and 2 bar gauge pressure. After this treatment the remaining gaseous hydrogen was evacuated and the drum rotated for 50 min at ~60 rpm. On rotation, a powdered feed of the magnet material was observed to fall immediately through the port at the bottom of the vessel. In this experiment the vessel was left open to the air after HD processing in order to provide a direct observation of the effectiveness of the process. In the first five minutes of rotation, over 90% of the NdFeB-based material was removed from the HDDs.
The material extracted from the HDDs contained predominantly, hydrogenated NdFeB powder together with Ni flakes, pieces of plastic, sections of screws and fragments of electronic components, and some of these features can be seen in Fig. 6. By determining the mass of the extracted material compared to that of the NdFeB fraction which remained in the sectioned HDDs, it was possible to estimate an extraction efficiency of around 90 ± 5%.
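For illustration, the extraction efficiency quoted above corresponds to a simple mass balance between the powder recovered from the drum and the NdFeB fraction left in the sectioned drives. The short sketch below uses made-up masses, not the values measured in the study.

```python
# Illustrative estimate of extraction efficiency for a batch of processed HDDs.
# Masses are hypothetical; the paper reports ~90 ± 5% for ten sectioned 3.5" drives.
extracted_powder_g = 128.0   # NdFeB-rich powder collected from the drum (example value)
residual_in_hdds_g = 14.0    # NdFeB fraction left inside the sectioned HDDs (example value)

total_ndfeb_g = extracted_powder_g + residual_in_hdds_g
efficiency = extracted_powder_g / total_ndfeb_g
print(f"extraction efficiency ≈ {efficiency:.1%}")
```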
The material separated from the HDDs was processed further by sieving combined with mechanical agitation, achieved by placing ball bearings onto the sieve stages. It was possible to extract nearly all of the contaminant particles using this technique. ICP and C analysis of the residual, concentrated NdFeB fraction indicated 325 ppm Ni and 1779 ppm C. When the coating was removed prior to ICP analysis, the sintered magnets themselves contained an average of around 300 ppm Ni and 600–700 ppm of C. The Ni content is therefore not significantly higher than that of the base alloy and is unlikely to present a problem for downstream re-manufacturing processes at this level. At this stage the origin of the additional carbon is not clear but could be due, in part at least, to the presence of polymers (such as adhesives) in the HDD or to contamination from the cropper used at an industrial site. Further work is currently underway in an attempt to resolve this question.
Re-processing routes
As outlined in Fig. 7, once the material is extracted from the HDDs there are several possible re-processing routes for the extracted hydrogenated powders.
It is possible that procedures could be developed to extract the Nd from the hydrogenated alloy and, in this case, the extracted NdFeB would act as a type of rare earth-rich ore. However, unlike many mined sources, it would only contain 2–4 of the rare earth elements and would not contain any radioactive material (primarily thorium), therefore alleviating the 'balance problem' as described by Falconnet, 1985. The hydrogenated material has a high surface area which would lend itself to pyrometallurgical and hydrometallurgical chemical processes, as described by Binnemans et al., 2013. However, these processes will require a significant input of energy and therefore increased cost and possibly increased CO2 emissions. The extracted elements would also have to be recast with the other constituents to produce the required NdFeB-type alloy and then hydrogen processed and jet milled to produce material suitable for sintering into new magnets. The hydrogenated powder is also likely to require degassing prior to any refining process.
Another possibility is to degas the hydrogenated alloy powder, pelletise and then melt and cast the material. It should then be possible to remove any surface oxide from the melt and therefore reduce the overall oxygen concentration in the resultant material. However this would require inert sample transfer and, to a large extent, the composition of the final cast material would be determined by that of the input scrap.
Yet another re-processing route would be to directly re-use the extracted hydrogenated NdFeB alloy to produce new sintered magnets. Previous work at the University of Birmingham (Burns et al., 2000; Zakotnik et al., 2009; Walton et al., 2014; Rivoirard et al., 2000) has shown that it is possible to recover around 90% of the magnetic properties by lightly milling the powders and then re-sintering. The extracted hydrogenated NdFeB powder already has a fine, aligned microstructure, retained from the starting material, and therefore much less milling is required than in the case of the cast and subsequently hydrogenated NdFeB-type alloy. By processing in this way it is possible to produce new magnets with fewer processing steps and hence with significantly less energy, and hence cost, than those for the complete sintering route. It was estimated by Sprecher et al., 2014a, using Life Cycle Analysis (LCA), that the direct re-sintering route would use 88% less energy than primary magnet manufacture due to the avoidance of high energy processing steps such as beneficiation, acid roasting, solvent extraction and jet milling. Human toxicity is also significantly reduced due to the absence of radioactive substances in the hydrogenated alloy compared to virgin production. The disadvantage of using the re-sintering route is that the composition of the final magnets will, to a great extent, be controlled by that of the input scrap and, on additional powder processing, the oxygen content of the magnets will increase. The higher oxygen content would result in a deterioration in the sinterability and hence in the magnetic properties. However, previous work (for example, Kianvash et al., 1999; Mottram et al., 2001; Zakotnik et al., 2009) in these laboratories has shown that this effect can be overcome by the blending of additional Nd in the form of NdH2. It is also possible to reprocess the extracted hydrogenated NdFeB into powder suitable for the production of coercive, anisotropic HDDR powder (Sheridan et al., 2012; Gutfleisch et al., 2012), which can be employed to make polymer bonded magnets or hot pressed to produce fully dense material (unpublished work).
The separation processes outlined in this paper have been carried out at low pressure and initially at room temperature so that, as with the HD process for the manufacture of sintered NdFeB-type magnets, the process can be readily scaled up. Thus, in the Magnetic Materials Group at the University of Birmingham, a 300 L capacity reactor has been constructed and assembled to process, in a single run, up to 500 sectioned hard disk drives. The scaling up was funded by the UK Waste Resources Action Program (http://www.wrap.org.uk/) and this work now forms part of the FP7 Remanence project (http://www.project-remanence.eu/). It should also be noted that the hydrogen process gas could be recycled, possibly by employing a suitable metal hydride store to absorb the degassed hydrogen. This hydrogen could then be re-employed in the recycling process or used to generate electricity in a PEM fuel cell or a gas turbine. Under the conditions described in this paper, hydrogen would react selectively only with the NdFeB magnets and not with magnets based upon SmCo (2:17), SrFe12O19 and AlNiCo. With this in mind, it should be possible to use the HPMS route to separate NdFeB magnets from mixed feedstocks of material containing several types of magnets.
Conclusions
This paper shows that hydrogen is a very effective agent in extracting NdFeB magnets from HDDs using the HPMS process and that this technique can also be applied successfully to other devices such as electric motors, generators and actuators [to be published]. By concentrating the extracted materials using further sieving and mechanical separation steps, it is possible to reduce the contaminants to a level whereby the extracted NdFeB powder can be used directly to form new magnetic materials. Fig. 7 shows that there are a number of viable routes to re-process the extracted materials into new magnets. The chosen route will depend upon the magnetic properties required by the final magnet, the contamination level of the extracted NdFeB and the compositional variation of the scrap feedstock. The HPMS process could well be driven by both economic and legislative processes aimed to create a sustainable supply of REEs for countries outside of China and to reduce the demand on natural geological resources. The main highlight of this paper is that the HPMS process has been shown to be cost effective with a much lower environmental footprint compared to primary production of NdFeB magnets, particularly when short loop recycling processes are employed (e.g. re-sintering). Further work is required on LCA with regard to all of the downstream remanufacturing options. A US Patent has been granted on the HPMS process.
|
v3-fos-license
|
2020-09-14T20:07:21.415Z
|
2020-09-04T00:00:00.000
|
221726448
|
{
"extfieldsofstudy": [
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.japsonline.com/admin/php/uploads/3207_pdf.pdf",
"pdf_hash": "4d833ad417de62fa5865c03c0b1da18bb0fe8a2a",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44583",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "4d833ad417de62fa5865c03c0b1da18bb0fe8a2a",
"year": 2020
}
|
pes2o/s2orc
|
Recombinant human secretory leukocyte protease inhibitor ameliorated vessel preservation in experimentally isolated rat arteries
Kantapich Kongpol1,2, Rungrueang Yodsheewan3, Nitirut Nernpermpisooth1,4, Sarawut Kumphune1,5* 1Biomedical Research Unit in Cardiovascular Sciences (BRUCS), Faculty of Allied Health Sciences, Naresuan University, Phitsanulok, 65000, Thailand. 2Graduate program in Biomedical Sciences, Faculty of Allied Health Sciences, Naresuan University, Phitsanulok, Thailand. 3Department of Pathology, Faculty of Veterinary Medicine, Kasetsart University, Kamphaeng Saen Campus 73140, Thailand. 4Department of Cardio-Thoracic Technology, Faculty of Allied Health Sciences, Naresuan University, Phitsanulok, 65000, Thailand. 5Department of Medical Technology, Faculty of Allied Health Sciences, Naresuan University, Phitsanulok, 65000, Thailand.
INTRODUCTION
Vascular reconstruction is a therapeutic technique used in reconstructive arterial surgery for several pathological conditions including organ substitution, coronary artery bypass grafting in an ischemic heart (Ben Ali et al., 2018; Fahner et al., 2006; Kazemi et al., 2018; Wille et al., 2008), and infrarenal aortic replacement (Kieffer et al., 2004; Zatschler et al., 2009). A vascular allograft is the most attractive option for revascularization. However, the major challenge in transplantation is reducing vascular endothelial injury, which can promote inflammation followed by thrombosis-mediated vascular graft failure (Ben Ali et al., 2018; Wille et al., 2008). To minimize vessel injury, protective strategies should be implemented immediately after meticulous graft harvesting, with an appropriate preservation solution. The latter is a critical factor that can influence the clinical outcomes of long-term graft storage (Ben Ali et al., 2018).
Several organ preservative solutions such as the University of Wisconsin (UW) solution, histidine-tryptophan-ketoglutarate (HTK) solution, and Celsior solution have been used in vascular transplantations (Kazemi et al., 2018). Cold 0.9% normal saline solution (NSS) is another preservative solution, which is used in some resource-limited (developing) countries (Kazemi et al., 2018). However, several studies have shown that NSS has a lower preservative efficacy when compared to other preservative solutions (Kazemi et al., 2018; Zatschler et al., 2009). Therefore, the addition of a supplement that could enhance the preservative efficiency of NSS and prolong vessel graft storage should be considered.
The secretory leukocyte protease inhibitor (SLPI) is a selective serine protease inhibitor that counteracts an excessive inflammatory response (Majchrzak-Gorecka et al., 2016). Previous studies showed that adding recombinant human SLPI (rhSLPI) to a heart preservative solution could restore myocardial contraction (Schneeberger et al., 2008). Besides, previous studies showed that treatment of human umbilical vein endothelial cells with rhSLPI could reduce cell damage and inflammation, prevent death, and preserve cellular cytoskeleton integrity in in vitro I/R injury (Nernpermpisooth et al., 2017; Paiyabhroma et al., 2018; Prompunt et al., 2018a, 2018b). This suggests an efficacious cytoprotective effect of rhSLPI on vascular endothelial cells. A previous study has also demonstrated that endothelial-derived SLPI not only reduces endothelial cell injury but also attenuates cardiomyocyte death in an in vitro ischemia/reperfusion model (Kongpol et al., 2019). Therefore, the in vitro endothelial protective effect of rhSLPI could provide basic information for implementation in vascular tissue. However, the effect of rhSLPI supplemented to 0.9% NSS on vessel graft preservation has never been investigated. Therefore, we hypothesized that 0.9% NSS preservative solution supplemented with rhSLPI could prevent isolated rat aorta injury and prolong storage duration.
Experimental animals
Adult male Wistar rats weighing 250-300 g (n = 6) were purchased from Nomura Siam International, Bangkok, Thailand. All animals were maintained at a controlled temperature (22 ± 1°C) with a 12-hour light:dark cycle at the Center for Animal Research, Naresuan University, Phitsanulok, Thailand. All protocols used in this study were approved by the Animal Use and Care Committee at Naresuan University (NU-AE601129) and conformed to the guidelines set by the American Physiological Society and the Animal Welfare Act.
Isolation of rat aorta
Rats were anesthetized by intraperitoneal injection of pentobarbital (100 mg/kg) and heparin (150 units). Deep anesthesia was closely observed and confirmed by a lack of both toe pinch and corneal reflexes. After that, the hearts were rapidly isolated. The thoracic and abdominal aorta were then excised quickly and placed in ice-cold NSS. The aorta was carefully cleaned of adjacent fatty tissue and cut into ring segments 3-5 mm in length. A total of 16 aortic rings per animal were incubated in 0.9% NSS in the presence or absence of 1 µg/ml rhSLPI and stored at 4°C for various preservation periods (0, 6, 24, and 48 hours). One set of aortic rings, preserved with and without rhSLPI at each period, was used for histopathological study. Another set of aortic rings was homogenized to collect protein and determine the inflammatory cytokines by enzyme-linked immunosorbent assay (ELISA).
Vascular histopathology
The histopathology technique used in this study was modified from Howat WJ and Wilson BA (Howat and Wilson, 2014). At the end of each preservation period, the aortic rings were fixed with 2.5% (v/v) glutaraldehyde for 24 hours. Then, the rings were further fixed with 10% (v/v) formalin pending a histopathological process.
Briefly, the rings were dehydrated, embedded in paraffin, and cut into 5-10-μm-thick sections (PFM Rotary, Cologne, Germany). Then, the sections were stained with hematoxylin and eosin (H&E) for histopathological analysis. All histopathological examinations were performed by an experienced pathologist, who was blinded to the experimental group; the examination and scoring of vascular pathology were performed under a light microscope (Olympus). Vascular endothelial cells were identified as a single layer of squamous-to-fusiform cells located in the tunica intima, close to the lumen of the blood vessel. In H&E staining, endothelial cells appear as a flat layer of squamous-to-fusiform cells characterized by a basophilic oval-shaped nucleus, which may protrude into the lumen of the blood vessel, and very scant eosinophilic cytoplasm. The vascular histopathology scoring criteria consisted of endothelial detachment, elastic membrane disruption, necrosis, endothelial degeneration/edema, and complete denudation (Kazemi et al., 2018). The severity score was graded as follows: grade 0 = no lesion, grade 1 = lesion in less than 25% of tissue, and grade 2 = lesion in more than 25% of tissue.
Determination of released lactate dehydrogenase (LDH) activity
At the end of each preservation period, the preservation solution was collected and stored at −20°C until analysis (Prompunt et al., 2018a). The released LDH activity was determined from the collected solutions using an LDH activity assay kit, which is a modified method based on the recommendations of the Scandinavian Committee on Enzymes (LDH SCE mod.). The kit was purchased from HUMAN (Wiesbaden, Germany). About 10 µl of the collected preservative at each period was mixed with 1 ml of reaction buffer and incubated at 37°C for 5 minutes. Then, 250 µl of substrate reagent was added. The solution was mixed, and the absorbance was read at λ340 nm. The mean absorbance change per minute (ΔA/minute) was used for calculating LDH activity with the following formula: LDH activity (U/l) = ΔA/minute × 20,000
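As a worked illustration of the kit formula above (not the authors' own script), the released activity can be computed from successive absorbance readings as follows; the readings are hypothetical.

```python
# LDH activity from the mean absorbance change per minute, using the kit factor
# quoted above: LDH activity (U/l) = ΔA/minute × 20,000.
absorbances_340nm = [0.512, 0.498, 0.485, 0.471]  # hypothetical readings taken 1 min apart
deltas = [a - b for a, b in zip(absorbances_340nm, absorbances_340nm[1:])]
mean_delta_per_min = sum(deltas) / len(deltas)
ldh_activity_u_per_l = mean_delta_per_min * 20_000
print(f"released LDH activity ≈ {ldh_activity_u_per_l:.0f} U/l")
```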
Tissue homogenization and protein extraction
Aortic rings preserved in NSS in the presence or absence of rhSLPI at each period were snap-frozen in liquid nitrogen and transferred to a −20°C freezer until analysis. About 50 mg of aortic ring sample was homogenized in 500 µl of homogenization buffer [20 mM Tris-HCl (pH 6.8), 1 mM Na3VO4, and 5 mM NaF] using a mortar and pestle (Mongkolpathumrat et al., 2019). Then, the tissue homogenate was centrifuged at 14,000 rpm for 10 minutes at 4°C. The supernatant was collected for further experiments.
Determination of protein concentration by Bradford assay
The total protein concentration of tissue homogenate was measured by Bradford protein assay reagent (BIO-RAD, Hercules, CA) (Mongkolpathumrat et al., 2019). About 50 µl of the protein samples were added into 2.5 ml of Bradford reagent (BIO-RAD, Hercules, CA) and incubated at room temperature for at least 5 minutes. The absorbance was measured using a spectrophotometer at λ595 nm. The relative protein concentration was calculated by the equation for the line generated in the Bovine Serum Albumin (BSA) standard curve. The protein concentration was used to adjust the amount of protein used for the determination of inflammatory cytokine by ELISA.
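A minimal sketch of this quantification step follows, assuming hypothetical standard-curve and sample absorbances rather than the values used in the study; the standard curve is fitted with a simple linear regression.

```python
# Sketch of the Bradford quantification: fit a line to a BSA standard curve and
# interpolate unknown samples. Standard concentrations/absorbances are hypothetical.
import numpy as np

bsa_ug_per_ml = np.array([0, 125, 250, 500, 750, 1000])          # example standards
a595_standards = np.array([0.00, 0.09, 0.18, 0.35, 0.52, 0.68])  # example absorbances

slope, intercept = np.polyfit(bsa_ug_per_ml, a595_standards, deg=1)

a595_samples = np.array([0.21, 0.44])               # example homogenate readings
protein_ug_per_ml = (a595_samples - intercept) / slope
print(protein_ug_per_ml)
```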
Determination of inflammatory cytokines by ELISA
ELISA was performed using a 2,2′-azinobis-(3-ethylbenzothiazoline-6-sulfonate) (ABTS) ELISA Buffer Kit (PeproTech®) (Mongkolpathumrat et al., 2019). The ELISA reagents were prepared at room temperature by gentle mixing. First, the pre-coating step was performed in a 96-well plate overnight. On the following day, the plate was inverted to remove the liquid and blotted on a paper towel. After that, the plate was washed four times using 200 μl of washing buffer solution. About 200 µl of blocking solution was added into each well and then incubated for 1 hour. Tissue homogenates (10 μg of protein) were added into the wells and incubated at room temperature for at least 2 hours, followed by washing 4 times again. Then, the detection antibody was added and incubated at room temperature for 2 hours. The plate was then washed again, and 100 μl of avidin-horseradish peroxidase conjugate (1:2,000) was added and incubated at room temperature for 30 minutes. After that, the ABTS liquid substrate was added and incubated at room temperature until the color developed. The absorbance was measured using a spectrophotometer at λ405 nm.
Statistical analysis
Statistical analysis was performed using commercially available software (GraphPad Prism version 7). All data were expressed as mean ± S.E.M. All comparisons were assessed for significance using an unpaired t-test or analysis of variance (ANOVA) followed by the Tukey-Kramer test or Chi-square test when appropriate. A p-value less than 0.05 was considered to be statistically significant.
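For illustration only, the kinds of comparisons described above could be run as follows with scipy; the group values are invented, and in practice a Tukey-type post hoc test (for example, statsmodels' pairwise_tukeyhsd) would follow a significant ANOVA.

```python
# Sketch of the statistical comparisons described above: an unpaired t-test for
# two groups and a one-way ANOVA across preservation times. Values are illustrative.
import numpy as np
from scipy import stats

nss_48h  = np.array([75.1, 77.9, 78.4, 76.8, 79.2, 78.1])   # example LDH, U/l
slpi_48h = np.array([60.2, 63.0, 61.4, 62.5, 60.9, 62.3])   # example LDH, U/l

t_stat, p_val = stats.ttest_ind(nss_48h, slpi_48h)           # unpaired t-test
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# One-way ANOVA across preservation times (example groups).
f_stat, p_anova = stats.f_oneway(
    np.array([0.4, 0.5, 0.6]),      # 0 h
    np.array([30.1, 33.5, 32.9]),   # 6 h
    np.array([41.0, 43.2, 42.8]),   # 24 h
)
print(f"F = {f_stat:.1f}, p = {p_anova:.4f}")
```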
Protective effect of rhSLPI: decreased tissue injury indicated by reduced released LDH activity
The aortic rings were preserved in cold NSS in the presence or absence of 1 µg/ml of rhSLPI for 0, 6, 24, and 48 hours. Then, the preservative solution was collected to determine the released LDH activity. The results showed that there was a significant increase in released LDH activity in the preservative solution at 6-48 hours when compared to the 0-hour control (6 hours: 32.25 ± 1.80 IU/ml, 24 hours: 42.37 ± 2.40 IU/ml, and 48 hours: 77.58 ± 1.60 IU/ml vs. 0.5 ± 0.29 IU/ml, p < 0.05). Moreover, the results showed that the released LDH activity in NSS with rhSLPI for 48 hours was significantly lower than that of NSS (61.75 ± 1.58 IU/ml vs. 77.58 ± 1.60 IU/ml, p < 0.05). There was a trend toward reduced released LDH activity in NSS with rhSLPI at 6 and 24 hours, but this was not statistically significant when compared to the NSS group (6 hours: 32.25 ± 1.80 IU/ml and 6 hours with rhSLPI: 25.86 ± 2.33 IU/ml, p > 0.05; 24 hours: 42.37 ± 2.40 IU/ml and 24 hours with rhSLPI: 33.59 ± 2.50 IU/ml, p > 0.05) (Fig. 1).
Protective effect of rhSLPI: decreased inflammatory cytokines
The protein extract from the aortic rings was used to determine the inflammatory cytokines TNF-α and IL-6 by ELISA. The results showed that vessels preserved for 48 hours in both groups had significantly increased TNF-α levels when compared to control (NSS: 338.9 ± 50.15 pg/ml, NSS with rhSLPI: 237 ± 21.98 pg/ml vs. control: 178 ± 19.03 pg/ml, p < 0.05). On the contrary, vessels preserved for 24 hours did not appear to be different when compared to control (NSS: 181.4 ± 6.66 pg/ml and NSS with rhSLPI: 150.3 ± 5.53 pg/ml vs. control: 178 ± 19.03 pg/ml, p > 0.05). However, the TNF-α level in the vessels preserved in NSS with rhSLPI for 24 hours was significantly lower than in the vessels preserved in NSS (150.3 ± 5.53 pg/ml vs. 181.4 ± 6.66 pg/ml, p < 0.05) (Fig. 2A). Besides, the level of TNF-α in NSS with rhSLPI for 48 hours was lower than in 0.9% NSS without rhSLPI but did not reach statistical significance (237 ± 21.98 pg/ml vs. 338.9 ± 50.15 pg/ml, p > 0.05) (Fig. 2A).
Protective effect of rhSLPI on vascular histopathology
Vascular histopathology was performed on aortic tissues preserved in NSS and in NSS supplemented with rhSLPI. Endothelial detachment, elastic membrane disruption, necrosis, endothelial degeneration/edema, and complete denudation were each classified as change or no change. The results showed that there was no significant difference in the number of vessels presenting histopathological changes across the different durations of preservation (Table 1).
To determine the severity of vascular histopathology as well as the effect of rhSLPI to preserve vascular integrity, the severity of endothelial detachment, elastic membrane disruption, necrosis, endothelial degeneration/edema, and complete denudation were graded on a scale ranging from 0 to 2 (Fig. 3).
For endothelial detachment, the results found that the vessels preserved in NSS with rhSLPI for 6 and 24 hours showed a significantly lower endothelial detachment score when compared to the other groups (Fig. 4A). On the contrary, the vessels preserved in NSS with rhSLPI for 48 hours did not show any significant difference (Fig. 4A).
Similarly, the vessels preserved in NSS with rhSLPI for 6 and 24 hours showed a significantly lower elastic membrane disruption score when compared to the vessels preserved in NSS (Fig. 4B). Although the elastic membrane disruption score of the vessels preserved in NSS with rhSLPI for 48 hours was not significantly lower than other groups, the score was shown to have a lower trend in the vessels preserved in NSS with rhSLPI.
The necrosis scores for the vessels preserved in both groups of preservatives for 6 hours were not shown to be different. On the contrary, the vessels preserved in NSS with rhSLPI for 24 hours showed a significantly lower necrosis score when compared to the other groups (Fig. 4C). There were no significant differences in necrosis score between the vessels preserved in NSS with and without rhSLPI for 48 hours though there was a trend of reduction in necrosis score in the vessels preserved in rhSLPI group.
Similarly, the score for endothelial degeneration/edema in the vessels preserved for 6 hours showed that the vessels preserved in NSS with rhSLPI had a significantly lower endothelial degeneration/edema score when compared to the vessels preserved in NSS (Fig. 4E). However, the score of complete denudations showed no significant difference in any group (Fig. 4F).
DISCUSSION
Among the crucial factors that affect the success of vascular reconstruction, the quality of the vessel graft, which is primarily determined by the graft preservative/storage solution, governs how well the endothelial structure and vascular functions are preserved (Woodward et al., 2016).
NSS is the most widely used preservative solution in vascular operative procedures (Kazemi et al., 2018; Weiss et al., 2009) as it is convenient and cheap. However, several studies have suggested unsatisfactory outcomes of its use, such as altered integrity of the endothelial layer and abolished vascular dilation (Weiss et al., 2009; Wilbring et al., 2011, 2013a, 2013b). Therefore, the preservative efficiency of NSS could be enhanced by adding supplements. Previous studies have demonstrated that adding various substances such as heparin with papaverine (Santoli et al., 1993) or 5% albumin (Weiss et al., 2009) could enhance the efficiency of NSS for vessel graft preservation. Previous reports have also shown that the cardiovascular-protective effect of rhSLPI could protect cardiomyocytes, cardiac fibroblasts, and vascular endothelial cells from ischemia/reperfusion (I/R) injury (Nernpermpisooth et al., 2017; Paiyabhroma et al., 2018; Prompunt et al., 2018a, 2018b). Recently, we demonstrated that endothelial-derived SLPI could protect cardiomyocytes from I/R injury (Kongpol et al., 2019), which suggests a beneficial effect of rhSLPI for cardiovascular applications. Interestingly, it has been shown that adding rhSLPI to a heart preservative solution could restore myocardial contraction during experimental murine heart transplantation (Schneeberger et al., 2008).
The major findings in the current study showed, for the first time, that rhSLPI could enhance the efficiency of conventional preservative NSS to preserve the isolated rat aortic vessels by reducing injury and inflammation and preserve the vascular structure. Furthermore, the experiments published by Schneeberger et al. were designed to test the effect of SLPI, either by gene knockout or treatment with recombinant protein, in transplanted murine hearts. The results showed that 200 μg of recombinant SLPI given intravenously immediately after transplantation, or diluted in cold-HTK preservation and flushed into the heart, could significantly improve a cardiac score, without the improvement of graft histology, and neither affected organ injury nor the inflammatory response. However, this study showed that only 1 μg/ml of rhSLPI (200 times less concentrate) dissolved in NSS could preserve vessel allograft by reducing tissue injury and inflammatory response and improve graft histology. Therefore, this study provides evidence that rhSLPI has the potential to be an alternative additive for vessel graft preservation.
The preservation in cold NSS could be affected by both profound low-storage temperature and bloodless-hypoxic conditions. These could induce biochemical and physiological stresses that are detrimental to vessel grafts and, consequently, cause tissue injury that eventually leads to graft failure. The determination of cellular injury could be confirmed by the detection of released (LDH -a cytosolic enzyme present in almost all cell types in the vasculature) activity. When the plasma membrane is ruptured or damaged, LDH is rapidly released into the surrounding environment (Chan et al., 2013;Kumar et al., 2018). Therefore, the determination of LDH activity is a marker of cellular injury. The vessels preserved in NSS with rhSLPI showed a lower released LDH activity (Fig. 1). This result indicated that rhSLPI could be an alternative additive of choice supplement to NSS by reducing vessel graft injury during storage.
A short episode of ischemia/reperfusion (I/R) injury occurs during the isolation of vessel grafting and the transplantation procedure, respectively (Kazemi et al., 2018;Schneeberger et al., 2008). The I/R injury is an inflammatory process that increases the expression of pro-inflammatory cytokines, particularly the tumor necrosis factor α (TNF-α) and interleukin-6 (IL-6), that cause cellular injury and death and induce leukocyte recruitment and platelet adhesion, which could lead to vascular occlusion and rejection after revascularization. These affect the success of organ transplants (Eltzschig and Eckle, 2011;Epelman et al., 2015;Kazemi et al., 2018). Moreover, ischemic injury could also induce a tissue inflammation by releasing pro-inflammatory cytokines (Yang et al., 2016).
One of the well-known functions of SLPI is anti-inflammation (Doumas et al., 2005), which corresponds to our findings demonstrating that vessels preserved in NSS with rhSLPI had lower TNF-α and IL-6 levels than control (Fig. 2). Similarly, in a previous study, pretreatment of adipocytes with rhSLPI resulted in downregulation of LPS-induced IL-6 gene expression and protein secretion (Adapala et al., 2011). Besides, mouse hearts preserved in a cold preservative with rhSLPI also showed decreased expression of pro-inflammatory mediators, including TNF-α, transforming growth factor (TGF)-β, nuclear factor kappa B (NF-κB), and eNOS (Schneeberger et al., 2008). Therefore, adding rhSLPI to a cold preservative could reduce pro-inflammatory cytokine levels, leading to decreased inflammation of vessel grafts.
Previous studies have shown that NSS is an acceptable preservative, although with limitations when compared to other preservative solutions (Kazemi et al., 2018; Zatschler et al., 2009). For example, the determination of preservation-induced changes in smooth muscle cell and ex vivo endothelial cell function in isolated porcine carotid arteries showed that NSS was the worst preservative solution, causing failure in the contraction and relaxation of the carotid artery, when compared to other preservatives such as UW, HTK, Celsior, or a modified HTK solution (Abrahamse et al., 2002). Besides, a comparison between HTK and physiological saline solution (PSS) or the newly developed preservation solutions (solution 8 and solution 9) in isolated rat aorta showed that PSS failed to develop vascular relaxation in isometric tension and endothelial nitric oxide synthase (eNOS) expression (Zatschler et al., 2009). Furthermore, human saphenous veins preserved in NSS showed a reduction of KCl- and phenylephrine-induced vascular contraction, as well as of endothelial-dependent and -independent relaxation and cell viability, when compared to UW solution, Celsior solution, and autologous whole blood (Wise et al., 2015). However, the effect of preservation on the vascular structure, as measured by vascular histopathology, was not considered in those studies, as there was insufficient information. In a previous study, isolated human femoral and iliac arteries preserved in NSS or UW solutions were assessed for vascular integrity changes using the histopathology technique on the 1st, 5th, 10th, and 21st days (Kazemi et al., 2018). That study demonstrated that there was no statistical difference in vascular pathological score between the NSS and UW groups until the 21st day (Kazemi et al., 2018). This was similar to our findings, indicating that there was no significant difference in the complete denudation score for the vessels preserved in NSS in the presence and absence of rhSLPI for 48 hours. However, preservation for 24 hours resulted in a lower pathological degenerative score, particularly for endothelial detachment, elastic membrane disruption, and necrosis, in NSS supplemented with rhSLPI (Table 1). Interestingly, the histopathological scores (Fig. 4A, C, and D) showed discrepancies in histopathological changes between 24 and 48 hours. It has been reviewed that the earliest event of vascular injury is endothelial edema, followed by endothelial necrosis, resulting in endothelial detachment (Woywodt et al., 2002). Therefore, at 6 hours after storage, we assumed that most endothelial cells were in the edema stage, which reflects why the endothelial edema score at 6 hours was the highest (Fig. 4D). As the injury progressed between 24 and 48 hours, necrosis of endothelial cells predominated, and therefore the endothelial necrosis score was higher, whereas that of edema was lower (Fig. 4C). At 24-48 hours, endothelial detachment occurs (Fig. 4A); the endothelial detachment scores at both 24 and 48 hours predominated, without a significant difference between 24 and 48 hours. At 48 hours, detached and, in particular, necrotic endothelial cells had also disappeared from the vascular tissue; therefore, the necrosis score observed at 48 hours was lower. These results provide novel information concerning the addition of rhSLPI to NSS to protect the vascular structure, which could potentially increase the success rate of vascular transplantation.
There are several limitations to be considered in this study. First, only the vascular structure was assessed, by vascular histopathological examination; the effect of preservation with rhSLPI on vascular physiological functions, measured by isometric tension, still needs to be investigated. Moreover, the expression of adhesion molecules such as cell adhesion molecules, P-selectins, and integrins on activated endothelial cells should be investigated further to evaluate the risk of vessel graft failure under rhSLPI supplementation in NSS. This was a preclinical study performed in rats. The sample size (n = 6) was calculated as appropriate by a statistician, with awareness that unnecessary wastage of resources would raise ethical issues. However, the sample size in this study might only be appropriate for a laboratory study and might not provide clinical significance. Therefore, we suggest using a larger sample size in future studies, especially clinical studies of human samples.
For the first time, this study showed that rhSLPI could enhance the efficiency of NSS as a vascular preservative solution. However, the experimental perspective should focus on the effect of rhSLPI on vessel preservation in an in vivo animal model of transplantation. To closely simulate real clinical settings, the clinical outcome of vessel patency post-transplantation of vessel grafts, preserved in a preservative solution supplemented with rhSLPI, should be determined. This could be performed using angiography to determine the vascular parameters such as wall thickness and lumen diameter, as well as atherosclerotic lesions. From a clinical perspective, the outcomes from this study could be implemented for human vessel grafting and other organ transplants, and the long-term effects of transplanted grafts preserved in preservative solution, supplemented with rhSLPI, should be determined. Furthermore, the effect of rhSLPI in a preservative solution for other vital organs, such as the kidneys, brain, and liver, should be considered for further study.
CONCLUSION
The supplementation of a vascular preservative solution with recombinant human SLPI (rhSLPI) could reduce vascular injury and inflammation while maintaining vascular integrity.
For the first time, the results have shown that adding rhSLPI to the preservative saline solution could prevent vascular injury and possibly extend the duration of graft storage before transplantation.
|
v3-fos-license
|
2021-01-16T05:05:13.114Z
|
2021-01-13T00:00:00.000
|
231610307
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-020-80775-3.pdf",
"pdf_hash": "45131dd14a9ba2cd2d3dd17eac86b11b6992645d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44584",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "45131dd14a9ba2cd2d3dd17eac86b11b6992645d",
"year": 2021
}
|
pes2o/s2orc
|
Stable coherent mode-locking based on π pulse formation in single-section lasers
Here we consider coherent mode-locking (CML) regimes in single-section cavity lasers, taking place for pulse durations less than atomic population and phase relaxation times, which arise due to coherent Rabi oscillations of the atomic inversion. Typically, CML is introduced for lasers with two sections, the gain and absorber ones. Here we show that, for certain combination of the cavity length and relaxation parameters, a very stable CML in a laser, containing only gain section, may arise. The mode-locking is unconditionally self-starting and appears due to balance of intra-pulse de-excitation and slow interpulse-scale pump-induced relaxation processes. We also discuss the scaling of the system to shorter pulse durations, showing a possibility of mode-locking for few-cycle pulses.
Mode-locking is a method to obtain short pulses directly from laser oscillators [1][2][3][4]. It is a common and very basic technique, used in virtually all areas of modern optics. Typical for applications is so-called passive mode-locking (PML), achieved by incorporating a nonlinear (saturable) absorber with suitable properties into the laser cavity. In such two-section cavities, generation of short pulses is achieved due to saturation of the amplifier/absorber section, and thus the pulse duration τp is larger than the polarization relaxation time T2 in the amplifier and absorber sections. Hence, in such PML-based lasers, the pulse duration is fundamentally limited by the inverse bandwidth of the gain medium 1,2,5. The opposite situation arises when the electric field in the cavity is so strong that the Rabi frequency ΩR 6 is larger than the inverse dephasing time of the medium, ΩR ≫ 1/T2. In this case, the pulse duration is typically smaller than the dephasing time, τp < T2, and the light-matter interaction taking place over the pulse duration is thus "coherent", so the mode-locking appearing there is often called "coherent mode-locking" (CML). The basic features and key differences between standard PML and CML are summarized in Table 1, where T1 denotes the population relaxation time.
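To make the two regimes concrete, the inequalities can be checked numerically for a hypothetical two-level gain medium, taking the Rabi frequency in its usual form ΩR = d12·E0/ħ. All parameter values below are assumptions chosen for illustration, not values from this work.

```python
# Quick check of the coherent-interaction conditions ΩR >> 1/T2 and τp < T2
# for a hypothetical two-level gain medium (all numbers illustrative).
hbar  = 1.054571817e-34   # J*s
d12   = 1.0e-29           # C*m, transition dipole moment (assumed)
E0    = 5.0e8             # V/m, peak field of the intracavity pulse (assumed)
T2    = 1.0e-12           # s, phase relaxation (dephasing) time (assumed)
tau_p = 100e-15           # s, pulse duration (assumed)

rabi = d12 * E0 / hbar    # Rabi frequency, rad/s
print(f"Omega_R = {rabi:.2e} rad/s, 1/T2 = {1/T2:.2e} s^-1")
print("coherent regime:", rabi * T2 > 10 and tau_p < T2)
```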
The key idea of CML 5,7,8 is to use the gain and absorber sections both in the coherent regime (τp < T2). In the absorber section, a pulse of self-induced transparency (SIT) 6,9,10 (a 2π pulse) is formed. Such a pulse is a solitary wave, which stably propagates in the absorber without losses. As such, it is also stabilized against perturbations, in particular against instabilities of the non-lasing state. The gain section, in contrast, has to be arranged in such a way that essentially the same pulse has an area of π instead of 2π. Besides, it is assumed that the gain section is nearly fully inverted at the moment when the pulse arrives. The π pulse is a "half of a Rabi oscillation" and thus it returns all the atoms of the gain section to the ground state, so that the energy is fully transferred from the medium to the pulse. The resulting pulse duration was predicted to be able to reach even the single-cycle level 11,12. This is in agreement with the theoretical prediction 11-13 and experimental demonstration 14 of Rabi oscillations and other SIT-based pulses at the few- and single-cycle level. CML potentially allows passive mode-locking in quantum cascade lasers 15-17, which is known to be otherwise virtually impossible because of too fast carrier relaxation times 18. Besides, CML can arise if the absorber section works in the coherent regime 19-21 whereas the amplifier section is in the saturable regime. This type of CML was recently demonstrated experimentally 22-25. In 26, it was shown that CML should arise as a very stable and even self-starting regime if the cavity round-trip time is of the order of T1, allowing the medium to relax enough in between the pulses.
One of the signatures of CML, making it different from PML, is that the pulse duration decreases with increasing output power 24,25. In the case of the coherent absorber section, this is also easy to understand.

Table 1. Standard (incoherent) passive mode-locking (PML) and coherent mode-locking (CML).
- PML: based on incoherent (τp > T2) gain/absorption saturation. CML: based on Rabi oscillations.
- PML: a significant part of the energy is left in the absorber. CML: almost no losses in the absorber (2π pulse of SIT).
- PML: only part of the energy is taken from the amplifier. CML: almost all energy stored in the amplifier is taken (π pulse).
- PML: τp is fundamentally limited by T2. CML: τp can be much smaller than T2.

… study homogeneously broadened media via direct numerical solution of the MB equations; in the "Condition for few-cycle pulse generation in single-section laser" section we derive the conditions of few-cycle pulse generation; finally, in the "Conclusions" section we discuss the results and draw the conclusions.
Coherent pulse propagation and area theorem
An important quantity describing the pulse dynamics in the coherent regime is the pulse area, defined as 9

Θ(z) = (d12/ħ) ∫_{−∞}^{+∞} E(t, z) dt,

where d12 is the transition dipole moment of a two-level atom, ħ is the reduced Planck constant, and E(t, z) is the pulse envelope. Coherent pulse propagation in an amplifying or absorbing inhomogeneously broadened medium is described using the so-called area theorem 6,9,10:

dΘ/dz = (α0/2) sin Θ,   (2)

where α0 is the absorption (α0 < 0) or gain (α0 > 0) coefficient per unit length, which is determined by the concentration N0 of two-level atoms, the transition dipole moment d12, the medium transition frequency ω0 and the inhomogeneously broadened spectral distribution function g(Δω), centered at ω0 and normalized so that ∫_{−∞}^{+∞} g(Δω) dΔω = 1. Equation (2) is derived assuming the following conditions to hold 6,9,10:

T2* ≪ τp ≪ T2,

where τp is the duration of the generated pulse, 1/T2* is the half-width of the inhomogeneously broadened line of the resonant medium and 1/T2 is the half-width of the homogeneously broadened line of a two-level atom. That is, it is assumed that over the pulse duration the individual dipoles belonging to different atomic sub-ensembles dephase. In particular, Eq. (2) is not valid for a homogeneously broadened medium. On the other hand, in the limit of a small signal and thus small area (sin Θ ≈ Θ), Eq. (2) describes an exponential decay or growth of the pulse area: Θ ∼ e^{α0 z/2}. In the case of a homogeneous medium, linearization of the Maxwell-Bloch equations near the non-lasing state gives a very similar growth/decay rate. The solution of Eq. (2) is

tan(Θ/2) = tan(Θ0/2) e^{α0 z/2},   (5)

where Θ0 is the initial pulse area. One can see that, apart from the trivial solution Θ = 0, the area of a stationary SIT soliton is Θ = 2πm for any positive integer m. Two branches of the solution of the area theorem for an amplifying medium are plotted in Fig. 1. In this case, an initial pulse of area 0 < Θ0 < 2π approaches the steady state, having the pulse area π, as the pulse propagates in the medium. At the same time, the pulse duration decreases. In the next section, we will use this approach to study the pulses arising inside a cavity.
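A quick numerical illustration of Eqs. (2) and (5), not part of the original paper: integrating the area theorem for a weak seed in an amplifier and comparing with the analytic solution shows the area approaching π. The gain coefficient, propagation length and seed area below are arbitrary.

```python
# Numerical check of the area theorem: integrate dΘ/dz = (α0/2) sin Θ for a weak
# seed pulse in an amplifier (α0 > 0) and compare with the analytic solution
# tan(Θ/2) = tan(Θ0/2) exp(α0 z / 2). Parameters are illustrative.
import numpy as np

alpha0 = 2.0        # gain per unit length (assumed, arbitrary units)
theta0 = 0.01       # small initial pulse area, rad
z = np.linspace(0.0, 10.0, 1001)
dz = z[1] - z[0]

theta = np.empty_like(z)
theta[0] = theta0
for i in range(len(z) - 1):                      # simple explicit Euler step
    theta[i + 1] = theta[i] + 0.5 * alpha0 * np.sin(theta[i]) * dz

theta_analytic = 2.0 * np.arctan(np.tan(theta0 / 2.0) * np.exp(alpha0 * z / 2.0))

print(f"final area (numeric)  = {theta[-1]:.4f} rad")
print(f"final area (analytic) = {theta_analytic[-1]:.4f} rad (pi = {np.pi:.4f})")
```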
CML in a single-section laser and the area theorem
Here, using the results of the previous section, we develop a diagram technique similar to the one introduced in Ref. 60 for a two-section laser. We consider CML in a ring-cavity single-section laser, having an amplifying section 1 inside the cavity, and operating in a unidirectional lasing regime, as shown in Fig. 2. The unidirectional lasing is supported by the nonreciprocal element 2.
The analysis of a traveling wave in a ring cavity laser is simple and physically transparent. On the other hand, for a ring cavity, a counter-propagating wave is not suppressed, in contrast to a two-section cavity, where such waves are ruled out by the nonlinear absorber. In practice, unidirectional generation can be set up by using a nonreciprocal intracavity element.
The branches of solutions of Eq. (2) in the amplifier are shown in Fig. 1. Let us assume that a short pulse with an infinitesimal area Θ_0 ≪ 1 passes through the amplifier 1 (see Fig. 2) in the coherent regime, gets reflected from the mirror M_3 with the amplitude reflection coefficient r, and then enters the amplifier once again. We assume for simplicity that the other mirrors M_1 and M_2 do not produce any losses. We also suppose that the interval between consecutive passes through the amplifier is long enough, such that the active medium is able to recover to its equilibrium state between the pulse passages.
One more question we have to address here is how the pulse area changes upon reflection from the mirror in our cavity. Suppose that the electric field of the incident pulse is given by Eq. (6), with slowly varying pulse envelope E_inc(t, z) and central pulse frequency ω_0. Besides that, we assume that the mirror is located at z = 0 and denote by Θ_inc and Θ_ref the areas of the pulse incident on the mirror and reflected from the mirror, respectively. Let us multiply both sides of Eq. (6) by e^(−iω_0 t) and integrate over time from −∞ to +∞, which gives Eq. (7). One can see that the first and second integrals in Eq. (7) represent (up to constant factors) the Fourier component of the incident pulse at the frequency ω_0 and Θ_inc, respectively. The third integral can be transformed using integration by parts, Eq. (8), where the first term on the right-hand side turns to zero due to the finite pulse duration. The commonly used slowly varying envelope approximation (SVEA) is expressed by Eq. (9). Therefore the validity of the SVEA Eq. (9) would allow us to neglect the second term on the right-hand side of Eq. (7) with respect to the first one. Moreover, the presence of the fast-oscillating factor e^(−2iω_0 t') under the integral sign in Eq. (8) can lead to even smaller values of the second term on the right-hand side of Eq. (7) than could be expected from Eq. (9). Indeed, from Eq. (9) one would estimate the ratio of the two terms on the right-hand side of Eq. (7) to be of the order of ω_0 τ_p ≫ 1. At the same time, if we take for example the envelope of a stationary π-pulse propagating in the amplifying medium with linear losses 6,10, which has a sech shape, the suppression is of the order of cosh(πω_0 τ_p), which is much larger than just a factor ω_0 τ_p. Considering the above, Eq. (7) finally shows that Θ_inc is proportional, up to a constant factor, to the Fourier transform of the incident pulse F_inc(ω) taken at the frequency ω_0. Exactly the same equality is obtained for the area of the reflected pulse, whose spectrum is F_inc(ω) multiplied by the mirror reflection coefficient. Therefore the areas Θ_inc and Θ_ref are simply related through the amplitude reflection coefficient of the mirror r(ω) at the frequency ω_0, assuming that the response of the mirror is broadband enough:

Θ_ref = r Θ_inc.    (10)

It is worth noting that the applicability of the relation Eq. (10) reduces to the applicability of the SVEA Eq. (9). For long enough pulses with ω_0 τ_p ≫ 1 the SVEA is reasonably justified, while for few-cycle pulses it cannot be fulfilled anymore.
Using the area theorem Eq. (2) and the branches of its solution (similar to those plotted in Fig. 1), we are now able to follow the evolution of the pulse area during a single round-trip in a ring laser cavity. As the pulse propagates in the amplifier, the corresponding point on the diagram moves from left to right along the amplifier branch from the point 1 to the point 2, see Fig. 3. This propagation is accompanied by an increase of the pulse area. After the pulse passes the amplifier, it is reflected by a non-ideal mirror and its area is thus reduced according to Eq. (10), which corresponds to the point on the diagram in Fig. 3 moving along the curve 2-3 from right to left to the point 3. Then, the pulse propagates in the amplifier once again along the other amplifier branch 3-4, and so on. One can expect that after many round-trips a stable self-pulsating regime sets up, with the pulse area in the vicinity of π. This limit cycle ABC is shown in Fig. 3 with red lines.
This limit cycle can be obtained analytically as follows. Let us denote by Θ_k the pulse area after k round-trips in the cavity, measured at the output of the gain medium in Fig. 2. According to Eqs. (5) and (10), the pulse area Θ_(k+1) after k + 1 round-trips in the cavity is related to Θ_k as:

Θ_(k+1) = 2 arctan[ tan(r Θ_k / 2) exp(α_0 L_g / 2) ].    (11)

From Eq. (11) one finds the pulse area in the steady-state regime Θ* as:

Θ* = 2 arctan[ tan(r Θ* / 2) exp(α_0 L_g / 2) ].    (12)

If we denote the function on the right-hand side of Eq. (12) as f(Θ), the stability condition of the steady state Θ* requires:

|df/dΘ|_(Θ=Θ*) < 1.    (13)

We note that the stability of the steady state Θ* in the sense of Eq. (13) does not automatically mean the stability of the initial system. It ensures, however, its stability with respect to perturbations with zero frequency.
From Eq. (12) one finds:

df/dΘ = r exp(α_0 L_g / 2) / [cos²(rΘ/2) + exp(α_0 L_g) sin²(rΘ/2)].    (14)

Therefore the stability condition Eq. (13) yields:

r exp(α_0 L_g / 2) < cos²(rΘ*/2) + exp(α_0 L_g) sin²(rΘ*/2).    (15)

If

r exp(α_0 L_g / 2) < 1,    (16)

Equation (12) has only one, non-lasing, steady-state solution Θ* = 0, and this solution is stable, since Eq. (13) is satisfied. On the other hand, if

r exp(α_0 L_g / 2) > 1,    (17)

Equation (12) has two steady-state solutions. The trivial one, Θ* = 0, is unstable, since Eq. (13) is not fulfilled. The other, non-zero solution 0 < Θ* < π/r is shown in Fig. 4 in dependence on the parameters r and α_0 L_g. Figure 4 shows that the stationary solution Θ* approaches π with increase of r or α_0 L_g. This steady state is always stable (in the sense of Eq. (14)). Indeed, since the derivative Eq. (14) is larger than 1 for Θ = 0 and smaller than 1 for Θ = π/r, at the intermediate point Θ* the derivative Eq. (14) must be smaller than 1, otherwise the equality Eq. (12) could not take place. This fact is demonstrated in Fig. 5, where the derivative Eq. (14) is plotted.
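A minimal numerical sketch of this mapping (the values of r and α_0 L_g below are illustrative assumptions, not those of the paper): iterating Eq. (11) from an infinitesimal seed area converges to the non-trivial fixed point Θ* close to π whenever condition (17) holds, and the derivative (14) at the fixed point stays below 1.

```python
import numpy as np

def roundtrip_map(theta, r, gain):
    """One cavity round trip, Eq. (11): amplifier (area theorem) followed by the mirror.

    r    : amplitude reflection coefficient of the output mirror
    gain : single-pass gain exponent alpha_0 * L_g
    """
    return 2.0 * np.arctan(np.tan(r * theta / 2.0) * np.exp(gain / 2.0))

r, gain = 0.8, 2.0                 # assumed values; r*exp(gain/2) ~ 2.2 > 1, i.e. Eq. (17)
theta = 1e-3                       # infinitesimal seed area
for _ in range(60):
    theta = roundtrip_map(theta, r, gain)
print(theta / np.pi)               # fixed point Theta*/pi, roughly 0.89 for these values

# derivative of the map at the fixed point, Eq. (14): a value below 1 means stability
dfdtheta = r * np.exp(gain / 2) / (np.cos(r * theta / 2) ** 2
                                   + np.exp(gain) * np.sin(r * theta / 2) ** 2)
print(dfdtheta)                    # ~ 0.35 < 1
```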
Numerical simulations
The diagrammatic technique presented above gives a qualitative picture of the evolution of the pulse area in a single-section laser with a unidirectional lasing regime in an inhomogeneously broadened medium. For homogeneously broadened media, the area theorem does not hold anymore. Nevertheless, here we show that basically the same dynamics takes place in homogeneously broadened media as well. Besides, we reveal the details of the bifurcation scenario, the scaling behavior of the pulse with the pump, and the relation between the pulse area and the pump.
The laser dynamics is modeled by the slowly-varying-envelope Maxwell-Bloch (MB) equations (18)-(20), where g(z) is the coupling coefficient proportional to the transition dipole moment d_12(z), κ = 4πω_0 d_12 N_0(z), F(z, t) = 4g(z)A(z, t)p_s(z, t), p_s(z, t) is the slowly-varying envelope of the imaginary part of the non-diagonal element of the density matrix of a two-level particle, n(z, t) is the population difference between the lower and upper energy levels of a two-level particle, n_0(z) = −1 is the stationary value of n(z, t) in the absence of the pulse for the amplifier (n_0 = 1 for the absorber), and A(z, t) is the real-valued slowly-varying amplitude of the cos-component of the electric field. The parameters of the two-level particles are the transition dipole moment d_12, the concentration of the particles in the gain medium N_0, the relaxation times T_1 = 1/γ_1 and T_2 = 1/γ_2, as well as the eigen-frequency of the medium ω_0. The set of equations (18)-(20) allows accurate modeling of the evolution of extended two-level media in a cavity, assuming relatively long pulse durations and intensities at which the Rabi frequency Ω_R ≪ ω_0, so that no multilevel dynamics come into play. The equations naturally take into account longitudinal multi-mode dynamics and the accompanying nonlinear coherent effects. In Eqs. (18)-(20), we dropped the equations for the real part of the non-diagonal element of the density matrix p_c(t) and the sine-component of the electric field A_s(z, t) 5, since in the case of the resonant light-matter interaction p_c = 0, and hence A_s = 0 6.
In the example that will be considered below, the following parameters were used: the wavelength λ = 0.7 µm, the reflection coefficient of the mirror r = 0.8, the cavity length L = 3 cm, the length of the gain section L_g = 1 cm, the transition dipole moment d_12 = 5 Debye, T_1 = 0.5 ns, T_2 = 0.25 ns.
First, we performed a set of simulations with gradually increasing concentration of the active particles N_0, each time starting the simulations from the non-lasing state perturbed by noise. We found the first threshold at around N_0 = N_t ≈ 6.7 × 10^10 cm^-3. Above this first threshold, the laser was operating in a CW mode. At a value of N_0 = N_f ≈ 0.165 · 10^14 cm^-3, small pulsations in the CW mode appeared, indicating its destabilization. This second threshold thus takes place at a rather high value of the pump, N_f/N_t ≈ 250. This is to be compared to the estimation for the RNGH instability threshold, Eq. (21), given in Refs. 29-31. Our numerical result is comparable with this estimation, although somewhat lower. Taking into account that Eq. (21) is only an estimation and was derived for a cavity with distributed parameters, whereas in our case the parameters change significantly across the cavity, we think that the CW instability at the second threshold does correspond to RNGH.
With further increase of the concentration, self-pulsations turn into a pronounced mode-locking regime with a single pulse per roundtrip, see Fig. 6a,b.
Above the second threshold, the dependence of the pulse duration on the concentration of amplifying particles N_0 was calculated numerically and is shown in Fig. 7. There, curve 1 shows the dependence of the FWHM pulse duration τ_p (normalized to the cavity round-trip time τ_c = L/c) on the reciprocal pump Q = N_f/N_0, i.e. the pump normalized to its value at the second threshold (of a single-section laser) N_f. In the region of Q from 0.4 to 0.8, the dependence is close to linear, which demonstrates a characteristic feature of CML: the pulse duration decreases with increasing power 24,25. Up to Q ≈ 0.4, the lasing takes the form of a single pulse. An example solution is shown in Fig. 6b. At Q ≈ 0.35, the nature of the solutions changes. In addition to the main pulse, a lower-intensity pulse appears. These are two coupled pulses of the 0π-pulse type, that is, the envelope changes its sign. An example of such a pulse is given in Fig. 6c,d. With further increase of N_0, that is, decrease of Q, the solution becomes irregular, but then settles to a harmonic mode-locking with two pulses in the cavity (not shown in the figure). Then, the solution becomes irregular again, after which regular solutions with three pulses in the cavity show up. This scenario, with an increasing number of pulses followed by an irregular regime, repeats itself (not shown).
It is interesting to compare the dynamics to the case of smaller T_2, that is, out of the coherent regime. Such a comparison is made in Fig. 8. In Fig. 8a a simulation is shown with the same parameters as in Fig. 6b, but with 10 times smaller T_2. This results in a CW regime, since the excess over the first threshold also decreases, according to Eq. (4). To return to the same excess over the lasing threshold we need to increase the pump by the same ratio. As seen in Fig. 8b,c, this leads to irregular pulsations. From these simulations we see that the mode-locking in the coherent regime (for large T_2) is more stable and survives higher pump levels than the mode-locking in a laser with small T_2. In comparison, if we increase T_1, keeping all other parameters fixed, then the pulse duration, as given by numerical simulations, increases, since, due to the decrease of the pump rate N_0/T_1, the overall power also decreases. Furthermore, we compared our simulations of a single-section laser with a laser containing both an amplifier and an absorber and working in the CML regime. The length of the absorber section was taken to be half of the length of the amplifier, the concentration was three times less than in the amplifier section, and the dipole moment was twice larger. This twofold difference in the dipole moments is necessary for the implementation of coherent mode-locking in a two-section laser 7. The relaxation times were taken as: T_1 = 0.2 ns, T_2 = 0.1 ns. In contrast to a single-section laser, self-pulsations start at a higher pump level and exist in the range from Q = 0.57 to Q = 0.1 (see Fig. 7, curve 2). After that, the mode-locking regime becomes unstable, and several pulses appear in the generation. With further increase of N_0, harmonic mode-locking was observed. As in the case of a single-section laser, instability zones alternating with harmonic mode-locking zones took place. Comparison of the curves 1 and 2 in Fig. 7 shows that, in the given example, the minimum pulse durations in the mode-locking regime differ by a factor of 3 between the single- and two-section lasers. That is, an absorber makes it possible to reduce the pulse duration, in comparison to the single-section laser without an absorber. This happens via more effective prevention of the development of "tails" of the pulses that arise in a single-section laser, that is, via better protection of the non-lasing background after the pulse against perturbations. However, the achievable decrease of the pulse durations is not as dramatic as could be expected. In both cases, the area of the pulse after the amplifying medium in the mode-locking zone was close to π. The corresponding dependence is shown in Fig. 9.
Figure 8. (a) A(t) for the same parameters as in Fig. 6b but with T_2 = 25 ps (10 times smaller than in Fig. 6b); (b) A(t), and (c) A²(t) for T_2 = 25 ps and N_0 = 0.45 · 10^15 cm^-3, 10 times larger than in (a), providing the same excess above threshold as in Fig. 6b. Other parameters are the same as in Fig. 6. This figure was created with Matlab R2016b (http://www.mathworks.com).

Figure 9. Dependencies of the pulse area at the output of the amplifying medium on Q for a single- (curve 1) and for a two-section laser (curve 2). The area was calculated numerically as the integral of the envelope over the complete roundtrip. This figure was created with Matlab R2016b (http://www.mathworks.com).

The area in Fig. 9 was calculated numerically as the integral over the pulse envelope over the whole roundtrip. Such a definition does not allow one to determine the area of the pulses when multiple of them are present in the cavity. On the other hand, in this way we can continue the definition of the area to the self-pulsing regimes with a small amplitude and even to the CW regime. With the area defined in this way, numerical simulations revealed another remarkable feature of a single-section laser: near the second threshold, the pulse area is still close to π, staying slightly larger than this value. It even slightly increases with the increase of the pump. After exceeding the second threshold, when self-pulsations start, the area begins to decrease. In the region where the stable mode-locking is achieved, the area is smaller than π. The dependence of the area on the pump for a two-section laser is also nonmonotonous: at large durations (low pumps) the area grows but then begins to decrease. Nevertheless, it also stays close to π.
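For illustration, the following minimal sketch (the variable names and the test pulse are my own assumptions; in the paper the envelope comes from solving Eqs. (18)-(20)) computes the pulse area exactly as described above, as the time integral of the envelope over one round trip scaled by d_12/ħ:

```python
import numpy as np

HBAR = 1.0545718e-34            # J*s
D12 = 5.0 * 3.33564e-30         # 5 Debye expressed in C*m, as in the simulations above

def roundtrip_area(t, A):
    """Pulse area over one round trip: Theta = (d_12 / hbar) * integral of A(t) dt.

    t : time samples covering one complete cavity round trip (s)
    A : real-valued slowly varying envelope A(t) at the amplifier output
    """
    return (D12 / HBAR) * np.trapz(A, t)

# Hypothetical check with a sech test pulse whose amplitude is chosen to give area pi
tau_p = 1e-12                                        # 1 ps test pulse
t = np.linspace(-10 * tau_p, 10 * tau_p, 4001)
A0 = HBAR / (D12 * tau_p)                            # since integral of sech = pi * tau_p
A = A0 / np.cosh(t / tau_p)
print(roundtrip_area(t, A) / np.pi)                  # -> approximately 1.0
```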
As we see, our numerical simulations give, in general, pulses with an area around π, similarly to the result for inhomogeneously broadened media predicted by the mapping in the previous section. Nevertheless, some differences from the mapping-based solutions do exist. First, in contrast to the mapping, the area can exceed the value of π. Also, differently from the mapping, direct numerical simulations are able to give us the pulse shape, which can vary significantly with the pump level. In particular, the mapping predicts that the solutions with the area around π arise directly at the first threshold. On the other hand, it says nothing about the corresponding pulse durations. Our results indicate that those nonzero solutions born at threshold according to the mapping may correspond to the CW solution described in this section. The mechanism determining stable self-starting mode-locking in our single-section laser is essentially the same as in the two-section CML laser described in Ref. 26 (because, as mentioned before, the second (absorber) section only introduces some stabilizing effect, without altering the dynamics). Namely, the passage of a π-pulse leaves nearly all the atoms of the gain section in the ground state. During the roundtrip time, the pump ensures that the population relaxes back. In this situation, if the cavity length is selected properly, the medium has enough time to relax before the next pulse comes. If the cavity length is too long (or, putting it the other way, the pump is too strong), the number of pulses over the roundtrip time increases, as was described before.
Condition for few-cycle pulse generation in single-section laser
The consideration above suggests that, in order to decrease the pulse duration, we need to decrease the cavity length. In this respect, it is useful to establish a general scaling of Eqs. (18)-(20) which would allow us to rescale existing solutions to shorter pulse durations. For this, let us suppose that all times in our system are decreased by a factor K: t → t/K, T_1 → T_1/K, T_2 → T_2/K, except the transition frequency ω_0 (and thus the wavelength λ) which we keep the same. This is possible since ω_0 only appears in κ, so we compensate this by modifying another variable entering κ only, as discussed below. The physical meaning of n and p_s requires that they remain intact under the rescaling: n → n, p_s → p_s. In order to keep the balance in Eq. (20), we need to rescale the space coordinate in the same way as time, z → z/K. This means that all intracavity elements, including the whole cavity length, must also be reduced K times. Besides, we keep g the same. From Eq. (18) we immediately obtain that A → KA. This, in turn, means that in Eq. (20) we need to rescale κ as κ → K²κ. We can achieve this by a corresponding change of N_0: N_0 → K²N_0. This all defines a rescaling which, being applied to Eqs. (18)-(20), leaves the equations unchanged; also all the possible regimes including mode-locking remain intact. For a mode-locking regime, the pulse duration decreases K times, but the pulse shape does not change, and the ratio τ_p/τ_c of the pulse duration τ_p to the cavity round-trip time τ_c, as well as the pulse area, remain the same. We note that this is not the only rescaling which is possible in Eqs. (18)-(20), but we find this particular one the most suitable for practical realizations.
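A minimal sketch of this rescaling (the function and the example values are assumptions for illustration, not a prescription from the paper): times and lengths are divided by K, the concentration N_0 is multiplied by K², and consequently the pump rate N_0/τ_c grows as K³.

```python
def rescale_parameters(K, params):
    """Apply the K-rescaling of the text: t, z, T1, T2, L, Lg -> /K; N0 -> K^2 * N0.

    params : dict with cavity length L, gain-section length Lg, relaxation times
             T1 and T2, and concentration N0 (any consistent units);
             the wavelength / transition frequency is kept fixed.
    """
    scaled = dict(params)
    for key in ("L", "Lg", "T1", "T2"):
        scaled[key] = params[key] / K
    scaled["N0"] = params["N0"] * K ** 2     # so that kappa -> K^2 * kappa
    return scaled

# Illustrative example: shrink all time and length scales by K = 10
base = {"L": 3.0, "Lg": 1.0, "T1": 0.5e-9, "T2": 0.25e-9, "N0": 2.0e13}  # cm, s, cm^-3
print(rescale_parameters(10, base))
# -> L = 0.3 cm, Lg = 0.1 cm, T1 = 50 ps, T2 = 25 ps, N0 = 2.0e15 cm^-3;
#    the pulse shape and the ratio tau_p/tau_c stay the same.
```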
We explored this scaling by direct numerical simulations, as illustrated in Fig. 10. The dependence in Fig. 10a shows that τ_p/τ_c remains constant as we modify the cavity length (together with the other parameters, as prescribed by the scaling). Note the logarithmic scale in Fig. 10; L was varied from 3 m to 3 mm. Figure 10b shows the dependence of the maximum pulse amplitude on the cavity length as we change L according to the rescaling. This curve reveals that E_max grows at exactly the same rate as 1/L, as suggested by the scaling. Such a rescaling makes it possible, using the simulations above, to "rescale" the existing solutions and thus to estimate the parameters of the laser at which the mode-locking with the desired pulse duration takes place. By that, we should not, however, cross the boundary of validity of the slow envelope approximation (which means that we must consider pulses of at least several cycles in duration), since the rescaling mentioned above does not work anymore for the equations free from the slow envelope approximation. Taking the target pulse duration to be 10 optical cycles (23 fs) and assuming τ_p/τ_c = 0.08 (cf. Fig. 7), using our rescaling one can obtain K = 34 with respect to the configuration in Fig. 6b; thus the cavity and the gain section lengths should be about 0.88 mm and 0.29 mm, respectively, with T_1 = 1.4 ps, T_2 = 0.7 ps, and N_0 = 5.3 · 10^18 cm^-3. We note that the pulse repetition rate in such a short cavity should be as high as 0.34 THz. We checked these parameters by numerical simulations and indeed found stable mode-locking with the required pulse duration, dynamics similar to Fig. 6b, and a 34 times higher amplitude.
Although such short pulses are formally supported by Eqs. (18)-(20), the practically achievable pulse duration in every physical realization will most probably be limited by further physical processes. In particular, it is not easy to realize relaxation times in the picosecond range, as needed for such few-cycle pulses; besides, large pump powers in the range of hundreds of watts will be required in this situation, most probably leading to heating and related problems. Finally, realizing a traveling-wave cavity of 0.1 mm-scale length is also rather challenging.
Conclusions
To summarize, we have demonstrated that a stable, self-starting coherent mode-locking regime is possible in a single-section laser containing only an amplifying section. Coherent mode-locking, taking place if the decay time T_2 exceeds the pulse duration, was up to now known to appear in lasers containing both absorbing and amplifying sections. Nevertheless, if the cavity length and the pump/loss balance are tuned properly, that is, in such a way that the relaxation after the pulse passage is matched to the fast population change during the pulse, the resulting mode-locking is so stable that the absorbing section is not needed anymore and can be removed. The self-starting behaviour is ensured since at the required pump levels both the non-lasing and CW regimes are highly unstable. On the other hand, as our results show, in the coherent regime (large T_2) the pulsations are much more stable than in the incoherent case.
In the article, for inhomogeneously broadened media, we established the existence of the coherent mode-locking and its stability (to zero-frequency perturbations) by constructing a mapping based on the area theorem Eq. (2). In the case of nonzero frequency detuning between the pulse and the medium, the chirped pulse area theorem should be used 61. It yields exactly the same equation for the evolution of the pulse area as Eq. (2), just with a slightly different definition of the pulse area. Therefore, all results from the "CML in a single-section laser and the area theorem" section of our manuscript should hold for the respective chirped pulse area as well.
For homogeneously broadened media, we have shown the existence and stability of the mode-locking using direct simulations of Maxwell-Bloch equations. In this latter case we established that as the pump increases, mode-locking arises from a CW regime via self-pulsations caused by RNGH instability.
In the mode-locking regime, the pulse area is around π, that is, a half of the Rabi oscillation. Taking into account that just before the pulse arrival the medium is almost fully inverted, and just after the pulse passage this whole energy is fully transferred into radiation, this regime requires unusually strong pump levels. In the examples considered above, the pump needed for mode-locking exceeds the lasing threshold by hundreds of times. Such high levels might look completely unrealistic for, for instance, semiconductor lasers with electrical pumping, but other media/schemes such as optically pumped gases, alkali metal vapors, or optically pumped quantum dots could be promising candidates. This is supported by pulsed regimes already observed in gas- and vapor-based lasers 6,22-25,51-55. We note that CML is in fact dissipation-less in the sense that all the pump energy can be converted into radiation. So, if the parasitic dissipation channels such as heating and other non-radiative processes are suppressed, the high excess above threshold should not pose a problem. As mentioned before, the most promising candidates in this respect are gases and vapors. Also, an interesting possibility could be superfluid helium, since in the superfluid phase the coupling to phonons is suppressed.
Using the scaling established here, we showed that, at the cost of reducing the cavity length and increasing the pump power, pulse durations even in the few-cycle range can be obtained. The general scaling obtained by us is the following: to reduce the pulse duration K times, the cavity length and the round-trip time τ_c should be decreased by a factor of K, accompanied by a K³-fold increase of the pump power N_0/τ_c. Besides, the relaxation times must be decreased K times as well. Since even shorter, single-cycle, pulses were predicted to be achievable with CML in a two-section cavity 11,12, we expect that this can also be possible for the single-section scheme. This problem, however, requires a rather different theoretical approach and is beyond the scope of the paper.
|
v3-fos-license
|
2024-06-09T15:07:40.703Z
|
2024-06-07T00:00:00.000
|
270337857
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1080/00131857.2024.2363357",
"pdf_hash": "51474b8d09649223091f2e32e24fc3366458216f",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44585",
"s2fieldsofstudy": [
"Philosophy",
"Education"
],
"sha1": "6826ea737121a1bbc2c905acdffe1c418e5da0d5",
"year": 2024
}
|
pes2o/s2orc
|
Rousseau’s lawgiver as teacher of peoples: Investigating the educational preconditions of the social contract
This paper argues that Rousseau's lawgiver is best thought of as a fictional teacher of peoples. It is fictional as it reflects an idea that is entertained despite its contradictory nature, and it is contradictory in the sense that it describes 'an undertaking beyond human strength and, to execute it, an authority that amounts to nothing' (II.7; 192). Rousseau conceives of the social contract as a necessary device for enabling the transferal of individual power to the body politic, for subsuming individual wills under the general will, and for aligning the good of the individual with the common good. For the social contract to be valid, however, it needs to be preceded by a desire to belong to a moral community that can induce people to join willingly, and that will grant legitimacy to the laws established. If the social contract is the machinery that makes the body politic function, the lawgiver is 'the mechanic who invents the machine' (II.7; 191). In this paper we will look closer at the pedagogical functions of Rousseau's mythical lawgiver by first examining the relationship between the social contract, the general will and the lawgiver. Then, we aim to flesh out a pedagogical understanding of the figure of the lawgiver by way of the two educational dimensions of accommodation and transformation. Finally, we will argue for the importance of understanding Rousseau's lawgiver as a fictional device allowing for the fundamental and enduring educational task of balancing between the preservation and renewal of society.
Introduction
In so far as human societies need to be actively preserved and renewed in the face of the constant threat of mortality, social contracts need to be continuously renegotiated. At the same time, a viable social contract needs to rely on an assumed stability, allowing for the ongoing initiation and formation of new generations of citizens. This makes the social contract into a foundational educational concern. Regardless of how we construe the relation between education and politics, the very fact of the social contract makes education and politics interdependent. This interdependency gains in complexity in the ever-increasing plurality of contemporary societies. It is simply not self-evident that contemporary social contracts can be grounded in monolithic traditions and taken-for-granted systems of values. While, in a practical sense, education might benefit from a cohesive cultural foundation, it would suffer from resorting to nostalgia and ingrained prejudices that threaten pluralistic communities. Education, then, is stuck between having to assume a common good to strive for and acknowledging that individuals can no longer be assumed to be bearers of the same customs and traditions. This tension between social cohesion and cultural pluralism, with consequences for politics as well as for education, gives rise to the problem that the social contract is designed to address. For Rousseau, the stipulation of the social contract allows for a reinvention of society in light of the changing preconditions and limitations of human existence. Increasingly pluralistic societies present an ever-growing challenge for conceptualizing a viable social contract with severe political as well as educational consequences. While this development may render Rousseau questionable in terms of offering a reliable theoretical framework, we would argue that while circumstances have certainly changed, the fundamental problem addressed by Rousseau remains essentially the same.
In The Social Contract, Rousseau asks: 'How will a blind multitude, which often does not know what it wants because it rarely knows what is good for it, carry out by itself an undertaking as vast, as difficult as a system of legislation?' (II.6; 189-190 1 ). For Rousseau, it seems that for people to be able to arrive at a place where they can be entrusted to legislate reliably, they first need to be informed by a supreme being, and so 'Gods would be needed to give laws to men' (II.7; 190). In response to this apparent paradox, Rousseau introduces the mythical figure of the lawgiver, describing 'a superior intelligence who saw all of men's passions and experienced none of them' (II.7; 190). Arguably, the lawgiver is best thought of as a fictional teacher of peoples, ensuring that the general will and the laws are aligned with customs and traditions (even if people's desires are not). It is fictional as it reflects an idea that is entertained despite its contradictory nature, and it is contradictory in the sense that it describes 'an undertaking beyond human strength and, to execute it, an authority that amounts to nothing' (II.7; 192). Rousseau conceives of the social contract as a necessary device for enabling the transferal of individual power to the body politic, for subsuming individual wills under the general will, and for aligning the good of the individual with the common good. For the social contract to be valid, however, it needs to be preceded by a desire to belong to a moral community that can induce people to join willingly, and that will grant legitimacy to the laws established. If the social contract is the machinery that makes the body politic function, the lawgiver is 'the mechanic who invents the machine' (II.7; 191).
In this paper we will look closer at the pedagogical functions of Rousseau's mythical lawgiver by first examining the relationship between the social contract, the general will and the lawgiver. This goes to highlight Rousseau's conception of the social contract as a vehicle for promoting a more substantive understanding of freedom as a collective endeavor. In this endeavor, the lawgiver emerges as an imaginative, yet paradoxical, instigator of moral communities founded upon the recognition of a common good. Having established the political role of the lawgiver, we aim to flesh out a pedagogical understanding of the same figure by way of the two educational dimensions of accommodation and transformation. First, we explore the lawgiver's function in terms of accommodation, where education always needs to account for what already is. Second, we turn to the dimension of transformation, where the figure of the lawgiver is pedagogically motivated in terms of changing human nature. Finally, we discuss the implications of the figure of the lawgiver for the role of the teacher navigating beyond the narrow confines of individual and political desires. In conclusion, we will argue for the importance of understanding Rousseau's lawgiver as a fictional device allowing for the fundamental and enduring educational task of balancing between the preservation and renewal of society without underestimating the many challenges posed by an increasingly pluralistic world.
Rousseau's social contract and the problem of the general will
According to Rousseau, human beings have 'reached that point where the obstacles that interfere with their self-preservation in the state of nature prevail by their resistance over the forces each individual can use to maintain himself in that state. Then, that primitive state can no longer persist, and the human race would perish if it did not change its manner of being' (I.6; 172). Rousseau's preferred name for the instrument of that change is the social contract. As such, the social contract is not a fact of nature but an artificial construct by which the body politic can begin to take form. Looking back and around himself, Rousseau notices a host of bad examples of where the social contract had failed to ensure the successful transition from individual freedom in the state of nature to collective freedom in civil society. The social contract should ideally function by facilitating the transaction whereby people willingly give up some of their natural freedom in exchange for the benefit of belonging to a socially cohesive and powerful unity. Historically, however, Rousseau notes that the social contract had often been turned into a source of bondage and inequality, where most people were forced to surrender their individual freedom for the benefit of a few people's conventional privileges. 2 As such, the social contract had frequently been perverted and transformed from an instrument of collective empowerment into a ploy by the rich to illegitimately secure their property against the poor. Faced with this challenge, Rousseau seeks to investigate 'whether there can be any legitimate and reliable rule of administration in the civil order, taking men as they are and laws as they can be' (I.1; 163). The problem for Rousseau is to find a way of joining the desires and will of each individual with the desires and will of the collective, making each and every individual identify as a citizen with a common interest. The fundamental problem to which the social contract responds concerns the following question: How to find a form of association that defends and protects the person and goods of each associate with all the common force, and by means of which each, uniting with all, nonetheless obeys only himself and remains as free as before? (I.6; 172).
The challenge for Rousseau is to convincingly argue for ways in which people can remain autonomous while willingly subjecting themselves to a political collective, ruled by a common interest. For him, unlike for Hobbes, it is paramount that sovereign power is an expression of the will of the people and not the will of an arbitrary ruler (be it a king or a parliament). For Rousseau, an act of sovereignty 'is not an agreement between a superior and an inferior, but rather an agreement between a body and each of its members' (II.4; 185). Consequently, the only legitimate sovereign body is the people acting as one in its role as legislator. It is worth noting here that 'the people is a legislative force that wills, rather than an executive power that acts' (Daly, 2021, p. 1279). Because acts of sovereignty have 'no object other than the general welfare' (II.4; 185), they cannot interfere in particular cases but must apply equally to all. As such, 'the social compact establishes among the citizens an equality such that they all commit themselves under the same conditions and should all enjoy the same rights' (II.4; 185). For this reason, legislation and government are separate things for Rousseau.
The way in which Rousseau responds to the problem of reconciling freedom and obedience in civil society is to postulate the idea of a general will. Each individual has a particular will as a fact of nature. Fundamentally, the particular will of individuals is naturally geared towards their self-preservation. As members of a social compact, however, people must understand themselves as parts (citizens) of a greater unity governed by a common interest. It follows from this that the artificial construct of the body politic needs to be equipped with a will that is perfectly aligned with its desire for self-preservation, that is, the common interest of the people. For Rousseau, therefore, the general will is not the sum of the wills of all individuals, but a will emanating from the body politic as a whole (II.3; 182). The general will is an expression of the people in its role as sovereign. In such an association 'citizens share an understanding of the common good and that understanding is founded on the members' commitment to treat one another as equals by refraining from imposing burdens on other citizens that those members would be unwilling to bear themselves' (Cohen, 2010, p. 15). By making laws for themselves, based on a shared understanding of the common good, people can preserve their freedom at the same time as they willingly submit themselves to the authority of the state. This is so as 'the commitments that bind us to the social body are obligatory only because they are mutual, and their nature is such that in fulfilling them one cannot work for someone else without also working for oneself' (II.4; 184). Any freedom worth defending, then, is a freedom departing from the fundamental basis of equality manifested through the articulation of a general will and guaranteed by the people in its role as sovereign legislator. The precondition for the general will, however, is that individuals must identify themselves as citizens belonging to a unified people, a single body politic. The problem of the general will is that it seems to require the kind of unity that it is supposed to create. Recognizing this problem, Rousseau writes: In order for a nascent people to be able to appreciate sound maxims of politics and to follow the fundamental rules of statecraft, the effect would have to become the cause: the social spirit that is to be the work of the institution would have to preside over the institution itself, and men would have to be prior to the laws what they are to become through the laws (II.7; 192-193).
In other words, the emergence of the general will is complicated by the fact that it seems to presuppose a level of understanding of the common good that it is intended to promote. This is because as a rule, people will always be more or less guided by their particular will, which is not necessarily aligned with or cognizant of the general will. For Rousseau, this problem has two sides. First, people naturally want what is good for them, but they do not necessarily have the means to recognize it. Second, while 'the general will is always right […] the judgment that guides it is not always enlightened' (II.6; 190). The first aspect of the problem concerns the education of the will, making it conform to reason. The second aspect has to do with equipping the social body with a unified set of wants. When these two aspects are sufficiently dealt with, Rousseau believes that 'the union of understanding and will in the social body results from public enlightenment, and from this union results the smooth working of the parts, and, finally, the greatest force of the whole' (II.6; 190). Having identified the core problem of the general will as an educational problem concerning the union of understanding and will, Rousseau introduces the elusive figure of the lawgiver.
The lawgiver
Rousseau's lawgiver is inherently paradoxical. The lawgiver is conceived as a representative of the will of the people before the people, as a body politic, is even constituted. As such, the lawgiver is both a precondition and an (indirectly) active force in the life of the republic. The lawgiver is not to be confused with the executive power of the state. Rather, the lawgiver 'is the mechanic who invents the machine', while the executive power (the prince) 'is merely the workman who puts it together and makes it work' (II.7; 191). While the lawgiver acts as the creator of the social contract, it is the people in its role as sovereign that keeps it alive through the institutions, laws, and customs enabled by the lawgiver. Rousseau writes that '[a]t the birth of societies […] it is the leaders of republics who create the institutions, and afterward it is the institutions that form the leaders of republics' (II.7; 191). The lawgiver figures as a necessary corrective of the uneducated self-interest of the blind multitude, making it possible for such a multitude to transform into a united people, capable of upholding a legitimate social contract (II.7; 191).
Accordingly, while the lawgiver is necessary for laws to be made, legislation must always be done by the people. The people, however, must be made to desire laws that are conducive to their self-preservation as a body politic. This reformation of their desire and collective will, the amalgamation of particular wills into the general will, is the task of the lawgiver. The lawgiver, therefore, must be 'capable of changing, so to speak, human nature; of transforming each individual, who by himself is a complete and solitary whole, into a part of a greater whole from which that individual receives as it were his life and his being' (II.7; 191). In other words, people must be made to constitute a moral community before they can be trusted to assume responsibility as a sovereign. This concerns the art of making people willingly give up their natural independence for the interdependence of a 'partial and moral existence' (II.7; 191). On Rousseau's account, 'when the force acquired by the whole is equal or superior to the sum of the natural forces of all the individuals, the legislation can be said to be at the highest point of perfection it might attain' (II.7; 191). The transformation from blind multitude to well-functioning body politic requires an intervention from an outside force, a supreme lawgiver. The problem facing the lawgiver is that people cannot be compelled, lest they fall back into the role of a servant, and they also cannot be persuaded, because they lack a sufficient understanding of the benefits of submitting to the general will. The result of this, according to Rousseau, is that 'since the lawgiver can use neither force nor reasoning, he must of necessity have recourse to an authority of a different order which might be able to motivate without violence and persuade without convincing' (II.7; 192-193). Having no recourse to either force or reasoning, the lawgiver must appeal to people's existing customs and opinions. Influencing these, the lawgiver must endeavor to accommodate laws to the level of understanding of ordinary people (II.12; 202-203).
It follows from this that the lawgiver must have a keen sense of how people are differently constituted and of how different customs and traditions can be made to align with the common good. Rousseau offers the metaphor of the 'lawgiver-architect' to underline the importance of adapting teachings to the context in which people exist as follows: Just as an architect, before putting up a large building, examines and tests the soil to see whether it can support the weight, so the wise founder does not begin by drawing up laws which are good in themselves, but first examines whether the people for whom he intends them is fit to bear them (II.8; 194). This illustrates that by adapting laws to the context in which people actually exist, their opinions and beliefs can be reformed. Laws, customs, or opinions are never universally valid. For Rousseau, they are only valid to the extent that they help support the moral community and the institutions of the republic. In order for the reformation of the blind multitude to work, the lawgiver must be able to speak in a language that those addressed can understand. Without this, the laws offered would have no effect. For many people, religion functions as a foundation for making sense of the world and their particular place in it. As such, it offers a source of authority that the lawgiver can make use of as a tool when reforming people's opinions and beliefs (Riley, 1991, p. 57).
In summary, Rousseau's lawgiver is conceived as a necessary precondition for the creation of the society of the general will. At the same time, the lawgiver appears to be an impossible figure in so far as '[g]ods would be needed to give laws to men' (II.7; 190). However, as Judith Shklar (1969, p. 128) has argued, '[i]t is only in a human image that the goodness ascribed to God can really be made manifest'. The lawgiver provides such a human image without having to correspond fully to any concrete historical person. As such, the lawgiver becomes a kind of fictional device by which Rousseau can construct a tangible ideal, borrowing from the traits of various historical persons such as Moses, Plutarch, or Lycurgus in order to offer images of authority that are neither too general nor too restricted to sway people. It seems to follow from this that the lawgiver is best understood as a pedagogical device useful for illustrating the basic preconditions for setting up a legitimate social contract founded on an understanding of the common good. Dana Villa (2017, p. 38) argues that: The operation of the 'machine' - through institutions, laws, and procedures - continues the work of education and formation begun by the 'great legislator'. In many ways, the latter becomes, just like the tutor in Émile, a sort of 'man behind the curtain'. Ostensibly off the scene, he continues to exert a shaping influence on the people's civic identity through the very laws and institutions he originally put in place.
Understanding the lawgiver as a pedagogical device opens up for a discussion of the educational dimensions of Rousseau's political theory, where civic education is not primarily concerned with having students become political actors, but with allowing them to become integrated parts of a cohesive moral community. In what follows we will look closer at the pedagogical functions of the lawgiver in an educational context where the lawgiver emerges as a symbol for an authority allowing teachers to intervene with the moral formation of future generations. While Villa's main concern is to investigate tensions between different forms of political education developing in European modernity (where the concept of a people becomes prominent), we aim to look closer at the pedagogical structure inherent in Rousseau's lawgiver so as to be able to argue for the moral and political preconditions of all education. What concerns us here are the various implications these preconditions will have for education, allowing us to understand the role of the teacher navigating between political governance and the cultivation of individual freedom.
The pedagogical functions of the lawgiver
There is a god-like ambition in all education as Rousseau conceives it. In so far as education becomes a means for transforming individuals into citizens of a body politic guided by a general will, it must be 'capable of changing, so to speak, human nature' (II.7; 191). This is the fundamental assumption behind the formation of new states as well as the formation of young individuals. Matthews and Ingersoll (1980, p. 92) argue that '[l]ike the personae of 'the legislator' in The Social Contract […] Émile's tutor must play god; he must create a natural asylum, an incubator, outside the clutches of society where Émile is allowed to mature before he is ultimately returned to society'. While this might cause us to become suspicious of the intentions of the seemingly 'all-knowing' tutor, it can in fact be turned around to indicate the god-like aspiration of all educational interventions. Whenever we remove children from their families and gather them in schools, we are saying that their nature can be changed and improved using pedagogical means. This is how we can begin to think about the lawgiver as a pedagogical device, not in terms of an 'all-knowing' teacher who single-handedly determines the future path of citizens-in-the-making, but in terms of a basic pedagogical assumption saying that every society, whether it acknowledges it or not, turns to public education as a transformative mechanism for enacting its political will. What is important for Rousseau is that we recognize that in order for public education to be a legitimate means of transforming individuals, the political will must be sufficiently aligned with the general will as an expression of the common good.
In a very general sense, then, the overarching purpose of education can be said to be the initiation of new generations into cohesive moral communities. Obviously, this does not mean that education is reduced to moral education in a narrow sense, but that all education necessarily aims for the development and preservation of a moral community guided by an understanding of a common good. Even in seemingly non-moral domains, such as the teaching of particular dimensions of physics, there is an overarching moral framework indicating why learning physics is part of the human endeavor to live a good and full life together with others, striving for the same thing. The main function of the lawgiver, as a pedagogical device, is therefore to remind us of the importance of grounding all educational endeavors in a morality informed by the common good, making this connection into the 'unshakable keystone' (II.12; 203) for any well-functioning society.
To the extent that '[t]he lawgiver is […] a teacher of the people in much the same manner as the tutor who forms Émile's character' (Villa, 2017, p. 81), we might conceive of the lawgiver as a pedagogical device that operates along two basic dimensions relating to the dual educational intentions of preserving and changing the world. The first dimension, connected to preserving the continuity between the old and the new, is what we might call the principle of accommodation. The second dimension, required for changing human nature, operates according to the principle of transformation. To be clear, the dimensions of accommodation and transformation are key features of the political role of the lawgiver as portrayed by Rousseau in The Social Contract. What we mean to argue here is that these dimensions correspond well with the two fundamental educational aims of preservation and renewal. 3 The founding of a body politic is dependent upon understanding and adjusting to the mentality of the people who are to constitute it (II.7; 192). In order for people to truly become part of a body politic, their individual wills must be made to correspond to the general will and their nature must be transformed accordingly. In education, the starting point for renewing the world must always be to get to know the world enough to be able to understand what needs to be transformed. At the same time, educational initiation is dependent upon the preservation of the world as manifested through already existing human traditions, customs, and artifacts. In what follows, we will take a closer look at accommodation and transformation as fundamental pedagogical features of the lawgiver.
Accommodation
The principle of accommodation entails that pedagogical change is predicated on a sufficient understanding of the conditions for change. In a political context, '[t]he achievement of the lawgiver is […] to produce institutions and prescribe policies that are well-suited to the particular conditions of a society and fit the propensities of its people' (Matthews & Ingersoll, 1980, p. 87). Whether the institutions are successful or not depends on whether the lawgiver has managed to diagnose and understand the preexisting customs and opinions of the body politic-in-making. Recall the 'lawgiver-architect' who endeavors to examine and test the soil to see if it can support the weight before erecting the building itself (II.8; 194). Correspondingly, 'the wise founder does not begin by drawing up laws which are good in themselves, but first examines whether the people for whom he intends them is fit to bear them' (II.8; 194). The principle of accommodation, then, entails the proper understanding of what something is (a random collection of people with certain commonalities) before it can be cultivated into becoming something better (a moral community of equals). Dana Villa turns to Rousseau's Considerations on the Government of Poland (Rousseau, 1986a) as an example of how accommodation is thought to work: The rebirth of Poland and a national morality of the common good, one similar to but distinct from the civic virtue of the ancient regimes, requires only that these preexisting natural elements be allowed to ferment (fermenter) within a properly arranged political and educational environment (Villa, 2017, pp. 77-78).
As Villa understands it, Rousseau's recommendations for the government of Poland (and of Corsica) 'are framed entirely by the idea of cultivating the inherent qualities of the Corsican and Polish peoples' (Villa, 2017, p. 80). The cultivation of these qualities seems to require a process of accommodation where existing customs, traditions, and opinions can be made to align with the idea of a common good to strive for. It is not so much that the lawgiver instructs people in what to feel and think, as he enables people 'to become cognizant of their corporate will' (Gomes, 2018, p. 210). Gomes (2018, p. 209) describes the task of the lawgiver in terms of an accommodation to people's sentiments and emotions: 'by playing on the people's sentiments and emotions, this wise institution aims to unify the citizenry, making every individual aware of the general will that only he, supposedly, with his wisdom and intellectual distance as a foreigner, is able to perceive before a people's corporate identity becomes fully apparent to them'.
From a pedagogical point of view, accommodation functions as a necessary condition for individual and collective transformation. In order to initiate a process of educational change, students must come to understand their own customs, traditions, and opinions as part of a greater intersubjective world so that they can be gradually inducted into a cohesive moral community. Similarly, because the teacher can rely neither on coercion (force) nor on persuasion (reason) to validate his/her authority (cf. II.7; 192-193), he/she needs to be able to speak to students in a way that makes sense from the limited point of view of their existing framework of customs, traditions, and opinions. Their existing frame of reference is certainly not to be considered an end in itself, but rather as a necessary starting point for instigating processes of educational transformation.
Transformation
Political transformation, for Rousseau, requires educational transformation. The need for political transformation arises with the establishment of a legitimate social contract. The problem is not so much to identify a mutually beneficial form of association, as it is a problem of making individuals come to understand how their personal well-being is conditioned by the well-being of the people as a whole. It is, at bottom, a problem of expanding their individual freedom. In isolation, a person is free to the extent that 'he wants only what he can do and does what he pleases' (Rousseau, 1979, p. 84). In society, however, the will of the individual needs to be molded in line with the general will so that sociability and self-preservation are not at odds. In fact, by giving up a part of our individual power we can access a power much greater than that of our individual body (I.6; 173). This transaction, however, is neither self-evident nor straightforward. If each individual knew the limits of their constitution and power as well as the necessity of cooperation for self-preservation in society, the transaction would merely be a formal matter of agreeing upon a mutual social compact. Unfortunately, for Rousseau, this is not the case. Instead, people tend to misjudge their capacities and as a result the social contract is easily corrupted because people in general want more than they can acquire on their own. The problem, then, is not a political problem of coming to terms, but an educational problem of making people understand that their freedom is fundamentally dependent upon a realistic conception of their capacity. This, in fact, makes for the starting point of all education in Émile. To reiterate Rousseau's conception of freedom along with its educational consequences: 'The truly free man wants only what he can do and does what he pleases. That is my fundamental maxim. It need only be applied to childhood for all the rules of education to flow from it' (Rousseau, 1979, p. 84).
To become free in society requires a process of transformation in two steps. One, it involves forming a will based on a reliable understanding of oneself in relation to one's surroundings. Two, it demands the alignment of one's will with the general will so as to both contribute to and benefit from the strength of a genuine social contract. The first step requires an education of the senses geared at understanding what one is. Rousseau draws up the basic framework for such an understanding: 'He whose strength surpasses his needs, be he an insect or a worm, is a strong being. He whose needs surpass his strength, be he an elephant or a lion, be he a conqueror or a hero, be he a god, is a weak being. […] Man is very strong when he is contented with being what he is; he is very weak when he wants to raise himself above humanity' (Rousseau, 1979, p. 81).
Rousseau's intention is to make Émile strong in precisely this sense, so that he can join with a society of the general will without succumbing to a misconception of his will and power. Having aligned his will with his actual capacity, Émile must transform into a citizen, meaning that his individual will is made to correspond with the common good as expressed through the general will (Gomes, 2018). In this process, Émile exchanges one form of freedom for another (Matthews & Ingersoll, 1980, p. 95). The intended instigator of this transformation is in fact the lawgiver. This is explicitly stated by Rousseau when he writes that the lawgiver has to be 'capable of changing […] human nature; of transforming each individual, who by himself is a complete and solitary whole, into a part of a greater whole from which that individual receives as it were his life and his being' (II.7; 191). Being at once placed inside as well as outside of the social realm, the lawgiver is uniquely placed to influence people to transform into citizens without having to give up their own will and freedom in the process. For this, the lawgiver needs to appeal to the existing customs, traditions, and opinions of the people in question so as to make them commit freely to the 'self-imposed chains of love, brotherhood and respect' (Matthews & Ingersoll, 1980, p. 95).
In his unfinished treatise The Constitutional Project for Corsica (Rousseau, 1986b), Rousseau explores how what he conceived of as negative education in the case of Émile could be applied to the constitution of a new republic. Rather than actively instilling civic virtues in future citizens, the formation of the young (states as well as children) depends above all upon their being kept protected from bad influences. Hence, in the case of Émile, 'the first education ought to be purely negative. It consists not at all in teaching virtue or truth but in securing the heart from vice and the mind from error' (Rousseau, 1979, p. 93). In the case of Corsica, Villa (2017, p. 72) concludes that 'precisely because there is no such thing as a static state in nature - because change is always a constant - remaining the same requires active intervention, self-discipline, and the imposition of bulwarks designed to keep the modern world at bay'. The transformation sought after by Rousseau, whether it concerns Émile or Corsica, needs to be closely monitored and controlled so that dangerous prejudices and vices can be kept at bay until the young individual or republic reaches maturity. While accommodation concerns the lawgiver's ability to correctly diagnose and understand the opinions, established prejudices, and sentiments of a people, transformation can only begin when the same people are protected from the onslaught of prejudices cropping up in modern society. In the face of this, the task of the lawgiver is to attend 'to all the particular features of a people' and engrave 'morals, customs and opinions in their hearts' so as to be able to 'guide them in performing the great and difficult undertaking of establishing a system of legislation that is suited to them alone' (Gomes, 2018, p. 210).
To be sure, this is a political task, but it runs parallel to an educational task where the figure of the lawgiver can be understood in terms of a pedagogical device enabling and maintaining the necessary interdependency of accommodation and transformation. As a pedagogical device, the lawgiver represents an institution that facilitates and protects the very processes of social preservation and renewal. Without this institution, there is no authority by which to legitimately connect the transformational striving of the individual to the shared assumption of a common good. This means that while the pedagogical device of the lawgiver is impossible to confirm empirically, it needs to be presumed so as to avoid grounding educational authority in either coercion or persuasion. As paradoxical as it is, the idea of the lawgiver provides a way for education to balance in between the old and the new, making use of that which already exists in order to create something that has yet to manifest. This foundational paradox corresponds well with the political role of the lawgiver in so far as 'the social spirit that is to be the work of the institution would have to preside over the institution itself, and men would have to be prior to the laws what they are to become through the laws' (II.7; 192-193).
The role of the teacher
The role of the teacher in this is not that of a lawgiver. Because the teacher is always an individual, with a particular will, for the teacher to assume the role of a lawgiver would mean the institutionalization of an individual will rather than the cultivation of an understanding of a common good manifested through a general will. If this were the case, the teacher would become a despot and there would no longer be a general will, as the will of the teacher then is 'merely a particular will, or an act of magistracy; it is at most a decree' (II.2; 180). The teacher, instead, is placed in the unique position of being able to channel the authority of the lawgiver by presuming a common good that is powerful enough to transform particular wills into parts of a general will.
The idea of a common good, following Rousseau, cannot be reduced to either an expression of the particular will of the individual teacher or to the political will of any governing institution. For the teacher to claim the right to articulate the common good is a violation of the general will, as the general will would be reduced to the whims and desires of the teacher. In this scenario, education is grounded in the self-preservation of the teacher rather than in the cultivation of a moral community capable of understanding individual freedom as determined by the power of the body politic as a unity. For the teacher to simply follow the commands of the current governing institution faces the same problem in so far as the general will is once again replaced by a particular will, in this case the political will of a particular institution. In either case, teaching becomes reduced to a form of coercion and thereby stripped of any sense of mutuality. In order to avoid having education co-opted by particular wills (whether that of the teacher or a given political institution), the teacher is placed in the unique position of representing a general will that emanates from the people in its role as sovereign.
The role of education, and thereby the teacher, concerns the constitution of the republic but not its governance. In education, this constitution means preserving and renewing the world by both accommodating teachings to existing opinions, traditions, and customs as well as transforming particular wills into expressions of a unified general will. This, in turn, forms the basis of a social contract where people can 'obey with freedom and bear the yoke of public felicity with docility' (II.7; 193). While Rousseau's lawgiver is conceived as a political fiction designed to make possible a just and equal social contract, it can also be understood in terms of a pedagogical device apt for describing the crucial role of education, as well as the responsibility of teachers, in preparing the moral foundations necessary for preserving and renewing sustainable social contracts.
Conclusion
In this essay, we have argued for the relevance of understanding Rousseau's lawgiver not only as a political figure, but also as a pedagogical device useful for highlighting the role of education in grounding the moral foundations necessary for establishing and maintaining a just and equal social contract. By focusing on the dimensions of accommodation and transformation, we have endeavored to illustrate the parallel between politics and education for Rousseau. Assuming this reading to be viable, it still remains to question the currency of the idea of the lawgiver for the purposes of critically understanding the relationship between education and the social contract in a contemporary setting, defined by increasing tensions between plurality and individualism. We accept that in pluralistic societies, where radically different understandings of the world need to be reconciled, there is no sense in which Rousseau's ideal of starting from an untainted beginning (as in the case of Émile or Corsica) would be possible. Such a dream inevitably entails an eradication of the grounds for conflicts that seem to be the very starting point for contemporary efforts of establishing sustainable social contracts. The question, then, is whether the lawgiver is automatically rendered obsolete by the untimeliness of Rousseau's political recipe, or whether a pedagogical reading of the lawgiver can in fact help us identify a more basic role for education in terms of contributing to the formation of a general will determined by the common good.
Faced with this question, we would argue that unless education is decoupled from a common understanding of the good (a cohesive moral community allowing for human interdependency), there needs to be an educational response to the formation of a body politic. The pedagogical understanding of the role of the lawgiver can lend education a form of legitimacy unrestricted by sectarian or individualizing ideology. To be clear, the lawgiver will never solve the concrete political problems of organizing public education, but it enables the voluntary recognition of common interests shared across human differences as a necessary starting point for any educational endeavors, freed from the destructive forces of coercion and persuasion. As such, the understanding of the lawgiver can function as a necessary bulwark shielding education against the corruption of fickle political wills and religious dogmas. Understood in this sense, the symbolic protection provided by the fiction of the lawgiver might allow education to exist as a precondition for the establishment of a just and equal social contract, without becoming a political tool for fashioning citizens according to the desires of those currently in power.
While it is helpful to think of the parallels between education and politics in terms of the cultivation of a general will by way of the figure of the lawgiver, this raises practical problems.
Since the examples referred to by Rousseau as in some sense paradigmatic lawgivers (Moses, Plutarch, and Lycurgus) have lost their purchase on the contemporary political imagination, we might wonder how the lawgiver would be conceptualized in a contemporary setting. If, as we have argued, the lawgiver is a fictional device (serving to bridge the gap between particular wills and the idea of a general will), will it be enough to help teachers understand and act out their unifying role in a pluralistic society mirrored in the classroom? Rousseau's model is based on a community where the figure of the lawgiver can be identified in a more or less straightforward sense. In a contemporary setting, however, where many different (and sometimes incommensurable) traditions and customs clash and collide within one and the same society (and classroom), it becomes much less evident who or what to turn to as a unifying principle. If we want to retain the conception of education as the preservation and renewal of society, the judgment guiding the selection of what to preserve and what to open up for renewal seems to be dependent on something beyond the idiosyncratic desires of the individual teacher and the ever-changing will of political institutions. However, if we acknowledge that the figure of the lawgiver is necessarily fictional (so as to be accommodated to the mentality of different groups and people), then we might wonder whether this fiction will ever have the power to move a plurality of people towards a common good. At the same time, the paradox remains in so far as education, as preservation and renewal, needs to be assumed to depart from some form of unified striving towards a common good capable of harboring and directing a plurality of particular wills.
|
v3-fos-license
|
2023-04-13T15:30:59.333Z
|
2023-04-01T00:00:00.000
|
258093792
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/98070/20230411-16119-2q7r99.pdf",
"pdf_hash": "325be3a915120488213b687e2ff140d9bbfb15be",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44586",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"sha1": "ba3ddbc5f0a1519b473a2a37e90b3e1306ffc8da",
"year": 2023
}
|
pes2o/s2orc
|
Assessment of Setup Errors in Gynecological Malignancies Treated With Radiotherapy Using Onboard Imaging
Introduction Radiotherapy plays a vital role in the management of gynecological malignancies. However, maintaining patient position poses a challenge during daily radiotherapy treatment of these patients. This study identifies and calculates setup errors in interfraction radiotherapy and optimum clinical target volume-planning target volume (CTV-PTV) margins in patients with gynecological malignancies. Material and methods A total of 38 patients with gynecological malignancies were included in the study. They were treated with a dose of 50 Gy in 25 fractions over five weeks, followed by brachytherapy. All patients were immobilized using a 4-point thermoplastic cast. Anteroposterior and lateral images were taken thrice weekly for five weeks. Setup verification was done using kilovoltage images obtained with the Varian On-board Imager (Varian Medical System, Inc., Palo Alto, CA). Manual matching was done using bony landmarks, namely the widest portion of the pelvic brim, the anterior border of the S1 vertebra, and the pubic symphysis for the X, Y, and Z axes, respectively. Results A total of 1140 images were taken. The individual systematic errors ranged from -0.24 to 0.17 cm (LR), -0.15 to 0.19 cm (AP), and -0.36 to 0.29 cm (CC), while the individual random errors ranged from 0.04 to 0.36 cm (LR), 0.06 to 0.33 cm (AP), and 0.10 to 0.29 cm (CC). The calculated CTV-PTV margins in the LR, AP, and CC directions were 0.17, 0.18, and 0.25 cm (ICRU-62); 0.28, 0.31, and 0.47 cm (Stroom's); and 0.32, 0.36, and 0.55 cm (Van Herk's), respectively. Conclusion Based on this study, the calculated CTV-PTV margin is 6 mm in gynecological malignancies, and the present protocol of a 7 mm PTV margin is optimum.
Introduction
External beam radiotherapy (EBRT) to the pelvis is a routine treatment for patients with gynecological malignancies. Radiation therapy involves various processes: positioning, immobilization, simulation, delineation, and treatment delivery. Many studies have shown that the positioning of patients for pelvic radiotherapy is relatively inaccurate and subject to setup variations that are probably greater than at other sites in the body [1,2]. In addition, the motion of external skin marks relative to internal structures, the nonrigid nature of the area, patient rotation, and day-to-day variations in rectal and bladder filling make the pelvis relatively challenging to set up accurately.
Based on results reported in the literature, a setup margin can be defined to attain precision in radiotherapy planning and accuracy in treatment delivery. To ensure optimal dose delivery and a better outcome, it is essential to reduce setup errors during radiotherapy treatment. Portal imaging is key to minimizing setup errors and defining adequate clinical target volume-planning target volume (CTV-PTV) margins.
Any discrepancy between the planned and actual treatment position is known as a setup error. These errors result from differences accumulated during the treatment planning process and recur during treatment sessions, which causes a shift in the cumulative dose distribution. There are two types of setup errors. Systematic errors are repeatable errors in the treatment technique and occur consistently until corrective actions are performed. In contrast, random errors arise from occasional, variable factors such as patient movement during radiation treatment [3,4].
Accuracy and reproducibility of the patient's position are fundamental to the successful delivery of radiation therapy. However, uncertainty exists in radiotherapy due to setup errors, resulting in a difference between planned and delivered doses [5]. The primary aim of the study is to assess the setup errors in gynecological malignancies treated with radiotherapy using onboard imaging.
Materials And Methods
Approval from Shri Ram Murti Smarak Institute of Medical Sciences (IRB No: SRMSIMS/ECC/2019-20/127) was given at the outset.
This study included 38 patients with gynecological malignancies treated with image-guided radiation therapy (IGRT) in the Department of Radiation Oncology from November 2019 to April 2021.
Immobilization and simulation
All patients were immobilized using a fixed four-point thermoplastic cast system (2.5 mm thick pelvic cast, manufactured by Klarity). Patients underwent a contrast-enhanced CT (CECT) scan for radiotherapy planning (RTP). CECT whole-abdomen scans were done with a flat table insert. Simulation CT images were acquired with the patient supine in the treatment position, along with fiducial markers. The fiducial markers were small lead balls, 2 mm in diameter, placed on the pelvic cast. Images were obtained at a slice thickness of 3 mm and transferred via Digital Imaging and Communications in Medicine (DICOM-CT) into the Eclipse treatment planning system (version 8.6.17, Varian Medical System, Inc., Palo Alto, CA) [5].
Gross Tumor Volume (GTV)
GTV as seen on clinical examination and radiological imaging.
Organ At Risk (OARS)
Delineation of OARs was done per Radiation Therapy Oncology Group (RTOG) guidelines.
Bladder
Inferiorly starting from its base and superiorly to the dome.
Rectum
Inferiorly starting from the lowest level of ischial tuberosities (right or left). Contouring ended superiorly till the rectum lost its round shape in the axial plane and connected anteriorly with the sigmoid.
Bowel Bag
Inferiorly from the most inferior small or large bowel loop, or above the rectum or anorectum, whichever is most inferior. Abdominal contents were contoured, excluding muscle and bones. Contouring was stopped superiorly 1 cm above PTV.
Bone Marrow
This comprised the whole pelvic bone, lumbar spine, and bilateral proximal femur.
Left and Right Femur
Cranially first section of the femoral head and caudally up to the lesser trochanter.
Dose Prescription
All patients received standard radiotherapy at a dose of 50 Gy in 25 fractions over five weeks, along with concurrent cisplatin at 35 mg/m2 given as a weekly intravenous infusion.
Portal imaging and displacement measurements
After patient positioning, an orthogonal pair (anteroposterior and lateral) of portal images was acquired using an on-board kV imager on a linear accelerator. Reproducible bony landmarks were defined for evaluating patient setup errors as per the recommendations of the Royal College of Radiologists. The pelvic brim for X (left to right) displacements and the pubic symphysis for Z (superior to inferior) displacements were taken as the bony landmarks in anteroposterior portal images. The anterior border of the S1 vertebra was identified as the bony landmark for Y (anterior to posterior) displacements in lateral portal images [5]. On the X axis, displacement to the left was taken as positive and displacement to the right as negative. On the Z axis, displacement in the superior direction was taken as positive and displacement in the inferior direction as negative. On the Y axis, anterior displacement was taken as negative and posterior displacement as positive.
Calculation of systematic and random errors
Individual and population-based systematic and random errors were calculated along the X, Y, and Z directions from the signed displacements recorded at each imaging session.
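The original text breaks off before restating the error formulae. As a minimal sketch only, the following assumes the widely used convention (e.g., van Herk) in which a patient's individual systematic error is the mean of their signed displacements, the individual random error is the standard deviation of those displacements, the population systematic error (Σ) is the standard deviation of the individual means, and the population random error (σ) is the root mean square of the individual standard deviations; the function and variable names are illustrative and are not taken from the paper.

```python
import numpy as np

def setup_error_summary(displacements_cm):
    """Summarize setup errors for one axis (X, Y, or Z).

    displacements_cm: list of per-patient arrays of signed displacements (cm),
    one value per imaged fraction, using the sign convention described above.
    """
    # Individual errors: mean (systematic) and standard deviation (random) per patient
    individual_systematic = np.array([np.mean(d) for d in displacements_cm])
    individual_random = np.array([np.std(d, ddof=1) for d in displacements_cm])

    # Population errors: Sigma = SD of the individual means,
    # sigma = root mean square of the individual SDs
    population_systematic = np.std(individual_systematic, ddof=1)
    population_random = np.sqrt(np.mean(individual_random ** 2))
    return individual_systematic, individual_random, population_systematic, population_random

# Example with made-up displacements (cm) for three patients
example = [np.array([0.10, -0.05, 0.20]),
           np.array([-0.10, 0.00, 0.05]),
           np.array([0.15, 0.10, 0.20])]
_, _, Sigma, sigma = setup_error_summary(example)
print(f"Population systematic error = {Sigma:.2f} cm, random error = {sigma:.2f} cm")
```

In practice, this summary would be run separately for each axis over the displacements of all 38 patients to obtain the per-axis Σ and σ values reported in the Results.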
Treatment execution and verification
Treatment was delivered with 6 MV and 15 MV X-rays on a linear accelerator. All patients receiving pelvic radiation therapy underwent regular kilovoltage portal (kVp) imaging using the On-Board Imager on the linear accelerator. In each patient, kV images (AP and lateral) were taken three times a week (Monday, Wednesday, and Friday), and a cone beam CT (CBCT) image was taken once every week (Tuesday) for a total of five weeks. Thus, 15 orthogonal image pairs (AP and lateral) were acquired for each patient. A single observer performed all measurements to remove inter-observer bias.
Displacement measurements
The displacements of the 38 patients were calculated from 1140 images (38 patients × 15 sessions × 2 orthogonal images), and assessment was done in the X (left to right), Y (anterior to posterior), and Z (superior to inferior) directions. Displacement measurements of all 38 patients are demonstrated with scatter diagrams (Figures 1-3).
Mean Individual Error
Mean individual errors ranged from a minimum of -0.36 cm to a maximum of 0.29 cm, both on the Z axis.
Systematic and Random Errors
The calculated population systematic errors were 0.09 cm on the X axis, 0.10 cm on the Y axis, and 0.17 cm on the Z axis, and the calculated population random errors were 0.14 cm on the X axis, 0.15 cm on the Y axis, and 0.18 cm on the Z axis (Table 2).
CTV-PTV Margins
Population-based CTV-PTV margins were calculated for all patients using the ICRU Report 62, Stroom's, and Van Herk's formulae [5]. Using the ICRU recommendation, the CTV-PTV margins in the LR, AP, and CC directions were 0.17 cm, 0.18 cm, and 0.25 cm, respectively. The corresponding values were 0.28 cm, 0.31 cm, and 0.47 cm according to Stroom's formula and 0.32 cm, 0.36 cm, and 0.55 cm according to Van Herk's formula (Table 3).
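The margin recipes themselves are cited [5] rather than restated here; as a point of reference, the reported values are numerically consistent with the commonly used forms below, where Σ is the population systematic error and σ the population random error (the exact recipes used by the authors are an assumption on our part):

```latex
M_{\mathrm{ICRU\,62}} = \sqrt{\Sigma^{2} + \sigma^{2}}, \qquad
M_{\mathrm{Stroom}} = 2\Sigma + 0.7\sigma, \qquad
M_{\mathrm{van\,Herk}} = 2.5\Sigma + 0.7\sigma
```

For example, on the Z axis (Σ = 0.17 cm, σ = 0.18 cm) these give √(0.17² + 0.18²) ≈ 0.25 cm, 2(0.17) + 0.7(0.18) ≈ 0.47 cm, and 2.5(0.17) + 0.7(0.18) ≈ 0.55 cm, matching the CC values in Table 3.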
Discussion
There are two types of setup errors: random and systematic errors.
The systematic error is the deviation that occurs in the same direction and is of a similar magnitude for each fraction throughout the treatment course. For example, it could be due to target delineation error or a change in target position and shape between delineation and treatment (tumor regression, bladder and bowel changes) [5].
The random error is the deviation that varies in direction and magnitude for each delivery fraction. It may be due to patient setup error, changes in target position and shape between fractions, and changes during fractions (such as breathing). In addition, random errors are influenced by the immobilization technique, patient comfort, and departmental protocols [5].
Our study attempts to calculate the setup errors along all three axes and the final CTV-PTV margins. Daily setup was recorded in all three planes (X, Y, and Z), and kV images were matched based on bony landmarks. The setup errors identified were used to calculate the systematic and random errors. Varian offline data were used to document the shifts in all planes for the daily fractions of all patients. The clinical shifts were used to correct the setup errors, and they were read and applied by the same person. The mean deviation and SD for individuals and for the population were calculated. Based on these values, the CTV-PTV margins were calculated along the X, Y, and Z axes using Stroom's, Van Herk's, and ICRU 62 formulae.
In their study of 22 patients treated with pelvic radiotherapy, Noghreiyan VV et al. evaluated a total of 182 portal images. The population random (σ) and systematic (Σ) errors were calculated from the portal images in three directions (X, Y, and Z). The systematic setup errors ranged between 2.36 and 4.99 mm, while the random errors ranged between 1.51 and 2.74 mm. The CTV-to-PTV setup margin was in the range of 2.8-5.7 mm, 5.7-11.9 mm, and 6.9-14.4 mm in the X, Y, and Z directions, respectively [8].
The smaller setup margins in our study could possibly be due to the increased number of images taken (thrice weekly) along with CBCT imaging. The advantage of CBCT imaging is that multiple slices in different planes and better soft tissue resolution allow more precise correction of setup errors.
In our study, CTV-PTV margins did not exceed 0.55 cm, possibly because of rigid immobilization devices, which reduced the setup errors in all three directions. Using a 4-point thermoplastic cast for all the patients, the calculated CTV-PTV margins according to the Van Herk formula were 0.32 cm, 0.36 cm, and 0.55 cm in LR, AP, and CC directions, respectively. While analyzing the setup errors, it was observed that maximum displacement occurred in the craniocaudal direction.
In a study by Murrell DH et al. on 20 patients with cervical and endometrial malignancies, daily CBCT for each patient was registered in four dimensions to the planning CT. The bony landmarks chosen were the sacrum and pubic symphysis in the anterior-posterior (A-P) direction, the lower lumbar vertebrae and ischial tuberosity in the superior-inferior (S-I) direction, and the femoral heads laterally [9]. The median shift between IGRT methods was 2 mm, 1 mm, and 1 mm in the anterior-posterior, superior-inferior, and lateral directions, respectively. Maximum deviations were observed in the A-P direction [9]. The possible reason for the reduction in setup margin lies in the difference in imaging protocol (in the present study, CBCT images were taken only once a week). In addition, they used different bony landmarks for recording the setup errors in the craniocaudal, anteroposterior, and mediolateral directions. This led to more accurate daily reproduction of treatment and a reduction in setup margins. The benefit of daily CBCT verification needs to be weighed against the additional daily radiation exposure it entails.
In our institute, a similar study was done five years back by Kumar P et al. on 21 patients with carcinoma cervix. Patients were immobilized using full-body Vaclok cushions. A total of 242 images were evaluated. The individual systematic errors calculated were -6.6 to 4.9 mm on the X axis, -4.9 to 3.5 mm on Y-axis, and -6.3 to 6.5 mm on Z-axis. In contrast, individual random errors ranged from 0.5 to 8.3 mm, 0.7 to 5.2 mm, and 1.1 to 4.6 mm on the X, Y, and Z axis, respectively. CTV-PTV margins were 7.9 mm, 7.0 mm, and 9.1 mm on the X, Y, and Z axis, respectively. They found that safety margins of 1 cm would be adequate for all the patients [5].
In our study, utilizing 4-point thermoplastic casts for immobilization and thrice-weekly kV imaging (with weekly CBCT) for verification, we obtained CTV-PTV margins of less than 6 mm. As a result, the systematic and random errors obtained were also smaller than in their study.
In our institute, we initially followed a PTV margin of 1 cm [5]. With the introduction of rigid immobilization devices (thermoplastic casts) and better imaging techniques (IGRT), the margins were gradually reduced to 7 mm (symmetric margins). Our study calculated the maximum setup margin as 6 mm, suggesting that a further reduction of 1 mm in the setup margin could be made in the near future, eventually leading to more normal tissue sparing and less toxicity.
Our study has two limitations. First, rotational errors were not taken into account because a six-degrees-of-freedom couch was not available; these could be incorporated into further studies as equipment becomes available. Second, organ motion was not accounted for in our study. Hence, further studies are required in this direction.
We assessed the various setup uncertainties in each direction to generate our own CTV-PTV margins. Our study showed that the measured setup uncertainties were smaller than the estimated errors, and the institutional protocol of a 7 mm margin is adequate. These setup errors cannot be generalized because of variation in the procedures used, which include the type of immobilization, the infrastructure present, and the type of imaging system available. PTV margins can only be validated after a departmental study of setup errors. In our institute, patients were treated five times a week, but verification was based on thrice-weekly kV imaging and once-weekly CBCT as per the research protocol. Further reduction of setup margins would be justified by daily patient positioning, localization, and correction before treatment. The establishment of online correction protocols would further help improve treatment positioning accuracy [10]. An adequate immobilization system is a key factor in determining setup errors, and the appropriateness of the thermoplastic cast needs to be ensured. The expertise of the treatment team members and quality assurance are also essential. When determining the institutional protocol for PTV margins, these numerous factors, in addition to imaging protocols and immobilization, also need to be taken into account.
Conclusions
It is helpful to audit the PTV margins practiced in the department at regular intervals and to try to decrease setup errors by using stringent imaging protocols. The present PTV margin of 7 mm practiced in our department was found to be optimum. However, every institution needs to define its own PTV margins after research work, depending on the type of immobilization used and the available imaging facilities.
Additional Information

Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Shri Ram Murti Smarak Institute of Medical Sciences issued approval SRMSIMS/ECC/2019-20/127. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
|
v3-fos-license
|