Priorities for protecting health from climate change in the WHO European Region: recent regional activities Evidence of the impact of climate change on health is growing. Health systems need to be prepared and gradually adapt to the effects of climate change, including extreme weather events. Fossil fuel combustion as the driver of climate change poses a tremendous burden of disease. In turn, cutting greenhouse gas emissions in all sectors will achieve health co-benefits. If all countries meet the Paris Agreement by 2030, the annual number of avoidable premature deaths could total 138,000 across the entire European Region of the World Health Organization (WHO). Several international frameworks promote a stronger commitment by countries to implementing the necessary adaptations in the health sector and to addressing health considerations in adaptation measures in other sectors. The WHO has a mandate from its member states to identify solutions and help prevent or reduce health impacts, including those from climate change. National governments are continuing to establish public health adaptation measures, which provide a rationale for and trigger action on climate change by the health community. Effective national responses to climate risks require strategic analyses of current and anticipated threats. Health professionals need to play a proactive role in promoting health arguments and evidence in the formulation of national climate change adaptation and mitigation responses. To this end, country capacities need to be further strengthened to identify and address local health risks posed by climate change and to develop, implement and evaluate health-focused interventions through integrated approaches. Building climate-resilient and environmentally sustainable health care facilities is an essential pillar of health sector leadership to address climate change. 
Introduction

Some of the climatic changes of recent years have set new records, for example in global and European temperatures, winter Arctic sea-ice extent and sea levels [1]. Climate change is already affecting human health, with increasing exposure and vulnerability recorded worldwide [2]. Key reports of global and European relevance include the series of government-approved reports from the Intergovernmental Panel on Climate Change (IPCC), specifically the special report on the impacts of global warming of 1.5°C [2] and the fifth assessment report, which reviewed the evidence on climate change and health and provided summaries for policy-makers [3,4]. The health synthesis report summarizes the findings of the IPCC special report on 1.5°C of warming regarding the relationship between climate change and health [5]. In 2015, the Lancet Commission published its report on climate change and global environmental change [6]. In 2017, the WHO Regional Office for Europe presented an update on protecting health in Europe from climate change, drawing on the extensive body of new research and evidence [7]. Health impacts of climate change and variability are already being observed: direct impacts result from temperature increases, heat-waves, storms, forest fires, floods and droughts. Indirect impacts are mediated through the effects of climate change on biodiversity, vector distribution, allergens, ecosystems and productive sectors such as agriculture, water and food supplies. Climate change will affect everybody, but vulnerability to weather and climate change depends on people's level of exposure, their personal characteristics (such as age, education, income and health status) and their access to health services. Elderly people, children, outdoor workers and homeless people are particularly susceptible population groups [8,9]. 
The effects of exposure can be direct or indirect; for example, heat spells may directly cause heat stress, dehydration or heat stroke, while the worsening of cardiovascular and respiratory conditions or electrolyte disorders may be indirect consequences [10,11]. Climate change affects environmental conditions and social infrastructure, which also determine the health effects, ranging from death to loss of well-being and productivity. The pathways by which climate change can affect health have been explained in a number of conceptual frameworks [6,12]. Fig. 1 presents a combination and adaptation of these frameworks relevant to the WHO European Region [7]. The WHO Regional Office for Europe works with Member States to generate evidence, develop supporting tools and identify the best policy options to minimize the health effects of climate change. The aim of this article is to support the communication and implementation of existing global and regional commitments and priorities to protect health from the adverse effects of climate change.

Climate change is a matter of public health

Climate change is influencing mortality, injury and morbidity rates of both communicable diseases (such as vector- and waterborne diseases) and non-communicable diseases (such as cardiovascular and respiratory diseases, as well as mental health issues) [6,7,9]. The increasing frequency and intensity of extreme weather events due to climate change pose growing risks to human health. Heat-waves were the deadliest extreme climate events in Europe between 1991 and 2015, particularly in southern and western Europe. Several extreme heat-waves have occurred since 2000 (in 2003, 2006, 2007, 2010, 2014, 2015 and 2016) [1]. Exceptionally persistent and high July temperatures in 2018 baked countries across the WHO European Region, including northern Europe and even areas above the Arctic Circle in Lapland, setting the stage for catastrophic forest fires in Greece, for example [13]. 
Sweden experienced a large number of wildfires due to a prolonged heat-wave in summer 2018, which the Swedish Civil Contingencies Agency considered the most serious in the modern history of the country [14]. In the summer of 2017, Portugal was severely affected by wildfires, which occurred during a concurrent heat-wave and severe drought, killing 65 people [15]. In Greece, Italy and France, severe alert warning messages were also issued in 2017, indicating that even healthy and active people could suffer from possible adverse effects of the heat [17]. In 2010, many eastern European cities recorded extremely high temperatures, particularly in the Russian Federation, where the deaths attributable to these high temperatures were estimated at around 55,000 [18]. Urban populations are at risk of multiple exposures; for example, air pollution also increases the health risks associated with high temperatures [11]. High air temperatures can adversely affect food quality during transport, storage and handling. Elevated marine water temperatures accelerate the growth rate of certain pathogens, such as Vibrio species, which can cause foodborne outbreaks after consumption of seafood, and wound infections when injured skin is exposed to contaminated marine water [1]. Cold spells were the deadliest weather extremes in eastern Europe, with a cumulative death toll of 28 per 1,000,000 people over the whole period (1991-2015) [1]. Prolonged cold spells affect physiological and pathological health, especially among elderly people and those with respiratory and cardiovascular diseases [19]. By the end of the 21st century, two thirds of Europeans could be exposed to weather-related disasters every year, compared with only 5% during the period 1981-2010. Climate change is the dominant driver of the projected trends, accounting for more than 90% of the rise in the risk to humans [20]. 
Flood events registered since 1991 have caused the deaths of more than 2000 people in the WHO European Region, affected 8.7 million others and generated at least 72 billion Euro in losses [21]. Two thirds of deaths associated with flooding occur from drowning; the rest result from physical trauma, heart attacks, electrocution, carbon monoxide poisoning or fires associated with flooding. Infectious disease vectors such as mosquitoes and rodents may also increase as a result of flooding [1]. Climate change is likely to cause changes in ecological systems that will affect the risk of infectious diseases in the WHO European Region through water, food, air, rodents and arthropod vectors [7,22]. Waterborne pathogens may be transmitted through two major exposure pathways: drinking water (if water treatment and disinfection are inappropriate) and recreational water use. Heavy precipitation and flooding events can disrupt water treatment and distribution infrastructure, increasing the risk of ingress of faecal pathogens and thus of waterborne outbreaks [21,22]. Climate risks associated with increases in drought frequency and magnitude include impacts on the quality and quantity of freshwater resources, including eutrophication and algal blooms, with possible impacts on drinking-water quality. Droughts may also compromise food safety and security and cause mental health effects, vector-borne diseases and injuries due to lower than usual water levels in lakes and rivers used for recreation [23]. Many parts of the Mediterranean region experienced significant drought in 2017, including Italy, where the most severe anomalies in annual rainfall were 26% below the 1961-1990 average [24]. 
Water scarcity is accelerating across the European Region and can pose additional challenges for providing sustainable water and sanitation services. The occurrence of waterborne diseases is related to water quality and may be affected by changes in runoff, seasonality and the frequency of extreme events such as heavy rains, floods and droughts [12]. Areas under high water stress, for example, are estimated to increase from 19% in 2007 to 35% by the 2070s, by which time the number of additional people affected is expected to reach 16 million to 44 million [7].

Global and regional policy frameworks for climate action and health

An important aspect of tackling challenges around health and climate change is establishing mechanisms to monitor health impacts and setting targets to reduce these. Several international policy frameworks and platforms are in place (Table 1); these stipulate a clear mandate to foster stronger engagement of the health sector with climate change adaptation and mitigation. Key frameworks include the following:

- The Ostrava Declaration on Environment and Health [29]. The Sixth Ministerial Conference on Environment and Health took place in Ostrava, Czech Republic, in 2017 and brought together health and environment ministers and high-level representatives of Member States in the WHO European Region. Through the Ostrava Declaration, they committed to strengthening and promoting action to improve the environment and health at the international, national and sub-national levels. According to the declaration, countries will enhance national implementation by developing national portfolios of action on environment and health by the end of 2018, as stand-alone policy documents or as parts of others, respecting differences in countries' circumstances, needs, priorities and capacities. The portfolios should draw on Annex 1 to the declaration, a compendium of possible actions to facilitate its implementation, focusing on seven priority areas including climate change and health.

- The Protocol on Water and Health to the Convention on the Protection and Use of Transboundary Watercourses and International Lakes (Water Convention) [30]. In force since 2005, the protocol is the first legally binding multilateral agreement to ensure safe drinking water and sanitation in the WHO European Region. Its goal is to protect human health and well-being through improved water resource management and through the prevention, control and reduction of water-related diseases, as well as detection of, contingency planning for and response to outbreaks. A key priority of the protocol's programme of work is building climate-resilient water and sanitation services.

- World Health Assembly Resolution WHA61.19 on climate change and health [31]. All Member States in the WHO European Region, including the 28 EU countries, approved Resolution WHA61.19 in 2008, which urges countries to: include health measures in adaptation plans; build technical, strategic and leadership capacity in the health sector; strengthen capacity for preparedness for and response to natural disasters; promote active cross-sectoral engagement by the health sector; and express commitment to meeting the challenges of climate change and guide planning and investments.

- The thirteenth general programme of work 2019-2023 (GPW13) [32]. GPW13 sets out WHO's strategic direction towards improving the health of the world over the coming five years. It highlights the importance of addressing climate change and health, specifically in small island developing states and other vulnerable settings, and of strengthening cross-sectoral collaboration towards health in all policies.
Since 1999, with the adoption of the Declaration of the Third Ministerial Conference on Environment and Health held in London, United Kingdom [33], Member States of the WHO European Region have been committed to action towards mitigation of and adaptation to climate change.

Health in climate change adaptation

Coherent multisectoral action is necessary to effectively tackle the challenges posed by climate change. Health considerations are increasingly on the agendas of sectors and actors addressing climate change. In turn, consideration of climate change warrants a correspondingly prominent place on the health agenda. The effects of climate change may threaten the overall progress made in reducing the burden of diseases and injuries by increasing morbidity and mortality. Evidence suggests that there is a very high benefit-cost ratio for health adaptation, and that higher benefits are achieved with early adaptation action [34]. Under the UNFCCC process, the Paris Agreement on Climate Change is the first universal, legally binding global deal to combat climate change and adapt to its effects [25]. Its global goal on adaptation focuses on "enhancing adaptive capacity, strengthening resilience and reducing vulnerability to climate change, with a view to contributing to sustainable development and ensuring an adequate adaptation response in the context of the global temperature goal". With regard to health, implementation of the Paris Agreement provides its parties with opportunities to strengthen the climate resilience of their health systems, for example through improved disease surveillance, preparedness for extreme weather events and climate-resilient health facilities with undisturbed access to essential services such as energy, water and sanitation. 
With regard to the Paris Agreement, only 18% of the 53 WHO European Member States refer to health in their "intended nationally determined contributions" (INDCs) when outlining commitments to achieving climate-related policy goals and targets, compared with 67% of countries globally [35]. To promote and position health as a key driver for climate action, the health community needs to play an active role in awareness-raising and advocacy, and in strengthening the evidence base on the health impacts of climate change. This also includes integrating climate resilience into existing and future core health system programming and developing tools to assess the health implications of mitigation policies [36]. The WHO carried out targeted surveys in 2012 and 2017 among its Member States (with 22 countries participating in 2012 and 20 in 2017) to track the status of and progress in how health is positioned in existing climate change policies and programming in the European Region [37,38]. The surveys primarily focused on the governance of climate change and health, the status of health vulnerability and impact assessments, the existence of national health adaptation policies, the strengthening of health systems and the raising of awareness. The findings are summarized in Fig. 2. Governance mechanisms on climate change and health improved between 2012 and 2017. In 2012, 96% of responding countries had already established a multisectoral committee on climate change whose primary role is to coordinate actions and policies for both adaptation and mitigation, including relevant health aspects. In 2017, all responding countries confirmed the existence of such a governmental body. Similar progress could be observed in implementing health vulnerability and impact assessments. 
These assessments are a key instrument for informing decision-makers about the extent and magnitude of likely health risks attributable to climate change and for identifying and preparing for changing health risks. They can suggest priority policies and programmes to prevent or reduce the severity of future health impacts of climate change. WHO has developed guidance on the basics of conducting a national or sub-national health vulnerability and impact assessment [39]. In 2012, 77% of the 22 responding countries stated that they had conducted health-specific assessments of the impacts of, vulnerability to and adaptation to climate change. By 2017, the percentage of countries performing such assessments had increased to 85% of the 20 responding countries. Adaptation is defined by the IPCC as "the process of adjustment to actual or expected climate and its effects. In human systems, adaptation seeks to moderate harm or exploit beneficial opportunities. In natural systems, human interventions may facilitate adjustment to expected climate and its effects" [12]. As climate change is one of many factors associated with the incidence of numerous adverse health outcomes, policies, plans and measures addressing the health risks of climate change are needed to prevent and reduce the severity of current and future impacts. The development of adaptation plans and programmes for the health sector will depend on and vary according to the specific needs identified during vulnerability and impact assessments [40,41]. In 2012, national health adaptation plans or strategies on climate change had been developed in 64% of the 22 responding countries, with nine countries (40%) reporting that these policies had been approved by the government. 
In 2017, 15 of the responding countries (75%) had developed a climate change health adaptation strategy and an associated implementation plan, and these had been approved by the government in 13 countries (65%). In 2012, 83% of responding countries reported that they had taken action towards strengthening public health capacities and health systems to cope with the impacts of climate change; at 85% in 2017, this figure remained almost unchanged. Examples of measures taken by Member States to improve health systems included strengthened early-warning systems and responses, infectious disease surveillance and improved water and sanitation services. In 2012, 75% of responding Member States reported a high level of awareness of the relevance of the health effects of climate change and a sizeable influence on political developments, compared with 65% in 2017. Examples of well-developed health communications regarding extreme weather events showed that climate change and health are perceived as an important topic in political developments [37,38]. The need to minimize and prevent adverse climate change-related health outcomes highlights the need to include health as a consideration in all policies, across all sectors. The 2030 Agenda specifically addresses health (Sustainable Development Goal/SDG 3: Ensure healthy lives and promote well-being for all at all ages) and climate change (SDG 13: Take urgent action to combat climate change and its impacts), as well as a range of targets that support action to protect and promote health by increasing adaptive capacity and health resilience to climate risks, prioritizing mitigation actions that benefit health and pushing the health sector to become less carbon-intensive and more environmentally friendly [26]. 
While responding to climate change is a cross-government priority in many countries, it requires the health sector to work both internally and in a coordinated manner with other actors, often under a single climate change strategy and coordinating mechanism, to define adequate measures. Implementation of the UNFCCC is strongly supported by the 2030 Agenda, which explicitly acknowledges that the UNFCCC "is the primary international, intergovernmental forum for negotiating the global response to climate change". The health sector therefore needs to lead adaptation planning for health, working with other sectors to achieve health benefits.

Health in climate change mitigation

Both air pollutants and greenhouse gases (GHG) are emitted from many of the same sectors, including energy, transport, housing and agriculture. Short-lived climate pollutants such as black carbon, methane and ozone have important impacts on both climate and health. Fossil fuel combustion as the driver of climate change poses a large burden of disease, including a major contribution to the 7 million deaths from outdoor and indoor air pollution annually [7]. The Paris Agreement on Climate Change identifies and promotes measures that both mitigate climate change and improve health, for example by reducing carbon emissions, air pollution and the environmental impact of the health sector itself [25]. Countries in the WHO European Region have made very substantial commitments to reducing their GHG emissions. The combined commitment of the 53 Member States is equivalent to reducing overall GHG emissions in the region by 26% by 2030, compared with baseline emissions in 1990 [41]. Most have set targets to reduce carbon emissions below 1990 levels, while others have set emission caps or intend to reduce future emission growth rates relative to a "business as usual" scenario. Further reductions could be achieved through international cooperation, knowledge sharing and financial support. 
Most measures and policies to reduce GHG emissions can benefit human health if adequately designed and implemented. Carbon-cutting policies known to provide health benefits include those that reduce emissions of health-damaging pollutants through changes in energy production, energy efficiency, sustainable transportation and the control of landfills, among others. These commitments are reflected in Member States' official submissions to the UNFCCC Secretariat as INDCs, which reflect countries' ambition to reduce emissions given their capabilities and circumstances. If these commitments are met, the annual preventable premature mortality could amount to 138,000 deaths across the whole WHO European Region, of which 33% (45,350 deaths) would be averted across the 28 countries of the European Union in 2030 and beyond. In economic terms, the benefit of reduced emissions is equivalent to savings of 244-564 billion US dollars, or 1%-2% of the WHO European Region's gross domestic product (at purchasing power parity prices). The saved costs of illness (34.3 billion US dollars) amount to between 6% and 14% of the total economic benefit [41].

Conclusions

The protection of health from the effects of climate change has developed from a niche topic into one receiving high-level policy attention, as reflected in international agreements such as the Paris Agreement and the 2030 Agenda for Sustainable Development. Increasingly, the call to integrate health into all policies and the need to consider climate change in all policies are being recognized and acted upon. Understanding and awareness of the health risks of climate change are growing fast within the health community; these need to be reflected as core elements in training and career development for health professionals. 
Capacity-building is supported through the setting of norms and standards, the development of technical guidance and training courses and the mainstreaming of climate change and health topics into medical and public health training.

Table 2 Strategic objectives in the draft WHO global strategy and actions to advance implementation of the Ostrava Declaration (WHO [42] and WHO Regional Office for Europe [29]):

- Primary prevention: to scale up action on health determinants for health promotion and protection in the 2030 Agenda for Sustainable Development, including on the drivers of environmental risks to health. Possible action: to develop and implement a national strategy or action plan for public health adaptation to climate change, as an independent policy or within wider national adaptation and natural disaster risk reduction policies.

- Cross-sectoral action: to address determinants of health in policies in all sectors and ensure a healthy transition in energy, transport and other health-determining sectors to gain the health co-benefits of environmental protection. Possible action: to assess climate change risks to health in relevant national policies, strategies and plans.

- Strengthened health sector: to strengthen health sector leadership, governance and coordination roles in working together with other sectors of relevance to health, environment and climate change. Possible action: to include, on a voluntary basis, health considerations within Member States' commitments to the UNFCCC.

- Building support: to build mechanisms for governance as well as political and social support, including multilateral and other high-level agreements that tackle major driving forces and global threats, such as climate change. Possible action: to consider climate change adaptation and mitigation in the development of specific environment and health policies, such as those on air quality and on water and sanitation, bearing in mind that the cornerstones of adaptation include proper health protection infrastructure and housing standards.

- Enhanced evidence and communication: to generate the evidence base on risks and solutions, and to efficiently communicate that information to guide choices and investments. Possible action: to strengthen natural risk reduction policies and early-warning, surveillance and preparedness systems for extreme weather events and climate-sensitive disease outbreaks.

- Monitoring: to guide actions by monitoring progress towards the SDGs. Further possible actions: to develop information, tools and methodologies to support authorities and the public in increasing their resilience against extreme weather and climate health risks; to include the health aspects of climate change in education curricula, non-formal education and continuing professional education of the workforce; to scale up public communication and awareness-raising campaigns on climate change and health; to conduct or update national assessments of health vulnerability, impact and adaptation to climate change; and to support research on the effectiveness, cost and economic implications of climate change and health interventions, with a particular focus on mutual co-benefits.

The health sector can support and inform policy-making towards the full potential of healthy mitigation through intersectoral action, advocacy, health impact assessment, the identification of health co-benefits and win-win policy options, and leading by example in reducing its own carbon emissions. WHO aspires, among other things, to support national, regional and global advocacy; provide evidence through country profiles and business cases for investment; ensure technical and capacity-building support for implementation; and support climate resilience and energy and water access in health care facilities. 
The WHO thirteenth general programme of work (GPW13) is built around three strategic priorities, each setting a goal of 1 billion people and collectively known as the "triple billion" goals: 1 billion more people benefiting from universal health coverage, 1 billion more people better protected from health emergencies and 1 billion more people enjoying better health and well-being. GPW13 highlights the importance of addressing climate change and health, specifically in small island developing states and other vulnerable settings, and of strengthening cross-sectoral collaboration towards health in all policies. To this end, WHO aims "to ensure that health systems become resilient to extreme weather and climate-sensitive disease" and "to help countries to ensure that global carbon emissions are falling so as to bring health 'co-benefits'" by 2030 [32]. The draft WHO global strategy on health, environment and climate change, to be considered by the Seventy-second World Health Assembly in May 2019, aims to support GPW13 by providing a vision of, and way forward on, how the world and its health community can respond to environmental health and climate change risks and challenges up to 2030 [42]. For the priority area of climate change and health in the WHO European Region, the Ostrava Declaration on Environment and Health calls upon "countries to strengthen adaptive capacity and resilience to climate change-related health risks, to support measures to mitigate climate change and to achieve health cobenefits in line with the Paris Agreement". To achieve these objectives and those planned in the forthcoming strategy, countries can include in their national portfolios some of the proposed actions listed in Table 2. 
The health community should be fully engaged in the national intersectoral mechanisms for adaptation to climate change, including contributing to the development of the health components of national adaptation plans, of nationally determined contributions to the UNFCCC and of the national SDG implementation plans. Vladimir Kendrovski and Oliver Schmoll are staff members of the World Health Organization (WHO) Regional Office for Europe. The authors alone are responsible for the views expressed in this article and they do not necessarily represent the decision or stated policy of the World Health Organization.
Diet Knowledge, Self-Efficacy, and Motivation for Hypertension Preventive Behavior

DOI: https://doi.org/10.35654/ijnhs.v3i4.352

Abstract. Preventive behavior requires an individual's desire and ability to control blood pressure. The study aimed to examine dietary knowledge, self-efficacy, and motivation in relation to hypertension prevention behavior among elderly people with hypertension. A cross-sectional approach was applied in this study. The results showed that dietary knowledge and motivation were associated with hypertension prevention behavior (p < α = 0.05), while self-efficacy was not (p > α = 0.05). Conclusion: dietary knowledge and motivation influence hypertension prevention behavior, whereas self-efficacy does not. Further research is recommended on the effect of social support and on effective methods of providing information to increase self-efficacy.

INTRODUCTION

World Health Organization figures show that around 1 billion people suffer from hypertension, while data from the Indonesian Ministry of Health in 2016 recorded 63,309,620 cases and 427 thousand deaths (1). According to Riskesdas 2018, the prevalence of hypertension based on measurement results increased with age: 31.6% at ages 31-44 years, 45.3% at ages 45-54 years, and 55.2% at ages 55-64 years (2). Cases increase when risk factors are not controlled, such as smoking, unhealthy diets (low consumption of vegetables and fruit; excess sugar, salt, and fat), obesity, lack of physical activity, excessive alcohol consumption, and stress (3). The Karang Rejo City Health Center in Tarakan recorded 3,151 patient visits in 2017 (4). From January to October 2018 there were 3,244 cases, ranking hypertension 2nd among the 10 diseases with the most visits (5). Monitoring and early detection efforts are made to reduce the recurrence rate through hypertension management.
Management of hypertension involves pharmacological and non-pharmacological efforts as preventive behavior to control blood pressure. The Indonesian Cardiovascular Specialist Association (6) explains that non-pharmacological treatment consists of a healthy lifestyle, including weight loss, reduced salt intake, exercise, reduced alcohol consumption, and smoking cessation, to bring blood pressure to normal. Preventive behavior supports individual health by preventing complications and even death, and requires the desire and ability of the elderly to control blood pressure. A healthy lifestyle therefore becomes the leading choice for changing behavior, supported by the provision of information. This information is expected to give the elderly the knowledge to participate in efforts to prevent or reduce the risk of hypertension. The Karang Rejo City Health Center in Tarakan supports these efforts by providing hypertension information through media, consultation in the elderly polyclinic, and routine weekly physical activity with blood pressure and weight checks. This is in line with the program planned by the Ministry of Health, which comprises periodic health checks, avoiding cigarette smoke, diligent physical activity, a balanced diet, adequate rest, and stress management (7). In addition, Nuraeni, Mirwanti & Anna (2017) note that the factors influencing behavior must also be addressed to improve prevention and treatment behavior for hypertension (8). Because the elderly are a vulnerable population, health efforts are needed, one of which is changing risk behavior, in order to protect the elderly, who are vulnerable to complications of hypertension. Identifying elderly patients' diet knowledge, self-efficacy, and motivation is therefore a starting point for behavior modification efforts.
OBJECTIVE

The study aimed to examine the relationship of dietary knowledge, self-efficacy, and motivation with prevention behavior among the elderly with hypertension.

METHOD

We used a cross-sectional approach. The study was conducted at Karang Rejo, Tarakan, North Kalimantan, in 2019. Fifty respondents were selected from the population using simple random sampling. The inclusion criteria were: able to read and write, diagnosed with controlled primary hypertension, elderly, and willing to participate in the study. The exclusion criterion was unwillingness to fill in the questionnaire. Data collection was carried out in several steps: before the study, respondents were screened against the inclusion and exclusion criteria; each respondent then received an explanation of the study and signed an informed consent form. Completing the questionnaire took about 30 minutes. Validity was tested using the Pearson method with computer assistance: an item was considered valid if the computed r value exceeded the table r value, and reliability was tested by comparing the table r value with the computed r (9). Chi-square (χ²) test analysis was used to determine the relationship of the independent variables to the dependent variable (10).

Characteristics of respondents

The characteristics of respondents comprise age, gender, and level of education (see Table 1). Most respondents were more than 61 years old. Elderly respondents were chosen for the analysis of prevention behavior because older people are increasingly vulnerable to disease yet had controlled blood pressure. The respondents were 36% male and 64% female. This distribution sufficiently illustrates the characteristics of the respondents based on the inclusion criteria.
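The chi-square test of independence used in the study can be sketched for a 2×2 table with a small stdlib-only computation; the counts below are hypothetical illustrations (not the study's raw data), and the phi coefficient stands in for the "coefficient of association" reported later.

```python
import math

def chi_square_2x2(table):
    """Chi-square statistic for a 2x2 contingency table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_sums = (a + b, c + d)
    col_sums = (a + c, b + d)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_sums[i] * col_sums[j] / n  # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2

def phi_coefficient(table):
    """Phi (association) coefficient for a 2x2 table; the sign gives the direction."""
    (a, b), (c, d) = table
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

# Hypothetical counts: rows = good/poor diet knowledge,
# columns = good/poor prevention behavior (n = 50, as in the study).
table = [[18, 7], [8, 17]]
chi2 = chi_square_2x2(table)
# At alpha = 0.05 with 1 degree of freedom, the critical chi-square value is 3.841.
significant = chi2 > 3.841
print(round(chi2, 3), significant, round(phi_coefficient(table), 3))
```

With these illustrative counts the statistic exceeds the 3.841 critical value, so the null hypothesis of independence would be rejected at α = 0.05, mirroring the decision rule the paper applies to its p-values.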
Most respondents had elementary education (58%), although some did not attend school. Respondents were selected based on the inclusion criterion of being able to read and write, so that they could easily answer questions and communicate well.

Table 2 shows the relationship between knowledge and hypertension prevention behavior. There was a significant relationship between dietary knowledge and hypertension prevention behavior (p < 0.05). The coefficient of association between dietary knowledge and hypertension prevention behavior was 0.383, indicating a weak correlation.

The relationship between self-efficacy and hypertension prevention behavior

Table 3 shows the relationship between self-efficacy and hypertension prevention behavior. The statistical tests showed no significant relationship between self-efficacy and hypertension prevention behavior (chi-square test, p = 0.522 > α = 0.05). The coefficient of association between self-efficacy and hypertension prevention behavior was -0.104, indicating a negative (opposite) direction: the greater the self-efficacy score, the smaller the hypertension prevention behavior score.

The relationship between motivation and hypertension prevention behavior

The statistical test showed a significant relationship between motivation and hypertension prevention behavior (p = 0.000 < α = 0.05). The coefficient of association between motivation and hypertension prevention behavior was 0.323, indicating a weak correlation.

DISCUSSION

The respondents' predominantly elementary education level can influence how they perceive information, which is received and reinforced through repetition. Age is not a reason for the elderly to be unable to control blood pressure.
This is consistent with the activities carried out by the health center for the elderly, such as routine gymnastics, blood pressure control, drug administration, and information through health promotion media. It is also consistent with the research of Umar & Agustina (2016), which showed a significant relationship between age and hypertension (p-value 0.004) with an OR of 3.439. The respondent data were not differentiated by gender, in line with the criteria and sampling method used (11). The level of knowledge can affect an individual's health condition. Knowledge of the hypertension diet showed a significant relationship with prevention behavior. Good knowledge supports respondents' prevention behavior, possibly because respondents are at the evaluation stage, able to assess their disease condition and make prevention efforts by knowing which foods are restricted and should not be eaten to prevent hypertension relapse. According to Notoatmodjo (2013, in Hartono, 2016), there are six levels of knowledge: knowing, understanding, application, analysis, synthesis, and evaluation (12). A person is considered to have reached the evaluation stage when able to assess or appraise an object. Satisfaction of one or several needs leads to a higher tendency to seek more information (13), which is obtained once someone is able to assess an object. This is supported by Utomo, who confirmed a relationship between the level of knowledge about hypertension and efforts to prevent hypertension recurrence at the Posyandu in Blulukan Village, Colomadu District, Karanganyar District, with p = 0.032 (14).
Another study, by Astika, Muhlisin & Rosyid (2014), showed a relationship between the level of knowledge of the hypertension diet and hypertension recurrence in Mancasan Village, Puskesmas I Baki Sukoharjo, attributed to poor nutrition in the elderly (15). The results show that self-efficacy did not influence hypertension prevention behavior. Low self-efficacy offers little chance of achieving prevention behavior with respect to control of blood pressure, weight, diet, physical activity, and stress. Barriers experienced by respondents, such as poorly regulated eating patterns, smoking habits, and uncontrolled blood pressure, can affect self-efficacy and thus the prevention of hypertension; the ability to avoid these habits can certainly change the hypertensive condition. Self-efficacy in managing stress is needed by the elderly for risk-factor prevention behavior: Adria (2013) showed a significant relationship between stress and the occurrence of hypertension in the elderly, with p = 0.047 (p < 0.05). Regarding diet-related self-efficacy, the elderly's confidence in avoiding salty foods tends to be lacking (16). This certainly poses a risk of hypertension in the elderly. This statement is in line with research by Sukri, Wibowo & Wahyono (2019), which found a significant influence of eating patterns on hypertension in the elderly at Posyandu Kutillang 1, North Arut Subdistrict, Kotawaringin Barat, Central Kalimantan (17). Self-efficacy as preventive behavior is an individual's belief that he or she can carry out prevention based on the conditions experienced; high self-efficacy will influence the behavior of the elderly in preventing or avoiding conditions that aggravate the disease.
This is in line with Abdi's research, which reported that respondents with high self-efficacy have full confidence in changing behavior, supported by encouragement from health workers or advice from family or neighbors with a history of hypertension (18). A study by Amelia, Sinaga & Sembiring (2018) of hypertensive patients' self-efficacy and lifestyle found that most were aged 56-69 years; a patient's lifestyle experience, related to the duration of illness, can improve health, but if previous experience was not good, it will reduce motivation for self-care, so that health can decline (19). Motivation showed a relationship with prevention behavior. This reflects the stimulation of the desire to prevent illness and the participation of elderly respondents in groups for blood pressure control and physical activity. According to Prihartanta (2015), motives that become active and functional because of external stimuli are extrinsic motivations arising from driving factors; respondents became motivated to participate because of the elderly groups and monitoring by the public health center through continuous information sharing (20). The public health center's health promotion efforts also indirectly affect respondents' knowledge, which can then generate motivation for preventive behavior. The information given has had an impact on the elderly: Herlinah et al. showed that information support was the dominant factor in elderly behavior for controlling hypertension (21). Another study, by Prabandari et al., explained that the higher the respondents' level of knowledge about hypertension, the higher their motivation to have check-ups, and vice versa (22).
CONCLUSION

Dietary knowledge and motivation influence hypertension prevention behavior, whereas self-efficacy does not. Further research is recommended on the effect of social support and on effective methods of providing information to increase self-efficacy.

STRENGTH AND LIMITATION

Data collection should be carried out both quantitatively and qualitatively to obtain more in-depth information about motivation and self-efficacy. Further research is needed on preventive-behavior interventions for blood pressure and on comparison samples of elderly people with secondary hypertension to determine preventive action.
Teardrops at the Lake: Chemistry of New Kingdom to Makuria Glass Beads and Pendants Between the First and Second Nile Cataracts

International expeditions extensively excavated Lower Nubia (between the First and Second Nile Cataracts) before it was submerged under the waters of Lake Nasser and Lake Nubia. The expeditions concentrated on monumental architecture and cemeteries, including sites at Qustul and Serra East, where New Kingdom, Napatan, Meroitic, Nobadian, and Makurian-period elites and common people were buried, ca. 1400 BC-AD 1400. Although the finds abound in adornments, including bead imports from Egypt and South India/Sri Lanka, only a few traces of local glass bead-making have been recorded in Nubia so far. Based on results of laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) analysis of 76 glass beads, pendants, and chunks from Qustul and Serra East contexts, dated between the New Kingdom and the Makuria Kingdom periods, this paper discusses the composition and provenance of two types of plant-ash soda-lime (v-Na-Ca) glass, two types of mineral soda-lime glass (m-Na-Ca), and two types of mineral-soda-high alumina (m-Na-Al) glass. It also presents the remains of a probable local glass bead-making workshop dated to the period of intensive long-distance bead trade in Northeast Africa, AD 400-600.

Introduction

This article offers an overview of glass types in Lower Nubia, their provenance, and chronology based on the laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) analysis of New Kingdom through Medieval glass beads and pendants from the Qustul and Serra East cemeteries, and the glass remains (beads, pendants, and chunks) from a Nobadian Serra East household. Historic Nubia encompasses the southern edge of modern Egypt and the northern part of modern Sudan (Dafalla, 1975; Osman, 1992; Taha, 2013, 2021).
Due to the natural boundary of the rock-strewn rapids of the Second Cataract, Nubia is usually referred to as comprising two parts: Lower Nubia, between the First and the Second Nile Cataracts, and Upper Nubia, in the south (Fig. 1). As a part of the Nubian civilization, Lower Nubia witnessed one of the earliest events of social complexity in Africa. Thanks to its frontier location, it also benefited from contacts with its Egyptian neighbors. Archaeological finds suggest that the A-Group people (3700-2800 BC) and the C-Group people (2300-1550 BC) included wealthy individuals who, owing to the strategic location at the junction of trade routes, controlled the trade and provided Egypt and the Mediterranean with raw materials and exotics. The Pan Grave people (2200-1550 BC), represented by small and dispersed populations from the Eastern Desert, also contributed to the rich history of the Lower Nubian region (e.g., Adams, 1977; Hafsaas, 2006; Török, 2008; Williams, 1983, 1986, 1989, 1993). The region was under Egyptian control in the New Kingdom (ca. 1570-1069 BC). Between 747 and 656 BC, the 25th Dynasty, otherwise called the Kushite Dynasty, which originated in Lower Nubia, controlled Ancient Egypt and other parts of northeast Africa, stretching from the confluence of the Niles to the Mediterranean. The Nubian kings promoted the revival of the arts, language, architecture, and religion of the New Kingdom, and Egyptian artisans and scribes were employed in Nubia (e.g., Fisher, 2012; Török, 2008; Welsby, 1996; Williams, 1990). The wealth of the Lower Nubian region during the Napatan period (ca. 750-350 BC) is assumed to have continued when the center of power moved to the kingdom of Meroë (ca. 350 BC-AD 350). Under Roman rule in the north and Meroë in the south, in the first through fourth centuries AD (Classic and Late Meroitic period), Lower Nubia was an intermediary between Upper Nubia and Egypt (e.g., Török, 2008; Williams, 1991).
During the Kingdom of Nobadia (ca. AD 350-600), the region had unsettled relations with the Blemmyes, active in the Eastern Desert and the Red Sea ports, and with the Egyptians in the north (Emery & Kirwan, 1938; Obłuski, 2013, 2014). By the early eighth century AD, Nobadia and Makuria (in Upper Nubia) were united under a Makurian king, which gave rise to the Makurian period (seventh-fourteenth centuries AD) (Fisher, 2012; Welsby, 2002). In the sixth century AD, Lower Nubia adopted Christianity, and by the seventh century there were changes in burial practices, especially in the number of burial goods: in contrast to the pre-Christian burials that abounded in grave goods, these were scarce or absent altogether in Christian burials. Before the Christian era, many beads were buried with deceased Nubians regardless of their social status, sex, and age. These included locally available ostrich eggshell and semi-precious stones as well as imported resources (Red Sea marine mollusk shells, Mediterranean Corallium rubrum sp., semi-precious stone). The beads were mostly made of faience and glass (Then-Obłuska, 2018b). Despite their apparent profusion in Nobadian graves, only one assumed glass bead-making workshop has been recorded so far, in a house at Serra East in Lower Nubia (Williams, 1993, pp. 229-230). Hence, the beads seem to testify to a rich history of wide-ranging contacts in the region. While Nubia's trade links with Egypt and the Mediterranean have long been well acknowledged, its eastern connections have only recently been recognized. Lower Nubia was close to the Berenike, Marsa Nakari, and Quseir ports at the Red Sea, part of a busy commercial network connecting the Mediterranean world with the Indian Ocean during the Roman and Early Byzantine periods. The glass bead finds in Lower Nubia dating to these periods testify to the contacts with these ports and the Nubian Nile Valley (Then-Obłuska & Dussubieux, 2016; Then-Obłuska & Wagner, 2019a, b; Then-Obłuska, 2015a).
Lower Nubian sites and samples

Before disappearing under the waters of what is now Lake Nasser and Lake Nubia in the 1960s, Lower Nubia was the focus of extensive excavations that yielded, among others, a wide variety of beads, including glass ones. These beads are stored in museums and institutions around the world. One of the archaeological rescue missions was the Oriental Institute Nubian Expedition (OINE) of the University of Chicago (e.g., Williams, 2009; Williams & Heidorn, 2019). An assemblage of eighty-one beads and artifacts found during the OINE excavations at Qustul and Serra East in the early 1960s, and presently stored in the Oriental Institute Museum, University of Chicago, was used in this study. Of this number, one bead (OINE67) appears to be a modern intrusion, three were made of stone (OINE19, 27, 51), and one was too corroded (OINE21); hence, five beads were excluded from the LA-ICP-MS analysis. The sites in Qustul and Serra East were excavated on the east bank of the Nile, just to the north of the Second Cataract (Williams, 1990, 1991, 1992, 1993). The samples were found mostly in graves at cemeteries Q, QC, R, and VF (OINE20-26, 52-66), as well as at cemetery B (OINE79) and its surface (OINE68-69) in Serra East. There are also glass beads from a household context in Serra East, site LB, interpreted as a bead workshop associated with Nobadian culture (OINE80-81). The Nobadian cemeteries generally have tumuli superstructures with either single or multiple grave units. Various artifacts (metal tools, weapons, toilet utensils and fittings, as well as some stone, clay, bone, and ivory objects, basketry, textiles, and glass vessels) have been found, although pottery and bead adornments dominate the grave assemblages. Bead adornments were discovered mostly loose, scattered, and in heaps.
They accompanied the deceased regardless of their age or sex. Necklaces and bracelets are the main beadwork types; however, other forms have also been recorded (Williams, 1991, 1993). Site LB was a house in a shallow wadi built against the south side of a rock outcrop on drift sand. The structure was built into a natural squarish inset or break in the rock face. Stones were loosely stacked with mud to form walls for one room with a rough stone floor paving. The walls stood only about 0.5 m high (Williams, 1993, pp. 229-230). The house or shelter, 2.7 × 3.7 m, contained a good collection of whole and smashed but complete pottery vessels. Disturbed skeletons of two goats or sheep were found in clean sand beneath the paving and deep debris. Outside, to the east, against the rock, several cooking or hearth constructions with considerable ash deposits were found. The doorway may have been in the east wall. Further away, 3.05 m northwest of the shelter, a fragmentary oven or kiln was found built into a small corner against the rock. To the south, test pits in the wadi sand yielded occasional bead and pendant fragments (OINE78, 80, 81), partially perforated stone beads, and some flint chipping debris and potsherds. Glass chunks (OINE76-77), ostrich eggshell fragments, unfinished ostrich eggshell beads, and a carnelian chunk were also collected from site LB. All of these suggest that there was a bead-making workshop at the site. The hearth was approximately 2.30 m deep and 2 m wide and consisted of upright stones with possible ash deposits. While the Qustul cemetery Q (220) was dated to ca. AD 370/380-410 (OINE01-42), the remaining cemetery sites were dated between the early fifth and sixth centuries AD (OINE43-66) (Williams, 1991, 1993). The reuse of Meroitic beads in Nobadian graves was a common practice, and these are treated as Meroitic-dated bead types.
Other beads (OINE58-61, 63-64) are of 25th Dynasty date (Williams, 1990, and personal communication). A few samples belong to the New Kingdom (OINE62, 65) and Makurian (OINE57, 73) periods. The glass assemblage for this study was selected to include all types of manufacturing techniques, colors, and shapes. Electronic Supplementary Material (ESM 1) presents the specimens arranged by their Oriental Institute Museum (OIM) number, including information on the site, find context, find number, and all registered data about the techniques of manufacture, shape, color, and dimensions (mm scale). The assemblage includes monochrome, bichrome, and mosaic glass beads, as well as metal-in-glass beads with gold or silver foil between two glass layers. A variety of manufacturing techniques (e.g., drawing a glass tube; winding glass around a metal mandrel; folding a glass strip around a metal mandrel or joining glass strips around a mandrel; and rod-piercing glass), finishing processes (segmenting in molds and breaking apart the segments of a drawn tube; or cutting a drawn glass tube and heat-rounding the sections), and decoration (the application of trails, mosaic eyes) are recognized in the assemblage. The objects are illustrated in Fig. 2 and identified by their sample numbers.

Results and discussion

The compositions of the glass beads, including 56 major, minor, and trace elements, were obtained using LA-ICP-MS at the Elemental Analysis Facility at the Field Museum. More details on the instrumentation and protocol are given in Then-Obłuska and Dussubieux, and ESM 2 presents major, minor, and trace element compositions of Corning Reference Glasses B and D. Seventy-six glass beads, pendants, and chunks were analyzed. The different colors of the bichrome and polychrome beads were measured individually, and some measurements had to be repeated because, in some cases, the color meant to be targeted had been missed and another one measured instead. This resulted in 112 compositions (ESM 3).
Soda is the main alkali in all glass samples, and four main glass types were identified based on the content of MgO and Al2O3 (Fig. 3). Soda-lime glasses form the most numerous group, with eight samples made of plant-ash soda-lime glass (v-Na-Ca) and 84 samples made of mineral soda-lime glass (m-Na-Ca). Four samples feature a mineral- or plant-ash-soda composition (m/v-Na-Ca), and 16 samples are mineral soda-high alumina (m-Na-Al) (Table 1).

v-Na-Ca glass

Eight samples made from soda-rich glass have low alumina (Al2O3 < 2%) and high magnesia (MgO > 3%) concentrations, indicating the use of plant-ash soda as a flux. Lime (CaO) concentrations range from 3.5 to 8.0%. Plant-ash soda-lime-silica glass (v-Na-Ca) is the earliest known glass type. It was produced in Egypt (e.g., Rehren & Pusch, 2005; Smirniou & Rehren, 2011; Tite & Shortland, 2003) and Mesopotamia (e.g., Shortland et al., 2018) as early as the middle of the second millennium BC. Later, v-Na-Ca glass was made by the Sasanians. This was followed by Islamic glass-makers in a region east of the Euphrates from the third century BC to about the seventeenth century AD, and by Islamic glass-makers in the East Mediterranean region, Egypt, and the Levant, starting from the mid-ninth century AD (Brill, 2005; Henderson et al., 2016; Mirti et al., 2008, 2009; Phelps, 2016). Based on different levels of MgO and trace constituents such as Ti, Zr, Cr, and La, two subtypes of plant-ash soda glass are distinguished in the OINE assemblage: New Kingdom (v-Na-Ca NK) and Medieval (v-Na-Ca OINE73) (Fig. 4A).

v-Na-Ca NK

A pendant of light blue wound glass decorated with a white spiral trail (OINE65Bl, W), found in Nobadian grave R 119 at Qustul and thought to be Meroitic in date (Williams, 1991, p. 146), is part of the v-Na-Ca sub-group. It has low levels of Rb (about 11 ppm) and Li (about 5 ppm).
These chemical attributes exclude affiliation with contemporary v-Na-Ca Sasanian glass, which features higher levels of both elements (e.g., Mirti et al., 2009; Then-Obłuska & Dussubieux, 2016). The two colored glasses used to make this pendant have trace element levels indicating a Late Bronze Age date. Soda plant-ash glass was produced from the middle of the second millennium BC in Egypt and Mesopotamia, and the two production areas can be distinguished by plotting Zr/Ti and Cr/La ratios: a high ratio of 1000*Zr/Ti > 40 and a low ratio of Cr/La < 4 suggest an Egyptian provenance of the glass, while ratios of 1000*Zr/Ti < 60 and Cr/La > 4 indicate Mesopotamian glass (Shortland et al., 2007). Comparison of the two ratios in OINE65 with those in Late Bronze Age glasses from Egypt and Mesopotamia (based on Shortland et al., 2007; Henderson, 2013, Fig. 6.10) reveals similarities with the Egyptian data (Amarna and Malkata) (Fig. 4A). Undoubtedly, the blue glass (OINE65B) owes its color to the presence of CuO (2.5%), although a contribution of Fe2O3 (0.5%) cannot be excluded. Traces of tin, SnO2 (0.09%), Co (61 ppm), and Ni (62 ppm) might have been brought in accidentally by the copper. All these elements occur in proportions resembling those of the "Cu blue" glass from Amarna and Malkata (Shortland & Eremin, 2006, Table 1). The white glass (OINE65W) has CaO at a level of 6.5% and a very high level of Sb2O5 (5.1%) and, like the New Kingdom v-Na-Ca glass from Amarna and Malkata, was opacified with calcium antimonate (Shortland & Eremin, 2006, Table 1). Similar proportions of CaO in the blue and the white glass prompt the conclusion that the latter color was obtained solely by adding antimony to the colorless glass: antimony precipitated with the calcium in the glass made with soda plant ashes, producing calcium antimonate (Shortland, 2002). A higher level of Al2O3 and Sb2O5 in the white glass (OINE65W), along with traces of cobalt in the copper blue glass (OINE65B), would exclude its affiliation with later New Kingdom glass from Lisht, dated to the end of the second millennium (Shortland & Eremin, 2006, pp. 596-597, Table 2). Furthermore, the presence of traces of SnO2 in OINE65B would also exclude its affiliation with early New Kingdom blue glass dated to the fifteenth century BC (Shortland & Eremin, 2006, pp. 596-597, Table 2). The OINE65 affiliation with the Amarna and Malkata sites, assumed glass-making centers in the fourteenth century BC (Shortland & Eremin, 2006), makes this bead stand out from the Nobadian tomb collection. The bead may have been moved from one of the New Kingdom tombs at Qustul (Then-Obłuska, forthcoming; Williams, 1992). Bichrome pendants with a similar spirally applied trail are dated to the New Kingdom period (Spaer, 2001, cats. 72-74).

Fig. 4 A Biplot of chromium/lanthanum versus 1,000 × zirconium/titanium in OINE v-Na-Ca glass, with frames for Late Bronze Age glasses from Egypt (Amarna and Malkata) and Mesopotamia (Tell Brak and Nuzi) (based on Shortland et al., 2007); B biplot of Al2O3 and MgO/CaO showing results for v-Na-Ca glass OINE73 and "Mesopotamian" glass from Samarra 1 & 2 (mid-ninth century AD), Raqqa (ninth century AD; Henderson et al., 2016), Nishapur (ninth and tenth centuries AD; Henderson et al., 2016), and Veh Ardasir (AD 300-700; Mirti et al., 2008, 2009), within range borders according to Phelps (2016)

Another bead, OINE62, found in a 25th Dynasty grave, features higher K2O (1.3%) and high MgO (2.8%), and its translucent purple wound glass has 0.8% MnO and 0.7% Fe2O3. Although New Kingdom glass usually features more elevated magnesium and potassium (Shortland & Eremin, 2006), OINE62 still seems to fit this glass group, since it typologically resembles other New Kingdom beads (Then-Obłuska, in press: cat.
324.1, from Qustul R 94, dated to the New Kingdom Post-Amarna period; Metropolitan Museum of Art, New York, MET 11.215.661, from Malqata, Palace of Amenhotep III, ca. 1390-1353 BC). Also, comparing its Cr/La and 1000*Zr/Ti ratios with records for Egyptian and Mesopotamian glass makes its attribution to New Kingdom Egyptian glass fairly apparent (Fig. 4A).

v-Na-Ca OINE73

Other v-Na-Ca compositions belong to a polychrome bead fragment, OINE73A-E, found in a Nobadian house (fourth-sixth centuries AD), which implies that the v-Na-Ca OINE73 glass could have been produced during the Sasanian period. To date the glass, the OINE73A-E data for the various colored fragments were plotted on a MgO/CaO vs. Al2O3 graph. The result demonstrated a distinction between soda plant ash glass of Eastern Mediterranean provenance (Syria, Egypt, and Palestine/Levant) and soda plant ash glass from the Mesopotamian region (northern Syria, Iran, and Iraq), dated between the eighth and tenth centuries AD (McIntosh et al., 2020; Phelps, 2016; Schibille et al., 2019). Further, two groups have been distinguished within the Mesopotamian glass: Mesopotamian Type 1, with samples from Veh Ardasir (third through seventh centuries AD) and Raqqa (type 4; ninth century AD), and Mesopotamian Type 2, with samples from Samarra (Schibille et al., 2018, Fig. 6). The results for OINE73, with Al2O3 below about 1.5%, fit Mesopotamian Type 2 (Phelps, 2016; Schibille et al., 2018, Fig. 6), particularly the Samarra 2 subgroup (Fig. 4B). Analyses of (de)colorants and opacifiers in OINE73 are compatible with an Islamic-period glass affiliation. All glasses (yellow, black, red, and colorless) contain some concentrations of SnO2 and PbO (> 2%). The white (PbO = 2%, SnO2 = 1.6%) and yellow (PbO = 8.7%, SnO2 = 1.05%) colors were opacified with tin oxide and lead stannate, respectively. The yellow color also has traces of antimonate (0.04%).
Although, after the fourth century AD, tin rather than antimonate was used as an opacifier (Tite et al., 2008), occasional use of antimonate and arsenic has been recorded for Merovingian- and Islamic-period yellow and greenish-yellow glass (Neri et al., 2019). With all the above in mind, the OINE73 fragment may be assumed to be a Medieval intrusion in the context of the Serra East Nobadian household rather than a Sasanian product. In fact, OINE73A-E is a fragment of a bead made of mosaic glass with a so-called checkerboard pattern, known already in the Hellenistic period and produced through Medieval times (e.g., Spaer, 2001, p. 120). The glass might have arrived through the Red Sea ports of Aidhab (used as a port at least from the time of the Fatimid conquest of Egypt in AD 969) and Suakin, some 230 miles south of Aidhab, founded in the ninth century.

m-Na-Ca glass

Most analyzed OINE samples (n = 84), featuring low alumina (< 3%), also have low magnesia (< 1.5%) concentrations, indicating the use of mineral soda as flux. Soda-lime glass fluxed with mineral soda, usually in the form of natron from Wadi el Natrun in Egypt, was manufactured in Egypt and the Syro-Palestinian region for two millennia, between the tenth century BC and the mid-ninth century AD (e.g., Phelps et al., 2016). Dated to the beginning of the tenth century BC, glass vessels from Theban tomb 320 are characterized by low potassium and magnesium (< 1.2%), soda (18.2-23.4%), calcium (1.3-4.8%), and alumina (> 2.1%) and were most likely made from sand and mineral soda (Schlick-Nolte & Werthmann, 2003). Natron-based glass, usually containing low levels of magnesia and potash (< 1.5%) (LMLK glass) and moderate levels of Al2O3 (0.5 to 3%) (e.g., Panighello et al., 2012), became widespread in the Southern, Eastern, and Western Mediterranean.
Some samples of the OINE m-Na-Ca type have very low levels of Al2O3 (< 0.5%) and low levels of some trace elements (m-Na-Ca LT), suggesting the use of silica sources different from those of the m-Na-Ca glass produced between the Hellenistic and Islamic periods. In the sample discussed, this latter type of glass is found mainly in bead types of the first through sixth centuries AD; hence, it is labeled Roman glass (m-Na-Ca R). Elevated cobalt content (555-1629 ppm) was the main colorant of the dark blue beads (OINE02BCo, 09, 13, 15BCo, 16BCo, 25, 30, 31BCo, 33BCo, 34BCo, 35, 41, 55BCo, 72, 74, 81) and workshop chunks (OINE76-77). The cobalt used for the natron-type beads is not unambiguously associated with any particular impurities, a trademark of Roman cobalt sources (Gratuze et al., 1992). According to Gratuze et al. (2018), natron glass colored with cobalt features a relatively constant pattern with a high CoO/NiO ratio (CoO/NiO > 24). Sometime between the late fourth and the beginning of the sixth century, the CoO/NiO ratios experienced a drastic decrease (2.2 < CoO/NiO < 5.1) (Gratuze et al., 2018, p. 18). In the OINE assemblage, only a few dark blue glass beads have a high Co/Ni ratio (> 24 for OINE16, 33, 72); the glass of these Nobadian-dated beads may have been produced earlier, in the Meroitic period. Only one bead in the studied assemblage has a low Co/Ni ratio of 5.9 (OINE32). In contrast, most samples have ratios below 24 but higher than 5 (OINE02, 09, 13, 15, 25, 30, 31, 34, 35, 41, 55, 74, 76, 77, 81), which may have resulted from recycling earlier glass mixed with glass of a later date (Fig. 5C). Many of these samples were made of Egyptian glass (Fig. 5B). It seems probable that the period between the late fourth and the sixth century was the time when earlier, Early Roman cobalt glass with a Co/Ni ratio above 24 was mixed with new resources characterized by a Co/Ni ratio between 2.2 and 5.1.
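The CoO/NiO criteria above can be expressed as a simple screening rule. The sketch below is illustrative only (it is not the study's analysis code), and the sample values fed to it are hypothetical:

```python
# Illustrative classifier for cobalt-blue natron glass based on the
# CoO/NiO ranges reported by Gratuze et al. (2018) and cited in the text.
# All ppm values passed in below are hypothetical examples.

def classify_co_ni(coo_ppm: float, nio_ppm: float) -> str:
    """Assign a cobalt-blue glass sample to a coarse chronological group
    from its CoO/NiO ratio."""
    if nio_ppm <= 0:
        raise ValueError("NiO must be positive to form a ratio")
    ratio = coo_ppm / nio_ppm
    if ratio > 24:
        return "early Roman cobalt source (CoO/NiO > 24)"
    if 2.2 <= ratio <= 5.1:
        return "late 4th- to early 6th-century source (2.2 < CoO/NiO < 5.1)"
    if 5.1 < ratio <= 24:
        return "intermediate ratio: possible mixing/recycling of both sources"
    return "outside the reported ranges"

# Hypothetical readings for two samples:
print(classify_co_ni(1500, 50))  # ratio 30 -> early Roman source
print(classify_co_ni(700, 50))   # ratio 14 -> intermediate (possible recycling)
```

Samples falling between the two published ranges, like most of the OINE dark blue beads, would be flagged as candidates for recycling or mixing of glass batches.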
The m-Na-Ca beads were made with diverse techniques, well recognized in Roman and late antique northeast Africa (e.g., Then-Obłuska & Wagner, 2019b) and beyond. These include beads made of drawn and segmented glass, and gold-in-glass beads as produced in Alexandria, Egypt (Kucharczyk, 2011; Rodziewicz, 1984). Other beads were made of wound, folded, and rod-pierced glass, and many of them might have been manufactured locally.

m-Na-Ca LT/Classic

Nine m-Na-Ca samples (OINE59, 60Bl-Y, 61, 64Bl-B-Gr-W-Y) are characterized by very low levels of MgO, K2O, and Al2O3 (≤ 0.5%) and of Fe2O3 (average of 0.8%). They also have low levels of silica-related impurities and other earth trace elements (e.g., Ti, Sr, La, Rb, and Ba) compared with the m-Na-Ca Roman glass type. Although OINE58 and 63 feature MgO, K2O, or Al2O3 levels higher than other examples in the m-Na-Ca LT glass group, their trace elements (Sr) still fit the "low trace" natron group; for this reason, they have been assigned to the m-Na-Ca LT glass group. The OINE m-Na-Ca LT composition suggests a very clean silica source, i.e., a better quality of sand or even quartz pebbles, for glass production (Shortland & Eremin, 2006). Based on the Zr level, two types of "classic natron" glass have been distinguished: low-Zr and high-Zr natron glass (Conte et al., 2019). Low levels of Zr (< 9 ppm) and the lowest possible levels of alumina, magnesia, potash, iron, and REEs in a glass sample indicate, according to the authors, the use of quartz pebbles (Conte et al., 2019, Table 4). However, the Zr level in the OINE m-Na-Ca LT glass (21-272 ppm) points to a high-Zr affiliation, which in turn excludes the use of quartz pebbles. The high lime concentrations in the m-Na-Ca glass came with sand collected from a beach, which contained seashell fragments.
A relatively low Sr content linked to a rather high CaO/SrO ratio (362) is thought to result from the addition of diagenetically altered shells, partly recrystallized once their initial strontium contents had been lost (Conte et al., 2019; Wedepohl et al., 2011). As for the m-Na-Ca R glass, we compared the yttrium to zirconium (Y/Zr) and cerium to zirconium (Ce/Zr) ratios of the m-Na-Ca LT glass samples with the ratios for glass produced in the Levantine and Egyptian regions, respectively. As shown in Fig. 5A, the m-Na-Ca LT glass samples follow the trend observed for glass samples of Egyptian provenance. Low Al2O3 (< 0.5%) levels undermine any comparison of the m-Na-Ca LT glass with most of the low MgO and K2O (< 0.5%) natron glasses from eighth through fourth century BC Europe (e.g., Macedonia; Blomme et al., 2017). Still, similar compositions have been reported from various French sites dated to the beginning of the Iron Age, in the ninth through second centuries BC (Gratuze, 2009, Fig. 2). One of these groups features low potassium and low alumina, each at a level of about 0.5%. Some samples in this group come from the Champlay context, dated to ca. 750-500 BC or 750-400 BC (Gratuze, 2009, Fig. 2). An antimony-decolored glass sample from Sardis (Turkey), dated to ca. 700-500 BC (Ignatiadou, 2000), and a turquoise decoration of the Bologna eye bead from a 500-300 BC Etruscan context in northern Italy feature similarly very low Al2O3 values (Arletti et al., 2010, IG45). Interestingly, an opaque red chunk with MgO, K2O, and Al2O3 < 0.5% was found in Persepolis, dated to around the fifth century BC (Brill, 1999, IIH:198). Additionally, a glass of probable Egyptian provenance, featuring Mg, K, and Al < 0.5% and low levels of some trace elements resembling the m-Na-Ca LT glass, a so-called "classic natron," was identified in Iron Age Italy, ca. 800-500 BC (Conte et al., 2019). The study by Conte et al.
(2019), using measurements of selected element levels, also presents ways to date natron black glass more precisely. Some black samples with low lime, high iron, and high trace element and REE contents are dated to ca. 900-700 BC. Other samples (TG3bl, TG12bl, and TG13bl), characterized by lower alumina, titania, and iron and higher lime concentrations, are comparable with the OINE m-Na-Ca LT and are dated to ca. 700-500 BC. An analysis of opacifiers in the m-Na-Ca LT group confirms its early date. Yellow glass in OINE60Y and 64Y features significant Sb2O5 (1.2%, 1.1%) and PbO (15%, 9.7%) levels and a complete lack of tin. Antimony-based opacifiers (i.e., lead antimonate yellow) were used in the Near East and Egypt from the onset of glass production, ca. 1500 BC, through the Roman period (Turner & Rooksby, 1959). Towards the end of the Roman period (especially from the fourth century AD onwards), the production of opaque yellow glass fell back on the use of stannate instead of lead antimonate (Tite et al., 2008). It was not until the late fifteenth century AD that the latter was reintroduced into glass production (Molina et al., 2014). OINE58 is translucent blue with CuO (1.2%), MnO, and Fe2O3 (0.4%). Like the blue glass in v-Na-Ca NK (OINE65B), it does not contain tin. An m-Na-Ca LT type bead (OINE64), decorated with colorful spots, belongs to a group of so-called crumb beads reported from contexts dated between the Late Bronze Age and Medieval times (Spaer, 2001, p. 127). It was found along with other m-Na-Ca LT glass beads in a 25th Dynasty grave. The same context also yielded a quadruple wedjat eye, typical of the Third Intermediate Period (Williams, 1990). Comparing the OINE64 black glass compositions with the Italian samples (see above) suggests ca. 700-500 BC as a probable date. pXRF analysis of red "natron sodium-lead-calcium-magnesium-silica" glass beads from the Nubian site of Gala Abu Ahmed in Wadi Howar, dated ca.
1100-400 BC, provided no trace element data readily comparable with the OINE glass (Daszkiewicz & Lahitte, 2013). Some compositional similarities to OINE58 can be recognized in an orange bead of Egyptian glass featuring low levels of MgO, K2O, and Al2O3 (< 0.5%) (Then-Obłuska & Wagner, 2019b, SNM07). Its elemental levels (e.g., Al2O3 0.25%, Sr 93 ppm, Zr 13.8 ppm, and Ti 319 ppm) resemble those in m-Na-Ca LT glass; however, the Na2O (2.98%) and CaO (1.8%) levels are much lower. The bead was found in a Sedeinga grave, accompanied by several other beads of the same type and Napatan amulets (Then-Obłuska, 2015b). A Napatan date for this glass type is supported by evidence from Nag Shayeg, where beads of this type have been found in a probable Napatan tomb, T131 (Then-Obłuska & Wagner, 2019b, Pl. 28.1-28.2).

Fig. 6 Principal components 1 and 2 calculated using the concentrations of MgO, CaO, Sr, Zr, Cs, Ba, and U for samples belonging to glass groups m-Na-Al 1, 2, 3, 4, and 6 and for samples from Nubia. The m-Na-Al 1 glass samples are unpublished data from Sri Lanka and South India, the m-Na-Al 2 glass samples are beads from Chaul (Dussubieux et al., 2008), the m-Na-Al 3 glass samples are beads from Kopia (Dussubieux & Kanungo, 2013), the m-Na-Al 4 samples are glass vessel fragments from Sumatra (Dussubieux, 2009), and the m-Na-Al 6 glass samples are from the site of Juani Primary School (Dussubieux & Wood, 2021).

m/v-Na-Ca glass

Four glass samples (OINE03, 45, 46, 53) with low alumina levels, moderate K2O (< 1.5%), and elevated concentrations of MgO (> 1.5%) suggest the use of mineral soda together with plant ash, or of specific soda plant ashes. K2O or MgO values above 1.5% are commonly believed to indicate the use of organic material, in the form of plant or wood ash, in the glass batch.
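The oxide thresholds used throughout this study to assign flux groups can be summarized in a short screening function. This is a first-pass illustrative sketch only; the actual group assignments in the text also rely on trace elements, and all wt% inputs below are hypothetical:

```python
# Illustrative first-pass sorter for the compositional groups used in the
# text: MgO and K2O < 1.5% -> mineral soda (natron); MgO > 1.5% with
# moderate K2O -> mixed mineral soda/plant ash; high Al2O3 with low MgO ->
# Indian mineral-soda high-alumina glass; otherwise soda plant ash.

def flux_group(mgo: float, k2o: float, al2o3: float) -> str:
    """Return a coarse compositional label from wt% oxide values."""
    if al2o3 > 7 and mgo < 2:
        return "m-Na-Al"      # mineral soda, high alumina
    if mgo < 1.5 and k2o < 1.5:
        return "m-Na-Ca"      # natron / mineral soda
    if mgo > 1.5 and k2o < 1.5:
        return "m/v-Na-Ca"    # mineral soda plus plant ash, or specific ash
    return "v-Na-Ca"          # soda plant ash

# Hypothetical wt% values (MgO, K2O, Al2O3):
print(flux_group(0.6, 0.5, 1.2))   # -> m-Na-Ca
print(flux_group(3.5, 2.5, 2.0))   # -> v-Na-Ca
print(flux_group(0.8, 1.0, 9.0))   # -> m-Na-Al
```

In practice a sample such as OINE11 (MgO 3.3%, K2O 2.2%, Al2O3 5.6%) would fail this coarse screen and, as in the text, has to be assigned via its trace element signature.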
Glass with higher concentrations of MgO and/or K2O has been identified and discussed for early Roman glasses in Egypt (Nenna & Gratuze, 2009; Then-Obłuska & Dussubieux, 2016), and the Egyptian m/v-Na-Ca glass was found mainly in first through mid-fourth century AD Nubia (Then-Obłuska & Dussubieux, 2021; Then-Obłuska & Wagner, 2019a). The present study confirms their Egyptian provenance (Fig. 5A); the OINE m/v-Na-Ca glass beads in this assemblage were most probably Meroitic items reused in Nobadian graves.

m-Na-Al glass

Sixteen samples have high alumina (> 7%) and low magnesia (< 2%) concentrations, indicating the use of a mineral-soda flux (m-Na-Al). Mineral-soda high-alumina glass beads, with relatively high (> 5%) concentrations of alumina and trace elements, are particularly common in India, where they were undoubtedly manufactured (Brill, 2003; Dussubieux et al., 2010). Based on different trace element levels, two sub-types have been identified within the OINE assemblage: 15 samples were made of m-Na-Al 1 glass and one sample of m-Na-Al 2 glass (Fig. 6).

m-Na-Al 1

Fourteen samples in the OINE collection have high Al2O3 contents ranging from 7.1 to 13.4% (OINE01, 04, 07, 08, 12, 14, 18, 20, 23, 39, 49, 50, 52, 79). The MgO concentrations in this glass are usually low (< 1%), while some trace elements, such as uranium at 4-24 ppm, reach their highest concentrations in this study. Dussubieux and co-authors distinguished several subtypes of high-alumina mineral soda glass (m-Na-Al 1-4 and 6) based on the contents of five elements: Sr, Zr, Ba, U, and Cs (Dussubieux et al., 2010, tab. 3; Dussubieux & Wood, 2021). Using principal component analysis (PCA) of the glass constituents MgO, CaO, Zr, Sr, Ba, Cs, and U, the m-Na-Al glass beads found in Nubia were compared with the already defined m-Na-Al subtypes (m-Na-Al 1-4 and 6) and showed similarities with the m-Na-Al 1 glass (Fig. 6).
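The PCA comparison described above can be sketched as follows. The data here are synthetic stand-ins for the published measurements, and the column standardization is an assumption (the original analysis may have pre-treated the data differently):

```python
# Illustrative sketch: project glass analyses (seven constituents: MgO,
# CaO, Sr, Zr, Cs, Ba, U) onto the first two principal components via SVD.
# The input matrix below is synthetic, not the published dataset.
import numpy as np

def pca_scores(x, n_components=2):
    """Column-standardize x, then return the leading PCA scores via SVD."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean(axis=0)) / x.std(axis=0)
    # Rows of vt are the principal axes, ordered by singular value
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    return z @ vt[:n_components].T

rng = np.random.default_rng(0)
data = rng.lognormal(sigma=0.5, size=(10, 7))  # 10 samples x 7 constituents
scores = pca_scores(data)
print(scores.shape)  # (10, 2)
```

Plotting the two score columns against reference-group scores would reproduce the kind of comparison shown in Fig. 6.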
The compositions of the 14 high-alumina samples of this m-Na-Al 1 glass group (formerly known as "low uranium-high barium glass," Dussubieux et al., 2010) have average contents of Ba and U that match the m-Na-Al 1 type (Table 2). Additionally, one sample, a green drawn and rounded bead, OINE11, has high concentrations of MgO (3.3%), K2O (2.2%), and Al2O3 (5.6%), and low CaO (3.1%). Its trace element levels match the m-Na-Al 1 group (Table 2), and the PbO (4%), CuO (0.8%), and SnO2 (0.5%) levels fit within the range for the green glass in the m-Na-Al 1 group. OINE11 was found in a Nobadian grave together with glass beads made of m-Na-Al 1 glass, which would confirm its affiliation with high-alumina glass of South Indian/Sri Lankan provenance.

Table 2. Average concentrations and standard deviations of elements crucial for separating m-Na-Al subtypes (data from Dussubieux et al., 2010), followed by data for the high-alumina glasses (m-Na-Al 1, m-Na-Al 2) from the Lower Nubian OINE collection.

The presence of lead and tin in OINE14 (PbO = 4.6%; SnO2 = 0.54%) suggests the yellow bead was probably colored and opacified by lead stannate. Six semi-translucent pale green samples (OINE08, 18, 23, 39, 50, 79) contain significant quantities of CuO (0.3-0.8%), PbO (2-5%), and SnO2 (0.3-0.6%), suggesting lead stannate may have contributed to the opacification of the glass. Seven beads (OINE01, 04, 07, 12, 20, 49, 52) are orange and contain high concentrations of copper (CuO 7.2-9.2%), but also a higher concentration of iron (2.3-3.1%) compared to the blue, red, and black m-Na-Al 1 glass from both South Asia (Lankton & Dussubieux, 2006, p. 129, Table 2) and Nubian sites (Then-Obłuska & Wagner, 2019b). The orange samples are also characterized by high levels of MgO, K2O, and P2O5. Phosphorus- and lime-rich inclusions were found in an orange m-Na-Al 1 glass sample from South Asia.
These inclusions suggest a possible addition of an apatite-rich ingredient as an internal reducing agent to convert the Cu2+ into Cu2O (Dussubieux et al., 2010), which usually colors glass orange. The m-Na-Al 1 glass was most probably manufactured in Sri Lanka or South India. Beads made of this glass are found in Sri Lanka and South India between the second/first century BC and the fifth century AD, and in Southeast Asia between the fifth century BC and the tenth century AD (Carter, 2016; Dussubieux et al., 2010, tab. 4; Dussubieux & Gratuze, 2013). Aside from the Southeast Asian finds, the presence of South Indian/Sri Lankan glass beads has also been confirmed at the Early Roman Red Sea port of Quseir, Egypt (Then-Obłuska & Dussubieux, 2016), in Merovingian-period Europe (Pion & Gratuze, 2016; Poulain et al., 2013), and in Zanzibar, AD 700-1100 (Sarathi et al., forthcoming; Wood et al., 2017). The South Indian/Sri Lankan glass (m-Na-Al 1: green, orange, black, yellow, and orange-on-red) has been found in the Nubian Nile Valley between the First Cataract and the confluence of the Niles (Then-Obłuska & Wagner, 2019a, b), including the mid-fourth century AD samples from the cemeteries of nomadic peoples (Blemmyes) around Kalabsha, which represent the northernmost presence of these glass beads in the Nile valley (Then-Obłuska & Dussubieux, 2021). Beads made of m-Na-Al 1 glass were produced using a technique diagnostic of Indian origin: drawing a glass tube and heat-rounding its sections (Francis, 2002). South Indian or Sri Lankan glass beads have also been macroscopically identified at other sites associated with the Blemmyes: the Early and Late Roman Red Sea port sites of Berenike and Marsa Nakari (Francis, 2002, 2007; Then-Obłuska, 2016, 2017b, 2018a) and the Eastern Desert sites of Shenshef and Sikait (Then-Obłuska, 2017a), thus pointing to the east-west direction of South Asian bead distribution in northeast Africa.
m-Na-Al 2

One sample, OINE57, has low levels of MgO (1.1%) and K2O (1.8%) and a high level of Al2O3 (8.5%), pointing to a mineral soda high-alumina affiliation. Compared with the m-Na-Al 1 glass (Table 2, Fig. 6), it displays higher concentrations of U and Cs and lower concentrations of Ba, Sr, and Zr, and it fits the m-Na-Al 2 group as defined by Dussubieux et al. (2010). The CuO (0.7%), PbO (2.4%), and SnO2 (0.4%) levels in the green glass suggest the use of lead stannate. The m-Na-Al 2 glass was previously identified at sites dating from the ninth to the nineteenth century AD, located on the west coast of India and the east coast of Africa (Dussubieux et al., 2010). A recent analysis of more beads from the East African coast has helped revise the chronology for this glass, suggesting its presence from around the fourteenth century AD onwards. The Indo-Pacific Khami beads from Southern Africa and the m-Na-Al 2 beads on the East African coast have been identified as sharing the same composition; therefore, both can be assigned to around the fourteenth century AD (Dussubieux & Wood, 2021). Although the m-Na-Al 2 glass beads might have been manufactured in Maharashtra, a recent study using Sr, Nd, and Pb isotope analysis suggests that the raw glass was likely procured from a different region, possibly western Uttar Pradesh. The beads would have been traded across the Indian Ocean through Chaul, south of Mumbai (Wood, 2019). OINE57, found in grave VF68, was originally strung together with other drawn green and black beads and a Mediterranean Sea coral (Corallium rubrum sp.). Since the latest evidence for m-Na-Al 2 glass suggests a new fourteenth century AD dating for the Qustul VF68 grave, it appears fairly probable that the beads and the grave belong to the Islamic period in Lower Nubian history.
Glass Bead-making in Lower Nubia

During the New Kingdom and the 25th Dynasty, glass beads in Lower Nubia were made of Egyptian glass (v-Na-Ca and m-Na-Ca LT) and were most probably imported. Although glass and metal-in-glass beads of Egyptian and Levantine m-Na-Ca R glass became very common in Nubian tombs during the Meroitic period (e.g., Then-Obłuska & Wagner, 2019b), no evidence of local bead-making from that period has yet been found. Whereas many beads found at the Nobadian sites were imported (from Egypt and South India/Sri Lanka), there is also some scarce evidence for possible bead-making (re)using glass imported from Egypt and the Levant. Samples OINE70-78 and 80-81 were found in a Serra East Nobadian household bearing traces of a fireplace and the remains of an oven or kiln, interpreted as a bead workshop. A question then arises whether the cobalt blue chunks, OINE76 and 77, could have been used there for bead and pendant manufacture. As recent experiments prove, glass can be processed in rudimentary household fire pits where, with the help of blowpipes, temperatures high enough to produce glass beads are achievable (Hodgkinson & Bertram, 2020). Chunks and dark blue beads found in this Serra East workshop (OINE74, 76, 77, 81) have comparable Co/Ni ratios of between 13.2 and 15.6, which also resemble those of other dark blue specimens in the Nobadian collection (OINE02BCo, 09, 13, 15BCo, 25, 30BCo, 31BCo, 34BCo, 41, 55BCo), with Co/Ni ratios of between 11.25 and 18.8 (Fig. 5C). The chunks are of Egyptian (OINE76) and Levantine (OINE77) origin, origins also attested for cobalt blue glass beads in Lower Nubia (Fig. 5B). This observation seems to support the hypothesis that the Serra East chunks were used in the local bead-making process. Translucent and semi-translucent copper blue beads with yellow spots (OINE66B, 71B) and teardrop pendants (OINE37, 44, 75, 78, 80) were found in the workshop and in graves.
A lack of antimonate characterizes the blue glass, and the biplot of Y/Zr to Ce/Zr ratios indicates its Levantine origin (Fig. 5B). While Levantine glass beads themselves are uncommon finds in Nubia in the period under discussion (Then-Obłuska & Wagner, 2019b), glass of that type was undeniably used in the production of diagnostic Nobadian ornaments, most probably made locally from reused Levantine glass. Indeed, teardrops of Levantine glass and of high-alumina glass of uncertain provenance have already been recorded from Lower Nubia (Then-Obłuska & Wagner, 2019a, b: SJE02 and SJE25, respectively). It thus appears these pendants may have been produced locally using glass from different sources. Only two Makuria-period specimens were analyzed: an imported glass (v-Na-Ca OINE73) and an imported item (m-Na-Al 2). We cannot provide chemical compositional results supporting the idea of glass bead-making during the Makuria period, but we must mention a workshop at the Early Christian site of Debeira in Lower Nubia that yielded much ash, large pieces of unworked glass remains, and a bead of similar glass (Shinnie & Shinnie, 1978, p. 44). This, in turn, implies a need for further typological and archaeometric evidence to test the hypothesis of local glass bead production in Lower Nubia.

Conclusions

The analysis of glass beads from Lower Nubia, a now-submerged region, reveals developments in bead glass chemistry over three millennia in Northeast Africa, encompassing the New Kingdom (v-Na-Ca NK), 25th Dynasty (m-Na-Ca LT), Early Roman/Meroitic (m-Na-Ca R, m/v-Na-Ca), Late Roman/Nobadian (m-Na-Ca R, m-Na-Al 1), Makurian (v-Na-Ca OINE73), and possibly Islamic (m-Na-Al 2) periods. This study of glass provenance presents the first-ever dataset attesting to Egyptian glass of the New Kingdom (v-Na-Ca NK) and 25th Dynasty (m-Na-Ca LT/Classic) periods in Lower Nubia.
It also presents new evidence for Egyptian and Levantine m-Na-Ca glass in the Early Roman/Meroitic and Late Roman/Nobadian periods. Moreover, the study offers new data on South Indian/Sri Lankan glass bead imports in Late Antique northeast Africa (m-Na-Al 1). Furthermore, it provides the first evidence for the presence of "Mesopotamian" Islamic glass (v-Na-Ca OINE73) and Indian glass (m-Na-Al 2) in Medieval northeast Africa. Lower Nubia mostly imported glass beads between the fourteenth century BC and the fourteenth century AD. However, the presence of cobalt blue chunks and beads with similar Co/Ni ratios in a Late Antique bead workshop seems to corroborate the hypothesis of local bead and pendant manufacture. This assumption is further supported by the presence in the workshop of a diagnostically Nobadian pendant type, i.e., copper blue teardrops, locally produced using different but mainly Levantine glass sources. Bead-making in medieval Nubia requires further investigation.
Neocortical pyramidal neurons with axons emerging from dendrites are frequent in non-primates, but rare in monkey and human

The canonical view of neuronal function is that inputs are received by dendrites and somata, become integrated in the somatodendritic compartment and, upon reaching a sufficient threshold, generate axonal output with axons emerging from the cell body. The latter is not necessarily the case. Instead, axons may originate from dendrites. The terms 'axon carrying dendrite' (AcD) and 'AcD neurons' have been coined to describe this feature. In rodent hippocampus, AcD cells are shown to be functionally 'privileged', since inputs here can circumvent somatic integration and lead to immediate action potential initiation in the axon. Here, we report on the diversity of axon origins in neocortical pyramidal cells of rodent, ungulate, carnivore, and primate. Detection methods were Thy-1-EGFP labeling in mouse, retrograde biocytin tracing in rat, cat, ferret, and macaque, SMI-32/βIV-spectrin immunofluorescence in pig, cat, and macaque, and Golgi staining in macaque and human. We found that in non-primate mammals, 10-21% of pyramidal cells of layers II-VI had an AcD. In marked contrast, in macaque and human, this proportion was lower and was particularly low for supragranular neurons. A comparison of six cortical areas (being sensory, association, and limbic in nature) in three macaques yielded percentages of AcD cells which varied by a factor of 2 between the areas and between the individuals. Unexpectedly, pyramidal cells in the white matter of postnatal cat and aged human cortex exhibit AcDs to much higher percentages. In addition, interneurons assessed in developing cat and adult human cortex had AcDs at type-specific proportions and, for some types, at much higher percentages than pyramidal cells.
Our findings expand the current knowledge regarding the distribution and proportion of AcD cells in the neocortex of non-primate taxa, which strikingly differ from primates, where these cells are mainly found in deeper layers and white matter.

Introduction

The prevailing concept of neocortical pyramidal cell function proposes that excitatory inputs arrive via the dendrites, are integrated in the somatodendritic compartment, and, upon reaching a sufficient threshold, the axonal domain generates an action potential. The axon usually originates from the ventral aspect of the soma, starting with a short axon hillock followed by the axon initial segment (AIS), the electrogenic domain generating the action potential (reviewed by Kole and Brette, 2018). Already Ramón y Cajal suggested that impulses may bypass the soma and flow directly to the axon (reviewed by Triarhou, 2014). Axon-carrying dendrites (AcDs) are common in cortical inhibitory interneurons (Meyer, 1987; Wahle and Meyer, 1987; Meyer and Wahle, 1988; Höfflin et al., 2017). Furthermore, upright, inverted, and fusiform pyramidal neurons of supra- and infragranular layers display AcDs in Golgi-impregnated or dye-injected cortex from rodents, lagomorphs, ungulates, and carnivores (Peters et al., 1968; Smit and Uylings, 1975; van der Loos, 1976; Peters and Kara, 1985; Ferrer et al., 1986a; Ferrer et al., 1986b; Hübener et al., 1990; Reblet et al., 1992; Matsubara et al., 1996; Prieto and Winer, 1999; Mendizabal-Zubiaga et al., 2007; Hamada et al., 2016; Ernst et al., 2018). In mouse hippocampal CA1 pyramidal cells, axons frequently emerge from basal dendrites (Thome et al., 2014). Multiphoton glutamate uncaging and patch clamp recordings revealed that input to the AcD is more efficient in eliciting an action potential than input onto regular dendrites (non-AcDs). AcDs are intrinsically more excitable, generating dendritic spikes with higher probability and greater strength.
Synaptic input onto AcDs generates action potentials with lower thresholds compared to non-AcDs, presumably due to the short electrotonic distance between the input and the AIS. The anatomical diversity of axon origins, plus the diversity of length and position of the AIS, substantially impacts the electrical behavior of pyramidal neurons (reviewed by Kole and Brette, 2018). This begs the question of how frequent AcD pyramidal neurons are among mammalian species, and whether AcD pyramidal neurons also exist in primates. Our data suggest remarkable differences between phylogenetic orders and between positions in gray and white matter.

Pyramidal AcD cells in adult cortex

We assigned AcDs in a very conservative manner. All cells in which the axonal origin could not be unequivocally seen to arise from a dendrite were considered 'somatic' axon cells. A certain fraction of neurons had an axon which shares a root with a dendrite; we consequently considered these 'shared root' cells as somatic axon cells. Figure 1A documents the diversity of axon origins of pyramidal cells in the cortex of a P60 infant macaque monkey, with an axon originating from a soma (inset B), an AcD (inset C), and the shared-root configuration (inset D). Generally, AcDs were basal dendrites. AcD neurons of other species are shown in Figure 2A-E. Macaque neurons stained with SMI-32/βIV-spectrin are shown in three videos. Figure 2-video 1 shows an AcD pyramidal neuron from premotor cortex and flanking non-AcD neurons. Figure 2-video 2 shows a layer V pyramidal cell of the cingulate cortex with an axon wiggling out between thick dendrites at the right somatic pole; even with the help of confocal imaging it was not easy to unequivocally identify the origin of the axon, and we scored such neurons as shared-root cells. Figure 2-video 3 shows a spindle-shaped neuron of the infragranular layers of the cingulate cortex, resembling a Von Economo neuron. Note that the axon emerges >65 µm away from the soma from the descending dendrite.
Quantitative analysis

In gray matter of non-primates, 10-21% of the pyramidal neurons assessed by perpendicular counts through all layers had an AcD (Figure 3A). The interindividual variability and staining methods are reported for mouse and rat in Table 1, and for cat, ferret, and pig in Table 2. In mouse, proportions of Thy-1-EGFP/βIV-spectrin-positive pyramidal neurons varied from 10 to 22%, possibly due to individual variability of the Thy-1 expression level. In adult macaque gray matter, only about 3-6% of the pyramidal neurons had an AcD (Figure 3A). The interindividual variability and staining methods are reported in Table 3. In human gray matter, the proportion of AcD pyramidal neurons was 1.96% on average. The interindividual variability and staining methods are reported in Table 4.

A significant difference emerged after a layer-specific analysis. Proportions were largely obtained in a second round of quantification with surface-parallel tracks, and in some cases sections were assessed that had not been analyzed in the first round of counting, in order to obtain higher cell numbers. Therefore, in Tables 1-4 the laminar percentages do not simply add up to the proportions obtained for whole gray matter. Furthermore, we plotted the individual values because bar graphs do not represent interindividual variability. Non-primates had about equal proportions of AcD cells in supra- and infragranular layers (Figure 3B, Tables 1 and 2). Thy-1 was only expressed in layer V and therefore did not allow us to determine laminar percentages for mouse. The macaque had only about 1-5% supragranular and about 5-14% infragranular AcD cells (Figure 3B, Table 3). Note the variable proportions of AcD neurons in infragranular layers and no obvious correlation between proportion and age of the individual macaques. The differences between non-primates and macaques were significant (see legend to Figure 3B). Values obtained in human Golgi material overlap with the lower range of the macaque values. Also in humans, supragranular layers had low proportions of AcD cells, 0.99% on average. Laminar percentages for infragranular neurons were 2.87% on average and variable between individuals, but obviously not correlated with age (Table 4). Note that levels might have been somewhat underestimated in the Golgi material; this point will be addressed below.

For a more detailed analysis, we compared six cortical areas (primary sensory to limbic) in macaque using the same method, SMI-32/βIV-spectrin immunofluorescence. Antibody SMI-32 is directed against nonphosphorylated neurofilaments. It labels somata and dendrites of large type 1 pyramidal cells, mainly of layers III and V, and much more weakly the smaller pyramidal neurons, but not spiny stellates of layer IV and small pyramidal neurons of layer II (García-Cabezas and Barbas, 2014). βIV-spectrin is one of the most reproducible markers for the AIS. The following regions were assessed: visual cortex V1/operculum, auditory cortex A1 along the lower bank of the lateral fissure, somatosensory cortex S2 along the upper bank of the lateral fissure, cingulate cortex medial and lateral flank including areas 23 and 31, respectively, the upper and lower bank of the intraparietal sulcus, and dorsal cortex (premotor and parietal at more anterior levels).

Figure 2. Representative axon carrying dendrite (AcD) neurons. (A1, A2) From rat visual cortex (biocytin, immunofluorescence); (B1, B2) cat visual cortex (immunofluorescence); (C1, C2) ferret visual cortex (biocytin); (D1, D2) macaque premotor cortex (biocytin, immunofluorescence), the inset shows the axon origin at higher magnification; (E1, E2) human auditory cortex (Golgi method; D2 is a montage of two photos). Apical AcDs (asterisk in C2) were rare: fewer than 10 were detected among the neurons assessed in adult rat, ferret, and macaque, and none in our human material. In all cases, the axon immediately bent down toward the white matter. Axon origins are marked by large arrows; small arrows indicate the course of biocytin-labeled axons. Scale bars 25 µm. The online version of this article includes the following video and figure supplement(s) for figure 2.

Figure 3. (A) Percentages of AcD pyramidal neurons per species; see Tables 1-4, which also indicate the staining methods. Numbers above the bars are the total number of pyramidal neurons assessed per species/cell class for this graph. Numbers in the bars indicate the number of individuals. (B) Laminar analysis. Non-primate species showed roughly equal proportions of AcD neurons in supra- and infragranular layers; with some individual variability the range was 10-21%. In contrast, in macaque, the cluster was shifted down along the ordinate due to overall much lower proportions. Furthermore, infragranular pyramidal cells displayed much higher proportions of AcD cells compared with supragranular pyramidal cells. A Mann-Whitney rank sum test of 'all non-primate' versus 'all macaque' percentages of supragranular and infragranular AcD cells yielded p<0.001 and p<0.001, respectively. Human was not included in the statistical test because only one method was used to detect AcD cells. The legend indicates the number of individuals and the staining methods; IFL, immunofluorescence. Note that we could not do a laminar analysis for all individuals shown in (A) because staining of supragranular layers in some animals delivered too low numbers, which might have led to a sampling error. The online version of this article includes the following source data for figure 3: Source data 1. Data and statistical analysis of experiments shown in Figure 3A, B.
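The species comparisons in Figure 3B relied on a Mann-Whitney rank sum test. As a minimal illustration of the rank-based logic behind such a comparison, the following sketch computes the U statistic from per-individual AcD percentages. The numbers and the function are ours, for illustration only, and are not the study's data; a real analysis would use an established statistics package to obtain exact p-values.

```python
def mann_whitney_u(a, b):
    """Return the Mann-Whitney U statistic for two samples (ties get mean ranks)."""
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    values = [v for v, _ in combined]
    n = len(values)
    rank_of = [0.0] * n
    pos = 0
    while pos < n:  # assign 1-based ranks, averaging over runs of tied values
        j = pos
        while j + 1 < n and values[j + 1] == values[pos]:
            j += 1
        mean_rank = (pos + j + 2) / 2.0
        for k in range(pos, j + 1):
            rank_of[k] = mean_rank
        pos = j + 1
    rank_sum_a = sum(r for r, (_, grp) in zip(rank_of, combined) if grp == 0)
    n1, n2 = len(a), len(b)
    u1 = rank_sum_a - n1 * (n1 + 1) / 2.0
    return min(u1, n1 * n2 - u1)

# Hypothetical per-individual AcD percentages (invented for illustration)
non_primate = [10.0, 13.5, 15.2, 18.0, 21.0]
macaque = [3.1, 4.2, 5.0, 5.8]
print(mann_whitney_u(non_primate, macaque))  # complete separation gives U = 0.0
```

Complete separation of the two groups (every non-primate value above every macaque value) yields U = 0, the most extreme outcome the test can register, which is the situation the strong group difference in Figure 3B approximates.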
The intraparietal sulcus harbors on its upper bank area MIP (medial intraparietal area), involved in grasping, and on its lower bank areas LIP (lateral intraparietal area) and VIP (ventral intraparietal area), involved in the control of eye movements. Intraparietal neurons were retrogradely labeled from the premotor cortex injections. In three individuals, the percentages of AcD neurons varied between the six areas by about a factor of 2 (Figure 4A). We could not recognize a systematic difference in the sense that one of the areas presented with substantially higher or lower percentages. Furthermore, we compared primary visual area 17 to extrastriate visual areas in adult cats which had received biocytin injections. Here, too, individual percentages of AcD neurons varied, from 11 to 20%; the interindividual variability was larger than the interareal variability. There was no recognizable difference between the areas analyzed (Figure 4B). Moreover, the values obtained in cat visual cortex matched those in ferret visual cortex (striate and extrastriate) (Figure 4B; Table 2). As defined at the beginning, all neurons in which the axon origin was not unequivocally seen to emerge from a dendrite were scored as shared root cells and, for Figure 3 and Figure 4, were included in the group of cells with somatic axons. Yet, cells with the shared root configuration according to our criteria have already been accepted as AcD neurons in recent studies (Thome et al., 2014). The question was: how often does the shared root configuration according to our definition occur? We plotted the percentages of AcD neurons versus shared root neurons for rat, ferret, macaque, and human (Figure 5A). For macaque, we included the biocytin-stained material from premotor cortex. In macaque Individual 2 we could assess the contralateral cortex to determine the percentage of AcD and shared root cells of callosal projection neurons.
Furthermore, in Individual 2, long-range projection neurons residing in the intraparietal sulcus were assessed, which in functional terms belong to the eye-hand coordination and grasping network. In addition, we determined the shared root configuration in all areas shown in Figure 4A, visualized via immunofluorescence. Also included were the values obtained in Golgi-stained macaque and human cortex (see Figure 5-source data 1). In Figure 5A, the species cluster along the ordinate, as already evident in Figure 3B. However, in Figure 5A the species scatter widely along the abscissa. This suggests the absence of a systematic correlation between the AcD and the shared root configuration. Next, for rat, ferret, macaque, and human, we compared the percentages of AcD to the sum of AcD plus shared root (Figure 5B). If the shared root cells were considered AcD cells, the proportions of AcD cells increased to some extent in all species analyzed. The interindividual variability of the shared root cells was at a factor of >10 (range of 0.46-5.5% in macaque), and statistics argued against any biologically significant difference between species. Unexpectedly, a subtle difference was observed independently by two observers who analyzed the Golgi material (PW at Ruhr University Bochum, GM at University La Laguna). A total of 13 cases (2 macaque, 11 human individuals) had percentages of shared root cells higher than percentages of AcD cells, whereas in 22 of 25 individuals and/or cortical areas stained for biocytin and immunofluorescence the percentages of shared root cells were lower than the percentages of AcD cells (Figure 5B). Thus, the proportion of AcD neurons was slightly underrepresented in the Golgi material. Yet, the biocytin material also had larger proportions of shared root cells (Figure 5B). We therefore compared immunofluorescence and biocytin in the macaque material (Figure 5C). Indeed, the biocytin material delivered significantly more shared root cells.
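The strict versus inclusive scoring contrasted here (counting shared root cells with the somatic group versus with the AcD group, as in Thome et al., 2014) amounts to simple proportion arithmetic. A minimal sketch, with a hypothetical function name and invented counts rather than the study's raw data:

```python
def proportions(counts):
    """counts: dict with 'acd', 'shared_root', and 'somatic' cell counts.
    Returns (strict %, inclusive %), where the inclusive criterion
    also counts shared-root cells as AcD cells."""
    total = counts["acd"] + counts["shared_root"] + counts["somatic"]
    strict = 100.0 * counts["acd"] / total
    inclusive = 100.0 * (counts["acd"] + counts["shared_root"]) / total
    return round(strict, 2), round(inclusive, 2)

# Hypothetical counts for one individual (invented for illustration)
example = {"acd": 12, "shared_root": 8, "somatic": 180}
print(proportions(example))  # (6.0, 10.0)
```

Because the shared-root fraction varies between individuals and staining methods, the gap between the strict and the inclusive percentage is itself method-dependent, which is why the two criteria are reported separately in Figure 5B.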
The unequivocal AcD cells were equally well recognized by both methods. With the large data set of macaque Individual 2 we compared the two methods within one individual (Figure 5D). Again, similar to the Golgi material, the proportion of shared root cells was higher in the biocytin material than in the immunofluorescence material, whereas unequivocal AcD cells were equally well detected with the two methods. AcDs in rodent hippocampus are described as being functionally privileged, and this may be mirrored by their spine density. Analysis of rat and ferret biocytin-stained pyramidal cells, however, revealed that neither the dendrites sharing a root with an axon nor the AcDs had spine densities differing systematically from the spine density of non-AcDs of the very same neuron (Figure 6).

Developmental aspects

Kitten layer VI pyramidal cells (Figure 3A, Table 2) showed adult percentages of AcDs early postnatally. Many pyramidal cells were L-shaped or inverted-fusiform, with the axon emerging from one of the dominant dendrites (Lübke and Albus, 1989). In line with this, infant macaque cortex exhibited percentages of AcD cells comparable to adult cortex in neurons labeled with biocytin, and again, AcD cells were more frequent in infragranular layers (Figure 3A and B, Table 3). Unexpectedly, 43.31% of the pyramidal cells in kitten white matter (Wahle et al., 1994) had axons emerging from the major dendrite (Table 2). Even more striking, 8.86% of the interstitial pyramidal neurons of the adult human white matter (Meyer et al., 1992) displayed AcDs (Table 4), and on average an additional 13.23% of the interstitial pyramidal cells had axons emerging from a shared root.

Interneurons

We also assessed the proportion of AcDs among interneurons. In human Golgi material, interneurons were easily recognized by non-spiny, slightly varicose dendrites, lack of polarity, and, if present, locally branching axons.
Their more gracile morphology with rather simple dendrites allowed a reliable detection of AcD cells. Examples of bitufted, Martinotti, and basket cells were reconstructed with Neurolucida (Figure 7A). Not all cells had axons impregnated beyond the initial segment because some interneuronal axons are myelinated; yet, the AIS can be unequivocally distinguished from dendrites (Jones, 1975). Up to 30% of the interneurons (all types pooled) had an AcD (Figure 7A). This was in contrast to the rather small percentage of AcD pyramidal cells in human. Furthermore, Parvalbumin-positive neurons were analyzed in immunostained human material; about 22% had an AcD (Figure 7A). Parvalbumin is a marker for gamma-aminobutyric acid (GABA)-ergic fast-spiking basket and chandelier cells, whereas neuropeptides are enriched in non-fast-spiking interneurons, with Somatostatin being a marker for this interneuron lineage, at least in rodents. Somatostatin-positive neurons are GABA-ergic. In perinatal kitten occipital cortex, they start to appear in deep layer VI, and gradually more cells differentiate in layers VI, V, and the upper layers; many are bitufted cells and Martinotti cells with ascending axons (Wahle, 1993). Neuropeptide Y-positive cells, mainly of layers VI and V of the gray matter, are often small basket neurons and also belong to the GABA-ergic neurons; about 12% had an AcD. Neuropeptide Y-positive axonal loop cells of the cat subplate are a transient type of projection neuron (Wahle and Meyer, 1987), which was recently reported to not contain glutamate decarboxylase (Ernst et al., 2018). Only about 5% had an AcD; the vast majority of axonal loop cells had the axon originating from the soma. For both neuropeptide Y-positive cell types, this was constant throughout kitten postnatal development. Of the Somatostatin-positive neurons, 45-50% had an AcD (Figure 7B). It seemed as if the percentage of Somatostatin-positive AcD neurons increased early after birth.
However, this 'increase' rather reflected the differentiation of layer V/VI bitufted and Martinotti cells, which begin to express Somatostatin more intensely; Somatostatin-positive AcD cells simply became easier to detect in higher numbers from P7-9 onward. To summarize, we observed a substantial species difference, with pyramidal AcD cells being more frequent in non-primates. Within species, we found clear laminar differences, with pyramidal AcD cells being rare in primate supragranular layers and more frequent in deep layers and in white matter (subplate/interstitial) neurons. Interneurons in human and kitten cortex presented with type-specific proportions of AcDs which can be much higher than those of pyramidal neurons.

Figure 4. (A) Areas analyzed in the macaque, shown with respect to the stereotaxic coordinates (Paxinos et al., 2009). The posterior box corresponds to a fairly caudal level of the visual cortex. The table summarizes the percentages of AcD neurons obtained in the six areas and three individuals and gives the mean of each area with standard deviation. Abbreviations: arc, arcuate sulcus; cgs, cingulate sulcus; cs, central sulcus; ecal, external calcarine sulcus; ios, inferior occipital sulcus; ips, intraparietal sulcus; lf, lateral fissure; lu, lunate sulcus; prs, principal sulcus; sts, superior temporal sulcus. (B) Upper left: photomicrograph of one of the coronal sections of cat occipital cortex analyzed for biocytin-stained AcD neurons. The injection site in this case was near the area 17/18 border; some other cats had an additional injection into the suprasylvian gyrus (see Figure 4-source data 1). Area 17 is along the medial flank; areas 18, 19, and 21 are in the lateral sulcus and on the suprasylvian gyrus. Upper right: the cat brain (after Reinoso-Suarez, 1961) with the visual fields indicated. The table summarizes the percentages of AcD neurons obtained in area 17 and the extrastriate areas. The graph pairs the data points of the five cats.
To the right, we compared cat (n = 7) to ferret (n = 4) visual cortex (striate and extrastriate). Every point is one individual; the red bar represents the median of each column. The p-values were determined with a Mann-Whitney rank sum test. The online version of this article includes the following source data for figure 4: Source data 1. Data and statistical analysis of experiments shown in Figure 4A and B.

Discussion

A majority of human gray matter pyramidal neurons have axons arising from the soma. In this aspect, in particular the supragranular neurons of primates differ from those of non-primates. We found an interindividual variability of AcD cells of about a factor of 2, and despite our high cell numbers a sampling bias cannot be completely excluded. We could not find areal differences in macaque or in cat. Also, the data from human visual, auditory, temporal, and prefrontal cortex did not argue for areal differences. Basal dendritic trees of layer III pyramidal cells in human visual cortex are largest at birth, whereas those in temporal cortical areas continue to increase in complexity during the first postnatal years (for review, Elston and Fujita, 2014). This suggests that the sparsity of the AcD phenotype in human, in particular in supragranular layers, is not dependent on postnatal changes of dendritic complexity. An additional fraction of neurons have axons which share a root with a basal dendrite. Electron microscopy has demonstrated the mixed nature of the shared root, which displays a dendritic fine structure but also contains the fasciculated microtubuli characteristic of the axon hillock. The latter is less distinct when the axon emerges from a dendrite, in that the dense undercoating typical of the initial segment starts immediately after the point of divergence (Peters et al., 1968). This begs the question: how can an axon emerge from a dendrite?
Cortical pyramidal neurons migrate radially upward while their axons emerge from the basal somatic pole and, already during soma migration, descend into the white matter. After the neurons have reached their laminar destination, the leading process transforms into the apical dendrite, and basal dendrites begin to sprout. It remains to be shown if, during basal dendritogenesis, the axon hillock becomes passively displaced from the soma onto an outgrowing dendrite. However, the argument does not explain why the proportion of AcD neurons is much higher in hippocampus, although numbers recently published with intracellular labeling methods for rodent CA1 neurons vary from 20% (Benavides-Piccione et al., 2020) to about 50% (Thome et al., 2014). Furthermore, at this moment it is not clear if the axonal origin is always firmly anchored or can drift along the plasma membrane, for instance, under mechanical influences. It is known that the AIS is a regulated microdomain (Jamann et al., 2018) which undergoes activity-dependent shifts in length and position. So, could the axon hillock actively 'translocate' or become passively displaced from the somatic to a proximal dendritic membrane? Dendrites are dynamic structures and, although imaging studies in mouse have reported fairly stable basal dendrites of supragranular pyramidal neurons during development (Trachtenberg et al., 2002), there are also reports on dynamic changes elicited by environmental enrichment, activity, or disease (for review, Hickmott and Ethell, 2006; Elston and Fujita, 2014).

Figure 5. Unequivocal AcD cells were detected equally well with both methods, whereas the biocytin staining yielded higher numbers of shared root cells (SR). In C, D, colors indicate the comparisons, and the p-values were determined with a Mann-Whitney rank sum test. IFL, immunofluorescence. The online version of this article includes the following source data for figure 5: Source data 1. Data and statistical analysis of experiments shown in Figure 5A and B.
Domestic pig and wild boar had similar proportions, suggesting that domestication has no influence. Kitten and infant macaque data suggest that adult proportions of AcD neurons are already present at early ages, and the three assessments in infant macaque fall within the macaque cluster (Figure 3B). In macaque, the ontogenetically older infragranular pyramidal cells display more AcDs than the later generated supragranular neurons, and the same was observed in our human material. Neurons of the white matter seem to be a special case. In cat, the inverted pyramidal neurons represent a subset of subplate cells. They reside at strategic positions to monitor incoming inputs and may quickly relay that information to the overlying gray matter via axons ascending into the gray matter, including layer IV (Friauf et al., 1990). Given that subcortical afferents and white-to-gray matter projections match in topography (reviewed by Molnár et al., 2020), a synaptic double-hit scenario has been postulated, with geniculocortical afferents trying to strengthen synapses onto layer IV spiny stellates, and with excitatory subplate afferents transiently acting as 'helper synapses' and instructors for the developing thalamocortical connectivity (reviewed by Molnár et al., 2020). With regard to the functional concept of AcD neurons (Thome et al., 2014; Hamada et al., 2016; Kole and Brette, 2018), our findings suggest that action potential firing abilities bypassing somatic integration and somatic inhibition are advantageous during the development of thalamocortical wiring. In adult human white matter, pyramidal interstitial cells may differ from the transient subplate cells of non-primate cortex (Meyer et al., 1992; Suarez-Sola et al., 2009; Sedmak and Judaš, 2021). Yet, a function of quickly relaying incoming afferent information up to the gray matter is also conceivable here, and this might narrow the time window of synaptic integration enabling plasticity, or help to activate inhibitory interneurons.
Whether neurons with axons sharing a common root with a dendrite should be regarded as AcD neurons is a matter of debate. From the morphological perspective, we assigned AcDs in a very conservative manner: all neurons in which the axon origin was not unequivocally seen to arise from a dendrite, or seemed to share a common root with a dendrite, were included in the group with somatic axons. Yet, in recent studies, cells with the shared root configuration have been considered AcD neurons, also using immunofluorescence (Hamada et al., 2016; Thome et al., 2014). As expected, when plotting the sum of AcD plus shared root for the various staining methods, the values increased for all species. However, the non-primate-to-macaque difference is still easily seen. For instance, our summed values from adult rat visual cortex, sampled across all layers, come closer to the proportions reported for layer V neurons by Hamada et al., 2016: about 28% in adult Wistar rat somatosensory cortex. Of note, however, Hamada et al., 2016 reported on neurons which by our criteria would not be AcD cells; their criterion for inclusion was the distance of the spectrin/ankyrin G-labeled AIS to the soma, irrespective of whether the axon emerges from a shared root or unequivocally from a dendrite. Together, considering a fraction of shared root cells will be tolerable, at least in non-primate mammals with their substantial numbers of unequivocal AcD neurons.

Figure 7. Parvalbumin-positive AcD neurons of Individuals 11-13. (B) Photomicrograph of a layer VI neuropeptide Y-positive neuron with a somatic axon, and a layer V Somatostatin-positive AcD neuron. Axons are marked by white arrows; small black arrows mark collaterals. The graph shows percentages of AcD interneuron subsets at the ages indicated in developing cat occipital cortex (see Figure 7-source data 1 for sample size). The online version of this article includes the following source data for figure 7: Source data 1. Data shown in Figure 7A and B.
The human Golgi material yielded the lowest values of AcD and of AcD plus shared root cells (Figure 5B) and the lowest proportion in supragranular layers (Figure 3B). We did not run statistical comparisons with our human data for the following reason. After analyzing more and more individuals and/or brain areas, it became evident that the Golgi methods yielded a lower proportion of AcD neurons and a higher proportion of shared root cells. In line with this, the biocytin material also yielded higher proportions of shared root cells. A parsimonious explanation may be as follows. The Golgi reaction product is a chromate precipitate deposited at the plasma membrane. The pitch-black reaction product and the thickness of the tissue sections, on top of the complexity of basal dendrites in primates (Hendry and Jones, 1983) and even more so in humans (Mohan et al., 2015; review by Goriounova and Mansvelder, 2019), can make it difficult to determine whether an axon emerges from the soma, from a shared root, or already from a very proximal dendritic trunk. The same applies to the black biocytin reaction product (see Figure 2 and Figure 2-figure supplements 1 and 2), although the sections were thinner here. An additional argument comes from the axon itself. Axons originating from dendrites are thinner and have less prominent hillocks (Peters et al., 1968; Mendizabal-Zubiaga et al., 2007; Benavides-Piccione et al., 2020). With dark reaction products it was difficult to precisely determine where exactly a thin process lacking a clear hillock arises from a large dendritic root. This way, we counted somewhat higher percentages of shared root and somewhat lower percentages of AcD in the Golgi-Cox and Golgi-Kopsch material. By contrast, intracellular staining of much thinner sections, such as the 20-50 µm thick sections of the biocytin and immunofluorescence material, allowed us to visualize structures at better optical resolution.
In particular, the confocal analysis allowed us to walk micrometer-by-micrometer through the optical stack to decide 'pro AcD' or 'pro shared root' for each case in question, arguing that optical resolution was the crucial parameter. Nevertheless, biocytin staining was equal to immunofluorescence in detecting clear-cut AcDs, but was inferior to immunofluorescence with confocal analysis when it came to deciding on shared roots. It should be noted that the frequently used SMI-32 staining method may also have a certain bias in that it preferentially stains type 1 pyramidal neurons (Molnár and Cheung, 2006). Future studies are needed before a final conclusion on the areal and laminar proportions of human pyramidal AcD neurons can be made, and for a species comparison intracellular staining methods should be applied, as recently done for CA1 pyramidal cells (Benavides-Piccione et al., 2020). Pyramidal cell AcDs in isocortex and allocortex are basal dendrites. We found fewer than 10 axons in rat, ferret, and macaque emerging from the apical dendrite of a classical upright pyramidal cell of layers II-V. Pyramidal cells of layer VI can be L-shaped or fusiform-bipolar with two major dendrites, or inverted, in rodents as well as in primates (Hendry and Jones, 1983). In human, the large-sized Von Economo neurons in cingulate and other cortices have been reported to regularly have an axon arising from a thick descending basal dendrite, which in addition often shares a common root with a secondary dendrite (Banovac et al., 2021). A study comparing human and mouse hippocampal CA1 pyramidal cells with intracellular injections reported that axons may arise from basal dendrites, with proportions of 40% AcD cells in human and 20% AcD cells in mouse (Benavides-Piccione et al., 2020).
The latter proportion differs markedly from the 52% AcD neurons visualized by DsRed expression in mouse CA1 neurons, and the 47% AcD neurons visualized via intracellular injection in Wistar rat CA1 neurons, reported by Thome et al., 2014. Electron microscopy has revealed that in rat cortex the AIS of axons originating from one of the major dendrites of inverted pyramidal cells is thinner (Peters et al., 1968; Mendizabal-Zubiaga et al., 2007; Benavides-Piccione et al., 2020), and the initial segment is shorter and less innervated by symmetric synapses than the AIS of axons arising from the soma (Mendizabal-Zubiaga et al., 2007). In cat visual cortex, inverted-fusiform pyramidal neurons of layer VI serve corticocortical, but not corticothalamic, projections; for instance, the feedback projection to area 17 from the suprasylvian sulcus (Einstein, 1996), an area involved in motion detection, processing of optical flow, and pupillary constriction. With regard to the functional concept of AcD cells, the kinetics of intra- and interareal information processing may have so far unrecognized facets. GABA-ergic cortical interneurons with local axons often have axons emerging from dendrites, in rodents as well as in monkey (Jones, 1975) and human (Kisvárday et al., 1990). Our data confirm earlier observations in vivo (Wahle and Meyer, 1987; Meyer and Wahle, 1988; Wahle, 1993) and in vitro (Höfflin et al., 2017) in that the frequency of AcDs is cell-type specific. About half of the bitufted and Martinotti neurons in cat cortex had an AcD, whereas most Parvalbumin-positive neurons, in particular large basket cells in the human cortex, had somatic axons regardless of laminar position. Intriguingly, cat subplate axonal loop cells turned out to be lowest, with just about 5% AcD cells. This was in contrast to the >40% AcD subplate pyramidal cells present at the very same ages in the very same compartment, with both types being co-generated from the cortical ventricular zone early during corticogenesis.
Also intriguingly, about one-third of the interneurons of human cortex had an AcD. Our sample represents a mixture of types because only the initial portions of the axons were stained, and it was not possible to separate by type, as the numbers were too small for this. Interestingly, however, the proportion of AcD interneurons in human was fairly close to the average proportion of AcD interneurons in cat cortex, whereas the proportion of AcD pyramidal cells in human was substantially lower compared to pyramidal cells of cat and other non-primate mammals. Why interneurons do not seem to follow the primate trend toward fewer AcD cells remains to be unraveled. Our data add to the view that human cortical pyramidal neurons differ in important aspects from those of non-primates (Elston et al., 2011; Elston and Fujita, 2014; Defelipe, 2011; Beaulieu-Laroche et al., 2018; Gidon et al., 2020; Rich et al., 2021). For instance, human supragranular pyramidal neurons have highly complex basal dendrites (Hendry and Jones, 1983), each being a unique computational unit (reviewed by Goriounova and Mansvelder, 2019). Furthermore, layer II/III human pyramidal cell dendrites have unique membrane properties (Eyal et al., 2016) and are more excitable than those of rat (Beaulieu-Laroche et al., 2018). Another feature is the unique design of human cortical excitatory synapses, which have pools of synaptic vesicles, release sites, and active zones that are much larger than those in rodents (Molnár et al., 2016; Yakoubi et al., 2019). Large and efficient presynapses and more excitable dendrites may reliably depolarize the target cell's somatodendritic compartment, such that electrical dendro-axonic short circuits might become obsolete.
We propose an evolutionary trend, from non-primate to primate isocortical pyramidal neurons, toward inputs that are conventionally integrated within the somatodendritic compartment and can be precisely modulated by inhibition to generate an optimally tuned cellular and, ultimately, behavioral output.

Animals

The data presented here were compiled by tissue sharing (immunohistochemistry) and from tissue that had originally been processed for unrelated projects; i.e., no additional animals were sacrificed specifically for this study.

Biocytin injections

Two adult male Long-Evans pigmented rats (Table 1) received local biocytin injections into areas 17 and 18 in the course of teaching experiments done in the 1990s demonstrating surgery, tracer injections, and biocytin histology. The animals were from the in-house breeding facility. The histological material has been used for decades to train neuroanatomy course students at the Department of Zoology and Neurobiology. Four adult pigmented ferrets (Mustela putorius furo; Table 2) received biotin dextrane amine (BDA) injections into the motion-sensitive posterior suprasylvian area (Philipp et al., 2006; Kalberlah et al., 2009). Five adult cats (Table 2) received biocytin injections into visual cortex around the border of area 17 to area 18 (Distler and Hoffmann, unpublished). After a survival time of 6-13 days, the animals were sacrificed and processed as described for the macaque cases. Three male adult macaques (Macaca mulatta; Table 3) received tracer injections (15-20% BDA, MW 3000) into dorsal premotor cortex (Distler and Hoffmann, 2015). After a survival period of 14-17 days, the animals were sacrificed with an overdose of pentobarbital and perfused through the heart with 0.9% NaCl and 1% procaine hydrochloride, followed by paraformaldehyde-lysine-periodate containing 4% paraformaldehyde in 0.1 M phosphate buffer, pH 7.4.
Coronal 50 µm thick frozen sections were cut on a microtome and processed for biocytin histochemistry with the avidin-biotin method (ABC Elite) with diaminobenzidine as chromogen, which in most cases was enhanced with ammonium nickel sulfate (Distler and Hoffmann, 2015). A P60 infant macaque (Table 3) received a biocytin injection into visual cortex and was processed as above (Distler and Hoffmann, 2011).

Intracellular Lucifer yellow injections

The cat material was from a study on the development of area 17 layer VI pyramidal cell dendrites (Lübke and Albus, 1989). Briefly, Lucifer yellow was iontophoretically injected into the somata in fixed vibratome slices of 100-150 µm thickness, followed by UV-light photoconversion in the presence of diaminobenzidine toward a solid dark-brown reaction product (Table 2). Furthermore, we assessed neurons in the white matter of developing cat visual cortex (Table 2) prelabeled with the antibody 'subplate-1' followed by Lucifer yellow injection and photoconversion (Wahle et al., 1994).

Immunofluorescence

Mouse material (Table 1) was collected as part of the ongoing dissertations of Nadja Lehmann and Susanna Weber, Institute of Neuroanatomy, Medical Faculty Mannheim, Heidelberg University, supervised by Prof. Maren Engelhardt. Sections were processed as described previously (Jamann et al., 2021). The intrinsic EGFP signal was combined with βIV-spectrin immunostaining. Adult cat material for immunostaining was from studies on the development of visual cortex interneurons (Wahle and Meyer, 1987; Meyer and Wahle, 1988). Cryoprotected slabs of these brains had been stored since then, embedded in TissueTek, at -80°C. The 5-month-old pig material was obtained from the Institutes of Physiology and Anatomy, Medical Faculty, University Mannheim (donated by Prof. Martin Schmelz). The P90 European wild boar material was from current studies (Ernst et al., 2018; Sobierajski et al., 2022; Table 2).
Adult macaque and P60 infant macaque material not used for immediate histological assessment had been stored after fixation and glycerol infiltration in isopentane at -80°C. From such spared blocks, 50 µm cryostat sections were cut for immunostaining (Table 3). Sections were pretreated with 3% H₂O₂ in TBS for 30 min, rinsed, incubated for 1 hr in 0.5% Triton in TBS, and blocked in 5% horse serum in TBS for 2 hr, followed by incubation in mouse anti-SMI-32 to stain somata and dendrites of subsets of pyramidal cells and rabbit anti-βIV-spectrin (Höfflin et al., 2017) to stain the AIS. We could analyze only one pig and one cat for the laminar analysis because the immunofluorescence did not deliver sufficient basal dendritic SMI-32 labeling of supragranular neurons in the second available individual. Thus, for these two cases no reliable laminar data could be obtained. Mouse anti-NeuN staining of adjacent sections helped to identify the layers in the biocytin material. DAPI counterstaining helped to identify layers in the immunofluorescence sections. After 48 hr of incubation at 8°C, sections were rinsed, incubated in fluorescent secondaries including DAPI to label nuclei, and coverslipped for confocal analysis. Formalin-fixed human material donated to the Department of Anatomy of the University of La Laguna (see below, Golgi-Kopsch method) was cryosectioned at 80 µm thickness and immunoperoxidase-stained for parvalbumin to determine AcD basket and chandelier cells (Individuals 11-13 in Figure 7A). The material was prepared as part of the dissertation of Maria Luisa Suarez-Sola (University of La Laguna, Spain, 1996) under the supervision of Prof. Gundela Meyer; the material served to illustrate the publication Suarez-Sola et al., 2009. Stained sections were reassessed for the present study.
Golgi impregnation

The Golgi-Cox impregnations were done with so-called access tissue removed during transcortical amygdalo-hippocampectomy from two adult patients who suffered from temporal lobe epilepsy (Individuals 1 and 2 in Table 4). All experimental procedures were approved by the Ethical Committees as reported (Schmuhl-Giesen et al., 2021; Yakoubi et al., 2019). These and other previous studies (Mohan et al., 2015) have demonstrated that the access tissue is normal because it is located far from the epileptic focus. Biopsy tissue was processed using the Hito Golgi-Cox Optim-Stain kit (Hitobiotec Corp) as described (Schmuhl-Giesen et al., 2021; Yakoubi et al., 2019). Coronal sections (quality as shown by Schmuhl-Giesen et al., 2021 in their Figure 1-figure supplement 1) were analyzed for AcD pyramidal neurons in supragranular (I-IV) and infragranular (V-VI) layers. Furthermore, interneurons with smooth dendrites were assessed from this material. For selected neurons, the initial axon and dendrites were 3D-reconstructed with the Neurolucida (MicroBrightField Inc, Williston, VT, United States) at 1000× magnification. The Golgi-Kopsch impregnations of macaque cortex (Table 3) were done on spare tissue from experiments by Prof. Dr. Barry B. Lee (Lee et al., 1983). The sections had been used as reference material in the Dept. of Anatomy, University of La Laguna, Tenerife, Spain. The Golgi-Kopsch impregnations of human auditory and agranular prefrontal cortex (Individuals 3-9 in Table 4) were processed decades ago (Meyer, 1987; Meyer et al., 1989; Meyer et al., 1992). The brains were from notarized donations to the Department of Anatomy of the University of La Laguna for the teaching of medical students and for research. Donors had no neurological disorders. After death, the bodies were transferred to the Department and perfused with formalin.
The brains were extracted, stored in formalin, and small selected blocks were processed using a variant of the Golgi-Kopsch method. Tissue blocks were immersed in a solution of 3.5% potassium dichromate, 1% chloral hydrate, and 3% formalin in distilled water for 5 days, followed by immersion for 2 days in 0.75% silver nitrate. Blocks containing the auditory cortex (Heschl's gyrus), ventral agranular prefrontal cortex, and visual area 18 were cut by hand with a razor blade, dehydrated, and mounted in Epon. For the assessment of AcD cells in the white matter, the border between gray and white matter was traced (Meyer et al., 1992). We avoided this zone and took as orientation the dense aggregations of astrocytes in the white matter and the linear arrangement of blood vessels. As shown before (Meyer et al., 1992), interstitial pyramidal cells have a variety of shapes, from elongated bipolar to multipolar, but carry dendritic spines, in contrast to non-pyramidal interstitial cells.

Analysis and assignment of AcD

We scored all pyramidal cells with sufficiently well-stained basal and apical dendrites that had a recognizable axon. We analyzed fields of view in which the labeled pyramidal cells were fairly perpendicularly oriented such that the apical dendritic trunk and the descending axon could be clearly seen. In the biocytin, Lucifer yellow, and Golgi cases, the axon could be clearly distinguished from sometimes equally thin descending dendrites because the latter had spines. Axons often had a clear axon hillock, which is more prominent in primate > carnivore > rodent, although this was less prominent for axons emerging from dendrites. Descending primary axons often gave rise to thinner collaterals. Note the complementary nature of the methods: Golgi impregnation labels neurons in all layers, not selecting for particular types of pyramidal cells, and also yields cells of layer II, which are SMI-32-negative.
The Golgi-Cox method with the optimized kit yields a somewhat higher density of neurons than the Golgi-Kopsch method. Yet, the various Golgi methods have been reported to deliver very similar results (Banovac et al., 2021). Intracortical biocytin/BDA injections preferentially labeled neurons with horizontal projections in layers II/III, and neurons of infragranular layers closer to the injection site. SMI-32/βIV-spectrin labeling is strongest in large pyramidal cells of layer III and also in infragranular layers, in particular layer V, and weaker in layer VI, as demonstrated (Paxinos et al., 2009). This way, the two methods yielded data preferentially for type 1 pyramidal neurons. The areal comparison in macaque is reported in Figure 4A. The fields were identified according to Paxinos et al., 2009 and Lewis and Van Essen, 2000. The areal comparison in cat is reported in Figure 4B. The fields were identified according to Reinoso-Suarez, 1961. All neurons fulfilling the criteria were sampled by five observers trained on the AcD criteria (ME, Linz; GM, La Laguna; PW together with IG or ES, Bochum, mostly by '4-eyes'). For light microscopy, neurons were viewed and scored with 40× and 63× objectives. All sections available of the tracing material were assessed and described in the source data, in part more than once. In a first assessment, we proceeded perpendicular to the surface to obtain AcD cells in all layers. These data are largely included in Figure 3A. However, when analyzing the macaque material we got the impression of fewer AcD neurons in supragranular layers. Therefore, we assessed the macaque and non-primate biocytin material a second time, now in a surface-parallel manner. Furthermore, for the laminar analysis, we had to obtain larger numbers of neurons to minimize any sampling error.
With most of the human Golgi material, AcD cells were determined in a laminar fashion from the beginning, and since we found so few AcD cells we also scored the shared-root configuration from the beginning. Subsequently, to obtain the shared root from the biocytin material for a species comparison, we reassessed the animal material a third time, again in a perpendicular manner albeit in fewer sections. Cell numbers and the histological basis for every figure are given in the source data; note that total cell numbers vary, but the percentage of AcD cells obtained was in all cases close to the first count of the individual and within the range for each species. For SMI-32/βIV-spectrin and Thy-1/βIV-spectrin fluorescence, images and the tile scan were done with a Leica TSC SP5 confocal microscope (40× and 10× objectives, respectively, with 1.1 NA, 1024 × 1024 px). For SMI-32/βIV-spectrin fluorescence, the areas were imaged by taking confocal stacks at regular distances in supragranular and infragranular layers in equal proportions. All stacks (numbers are given in the source data files) were quantitatively assessed; no selection was made. We aimed to obtain large numbers of neurons in order to avoid, or at least reduce, any sampling bias as much as possible for the laminar analysis, in particular for the macaque tissue. Therefore, all neurons (AcD, shared root, somatic) with sufficient staining of the initial dendrites and a βIV-spectrin-labeled AIS were manually marked in the confocal stacks using the '3D-environment' function of Neurolucida 360, similar to Figure 2-videos 1-3 exported from the Leica program. For the photomicrographs presented, global whole-picture contrast, brightness, color intensity, and saturation settings were adjusted with Adobe Photoshop. Scale bars were generated with ImageJ (MacBiophotonics) and inserted with Adobe Photoshop (CS6 Extended, Version 13.0 × 64).
The assignment of AcD was done in a very conservative manner following Peters et al., 1968 (see their Figure 1, with cell A presenting a shared root, cell B a somatic axon, and cells C, D being AcD cells). Thus, we accepted as AcD cells only neurons in which the axon arose at a recognizable distance of at least the width of the axon hillock from the soma, or emerged at such an angle that a vector through the axon hillock would not project into the soma but bypass it tangentially. Sometimes the axon and a dendrite emerged so close to each other, or from a shared root (in X/Y but also Z level), that the optical plane did not allow a clear decision. We included shared-root cells in the group of 'somatic axon cells', unless otherwise noted/analyzed (see Figure 5). In particular, the white matter pyramidal neurons of the human brain and of the cat brain were difficult due to their elongated shape and the somata tapering into the major dendrites (Meyer et al., 1992). Therefore, we strictly aimed for the clear-cut cases.

Spine analysis

To elucidate whether the privileged AcD has a higher spine density than non-AcD, spines were plotted with the Neurolucida at 1000× magnification from biocytin-labeled neurons of rat and ferret cortex, from primary and secondary basal dendrites, starting a minimum of 50 µm away from the soma. On average, we were able to reconstruct 170 µm/neuron in rat and 145 µm/neuron in ferret. The number of spines per 100 µm dendritic length was computed, and the value for the AcD was paired to the average value of the basal non-AcD of every neuron. Yet, the number of measurable neurons was limited for the following reasons. First, neurons had to be well backfilled with the tracer. Second, neurons had to have an appreciable length of the AcD plus a minimum of one basal non-AcD in the 50 µm thin sections.
Third, these dendrites had to display branch orders of 2-4, because the proximally thicker stems are not suitable for spine analysis and are often devoid of spines (Hübener et al., 1990). Fourth, only solitary cells residing not too close to the injection site with its high background could be analyzed. Spine densities varied in our data set. Technically, the degree of biocytin labeling expectedly varied with the strength of the connection to the injection site. Biologically, pyramidal cell type-specific spine densities are known to vary up to an almost spine-free state, e.g., in Meynert cells (Hübener et al., 1990). To collect a sufficient sample size, we included moderately biocytin-backfilled cells, although they tended to present with a lower spine density. Moreover, most counts were taken from branch order 2-4 segments, which may have fewer spines than terminal segments. Our density average in rat matched values reported for nonterminal segments of Golgi-stained near-adult hooded rat visual cortex supragranular pyramidal cells (Juraska, 1982). Our ferret spine values were lower compared to earlier reports (Clemo and Meredith, 2012), presumably for the reasons mentioned above. However, this would not compromise our finding because we compared only dendrites within individual neurons. Were there a systematic change of the spine density between the AcD and the non-AcD of each cell, the difference should manifest irrespective of the individual staining intensity.

Ethics

The data presented in this paper were collected via tissue sharing and from material that had originally been processed for projects not related to the present topic, i.e., no animals were sacrificed specifically for the present study.

Additional files

Supplementary files
• Transparent reporting form

Data availability

All data generated or analysed during this study are included in the manuscript and supporting file; source data files have been provided for Figures 3, 4, 5, 6, and 7.
Chronic Inflammatory Demyelinating Polyneuropathy (CIDP) in Diabetes Mellitus: A Diagnostic Dilemma

Chronic inflammatory demyelinating polyneuropathy (CIDP) is a neurological disorder of the peripheral nerves which can lead to gradually increasing motor and sensory loss. It can be a difficult entity to diagnose, particularly in elderly patients with a history of Diabetes Mellitus due to their overlapping neuropathic syndromes. Reported is a case of CIDP in an elderly female who manifested multiple sensory, motor, and autonomic complaints. A compilation of clinical features, neuroimaging, lumbar puncture, electromyography, nerve conduction studies, and nerve biopsy was used to reach the diagnosis. Highlighted is a clinical approach to identifying CIDP as a cause of neuropathy in the setting of other potential confounding disorders, namely Diabetes Mellitus.

Introduction

Chronic inflammatory demyelinating polyneuropathy (CIDP) is a rare form of acquired peripheral neuropathy that can present with a wide variety of sensory and motor deficits. In the most typical form, there is gradually progressive, symmetric loss of distal and proximal motor function as well as sensory involvement that is less prominent in comparison [1]. We report a complex case of CIDP involving a 74-year-old female who developed severe autonomic, sensory, and motor complaints leading up to her diagnosis.

Case Presentation

A 74-year-old female with insulin-dependent Type II Diabetes Mellitus (DM) and a history of gastric bypass surgery presented due to progressive motor decline over a four-month period. Her history was notable for a neurogenic bladder diagnosed two years prior and chronic pain of the neck, shoulders, hips, and lower back. Her weakness progressed in an ascending pattern, initially involving the lower extremities with associated foot drop and recurrent falls, followed by involvement of the arms and bulbar weakness causing severe oropharyngeal dysphagia.
On presentation, she was found to have marked atrophy and weakness involving the thigh and calf musculature. Deep tendon reflexes were globally diminished. Babinski's reflex was negative. Sensory testing revealed allodynia, decreased vibratory sense, normal light touch sense, and decreased proprioception in the lower extremities. Serologic testing including vitamin and trace mineral levels, creatine kinase, aldolase, paraproteins, antinuclear antibodies (ANA), thyroid function, and acetylcholine receptor antibodies was unremarkable. Her hemoglobin A1c was 5.9%. Magnetic resonance imaging (MRI) of the cervical, thoracic, and lumbosacral spine showed multilevel disc disease with mild-moderate neuroforaminal narrowing affecting the C5, C6, and L2 nerve roots and hyperintense short TI inversion recovery (STIR) signals within the paraspinal musculature (Figure 1A, 1B). MRI of the bilateral thighs showed diffuse sarcopenia and subcutaneous edema (Figure 2A, 2B). Electrodiagnostic findings are summarized in Tables 1-4. Results showed absent bilateral median, ulnar, and right radial sensory responses, and absent bilateral peroneal and tibial motor responses. Left median and ulnar motor distal latencies were within normal limits with reduced compound motor action potential (CMAP) amplitudes (proximal stimulation not obtained due to a dressing present on the arm). Right median and ulnar motor distal latencies were within normal limits with reduced CMAP amplitudes and reduced motor conduction velocity. Left median and ulnar F-wave latencies were near the normal upper limit. Right ulnar F-wave latency was slightly prolonged. Right median F-wave latency was normal. Overall, this study did not show evidence for myopathy or demyelinating features but was limited by the inability to obtain motor nerve conduction responses in the lower extremities.
[Tables 1-4: nerve conduction recordings by site, e.g., left median - digit II (orthodromic), listing latency (ms) and amplitude (mV).]

The patient underwent quadriceps muscle and sural nerve biopsy, which showed marked muscle atrophy and extensive loss of large and small myelinated nerve fibers with features of active axonal and demyelinating neuropathy. There were rare endoneurial T-lymphocytes and a macrophage attached to relatively intact myelin of one nerve fiber. There was no onion bulb formation. Also noted were arteriosclerotic changes without evidence of vasculitis. A history of progressive weakness coupled with findings of cytoalbuminologic dissociation on LP and evidence for demyelination on biopsy was most consistent with a diagnosis of CIDP. The patient was initiated on intravenous (IV) Solu-Medrol 250 mg daily for three days, then transitioned to 1 mg/kg daily. She was then treated with intravenous immunoglobulin (IVIG) 2 g/kg evenly distributed over a period of five days. She experienced an overall improvement in her upper extremity and bulbar weakness. Steroids were tapered over a two-month period. Her lower extremity weakness showed mild improvement, but she remained wheelchair dependent. She was arranged for outpatient IVIG 1 g/kg daily over two days every three weeks as maintenance.

Discussion

CIDP is a type of immune-mediated neuropathy that can lead to progressive weakness, abnormal sensation, and autonomic dysfunction. The diagnosis of CIDP can be challenging to ascertain given its rarity and its similarities with other common neuropathic diseases. A history of DM is especially problematic when considering CIDP given that diabetic polyneuropathy and diabetic amyotrophy can also lead to elevated protein in the cerebrospinal fluid (CSF) and axonal damage as seen in nerve conduction studies and nerve biopsy. Nutritional deficiency and insulin neuritis were also considered but difficult to prove in retrospect.
A normal hemoglobin A1c and response to treatment favor CIDP over diabetic neuropathy in our case but do not effectively rule it out. Interpretation of electromyography (EMG) and nerve conduction studies in the presence of multilevel intervertebral disc disease also poses a challenge. Despite the multifactorial nature of her disease, a diagnosis of CIDP can still be made based on clinical characteristics, electrophysiological criteria, biopsy, and CSF findings. No gold-standard set of diagnostic criteria exists; however, the European Federation of Neurological Societies and the Peripheral Nerve Society (EFNS/PNS) criteria appear to be among the most useful in identifying CIDP, with a reported sensitivity and specificity of 81% and 97%, respectively [2]. The EFNS/PNS guideline defines criteria for typical and atypical CIDP that are based on clinical, electrodiagnostic, and supportive criteria. Typical CIDP is defined by chronically progressive, stepwise, or recurrent symmetric proximal and distal weakness and sensory dysfunction of all extremities, developing over at least two months. There are also absent or reduced tendon reflexes in all extremities. Other causes of demyelinating neuropathy must be ruled out, such as POEMS syndrome, Lyme infection, or lumbosacral radiculoplexus neuropathy. Other supportive criteria are defined, including cytoalbuminologic dissociation on CSF, characteristic neuroimaging findings, abnormal sensory electrophysiology, objective improvement following immunomodulatory treatment, and nerve biopsy showing evidence of demyelination [3]. Other forms of CIDP are characterized by the predominance of sensory involvement or the identification of autoantibodies against nodal or paranodal proteins. Autonomic dysfunction is an uncommon feature of CIDP and, when present, is more prominent as a complication of diabetes. MRI with gadolinium is the imaging modality of choice and should be performed to exclude other forms of neuropathy that can mimic CIDP.
Features that are suggestive of CIDP are thickening and enhancement of peripheral nerves, the brachial or lumbosacral plexus, and nerve roots [4]. Electrodiagnostic testing is an essential component of diagnosing CIDP, as most patients will show evidence of primary demyelination. Features that are suggestive of demyelination on nerve conduction and electromyography testing include partial conduction block, conduction velocity slowing, temporal dispersion, and distance-dependent reduction of CMAP [5]. It is, however, worth noting that other nerve diseases like diabetic neuropathy can also evoke similar findings. Some studies even suggest a potential association between CIDP and DM. In some cohort analyses, diabetics were observed to meet electrophysiologic criteria for CIDP 12-17% of the time [6,7]. The prevalence of CIDP also tends to be higher in diabetics compared to nondiabetics [8]. CIDP in diabetics, however, tends to present in older patients and more often in the typical form compared to idiopathic CIDP [9]. Axonal loss also tends to be more severe in patients with concurrent CIDP and DM, which likely confers worse outcomes following treatment [10]. Biopsy does not distinguish diabetic neuropathy from CIDP, as both can show varying amounts of demyelination and axonal loss. Histological analysis showing an inflammatory infiltrate, prominent demyelination, or onion bulb formation can be highly suggestive of CIDP but is only seen in a minority of cases. Still, it is worth noting that CIDP is a treatable disease. In some cases, assessing response to immunotherapy can aid in confirming a diagnosis of CIDP retrospectively, whereas diabetic polyneuropathy should not respond to such treatment [11]. There are no laboratory findings that are specific to CIDP; however, certain tests should be done to exclude other disorders.
In general, laboratory testing should include a complete blood count, liver function tests, thyroid function studies, serum and urine protein electrophoresis with immunofixation, serum free light chain assay, fasting serum glucose, glycated hemoglobin, and serum calcium and creatinine. CSF analysis revealing an elevated protein level is a nonspecific finding, but protein tends to be higher in CIDP compared to diabetic neuropathy.

Conclusions

This case highlights the complexity of diagnosing CIDP, particularly in the presence of multiple comorbid conditions which can confound clinical, imaging, and electrophysiologic studies. It is important to remember that CIDP is a treatable form of neuropathy in diabetics. Clinicians should maintain a high index of suspicion for CIDP in diabetics with progressive motor neuropathy that is disproportionate to that expected from diabetic neuropathy alone, and should weigh the potential benefit of treating these patients with immunotherapy. Though primarily a motor neuropathy, it is also important to recognize the wide heterogeneity of CIDP types and not to discount this diagnosis based on atypical features of dysautonomia and sensory dysfunction.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Directional marginal productivity: a foundation of meta-data envelopment analysis

Abstract

Differential characteristics of the production function represent elasticity measures and marginal rates of production technologies; in particular, marginal productivity (MP) plays an important role in economic theory and applications. This study provides a theoretical foundation of directional marginal productivity (DMP) supporting meta-data envelopment analysis (meta-DEA), which measures efficiency via a marginal-profit-maximized orientation. In addition, the segmented marginal rate of technical substitution is developed based on DMP. In fact, DMP is developed to find the improving direction of an efficient firm on the frontier towards marginal profit maximization. This approach, which emphasizes "planning" over "efficiency evaluation", forms the basis for transforming a typical "ex-post" DEA into an "ex-ante" DEA study. Two case studies show that the DMP provides an explicit span of directions for productivity improvement via a trade-off between these distinct directions.

Introduction

This study provides a theoretical foundation of directional marginal productivity (DMP) supporting meta-data envelopment analysis (meta-DEA), which measures efficiency via a marginal-profit-maximized orientation (Lee, 2014). We illustrate the derivation of the DMP and propose the segmented marginal rate of technical substitution (MRTS). Differential characteristics of the production function can generally be calculated from the partial derivatives of the production function, given a smooth efficient frontier. However, the partial derivative usually presents a one-to-one mapping, i.e., how a change in a single input contributes to a single output. In practice, a one-to-many mapping, or how a change in a single input contributes to multiple outputs simultaneously, is more meaningful.
In fact, microeconomic theory supports substitution between multiple products, and a multi-objective decision-making process is common. An expectable trade-off between multiple products refers to multi-output marginal productivity (MP) estimation. This study, a foundation of meta-DEA, uses the directional distance function (DDF) to develop the DMP theoretically. The marginal rate plays an important role in economic theory and applications. The primary purpose of the estimation of a production function is to obtain estimates of the regression coefficients. These coefficients refer to MPs, which characterize how the dependent variable will be affected by changing one extra unit of the independent variables. In a DEA framework, the dual multiplier linear program to the primal envelopment model represents the MP, also referred to as the shadow price. Economists use the term "elasticity" to measure the percentage by which changing one variable affects the others.¹ Applications of MP or elasticity in the literature include Banker and Thrall (1992) and Førsund and Hjalmarsson (2004), who developed a range of scale elasticity to explicitly support the decision-maker, since DEA may not have a unique shadow price. Cooper et al (2000) addressed marginal rates and elasticities of substitution using the slacks in an additive DEA model. The optimal slack values can be positive or negative to achieve the efficient frontier. Moreover, Lee et al (2002) and Mekaroonreung and Johnson (2012) estimated the shadow prices of SO₂ and NOₓ, i.e., the undesirable outputs (pollution) generated from the production process, via DEA and convex nonparametric least squares (CNLS) (Kuosmanen, 2008; Kuosmanen and Johnson, 2010; Lee et al, 2013). From an engineering perspective, the estimation of MP also contributes to capacity planning and resource allocation. Capacity is the maximal output level of a production process.
The output is a result of the total productive capability of a firm's resources, including workforce, machinery, and utilities. Capacity adjustment is the ability to adjust output levels in response to uncertainty by controlling variable resources in the short run. In production theory, capacity adjustment can be interpreted as the MP of the production function, i.e., the extra output generated by one more unit of an input. Johansen's (1968) definition of physical capacity is the maximum amount that can be produced with existing fixed inputs (e.g., plant and equipment), given an unlimited availability of variable factors. Johansen's definition distinguishes between a short-run production function, describing the production possibilities while keeping capacity variables (e.g., capital equipment) fixed, and a long-run production function, in which all inputs are variable and contribute to capacity measures. Färe et al (1989b) employed a nonparametric approach to obtain the capacity measure with a cross-sectional data set. Lee and Johnson (2014) proposed an "effectiveness" measure and a "proactive DEA" approach, which benefit capacity adjustment under demand fluctuation via MP estimation. The developments of MP estimation are limited due to the estimation difficulty at the edges of the frontier and at anchor points (see Krivonozhko et al, 2004; Hadjicostas and Soteriou, 2006; Bougnol and Dulá, 2009). Identifying anchor points, which define the transition from the Pareto-Koopmans efficient frontier to the free-disposability part of the boundary, is extremely complicated. Thus, optimal solutions of dual multipliers in DEA are used to investigate the anchor points. However, using nonparametric techniques of DEA results in nonunique solutions, most of which present zero values.
In addition, because the production function cannot be observed easily in practice, a piece-wise linear production function can be estimated using DEA based on collected observations (Banker et al, 1984; Fried et al, 2008). However, a piece-wise linear frontier forms a polyhedral set representing production technologies and thus is not differentiable. To overcome the problem of nondifferentiability, Podinovski and Førsund (2010) gave an explicit definition of differential characteristics on a nondifferentiable efficient frontier and proposed a directional-derivative approach to calculate elasticity measures without any simplifying assumptions. They applied differential characteristics to the DEA frontier and addressed elasticity measures and marginal rates of substitution (Asmild et al, 2006). Several studies have addressed the performance evaluation or productivity improvement of an inefficient firm based on an input-oriented measure, an output-oriented measure, a hyperbolic measure (Färe et al, 2002; Kuosmanen, 2005), or the directional distance function (Chambers et al, 1996, 1998; Chung et al, 1997), yet only a few have discussed the productivity improvement of an efficient firm on the frontier. Zofio and Prieto (2006) suggested choosing the direction in the DDF to move towards the allocatively efficient benchmarks. Extending their work to the case of efficient firms, Lee (2014) suggests that firms should select the direction via DMP to move towards the direction of marginal profit maximization. This study provides a theoretical foundation of meta-DEA. An analytical expression of MP measures of multiple outputs (i.e., DMP) is obtained by solving the dual multipliers of the DDF; in particular, one-to-many mapping is developed. This study also alternatively addresses the most general class of measures, including mixed input and output bundles such as MRTS and any type of elasticity measure. We also consider the undesirable-output case.
This study is organized as follows. Section 2 introduces the estimation of single-output MP. Section 3 introduces the DDF. Section 4 develops the DMP estimation by DDF from one specific input to multiple outputs and illustrates the segmented MRTS. Section 5 presents the meta-DEA model. Section 6 introduces DMP for undesirable outputs. Section 7 gives two numerical examples, and Section 8 concludes the study and suggests future research.

Single-output marginal productivity

We first assess the single-output MP of a nondifferentiable efficient frontier constructed by the DEA estimator, based on the directional-derivative technique proposed by Podinovski and Førsund (2010). Let set I represent the inputs, with index i ∈ I. Set J represents the outputs, with index j ∈ J. Set K represents the firms, with index k ∈ K. Index r ∈ K is used for one specific firm and is an alias of k. Let X_ik be the observed level of the ith input and Y_jk be the observed level of the jth output of firm k. Let λ_k be the decision variables referring to the intensity weights representing the convex combination of firms, and let y_j be the decision variable representing the maximum absolute level of output j. When estimating DMP, based on microeconomic theory, some of the inputs are not controllable and are treated as nondiscretionary inputs fixed at their current values (Banker and Morey, 1986). Thus, we estimate the possible change in the discretionary inputs and keep the exogenously fixed inputs constant. Model (1) determines the maximum absolute level of one specific output j*, given the level of one specific (discretionary) input i* of one specific firm r. Let v_i, u_j, and u_0 be the decision variables representing the dual multipliers of the input constraints, output constraints, and convex-combination constraint in model (1), respectively. We can now construct the dual of model (1) as model (2).
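Model (1) is a small linear program. As a hedged illustration (the data and the function name are hypothetical, not the paper's), for one input and one output its optimum can be computed exactly in pure Python by enumerating single units and two-unit convex combinations, since a basic optimal solution of this LP mixes at most two observed units:

```python
# Sketch of model (1) for one input and one output: maximize the output
# level attainable at a given input level x0 over the VRS (convex,
# free-disposal) technology spanned by the observed units. Hypothetical data.

def max_output(x0, X, Y):
    """Max sum(lam_k * Y_k) s.t. sum(lam_k * X_k) <= x0, sum(lam_k) = 1, lam >= 0.

    With one input and one output, a basic optimal solution mixes at most
    two observed units, so enumerating pairs (plus single units) is exact.
    """
    best = float("-inf")
    n = len(X)
    for i in range(n):                      # single-unit solutions
        if X[i] <= x0 + 1e-12:
            best = max(best, Y[i])
    for i in range(n):                      # two-unit convex combinations
        for j in range(n):
            ts = [0.0, 1.0]
            if X[i] != X[j]:                # t where the input constraint binds
                t = (x0 - X[j]) / (X[i] - X[j])
                if 0.0 <= t <= 1.0:
                    ts.append(t)
            for t in ts:
                if t * X[i] + (1 - t) * X[j] <= x0 + 1e-12:
                    best = max(best, t * Y[i] + (1 - t) * Y[j])
    return best

X = [1.0, 2.0, 4.0]   # observed input levels of units A, B, C (hypothetical)
Y = [1.0, 3.0, 4.0]   # observed output levels

print(max_output(2.0, X, Y))   # -> 3.0, the frontier output at x0 = 2
```

Here unit B, at (2, 3), attains the frontier value, so it is efficient at its own input level.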
Recalling that MP is a characteristic of the frontier, for one specific efficient firm r, the following revised formulation calculates the marginal rate β^{+DEA}_{i*j*r}, approaching from the right side, with respect to one particular input i* and one output j*. Since MP is defined on the efficient frontier, i.e. firm r is on the frontier and y_{j*} = Y_{j*r} in model (1), we can derive the objective function Min v_{i*}, and model (3) estimates β^{+DEA}_{i*j*r} (Podinovski and Førsund, 2010). To measure the marginal rate approaching from the left side, we simply replace the objective function by Max v_{i*}, which yields model (4). Figure 1 therefore illustrates the single-input single-output MP, β^{+DEA}_{i*j*r} or β^{-DEA}_{i*j*r}, in terms of output expansion or contraction. Note that we do not define the MP for inefficient firms operating inside the production frontier.

Directional distance function

The directional distance function (DDF) estimates efficiency by expanding outputs and reducing inputs at the same time (Luenberger, 1992; Chambers et al, 1996, 1998; Chung et al, 1997). Let g = (g_X, g_Y) be the predetermined directional vector for inputs and outputs, where g_X ∈ ℝ^{|I|}_+ and g_Y ∈ ℝ^{|J|}_+. Given the direction vector (g_X, g_Y), we define the directional distance function as shown in model (5), where η is the decision variable for the efficiency estimate. If η = 0, then firm r is efficient; otherwise η > 0 represents the inefficient case:

Model (5): Max η
s.t. Σ_k λ_k X_ik ≤ X_ir − η g_{X_i}, ∀i ∈ I
     Σ_k λ_k Y_jk ≥ Y_jr + η g_{Y_j}, ∀j ∈ J
     Σ_k λ_k = 1; λ_k ≥ 0, ∀k ∈ K

To estimate the MP by DDF, we develop model (6) to estimate the maximum absolute level of one specific output. Let g_{X_i*} and g_{Y_j*} be the given elements in the directional vector g for one specific input i* and one specific output j*. Model (6) determines the maximum absolute level.
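Because the DEA frontier is piecewise linear, the right- and left-side marginal rates of models (3) and (4) can also be recovered by one-sided finite differences of the frontier, and this is exact as long as the step stays within a single facet. A sketch on hypothetical data (the helper re-derives the frontier value of model (1); names are illustrative, not the paper's):

```python
# One-sided MP estimates beta+ and beta- (cf. models (3)-(4)) recovered by
# finite differences on the piecewise-linear DEA frontier. Exact whenever
# the step delta stays inside one facet. Hypothetical data.

def max_output(x0, X, Y):
    # Frontier output at input x0 (VRS, one input / one output): enumerate
    # single units and two-unit convex combinations -- exact for this case.
    best = float("-inf")
    n = len(X)
    for i in range(n):
        if X[i] <= x0 + 1e-12:
            best = max(best, Y[i])
        for j in range(n):
            ts = [0.0, 1.0]
            if X[i] != X[j]:
                t = (x0 - X[j]) / (X[i] - X[j])
                if 0.0 <= t <= 1.0:
                    ts.append(t)
            for t in ts:
                if t * X[i] + (1 - t) * X[j] <= x0 + 1e-12:
                    best = max(best, t * Y[i] + (1 - t) * Y[j])
    return best

X = [1.0, 2.0, 4.0]
Y = [1.0, 3.0, 4.0]
x_r, delta = 2.0, 0.5          # efficient unit B; step stays inside each facet

beta_plus = (max_output(x_r + delta, X, Y) - max_output(x_r, X, Y)) / delta
beta_minus = (max_output(x_r, X, Y) - max_output(x_r - delta, X, Y)) / delta
print(beta_plus, beta_minus)   # -> 0.5 2.0
```

The two values differ because unit B sits on a vertex of the frontier, which is exactly why the MP is a range rather than a single number.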
Note that model (6) is a variant of model (5), where Y_{j*r} is a constant describing the output level of j* for firm r; it does not affect the optimization result but solely serves to calculate the absolute level in the objective function. We derive model (7) as the dual of model (6) using the dual variables mentioned above.

Figure 1. Production function, illustrating output expansion and contraction.

Proposition 1. Given (g_{X_i*}, g_{Y_j*}) = (0, 1), if firm r is on the efficient frontier, then the objective function value of model (1) is equivalent to that of model (6), and the objective function value of model (2) is equivalent to that of model (7).

Proof. See Appendices 1 and 2.

Proposition 1 is important: it shows that the DDF is a generalized estimator, since it not only estimates efficiency in either a one-input orientation or a one-output orientation but also reaches the frontier in multiple orientations simultaneously. That is, the DDF provides a hint for developing the DMP estimation of multi-product orientation by fine-tuning the given direction in the DDF.

Directional marginal productivity via directional distance function

This section describes the proposed multi-output MP model (i.e. DMP). Based on the DDF, we develop a model to describe how a change in a single input X_{i*} affects multiple outputs. Let J* ⊆ J be the set of outputs whose MP will be investigated. We estimate DMP by model (8), given the direction vector (g_{X_i*}, g_{Y_j}) as parameters, where g_{X_i*} = 0 and Σ_{j∈J*} g_{Y_j} = 1 for the unit simplex (Färe et al, 2013). We then define the DDF as in model (8). Firm r is on the frontier, since MP is one of the differential characteristics of the frontier, i.e. η = 0 = Σ_{i∈I} v_i X_ir − Σ_{j∈J} u_j Y_jr + u_0. Thus, we use a variant of the dual model (9) to estimate the DMP of Y_j, j ∈ J*, with respect to X_{i*} (Shapiro, 1979). Note that the direction g_{Y_j} can be regarded as the "weighting" between the investigated outputs: the larger the weight, the closer the DMP moves towards the output with the higher weight.
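A minimal sketch of model (5), assuming hypothetical data: for a fixed direction, feasibility of a trial distance η is monotone (larger η is harder to reach), so the DDF value can be found by bisection against the frontier. With direction (0, 1) the result reproduces the output-oriented gap of model (1), which is the content of Proposition 1.

```python
# DDF sketch, cf. model (5): max eta such that the shifted point
# (x_r - eta*gx, y_r + eta*gy) remains inside the VRS technology.
# Feasibility is monotone in eta, so bisection suffices. Hypothetical data.

def max_output(x0, X, Y):
    best = float("-inf")
    n = len(X)
    for i in range(n):
        if X[i] <= x0 + 1e-12:
            best = max(best, Y[i])
        for j in range(n):
            ts = [0.0, 1.0]
            if X[i] != X[j]:
                t = (x0 - X[j]) / (X[i] - X[j])
                if 0.0 <= t <= 1.0:
                    ts.append(t)
            for t in ts:
                if t * X[i] + (1 - t) * X[j] <= x0 + 1e-12:
                    best = max(best, t * Y[i] + (1 - t) * Y[j])
    return best

def ddf(x_r, y_r, gx, gy, X, Y, hi=100.0):
    feasible = lambda e: max_output(x_r - e * gx, X, Y) >= y_r + e * gy - 1e-12
    lo = 0.0
    for _ in range(100):                 # bisection on the distance eta
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

X, Y = [1.0, 2.0, 4.0], [1.0, 3.0, 4.0]
eta_out = ddf(2.0, 2.0, 0.0, 1.0, X, Y)   # pure output direction (0, 1)
eta_both = ddf(2.0, 2.0, 1.0, 1.0, X, Y)  # shrink input and expand output
print(round(eta_out, 6), round(eta_both, 6))   # -> 1.0 0.333333
```

For the inefficient point (2, 2), the output direction gives η = 1, so y_r + η matches the model-(1) maximum output 3 at x = 2; the mixed direction (1, 1) reaches the frontier sooner, at η = 1/3.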
For example, if (g_{Y_j1}, g_{Y_j2}) = (0, 1), then the DMP estimated by model (9) is the same as the single-output MP estimated by model (3) with respect to the second output. To illustrate the "weights" (i.e. the direction) among multi-output substitutability, we eliminate the unit of each factor for normalization. Let X^{Max}_{i*} denote the maximum observed level of input i* (and similarly for the outputs), and propose model (10). The reason for introducing the unit simplex and eliminating the measurement units of inputs and outputs is to normalize the weights, which represent a trade-off among outputs. Take an example of two outputs: if we would like to estimate the MP passing through the middle of the two outputs, the weight (g_{Y_j1}, g_{Y_j2}) = (0.5, 0.5) should be assigned intuitively for calculating DMP, since eliminating the measurement units makes the estimate units-invariant, i.e. the results are independent of the units of the inputs and outputs. Therefore, increasing one extra unit of X_{i*} for firm r yields the vector of DMPs with respect to the outputs Y_j, j ∈ J*.

In addition, it is invalid to estimate the MP on the portion of the frontier that is freely disposable with respect to the inputs. First, intuitively, the free-disposable portion shows the direction in which a firm can reduce its input level while still maintaining the same outputs, i.e. this direction cannot truly reflect marginal productivity. Second, based on Proposition 2 below, MP estimates on the portion of free disposability with respect to inputs are equal to zero by model (10).

Proposition 2. If the direction for MP estimation used in model (10) projects onto the portion of free disposability with respect to inputs, then the MP estimate will be equal to 0.

Proof. See Appendices 1 and 2.

We also provide an alternative way to calculate the marginal rate of technical substitution (MRTS) of outputs based on DMP. The frontier in DEA is not smooth; thus, in most cases, the estimation of the MP for a specific firm r is not a fixed value but a range between a minimal and a maximal value, i.e.
[min, max], though it is possible that in some cases the minimal value (min) equals the maximal value (max). To address the issue, we need to consider two-sided MPs. Let DMP^+ be the DMP approaching from the right side, obtained from model (10), and DMP^- be the DMP approaching from the left side, also obtained from model (10). Thus, DMP^+ and DMP^- may form a range [min, max]. On the nonsmooth DEA frontier, MRTS should be calculated from both sides (see Podinovski and Førsund, 2010). For simplicity, we only illustrate MRTS^+, which is generated by DMP^+; the other case, MRTS^-, can be derived similarly. For a two-output case, we define a typical MRTS^+ which can be calculated from two single-output MPs on a frontier hyperplane, where all inputs are fixed at some levels and all outputs other than Y_j1 and Y_j2 are fixed at some levels. That is, MRTS^+ for two arbitrary outputs Y_j1 and Y_j2 is calculated from a_1 and a_2, where a_1 is obtained from model (10) given the direction (g_{Y_j1}, g_{Y_j2}) = (1, 0) and a_2 is calculated with (g_{Y_j1}, g_{Y_j2}) = (0, 1), each indicating a single-output MP (SOMP). Note, however, that because an estimated DEA piece-wise frontier in high dimensions forms a polyhedral set with multiple facets, such a simple calculation of MRTS often provides a lower resolution. Figure 2 shows that each line segment (solid line) on the DEA frontier presents a different MRTS; the dashed line shows a rough but typical MRTS estimation by two single-output MPs. To resolve this issue, we develop a definition of "segmented MRTS" (s-MRTS) between any two DMPs by calculating the marginal difference of each output, as follows (Section 7.1 describes an example illustration). For other approaches to estimating s-MRTS, see Olesen and Petersen (1996, 2003).

Definition 2. The segmented marginal rate of technical substitution (s-MRTS) can be calculated by investigating two specific outputs and is defined as s-MRTS^+ = (DMP^+_{j1}(g¹) − DMP^+_{j1}(g²)) / (DMP^+_{j2}(g¹) − DMP^+_{j2}(g²)), where DMP^+(g¹) and DMP^+(g²) are the two DMPs used for the s-MRTS estimation.
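The DMP and the segmented MRTS can be sketched numerically: given one extra unit of input and a simplex direction (g₁, g₂), find the largest β such that (y₁ᵣ + βg₁, y₂ᵣ + βg₂) is still producible, then compare two such DMP vectors. The grid search over intensity weights below is a hedged approximation (exact here only because the optima fall on grid points); the three-unit data set is hypothetical, not the paper's Table 1.

```python
# DMP sketch: for unit `unit` with one extra input unit and direction
# (g1, g2) on the unit simplex, find the largest beta with
# (y1_r + beta*g1, y2_r + beta*g2) still inside the VRS technology.
# Grid search over intensity weights lambda; hypothetical data.

X  = [1.0, 2.0, 2.0]          # input of units A, B, C
Y1 = [2.0, 4.0, 2.0]          # first output
Y2 = [2.0, 3.0, 5.0]          # second output

def dmp(unit, g1, g2, extra=1.0, steps=100):
    cap = X[unit] + extra     # input budget after one extra unit
    best = 0.0
    for a in range(steps + 1):
        for b in range(steps + 1 - a):
            lam = (a / steps, b / steps, (steps - a - b) / steps)
            if sum(l * x for l, x in zip(lam, X)) > cap + 1e-9:
                continue
            y1 = sum(l * y for l, y in zip(lam, Y1))
            y2 = sum(l * y for l, y in zip(lam, Y2))
            ratios, ok = [], True
            for g, y, y_r in ((g1, y1, Y1[unit]), (g2, y2, Y2[unit])):
                if g > 0:
                    ratios.append((y - y_r) / g)
                elif y < y_r - 1e-9:   # zero weight: output must not fall
                    ok = False
            if ok and ratios:
                best = max(best, min(ratios))
    return best

b10 = dmp(0, 1.0, 0.0)        # DMP vector (1.0*b10, 0.0)
b55 = dmp(0, 0.5, 0.5)        # DMP vector (0.5*b55, 0.5*b55)
s_mrts = (1.0 * b10 - 0.5 * b55) / (0.0 * b10 - 0.5 * b55)
print(b10, b55, round(s_mrts, 4))   # -> 2.0 3.0 -0.3333
```

For unit A, the pure direction (1, 0) gives the DMP vector (2, 0), while the middle direction (0.5, 0.5) gives (1.5, 1.5); the s-MRTS between them is the ratio of the two marginal differences, −1/3.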
In fact, the one-sided MRTS^+ of transformation of output j₁ with respect to output j₂ can be calculated by model (9) with the objective function Max −u_{j2} and the given direction (g_{Y_j1}, g_{Y_j2}) = (1, 0).

Meta-DEA: direction towards marginal profit maximization

This section introduces meta-DEA to find a direction for an efficient firm to move towards its allocatively efficient benchmark based on the maximization of the firm's marginal profit (Lee, 2014). We know that different directions (i.e. weighting vectors) may generate different DMPs; these DMPs span a frontier-like set. However, this frontier is associated with MPs rather than with the levels of inputs or outputs. We term this a "meta-frontier", i.e. a frontier-about-frontier, because the DMPs are themselves generated by the DEA technique. Figure 3 gives an illustration, where P(x) is an output space referring to the production possibility set, given the level of one specific input. Note that DEA forms a "production possibility set" at the level x, whereas meta-DEA forms a "marginal production possibility set" towards the level x + Δx based on the estimated MPs. Therefore, given input and output prices, we can find the direction of marginal profit maximization with one extra unit of input. Given an input and output price vector (P_i, P_j), there is a way to find the marginal-profit-maximizing direction: we can generate the DMPs manually, given randomly picked directions, and then calculate the allocative efficiency with respect to the meta-DEA frontier based on these discrete directions (i.e. vectors of DMPs). Let g_{Y_j}^w be a "decision variable" representing the wth direction; the direction identified for marginal profit maximization is defined in mathematical formulation as the following equation. Note that, for a change in one specific input, marginal profit maximization is equivalent to marginal revenue maximization, because the marginal cost of one extra unit of a single input is fixed.
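The discrete-direction scan can be sketched by evaluating the DMP over a set of simplex directions and keeping the one with the largest marginal revenue p · DMP (marginal cost is fixed, since exactly one extra unit of the single input is used). Data, prices, and the grid-search dmp() helper below are all hypothetical illustrations, not the paper's formulation.

```python
# Meta-DEA sketch: scan 11 directions (w, 1-w) on the unit simplex, compute
# the DMP vector for each, and keep the direction maximizing marginal
# revenue p . DMP. Hypothetical data and prices.

X  = [1.0, 2.0, 2.0]
Y1 = [2.0, 4.0, 2.0]
Y2 = [2.0, 3.0, 5.0]

def dmp(unit, g1, g2, extra=1.0, steps=100):
    # Largest beta with (y1_r + beta*g1, y2_r + beta*g2) producible after
    # one extra input unit; grid search over intensity weights (approximate).
    cap = X[unit] + extra
    best = 0.0
    for a in range(steps + 1):
        for b in range(steps + 1 - a):
            lam = (a / steps, b / steps, (steps - a - b) / steps)
            if sum(l * x for l, x in zip(lam, X)) > cap + 1e-9:
                continue
            y1 = sum(l * y for l, y in zip(lam, Y1))
            y2 = sum(l * y for l, y in zip(lam, Y2))
            ratios, ok = [], True
            for g, y, y_r in ((g1, y1, Y1[unit]), (g2, y2, Y2[unit])):
                if g > 0:
                    ratios.append((y - y_r) / g)
                elif y < y_r - 1e-9:
                    ok = False
            if ok and ratios:
                best = max(best, min(ratios))
    return best

p1, p2 = 2.0, 1.0             # output prices
best_w, best_rev = None, float("-inf")
for i in range(11):           # directions (0.0, 1.0), (0.1, 0.9), ..., (1.0, 0.0)
    w = i / 10
    beta = dmp(0, w, 1.0 - w)
    rev = p1 * (w * beta) + p2 * ((1.0 - w) * beta)
    if rev > best_rev:
        best_w, best_rev = w, rev
print(best_w, round(best_rev, 4))   # -> 0.7 4.8571
```

Neither pure direction is optimal here: the price-weighted scan picks an interior direction, which is exactly the allocatively efficient benchmark on the MP frontier that meta-DEA is after.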
See Lee (2014) for the benefits of meta-DEA and how it complements profit-efficiency analysis (Nerlove, 1965). So far, we have discussed the capacity-expansion case: when estimating the marginal rate approaching from the right side, all of the elements g_{Y_j} in a given direction are nonnegative. However, since some cases involve capacity contraction towards marginal profit maximization, e.g. undesirable outputs such as pollution or waste, it is also helpful to estimate the marginal rate approaching from the left side via a negative direction. The next section discusses undesirable outputs in detail.

Directional marginal productivity for desirable and undesirable outputs

To estimate the marginal rate approaching from the left side, intuitively we can use a negative direction g_{Y_j}. For the single-output case, we use models (3) and (4).

Proposition 3. The MP estimated by model (10) with the objective function Max v_{i*} X^{Max}_{i*} is equivalent to the MP estimated given a negative direction.

Proof. See Appendices 1 and 2.

Recall that the DEA estimator, in particular the BCC model (Banker et al, 1984), assumes free disposability of undesirable outputs, which implies that a finite amount of input can produce an infinite amount of undesirable output. This assumption is physically unreasonable (Färe et al, 1989a; Färe and Grosskopf, 2003; Kuosmanen and Podinovski, 2009). Intuitively, we can reduce the level of the good output, which in turn results in a proportionate reduction of the undesirable outputs. This property is termed weak disposability (Shephard, 1974). The relationship between the good output and the undesirable output is null-joint, and the undesirable output is a by-product of the good output (Färe et al, 2007). We introduce weak disposability via Kuosmanen's convex technology with undesirable outputs (Kuosmanen, 2005). Let Q be the set of undesirable outputs, Q* ⊆ Q be the subset of undesirable outputs investigated for DMP, and g_{B_q} the direction of undesirable output q. For the unit simplex Σ_{j∈J*} g_{Y_j} + Σ_{q∈Q*} g_{B_q} = 1, the direction is a vector (g_{X_i*}, g_{Y_j}, g_{B_q}), where g_{X_i*} = 0, g_{Y_j} ≥ 0, and g_{B_q} ≥ 0.
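Kuosmanen's weakly disposable technology can be sketched directly: intensity weights λ scale active production (good outputs and bads together), while μ represents abatement that consumes input but produces nothing. The toy below uses hypothetical two-unit data in which all inputs equal 1, so the input constraint Σ_k (λ_k + μ_k) x_k ≤ x holds automatically; it maximizes the reduction g of unit D's undesirable output while holding its good output, i.e. direction (g_y, g_b) = (0, 1).

```python
# Sketch of Kuosmanen's (2005) weakly disposable technology with one good
# output y and one undesirable output b:
#   sum_k (lam_k + mu_k) x_k <= x,  sum_k lam_k y_k >= y,
#   sum_k lam_k b_k  = b,           sum_k (lam_k + mu_k) = 1.
# Hypothetical data; all x_k = 1, so the input constraint is automatic.

XD, YD, BD = 1.0, 2.0, 4.0    # unit D (the evaluated unit)
XE, YE, BE = 1.0, 3.0, 1.0    # unit E

steps = 300
best_g = 0.0
for a in range(steps + 1):            # lam_D = a/steps
    for c in range(steps + 1 - a):    # lam_E = c/steps; mu absorbs the rest
        lam_d, lam_e = a / steps, c / steps
        y = lam_d * YD + lam_e * YE
        if y < YD - 1e-9:             # good output must not fall below y_r
            continue
        b = lam_d * BD + lam_e * BE   # weak disposability: b is an equality
        best_g = max(best_g, BD - b)
print(round(best_g, 4))               # -> 3.3333, i.e. g = 10/3
```

The equality constraint on b caps the reduction at b = 2/3: unlike free disposability, the bad cannot be discarded all the way to zero without also scaling down good output, which is precisely the physically reasonable behaviour the text calls for.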
Model (12) defines the DDF with undesirable outputs as follows.

Chia-Yen Lee - Directional marginal productivity: a foundation of meta-data envelopment analysis

Since firm r is on the frontier, model (12) estimates the DMP of Y_j, j ∈ J*, and B_q, q ∈ Q*, with respect to X_{i*} by eliminating the units of the factors. Therefore, we calculate the DMP with undesirable outputs. In this case, the firm would like to increase desirable outputs and decrease undesirable outputs simultaneously by controlling the input level. The DMP provides good insight into the best direction for effective resource allocation, i.e. "generate more energy, but less pollution". Note that estimating the MP on the portion of free disposability with respect to the investigated input is invalid, similar to the issue discussed in Section 4.

Numerical illustrations

Section 7.1 explains how to estimate a two-output MP case using the proposed model (10) described in Section 4. Section 7.2 explains how to estimate a DMP case with one undesirable output using model (13) described in Section 6.

Two-output case

We return to the example in Podinovski and Førsund (2010), which includes one input, two outputs, and three observations. However, we change the scale of the second output to illustrate the benefit of unit elimination in model (10) (Table 1). For one specific unit A, given g_{Y_1} + g_{Y_2} = 1 for normalization, we use the direction (g_{X_1}, g_{Y_1}, g_{Y_2}) to generate model (14), where g_{X_1} = 0, and obtain the DMP when increasing one extra unit of X_1 in unit A. Given an output price vector (P_1, P_2), where P_1 > 0 and P_2 > 0, we observe that meta-DEA can identify the direction of marginal profit maximization. In our two-output case, the optimal direction is associated with the output price ratio P_1/P_2. We investigate a resolution of 10 intervals (11 cases) between (g_{Y_1}, g_{Y_2}) = (1, 0) and (g_{Y_1}, g_{Y_2}) = (0, 1), as shown in Table 2 and Figure 4.
Table 2 shows that the single-output MP of unit A is consistent with the result shown in Podinovski and Førsund (2010): the MP of Y_1 is 4 using the direction (g_{Y_1}, g_{Y_2}) = (1, 0), and the MP of Y_2 is 50 using the direction (g_{Y_1}, g_{Y_2}) = (0, 1). Keeping in mind that the DEA frontier includes a free-disposable portion with respect to outputs, and given that the MP of Y_2 remains equal to 50, we can increase the MP of Y_1 to 0.44 by shifting the direction from (g_{Y_1}, g_{Y_2}) = (0, 1) to (0.4, 0.6). When we increase one extra unit of input, we therefore prefer to choose the direction (g_{Y_1}, g_{Y_2}) = (0.4, 0.6) rather than (0, 1) to generate more output; note that the marginal cost is fixed, representing one extra unit of a single input. In addition, given the output price vector, meta-DEA shows the marginal profit maximization over the two outputs and points out the direction for productivity improvement. For instance, if the output price ratio is between 14.24 and 14.26−, meta-DEA suggests the direction (g_{Y_1}, g_{Y_2}) = (0.6, 0.4); in fact, it points out the allocatively efficient benchmark on the MP frontier. We can also calculate the s-MRTS for unit A, as shown in Table 2. For example, we use the DMPs in case 1 and case 2 to calculate s-MRTS = (...)/(0 − 21.05) = −0.07. N/A represents an s-MRTS that cannot be calculated due to a portion of free disposability of output Y_1. Note that a typical MRTS is computed from a_1 and a_2, which are calculated with (g_{Y_1}, g_{Y_2}) = (1, 0) and (0, 1), respectively; however, this MRTS across a boundary (edge) between two facets gives an imprecise estimate.

One-desirable-output and one-undesirable-output case

Returning again to Kuosmanen and Podinovski (2009), we consider two observations (units D and E in Table 3) and change the scale of the undesirable output. We investigate a resolution of 10 intervals (11 cases) between (g_{Y_1}, g_{B_1}) = (1, 0) and (g_{Y_1}, g_{B_1}) = (0, 1), as shown in Table 4.
The results show that the MPs generated by cases 8, 9, and 10 will benefit unit D by decreasing its undesirable output and slightly increasing its good output, moving towards unit E along the frontier. However, the MPs estimated in cases 1 to 7 are equal to zero based on Proposition 2, since the assigned directions project onto the portion of free disposability with respect to the input. An MP equal to zero does not provide any useful information for productivity improvement when adjusting the input level; in fact, the MP of unit D is not truly zero, but the zero estimates arise because the given directions project onto the free-disposability portion of the input.

Figure 4. DMP and meta-DEA of Y_1 and Y_2 in unit A (revised from Podinovski and Førsund, 2010).

Conclusion

This study provides a theoretical foundation of DMP supporting meta-DEA, which measures efficiency via a marginal-profit-maximized orientation. DMP investigates the differential characteristics of the nonsmooth piece-wise linear frontier estimated by DEA, and we explicitly derived the DMP via the DDF. Since increasing one extra unit of input can simultaneously contribute to multiple outputs, this study fills a gap in the literature and extends Podinovski and Førsund's (2010) work to the DMP given a predetermined directional vector. In practice, the DMP can be used to build the span of the MP frontier supporting productivity improvement via resource reallocation, e.g. capacity adjustment matching demand fluctuation; the managerial implication of DMP is enhanced decision quality regarding marginal effects. In addition, DMP can also be applied to the computation of MRTS, and we develop an alternative measure, s-MRTS, to complement the typical MRTS measure via a segmentation technique and the calculation of each output's marginal difference. Typically, the MRTS can be estimated by the ratio of two derivatives of the DDF with respect to different outputs (Grosskopf et al, 1995).
However, these derivatives usually come from the dual variables of the output constraints in the DEA formulation, and thus nonunique dual solutions are common. The proposed s-MRTS addresses this issue and also complements the approach shown in Olesen and Petersen (2003). For future work, the synergistic effects of multiple inputs and multiple outputs could be considered. Noting that the estimate of the increase in output is conservative if two or more inputs are expanded simultaneously, we suggest separately estimating the marginal product of each input and then taking the dot product of the marginal-product vector; however, doing so will not capture any synergistic effects between the different inputs. In addition, the DMP estimation can support capacity adjustment, but moving along the efficient frontier too far may fall outside the production possibility set. To maintain feasibility, meaning that a firm remains within its original production possibility set after the adjustment, we suggest a limited range of resource adjustments and recalculating the MP in each iterative short-distance move, due to the law of diminishing marginal returns (Lee and Johnson, 2014).

Appendix 1: Convex nonparametric least squares

The convex nonparametric least squares (CNLS) technique (Hildreth, 1954; Kuosmanen, 2008) describes the average behaviour of observations. CNLS avoids prior assumptions regarding the functional form while maintaining the standard regularity conditions for production functions, namely continuity, monotonicity, and concavity. Later, Kuosmanen and Johnson (2010) demonstrated that the inefficiency estimated by the sign-constrained CNLS is equivalent to that estimated by the additive output-oriented DEA. The coefficients associated with the independent factors intuitively provide estimates of the MP in a regression-based approach.
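Estimating the CNLS coefficients directly requires a quadratic-programming solver; however, by the Kuosmanen-Johnson equivalence just described, the same inefficiencies and slope coefficients (interpretable as right-side MPs) can be read off the additive DEA frontier. A hedged pure-Python sketch on hypothetical single-input data, with frontier() as an illustrative helper rather than the paper's notation:

```python
# The sign-constrained CNLS needs a QP solver, but by the Kuosmanen-Johnson
# equivalence its inefficiencies and slope coefficients can be recovered
# from the additive DEA frontier. Hypothetical single-input data.

def frontier(x0, X, Y):
    # VRS frontier output at x0 (exact two-unit enumeration for 1 in / 1 out).
    best = float("-inf")
    n = len(X)
    for i in range(n):
        if X[i] <= x0 + 1e-12:
            best = max(best, Y[i])
        for j in range(n):
            ts = [0.0, 1.0]
            if X[i] != X[j]:
                t = (x0 - X[j]) / (X[i] - X[j])
                if 0.0 <= t <= 1.0:
                    ts.append(t)
            for t in ts:
                if t * X[i] + (1 - t) * X[j] <= x0 + 1e-12:
                    best = max(best, t * Y[i] + (1 - t) * Y[j])
    return best

X = [1.0, 2.0, 4.0, 3.0]      # unit 4 at (3.0, 2.0) is inefficient
Y = [1.0, 3.0, 4.0, 2.0]

# Nonradial (additive) inefficiency of each unit relative to the frontier.
eps = [Y[k] - frontier(X[k], X, Y) for k in range(len(X))]

# Right-side slope of the frontier at each unit = right-side MP, which is
# the slope coefficient the sign-constrained CNLS would return.
d = 0.25                      # step inside each facet
beta = [(frontier(X[k] + d, X, Y) - frontier(X[k], X, Y)) / d
        for k in range(len(X))]
print(eps)    # -> [0.0, 0.0, 0.0, -1.5]: efficient units have eps = 0
print(beta)   # -> [2.0, 0.5, 0.0, 0.5]
```

The inefficient unit gets ε < 0 while the frontier units get ε = 0, and the β values coincide with the facet slopes, matching the interpretation of the regression coefficients as MPs.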
Now we describe how to prove that β^{+DEA}_{ik} is consistent with the β^{+CNLS}_{ik} estimated by the sign-constrained CNLS. Let ε_r be the inefficiency term of a specific firm r. We obtain the nonradial DEA inefficiency estimate ε^{DEA}_r by solving the following linear programming formulation; note that the DEA formulation (15) differs from the standard radial output-oriented variable-returns-to-scale (VRS) DEA. Next, we obtain the inefficiency estimate ε^{CNLS}_k of firm k by solving the following sign-constrained CNLS. Let index h be an alias of index k, α_k be the intercept coefficient, and β_ik be the slope coefficient of the ith input of the kth firm. Both models (15) and (16) measure inefficiency relative to the same DEA frontier; recall that Kuosmanen and Johnson proved ε^{DEA}_k = ε^{CNLS}_k. A firm is efficient if and only if its inefficiency estimate equals zero; otherwise, values smaller than zero represent measures of inefficiency. The result also shows that the estimates β_ik can be interpreted as MPs. Since model (16) generates multiple solutions, the objective function can be replaced by M Σ_k ε_k² + Σ_{i,k} β_ik to acquire a unique solution, where M is a sufficiently large number. This expansion keeps an identical piece-wise linear frontier and obtains the right-side MP ∂Y_k/∂X_ik; vice versa, replacing the objective function with M Σ_k ε_k² − Σ_{i,k} β_ik obtains the left-side MP. Since models (15) and (16) generate the same DEA frontier, based on Theorem 3.1 in Kuosmanen and Johnson (2010) we extend the proof with respect to MP by developing Proposition 4.

Proposition 4. For all real-valued data, the MP estimated by the sign-constrained convex nonparametric least squares model (16) with objective function M Σ_k ε_k² + Σ_{i,k} β_ik is equivalent to the MP estimated by DEA model (3); a similar result applies to the left-side MP.

Proposition 4 is interesting.
It explicitly illustrates that the MP generated by DEA model (3) is the same MP that appears directly as the coefficients of the independent factors in the regression-based CNLS model. Thus, it reveals that the MP can be generated by the DDF as an implicit formulation of model (15). This also gives rise to Proposition 5.

Proposition 5. The single-output MP estimation by additive DEA, sign-constrained CNLS, and the DDF with direction vector (g_{X_i*}, g_{Y_j*}) = (0, 1) shows a consistent result.

Appendix 2: Proof of theorems

Proposition 1. Given (g_{X_i*}, g_{Y_j*}) = (0, 1), if firm r is on the efficient frontier, then the objective function value of model (1) is equivalent to that of model (6), and the objective function value of model (2) is equivalent to that of model (7).

Proof. Since firm r is on the efficient frontier, model (6) generates η = 0, and the model is then exactly the same as model (1); thus, the λ_k are the same in models (1) and (6). In addition, given (g_{X_i*}, g_{Y_j*}) = (0, 1) in model (7), we have u_{j*} = 1, and Y_{j*r} is a constant, which allows us to remove the terms Y_{j*r} and u_{j*} Y_{j*r} from the objective function of model (7). The objective function of model (7) is then the same as that of model (2), and the optimal solutions v_i and u_j are the same. Hence, model (2) is equivalent to model (7).

Proposition 2. If the direction for MP estimation used in model (10) projects onto the portion of free disposability with respect to inputs, then the MP estimate will be equal to 0.

Proof. Proving the proposition for model (10) is equivalent to proving it for model (9), since model (10) is a normalized version of model (9). Let k_0, λ_k, and η be the dual variables of the respective constraints in model (9). The dual of model (9) with Σ_{j∈J*} g_{Y_j} = 1 is as follows, with constraints for all j ∈ J\J*, the condition Σ_k λ_k = −k_0, λ_k ≥ 0, and η and k_0 free. Note that if −k_0 = 1, then this dual is almost equivalent to model (8), except that the first constraint becomes Σ_k λ_k X_{i*k} ≤ −k_0 X_{i*r}. When model (9) estimates the MP projecting onto the portion of free disposability with respect to one specific input, the slack in the input constraint Σ_k λ_k X_{i*k} ≤ −k_0 X_{i*r} is positive; that is, its dual variable v_{i*} = 0 in model (9). Thus, the DMP obtained from the objective function of model (10) will be a zero vector.

Proposition 3. The MP estimated by model (10) with the objective function Max v_{i*} X^{Max}_{i*} is equivalent to the MP estimated given a negative direction.

Proof. Given that g_{Y_j} is negative, let Σ_{j∈J*} g_{Y_j} = −1 for normalization. The objective function of model (8) becomes Max η Σ_{j∈J*} g_{Y_j} = Max −η. To obtain the same optimal solution, we replace the objective function of model (8) by Min η. Thus, the first constraint of model (9) becomes Σ_i v_i X_ir − Σ_j u_j Y_jr + u_0 = 0, and the objective function of model (9) becomes Max v_{i*}. Finally, the objective function becomes Max v_{i*} X^{Max}_{i*} in model (10).

Proposition 4. For all real-valued data, the MP estimated by the sign-constrained convex nonparametric least squares model (16) with objective function M Σ_k ε_k² + Σ_{i,k} β_ik is equivalent to the MP estimated by DEA model (3); a similar result applies to the left-side MP.

Proof. For the single-output case, we calculate the MP using model (3):

β^{+DEA}_{ir} = Min v_i
s.t. Σ_i v_i X_ir − u Y_r + u_0 = 0
     Σ_i v_i X_ik − u Y_k + u_0 ≥ 0, ∀k
     u = 1; v_i, u ≥ 0; u_0 free

Then, for one specific firm r at a time, we would need to solve this model |K| times. However, for all firms k we only need to solve once (a one-shot solution) using the following formulation:

β^{+DEA}_{ik} = argmin_{u_0, v} Σ_{i,k} v_ik
s.t. Y_h = Σ_i v_ih X_ih + u_0h, ∀h
     Y_k ≤ Σ_i v_ih X_ik + u_0h, ∀k, ∀h
     v_ik ≥ 0; u_0k free

Let u_0k = α_k + ε_k, where α_k represents the intercept and ε_k represents the deviation due to inefficiency.
We know that ε_k = 0, since all firms k on the frontier are efficient. Therefore, we can harmlessly impose the sign constraint as an additional constraint. Clearly, the inefficient firms (for which ε_h < 0) do not influence the shape of the DEA frontier, and thus we add the inefficiency components into the constraints and write the formulation equivalently as follows. The model is the same as the CNLS frontier characterized by all efficient firms with ε_k = 0. We replace v_ik by β_ik and finally derive the sign-constrained CNLS formulation to estimate the MP as follows.

Proposition 5. The single-output MP estimation by additive DEA, sign-constrained CNLS, and the DDF with direction vector (g_{X_i*}, g_{Y_j*}) = (0, 1) shows a consistent result.
Evolutionary Recycling of Light Signaling Components in Fleshy Fruits: New Insights on the Role of Pigments to Monitor Ripening

Besides an essential source of energy, light provides environmental information to plants. Photosensory pathways are thought to have occurred early in plant evolution, probably at the time of the Archaeplastida ancestor, or perhaps even earlier. Manipulation of individual components of light perception and signaling networks in tomato (Solanum lycopersicum) affects the metabolism of ripening fruit at several levels. Most strikingly, recent experiments have shown that some of the molecular mechanisms originally devoted to sensing and responding to environmental light cues have been re-adapted during evolution to provide plants with useful information on fruit ripening progression. In particular, the presence of chlorophylls in green fruit can strongly influence the spectral composition of the light filtered through the fruit pericarp. The concomitant changes in light quality can be perceived and transduced by phytochromes (PHYs) and PHY-interacting factors, respectively, to regulate gene expression and in turn modulate the production of carotenoids, a family of metabolites that are relevant for the final pigmentation of ripe fruits. We raise the hypothesis that the evolutionary recycling of light-signaling components to finely adjust pigmentation to the actual ripening stage of the fruit may have represented a selective advantage for primeval fleshy-fruited plants even before the extinction of dinosaurs.

INTRODUCTION

Light has a dual role in plants as an essential source of energy for driving photosynthesis and, on the other hand, as an environmental cue that modulates many aspects of plant biology such as photomorphogenesis, germination, phototropism, and entrainment of circadian rhythms (Chen et al., 2004; Jiao et al., 2007).
The ability to perceive and respond to light changes is mediated by a set of sophisticated photosensory pathways capable of discriminating the quality (spectral composition), intensity (irradiance), duration (including day length), and direction of light (Moglich et al., 2010). In particular, plants perceive light through at least five types of sensory photoreceptors that are distinct from photosynthetic components and detect specific regions of the electromagnetic spectrum. Cryptochromes (CRYs), phototropins, and Zeitlupe family members function in the blue (390-500 nm) and ultraviolet-A (320-390 nm) wavelengths, while the photoreceptor UVR-8 operates in the ultraviolet-B (280-315 nm) region. Phytochromes (PHYs), which are probably the best studied photoreceptors, function in a dynamic photoequilibrium determined by the red (R, ca. 660 nm) to far-red (FR, ca. 730 nm) ratio in land plants and throughout the visible spectrum (blue, green, orange, red, and far-red) in different algae (Moglich et al., 2010; Rizzini et al., 2011; Rockwell et al., 2014). The photonic information gathered by these photoreceptors is then transduced into changes in gene expression that ultimately promote optimal growth, development, survival and reproduction (Jiao et al., 2007). Photosensory pathways are thought to have occurred early in plant evolution, probably at the time of the Archaeplastida ancestor (i.e., the last common ancestor of glaucophyte, red algae, green algae and land plants) or perhaps even earlier, before the occurrence of the endosymbiotic event that gave rise to photosynthetic eukaryotes over more than a billion years ago (Mathews, 2014; Fortunato et al., 2015). Through the ages, these mechanisms diverged to play particular roles in different branches of the plant lineage, ranging from presumably acclimative roles in algae (Rockwell et al., 2014) to resource competition functions in land plants (Jiao et al., 2007).
In particular, the ability of PHYs to detect changes in the R/FR ratio allows land plants to detect the presence of nearby vegetation that could potentially compete for light. Light filtered or reflected by neighboring leaves (i.e., shade) has a distinctive spectral composition characterized by a decreased R/FR ratio, due to the preferential absorption of R light by chlorophyll (Casal, 2013). Low R/FR ratios reduce PHY activity, allowing PHY-interacting transcription factors (PIFs) to bind to genomic regulatory elements that tune the expression of numerous genes (Casal, 2013; Leivar and Monte, 2014). Conversely, high R/FR ratios enhance PHY activity, causing the inactivation of PIF proteins mainly by proteasome-mediated degradation (Bae and Choi, 2008; Leivar and Monte, 2014). Carotenoid biosynthesis represents a rather well characterized example of this regulation. In Arabidopsis thaliana, shade decreases the production of carotenoids in photosynthetic tissues (Roig-Villanova et al., 2007; Bou-Torrent et al., 2015), in part by promoting the accumulation of PIF proteins that repress the expression of the gene encoding phytoene synthase (PSY), the main rate-determining enzyme of the carotenoid pathway (Roig-Villanova et al., 2007; Toledo-Ortiz et al., 2010; Bou-Torrent et al., 2015). De-repression of PSY under sunlight induces carotenoid biosynthesis, which in turn maximizes light harvesting and protects the photosynthetic machinery from harmful oxidative photodamage caused by intense light (Sundstrom, 2008). Light signals in general, and PHYs in particular, also modulate the genetic programs associated with fruit development and ripening. Here we review current and emerging knowledge in this area based on work carried out in tomato (Solanum lycopersicum), the main model system for fleshy fruits, that is, fruits with a juicy pulp.
Further, we will discuss potential selection pressures that might account for the evolutionary recycling of light-signaling components in fleshy fruits.

FLESHY FRUIT RIPENING: THE CASE OF TOMATO

Fleshy fruits are differentiated floral tissues that evolved 80-90 million years ago (Ma), i.e., relatively recently in the history of plants (Givnish et al., 2005; Eriksson, 2014), as an adaptive trait promoting the animal-assisted dissemination of viable seeds (Tiffney, 2004; Seymour et al., 2013; Duan et al., 2014). After seed maturation, fleshy fruits typically undergo a ripening process that involves irreversible changes in organoleptic characteristics such as color, texture, and flavor, all of which result in the production of a food appealing to frugivorous animals. In this manner, the ripening process orchestrates the mutualistic relationship between fleshy-fruited plants and seed-disperser animals (Tiffney, 2004; Seymour et al., 2013; Duan et al., 2014). Upon fertilization, the development of fleshy fruits such as tomato can be divided into three distinct phases: cell division, cell expansion, and ripening (Gillaspy et al., 1993; Seymour et al., 2013). These stages are characterized by hormonal, genetic, and metabolic shifts that have been reviewed in great detail elsewhere (Carrari and Fernie, 2006; Klee and Giovannoni, 2011; Seymour et al., 2013; Tohge et al., 2014). Before ripening occurs, tomato fruits have a green appearance due to the presence of chloroplasts that contain the whole photosynthetic machinery. The transition to ripening is characterized by a loss of chlorophylls, cell wall softening, accumulation of sugars, and drastic alterations in the profile of volatiles and pigments.
Most distinctly, chlorophyll degradation is accompanied by a conversion of chloroplasts into chromoplasts that progressively accumulate high levels of the health-promoting carotenoids β-carotene (pro-vitamin A) and lycopene (Tomato Genome Consortium, 2012; Fantini et al., 2013; Seymour et al., 2013). These carotenoid pigments give ripe tomatoes their characteristic orange and red colors. A large number of other fruits (including bananas, oranges, and peppers) also lose chlorophylls and accumulate carotenoids during ripening, resulting in a characteristic pigmentation change (from green to yellow, orange or red) that acts as a visual signal informing animals when the fruit is ripe and healthy (Klee and Giovannoni, 2011).

THE EFFECT OF LIGHT SIGNALING COMPONENTS ON FRUIT RIPENING

Multiple lines of evidence have exposed the relevance of fruit-localized photosensory pathways as important players in the regulation of fruit ripening, and the potential of their manipulation to improve the nutritional quality of tomatoes (Azari et al., 2010). Among the many light-signaling mutants displaying altered fruit phenotypes, the tomato high pigment (hp) mutants hp1 and hp2 are two of the best characterized. These mutants owe their name to a deep fruit pigmentation derived from an increase in the number and size of plastids, which in turn results in elevated levels of carotenoids such as lycopene (Yen et al., 1997; Mustilli et al., 1999; Levin et al., 2003). Detailed characterization of the hp1 and hp2 mutants, which also show increased levels of extraplastidial metabolites such as flavonoids, revealed that the mutated genes encode tomato homologs of the previously described light signal transduction proteins DAMAGED DNA BINDING PROTEIN 1 (DDB1) and DEETIOLATED1 (DET1), respectively (Mustilli et al., 1999; Schroeder et al., 2002; Levin et al., 2003; Liu et al., 2004) (Figure 1).
Other components that participate in the same light-signaling pathway as HP1 and HP2 have also been shown to impact tomato fruit metabolism. For instance, silencing the tomato E3 ubiquitin-ligase CUL4, which directly interacts with HP1, also produces highly pigmented fruits (Wang et al., 2008). Another example is the E3 ubiquitin-ligase CONSTITUTIVELY PHOTOMORPHOGENIC 1 (COP1), which specifically promotes the degradation of the light-signaling effector ELONGATED HYPOCOTYL 5 (HY5) (Schwechheimer and Deng, 2000) (Figure 1). Transgenic plants with downregulated transcripts of COP1 and HY5 produce tomato fruits with increased and reduced levels of carotenoids, respectively (Liu et al., 2004). Work with photoreceptors (Figure 1) has also shed light on the subject. Tomato plants overexpressing the blue light photoreceptor cryptochrome 2 (CRY2) produce fruits with increased levels of flavonoids and carotenoids (Giliberto et al., 2005). PHYs have been found to control different aspects of tomato fruit ripening as well. Activation of fruit-localized PHYs with R light treatments promotes carotenoid biosynthesis, while subsequent PHY inactivation by irradiation with FR light reverses it (Alba et al., 2000; Schofield and Paliyath, 2005). Furthermore, preventing light exposure from the very early stages of fruit set and development results in white fruits completely devoid of pigments (Cheung et al., 1993), a phenotype that resembles that of phyA phyB1 phyB2 triple mutant plants (Weller et al., 2000). In addition to regulating carotenoid levels in tomato fruits, PHYs seem to regulate the timing of phase transitions during ripening (Gupta et al., 2014).

A MECHANISM TO MONITOR RIPENING BASED ON SELF-SHADING AND LIGHT SIGNALING

Although light signaling components have long been known to modulate fruit ripening, another important piece of the puzzle was revealed recently. In tomato, fruit pericarp cells are morphologically similar to leaf palisade cells (Gillaspy et al., 1993).
Thus, fruits can be viewed as modified leaves that, besides enclosing the seeds, have undergone a change in organ geometry, namely, a shift from a nearly planar conformation to an expanded three-dimensional anatomy. This anatomy imposes spatial constraints forcing light to pass through successive cell layers, so that the quality of the light that reaches inner sections of the fruit is influenced by the cells of outer pericarp sections (Figure 2). Another key difference between tomato leaves and fruits is the cuticle, which is far more pronounced in the fruit. While a potential role of the cuticle in altering the spectral properties of the light that reaches the pericarp cells remains to be investigated, it is now well established that the occurrence of chlorophyll in fruit chloroplasts significantly reduces the R/FR ratio of the light filtered through the fruit flesh (Alba et al., 2000; Llorente et al., 2015). A reduction in the R/FR ratio (also referred to as shade) normally informs plants about the proximity of surrounding vegetation (Casal, 2013). In tomato fruit, however, changes in the R/FR ratio can inform of the ripening status. As a consequence of self-shading, it is proposed that a relatively high proportion of PHYs remain inactive in green fruit.

FIGURE 1 | A simplified model of light signaling components involved in the regulation of tomato fruit pigmentation and ripening. Fruit-localized phytochrome and cryptochrome photoreceptors regulate the activity of the downstream E3-ubiquitin ligase COP1 and CUL4-DDB1-DET1 complexes, which in turn mediate the degradation of the transcriptional activator HY5. In addition, active phytochromes reduce the activity of transcriptional repressors such as PIFs. The balance between activators and repressors finally modulates the expression of carotenoid and ripening-associated genes. R, red light; FR, far-red light; Blue, blue light; UV-A, ultraviolet-A light.
This condition stabilizes the tomato PIF1a transcription factor, which binds to a PBE-box located in the promoter of PSY1, the gene encoding the PSY isoform that controls the metabolic flux into the carotenoid pathway during fruit ripening. PIF1a binding directly represses PSY1 expression (Figure 2). Chlorophyll breakdown at the onset of ripening reduces the self-shading effect, consequently promoting PHY activation, degradation of PIF1a, derepression of PSY1, and eventually carotenoid biosynthesis (Figure 2). In this manner, the genetically controlled expression of PSY1 (and hence the production of carotenoid pigments) is fine-tuned to the actual progression of ripening (Llorente et al., 2015). Translation of molecular insights from tomato to other fleshy-fruited plants has indicated that many regulatory networks are conserved across a wide range of species (Seymour et al., 2013).

FIGURE 2 | Self-shading model for the light-mediated modulation of carotenoid biosynthesis in tomato fruits. Chlorophylls in green fruits preferentially absorb red (R, ca. 660 nm) wavelengths of the light spectrum, generating a self-shading effect characterized by low R to far-red (FR, ca. 730 nm) ratios that maintain PHYs predominantly in the inactive form and relatively high levels of PIF1a repressing PSY1. Once seeds mature, the developmental program induces the expression of genes encoding master activators of the ripening process. Some of them, like RIN and FUL1/TDR4, also induce PSY1 gene expression directly. Chlorophyll breakdown reduces the self-shading effect so that the R/FR ratio within the cells gradually increases, consequently displacing PHYs to their active form, reducing PIF1a levels and derepressing PSY1 expression. By sensing the spectral composition of the light filtered through the fruit pericarp, this mechanism diagnoses actual ripening progression to finely adjust fruit carotenoid biosynthesis.
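The regulatory logic of the self-shading model can be summarized as a toy calculation. This is a minimal sketch with entirely invented numbers and thresholds, not a quantitative model: it only encodes the qualitative chain that chlorophyll depresses the internal R/FR ratio, and that as chlorophyll declines during ripening the ratio rises, PHYs switch to the active form, PIF1a is degraded, and PSY1 is derepressed.

```python
# Toy model of the self-shading circuit (qualitative only; all
# numeric values are illustrative assumptions, not measurements).

def r_fr_ratio(chlorophyll):
    """R/FR ratio of light filtered through the pericarp.
    chlorophyll: 1.0 for green fruit down to 0.0 when fully degraded."""
    # Assume incident sunlight with R/FR ~ 1.2 and that chlorophyll
    # strongly absorbs red while barely affecting far-red.
    red = 1.2 * (1.0 - 0.9 * chlorophyll)
    far_red = 1.0
    return red / far_red

def psy1_expression(chlorophyll, threshold=0.6):
    """High R/FR -> active PHY -> PIF1a degraded -> PSY1 derepressed."""
    phy_active = r_fr_ratio(chlorophyll) > threshold
    # Active PHY triggers proteasome-mediated PIF1a degradation;
    # without PIF1a bound to its PBE-box, PSY1 is derepressed.
    return "derepressed" if phy_active else "repressed"

print(psy1_expression(1.0))  # green fruit: self-shading, PSY1 repressed
print(psy1_expression(0.1))  # ripening fruit: PSY1 derepressed
```

The point of the sketch is only that a single input (chlorophyll content) flips the output state through the light-quality intermediate, which is exactly how the mechanism couples pigment synthesis to actual ripening progression.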
Thus, given the ubiquitous nature of PHYs in land plants and the widespread occurrence of ripening-associated fruit pigmentation changes, which typically involve the substitution of an initially chlorophyll-based green color with distinctive non-green (i.e., non-R-absorbing) eye-catching colors, it is possible that similar self-shading regulatory mechanisms operate in other plant species to inform on the actual stage of ripening (based on the pigment profile of the fruit at every moment) and thus finely coordinate fruit color change. However, the composition of the cuticle or even the anatomy of the most external layer of the pericarp (i.e., the exocarp) might also impact the quality and quantity of light that penetrates the fruit flesh. The self-shading mechanism is expected to be irrelevant in fleshy fruits with a thick skin or exocarp that prevents light from passing through and reaching more internal fruit layers.

FRUIT COLORS AS RIPENING SIGNALS IN AN EVOLUTIONARY CONTEXT

Fleshy fruits are considered to have first appeared in the Late Cretaceous (circa 90 Ma) (Givnish et al., 2005; Eriksson, 2014), at a time when the Earth's vegetation was dense and exuberant and most ecological niches were taken over by angiosperms (Lidgard and Crane, 1988; Berendse and Scheffer, 2009). The plentiful surplus of nutritious food gave rise to a huge explosion in the Cretaceous fauna, bringing about the coexistence of numerous herbivorous and omnivorous reptiles (dinosaurs, pterosaurs, lizards), birds and mammals (Lloyd et al., 2008; Prentice et al., 2011; Vullo et al., 2012; Wilson et al., 2012; Jones et al., 2013; Jarvis et al., 2014). With such an abundance of plant-eating animals, being able to display a change in fruit color when ripe probably represented a valuable trait among early fleshy-fruited plants to call the attention of these various potential seed dispersers.
Although deep-time co-evolutionary scenarios may be difficult to support, this idea gains plausibility if we consider that the same strategy had been successfully implemented beforehand by gymnosperms, which had already evolved fleshy fruit-like structures by the Early Cretaceous, at least some 20-30 million years before the first fleshy fruits (Yang and Wang, 2013). Several gymnosperms (e.g., Ginkgo biloba, Taxus baccata, and Ephedra distachya) produce fleshy, colorful tissues around their seeds and, similar to what occurs in angiosperms, these fruit-like structures undergo a ripening process that also serves as a visual advertisement for animals to eat them and disperse their seeds. Recent evidence supports the hypothesis that the main molecular networks underlying the formation of the fleshy fruit were originally established in gymnosperms (Lovisetto et al., 2012, 2015), thus suggesting that the ripening phenomenon was first selected as an ecological adaptation in gymnosperms and that angiosperms merely exploited it afterwards. If correct, this would imply that Cretaceous plant-eating animals were already used to feeding on color-changing fleshy fruit-like tissues by the time that angiosperm fleshy-fruited plants evolved, something that may have facilitated the establishment of the latter. Another relevant fact is that the dominant land animals during the Cretaceous period, the dinosaurs, as well as pterosaurs, lizards, and birds, had highly differentiated color vision, much superior to that of most mammals (Rowe, 2000; Chang et al., 2002; Bowmaker, 2008). Differentiated color vision, or tetrachromacy, is a basal characteristic of land vertebrates derived from the presence of four spectrally distinct retinal cone cells that allow discriminating hues ranging from ultraviolet to red (Bowmaker, 2008; Koschowitz et al., 2014).
Turtles, alligators, lizards and birds are all known to have tetrachromatic color vision, a shared trait inherited from their common reptilian ancestry (Rowe, 2000; Bowmaker, 2008). We have recently come to know that some dinosaurs even sported plumage color patterns and flamboyant cranial crests that may have served for visual display purposes (Li et al., 2010, 2012; Zhang et al., 2010; Bell et al., 2014; Foth et al., 2014; Koschowitz et al., 2014). Altogether, these insights suggest that color cues were likely an important means of signaling among dinosaurs. Although purely speculative at the moment, it is reasonable to assume that there could also have been dinosaurs that, analogously to several birds and reptiles nowadays (Svensson and Wong, 2011), consumed fleshy fruits within their diet as a source of carotenoid pigments used for ornamental coloration. Even though the relevance of now-extinct Cretaceous megafauna as biological vectors involved in the seed dispersal of primeval fleshy-fruited plants remains speculative and controversial (Tiffney, 2004; Butler et al., 2009; Seymour et al., 2013), it is clear that they had fleshy fruit available to eat during the last 25-35 million years of their existence, until the Cretaceous-Paleogene mass extinction event (65 Ma). Fruit color change meets the criteria of a classical signal, which can be defined as a cue that increases the fitness of the sender (i.e., fleshy-fruited plants) by altering the behavior of the receivers (i.e., seed-disperser animals) (Maynard Smith and Harper, 1995). Importantly, besides visibility conditions and the visual aptitude of the receiver, the detectability of a visual signal is determined by its contrast against the background, that is, the conspicuousness of the signal (Schmidt et al., 2004).
Ripe fruits displaying a distinct coloration against the foliage are more conspicuous to animals than green fruits, and there is no evidence to suggest that it was any different for Cretaceous animals. In fact, the invention of fruit fleshiness took place along with expanding tropical forests, suggesting it may have evolved as an advantageous trait related to changes in vegetation from open to more closed environments (Seymour et al., 2013; Eriksson, 2014). In this context, light signaling pathways already established in land plants may have had the chance to evolutionarily explore novel phenotypic space in fleshy fruits. Subsequent adaptations under selection in the fruit may then have integrated these pathways as modulatory components of the pigmentation process during ripening. For instance, the self-shading regulation of the tomato fruit carotenoid pathway (Llorente et al., 2015) (Figure 2) might have evolved by co-option of components from the preexisting shade-avoidance responses (Mathews, 2006; Casal, 2013). This evolutionary recycling of light-signaling components in fleshy fruits might therefore be a legacy from the time when dinosaurs walked the earth.

AUTHOR CONTRIBUTIONS

BL, LA, and MR-C searched and discussed the literature and wrote the article.
Asthma and the socio-economic reality in Brazil

Background: Asthma is a prevalent disease that is considered a health problem worldwide. The aim of this study was to analyze the clinical and socioeconomic characteristics of a cohort of asthmatics receiving specialized outpatient treatment in a tertiary/teaching public hospital in Brazil.

Methods: Persistent asthmatics older than 5 years old were consecutively included. They received clinical treatment at 3- to 4-month intervals and were interviewed 2 times at 6-month intervals over a 12-month observation period. The data were collected directly from the patients or their parents by 2 researchers who did not participate in their clinical care. The primary variables were age, gender, education level, monthly family income, place of residence, number of lost days of school or work, BMI, the severity and control level of asthma, the number of scheduled and non-scheduled visits and hospitalization days, and the best peak-flow measurement.

Results: Of the 117 participants, 108 completed the study. Of the participants, 73.8% were women, and 25.0% lived outside the county. Of those who lived within the county, 60.1% lived in areas far from the health care unit. The majority (83.3%) had associated rhinitis, and more than 50.0% were overweight or obese; among the latter, the prevalence of severe asthma was greater (p = 0.001). The median monthly income was US$ 536.58 and was greater among the patients with controlled asthma (p = 0.005 and p = 0.01 at the start and the end of the study, respectively). In the initial evaluation, 16 participants had severe asthma, and in the final evaluation, 8 out of 21 patients with uncontrolled asthma had improved. Three-quarters of the students and half of the workers had missed days of school or work, respectively. The asthmatic population in this study had medium to low socioeconomic status in Brazil, and socioeconomic status was associated with overweight/obesity and with poor control of asthma.
Conclusion: Asthma has a great impact on absenteeism in Brazil. Lower monthly family income and body weight above the ideal level were associated with greater severity and worse control of asthma.

Background

Asthma is one of the most prevalent chronic diseases in the world and is considered a public health problem worldwide [1,2]. The prevalence of asthma in developed countries increased 50% per decade in the last 40 years of the 20th century, and approximately 250,000 deaths occur worldwide because of asthma each year [1]. Asthma is often associated with chronic rhinitis, which can be allergic or not. Studies indicate that 75% to 80% of individuals with asthma have allergic rhinitis, and 40% to 50% of individuals with allergic rhinitis or eosinophilic non-allergic rhinitis have bronchial hyperresponsiveness (BHR) [2-4]. Conservative estimates suggest that 500 million people have allergic rhinitis and 300 million people have asthma around the world [2-4]. The Pan American Health Organization (PAHO) estimates that there are approximately 15 million asthmatics in Brazil [5]. In Brazil, data from the International Study of Asthma and Allergies in Childhood (ISAAC), conducted in various capital cities, showed that the average prevalence rates of allergic rhinitis were 12.6% and 14.6% in children and adolescents, respectively, and the respective prevalence rates of active asthma were 24.3% and 19% [6]. ISAAC data obtained in cities in the state of Rio de Janeiro showed that the prevalence of active asthma varied from 13-17% in adolescents [7,8]. There are no recent data on the prevalence rates of rhinitis and asthma in the adult Brazilian population. It is estimated that rhinitis affects 20% or more of the general Brazilian population and asthma may affect up to 10%, which means that 19 to 20 million people may have asthma [9].
Although the number of hospitalizations due to asthma in the Brazilian public health system decreased between 2000 and 2011, asthma remains one of the main causes of hospitalization and imposes a large social and economic burden on the country [9]. Comparing population-based data on adults in Southern Brazil from 2000 to 2010, Fiori and collaborators found that the prevalence rates of asthma were 4.2% and 5.2%, respectively [10]. The Brazilian health system implements the principles defined in the Brazilian Constitution of 1988, which characterized health as a "citizenship right and a duty of the State". According to the guidelines defined by Law no. 8080/1990 [11], the health system is characterized by universal access and a regional hierarchy of services. Although there have been several programs to organize asthma care at different levels of complexity in the Unified Health System [Sistema Único de Saúde - SUS], even large urban centers have not been able to guarantee access for the entire population to the level of treatment most appropriate for the severity of their disease. This obstacle prevents optimal control of asthma and thus prevents reducing its associated morbidity and mortality. Broad knowledge of the various aspects of chronic diseases, including their clinical and socio-demographic characteristics and the care-seeking behavior of the affected individuals who require health services at different levels of complexity, can optimize the allocation of resources for primary, secondary and tertiary prevention in the Brazilian health system. The objective of this study was to describe the characteristics of a cohort of patients with persistent asthma who sought specialized treatment in a secondary/tertiary ambulatory care center linked to a university hospital in a large city in Brazil. We analyzed socio-demographic and clinical variables as potential indicators of patient care-seeking behavior at the local health care facilities.
Methods

Patients older than 5 years of age with a diagnosis of persistent asthma based on previously established consensus criteria [2,9] and who had been receiving treatment for at least 3 months in a specialized ambulatory care center (Allergy-Immunology and Pneumology-Phthisiology Services) in a university hospital in Rio de Janeiro city were consecutively enrolled in the study from April to September 2011. In both services, ambulatory care is predominantly performed by physicians in a residency program. In 2011, 326 patients with asthma were monitored in these services. All of the patients underwent routine visits at 3- to 4-month intervals and were interviewed twice during this period after a 6-month interval. All interviews were conducted on the same day as a scheduled clinical visit. The last interview was performed in March 2012. The data were collected using a tool developed by the authors, which was pre-tested with 30 patients before the beginning of the study. Two interviewers (the first and second authors of the present study), who did not participate in the clinical care of the patients, received training and performed the data collection on both occasions. In interviews with individuals younger than 18 years of age, all of the questions were answered by a responsible adult, and individuals older than 14 years of age offered supplementary answers when necessary.
The following variables were collected. Socio-demographic variables: age, gender, education level, monthly family income, the neighborhood where the patients were residing on the day of the interview, and the number of days the patient was absent from school or work in the 3 months preceding the interview (to avoid recall bias). Clinical variables: BMI on the day of the interview, the severity and control of the patient's asthma in the month preceding the interview, the number of scheduled appointments the patient attended and the number of days of hospitalization in the 6 months preceding the interview, and the number of unscheduled/emergency visits in the 3 months preceding the interview (to avoid recall bias). Functional variable: the highest of 3 peak expiratory flow (PEF) readings on the day of the interview. The severity (intermittent, mild persistent, moderate persistent and severe persistent) and the control of the patient's asthma (controlled, partially controlled and uncontrolled) were assessed by the attending physician at the time of the interview using clinical data from the past 4 weeks, in accordance with international guidelines [2,9]. The treatment regimen was only changed during the study by the attending physician if a change was indicated according to the same guidelines [2,9]; the interviewers who collected the data after the medical care was provided had no influence. Overweight and obesity were defined as BMI ≥ 25 and BMI ≥ 30, respectively, for individuals older than 18 years, and as weight-for-age > Z score + 1 SD and weight-for-age > Z score + 2 SD, respectively, for those less than 18 years of age [12]. Annual estimates of missed school and work days, as well as of unscheduled/emergency visits, were made by doubling the results for these variables, because each of the two collections covered only the preceding 3 months (together spanning 6 months of observation).
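The adult weight-status cutoffs and the annualization rule described above can be made concrete in a short sketch. This is illustrative code written for this summary, not the authors' analysis script; the function names are invented, and the annualization assumes (as the design implies) that the two 3-month collection windows are summed and then doubled to reach a 12-month estimate.

```python
# Illustrative helpers (not the authors' code).
# Adult cutoffs per the definitions above: overweight BMI >= 25,
# obese BMI >= 30 (children use weight-for-age Z scores instead).

def adult_weight_status(bmi):
    """Weight status for individuals older than 18 years."""
    if bmi >= 30:
        return "obese"
    if bmi >= 25:
        return "overweight"
    return "normal"

def annualize(first_3_months, second_3_months):
    """Two 3-month counts (6 months of observation) doubled to
    estimate a 12-month total, as in the study's annual estimates."""
    return 2 * (first_3_months + second_3_months)

print(adult_weight_status(27.4))  # -> overweight
print(annualize(4, 3))            # -> 14 (e.g., missed days/year)
```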
The patients who dropped out of treatment (i.e., had no return visits for > 4 months) and the patients with chronic cardiopulmonary disease that could cause respiratory symptoms similar to asthma and influence the use of treatment resources were excluded from the study. Before the data collection, the patients (or their guardians if the patient was under 18 years old) signed a free informed consent form after being informed about the study. The study was approved by the local Research Ethics Committee (REC). The project was registered with the Brazilian National Research Ethics Committee [Comissão Nacional de Ética em Pesquisa - CONEP] under number FR413262 and approved by the local REC on 05/03/2011. All of the data were entered in spreadsheets (MS Office/Excel 2010, Microsoft Co., CA, USA) by the first author of the present study. GraphPad Prism version 6.0 (GraphPad Software Inc., La Jolla, CA, USA) was used for statistical analysis. The chi-square test (with Fisher's correction when necessary) was used to compare categorical variables, and the Mann-Whitney test was used to compare continuous unpaired variables. The paired t-test and Wilcoxon's test were used to compare continuous paired variables. A significance level of 5% was used.

Results and discussion

During the study period, 117 patients were enrolled. Nine patients did not complete the study (7.7% loss): 1 patient died of a cause unrelated to asthma, 1 patient asked to be discharged and 7 patients dropped out of treatment before the second interview. The final study population (108 patients, representing 33.12% of the asthmatic patients in treatment in the two Services) included 80 patients who were followed at the Allergy/Immunology Service and 28 patients from the Pneumology Service. Of the participants, 79 were women (73.8%) and 29 were men (27.1%). This female predominance was absent among patients under 20 years old.
Most of the patients (75.0%) resided in Rio de Janeiro city; however, 60.1% of these residents lived in remote areas (defined as more than 10 km distant from the health care unit). The median monthly family income was R$ 1,100.00 (equivalent to US$ 536.58; IQR25-75 = 348.78-975.61), which falls into economic classes C1 and C2 [13], considered the lowest stratum of the medium economic class in Brazil. Primary education predominated among our patients (n = 76; 70.4%). Distributions and medians of gender, place of residence, occupational status, educational level, age, duration of asthma and of rhinitis, peak-flow measurements and monthly income are shown in Table 1. Ninety patients (83.3%) had chronic rhinitis associated with asthma. The proportion of asthma cases associated with rhinitis was higher among the patients from the Allergy-Immunology Service (p = 0.0001), whereas the patients from the Pneumology Service were older, had asthma for a longer duration, experienced more severe asthma and had a greater prevalence of overweight/obesity (p = 0.02, p = 0.04, p = 0.01 and p = 0.01, respectively). Sixty-seven patients (62.0% of the total) had another medical comorbidity: 41 of them (61.2% of patients with comorbidities) had systemic arterial hypertension (SAH), none of whom were using beta-blockers; 11 (16.4%) had diabetes mellitus (DM); and 11 (16.4%) had degenerative joint disease. Other less common comorbidities included thyroid disease, depression, dyslipidemia and gastro-esophageal reflux. Table 2 shows the distribution of gender, monthly family income, weight status and presence of self-reported comorbidities by age range. Of the patients whose body weight was above the healthy weight range (n = 64; 59.2% of the total), 35 (32.4%) were overweight and 29 (26.8%) were obese. None of the children and teenagers were obese and only 2 (9.5%) were overweight, while obesity and overweight predominated in the other age ranges (Table 2).
The proportion of severe asthmatics was greater than the proportion of mild/moderate asthmatics among the overweight/obese patients compared with the normal weight patients at the start of the study (p = 0.001). This difference persisted when we analyzed only women, who represented the majority of the population (p = 0.01) (Figure 1). At the beginning of the study, 53 patients (49.08%) were classified as having mild asthma, 39 (36.11%) had moderate asthma and 16 (14.81%) had severe asthma. By the final evaluation, the classification of 42 patients (38.90%) had changed to intermittent asthma, and 8 (38.09%) of the 21 patients who initially had uncontrolled asthma had improved their control after 6 months of treatment, demonstrating the intra-individual variability of the disease. The average PEF measurements increased significantly during the study (p < 0.0001), and the changes in severity were statistically significant (p < 0.0001) (Figure 2). Furthermore, 12 patients (11.11%) were initially not taking medication to control their asthma, whereas only 4 patients (3.7%) were not taking it at the time of the second data collection (2 with intermittent/controlled asthma and 2 with moderate persistent asthma, 1 partially controlled and 1 uncontrolled). The patients attended 444 appointments in the 6-month period before the first interview and 330 appointments in the 6-month period before the second interview (mean 4.11 and 3.05 visits/patient/semester, respectively). There were 108 visits without an appointment (ambulatory care or urgent care/emergency) in the 3 months prior to the first interview and 49 in the 3 months prior to the second interview (mean 1.00 and 0.45 visits/patient/trimester, respectively). The median expenditure on public or private transport to attend these visits was US$ 9.76 per patient (IQR25-75 = 5.37-14.63). Only one and two patients went to visits on foot at the first and second interviews/clinical visits, respectively.
In the 6 months preceding the first data collection, 3 patients were hospitalized due to asthma (a total of 13 days of hospitalization; mean 4.33 days/patient), and 2 patients were hospitalized in the 6 months preceding the second data collection (a total of 7 days; mean 3.5 days/patient). Monthly family income was lower among the patients with uncontrolled asthma both at the beginning and at the end of the study (p = 0.005 and p = 0.01, respectively; Figure 3). At the beginning of the study, the patients with uncontrolled asthma had a median monthly income of US$ 372.09. Patients with severe and moderate asthma had reduced mean family incomes over the course of the study, whereas mild asthmatics did not. Estimates of the annual cost of asthma treatment per patient showed that severe asthmatics spend 12% of annual family income on their asthma, whereas moderate asthmatics spend 4.8%, mild asthmatics 3.6% and intermittent asthmatics 3.7%. None of the 53 patients with controlled asthma were smokers, but 3 patients with partially controlled or uncontrolled disease were still smoking. Three other patients who lived with smokers had controlled, partially controlled and uncontrolled asthma, respectively. At the end of data collection, the difference in asthma control between the patients in the Allergy/Immunology Service and the patients in the Pneumology Service did not reach statistical significance at the 5% level (p = 0.08), although a larger sample might have yielded a different result. Among the patients treated at the Pneumology Service (who had more severe asthma and a high prevalence of overweight/obesity), the proportion with partially controlled or uncontrolled asthma was higher among the overweight/obese patients than among those of normal weight (p = 0.001; Figure 4).
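The cost-burden figures above are simple ratios of annual treatment cost to annual family income. A minimal sketch of that arithmetic; the income median comes from the text, while the treatment cost value is a hypothetical figure chosen for illustration:

```python
def cost_share(annual_treatment_cost, monthly_family_income):
    """Percentage of annual family income spent on asthma treatment."""
    annual_income = 12 * monthly_family_income
    return 100.0 * annual_treatment_cost / annual_income

# Hypothetical example: a family at the cohort's median income of
# US$ 536.58/month spending US$ 772.68/year on treatment.
share = cost_share(772.68, 536.58)  # ~12% of annual income
```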
There were no differences in gender distribution (p = 0.62) or monthly family income (p = 0.39) between the patients from the two Services. Eighteen (75.0%) of the 24 students missed school because of asthma, and 13 patients missed work because of their illness (38.3% of those employed outside the home). Including the 4 employees who were absent from work to care for their children with asthma, a total of 17 employees (51.5% of the individuals working outside the home) missed work because of asthma in the 3-month periods prior to each interview. The participants missed an estimated total of 163 school days/year and 164 working days/year, or an average of 6.79 days/student/year and 9.64 days/employee/year. At the first assessment, 4 patients were not working and were collecting sickness benefits, including 1 with severe uncontrolled asthma and 3 with moderate disease. At the second evaluation, 2 of these patients had retired because of disability related to their asthma, one remained out of work, another had lost her job, and another patient (with partially controlled moderate persistent asthma) had also stopped working and was collecting sickness benefits. Few published studies with different main objectives have described the profile of asthmatic patients receiving specialized monitoring in medium- to high-complexity hospitals. Published guidelines estimate that 60.0% of asthma cases are intermittent or mild persistent, 25.0% to 30.0% are moderate, and only 5.0% to 10.0% are severe [2,9]. This minority of severe asthmatics uses the most costly resources (e.g., medication, non-scheduled ambulatory visits, emergency visits and hospitalizations) and accounts for most of the mortality caused by the disease. These patients usually have the worst asthma control and are most in need of monitoring at secondary and tertiary health care facilities [2,9,14].
At the beginning of the study, our results showed that nearly 15.0% of the patients had severe asthma, which is higher than the estimated proportion for the general population of asthmatics but still small for a specialized ambulatory care center in a moderate- to high-complexity unit. We observed a large proportion of patients with mild asthma in this population. In part, this result may be related to the large number of patients followed in Allergy/Immunology Services, where allergic asthma is predominant and the proportion of patients with more severe disease is lower than the proportion of adult patients with non-allergic asthma. Part of the demand from patients with less severe disease could be absorbed by units in the basic health care network if all or most of the local network had health care teams properly prepared to manage the disease. Asthma care programs already exist in other regions of the country, which may be one of the reasons why fewer patients there are admitted with moderate/severe asthma requiring specialized assistance with more diagnostic and therapeutic resources. In a study with 90 participants that compared the direct costs of treating patients with controlled and uncontrolled asthma (45 per group) in a Brazilian tertiary health care unit, the mean age was similar to that of our population, but the average monthly family income was lower [15]. Thirty-one patients had mild asthma (34.4%), 41 had moderate asthma (45.5%), and 18 (20.0%) had severe asthma: compared with our population, that unit had a slightly higher proportion of patients with severe asthma and a slightly lower proportion with mild asthma. Even so, we still consider the percentage of patients with mild asthma treated in a university hospital in the largest Brazilian city to be high, suggesting that the city experiences difficulties similar to ours in distributing the demand from asthmatic patients across the hierarchy of public health care units.
The data from the second evaluation of our patients show that a large proportion of the patients were able to control and/or reduce the severity of their asthma after 6 months. Some of these patients could be referred to less complex units closer to their homes, providing greater comfort to the patients and opening up spaces to admit and treat patients with more severe disease. This issue reflects the difficulties in referring patients between health care units of varying complexity within the health care system. Approximately 10.0% of the patients did not achieve total or partial control of the disease, even while undergoing treatment in a university hospital where, theoretically, more comprehensive approaches are applied in accordance with the latest guidelines and access to many diagnostic resources is unrestricted. Limited access to medications may have contributed to this problem, because most of the patients depended on obtaining free samples or buying medications with their own resources. A retrospective study performed in a Brazilian university reported results similar to ours in terms of the mean age, the proportion of women and the proportion of patients with severe asthma among the patients in treatment [16]. Based on the evaluation of the last prescription in the records, use of pharmacological treatment in accordance with the guideline recommendations for asthma management at that time was low. Among the patients with persistent asthma, a large proportion (71.0%) had no prescription for inhaled corticosteroids. The current guidelines for asthma management clearly state that continuous use of inhaled corticosteroids, tailored to the severity level and control of the disease, is the most effective strategy for reducing morbidity and mortality from persistent asthma [2,9].
The cited study, unlike ours, included patients treated in other services that manage asthma but are not specialized in Allergy or Pulmonology, which may have contributed to this undesirable result. That study suggests that, even in a university hospital in a large Brazilian city, teams that do not specialize in respiratory diseases (pediatricians, internists and general practitioners) are not properly applying the recommended treatment guidelines for the disease. Exacerbating the problem of inappropriate prescription practices, there are high rates of non-adherence and inadequate adherence to the treatment regimen (potentially greater than 70%), and the dropout rate from control medications can reach 92.0% after 1 year [17][18][19]. Our population has a low to medium socio-economic profile, with access to urban transport and medication. Moreover, our institution, a university health unit, is well regarded and has a good reputation in the city. These factors may have helped us achieve the low level of losses observed during the study (only 7 of 117 allocated patients dropped out of treatment during the 12 months of observation). In another study, which retrospectively analyzed 434 asthmatic children and adolescents included in an assistance program in Brazilian primary health care units between 1988 and 1993 [20], more than 50.0% dropped out during the monitoring phase, predominantly in the first 6 months of treatment. Among those who continued with the monitoring program, the asthma assistance program in the primary health care unit achieved success in terms of clinical improvement and greater adherence to drug treatments [21]. In our population of patients with persistent asthma, 11 (10.1%) were not taking continuous inhaled corticosteroids at the beginning of the study (2 uncontrolled, 5 partially controlled and 4 considered controlled).
Their mean monthly income was not significantly different from that of the other patients, who were using inhaled corticosteroids (US$ 595.61/SD = 355.70 versus US$ 764.39/SD = 746.34; p = 0.21). The proportion of non-users of inhaled corticosteroids dropped to 3.0% of the 66 patients with persistent asthma by the end of the study (one patient with uncontrolled and one with partially controlled asthma). In addition, the number of unscheduled visits to the ambulatory care unit or emergency care decreased throughout the study. The study was observational; the researchers made no active attempt to change the patients' therapy because the patients had already been receiving treatment for at least 3 months when they joined the study. These data suggest that the local teams are capable of providing competent care in alignment with the current recommendations for drug treatment of asthma [2,9]. We cannot rule out the possibility that patients' adherence improved once they were informed, after the first data collection, that they were participating in a longitudinal study that would include a second data collection (Hawthorne effect), and/or that the teams paid more attention to the participants' medications between the two data collection points because the support staff could identify the patients participating in the study. Furthermore, asthma severity and asthma control naturally vary over time. The first data were collected in the fall and winter, whereas the second collection occurred during the spring and summer, when the weather contributes to better clinical outcomes of asthma in our geographic region. Our results demonstrate that asthma has a strong impact on school and work attendance: 75.0% of the students missed days of school and 34% of employees missed days of work directly because of their asthma. This proportion rises to more than 50% if we also consider adults who missed work to care for their children with asthma.
Our prevalence of work absenteeism is clearly higher than published results from a cohort of industry employees aged 16 to 65 years in Brazil, where the one-year prevalence of work days lost to health problems was 13.5% [22]. In addition to causing absences from work when the disease is exacerbated, asthma also causes long temporary absences. In Brazil in 2008, sickness benefits for asthma were provided to 7.5/100,000 employees for a median duration of 49 days (IQR 25-75 = 28-87 days) [23]. In a transversal study of health-related work days lost over 30 months among public service workers in Vitória, a medium-sized seaside city in the same region as Rio de Janeiro (southeast Brazil), respiratory diseases were the leading cause of absenteeism, with an average of 8.4 days and a median of 5 days lost per period of leave [24]. In the population that we studied, 5 patients (4.6% of the total and 15.1% of employees) with moderate to severe asthma were out of work during the study period, 2 retired due to disability, and 1 lost a job. The monthly income of patients with uncontrolled asthma was lower than that of those with controlled asthma at both observation points. Despite the small number of patients with uncontrolled disease, and considering that only 5 patients (6.6% of all working-age asthmatics) were out of work or retired because of asthma, our results suggest that lower income can contribute to worsening of the disease. However, the lower mean monthly family incomes of severe patients compared with moderate and mild patients, as well as the reduction in these incomes among severe and moderate patients, but not mild ones, during the study, suggest that the disease can also lower family income by reducing the working capacity of patients or their parents.
As the literature has already described, we noted a frequent association between asthma and chronic rhinitis, which reinforces the need to devote attention to treatment that properly controls this comorbidity in asthmatics [3]. In a regional Brazilian program, adults with moderate to severe asthma were monitored and received inhaled medication to control their asthma. Over 21 months in 2003 and 2004, 269 patients with a median age of 46 years were included in a study of the characteristics and costs of asthma. Rhinitis was present in 72% of the patients, a lower proportion than we found, despite the greater severity of asthma in those patients. Less than half of the patients were working, and the second- and third-largest proportions were composed of unemployed people and retirees, respectively. The majority (74%) had a monthly family income below the national minimum wage (i.e., the population was poorer than the one followed in our study). Nevertheless, with free medication, improvement in asthma control and a reduction in hospitalizations were achieved [25]. Other clinical comorbidities were also common in our population, especially overweight/obesity and systemic arterial hypertension, which can impair the control of asthma or compete for the financial resources used for asthma treatment, respectively. Overweight and obesity were associated with increased asthma severity in all of the patients and with worse control of the disease among the patients with more severe disease, older patients and patients with a longer duration of asthma. None of the asthmatic patients with systemic arterial hypertension were using a beta-blocker, a drug class known to aggravate asthma. Another study, conducted by the Bahia State Asthma and Allergic Rhinitis Control Program (Programa de Controle da Asma e Rinite Alérgica na Bahia - ProAR) in Brazil, sought to evaluate the factors associated with severe asthma in the population [26].
Clinical data from 102 asthmatics treated in 2007-2008 were evaluated retrospectively. The mean age was 44.0 years (± 13.6). Only 2.9% of the patients had mild asthma, 30.4% had moderate asthma, and 66.7% had severe asthma, as expected for a specialty service for asthma. In this population, 61.7% of the patients were overweight or obese, a proportion similar to our findings, even though we observed fewer severe cases. There was also a significant association between arterial hypertension and asthma. The increasing prevalence of overweight/obesity in Western societies has been identified as a factor associated with the increased prevalence of asthma. Data from the USA show that the prevalence rates of asthma and obesity increased to a similar extent between 1980 and 2000 [27]. Data from the Brazilian Health Ministry, obtained through a telephone surveillance system in 2011, showed that 64.3% of Brazilian adults were overweight or obese, a result similar to ours, but with a smaller proportion of obese (15.8%) and a larger proportion of overweight (48.5%) individuals [28]. Studies have shown that obesity is associated with an increased risk of asthma symptoms. This association could begin in early life, being greater in adults than in children and in adult women than in men, but the nature of these potential reciprocal effects still needs further investigation [29]. Obesity appears to reduce responsiveness to medication, worsening control and increasing the associated costs of the disease. This could be due to a change in asthma phenotype, particularly evidenced as a less eosinophilic type of airway inflammation with reduced responsiveness to inhaled corticosteroids, combined with the added effects of changes in lung mechanics [30][31][32].
These results reinforce the view that, besides providing free access to controller medication, which recently became available through the Brazilian government, the public health system needs to make efforts to provide primary health facilities with interdisciplinary teams prepared to address the various educational, socioeconomic and clinical aspects of asthma and its comorbidities for the medium/lower-income population. Focus on drug treatment as defined by international guidelines, continued clinical and functional monitoring and adequate approaches to rhinitis and obesity are needed.

Conclusions

The population of individuals with asthma followed in this moderate- to high-complexity health care unit had medium-low socioeconomic status for Brazil, a high prevalence of associated chronic rhinitis and a high prevalence of overweight/obesity. A large proportion of the patients missed days of school or work for reasons directly or indirectly related to the disease. Lower monthly family income and body weight above the ideal level were associated with greater severity and worse control of asthma. In Brazil, there are difficulties in building a regional and hierarchical public health system, which may be a consequence of an inadequate supply of services.
Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis

Deep learning (DL) has the potential to transform medical diagnostics. However, the diagnostic accuracy of DL is uncertain. Our aim was to evaluate the diagnostic accuracy of DL algorithms to identify pathology in medical imaging. Searches were conducted in Medline and EMBASE up to January 2020. We identified 11,921 studies, of which 503 were included in the systematic review. Eighty-two studies in ophthalmology, 82 in breast disease and 115 in respiratory disease were included for meta-analysis. Two hundred twenty-four studies in other specialities were included for qualitative review. Peer-reviewed studies that reported on the diagnostic accuracy of DL algorithms to identify pathology using medical imaging were included. Primary outcomes were measures of diagnostic accuracy, study design and reporting standards in the literature. Estimates were pooled using random-effects meta-analysis. In ophthalmology, AUCs ranged between 0.933 and 1 for diagnosing diabetic retinopathy, age-related macular degeneration and glaucoma on retinal fundus photographs and optical coherence tomography. In respiratory imaging, AUCs ranged between 0.864 and 0.937 for diagnosing lung nodules or lung cancer on chest X-ray or CT scan. For breast imaging, AUCs ranged between 0.868 and 0.909 for diagnosing breast cancer on mammogram, ultrasound, MRI and digital breast tomosynthesis. Heterogeneity was high between studies and extensive variation in methodology, terminology and outcome measures was noted. This can lead to an overestimation of the diagnostic accuracy of DL algorithms on medical imaging. There is an immediate need for the development of artificial intelligence-specific EQUATOR guidelines, particularly STARD, in order to provide guidance around key issues in this field.
INTRODUCTION

Artificial Intelligence (AI), and its subfield of deep learning (DL) 1 , offers the prospect of descriptive, predictive and prescriptive analysis, in order to attain insight that would otherwise be untenable through manual analysis 2 . DL-based algorithms, using architectures such as convolutional neural networks (CNNs), are distinct from traditional machine learning approaches. They are distinguished by their ability to learn complex representations in order to improve pattern recognition from raw data, rather than requiring human engineering and domain expertise to structure data and design feature extractors 3 . Of all the avenues through which DL may be applied to healthcare, medical imaging, part of the wider remit of diagnostics, is seen as the largest and most promising field 4,5 . Currently, radiological investigations, regardless of modality, require interpretation by a human radiologist in order to attain a diagnosis in a timely fashion. With increasing demands upon existing radiologists (especially in low-to-middle-income countries) [6][7][8] , there is a growing need for diagnosis automation. This is an issue that DL is able to address 9 . Successful integration of DL technology into routine clinical practice relies upon achieving diagnostic accuracy that is non-inferior to healthcare professionals. In addition, it must provide other benefits, such as speed, efficiency, cost, bolstered accessibility and the maintenance of ethical conduct. Although regulatory approval has already been granted by the Food and Drug Administration for select DL-powered diagnostic software to be used in clinical practice 10,11 , many note that the critical appraisal and independent evaluation of these technologies are still in their infancy 12 . Even within seminal studies in the field, there remains wide variation in design, methodology and reporting that limits the generalisability and applicability of their findings 13 .
Moreover, it is noted that there has been no overarching medical specialty-specific meta-analysis assessing the diagnostic accuracy of DL, particularly in ophthalmology, respiratory medicine and breast surgery, which have the most diagnostic studies to date 13 . Therefore, the aim of this review is to (1) quantify the diagnostic accuracy of DL in speciality-specific radiological imaging modalities to identify or classify disease, and (2) appraise the variation in methodology and reporting of DL-based radiological diagnosis, in order to highlight the most common flaws that are pervasive across the field.

Search and study selection

Our search identified 11,921 abstracts, of which 9484 were screened after duplicates were removed. Of these, 8721 did not fulfil inclusion criteria based on title and abstract. Seven hundred sixty-three full manuscripts were individually assessed and 260 were excluded at this step. Five hundred three papers fulfilled inclusion criteria for the systematic review and contained the data required for sensitivity, specificity or AUC. Two hundred seventy-three studies were included for meta-analysis: 82 in ophthalmology, 115 in respiratory medicine and 82 in breast cancer (see Fig. 1). These three fields were chosen for meta-analysis as they had the largest numbers of studies with available data. Two hundred twenty-four other studies in other medical specialities were included for qualitative synthesis. Summary estimates of imaging- and speciality-specific diagnostic accuracy metrics are described in Table 1. Units of analysis for each speciality and modality are indicated in Tables 2-4.

Ophthalmology imaging

Eighty-two studies with 143 separate patient cohorts reported diagnostic accuracy data for DL in ophthalmology (see Table 2 and Supplementary References 1).
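The per-study accuracy data feeding these analyses reduce to counts from a 2×2 confusion matrix. A minimal sketch of the standard metrics, for illustration only (this is not the review's actual extraction code):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy metrics from 2x2 confusion-matrix counts."""
    sens = tp / (tp + fn)                 # sensitivity (true positive rate)
    spec = tn / (tn + fp)                 # specificity (true negative rate)
    ppv = tp / (tp + fp)                  # positive predictive value
    f1 = 2 * ppv * sens / (ppv + sens)    # harmonic mean of PPV and sensitivity
    acc = (tp + tn) / (tp + fp + fn + tn)
    return {"sensitivity": sens, "specificity": spec,
            "ppv": ppv, "f1": f1, "accuracy": acc}

# Illustrative balanced example: all metrics equal 0.9.
m = diagnostic_metrics(tp=90, fp=10, fn=10, tn=90)
```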
Optical coherence tomography (OCT) and retinal fundus photographs (RFP) were the two imaging modalities used in this speciality, with four main pathologies being diagnosed: diabetic retinopathy (DR), age-related macular degeneration (AMD), glaucoma and retinopathy of prematurity (ROP). Diabetic retinopathy: Twenty-five studies with 48 different patient cohorts reported diagnostic accuracy data for all, referable or vision-threatening DR on RFP. Twelve studies and 16 cohorts reported on diabetic macular oedema (DME) or early DR on OCT scans. AUC was 0.939 (95% CI 0.920-0.958) for RFP versus 1.00 (95% CI 0.999-1.000) for OCT. Retinopathy of prematurity: Three studies reported diagnostic accuracy for identifying plus disease in ROP from RFP. Sensitivity was 0.960 (95% CI 0.913-1.008) and specificity was 0.907 (95% CI 0.907-1.066). AUC was reported in only two studies, so it was not pooled. Others: Eight other studies reported on diagnostic accuracy in ophthalmology, either using different imaging modalities (ocular images and visual fields) or for identifying other diagnoses (pseudopapilloedema, retinal vein occlusion and retinal detachment). These studies were not included in the meta-analysis.

Respiratory imaging

One hundred and fifteen studies with 244 separate patient cohorts reported on the diagnostic accuracy of DL in respiratory disease (see Table 3 and Supplementary References 2). Lung nodules were largely identified on CT scans, whereas chest X-rays (CXR) were used to diagnose a wide spectrum of conditions, from simply being 'abnormal' to more specific diagnoses such as pneumothorax, pneumonia and tuberculosis. Lung nodules: Fifty-six studies with 74 separate patient cohorts reported diagnostic accuracy for identifying lung nodules on CT scans on a per-lesion basis, compared with nine studies and 14 patient cohorts on CXR. AUC was 0.937 (95% CI 0.924-0.949) for CT versus 0.884 (95% CI 0.842-0.925) for CXR.
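Pooled estimates such as these come from a random-effects model, as stated in the abstract. A minimal DerSimonian-Laird sketch over per-study effect sizes (e.g., logit-transformed AUCs) and their within-study variances; the choice of the DerSimonian-Laird estimator is an assumption here, since the paper does not name its specific estimator:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird
    random-effects model; returns (pooled, 95% CI, tau^2)."""
    effects = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)        # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2
```

With homogeneous studies tau^2 collapses to zero and the result matches a fixed-effect pool; with heterogeneous studies tau^2 widens the confidence interval, which is consistent with the high heterogeneity the review reports.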
Seven studies reported on diagnostic accuracy for identifying lung nodules on CT scans on a per-scan basis; these were not included in the meta-analysis. Lung cancer or mass: Six studies with nine patient cohorts reported diagnostic accuracy for identifying mass lesions or lung cancer on CT scans, compared with eight studies and ten cohorts on CXR. AUC was 0.887 (95% CI 0.847-0.928) for CT versus 0.864 (95% CI 0.827-0.901) for CXR. X-ray imaging was also used to identify atelectasis, pleural thickening, fibrosis, emphysema, consolidation, hiatus hernia, pulmonary oedema, infiltration, effusion, mass and cardiomegaly. CT imaging was also used to diagnose COPD, ground-glass opacity and interstitial lung disease, but these were not included in the meta-analysis.

Breast imaging

Eighty-two studies with 100 separate patient cohorts reported on the diagnostic accuracy of DL in breast disease (see Table 4 and Supplementary References 3). The four imaging modalities of mammography (MMG), digital breast tomosynthesis (DBT), ultrasound and magnetic resonance imaging (MRI) were used to diagnose breast cancer.

Other specialities

Our literature search also identified 224 studies in other medical specialities reporting on the diagnostic accuracy of DL algorithms to identify disease. These included large numbers of studies in the fields of neurology/neurosurgery (78), gastroenterology/hepatology (24) and urology (25). Of the 224 studies, only 55 compared algorithm performance against healthcare professionals, although 80% of studies in the field of dermatology did (see Supplementary References 4, Supplementary Table 1 and Supplementary Fig. 4).

Variation of reporting

A key finding of our review was the large degree of variation in methodology, reference standards, terminology and reporting among studies in all specialities.
The most common variables amongst DL studies in medical imaging include issues with the quality and size of datasets, the metrics used to report performance and the methods used for validation (see Table 5). Only eight studies in ophthalmology imaging 14,21,32,33,43,55,108,109 , ten studies in respiratory imaging 64,66,70,72,75,79,82,87,89,110 and six studies in breast imaging 62,91,97,104,106,111 mentioned adherence to the STARD-2015 guidelines or had a STARD flow diagram in the manuscript. Funnel plots were produced for the diagnostic accuracy outcome measure with the largest number of patient cohorts in each medical speciality, in order to detect bias in the included studies 112 (see Supplementary Figs. 5-7). These demonstrate that there is a high risk of bias in studies detecting lung nodules on CT scans and detecting DR on RFP, but not in those detecting breast cancer on MMG.

Assessment of the validity and applicability of the evidence

The overall risk of bias and applicability assessment using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool led to a majority of studies in all specialities being classified as high risk, with major deficiencies particularly in regard to patient selection, flow and timing, and applicability of the reference standard (see Fig. 2). For the patient selection domain, a high or unclear risk of bias was seen in 59/82 (72%) of ophthalmic studies, 89/115 (77%) of respiratory studies and 62/82 (76%) of breast studies. These were mostly related to a case-control study design and sampling issues. For the flow and timing domain, a high or unclear risk of bias was seen in 66/82 (80%) of ophthalmic studies, 93/115 (81%) of respiratory studies and 70/82 (85%) of breast studies. This was largely due to missing information about patients not receiving the index test or about whether all patients received the same reference standard.
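Funnel-plot asymmetry of the kind inspected above is often also tested numerically. A minimal Egger-style sketch (regress the standardized effect on precision; an intercept far from zero suggests asymmetry). This is an illustrative assumption, not the review's method; for diagnostic-accuracy meta-analyses, a Deeks-type test based on effective sample size is generally preferred:

```python
import numpy as np

def egger_intercept(effects, ses):
    """Simplified Egger-style asymmetry check: regress the
    standardized effect (effect/se) on precision (1/se) and
    return the regression intercept."""
    effects = np.asarray(effects, float)
    ses = np.asarray(ses, float)
    y = effects / ses          # standardized effects
    x = 1.0 / ses              # precisions
    slope, intercept = np.polyfit(x, y, 1)
    return intercept
```

For a perfectly symmetric funnel (identical true effects, varying precision) the intercept is zero; small-study effects push it away from zero.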
For the reference standard domain, concerns regarding applicability were seen in 60/82 (73%) of ophthalmic studies, 104/115 (90%) of respiratory studies and 78/82 (95%) of breast studies. This was mostly due to reference standard inconsistencies when the index test was validated on external datasets.

DISCUSSION

This study sought to (1) quantify the diagnostic accuracy of DL algorithms to identify specific pathology across distinct radiological modalities, and (2) appraise the variation in study reporting of DL-based radiological diagnosis. The findings of our speciality-specific meta-analysis suggest that DL algorithms generally have a high and clinically acceptable diagnostic accuracy in identifying disease. High diagnostic accuracy with analogous DL approaches was identified in all specialities despite different workflows, pathology and imaging modalities, suggesting that DL algorithms can be deployed across different areas of radiology. However, due to high heterogeneity and variance between studies, there is considerable uncertainty around the estimates of diagnostic accuracy in this meta-analysis. In ophthalmology, the findings suggest that features of diseases such as DR, AMD and glaucoma can be identified with high sensitivity, specificity and AUC using DL on both RFP and OCT scans. In general, we found higher sensitivity, specificity, accuracy and AUC with DL on OCT scans than on RFP for DR, AMD and glaucoma. Only sensitivity was higher for DR on RFP than on OCT. In respiratory medicine, our findings suggest that DL has high sensitivity, specificity and AUC for identifying chest pathology on CT scans and CXR. DL on CT had higher sensitivity and AUC for detecting lung nodules; however, we found higher specificity, PPV and F1 score on CXR. For diagnosing cancer or lung mass, DL on CT had a higher sensitivity than on CXR. In breast cancer imaging, our findings suggest that DL generally has a high diagnostic accuracy for identifying breast cancer on mammograms, ultrasound and DBT.
The performance was found to be very similar across these modalities. In MRI, however, the diagnostic accuracy was lower; this may be due to small datasets and the use of 2D images. The utilisation of larger databases and multiparametric MRI may increase the diagnostic accuracy 113.

Extensive variation in the methodology, data interpretability, terminology and outcome measures could be explained by a lack of consensus on how to conduct and report DL studies. The STARD-2015 checklist 114 was designed for the reporting of diagnostic accuracy studies 116, and specific reporting standards for AI studies have since been proposed 115,117. The QUADAS-2 (ref. 118) assessment tool was used to systematically evaluate the risk of bias and any applicability concerns of the diagnostic accuracy studies. Although this tool was not designed for DL diagnostic accuracy studies, the evaluation allowed us to judge that a majority of studies in this field are at risk of bias or raise applicability concerns. Of particular concern was the applicability of reference standards and patient selection.

Despite our results demonstrating that DL algorithms have a high diagnostic accuracy in medical imaging, it is currently difficult to determine whether they are clinically acceptable or applicable. This is partially due to the extensive variation and risk of bias identified in the literature to date. Furthermore, the definition of what threshold is acceptable for clinical use, and the tolerance for errors, varies greatly across diseases and clinical scenarios 119.

Limitations in the literature

Dataset. There are broad methodological deficiencies among the included studies. Most studies were performed using retrospectively collected data, using reference standards and labels that were not intended for the purposes of DL analysis. Few prospective studies, and only two randomised studies 109,120, evaluating the performance of DL algorithms in clinical settings were identified in the literature. Proper acquisition of test data is essential to interpret model performance in a real-world clinical setting. Poor quality reference standards may result in decreased model performance due to suboptimal data labelling in the validation set 28, which could be a barrier to understanding the true capabilities of the model on the test set. This is symptomatic of the larger issue that there is a paucity of gold-standard, prospectively collected, representative datasets for the purposes of DL model testing. However, as there are many advantages to using retrospectively collected data, the resourceful use of retrospective or synthetic data, with labels of varying modality and quality, represents an important area of research in DL 121.

[Table: suggested additional items for reporting DL studies. AI algorithmic information: is the algorithm a static model or is it continuously evolving; demonstrate how the algorithm makes decisions; is there a specific design for end-user interpretability (e.g., saliency or probability maps). Methods: was transfer learning used for training and validation; was k-fold cross validation used during training to reduce the effects of randomness in dataset splits. Reference standard: is the reference standard of high quality and widely accepted in the field; what was the rationale for choosing it. Additional clinical information: was additional clinical information given to healthcare professionals to simulate the normal clinical process.]

Study methodology. Many studies did not undertake external validation of the algorithm in a separate test set and relied upon results from internal validation data (the same dataset used to train the algorithm initially). This may lead to an overestimation of the diagnostic accuracy of the algorithm.
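The gap between internal and external validation can be made concrete with a deliberately trivial model: a 1-nearest-neighbour classifier memorises its training data, so evaluating it on the same set it was fitted to reports perfect accuracy, while previously unseen data exposes the true (here, poor) performance. A toy sketch with invented one-dimensional data:

```python
def knn1_predict(train, x):
    """1-nearest-neighbour: return the label of the closest training point."""
    return min(train, key=lambda point: abs(point[0] - x))[1]

def accuracy(train, dataset):
    """Fraction of (feature, label) pairs the memorising model gets right."""
    hits = sum(knn1_predict(train, x) == y for x, y in dataset)
    return hits / len(dataset)

# (feature, label) pairs; the external set is drawn from cases the
# memorised points do not represent.
train = [(0.0, 0), (1.0, 1), (2.0, 0), (3.0, 1)]
external = [(0.4, 1), (2.4, 1)]

internal_acc = accuracy(train, train)     # inflated: the model has seen every point
external_acc = accuracy(train, external)  # honest estimate on unseen data
```

The same asymmetry, in milder form, is what makes internally validated DL results an unreliable guide to real-world performance.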
The problem of overfitting has been well described in relation to machine learning algorithms 122. True demonstration of the performance of these algorithms can only be assumed if they are externally validated on separate test sets with previously unseen data that are representative of the target population.

[Figure panels: (a) Ophthalmic Imaging; (b) Respiratory Imaging; (c) Breast Imaging]

Surprisingly, few studies compared the diagnostic accuracy of DL algorithms against expert human clinicians for medical imaging. This would provide a more objective standard that would enable better comparison of models across studies. Furthermore, application of the same test dataset for diagnostic performance assessment of DL algorithms versus healthcare professionals was identified in only select studies 13. This methodological deficiency limits the ability to gauge the clinical applicability of these algorithms in clinical practice. Similarly, this issue extends to model-versus-model comparisons. Specific methods of model training or model architecture may not be described well enough to permit emulation for comparison 123. Thus, standards for model development and comparison against controls will be needed as DL architectures and techniques continue to develop and are applied in medical contexts.

Reporting. There was varying terminology and a lack of transparency in DL studies with regard to the validation or test sets used. The term 'validation' was used interchangeably to describe either an external test set for the final algorithm or an internal dataset used to fine-tune the model prior to 'testing'. Furthermore, the inconsistent terminology led to difficulties in understanding whether an independent external test set was used to test diagnostic performance 13. Crucially, we found broad variation in the metrics used as outcomes for the performance of the DL algorithms in the literature.
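Most of the metrics in question derive from the same 2x2 contingency table, and PPV and NPV additionally depend on disease prevalence via Bayes' rule. A sketch with invented counts (not values from any included study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Derive the clinician-familiar metrics from a 2x2 contingency table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "f1": 2 * tp / (2 * tp + fp + fn),
    }

def ppv_at_prevalence(sensitivity, specificity, prevalence):
    """PPV via Bayes' rule for a given disease prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

m = diagnostic_metrics(tp=90, fp=10, tn=90, fn=10)  # artificially balanced test set
# The same 90%-sensitive, 90%-specific test applied at 1% prevalence:
low_prev_ppv = ppv_at_prevalence(0.9, 0.9, 0.01)    # ~0.083, not 0.9
```

Holding sensitivity and specificity fixed, PPV collapses as disease becomes rare, which is why PPV and NPV quoted from balanced test sets can badly mislead.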
Very few studies reported true positives, false positives, true negatives and false negatives in a contingency table, which should be the minimum for diagnostic accuracy studies 114. Moreover, some studies only reported metrics such as the Dice coefficient, F1 score, competition performance metric and Top-1 accuracy, which are often used in computer science but may be unfamiliar to clinicians 13. Metrics such as AUC, sensitivity, specificity, PPV and NPV should be reported, as these are more widely understood by healthcare professionals. However, it is noted that NPV and PPV depend on the underlying prevalence of disease, and as many test sets are artificially constructed or balanced, reporting the NPV or PPV may not be valid. The wide range of metrics reported also makes it difficult to compare the performance of algorithms on similar datasets.

Study strengths and limitations

This systematic review and meta-analysis statistically appraises pooled data collected from 279 studies. It is the largest study to date examining the diagnostic accuracy of DL on medical imaging. However, our findings must be viewed in consideration of several limitations. Firstly, as we believe that many studies have methodological deficiencies or are poorly reported, these studies may not be a reliable source for evaluating diagnostic accuracy. Consequently, the estimates of diagnostic performance provided in our meta-analysis are uncertain and may represent an overestimation of the true accuracy. Secondly, we did not conduct a quality assessment of the transparency of reporting in this review. This was because current guidelines for assessing diagnostic accuracy reporting standards (STARD-2015 114) were not designed for DL studies and are not fully applicable to the specifics and nuances of DL research 115. Thirdly, due to the nature of DL studies, we were not able to perform classical statistical comparison of measures of diagnostic accuracy between different imaging modalities.
Fourthly, we were unable to separate each imaging modality into different subsets, to enable comparison across subsets and allow the heterogeneity and variance to be broken down. This was because our study aimed to provide an overview of the literature in each specific speciality, and it was beyond the scope of this review to examine each modality individually. The inherent differences in imaging technology, patient populations, pathologies and study designs meant that attempting to derive common lessons across the board did not always offer easy comparisons. Finally, our review concentrated on DL for speciality-specific medical imaging, and therefore it may not be appropriate to generalise our findings to other forms of medical imaging or AI studies.

Future work

For the quality of DL research to flourish in the future, we believe that the adoption of the following recommendations is required as a starting point.

Availability of large, open-source, diverse anonymised datasets with annotations. This can be achieved through governmental support and will enable greater reproducibility of DL models 124.

Collaboration with academic centres to utilise their expertise in pragmatic trial design and methodology 125. Rather than classical trials, novel experimental and quasi-experimental methods to evaluate DL have been proposed and should be evaluated 126. This may include ongoing evaluation of algorithms once in clinical practice, as they continue to learn and adapt to the population in which they are implemented.

Creation of AI-specific reporting standards. Much of the difficulty encountered in evaluating the performance of DL on medical imaging is due to inconsistent and haphazard reporting.
Although DL is widely considered a 'predictive' model (where TRIPOD may be applied), most AI interventions close to translation that are currently published are in the field of diagnostics (with specifics on index tests, reference standards, true/false positives/negatives and summary diagnostic scores, centred directly in the domain of STARD). Existing reporting guidelines for diagnostic accuracy studies (STARD) 114, prediction models (TRIPOD) 127, randomised trials (CONSORT) 128 and interventional trial protocols (SPIRIT) 129 do not fully cover DL research due to the specific considerations in methodology, data and interpretation required for these studies. As such, we applaud the recent publication of the CONSORT-AI 117 and SPIRIT-AI 130 guidelines, and await the AI-specific amendments of the TRIPOD-AI 131 and STARD-AI 115 statements (which we are convening). We trust that when these are published, studies being conducted will have a framework that enables higher quality and more consistent reporting.

Development of specific tools for determining the risk of study bias and applicability. An update to the QUADAS-2 tool taking into account the nuances of DL diagnostic accuracy research should be considered.

Updated ethical and legal frameworks. Outdated policies need to be updated and key questions answered in terms of liability in cases of medical error, doctor and patient understanding, control over algorithms and protection of medical data 132. The World Health Organisation 133 and others have started to develop guidelines and principles to regulate the use of AI. These regulations will need to be adapted by each country to fit its own political and healthcare context 134. Furthermore, these guidelines will need to proactively and objectively evaluate technology to ensure best practices are developed and implemented in an evidence-based manner 135.
CONCLUSION

DL is a rapidly developing field that has great potential in all aspects of healthcare, particularly radiology. This systematic review and meta-analysis appraised the quality of the literature and provided pooled diagnostic accuracy estimates for DL techniques in three medical specialities. While the results demonstrate that DL currently has a high diagnostic accuracy, it is important that these findings are interpreted in the context of the poor design, conduct and reporting of many studies, which can lead to bias and overestimation of the power of these algorithms. The application of DL can only be improved with standardised guidance around study design and reporting, which could help clarify clinical utility in the future. There is an immediate need for the development of AI-specific STARD and TRIPOD statements to provide robust guidance around key issues in this field before the potential of DL in diagnostic healthcare is truly realised in clinical practice.

METHODS

This systematic review was conducted in accordance with the guidelines of the 'Preferred Reporting Items for Systematic Reviews and Meta-Analyses' extension for diagnostic test accuracy studies statement (PRISMA-DTA) 136.

Eligibility criteria

Studies reporting on the diagnostic accuracy of DL algorithms to investigate pathology or disease on medical imaging were sought. The primary outcomes were diagnostic accuracy metrics. Secondary outcomes were study design and quality of reporting.

Data sources and searches

Electronic bibliographic searches were conducted in Medline and EMBASE up to 3rd January 2020. MeSH terms and all-field search terms were used for 'neural networks' (DL or convolutional or cnn), 'imaging' (magnetic resonance or computed tomography or OCT or ultrasound or X-ray) and 'diagnostic accuracy metrics' (sensitivity or specificity or AUC). For the full search strategy, please see Supplementary Methods 1. The search included all study designs.
Further studies were identified through manual searches of bibliographies and citations until no further relevant studies were identified. Two investigators (R.A. and V.S.) independently screened titles and abstracts, and selected all relevant citations for full-text review. Disagreement regarding study inclusion was resolved by discussion with a third investigator (H.A.).

Inclusion criteria

Studies comprising a diagnostic accuracy assessment of a DL algorithm on medical imaging in human populations were eligible. Only studies stating either raw diagnostic accuracy data, or sensitivity, specificity, AUC, NPV, PPV or accuracy data, were included in the meta-analysis. No limitations were placed on the date range, and the last search was performed in January 2020.

Exclusion criteria

Articles were excluded if they were not written in English. Abstracts, conference articles, pre-prints, reviews and meta-analyses were not considered, because an aim of this review was to appraise the methodology, reporting standards and quality of primary research studies published in peer-reviewed journals. Studies investigating the accuracy of image segmentation, or the prediction of disease rather than its identification or classification, were excluded.

Data extraction and quality assessment

Two investigators (R.A. and V.S.) independently extracted demographic and diagnostic accuracy data from the studies, using a predefined electronic data extraction spreadsheet. The data fields were chosen following an initial scoping review and were, in the opinion of the investigators, sufficient to fulfil the aims of this review.
Data were extracted on (i) first author, (ii) year of publication, (iii) type of neural network, (iv) population, (v) dataset split into training, validation and test sets, (vi) imaging modality, (vii) body system/disease, (viii) internal/external validation methods, (ix) reference standard, (x) raw diagnostic accuracy data (true and false positives and negatives) and (xi) percentages of AUC, accuracy, sensitivity, specificity, PPV, NPV and other metrics reported. Three investigators (R.A., V.S. and G.M.) assessed study methodology using the QUADAS-2 checklist to evaluate the risk of bias and any applicability concerns of the studies 118.

Data synthesis and analysis

A bivariate model for diagnostic meta-analysis was used to calculate summary estimates of sensitivity, specificity and AUC data 137. Independent proportions and their differences were calculated and pooled through DerSimonian and Laird random-effects modelling 138. This considered both between-study and within-study variances contributing to study weighting. Study-specific estimates and 95% CIs were computed and represented on forest plots. Heterogeneity between studies was assessed using I² (25-49% was considered low heterogeneity, 50-74% moderate and >75% high heterogeneity). Where raw diagnostic accuracy data were available, the SROC model was used to evaluate the relationship between sensitivity and specificity 139. We utilised Stata version 15 (Stata Corp LP, College Station, TX, USA) for all statistical analyses.

We chose to appraise the performance of DL algorithms to identify individual disease or pathology patterns on different imaging modalities in isolation, e.g., identifying lung nodules on a thoracic CT scan. We felt that combining imaging modalities and diagnoses would add heterogeneity and variation to the analysis. Meta-analysis was only performed where there were three or more patient cohorts reporting on each specific pathology and imaging modality.
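For readers unfamiliar with the pooling step, the DerSimonian and Laird random-effects estimator and the I² heterogeneity statistic used above can be sketched in a few lines of Python. The effect sizes and variances below are invented, purely illustrative inputs:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian & Laird) with I2 heterogeneity."""
    w = [1.0 / v for v in variances]                              # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)     # fixed-effect mean
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))   # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                 # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]                  # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0         # % heterogeneity
    return pooled, se, i2

# Identical studies: no heterogeneity, I2 = 0
pooled, se, i2 = dersimonian_laird([0.5, 0.5, 0.5], [0.01, 0.01, 0.01])
# Conflicting studies: I2 lands in the "high heterogeneity" band (>75%)
_, _, i2_high = dersimonian_laird([0.2, 0.8], [0.01, 0.01])
```

When between-study variance tau² is zero the random-effects weights reduce to the fixed-effect ones, which is why homogeneous inputs reproduce the simple inverse-variance mean.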
This study is registered with PROSPERO, CRD42020167503.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Efficacy of Eribulin Plus Gemcitabine Combination in L-Sarcomas

Although the overall survival of advanced soft-tissue sarcoma (STS) patients has increased in recent years, the median progression-free survival is lower than 5 months, meaning that there is an unmet need in this population. Among second-line treatments for advanced STS, eribulin is an anti-microtubule agent that has been approved for liposarcoma. Here, we tested the combination of eribulin with gemcitabine in preclinical models of L-sarcoma. The effect on cell viability was measured by MTS and clonogenic assay. Cell cycle profiling was studied by flow cytometry, while apoptosis was measured by flow cytometry and Western blotting. The activity of eribulin plus gemcitabine was evaluated in in vivo patient-derived xenograft (PDX) models. In L-sarcoma cell lines, eribulin plus gemcitabine was shown to be synergistic, increasing the number of hypodiploid events (increased sub-G1 population) and the accumulation of DNA damage. In in vivo PDX models of L-sarcomas, eribulin combined with gemcitabine was a viable scheme, delaying tumour growth after one cycle of treatment and being more effective in leiomyosarcoma. The combination of eribulin and gemcitabine was synergistic in L-sarcoma cultures and was shown to be active in in vivo studies. This combination deserves further exploration in the clinical context.

Introduction

Soft tissue sarcomas (STS) are a group of neoplasms of mesenchymal origin, representing 1% of all cancers in adults, with a crude incidence rate of about 5.6 cases per 100,000 inhabitants per year. STS are characterized by heterogeneous molecular aberrations, varying biology and variable responses to treatment [1,2]. Despite efforts made in recent decades, the standard first-line systemic therapy for these tumours is still doxorubicin, alone or in combination, generally with ifosfamide [3].
The increase in survival expectancy detected in the last decade for advanced STS is related, at least in part, to the emergence of new drugs in second and subsequent lines. Eribulin is a synthetic analogue of halichondrin B, a natural compound extracted from the marine sponge Halichondria okadai, and it has been approved for the treatment of metastatic breast cancer and, recently, also for unresectable or metastatic liposarcoma (mLPS) patients who have received a prior anthracycline regimen [4,5]. In a randomized, open-label, multicentre, phase III clinical trial, Schöffski et al. reported that eribulin showed significantly longer overall survival (OS) with respect to dacarbazine in a population of advanced LPS and leiomyosarcoma (LMS) patients [6]. In a subsequent histological subgroup analysis of this trial, longer OS was restricted to patients with LPS subtypes, pleomorphic LPS being the one with the largest difference in OS (22.2 vs. 6.7 months) [7]. In the LMS group, both OS and progression-free survival (PFS) were comparable in patients treated with eribulin and dacarbazine [8]. Furthermore, a non-significant difference was found according to the primary anatomic site, eribulin being more effective in non-uterine LMS [8]. In any case, eribulin was shown to be active, inducing partial responses even in uterine LMS and in different subtypes of LPS, with relatively low toxicity [9]. The mechanism of action of eribulin is based on its ability to block microtubule polymerization without affecting the shortening phase, unlike other microtubule-targeted anticancer drugs such as taxanes and vinca alkaloids [10][11][12]. In turn, eribulin disrupts the mitotic spindle, leading to cell cycle arrest at the G2/M phase [10]. In prostate and breast cancer cell lines, the mitotic arrest is irreversible and, if prolonged in time, leads to apoptosis [10,13].
Another interesting feature of eribulin is its ability to regulate vascular remodelling [14]. Eribulin inhibits pericyte- and endothelial-driven in vitro angiogenesis, reducing the number of capillary networks in co-cultures of pericytes and endothelial cells [15,16]. It also reduces the expression of angiogenesis-associated genes, including vascular endothelial growth factor (VEGF), as well as of genes involved in the Wnt, Notch and Ephrin signalling pathways and related to a mesenchymal phenotype [17]. In vivo, eribulin increases microvessel density, as observed in breast cancer and LMS xenograft models, causing tumour vascular remodelling and increasing tumour perfusion [17,18].

Gemcitabine [2′-deoxy-2′,2′-difluorocytidine monohydrochloride (β isomer); dFdC] is a deoxycytidine analogue used in the treatment of a large spectrum of tumours, including STS [19]. Gemcitabine, in its tri-phosphorylated form, acts as a competitive substrate of deoxycytidine triphosphate, being incorporated into DNA during replication, inhibiting its elongation and causing a solid G1 cell cycle arrest leading to cell death by apoptosis [20]. In the metastatic setting of STS, it is administered as a single agent or in combination with docetaxel and dacarbazine, showing activity in LMS [21,22]. Additionally, several clinical studies exploring gemcitabine in combination with other cytotoxic drugs, including paclitaxel [23], sirolimus [24] and pazopanib [25], suggested synergistic activity and proved the usefulness of gemcitabine in STS treatment. Nevertheless, tumours develop mechanisms of chemoresistance, which may justify the limited therapeutic effect of gemcitabine. Thus, new strategies are urgently required to potentiate its activity in STS [19]. A promising strategy is the combination of gemcitabine with anti-neoplastic drugs that can increase tumour perfusion, facilitating its delivery and intratumoral accumulation.
This study aims to investigate the potential synergism of eribulin plus gemcitabine in L-sarcomas in vitro, assessing the mechanisms underlying this synergism and the translation to in vivo studies, to determine the effectiveness and safety of this drug combination as well as to analyse the potential benefit in the clinical setting.

Eribulin and Gemcitabine Combination Produces a Synergistic Effect in Cell Viability

To look for more effective treatments for sarcomas, we tested combinations of eribulin and the cytotoxic agent gemcitabine in four sarcoma cell lines originating from LPS (93T449 and 94T778) and LMS (SK-UT-1 and CP0024). First, we identified the optimal drug concentration for each compound by calculating the half-maximal inhibitory concentration (IC50) for cell viability in each cell line. MTS experiments analysed 72 h after adding the drug revealed IC50 viability values at nanomolar (nM) concentrations in all the cell lines, confirming the cytotoxic effect previously described for both drugs (Figure 1A,B, Supplementary Table S1). Our results indicated that LMS cell lines were more sensitive to eribulin than LPS cell lines. We then tested whether the cytotoxic effect of eribulin could be potentiated by combination with gemcitabine, trying three different combinations: simultaneous addition of both drugs, sequential addition of eribulin followed by gemcitabine, and sequential addition of gemcitabine followed by eribulin. In all cases, viability was assessed by MTS 72 h after the first drug treatment and, in sequential regimens, the second drug was added 24 h after the first (Figure 1C). Drops in viability ranging from 90 ± 25.17-fold to 27.78 ± 2.52-fold in the SK-UT-1 cell line were observed when comparing eribulin monotherapy with eribulin plus gemcitabine combinations (Figure 1D). This is especially apparent at low drug concentrations (0.1 and 1 nM) in all the cell lines.
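As an aside for readers reproducing such dose-response analyses: the study fitted full curves, but a first-pass IC50 can be read off by log-linear interpolation between the two concentrations bracketing 50% viability. A minimal sketch with invented viability values (not the measurements reported here):

```python
import math

def ic50_interpolated(concentrations, viability_pct):
    """Estimate IC50 by interpolating log10(concentration) between the two
    doses that bracket 50% viability. Assumes viability decreases with dose;
    returns None if 50% is never crossed."""
    pairs = sorted(zip(concentrations, viability_pct))
    for (c_lo, v_lo), (c_hi, v_hi) in zip(pairs, pairs[1:]):
        if v_lo >= 50.0 >= v_hi:
            frac = (v_lo - 50.0) / (v_lo - v_hi)
            log_ic50 = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10.0 ** log_ic50
    return None

# Hypothetical MTS readout across a nanomolar dose range
conc = [1e-10, 1e-9, 1e-8, 1e-7]     # molar
viab = [95.0, 80.0, 30.0, 10.0]      # % viability
ic50 = ic50_interpolated(conc, viab)  # ~4e-9 M, i.e. low-nanomolar
```

Interpolating on the log scale mirrors how dose-response data are plotted; a full four-parameter (Hill) fit would refine this estimate but needs a nonlinear optimiser.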
Looking at the ED50 of the combinations, we observed that for both LMS cell lines eribulin before gemcitabine was the most effective combination, with low ED50 values (0.199 for CP0024 and 6.272 × 10⁻⁶ for SK-UT-1) (Figure 1D, Supplementary Table S2). Statistical analysis and isobolograms demonstrated a synergistic effect on cytotoxicity when both drugs were combined, with the addition of eribulin before gemcitabine being the most effective combination and the one chosen for further analysis (Figure 1E and Supplementary …).

[Figure 1 legend: (A) Cell viability measured at 72 h by MTS after treatment with eribulin at concentrations in the range of 10⁻¹¹ to 10⁻⁷ M or (B) gemcitabine at concentrations in the range of 10⁻¹⁰ to 10⁻⁶ M in 94T778, SK-UT-1, 93T449 and CP0024; the graphs show the mean of 3 independent replicates performed in triplicate. (C) Representative diagrams of the different drug combinations tested in the cell lines: cells were seeded on day zero and treated on day 1 and/or 2; viability was measured on day 4 by MTS. (D) Cell viability in the LPS (upper graphs) and LMS (lower graphs) lines under eribulin monotherapy, gemcitabine monotherapy, eribulin plus gemcitabine, gemcitabine pre-eribulin and eribulin pre-gemcitabine, all at 0.1, 1 and 10 nM concentrations of both drugs; the graphs show the mean of 3 independent replicates performed in triplicate (mean ± SD).]

Another way to look at viability is to perform a clonogenic assay, which measures the ability of one cell to create a colony after drug treatment. In the 93T449 cell line, we observed that when cells were treated with the combination, either as 12 h eribulin plus 6 h gemcitabine or as 24 h eribulin plus 12 h gemcitabine, the ability of the culture to form clones was lower, with the difference versus eribulin monotherapy being statistically significant (p = 0.002 and p = 0.033 for the 12-6 h and 24-12 h combinations, respectively) (Supplementary Figure S1A-C).
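Stepping back to the isobologram analysis above: the synergy call rests on Loewe additivity, which can be summarised by a combination index (the form popularised by Chou and Talalay). A hedged sketch with illustrative doses only, not values from this study:

```python
def combination_index(d1, D1, d2, D2):
    """Loewe-additivity combination index. d1, d2: doses of each drug used in
    the combination to reach a chosen effect (e.g. 50% viability loss);
    D1, D2: doses of each drug alone reaching the same effect.
    CI < 1 suggests synergy, CI = 1 additivity, CI > 1 antagonism."""
    return d1 / D1 + d2 / D2

# Hypothetical: the combination needs only a tenth of each single-agent dose
synergistic = combination_index(d1=1.0, D1=10.0, d2=1.0, D2=10.0)  # 0.2
additive = combination_index(d1=5.0, D1=10.0, d2=5.0, D2=10.0)     # 1.0
```

Geometrically, CI is the position of the combination point relative to the straight additivity line of the isobologram: points below the line give CI < 1.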
In the case of the LMS cell lines, we observed a trend in the SK-UT-1 cell line, both at 24 and 12 h, for monotherapy to have a lower clonogenic capacity than the combination (Supplementary Figure S1). The same effect was observed in the CP0024 cell line, with the difference being significant (p < 0.001 and p = 0.012 for 12 and 24 h of treatment, respectively) (Supplementary Figure S1).

Cell Viability Reduction in the Combined Treatment Is in Part Due to an Increase in Apoptotic Events

To understand the molecular mechanisms responsible for the synergy observed in cell viability experiments with the sequential combination, we checked whether the reduction in cell viability could be a consequence of an increase in apoptotic events. Since all experiments so far had revealed similar results in the four cell lines tested, we decided to perform the subsequent analyses only in one LPS cell line (93T449) and two LMS cell lines (CP0024 and SK-UT-1), which became our focus of interest. Using flow cytometry to analyse DNA content, we quantified cell cycle profiles after incubation with the drugs of interest. Consistent with the role of eribulin in microtubule polymerization, an arrest in G2/M was observed in long treatments, and it seems to be reversible, since the percentage of G2 cells diminished from 12 to 24 h, except in the SK-UT-1 cell line (Figure 2A,B). Interestingly, however, during short treatments with eribulin (from 10 min to 3 h) there was no accumulation of cells in G2/M but there was in S phase, suggesting that eribulin could also have an effect during DNA replication (Figure 2A,B). Additionally, we measured the hypodiploid events that represent the sub-G1 population, thus measuring the DNA fragmentation occurring during apoptosis and cell death. Increasing incubation times with eribulin revealed that the longer the eribulin treatment, the larger the sub-G1 population, as observed in the 93T449, CP0024 and SK-UT-1 cell lines (Figure 2C).
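The sub-G1 readout described above is, computationally, just the fraction of flow-cytometry events whose DNA content falls below the G1 peak. An illustrative sketch with synthetic DNA-content values and a hypothetical gating threshold at 85% of the G1 peak (the study does not state its gate):

```python
def sub_g1_fraction(dna_content, g1_peak, gate=0.85):
    """Fraction of events gated below the G1 DNA-content peak, taken here as
    a proxy for fragmented (hypodiploid) nuclei. 'gate' is an assumed
    threshold, expressed as a fraction of the G1 peak position."""
    cutoff = gate * g1_peak
    return sum(1 for v in dna_content if v < cutoff) / len(dna_content)

# Synthetic events: diploid G1 cells around 100, G2/M around 200,
# and a few fragmented nuclei well below G1.
events = [30, 40, 55, 100, 102, 98, 101, 99, 200, 198]
frac = sub_g1_fraction(events, g1_peak=100)  # 3 of 10 events are hypodiploid
```

In practice the G1 peak position and gate would be set per sample from the histogram, but the quantity reported (percentage of sub-G1 events) is this ratio.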
As in the MTS experiments, we again observed that the LMS cell lines were more sensitive to eribulin, showing higher levels of sub-G1 cells (17.45 ± 0.68% of cells after 24 h of treatment in CP0024 and 11.6 ± 2.25% in SK-UT-1) compared to the LPS 93T449 cell line (1.85 ± 0.21% after 24 h of treatment) (Figure 2C). This is true even considering that the eribulin concentration used in CP0024 and SK-UT-1 cultures was 10 times lower than that used in 93T449 (1 nM vs. 10 nM). When we treated cells with eribulin followed by gemcitabine, we observed a slight increase in the number of sub-G1 cells, with the difference being significant in both LMS cell lines (2.74 ± 0.49 for E24h+G12h vs. 1.85 ± 0.38 for E24h in 93T449; 6.21 ± 0.4 for E12h+G6h vs. 11.39 ± 1.82 for E12h in CP0024; 15.87 ± 0.74 for E24h+G12h vs. 11.6 ± 0.32 for E24h in SK-UT-1) (Figure 2C). To test whether the increase in the sub-G1 population could be a consequence of an increase in apoptotic events, due to defects produced during the S phase or to the arrest in G2/M upon eribulin treatment, we measured apoptotic cells directly by flow cytometry with annexin V in both sarcoma cell lines. Figure 2D shows a clear increase in apoptotic events after treatment with eribulin for 24 h. However, no significant differences were observed between monotherapy and the combination with gemcitabine (Figure 2D). We also confirmed these results by measuring the levels of apoptotic markers, such as cleaved PARP-1 protein and cleaved caspase 3, by Western blot. We observed a significant increase in cleaved PARP and cleaved caspase 3 after 24 h of eribulin treatment compared to non-treated cells. However, the combination with gemcitabine did not increase the levels of these two proteins compared to eribulin monotherapy (Figure 2E).
The Combined Treatment Produces an Increase in DNA Damage (γ-H2AX) and Accumulation of p21 Levels

To assess the mechanisms implicated in the synergistic effect of the eribulin plus gemcitabine combination, we studied the accumulation of DNA damage by checking for the presence of γ-H2AX, a marker of DNA damage accumulation in cells. Interestingly, in both the LPS and LMS cell lines, we observed an accumulation of γ-H2AX foci when cells were treated with eribulin or with the combination, both at 24 and 12 h (Figure 3). In the LPS cell line 93T449, the foci accumulation was greater with eribulin plus gemcitabine than with eribulin alone, with this difference being statistically significant at 12 h (32.78 ± 2.40 vs. 18.24 ± 2.14; p = 0.046) (Figure 3A). The same difference was observed at the protein level, especially in the 24-12 h experiment (Figure 3D). For the LMS cell line CP0024, we observed similar results in the 12-6 h experiment: a foci accumulation when treating either with eribulin or with the combination (23.34 ± 4.74 in E12h+G6h vs. 12.33 ± 1.12 in E12h; p = 0.152). In the 24-12 h experiment, however, we also observed a γ-H2AX foci accumulation when treating with gemcitabine that we did not observe in the other cell lines, with a significant difference between G12h and E24h (27.19 ± 3.2 vs. 12.33 ± 1.12; p = 0.049) (Figure 3B). In the LMS cell line SK-UT-1, we observed results similar to those in 93T449: an increase in foci accumulation and protein levels of γ-H2AX when treating either with eribulin alone or in combination with gemcitabine. This accumulation was increased with the combination, with a tendency that almost achieved statistical significance (29.05 ± 2.78 in E12h+G6h vs. 17.93 ± 1.57 in E12h; p = 0.073, and 33.9 ± 5.44 in E24h+G12h vs. 16.62 ± 0.85 in E24h; p = 0.088) (Figure 3C). γ-H2AX protein levels were similar between the combination and eribulin monotherapy in the SK-UT-1 cell line (Figure 3D).
Interestingly, alongside the γ-H2AX accumulation, p21 protein levels increased with eribulin treatment, both in monotherapy and in the combination, in the 93T449 and CP0024 cell lines (Figure 3D).

The Combination of Eribulin and Gemcitabine Provides a Significant Benefit In Vivo Regarding Tumour Growth and Survival, Compared to Monotherapy

To evaluate the in vivo relevance of the eribulin and gemcitabine combination's activity on tumour growth, we generated two LMS PDX mouse models (Table S4). We observed that at the beginning of treatment both eribulin monotherapy and the combined scheme suppressed tumour growth similarly; at later time points, the combination regimen maintained a better suppression of tumour growth, while tumours treated only with eribulin continued growing. This effect was observed in the LMS-IBIS-002 (Figure 4A) and LMS-IBIS-010 (Figure 5A) models. Mice in the combination treatment group survived for more days, whereas the eribulin monotherapy group had to be sacrificed earlier due to tumour growth. In both models, we also observed that eribulin, alone or in combination with gemcitabine, suppressed tumour growth compared to non-treated or gemcitabine-treated mice. In the LMS-IBIS-002 model, we calculated the percentage of tumour growth inhibition (TGI) relative to the control and, on day 14, observed that the eribulin plus gemcitabine TGI was statistically significantly greater than the eribulin TGI (62.49 vs. 55.48%; p = 0.0002, Wilcoxon test) (Figure 4B). We could not observe the same effect in the LMS-IBIS-010 model, because the differences between eribulin and the combination were reached when the control mice had already been sacrificed. At day 39, higher tumour growth inhibition was observed with eribulin compared to the combination (73.5 vs. 67.59%; p = 0.01, Wilcoxon test) (Figure 5B). Bodyweight was not affected by any of the drugs administered (Figures 4C and 5C).
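For readers checking the in vivo figures: %TGI is commonly computed from the change in tumour volume in the treated versus control arms over the same interval. A minimal sketch of one common definition (an assumption on our part, since the study does not spell out its exact formula; the volumes below are invented):

```python
def tumour_growth_inhibition(treated_start, treated_end, control_start, control_end):
    """%TGI = (1 - deltaT / deltaC) * 100, where deltaT and deltaC are the
    volume changes of treated and control tumours over the same interval.
    100% means complete stasis relative to control growth."""
    delta_t = treated_end - treated_start
    delta_c = control_end - control_start
    return (1.0 - delta_t / delta_c) * 100.0

# Hypothetical volumes (mm^3): treated grows 100 -> 150, control 100 -> 250
tgi = tumour_growth_inhibition(100.0, 150.0, 100.0, 250.0)  # ~66.7% inhibition
```

Note that this definition is only meaningful while a control arm is still on study, which is why the comparisons above at day 39 and day 42 had to be made between treatment arms directly.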
Both models showed significant differences between all groups in the Kaplan-Meier analysis (log-rank (Mantel-Cox) test p-values: 0.0016 and 0.0229 for the LMS-IBiS-002 (Figure 4D) and LMS-IBiS-010 (Figure 5D) models, respectively). In the LPS PDX model LPS-IBiS-015, differences in tumour growth between the combination group and eribulin also emerged later than the 21-day cycle (Supplementary Figure S2A). TGI was likewise greater in the eribulin group than in the combination group at day 42 of the experiment, without reaching significance (81.65 vs. 67.87%; n.s.; Wilcoxon test) (Supplementary Figure S2B). In addition, LPS-IBiS-015 showed significant differences between all groups in the Kaplan-Meier analysis (log-rank (Mantel-Cox) test p-value: 0.0033) (Supplementary Figure S2D). Body weight was not altered by any of the treatments (Supplementary Figure S2C).

Eribulin Remodels Tumour Vascularity (IHQ CD31)

Based on the angiogenic capabilities of eribulin, we decided to study vessel formation in the tumours of our three PDX models in response to treatment (Figures 4E,F, 5E,F and Supplementary Figure S2E,F).
In the case of the LMS-IBiS-002 model, we did not observe significant differences between any of the treatment groups. In general, this model showed high cellularity but few microvessels (Figure S4E,F). In contrast, in the LMS-IBiS-010 model, the eribulin-treated group had the highest microvessel density, followed by the combination group, which had the second-highest MVD (Figure S4E,F). Finally, the LPS-IBiS-015 model showed the highest MVD (around 60 microvessels per field), but no differences were observed between any of the groups (Supplementary Figure S2E,F). The levels of apoptosis and necrosis were also evaluated in the tumours collected from the animals treated in this in vivo study, by analysing the expression of the apoptotic marker miRNA MIR184 and the necrotic marker miRNA MIR21. No differences were seen in either miRNA on the day the mice were sacrificed when comparing the expression levels in mice treated with the combination with those in mice treated with either eribulin or gemcitabine. Nonetheless, a tendency was observed in the LMS-IBiS-010 model, where the combination presented higher levels of both miRNAs (Supplementary Figure S6).

Discussion

To identify a new strategy for the treatment of L-sarcomas in second or successive lines, we have focused on the combination of eribulin with gemcitabine. The combination is synergistic in in vitro experiments with LMS and LPS cell lines, and it is particularly effective in PDX models of LMS. Eribulin is a microtubule-targeting agent approved for the treatment of patients with unresectable or metastatic LPS previously treated with anthracyclines [5,26].
The use of eribulin for LMS has not been supported because, in a subgroup analysis, there was no apparent difference between eribulin and dacarbazine (the standard active drug in this setting) in OS (12.8 months in the eribulin arm vs. 12.3 months in the dacarbazine arm) or PFS (2.6 months in both treatment arms) [6]. In our study, we observed that the IC50 of eribulin for our cell lines is in the nanomolar range, as demonstrated by Hayasaka et al. [27], Stehle et al. [28], and Escudero et al. in a panel of STS cell lines [29]. Our LMS cell lines are more sensitive to eribulin than our LPS cell lines, contrary to what has been published in several clinical trials, so we provide new data supporting the use of eribulin also in LMS. In the Phase 3 clinical trial by Schöffski et al., the comparator used was not the most appropriate, because leiomyosarcomas are more sensitive to dacarbazine than liposarcomas, thus compromising the outcome [6]. The greater sensitivity of LPS in the clinical setting may be because these tumours have a more favourable microenvironment enriched in endothelial cells, a niche where eribulin can act more easily [15,30]. Since the approval of eribulin in STS, multiple clinical trials have been testing the safety and efficacy of eribulin in combination with other drugs, such as pembrolizumab (NCT03899805), lenvatinib (NCT03526679) or irinotecan hydrochloride (NCT03245450), with no results published to date. In our study, we tested the combination of eribulin with gemcitabine. This combination has been tested in breast cancer: a Phase 1 trial conducted in Japan in metastatic patients did not reach the recommended Phase 2 dose due to haematological toxicities [31].
Additionally, a Korean group conducted a randomized Phase 2 trial of eribulin plus gemcitabine (EG) versus paclitaxel plus gemcitabine (PG) in the first-line treatment of HER2-negative patients, in which EG showed a clinical benefit similar to PG in terms of PFS, but with lower neurotoxicity [32]. The doses used in that trial were based on a Phase 1 trial in advanced solid tumours [33]. A Phase 2 trial of the combination in L-sarcomas in Korea has recently been published, showing promising activity with a good safety profile [34]. The combination of eribulin with gemcitabine was tested in our L-sarcoma lines with three different sequences: eribulin plus gemcitabine concomitantly, eribulin before gemcitabine, and vice versa, to find the combination scheme with the strongest synergism. Eribulin before gemcitabine proved to be the best approach, as it yielded the lowest combination index (CI) in our cell lines. In the two LMS cell lines, CP0024 and SK-UT-1, the ED50 was lower when eribulin was applied before gemcitabine; among the LPS lines, this was also the case for 94T778 but not for 93T449, where concomitant treatment or gemcitabine before eribulin would work better. Regarding the combination indexes, there was strong synergy in all cell lines when treating with eribulin before gemcitabine at low concentrations (0.1 and 1 nM) of both drugs. Synergy at low concentrations has been reported in other studies [35,36], probably owing to the high activity of eribulin. Previous preclinical studies described the G2/M arrest caused by eribulin, which becomes more pronounced over time until it is irreversible [37]. Kuznetsov et al. thoroughly characterized the effect of eribulin on mitosis in lymphoma cell models: after eribulin treatment, cells begin to accumulate in the G2/M phase at 2 h, peaking at 12 h, with hypodiploid events beginning to be observed thereafter.
They also proved that these events were apoptotic by staining with acridine orange/ethidium bromide and by flow cytometry with annexin V [13]. In our case, we observed this blockade from 6 h of eribulin treatment in 93T449 and later in CP0024 and SK-UT-1, reaching the maximum at 12 and 24 h, respectively, as previously described [37]. In addition, at short eribulin exposures, we observed an accumulation of cells in the S phase, suggesting that eribulin could have an additional, previously undescribed effect during DNA replication. This effect was observable in 93T449 at 3 h and in CP0024 at 6 h. In parallel, an increase in the sub-G1 hypodiploid population was observed, as previously described in triple-negative breast cancer with the combination of eribulin and a histone deacetylase inhibitor [38]. Thus, there is an increase in cell death that is significantly different between eribulin and the combination in both CP0024 and SK-UT-1 but not in the 93T449 cell line. This supports the idea that LMS, even at lower concentrations of both drugs, is more sensitive to eribulin, and that the combination potentiates cell death, observed as an increase in the sub-G1 population, which may underlie the observed synergy. One of the described effects of eribulin is an irreversible mitotic arrest accompanied by Bcl-2 phosphorylation [10,13,39]. At the concentrations and times used in our assays, eribulin does not appear to cause irreversible G2/M arrest, although specific experiments, such as those cited above, would be necessary to confirm this hypothesis. Cell death occurs after mitotic arrest and is characterized by the inactivation of antiapoptotic Bcl-2 proteins and the activation of Bax in Ewing sarcoma cell lines, where caspases contribute only partially [40]. In rhabdomyosarcoma, Weiß et al.
observed that mitotic arrest is required for apoptosis induction, that this apoptosis is not only caspase-dependent, and that silencing ENDOG (a caspase-independent pathway) also decreases apoptosis [41]. Kuznetsov et al. also studied apoptotic markers, such as the activation of caspases 3 and 9 and PARP processing [13]. In our L-sarcoma lines, increased apoptosis was observed by both flow cytometry and PARP-1 and caspase-3 processing, which would explain the synergy observed only in the LPS line. In our in vitro models of LMS, eribulin produced a greater increase in apoptosis than gemcitabine, but the combination did not increase apoptosis enough to be the sole cause of the synergy observed in the cell viability studies. This may be because processes other than apoptosis are involved in this synergy, or because apoptosis is not solely caspase-dependent. It is possible that, in the absence of a complete mitotic arrest, cells do not fully enter apoptosis and alternative processes leading to cell death occur instead. In the mice treated with eribulin combined with gemcitabine, we did not observe an increase in the expression levels of markers of apoptosis and necrosis, in line with the results of the LMS in vitro studies. However, these tumours were obtained after euthanizing the mice long after their initial treatment with eribulin and/or gemcitabine, once the tumours had reached maximum volumes. To study apoptosis and necrosis in tumour samples, tumours would need to be obtained a few days after the initial drug treatment, when significant differences in the expression of apoptosis/necrosis markers between treatment groups would probably be found. Gemcitabine, a deoxycytidine analogue whose triphosphate is incorporated into the DNA of cells during replication, causes DNA damage that the cell cannot repair, leading to G1 cell-cycle arrest.
There is wide evidence on gemcitabine combinations with other drugs, such as inhibitors of microtubules, PARP, or proteins such as p330. These induce a synergy driven by an increase in apoptosis linked to an increase in DNA damage, measured in most cases as an increase in γ-H2AX [42-44]. These studies were conducted in pancreatic and lung cancer, as there is little evidence in STS. Likewise, gemcitabine alone produces an increase in γ-H2AX foci in in vitro models of pancreatic cancer [45]. In our LPS line, the combination of eribulin and gemcitabine caused an increase in the number of γ-H2AX foci per cell, which may account for the observed synergy. In the case of LMS, eribulin increased the number of foci, but without a significant difference from the gemcitabine combination. Analysis of other damage markers, such as Rad51 or the phosphorylation of ATM and ATR, would be necessary, because different damage-repair mechanisms may be acting in the different lines [42,46]. DNA damage and other types of stress lead to increased expression of p21 [47]. In addition, some studies have co-localized p21 and γ-H2AX at damage sites, since γ-H2AX is required for p21-induced cell-cycle arrest following a replication error [48]. In our lines, p21 expression was induced in the same pattern as γ-H2AX after treatment with eribulin and gemcitabine. Previous studies from our laboratory examined the p53 status of our cells: 93T449 has no mutations, SK-UT-1 has two missense mutations at residues 524 and 743, and CP0024 has a heterozygous mutation at residue 52. Recently, it has been shown that complete loss of p53 function sensitizes lung cancer cells to eribulin [49]. This could be the case for our lines with mutation and loss of function of p53, more specifically the LMS lines.
The mutational status of 94T778 is unknown, but it would be of interest to determine whether TP53 is mutated in this LPS line, which is more sensitive to eribulin than the other line of this sarcoma type (93T449), or whether the expression/amplification levels of MDM2, a negative regulator of p53, are higher in 94T778. Both lines come from the same patient, but 94T778 derives from that patient's second metastasis. The mutational/inactivation status of p53 could thus account for the different sensitivities observed in our preclinical setting. In line with this, it was recently reported that TP53 mutations are associated with longer PFS in patients with LMS treated with eribulin [50]. In a Phase 2 trial measuring the efficacy of the eribulin and gemcitabine combination in L-sarcomas, with PFS as the primary endpoint, treatment was carried out following the same scheme as in our in vivo PDX approach [27]. We were able to observe differences between eribulin and the combination group when, after one treatment cycle, we monitored tumour growth for up to 150 days in both the LMS and LPS models. Tumours treated with the combination took longer to reach the maximum tumour volume than those treated with eribulin alone. Eribulin has shown antitumour activity both in monotherapy and in combination with various drugs in 10 xenografts of different cancer types [14], and has shown superiority over other antitumour drugs in reversing doxorubicin resistance in an orthotopic dedifferentiated LPS xenograft [51] and in a Ewing sarcoma model [52]. It has been described that the antitumour activity of eribulin is also based on the changes it produces in the microenvironment, mainly through the associated immune response and vascular remodelling. In breast cancer xenografts, eribulin improves tumour perfusion through vascular remodelling based on an increase in the number of small functional microvessels [17].
In our case, in the LMS-IBiS-002 model, we observed a tendency similar to that of Miki et al. [53], in which treatment with eribulin reduced MVD. In contrast, in the LMS-IBiS-010 model, we obtained the result most often reported in the literature [14,17]: an increase in microvessels with both eribulin and the combination, i.e., eribulin would be increasing the number of tumour vessels, allowing greater perfusion of the drug as well as enhanced tumour growth. One of the reasons why we did not observe the expected MVD results may be that this study was performed at the end of the experiment. For a more reliable assessment of changes in microvessel density, the tumours should have been removed one week after drug treatment and analysed at that time, which is one of the limitations of our study. Overall, the combination was found to be feasible and effective in LMS and LPS models, with no apparent toxicities and with a delaying effect on tumour volume growth.

Cell Cultures

The LPS 93T449 (CRL-3043™) and 94T778 (CRL-3044™) and LMS SK-UT-1 (HTB-114™) human cell lines were obtained from the American Type Culture Collection (ATCC; Manassas, VA, USA). The CP0024 LMS human primary cell line was established from fresh tumour samples in our laboratory. The SK-UT-1 cell line was cultured in DMEM medium supplemented with 10% FBS, 1% penicillin/streptomycin (P/S), 1% sodium pyruvate, 0.1% MEM non-essential amino acid solution, and 0.1% HEPES buffer. CP0024, 93T449, and 94T778 were cultured in RPMI medium supplemented with 10% FBS, 1% P/S, and 1% Fungizone. All cell lines were maintained at 37 °C with 5% CO₂. Tissue culture supplements were all purchased from Sigma-Aldrich (Madrid, Spain). Cells were checked routinely and found to be free of contamination by mycoplasma or fungi, and their authenticity was verified before experiments.
All cell lines were discarded after 7-8 passages, and new cultures were started from frozen stocks. For all remaining in vitro experiments, eribulin was tested at 10 nM (93T449) and 1 nM (CP0024 and SK-UT-1), and gemcitabine was used at 30 nM (93T449) and 3 nM (CP0024 and SK-UT-1) under the following conditions: eribulin monotherapy (24 or 12 h), gemcitabine monotherapy (24 or 12 h), and combination treatment (24 h eribulin plus 12 h gemcitabine, or 12 h eribulin plus 6 h gemcitabine). DMSO was used as drug vehicle and negative control.

Clonogenic Assay

Cells (3.5-4 × 10⁴) were seeded in 10 cm dishes and treated with the same scheme used in the cell viability assays. Each condition was tested in triplicate. After 10 days, colonies were fixed and stained with crystal violet. After extensive washing, colonies were counted manually, and the relative number of observed colonies was plotted.

Cell Cycle Analysis

After treatment, cells were trypsinized, centrifuged, and washed with 1X PBS, followed by a 30 min incubation at 4 °C in 70% ethanol. After centrifugation at 1800 rpm for 10 min, cells were resuspended in a mixture of 1 mg/mL propidium iodide (Sigma-Aldrich) and 50 mg/mL RNase A (Qiagen®, Hilden, Germany) in PBS and incubated for 1 h at RT in the dark with horizontal shaking. The cell cycle was measured by flow cytometry (Canto II Analyzer cytometer; BD Biosciences, Franklin Lakes, NJ, USA) and data were analysed with FlowJo software (FlowJo LLC; Ashland, OR, USA).

Apoptosis Analysis

The levels of apoptotic, early apoptotic, and necrotic cells were evaluated in the 93T449, CP0024, and SK-UT-1 cell lines. A FITC Annexin V Apoptosis Detection Kit with PI (Immunostep; Salamanca, Spain) was used to determine cell death, following the manufacturer's instructions. Apoptosis levels were determined by flow cytometry (Canto II flow cytometer) and data were analysed with both BD FACSDiva and FlowJo software.
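The Annexin V/PI readout above is conventionally resolved into four quadrants (viable, early apoptotic, late apoptotic, necrotic). A hypothetical gating sketch in Python; thresholds and intensities are illustrative and not taken from the study:

```python
from collections import Counter

def classify_cell(annexin, pi, annexin_cut, pi_cut):
    """Quadrant gating for Annexin V-FITC / PI events.
    Thresholds are hypothetical; in practice they are set from unstained controls."""
    if annexin < annexin_cut and pi < pi_cut:
        return "viable"
    if annexin >= annexin_cut and pi < pi_cut:
        return "early apoptotic"
    if annexin >= annexin_cut and pi >= pi_cut:
        return "late apoptotic"
    return "necrotic"  # Annexin-negative, PI-positive

# Hypothetical (annexin, PI) fluorescence intensities in arbitrary units
events = [(10, 5), (200, 8), (300, 400), (12, 350), (15, 9)]
counts = Counter(classify_cell(a, p, annexin_cut=100, pi_cut=100) for a, p in events)
for label in ("viable", "early apoptotic", "late apoptotic", "necrotic"):
    print(f"{label}: {100 * counts[label] / len(events):.0f}%")
```

In the study itself this gating was done in BD FACSDiva/FlowJo; the sketch only illustrates the quadrant logic.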
Blots were then washed in 1X TBS-T and incubated for 1 h with either rabbit anti-mouse IgG (1:10,000; A9004; Sigma-Aldrich) or goat anti-rabbit IgG (1:10,000; ab6721; Abcam) peroxidase-labelled antibodies in 1X TBS-T. HRP substrate was used for chemiluminescent detection (Amersham™ ECL™ Western Blotting Detection Reagent; GE Healthcare Life Sciences), images were acquired with a ChemiDoc Imaging System (Bio-Rad), and blots were analysed with Image Lab (Bio-Rad). The experiments were performed in triplicate.

Cell Immunofluorescence

Cells were seeded in 10 mm³ plates in which three sterile 1 cm circular coverslips had previously been placed. After treatment, each coverslip was transferred to a well of a 24-well plate, where the remainder of the protocol was carried out. Cells were fixed with 3% paraformaldehyde in H₂O for 30 min at room temperature (RT), washed with 200 mM glycine solution for 15 min at RT, and permeabilized with 0.5% Triton X-100 (Sigma-Aldrich) for 30 min at RT. Cells were blocked in 1% BSA in 2X PBS for 30 min at RT. Subsequently, they were incubated with the γ-H2AX antibody diluted 1:100 in blocking solution overnight at 4 °C. Washes were performed with 1X PBS for 5 min, followed by incubation with a goat anti-rabbit IgG (H+L) cross-adsorbed secondary antibody, Alexa Fluor 488 (Thermo Fisher Scientific), diluted in blocking solution, for 1 h at RT in the dark. After washing with 1X PBS, nuclei were stained with DAPI (Life Technologies, Carlsbad, CA, USA) diluted 1:1000 in PBS for 15 min at RT in the dark. Coverslips were mounted on slides with ProLong Gold solution (Life Technologies) and stored at 4 °C until analysis. Images were obtained with a Leica TCS-SP2-AOBS confocal microscope and analysed with LCS Lite and Fiji software v1.8.0.

Patient-Derived Xenograft (PDX) Models

Three PDX models were used for the in vivo studies: LMS-IBiS-002, LMS-IBiS-010, and LPS-IBiS-015.
For LMS-IBiS-002, pathological evaluation confirmed the original diagnosis of a spindle cell leiomyosarcoma. Microscopically, it is a well-demarcated, unencapsulated tumour with an expansive growth front, consisting of interlacing bundles of spindle cells with eosinophilic cytoplasm. There were areas of intense nuclear pleomorphism as well as frequent mitotic figures. No areas of coagulative necrosis or lymphovascular invasion were identified. Although no immunohistochemical markers were stained for the original diagnosis, we were able to detect diffuse expression of smooth muscle actin (SMA) and h-caldesmon in our PDX. For LMS-IBiS-010, the patient tumour expressed SMA and h-caldesmon, while desmin expression was negative. Our PDX model maintained the positive expression of h-caldesmon but lost SMA expression, at least in the tumour block tested. For LPS-IBiS-015, the pathologist described the original tumour as a grade 3 dedifferentiated liposarcoma with muscular differentiation; this tumour was positive for MDM2, SMA, and desmin and negative for myogenin, CD117, DOG1, and S100 protein.

In Vivo Patient-Derived Xenograft (PDX) Studies

Six- to eight-week-old female nude mice (Nude-Foxn1; Charles River Laboratories, Wilmington, MA, USA) were used. The mice were anesthetized with 100 µL of a 1:3 mixture of diazepam (Roche, Basel, Switzerland) and ketamine (Pfizer, New York, NY, USA) administered by intraperitoneal injection. Tumour samples (10 mm³ volume) were implanted subcutaneously in the right flank and grown for 3-7 weeks until reaching a minimum volume of 150 mm³, at which point treatments were started. Tumours were measured using callipers.
In Vivo PDX Treatment

The mice were randomized according to tumour size into the following four treatment groups: control (intraperitoneal saline solution); eribulin (1.6 mg/kg intravenously); gemcitabine (120 mg/kg intraperitoneally); and the eribulin and gemcitabine combination, receiving 1.6 mg/kg of intravenous eribulin and 120 mg/kg of intraperitoneal gemcitabine, with eribulin given 3 h prior to gemcitabine. For intraperitoneal treatments, we used an insulin needle (0.3 mm (29 G) × 12.7 mm) (BD Microfine; Becton Dickinson and Company, Franklin Lakes, NJ, USA), and for intravenous treatment, we injected into either of the two lateral tail veins with a needle (0.40 mm (27 G) × 10 mm) (BD Microfine; Becton Dickinson and Company, USA). Mice were immobilized in a restrainer for 35 g mice (90 × 30 mm) during tail-vein treatment. The mice received the appropriate treatment for 2 weeks (1 dose/week, on days 0 and 7). Mice were monitored daily for signs of distress and weighed three times a week. Tumour size was also measured three times a week, and volume was estimated according to the following equation: tumour volume = [length × width²] × 0.52. Mice were sacrificed in a CO₂ chamber when they reached the maximum tumour volume (1500 mm³), following ethical standards of animal treatment. This project was approved by the Consejería de Agricultura, Pesca y Desarrollo Rural under the code 16/05/2017/061.

Immunohistochemical Analysis

Fragments of resected PDX tumours from nude mice were fixed in a 1:4 formalin solution in H₂O (Epredia™ Formal-Fixx™, Thermo Fisher Scientific) for 1 day and then paraffin-embedded. Cross-sections were prepared and stained with rabbit anti-mouse CD31 (PECAM-1 (D8V9E) XP®; Cell Signaling; 1:90 dilution) using Histofine Simple Stain Mouse MAX PO and DAB substrate kits (Nichirei Bioscience, Tokyo, Japan). Tissue morphology was visualized by haematoxylin and eosin (H&E) staining.
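The volume estimate from calliper readings in the treatment section uses the modified ellipsoid approximation (V ≈ 0.52 × length × width², i.e. π/6 · L · W²). A minimal sketch, including the 1500 mm³ humane-endpoint check:

```python
def tumour_volume(length_mm, width_mm):
    """Modified ellipsoid approximation: V ~ 0.52 * L * W^2 (0.52 ~ pi/6)."""
    return 0.52 * length_mm * width_mm ** 2

def reached_endpoint(length_mm, width_mm, max_volume_mm3=1500.0):
    """Check a calliper reading against the 1500 mm^3 sacrifice threshold."""
    return tumour_volume(length_mm, width_mm) >= max_volume_mm3

# Illustrative calliper readings in mm
print(round(tumour_volume(14.0, 10.0)))   # ~728 mm^3
print(reached_endpoint(18.0, 13.0))       # above the 1500 mm^3 threshold
```

The same volumes, tracked three times a week, are what feed the growth curves and TGI comparisons reported in the Results.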
Quantification of vascular morphology and microvessel density (MVD) was conducted by an expert pathologist (RR) with an Olympus BX61 optical microscope. To calculate the proportion (%) of small and large vessels, a vascular-area threshold was selected. Vascular "hot spot" regions, defined as regions of high vascular density in the tumour, were identified at 100× magnification. Individual mature and immature microvessels were counted in at least 5 different fields (0.95 mm²/field), and MVD was expressed as the average microvessel count per high-power field (HPF).

Apoptosis-Related miRNA Expression Analysis of Paraffin-Embedded Samples

Paraffin blocks corresponding to tumour samples from the LMS-IBiS-002, LMS-IBiS-010, and LPS-IBiS-015 mouse models were cut into 10 µm slices. RNA, including small noncoding RNA, was extracted using the RecoverAll Total Nucleic Acid Isolation kit (Invitrogen, Waltham, MA, USA), following the manufacturer's instructions. RNA samples were reverse transcribed to cDNA using the high-capacity reverse transcription kit (Invitrogen), following the TaqMan™ Small RNA Assays user guide. TaqMan probes (Thermo Fisher) for the apoptosis miRNA MIR184 (Hs06637236_s1) and the necrosis miRNA MIR21 (Hs04231424_s1) were used as targets; U6 snRNA (001973) was used as an endogenous control.

Statistical Analysis

Data from biological replicates were grouped and presented as mean ± standard error of the mean. Differences between two treatment conditions were statistically analysed, where indicated, using the unpaired Student's t-test. Differences were considered significant when p < 0.05. Statistical analysis was performed using Prism 6.0 (GraphPad; San Diego, CA, USA). In addition, the log-rank (Mantel-Cox) test was used for survival analysis in the PDX experiments, and the Wilcoxon signed-rank test was used for the assessment of tumour growth inhibition.
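The paper does not state how the TaqMan miRNA data were quantified; the standard approach for assays with an endogenous control (U6 snRNA here) is the 2^-ΔΔCt (Livak) method, sketched below with hypothetical Ct values:

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample, ct_target_calib, ct_ref_calib):
    """Relative expression by the 2^-ddCt (Livak) method.
    'ref' is the endogenous control (U6 snRNA here); the calibrator is the
    comparison group (e.g. an untreated tumour). Assumes ~100% PCR efficiency."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calib = ct_target_calib - ct_ref_calib
    return 2.0 ** -(d_ct_sample - d_ct_calib)

# Hypothetical Ct values (illustrative only): target is 4-fold higher in the sample
print(fold_change_ddct(26.0, 20.0, 28.0, 20.0))
```

Fold changes computed this way per animal would then be compared between treatment groups with the tests described above.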
Conclusions

In conclusion, we have demonstrated that the eribulin and gemcitabine combination is synergistic in LMS and LPS cell lines. This synergism could be explained in part by the accumulation of DNA damage and of the sub-G1 population. The eribulin plus gemcitabine combination is feasible in L-sarcoma PDX models, and survival is longer in mice treated with the combination than with eribulin monotherapy. Further analysis is needed to elucidate the mechanisms underlying this synergism and to translate it to the clinical setting.
Synthesis, Characterization and Antimicrobial Studies of a New Mannich Base N-[Morpholino(phenyl)methyl]acetamide and Its Cobalt(II), Nickel(II) and Copper(II) Metal Complexes

A new Mannich base, N-[morpholino(phenyl)methyl]acetamide (MBA), was synthesized and characterized by spectral studies. Chelates of MBA with cobalt(II), nickel(II) and copper(II) ions were prepared and characterized by elemental analyses and IR and UV spectral studies. MBA was found to act as a bidentate ligand, bonding through the carbonyl oxygen of the acetamide group and the CNC nitrogen of the morpholine moiety in all the complexes. Based on the magnetic moment values and UV-Visible spectral data, a tetracoordinate geometry was assigned to the nitrato complexes and a hexacoordinate geometry to the sulphato complexes. The antimicrobial studies show that the Co(II) nitrato complex is more active than the other complexes.

Introduction

Metal chelates of Mannich bases form an interesting class of compounds, which find extensive applications in various fields [1,2]. Among the relatively few transition, inner transition and main group metal complexes of Mannich bases, those formed from bivalent transition metals are of particular interest because of their synthetic flexibility, structural diversity, bonding interactions, biological significance, and other applications [3-5]. We report here the synthesis of the Mannich base MBA, which is a bidentate ligand, as it contains two donor atoms: a carbonyl oxygen and a CNC nitrogen. With this ligand, Co(II), Ni(II), and Cu(II) complexes (in the molar ratio 1:2) have been prepared.

Experimental

High-purity acetamide (Merck), benzaldehyde (Merck) and morpholine (Merck) were used as supplied. All other solvents and metal salts used were of A.R. grade and used as received.
Synthesis of the Ligand

N-[Morpholino(phenyl)methyl]acetamide (MBA) was synthesized by the Mannich synthetic route [7]: acetamide (5.90 g, 0.1 mol) was dissolved in a minimum quantity of ethanol. To this solution, benzaldehyde (10 mL, 0.1 mol) followed by morpholine (9 mL, 0.1 mol) were added in small portions with constant stirring in an ice bath. After 28 days, a yellow solid was obtained. It was washed with water and with acetone. The compound was dried in air and then at 60 °C in an air oven, and recrystallised from ethanol. The percentage yield was 73% and the melting temperature was 148-150 °C.

Synthesis of the Complexes

A hot methanolic solution of the metal salt was added slowly, with constant stirring, to a hot ethanolic solution of the ligand in a 2:1 mol ratio. The insoluble complexes [8] that formed were filtered, washed with methanol and ethanol to remove unreacted metal and ligand, and dried in air and then in an air oven at 80 °C.

Instruments

Microelemental (C, H, and N) data were obtained with a Carlo Erba 1108 elemental analyzer at RSIC, CDRI, Lucknow. Metal contents were estimated by the usual procedure after digesting the complexes with conc. HNO₃. Sulphate was estimated gravimetrically as BaSO₄, and chlorides were estimated volumetrically by Volhard's method [6]. Conductance data were obtained in ~10⁻³ M DMF solutions of the complexes at room temperature using a Systronics direct-reading digital conductivity meter 304 with a dip-type conductivity cell. IR spectra were recorded using a Spectrum One Perkin Elmer FT-IR spectrometer employing KBr pellets. UV-Visible spectra were recorded in DMF solutions using a double-beam UV-Visible spectrometer (Perkin EZ-301, working range 1100-190 nm). The 1H and 13C NMR spectra of the ligand were recorded on a Bruker instrument and on a JEOL-GSX 400 spectrometer employing TMS as internal reference and DMSO-d6 as solvent. The FAB mass spectrum of the ligand was recorded using a JEOL GC mate mass
spectrometer. Electrochemical studies were performed on a Bio-Analytical Systems CV-50W electrochemical analyzer with a three-electrode system: a glassy carbon working electrode, a platinum wire auxiliary electrode and an Ag/AgCl reference electrode. Room-temperature magnetic susceptibility measurements of the complexes were made using a Gouy magnetic balance calibrated with mercury(II) tetrathiocyanatocobaltate(II).

Antimicrobial Studies

The antimicrobial activities [9] of MBA and its Co(II), Ni(II), and Cu(II) metal complexes were tested in vitro against six bacterial species (E. coli, P. aeruginosa, S. typhi, B. subtilis, S. pyogenes, and S. aureus) and the fungal species A. niger and A. flavus by the disc diffusion method, using nutrient agar as the medium and gentamycin as the control. A paper disc containing the compound (10, 20, or 30 μg/disc) was placed on the surface of a nutrient agar plate previously spread with 0.1 mL of a sterilized culture of the microorganism. After incubation at 37 °C for 24 h, the diameter of the inhibition zone around the paper disc was measured.
Characterization of the Ligand

The infrared spectrum of MBA shows a sharp peak at 3297 cm-1, which may be assigned to the νNH of the secondary amide group 10. The strong band at 1647 cm-1 may be attributed to the νC=O stretching mode. The other strong bands, appearing at 1493, 1449 and 1206 cm-1, are indicative of bending vibrations of the methylene group (δCH2) and stretching vibrations of the morpholine ring (νring). The medium absorption band at 1116 cm-1 suggests the presence of a new C-N-C bond, pertaining to the formation of the Mannich base by insertion of the morpholinobenzyl group on acetamide. The absorption bands at 1140 and 1072 cm-1 may be assigned to the C-N-C frequency of morpholine. The strong bands at 1049 and 1034 cm-1 may be attributed to the νC-O-C frequency of the morpholine group. The band at 749 cm-1 indicates monosubstitution of morpholine in MBA.

The UV-Visible spectrum 11 in DMF registers two intense split bands centered at 286 nm and 242 nm, which are presumably due to the n→π* transition of the carbonyl group and the π→π* transitions of the carbonyl group and the benzene ring, respectively.

The 1H NMR signal at δ = 8.44 ppm may be assigned to the secondary amide NH proton. The methine proton shows a signal at δ = 5.61 ppm. The multiplet in the range δ = 7.44-7.28 ppm (7.44 ppm for the protons at positions 2 and 6; 7.36 and 7.28 ppm for those at positions 3 and 5 and at position 4, respectively) is attributed to the protons of the benzene ring 12. The chemical shift of the protons of the N(CH2)2 group of the morpholine ring occurs at δ = 2.51 ppm, and that of the protons of the O(CH2)2 group at δ = 3.58 ppm.
The 13C NMR spectrum 13 shows the carbonyl carbon at δ = 170.13 ppm. The signals observed between δ = 139.71 and 127.76 ppm are due to the aromatic carbons of the phenyl ring. The resonance signals at δ = 139.71, 128.64, 127.93 and 127.76 ppm are assigned to the carbons of the phenyl group at the 1, 2 & 6, 3 & 5 and 4 positions, respectively. The signal due to the C1 carbon of the benzene ring can be distinguished by the decreased height of the peak at δ = 139.71 ppm. The signal at δ = 66.70 ppm is due to the O(CH2)2 carbons and that at δ = 48.98 ppm to the N(CH2)2 carbons of morpholine.

The mass spectrum 14 of MBA, obtained in electron ionization mode, shows a very weak molecular ion peak at m/z = 234. This confirms the molecular mass assigned to the Mannich base under study. On fragmentation, intense signals at m/z = 143 and m/z = 114 are recorded; they are due to the loss of the C6H5CH2- and CHO- groups, respectively. The next signal, at m/z = 86, is due to the morpholine ion.

Characterization of the Complexes

To establish the stoichiometry 15 of the complexes, the percentages of the metal ions, anions and C, H and N were determined. The molar conductance values reveal that all the complexes are non-electrolytes. The CHN analyses are also in good agreement with the calculated values (Table 1).
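The fragment assignments given above for the mass spectrum can be checked by simple nominal-mass bookkeeping. A minimal sketch, using the fragment formulas stated in the text:

```python
# Sketch: nominal-mass arithmetic for the MBA fragmentation pattern.
# Formulas follow the assignments in the text; nominal (integer) masses are used.

def nominal_mass(formula):
    """Nominal mass from an {element: count} dict."""
    masses = {"C": 12, "H": 1, "N": 14, "O": 16}
    return sum(masses[el] * n for el, n in formula.items())

mba    = nominal_mass({"C": 13, "H": 18, "N": 2, "O": 2})  # molecular ion
benzyl = nominal_mass({"C": 7,  "H": 7})                   # C6H5CH2- group
cho    = nominal_mass({"C": 1,  "H": 1, "O": 1})           # CHO- group
morpholine_ion = nominal_mass({"C": 4, "H": 8, "N": 1, "O": 1})

print(mba)                 # 234, the molecular ion peak
print(mba - benzyl)        # 143, after loss of C6H5CH2-
print(mba - benzyl - cho)  # 114, after further loss of CHO-
print(morpholine_ion)      # 86, the morpholine ion
```

Each computed value matches the m/z signal reported in the spectrum (234, 143, 114 and 86).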
In the IR spectra 16 of all the MBA complexes (Table 2), the stretching frequencies of the C=O and C-N-C bonds are lowered, showing that both the carbonyl oxygen and the C-N-C nitrogen atoms are coordinated to the metal ions; the ligand therefore acts as an O,N donor. The IR spectra of the sulphato complexes show the presence of a coordinated sulphato group. The bands in the regions of 1150, 1000 and 900 cm-1 are due to the 'SO' stretching mode ν3 of the sulphato group. The triply degenerate 'OSO' bending mode ν4 splits into its components at about 650, 600 and 580 cm-1 in the complexes. The frequencies at 750 (ν1) and 500 (ν2) cm-1 are also observed. These are due to the bidentate chelation 17 associated with the coordinated sulphato group. The bands around 3300-3500, 1600-1650, 800-880, 600-690 and 460-530 cm-1, found only in the spectra of the Co II, Ni II and Cu II sulphato complexes of MBA, indicate the presence of coordinated water molecules 18.

The Co II nitrato complex exhibits electronic transition bands at 3841 cm-1 due to the 4A2(F)→4T2(F) (ν1) transition, at 6719 cm-1 due to the 4A2(F)→4T1(F) (ν2) transition, at 15086 cm-1 due to the 4A2(F)→4T1(P) (ν3) transition, and a band at 28534 cm-1 due to charge transfer 19. The calculated ν2/ν1 ratio is below 1.75, and the effective magnetic moment of the nitrato complex is 4.58 B.M.; these are the values expected for tetrahedral geometry. The Co II sulphato complex exhibits electronic transition bands at 6951 (ν1), 14982 (ν2) and 18576 (ν3) cm-1 due to the 4T1g(F)→4T2g(F), 4T1g(F)→4A2g(F) and 4T1g(F)→4T1g(P) transitions 20, respectively. The band at 24053 cm-1 indicates a charge transfer transition. The calculated ν3/ν1 ratio for the Co II sulphato complex is 2.67 and its μeff value is 5.08 B.M.; these are in agreement with the values expected for an octahedral Co II complex. The number of bands, their energy positions and their intensities confirm the octahedral stereochemistry of the sulphato complex.

The nitrato and sulphato complexes of Cu II exhibit electronic absorption bands at 9204 and 9425 cm-1 due to the 2B1g→2A1g and 2B1g→2A2g transitions, respectively. The bands at 10388 and 12968 cm-1 correspond to the 2B1g→2B2g transition. The bands at 11915 and 15317 cm-1 are due to 2Eg→2T2g(F) transitions, and those appearing at 24062, 32008 and 35100 cm-1 are characteristic of (ligand→metal) charge transfer transitions 22. The μeff value of the nitrato complex is 2.26 B.M. and that of the sulphato complex is 1.83 B.M. The band positions and the multi-component nature of the spectra suggest a distorted octahedral geometry for the sulphato complex and a tetragonally distorted geometry for the nitrato complex.

The electronic spectral parameters Dq, B, β, β0% and the ligand field stabilization energy (LFSE) 23 were calculated for the Co II and Ni II complexes. The order of Dq values among the Co II complexes is Co(NO3)2·MBA < CoSO4·2MBA; the Dq value for the octahedral sulphato complex is greater than that of the tetrahedral Co II nitrato complex. From the β0% values, the covalent character of the Co II complexes is established. The percentage covalency 24 is greater for the tetrahedral nitrato complex: the β0% values are about 29 and 13 for the nitrato and sulphato complexes of Co II, respectively, when the free-ion value of the interelectronic repulsion parameter is used.

The X-band EPR spectra 25 of the polycrystalline nitrato and sulphato complexes of Cu II were recorded at liquid nitrogen temperature (77 K). The g values of the nitrato and sulphato complexes of Cu II follow the trend g∥ > g⊥ > g(DPPH), suggesting that the unpaired electron lies predominantly in the dx2-y2 orbital. The nitrato and sulphato complexes of Cu II show EPR spectra of the axial symmetry type, indicating a planar-based distorted octahedral geometry around the copper centre. The g∥ values of the nitrato and sulphato complexes are less than 2.30, indicating covalent character.
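For axial Cu II spectra of this kind, the extent of exchange interaction between copper centres is conventionally gauged by the parameter G = (g∥ − 2.0023)/(g⊥ − 2.0023). A minimal sketch follows; note that the g values used are purely illustrative, since the text quotes only the derived G values, not the underlying g∥ and g⊥.

```python
# Sketch: the axial-symmetry (exchange-interaction) parameter G for Cu(II) EPR.
# The paper quotes G = 7.21 and 7.50 but not the underlying g values, so the
# g values below are illustrative only.

GE = 2.0023  # free-electron g value

def exchange_parameter(g_par, g_perp):
    """G = (g_par - g_e) / (g_perp - g_e) for an axial Cu(II) spectrum."""
    return (g_par - GE) / (g_perp - GE)

# Illustrative axial spectrum with g_par > g_perp > 2.0023,
# the trend reported for both Cu(II) complexes:
g = exchange_parameter(g_par=2.25, g_perp=2.06)
print(f"G = {g:.2f}")  # G > 4 implies negligible exchange between Cu(II) centres
```

On this criterion, G values well above 4 (such as the quoted 7.21 and 7.50) indicate a lack of exchange interaction between the two Cu II centres in the unit cell.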
The higher g∥ values may be due to the coordination of H2O to the Cu II ion in these complexes. The axial symmetry parameter 26 G, which is a measure of the interaction between metal centres in crystalline solids, is 7.21 and 7.50 for the nitrato and sulphato complexes of Cu II, respectively. This suggests a lack of exchange interaction between the two Cu II centres in the unit cell of the complex.

The Cu(II) complex exhibits two quasi-reversible redox couples. The cyclic voltammogram of the Cu(II) complex displays two reduction peaks: the first at Epc = -0.65 V with an associated oxidation peak at Epa = -0.5 V, and the second at Epc = -1.58 V with an associated oxidation peak at Epa = -1.8 V, corresponding to the Cu(II)/Cu(I) and Cu(I)/Cu(0) couples, respectively, at a scan rate of 0.2 V/s. The ΔEp values are 1.5 and 2.02 for the first and second redox couples, respectively, and increase with scan rate, giving evidence for the quasi-reversible nature associated with one-electron reduction.

Antibacterial Activity

A comparison of the diameters of the inhibition zones of the compounds investigated, listed in Tables 3 and 4, shows that the Co II nitrato complex exhibits the highest antibacterial and antifungal activity against all the bacterial and fungal species studied: its inhibition zones are larger than even those of the control, gentamycin, at the same concentration and under identical conditions. The complexes show greater antibacterial and antifungal activity than the ligand against all the bacteria and fungi studied, which clearly indicates that chelation increases activity. The higher activity of the Co II complex may be due to the fact that Co(II) is an essential micronutrient during transcription and translation of nucleic acids; Co II complexes have been shown to inhibit cellular protein and RNA synthesis. In the Co II nitrato complex, the coordinatively unsaturated metal centre achieves a higher coordination number by binding to some of the functional groups of the protein. This leads to increased uptake of the compound by the bacterium, thereby inhibiting its growth. Steric constraints are less severe for a tetrahedral complex than for an octahedral one, so the tetrahedral complexes are biologically more active than the octahedral complexes 27.

Antifungal Activity

The fungitoxicity of the free ligand is less severe than that of the metal chelates. A possible mechanism of toxicity may be speculated upon in the light of chelation theory 28. Chelation considerably reduces the polarity of the metal ion, mainly because of the partial sharing of its positive charge with the donor groups and a possible π-delocalization of electrons over the chelate ring. This increases the lipophilic character of the neutral chelate, which favours its permeation through the lipoid layers of fungal membranes. Furthermore, the mechanism of action of the compounds may involve the formation of hydrogen bonds through the uncoordinated heteroatoms (O, S and N) with the active centres of the cell constituents, resulting in interference with normal cell processes 29. These compounds have a greater chance of interaction with the nucleotide bases or with biologically essential metal ions present in the biosystem, and also through the coordinatively unsaturated metal centres present in the metal complexes. The low activity of some of the complexes may be due to a mismatch between the geometry and charge distribution of the molecule and those of the pores of the fungal cell wall, preventing penetration and hence toxic reaction within the pores. As a corollary, such a complex cannot reach the desired site of action on the cell wall to interfere with normal cell activity 30.

Table 1. Analytical and Conductance Data for the Co II, Ni II and Cu II complexes of MBA.

Table 2. Important IR Absorption Bands (cm-1) of MBA and of its Co II, Ni II and Cu II complexes.

Table 3. Antibacterial activity of the ligand and its complexes.

Table 4. Antifungal activity of the ligand and its complexes.
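The geometry diagnostics used above for the Co II complexes (the ν2/ν1 and ν3/ν1 transition-energy ratios and the magnetic moments) can be reproduced from the quoted band positions. A sketch; the spin-only benchmark is computed here and is not quoted in the paper:

```python
import math

# Sketch: recomputing the diagnostic ratios quoted for the Co(II) complexes
# from the band positions given in the text.

# Co(II) nitrato complex (tetrahedral assignment)
nu1_td, nu2_td = 3841, 6719          # cm^-1
ratio_td = nu2_td / nu1_td           # the text reports this falls below 1.75

# Co(II) sulphato complex (octahedral assignment)
nu1_oh, nu3_oh = 6951, 18576         # cm^-1
ratio_oh = nu3_oh / nu1_oh           # the text reports 2.67

# Spin-only moment for high-spin d7 Co(II), n = 3 unpaired electrons;
# the observed 4.58 and 5.08 B.M. exceed this owing to orbital contribution.
mu_spin_only = math.sqrt(3 * (3 + 2))

print(f"nu2/nu1 (nitrato)  = {ratio_td:.2f}")
print(f"nu3/nu1 (sulphato) = {ratio_oh:.2f}")
print(f"mu(spin-only, n=3) = {mu_spin_only:.2f} B.M.")
```

The recomputed ratios (about 1.75 and 2.67) agree with the values stated in the text.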
Perspectives of policy-makers and stakeholders about health care waste management in community-based care in South Africa: a qualitative study

Background

In South Africa, a new primary health care (PHC) re-engineering initiative aims to scale up the provision of community-based care (CBC). A central element in this initiative is the use of outreach teams comprising nurses and community health workers to provide care to the largely poor and marginalised communities across the country. The provision of care will inevitably lead to an increase in the amount of health care waste (HCW) generated in homes, which suggests the need to pay more attention to the HCW that emanates from homes where there is care of a patient. CBC in South Africa is guided by the home-based care policy. However, this policy does not deal with how HCW should be managed in CBC. This study sought to explore health care waste management (HCWM) in CBC in South Africa from the policy-makers' and stakeholders' perspective.

Methods

Semi-structured interviews were conducted with 9 policy-makers and 21 stakeholders working in 29 communities in Durban, South Africa. Interviews were conducted in English and were guided by an interview guide with open-ended questions. Data was analysed thematically.

Results

The Durban Solid Waste (DSW) unit of the eThekwini Municipality is responsible for overseeing all waste management programmes in communities. Lack of segregation of waste and illegal dumping of waste were the main barriers to proper management practices of HCW at the household level, while at the municipal level, corrupt tender processes and inadequate funding for waste management programmes were identified as the main barriers. In order to address these issues, all the policy-makers and stakeholders have taken steps to collaborate and develop education awareness programmes. They also liaise with various government offices to provide resources for waste management programmes.
Conclusions

HCW is generated in CBC, and it is poorly managed and treated as domestic waste. With the rollout of the new primary health care model, there is a greater need to consider HCWM in CBC. There is a need for the Department of Health to work together with the municipality to ensure that they devise measures that will help to deal with improper HCWM in the communities.

Electronic supplementary material: The online version of this article (doi:10.1186/s12913-017-2236-x) contains supplementary material, which is available to authorized users.

Background

Following the Alma Ata Declaration on Primary Health Care in 1978, many low- and middle-income countries (LMICs) have made it a policy priority to shift the care of chronically ill patients from hospitals to the community [1]. The World Health Organization (WHO) has also promoted home and community-based care (CBC) and the concept of task-shifting to deal with health worker shortages in LMICs [2]. In recent years, considerable increases in funding for HIV/AIDS/TB and the need to meet the Millennium Development Goals have led to a renewed focus on CBC in many LMICs [1,2]. In sub-Saharan Africa, community-based organisations (CBOs) are a key element in the provision of primary health care services in poor and marginalised communities [2-4]. In the HIV/AIDS sector, for example, CBOs often provide care and resources to marginalised populations such as sex workers, drug users, gay men, the aged, the poor and the homeless [2,5]. CBOs are relevant in providing health care because they understand their local communities and they are linked to the populations that they serve [6]. CBOs serve as a link between the health care system, decision-makers and stakeholders in developing health policies and programmes [7].
They are also involved in research development that aims at informing policy [8] and help to facilitate the involvement of communities in planning and implementation of health care in order to achieve 'health for all' , a key principle for primary health care [5]. Community-based care in South Africa is guided by the home and community-based care policy that was developed in 2001 which is still a draft document. The main thrust of the policy is the provision of CBC in the homes of the patients. The policy encourages community members to participate in the provision of care to the ill people [9]. However, this policy does not deal with how health care waste (HCW) should be managed in CBC. The WHO defines HCW as all waste that is generated in health care facilities, research centres, and laboratories that are related to medical procedures. It also includes waste produced from health care activities in minor and scattered sources including in homes where there is recuperative care, self-administration of insulin and dialysis [10]. HCW management (HCWM) involves segregation, collection, storage, treatment, transportation, safe disposal [11] and monitoring of these activities [10]. When HCW is not properly managed, it could transmit infectious diseases such as HIV/AIDS, hepatitis B and C and tuberculosis to the public, and could cause death [12,13]. HCW could also reduce environmental aesthetics [14], cause social contagion [15] and also cause the breeding of disease-causing vectors such as cockroaches, flies and rodents [16,17]. In South Africa, a new primary health care (PHC) reengineering initiative aims to scale-up the provision of CBC. A central element in this initiative is the use of outreach teams, comprising nurses and community health workers, to provide care to the largely poor and marginalised communities across the country [18,19]. 
One would expect that the scale-up of the provision of community-based care will inevitably lead to an increase in the amount of HCW generated in homes, and this suggests the need to pay more attention to the HCW that emanates from homes where there is care for patients [20]. In KwaZulu-Natal province, where this study was conducted, some challenges with HCWM have been documented. For example, a study conducted in 30 clinics in iLembe health district revealed that HCW was frequently not segregated from the point of generation to the point of disposal; it was sometimes transported together with goods and passengers, and the vehicles were driven by people who were untrained, unequipped and not registered to handle HCW [21]. Given the recent policy direction of the Department of Health to promote home and community-based care on a national scale, the perspectives of policy-makers and stakeholders could help shed light on issues relevant for policy decision-making on health care waste management in community-based care. Regrettably, little is known about the perspectives of policy-makers and stakeholders regarding HCWM in community-based care in South Africa. In this study, we sought to answer the following questions: What are policy-makers' and stakeholders' perceptions regarding HCWM in community-based care? How do policy-makers and stakeholders describe challenges related to HCWM in CBC? How do policy-makers and stakeholders address the challenges related to HCWM in CBC?

Research design

This was a descriptive qualitative study [22] that provided in-depth insights into policy-makers' and stakeholders' perceived challenges with HCWM, their causes, as well as descriptions of how the challenges related to HCWM were addressed.

Study setting and context

This study was conducted in 29 resource-scarce communities located on the outskirts of Durban, KwaZulu-Natal, South Africa. Of these, 21 were peri-urban communities.
Peri-urban communities are segregated communities that were created by the apartheid government in the 1950s and 1960s and were racially structured to stabilize black labour in the industrial economy. These communities are characterised by the presence of small sized houses named after the reconstruction and development programme (RDP) that was initiated by the government in 1994 to promote service delivery. The RDP houses are for the poor who earn less than R3500 per month [23]. Currently, because the government provides low subsidies for developing these houses, RDP houses are usually built on cheap land located away from economic opportunities. The minibus taxi industry provides community members with transport which links dwellers to the cities to access economic opportunities. Because most people in these communities do not work and/or have an unsteady income, they tend to build 'back rooms' which are extensions of the main house. They rent the backrooms out to people who are still waiting for RDP houses as a way of earning a living. Some households rent out the RDP houses and opt to live in the backrooms [24]. Furthermore, three of the communities that were included in the study were informal settlements. Informal settlements consist of houses that are illegally built on private land, government owned land or tribal land. People who live in informal settlements travel from various places such as rural areas or peri-urban communities and some are foreign nationals who are in search of formal housing and employment. Informal settlements have a high rate of unemployment, food insecurity and poverty [25,26]. Five were rural communities: these are areas that are neither peri-urban nor informal settlements. The communities are settlements usually located far from economic centres and affordable transport is limited and expensive. 
They are occupied mainly by older populations that have retired and rely on subsistence agriculture, social grants and allowances from family members who work in cities [27]. All 29 resource-poor communities are characterized by high rates of unemployment and poverty, and there is a lack of quality social services such as education, health and transport services. Municipal services such as water, sanitation and electricity are basic and free [28-30]. These communities are serviced by the eThekwini Municipality of KwaZulu-Natal.

Study participants

Four kinds of participants were included in our sample: nine ward councillors, who are policy-makers, and five area cleansing officers, ten managers of CBOs and six education officers, who are stakeholders in charge of overseeing general waste management activities in the communities. The number of years of experience in the community ranged from one to 13 years, as described in Table 1 below.

Sampling procedure for the participants

CBO managers were selected using snowball sampling. We contacted two CBO managers known from previous research. These managers provided contact details of the other managers that they knew. From the contact details provided, eight CBO managers from different communities were recruited purposively if their organisations offered home-based care services. CBO managers were included in the study because they oversee CBC programmes that are responsible for generating health care waste. The CBO managers were chosen if they were willing to participate in the study. Ten CBO managers (one per organisation) participated in the study, while three were not available during the study period. Contact details of the ward councillors, area cleansing officers and education officers who served the 29 communities were obtained from the CBO managers.
The ward councillors, area cleansing officers and education officers were chosen if they were willing to participate in the study and if the CBOs fell within their jurisdiction. Ten ward councillors served the thirteen communities. However, only nine of them participated in the study; the remaining one declined to participate, citing a lack of interest. Five area cleansing officers and six education officers participated in the study because the thirteen communities fell within their areas of jurisdiction. All participants were selected if they had worked for a period of six months or more, because respondents with such length of work experience were in a better position to provide insights to the study.

[Table 1, participant descriptions:] Area cleansing officers: these are stakeholders and are government employees at the municipal level. They supervise waste management contractors, inspect communities to ensure that waste is collected, and they oversee garbage bag distribution within the communities. Education officers: they are stakeholders who are employed by the government at the municipal level. They develop and facilitate education programmes on waste management in the communities (6 participants; 1-3 years of experience).

Ethical considerations and data collection procedure

Ethical approval for this study was obtained from the Humanities and Social Science Research Ethics Committee of the University of KwaZulu-Natal, South Africa. Semi-structured interviews were conducted with nine policy-makers and twenty-one stakeholders, and these were guided by interview schedules. In order to develop the interview schedules, we conducted a rapid review of the literature on community-based care and health care waste management in South Africa. Information derived from this review was then used to develop open-ended questions for the interview schedules for each group of participants (see Additional file 1). The interviews were conducted by the lead author.
The interview schedule covered three main themes: 1) the policy-makers' and the stakeholders' perspectives regarding health care waste management in community-based care; 2) the policy-makers' and stakeholders' perceived challenges regarding health care waste management in community-based care; and 3) the strategies employed by policy-makers and stakeholders to address the challenges related to health care waste management in community-based care. Participation in the study was voluntary, and anonymity was achieved through the use of titles rather than names. The objectives of the study were explained to the participants; informed consent was sought, and all participants gave both written and verbal informed consent. Permission to record all interviews was sought and granted. All interviews were conducted in English in the participants' offices and lasted from about 40 to 60 min. Data collection took place from August 2014 to March 2015. Permission to publish the findings was sought and granted both from the ethics committee and the participants.

Data analysis

All the recorded data was transcribed verbatim in English by the research assistant. Data analysis was conducted using the six steps of thematic analysis suggested by Braun and Clarke [31]. The first step involved familiarization with the data through reading all the transcribed scripts; we immersed ourselves in the data in order to familiarise ourselves with it. In the second stage, we identified and generated initial codes. In the third stage, we re-read all the transcripts and collated the codes identified. In the fourth stage, we generated themes from the codes. Fifth, we read and grouped the identified themes and then proceeded to identify sub-themes. Sixth, we discussed each of the themes and sub-themes. We reached consensus; as such, all the main themes and sub-themes are presented in the findings.
Results

The following themes were derived from the data: perceived HCWM practices in community-based care by policy-makers and stakeholders, the perceived challenges, the perceived causes of the challenges, and the strategies used to address the challenges of HCWM practices in community-based care. All the major themes are in bold, while minor themes are italicised in bold. Narratives showing the opinions and positions of participants within their specific roles are presented under each theme and sub-theme. However, a few dissenting opinions were also noted when comparing opinions and positions among different roles. Such dissenting opinions have been presented as such in the narratives, together with verbatim quotes.

The perceived health care waste management practices in community-based care

Participants explained that the Durban Solid Waste (DSW) unit of the eThekwini Municipality is responsible for overseeing all waste management programmes in their communities. Waste management services are free for those in rural and peri-urban communities and informal settlements because they are subsidised by the government. All participants indicated that health care waste is mixed with domestic waste, treated as domestic waste and removed together with domestic waste from all homes. They further explained that DSW has garbage trucks and waste collectors that remove all domestic waste, mixed with health care waste, from the suburbs. The ward councillors indicated that, as a way of empowering communities, the municipality awards tenders to community members. The selected community members work as waste management contractors whose job is to remove all waste from homes to the disposal sites. Ward councillors and area cleansing officers indicated that all tenders are advertised in the media and the most competent contractors are offered the tenders.
Contractors sign contracts with DSW and are given rules and regulations on how they should operate: "Yes they sign a contract document that binds them on how to work. It is a very thick document which constitutes what they are supposed to do and how and what is expected of them and their staff." (Area Cleansing Officer 1). All participants were asked to give an account of how HCW is managed in the communities that they served. The CBO managers explained that they advised their community health workers (CHWs), who provide home visits to the patients, to dispose of the HCW in black garbage bags or in any other plastic bag and to tie the plastics containing HCW to prevent spillage. Contrary to what the CBO managers said, most area cleansing officers were defensive when they were asked to explain how HCW is managed and removed from homes in the communities that they served. Most of them indicated that they were not aware that HCW was generated in homes: "The thing is we do not know that there is a problem like that, if we knew of a house that has a patient, then maybe we can make an arrangement." (Area Cleansing Officer 5). Area cleansing officers emphasised that there is a private company responsible for removing HCW from hospitals and clinics, yet they did not say who is responsible for removing HCW from homes where patients are receiving care. They insisted that their main role is to ensure that all domestic waste, and not HCW, generated in homes is removed by community waste management contractors: "A private company collects all the waste for the hospitals and the clinics, but us in the DSW unit we only collect domestic waste." (Area Cleansing Officer 2). Only two area cleansing officers and all education officers were willing to openly discuss the issue of HCWM. The two acknowledged that they are aware that HCW is generated in homes and is usually treated and removed together with domestic waste.
The two area cleansing officers explained that they handle HCW as domestic waste because it is not generated in large quantities, unlike at the hospitals. One of them said: "Such cases are few that we have health care waste… so because it may be only one residence that has a patient, we encourage such people to put everything (HCW) in a plastic bag and tie it up, then place it in the house bin, because there is no other way. Unless if there is a lot of people, then we can refer them to those that deal with medical waste in the clinics and the hospitals, they have their own special truck that collects medical waste." (Area Cleansing Officer 3). All participants were asked to describe the challenges related to HCWM in the communities that they served, as well as their perspectives about the causes of these challenges. The challenges are discussed at the household/community and the municipal levels. At the household level, the main themes that emerged are lack of segregation of waste by households and illegal dumping, and these are discussed in detail below.

The perceived challenges with health care waste management practices in community-based care at the household level

This theme discusses the challenges that impede health care waste management practices in CBC, their causes and the strategies used to deal with them. The themes are presented at the community level and the municipality level. At the community level, the challenges range from lack of segregation of HCW by households to illegal dumping. At the municipality level, the challenges range from corrupt tender processes to inadequate funding for general waste management. A wide range of causes of the challenges and of strategies used to deal with them are provided, and all themes and sub-themes are summarized in Table 2 and discussed in detail below.
Lack of segregation of waste in homes by households

All participants revealed that, generally, waste segregation is the responsibility of the households and that waste collectors are responsible for collecting the waste from homes and transporting it to the landfill. They also explained that households do not separate the HCW and, as a result, waste collectors end up collecting and transporting the unsegregated HCW to the landfill. There were incidences of waste collectors being pricked by needles while collecting waste from homes. Participants revealed that such incidences were investigated and the affected individuals sought medical attention. "Another thing is, needles which people use when they have diabetes or anything, they just throw them away. We have had incidences where our workers have been pricked by them because even if you give them gloves a needle is a needle, it goes through. But such incidences are thoroughly investigated." (Education Officer 1). All participants felt that the possible cause of the lack of segregation of waste by household members was a lack of knowledge about waste segregation. They believed that there was a need for community members to be educated on how to handle HCW. One area cleansing officer said: "…Communities must be taught to at least wrap a needle with a tissue or something before disposing it… Just for them to learn simple things like that for now." (Area Cleansing Officer 5).

Illegal dumping

All respondents indicated that all the study communities were facing challenges with illegal dumping. Community members disposed of HCW together with domestic waste illegally in the bush, on the roads and in streams. "There is litter all around. You go to the roads, rivers and streams you find that they are full of litter. People throw dirty diapers and other things there…" (Ward Councillor 7). All participants said that illegal dumps are a hazard to children, who make these dumps their playgrounds and scavenge there for used items.
One CBO manager said: "With these illegal dumps that are right next to our homes. You find that children go to these areas and play there! It is dangerous!" (Manager C). Ward councillors believed that illegal dumps created an opportunity for criminal activities, especially in peri-urban communities and informal settlements. The councillors said that in these communities there were instances when they found foetuses at the illegal dumps, suspected to have been from illegal abortions carried out by young girls in these communities. They also felt that illegal dumpsites were a hiding place for boys who used injectable drugs and disposed of the needles illegally at the dumps. Additionally, one councillor narrated how, in two separate communities, they found a woman's body that had been burnt in an illegal dump in the bush, and in another community a woman was beaten up and left to die at a dumpsite.

The perceived causes of challenges with health care waste management practices in community-based care at the household level

All respondents revealed that illegal dumping of HCW was the main cause of the challenges related to health care waste management in the communities. The reasons provided ranged from laziness to lack of space in the communities.

Laziness and negative attitudes towards waste management

All participants claimed that community members were illegally dumping HCW because they were too lazy to take it out on the days designated for waste collection. They reported that community members disposed of HCW illegally because they believed that this practice was a way of indirectly creating jobs for the waste collectors. All education officers and area cleansing officers expressed outrage at this attitude because these practices undermined their work by creating the impression that they are not doing their jobs effectively. One area cleansing officer, with an angry tone, said: "The mind-set of the people is terrible!
Their attitude towards waste management is unacceptable! Throwing away litter! Anywhere and everywhere! Because they believe that they are 'creating jobs'! Who does that? Really?" (Area Cleansing Officer 4).

Irregular collection of health care waste by contractors

All participants agreed that irregular collection of waste caused the creation of illegal dumps. For example, CBO managers and ward councillors explained that there were several instances when waste was left uncollected in the communities for several days without any notice from the waste contractors. They revealed that the uncollected waste is scattered by animals that tear up the garbage bags to scavenge for food. To ensure that waste management services continue, area cleansing officers seek permission to use the DSW trucks (meant to serve suburbs) to collect all waste from the communities. The education officers and the area cleansing officers revealed that they have the power to fine and penalise contractors who fail to adhere to their contracts. Those that do not deliver the required services or pay the fines are reported to top management so that their contracts can be cancelled and their services terminated. "We report those that do not pay the fines and those who continually fail to deliver services according to the stipulated contract. We recommend to the top management that they should not be paid the full amount or their contracts should be cancelled." (Area Cleansing Officer 4).

Insufficient garbage bags

Education officers and area cleansing officers provided more insight into this issue because they are directly involved with waste management and related issues. They explained that households in peri-urban and rural areas, as well as those in informal settlements, receive only two garbage bags per week, while those in the suburbs receive a two-month supply. With regard to the use of garbage bags, the education officers expressed concern.
They explained that most households in the suburbs adhere to proper waste management practices and use the garbage bags for the intended purposes, while in most cases those in peri-urban areas, rural areas and informal settlements use the garbage bags for other purposes, such as storing clothes, or for committing crimes such as hiding dead bodies or foetuses resulting from illegal abortions. However, the main reason why only two garbage bags were provided was the municipality's inadequate funding for a sufficient supply of garbage bags. Most area cleansing officers said that two garbage bags per week were not sufficient to accommodate the HCW that is generated on a daily basis. They said that this was an issue beyond their control and there was nothing they could do to rectify the problem because they work with a given budget, which was limited. They also said that they are discussing the issue with their superiors to find a possible solution regarding budget increments. One area cleansing officer said that they had negotiated with their superiors in management for several years to offer households at least a three months' supply of garbage bags, without success. "There is nothing we can do because it is something we have raised with the management, saying that people should be given a three month supply as it happens with the suburb… They said that they have problems relating to budget and the money is not adequate for buying garbage bags for households…" (Area Cleansing Officer 2).

Lack of participation in waste management programmes

Education officers stated that, with the help of community leaders and ward councillors, they organise clean-up campaigns in the communities aimed at removing all illegal dumps. They hold workshops with community members and teach them about the importance of keeping the environment clean. During the campaigns, education officers encourage community members to take ownership of the problem (illegal dumping).
After that, they choose a day for cleaning and removing all the illegal dumps in the communities. Education officers said that they felt disappointed because community members do not commit to such programmes. They indicated that many community members do not show up for clean-up sessions. They believed that such acts undermine their work.

Backrooms in peri-urban communities

Area cleansing officers blamed some households for creating enabling environments for illegal dumping in the community. They revealed that some households have illegal backrooms. Backrooms are structures that households in peri-urban communities build as an extension of their own house and rent out to tenants as a way of earning a living. Area cleansing officers revealed that when such structures are built, no toilets or refuse bags are provided to the tenants, because they are not legal occupants. Occupants of such backrooms are expected to share all the sanitation facilities with the landlords, but many of them dump their HCW illegally. "The refuse bag distributors know that they should give one plastic bag to each household, but then there are houses with 4 or 5 tenants. Tenants also need refuse bags, but they do not get them because the people who give bags don't know them, they are not appearing on their database so they are staying illegally." (Area Cleansing Officer 1). All area cleansing officers suggested that government must take responsibility for addressing this problem because it has to do with service delivery and is a housing issue that needs to be dealt with by the housing department.

Long distance between homes and waste storage facilities

Ward councillors and area cleansing officers revealed that in informal settlements, roads are inaccessible to waste collectors. As such, waste collection points are built close to the main roads.
All households are expected to remove their waste from their homes and store it in these facilities on a daily basis. They explained that the long distance between the homes and the waste storage facilities was a disincentive for community members, which negatively affected their use of such facilities. Area cleansing officers said that this issue is beyond their control and felt that it is a service delivery issue that is supposed to be addressed by the government.

Slow change in rural areas

This issue was raised by only one education officer, who believed that change in rural areas is slow: households in rural areas still buried HCW even when they had been educated about its negative impacts. In response to this challenge, she said that all education officers continue to offer education about proper management of waste. The education officer also believed that the municipality needs to put extra effort into monitoring waste management activities in these areas.

Perceived challenges with health care waste management practices at the municipality level

All participants felt that there are challenges at the municipal level that hamper proper management of waste in homes in the communities. They identified corrupt tender processes and insufficient funding for waste management services as problems at the municipal level.

Corrupt tender processes

All participants believed that service delivery was not within their purview and was therefore an issue that they could not address. All area cleansing officers expressed disagreement with the process for choosing the contractors responsible for managing waste in the communities. They felt that the tender process was corrupt and lacked transparency. The area cleansing officers revealed that most contractors got their tenders because they had political connections with the tender board.
They complained that the government does not involve area cleansing officers in the selection of the contractors, even though they are in a better position to assess them because they work directly with the people and know their capabilities. They criticised the process and indicated that it interfered with waste management services in the communities. They believed, from their observations, that the contractors who are awarded the tenders are incompetent and unskilled in handling waste in general. They said that some waste contractors used open vans when collecting waste from the communities: "You find that they use open vans and staff in the same vans to collect the waste." (Area Cleansing Officer 5). The education officers revealed that the contractors' trucks constantly broke down and, as a result, waste is left uncollected in communities for several weeks: "I won't lie, there are times when the trucks break down and waste is left uncollected. When we ask them they say they are doing something about it. They delay to replace the trucks." (Education Officer 2). Area cleansing officers and education officers felt that the constant breakdown of the contractors' trucks caused households to resort to illegal dumping. They also said that they have powers to fine offending contractors and to recommend termination of their contracts. However, area cleansing officers felt that their powers were undermined by the tender board, which turned down their recommendations. They indicated that this caused conflict between them and the contractors. Area cleansing officers believed that most contractors had lost respect for them and undermined their job. "Most of the contractors are politically connected. Sometimes you report and recommend that the contractor's contract should be cancelled because he or she is not performing but you find that they have been rewarded with a tender again. Then we look like we are bad people and contractors cannot respect us anymore, they do what they want, you know!
We end up dealing with one problem that is not getting solved." (Area Cleansing Officer 4).

Inadequate funding for waste management programmes

Ward councillors, area cleansing officers and education officers believed that, in general, the government did not treat waste management as a priority in the way it treated the provision of housing, citing insufficient government funding for waste management. Two education officers felt that the municipality was not willing to provide sufficient funds for clean-up sessions because these were not a priority issue for the government. "Collection trucks and resources for clean-ups are costly. One of the challenges is funds. There are limited funds for clean-ups." (Education Officer 1). One education officer said that insufficient funding had a negative impact on human resources. He said that the job of an education officer requires more human resources because they inspect all communities and also attend meetings. Some meetings were held on the same day and at the same time, and it was hard for them to prioritise where to go because all meetings were important and required their attendance. Even though they are each assigned to attend different meetings, they are still unable to attend all of them. "There are 18 meeting rooms and only three of us and the challenge is that sometimes there are multiple meetings on the same day due to a lot of war rooms. We then have to separate ourselves between the war rooms but we cannot make it. There is so much demand and we are few." (Education Officer 3). On the other hand, ward councillors revealed that general waste management issues were not a priority on the list of their community development programmes. They revealed that the top developmental issue is housing, followed by unemployment.
They also indicated that even community members are not interested in waste management issues because they are more concerned with housing and employment. "People are hungry, they want jobs and houses. So when you talk about waste no one will listen they all leave you because they are not interested." (Ward Councillor D).

Strategies used to deal with health care waste management challenges in community-based care at the household and municipality levels combined

All participants indicated that they do not provide programmes directly related to health care waste management. All programmes that are provided aim at managing waste in general; these strategies are discussed below.

Collaboration

Education officers said that they have taken some steps to address the problems of lack of segregation of waste in general, illegal dumping and lack of participation by community members. This includes working with CBO managers, community leaders, ward councillors and area cleansing officers, who said that they collaborate with the Departments of Health, Housing, Environmental Affairs and Environmental Health to provide various education programmes to community members. They offer door-to-door education on general waste management and distribute pamphlets with information on waste management. They also hold monthly 'Masakhane road shows' where the public is educated on the separation of various types of waste. Education trucks (mobile classrooms) are provided on site to schools and organisations to offer training on waste minimisation. Enviro-forums are conducted with business owners, health organisations, community members and councillors, aimed at effective coordination on issues regarding the protection of the environment. Special days are set aside to raise issues on the environment and the importance of managing general waste.
Weekly landfill site tours that cover general waste management topics, financial issues, recycling and conservancy management are conducted. Lastly, buy-back and drop-off centres are advertised. These are recycling initiatives where community members can drop off recyclable products in exchange for money at buy-back centres, or drop off recyclable products without reimbursement at drop-off centres. Education officers also indicated that they hold clean-up sessions. In instances where community members do not show up, they reschedule such sessions and continue to mobilise the community members. They collaborate with the Environmental Health Department and hold workshops with community members to educate them about the importance of managing various kinds of waste. "We postpone it. We do not just give up at the first point. We call another meeting and we involve the ward councillors and the environmental health department so that they advise the community on the hazards that come with a dirty place." (Education Officer 3). Education officers also encourage people to 'adopt a spot'. This is usually done after cleaning an area that was previously an illegal dump. People are encouraged to adopt and own such spots and use them as gardens or play parks. The names of the adopters are displayed on those spots and published in community newspapers. Annual competitions are held and prizes are given to the adopters that manage and sustain the spots. This is a way of encouraging people to participate in the clean-up sessions. Education officers also indicated that they focus more on providing education in schools to target children. They do this in the hope that the children would implement what they learn at school in their homes, and that households would learn from the children. "What we do is increase the levels of education in schools. So we won't need a lot of money.
Therefore, the more people are aware about proper waste mismanagement, the more they take initiative and the less money spent." (Education Officer 3).

Reporting and liaising with government

All ward councillors, area cleansing officers and education officers felt that they had no power to address issues concerning corrupt tender processes. They said that these issues are beyond their control because they involve politics. However, they address issues regarding the distance between homes and waste storage facilities by reporting the matter to the Department of Human Settlements, which is in charge of housing issues. On the other hand, to deal with the insufficient funding for clean-up sessions and for garbage bags, all ward councillors, area cleansing officers and education officers explained that they are still negotiating with the government to increase its budget: "We do have meetings where we present all our challenges. So it is in these meetings that we try and negotiate with our superiors that we need resources for waste management, for example they must provide more garbage bags for the households…" (Ward Councillor A).

Discussion

Previous studies show that HCW is improperly managed in hospital and clinic settings [13, 32–36]. Our study provides nuanced qualitative findings which illustrate that HCW is also not properly managed in CBC. This finding contributes to the body of knowledge on HCWM. The finding that the municipality is in charge of overseeing all domestic waste management in the communities, including HCW, is consistent with the requirements of the South African National Standards (SANS 2004) on HCWM [37]. The SANS 2004 assumes that HCW generated in homes as a result of care for a patient is in small quantities, and therefore requires the municipalities in charge of managing domestic waste to handle, transport and treat this waste before its disposal [37].
However, the findings reveal that, in practice, HCW is treated as domestic waste in contravention of the SANS requirements. Furthermore, it is intriguing that participants assume that the HCW generated in homes as a result of care for a patient is in small quantities. Yet South Africa has the highest HIV prevalence in the world, with about 5.6 million people living with HIV [38], most of whom receive care at home [39]. South Africa also has the largest number of TB incident cases in the world [40]. Given that the standards were developed in 2004, it seems reasonable to argue that they do not take into account subsequent policy developments that have led to the rise in home-based care activities in South Africa [2]. These developments include the high prevalence of HIV and TB as well as the recent primary health care re-engineering initiative, which aims to scale up the provision of home health care services to communities across the country through outreach teams [18,19]. The existing and new policy developments highlight the need for policymakers to revise the policy on HCWM in CBC. Area cleansing officers expressed conflicting perspectives about the management of waste in homes. While some claimed that they were not aware that HCW was being generated in homes, others acknowledged that it is mixed with domestic waste. This indicates that HCW from homes is not treated as stipulated in the SANS (2004). Even if the volume of HCW generated in homes is small, this does not diminish the risks that it might pose to the environment and to people. Moreover, this finding shows a misunderstanding among stakeholders in the municipality about how HCW from homes should be handled. The SANS (2004) [37] requires that all HCW from homes be treated as HCW and not as domestic waste. The standards further require the health care providers who are assigned to patients to provide containers for storing sharps waste, specifically for diabetic patients.
For other infectious HCW besides sharps, it is recommended that private arrangements be made with hospitals or clinics for the collection and disposal of HCW from homes by the contractors responsible for collecting HCW from hospitals and clinics. It was clear from our study that health care providers do not provide storage facilities for HCW to households where patients are receiving care. Additionally, no private arrangements are made for the collection of HCW from the homes of patients in CBC. Participants did not seem to know whose responsibility it was to provide these facilities and services. These findings highlight a need for the Department of Health to develop policies to govern HCW from CBC and other minor sources, as is the case with hospitals, clinics and other health facilities. Further, the Department of Health and the Durban Solid Waste unit (DSW) should develop formal partnerships to help delineate responsibilities relating to the provision of storage facilities for HCW and the disposal of the stored waste. Stakeholders in this study indicated that separation of HCW in homes is the responsibility of households. Mixing of HCW with domestic waste makes treatment of such waste difficult [36]. Improper segregation of HCW exposes family members to injuries resulting from sharps waste and exposes them to infections [41]. Although education officers indicated that they provide education and awareness programmes to community members, it is clear from our findings that this has not yielded the desired results because it focuses more on domestic waste than on HCWM. There is therefore a need for the Department of Health to work with the area cleansing officers to develop mechanisms for identifying households that have patients and providing them with HCW storage bins, as recommended by the SANS 2004 [37]. Mechanisms must also be put in place to monitor HCWM activities in homes to ensure compliance.
From the policy-makers' perspectives, the main reason for illegal dumping by community members is the insufficient allocation of budget for HCWM, which results in shortages in the supply of garbage bags for domestic waste. The area cleansing officers stated that they are in constant negotiation with their superiors for adequate budget allocation. We found that most households are poor and rely solely on the government to provide them with houses and basic services, including waste management services. As a consequence, some households in peri-urban communities build backrooms to generate income. Occupants of the backrooms are illegal occupants and they contribute to the problem of illegal dumping of HCW, which could cause air, land and water pollution [42]. There is a need for the Department of Housing to develop and enforce housing laws that prohibit the building of illegal structures. Steps must also be taken to deter defaulters. Furthermore, irregular collection of waste by waste collectors was a major factor contributing to illegal dumping of waste by community members. Both the irregular collection of waste and the insufficient supply of garbage bags are problems of poor service delivery. All participants in this study revealed that these problems were caused by inadequate funding. The issue of inadequate funding is common in the service delivery literature in sub-Saharan Africa. Various authors [43,44] explain that government taxes, usage fee revenues and aid are the main sources of funding for water, sanitation and electricity in sub-Saharan Africa, and yet the allocation of funding for these services is only 0.5% of gross domestic product (GDP). In addition, some authors [45,46] argue that municipalities also lack skilled people in local government to run service delivery programmes adequately.
The process of rolling out services to the communities is slow and hampers the quality and efficiency of waste management programmes [45,46]. Poor allocation of funding for waste management programmes could mean such services are not a priority for government. The government should promote sanitation programmes as a priority for protecting the health of its citizens by allocating adequate funding for waste management at the municipal level. Our study reveals a lack of cooperation from community members in the removal of waste from homes and during clean-up sessions in the community. The education officers revealed that they provide various education programmes and clean-up campaigns in the communities that aim at changing people's attitudes towards waste management. Clean-up campaigns are important because they give community members a sense of ownership not only of community goods but also of community problems. Clean-up initiatives can also serve as deterrents to improper waste disposal: if participants know that they will be called out to clean up, they might be less likely to dispose of waste improperly and more likely to discourage those who do so. Research has shown that corruption is a persistent issue facing public service institutions in LMICs [47]. This study reveals allegations of corrupt tender processes for waste contractors, thereby affecting service delivery. It is not clear how true these allegations are. However, a study on service delivery in South Africa found that most municipal officials in charge of awarding tenders were corrupt and were only interested in enriching themselves. Furthermore, that study revealed that policies on fighting corruption were not implemented, which led to misappropriation of funds among municipal officials without any accountability [48].
Considering that issues of corruption are much broader and cannot be addressed with one clear-cut solution, we recommend that further studies be conducted to provide in-depth insights into this issue. Our study shows that incompetent contractors were hired to provide waste collection services in the communities, and this undermined waste collection, with negative ramifications for the community as a whole. We recommend that further studies be conducted to explore this issue. The findings would inform efforts to solve the problems of corruption related to health care waste management. The major strength of this study lies in its method. The qualitative approach illuminates how and why HCW is improperly managed in CBC. The policy-makers and stakeholders were the appropriate participants to provide insight into the issue of HCWM. The main limitation of the study is that the perspectives of the people overseeing HCWM at the Department of Health were not explored. Their perspectives would have added more insight into waste management policies and practices at the level of the department.

Conclusions

This study shows that the waste generated in community-based care is improperly managed. Given that South Africa has the highest HIV and TB prevalence in the world and the majority of people living with HIV and TB receive care at home, it is imperative that policy-makers pay attention to HCWM in CBC. With the rollout of the new primary health care model, there is an even greater need to treat HCWM in CBC as a priority issue. Home-based care policies should be revised to include provisions for HCWM. Further research should be conducted with households and waste collectors to understand their HCWM experiences. Research could also be conducted with the Department of Health and other departments that have an interest in HCWM issues to find out their perspectives on HCWM in homes.
These studies could provide deeper insights into how HCW is managed from homes to the point of disposal. Finally, future research should seek to collect data that could be used to develop a conceptual framework to shed light on health care waste management in community-based care and further our understanding of this issue.

Additional file

Additional file 1: Interview guides 1 to 3. The attached file contains three interview guides consisting of a series of open-ended questions that were asked of all the participants in this study. (DOCX 13 kb)
An Investigation of Secondary Teachers' Understanding and Belief on Mathematical Problem Solving
Weaknesses in the problem solving of Indonesian students, as reported by recent international surveys, raise questions about how Indonesian teachers bring the idea of problem solving into mathematics lessons. An explorative study was undertaken to investigate how secondary teachers who teach mathematics at the junior high school level understand mathematical problem solving and what beliefs they hold toward it. Participants were teachers from four cities in East Java province, comprising 45 state-school teachers and 25 private-school teachers. Data were obtained through questionnaires and a written test. The results of this study indicate that the teachers understand pedagogical problem-solving knowledge well, as shown by the high scores of their responses concerning problem solving as instruction and the implementation of problem solving in teaching practice. However, they understand problem-solving content knowledge, such as problem-solving strategies and the meaning of a problem itself, less well. Regarding difficulties, teachers admitted to failing most frequently in (1) determining a precise mathematical model or strategy when carrying out problem-solving steps, which is supported by the test results, in which transformation errors were the most frequently observed errors in the teachers' work, and (2) choosing a suitable real situation when designing context-based problem-solving tasks. Meanwhile, analysis of the teachers' beliefs about problem solving shows that they tend to view both mathematics and how students should learn mathematics as a static body of knowledge, while they tend to believe in applying the idea of problem solving as a dynamic approach when teaching mathematics.
Introduction
It has already been agreed that problem solving is an essential issue, deeply discussed in mathematics education in recent decades, given its practical role for individuals and society.
As one of the five standard competences in mathematics mentioned by the NCTM (National Council of Teachers of Mathematics) [1], problem solving not only develops individuals' conceptions of aspects of mathematics, but also helps them adapt to various problems in many aspects of their lives. The NCTM [1] also recommended that problem solving be the focus of mathematics teaching because it encompasses skills and functions that are an important part of everyday life. This issue has drawn a variety of responses, concerning both successes and difficulties in applying it in practical mathematics teaching and learning in many countries. It is generally known that the mathematics curriculum in some countries puts problem solving at the heart of the teaching and learning of mathematics [20]. Teachers' beliefs about students' ability and learning greatly influence their teaching practices [21]. The study of Stipek, Givvin, Salmon, and MacGyvers [16] shows that there is substantial coherence among teachers' beliefs and consistent associations between their beliefs and their practice. Thus, it is important to know what beliefs teachers in Indonesia typically hold, particularly in order to understand their influence on teaching practice. Hence, this issue raises the questions of how Indonesian teachers understand the idea of problem solving in terms of both content and pedagogical knowledge, what difficulties they experience, and what their beliefs on this issue are. As an initial step to address these questions, the present study aims to investigate secondary teachers' understanding of and beliefs about mathematical problem solving. Therefore, the following research questions were addressed: 1.
How do secondary teachers understand mathematical problem solving with regard to the notion of a problem, problem solving as instruction, problem-solving steps, problem-solving strategies, and the instructional practice of problem solving, and what is their level of performance on problem-solving tasks? 2. What difficulties do secondary teachers experience in understanding mathematical problem solving? 3. What are secondary teachers' beliefs about mathematical problem solving?
Teachers' understanding of the nature of problem solving
Teaching problem solving requires several kinds of knowledge. Chapman [22] mentioned three types of knowledge for teaching problem solving: problem-solving content knowledge, pedagogical problem-solving knowledge, and affective factors and beliefs (see table 1). The knowledge structured in this table can be elaborated as follows. First, understanding what is meant by a problem: Chapman [22] argued that teachers should understand problems based on their structure and purpose in order to make sense of how to guide students' solutions, including understanding types of tasks, such as cognitively demanding tasks; multiple-solution tasks; tasks with the potential to occasion or promote mathematical creativity in problem solving; demanding problems that allow for a variety of problem-solving strategies; rich mathematical tasks; and, particularly, open-ended problems. Second, understanding problem solving in instruction, which means teachers are encouraged to foster their students' precise completion of problem-solving steps. Third, regarding instructional practices for problem solving, teachers need to design a series of activities that give students the opportunity to solve problems demanding complex thinking and logical reasoning. Such activities depend on the teacher's ability to prepare a problem.
Crespo & Sinclair [in 23] explained that teachers who are able to create questions from an initial situation will achieve more successful learning than teachers who pose problems spontaneously. Thus, when teaching for problem solving, teachers should be proficient in problem solving and understand its nature in order to teach it effectively. In general, this categorization offers insight that spans teachers' theoretical conceptions of problem solving and their actual teaching practice. In addition to the specific issues of problem-solving content knowledge, teachers should also be proficient in dealing with a variety of problem-solving tasks, such as completing problem-solving steps and applying problem-solving strategies to various types of mathematical tasks. The level of understanding of this issue has been discussed in several frameworks, such as Polya's problem-solving steps [24], and in error-analysis guidelines for performing mathematical tasks developed by, for instance, Wijaya et al. [13], who adapted three main frameworks, namely Newman's error analysis [25], Blum and Leiss' modelling stages [as cited in 26], and PISA's mathematization stages [27], and by Kohar & Zulkardi [12], who adapted Valley, Murray, & Brown [28] and PISA's mathematical literacy [29]. To examine whether these frameworks can be applied to investigate teachers' level of proficiency in solving a variety of problem-solving tasks, the following table shows the characteristics of some of them.
Understanding what a student knows, can do, and is disposed to do (e.g., students' difficulties with problem solving; characteristics of good problem solvers; students' problem-solving thinking).
Instructional practices for problem solving: understanding how and what it means to help students become better problem solvers (e.g., instructional techniques for heuristics/strategies, metacognition, use of technology, and assessment of students' problem-solving progress; when and how to intervene during students' problem solving).
Affective factors and beliefs: understanding the nature and impact of productive and unproductive affective factors and beliefs on the learning and teaching of problem solving.
In general, from table 2 we can see that the steps of the three frameworks correspond to one another. Specifically, the steps of understanding the problem and devising strategies together convey a similar idea to the reading, comprehension, and transformation steps in Newman's analysis, and this idea also appears in mathematical literacy as the formulate stage. As early stages in solving a mathematical task, they end with determining a precise mathematical model or strategy before the further steps of solving the problem are performed. Likewise, the carrying-out step in Polya's process, process skill in Newman's analysis, and the employ stage in PISA's mathematical literacy each deal with undertaking mathematical procedures to find mathematical results, such as performing arithmetic computations, solving equations, making logical deductions from mathematical assumptions, performing symbolic manipulations, or extracting mathematical information from tables and graphs. Furthermore, the last of Polya's steps, i.e., looking back, corresponds to the final stage of Newman's analysis, i.e., encoding, and of PISA's mathematical literacy, i.e., interpretation.
The idea of this stage is interpreting the mathematical result in terms of the initial problem, such as checking the reasonableness of the answer or considering other strategies and solutions to the problem. The difference appears only in the type of task examined: PISA's mathematical literacy focuses on contextual tasks [29], while Polya and Newman deal, respectively, with general mathematical problems [24] and written mathematical tasks [25]. Comparing the three frameworks, it is clear that Polya's problem-solving steps, which were introduced before the other two frameworks, are in agreement with both Newman's analysis and PISA's mathematical literacy. Thus, Newman's error categories can be used to analyze teachers' level of performance in solving context-based mathematical problem-solving tasks, which were used in this study. Typical ways of teaching problem solving also need to be considered as an important part of understanding instructional practices for problem solving as pedagogical knowledge. As an example, a typical lesson in Japan [30] consists of (1) discussing the previous lesson, (2) presenting a problem, (3) having students work individually or in groups, (4) discussing the problem-solving method, and (5) providing a summary and discussion of an important point. Shimizu [30] explains that summing up the activities, or "matome", functions to (1) underline and summarize the main points, (2) encourage reflection on what has been done, (3) set the context for recognizing new concepts or terms based on previous experience, and (4) make connections between the new topic and previous ones. Similarly, teachers in Finland, as reported by Koponen [31], carry out problem-solving lessons by first introducing the problem, then having students work on the problem in pairs or small groups, instructing individual students on their solutions, and closing with a whole-class discussion.
Based on the above discussion, it is necessary to know how teachers actually understand the nature of problem solving, including the problem itself, problem solving, problem-solving strategies, the problem-solving approach brought into practical classroom teaching, and posing problem-solving tasks. It is also necessary to look into teachers' performance in solving problem-solving tasks.
Teachers' beliefs on mathematical problem solving
Mathematical beliefs, as Raymond said, are regarded as "personal judgments about mathematics formulated from experiences in mathematics" [32]. They act as a prerequisite for the development of problem solving itself [33]. Regarding their categorization in relation to other interrelated fields, Weldeana & Abraham [34] summarized frameworks of teachers' mathematical belief systems into smaller subsystems, including beliefs about the following: (a) the nature of mathematics, (b) the actual context of mathematics teaching and learning, and (c) the ideal context of mathematics teaching and learning. These subsystems are wide-ranging, including, for example, teachers' views on mathematical knowledge, the role of learners and learning, the role of teachers and teaching, and the nature of mathematics activities. This categorization was also conceptualized by Ernest [35], who described three views of the nature of mathematics: the instrumentalist view, the platonist view, and the problem-solving view. The instrumentalist view holds that mathematics is a useful collection of unrelated facts, rules, and skills. The platonist view sees mathematics as a consistent, connected, and objective structure, meaning that mathematics is a unified body of knowledge that is discovered, not created. The problem-solving view sees mathematics as a dynamically organized structure located in a social and cultural context. In an attempt to simplify these views, Beswick [30] has tried to make connections among the nature of mathematics, mathematics learning, and mathematics teaching, as follows.
The relationship between teachers' beliefs about mathematics and their teaching and learning practice has been investigated in many studies. Ernest [in 31] claimed that a teacher's personal view of mathematics underpins his or her beliefs about the teaching and learning of mathematics. In line with this claim, Schoenfeld [37] stated that the teacher's sense of the mathematical enterprise determines the nature of the classroom environment that the teacher creates. Thus, beliefs influence teaching and learning practice. Another study, Ruthven's [38], argued that teachers' perspectives on ability and quality in mathematical learning, like their understanding of mathematics teaching and learning, are probably more easily changed by first changing practical teaching in the classroom. Thus, teaching and learning practice also influences teachers' beliefs. Studies of how deeply mathematics teachers develop beliefs about mathematics and its practice in teaching and learning have been conducted by several researchers. A study by Zhang and Sze [39] comparing preservice teachers' beliefs about mathematics in China and Thailand, for instance, revealed that the participating teachers generally believe that mathematics is about thinking, logic, and usefulness, rather than a subject of calculation and precision. Regarding beliefs about mathematics teaching and learning, the results showed that Chinese preservice teachers' beliefs are more constructivist. Beswick [40], in a study of two mathematics teachers' views on the nature of mathematics and of mathematics as a school subject in terms of the three views (platonist, instrumentalist, and problem solving), suggested that more attention needs to be paid to the beliefs about the nature of mathematics that teachers have constructed as a result of their cumulative experience of learning mathematics.
Thus, we gain insight into how these views offer teachers opportunities to rethink their own beliefs and learn more about teaching practices. These beliefs, as Schoenfeld [37] argued, affect students' beliefs about learning mathematics, which in turn influence their mathematics performance. One reason, for instance, is that teachers rely on established beliefs to choose pedagogical content and curriculum guidelines [e.g., 17]. If teachers tend to view mathematics as a set of tools containing facts, rules, and skills, lessons are likely to be centered on teachers instead of students [41]. Furthermore, students' and teachers' beliefs about the role of problem solving in mathematics are a prerequisite for the development of problem solving itself [41]. Romberg [in 42] shows the relationship among the elements in the teaching of mathematics: not only teachers' mathematical content knowledge but also their beliefs influence students' performance. This view illustrates the importance of investigating the types of beliefs teachers hold in an attempt to improve their proficiency in dealing with problem solving.
Methods
This is a descriptive, explorative study that aims to explore teachers' understanding of mathematical problem solving and their beliefs about it.
Participants
Participants were secondary teachers with at least a bachelor's degree and more than 5 years of teaching experience, from four cities: Surabaya, Sidoarjo, Gresik, and Mojokerto. There were 25 private-school teachers and 45 state-school teachers involved in this study. The questionnaire included items on the practice of problem solving (3 items) and on designing problem-solving tasks (3 items). Three further items were designed to identify teachers' difficulties in problem solving, while another three explored teachers' beliefs, i.e., about mathematics, how to teach mathematics, and how students should learn mathematics.
Data collection
The problem-solving tasks were designed to explore teachers' understanding of contextual tasks that are unlikely to require advanced formal mathematical knowledge as a prerequisite (see the tasks in the appendix). The tasks' demands can be described as follows. The difference between the two tasks lies mainly in which stages are most needed when solving them. Task 1 demands more performance at the final stage of problem solving, i.e., interpreting the mathematical result in terms of the initial problem, while task 2, conversely, demands more performance at the early stages, i.e., from understanding the task to devising strategies or a mathematical model for the task.
Data analysis
Descriptive analysis of teachers' understanding and beliefs was carried out using the scores given for each group of questions. Each option on a question is scored either 1 (no understanding), 2 (partial understanding), or 3 (full understanding). As an example, we give one question from the problem-solving content knowledge group, with its options and scores:
An open-ended mathematics task is a task which...
A. contains open sentences (score 1)
B. produces a variety of strategies to find the solutions (score 3)
C. gives an open opportunity to anyone who wants to solve it (score 1)
D. contains a higher level of difficulty and so requires higher mathematical skill (score 2)
E. has more than one solution (score 3)
F. can be developed into other types of tasks by changing the information or requirements of the solved task (score 2)
The understanding scores range from 1.00 (does not understand) to 3.00 (fully understands), while the belief scores range from 1.00 (platonist view/mathematics as a tool) to 3.00 (problem-solving view).
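The scoring scheme just described can be sketched in a few lines of Python. The level thresholds below are illustrative assumptions only, since the paper's own categorization guideline (table 5) is not reproduced here:

```python
# Sketch of the questionnaire scoring: each chosen option is worth 1, 2 or 3,
# and a participant's score on a category is the mean over its questions.
# The cut-offs in understanding_level are assumed, not the paper's table 5.

def category_score(option_scores):
    """Mean of the option scores in one category (range 1.00-3.00)."""
    return sum(option_scores) / len(option_scores)

def understanding_level(score):
    """Map a 1.00-3.00 score to a coarse level (thresholds assumed)."""
    if score < 2.00:
        return "do not understand"
    elif score < 2.50:
        return "partially understand"
    return "fully understand"

# Example: four questions in one category, options worth 2, 1, 2 and 2
score = category_score([2, 1, 2, 2])   # -> 1.75
level = understanding_level(score)     # -> "do not understand"
```

With these assumed cut-offs, the paper's reported category averages of 1.83 and 2.91 would land in the lowest and highest bands, respectively.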
The score is given to each participant on each question based on the following formula. In detail, we have developed a guideline to categorize these levels, as shown in the following table. Regarding teachers' performance on the problem-solving tasks, we used a framework for investigating an individual's performance based on the stages of mathematical modelling, adapted from Wijaya [13] and Kohar [12], since the tasks fall into the category of context-based problem-solving tasks. This is shown as follows.
Teachers' understanding in the questionnaire results
There were seven categories of questions testing teachers' understanding of problem solving. Each category could contain more than one question. For instance, the category of problem-solving strategies contained two questions, while the category of experience in designing problem-solving tasks contained three questions. Table 7 shows the teachers' average scores on the questionnaire. Table 7 shows that the lowest score for understanding problem solving appears in the problem-solving strategy category (1.83), which means the teachers did not really understand this group of questions. In responding to questions related to problem-solving strategy, the data show that most teachers chose wrong options about which type of problem-solving strategy should be applied to the information given in the question. Moreover, the teachers did not show a particularly good understanding of the meaning of a problem, as indicated by its score of 2.36, which can be interpreted as a low level of full understanding based on the scoring categories in table 5. However, higher scores appear for the categories of problem solving as instruction (2.91) and designing problem-solving tasks (2.64), both of which relate to practical knowledge of problem solving. This shows that even though the teachers are aware of the importance of problem solving as the focus of learning, there are still weaknesses in selecting a task as a problem and in choosing solution strategies.
Thus, in terms of Chapman's categories of knowledge needed to understand problem solving, we can note that the teachers had a relatively better understanding of pedagogical problem-solving knowledge than of problem-solving content knowledge.
Teachers' understanding in performing the problem-solving tasks
In total, we had 140 possible responses (the number of tasks done by all teachers), which included 35 correct responses (25%), 80 incorrect responses (57.15%), i.e., no credit or partial credit, and 25 missing responses (17.85%). Each incorrect response could be coded with more than one sub-type code, since different errors could be found in the same response. For instance, a response could be coded simultaneously as a mathematical processing error of sub-type algebraic error (P-1) and as an interpretation error (I). Thus, the total was no longer 140 items; instead, we found 176 coded responses. The percentage of each category of responses is given as follows. Table 8 shows that transformation/devising-strategies errors were the most frequently found in the teachers' work (23.30%), while mathematical processing errors, conversely, were the least frequently observed (4.55%). Moreover, the teachers also committed comprehension errors frequently, i.e., 21.03%. This points out that the teachers had difficulties in the early stages of the problem-solving steps. As examples of how teachers commit these errors, the following figures show a comprehension error and a transformation-interpretation error, respectively, on task 1. Figure 1. Examples of errors on task 1. Regarding the comprehension error, the teacher in figure 1a was unable to distinguish between relevant and irrelevant information given in the table. He only considered the information in the won, lost, and drawn columns without attending to the goals-for and goals-against columns in his calculation. Thus, we coded it as C-2.
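The multi-code tallying described above, in which one incorrect response may carry several error codes and percentages are taken over all coded responses rather than over the 140 raw responses, can be sketched as follows. The codes and counts in the example are illustrative, not the study's data:

```python
from collections import Counter

def error_percentages(coded_responses):
    """coded_responses: one list of error codes per incorrect response.
    Returns the share of each code among all coded responses, in percent."""
    all_codes = [code for resp in coded_responses for code in resp]
    counts = Counter(all_codes)
    total = len(all_codes)  # total coded responses, not raw responses
    return {code: round(100 * n / total, 2) for code, n in counts.items()}

# Four incorrect responses; the first carries two codes at once (P-1 and I)
sample = [["P-1", "I"], ["T-3"], ["C-2"], ["T-3"]]
print(error_percentages(sample))
# -> {'P-1': 20.0, 'I': 20.0, 'T-3': 40.0, 'C-2': 20.0}
```

This mirrors why the study's denominators differ: 140 responses yield 176 coded responses once double-coded items are counted per code.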
Meanwhile, a transformation error was committed by the teacher in figure 1b, who directly applied a mathematical procedure, i.e., addition and subtraction, without analyzing whether it was needed and without providing precise mathematical argumentation to support the procedure he used. Consequently, he obtained a negative score, which cannot happen in a real-world setting. Thus, we also coded it as an I error. Regarding the transformation and comprehension errors on task 2, we give the following examples. Figure 2. Examples of errors on task 2: (a) T-3, (b) P-1. Figure 2a was coded as a transformation error because the teacher provided a wrong mathematical model to find the best time for holding the competition. She seemed to be trying to find the LCM of the differences in hours among the three cities, which is not suitable for finding the solution. Hence, this error was coded as T-3. Meanwhile, figure 2b actually shows a unique strategy, i.e., an algebraic approach, which was not found in other teachers' work. However, the teacher was not careful in selecting the information for the inequality modelling Ankara time, writing 12 ≤ x + 14 ≤ 23 instead of 12 ≤ x + 2 ≤ 23. Thus, we coded it as P-1, since it contains an algebraic error in finding the solution of an inequality. The teachers' complete performance in solving the tasks is also of interest. Here, we found two approaches, i.e., an arithmetical and an algebraic approach. Here are examples of these approaches from task 1. (a) T-3; translation: "Finding the least common multiple of the differences in hours among those three countries, i.e., LCM of 4, 15, and 5 = 60 hours. So, it should start from 12.00 a.m. to 12.00 p.m." (b) P-1; translation: "Let x be the time for Greenwich; then the allowed times for the other cities to hold the competition are given as follows."
Since the duration of the competition is 1 hour, the range of times that can be used is from 5 to 8 a.m. (Greenwich time). Figure 3a shows that the teacher used symbols representing the number of goals scored against another team, then carried out well-formed algebraic operations on the goals-for and goals-against information from the table in task 1 to find the goals produced by each team. Interestingly, the teacher symbolized the score of Mentari vs. Surya as a : a, which means he knew that the numbers of goals produced by Mentari and Surya were the same, based on the number of drawn matches given in the table. Similarly, the teacher in figure 3b also applied this information to find the score between the two teams in each match, but he applied an arithmetical approach, listing some possible scores and then carrying out a simple operation (addition) to find the score of Mentari vs. Surya FC.
Teachers' difficulties in understanding problem solving
We categorized teachers' difficulties into three types of questions, i.e., problem-solving steps, designing problem-solving tasks, and causes of difficulties in designing problem-solving tasks. Table 7 shows teachers' responses on this issue.
Teachers' beliefs on mathematical problem solving
Beliefs about mathematics can be classified into three parts, namely mathematics as a tool, mathematics as a static body, and mathematics as a dynamic human creation. Table 7 shows teachers' average scores for beliefs about mathematics, teaching mathematics, and learning mathematics.
Discussion and Conclusion
The results on teachers' understanding of problem solving show that most teachers had difficulties with problem-solving content knowledge, especially in identifying problem-solving strategies. Analysis of their performance on the problem-solving tasks also reveals that errors mostly happened when they started to devise strategies or a precise mathematical model.
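The devising-strategies step that task 2 demands, finding a start time that keeps a one-hour event inside every city's allowed local window, can be sketched computationally. The city offsets and the allowed window below are hypothetical placeholders, not the actual task values:

```python
# Sketch of the time-zone scheduling idea behind task 2: enumerate Greenwich
# start hours x and keep those for which the event fits inside the allowed
# local window in every city. Offsets and window are assumed, not the task's.

def feasible_start_hours(offsets, window=(12, 23), duration=1):
    """Greenwich hours x such that, for every city offset, both the local
    start x+offset and the local end x+offset+duration stay in the window.
    (Wrap-around past midnight is ignored in this sketch.)"""
    lo, hi = window
    return [x for x in range(24)
            if all(lo <= x + off and x + off + duration <= hi
                   for off in offsets)]

# e.g. three cities at UTC+0, UTC+2 and UTC+7 (assumed values)
print(feasible_start_hours([0, 2, 7]))  # -> [12, 13, 14, 15]
```

The intersection-of-intervals idea is exactly what the teacher in figure 2b attempted algebraically with inequalities of the form 12 ≤ x + offset ≤ 23.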
These findings are supported by the teachers' opinions in the questionnaire that the most difficult step in carrying out the problem-solving process is devising strategies. Moreover, understanding this knowledge likely needs more attention, as it requires a deep grasp of interrelated content knowledge on a given topic. For instance, when investigating prospective elementary teachers' common thinking about open-ended problems, Chapman [17] found that they knew that an open-ended problem means more than one answer, but they were doubtful about what this meant mathematically, such as being unable to provide examples of other answers for a given open-ended task. Regarding teachers' beliefs, the fact that teachers tend to view both mathematics and how students should learn mathematics from a static-body perspective, while tending to believe in applying the idea of problem solving as a dynamic approach when teaching mathematics, shows that belief in mathematics is not the only factor affecting teaching practice and views of how students should learn. Raymond [in 4] describes other factors besides beliefs about mathematics, such as teacher education programs, social norms of teaching, teachers' lives outside the classroom, teachers' personality characteristics, the classroom situation, and students' lives outside the classroom. Teachers may also have a tendency toward a view of teaching and learning mathematics that includes encouraging students to be actively involved in solving problems in various contexts. However, our findings about teachers' views on teaching problem solving seem to contrast with the studies of Wijaya et al. [20] and Maulana [43], which point out that teachers in those studies admitted to mostly giving context-based tasks that explicitly provide the needed procedures and contain only the information relevant for solving the tasks.
Such a view does not support the idea of teaching mathematics as problem solving, unlike our finding that teachers agreed to teach mathematics as problem solving. Furthermore, the teachers in those studies preferred a directive teaching approach, mostly explaining a topic while students write, listen, and answer closed questions. Thus, there is an inconsistency between teachers' actual teaching practice and their views on teaching problem solving. A conjecture is that this issue could be related to teachers' understanding of problem-solving knowledge, particularly the lack of problem-solving content knowledge found in this study, since there is a significant association among teachers' understanding, beliefs, and teaching practice with respect to problem solving [16,44,45]. To sum up, we argue that: (1) teachers' understanding of problem-solving content knowledge was weaker than their understanding of pedagogical problem-solving knowledge, and (2) teachers believed more in mathematics and mathematics learning as a static body, while in practice they tend to believe that they should teach mathematics as a dynamic/problem-solving activity. The implication of this study is the need to develop a teacher professional development program for understanding and applying problem solving in teaching practice, taking into consideration the difficulties teachers experience, primarily with problem-solving content knowledge. Then there will be a balance between teachers' views and their actual knowledge and practice of mathematical problem solving.
Release of Ecto-protein Kinases by the Protozoan Parasite Leishmania major*
Leishmania major promastigotes have externally oriented ecto-protein kinases (PK) that are capable of phosphorylating both endogenous membrane substrates and foreign proteins. Live parasites phosphorylate protamine sulfate, casein, and phosvitin but not bovine serum albumin. Addition of exogenous PK substrates, such as phosvitin or casein, induced the shedding of ecto-PK that are capable of phosphorylating protamine sulfate. No phosphorylation of protamine sulfate was seen when cell-free supernatants from promastigotes incubated with either buffer alone or bovine serum albumin were used. A second enzyme, a constitutively released PK that phosphorylates casein or phosvitin but not protamine sulfate or mixed histones, was identified and characterized. This PK is inhibited by 5 μM staurosporine, 50 μg/ml heparin, and 75 μM CKI-7, concentrations typical of the IC50 values found for other eukaryotic casein kinases (CK). The constitutively shed ecto-PK specifically phosphorylated a peptide substrate for CK1 but not for CK2, suggesting that this shed PK is similar to CK1. The protozoan parasite Leishmania is responsible for a wide spectrum of human diseases that cause varying degrees of patient morbidity and mortality and affect more than 12 million people worldwide. Leishmania have a relatively simple life cycle, existing as extracellular flagellated promastigotes in the sandfly vector and, following transmission to a mammalian host, as intracellular aflagellated amastigotes in macrophages (1). Throughout its life cycle, Leishmania encounters hostile, changing environments that require rapid responses to ensure survival of the parasite (1,2). In eukaryotes, protein phosphorylation is a major mechanism for regulating cellular responses to environmental signals, including cell-cell interactions.
The role of intracellular protein kinases (PK, EC 2.7.1.37)¹ in complex regulatory cascades that control differentiation, metabolism, growth, gene expression, and other cellular processes is well established (3). Less is known about the functions of externally oriented cell surface PK (ecto-PK), although the potential for their involvement in signal transduction and cell-cell interactions appears great. These ecto-PK utilize extracellular ATP that is present in blood plasma and other body fluids at concentrations from 1 to 30 µM (4). Ecto-enzymes have been demonstrated in a variety of cultured cells, including HeLa cells, fibroblasts, neutrophils, neurons, and others (5-10). Several types of serine/threonine ecto-PK have been identified and include the cAMP-dependent protein kinase (PKA), protein kinase C (PKC), and cyclic nucleotide-independent PK (6-8, 10). Recently tyrosine ecto-PK have also been reported (11). Ecto-PK are capable of phosphorylating both endogenous membrane and exogenous foreign substrates. We have identified a cyclic nucleotide-independent ecto-PK activity on viable Leishmania major promastigotes that phosphorylates exogenous substrates, such as mixed histones and protamine sulfate, in addition to 11 endogenous parasite membrane proteins (12). Live parasites can also phosphorylate the C3 and C3b polypeptide components of the human complement system (13). Phosphorylation of C3 was shown to inactivate both the alternative and classical complement pathways (14) and thus may play an important role in parasite survival. In addition, the inducible release or shedding of ecto-PK in the presence of enzyme substrates has been described for specific PK on the surface of HeLa cells, endothelial cells, fibroblasts, neutrophils, and other cells (6,7,9,15,16). This activity appears to be similar to casein kinases and was recently purified from HeLa cells and characterized as casein kinase 1 (CK1) and casein kinase 2 (CK2; see Ref. 6).
In this study we show for the first time that parasites are capable of shedding ecto-PK. At least two leishmanial ecto-PK released by promastigotes were identified as follows: first, an ecto-PK that is shed constitutively and phosphorylates phosvitin; and second, an enzyme released by incubation with PK substrates that phosphorylates protamine sulfate. The constitutively shed enzyme was characterized and shown to be CK1-like. These findings will allow us to further characterize the properties and roles of ecto-PK and the possible ramifications of these enzymes on host-parasite interactions.

EXPERIMENTAL PROCEDURES

Materials-The CK-specific inhibitors, CKI-7 and CKI-8, were purchased from Seikagaku America (St. Petersburg, FL). The CK1- and CK2-specific peptides, RRKDLHDDEEDEAMSITA and RRRADDSDDDDD, respectively, used in the phosphorylation assays were generous gifts from Dr. L. Pinna (University of Padova, Italy). All the other PK inhibitors, protein substrates, and reagents were purchased from Sigma. P-81 phosphocellulose paper was obtained from Whatman Scientific Ltd. (17). Virulent cloned parasites were maintained by serial passage in BALB/c mice and obtained as required by needle aspiration from lesions. Parasites were maintained in culture for not more than 12 passages.

Release of Ecto-PK Activity-Viable promastigotes were washed once by centrifugation (10 min, 600 × g) with 20 mM Tris-HCl, pH 7.5, containing 150 mM NaCl, 1 mM MgCl₂, 1 mM glucose, and 10 mM NaF (buffer A). The parasites were resuspended at 5 × 10⁸ cells/ml in buffer A (100 µl) with or without substrate (1 mg/ml phosvitin, hydrolyzed or intact casein, or bovine serum albumin). The cells were layered over di-N-butyl phthalate (150 µl) and incubated for 20 min at 30°C. The promastigotes were removed by rapid centrifugation in a Microfuge (Beckman model B, for 1 min), and the supernatants were used for phosphorylation assays.
Phosphorylation Assays-Released ecto-PK activity was measured in cell-free supernatants, prepared as described above, by adding PK substrates, [γ-³²P]ATP (1-10 µCi), and 0.1 mM cold ATP. The reactions were incubated for 10-20 min at 30°C and stopped by the addition of ice-cold trichloroacetic acid (1 ml, 25%). After 30 min on ice, protein phosphorylation was analyzed by either filtration or sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). In the former case, the trichloroacetic acid precipitates were collected on Millipore HAWP filters (0.45 µm), washed four times with ice-cold 5% trichloroacetic acid (2 ml each), and counted in a β-scintillation counter. For gel electrophoresis, the precipitate was collected by centrifugation and washed with 5% trichloroacetic acid (once) and 90% acetone (three times). The pellet was resuspended in sample buffer, analyzed by 12% SDS-PAGE, and exposed to x-ray film or phosphorimaging. Quantitation of the phosphorylation was carried out by densitometric analysis of the bands. The protein substrates and concentrations used are indicated in the figure legends.

Phosphorylation by Live Promastigotes-Parasites were washed once by centrifugation with buffer A and resuspended at 5 × 10⁸ cells/ml in buffer A (100 µl). Cells were preincubated at 30°C with or without the PK substrates for 2-5 min, and the reaction was initiated by addition of [γ-³²P]ATP (1-10 µCi) and 0.1 mM cold ATP. After 10-20 min of incubation, the reactions were stopped by adding trichloroacetic acid and analyzed by filtration and SDS-PAGE as described above.

Peptide Phosphorylation-Phosphorylation of specific peptide substrates for CK1 or CK2 using cell-free supernatants was carried out as described above with the following modifications. The reaction was stopped by the addition of ice-cold 100% trichloroacetic acid (24 µl) and 1% bovine serum albumin (BSA, 40 µl), and co-precipitated on ice (30 min).
After centrifugation, samples from the supernatant (20 µl in triplicate) were spotted on P-81 ion exchange chromatography phosphocellulose paper (Whatman Scientific Ltd., UK) and washed three times in 75 mM phosphoric acid (500 ml, 5 min each wash) to remove the unbound phosphate. The paper was dried, and ³²P incorporation was measured in a β-scintillation counter.

Parasite Viability-Promastigote viability was assessed by two complementary assays as follows: ethidium bromide (EtBr) incorporation, which measures the percentage of dead or damaged cells; and fluorescein diacetate hydrolysis, which measures the percentage of viable cells. Fluorescein diacetate hydrolysis (18) was tested in selected experiments; however, the EtBr assay (19), described below, was used in all experiments. To determine the effect of each treatment on cell viability, promastigotes (5 × 10⁷ cells/200 µl buffer A) were incubated in parallel under conditions identical, except for [γ-³²P]ATP, to those described above. At the beginning and end of each incubation, 2.3 ml of buffer A containing 50 µM EtBr was added, and the fluorescence was measured 5 min later in a fluorescence spectrophotometer (Perkin-Elmer LS-5B Luminescence Spectrometer, 365 nm excitation, 580 nm emission). Buffer A containing 50 µM EtBr served as the blank. A standard curve using increasing numbers of promastigotes (5-500 × 10⁵) in buffer A containing digitonin (30 µg/ml) and EtBr (50 µM) was used to calculate the number of dead parasites. In some cases, the percentage of dead or damaged parasites was also determined by counting fluorescent and total parasites using a fluorescent phase microscope (Laborlux K; Leitz, Germany) at 400× magnification.

Effect of PK Inhibitors on Enzyme Activity-Stock solutions of the PK inhibitors heparin, CKI-7, CKI-8, staurosporine, H-7, and W-7 were prepared in dimethyl sulfoxide or distilled water.
The inhibitors or dimethyl sulfoxide alone were diluted with buffer A and added to the reactions just before use. Phosphorylation of the PK substrates and analysis of the reactions by either filtration or SDS-PAGE was carried out as described previously. The final concentrations of inhibitors examined are given in the text.

RESULTS

Phosphorylation of Exogenous Protein Substrates by Ecto-PKs on Live Promastigotes-The phosphorylation of different PK substrates or BSA was examined using viable L. major stationary phase promastigotes. After a 20-min incubation with [γ-³²P]ATP and substrate, phosphorylation was analyzed by SDS-PAGE and autoradiography (Fig. 1 and results not shown). Leishmanial ecto-PK(s) are capable of phosphorylating several different exogenous substrates, including protamine sulfate (PS), phosvitin, hydrolyzed casein (h-casein), intact casein (i-casein), and mixed histones (Fig. 1, data not shown, and Ref. 12). Different phosphorylation patterns were observed for each PK substrate used. When phosvitin (lane 2) was added to the reaction mixture, one major radiolabeled band, 35 kDa, corresponding to the substrate molecular mass was observed. Likewise, when PS was added to the reaction mixture, a major phosphorylated band at about 14 kDa, corresponding to this protein (lane 1), was seen. Phosphorylation of h- or i-casein gave, as expected, multiple bands or a major band at 29 kDa, respectively (lane 4 and data not shown). When BSA was used as an exogenous substrate, no protein phosphorylation was observed (lane 5). The only radioactive product observed using BSA was a band that migrated with the tracking dye. This band appears in all the reactions, including those without substrate (lane 3), and may represent phosphorylation of parasite lipids or small peptides. Endogenous phosphorylation of parasite proteins was only observed in the absence of foreign substrates and following extensive exposure of the gels (data not shown and Ref. 12).
The additional high molecular weight bands observed when PS was used were not investigated but are probably due to protein impurities, since these bands were not observed in preliminary experiments when PS from an alternative commercial source was used (Ref. 12 and data not shown). Parasite viability was measured by both EtBr fluorescence and fluorescein diacetate hydrolysis. In the absence of substrates, promastigote viability after 15 min at 30°C was never less than 97%, as measured by both methods, and remained essentially unchanged up to 35 min of incubation. Incubation of the promastigotes with either PS, h-casein, or phosvitin for 25-35 min resulted in decreases in parasite viability of 11, 2.5, and 0%, respectively, compared with those incubated in buffer alone. Effects on parasite viability were already seen by 5 min of incubation with the substrate, and essentially no additional change in viability (<1%) was noted with increasing time using any of the PK substrates. Similar differences in PK substrate toxicity for cells have been reported with HeLa cells, neutrophils, and fibroblasts (5,7,8).

Release of Ecto-PK Activity from Parasites-Incubation of neutrophils, HeLa cells, and fibroblasts with substrates for PK was shown to induce the release of PK activity from these cells. This activity could be detected in the cell-free supernatants (7,11,15,16). Therefore, we decided to test if the phosphorylation of exogenous substrates observed using live parasites was due to enzyme release by the cells. Promastigotes were washed and incubated with or without h-casein for 20 min. After removal of the cells by centrifugation through an oil layer, PK activity of the supernatants was assayed by adding PS where indicated (Fig. 2; lanes 1-5). Preincubation of promastigotes with h-casein resulted in the release of a PK that phosphorylates both h-casein and PS (lanes 1 and 4).
Shedding of the PS-phosphorylating activity by the cells required parasite preincubation with h-casein, since no phosphorylation of PS was observed when the parasites were preincubated in buffer alone (lane 2). Preincubation of promastigotes with other PK substrates, including i-casein, phosvitin, or PS, also caused the shedding of a PK activity that could phosphorylate PS. However, no phosphorylation of PS was observed if supernatants from parasites incubated with BSA were used (data not shown). The labeled bands seen in lanes 1 and 4 are not due to phosphorylation of endogenous secreted parasite proteins, since no radiolabeled bands were observed if supernatant alone was used in the phosphorylation reactions (lane 5). Likewise, PS was not phosphorylated when added to the reaction mixture in the absence of supernatants collected from h-casein-treated promastigotes (lane 3). Shedding of the PK that phosphorylates PS requires promastigote preincubation with h-casein. However, it was not clear whether h-casein phosphorylation (Fig. 2, lane 1) required parasite preincubation with substrate, "induced release," or if this PK activity was constitutively released by the parasites. Cell-free supernatants were collected from promastigotes incubated with or without h-casein (Fig. 2, lanes 6 and 7). Substrate was added to supernatant obtained from parasites incubated in buffer alone, and the phosphorylation reaction was carried out. Phosphorylation of h-casein was seen when either procedure was used and did not depend on whether the parasites were incubated with the substrate prior to collection of the supernatant. These results demonstrate that the PK activity that phosphorylates h-casein is constitutively released from the leishmanial parasites.
However, phosphorylation of h-casein using supernatants collected from parasites preincubated with substrate was 70% greater than with those only incubated with buffer (lanes 6 and 7), suggesting that incubation with casein may induce release of a PK activity capable of phosphorylating h-casein. These findings suggest that at least two different kinase activities are released from intact cells: 1) a "substrate-inducible" activity that phosphorylates PS and perhaps h-casein; and 2) a "constitutive" activity that only phosphorylates h-casein and not PS. The "constitutively" shed leishmanial CK-like activity, LCK, was characterized further.

Substrate Specificity of the Constitutive PK Activity-Enzyme activity of cell-free supernatants, collected from promastigotes incubated with buffer alone, was measured using several different PK substrates, including h-casein, i-casein, phosvitin, mixed histones, PS, and BSA. Phosphorylation was examined by SDS-PAGE (Fig. 3 and data not shown). Phosvitin was the best substrate for the constitutively released PK (lane

Effect of Inhibitors on the Constitutively Released PK-Several different PK inhibitors, including staurosporine, W-7, heparin, CKI-7, and CKI-8, were examined for their ability to block phosvitin phosphorylation. The antibiotic staurosporine, a competitive inhibitor of ATP binding to PK, inhibits a wide range of enzymes, including PKC, PKA, and Ca²⁺-calmodulin PK, at nanomolar concentrations. Unlike most PK, CK are less sensitive to staurosporine, and the IC50 for CK1 and CK2 is 163 and 19 µM, respectively (20). The IC50 found for the leishmanial CK (LCK), 5 µM, is similar to CK2. Heparin has been used to distinguish between the two CKs (21). CK2 is strongly inhibited at approximately 1% of the concentration that inhibits CK1 (IC50: CK1 = 24 µg/ml; CK2 = 0.15 µg/ml). Phosphorylation of phosvitin by LCK was measured in the presence of different concentrations of heparin (Fig. 4).
This curve shows that relatively high concentrations of heparin (IC50 = 50 µg/ml), more similar to CK1, are needed to inhibit the leishmanial enzyme's activity. In addition to heparin, the effect of CKI-7, an isoquinoline derivative of W-7, on phosvitin phosphorylation was also examined. CKI-7 is a specific inhibitor of CK1 and CK2 (IC50: CK1 = 9.5 µM; CK2 = 90 µM; see Ref. 22). The IC50 for other common PK, such as PKA, PKC, and Ca²⁺-calmodulin PK, is much higher (550, >1000, and 195 µM, respectively) than for CK. The IC50 found for LCK was 75 µM, closer to that observed for mammalian CK2. Taken together, the results using PK inhibitors and substrate specificity strongly suggest that the shed leishmanial enzyme is a CK. LCK was also inhibited by W-7 (IC50 = 10 µM).

Phosphorylation of CK1- and CK2-specific Peptide Substrates-The constitutively released LCK activity and total cellular CK activity were further characterized in four separate experiments using peptides specific for either CK1 (RRKDLHDDEEDEAMSITA) or CK2 (RRRADDSDDDDD; Ref. 23). Results typical of these experiments are given in Table I. Phosphorylation of the peptides using parasite lysates, either freeze/thawed or sonicated, showed that both enzymes are present in the parasite (Table I and data not shown). The total activity found in the promastigote lysates for CK1 was approximately 2.6-fold higher than for CK2. When peptide phosphorylation was examined using cell-free supernatants as a source of LCK, the activity was significantly lower than that measured using parasite lysates. In some experiments both CK-specific peptides were phosphorylated (data not shown). However, the CK1 activity was consistently higher, up to 650 times greater, than the CK2 activity (Table I). Phosphorylation of the CK-specific peptides using intact promastigotes gave results similar to those found using the cell-free supernatants. These results strongly suggest that the constitutively released leishmanial enzyme is CK1-like.
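The IC50 values quoted above are read from dose-response curves such as the heparin titration in Fig. 4. As a rough illustration of that quantitation step, the sketch below estimates an IC50 by log-linear interpolation between the two measurements that bracket 50% residual activity; the concentration and activity numbers are invented for illustration and are not the paper's data.

```python
import math

def estimate_ic50(concs, activities):
    """Estimate an IC50 by log-linear interpolation between the two
    measured points that bracket 50% residual activity.

    concs      -- inhibitor concentrations, ascending (any unit)
    activities -- residual activity as a fraction of the uninhibited control
    """
    for i in range(len(concs) - 1):
        a1, a2 = activities[i], activities[i + 1]
        if a1 >= 0.5 >= a2:
            # dose-response curves are roughly linear on a log-dose axis
            frac = (a1 - 0.5) / (a1 - a2)
            log_c = math.log10(concs[i]) + frac * (
                math.log10(concs[i + 1]) - math.log10(concs[i]))
            return 10 ** log_c
    raise ValueError("50% inhibition is not bracketed by the data")

# Invented densitometry readings (fraction of control phosvitin
# phosphorylation) at increasing heparin concentrations in ug/ml,
# chosen so the interpolated IC50 comes out near the text's 50 ug/ml.
heparin = [1, 10, 25, 50, 100, 200]
activity = [0.98, 0.85, 0.65, 0.50, 0.30, 0.12]
print(round(estimate_ic50(heparin, activity), 1))  # → 50.0
```

In practice a full Hill-equation fit over all points is more robust than two-point interpolation, but the bracketing approach matches how an IC50 is read off a plotted curve.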
Furthermore, the finding of high CK1 activity in the cell-free supernatant (9.6% of the parasite lysate) and little or no CK2 activity suggests that the released activity is not due to cell lysis. Promastigote viability in buffer A was >97% over 20 min.

Kinetics of Constitutive PK Release-Release of the CK-like activity over time was followed for 30 min. At each time point, aliquots containing parasites in buffer alone were removed and cell-free supernatants prepared. In parallel, the percentage of dead promastigotes was measured using the EtBr assay. CK-like activity was assayed using phosvitin as substrate and analyzed by SDS-PAGE and densitometry. Results from one typical experiment are shown in Fig. 5. PK activity shed by the parasites into the buffer increased dramatically over the first 10 min of incubation (600%). After peaking, the activity measured in the cell-free supernatants slowly decreased, until it appeared to level off after 25 min at twice the initial activity. The initial time point (t = 0 min) was obtained by adding parasites to buffer and then immediately centrifuging to prepare a cell-free supernatant. All time points were compared with a negative control, phosvitin in labeling buffer without supernatant. The percentage of dead promastigotes was examined in parallel by EtBr staining and showed no change in parasite viability over the first 15 min in labeling buffer (98% viable; t = 0 and 15 min) and only a small decrease after 30 min (96% viable). The difference in kinetics of CK secretion and the change in parasite viability further excludes the possibility that the PK release observed is a result of cell damage.

DISCUSSION

Many eukaryotic cells possess ecto-PK that are capable of phosphorylating foreign and endogenous protein substrates. PK identified include PKC, PKA, CK1, and CK2. In addition, several vertebrate cells were shown to shed ecto-PK from their surface in the presence of PK substrates like phosvitin or casein.
Although most cells examined appear to release cyclic nucleotide-independent CK, the release of PKA or PKC has been documented in only a few cases. Parasites have evolved varied strategies to evade host defense mechanisms, including the mimicry of host regulatory molecules and enzymes. Leishmania promastigotes also express ecto-PK on their surface that phosphorylate foreign proteins (24). Previous studies showed little or no evidence that the parasite ecto-PK activity was related to PKA or PKC, respectively. Activators and inhibitors of these enzymes had no significant effect on the phosphorylation of exogenous or endogenous substrates (Ref. 12 and data not shown). However, comparative studies between live parasites, which phosphorylate the C3 and C3b polypeptides of the human complement system, and LPK-1, a purified parasite enzyme, which only phosphorylates C3, but not C3b, suggested that promastigotes possess more than one ecto-PK (13). Preliminary experiments show that the constitutively shed leishmanial PK, LCK, phosphorylates C3a (data not shown). Interestingly, a casein kinase shed from human platelets following activation was shown to phosphorylate both C3 and C3b (25). To obtain a better understanding of the role of ecto-PK in host-parasite interactions, it will be necessary to characterize parasite ecto-enzymes and the physiological substrates involved in these processes.

FIG. 4. Effect of heparin on the constitutively shed leishmanial casein kinase (LCK) activity. Promastigotes were incubated in labeling buffer for 20 min at 30°C, and the cell-free supernatants were collected by centrifugation through an oil layer. Protein kinase activity was assayed using phosvitin (1 mg/ml), [γ-³²P]ATP, and increasing concentrations of heparin. The reaction was stopped after 10 min by the addition of ice-cold trichloroacetic acid and analyzed by 12% SDS-PAGE and autoradiography. Quantitation of the phosvitin phosphorylation was carried out by densitometry.

TABLE I. Phosphorylation of casein kinase (CK) peptide substrates by leishmanial protein kinases. Shed cLPK: promastigotes (5 × 10⁷/100 µl) were incubated for 20 min at 30°C and removed by centrifuging through an oil layer. The cell-free supernatants were used as a source of enzyme. Lysed parasites: promastigotes (5 × 10⁷/100 µl) were freeze/thawed three times and used as a source of enzyme. Intact parasites: live promastigotes (5 × 10⁷/100 µl) were incubated directly with the CK-specific peptide substrates. CK1 (RRKDLHDDEEDEAMSITA) or CK2 (RRRADDSDDDDD) specific peptide, [γ-³²P]ATP, and protein kinase were incubated for 10 min, and the reaction was stopped by adding 1% BSA and trichloroacetic acid. After 30 min on ice the precipitate was removed by centrifugation, supernatants in triplicate were spotted on P-81 filters and washed with 75 mM phosphoric acid, and ³²P incorporation was measured by liquid scintillation counting. Background

In this study we demonstrate that Leishmania promastigotes have at least two types of ecto-PK, both of which can be released by the parasites. Unlike most cells examined so far, one type of activity appears to be shed continuously in the absence of substrate, whereas the second activity, similar to other eukaryotic cells, is shed only when incubated with substrate. These activities, constitutive and induced, are easily distinguished by their ability to phosphorylate PS, since the former enzyme(s) shows no activity when assayed with this substrate, and the latter readily phosphorylates PS. These shed ecto-PK are not due to cytoplasmic leakage from damaged or dead cells. Although both histone and PS are cytotoxic for eukaryotic cells (5,16,26) and parasites, little or no cytotoxicity was found when cells (5,7,16,26) or parasites were incubated with phosvitin or casein.
Initial parasite viability in the studies described herein always exceeded 97%, as measured by two fluorescent assays, and changed by <2% following incubation either in the presence or absence of casein or phosvitin. This percentage was identical to that found following incubation with buffer alone. The small number of dead parasites was not responsible for the induced PK activity observed, since no phosphorylation of PS was noted using cell-free supernatants from promastigotes incubated either with BSA or buffer alone, whereas good phosphorylation of PS was found with as little as 2.5 × 10⁶ freeze/thawed parasites (data not shown). Furthermore, no correlation was found between the release of constitutive ecto-PK activity into the cell-free supernatants and decreasing cell viability. Ecto-PK activity peaked rapidly at 10 min and then gradually decreased with time, whereas cell viability remained essentially constant over the first 15 min and then decreased only slightly, by 2%, after 30 min. We decided to focus on the characterization of the constitutively shed leishmanial ecto-PK activity, which appears to be related to casein kinases. Substrate specificity was typical of these enzymes. Phosvitin, followed by h-casein and i-casein, was the best substrate for the enzyme(s), and neither mixed histones nor PS were phosphorylated by the ecto-PK. The parasite activity is different from a spontaneously shed human leukemic cell line serine/threonine ecto-PK that was recently reported (11). The latter activity phosphorylates PS and histone H2B, as well as casein, phosvitin, and the human complement polypeptide C9, and may contain more than one PK. PKI, a specific PKA inhibitor, was found to inhibit the phosphorylation of histone H2B but not C9 by the leukemic cell ecto-PK. The latter activity was postulated to be CK-like. Little evidence was found in our study or previous studies for either an intracellular or externally oriented leishmanial PKA (12).
However, we have recently cloned and characterized the genes for two PKA catalytic subunits from L. major (27). The effect of several PK inhibitors on the leishmanial ecto-PK was examined. Unlike most PK, which are inhibited by nanomolar concentrations of staurosporine (20), the IC50 values for casein kinases are in the µM range (CK1 and CK2, 163 and 19 µM, respectively). The IC50 value for the leishmanial ecto-PK (5 µM) was similar to CK2. The high concentration of drug needed to inhibit the leishmanial activity is not due to an intrinsic resistance of parasite enzymes to staurosporine. The phosphorylation of PS by Leishmania aethiopica promastigotes, using either particulate or soluble fractions or live parasites, is strongly inhibited by staurosporine. Low concentrations of drug (50 nM) inhibited the phosphorylation of PS by >80% when parasite fractions were used and by approximately 45% using live parasites (28). Furthermore, staurosporine concentrations similar to those that inhibit the constitutively shed ecto-PK are cytostatic and/or cytotoxic to the promastigotes and induce pronounced morphological changes (29). Heparin and CKI-7, both specific CK inhibitors, also blocked the leishmanial ecto-PK activity at concentrations similar to those reported for mammalian and yeast CK and confirmed that the constitutively shed parasite enzyme is CK-like. However, no conclusion regarding the type of CK in the cell-free supernatants could be made based on IC50 values for these inhibitors, since the constants found using heparin or CKI-7 each implicated the presence of a different CK, either CK1 or CK2, respectively. Interestingly, heparin concentrations (10 µM) similar to those that inhibit LCK were also shown to significantly reduce endogenous protein phosphorylation by live parasites (30). Finally, we were able to identify the constitutively shed PK by examining the phosphorylation of specific peptide substrates for CK1 and CK2.
Only the CK1-specific peptide substrate was phosphorylated, confirming that the constitutively shed LCK activity is CK1-like. This conclusion was further supported by the finding that LCK, similar to other CK1, only utilizes ATP, whereas CK2 utilizes both ATP and GTP for phosphorylation (data not shown). Although spontaneously shed ecto-PK activity has not been observed with cell-free supernatants from HeLa cells or neutrophils (7,16), ecto-CK released from HeLa cells by incubation with phosvitin were recently purified and characterized (6). Differences in the sensitivity of the leishmanial and mammalian ecto-PK to different inhibitors suggest that it may be possible to design drugs that specifically inhibit the parasite but not the host enzymes. However, this will require the purification and characterization of the parasite enzyme(s).

FIG. 5. Constitutive secretion of leishmanial casein kinase 1 (LCK1) by promastigotes of L. major. The effect of incubation time on released protein kinase activity and cell viability was measured. Promastigotes in labeling buffer were layered on oil and incubated at 30°C. At three time points samples were taken to check parasite viability by staining with ethidium bromide and counting in a fluorescence microscope. In parallel, the promastigotes were removed by rapid centrifugation, and the cell-free supernatants were assayed for LCK1 activity by the addition of phosvitin and [γ-³²P]ATP. Reactions were analyzed by 12% SDS-PAGE, autoradiography, and densitometric scanning of the film. F, LCK1 activity; f, parasite viability.

The CK1 family has been found in all eukaryotic cells examined so far and is believed to be involved in the regulation of nuclear and cytoplasmic processes. These PK consist of monomeric proteins that vary considerably in size from 25 to 55 kDa and have been found in the nucleus, cytoplasm, membrane, and cytoskeleton.
Several different isoforms of CK1 have been identified in mammalian cells and yeast using molecular techniques. In Saccharomyces cerevisiae, two essential genes have been sequenced and found to encode a carboxyl-terminal prenylation motif, believed to target them to the plasma membrane. Using a nested polymerase chain reaction with degenerate oligonucleotides to conserved regions of CK1, we have amplified a 342-base pair fragment from L. major that shows 74.4% identity over 336 base pairs to human CK1-ε.² We expect that molecular analysis of the leishmanial CK1 gene (lck1), the recent availability of molecular techniques for the production of null Leishmania mutants, and further biochemical characterization of parasite ecto-PK will prove invaluable in understanding the role of these enzymes in parasite-host interactions.
Hyperparameter Optimization of Support Vector Regression Algorithm using Metaheuristic Algorithm for Student Performance Prediction

Abstract—Improving student learning performance requires proper preparation and strategy so that it has an impact on improving the quality of education. One of the preparatory steps is to build a prediction model of student performance. Accurate student performance prediction models are needed to help teachers develop the potential of diverse students. This research aims to create a predictive model of student performance with hyperparameter optimization of the Support Vector Regression Algorithm. The hyperparameter optimization methods used are Metaheuristic Algorithms, namely Particle Swarm Optimization (PSO) and Genetic Algorithm (GA). After obtaining the best SVR hyperparameters, the next step is to model student performance predictions, which in this study produced two models: PSVR Modeling and GSVR Modeling. The resulting predictive modeling is also compared with previous researchers' predictions of student performance using five models: Support Vector Regression, Naïve Bayes, Neural Network, Decision Tree, and Random Forest. The regression performance metric Root Mean Square Error (RMSE) is used to evaluate the modeling results. The test results show that predicting student performance with PSVR Modeling produces the smallest RMSE value, 1.608, compared to previous researchers' predictions, so the proposed model can be used to predict student performance in the future.

I. INTRODUCTION

Educators need predictions of student performance to improve student achievement. Predicting student performance provides material for evaluating student learning, both to facilitate students with diverse potential who excel academically [1][2] and to detect students who have the potential to experience failure [3].
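The abstract evaluates all models with RMSE. As a reminder of the metric, a minimal sketch follows; the grade values are invented for illustration (the 0-20 scale follows the Portuguese secondary-school dataset used in this line of work).

```python
import math

def rmse(actual, predicted):
    """Root Mean Square Error: the square root of the mean squared residual."""
    assert len(actual) == len(predicted) and actual
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Invented final grades versus one hypothetical model's predictions.
y_true = [10, 12, 15, 8, 14]
y_pred = [11, 11, 14, 10, 13]
print(round(rmse(y_true, y_pred), 3))  # → 1.265
```

Lower is better: an RMSE of 1.608 on a 0-20 grade scale means predictions are off by roughly 1.6 grade points on average, weighted toward larger errors.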
Accurate prediction of student performance can also inform policy decisions in educational institutions [4]. Machine Learning Algorithms have been applied to predict student performance and to compare their accuracy in both classification and regression settings [5]. The Machine Learning Algorithms used include Neural Networks, Decision Trees, Naïve Bayes, SVM, KNN, and Logistic Regression [2][5][6]. Support Vector Machine (SVM) is a Machine Learning Algorithm that can be used to predict student performance [5][7][8][9]. For regression problems, the SVM variant is known as Support Vector Regression (SVR) [10]. SVR has good generalization ability, can be applied to non-linear, high-dimensional data, and has low computational complexity [11]. Other advantages of SVR are that it resists overfitting and can make predictions from datasets that are not very large [12]. Because of these advantages, SVR is used in this study to predict student performance [11]. A problem often encountered with SVR arises on large-scale data, where the heavy computational cost makes it challenging to determine optimal hyperparameter values [11][13]. Optimal selection of hyperparameters in Machine Learning Algorithms has been carried out with various Metaheuristic Algorithm approaches, namely Particle Swarm Optimization (PSO) [14], Artificial Bee Colony (ABC) [15], and Genetic Algorithm (GA) [16]. Since SVR is a Machine Learning Algorithm, optimizing the hyperparameters of an SVR model will increase its accuracy [16][17].

II. RELATED WORK

Many researchers have studied student performance prediction using Machine Learning Algorithms. Tomasevic et al. [6] predicted student performance by comparing Machine Learning Algorithm models, namely KNN, SVM, ANN, Decision Tree, Naïve Bayes, and Logistic Regression, in both classification and regression settings.
That study used data on students' past learning achievements, learning engagement, search activities, discussion participation, and demographics. The results show that ANN outperformed the other Machine Learning Algorithms with the best accuracy. In a student performance prediction study by Xu et al. [18], activity data from 4,000 students were used, covering online duration, traffic volume, and connection frequency. The resulting classification model predicts whether a student passed or failed. The Machine Learning Algorithms used include Decision Trees, ANN, and SVM. That study showed that the ANN and SVM Algorithms were the most accurate at predicting student achievement. Cortez et al. [5] compared the accuracy of predicting student performance with classification and regression models using Neural Network, Decision Tree, Naïve Bayes, Random Forest, and SVM Algorithms on student performance in mathematics and Portuguese. The experimental results show that in the classification case the Naïve Bayes Algorithm produced the best accuracy for mathematics and the Decision Tree Algorithm the best accuracy for Portuguese. In the regression case, the Random Forest Algorithm had the best accuracy for mathematics, while the Naïve Bayes and Random Forest Algorithms produced the best accuracy for Portuguese. This study uses the dataset from that earlier research, namely the performance of secondary school students in Portugal in mathematics [5], and develops the SVR Algorithm. The SVR algorithm was chosen because it resists overfitting and can make predictions from datasets that are not very large [12].
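As context for the experiments that follow, a plain default-hyperparameter SVR baseline with RMSE evaluation can be sketched as below. This is an illustrative sketch only: the actual UCI student-mat file is not loaded here, so a synthetic regression problem of the same shape (395 instances, 32 predictors) stands in for it, and the numbers it produces are not the paper's.

```python
# Baseline sketch: default-hyperparameter SVR evaluated with RMSE.
# Synthetic data stand in for the 395x32 UCI student performance matrix.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=395, n_features=32, noise=10.0, random_state=0)

# 90% training / 10% testing, as described in the methodology.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.10, random_state=0)

model = SVR(kernel="rbf")  # defaults: C=1.0, gamma="scale", epsilon=0.1
model.fit(X_tr, y_tr)

rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(round(rmse, 3))
```

With default hyperparameters the RBF-kernel SVR typically underfits such data, which is exactly the gap the metaheuristic tuning in this paper is meant to close.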
The development of the SVR Algorithm here consists of finding its optimal hyperparameters using Metaheuristic Algorithms, namely PSO and GA [14][16]. By using optimal hyperparameters, the SVR Algorithm can achieve higher predictive accuracy [16][17]. The proposed contribution of this research is therefore a model for predicting student performance based on the SVR Algorithm with hyperparameter optimization by Metaheuristic Algorithms, which previous researchers have not done.

III. MATERIAL AND METHOD

This study requires several stages to predict student performance: collecting the student performance dataset, splitting the dataset into training and testing data, optimizing the SVR hyperparameters with a Metaheuristic Algorithm, and modeling student performance predictions. Fig. 1 shows the stages carried out in this study. The first step was collecting the dataset, which can be downloaded from the UCI Machine Learning Repository website. After processing, the dataset is split into training and testing data, with 90% training data and 10% testing data. Training data are used to train the algorithms and search for a suitable model, while testing data are used to measure the performance of the resulting model. The next step is to model student performance predictions using the SVR Algorithm with hyperparameter optimization. This study produced two models: GSVR Modeling and PSVR Modeling. After the predictive models are generated, their performance is evaluated with the regression metric RMSE and compared with the Machine Learning models built by previous researchers on the same dataset [5].

A.
Data for Student Performance Prediction

This study uses a dataset on student performance at secondary schools in Portugal from the UCI Machine Learning Repository. The data come from two secondary schools, the Gabriel Pereira School and the Mousinho da Silveira School, for the mathematics subject. The dataset consists of 395 instances and 33 attributes covering demographic, social, financial, and academic data [5]. Of the 33 attributes, one is the students' final mathematics exam score, G3, which is used as the target for the student performance prediction models; the remaining 32 attributes are the predictors of student performance. The description of the student performance dataset is given in Table I.

B. Machine Learning Algorithms

This research focuses on developing the SVR Algorithm by optimizing its hyperparameters with Metaheuristic Algorithms. At the end of the development, the result is compared with the other Machine Learning Algorithms used by previous researchers to predict student performance, namely Support Vector Regression (SVR), Naïve Bayes, Neural Networks, Decision Trees, and Random Forests [5].

1) Support vector regression (SVR): SVR is a development of the SVM Algorithm introduced by Vladimir Naumovich Vapnik in 1995 [19]. SVR shows good performance in solving regression problems [11]. SVR applies the Structural Risk Minimization (SRM) principle, which focuses on finding the optimal hyperplane while minimizing the error on the training data under the ε-insensitive loss function, producing continuous, real-valued output [20]. In this study, the hyperparameters used are C, gamma, and epsilon.

2) Naïve Bayes: Naïve Bayes is a simple probabilistic classifier that calculates a set of probabilities by summing the frequencies and combinations of values in the given dataset [21].
This Algorithm uses Bayes' theorem and assumes that all attributes are independent given the value of the class variable [22]. Naïve Bayes is based on the simplifying assumption that attribute values are conditionally independent given the output value [23]. In other words, given the output value, the joint probability of the observed attributes is the product of the individual probabilities [24].

3) Neural networks: Neural networks are information processing algorithms inspired by the workings of the biological nervous system, especially human brain cells, in processing information [25]. Neural networks consist of many interconnected information-processing elements that work together to solve a particular problem, generally a classification or prediction problem [26].

4) Decision tree: A Decision Tree is a predictive modeling technique used for classification and prediction tasks [27]. A Decision Tree divides the problem search space into subproblems [28]. The decision tree process transforms tabular data into a tree model; the tree model then generates rules, which are simplified [29].

5) Random forest: Random Forest is a supervised learning Algorithm released by Breiman [30]. Random Forest is commonly used to solve problems related to classification, regression, and other tasks. This Algorithm is a combination of several tree predictors (decision trees), where each tree depends on a random vector sampled independently and with the same distribution for all trees in the forest [31]. The Random Forest prediction aggregates the results of the individual decision trees [32].

C. Metaheuristic Algorithm for Optimizing Hyperparameters in the SVR Algorithm

A Metaheuristic can be defined as an iterative generation process that guides subordinate heuristics by intelligently combining different concepts for exploring and exploiting the search space, in order to find near-optimal solutions efficiently [33].
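The iterative generate-evaluate-keep-best process in this definition can be sketched generically as below. The random-perturbation move here is only a placeholder neighborhood rule, not the PSO velocity update or GA operators used later in the paper; the toy fitness function is likewise an illustration, not the paper's RMSE objective.

```python
import random

def metaheuristic_search(fitness, lower, upper, n_candidates=20, n_iter=100, seed=0):
    """Generic iterate-evaluate-keep-best loop over a box-constrained space.

    `fitness` is minimized; `lower`/`upper` give per-dimension bounds.
    The neighborhood move is a simple Gaussian perturbation (a stand-in
    for PSO velocity updates or GA crossover/mutation).
    """
    rng = random.Random(seed)
    dim = len(lower)
    best = [rng.uniform(lower[d], upper[d]) for d in range(dim)]
    best_fit = fitness(best)
    for _ in range(n_iter):
        for _ in range(n_candidates):
            cand = [min(upper[d], max(lower[d],
                        best[d] + rng.gauss(0, 0.1 * (upper[d] - lower[d]))))
                    for d in range(dim)]
            f = fitness(cand)
            if f < best_fit:          # keep only improving candidates
                best, best_fit = cand, f
    return best, best_fit

# Toy fitness: squared distance to a known optimum inside the box.
best, best_fit = metaheuristic_search(
    lambda p: (p[0] - 3.0) ** 2 + (p[1] - 0.5) ** 2,
    lower=[0.0, 0.0], upper=[10.0, 1.0])
print(best, round(best_fit, 4))
```

PSO and GA refine exactly this skeleton: they replace the blind perturbation with population-based moves that share information between candidates.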
A Metaheuristic Algorithm is used to find optimal hyperparameters that produce the best accuracy in predictive modeling [12][17]. Determining the hyperparameters of a Machine Learning Algorithm is a significant step in modeling [16]. The optimal hyperparameters are determined based on the fitness function, which here is the RMSE:

fitness = RMSE = sqrt( (1/n) * Σᵢ (ŷᵢ − yᵢ)² )

where ŷᵢ is the predicted value, yᵢ is the original value from the sample dataset, and n is the total number of samples. In this study, the Metaheuristic Algorithms used to find optimal hyperparameters for SVR are PSO and GA, discussed as follows.

1) Particle swarm optimization (PSO): PSO was developed by Kennedy and Eberhart as an optimization Algorithm [34]. PSO is based on observations of the social behavior of flocks of birds and schools of fish moving toward a specific position to find food, which is referred to as the best position in the multidimensional search space [35][36]. The term particle denotes a bird in a flock that contributes its intelligence to that of the group [37]. As it moves through the search area with a certain velocity, each particle stores its own best position as Pbest and the swarm's best position as Gbest [38]. PSO aims to reach the optimal solution by minimizing the fitness function [39]. In this study, PSO is applied as an optimizer for the SVR hyperparameters, under the name PSVR Modeling. The steps are: initialize the PSO parameters in the form of particle velocities, initial particle positions, and the iteration limit; the particles then update their position and velocity memory to obtain the Pbest and Gbest values [11]. The best fitness value reached within the iteration limit yields the best SVR hyperparameter combination, in the form of C, gamma, and epsilon, for PSVR Modeling.

2) Genetic algorithm (GA): GA is an evolutionary Algorithm inspired by the mechanism of natural selection in Charles Darwin's theory [40].
GA was introduced in 1975 at the University of Michigan by John Holland [41]. GA is widely used to solve optimization problems [42]. GA searches for the optimum solution simultaneously at several points in one generation and then manipulates the population structure symbolically toward the best solution [43]. In GA, a candidate solution is a chromosome, and a group of chromosomes is called a population. Chromosomes from one population form a new population based on the objective function, i.e., the best fitness value [44]. This study also applies GA as an optimizer for the SVR hyperparameters, under the name GSVR Modeling. The steps are: initialize the GA parameters in the form of the initial population and the iteration limit. Individuals in the initial population are ranked by best fitness during the selection stage [16]. After that, the cross-over stage is carried out, namely the exchange of genes between one chromosome and another according to the crossover rate parameter. The next stage is mutation, in which one or more genes of the resulting chromosome are replaced with other genes at random [45]. In the final stage, new individuals are generated, and the best fitness value obtained within the iteration limit yields the best SVR hyperparameter combination, in the form of C, gamma, and epsilon, for GSVR Modeling.

D. Evaluation Method

In the final stage, the developed models are evaluated using a regression performance metric, whose function is to measure the accuracy of the student performance prediction models. The regression performance metric in this study is the Root Mean Square Error (RMSE). RMSE can be defined as the square root of the average squared error between the actual values and the forecast values [36]:

RMSE = sqrt( (1/n) * Σᵢ (ŷᵢ − yᵢ)² )
where ŷᵢ is the predicted value of student performance, yᵢ is the original value from the student performance sample dataset, and n is the total number of samples.

IV. RESULT AND DISCUSSION

This study uses the student performance dataset of Cortez [5], focusing on students' final exam scores in Mathematics. The dataset, from two secondary schools, consists of 395 instances and 33 attributes and is provided as a Comma Separated Values (CSV) file. After checking each row and column, no empty cells were found, so the dataset can be considered complete. The next step is to model student performance predictions by optimizing the hyperparameters of the SVR Algorithm with Metaheuristic Algorithms, and then to compare the results with the student performance prediction models built by previous researchers using Machine Learning Algorithms.

A. Results of Previous Research Using Machine Learning Algorithms

Cortez et al. [5] studied student performance prediction modeling using Machine Learning Algorithms, including SVR, Naïve Bayes, Neural Networks, Decision Trees, and Random Forests. The resulting models are summarized in Table II.

TABLE II. RESULT OF PREVIOUS RESEARCH [5]

Table II shows that the best student performance prediction was obtained with the Random Forest Algorithm, with an RMSE value of 1.75, while the worst was obtained with the SVR Algorithm, with an RMSE value of 2.09. That previous study only compared the accuracy of student performance prediction models across Machine Learning Algorithms; hyperparameter optimization was not carried out on those algorithms.
This study therefore improves the accuracy of student performance prediction with the SVR Algorithm by optimizing its hyperparameters using Metaheuristic Algorithms.

B. Results of SVR Hyperparameter Optimization using Metaheuristic Algorithms and Modeling

This stage is the initial stage of developing the SVR Algorithm to predict student performance. Hyperparameter optimization is performed to determine the best hyperparameter composition of the SVR Algorithm for the predictive model to be developed. The hyperparameters to be optimized are C, gamma, and epsilon, with search ranges defined by upper and lower limits: C = [100-1000], gamma = [0.001-0.009], and epsilon = [0.001-0.009]. The student performance prediction models resulting from hyperparameter optimization with the Metaheuristic Algorithms are as follows.

1) Optimization of SVR hyperparameters using particle swarm optimization (PSVR Modeling): In this study, PSO is applied as an optimizer for the SVR hyperparameters C, gamma, and epsilon. The PSO parameters are initialized as follows: 50 particles, C1 = 1.0, C2 = 2.0, and inertia weight W = 0.5. The number of iterations is varied over 50, 100, 250, and 500. Based on the PSVR Modeling results in Table III, the optimal SVR hyperparameter combination C, gamma, epsilon = [103, 0.002, 0.001], obtained at the 100th iteration, gives the best RMSE value of 1.608. The optimal hyperparameters produced by PSVR Modeling result from searching the predefined SVR hyperparameter ranges using the PSO search stages.
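A PSO search over the SVR hyperparameter box just described can be sketched as follows. The bounds and the coefficients W = 0.5, C1 = 1.0, C2 = 2.0 mirror the text, but the particle count and iteration budget are reduced for illustration, and synthetic data stand in for the student dataset, so the resulting hyperparameters and RMSE are not the paper's.

```python
# Minimal PSO over the SVR hyperparameter box: C in [100, 1000],
# gamma and epsilon in [0.001, 0.009]. Illustrative sketch only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=150, n_features=10, noise=5.0, random_state=0)

LO = np.array([100.0, 0.001, 0.001])   # lower bounds: C, gamma, epsilon
HI = np.array([1000.0, 0.009, 0.009])  # upper bounds

def fitness(pos):
    """Cross-validated RMSE of SVR at this (C, gamma, epsilon) position."""
    c, g, e = pos
    mse = -cross_val_score(SVR(C=c, gamma=g, epsilon=e), X, y,
                           scoring="neg_mean_squared_error", cv=3).mean()
    return mse ** 0.5

W, C1, C2 = 0.5, 1.0, 2.0              # inertia and acceleration coefficients
n_particles, n_iter = 8, 10            # reduced from the paper's 50 particles
pos = rng.uniform(LO, HI, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
g_idx = pbest_fit.argmin()
gbest, gbest_fit = pbest[g_idx].copy(), pbest_fit[g_idx]

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # Velocity update pulled toward personal (Pbest) and global (Gbest) optima.
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, LO, HI)   # keep particles inside the search box
    fit = np.array([fitness(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    if fit.min() < gbest_fit:
        gbest, gbest_fit = pos[fit.argmin()].copy(), fit.min()

print("best C, gamma, epsilon:", np.round(gbest, 4), "RMSE:", round(gbest_fit, 3))
```

The GSVR variant differs only in the search operators: selection, crossover, and mutation replace the velocity update.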
In the PSO method, each resulting hyperparameter combination is evaluated for its fitness value based on the Pbest and Gbest values over the iterations and the predetermined PSO parameters, so that the best hyperparameter combination with the smallest RMSE value is obtained.

2) Optimization of SVR hyperparameters using the genetic algorithm (GSVR Modeling): GA is applied as an optimizer for the SVR hyperparameters C, gamma, and epsilon. The GA parameters are initialized as follows: 50 individuals, a mutation coefficient of 0.01, and a cross-over coefficient of 0.5. The number of iterations is varied over 50, 100, 250, and 500. Based on the GSVR Modeling results in Table IV, the best RMSE value of 1.830 was obtained at the 250th iteration with the optimal SVR hyperparameter combination C, gamma, epsilon = [100, 0.001, 0.008]. In GSVR Modeling, the optimal hyperparameters likewise result from searching the predefined SVR hyperparameter ranges using the GA search stages. In the GA method, each resulting hyperparameter combination is evaluated for its fitness value through the GA cycle of selection, cross-over, and mutation, selecting the best individual over the iterations, so that the best hyperparameter combination with the smallest RMSE value is obtained. From these experiments, the student performance models PSVR Modeling and GSVR Modeling were obtained, which are then compared with the student performance models from previous studies.

C.
Comparison and Analysis of Results

In this study, student performance prediction models were built using the SVR Algorithm with hyperparameters optimized by Metaheuristic Algorithms, producing the proposed models PSVR Modeling in Table III and GSVR Modeling in Table IV. In the PSVR Modeling experiment the optimal SVR hyperparameters gave an RMSE value of 1.608, and GSVR Modeling with its optimal SVR hyperparameters produced an RMSE value of 1.830. Comparing these with the student performance prediction models obtained by previous research using Machine Learning Algorithms in Table II yields the comparison shown in Fig. 2. Fig. 2 shows that PSVR Modeling achieves the best result, with the smallest RMSE value of 1.608, compared to the RMSE of GSVR Modeling and of the student performance prediction models from previous studies. It can also be seen that the plain SVR algorithm has the highest RMSE value, 2.09, among the Machine Learning algorithms used in previous studies. This research thus shows that optimizing the hyperparameters of the SVR algorithm reduces the error, i.e., increases the accuracy, of student performance prediction. Fig. 3 shows the gain in accuracy from hyperparameter optimization with the Metaheuristic Algorithms for the SVR Algorithm. The experimental results show an increase in the accuracy of the SVR algorithm with the proposed models. GSVR Modeling shows an accuracy improvement of 12.44%, reducing the RMSE from 2.09 to 1.830 compared to the plain SVR Algorithm. The best improvement is PSVR Modeling, which shows an accuracy improvement of 23.06%, reducing the RMSE from 2.09 to 1.608 compared to the plain SVR Algorithm.
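The reported improvement percentages follow directly from the RMSE values as the relative reduction against the plain-SVR baseline, improvement = (RMSE_baseline − RMSE_new) / RMSE_baseline:

```python
# Relative RMSE improvement over the plain-SVR baseline (RMSE = 2.09).
baseline = 2.09

def improvement(rmse_new, rmse_base=baseline):
    """Percentage reduction in RMSE relative to the baseline."""
    return round((rmse_base - rmse_new) / rmse_base * 100, 2)

print(improvement(1.830))  # GSVR Modeling -> 12.44
print(improvement(1.608))  # PSVR Modeling -> 23.06
```

These match the 12.44% and 23.06% figures quoted in the text.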
V. CONCLUSION

An accurate Machine Learning model for student performance prediction makes it possible to determine appropriate strategies for improving student learning outcomes. In previous research [5], student performance prediction models built with several machine learning algorithms were compared. This study has developed a predictive model for student performance by optimizing the SVR hyperparameters with Metaheuristic Algorithms, namely PSO and GA, producing two proposed models, PSVR Modeling and GSVR Modeling. PSVR Modeling achieves the best prediction accuracy compared to the other Machine Learning Algorithms, with an RMSE value of 1.608, an improvement of 23.06% over the plain SVR Algorithm. This experiment shows that the proposed model can be used to predict student performance in the future, and that selecting optimal hyperparameters for the SVR Algorithm demonstrably increases accuracy in predicting student performance. Future research is expected to experiment with more varied ranges for the C, gamma, and epsilon hyperparameters, which may yield even better predictive accuracy.

VI. FUTURE WORK

In future research, student performance prediction modeling will be developed further using feature selection methods with Metaheuristic Algorithms. Feature selection will identify the features that influence student performance predictions and increase the accuracy of the resulting model.
ACKNOWLEDGMENT The first author is a doctoral student at the Faculty of Engineering, Universitas Sriwijaya. The authors would like to thank Universitas Sriwijaya for their support in carrying out this research.
Prophylactic Approach to Industrial Electro-Traumatism Taking into Account Individual Psycho-Physical Peculiarities of the Worker and Network Digitalization

Abstract. The article assesses electrical safety in electrical installations and considers the risks and factors affecting a working person. Electrical injury accounts for a relatively small percentage of industrial injuries overall, yet injuries with severe and fatal outcomes occupy a leading place among them. That is why electrical safety issues need the closest attention. To ensure safe working conditions during the operation of electrical installations, it is necessary to know how electric current acts on the human body, the conditions of exposure to hazardous voltage, and the protective measures against electric shock. The topic of the article is of great practical importance both for those interested in electrical safety issues and for employees and enterprises of the energy industry.

Introduction

Preserving the life and health of an organization's employees is a priority direction of state policy in the field of labor protection and in the activities of all employers; other business interests come second. This dictates the need for systematic work in the field of labor safety in all organizations, aimed at reducing the risk of injuries and occupational diseases. The study of the causes of injuries and the development of appropriate measures to reduce them is therefore a very urgent task. Specific occupational risks inherent only in the electric power industry, the most serious of which is the danger of electric shock, oblige employers to enforce strict observance of all labor protection requirements and of production and labor discipline by all employees. Over the past ten years, Russia has maintained a positive trend of decreasing occupational injuries in enterprises of all types across all sectors.
The Rostrud data show the dynamics of fatal injuries both for the Russian Federation as a whole and for the economic activity "Production and Distribution of Electricity, Gas and Water". The statistics are shown in Figure 1.

Results and discussion

Consider the distribution of accidents by type of facility: 3.6% of fatal accidents occurred in heat generating plants and heating networks, 35.7% in consumer electrical installations, 3.6% in thermal power plants, and 57.1% in electric networks. This information is presented as a diagram in Figure 2. According to official statistics, for the Russian Federation as a whole and for the activity "Production and Distribution of Electricity", there is a tendency toward a decrease both in the total number of accident victims in any field of production and in the number of fatal injuries. Thus, the work carried out by employers to eliminate the circumstances and causes of injuries (including fatal ones) gives positive results and the hope of approaching the standards of safe work in the workplace. The main traumatic factor at any energy enterprise is electric shock. The causes of the accidents that occurred in 2017 are shown in Figure 3. Analysis of the industrial accidents that occurred in 2017 in organizations of the Russian Federation showed that the main causes were: a fall of the injured person from a height (12% of the total number of accidents); exposure to rotating, flying, or moving objects, parts, machines, and mechanisms (14% of cases resulting in death); and electric shock (74%). The frequency of injuries from electric shock in developed countries is 2-3 cases per 100,000 population [4]. Annually, 22-25 thousand people die worldwide as a result of exposure to electric current, and among fatal outcomes at work, electric trauma takes fifth place [6].
The analysis showed that the total number of accidents at electric power enterprises in 2017 decreased compared to 2016 but remains high. The main causal factors of the accidents are:
- failure to carry out technical and organizational measures in preparation for work;
- unauthorized expansion of the workplace and of the scope of the task;
- deliberate violation of safety rules during the performance of work;
- insufficient qualification and experience of the staff.
These causes indicate that accidents involving personnel occur due to a lack of basic knowledge and inadequate qualification and training. This is especially true for remote areas of energy enterprises, where the selection of qualified personnel for electrotechnical positions is difficult. It has been established that the younger the employee, the higher the frequency of electrical injuries, which is attributable to a low level of qualification [14]. However, the probability of electrical injury is also high for personnel with extensive experience and a high level of qualification, because they carry out the bulk of the work, and therefore their probability of coming under voltage is higher than that of workers with little experience. A high level of electrical injuries is also observed when work is carried out overtime, due to psychophysiological factors (inattention, fatigue, etc.). Identifying and assessing risks and hazards is an important task in the fight against electrical injuries: these skills make it possible to manage the risks and to train personnel to recognize the main production risks. Risk is a combination of the probability of a hazardous event or exposure and the severity of the injury or ill health that could be caused by such an event or exposure. When assessing possible risks, it is necessary to proceed from the severity of the consequences and the measures available to manage them.
Management activities should be proportionate to risk. The following should be considered:
- which risks can be eliminated;
- which risks can be managed;
- which effective measures can be taken.
Key risk management measures:
- compliance with instructions and rules in full;
- use of personal protective equipment (PPE);
- observance of labor discipline;
- self-checking and mutual checking.
Key principles in identifying and eliminating risk:
- perform a risk assessment;
- develop and take measures to eliminate the risk;
- if the risk cannot be eliminated, take individual or collective protective measures.
Consider approaches to identifying risks from the stage of task formulation up to the moment of the accident itself. According to statistics, the frequency of situations that lead to injury follows a pyramid-shaped law, at the base of which lie the hazardous conditions occurring in the workplace: behind every death stand from a thousand to several tens of thousands of dangerous conditions. This principle is described in the theory of F. Bird, which clearly shows the statistical distribution of accidents (Figure 4). As can be seen from the figure, a minor violation that is not given an appropriate assessment develops into injuries of varying severity. This theory points the way to managing industrial safety: to change the quantitative indicators at the top of the pyramid, it is necessary to change its base. From the foregoing, we can conclude that one of the principles of ensuring labor safety is combating the sources of risk. In addition, there is the Bradley curve, an effective tool for improving occupational safety. According to this pattern, the more joint effort is made to ensure electrical safety, the lower the likelihood of accidents. Employees must ensure the safe execution of work not only under the influence of instinct and management supervision but also by acting proactively, both personally and as a team.
Over the long history of the development of the electric power industry, a whole system of labor protection management has been created. It makes an integrated approach to safety issues possible; however, the level achieved still cannot be considered satisfactory. In general, the problem of "man and technology" is one of the main problems of modern science. Its solution requires the collaboration of physiologists, mathematicians, psychologists, engineers, anatomists, and representatives of many other scientific disciplines, because the problem demands an integrated approach. There is a need for a fundamentally different approach to the analysis of "man - electrical installation" systems: the task of studying a person as an operator (and only as an operator) turns into the task of studying the operator as a person [9]. The factors affecting a working person are presented in Figure 5. When selecting workers for enterprises where work is based on the "man-machine" system, and especially if the work involves responsibility and a threat to life, the mental state of the person should be carefully considered, for example by assessing temperament and character; the properties of temperament play a particularly important role in activities carried out under extreme time pressure and threat to life. Character anomalies associated with disturbances of the emotional-volitional sphere are called "psychopathies"; four types are usually distinguished (Figure 6). The possibility of admitting persons with psychopathic and psychopathological traits must be excluded, which is implemented in psychological testing procedures. For example, in personnel selection the type of higher nervous activity should be taken into account, since people with different types of higher nervous activity react differently to various stimuli.
In people of choleric temperament, nervous processes are characterized by a predominance of excitation over inhibition; it is difficult for such an employee to control his actions, and the risk of industrial electrical injury therefore increases. A significant risk factor is stress. The state of mental tension that arises in a person under the most difficult conditions of his activity, not only in everyday life but also in extreme situations, can provoke electrical injury. Under stress, the electrical resistance of the skin changes; it can range from 2,000 to 2,000,000 ohms. The resistance of the skin of the face and the back of the hand is in the range of 10,000 to 20,000 ohms; of the thighs, about 2,000,000 ohms; and of the palms and soles, from 200,000 to 2,000,000 ohms. For comparison, internal organs and tissues have a resistance of only about 500-1,000 ohms. The resistance of the human body fluctuates over a very wide range depending on the state of the body and environmental conditions, and is therefore of great importance for the outcome of an electrical injury, especially at voltages up to 500 V (at higher voltages, resistance matters less). The condition of the skin at the moment of injury can significantly affect the nature of the electrical trauma: thick, dry skin, for example, has high resistance. Skin moisture is also of great importance: the resistance of skin moistened with water drops by 40%, which explains the increased risk of electrical injury in the hot season and in hot, humid rooms. The number of sweat glands in the skin, the degree of its blood supply, and contamination with various substances also matter. The condition of the skin likewise depends on the state of the organism as a whole: its reactivity and the state of the nervous, endocrine, and other systems.
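Why this resistance range matters follows directly from Ohm's law, I = U / R. A minimal sketch, using the resistance bounds quoted above and an assumed 220 V mains contact voltage for illustration:

```python
def body_current_ma(voltage_v, resistance_ohm):
    """Current through the body in milliamperes by Ohm's law: I = U / R."""
    return voltage_v / resistance_ohm * 1000.0

# Resistance bounds from the text: up to 2,000,000 ohm for dry skin,
# as low as 2,000 ohm under stress or when the skin is moist.
dry_skin = body_current_ma(220.0, 2_000_000)  # ~0.11 mA, barely perceptible
moist_skin = body_current_ma(220.0, 2_000)    # ~110 mA, life-threatening
```

The thousandfold drop in resistance turns a barely perceptible current into one well above commonly cited fibrillation thresholds, which is exactly why stress and sweating are treated as major risk factors.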
Therefore, skin resistance differs between people, between different parts of the body of the same person, and even in the same area at different times [2]. According to a number of studies, up to 20.3% of electricians experience stress at the workplace, especially those working on power grids of a high voltage class [11]. Stress at work is a reaction to demands placed on employees that do not correspond to their level of knowledge and skills, or to the need to act under time pressure or with insufficient information. Occupational stress is a diverse phenomenon, expressed in mental and physical reactions to stressful work situations. Stress is present even in well-managed organizations, since it depends not only on structural and organizational characteristics and the nature of the work but also on the interpersonal relationships of employees. All employees are subject to stress regardless of workload, and stress in turn reduces a person's working capacity, so both organizations and individuals are deeply concerned about stress and its consequences. Approaches to managing stress are presented in Figure 7, which involves selecting one of the possible solutions. Thus, the data presented provide clearer guidelines for planning and developing corrective programs in occupational safety psychology, thereby increasing the effectiveness of safety measures. The experience of psychologists shows that a positive effect is achieved by recognizing, individually for each employee, which professionally important qualities are insufficient, and by the phased formation of adaptive ways of developing them. One important direction for increasing the effectiveness of safety is the digitalization of the grid, which cannot succeed without implementing the concept of the "Digital Electrician".
The "Digital Electrician" is a concept for an organizational, hardware, and software complex designed to increase the safety of work at electric grid facilities and to automate the planning, execution, and control of that work. Increasing the effectiveness of organizations through digital resources is reflected, for example, in Russia's strategy for the development of the information society for 2017-2030, which focuses on the mass introduction of digital interaction tools in complex systems, in particular integrated production structures. The "Digital Electrician" project is currently being implemented in the electric power industry. It aims to solve problems associated with violations of labor protection rules during work on operating electrical installations. The project implies control ensured by two mechanisms: daily monitoring of each worker's labor protection knowledge, and visualization, that is, the psychological preparation of the employee to comply with labor protection requirements. Daily monitoring can be presented as a diagram (Figure 8). The electrician turns on a smartphone, enters a password (the personnel number can be used), and takes a test of three questions, each with three possible answers. If the answers are correct, the work permit is issued; if an answer is wrong, the work permit is still issued, but the next day an additional question is posed on the topic in which the mistake was made. At the end of the month, the immediate supervisor analyzes the test results to identify gaps in the employee's knowledge, on the basis of which a technical training plan is formed.
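The daily-check flow described above can be sketched as a small decision function. This is a hypothetical illustration of the logic, not the project's actual software; the topic names are invented examples.

```python
# Sketch of the daily "Digital Electrician" knowledge check: the permit is
# granted either way, but each wrongly answered topic is queued as a
# follow-up question for the next day's test.
def daily_check(answers, topics):
    """answers: list of booleans (correct/incorrect), one per question;
    topics: the topic of each question, same order."""
    wrong_topics = [t for t, ok in zip(topics, answers) if not ok]
    return {
        "permit_granted": True,           # issued in both cases
        "followup_topics": wrong_topics,  # revisited the next day
    }

result = daily_check([True, False, True], ["PPE", "grounding", "lockout"])
```

Accumulating the `followup_topics` lists over a month is what gives the supervisor the material for the technical training plan.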
Visualization, the psychological preparation of the employee to comply with labor protection requirements, consists in the following: before arrival at the work site, everyone named in the work permit (the person authorizing access, the work supervisor, crew members, the slinger, the worker at height, and the crane-manipulator operator) receives visual and psychological preparation covering the safety measures for the upcoming work specified in the permit. To illustrate this principle, consider permit-based work on "Replacing a wooden support with a reinforced concrete support", shown as a diagram in Figure 9. As Figure 9 shows, training is carried out for each category of workers according to the type of work they perform, thereby actually preparing them for its immediate execution. The "Digital Electrician" project will require staff to master new knowledge and the skills of working with high-tech production tools and computer technologies. The formation of such competencies and skills will accelerate adaptation to new knowledge, which is relevant in the context of dynamically changing technologies in the electric power industry. Ultimately, conscious acceptance of the "Digital Electrician" concept reduces the desynchronization of the electrician's mental and emotional processes, increases his adaptive potential, and helps identify employees at risk. The individual component of adaptive potential is assessed by measuring indicators that characterize the stability of various body systems under adaptive loads and the rate at which the measured parameters return to optimal values, highlighting the patterns of organization and course of the systemic adaptive reaction at the level of the organism and the formation of adaptive neoplasms [4]. The individual characteristics of adaptive potential, reflecting the functional state of the body, are one of the significant components of the "Digital Electrician" project.
Regarding the impact of the "Digital Electrician" on psychophysiological indicators, the following should be noted. The ability to independently monitor local and summary tasks under time constraints, to control communications, and to rely on essentially standard response algorithms helps electricians perform their duties more accurately. Such important psychophysiological characteristics as memory and attention will improve with daily training, or will at least remain at the proper level in workers of older age groups. Creating a personalized, easily configurable digital workplace with proper prioritization and training elements, in addition to its disciplining and motivating effect, can also positively affect an employee's emotional state; we predict a decrease in the level of psychoemotional stress among electricians. In general, the effects of introducing the "Digital Electrician" concept are: a reduced risk of injury; increased productivity; greater transparency of work processes and tasks; a decrease in the volume of paperwork; and automation of the staff development system.

Conclusions. Summing up, it is important to note the need for a systematic approach to ensuring the reliability of staff's professional activities. Minimizing the negative impact of the human factor on the risk of injury is possible provided there is timely diagnosis and regular corrective and preventive work, one element of which is the formation of the professionally important qualities of electricians that affect work safety.
Translating and validating a Japanese version of the Patient Care Ownership Scale: a multicenter cross-sectional study Background Patient care ownership (PCO) is an essential component in medical professionalism and is crucial for delivering high-quality care. The 15-item PCO Scale (PCOS) is a validated questionnaire for quantifying PCO in residents; however, no corresponding tool for assessing PCO in Japan exists. This study aimed to develop a Japanese version of the PCOS (J-PCOS) and validate it among Japanese medical trainees. Methods We performed a multicenter cross-sectional survey to test the validity and reliability of the J-PCOS. The study sample was trainees of postgraduate years 1–5 in Japan. The participants completed the J-PCOS questionnaire. Construct validity was assessed through exploratory and confirmatory factor analyses. Internal consistency reliability was examined by calculating Cronbach’s alpha coefficients and inter-item correlations. Results During the survey period, 437 trainees at 48 hospitals completed the questionnaire. Exploratory factor analysis of the J-PCOS extracted four factors: assertiveness, sense of ownership, diligence, and being the “go-to” person. The second factor had not been identified in the original PCOS, which may be related to a unique cultural feature of Japan, namely, a historical code of personal conduct. Confirmatory factor analysis supported this four-factor model, revealing good model fit indices. The analysis results of Cronbach’s alpha coefficients and inter-item correlations indicated adequate internal consistency reliability. Conclusions We developed the J-PCOS and examined its validity and reliability. This tool can be used in studies on postgraduate medical education. Further studies should confirm its robustness and usefulness for improving PCO. Supplementary Information The online version contains supplementary material available at 10.1186/s12909-021-02853-y. 
Background Medical professionalism has received increasing attention in recent years [1,2]. In medical education, it has become an indispensable core competence. In 2002, professional attributes were enshrined in the Physician's Charter on Medical Professionalism [3,4]. This charter has now been endorsed by numerous national and international professional associations [5], thereby reflecting the growing importance of medical professionalism. Patient care ownership (PCO) is a commonly recognized and crucial component of medical professionalism [6]. It has been defined as a cognitive-affective state in which physicians apply intellectual and emotional components during decision-making [7,8]. PCO is considered an important competency to develop during residency training [6]. Developing the PCO of medical trainees is supposed to enhance their responsibility and accountability for patient care and to improve their clinical skills and patient outcomes [6]. However, since the implementation of duty-hour restrictions by the American Accreditation Council for Graduate Medical Education, concerns regarding the erosion of PCO among medical trainees have grown [7,[9][10][11]. Although various qualitative studies have been conducted on PCO, no quantitative measurement tools had been available to quantify it among residents until the PCO Scale (PCOS) was developed; it was developed and validated in the United States in 2019 [12]. The original PCOS questionnaire is a 15-item tool. The items represent eight different constructs associated with PCO: advocacy (three items); responsibility, accountability, and follow-through (four items); knowledge (one item); communication (one item); initiative (one item); continuity of care (one item); autonomy (three items); and perceived ownership (one item). The responses to these items are given on a seven-point Likert scale that ranges from 1 = strongly disagree to 7 = strongly agree. 
Exploratory factor analysis extracted three factors defined as assertiveness, being the "go-to" person, and diligence. The PCOS is intended for use in investigating interventions to nurture PCO and exploring the ways through which PCO influences physicians' decision-making, behaviors, and patient outcomes. In Japan, medical care has a history of being reliant on the overwork of doctors, particularly that of young physicians [13,14]. Specifically, 40 % of doctors perform a level of work that exceeds the standard working hours put in by workers in other industries, and over 10 % of physicians work more than 1,860 h of overtime per year, which is approximately twice the karoshi line, that is, the number of hours beyond which a death is speculated to be related to overwork [15]. Owing to this serious problem of overwork, the government has passed restrictions concerning working hours that will go into effect for physicians in April 2024. The availability of a Japanese version of the PCOS will enable Japanese physicians, in the coming era of duty-hour regulations, to evaluate trainees' PCO and to provide feedback to them. However, because the concept of PCO originated in Western countries, revalidating the assessment tool so that due attention is paid to the immediate cultural context is crucial. In addition, although the original questionnaire was intended solely for residents in internal medicine, its benefits could be broadened if it were expanded and used for trainees in other departments as well. In this study, we aimed to develop a Japanese version of the PCOS (J-PCOS) for trainees from various medical specialties rather than only internal medicine. Moreover, we also sought to examine the validity and reliability of the instrument. Setting In Japan, medical students pass through a six-year undergraduate medical course of study, followed by a national licensing examination. 
The undergraduate program typically comprises four years of preclinical education and two years of clinical education. Those who pass the national licensing examination and aim to practice clinical medicine proceed to an obligatory two-year initial postgraduate clinical training program [16]. In this system, all trainees spend two years rotating through multiple specialties. The clinical departments that a trainee passes through within this rotation must include the following seven specialties: internal medicine, surgery, emergency medicine, pediatrics, psychiatry, community medicine, and obstetrics and gynecology. Trainees are also required to obtain clinical experience at a general ambulatory site [17]. Subsequently, and only after the two-year training, young physicians enter an advanced postgraduate clinical training program for medical specialties, which spans three years or more [18]. Translation process With the consent of the original author (MD), we translated the PCOS into Japanese following suggested guidelines for the cross-cultural adaptation of self-reported measures [19]. Translators 1 (HF), 2 (DS), and 3 (KK) independently translated the PCOS into Japanese (Stage I, Translation) and subsequently worked together to coordinate the three translations to produce a complete draft (Ver. 1) (Stage II, Synthesis). All the translators were familiar with the cultures of the environments in which each language is used, and Translator 3 (KK) was a fluent bilingual speaker of English and Japanese. We then asked bilingual individuals who were not involved in this study and had no knowledge of the original scale to back-translate the draft. A pilot test was conducted among 21 respondents, who were then interviewed to determine whether the instrument was comprehensible and whether it had been understood as intended (Stage V, Pretesting). Because no problematic items emerged in the pilot test as a result of the translation process, we decided to consider Ver. 4 the final version.
All the authors confirmed the instrument's face and content validity. Data collection After obtaining their contact information from the Residency Electronic Information System, a database of teaching hospitals developed and maintained by the Japanese Ministry of Health, Labor, and Welfare, we communicated with 186 postgraduate clinical training hospitals in Japan. In total, 48 hospitals agreed to cooperate with our multicenter crosssectional study. Between December 2020 and January 2021, we distributed an anonymous online survey of the PCOS to all the potential participants (n = 2688, postgraduate years [PGY] 1-5) at the 48 hospitals. Approximately three weeks after the initial invitation, reminders were sent out to increase the response rate. Ethical considerations All the participants provided consent to participate in the study. This study was approved by the Institutional Review Board of the University of Tokyo (2019362NI). Statistical analysis The construct validity of the J-PCOS was examined through both an exploratory and a confirmatory factor analysis (EFA and CFA, respectively). Because we aimed to develop a scale optimized for the Japanese medical education culture, EFA was performed first in this study. First, in the EFA, a maximum likelihood estimation via the promax rotation method was used to explore the structure of the items. The number of factors to be extracted was determined by checking the initial eigenvalues for each factor and the scree plot. The cut-off value for factor loadings was set at 0.35. Second, the factor structure identified in the EFA was further validated by means of a CFA. The model fitness was assessed with the comparative fit index (CFI), the Tucker-Lewis index (TLI), and the root mean square error of approximation (RMSEA). The guidelines suggest that the CFI and the TLI values should be close to or above 0.95 and the RMSEA should be close to or below 0.06 for a good model fit [20]. 
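The fit-index screening described above can be sketched as a small helper. The strict thresholds follow the cited guidelines (CFI and TLI at or above 0.95, RMSEA at or below 0.06); note that the article itself accepts values "close to" these cutoffs, so treating them as hard limits is an assumption for illustration.

```python
# Screen CFA model-fit indices against guideline cutoffs (assumed strict here).
def good_fit(cfi, tli, rmsea, cfi_cut=0.95, tli_cut=0.95, rmsea_cut=0.06):
    """Return True when all three indices meet the guideline cutoffs."""
    return cfi >= cfi_cut and tli >= tli_cut and rmsea <= rmsea_cut

# The J-PCOS values (CFI = 0.955, TLI = 0.941, RMSEA = 0.066) narrowly miss
# the strict cutoffs but are "close to" them, which the authors accept.
strict_pass = good_fit(0.955, 0.941, 0.066)
```

Keeping the cutoffs as parameters makes the leniency explicit: a reader can relax `tli_cut` or `rmsea_cut` to match the "close to" interpretation.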
Third, the internal consistency reliability was examined using the Cronbach's alpha coefficient and interitem correlations. A Cronbach's alpha value of 0.70 or higher indicates an acceptable internal consistency, and an inter-item correlation of 0.30 or higher is considered to indicate acceptable reliability [21]. Finally, the descriptive statistics of the factors and overall scale were extrapolated. All the data were analyzed using SPSS Statistics 27.0 (IBM Japan; Tokyo, Japan) and AMOS 23.0 (IBM Japan; Tokyo, Japan). Respondents' characteristics Of the 2,688 eligible participants, 437 (16.3 %) responded to the survey. There were no missing values in any of the responses. Table 1 shows the characteristics of the respondents. Although race or ethnicity was not asked in the questionnaire, most doctors working in Japan are Japanese. It can be assumed that most of the respondents to the questionnaire are Japanese. Construct validity In EFA, of the 15 items, the first item measuring responsibility, accountability, and follow-through dimension and the second item measuring autonomy dimension were excluded because their factor loadings were less than 0.35; the remaining 13 items were used for analysis. In total, four factors with factor loadings of 0.35 or greater were identified. The cumulative contribution rate of the four factors was 55.9 % ( Table 2). Next, a CFA was performed on these 13 items to determine the fit for the four-factor model (Fig. 1). All the factor loadings for each item onto each factor exceeded 0.35. The indices of the model fit were good (CFI = 0.955, TLI = 0.941, and RMSEA = 0.066). Following a discussion among the researchers, the four factors determined in this analysis were labeled as follows: assertiveness, sense of ownership, diligence, and being the "go-to" person. Table 3 shows the internal consistency of and score distribution for the J-PCOS. The overall Cronbach's alpha coefficient for the J-PCOS was 0.90. 
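The internal-consistency statistic used above can be computed with a few lines of plain Python. This is the standard Cronbach's alpha formula (alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)), shown as a self-contained sketch rather than the authors' actual SPSS workflow.

```python
# Minimal Cronbach's alpha: items is a list of item-score lists, each with
# one score per respondent, all of equal length.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    k = len(items)                                   # number of items
    totals = [sum(scores) for scores in zip(*items)] # total score per respondent
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))
```

Perfectly correlated items yield alpha = 1, and uncorrelated items drive alpha toward 0, which is why short subscales such as the two-item diligence factor can dip below the 0.70 criterion even when the items are reasonable.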
For the factors of assertiveness, sense of ownership, and being the "go-to" person, all the Cronbach's alpha coefficients were above 0.70. However, the factor of diligence slightly failed to meet the 0.70 criterion. Internal consistency and descriptive statistics The highest self-evaluation for PCO was observed in the factor of the sense of ownership (Factor 2), followed by that of assertiveness (Factor 1); the factor with the lowest scores was being the "go-to" person (Factor 4). Thus, we had obtained a final version of the questionnaire in Japanese (Additional file 1). Discussion In this study, we translated and validated the 13 items developed for the J-PCOS. Both construct validity and internal consistency reliability were maintained by following a translation process for the items. To the best of our knowledge, the present research is the first to develop a Japanese version of the original scale. Psychometric analysis methods were employed to evaluate the J-PCOS. Although the factor analysis supported the construct validity of the scale, the J-PCOS differed from the original PCOS in its factor structure. In particular, we extracted four factors, including a factor labeled "sense of ownership", which was not identified in the original PCOS. This discrepancy may be due to a unique attribute of Japanese culture. Trainees in Japan take pride in the hard work that they perform and display a substantial amount of commitment [22]. The Japanese spirit of self-sacrifice, which is expressed throughout their medical careers, is a core quality of Bushido, the moral code of personal conduct that originated among the samurai-the ancient warriors of Japan. Although Japanese society is changing, this tradition continues to impact doctors and patients' expectations of them [22][23][24]. In the evaluation of internal consistency reliability, the Cronbach's alpha value for diligence (Factor 3) was not above the 0.70 threshold. 
However, because Cronbach's alpha values are considerably sensitive to the number of items in a scale, low Cronbach's alpha values are common in short scales, especially two-item scales [25]. In such cases, it is more appropriate to report inter-item correlations. In this study, all inter-item correlations exceeded the optimum criterion, thereby indicating adequate internal consistency reliability of the scale. The findings of this multicenter, cross-sectional study show that the PCOS is a useful tool for measuring PCO in Japanese settings and that it exhibits good reliability and validity. The J-PCOS could be used to investigate educational programs that are aimed at developing ownership, to explore how ownership influences patient outcomes, and to conduct research on postgraduate medical professionalism. When using the J-PCOS, it is expected that the total score will be utilized. The factor scores for each of the four factors may also be useful in clinical education settings when detailed information on PCO is required. Several potential limitations should be acknowledged. First, the response rate to the survey was relatively low, representing potential selection bias. In general, online surveys are much less likely to achieve a high response rate than paper-based surveys [27]. Because it is not uncommon for web surveys to have a response rate of 10 % or less [28], the response rate herein is considered acceptable. Second, although we assessed construct validity and internal consistency reliability, other forms of validity and reliability were not evaluated. For example, criterion-related validity, which could further consolidate the scale's robustness, should be assessed. However, the lack of other validated scales prevented this examination. Test-retest reliability was also not evaluated.
These properties of the scale should be examined in future studies. Third, we performed EFA and CFA on the same sample. The validity of the study might have been increased by using a larger sample and randomly splitting it into two independent groups (i.e., split-half validation). However, an insufficient sample size, due in part to difficulties caused by the coronavirus disease 2019 pandemic, prevented us from using this method. Finally, this scale was designed for trainees working in an inpatient setting. Future research should revise the scale for application in outpatient settings as well as for attendings. Conclusions We translated the PCOS into Japanese to create the J-PCOS and verified its construct validity and internal consistency reliability. This scale can be used to investigate postgraduate medical professionalism. Further research to consolidate the robustness of the J-PCOS is warranted. Additional file 1. Japanese version of the Patient Care Ownership Scale.
Effects of Four Photo-Selective Colored Hail Nets on an Apple in Loess Plateau, China : Hail, known as an agricultural meteorological disaster, can substantially constrain the growth of the apple industry. Presently, apple orchards use a variety of colored (photo-selective) hail nets as a preventative measure. However, it is unclear which color proves most effective for apple orchards. This study provides a systematic investigation of the impact of four photo-selective colored hail nets (white, blue, black, and green; with white being the control) on the microenvironment of apple orchards, fruit tree development, fruit quality, and yield over a two-year period (2020–2021). Different photo-selective nets do not evidently alter the intensity of light, although the nets’ shading effects decrease in the order from black to green to blue. Among them, blue nets increased the proportion of blue light, while green nets enhanced the proportion of green light. On the other hand, black, green, and blue nets diminished the proportion of red and far-red light. Such photo-selective nets effectively lowered soil temperature but did not have an impact on relative humidity and air temperature. Encasing apple trees with blue nets promoted growth, increasing shoot length, thickness, leaf area, and water content, while simultaneously decreasing leaf thickness. Black nets had comparable effects, although the impacts of green nets were inconsistent. Different photo-selective nets did not significantly influence the leaf shape index or overall chlorophyll content. However, black and green nets reduced the chlorophyll a/b ratio, while blue nets slightly boosted this ratio. Additionally, blue nets proved beneficial for apple trees’ photosynthesis. 
With the employment of a principal component analysis and comprehensive evaluation, this study concludes that blue nets offer the most favorable environmental conditions for apple growth while protecting apple orchards against hail, compared to black, white, and green nets. Introduction Apple is a globally important fruit crop, both economically and nutritionally. The Loess Plateau is the largest apple-growing area in the world, with apple cover and yield of 1.3 million ha and 23 million tons, respectively, accounting for 25.2% of global land cover and 26.3% of global apple production in 2016 [1]. However, the Loess Plateau is vulnerable to various environmental factors, including hailstorms, which can cause substantial damage to apple trees and their fruit. Hail damage not only impacts fruit production within the current season but also affects fruit yield in subsequent seasons by harming flower buds [2]. Traditional anti-hail measures, such as cloud seeding, anti-hail guns, nanocomposites, or expanding planting areas have proven to be expensive and ineffective [3]. Previous studies have indicated that hail nets can impact various environmental factors, including light, air flow, temperature, and humidity. Recently, photo-selective colored netting, a promising agro-technical approach, has emerged as an alternative solution that utilizes nets that not only offer vital protection against hail, wind, pests, and excessive solar radiation but also alter the quality of transmitted light [4,5]. By selectively manipulating light wavelengths, the photo-selective netting optimizes plant growth and enhances crop quality [2,5-9]. Therefore, it is crucial to understand the impact of photo-selective nets on apple tree physiology and fruit quality to effectively utilize anti-hail nets and maximize their benefits.
One of the primary advantages of photo-selective nets is their ability to reduce the amount of solar radiation reaching the orchard environment beneath them [2]. The subtle shading effect caused by photo-selective nets can decrease leaf temperature and evaporative demand, enhancing photosynthesis and subsequently promoting carbohydrate production, potentially resulting in improved yield quality [4,10,11]. Several studies have also emphasized the role of photo-selective nets in modifying the orchard environment, affecting factors such as light intensity, light quality, canopy temperature, air humidity, and soil temperature [7,9,12-14]. While photo-selective nets allow solar radiation to pass through, they also scatter it, mitigating its impact [5,11,13]. In a separate study conducted by Shahak et al., it was found that apple trees covered with red nets displayed a superior rate of leaf photosynthesis compared to those covered with blue, pearl, gray, and black nets [15]. Furthermore, an investigation comparing various protective netting colors discerned that the net photosynthesis rate in 'Fuji' apples showed notable elevation under blue and grey nets, as opposed to pearl-colored nets [16]. Variations in microclimatic conditions created by photo-selective nets have been found to significantly influence the physiological responses of fruit trees, which are closely linked to their growth, fruit production, and fruit quality [2,7,17,18]. In a comparative study on 'Mondial Gala' apples, Iglesias and Alegre reported that fruits grown under black nets exhibited significantly lower red coloration compared to those exposed to sunlight in three out of four growing seasons [19]. Similarly, Solomakhin and Blanke discovered that apple peels under photo-selective nets had higher chlorophyll levels but four to five times lower anthocyanin levels [20]. Furthermore, Blanke suggested the use of black nets specifically for monocolor green apple varieties and bicolor apple cultivars that
require good coloration [21].

Over the past decade, numerous field studies have consistently demonstrated that photo-selective nets have varying effects on vegetative and reproductive growth in a wide range of cultivated species, with red and yellow nets promoting vegetative growth and blue nets inducing dwarfism [22,23]. Conversely, gray and pearl nets have been found to effectively enhance branching in ornamental crops [24-26]. In the context of apple cultivation, Solomakhin and Blanke observed that different types of photo-selective nets, particularly the green-black type, resulted in increased vegetative growth compared to uncovered trees [27]. In contrast, Bastías et al. found that blue nets stimulated a higher rate of apple shoot growth compared to red, gray, and white nets [28]. Additionally, Giaccone et al. reported an improvement in the vigor of nectarine trees when cultivated under red nets [29].

The importance of optimal internal fruit quality is increasingly recognized by consumers, and studies have shown that the use of photo-selective nets can affect the internal quality of fruits, particularly apples [2]. For instance, the use of black nets has been found to increase the total acidity of apples compared to those grown without any covering [19]. The firmness of apple fruits, such as 'Fuji' and 'Pinova', varies depending on the type of photo-selective netting employed for cultivation, with apples grown under green-black and red-black netting yielding softer fruits compared to those grown under red-white nets, while the firmest fruits are produced in the control group without any netting [20]. In a study by Do Amarante et al., it was observed that 'Gala' apples grown under white net exhibited a significant decrease in fruit flesh firmness at harvest, in contrast to 'Fuji' apples [30]. Additionally, fruits grown under white nets showed a decrease in total soluble solids content, which was attributed to shading and resulted in reduced
carbohydrate reserves in the fruit, ultimately leading to lower levels of soluble sugar at commercial maturity [30].

This study aimed to assess the impact of four photo-selective nets (white, blue, black, and green) on apple orchards. Because of the frequent hailstorms on the Loess Plateau, it was impossible to use control plants exposed to direct sunlight. To evaluate the photo-selective effect of the different colors, the white net, the one most commonly used locally, was treated as the control. The evaluation encompassed environmental factors, growth and development indices, fruit quality, and overall yield. To determine the most effective color for orchard hail-net coverage, principal component analysis was employed for a comprehensive evaluation. The findings have both theoretical and practical implications, providing valuable insights for improving apple tree growth, fruit yield, and quality.

Plant Materials and Growth Conditions

The experiment was conducted at the Apple Research Center in Luochuan County, Shaanxi Province, China (109°32′40″ E, 35°42′28″ N), over two years, from June 2019 to November 2021. The orchard employed a dwarfing rootstock-mediated high-density planting system, with 4-year-old apple trees selected as the experimental material. The rootstock was M26 and the cultivar was Yanfu No. 8. Row and plant spacing were 3.5 m × 1.5 m.
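As a back-of-the-envelope check on the planting density implied by this spacing (the density itself is not reported in the source, so the figure below is derived, not quoted):

```python
# 3.5 m between rows x 1.5 m between trees within a row.
row_spacing_m = 3.5
plant_spacing_m = 1.5

area_per_tree_m2 = row_spacing_m * plant_spacing_m  # 5.25 m^2 per tree
trees_per_hectare = 10_000 / area_per_tree_m2  # one hectare = 10,000 m^2

print(f"{trees_per_hectare:.0f} trees/ha")  # about 1905 trees/ha
```

A density on the order of 1900 trees/ha is consistent with the high-density system described above.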
Based on the colors of the photo-selective hail nets, four treatments were established: white, blue, black, and green. The hail nets were installed 5 m above the ground in a roof-shaped structure, and were kept in place from April until the end of November each year. The nets were made of polyethylene with added UV stabilizers and antioxidants, in a quad-crossover weave with 4 × 7 mm mesh, 25 mm mesh size, 480 denier, and 60 gsm (Dongshen Development Ltd., Xiamen, China).

The experiment employed a randomized block design. Each colored net enveloped three rows of apple trees, comprising no fewer than 60 trees. Measurements were conducted on nine trees per colored net (treatment) within the central row to mitigate potential border effects.

Measurement of Air Humidity, Air Temperature, Light Intensity, and Light Quality

Temperature and illuminance were measured using a temperature and illuminance recorder (TPJ-22-G, Zhejiang topu yunnong Technology Co., Ltd., Hangzhou, China) and a spectroradiometer (HR-450, HiPoint, Taiwan, China) from 9:00 am to 5:00 pm in early August. The devices were placed 20 cm from the outer edge of the canopy and 1.7 m above the ground, roughly corresponding to the center of the canopy. To ensure precision and consistency, the measurements were repeated 10 times, and the results were recorded for subsequent analysis.

Measurement of Soil Temperature

To measure soil temperature, a soil thermometer (TPJ-21-G, Zhejiang topu yunnong Technology Co., Ltd., Hangzhou, China) was inserted 5 cm deep, 20 cm from the trunk. Each treatment had three biological replicates, and within each biological replicate three trees were selected as basic replicates. Data were recorded between 9:00 am and 5:00 pm in early August.
Measurement of New Shoot Growth

New shoot growth was assessed by measuring shoot length and diameter at the end of annual vegetative growth, in early August. A minimum of fifteen non-fruiting bourse shoots were selected per treatment.

Measurement of Leaf Relative Water Content

Leaf relative water content was determined following a previously described procedure [31]. Fully expanded leaves were collected and weighed fresh. The leaves were then soaked in water for 12 h and weighed again (saturated weight). Finally, the leaves were oven-dried to a constant weight (dry weight). Fifty leaves were selected per treatment. Relative water content was calculated using the following formula:

Leaf relative water content (%) = (Fresh weight − Dry weight) / (Saturated weight − Dry weight) × 100

Measurement of Leaf Area

Leaves were scanned with a scanner (Epson, Suwa, Japan), and leaf area was calculated with automatic leaf-area software. Fifty leaves were selected per treatment.

Measurement of Photosynthetic Parameters

Leaf photosynthesis was determined following previously described methods [31]. During the new shoot growth period, photosynthetic parameters were measured under sunny conditions from 9:00 am to 5:00 pm. For each treatment, three branches with consistent tree vigor were selected from each biological replicate, and the sixth mature leaf from the top of each branch was used for measurement. A portable LI-6400 photosynthesis system (LI-COR, Lincoln, NE, USA) was used to measure net photosynthetic rate, transpiration rate, intercellular carbon dioxide concentration, and stomatal conductance.
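As a minimal sketch, the relative water content formula translates into a one-line helper (the function name and gram units are illustrative, not from the study):

```python
def relative_water_content(fresh_g: float, saturated_g: float, dry_g: float) -> float:
    """Leaf relative water content (%) from fresh, water-saturated,
    and oven-dry weights, following the formula above."""
    return (fresh_g - dry_g) / (saturated_g - dry_g) * 100.0
```

For example, a leaf weighing 1.0 g fresh, 1.2 g after saturation, and 0.4 g dry has a relative water content of 75%.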
Determination of Relative Chlorophyll Content

The relative chlorophyll content of leaves in the top or mid-canopy was assessed by measuring the SPAD values of five selected leaves. Chlorophyll a and b were measured as described previously [32]. Briefly, fresh leaves were collected and the large veins removed. The leaves were cut into small pieces, and approximately 0.1 g of leaf fragments was weighed into a mortar. A small amount of 80% acetone was added, along with a pinch of calcium carbonate and quartz sand, and the mixture was ground to a homogeneous paste. Additional 80% acetone was added, the mixture was transferred to a centrifuge tube, and the volume was adjusted to 10 mL with 80% acetone. Extraction was carried out at room temperature in the dark for 24 h. The solution was then collected, and absorbance at 663 nm (A663) and 645 nm (A645) was measured against an 80% acetone blank. Chlorophyll a and b were calculated using the formulas:

Chlorophyll a content (mg/mL) = 12.72 × A663 − 2.59 × A645

Chlorophyll b content (mg/mL) = 22.88 × A645 − 4.67 × A663

Determination of Fruit Quality

For the assessment of external quality, 15 similarly sized fruits were randomly chosen from each treatment. Parameters such as fruit weight, shape, and skin color were measured. An electronic vernier caliper was used to measure the maximum longitudinal and transverse diameters of the fruit, and the ratio of these diameters was used to define fruit shape. A portable Cr-100 colorimeter (X-Rite, Granville, MI, USA) was employed to measure skin color in terms of lightness (L*), red-greenness (a*), and yellow-blueness (b*). To compute the yield per plant, the fruits were harvested.
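The two absorbance equations above can likewise be expressed as a small helper (a sketch; `a663` and `a645` stand for the measured absorbances A663 and A645):

```python
def chlorophyll_ab(a663: float, a645: float) -> tuple[float, float]:
    """Chlorophyll a and b concentrations (mg/mL) in 80% acetone,
    from absorbance at 663 nm and 645 nm, using the coefficients above."""
    chl_a = 12.72 * a663 - 2.59 * a645
    chl_b = 22.88 * a645 - 4.67 * a663
    return chl_a, chl_b
```

The a/b ratio reported later in the results follows directly as `chl_a / chl_b`.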
To evaluate internal quality, flesh firmness, pericarp firmness, pericarp malleability, and flesh brittleness were measured at five distinct points on the fruit's equatorial surface using a fruit texture analyzer (TMS-Touch, FTC, Frederick, MD, USA), and the individual measurements were averaged to yield a single value for each parameter. Soluble solids content was gauged with a PAL-1 digital refractometer (Atago, Tokyo, Japan), and fruit acidity was determined with a digital GMK-835F device (G-WON, Seongnam-si, Republic of Korea).

Principal Component Analysis

Dimensionality reduction and principal component analysis were performed on the 2020 and 2021 data using IBM SPSS Statistics 20. The correspondence between factors and items was determined from the rotated factor loading coefficient matrix. A factor loading coefficient with an absolute value greater than 0.4 indicates a significant relationship between the item and the dimension (factor). Where an item corresponded to multiple factors, professional knowledge was used to assign it to a particular factor.

Statistical Analysis

All statistical analyses were performed using Origin 2019b software. The significance of differences between treatments for the various measured parameters was evaluated by one-way ANOVA followed by Tukey's test.
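The ANOVA step can be sketched with SciPy's `f_oneway`; the four shoot-length samples below are illustrative placeholders, not the study's measurements:

```python
from scipy import stats

# Hypothetical shoot-length samples (cm), one list per net treatment.
white = [52.1, 49.8, 51.0, 50.4]
blue = [61.3, 63.0, 60.2, 62.5]
black = [57.9, 58.4, 56.1, 59.0]
green = [59.2, 60.8, 58.7, 61.1]

# One-way ANOVA across the four treatments.
f_stat, p_value = stats.f_oneway(white, blue, black, green)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
```

Recent SciPy releases also provide `scipy.stats.tukey_hsd` for the post hoc pairwise comparisons between treatments.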
The Impact of Photo-Selective Nets on the Orchard Environment

To examine the effects of photo-selective nets (black, blue, green, and white) on the orchard microclimate (Figure 1A), we measured and analyzed the daily variations of four indicators: soil temperature, light intensity, relative air humidity, and air temperature. The results revealed that the color of the photo-selective net had a noticeable influence on the orchard microclimate, particularly the daily variation in soil temperature (Figure 1B). Covering orchards with the black net effectively reduced soil temperature, with a maximum difference of up to 5 °C compared to the white net and up to 3 °C compared to the blue or green net (Figure 1B). Additionally, the colored nets reduced the daily amplitude of soil temperature variation, producing a relatively stable daily pattern compared to the white net (Figure 1B). However, the photo-selective nets did not exhibit significant effects on light intensity, relative air humidity, or air temperature (Figure 1C and Supplemental Table S1).
Furthermore, to examine the effects of the various photo-selective nets on light quality, spectral measurements and analysis were conducted with a spectrometer during August and September of 2020 and 2021. The results indicated that the blue net had a substantial impact on the composition of light quality compared with the white net, primarily by increasing the proportion of blue light while significantly reducing the proportions of red and far-red light; the red:far-red ratio also decreased markedly (Table 1). The black net had a less pronounced influence on light quality than the blue net, but it still reduced the proportions of red and far-red light to some extent (Table 1). The green net, in contrast, significantly altered the composition of light quality, increasing the proportion of green light while decreasing the proportions of red and far-red light, and it markedly reduced the red:far-red ratio (Table 1). All three colored photo-selective nets (blue, black, and green) substantially decreased the red:far-red ratio, with the green net having the greatest impact, whereas the influence of net color on the proportion of ultraviolet light was relatively minor (Table 1). In addition to the spectral analysis, the experiment also compared light intensity. The results from four experiments demonstrated that photo-selective colored nets of the same specifications did not significantly alter light intensity (Table 1), although there was a slight trend of decreasing light transmission as the net color darkened, in the order white net > green net > blue net > black net (Table 1).
The Effect of Photo-Selective Nets on the New Shoots' Growth

The presence of photo-selective nets can influence various aspects of plant growth, including leaf health and shoot development, as they alter environmental factors [2]. This experiment assessed the impact of photo-selective nets on shoot growth. Measurements of new shoot length in 2020 showed that the blue, black, and green nets significantly increased shoot length compared with the white net. Among them, the blue net produced the greatest increase in shoot length, followed by the green, black, and white nets (Figure 2A). This trend of increased shoot length continued in 2021, although the overall significance relative to the white net was reduced (Figure 2B). Furthermore, the thickness of the new shoots in 2020 was significantly greater under the blue, black, and green nets than under the white net. Specifically, the green net led to the
greatest thickness of new shoots in 2020, followed by the blue, black, and white nets (Figure 2C). In 2021, the thickness of new shoots under the green net exhibited a significant decline, while the thickness under the blue net remained the greatest, followed by the black, white, and green nets (Figure 2D). Overall, the new shoots were thicker in 2021 than in 2020 (Figure 2C,D).

In summary, the blue net had a significant positive effect on both the length and thickness of the new shoots in both years, resulting in higher biomass accumulation. While the black net also promoted shoot elongation and thickening, its effect was not as pronounced as that of the blue net. The green net initially showed a remarkable increase in shoot length and thickness in the first year, but this growth clearly declined in the second year.

The Effect of Photo-Selective Nets on Relevant Leaf Indices

Leaves, the largest plant organs exposed to the external environment, are highly susceptible to changes in environmental conditions, which can greatly influence their morphological structure and physicochemical properties [33]. Based on the leaf-related data from 2020, the use of blue, black, and green nets significantly increased leaf area compared with the white net (Figure 3A). The largest leaf area was observed under the blue net, followed by the black, green, and white nets (Figure 3B). However, for leaf thickness, leaves under the blue net were significantly thinner than those under the white net, whereas leaves under the black and green nets were thicker, with the greatest increase under the green net (Figure 3C). In terms of leaf biomass accumulation, the green and blue nets had similar effects, resulting in a higher leaf biomass compared to the black net and significantly higher than the white net (Figure
3D,E). However, there was no significant effect of the photo-selective colored nets on the leaf index (Figure 3F). Moreover, the relative leaf water content exhibited a moderate increase under the blue and green nets compared to the white net, whereas it decreased slightly under the black net (Supplemental Figure S1A).
In comparison with the 2020 data, the overall pattern in 2021 remained largely constant (Figure 3G-L). The blue net led to a significant rise in leaf area and a minor decrease in leaf thickness (Figure 3H,I). Nonetheless, there was a minor increment in total leaf biomass, with no alteration in the leaf index (Figure 3J-L). Furthermore, the blue net resulted in a slight increase in relative leaf water content (Supplemental Figure S1B). Conversely, the black net slightly boosted leaf area and, to some degree, increased leaf thickness and dry weight, while the leaf index remained unaffected (Figure 3H-L). Likewise, the green net not only augmented leaf area but also increased leaf thickness and relative water content; the leaf index, however, was unaltered (Figure 3H-L and Supplemental Figure S1B). In general, the trend observed in 2021 paralleled that of 2020, albeit with diminished significance, potentially attributed to the cyclical
fruit-bearing pattern of the tree. Of the various photo-selective nets, the blue net consistently demonstrated the most pronounced shading effect, precipitating a considerable increase in leaf area and fresh weight, a decrease in leaf thickness, and an elevation in both leaf dry weight and relative water content compared with the white net. The black and green nets similarly produced significant increases in leaf area and thickness along with leaf biomass accumulation. Interestingly, the shading influence of the photo-selective nets appeared to exert no discernible effect on the leaf shape index of the apple tree.

The Effect of Photo-Selective Nets on Chlorophyll Content

Chlorophyll, existing in two forms (chlorophyll a and b), is the primary light-absorbing pigment in plants, directly influencing their light energy utilization and serving as an indicator of overall plant health [34]. In this study, we examined chlorophyll content as a means of assessing the impact of the different photo-selective nets on tree growth. Based on the 2020 data shown in Figure 4A,B, the blue, black, and green nets did not cause significant changes in relative chlorophyll content at the top and middle of the tree canopy compared to the white net. However, the chlorophyll a/b ratios under the black and green nets decreased significantly (Figure 4C). The chlorophyll a/b ratio under the blue net increased slightly, but not significantly, compared to the white net (Figure 4C).
The 2021 data, depicted in Figure 4D-F, are consistent with the findings from 2020. No significant variations were observed in the SPAD values at the top and middle of the tree canopy among the different photo-selective nets (Figure 4D,E). Similarly, a considerable decrease in the chlorophyll a/b ratio was noted under the black and green nets, whereas a minor increase was observed under the blue net compared with the white net (Figure 4F).
The Effect of Photo-Selective Nets on Photosynthetic Parameters

Given the significant influence of light quality and intensity on plant leaf photosynthesis [35], we assessed how the differently colored hail nets affect plant growth through the lens of photosynthesis. As shown in Figure 5, photosynthesis under the various photo-selective nets was compared in terms of net photosynthetic rate (Pn), transpiration rate (Tr), stomatal conductance (Gs), and intercellular CO2 concentration (Ci). The blue net yielded the highest Pn, followed by the black and green nets, with the white net resulting in the lowest. The data traced a bimodal curve for Pn, peaking at 11:00 and 15:00 and reaching its nadir at 13:00, denoting a photosynthetic 'siesta'. Before 11:00, no significant differences in Pn were noted among plants under the different nets. However, at 13:00, apple trees under the blue net demonstrated the fastest resumption of photosynthesis, followed by those under the green and black nets. In contrast, apple trees under the white net struggled with the photosynthetic 'siesta' and intense midday light (Figure 5A). Trees under the blue, black, and green nets recorded elevated Tr from 13:00 to 15:00,
indicating a higher Pn (Figure 5B). Similarly, higher Gs during this period suggested a more effective reduction of the stomatal closure induced by the 'siesta', in comparison with the white net (Figure 5C). Ci was found to decrease under all nets, consistent with CO2 accumulating while stomata are closed at night and being utilized within cells for photosynthesis during the day (Figure 5D).
The Effect of Photo-Selective Nets on Fruit Quality

Fruits exhibit many distinctive external traits, including color, shape, and size, and internal characteristics, including texture, taste, soluble solids, and titratable acidity [36]. To investigate the impact of photo-selective nets on the external and internal quality of the fruit, we examined the firmness and malleability of the pericarp, the brittleness of the flesh, and the concentrations of soluble solids and titratable acidity. The findings showed no significant impact of photo-selective nets on the external quality of fruits in 2020 and 2021 (Supplemental Figures S2 and S3).
For an in-depth understanding of the effect of these nets on internal fruit quality, the analysis was continued using data from 2020 and 2021 (Figure 6). The 2020 data revealed that the blue and green nets considerably enhanced pericarp malleability while reducing flesh firmness and brittleness in comparison with the white net. Interestingly, a minor decrease in pericarp firmness, soluble solids, and titratable acidity was observed, but these variations were not statistically significant (Figure 6A-F). The black net, however, showed a significant reduction in soluble solids and a moderate increase in pericarp firmness relative to the white net, while displaying trends similar to the blue and green nets for the other parameters (Figure 6A-F). The analysis from 2021 confirmed that the blue, black, and green photo-selective nets significantly increased pericarp malleability compared to the white net, while mildly reducing the other factors: flesh firmness and brittleness, pericarp firmness, soluble solids, and titratable acidity (Figure 6G-L). All these observations suggest that the use of photo-selective colored nets significantly affects the internal quality of fruits, while the external quality is not substantially altered compared with white nets.
The Effect of Photo-Selective Nets on Tree Productivity

According to the 2020 data presented in Figure 7A, significant disparities in individual fruit weight were observed across the various photo-selective nets. The blue net was associated with the highest increase in individual fruit weight, followed closely by the black net. In contrast, under the green net the weight of individual fruits was lower than under the white net (Figure 7A). This trend also held for the yield per tree, with trees under the blue and black nets yielding the most fruit and those under the green net the least (Figure 7C). The significant yield reduction under the green net contrasted starkly with the marked increase under the blue net, followed by the black net. These trends suggest that both the weight of individual fruits and their quantity per plant were significantly improved under the blue and black nets, while the green net led to a considerable decline compared to the white net (Figure 7A-C).
In the follow-up experiments conducted in 2021, apples cultivated under blue, black, and green nets resulted in a superior weight per fruit than those grown under white net.The use of blue and green nets led to the most substantial increase in individual fruit weight, with black nets following suit (Figure 7D).Furthermore, the blue net resulted in the highest overall yield per tree, with the black net close behind.Interestingly, the yields associated with green nets in the follow-up experiments showed a marked improvement when compared to the results of the previous year (Figure 7D−F).In conclusion, a consistent pattern in data from both 2020 and 2021 conclusively demonstrated that the blue net had a substantial positive effect on both the weight of individual fruits and the yield per tree for apple cultivation.The black net also showed promising results.In contrast, the use of green nets yielded inconsistent results in terms of overall tree yield. Comprehensive Evaluation of Photo-Selective Nets on Apple Trees In 2020, we meticulously evaluated the impact of various photo-selective nets, including white, blue, black, and green nets, on apple trees by conducting a principal component analysis using the SPSS 20 software's dimension reduction module on parameters In the follow-up experiments conducted in 2021, apples cultivated under blue, black, and green nets resulted in a superior weight per fruit than those grown under white net.The use of blue and green nets led to the most substantial increase in individual fruit weight, with black nets following suit (Figure 7D).Furthermore, the blue net resulted in the highest overall yield per tree, with the black net close behind.Interestingly, the yields associated with green nets in the follow-up experiments showed a marked improvement when compared to the results of the previous year (Figure 7D-F).In conclusion, a consistent pattern in data from both 2020 and 2021 conclusively demonstrated that the blue net had a 
substantial positive effect on both the weight of individual fruits and the yield per tree for apple cultivation.The black net also showed promising results.In contrast, the use of green nets yielded inconsistent results in terms of overall tree yield. Comprehensive Evaluation of Photo-Selective Nets on Apple Trees In 2020, we meticulously evaluated the impact of various photo-selective nets, including white, blue, black, and green nets, on apple trees by conducting a principal component analysis using the SPSS 20 software's dimension reduction module on parameters such as new shoot length and diameter, leaf area, leaf thickness, leaf weight, flesh firmness and brittleness, soluble solids, and titratable acidity, or others.Disclosed in Table 2 are the component matrices, serving as graphical illustrations of the relationship between the three key principal components extracted and the raw variables.The findings proposed that principal component 1 (PC1) registered strong correlations with parameters like new shoot length, shoot thickness, relative water content of leaves, chlorophyll a, chlorophyll b, and fruit firmness; principal component 2 (PC2) had strong links with leaf area, soluble solids content, and fruit flesh crispness; and principal component 3 (PC3) bore a strong connection exclusively with the primary parameter of the leaf shape index (Table 2).The contribution rates of the three principal components, displayed in Table 3, collectively concluded the influence of photo-selective nets on apple trees, accounting for 100% of the total contribution rate.Explicitly, PC1 contributed 56.978%, PC2 contributed 23.994%, and PC3 contributed 19.028% (Table 3).Using the ratios of the eigenvalues of each principal component to total eigenvalues as weights, we aggregated comprehensive scores (F_total = 0.571F1 + 0.239F2 + 0.190F3) for each treatment, as exhibited in Tables 3 and 4. 
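The eigenvalue-weighted comprehensive scoring described above (component scores combined with weights proportional to each retained component's eigenvalue) can be sketched in a few lines. The sketch below uses made-up trait values and plain NumPy rather than SPSS; the variable names, the synthetic data, and the retained-component count are illustrative assumptions, not the study's actual inputs.

```python
import numpy as np

# rows = net treatments (e.g. white, blue, black, green); columns = measured
# traits. The values here are synthetic, for demonstration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 6))

# Standardize each trait (PCA on the correlation matrix, as SPSS does by default).
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Eigen-decomposition of the correlation matrix of the traits.
corr = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]            # sort components by decreasing eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 3                                        # retain the first k principal components
scores = Z @ eigvecs[:, :k]                  # component scores F1..Fk per treatment
weights = eigvals[:k] / eigvals[:k].sum()    # eigenvalue proportions as weights
f_total = scores @ weights                   # comprehensive score per treatment
ranking = np.argsort(f_total)[::-1]          # higher score = better overall coverage
```

With the 2020 eigenvalues this weighting yields the reported model F_total = 0.571F1 + 0.239F2 + 0.190F3; the same construction with seven retained components gives the 2021 model.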
This comprehensive analysis demonstrated that the blue net offered the best coverage, followed by the black net, the white net, and finally the green net (Table 4). In 2021, we carried out a parallel comprehensive assessment. The component matrix in Table 5 shows the correlation between the seven extracted principal components and the raw variables, and the variance contribution rates are given in Table 6. The cumulative contribution rate of these seven principal components amounted to 88.976%; the overall influence of the different photo-selective nets on apple trees can therefore be represented by these seven components. Using the proportions of each principal component's eigenvalue to the total eigenvalues as weights, we devised the principal component scoring model F_total = 0.210F1 + 0.193F2 + 0.150F3 + 0.143F4 + 0.120F5 + 0.094F6 + 0.088F7 (Tables 6 and 7). The comprehensive scores for each treatment are presented in Table 7. This analysis indicated that the blue net delivers the best coverage, followed by the black net, the green net, and, lastly, the white net.

Discussion

Apple crops are increasingly being cultivated under protective netting systems, which provide protection against extreme weather events [6]. Technological advances have facilitated the development of colored nets equipped with photo-selective plastic filters. These nets not only offer differential filtration of solar radiation and physical protection but also markedly alter the light conditions, notably the spectral composition of light [37]. Plant perception of light is affected by both its intensity and its spectral characteristics [2]. According to our findings, these photo-selective nets do not noticeably influence light intensity, a result consistent with the finding by Bastías et al. that red and blue nets curtailed photosynthetically active radiation to the same extent as the white net [28]. In a related study, Serra et al. found that apple trees cultivated under photo-selective nets intercepted more light than their uncovered counterparts; nevertheless, the net's color had no significant effect on the tree's light interception over the course of two years [38]. In addition, several studies have demonstrated the direct impact of photo-selective nets on the transmission spectra of sunlight. Specifically, the blue net was found to reduce transmission in the 600-720 nm range while increasing transmission in the blue and blue-green wavelengths, particularly within the 440-520 nm range [39,40]. Consistent with these findings, our research revealed that the blue net increased the proportion of blue light while significantly decreasing the proportion of red and far-red light and the red to far-red light ratio. Similarly, the black and green nets in our study partially reduced the proportion of red and far-red light. In conclusion, photo-selective nets modify the spectrum of light that reaches the orchard.
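The spectral proportions and the R/FR ratio discussed above can be derived from a transmitted-light spectrum by integrating over wavelength bands. The sketch below uses a synthetic flat spectrum and a common narrowband convention for the red and far-red bands (roughly 655-665 nm and 725-735 nm); the band limits, function name, and data are illustrative assumptions, not the instrument settings used in the study.

```python
import numpy as np

# Synthetic spectrum: wavelengths in nm with a flat intensity profile,
# standing in for a measured transmitted-light spectrum under a net.
wavelengths = np.arange(400, 801)                   # nm
intensity = np.ones_like(wavelengths, dtype=float)  # placeholder values

def band_fraction(wl, inten, lo, hi):
    """Fraction of total transmitted flux falling in the [lo, hi] nm band."""
    mask = (wl >= lo) & (wl <= hi)
    return inten[mask].sum() / inten.sum()

blue_frac = band_fraction(wavelengths, intensity, 440, 520)  # blue/blue-green band
red = band_fraction(wavelengths, intensity, 655, 665)        # narrowband red
far_red = band_fraction(wavelengths, intensity, 725, 735)    # narrowband far-red
r_fr_ratio = red / far_red  # the R/FR ratio relevant to phytochrome responses
```

On a real spectrometer trace, a net that raised `blue_frac` while lowering `r_fr_ratio` would show the pattern reported here for the blue net.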
Photo-selective nets have been found to alter the microenvironment of orchards, leading to changes in soil temperature [2,14,19]. The decrease in soil temperature is largely due to the reduced amount of light that penetrates to the ground under the shade net [12,41]. In our study, we found the black net to be especially efficient in reducing soil temperature compared with the other colors (white, blue, and green). These results are consistent with findings by Kalcsits et al., where soil temperatures under pearl and blue nets were significantly lower than those under the uncovered control and red nets [14]. Although the literature contains little documented evidence on how photo-selective netting colors differ in their impact on soil temperature, we speculate that other characteristics of the net's transmittance spectrum may influence soil temperature; further research is required to explore this possibility.

Multiple studies have proposed that photo-selective nets not only partially transmit solar radiation but also diffuse it, which is essential for the photosynthesis of leaves in the lower part of the canopy [8,11]. One comparison showed that net photosynthesis in 'Fuji' apples was considerably higher under blue and grey nets than under pearl net [16]. Similar patterns have been observed in ornamental plants as well [42]. Our research supports these findings and further reveals that photosynthesis was most effective under the blue net, followed by the black and green nets, with the white net showing the lowest efficiency. We also noted a slight increase in SPAD values when measurements were taken under the blue net in comparison to the white net, which could be attributed to the enhanced absorption capacity of the primary photosynthetic pigments (chlorophylls a and b) in the blue spectrum. Moreover, our study uncovered a positive correlation between the blue net and higher stomatal conductance,
leading to an increase in photosynthesis. This observation aligns with evidence from research on ornamental plants indicating that blue wavelengths are more effective at triggering stomatal opening and inhibiting stomatal closure [43]. Thus, the elevated stomatal conductance observed in leaves grown under the blue net in our research can be attributed to the effect of blue light on stomatal aperture.

The leaf is a crucial organ for analyzing crop growth, as it reflects the crop's ability to convert radiation into dry matter via photosynthesis [44]. Microscopy analysis has shown that under blue net, both the palisade thickness and the ratio of palisade to spongy mesophyll decreased by 19% compared to leaves under white net [45]. Earlier studies have underlined the importance of a low red to far-red (R/FR) ratio in governing plant growth: it not only stimulates enlargement of the cell wall, leading to an increase in leaf area, but also improves leaf photosynthetic capacity, dry matter accumulation, and overall plant growth [46-48]. Our research affirms these findings, as it revealed an increase in leaf area and a reduction in leaf thickness under blue net, suggesting that these are adaptive responses of shaded leaves to maximize light transmission to the chloroplasts. Moreover, our results are in line with the significantly lower R/FR ratios observed under blue netting, reinforcing the role of the R/FR ratio in facilitating these responses. Adjusting the R/FR ratio has been recommended as a strategy for managing shoot extension, particularly to encourage greater shoot length under low R/FR ratios [13]. Previous investigations have consistently demonstrated that a reduction in the R/FR ratio leads to shoot elongation across various plant species, such as kiwifruit [49], grapevines [50], peach [29], and ornamental plants [51]. Similarly, our study found that trees grown under blue net with a low R/FR ratio exhibited significantly greater total shoot length and thickness compared to those grown under white net. These effects may result from phytochrome-mediated responses triggered by the lowered R/FR ratio and the consequent decrease in phytochrome photo-equilibrium under blue net.

Fruit color is a critical aspect that influences consumers' fruit consumption decisions [52]. Our study revealed that the differently colored nets had no significant impact on fruit color or fruit shape index. These observations align with most previous studies, which reported minimal or no effects of netting on apple fruit color and shape [5,39,53]. However, the influence of netting can differ based on various factors, including the type of net and the apple variety.

Consumer preference for apples hinges heavily on their sweetness, which is generally determined by the total soluble solids (TSS) content [54]. In three growing seasons, the use of a black shade net significantly reduced TSS in 'Mondial Gala' apples compared to both a crystal shade net and a control group without any shade, although these differences were not apparent in another growing season [19]. Similarly, for 'Elstar' apples, both white and black shade nets resulted in a reduction of TSS compared to the control group [55]. Do Amarante et al.
also reported a significant reduction in TSS for 'Gala' apples grown under a white shade net, a phenomenon not observed in 'Fuji' apples at harvest [30]. Our study found that apples grown under blue nets showed a decrease in TSS accompanied by a significant decrease in titratable acidity compared to white nets. These observations indicate the influence of net color on soluble solids and titratable acidity, potentially due to the modulation of light diffusion, and underline the importance of considering photo-selective nets in apple cultivation practices, as they markedly affect the levels of soluble solids and titratable acidity in apples.

While consumers initially judge a product by its appearance, their ultimate decision to repurchase it is based on its eating quality [56]. High initial firmness values at harvest can extend the duration of flesh firmness retention [57], and prior research suggests that consumers favor firmer apples [58]. Compared to those cultivated under red-white nets, 'Fuji' and 'Pinova' apples grown under green-black and red-black netting were found to have a softer texture, while fruit from the uncovered control group displayed the highest firmness [20]. Although limited research has been conducted on pericarp malleability, pericarp firmness, and flesh brittleness, these factors are closely associated with postharvest storage quality and are critical for the long-term storage of fruit. However, the impact of photo-selective netting on pericarp firmness, malleability, and flesh brittleness at both pre- and post-harvest stages remains largely unexplored. In this study, the use of colored nets resulted in a reduction in flesh firmness, suggesting a potentially negative effect of photo-selective netting on either postharvest fruit storage or consumer purchasing behavior.
The illumination conditions created by photo-selective nets can influence plant physiology, thereby affecting both average fruit weight and plant yield [59]. Previous studies found that the use of blue or grey netting significantly increased the weight of 'Fuji' apples compared to a control group using white netting [28]. Likewise, cucumbers grew heavier under aluminized, pearl, blue, or red nets [60]. Consistent with these results, our study showed that blue nets yielded heavier individual fruits than white nets. Additionally, research has indicated that prolonged exposure to blue light may improve photosystem II performance, stomatal conductance, and dry matter production [61,62]. Therefore, adjusting the combination of blue, red, and far-red light using photo-selective nets could manipulate the processes controlling carbohydrate availability, which is crucial for apple growth and yield. Our study also observed a significant rise in apple yield when using blue netting instead of white nets, supporting previous reports by Hemming et al. and Zheng et al. that shading nets that enhance diffuse light can improve fruit yield in horticultural crops by increasing plant photosynthetic capacity [63,64].

Conclusions

In this study, the effects of four photo-selective nets (white, blue, black, and green) on environmental factors, tree growth and development, and fruit yield and quality were investigated in an apple orchard. A principal component analysis was performed independently on the datasets collected in 2020 and 2021, and the findings were compared across both years. The blue hail protection net came out on top in overall score in both years, followed by the black, white, and green nets. These results suggest that deploying the blue hail protection net could help optimize apple orchard management and production.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/horticulturae9091061/s1, Figure S1: The relative water content of leaves treated with different photo-selective colored nets; Figure S2: The combined effects of photo-selective colored nets on the external qualities of apple fruits at harvest in 2020; Figure S3: The combined effects of photo-selective colored nets on the external qualities of apple fruits at harvest in 2021; Table S1: The effects of photo-selective colored nets on the relative humidity and air temperature.

Figure 1. The effects of photo-selective colored nets on soil temperature and light intensity. (A) Apple orchards under photo-selective white, blue, black, and green nets. (B) Soil temperature variations at different hours of the day under photo-selective colored nets. (C) Light intensity at different hours of the day under photo-selective colored nets. Error bars indicate the standard deviation (n = 10).

Figure 2. The effects of photo-selective colored nets on new shoot growth. (A) New shoot length in 2020. (B) New shoot length in 2021. (C) New shoot diameter in 2020. (D) New shoot diameter in 2021. Error bars indicate standard deviation [n = 15 in (A), n = 36 in (B), and n = 20 in (C,D)]. p values from Tukey's test.

Figure 3. Response of leaves to different photo-selective colored nets. (A) Leaf scan images in 2020. (B-F) Determination of (B) leaf area, (C) hundred-leaf thickness, (D) leaf fresh weight, (E) leaf dry weight, and (F) leaf index shown in (A) under different photo-selective colored nets. Error bars indicate standard deviation [n = 60 in (B), n = 9 in (C), and n = 46 in (D-F)]. (G) Leaf scan images in 2021. (H-L) Determination of (H) leaf area, (I) hundred-leaf thickness, (J) leaf fresh weight, (K) leaf dry weight, and (L) leaf index shown in (G) under different photo-selective colored nets. Error bars indicate standard deviation [n = 15 in (H), n = 16 in (I), n = 10 in (J), n = 14 in (K), and n = 24 in (L)]. p values from Tukey's test.

Figure 4. Chlorophyll content under different photo-selective colored nets. (A-C) Determination of (A) upper canopy SPAD, (B) middle canopy SPAD, and (C) chlorophyll a/b ratio under different photo-selective colored nets. (D-F) Determination of (D) upper canopy SPAD, (E) middle canopy SPAD, and (F) chlorophyll a/b ratio under different photo-selective colored nets. Error bars indicate standard deviation [n = 36 in (A,B), n = 15 in (C), n = 25 in (D,E), and n = 15 in (F)]. p values from Tukey's test.

Figure 5. Photosynthetic parameters of the trees pretreated with different photo-selective colored nets (A-D). (A) The photosynthetic rate, (B) the transpiration rate, (C) the stomatal conductance, and (D) the intercellular carbon dioxide concentration of the trees under different photo-selective colored nets. Error bars indicate the standard deviation (n = 3). p values from Tukey's test.

Figure 7. The impact of photo-selective colored nets on yield per tree. (A) Single fruit weight, (B) fruit number, and (C) yield of trees covered with different photo-selective colored nets in 2020. (D) Single fruit weight, (E) fruit number, and (F) yield of trees covered with different photo-selective colored nets in 2021. Error bars indicate standard deviation [n = 50 in (A), n = 9 in (B-F)]. p values from Tukey's test.

Table 1. The effects of photo-selective colored nets on light quality in August and September 2020 and 2021.

Table 2. Factor loading matrix of principal components on different traits in 2020.

Table 3. Eigenvalues and variance contribution rates of principal components in 2020.

Table 4. Comprehensive evaluation of different photo-selective colored nets in 2020.

Table 5. Factor loading matrix of principal components on different traits in 2021.

Table 6. Eigenvalues and variance contribution rates of principal components in 2021.

Table 7. Comprehensive evaluation of different photo-selective colored nets in 2021.
A cross-sectional survey to estimate the prevalence of family history of colorectal, breast and ovarian cancer in a Scottish general practice population

A cross-sectional survey of all patients aged 30–65 in four general practices within one Local Health Care Co-operative in Fife, Scotland was undertaken to measure the prevalence of family history of colorectal, breast and ovarian cancer. A total of 7619 patients aged 30–65 responded to a postal questionnaire (response rate 59%). In all, 17% of respondents (1324, 95% CI 16–18%) reported a relative affected by colorectal, breast or ovarian cancer. Of those, 6% (78, 95% CI 5–7%) met the Scottish guidelines for referral for genetics counselling. In all, 2% (24, 95% CI 1–3%) of all individuals with an affected relative had received genetic counselling and risk assessment. Of these, 25% (6, 95% CI 8–42%) met the moderate- or high-risk criteria for developing a cancer. In conclusion, the number of patients who are at a significantly increased risk of cancer on the basis of a family history is small (approximately 10 per General Practitioner (GP) list). It is therefore unrealistic to expect GPs to develop expertise in genetic risk estimation. A simple family history chart or pedigree is one way that a GP can, within the constraints of a GP consultation, determine which patients should be reassured and which referred to the local cancer genetic clinic.
Cancer is one of the three health priorities of the National Health Service in Scotland (NHSiS) (Scottish Office, 1998). Local Health Care Co-operatives (LHCCs) were created in Scotland in 1998 to provide local management of services, and are made up of representatives of local general practices, local service groups and patient groups (Scottish Office, 1998). They have been charged with measuring health needs within their communities to reflect the clinical priorities for the area and to support the development of population-wide approaches to health improvement and disease prevention (Scottish Office, 1997).

Cancer genetics is the fastest growing area of clinical genetics (Wonderling et al, 2001). In Scotland, the four Regional Genetic Centres co-ordinate accurate risk assessment to ensure that individuals referred for screening investigations such as mammography and colonoscopy fulfil the national criteria laid down by the Cancer Genetic subgroup of the Scottish Cancer Group (Table 1). The lifetime risks of breast, colon and ovarian cancers in the general population are approximately one in 10, one in 60 and one in 90, respectively (ISD, 1998). All general practitioners (GPs) will therefore have patients with a relative affected by one of these cancers. An unknown proportion of these patients are likely to seek counselling and advice regarding their risk of developing cancer (Biesecker et al, 1993). The relative risk associated with a family history of these cancers has been widely reported (St John et al, 1993; Slattery and Kerber, 1994; Pharoah et al, 1997). The challenge is to identify the minority at significantly increased genetic risk of developing cancer while reassuring the majority whose family history does not indicate an increased cancer risk above that of the general population.
A major problem in planning cancer genetic services is that it is not known what proportion of the population fits into the various cancer genetic risk categories. The Scottish Office report 'Cancer Genetics Services in Scotland' (Haites, 2000) recognised that 'at present there is no means of identifying the total population who have a family history which places them at a significantly increased risk of developing breast, colorectal or ovarian cancer'. The report also noted that the uncertainty of these estimates makes it impossible to predict future costs for the provision of a risk estimation and screening service.

Risk estimation is based on the number of affected individuals within the family, the pattern of cancers and the age of onset of cancer. It is therefore necessary for the clinician to take a careful family history. This process is time-consuming, and many GPs are unsure of their ability to obtain an accurate family tree and assess genetic risk (Fry et al, 1999). Pre-clinical family history questionnaires have been used extensively by genetics departments. This study was designed to evaluate how such a questionnaire would be received by a general practice population and whether it might provide data in a form that facilitates GP cancer genetic risk estimation. We report the results of a cross-sectional survey, conducted between May 1999 and October 2000, of patients in general practice aged between 30 and 65 years to assess the prevalence of a significant family history of colorectal, breast or ovarian cancer and to identify the number of individuals with a family history who had been referred on to the Clinical Genetic Service. Ethical approval for the study was granted by the Fife Local Research Ethics Committee.
PARTICIPANTS AND METHODS

A postal survey of all patients aged 30–65 years from four general medical practices covering over 99% of the population within one LHCC in Fife, Scotland was undertaken using a cancer family history questionnaire that had been developed and evaluated by a Cambridge-based research team (Leggat et al, 1999). The questionnaire was adapted to determine whether patients had any concerns regarding their own risk of developing cancer and, if so, whether they had ever been referred to a cancer genetic specialist or had received any form of genetic counselling (questionnaire available online at http://137.195.14.43/cgi-bin/WebObjects/genisys.woa/wa/showDoc?docid=208).

Patients were asked if they had any family members (grandparents, aunts, uncles, father, mother, brothers, sisters and children) who had had colorectal, breast or ovarian cancer, and the age at which these cancers were diagnosed. Those with no affected relatives were requested to return the questionnaire at this point. Those with an affected family member were asked to complete a detailed family history including the relationship to the affected individual, the site of cancer and the age at and date of diagnosis. In all, 305 randomly selected participants reporting a family history of cancer (23% of the total) were interviewed by telephone (n = 254) or in person by a genetics nurse (n = 51) to check the consistency of the information collected via the postal survey. A fieldworker telephoned 101 of those reporting no family history to confirm that there was no family history of colorectal, breast or ovarian cancer in their families.
RESULTS

A total of 13 155 questionnaires were mailed, of which 5535 were excluded from the study: 281 were returned address unknown and 5254 were not returned by the patient. In all, 7620 questionnaires (3386 from males, 4234 from females) were completed and returned (Figure 1). The overall response rate was 59%. A total of 1396 (18%, 95% CI 17–19%) responders reported a family history of cancer. When checked by a genetics nurse, 72 questionnaires reported relatives with a history of cancers at other sites and were excluded from further analysis. In all, 17% of respondents (1324, 95% CI 16–18%) therefore identified themselves as having a history of colorectal, breast or ovarian cancer in a first- or second-degree relative; 918 were females and 375 males. Some respondents reported a family history of more than one of these cancers. In total, 78 respondents with a family history were classified as being at moderate or high risk of developing colorectal, breast or ovarian cancer, and thus met the guidelines for referral to cancer genetics services in Scotland for risk assessment (Haites, 2000). This represents approximately 6% (95% CI 5–7%) of all respondents reporting a family history of cancer.

Colorectal cancer

In all, 31 respondents reporting a family history of colorectal cancer met the national guidelines for referral for risk assessment (11 males and 20 females), that is, 5% (95% CI 3–7%) of those reporting a family history of colorectal cancer, 2% (95% CI 1–3%) of those reporting a history of any of the three cancers, and 0.41% (95% CI 0.26–0.55%) of the population surveyed.
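Prevalence estimates of the kind reported in this section can be reproduced with a standard normal-approximation (Wald) confidence interval for a proportion. The sketch below is ours; the function name is illustrative, and the Wald interval is an assumption about how the intervals were computed.

```python
import math

def prevalence_ci(affected, total, z=1.96):
    """Point estimate and approximate 95% CI for a prevalence proportion."""
    p = affected / total
    se = math.sqrt(p * (1 - p) / total)  # standard error of the proportion
    return p, p - z * se, p + z * se

# 1324 of 7620 respondents reported an affected first- or second-degree relative.
p, lo, hi = prevalence_ci(1324, 7620)
# p is about 0.174 with an interval of roughly (0.165, 0.182),
# matching the reported 17% (95% CI 16-18%).
```

For small counts (such as the 6 of 24 counselled respondents at moderate or high risk), an exact or Wilson interval would be more appropriate than this normal approximation.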
Table 1. The Cancer Genetic Sub-committee family history criteria for enrolment in a screening programme for breast, ovarian or colorectal cancer.

Breast

Moderate risk:
- One first-degree relative with bilateral breast cancer
- One first-degree relative with breast cancer diagnosed under age 40 years, or one first-degree male relative with breast cancer diagnosed at any age
- Two first- or first- and second-degree relatives with breast cancer diagnosed under age 60 years and/or ovarian cancer at any age on the same side of the family
- Three first- or second-degree relatives with breast or ovarian cancer on the same side of the family (always one first-degree relative unless the history is via the father)

High risk:
- An individual with BRCA1 or BRCA2 mutations or other known predisposing gene mutations, or the untested first-degree relative of a mutation carrier
- One first-degree relative (or second-degree relative via an intervening male relative) in a family with four or more relatives affected with breast or ovarian cancer in three generations

Colorectal

High risk:
- An individual with a mutation in one of the mismatch repair genes, or their untested first-degree relatives
- A family history compatible with HNPCC according to the Amsterdam or modified Amsterdam criteria

Individuals are judged to be at low risk if their family history does not meet the moderate-risk criteria for screening.

Breast cancer

In all, 27 of the female respondents met the national guidelines for referral for risk assessment for breast cancer only, that is, 3% (95% CI 2–4%) of all female respondents reporting a family history of cancer, or 0.64% (95% CI 0.40–0.88%) of the total female population surveyed.

Ovarian cancer

Two female respondents met the national guidelines for referral for risk assessment for ovarian cancer only, that is, 0.2% (95% CI 0–0.5%) of all females reporting a family history of cancer, or 0.05% (95% CI 0–0.1%) of the total female population surveyed.
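To illustrate how guideline criteria like those in Table 1 might be encoded for triage, the fragment below implements a subset of the moderate-risk breast rules. The data structure, function name, and simplifications (for example, assuming all listed relatives are on the same side of the family) are our own illustrative assumptions, not part of the study or the guidelines.

```python
from dataclasses import dataclass

@dataclass
class Relative:
    degree: int            # 1 = first-degree, 2 = second-degree
    cancer: str            # "breast", "ovarian" or "colorectal"
    age_at_diagnosis: int
    bilateral: bool = False
    male: bool = False

def meets_moderate_breast_criteria(relatives):
    """Check a subset of the Table 1 moderate-risk breast criteria."""
    first_degree_breast = [r for r in relatives
                           if r.degree == 1 and r.cancer == "breast"]
    # One first-degree relative with bilateral breast cancer.
    if any(r.bilateral for r in first_degree_breast):
        return True
    # One first-degree relative diagnosed under age 40, or a male
    # first-degree relative diagnosed at any age.
    if any(r.age_at_diagnosis < 40 or r.male for r in first_degree_breast):
        return True
    # Two first- or first-and-second-degree relatives with breast cancer
    # under 60 and/or ovarian cancer at any age (same side assumed here).
    qualifying = [r for r in relatives
                  if (r.cancer == "breast" and r.age_at_diagnosis < 60)
                  or r.cancer == "ovarian"]
    return len(qualifying) >= 2
```

A real triage tool would also need the high-risk rules, the side-of-family constraint, and the colorectal criteria, but even this sketch shows that the decision logic is mechanical once a structured family history has been collected.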
Breast and ovarian cancer

In all, 18 female respondents met the national guidelines for referral for risk assessment; that is, 2% (95% CI 1-3%) of all female respondents reporting a family history of cancer, or 0.43% (95% CI 0.22-0.62%) of the total female population surveyed.

Interviews of re-contacted participants

A validation study was undertaken in order to assess the consistency of this information. In all, 352 patients reporting a family history of cancer were randomly selected and asked to discuss their history with a genetic nurse either face to face or by telephone. Of these, 305 (87%) responded and their family history was verbally confirmed. Of these, 17 (6%, 95% CI 3-8%) were assessed to be at a moderate to high risk of developing colorectal cancer and thus met the national criteria for referral for risk assessment, 28 (9%, 95% CI 6-12%) met the referral criteria for breast cancer and three (1%, 95% CI 0-2%) for ovarian cancer.

As a result of this group being interviewed by the genetic nurse, the estimated risk of 21 (7%, 95% CI 4-10%) of the respondents was altered. The estimated risk of one or more of the three cancers was increased for 16 of the respondents, although in six cases it was difficult to verify the risk owing to incomplete information (for example, the age at diagnosis of cancer in a relative). For five respondents, the estimated risk of cancer was reduced. Only four (4%, 95% CI 0.2-8%) of the 101 respondents who originally reported no family history of breast, ovarian or colorectal cancer on the family history form subsequently mentioned a family history on interview with a fieldworker. All four were assessed to be at low risk of developing cancer.
Contact with health services

In all, 15% of respondents who reported a positive family history of these cancers had discussed their concerns with their GP, the great majority during the last 3 years. Of these respondents, 86% (and 87% of the 30 respondents found to be at moderate or high risk) had raised the issue themselves, rather than their GP asking them about the family history of cancer. In all, 10% of respondents reporting any family history of cancer (and 22% of those at moderate/high risk) had been referred to a specialist to discuss their risk of cancer, and 2% (25% of those at moderate/high risk) had received genetic counselling in the past.

Workload implications for GPs and cancer genetics clinics

Extrapolating these results to estimate the workload for GPs and cancer genetics clinics in the rest of Scotland suggests that, potentially, only one in 14 patients attending a GP with a positive family history of cancer needs to be referred to regional cancer genetics services for further risk assessment.

DISCUSSION

A valid response rate of 59% was achieved for the postal questionnaire used in this study. This is considerably higher than that in a previously published study using a similar questionnaire, where the response rate was 29% (Leggat et al, 1999). Possible explanations for this high response rate include: the study was led by the principal GP of one of the participating general practices and was thus well known to most patients; a press release publicising the study was issued prior to mailing; one reminder was sent to non-responders 2 weeks after mailing the questionnaire; and a colorectal cancer screening study had recently been undertaken in one of the four participating GP practices. When compared with the non-responders, the responders were significantly older (mean age 48 years vs 44 years), similar to the finding of the previous study that evaluated the family history of cancer questionnaire.
The study made no allowance for multiple sampling of the same family, but the aim was to assess the burden of cancer genetics in a GP practice. Males with a family history of breast or ovarian cancer were assessed as low risk, as no clinical screening is indicated for them.

We have recently shown in Scotland that such reports of a positive family history of cancer are rarely incorrect but may substantially underestimate the true prevalence of a history of cancer in relatives, especially among second-degree relatives, when compared with cancer registry records (Mitchell et al, 2004). We only attempted to 'validate' a sample of positive reports of family history of cancer in this study. It is likely that a study which also involved an analysis of the cancer registry records of all relatives would yield a higher estimate for the family history of cancer. Thus, the prevalence of family history of cancer in this study can be considered a minimum estimate. Nevertheless, patients make decisions about seeking advice about their cancer risk based on their family history as they perceive it, and so the data presented in this report are important in seeking to plan services for these patients.

It is interesting to note the higher incidence of moderate- or high-risk family histories in the subgroup of participants who agreed to be interviewed. This may reflect a greater interest in discussing their situation among the moderate- and high-risk groups.
Prior to the study, it was anticipated that some respondents might experience anxiety concerning their own risk of cancer as a result of completing the family history questionnaire. Participants were invited to voice their anxieties by phone with the study team, who could then arrange an appointment with a genetic nurse. However, it was only necessary for the genetic nurse to contact two respondents in relation to this issue, and she was able to provide advice and reassurance in both cases. Discussion with GPs in the practices involved revealed no contact with patients worried by the results of the study. Many of the respondents did admit to worries about their family history when interviewed, but had not taken advantage of genetic counselling. In fact, those at the greatest risk were the ones who reported least use of the service.

The majority of questionnaires were completed correctly, and many respondents included a great deal of information about their family history of cancer, sometimes involving obtaining details from family members living abroad. For GPs faced with patients consulting with concerns about their family history, a suitable response would therefore be to ask the patient to complete a similar family history form and to rely on this in making a decision as to whether or not to refer the patient to the local cancer genetics clinic.
Cancer genetics referral guidelines are quite complex. Computer programmes based on referral guidelines have therefore been developed to support decision-making by GPs. However, as GPs will see only a few such patients a year, acquiring all of the skills necessary for genetic counselling, or to operate such programmes, is unlikely to be accorded a high priority. In addition, newly acquired skills are likely to degrade over time without frequent reinforcement. We suggest that GPs could use a questionnaire to collect information and then pass it on to the local genetic nurse, primary care genetic clinician or cancer centre for a rapid assessment of whether further action should be taken.

The number of patients seeking genetic counselling has increased sharply over the last few years (Wonderling et al, 2001). This study has shown that only about one in 14 patients attending a GP with a positive family history of cancer needs to be referred to regional cancer genetics services for further risk assessment. The importance of the gate-keeping role of the GP is likely to increase in the future. Our experience gained during the course of this study suggests that this role might be facilitated by the use of a self-completion family history form in general practice. Information collected by this means tallies closely with that obtained from interviews with trained genetic nurses, and permits accurate risk assessments which can guide referral decisions.
Breast — High risk
- One first-degree relative (or second degree via father) with breast and ovarian cancer

Ovarian — Moderate risk
- Two or more first- or first- and second-degree relatives with ovarian cancer
- Two first- or first- and second-degree relatives with ovarian cancer at any age and breast cancer diagnosed under 50 years
- One ovarian cancer and two breast cancers diagnosed under 60 years on the same side of the family in first-degree relatives, or second degree via a male
- Two first- or second-degree relatives with colorectal cancer and/or endometrial cancer and one with ovarian cancer
- One affected relative with ovarian cancer and an HNPCC family history

Ovarian — High risk
- An individual with BRCA1 or BRCA2 mutations or other known predisposing gene mutations, or her untested female relatives
- First-degree relative with breast and ovarian cancer

Bowel — Moderate risk
- One first-degree relative with colorectal cancer under age 45 years
- Two individuals affected with colorectal cancer (one less than 55 years) who are first-degree relatives of each other, and one a first-degree relative of the consultand
- Three affected family members with colorectal or endometrial cancer who are first-degree relatives of each other, and one a first-degree relative of the consultand

Figure 1. Flow diagram of response and results of a survey to estimate the prevalence of a family history of selected cancers in a Scottish population surveyed in 1999-2000.
Diabetes cardiomyopathy: targeted regulation of mitochondrial dysfunction and therapeutic potential of plant secondary metabolites Diabetic cardiomyopathy (DCM) is a specific heart condition in diabetic patients, which is a major cause of heart failure and significantly affects quality of life. DCM is manifested as abnormal cardiac structure and function in the absence of ischaemic or hypertensive heart disease in individuals with diabetes. Although the development of DCM involves multiple pathological mechanisms, mitochondrial dysfunction is considered to play a crucial role. The regulatory mechanisms of mitochondrial dysfunction mainly include mitochondrial dynamics, oxidative stress, calcium handling, uncoupling, biogenesis, mitophagy, and insulin signaling. Targeting mitochondrial function in the treatment of DCM has attracted increasing attention. Studies have shown that plant secondary metabolites contribute to improving mitochondrial function and alleviating the development of DCM. This review outlines the role of mitochondrial dysfunction in the pathogenesis of DCM and discusses the regulatory mechanism for mitochondrial dysfunction. In addition, it also summarizes treatment strategies based on plant secondary metabolites. These strategies targeting the treatment of mitochondrial dysfunction may help prevent and treat DCM. 
Introduction

Diabetes mellitus (DM) is a worldwide public health problem. According to the latest data of the International Diabetes Federation (IDF) in 2021, diabetes has become a health burden affecting 537 million people worldwide, and it is estimated that this number will increase to 783 million by 2045 (Saeedi et al., 2019; Sun et al., 2022). DM can be divided into type 1 (T1DM) and type 2 (T2DM), of which T2DM accounts for more than 90% of the diabetic population. Patients with T1DM or T2DM have a high risk of developing diabetic cardiomyopathy (DCM) and even heart failure (Lam, 2015). DCM is a myocardial-specific microvascular complication that results in structural and functional abnormalities of the heart muscle in diabetic patients without other cardiac risk factors such as coronary artery disease, hypertension, and severe valve disease (Heather et al., 2022). It has been estimated that the prevalence of DCM is ~1.1% in the general population and ~16.9% in diabetics (Rajbhandari et al., 2021). Studies have shown that hyperinsulinemia, insulin resistance, and hyperglycemia are the starting points of the cascade of cardiac dysfunction in DCM (El Hayek et al., 2021; Avagimyan et al., 2022). Under hyperglycemic conditions, multiple metabolic pathways are activated and interact with each other, leading to myocardial fibrosis and hypertrophy, cardiomyocyte apoptosis, and reduced coronary microcirculation perfusion, evolving into diastolic and systolic dysfunction and eventually diabetic heart failure (Schilling, 2015). Furthermore, there are some pathophysiological differences in the triggering of DCM between T1DM and T2DM. Patients with T1DM experience severe insulin deficiency due to autoimmune disease, leading to hyperglycemia and abnormal metabolism and function of myocardial cells. Delayed reuptake of calcium into the sarcoplasmic reticulum is associated with hyperglycemia, resulting in impaired left ventricular contractile function,
making contractile dysfunction symptoms more typical in T1DM patients. In contrast, T2DM is mainly driven by hyperinsulinemia and insulin resistance. Its clinical manifestations are related to myocardial fibrosis and left ventricular remodeling, leading to increased wall stiffness, reduced compliance, and early induction of diastolic myocardial dysfunction in the disease course (Nathan, 2015; Waddingham et al., 2015; Prandi et al., 2023).

Mitochondria are important organelles in cells, mainly responsible for the generation of cellular energy. Through the process of oxidative phosphorylation, mitochondria produce adenosine triphosphate (ATP), which provides energy for the normal physiological activities of the cell (Wang et al., 2020). In DCM, mitochondrial dysfunction has a significant impact on heart function. Due to hyperglycemia and insulin resistance, mitochondrial function in the myocardial cells of diabetic patients may be impaired. Mitochondrial dysfunction leads to reduced ATP production, increased oxidative stress, and disrupted calcium ion balance, and subsequently affects the normal function of myocardial cells. Further damage may lead to decreased myocardial contractile force and abnormal cardiac structure and function, ultimately resulting in serious consequences such as heart failure (Zhu et al., 2021). The mechanisms by which hyperglycemia leads to mitochondrial dysfunction may involve the generation of glycation end products, oxidative stress, and abnormal lipid metabolism. Hyperglycemia may increase the generation of glycation end products, which can bind to proteins and DNA inside the mitochondria, forming highly pathogenic crosslinks and damaging mitochondrial structure and function. Hyperglycemia may also raise the level of oxidative stress inside the mitochondria, disrupting the mitochondrial redox balance, thereby damaging the integrity of the mitochondrial membrane and the function of the electron transport chain and affecting
the production of energy in the mitochondria. Additionally, hyperglycemia may cause abnormal lipid metabolism, increasing the burden of lipid oxidation inside the mitochondria and promoting lipid peroxidation reactions, which damage mitochondrial membrane structure and function. Treatment strategies for mitochondrial dysfunction in DCM are one of the hot spots of current research (Wang et al., 2020; Wang et al., 2020). Some studies have shown that plant secondary metabolites have the potential to improve mitochondrial function and alleviate the development of DCM (Gao et al., 2022; Sodeinde et al., 2023). Plant secondary metabolites may improve mitochondrial function by regulating mitochondrial dynamics, reducing oxidative stress, maintaining calcium ion balance, promoting mitochondrial biogenesis, inducing mitophagy, inhibiting mitochondrial uncoupling, and regulating myocardial insulin signaling. Therefore, in-depth research on the mechanisms of mitochondrial dysfunction in DCM and the development of treatment strategies targeting mitochondrial function is of great significance for improving the prognosis of DCM.
The role of mitochondrial dysfunction in the pathogenesis of diabetic cardiomyopathy

The pathogenesis of DCM is complex, with early manifestations of myocardial fibrosis, functional remodeling, and associated diastolic dysfunction, progressing to systolic dysfunction, and ultimately leading to heart failure with reduced ejection fraction (EF) values. The diagnostic criteria for DCM include left ventricular diastolic dysfunction and/or reduced left ventricular ejection fraction (LVEF), left ventricular hypertrophy, and interstitial fibrosis, which can be classified as early, late, and end stages (Joubert et al., 2019). Current research indicates that the development of DCM is associated with mitochondrial dysfunction, abnormal glucose and lipid metabolism, oxidative stress, inflammation, and myocardial fibrosis. Among these, mitochondrial dysfunction plays a crucial role in the pathogenesis of DCM (Sangwung et al., 2020). Myocardial mitochondrial dysfunction refers to the accumulation of reactive oxygen species (ROS) in cells induced by factors such as ischemia or hypoxia, resulting in abnormal mitochondrial structure and function (Bozi et al., 2020). In addition, a consequence of mitochondrial dysfunction is the excessive production of ROS in the respiratory chain. When the accumulation of ROS exceeds the clearance capacity of the antioxidant system, it promotes cellular oxidative stress and induces tissue damage. Specific factors leading to excessive ROS production include hyperglycemia, hyperlipidemia, and inflammatory responses, among others. These factors disrupt the electron transport chain within the mitochondria, leading to increased ROS generation. Excessive ROS negatively impacts cell structure and function, for example by oxidizing lipids, proteins, and DNA, damaging cell membranes, mitochondrial membranes, and organelle structures, and triggering cell apoptosis and inflammatory reactions (Dubois et al., 2020). Additionally, mitochondrial dysfunction is
often observed in the rat model of streptozotocin (STZ)-induced diabetes, manifested as disordered mitochondrial structure, reduced mitochondrial DNA, and reduced levels of biogenesis-related messenger RNA, leading to impaired mitochondrial biology (Marciniak et al., 2014).

Mitochondria are double-membrane organelles that maintain a highly dynamic and multifunctional network (Iannetti et al., 2015). Maintaining the integrity and function of mitochondria is crucial for cellular physiology, especially in the energy-demanding heart. Within cardiac muscle cells, glucose is oxidized to ATP, which is the main source of cellular energy. The oxidation process mainly occurs within mitochondria: through the catalytic action of enzymes such as ATP synthase, glucose is gradually broken down and energy is released. The ATP produced powers the contraction and relaxation of myocardial cells, thereby maintaining normal heart function (Cree et al., 2017). However, in patients with diabetes, due to insufficient insulin secretion or insulin resistance, glucose cannot be effectively used, and the energy source of the heart shifts to fatty acid oxidation (FAO). Over time, long-term dependence on FAO can lead to the accumulation of lipid metabolites in myocardial cells, ultimately leading to mitochondrial dysfunction and cardiac dysfunction (Croston et al., 2014). In summary, mitochondria are not only the main metabolic organelles in cells, but also important regulators of insulin resistance; their dysfunction plays a particularly important role in the development of DCM.

Mitochondrial quality control mainly includes mitochondrial dynamics, mitophagy, and mitochondrial biogenesis. Mitochondrial dynamics involve fusion and fission. Mitochondrial fusion is induced by homotypic and heterotypic interactions between mitochondrial fusion proteins 1 and 2 (Mfn1/2) of the outer mitochondrial membrane (OMM) and optic atrophy 1 (Opa1) of the inner mitochondrial membrane (IMM). Mitochondrial fission mainly involves dynamin-related protein 1 (Drp1), mitochondrial fission factor (Mff), and mitochondrial fission 1 (Fis1); Drp1 is recruited to the OMM through interactions with Fis1 and Mff. Under physiological conditions, mitochondrial fusion and fission constrain each other to reach a dynamic equilibrium. Mitochondrial fission contributes to the clearance of damaged or aging mitochondria, leading to a decrease in mitochondrial membrane potential (Δψm) and thereby activating mitophagy. Mitophagy consists of four key steps: 1) depolarization of damaged mitochondria, leading to loss of membrane potential; 2) enclosure of mitochondria by an autophagosome to form a mitochondrial autophagosome; 3) fusion of the mitochondrial autophagosome with lysosomes; and 4) degradation of mitochondrial contents by lysosomes. Mitochondrial biogenesis is a tightly regulated process. Adenosine monophosphate-activated protein kinase (AMPK) can phosphorylate peroxisome proliferator-activated receptor-γ coactivator-1α (PGC-1α), while sirtuin 1 (SIRT1) can deacetylate PGC-1α. PGC-1α activates signaling molecules such as nuclear respiratory factor 1/2 (Nrf1/2) and mitochondrial transcription factor A (TFAM), driving the replication and transcription of mtDNA, which is translated into proteins that assemble to form new mitochondria.
Mitochondrial dynamics

Mammalian mitochondria are dynamic organelles with two membranes, constantly changing in length, size, quantity, and shape within the cell (Chang et al., 2022). Mitochondrial dynamics consist of mitochondrial fusion and fission, where fusion is the integration of material from different mitochondria, while fission is the separation of mitochondria from an intact parent (Sygitowicz et al., 2022). The main regulatory factors responsible for mitochondrial fusion in mammalian cells are mitochondrial fusion proteins 1 and 2 (Mfn1/2), located on the outer mitochondrial membrane (OMM), and optic atrophy 1 (Opa1), a protein located on the inner mitochondrial membrane (IMM) (Parra et al., 2011). Mitochondrial fusion proceeds in two steps: OMM fusion and IMM fusion. OMM fusion is mainly mediated by Mfn1 and Mfn2, which can form bridges on the outer membranes of mitochondria and promote outer membrane fusion between two mitochondria. The function of Mfn1 and Mfn2 is to guide the outer membranes of two mitochondria into contact with each other and then promote their fusion (Yu et al., 2020; Casellas et al., 2021). IMM fusion is mainly mediated by the Opa1 protein, which forms complexes on the mitochondrial inner membrane, promoting fusion of the inner membranes of two mitochondria into one and thereby forming a continuous inner membrane structure (Gilkerson et al., 2021; Tokuyama et al., 2023). Mitochondrial fission is the process of dividing an intact mitochondrial progenitor into two or more mitochondria, leading to the redistribution of mitochondrial genetic material, structure, and quantity within the cell (Fröhlich et al., 2013). Mitochondrial fission mainly involves dynamin-related protein 1 (Drp1), fission protein 1 (Fis1), and mitochondrial fission factor (Mff) (Yang et al., 2022). Drp1 is a cytosolic, GTP-dependent dynamin-family protein involved in the fission process. Drp1
functions by localizing to the OMM, forming a ring around the mitochondrion, and then hydrolyzing GTP to cause the ring to contract, resulting in mitochondrial fission. In addition, various post-translational modifications, including phosphorylation, ubiquitination, S-nitrosylation, and acetylation, control the transport of Drp1 from the cytoplasm to the mitochondria. These post-translational modifications promote mitochondrial fission by enhancing Drp1 oligomerization and its attachment to receptors (Jin et al., 2021). Mff and Fis1 can also affect Drp1-induced mitochondrial fission through post-translational phosphorylation (Wang et al., 2022). Imbalance of fusion- and fission-related proteins can lead to DCM. Studies have shown that during the development of DCM, Drp1, Mff, and Fis1 are significantly upregulated in myocardial cells, while Opa1 and Mfn1/2 are significantly downregulated (Ikeda et al., 2015).

Based on the essential role of mitochondrial dynamics in regulating DCM, some targeted drugs that normalize mitochondrial dynamics are used to treat hyperglycemia-induced myocardial injury. Melatonin is an anti-diabetic drug. In vivo, it can prevent the occurrence of cardiac dysfunction in diabetes by inhibiting Drp1-induced mitochondrial fission. Specifically, melatonin intervention reduces the expression level of Drp1, inhibits mitochondrial fragmentation, suppresses oxidative stress, and reduces cardiomyocyte apoptosis by inhibiting SIRT1/PGC-1α-dependent mitochondrial fission, thereby improving mitochondrial function and cardiac function (Ding et al., 2018). Moreover, intervention with Mdivi-1, a Drp1 inhibitor, suppressed the translocation of Drp1, thus reducing the myocardial infarct area in STZ-induced diabetic mice after ischemia-reperfusion injury (Ding et al., 2017). In diabetic hearts, the mitochondrial fusion promoter M1 significantly increases mitochondrial fusion and the expression of Opa1, while reducing myocardial
oxidative stress and improving myocardial fibrosis (Ding et al., 2020; Feng et al., 2021). Similarly, dapagliflozin can promote mitochondrial fusion and inhibit fission, accompanied by a prolonged cardiac action potential and stable Δψm, which may be due to upregulation of Mfn2 expression (Durak et al., 2018). Nicotinamide riboside (NR) activates SIRT1/PGC-1α/PPARα signaling, increases Mfn2 expression and promotes mitochondrial fusion in diabetic db/db mice, reduces cell apoptosis, and improves heart function (Hu et al., 2022).

Mitochondrial biogenesis

Mitochondrial biogenesis is a process that maintains the quantity of mitochondria through regeneration, aiming to produce new and healthy mitochondria. The process of mitochondrial biogenesis is complex and is mainly regulated by mitochondrial genes (mtDNA) and nuclear genes (nDNA). mtDNA encodes some of the proteins and RNAs of the mitochondrial inner membrane, including important protein subunits and tRNAs, while most mitochondrial proteins and other components are encoded by nDNA (Cameron et al., 2016; Tao et al., 2022). Under normal conditions, mitochondrial biogenesis enhances mitochondrial oxidative phosphorylation capacity, reduces pathological oxidative stress, maintains normal mitochondrial physiological function, and meets the energy metabolism needs of the cell. However, when the process of mitochondrial biogenesis is disrupted by exogenous or endogenous factors, it can promote mitochondrial dysfunction, leading to excessive ROS production and causing mitochondrial oxidative stress and calcium overload, thereby triggering cell apoptosis or disrupting cellular homeostasis (Bruggisser et al., 2017). PGC-1α is considered a key central mediator regulating mitochondrial biogenesis, and its expression is regulated by various upstream stimuli and post-translational modifications (Novakova et al., 2022). The expression of PGC-1α is regulated by the activation of transcription factors that act on mtDNA, consisting of
sirtuin 1 (SIRT1), myocyte enhancer factor 2 (MEF2), and forkhead box O1 (FoxO1), as well as other signal inducers such as AMPK, AKT-eNOS, and calmodulin-dependent protein kinase IV (CaMK IV) (Gleyzer and Scarpulla, 2016; Wang et al., 2022). PGC-1α expression is also regulated by post-translational modifications, including methylation, acetylation, ubiquitination, and phosphorylation. In addition, PGC-1α collaborates with a series of nDNA transcription factors to regulate downstream signaling pathways, including nuclear respiratory factor 1/2 (Nrf1/2), mitochondrial transcription factor A (TFAM), estrogen-related receptor α (ERR-α) and PPARs (Ploumi et al., 2017). PGC-1α promotes mtDNA replication and enhances mitochondrial biogenesis by interacting with its upstream and downstream factors, which is crucial for maintaining the normal physiological function of tissues with high energy metabolism demands. AMPK is a key pathway in energy metabolism, and it is activated when the intracellular ATP level decreases or the intracellular AMP/ATP ratio increases. Activated AMPK can directly phosphorylate PGC-1α, thereby increasing its transcriptional activity. Meanwhile, AMPK can phosphorylate threonine 177 and serine 538 of PGC-1α, promoting the expression of PGC-1α, a series of mitochondrial target genes, and oxidative metabolism-related genes (Fernandez et al., 2011). SIRT1 is a deacetylase that regulates gene expression and metabolic processes within cells, playing a crucial role in energy metabolism and oxidative stress in particular. SIRT1 can increase PGC-1α activity by deacetylating it, thereby promoting mitochondrial biogenesis (Wu et al., 2023). Some evidence has suggested an association between mitochondrial biogenesis and DCM. Diao et al.
found reduced mtDNA replication and transcription, damaged mitochondrial ultrastructure, and downregulation of PGC-1α, leading to impaired mitochondrial biogenesis and cardiac injury, in a DCM rat model (Diao et al., 2021). Research by Tao et al. has shown that MiR-144 is downregulated in HG-induced myocardial cells and STZ-induced DCM rats. Overexpression of MiR-144 enhances mitochondrial biogenesis and inhibits cell apoptosis, while inhibiting MiR-144 shows the opposite results. In addition, Rac-1 has been identified as a regulatory gene of MiR-144. Reduced expression of Rac-1 activates AMPK phosphorylation and PGC-1α deacetylation, leading to increased mitochondrial biogenesis and reduced cell apoptosis (Tao et al., 2020). Plasma levels of adiponectin (APN), an upstream activator of AMPK, are significantly decreased in ob/ob mice. The researchers further validated the hypothesis of a causal relationship between APN reduction and impaired mitochondrial biogenesis. After 1 week of APN treatment in ob/ob mice, activation of AMPK and reduction of PGC-1α acetylation increased mitochondrial biogenesis and alleviated mitochondrial disease. Conversely, knocking out APN inhibits AMPK/PGC-1α signaling and impairs mitochondrial biogenesis (Yan et al., 2013). Tetrahydrobiopterin (BH4) is a novel endogenous activator of CaMKK2 that can participate in regulating vascular and cardiac function. Research has shown that in the db/db mouse model lacking BH4, ROS production increases and induces mitochondrial dysfunction. Supplementing BH4 can improve cardiac function, correct myocardial morphological abnormalities, and increase mitochondrial biogenesis by activating the CaMKK2/PGC-1α signaling pathway (Kim et al., 2020). The antioxidant pterostilbene, found in blueberries, regulates AMPK/NRF2/HO-1/PGC-1α signal transduction, which can reduce oxidative stress and inflammation and improve mitochondrial biogenesis in a high-glucose rat model (Kosuru et al., 2018). In conclusion, an
increasing amount of evidence suggests that abnormal mitochondrial biogenesis is a major factor leading to DCM, and regulating the process of mitochondrial biogenesis has become a potential strategy for treating DCM.

Mitophagy

Mitophagy contributes to maintaining the health and function of mitochondria within cells, which is crucial for cellular metabolism and survival. The process of mitophagy involves the selective targeting of autophagosomes to engulf dysfunctional or damaged mitochondria, which are then transferred to lysosomes for degradation (Li et al., 2021; Saito et al., 2021). Three pathways are currently known to induce mitophagy: the phosphatase and tensin homologue-induced putative kinase 1 (PINK1)/Parkin pathway, the FUN14 domain-containing protein 1 (FUNDC1) pathway, and the BCL2/adenovirus E1B 19 kDa interacting protein 3 (BNIP3)/NIX pathway (Tong et al., 2021). Among them, the most studied mitophagy pathway is PINK1/Parkin (Andres et al., 2017). PINK1, a serine/threonine kinase, functions as a messenger to relay the collapse of Δψm to Parkin. Usually, PINK1 is swiftly transported to the mitochondrial matrix and cleaved by mitochondrial proteases (Jin et al., 2010); therefore, under normal circumstances, the content of PINK1 in mitochondria is relatively low. However, when mitochondria are damaged, the decrease in Δψm is directly related to the increase in PINK1 on the OMM. PINK1 and Parkin jointly control the removal of damaged mitochondria (Zhang et al., 2020). Similarly, Parkin acts as an E3 ubiquitin ligase and remains cytosolic under normal conditions. However, upon depolarization of the mitochondrial membrane, it rapidly translocates to the OMM and ubiquitinates proteins located in the outer membrane, thereby marking them for elimination (Cai et al., 2022). Many Parkin substrates have been found to accumulate on the OMM, such as Mfn1/2, OMM transporters, and voltage-dependent anion channels (Poole et al., 2010; Morciano et al., 2020). In DCM,
mitophagy enhances the regeneration of cardiomyocyte mitochondria and stimulates biogenesis, which can normalize the morphology and bioenergetics of cardiac mitochondria (Wang et al., 2018; Wang et al., 2019). Mitophagy also reduces lipid accumulation, improves mitochondrial homeostasis, and restores the diastolic and systolic functions of the diabetic heart (Zhou et al., 2019). Tong et al. used the mito-Keima method to evaluate mitophagy in a GFP-LC3 mouse myocardial cell model induced by a high-fat diet (HFD). Knocking out Parkin inhibited mitophagy, increased lipid accumulation, and exacerbated diastolic dysfunction, whereas injection of Tat-Beclin1 (TB1) activated mitophagy, reduced lipid accumulation, and prevented cardiac diastolic dysfunction. The study suggests that inhibiting mitophagy leads to mitochondrial dysfunction and lipid accumulation, thereby exacerbating diabetic cardiomyopathy, whereas activating mitophagy can prevent HFD-induced diabetic cardiomyopathy (Tong et al., 2019). Alisporivir is a non-immunosuppressive cyclosporin derivative and a selective inhibitor of the mitochondrial permeability transition pore (mPTP). Belosludtseva et al. treated diabetic mice (HFD combined with STZ) with alisporivir (2.5 mg/kg/d) for 20 days. Alisporivir improved mitochondrial swelling and ultrastructural changes in the myocardial cells of diabetic mice, increased the mRNA expression of Pink1 and Parkin in heart tissue, and reduced the accumulation of lipid peroxides, suggesting that alisporivir exerts a cardioprotective effect by inducing mitophagy (Belosludtseva et al., 2021). Therefore, pharmacological targeting of mitophagy may be an effective approach to slow the progression of DCM and improve prognosis (Figure 1).
Mitochondrial oxidative stress

Diabetic cardiomyopathy is characterized by structural and functional abnormalities of the myocardium caused by hyperglycemia, and its pathogenesis is closely related to mitochondrial oxidative stress. During mitochondrial oxidative phosphorylation (OXPHOS), NADH and FADH2 serve as electron donors, releasing electrons at electron transport chain (ETC) complexes I and II, respectively. These electrons travel from complexes I and II through ubiquinone to complex III; at complex IV, O2 combines with the electrons transferred from complex III by cytochrome c, generating H2O. Protons are pumped into the intermembrane space by complexes I, III, and IV, and are ultimately transported back to the mitochondrial matrix through complex V, generating ATP. Notably, electrons can easily leak from complexes I and III, leading to the generation of ROS (Teshima et al., 2014; Jia et al., 2016). Under insulin resistance and hyperglycemia, the mitochondrial OXPHOS process is impaired, reducing ATP synthesis and producing large amounts of ROS. ROS are highly reactive molecules that cause oxidative damage by interacting with proteins, lipids, DNA, and other molecules inside the mitochondria, leading to oxidative stress. Oxidative stress affects the structure and function of mitochondria, thereby influencing cellular energy metabolism and signal transduction. In addition, oxidative stress activates proteins on the OMM, such as Bax and Bcl-2, which are involved in regulating cell apoptosis (Quan et al., 2020; Dewanjee et al., 2021).
Link between mitochondrial oxidative stress and lipotoxicity. The augmented uptake of fatty acids (FA) by mitochondria and their subsequent oxidation in diabetic cardiac tissues may surpass mitochondrial respiratory capacity, leading to the buildup of harmful lipid metabolites. This accumulation can result in cardiac lipotoxicity and impairment of mitochondrial function (Jia et al., 2018). Adenosine monophosphate-activated protein kinase (AMPK) typically enhances the generation of new mitochondria by activating peroxisome proliferator-activated receptor-γ (PPAR-γ) coactivator-1α (PGC-1α), a key metabolic regulator of mitochondrial biogenesis and respiratory performance (Crisafulli et al., 2020). Impairment of the AMPK/PGC-1α signaling pathway associated with FAO occurs during the advanced stage of DCM, exacerbating mitochondrial dysfunction (Nakamura et al., 2022). Additionally, increased FAO can promote ROS production and induce cardiac oxidative stress and inflammation. The elevated ROS levels further contribute to mitochondrial dysfunction, leading to lipid accumulation, fibrosis, and diastolic dysfunction, ultimately exacerbating heart failure (Murtaza et al., 2019). Similarly, an increase in free fatty acids (FFAs) in the blood can raise FA levels in cardiomyocytes. Excessive FA accumulate in cells in the form of lipid droplets and triglycerides, while diacylglycerol and ceramide also increase (Nakamura et al., 2020). Diacylglycerol exacerbates insulin resistance and oxidative stress by activating protein kinase C (PKC), and has been shown to serve as a toxic lipid intermediate in cardiac tissue (Chokshi et al., 2012). The accumulation of ceramide leads to substantial production of mitochondrial ROS, inducing mitochondrial dysfunction and oxidative stress within myocardial mitochondria (Law et al., 2018; Kim et al., 2020). Moreover, the anti-diabetic drug empagliflozin (an SGLT2 inhibitor) can
lead to a decrease in plasma volume and cardiac preload, regulate superoxide dismutase (SOD) levels and lipid metabolism, reduce oxidative stress, and improve mitochondrial function, thereby exerting a protective effect on the heart (Kaludercic et al., 2020).

Recent research has highlighted the close relationship between ferroptosis and mitochondrial oxidative damage. Abnormal mitochondrial ferroptosis occurs in the hearts of diabetic mice, mainly manifested as a decrease in Δψm, downregulation of mitochondrial SOD and glutathione peroxidase 1 (GPX1) expression, and a significant increase in mitochondrial ROS levels (Fang et al., 2020). Furthermore, another study demonstrated that feeding mice a high-iron diet leads to severe myocardial damage, manifested as iron overload, increased lipid peroxidation, and decreased glutathione levels (Sampaio et al., 2014). Du et al. treated STZ-induced diabetic C57BL/6J mice with canagliflozin for 6 weeks, as well as H9C2 cardiomyocytes induced with high glucose (HG) for 24 h. Their in vivo and in vitro studies showed that canagliflozin inhibits the deposition of total iron and Fe2+, downregulates the expression of ferritin heavy chain (FTN-H), upregulates the cystine-glutamate antiporter (xCT), increases myocardial Δψm, reduces ROS levels, and inhibits mitochondrial oxidative damage, exerting a cardioprotective effect by inhibiting ferroptosis (Du et al., 2022). Ferroptosis is a novel form of programmed cell death, and more clinical research is needed to support its role in the prevention and treatment of diabetic cardiomyopathy. In conclusion, targeting ferroptosis may provide a new strategy for the prevention and treatment of DCM.
Mitochondrial uncoupling

Mitochondrial uncoupling is an important physiological mechanism that regulates energy metabolism and heat generation within cells. Under normal circumstances, there is a proton gradient across the inner mitochondrial membrane, that is, a difference in proton concentration between the matrix and the intermembrane space. During uncoupling, this gradient is dissipated to maintain the balance of proton concentration: protons pass through channels formed by uncoupling proteins (UCPs) in the IMM rather than through ATP synthase, so the gradient is not used to produce ATP but is instead released as heat (Azzu et al., 2010). UCPs located on the IMM are considered the main mediators of mitochondrial uncoupling. The UCP family consists mainly of UCP1–UCP5. UCP1 is highly expressed in the mitochondria of adipose tissue and is mainly responsible for temperature regulation. UCP2 is widely expressed in most tissues, including the myocardium, and is involved in the body's energy metabolism. UCP3 is predominantly expressed in skeletal muscle, while UCP4 and UCP5 are present in brain tissue. The differential expression of UCPs across tissues reflects their distinct physiological functions. Current research on DCM mainly focuses on UCP2 and UCP3, especially UCP2 (Mailloux and Harper, 2011; Akhmedov et al., 2015).
In DCM, UCP2 may be involved in disease development through various physiological mechanisms. UCP2 can increase the proton permeability of the IMM, thereby reducing the electrochemical load on the mitochondria, decreasing the proton gradient, and inhibiting the coupling of the tricarboxylic acid cycle with oxidative phosphorylation, ultimately leading to mitochondrial uncoupling. This uncoupling may reduce cellular ATP synthesis while increasing mitochondrial ROS production, thereby triggering oxidative stress and cell damage (Cadenas, 2018; Nirengi et al., 2020; Ho et al., 2022). In addition, UCP2 may be involved in the occurrence of DCM by regulating lipid metabolism. Studies have shown that UCP2 can affect lipid metabolism pathways, including FAO and FA synthesis, thereby influencing intracellular lipid content and oxidative stress levels. Dysregulation of UCP2-related lipid metabolism may increase lipid accumulation in myocardial cells, impairing their function (Diano et al., 2012). Owing to the elevated FFA levels in DCM, enhanced expression of UCPs is directly induced by PPARα, affecting the permeability and proton leak of the IMM and inhibiting ATP production, as is typically observed in failing hearts (Wang et al., 2021). PPARα promotes the oxidative metabolism of FA, including FA uptake, transport, and β-oxidation, as well as the permeability of the IMM; these processes directly impact the electrochemical gradient across the IMM and thus the degree of mitochondrial uncoupling (Lee et al., 2017; Crescenzo et al., 2019; Liu et al., 2022). Mitochondrial uncoupling is mainly mediated by the activation of UCPs. Dludla et al. reported that guanosine diphosphate (GDP) can inhibit the activation of UCPs, preventing mitochondrial proton leak in diabetic db/db mice (Dludla et al.,
2018). Additionally, some studies have suggested that overexpression of UCP2 may lead to mitochondrial dysfunction and exacerbate the development of diabetic cardiomyopathy. PPAR agonists (such as pioglitazone) can regulate UCP2 expression, significantly reducing plasma free fatty acid levels in type 2 diabetic patients, increasing Δψm, and restoring normal mitochondrial function (Wassef et al., 2018). Interestingly, another study found that UCP2 was downregulated in an STZ-induced diabetic mouse model, leading to a decrease in Δψm and an increase in cell death; overexpression of mitochondrial aldehyde dehydrogenase (ALDH2) reversed this situation, with beneficial effects on cardiac structure and function, mitochondrial function, and cell survival (Zhang et al., 2023). In summary, targeted regulation of UCP2 will enhance our understanding of DCM.

Mitochondrial calcium handling

Mitochondrial calcium handling plays a crucial role in maintaining normal cellular function. Mitochondrial calcium homeostasis is the balance of intracellular calcium concentration maintained by mitochondria through regulated uptake and release of calcium ions (Diaz et al., 2021). Within mitochondria, Ca2+ enhances oxidative phosphorylation activity (including mitochondrial complexes I, III, IV, and V) and activates the pyruvate dehydrogenase complex (PDC), alpha-ketoglutarate dehydrogenase, and isocitrate dehydrogenase to enhance ATP regeneration (Ketenci et al., 2022). Various calcium channel proteins exist in the mitochondrial membranes, the most important being the mitochondrial calcium uniporter (MCU) and the voltage-dependent anion channel (VDAC). MCU is the key protein regulating calcium uptake across the IMM, while VDAC regulates the passage of calcium across the OMM. Functional abnormalities or altered expression of these channel proteins may lead to
excessive or insufficient mitochondrial calcium uptake, thereby affecting mitochondrial function (Hamilton et al., 2021). Mitochondrial calcium uptake protein 1 (MICU1), located on the inner mitochondrial membrane, interacts with MCU to regulate mitochondrial calcium uptake. When the intracellular calcium concentration rises, MICU1 binds to the MCU channel and inhibits its activity, thereby reducing calcium uptake into the mitochondria. This regulation helps maintain mitochondrial calcium balance and prevents excessive calcium uptake from damaging mitochondrial function (Ji et al., 2017). In DCM, abnormal mitochondrial calcium handling may lead to mitochondrial dysfunction and damage to myocardial cells. Studies have shown that MICU1 expression is downregulated in cardiomyocytes of 12-week-old db/db mice, accompanied by mitochondria-dependent intrinsic apoptosis. In this mouse model, reconstitution of MICU1 reduced myocardial hypertrophy and fibrosis, inhibited apoptosis, and normalized cardiac function. Furthermore, upregulation of MICU1 increases mitochondrial Ca2+ uptake and attenuates mitochondrial ROS production and apoptosis (Dillmann, 2019). This dysfunctional calcium handling can be rescued by restoring calcium to mitochondria, thereby enhancing mitochondrial activity and energy production. Similarly, in an STZ-induced diabetic mouse model, Suarez et al.
found that the hearts of diabetic mice showed altered expression of MCU and MCU complex members, which reduced mitochondrial Ca2+ uptake, mitochondrial energetic function, and cardiac function. Conversely, adeno-associated virus-mediated normalization of MCU levels in these hearts restored mitochondrial Ca2+ homeostasis, reduced PDC phosphorylation levels, and improved cardiac energy metabolism and cardiac function (Suarez et al., 2018). In addition, GrpE-like 2 (Grpel2) has been shown to reduce myocardial ischemia/reperfusion injury by inhibiting MCU-mediated mitochondrial calcium overload. Grpel2 levels are decreased in STZ-induced DCM, and Grpel2 overexpression can mitigate mitochondrial dysfunction and apoptosis in DCM by maintaining the import of dihydrolipoyl succinyltransferase (DLST) into mitochondria (Yang et al., 2023).

Myocardial insulin signalling

Diabetic cardiomyopathy is a heart disease caused by hyperglycemia and insulin resistance. Diabetic patients often experience insulin resistance (reduced cellular sensitivity to insulin), leading to abnormal insulin signal transduction, including reduced expression and function of insulin receptor substrate-1 (IRS-1) and a blocked PI3K/Akt signaling pathway. These abnormalities impair cellular glucose uptake and utilization, leading to metabolic disorders (Salvatore et al., 2021). IRS-1 is one of the key molecules in insulin signal transduction. Under normal circumstances, insulin binds to its receptor, activates IRS-1, and then activates the PI3K/Akt signaling pathway. Activated Akt promotes the translocation of GLUT4, thereby increasing glucose uptake. In DCM, however, insulin signal transduction is inhibited and the phosphorylation level of IRS-1 decreases, impairing its ability to recruit and activate the PI3K/Akt pathway (Chen et al., 2022). This abnormal insulin signal transduction may directly affect
mitochondrial function. Mitochondria are the energy production centers within cells, responsible for producing most of the intracellular ATP (Qi et al., 2013).

Schematic diagram of the structure and function of mitochondria under physiological conditions. In the IMM, NADH and FADH2 serve as electron donors, releasing electrons at complexes I and II, respectively. These electrons pass from complexes I and II through ubiquinone to reach complex III. Subsequently, at complex IV, O2 in the matrix accepts electrons transferred from cytochrome c by complex III, generating H2O. Protons are sequentially transferred to the intermembrane space (IMS) through complexes I, III, and IV. Ultimately, the protons in the IMS are transported back to the matrix through complex V to generate ATP. Additionally, electrons can easily leak from complexes I and III, leading to the formation of reactive oxygen species (ROS). Some protons return to the mitochondrial matrix through uncoupling proteins (UCPs), generating heat. Free Ca2+ in the cytoplasm can enter the mitochondrial matrix through the voltage-dependent anion channel (VDAC) on the OMM and the mitochondrial calcium uniporter (MCU) on the IMM. When insulin binds to the insulin receptor, the activated receptor phosphorylates IRS-1, which further activates PI3K and Akt, thereby promoting GLUT4 transport and glucose uptake.

Insulin signaling abnormalities can affect the structure and
function of mitochondria, leading to changes in mitochondrial membrane permeability, disruption of oxidative phosphorylation, and increased oxidative stress. These changes may result in mitochondrial dysfunction and subsequently affect the energy metabolism of myocardial cells (Chen et al., 2022). Research has shown that knocking out the insulin receptor in the heart leads to reduced cardiac glucose uptake and increased mitochondrial ROS production (Zhou et al., 2018; Gargiulo et al., 2020). Knockout of IRS-1 reduces ATP content in myocardial cells, impairs cardiac metabolism and function, increases fibrosis, and exacerbates heart failure (Battiprolu et al., 2012; Bugger et al., 2012). Ventricular muscle biopsies from T2DM patients have shown reduced PI3K/Akt signaling, as well as decreased GLUT4 expression and translocation (Hou et al., 2019). The E3 ubiquitin ligase mitsugumin 53 may play a crucial regulatory role in maintaining insulin signaling. Elevated levels of mitsugumin 53 in a T2DM mouse model are associated with increased degradation of insulin receptor and IRS-1 proteins, and overexpression of mitsugumin 53 inhibits insulin signal transduction and promotes cardiac fibrosis (Song et al., 2013; Liu et al., 2015). Conversely, downregulation of mitsugumin 53 may be a potential therapeutic approach to prevent diabetic cardiomyopathy from progressing to heart failure. In addition, abnormal insulin signaling may also increase apoptosis, a process that activation of the mitogen-activated protein kinase (MAPK) signaling pathway may exacerbate. Under normal circumstances, insulin promotes cell proliferation and growth, maintains the structure and function of cardiomyocytes, promotes glucose uptake and utilization, and provides the energy substrates needed by mitochondria, partly by regulating the MAPK signaling pathway. However, hyperglycemia and insulin resistance can lead to increased levels of inflammatory factors and oxidative
stress within cells, thereby activating the MAPK signaling pathway. The activated MAPK pathway can promote cardiomyocyte apoptosis, fibrosis, and inflammatory reactions, accelerating the progression of myocardial disease (Jia et al., 2018). In general, insulin signal transduction regulates various signaling molecules and pathways, affecting cellular energy metabolism and function. When these signaling pathways become disrupted or imbalanced, insulin resistance and diseases such as diabetes can develop. Therefore, a thorough understanding of the molecular mechanisms of insulin signal transduction helps reveal the pathogenesis of diabetes and provides new targets and strategies for the treatment and prevention of related diseases (Figure 2).

cardiac function changes rapidly and the risk of further progression to heart failure is high. Because mitochondrial dysfunction is the most common driving factor of diabetic cardiomyopathy, imbalanced mitochondrial dynamics, excessive oxidative stress, damaged mitophagy, impaired mitochondrial biogenesis, and impaired mitochondrial calcium handling constitute potential therapeutic targets (Varkuti et al., 2020) (Figure 3). Some antidiabetic medications currently in use may directly or indirectly interfere with the mitochondrial abnormalities associated with DCM, such as metformin, dapagliflozin, and empagliflozin. Metformin is a widely used antidiabetic drug in the clinic. It promotes mitophagy and improves myocardial cell dysfunction through AMPK-dependent or -independent mechanisms in diabetic hearts. Additionally, metformin can stimulate mitochondrial biogenesis in high glucose-induced cardiomyocytes by upregulating transcription factors related to mitochondrial biogenesis (such as PGC-1α and TFAM) (Abdel et al., 2018; Liu et al., 2020). However, the detailed mechanism by which metformin regulates mitochondrial function in DCM remains unclear (Packer, 2020). In addition, for
diabetic patients at risk of cardiovascular disease, cautious consideration of metformin use and close monitoring of cardiovascular events may be necessary. The newer hypoglycemic drugs dapagliflozin and empagliflozin are sodium-glucose cotransporter-2 inhibitors (SGLT2is) that reduce cardiovascular mortality and heart failure in patients with T2DM. Recent evidence suggests that SGLT2is may play a protective role in the heart by regulating mitochondrial function in diabetes models. In obese insulin-resistant rats induced by an HFD, administration of dapagliflozin for 4 weeks before myocardial ischemia/reperfusion injury effectively reduced mitochondrial ROS production, swelling, and depolarization. Studies have also shown that dapagliflozin improves mitochondrial ultrastructure by decreasing fragmentation and cristae loss (Wiviott et al., 2019; Li and Zhou, 2020). Additionally, in STZ-induced diabetic rats fed an HFD, empagliflozin has been shown to improve atrial structural and electrical remodeling by enhancing mitochondrial respiratory function and biogenesis; it plays a crucial role in mitochondrial biogenesis by activating the PGC-1α-NRF1-TFAM signaling pathway to prevent the induction of atrial fibrillation (Shao et al., 2019). However, the exact role of mitochondrial biogenesis in the occurrence and progression of ischemic cardiomyopathy or atrial fibrillation in diabetic patients remains unclear.
Besides the positive effects of antidiabetic drugs on DCM mitochondria, other strategies may also be promising treatments for mitochondrial dysfunction (Ketenci et al., 2022). For example, targeting mitochondrial ROS clearance is a potential therapeutic strategy in DCM. The mitochondria-targeted drugs Mn(III) tetrakis(4-benzoic acid) porphyrin (MnTBAP) and mitoquinone (MitoQ) have been shown to alleviate oxidative stress in preclinical studies. MnTBAP intervention can reverse myocardial oxidative stress and improve mitochondrial bioenergetics in a mouse model of metabolic syndrome, and MitoQ treatment can reduce ROS accumulation and exert anti-inflammatory and antioxidant effects in T2DM patients (Ilkun et al., 2015; Escribano et al., 2016). In addition, targeting mitochondrial regulators may also have beneficial effects on DCM.

AMPK is the main regulator of mitochondrial energy homeostasis. Activation of AMPK enhances the expression and translocation of GLUT4, enhances insulin-stimulated glucose uptake, and promotes mitochondrial biogenesis. Mechanistically, activated AMPK in myocardial cells increases glucose uptake and utilization while negatively regulating mTOR signaling, gluconeogenesis, and lipid and protein synthesis (Abdel et al., 2017). AMPK activation is essential to prevent the progression of diabetic cardiomyopathy, and AMPK is therefore considered an effective target for drug discovery and development to prevent and reverse DCM. Moreover, PPARα plays a crucial regulatory role in mitochondrial oxidative stress and myocardial glucose and lipid metabolism, mainly by regulating lipid metabolism and maintaining the energy balance of myocardial cells. PPARα activity is affected by the diabetic state, and its expression and function may be inhibited, leading to disordered lipid metabolism and damage to myocardial cell function (Lee et al., 2013). In
addition, PPARα activity can also affect myocardial mitochondrial function, including mitochondrial morphology, number, respiratory chain complexes, Δψm, and the oxidative stress response (Yin et al., 2019). Therefore, in-depth study of the expression and regulation of PPARα in diabetic cardiomyopathy, and of the relationship between PPARα and mitochondrial function, will help reveal the molecular mechanisms underlying the development of diabetic cardiomyopathy and provide new targets and strategies for treatment. Table 1 summarizes the interventions and targets addressing mitochondrial dysfunction in the progression of DCM.

Plant secondary metabolites targeting mitochondrial dysfunction in diabetic cardiomyopathy

Secondary metabolites derived from plants have the properties of being safe, effective, and low in toxicity, and research on their use in the prevention and treatment of diabetes and its complications has attracted increasing attention (Sukhikh et al., 2023). Previous studies have reported that research on plant secondary metabolites for diabetes mainly focuses on regulating lipid and protein metabolism pathways, insulin signaling pathways, anti-inflammatory responses, and antioxidant stress responses (Shehadeh et al., 2021). In recent years, targeting mitochondrial function has become a promising treatment strategy for various diseases; influencing mitochondrial function may therefore have beneficial effects on DCM. This section reviews some biologically active plant secondary metabolites targeting mitochondrial dysfunction for the treatment of DCM (Figure 4).
Flavonoids

Flavonoids are plant secondary metabolites characterized by a 2-phenylchromen-4-one structure and are widely present in plants. They exhibit various pharmacological activities, including antioxidant, anti-inflammatory, and anti-tumor effects, and have been used to treat various diseases, including diabetes (Shen et al., 2022). Flavonoids are considered promising anti-diabetic drugs, although their poor bioavailability is well recognized. Drug delivery technologies such as microencapsulation, nano-delivery systems, microemulsions, and enzyme-promoted methylation can enhance the therapeutic effects and bioavailability of flavonoids (Hussain et al., 2020). Flavonoids have a significant hypoglycemic effect: by regulating the activity of mitochondrial respiratory chain complexes, they affect oxidative phosphorylation, reduce oxidative stress, improve mitochondrial energy metabolism and ATP synthesis, and help reduce the risk of diabetes and its complications (Sapian et al., 2021).

Puerarin is a flavonoid compound isolated from Pueraria lobata (Willd.) Ohwi, with pharmacological activities such as reducing insulin resistance, alleviating inflammatory reactions, improving microcirculation, and inhibiting platelet aggregation (Huang et al., 2020). Cheng et al.
found that after 4 weeks of treatment with puerarin (100 mg/kg/d), the expression and translocation of GLUT4 increased, while the expression and translocation of CD36 decreased, in STZ- and nicotinamide (NA)-induced diabetic mice. Puerarin also enhances Akt phosphorylation, reduces PPARα expression, and improves heart function after myocardial infarction in diabetic mice by regulating mitochondrial energy metabolism (Cheng et al., 2015). In a controlled trial involving 50 patients undergoing heart valve replacement, puerarin appeared to enhance the safety and effectiveness of valve replacement surgery: pretreatment with puerarin reduced the activation of neutrophil NF-κB and the overexpression of IL-6 and IL-8, and inhibited the release of the cardiac enzymes troponin I (cTnI) and creatine kinase isoenzyme MB (CK-MB), indicating a protective effect on the myocardium (Zhou et al., 2019). Additionally, Sun et al. showed that puerarin-V (a new crystal form of puerarin) can significantly reduce mitochondrial ROS production, decrease MDA levels, increase the activity of SOD and GSH in the myocardium, improve the activity of the mitochondrial electron transport chain, and enhance the mitochondrial respiratory function related to complexes I/II in DCM mice. These results indicate that puerarin-V plays an antioxidant role in DCM and may help improve mitochondrial dysfunction. Furthermore, the therapeutic effect of puerarin-V in DCM was superior to that of puerarin injection (a marketed drug for myocardial ischemia), suggesting that puerarin-V may be an attractive compound for developing anti-DCM drugs (Sun et al., 2022).

Apigenin is a flavonoid widely found in plants, fruits, and vegetables, with antioxidant, anti-inflammatory, and antitumor effects (Salehi et al., 2019). Liu et al.
confirmed that treatment with apigenin (100 mg/kg/d) for 7 months significantly improves myocardial remodeling and cardiac function in an STZ-induced diabetic C57BL/6J mouse model. It also inhibits myocardial cell apoptosis, improves myocardial mitochondrial oxidative stress and the inflammatory response, and normalizes myocardial mitochondrial energetics. These effects are achieved by apigenin inhibiting the excessive accumulation of 4-hydroxynonenal, upregulating the expression of Bcl-2 and GPx, increasing SOD activity, reducing malondialdehyde (MDA) levels, downregulating the expression of Bax and cleaved caspase-3, and inhibiting the translocation of NF-κB (Liu et al., 2017). Additionally, apigenin has been found to reverse mitochondrial dysfunction induced by lipopolysaccharide (LPS), maintaining mitochondrial homeostasis and function by promoting the expression of mitochondrial SIRT3, inducing mitochondrial biogenesis factors (PGC-1α, TFAM) and fusion proteins (Mfn2, Opa1), and activating mitophagy (PINK1, Parkin) (Ahmedy et al., 2022). In conclusion, these results suggest that apigenin may be a promising compound for treating diabetic cardiac damage and neurological diseases by targeting mitochondrial function.

Acacetin is a common natural flavonoid that can be extracted and isolated from Carthamus tinctorius L. Pharmacological studies have shown that it has antioxidant, antitumor, anti-inflammatory, and cardiovascular protective effects (Han et al., 2021). The study by Song et al.
demonstrated that in an STZ-induced Sprague-Dawley (SD) diabetic rat model, treatment with acacetin (10 mg/kg/d) for 16 weeks activated AMPK phosphorylation and regulated the expression of PPARα. In vitro, acacetin (0.3, 1, 3 μM) downregulated the expression of Bax protein in H9C2 cells while upregulating the expression of Bcl-2, SOD1 (located in the mitochondrial intermembrane space), and SOD2 (mainly located in the mitochondrial matrix). The study suggests that acacetin can reduce oxidative stress, inhibit mitochondria-dependent apoptosis, improve mitochondrial function, and alleviate diabetic myocardial damage (Song et al., 2022). In addition, Han et al. found that acacetin can reduce ROS production and MDA levels, inhibit depolarization of Δψm, upregulate the expression and activity of SOD, Bcl-2, PGC-1α, p-AMPK, Sirt1, and Sirt3, and thereby exert a cardioprotective effect. Notably, when Sirt3 is knocked out, the cardioprotective effect of acacetin is abolished (Han et al., 2020). In conclusion, these studies show that acacetin can prevent mitochondrial dysfunction, reduce oxidative stress, and reduce the incidence of cardiovascular disease in diabetes.

Dihydromyricetin is a dihydroflavonol compound widely present in plants of the genus Ampelopsis, with pharmacological effects including free radical scavenging, antioxidant, and antifibrotic properties (Zhang et al., 2022). Wu et al.
found that treatment with dihydromyricetin (100 mg/kg/d) for 14 weeks improved mitochondrial function in STZ-induced diabetic C57BL/6J mice, increasing ATP content and the activity of electron transport chain (ETC) complexes I/II/III/IV/V, restoring Δψm, reducing oxidative stress, and improving mitochondrial energy metabolism. Furthermore, the study also indicated that dihydromyricetin can activate AMPK and increase the phosphorylation of unc-51-like autophagy activating kinase 1 (ULK1), enhancing autophagic function in diabetic mice and preventing the occurrence of cardiac dysfunction (Wu et al., 2017).

FIGURE The classification of plant secondary metabolites and their main targets in regulating diabetic cardiomyopathy. Some plant secondary metabolites, including flavonoids, polyphenols, terpenoids, alkaloids, and glycosides, have been found to alleviate the pathological changes of diabetic cardiomyopathy, acting mainly through targets related to mitochondrial function. Opa1, optic atrophy 1; UCP2, uncoupling protein 2; Nrf2, nuclear factor E2-related factor 2; HO-1, heme oxygenase-1; GCLC, glutamate-cysteine ligase catalytic subunit; MDA, malondialdehyde; NF-κB, nuclear factor kappa-B; ULK1, unc-51-like autophagy activating kinase 1; GSH, glutathione; GSH-Px, glutathione peroxidase; Nrf-1, nuclear respiratory factor-1; CK2α, casein kinase 2α; Stat3, signal transducer and activator of transcription 3; RyR2, ryanodine receptor 2.

Hua et al.
believe that dihydromyricetin may lower fasting blood glucose and glycated hemoglobin levels in diabetic mice, inhibit mitochondrial ROS production, upregulate SIRT3 and SOD2 protein expression, and increase mtDNA copy number, thereby suppressing oxidative stress and improving diabetic vascular endothelial dysfunction. These effects may be mediated through SIRT3-dependent pathways (Hua et al., 2020). These findings suggest that dihydromyricetin may have significant potential in regulating mitochondrial biogenesis, stimulating mitophagy, and combating oxidative stress.

Terpenoids

Terpenoids are polymers of isoprene and its derivatives and are very important secondary metabolites in plants. Terpenoids have shown anti-diabetic properties in in vivo and in vitro studies: they can increase insulin secretion in body tissues, promote the translocation of GLUT4 to increase glucose uptake, protect pancreatic cells, and improve the expression of inflammatory factors (Putta et al., 2016). Recent reports indicate that terpenoids may also ameliorate the development of diabetic cardiomyopathy by regulating mitochondrial function (Zhang et al., 2024).

Triptolide is a diterpenoid compound isolated from Tripterygium wilfordii Hook. f., possessing pharmacological effects such as anti-inflammatory, immunoregulatory, and anti-cancer properties (Gao et al., 2021). Liang et al.
treated STZ-induced SD diabetic rats with triptolide (100, 200, or 400 μg/kg/d) for 6 weeks and evaluated cardiac energy metabolism using 31P nuclear magnetic resonance spectroscopy. The results indicated that the optimal therapeutic effect was achieved at 200 μg/kg/d: triptolide enhanced cardiac energy metabolism by promoting mitochondrial ATP generation and upregulated the expression of p38 MAPK protein, improving cardiac function in diabetic cardiomyopathy rats through regulation of MAPK signaling (Liang et al., 2015). Additionally, Pan et al.'s research indicates that during cardiac remodeling, FoxP3 expression is downregulated in cardiomyocytes, leading to sustained activation of Parkin-mediated mitophagy. Triptolide, however, can regulate mitophagy by restoring FoxP3 activity in cardiomyocytes. Mechanistically, FoxP3 interacts with a sequence downstream of the activating transcription factor 4 (ATF4) binding site in the Parkin promoter and sequesters free nuclear ATF4, thereby reducing Parkin mRNA expression during cardiac remodeling. In conclusion, these studies suggest that triptolide may be an effective cardioprotective agent (Pan et al., 2022).

Celastrol is a terpenoid isolated from T. wilfordii, with various biological activities such as anti-rheumatic, anti-tumor, and antioxidant properties (Wang et al., 2020). Wu et al.
used network pharmacology to predict the key regulatory targets of celastrol in DCM and analyzed the associated biological processes and signaling pathways through animal experiments. The results showed that celastrol (50 μg/kg/d) treatment for 4 weeks downregulated the expression of p38 protein in the MAPK pathway and reversed the energy remodeling, mitochondrial dysfunction, and oxidative stress observed in STZ-induced SD diabetic rats, thereby delaying the deterioration of cardiac function and myocardial interstitial fibrosis. The study suggests that the MAPK signaling pathway may be an effective intervention target for DCM (Wu et al., 2022). In another study, celastrol was also shown to alleviate diabetes-induced cardiac damage, inhibit mitochondrial ROS production, and suppress the release of inflammatory factors. These results indicate that celastrol shows great potential as an effective cardioprotective drug for treating DCM (Zhao et al., 2023).

Astragaloside IV is one of the active ingredients extracted from Astragalus membranaceus (Fisch.) Bunge, with pharmacological effects such as anti-inflammatory, antioxidant, immunomodulatory, and anti-tumor activities (Gao et al., 2022). In an SD rat model of DCM induced by STZ, Zhang et al. confirmed that astragaloside IV (10, 20, and 40 mg/kg/d) treatment for 16 weeks can improve mitochondrial biogenesis by upregulating the expression of PGC-1α and Nrf-1 in myocardial tissue, as well as PGC-1α and Nrf-1 mRNA expression in H9C2 cells. This regulation raises ATP and ADP levels to improve mitochondrial energy metabolism and reduces the expression of cytochrome c (Cyt c) and caspase-3 to inhibit apoptosis and myocardial hypertrophy, thereby reducing diabetic myocardial damage (Zhang et al., 2019). Additionally, Zhu et al.
have shown that astragaloside IV can downregulate the expression of miR-34a and upregulate the expression of Bcl-2, Sirt1, and pAkt/Akt proteins to protect myocardial cells from high glucose-induced damage (Zhu et al., 2019). These findings suggest that astragaloside IV may exert a protective effect in DCM by promoting mitochondrial biogenesis and inducing mitophagy.

Polyphenols

Polyphenols are plant secondary metabolites named for their multiple phenolic groups. They are widely present in traditional herbal medicines and some natural foods. In the treatment of DCM, polyphenols demonstrate significant antioxidant activity, capable of scavenging free radicals and reducing oxidative stress, thus protecting mitochondria from oxidative damage (Raina et al., 2024).

Curcumin is a polyphenolic compound isolated from the root of Curcuma longa L., and extensive research has confirmed that it is a highly effective antioxidant (Zheng et al., 2018). In an SD rat model of diabetes established by HFD feeding and intraperitoneal injection of STZ, curcumin (200 mg/kg/d) treatment for 8 months promoted the translocation of Nrf2 to the nucleus through the AKT pathway, increased the expression of the antioxidant factors HO-1 and GCLC, reduced the accumulation of mitochondrial ROS, and mitigated mitochondrial oxidative damage. The study suggests that curcumin inhibits apoptosis by activating the AKT/Nrf2/ARE pathway and eliminates the accumulation of superoxide in myocardial cells (Wei et al., 2023). Additionally, Yao et al.'s research has shown that curcumin can upregulate the expression of AMPK and JNK1 to stimulate mitophagy and modulate the expression of Bcl-2 and Bim to reduce cardiomyocyte apoptosis. Further mechanistic studies have indicated that curcumin prevents DCM through cross-talk between mitophagy and apoptosis via the AMPK/mTORC1 pathway (Yao et al., 2018).
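Many of the animal studies cited above report doses in mg/kg/day. A common way to put such doses in a human context is the FDA body-surface-area conversion to a human-equivalent dose (HED). The sketch below is illustrative only and is not part of any cited study; the Km factors used (3 for mouse, 6 for rat, 37 for human) are the standard reference values.

```python
# Illustrative dose-conversion sketch (not from the cited studies).
# FDA body-surface-area method: HED = animal dose * (Km_animal / Km_human),
# with standard Km factors of 3 (mouse), 6 (rat), and 37 (human).

KM = {"mouse": 3.0, "rat": 6.0, "human": 37.0}

def human_equivalent_dose(dose_mg_per_kg: float, species: str) -> float:
    """Approximate human-equivalent dose (mg/kg) for an animal dose in mg/kg."""
    return dose_mg_per_kg * KM[species] / KM["human"]

# e.g., the 200 mg/kg/d curcumin dose used in SD rats above:
print(f"{human_equivalent_dose(200, 'rat'):.1f} mg/kg/day")  # ~32.4 mg/kg/day
```

Such conversions are only first approximations, but they help explain why gram-scale daily intakes are often discussed for compounds tested at hundreds of mg/kg in rodents.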
Paeonol is a polyphenolic compound isolated from the root bark of Paeonia suffruticosa Andr., and it is also the active ingredient of paeonol injection (a marketed antipyretic and analgesic drug). It has pharmacological effects such as anti-inflammatory, neuroprotective, and cardiovascular protective activities (Zhang et al., 2019). Research by Liu et al. found that intervention with paeonol (75, 150, or 300 mg/kg/d) for 12 weeks in STZ-induced diabetic rats promoted Opa1-mediated mitochondrial fusion, inhibited mitochondrial oxidative stress, and maintained mitochondrial respiratory capacity and cardiac function in DCM. Notably, knocking out Opa1 attenuated the protective effect of paeonol in diabetic hearts and high-glucose-treated cardiomyocytes. The study indicates that paeonol is a novel promoter of mitochondrial fusion, providing protection against DCM through the CK2α-Stat3-Opa1 signaling pathway (Liu et al., 2021). Additionally, in another study, Ding et al. demonstrated that paeonol can activate the transcription factor Stat3 to promote Mfn2-mediated mitochondrial fusion, not only reducing doxorubicin (Dox)-induced cardiotoxicity but also preserving Dox's anticancer activity (Ding et al., 2023). In conclusion, these findings suggest that paeonol may have significant value in preventing or treating diabetes and its complications by regulating mitochondrial dynamics.

Resveratrol is a polyphenolic compound widely present in plants, fruits, and vegetables, with biological activities such as antioxidant, anti-inflammatory, anticancer, and anti-aging effects (Galiniak et al., 2019). Fang et al.
found that resveratrol (50 mg/kg/d) treatment for 16 weeks significantly alleviated the cardiac dysfunction induced by HFD combined with STZ in SD rats. This was manifested by a significant increase in manganese SOD activity, ATP content, mitochondrial DNA copy number, Δψm, and nuclear respiratory factor (NRF) levels, together with a significant decrease in MDA and mitochondrial uncoupling protein UCP2 levels. The results indicate that resveratrol alleviates cardiac dysfunction in diabetic rats by improving mitochondrial function through SIRT1-mediated PGC-1α deacetylation (Fang et al., 2018). Similarly, in another study, Diao et al. demonstrated that resveratrol treatment improved mitochondrial function in diabetic rats and inhibited mitochondrial ROS generation, MPTP opening, and Cyt c release. It also suppressed the expression of UCP2 protein, thereby improving cardiac function in diabetic rats (Diao et al., 2019). In conclusion, resveratrol may have a positive impact on the prevention and treatment of diabetic cardiomyopathy by regulating mitochondrial uncoupling.

Salvianolic acid A is a polyphenolic compound isolated from Salvia miltiorrhiza Bunge that has been proven to have various biological activities, including antioxidant, anti-inflammatory, anti-fibrotic, and neuroprotective effects (Wang et al., 2019). The research by Gong et al.
indicates that salvianolic acid A (3 mg/kg/d) treatment for 6 weeks in STZ-induced diabetic SD rats significantly enhances complex I/II-related respiratory activity and mitochondrial respiratory function, improves the abnormal electrocardiogram of diabetic rats, and inhibits cardiomyocyte apoptosis by downregulating the expression of Bax, caspase-3, and caspase-9 and upregulating the expression of Bcl-2, thus exerting a protective effect on the heart (Gong et al., 2023). Furthermore, Wang et al.'s research indicates that salvianolic acid A can promote mitochondrial biogenesis in endothelial cells by regulating the expression of AMPK, PGC-1α, NRF1, and TFAM. Mechanistically, salvianolic acid A may activate the AMPK-mediated PGC-1α/TFAM signaling pathway, thereby ameliorating diabetic cardiovascular diseases caused by mitochondrial dysfunction (Wang et al., 2022). In conclusion, salvianolic acid A can prevent and treat diabetic cardiomyopathy by enhancing mitochondrial respiratory function and promoting mitochondrial biogenesis.

Alkaloids

Alkaloids are plant secondary metabolites composed of polycyclic aromatic frameworks containing one or more nitrogen atoms. Alkaloids have significant hypoglycemic effects, as they can stimulate glucose uptake and regulate insulin secretion, and some are considered allosteric activators of AMPK (Seksaria et al., 2023). It has been reported that alkaloids can affect mitochondrial membrane permeability, regulate mitochondrial calcium levels, and correct imbalances in mitochondrial dynamics, thus reducing the occurrence of mitochondrial dysfunction, which is beneficial in the treatment of DCM (Patalas et al., 2021).
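Several studies in this section quantify apoptosis through the Bcl-2/Bax ratio derived from Western blot densitometry. As a minimal illustration (with hypothetical band intensities, not data from any cited study), the ratio is typically computed as follows:

```python
# Minimal sketch with hypothetical densitometry values (not data from the
# studies above): band intensities are first normalized to a loading control
# (e.g., GAPDH), then the Bcl-2/Bax ratio is taken as an apoptosis index.

def normalized(band: float, loading_control: float) -> float:
    """Normalize a band intensity to its lane's loading control."""
    return band / loading_control

def bcl2_bax_ratio(bcl2: float, bax: float, gapdh: float) -> float:
    """Ratio of loading-control-normalized Bcl-2 to Bax intensities."""
    return normalized(bcl2, gapdh) / normalized(bax, gapdh)

# Hypothetical example: treatment raises Bcl-2 and lowers Bax vs. control.
control = bcl2_bax_ratio(bcl2=0.8, bax=1.2, gapdh=1.0)
treated = bcl2_bax_ratio(bcl2=1.5, bax=0.6, gapdh=1.0)
print(treated > control)  # a higher ratio is read as reduced apoptosis
```

Because both proteins are normalized to the same loading control within a lane, the control term cancels; it still matters when Bcl-2 and Bax come from separate blots.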
Berberine is an isoquinoline alkaloid that can be extracted and isolated from various plants such as Coptis chinensis Franch. and Phellodendron chinense Schneid. It has biological activities such as lowering blood glucose and regulating lipid metabolism (Pang et al., 2015). In a study by Hang et al., it was shown that in a high glucose-induced H9C2 cardiomyocyte hypertrophy model, berberine intervention for 24 h at a concentration of 100 nM can correct the imbalance of mitochondrial dynamics, promote mitochondrial biogenesis, and activate mitophagy to eliminate damaged mitochondria. These beneficial effects of berberine may be related to activation of the AMPK signaling pathway (Hang et al., 2018). Additionally, research by Chen et al. demonstrated that berberine can upregulate the Bcl-2/Bax ratio, reduce the expression of caspase-3 protein, and simultaneously activate the PI3K/Akt and AMPK signaling pathways to improve cardiac contractile and diastolic dysfunction after myocardial I/R in diabetic rats and to inhibit myocardial cell apoptosis (Chen et al., 2014). Therefore, berberine may treat DCM by targeting the AMPK and PI3K/Akt signaling pathways and activating mitochondrial biogenesis.

Matrine is an alkaloid extracted and isolated from the dried roots of Sophora flavescens Aiton. It exhibits a wide range of biological activities, such as antioxidant, anti-tumor, anti-inflammatory, anti-fibrotic, anti-arrhythmic, and immunomodulatory effects (Wang et al., 2023). In an AGEs-induced SD rat model, matrine (50, 100, and 200 mg/kg/d) treatment for 20 days was found to inhibit the dissociation of FKBP12.6 from RyR2, reduce RyR2 activity and Ca2+ levels, decrease the expression of cytochrome c and active caspase-3, suppress apoptosis, and restore Δψm (Wang et al., 2019). Additionally, studies by Liu et al.
suggest that matrine significantly reduces mitochondrial ROS production in primary cardiomyocytes of DCM rats and downregulates the expression of cleaved caspase-8 and cleaved caspase-3 proteins to inhibit cardiomyocyte apoptosis. Further research indicates that matrine improves diabetic cardiomyopathy by inhibiting the ROS/TLR-4 signaling pathway (Liu et al., 2015). In conclusion, matrine can effectively ameliorate diabetic cardiac dysfunction and may potentially be developed as a cardioprotective agent.

Glycosides

Glycosides are an important class of plant secondary metabolites, widely found in plants, fruits, vegetables, and nuts. They include saponin glycosides, flavonoid glycosides, alcoholic glycosides, phenolic glycosides, coumarin glycosides, and more. Glycosides hold great potential in the prevention and treatment of diabetes and its various vascular complications (Yeram et al., 2022).

Ginsenoside Rg1 is a saponin glycoside isolated from Panax ginseng C. A. Mey. with various biological activities, such as anti-inflammatory, antioxidant, anti-platelet aggregation, anticancer, hypoglycemic, and neuroprotective effects (Zhang et al., 2022). Qin et al.
showed that after 8 weeks of treatment in STZ-induced diabetic Wistar rats, ginsenoside Rg1 (20 mg/kg/d) can promote mitochondrial biogenesis by increasing the expression of PGC-1α, AMPK, Nrf2, and HO-1 proteins, and can reduce the expression of NF-κB and NLRP3 proteins to lessen oxidative stress. Furthermore, ginsenoside Rg1 has been found to play a cardioprotective role by mediating the mitochondria-related AMPK/Nrf2/HO-1 signaling pathway (Qin et al., 2019). In another study, ginsenoside Rg1 significantly reduced MDA and caspase-3 levels in the myocardium of diabetic rats while increasing levels of SOD, catalase, glutathione peroxidase (GSH-Px), and B-cell lymphoma-extra-large (Bcl-xL). This indicates that the therapeutic effect of ginsenoside Rg1 in diabetic rats is associated with inhibition of oxidative stress and alleviation of myocardial cell apoptosis (Yu et al., 2015). In conclusion, these findings suggest that ginsenoside Rg1 may have preventive and therapeutic potential against cardiovascular damage in diabetic patients by regulating mitochondrial biogenesis, inhibiting oxidative stress, and suppressing the mitochondria-dependent apoptotic pathway.

Salidroside is an alcohol glycoside isolated from Rhodiola rosea L. with a wide range of biological activities, including antioxidant, anti-tumor, antiviral, and hypoglycemic effects (Rong et al., 2020). The research by Li et al.
indicates that in a C57BLKS/J mouse model induced by HFD and STZ injection, treatment with salidroside (50 or 100 mg/kg/d) for 16 weeks can improve insulin resistance, repair mitochondrial ultrastructural damage, and restore normal cardiac contractile function in diabetic mice. Further mechanistic studies have shown that salidroside upregulates the expression of SIRT3 protein, promotes the translocation of SIRT3 from the cytoplasm to the mitochondria, increases the deacetylation of the mitochondrial protein MnSOD, and upregulates the expression of AMPK, PGC-1α, and TFAM to induce mitochondrial biogenesis (Li et al., 2021). Additionally, salidroside can improve DCM by activating the Akt signaling pathway and upregulating the expression of Nrf2 and the antioxidant factor HO-1 (Ni et al., 2021). In conclusion, salidroside may play an important role in diabetes and its cardiovascular complications by promoting mitochondrial biogenesis and exerting antioxidant effects.

Astragalus polysaccharides are water-soluble polysaccharides extracted from the dried roots of Astragalus membranaceus (Fisch.) Bunge, with antioxidant, anti-inflammatory, anti-diabetic, immunoregulatory, and other biological activities (Dong et al., 2023). Studies by Sun et al. have shown that astragalus polysaccharides (0.1-3.2 mg/mL, 24 h) can inhibit HG-induced H9C2 cell apoptosis by upregulating the expression of Bcl-2, downregulating the expression of Bax, and increasing the Bcl-2/Bax ratio in the mitochondria (Sun et al., 2017). In another study, Chen et al. demonstrated that astragalus polysaccharides can protect the ultrastructure of cellular mitochondria, reduce apoptosis, and increase SOD activity, thereby reducing HG-induced oxidative stress in H9C2 cells (Chen et al., 2018). In conclusion, these results demonstrate that astragalus polysaccharides can prevent and treat DCM through the mitochondria-mediated apoptotic pathway.
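In vitro concentration ranges such as the 0.1-3.2 mg/mL used for astragalus polysaccharides are often summarized with a Hill-type concentration-response model. The sketch below is purely illustrative; the EC50 and Hill coefficient are hypothetical placeholders, not values fitted to the cited data.

```python
# Illustrative Hill (log-logistic) concentration-response sketch. The EC50
# (0.8 mg/mL) and Hill coefficient (1.5) are hypothetical placeholders,
# not values fitted to the astragalus polysaccharide data cited above.

def hill_response(conc: float, ec50: float, n: float, top: float = 100.0) -> float:
    """Percent of maximal effect predicted at a given concentration."""
    return top * conc**n / (ec50**n + conc**n)

for c in (0.1, 0.4, 1.6, 3.2):  # concentrations spanning the tested range (mg/mL)
    print(f"{c:>4} mg/mL -> {hill_response(c, ec50=0.8, n=1.5):5.1f}% of max effect")
```

By construction, the response equals 50% of the maximum exactly at the EC50, which is why fitting this curve to dose-response data is the usual way potency is reported.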
Sugarcane leaf polysaccharide is an amorphous polysaccharide isolated from the leaves of Saccharum sinensis Roxb. with various biological activities, such as antioxidant, hypoglycemic, lipid-lowering, antibacterial, and immunoregulatory effects (Tang et al., 2019). Studies by Sun et al. have shown that sugarcane leaf polysaccharide (10 and 20 mg/kg/d) can effectively reverse myocardial ischemia-reperfusion injury in diabetic rats, prevent myocardial fibrosis and neutrophil infiltration, increase myocardial SOD activity, reduce MDA levels and MPO activity, and significantly inhibit the expression of TNF-α and IL-6. In vitro, it activates the Nrf2/HO-1 signaling pathway to promote the translocation of Nrf2 from the cytoplasm to the nucleus, upregulates the expression of Nrf2, HO-1, and NQO-1 proteins, reduces ROS production, and restores Δψm, thereby influencing myocardial mitochondrial biogenesis (Sun et al., 2023). In addition, Hao et al. suggested that sugarcane leaf polysaccharides can promote the expression of vascular endothelial growth factor (VEGF), enhance SOD activity, reduce the levels of MDA and NO, increase GSH-Px activity, and strengthen the antioxidant capacity of NOD mice, facilitating the clearance of oxidative free radicals and thereby improving the oxidative stress status of pancreatic β-cells (Hao et al., 2018). These results indicate that sugarcane leaf polysaccharide may prevent DCM by targeting mitochondrial biogenesis and enhancing antioxidant capacity.

Astragali radix is among the most widely used medicinal plants in China, Iran, Russia, and some other European countries. Astragali radix was first recorded in Shennong's Classic of Materia Medica and is listed as a qi-supplementing medicinal. In the theoretical system of traditional Chinese medicine (TCM), astragali radix is sweet in flavor, warm in nature, and acts on the lung and spleen. It has the traditional effects of invigorating qi, consolidating the superficies to arrest sweating, and inducing diuresis to remove edema (Liu et al.,
2023). Currently, a variety of secondary metabolites have been isolated from the dried roots of astragali radix, including polysaccharides, flavonoids, saponins, amino acids, and trace elements. Among them, the polysaccharides, flavonoids, and saponins have biological activities such as hypoglycemic, antioxidant, and immunoregulatory effects, and are important contributors to the pharmacological actions of astragali radix. In addition, pharmacological studies have shown that the biological activities of astragali radix, such as its antioxidant, hypoglycemic, immunoregulatory, and anti-inflammatory effects, are widely exploited in the treatment of respiratory, digestive, urinary, and blood system diseases, as well as diabetes and its complications, with good therapeutic results (Chen et al., 2020). Previous reports have indicated that astragaloside IV can significantly delay the excessive generation of mitochondrial ROS. It can also exert a protective effect on diabetic cardiomyopathy by upregulating the activities of the antioxidant enzymes SOD2, catalase, and GSH-Px and downregulating the expression of c-Jun N-terminal kinase and p38 MAPK (Chen et al., 2018). Astragalus polysaccharides can prevent the occurrence of diabetic cardiomyopathy through the mitochondria-mediated apoptosis pathway (Sun et al., 2017).

Kudzu root (Pueraria lobata (Willd.) Ohwi) belongs to the Leguminosae and is a plant used both medicinally and as food. It is mainly distributed in southern China and Southeast Asia. Kudzu root is sweet in flavor, mild in nature, and acts on the spleen, stomach, and lung. It has traditional effects such as expelling pathogenic factors from the muscles to clear heat, relieving muscle rigidity, and activating the collaterals. In clinical applications of TCM, kudzu root is often used to treat diabetes, cardiovascular and cerebrovascular diseases, tumors, and other ailments. Kudzu root has various pharmacological effects, such as antioxidant, hypoglycemic, anti-inflammatory,
anti-tumor, blood pressure-lowering, lipid-lowering, cardioprotective, and memory-improving effects (Wang et al., 2022). The extract of kudzu root contains abundant flavonoids such as puerarin, daidzein, tectoridin, and luteolin-6-C-glucoside, which have free radical scavenging and antioxidant activities. Its mechanism of action may be related to the regulation of oxidative stress-related factors such as COX-2, SOD, MDA, ET-1, NO, and GSH (Gao et al., 2016; Dong et al., 2024). In DCM, puerarin has been found to modulate mitochondrial function. Studies have shown that puerarin can regulate mitochondrial energy metabolism, reduce oxidative stress and apoptosis of myocardial cells, and improve the symptoms of DCM (Sun et al., 2022). In general, kudzu root is a medicinal plant with wide-ranging biological activities that can improve diabetic heart function by modulating mitochondrial function and other pathways.

Carthami flos (Asteraceae), the dried floret of C. tinctorius L., comes from a perennial herbaceous plant with rich medicinal value, mainly distributed in Iran, North Korea, China, Mongolia, Russia, and other regions. In clinical applications of TCM, carthami flos is pungent in flavor, warm in nature, and acts on the heart and liver. It is traditionally believed to promote blood circulation to remove blood stasis and regulate qi-flow to relieve pain, and is mainly used for angina, irregular menstruation, diabetes, and hypertension (Tu et al., 2015). Carthami flos contains compounds such as flavonoids, volatile oils, and polysaccharides, which have antioxidant and anti-inflammatory activities. Hydroxysafflor yellow A, isolated from carthami flos, is considered a potential antioxidant that provides protection against myocardial damage. Hydroxysafflor yellow A can increase the levels of SOD and GPx1 in the serum of DCM mice, reduce MDA content, scavenge free radicals, and reduce oxidative stress damage to cardiac mitochondria (Yao et al., 2021). In
addition, essential oils of carthami flos extracted with different solvents were analyzed by gas chromatography-mass spectrometry (GC-MS). The essential oil content was 97.65% in the n-hexane extract, 98.05% in the petroleum ether extract, 98.93% in the dichloromethane extract, and 99.68% in the steam-distilled extract. In vitro pharmacological studies have shown that the n-hexane extract of carthami flos has the best anti-diabetic activity against protein tyrosine phosphatase 1B (PTP1B), demonstrating potential for the treatment of diabetes and obesity (Li et al., 2012).

Gynostemma pentaphyllum (Thunb.) Makino is a member of the Cucurbitaceae, mainly distributed in India, Nepal, Bangladesh, China, Myanmar, Laos, Vietnam, Malaysia, and other regions. Gynostemma pentaphyllum is spicy and slightly bitter in flavor, warm in nature, and acts on the lung, spleen, and stomach. In TCM theory, it has the traditional effects of warming the spleen and stomach to dispel cold, ventilating lung qi to dissipate phlegm, and regulating qi-flow to harmonize the stomach. Gynostemma pentaphyllum is a commonly used plant in TCM for treating diabetes. It has various pharmacological effects, such as antioxidant, hypoglycemic, anti-inflammatory, antibacterial, antiallergic, and antitumor properties (Nguyen et al., 2021). Active ingredients in gynostemma pentaphyllum, such as saponins and polysaccharides, have a definite hypoglycemic effect, significantly reducing the insulin resistance index and improving diabetes and its complications. Gypenosides can lower fasting blood glucose and blood lipids in mice with type 2 diabetes induced by HFD and STZ and significantly improve glucose tolerance and insulin resistance. Their hypoglycemic effect may be related to the downregulation of key proteins in the AMPK signaling pathway, including phosphoinositide 3-kinase and glucose-6-phosphatase (Song et al., 2022). Similarly, in another study, the extract of gynostemma pentaphyllum significantly reduced the
levels of MDA, hydrogen peroxide, peroxynitrite, and ROS in DCM rats while increasing the levels of GSH, SOD, CAT, and GPx. It also significantly reduced the expression of cytokines and inflammatory parameters (TNF-α, IL-6, IL-1β, COX-2, NLRP3, NF-κB). Furthermore, the extract of gynostemma pentaphyllum promoted mitochondrial biogenesis in cardiac tissue by enhancing the expression of PGC-1, HO-1, and Nrf2. These results indicate that gynostemma pentaphyllum has a cardioprotective effect on STZ-induced diabetic cardiac dysfunction by regulating the AMPK/Nrf2/HO-1 pathway (Chen et al., 2022).

Conclusion

Diabetic cardiomyopathy is manifested as abnormal cardiac structure and function in individuals with diabetes in the absence of ischaemic or hypertensive heart disease. However, its pathogenesis remains unclear. Mitochondrial dysfunction is an important pathological mechanism driving the development of the disease, and targeted regulation of mitochondrial function can effectively improve the symptoms of DCM. Targeting mitochondria with plant secondary metabolites may therefore be an effective approach for preventing and treating DCM. This review provides evidence supporting mitochondrial dysfunction in DCM, briefly describes the pathophysiological mechanisms leading to mitochondrial dysfunction, and discusses potential targets and treatment strategies.
Currently, extensive screening research on anti-diabetic drugs has identified plants as a major potential source for drug discovery. Biologically active secondary metabolites in plants, such as flavonoids, terpenoids, polyphenols, alkaloids, and glycosides, have been proven to have hypoglycemic effects in vivo and in vitro. Previous reports indicate that plant secondary metabolites improve hyperglycemia and insulin resistance mainly by regulating lipid and protein metabolism pathways, insulin signaling pathways, anti-inflammatory responses, and antioxidant stress responses. The key regulatory targets involved include α-glucosidase, α-amylase, dipeptidyl peptidase 4 (DPP-4), protein tyrosine phosphatase 1B (PTP1B), PPARα, GLUT4, and the AMPK signaling pathway (Shehadeh et al., 2021; Sukhikh et al., 2023).

In this review, we have gathered plant secondary metabolites that affect mitochondrial function in the treatment of DCM. Some plant secondary metabolites have biological activities such as hypoglycemic, antioxidant, and anti-inflammatory effects, which can protect myocardial cells by improving mitochondrial dysfunction.
For example, apigenin can promote mitochondrial biogenesis and induce mitophagy to maintain normal homeostasis of myocardial mitochondrial quality and quantity; triptolide can improve insulin resistance and regulate mitochondrial energy metabolism; paeonol can promote mitochondrial fusion and inhibit mitochondrial oxidative stress; resveratrol can regulate the opening of mitochondrial inner membrane channels and the process of mitochondrial uncoupling, helping to reduce oxidative stress inside mitochondria; matrine can regulate the opening of mitochondrial calcium channels, affecting the balance of calcium ions inside mitochondria; and sugarcane leaf polysaccharide can improve insulin resistance and promote mitochondrial biogenesis. However, some challenges remain in the use of plant secondary metabolites for the treatment of DCM. For example, the pharmacological effects and appropriate dosages of plant secondary metabolites are not fully understood, and a more detailed quality evaluation system is needed to verify their efficacy and safety. It is also important to improve the bioavailability and stability of plant secondary metabolites, which can be achieved through nanocarrier delivery technology, chemical modification, and other biotechnological methods. In conclusion, plant secondary metabolites targeting mitochondria are expected to become an important drug resource for the treatment of DCM, and more clinical trials are needed in the future to elucidate their mechanisms of action.

TABLE 1 Intervention measures and targets addressing mitochondrial dysfunction in the progression of diabetic cardiomyopathy.

TABLE 2 Plant secondary metabolites targeting mitochondrial dysfunction in diabetic cardiomyopathy models.
v3-fos-license
2017-09-18T10:01:40.619Z
2016-09-01T00:00:00.000
1005279
{ "extfieldsofstudy": [ "Engineering", "Medicine", "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.3390/s16091425", "pdf_hash": "f44c10db09370a58b4c3ad84f0c9dd120bbd9708", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1944", "s2fieldsofstudy": [ "Physics" ], "sha1": "f44c10db09370a58b4c3ad84f0c9dd120bbd9708", "year": 2016 }
pes2o/s2orc
Sparse Reconstruction for Temperature Distribution Using DTS Fiber Optic Sensors with Applications in Electrical Generator Stator Monitoring This paper presents an image reconstruction method to monitor the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings of fiber optic distributed sensors (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, which was constructed by finite element simulation with a multi-physical model. Due to difficulties for reproducing insulation faults in real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times when compared to the DTS spatial resolution. In addition, satisfactory results were also obtained to detect hotspots with only 5 cm. The application of the proposed algorithm for thermal imaging of generator stators can contribute to the identification of insulation faults in early stages, thereby avoiding catastrophic damage to the structure. Introduction Stator temperature is one of the most influential parameters in the degradation of hydroelectric generators [1]. High temperatures, above 100 • C, can accelerate the wear of the insulation layer of the windings, leading to premature failure and compromising the integrity of the generator [1,2]. Figure 1 shows some examples of faults that can occur due to insulation wear of the stator. Figure 1a presents a failure caused by defects in insulation between the core and the bars, which can cause partial discharges by corona effect due to potential difference between core and bars [3]. 
Figure 1b shows a failure caused by defects in the insulation between bottom and top bars, which can also cause partial discharges due to the phase difference between the bars, increasing the risk of a short circuit in the stator [4,5]. In both examples, the faults cause areas of high temperature near the defect location. Thus, these faults can be identified as hotspots in certain parts of the structure. However, if the defect is not identified and repaired in the early stages, the hotspots can propagate over the structure, affecting the life expectancy of the insulation layer and leading, in extreme cases, to a catastrophic failure of the generator [3-5].

Usually, stator temperature measurements are performed with conventional localized sensors such as PT100 (Platinum Thermo-resistance) or RTD (Resistance Temperature Detector) probes. These sensors are suitable for monitoring temperature changes during standard operation (no failure), given that the temperature distribution is considered to be rather uniform [6,7]. On the other hand, they are not able to identify hotspots that may arise in the stator coils as a result of insulation breakdown. Generally, the stator is instrumented with a few tens of localized sensors, which are not sufficient to monitor the entire structure containing hundreds of bars. Furthermore, the use of conventional electronic transducers in generators presents additional drawbacks because of their sensitivity to electromagnetic interference [6,7]. These factors motivate the use of other sensing technologies aimed at monitoring the temperature distribution over the stator structure [8]. Recent research has shown that distributed optical sensor technology (DTS) has great potential for applications related to monitoring of generator stator temperature [6-10]. DTS systems measure temperature by means of optical fibers; these optoelectronic devices provide a continuous profile of the temperature distribution along the fiber cable [11].
Thus, the complete stator instrumentation can be carried out using only one optical fiber as a sensor, which is also immune to the electromagnetic interference of the hostile environment inside the generator [11,12]. The main DTS technologies are those based on Raman and Brillouin scattering. Raman DTS systems have become popular in practical applications due to their low cost and high stability when compared to equipment based on Brillouin scattering [12]. Typically, commercial Raman DTS equipment is able to provide a temperature profile along an optical fiber of over 30 km in length, with an accuracy of 0.1 °C and a spatial resolution of 1 m [13]. The spatial resolution is defined as the spatial distance between the 10% and 90% levels of the response to a temperature step. In general, for a temperature profile described by a step pulse with a length smaller than the spatial resolution, the measured temperature is lower than the real temperature by the ratio of the temperature step length to the spatial resolution [12,13]. This parameter can be a disadvantage of Raman DTS, and sometimes limits its use in applications where the thermal variations occur in regions with dimensions of less than 1 m. In the case of stator temperature monitoring, hotspots with dimensions on the order of centimeters are either undetected by the Raman DTS or measured incorrectly, compromising the identification of some insulation faults at an early stage. In recent years, several studies on improving the spatial resolution of Raman DTS systems have been presented in the literature. From the hardware viewpoint, methods based on a more efficient use of optoelectronic devices have shown significant results regarding the DTS spatial resolution (about 10 cm) [14-17]. However, such techniques often entail higher costs, besides other complications that prevent their use in commercial equipment, such as increases in response time and in measurement uncertainty.
Therefore, another alternative that has been investigated is the use of signal processing techniques [18-21]. These techniques have shown enhancements in DTS performance without increasing equipment costs, as they do not require physical changes to the device. Recently, the method proposed by Bazzo et al. [21], based on Total Variation deconvolution, presented great potential with regard to spatial resolution. The results showed that it is possible to measure accurately temperature variations in lengths as short as 15 cm, and to obtain significant improvements for lengths down to 5 cm.

This work proposes an image reconstruction scheme for improving the response of the thermal imaging system for generator stators using Raman DTS. The thermal images are generated by a reconstruction algorithm based on a DTS acquisition model and sparse representation theory. In this representation method, using a dictionary that contains prototype signal atoms, images are described by sparse linear combinations of these atoms. Lately, sparse representations have been successfully applied in many areas of image processing, such as denoising, inpainting and super-resolution [22,23]. To monitor the temperature distribution of the stator, we employ a sparse representation because the system can be considered a large structure at a uniform temperature, with occasional hotspots spread out in case of insulation failure (see Figure 1). The DTS readings can be viewed as a degraded observation (blurred and downsampled) of the temperature distribution on the stator surface. This distribution is assumed to have a sparse representation with respect to a dictionary of hotspots, which is built based on the physical properties of the structure [23]. The principle of the image reconstruction ensures that, under mild conditions, the sparse representation can be correctly recovered from the degraded observation (sensor readings) [23].
Due to the difficulty of reproducing insulation faults in a real stator structure, the experimental tests were performed using a prototype to generate the hotspots in a similar structure. The proposed technique permits a more precise monitoring of the stator temperature distribution, facilitating the identification of insulation faults at an early stage and preventing further damage to the generator. This paper is organized into seven sections: Section 2 presents an overview of a thermal imaging system for generator stators, with details of the real stator structure and of the stator prototype used in the experimental tests. The details of the DTS acquisition model are presented in Section 3. The dictionary of hotspots used to generate thermal images is presented in Section 4. The details of the image reconstruction algorithm are presented in Section 5, and the results are shown in Section 6. Finally, Section 7 presents the main conclusions on the results obtained.

Overview of Thermal Imaging System

In a previous work [6], Bazzo et al. presented a thermal imaging system for stators using DTS that was tested in a 200 MW hydroelectric generator. The main details of the structure and of the DTS installation on the stator surface are shown in Figure 2. As can be seen, the structure is basically composed of stacked 5 cm high bars with air gaps of 1 cm through which the cooling air flow generated by the rotor circulates. The winding bars are installed in vertical slots spaced by approximately 10 cm. A distributed sensor based on fiber optics (DTS) was positioned on each slot that accommodates the winding bars, as the bars are the main heat source of the structure [24,25]. Although this system has shown satisfactory results, the limitations of the DTS spatial resolution impede the identification of hotspots with dimensions of less than 1 m. A similar work presented by Hudson et al. [7] also reports the need for DTS equipment with a spatial resolution of about 10 cm for a more accurate thermal mapping of a generator stator. This motivates the development of an image reconstruction algorithm to improve the system response and to enable the identification of hotspots with dimensions on the order of centimeters.

To evaluate the performance of the proposed method, we developed a prototype with a structure similar to that of the stator surface, as shown in Figure 3. The tests on the prototype were necessary since it is not possible to generate insulation faults in the generator stator. Moreover, it is noteworthy that the generator was in perfect condition and in full operation at the power plant. The stator prototype was assembled with 35 aluminum plates with dimensions 200 × 5 × 1.5 cm, stacked with air gaps of 1 cm, similar to the stator core plates (Figure 2). Each plate has holes spaced at 10 cm, similarly to the stator slots, and the stator bars were represented by resistances that can be embedded into the holes. Resistances with different lengths were used, driven by a Proportional Integral Derivative (PID) controller. Thus, it was possible to simulate hotspots with dimensions from 5 to 209 cm in a structure similar to the stator surface. The optical fiber used as a distributed sensor was installed in the same way as in the real stator structure. As the fiber must be positioned on the main heat sources of the structure, which must be known beforehand, the maximum lateral displacement between the fiber loops should be 10 cm, which is a critical system parameter to ensure the resolution of the thermal images. Although this structure is simple compared with the real stator structure, it reproduces the contact surface and the position where the sensor was installed in the real generator stator.
Tests in the laboratory also allowed the use of a Fluke® Ti25 thermal camera (Fluke Corporation, Everett, WA, USA) as a comparison reference for the thermal images generated by the image reconstruction algorithm. In the real stator structure this would not be possible, as there is not enough space to install a camera after the rotor engagement. The proposed image reconstruction algorithm was based on a DTS acquisition model and on a dictionary of hotspots to generate thermal images. Section 3 presents more details about the sensor model development.

Distributed Temperature Sensing (DTS) Acquisition Model

In a previous work [21], we showed that a linear model is suitable to represent the DTS response if one aims to reconstruct hot steps. In this work, we employ the same acquisition model, which was obtained by linear system identification techniques [26]. The input f(z) is the real temperature profile and the output g(z) is the DTS temperature readings.
As we are considering the steady state, i.e., no time variations, the only independent variable is z, which represents the distance (in cm) along the optical fiber sensor. The response g(z) is obtained by the convolution of the DTS impulse response h(z) with the input f(z), as shown in Equation (1) [26]:

g(z) = h(z) ∗ f(z) = ∫ h(z − τ) f(τ) dτ,   (1)

and by applying the Laplace transform we obtain Equation (2):

G(s) = H(s) F(s).   (2)

The system identification consists in estimating the poles and zeros of a transfer function H(s), as shown in Equation (3) [26]:

H(s) = [(s − β₁)(s − β₂) ⋯ (s − β_q)] / [(s − α₁)(s − α₂) ⋯ (s − α_r)],   (3)

where β_i are the zeros and α_i are the poles. The DTS equipment used in this work was an AP Sensing® N4385B (AP Sensing GmbH, Böblingen, Germany) model. This model features a spatial resolution of 1 m, an acquisition time of 30 s, a sample interval down to 15 cm, and a temperature resolution of 0.04 °C for fibers of up to 2 km. To evaluate the equipment response, an experimental test was carried out in a LAUDA® ECO RE415G thermal bath with the temperature stabilized at 50 °C, providing hotspots of different lengths. The ambient temperature was 21.7 °C. The input f(z) and the output g(z) were obtained for hotspots at 50 °C with lengths from 5 cm to 4 m in intervals of 5 cm, as shown in Figure 4. As can be seen, for hotspots of 5 cm the measured temperature was only 24.7 °C.
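This attenuation of sub-resolution hotspots can be illustrated with a simple low-pass forward model. The sketch below is illustrative only: it uses a generic Gaussian impulse response whose 10-90% step response equals the 1 m spatial resolution, not the identified transfer function of the N4385B. Convolving a 5 cm, 50 °C hotspot with such a kernel shows the peak reading staying far below the true temperature:

```python
import numpy as np

def dts_reading(true_profile, dz_cm=1.0, resolution_cm=100.0):
    """Convolve a temperature profile with a Gaussian impulse response
    whose 10%-90% step-response width equals the DTS spatial resolution."""
    # For a Gaussian kernel, the 10-90% rise of the step response
    # spans about 2.563 standard deviations.
    sigma = resolution_cm / 2.563
    z = np.arange(-5 * sigma, 5 * sigma + dz_cm, dz_cm)
    h = np.exp(-0.5 * (z / sigma) ** 2)
    h /= h.sum()  # unit gain: a uniform temperature is reproduced exactly
    return np.convolve(true_profile, h, mode="same")

ambient, hot = 21.7, 50.0
f = np.full(2000, ambient)   # 20 m of fiber, 1 cm samples
f[1000:1005] = hot           # 5 cm hotspot at 50 °C
g = dts_reading(f)
print(round(float(g.max()), 1))  # interior peak far below 50 °C
```

The exact peak value depends on the kernel shape; the identified model in the paper yields 24.7 °C for this case, while this Gaussian stand-in gives a similarly strong under-read.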
From 1 m and above, the temperature is measured correctly (50 °C), confirming the spatial resolution specification of the DTS equipment.

We employed the prediction error minimization (PEM) approach to estimate the transfer function coefficients. In this case, starting from initial estimates, the parameters are updated using a nonlinear least-squares search method, where the objective is to minimize the weighted prediction error norm [26]. As a result, we obtained a transfer function with nine poles and four zeros (determined empirically) and 98% accuracy. The comparison between the experimental data and the model simulation is shown in Figure 5. Taking the inverse Laplace transform of the transfer function, we get the impulse response of the system h(z), presented in Figure 6. The DTS impulse response h(z) is used to assemble a sensitivity matrix H, which represents the DTS acquisition model.
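One common way to assemble such a sensitivity matrix from a sampled impulse response is as a Toeplitz (convolution) matrix, so that the product Hf reproduces the discrete convolution of Equation (1). The sketch below works under that assumption, with a synthetic exponential impulse response standing in for the identified nine-pole/four-zero model:

```python
import numpy as np
from scipy.linalg import toeplitz

def sensitivity_matrix(h, n):
    """Build the convolution matrix H such that (H @ f) equals the
    discrete convolution of f with the impulse response samples h."""
    first_col = np.r_[h, np.zeros(n - 1)]
    first_row = np.r_[h[0], np.zeros(n - 1)]
    return toeplitz(first_col, first_row)  # shape (len(h) + n - 1, n)

# Synthetic causal impulse response (a stand-in for the identified model)
z = np.arange(0, 200)        # cm
h = np.exp(-z / 40.0)
h /= h.sum()                 # unit DC gain

n = 400
H = sensitivity_matrix(h, n)
f = np.zeros(n)
f[150:250] = 1.0             # a 1 m hot step
g = H @ f                    # simulated DTS readings
assert np.allclose(g, np.convolve(h, f))
```

Because each row of H is a shifted copy of the sampled h(z), the matrix-vector product is exactly the discrete counterpart of the continuous convolution model.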
The matrix-vector notation of H is presented in Equation (4):

H_{ij} = h(z_i − z_j),   (4)

i.e., H is a convolution (Toeplitz) matrix assembled from shifted samples of the impulse response h(z).

Figure 6. Experimental results vs. DTS acquisition model.

Although the matrix H has proven suitable for representing the sensor acquisition, the DTS model contains errors and is itself a source of noise, as can be seen in Figure 5. To develop an efficient reconstruction algorithm, a statistical analysis of the noise is fundamental to set the norm used in the data term of the cost function [22,27]. Thus, an analysis of the residuals was performed using the histogram of g − Hf, where g is a vector formed by the sensor data (DTS readings) and f is a vector representing the temperature profile. Figure 7 summarizes the analysis results. Although the histogram presents a slight skew toward high residual values, thereby indicating large model errors, this misfit is relatively rare (see bar heights). This behavior is expected when adopting linear models (for tractability purposes) where the underlying physics is potentially nonlinear. To accommodate this inaccuracy, we performed the following statistical analysis, similarly to [28]. Assuming a generalized Gaussian distribution, we obtained a shape parameter p ≈ 1 using the method described in [27], indicating that the residuals have a Laplacian distribution. This information will be further exploited in Section 5.
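A generalized-Gaussian shape parameter can be estimated, for example, by moment matching: the ratio E[|x|]² / E[x²] equals Γ(2/p)² / (Γ(1/p)Γ(3/p)) and is monotonic in p, so it can be inverted by bisection. The sketch below is one such estimator (not necessarily the exact method of [27]); applied to Laplacian-distributed residuals it returns p ≈ 1, and for Gaussian residuals p ≈ 2:

```python
import numpy as np
from math import gamma

def ggd_shape(residuals, lo=0.2, hi=5.0, iters=60):
    """Estimate the generalized-Gaussian shape parameter p by inverting
    the moment ratio E[|x|]^2 / E[x^2] = G(2/p)^2 / (G(1/p) G(3/p))."""
    x = np.asarray(residuals, dtype=float)
    x = x - x.mean()
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    ratio = lambda p: gamma(2 / p) ** 2 / (gamma(1 / p) * gamma(3 / p))
    for _ in range(iters):       # bisection: ratio(p) increases with p
        mid = 0.5 * (lo + hi)
        if ratio(mid) < rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
p_hat = ggd_shape(rng.laplace(size=20000))  # Laplacian residuals -> p near 1
print(round(p_hat, 2))
```

A value of p ≈ 1 then justifies an L1 data-fidelity term in the reconstruction cost, as discussed in Section 5.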
Dictionary of Hotspots

We assume that the thermal system of the generator stator can be modeled through a sparse representation. The reason is that it can be considered a large structure with a uniform temperature distribution, where occasional hotspots may arise in case of insulation failure (Figure 1). Using a dictionary matrix D ∈ R^{m×n} that contains n prototype image atoms in its columns {d_j}, j = 1, …, n, a thermal image f ∈ R^m can be represented by sparse linear combinations of the dictionary atoms, as shown in Equation (6) [29]:

f = Dα = Σ_{j=1}^{n} α_j d_j,   (6)

where the vector α ∈ R^n contains the coefficients. Each atom is the thermal image of the whole structure when only one possible heat source is active at a time. In this representation α is sparse, i.e., it is assumed to contain mostly zeros. The representation of f may either be exact, f = Dα, or approximate, f ≈ Dα, satisfying Equation (7) [29]:

‖f − Dα‖_p ≤ ε,   (7)

where ε is the minimum residual error desired and p is the norm used for measuring the deviation. Figure 8 shows an example of the sparse representation of an imaging system using a dictionary. In this example, a combination of three atoms of the dictionary D was used to form the image f [29,30].

We built a dictionary of hotspots through simulation using the COMSOL® multiphysics tool (Comsol, Stockholm, Sweden). In the simulations, we considered the physical properties of the materials, the geometry and the environment boundary conditions. Each atom was generated considering the position of the resistances in each plate. As the prototype has 35 plates of 5 cm height and 19 positions for resistances, considering 1 atom for each 1 cm, we generated 3325 atoms (35 × 5 × 19) to model the system. Although this 1 cm representation results in a large number of atoms, it was necessary to reconstruct hotspots with more accurate dimensions, also preventing alignment problems in the sensor installation. Since the total size of the image is 209 × 200 cm, the vector length of each atom is 41,800, forming a dictionary matrix D of 41,800 × 3325. The columns of the dictionary matrix, or atoms, were formed by the temperature distribution values generated by a power of 1 W/cm³ applied to a resistance of 5 cm in each position along the plates, considering the steady state.
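As a toy illustration of such a dictionary (one-dimensional, with an assumed exponential spatial decay rather than the COMSOL-generated atoms), the sketch below builds atoms for a set of hypothetical heat-source positions and forms a thermal profile from two sparse coefficients:

```python
import numpy as np

m = 200                          # 1-D "image": temperature at 1 cm samples
positions = np.arange(0, m, 10)  # hypothetical heat-source positions
x = np.arange(m)

# Each atom: temperature rise of a unit-power source at one position,
# with an assumed exponential spatial decay (illustrative, not the FEM model).
D = np.stack([np.exp(-np.abs(x - p) / 45.0) for p in positions], axis=1)

alpha = np.zeros(D.shape[1])
alpha[3], alpha[12] = 2.0, 1.0   # only two active sources: alpha is sparse
ambient = 26.0
f = ambient + D @ alpha          # sparse combination of atoms, as in Eq. (6)
print(int(f.argmax()))           # hottest sample sits at the strongest source
```

Estimating the sparse vector alpha from measurements then amounts to locating the active sources and their powers, which is exactly what the reconstruction algorithm of Section 5 does.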
We set the resistance of 5 cm as a minimum condition, because of the plate dimensions and difficulty in using tubular resistances with lower length. However, the temperature data is sampled every 1 cm for forming atoms, as explained in the previous paragraph. Thus, the vector α represents the values of the thermal power that generate the hotspots in the structure. Therefore, by estimating α, we obtain the location and the amount of thermal power that causes each hotspot. Figure 9a shows details of the mesh geometry used in the simulations, and Figure 9b shows the temperature distribution for total power of 1 W/cm 3 at 26 °C ambient temperature, in steady state. The results show a variation of 2.62 °C with decreasing temperature of / , where is the distance from the heat source. This decreasing function is a parametric fit of the numerical simulations, which was used to ease the construction of the dictionary. The application of the dictionary D in the image reconstruction algorithm is discussed in more detail in Section 5. We built a dictionary of hotspots through simulation using the COMSOL ® (Comsol, Stockholm, Sweden) multiphysics tool. In the simulations, we considered the physical properties of the materials, geometry and environment boundary conditions. Each atom was generated considering the position of the resistances in each plate. As the prototype has 35 plates at 5 cm high, and 19 positions for resistance, considering 1 atom for each 1 cm, we generated 3325 atoms (35 × 5 × 19) to model the system. Although this representation of 1 cm results in a large number of atoms, it was necessary to reconstruct hotspots with more accurate dimensions, also preventing alignment problems of the sensor installation. Since the total size of the image 209 × 200 cm, the vector length of each atom is 41,800, forming a dictionary matrix D of 41,800 × 3325. 
Imaging Reconstruction Algorithm

First, considering the basic model of image reconstruction theory, the acquisition system can be represented by Equation (8) [22]:

g = Hf + n, (8)

where g is a vector formed by the DTS readings, H is the sensitivity matrix, f is a vector representing the temperature distribution on the stator surface, and n is a vector representing all sources of additive noise. Considering that the thermal image has a sparse representation in the constructed dictionary, the acquisition system can be rewritten by substituting Equation (6) into Equation (8), as shown in Equation (9) [22]:

g = HDα + n. (9)

As shown in Figure 6, the DTS "spreads" the impulse, which is a characteristic of low-pass systems. This is translated into an ill-conditioning of matrix H.
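A minimal numpy sketch of this acquisition model, using a broad Gaussian point-spread row for each DTS reading to mimic the low-pass behavior (the sizes, blur width, and noise level are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

npix, nread = 120, 60                    # image pixels, DTS readings

# Sensitivity matrix H: each row is a broad Gaussian point-spread
# function, mimicking the DTS low-pass response (illustrative width).
x = np.arange(npix)
centers = np.linspace(0, npix - 1, nread)
H = np.exp(-0.5 * ((x[None, :] - centers[:, None]) / 6.0) ** 2)
H /= H.sum(axis=1, keepdims=True)        # each reading averages to scale

# A narrow hotspot on an otherwise uniform 26 degC background ...
f = np.full(npix, 26.0)
f[55:60] = 80.0

# ... produces smeared, attenuated readings g = H f + n (Equation (8)).
g = H @ f + 0.05 * rng.standard_normal(nread)

# The peak reading lands far below the true 80 degC peak: the blur is
# why naive inversion amplifies noise and regularization is needed.
print(f.max(), round(g.max(), 1))
```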
Thus, recovering the temperature distribution by simple inversion of Equation (9) yields high noise amplification, generating poor results [21]. This problem requires regularization, which stabilizes the reconstruction and improves the results. Although the dictionary was built with hotspots of 1 cm to improve the representation, we expected them to occur with 5 cm or more (see Figure 9), because of the minimal condition of the structure prototype and the steady state, as explained in Section 4. Therefore, the most appropriate regularization is Total Variation, as it favors piecewise constant signals. Thus, the cost function to solve the inverse problem is given by Equation (10) [21,22]:

α̂ = argmin_α ||g − HDα||p^p + λ ||QDα||1, (10)

where α̂ is a vector with the values of the thermal power that generate the hotspots in the structure, p is the norm used in the data-fidelity term, λ is the regularization parameter that controls the sensitivity of the solution to the noise, and Q is a finite difference matrix. In approximation methods, typical norms used for measuring the deviation are the Lp-norms for p = 1, 2 and ∞. It is common to use the L2 norm in the data term because the noise is usually well represented by a normal distribution [27,29]. However, according to the statistical analysis presented in Section 3, the DTS acquisition model contains residual errors with Laplacian behavior, which indicates that the L2 norm should be replaced by an L1 norm, i.e., p = 1 [27]. To solve Equation (10) with p = 1, we used the Iteratively Reweighted Least Squares (IRLS) approach. This method consists of approximating the cost function by weighted quadratic L2 norms, updating the solution by solving a least squares problem, and reiterating those two steps until some stop criterion is attained, usually defined by a minimum update rate [31]. The implementation details in Matlab® (R2014a, MathWorks, Natick, MA, USA) are shown in Algorithm 1.
Algorithm 1: Image Reconstruction Algorithm
Require: g % DTS readings, H % sensitivity matrix, D % dictionary of hotspots, Q % finite difference matrix
Require: λ % regularization parameter (set empirically)
Require: e = 10⁻⁹ % avoids zero division

The proposed image reconstruction algorithm was evaluated with simulated data, to assess the robustness to different noise levels, and with experimental data obtained with the stator prototype (Figure 3). The results are shown in Section 6.

Results

This section is organized into two subsections: Subsection 6.1 presents the results obtained by simulating the response g with the sensitivity matrix H for a given hotspot Dα, in order to assess the algorithm robustness under different noise levels; Subsection 6.2 shows the results obtained in the experimental tests with the stator prototype, using resistances to emulate heat sources of different lengths.

Simulated Results

The evaluation of the algorithm performance with respect to the noise level was conducted by simulating a hotspot with the dictionary D. The simulated hotspot covered a region of three plates, with a maximum temperature of 80 °C and an ambient temperature of 26 °C, which represents a length of approximately 15 cm, considering the fiber installation (Figure 3).
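The IRLS iteration referenced in Algorithm 1 can be sketched in numpy as follows (a minimal illustration of the technique, not the paper's Matlab implementation; the stopping rule, λ, and the toy problem are illustrative assumptions):

```python
import numpy as np

def irls_l1_tv(g, A, Q, lam=1e-2, n_iter=100, e=1e-9):
    """Sketch of IRLS for  min_a ||g - A a||_1 + lam * ||Q a||_1.

    A plays the role of H @ D from Equation (9), Q is a finite-difference
    matrix, and e avoids division by zero, as in Algorithm 1.
    """
    a = np.linalg.lstsq(A, g, rcond=None)[0]       # L2 initialization
    for _ in range(n_iter):
        w1 = 1.0 / (np.abs(g - A @ a) + e)         # data-term weights
        w2 = 1.0 / (np.abs(Q @ a) + e)             # TV-term weights
        lhs = A.T @ (w1[:, None] * A) + lam * Q.T @ (w2[:, None] * Q)
        rhs = A.T @ (w1 * g)
        a_new = np.linalg.solve(lhs, rhs)          # weighted LS update
        if np.linalg.norm(a_new - a) <= 1e-8 * (np.linalg.norm(a) + e):
            break                                  # minimum update rate
        a = a_new
    return a

# Usage: recover a piecewise-constant coefficient vector from
# noiseless overdetermined measurements.
rng = np.random.default_rng(1)
n = 20
a_true = np.zeros(n)
a_true[8:12] = 1.0                                 # one "hotspot"
A = rng.standard_normal((40, n))
Q = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]           # first differences
g = A @ a_true
a_hat = irls_l1_tv(g, A, Q)
print(np.linalg.norm(a_hat - a_true))              # small
```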
This length was chosen based on the spatial resolution achieved with the Total Variation deconvolution method proposed in [21]. The simulated hotspot image f is shown in Figure 10. Based on the image of Figure 10, the response g was obtained using the sensitivity matrix H of the DTS acquisition model presented in Section 3. Thus, the algorithm performance was evaluated by adding white Gaussian noise at different levels to the simulated DTS readings. The images reconstructed for each noise level were compared with the ground-truth hotspot images. We employed the mean square error (MSE) and the maximum temperature difference (MTD) as figures of merit.

The reconstructed images are shown in Figure 11, and a brief analysis of the results is presented in Table 1. In the first test (Figure 11a), no noise was added to the simulated DTS readings, and the reconstructed image presented reduced errors relative to the simulated hotspot, with only 0.1 °C uncertainty. In the tests adding noise (Figure 11b-f), it can be seen that it is possible to obtain, from data with an SNR down to 40 dB, acceptable images for application in the stator, with less than 4 °C uncertainty for hotspots of 15 cm. However, for data with an SNR between 30 dB and 10 dB (Figure 11d-f), besides large errors (up to −23.7 °C) in the temperature estimation, other parameters such as location and dimension become affected. As can be observed, the reconstructed hotspot covered four plates instead of the three plates of the original hotspot.

Another numerical analysis was performed to assess the minimum length of a detectable hotspot given a maximum acceptable temperature difference of ±1 °C and different SNR levels. Table 2 summarizes the results, where it can be seen that the minimum length for the given conditions was 15 cm with an SNR of 50 dB. For other SNR levels, 40 dB and 30 dB, the performance was affected and the minimum length increased to 22 cm and 27 cm, respectively. Subsection 6.2 presents the results of the experimental tests using the stator prototype.
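The two figures of merit used above can be sketched as follows (a minimal illustration; the MTD sign convention, reconstructed peak minus ground-truth peak, is inferred from the negative MTD values quoted in the text):

```python
import numpy as np

def mse(recon, truth):
    """Mean square error between reconstructed and ground-truth images."""
    return float(np.mean((np.asarray(recon) - np.asarray(truth)) ** 2))

def mtd(recon, truth):
    """Maximum temperature difference: reconstructed peak minus true peak.

    Negative values mean the reconstruction underestimates the hotspot,
    matching the sign of the MTD values quoted in the text.
    """
    return float(np.max(recon) - np.max(truth))

# Toy example: a 2 x 2 "image" whose 80 degC hotspot is reconstructed
# as 76 degC.
truth = np.array([[26.0, 80.0], [26.0, 26.0]])
recon = np.array([[26.0, 76.0], [26.0, 26.0]])
print(mse(recon, truth))   # -> 4.0
print(mtd(recon, truth))   # -> -4.0
```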
Experimental Results

The experimental results were obtained from tests with the stator prototype (Figure 3) and a Fluke® Ti25 thermal camera (Fluke Corporation, Everett, WA, USA), which was used as the reference for the reconstructed images. In addition to the comparison with the thermal camera, the images generated by the sparse reconstruction algorithm were also compared with the linear interpolation method used in the thermal imaging system presented in [6]. Three resistances with different lengths were used as heat sources, producing hotspots of approximately 100 cm, 15 cm and 5 cm. The test results for a hotspot of 100 cm are shown in Figure 12. In this test, the ambient temperature was 27.8 °C; the total power applied to the resistance was 680 W (≅12 W/cm³), generating a hotspot with a maximum temperature of 63.7 °C after reaching steady state.
The image taken by the thermal camera is presented in Figure 12a, where the temperature distribution over 17 plates of the stator prototype is observed. Figure 12b shows the image generated by linear interpolation of the raw DTS readings, where it can be seen that the maximum temperature was measured only approximately, and for just seven plates (MTD of −2.8 °C), while for the remaining plates the MTD was up to −14 °C. This is because the unprocessed DTS readings are strongly influenced by the spatial resolution (Figure 4), generating a blurred image. The image reconstructed by the proposed algorithm is shown in Figure 12c. In this result, the maximum temperature was 63.2 °C (MTD of −0.5 °C), and both the length and the location of the hotspot were in accordance with the thermal camera image. As can be seen, the sparse reconstruction showed significant improvements compared with the linear interpolation method, even for a hotspot with dimensions in line with the DTS spatial resolution (1 m). Although there are some differences in the temperature distribution along the plates, the main parameters of interest for the application on the stator structure, namely the location, length and maximum temperature, are quite accurate.

Another test was conducted using a hotspot with dimensions smaller than the DTS spatial resolution. In this experiment, we applied a total power of 120 W (≅15 W/cm³) to a resistance of 15 cm, the minimum detectable length estimated in the tests presented in Section 6.1. The generated hotspot reached 67.5 °C at an ambient temperature of 25.9 °C, as shown in Figure 13a. Figure 13b shows the image generated by the linear interpolation method, where the maximum temperature measured was only 46.1 °C (MTD of −21.4 °C), and both the dimensions and the location of the hotspot were not correctly identified.
This poor result is expected because this method uses the raw sensor readings and the hotspot length was only 15 cm, i.e., considerably lower than the spatial resolution of 1 m. The image generated by the proposed reconstruction algorithm is shown in Figure 13c. In this result, the maximum temperature was 66.8 °C (MTD of −0.7 °C), and the location of the hotspot was in accordance with the thermal camera image. The measured length was about 22 cm, which can be considered a small difference, given that the hotspot was six times smaller than the DTS spatial resolution. Regarding the application in the generator stator, the proposed algorithm provides a great improvement compared with the linear interpolation method: since insulation faults can occur in just a few core plates (Figure 1), it is important to perform reliable measurements for hotspots with dimensions smaller than 1 m.

Figure 13. Experimental results for a hotspot of 15 cm, with a maximum temperature of 67.5 °C and an ambient temperature of 25.9 °C. (a) Reference image taken by the thermal camera; (b) Image generated by the linear interpolation method using raw DTS readings [6]; (c) Image reconstructed by the proposed algorithm.

A test to evaluate an extreme case for the image reconstruction algorithm was performed by producing a hotspot of only 5 cm. In this test we applied a total power of 25 W (≅9 W/cm³) to a resistance positioned in one of the plates, which generated a hotspot with a maximum temperature of 50.6 °C at an ambient temperature of 22.8 °C. The thermal image taken by the thermal camera is presented in Figure 14a. The result of the linear interpolation method is shown in Figure 14b. As can be seen, the generated image is extremely blurred and it was not possible to identify the hotspot. This result shows that the use of Raman DTS becomes impractical for measurements on the order of 5 cm without signal reconstruction techniques. The image generated by the reconstruction algorithm is shown in Figure 14c. In this result, the maximum temperature was 41.5 °C (MTD of −9.1 °C). Besides the large temperature difference, the hotspot length was also measured incorrectly: 25 cm instead of 5 cm. As can be seen, the hotspot is spread over five plates when it should appear on only one. This is mainly due to the DTS spatial resolution, which degrades the signal-to-noise ratio (SNR), spreading and attenuating the sensor response, as already shown in Subsection 6.1, more specifically in Figure 11e,f. Although in this case the algorithm does not provide precise measurements of temperature and dimension, it is possible to identify the existence and approximate location of a fault, even for hotspots with dimensions up to 20 times smaller than the DTS spatial resolution. Section 7 presents the main conclusions on the proposed reconstruction method for thermal imaging of generator stators.

Figure 14. (a) Reference image taken by the thermal camera; (b) Image generated by the linear interpolation method using raw DTS readings [6]; (c) Image reconstructed by the proposed algorithm.

Conclusions

This paper presented an image reconstruction method that can be a promising solution for thermal imaging systems based on Raman distributed temperature sensing (DTS). The reconstruction was based on sparse representations, which proved suitable for the application. The main advantage is the possibility of correctly identifying heat sources and hotspots smaller than the DTS spatial resolution (1 m). To reconstruct the thermal images, we employed a dictionary of hotspots built from a multiphysical model of the monitored structure. Tests were performed using a prototype with a structure similar to the surface of a 200 MW hydroelectric generator stator. This facilitated the laboratory tests and allowed the comparison between reconstructed images and images from a thermal camera used as reference. The evaluation of the algorithm performance with respect to the noise level was conducted through simulations with the DTS model and the dictionary of hotspots.
These simulations showed that when the signal-to-noise ratio (SNR) is at least 40 dB, it is possible to obtain acceptable images for application in the stator, with less than 4 °C uncertainty. However, with an SNR between 30 dB and 10 dB, besides the uncertainty in the temperature measurement, other parameters such as location and dimension become affected. This provides a lower bound below which the proposed method is not applicable. The experimental results were obtained by generating hotspots of different dimensions in the stator prototype. These results show that it is possible to identify hotspots with dimensions as short as 15 cm, with a temperature uncertainty of less than ±1 °C, which represents a great advance considering the DTS spatial resolution. Significant improvements were also observed for hotspots down to 5 cm. In this critical case, despite a maximum temperature difference of almost −10 °C, it was possible to identify the existence and approximate location of hotspots with dimensions up to 20 times smaller than the DTS spatial resolution. Regarding the application to imaging systems for generator stators, improvements in image resolution can help identify wear of the insulation layer at early stages, facilitating maintenance and avoiding further damage to the structure, such as a short circuit in the stator windings. Accurate temperature monitoring of the stator structure can be a fundamental tool for predictive maintenance, ensuring the performance and operational availability of the generator.
Search for a charged Higgs boson decaying into top and bottom quarks in events with electrons or muons in proton-proton collisions at $\sqrt{s} =$ 13 TeV

A search is presented for a charged Higgs boson heavier than the top quark, produced in association with a top quark, or with a top and a bottom quark, and decaying into a top-bottom quark-antiquark pair. The search is performed using proton-proton collision data collected by the CMS experiment at the LHC at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. Events are selected by the presence of a single isolated charged lepton (electron or muon) or an opposite-sign dilepton (electron or muon) pair, categorized according to the jet multiplicity and the number of jets identified as originating from b quarks. Multivariate analysis techniques are used to enhance the discrimination between signal and background in each category. The data are compatible with the standard model, and 95% confidence level upper limits of 9.6-0.01 pb are set on the charged Higgs boson production cross section times branching fraction to a top-bottom quark-antiquark pair, for charged Higgs boson mass hypotheses ranging from 200 GeV to 3 TeV. The upper limits are interpreted in different minimal supersymmetric extensions of the standard model.

Introduction

Since the discovery of a Higgs boson [1][2][3] with a mass of 125 GeV [4,5], the ATLAS and CMS Collaborations have actively searched for additional neutral and charged Higgs bosons. Most theories beyond the standard model (SM) of particle physics enrich the SM Higgs sector; a simple extension is the assumption of the existence of two Higgs doublets [6][7][8][9]. Such models are collectively labeled as two-Higgs-doublet models (2HDM), and are further classified into four categories according to the couplings of the doublets to fermions.
In Type-I models, only one doublet couples to fermions, while in Type-II models one doublet couples to the up-type quarks and the other to the down-type quarks and the charged leptons. In lepton-specific models one doublet couples only to the leptonic sector and the other couples to quarks, while in flipped models the first doublet couples specifically to the down-type quarks and the second one to the up-type quarks and charged leptons. The two-doublet structure of the 2HDM Higgs sector gives rise to five physical Higgs bosons through spontaneous symmetry breaking: a charged pair (H ± ) and three neutral bosons, namely the light (h) and heavy (H) scalar Higgs bosons, and one pseudoscalar boson (A). Supersymmetric (SUSY) models have a Higgs sector based on 2HDMs [10][11][12][13][14][15]. Among the SUSY models, a popular one is the minimal supersymmetric extension to the SM (MSSM) [16,17], whose Higgs sector is described by a Type-II 2HDM. In the MSSM, the production and decay of these particles are described at tree level by two free parameters, which can be chosen as the mass of the charged Higgs boson (m H ± ) and the ratio of the vacuum expectation values of the neutral components of the two Higgs doublets (tan β). Some variants of the 2HDM achieve consistency with the 125 GeV Higgs boson via a Gildener-Weinberg scalon scenario which stabilizes the Higgs boson mass and alignment [18]. Charged Higgs bosons with a mass below the top quark mass are dominantly produced in top quark decays, whereas charged Higgs bosons with a mass larger than the top quark mass are produced in association with a top quark. Charged Higgs boson production at finite order in perturbation theory is accomplished in association with a top and a bottom quark in the so-called four-flavor scheme (4FS) and in association with a top quark in the five-flavor scheme (5FS) [19], as illustrated in Fig. 1. 
In this paper, only charged Higgs bosons with a mass larger than the mass of the top quark (heavy charged Higgs bosons) are considered, and charge-conjugate processes are implied. The signal is produced in the 4FS, and the possible presence of 5FS production is accounted for in the search region definition. The normalization of the signal processes accounts for both the 4FS and the 5FS. The decay of a heavy charged Higgs boson can occur through several channels; among them, H + → τ + ν τ and H + → tb have the highest branching fractions, at low (about 200 GeV) and high (about 1 TeV) m H ± respectively, for a large range of tan β values and a large variety of theoretical models [20]. The detection of a charged Higgs boson would unequivocally point to physics beyond the SM. Model-independent searches for charged Higgs bosons are of utmost interest for the CERN LHC program because they allow one to disentangle the Higgs sector physics from the specificity and complexity of the theoretical model by assuming unity branching fraction in each mode. Direct searches for charged Higgs bosons have been performed by the CERN LEP and the Fermilab Tevatron experiments, and indirect constraints on H ± production have been set from flavor physics measurements [21][22][23][24][25][26][27][28][29][30]. Searches for a charged Higgs boson decaying into a top and a bottom quark have been performed by the D0, ATLAS, and CMS Collaborations in proton-antiproton collisions at a center-of-mass energy of √s = 1.96 TeV [31] and in proton-proton (pp) collisions at √s = 8 TeV [32,33] and √s = 13 TeV [34]. In this paper we improve the sensitivity to model-independent production of a charged Higgs boson, as well as the sensitivity to relevant MSSM scenarios. The ATLAS and CMS Collaborations have also conducted searches for the production of a charged Higgs boson in the τ + ν τ [32,[35][36][37], cs [38], and cb [39] decay channels at √s = 8 and 13 TeV.
Searches for charged Higgs bosons produced via vector boson fusion and decaying into W and Z bosons, as predicted by models containing Higgs triplets [40][41][42], and searches for additional neutral heavy Higgs bosons decaying to a pair of third-generation fermions tt, bb, and τ + τ − [42-46] extend the program of the ATLAS and CMS Collaborations to elucidate the extended Higgs sector beyond the SM. This paper describes a search for a heavy charged Higgs boson produced in association with a top quark or with a top and a bottom quark and decaying into a top and a bottom quark performed using pp collision data collected at √ s = 13 TeV in 2016. The data correspond to an integrated luminosity of 35.9 fb −1 . The final state contains two W bosons, one from the decay chain of the heavy charged Higgs boson and the other from the decay of the associated top quark. One or both of the W bosons can decay into leptons, producing single-lepton and dilepton final states, respectively. The leptonic decays of tau leptons from the W boson decay are considered as well. The single-lepton final state is characterized by the presence of one isolated lepton (e, µ) that is used to trigger the event, while the dilepton final state contains events with two isolated opposite-sign leptons (e + e − , e ± µ ∓ , µ + µ − ). This leads to the suppression of several backgrounds. The signal process (tbH + + tH + ) has furthermore a large b jet multiplicity; an additional classification of the events is therefore achieved based on the number of jets identified as originating from b quarks. Multivariate analysis (MVA) techniques are used to enhance the discrimination between signal and background. Signal-rich regions are analyzed together with signal-depleted regions in a maximum likelihood fit to the MVA classifier outputs, which simultaneously determines the contributions from the tbH + + tH + signal and the backgrounds. 
Model-independent upper limits on the product of the charged Higgs boson production cross section and the branching fraction into a top-bottom quark-antiquark pair, σ H ± B(H ± → tb), as a function of m H ± , are presented in this paper. Results are also interpreted in specific MSSM benchmark scenarios, where many free parameters of the model are fixed to values corresponding to interesting phenomenological assumptions. 

The CMS detector 

The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity (η) coverage provided by the barrel and endcap detectors. Muons are detected in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid. Events of interest are selected using a two-tiered trigger system [47]. The first level, composed of specialized hardware processors, uses information from the calorimeters and muon detectors, while the second level consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [48]. 

Event simulation 

Signal events are simulated using the MADGRAPH5 aMC@NLO 2.3.3 [49] generator at next-to-leading order (NLO) precision in perturbative quantum chromodynamics (QCD) using the 4FS for a range of m H ± hypotheses between 200 and 3000 GeV; the complete list of masses is [200, 220, 250, 300, 350, 400, 500, 650, 800, 1000, 1500, 2000, 2500, 3000] GeV. 
The 4FS is expected to provide a better description of the observables, while shape effects from 5FS production are expected to be negligible, because any additional b quarks would be radiated with low transverse momentum by the beam remnants [20]. Normalization effects induced by the presence of the 5FS are accounted for by computing the MSSM production cross sections for the heavy charged Higgs boson signals both in the 4FS and 5FS; the two cross sections are then combined to obtain the total cross section using the Santander matching scheme [19] for different values of tan β. The 4FS and 5FS cross sections differ for all mass points by about 20%, and the Santander-matched cross section lies in between the two; typical values are of the order of 1 pb for a mass of 200 GeV, down to about 10 −4 pb for a mass of 3 TeV [20,[50][51][52][53][54]. Branching fractions B(H + → tb ) are computed in the chosen scenarios with the HDECAY 6.52 package [55]. These cross sections are used in Section 7 only for the model-dependent results, and do not affect the model-independent results. The main background to this analysis originates from SM top quark pair production. Other backgrounds are the production of W and Z/γ * with additional jets (referred to as V+jets), diboson and triboson processes, single top quark production, tt production in association with W, Z, γ, or H bosons (collectively labeled tt+V), as well as four top quark production (tttt) and QCD multijet events. The MADGRAPH5 aMC@NLO 2.2.2 generator [49] is used at leading order (LO), with the MLM jet matching and merging [59], to generate vector boson events in association with jets, single top quark events in the s-channel, and four top quark production. The associated production of tt events with a vector boson and with a γ is simulated at NLO using MADGRAPH5 aMC@NLO 2.2.2 with FxFx jet matching and merging [60]. 
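As a concrete illustration, the Santander matching scheme combines the two predictions as a weighted average whose weight t = ln(m H ± /m b ) − 2 grows logarithmically with the charged Higgs boson mass, so the 5FS prediction receives more weight at high mass. The sketch below is only an illustration, not the CMS computation: the input cross sections and the b quark mass value are hypothetical.

```python
import math

def santander_matched_xsec(sigma_4fs, sigma_5fs, m_higgs, m_b=4.92):
    """Combine 4FS and 5FS cross sections with the Santander matching scheme:
    the weight t = ln(m_H / m_b) - 2 interpolates between the two schemes,
    giving the 5FS increasing weight as the Higgs boson mass grows."""
    t = math.log(m_higgs / m_b) - 2.0
    return (sigma_4fs + t * sigma_5fs) / (1.0 + t)

# Hypothetical inputs for a 200 GeV charged Higgs boson, differing by
# roughly the 20% quoted in the text (not measured values):
sigma_4fs, sigma_5fs = 0.9, 1.1  # pb
matched = santander_matched_xsec(sigma_4fs, sigma_5fs, 200.0)
# The matched cross section lies between the two scheme predictions.
assert sigma_4fs < matched < sigma_5fs
```

By construction, when the two schemes agree the matched result equals their common value, and the logarithmic weight reproduces the expected dominance of the 5FS at large m H ± .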
In all cases, the NNPDF3.0 [61] set of parton distribution functions (PDFs) is used, and the parton showers and hadronization processes are performed by PYTHIA 8.212 [62] with the CUETP8M1 [63] tune for the underlying event, except for the tt sample, where the tune CUETP8M2T4 [64] provides a more accurate description of the kinematic distributions of the top quarks and of the jet multiplicity. The simulated tt events are further separated based on the flavor of additional jets that do not originate from the top quark decays in the event and are labeled according to their content of b- and c-originated hadrons. The tt+b(b) (tt+c(c)) label is attributed to the events that have at least one b jet (c jet and no b jet) from the event generator within the acceptance. Events that do not belong to any of the above processes are enriched in light-flavor jets and are therefore denoted as tt+LF. This partition of the simulated tt sample is based on matching heavy-flavor generator-level jets to the originating partons and hadrons and is introduced to account for different systematic uncertainties affecting the corresponding cross section predictions. The procedure is detailed in Refs. [77,78]. All generated events are passed through a detailed simulation of the CMS apparatus, based on GEANT4 v9.4 [79]. The effects of additional pp interactions occurring in the same or in neighboring bunch crossings (pileup) are modelled by adding simulated minimum bias events to all simulated processes. In the data collected in 2016 an average of 23 pp interactions occurred per LHC bunch crossing. In simulation, the difference in the number of true interactions is accounted for by reweighting the simulated events to match the data in the multiplicity distribution of pileup interactions. 
Event reconstruction 

Events are reconstructed using the particle-flow (PF) algorithm [80], which aims to reconstruct and identify each individual particle in an event, with an optimized combination of information from the various elements of the CMS detector. The energy of photons is obtained from the ECAL measurement. The energy of electrons is determined from a combination of the electron momentum at the primary interaction vertex as determined by the tracker, the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the electron track. The momentum of muons is obtained from the curvature of the corresponding track. The energy of charged hadrons is determined from a combination of their momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for zero-suppression effects and for the response function of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energy. The reconstructed vertex with the largest value of summed physics-object squared transverse momentum (p 2 T ) is taken to be the primary pp interaction vertex [81]. The physics objects are the jets, clustered using the jet finding algorithm [82,83] with the tracks assigned to the vertex as inputs, and the associated missing transverse momentum ( p miss T ), taken as the negative vector sum of the p T of those jets. Electrons are identified using an MVA-based identification algorithm [84]. Working points are defined [85] by setting thresholds for the classifier values to mitigate efficiency losses for high-p T electrons observed particularly in high-mass signal events; such working points are labeled Tight (≈88% efficiency for tt events) and Loose (≈95% efficiency for tt events). They result in an efficiency in selecting high-mass signal events of ≈90%, approximately flat across the electron high-p T range. 
Muon identification uses the algorithm described in Ref. [86] and two working points, referred to as Medium and Loose, with efficiencies of about 97 and 100%, respectively. Thresholds in p T and η for electrons and muons depend on whether they are used for selecting or vetoing events and are detailed in Section 5. Electrons and muons are required to be isolated from other particles. Their relative isolation is measured as the ratio between the scalar p T sum of selected PF particles within a cone of radius ∆R(p T (ℓ)) and the p T of the lepton; ∆R is defined as √ (∆η) 2 + (∆φ) 2 , where ∆η and ∆φ are the differences in pseudorapidity and azimuthal angle. The ∆R(p T (ℓ)) cone decreases with the lepton p T [87, 88] according to the formula ∆R(p T (ℓ)) = 10 GeV / min(max(p T (ℓ), 50 GeV), 200 GeV). Efficiencies in triggering, reconstruction, identification, and isolation of leptons are estimated both in data and simulation. Those efficiencies are used to determine correction factors, depending on p T and η, which are applied to simulated events on a per-lepton basis. Jets are reconstructed from the PF particles clustered by the anti-k T algorithm [82,83] with a clustering radius of 0.4. To mitigate the effect of pileup interactions, charged hadrons that do not arise from the primary vertex are excluded from the clustering. Furthermore, jets originating from pileup interactions are removed by means of an MVA identification algorithm [89]. The jet momentum is then corrected in simulated events to account for multiple effects, including the extra energy clustered in jets arising from pileup. In situ measurements of the momentum balance in dijet, photon+jet, Z+jet, and multijet events are used to determine any residual differences between the jet energy scale in data and in simulation, and appropriate corrections are applied [90]. Jets are selected if they satisfy p T > 40 GeV and |η| < 2.4. 
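The p T -dependent isolation cone defined above shrinks from ∆R = 0.2 for soft leptons down to ∆R = 0.05 for very energetic ones. A direct transcription of the quoted formula:

```python
def iso_cone_radius(pt_gev):
    """Lepton-pT-dependent isolation cone radius, as quoted in the text:
    Delta R = 10 GeV / min(max(pT, 50 GeV), 200 GeV).
    The cone is 0.2 for pT <= 50 GeV and shrinks to 0.05 for pT >= 200 GeV."""
    return 10.0 / min(max(pt_gev, 50.0), 200.0)

assert iso_cone_radius(30.0) == 0.2    # clamped at the 50 GeV floor
assert iso_cone_radius(100.0) == 0.1
assert iso_cone_radius(500.0) == 0.05  # clamped at the 200 GeV ceiling
```

The shrinking cone keeps high-p T leptons from boosted top quark decays isolated even when a nearby b jet would fall inside a fixed-size cone.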
Loose identification criteria are applied to the jets, in order to distinguish them from well-identified stable particles. Finally, jets are required to be separated from the selected leptons by ∆R > 0.4. Jets from the hadronization of b quarks are identified (b tagged) using the combined secondary vertex algorithm [91]. For the chosen threshold of the tagging algorithm, the mistagging probability-the fraction of jets that arise from the fragmentation of light partons (u, d, s, and g) and c jets misidentified by the algorithm as b jets-is approximately 1 and 15%, respectively, while the efficiency to correctly identify a b jet is about 70%. The difference in b tagging and mistagging efficiencies between data and simulation is corrected by applying correction factors dependent on jet p T and η. The missing transverse momentum vector is defined as the projection of the negative vector sum of the momenta of all reconstructed PF particles in an event onto the plane perpendicular to the beams. Its magnitude is referred to as p miss T . The p miss T reconstruction is improved by propagating the effect of the jet energy corrections to it. Further filtering algorithms are used to reject events with anomalously large p miss T resulting from instrumental effects [92]. Hadronically decaying τ leptons (τ h ) are reconstructed using the hadron-plus-strips algorithm [93], based on the identification of the individual τ decay modes. The τ h candidates are required to be separated from reconstructed electrons and muons by ∆R > 0.4. Tau candidates are further selected by means of a multivariate discriminator combining isolation and lifetime information [93]. Jets originating from the hadronization of quarks and gluons misidentified as τ h are suppressed by requiring that the τ h candidate is isolated. 
The τ h identification efficiency depends on p τ h T and η τ h , and is on average 50% for p τ h T > 20 GeV, with a probability of approximately 1% for hadronic jets to be misidentified as a τ h . The isolation variable is constructed from the PF particles inside a cone of ∆R = 0.3. The effect of neutral PF candidates from pileup vertices is estimated using charged hadrons associated with those vertices and subtracted from the isolation variable. 

Event selection and classification 

Events are selected with single-lepton triggers characterized by transverse momentum (p T ) thresholds of 27 (24) GeV for electrons (muons). Additionally, several trigger paths with higher p T thresholds and looser identification requirements are included to maximize efficiency for high-p T electrons (muons), resulting in an overall efficiency in the plateau region close to 95 (100)%. Correction factors quantifying the difference between trigger efficiencies in data and simulated events are evaluated using a tag-and-probe technique [84, 86, 94, 95]. Events are required to have at least one electron (muon) with p T > 35 (30) GeV satisfying tighter identification and isolation criteria than the online requirements, effectively corresponding to the saturation point of the online trigger efficiencies. As briefly discussed in Section 1, the first classification is achieved by separating the events into five single-lepton and dilepton regions (e ± , µ ± , e + e − , e ± µ ∓ , µ + µ − ). In the single-lepton category, only events with exactly one lepton are accepted, whereas the presence of any additional lepton passing the loose identification requirements with p T > 10 GeV vetoes the event. Moreover, the presence of a τ h candidate with p T > 20 GeV and |η| < 2.3 vetoes the event. 
In the dilepton category, we accept events with exactly two oppositely charged leptons (electrons or muons); the second lepton is required to have p T > 10 GeV and pass looser identification criteria than the leading lepton. To reduce the Z/γ * background, we reject events with two leptons of the same flavor and opposite charge with an invariant mass m less than 12 or between 76 and 106 GeV. The final states examined in this paper include neutrinos from the W boson decays; events are therefore required to have p miss T > 30 GeV. Additionally, in the single-lepton final state, events in which the p miss T is compatible with mismeasurement of electron or jet energy are rejected by requiring the azimuthal angle separation between the p miss T and any jet in the event to be ∆φ > 0.05. Tree-level signal production processes are characterized by having five (three) jets at leading order in the single-lepton (dilepton) final state. The tt background has a lower jet multiplicity in the corresponding regions, but additional jets may be produced through initial-and finalstate radiation. Requiring a high multiplicity of reconstructed jets improves the discrimination of signal events from the background, while the regions depleted in signal processes constrain background estimates using data. Consequently, in the single-lepton and dilepton event regions, the presence of at least four and two jets, respectively, is required. The SM top quark pair production has final states similar to the charged Higgs boson signal production with fewer b quarks at tree level, while additional gluon splitting contaminates the high b jet multiplicity regions. Consequently, one or more of these jets is required to be b-tagged. For a large H + mass range, the highest significance for both the single-lepton and dilepton final states is found in the regions having higher N jets and N b jets . 
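The dilepton selection and Z/γ * veto described above can be sketched as follows. The event representation (dictionaries with pt/eta/phi/charge/flavor keys) is hypothetical, and the invariant mass uses the massless-lepton approximation; this is an illustration of the cuts, not the CMS analysis code.

```python
import math

def invariant_mass(l1, l2):
    """Dilepton invariant mass from (pt, eta, phi), massless approximation:
    m^2 = 2 pT1 pT2 (cosh(d_eta) - cos(d_phi))."""
    return math.sqrt(2.0 * l1["pt"] * l2["pt"]
                     * (math.cosh(l1["eta"] - l2["eta"])
                        - math.cos(l1["phi"] - l2["phi"])))

def passes_dilepton_selection(leptons, met_gev):
    """Sketch of the dilepton selection in the text: exactly two
    opposite-charge leptons, a Z/gamma* veto on same-flavor pairs
    (m_ll < 12 GeV or 76 < m_ll < 106 GeV rejected), and p_T^miss > 30 GeV."""
    if len(leptons) != 2:
        return False
    l1, l2 = leptons
    if l1["charge"] * l2["charge"] >= 0:     # require opposite charge
        return False
    if l1["flavor"] == l2["flavor"]:         # Z/gamma* veto, same flavor only
        mll = invariant_mass(l1, l2)
        if mll < 12.0 or 76.0 < mll < 106.0:
            return False
    return met_gev > 30.0

# A back-to-back same-flavor pair with m_ll = 91 GeV is rejected by the Z veto:
z_like = [{"pt": 45.5, "eta": 0.0, "phi": 0.0, "charge": 1, "flavor": "e"},
          {"pt": 45.5, "eta": 0.0, "phi": math.pi, "charge": -1, "flavor": "e"}]
assert not passes_dilepton_selection(z_like, 50.0)
```

The Z-window and low-mass vetoes apply only to same-flavor pairs because the Z/γ * background does not produce prompt e ± µ ∓ pairs.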
The only exceptions are the H + signals with mass around 200 GeV, for which the low N jets and N b jets regions have higher sensitivity than the high-multiplicity ones. Finally, events with two same-sign leptons are used to form control regions for the multijet background estimation. A set of discriminant variables is selected to enhance the signal and background separation in each category and is summarized in Table 1. Kinematic and topological shapes have different discrimination power for the different mass hypotheses of the charged Higgs boson. Each discriminant variable is studied and included in an MVA classifier if it improves the discrimination, or otherwise discarded. For both single-lepton and dilepton regions, the H T distribution, defined as the scalar sum of the p T of the selected jets, is one of the most sensitive variables. Additionally, the largest p T among the b jets, the p miss T , the minimum invariant mass between the lepton and the b jets, the maximum ∆η between two b-tagged jets, the smallest ∆R separation of the b jets, and the p T -weighted average of the b tagging discriminator calculated using the non-b-tagged jets are used as input variables to the MVA discriminators. Information about the event topology is incorporated via event shape variables, such as the centrality, which is defined as the ratio of the sum of the transverse momenta of all jets to their total energy, and the second Fox-Wolfram moment [96] calculated using all jets. In the single-lepton final states, the following variables are also included: the invariant mass of the three jets with largest p T , the transverse mass of the system constituted by the lepton and the p miss T , the angular separation between the lepton and the system constituted by the b jet pair with the smallest ∆R separation between the b jets, and the average separation between the b jet pairs. The event selection for the dilepton final state takes advantage of the presence of the second lepton. 
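For illustration, two of the MVA input variables named above, H T and the centrality, can be computed as follows. The jet representation (dictionaries with pt and energy keys) is hypothetical; this is only a sketch of the definitions in the text.

```python
def ht_and_centrality(jets):
    """H_T is the scalar sum of the pT of the selected jets; the centrality
    is the ratio of that sum to the total energy of the jets. Central jets
    (energy close to pT) give centrality near 1, forward jets pull it down."""
    ht = sum(j["pt"] for j in jets)
    total_energy = sum(j["energy"] for j in jets)
    return ht, ht / total_energy

# Perfectly central (massless, eta = 0) jets have E = pT, so centrality = 1:
central = [{"pt": 60.0, "energy": 60.0}, {"pt": 30.0, "energy": 30.0}]
ht, cent = ht_and_centrality(central)
assert ht == 90.0 and cent == 1.0
```

Signal events with a heavy charged Higgs boson tend to have large H T and central jets, which is why both variables discriminate against the tt background.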
The lepton with largest p T (leading lepton) characterizes the decay of a Lorentz-boosted top quark that originates from the massive charged Higgs boson in the signal hypothesis. The following variables are also considered: the ∆R between the leading lepton and the leading b-tagged jet, the momentum of the leading lepton, the lepton p T asymmetry, the mass of the lepton+b-tagged jet system with the largest p T , and the smallest of the transverse masses constructed with the leading b jet and each of the two W boson hypotheses, where the W bosons are reconstructed using the p miss T and the lepton momenta. Separate classifiers are constructed for the single-lepton and dilepton final states, using different technologies in order to fully exploit the different sets of features described above. For each of the suitable discriminating variables, it has been verified that the simulation models data correctly. Figure 2 shows some of the most important input variables in exemplary signal-region subcategories for the single-lepton (≥5j/≥2b) and dilepton final states (≥3j/≥1b). For all the classifiers described below each signal and background sample is randomly divided into three equally populated parts; one third is used for training the classifiers, one third is used for testing the performance of the classifiers, and one third is used for evaluating the classifier in the context of the maximum-likelihood fit detailed in Section 7. The backgrounds are dominated by tt events, but all other SM contributions are also included in the training. Both in the single-lepton and the dilepton regions, the training process and possible sources of over-or under-training are verified by means of statistical tests. A boosted decision tree (BDT) [97,98] classifier is trained using the TMVA package [99] to discriminate between signal and background in the single-lepton regions. 
The dependence of the kinematic signature on m H ± is accounted for by having a separate training for each m H ± hypothesis. The training process is optimized by targeting a region enriched in signal events by requiring N jets ≥ 5 and N b jets ≥ 2 (training region). The binned output distribution of the BDT classifier is calculated in all the single-lepton subcategories corresponding to the training region plus the (4j/≥3b) region and used in the maximum likelihood fit. In the other single-lepton subcategories, the inclusive event yields are used in the fit to infer additional information on the background normalization. The dilepton final states exploit a novel technology based on deep neural network (DNN) classifiers [97], parametrized as a function of m H ± [100]. The TENSORFLOW (v1.4.0) backend [101] and the KERAS (v2.1.1) frontend [102] are used to train the classifier. The parametrization of the signal events as a function of m H ± enables a unique training for each signal mass hypothesis. The training process is optimized in the region enriched in signal events by requiring N jets ≥ 3 and N b jets ≥ 1. The jet and b-tagged jet multiplicities are used in extending the training parametrization to capture the characteristics of the signal and background processes in the different regions. In the regions characterized by a single b jet we use the non-tagged jet with the highest value of the b tagging discriminator as the second b jet for the purpose of computing the input variables. The binned DNN output is used in the maximum likelihood fit in all the dilepton subcategories to further enhance the separation between the different background processes. The bin size for the MVA output in each of the subcategories of the analysis is chosen with a variable binning strategy such that the statistical uncertainty in signal and background event yields separately is less than 20% in each bin. 
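A variable-binning procedure in the spirit described above can be sketched as follows. The merge-from-the-right strategy and the Gaussian 1/√N estimate of the relative statistical uncertainty are illustrative assumptions, not necessarily the exact CMS algorithm.

```python
def variable_binning(bin_counts, max_rel_unc=0.2):
    """Sketch of a variable-binning strategy: starting from a fine uniform
    binning (event counts per fine bin), merge adjacent bins from the right
    (the signal-like tail) until the relative statistical uncertainty
    1/sqrt(N) of each merged bin is below max_rel_unc, i.e. N >= 25 for 20%.
    Returns the merged bin contents, left to right."""
    min_count = (1.0 / max_rel_unc) ** 2
    merged, acc = [], 0.0
    for n in reversed(bin_counts):
        acc += n
        if acc >= min_count:
            merged.append(acc)
            acc = 0.0
    if acc > 0:  # fold any leftover low-statistics bins into the leftmost bin
        if merged:
            merged[-1] += acc
        else:
            merged.append(acc)
    merged.reverse()
    return merged

# Sparse tail bins get merged until every bin has at least 25 events:
assert variable_binning([1, 2, 3, 100, 5, 30]) == [111, 30]
```

Merging from the right keeps the finest granularity where the MVA output is most signal-like while capping the statistical uncertainty per bin, as required by the 20% criterion in the text.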
In order to avoid possible biases in the binning strategy induced by the statistical fluctuations in the simulated samples, the bin boundaries are defined based on the events used for the MVA training. 

Background estimation and systematic uncertainties 

The leptonic decay of one or two of the W bosons in the tt process represents the main background of the analysis for both the single-lepton and dilepton final states. The tt production, as discussed in Section 3, is separated into tt+LF, tt+b(b), and tt+c(c) processes. The last two processes are commonly referred to as tt+heavy flavor (HF). The categorization strategy described in Section 5 populates the low b jet multiplicity regions with the tt+LF processes, while the regions enriched with the signal are characterized by a larger contribution from the tt+HF processes. Smaller background contributions arise from single top quark production, vector boson production in association with jets, multiboson production processes, tt production in association with electroweak bosons (W, Z, γ, H), and tttt production. Different sources of experimental and theoretical uncertainties are modelled as nuisance parameters in the fit and are allowed to change the event yield, the migration of events among regions, and the distribution of the MVA output in each category [103]. Uncertainties that purely affect the yield within a category (rate uncertainties) are modelled via a nuisance parameter with a log-normal probability density function, while changes in shapes (shape uncertainties) are modelled using a polynomial interpolation with a Gaussian constraint, and can also change the event yields. All the sources of systematic uncertainty applied to the analysis are discussed below. The uncertainty in the integrated luminosity measurement of the 2016 dataset amounts to 2.5% [104]. 
The uncertainty in the evaluation of the pileup in simulation is accounted for by varying the total inelastic pp cross section by ±5% and propagating the effect of the variation to the final yields. The difference between the nominal and the altered distributions is taken as the uncertainty and treated as a shape variation in the fit. Both the integrated luminosity and the pileup uncertainties are separately treated as fully correlated among all processes. Each reconstructed jet is corrected via calibration factors in order to account for the response of the detectors, with dependencies on the geometry, the pileup conditions, and the kinematic properties of the jet [89]. The uncertainties in the jet energy scale and resolution are propagated by varying the jet momenta and, consequently, the missing transverse momentum. The events are reanalyzed in order to extract the appropriate rate and shape variations for the final distributions. An additional uncertainty accounts for the effect of the unclustered energy on p miss T . Each of these uncertainties is treated as fully correlated among all processes. The b tagging and mistagging uncertainties are obtained by varying the corresponding perjet correction factors within their uncertainties [91]. The mistag efficiency uncertainties for jets originating from light partons (u, d, s, and g) are considered to be uncorrelated with the b tagging efficiency uncertainties, while the c quark jet mistag rate uncertainties are varied simultaneously with the b tagging efficiencies. The b tagging and mistagging efficiency uncertainties are conservatively doubled whenever they are extrapolated outside the p T /η range over which the correction factors were derived. Different sources of uncertainties are varied as independent nuisance parameters. 
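One common way to propagate per-jet b tagging correction factors to a per-event weight (often called "method 1a") is sketched below. This is an illustrative assumption about the general technique, not necessarily the exact CMS prescription; the jet representation is hypothetical.

```python
def btag_event_weight(jets):
    """Per-event weight correcting simulated b tagging efficiency to data:
    tagged jets contribute SF = eff_data / eff_sim, while untagged jets
    contribute (1 - SF * eff) / (1 - eff), where eff is the tagging
    efficiency in simulation. Varying the SFs within their uncertainties
    yields the up/down weight variations used as nuisance parameters."""
    w = 1.0
    for j in jets:
        if j["tagged"]:
            w *= j["sf"]
        else:
            w *= (1.0 - j["sf"] * j["eff"]) / (1.0 - j["eff"])
    return w

# With SF = 1 everywhere, data and simulation agree and the weight is unity:
assert btag_event_weight([{"tagged": True, "sf": 1.0, "eff": 0.7}]) == 1.0
```

The untagged-jet factor ensures that the total tagging probability stays normalized: if data tag slightly fewer jets (SF < 1), tagged jets are down-weighted and untagged jets up-weighted accordingly.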
The portion of the b tagging efficiency uncertainty that is correlated with the jet energy scale is evaluated within the overall jet energy scale uncertainty by shifting the b tagging scale factors in the same direction as the jet energy scale shift; the procedure reflects the correlation in the derivation of the correction factors. The uncertainties in the lepton selection efficiency correction factors due to trigger, identification, and isolation efficiencies are applied depending on the lepton p T and η. Propagating these correction factors to the shape of the MVA output affects only the overall normalization. The sum in quadrature of the variations due to the identification, isolation, and trigger efficiencies is therefore included as a single rate uncertainty amounting to 3 (4)% for electrons (muons), treated as correlated among all the final regions. Small discrepancies between data and simulation are observed in control regions enriched in processes involving a vector boson with additional jets. The Z/γ * and W+jets H T distributions are matched to data using corrections derived in a region close to the mass of the Z boson and in the zero b jet control region, respectively. The uncertainties in the derivation of correction factors for the Z/γ * and W+jets processes in the H T distribution are accounted for in the final results. They are assumed to be uncorrelated between the two processes and correlated among the analysis regions. The QCD multijet production is a minor background to the analysis, amounting to about 1% of the total background across all the signal regions, and is therefore ignored in the fit after verification of the simulated prediction. 
For the single-lepton regions, the simulation has been checked in an orthogonal set of events requiring that the p miss T is aligned with the jets, while for the dilepton regions, the QCD multijet production is verified in the same-sign dilepton control regions for each category defined by N jets and N b jets . Theoretical uncertainties related to the PDFs are applied as rate uncertainties to the simulated background samples and account for both the acceptance and the cross section mismodelling [105]. Uncertainties from factorization and renormalization scales in the inclusive cross sections are considered independently for each process for which they are non-negligible. They are estimated by varying each scale independently of the others by factors of 0.5 and 2 with respect to the default values. The matching of the POWHEG NLO tt matrix element calculation with the PYTHIA parton shower (PS) is varied by shifting the parameter h damp = 1.58 +0.66 −0.59 m t [106] within its uncertainties. The damping factor h damp is used to limit the resummation of higher-order effects by the Sudakov form factor to below a given p T scale [106]. An additional source of uncertainty arises from the modeling of additional jets by the event generator in top quark pair production. This uncertainty is estimated in each bin of jet and b jet multiplicity, based on simulated tt samples which are enriched or depleted in initial- and final-state radiation. The initial-state radiation PS scale is multiplied by factors of 2 and 0.5 in dedicated simulated samples, whereas the final-state radiation PS scale is scaled up by √ 2 and down by 1/ √ 2 [63,106]. For each PS scale and h damp perturbation, the uncertainty is evaluated as the relative deviation with respect to the nominal event rates. 
A nuisance parameter is added for each category defined by N jets and N b jets and considered uncorrelated among regions with different N jets and also uncorrelated between the single-lepton and dilepton final states. The normalization of the tt+HF processes, as determined by theoretical calculations [107] and experimental measurements, is affected by an uncertainty of 50% that is applied as a rate uncertainty, in addition to the other tt cross section uncertainties described above. This procedure allows the signal-depleted regions to determine the overall normalization factor, which includes the production cross section, detector acceptance, and reconstruction efficiencies. The limited size of the background and signal simulated samples results in statistical fluctuations of the nominal yield prediction. The content of each bin of each final discriminant distribution is varied by its statistical uncertainty. The Barlow-Beeston lite approach [108,109] is applied by assigning, for each bin, the combined statistical uncertainty of all simulated samples to the process dominating the background yield in that bin. Since all bins are statistically independent, each variation is treated as uncorrelated with any other variation. A summary of the effects of the systematic uncertainties on the event yields, summed over all final states and regions, is provided prior to the fit to data in Table 2; the column Shape of that table reports whether a given uncertainty is considered a shape uncertainty or a rate uncertainty. 

Results 

The statistical interpretation is based on a simultaneous fit of the MVA output discriminators and event yields in the different signal regions described in Section 5. The parameter of interest reflecting the signal normalization, σ H ± B(H ± → tb) = σ(pp → H + tb + pp → H + t)B(H + → tb) + σ(pp → H − tb + pp → H − t)B(H − → tb), and the nuisance parameters specified in Section 6 are encoded in the negative log-likelihood function and profiled in the minimization process. The log-likelihood ratio is used as the test statistic to assess the agreement of data with the background-only hypothesis or the presence of the signal, and the asymptotic approximation is used in the statistical analysis [103,110]. The statistical method used to report the results is the CL s modified frequentist criterion [111,112]. Figure 3 shows the event yields in the subcategories of the analysis after a background-only fit to data. In the regions where the shape of the MVA classifier output is used, the yields are obtained by integrating the distribution, and the correlations across the bins are accounted for in the quoted uncertainties. The contribution of a hypothetical charged Higgs boson with a mass of 500 GeV and σ H ± B(H ± → tb) = 10 pb is also displayed. In the same configuration, Fig. 4 shows the MVA (BDT and DNN) outputs in exemplary signal-region subcategories for the single-lepton (5j/≥3b) and dilepton (3j/3b) final states. The data agree with the background distributions and no significant excess is observed. Exclusion limits are set at 95% confidence level (CL) on σ H ± B(H ± → tb) for m H ± hypotheses between 200 and 3000 GeV. The observed (expected) upper limits with single-lepton and dilepton final states combined are shown in Fig. 5 (left) and listed in Table 3. The single-lepton and dilepton regions have comparable sensitivity in the low-mass regime (≈200 GeV), while the single-lepton regions become increasingly dominant at higher values of the mass hypothesis. Figure 5 (right) shows the results in the MSSM scenario [17] designed to give a mass of approximately 125 GeV for the light CP-even 2HDM Higgs boson over a wide region of the parameter space. 
The M 125 h (χ) scenario [113] is characterized by small gaugino and Higgs/higgsino superpotential masses that are also close to each other; this results in significant mixing between higgsinos and gauginos and in a compressed electroweakino mass spectrum. The phenomenology of the M 125 h (χ) scenario therefore resembles the Type-II 2HDM with MSSM-inspired Higgs couplings compatible with m h ≈ 125 GeV for large masses of the pseudoscalar boson, A.

Summary

A search is presented for a charged Higgs boson decaying into a top-bottom quark-antiquark pair when produced in association with a top quark or a top and a bottom quark. The analyzed proton-proton collision data, collected at √s = 13 TeV with the CMS detector at the LHC, correspond to an integrated luminosity of 35.9 fb −1 . The search uses events with a single isolated electron or muon or an opposite-sign electron or muon pair. Events are categorized according to the jet multiplicity and the number of jets identified as containing a b-hadron decay. Multivariate techniques are used to discriminate between signal and background events, the latter being dominated by tt production. Results are presented for a charged Higgs boson with a mass larger than the top quark mass. Upper limits at 95% confidence level of 9.6 to 0.01 pb are set on the product of the charged Higgs boson production cross section and the branching fraction into a top-bottom quark-antiquark pair. We thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses.
Disorder of Infancy and Childhood: A Review

During the last decade, a substantial scientific base has been established for the psychopharmacology of adult patients. Diagnostic precision for treatment has been facilitated by the continuing revision of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders. The increasing confidence in the data thus generated regarding psychotropic drugs has also increased attention to child and adolescent psychopharmacology, although few controlled studies of psychotropic drugs in the child and adolescent population have been conducted despite their frequent use. This review will focus on three diagnoses whose primary treatment is medication: attention-deficit hyperactivity disorder, functional enuresis, and Tourette's disorder. Using psychotropic drugs to treat children and adolescents often requires a very different approach than when the same drugs are used for psychiatric disorders among adults. Most adults given psychotropic drugs suffer from major psychiatric disorders such as major depression. Despite well-defined diagnostic criteria, many children are given psychotropic drugs merely to control a group of symptoms or behaviors in order to facilitate the child's learning and development. The psychiatric assessment of a child requires obtaining information from the child, the parents or caretakers, and teachers. The overall diagnostic impression is formed from psychiatric, social, neuropsychologic, and educational evaluations. Before the initiation of psychotropic drugs, the child and family need to be familiar with the risks and benefits of drug therapy, any alternate therapies, and possible adverse effects, including drug withdrawal. In addition, idiosyncratic effects should be presented. The risks of untreated illness should also be discussed. Pharmacotherapy for children and adolescents is usually administered in conjunction with other therapies (e.g., psychotherapy, family therapy, or behavioral therapy).
Medication should not be used in place of other therapies or merely because other therapies have failed. Careful documentation of baseline symptoms is necessary before initiating drug therapy to identify the responsive symptoms and establish a realistic expectation for treatment outcome.

Attention-deficit hyperactivity disorder (ADHD)

The three essential features of ADHD are signs of developmentally inappropriate inattention, impulsivity, and hyperactivity. Inattention typically involves the child failing to finish tasks, not seeming to listen, being easily distracted, having difficulty concentrating on schoolwork, and having difficulty sticking to a play activity. Impulsivity is manifest as often acting before thinking, shifting excessively from one activity to another, difficulty in organizing work, needing much supervision, frequently calling out in class, and difficulty awaiting a turn in games or group situations. Hyperactivity typically includes excessive running about or climbing on things, difficulty sitting still or staying seated, excessive movement during sleep, and acting as if "driven by a motor." Symptom presence and severity vary with the situation; the disorder may not be evident in all settings or even in the same setting at all times [1]. The onset of ADHD is typically by the age of 3 and must occur by age 7, though the disorder may not require professional attention until the child enters school. Approximately 10% of boys and 2% of girls have ADHD, with the general prevalence in school-age children estimated at 6%.

Pathophysiology

ADHD involves multiple etiologies. Family studies indicate a genetic component [2]. Early investigators suggested that children with ADHD are chronically underaroused and that stimulants induce a state of normalization. Other investigators have suggested the opposite: that children with ADHD are overaroused and that stimulant drugs calm the patients.
More recently it has been proposed that ADHD is not a high- or low-arousal state but a dysequilibratory disorder of the frontal-neostriatal dopamine systems with widely varying states of arousal. Children with ADHD tend to have phasic outbursts of activity and inactivity, resulting in insufficient alertness during dull and repetitive tasks and overarousal at other times, resulting in ineffective performance. Stimulant drugs may serve as a homeostat to stabilize responses and thereby prevent the spontaneous fluctuations that are characteristic of ADHD [3].

Pharmacological strategies for the treatment of ADHD

Stimulants: Dextroamphetamine, methylphenidate, and pemoline represent the most effective drug treatment options. The stimulants decrease motor activity and impulsivity and increase attention span. Efficacy is optimal when the diagnosis is clear-cut, classic target symptoms are present, and the child is of school age rather than preschool age [4]. In addition to behavioral effects, stimulants may improve cognitive performance. For example, reading, memory, and arithmetic performance is often significantly improved. Improved cognitive performance possibly results from an overall increase in attention and concentration, not from a specific effect on cognition [5]. Amphetamines were the first drugs to be used for ADHD. Dextroamphetamine improves symptoms in a majority of patients and is significantly more effective than placebo. Methylphenidate is also an effective drug for ADHD. Efficacy studies report a 70% to 90% response rate as well as clear superiority over placebo [6]. Pemoline, the most recently introduced effective drug, is more effective than placebo and either slightly less effective than or equal in efficacy to dextroamphetamine and methylphenidate. Two other stimulant drugs have been tried but found inferior in efficacy to dextroamphetamine, methylphenidate, and pemoline.
Deanol looked promising in open clinical trials, but subsequent controlled studies found its efficacy only slightly greater than placebo. Caffeine also showed early promise, but most controlled studies failed to establish efficacy [7,8].

Tricyclic Antidepressants (TCAs): Imipramine and desipramine are the most systematically studied TCAs in the treatment of ADHD. Overall, imipramine and desipramine are more effective than placebo for hyperactivity, with minimal to no drug effect on the other symptoms of ADHD [9][10][11][12]. TCAs are inferior in efficacy compared with stimulants for the treatment of ADHD. Patients unresponsive to stimulants have shown the greatest therapeutic response to imipramine and desipramine, suggesting these patients may represent a subpopulation of ADHD. Children with ADHD and concurrent symptoms of conduct disorder, depression, or anxiety may respond better to a TCA compared with stimulants, although several studies have shown these additional symptoms tend to respond poorly [13]. Antidepressants are secondary alternatives to the stimulants for treatment of ADHD. Potential benefits of TCAs in comparison with stimulants include a longer duration of action, less sleep disturbance, reduced risk of abuse, and a lack of growth suppression, whereas their negative aspects include decreased efficacy, tolerance, many adverse effects, and the risk of death in overdose.

Monoamine Oxidase Inhibitors (MAOIs): Because stimulants inhibit the enzyme monoamine oxidase, MAOIs have been evaluated for their potential efficacy in ADHD. Tranylcypromine and clorgyline, an investigational drug specific for the MAO type-A isoenzyme, have been compared with dextroamphetamine in the treatment of ADHD [14]. The MAOIs' onset of activity and clinical efficacy were indistinguishable from dextroamphetamine. Tranylcypromine and clorgyline were administered in 5mg doses in the morning and at noon. Dextroamphetamine 10mg was administered in the morning and 5mg at noon.
Caretakers and children were instructed on the low-tyramine diet and the need to avoid the use of sympathomimetic drugs. The adverse effects of MAOIs were mild sleepiness and decreased appetite. No significant changes in orthostatic blood pressure or pulse were observed. Further investigations are necessary to verify these reports of efficacy and safety in the treatment of ADHD. The ability of children to follow a low-tyramine diet in unsupervised situations is a major consideration in the use of MAOIs.

Other treatment options

Bupropion, a monocyclic antidepressant, is unique as a mild dopamine uptake inhibitor with no direct effect on serotonin, norepinephrine, or monoamine oxidase. Bupropion was compared with placebo in a 6-week controlled trial in 30 children with ADHD [15]. Bupropion was initiated at 3mg/kg and titrated to 6mg/kg over 15 days of therapy. The response to bupropion was better than placebo on the overall assessment as well as on a subsection of the teacher's questionnaire on hyperactivity. Bupropion was not more efficacious than placebo on the parent's questionnaire or the teacher's questionnaire on conduct. Future investigations are required to determine the role of bupropion in the treatment of ADHD. Clonidine, a central α2-adrenergic agonist, inhibits noradrenergic activity by decreasing the release of norepinephrine from the presynaptic neuron. Controlled studies suggest that clonidine is more effective than placebo in reducing the hyperactivity and impulsivity in children with ADHD [16]. Clonidine was initiated at 0.05mg/d and increased by 0.05mg every other day until a divided daily dose of 0.004 to 0.005mg/kg was administered. Fenfluramine, an amphetamine derivative, has dose-dependent effects on serotonin activity: low doses result in increased serotonin activity, whereas high doses result in decreased serotonin activity. Fenfluramine also has central dopamine-releasing and norepinephrine-reducing properties.
Despite fenfluramine's chemical similarity to amphetamines, a controlled crossover trial of fenfluramine and dextroamphetamine reported no therapeutic activity of fenfluramine in ADHD [17].

Conclusion: At this time, the best approach to treating ADHD is either dextroamphetamine or methylphenidate for patients with moderate to severe symptomatology. Pemoline remains a secondary treatment option for those who cannot tolerate multiple daily dosing of first-line drugs because of insomnia or loss of evening appetite.

Functional Enuresis

The essential feature of functional enuresis is repeated involuntary or intentional voiding of urine by day or at night not caused by any physical disorder. Nocturnal enuresis typically occurs 0.5 to 3 hours after sleep onset. Children with daytime enuresis usually have nocturnal enuresis as well. Rare physical causes of enuresis (e.g., diabetes, seizure disorders, or urinary tract infections) should be ruled out. Diagnostic criteria for functional enuresis have been defined as involuntary voiding of urine at least twice a month for children between 5 and 6 years of age, and once per month for older children. There are two diagnostic categories of enuresis, primary and secondary. Primary functional enuresis occurs in 80% of children with functional enuresis and refers to children who have not experienced a 1-year period of continence. In the secondary type, enuresis follows a 1-year period of urinary continence. At age 5, prevalence is 7% for boys and 3% for girls; at age 10, it is 3% for boys and 2% for girls. Most children will "outgrow" functional enuresis, as at age 18 only 1% of boys and virtually no girls still have the condition [18]. Factors that predispose a child to either type of enuresis include delayed or lax toilet training, small bladder capacity, and psychosocial stress. The psychiatric disorders most commonly associated with enuresis are depression and developmental delays.
In addition, children with nocturnal enuresis do not have the normal nighttime increase in antidiuretic hormone (ADH) [19]. Urination is not associated with a particular sleep stage; it typically occurs in the deeper stages of non-rapid eye movement (non-REM) sleep but can also occur during the REM stage of sleep [20].

Tricyclic antidepressants

Drug therapy is reserved for those children who have not responded to an adequate trial of dry-bed training or the bed-wetting alarm methods of therapy. Exceptions to the secondary role of drug therapy are when the child is at risk of physical or psychological harm from the caretaker. TCAs are rapidly effective in the treatment of enuresis, whereas dextroamphetamine, MAOIs, and anticholinergic drugs are ineffective. The exact mechanism of action of TCAs in treating enuresis is unknown; however, previous theories (elimination of stage 4 sleep and peripheral anticholinergic effects) have been ruled out as explanations. Imipramine is the most studied TCA, although others are also effective. The initial dose of imipramine should be 25mg at bedtime, with weekly increases of 25mg, if necessary. A nightly dose greater than 75mg is rarely necessary. The effect is often immediate and is usually evident within 7 days. Drug plasma concentrations of imipramine and desipramine do correlate with clinical response, and true nonresponders exist in spite of adequate plasma concentrations [21]. Imipramine efficacy is about 85%; one half of patients experience total elimination of bed-wetting, and the other half a significant decrease in the number of episodes. An initially effective dose often becomes ineffective in 2 to 6 weeks, but increasing the dose usually reestablishes control. One week is needed to evaluate the efficacy of a new dose.

Desmopressin

Desmopressin, a synthetic analogue of the natural human antidiuretic hormone arginine vasopressin, is available in a nasal spray for the treatment of nocturnal enuresis.
The mechanism of action is an antidiuretic effect that raises overnight urinary osmotic concentration by increasing water reabsorption and reducing the volume of urine entering the bladder. The initial recommended dose is 20µg at bedtime, increasing to 40µg per night after 3 days if there is no response. Some patients may respond to as little as 10µg. Half of each dose is administered per nostril. About 10% of the dose of desmopressin is absorbed from the nasal mucosa, and plasma concentration reaches a maximum about 45 minutes after administration. The biologic half-life is 4 to 6 hours, and the duration of action varies from 6 to 24 hours [22]. Children treated with desmopressin compared with enuresis alarms are significantly dryer during the first few weeks of therapy; after 3 months the therapies are equally efficacious, but immediate relapse after discontinuation of therapy is markedly higher in the drug group than in the enuresis alarm group. The best response rate to desmopressin appears to occur in children over the age of 9. Patients with colds or allergies that affect the nasal mucosa may have a less than optimal response to desmopressin. Rare adverse effects include irritation of the nasal mucosa, epistaxis, rhinitis, nasal congestion, transient headache, chills, dizziness, nausea, and abdominal pain. Water intoxication, hyponatremia, and tonic-clonic seizures have also been reported [23].

Conclusion

Both TCAs and desmopressin are effective in the treatment of nocturnal enuresis. Drug therapy selection for the individual patient is based on the drug adverse effect profiles, ease of administration, and cost. Overall, imipramine has a higher incidence of adverse effects than does desmopressin, and the risk of accidental overdose with a TCA is of concern. In contrast, desmopressin nasal spray requires a specific administration technique and is more expensive than imipramine.
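The desmopressin kinetics quoted above (plasma peak about 45 minutes after administration, biologic half-life of 4 to 6 hours) imply simple first-order elimination. A minimal sketch of that decay, for illustration of the arithmetic only:

```python
def fraction_remaining(t_hours, half_life_hours):
    """First-order elimination: fraction of the peak plasma concentration
    remaining t hours after the peak, for a given biologic half-life."""
    return 0.5 ** (t_hours / half_life_hours)

# With the 4-6 hour half-life quoted in the text, 8 hours after the peak
# roughly 25-40% of the peak concentration remains:
low = fraction_remaining(8.0, 4.0)   # 0.25 of peak
high = fraction_remaining(8.0, 6.0)  # ~0.40 of peak
```

The spread of the half-life range alone accounts for much of the 6- to 24-hour variation in duration of action noted in the text.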
If drug treatment needs to be given for longer than several weeks, then attempts to discontinue the drug every 3 to 6 months are advisable, as spontaneous remission occurs at a rate of 15% per year. Before drug treatment begins, an accurate baseline record of bed-wetting frequency must be recorded.

Tourette's Disorder

This rare disorder of the central nervous system is a lifelong syndrome of recurrent, involuntary, repetitive, rapid, and purposeless motor movements of multiple muscle groups, generally accompanied by involuntary vocalizations (throat clearing, coughing, hissing, barking-like noises, snorting, echolalia, and obscenities); any or all may be voluntarily suppressed from minutes to hours.

Pathophysiology

An understanding of the various proposed etiologies of this disorder is necessary to allow for an understanding of the variety of treatment approaches used. The successful use of haloperidol by Seignot in France in 1961 was rapidly followed by other successful reports. This led researchers to believe that the syndrome was a disorder of dopaminergic activity in the corpus striatum. The frequent exacerbation of illness in patients previously well controlled on haloperidol led researchers to attempt other drug therapies. Success with other treatment methods has modified the simplistic dopamine hypothesis. The currently accepted theory is that Tourette's is a genetically based disorder of central neurotransmitter activity; 47% of females and 28% of males have a positive family history [24]. The disorder involves an imbalance in the interaction of the dopaminergic, serotonergic, and noradrenergic systems. This multiple-system etiology best explains the success of a variety of effective treatment options.

Haloperidol

Haloperidol remains the treatment of choice for Tourette's disorder, as it is usually effective at low dosage and is well tolerated.
Despite its long history of use, there is only one adequately controlled study supporting the efficacy and superiority of haloperidol over placebo [25]. Haloperidol is effective in decreasing the frequency of tics but has limited effect on comorbid disorders such as ADHD. Therapy with haloperidol should be initiated at very low doses of 0.025-0.05mg/kg/d and increased gradually to avoid extrapyramidal side effects and excessive drowsiness. The daily amount should be divided into two or three doses and increased by small increments over a 2- to 3-week period until symptoms are controlled. The dosage should be readjusted to the lowest level that will provide symptom control with the least amount of troubling side effects. Symptoms may regress within 24 to 48 hours after therapy is initiated and may disappear with proper dosage adjustments [26]. Many patients are maintained on daily doses smaller than 10mg of haloperidol for long periods of time, but the dosage required may vary between 6 and 180mg/d. Such treatment generally results in improvement in about 90% of patients.

Pimozide

Approved for marketing in the United States in 1984 as an orphan drug, pimozide represents an alternative to haloperidol for Tourette's disorder [27]. Pimozide, a diphenylbutylpiperidine, differs structurally from the phenothiazines and butyrophenones. Pimozide possesses selective central dopamine-2 receptor blockade and calcium channel antagonist activity with no effect on noradrenergic receptors. Its elimination half-life in children with Tourette's disorder is approximately 66 hours, with a variable range of 24 to 142 hours. The metabolites of pimozide are inactive. Most efficacy studies show pimozide to be equal or slightly less effective than haloperidol. The efficacy of pimozide may be limited by the maximum dosage requirements from the Food and Drug Administration.
Clonidine

Clonidine is used with some success in patients who do not respond to or cannot tolerate haloperidol or pimozide. Additional effects of clonidine on the serotonergic, dopaminergic, and opioid systems are mediated through its central adrenergic agonist effects. The efficacy of clonidine in Tourette's disorder is controversial. For some patients, the response is limited to attentional and behavioral problems, with no change in the frequency of tics [28]. Clonidine is generally well tolerated as long as treatment is initiated with a single test dose (generally around 0.05mg) given in the morning and blood pressure is carefully monitored. If the test dose is tolerated, treatment is begun with a 0.05mg daily dose, titrated upward every 4 to 7 days to the maintenance dose of 0.15mg administered in divided daily doses. This dose may need to be further increased slowly over several weeks to control symptoms. This treatment approach is effective, with a gradual onset of action over 2 weeks to several months, in a subpopulation of patients [29,30].

Conclusion

At this time, the best approach for the treatment of Tourette's disorder is haloperidol at the lowest dose possible, with clonidine or pimozide as secondary agents in those patients not responding to or intolerant of haloperidol.
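The clonidine titration for Tourette's disorder described above (start at 0.05mg/d, titrate upward every 4 to 7 days to a 0.15mg/d maintenance dose) can be sketched as simple arithmetic. The 0.05mg step size is an assumption, since the text does not state the increment; this is an illustration of the schedule, not dosing guidance.

```python
def titration_schedule(start_mg=0.05, step_mg=0.05, target_mg=0.15,
                       days_per_step=4):
    """List of (day, daily dose in mg) pairs for a stepwise titration:
    begin at start_mg and raise the dose by step_mg every days_per_step
    days until target_mg is reached. Illustration only."""
    day, dose, schedule = 1, start_mg, []
    while dose < target_mg:
        schedule.append((day, round(dose, 2)))
        day += days_per_step
        dose += step_mg
    schedule.append((day, round(target_mg, 2)))
    return schedule

# Fastest schedule (a step every 4 days): maintenance reached on day 9
plan = titration_schedule()
```

With 7-day steps instead, the same arithmetic puts the maintenance dose at day 15, consistent with the "every 4 to 7 days" range in the text.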
Wood Ashes from Grate-Fired Heat and Power Plants: Evaluation of Nutrient and Heavy Metal Contents

Ashes from biomass heat (and power) plants that apply untreated woody biofuels may be suitable for use as fertilizers if certain requirements regarding pollutant and nutrient contents are met. The aim of this study was to examine if both bottom and cyclone ashes from 17 Bavarian heating plants and one ash collection depot are suitable as fertilizers (n = 50). The range and average values of relevant nutrients and pollutants in the ashes were analyzed and evaluated for conformity with the German Fertilizer Ordinance (DüMV). Approximately 30% of the bottom ashes directly complied with the heavy metal limits of the Fertilizer Ordinance. The limits were exceeded for chromium(VI) (62%), cadmium (12%) and lead (4%). If chromium(VI) could be reduced by suitable treatment, 85% of the bottom ashes would comply with the required limit values. Cyclone ashes were high in cadmium, lead, and zinc. The analysis of the main nutrients showed high values for potassium and calcium in bottom ashes, but also relevant amounts of phosphorus, making them suitable as fertilizers if pollutant limits are met. Quality assurance systems should be applied at biomass heating plants to improve ash quality if wood ashes are used as fertilizers in agriculture.

Introduction

Combustion of wood in heat (and power) plants generates solid residues in the form of ashes [1,2]. In the Federal State of Bavaria (i.e., Southeast Germany), a total of 30,000 to 60,000 t/a of wood ashes from untreated wood accumulates each year from plants with an installed capacity of more than 1 MW therm (calculated from the 2018 Energy Wood Market Report of the Bavarian State Institute of Forestry (LWF)) [3]. Due to the physical and chemical properties of these combustion by-products, suitable utilization strategies might be recommended for their use as raw materials in the bioeconomy.
Depending on the point of origin of wood ashes in the heat (and power) plant, a distinction can be made between different ash fractions. The ash accumulating in the boiler is called "bottom ash" or "coarse ash". In most cases, the ash from the heat exchangers is also considered part of the bottom ash. After the hot flue gas passes through the heat exchanger, the air is usually cleaned by a cyclone in which the "cyclone ash" (also called "coarse fly ash") is separated. If the plant has an electrostatic precipitator, a fabric filter or a flue gas condensation system, a third ash fraction, i.e., the so-called "filter ash" (also called "fine fly ash") or the "condensate sludge", is generated [4]. The following article focuses on bottom ash and cyclone ash from grate-fired boilers as these are the most common ash fractions in Bavarian heat (and power) plants. The chemical composition of individual ash fractions depends on the fuel quality and the plant technology [1,2]. Chemical elements such as plant nutrients (e.g., Ca, Mg) or pollutants such as heavy metals vary in wood fuels depending on the species, but also on bark content, the share of green biomass (i.e., needles/leaves) and growing conditions. Representative data on ashes from plants according to the current state of the art are therefore missing. However, these data are necessary to estimate the bioeconomic potential of an increased ash utilization. Due to the high solubility of calcium oxide (CaO) in the ash, a pH shock is feared when spreading in the forest, which negatively affects the soil flora and soil fauna. Therefore, the ash is often pretreated before application. This process, the so-called "ash stabilisation" or "ash hardening", includes the addition of water followed by a storage period of several months. Moistening and contact with atmospheric carbon dioxide cause a variety of chemical transformations.
Most importantly, the easily soluble calcium oxide (CaO) transforms into the poorly soluble calcium carbonate (CaCO3) [12,13,25,39]. In large piles, this reaction occurs only on the surface if there is no mixing [13,29]. In Germany, wood ash is mixed with lime dolomite and is then used for soil improvement on arable and forestry land [12,42,43]. Wilpert (2020) [12] points out that the use of wood ash-lime mixtures is particularly recommended where an improved potassium supply is desired and the alkaline effect is present. Since the solubility of the alkali salts remains high even after ash hardening, the hardened ash should be protected from rain during storage in order to prevent nutrient leaching. Another positive effect of humidifying the ash and storing it for several months is the conversion of any toxic chromium(VI) into the harmless chromium(III). Schilling (2020) [11] and Polandt-Schwandt (1999) [10] observed this effect in the case of ash from combustion plants with a wet ash discharge system, where the hot ash was placed in a water bath and then discharged moist. Pelletizing and granulation of wood ash also serve to reduce the reactivity of the wood ash. Auxiliary materials such as cement or organic binders can be used in this process [29,44]. Pelletized or granulated ash can be applied with conventional fertilizer spreaders [41]. Moistening of the ash is not recommended if the ash is to be used as a substitute for quicklime, e.g., in road construction. In this case, the ash must be stored dry [34]. Many authors emphasize the liming effect of ashes and mixtures with ashes on agricultural and forest soils [12,25,29,45]. Katzensteiner et al. (2011) [46] describe the plant availability of calcium and potassium from wood ashes as "high", magnesium availability as "medium" and phosphate availability as "low". "Low" in this context means that less than 10% of the total phosphate from wood ashes is available to the plant in the year of application.
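The hardening reaction described above (CaO + CO2 → CaCO3) can be quantified with standard molar masses. The figures below are straightforward stoichiometry, not measured values from the paper:

```python
# Standard molar masses in g/mol
M_CAO, M_CO2, M_CACO3 = 56.08, 44.01, 100.09

def full_carbonation(mass_cao_kg):
    """Mass of CaCO3 formed and CO2 bound if the given mass of free CaO
    in the ash is fully carbonated (CaO + CO2 -> CaCO3)."""
    mol = mass_cao_kg / M_CAO  # kmol, since the input is in kg
    return mol * M_CACO3, mol * M_CO2

# Per 100 kg of free CaO: roughly 178 kg CaCO3 formed, ~78 kg CO2 bound
caco3_kg, co2_kg = full_carbonation(100.0)
```

The mass gain also explains why hardened ash weighs noticeably more than fresh ash; in large unmixed piles, as the text notes, only the surface layer reacts this far.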
In pot experiments, Kebli et al. (2017) [47] and Maltas et al. (2014) [48] demonstrated the uptake of potassium from wood ash in ryegrass and sunflowers. In the case of sunflowers, P uptake from the ashes was also observed. An important prerequisite for approval as a fertilizer in Germany is compliance with the heavy metal limits of the German Fertilizer Ordinance (DüMV) [43] and, if applicable, the minimum nutrient contents required, depending on the fertilizer. If the ash is mixed with biowaste or compost, the limit values of the German Biowaste Ordinance (BioAbfV) [49] must also be complied with in certain cases [32]. Table 1 summarizes these limit values. The DüMV also contains limit values for organic compounds (perfluorinated tensides, dioxins and dioxin-like substances). These compounds are usually absent in ash from biomass heating (and power) plants [11] and were not investigated in this study. Currently, both bottom ashes and cyclone ashes (if the cyclone is not the last precipitation unit in the plant) may be used for fertilizer production according to the DüMV. Compared to the application of wood ashes on farmland, 50% higher heavy metal limit values apply to the application on forestry land. The Cr(VI) limit only applies to ash fertilization on arable land. Much uncertainty remains around the variability of wood ashes among plants or within the same plant and which of these ashes might be suitable for application as fertilizers in agriculture or for liming of forest soils. The aim of this study was to assess the range and average values of nutrients and pollutants in ashes from individual Bavarian biomass heat (and power) plants. This is an important prerequisite for an increase in ash utilization, as it is in line with the Bavarian bioeconomy strategy [50].
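A compliance check of measured heavy metal contents against limit values of the kind discussed above can be sketched as follows. The limit numbers used here are placeholders, not the actual DüMV values (those are listed in Table 1 of the paper); the 1.5x factor reflects the 50% higher limits for forestry land, and the Cr(VI) limit is applied to arable land only, as stated in the text.

```python
# Placeholder limits in mg/kg d.b. -- NOT the real DüMV values
LIMITS_ARABLE = {"Cd": 1.5, "Pb": 150.0, "CrVI": 2.0}

def exceedances(sample, limits=LIMITS_ARABLE, forestry=False):
    """Return {element: (measured, applicable_limit)} for every element
    above its limit. On forestry land the limits are 50% higher and the
    Cr(VI) limit does not apply (arable land only)."""
    factor = 1.5 if forestry else 1.0
    result = {}
    for element, limit in limits.items():
        if forestry and element == "CrVI":
            continue  # Cr(VI) limit applies to arable land only
        measured = sample.get(element)
        if measured is not None and measured > limit * factor:
            result[element] = (measured, limit * factor)
    return result

# Hypothetical bottom ash sample (mg/kg d.b.)
sample = {"Cd": 2.0, "Pb": 90.0, "CrVI": 3.1}
```

With these placeholder numbers, the sample would fail the arable-land check on Cd and Cr(VI) but pass the forestry check, mirroring the asymmetry between the two application routes described above.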
For this purpose, mainly bottom ashes but also mixtures of bottom and cyclone ashes (due to individual ash handling at certain plants) and pure cyclone ashes from heat (and power) plants with a thermal output of more than 1 MW were sampled and analyzed. The cyclone ashes are used for comparison with the bottom ashes and for an estimate of how the distribution of ash constituents could be influenced by plant operation.

Materials and Methods

A total of 17 biomass heat (and power) plants with an installed thermal capacity between 0.8 and 31.6 MW therm, as well as one centralized ash collection depot of several small heating plants (plant ID 18), were selected for sampling. Table 2 gives an overview of the sampled heat (and power) plants with thermal outputs, fuels used, ash samples and plant IDs. The quality of the ashes varied due to different fuels, plant types or operating parameters of the furnace. At 17 sites, pure bottom ash could be sampled. At one plant (plant ID 1, TFZ), pure cyclone ash was sampled too. At five points, mixtures of bottom ash and cyclone ash could be sampled, leading to a total of 50 ash samples (Table 2). Depending on the ash management procedure, the storage duration of bottom ashes at the heating plants varied considerably and ranged from a few days to several weeks. For ten plants, sampling took place in two different heating seasons (winter 2018/2019 and winter 2019/2020). Seven plants and the central ash collection depot were sampled only once (n = 1). At the heating plant of TFZ (plant ID 1), a series of a total of 20 ash samples was obtained over an entire heating period (12 × bottom ashes, 8 × cyclone ashes). For the general analysis of variability between plants, mean values of ash quality per plant were calculated, while individual samples were used to assess heterogeneity within one plant. Sampling was carried out directly at the heating plants in accordance with LAGA PN 98 [51].
From several individual samples of the ashes stored at their respective locations, a representative sample for laboratory analysis had to be prepared. The minimum volume of an individual on-site sample and of the laboratory sample prepared by sample combination, homogenization and sample division depends on the maximum grain size of the ash and was between 0.5 and 10 L. Fine-grained ashes have a lower required minimum volume than coarse-grained ashes. The minimum number of incremental samples results from the basic quantity of stored bottom ash or cyclone ash. For example, up to a volume of 30 m³, at least eight individual samples should be taken according to LAGA PN 98. During sampling, the individual samples were recorded photographically (Figure 1). To obtain the laboratory sample from the individual samples, the samples were combined and thoroughly mixed with a shovel. After that, the mixed sample was divided into four even parts. Two of the four parts were discarded. The two remaining quarters were combined again, carefully mixed, and the laboratory sample of approx. 8 L was taken from the mixture. Each sampling was documented in a sampling protocol. The TFZ heating plant was an exception regarding sampling. Here, twelve individual samples of bottom ash and eight individual samples of cyclone ash were collected to assess the variability of this plant over a complete heating season. To compare variation among heating plants, results from the bottom ash analyses were combined mathematically by calculating a theoretical mixed sample for the entire heating season. The chemical analyses were performed by Wessling GmbH, Neuried, Germany. The analysis included the following ash components with a fertilizing effect: the macronutrients calcium (Ca), phosphorus (P), potassium (K) and sulfur (S), as well as the micronutrients cobalt (Co), iron (Fe), manganese (Mn), molybdenum (Mo), sodium (Na) and selenium (Se).
The following heavy metals were analyzed: arsenic (As), lead (Pb), cadmium (Cd), chromium (Cr), both as total content and as chromium(VI), copper (Cu), nickel (Ni), mercury (Hg), thallium (Tl), and zinc (Zn). In addition, the pH, moisture content and loss on ignition of the ashes were measured. Elemental concentrations of the ash were determined mostly according to ISO standards. The dry residue was determined according to DIN EN 12879 [52]. The ash samples were dissolved with aqua regia (DIN ISO 11466 (1997-06)) [53] and analyzed by inductively coupled plasma mass spectrometry (ICP-MS) (DIN EN ISO 17294-2 (2005-02)) [54]. Cr(VI) was determined according to DIN 19734 (1999-01) [55]. The pH value in the solid was analyzed according to DIN ISO 10390 (2005-12) [56] and the alkaline active components according to VDLUFA Method Book Volume II.2, Method 4.5.1 [57]. Table 2 summarizes the results for the bottom ashes and the mixtures of bottom and cyclone ashes. The results are given per heating plant and ordered from left to right by ascending boiler output. In this order, the different combustion plants were also assigned their IDs. All plants except one used wood chips from natural wood as fuel; one plant used wood pellets. The analysis of the ashes included heavy metals, nutrients, pH, moisture content and loss on ignition. The results refer to the dry mass and are given either as concentrations (mg/kg d.b.) or as mass fractions (wt% d.b.).

Quality of Bottom Ash

First, the results for the heavy metals in bottom ashes are evaluated in more detail (Figure 2), followed by an analysis of the nutrient contents (Figure 3). The mean values for the relevant chemical elements and the physical ash properties per plant are given in Table 2. The results refer to dry basis (d.b.) and, for each element, the individual results are shown as a cloud of points and as a boxplot with minimum and maximum.
The twelve ash samples from the TFZ heating plant are included in the evaluation as one mean value to avoid weighting effects. This results in a total number of n = 26 for the evaluation of the variation of bottom ashes among plants. In addition, the limit values for agricultural and forestry use according to the German DüMV and the limit values of the German BioAbfV are indicated in Figure 2. Table 3 gives the results in numbers. The limit values of the DüMV were exceeded in one sample for Pb (3%) and in three samples for Cd (8%). These exceedances apply to the DüMV limit values of both agricultural and forestry applications, although a 50% higher heavy metal content is permissible for forestry applications (Table 1). Schilling (2020) [11] examined 334 ash samples from 12 plants. He found exceedances for Pb in 1.9% of cases and for Cd in 1.6% of cases. The author documented exceedances for Cr(VI) in just 6% of cases, which deviates strongly from the values in the present study. For application on agricultural land, a limit value for Cr(VI) of 2.0 mg/kg applies according to the DüMV. This limit was exceeded by 62% of the examined bottom ashes. Additionally, Reichle et al. (2009) [17] point out that the Cr(VI) limit is frequently exceeded in bottom ash from wood combustion. The authors recommend paying particular attention to Cr(VI) during the recycling of wood ash. Ten heating plants were sampled twice in the current study, i.e., during winter 2018/2019 and during winter 2019/2020, and only two heating plants complied with the limit value for Cr(VI) in both samples. These were plants with a wet ash removal system (plant IDs 4, 14 and 16), whereas all other plants used dry ash removal systems. For the plants that used dry ash removal, at least one sample per heating plant exceeded the limit value for Cr(VI). In three plants, the limit value was exceeded both times. Moistening of bottom ashes provides the conditions for a chemical reduction of Cr(VI) to Cr(III) [10].
Pohlandt-Schwandt (1999) [10] and Schilling (2020) [11] state that wet bottom ashes are low in Cr(VI). Therefore, moistening of bottom ashes is already often applied as a quality management tool to improve bottom ash quality [10,58]. In contrast to the DüMV, there is no limit value for Cr(VI) in the BioAbfV. However, some of the other limit values in the BioAbfV are lower than in the DüMV, and some bottom ashes exceeded the values for copper (19%, n = 5), nickel (8%, n = 2) and zinc (15%, n = 4). The DüMV limit value for Cd was exceeded by two plants (plant IDs 6 and 11). At the heating plants sampled twice, the BioAbfV limits for nickel and copper were each exceeded in only one of the two samples, while the BioAbfV limit value for zinc was exceeded in both samples at one heating plant. Zn also exceeded its limit in all three ash samples in which Cd was exceeded. Kovacs et al. (2018) [9] and Schilling (2020) [11] show that there is a negative correlation between the concentration of volatile metals such as Cd, Pb or Zn and the temperature in the combustion chamber. Therefore, combustion at higher temperatures could probably solve the problem of Cd in bottom ash. Schilling (2020) [11] observed a complete volatilization of Cd at an average temperature above 750 °C in the combustion chamber; the boiling temperature of Cd is 767 °C. In total, only eight of the bottom ashes sampled directly complied with all heavy metal limit values according to the DüMV and the BioAbfV (Table 1). Assuming that Cr(VI) can be sufficiently reduced by suitable treatments, e.g., by moistening the ashes [10,11], 85% of the ashes (n = 22) complied with the limit values of the DüMV. A total of 54% of the ashes (n = 14) also complied with the requirements of the BioAbfV regarding the maximum permissible heavy metal concentrations. Bottom ashes contain many nutrients that are relevant for plant growth [12,25,27,29].
The sum of the basic components (metal oxides and carbonates [25]) and the individual values for calcium (calculated as CaO), potassium (calculated as potassium oxide, K2O), magnesium (calculated as magnesium oxide, MgO) and phosphorus (calculated as phosphate, P2O5) are shown in Figure 3 as point clouds and box plots. Table 4 shows the results in figures together with the contents of the additional trace nutrients and other parameters. First, a comparison is made with publications on ash quality from Germany and Austria, since there the wood qualities and the technology of the CHP plants are quite similar to the plants investigated here. Reichle et al. (2009) [17] reported average nutrient contents for bottom ash of 25 to 45 wt% for calcium oxide (CaO), 3 to 6 wt% each for magnesium oxide (MgO) and potassium oxide (K2O), and 2 to 3 wt% for phosphate (P2O5). In the current study, higher values were measured, especially for potassium oxide. Here, the mean value is 6.3 wt% (d.b.) and 50% of the analytical results were between 4.5 and 7.5 wt% (d.b.). Obernberger (1997) [59] also gives a higher value for K2O than Reichle et al. (2009) [17], with 6.7 wt% (d.b.) as the average content of potassium oxide in 12 bottom ashes from the combustion of wood chips. The mean phosphate content in Obernberger (1997) [59] is 3.6 wt% (d.b.) and thus about one percentage point higher than the results in this study. The results indicate that the nutrient contents in bottom ash from wood combustion can fluctuate over a wide range of values. The pH values of the ashes examined vary between pH 12.3 (minimum) and pH 13.3 (maximum) (Table 2). They thus fluctuate quite closely around the mean value of pH 12.8 and lie within the range of pH 11 to pH 13 given by Reichle et al. (2009) [17] for wood ashes. Most of the ashes were very dry; the median moisture content is 0.5 wt%. Nurmesniemi et al. (2012) [16] also report this value for bottom ashes.
Only the two plants with wet ash removal raised the mean moisture content to 6.2 wt%. For the plants with a wet ash removal system, the moisture content varied between 21 and 33 wt%. Most of the ashes were completely combusted and showed only a low loss on ignition, which amounted to 0.6 wt% on average and reached a maximum of 3.6 wt%. Thus, all ashes remained below the value of 5 wt%. Therefore, it can be assumed that there are no organic pollutants in the ash [17]. Looking at ash qualities that have been published beyond Germany and Austria, similar contents for CaO, MgO, P2O5 and K2O have been reported by Okmanis et al. (2015) [41] and Ingerslev et al. (2011) [21]. Considerably lower nutrient levels have been published by Nurmesniemi et al. (2012) [14] and Hannam et al. (2018) [17] for bottom ashes. Except for Cr (partly originating from the steels in the combustion chamber [21]), the ash constituents originate from the fuels [8,21]. These differences can therefore be partly due to different fuel compositions. However, the main causes are differences in combustion technology and different temperatures in the combustion chamber. Table 5 correlates the bottom ash contents of the present study with the thermal power of the combustion unit, classified into <1 MW, 1 to 10 MW and >10 MW. The nutrient levels of alkaline active substances (CaO), MgO, P2O5 and K2O decrease with increasing furnace power due to higher temperatures in the combustion chamber. This is consistent with the research of Okmanis et al. (2015) [41], who examined the ash from heating plants in Lithuania. Wilpert et al. (2016) [12] share this observation and suggest a mixture of ashes from large and smaller heating plants to increase the nutrient content in fertilizers from wood ash. The bottom ashes which, apart from Cr(VI), do not exceed any other limit values of the DüMV all contain more than 15 wt% (d.b.)
CaO and thus meet the requirement for a "lime fertilizer made from ash from the combustion of vegetable matter". A recycling path established in Bavaria and Baden-Württemberg consists of mixing ashes of this quality with lime or lime dolomite to form "carbonic acid lime". The ash content may not exceed 30 wt%. Theoretically, it would also be possible to mix this lime fertilizer from ash with biowaste. However, minimum nutrient contents in the finished product of 3 wt% N, 3 wt% P2O5 or 3 wt% K2O in the dry matter would then have to be met. According to Kehres (2016) [32], these contents are generally not achieved by mixtures of bottom ash and biowaste. For a large part of the bottom ashes (69%), classification as a "PK fertilizer from ash from the incineration of vegetable matter" would be possible, since at least 2 wt% P2O5 and 3 wt% K2O are contained in their dry matter. Four of the ashes (approximately 15%) contain at least 10 wt% (d.b.) K2O and would thus fulfil the requirement for a "potassium fertilizer from ashes of the combustion of vegetable matter". Wood ash can also be used in composting. If the resulting "organic-mineral fertilizers" are to be spread on agricultural land in accordance with the DüMV, the limit values of the BioAbfV must also be met. Taking into account the exceedances of Cr(VI) according to the DüMV, a total of 54% of the bottom ashes examined also comply with the limit values of the BioAbfV. However, the limit values of the BioAbfV do not have to be met if the application takes place on land for which the BioAbfV does not apply, such as in gardening and landscaping, or if substrates or topsoil materials are produced from the mixture of ash and compost [32]. This latter recovery path would thus be possible for 85% of the bottom ashes investigated, as long as a reduction in the Cr(VI) content can be assumed.
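To make the classification thresholds described above concrete, the following Python sketch assigns an ash analysis to the fertilizer categories named in the text. The thresholds are the minimum nutrient contents cited above; the sample values are hypothetical, and the sketch is an illustration, not a legal assessment under the DüMV.

```python
# Sketch: assign a bottom ash analysis to the DüMV fertilizer categories
# using the minimum nutrient contents cited above (wt%, dry basis).
# Thresholds follow the text; sample values are hypothetical.

def classify_ash(cao, p2o5, k2o):
    """Return all fertilizer categories whose minimum nutrient
    contents (wt% d.b.) the ash satisfies."""
    categories = []
    if cao >= 15.0:
        categories.append("lime fertilizer from plant ash")
    if p2o5 >= 2.0 and k2o >= 3.0:
        categories.append("PK fertilizer from plant ash")
    if k2o >= 10.0:
        categories.append("potassium fertilizer from plant ash")
    return categories

# Example with values in the range reported in this study:
print(classify_ash(cao=35.0, p2o5=2.5, k2o=6.3))
# -> ['lime fertilizer from plant ash', 'PK fertilizer from plant ash']
```

Heavy metal compliance (Table 1) would have to be checked separately before any such classification applies.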
Distribution of Element Loads between Bottom Ash and Cyclone Ash (TFZ Heating Plant)

At the TFZ heating plant, the distribution of the element loads between bottom ash and cyclone ash was investigated. For this purpose, individual samples of bottom ash and cyclone ash were taken simultaneously at eight points in time during the same heating period. Volatile ash components, such as Cd, Pb, Zn and Hg, evaporate at the high temperatures in the combustion chamber [9,11,14,16,17]. For this reason, volatile components can be discharged from the hot ash bed and accumulate in the cyclone ash through condensation. This results in increased concentrations of these elements in the cyclone ash compared to the bottom ash. Using the data set of samples obtained at the TFZ heating plant, this correlation should be directly verifiable. Table 6 shows the heavy metal and nutrient concentrations in the bottom ash in direct comparison with the corresponding cyclone ash. The mean value and the standard deviation of the eight samples taken in pairs are given in each case. Pairs of mean values that differ significantly are printed in bold. Means were compared using the Wilcoxon signed-rank test. Where no standard deviation is given, all samples fell below the detection or quantification limit for this element. The specified detection or quantification limit was then used as the concentration. For the elements As and Hg, which also occur at very low concentrations in the cyclone ash, this can lead to a distortion in the calculation of the element loads, since this procedure means that a similarly high value must be assumed in both the bottom ash and the cyclone ash. In fact, it can be assumed that the proportion of the two volatile elements As and Hg is higher in the cyclone ash than in the bottom ash. However, the detection limit of the analysis by the external laboratory does not allow this conclusion to be drawn.
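The paired comparison described above can be sketched in a few lines. The following is a minimal pure-Python version of the Wilcoxon signed-rank statistic (without tie correction), applied to hypothetical paired Cd values, not the measured data; as in the study, sub-detection-limit results are replaced by the limit itself.

```python
# Minimal Wilcoxon signed-rank statistic for a paired bottom ash /
# cyclone ash comparison (no tie correction; illustrative data only).

def wilcoxon_w(x, y):
    """Return W = min(W+, W-) for paired samples x, y."""
    diffs = [b - a for a, b in zip(x, y) if b != a]  # drop zero differences
    ranked = sorted(diffs, key=abs)                  # rank by |difference|
    w_pos = sum(rank for rank, d in enumerate(ranked, start=1) if d > 0)
    w_neg = sum(rank for rank, d in enumerate(ranked, start=1) if d < 0)
    return min(w_pos, w_neg)

detection_limit = 0.2  # mg/kg d.b., hypothetical
# Bottom ash Cd all below the detection limit -> set to the limit:
bottom_ash = [detection_limit] * 8
cyclone_ash = [12.0, 9.5, 14.1, 11.3, 10.8, 13.6, 9.9, 12.7]  # hypothetical Cd

w = wilcoxon_w(bottom_ash, cyclone_ash)
# For n = 8 pairs, a two-sided 5% test rejects equality when W <= 3.
print(w, "-> significant" if w <= 3 else "-> not significant")
```

With all eight differences in the same direction, W = 0 and the means differ significantly, mirroring the bold entries in Table 6 for the volatile elements.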
The interpretation of the results in Table 6 is based on the calculated absolute element loads related to the total mass of the respective element in the ash (Figure 4). In order to make quantitative statements about how the actual loads of the individual elements are distributed between the bottom ash and the cyclone ash, it is first necessary to make reasonable assumptions about the mass ratio between bottom ash and the associated cyclone ash. For fixed-bed furnaces, a proportion of 10 to 30 wt% of cyclone ash is usually reported [46,59-61]. Fine fly ash is not considered in the following analysis. The actual proportion of cyclone ash depends on various factors, such as the turbulence of the primary air in the combustion bed or the fineness of the fuel, as a comparison of the ash fractions from wood chips or sawdust shows [1]. With these assumptions, it is possible to derive from the eight pairwise analyses of the bottom ash and the cyclone ash at the TFZ heating plant how the loads of heavy metals and nutrients are distributed between the ash fractions. In addition to the 1:1 mixing ratio (bottom bar chart), Figure 4 shows the distribution of the loads at 10, 20 and 30 wt% cyclone ash of the total ash. Heavy metal compounds containing Pb, Cd, Tl, Hg and Zn are highly volatile [9,14] and are predominantly found in the cyclone ash in all calculations. Consequently, even at the lowest assumed cyclone ash content of 10 wt% of total ash, up to 93 wt% of the Cd accumulates in the cyclone ash. Should high concentrations of highly volatile elements be observed in bottom ashes that are considered for utilization as fertilizer, an increase in the temperature in the combustion bed could result in a reduction of these elements in the bottom ashes and an increase in the cyclone ashes. For As, no clear effect could be seen in the data presented here.
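The mass balance behind this load split can be written out directly. The following sketch uses illustrative concentrations (not the measured values) to show how a strong concentration ratio between cyclone and bottom ash translates into load shares at different assumed cyclone ash fractions:

```python
# Sketch of the element load split between bottom ash and cyclone ash.
# c_bottom and c_cyclone are concentrations (mg/kg d.b.); f_cyclone is
# the assumed cyclone ash share of the total ash mass (0..1).

def cyclone_share(c_bottom, c_cyclone, f_cyclone):
    """Fraction of an element's total load that ends up in the cyclone ash."""
    load_cyclone = c_cyclone * f_cyclone
    load_bottom = c_bottom * (1.0 - f_cyclone)
    return load_cyclone / (load_cyclone + load_bottom)

# A 120:1 concentration ratio (cyclone:bottom), as volatile metals such
# as Cd can show, puts most of the load into the cyclone ash even at a
# small cyclone ash fraction (Cd values below are hypothetical):
for f in (0.10, 0.20, 0.30):
    share = cyclone_share(c_bottom=0.1, c_cyclone=12.0, f_cyclone=f)
    print(f"{f:.0%} cyclone ash -> {share:.0%} of the load in cyclone ash")
```

At a 10 wt% cyclone ash fraction this ratio already yields a share of about 93%, matching the order of magnitude reported for Cd above.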
As the concentrations of As in the investigated bottom and cyclone ashes were overall very low, the limit of quantification often had to be used as the concentration in the ash fractions. Cu, Cr, Ni and the main nutrients Mg, P, Ca and K are less volatile, and, depending on the calculation performed here, only 11 to 50 wt% of their loads are found in the cyclone ash. Therefore, they predominantly remain in the bottom ash. Obernberger (1997) [59] shows basically similar element ratios between bottom ash and cyclone ash for wood chips. However, the reported concentrations in the cyclone ash were consistently lower compared to the results presented here (with the exception of K and P), which may be due to different combustion chamber and cyclone temperatures of the heating plants investigated. The combustion chamber temperatures near the combustion bed are not known for the TFZ heating plant. Lanzerstorfer (2017) [14] observed that at combustion chamber temperatures between 830 and 920 °C, Cd, Pb and Zn accumulate in the fly ashes, while most nutrients (Ca, Mg, P2O5) remain in the bottom ash. Both Lanzerstorfer (2017) [14] and Schilling (2020) [11] note a higher volatility for potassium, which leads to K losses from the bottom ash. These increased K losses could not be observed at the TFZ heating plant, suggesting that the combustion temperatures are sufficiently high to remove the volatile heavy metals and at the same time low enough to avoid high potassium losses.

Quality of Mixtures of Bottom Ash with Cyclone Ash

In some heating plants, the bottom ash and the cyclone ash are collected in the same container due to the plant design. The composition of these ashes is shown in Table 2 (right columns). All five samples of these mixed ashes exceed the DüMV limit value for Cd. Further exceedances occurred for Cr(VI) (n = 4), thallium (n = 1) and lead (n = 1).
None of the bottom ashes mixed with cyclone ash can meet the heavy metal limit values of the DüMV or the BioAbfV. They are therefore not eligible as a source material for fertilizers, and these ashes are excluded from being spread on agricultural and forestry land in Germany. If the aim is to recycle bottom ashes, it is recommended that these ash fractions are collected and reused separately. When using other fuels, e.g., when firing agricultural fuels such as straw, a mixture of bottom ash and cyclone ash can often comply with the limit values of the DüMV [43]. This is due to the generally lower heavy metal content of agricultural fuels compared to wood fuels.

Conclusions

The energetic use of untreated wood in biomass heat (and power) plants produces combustion residues in the form of ash. The increased use of by-products and residues contributes to the conservation of natural resources. It has been shown that the bottom ashes produced are basically suitable for use as fertilizers or as raw materials for fertilizers despite the low pollutant limits in the German DüMV. However, quality assurance of the ashes and compliance with the relevant legal requirements are crucial, because the heavy metal limits prescribed by the German Fertilizer Ordinance can be exceeded. The limits were exceeded in the bottom ashes for chromium(VI) (62%), cadmium (12%) and lead (4%). Mixing of the bottom ashes with cyclone ashes led, in all cases, to the heavy metal limit values being exceeded, especially for cadmium. The following measures contribute to the quality assurance of ashes for fertilization purposes. As has been shown, mixing of bottom ash and cyclone ash leads to an increase in heavy metals; separate collection of these ash fractions is therefore essential. The frequently exceeded limit value for chromium(VI) in the German Fertilizer Ordinance can be addressed by moistening and storing the bottom ashes.
In this process, chromium(VI) converts into the harmless chromium(III). The present study maps the ash quality of typical biomass heating plants according to the state of the art in Germany. The evaluation of the results is carried out according to the regulations applicable in Germany for the use of biomass ash for fertilizer purposes. Other combustion techniques, other fuels and other legal regulations may lead to different assessments. Conflicts of Interest: The authors declare no conflict of interest.
A New Strategy to Improve Proactive Route Updates in Mobile Ad Hoc Networks

This paper presents two new route update strategies for performing proactive route discovery in mobile ad hoc networks (MANETs). The first strategy is referred to as minimum displacement update routing (MDUR). In this strategy, the rate at which route updates are sent into the network is controlled by how often a node changes its location by a required distance. The second strategy is called minimum topology change update (MTCU). In this strategy, the route updating rate is proportional to the level of topology change each node experiences. We implemented MDUR and MTCU on top of the fisheye state routing (FSR) protocol and investigated their performance by simulation. The simulations were performed in a number of different scenarios, with varied network mobility, density, traffic, and boundary. Our results indicate that both MDUR and MTCU produce significantly lower levels of control overhead than FSR and achieve higher levels of throughput as the density and the level of traffic in the network are increased.

INTRODUCTION

Mobile ad hoc networks (MANETs) are made up of a number of nodes, which are capable of performing routing without using a dedicated centralised controller or a base station. This key feature of these networks enables them to be employed in places where an infrastructure is not available, such as in disaster relief and on battle grounds. However, the dynamic nature of these networks and the scarcity of bandwidth in the wireless medium, along with the limited power in mobile devices (such as PDAs or laptops), makes routing in these networks a challenging task. A routing protocol designed for MANETs must work consistently as the size and the density of the network varies and efficiently use the network resources to provide each user with the required levels of quality of service for different types of applications used.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

With so many variables to consider in order to design an efficient routing protocol for MANETs, a number of different types of routing strategies have been proposed by various authors. These protocols can be classified into three groups: global/proactive, on-demand/reactive, and hybrid. Most proactive routing protocols are based on the link state and distance vector algorithms. In these protocols, each node maintains up-to-date routing information to every other node in the network by periodically exchanging distance vector or link state information using different updating strategies (discussed in the following section). In on-demand routing protocols, each node only maintains active routes. That is, when a node requires a route to a particular destination, a route discovery is initiated. The route determined in the route discovery phase is maintained while the route is still active (i.e., the source has data to send to the destination). The advantage of on-demand protocols is that they reduce the amount of bandwidth usage and redundancy by determining and maintaining routes only when they are required. These protocols can be further classified into two categories: source routing and hop-by-hop routing. In source-routed on-demand protocols [1,2], each data packet carries the complete source-to-destination route. Therefore, each intermediate node forwards these packets according to the information kept in the header of each packet. This means that the intermediate nodes do not need to maintain up-to-date routing information for each active route in order to forward the packet towards the destination. Furthermore, nodes do not need to maintain neighbour connectivity through periodic beaconing messages.
The major drawback of source routing protocols is that they do not perform well in large networks. This is due to two main reasons. Firstly, as the number of intermediate nodes in each route grows, so does the probability of route failure. To show this, let P(f) = 1 − ∏_{i=1}^{n} (1 − a_i), where P(f) is the probability of route failure, a_i is the probability of failure of the i-th link, and n is the number of links in the route. Assuming every link has a failure probability a_i ≥ a > 0, it can be seen that as n → ∞, P(f) → 1. Secondly, as the number of intermediate nodes in each route grows, the amount of overhead carried in the header of each data packet grows as well. Therefore, in large networks with significant levels of multihopping and high levels of mobility, these protocols may not scale well. In hop-by-hop routing (also known as point-to-point routing) [3,4], each data packet only carries the destination address and the next hop address. Therefore, each intermediate node in the path to the destination uses its routing table to forward each data packet towards the destination. The advantage of this strategy is that routes are adaptable to the dynamically changing environment of MANETs, since each node can update its routing table when it receives fresher topology information and hence forward the data packets over fresher and better routes. Using fresher routes also means that fewer route recalculations are required during data transmission. The disadvantage of this strategy is that each intermediate node must store and maintain routing information for each active route, and each node may need to be aware of its surrounding neighbours through the use of beaconing messages. Hybrid routing protocols have been proposed to increase the scalability of routing in MANETs [5,6,7,8,9,10].
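The route-failure growth that motivates hop-by-hop and hybrid designs can be made concrete. Assuming, as a simplification, that every link fails independently with the same probability a, the route failure probability is P(f) = 1 − (1 − a)^n, which approaches 1 as the route lengthens:

```python
# Probability that a route over n independent links fails, assuming a
# uniform per-link failure probability a (a simplification of the
# scaling argument above, for illustration only).

def route_failure_prob(a, n):
    """P(f) = 1 - (1 - a)^n."""
    return 1.0 - (1.0 - a) ** n

# Even a modest 10% per-link failure probability dominates long routes:
for n in (1, 5, 10, 20):
    print(f"n = {n:2d}: P(f) = {route_failure_prob(0.1, n):.3f}")
```

With a = 0.1 the failure probability rises from 0.10 at one hop to about 0.65 at ten hops, illustrating why long source routes become unreliable.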
These protocols often behave reactively and proactively at different times, and they introduce a hierarchical routing structure to the network to reduce the number of retransmitting nodes during route discovery or topology discovery. Each node periodically maintains the nearby topology by employing a proactive routing strategy (such as distance vector or link state) and maintains approximate routes or on-demand routes for faraway nodes. In this paper, we propose two new route updating strategies to perform proactive route discovery in mobile ad hoc networks. These are minimum displacement update routing (MDUR) and minimum topology change update (MTCU). In MDUR, the rate at which route updates are sent is controlled by the rate of displacement of each node. This is determined by using the services of a GPS. In MTCU, the rate at which updates are sent is proportional to the level of topology change experienced by each node. In [10], we briefly mentioned MDUR; in this paper we give a full description of this strategy and investigate its performance, along with MTCU, under different network scenarios using a simulation tool. The rest of this paper is organised as follows. In Section 2, we describe a number of different route update strategies proposed in the literature. Section 3 describes our route updating strategies. Section 4 describes the simulation environment, parameters, and performance metric used to investigate the performance of our route updating strategies. Section 5 presents the discussion of our simulation results and Section 6 presents the conclusions of the paper.

RELATED WORK

Proactive route discovery provides predetermined routes to every other node (or a set of nodes) in the network at every node. The advantage of this is that end-to-end delay is reduced during data transmission, when compared to determining routes reactively.
Simulation studies [11,12,13], which have been carried out for different proactive protocols, show high levels of data throughput and significantly lower delays than on-demand protocols (such as DSR) for networks made up of up to 50 nodes with high levels of traffic. Therefore, in small networks using real-time applications (e.g., video conferencing), where low end-to-end delay is highly desirable, proactive routing protocols may be more beneficial. In this section, we describe a number of different route update strategies proposed in the literature to perform proactive routing. Furthermore, we also describe a number of updating strategies proposed for wireless cellular networks.

Global updates

Proactive routing protocols using global route updates are based on the link state and distance vector algorithms, which were originally designed for wired networks. In these protocols, each node periodically exchanges its routing table with every other node in the network. To do this, each node transmits an update message every T seconds. Using these update messages, each node then maintains its own routing table, which stores the freshest or best route to every known destination. The disadvantage of global updates is that they use a significant amount of bandwidth, since they do not take any measures to reduce control overheads. As a result, data throughput may suffer significantly, especially as the number of nodes in the network is increased. Two such protocols are DSDV [14] and WRP [15].

Localised updates

To reduce the overheads of global updates, a number of localised updating strategies were introduced in protocols such as GSR [16] and FSR [12,17]. In these strategies, route update propagation is limited to a localised region. For example, in GSR each node exchanges routing information with its neighbours only, thereby eliminating the packet flooding methods used in global routing. FSR is a direct descendent of GSR.
This protocol attempts to increase the scalability of GSR by updating nearby nodes at a higher frequency than nodes which are located faraway. To define the nearby region, FSR introduces the fisheye scope (as shown in Figure 1). The fisheye scope covers a set of nodes which can be reached within a certain number of hops from the central node shown in Figure 1. The update messages which contain routing information for the nodes outside of the fisheye scope are disseminated to the neighbouring nodes at a lower frequency. This reduces the accuracy of the routes in remote locations; however, it significantly reduces the amount of routing overhead disseminated in the network. The idea behind this protocol is that the accuracy of the routes increases as the data packets get closer to the destination. Therefore, if the packets know approximately what direction to travel, they will travel over a more accurate route as they get close to the destination and have a high chance of reaching it. In OLSR, two-hop neighbour knowledge is maintained proactively to determine a set of MPR (multipoint relay) nodes. These nodes are used during the flooding of globally propagating route updates in order to minimise the number of rebroadcasting nodes (i.e., redundancy).

Mobility-based updates

Another strategy which can be used to reduce the number of update packets is introduced in DREAM [13]. The authors propose that routing overhead can be reduced by making the rate at which route updates are sent proportional to the speed at which each node travels. Therefore, the nodes which travel at a higher speed disseminate more update packets than the ones that are less mobile. The advantage of this strategy is that in networks with low mobility this updating strategy may produce fewer update packets than a static update interval approach such as DSDV.
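FSR's scoped updating can be sketched as follows; the scope radius and the reduced frequency for out-of-scope entries below are assumed example parameters, not values prescribed by FSR:

```python
# Sketch of FSR-style scoped updating: routing entries for nodes inside
# the fisheye scope go into every update, entries for nodes outside the
# scope only into every k-th update. SCOPE_HOPS and FAR_PERIOD are
# illustrative parameters, not FSR's actual defaults.
SCOPE_HOPS = 2   # fisheye scope radius in hops
FAR_PERIOD = 3   # out-of-scope entries sent every 3rd update

def entries_for_update(distances, update_no):
    """distances maps destination -> hop count from this node; returns
    the destinations whose entries are included in this update."""
    include_far = (update_no % FAR_PERIOD == 0)
    return [dst for dst, hops in sorted(distances.items())
            if hops <= SCOPE_HOPS or include_far]

topology = {"B": 1, "C": 2, "D": 4, "E": 6}
print(entries_for_update(topology, 1))  # in-scope only: ['B', 'C']
print(entries_for_update(topology, 3))  # full table: ['B', 'C', 'D', 'E']
```

The effect is that most updates are small and local, while full-table information still propagates, just less often.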
Similar to FSR, in this protocol updates are sent more frequently to nearby nodes than to those located far away.

Conditional or event-driven updates
The number of redundant update packets can also be reduced by employing a conditional (also known as event-driven) update strategy [14,18]. In this strategy, a node sends an update if certain events occur at any time. Events which can trigger an update include a link becoming invalid or a new node joining the network (or a new neighbour being detected). The advantage of this strategy is that if the network topology or conditions do not change, then no update packets are sent, thus eliminating redundant periodic update dissemination into the network.

Updating strategies for cellular networks
The previous sections described a number of location and route updating strategies proposed for ad hoc networks. For cellular networks, a number of updating strategies have also been proposed. These include movement-based updates, distance-based updates, and timer-based updates. In movement-based updates [19,20], a location update is transmitted when the number of cell boundary crossings exceeds a predetermined value. In distance-based updates [21,22], a location update is transmitted when a node's distance (in terms of number of cells) travelled since its last update exceeds a predetermined limit. In timer-based updates [23], each node transmits an update packet periodically (similar to the periodic updating used in ad hoc networks). Further research is required to determine the usefulness of these strategies in mobile ad hoc networking models which use a static grid (similar to cells) or zone-based maps [5,6]. Such work is beyond the scope of this paper.

PROPOSED STRATEGIES
In this section, we propose minimum displacement update routing (MDUR) and minimum topology change update (MTCU).
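The three cellular triggers just described reduce to simple predicates. The sketch below is our illustration (a square-grid cell model is assumed for the distance-based case; none of these names come from [19-23]):

```python
def movement_based(boundary_crossings, threshold):
    """Update once the count of cell-boundary crossings reaches the threshold."""
    return boundary_crossings >= threshold

def distance_based(cell_now, cell_last_update, threshold):
    """Update once the node is `threshold` cells away from where it last updated.

    Cells are (row, col) on an assumed square grid; Chebyshev distance is used
    so diagonal neighbours count as one cell apart.
    """
    dx = abs(cell_now[0] - cell_last_update[0])
    dy = abs(cell_now[1] - cell_last_update[1])
    return max(dx, dy) >= threshold

def timer_based(now, last_update_time, period):
    """Update once `period` seconds have elapsed since the last update."""
    return now - last_update_time >= period
```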
Both strategies attempt to disseminate route update packets into the network when they are required, rather than using purely periodic updates. In MDUR, this is achieved by making the rate at which updates are sent proportional to the rate of displacement. That is, the more often a node changes its location by a threshold distance, the more updates are transmitted into the network. The rate of displacement can be measured using a global positioning system (GPS). Note that the rate of displacement is different from speed, which is used in the DREAM [13] routing protocol, because a speed measurement accounts for distance travelled rather than displacement. In MTCU, the rate at which route update packets are sent is proportional to the level of topology change detected by each node using its topology table. Note that this strategy does not require a GPS. The following section describes the idea behind displacement-based updates and illustrates the advantage of using displacement as a route update selection criterion rather than speed (or distance). This is then followed by the description of MTCU.

Overview and definition of MDUR
The idea behind this strategy is to reduce the amount of periodic route updates by restricting update transmission to nodes which satisfy the following conditions. (1) A node experiences or creates a significant topology change. (2) A node has not updated for a minimum threshold time. In the first condition we assume that a node experiences a significant topology change if it has migrated by a minimum distance from one location to another. By migrating from one location to another, the routes connected to the migrating node (and the route to the migrating node itself) may change significantly. Therefore, the migrating node is required to transmit an update packet through the network (or parts of the network) to allow for recalculation of more accurate routes.
To illustrate how MDUR works, suppose node S (see Figure 2) migrates from one location to another as shown. From this migration it can be seen that the neighbour topology of node S has changed, which has also significantly changed the topology of the network. Therefore, disseminating an update packet at this time will be beneficial, as each node in the network can rebuild its routing table and store more accurate routes.

Description of the MDUR algorithm
With MDUR, each node starts by recording its current location and setting it as its previous location. Each node also records its current velocity and time. Using this information, each node determines when the next update should be sent. When this update time has elapsed, the node checks whether its migration distance is greater than the required threshold distance. If it is, an update is sent. Otherwise, no update is sent and the next update time is estimated from the current location and velocity of the node. If the current velocity is zero, the node can assume a maximum velocity or set a minimum wait time according to an update time constant, which is used in the MDUR algorithm. The MDUR algorithm is outlined in Algorithm 1. Displacement-based updates are more beneficial than updates based purely on mobility (i.e., speed [13]), because this strategy attempts to send an update only when a topology change occurs. To show this, suppose node S (Figure 2) moves rapidly towards node A for a short time such that dist(L_c, L_p) < D_T. Furthermore, it moves in such a way that it maintains its links to nodes B and D. Now, assuming that there is no interference during this time and nodes A, B, and D stay stationary, the topology of node S will not change. Therefore, an update is not required in this network.
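The update decision just described can be sketched as a single check performed when the scheduled update time elapses. This is an illustrative reading of the algorithm as described in the text, not the authors' exact Algorithm 1; names such as `mdur_step` and `t_const` are ours.

```python
import math

def mdur_step(prev_loc, cur_loc, speed, d_threshold, t_const):
    """One MDUR check at an elapsed update time.

    prev_loc    -- location recorded at the last update (x, y)
    cur_loc     -- current location (x, y), e.g. from GPS
    speed       -- current speed in m/s
    d_threshold -- minimum displacement D_T before an update is warranted
    t_const     -- fallback update time constant (seconds)

    Returns (send_update, wait_time): whether to transmit an update now,
    and how long to wait before the next check.
    """
    displacement = math.dist(prev_loc, cur_loc)
    if displacement >= d_threshold:
        return True, t_const  # topology likely changed: update, then re-check later
    remaining = d_threshold - displacement
    if speed == 0:
        return False, t_const          # stationary: fall back to the time constant
    return False, remaining / speed    # time needed to cover the remaining distance
```

Note how a node oscillating within the threshold distance never triggers an update, whereas a speed-based rule like DREAM's would keep transmitting.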
However, in the case of a strategy based purely on mobility, such as [13], an update may be disseminated, and updates may continue to be sent even if node S simply moves back and forth between these two points. In contrast, MDUR will send no updates in this scenario.

Implementation decisions for MDUR
To evaluate the performance and benefits of MDUR, it was implemented on top of FSR, which we refer to as hierarchical MDUR (HMDUR). Recall that FSR disseminates two types of update packets: intrascope update packets, which propagate within the fisheye scope, and interscope packets, which propagate through the entire network. Therefore, we introduced two types of displacement updates, one for the intrascope and one for the interscope, and we modified the MDUR algorithm to disseminate these two updates. To initiate each of these updates we also used two different threshold distances: D_intra and D_inter for the intrascope and interscope updates, respectively. To initiate the intrascope updates more frequently than the interscope updates, we set D_intra to be significantly less than D_inter. Tables 1 and 2 illustrate the parameters used in FSR and HMDUR. The HMDUR algorithm is outlined in Algorithm 2.

Description of MTCU
One way to increase the scalability of proactive routing protocols is to maintain approximate routes to each destination rather than exact routes. In [12,13], each node maintains approximate (or less accurate) information about faraway destinations, since updates from faraway nodes are received less frequently. Similarly, in HMDUR, nodes maintain approximate routing information about nodes located far away by using the interscope displacement metric. Another way to determine whether an update is required is to monitor the nearby topology and disseminate update packets only when a minimum level of topology change occurs.
To do this, we introduce minimum topology change updates (MTCU). (Algorithm 2, the HMDUR algorithm, operates on the following quantities: L_intra, the location at the last intrascope update; L_inter, the location at the last interscope update; L_c, the current location; D_intra, the intrascope threshold distance; D_inter, the interscope threshold distance; V, the speed of the node; and T_c, the current time; it disseminates intrascope and interscope update packets accordingly.) This strategy assumes that each node maintains an intrascope and interscope topology like FSR. However, instead of using purely periodic updates, the rate at which updates are sent is proportional to a topology metric. MTCU is made up of two phases: a startup phase and a maintenance phase. The startup phase is initiated when a node enters the network (or when it comes online). During this phase, each node starts by recording its location and sending three updates: a neighbour update, an intrascope update, and an interscope update. Each node then counts the number of neighbouring nodes and the number of nodes in its intrascope. During the maintenance phase, the neighbouring topology is periodically monitored for failure notifications and the number of changes is recorded. These changes can include the discovery of a new neighbour or the loss of a link. If a significant change in the neighbouring topology is experienced, an intrascope update is sent. Furthermore, each node monitors its intrascope topology and counts the number of changes, such as the number of nodes in the intrascope and the number of route changes for each destination. If the intrascope has changed significantly, then an interscope update is sent. Note that each node maintains its neighbour connectivity through beaconing messages.
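The maintenance phase described above can be sketched as counters over topology events. The thresholds `pn_change` and `pt_change` below mirror the percentage-of-change parameters discussed in the text, but the class structure and naming are our illustration, not the paper's Algorithm 3.

```python
class MtcuMonitor:
    """Counts neighbour/intrascope topology changes and decides when to update.

    An intrascope update fires when the fraction of neighbour-topology changes
    reaches pn_change; an interscope update fires when the fraction of
    intrascope-topology changes reaches pt_change.
    """
    def __init__(self, n_neighbours, n_intrascope, pn_change=0.25, pt_change=0.25):
        self.n_neighbours = max(n_neighbours, 1)   # counted during the startup phase
        self.n_intrascope = max(n_intrascope, 1)
        self.pn_change, self.pt_change = pn_change, pt_change
        self.neigh_changes = 0
        self.intra_changes = 0

    def neighbour_event(self):
        """A link was lost or a new neighbour was discovered via beaconing."""
        self.neigh_changes += 1
        if self.neigh_changes / self.n_neighbours >= self.pn_change:
            self.neigh_changes = 0
            return "intrascope-update"
        return None

    def intrascope_event(self):
        """A node count or route change occurred within the fisheye scope."""
        self.intra_changes += 1
        if self.intra_changes / self.n_intrascope >= self.pt_change:
            self.intra_changes = 0
            return "interscope-update"
        return None
```

If no events occur, the counters never advance and no updates are sent, which is exactly the redundancy saving the text describes (subject to the long-timeout periodic fallback).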
However, the rate at which intrascope and interscope updates are disseminated depends on the rate at which the neighbouring or intrascope topology changes, and periodic updates are used only if a node has not sent an intrascope or interscope update for a long time, thus reducing the number of redundant updates when no changes occur. This also means that fewer periodic updates may be transmitted compared to protocols which use a purely periodic update strategy (such as FSR). To detect whether a significant neighbour or intrascope topology change has occurred, a topology metric can be used. In this case, two topology metrics must be kept: one for the neighbouring topology and one for the intrascope topology. The topology metric counts the number of changes after the startup phase and triggers an update event if a certain number of changes occur. The MTCU algorithm is outlined in Algorithm 3. Note that the algorithm only shows the maintenance phase of MTCU. In Algorithm 3, the rate at which updates are sent also depends on the percentage of changes experienced (i.e., PT_change and PN_change). The percentage-of-change value can be a static parameter between 0% and 100%, preprogrammed into each device. However, it may be beneficial to change its value dynamically according to network conditions. One way to do this is to estimate the available bandwidth at each node and within the intrascope, and then vary the percentage-change values according to the level of available bandwidth. Therefore, at times when the level of traffic (e.g., data and control) is low, more updates can be sent to increase the accuracy of the routes.

Implementation decisions for MTCU
Similar to MDUR, MTCU was also implemented on top of FSR. Table 3 shows the simulation parameters of MTCU.
Note that the neighbour change threshold and the intrascope threshold represent the required level of topology change in the neighbouring and intrascope topology, respectively, before an intrascope or an interscope update is disseminated.

SIMULATION MODEL
The aim of our simulation studies is to investigate the performance of our route update strategies under different levels of node density, traffic, mobility, and network boundary. We simulated HMDUR, MTCU, and FSR for each scenario in order to differentiate their performance. The simulation parameters and performance metrics are described in the following sections.

Simulation environment and scenarios
The GloMoSim simulation tool was used to carry out our simulations [24]. GloMoSim is an event-driven simulation tool designed to carry out large simulations of mobile ad hoc networks. Our simulations were carried out for 50- and 100-node networks moving within a 1000 m × 1000 m boundary. IEEE 802.11 DSSS (direct sequence spread spectrum) was used with a maximum transmission power of 15 dBm at a 2 Mb/s data rate. In the MAC layer, IEEE 802.11 was used in DCF mode. Radio capture effects were also taken into account. Two-ray path loss characteristics were used for the propagation model. The antenna height was set to 1.5 m, the radio receiver threshold was set to −81 dBm, and the receiver sensitivity was set to −91 dBm, according to the Lucent WaveLAN card [25]. A random waypoint mobility model was used, with node mobility ranging from 0 to 20 m/s and pause time varied from 0 to 900 s. The simulation was run for 900 s for 10 different values of pause time, and each simulation was averaged over five different runs using different seed values. Constant bit rate (CBR) traffic was used to establish communication between nodes. Each CBR packet was 512 bytes; the simulation was run with 10 different client/server pairs, and each session was set to last for the duration of the simulation.
Performance metrics
To investigate the performance of the routing protocols, three performance metrics were used: packet delivery ratio (PDR), normalised control overhead, and end-to-end delay. The first metric is used to investigate the level of data delivery (data throughput) achievable by each protocol under different network scenarios. The second metric illustrates the level of routing overhead introduced. The last metric compares the amount of delay experienced by each data packet in reaching its destination.

SIMULATION RESULTS
This section presents our simulation results. The aim of this analysis is to compare the performance of HMDUR and MTCU with FSR under different network scenarios.

Packet delivery ratio
The graphs in Figures 3 and 4 illustrate the PDR results obtained for the 1000 m × 1000 m boundary. In the 50-node scenario, all routing strategies show similar levels of PDR. However, in the 100-node network scenario, HMDUR and MTCU start to outperform FSR. This is because HMDUR and MTCU still maintain a similar level of PDR to that in the 50-node scenario, whereas FSR shows a significant drop in performance compared to the 50-node scenario. This drop is evident across all levels of pause time, because under high node density the periodic updating strategy in FSR starts to take away more of the available bandwidth from data transmission than our proposed strategies do. Furthermore, more updates may increase channel contention, which can result in more packets being dropped at each intermediate node.

Normalised control overhead
The graphs in Figures 5 and 6 illustrate the normalised routing overhead experienced in the 1000 m × 1000 m boundary. In our simulation, the maximum update intervals for the intrascope and interscope are set to half those of FSR. Therefore, under high mobility (i.e., zero pause time), if purely periodic updates were used in HMDUR and MTCU, the routes produced would have been less accurate, which could have resulted in a drop in throughput.
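The three metrics can be computed as straightforward ratios. The sketch below follows common usage (e.g., normalised overhead as control packets transmitted per data packet delivered); the exact definitions used in the paper's figures are not spelled out in the text, so treat these as an assumption.

```python
def packet_delivery_ratio(received, sent):
    """PDR: fraction of originated data packets that reached their destination."""
    return received / sent if sent else 0.0

def normalised_overhead(control_packets, delivered_packets):
    """Control packets transmitted per data packet successfully delivered."""
    return control_packets / delivered_packets if delivered_packets else float("inf")

def average_end_to_end_delay(delays):
    """Mean source-to-destination latency over all delivered data packets."""
    return sum(delays) / len(delays) if delays else 0.0
```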
However, adapting each node's update rate to its rate of displacement allows nodes to send more updates when they are required (i.e., during high mobility). This means that the accuracy of the routes will be high during high mobility, when nodes are more likely to migrate frequently and experience topology changes, while fewer updates are sent when mobility is low. From the results shown in Figures 5 and 6, it can be seen that both HMDUR and MTCU produce less overhead than FSR across all levels of pause time and node density.

Delays
The graphs in Figures 7 and 8 illustrate the end-to-end delay experienced in the 1000 m × 1000 m boundary. These results show that in HMDUR and MTCU each data packet experiences lower end-to-end delay than in FSR. The lower delay is due to a higher level of accessibility to the wireless medium: in our proposed strategies each node generates fewer route updates than in FSR, which means there is less contention for the channel when a data packet is received. Therefore, each node can forward data packets more promptly.

CONCLUSIONS
This paper presents new proactive route update strategies for mobile ad hoc networks. We present minimum displacement update routing (MDUR) and hierarchical MDUR (HMDUR). In these strategies, the rate at which route updates are sent is proportional to the rate at which each node changes its location by a threshold distance. Furthermore, we introduced minimum topology change update (MTCU). In this strategy, update packets are sent only when a minimum topology change is experienced by each node. We implemented HMDUR and MTCU in GloMoSim and compared their performance with FSR. Our results indicate that both HMDUR and MTCU produce fewer routing overheads than FSR while maintaining high levels of data throughput across different network scenarios.
Furthermore, the results show that when the node density is high, reducing routing overhead can result in higher levels of data packet delivery and lower end-to-end delay for each packet. In the future, we plan to simulate MDUR and HMDUR with simple geographic data forwarding (such as that described in [26]) and compare their performance with shortest path routing. Tadeusz Wysocki received the M.S. Eng. degree with the highest distinction in telecommunications from the Academy of Technology and Agriculture, Bydgoszcz, Poland, in 1981. In 1984, he received his Ph.D. degree, and in 1990, he was awarded a D.Sc. degree (habilitation) in telecommunications from the Warsaw University of Technology. In 1992, he moved to Perth, Western Australia, to work at Edith Cowan University. He spent the whole of 1993 at the University of Hagen, Germany, within the framework of an Alexander von Humboldt Research Fellowship. After returning to Australia, he was appointed Wireless Systems Program Leader within the Cooperative Research Centre for Broadband Telecommunications and Networking. Since December 1998, he has been working as an Associate Professor at the University of Wollongong, New South Wales, within the School of Electrical, Computer and Telecommunications Engineering. The main areas of his research interest include indoor propagation of microwaves, code division multiple access (CDMA), space-time coding, and MIMO systems, as well as mobile data protocols, including those for ad hoc networks. He is the author or coauthor of four books, over 150 research publications, and nine patents. He is a Senior Member of the IEEE. Justin Lipman received a B.E. degree in computer engineering and a Ph.D. degree in telecommunications engineering from the University of Wollongong in 1999 and 2004, respectively. He is currently the Project Manager for Research and Innovation at Alcatel Shanghai Bell telecommunications labs in Shanghai, China.
His research interests are diverse but focus mainly on mesh, ad hoc, sensor, and 4G networks.
Late diagnosis and TCD8 immune response profile of cutaneous tuberculosis: a case report

Introduction: Cutaneous tuberculosis (CTB) is a rare form of extra-pulmonary tuberculosis that, when associated with late diagnosis, worsens the quality of life of affected individuals. This report presents a case of late diagnosis of CTB. Unusual clinical manifestations delayed the correct tuberculosis diagnosis for more than a year. The immune response elicited by this type of tuberculosis, as well as the factors that might have contributed to the delay in diagnosis, are evaluated and discussed. Methodology: Clinical evaluation and flow cytometric analyses of PBMC were performed for a case of CTB and a healthy individual as a control. Results: The M. tuberculosis-specific TCD8+ cell response was analyzed by flow cytometry and revealed cells positive for IL-17, IL-10, TGF-β and IDO. The CTB patient presented a higher percentage of these cells when compared to a healthy donor. However, the proportion of TCD8 cells positive for the important protective cytokine IFN-γ was decreased in the CTB patient. Conclusion: The assessment of the M. tuberculosis-specific TCD8+ immune response showed a regulatory/modulatory phenotype with a reduced IFN-γ response when compared to the healthy control, which could have contributed to the CTB infection.

INTRODUCTION
Tuberculosis is a chronic disease that affects millions of people in the world. It can affect every organ of the body and consequently has equally diverse clinical manifestations, but the pulmonary form is the most frequent. Cutaneous tuberculosis (CTB) is a rare form of extra-pulmonary tuberculosis that can be caused by Mycobacterium tuberculosis, M. bovis and, in certain conditions, by the attenuated M.
bovis-BCG [1]. Despite its rarity, correct diagnosis and management of CTB are fundamental, both for the patient and for public health. Additionally, long-lasting, misdiagnosed and/or untreated CTB can lead to the development of a variety of types of cancer [2]. The clinical manifestations of CTB depend on many factors: the route of infection, the host immune status, and drug resistance and/or bacterial pathogenicity [3][4][5][6]. When infection occurs through a laboratory accident in which clinical samples or M. tuberculosis cultures are injected directly into the skin, the CTB clinical morphological variants are verrucosa cutis (manifesting as a painless, solitary, purplish or brownish-red warty plaque that may extend peripherally, causing central atrophy, or form fissures that exude pus or keratinous material), lupus vulgaris (the plaque type begins as discrete, red-brown papules that coalesce and form plaques with a slightly elevated verrucous border and central atrophy) or skin ulcers (chancres). Lymphatic or hematogenous disseminated miliary TB may cause CTB variants such as scrofuloderma (firm, painless, subcutaneous, red-brown nodules overlying an infected focus, which gradually enlarge and suppurate, forming ulcers and sinus tracts that drain watery, purulent, or caseous material), lupus vulgaris (multiple, discrete, smooth 1-3 mm brown/red or brown-to-yellowish dome-shaped papules, usually located on the face), and tuberculous abscesses (an inflammatory papule develops at the inoculation site and evolves into a firm, shallow, non-tender, non-healing, undermined ulcer with a granulomatous base), as described before [2,7]. We report a case of cutaneous tuberculosis in a 50-year-old male patient with unusual clinical symptoms that consequently delayed the correct diagnosis for two years. The clinical evolutionary aspects of the disease and the cellular immune response against M.
tuberculosis antigen were evaluated. The difficulty of diagnosing extrapulmonary tuberculosis and the immune response associated with its development are reviewed and discussed.

CASE REPORT
A 50-year-old HIV-negative male reported the appearance of two papules of gelatinous consistency on the neck, with pus secretion, within a 12-month interval. Clinical suspicion of a tumor led to a cervical ultrasound examination, which revealed two solid, firm cystic lesions with enlarged lymph nodes. A CT scan also revealed enlarged lymph nodes in the neck. Removal of the lumps showed necrosis, redirecting the diagnosis towards an infection, and an empirical treatment was prescribed: amoxicillin (250 mg) for 10 days, but it produced no health improvement or resolution. A plain chest X-ray film showed normal results. After eight months, the patient returned to the hospital complaining of the appearance of other papules of similar morphology and size on the chest. Again, 10 days of amoxicillin (500 mg) was prescribed. Because the clinical signs were unaltered, two months later a chest CT scan was performed, which revealed the presence of cutaneous abscesses without lung alterations. A series of tests was conducted in order to identify the etiology of the illness, but all of them gave negative results (Table 1). At this time, the abscesses and lymph nodes were biopsied, and the samples from the abscesses showed chronic granulomatous inflammation with caseous necrosis. These lesions were positive for acid-fast bacilli.
Two years after the first clinical symptoms, and one year after the first health assistance, a clinical diagnosis of cutaneous tuberculosis was made. T-SPOT.TB and tuberculin skin tests (TST) were then performed, and both were positive. Culture and biochemical tests of the lymph node samples confirmed Mycobacterium tuberculosis. The patient underwent standard treatment for TB with rifampicin, isoniazid, pyrazinamide and ethambutol for 2 months, followed by rifampicin and isoniazid for 4 months.

MATERIALS AND METHODS
In order to understand the immune factors that could have contributed to the development of CTB and the unusual clinical findings, a 20 mL heparinized blood sample was collected at the time of the CTB diagnosis. As a control, blood from a healthy donor (TST-negative) matched by sex and age was collected.

Peripheral Blood Mononuclear Cells and Cell Culture
Peripheral blood mononuclear cells (PBMCs) were obtained from the CTB patient and the TST-negative control by Ficoll density gradient centrifugation (Ficoll-Paque Plus, GE Healthcare Bio-Sciences AB). The cells were washed twice in saline and distributed in 96-well plates at 2 × 10^5 cells/mL in RPMI 1640 medium (GIBCO, Invitrogen Corporation) supplemented with 2 mM glutamine, 10 nM pyruvate, 2 mM amino acids, 50 μg/mL penicillin, 50 μg/mL streptomycin and 10% heat-inactivated bovine serum. They were then incubated with recombinant Mycobacterium tuberculosis antigen (MPT-51, 1 μg/mL), previously shown to be immunogenic for pulmonary TB patients [8], or with PHA (1 μg/mL) as a positive control, and cultivated at 37°C with 5% CO2 for 96 hours in the presence of anti-CD3 (eBioscience). Cells stimulated with medium alone or with anti-IgG1 and anti-CD3 were used as controls.
Flow Cytometry
The following antibodies (Abs) were used for surface and intracellular staining for flow cytometry: IFN-γ-FITC (BD Biosciences Pharmingen), IL-10-PE, IL-17-PE, CD8-APC (eBioscience), TGF-β-PE (IQ Products) and IDO-FITC (Santa Cruz Biotechnology). The cells were also stained with PE-conjugated mouse IgG1 as an isotype control. For flow cytometry analysis, cells stimulated with medium alone, PHA or TB antigen were treated with Golgi Stop solution (containing monensin; BD Biosciences Pharmingen) and, after 4 h of further incubation, were harvested for analysis. To perform surface and intracellular staining, the cells were treated with PBS containing 0.05% azide for 20 min. After centrifugation (3000 rpm for 10 min), the cells were stained at 4°C for 18 minutes with the surface marker Ab (CD8-APC). Subsequently, the plates were washed twice with PBS containing 0.05% sodium azide and treated with PermFix (BD Pharmingen, San Jose, USA) for 18 min. For intracellular staining, the cells were permeabilized with Perm Wash buffer (BD Biosciences Pharmingen) and incubated at 4°C for 18 min with the following specific antibodies: IFN-γ-FITC (BD Biosciences Pharmingen), IL-10-PE, IL-17-PE (eBioscience), TGF-β-PE (IQ Products), and IDO-FITC (Santa Cruz Biotechnology). After washing, the samples were immediately analyzed on a FACSCanto II apparatus (Becton Dickinson, San Jose, USA) at Farmatec/UFG (Goiás, Brazil). At least 50,000 events were acquired per sample. Data analysis was performed using FACSDiva software (BD Biosciences, Becton Dickinson). All studies were approved by the Ethics Committee of the Hospital of the Federal University of Goiás (Goiás, Brazil), and informed consent was obtained from all participating subjects.
DISCUSSION
Cutaneous tuberculosis is re-emerging in countries with a high incidence of multidrug-resistant pulmonary tuberculosis and HIV, such as Brazil [3]. Manifestation of disseminated miliary tuberculosis and CTB is predominant in immunosuppressed individuals [7]. These individuals need rapid diagnosis and prompt treatment. The first described case of CTB in Brazil was reported around the year 1950 [9]. Since then, there have been rare cases reported in the literature [10][11][12]. Here we report a case of a patient who had a delayed diagnosis of CTB due to unusual clinical symptoms. Although rare cases of CTB have been reported worldwide, to our knowledge this is the first reported case in the Midwestern region of Brazil. In the case reported here, the patient did not present a previous history of pulmonary TB (the chest X-ray showed no TB scars), and the patient did not report any previous hospitalization or contact with TB patients, also ruling out direct skin contamination. His lesions were compatible with disseminated TB, and during the disease evolution the patient always reported pain in the affected regions. Diagnosis of CTB is a challenge and requires the association of clinical findings with laboratory diagnostic tests. The usual laboratory tests are biopsies and Mycobacterium sp. culture of the lesions, polymerase chain reaction (PCR) and the acid-fast bacilli test (AFB) to identify the causal agent in the lesions [2,13]. Serological tests and the TST (tuberculin skin test) have been used as complementary tests [2,14,15]. The delay in the diagnosis of CTB occurred mainly because of the unusual signs and symptoms presented by the patient. Initially, papules or lesions similar to a skin cancer may have misled the physician's interpretation, preventing a suspicion of M. tuberculosis infection [2]. Although the pulmonary disease is the most common form of TB, extrapulmonary TB affects 10% - 20% of individuals [15]. Host defense against M.
tuberculosis is mediated by a combination of innate and adaptive immune responses, and these responses consist of the activation of macrophages, CD4+ and CD8+ T lymphocytes and B cells in both the pulmonary and extra-pulmonary forms [14,16]. The protective immune response to TB depends mainly on the activation of T cells with a Th1 phenotype and macrophages, the latter being the primary effector cells. Th1 cells are a subset of TCD4+ cells responsible for producing a series of cytokines, such as IFN-γ and lymphotoxin-α, that are essential for the activation of other cells [8,15]. IFN-γ is the main cytokine involved in macrophage activation. Infected macrophages produce cytokines and chemokines that recruit other cells to the site of infection and, when previously activated by IFN-γ, macrophages upregulate membrane MHC proteins and lysosomal enzymes, among other molecules, to increase bacterial killing [15]. As shown previously, pulmonary and extra-pulmonary tuberculosis patients present specific TCD4+ IFN-γ-producing cells in response to PPD, among other M. tuberculosis antigens [16]; however, activation of TCD4+ cells does not directly correlate with disease protection or with the clinical TB forms, raising the hypothesis that other cell populations might be important in the establishment of the several clinical forms of TB. TCD8+ cells are known for their cytotoxic function; however, it has been shown that TCD8+ cells also produce IFN-γ, contributing to TB protection. Although some studies emphasize the role of TCD8+ cells in the murine model of infection, little is known about them in the human disease [18]. In the skin, the site of M.
tuberculosis infection in CTB, dendritic cells and resident macrophages could initiate the immune response and induce the activation of either intraepithelial or peripheral T cells. The intraepithelial lymphocytes are mainly CD8+ and have a direct cytotoxic action on infected cells [17]. Many subsets of peripheral TCD8+ cells have been characterized in several diseases [19,20], but in TB their functions are still unclear. Characterization of the CTB immune response has been done by identifying the cells present in CTB lesions [2,13]. In the present case, macrophages, lymphocytes and giant cells were observed in the histopathological findings (data not shown) only one year after the patient sought health assistance. Sehgal et al. (1992) described the presence of CD4 and CD8 T cells in the peripheral blood of CTB patients by immunohistochemistry and observed that the clinical manifestations affect the percentage of these cells in the blood [15]. Nevertheless, to date, no published data have examined the role of TCD8 cell subtypes in CTB. TCD8+ cells contribute by destroying cells infected with the bacilli through the release of granules containing perforin and granzyme, and also by activating other cells through the production of various cytokines. Studies have demonstrated the participation of TCD4+ regulatory cells and TCD8+ cells positive for IDO (an enzyme that promotes tryptophan depletion) in the inhibition and/or downregulation of the immune response [21]. The results presented here revealed a specific TCD8 immune response to M. tuberculosis antigens by the CTB patient. The presence of TCD8 cells positive for IL-10 and TGF-β, which characterizes a regulatory phenotype, could account for the unusual clinical manifestation [4]. Furthermore, specific TCD8+IDO+ cells were observed, which could be inhibiting the proliferation of other cells involved in the immune response to M.
tuberculosis. Although we cannot ascertain the immune status of the patient, these cytokines/molecules are known for their ability to downmodulate macrophage and T cell functions [4,21], suggesting that the activity of these cells might be modified in the CTB patient. CONCLUSION A case of a CTB patient with a delayed diagnosis is presented here. The unusual clinical manifestation was the main reason for the late diagnosis. The assessment of the M. tuberculosis-specific TCD8+ immune response revealed a regulatory/modulatory phenotype with a reduced IFN-γ response when compared to a healthy control. Figure 1. Histograms showing flow cytometry results of the percentage of specific TCD8+ cells positive for IFN-γ and IL-17 from a TST-healthy control and the CTB patient. Peripheral blood mononuclear cells (PBMCs) were obtained by Ficoll density gradient centrifugation (Ficoll-Paque Plus, GE Healthcare Bio-Science AB). The cells were washed twice in saline and distributed in 96-well plates at 2 × 10⁵ cells/mL in supplemented RPMI 1640 medium (GIBCO, Invitrogen Corporation). They were then incubated with recombinant Mycobacterium tuberculosis antigen (MPT-51, 1 μg/mL) or with PHA (1 μg/mL) as a positive control and cultivated at 37 °C with 5% CO₂ for 96 hours in the presence of anti-CD3 (eBioscience). After the culture, the cells were stained with the monoclonal antibodies anti-CD8 APC, anti-IFN-γ FITC, anti-IL-17 PE and anti-IgG2 PE as an isotype control, acquired on a FACSCanto and analyzed using FACSDiva software. Figure 2. Histograms showing flow cytometry results of the percentage of specific TCD8+ cells positive for IL-10, TGF-β and IDO from the TST-healthy control and the CTB patient. The PBMCs, cultured as described in Figure 1, were stained with the monoclonal antibodies anti-CD8 APC, anti-IL-10 PE, anti-TGF-β PE, anti-IDO FITC and anti-IgG1 PE.
Transmembrane Prostatic Acid Phosphatase (TMPAP) Interacts with Snapin and Deficient Mice Develop Prostate Adenocarcinoma The molecular mechanisms underlying prostate carcinogenesis are poorly understood. Prostatic acid phosphatase (PAP), a prostatic epithelial secretion marker, has been linked to prostate cancer since the 1930s. However, the contribution of PAP to the disease remains controversial. We have previously cloned and described two isoforms of this protein, a secretory (sPAP) and a transmembrane type-I (TMPAP). The goal of this work was to understand the physiological function of TMPAP in the prostate. We conducted histological, ultra-structural and genome-wide analyses of the prostate of our PAP-deficient mouse model (PAP−/−) with C57BL/6J background. The PAP−/− mouse prostate showed the development of slow-growing non-metastatic prostate adenocarcinoma. To uncover the underlying mechanism, we identified PAP-interacting proteins by yeast two-hybrid assays, and a clear result was obtained for the interaction of PAP with snapin, a SNARE-associated protein that binds Snap25, facilitating the vesicular membrane fusion process. We confirmed this interaction by co-localization studies in TMPAP-transfected LNCaP cells (TMPAP/LNCaP cells) and by in vivo FRET analyses in transiently transfected LNCaP cells. The differential gene expression analyses revealed the dysregulation of genes known to be related to synaptic vesicular traffic. Both TMPAP and snapin were detected in isolated exosomes. Our results suggest that TMPAP is involved in endo-/exocytosis and that disturbed vesicular traffic is a hallmark of prostate adenocarcinoma. Introduction The association between prostate cancer and serum prostatic acid phosphatase (PAP; ACPP; EC 3.1.3.2) has been known for more than 70 years [1]. Nevertheless, the molecular mechanisms underlying this association are still poorly understood.
In spite of this, the connection between secreted PAP and prostate cancer contributed to the development of Sipuleucel-T, the first FDA-approved vaccine for cancer therapy, targeting PAP-expressing cells [2], even though the expression of PAP is down-regulated in advanced/androgen-independent prostate cancer tissue [3]. Therefore, our goal is to elucidate the pathways in which PAP, and in particular its transmembrane isoform, is involved. PAP is a histidine acid phosphatase [4] of which two isoforms have been cloned, the secreted (sPAP) and the transmembrane type-I (TMPAP). Both are splice variants of the same gene and are widely expressed in different tissues, in both sexes [5]. The current evidence does not support the existence of a third, cytosolic cellular form of PAP, which has been suggested in the literature but never cloned [6][7][8]. Topologically, TMPAP contains an N-terminal phosphatase activity domain, which is extracellular when TMPAP is in the plasma membrane and intra-luminal when it is trafficking in vesicles, and a C-terminal domain with a cytosolic tyrosine-based endosomal-lysosomal (including MVE) targeting signal motif (YxxΦ) [5]. TMPAP also co-localizes with flotillin and LAMP2 [5], which are known markers for exosomes [9,10]. The prostate gland is fundamentally a secretory organ, and it is known that the secretion of specialized exosomes (prostasomes) is essential for the maintenance of the spermatozoa [11]. Exosomes are nanovesicles originating from multivesicular endosomes (MVE), which contain protein, lipid, DNA, RNA and/or microRNA molecules [12]. It has also been shown that exosomes are involved in the promotion of cancer cell proliferation and survival [13], and an increased level of prostasomes (exosomes) has been detected in the plasma of prostate cancer patients [14]. PAP exerts its phosphatase activity in vitro against β-glycerophosphate [15], lysophosphatidic acid [16] and phosphoamino acids [17], and has 5′-nucleotidase activity [18].
In vivo, the ecto-5′-nucleotidase activity of PAP is responsible for dephosphorylating adenosine monophosphate (AMP) to adenosine [18,19], leading to the activation of A1-adenosine receptors in the dorsal root ganglia (DRG) [19]. PAP regulates the levels of adenosine and phosphatidylinositol 4,5-bisphosphate [PI(4,5)P2], an essential regulator of vesicular traffic [20], reducing sensitivity to painful stimuli [19,21]. SNARE proteins comprise a large family found in yeast and mammalian cells, whose primary function is to mediate docking and fusion of vesicles with cell membranes [22] in regulated endo-/exocytosis [23]. Snapin is a SNARE-associated protein [24] that interacts with Snap25, Snap23 or Snap29 and increases the binding of the calcium sensor synaptotagmin to the SNARE complex [25]. Snapin also forms part of the BLOC1 protein complex, which is necessary for the biogenesis of vesicles in the endosomal-lysosomal pathway [26]. Increasing evidence shows that snapin is important in retrograde axonal transport, late endosomal-lysosomal trafficking and glucose-induced insulin exocytosis. In mediating retrograde axonal transport, snapin acts as a dynein adaptor protein for BDNF-TrkB (brain-derived neurotrophic factor - tyrosine kinase receptor B) activated signaling complexes. This interaction leads to the delivery of TrkB signaling endosomes from axonal terminals to cell bodies, which is an essential mechanism for dendritic growth of cortical neurons [27]. Moreover, snapin deficiency in neurons also leads to accumulation of immature lysosomes due to impaired delivery of cargo proteins from late endosomes to lysosomes [28]. In addition, snapin, as a target of protein kinase A (PKA), was found to be a critical regulator of glucose-stimulated insulin exocytosis in pancreatic β-cells by promoting the interaction and assembly of the insulin secretory vesicle-associated proteins Snap25, collectrin and Epac2 [29].
The mouse prostate consists of three different lobes: anterior (AP), dorsolateral (DLP) and ventral prostate (VP), and it does not show spontaneous development of neoplasia [30]. The mouse prostate lobes have characteristic histology, which has been described previously [31,32]. Briefly, all prostate lobes show a monolayer epithelium with eosinophilic columnar cells and eosinophilic secretion, which is paler in the VP than in the AP and DLP. Each duct in the lobes is surrounded by a thin fibromuscular sheet composed mainly of smooth muscle cells and collagen fibers. In the AP epithelium, the cell nucleus is central and the epithelium has a high number of infoldings and papillary structures. The DLP epithelium has a central to basal nucleus and a moderate degree of infolding. The VP epithelium is characterized by a basal nucleus and focal infoldings. To understand the physiological function of PAP, we studied the prostate of our PAP-deficient mouse model (PAP−/−) [33]. The PAP−/− mouse prostate showed disturbed vesicular trafficking, loss of cell polarity and development of slow-growing non-metastatic prostate adenocarcinoma. Here we report the interaction of TMPAP with snapin and suggest that TMPAP regulates endo-/exocytosis and that the disruption of these processes is a hallmark of prostate adenocarcinoma. Ethics statement The animal protocols were approved by the Animal Experimentation Committee of the University of Oulu and ELLA - The National Animal Experiment Board of Finland. The project license numbers are 044/11 and STH705A/ESLH-2009-08353/Ym-2. Mice Mice deficient in PAP were generated by replacing exon 3 (ACPP Δ3/Δ3) of the prostatic acid phosphatase gene (ACPP, PAP) with the neo gene as described earlier [33], thereby abolishing the expression of both PAP isoforms. The fertility status of the PAP−/− mice was not affected by the gene modification. PAP−/− mice were backcrossed to the C57BL/6J strain (Harlan Laboratories Inc.)
for 16 generations to obtain a homogeneous background. Age-matched C57BL/6J male mice were used as controls in all the experiments. (Displaced figure legend: acini were filled with epithelial cells (black arrowhead); dyscohesive cells with double nuclei were present (white arrows), as well as sites of microinvasion by hyperchromatic epithelial cells with prominent nucleoli (black arrow); cribriform structures (white arrowhead) and blood vessels among neoplastic epithelial cells (*) were also observed.) Transmission electron microscopy DLP samples from age-matched PAP−/− and PAP+/+ mice were fixed in a mixture of 1% glutaraldehyde and 4% formaldehyde in 0.1 M phosphate buffer for TEM. The samples were post-fixed in 1% osmium tetroxide, dehydrated in acetone, embedded in Epon Embed 812 (Electron Microscopy Sciences) and analyzed at the Biocenter Oulu EM core facility using a Philips 100 CM Transmission Electron Microscope with a CCD camera. Yeast two-hybrid analysis To screen for interacting partners of human TMPAP, yeast two-hybrid screening was performed using the Matchmaker Gal4 Two-Hybrid System 3 (Clontech) in accordance with the manufacturer's instructions. The bait construct consisted of the coding region of human TMPAP (GenBank accession BC007460, nucleotides 51-1304, except that the starting methionine was changed to valine) cloned in frame into the NcoI/SmaI sites of pGBKT7 using PCR-generated linkers. A human thymus cDNA library cloned in pACT2 (Clontech) was used as the prey. The bait and prey plasmids were co-transformed into the Saccharomyces cerevisiae Mav 203 strain according to Clontech's two-hybrid protocols. Inserts of positive clones were amplified by PCR, and the DNA was automatically sequenced. The Förster resonance energy transfer (FRET) analysis The FRET variant acceptor photobleaching was used.
In this technique, the efficiency of energy transfer between two molecules (and consequently the interaction between them) is measured by comparing the fluorescence of the donor molecule before and after the selective photobleaching of the acceptor molecule [34]. The human TMPAP-GFP and control GFP plasmid constructs have been previously described [5]. Human snapin (NM_012437, nt 76-486) was cloned into the pDsRed-Monomer-C1 vector between the SalI/BamHI restriction sites. LNCaP cells were obtained from the American Type Culture Collection (ATCC). Cultured cells were mounted 24 hours after transfection and epifluorescent images were acquired using an Olympus CellR imaging system with a 60× oil immersion NA 1.45 objective. Images were collected with a CCD camera (Orca, Hamamatsu). The system was equipped with automated filter wheels for excitation filters and emission beam-splitter/emission-filter cubes for epifluorescence imaging. GFP fluorescence was excited at 450 nm and collected at 510/40 nm. DsRed fluorescence was excited at 575 nm and collected at 640/50 nm. Acceptor fluorescence was bleached for 5 minutes at maximal burner power. Images were quantified and processed using Olympus Biosystems AnalySIS software, ImageJ (freely available at http://rsb.info.nih.gov/ij/) and ImagePro 5.1 (Media Cybernetics). Background fluorescence was subtracted prior to calculations. The FRET efficacy, defined as the percentage of donor fluorescence increase, was calculated with the following equation: E = 1 − (Ib/Ia), where Ib is the fluorescence intensity of the donor before photobleaching and Ia is the post-bleach fluorescence intensity.
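The acceptor-photobleaching calculation above is simple arithmetic on background-subtracted donor intensities. A minimal sketch (hypothetical function name and example values, not the paper's actual analysis pipeline) might look like:

```python
def fret_efficiency(donor_pre_bleach: float, donor_post_bleach: float) -> float:
    """Acceptor-photobleaching FRET efficiency, E = 1 - (Ib / Ia).

    Ib: donor intensity before bleaching the acceptor.
    Ia: donor intensity after bleaching. If FRET occurred, the donor
    de-quenches, so Ia > Ib and E > 0.
    """
    if donor_post_bleach <= 0:
        raise ValueError("post-bleach donor intensity must be positive")
    return 1.0 - donor_pre_bleach / donor_post_bleach

# Illustrative values only: a donor that brightens from 900 to 1000
# arbitrary units after acceptor bleaching gives E = 0.1 (10%).
print(fret_efficiency(900.0, 1000.0))
```

Note that with no donor de-quenching (Ib = Ia) the efficiency is zero, and slightly negative values, as reported for the negative control, simply reflect measurement noise.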
Histology, immunohistochemistry, proliferation and apoptosis analyses, microarray analyses, generation of stably transfected LNCaP cells, immunofluorescence and co-localization studies, comparative genomic hybridization (CGH), isolation of exosomes and Western blot analyses The detailed methodology can be found in the Supplementary Methodology in File S1. Accession codes Gene expression files containing microarray raw data can be accessed from the ArrayExpress repository database (accession number E-MTAB-1191). PAP-deficiency in mouse prostates leads to development of prostate adenocarcinoma PAP-deficiency led to slow development of prostate neoplasia in the DLP and AP. The progressive changes in the mouse DLP were observed in all the PAP−/− mice examined (n = 8), with hyperplastic growth detected already at the age of 3 months, followed by mouse prostatic intraepithelial neoplasia (mPIN) at 6 months and prostate adenocarcinoma at 12 months (Fig. 1 and 2A). The follow-up of the disease in mice spanned until 26 months of age, when other pathologies arose owing to strain background or mouse aging. Figure 7. TMPAP is involved in endo-/exocytosis (proposed mechanism). TMPAP synthesized in the endoplasmic reticulum is transported in vesicles to the plasma membrane through the trans-Golgi network (TGN). After the vesicle docking and fusion events leading to release of vesicle content, TMPAP inserted in the plasma membrane exerts its phosphatase function over AMP. The resulting product adenosine (Ado) activates the adenosine receptors, which are GPCRs: A1 or A3 with Gαi (inhibitory G-protein α-subunit) specificity, leading to the inhibition of adenylate cyclase (AC) activity, and A2 adenosine receptors with Gαs (stimulatory G-protein α-subunit), producing the stimulation of AC activity. Activated AC produces cAMP, which activates PKA, responsible for the phosphorylation of snapin.
The turnover is completed by clathrin-mediated endocytosis of SNARE components and TMPAP for recycling and degradation in lysosomes via the endosomal-lysosomal pathway. From early endosomes, the cargo can be sorted to late endosomes or to MVE, which can follow the route leading to exosome release. Additional dephosphorylation events by TMPAP can occur while trafficking between different compartments. From late endosomes, TMPAP can go to lysosomes or back to the TGN via the retrograde pathway. ATP: adenosine triphosphate, ADP: adenosine diphosphate, AMP: adenosine monophosphate, Ado: adenosine, TGN: trans-Golgi network, P: phosphate group, AP-2: adaptor protein complex 2, ADORA: adenosine receptor A (types A1, A2 and A3), AC: adenylate cyclase, Gαs, Gαi, Gβ, Gγ: G-protein subunits, VDCC: voltage-gated calcium channel. Synaptobrevin, syntaxin and SNAP25 are SNARE proteins. PI(4,5)P2: phosphatidylinositol 4,5-bisphosphate. doi:10.1371/journal.pone.0073072.g007 All the PAP−/− mice analyzed at the age of 12 months had developed prostate adenocarcinoma (n = 8, Fig. 2B). Pathological acini were filled with non-cohesive pleomorphic epithelial cells with enlarged hyperchromatic nuclei and prominent nucleoli. The presence of neoplastic acinar cells in the lumen was confirmed with pan-cytokeratin staining (Fig. S1 in File S1). The fibrotic stroma surrounding the acini appeared to be invaded by cells, with bulging areas and fusion of acini. Cribriform structures were also observed, in addition to numerous blood vessels among neoplastic epithelial cells. The histological pattern was consistent with locally invasive prostate adenocarcinoma. In the 24 month-old PAP−/− mice, we observed an increased amount of cells in the AP lumen (Fig. 2A) and a clear invasion of the surrounding areas, as well as an increased amount of inflammatory cells (n = 5, Fig. 3C).
However, we did not detect metastatic lesions in other studied organs, such as brain, liver, lungs and lymph nodes, at any age analyzed. The breakdown of the fibromuscular sheath and invasion of the epithelial cells into the stroma were also detected with smooth muscle α-actin (SMA) staining (Fig. 3A). Bulging of the cells could be seen in atypical acini, as well as adenocarcinoma invasion. Crowding of inflammatory cells was detected in sites of microinvasive adenocarcinoma. Important changes observed by transmission electron microscopy (TEM) included irregularities and invaginations of the basement membrane into the epithelium (Fig. 3B), in addition to the presence of lysosomes and MVE on the basal side of the cell. The prostatic epithelium of PAP−/− mice lost the regular structure of a uniform columnar monolayer, transforming into a cuboidal multilayer epithelium with hyperchromatic nuclei and the presence of pseudo-lumens as a sign of loss of cell polarity. Further analysis of prostate ultrastructural changes showed an increased number of electron-lucent enlarged vacuoles (Fig. 4A and B), and bursting of luminal exosome-like vesicles of 30-80 nm in diameter (Fig. 4B). Exosome-like vesicles in the intercellular space and disintegration of the apical microvilli also indicate loss of cell polarity (Fig. 4B and C). Lamellar body-like structures were observed in PAP−/− mouse DLP cells and their contents secreted into the lumen (Fig. 4D and E). Due to the gradually increased number of cells in the prostate acini, we determined the status of proliferation and apoptosis in the tissue. Proliferation was significantly increased in the DLP of three- (P-value = 4.3 × 10⁻³, n = 4), six- (P-value = 1.3 × 10⁻¹⁵, n = 4) and 12 month-old (P-value = 3.9 × 10⁻⁵, n = 4) PAP−/− mice, but the apoptosis status was not different between genotypes at the same time points (P-values 0.3, 0.1 and 0.9, respectively, n = 4) (Fig. 5 and Tables S1-S4 in File S1).
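The genotype comparison of proliferative versus non-proliferative cell counts is a two-proportion comparison; the paper's exact statistical test is given in the supplementary methodology, so the sketch below is only an illustration of the general idea, using a standard two-proportion z statistic on made-up counts:

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-proportion z statistic: x positive (e.g. proliferating) cells
    out of n counted, for each of two groups (e.g. genotypes).

    Uses the pooled-proportion standard error; large positive values mean
    group 1 has a higher proportion than group 2.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 60/100 proliferating cells vs 40/100.
print(two_proportion_z(60, 100, 40, 100))  # ~2.83, i.e. a clear difference
```

The z statistic is then converted to a P-value against the standard normal distribution; identical proportions give z = 0 by construction.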
The disturbed exocytosis observed in the ultrastructural studies of PAP−/− prostates and the differential expression of genes related to vesicle fusion, such as Snap25, Syt1, Syt4 and Cplx1, led us to search for proteins interacting with TMPAP. The yeast two-hybrid screening of a human thymus library detected seven out of 15 clones expressing snapin (NM_012437.3), a clear candidate for interaction with TMPAP. To validate the yeast two-hybrid result, double-immunofluorescence staining of PAP and snapin in TMPAP/LNCaP cells showed co-localization of these two proteins in vesicular structures and the cell membrane (Fig. 6A). The quantification studies displayed a relatively low Pearson's correlation coefficient when the whole cell was analyzed (0.485 ± 0.012). However, when the co-localization was quantified exclusively in the cell lamellipodia, the Pearson's correlation coefficient reached a value of 0.680 ± 0.013. This coefficient value not only shows that TMPAP co-localized with snapin but could imply an interaction between TMPAP and snapin in these cell regions. Therefore, to confirm this hypothesis of interaction, the FRET variant acceptor photobleaching, which gives in vivo proof of physical protein-protein interaction, was used to determine the interaction between TMPAP and snapin. Data analysis revealed significant FRET between TMPAP and snapin (FRET efficacy 9.3 ± 1%, n = 12 cells) compared to experiments with the negative control (−0.4 ± 0.7%, n = 9 cells, P < 0.0001), while in experiments with the positive control the FRET efficacy reached a level of 37.7 ± 6.5% (Fig. 6B). Since our previous results showed the co-localization of PAP and flotillin, a protein also used as an exosomal marker [9], we corroborated by Western blot the presence of PAP in isolated exosomes produced by stably transfected TMPAP/LNCaP cells. The results showed the presence of TMPAP as well as snapin in the exosomal fraction, in addition to flotillin and CD13, a prostasomal marker [35] (Fig.
4F). Discussion Prostate cancer is a disease of complex etiology, in which genetic and epigenetic mechanisms are involved. PAP was the first prostate cancer marker, and its usefulness was based on the assessment of its serum activity levels. We have previously shown that in addition to the secretory PAP, a transmembrane isoform is widely expressed in different mouse organs, such as prostate, salivary glands, thymus, lung, kidney and brain, amongst others. TMPAP is also present in androgen-sensitive prostate cancer cells (LNCaP), but absent in androgen-insensitive prostate cancer cells (PC3) [5]. In all the PAP−/− mice, progressive changes were observed in the prostatic tissue, leading to the development of prostate adenocarcinoma at the age of 12 months. Despite the presence of prostate cancer, we have not detected any metastatic lesions. Histologically, the mouse DLP has been considered analogous to the peripheral zone of the human prostate, where the majority of adenocarcinomas reside [36]. In this regard, Roy-Burman et al. suggested that mouse models carrying genetic modifications that affect tumor development in the DLP are more significant for studies on those pathologies associated with the peripheral zone of the human prostate [37]. In humans, it has been observed that PTEN (phosphatase and tensin homolog deleted on chromosome 10) is downregulated in prostate cancer tissue specimens [38]. PTEN antagonizes PI3K (phosphatidylinositol 3-kinase) activity by dephosphorylating phosphatidylinositol (3,4,5)-trisphosphate, an activator of the AKT pathway leading to cell survival. The PTEN prostate cancer mouse model showed development of prostate cancer that resembles the stages of the disease in humans, starting with hyperplasia at 4 weeks of age, followed by mPIN and prostate microinvasive adenocarcinoma in all prostatic lobes, and finally metastasizing to different organs at 12 weeks of age [39].
However, the genetic background of the mouse might affect the phenotype. The pattern of disease progression described in the PTEN mouse model is observed in mice of mixed genetic background. However, when a more homogeneous genetic background (up to eight backcrossings into the C57BL/6 strain) was studied, mPIN appeared at 2 months of age and invasive adenocarcinoma at 12 months; nevertheless, no metastatic lesions were observed [40]. The deficiency of PAP expression in our mice leads to a phenotype similar to that observed in the PTEN-deficient mouse model of prostate cancer, both with the same genetic background (C57BL/6). The only difference between these mouse models is that there are no pathological changes in the VP lobe of PAP−/− mice. The ultrastructural studies of PAP−/− mouse prostates showed a high amount of nanovesicles, compatible in size with exosomes. In accordance with this finding, other authors have reported a significant increment of exosomes in the plasma of prostate cancer patients compared to healthy donors or benign hyperplasia [14]. In addition, King et al. showed that augmented levels of hypoxia inside solid tumors increased the release of exosomes [41]. Flotillin as well as bis(monoacylglycero)phosphate (BMP or LBPA) are present in exosomes [42], and we have previously shown that PAP co-localized with flotillin in LNCaP cells and with BMP in human prostate cancer samples [5]. We now confirm the presence of TMPAP in exosomes from TMPAP/LNCaP cells; in addition, these exosomes also contained snapin, flotillin and CD13. The microarray results indicate significant changes in the expression of genes related to the release of neurotransmitters and vesicular traffic in the prostates of PAP−/− mice. The interaction between TMPAP and snapin detected in LNCaP cells is an indication that disturbed exocytosis is involved in the phenotype we observed.
Vesicular traffic is an intrinsic factor in the regulation of cell polarity, which is gaining attention as a determinant of tumor development [43]. Recently, exosomes and exosome-like vesicles have received increased attention in relation to tumor development and progression. In particular, the release of exosomes containing biologically active molecules, such as microRNAs, DNAs, RNAs and proteins, has been reported to have an impact on cell-cell communication. The above-mentioned tumor suppressor PTEN is exported in exosomes while maintaining its phosphatase activity in the recipient cells [12]; this could imply that cells that do not express a certain protein could obtain it from others. Our results, in addition to our previous knowledge of the presence of PAP in the endosomal-lysosomal pathway [5], highlight a new role for PAP in prostatic vesicular traffic that has not been described before. Considering the topology of TMPAP, this enzyme cannot exert cytosolic acid phosphatase activity and consequently is not able to dephosphorylate cytosolic tyrosines of the epidermal growth factor receptor (EGFR), as has been previously suggested [7]. Therefore, we assume this is not the pathway leading to the observed prostate adenocarcinoma in the PAP−/− mice. In Figure 7, we summarize our hypothesis about the modulatory effect of TMPAP on endo-/exocytosis and the mechanisms involved in the physical interaction between TMPAP and snapin. Previous reports have shown that the interaction of membrane proteins with snapin negatively regulates exocytosis by affecting the coupling of synaptotagmin to the SNARE complex [44] or by reducing snapin phosphorylation [45], which is needed to strengthen the interaction between synaptotagmin and Snap25. The phosphorylation of snapin by the cyclic adenosine monophosphate (cAMP)-dependent kinase PKA is a crucial step for SNARE assembly in pancreatic β-cells, leading to glucose-induced exocytosis [29].
Our results are consistent with these findings, and we built a mechanistic model representing the interaction between TMPAP and snapin. According to Buxton et al., 70% of snapin is found in the cytosol [46], where its phosphorylation by PKA occurs [47]. This process would be delayed if snapin is bound to TMPAP, which could be a first regulatory effect on secretion. A second effect could involve the 5′-ectonucleotidase activity of TMPAP, responsible for the production of adenosine from AMP [19]. Adenosine receptors are G-protein-coupled receptors (GPCRs) known to regulate neurotransmission/exocytosis [48]. In this case, adenosine could bind to its cognate receptors A1, A2 and A3, which modulate cAMP levels [49,50], consequently modifying PKA activity and the snapin phosphorylation status. Moreover, the interaction between TMPAP and snapin at the plasma membrane could block the interaction between the cytosolic YxxΦ motif in TMPAP and the adaptor protein complex-2 required for clathrin-based endocytosis [51]. This effect could delay the internalization of TMPAP and extend the time that TMPAP is present on the cell surface exerting its catalytic activity, producing a sustained adenosine effect on adenosine receptors. According to this model, the lack of TMPAP would lead to the observed dysregulation of vesicular traffic, exocytosis and release of exosomes in the PAP−/− mouse prostate. This could establish a significant starting point for uncontrolled cell proliferation and the development of prostate adenocarcinoma. Interestingly, in the DRG of PAP−/− mice the levels of PI(4,5)P2 are increased when compared to wild-type mice [21]. Considering that PI(4,5)P2 is the main regulator of clathrin-based endocytosis [20], our observation of increased exocytosis in the prostates of PAP−/− mice requires a concomitantly increased endocytosis mechanism to maintain cell membrane homeostasis.
In summary, this PAP−/− mouse model shows that TMPAP is required for the normal function of the prostate in mice, and that its deficiency leads to prostate adenocarcinoma. This suggests that TMPAP acts as a regulator of the endo-/exocytosis mechanism. Supporting Information File S1 The supplementary file includes: Figure S1: Pan-cytokeratin immunohistochemistry of DLP from 12 month-old animals. Table S1: Proliferative cell count statistics. Table S2: Proliferative and non-proliferative cell counts. Table S3: Apoptotic cell count statistics. Table S4: Apoptotic and non-apoptotic cell counts. Table S5: Significant ontological groups in the cellular component category obtained with Genomatix Bibliosphere software from two-color microarray experiments. Table S6: Significant ontological groups in the biological process category obtained with Genomatix Bibliosphere software from two-color microarray experiments. Supplementary methodology. (DOCX)
A meta-analysis of Watson for Oncology in clinical application We used meta-analysis to systematically evaluate the consistency of treatment schemes between Watson for Oncology (WFO) and multidisciplinary teams (MDT), and to provide a reference for the practical application of artificial intelligence clinical decision-support systems in cancer treatment. We systematically searched databases for articles about the clinical applications of Watson for Oncology and conducted a meta-analysis using RevMan 5.3 software. A total of 9 studies were identified, including 2463 patients. When the MDT is consistent with WFO at the 'Recommended' or the 'For consideration' level, the overall concordance rate is 81.52%. Among the cancer types, breast cancer had the highest concordance and gastric cancer the lowest. The concordance rate in stage I-III cancer is higher than that in stage IV, but the result for lung cancer is the opposite (P < 0.05). Similar results were obtained when MDT was consistent with WFO only at the 'Recommended' level. Moreover, the concordance for estrogen- and progesterone-receptor-negative breast cancer patients, colorectal cancer patients under 70 years old or with ECOG 0, and small cell lung cancer patients is higher than that for estrogen- and progesterone-receptor-positive breast cancer patients, colorectal cancer patients over 70 years old or with ECOG 1-2, and non-small cell lung cancer patients, with statistical significance (P < 0.05). Treatment recommendations made by WFO and MDT were highly concordant for the cancer cases examined, but this system still needs further improvement. Owing to the relatively small sample sizes of the included studies, more well-designed, large-sample studies are still needed. With the rapid development of human society, cancer-related knowledge is growing exponentially, which has created a knowledge gap for clinicians [1].
With the increasing understanding of each patient, more and more information needs to be absorbed from the literature to provide evidence-based cancer treatment. Research shows that clinicians can spend only 4.6 h a week acquiring the latest professional knowledge 2 , resulting in a relative delay in information absorption and a widening gap between the results achieved by academic research centers and actual practice 3 . Compared with physicians in other clinical disciplines, clinical oncologists urgently need timely evidence-based medical knowledge to support patients' personalized treatment plans. Consequently, clinicians need new tools to bridge this knowledge gap and to support and adopt new treatment methods in an evidence-based manner, so that more patients can benefit from social investment in research and development 4,5 . Artificial intelligence (AI) first appeared in the early 1950s and refers to the creation of intelligent machines with functions and reactions like those of human beings 6 . The goal of AI is to replicate the human mind, that is, to perform tasks such as identification, interpretation, reasoning and transformation; it excels in areas where human beings do not, such as absorbing large amounts of qualitative information and recognizing patterns within it 7,8 . AI has now gradually entered medicine. Image recognition using AI has been successfully applied to image-based clinical diagnosis, such as melanoma recognition in dermoscopy images 9 or detection of diabetic retinopathy in retinal fundus photographs 10 , and more and more AI research is also being carried out in oncology [11][12][13][14] . AI aims to enhance human capabilities, enabling humans to apply increasingly complex knowledge to clinical decision-making and to bring increasingly diverse and complex patient data into personalized management. 
Because cognitive computing technology is a recent development, its application in clinical oncology still lacks large-scale data, and there are clinical differences across regions and ethnic groups. Watson for Oncology (WFO), an artificial intelligence decision-support system, was developed by IBM Corporation (USA) with the help of top oncologists from Memorial Sloan Kettering Cancer Center (MSK). It took more than 4 years of training, based on National Comprehensive Cancer Network (NCCN) cancer treatment guidelines and more than 100 years of clinical cancer treatment experience in the United States, and can recommend appropriate chemotherapy regimens for specific cancer patients. For supported cases, the treatment recommendations provided by WFO are divided into 3 groups: Recommended, i.e. green "buckets", representing a treatment supported by strong evidence; For consideration, i.e. yellow "buckets", representing a potentially suitable alternative; and Not recommended, i.e. red "buckets", standing for a treatment with contraindications or clear evidence against its use. To compare the consistency between WFO and clinicians in different countries and regions on a large scale and across various aspects, many hospitals have formed Multidisciplinary Teams (MDTs), composed of oncologists, surgeons, pathologists, radiologists, etc. They discuss the advantages and disadvantages of each candidate treatment scheme and finally determine the treatment scheme. A case is defined as concordant if the MDT recommendation falls in the 'Recommended' or 'For consideration' categories of WFO; otherwise, it is discordant. The results showed obvious differences in the concordance rate across regions and cancer types. So far, no published meta-analysis has compared the consistency of WFO and MDT. 
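The concordance rule just described can be sketched as a small helper. This is a minimal illustration assuming simple string labels for WFO's buckets; the function and parameter names are hypothetical, not part of WFO's actual interface:

```python
def is_concordant(mdt_bucket: str, strict: bool = False) -> bool:
    """Return True if the MDT-chosen regimen counts as concordant with WFO.

    mdt_bucket: the WFO bucket containing the regimen the MDT chose
                ("Recommended", "For consideration", "Not recommended").
    strict:     if True, only the "Recommended" bucket counts — the
                'Recommended'-level analysis used later in this paper.
    """
    accepted = {"Recommended"} if strict else {"Recommended", "For consideration"}
    return mdt_bucket in accepted

# A regimen WFO flags only as "For consideration" is concordant at the
# broader level but not at the strict 'Recommended' level.
assert is_concordant("For consideration") is True
assert is_concordant("For consideration", strict=True) is False
```

The two-level definition matters because, as the results below show, the overall concordance rate drops from roughly 81.5% to roughly 52.7% when the strict rule is applied.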
Therefore, this study aims to systematically review the literature and provide the latest evidence on WFO's clinical use, to analyze the consistency, advantages and disadvantages of WFO's treatment schemes for cancer patients compared with those of clinicians, and to further summarize WFO's clinical practice, so as to provide references for its further clinical application. Materials and methods This meta-analysis is registered in the International Prospective Register of Systematic Reviews (PROSPERO) trial registry (CRD42020199418). Inclusion and exclusion criteria. Studies meeting the following criteria were included: (a) the clinical use of WFO is the focus, regardless of cancer type; (b) the study contains at least one subgroup of analysis data; (c) the study is an original research article published in Chinese or English, regardless of nationality; (d) the study compares the consistency of treatment schemes determined by WFO and MDT; and (e) there is no limit on whether the article is a prospective or retrospective study or whether blinding was used. The major exclusion criteria were: (a) the study only describes the simple use of WFO and does not involve any data, or contains only WFO research and development process data; (b) the article does not compare the treatment schemes of WFO and MDT; and (c) book chapters, comments, case reports, and other forms without detailed data. Data extraction and quality assessment. Two investigators evaluated the quality of the literature and extracted the data independently. Any disagreements were discussed and referred to an additional independent arbitrator for resolution. Missing original data were supplemented by contacting the original authors via e-mail. 
The data were extracted with a standardized table, including (a) general information, such as the title of the publication, first author's surname, the original document number and source, and year of publication and country; (b) research characteristics, such as the eligibility of the research, the characteristics of the research object, the design scheme and quality of the literature, the specific contents and implementation methods of the research measures, relevant bias prevention measures, and the main test results; and (c) data needed for this meta-analysis, such as the total number of cases in each group and the number of cases with events, collected as binary classifications. According to the Cochrane Reviewers' Handbook 6.1 (http://www.cochrane-handbook.org), the quality of the literature was evaluated on 7 aspects: random sequence generation (selection bias), allocation concealment (selection bias), blinding of participants and personnel (performance bias), blinding of outcome assessment (detection bias), incomplete outcome data (attrition bias), selective reporting (reporting bias) and other bias, with each judged as "yes" (low bias), "no" (high bias) or "unclear" (lack of relevant information or uncertainty of bias). Review Manager statistical software (RevMan, version 5.3.5, Cochrane Collaboration Network) was applied to assess the risk of bias and provide visual results. Statistical analysis. RevMan 5.3.5 was also applied to analyze the extracted data. The main purpose of this study was to compare the consistency of treatment schemes determined by WFO and MDT across cancer types, so the statistical data were dichotomous (concordant or discordant). In the analysis, odds ratios (ORs) and 95% confidence intervals (CIs) were computed for clinicopathological features (TNM stage, histopathological category, etc.). 
The Q test and the I² statistic were used to judge the heterogeneity among the studies. When P < 0.05 or I² > 50%, there was significant heterogeneity among the studies; otherwise, there was none. When there was no statistical heterogeneity between studies, the fixed-effect model was used to merge the results. If there was statistical heterogeneity, we analyzed its causes and adopted subgroup analysis or sensitivity analysis. For studies whose heterogeneity still could not be eliminated, the data could be combined from the perspective of clinical significance: a random-effect model was adopted for the combined analysis, and the results were interpreted carefully. If the data provided could not be meta-analyzed, only descriptive analysis was done. The characteristics and quality assessment of the included studies are shown in Table 1 and Supplementary Figs. 1 and 2, respectively. Of the 9 studies, 7 studies [15][16][17][19][20][21][22] clearly defined the method of selecting cases, and the other studies did not indicate the "randomization" of the included samples. In all studies, WFO and MDT treatment schemes were formulated successively for the same patient, so there was no allocation bias. 7 studies 15,16,18-22 did not indicate a specific blinding plan or did not adopt blinding, but the result judgment and measurement would not be affected. Although two studies 16,22 did not provide detailed four-category data, this did not substantially affect our meta-analysis, so we believed that all studies had no obvious bias in selective reporting and ensured the basic integrity of the data, though other biases remained unclear. Because it is of little significance to use Begg's funnel plot and the Egger test to detect publication bias when the number of studies is small (< 10), no publication bias analysis was performed in this study. 
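The heterogeneity check described above — Cochran's Q, the I² statistic, and the switch to a random-effect model when I² exceeds 50% — can be sketched as a minimal inverse-variance calculation. This is an illustration with made-up inputs, not RevMan's implementation:

```python
import math

def i_squared(log_ors, ses):
    """I² (%) from per-study log odds ratios and their standard errors."""
    w = [1 / s ** 2 for s in ses]                     # inverse-variance weights
    pooled = sum(wi * t for wi, t in zip(w, log_ors)) / sum(w)
    q = sum(wi * (t - pooled) ** 2 for wi, t in zip(w, log_ors))  # Cochran's Q
    df = len(log_ors) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Illustrative per-study estimates (not data from the included studies):
i2 = i_squared([0.2, 1.1, 0.9], [0.25, 0.3, 0.28])
model = "random-effect" if i2 > 50 else "fixed-effect"
```

With these invented inputs the studies disagree enough that I² lands above the 50% threshold, so the sketch selects the random-effect model, mirroring the decision rule used in this paper.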
Because there was little difference in the quality of the documents included in this meta-analysis, no further sensitivity analysis was made. After subgroup analysis, most I² test results were less than 50%, so there was low heterogeneity among the studies included in this evaluation. Results of meta-analysis. Overall analysis of consistency between WFO and MDT. Of the 9 included studies, a total of 7 studies 15,17-21,23 provided four types of complete data (including WFO's three types of treatment schemes and unavailable cases) on the consistency of treatment schemes determined by WFO and MDT for different cancer types, involving seven cancers: breast cancer, rectal cancer, colon cancer, gastric cancer, lung cancer, ovarian cancer and cervical cancer. Of the 1738 cases included (shown in Supplementary Fig. 3), 959 (55.18%) were WFO 'Recommended' schemes (green schemes) consistent with MDT treatment schemes, 503 cases (28.94%) were 'For consideration' (orange schemes), and the sum of the two was 1462 cases (84.12%). However, 166 cases (9.55%) fell in the 'Not recommended' scheme (pink scheme) and 110 cases (6.33%) were not supported by WFO ('Not available' scheme). Under the condition that the MDT recommendations were consistent with the 'Recommended' or 'For consideration' categories of WFO, we conducted meta-analysis according to the clinical stage of patients (stage I-III vs. stage IV). A total of 8 studies [15][16][17][18][19][20][21]23 were included in the analysis. Of the 1807 cases included, 1473 (81.52%) WFO treatment schemes were consistent with the MDT. The concordance rate in stage I-III was 86.00% (1026/1193), higher than the 80.78% (496/614) in stage IV. However, the meta-analysis showed significant statistical heterogeneity (I² = 83%) across stages, so the meta-analysis was conducted using a random-effect model (shown in Fig. 2A). 
The results showed that the difference was not statistically significant, P = 0.20 [OR 1.68, 95% CI (0.76, 3.74)]. To further analyze the consistency between MDT and WFO, we analyzed the situation in which only WFO 'Recommended' was included and 'For consideration' was excluded. A total of 9 studies 15-23 were included in the analysis. Of the 2463 cases included, 1299 (52.74%) WFO treatment schemes were consistent with MDT. The consistency in stage I-III was 56.46% (962/1704), greater than the 44.40% (337/759) in stage IV. The meta-analysis results showed significant statistical heterogeneity (I² = 90%) across stages (shown in Fig. 3A), so we again conducted the meta-analysis using a random-effect model. The results also showed that the difference was not statistically significant, P = 0.08 [OR 1.77, 95% CI (0.93, 3.40)]. Because the meta-analysis showed significant statistical heterogeneity (I² > 50%), subgroup analysis was further adopted according to tumor classification. Subgroup analysis of consistency between WFO and MDT. Consistency between WFO ('Recommended' or 'For consideration') and MDT. Under the condition that the MDT recommendations were consistent with the 'Recommended' or 'For consideration' categories of WFO, we conducted meta-analysis according to the clinical stage of patients (stage I-III vs. stage IV). The results showed that the consistency in stage I-III was greater than that in stage IV for all cancers except lung cancer (shown in Table 2 and Fig. 4). A total of 3 studies 17,20,21 (n = 890) were included in our meta-analysis of breast cancer; the results showed that the difference was statistically significant. In addition, the same 3 studies 17,20,21 (n = 890) provided data on estrogen and progesterone receptors (ER+/PR+ vs. ER−, PR−) in breast cancer patients, so meta-analysis was further carried out. The results showed (shown in Fig. 
2B) that the difference was not statistically significant, P = 0.47 [OR 0.85, 95% CI (0.54, 1.34)]. A total of 2 studies 17,19 (n = 262) provided data on the pathological types (small cell vs. non-small cell) of lung cancer patients. The results showed that the consistency for small cell lung cancer was higher than that for non-small cell lung cancer (shown in Fig. 2C), and the difference was statistically significant, P = 0.02 [OR 3, 95% CI (1.20, 7.48)]. Consistency between WFO (only 'Recommended') and MDT. Under the condition that the MDT recommendations were consistent only with the 'Recommended' category of WFO, we conducted meta-analysis again according to the clinical stage of patients (stage I-III vs. stage IV). Similarly, the results showed that the consistency in stage I-III was greater than that in stage IV for all cancers except lung cancer (shown in Table 3 and Fig. 5). A total of 3 studies 17,20,21 (n = 890) were included in our meta-analysis of breast cancer; the results showed that the difference was not statistically significant, P = 0.37 (shown in Fig. 3B). A total of 2 studies 16,22 provided data on performance status (ECOG 0 vs. ECOG 1-2) and age (< 70 years old vs. older) of colorectal cancer patients. The results showed that the consistency for ECOG 0 patients was higher than that for ECOG 1-2 patients, and the difference was statistically significant, P = 0.003 [OR 1.59, 95% CI (1.17, 2.17)] (shown in Fig. 3C); the consistency for patients under 70 years old was higher than that for older patients, and the difference was statistically significant, P = 0.03 [OR 4.06, 95% CI (1.18, 13.97)] (shown in Fig. 3D). A total of 2 studies 17,19 (n = 262) provided data on the pathological types (small cell vs. non-small cell) of lung cancer patients. The results again showed that the consistency for small cell lung cancer was higher than that for non-small cell lung cancer, and the difference was statistically significant, P < 0.00001 [OR 11.05, 95% CI (4.93, 24.77)] (shown in Fig. 3E). 
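The odds ratios and 95% confidence intervals quoted in these subgroup comparisons are derived from 2×2 tables of concordant vs. discordant cases. A minimal sketch of that calculation, using illustrative counts rather than the studies' actual data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table.

    a, b: concordant / discordant cases in subgroup 1 (e.g. stage I-III)
    c, d: concordant / discordant cases in subgroup 2 (e.g. stage IV)
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative table: 50/10 concordant/discordant in one subgroup,
# 40/20 in the other.
or_, lo, hi = odds_ratio_ci(50, 10, 40, 20)
```

An OR above 1 with a CI excluding 1 corresponds to the statistically significant subgroup differences reported above; a CI that straddles 1 corresponds to the non-significant ones.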
Consistency analysis between WFO and MDT. On the whole, the consistency in stage I-III was better than that in stage IV for all cancers except lung cancer, and most of the results were statistically significant (P < 0.05), whether concordance with WFO was defined at the 'For consideration' level ('Recommended' or 'For consideration') or at the 'Recommended' level (only 'Recommended'). At the 'For consideration' level, the overall concordance rate of breast cancer was the highest (88.99%), while that of gastric cancer was the lowest (57.94%). Among lung cancer patients, the consistency for small cell lung cancer was higher than that for non-small cell lung cancer, and the difference was statistically significant. At the 'Recommended' level, the overall concordance rate of rectal cancer was the highest (81.76%), while that of gastric cancer was still the lowest (29.90%). The consistency for hormone-receptor-positive breast cancer patients (Luminal A and B) was lower than that for hormone-receptor-negative patients (HER2-positive and triple-negative). In colorectal cancer patients, the consistency for ECOG 0 was higher than for ECOG 1-2, and for those under 70 years old higher than for older patients. In lung cancer patients, the consistency for small cell lung cancer was still higher than that for non-small cell lung cancer, and the difference was statistically significant. Advantages of WFO. Besides showing high consistency with MDT in most cancers, WFO, as an artificial intelligence clinical decision-support system, also has the following advantages: (a) WFO improves doctors' work efficiency and reduces workload. Hu's study 18 showed that using WFO can save an average of 8.2 min per case (the average time for obtaining reports is 7.3 ± 2.2 min, and the average time for MDT consultation is 15.5 ± 6.1 min). 
Eliminating the wait for a joint MDT discussion helps reduce the time required to formulate a chemotherapy scheme 24 , thus shortening patients' hospitalization time. (b) WFO can prevent man-made calculation errors. Chemotherapy schemes and drug selection involve complicated and time-consuming processes, and selection errors may occur 25,26 ; WFO can realize accurate medication through computer programs to prevent such errors 20,27 . (c) WFO can improve the quality of doctor-patient communication and prevent doctor-patient disputes. Nowadays, for a variety of reasons, patients' distrust of doctors is increasing in China 28,29 . The more patients participate in decision-making about their own therapeutic regimen and understand information such as the incidence of adverse events, the more confidence they have in the regimen and the more actively they cooperate with doctors 30 . (d) WFO can reduce the burden on patients. It can eliminate the time patients waste consulting at various large hospitals, help them obtain more accurate treatment as soon as possible, and reduce travel and accommodation costs while avoiding the fatigue caused by travel. (e) WFO can improve the professional level of young doctors. It can significantly shorten the time that junior doctors must spend consulting relevant documents.

Table 2. Meta-analysis results of consistency between WFO ('Recommended' or 'For consideration') and MDT for patients with various cancers in stages I-III and IV. C concordant cases, NC nonconcordant cases. a The numbers for rectal cancer and colon cancer, which overlap with colorectal cancer, have been excluded from the total. In addition, the total includes the numbers for ovarian cancer and cervical cancer. 
At the same time, WFO gives the reasons for selection, evidence documents and drug use instructions for each scheme, and the system is updated every 1-2 months, thus improving junior doctors' ability to make accurate diagnosis and treatment recommendations in a short time and improving their self-confidence. Disadvantages of WFO. Recent studies showed that WFO and MDT are not completely consistent for cancer patients; in patients with advanced cancer in particular, there is a significant decrease in consistency. This confirms that WFO still has certain limitations, which lead to differences in the concordance rate when the system is applied in other countries. The limitations are as follows: (a) Different treatment schemes: Asian and Caucasian people have significant differences in sensitivity and tolerance to certain specific chemotherapeutic drugs owing to their different constitutions and key drug-metabolizing enzyme groups, so clinical guidelines in different countries and regions necessarily differ. For example, the mutation rate of EGFR in lung cancer in European and American countries is about 15%, while that in China is more than 50% 31,32 . In China, the domestically developed drugs Icotinib and Endostar [33][34][35] are used instead of other first-generation epidermal growth factor receptor-tyrosine kinase inhibitors (EGFR-TKIs) and bevacizumab, because studies have shown that they are as effective as EGFR-TKIs and bevacizumab in Chinese lung cancer patients 36,37 . Liu et al. 19 and others have proposed that if the WFO system could provide these two alternative therapeutic regimens under 'Recommended' or 'For consideration', the overall consistency for lung cancer in China could be increased from 65.8 to 93.2%. Xu et al. 
21 also believe that the difference in first-line treatment of advanced breast cancer can be attributed to the fact that CDK4/6 inhibitors cannot be used because they are not marketed in China. Similarly, WFO recommended panitumumab targeted therapy for colon cancer patients, but it is not marketed in China and patients cannot choose it 38 . (b) Different drug choices: the chemotherapy regimens recommended by WFO comply with NCCN guidelines, but they also reflect thousands of clinical practice cases from MSK 16 . For example, owing to the large differences between the surgical methods and guidelines for adjuvant treatment of gastric cancer in China and the United States 39,40 , the applied research on WFO in gastric cancer shows a poor concordance rate. In contrast, adjuvant therapy and drug selection for colon cancer in eastern and western countries are more consistent, so the concordance rate between WFO and MDT is markedly higher. Liu et al. 19 also noted that WFO recommended concurrent chemoradiation for lung cancer, whereas in China sequential chemoradiation is performed (up to 67%). Chinese patients often cannot tolerate concurrent radiotherapy and chemotherapy because their physique is usually weaker than that of western patients, which leads to a decrease in the coincidence rate between WFO and MDT. (c) Complications: comprehensive treatment of cancer patients is continuous, and patients may suffer reversible, transient organ function damage. WFO may sometimes exclude available schemes when selecting candidate schemes based only on a patient's transient abnormal biochemical results 41 . In Hu's study 18 , a biochemical blood test of a colon cancer patient showed a creatinine clearance rate < 30. 
WFO therefore did not recommend the CapeOX (oxaliplatin + capecitabine) scheme for the patient, but the MDT considered this only the result of a transient biochemical abnormality; the creatinine clearance rate was rechecked one week later, the result was > 30, and CapeOX treatment was still carried out. In Liu's study 19 , a patient with active pulmonary tuberculosis was also diagnosed with stage III squamous cell lung cancer. If the standard chemoradiotherapy recommended by WFO had been accepted, tuberculosis might have spread rapidly, resulting in rapid death. Therefore, Liu et al. modified the treatment strategy to oral anti-tuberculosis drugs before radiotherapy and chemotherapy. It is therefore believed that if such individualized information could be incorporated into WFO, the coincidence rate between WFO and MDT would be greatly improved. (d) Economic factors: for example, in the treatment of breast cancer, WFO recommends the use of trastuzumab for HER2-positive patients, but patients in China are often forced to choose chemotherapy first because of the high price of this drug 38 . In the Republic of Korea, both WFO and MDT recommend regorafenib for patients with stage IV rectal cancer 42 , but some patients still received 5-fluorouracil (5-Fu)-based chemotherapy, because regorafenib is not only expensive but also not covered by the national health insurance system 16 . Similarly, China also needs to consider the issue of medical insurance reimbursement, which likewise affects the consistency between WFO and MDT. If WFO can make targeted improvements to the treatment recommendations for patients with advanced cancer, non-small cell lung cancer, hormone-receptor-positive breast cancer, and colorectal cancer with ECOG 1-2 or older age (> 70), it will be more suitable for clinical use in other countries. Characteristics and limitations of this meta-analysis. 
Although WFO has gradually been deployed in many countries and regions and the number of supported cancer types is gradually increasing, evidence-based medicine research on this system is still lacking. To understand the consistency between WFO and MDT and WFO's advantages and disadvantages in clinical use, and to address the practical problems encountered in using the system, we carried out a targeted meta-analysis. Unlike most of the original studies, which examined consistency only at the 'For consideration' level ('Recommended' or 'For consideration') or only at the 'Recommended' level (only 'Recommended'), this research conducted meta-analyses of both, which further supports some statistical results obtained from the original studies and provides new statistical evidence. It not only reminds clinicians to pay sufficient attention, when using WFO, to patients with advanced cancer, non-small cell lung cancer, Luminal A and B breast cancer, and colorectal cancer with ECOG 1-2 or older age (> 70), but also provides clinical evidence for the improvement of WFO. Of course, this meta-analysis still has certain limitations, mainly the following: (a) the possibility of selection bias in a few included studies; (b) the relatively small sample sizes of some studies and the incomplete reporting of some results, which lack complete four-category data; (c) most studies did not report data on WFO's advantages such as shortened consultation time or the concordance between junior or senior doctors and WFO, preventing us from further analyzing some of WFO's advantages; and (d) all data come from published research or conference abstracts, grey literature is lacking, and literature selection bias is possible. In addition, 182 cases were initially included in Liu's study on lung cancer 19 . 
In the further study, a total of 33 cases without WFO support were excluded, and the remaining 149 patients were included. However, the clinical stages of these 33 cases are not listed in detail and could not be included in further meta-analysis. Moreover, the distribution of patients in that study is unbalanced: there are fewer early-stage patients, which differs notably from the other cancers, where early-stage patients outnumber late-stage patients. All of this may explain why the conclusions about lung cancer differ from those for other cancers. Of course, the sample size included in our systematic evaluation is small, so larger, multi-center, high-quality randomized controlled trials are still needed for further verification in order to reach more reliable conclusions. To sum up, we should regard WFO as "a tool, not a crutch" 43 . If properly used, WFO is a valuable tool. Proper use requires WFO to remain a complement to the doctor's work rather than something to rely on completely. Oncologists can integrate it with traditional resources such as colleagues' experience and scientific journals to choose the most effective method of providing chemotherapy schemes for patients, helping patients obtain more accurate and effective treatment and speeding up and improving their treatment outcomes. Of course, WFO should also be continuously improved according to its clinical use in other countries. People often say that AI will change medicine. Indeed, through examples like WFO, we can look forward to how AI can enable people all over the world to obtain the best-quality medical services fairly, no matter where or who the patients are 44 .
v3-fos-license
2020-09-17T13:06:16.488Z
2020-09-01T00:00:00.000
221747722
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1660-4601/17/18/6646/pdf", "pdf_hash": "e4695c1ae9d9c1cf26c3781f11a4c5a360613c79", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1958", "s2fieldsofstudy": [ "Medicine" ], "sha1": "8ce8fc3fd35a6dd5cfc9cc03082b45f0e594c963", "year": 2020 }
pes2o/s2orc
Education, Smoking and CRP Genetics in Relation to C-Reactive Protein Concentrations in Black South Africans Because elevated circulating C-reactive protein (CRP) and low socio-economic status (SES) have both been implicated in cardiovascular disease development, we investigated whether SES factors associate with and interact with CRP polymorphisms in relation to the phenotype. Included in the study were 1569 black South Africans for whom CRP concentrations, 12 CRP single nucleotide polymorphisms (SNPs), cardiovascular health markers, and SES factors were known. None of the investigated SES aspects was found to associate with CRP concentrations when measured individually; however, in adjusted analyses, attaining twelve or more years of formal education resulted in a hypothetically predicted 18.9% lower CRP concentration. We also present the first evidence that active smokers with a C-allele at rs3093068 are at an increased risk of presenting with elevated CRP concentrations. Apart from education level, most SES factors on their own are not associated with the elevated CRP phenotype observed in black South Africans. However, these factors may, collectively with other environmental, genetic, and behavioral aspects such as smoking, contribute to the elevated inflammation levels observed in this population. The gene-smoking-status interaction in relation to inflammation observed here is of interest and, if replicated, could be used in at-risk individuals as an additional motivation to quit. Introduction Non-communicable diseases (NCDs) accounted for 71.3% of global deaths between 2005 and 2015 [1]. Of these NCDs, cardiovascular disease (CVD) took the highest toll in developing nations such as South Africa [2]. Several CVDs share an inflammatory origin, which is influenced by numerous factors, including anthropometry, level of physical activity, and the genetic background of an individual [3]. 
One such marker of inflammation, which has been determined to predict future CVD risk, is the cytokine C-reactive protein (CRP). Elevated levels of this protein, i.e., >3 mg/L, are predictive of future CVD [4,5]. CRP is generally elevated in black individuals, coinciding with notably stronger inflammatory responses as well as higher CVD risk than in other ethnicities [6][7][8][9]. Other factors besides Materials and Methods This cross-sectional, observational study was nested within the South African arm of the Prospective Urban and Rural Epidemiology (PURE) study, with details of the sampling strategy described by Pisa et al. [16]. In total, 2010 apparently healthy adults (>30 years), from both rural and urban communities, were included at baseline in 2005. Individuals with a measured fever (tympanic temperature > 38.0 °C) were excluded. Further exclusion criteria were known acute overt pre-existing disease and being pregnant or lactating at the time of sampling. Biochemical Measurements Fasting blood samples were collected by registered nurses. High-sensitivity CRP concentrations were measured on a Sequential Multiple Analyzer Computer (SMAC), using a particle-enhanced immunoturbidometric assay (Konelab™ autoanalyzer, Thermo Fisher Scientific Oy, Vantaa, Finland). Quantitative determination of high-density lipoprotein cholesterol (HDL-c), triglycerides and total cholesterol in the sera of participants was done on a Konelab™ 20i autoanalyzer (Thermo Fisher Scientific). Low-density lipoprotein cholesterol concentrations (LDL-c) were calculated using the Friedewald equation for those with triglycerides below 400 mg/dL. Nurses trained in voluntary counseling and human immunodeficiency virus (HIV) testing performed HIV tests in accordance with prevailing governmental and WHO guidelines. Pre-test counseling was provided in group format, after which signed informed consent was obtained individually. 
Those testing positive for HIV on a rapid First Response HIV1-2.O card test (Transnational Technologies Inc. PMC Medical, Nani Daman, India) were retested using a card test developed by Pareeshak (BHAT Bio-tech, Bangalore, India) to ensure diagnostic accuracy. All participants, irrespective of HIV status, received individual post-test counseling. Whole EDTA blood from fasting participants was used for measuring glycated hemoglobin (HbA1c) with a D-10 Hemoglobin testing system (Bio-Rad Laboratories, Hercules, CA, USA).

Anthropometric and Physiological Measurements and Lifestyle Questionnaires

Body weight was measured in minimal clothing with the arms hanging freely at the sides; it was measured in duplicate, with the mean recorded. Height was measured in duplicate with a stadiometer, with the head in the Frankfort plane and the participant fully erect while inhaling; the mean was recorded in meters. Body mass index (BMI) was calculated using the standard formula and reported as kg/m². Waist circumference (WC) and hip circumference were measured using unstretchable metal tape in accordance with the recommendations of the International Society for the Advancement of Kinanthropometry. An Omron automatic digital blood pressure monitor (Omron HEM-757, Kyoto, Japan) was used to measure the right brachial artery blood pressure in the sitting position. Participants did not smoke, exercise, or eat for 30 min beforehand, and had to be rested and calm for five minutes before measurement. Volunteers responded to an interviewer-administered questionnaire in their language of choice, through which various socio-demographic variables (age, gender, medical history (stroke and diabetes incidence), tobacco use, alcohol use, and SES factors (i.e., roof type, access to electricity, primary cooking fuel, primary heat source, water source, and education)) were collected.
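The two derived measures described above are straightforward to compute. The sketch below (in Python rather than the R used for the study's statistics) applies the standard BMI formula and the Friedewald estimate; working in mmol/L, the triglyceride term becomes TG/2.2 (equivalent to TG/5 in mg/dL), and the 400 mg/dL validity limit corresponds to roughly 4.5 mmol/L. The participant values are illustrative only.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2


def friedewald_ldl(total_chol: float, hdl_c: float, triglycerides: float) -> float:
    """Friedewald estimate of LDL-c, all values in mmol/L.

    Valid only when triglycerides are below ~4.5 mmol/L (400 mg/dL),
    mirroring the restriction applied in the study.
    """
    if triglycerides >= 4.5:
        raise ValueError("Friedewald equation invalid for TG >= 4.5 mmol/L")
    return total_chol - hdl_c - triglycerides / 2.2


# Illustrative participant: 70 kg, 1.70 m, TC 5.0, HDL-c 1.2, TG 1.1 (mmol/L)
example_bmi = bmi(70, 1.70)                   # ~24.2 kg/m^2
example_ldl = friedewald_ldl(5.0, 1.2, 1.1)   # ~3.3 mmol/L
```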
Water sources were grouped into sourced water (i.e., from wells, rivers, or boreholes) or municipal water sources. Although we did not have access to a standardized SES index, we compensated by focusing on the individual factors that constitute a person's living environment, so as to identify factors for which mitigation efforts could be instituted in an attempt to lower CRP concentrations. Food portion books were specifically designed and standardized for the South African PURE-North West population. Validated, interviewer-based quantitative food frequency questionnaires (qFFQs) [20] were completed to determine dietary intakes. The data obtained from the qFFQs were entered into the Foodfinder3 program (Medical Research Council, Tygerberg, South Africa, 2007) and sent to the Medical Research Council of South Africa for nutrient analyses.

Genetic Analyses

Polymorphic sites and novel SNPs within the CRP gene were identified by sequencing 30 randomly selected DNA samples and by an in silico search. These variants were scored (from 0 to 1) by the Assay Design Tool to determine a viable customized genotyping array, which was analyzed using Illumina® VeraCode GoldenGate assay technology on a BeadXpress® platform (Illumina® Inc., San Diego, CA, USA) for genotyping the selected SNPs. Ultimately, only 12 CRP SNP clusters passed the quality control (QC) measures for making genotype calls, by having a GenCall score >0.5 and a call rate ≥0.9, and are reported on here (see Table S1 for SNP details). The BeadXpress® analysis was performed by the National Health Laboratory Service (NHLS) at the University of the Witwatersrand, Johannesburg.

Statistical Analyses

A total of 1569 individuals, for whom we had both CRP concentrations and complete genotype information for the investigated CRP SNPs, were included in our analyses. Statistical analyses were conducted using R [21].
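The genotype-call QC rule above (GenCall score > 0.5 and call rate ≥ 0.9) amounts to a simple filter over cluster metrics, sketched below in Python. The SNP names and scores other than rs3093068 are hypothetical placeholders, not values from the study.

```python
def passes_qc(gencall_score: float, call_rate: float,
              min_gencall: float = 0.5, min_call_rate: float = 0.9) -> bool:
    """Apply the two genotype-call QC thresholds described in the text."""
    return gencall_score > min_gencall and call_rate >= min_call_rate


# Hypothetical cluster metrics for three assayed variants
clusters = [
    ("rs3093068", 0.72, 0.97),      # passes both thresholds
    ("snp_low_score", 0.41, 0.99),  # fails the GenCall threshold
    ("snp_low_calls", 0.80, 0.85),  # fails the call-rate threshold
]
kept = [name for name, score, rate in clusters if passes_qc(score, rate)]
```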
Continuous variables were inspected for normality using histograms and measures of skewness. Variables with a skewed distribution were natural log-transformed and reported as medians and interquartile ranges. Based on global recommendations for CRP cut-off values, data subsets were created, i.e., ≤3 mg/L and >3 mg/L [5]. The compareGroups library was used to construct bivariate tables comparing the resulting groups, using non-parametric methods for both continuous and categorical data. Spearman correlations were computed to test for monotonic associations between CRP and continuous variables, while medians and interquartile ranges were reported for each categorical variable. Significance testing was conducted using the independent two-group Mann-Whitney U test or the Kruskal-Wallis one-way ANOVA by ranks test. A backward stepwise linear regression was conducted using the stepAIC function within the MASS library. Models were evaluated based on the Akaike information criterion (AIC) obtained, and the final variables were evaluated for co-linearity. Association analyses for SNP × environment interactions were then performed using the SNPassoc library, including the covariates from the linear regression model with the lowest AIC value. This was done for each SNP in combination with each demographic and SES factor. Where applicable, p-values were adjusted using the Bonferroni method.

Ethics Statement

The authors and study coordinators complied with all ethical standards. The PURE-SA (North West province) study was approved by the Health Research Ethics Committee of the Faculty of Health Sciences, North-West University (NWU), in accordance with the ethical principles outlined in the Declaration of Helsinki, with approval numbers 04M10 for the larger study and NWU-00004-17-S1 for our affiliated study.
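Parts of the pipeline just described, dichotomizing CRP at 3 mg/L, log-transforming the skewed distribution, and applying a Bonferroni correction, can be sketched as below. This is a Python illustration of the logic, not the study's actual R code, and the CRP and p-values are invented examples.

```python
import math

CRP_CUTOFF = 3.0  # mg/L, the at-risk threshold used throughout the study


def stratify_crp(crp_values):
    """Split CRP measurements into normal (<=3 mg/L) and elevated (>3 mg/L)."""
    normal = [c for c in crp_values if c <= CRP_CUTOFF]
    elevated = [c for c in crp_values if c > CRP_CUTOFF]
    return normal, elevated


def ln_transform(values):
    """Natural-log transform applied to right-skewed variables such as CRP."""
    return [math.log(v) for v in values]


def bonferroni(p_values):
    """Bonferroni adjustment: multiply each p by the number of tests, cap at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]


normal, elevated = stratify_crp([0.8, 3.0, 3.6, 11.2])
adjusted = bonferroni([0.125, 0.25, 0.5])
```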
Goodwill permission was granted by household heads and community leaders (mayors and traditional leaders), as well as the Department of Health of South Africa. Signed informed consent was given by each participant after being apprised of the aims of the study. Sufficient time for reflection was given, and subjects could withdraw at any time, or withhold whatever information they were not willing to share, without reprisal.

Demographics and Anthropometrics of the Study Population and Their CVD-Risk Factors Stratified by At-Risk CRP Phenotypes

Women were more likely to present with elevated CRP concentrations (median unadjusted value of 3.58 mg/L). Post-menopausal women (self-reported amenorrhea) had higher median CRP concentrations (4.31 [1.72; 11.9] mg/L) than men (2.42 [0.72; 7.87] mg/L) and pre-menopausal women (3.05 [0.82; 9.00] mg/L; p < 0.0001). Individuals with elevated CRP concentrations were physically larger than those with normal CRP, as indicated by higher BMI and other anthropometric markers, even though similar daily dietary intakes were noted. Post-menopausal women also had significantly larger WC (median: 82.4 cm) than pre-menopausal women (median: 79.0 cm) and men (median: 74.2 cm). After adjusting for WC, which differed between the genders (p < 0.0001), the differences in CRP concentrations observed between men and women, as well as between pre- and post-menopausal women, disappeared. Those with elevated CRP were also significantly older, although age was only weakly, albeit significantly, associated with CRP (ρ = 0.12). Median CRP concentrations were similar irrespective of HIV status, tobacco use and alcohol use. Smokers had a lower median WC (74 cm) than individuals who had never smoked or were former smokers, grouped together (81.4 cm, p < 10⁻¹²).
Median CRP concentrations were similar (p > 0.05) in rural and urban participants (Table 1), with similar proportions of individuals classified as having normal or elevated CRP concentrations in the two areas. Factors pertaining to SES differed between the two localities (data not shown). Rural participants were more likely to be married and to have lower education levels than urbanites, pointing toward a lower SES level for rural dwellers. Rural participants were also more likely to access public water systems such as communal wells, to use wood as a primary heating and cooking fuel, and to have roofs constructed of corrugated iron sheeting with no insulation. Next, we stratified factors pertaining to SES according to CRP risk values (Table 2). Except for marital status, similar distributions and median CRP concentrations were observed for all investigated SES factors. Individuals presenting with normal CRP concentrations were more likely to identify as never having been married; however, when adjusting for age and WC, similar CRP concentrations were observed across all marital status categories. Smokers had significantly lower formal educational attainment than non-smokers (data not shown). Individuals with elevated CRP concentrations presented with significantly poorer markers of CVD risk than those with normal CRP concentrations (Table 3). Elevated CRP tended to co-present with increased blood pressure, increased heart rate and a poorer lipid profile. Median glycated hemoglobin concentrations were also increased in individuals with elevated CRP concentrations. To describe CRP concentrations and the interactions of their modulators on a physiological scale, natural log-transformed CRP (lnCRP) concentrations were modeled using a stepwise, backward linear regression approach. Eight statistically significant predictors were identified from the measured variables, including clinical, demographic and socio-economic factors (Tables 3 and 4).
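Because CRP was modeled on the natural-log scale, each fitted coefficient translates into a percentage effect on CRP itself via 100·(exp(β) − 1). The sketch below shows the conversion; the HDL-c coefficient is back-calculated from the reported 22% predicted reduction per 1 mmol/L and is therefore approximate, not taken from the fitted model.

```python
import math


def percent_change_in_crp(beta: float) -> float:
    """Percentage change in CRP implied by a coefficient on the ln(CRP) scale."""
    return (math.exp(beta) - 1.0) * 100.0


# Back-calculated: a 22% predicted reduction per 1 mmol/L HDL-c implies
# a log-scale coefficient of ln(0.78), roughly -0.248.
beta_hdl = math.log(1 - 0.22)
effect_hdl = percent_change_in_crp(beta_hdl)  # ~ -22.0
```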
The model presented accounted for 14.3% of the variation observed in the CRP concentrations of our black population. A 22.0% predicted reduction in CRP concentration was observed in response to an increase of 1 mmol/L in HDL-c. All SES elements investigated in this study failed to predict CRP concentrations, except for whether an individual had attained 12 or more years of formal education, which corresponded to a predicted reduction of 18.9% in CRP concentrations.

Effects of SES Factors on the Association between Different CRP Genotypes and CRP Concentrations

The odds of presenting with elevated CRP concentrations were independently investigated for each demographic or SES component included in this study in combination with each of the twelve CRP genotypes. The only significant interaction observed in our population was that of smoking status in individuals of differing rs3093068 genotypes. Individuals indicating that they were former smokers were included in our association analysis as abstainers to ensure sufficient statistical power. Smokers had a lower median WC (74 cm) than individuals who had never smoked or were former smokers (81.4 cm, p < 10⁻¹²). In contrast, current smokers presented with a higher median daily dietary intake (7306 kJ) than current non-smokers (7037 kJ). The odds of presenting with elevated CRP concentrations were 71% higher for smokers homozygous for the minor allele (C/C) than for non-smokers of that genotype (Figure 1). Individuals with the wild-type had similar odds of presenting with elevated CRP concentrations, irrespective of their smoking status.

Figure 1. Interaction between tobacco smoke and rs3093068 in the Prospective Urban and Rural Epidemiology study-North West arm. Smoking modified the association between genotype and CRP concentrations. Men were more likely to be current smokers (59.5% vs. 47.6%; p < 0.0001). Smokers homozygous for the minor allele had a 71% increased risk of presenting with elevated CRP concentrations. Abbreviations: CRP, C-reactive protein; C, cytosine; CI, 95% confidence interval; G, guanine.

Discussion

Little evidence exists on whether individual SES factors that constitute a person's immediate living environment affect their inflammatory status.
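The 71% figure is an odds ratio comparing smokers with non-smokers within one genotype stratum. The helper below shows how such an odds ratio is computed from a 2×2 table; the cell counts are hypothetical, chosen only so the ratio lands near 1.71, and are not the study's data.

```python
def odds_ratio(exposed_cases: int, exposed_noncases: int,
               unexposed_cases: int, unexposed_noncases: int) -> float:
    """Odds ratio for a binary outcome comparing exposed vs. unexposed groups."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)


# Hypothetical C/C stratum: smokers with 24 elevated / 14 normal CRP,
# non-smokers with 20 elevated / 20 normal CRP.
or_cc_smokers = odds_ratio(24, 14, 20, 20)  # ~1.71, i.e. ~71% higher odds
```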
In this study, we failed to find sufficient evidence that the investigated SES elements acted individually as drivers of elevated CRP concentrations, the exception being that lower CRP concentrations were predicted, in adjusted analyses, for individuals completing at least 12 years of formal education. Our evidence, however, highlights that the inflammatory phenotype observed in black populations results from a combination of factors, including, but not limited to, the combined effects of genetics and individual lifestyle choices such as smoking. Moreover, our results indicated that black participants with CRP concentrations above 3 mg/L have a higher prevalence of CVD risk factors. Several epidemiological studies exclude individuals with CRP concentrations above 10 mg/L, which is seen as the clinical cut-off point for acute infection. However, one study [22] reported that certain individuals, especially obese women, repeatedly presented with CRP concentrations above 10 mg/L without any indication of acute infection. In our study, all individuals examined had normal body temperatures, reducing the likelihood of acute infection as a cause of the markedly elevated CRP concentrations in the 363 (23.1%) individuals presenting with CRP above 10 mg/L. Nienaber-Rousseau et al. (unpublished) showed statistically that excluding participants within our population with CRP concentrations higher than 10 mg/L leads to the exclusion of certain CRP genotypes, resulting in a biased representation of the actual drivers of the increased CRP concentrations observed in black African populations. Furthermore, excluding these individuals would have decreased the statistical power when stratifying within the different SES components and genotypes. We also included individuals who were seropositive for HIV, as median CRP values were similar regardless of HIV status.
Infection rates are also higher among individuals with low SES, which could have introduced bias had HIV-positive individuals been excluded [23]. Elevated CRP concentrations were regularly observed in the women included in our study. One study [24] reported that black women were more likely to have CRP concentrations above 3 mg/L and that elevated CRP was more frequently observed in post-menopausal women, although it was strongly correlated with abdominal obesity. Likewise, gender as well as pre-menopausal and post-menopausal differences dissipated when we corrected for WC in our study, implicating WC as a major contributing factor to the development of an elevated CRP phenotype. Anthropometric markers such as waist and hip circumferences, weight and BMI had significant positive correlations with CRP concentration (Table 2; ρ = 0.27, 0.21, 0.22 and 0.24, respectively). Various other reports record the influence of adiposity on the inflammatory state of the individual [25,26]. The association between BMI and CRP, irrespective of ethnicity, was reported in another study [27], and elevated CRP concentrations, as well as increased CVD risk, are often the result of increased adiposity [26]. The use of CRP as a prognostic marker for future CVD risk appears to be independent of ethnic or geographical factors [6]. Markers of CVD risk were elevated in individuals harboring elevated CRP concentrations in our sample. Similar to our findings, a multi-ethnic study reported increased resting heart rate to be associated with increased concentrations of inflammatory markers, including CRP [28]. Inflammation markers, and especially CRP, are also linked to vascular stiffness, atherosclerosis and the development of end-organ damage, characteristics of a long-term hypertensive state combined with hyperlipidemia [29].
African Americans are also reported to be more likely to exhibit elevated HbA1c concentrations, with CRP highly correlated with HbA1c levels [30]. Excess weight, hyperlipoproteinemia, and decreased insulin sensitivity are traits associated with the metabolic syndrome (MetS) [31]. Combined with the elevated inflammation levels, MetS was, therefore, prominent in the group of volunteers studied, and even more so in post-menopausal women, regardless of their SES. SES factors differed between urban and rural participants; however, CRP concentrations were similar regardless of where individuals resided. The lack of any impact of SES elements on CRP concentrations (Table 2) further strengthens our observation that individual SES components are not the main drivers of elevated CRP concentrations in this population. The similarity in CRP concentrations across levels of urbanization with varying markers of SES contrasts with observations made in an Asian population, where city dwellers had higher CRP concentrations [32]. The years following the fall of apartheid in South Africa were marked by unprecedented rates of urbanization, which improved economic activity and increased rural-to-urban migration [16]. Furthermore, governmental efforts improved access to basic utilities, even in the rural areas included in this study [33]. It may, therefore, be argued that the definition of what constitutes a rural area differed between the two studies, which may have resulted in this discrepancy. Of all the included SES factors, only education was determined to be a predictor of CRP concentration, and only when controlling for other confounding variables. Although some of the values in Table 4 suggest substantial changes in CRP for a single unit change in a specific variable, the interpretation should consider the physiological plausibility of such alterations.
Age-dependent increases in CRP have been associated with elevated adiposity due to changes in hormonal balance, as reported in previous studies similar to ours [34]. Substantial reductions in CRP were predicted for a 1 mmol/L change in HDL-c; however, eliciting this response may prove difficult in a resource-poor environment. These covariates do, however, suggest possible routes of intervention, whereby proper nutrition (focusing on weight management, treatment of hyperlipidemia, and glycemic control), increased physical activity (to improve resting heart rate) and increased education levels can reduce inflammation in populations [35][36][37]. Completing 12 or more years of formal education was associated with reduced CRP concentrations (Table 1, unadjusted), although this reduction was non-significant. In our multivariate model, completing secondary school or tertiary education corresponded to a significant 18.9% reduction in predicted CRP concentration. The authors of [13] estimated that 87.9% of the CRP variation attributed to education level could be explained primarily by the higher number of smokers, lower dietary quality and reduced levels of exercise among less educated individuals. Similarly, it was reported for our cohort that higher education levels were associated with lower BMIs in both men and women [16]. Various other studies have also failed to find differences in the CRP concentrations of smokers versus non-smokers, although smoking is known to affect CVD risk [38,39]. Smokers in our study had lower WC, with higher daily dietary intakes, than non-smokers. Previously, African American smokers were reported to have lower levels of weight gain than white Americans [40]. However, nicotine does increase energy expenditure [40], which may have resulted in the smaller WC observed in active tobacco users in our study.
To our knowledge, we present the first indication that smoking increases CRP concentrations in individuals harboring the minor allele of rs3093068, a variant for which the major allele is associated with increased CRP concentrations [19]. Smokers with the minor allele had odds of presenting with elevated CRP concentrations statistically similar to those with the wild-type, negating the CRP-lowering effects of the minor allele.

Conclusions

Our main findings suggest that CRP concentrations in black South Africans are not associated with individual SES factors. Even though the SES factors included are not primarily responsible for the elevated CRP concentrations observed, improving the general SES of individuals commonly results in better health outcomes. Therefore, there should be collective efforts to improve the general socio-economic standing of the people of the Republic of South Africa. Health promotion efforts should focus on reducing the individual components that constitute MetS, with public health promotion especially focused on individuals with lower education levels. Here we also presented the first evidence that smoking increases CRP concentrations in individuals who are homozygous for the minor allele of rs3093068, although more evidence is needed from other ethnicities. Our data were also cross-sectional and therefore could not capture how long participants had lived under their current SES conditions or whether improvements in SES would moderate CRP concentrations over time. Future studies measuring SES factors should, consequently, also include questions regarding the period for which the individual has had access to improved standards of living.

Supplementary Materials: The following are available online at http://www.mdpi.com/1660-4601/17/18/6646/s1, Table S1: CRP SNPs and their minor allele frequencies.
Funding: Data used in the presented work were collected as part of the North West Province, South African arm of the Prospective Urban and Rural Epidemiological (PURE) study. No external funding sources were utilized for this retrospective analysis.
Teaching children with food allergy to recognize anaphylaxis: The caregivers' perspectives

Anaphylaxis is rising in prevalence among children. The current recommendations on the effective transition of anaphylaxis management to adolescents and young adults suggest preparation for the transition may be considered at 11-13 years of age in accordance with the patient's developmental stage. However, there has been limited research conducted on the perspectives of caregivers regarding the transition of anaphylaxis management to their children. This study aims to determine the age at which caregivers feel it is appropriate to begin to teach their child to recognize anaphylaxis and use their adrenaline auto-injectors (AAI).

| INTRODUCTION

Anaphylaxis is a life-threatening, systemic allergic reaction associated with different mechanisms, triggers, clinical presentations, and severities. 1 Studies suggest increasing prevalence among children, with a higher likelihood of hospitalization among this cohort. 2 Optimal management of anaphylaxis involves avoiding known triggers, use of adrenaline auto-injectors (AAI) when necessary, and devising a personalized anaphylaxis management plan. 3 In the case of pediatric patients, caregivers typically bear the responsibility of recognizing anaphylaxis symptoms and carrying and administering AAI to the child when required. 4 There are no published data on the optimal age to transfer responsibility for the recognition and treatment of anaphylaxis from caregiver to child. The European Academy of Allergy and Clinical Immunology (EAACI) recommends preparing children for the transition of anaphylaxis responsibility from early adolescence, that is, 11-13 years. 5 General guidance in the education sector regarding students self-carrying and self-administering prescribed medication does not provide precise age-based advice. Children and adolescents are rarely included in studies that investigate the management of anaphylaxis.
6 There are concerns about the capability of children to use the AAI correctly and safely, such as applying the correct pressure when injecting, or accidental self-harm due to unintentional injections from AAI devices. 7,8 As adolescents and young adults gain autonomy and their environment shifts from family to peer-based interactions, they may be at higher risk of anaphylaxis if they fail to take responsibility for self-management of their condition. Fatal anaphylaxis is disproportionately more common among adolescents, possibly reflecting a failure to recognize symptoms and delayed use of AAI. 6,9,10 Two studies investigating caregiver and pediatric allergists' perspectives on this transition reported different findings. Caregivers of pediatric allergy patients expected their children to self-inject adrenaline by the age of 9-11, while pediatric allergists did not expect this autonomy until the age of 12-14. 11,12 This highlights the lack of consensus in this area by revealing that the expectations of clinicians do not correlate with the reality experienced by caregivers. The limited published data on a standard age for transitioning these responsibilities onto the patient underpin the need for clear guidelines to address this critical knowledge gap. 13 The aim of this study was to determine the age at which caregivers begin to teach their children to recognize the symptoms of anaphylaxis and use an AAI. This study also explores readiness factors that influence the caregivers' approach to transferring the responsibility of anaphylaxis management to their child, their confidence in training their children, and their views on who should support this transition.
| Study design and population

This was a quantitative descriptive cross-sectional study conducted between October 2020 and April 2022 as part of a tele-

| Eligibility criteria

Caregivers were eligible to participate if they cared for patients who had been diagnosed with an IgE-mediated food allergy and were prescribed an AAI. A caregiver was excluded from participation if there were significant language barriers. Caregivers attending the pediatric allergy clinic between October 2020 and April 2022 (n = 369) were contacted by phone, where their eligibility was assessed and, if the inclusion criteria were met, they were invited to participate in the interventional study. If permission was obtained, an invitation to complete an online questionnaire was sent by email.

Key Message

Caregivers in this sample believe it is appropriate to begin to transfer the responsibility of anaphylaxis recognition and AAI use to their children younger than the European Academy of Allergy and Clinical Immunology suggested age of 11-13 years. Most caregivers in this sample feel it is appropriate to begin to teach children to recognize anaphylaxis symptoms under 6 years, and to use an AAI at 9-11 years, based on the child's readiness. Although most caregivers had received AAI training, only half felt confident in teaching their child its administration. Further evaluation is necessary to improve guidelines, enabling clinicians to train and support caregivers during this transition.

The questionnaire was adapted from one used in Canada. 11 Questions related to demographics, readiness factors, parental AAI training, and caregiver confidence in training their children (Appendix S1). The questionnaire was piloted among a pediatric allergist, an allergy nurse, general practitioners with a special interest in allergy, and a group of medical students from University College Cork (UCC).
| Data analyses

The survey data were extracted from Google Forms, downloaded into Microsoft Excel and imported into Stata 17 (StataCorp™, TX, USA) on an encrypted UCC computer. Descriptive analysis was performed for each variable of interest. Normally distributed data were expressed as mean and standard deviation (SD); non-normally distributed data were expressed as median and interquartile range (IQR); and categorical variables were reported as percentages. Normality was evaluated by the Shapiro-Wilk test. For non-normally distributed data, comparisons were performed employing the Mann-Whitney U test; comparisons of normally distributed data were performed using the independent-sample t-test. For categorical data, the chi-squared test was used. The association between each independent variable and an ordinal variable was analyzed using logistic regression with a cumulative odds model. Confidence intervals for proportions were built by Wilson's method. Parameters displaying p < .05 were considered statistically significant.

| Characteristics of sample

Of the 123 caregivers who completed the survey, 90.2% were mothers of the child and 8.94% were fathers. The majority of caregivers surveyed had children prescribed an AAI in the <6 (25.2%) or 12-14 (27.64%) age groups. Most participants were in the 40-49 (51.22%) and 30-39 (32.52%) age groups and had completed post-secondary education, with postgraduate college degrees the most common level of schooling achieved (31.71%). The majority of households earned between €20,000 and €60,000. The characteristics of these caregivers are presented in Table 1.
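Wilson's method, named above for the proportion confidence intervals, can be implemented directly. The function below is a standard Wilson score interval (95% by default); the example counts are illustrative, not the study's data.

```python
import math


def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half_width, centre + half_width


# Example: 50 of 100 respondents choosing an option
low, high = wilson_ci(50, 100)  # roughly (0.404, 0.596)
```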
| Age of transition

The most common ages selected for when caregivers feel it is appropriate to begin to transfer responsibilities were: <6 years for teaching recognition of anaphylaxis symptoms (65.9%); <6 years and 6-8 years for describing when adrenaline should be used (39%, 39%); 6-8 years for teaching how to self-inject adrenaline using an auto-injector trainer (35.8%); and 9-11 years for teaching how to use a real AAI on an orange or similar object (35.8%), carrying an AAI (35.8%), describing their anaphylaxis management plan (35.8%), and teaching to self-inject using an AAI (44.7%) (Table 2).

| Readiness factors

Caregivers most frequently cited a history of more than one anaphylactic reaction (86.2%), a history of severe anaphylaxis (94.3%), the child's ability to describe reasons to inject adrenaline (87.8%), and the ability to demonstrate AAI use (82.1%) as "very important" readiness factors influencing the age at which they feel it is appropriate to begin to teach their child to recognize anaphylaxis and use an AAI. The child's age (11.4%), school grade (30.9%), fear of needles (22%), and school policy regarding AAI carriage (22%) were the readiness factors least often rated as "very important."

| DISCUSSION

This is the first study in Ireland to examine caregivers' perspectives on when they believe it is the appropriate age to begin to transfer the responsibilities of anaphylaxis management to their child. Furthermore, readiness factors that influence the caregivers' approach to transferring the responsibility of anaphylaxis management to their child, their confidence in training their children, and their views on who should support this transition were identified. Caregivers in this sample believed it appropriate to begin to transfer the responsibility of anaphylaxis recognition and AAI use to their children younger than the EAACI-suggested age of 11-13 years.
Most caregivers in this sample believed it appropriate to begin to teach children to recognize anaphylaxis symptoms from when a child is 6 years or younger, and to use an AAI at 9-11 years, based on the child's readiness. A study of Canadian caregivers of allergy patients revealed similar trends, indicating shared experiences in transferring responsibilities to their children. 10 One study of 88 pediatric allergists found that very few expected the transfer of responsibilities to begin before the age of 9-11 years. 12 Most allergists considered 12-14 years of age to be appropriate for children to be able to recognize anaphylaxis symptoms, self-carry and use an AAI. These findings highlight the discrepancy between caregiver experience and clinicians' expectations. 5 | Readiness factors The readiness factors highlighted as "very important" by caregivers in this study were similar to those identified by allergists in Simons' study. 12 However, the leading factor for caregivers was "history of previous anaphylactic reaction", compared with the "ability to demonstrate AAI technique with trainer device", which allergists ranked as the primary factor. 12 Even though the severity or number of past anaphylactic reactions cannot predict those of future episodes, parents were more motivated to start the teaching process after witnessing severe anaphylaxis. 14 | Caregiver training and confidence In this study, pediatric allergy clinical staff were identified as the party responsible for teaching children to use an AAI. It has been demonstrated that families are typically willing to take responsibility for their child's care; however, they require clear guidance, information, and support from medical professionals. 15 Although most caregivers had received AAI training, only around half felt confident in teaching their child to use an AAI. Evidently, the delivery of caregiver training requires improvement.
One review of caregiver training in anaphylaxis management found that competence varied significantly between studies and that further research is needed to identify the most effective training strategies; however, clinician instruction on AAI administration correlates significantly with parental comfort with AAI use. 15,16 While the numbers in each subgroup analysis were small, this study suggests that caregivers with higher annual household incomes were more likely to believe they were responsible for training their child in anaphylaxis recognition and AAI use, and exhibited higher confidence in doing so. This may reflect increased access to specialist health care in this group and therefore additional training in AAI use. [17][18][19] However, a 2021 study investigating factors contributing to the underuse of AAIs in pediatric patients found that despite caregivers reporting recent AAI training in clinic, they still feel nervous during anaphylactic episodes. 10 It is worth noting that the COVID-19 pandemic affected the delivery of all aspects of medical services, and this may have contributed to the lack of follow-up and training, particularly as outpatient appointments are typically where caregiver training occurs. 20 Telemedicine has come to the fore during the pandemic as a medium of healthcare delivery that can improve access to care and patient-physician interactions while limiting costs. 21,22 In Ireland, telemedicine was used in epilepsy outpatient care with positive clinician and patient feedback, indicating that it may also be beneficial in allergy management. 23 | Future research Given the discrepancies between clinician and parental expectations, future research should examine, in a larger cohort, the factors that may prompt caregivers to initiate an earlier transition of anaphylaxis management to their child.
Strategies to train and empower caregivers to teach their child anaphylaxis management should also be further investigated due to suboptimal caregiver confidence in training their children. Telemedicine can improve access to health care and outpatient management; therefore, it is important to investigate the potential role of telemedicine in providing caregiver training in AAI use. ACKNOWLEDGMENTS This research received support from Musgrave Group. The authors wish to thank Dr Elinor Simons, University of Manitoba, for kindly approving the use and adaptation of their questionnaire materials. The authors wish to thank Ms. Jackie O'Leary for her expertise and management of quality and regulatory affairs during this study. The authors are grateful to the parents who took the time to participate in the study. Open access funding provided by IReL. PEER REVIEW The peer review history for this article is available at https://
Host and non-host roots in rice: cellular and molecular approaches reveal differential responses to arbuscular mycorrhizal fungi Oryza sativa, a model plant for Arbuscular Mycorrhizal (AM) symbiosis, has both host and non-host roots. Large lateral (LLR) and fine lateral (FLR) roots display opposite responses: LLR support AM colonization, but FLR do not. Our research aimed to study the molecular, morphological and physiological aspects related to the non-host behavior of FLR. RNA-seq analysis revealed that LLR and FLR displayed divergent expression profiles, including changes in many metabolic pathways. Compared with LLR, FLR showed down-regulation of genes instrumental for AM establishment and gibberellin signaling, and a higher expression of nutrient transporters. Consistent with the transcriptomic data, FLR had higher phosphorus content. Light and electron microscopy demonstrated that, surprisingly, in the Selenio cultivar, FLR have a two-layered cortex, which is theoretically compatible with AM colonization. In agreement with the RNA-seq data, treatment with a gibberellin biosynthesis inhibitor increased anticlinal divisions, leading to a higher number of cortical cells in FLR. We propose that some of the differentially regulated genes that lead to the anatomical and physiological properties of the two root types also function as genetic factors regulating fungal colonization. The rice root apparatus offers a unique tool to study AM symbiosis, allowing direct comparisons of host and non-host roots in the same individual plant. Introduction One of the most important biological novelties that evolved in plant colonization of land was the root apparatus, an organ specialized to anchor the plant body and to absorb and store water and nutrients.
Current knowledge indicates that the ancient alliance between non-rooted plants and symbiotic fungi, such as Glomeromycota and Mucoromycotina, promoted this morphological innovation and has played a key role in the origin of land flora (Brundrett, 2002; Bonfante and Genre, 2008; Gutjahr and Paszkowski, 2013 and citations therein). This ancient alliance continues with most modern plants, as approximately 80% of vascular plant species establish arbuscular mycorrhizal (AM) symbiosis with fungi of the Glomeromycota (Redecker et al., 2013). However, several angiosperm species belonging, for example, to Brassicaceae (including the model plant Arabidopsis thaliana), Chenopodiaceae, Cyperaceae, and Proteaceae cannot establish AM symbiosis and are considered non-host plants (Delaux et al., 2013; Lambers and Teste, 2013; Veiga et al., 2013). A recent work on plant-microbe interactions characterized a core set of highly conserved genes required for the establishment of AM symbiosis, the so-called "symbiotic toolkit" (Delaux et al., 2013). Non-host plant genomes lack most (64%) of these symbiotic genes, suggesting that the ancestors of these plant families lost the ability to establish AM symbiosis (Delaux et al., 2013). Among host plants, the eudicots Medicago truncatula, Lotus japonicus, and Solanum lycopersicum and the monocots Oryza sativa and Zea mays are considered useful model species to gain insights into the evolution and the mechanisms controlling AM association. Although eudi- and mono-cotyledonous plants display distinct root system architecture and cellular organization (Hochholdinger and Zimmermann, 2008), both root systems show comparable distributions of AM colonization. In particular, AM fungi preferentially colonize lateral roots and rarely colonize taproots (eudicotyledons) or crown roots (monocotyledons) (Hooker et al., 1992; Gutjahr et al., 2009). Among AM-host plants, rice (O.
sativa) has an unusual root system consisting of embryonic and postembryonic crown roots (CR), which branch to generate two types of secondary root: large lateral roots (LLR), which show positive gravitropism and intermediate growth and branching, and the more abundant fine lateral roots (FLR), which do not respond to gravity and never produce lower orders of ramification (Coudert et al., 2010). FLR lack both constitutive and inducible aerenchyma tissues, LLR develop aerenchyma sporadically in dryland and regularly in wetland, and crown roots regularly have aerenchyma, irrespective of water regime (Rebouillat et al., 2009; Vallino et al., 2014). Interestingly, the anatomical differences displayed by the three root types probably mirror a divergent functional role, an issue that has been poorly investigated. To date, it has been proposed that crown roots mainly function to provide anchorage and support, and due to the constitutive presence of aerenchyma, to provide oxygen from shoots to roots, while lateral roots may function to take up nutrients (Kirk, 2003). This hypothesis is supported by a root-type specific transcriptomic analysis performed on CR, LLR, and FLR collected from control and AM-colonized roots of rice (Gutjahr et al., 2015). CR, in line with their role in plant stabilization, showed an enhanced expression profile of genes involved in secondary cell wall (SCW) metabolism, while both lateral root types displayed an enrichment of transcripts related to mineral transport. AM fungi preferentially colonize LLR, not FLR (Gutjahr et al., 2009; Vallino et al., 2014), but the determinants that make FLR not susceptible to AM fungal colonization remain unknown.
In this work, our investigations were driven by the following hypotheses: (1) LLR and FLR have different gene regulation profiles leading to different developmental plans; (2) the two LR have different functional roles irrespective of the symbiosis; (3) the different anatomy of the two LR is crucial to determining their different mycorrhizal status; (4) multiple factors may determine the different mycorrhizal status. To address these issues, we combined molecular, morphological, and physiological approaches. Through mRNA-seq, we compared LLR vs. FLR in control and mycorrhizal conditions and we focused our attention on candidate transcripts that may: (i) define the differences in anatomy and thus different roles of LLR and FLR, and (ii) make FLR not susceptible to fungal colonization. We generated a comprehensive and integrated data set that provides baseline information for elucidating gene networks associated with root development, functions, and interaction with AM fungi. Finally, we propose the rice root system, with its host and non-host roots present on the same plant, as a powerful system to discover new determinants involved in AM colonization. Plant Material, Mycorrhization, Growth Conditions, and Sampling All experiments were done on O. sativa cv. Selenio, a common Italian rice variety with round grains. Seeds (provided by the Rice Research Unit of the Agricultural Research Council, Vercelli, Italy) were germinated in pots containing sand and incubated for 7 days in a growth chamber under 14 h of light (23 °C) and 10 h of dark at 21 °C. Plants were then transferred individually to new pots in the presence or absence of the mycorrhizal fungus. Mycorrhizal roots were obtained by the sandwich method (Guether et al., 2009). Plants were grown in 9-cm-high and 11-cm-diameter pots and maintained in a growth chamber, as described above, until harvesting (42 days post-inoculation, dpi). Plants were watered as described in Vallino et al. (2014).
The colonization status of mycorrhizal roots was checked under a microscope. For RNA-seq experiments, about 30 mg of FLR and LLR were collected manually using a scalpel and forceps to obtain homogenous root sets from both mycorrhizal and control plants (Figure S1). The FLR collected were those originating from LLR, and in the LLR set, the tertiary roots were not included. Collected roots were immediately frozen in liquid nitrogen and kept at −80 °C until RNA extraction. Nucleic Acid Extraction Total genomic DNA was extracted from R. irregularis extraradical mycelium and O. sativa shoots using the DNeasy Plant Mini Kit (Qiagen, Hilden, Germany), according to the manufacturer's instructions. Plant and fungal genomic DNAs were used to test each primer pair designed for real-time PCR to exclude cross-hybridization. Total RNA was extracted from rice roots of mycorrhizal and non-mycorrhizal plants using the Plant RNeasy Kit (Qiagen), according to the manufacturer's instructions. Samples were treated with TURBO DNase (Ambion, Austin, TX, USA) according to the manufacturer's instructions. The RNA samples were routinely checked for DNA contamination by RT-PCR (OneStep RT-PCR, Qiagen) analysis, using OsRubQ1 (Güimil et al., 2005; Table S1). Illumina GAIIx Sequencing Three micrograms of total RNA were used for library preparation with the TruSeq RNA sample preparation Kit (Illumina, FC-122-1001) following the manufacturer's instructions. Libraries were amplified with 15 cycles of PCR and then purified and size-selected to an average size of 300 bp on a 2% low range ultra agarose gel (BIO-RAD). RNA quality and library concentration and size were assayed on a 2100 Bioanalyzer (Agilent). Libraries were single-end sequenced (51 bp; two samples per lane) on an Illumina Genome Analyzer (GAIIx). Three biological replicates were generated for each condition, but only two replicates produced good RNA for FLRmyc.
Therefore, for the FLRmyc condition, only two samples were considered, complying with the recommended RNA-seq standards that two biological replicates are sufficient (ENCODE Project, 2011 - http://genome.ucsc.edu/ENCODE/protocols/dataStandards/ENCODE_RNAseq_Standards_V1.0.pdf). cDNA Synthesis and Real-time Quantitative RT-PCR Single-strand cDNA was obtained as described in Vallino et al. (2014). Quantitative real-time PCR was used to measure the expression of 12 genes shown to be differentially regulated by RNA-seq. Three biological replicates were conducted for each condition. Quantitative real-time PCR experiments and data analysis were carried out as described in Vallino et al. (2014). The primer names and corresponding sequences are listed in Table S1. Mapping of Illumina Reads Raw fastQ files were checked for low-quality reads and contaminants. Low-quality reads (quality ≤ 10 phred score) and contaminants were removed with Cutadapt software (Martin, 2011). Contaminant-free, filtered reads were mapped with Bowtie/TopHat version 1.4.1 (Trapnell et al., 2012) to the rice genome (O. sativa Nipponbare MSU 6.16 release). A minimum and maximum intron length of 40 and 50,000 bp, respectively, were used. Read counts were collected as described in Bagnaresi et al. (2012). DEG Calling and GO Enrichment Analyses The DESeq Bioconductor package version 1.10.1 (Anders and Huber, 2010) was used to call Differentially Expressed Genes (DEG), as described in Zouari et al. (2014). One single DESeq CountDataSet object instance was created for both FLR and LLR and the two treatments. DESeq parameters for dispersion estimation were: method "pooled" and sharingMode "fitOnly". The False Discovery Rate (FDR) threshold for DEG calling was set to 0.05. GO enrichment was done as described in Bagnaresi et al. (2012). The GOSEQ Bioconductor package was used to account for the RNA length bias typical of RNA-seq approaches (Oshlack and Wakefield, 2009).
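DESeq's FDR-based DEG calling rests on Benjamini-Hochberg adjusted p-values; the thresholding step can be illustrated with a minimal pure-Python sketch of the BH procedure (the p-values below are invented for illustration and are not from the study).

```python
def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg adjusted p-values, in the input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, ascending p
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        adjusted[i] = prev = min(prev, pvals[i] * m / rank)
    return adjusted

# Toy raw p-values for six hypothetical genes
pvals = [0.001, 0.008, 0.039, 0.041, 0.09, 0.6]
adj = benjamini_hochberg(pvals)
deg_calls = [p <= 0.05 for p in adj]  # FDR threshold of 0.05, as in the text
```

Note that genes with raw p < 0.05 (e.g. 0.039 and 0.041 above) can still fail the FDR cutoff once adjusted, which is exactly why DEG counts from FDR-controlled calling are smaller than naive p-value counts.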
Miscellaneous Bioinformatic Techniques Heatmaps of clustered samples were obtained upon transformation of count data values with the VST function, as available in the DESeq R package (Anders et al., 2015). MapMan figures were generated upon binning of DEG sequences to MapMan bins with the Mercator application (Lohse et al., 2014). Unless otherwise stated, further graphical outputs were generated with custom R and Python scripts. Transmission Electron Microscopy Root segments from each independent sample were fixed in 2.5% (v/v) glutaraldehyde in 0.1 M cacodylate buffer (pH 7.2) for 2 h at room temperature and then overnight at 4 °C, rinsed twice and post-fixed for 1 h in 1% (w/v) OsO4. After rinsing in the same buffer, they were dehydrated in an ethanol series at room temperature, followed by two changes of absolute acetone, and then infiltrated in Epon-Araldite resin (Hoch, 1986). The resin was polymerized for 24 h at 60 °C. Embedded samples were processed for ultramicrotomy. Semi-thin sections (1 µm) were cut from each sample, stained with 1% toluidine blue and observed under an optical microscope to inspect general morphology. Ultra-thin sections (0.05 µm) were counterstained with uranyl acetate and lead citrate (Reynolds, 1963) and observed under a Philips CM10 transmission electron microscope. Some ultra-thin sections were stained using the Thiéry reaction (Thiéry, 1967) (PATAg staining) to visualize polysaccharides (Roland and Vian, 1991). PATAg staining uses the oxidation of polysaccharides by periodic acid, creating aldehyde groups, which are visualized by a silver complex. Lignin was detected by staining with 1% (w/v) phloroglucinol in 35% (v/v) HCl. Stained lignin appears red under white light. Suberin and cutin were detected by staining for 2 h with 0.1% (w/v) Sudan Red 7B (Sigma) and then mounting in 75% (v/v) glycerol (Brundrett et al., 1991). Photographs were taken within 60 min of staining.
Root Staining and Drug Treatment For cutin/suberin monomer treatment, 28-day-old mycorrhizal rice plants, previously colonized by R. irregularis by means of the sandwich system, were treated for 4 days in hydroponic conditions with sterilized Long Ashton solution (with 32 µM Na2HPO4·12H2O; Hewitt, 1966) containing both C16 cutin/suberin monomers: 16-hydroxyhexadecanoic acid and 1,16-hexadecanediol (20 µg/ml). The presence of hyphopodia was monitored by screening 400 FLR for each biological replicate. Treatment with equivalent dilutions of ethanol was used as a control. Three biological replicates were considered for each condition. The colonization level of LLR was assessed according to Trouvelot et al. (1986). For treatment with paclobutrazol (PAC), a gibberellic acid synthesis inhibitor, seeds were sterilized and germinated for 5 days in the dark and 4 days in the light on Murashige and Skoog medium plates supplemented with 10 µM PAC. Plants were transferred to pots, and in order to obtain more FLR, plants were harvested after 21 days of growth. Plants were watered once a week with water and once with water containing 10 µM PAC. The shoot and root phenotype was evaluated macro- and microscopically. The number of cortical cells in both treatments was counted by microscopy of vibratome sections of FLR, obtained as described above. Phosphorus Quantification FLR and LLR from non-mycorrhizal and mycorrhizal plants were collected manually as described above, from four independent plants. For phosphorus (P) quantification, about 2 mg of dried material was digested in 1 mL of 6 M HNO3 for 1 h at 95 °C. The analysis was performed as described in Zouari et al. (2014). Statistical Analysis For all the RT-qPCR, drug treatment, and phosphorus quantification measurements, values are expressed as mean ± standard deviation. Data were analyzed with a one-way ANOVA with Tukey post-hoc test, using a probability level of P < 0.05.
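The one-way ANOVA used here reduces to the ratio of between-group to within-group mean squares. A minimal sketch with invented measurements follows (a p-value would additionally require the F-distribution CDF, e.g. from scipy.stats, and Tukey's post-hoc test a dedicated routine).

```python
def anova_f(groups):
    """F statistic for a one-way ANOVA: between-group MS / within-group MS."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n      # grand mean
    means = [sum(g) / len(g) for g in groups]    # per-group means
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative data only (not the paper's measurements): three groups
f_stat = anova_f([[1, 2, 3], [2, 3, 4], [6, 7, 8]])
```

A large F indicates that variation between group means dominates within-group noise; identical groups give F = 0.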
All statistical elaborations were performed using the PAST statistical package (version 2.16; Hammer et al., 2001). Results RNA was isolated from LLR and FLR of rice plants grown in the absence (LLRc and FLRc) or the presence (LLRmyc and FLRmyc) of the mycorrhizal fungus R. irregularis. In order to obtain a homogenous LLRmyc sample, we selected under a stereomicroscope only LLR exhibiting external mycelium. The mycorrhizal status of this sample was confirmed by calculation of the total root length colonization (76.3% ± 3.2, consistent among replicates) and by the higher expression of the arbuscular marker gene OsPT11 than in the control (Figure S2). The FLRmyc sample did not show any fungal structures (data not shown). RNA was subjected to single-end whole transcriptome sequencing, obtaining 13-20 million reads per sample (mean 17.5 million; 51 bases, single-end; Table S2). An RPKM (Reads Per Kilobase per Million mapped reads) cutoff value of 0.1 was set to declare a locus expressed, resulting in 30,204 loci above the expression cutoff. Pearson correlation coefficients for biological replicate samples sharing the same treatment and tissue were always above 0.9 (Figure S3), indicating a good level of reproducibility among replicates. Differentially Regulated Genes in LLR and FLR: A Global View We used the R package DESeq to identify DEG among the four tested conditions. Expression values for all genes and comparisons, and annotations of the genes, are reported in Tables S3-S6. The comparisons (Figure 1) included LLRc vs. FLRc (5333 DEG), LLRmyc vs. LLRc (1697 DEG), FLRmyc vs. FLRc (780 DEG), and LLRmyc vs. FLRmyc (3949 DEG). The expression profile of 12 genes randomly selected from those identified in the RNA-seq experiment was successfully validated by qRT-PCR (Figure S2).
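The RPKM cutoff described above normalizes raw read counts by transcript length and sequencing depth. A minimal sketch, using invented gene lengths and counts (the total of 10 million mapped reads is simply a round figure within the 13-20 million range reported):

```python
def rpkm(count, gene_length_bp, total_mapped_reads):
    """Reads Per Kilobase of transcript per Million mapped reads."""
    return count / ((gene_length_bp / 1_000) * (total_mapped_reads / 1_000_000))

# Declare a locus "expressed" if RPKM >= 0.1, as in the text
counts = {"locusA": 100, "locusB": 1}          # hypothetical raw read counts
lengths = {"locusA": 2_000, "locusB": 5_000}   # hypothetical lengths in bp
total = 10_000_000                             # illustrative library size
expressed = {g: rpkm(c, lengths[g], total) >= 0.1 for g, c in counts.items()}
```

Because the length and depth normalizations are multiplicative, a long gene with few reads and a short gene with many reads can land at the same RPKM, which is what makes a single cutoff usable across loci.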
In all the tested conditions we identified roughly equal numbers of transcripts accumulating at higher or lower levels, with fold-changes between 5 and −5 (log2 scale) and only a few genes going beyond these limits, with the exception of the comparison of LLRmyc vs. LLRc (Figure 1). In this comparison, 63% of DEG were up-regulated in the LLRmyc condition and 62 DEG had a fold-change above 5 (Table S6, Figure 1B), indicating that in this root type, a set of genes (4% of DEG) is strongly up-regulated in response to the presence of the mycorrhizal fungus. The Venn diagrams in Figure 2 showed that 434 genes were differentially expressed (either up- or down-regulated) both in LLRmyc vs. LLRc and FLRmyc vs. FLRc (25% of the genes modulated in LLR upon infection), and LLRc vs. FLRc compared to LLRmyc vs. FLRmyc had 2920 genes in common (54% of genes modulated in LLRc vs. FLRc). These data suggest that the two root types are characterized by different transcriptome profiles and that the related differences in gene expression are more important than those driven by the presence of AM fungi. DEG data visualized with MapMan software (Thimm et al., 2004) gave the same indication (Figure 3). In the comparison between LLRmyc and FLRmyc, 35 genes were specifically expressed in LLRmyc and 5 in FLRmyc (Figure 1D). In LLR, which are the preferential host roots for AM fungi, the comparison between the mycorrhizal and the control plants revealed 53 out of 1697 genes specifically expressed upon mycorrhizal colonization. As expected, most of them had already been described by Güimil et al. (2005) as AM marker genes, and only three were specifically expressed in LLRc. We identified the fewest DEG in the comparison between FLRc and FLRmyc, as expected, since the AM fungus does not colonize the FLR. However, the transcript changes suggest that the FLR perceive the fungal presence, irrespective of the fact that they are not susceptible to colonization.
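The Venn-diagram overlaps reported above are plain set intersections over DEG identifiers; a toy sketch with invented gene IDs (the real comparisons operate on thousands of loci in exactly the same way):

```python
# Hypothetical DEG sets for two comparisons (gene IDs invented)
llr_myc_vs_c = {"Os01g01", "Os02g02", "Os03g03", "Os04g04"}
flr_myc_vs_c = {"Os03g03", "Os04g04", "Os05g05"}

shared = llr_myc_vs_c & flr_myc_vs_c        # DEGs modulated in both root types
only_llr = llr_myc_vs_c - flr_myc_vs_c      # DEGs specific to the LLR response
pct_of_llr = 100 * len(shared) / len(llr_myc_vs_c)
```

The percentages quoted in the text (e.g. "25% of the genes modulated in LLR upon infection") correspond to `len(shared) / len(llr_myc_vs_c)` for the relevant pair of comparisons.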
To obtain an overview of the regulation of the main metabolic and signaling pathways involved in the different comparisons, we conducted GO enrichment analysis. Table S7 lists the enriched GO terms for each comparison, and Figure S4 shows the GO terms over-represented in both root types in response to the AM fungus as well as the enriched GO terms specific for LLRmyc and for FLRmyc. A first analysis of the generated data sets largely confirms the first hypothesis, that the two roots have their own transcriptome signature. Genes Involved in AM Symbiosis: The Comparison between LLRmyc and FLRmyc To decipher the molecular determinants responsible for the different responses to the AM fungus in LLR and FLR, we analyzed the expression profiles of genes described in the literature as involved in AM symbiosis. The genes were clustered according to their role in the different stages of mycorrhizal colonization (presymbiotic phase; downstream of the Common Symbiotic Signaling Pathway, CSSP; AM marker genes) and are listed in Table 1. Considering the presymbiotic phase, it is worth noting that the genes involved in strigolactone (SL) biosynthesis, such as carotenoid cleavage dioxygenase (CCD) 7 (OsCCD7), OsCCD8a, OsCCD8b, OsCCD8c and two cytochrome P450 genes (with high sequence similarity to the Arabidopsis SL biosynthesis gene MAX1; Cardoso et al., 2014), and genes containing a lysin motif (LysM) domain (Table 1) were highly expressed in LLRmyc vs. FLRmyc. Moreover, the putative homolog of Lotus japonicus Lectin Nucleotide Phosphohydrolase (LjLNP), described by Roberts et al. (2013) as a Nod factor-binding protein required for AM symbiosis, showed higher expression in LLRmyc compared to FLRmyc (Table 1). FIGURE 2 | Venn diagrams of control and AM fungal-modulated genes (DEG), illustrating the relationships between DEGs in contrasts between the same tissue under different treatments (myc vs. control) or the same treatment in different tissues (LLR vs. FLR).
As expected, the majority of the genes belonging to the CSSP were not differentially regulated between LLR and FLR in either condition (myc and non-myc), with the exception of OsNUP133 (Table 1). We also observed that mycorrhization per se induced the up-regulation of some defense-responsive genes in LLRmyc compared to LLRc (Table S8); however, a pathogenesis-related Bet v I family protein (a putative OsPR10, LOC_Os12g36840) and two jasmonic acid-induced protein Jacalin-related lectins (JRLs) (LOC_Os01g25280; LOC_Os06g07250) were strongly induced in FLRmyc compared to both LLRmyc and FLRc. Interestingly, we observed that the transcripts of genes acting downstream of the CSSP, such as those belonging to the GRAS-domain protein complex (DELLA/SLR1; Required for Arbuscular Mycorrhization, RAM1; the putative homolog of Required for Arbuscule Development 1, RAD1) (Floss et al., 2013; Yu et al., 2014; Xue et al., 2015) and RAM2, considered crucial for hyphopodium formation, were barely expressed in FLRmyc compared to LLRmyc. To investigate whether the absence of RAM2 induction in FLR was related to insufficient production or release of cutin or related compounds, which in turn affects hyphopodia formation, we treated rice mycorrhizal plants with C16 cutin/suberin monomers and we assessed the presence of fungal hyphopodia on roots. The addition of the C16 monomers was not sufficient to compensate for RAM2 repression: in fact, we did not observe either an induction of hyphopodia formation on FLR (data not shown) or an increase of the AM colonization level on LLR (Figure S5). All the other genes regulated upon myc treatment but not previously described in the literature in rice were considered putative new rice transcripts involved in AM symbiosis. Depending on their expression level, we clustered them into the following categories: (i) novel rice AM markers, which are specifically induced in LLRmyc vs.
LLRc and not detected in FLR in both conditions; (ii) AM-responsive genes, which are more expressed in LLRmyc vs. FLRmyc and not detected in LLRc; (iii) AM-induced genes, which are strongly up-regulated in LLRmyc vs. LLRc and more expressed in LLRmyc vs. FLRmyc (Table 2). Using these criteria, we identified four AM-marker genes: Dirigent putative, which is classified as a disease resistance-responsive gene; a Cytochrome P450 gene (P450 71C4); a receptor kinase gene, CRRLPK-8, which shows similarity with an L. japonicus receptor-like protein kinase strongly induced in mycorrhizal roots (Guether et al., 2009); and a gene encoding an "expressed protein with unknown function" (Table 2). We also identified 25 novel AM-responsive genes and 18 transcripts strongly up-regulated in LLRmyc vs. LLRc that have not been described in the rice-AM fungi interaction so far. Among the AM-responsive genes, seven and six transcripts, respectively, showed similarity with M. truncatula and L. japonicus genes previously detected in AM roots (Table 2). Two AM-responsive genes that encode lipid transfer proteins (LTP) (LTPL29, LTPL35) were also highly regulated in LLRmyc vs. FLRmyc. Blilou et al. (2000) showed that another LTP (LTPL11) is regulated in rice roots in response to AM colonization during hyphopodia formation and decreases at the onset of the intercellular colonization of the cortex. We also observed that genes belonging to the Ripening-related family protein precursor (RIPER) family (Table 2) were strongly induced in LLRmyc. Among these, RIPER3 (OsAM8) was previously identified as a mycorrhiza-responsive rice gene (Güimil et al., 2005). In addition, we found two novel receptor kinases showing an interesting gene expression profile. A serine/threonine-specific receptor protein kinase-like gene was strongly induced in FLRmyc vs.
FLRc and not detected in the host root, while a receptor-like protein kinase (LOC_Os05g25430), showing high sequence similarity with the Feronia sequence of Arabidopsis (OsRLK-FER), was up-regulated in LLRmyc vs. LLRc and not detected in the non-host root (Table 1). Deeper analysis of the generated data set revealed that the presence of the AM fungus elicits the differential expression of a large number of genes, opening the question of whether and how these DEGs overlap with the general changes illustrated in Figure 1. To address this question, we first focused on the well-known anatomical differences between the two root types. Root Radial Anatomy In their detailed description of rice anatomy, Rebouillat et al. (2009) identified epidermis, exodermis, sclerenchyma, endodermis, and central cylinder tissues in transverse sections of the three root types of the Nipponbare cultivar; they also found that FLRs had no cortical cell layer. To test if this description could also be applied to the Selenio cultivar, we stained root sections for lignin and suberin/cutin, as cell wall markers to identify root tissues, and examined the sections by light and transmission electron microscopy. We observed the characteristic red color of lignin from phloroglucinol-HCl staining in the cell walls of xylem and endodermis Casparian bands in all root types (Figure 4). By contrast, only crown roots and a few LLR showed staining corresponding to the sclerenchyma layer (Figures 4A,B). This confirmed the presence of two types of LLR that differ in the presence (T-type) or the absence (L-type) of sclerenchyma (Kono et al., 1972). No lignified cells were detected in FLRc, suggesting that this root type in the Selenio cultivar does not develop exodermis and sclerenchyma layers, while a two-layer, non-lignified cortex was consistently present (Figure 4C).
No differences were detected in root cross-sections stained with the lipophilic dye Sudan 7B (data not shown), suggesting that suberin/cutin is not a relevant component of the cell walls of LLR and FLR. To get a deeper insight into the anatomy of the two root types, we also embedded LLR and FLR in resin, obtained semi- and ultra-thin sections and observed them by light and transmission electron microscopy. Vibratome sections (500 µm, Figures 5A,B) confirmed differences in radial anatomy, with four cortical layers in the LLR and only two in the FLR. The details of the central cylinder were better revealed in semi-thin sections (0.5 µm) showing a layer of roundish endodermal cells (Figure 5B, inset) surrounded by an inner cortical layer (layer 2). Ultra-thin sections (0.05 µm) treated with Thiéry's polysaccharide stain (Thiéry, 1967), to better detect cell wall organization, also revealed subtle differences. The LLR endodermal cells were rich in cytoplasm and in direct contact with the inner cortical cells (Figure 5C), where abundant vesicles with a positive reaction to the Thiéry stain lined the periplasmic area between the plasma membrane and the cell wall (Figure 5E), suggesting active polysaccharide secretion toward the multilayered wall. By contrast, in FLR, the endodermis did not show any cytoplasm and was in contact with a highly differentiated layer of cortical cells, which consisted of oval-shaped cells with a very thick, layered cell wall (Figure 5D). This cell wall strongly reacted with the silver grains of the Thiéry reaction, revealing a thin, multilayered organization typical of fibrillar cellulose (Figure 5F). Lastly, both root types revealed thin Casparian bands with a very thin suberin layer localized exclusively in the central part of the radial endodermis walls (Figure 5H). TABLE 2 | List of the novel rice AM marker genes (specifically induced in LLRmyc vs.
LLRc and not detected in FLR in both conditions), AM-responsive genes (specifically induced in LLRmyc vs. FLRmyc and not detected in LLRc), and AM-induced genes (strongly induced in LLRmyc vs. LLRc and in LLRmyc vs. FLRmyc). MSU LOC_OS ID MSU description To better understand the molecular causes of the different numbers of cortical cell layers between LLR and FLR (Figure 5), we hypothesized that gibberellic acid (GA) may have a role. On one hand, GA is a key factor that affects asymmetric cell divisions in the ground tissue (Paquette and Benfey, 2005;Koizumi and Gallagher, 2013); on the other hand, genes related to GA metabolism and perception are differentially expressed between the two rice root types (Table S9). To test this hypothesis, we treated rice plants with paclobutrazol (PAC-an inhibitor of giberellin acid biosynthesis) and after 21 days of growth they showed the typical reduced internodal growth and root elongation (Figures 6A,B). Thirty vibratome cross sections of FLR from treated and untreated plants were examined under the microscope for ground tissue patterning. A statistically significant higher (p < 0.05) number of cortical cells was observed in FLR of plants treated with PAC than in control plants ( Figure 6C). These data support the hypothesis that the different anatomy may be related to the regulation of plant phytohormones, and that in root, the anticlinal cell division is influenced by GA level. Overall, these data demonstrate that FLR of the Selenio cultivar have two cortex layers without exodermal and sclerenchyma tissues. Transcriptomic data corroborated such morphological observations since genes involved in suberin and lignin biosynthesis were mainly induced in LLRc and not in FLRc (Table S9). Lastly, morphological observations revealed thatnotwithstanding the known anatomical differences-FLR possess indeed a cortical parenchyma, which makes them theoretically capable to host intracellular AM structures. 
Root Phosphorus Content

Since transcriptomic data revealed a consistent enrichment in nutrient transporters in FLR vs. LLR (Tables S9, S10), to test whether a different nutritional uptake could be assigned to the two root types, we quantified the phosphorus content of the two root types in plants grown in the absence of the fungus. The P content of FLRc (2.10 ± 0.4 mg/g dry weight) was significantly higher (p < 0.05) than that of LLRc (1.51 ± 0.2 mg/g dry weight). Thus, FLRc contained approximately 30% more P than LLRc. In the presence of the fungus, LLRmyc (1.81 ± 0.3 mg/g dry weight) showed a higher P content than FLRmyc (1.42 ± 0.4 mg/g dry weight). Even though this increment was not statistically significant, the result suggests that, in the presence of the fungus, LLR exploit the mycorrhizal phosphate uptake pathway, complementing the direct uptake pathway of FLR.

Discussion

AM fungi colonize plant roots through a series of spatio-temporal steps. After a chemical dialog between the two symbionts (Bonfante and Genre, 2015), the fungus reaches the root surface and forms the hyphopodium, from which a penetration hypha invades the rhizodermal cell layer. The intracellular hyphae rapidly develop into the plant cortex and form the arbuscule, which is the functional site where bidirectional nutrient exchange takes place between the host and the fungus. The root apparatus of rice plants offers a powerful, unique tool to study the plant-AM fungus interaction, since it consists of both host and non-host roots, thus allowing their direct comparison in the same genetic background and even in the same individual plant. In our work, we took advantage of this peculiarity to investigate the determinants involved in AM core colonization. To this end, we first obtained a whole-transcriptome data set from LLR and FLR and examined their responses upon mycorrhizal colonization.
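As a side note, the relative P enrichment can be recomputed from the reported mean values; a minimal sketch (not from the paper) showing that the percentage depends on which root type is taken as the baseline:

```python
# Recompute the relative P enrichment of FLRc over LLRc (control roots)
# from the mean values reported in the text (mg/g dry weight).
flrc_p = 2.10  # mean P content of FLRc
llrc_p = 1.51  # mean P content of LLRc

delta = flrc_p - llrc_p  # absolute difference, ~0.59 mg/g

# The percent difference depends on the chosen baseline:
rel_to_llrc = delta / llrc_p * 100  # taking LLRc as the baseline
rel_to_flrc = delta / flrc_p * 100  # taking FLRc as the baseline

print(f"relative to LLRc: {rel_to_llrc:.1f}%")
print(f"relative to FLRc: {rel_to_flrc:.1f}%")
```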
Subsequently, we mined the data set for the expression profiles of genes involved in AM symbiosis, keeping in mind the main phases of fungal colonization and combining the results with anatomical and physiological evidence.

Genes Involved in Presymbiotic Recognition

As illustrated in many reviews (Bonfante and Requena, 2011; Gutjahr and Parniske, 2013; Schmitz and Harrison, 2014; Bonfante and Genre, 2015), plants dialog with AM fungi through the release of signal molecules such as strigolactones (SL), the activation of signaling pathway genes belonging to the CSSP, as well as the activation of defense-related genes. On the one hand, our data from the RNA-seq experiment confirmed that the majority of the CSSP genes were not differentially regulated (Gutjahr et al., 2008). On the other hand, genes involved in SL biosynthesis and defense reactions were differentially expressed. In fact, consistent with the observation that LLR are susceptible to mycorrhization, we found, in LLR, an induction of genes involved in SL biosynthesis compared to FLR, in both control and mycorrhizal conditions (Table 1). By contrast, FLRmyc showed a higher expression of defense-response genes (Table 1). The accumulation of pathogenesis-related proteins represents a ubiquitous response to pathogen infection in plants (van Loon et al., 2006), but the up-regulation of genes involved in defense also occurs in response to mycorrhization (Campos-Soriano et al., 2010; Lopez-Raez et al., 2010). Such a host response is probably under the control of a finely tuned phytohormonal network, but the experimental data are at the moment not conclusive (Lopez-Raez et al., 2010; Kloppholz et al., 2011; Plett et al., 2014). In our experiment, we observed that two jasmonic acid-induced JRLs were strongly induced in FLRmyc compared with both LLRmyc and FLRc. New roles are emerging for JRLs, which are a subgroup of proteins with one or more jacalin-like lectin domains.
Interestingly, rice JRLs are associated with biotic or abiotic stimuli, such as salt stress or pathogen infection (Garcia et al., 1998; Zhang et al., 2000; Qin et al., 2003); however, their biological functions in plants are still poorly understood (Ma et al., 2010; Al Atalah et al., 2011; Xiang et al., 2011; Balsanelli et al., 2013). Taken as a whole, the transcriptomic data show that the repression of genes involved in SL biosynthesis and the induction of defense genes might have a role in preventing AM fungal colonization in FLR. Alternatively, the induction of the defense-responsive genes in FLRmyc might be a consequence of the mycorrhizal priming effect.

Downstream of the Common Symbiotic Signaling Pathway

It has been suggested that the formation of a large complex of GRAS-domain proteins (DELLA/SLR1, DELLA-Interacting Protein 1 (DIP1), RAM1, and RAD1) is a prerequisite for the elicitation of nodulation or mycorrhization (Oldroyd, 2013). In line with the results obtained in mycorrhizal M. truncatula roots (Floss et al., 2013), we observed a down-regulation of the DELLA transcript in LLRmyc compared to LLRc. Although DELLA is involved in arbuscule formation through the repression of gibberellin signaling (Floss et al., 2013; Yu et al., 2014), DELLA expression is high during Pi-limiting conditions. Once arbuscules form, symbiotic Pi transport leads to an increase in Pi levels in the root, resulting in a decrease in DELLA transcript levels (Floss et al., 2013). Moreover, we found further repression of DELLA/SLR1 and OsRAM1 expression (Table 1) in FLRmyc vs. LLRmyc. A lower DELLA/SLR1 mRNA abundance was also detected in FLRc vs. LLRc, suggesting that this transcript regulation is probably related to the different morpho-physiological features of the two root types. By contrast, the lack of induction of OsRAM1 prompts us to speculate that in FLRmyc the mycorrhizal signaling pathway is not activated.
Downstream of the complex of GRAS-domain proteins, the glycerol-3-phosphate acyltransferase RAM2 functions in the production of cutin monomers and induces hyphopodium formation. Cutin monomers can guide and stimulate the initial approach of the AM fungus and the formation of the hyphopodium (Wang et al., 2012; Gobbato et al., 2013). Cutin mostly occurs in the aerial part of the plant, providing a hydrophobic surface that pathogens exploit in the early stages of the interaction. In general, roots do not contain cutin, but instead contain the related compound suberin. No difference in suberin/cutin deposition was observed between the two root types, suggesting that these compounds are not relevant components of the cell walls of LLR and FLR. Furthermore, the addition of C16 monomers upon AM fungus inoculation did not elicit hyphopodium formation on FLR, suggesting that the absence of fungal structures on FLR does not directly depend on the availability of cutin/suberin monomers.

Intracellular Phase: The Cortex Is Necessary But Not Sufficient for AM Symbiosis

Since intercellular hyphae and arbuscules require cortical cells, Gutjahr et al. (2009) proposed that the lack of cortex tissue in FLRs, as described in previous work (Rebouillat et al., 2009), could be a key factor in their inability to form arbuscules. However, FLR of the Selenio cultivar do have cortical layers, yet are not susceptible to fungal colonization. Moreover, the lack of hyphopodia on the FLR surface implies other mechanisms, such as insufficient release of diffusible molecules and/or the lack of a specific surface signal required for fungal attachment and hyphopodium induction. LLR have more cortical layers than FLR. The higher expression of DELLA/SLR1 and of the gene encoding the cytochrome P450 CYP714B1 (Magome et al., 2013) detected in LLRc compared with FLRc (Table S9) suggests that a repression of GA signaling may occur in LLRc, leading to multiple cortical layers.
In fact, GA also acts in a partially overlapping pathway with the GRAS-family transcription factors Short-Root (SHR) and Scarecrow (SCR) to regulate asymmetric cell divisions in the ground tissue (Paquette and Benfey, 2005; Cui et al., 2007; Koizumi and Gallagher, 2013). Consistent with previous studies (Paquette and Benfey, 2005; Koizumi and Gallagher, 2013), we observed significantly more cortical cells in FLR of plants treated with PAC (a GA inhibitor) (Figure 6C), suggesting that GA inhibition, at least in FLR, induces anticlinal cell division. Our understanding of the role of GA in AM symbiosis is constantly evolving. Recent work has demonstrated that the inhibition of GA biosynthesis or the suppression of GA signaling can strongly inhibit arbuscular mycorrhiza development in the host root (Floss et al., 2013; Foo et al., 2013; Takeda et al., 2015). Spatio-temporal regulation and fine-tuning of the GA level are necessary to promote AM colonization. Considering our data, it is tempting to speculate that GA has pleiotropic effects, since it affects root anatomical traits and in turn potentially influences the symbiosis signaling pathway.

Expression of Rice AM Marker Genes

Previous studies on the transcriptome of rice mycorrhizal roots identified a group of genes exclusively induced by AM fungi (Güimil et al., 2005; Gutjahr et al., 2008). Interestingly, we found basal expression of all the AM marker genes in FLRmyc (Table 1). This could be attributed, on the one hand, to a systemic alteration in gene expression, as previously demonstrated in non-mycorrhizal halves of the root system (Pozo et al., 2002; Gutjahr et al., 2008), or, on the other hand, to a specific molecular dialog between a non-host root and the AM fungus. We also detected an up-regulation of OsPT11 in FLRmyc vs. FLRc and a slight induction of OsPT13 in FLRmyc vs.
LLRmyc (Table 1); the expression of these two PT genes in a non-host root was surprising, since they are strongly induced in AM symbiosis and involved in arbuscule formation (Paszkowski et al., 2002; Yang et al., 2012). The systemic expression in FLR of genes previously described as AM marker genes suggests that in our system these genes might perform other functions. For instance, the AM marker LjLTP4 (the L. japonicus homolog of OsPT11) was detected in the apex of non-inoculated roots (Volpe et al., 2012), indicating that it may function in the root meristem in an AM symbiosis-independent manner.

FLR and AM Fungi: Do They Understand Each Other?

In contrast to the microarray-based work of Gutjahr et al. (2015), which did not reveal transcriptomic differences between FLR and LLR, our RNA-seq analysis showed strikingly different expression profiles in the two types of lateral roots, including changes in many relevant metabolic activities, from cell division to phytohormone balance (i.e., gibberellin). These findings provide some putative explanations to help us understand the mechanisms that make FLR recalcitrant to AM fungal colonization. Considering the morphological and physiological results, it is tempting to speculate that the two cortical cell layers present in Selenio FLRs are not sufficient to support the formation of arbuscules. As an alternative hypothesis, FLR's efficiency in nutrient uptake, leading to a consistently higher P flow, may repress signaling pathways that are influenced by Pi levels (Russo et al., 2013). Along these lines, the genes involved in strigolactone biosynthesis and in GA signaling are less expressed in FLR compared to LLR. The absence of fungal hyphopodia adhering to FLR provides strong morphological support for the transcriptomic data.
Since the expression analysis revealed that transcripts of genes involved in the AM presymbiotic phase are almost absent in FLRmyc, we suggest that fungal hyphopodia directly or indirectly require such transcripts for their morphogenesis. The absence of induction of the "symbiotic toolkit" genes in FLR is a unique biological trait, since non-host plant genomes generally lack these genes (Delaux et al., 2013). By contrast, in FLR a specific transcriptional program is switched off, leading to plant-fungus incompatibility. Coming back to our initial hypotheses, we can conclude that a strong regulation of gene expression underlies the heterogeneity of the lateral roots of rice, in line with the different transcriptional profiles of CR vs. lateral roots detected by Gutjahr et al. (2015). As a consequence, the two root types investigated here have different functions and anatomy, with FLR being the most competent for successful mineral nutrition. However, in contrast to one of our hypotheses, the different anatomy does not seem to have a major effect on AM colonization, while the deep differential regulation of genes involved in signaling could impair the initial steps of colonization. On the basis of this work, and thanks to its peculiar root system, we propose rice as a useful tool for paving the way to the discovery of new molecular determinants underlying successful and unsuccessful root colonization by AM fungi.

Author Contributions

VF and MV carried out the majority of the experiments and wrote the manuscript. CB contributed to the RNA-seq library preparation. AF carried out the electron microscopy experiments. P.Bag. performed the bioinformatics data analysis. PB coordinated the project, designed the experiments, and wrote the manuscript. All the authors read and approved the final manuscript.
Natural Characteristics of a Marine Two-Stage Tandem Hybrid Planetary System

This study focuses on a marine two-stage tandem hybrid planetary system. Natural frequencies and vibration modes are determined using a translational-torsional coupled dynamic model. Based on the motion characteristics of the transmission system, free vibration is categorized into three typical modes. The parameter sensitivity of the natural frequencies is computed, and the effects of structural parameters such as unequally spaced planets, mesh stiffness, planet mass and rotational inertia on the natural frequencies are analyzed. Utilizing the coupling factor, the mode transition criterion for the response of the natural frequencies to these parameters is formulated. The results demonstrate that the vibration modes of the two-stage tandem hybrid planetary system can be classified as the fixed-axis train vibration mode, the differential train vibration mode, and the coupled vibration mode. Unequally spaced planets primarily disrupt vibration modes without significantly affecting natural frequencies. In contrast, the effects of mesh stiffness, planet mass and rotational inertia on the modes are opposite to those of unequally spaced planets. The validity of the parameter sensitivity and mode transition criterion is substantiated through illustrative examples.

Introduction

The development of marine trade puts ever greater demands on the transmission systems of ships. The two-stage tandem hybrid planetary system studied in this paper is an important part of a ship's after-transmission. The whole system is a combination of a herringbone-tooth fixed-axis train and a differential train; it is a main source of vibration in ship machinery and equipment and has a great influence on the reliability of ship operation. Reducing the noise and vibration of the transmission system is of great significance for prolonging the service life of the ship and improving economic efficiency.
The study of natural characteristics is an important part of dynamic research, which has an important influence on the dynamic response of the system, the generation and transfer of dynamic loads, the form of system vibration, etc. In recent years, many scholars have carried out meaningful research on the natural characteristics of gear systems. Ambarisha et al. [1,2] mainly studied the natural characteristics of helical planetary systems, which set the stage for the study of the natural characteristics of gear systems. Zhao et al. [3] carried out a sensitivity analysis based on the torsional dynamics of a wind turbine and found that the natural frequencies are very sensitive to the torsional stiffness of the shaft and the gear mesh stiffness. Zhang et al. [4] investigated the effects of the number of planet gears and the coupling stiffness on the natural characteristics of closed planetary gear systems to analyze the dynamic response and avoid resonance. Wu et al. [5] investigated the mode characteristics of equally spaced planets by using the perturbation method and the candidate mode method. Sondkar et al. [6] proposed a linear time-invariant model of a double-helical planetary gear set and calculated its natural characteristics by solving the corresponding eigenvalue problem. Hao et al. [7] developed a dynamic model of a dual-power-split gear transmission, solved the time-varying mesh stiffness of the model and obtained the natural characteristics by using the loaded tooth contact analysis (LTCA) technique. Cui et al. [8] developed a bending-torsion coupled dynamic model of a composite planetary gear transmission system for a vehicle, from which the natural frequency and vibration mode characteristics of the system were extracted, and the effects of rotational inertia and mesh stiffness on the natural characteristics were investigated. Shuai et al.
[9] developed a dynamic model of a herringbone planetary gear train based on the concentrated parameter theory and Lagrange's method and investigated the effects of flexible support and the floating of the sun gear on the natural frequencies. Mbarek et al. [10] analyzed the natural characteristics of a planetary gear train under different load conditions and mesh stiffness fluctuations and performed hammering tests to verify the correctness of the lumped parameter model. Cooley et al. [11] investigated the vibration modes and natural frequencies of high-speed planetary gears with gyroscopic effects at very high speeds, focusing on the phenomena of divergence and flutter instability. Qiu W et al. [12] established a dynamic model for a typical interlinked planetary gear system, considering translational vibration, torsional vibration and gyroscopic effects, to investigate its free vibration characteristics. Liu H et al. [13] studied the modal characteristics of a two-stage planetary gear system (TSPG) with a short intermediate shaft and applied modal energy analysis to quantify the importance of the intermediate shaft throughout the entire TSPG system. Hu Z et al. [14] proposed a dynamic model for a bifurcated torque-split gear transmission system, obtaining the system's natural characteristics, including natural frequencies and critical speeds. Huang C et al. [15] established a finite element modal analysis model for planetary reducers with small tooth number differences in ABAQUS, obtaining the natural frequencies and corresponding vibration modes of the reducer. Additionally, modal parameters were validated through modal hammer experiments conducted in LMS Test.Lab. Hu C et al. [16] developed a translational-torsional dynamic model for a multistage planetary gear transmission system; subsequently, they investigated the influence of mesh stiffness and component mass on the natural frequencies. Tatar A et al.
[17] considered the gyroscopic effect and established a six-degree-of-freedom dynamic model of a planetary gear system with equally spaced planets, analyzing its modes.

Planet gears in planetary gearboxes are usually equally spaced. However, unequal planet spacing can sometimes lead to positioning errors between the planet gears [18]. There are some studies related to this phenomenon. Tatar et al. [18] proposed a parametric study to determine the effect of design parameters, such as unequally spaced planets, on the global modal behavior of a planetary gear rotor system. Guo et al. [19] investigated the sensitivity of the natural characteristics of a general planetary gear train to mass and stiffness parameters in relation to tuning and mistuning phenomena. Parker et al. [20] proved the highly structured modal characteristics of planetary gears with unequally spaced planets and an elastic ring gear and discussed how the modes of equally spaced planetary gears evolve under unequal spacing. Cooley et al. [21] defined an eigenvalue veering parameter and used it to analyze veering in high-speed planetary gears, which is prominent in unequally spaced planetary gears.
Although there have been some studies on the natural characteristics of planetary gear systems, the two-stage tandem hybrid planetary system consisting of a fixed-axis train and a differential train has been studied to a lesser extent. In particular, the sensitivity of the natural frequencies to the gear parameters and the mode transition phenomena due to parameter variations in this system have not been explored in depth. For this purpose, this paper summarizes three typical vibration modes based on the centralized parametric model of a two-stage tandem hybrid planetary system and investigates the sensitivity of the natural frequencies to the system parameters.

The centralized parameter method is used, and the coupling between the fixed-axis train and the differential train is considered to establish the dynamic model of the two-stage tandem hybrid planetary system. The schematic diagram of the differential train model is shown in Figure 2, with detailed parameters given in reference [9].

In this model, the sun gear, planet gear, ring gear and planet carrier are assumed to be rigid, and the meshing of the gears is represented by linear springs acting on the tooth surfaces. The time-varying component of the mesh stiffness due to the variation in the meshing tooth pairs is neglected, and the average mesh stiffness is used to calculate the contact between the tooth surfaces.
Using Lagrange's equations, the equations of motion of the two-stage tandem hybrid planetary system can be written as:

M q̈ + C q̇ + (K_b + K_m) q = F_t

where q is the generalized coordinate vector including transverse, axial and torsional motions; M is the mass matrix; C is the damping matrix; K_b is the bearing support stiffness matrix; K_m is the mesh stiffness matrix; and F_t is the vector of internal and external excitation forces due to the combined drive and load moments and mesh errors.

Mode Analysis

According to the dynamic model of the system, the undamped free vibration equation is obtained as follows:

M q̈ + (K_b + K_m) q = 0

This dynamic equation corresponds to the characteristic equation:

[(K_b + K_m) − w_i² M] φ_i = 0

where w_i is the i-th order natural frequency and φ_i is the i-th order mode.

To quantify the mode sensitivity, the frequency shift between the two extreme cases is calculated as follows [17]:

Δ_i = |w_f − w_i| / w_i × 100%

where w_i and w_f represent the initial and final natural frequencies.

All planets have the same model parameters in the fixed-axis train and the differential train. Table 1 lists the main system parameters. Using numerical methods, the natural frequencies and vibration modes can be calculated (the tables are in the Appendices). The vibration modes of the system can be categorized into three types: (1) fixed-axis train vibration mode; (2) differential train vibration mode; (3) coupled vibration mode.

Fixed-Axis Train Vibration Mode

The fixed-axis train vibration mode satisfies the following characteristics: (1) The vibration displacements of the sun gear, ring gear, planet gears and planet carrier in the differential train are zero. (2) The corresponding multiple roots of the natural frequency are 2, with a total of 7 pairs.
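The undamped eigenvalue problem and the frequency-shift measure described above can be sketched numerically. The matrices below are illustrative toy values for a 3-DOF stand-in (not the paper's full model); SciPy's generalized symmetric eigensolver returns the squared natural frequencies and mass-normalized modes:

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative 3-DOF system: solve (Kb + Km) phi = w^2 M phi
# for natural frequencies w and vibration modes phi.
M = np.diag([2.0, 1.0, 1.5])                  # mass matrix
Kb = np.diag([4.0e4, 3.0e4, 5.0e4])           # bearing support stiffness
Km = np.array([[ 2.0e4, -1.0e4,  0.0],
               [-1.0e4,  2.0e4, -1.0e4],
               [ 0.0,   -1.0e4,  2.0e4]])     # mesh stiffness (coupling terms)

w2, phi = eigh(Kb + Km, M)   # generalized symmetric eigenproblem
w = np.sqrt(w2)              # natural frequencies in rad/s, ascending order
f_hz = w / (2.0 * np.pi)

# Frequency shift between an initial and a final design, as a percentage:
def frequency_shift(w_initial, w_final):
    return abs(w_final - w_initial) / w_initial * 100.0
```

Since `eigh` returns eigenvalues in ascending order, the modes come out already sorted by natural frequency, which is convenient when tracking mode transitions across parameter sweeps.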
(3) The fixed-axis train has no axial or torsional motion, and the vibration modes of the central components corresponding to each pair of repeated frequencies are: The corresponding planet gear vibration modes have the following relationships: The vibration modes can be expressed in the following form: A schematic diagram of the fixed-axis train vibration mode is given in Figure 3.

Differential Train Vibration Mode

Similar to the fixed-axis train, the differential train vibration mode satisfies the following characteristics: (1) The vibration displacements of the sun gear, ring gear and planet gears in the fixed-axis train are zero. (2) The corresponding multiple roots of the natural frequency are 2, with a total of 8 pairs. (3) The differential train has no axial or torsional motion, and the vibration modes of the central components corresponding to each pair of repeated frequencies are: The corresponding planet gear vibration modes have the following relationships: where I is the 3rd-order unit matrix. The vibration modes can be expressed in the following form: A schematic diagram of the differential train vibration mode is given in Figure 4.

Coupled Vibration Mode

The coupled vibration mode satisfies the following characteristics: (1) The corresponding natural frequencies are single roots, with a total of 22. (2) There are no transverse motions of the central components in the fixed-axis train and the differential train, and the vibration modes of the respective planet gears in the two systems are the same. (3) The coupled vibration modes include a planet carrier axial vibration mode and a ring gear torsional vibration mode, corresponding to 1 and 2 natural frequencies, respectively; these also satisfy the characteristics of the coupled vibration modes and can be regarded as special coupled vibration modes. The vibration modes of the central components are as follows: The planet gear vibration modes have the following relationship: The vibration modes can be expressed in the following form: A schematic diagram of the coupled vibration mode is given in Figure 5.
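As a quick consistency check (not from the paper), the eigenvalue multiplicities of the three mode classes can be tallied; since multiplicities must sum to the model's number of degrees of freedom, the classification implies a 52-DOF model:

```python
# Tally the eigenvalue multiplicities from the three mode classes:
# 7 repeated pairs (fixed-axis train), 8 repeated pairs (differential train),
# and 22 single roots (coupled modes).
fixed_axis_pairs = 7
differential_pairs = 8
coupled_singles = 22

total_modes = 2 * fixed_axis_pairs + 2 * differential_pairs + coupled_singles
print(total_modes)  # 52
```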
Verification of the Mathematical Model Using the Finite Element Method

Reference [22] utilized the finite element method to calculate the natural frequencies of a 2K-H planetary gear system and compared the results with a mathematical model. To validate the mathematical model of this paper, a finite element model was employed. Based on the parameters in Table 1, the transmission system was accurately modeled using the three-dimensional modeling software SolidWorks (2018). Subsequently, ANSYS Workbench (2020) was utilized for mesh generation, dividing the transmission system into a total of 131,027 nodes and 56,674 elements. Appropriate boundary and loading conditions were set, as shown in Figure 6. The first six vibration modes were extracted, and the obtained vibration modes and frequencies are presented in Table 2.

Table 2 reveals that, among the first six natural frequencies, the maximum difference between the finite element model and the mathematical model is 4.8%. The obtained vibration types are also consistent. Therefore, the finite element analysis results are in good agreement with the mathematical model, validating its effectiveness.

Sensitivity Analysis of Natural Frequency

The study of the sensitivity of the natural frequencies to the system parameters can provide an important basis for reducing the system response and optimizing the structural design. The sensitivity analysis of natural frequencies focuses on the effect of gear parameters on natural frequencies and vibration modes. The parameters include unequally spaced planets, mesh stiffness, planet mass and rotational inertia.
The characteristic sensitivity of the free vibration of the system shown in Equation (7) can be obtained by the following expressions: (1) When the eigenvalue is a single root, the eigenvalue sensitivity is given by (2) When the eigenvalue is a repeated root, the eigenvalue sensitivity can be obtained by solving for the eigenvalue of the following equation: where Γ represents a set of m-dimensional vectors, m is the number of repeated roots, and the following relationship holds:

Unequally Spaced Planets

For convenience, the deviation angle of the first planet gear from its original position in the fixed-axis train and in the differential train was set to range from 0° to 15°, respectively. The frequency shifts of the first 25 modes were calculated, as shown in Table 3. The sensitivity of the first 25 natural frequencies to the unequal spacing is shown in Figure 7, and the frequency shifts for deviation angles of 0° and 15° are shown in Figure 8. From Table 3 and Figures 7 and 8, it can be seen that when the planet gears of the fixed-axis train and the differential train are unequally spaced, the first 25 natural frequencies are not very sensitive to the deviation angles: the maximum frequency shift does not exceed 7% in either case. However, Table 3 shows that the change in vibration modes is large. When the planet gears of the fixed-axis train are unequally spaced, except for the 8th-order planet carrier axial vibration mode, which remains unchanged, the vibration modes are transformed into global vibration modes. (In this vibration mode, vibration may exist in all directions with no obvious pattern.) When the planet gears of the differential train are unequally spaced, except for the 8th-order planet carrier axial vibration mode and the 6th-, 7th-, 17th- and 18th-order fixed-axis train vibration modes, which remain unchanged, the rest of the vibration modes are transformed into coupled-global vibration modes. (In this
vibration mode, the fixed-axis train maintains the characteristics of the original coupling vibration mode, and the vibration of the gears of the differential train may be found in all directions without any obvious pattern.)

Since the unequally spaced planets break the cyclic symmetry of the two-stage tandem hybrid planetary system, the vibration modes are no longer easy to summarize, and new global vibration modes and coupled-global vibration modes appear. Unequally spaced planets in the fixed-axis train disrupt the vibration modes more severely than those in the differential train: the latter still retains part of the original vibration pattern, whereas in the former, apart from the planet carrier axial vibration mode, the vibration modes of all remaining orders change to irregular global vibration modes.

In conclusion, the change in the natural frequencies caused by unequally spaced planets is very small, but the disruption of the vibration modes is more serious, and new global vibration modes and coupled-global vibration modes appear.

Mesh Stiffness

This section focuses on the influence of the mesh stiffness of the sun gear-planet gear pairs on the modes of the two-stage tandem hybrid planetary system. It covers mesh stiffness sensitivity analysis and mode transition.

Mesh Stiffness Sensitivity Analysis

Taking the mesh stiffness of the sun gear-planet gear pair in the fixed-axis train, k am, as an example, the sensitivity of the natural frequencies to the mesh stiffness was studied. It is categorized into the following cases: (1) Fixed-axis train vibration mode. In this mode, the eigenvalues are multiple roots. Let the two eigenvalues be λ 1 and λ 2, with λ = w 2 being the eigenvalues of the matrix D, given by the following formula: where δ amn is the relative displacement between the n-th planet gear m and the sun gear a.
In the differential train vibration mode, δ amn = 0, so the sensitivity of the natural frequencies is zero. In the coupled vibration mode, the eigenvalues are single roots, and the deformations of each planet gear are the same: Similarly, the sensitivity of the natural frequencies to the mesh stiffness of the sun gear-planet gear pair in the differential train, k sp, can be obtained; the derivation is similar to the above and is not repeated here.

Briefly, with a change in k am, the natural frequencies of the fixed-axis train vibration mode and the coupled vibration mode change, while the frequencies of the differential train vibration mode remain unchanged. With a change in k sp, the frequencies of the differential train vibration mode and the coupled vibration mode change, while the frequencies of the fixed-axis train vibration mode remain unchanged.
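The single-root eigenvalue sensitivity used in this section (of the form dλ/dp = φᵀ(∂K/∂p − λ ∂M/∂p)φ for a mass-normalized mode φ) can be checked numerically on a toy system. The 2-DOF chain below is an illustrative assumption, not the gear model.

```python
import numpy as np

# Hedged sketch: verify the single-root eigenvalue sensitivity
# dλ/dk = φᵀ (∂K/∂k) φ (mass matrix independent of k, φᵀMφ = 1)
# against a central finite difference on an illustrative 2-DOF chain.

def modes(K, M):
    """Mass-normalized eigenpairs of the generalized problem K φ = λ M φ."""
    lam, V = np.linalg.eig(np.linalg.solve(M, K))
    order = np.argsort(lam.real)
    lam, V = lam.real[order], V.real[:, order]
    for j in range(V.shape[1]):                  # enforce φᵀ M φ = 1
        V[:, j] /= np.sqrt(V[:, j] @ M @ V[:, j])
    return lam, V

def chain_K(k):
    """Stiffness matrix of a 2-mass chain; k is the coupling spring."""
    return np.array([[2.0 + k, -k], [-k, 1.0 + k]])

M = np.diag([1.0, 2.0])                          # illustrative masses
k0 = 3.0
lam, V = modes(chain_K(k0), M)
dK = np.array([[1.0, -1.0], [-1.0, 1.0]])        # ∂K/∂k
analytic = V[:, 0] @ dK @ V[:, 0]                # dλ1/dk via the formula

h = 1e-6                                         # central finite difference
lam_p, _ = modes(chain_K(k0 + h), M)
lam_m, _ = modes(chain_K(k0 - h), M)
numeric = (lam_p[0] - lam_m[0]) / (2 * h)
print("analytic:", analytic, " numeric:", numeric)
```

The two values agree to finite-difference accuracy, which is the same kind of check the paper's repeated-root expressions would require on the subspace of degenerate modes.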
Mode Transition Criterion

When the system parameters are changed, the natural frequency curves gradually approach and then rapidly separate with large curvature at a very close distance, a phenomenon known as mode transition [14]. Take k am as an example to study the mode transition of the natural frequencies with respect to mesh stiffness. It is categorized into the following cases: (1) The corresponding modes of λ r and λ s are both fixed-axis train vibration modes. In this case, both λ r and λ s are double roots, λ r = λ p and λ s = λ q, and the coupling factors are calculated as follows: When the coupling factor is 0, the natural frequencies intersect; otherwise, they undergo a mode transition. Bringing in the characteristics of the fixed-axis train vibration mode shows that the coupling factor cannot be 0. Therefore, the natural frequencies in this case transition. (2) The corresponding modes of λ r and λ s are both differential train vibration modes. In the differential train vibration mode, the fixed-axis train does not vibrate, the natural frequencies have zero sensitivity to k am, and the trajectories of the natural frequencies are straight lines. Therefore, the natural frequencies neither transition nor intersect when k am changes.
(3) The corresponding modes of λ r and λ s are coupled vibration modes. In this case, both λ r and λ s are single roots, and the coupling factors are calculated as follows: Bringing in the characteristics of the coupled vibration mode shows that the coupling factors cannot be 0. Therefore, the natural frequencies in this case transition. (4) λ r belongs to the differential train vibration mode. In this case, δ amn = 0. Therefore, the natural frequencies of the differential train vibration mode intersect with those of the other vibration modes. (5) λ r belongs to the coupled vibration mode, and λ s belongs to the fixed-axis train vibration mode. In this case, λ s is a double root, λ s = λ q, and the coupling factors are calculated as follows: Bringing in the characteristics of the two vibration modes shows that χ r = χ s = χ q = 0 holds, and therefore the natural frequencies intersect.

Similarly, the mode transition criterion of the natural frequencies with respect to k sp can be obtained. The resulting mode transition criterion is summarized in Table 4.
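The dichotomy summarized in Table 4 (loci intersect when the coupling factor is zero and transition, i.e. veer, otherwise) can be illustrated on a two-dimensional toy matrix. The matrix and sweep values are assumptions for illustration, not the gear system.

```python
import numpy as np

# Toy illustration of the veering/crossing dichotomy behind Table 4:
# two eigenvalue loci cross when their coupling term is zero and veer
# (approach, then separate) when it is nonzero. Values are illustrative.

def eigen_loci(coupling, params):
    """Sorted eigenvalues of [[1+p, c], [c, 2-p]] along a parameter sweep."""
    loci = []
    for p in params:
        A = np.array([[1.0 + p, coupling], [coupling, 2.0 - p]])
        loci.append(np.sort(np.linalg.eigvalsh(A)))
    return np.array(loci)

params = np.linspace(0.0, 1.0, 201)
crossing = eigen_loci(0.0, params)   # uncoupled: loci intersect at p = 0.5
veering = eigen_loci(0.1, params)    # coupled: minimum gap is 2|c| = 0.2

gap_cross = (crossing[:, 1] - crossing[:, 0]).min()
gap_veer = (veering[:, 1] - veering[:, 0]).min()
print(f"minimum gap, c = 0:   {gap_cross:.3f}")
print(f"minimum gap, c = 0.1: {gap_veer:.3f}")
```

With zero coupling the gap closes completely (a crossing); with nonzero coupling the gap never falls below 2|c|, which is exactly the "approach then separate with large curvature" behavior described above.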
Sensitivity and Modal Transition Verification

The k am and k sp were each set to range from 10 7 N/m to 10 8 N/m. The frequency shifts of the first 25 modes were calculated, as shown in Table 5. The sensitivity of the first 25 natural frequencies to the mesh stiffness is shown in Figure 9, and the frequency shifts for mesh stiffnesses of 10 7 N/m and 10 8 N/m are shown in Figure 10. From Table 5 and Figures 9 and 10, it can be seen that the influence of the mesh stiffness of the sun gear-planet gear pairs on the natural frequencies is very large; the frequency shift even reaches 172.09%. When k am changes, the vibration modes corresponding to the changed natural frequencies are the fixed-axis train vibration mode and the coupled vibration mode; when k sp changes, they are the differential train vibration mode and the coupled vibration mode. This verifies the sensitivity of the natural frequencies to the mesh stiffness derived in Section 4.2.1. Since there are more differential train vibration modes than fixed-axis train vibration modes among the first 25 modes, the peak frequency shift generated by k am is greater than that by k sp, but it affects fewer orders than k sp.

The mode transition phenomenon can be observed in Figure 9, and the mode transitions in Figure 9a are verified against the mode transition criterion in Table 4. There is no case1 in Figure 9a; frequencies (13, 14) are a double root belonging to the differential train vibration mode and are straight lines, verifying case2; frequencies (11, 12) are two single roots belonging to the coupled vibration mode, and the transitions occur at 5 × 10 7 N/m, verifying case3; frequencies (15, 16) are a double root belonging to the fixed-axis train vibration mode, frequency (19) is a single root belonging to the coupled vibration mode, and intersections occur at 9 × 10 7 N/m, verifying case4; there is no case5 in Figure 9a; frequencies (4, 5) are a double root belonging to the fixed-axis train vibration mode, frequencies (6, 7) are a double root belonging to the differential train vibration mode, and intersections occur at 2.5 × 10 7 N/m, verifying case6. The mode transitions and intersections occurring at other positions in Figure 9a also conform to the mode transition criterion in Table 4, and similarly, those in Figure 9b can be verified to conform to the criterion, which is not repeated here.

In conclusion, the mesh stiffness of the sun gear-planet gear pairs mainly affects the natural frequencies of the system and does not change the original vibration modes. The mode transitions and intersections that occur are consistent with the formulated mode transition criterion.
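The selective pattern just described (a stiffness that enters only one train leaves the other train's modal frequencies untouched) can be reproduced on a minimal block-structured system. The 3-DOF example below is a hypothetical sketch, not the paper's model.

```python
import numpy as np

# Hedged sketch: in a block-decoupled system, sweeping a stiffness that
# appears in only one block shifts that block's natural frequencies and
# leaves the other block's frequency unchanged. Values are illustrative.

def frequencies(k):
    """Natural frequencies (rad/s) for unit masses; k enters block 1 only."""
    K = np.array([
        [k + 1.0, -k, 0.0],
        [-k, k + 2.0, 0.0],
        [0.0, 0.0, 5.0],   # decoupled block 2 ("other train"), independent of k
    ])
    return np.sort(np.sqrt(np.linalg.eigvalsh(K)))

f_lo = frequencies(1.0)
f_hi = frequencies(10.0)
unchanged = [f for f in f_lo if np.min(np.abs(f_hi - f)) < 1e-9]
print("frequencies:", np.round(f_lo, 3), "->", np.round(f_hi, 3))
print("unchanged:", np.round(unchanged, 3))   # only block 2's sqrt(5)
```

Only the decoupled block's frequency survives the sweep unchanged, mirroring how k am leaves the differential train modes fixed and k sp leaves the fixed-axis train modes fixed.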
Planet Mass

The study of the sensitivity of the natural frequencies to the planet mass/rotational inertia follows the methodology of Sections 4.2.1 and 4.2.2 and is not repeated here due to space limitations. The mode transition criterion for the natural frequencies with respect to the planet gear mass/rotational inertia is consistent with Table 4. The sensitivity of the natural frequencies to the planet mass/rotational inertia varies as follows: as the planet gear mass M m /rotational inertia J m in the fixed-axis train changes, the natural frequencies of the fixed-axis train vibration mode and the coupled vibration mode change, while the natural frequencies of the differential train vibration mode remain unchanged. As the planet gear mass M p /rotational inertia J p in the differential train changes, the natural frequencies of the differential train vibration mode and the coupled vibration mode change, while the natural frequencies of the fixed-axis train vibration mode remain unchanged.
Setting M m and M p to range from 0.1 kg to 1 kg, respectively, the frequency shifts of the first 25 modes were calculated, as shown in Table 6. The sensitivity of the first 25 natural frequencies to the planet mass is shown in Figure 11, and the frequency shifts for planet masses of 0.1 kg and 1 kg are shown in Figure 12. From Table 6 and Figures 11 and 12, it can be seen that the effects of the planet mass on the natural frequencies are also very large; the maximum frequency shift reaches 61.47%. When M m changes, the vibration modes corresponding to the changed natural frequencies are the fixed-axis train vibration mode and the coupled vibration mode; when M p changes, they are the differential train vibration mode and the coupled vibration mode. This verifies the sensitivity of the natural frequencies to the planet gear mass, as stated above. Similarly, the peak frequency shift generated by M m is larger than that of M p, but it affects fewer orders than M p.

The mode transition phenomenon can be observed in Figure 11, and the mode transitions in Figure 11a are verified against the mode transition criterion in Table 4. There is no case1 in Figure 11a; frequencies (12, 13) are a double root belonging to the differential train vibration mode and are straight lines, verifying case2; frequencies (7, 10) are two single roots belonging to the coupled vibration mode, and the transitions occur at 0.5 kg, verifying case3; frequencies (8, 9) are a double root belonging to the fixed-axis train vibration mode, frequency (6) is a single root belonging to the coupled vibration mode, and intersections occur at 0.15 kg, verifying case4; frequency (16) is a single root belonging to the coupled vibration mode and intersects with frequencies (12, 13), which belong to the differential train vibration mode, between 0.15 and 0.2 kg, verifying case5; frequencies (4, 5) are a double root belonging to the differential train vibration mode and intersect with frequencies (8, 9), which belong to the fixed-axis train vibration mode, between 0.75 and 0.8 kg, verifying case6. The mode transitions and intersections occurring at other positions in Figure 11a also conform to the mode transition criterion in Table 4, and similarly, those in Figure 11b can be verified to conform to the criterion, which is not repeated here.
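A rough intuition for the large mass-driven shifts: for a single-DOF oscillator ω = √(k/m), so ω scales as m^(−1/2). Sweeping the same 0.1 kg to 1 kg range in this toy (an assumption, not the gear model) gives a shift of the same order as the tabulated peak.

```python
import math

# Hedged single-DOF sketch: the natural frequency scales as 1/sqrt(mass),
# so a 0.1 kg -> 1 kg sweep alone produces a large percentage shift.

def natural_frequency(k, m):
    """Undamped natural frequency (rad/s) of a spring-mass oscillator."""
    return math.sqrt(k / m)

k = 1.0e6                                   # N/m, illustrative stiffness
w_light = natural_frequency(k, 0.1)         # m = 0.1 kg
w_heavy = natural_frequency(k, 1.0)         # m = 1.0 kg
shift = (w_light - w_heavy) / w_light * 100.0
print(f"frequency shift over the mass sweep: {shift:.2f}%")
```

The toy predicts a shift of about 68% regardless of k; the gear system's 61.47% peak is of the same order, with multi-DOF coupling accounting for the difference.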
Planet Rotational Inertia

Setting J m to range from 10 −4 kg·m 2 to 10 −3 kg·m 2 and J p from 10 −5 kg·m 2 to 10 −4 kg·m 2, the frequency shifts of the first 25 modes were calculated, as shown in Table 7. The sensitivity of the first 25 natural frequencies to the planet rotational inertia is shown in Figure 13, and the frequency shifts for the extreme values of the rotational inertia are shown in Figure 14. From Table 7 and Figures 13 and 14, it can be observed that, because J m is an order of magnitude larger than J p, the ranges selected for them also differ significantly. Within the chosen ranges of rotational inertia, the maximum frequency shift caused by J m reaches 42.12%, while the maximum frequency shift caused by J p is only 5.14%. As the rotational inertia changes, the sensitivity of the natural frequencies to the planet rotational inertia follows the rules discussed earlier. Similarly, the frequency shift caused by J m affects fewer orders than that caused by J p.

The mode transition phenomenon can be observed in Figure 13, and the mode transitions in Figure 13a are verified against the mode transition criterion in Table 4. There is no case1 in Figure 13a; frequencies (4, 5) are a double root belonging to the differential train vibration mode and are straight lines, verifying case2; frequencies (11, 12) are two single roots belonging to the coupled vibration mode, and the transition occurs at 3.5 × 10 −4 kg·m 2, verifying case3; frequencies (20, 21) are a double root belonging to the fixed-axis train vibration mode, frequency (12) is a single root belonging to the coupled vibration mode, and intersections occur between 7.5 × 10 −4 and 8 × 10 −4 kg·m 2, verifying case4; frequency (24) is a single root belonging to the coupled vibration mode, frequencies (22, 23) are a double root belonging to the differential train vibration mode, and intersections occur between 4.5 × 10 −4 and 5 × 10 −4 kg·m 2, verifying case5; frequencies (13, 14) are a double root belonging to the differential train vibration mode and intersect with frequencies (20, 21), which belong to the fixed-axis train vibration mode, between 5.5 × 10 −4 and 6 × 10 −4 kg·m 2, verifying case6. The mode transitions and intersections occurring at the other positions in Figure 13a also conform to the mode transition criterion in Table 4. Within the selected range of rotational inertia, the variation in the natural frequencies in Figure 13b is small and does not reach the point where mode transitions and intersections occur.
Conclusions

Based on the centralized parameter model of the two-stage tandem hybrid planetary system, the natural characteristics were analyzed and three typical vibration modes were summarized. The sensitivity of the natural frequencies to unequally spaced planets, mesh stiffness, planet mass and rotational inertia was investigated, together with the mode transition and intersection phenomena of the natural frequencies under the influence of these parameters.

(1) The vibration modes of the two-stage tandem hybrid planetary system comprise the fixed-axis train vibration mode, the differential train vibration mode and the coupled vibration mode.

(2) The sensitivity of the natural frequencies to the parameters was investigated. With a change in the parameters of the fixed-axis train, the natural frequencies of the fixed-axis train vibration mode and the coupled vibration mode change, while the natural frequencies of the differential train vibration mode remain unchanged. With a change in the parameters of the differential train, the natural frequencies of the differential train vibration mode and the coupled vibration mode change, while the natural frequencies of the fixed-axis train vibration mode remain unchanged. The correctness of the parameter sensitivity was verified with numerical examples, and the frequency shifts under the influence of the different parameters were calculated.

(3) The mode transition phenomenon was investigated, a criterion for the occurrence of mode transitions in the two-stage tandem hybrid planetary system was determined, and its accuracy was verified by calculation.
Through the model established in this study, the natural characteristics of the marine two-stage tandem hybrid planetary system can be obtained quickly, helping design engineers better understand the dynamic behavior of the gear system. This understanding facilitates design optimization to mitigate resonance and enhance system performance, reliability and longevity. Despite the close agreement between the mathematical model presented in this paper and the finite element results, differences from reality remain. Future research could therefore refine the model further, for example by exploring in detail the coupling between the sun gear and the planet gears at their respective ends and by treating the entire assembly as flexible components. Such refinements are expected to yield results closer to reality and represent our forthcoming focus.

Figure 1. Schematic diagram of marine two-stage tandem hybrid planetary system.
Figure 2. Dynamic model of the differential train.
Figure 6. The finite element model of the marine two-stage tandem hybrid planetary system.
Figure 7. Sensitivity of the first 25 natural frequencies to the unequally spaced planet: (a) unequally spaced planet of a fixed-axis train; (b) unequally spaced planet of a differential train.
Figure 8. Frequency shifts for deviation angle = 0° and deviation angle = 15°: (a) unequally spaced planet of a fixed-axis train; (b) unequally spaced planet of a differential train.
Figure 9. Sensitivity of the first 25 natural frequencies to the mesh stiffness: (a) k am; (b) k sp.
Figure 10. Frequency shifts for mesh stiffness of 10 7 N/m and 10 8 N/m: (a) k am; (b) k sp.
Figure 11. Sensitivity of the first 25 natural frequencies to the planet gear mass: (a) M m; (b) M p.
Figure 12. Frequency shifts for planet mass of 0.1 kg and 1 kg: (a) M m; (b) M p.
Figure 13. Sensitivity of the first 25 natural frequencies to the planet gear rotational inertia: (a) J m; (b) J p.
Figure 14. Frequency shifts for planet gear rotational inertia of extreme value: (a) J m; (b) J p.
Table 1. Parameters of the marine two-stage tandem hybrid planetary system.
Table 2. Comparison of vibration modes obtained from two models.
Table 3. Vibration modes for planet gears are unequally spaced.
Table 4. Mode transition criterion of natural frequency to mesh stiffness.
Table 5. Vibration modes under variation in mesh stiffness.
Table 6. Vibration modes under variation in planet mass.
Table 7. Vibration modes under variation in planet rotational inertia.
v3-fos-license
2018-12-15T08:18:17.111Z
2013-10-31T00:00:00.000
56527565
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://sciforum.net/paper/download/2262/manuscript", "pdf_hash": "5e1bb8c79294ff3a087a66384ae5eb0bef406fc1", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1968", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "5e1bb8c79294ff3a087a66384ae5eb0bef406fc1", "year": 2013 }
pes2o/s2orc
Crystal structure of a G-1 dendrimer of aminoisophtalic acid The G-1 dendrimer 4,4',4'',4'''-methanetetrayltetrakis(N-(3,5-dicarboxyphenyl)benzamide) has been obtained from 4,4',4'',4'''-methanetetrayltetrabenzoic acid and isophtalic acid. The compound was recrystallized from methanol and its structure resolved. The crystal belongs to the tetragonal crystal system space group I-4 2d, the cell lengths being a=b=18.9585(16) Å, and c=23.703(2) Å. The crystal structure evidences the formation of cavities. Only one type of hydrogen bond is observed implying the nitrogen atom (acting as donor) and the oxygen atom of one carboxy group of the aminoisophtalic residue (as acceptor), while the other carboxy group does not participate in the network. The donor-acceptor distance is 3.024(2) Å. Introduction Crystals are described by translation of the unit cell into all three directions of space, but by considering them as supramolecular entities, they may be analyzed them in terms of networks.This way of analyzing molecular crystals is called molecular tectonics. 1Consequently, molecular tectonics is a supramolecular construction using tectonic subunits 2 and, according to Wang et al, 2 a tecton is a molecule whose interactions are dominated by specific attractive forces that induce the assembly of aggregates with controlled geometries.Therefore, tectons are active building units bearing recognition information and thus capable of recognizing each other, 1 and consist of multiple peripheral sticky sites linked to a core that holds them in a suitable orientation. 3traphenylmethane may be considered as a reference molecule in designing tetrapodal tectons.Its crystal structure was first reported by Sumsion and McLachlan in 1950, 4 and latter refined by Robbins et al. 
The X-ray diffraction studies showed that the crystal is tetragonal, space group P-421c, with unequal central valence angles, not far from the tetrahedral ones. Several tectons were designed by linking functional groups to the aromatic rings; examples whose crystal structures have been resolved include hydroxy [6], halogen [7], carboxy [8], ethynylpyridinone [9], and acetamido and aminobenzamido [10] groups. The resulting crystal structures, where hydrogen bonding, coordination to metals or weak interactions play a decisive role, usually have large chambers [8]. The central carbon atom has been substituted by Si, Sn and Pb [2,6,11,12], and other central nuclei or cores may be used as well. The adamantyl tetrapodal core [13,14], 2,2',6,6'-tetracarboxybiphenyl [15] and the pentaerythrityl tetraphenyl ether [3,16] are nice examples. Alternative residues to the phenyl moieties which constitute the four branches have also been used [17], as well as other cores with different podal degrees [18]. Dendrimers differ from classical polymers by their symmetry, starburst branching and terminal functionality density [19]. The divergent method [20,21] consists of the sequential addition of repeating units to a starting core (normally a small molecule or ion), thus forming shells or generations. Dendrimers are of great interest as carriers of functional groups [22], the number of which depends on the number of branches at the core (core multiplicity), the number of branches on each monomer repeating unit, and the number of generations. Different cores, particularly with different numbers of functionalities, have been used [23]. The crystal structure of tetrakis(4-carboxyphenyl)methane has been reported by Malek et al. [8], as well as the structure of the silane analogue [11]. To this core, four aminoisophthalic groups were attached, forming a G-1 dendrimer which exhibits eight carboxy groups on the surface. In this communication, the crystal structure of 4,4',4'',4'''-methanetetrayltetrakis(N-(3,5-dicarboxyphenyl)benzamide), in which four aminoisophthalic units are linked to the core tetrakis(4-carboxyphenyl)methane, is presented.

Experimental

Thionyl chloride was used as received from Aldrich. 1,4-Dioxane was from Panreac and dried over sodium/benzophenone. The final compound was obtained according to the scheme of Figure 1. Synthesis of 4,4',4'',4'''-methanetetrayltetrabenzoyl chloride: a mixture of tetrakis(4-carboxyphenyl)methane (0.34 g, 0.68 mmol) and 15 mL of thionyl chloride was heated to and held at a gentle reflux until all the solids were dissolved. Then the excess thionyl chloride was removed under reduced pressure. The product obtained was used without further purification. 1H and 13C NMR spectra were recorded on a Bruker AC-300 spectrometer.

Results and discussion

The tetrahedral geometry of the molecule is shown in Figure 2, together with all hydrogen bonds (red dotted lines) in which it participates. The central atom of the molecule is a tetrahedral carbon atom and consequently a diamondoid crystal structure results [26]. The C-C-C angles are either 113.4° or 101.9°, far from the tetrahedral value (Figure 2, right). The aromatic rings directly bonded to the central atom are quasi-planar, the maximum distance from any carbon atom to the plane being 0.005 Å. On the other hand, these distances are larger in the isophthalic rings, reaching a maximum value of 0.019 Å. The angles between the planes of the interior aromatic rings are either 88.9° or 60.8°. The angle between the planes of the two aromatic rings belonging to the same branch is 70.9°. Figure 3 shows the crystal packing along the b and c crystallographic axes. The packing along the a axis is equal to the packing along b but rotated 90°. The distance between the central carbon atoms of molecules connected through hydrogen bonds is 17.893 Å. Figure 3 evidences the formation of channels along the a and b axes. Figure 4 shows an enlarged view of the cross section along b (or, equivalently, along a). The cross section is an almost perfect ellipse with semi-axes of 6.1 Å and 3.7 Å, giving an area of 71 Å2. A wide range of cross-section values have been observed for closely related tectons; Table 2 summarizes some results. Despite the number of possible donor/acceptor hydrogen bond sites, each branch of the molecule is only bonded to two other molecules through N-H···O hydrogen bonds. That is, only one type of hydrogen bond is observed, involving the nitrogen atom (acting as donor) and the oxygen atom of one carboxy group of the aminoisophthalic residue (as acceptor), while the other carboxy group does not participate in the network. The geometric parameters of the hydrogen bond are shown in Table 3 (structure visualized with Mercury, http://www.ccdc.cam.ac.uk/prods/mercury). A summary of the crystal data and experimental details is listed in Table 1.

Figure 4. Cross section of the channels formed along the a and b axes.

Table 2. Published cross sections for several tecton crystals.
Table 3. Geometric parameters of the N-H···O hydrogen bond.
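The geometric quantities quoted above can be cross-checked numerically. This is a quick sketch (not part of the original paper's analysis) that recomputes the tetragonal cell volume V = a²c from the reported cell lengths and the elliptical channel cross-section area from the two semi-axes:

```python
import math

# Cell parameters from the text: a = b = 18.9585 A, c = 23.703 A (tetragonal)
a, c = 18.9585, 23.703
cell_volume = a * a * c  # V = a^2 * c for a tetragonal cell

# Channel cross section: ellipse with semi-axes 6.1 A and 3.7 A
r1, r2 = 6.1, 3.7
cross_section = math.pi * r1 * r2  # area of an ellipse = pi * r1 * r2

print(f"cell volume     = {cell_volume:.0f} A^3")
print(f"channel section = {cross_section:.0f} A^2")  # ~71 A^2, as quoted
```

The ellipse area comes out at about 70.9 Å², consistent with the rounded 71 Å² reported in the text.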
Reported co-infection deaths are more common in early adulthood and among similar infections

Background: Many people have multiple infections at the same time, but the combined contribution of those infections to disease-related mortality is unknown. Registered causes of death offer a unique opportunity to study associations between multiple infections. Methods: We analysed over 900,000 death certificates that reported infectious causes of death. We tested whether reports of multiple infections (i.e., co-infections) differed across individuals' age or sex. We also tested whether each pair of infections was reported together more or less often than expected by chance, and whether this co-reporting was associated with the number of biological characteristics they had in common. Results: In England and Wales, and the USA, 10 and 6 % respectively of infection-related deaths involved co-infection. Co-infection was reported most often in young adults; 30 % of infection-related deaths among those aged 25-44 from the USA, and 20 % of infection-related deaths among those aged 30-39 from England and Wales, reported multiple infections. The proportion of infection-related deaths involving co-infection declined with age, more slowly in males than females, to less than 10 % among those aged >65. Most associated pairs of infections co-occurred more often than expected from their frequency of being reported alone (488/683 [71 %] in the USA, 129/233 [55 %] in England and Wales), and tended to share biological characteristics (taxonomy, transmission mode, tropism or timescale). Conclusions: Age, sex, and biologically similar infections are associated with death from co-infection, and may help indicate patients at risk of severe co-infection.
Background

Infectious diseases cause 25 % of human mortality worldwide; in 2008, respiratory infections caused 4.26 million deaths, diarrhoea caused 2.16 million deaths, and HIV/AIDS caused over 2 million deaths [1]. However, the role of co-infection (more than one simultaneous infection in an individual) in this mortality is unknown. Some co-infections are known to cause death; for example, HIV-tuberculosis co-infection caused 350,000 deaths worldwide in 2008 [2], and bacterial pneumonia increases the risk of death from influenza [3]. While some co-infections are not detrimental, most papers report a negative effect of co-infection on human health [4]. Despite previous reports of negative health effects, we know little about the characteristics of people who died from co-infection.

A key question is whether deaths due to co-infection are predictable, and what factors influence this. Demographic characteristics (i.e., age, sex) of individuals could be important determinants of whether co-infection is reported on a death certificate. Older people and males are typically more susceptible to infectious disease than younger people and females [5][6][7]. There is evidence of age and sex biases for certain co-infections; most measles deaths are from viral or bacterial co-infection in young females [8], whereas sepsis deaths are higher in males than females [9]. Whether deaths from many different co-infections generally differ by age or sex is unclear.
Characteristics of the infecting organisms could also underlie associations among reported infections. We hypothesise that biologically similar pairs of infections co-occur more often than expected. For example, taxonomically similar infections may coinfect due to similar life cycles, targeted organs, or antigens [10, 11]. Shared transmission routes may promote co-infection, e.g., bloodborne viral infections among injecting drug users [12]. Similarly, chronic infections in the same body part may exacerbate morbidity (e.g., hepatitis viruses A and C [13]). An alternate hypothesis is that antagonistic interactions are stronger between more similar infections, which would thus be found together less often.

The characteristics of people who died with co-infections can be studied using causes of death reported on death certificates. These data offer a general description of co-infection-associated death in humans, providing broad context for other co-infection research, and enable tests of specific, public-health-relevant hypotheses about co-infection. Here, we use death certificates to test for relationships between co-infection-related deaths and individual age and sex, and explore how similarities in biological characteristics (taxonomy, transmission, tropism, and timescale) relate to associations between reported infections.
We gathered data on reported infectious causes of death across a recent four-year time period in England and Wales, and the USA. These data offer a novel opportunity to study how infections associate with one another at death, and the contribution of characteristics of both the individual and the infections. We address three hypotheses: (i) the proportion of infection deaths attributed to multiple infections differs by age and sex; (ii) the frequency of pairs of infections co-occurring on death certificates differs from that expected from the occurrence of each infection alone; and (iii) the frequency of co-occurrence of each pair of infections on death certificates increases with similarity in terms of taxonomic group, transmission route, tropism, and timescale.

Methods

Death certificates in England and Wales report one underlying cause of death and up to 15 contributory factors, following the International Classification of Diseases (ICD [14]). We used 139,459 death certificates from 2005 to 2008 in England and Wales that reported at least one infection and followed ICD-10. In England and Wales, 2005-08 was the longest recent time period within which ICD codes were interpreted consistently. In the USA, one main and up to 20 extra causes of death are listed on death certificates following ICD-10. There were 816,390 death certificates from 2005 to 2008 in the USA that reported at least one infection. By infection we mean a type of infectious disease classified in ICD, not necessarily a particular pathogen.

In ICD-10, one infection code explicitly indicates co-infection ('B20', which denotes other infections arising from HIV infection). Other co-infections are indicated by multiple infections reported on the death certificate. Hereafter, the term "single infection death" indicates death certificates with only one infection reported, and "co-infection death" indicates death certificates with more than one infection reported.
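The operational definitions above (one reported infection code versus more than one) can be illustrated with a small sketch. The ICD-10 code set here is a hypothetical subset for demonstration, not the authors' full list:

```python
# Hypothetical subset of ICD-10 infection codes (illustrative only, not the
# complete set of infectious-disease codes used in the paper).
INFECTION_CODES = {"A41", "B18", "B24", "J15"}

def classify(certificate_codes):
    """Classify a death certificate by the number of infection codes it reports."""
    n_infections = sum(code in INFECTION_CODES for code in certificate_codes)
    if n_infections > 1:
        return "co-infection death"
    if n_infections == 1:
        return "single infection death"
    return "no infection reported"

# A certificate listing septicaemia (A41) and chronic viral hepatitis (B18)
# alongside a non-infectious cause (I10) counts as a co-infection death.
print(classify(["A41", "B18", "I10"]))  # co-infection death
```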
Data for England and Wales were obtained from the Office for National Statistics. Other data provided were sex and age at death (eight age categories: 0-19, 20-29, 30-39, 70-79, 80+). Other information (i.e., exact age, date and place of death, and higher-resolution ICD codes) was removed by the Office for National Statistics to prevent identification of individuals.

Data for the USA were downloaded from http://www.nber.org/data/multicause.html. For comparability with England and Wales we ensured similar ICD coding and roughly decadal categories, while also keeping higher-resolution data available among children (<1, 1-4, 5-14, 15-24, 25-34, 75-84, 85+). We also used data on place of death (e.g., patients in hospital, hospice, or nursing home) to test whether the patterns were consistent among inpatients with access to treatment before death.

Ethical approval

We use data from public agencies in the US and the UK that relate to deceased humans. We were exempt from requiring ethical approval to undertake our study because the individuals were not living when the data were gathered (US federal regulations 45 CFR 46.102f Protection of Human Subjects 2009). We also worked with the Office for National Statistics to ensure that the data they supplied and the results that we report herein did not disclose personal or sensitive data relating to living persons (Freedom of Information Act 2000 c. 36 II section 40(3)(a)).
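From data stratified this way, the quantity ultimately analysed is simply, per age/sex stratum, the proportion of infection-related deaths that report more than one infection. A minimal sketch with made-up counts (not the actual ONS or NBER data):

```python
import math

# Illustrative stratum counts (co-infection deaths, single-infection deaths);
# these numbers are invented for demonstration, not taken from the paper.
strata = {
    ("25-34", "M"): (300, 700),
    ("25-34", "F"): (250, 750),
    ("75-84", "M"): (90, 910),
    ("75-84", "F"): (60, 940),
}

proportions = {}
for (age, sex), (co, single) in strata.items():
    n = co + single
    p = co / n                       # proportion of co-infection deaths
    se = math.sqrt(p * (1 - p) / n)  # binomial standard error
    proportions[(age, sex)] = (p, se)
    print(f"{age} {sex}: {p:.1%} +/- {1.96 * se:.1%}")
```

This is the proportion (with approximate binomial error bars) that the paper models more flexibly with a binomial GAM, as described under "Statistical analyses".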
Statistical analyses

(i) Age, sex, and co-infection death

Associations between age, sex, and co-infection death were modelled using generalised additive models (GAM) with two predictor variables, sex (two-level factor; male and female) and age (eight-level factor for England and Wales, eleven-level factor for the USA), and the interaction between age and sex. This analysis is similar to a logistic (i.e., binomial) regression where the response variable is the number of "successes" (co-infection deaths) and "failures" (single infection deaths) with binomial error structure and logit link, except that GAM allows for non-linear relationships between co-infection and age (e.g., [15]). We used the Akaike Information Criterion (AIC), where a lower AIC indicates a more informative model, to determine which terms should remain in the model. We started with a saturated model and proceeded to drop interactions and then main effects if their deletion reduced AIC.

(ii) Associations between pairs of infections

For each pair of infections we tested whether the number of deaths involving both infections was different from that expected in the absence of any association, using a Chi-squared test. Every co-occurring infection was included in this analysis, such that a death certificate reporting three infections would have contributed three pairs. The residuals from this analysis were used to quantify the disparity between the observed and expected frequencies of co-infection death; a negative residual indicated fewer co-infection deaths than expected, while a positive residual indicated more than expected. To account for infections being reported with different frequency, we report Pearson standardised residuals [16].

(iii) Are biologically similar infections associated with co-infection death?
We tested whether the measured associations between each pair of infections were related to the biological similarity between them, based on four characteristics: taxonomy, transmission, tropism, and timescale (see Additional file 1: Supplementary Information S2 for details). For each infection we gathered data on each characteristic using PubMedHealth [17], a human infection database [18], and an RNA virus database [19]. Pairwise similarity between each pair of infections was calculated as the number of matching characteristics (zero to four). We used a Mantel test [20] to measure the correlation coefficient between the matrix of pairwise biological similarities and the matrix of pairwise associations on death certificates (standardised Chi-squared residual values). All analyses were done in R version 3.1.0 [21], including using the mgcv package to fit GAMs and calculate AIC [22] and the ade4 package for Mantel tests [23].

Results

(i) Age, sex, and co-infection death

To test the sensitivity of this result to our analytical methods we tested for associations between co-infection death, age, and sex using other settings; in all instances we found omission of the age:sex interaction to increase AIC by at least 25 (Additional file 1: Table S1).

(ii) Associations between pairs of infections

Of 9453 possible pairings from 138 infections reported on death certificates in the USA, 1067 (11 %) co-occurred on death certificates. Of 4560 possible pairings from 96 infections reported on death certificates in England and Wales, 366 (8 %) co-occurred on death certificates. Most pairs co-occurred less often than expected, indicated by the positive skew in Chi-squared residuals (Fig. 2; USA: 91.2 % had a negative standardised residual; England and Wales: 94.2 % had a negative standardised residual). Nevertheless, the strongest associations tended to be positive. For example, the proportion of pairs with residuals greater than 5 is 5.05 % in the USA, and less than −5 is 1.24 %. For England and Wales these proportions are 2.81 % and 1.55 %. The longer positive tails on the distribution were also found for non-standardised and adjusted residuals, particularly for the USA data (Additional file 1: Figures S1 and S2).

(iii) Are biologically similar infections associated with co-infection death?

There were 3501 pairs of associated infections reported in both the USA and England and Wales. Standardised residuals in both countries were positively correlated (Additional file 1: Figure S3, r = 0.32, df = 3499, 95 % CI 0.303-0.345); hence, most of these pairs of infections (3089/3501, 88.2 %) had the same direction of association in both countries, and we have greater certainty over their co-occurrence. Pairs with the strongest negative residuals in both countries were A41 Other septicaemia and A04 Other bacterial intestinal infections (standardised residual −143 in England and Wales and −206 in the USA), A41 and B18 Chronic viral hepatitis (−307 and −32), A41 and A49 Bacterial infection of unspecified site (−125 and −64), and A41 and B24 Unspecified HIV disease including AIDS (−142 and −31; see http://figshare.com/preview/_url/1328406/project/3684 for residuals of all pairs in both countries). We used these pairs with similar co-occurrence in both countries to investigate how co-occurrence on death certificates was associated with biological similarity between each pair of infections. The standardised chi-squared residual for a pair of infections was positively correlated with the number of shared biological characteristics (Mantel test with 100 repetitions, simulated mean ± 2 sd: 0.24 ± 0.17). In other words, infections that co-occurred more often than expected tended to have more characteristics in common. Co-occurring pairs tended to share each characteristic (separate Mantel tests with 100 repetitions: tropism 0.19 ± 0.14, transmission 0.12 ± 0.11, taxonomy 0.15 ± 0.13, timescale 0.18 ± 0.17; Additional file 1: Figure S4). To test the sensitivity of this result to our analytical methods we also tested for associations using data for each country alone, and using linear regression between the number of shared characteristics and the Chi-squared residual. In all instances we found a strength of association whose confidence intervals did not overlap zero (Additional file 1: Supplementary Information S4).

Discussion

Humans can get many different co-infections, but treatment guidelines only exist for a few specific combinations (e.g., HIV and hepatitis C). Co-infection morbidity has also been studied within certain cohorts (e.g., 5-16 year olds [15]), and is often reported to be worse than single infections [4]. However, the occurrence of co-infection in death across age and sex cohorts has, to our knowledge, never been studied before. Our results indicate that (i) co-infection death may be more common in early adulthood, but it is not known whether younger adults are more susceptible to co-infection per se, or more susceptible to fatal co-infection. We also found that (ii) pairs of infections with strong positive association on death certificates tended to co-occur more often than those with strong negative associations. This suggests that medical care of severely ill patients with some co-infections can be problematic. Finally, (iii) co-occurrence on death certificates was positively related to biological similarity. Better understanding of these biological interactions may help efforts to predict and combat co-infection mortality. We discuss the factors that may contribute to these patterns, before considering implications for treatment, limitations of the data, and future research needs.
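The pairwise association measure discussed above can be sketched for a single pair of infections. The counts below are invented for illustration, and the simple Pearson residual (O − E)/√E is used here, a slightly cruder quantity than the margin-adjusted standardised residuals reported in the paper:

```python
import math

# Illustrative 2x2 counts of death certificates (made-up numbers):
# certificates reporting both infections, each alone, or neither.
both, a_only, b_only, neither = 120, 880, 480, 98_520
n = both + a_only + b_only + neither

# Expected co-occurrence if the two infections were reported independently
p_a = (both + a_only) / n
p_b = (both + b_only) / n
expected = n * p_a * p_b

# Pearson residual for the "both reported" cell: (O - E) / sqrt(E)
residual = (both - expected) / math.sqrt(expected)
print(f"expected {expected:.1f}, observed {both}, Pearson residual {residual:.1f}")
```

Here the pair co-occurs far more often than the independence expectation of 6 certificates, giving a large positive residual; a residual near zero or negative would indicate no association or avoidance, mirroring the long positive tail versus the majority of negative residuals described in the Results.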
Possible causes

The early-to-mid adulthood peak in co-infection death contrasts with theories that the immune response declines in old age [5], and with non-infectious diseases, where comorbidities increase with age [24]. This could be explained by individuals being more susceptible to death from a single infection in old age, either because they are frailer as their bodies deteriorate through oxidative damage [25], or because the infection coincides with non-infectious causes of death that are more common with age, like cancer [26]. Alternatively, young adults are more prone to severe immunopathologies following infection: critically ill patients with influenza A(H1N1) tended to be 20-30 years old [27], and the added physiological stress of co-infection might make death more likely. Another possibility is that more effort is made to find infections in critically ill young adults than for older patients. We are not aware of evidence that biased medical practices contribute alongside the physiological factors mentioned above, but this is a possibility that could be examined further.

Reasons for males being at higher risk of infection than females include behaviours that put them at greater risk of infection, or physiological reasons, such as sex hormones, that make them more susceptible to severe disease once infected [7, 28]. Our data do not enable us to distinguish which of these mechanisms may have played a role. If males undertake riskier behaviour, have higher testosterone in early adulthood, or are less likely to visit the doctor when ill, this may explain why the sex difference appears around the peak of the distribution (Fig. 1).
Treatment implications

Our results suggest that co-infection treatment guidelines could be based on synergistic interactions between infections. Most possible pairs of infections co-occurred on death certificates at a frequency expected from their occurrence alone. We suggest that these unassociated pairs of infections could be excluded from efforts seeking to identify severe co-infections.

Around 1 in 20 possible pairs were associated, and these tended to co-occur more often than expected. Positively associated pairs of reported co-infections included mycobacteria and HIV, viral hepatitis co-infection, and cytomegalovirus and pneumocystis. While these similar pairings were often reported together, associations were context dependent; the same infections were negatively associated with other infections, including mycobacteria and infectious bloody diarrhoea, pneumocystis and sequelae of tuberculosis, and viral hepatitis and Zoster virus infection. The direction of association is therefore not consistent for the same infection, and so treatment guidelines should not be based solely on the identity of one constituent infection. Whether such correlations are clinically meaningful remains a debatable point, but on a population scale, across hundreds of thousands of deaths, the results suggest that they may be important to public health and worthy of further investigation. The biological similarity of associated pairs could be an important consideration when assessing the potential severity of a given co-infection.
Data quality and limitations

Studies based on reported data must consider potential biases. In our dataset there may be underreporting of co-infection death on death certificates if infectious diseases were undetected, wrongly deemed not to have contributed to death, or not reported using multiple codes. Poor reporting of causes of death was a problem in the UK in the 1990s [29]. There have since been legal and educational reforms [30], and death certificate data have been audited by the Centers for Disease Control and the Office for National Statistics. Using multiple infectious causes as indicators of co-infection probably underestimates the true number of co-infection deaths. One could hypothesise that certain types of infections, such as those detected by the same test, with similar tropism, or of high severity, might be more likely to be diagnosed. However, from death certificates alone we are unable to examine whether behaviour or diagnostic techniques may have played a role. We have no evidence of systematic bias that could have generated the patterns we found, but we encourage further broad scale analyses of co-infection to help establish the key factors of the individual and their infections that can best guide treatment. Our conclusions are robust to the complexity of model fitted (Additional file 1: Supplementary Information).

Fig. 1 Proportion of infection-related deaths reported due to co-infection in the USA (left) and England and Wales (right). Points are the observed proportions of co-infection death among death certificates reporting at least one infection. Solid lines are the fitted binomial GAM (female = red, male = blue). Dashed lines are two standard errors above and below the fitted values.
Health related quality of life improvement in chronic non-specific neck pain: secondary analysis from a single blinded, randomized clinical trial

Background: Chronic non-specific neck pain is related to limited cervical mobility, impaired function, myofascial pain syndrome of the neck muscles, and stress at work. The aforementioned factors are strongly related and may lead to a negative impact on health-related quality of life. There are some effective conservative physical therapy interventions for treating chronic non-specific neck pain. Currently, Deep Dry Needling is emerging as an alternative for improving symptoms and, consequently, quality of life in patients with chronic non-specific neck pain. The purpose of the study was to examine the effectiveness of Deep Dry Needling of myofascial trigger points on health-related quality of life improvement, as a secondary analysis, in people with chronic non-specific neck pain. Methods: A randomized parallel-group blinded controlled clinical trial was conducted at a public Primary Health Care Centre in Madrid, Spain, from January 2011 to September 2014. One hundred thirty subjects with chronic non-specific neck pain and active myofascial trigger points in neck muscles were randomly allocated into two groups. Subjects in the intervention group (n = 65) were treated with Deep Dry Needling of active myofascial trigger points plus stretching of the neck muscles; the control group (n = 65) received stretching only. Both interventions lasted 2 weeks, with 2 sessions per week. Health-related quality of life was measured with the Short Form-36 (SF-36) in 5 assessments: at baseline, after the intervention period, and at 1, 3 and 6 months after the intervention. Results: For both groups, SF-36 mean values increased in all dimensions at every assessment. Significant differences (p < 0.05) were found in favor of the intervention group for all dimensions at the last assessment.
For some dimensions (physical function, physical role, social function and vitality), the evidence was more consistent from the beginning. Conclusions: Deep Dry Needling plus stretching is more effective than stretching alone for health-related quality of life improvement, especially for the physical function, physical role, social function and vitality dimensions, in people with non-specific neck pain. Trial registration: Current Controlled Trials ISRCTN22726482. Registered 9 October 2011.

Background

Up to 67% of the world's population may present chronic non-specific neck pain at least once in their lives. There is a relationship between functional limitation and disability in individuals with chronic pain, and they use health services and medication for pain relief very often. It is considered a public health problem and a frequent cause of job absenteeism, which provokes high socioeconomic costs [1][2][3]. Chronic non-specific neck pain is diagnosed as cervical pain without a known pathological basis as the underlying cause of the complaints. Some symptoms are limited cervical spine mobility and neck muscle weakness, which may often be related to other problems, such as vertebral, neck or shoulder impaired function, and mental and physical stress at work. Besides, chronic non-specific neck pain patients have more functional limitations and catastrophizing beliefs that may cause disability, lower vitality and worse general health status. All the aforementioned factors are strongly related, influence one another, and may lead to a negative impact on health-related quality of life (HRQoL) [2,[4][5][6]. Some recent studies have also reported the relation between chronic non-specific neck pain and Myofascial Pain Syndrome (MPS), caused by myofascial trigger points (MTrPs) in cervical muscles, with a high prevalence in the trapezius, levator scapulae, multifidi cervicali and splenius cervicis muscles [5].
The most frequent conservative physical therapy interventions for treating MPS are stretching, massage, ischemic compression, and pressure release techniques [4,[16][17][18][19]. The effectiveness of Deep Dry Needling (DDN), an invasive technique which is included in some physical therapy interventions for treating MPS, has also been reported in different studies to improve pain intensity, mechanical hyperalgesia, neck range of motion, neck muscle strength and neck disability [4,[20][21][22][23][24]. The benefits of DDN for the aforementioned symptoms have been described in the primary analysis of this study, which showed better and clinically meaningful results for all of them when compared with the control group in the short term and at 6-month follow-up [2,[4][5][6]. There are some studies that report HRQoL in patients with chronic non-specific neck pain for different physical therapy interventions, such as global posture reeducation and static stretching [8]; neck strength training [25]; physical training, specific exercises and pain education [26]; and home-based exercise [27]. However, as far as the authors know, no studies relate chronic non-specific neck pain, MPS, DDN of MTrPs and HRQoL. Therefore, in the present study, a secondary analysis was performed in order to determine HRQoL improvement, providing new data not described in the primary analysis [2,[4][5][6].

Aim

To determine the effectiveness of DDN of MTrPs on HRQoL improvement in people with chronic non-specific neck pain [2,[4][5][6].

Design and setting

This paper reports a secondary analysis of the study "Effectiveness of dry needling in chronic non-specific neck pain: randomized, single blinded, clinical trial", which was carried out between January 2011 and September 2014 at a Primary Health Care Center in Alcalá de Henares (Madrid, Spain) by the Physiotherapy in Women's Health Research Group.
It was approved by the Human Ethics Committee at Principe de Asturias Hospital in Alcalá de Henares, Madrid (Spain) [4]. Participants The sample was recruited at 3 primary health care centers in Alcalá de Henares (Madrid) and consisted of 130 participants who gave written informed consent to participate in the study. All participants were diagnosed with chronic non-specific neck pain by their primary care doctor [4,28]. After the diagnosis, a trained physical therapist, with more than 15 years of experience in the diagnosis and treatment of MTrPs, assessed each participant with a standardized clinical physical therapy assessment of the neck and upper extremities to determine if there was MPS in neck muscles. Those subjects who presented at least 1 active MTrP in the levator scapulae, trapezius, multifidi or splenius cervicis muscles, according to the diagnostic criteria established by Simons et al. [29], were included in the study. The assessment was performed by a group-allocation-blinded expert physical therapist with more than 10 years of experience in assessing and treating MPS [4]. After signing the informed consent, participants were randomly allocated into 2 groups: DDN group (DDN; n = 65) and control group (CG; n = 65). Sample size was calculated according to the main objective of the original clinical trial. Details on sample size, sample recruitment, randomization, and blinding are explained in the paper with the primary analysis by Cerezo et al. [4]. Interventions Physical therapy interventions in both groups consisted of 20-min sessions, twice a week, for 2 weeks, and were carried out by 2 experienced physical therapists with more than 10 years of experience in the treatment of MTrPs at a primary health care center in Alcalá de Henares, Madrid.
The DDN group intervention was performed by physical therapist 1 (Pt1) and included DDN for each active MTrP found in the multifidi cervicis, splenius cervicis, levator scapulae and trapezius muscles, using a 4 cm × 0.32 mm acupuncture needle with a guided tube (ASP. A1040P. Agu-punt S.L. Acupuncture-Physiotherapy. Barcelona, Spain). After DDN, a passive stretching of the splenius cervicis, cervical multifidi, levator scapulae and trapezius muscles was performed 4 times in the positions described by Simons et al. [29]. In the CG, physical therapist 2 (Pt2) performed the same passive stretching of the above-mentioned muscles. Before the study started, a series of consensus meetings was carried out in order to ensure both physical therapists (Pt1 & Pt2) would perform the same passive stretching intervention. They were the only study members aware of group allocation. During the intervention period, if participants reported high pain intensity, they were treated with the rescue medication proposed by their primary care physician. No participant received any treatment outside the interventions established in the study. Outcome assessments Patients were assessed 5 times: at baseline (A0), just after the intervention period (A1, 3 weeks from baseline), and then at 1 month (A2, 7 weeks from baseline), 3 months (A3, 16 weeks from baseline), and 6 months (A4, 30 weeks from baseline) after the intervention. HRQoL was measured with the Short Form 36 Health Survey Spanish version 2 (SF-36v2) at each of these time points. Statistical analysis Participants' characteristics, relevant health variables and HRQoL were compared between the two groups at baseline with descriptive statistics. To estimate the effect of the intervention on HRQoL over time, a separate baseline-adjusted linear regression model was used for each SF-36v2 dimension at each visit.
The difference in SF-36v2 score from baseline to each visit was regressed on a binary variable for trial arm (0 = control, 1 = intervention) and the baseline value of the SF-36v2 dimension centered on the mean. This allowed us to estimate the difference between the two trial arms in the changes from the baseline visit while accounting for the "regression to the mean" that typically occurs when using repeated measures of the same variable. No correction was applied for multiple testing, as many of these p-values are clearly not independent and no decision has to be taken based on these p-values. Actual p-values and confidence intervals are shown, and the evidence for each outcome is discussed. The software R v.3 was used for data analysis. Results Between January 2010 and December 2014, 150 subjects were recruited to participate in the study, as they were diagnosed with chronic non-specific neck pain by their primary care doctor. After excluding 20 subjects for not meeting the inclusion criteria, 130 participants with chronic non-specific neck pain and active MTrPs in neck muscles were included in the study to receive physical therapy treatment. After randomization, 2 subjects dropped out because they moved away from the city. Therefore, 128 participants self-completed the SF-36, and more than 98% of the items were answered (Fig. 1). Although the sample and methodology were the same as in the primary study, the results presented in this manuscript correspond to a new secondary analysis carried out to analyze HRQoL across every SF-36v2 dimension. Baseline demographics and descriptive pre-intervention statistics of the sample are shown in Table 1, according to the intervention groups. Both groups were fairly homogeneous at baseline, except for sex, with more females in the CG (45) than in the DDN group (36), and for the BP dimension, which had a lower value in the CG (45.1) than in the DDN group (59.6).
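The baseline-adjusted change-score model described in the statistical analysis can be sketched as follows. The authors used R; this illustrative version uses Python with simulated data, so the sample sizes, effect sizes and noise levels are hypothetical, not taken from the trial.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated two-arm trial (hypothetical numbers, not the trial's data)
n = 128
arm = np.repeat([0, 1], n // 2)           # 0 = control, 1 = intervention
baseline = rng.normal(50, 10, n)          # baseline score of one SF-36v2 dimension
# Assumed data-generating model: the intervention adds 8 points, and
# scores far from the mean regress back toward it at follow-up
followup = baseline + 5 + 8 * arm - 0.4 * (baseline - 50) + rng.normal(0, 5, n)

# Outcome is the change from baseline; the baseline covariate is
# centered on its mean, as in the paper, so the intercept is the
# expected change for a control participant with an average baseline
change = followup - baseline
X = np.column_stack([np.ones(n), arm, baseline - baseline.mean()])
coef, *_ = np.linalg.lstsq(X, change, rcond=None)

intercept, arm_effect, baseline_slope = coef
print(f"between-arm difference in change: {arm_effect:.2f}")
print(f"baseline slope (regression to the mean): {baseline_slope:.2f}")
```

The negative coefficient on the centered baseline captures the regression-to-the-mean effect the authors adjusted for: participants who start low tend to improve more regardless of treatment, so comparing raw change scores without this covariate would be biased whenever the arms differ at baseline (as they did here for the BP dimension).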
As this is a randomized design, the main analysis was not controlled for sex but was adjusted for the baseline HRQoL dimension to account for the regression-to-the-mean effect. However, a sensitivity analysis controlling for sex was done and the results were basically the same for all the outcomes (results not shown). Table 2 shows the effect of the group interventions at each time point, in terms of the differences between baseline (A0) and each of the other 4 follow-up assessments (A1 to A4) for every SF-36v2 dimension. For both groups, SF-36v2 mean values increased in all dimensions at every time point. Significant differences (p < 0.05) were found in favor of the DDN group for all dimensions at the last follow-up visit. However, for some dimensions, such as PF (Fig. 2), PR (Fig. 3), SF and VT, the evidence is stronger and more consistent from the beginning. The other dimensions did not show significant differences in all the assessments (Figs. 4 and 5). The SF-36 PCS summary showed stronger differences towards the end, while the SF-36 MCS showed stronger differences at the first and last visits. Discussion This randomized controlled trial is the first that relates chronic non-specific neck pain, DDN of MTrPs and HRQoL. The results show that a physical therapy intervention with DDN plus stretching improved HRQoL, especially in the PF, PR, SF and VT dimensions, in patients with chronic non-specific neck pain. These dimensions describe limitation of physical activities (PF), role limitations due to physical problems (PR), interference of physical and emotional health problems with social life (SF), and feelings of vitality or tiredness (VT). This means that, despite neck pain, participants perceived an improvement in their limitations after the proposed intervention, and this perception lasted over time.
In the literature, there are many physical therapy interventions that are performed to improve symptoms of chronic non-specific neck pain, such as pain intensity, mechanical hyperalgesia, neck range of motion, neck muscle strength and neck disability [4, 7-15, 20-24, 31, 32]. The combination of DDN and stretching has also shown effectiveness [4, 20-24, 31, 32], as both DDN [4, 20-24, 31, 32] and stretching [8,9,14] have proved to be good choices for treating subjects with non-specific neck pain. Some studies report HRQoL in patients with chronic non-specific neck pain using different physical therapy interventions [8,[25][26][27], although they do not relate symptoms, DDN and HRQoL. Therefore, reporting HRQoL in this manuscript is important because it could not be included in the primary analysis owing to the extent of the data. HRQoL was measured with the SF-36v2. As far as the authors know, there is currently no specific instrument to measure HRQoL in chronic non-specific neck pain; therefore, most studies use the SF-36 to report the effects of physical therapy interventions for improving HRQoL [33][34][35]. Regarding baseline values, the average age of participants is around 50 years, and most of them are women and overweight, similar to the samples of other studies on non-specific neck pain [8,20,25,36]. Most SF-36 baseline values obtained in both groups were lower than Spanish population reference values, especially for PR, BP and ER [37]. Some authors have described a reduction of HRQoL 6 months after patients are diagnosed with chronic cervical pain. The participants of the present study had chronic non-specific neck pain, and the clinical data are representative of what occurs with neck pain subjects in developed countries and similar to those of other studies [38][39][40].
There were some remarkable SF-36 differences in baseline values between the DDN group and the CG for PF, PCS and especially for BP, but these differences did not affect the results, as the analysis was adjusted for baseline values. The BP dimension of the SF-36 assesses bodily pain in general, including other kinds of pain that the participants might suffer and that cannot be mitigated by this intervention, so it might not be the most specific outcome to measure the effect of this intervention. It is common that people with chronic non-specific neck pain and other musculoskeletal alterations perceive a worse health status and have limitations in their work or other daily life activities, and this is reflected in their GH and ER dimensions. In our data, the differences in the ER dimension were more significant at the later visits. This fact supports the idea that the negative impact of pain on HRQoL seems to be more related to the duration and sensation of limitation when performing daily life activities than to its severity. It is also deeply related to functional, psychological, social and working alterations [6,33,41]. The results show minimum clinically important differences [33,35,42] in some dimensions, which suggests that the inactivation of MTrPs could improve HRQoL in subjects with chronic non-specific neck pain. Cunha et al. [8] performed a randomized clinical trial (n = 33) comparing two groups in which patients were treated for 1 h, twice a week, for 6 weeks. One group's intervention was 30 min of manual therapy and 30 min of stretching of the upper trapezius, suboccipitalis, back of the neck, pectoralis major and minor, rhomboids, finger and wrist flexors, forearm pronators, finger and wrist extensors, forearm supinators, and paravertebral muscles. The other group received 30 min of manual therapy and 30 min of muscle chain stretching.
They found that both groups reported improvement in all dimensions of the SF-36 and concluded that this was probably due to the duration of the interventions, which could be enough time to influence the participants' perception of the therapy received. In the present study, the intervention group obtained better results than the control group in some dimensions of the SF-36 (PF, PR, SF and VT) from the beginning, and in all dimensions at the last follow-up assessment. Therefore, the good results obtained in the present study are probably not merely related to patients' perception. Ris et al. [26] described the effectiveness of a physical training intervention plus specific exercises and pain education on HRQoL in subjects (n = 200) with chronic neck pain, over a 4-month follow-up. The intervention group received exercises for neck/shoulder, balance and oculomotor function, graded physical activity training and pain education; the CG received only pain education. They observed statistically significant differences in the PCS and MCS SF-36 summaries in favor of the intervention group. In the present study, no endurance or strengthening programs were performed. In spite of that, an improvement of HRQoL was obtained, which might reinforce the idea that inactivation of MTrPs is the best way to improve HRQoL; otherwise, they could act as a perpetuating factor maintaining the vicious cycle of the MTrP. Participants' therapeutic adherence and level of response to the questionnaires during assessments were high, similar to another study, by Salo et al. [27]. This may be due to the attention and monitoring carried out by the researchers, and to the fact that the questionnaires were completed in the researchers' physical presence during the physical therapy assessments, while in most HRQoL studies the questionnaires were sent by mail or telephone. Other authors do not discuss this issue in their publications.
In fact, in other studies some rehabilitative interventions were self-administered by participants, which may adversely influence results and therapeutic adherence [26,43]. The authors consider that the present study has some limitations. Although consensus meetings were carried out before the study started and the same material was used for the educational intervention, passive stretching was performed by 2 different physical therapists, which may have influenced the outcomes. Furthermore, the sample was composed of participants from just one health area, which could limit the study's external validity. The fact that this manuscript reports a secondary analysis might be a limitation. However, after reporting the primary results, the authors considered it necessary to develop a secondary analysis in order to describe the distinct effects of the intervention on HRQoL. Conclusions Deep dry needling plus stretching is more effective than stretching alone for HRQoL improvement in people with non-specific neck pain, especially in the long term. The evidence was stronger for the PF, PR, SF and VT dimensions. Future studies should strive to use high-quality condition-specific patient-reported outcome instruments to determine the impact of specific conditions and their physical therapy interventions on subjects with chronic non-specific neck pain.
LETTER TO EDITOR Antimicrobial Resistance – A Silent Pandemic Antimicrobial resistance is posing a great threat in many low- and middle-income countries as a result of groundless and unseemly use of antimicrobial agents in the community. Research related to antimicrobial use, determinants and development of antimicrobial resistance, regional variation and interventional strategies according to the existing health care situation in each country is a big challenge. Current statistics reveal that around 1.3 million deaths annually are due to antimicrobial resistance, which is more than those due to HIV/AIDS or malaria. It is predicted that AMR might consume 10 million lives by 2050 [1]. We are already close to the predicted number. Besides this, it also poses a serious socio-economic burden globally. The World Bank has cautioned that, by 2050, the burden posed by drug resistance would be higher than that caused by the 2008 financial crisis, and 24 million people will succumb by 2030 due to the impact of AMR on the economy and health [2]. To control and contain AMR we need joint action from a variety of sectors. In many countries, antimicrobials have become over-the-counter (OTC) drugs, India being no exception. India's rates of drug resistance are among the leading in the world. In spite of being the largest producer and exporter of drugs, AMR is high due to the lack of an effective drug surveillance system. Policies have been formulated at the national level, but what is needed is policies at the local community level for effective and justifiable use of antimicrobials to withhold and curtail the emerging resistance. Children under 5 are most vulnerable to AMR. Data from the CDC reveal that 30% of antibiotics being prescribed are unjustifiable or unnecessary. From being a submerged and unaddressed problem, a clearer picture of the AMR burden has emerged. Superbugs are the outcome of our prolonged failure to preserve antibiotics [4].
Studies from the WHO and the Lancet reveal the drivers of AMR to be multifaceted, indicating the need for a One Health initiative, since human health is directly related to animals and the environment [5]. The WHO released its priority pathogens list in 2017 as a result of evolving resistance [6]. Table 1 shows the WHO list of priority pathogens. Carbapenem resistance in India is 20 times more common than in the U.S. ICMR data revealed 85% resistance of spp. to carbapenems, contributing to 20% of admissions in the ICU setting [7]. Data from the GLASS study revealed MRSA rates to be 25%, E. coli resistance to 3rd-generation cephalosporins to be 36.6%, and an alarming level of carbapenem resistance in Acinetobacter, i.e., 65% [8]. Common infections could once again become deadly killers if we are unable to treat them with antimicrobials. Development and spread of resistance far exceed our innovations and developments. In the past 3 decades we have developed only 2 new classes of drugs for Gram-positives, with no developments for drug-resistant Gram-negative bacteria [9]. We need to strengthen funding for vaccines and drugs to combat deaths due to AMR. Adequate surveillance at the community level through epidemiological and microbiological tools is needed to analyse the situation and to fill the gap in knowledge on microbes. What can be done at the hospital level is adoption of infection control measures and antimicrobial stewardship through regular training of health care professionals and prescription audits. We need to watch antimicrobial use in food animals by strengthening veterinary medicine. To conclude, antimicrobial resistance is a multifaceted problem which, if actions are not taken, can have a devastating health, social and economic impact.
Reviews of Books The Clinical Examination of the Nervous System. Price 7s. 6d. We have reviewed this book on several occasions previously in these columns. It has now, however, reached its sixth edition, which proves that it is a work which has been much appreciated and one which is most useful to students, house physicians and general practitioners. Numerous minor additions and alterations have been made in this edition, more particularly in various methods which are of clinical importance, including Laruelle's method of encephalography. "Adreno-Genital Syndrome," in which certain lesions of the adrenal cortex have been associated with hirsutism and virilism in women. Why apparently similar pathological lesions should be associated clinically sometimes with this syndrome and sometimes without it has not been adequately explained.
The authors, however, claim to have found a differential stain (Ponceau fuchsin) which has given positive results in all their cases subjected to unilateral adrenalectomy, and which enables them to separate the two types, such as simple hyperplasia, cortical adenoma, cortical carcinoma or hypernephroma. The authors consider that there must be some special peculiarity of the cortical cells in virilism, involving the production or over-production of a specific secretion. The fuchsinophil material which they describe appears to be related to the masculinizing function of the cortex, a function which is probably physiologically normal, but which in "virilism" is exercised to excess. The clinical condition of eighteen cases is described in detail. The book also contains many interesting observations, and coloured plates showing the condition when the reaction is present and also when it is absent. It is a publication more suitable for the specialist than the general practitioner, though the authors state they desire to help the latter. The first half of the book is devoted to abstracts from an immense field of recent articles on dermatology, whilst the second half is given up to a very comprehensive survey of current literature in the domain of urology. The majority of the abstracts are enriched by a little editorial comment and criticism. Those engaged in these branches of special medicine will find a most valuable addition to their knowledge in this splendidly brief and concise book. Massage and Remedial Exercises. Second Edition. By Noel M. Tidy. Pp. xii., 430. Illustrated. Bristol: John Wright & Sons Ltd. 1934. Price 15s. The fact that a second edition of Miss Tidy's text-book has
been found necessary within eighteen months of the first is a tribute to its having filled the void for which it was intended; this, the author explains, is to provide for senior students of massage and those recently qualified, for whom other books on massage offer too few details; while the advanced works, like Dr. Mennell's, could not be appreciated without "a far more extensive background of knowledge and experience than would be possessed by any medical gymnast at the beginning of her career." The wealth of detail provided by Miss Tidy certainly offers the student an opportunity of acquiring such a background, if she is not stunned by its complexity; for one cannot help feeling that the average intellect entering such an area is apt to lose sight of the wood for the trees. Textbooks such as these are inevitably designed to meet the demands of qualifying examinations, and certainly the one under review is a model of accuracy and up-to-date information, representing the digestion of an enormous amount of information and the teaching of many schools. Nevertheless, one feels certain that the practitioner of massage, like that of medicine, would be safer if thoroughly grounded at the outset of her career in certain fundamental principles, which are applicable to all joints and all diseases, and left to discover their modifications and the subtleties of practice by the actual handling of cases. If, on the other hand, the training schools demand the minutiae contained in this text-book, we would strongly recommend the publishers to re-issue it in a set of handy pocket volumes with a print that would not strain the eyes of those whose bookwork is done at the end of a day of strenuous exercise, which has already lowered their muscle tone. Aids to Operative Surgery. Second Edition. By Cecil P. G. Tindall & Cox. 1934. Price 3s. 6d. There is a marvellous amount of information packed into this little book, and it is very cheap.
Gynaecological, ear, nose and throat operations are included. For students taking a class in operative surgery it is excellent; they need nothing better. It contains all they are likely to be asked in a surgical examination for a pass degree; indeed, no examiner has any business to expect a tithe of what is here contained of operative detail. The author's technique is not always that most favoured to-day; we think Coffey's method of transplanting the ureters for ectopia vesicae might well be substituted for the three procedures mentioned; it is customary to stretch or divide the sphincter for anal fissure; an orchidopexy is more likely to be successful if some method of holding the testis down is used. The treatment described for exophthalmic goitre is misleading. As the author devotes a paragraph to discussing means of checking haemorrhage from the superior longitudinal sinus, it might well be mentioned that a piece of the temporal muscle stitched over the tear is simple and effectual. The book is well turned out. Mayou's Diseases of the Eye. This book must surely be so well established in the affections of students as to need no introduction. Since the appearance of the last edition, however, a great deal of new work has been published. This the authors have carefully sifted, and much that seems likely to be of permanent value has been incorporated in the present edition. This has meant the re-writing of much of the book, but its essential characteristics remain the same, and, within a small compass, it succeeds in covering adequately the whole field of ophthalmology. That this is accomplished without the sacrifice of clarity of style and without obvious condensation of subject-matter is a tribute to the authors. Some of the views expressed, e.g. on glaucoma, are a little in advance of those commonly taught, but this is probably good for both students and teachers. Altogether, this is a book which should be of great value to students and general practitioners.
Guide to Fundus Appearances. In order to reduce the cost of the book the usual coloured plates, illustrative of fundus appearances, are published separately in a small volume, in which are a dozen plates, some of them of composite character. With its descriptive letterpress it furnishes a fairly complete guide to such fundus conditions as the student may be expected to recognize. Price 5s. It is probably not generally realized, outside the large mining areas, how much incapacity results from the somewhat ill-defined condition known as miners' nystagmus, nor that, as yet, very little advance in its prevention has been made. It is in order to stimulate others to carry out research on the subject that the author has written his monograph. After a brief survey of nystagmus in general, he passes to a consideration of the symptoms, physical signs, etiology and treatment of miners' nystagmus, and exemplifies his views in a series of fifty case reports. This is a book which should interest those medical men whose work brings them in contact with the disease. The title is misleading. In the first place its scope is limited to disease of the tonsils alone; in the second it contains very little that can claim to be "modern advances." Indeed, if we mention Brown Kelly's researches on tortuosity of the carotid, a short account of agranulocytosis and diathermic treatment of the tonsils, we have practically exhausted the list. After a full description of the accepted anatomy of the tonsil the bulk of the book is devoted to a tedious réchauffée of indications for and technique of tonsillectomy. Complications are discussed at length, though not their treatment, except that of haemorrhage, and even here the slip-knot is the only method of ligature shown and the treatment of haemophilic bleeding is ignored.
The superficial nature of the work is indicated by the fact that the whole subject of malignant disease receives three pages only, and diathermy is the sole method of treatment described. The illustrations are such as are usual in text-books: figures 16 and 27 are identical, and eight of the remaining thirty-five are from instrument catalogues. They are well reproduced, though the coloured frontispiece "Vincent's disease" is (pace the artist) astonishingly unlike the usual clinical appearance. Paper and type are excellent, and the whole get-up of the book worthy of the publisher, but it is hard to conceive the reader who will find it of any value. Physics for Medical Students. By J. S. Rogers, B.A., M.Sc. Pp. x., 205. Illustrated. Melbourne: Melbourne University Press. 1933. Price 11s. 6d. This book can hardly be looked on as an initial text-book for medical students; an elementary knowledge of Physics would be necessary before much that is in it could be satisfactorily appreciated. As the sub-title implies, it consists of a series of chapters covering some of the chief applications of Physics in Medicine, and as such it should be of considerable help to students during the professional years of their curriculum. Almost without exception each chapter is independent of what has gone before, and can be read by itself. The subjects covered include osmosis, the colloidal state, hydrogen ion concentration, blood-pressure, heat gain and loss, cochlear function, optics of the eye and the microscope, high-frequency currents, radiations and radioactivity. The section on the various types of radiations is good, and should prove helpful to one who is studying their applications in diagnosis and treatment. The chapter on blood-pressure might be expanded with advantage. One wonders why the chapter on hydrogen ion concentration should be divorced from the other physico-chemical chapters; it is to be found between the sections on the microscope and high-frequency currents.
A welcome feature, uncommon in books of this size, is a section on the history of Physics; this, though of necessity brief, cannot fail to add to the interest of the book. In addition, a few references are given at the end of some of the chapters.
Vol. LI. No. 193.
The book is of handy size, and is well-printed and clearly illustrated. The Constitution in Health. By T. E. Hammond, F.R.C.S. Pp. ix., 160. London: H. K. Lewis & Co. Ltd. 1934. Price 7s. 6d. In this book the author deals with a conception which found more favour in a former time than it does with the present generation. It performs a real service in focusing attention on the fact that whilst bacteria are universally recognized as an important factor in the causation of disease, they are not the sole factor. Some more elusive phenomenon must be found to account for the various abnormalities we witness in our patients. As the author stresses in the last chapter, personal idiosyncrasy and abnormal reaction, whether it be to drugs or infections, would well repay further inquiry. Many of the author's statements challenge criticism, e.g., page 72: "It will eventually be found that too much fresh air is not so beneficial as it is thought." The book suffers also from frequent repetition. Price 10s. 6d. As the author says, this book is intended for the final-year medical student, and follows closely on the lines of the author's larger Synopsis of Public Health. The book is written on the lines of others of this series, and will be found most useful for the student studying for his final qualifying examination. Special attention is given to the services and assistance given by a health department to the general medical practitioner. That a second edition has been called for within a few years is ample proof of its popularity, and justifies the good Press notices which the first edition received.
The second edition has been increased in size by the inclusion of chapters on congenital affections of the skin, atrophy and sclerosis, vesicular and bullous eruptions, and the erythrodermias. There are also some new diagrams and photomicrographs. At the same time the price has been reduced to 16s. The student will find that reading this text-book, combined with attendance at the out-patient department, will give him a sound general knowledge of dermatology, while the practitioner will find it a valuable help in the diagnosis and treatment of those cases with which he has to deal. The simpler methods of treatment are clearly explained, and the indications for more specialized treatment, e.g. X-rays, are given. The next book is addressed to women, and particularly to married women. The exercises given are sound, and should, if indulged in regularly, not only keep the exerciser shapely but healthy as well. The diets are simple and, if anything, err on the inadequate side. A big point is rightly made of taking plenty of fluid during the day. For those women inclined to indigestion, adiposity, or general lack of muscle tone, this book can be recommended. There is a full index and the printing is good. It is a pity to have bound it in paper. Manipulative Treatment for the Medical Practitioner. By T. Marlin, M.D. Pp. vii., 133. Illustrated. London: Edward Arnold & Co. 1934. Price 10s. 6d. There is a tendency among members of the medical profession to relegate manipulative treatment to others; the author has studied the subject exhaustively, and in this little book we glean the results of his experience. He shows that manipulation does come into the field of legitimate therapeutics, and obviously it is much more appropriately carried out by qualified medical men than by those who may adopt the method for the cure of carcinoma of the colon. If doctors are able to bring about some of the miraculous cures now credited to the quack, it will undoubtedly make some impression upon the minds of the lay public, and the author shows us the way.
In this clearly written and well illustrated work Dr. Marlin has made a valuable contribution to medical literature, which, with careful study and application, should endow its owner with the necessary knowledge and skill to bring about cures in certain types of difficult patients to their mutual benefit.

Price 6d. In the third edition of Professor Browne's little handbook, Advice to the Expectant Mother on the Care of Her Health, the subject-matter has been brought up to date, thus enhancing its value for the class for whom it has been written. The chapter on the common disorders of pregnancy is particularly good. Without being unduly alarming, the author gives a concise account of the serious complications which may arise during gestation, and impresses upon his readers the danger of delay in getting medical help when symptoms pointing to toxaemic conditions arise. The chapter dealing with the hygiene of pregnancy is also excellent, and will be read with benefit by all expectant mothers, but in the chapter devoted to the management of the infant there are one or two statements which are open to criticism. The mother who is breast feeding is advised to nurse from alternate breasts instead of both breasts at each feed; the latter method is usually found to produce a more regular and satisfactory milk supply.

Lewis & Co. Ltd. 1933. Price 5s. Dr. Gillett is to be congratulated upon giving the general practitioner a reliable and well-balanced short work on the subject. The author insists on the importance of the individual needs in each case, deprecating routine methods. The description of the culture of the organism and preparation of the vaccines is sound, and his advice as to dosage and remarks on the patient's response are practical and helpful. The author favours the control of frequency and size of the dose in accordance with the response of the patient to each dose, rather than following a rigid course of increasing dosage.
The chapter on catarrh and the sequelae of influenza is particularly helpful, and his reference to the now accepted view of the defensive action of tonsil and adenoid tissue is well worth noting; and the advisability of immunizing the patient after and often before removal is suitably dealt with. In those cases where catarrhal conditions persist after excision of tonsils and adenoids, and where glands "do not clear up," the condition is due, he points out, to a latent septic focus, which re-infects some remaining lymphoid tissue. Such cases, as a rule, clear up entirely with a course of suitable vaccines, provided there is efficient drainage of any infected accessory sinuses. Dr. Gillett advocates the use of quite small doses of vaccine in all acute conditions, and this is the trend of opinion of most vaccine workers of to-day. From experience we fully endorse the author's statement that provided an efficient vaccine is used in suitable doses at correct intervals good results are assured in chronic and acute infections. This small volume of one hundred pages, including many charts and lists of references, is very well bound and printed, and might with advantage be added to the library of the young general practitioner.

1934. Price 10s. 6d. The popularity of this medical dictionary has called for a revision of the tenth edition. Actually there have been sixty-eight printings of Gould's pocket dictionary, aggregating over 800,000 copies. The relegation of the tables of arteries, bones, etc., to an Appendix at the end is a very welcome improvement. Apart from this, no substantial changes have been made. Gould's original plan is adhered to of selecting for inclusion such new words as are of sound and permanent value. This is one of the chief causes of the continued success of his medical dictionary.
Real-Options Water Supply Planning: Multistage Scenario Trees for Adaptive and Flexible Capacity Expansion Under Probabilistic Climate Change Uncertainty

Planning water supply infrastructure includes identifying interventions that cost-effectively secure an acceptably reliable water supply. Climate change is a source of uncertainty for water supply developments as its impact on source yields is uncertain. Adaptability to changing future conditions is increasingly viewed as a valuable design principle of strategic water planning. Because present decisions impact a system's ability to adapt to future needs, flexibility in activating, delaying, and replacing engineering projects should be considered in least-cost water supply intervention scheduling. This is a principle of Real Options Analysis, which this paper applies to least-cost capacity expansion scheduling via multistage stochastic mathematical programming. We apply the proposed model to a real-world utility with many investment decision stages using a generalized scenario tree construction algorithm to efficiently approximate the probabilistic uncertainty. To evaluate the implementation of Real Options Analysis, the use of two metrics is proposed: the value of the stochastic solution and the expected value of perfect information, which quantify the value of adopting adaptive and flexible plans, respectively. An application to London's water system demonstrates the generalized approach. The investment decision results are a mixture of long-term and contingency schemes that are optimally chosen considering different futures. The value of the stochastic solution shows that by considering uncertainty, adaptive investment decisions avoid £100 million net present value (NPV) cost, 15% of the total NPV. The expected value of perfect information demonstrates that optimal delay and early decisions have £50 million NPV, 6% of total NPV.
Sensitivity of results to the characteristics of the scenario tree and uncertainty set is assessed.

Introduction and Background

Water utilities aim to maintain an efficient and reliable water supply service by optimally combining the scheduling of supply augmentation projects and demand reduction policies (Mortazavi-Naeini et al., 2014). Water planners investigate a range of feasible interventions including both supply-side (e.g., wastewater reuse, desalination, and reservoirs) and demand-side interventions (e.g., demand reduction and leakage reduction). In its simplest form, the capacity expansion problem refers to finding the optimum timing and scale of predefined projects. Deterministic supply-demand optimization aims to meet service level commitments under historically dire conditions and identifies a fixed least-cost schedule of system upgrades (Padula et al., 2013). However, fixed investment plans are brittle; that is, if future conditions turn out to be different than assumed, the plan is likely to fail (Chung et al., 2009). The antidote to brittleness is robustness (defined as a decision that performs acceptably well over a range of conditions) and flexibility (defined as the ability to switch a decision depending on outcomes that materialize; Maier et al., 2016). Methods that use an ensemble of plausible scenarios to seek robustness and flexibility are discussed below. Robust decision making is an attempt to identify plans that perform well under a wide range of plausible future conditions (Lempert et al., 2006). That is, investment plans should aim to be insensitive to the most significant uncertainties (J. W. Hall et al., 2012; Huskova et al., 2016; Lempert et al., 2006; P. A. Ray & Brown, 2015). Robust plans trade optimality with the ability to perform acceptably well in a wide range of future scenarios.
Robust decision making has been applied in a range of water resource planning contexts, such as in England (Matrosov et al., 2013), in Australia (Mortazavi-Naeini et al., 2015), and in Southern California (Tingstad et al., 2013). Robust approaches accommodate a wide range of possible future conditions (i.e., mild to dire). Depending on the statistics used to quantify the performance of the system over the set of possible scenarios, they may lead to excess capacity (over-investment; Herman et al., 2015) if an excessively conservative set of actions is chosen (Shapiro, 2012). If optimization is used, different metrics to define robustness will lead to different results (McPhail et al., 2018; Mortazavi-Naeini et al., 2015). A robustness metric determines how a definition of robustness is operationalized (Kwakkel, Eker, & Pruyt, 2016). Misdefined robustness metrics generally lead to solutions that underestimate the system performance with respect to the one achievable with a better metric (Giuliani & Castelletti, 2016). Adaptive approaches are based on considering the uncertain future and responding to future conditions by adjusting intervention schedules as the future manifests (Maestu & Gómez, 2012). Adaptability enables a system to adapt proactively to environments, markets, regulations, and technology (De Neufville & Scholtes, 2011). Dynamic Adaptive Policy Pathways (DAPP) and Real Options Analysis (ROA) are among the decision-making processes that differently identify adaptive strategies under an uncertain future (P. A. Ray & Brown, 2015). While DAPP appears in the literature to be implemented in situations where information on the likelihood of the multiple plausible futures is absent (Haasnoot et al., 2013; Kwakkel et al., 2015; Kwakkel, Haasnoot, & Walker, 2016), ROA typically makes use of probability information (Dixit & Pindyck, 1994; P. A. Ray & Brown, 2015) to treat future uncertainty.
DAPP is an amalgamation of two approaches, Adaptive Policy Making and Adaptation Pathways. The former is a structured approach for designing dynamic robust plans (Dessai & Sluijs, 2007; Kwakkel et al., 2010; Walker et al., 2001), and the latter uses adaptation tipping points to specify the conditions under which a given plan will fail as it no longer meets the specified objectives. DAPP includes transient scenarios representing multiple uncertainties used to analyze the vulnerabilities and opportunities of policy actions and how they develop gradually over time. Alternative types of actions are then identified to address these potential vulnerabilities and opportunities, specifying a dynamic adaptive plan (Hamarat et al., 2014; Herman et al., 2015; Kwakkel et al., 2015). In a water resource management context, adaptation tipping points could be a certain climate change trigger indicating that the current plan must change as new actions are needed to ensure water supply security. The challenge of this approach for water resource management applications is to identify good triggers for water management due to high natural variability, as well as a monitoring framework for short time periods of measurements (Diermanse et al., 2010). ROA is a probabilistic decision process with the ability to value flexibility and adaptability in future decision making when irreversibility and uncertainty are key characteristics of the decision problem (Dixit & Pindyck, 1994). While it can be used as part of the evaluation and design of DAPP (Buurman & Babovic, 2016), it is mainly used to enable planners to examine the implications of future uncertainties. Within ROA, flexibility is valued since it allows delaying commitment to large, costly, and irreversible decisions while either exercising different interventions or incrementally implementing interventions with high regret cost and long construction times until more information is available.
Adaptation is enabled because ROA provides an optimal sequence of future investment decisions that respond to changes in uncertainty over time. Traditional ROA methods are based on financial theory, such as the Black-Scholes equation (Black & Scholes, 1973) or expected value decision tree analysis (Dixit & Pindyck, 1994). ROA is implemented through different techniques. These include decision trees, lattices, and Monte Carlo analysis (Chow & Regan, 2011; De Neufville & Scholtes, 2011; Lander & Pinches, 1998; Trigeorgis, 1996) as well as multistage stochastic optimization programs (De Weck et al., 2004; Wang & De Neufville, 2004; Zhao et al., 2004). Combinations of staged decision making (Beh et al., 2014; Cai et al., 2015; Hobbs, 1997; Kang & Lansey, 2012; Kracman et al., 2006; P. Ray et al., 2011; Vieira & Cunha, 2016) and ROA (Jeuland & Whittington, 2014; Steinschneider & Brown, 2012; Woodward et al., 2014) can be found in the water and flood management literature. The number of decision stages in these multistage problems defines the frequency with which intervention strategies can be modified in the planning horizon. For example, P. Ray et al.'s (2011) long-term water supply planning under climate change uncertainties extends 75 years into the future and the decision stages are made in years 2035, 2060, and 2085. In another work, Woodward et al.'s (2014) model stages flood risk interventions at every 50-year time step over a 100-year time horizon. There has been significant effort to represent long-term future uncertainty in stages using a scenario tree, using different decomposition methods (Escudero, 2009; Mulvey & Ruszczyński, 1995; Rockafellar & Wets, 1991) and/or uncertainty reduction and clustering techniques (Dupačová et al., 2003; Gröwe-Kuska et al., 2003; Gülpınar et al., 2004; Heitsch & Römisch, 2005; Housh et al., 2013; Latorre et al., 2007; Šutienė et al., 2010). Nevertheless, applying ROA in water resource planning is still challenging for three reasons.
First, ROA is sensitive to the structure of the scenario tree, so the parameterization of its design must be defensible. This includes deciding the number of nodes over the planning horizon and choosing the branching between states. Second, the probability assignment to scenario branches and nodes affects the optimized investment decisions. This can become intractable for a relatively complex problem. Lastly, as the number of scenarios used grows, the problem becomes more complex, often without increasing the quality of the solution (Lander & Pinches, 1998; Wang & De Neufville, 2004). The decision-making process presented in this paper aims to explicitly seek adaptability and flexibility in least-cost supply-demand infrastructure investment planning. We estimate the value of adaptability and flexibility under conditions of probabilistic uncertainty where probabilities are assigned to future states of supply. This is different from decision making under deep uncertainty approaches (Lempert et al., 2006) where key criteria for evaluating alternative decisions such as robustness, adaptability, and trading off conflicting objectives are addressed without requiring probabilities (Kasprzyk et al., 2012; Lempert et al., 2006). To account for the above, this paper proposes a generalized uncertainty sampling and optimized scenario tree construction approach for multistage investment planning. We optimally build a scenario tree with multiple decision stages to allow for frequent and regular modifications to the investment strategies. The decision tree presented in this paper uses a range of supply scenarios to represent uncertainties of future climate change effects from mild to dire. The range of possible climate change futures was defined by the UKCP09 weather generator, which provides probabilistic projections of precipitation, temperature, and other variables for the United Kingdom using perturbed physics ensemble simulations (J.
Murphy, Sexton, Jenkins, Boorman, et al., 2009). The analysis has used UKCP09 data assuming that the impacts are for a medium emissions scenario, as reported in Thames Water (2014). The scenario tree is incorporated into a multistage stochastic optimization formulation that applies ROA for enabling flexible and adaptive water resource investment decisions. Frequent corrective decisions allow the model to compensate for insufficient or excessive investment made in initial decision stages. The recommendations of the proposed method depend on the probabilities assigned to the supply scenarios; errors in those probabilities will lead to errors in the model's recommendations. To measure the adaptability and flexibility enabled by the ROA implementation, two metrics are used and discussed. We apply the model to a water supply infrastructure planning problem in England over 50 years with a 5-year decision-making time step. The proposed approach is described in section 2 and the results of its application to Thames Water's London supply zone are presented and discussed in sections 3 and 4. Two metrics to evaluate the implementation of ROA are proposed in section 4.2. Sensitivity of results to the use of different scenario trees and the characteristics of the uncertainty set used to create the trees is assessed in section 4.3. Section 4.4 discusses the limitations of the proposed method, and section 5 concludes the paper.

Adaptive and Flexible Formulation for ROA Implementation

We take two steps in formulating a multistage stochastic program for ROA implementation. In the first step, a scenario tree (see definition in section 2.1) is generated to approximate the stochastic supply representing an ensemble of plausible futures. In the second step, a multistage mathematical programming formulation is solved on the scenario tree to obtain the future plan under plausible future scenarios.
The section concludes with an illustration of a utility that practices real-options investment decision making provided by the proposed formulation (section 2.3).

Scenario Tree Approximation

We consider a discrete time horizon T in which decisions are made at each stage t ∈ T. To facilitate adaptive decision making under changing future conditions, and to represent the multistage planning for flexible decision making, a set of paths is built to represent the evolution of an uncertain future. The paths, or trajectories, correspond to a particular state of the uncertain parameter in time. These paths are approximated using a tree structure which we refer to as a scenario tree. The scenario tree, schematized in Figure 1a, is built by creating the root node at time stage 1 associated with the first-stage deterministic decision. The successor nodes to the root depict the possible outcomes of the next decision point at time index 2. This process is repeated until the end of the planning horizon, resulting in a tree structure. A single scenario is then defined as a unique path from the root node to the terminal node defined by a leaf showing one realization of the future. The probability of scenario occurrence is defined by multiplying all state transition probabilities of the scenario path starting from the root and leading to the leaf.

Figure 1. (a) The parameters s_i and p_i are the scenarios and the transition probabilities for each outcome branch, respectively; for each pair of branches the sum of the probabilities adds to 1. A path is defined from root node to leaf node at the end of the planning horizon. (b) An illustration of a simple water resource problem solved with the proposed real-options formulation. The supply-demand gap and the activated intervention are provided above and below each tree node, respectively.

The scenario tree is an approximation of the stochastic process
and is suitable for multiperiod decision making as, until a given point on the tree, the past is shared among a set of scenarios while a future event is yet to manifest. In Figure 1a, an example scenario tree structure is presented. We see that tree nodes F and G share a common point C and all decisions that come before it. Nonanticipativity enforces that investment decisions at time t only utilize information that is available up to this stage. Hence, this dictates that all decisions made for scenarios 2 and 3 should be the same on nodes A and C. The path indicates that the possible outcomes from C in the next stage are transitions to either F with probability p_5 or G with probability p_6, subject to p_5 + p_6 = 1. The number of leaf nodes corresponds to the number of distinct scenarios, and their probabilities are calculated as the multiplication of the associated transition probabilities starting from the root and leading to the leaf node. For instance, the probability for supply scenario s_3 to occur, from the root to the end of the planning horizon, is p_2 × p_6 × p_11. Manually generating the above scenario tree and deciding on the number of nodes, leaves, and probability information on each node for practical purposes requires complex calculation and sufficient judgment (Lander & Pinches, 1998). This is especially a major deterrent to ROA implementation in complex decision problems, as scenario trees can quickly grow large. To account for this, we automatically construct the scenario tree by implementing the fast forward iterative greedy algorithm, which aims to minimize a so-called probability distance between the uncertainty sets. The algorithm optimally creates the most informative scenario tree based on the original stochastic process by successively bundling the tree nodes into separate sets to be later represented by a new node, while maintaining the probability information of the constructed uncertain process as close as possible to the original stochastic process.
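The fast forward greedy selection described above can be sketched in a few lines of Python. The yield samples and uniform probabilities below are illustrative stand-ins, not the utility data used in the paper, and the one-dimensional absolute-difference distance is a simplifying assumption:

```python
import random

def fast_forward_reduce(scenarios, probs, n_keep):
    """Greedy fast-forward scenario selection: keep n_keep scenarios and
    redistribute each discarded scenario's probability to its nearest
    kept neighbour, approximately minimising the probability distance."""
    n = len(scenarios)
    dist = [[abs(scenarios[a] - scenarios[b]) for b in range(n)] for a in range(n)]
    kept, remaining = [], set(range(n))
    while len(kept) < n_keep:
        best_u, best_cost = None, float("inf")
        for u in remaining:
            trial = kept + [u]
            # expected distance of all still-unselected scenarios to the trial set
            cost = sum(probs[k] * min(dist[k][j] for j in trial)
                       for k in remaining if k != u)
            if cost < best_cost:
                best_u, best_cost = u, cost
        kept.append(best_u)
        remaining.discard(best_u)
    new_probs = {j: probs[j] for j in kept}
    for k in remaining:  # probability redistribution to nearest kept scenario
        new_probs[min(kept, key=lambda j: dist[k][j])] += probs[k]
    return sorted((scenarios[j], new_probs[j]) for j in kept)

# 100 equally likely yield samples reduced to a 5-branch stage
random.seed(1)
samples = [2000 + random.gauss(0, 100) for _ in range(100)]
branches = fast_forward_reduce(samples, [0.01] * 100, 5)
```

The returned branch probabilities sum to 1, so the reduced set remains a valid discrete approximation of the original distribution.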
By bundling similar scenarios and reducing the number of nodes, this not only produces a valuable and computationally accessible smaller multistage decision model but also reduces the burden of manually representing the uncertainty through scenario tree generation for multistage stochastic ROA implementation. Appendix A gives details of the construction algorithm, where the quality of the constructed tree is controlled by a metric that calculates the percentage of information lost, known as the relative probability distance (Heitsch & Römisch, 2011). The lower the metric value is, the less information is lost and hence the more accurate the constructed tree becomes. This is set to 5% in this study, as we assume that this is an acceptable loss of information. The tolerance indicates the relative probability distance between the constructed tree and the original stochastic process and consequently determines the number of scenarios preserved in the scenario tree.

Staged Mathematical Model

With a scenario tree constructed, we formulate a mathematical program to represent the staged decision process for obtaining an optimal decision for each node of the scenario tree. This provides adaptive optimal solutions which propose actions to be implemented at each decision-making time interval and for each estimate of the uncertain future. We introduce a binary decision variable dS representing the activation of an intervention at each node of the tree, where the decision at each stage only depends on the information available up to that point. The following formulation defines the staged mathematical program for sequential capacity investment decision making over time. Let N be the set of nodes on a scenario tree and N_t be the set of nodes belonging to stage t. For a node n ∈ N we denote by n − 1 and n + 1, respectively, the predecessor and successor nodes on the scenario, and by p_n the probability that node n is realized.
For a node n ∈ N and scenario s ∈ Ω, Ω_s is the set of nodes that belong to scenario s. In the formulation, n is a node, t denotes time (stages), i is an intervention, p_n is the probability that node n is realized, r is the discount rate, eS_{n,i} denotes levels of existing supply from intervention i, cC_i is the undiscounted capital cost of intervention i, cF_i is the undiscounted fixed operational cost of intervention i, cV_i is the undiscounted variable operational cost of intervention i, D_t is demand in time t, cS_{n,i} is the maximum capacity of intervention i in node n, dS_{n,i} is the activation of intervention i for node n, S_{n,i} is the supply from intervention i for node n, and aS_{n,t,i} is the associated supply of intervention i to supply on node n in time t. The optimization model minimizes the expected cost of investments discounted back to the present. Constraint (2) makes sure that the supply balances the demand in each node of the tree. Constraints (3)-(5) allow an intervention to be utilized up to its capacity considering its construction period before its activation; constraint (3) sets an earliest year for the yield, constraint (4) associates the available supply with the construction period, and constraint (5) prevents yield from being used during the construction period. Constraint (6) forces an intervention, once activated, to remain active at later nodes of the tree. Activation of two interventions that are mutually exclusive is avoided by introducing constraint (7) over the set of mutually exclusive interventions, I_m. Constraint (8) ensures that modular interventions can be further expanded as long as the previous phase has been completed. I_d denotes the set of dependent interventions and I_p denotes the set of prerequisite interventions. The proposed problem structure follows a node-based formulation of the multistage stochastic program.
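The objective the formulation minimizes, the expected cost of investments discounted back to the present, can be sketched for a given activation plan. The three-node tree, costs, and stage years below are illustrative, not values from the case study:

```python
def expected_npv(nodes, r=0.045):
    """Probability-weighted, discounted cost over scenario tree nodes.
    Each node is (stage year t, realisation probability p_n, cost at node)."""
    return sum(p * cost / (1 + r) ** t for t, p, cost in nodes)

# Illustrative tree: a root investment plus two equiprobable stage-2 branches.
tree = [(0, 1.0, 100.0),   # root: first-stage scheme
        (5, 0.5, 50.0),    # dry branch: contingency scheme activated
        (5, 0.5, 0.0)]     # wet branch: no further spend
objective = expected_npv(tree)
```

Because each node carries its own realisation probability, summing over nodes is equivalent to summing discounted costs along every scenario path weighted by the path probability.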
Intervention activation constraints, due to path dependency, are nonanticipative. For instance, although scenarios s_i and s_j end up in different terminal nodes, they can pass through the same node in time t. In that case, the intervention activation decision variables at time stage t in scenario s_i equal those of the other scenario s_j. This means that the multistage stochastic program will determine an optimal decision for each node of the scenario tree, given the information up to time stage t. Given that there are multiple succeeding nodes, the optimal decisions will not exploit hindsight, but they should anticipate future events. The mathematical model above allows nonanticipativity to be incorporated implicitly through its scenario tree formulation. Constraint (10) makes sure that an intervention can only be activated at most once in any scenario. Figure 1b illustrates a simplified scenario tree for the purpose of demonstrating the ROA implementation. We consider a utility that wants to cost-effectively balance future supply and demand by investing in a new reservoir with three possible capacities (50, 100, or 150 Ml/d). The 50 Ml/d reservoir can be built with a fixed or modular capacity. As shown in Table 1, if the utility builds a 50 Ml/d fixed capacity reservoir at a cost of £1,000m, they cannot expand it later. Alternatively, if they pay a higher initial capex cost (£1,100m) for a modular 50 Ml/d reservoir design, they are able to expand later to 100 Ml/d or further to 150 Ml/d by paying the relevant expansion cost (Table 1). The £100m premium is an upfront cost that the utility pays to reserve the right for expansion in later stages if required. This premium allows the utility to delay investment for the sake of acquiring information. The mathematical formulation in section 2.2 finds the minimum discounted expected investment cost of capacity expansion over a four-stage planning horizon. The supply-demand gap is shown in each node of the tree.
In t_2 node B, a fixed reservoir of 50 Ml/d capacity is activated (50 Ml/d fx) since its capacity is sufficient to balance the supply-demand gap till the end of the planning horizon. In t_2 node C, however, a 50 Ml/d modular capacity is the most cost-effective intervention that gives the ability to respond to uncertain supply-demand levels in the future. If s_2 happens, it avoids further investment till the end of the planning horizon, while under s_3, it requires the planner to expand capacity by an extra 50 Ml/d at t_4 to balance the larger supply-demand gap. In t_2 node D, the 50 Ml/d modular reservoir is again picked by the mathematical model, incrementally increasing capacity by an extra 50 Ml/d and 100 Ml/d under s_4 and s_5, respectively, till the end of the planning horizon. This example shows how the ROA implementation is used to assess, under different future scenarios, the suitability of paying a premium to postpone capacity expansion.

Application to Infrastructure Investment Planning

England offers an interesting context to apply adaptive and flexible multistage investment planning, because every 5 years the economic regulator requires the water utilities to produce a plan demonstrating that the supply-demand balance is satisfied throughout their operating area over a long-term planning period. A plan is an optimal combination of new supply and demand management interventions, scheduled to meet estimated water supply zone demand plus an uncertainty allowance at least cost, and is periodically updated. That is, company asset planners must select short-term (5 years) interventions for the next planning decision period and be able to demonstrate how they fit within a strategic long-term plan (25 years or more). Current water capacity expansion scheduling approaches used by water companies in England are based on a deterministic annual supply-demand balance (Padula et al., 2013). However, present investment decisions need to account for significant uncertainty.
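The premium-for-flexibility logic of the Figure 1b illustration can be checked with a back-of-envelope expected-cost comparison. Only the £1,000m and £1,100m upfront costs come from the text; the later costs, the expansion probability, and the expansion year below are assumed purely for illustration:

```python
def npv(cost, year, r=0.045):
    """Discount a single cash flow back to the present."""
    return cost / (1 + r) ** year

def expected_cost(upfront, later, p_later, year, r=0.045):
    """Expected NPV of a strategy: pay `upfront` now and, with
    probability p_later, pay `later` in `year` to close a larger gap."""
    return npv(upfront, 0, r) + p_later * npv(later, year, r)

p_gap = 0.5  # assumed probability that the larger supply-demand gap materialises
fixed = expected_cost(1000, 900, p_gap, 10)    # fixed design + whole new scheme later (900 assumed)
modular = expected_cost(1100, 400, p_gap, 10)  # modular design + cheaper expansion later (400 assumed)
# The 100 premium is worthwhile when the expected discounted
# expansion saving exceeds it, as it does with these numbers.
```

With these assumed figures the modular strategy has the lower expected NPV, which is the sense in which the upfront premium "buys" an option on future capacity.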
Climate change projections for the United Kingdom in 2009 (UKCP09) are usually used to define the climate states in relevant studies of water asset planning in England. Borgomeo et al. (2014, 2016) use daily time series of precipitation and temperature derived from the UKCP09 projections coupled with a transient stochastic weather generator produced by Glenis et al. (2015). They use a rainfall runoff model to generate daily flow time series to simulate the Thames water resource system. The output from each simulation is a record of the annual frequency of water shortages of different levels of severity (Borgomeo et al., 2016). The baseline supply uncertainty presented in this paper has several sources of uncertainty, including vulnerable surface and groundwater licenses, the impact of climate change on source yields, the gradual pollution of sources causing a reduction in abstraction, as well as the accuracy of supply-side data, which depends on the nature of the intervention (pumping, aquifer, etc.; Thames Water, 2014). Supply uncertainty is calculated using the UKCP09 for the current annual supply-demand planning framework, termed Economics of Balancing Supply and Demand (EBSD; Padula et al., 2013), where annual central estimates of supply are compared to central estimates of demand (see Thames Water, 2014 for details). Multimodel ensembles of general circulation models (GCMs) can be used by water planners to derive probability distributions of climate change impacts (Dessai & Hulme, 2007; Fowler et al., 2007). The resulting scenarios define the domain of plausible outcomes under climate change. We use deployable output, which is the volume of water that can be supplied from a water company's sources (surface water, groundwater, etc.) or bulk supply, constrained by environment, licensing, hydrological or hydrogeological factors, water quality, and works capacity.
In England, deployable output is estimated using prescribed methodologies as outlined in Water Resources Planning Tools (United Kingdom Water Industry Research, 2012), commonly through system simulation of long historical or plausible future hydrological time series. We apply the proposed multistage modeling to the London urban water supply area, which is located in the Thames basin, southeast England. This basin has been classified as water stressed and is facing high population growth (Environment Agency, 2013), making it a suitable case study to investigate the use of the proposed flexible approach, as without investment security of supply cannot be achieved. Water supply is managed by Thames Water, a privately owned water utility, serving 15 million customers across London and the Thames Valley. Financial costs include the net present value (NPV) of capital expenditures incurred when selecting an intervention and operational expenditures, using a discount rate of 4.5% (Thames Water, 2014). In this case study, a scenario tree is constructed to approximate the continuous distribution of the underlying London water supply (the annual yield or deployable output) provided by London's water utility (Thames Water). We used the supply's cumulative distribution function (CDF) and evenly partitioned the CDF into 100 regions. Each region's highest percentile value is picked as the sample point. The probability of a scenario occurring is equal to the probability that supply falls within that region (the supply range of each scenario interval is defined by the upper and lower percentile values). For instance, the scenario interval for scenario 2 is defined by (X_1, X_2) and its probability is calculated as P(S_2) = F(X_2) − F(X_1), where F is the supply CDF. Given the evenly partitioned CDF using the percentile values, the probability of occurrence of each scenario is 1%. This is shown in Figure 2.
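The percentile-based sampling can be sketched with an empirical CDF. A normal distribution stands in for the utility's actual supply distribution here, and its parameters are illustrative, not Thames Water data:

```python
from statistics import NormalDist

# Stand-in supply distribution (illustrative): deployable output in Ml/d.
supply = NormalDist(mu=2100, sigma=80)

draws = sorted(supply.samples(10_000, seed=42))
n_scen = 100
block = len(draws) // n_scen
# Evenly partition the empirical CDF into 100 regions and take the
# highest value in each region as that scenario's sample point.
scenarios = [draws[(k + 1) * block - 1] for k in range(n_scen)]
probs = [1.0 / n_scen] * n_scen  # each scenario has probability 1%

# Region probability from the CDF boundaries, e.g. P(S_2) = F(X_2) - F(X_1):
x1, x2 = supply.inv_cdf(0.01), supply.inv_cdf(0.02)
p_s2 = supply.cdf(x2) - supply.cdf(x1)
```

The boundary check at the end recovers the stated 1% probability per region directly from the CDF, matching the P(S_2) = F(X_2) − F(X_1) relation.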
This set is used to efficiently construct the scenario tree, where the probability of each node and the threshold value for branching from one node to another are calculated optimally. The constructed optimal scenario tree is used in the multistage stochastic programming model for ROA implementation. We do not consider uncertainty around the demand growth rate and assume that the demand for water increases at a known rate. Figure 3 shows the supply uncertainty range for London as well as the deterministic demand values. The problem is structured so as to allow asset managers to review the plan at distinct decision points (every 5 years) and respond by selecting additional interventions or expanding existing ones, taking advantage of the observed changes to the main uncertainty drivers (e.g., water supply, demand, capital, and operational cost of interventions). We assume deployable outputs remain constant during the 5-year planning decision periods. Large water resource schemes can be built in phases. The flexibility to build resources in incremental stages allows for improved supply estimates before committing to larger schemes. Final plans are submitted in the year before the first planning decision period covered, and in practice, the proposed approach would allow planners to decide on their investment plans depending on the supply-demand gap a year ahead of the end of each 5-year period. Although the plans should demonstrate security of supply over the entire 50-year planning period, the main focus of asset managers is to decide which interventions should be implemented in the short term, that is, the optimal investment portfolios for planning decision period 2020-2024. The scenario tree to approximate the stochastic London water supply-demand balance (due to supply uncertainty) is optimally produced as described in section 2.1 using the uncertainty over the deployable outputs. 
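Costs are compared as NPVs at a 4.5% discount rate over the 50-year horizon. A minimal discounting sketch (the cashflow figures are invented for illustration; the paper's actual capital and operating costs are not reproduced here):

```python
def npv(cashflows, rate=0.045):
    """Net present value of a list of annual cashflows; element t is the
    cashflow in year t, discounted at `rate` (4.5% in the plan)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Example: capital cost of 100 (arbitrary money units) in year 0 plus an
# operating cost of 2 per year for the remaining 49 years of a 50-year horizon.
total_cost = npv([100.0] + [2.0] * 49)
```

Because later cashflows are discounted more heavily, deferring capital spend to a later decision period lowers the NPV, which is what makes the delay options discussed below valuable.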
Each of the 100 unique paths denotes a plausible supply scenario (a set of deployable output values for each source). Each path starts from the unique root node at the first period and is linked to a supply scenario at each distinct time period (see Figure 4). The 50-year planning period was divided into 5-year time steps forming 10 discrete time periods t. Asset managers can rebalance their infrastructure portfolios at the beginning of each planning decision period. Submission of final Water Resource Management Plans occurs 1 year before the plan is due to come into action, following a consultation period. At each time step, the scenario tree branches into nodes that belong to the next period. As seen in the simplified scenario tree in Figure 1a, in t 2 node C has a decision which leads to nodes F and G in the next period, representing different levels of supply-demand balance. The branching continues up to the nodes of the final period, whose number corresponds to the number of supply scenarios. See Table 2 for the number of nodes used at each time step. We note that the scenario tree approximation method is independent of the staged mathematical model presented earlier and allows consideration of other sources of uncertainty through the use of joint probability distributions of random variables. This can be achieved if the uncertainty set is more than one dimensional, for instance, if it has both supply and demand distributions. The joint probability density function of the supply-demand gap, which represents the stochastic component, is used to derive the scenario tree. Appendix A gives details of deriving the scenario tree when the uncertainty has more than one dimension. We consider 47 alternative supply interventions in the appraisal process. 
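The tree structure described above can be sketched with a toy four-scenario tree. The supply values and probabilities here are illustrative; the paper's trees grow to 100 leaves, with node counts per time step listed in Table 2:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    period: int
    prob: float               # unconditional probability of reaching the node
    supply: float             # deployable output (Ml/d) in this state
    children: list = field(default_factory=list)

def leaves(node):
    """Final-period nodes reachable from `node`; each unique root-to-leaf
    path is one supply scenario."""
    if not node.children:
        return [node]
    return [leaf for child in node.children for leaf in leaves(child)]

# Toy tree: a unique root that branches twice, giving 4 scenarios.
root = Node(1, 1.0, 2100.0, [
    Node(2, 0.5, 2150.0, [Node(3, 0.25, 2180.0), Node(3, 0.25, 2120.0)]),
    Node(2, 0.5, 2050.0, [Node(3, 0.25, 2080.0), Node(3, 0.25, 2020.0)]),
])
```

As in the paper, the number of final-period nodes equals the number of supply scenarios, and their unconditional probabilities sum to one.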
Some interventions have been developed as long-term water resource interventions and are expected to be operated at high utilization given their capacity (e.g., intervention i28), while others are being considered by Thames Water as contingency interventions (e.g., intervention i21), expected to be operated at low utilization to avoid excessive operational costs. The type and capacity ranges of the interventions are given in Table 3 and are provided by Thames Water. Large interventions of 50 Ml/d or greater (such as effluent reuse schemes, desalination plants, and reservoirs) can also be built with a modular capacity that allows expansion later on. This ability for future expansion comes at a price. For each type of intervention, the premium for modular capacity is expressed as a percentage. The percentage value expresses how much larger the initial capital investment cost of the intervention with modular capacity is compared to the fixed (unexpandable) one. Figure 4 shows the nine supply scenarios in planning decision period 2020-2024 at t 2, magnified from the scenario tree. The solutions in 2020 are clustered into six sets of optimal interventions, by identifying the common sets of interventions across the nine nodes. Decision paths are formed using supply-demand gap threshold values. Each threshold value designates which set of interventions is optimal for the given forecasted deficit and leads to a different amount of water capacity increase for the planning decision period 2020-2024. The added water supply capacity is optimal for each scenario if it occurs. The scenario tree within the ROA incorporates uncertainty about how the evolution of different futures may trigger the selection of different interventions and hence examines the implications of future uncertainty. In this long-term water resource planning problem, sequential decisions are made at multiple stages over time. 
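The threshold-based decision paths can be sketched as a lookup from forecast supply-demand balance to intervention set. The values −179.7, −32.5, 10.5, and 16.0 Ml/d appear in the case-study text; the −120.0 Ml/d threshold is a made-up filler so that six sets are delimited, as in the study:

```python
import bisect

# Supply-demand balance thresholds (Ml/d), ascending. Only -120.0 is
# invented; the other four values are quoted in the case study.
THRESHOLDS = [-179.7, -120.0, -32.5, 10.5, 16.0]
SETS = ["set 1", "set 2", "set 3", "set 4", "set 5", "set 6"]

def intervention_set(balance):
    """Map a forecast supply-demand balance to its optimal intervention set
    (larger deficits, i.e. more negative balances, trigger lower-numbered
    sets with more added capacity)."""
    return SETS[bisect.bisect_right(THRESHOLDS, balance)]
```

For example, a balance between −32.5 and 10.5 Ml/d maps to set 4, a balance below −179.7 Ml/d to set 1, and a surplus above 16.0 Ml/d to set 6 (no intervention), consistent with the discussion of Figures 4 and 6.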
Early stage decisions are based on long-term supply-demand forecasts whose accuracy decreases over time. The multistage optimization model formulation allows adjusting earlier stage decisions in later stages. This way the model compensates for the impact of earlier decisions made under supply-demand forecast inaccuracy.
Solving the Water Resource Planning Problem at Multiple Stages Over Time
In the London case study, the scenario tree is made based on the state of the world as known in 2015; from that vantage point the future is described via the supply scenarios for 2020. In our case study, if the planner in 2015 considers that the supply-demand balance in 2020 is most likely to be between 10.5 Ml/d and −32.5 Ml/d, then set 4 is the best intervention response. This short-term set of investment interventions is optimally obtained using a scenario tree that considers the longer-term future, and hence the interventions associated with this set delineate the best response to uncertainty. The proposed approach is significant because least-cost scheduling of water supply infrastructure is required of English water utilities, and there is widespread support at the policy level for improving it to consider flexibility and adaptability. Table 4 shows that 40% of the 100 supply scenarios were directed to the top two paths (sets 5 and 6), where no extra capacity is needed in planning decision period 2020-2024. However, in set 5, an intervention is planned to be delivered in planning decision period 2025-2029 to meet the future demand for water beyond the 5-year period. The remaining 60% are directed into paths where London water capacity is increased by selecting alternative interventions. When the supply deficit is greater than 10.5 Ml/d, intervention i28 is always selected, with its utilization (the amount of water supplied from an intervention) increasing as levels of existing supply decrease. 
Figure 5 shows the utilization of intervention i28 of 150 Ml/d capacity and intervention i4 of 5 Ml/d capacity, indicating that small schemes are selected to postpone the activation of large schemes in case water supply in 2020 is no greater than 2,036.4 Ml/d. In set 4, intervention i28 is replaced by i21 in planning decision period 2020-2024 as an alternative intervention with 150 Ml/d (see Table 4). The two interventions have equal capacity but contrasting intended usage in terms of the amount of water produced. Intervention i21 has a relatively lower cost to build and a higher cost to operate and is considered to be a provisional contingency scheme. Contingency schemes are not expected to have a high capacity utilization, resulting in excess capacity due to their higher operational cost compared to the average cost of taking the water from alternative water sources. Due to their higher operational cost, these schemes can be substituted if less expensive interventions are available in the future. Conversely, intervention i28 is an irreversible long-term intervention (once built, it is used for the rest of the modeled time horizon) with an expected high utilization rate given its relatively higher construction costs but lower operational costs. This indicates that the selection of schemes is decided on the basis of the estimated required water utilization under different future uncertainty. In doing so, overspending on capital is avoided. When the lower operational costs outweigh the savings in capital expenditure due to higher utilization, the long-term intervention i28 is selected. The decision on long-term intervention i28 is, however, delayed on the three paths that begin with sets 4, 5, and 6 of investment interventions in 2020. Instead, the modeling suggests replacing it with activation of the contingency intervention i21 in sets 4 and 5 and no intervention activation in set 6. 
Interventions i1, i2, and i3 are only selected in the path that begins with set 1 in 2020, as these contingency schemes are only required when the supply-demand balance is expected to be less than −179.7 Ml/d. A key strength of ROA is the opportunity it provides for exploiting learning over time. For example, Figure 6 shows that if the estimated supply-demand gap is greater than 16.0 Ml/d, there is no need to make an investment in the current planning decision period. This flexibility is valuable because by not selecting an intervention now and deferring it to the next planning period, asset managers avoid the costs of building an intervention until it is needed later. The results, shown as a colored bar chart in Figure 7, depict the frequency of activation of interventions in nodes at each time step on a scale from 0% (white) to 100% (black). A high percentage of activation denotes that the selection of this intervention is robust across a number of supply-demand scenarios. For instance, as shown in Table 4, in the S1 set of interventions, i1, i2, and i3 are all contingency interventions of small capacities, which get activated at t 2 in the most extreme scenarios that correspond to 2% of all scenarios in t 2 . As shown in Figure 4, these extreme scenarios, where S1 is selected at t 2 , pass through one node. Since interventions i1, i2, and i3 are selected only in S1, they have an activation frequency of 11% (one out of nine nodes) in Figure 7 in t 2 . By the end of the planning period, unlike interventions i2 and i3, i1 has an increased activation frequency. This implies that contingency interventions i2 and i3 are only selected in extreme scenarios, while activation of i1 is more robust across a number of supply-demand scenarios; that is, intervention i1 will also be activated in less extreme scenarios. 
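The activation-frequency calculation behind Figure 7 reduces to a count over nodes at each time step. A minimal sketch, reproducing the 1/9 ≈ 11% figure for i1, i2, and i3 at t 2 (node contents other than the single extreme node are simplified to empty sets):

```python
def activation_frequency(active_by_node):
    """Share of nodes at one time step in which each intervention is active;
    `active_by_node` maps node id -> set of activated intervention names."""
    n = len(active_by_node)
    counts = {}
    for interventions in active_by_node.values():
        for name in interventions:
            counts[name] = counts.get(name, 0) + 1
    return {name: c / n for name, c in counts.items()}

# Nine nodes at t2; i1, i2, i3 active only in the single most extreme node.
nodes_t2 = {k: set() for k in range(9)}
nodes_t2[0] = {"i1", "i2", "i3"}
freq = activation_frequency(nodes_t2)
```

A frequency near 100% flags an intervention whose selection is robust across scenarios, while a low frequency flags one reserved for extreme states of the world.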
Metrics for Flexibility and Adaptability Assessment
We introduce two metrics used in stochastic programming problems (Birge & Louveaux, 1997; Escudero et al., 2007), namely, the value of the stochastic solution (VSS) and the expected value of perfect information (EVPI), to measure the adaptability and flexibility of the decisions suggested by the ROA formulation. VSS is calculated by replacing the uncertain variables with their expected values and measuring the performance of the resulting expected value solution under future uncertainty. EVPI is estimated by comparing the solution of the ROA-based approach with the optimal solution of the wait-and-see problem with perfect information. Appendix B gives mathematical detail on the calculations of VSS and EVPI. In the context of this paper, VSS indicates the benefit of implementing ROA via a multistage stochastic program that explicitly allows adaptation to different future conditions via a distribution of uncertain future supply, instead of using the average supply values in each stage. VSS quantifies the cost of not recognizing the uncertainty and hence ignoring the adaptability advantage ROA provides. For the London case study, VSS is £113,206,815 discounted over the 50-year planning period. VSS estimates the value of adaptability by quantifying the cost, avoidable through plans that adapt to changing future conditions, of Thames Water ignoring uncertainty. For the London case study, the VSS result is significant when compared to the total investment NPV cost of £737,648,067; that is, VSS corresponds to 15.4% of the total NPV cost. This relatively high VSS value is an indication that supply uncertainty is an important factor in London's supply-demand problem, whose consequences adaptive solutions to a changing future can mitigate. EVPI measures the value of information in planning under uncertainty. 
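The two metrics follow directly from the three objective values defined in Appendix B. In this sketch, treating the reported total investment NPV as the adaptive stochastic optimum AP is an assumption made for illustration; WS and EV are then backed out from the reported EVPI and VSS:

```python
def vss_and_evpi(ws, ap, ev):
    """For a minimization problem, WS <= AP <= EV must hold; then
    VSS = EV - AP (cost of ignoring uncertainty) and
    EVPI = AP - WS (value of perfect information)."""
    assert ws <= ap <= ev, "stochastic-programming inequality violated"
    return ev - ap, ap - ws

# London case-study figures (GBP, NPV over 50 years). AP = total NPV is an
# assumption for this illustration; EV and WS are implied, not reported.
AP = 737_648_067
EV = AP + 113_206_815   # implied expected-value-problem result
WS = AP - 44_092_250    # implied wait-and-see value
vss, evpi = vss_and_evpi(WS, AP, EV)
```

The inequality check encodes equation (B1): an adaptive plan can never do better than perfect foresight, and never worse than a plan built from average values alone.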
EVPI estimates how important the evolution of information over time is in the context of uncertainty, and therefore it indicates the value of a wait-and-see decision: how valuable it is to know the future before making a decision. In the context of ROA implementation, EVPI is a measure valuing the flexibility of delaying irreversible investment commitments and taking early provisional actions until new information is available. For the London case study, EVPI is £44,092,250 discounted over the 50-year planning period, which is 6% of the total NPV cost. EVPI estimates that the value of waiting to gain more information corresponds to 6% of total NPV. Even this small percentage reflects a significant value for the implementation of large irreversible long-term interventions, given their large socioeconomic and environmental impacts.
Sensitivity to Scenario Tree
It is relevant to explore the sensitivity of results to the use of different scenario trees as well as to the characteristics of the uncertainty set used to create the trees. We have performed two types of sensitivity analysis. First, we investigated the consequences of generating and using alternative scenario trees in the analysis. The London case study was run using 30 different and randomly generated scenario trees from the stochastic London supply distribution, making sure that each tree has the same uncertainty source data but a different structure, that is, a different number of nodes at each time step as well as a different branching structure. Then, we performed a second type of sensitivity analysis to investigate the consequences of using random subsets of the full set of scenarios. Each tree was generated using a different subset of supply scenarios randomly sampled from the full set of 100 scenarios. The results of both types of sensitivity analysis, shown as bar charts in Figures 8 and 9, respectively, depict the activation frequency of the interventions in planning decision period 2020-2024. 
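The second sensitivity analysis can be sketched as repeated random subsampling of the 100 scenarios. The subset size of 60 is an assumption here, since the text does not state how many scenarios each subset contained:

```python
import random

def scenario_subsets(n_scenarios=100, n_subsets=30, size=60, seed=7):
    """Random subsets of scenario indices for the subset-based sensitivity
    analysis; `size` is an assumption (not stated in the text)."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_scenarios), size))
            for _ in range(n_subsets)]

subsets = scenario_subsets()
```

Each subset would then be fed through the tree construction and optimization, and the resulting intervention activations compared across the 30 runs.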
It can be appreciated from both types of sensitivity analysis that most interventions suggested by the multistage optimization planning have a high frequency of selection (more than 75%), indicating the quality of the interventions' activation recommendations regardless of whether a different scenario tree (first type of sensitivity analysis) or different subsets of the full set of scenarios (second type of sensitivity analysis) were used. Other types of sensitivity analysis could include understanding the impact of using different relative tolerances by varying the relative distance between the constructed tree and the original stochastic process.
Limitations of the Approach
The proposed approach is an extension of least-cost supply-demand planning (Padula et al., 2013) aiming to optimize for flexibility and adaptability in addition to cost when investing in infrastructure under supply uncertainty. Planning resources via the yield or deployable output concept implies simplifying the problem by comparing a single value of annual regional supply with an annual demand. Although the use of regional annual supply and demand balancing is conceptually simple, these aggregate quantities are difficult to validate (J. Hall, Watts, et al., 2012). Unlike simulation-based optimization approaches that have become routine for analyzing water policies, the proposed optimization model does not rely on simulating alternative observable outcomes, such as the frequency with which customers are predicted to experience water shortages. The analysis uses supply uncertainty data from the UKCP09 weather generator that addresses GCM uncertainties. 
Although the GCM-based climate projections are obtained from the most credible climate change information available, concerns about the assignment and use of probabilities for these future climate change scenarios have been raised (Maier et al., 2016). These climate models use numerous assumptions about how the future will unfold (Taner et al., 2017), which impact results. For instance, climate projections are contingent on greenhouse gas (GHG) emissions scenarios and future reductions in atmospheric aerosols (Stouffer et al., 2017), which are unknown. Such assumptions impact the probability distributions in climate model outputs, which in turn will impact the supply probabilities and the findings of the analysis in our proposed approach (Dessai & Sluijs, 2007). Another limitation of least-cost supply-demand planning is that plans are optimized using a single least-cost objective, requiring all aspects of system performance to be monetized and leading to potentially imbalanced decisions (Matrosov et al., 2015). Using a single objective might prevent the finding of good solutions.
Conclusion
This paper described how a least-cost scheduling approach for water infrastructure investment planning, used currently at national scale in England, can be extended to explicitly enable flexibility and adaptability given future supply uncertainty. The ROA concept, using scenario trees over a predefined planning horizon with distinct decision points, has been applied to allow rebalancing of the supply-demand system at intermediate stages. A compact scenario tree is generated to approximate the stochastic supply, representing an ensemble of plausible futures. At each time step of the planning horizon, an optimal set of interventions is identified in each node of the scenario tree according to plausible source yield scenarios. Supply-demand gap threshold values are used to determine which path to follow in order to minimize the NPV cost of investments. 
The staged decision process provides the planner with adaptive solutions whose implementation can be delayed or replaced as information on the future supply-demand balance is gradually revealed. The proposed flexible and adaptive approach is applied to London's water supply planning problem. In the appraisal process, 47 interventions of different capacities (ranging from 1.5 Ml/d to 150 Ml/d) and alternative types (e.g., wastewater reuse, desalination, and reservoirs) are considered. The 50-year planning period using 100 equally probable supply scenarios identified six optimal sets of investment interventions for the planning decision period 2020-2024. Depending on the forecasted short-term supply-demand balance, the planned capacity expansion ranges from 0 Ml/d (no intervention) to 330 Ml/d (as a result of activating seven interventions). The results show that the large forecasted gap between supply and demand in London is bridged through long-term (maintained after selection) interventions, and through contingency schemes when the gap is smaller. The results demonstrate the benefits of ROA in enabling adaptive and flexible decision making in water resource planning. These are quantified using the VSS and EVPI metrics, showing that ignoring adaptive planning costs 15.4% of the total NPV and that flexible decision making has a value of 6% of the total NPV of London's water supply system. Sensitivity of results to the use of different scenario trees, as well as to the characteristics of the uncertainty set used to create the trees, is assessed; the results point toward high-quality intervention activation selections by the proposed model.
Appendix A: Scenario Tree Construction Algorithm
The scenario tree construction uses the original supply scenarios to build a tree with probabilistic weights assigned to each node used in the optimization model. 
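The core idea of the construction, formalized below, is scenario reduction. A minimal single-period sketch of the backward reduction in the spirit of Gröwe-Kuska et al. (2003): repeatedly delete the scenario whose probability-weighted distance to its nearest kept neighbour is smallest, and give its probability to that neighbour (the scenario values here are made up):

```python
import numpy as np

def backward_reduce(scenarios, probs, keep):
    """Greedy backward scenario reduction: drop scenarios contributing
    least to the Kantorovich distance, redistributing their probability
    to the nearest preserved scenario."""
    kept = list(range(len(scenarios)))
    q = np.asarray(probs, dtype=float).copy()
    d = lambda a, b: float(np.linalg.norm(scenarios[a] - scenarios[b]))
    while len(kept) > keep:
        best = None
        for i in kept:
            j = min((k for k in kept if k != i), key=lambda k: d(i, k))
            cost = q[i] * d(i, j)       # contribution to Kantorovich distance
            if best is None or cost < best[0]:
                best = (cost, i, j)
        _, i, j = best
        q[j] += q[i]                     # redistribution rule
        kept.remove(i)
    return kept, q

# Five equiprobable 1-D supply scenarios (illustrative Ml/d values).
scen = np.array([[0.0], [0.1], [5.0], [5.2], [10.0]])
kept, q = backward_reduce(scen, np.full(5, 0.2), keep=3)
```

Nearly coincident scenarios are merged first, while an isolated scenario (10.0 here) survives the reduction, which is what keeps the reduced tree representative of the tails of the distribution.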
The tree construction is an optimization method based on the Kantorovich transport functional (developed by Gröwe-Kuska et al., 2003), with the following notation: ξ and ξ̃ are n-dimensional stochastic processes; ξ_i and ξ̃_j are scenarios (sample paths of ξ); p_i and q_j are the scenario probabilities (the probability distributions of the processes ξ and ξ̃, respectively); S is the number of scenarios in the initial scenario set; J is the index set of deleted scenarios; |J| is the cardinality of J, i.e., the number of deleted scenarios; s = S − |J| is the number of preserved scenarios; ε is the tolerance for the relative probability distance; and c_T(ξ_i, ξ_j) is the distance between scenarios ξ_i and ξ_j. Let P be the set of original scenarios. The scenario set Q, based on the scenarios having minimal Kantorovich distance D_K to P, is computed in equation (A1): D_K(P, Q) = Σ_{i∈J} p_i min_{j∉J} c_T(ξ_i, ξ_j). The probability q_j of the preserved scenarios is given by the rule q_j = p_j + Σ_{i∈J(j)} p_i, where J(j) := {i ∈ J : j = j(i)} and j(i) ∈ arg min_{j∉J} c_T(ξ_i, ξ_j), ∀i ∈ J. That is, the Kantorovich transport functional ensures that the scenario sample is the best possible approximation of the stochastic process. By bundling similar scenarios and reducing the number of nodes, this produces a smaller, computationally accessible multistage scenario tree that is the solution of the problem of choosing the s = S − |J| preserved scenarios that minimize D_K(P, Q). The maximal reduction strategy is deduced to determine a reduced probability distribution Q of ξ such that the maximum number of scenarios is deleted subject to D_K(P, Q) ≤ ε.
Appendix B: Computational Insight on the Metrics Used to Evaluate the Implementation of ROA
The calculations of the two metrics, namely, the EVPI and the VSS, in multistage problems are explained below. These metrics were developed for the case of two-stage problems (Birge & Louveaux, 1997) and have been extended to multistage problems (Escudero et al., 2007). 
For the minimization model the following inequalities are satisfied: WS ≤ AP ≤ EV (B1), where WS denotes the wait-and-see solution value, i.e., the expected value of the objective function when each scenario is solved with perfect information; AP denotes the optimal solution value of the adaptive multistage stochastic problem presented in this paper; and EV denotes the expected result of the expected value problem, obtained by replacing all random variables by their expected values, and measures how the optimal solution of the expected value problem performs while allowing the decisions of the other stages to be chosen optimally as functions of the different scenarios. From equation (B1), EVPI and VSS are calculated as EVPI = AP − WS (B2) and VSS = EV − AP (B3). To calculate the EVPI, nonanticipativity constraints are relaxed at each time step so that decisions are made with perfect information about the future. From equation (B2), the difference AP − WS displays the value of perfect information. From equation (B3), the difference EV − AP, known as the VSS, indicates the benefit of finding different solutions for each scenario by solving the stochastic program rather than assuming the absence of uncertainty. In the work of Escudero et al. (2007) those parameters are generalized to the multistage case, as explained below. Let the expected result in t of using the expected value solution, denoted by EV_t for t = 2, …, T, be the optimal value of the AP model where the decision variables until stage t − 1, (x_1, …, x_{t−1}), are fixed at the optimal values obtained in the solution of the average scenario model. The value of the stochastic solution in t, denoted by VSS_t, is then defined as VSS_t = EV_t − AP. This sequence of nonnegative values represents the cost of ignoring uncertainty, and of not providing a solution adaptive to future conditions until stage t, in the decision making of multistage models. 
VSS and EVPI in multistage problems are then calculated from these stage-wise quantities, following the generalization of Escudero et al. (2007).
Notation
n: node
t: time (stage)
i: intervention
p_n: probability that node n is realized
r: discount rate
eS_{n,i}: level of existing supply from intervention i for node n
cC_i: undiscounted capital cost of intervention i
fC_i: undiscounted fixed operational cost of intervention i
vC_i: undiscounted variable operational cost of intervention i
D_t: demand in time t
cS_{n,i}: maximum capacity of intervention i in node n
τ_i: construction time period for intervention i
dS_{n,i}: activation of intervention i for node n
S_{n,i}: supply from intervention i for node n
aS_{n,t,i}: supply from intervention i associated with node n in time t
Zoogeography of epigean freshwater Amphipoda (Crustacea) in Romania: fragmented distributions and wide altitudinal variability
Inland epigean freshwater amphipods of Romania are diverse and abundant, as this region has a favourable geographical position between the Balkans and the Black Sea. Excluding Ponto-Caspian species originating in brackish waters and freshwater subterranean taxa, there are 11 formally recognized epigean freshwater species recorded from this country. They belong to 3 genera, each representing a different family: Gammarus (Gammaridae, 8 species or species complexes), Niphargus (Niphargidae, 2 epigean species) and Synurella (Crangonyctidae, one species). Their large-scale distribution patterns nevertheless remain obscure due to insufficient data, consequently limiting biogeographical interpretations. We provide extensive new data with high resolution distribution maps, thus improving the knowledge of the ranges of these taxa. Gammarus species display substantial altitudinal variability and patchy, fragmented distribution patterns. They occur abundantly, particularly in springs and streams, from lowlands to sub-mountainous and mountainous regions. In the light of recent molecular research, we hypothesize that the complex geomorphological dynamics of the Carpathian region during the Late Tertiary probably contributed to their allopatric distribution pattern. Contrasting with Gammarus, the genera Niphargus and Synurella exhibit low altitudinal variability, broad ecological valences and overlapping distributions, being widespread throughout the lowlands. The current distribution of N. hrabei and N. valachicus seems to be linked to the extent of the Paratethys during the Early Pliocene or Pleistocene. We further discuss the taxonomic validity of two synonymized and one apparently undescribed taxon, and provide an updated pictorial identification key that includes all taxa and forms discussed in our study. 
The mosaic distribution of epigean freshwater amphipod species in Romania shows that this region is particularly suitable for phylo- and biogeographical analyses of this group.
Introduction
Distribution patterns offer valuable insights towards understanding historical factors that have shaped the contemporary distributions of species (Brown et al. 1996). Freshwater amphipod crustaceans are particularly suitable for biogeographical studies because of their restricted dispersal capabilities and the fragmentary nature of freshwater habitats (Väinölä et al. 2008; Hou et al. 2011). Amphipods are predominantly aquatic benthic animals that do not possess free-swimming larval stages or resistant propagules, and thus are prone to genetic differentiation and isolation (J.L. Barnard & C.M. Barnard 1983). Furthermore, many freshwater taxa display allopatric or discontinuous distributions, frequently presumed to result from vicariant events of geological origin, such as island separation, sea level fluctuations, and continental break up, or that follow ancient drainage patterns (Hogg et al. 2006; Finston et al. 2007; Bauzà-Ribot et al. 2011). The European continent is inhabited by a relatively high number of freshwater amphipod species, with diversity increasing towards the south-east (Väinölä et al. 2008). The diversity of the amphipod fauna of Romania is rich due to the favourable geographical position of the country, being situated at the edge of the Balkan Peninsula and the Black Sea. Samples were collected with a benthic hand-net with a mesh size of 250 μm and were stored in either 70% or 96% ethanol, or 4% formaldehyde solution. At every sampling locality, we investigated all available microhabitats. A literature review was performed and distribution data were gathered from the relevant studies, including the most recent ones (Cărăuşu et al. 1955 and references therein; Paraschiv et al. 2007; Petrescu 1994, 1996, 1997a, 1997b, 2000). 
The material from the studies of Pârvulescu (2008) was revised and incorporated into this study. Data from both the literature and from this study were taken into consideration for producing the distribution maps. Locality names, geographic coordinates and altitude are provided for each taxon in a table available from the Dryad Digital Repository: http://dx.doi.org/10.5061/dryad.fd8m9 (Copilaş-Ciocianu et al. 2014). Taxa were identified using the morphological delimitation criteria of the following authors: Cărăuşu et al. (1955), G. Karaman & Pinkster (1977a, 1977b, 1987) and Jazdzewski & Konopacka (1989). At present it is known that G. balcanicus, G. fossarum, G. komareki, G. roeselii, and G. pulex are taxonomically challenging poly/paraphyletic cryptic species complexes (indicated by "s.l." below) (Scheepmaker 1990; Müller 1998; Hou et al. 2011, 2013; Weiss et al. 2013). We are aware that the Romanian populations might represent distinct cryptic lineages, as has been shown for G. balcanicus (Hou et al. 2011). Without a detailed insight based on molecular data, we treated the Romanian populations as belonging to the above mentioned morphospecies. However, we took into consideration the three distinct morphs of G. balcanicus reported from Romania. Two of them, G. balcanicus dacicus Manolache, 1942 and G. balcanicus montanus S. Karaman, 1929, were formally described as subspecies. The third morph resembles G. balcanicus from the type locality (Cărăuşu et al. 1955), but the results of Hou et al. (2011) and Mamos et al. (2014) indicate that the Romanian populations are molecularly distinct from those from the type locality and, therefore, we label it G. cf. balcanicus. Although both above-mentioned subspecies are presently synonymized with G. balcanicus (G. Karaman & Pinkster 1987), it seems likely that many such synonymized taxa might actually be distinct species (Hou et al. 2011; Wysocka et al. 2014). 
Thus, we considered it appropriate to show the morphology and distribution of the Romanian morphs of G. balcanicus separately, to facilitate future taxonomic and systematic studies. Aside from G. balcanicus s.l., we present a putatively undescribed taxon, G. cf. kischineffensis, which is morphologically and geographically distinct although it bears some resemblance to G. kischineffensis (see below). Spatial analyses. We performed spatial autocorrelation analyses using Moran's I (Moran 1950) on latitude and longitude data in order to determine the degree of geographical clustering between the localities of different species. This analysis was performed with the software SAM v4.0 (Spatial Analysis in Macroecology) (Rangel et al. 2010) with default settings, using a significance test with 200 permutations. This analysis requires at least 30 entries to give reliable results, and thus it was performed only on taxa for which we had a sufficient number of records. These were: G. cf. balcanicus, G. balcanicus dacicus, G. fossarum s.l., G. cf. kischineffensis, N. valachicus and S. ambulans. Identification key. Based on available literature (Cărăuşu et al. 1955; G. Karaman & Pinkster 1977a, 1977b, 1987) and our own observations, an identification key was compiled to contain all morphologically distinct taxa of the respective genera (including the synonymized subspecies of G. balcanicus s.l. and G. cf. kischineffensis) recognized from Romanian territory. The key (provided in the Appendix) combines text with graphical depiction of all relevant identification characters based on original drawings. However, as it does not contain the taxa of Ponto-Caspian origin, the key should be used as a complement to other identification resources rather than alone.
Results and discussion
Checklist of species. The presence of 11 epigean freshwater amphipod species or species complexes previously recognized in Romania, and the two presently synonymized subspecies of G. 
balcanicus, was confirmed by our data and recent literature. The only species not encountered during our field survey was N. hrabei; however, a recent study reported it in south-eastern Romania (Flot et al. 2014), confirming its presence after more than 50 years. Furthermore, one morphologically distinct form with a well delimited distribution pattern did not correspond to any of the morphological criteria or character combinations used to differentiate previously recognized taxa (for details, see Taxonomic remarks below) and therefore we treat it as a separate entity throughout this paper. It superficially resembles G. kischineffensis and we have temporarily labelled it G. cf. kischineffensis. Distribution patterns, altitudinal ranges and habitat preferences. Below, each taxon recognized in Romania is discussed separately, in alphabetical order. Their distributions are shown in three maps: the G. balcanicus complex in Fig. 1, remaining Gammarus species except G. fossarum s.l. in Fig. 2, and Niphargus, Synurella and G. fossarum s.l. in Fig. 3. Altitudinal ranges of taxa are summarized in Fig. 4. Gammarus arduus is a species with a south-eastern European distribution (G. Karaman & Pinkster 1977a). Altogether, literature records and new data from our study reveal only six localities scattered throughout Romania (Fig. 2). Previous reports indicated its presence in western and southern Romania. During this study it was collected from three additional sites in the south-eastern part of the country, in the Măcin Mountains. It is found in streams (Table 1) at altitudes below 300 m a.s.l. (Fig. 4). The Gammarus balcanicus complex is widely distributed throughout south-eastern Europe and Asia Minor (J.L. Barnard & C.M. Barnard 1983; G. Karaman & Pinkster 1987; Özbek & Ustaoğlu 2006; Özbek et al. 2009). It is also the most widespread amphipod in Romania (Petrescu 1994). We present data for all three reported morphs of this taxon from the country (Fig. 1). The most common is G.
cf. balcanicus, phenotypically similar to the morph from the species' type locality, which is widespread throughout the Carpathians and reaches the Dobrogea region in south-eastern Romania. Intriguingly, its occurrence decreases considerably in the central part of the country and it is absent in the south (Fig. 1). It has the widest altitudinal range among freshwater amphipods from Romania, being recorded between 16 and 1530 m, with the majority of localities situated between 300 and 600 m (Fig. 4). It mostly occurs in springs and brooks and occasionally in caves or in rivers (Table 1). It was occasionally found coexisting with G. balcanicus dacicus, G. fossarum s.l. and G. roeselii s.l. The morphologically distinct G. balcanicus dacicus is likewise well represented in the Romanian territory but differs in distribution from G. cf. balcanicus. It is concentrated mostly in central and southern Romania where it replaces G. cf. balcanicus and reaches the western lowlands of the country (Fig. 1). The altitudinal range of this taxon extends from 37 to 1072 m, mostly between 170 and 600 m (Fig. 4). It coexists with the same taxa as G. cf. balcanicus and inhabits small brooks, springs and lowland rivers (Table 1). G. balcanicus montanus has been recorded from only a few localities in the Southern Carpathians (Fig. 1). It inhabits only high altitudes that range between 880 and 1930 m (Fig. 4). The Gammarus fossarum complex has a wide distribution area that spans western, central and south-eastern Europe and reaches northern Anatolia (G. Karaman & Pinkster 1977a; J.L. Barnard & C.M. Barnard 1983; Özbek & Ustaoğlu 2006). In Romania, it occurs in the western part of the Carpathians in two isolated regions, one in the north-west and the other in the south-west (Fig. 3). Its altitudinal distribution ranges from 47 to 860 m, mainly between 300 and 550 m (Fig. 4).
The populations from south-western Romania also occur in rivers in the lowlands while the north-western populations are confined to springs and streams in sub-mountainous regions (Table 1, Fig. 3). In some localities G. fossarum s.l. coexists with G. cf. balcanicus, G. balcanicus dacicus and G. roeselii s.l. Gammarus kischineffensis has a discontinuous distribution encompassing two distinct areas. One is restricted to north-eastern Romania, Moldova and south-western Ukraine and the other is limited to the eastern half of Turkey (G. Karaman & Pinkster 1977a; J.L. Barnard & C.M. Barnard 1983; Özbek & Ustaoğlu 2006). Thus, it is possible that the latter represents a distinct lineage. On Romanian territory, G. kischineffensis occurs only in the north-eastern part of the country, being limited to the Siret and Prut River catchments, and never reaches the inner Carpathian basins (Fig. 2). Nevertheless, morphologically distinct populations, which we treat separately (see below), are found throughout south-western Romania. G. kischineffensis is restricted to altitudes below 460 m and inhabits springs, streams and rivers (Table 1, Fig. 4). Gammarus cf. kischineffensis is a newly recognized form. Due to its morphological distinctness and allopatric distribution, it seems likely that it is a species separate from G. kischineffensis in a strict sense (see Taxonomic remarks). It is encountered in the south-western part of the country, in the Almăjului, Aninei and Semenic Mountains (Fig. 2). Altitudinally, it was found between 140 and 860 m in springs and streams (Table 1, Fig. 4). It occasionally coexists with G. fossarum s.l. and G. cf. balcanicus. The Gammarus komareki complex has a distribution range that extends from Bulgaria and northern Greece, throughout the northern half of Turkey into the north-western part of Iran (G. Karaman & Pinkster 1977a; Grabowski & Pešič 2007; Özbek & Ustaoğlu 2006; Zamanpoore et al. 2011).
It is known only from three localities in south-eastern Romania, in the Dobrogea Region (Fig. 2). It occurs at altitudes below 100 m and inhabits slow flowing rivers with rich vegetation, coexisting with G. pulex s.l. (Copilaş-Ciocianu 2013) (Table 1, Fig. 4). Gammarus leopoliensis occurs only in the northern half of the Carpathian region in Poland, Ukraine, Slovakia, Hungary and Romania (Grabowski & Mamos 2011; Papp & Kontschán 2011). Its distribution in Romania is confined to the northern part of the country (Fig. 2), to streams at high altitudes, from 600 to 1150 m (Table 1, Fig. 4) (Papp et al. 2008). The Gammarus pulex complex has a western, central and northern European distribution with patchy populations being encountered in south-east Europe, Asia Minor and throughout Asia (G. Karaman & Pinkster 1977a; J.L. Barnard & C.M. Barnard 1983; Özbek & Ustaoğlu 2006). It is encountered only in south-eastern Romania, in the Dobrogea Region (Fig. 2), in groundwater, springs, and streams, at altitudes that do not exceed 100 m (Table 1, Fig. 2), cohabiting with G. komareki s.l. in some sites (Fig. 2). The Gammarus roeselii complex is distributed across western, central and south-eastern Europe as well as the western part of Turkey (G. Karaman & Pinkster 1977a; J.L. Barnard & C.M. Barnard 1983; Jazdzewski & Roux 1988; Özbek & Ustaoğlu 2006). It is present in the western and southern parts of Romania in a few distinct patches (Motaş et al. 1962; Pârvulescu 2009) (Fig. 2). It is a typical lowland taxon, occurring mostly at altitudes below 200 m (Fig. 4). G. roeselii s.l. is ecologically the most plastic gammarid in Romania, being found in springs, streams, rivers, and occasionally in lakes and swamps (Motaş et al. 1962, Table 1). It can co-occur with G. cf. balcanicus, G. balcanicus dacicus and G. fossarum s.l. Niphargus hrabei occurs in central and south-eastern Europe from the Small Hungarian Plain to the Danube Delta (Cărăuşu et al. 1955; J.L. Barnard & C.M.
Barnard 1983; Meijering et al. 1995; Nesemann et al. 1995). It is found only in the south-eastern lowlands of Romania and the Danube Delta in springs, streams, ponds and swamps at altitudes below 350 m (Table 1, Figs 3-4). Niphargus valachicus has a large and fragmented range, spanning from the Pannonian Plain along the Lower Danube, and reaching the Danube Delta (Cărăuşu et al. 1955; J.L. Barnard & C.M. Barnard 1983). It is also present in Turkey along the southern shore of the Black Sea and reaches the south of the Caspian Sea in Iran (Fišer et al. 2009; Hekmatara et al. 2013). In Romania, it is a common species in the lowlands, being found in swamps, canals, temporary ponds, and large rivers in sympatry with S. ambulans (Cărăuşu et al. 1955; Copilaş-Ciocianu & Pârvulescu 2012; Table 1). It inhabits the western and southern plains of Romania (Fig. 3), being encountered between 0 and 360 m with most localities ranging around 100 m (Fig. 4). It often coexists with S. ambulans and G. balcanicus dacicus, and occasionally with G. roeselii s.l. and N. hrabei (Motaş et al. 1962). Synurella ambulans is widespread in central, eastern and southern parts of Europe (G. Karaman 1974; Sidorov & Palatov 2012). In Romania it has a distribution similar to N. valachicus, co-occurring in the same habitats and at the same altitudes (Table 1, Figs 3-4). Biogeographical patterns. Freshwater amphipods in Romania have patchy and often non-overlapping distribution patterns. A distinction can be made between the distributions of Gammarus and Niphargus/Synurella. Gammarus species exhibit high altitudinal variation and allopatric distributions, contrasting with Niphargus and Synurella, which are sympatric and restricted to the lowlands. The distribution patterns of the analysed species are significantly non-random (I > 0, p ≤ 0.05). However, the positive autocorrelation distance values varied between genera.
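The spatial clustering test behind these values, Moran's I with a 200-permutation significance test, can be sketched as follows. This is a minimal illustration only: the study used the SAM v4.0 software with default settings, so the inverse-distance weighting, the toy coordinates and the choice of latitude as the autocorrelated variable below are our own assumptions, not the authors' exact procedure.

```python
import numpy as np

def morans_i(values, coords):
    """Moran's I for `values` observed at point `coords`,
    using inverse-distance spatial weights (a common default)."""
    values = np.asarray(values, dtype=float)
    coords = np.asarray(coords, dtype=float)
    n = len(values)
    # pairwise Euclidean distances -> inverse-distance weights (0 on diagonal)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    with np.errstate(divide="ignore"):
        w = np.where(d > 0, 1.0 / d, 0.0)
    z = values - values.mean()
    num = (w * np.outer(z, z)).sum()
    den = (z ** 2).sum()
    return (n / w.sum()) * num / den

def permutation_p(values, coords, n_perm=200, seed=0):
    """One-sided permutation p-value: how often does shuffling the values
    over the fixed locations give an I at least as large as observed?"""
    rng = np.random.default_rng(seed)
    obs = morans_i(values, coords)
    count = sum(
        morans_i(rng.permutation(values), coords) >= obs
        for _ in range(n_perm)
    )
    return obs, (count + 1) / (n_perm + 1)

# toy example: two tight clusters of localities (hypothetical coordinates)
coords = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],
                   [5.0, 5.0], [5.1, 5.2], [5.2, 5.0]])
lat = coords[:, 1]  # autocorrelate one coordinate, as in the study
obs, p = permutation_p(lat, coords)
```

With strongly clustered localities, the observed I is large and positive and few permutations match it, mirroring the significant clustering (I > 0, p ≤ 0.05) reported for the analysed taxa.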
Maximum significant positive autocorrelation distances ranged between 19 and 180 km for different Gammarus taxa (p ≤ 0.01) and reached 327 km and 296 km for N. valachicus (p = 0.03) and S. ambulans (p = 0.005), respectively (Fig. 5). Thus, the spatial autocorrelation analyses further emphasize this patchy distribution pattern by revealing that populations of Gammarus are significantly autocorrelated over shorter distances than N. valachicus and S. ambulans. This means that Gammarus species have more geographically clustered distributions than the latter two. Four Gammarus taxa have well-delimited, wide distributions in Romania; these are G. cf. balcanicus, G. balcanicus dacicus, G. fossarum s.l. and G. kischineffensis. The former two have an intertwined, complementary distribution pattern that is mostly non-overlapping, while the latter two are restricted to the western and eastern parts of the country, respectively (Figs 1-3). The remaining taxa, G. arduus, G. leopoliensis, G. komareki s.l., G. pulex s.l. and G. roeselii s.l., are not so widespread, most likely because the territory of Romania only marginally overlaps with their distribution areas. The full distribution of G. cf. kischineffensis is at present unknown; it is possible that it extends into neighbouring Serbia. The allopatric distributions displayed by Gammarus species in Romania are typical for the genus (G. Karaman & Pinkster 1977a; 1977b; 1987; Väinölä et al. 2008). Molecular phylogenetic analyses indicate that the frequent allopatry observed in many Gammarus species is the result of geological vicariant events of Tertiary age and that the majority of extant freshwater species originated in the Late Tertiary (Hou et al. 2007; 2013; Wysocka et al. 2014). During this period, the Carpathian Mountain range was a geomorphologically highly dynamic archipelago surrounded by the shallow Central Paratethys Sea (Popov et al. 2004).
This constantly changing topography was characterized by different timings of landmass uplift and drastic variations in sea level (Harzhauser & Piller 2007; Kováč et al. 2007). Sea level fluctuations and the vicariance they create are considered to be of significant importance in the evolution of many, especially subterranean, freshwater amphipod taxa (Notenboom 1991; Holsinger 1994). We hypothesize that the dynamic geomorphology of the Carpathian region during the Late Tertiary has left its footprint on the contemporary ranges of Gammarus species in Romania. Niphargus valachicus, N. hrabei and S. ambulans are unusual amongst their congeners because of their predominantly epigean lifestyle, wide distributions and ecological plasticity (Straškraba 1972; Meijering et al. 1995; Nesemann et al. 1995; Sidorov & Palatov 2012). Both Niphargus species are sympatric with S. ambulans throughout their distribution range, sharing the same ecological requirements (Motaş et al. 1962; Meijering et al. 1995; Nesemann et al. 1995), and in many cases coexisting in the same habitat (Motaş et al. 1962; Straškraba 1972; Akbulut et al. 2001; Juhász et al. 2006; Copilaş-Ciocianu & Pârvulescu 2012). The co-existence of Niphargus and Synurella seems to be quite old, since both genera are known from Baltic amber that dates back to the Eocene, i.e., is at least 35 million years old (Jażdżewski & Kupryjanowicz 2010; Jażdżewski et al. 2014). Molecular data indicate that Niphargus colonized south-eastern Europe at the beginning of the Oligocene (~25 million years ago) (McInerney et al. 2014). Although N. valachicus was not included in that study, its distribution reflects the extent of the Paratethys Sea during the Early Pliocene (~5 million years ago), when, as has been hypothesized, it may have colonized available freshwater habitats through coastal lagoons (Sket 1981). Straškraba (1972) suggested that the present-day distribution of N.
valachicus is linked with the more recent Pleistocene extent of the Paratethys. However, during the Pliocene and Early Pleistocene, the extent of the Paratethys was more or less the same, without major fluctuations (Popov et al. 2004). Thus, it is possible that the hypothesized freshwater colonization may have taken place anytime during this time frame. The lowland regions of Romania, where N. valachicus is nowadays present, fit this general pattern since they were continuously submerged under the waters of the Paratethys during the Pliocene/Pleistocene (Popov et al. 2004). Due to its large and fragmented range, it is possible that this taxon might harbour independently evolving lineages (Fišer et al. 2009). Straškraba (1972) also suggested a connection between the distribution of N. hrabei and the extent of the Paratethys during the Pleistocene. Since this species is morphologically, ecologically and biogeographically similar to N. valachicus (Straškraba 1972; Nesemann et al. 1995), it is possible that it follows the same pattern. It is considered that N. hrabei is expanding its range at present (Ketelaars 2004). Both species are sympatric in the Lower Danube basin in Romania and coexist in some instances (Motaş et al. 1962). S. ambulans, although having a wide distribution in Europe, has a problematic taxonomy and probably represents a species complex, as suggested by its morphological variability and ecological plasticity (Meijering et al. 1995; Konopacka & Blazewicz-Paszkowycz 2000; Sidorov & Palatov 2012). It belongs to the crangonyctoid group, which is considered one of the most ancient groups of freshwater amphipods (J.L. Barnard & C.M. Barnard 1983). Its unclear taxonomy and the lack of molecular data hamper biogeographical interpretations for this taxon. Aside from historical factors, we presume that the contrasting patterns (allopatric vs.
sympatric) observed between Gammarus and Niphargus/Synurella are due to the fact that these taxa have different ecological preferences. Romanian Gammarus species, except for G. roeselii s.l., are seemingly more restricted in their habitat preferences, favouring springs and streams along an extensive altitudinal gradient (Table 1, Fig. 4). Therefore, their spread might be limited by the availability of these suitable habitats. Competition and interspecific predation, common between Gammarus species (e.g. MacNeil & Dick 2012), might also be important factors contributing to their distribution patterns. On the other hand, Niphargus and Synurella are more euryoecious, preferring a wide spectrum of habitats ranging from groundwater and springs to stagnant waters or temporary ponds, but along a much narrower altitudinal gradient (Table 1, Fig. 4). This is probably because niphargids and crangonyctids seem to be more tolerant of low oxygen levels than gammarids (Dick et al. 1997; Simčič & Brancelj 2006). Thus, they can occupy a wider variety of habitats and prevail at lower altitudes, where they probably face lower competition pressure from gammarids. The mechanisms that permit these two genera to coexist are unclear, given the fact that freshwater amphipods often exhibit high levels of intra-guild predation (MacNeil et al. 1997; 1999; Luštrik et al. 2011). Taxonomic remarks. We present the morphological differences between the three morphs of G. balcanicus reported from Romania, as pointed out by Cărăuşu et al. (1955). G. balcanicus dacicus morphologically differs from G. cf. balcanicus by the longer endopod of uropod 3 (up to 90% the length of the exopod), the pointed inferoposterior corners of epimere 2 and the presence of long setae (as long as the urosome spines) on the dorso-posterior side of the metasome segments (Fig. 6). It also has a very distinct distribution that is complementary to that of G. cf.
balcanicus, and in some localities they are known to coexist and still maintain their morphological distinctness (Fig. 1). Based on this evidence, we consider that this taxon is a distinct entity and should be 'resurrected', if not even elevated to specific status, an issue that remains to be resolved by further molecular research. In the case of G. balcanicus montanus, the situation is less clear. Its main morphological differences from G. cf. balcanicus are its smaller size and a short endopod of uropod 3 (about 50% the length of the exopod) (Fig. 6). Its distribution is not geographically separated but it is supposedly ecologically distinct, inhabiting only high altitude springs or brooks. It might represent an ecomorph of G. cf. balcanicus, although both substantially overlap altitudinally. Molecular analyses are needed to resolve the taxonomic status of G. balcanicus montanus. The distinct form G. cf. kischineffensis is morphologically different from its presumed relative G. kischineffensis (Fig. 7). Geographically, their ranges are separated by ca. 300 km (Fig. 2). The main distinctive features of G. cf. kischineffensis are 1) pointed infero-posterior corners of the 2nd and 3rd epimeres, which are straight in G. kischineffensis (Fig. 7a); 2) the presence of one spine on the inner side of each telson lobe, a spine that is absent in G. kischineffensis (Fig. 7b); and 3) the presence of long setae (longer than or as long as the width of the underlying segment) in the upper quarter of the external margin of the 3rd uropod exopodite (these are present along the upper half in G. kischineffensis) (Fig. 7c). Based on its morphological and geographical distinctness, we propose that this morph might be an undescribed species, a hypothesis that will be tested by further morphological and molecular studies. Conclusions. Romania has a diverse fauna of epigean freshwater amphipods and further taxonomic studies are needed to truly recognize this diversity.
Morphological variation in the G. balcanicus complex and G. cf. kischineffensis suggests the presence of undescribed taxa, and it is likely that molecular studies will reveal additional cryptic diversity (as has been the case with G. cf. balcanicus). The distributions of the epigean freshwater amphipods in Romania are characterized by their patchiness and altitudinal variability. Coupled with their low dispersal abilities and the heterogeneous topography/geology of this region, these animals constitute a suitable model system for studying biogeography and phylogeography at a fine scale, with implications for further research in ecology, adaptation and speciation of freshwater amphipods.
"The Hospital" Medical Book Supplement—No. XXXVII. Contents: A Manual of Practical Inorganic Chemistry; The Amateur Gardener's Diary and Dictionary; Laboratory Notes on Organic Chemistry for Medical Students; Massage Movements, including the Nauheim Exercises: An Illustrated Guide for Nurses and Masseuses; Golden Rules of Refraction; Practical Nursing for Male Nurses in the R.A.M.C. and Other Forces; The Abuse of the Singing and Speaking Voice: Causes, Effects, and Treatment. ... educated at a time when pathological research had not attained to the importance in clinical medicine it now enjoys, will find in this short work a handy compendium of the simpler laboratory methods, such as can be carried out without complicated and expensive apparatus. Though simple, the methods described are of sufficient accuracy to serve ordinary purposes of diagnosis, and, what is of great importance, the directions given are easy to understand, even by those who have not undergone a special training in laboratory work. The volume is confined in scope to the analyses of urine, stomach contents, faeces, and the examination of blood. Comparatively little bacteriology is introduced except in so far as recognition under the microscope of characteristic organisms is concerned, such as the Oppler-Boas or long bacillus, sarcinae, etc. An account of a simplified Wassermann's test is given, as well as of the agglutination tests for typhoid and Malta fever. The book can be heartily recommended to those in search of a short and simple guide to the more ordinary laboratory methods. Medical Diagnosis. By W. Mitchell Stevens, M.D., M.R.C.P., Senior Assistant Physician, Cardiff Infirmary. (London: H. K. Lewis. Pp. xi and 1571. Demy 8vo. 25s. net.) There are so many books on medical diagnosis competing for popularity that it seems supererogatory, almost, to take the trouble of writing another on this, from a literary point of view, so extremely difficult subject. Dr.
Stevens, we are bound to confess, brings no literary talents into requisition in discussing his subject; his style, if such a bald enumeration of points can be dignified by the name of style, is everywhere concise and terse, but lacks everything that goes to make such a book readable to those who are not forced to take it up for a special purpose. We look in vain, for instance, for any description of signs that can approach the pen pictures in the old Fagge, or, to take a more modern example, the admirable summaries which those who have had the privilege of listening to the greatest medical lecturer of the day, Krause, find indelibly fixed in their memory. But it is entirely unfair (and we readily admit our fault) to look for literary polish in a volume that deals with medical diagnosis; in a volume, moreover, that attempts to condense that enormous subject within the smallest compass. In such a book we must look to other points; serviceability, accuracy, classification, and clearness of presentation are of much more importance here than fine writing. And judging Dr. Stevens' work on these points, and at the same time comparing it with the rival manuals in the field, we must confess that we have not yet had the pleasure of reading a book which is, from a student's point of view, so ideal as this. Nor is it only the student who will appreciate the conciseness with which facts are presented here; the busy practitioner will readily give his vote in favour of a volume that combines with almost catholic fulness and a wide range of subject-matter a carefully thought-out scheme of presentation in which it is the easiest thing in the world to find any special disease or condition.
As a text-book, apart from a medical classic, this work fully merits the attention of students and practitioners; the former will find it a wholly useful aid in preparing for examinations and in ward work; the latter will hold it in esteem because it is a handy and exceptionally ably devised scheme of classification which will materially help in diagnosis. There are several items which the author would do well to revise in a second edition, which we feel sure will not be long delayed. Apart from several printer's errors noted (we may instance the annoying "myostitis") and the blurring of some of the illustrations, there are some details which should be amplified. The first section, on general conditions, is wholly admirable, but the note on pigmentation is not as full as it might be; pellagra, leprosy, and the pigmentation seen after prolonged application of Bier's treatment may be added to the list of causes. The short chapter on food poisoning is much too condensed; a few further particulars regarding the differential diagnosis of ptomaine poisoning would have been of great value to practitioners. In the note on lead poisoning no reference is made to the blood examination, which we have found of decided value in obscure cases; in that on portal pyaemia too little stress is laid on the importance of examining the prostate; in that on typhoid no mention is made of the yellow pigmentation on the palms and soles which is so distinctive a sign, according to some observers, in the early stages. We have often thought that some useful purpose would be served by a short volume on the lesser-known signs which are sometimes of use in difficult cases, and in a work of this character they ought, at least, to be mentioned. Dr. Stevens will probably agree with us that "penny-in-the-slot" diagnosis is a mistake, but every test that can be applied ought to be used where there is the slightest hesitation in pronouncing definitely with regard to a specific condition.
In a future edition we therefore hope that the author will give his readers some reference to the Wassermann reaction, Quinquaud's sign, Stiller's sign, Squire's sign, Grocco's triangle, the paradoxic reflex, and some others which we do not find in the exhaustive index provided. The note on the diazo-reaction is misleading; what is described is really the modified Penzoldt reaction, which is almost valueless, since it is obtained in nearly every septic condition and is affected by the freshness or antiquity of the reagents employed; the real Ehrlich's reaction depends on the characteristic precipitate obtained when the red-coloured fluid is allowed to stand for a few hours. Taken as a whole the book is a praiseworthy effort to treat a difficult subject in a manner that conduces to clearness and brevity, and as such it is eminently suitable for the practitioner's library. Considering the vast amount of useful information given, the comparatively high price of the book cannot be regarded as excessive. The demand for a second edition of this book less than a year after the publication of the first one is a proof in itself that it has found favour with the public for which it is designed. In this reissue there are several additions, and the whole text has been carefully revised. The subject is treated conscientiously and thoroughly, and the most modern and up-to-date theories and methods connected with the upbringing of children are adequately described and explained. Practitioners may feel assured that this is a book which can safely and with advantage be placed in the hands of mothers and children's nurses; it is a favourable specimen of a type of book for which the demand is regular and considerable, notwithstanding a diminishing birth-rate. We have found but one point in which we are seriously displeased with the author, and that is on the subject of "croup."
Otherwise the book is quite trustworthy, and, if not of any great distinction, is perhaps none the less useful as a guide to the management of babies and children. This is the official report which Dr. Steven was commissioned to make by the Government of South Australia, on the subject of school inspection in Europe and America. It is extremely interesting for those who wish to compare the various systems now in use. For the most part the author contents himself with noting facts, leaving his readers to draw their own conclusions. We should have liked more comment on the various systems, and the opinion of so acute and so fair an observer would have been singularly useful at the present moment, when the methods adopted in London are the subject of such earnest and even acrimonious debate. It is worth while to bear in mind, however, that Dr. Steven speaks very highly of the scheme that has been elaborated by Dr. Kerr, both in Bradford and in London, and that he puts the position very fairly. On the subject of part-time versus whole-time officials he is evidently reluctant to state a definite opinion; he gives the arguments for and against each side of the question and leaves it at that. There are some excellent illustrations, notably of the American methods. The author's tour of inspection embraced Germany, America, and England and Scotland, but Switzerland is only briefly discussed, while Denmark, Norway and Sweden, Belgium, and Italy are not mentioned at all. The report is therefore incomplete, but it is nevertheless very useful for purposes of comparison, and it brings out very forcibly that the best systems appear to be those in which the existing arrangements for the care and treatment of sick children are utilised as much as possible. The whole question of the establishment of independent school clinics is at present under discussion, and it is not necessary to deal with it here, even if it were possible to do so within the scope of a brief review.
All that need be said is that this work should prove a valuable contribution to the literature which will be helpful in enabling us to arrive at some conclusion with reference to this important subject. Dr. Steven is to be congratulated on his report, and the publishing firm on the excellence of its presentation to the public. We only wish all reports were drawn up in this readable and attractive manner. (London: Rebman, Ltd. Second edition. Price 5s. net.) The first edition of this little manual was published three years ago. Since then dietetics has made prodigious advances, and the author has been well advised to edit and revise a new edition of his popular work. It covers a wide range of subjects and is a popular exposition of the whole subject of dietetics. As such it is valuable and extremely useful; a book, in fact, that the practitioner may safely lend to his patient who is interested in the question of food and feeding. At the present time, when so many food faddists are about, the perusal of such a work, which gives the main facts about various matters which the laity regard as questions on which there is a wide difference of opinion among members of the faculty, is bound to prove useful. The cheapness of the new edition is a point in its favour which should not be overlooked, but we do not exaggerate when we say that its usefulness is not to be gauged by its price. It is a thoroughly sound, authoritative exposition of the subject with which it deals, and as such well suited to have a place on the shelves of the practitioner's library. Diet and the Maximum Duration of Life. By Charles Reinhardt. (London: The Publicity Co., Ltd. 1s. net.) This further treatise on dietetics by an exponent of the "sour milk cure" is devoted to a more general discussion of foodstuffs and diets. It contains a good deal of common-sense which might profitably appeal to the layman.
Many useful points are brought out in regard to the relative values and usefulness of most of the common articles of food, but of course much stress is laid on what the author calls "lactic bacterium therapy," and particularly on a special brand of sour milk and cream cheese. While opinions may vary concerning this panacea, much of the advice given in the book is valuable, though there is nothing but what is well known to medical men. More attention is now being paid to the subject of diet, and it is to be hoped that a more scientific practice will prevail; certainly we may feel justified in believing that the application of the diet absolu, or nothing but water, is more frequently called for as a preliminary treatment in many disorders. ... of the French school which is invaluable to those who are not in a position to read the original. The work of translation has been excellently done. Mr. Murphy's notes are everywhere elucidative, and his language is generally smooth, so that it is a pleasure to read the book. We would suggest that a companion volume giving the details of German methods and experience, on the basis of Professor Pfaundler's recent work, would be not only interesting but extremely useful for purposes of comparison. The book is especially well illustrated, the blocks being everywhere helpful and the diagrams of real utility. The work is essentially one for the practitioner, for the question of treatment receives full consideration, while the indications for operative interference are usually very clearly stated. As an instance of the excellent arrangement we may cite the chapter on appendicitis. This is one of the fullest and most exhaustive in the book, as it ought to be when one considers the importance of the subject and the difficulties of diagnosis and treatment in many cases.
Special stress is rightly laid, in the paragraph on differential diagnosis, on the importance of eliminating lobar pneumonia before the case is definitely taken to be one of appendicular trouble; in children this mistake of confounding the two conditions is particularly liable to be made. Cases illustrating the difficulties of deciding between appendicitis and intestinal obstruction, intussusception, and hip disease are given, which are equally valuable to point a moral. Kirmisson does not agree that there is no medical treatment for appendicitis, and his arguments against immediate operation as a routine method of treatment are well worth careful study. With these, in the main, every experienced pediatrist will agree. He prescribes rest in bed (absolute rest, that is), the avoidance of all solids, allowing only milk or a few teaspoonfuls of iced light wine, and the application of ice to the abdomen. He prohibits the use of opium, and warns against the exhibition of purgatives. Two sentences in this admirable summary are well worth quoting: "Medical treatment of appendicitis, when prudently and methodically employed from the very beginning, nearly always gives excellent results in children. We should note that the results of such treatment are nearly always very rapid." With these dicta general practitioners who have not had the frequent opportunities of hospital surgeons for operating on cases of appendicitis in children will cordially agree. Professor Kirmisson has a deserved reputation as an orthopaedist, and his remarks on deformities are therefore particularly instructive, although we confess that we do not always share his opinions as expressed in this book. The description of the treatment of clubfoot is especially good.
The author differs from German orthopaedists in preferring mid-tarsal arthrotomy, in long-standing cases of this condition, to tarsoclasia; the latter method, he thinks, has several dangers, among which he mentions osteomyelitis, osteitis, and fat embolism. In cases where it is necessary to operate further he chooses Nelaton's or Rydygier's operations. Mr. Murphy's notes appended to this section on more recent methods are interesting, and it is worth while to notice that the author lays stress on the advisability of trying reduction measures in all cases before proceeding to operative interference. We lack the necessary space to deal with the other interesting points in this manual, and in conclusion we need merely add that the book is one of the finest contributions to the literature of the surgery of children's diseases which exists in the English language. It is already deservedly popular on the Continent. Mr. Murphy's translation ought to make it equally appreciated in this country.

MISCELLANEOUS.

A Manual of Practical Inorganic Chemistry. This new manual should be particularly useful to medical and pharmaceutical students, as it gives much consideration to the preparation of nearly all the inorganic compounds of the British Pharmacopoeia, while the scope of the work meets the requirements of the Intermediate Scientific Examination of the London University. The schemes of analysis are clear and well arranged. Laboratory manipulations are fully explained, and the teacher's work should be greatly facilitated. The several summaries will be found helpful. The sections on gravimetric and volumetric work are probably the best in the book; calculations and standard solutions are fully described, while some logarithms and other useful tables are added.
In the description of Pettenkofer's method in the section on Gas Analysis, it is suggested that the breath be held during the emptying of a 5-litre bottle; surely this is something of a strain on the apnoeic capacities of the student? We feel sure that this book deserves a large measure of popularity among students of chemistry, and doubtless in future editions one or two minor defects will be remedied.

The Amateur Gardener's Diary and Dictionary. The 1911 edition of this well-known diary has been entirely re-written. A large amount of information is available in a handy form, while the schemes of work suggested month by month contain valuable hints.

Pp. 128. Price 2s. 6d. net.) The author's object in writing this short work is to provide a small book on practical chemistry adapted especially to the needs of medical students. A perusal of the volume leads to the opinion that in this he has succeeded, for, although the student must be cognisant of the main outlines of chemistry before he can hope to use the volume with understanding, the style of the author is straightforward, his explanatory matter clear and to the point, and he takes every opportunity throughout of impressing on the reader the practical bearing which the subject in hand has upon medicine. Thus the reader will realise that chemistry is not to be regarded merely as an examination subject to be got rid of as soon as possible and then forgotten, but rather as an adjunct of value to his knowledge for the fuller understanding of his clinical and other studies. The subject-matter covers and extends somewhat beyond the syllabus of the practical examination in organic and applied chemistry of London University.

Massage Movements, including the Nauheim Exercises: An Illustrated Guide for Nurses and Masseuses. New edition. (London : The Scientific Press, Ltd. Price 1s. net.) This booklet gives the barest outlines of the art of massage; it is, in fact, little more than a list of definitions.
It contains, however, a large number of diagrams and illustrations of the movements, including the Nauheim exercises. The pronunciation is given of most of the terms employed, but this is occasionally rather feeble and sometimes incorrect. A second part consists of a list of the muscles, bones, vessels, and nerves of the human body, with neat diagrams showing the position of some of them. As a first introduction to the subject this little book may be recommended to those interested in massage, but it is in no sense a text-book.

Golden Rules of Refraction. By Ernest Maddox, M.D., F.R.C.S. Third edition. (Bristol : John Wright and Sons, Ltd. Price 1s.) It is not surprising that these neat little books of the "Golden Rules Series" should have obtained so much favour as they have done. If rightly used by those who have already mastered the groundwork of the subject they cannot fail to be most useful, containing, as they do, sound information written in each case by a recognised authority. The third edition of the volume on refraction bears this out. In a small compass, but clearly set out, is most of the knowledge necessary for the practice of refraction. As in the past, the little volume is sure to continue to be one of the most popular of the series.

We must confess to having read every page of this excellent manual with the greatest interest not unmixed with profit. It is obvious that the authors have had practical experience of every condition they describe, from the many excellent hints with which their technical descriptions are interspersed. The teaching laid down is accurate, and on the whole is set forth in an attractive manner. At times there are signs of want of care in the construction of sentences, by which their meaning is rendered somewhat obscure. It must be granted, however, that in the main the authors have succeeded in producing a book which should be of the greatest utility to those for whom it was in the first event intended.
While it would, of course, be impossible to mention all forms of treatment in a small volume such as the one before us, we might suggest that in a future edition some reference be made to the treatment of fractured clavicle by the padded-ring method, by which all the unpleasantness of strapping is avoided and a more satisfactory position of the fragments attained. In addition there is the advantage that the rings can be removed daily to wash the skin, and massage and move the arms. The only disadvantage to this method is that it necessitates the patient being seen daily by the medical officer, so that the rings may be kept in the right position, and a proper degree of tension maintained with them to ensure that the shoulders are held well thrown back. A little more patience in the correction of the proof-sheets of the work before us would have ensured the avoidance of a number of mis-spellings and misprints, of which the following are examples: Orygen for oxygen (p. 83), ipeeachuana for ipecacuanha (pp. 130 and 192), spitoon for spittoon (p. 143), desquammation for desquamation (p. 144), eufficeint for sufficient (p. 167), amoebae for amoeba (p. 189), and medcal for medical (p. 280).

net.) In this little work the authors make a strong appeal for the co-operation of singing-masters and pupils with laryngologists of experience when the question arises as to which class of voice that possessed by the pupil belongs. Without the guidance of someone conversant with the anatomy of the organs that go to make up the vocal instrument, pupils are apt to find themselves studying music totally unsuited to their particular class of voice, as well as undertaking breathing exercises of a type calculated to develop the capacity of some portion of their respiratory organs, which it is not desirable to increase owing to the delicate nature of their vocal cords.
For example, a light operatic tenor, most of whose singing is done with the middle or head registers, has little need of diaphragmatic or abdominal respiration, and, in fact, he may do irreparable damage to his thin and fragile cords by the production of excessive blasts of air. On the other hand, a strong tenor requires a large volume of air, and it would be a mistake to impose costal respiration on such a one. By his knowledge of the anatomy of the vocal organ as a whole, including bellows, reed, and sounding-board, the laryngologist can give useful advice as to which type of breathing is best suited to the individual, and in this manner ward off many of the calamities which follow in the train of vocal abuse. The authors maintain that "no one should be admitted to study singing, and even declamation, without having passed a probationary examination in the knowledge recognised as indispensable to this class of masters," and that "conservatoires should always possess one or several laryngologists, whose care it should be to examine the pupils periodically, at the beginning, in the course of, and at the end of their studies." Used as we are to the English notation, we found it difficult to follow the foreign one. This will not, however, trouble a singer much, as he is likely to know both. The little volume contains much teaching that appears to us to be sound, and as a whole it is interesting. Our pleasure in reading this English version would have been enhanced had the style of the translator been less involved. It is irritating to come across from time to time sentences which need to be read twice or oftener before their meaning can be grasped. The following, which occurs on page 98, is an example, and, unfortunately, not the only one, of what we mean.
Real-time implementation of a friction drum inspired instrument using finite difference schemes

Physical modelling sound synthesis is a powerful method for constructing virtual instruments aiming to mimic the sound of real-world counterparts, while allowing for the possibility of engaging with these instruments in ways which may be impossible in person. Such a case is explored in this paper: particularly the simulation of a friction drum inspired instrument. It is an instrument played by causing the membrane of a drum head to vibrate via friction. This involves rubbing the membrane via a stick or a cord attached to its center, with the induced vibrations being transferred to the air inside a sound box. This paper describes the development of a real-time audio application which models such an instrument as a bowed membrane connected to an acoustic tube. This is done by means of a numerical simulation using finite-difference time-domain (FDTD) methods in which the excitation, whose position is free to change in real-time, is modelled by a highly non-linear elasto-plastic friction model. Additionally, the virtual instrument allows for dynamically modifying physical parameters of the model, thereby allowing the user to generate new and interesting sounds that go beyond a real-world friction drum.

INTRODUCTION

The friction drum has been described as a peculiar musical instrument or even a noisy toy [1]. In Scandinavian tradition, the friction drum was used in the Middle Ages as a rhythmic instrument. Over time, children used it in the 19th century to play when they went door to door during the Christmas holidays and sang [2].
Figure 1 shows the friction drum present at the Danish Music Museum. As can be seen, the drum is a combination of a stick inserted in the middle of a cylindrical drum. Similar drums are found in other cultures, for example the cuica in Brazil, putipù in Italy, zambomba in Spain, buhai in Romania and variations on rummelpot in Germanic countries like Denmark, Germany or the Netherlands. The sound is produced by rubbing the stick connected to the membrane, generating a frictional excitation, hence the name friction drum. This excitation is transferred to the membrane and then further to the air inside the acoustic tube, with the output radiating at the open end of the tube. As the stick is of limited length, the sound produced by these means cannot be sustained indefinitely and these instruments are typically used for percussive sounds, characterised by a sharp attack. The pitch produced by these instruments can also be modulated by applying pressure to the drum head and therefore changing its tension.

For our virtual model, we envision the excitation mechanism somewhat differently: we consider the stick as an infinitely long bow that excites the membrane directly via frictional interaction. This allows for an output sound sustained indefinitely. Therefore our virtual instrument does not necessarily sound identical straight "out of the box" compared to either of the real friction drums in particular, but could hopefully be tuned to achieve a particular sound. In addition, as the virtual instrument is based on a physical model, there comes a learning curve for any new user with regards to learning how to play it, as there is for any real instrument.
Friction has been extensively investigated in the sound synthesis literature, being the sound excitation mechanism of several musical instruments such as the violin and the musical saw, but also everyday sounds such as squeaking doors and rubbed wine glasses [3]. The literature has examined different ways of simulating friction, and elasto-plastic friction models have proven to be an accurate way to simulate dry interactions between rubbed surfaces [4]. In the elasto-plastic friction model, friction does not depend only on the relative velocity between bodies in contact, but also on the relative displacement of the micro-interactions between such bodies [5]. Such models have been recently used in combination with finite difference schemes [6]. In this paper, we combine an elasto-plastic friction model together with a membrane simulation based on FDTD methods to model the interaction between a stick, acting as a bow, and the drum head of a friction drum inspired virtual instrument. This is furthermore coupled with a 1D wave model with radiating boundary conditions at the open end used to describe the drum's sound box.

We mathematically describe the different elements of the drum, and present a real-time implementation which uses the Sensel Morph, a pressure sensitive touch pad controller [7], in order to play the virtual instrument.

FRICTION DRUM MODEL

We propose to model the friction drum as two main components and an excitation mechanism: a membrane connected to an acoustic tube, where the membrane can be bowed via a non-linear elasto-plastic friction model. This section describes the partial differential equations (PDEs) describing these components in isolation.
Membrane

First off, for a membrane defined over a domain Dm = [0, Lx] × [0, Ly], with Lx and Ly being the lengths of the membrane [m] in the Cartesian coordinates (x, y), the transverse displacement at a time t [s], u(x, y, t) [m], can be described by the following PDE, as per [8]:

∂²u/∂t² = c²Δu − 2σ₀ ∂u/∂t + 2σ₁ ∂(Δu)/∂t,

where the 2D Laplacian operator is defined as Δ = ∂²/∂x² + ∂²/∂y², and the parameter c = √(T/(ρmH)) is the wave speed resulting from the membrane's tension per meter T [N/m], its density ρm [kg/m³] and its thickness H [m]. Furthermore, σ0 [s⁻¹] and σ1 [m²/s] are parameters controlling the frequency-independent and frequency-dependent loss respectively.

Dirichlet boundary conditions are considered for the membrane, as it is a reasonable assumption that no energy is lost at the boundaries. Thus u = 0 on the boundary of Dm.

Acoustic Tube

The longitudinal vibration of an air column ζ(χ, t) [m] in a tube of uniform cross-section and length Lχ can be described by the 1D wave equation

∂²ζ/∂t² = γ² ∂²ζ/∂χ²,

where χ ∈ [0, Lχ] is a spatial coordinate along the length of the tube. This 1D wave approximation for a tube holds true if the length scale in the longitudinal direction is significantly greater than in the others. For the case of the friction drum this is not true, and this approximation will actually not produce the desired "drum" type sound for the wave speed γ resulting from the bulk modulus and density of air. However, this value can be tuned to produce the desired sound. The choice for this value is given in Section 4. A more accurate model would be the 3D wave equation or propagating 2D cross-sections, but would be too computationally demanding for a real-time implementation.

A Neumann boundary condition is imposed at the side of the tube connected to the membrane, while at the open end of the tube a radiating boundary condition is chosen, with the constants α1 and α2 modelling the inertia and loss at the open end of the tube.
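As a quick sanity check on the membrane parameters, the wave speed and the modal frequencies of an ideal, lossless, fixed rectangular membrane follow directly from the definitions above: c = √(T/(ρmH)) and f_mn = (c/2)·√((m/Lx)² + (n/Ly)²). The numerical values below are illustrative assumptions, not the parameters used in the paper:

```python
import math

def wave_speed(T, rho, H):
    """Membrane wave speed c = sqrt(T / (rho * H)) [m/s]."""
    return math.sqrt(T / (rho * H))

def mode_frequency(c, Lx, Ly, m, n):
    """Modal frequency f_mn [Hz] of an ideal fixed rectangular membrane."""
    return 0.5 * c * math.sqrt((m / Lx) ** 2 + (n / Ly) ** 2)

# Illustrative values (assumed, not from Table 1 of the paper):
T, rho, H = 300.0, 1400.0, 0.0005   # tension [N/m], density [kg/m^3], thickness [m]
Lx = Ly = 0.4                        # membrane side lengths [m]
c = wave_speed(T, rho, H)            # ~20.7 m/s for these values
f11 = mode_frequency(c, Lx, Ly, 1, 1)
```

Such a check is useful when choosing the "Tuning" range later on, since the lowest modal frequency determines the perceived pitch of the membrane.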
Connection

A connection between the two components is considered, which means that a connection force is added to the PDEs describing the two separate components, where fc,m [N] is the connection force acting on the membrane, while fc,t [N] acts on the tube. A [m²] is the area of the tube and the terms Em [m⁻²] and Et [m⁻¹] represent distributions over which the connection force is applied. Notice that the distributions are given over the appropriate domain, i.e. over a surface (m²) for the membrane and over a length (m) for the tube. This essentially means that the connection forces applied to the zero-input PDEs given in Equations (1) and (4) are scaled with the mass per unit length or area of each considered component. For the current model, a rigid connection is assumed, meaning that

η = ⟨u, Em⟩ − ⟨ζ, Et⟩ = 0,

with ⟨•, •⟩ being an L² inner product over the appropriate domain (1D for the tube and 2D for the membrane). Thus η represents the relative displacement of the two components over the connection. A more detailed discussion on the choice of connection distributions is given in Section 3.

Excitation - Bowing Model

An elasto-plastic friction model, first proposed in [9], is used for the friction drum excitation mechanism. Such a model, applied to the bowing of a stiff string, was shown by both Serafin et al. [10] and Willemsen et al. [6] to produce a hysteresis loop in the bowing force versus relative velocity plane, a detail which was experimentally observed by Smith and Woodhouse in [11]. This was arrived at by both a digital waveguide model, implemented by [10], as well as a FDTD method used by [6], who presented a working real-time implementation of a bowed stiff string.

The elasto-plastic friction model assumes the contact between two interacting elements is highly irregular at the microscopic level, i.e.
not all the overlapping surface is actually in contact. Instead, the contact can be modelled via a large group of bristles each contributing to the total friction force. These bristles are modelled as damped stiff springs and therefore each generates increasing contact force with increasing displacement, describing an elastic regime (stick). However, each bristle can only displace so far before it "breaks", and not all bristles "break" at the same time. This represents the elasto-plastic regime of the friction model, where only some bristles have reached the breaking point (slide). Once all bristles "break", a completely plastic regime is entered (slip). In the case of the bowed membrane, after the slip a "new" portion of the bow gets in contact with the membrane and the stick-slide-slip cycle restarts.

Adding the bowing force to the membrane can be done by introducing an extra term to (6a), the PDE which governs the membrane connected to the tube. Again the force is applied over some distribution, which is in fact a single point on the Cartesian grid of the membrane given by the bowing position at some time, where the relative velocity v can be computed as the difference between the velocity of the membrane at the bowing location and the externally supplied velocity of the bow, vB(t) [m/s].

Furthermore, s0 is the bristle stiffness [N/m], s1 is the damping coefficient of the bristles [kg/s] and s2 is the viscous friction [kg/s]. s3 [N] is a force coefficient proportional to the normal bowing force fN(t) (which is an external input and can vary over time) scaled with a pseudorandom function w(t) ∈ [−1, 1] and is used to add noise to the total bowing force, as per [3]. The time derivative of the average bristle displacement, ż [m/s], is given by

ż = v (1 − α(v, z) z / zss(v)).

Perhaps the most important function in this elasto-plastic friction model is introduced above: the adhesion map α(z, v), which controls the transition between the various regimes of friction. The function is defined piecewise and is illustrated in
Figure 2a, where zba [m] is the breakaway bristle displacement, below which the friction regime is purely elastic. Indeed, when α = 0, it follows that ż = v. Then, when zba is surpassed, the elasto-plastic regime is entered where the value of z will be a proportion, governed by αm(v, z), of the steady-state bristle displacement zss(v). At steady-state, when slipping occurs and therefore ż = 0, α = 1 and together with Equation (11) it follows that z = zss(v), with zss(v) [m] being defined as

zss(v) = sgn(v) (FC + (FS − FC) e^(−(v/vS)²)) / s0,

where vS [m/s] is the Stribeck velocity, and the Coulomb force FC = µC fN [N] and stiction force FS = µS fN [N] are given as a proportion of the normal force fN(t) of the bow acting on the membrane. These proportions are controlled by the dimensionless dynamic and static friction coefficients respectively, µC and µS. Looking at Figure 2b, which illustrates zss(v) for a fixed fN, one can see that the Stribeck effect, i.e. the dip of force at low velocities, is captured, as the magnitude of zss at values of v close to zero is larger than for higher relative velocities. This allows for a larger total friction force to be obtained in this region before the plastic regime is reached. After slipping occurs, the "grip" of the bow on the membrane is briefly lost and the membrane displaces in the opposite direction, hence sgn(v) ≠ sgn(z) and α becomes again zero, meaning that the bow again sticks to the membrane and a new stick-slip cycle begins.
Complete System

The complete system for the friction drum can therefore be written in continuous time as Equation (15), collecting the membrane and tube PDEs with their boundary conditions, coupled through the connection force and driven by the bowing force.

DISCRETIZATION

The system given in (15) is discretized using FDTD methods, which subdivide the continuous model into grid points in space and time, with time step k = 1/fs [s] for a sample rate fs. Using these discrete definitions for space and time, the continuous state variables presented in the previous section can then be approximated by grid functions as u(x, y, t) ≈ u^n_{l,m} for the membrane and ζ(χ, t) ≈ ζ^n_p for the tube. Furthermore, approximations to the derivatives can be described by finite difference operators such as

δtt u^n = (u^{n+1} − 2u^n + u^{n−1}) / k²,  δt· u^n = (u^{n+1} − u^{n−1}) / (2k),  δt− u^n = (u^n − u^{n−1}) / k.

Note that the same continuous operation can be approximated in different ways. With these definitions in place we can move on to discretize the individual components of the friction drum model. An important thing to take into account when it comes to numerical models is the issue of stability, from which limitations arise on the possible size of the grid spacings hm and ht. Stability conditions are available for each individual component and will be presented in the upcoming subsections. Working with grid spacings that satisfy the stability conditions as close to equality as possible ensures a more accurate numerical scheme.

Membrane

The complete membrane, including the bowing force and connection to the tube, can be discretized as in Equation (17). Notice in Equation (17) the use of the δt− operator in the mixed time/space derivative term, which is used in order to keep the numerical scheme explicit.
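The basic difference operators can be sketched directly, acting on the stored time steps of a grid function (this is a generic FDTD sketch, not the paper's C++ code; the sample rate is an assumption):

```python
import numpy as np

k = 1.0 / 44100.0   # time step [s], assuming a 44.1 kHz sample rate

def d_tt(u_next, u, u_prev):
    """Second time difference: (u^{n+1} - 2 u^n + u^{n-1}) / k^2."""
    return (u_next - 2.0 * u + u_prev) / k**2

def d_tc(u_next, u_prev):
    """Centred first time difference: (u^{n+1} - u^{n-1}) / (2k)."""
    return (u_next - u_prev) / (2.0 * k)

def d_xx(u, h):
    """Second space difference at interior points of a 1D grid function:
    (u_{l+1} - 2 u_l + u_{l-1}) / h^2."""
    return (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
```

A quick way to convince oneself these are correct is to apply them to polynomials: δtt recovers the exact second derivative of t² and the centred first difference recovers the exact derivative of t², while δxx annihilates any linear grid function.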
The bowing force is applied via a 2D spreading function JB based on bilinear weights: the four grid points surrounding the bowing location receive the fractions (1 − αxB)(1 − αyB), αxB(1 − αyB), (1 − αxB)αyB and αxBαyB, and all other points receive zero (Equation (18)), with lB = floor(xB/hm), mB = floor(yB/hm), αxB = xB/hm − lB and αyB = yB/hm − mB. This spreading function, necessary for exciting a discretized grid, has a dual: the interpolation function IB, which is of interest when obtaining the state of a discrete grid between grid points and will be of use further along for the discretization of the complete system. See [8] for more details on this. They are termed dual functions because they serve inverse purposes: one is used for adding input to a distributed object at a specific location and the other is used to extract a state at a specific location on the same object. Furthermore, they are related as JB = IB / hm² (Equation (19)).

Drum heads are typically of circular shape and although the membrane is defined in Cartesian coordinates over some rectangle of length Lx × Ly, one can "sculpt" a circular grid using a staircase approximation, as done in [12], as long as boundary conditions are satisfied. Since Dirichlet conditions are assumed, the only requirement is that points on the rows and columns at the edge of the square grid need to be fixed to zero. Regarding the connection with the tube, it is clear that the entire membrane contributes to movement of the air column inside the tube. However, there is a factor which points towards skewing the weight of the membrane displacements towards its center. Due to the boundary layer effect, the air at the edges of the tube will be semi-stationary. Therefore a 2D Hann distribution over 72.25% of the area of the grid is used, centered at the middle of the membrane. This is illustrated in Figure 3 together with the grid points of the circular membrane approximated from the initial rectangular grid.

This connection distribution, named Im, is normalized such that its integral is equal to 1 and can be seen to act as an interpolation function acting on u^n_{l,m}. Therefore its dual spreading function Jm is defined in the same way as Equation (19).
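The bilinear interpolation/spreading pair and its duality can be sketched as follows (assuming the bowing point lies strictly inside the grid, so the 2×2 stencil never leaves the domain):

```python
import numpy as np

def interp_weights(xB, yB, h):
    """Bilinear weights around the point (xB, yB) on a grid of spacing h."""
    lB, mB = int(np.floor(xB / h)), int(np.floor(yB / h))
    ax, ay = xB / h - lB, yB / h - mB
    w = np.array([[(1 - ax) * (1 - ay), (1 - ax) * ay],
                  [ax * (1 - ay),       ax * ay]])
    return lB, mB, w

def interpolate(u, xB, yB, h):
    """I_B: read the grid state at the off-grid point (xB, yB)."""
    lB, mB, w = interp_weights(xB, yB, h)
    return float(np.sum(w * u[lB:lB + 2, mB:mB + 2]))

def spread(shape, xB, yB, h):
    """J_B = I_B / h^2: distribute a point excitation onto the 2D grid.
    The factor 1/h^2 makes the discrete integral sum(J)*h^2 equal 1."""
    lB, mB, w = interp_weights(xB, yB, h)
    J = np.zeros(shape)
    J[lB:lB + 2, mB:mB + 2] = w / h**2
    return J
```

Bilinear interpolation is exact on grid functions that are linear in each index, which gives a convenient correctness check, and the spreading function integrating to one mirrors the normalization used for the connection distribution Im.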
Using von Neumann analysis [8], a stability condition can be derived for the membrane scheme, giving a lower bound on the grid spacing hm (Equation (20)).

Acoustic Tube

A discretized version of the acoustic tube and its connection is given in Equation (21), with Jt being the spreading operator for the connection force acting on the tube, essentially the discretized version of Et. This is related to its dual interpolant function It as in Equation (22), with It taken as a normalized half-Hann window such that its integral is 1, spread over 4% of the length of the tube, with its peak at the connection point (the top of the acoustic tube). This was preferred over a Dirac type connection in order to dampen out some of the high frequencies which would result from an excitation of the tube at a single point, and thus produce a more realistic friction drum sound. Notice in Equation (22) that ht is not squared, as was the case for hm in Equation (19). This is due to the different spatial dimensions of the components. The boundary conditions of the tube presented in Equation (5) are discretized accordingly (Equation (23)), and a stability condition on the grid size ht is given by [8]:

ht ≥ ht,min = γk.   (24)
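In practice one derives the grid from the stability condition by taking the largest integer number of intervals that still respects the bound, then recomputing the spacing, which keeps the scheme as close to the stability limit (and hence as accurate) as possible. A sketch for the tube, where the condition is ht ≥ γk:

```python
import math

def tube_grid(L, gamma, fs):
    """Choose the number of grid intervals N and spacing h for the 1D wave
    equation so that h >= gamma * k holds as close to equality as possible,
    with k = 1/fs. The same recipe applies to the membrane with its own
    stability bound on h_m."""
    k = 1.0 / fs
    h_min = gamma * k                 # stability bound: h >= gamma * k
    N = int(math.floor(L / h_min))    # largest N that keeps h >= h_min
    h = L / N                         # recomputed spacing, h >= h_min
    return N, h

# Illustrative values (gamma is tuned in the paper, not fixed to 340 m/s):
N, h = tube_grid(L=1.0, gamma=340.0, fs=44100.0)
```

Note that the paper deliberately uses fewer grid intervals than the stability condition would allow, trading accuracy for real-time performance without audio dropouts.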
Connection

The rigid connection given in Equation (7b) can be discretized as in Equation (25), with Dm and Dt being the domains of the membrane and of the tube respectively. This means that the connection location is described by the two spreading functions Jm and Jt for the membrane and the tube respectively. If the equality in Equation (25) is true at sample n, then it follows that it will be true at sample n + 1 as well. This, together with the identity ⟨J, f⟩_D = I f, where f is a grid function in some domain D and I and J must be dual interpolation and spreading functions, will provide valuable information for solving the complete discretized system, resulting in the equality in Equation (27).

Excitation - Bowing Model

For the bowing force, the discrete counterpart of Equation (9) is taken as Equation (29). Additionally, the relative velocity between the bow and the membrane described in Equation (10) is obtained with the centred time difference operator (Equation (30)).

Solving the System

In order to calculate the update values for the grid functions u^{n+1}_{l,m} and ζ^{n+1}_p, three unknown variables must first be determined: v^n, z^n and f^n_c, and for this we need a system of three equations dependent on these variables at each sample n, which can then be solved using a multivariate Newton-Raphson method. An interesting observation is that the reaction of the air inside the acoustic tube, i.e. the connection force, will instantaneously affect the bowing force and vice-versa. This would not be the case in a simpler model where bowing would not occur at the connection point.
The first equation is g1(v^n, z^n, f^n_c) = 0 and can be found by making use of the identity in Equation (31) and introducing it together with Equation (30) in Equation (17). The second equation is, as per [6] and [8], obtained from the bristle equation with the trapezoid rule applied to z^n. Finally, the third equation comes from the rigid connection condition in Equation (27). The displacements of the membrane u^{n+1}_{l,m} and of the tube ζ^{n+1}_p can be extracted and expressed only in terms of values at current or previous samples by expanding the operators in Equations (17) and (21).

A Newton-Raphson iteration, indexed by i, is then used to calculate the unknown values v^n, z^n and f^n_c. The threshold for convergence is set at 10⁻⁷, with a maximum number of iterations of 99. Once the three values at sample n are known, update values for the grid points u^{n+1}_{l,m} and ζ^{n+1}_p can be found by expanding the operators in Equation (17) and Equation (21).

IMPLEMENTATION

The implementation of the finite difference scheme presented in Section 3 has been carried out in C++ using the JUCE framework [13] and a demonstration video is available at [14]. The parameters used can be found in Table 1, and have been chosen starting from the work of Serafin [3] and Willemsen et al. [6], but tuning them to achieve the desired sound for the instrument. The number of grid intervals in the discretization of both elements is limited as compared to the stability condition in order for the model to be able to run in real-time without audio dropouts.
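The structure of the solver can be sketched generically: the paper's system (g1, g2, g3) in (v^n, z^n, f^n_c) is replaced here by a hypothetical 2×2 toy system, but the iteration, convergence threshold of 10⁻⁷ and 99-iteration cap mirror the text.

```python
import numpy as np

def newton_raphson(g, jac, x0, tol=1e-7, max_iter=99):
    """Multivariate Newton-Raphson: repeat x <- x - J(x)^{-1} g(x) until
    the step norm falls below tol, capped at max_iter iterations
    (the thresholds used in the paper)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(jac(x), g(x))  # solve J(x) step = g(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Hypothetical toy system standing in for (g1, g2, g3):
#   x^2 + y^2 = 4,  x*y = 1
g = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])
jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]],
                          [x[1],       x[0]]])
root = newton_raphson(g, jac, [2.0, 0.2])
```

In the real scheme the 3×3 Jacobian contains the derivatives of the friction nonlinearity, which is why a good initial guess (e.g. the previous sample's solution) keeps the iteration count low at audio rate.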
Prototype Model Results

Before implementing the audio application in JUCE, tests were done in MATLAB [15] to identify that the model is stable and that results are in line with expectations. Figure 4 shows a snapshot of the circular membrane being bowed with fN = 12 [N] and vB = 0.1 [m/s], coupled with the acoustic tube at some time step in the middle of a simulation. Looking at the tube, one can see the free and the radiating boundaries at its endpoints.

The next step was to test whether the vibrations of the membrane exhibit the Helmholtz motion, which is typical for bowed instruments and tends to produce triangular-shaped wave forms. Finally, the presence of a hysteresis loop in the force vs. relative velocity plane is investigated, which is an expected behavior as per experimental observations of bowed strings by Woodhouse and Smith [11]. This is illustrated in Figure 5b. Notice the orange square, highlighting the bowing position, which has different opacities in the two snapshots, their meaning being further described in the following.

An important part in designing the real-time application was to have a natural type of interaction. Since there are 4 dimensions of input to the model, i.e., the bowing position (xB(t), yB(t)), bowing force fN(t) and velocity vB(t), it was desired to find a way to somehow control all these inputs simultaneously. An ideal match for this task was the Sensel Morph, which is a tablet-sized pressure sensitive controller which is very fast and extremely sensitive [7]. The work of [16] provided an open source library allowing easy communication between the Sensel and JUCE. Other parameters which can be modulated via sliders are the tuning of the membrane, i.e.
the wave speed c ∈ [15, 150] [m/s], which is named "Tuning", thus allowing for a more intuitive understanding of the parameter by a non-technical user. Similarly, the damping parameters σ0 ∈ [0, 6] [s^-1] and σ1 ∈ [0, 0.00266] [m^2/s] are combined into one value called "Damping". Note that the grid spacing in Eq. (20) is initialised using the highest values for c and σ1 so that the stability condition is not violated. Also note that even when the damping parameters are set to 0, the radiation damping parameters for the tube, α1 and α2, are fixed. Hence, even with zero damping, there will still be decay present. A third slider at the top of the graphical user interface (GUI) window controls the s3 ∈ [0, 0.04fN] [N] term in the bowing force, and is called "Noise" as it adds some white noise to the friction force, proportional to the normal force fN.

Real-Time Application

Furthermore, a vibrato effect is added where one can modulate the tuning of the membrane via a sine wave with a chosen frequency and amount. This is introduced in the GUI as two sliders: "Variation", which adds an oscillation between [0, 3] [m/s] to the wave speed c, and "Rate", which controls the oscillation frequency of the sine wave and is in the range [0, 10] [Hz]. All the ranges mentioned above are mapped in the GUI to a [0, 10] non-dimensional scale so as not to confuse the user with different scales. To add to the natural feel of the interaction, another important addition is included in the GUI: the vibration of the membrane is plotted in real-time in a gray scale, inspired by [16], together with the bowing position, plotted in orange with an opacity given by the amount of pressure applied to the Sensel. This is, in the first author's view, one of the paramount features of this digital instrument: one can hear and see what the membrane is doing and adjust the bowing accordingly to find "sweet spots" for the sound.
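The slider normalisation and vibrato described above amount to simple linear and sinusoidal maps; a minimal sketch, assuming the [0, 10] slider scale described in the text (the function names are ours, not from the JUCE implementation):

```cpp
#include <cassert>
#include <cmath>

constexpr double kTwoPi = 6.283185307179586;

// Map a non-dimensional GUI slider value in [0, 10] onto a physical
// parameter range [lo, hi], e.g. "Tuning": c in [15, 150] m/s.
double sliderToParam(double slider, double lo, double hi)
{
    return lo + (slider / 10.0) * (hi - lo);
}

// Vibrato: modulate the wave speed c0 with a sine of amplitude
// "Variation" (0..3 m/s) and frequency "Rate" (0..10 Hz) at time t [s].
double modulatedWaveSpeed(double c0, double variation, double rateHz, double t)
{
    return c0 + variation * std::sin(kTwoPi * rateHz * t);
}
```

The same linear remap serves every slider, which is why a single non-dimensional scale in the GUI is enough.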
The output sound is retrieved from the model by following the state of the open end of the tube (ζ^n_{Nχ}) and amplified to the usual range of amplitudes [-1, 1]. Since the amplitudes of the model states are higher when using a lower value for c, an adjustable gain is used.

EVALUATION

The real-time simulation presented in the previous section was demoed by the first author during a Zoom session with 17 students enrolled in a physical modelling for sound synthesis class. After a demo in which the different parameters of the interface were explored, a qualitative interview and discussion took place. To the question regarding which instrument it was, the answers varied. One student said the sounds were inspired by the Theremin; others mentioned a gong, hand drum, metallic drum, bowed bar, low-frequency saw, a cymbal that is "contact-miced" and bowed, or even "a chair being dragged across the floor". In particular, one student mentioned that it sounded as if one were "inside" the instrument, or that it resembled the sounds a contact microphone might pick up. This was encouraging to hear, as the sound is picked up right at the end of the tube, so in some way the listener is indeed inside the drum itself. Overall, the answers gave some indication that the sonorities of the physical model are reminiscent of a bowed inharmonic resonator, and references to friction were abundant in the students' responses. Even if it was not possible for the viewers to play with the interface, they found the use of the Sensel intuitive and the sound produced natural. This informal evaluation is obviously not ideal; the quality of the instrument is better experienced from the viewpoint of the player. Nonetheless, it provided some indications for further development of the interaction and the GUI associated with the instrument. An important addition to the evaluation would be an objective comparison with the sound produced by various real friction drums. To this end, however, the virtual
instrument would need to be tuned and played accordingly, as the excitation mechanisms are not entirely analogous.

CONCLUSION

In this paper, the development of a real-time audio implementation of a virtual friction-drum-inspired instrument using physical modelling has been presented. FDTD methods are used for simulating the friction drum as a bowed membrane connected to an acoustic tube. Furthermore, an advanced elasto-plastic friction model is used for the excitation mechanism, which is shown to exhibit physically consistent behavior observed in experiments on real musical instruments, such as the presence of a hysteresis loop in the resulting bowing force-velocity plane. Future work may involve the investigation of other possible mappings of the various parameters or the use of different types of controllers. Another important direction is the optimization of the C++ implementation with the aim of reducing the grid size intervals in the numerical model and therefore working closer to the stability conditions. This would produce more accurate results with a broader bandwidth and allow the application to be implemented on a micro-controller and developed as a stand-alone digital instrument.

Copyright: © 2021 Marius George Onofrei et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Figure 1: A picture of the friction drum present at the Danish music museum.

... with the wave speed γ = √(B/ρt) [m/s] resulting from the bulk modulus B [Pa] and density ρt [kg/m^3] of the air inside the tube.
at the bowing position at time t: (xB(t), yB(t)). This is achieved by the use of a 2D Dirac delta function δ(x − xB, y − yB) [m^-2]. Furthermore, the force needs to be scaled with the mass per unit area of the membrane component. As for the bowing force itself, fB [N], it is a function of the relative velocity between the bow and the membrane, v [m/s], and the average bristle displacement, z [m].

Figure 2: (a) A plot of the adhesion map α(v, z) plotted against z when the signs of v and z are the same. (b) Steady-state bristle displacement zss(v) for a constant normal force fN.

The model is discretised into grid points in space and samples in time. The (x, y)-plane of the membrane is discretised as x = l·hm and y = m·hm, with l ∈ [0, ..., Nx] and m ∈ [0, ..., Ny]. Here, Nx = Lx/hm and Ny = Ly/hm are the horizontal and vertical numbers of grid intervals the membrane is divided into, with grid spacing hm [m]. For simplicity, the same spacing is used in both directions. Similarly for the tube, χ = p·ht, where p ∈ [0, ..., Nχ] and Nχ = Lχ/ht is the total number of grid intervals along the tube's length, with grid spacing ht [m]. Time t is discretised as t = nk, where k = 1/fS is the time step, fS [Hz] the sampling frequency, and n ∈ N the temporal index. Furthermore, Jm and JB(xB, yB) are spreading operators. The former is a discretised version of the connection distribution Em, and the latter a discrete 2D Dirac delta function which defines the bowing position in the continuous model. Here we use a first-order 2D spreading function.

Figure 3: Circular grid approximation from a rectangular grid and the normalized Hann distribution used for the connection to the tube, Im. The green crosses are the original grid points from the square Lx × Ly grid, while the red circles are the points used in the calculation.
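The spatial and temporal discretisation above reduces to a few arithmetic relations; a small sketch with illustrative values (not those of Table 1):

```cpp
#include <cassert>
#include <cmath>

// Number of grid intervals for a domain of length L [m] with grid
// spacing h [m], as in Nx = Lx/hm, Ny = Ly/hm and Nchi = Lchi/ht above.
int gridIntervals(double L, double h)
{
    return static_cast<int>(std::floor(L / h));
}

// Time step k = 1/fS [s] from the sampling frequency fS [Hz].
double timeStep(double fS)
{
    return 1.0 / fS;
}
```

In practice the grid spacing is chosen at (or, as in this paper, above) the stability limit, and the interval count follows from it.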
Figure 4: Snapshot showing the displacements of the friction drum's components at a time step in the middle of a bowing simulation, (a) being the longitudinal displacements of the air column in the acoustic tube ζ and (b) being the transverse displacements of the membrane u. The magenta cross highlights the bowing position.

Figure 5a shows the displacement of the membrane at the bowing location during a simulation, as well as the relative velocity and the resulting displacements at the open end of the tube. The membrane indeed shows Helmholtz motion, while the relative velocity exhibits the stick-slip behavior, with values hovering around zero followed by an abrupt drop, after which a new portion of the bow sticks again and the cycle restarts. The waveform of the displacement at the open end of the tube is somewhat more complex and highlights the effect of the interaction of the elements, with the entire membrane contributing to the motion of the air inside the tube, which then feeds back into the membrane.

Figure 6 shows snapshots of the friction drum audio application during use, where, due to variation of the bowing position and force/velocity, different modes of vibration are in resonance. The library allowed mapping the (x, y) touch position to the bowing position, while the pressure was mapped to the bowing force and velocity, linearly coupled. The normal force is limited to the range fN ∈ [0, 20] [N], while the bowing velocity is mapped to the range vB ∈ [0, 0.2] [m/s]. Naturally, the bowing position is limited to the ranges [0, Lx] and [0, Ly].

Figure 6: A screenshot of the real-time audio application where a resonance occurs with (a) mode 2 of vibration and (b) mode 3.
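The linear coupling of Sensel pressure to bowing force and velocity can be sketched as follows (assuming the pressure reading is normalised to [0, 1]; the struct and function names are our own, not from the implementation):

```cpp
#include <cassert>
#include <cmath>

struct BowInput
{
    double fN; // bowing (normal) force [N]
    double vB; // bowing velocity [m/s]
};

// Map a normalised pressure reading p in [0, 1] to the linearly coupled
// bowing force fN in [0, 20] N and bowing velocity vB in [0, 0.2] m/s.
BowInput pressureToBow(double p)
{
    // Guard against out-of-range sensor readings.
    p = p < 0.0 ? 0.0 : (p > 1.0 ? 1.0 : p);
    return { 20.0 * p, 0.2 * p };
}
```

Coupling both quantities to a single pressure value keeps the interaction one-handed: pressing harder both presses and drives the virtual bow faster.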
Table 1: Parameter values used for the friction drum simulation.
Sedentary behaviours during pregnancy: a systematic review

Background

In the general population, at least 50% of time awake is spent in sedentary behaviours. Sedentary behaviours are activities that expend less energy than 1.5 metabolic equivalents, such as sitting. The amount of time spent in sedentary behaviours is a risk factor for diseases such as type 2 diabetes, cardiovascular disease, and death from all causes. Even individuals meeting physical activity guidelines are at a higher risk of premature death and adverse metabolic outcomes if they sit for extended intervals. The associations of sedentary behaviour with type 2 diabetes and with impaired glucose tolerance are stronger for women than for men. It is not known whether sedentary behaviour in pregnancy influences pregnancy outcomes, but if the negative outcomes observed in the general adult population also occur in pregnancy, this could have implications for adverse outcomes for mothers and offspring. We aimed to determine the proportion of time spent in sedentary behaviours among pregnant women, and the association of sedentary behaviour with pregnancy outcomes in mothers and offspring.

Methods

Two researchers independently performed the literature search using 5 different electronic bibliographic databases. Studies were included if sedentary behaviours were assessed during pregnancy. Two reviewers independently assessed the articles for quality and bias, and extracted the relevant information.

Results

We identified 26 studies meeting the inclusion criteria. Pregnant women spent more than 50% of their time in sedentary behaviours. Increased time in sedentary behaviour was significantly associated with higher levels of C-reactive protein and LDL cholesterol, and a larger newborn abdominal circumference. Sedentary behaviours were significantly higher among women who delivered macrosomic infants.
Discrepancies were found in associations of sedentary behaviour with gestational weight gain, hypertensive disorders, and birth weight. No consistent associations were found between sedentary behaviour and other variables such as gestational diabetes. There was considerable variability in study design and methods of assessing sedentary behaviour.

Conclusions

Our review highlights the significant time spent in sedentary behaviour during pregnancy, and that sedentary behaviour may impact on pregnancy outcomes for both mother and child. The considerable heterogeneity in the literature suggests future studies should use robust methodology for quantifying sedentary behaviour.

Electronic supplementary material: The online version of this article (doi:10.1186/s12966-017-0485-z) contains supplementary material, which is available to authorized users.

Background

Sedentary behaviours are activities that expend very low energy, close to the basal metabolic rate, without significantly increasing energy expenditure. This equates to activities such as sitting or lying that utilise less than 1.5 metabolic equivalent units, or times the basal metabolic rate [1,2]. Sedentary behaviours are thus distinct from lack of physical activity, although the latter is sometimes mistakenly used as a marker of sedentary behaviour in the literature [3]. Epidemiological studies have shown that in the general adult population, around 55 to 60% of time awake is spent in sedentary behaviours [4,5]. In the UK, children, young people, adults and older adults spend on average at least half of their waking hours being sedentary [6,7]. In pregnant women the situation appears to be similar or even worse [8][9][10][11][12], although the literature has not been systematically reviewed. The quantity of time spent in sedentary behaviours is a key risk factor for diseases such as type 2 diabetes [13], cardiovascular disease [14], metabolic syndrome [15] and death from all causes [14,16,17].
New evidence also suggests that sedentary behaviour has an adverse effect on mental wellbeing, including depression [3]. Importantly, some studies have shown that even when individuals meet physical activity recommendations, they are still at a higher risk of premature death and adverse metabolic health if they sit for extended intervals [2,[18][19][20]. Sedentary behaviours, mostly television watching, are also linked to a high risk of obesity and type 2 diabetes in the general population, independent of physical activity levels [1,20], and in some studies the associations of sedentary behaviours with type 2 diabetes and with impaired glucose tolerance were stronger for women than for men [18,21,22]. If the negative health outcomes associated with sedentary behaviour in the general population also occur in pregnancy, this could have implications for the development of cardiometabolic complications such as gestational weight gain, gestational diabetes mellitus and hypertension, as well as for mental wellbeing. It is not known whether sedentary behaviour in pregnancy influences outcomes for the baby such as birthweight or gestation at delivery. We aimed to carry out a systematic review of the literature investigating sedentary behaviours during pregnancy to determine: a) the time spent in sedentary behaviours and the prevalence of sedentary behaviours among pregnant women, and b) whether sedentary behaviours are associated with pregnancy outcomes in mothers and offspring.

Methods

Data sources and searches

The Meta-analysis of Observational Studies in Epidemiology (MOOSE) guidelines were followed for the conduct [23], and the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines for the reporting of this systematic review [24]. The systematic review was registered in PROSPERO with the number CRD42015023611.
Two researchers (CF, KL) independently performed the literature search using 5 different electronic bibliographic databases: MEDLINE, EMBASE, Web of Science, CINAHL and SPORTDiscus. The search strategy (Fig. 1) was developed using Boolean operators. In MEDLINE, the medical subject headings (MeSH) used were: pregnant women (used also for pregnant woman), pregnancy (used also for pregnancies and gestation), prenatal care and sedentary lifestyle (used also for sedentary lifestyles). In EMBASE, the main terms used were: pregnant woman (used also for pregnant women), pregnancy (used also for child bearing, childbearing, gestation, gravidity, intrauterine pregnancy, labour presentation, pregnancy maintenance and pregnancy trimesters), prenatal care (used also for ante natal care, antenatal care and antenatal control), prenatal period (used also for antenatal period) and sedentary lifestyle (used also for sedentary life style). The following keywords were also used for plain-text searching in all databases: pregnan*, gestation*, gravid*, antenatal, prenatal, sedentar*, sitting, television, screenbased, TV, watching and viewing. Recursive searching of the reference lists of retrieved articles was performed to identify any additional studies (Additional file 1). Studies were included if the sample comprised pregnant women over 16 years old, and if sedentary behaviours (specified as watching TV, sitting or lying, low energy expenditure activities, etc.) were assessed at any point during gestation. Only published studies were included. There were no exclusions related to study design, language, ethnicity, socioeconomic status, parity or physical condition. Two reviewers (CF, KL) independently assessed articles for inclusion according to the inclusion/exclusion criteria. After screening the titles and abstracts, the reviewers selected potentially relevant studies. If it was not possible to determine relevance from titles and abstracts, full texts were retrieved.
Any disagreements that could not be resolved by consensus were discussed with a third reviewer. Two reviewers (CF, KL) independently extracted relevant information on study characteristics, methodology, and study results using a data extraction form, in order to determine whether the study reported the time that pregnant women spent in sedentary behaviours, the prevalence of sedentarism among pregnant women, and whether the sedentary behaviours were linked to pregnancy outcomes. For presentation in the tables reporting time and proportion of time in sedentary behaviours, we standardised the outcomes (converted them to the same units) in order to make them comparable. Due to the heterogeneity of outcome data, a narrative synthesis was developed. Quality and risks of bias were assessed using objective criteria relating to sample population and recruitment, reliability of instruments, use of validated outcome measures, follow-up, risk of bias and data analysis, using a quality assessment instrument that was modified from the Grading of Recommendations Assessment, Development and Evaluation (GRADE) guidelines used in the assessment of clinical trials [25][26][27][28]. A paper could attain a maximum score of 8, with a score of 1-3 indicating poor quality, 4-6 intermediate, and 7-8 good quality.

Results

From 974 abstracts, 39 full-text articles were assessed and 26 studies met the inclusion criteria for the systematic review (Fig. 1). Most studies were carried out in the USA (n = 11) and Europe (n = 9), and the remaining were in China (n = 2), Africa (n = 1), Canada (n = 1), Australia (n = 1) and Singapore (n = 1). One study included couples (for the purpose of this review we only considered data from the women, not the men) [33]; 2 other studies included both pregnant and non-pregnant women (non-pregnant women were considered in this review when comparisons between the two groups were made) [33,47].
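The quality banding described above (a maximum score of 8, with 1-3 poor, 4-6 intermediate and 7-8 good) can be expressed as a simple lookup; a minimal sketch with an invented function name (the "unscored" label for a score of 0 is our own assumption, as the review's bands start at 1):

```cpp
#include <cassert>
#include <string>

// Band a study-quality score (maximum 8) into the categories used in
// this review: 1-3 poor, 4-6 intermediate, 7-8 good.
std::string qualityBand(int score)
{
    if (score >= 7) return "good";
    if (score >= 4) return "intermediate";
    if (score >= 1) return "poor";
    return "unscored"; // assumption: the review does not band a score of 0
}
```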
Three studies were conducted in Hispanic pregnant women [34,40,43], and 1 in Latina pregnant women [36]. One study was conducted in nulliparous pregnant women, 1 in obese pregnant women [49], 1 in pregnant women with type 1 diabetes mellitus [41], and 1 in pregnant women with sedentary lifestyles [38]. Thirteen studies utilised objective methods to assess sedentary behaviours (accelerometers, pedometers, a combined heart rate and accelerometer device, and indirect calorimetry), and 13 studies employed non-objective measures, including 4 administering the Pregnancy Physical Activity Questionnaire (PPAQ) and 9 using another kind of survey or questionnaire (the Australian Women's Activity Survey, a modified version of the Kaiser Physical Activity Survey, the Behavioral Risk Factor Surveillance System, a modified version of the leisure time activity section of the Physical Activity Scale for the Elderly, and other types of non-objective appraisal methods) (Table 2). The PPAQ has been validated among pregnant women; similarly, 2 of the administered surveys were also validated among pregnant women, while 3 studies used questionnaires that were validated, but not among pregnant women. Finally, 4 of the questionnaires were not validated.

Amount and proportion of time spent in sedentary behaviours (Table 3)

The amount of time spent in sedentary behaviours was estimated in 8 studies using either objective [8-12, 30, 38, 44] or non-objective methods [42] (Table 3). The time spent in sedentary behaviours during pregnancy, assessed objectively, varied between 7.07 and 18.3 h per day. Of these studies, 1 declared that sleeping was included [9], 2 stated that sleep time was not considered [8,11], and the rest did not declare anything regarding sleep [10,12,44]. Meanwhile, the study which assessed sedentary behaviour using a questionnaire found that women spent 2.4 h per day watching television and that mean total sitting time was 8.6 h per day [42] (Table 3).
Among the 5 studies assessing the proportion of time spent in sedentary behaviours, all used objective devices, finding that pregnant women spent more than 50% of their time (range 57.1 to 78%) in sedentary activities [8][9][10][11][12] (Table 3).

Definitions of sedentary behaviours

The definition of time spent in sedentary behaviours differed according to the method of assessment. Studies that used accelerometers defined activities with less than 100 counts per minute as sedentary behaviours, while for combined heart-rate and activity monitors, activities expending 1.5 metabolic equivalents or less were classed as sedentary. Meanwhile, non-objective methods focused mostly on television viewing and sitting time.

Prevalence of sedentarism among pregnant women (Table 4)

Five studies determined the prevalence of sedentarism among the pregnant population; all except 1 [30] used non-objective methods to assess activity behaviour, and all used their own cut-offs to classify women as sedentary. Two used the term "sedentary", defining this as <5000 daily steps [30] or considering women as 'sedentary' if they declared "watching television, or pursuing some other sedentary occupation" as the most appropriate description of their activities [35], respectively. One study focused on the second trimester of pregnancy and found that the prevalence of sedentarism was 18% [30]; the other study assessed women in the third trimester of pregnancy, finding that 29% were sedentary [35]. Three studies analysed the prevalence of sedentary women; however, these 3 studies did not use the term 'sedentary', but used different activity categories defined variously by the authors as "watching television (for a certain amount of time)" or being "mostly sitting".
One study found that 15.3% of the studied women watched television or videos for 5 or more hours per day [37], another found that 34% viewed television 2 h or more per day [29], and the last one found that 31.9% watched television more than 21 h per week, i.e. about 3 h per day [42]. Additionally, 1 of the studies found that 24% of women were "mostly sitting" during usual daily activities [37] (Table 4). Comparison of data was difficult due to the different cut-offs used to define sedentary behaviour and to categorise sedentarism.

Change in sedentary behaviour during pregnancy

Among the included studies, 5 aimed to determine whether time spent in sedentary behaviours was stable or changed during gestation [8,[10][11][12]37]. Four of these studies examined minutes per day or percentage of day spent in sedentary activities based on objective measures [8,[10][11][12]. Of these, only 1 found that the percentage of time awake spent in sedentary behaviours significantly increased between week 18 and 35 of gestation [8]. Another study found that women spent a mean of 40 min (standard deviation ±75) less in "very light sitting activities" (activities that expend around 1.3 times the basal metabolic rate) in later gestation than in earlier gestation [38]. The 3 studies which objectively assessed time, or percentage of monitored time, spent in sedentary behaviours did not find significant differences in time spent in sedentary behaviours between trimesters of gestation [10][11][12]. When focused on the number of sedentary pregnant women across gestation, more women were sedentary during the third trimester than during the second trimester (18%, n = 155; 24.9%, n = 215, respectively) [30]. When the time spent between trimesters in TV watching and computer use was compared, no differences were found [37]. Five studies compared sedentary behaviours between pregnant and non-pregnant women [35,38,42,43,47].
Four compared from before pregnancy to during pregnancy, and 1 compared pregnant women versus women one year postpartum [38]. Three studies used non-objective methods [35,42,43], and 2 used objective procedures [38,47] to assess sedentary behaviours. All found that the time spent in sedentary activities was significantly greater among pregnant than non-pregnant women. When the number of women who watched television for long periods was compared before and after pregnancy, 1 study observed that the number increased [42], and the other found no change [29].

Additional factors affecting sedentary lifestyles

Some studies considered additional factors which could influence the development of sedentary lifestyles. These factors included smoking, meeting physical activity recommendations, parity, maternal age, and education level. Time spent in sedentary behaviours was significantly less among women who had smoked cigarettes in the past 5 days compared to those who had not [11]. Time spent in sedentary behaviours at 35 weeks of gestation was significantly less among women meeting physical activity guidelines compared to women who did not [8]. During pregnancy, women expecting their first child decreased their sedentary time significantly more than non-pregnant women without children, and first-time pregnant women also decreased their sedentary time significantly more than those expecting their second baby as pregnancy advanced [33]. When changes before and during gestation were compared, women aged 16-19 years significantly decreased their sedentary activity compared to those aged 20-24 years. Women who had completed college also significantly decreased their sedentary activity during pregnancy compared with those with less than a high school education [43].

Interruptions during sedentary time

One study focused on the transitions from sit to stand, using an objective device that evaluates postural allocation [8].
No differences were found in sit/lie and upright time between week 18 and 35 of gestation. However, the number of transitions from sedentary (sit/lie) to upright per day and the number of sit/lie bouts increased significantly from week 18 to week 35 of gestation, whilst the length of sit/lie bouts in minutes per day significantly decreased across this gestation window.

Associations between sedentary behaviours and pregnancy outcomes (Table 5)

Three studies investigated whether there is an association between sedentary behaviours and gestational weight gain [12,30,40]. One study found no association between percentage of time spent in sedentary behaviours and gestational weight gain at 15 weeks of gestation, between 15 and 32-35 weeks of gestation, or with gestational weight gain per week [12]. Likewise, change in percentage of time in sedentary behaviours from 15 to 32-35 weeks of gestation was not associated with total gestational weight gain or with gestational weight gain per week. Another study also observed no significant associations between sedentary activity and inadequate or excessive gestational weight gain at each stage of pregnancy [40]. However, in another study the 'active' group (per the authors' categorisation) gained significantly less maternal weight during the second and third trimesters than the 'sedentary' group [30]. Three studies explored the association between pregnancy sedentary behaviours and hypertensive disorders during gestation.
Two studies found no association [34,44], but 1 study found that women who had persistently sedentary work and were not authorised to move from their workplace during working hours, such as sewing operators, developed significantly more gestational hypertension than women in the control group, whose work was also mostly sedentary but who were allowed to move during working time, such as secretaries [46]. No association was found between pregnancy sedentary behaviours and depression [45].

Associations between sedentary behaviours and metabolic outcomes (Table 5)

The relationship between time spent in sedentary behaviours and fasting glucose levels was analysed in 1 study, which found a positive association [44]. On the other hand, sedentary behaviours were not associated with altered insulin sensitivity [47], gestational diabetes mellitus [49], or abnormal glucose tolerance [36]. Two studies found associations between sedentary behaviours and C-reactive protein (CRP) [10,44]. In 1 study, sedentary time and the proportion of wear time spent sedentary were positively associated with CRP among women in the second trimester, but this finding was no longer statistically significant in analyses adjusting for confounders [10]. In the other study, the positive association between sedentary behaviours and CRP levels remained after adjustment for confounders [44]. A significant positive association between time spent in sedentary behaviours and higher LDL cholesterol was found in 1 study, but no association was found with any other blood lipid marker [44].

Associations between sedentary behaviours and infant outcomes (Table 5)

Two studies found no association between birth weight and mothers' sedentary behaviours during pregnancy [12,32].
One study found a significant association between lower birthweight and time spent in a sedentary lifestyle in each trimester of gestation [31], whilst another found that women who delivered macrosomic infants (birthweight ≥4000 g) spent significantly more time sedentary than women delivering offspring weighing less than 4000 g [39]. The 1 study exploring the correlation between newborn abdominal circumference (as an indicator of abdominal adiposity) and the mother's time spent sedentary found differing results according to gestation. At 16-18 weeks of gestation, a significant inverse association was found between infant abdominal circumference and time spent sedentary; however, at 36 weeks of gestation, the relationship became significantly positive [49]. No associations were found between sedentary behaviours and gestational length [12,31], or risk of preterm delivery [31]. The 2 studies that were classified as good quality were randomised controlled trials. Of those classified as poor quality, the main reasons were small sample size [45,46], use of a non-objective appraisal tool to classify women as sedentary [37,45,46] and lack of detail about the outcome measures [37,46].

Main findings

There is increasing interest in research in the general population about whether reducing time spent in sedentary behaviours has a beneficial effect on health [50,51]. Here we systematically reviewed the literature in this field among pregnant women. Our key findings were that pregnant women spend at least half of their time in sedentary activities, which is similar to the time reported in children, young people, adults and older adults in the UK [6]. Whether sedentary behaviours impact on pregnancy outcomes was less clear-cut, with inconsistencies in the literature. Our review highlights the considerable heterogeneity in the definitions of sedentary behaviours and the methods used to assess this.
Differences in the reported prevalence of sedentary behaviours between studies could be due to the unclear definition of sedentary behaviours, or of the classification of being sedentary. For example, 1 study used a pedometer, an objective method, to classify women as sedentary, considering less than 5000 steps per day a sedentary lifestyle [30], whereas in another study women were considered sedentary if they answered "Reading, watching television, or pursuing some other sedentary occupation" as the most appropriate description of their activities during pregnancy [35]. Many of the included studies defined sedentary behaviours as activities expending the same as or less than one metabolic equivalent [39,41]; however, there is no consensus on how many hours per day spent in sedentary behaviours are sufficient to be categorised as sedentary, making it difficult to determine the prevalence of sedentarism. In addition, sedentary behaviours were often assessed retrospectively [32,35], potentially introducing recall bias. Studies also differed in the assessment measures used to quantify sedentary behaviours, making comparisons difficult. This corresponds with what has been reported regarding the assessment of sedentary behaviours in other populations [6]. Half of the identified studies considered whether sedentary behaviour in pregnancy impacted on maternal or offspring outcomes. This is an important consideration, as interventions based on increasing physical activity among obese pregnant women have had limited impact on pregnancy outcomes [49,[52][53][54][55]. One study found that reducing time spent in sedentary activity was associated with decreased gestational weight gain [30]. Two other studies, including a large study of >1000 women, found no associations with gestational weight gain [12,40]. Likewise, there were discrepancies in studies examining associations of sedentary behaviours with hypertensive disorders [34,44,46].
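The two contrasting classification rules above (an objective pedometer cut-off versus a single self-report answer) can be sketched as follows. The 5000 steps/day threshold and the questionnaire wording come from the cited studies; the function names and the example step counts are hypothetical illustrations, not data from any reviewed paper.

```python
# Illustrative sketch of two ways reviewed studies classified women as "sedentary";
# the function names and example data are hypothetical.

PEDOMETER_CUTOFF = 5000  # steps/day, the cut-off used in the pedometer-based study [30]

def sedentary_by_pedometer(daily_steps):
    """Objective rule: mean daily steps below the cut-off => sedentary lifestyle."""
    mean_steps = sum(daily_steps) / len(daily_steps)
    return mean_steps < PEDOMETER_CUTOFF

def sedentary_by_self_report(answer):
    """Self-report rule: a single questionnaire answer determines the label."""
    return answer == ("Reading, watching television, or pursuing "
                      "some other sedentary occupation")

# The same woman can be classified differently under the two rules:
steps = [4200, 4800, 5100, 4500, 4700, 4900, 4600]
print(sedentary_by_pedometer(steps))                           # True (mean ~4686 steps/day)
print(sedentary_by_self_report("Mostly standing or walking"))  # False
```

The contrast makes the review's point concrete: the two rules measure different constructs, so prevalence estimates built on them are not directly comparable.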
Notably, the 1 study which found a significant association was classified as poor quality, which decreases the reliability of the result [46]. Differences in ethnicity between the study populations may partly explain the discrepant findings for gestational weight gain (1 study was conducted in Denmark, another included only Latin-American pregnant women, and 1 was conducted in China) and for hypertensive disorders (1 included only Latin-American women, 1 was conducted in the USA and 1 in China). No association was found between depression and sedentary behaviours; however, the 1 study focusing on this was classified as poor quality [45]. None of the studies reported associations between sedentary behaviour and glucose metabolism, as assessed by fasting glucose levels [44,49], insulin sensitivity (measured using an oral glucose tolerance test) [47], gestational diabetes mellitus (GDM) [49] and, in a large study of >1000 women, glucose tolerance measured during a glucose tolerance test [36]. In contrast, 2 studies found associations between higher CRP levels and increased sedentary behaviour [10,44], and 1 found an association with blood lipids [44], suggesting there may be subtle beneficial effects on maternal metabolism if time spent sedentary is reduced. Overall, there was some suggestion that sedentary behaviours may impact on size at birth [31,39,49], but not on timing of delivery [12,31]. However, the largest study, including over 11,000 pregnant women, which reported associations of sedentary behaviour with birthweight but not gestational length or risk of preterm birth, assessed sedentary behaviours during pregnancy via a postal questionnaire with the question "Are/were you mostly sitting?" [31].

Strengths and limitations

The strengths of this review include the systematic and comprehensive review process, which was followed in line with PRISMA guidelines.
Two researchers independently assessed the eligibility of the titles, abstracts and full-text studies, extracted the data and assessed the articles for bias. A further strength of the review is that many of the studies were of considerable sample size. Eleven studies included samples of over 1000 women [29,34,36,37,40,42,43], including 2 assessing more than 4000 women using validated questionnaires [32,35]. Nevertheless, larger studies using objective assessments of sedentary behaviour in pregnancy would considerably add to the literature in this field. There are also some potential limitations. Though we used a robust search strategy developed from other systematic reviews of sedentary behaviour in the general population [2,56,57], it is possible that some potentially eligible studies may not have been identified. For example, some studies appraise sedentary behaviours when assessing physical activity, but their titles do not mention the key words we chose to identify sedentary behaviours. We therefore searched the reference lists of all papers whose full text was read, to identify any additional papers. A limitation of the data is that only 2 of the identified studies were trials; all the rest were observational. Of the trials, just 1 used an objective method to assess sedentary behaviours; the other employed a questionnaire. Of the 24 observational studies, only 12 used objective instruments; the other 12 utilised self-reported methods to assess sedentary behaviours. Most of these studies were considered of intermediate quality due to small sample size, or lack of a validated questionnaire or objective measurement. Therefore, the use of objective methods, such as accelerometers, or the combination of movement and physiological (e.g. heart rate) devices should be encouraged if we wish to provide a clearer, more realistic and objective estimate of time spent in sedentary behaviours.
Also, the cut-offs used to define sedentary behaviours and to categorise people as sedentary are unclear and differ between studies, and should be standardised. Although 3 studies (11.5%) were classified as poor quality, one of these [37] did not report any maternal or infant outcomes and so will not have influenced our interpretation of the literature. As noted, the findings of the other 2 poor-quality-rated studies [45,46] should be interpreted with caution. The rest of the studies were classified as at least intermediate quality, mostly because the designs were less reliable (not randomised controlled trials), most of the sample sizes were small, and some utilised assessment methods that were non-objective and/or not validated, but we are confident that they are representative of the available literature.

Conclusions

The observation that pregnant women spend much of their time in sedentary activities opens up new approaches to improving pregnant women's health. However, our review has identified important gaps in our understanding of this field. For example, only 2 studies considered sleeping time during pregnancy [8,38], which may be an important consideration when assessing sedentary behaviour due to changing sleep patterns in pregnancy. Further, only 1 study assessed transitions from sitting/lying to standing, or breaks during sedentary time [8], which may be an important area to target in future intervention studies. Our review highlights a high prevalence of sedentarism and significant time spent in sedentary behaviours, and suggests that changes in sedentary behaviour may impact on pregnancy outcomes for both mother and child, emphasising this as an area for future mechanistic and intervention studies. However, the heterogeneity in the literature suggests future studies should use robust methodology, preferably with objective measures for quantifying sedentary behaviour.
Decoding the Synaptic Proteome with Long-Term Exposure to Midazolam during Early Development

The intensive use of anesthetic and sedative agents in the neonatal intensive care unit (NICU) has raised concerns about potential neurodevelopmental risks. This study focused on midazolam (MDZ), a common benzodiazepine regularly used as a sedative on neonates in the NICU. Mounting evidence suggests a single exposure to MDZ during the neonatal period leads to learning disturbances. However, a knowledge gap that remains is how long-term exposure to MDZ during very early stages of life impacts synaptic alterations. Using a preclinical rodent model system, we mimicked a dose-escalation regimen on postnatal day 3 (P3) pups until day 21. Next, purified synaptosomes from P21 control and MDZ animals were subjected to quantitative mass-spectrometry-based proteomics to identify potential proteomic signatures. Further analysis by ClueGO identified enrichment of proteins associated with actin-binding and the protein depolymerization process. One potential hit identified was alpha adducin (ADD1), belonging to the family of cytoskeleton proteins, which was upregulated in the MDZ group and whose expression was further validated by Western blot. In summary, this study sheds new light on how long-term exposure to MDZ during the early stages of development impacts synaptic function, which could subsequently perturb neurobehavioral outcomes at later stages of life.

Introduction

Approximately 15 million babies are born prematurely each year globally [1,2], with most of them requiring surgery and mechanical ventilation to increase their survival rates. In the neonatal intensive care unit (NICU) setting, these preterm neonates are often treated for prolonged periods with sedative medications such as opioids, benzodiazepines, or ketamine to mitigate pain and reduce agitation [3,4].
The intensive use of analgesia and sedation in the NICU has raised concerns at the FDA about the potential implications for brain and cognitive development. Midazolam (MDZ) is a common benzodiazepine used in the NICU to relieve anxiety before major surgical procedures and as a drug to control seizures. Previous studies have described MDZ sedation as contributing to spatial learning and memory impairments in vivo and disrupting synaptogenesis in vitro [5]. Importantly, it has been shown that a single exposure to MDZ together with other anesthetic agents causes synaptic alterations and later causes learning disturbances in both clinical and preclinical models [4][5][6]. However, to date, no studies have characterized how long-term exposure to MDZ during very early stages of development induces changes at the synaptic level. Accordingly, we employed high throughput quantitative mass-spectrometry-based proteomics on purified synaptosomes to identify synaptic protein signatures and functional pathways impacted by long-term MDZ exposure using a preclinical rodent model.

Long-Term MDZ Exposure Alters the Synaptic Proteome

To ascertain if long-term MDZ exposure during early stages of life induced alterations in the synaptic proteome, we subjected P21 purified synaptosomes from the control and MDZ groups to high throughput quantitative mass-spectrometry-based proteomics. A total of 2262 proteins were identified. Further employing criteria of 2+ unique peptides and p < 0.05, we identified 433 proteins as differentially expressed between the two groups (Table S1). Figure 1A shows the Venn diagram of the differentially expressed proteins after MDZ exposure, based on a criterion of 1.5-fold up- or downregulation and p < 0.05. A total of 139 proteins were upregulated, while 39 were downregulated. Furthermore, principal component analysis (PCA) revealed good reproducibility of the biological replicates and overall separation between the groups (Figure 1B).
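The filtering criteria just described (at least 2 unique peptides and p < 0.05 to call a protein differentially expressed, then a 1.5-fold change cut-off for the up/down split) amount to a simple table filter. The sketch below illustrates that logic on a tiny invented protein table; the peptide counts, p-values and fold changes are made up for illustration, not the study's data.

```python
# Illustrative DEP filter mirroring the criteria in the text:
# >= 2 unique peptides and p < 0.05, then |fold change| >= 1.5 for up/down calls.
# The rows below are invented examples.

proteins = [
    # (name, unique_peptides, p_value, fold_change MDZ/control)
    ("Add1",   5, 0.004, 1.75),   # up in MDZ
    ("Cox4i1", 3, 0.010, 0.55),   # down in MDZ
    ("ActB",   9, 0.400, 1.10),   # not significant
    ("Gphn",   1, 0.020, 2.00),   # fails the unique-peptide criterion
]

# Step 1: differential expression requires >= 2 unique peptides and p < 0.05.
deps = [p for p in proteins if p[1] >= 2 and p[2] < 0.05]

# Step 2: classify DEPs by the 1.5-fold threshold (down = fold change <= 1/1.5).
up   = [p[0] for p in deps if p[3] >= 1.5]
down = [p[0] for p in deps if p[3] <= 1 / 1.5]

print("DEPs:", [p[0] for p in deps])   # ['Add1', 'Cox4i1']
print("up:", up, "down:", down)        # up: ['Add1'] down: ['Cox4i1']
```

Note that, as in the paper, the significance filter and the fold-change filter are separate steps, which is why the 433 DEPs exceed the 139 + 39 proteins passing the fold-change cut-off.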
Next, using the bioinformatics tool ClueGO, we analyzed the molecular functions and biological processes enriched with these DEPs (Figure 2). The most abundant biological process was negative regulation of the protein depolymerization process, with 26.32% of gene ontology (GO) terms associated with this process, followed by the tricarboxylic acid cycle, ADP metabolic process, protein binding, etc. Interestingly, two enriched functional groups shared an equal 33.33% of the GO terms, associated with actin-binding and cytochrome-c oxidase activity. Additionally, 16.67% of the GO terms related to DEPs were involved in pyridoxal phosphate binding. The remaining 16.67% were engaged in chloride transmembrane transporter activity. A list of GO terms and associated genes can be found in Table S2.
We further investigated potential enriched pathways associated with the DEPs using ingenuity pathway analysis (IPA). As seen in Figure 3, pathways associated with synaptogenesis signaling, oxytocin signaling, and PKA signaling were enriched after MDZ exposure, while oxidative phosphorylation was downregulated. These data overall suggest that long-term MDZ treatment does have an impact on the synapse by dysregulating key molecular and biological processes. Gene-to-disease associations are provided in Table S3.

Figure 3. Enriched pathways associated with long-term MDZ exposure. The pathways are ranked by the negative log of the FDR-corrected p-value of the enrichment score and color-coded according to the Z score. Significantly increased pathway activity is indicated by a positive Z score, represented by the orange bars, and an overall decrease in pathway activity is represented by a negative Z score, represented by blue bars. The gray bar represents enriched pathways with no predicted activity change.
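The ranking convention described for Figure 3 (pathways ordered by the negative log of the FDR-corrected p-value, with activity direction read from the Z score) can be sketched as follows. The pathway names are taken from the text, but the numeric scores are invented placeholders, not the study's IPA output.

```python
import math

# Hypothetical enrichment results: pathway names from the text, numbers invented.
pathways = [
    ("Synaptogenesis signaling",  1e-6,  2.1),   # positive Z => predicted activation
    ("Oxytocin signaling",        1e-4,  1.4),
    ("PKA signaling",             5e-4,  0.8),
    ("Oxidative phosphorylation", 1e-5, -2.5),   # negative Z => predicted inhibition
]

# Rank by -log10(FDR-corrected p), largest first, as in the figure.
ranked = sorted(pathways, key=lambda x: -math.log10(x[1]), reverse=True)
for name, fdr_p, z in ranked:
    direction = "activated" if z > 0 else "inhibited" if z < 0 else "no prediction"
    print(f"{-math.log10(fdr_p):5.1f}  Z={z:+.1f}  {name} ({direction})")
```

The key design point is that the bar length (enrichment strength) and the bar color (predicted direction of activity) encode two independent quantities, which is why a strongly enriched pathway can still be predicted as inhibited.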
Since high throughput omics studies generally generate many potential hits, it is imperative to further validate them. Based on our ClueGO analysis, which identified negative regulation of the protein depolymerization process as the most abundant process, we accordingly focused on validating hits associated with this function.
We generated a heatmap of the DEPs associated exclusively with the negative regulation of the protein depolymerization process and actin-binding function (Figure 4).

Figure 4. Heatmap visualization of differentially regulated proteins associated with actin-binding and negative regulation of protein depolymerization from ClueGO analysis, measured in the saline and midazolam-exposed samples from all six biological replicates from each group. The arrow highlights alpha-adducin (the protein selected for post validation).

One such hit we identified was alpha adducin (ADD1), which belongs to the cytoskeleton protein family. ADD1 was upregulated +1.75-fold in the MDZ group, and its expression level was further validated by Western blot (Figure 5).

Figure 5. Validation of ADD1 upregulation after MDZ exposure. A representative Western blot is depicted here. GAPDH was used as an internal control.
Data are represented as Mean ± SEM (n = 15/group) and significance was determined with an unpaired t-test after Welch's correction. * p < 0.05.

Discussion

In our current study, we show, for the first time, alterations in the synaptic proteome associated with long-term MDZ exposure in a rodent model. Our main findings highlighted up- and downregulated differentially expressed proteins (DEPs) involved in various molecular functions (e.g., actin-binding, cytochrome c oxidase, pyridoxal phosphate binding) and biological processes (e.g., protein depolymerization, tricarboxylic acid cycle, central nervous system neuron development) with long-term exposure to MDZ. We also uncovered potential pathways associated with the DEPs, such as synaptogenesis signaling, protein kinase A signaling, and oxidative phosphorylation. Altogether, these findings provide new insights into how long-term exposure to MDZ during the early stages of development can impact neurodevelopmental outcomes, especially synaptic function. The developing brain is vulnerable to constant exposure to neurotoxic substances [17,18].
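The group comparison named in the Figure 5 caption, an unpaired t-test with Welch's correction (i.e., not assuming equal variances), can be written out directly from the Welch formulas. The band-intensity values below are invented stand-ins (the study's densitometry data are not given); only the test itself and the n = 15/group design follow the text.

```python
# Minimal sketch of Welch's unpaired t-test, the comparison named in the
# Figure 5 caption. Data values are invented examples, not the study's
# measurements.
from statistics import mean, variance

def welch_t(a, b):
    """Return Welch's t statistic and the Welch-Satterthwaite degrees of freedom."""
    va, vb = variance(a), variance(b)     # sample variances (n-1 denominator)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb               # squared standard error of the difference
    t = (mean(a) - mean(b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical normalized ADD1/GAPDH band intensities (n = 15/group as in the study):
mdz    = [1.9, 1.6, 1.8, 2.1, 1.7, 1.5, 1.8, 2.0, 1.6, 1.9, 1.7, 1.8, 2.0, 1.6, 1.9]
saline = [1.0, 1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9, 1.0, 1.0, 1.1, 0.9, 1.2, 0.8]

t, df = welch_t(mdz, saline)
print(f"t = {t:.2f}, df = {df:.1f}")      # large positive t => MDZ group higher
```

Welch's correction matters here because the two groups need not have equal variances; the degrees of freedom are estimated from the data rather than fixed at n1 + n2 − 2.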
Previous studies have provided evidence showing that anesthetics and sedative agents can modulate brain connectivity and neuronal circuits [19][20][21]. The formation of neural circuits is driven by a process called synaptogenesis, which is highly dynamic and balances both synapse formation and elimination [22]. A study by De Roo et al. showed that mice that receive a single dose of MDZ in early development have a higher rate of synaptogenesis at postnatal days (P) 15, 20, and 30 [23]. One possible reason is that enhanced synaptogenesis could be a compensatory effect to aid in the possible loss of spines with a single acute dose of MDZ. Interestingly, a study by Xu et al. revealed that neonatal mice that received MDZ repetitively for five days have fewer synapses formed when they reach adulthood (P63) [5]. A potential explanation for this observation could be that multiple repetitive doses of MDZ induce more toxicity at the synapse, thus resulting in a lower count. These two studies point to the fact that long-term exposure to MDZ, depending on the number of exposures, can either increase synapse formation or lead to faster elimination during early development. Our findings here provide more support to the study by De Roo et al., based on the observation from our IPA analysis showing enrichment of the synaptogenesis signaling pathway after MDZ exposure (Figure 3). Learning and memory constitute significant aspects of neurodevelopment. Synaptic plasticity, which typically refers to the activity-dependent change in the strength and efficacy of synaptic transmission, is the feature that reflects learning and memory storage potential [24,25]. Earlier studies have suggested that exposure to MDZ during development subsequently leads to cognitive deficits and learning disturbances.
Specifically, studies in rodents showed that exposure to anesthetic and sedative agents resulted in poor outcomes on cognitive tasks such as the Morris water maze, radial arm maze, and Y-maze tests in the exposed animals [5,26]. Moreover, one recent clinical study showed that extremely preterm infants receiving opioids and benzodiazepines during their NICU stay were more likely to have lower cognitive, motor, and language scores than infants with no exposure [27]. Altogether, these studies suggest potential alterations in synaptic plasticity with long-term exposure to sedatives and anesthetics. A significant aspect associated with synapse function is the modulation of the actin cytoskeleton [28,29]. The actin cytoskeleton is essential to cellular processes involving membrane dynamics, such as cell motility and morphogenesis [30]. Actin exists in monomeric globular (G-actin) and filamentous (F-actin) states [31]. The dynamic polymerization and depolymerization between G- and F-actin drive the morphological changes in dendritic spines that are associated with synaptic plasticity [32,33]. Actin regulators, including actin-binding proteins (ABPs), can facilitate actin polymerization, promote disassembly, or stabilize filaments. Altogether, the dynamic regulation of the actin cytoskeleton in dendritic spine development marks it as a notable area for investigating the mechanisms underlying abnormal or dysfunctional synapse formation upon exposure to anesthetics and sedatives. Our ClueGO analysis (Figure 2 and Table S2) identified molecular and biological processes associated with actin-binding and negative regulation of protein depolymerization as significantly enriched after MDZ exposure. One critical potential hit we identified and further validated was alpha adducin, or ADD1 (Figures 4 and 5). In mammalian cells, the adducin family (alpha, beta, and gamma) is ubiquitously expressed.
Multiple studies have highlighted the importance of adducin in neural cell signal transduction [34,35]. Adducin promotes the binding of actin to spectrin and may affect cytoskeletal transport, cell structure, and modulation of Na+/K+ pump activity [36,37]. In Drosophila, deletion of adducin results in an overgrowth of large-diameter presynaptic boutons and an increase in synaptic retractions at the neuromuscular junction, while overexpression of adducin inhibits the formation of small-diameter type II and type III boutons [34,35]. Another study, using the nematode C. elegans model, implied that ADD1 contributes to learning and memory: deleting ADD1 in C. elegans impaired short- and long-term memory by destabilizing actin at the synapse [38]. In the mouse model, ADD1 knockdown interferes with the axon's structure and integrity [39]. Our study observed an upregulation of ADD1 (Figure 5), which may reflect stabilization of the cytoskeletal architecture perturbed by MDZ exposure. Future investigations into the functional role of ADD1 and the mechanisms associated with MDZ exposure are needed to establish the link between ADD1 and synaptic function. Additionally, we found that cytochrome c oxidase (COX)-related proteins, including Cox4i1, Cox5b, and Cox6c (Figure 2, Tables S1 and S2), are downregulated after MDZ exposure. Furthermore, these proteins are also associated with the deactivation of the oxidative phosphorylation pathway seen in the IPA analysis (Figure 3, Table S3). Eukaryotic COX is the terminal enzyme of the energy-transducing mitochondrial electron transport chain [40]. COX is located in the inner mitochondrial membrane, facilitating the transfer of electrons from reduced cytochrome c to molecular oxygen. COX also participates in proton pumping, which generates the electrochemical gradient for ATP synthesis [40,41]. Neurons' activities and functions, including synapse formation, depend on ATP [42,43].
Notably, oxidative phosphorylation in the brain's mitochondria generates approximately 90% of its ATP [44]. Synaptic mitochondria are critical for sustaining neurotransmission, and this process is controlled by energy metabolism, mitochondrial distribution and trafficking, as well as cellular synaptic calcium flux [44][45][46]. Synaptic loss is an early but progressive pathological event in Alzheimer's disease (AD) that causes cognitive impairment and memory loss, and it is thought to be prevalent especially in the later stages of the disease [46]. Interestingly, an association study explored the involvement of these COX-related genes in contributing genetic risk for developing AD in the Han Chinese population [47]. Taken together with this relationship between COX and synaptic loss, our findings may help explain the impaired synaptic activity and possibly cognitive function seen with long-term exposure to anesthetics and sedatives. In summary, our study provided a comprehensive characterization of the synaptic proteome, yielding novel insights into how long-term exposure to MDZ during the early stages of development alters it. Importantly, the identification of ADD1 as a potential target, and further characterization of its downstream mechanisms, can lend further insight into its role as a potential therapeutic target for treating neurodevelopmental alterations associated with long-term MDZ use in neonates.

Animals

Pregnant dams: Pregnant Sprague Dawley rats were obtained from Charles River Laboratories Inc. (Wilmington, MA, USA) and housed individually in a 12 h light-dark cycle. All animals were fed ad libitum and allowed to birth naturally. All procedures and protocols were approved by the Institutional Animal Care and Use Committee of the University of Nebraska Medical Center and conducted in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals.
Midazolam Treatment

Starting at postnatal day 3 (P3), pups were given a single subcutaneous (s.c.) injection of 1 mg/kg midazolam (mixed with sterilized saline and administered at a uniform volume of 100 µL/10 g of birth weight of the pup), ramped up using a dose-escalation method until day 21 to closely mimic the increments performed in a NICU (Figure 6). Due to faster drug metabolism in rodents than in humans, a higher relative dose is required to induce an equivalent of 60 min of sedation, similar to the NICU setting [6]. This dosage was adapted from previously published studies [5,6,26,48,49]. Immediately after injection, pups were monitored for any distress, including placing them under a heating lamp to prevent thermal loss. Pups were then evaluated on four reflex scales (posture, righting, cornea, and tail reflex), as described in [5], to determine sedation status. The pups were monitored an additional 2-3 times throughout the day to ensure that they were nursing post-treatment. For this study, pups were sacrificed at P21, and brains were harvested on ice and stored at −80 °C.

Purified Synaptosome Isolation

To investigate the effects of long-term midazolam exposure in early development on synaptic transmission, we isolated purified synaptosomes following the protocol previously described in our earlier publication [50]. Specifically, 100 mg of brain cortex was homogenized in 10 volumes of ice-cold homogenization buffer (0.32 M sucrose, 5 mM HEPES, 0.1 mM EDTA) containing protease-phosphatase inhibitors (Thermo Scientific, Waltham, MA, USA) with 12 strokes using a Dounce homogenizer with a Wheaton overhead stirrer (Wheaton, Millville, NJ, USA) at 250-300 rpm. The homogenized solution was spun at 1000× g for 10 min at 4 °C, and the supernatant was collected.
A small aliquot of this homogenate was set aside, followed by centrifugation at 12,000× g for 20 min at 4 °C to obtain a crude synaptosome pellet. The crude synaptosome pellet was carefully resuspended in homogenization buffer containing protease and phosphatase inhibitors, overlayed on top of sucrose gradients, and spun at 145,00× g for 1 h 40 min at 4 °C using an SW41 Ti rotor (Beckman Coulter, Brea, CA, USA). The synaptosome band (approximately 1 mL) at the interface of 0.8 and 1.2 M sucrose was harvested using an 18-gauge needle and resuspended in 9 mL of homogenization buffer. The solution was again spun at 14,500× g for 45 min at 4 °C using the SW41 Ti rotor (Beckman Coulter, Indianapolis, IN, USA), which resulted in the purified synaptosome pellet. This pellet was resuspended in 200 µL of 1× PBS containing protease-phosphatase inhibitors, followed by passing through a 27-gauge needle several times until completely resuspended.
Mass Spectrometry Analysis

Protein quantification was performed using the Pierce BCA protein assay (Thermo Scientific, Rockford, IL, USA), as described in our earlier studies [51][52][53][54][55]. The mass spectrometry analysis was performed by the UNMC Mass Spectrometry Core (Omaha, NE, USA), following the label-free quantitative mass spectrometry protocol described in our recently published studies [55][56][57]. Specifically, 50 µg of protein per sample (n = 6/group) was subjected to chloroform-methanol extraction to remove the detergent from each sample. Prior to mass spectrometric analysis, the protein pellet was resuspended in 100 mM ammonium bicarbonate and digested with MS-grade trypsin (Thermo Fisher, Waltham, MA, USA) overnight at 37 °C. The peptides were then cleaned using PepClean C18 spin columns (Thermo Scientific, Waltham, MA, USA) and resuspended in 2% acetonitrile (ACN) and 0.1% formic acid (FA).
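BCA quantification, as used above, works by fitting a standard curve of absorbance (A562) against known BSA concentrations and then inverting it for unknown samples, from which the volume carrying a fixed protein mass (here 50 µg per digest) follows. A minimal sketch of that arithmetic; the standard concentrations and absorbance readings below are invented for illustration and are not data from this study:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b (absorbance vs. standard conc.)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

def concentration_from_absorbance(a562, slope, intercept):
    """Invert the standard curve to estimate sample concentration (ug/mL)."""
    return (a562 - intercept) / slope

def volume_for_ug(target_ug, conc_ug_per_ul):
    """Volume (uL) of sample needed to load a target protein mass."""
    return target_ug / conc_ug_per_ul

# Hypothetical BSA standards (ug/mL) and A562 readings:
standards = [0, 125, 250, 500, 1000]
readings = [0.05, 0.20, 0.35, 0.65, 1.25]
slope, intercept = fit_line(standards, readings)
sample_conc = concentration_from_absorbance(0.50, slope, intercept)  # ug/mL
vol = volume_for_ug(50.0, sample_conc / 1000.0)  # uL for a 50 ug digest
```

In practice the kit software (or a plate-reader macro) performs this fit, often with a quadratic rather than linear model at higher concentrations; the linear version here is just the simplest illustration.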
Then, 500 ng of each sample was loaded onto an Acclaim PepMap 100 trap column (75 µm × 2 cm, C18; Thermo Scientific, Waltham, MA, USA) at a flow rate of 4 µL/min and separated with a Thermo RSLC Ultimate 3000 (Thermo Scientific, Waltham, MA, USA) on a Thermo Easy-Spray PepMap RSLC C18 column (75 µm × 50 cm, 2 µm; Thermo Scientific, Waltham, MA, USA), using a step gradient of 4-25% solvent B (0.1% FA in 80% ACN) from 10 to 130 min and 25-45% solvent B from 130 to 145 min, at 300 nL/min and 50 °C, with a 180 min total run time. The eluted peptides were analyzed using a Thermo Orbitrap Fusion Lumos Tribrid mass spectrometer (Thermo Scientific, Waltham, MA, USA) in data-dependent acquisition mode. A full survey MS scan (from m/z 350 to 1800) was acquired in the Orbitrap at a resolution of 120,000. The AGC target for MS1 was set to 4 × 10^5, and the ion filling time was set to 100 ms. The most intense ions with charge states 2-6 were isolated in a 3 s cycle and fragmented using HCD fragmentation with 40% normalized collision energy, then detected at a mass resolution of 30,000 at 200 m/z. The AGC target for MS/MS was set to 5 × 10^4 and the ion filling time to 60 ms; dynamic exclusion was set to 30 s with a 10 ppm mass window.

Protein Identification

We used the in-house Mascot 2.6.2 search engine (Matrix Science, Boston, MA, USA) to identify proteins from the MS/MS data, as described in our previous studies [55][56][57]. Specifically, MS/MS data were searched against the NCBI Rattus norvegicus protein database. The search was set up for full tryptic peptides with a maximum of two missed cleavage sites. Acetylation of the protein N-terminus and oxidized methionine were included as variable modifications, and carbamidomethylation of cysteine was set as a fixed modification. The precursor mass tolerance threshold was set at 10 ppm, and the maximum fragment mass error was 0.02 Da.
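The 10 ppm precursor tolerance above is a relative mass window: a candidate peptide matches only if its observed m/z deviates from the theoretical m/z by at most 10 parts per million. The check itself is a one-line formula; the m/z values below are illustrative, not taken from this study:

```python
def ppm_error(observed_mz: float, theoretical_mz: float) -> float:
    """Mass error in parts-per-million between observed and theoretical m/z."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

def within_tolerance(observed_mz: float, theoretical_mz: float,
                     tol_ppm: float = 10.0) -> bool:
    """True if the precursor falls inside the search tolerance (10 ppm here)."""
    return abs(ppm_error(observed_mz, theoretical_mz)) <= tol_ppm

# Illustrative precursor: theoretical m/z 785.8421 observed at 785.8460,
# about 5 ppm off, so it would survive a 10 ppm filter.
err = ppm_error(785.8460, 785.8421)
```

A ppm window, unlike an absolute Da window, scales with the mass of the ion, which is why high-resolution Orbitrap searches are specified this way while fragment tolerance (0.02 Da here) is often absolute.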
The significance threshold for the ion score was calculated based on a false discovery rate (FDR) of ≤1%. Quantitative analysis was performed using Progenesis QI for Proteomics 4.1 (Nonlinear Dynamics, Milford, MA, USA).

Bioinformatic Analysis

Proteins were identified as differentially expressed if the t-test p-value was ≤0.05 and the absolute fold change was ≥1.5. For each comparison, heatmaps of all differentially expressed proteins were plotted using the heatmap.2 function in the R (version 3.6.0) package gplots. Gene Ontology (GO) analysis of differentially expressed proteins was performed using the Cytoscape plug-in ClueGO [58]. Biological processes and molecular functions were included in the GO enrichment analysis. Canonical pathway analysis was performed using Ingenuity Pathway Analysis (IPA) software (Ingenuity® Systems, Redwood City, CA, USA, www.ingenuity.com, accessed on 8 November 2021) by comparing the differentially expressed proteins against known canonical (signaling and metabolic) pathways within the IPA database. Enriched pathways with a Benjamini-Hochberg false discovery rate (FDR) p-value ≤ 0.05 were considered for further analysis.

Statistical Analyses

For the proteomics analysis, after normalization, Student's t-test was performed to identify proteins showing significant differences between groups (saline versus midazolam). Proteins with at least two unique peptides and a t-test p-value < 0.05 were considered significant. All statistical tests were performed with GraphPad Prism version 8.4.3 (La Jolla, CA, USA). A p-value < 0.05 from an unpaired Student's t-test with Welch's correction was used to determine significance. Data are represented as mean ± SEM on the graphs.
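Two statistical devices recur above: the differential-expression filter (p ≤ 0.05 and absolute fold change ≥ 1.5) and Benjamini-Hochberg FDR adjustment for the pathway enrichment. The study applied these through Progenesis, IPA and Prism; the sketch below is only an illustrative stdlib re-implementation of the two rules, and it assumes fold changes are reported as signed values (down-regulation negative), which is a convention choice rather than something stated in the text:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values) for raw p-values."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # indices, smallest p first
    adjusted = [0.0] * n
    prev = 1.0
    # Walk from the largest p to the smallest, enforcing monotonicity.
    for rank_from_end, i in enumerate(reversed(order)):
        rank = n - rank_from_end
        q = min(prev, pvals[i] * n / rank)
        adjusted[i] = q
        prev = q
    return adjusted

def is_differential(p_value, fold_change, p_cut=0.05, fc_cut=1.5):
    """The paper's filter: p <= 0.05 and |fold change| >= 1.5 (signed FC assumed)."""
    return p_value <= p_cut and abs(fold_change) >= fc_cut

# Illustrative raw p-values for four hypothetical pathways:
adjusted = benjamini_hochberg([0.01, 0.04, 0.03, 0.2])
```

The BH step matters because testing many pathways at once inflates false positives; adjusting by rank keeps the expected proportion of false discoveries below the chosen 0.05.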
Using ethics of care as the theoretical lens to understand lived experiences of caregivers of older adults experiencing functional difficulties

The lived experiences of caregivers of older adults in Ghana are not well understood. The purpose of this study was to explore and discuss the lived experiences of these caregivers, using the ethics of care as a theoretical lens and interpretative phenomenological analysis as the methodological approach. Ten caregivers in receipt of social welfare services on behalf of older adults were recruited from the Social Welfare Unit at the Komfo Anokye Teaching Hospital (KATH) in southern Ghana. The analysis identified five interrelated themes: 1) committing the Self to caregiving; 2) caregiving impacting the Self; 3) motivating factors to caregiving; 4) caregiving burdens; and 5) thinking about personal affairs. Their experiences demonstrate that caregivers value the caregiving relationship, as posited by the ethics of care, and tend to care for their health and well-being. Caregivers' expression of commitment to caring for older adults is mainly influenced by reciprocity, despite internal and external stressors, and the desire to fulfil unmet personal needs. The ethics of care offers an understanding of the lived experiences of caregivers of older adults in Ghana. The findings draw attention to the need for the state to develop specific programs to ensure the health, social and financial well-being of older adults' caregivers.

Introduction

The population of adults aged 60 years or older in Ghana is growing, in both proportion and number, mainly due to decreasing birth rates and delayed mortality [1]. The number of older adults in Ghana increased more than seven-fold, from 213,477 (4.5%) in 1960 to 1,643,381 (6.7%) in 2010, with the percentage expected to increase further to 9.8% by 2050 [2].
The growth in the number and proportion of the older adult population reflects Ghana's success in healthcare, which has increased life expectancy for all Ghanaians, including older adults. Even though Ghanaians may be living longer, this does not mean they are free of age-related disability or frailty. Disability and frailty are significant influences on older adults' ability to function independently [1,[3][4][5]. Biritwum and colleagues' study on disability (from the WHO Study on global AGEing and adult health (SAGE) project) established that about nine in ten older adults in Ghana experience functional difficulties [6]. These functional difficulties include difficulty engaging in daily activities for self-care, such as toileting [3,7,8]; difficulty engaging in activities needed to live independently, like preparing meals; and difficulty with social participation activities, such as attending social meetings and transportation [9][10][11]. Generally, older adults who experience difficulties in engaging in life activities often rely on their caregivers to fulfil their primary care needs. It is essential to explore caregivers' experiences of the assistance they provide to older adults who may be noticing a decrease in functional abilities.

Background

Caregivers' importance is reflected in the promotion of their input in both health and social care worldwide since the 1970s [12]. Caregivers' relevance is especially profound in this era, when the majority of older Ghanaians are expected to live with a disability in their later lives [13]. In the Ghanaian context, a caregiver is very often an adult child or a family member who assists with all, or most, of the older adult's needs [14][15][16]. The adult child caregiver is a relatively new phenomenon, resulting from a decline in the traditional extended family support system.
The change in the extended family support system is thought to have occurred primarily due to migration, the modernisation of society, the introduction of formal education, population growth, economic hardship, and the arrival of numerous religious doctrines in Ghana [15,[17][18][19]. To date, little is known about caregivers' experience of the care provided to older adults or their coping strategies. Research on the lived experiences of caregivers will help in understanding caregiving sustainability. Globally, substantial research exists regarding the factors that influence the care adult children provide to their ageing parents. An in-depth qualitative study in Sri Lanka revealed that adult children take pride in caring for their older parents [20]; however, whether adult children provided the care out of personal interest or reciprocity was not explained. Earlier studies [21][22][23] identified psychological factors such as moral and religious obligations, attachment, fulfilment of filial duty, societal expectation, and affection toward their older parents as reasons for caring. Consistent with more recent evidence, the current study identifies reciprocity, a sense of obligation, selflessness, feelings of closeness, secure attachment, and entitlement of care needs as some of the motivating factors to care for older parents. Moreover, being the only daughter, being the oldest child, and being closest to the older parent are also reasons for adult children's care provision. Sons usually assume care for older parents when there is no adult daughter, or when adult daughters in the family live far away from the older parent or are experiencing their own chronic health conditions [24][25][26][27]. Irrespective of this evidence existing elsewhere, more information is needed in Ghana regarding the motivating factors influencing family caregivers' care for older adults.
Moreover, the literature on caregiving also reports a decreased role of societal filial obligation and a shift to personal moral judgement as a motivating factor to care for and support older parents [28][29][30]. For instance, a Japanese quantitative study found a marked reduction in perceived filial obligation in daughters-in-law's physical and emotional support, and a reduction in biological daughters' perceived filial duty in the provision of material support [29]. In Confucian Chinese society, where filial piety requires that daughters and daughters-in-law provide care and support out of respect and tolerance, caregivers report experiencing little or no motivation to assume this duty, often due to a lack of proper parenting from their parent [27]. Holroyd's ethnographic study, which focused on how caregiving daughters in China develop a sense of what is right, identified that daughters assume care work because of public reputation and moral obligation, more so than affection toward their frail older parents. Some adult children may provide support and care to their older parents for personal gain, such as obtaining an inheritance [31]. In developed countries, positive effects of caregiving have been reported: an opportunity for caregivers to receive advice and guidance from older adults, enhanced social status, and a strengthened sense of leaving a model for their children to learn from [32][33][34]. However, negative effects of caregiving have also been reported, such as increased caregiver stress, inadequate well-being, poor psychological and physical health, and increased financial difficulties [35][36][37]. In contrast, there are few studies on caregiving for older adults in Africa [38]. Although similar to the previous findings on caregiving, the available evidence suggests that caregivers of older adults in Africa do not plan to become caregivers but instead assume the responsibility and role change unprepared [39].
Caregivers have to give up work (such as farming), change to a different job, or miss work altogether to care for older adults [39][40][41]. Caregivers experience difficulty emanating from their low level of literacy, lack of paid work, and the level of care needs of older adults [42,43]. In addition, the stress caregivers experience can lead to inadequate and/or poor-quality care and support for older adults in Africa [9,39,41]. Many older adults report insufficient material support, such as finances and food provided by children, irregular visitation by children and siblings, and, in some cases, adult children taking too long to respond to the needs of the older adults [39,[43][44][45]. Inadequate and poor-quality care and support have resulted from financial constraints on caregivers [42][43][44]. Despite these burdens, caregivers commit themselves to caregiving out of reciprocity and to offset the caregiving burden [46]. Presently, there is little qualitative evidence concerning caregivers' lived experiences of caring for older adults in Ghana, with the available evidence primarily concerned with caregiver income and financial constraints [44]. Quantitative evidence in Ghana revealed that caregivers experience emotional, health, and physical burdens from caregiving [47,48]. Often, less than 5% of caregivers receive financial, emotional, health, physical and personal care assistance [48]. A recent quantitative study conducted in Ghana revealed health- and environment-related factors that influence caregiver availability for older adults. These factors include: 1) older adults' advanced age, 2) being a widow, 3) living with a chronic condition, 4) hardly being understood by friends and family, 5) having no neighbourhood support, and 6) having two to four children [49].
The current study is unique because it seeks to capture what it is like for caregivers to care for older adults who experience functional difficulties in Ghana, and to apply the ethics of care theory to understand their lived experiences.

Ethics of care theoretical perspective

Ethics of care was used as a theoretical perspective to understand issues surrounding care provision for older adults in Ghana. Ethics of care recognises human relationships, interdependency and mutuality [50][51][52]. Rather than holding ethics as universal principles, ethics of care recognises that some responsibilities exist within certain relationships that do not exist in more general human interaction [52]. These duties and relationships are strongly gendered within most cultural contexts. Most influential in the ethics of care approach are Gilligan [53] and Noddings [54]. Gilligan [53] studied men's and women's attitudes and predispositions towards care as a construct of moral development, finding that men tend to adhere to ethical codes and principles, while women were more emotionally connected and driven by interdependent relationships and concern for others. Noddings [54] extended this idea of the mutuality of caring relationships, believing that (for men and women) the ethical Self only exists in one-to-one caring relationships and that the choice to enter a relationship hinges on the vision held by a person's best (ethical) Self. According to Noddings [54], the ethical ideal is intrinsic to the intimate relationship that two people establish for their own reasons. For Noddings [54], a person's best Self to care (ethical Self) depends on two factors. The first is that the person has been cared for in past relationships, for instance, the care relationship between parents and children. The second is that the person believes they are in the best position to care for someone. In the ethics of care, there are two roles: the cared-for and the caregiver.
The caregiver enters with a receptive attitude, without evaluation of the expectations of the cared-for, and needs to be fully engrossed in the caring relationship [54]. They need to view care from the perspective of the cared-for, which deepens their understanding of the needs of the cared-for and strengthens the caring relationship. Noddings also talked about 'motivational displacement', where caregivers ignore their own needs and concentrate more on how to help the cared-for achieve what they need. Noddings also emphasised that receptivity (in any way) on the part of the cared-for is essential to complete the caring relationship. According to Rice, acknowledgment does not have to be confirmation, appreciation, or reciprocal caring, but has to be some manifestation of self-generated happiness that the caregiver can witness [55]. This acknowledgment will help caregivers know that their efforts have been fruitful or valued by the cared-for. Some researchers have criticised Noddings' theory of ethics of care for several reasons. Hoagland [56], Houston [57], and Kyu et al. [58] have all argued that Noddings' ethics of care is dangerous because it lacks regulatory boundaries and can lead to the exploitation of caregivers, especially those who are unpaid. Moreover, since many women are positioned in caregiving roles [53], the notion of caring as an inherently ethical part of certain strong (gendered) relationships creates gender inequality in care work. Keller [59], Koehn [60] and Meyers [61] criticised Noddings' theory for its lack of autonomy: the assertion that a person's ideal Self is dependent on them fulfilling the caring role. This lack of autonomy reduces caregivers' ability to leave a distressing relationship when care is not reciprocated [59], making them more vulnerable.
Due to this perceived lack of justice, caregiving has the potential to be harmful and manipulative [60], denying caregivers the ultimate benefits of autonomy, self-respect and self-preservation [61]. Ethics of care served as a theoretical framework for this current study to help understand the motivations and vulnerabilities of caregivers in providing care and support for older adults in Ghana.

Design

This qualitative study is part of a larger concurrent mixed-method program of research exploring the functional abilities and care needs of older adults in Ghana. The current sub-study employed interpretative phenomenological analysis (IPA) and semi-structured interviews to gather participants' caregiving experiences. After analysing the interview data according to IPA principles, ethics of care was then employed as a lens to examine the findings. According to IPA methodology, experiences are unique to the individual and require careful exploration and analysis to reveal nuances of the phenomena of interest [62]. Davidsen [63] recommends that critical reflexivity and engagement with the interview transcript are needed to offer interpretations that reflect participants' experiences. Moreover, researchers need to declare their preconceptions and ideologies at the beginning of the work, and throughout the analysis stage, for readers to be confident in the interpretations made [63]. In IPA, interpretation begins from the first interview, with each transcript analysed consecutively. In this study, interpretations of findings followed three recommended stages. First, we offered interpretations of how participants understood their phenomenon. Second, we contrasted the interpretation of participants' experiences against the ethics of care. Third, in some cases, we raised general questions to make meaning of participants' experiences [63,64]. In this study, research question construction, sampling, data collection methods, interviewing and analysis followed IPA procedures [62].
Chan, Fung, and Chien [65] recommend four strategies to encourage bracketing. First, the literature was reviewed sufficiently to inform the research process. Second, the author responsible for data collection was open to learning about the experiences of participants. Third, semi-structured interviews were used, allowing further probing during data collection. The fourth strategy, which usually involves the researcher returning to participants to confirm interpretations of their shared experiences, could not be achieved due to time and resource constraints. However, the primary researcher identified the factors that could influence analysis and interpretation, to reduce biases in the study.

Participant sample and site. Purposive criterion sampling [62] was used to recruit caregivers seeking services on behalf of older adults from the Social Welfare Unit at the Komfo Anokye Teaching Hospital (KATH) in southern Ghana. Participants were eligible to take part in the study if they were older than 18 years, were a caregiver providing care and support to an older adult (frail, sick, or incapacitated in any form) for at least six months, and provided informed consent. The caregiver did not necessarily need to be a child of the older adult receiving care. The study setting, the Social Welfare Unit in KATH, was selected because caregivers seek and receive services from the Unit on behalf of those under their care, particularly older adults receiving health care at the hospital. The Social Welfare Unit is one of the many units of KATH that seek to address the psychosocial needs of patients and other health workers in the hospital setting. The Unit helps caregivers to make treatment decisions on behalf of older adults, and counsels and offers emergency treatment for patients, including older adults whose relatives cannot be contacted immediately. The Social Welfare Unit only provides services to people attending KATH and does not receive referrals from the broader community.
For this study, caregivers were first identified by a receptionist at the Social Welfare Unit, using the study eligibility criteria, as each caregiver presented seeking services on behalf of older adults under their care. A Research Assistant then shared the Participant Information Statement (PIS) and Consent Form (CF) with each caregiver and answered any questions they had about the study. The PIS and CF were explained in the participants' language, Twi (the dominant Akan dialect in Ghana). Caregivers could provide consent to the Research Assistant at the Unit or to the primary researcher using a contact phone number provided on the PIS. Almost all eligible participants decided whether to participate within 72 hours. The primary researcher was informed of the consents and arranged an interview with each participant at a suitable place and time within the hospital setting. Of 53 eligible participants, ten provided written informed consent. According to IPA methodology, a sample size of 10 is sufficient to discover the nuances and complexities of people's lived experiences [66].

Semi-structured interviews. The primary researcher, who is experienced and trained in qualitative research, including IPA, interviewed each participant. Respect, concern for privacy, and a non-judgemental attitude, together with genuine interest in participants, were always ensured. Participants were asked to reflect and talk about their experiences concerning the care they provide to older adults. Moreover, the primary researcher speaks the same language as the participants (Twi), which helped facilitate interactions. The interview guide was developed specifically to encourage discussion by participants of how they experience providing care. Questions and prompts explored the broad domains of the nature of caregiving, why they provide care, and how they feel about their role as a caregiver, as well as their coping strategies.
Each participant was interviewed once, with interviews lasting an average of 53 minutes. Interviews were transcribed immediately following completion. Transcripts were then coded and analysed. Codes were reviewed independently by co-authors, then discussed together as a group. Any discrepancies identified were resolved through in-depth deliberations by all team members, with consensus reached for each. During discussions with co-authors, definitions of identified themes were approved.

Quality of the research

The study was carried out in accordance with the four quality criteria formulated by Yardley [67], which are: 1) sensitivity to context, 2) commitment and rigour, 3) transparency and coherence, and 4) impact and importance. First, to ensure sensitivity to context, we recruited participants according to the eligibility criteria; adopting an IPA approach was appropriate for the purpose of the study [62]; and the primary researcher was well informed regarding the 'rules' surrounding social interactions and cultural beliefs and ideologies. However, this prior cultural understanding was bracketed using the four strategies of Chan et al. [65] described earlier. Interview questions were uncomplicated and in lay language [68,69], and were also translated into Twi, the participants' language. Finally, the primary author showed respect to caregivers during interviews [68], listening to the information they wished to share without interruption. Respect for any person, in Ghanaian culture, takes precedence and is cherished [41]. Second, to ensure commitment and rigour, the primary author was attentive to the participants' information during data collection and careful in analysing each participant's transcript. Prior understanding of the IPA approach, and research skills from previous qualitative studies, increased the author's capacity to conduct credible work.
Interviews in Twi were transcribed and translated into English; the transcription, translation, and analysis were all cross-checked by co-authors to ensure that participants' intended meaning was retained; and the primary author paid disciplined attention to the experiences inherent in participants' interviews to understand how the participants made sense of their experiences. Existing literature was also used to provide further context to the study [62]. The third criterion was transparency and coherence. For transparency, a thorough description of recruitment, the analytical approach, and the researcher's awareness of the relationship with participants has been provided in this study. To ensure coherence, the authors were cognizant of the quality of the narrative during their re-creation of the caregivers' lived experiences, in order for readers of the work to find meaning. Efforts were made to offer an interpretation that reflected participants' interview data. The fourth and last principle is the impact and importance of the findings. Findings from this study demonstrate how healthcare and social care professionals' understanding of caregivers of older adults can be improved. Finally, the primary researcher undertook critical self-reflection to bracket his beliefs and knowledge, to reduce the impact they may have had on the participant-researcher relationship.

Data analysis

First, the initial interview transcript was re-read several times. Second, after gaining an understanding of the first interview transcript, coding began. Third, the codes were then translated into initial themes. In developing the initial themes, descriptive, in-vivo and process first-cycle coding methods were employed [70]. Fourth, the initial themes were reflected upon to find connections between them through abstraction and subsumption [62]. In terms of abstraction, like themes were grouped together and named. With subsumption, some themes were placed under others because they fitted well.
These processes allowed for re-organisation, categorisation and further analysis, enriching the findings (Morse, 1994). A table specifying final themes and respective sub-themes was then developed for the first transcript, reviewed and discussed. Fifth, the previous steps one through four were repeated for each of the remaining nine interview transcripts. The sixth step was to look for patterns of themes across all ten transcripts, leading to a master table of five overarching themes (see Table 2 in Findings). Co-authors compared the five overarching themes to the individual transcripts to ensure they reflected the participants' statements. The five themes were then reported in the form of a narrative account supported by participant quotes.

Ethical consideration

Ethical approval for this study was obtained from The University of Newcastle (Australia) Human Research Ethics Committee (H-2018-0148) and Kwame Nkrumah University of Science and Technology (CHRPE/AP/112/18), in keeping with the Declaration of Helsinki. Informed consent was obtained from the study participants. Anonymity and confidentiality were ensured.

Findings

Participants' ages ranged from 25 to 60 years, and all but one identified as female. Participants had been providing care to their own family member, mostly a parent, for at least seven months. Caregivers provided care to older adults with varied health conditions (see Table 1). The five interrelated themes were: 1) committing the Self to caregiving; 2) caregiving impacting the Self; 3) motivating factors to caregiving; 4) caregiving burdens; and 5) thinking about personal affairs in addition to caregiving. These themes together reflect the lived experiences of caregivers (Table 2).

Committing the Self to caregiving

This theme refers to the inherent acts, duties and behaviours that caregivers ensured while caring for their older adults.
Caregivers' dedication was reflected in the extent to which they provided care, the duties they performed, their self-sacrificing spirit towards older adults' needs, their desire to manage caregiving stress, and their strong wish to continue caring. Some caregivers negotiated the care they provided with their siblings, while others provided the care alone. Central to Cecilia's statement was the desire to share caregiving duties: We decided that every month, someone would be there and care for her. This month, for instance, I am the one who is taking care of her. When I am done someone else will come and take over. For those caregivers who negotiated care with their siblings, it appears that taking on all caregiving duties by themselves when it was their turn was an expression of total commitment to the fulfilment of the needs of older adults. See how Cecilia emphasised each sibling's awareness of how they prepare to care: As for that, when it reaches an individual's turn, they prepare to care for the older woman alone. Therefore, when it gets to my turn, too, I solely care for her. When there is no such arrangement about how, who and when to care, the decision then depends on the closeness and prior relationship between a family relative and the older adult. It is on this relationship that the decision to care hinges, even if prior negotiations for caring existed. Gloria explained how she decided, intuitively, to care for her grandmother: She was doing a chop bar work cooking food for people to buy. That time she became weak, and so she was not able to work, so I chose to take over from her and run the chop bar business. Also, because all her children had travelled, I decided to stay with her and take care of her. Another topic of discussion was caregiving duration. Commitment, to caregivers, meant spending many hours and years caring for older adults.
Margaret talked about how long she had been providing care for her father's needs: I have cared for him for more than even 10 years. And at his present condition (stroke), I am still caring for him. Cynthia dedicated many hours to her mother's care: When I wake up in the morning, at 6 am, I will wake her up to take her diabetic drug and at 6:30 am, she can eat her food. Then at 10 am, I will have to give her fruit. And at 12 pm, I will have to give her good food. At 2 pm she will eat fruit, at 4 pm I give her good food, and this continues throughout the night. Commitment to caregiving was also evident in the number of older adults receiving care from a single caregiver, including those who were not the primary care recipient. To commit oneself to caregiving seemed to entail being ready to offer support to anybody who needed it. In the extract below, Elizabeth described her care for others aside from the primary care recipient: I provide care to other people. Even now, I have my mother and her sister at the hospital I care for them. At the time of her interview, Naomi was taking care of two older adults, including her mother, who suffered from multiple chronic conditions: I care for two people. They are two siblings I am caring for them in the same room: my mother and her older sister. Commitment to caregiving meant accepting care duties irrespective of their nature. Cecilia elaborated on how she assisted her mother in self-care: When I wake up in the morning when I finish bathing her, she being an older woman, she becomes hungry quickly because of the numerous drugs she has been taken. So, I will have to cook and give her some to eat and keep some there for her so that anytime she request for food, it will be ready at the right time. Eugenia explained what she does to assist her mother with incontinence: She goes to the toilet on herself. Because of this, I make her wear diapers.
Her condition has worsened, she cannot even tell me that she will want to urinate or defecate and so she can defecate or urinate on herself. With that, I will have to wash all the bedsheet using Dettol so that there will not be any scent on her. Phinehas, the one male caregiver in this study, spoke about how he assisted his grandmother in bathing and dressing: If she wants to bath too, we have a chair in the bathroom, and so I will assist her to sit on it and bath. When she is done bathing herself, I will pour water on her, and when I am done, I will clean the water on her. Then I will clothe her and bring her inside the room. Commitment to older adults included assistance with medication, personal care, household chores and general conversations. To commit to caregiving also meant ensuring that the needs of older adults came first and caregivers' personal needs came second. Caregivers demonstrated appreciation for human life and relationships, and it is in this context that commitment to caregiving was most clearly expressed. Cecilia chose 'life' over 'work' as an expression of her commitment to caring for her older mother: Human life is very important, as for work, it is there at any time. Appreciation for the dignity of human life led caregivers to work fewer hours. Naomi described how she reduced her time working as a popcorn seller in order to care for her mother and her mother's sister: Both of them have diabetes, and both of them have a time they use to eat, and if I don't prepare food for them on time they will not eat, and if they don't eat early, it can make their condition worse at night and so I had to close from work and come early to care for them. Commitment to caregiving was also reflected in the strategies caregivers used to manage their own stress and ensure continuity of care. To some caregivers, commitment meant being spiritual. Caregivers put their faith and hope in God as a means of lessening the effects of caregiving burdens.
Resorting to prayer, Cindy expressed a renewal of strength to provide care: Sometimes, I will feel about quitting the care, but when I think about God and pray, it helps me to keep taking care of her because God gives me encouragement. Hope for good things empowered caregivers to remain in caregiving while putting their personal desires or demands on hold. Margaret expressed worry when reflecting on neglecting her own family in order to care for her father, but felt consoled by her trust in God for blessings: When it gets to some point, it worries me most that I have left my family, my children and husband there to assume this duty. When it gets to certain times, I comfort myself with the hope that God will bless me. Whatever God does is good. For Margaret, commitment to caregiving also meant maintaining her focus on caregiving duties: What makes it easy is that when I get close to him, I concentrate on all what I suppose to do for him, and this make it easy for me. Social support offered a boost when providing care to older adults. Government support appeared to depend solely on an older adult's contribution to social security (i.e. a pension), and none of the caregivers reported receiving support from the government. Support from church members increased caregivers' willingness to continue. Phinehas reflected on how church members' encouragement helped him cope with caregiving stress: You see in my church, I have one father who likes me much, and so he sometimes comes to the house, advises me and encourages me to go on. It is because of his encouragement and advice that I am still here today. Support from family and friends appeared to be the main support available to caregivers in meeting the needs of older adults. Margaret illustrated how she received support from her husband: Concerning my husband, he understands that I am caring for my father here. That is why he allowed me to come here and care for my father.
Caregivers expressed willingness to continue providing care despite feeling burdened in their role. For some caregivers, caring for an older adult was a lifetime duty. Reflecting on the extent to which she desired to provide care, Gloria explained: It is only death that can part us. It is God who gives and takes, so if the time reaches and God takes her, what can I say? Other than this, I will continue to take care of her.

Caregiving impacting the Self

For caregivers, caregiving was a period of self-transformation. At one point, caregiving was a blessing; at other times, it was a cost. In caregivers' minds, the perception of caregiving took the form of a chameleon, its true form difficult to pinpoint and describe. Caregiving, at any point in time, was a combination of mixed feelings. For all caregivers, providing care had at some time come at a personal cost. Care duties and caregivers' poor health seemed inseparable. Cindy's account below illustrates how caregiving for her older mother impaired her health: I feel pains. When she was admitted to the hospital, I have suffered a lot like when I sit in a car from a long place morning and night. I am risking my life. It was always a choice between caregiving and precious time with friends. The desire to maintain friendships constantly manifested itself as a concern for caregivers. It is in this context that Cindy became anxious about the impressions her friends might hold regarding her limited time with them: Oh, sometimes I think that my friends may think that maybe someone has said something bad about them to me, and that may be the reason why I am not visiting them again. Caring for older adults was also incompatible with maintaining caregivers' relationships with their immediate families, and caregivers felt irresponsible for failing to undertake their roles in the family.
For Cecilia, caring for an older relative implied neglecting crucial roles as a mother who was expected to provide food for her immediate family: When I was living with the family, I use to cook for them to eat and now that I am not there, they sometimes miss me. For Margaret, taking care of her father away from home affected her children's education, spiritual growth and feeding: I have children at home, and for now, even regarding my children schooling, it has become something bad. As for a mother, if you are at home, you will know how to cater and nurture your children, but because of my father's illness, I have moved from home and come to stay in Kumasi. For my children welfare, I cannot say now. However, this is not better than when I was there with them. Even during Sundays, maybe my children don't even go to church, but if I were there, it would not happen this way. For Cynthia, caregiving had cut her off from supporting her own parents: I am not even able to go to my family, which is my parents, to visit them. Caregivers described how caregiving disrupted their work, causing financial constraints. Sustaining work and finances while providing care seemed impossible. Phinehas' account demonstrates the devastating impact of caregiving on finances and work: When she became weak, and I started taking care of her, I have lagged behind so many things. At first, I was able to earn about Gh2000 (US$400) every month, but for now, because I left the work for other people when the money comes, I share the money with those people. The second dimension of self-transformation was perceived benefit, irrespective of the inherent cost to the self. Gloria illustrated how she benefited financially from caregiving: When her children come, they sometimes give me money. To some caregivers, caregiving was a medium for exposure to new skills and knowledge in life.
In the following extract, Cecilia illustrated how she benefited from caregiving, emphasising how her mother was seen as a reservoir of knowledge and advice for life: This is because any time I come home, I see her, and she advises me, and we converse all the time, and that makes me happy. Whom will I go to if she was dead? Caregivers felt happy when they realised that their caregiving duty was fulfilled, working from the assumption that care improves the older person's life, even given the cost of caregiving. For Margaret, it appeared that her happiness depended on her older relative being alive: This is what I am saying that people suffer, yet they lose their older adults they care for. But in my case, with all the suffering, he is still alive. This has made me know that I have gotten some benefit. Caregiving seemed to offer a spiritual benefit for Davida: Last time one pastor said the care I provide for this woman is protecting me. He said some people want to kill me, but because of the care, I provide for this old cousin God has been rescuing me.

Motivating factors to caregiving

This theme encompasses the aspects of caregivers' experiences that motivated them to remain as carers in the face of caregiving challenges. Reciprocity appeared to drive the other motivational factors. Caregivers demonstrated varying but related reasons for providing care. Reciprocity was present in caregivers' every expression, even when they expressed a desire to quit caregiving. Elizabeth described why she could not quit providing care: Because she was washing my clothes and cleaning my toilet when I was young, and she was taking care of me so if she did not use gloves, then I will not use gloves in cleaning her toilet. Caregivers perceived caregiving to an older adult as a duty that needed fulfilment by any possible means. Fundamental to this belief was reciprocity.
Cindy emphasised how her reason to care was influenced by a sense of duty: We live in the same house with my mother and because my mother brought me to this world and so, it is my duty to continue caring for her. The desire to obey God was at the forefront of some caregivers' narratives. To caregivers, God took an interest in their adherence to caregiving duties for older adults. This desire to obey God could not be separated from reciprocity. Margaret spoke of this link, expressed as a fear of God and reciprocity: I will have to obey God because the bible says children should obey their parent in the Lord. And so, I don't worry using my time and my money to care for my mother because If I don't do that, I will get punishment from God. The recognition that caregivers may one day, in turn, require care from their own children held great meaning in caregivers' narratives. Margaret explained her belief in reciprocity and how it influenced her desire to leave a model for her children: When it gets to some point, I think that when parents enter a distressing situation, I have to devote myself to care for him. If you don't care for him one day when you grow old, your child will do the same thing to you. Caregivers sometimes provided care because of circumstances such as being the only daughter among siblings, being the eldest daughter, or being self-employed. Eugenia talked about how being the eldest daughter obliged her to accept caregiving for her older mother: Out of all my mother's children, I am the elderly daughter among the daughters. Although I don't do any rigorous work or work that requires much time, I am a cosmetic seller. Because she is my mother, I needed to forget about my work and keep taking care of her because she is sick. Another motivating reason was caregivers' concern about other people's approval and the need to keep their relationships with significant others strong.
This fear was like a glue that bonded them to their caregiving duties. Central to this fear of social censure was the ever-present expression of reciprocity. Anticipated ridicule if she dissociated herself from caregiving was Naomi's reason to care: What motivated me is that she is my mother, I don't have any other mother elsewhere, and if I decide not to care for her, people will even say bad things about me.

Caregiving burdens

This theme reflects the different external and internal stressors that can burden caregivers. Caregivers appeared to suffer a double blow, one from the caregiving duties themselves and another from external stressors. External stressors were the felt tensions emanating from older adults, unfriendly environments, and caregivers' siblings. These stressors placed a burden on caregivers. Margaret described the frustration she experienced as a result of her older father's uncooperative attitude: I have two Kuraba (chamber pots) I have given to my father, one that he needs to urinate in and the other he needs to spit in but it will get to a time where the lid will be on a chamber pot, but he will spit on the lid. It will get to a time when you will see that he wants to pass faeces, I will ask him several times, but he will tell me that he will not go. When hospital official tells us to leave the wards, it is there that he will call me and tells me that he wants to go to the toilet. Therefore, when it gets to this time, it makes the caregiving very difficult for me. The physical environment also served as a constraint for caregivers. For Cindy, who had been caring for her older mother in hospital for nine months, the unfriendly hospital stairs caused problems: Coming to this hospital and climbing stairs up and down make it difficult for me too. Caregivers also demonstrated feelings of tension concerning pressure from their siblings or family members.
Davida expressed worry about the false rumours her family members spread about her concerning the care she provided for her older cousin: Sometimes her sisters can say that I have intentionally decided not to allow anybody to come and stay in the house and so I have decided to take care of her alone for some benefit, and that worries me. Naomi emphasised the tension she felt from her husband, who alluded to her unfulfilled duty as a wife as an impediment to care: First, has to do with the tension between my husband and me. There is no peace between us at all. The reason is that, when my husband needed me most, I will not get time for him because I am caring for my mother. Therefore, all the time, he will be thinking about me wrongly, he will not even talk to me dearly, and it is always quarrelling. Sometimes, he also insults me when I call him on the phone. Internal stressors included the enormity of the care duties, which made caregivers feel burdened. It was evident in their comments that assisting with toileting and vomiting overburdened them. This theme also encompasses how intimate caregiving caused unpleasant feelings for caregivers. Reflecting on the past and her older mother's current health problems, Cecilia admitted an increase in caregiving responsibility: When she was not sick, she used to help me out in certain things like picking chairs and bowls. She is growing older, and because of the illness, she cannot even help me out in anything. Therefore, I have to do everything for her. I change her clothes every day. Although caregivers expressed the willingness to care, they found assisting older adults with toileting and cleaning the most unpleasant. Elizabeth shared the difficulty she encountered, comparing toileting assistance for young children with that for older adults like her mother: You know that as for here this town, we are used to giving children a chamber pot.
For me it is much easier to clean toilet of children than that of adults. Therefore, it was difficult for me to clean the toilet of my mother. Davida spoke with unhappiness and unease of assisting with incontinence and other activities of daily living: It is the toilet and urinating herself that makes it difficult for me. Even as for feeding her, it is not a problem for me, but the toilet is a problem for me. Every day, she defecates on herself, and I will have to use my hands to clean the toilet. Eugenia differed from the others in that assisting with vomiting overburdened her more than toileting: As for me, it is the vomiting and the toilet that becomes difficult for me to do. Out of these two, the vomiting is more challenging to perform.

Thinking about personal affairs

Despite caregivers' desire to surrender themselves to the care needs of older adults, they concurrently described behaviours and attitudes connoting internalised concerns for their own welfare. The desire to improve their future became a matter of concern, primarily when they reflected on their unfulfilled dreams. Referring to God's approval for her to seek personal blessings and the need to further her education, Cynthia talked about her future: A time will come that I will stop. Because I also have to look for my future in terms of education. Although it is good that I care for her because I can get God's blessings, yet I have to look for my future. Spiritually, God has blessed you, but I will have to try to make sure I take a step for what God has purposed about me to be fulfilled. Davida wished for a husband to live with, and reflected on how a potential husband might interpret her caregiving for her older cousin: Even if the man wants us to go out after marriage, I will not be able to leave my older cousin behind. Unless I get a man who will understand that, he will come and live with us in the same house with my older cousin.
Even with that, a man can say that because of the toilet and other things I clean, he will not even eat the food I prepare. Caregivers also discussed the desire to strengthen relationships with significant others. Though Margaret accepted caregiving for her father, she expressed concern and the need to improve her relationship with her immediate family: This is because I have little children and a husband. Therefore, leaving all these people and coming to stay and cater to my father, it was not easy at all. However, because there was nobody available, I decided to care for my father. Caregiving pressures, together with personal reflections on unfulfilled aspirations, usually led caregivers to push for a break from caregiving. Cynthia's push for a break seemed to achieve a result: Always I tell them. Even my father said next year February when all his brothers come from abroad, and he said he would have a meeting with them that next year I will have to go to school. Naomi expressed the wish to share her caregiving duties with a paid caregiver: If I get financial help, I can even hire one person to assist me in caring for them. In that case, I can go outside and work small while the person cares for them.

Discussion

This study employed the ethics of care theoretical framework to understand the lived experiences of caregivers of older adults who required help with everyday activities in Ghana. In this study, caregivers expressed commitment to caring for older adults, motivated primarily by reciprocity, despite internal and external stressors and their own unmet personal needs. The ethics of care offers an explanation of the experiences of caregiving for older adults in Ghana [50]. By applying the ethics of care in a developing country like Ghana, we identified that caregivers remain conscious of meeting their own demands despite a caregiving commitment determined by relationships.
Caregivers' appreciation of the relationship between themselves and older adults was demonstrated in this study, reflecting the ethics of care framework as valuing mutuality, interdependency and people coming together in a caring relationship [50][51][52]. Decisions about sharing caregiving duties among siblings, spending many years providing care and putting aside their own needs and wishes connoted the extent of caregivers' commitment to older adults. Similar findings were revealed in Nigeria, which has a similar cultural context in terms of relationships and mutuality [46]. In Faronbi et al.'s [46] study, caregivers managed caregiving challenges and assisted older adults with chronic conditions as an expression of caregiving commitment. The current study adds that caregivers show a willingness to continue caring until the older adult receiving the care dies, signalling the extent of their commitment. These findings are similar to evidence from Sri Lanka, a developing country with a socio-economic context similar to Ghana's, where adult children take pride in caring for their older adults [20]. The findings from this study show how caregivers in Ghana, despite their own demands, are committed to meeting the needs of their older adults. At times, caregivers' over-commitment to meeting the needs of older adults can leave their own needs and demands unsatisfied. Given this, the major concern for policy direction in Ghana should be how specific interventions can be developed to sustain the long-term care older adults receive. We assert that without state support for caregivers in Ghana, irrespective of caregivers' willingness, they may not be able to continue to provide for their older adults. Specific programs, including the provision of financial support, are relevant to ensuring continuity of care. The findings of the present study show that caregiving can have both negative and positive impacts on caregivers.
Caregivers openly acknowledged the cost caregiving had on all aspects of their lives, contrary to other findings that caregivers can deny the burden of caregiving [46]. The cost to caregivers manifested itself in poor health, impaired friendships, self-blame, perceived loss of role as a parent, partner or child, negative impacts on work and finances, and the overwhelming feeling that life visions would remain unfulfilled. The adverse effects found in this study are similar to those described for carers in developed countries [35][36][37]. These similarities suggest that the negative effects of caregiving on caregivers may not be contextual; however, the management of these effects may be shaped by a country's wealth and support structures. These findings will be relevant for stakeholders, including social workers, seeking to improve the health and social needs of caregivers. Additional support for caregivers to mitigate caregiving stress, including counselling and respite services, may assist in balancing care responsibilities with caregivers' own life events, including working, attending school and fulfilling marital duties. In addition to the above factors burdening caregivers, care for the opposite gender, the intensity of care duties, unfriendly physical infrastructure, uncooperative attitudes of older adults and the influence of family members intensified internalised feelings of burden. Caregivers need to be trained on older adults' needs and how the care they provide can enhance older adults' functional abilities. With appropriate training, caregivers can provide care in ways that minimise caregiving burdens. Given the economic condition of most caregivers in Ghana, however, it would be impossible for them to afford the cost of such training. Moreover, caregivers may not be aware of the need to receive training on how to care.
These challenges can be met if the state assumes responsibility for encouraging caregivers to enrol in caregiver training programs at subsidised prices. The training can cover the care needs of older adults, how to engage older adults in their own care, and the need to be conscious of the physical and social environments impacting older adults' health [71]. One factor increasing the caregiving burden found in the present study was assistance with toileting. This finding is consistent with studies showing that toileting difficulty increases older adults' dependency on caregivers for support and care [3,72]. On the other hand, the perceived benefits of caregiving were an opportunity for carers to enhance and enrich themselves by developing new skills and knowledge, receiving spiritual protection, gaining social approval, and building self-pride. Similar benefits have been described in other studies [32][33][34]. The perceived benefits found in the current study offer caregivers some reason to hold on to caregiving despite its inherent cost. Although caregiving carries inherent costs, caregivers feel fulfilled completing caregiving roles [50]. This perception may lead to a lack of autonomy and reduce the ability of caregivers, mostly women, to leave a distressing relationship, making them more vulnerable [59][60][61]. Supporting caregivers and allowing them to fulfil their life endeavours can promote their well-being, ensuring continuity of care in Ghana. The study shows that several factors motivate caregivers to care for older adults. Inherent in these motivations was the sense of reciprocity depicted by the ethics of care [50]. Caregivers provided care to older adults motivated by a sense of duty, obedience to God, leaving a model for their children, less demanding circumstances, and concern about other people's approval, all driven by a sense of reciprocity.
The findings in this study corroborate other studies revealing reciprocity as the primary motivation influencing adult children to provide for their older adults [24,27,41,44,46]. It appears that in the Ghanaian context, caregivers' sense of accomplishment in fulfilling care duties depends on the awareness that they were cared for in a past relationship. Though the current study has highlighted adult children's willingness to continue providing care, their motivation may not continue to hinge on reciprocity or a sense of duty, as urbanisation is rapid in Ghana. Adult children may come to provide care based on personal judgement, choice or personal gain, as reported in China and Japan [29,31]. A decline in the perceived obligation to care for ageing parents is to be expected in Ghana, given the enormous burden on caregivers, including their inability to fulfil their own needs and demands. This perspective is reflected in reports by some older adults in Ghana that they are being deprived of care and protection [9]. The current study's findings imply that the government should assume responsibility for the long-term care of older adults, ensuring that caregivers' health and social needs are met so that their care for older adults can continue. Providing caregivers with training and with health, financial and social care can foster their reciprocity motive, which may increase their willingness to keep caring for older adults. Caregivers demonstrated an awareness of the need to fulfil their own personal needs. Personal needs included continuing education, getting married, strengthening relationships with significant others such as husbands, friends and family, and pushing for some respite from caregiving.
Though caregivers concentrate more on helping older adults meet their needs [50], the current study adds to the ethics of care the finding that caregivers do not totally ignore their own personal needs but rather remain conscious of fulfilling them. Unfulfilled aspirations create more anxiety for caregivers, which negatively impacts caregiving. Given the nature of long-term care in Ghana, where families, especially adult children, provide care with no or minimal government support, providing long-term care for older adults is very challenging and stressful for caregivers. These findings call for the government of Ghana to implement the policies in the national ageing policy [73] that address the health, financial and social needs of caregivers. Currently, social protection programs in Ghana, including Livelihood Empowerment Against Poverty, do not adequately meet the needs of eligible older adults, let alone the needs of their caregivers. Social workers can be very helpful in developing programs that help caregivers achieve their needs [74]. In this regard, a respite care system could be instituted to support caregivers in caring for their ageing parents. Our study findings have numerous implications. First, nationally instituted caregiver benefits should be established to support caregivers of older adults in Ghana, serving as a demonstration of how care provision is valued [55]. Second, given the difficulties caregivers experience with infrastructure, user-friendly public transportation, facilities and accessible health care services for caregivers need to be established and implemented to support people providing care. Third, the study indicates the importance of promoting spirituality among caregivers to improve their mental and emotional well-being.
Fourth, siblings' relationships with primary caregivers in the care of older adults warrant further investigation, with the potential to enhance the emotional, instrumental and financial support provided to caregivers. Moreover, more research with male caregivers is needed to understand how caregiving experiences differ between men and women. The strengths of this study include the application of the ethics of care framework to exploring caregivers' care for older adults. Second, the study recruited caregivers who had provided care to older adults for at least six months, providing common ground for in-depth accounts of caregiving. One limitation was that having only one male caregiver was not enough to understand how caregiving roles play out for men who provide care.

Conclusion

Caregivers demonstrate a commitment to caregiving despite several stressors and costs to themselves. The findings reveal how caregivers' care for older adults is fostered by reciprocity. Nevertheless, caregivers remain concerned about fulfilling their own life endeavours and needs. This study recommends that the state assume responsibility for promoting social support, a respite care system, the financing of long-term care, and improved transportation, health care and user-friendly public infrastructure; such measures, if put in place for caregivers, could help to replenish the effort caregivers expend.
v3-fos-license
2024-02-11T16:26:03.771Z
2024-02-01T00:00:00.000
267586995
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://assets.cureus.com/uploads/case_report/pdf/226589/20240208-22853-y73zb4.pdf", "pdf_hash": "ce9a4cc2775df081d226e7338eb97846d1d038ca", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1993", "s2fieldsofstudy": [ "Medicine" ], "sha1": "94d215c24714d23d9ee35a722d7b46b06723c820", "year": 2024 }
pes2o/s2orc
Neonatal Renal Failure Following Intrauterine Exposure to an Angiotensin-Converting Enzyme Inhibitor The renin-angiotensin-aldosterone system (RAAS) plays a crucial role in the normal development of the fetal kidney. Late-pregnancy blockade of the RAAS, through in-utero exposure to angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin II receptor blockers, is associated with poor fetal outcomes, including oligohydramnios, renal tubular dysplasia, postnatal anuric renal failure, and hypotension. The present case describes a 39-year-old primigravida who was referred to the emergency department, at 37 weeks, for the evaluation of intrauterine growth restriction and suspected coarctation of the aorta (CoA). She had been taking enalapril since the 35th week of gestation. She delivered a male infant, weighing 2,110 g, with no apparent malformations. CoA was excluded. During his first day of life, the patient developed anuria, acute renal failure, and hypotension, requiring inotropic support. Renal ultrasound appeared normal. Diuresis resumed at 48 hours of life after continued supportive measures. Kidney function tests progressively normalized. Additional investigations revealed a low concentration of angiotensin-converting enzyme. The patient is currently 12 months old and has had a favorable evolution. This case highlights the fact that even brief exposure to enalapril in the third trimester may cause RAAS blocker fetopathy. As the long-term sequelae of ACEI-exposed infants are poorly described, close follow-up for renal complications is essential. Physicians should be aware of the deleterious effects of RAAS blockers in pregnancy. 
Introduction The integrity of the renin-angiotensin-aldosterone system (RAAS) is an essential prerequisite for the normal development of the fetal kidney [1,2]. Pharmacological blockade of the RAAS, through in-utero exposure to angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin II receptor blockers (ARBs), compromises normal nephrogenesis, among other deleterious effects [3]. The resulting fetopathy, usually termed fetal RAAS blockade syndrome, was first described in 1981 by Duminy et al. [2,4]. It is characterized by a spectrum of manifestations, ranging from transitory renal impairment to irreversible anuric renal failure and death [1,2]. Here, we report a case of ACEI fetopathy resulting from a brief third-trimester exposure to enalapril, together with a review of the literature on this topic. Case Presentation A 39-year-old woman, gravida 1 para 0, was admitted to the obstetric emergency department (ED) of a tertiary care hospital for the urgent evaluation of intrauterine growth restriction (IUGR) and suspected congenital heart disease. Her previous medical record was unremarkable. She had been receiving regular care since her first trimester. At 35 weeks of pregnancy, she was diagnosed with gestational hypertension and was subsequently treated with enalapril (20 mg/day). The pregnancy was otherwise uneventful. 
Her first- and second-trimester ultrasounds were normal. The third-trimester ultrasound (30 weeks) raised concerns for a possible cardiac malformation. Subsequent fetal echocardiograms, performed at 31 and 36 weeks, identified a probable, discrete coarctation of the aorta (CoA). Re-evaluation at 37 weeks showed oligohydramnios and IUGR (fetal weight estimated at the second percentile by Hadlock), with pathological umbilical blood flow on Doppler study. There was no evidence of preterm rupture of membranes. She was referred to the ED for urgent evaluation. On admission, cardiotocography showed signs of fetal distress, requiring an emergency cesarean section. The woman gave birth to a male newborn, weighing 2,110 g (third percentile, according to the Intergrowth-21st charts), measuring 45.6 cm (11th percentile), and with a head circumference of 34.5 cm (88th percentile). His Apgar scores were 8 and 9, at one and five minutes, respectively. The neonate was admitted to the neonatal intensive care unit for close monitoring. He had a normal physical examination and was hemodynamically stable, with a mean blood pressure (BP) of 39 mmHg and no BP differential between the upper and lower extremities. An echocardiogram was performed and excluded any cardiac anomalies. Throughout his first hours of life, the newborn was clinically stable and had a registered micturition of 20 mL. There was no urine output past 15 hours of life, despite increasing parenteral nutrition to a volume load of 120 mL/kg/day. At 24 hours of life, the patient remained anuric and developed severe hypotension (mean BP: 20 mmHg), with poor response to fluids. Dopamine was initiated and gradually titrated to 15 µg/kg/minute. Initial laboratory findings revealed metabolic acidosis (pH: 7.28, HCO3: 17.8 mmol/L, lactate: 4.2 mmol/L), hyponatremia (128 mmol/L), a raised serum creatinine of 200.7 μmol/L (2.27 mg/dL), and serum urea of 10.8 mmol/L (65 mg/dL). His sepsis workup was negative, and he was not receiving 
antibiotics or any other nephrotoxic agents. Renal ultrasound with Doppler scanning showed normal-sized kidneys, with adequate blood flow, and an empty bladder. After a dose of furosemide (1 mg/kg), an albumin bolus (0.5 g/kg), and continued dopamine infusion, diuresis resumed at 48 hours of life. In the following days, the patient maintained an adequate urine output (>1 mL/kg/hour). BP normalized, and dopamine was gradually reduced and stopped on the sixth day of life. Urea peaked at 22 mmol/L (132 mg/dL) on day three, and serum creatinine at 371.3 μmol/L (4.20 mg/dL) on day five, with subsequent normalization. Additional investigations (day six) revealed a low concentration of angiotensin-converting enzyme (ACE <8 U/L, normal value: 20-70 U/L). Urine analysis showed glucosuria, albuminuria (39.1 mg/dL), and high urinary β2-microglobulin (11,900 µg/L, normal value: <300 µg/L), as well as an increased spot urine protein-to-creatinine ratio (1.7 mg/mg). The patient was discharged on the 12th day of life. Currently, at 12 months of age, the patient has adequate neurodevelopment and growth. Renal ultrasound at three months revealed normal-sized kidneys, with diffuse cortical hyperechogenicity and reduced corticomedullary differentiation. These alterations were not seen in subsequent ultrasounds (Figure 1). Serum creatinine and urea remain within the normal range, and the initial alterations seen on urinalysis resolved. He maintains regular follow-up. FIGURE 1: Bilateral renal ultrasound performed at 12 months of age. Normal-sized kidneys with preserved corticomedullary differentiation. 
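The laboratory values above are reported in both SI and conventional units. As a quick consistency check on those paired values, the standard molar-mass conversions can be sketched in Python (an illustrative helper, not part of the clinical report; the conversion factors are the usual ones derived from the molar masses of creatinine and urea):

```python
# Convert serum creatinine and urea between SI and conventional units.
# Factors follow from molar masses: creatinine 113.12 g/mol
# (1 mg/dL = 88.4 umol/L) and urea 60.06 g/mol (1 mg/dL = 0.1665 mmol/L).

CREATININE_UMOL_PER_MGDL = 88.4
UREA_MMOL_PER_MGDL = 0.1665

def creatinine_umol_to_mgdl(umol_per_l: float) -> float:
    """Serum creatinine: umol/L -> mg/dL."""
    return umol_per_l / CREATININE_UMOL_PER_MGDL

def urea_mmol_to_mgdl(mmol_per_l: float) -> float:
    """Serum urea: mmol/L -> mg/dL."""
    return mmol_per_l / UREA_MMOL_PER_MGDL

# The paired values reported in the case are mutually consistent:
# creatinine 200.7 umol/L ~ 2.27 mg/dL, peak 371.3 umol/L ~ 4.20 mg/dL;
# urea 10.8 mmol/L ~ 65 mg/dL, peak 22 mmol/L ~ 132 mg/dL.
```

Applying these helpers to the reported SI values reproduces the bracketed conventional values to within rounding.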
Discussion Hypertension is highly prevalent and affects 7.7% of reproductive-aged women [5]. Hypertensive disorders of pregnancy, which include preexisting and gestational hypertension, complicate up to 10% of all pregnancies, representing an important cause of maternal-fetal morbidity [5]. The National Institute for Health and Care Excellence currently recommends ACEIs and ARBs as first-line treatments for hypertension in adults, including women of childbearing potential, while stating that both medications are contraindicated during pregnancy [6]. Given the rising prevalence of chronic hypertension in women of reproductive age, as well as the increasing birth rate among women of advanced maternal age (which is associated with a higher risk of both essential and gestational hypertension), the inadvertent use of RAAS blockers in pregnancy is likely to increase. Fetal circulation is characterized by low pressures, requiring high angiotensin II levels to maintain adequate renal perfusion [2]. Inhibition of ACE results in hypoperfusion and ischemia, compromising normal tubular development [1,2]. This leads to oligohydramnios, which is often the first sign of RAAS blockade fetopathy. After birth, anuria and hypotension develop [7]. 
In this case, although there was only a two-week exposure to enalapril, the newborn exhibited some of the most frequent manifestations of RAAS blocker fetopathy, including oligohydramnios, postnatal hypotension, and acute anuric renal failure, with a favorable course. To our knowledge, this is the first report of RAAS blocker fetopathy described after such a brief exposure to an ACEI. In fact, the timing of exposure to RAAS blockers seems to determine the severity of symptoms. A review of 190 newborns exposed to either ACEIs (n = 89) or ARBs (n = 101) concluded that RAAS fetopathy was not observed if the exposure ceased before 20 weeks [7]. Many other reports also consistently document poorer outcomes in newborns exposed in the second and third trimesters versus those exclusively exposed in the first [1,2]. Information about the long-term follow-up of these patients is scarce. In the aforementioned review, follow-up was only available for 26 children (14 of whom had been exposed to an ACEI) [2]. Renal complications were the most frequent problem (including renal failure in 23% of patients and hypertension in 15%), followed by neurodevelopmental delay and failure to thrive [2]. To date, the presented patient has not experienced any of these complications. Conclusions Brief third-trimester enalapril exposure is associated with RAAS blocker fetopathy and subsequent acute anuric renal failure. The long-term sequelae of ACEI-exposed infants are poorly described, and, therefore, close follow-up for renal complications is essential. Physicians from different fields should be aware of the deleterious effects of RAAS blockers in pregnancy.
v3-fos-license
2022-01-28T17:06:48.791Z
2022-01-01T00:00:00.000
246352274
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.nature.com/articles/s41560-021-00968-6.pdf", "pdf_hash": "e074a3622be632938c8e26314336a5e004349adb", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1994", "s2fieldsofstudy": [ "Political Science" ], "sha1": "c2a68681d827218c2ba57d4c816e1abf3b5c648b", "year": 2022 }
pes2o/s2orc
Temperature extremes exacerbate energy insecurity for Indigenous communities in remote Australia For remote Indigenous communities prepaying for electricity in Australia's Northern Territory, temperature extremes increase reliance on the services that energy provides and the risk of disconnection of those services. Policy should focus on reducing the frequency, duration and negative impacts of disconnection, within the context of a warming climate. Messages for policy • Electricity disconnections among households with prepayment meters are more frequent during temperature extremes, curtailing access to essential services. • Households with high electricity use experience more disconnection events, so policy responses should account for household structure and occupancy, as well as the opportunity to use rooftop solar. • Greater visibility and understanding of data on disconnections in these communities is needed to determine the extent of their energy insecurity. • Policy should seek to reduce the frequency and duration of involuntary self-disconnections in remote communities, particularly during extreme temperatures. • To account for the multifaceted nature of energy insecurity, policy responses need to be informed by residents, local councils, healthcare professionals and other relevant organizations. The policy problem In Australia's Northern Territory, most remote Indigenous households are provided with or elect to use prepayment electricity meters. 
This payment method is associated with high disconnection rates and is uncommon in other Australian urban and rural communities. These remote communities also experience some of the most extreme temperatures in Australia (Fig. 1a). Electricity use to sustain safe indoor temperatures can rapidly deplete available means, resulting in disconnection with little warning. As such, safe temperatures cannot be maintained, and households lose access to other essential services that electricity provides, such as food storage, washing and cooking. This raises the need to understand both the extent of current disconnections and the degree to which they are triggered by temperature. Without this understanding, the existence and severity of problems cannot be identified, and policy cannot be designed to mitigate current harms or prevent future ones. The findings Among 28 remote communities in the Northern Territory, we found that 91% of households experienced a disconnection event at least once during the 2018/19 financial year; 74% of households were disconnected over 10 times, and 29% of all disconnections occurred during extreme temperatures. In mild temperatures (20-25 °C), households had a 1 in 17 chance of disconnection on a given day (Fig. 1b). This increased to a 1 in 11 chance during hot days (34-40 °C) and a 1 in 6 chance during cold days (0-10 °C). Households with high electricity use in the central Australian climate zones had a 1 in 3 chance of a same-day disconnection during temperature extremes. Energy insecurity is worsened when energy use is heightened owing to heating or cooling needs (Fig. 1c). Our analysis does not explore all of the complexities underlying energy insecurity in these communities, but we expect that these findings will inform discussions of energy insecurity in regions with extreme temperatures. 
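The per-day risks quoted above (1 in 17 on mild days, 1 in 11 on hot days, 1 in 6 on cold days) can be restated as relative risks against the mild-temperature baseline. A minimal Python sketch of that arithmetic (illustrative only; the paper's estimates come from random-effects probit regressions, not from this calculation):

```python
# Daily probability of a household disconnection, by temperature band,
# as reported in the findings.
p_mild = 1 / 17   # 20-25 C
p_hot = 1 / 11    # 34-40 C
p_cold = 1 / 6    # 0-10 C

def relative_risk(p: float, baseline: float) -> float:
    """Risk ratio of disconnection versus the mild-temperature baseline."""
    return p / baseline

rr_hot = relative_risk(p_hot, p_mild)    # roughly 1.5x the mild-day risk
rr_cold = relative_risk(p_cold, p_mild)  # roughly 2.8x the mild-day risk
```

The ratios make the headline finding concrete: a cold day nearly triples the daily disconnection risk relative to a mild day, and a hot day increases it by about half.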
The study This analysis used daily smart-meter data from 3,300 households across 28 remote communities in Australia's Northern Territory to identify the incidence of disconnection events. These smart-meter data were matched with daily temperature observations from the closest weather station using data from the Australian Bureau of Meteorology. We estimated the probability of disconnection across distinct temperature ranges using random-effects probit regressions, which allowed us to include variables for the daily average temperature, month of the year, and different levels of electricity use. Using a reference temperature range allowed us to measure how temperature influenced electricity use and the likelihood of a disconnection during both hot and cold days. This assessment of whether extreme temperatures are a factor determining disconnection events was only possible with access to smart-meter data. As the vulnerability of prepayment customers is often overlooked, we recommend that these data be better monitored and made more accessible to residents, community organizations and researchers. Further reading Electr. J. 
33, 106859 (2020). This paper explores the differences in utility disconnection policies that have the potential to protect vulnerable populations from exposure to excessive heat or cold. Longden, T. The impact of temperature on mortality across different climate zones. Clim. Change 157, 221-242 (2019). This study shows how exposure to extreme temperatures is associated with higher death rates in the three hottest climate zones in Australia, which correspond with the Northern Territory. Competing interests The authors declare no competing interests.
v3-fos-license
2019-02-19T14:07:14.670Z
2018-01-01T00:00:00.000
69554638
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/67/matecconf_icmie2018_03009.pdf", "pdf_hash": "6415c9ea6f74e49b314634c4f10951dc1b517d38", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1997", "s2fieldsofstudy": [ "Engineering" ], "sha1": "51d01ffc7358850bf15651bc344d97a830750cfc", "year": 2018 }
pes2o/s2orc
On the Analysis Performance of Updating Weight for Estimation Target of Drone System. In this paper, we propose a method in which the desired signal is estimated by updating the weight of the MVDR algorithm. The MUSIC algorithm is widely used for direction-of-arrival estimation. The MUSIC algorithm has good resolution because it uses subspace techniques, dividing the observation space into a signal subspace and a noise subspace. The processor of a drone system must have low power consumption and low computational complexity because it is a microprocessor. If the algorithm is too computationally complex, the drone system cannot estimate the desired signal. This paper studies a method for estimating the desired signal with a simple calculation. The proposed method updates the weight using the covariance matrix of the MVDR algorithm. Through simulation, we analyse performance by comparing the MVDR algorithm, the MUSIC algorithm and the proposed method. In the simulation results, the proposed method matches the MUSIC algorithm in direction-of-arrival estimation. Since the proposed method uses no subspace, its computational complexity is lower than that of the MUSIC algorithm, and its desired-signal estimation is superior to that of the MVDR algorithm. Introduction Recently, the estimation of the direction of arrival (DoA) of a target has been studied extensively with the development of wireless communication techniques. DoA estimation methods are widely used in applications such as radar, sonar, biomedical and communication systems. Classical DoA estimation methods include Bartlett, Capon, linear prediction, MUSIC, and ESPRIT [1-3]. These methods can be divided into non-parametric methods, such as Bartlett and Capon, and parametric methods, such as MUSIC and ESPRIT. The MUSIC method achieves super-resolution because it uses a subspace technique, but it has high computational complexity because it requires eigenvalue decomposition. 
In order to improve DoA estimation, one can use a higher signal-to-noise ratio, higher transmission power, or adaptive array signal processing [4-6]. In this paper, we propose a method that combines low computational cost with accurate DoA estimation. The proposed method estimates the desired signal by updating the weight of the MVDR (Minimum Variance Distortionless Response) algorithm. The MVDR algorithm is also known as the Capon method. The Capon method has poor resolution for estimating the desired signal because of its low computational complexity and inaccurate weight. We propose a way to improve the weight of the MVDR algorithm using an adaptive array antenna and a beamforming technique. The adaptive array method finds the covariance matrix using a Lagrange multiplier and applies the adaptive array algorithm to improve the resolution. The optimum weight is found in two steps, and the source signal covariance matrix is obtained in these two steps of the proposed method. As a result, we can obtain the covariance matrix of the received signals. The MUSIC method is most commonly used to estimate the spatial location of a target, but it is not effective for a drone system because of the limited processing capability of its microprocessor. The processing capability of a drone system using a microprocessor degrades with computational complexity and power consumption, so it cannot accurately estimate the desired signal. The organization of this paper is as follows. In Section 2, the signal model under consideration is described. The output power of the MVDR algorithm and the proposed weight of the covariance matrix are presented in Section 3. A performance analysis of the proposed method is provided in Section 4. Conclusions are drawn in Section 5. Figure 1 shows an adaptive array system. 
We consider a receiver that is a uniform linear array composed of $M$ antenna elements with adjacent element spacing $d$, as shown in Figure 1, receiving $N$ narrowband signals. Signal model analysis The antenna array response (steering) vector is given by [7,8] $\mathbf{a}(\theta) = [1, e^{-j2\pi d\cos\theta/\lambda}, \ldots, e^{-j2\pi(M-1)d\cos\theta/\lambda}]^{T}$, where $\lambda$ is the wavelength and $\theta$ is the angle of the signal incident on the array antenna. The output signal of the array antenna is $\mathbf{X}(t) = \mathbf{A}(\theta)\mathbf{S}(t) + \mathbf{N}(t)$, $t = 1, \ldots, L$, where $L$ is the number of snapshots. Figure 1: Block diagram of the adaptive array system. Output spectrum of the MVDR algorithm In this chapter, we discuss the MVDR algorithm for estimating the direction of arrival. The received signal at the array can be written as [9-11] $\mathbf{X}(t) = \mathbf{A}(\theta)\mathbf{S}(t) + \mathbf{N}(t)$, where $\mathbf{A}(\theta)$ is the matrix of array steering vectors and $\mathbf{S}(t)$ contains the source signals on the array antenna. $\mathbf{N}(t)$ is a zero-mean complex Gaussian random vector with covariance $\sigma^{2}\mathbf{I}$, where $\sigma^{2}$ and $\mathbf{I}$ are the noise variance and the identity matrix, respectively. The array output is the product of a weight vector $\mathbf{W}$ and the received signal on the array elements, $Y(t) = \mathbf{W}^{H}\mathbf{X}(t)$, where $(\cdot)^{H}$ denotes the Hermitian transpose. To minimize the variance of $Y(t)$ in the presence of noise without distorting the desired signal, the constraint is $\mathbf{W}^{H}\mathbf{a}(\theta) = 1$. The output power is $E[|Y(t)|^{2}] = \mathbf{W}^{H}\mathbf{R}\mathbf{W}$, where $\mathbf{R} = E[\mathbf{X}(t)\mathbf{X}^{H}(t)]$ is the covariance matrix of the received signal. We find the weight minimizing the output power subject to the constraint using a Lagrange multiplier, $L(\mathbf{W}, \lambda) = \mathbf{W}^{H}\mathbf{R}\mathbf{W} + \lambda\,(1 - \mathbf{W}^{H}\mathbf{a}(\theta))$. Setting the gradient with respect to $\mathbf{W}$ to zero gives $\mathbf{W} = \lambda\,\mathbf{R}^{-1}\mathbf{a}(\theta)$, and substituting this into the constraint yields $\lambda = 1/(\mathbf{a}^{H}(\theta)\mathbf{R}^{-1}\mathbf{a}(\theta))$. Thus, the optimum weight can be written as $\mathbf{W}_{\mathrm{opt}} = \mathbf{R}^{-1}\mathbf{a}(\theta)/(\mathbf{a}^{H}(\theta)\mathbf{R}^{-1}\mathbf{a}(\theta))$, which is the MVDR weight. 
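The MVDR weight and spectrum can be evaluated numerically. The following NumPy sketch is illustrative only (not the authors' code): it uses a sin-angle convention for a broadside array and the exact covariance matrix of two uncorrelated unit-power sources rather than sample snapshots.

```python
import numpy as np

M = 9                        # number of array elements (as in Figure 2)
d = 0.5                      # element spacing in wavelengths
true_angles = [-20.0, 20.0]  # source directions, degrees from broadside
noise_var = 0.1

def steering(theta_deg: float) -> np.ndarray:
    """Steering vector of an M-element uniform linear array."""
    m = np.arange(M)
    return np.exp(-2j * np.pi * d * m * np.sin(np.deg2rad(theta_deg)))

# Exact covariance for unit-power uncorrelated sources: R = A A^H + sigma^2 I.
A = np.column_stack([steering(a) for a in true_angles])
R = A @ A.conj().T + noise_var * np.eye(M)
R_inv = np.linalg.inv(R)

def mvdr_power(theta_deg: float) -> float:
    """MVDR spatial spectrum P(theta) = 1 / (a^H R^-1 a)."""
    a = steering(theta_deg)
    return 1.0 / np.real(a.conj().T @ R_inv @ a)

grid = np.arange(-90.0, 90.5, 0.5)
spectrum = np.array([mvdr_power(t) for t in grid])
```

The spectrum peaks at the true directions; the MVDR weight for a given look direction is `R_inv @ a / (a.conj().T @ R_inv @ a)`, matching the closed-form expression above.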
The array output power (spatial spectrum) is $P_{\mathrm{MVDR}}(\theta) = 1/(\mathbf{a}^{H}(\theta)\mathbf{R}^{-1}\mathbf{a}(\theta))$. Proposed covariance matrix under mutual coupling We consider that all signals received on the array antennas are coherent. The source signals at each array element are amplitude-attenuated and phase-delayed due to multipath, with the first array element taken as the reference. In the case of $k = 1, 2, \cdots, K$ narrowband sources, replicas of the reference source signal can be written as $s_{k}(t) = h_{k}\,s_{1}(t)$, where $h_{k}$ represents the complex attenuation of the $k$th signal with respect to the first signal. The signal correlation matrix is then $\mathbf{R}_{s} = \sigma_{s}^{2}\,\mathbf{H}\mathbf{H}^{H}$, where $\mathbf{H} = [h_{1}, h_{2}, \cdots, h_{K}]^{T}$. The effects of mutual coupling must be removed before estimating the desired signals, because otherwise the desired directions of arrival cannot be estimated. We proceed in the following steps. Step 1: the eigenvalue decomposition of $\mathbf{R}$ yields an $M \times 1$ eigenvector corresponding to the largest eigenvalue and $M-1$ eigenvectors corresponding to the smallest eigenvalues. According to the subspace method, the principal eigenvector satisfies $\mathbf{e}_{1} = b\,\mathbf{C}\,\mathbf{a}(\theta)$, where $b$ and $\mathbf{C}$ are a constant and the mutual coupling matrix, respectively; the mutual coupling matrix is modeled as a banded Toeplitz matrix. Step 2: the covariance matrix of the array output is reconstructed using spatial smoothing, in which the natural array is divided into uniformly overlapping subarrays. Simulation & performance analysis In this chapter, we analyse performance by comparing the classical direction-of-arrival methods, MVDR and MUSIC, with the proposed method. In the simulations, the number of snapshots was 100, the SNR was 10 dB, and there were two targets. Figure 2 shows the angles estimated by the MVDR algorithm using 9 array elements, with the desired signals at (−20°, 20°). In Figure 2, the desired signals of the two targets are estimated accurately at (−20°, 20°). Figure 3 shows the angles estimated by the MVDR algorithm with 6 array elements and the two desired signals at (−5°, 5°). 
In Figure 3, the desired signals could not be estimated accurately at (−5°, 5°); only a single peak appears, at 0°. Figure 4 shows the angles estimated by the MUSIC algorithm with 6 array elements and the desired signals at (−5°, 5°). In Figure 4, the desired signals of the two targets were estimated accurately at (−5°, 5°). Figure 5 shows the angles estimated by the proposed method with 6 array elements and the desired signals at (−5°, 5°). In Figure 5, the desired signals of the two targets were estimated accurately at (−5°, 5°). Conclusion In this paper, we studied a method to accurately estimate the desired signal with a modified MVDR algorithm. The proposed method estimates the direction of arrival with an updated weight. First, we applied the MVDR algorithm to obtain the weight. Second, the obtained weight was updated by the covariance matrix. Finally, the covariance matrix was obtained using the Toeplitz-structured mutual coupling matrix. In the simulations, the classical MVDR algorithm showed poor resolution. Comparing Figure 4 and Figure 5, the resolution of the proposed method is the same as that of the MUSIC algorithm and much better than that of the conventional MVDR algorithm. Since the proposed method uses no subspace decomposition, its computational complexity is lower than that of the MUSIC algorithm. The proposed method is therefore suitable for drone systems using microprocessors to detect targets.
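The resolution comparison in Figures 3-5 can be reproduced in spirit with a short NumPy sketch. This is illustrative only: it uses a sin-angle convention, two uncorrelated unit-power sources, and the exact covariance matrix, and it omits the paper's coherent-source and mutual-coupling steps. With 6 elements and sources at ±5°, a MUSIC-style noise-subspace spectrum resolves both targets.

```python
import numpy as np

M = 6                      # array elements, as in Figures 3-5
d = 0.5                    # spacing in wavelengths
true_angles = [-5.0, 5.0]  # closely spaced targets, degrees from broadside
noise_var = 0.01

def steering(theta_deg: float) -> np.ndarray:
    m = np.arange(M)
    return np.exp(-2j * np.pi * d * m * np.sin(np.deg2rad(theta_deg)))

# Exact covariance for two unit-power uncorrelated sources.
A = np.column_stack([steering(a) for a in true_angles])
R = A @ A.conj().T + noise_var * np.eye(M)

# MUSIC: noise subspace = eigenvectors of the M - 2 smallest eigenvalues
# (np.linalg.eigh returns eigenvalues in ascending order).
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, : M - len(true_angles)]

def music_power(theta_deg: float) -> float:
    """Pseudo-spectrum: large where a(theta) is orthogonal to the noise subspace."""
    a = steering(theta_deg)
    return 1.0 / float(np.linalg.norm(En.conj().T @ a) ** 2)

grid = np.arange(-30.0, 30.5, 0.5)
spectrum = np.array([music_power(t) for t in grid])
# Two sharp peaks appear at -5 and +5 degrees, with a dip between them.
```

Repeating the scan with the MVDR spectrum `1 / (a^H R^-1 a)` on the same 6-element array shows the broader main lobe that merges the two closely spaced targets, which is the behavior reported for Figure 3.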
v3-fos-license
2021-09-27T21:03:35.244Z
2021-08-02T00:00:00.000
238849368
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.researchsquare.com/article/rs-701134/v1.pdf?c=1631901613000", "pdf_hash": "0f561abd0eadbf0b3e629de9fc6e82176645a6ff", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1999", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "1648e1776566ea9c6486afaf99efe577abcca3a0", "year": 2021 }
pes2o/s2orc
IRF2 Destabilizes Oncogenic KPNA2 to Modulate the Tumorigenesis of Osteosarcoma via Regulating NF-κB/p65 Background: Osteosarcoma (OS) is the most frequent primary malignant bone tumor. Emerging evidence has revealed that karyopherin alpha 2 (KPNA2) is strongly associated with the tumorigenesis and development of numerous human cancers. The aim of the present study was to investigate the expression pattern, biological functions and underlying mechanism of KPNA2 in OS. Methods: The online bioinformatics tool TFBIND was applied to predict transcription factor (TF) binding sites in the promoter region of KPNA2. The expression profile of KPNA2 in OS tissues was first assessed using the TARGET dataset. The expression of KPNA2 in clinical OS samples and adjacent normal human samples was analyzed by RT-qPCR and western blot. CCK-8, colony formation, wound-healing and Transwell assays were used to assess cell viability, proliferation and migration in vitro, and in vivo experiments were performed to explore the effects of KPNA2 and interferon regulatory factor-2 (IRF2) on tumor growth. In addition, the correlation between IRF2 and KPNA2, and their roles in NF-κB/p65 signaling, were investigated using chromatin immunoprecipitation (ChIP), RT-qPCR, western blot and dual-luciferase assays. Results: KPNA2 was markedly upregulated while IRF2 was significantly decreased in OS tissues and cell lines, and the two were negatively correlated with each other. KPNA2 knockdown remarkably suppressed OS cell growth, migration and invasion in vitro and tumor growth in vivo, while IRF2 knockdown exerted an opposing effect. IRF2 binds to the KPNA2 promoter to modulate the tumorigenic malignant phenotypes of OS via regulating NF-κB/p65 signaling. Conclusion: The present study demonstrated that KPNA2 performs an oncogenic function, possibly regulating tumorigenesis through the NF-κB/p65 signaling pathway. 
Importantly, IRF2 was confirmed to serve as a potential upstream TF of KPNA2 involved in the regulation of the NF-κB/p65 pathway in OS. Introduction Osteosarcoma (OS) is the most frequent primary malignant bone tumor, affecting mainly pediatric and adolescent patients (1), and is composed of malignant mesenchymal cells producing osteoid and/or immature bone (2). It typically forms in the metaphysis of long bones, specifically the proximal tibia, distal femur, and proximal humerus, accompanied by bone swelling and pain (3). Metastasis to the lungs is common in OS (4). Before the use of neoadjuvant and adjuvant chemotherapy, approximately 90% of OS patients died of pulmonary metastases (5). OS is characterized by high levels of genomic instability. However, the molecular basis of OS remains unclear, and the search for new treatments is urgently needed. Karyopherin alpha 2 (KPNA2, 58 kDa) is one of the seven members of the karyopherin α-family (6, 7). Dysregulation of KPNA2 has been reported to serve as a potential biomarker in several malignancies, including breast cancer (8), gastric cancer (9), lung cancer (10) and glioma (11). KPNA2 serves as the adaptor that transfers p65 to the nucleus by recognizing the classic nuclear localization signal (7,12). A previous study showed that KPNA2 interacts with p65 and facilitates NF-κB p65/p50 nuclear transportation in TNF-α-stimulated lung cancer cells (13). As is well known, NF-κB/p65 is normally inactive and concentrated in the cytoplasm until it is activated and migrates to the nucleus in response to stress. NF-κB is involved in the regulation of cell growth and survival, and its abnormal activation is associated with the malignant progression of various cancers (14,15), including OS (16). Moreover, a recent study revealed that KPNA2 promotes NF-κB activation and thereby accelerates the development of osteoarthritis (17). 
However, the molecular pathways regulated by KPNA2-mediated NF-κB/p65 in OS remain to be elucidated. To better understand the molecular mechanism of KPNA2 in OS, we first used the TARGET online database to seek KPNA2-related factors and found that interferon regulatory factor-2 (IRF2) expression was negatively correlated with KPNA2 in OS tissues. Transcription factors (TFs) are proteins that specifically recognize DNA through consensus sequences, thereby controlling chromatin and transcription and guiding expression of the genome (18). To identify the TF driving KPNA2 transcription, the online software TFBIND (http://tfbind.hgc.jp/) was employed to identify putative binding sites in the promoter region of KPNA2. Specifically, several canonical IRF2-binding sites in the promoter region of KPNA2 were observed, and IRF2 was then selected from 147 candidate genes. IRF2, which belongs to the IRF family, is widely expressed in various tissues (19). Among IRF family members, IRF1 signaling pathways can directly induce p21-dependent G0/G1 cell cycle arrest and p21-independent modulation of survivin (20). IRF2 has been shown to serve as an important regulator in acute myeloid leukemia by targeting INPP4B (21). A recent study indicated that IRF2 is able to suppress the enhancement of cell migration and invasion in OS mediated by miR-18a-5p (22). Similarly, IRF2 was not expressed, or was expressed at a low level, in OS tissues (23). Intriguingly, IRF was already identified as a functional TF in non-small-cell lung cancer (NSCLC) that suppressed KPNA2 expression (24). Therefore, we speculate that IRF2 may negatively regulate KPNA2 as its upstream TF to modulate OS progression through p65-dependent signaling. This study aimed to investigate the role of IRF2 in modulating KPNA2 expression, which may serve an important role in p65 nuclear importation in the progression of OS. 
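TFBIND-style binding-site prediction amounts to scanning a promoter sequence against a consensus pattern. As a toy illustration of the idea only: the motif below is a simplified IRF-like consensus written as a regex (TFBIND itself scores position weight matrices, and the promoter fragment here is synthetic, not the real KPNA2 promoter):

```python
import re

# Hypothetical simplified IRF-like consensus, AANTGAAA, where N is any base.
# A regex scan only mimics the idea of locating candidate binding sites;
# real tools score a position weight matrix at every offset instead.
IRF_LIKE = re.compile(r"AA[ACGT]TGAAA")

def find_candidate_sites(promoter: str):
    """Return (start, matched_sequence) pairs for each candidate site."""
    return [(m.start(), m.group()) for m in IRF_LIKE.finditer(promoter.upper())]

# Synthetic promoter fragment with one planted site (AACTGAAA at offset 6).
promoter = "GGCTATAACTGAAATTCCGGAGTC"
sites = find_candidate_sites(promoter)
```

A scan like this only nominates candidate sites; as in the study, a nominated site still needs experimental confirmation (here, ChIP and dual-luciferase assays) before it can be called functional.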
Here, we found that KPNA2 deficiency suppressed the malignant behaviors of OS cells, and that the underlying mechanism was regulated by IRF2 and involved p65 nuclear translocation in the NF-κB signaling pathway. Taken together, our findings suggest that KPNA2 may serve as a new potential prognostic indicator and therapeutic target for OS.

Cell culture
Four human OS cell lines (Saos-2, HOS, U2OS and MG-63) and a human normal osteoblastic cell line (hFOB 1.19) were obtained from the Cell Bank of the Type Culture Collection of the Chinese Academy of Sciences (Shanghai, China). The cells were grown in Dulbecco's modified Eagle's medium (DMEM; Gibco) supplemented with 10% fetal bovine serum (FBS; Gibco). All OS cell lines were cultured in a humidified incubator under 5% CO2 at 37 °C, while hFOB 1.19 cells were grown at 34 °C.

Patients and tissue specimens
Twenty-five paired tumor samples and adjacent non-tumor tissues were obtained from patients who had undergone surgery at Zhongshan Hospital, Fudan University. This study was approved by the Ethics Committee of Zhongshan Hospital, Fudan University (Y2014-185) in accordance with the Declaration of Helsinki, and written informed consent was obtained from all patients. No patient had undergone chemotherapy, radiation therapy or other targeted therapy before surgery. The diagnosis of OS was confirmed by at least two pathologists. All surgical tissue samples used in our study were immediately placed in liquid nitrogen and stored at -80 °C until use.

Cell proliferation assay
Cell proliferation was assessed with a Cell Counting Kit-8 (CCK-8; Dojindo, Japan) following the manufacturer's instructions. Cells were seeded into 96-well plates at a density of 5 × 10^3 per well, and 10 µl of CCK-8 solution was added to each well at days 1, 2, 3, 4 and 5 of culture at 37 °C. After a further 1 h of incubation, the absorbance at 450 nm was measured with a microplate reader.
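The CCK-8 readout described above is typically converted to relative viability by subtracting a blank well and normalizing to an untreated control. A minimal sketch of that arithmetic (the function name and the OD values are hypothetical, not from the paper):

```python
def relative_viability(od_sample, od_blank, od_control):
    """CCK-8 readout: background-subtracted OD450 of a sample well
    expressed relative to a background-subtracted control well."""
    return (od_sample - od_blank) / (od_control - od_blank)

# Invented OD450 readings on one day of the growth curve:
# a knockdown well vs. an untreated control well, with a medium-only blank.
print(relative_viability(od_sample=0.85, od_blank=0.10, od_control=1.35))
```

With these made-up numbers the knockdown well retains about 60% of the control signal.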
Colony formation assay
For the colony formation assay, 500 cells were plated into each well of a 6-well culture plate. The plates containing DMEM were incubated at 37 °C for 2 weeks. After three washes with PBS, cells were fixed with 4% paraformaldehyde for 10 min at room temperature and then stained with 0.5% crystal violet solution for another 20 min. Lastly, visible colonies of more than 50 cells were counted manually and imaged under a microscope.

Transwell invasion assay
Cells (1 × 10^4) were seeded into Transwell inserts with 8-µm pores (Sigma-Aldrich). The upper chamber was coated with 50 µl of Matrigel, 600 µl of complete DMEM was added to the lower chamber, and the cells were incubated for 24 h. Cells remaining on the upper surface were removed; the invasive cells were fixed with 4% formaldehyde for 20 min and then stained with crystal violet for 15 min. Cells that had invaded the bottom surface of the filter were counted to assess invasive ability, and at least five fields of view were quantified under a light microscope (Leica) to obtain representative images.

Wound healing assay
Cells were cultured in six-well plates. After reaching 90% confluence, a 200-µl pipette tip was used to create scratch wounds in the cell monolayer. Representative images of cell migration were photographed under a light microscope (Leica) at 0 and 24 h after wounding. Migration ability was assessed by measuring changes in wound width or area with ImageJ software.

Western blot assay
Protein was extracted from tissues and cells with RIPA buffer (Thermo Fisher Scientific). Protein concentration was assessed with a bicinchoninic acid (BCA) assay kit (Thermo Fisher Scientific). Equal amounts of protein samples were separated on 12% SDS-PAGE gels and transferred onto polyvinylidene difluoride (PVDF) membranes.
After blocking with 5% non-fat milk for 2 h at room temperature, the blots were incubated with primary antibodies against KPNA2, IRF2, NF-κB p65, p-NF-κB p65 and GAPDH overnight at 4 °C, followed by incubation with horseradish peroxidase-conjugated secondary antibodies at room temperature. Endogenous GAPDH served as the internal reference protein. Protein band signals were visualized with an ECL detection system (Pierce) and quantified by densitometric scanning with ImageJ software.

Chromatin immunoprecipitation (ChIP) assay
ChIP assays were performed with a kit (Sigma-Aldrich) following the protocol provided by the manufacturer. The diluted DNA-protein complexes were incubated with anti-IRF2 antibody or mouse IgG (Sigma-Aldrich) in the presence of protein A/G beads at 4 °C overnight. RT-qPCR was applied to examine the ChIP DNA samples, with IgG as the negative control.

Dual-luciferase assay
Wild-type (WT) and mutant (MUT) KPNA2 promoter constructs were inserted into the pGL3 promoter vector, which was transfected into U2OS and MG-63 cells using Lipofectamine 2000 (Invitrogen) together with an IRF2 overexpression plasmid or empty plasmid (NC). After 48 h, luciferase activity was measured with a dual-luciferase reporter assay system (Promega) following the manufacturer's protocol.

Tumor xenograft assay

Histological analysis and immunohistochemistry
Xenograft tumor samples were isolated and fixed in 4% paraformaldehyde. Tumor tissues were embedded in paraffin and sectioned at a thickness of 5 µm. The paraffin sections were dewaxed, hydrated, stained with hematoxylin and counterstained with eosin. Antigen retrieval was performed with citrate buffer and endogenous peroxidase was blocked with 3% H2O2; immunohistochemistry (IHC) was performed with diluted primary Ki67 antibody overnight at 4 °C, followed by incubation with secondary antibody at room temperature. The slides were developed with diaminobenzidine (DAB) and counterstained with hematoxylin. IHC staining images were obtained under a light microscope.
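Ki67 staining of this kind is usually summarized as the percentage of positive cells averaged over several counted fields. A small illustrative sketch (the counts and number of fields are invented, not taken from the paper):

```python
def percent_positive(positive_cells, total_cells):
    """Percentage of stained (e.g. Ki67-positive) cells in one counted field."""
    if total_cells == 0:
        raise ValueError("no cells counted in this field")
    return positive_cells / total_cells * 100.0

# Invented (positive, total) counts from three fields of one section
fields = [(42, 120), (55, 130), (38, 110)]
per_field = [percent_positive(p, t) for p, t in fields]
mean_percent = sum(per_field) / len(per_field)
print(mean_percent)
```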
Statistical analysis
All obtained data are expressed as the mean ± standard deviation (SD). Student's t-test or one-way ANOVA followed by Tukey's post hoc test was used to compare data between two groups or among multiple groups, respectively. Statistical analyses were performed with GraphPad Prism 8.0 software.

Overexpression of KPNA2 in osteosarcoma tissues
Based on the TARGET database, we previously found that KPNA2 expression was notably upregulated in OS tissues compared with non-tumor tissues (Figure 1A). To examine whether KPNA2 is altered in clinical OS, qRT-PCR was performed to examine KPNA2 levels in 25 paired OS samples and their adjacent normal samples. As shown in Figure 1B, KPNA2 mRNA levels were clearly upregulated in OS samples compared with normal samples. Representative IHC images showed similar results (Figure 1C). Furthermore, KPNA2 protein levels were clearly elevated in 8 OS samples (Figure 1D). In addition, compared with hFOB 1.19 cells, higher KPNA2 levels were observed in the four OS cell lines U2OS, HOS, Saos-2 and MG-63 (Figure 1E). KPNA2 expression was highest in U2OS and MG-63 cells, which were therefore chosen for subsequent analyses. These results suggest that KPNA2 might play a vital role in the progression of OS.

Transcription factor IRF2 specifically regulates KPNA2 expression in osteosarcoma
Assuming that the regulation of KPNA2 in OS occurs at the transcriptional level, we performed bioinformatic analysis to search for transcription factors (TFs) acting on the KPNA2 promoter region, extracting 147 candidate TFs of KPNA2 from the online dataset. Data from the TARGET dataset showed that a total of 9792 genes were negatively associated with KPNA2 in OS (R > 0.2, FDR < 0.05). Of these, 5950 genes overlapped with the downregulated genes in the GSE157322 dataset, and 31 genes overlapped with the 147 candidate TFs of KPNA2.
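The three-way screen just described (KPNA2-correlated genes from TARGET, downregulated genes from GSE157322, and predicted TFs) amounts to a set intersection followed by ranking on the correlation with KPNA2. A toy sketch with invented gene sets and correlation values; only the five TF symbols are taken from the paper:

```python
# Hypothetical stand-ins for the three real gene sets
target_correlated = {"IRF2", "RFX1", "STAT3", "PPARA", "MZF1", "GENE_X"}
gse_downregulated = {"IRF2", "RFX1", "STAT3", "PPARA", "MZF1", "GENE_Y"}
predicted_tfs     = {"IRF2", "RFX1", "STAT3", "PPARA", "MZF1", "GENE_Z"}

# Candidates must appear in all three sets
candidates = target_correlated & gse_downregulated & predicted_tfs

# Rank by (invented) absolute correlation with KPNA2 and keep the top 5
corr_with_kpna2 = {"IRF2": -0.62, "RFX1": -0.41, "STAT3": -0.38,
                   "PPARA": -0.31, "MZF1": -0.27}
top5 = sorted(candidates, key=lambda g: abs(corr_with_kpna2.get(g, 0.0)),
              reverse=True)[:5]
print(top5)
```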
Therefore, a total of 26 genes lay at the intersection of the three sets (Figure 2A). According to the correlation of these 26 genes with KPNA2, we chose the top 5 candidate TFs of KPNA2: RFX1, STAT3, IRF2, PPARA and MZF1. To assess whether these five TFs could alter KPNA2 expression, we overexpressed each of them in the two OS cell lines. As shown in Figure 2B, only overexpression of IRF2 significantly suppressed KPNA2 expression in both OS cell lines, while the other four TFs had no significant effect on KPNA2 expression. Meanwhile, because KPNA2 was overexpressed in the four OS cell lines, we also determined the changes in these five TFs after KPNA2 knockdown using RT-qPCR. Silencing of KPNA2 markedly upregulated IRF2 mRNA levels, whereas the other four TFs showed no obvious change in response to KPNA2 knockdown (Figure 2C). We therefore focused on IRF2 in subsequent experiments. In the TARGET dataset, IRF2 expression was mildly downregulated in OS tissues without reaching statistical significance (Figure 2D), while IRF2 expression was strongly negatively correlated with KPNA2 (Figure 2E). Consistently, IRF2 mRNA levels were significantly downregulated in the 25 clinical OS tissue samples compared with adjacent normal tissues (Figure 2F). In addition, a ChIP assay was conducted in the two OS cell lines and hFOB 1.19 cells to evaluate the binding of IRF2 to KPNA2; the enrichment of IRF2 binding was markedly decreased in the two OS cell lines compared with hFOB 1.19 cells (Figure 2G). These findings indicate that IRF2 may be one of the major regulators of KPNA2 expression in OS. To further determine whether IRF2 binds the KPNA2 promoter, a dual-luciferase assay was performed: IRF2 dramatically reduced the luciferase activity of KPNA2-WT but not of KPNA2-MUT (Figure 2H). Collectively, IRF2, as a functional TF, binds to the KPNA2 promoter in OS cells.
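In the Promega dual-luciferase system, the firefly reporter signal is normalized to a co-expressed Renilla control, and conditions are compared as fold changes of that ratio. A hedged sketch of the arithmetic (function names and raw readings are invented):

```python
def relative_luciferase(firefly, renilla):
    """Firefly reporter signal normalized to the Renilla internal control."""
    return firefly / renilla

def fold_change(sample_ratio, control_ratio):
    """Normalized reporter activity of a sample relative to its control."""
    return sample_ratio / control_ratio

# Invented readings for the KPNA2-WT reporter:
# empty vector (NC) vs. IRF2 overexpression
wt_nc   = relative_luciferase(12000, 4000)  # ratio 3.0
wt_irf2 = relative_luciferase(4500, 4500)   # ratio 1.0
print(fold_change(wt_irf2, wt_nc))
```

With these made-up numbers IRF2 reduces WT-promoter activity to about a third of the control, the qualitative pattern reported for Figure 2H.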
IRF2 deficiency may cooperate with KPNA2 to regulate cell proliferation and tumor growth of OS cells in vitro and in vivo
Given the negative correlation between KPNA2 and IRF2, we investigated their effects on cell proliferation, migration, invasion and the cell cycle. First, IRF2 protein expression was lower in the four OS cell lines than in hFOB 1.19 cells (Figure 3A). Downregulation of IRF2 or KPNA2 in U2OS cells was achieved by transfection of knockdown vectors (shKPNA2 and shIRF2), as confirmed by western blot (Figure 3B). Regarding the malignant phenotypes of OS cells, KPNA2 knockdown inhibited cell viability and proliferation, whereas IRF2 knockdown had the opposite effects and partially rescued the phenotypes suppressed by KPNA2 knockdown in vitro (Figure 3C-D). In vivo, KPNA2 knockdown remarkably reduced tumor weight and volume, whereas IRF2 knockdown promoted tumor growth and weakened the KPNA2 knockdown-mediated inhibition of tumor growth (Figure 3E and 3F). Ki67 expression was reduced by KPNA2 knockdown and elevated by IRF2 knockdown (Figure 3G). These findings demonstrate that IRF2 silencing partially attenuates the impact of KPNA2 knockdown on OS growth.

IRF2/KPNA2 may regulate migration and invasion of osteosarcoma cells in a p65-dependent manner
Regarding the malignant phenotypes of migration and invasion, KPNA2 knockdown inhibited both abilities, whereas IRF2 knockdown had the opposite effects and partially rescued the phenotypes suppressed by KPNA2 knockdown in OS cells (Figure 4A-B). Because KPNA2 can activate the NF-κB/p65 signaling pathway (25), we further examined the expression of NF-κB/p65 and p-NF-κB/p65 in OS cells transfected or co-transfected with shKPNA2 and/or shIRF2. The protein level of p-NF-κB/p65 was clearly decreased by shKPNA2, with only a slight change in total NF-κB/p65 (Figure 4C).
However, p-NF-κB/p65 was increased in shIRF2-infected cells; more importantly, IRF2 silencing markedly attenuated the inhibitory effect of KPNA2 silencing on p-NF-κB/p65 expression (Figure 4C). Thus, IRF2 negatively regulates KPNA2 by binding directly to the KPNA2 promoter and participates in OS progression through the NF-κB/p65 signaling pathway.

Discussion
Osteosarcoma is relatively rare but devastating (26). Unfortunately, although the introduction of novel adjuvant chemotherapy after aggressive surgical resection initially improved overall 10-year survival, patient survival has not improved significantly since the 1990s (27). It is therefore of great significance to identify novel molecules that could help develop effective methods to diagnose and treat this malignant bone tumor. Here, we conclude that IRF2 binds to the KPNA2 promoter and regulates the malignant biological properties of OS cells via the NF-κB/p65 signaling pathway. This evidence may provide new ideas for the diagnosis and treatment of osteosarcoma.

Recently, several studies have linked KPNA2 to various cancers, such as lung, breast and colon cancer. High KPNA2 expression is positively related to cancer invasiveness and poor prognosis, marking KPNA2 as a potentially relevant therapeutic target for patients with different cancers (28). KPNA2 is involved in several cellular biological processes, including cell differentiation, development, viral infection, immune response and transcriptional regulation (29). Consistently, our study showed that KPNA2 was dramatically elevated in OS samples compared with normal samples. Although KPNA2 has been shown to be frequently expressed in OS as a novel diagnostic marker, as well as in chondrosarcoma and Ewing sarcoma (30), the function of KPNA2 in osteosarcoma has remained unclear.
In the present study, data mining and bioinformatics analysis indicated that KPNA2 was overexpressed in OS patients in the TARGET dataset, and experiments verified high KPNA2 levels in clinical OS samples and OS cell lines. Additionally, KPNA2 knockdown inhibited proliferation, migration and invasion in two OS cell lines, and remarkably reduced tumor weight and volume in vivo. These findings suggest that KPNA2 may play a crucial role in the biological progression of OS.

Interferon regulatory factor-2 (IRF2) exerts anti-tumor effects in several human cancers. For instance, IRF2 suppresses cell proliferation and migratory ability and promotes apoptosis in non-small cell lung cancer cells (31). IRF2 may act as a tumor suppressor by regulating p53 signaling in gastric cancer (32). IRF2 has also been shown to serve as a tumor suppressor in patients with hepatocellular carcinoma, where its inactivation leads to impaired TP53 function (33). The current study highlights that KPNA2 negatively alters the expression of IRF2 in OS cells. Meanwhile, through data mining in the GSE157322 and TARGET datasets and bioinformatic TF prediction, we found that IRF2 binds to the promoter of KPNA2 and represses KPNA2 expression. This underlying mechanism is consistent with a previous report that IRF2 binds to the miR-1227 promoter and thereby inhibits tumor growth (23). Moreover, IRF2 was clearly downregulated in OS and negatively associated with KPNA2. More importantly, IRF2 knockdown promoted the malignant behaviors that were suppressed by KPNA2 knockdown, and this rescuing effect of IRF2 on KPNA2 was also reflected in tumor growth in vivo. These findings demonstrate that IRF2 silencing partially attenuates the impact of KPNA2 knockdown on osteosarcoma progression. Previous research has shown that KPNA2 plays a pivotal role in melanoma development by activating NF-κB/p65 signaling pathways (25).
In addition, IRF can cooperate with NF-κB p65 to promote the efficacy of T cell-related immunotherapy in neuroblastoma, revealing a synergistic regulatory relationship between IRF and p65 (34). Moreover, IRF2 can regulate NF-κB activity by modulating NF-κB subcellular localization (35). In our study, deletion of KPNA2 reduced p-NF-κB/p65 expression with little change in total NF-κB/p65, whereas the opposite results were found after knockdown of IRF2; KPNA2 and IRF2 thus had opposite regulatory effects on activation of the NF-κB/p65 pathway. Consistently, our results also confirmed that the inhibitory effect of KPNA2 knockdown was partially reversed by IRF2 silencing. Thus, elevated KPNA2 contributes to the progression of OS by negatively regulating IRF2 via the NF-κB/p65 pathway.

Availability of data and materials
All data supporting the results of this study are available from the corresponding authors on reasonable request.

Conflict of interest
The authors declare that they have no conflict of interest.

Funding
None.

for detecting Ki67 expression. The data are presented as the mean ± SD from three independent experiments. *p<0.05, **p<0.01 and ***p<0.001.
Competition and yield performance in mixtures of oats and barley: nitrogen fertilization, density and proportion of the components

Competition and yield performance in mixtures of barley and oats were evaluated in addition series experiments (three experiments) in 1983 and 1984. Three doses of nitrogen fertilization (10, 40 and 80 kg N/ha) were applied. In the first year the components were Agneta barley and Veli oats; in 1984, in addition to this combination, Ida barley and Veli oats were also included. The competitive relationship between the components was analysed with the replacement series model and by regression analysis. The results showed that the component dominant according to the regression analysis was also dominant according to the indices of the replacement series model, independently of density and proportion. Barley was generally more competitive than oats. The dominance of barley usually increased with increasing nitrogen fertilization, especially in the mixture of Agneta and Veli. All yield components of the barley plants increased with decreasing proportion of barley in the mixture. In 1983, some mixtures overyielded significantly (p<0.05), and the relative yield total, usually greater than one, indicated a yield advantage. In 1984, oats suffered from insect damage and neither barley cultivar was able to compensate sufficiently, so no overyielding occurred; the relative yield total was lower than one and thus no yield advantage was achieved.
Index words: competition, yield advantage, barley, oats, mixtures

INTRODUCTION
Interest has been paid to crop mixtures for two main reasons: an increase in yield brought about by the complementary habits of associated genotypes, and greater stability of yield over locations and seasons due to the ability of at least one genotype in the mixture to yield well in adverse conditions (Taylor 1978). Approximately 50% of the barley and oats grain produced in Ontario is from mixtures of the two components (300 000 ha) (Fejer et al. 1982). In several studies of barley-oats mixtures grown for feed, grain yield increases over the mean of the components in monoculture have been observed, and even overyielding has occurred (Salminen 1945, Van Dobben 1953, Bebawi and Naylor 1978, Taylor 1978, Fejer et al. 1982). Ontario provincial agricultural statistics show that mixed grain consistently outyielded pure stands, as reported by Fejer et al. (1982). In trying to combine the two species so that there was less mutual competition at critical stages, Syme and Bremner (1968) found that mixture yields did not exceed the better component and were usually similar to the mid-component.

In most cases of mixtures of oats and barley, the yield of the mixture is compared with the average of the yields of the two monocultures, and a higher yield of the mixture is interpreted as an argument for mixed cropping. This indicates that the yield advantage of mixtures is not always completely assessed, because without calculation of the relative yield total, interpretations based on the ratio of actual and expected yields can be misleading, especially in cases where compensation occurs (Willey 1979). Compensation also seems to be the most common situation with mixtures of barley and oats, i.e. the competitive abilities of barley and oats differ (for example Salminen 1945, de Wit 1960, Syme and Bremner 1968, Fejer et al. 1982).
A mixture of species might utilize resources more efficiently and therefore yield more than the pure stand. This may indicate that intraspecific competition is more severe than interspecific interference with growth. For these reasons mixtures might be expected to show a yield advantage (Spitters 1983). These experiments were conducted to achieve an accurate assessment of the relative strengths of intra- and interspecific competition in mixtures of barley and oats.

In the experiments described here, replacement series (substitutive designs) (de Wit 1960, Harper 1977, Connolly 1986) at three total plant densities of barley-oats mixtures and monocultures were used to assess the competitive relationship between the species and the yield advantage of mixtures. The design is characterized by the term addition series (Spitters 1983).

Two approaches were used to analyse competition. The first was to use measures of competitive ability and combining ability of varieties based on the relative yield responses according to the de Wit model (de Wit 1960). The second is based on linear regression with the reciprocal of average plant grain yield as the dependent variable and density as the independent variable. The reciprocal yield model was expanded to multiple genotypes by Wright (1981) and Spitters (1983).

MATERIALS AND METHODS
The addition series field experiments were carried out in 1983 (one experiment) and 1984 (two experiments) at the experimental farm of the University of Helsinki in Viikki, Helsinki (60°13'N, 25°00'E), with barley and oats sown separately and in mechanical mixtures. In 1984, the experiments were situated side by side. In 1983, the soil was silty clay with pH 5.6 and in 1984 finer fine sand with pH 5.4.
Experimental design and management. A split-split-plot design (nitrogen levels in main plots, total densities in subplots and genotypic composition of the stand in sub-subplots) was used with three blocks. The subplot size was 10 m² (1.25 m × 8 m) with rows spaced 12.5 cm apart. In 1983, the varieties were Agneta barley and Veli oats; in 1984, in addition to this combination, Ida barley and Veli oats were also included. The general characters of the cultivars are described elsewhere (Jokinen 1991a).

Sampling and analyses. The number of plants in each plot was determined by counting the number of seedlings in four randomly selected 1-m rows/plot about three weeks after sowing, before the start of tillering. Similarly, the number of generative shoots in 1983 was determined after complete ear emergence of the cultivars. The height of the stands and the emergence time of seedlings were estimated visually. Four weeks after sowing in 1984, samples were taken from three randomly selected 1-m rows/plot for determination of the total above-ground dry matter of the plants.

From each mixture yield, a 50 g sample was taken for determination of the seed yield of the barley and oats components. The separated samples of each mixture, as well as samples of each pure stand yield, were used for determination of 1000-grain weights (g) (3 × seeds/sample) in 1983. The number of grains/head was calculated using the data on yield, number of generative shoots and grain weight.
Relative yield (RY) and relative yield total (RYT) were calculated according to the method of de Wit and van den Berg (1965). Competitive ratio (CR) was determined according to the method of Willey and Rao (1980). The mean yield per area was calculated before computing the indices. Details of the calculations are described elsewhere (Jokinen 1991b).

A discussion of the use of hyperbolic yield-density equations in various situations has been given elsewhere (Wright 1981, Spitters 1983, Firbank and Watkinson 1985, 1990, Connolly 1987, Roush et al. 1989). The method used here was described previously (Jokinen 1991c).

Data on the plant dry weights, grain yields, 1000-grain weight and the number of generative shoots were subjected to analyses of variance for a split-split-plot design (Steel and Torrie 1980). Mean separation was accomplished by Tukey's honestly significant difference test (HSD) (P = 0.05) (Steel and Torrie 1980).

Table 1. The influence of nitrogen fertilization and proportion of oats in the stand on the phytomass accumulation (dry weight mg/plant) of Veli oats during the first month of growth in 1984 in two barley-oats experiments (Veli/Ida and Veli/Agneta). The analysis of variance is done separately for each experiment. Dry weight means in the average columns and in the average rows followed by the same letter are not significantly different at the 5% level (HSD test).

RESULTS
The first leaves of barley were larger than those of oats (data not given). The number of seedlings in each plot was about the same as expected (0.95-1.05) (data not given).
The average phytomass of the oats was approximately the same in both experiments (Table 1). Both barley varieties were over twice as heavy as the oats (Tables 1 and 2). The average phytomass of all the varieties decreased with increasing density (data not given). The seedlings of Agneta were heavier than those of Ida. Unlike the oats, the phytomass of the barley varieties increased with decreasing proportion of the species in the mixture. Both barley varieties were more competitive than the oats, the competitive ratio varying from 1.21 to 1.52 (data not given). The relative yield totals varied from 0.96 to 1.05 (data not given).

No lodging occurred in 1983. In 1984, the pure stands of barley at the highest density and the highest level of nitrogen fertilization were the most lodged (Table 3).

Grain yields
In 1983, the mean yield of the experiment with Veli oats and Agneta barley was 5446 kg/ha (Table 4). The analysis of variance showed a significant (p<0.05) interaction between the nitrogen fertilization and the proportion of the components, and between the density and the proportion of the components. At the lowest level of nitrogen, two out of three mixtures yielded significantly more (approximately 9%) than the pure stands, i.e. the mixtures overyielded. At the highest density all the mixtures overyielded significantly (approximately 7%).

Table 2. The influence of nitrogen fertilization and proportion of barley in the stand on the phytomass accumulation (dry weight mg/plant) of Ida barley and Agneta barley during the first month of growth in 1984 in two barley-oats experiments (Veli/Ida and Veli/Agneta). The analysis of variance is done separately for each experiment. Dry weight means in the average columns and in the average rows followed by the same letter are not significantly different at the 5% level (HSD test).

Table 3. Lodging of the stands (% of area) in 1984 (- = no lodging, 100 = completely lodged, Ag = Agneta/Veli, Id = Ida/Veli).
Comparison between the actual and expected yields of the 50:50 mixtures shows that all the mixtures were more productive than the monocultures.

In 1984, the mean yield in both experiments was lower than in the previous year (Tables 5 and 6). This was due to the very low yield of oats caused by frit fly (Oscinella frit) damage. In general, the grain yield of the stands increased with increasing proportion of barley in the mixture, and no overyielding occurred. It is important to note, however, that in 1984 in both experiments the actual yields of the 50:50 mixtures were higher than expected in some cases (Tables 5 and 6).

Table 4. The influence of nitrogen fertilization, density and the proportion of barley (Agneta) and oats (Veli) in the mixture on the grain yield (kg/ha) of the stands in 1983. A/E is the ratio of the actual and expected yield of the 50:50 mixture. Grain yield averages within each treatment (nitrogen fertilization, density and proportions) followed by the same letter are not significantly different at the 5% level (HSD test). Comparison between the grain yield averages of different proportions is done at different levels of nitrogen and density (interaction statistically significant).

Relative yields (RY), relative yield totals (RYT) and competitive ratio (CR)
In 1983, the relative yield of Veli oats was higher than expected only at the lowest level of nitrogen fertilization (Fig. 1). In 1984, the relative yield of oats was always lower than expected (Figs. 3 and 5). In both years the relative yields of barley were usually higher than expected (Figs. 1, 2 and 3).

In general, barley was more competitive than oats (CR > 1) (Figs. 2, 4 and 6). Only in 1983, at the lowest level of nitrogen fertilization, were oats as competitive as barley in some cases (Fig. 2). In 1984, Agneta was more competitive over oats than was Ida, and Agneta was more competitive than in the previous year; Agneta barley was the most competitive at the highest level of nitrogen fertilization. In 1984, unlike the previous year, the competitive ratio of Agneta usually increased with increasing proportion of barley. As a rule, the relative yield totals exceeded one in 1983 (Fig. 1). In 1984, the relative yield totals of both mixtures were close to or lower than one (Figs. 3 and 5).

Table 5. The influence of nitrogen fertilization, density and the proportion of barley (Agneta) and oats (Veli) in the mixture on the grain yield (kg/ha) of the stands in 1984. A/E is the ratio of the actual and expected yield of the 50:50 mixture. Grain yield averages within each treatment (nitrogen fertilization, density and proportions) followed by the same letter are not significantly different at the 5% level (HSD test). Comparison between the grain yield means of different proportions is done at different levels of nitrogen and density (interaction statistically significant).

Figure 1. The influence of density (plants/m²), nitrogen fertilization (kg N/ha) and proportion of the components on the relative yields (RY) of Agneta barley and Veli oats, and on the relative yield totals (RYT) of the mixtures in 1983.

Figure 2. The influence of density (plants/m²), nitrogen fertilization (kg N/ha) and proportion of barley on the competitive ratio (CR) of Agneta barley over Veli oats in 1983.

Figure 3. The influence of density (plants/m²), nitrogen fertilization (kg N/ha) and proportion of the components on the relative yields (RY) of Agneta barley and Veli oats, and on the relative yield totals (RYT) of the mixtures in 1984.
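The indices discussed above can be computed directly from plot yields: RY of a component is its yield in mixture divided by its yield in pure stand, RYT is the sum of the two RYs (RYT > 1 indicating a yield advantage), and CR (Willey and Rao 1980) compares the RYs corrected for sown proportions. A sketch with hypothetical yields (the function names and numbers are mine, not from the paper):

```python
def relative_yield(mix_yield, pure_yield):
    """RY: yield of a component in mixture / its yield in pure stand."""
    return mix_yield / pure_yield

def competitive_ratio(ry_a, ry_b, sown_a, sown_b):
    """CR of species a over species b (Willey and Rao 1980):
    CR_a = (RY_a / RY_b) * (sown proportion of b / sown proportion of a)."""
    return (ry_a / ry_b) * (sown_b / sown_a)

# Hypothetical 50:50 barley-oats mixture, yields in kg/ha
ry_barley = relative_yield(mix_yield=3000, pure_yield=5000)  # 0.6
ry_oats   = relative_yield(mix_yield=2500, pure_yield=5000)  # 0.5
ryt = ry_barley + ry_oats            # 1.1 > 1: yield advantage
cr_barley = competitive_ratio(ry_barley, ry_oats, 0.5, 0.5)  # 1.2 > 1: barley dominant
print(ryt, cr_barley)
```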
Regression models

The regression equations accounted for 90-96% of the variation in grain yield of both species (R² = 0.90-0.99) (Tables 7, 8 and 9). Only in 1984, in the mixture of Agneta barley and Veli oats, were the regression coefficients of oats not statistically significant in the regression equations for oats. As a rule, the intraspecific competition of barley was more severe than the interspecific competition, and vice versa for oats. Barley benefitted at the expense of oats. The exceptional case, in which the intraspecific competition in the mixture was stronger than the interspecific competition for both species (B1/B2 > 1), occurred at the lowest level of nitrogen fertilization in 1983 (Table 7). Then both components benefitted from mixed culture. In this case the asymptotic yields of both components grown in mixture (1/(B1 + B2)) were higher than the asymptotic yields of both components grown in monoculture (1/B1). Barley was a stronger competitor than oats, as determined from the ratio of the regression coefficients of barley (RC = B1/B2). Agneta was a stronger competitor in 1984 than in 1983. In 1984, Agneta was more competitive against oats than was Ida. In most cases the relative competitive ability of barley increased with increasing nitrogen fertilization.

Table 6. The influence of nitrogen fertilization, density and the proportion of barley (Ida) and oats (Veli) in the mixture on the grain yield (kg/ha) of the stands in 1984. A/E is the ratio of the actual and expected yield of the 50:50 mixture. Grain yield averages within each treatment (nitrogen fertilization, density and proportions) followed by the same letter are not significantly different at the 5% level (HSD test).

Figure 5. The influence of density (plants/m²), nitrogen fertilization (kg N/ha) and proportion of the components on the relative yields (RY) of Ida barley and Veli oats, and on the relative yield totals (RYT) of the mixtures in 1984.

Figure 6. The influence of density (plants/m²), nitrogen fertilization (kg N/ha) and proportion of barley on the competitive ratio (CR) of Ida barley over Veli oats in 1984.

b-values × 10⁻³. NDI (niche differentiation index) = (Bbb/Bba)/(Bab/Baa). 1/W is the reciprocal yield of an individual plant (grain yield/plant). B0 is the reciprocal of the theoretical maximum yield of an individual, B1 describes influences of intragenotypic competition, B2 describes influences of intergenotypic competition, N is plant density and RC predicts the relative competitive ability of each genotype. p < 0.001 for B1 and B2 in each model.

Table 8. Multispecies reciprocal yield models (1/W = B0 + B1N1 + B2N2) for interactions between barley (Agneta) and oats (Veli) grown at three levels of nitrogen fertilization; b-values × 10⁻³. NDI (niche differentiation index) = (Bbb/Bba)/(Bab/Baa). 1/W is the reciprocal yield of an individual plant (grain yield/plant). B0 is the reciprocal of the theoretical maximum yield of an individual, B1 describes influences of intragenotypic competition, B2 describes influences of intergenotypic competition, N is plant density and RC predicts the relative competitive ability of each genotype. p < 0.001 for B1 and B2 in each model of barley and for B2 in each model of oats. B1 in the models of oats is not significant.

Table 9. Multispecies reciprocal yield models (1/W = B0 + B1N1 + B2N2) for interactions between barley (Ida) and oats (Veli) grown at three levels of nitrogen fertilization.*
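The multispecies reciprocal yield model of Tables 7-9 can be fitted by ordinary least squares on the reciprocal of yield per plant. The sketch below is illustrative only, not the author's original analysis: the data are synthetic, generated from hypothetical coefficients, and RC = B1/B2 follows the definition given in the table footnotes.

```python
import numpy as np

def fit_reciprocal_yield(n_own, n_other, w):
    """Fit 1/W = B0 + B1*N_own + B2*N_other by ordinary least squares.
    W: grain yield per plant; N_own, N_other: plant densities (plants/m2).
    Returns (B0, B1, B2)."""
    X = np.column_stack([np.ones_like(n_own), n_own, n_other])
    coef, *_ = np.linalg.lstsq(X, 1.0 / w, rcond=None)
    return coef

# Hypothetical, noise-free data generated from known coefficients:
rng = np.random.default_rng(0)
n1 = rng.uniform(100, 700, 60)       # density of barley (own species)
n2 = rng.uniform(100, 700, 60)       # density of oats (other species)
true = np.array([0.2, 2e-3, 1e-3])   # B0, B1 (intraspecific), B2 (interspecific)
w = 1.0 / (true[0] + true[1] * n1 + true[2] * n2)

b0, b1, b2 = fit_reciprocal_yield(n1, n2, w)
rc = b1 / b2   # relative competitive ability; RC > 1: intraspecific competition dominates
print(round(rc, 2))
```

With these illustrative coefficients the fit recovers RC = 2, i.e. each barley plant suppresses its own species twice as strongly as it suppresses oats.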
Only in 1983 was the overall intraspecific competition greater than the overall interspecific competition (NDI > 1), independent of nitrogen fertilization. In 1984 the competition was more severe in the mixture of Agneta and Veli than in the mixture of Ida and Veli (NDI Agneta/Veli < NDI Ida/Veli). The square root of the product of the interspecific competition coefficients [(Bab × Bba)^1/2] was less than the intraspecific competition coefficients of barley and oats in only two cases in 1983. In these situations a mixture of optimum proportions will yield more than both monocultures, i.e. a mixture will overyield.

Yield components

In 1983, the addition of nitrogen, a change of the total density of the stands, or growth in a mixture as compared with pure culture affected all the yield components (the number of generative shoots per plant, the 1000 grain weight and the number of grains per head) of both species to a certain extent (Tables 10-15).

Table 10. The influence of nitrogen fertilization, density and the proportion of barley (Agneta) and oats (Veli) in the mixture on the number of generative shoots per plant of Agneta barley in 1983. Shoot number averages within each treatment (nitrogen fertilization, density and proportions) followed by the same letter are not significantly different at the 5% level (HSD test). Comparison between the shoot number means of different proportions is done at different levels of density (interaction statistically significant). (Columns: proportion, density, nitrogen fertilization (kg N/ha).)

Advantages of mixtures

The results of the present experiment suggest that overyielding may occur in mixtures of oats and barley under certain conditions. Other studies of mixtures of barley and oats suggest that the yield of a mixture can be above that of the better component (Salminen 1945, van Dobben 1953, Bebawi and Naylor 1978, Taylor 1978, Jokinen 1991a). Syme and Bremner (1968) and Fejer et al.
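The NDI and the overyielding criterion used above follow directly from the fitted coefficients. A minimal sketch; the coefficient values are hypothetical, and the Bxy indexing convention (first subscript: target species, second subscript: competitor) is my reading of the table footnotes, not stated explicitly in the source.

```python
import math

def niche_differentiation_index(bbb, bba, bab, baa):
    """NDI = (Bbb/Bba) / (Bab/Baa): overall intraspecific vs
    interspecific competition across both species."""
    return (bbb / bba) / (bab / baa)

def mixture_overyields(bbb, bba, bab, baa):
    """A mixture of optimum proportions outyields both monocultures when
    the geometric mean of the interspecific coefficients is below both
    intraspecific coefficients, (Bab*Bba)^(1/2) < Bbb and < Baa."""
    g = math.sqrt(bab * bba)
    return g < bbb and g < baa

# Hypothetical coefficients (x 10^-3), chosen for illustration only:
bbb, bba = 2.0, 1.0   # barley: intraspecific, interspecific
baa, bab = 1.8, 0.9   # oats: intraspecific, interspecific
print(niche_differentiation_index(bbb, bba, bab, baa))  # NDI > 1
print(mixture_overyields(bbb, bba, bab, baa))
```

Note that, as the text stresses, NDI > 1 alone does not guarantee stable coexistence; only B1/B2 > 1 for both species does.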
(1982) found that mixture yields did not exceed those of the better component. In addition to overyielding, mixtures can be advantageous over monocultures if the yield of the mixture exceeds the mid-component, but they are not necessarily so. For example, in 1984 the actual yields of mixtures exceeded the expected in many cases; however, the results of the relative yield total of a given mixture indicated no yield advantage. Thus when the relative yield total did not exceed one, the same yield of barley and oats might have been obtained with monocultures as with mixtures, without changing the total area of land (Willey 1979). At least from field experiments the relative yield total could be assessed for the proper

Table 11. The influence of nitrogen fertilization, density and the proportion of barley (Agneta) and oats (Veli) in the mixture on the number of generative shoots per plant of Veli oats in 1983. Shoot number averages within each treatment (nitrogen fertilization, density and proportions) followed by the same letter are not significantly different at the 5% level (HSD test). Comparison between the shoot number means of different proportions is done at different levels of density (interaction statistically significant).
(Table 11 columns: proportion, density, nitrogen fertilization (kg N/ha).)

There are only a few published experimental results with barley-oats mixtures from which it is possible to calculate relative yield totals. The relative yield total of the mixture of oats and barley was close to one (1.03), as calculated by the author from the results of eight experiments conducted by Salminen (1945). This indicates no or only a very slight yield advantage. The results calculated by de Wit (1960) indicated that in general the yield of barley or oats is proportional to the relative space occupied by these crops. The relative yield totals calculated by the author from the experiments of Syme and Bremner (1968) varied from 0.90 to 1.15, with two out of seven values being lower than one. The calculated relative yield totals (1.07, 1.15) from the two experiments of Fejer et al. (1982), as well as the results of the present experiment in 1983, suggest that cropping mixtures of barley and oats may be of benefit. However, more experiments in different environments are needed to provide support for the practical use of mixtures.

In addition to possible yield advantages there are other benefits of growing barley and oats in mixture, such as prevention of lodging (de Wit 1960). The results of the present experiments in 1984 suggest that lodging may be reduced by growing mixtures. The decreased lodging of mixtures compared with monocultures may be because of shorter barley plants in mixtures (K.J.
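The relative yield totals discussed above can be computed directly from component yields. A minimal sketch with hypothetical yields, assuming the de Wit replacement-series definition implied by the text: each component's relative yield is its yield in the mixture divided by its yield in monoculture.

```python
def relative_yield_total(mix_barley, mix_oats, mono_barley, mono_oats):
    """RYT = RY_barley + RY_oats, where each RY is the species' yield in
    the mixture divided by its yield in monoculture. RYT > 1 suggests a
    yield advantage of the mixture; RYT = 1 suggests none."""
    return mix_barley / mono_barley + mix_oats / mono_oats

# Hypothetical 50:50 replacement-series yields (kg/ha):
ryt = relative_yield_total(mix_barley=2600, mix_oats=1900,
                           mono_barley=4400, mono_oats=4000)
print(round(ryt, 2))  # about 1.07, comparable to the values cited above
```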
Jokinen unpubl.). The increasing light intensity during the growth of barley plants is known at first to increase and then to reduce plant height (Briggs 1978, p. 274). Thus in the monoculture of barley, plants might shade each other more than in mixed stands, light possibly being a limiting factor especially at high levels of nitrogen fertilization and at high densities.

Table 12. The influence of nitrogen fertilization, density and the proportion of barley (Agneta) and oats (Veli) in the mixture on thousand grain weight (g) of Agneta barley in 1983. Grain weight averages within each treatment (nitrogen fertilization, density and proportions) followed by the same letter are not significantly different at the 5% level (HSD test). Comparison between grain weight means of different proportions is done at different levels of nitrogen fertilization (interaction statistically significant).

Compensation

In 1984, the dominant-suppression relationship between barley and oats was not always complete (RYT < 1). This indicates that in mixtures barley interfered with the yield formation of damaged oats more than expected, without benefitting itself. Thus barley was not flexible enough, especially at low densities. According to de Wit (1960), the ability of the undamaged component to compensate depends on the time of damage. Thus the compensation relates to the determination of the yield components and the flexibility of the plants during the course of development, as well as the total density of the stands. One has to notice that in an extreme case a competition experiment can degenerate into a spacing experiment for one component.
Competition models

Although there were no profound discrepancies between the results of the two different approaches for analysing competitive interactions between components, the regression approach provided a more flexible framework for mixture studies than the conventional replacement analysis. The regression analysis uses a model of competition that allows the yields of both species in a binary mixture

Table 13. The influence of nitrogen fertilization, density and the proportion of barley (Agneta) and oats (Veli) in the mixture on thousand grain weight (g) of Veli oats in 1983. Grain weight averages within each treatment (nitrogen fertilization, density and proportions) followed by the same letter are not significantly different at the 5% level (HSD test).

This is because the shoot growth of species adapted to high nutrient conditions (with intense shoot competition) usually responds more to increased nutrient supplies than does that of species adapted to low nutrient conditions (with intense root competition).

It is important to note that in the 1984 study the results did not fully show the effect of the nitrogen gradient on competition. This was because of the weak response of the stands to the added nitrogen, possibly due to the high release of nitrogen from the soil. The yields of oats were also depressed due to insect damage (frit fly).
The results, especially in 1983, demonstrated the effect of nitrogen fertilization on the structure of the mixed plant community. There will be a point along the gradient where Veli oats and Agneta barley can stably coexist by producing the same absolute yields (either number of seeds/plant or weight/plant). From theoretical considerations it follows that a necessary condition for stable coexistence is that the species populations are regulated in different ways (Braakhekke 1980). Thus barley and oats might be able to coexist because they are not limited by the same resources (differentiation of their realized niches). This means that in mixtures the intraspecific competition of both species is greater than the interspecific competition, which occurred at the lowest level of nitrogen fertilization in 1983. If, however, there is no such differentiation of their realized niches, or if it is precluded by the habitat (high nitrogen in 1983 and in 1984), then one competing species (in this case barley) will eliminate or exclude the other (in this case oats). Exclusion occurs when the realized niche of the superior competitor fills those parts of the inferior competitor's fundamental niche provided by the habitat, and the weak interspecific competitor lacks a realized niche when in competition with the stronger competitor (Begon et al. 1986, p. 258-260).
Although the overall intraspecific competition is greater than the interspecific competition in mixtures (NDI > 1), it does not necessarily mean that the species can stably coexist in that environment. This is because the strong interspecific competitor will gradually outcompete the weak interspecific competitor, as the regression model predicts (in 1983, high nitrogen). Thus NDI greater than one did not always express that the realized niches of both species differ from each other. The fundamental question is whether NDI is a relevant index for evaluating niche differentiation from a terminological point of view. Only when interspecific competition for both species is less significant than intraspecific competition do the species coexist (B1/B2 > 1), and coexisting competitors may then exhibit differentiation of realized niches.

Yield components

All the yield components of barley tended to associate positively with the higher yields per plant obtained from mixtures. The higher grain weight of oats in mixtures did not always compensate for a lower number of ears per plant and a lower number of grains per ear, usually leading to a lower yield per plant of oats in mixtures. The results for yield components suggest that the type of the plants in the neighbourhood (individuals of the same species or of different species) and their relative amount are also significant with respect to the formation of different yield components, and not only the total density. As a rule, the yield formation of the individual plant seems to be a rather complex phenomenon in different microenvironments.

The barley seedlings emerged first, about three days earlier than the oats.

Figure 4. The influence of density (plants/m²), nitrogen fertilization (kg N/ha) and proportion of barley on the competitive ratio (CR) of Agneta barley over Veli oats in 1984.
*b-values × 10⁻³. NDI (niche differentiation index) = (Bbb/Bba)/(Bab/Baa). 1/W is the reciprocal yield of an individual plant (grain yield/plant). B0 is the reciprocal of the theoretical maximum yield of an individual, B1 describes influences of intragenotypic competition, B2 describes influences of intergenotypic competition, N is plant density and RC predicts the relative competitive ability of each genotype. p < 0.05 for B1 and B2 in each model.

reversals of dominance.
CREDIT DERIVATIVES DISCLOSURE IN BANKS' RISK REPORTING: EMPIRICAL EVIDENCE FROM FOUR LARGE EUROPEAN BANKS

How to cite this paper: Scannella, E. (2019). Credit derivatives disclosure in banks' risk reporting: Empirical evidence from four large European banks. Risk Governance and Control: Financial Markets & Institutions, 9(2), 34-46. http://doi.org/10.22495/rgcv9i2p3

This paper analyzes the derivatives disclosure in banks' annual risk reports. The author uses content analysis to examine the qualitative and quantitative profiles of derivatives disclosure at a cross-country level, with particular reference to credit derivatives. The empirical research is conducted on a sample of large European banks. The paper also shows that there is room to improve various aspects of derivatives disclosure, and it provides some useful insights for further research. The derivatives disclosure in banks' annual risk reports has deep managerial, financial, regulatory and accounting implications at the firm and industry levels, and understanding the rationale underlying it is critical to maintaining competitive advantages in the banking industry and informational and allocative efficiency in the financial markets. Although there is substantial research on credit derivatives and financial statements in the literature, none has directly focused on credit derivatives disclosure at a cross-country level applying content analysis based on an objective evaluation approach. This paper aims to fill that gap.
Keywords: Risk Reporting, Risk Disclosure, Credit Derivative, Banking, Financial Regulation, Risk Management, Banking Risk

INTRODUCTION

Risk disclosure, and credit derivatives disclosure in particular, is a pivotal topic in banking. Attention to banking risks and risk reporting has grown enormously in recent years due to the turmoil in financial systems, the growing regulatory and accounting requirements in the banking industry, and the growing complexity of banking activity, especially of large and multi-business banks. In this paper we make an empirical analysis of credit derivatives disclosure with reference to four of the largest European banks, using content analysis as the research method. The paper shows that different aspects of derivatives disclosure can be improved, discusses some policy and theoretical implications, and provides some useful suggestions for further research. The structure of this paper is as follows. Section 2 reviews the relevant literature. Section 3 analyses the methodology that has been used to conduct empirical research on risk disclosure. Section 4 introduces credit derivatives and bank risk management.
It aims to highlight the nature and functions of credit derivatives and provide a risk management perspective. Section 5 provides an accounting perspective on credit derivatives in banking. Section 6 analyses the research design of the empirical study. Section 7 analyses and compares the results of the empirical research. Section 8 discusses the research findings. Section 9 provides some proposals for a better credit derivatives disclosure in banking and for future research. Section 10 concludes.

LITERATURE REVIEW

Derivatives disclosure is essential for banks' stakeholders to assess a bank's risk exposures and to make decisions. Adequate disclosure of banks' credit derivative exposures should not stay within the boundaries of a bank; it should be provided to all stakeholders. Risk disclosure contributes to reducing asymmetric information (Akerlof, 1970; Leland & Pyle, 1977). Disclosing information about banking risks and banks' derivative portfolios reduces information asymmetry and agency problems (Fama & Jensen, 1983; Jensen & Meckling, 1976) among stakeholders in banking. Bank managers have more information about the risks that might affect future results than other stakeholders. From this view, risk disclosure can be intended as an incentive device (Armstrong et al., 2010; Dobler, 2008) to align the interests of different stakeholders and to better perform the functions of screening, selection, and monitoring (Diamond, 1984). Risk disclosure also acts as a signal of the soundness and stability of a bank; it performs a signaling function for the market (Leland & Pyle, 1977; Ross, 1977). By disclosing more information about risks, shareholders and other stakeholders are able to correctly appreciate a bank's performance and market value (Belcredi, 1993; Linsley & Shrives, 2005; Linsley et al., 2006). Risk transparency allows users to make more informed decisions on banks' performance and strategies.
Disclosing information about banking risks gives banks the opportunity to maximize shareholders' value (Carey & Stulz, 2007). Consequently, a risk disclosure threshold has to be established by accounting standard setters and banking regulators; otherwise, the efficacy of risk disclosure would be affected by a "firm-specific" principal-agent problem. In the last decades, many studies have examined bank disclosure from different points of view, mainly accounting and financial markets efficiency perspectives. Recent studies have specifically analyzed risk reporting in banks' annual reports (Ahmed et al., 2006; Ammon, 1996; Chalmers & Godfrey, 2000; Gaetano, 1996; Malinconico, 2007; Woods & Marginson, 2004). In this perspective, risk disclosures and banking risks have to be analyzed using a holistic approach (Tutino, 2013; Tutino et al., 2011). Although there is a large quantity of research on credit derivatives and financial statements in the banking literature, none has directly focused on credit derivatives disclosure at a cross-country level, with homogeneous banking regulations, applying content analysis based on an objective evaluation approach. This paper provides insights to overcome this gap in the literature.

RESEARCH METHODOLOGY

In this paper, content analysis is used to measure the quality and quantity of credit derivatives disclosure in banking. Content analysis, as a "research technique for making replicable and valid inferences from texts to the contexts of their use" (Krippendorff, 2004), and as "a research technique for the objective, systematic and quantitative description of the manifest content of communication" (Berelson, 1952), is the methodology most used by researchers to examine and evaluate risk reporting in annual reports. Over the years, content analysis has been widely used in many research areas (Holsti, 1969; Weber, 1990).
Content analysis enables researchers to investigate, evaluate, systematize, and categorize a large amount of textual information, such as the information published in banks' risk reporting. The aim of content analysis is to organize and elicit meaning from collected data. Consequently, we argue for the appropriateness of content analysis for conducting empirical research on risk disclosure. By using a scoring model based on key disclosure parameters, this paper shows evidence that banks provide different credit derivatives risk reporting, even though they comply with a harmonized regulatory and accounting framework. Although interest in risk disclosure has increased in recent years, there are not many empirical studies that examine cross-country and industry-specific factors. In this empirical research, we employ a content analysis methodology that aims to evaluate the quality and quantity of credit derivatives disclosure in banking.

CREDIT DERIVATIVES AND RISK MANAGEMENT IN BANKING

Credit derivatives are credit risk transferring instruments. They are used by banks and other financial institutions (mainly insurance companies) for credit risk management purposes, as they provide an effective means to hedge and trade credit risk. Credit derivatives separate credit risk trading from asset trading. Credit derivatives are over-the-counter financial instruments, exchanged on a bilateral transaction scheme in which two parties (protection buyer and protection seller) decide to trade the credit risk arising from a specific asset, usually called the "reference obligation" (Chance & Brooks, 2012; Chaplin, 2010; Choudhry, 2013; Das, 2005; Tavakoli, 1998). Credit derivatives offer a flexible approach to credit risk management in banking, since they can be tailored to reflect the specific characteristics of credit risk exposures and credit risk management purposes (Dunbar, 2011; Mengle, 2007; Murphy, 2013; Steinherr, 2000).
Hedging continues to be the predominant reason for the use of derivatives. Speculation is the second reason behind the spread of credit derivatives in financial markets. Speculators use derivatives not to reduce financial risk but to potentially profit from it. They gain exposure to credit risk without the need to purchase the underlying asset. At the same time, they provide the liquidity that makes risk hedging achievable (Drago, 1998). Another important reason for the use of derivatives is arbitrage (McDonald, 2013). A critical aspect that makes credit derivative instruments very attractive to banks is that credit risk might have a huge impact on bank performance (Bomfim, 2005; Culp, 2004; Nelken, 1999; Onado, 2004, 2018). Banks are exposed to credit risk since their core business is lending, mainly in the commercial banking business, and investing in bonds issued by firms, mainly in the investment banking business. Credit risk is a serious threat for banks, and even more for the stability of the financial industry, since the interconnections among financial institutions provide a contagion mechanism by which financial crises spread, as reflected by systemic risk (Acharya & Richardson, 2009; Hull, 2018). Credit risk transfer provides many advantages to banks. It reduces the regulatory capital requirements on credit exposures. Consequently, it frees bank capital that can be used to make more loans. At the same time, the bank maintains its relationships with borrowers. Another important aspect that makes credit derivatives attractive is that they allow banks, especially small and medium banks, to diversify their credit portfolios without negotiating the asset itself (Drago, 2014; Scannella, 2013). The credit risk transfer has microeconomic and macroeconomic implications.
Since credit derivative instruments allow credit risk transfer and an overall reduction of credit risk exposure, banks may attenuate their lending standards and the monitoring of credit exposures subject to risk transfer. In this perspective, a bank has fewer incentives to control and monitor borrowers. Therefore, bank lending can increasingly be characterized by a decreasing level of accuracy towards the creditworthiness of the borrower, determining a potential increase in the overall level of credit risk in the financial industry. At the microeconomic level, we can observe that the expansion of the credit risk transfer market has contributed to the transformation of the traditional banking business model, called "originate-to-hold", into the new one, called "originate-to-distribute". According to the first business model, a bank originates loans and holds them on its balance sheet until their maturity. In the originate-to-distribute business model, instead, a bank does not perform all the functions of the previous model, but specializes mainly in the origination and servicing activities. Consequently, the previous "relationship banking" approach is replaced by a "transactional banking" approach, in which banks do not have strong incentives to establish a long-term financial tie with borrowers (Baravelli, 2011; Mottura, 2011, 2016; Ruozi, 2015; Scannella, 2011). Credit derivatives were first introduced in 1993, but they have experienced very rapid growth since then; the Basel Committee started publishing data on the credit derivatives market in December 2004. At the end of 2016, the Bank for International Settlements (2016) estimated that the total notional principal underlying outstanding credit derivatives was close to $6 trillion. It was $57.894 trillion in 2007. As a share of all OTC derivatives, credit derivatives fell from 10% to 2% (in terms of notional amounts) between end-June 2007 and end-June 2016, and from 8% to 2% (in terms of gross market value).
Briefly, credit derivatives are effective tools for credit risk management in banking. With credit derivatives, it is possible to isolate the credit risk from the risk/return profile of a financial asset. As an insurance instrument, credit derivatives allow banks to protect their credit exposures against borrowers' defaults or other credit events.

CREDIT DERIVATIVES IN BANKING: AN ACCOUNTING PERSPECTIVE

Accounting for credit derivatives is based on the accounting standards for derivatives, covered by IAS 39, which establishes principles for recognizing and measuring financial assets and financial liabilities. IAS 39 was superseded by IFRS 9 in January 2018. At the time of initial recognition, all derivatives are measured at fair value. After initial recognition, the fair value changes of derivatives are recognized in the bank's profit or loss. Special hedge accounting requirements are provided for hedging instruments (Chalmers & Godfrey, 2000; Ramirez, 2015; Rutigliano, 2011, 2016). Initially, "fair value" was defined in IAS 39 as the price for which an asset could be negotiated, or a liability settled, between knowledgeable parties in an arm's length transaction. IFRS 13 later revisited this definition. It describes fair value on the basis of an "exit price" notion and uses a "fair value hierarchy". IFRS 13 defines fair value as the price that can be obtained to sell an asset or to transfer a liability ("exit price") in an orderly transaction between market participants. IFRS 13 provides a three-level hierarchy of fair value measurements and requires the use of valuation methods that are appropriate with reference to available data. Derivatives are often measured by using market prices in an active market (fair value hierarchy: level 1).
When there are no available market prices, fair value is calculated using appropriate valuation methods: if the inputs are observable, we have level 2 of the fair value hierarchy; if the inputs are unobservable, we have level 3. Over-the-counter derivatives are usually evaluated by using measurement methodologies because there are no discernible market prices. As we have seen above, the use of derivative instruments is widespread among financial institutions. This increases the importance of accurate credit derivatives disclosure in banking, particularly with respect to the ongoing turmoil in world financial systems. An appropriate evaluation of a bank's risk exposures is possible if a bank discloses information not only on the accounting policies and practices from which financial risks arise (i.e. investments, proprietary trading, lending, funding) but also on the credit derivatives in its portfolios (Bessis, 2015; Sironi & Resti, 2008). In 2002, the European Union started a process of accounting harmonization with the aim of adopting a common accounting language. This process began with the adoption by the European Parliament and the Council of Regulation No. 1606/2002 of 19 July 2002 on the application of IAS standards, in order to harmonize and make comparable the financial information provided by European banks in their financial statements, both across time and space (Bisoni et al., 2012; Tutino, 2009, 2015). Since January 2006, European financial institutions have been disclosing information on risk exposures and derivatives in accordance with International Accounting Standards. With Directive 2001/65, the European Parliament and the Council recommended a fair value accounting scheme for the evaluation of most financial instruments, including derivatives, in the annual and consolidated statements of banks and other financial institutions. More risk reporting requirements were introduced by Basel II.
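The three-level hierarchy described above amounts to a simple decision rule. The sketch below is a simplified illustration only; the function name and boolean inputs are mine, not taken from IFRS 13.

```python
def fair_value_level(quoted_price_in_active_market, observable_inputs):
    """Simplified IFRS 13 hierarchy:
    level 1: quoted prices in active markets for identical instruments;
    level 2: no quoted price, but observable valuation inputs;
    level 3: unobservable valuation inputs."""
    if quoted_price_in_active_market:
        return 1
    return 2 if observable_inputs else 3

# An OTC credit derivative valued from observable market curves:
print(fair_value_level(False, True))   # level 2
```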
This bank capital adequacy framework consists of three pillars. For the purpose of this research, Pillar 3 is the most important one, because it aims to promote an effective market discipline mechanism in the financial markets that is mainly based on disclosure frameworks. Pillar 3 requires quantitative and qualitative information to be disclosed for each type of banking risk. It is essential to notice that both Basel II and International Accounting Standards require risks to be disclosed "through the eyes of management" and to be "consistent with the approaches and methodologies that the directors use to assess and manage the bank's risk" (Linsley & Shrives, 2005). In this section, we briefly analyzed the most important regulations and accounting standards that make up the regulatory and accounting framework of derivatives disclosure in banking. This will help discern between obligatory and voluntary disclosure in banking, and it sets the background behind banks' disclosure strategies and practices. The next section depicts the research design of the empirical investigation.

AN EMPIRICAL STUDY ON CREDIT DERIVATIVES DISCLOSURE IN BANKING: RESEARCH DESIGN

The main purpose of this study is to examine the differences among banks with reference to credit derivatives disclosure in annual reporting. In order to conduct this research, we have analyzed the information on credit derivatives in the 2015 annual statements and Pillar 3 reports of four large European banks. This is cross-country research based on one year of risk reporting; it is not a historical analysis. We decided to investigate 2015 because it is a sort of "timeline" in derivatives disclosure in banking. The banks considered in this empirical research are the largest in Europe, one for each country, ranked by market capitalization: BNP Paribas, Banco Santander, Intesa Sanpaolo, and Deutsche Bank (Table 1).
These banks share several characteristics that enhance the accuracy of the content analysis: they each have a market capitalization greater than 20 billion euro; all are global, multi-business banks; each is the most significant bank in its own country; their size calls for a "too big to fail" policy; and, with the exception of Intesa Sanpaolo, they are "global systemically important banks" (Financial Stability Board, 2015). In this paper, we propose a scoring model to evaluate credit derivative disclosure in banking. The model provides two disclosure ratios for each bank, based on key disclosure parameters, that are used to compare the quality of banks' derivative disclosure:
- the derivative transparency ratio (DTR), which gives an overview of derivative disclosure in banking;
- the credit derivative transparency ratio (CDTR), which focuses only on credit derivative information.
For the first ratio (the derivative transparency ratio) we selected 10 meaningful risk disclosure parameters: reasons to hold derivative instruments; fair value hierarchy; valuation techniques; notional amount of derivatives disaggregated by use; fair value of derivatives disaggregated by use; notional amount of derivatives disaggregated by hedge accounting category; fair value of derivatives disaggregated by hedge accounting category; notional amount of derivatives by instrument type; fair value of derivatives by instrument type; and maturity of derivative instruments. Each parameter is assigned a score of "1" or "0": a score of "1" means that the bank discloses the piece of information; a score of "0" means that the bank fails to provide any of the required information. The transparency of derivative information is calculated by dividing the bank's total score by the maximum score, so the derivative transparency ratio (DTR) equals "bank's score/bank's total maximum score".
For the second ratio (the credit derivative transparency ratio) we selected 4 meaningful risk disclosure parameters: credit derivatives by protection and portfolio type; credit risk mitigation techniques; fair value of credit derivatives; and notional amount of credit derivatives. Each parameter is again assigned a score of "1" (the bank discloses the piece of information) or "0" (the bank fails to provide any of the required information). The transparency of credit derivative information is calculated by dividing the bank's total score by the maximum score, so the credit derivative transparency ratio (CDTR) equals "bank's score/bank's total maximum score". By reading the 2015 annual reports, qualitative and quantitative data on derivative disclosure were collected and analyzed through content analysis of the published disclosure statements (annual statements and Pillar 3 reports). The analysis was performed manually, without software. In the disclosure indices, each of the 14 key parameters is weighted equally. The most important characteristic of this content analysis is the absence of any subjective evaluation: the scoring model covers qualitative and quantitative key information using an objective evaluation approach, so the disclosure indices do not involve any subjective judgment. The disclosure indices detect differences in transparency across banks. The content analysis proposed in this paper provides a scoring model based on a binary evaluation scheme ("0" or "1" score) to evaluate risk reporting; this is the most important aspect of the methodology. Furthermore, it is not based on users' perspectives, so it cannot evaluate the usefulness of risk disclosure or the level of satisfaction of users of banks' risk disclosure.
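The scoring mechanics described above can be made concrete with a short sketch. The code below is illustrative only (it is not the authors' actual procedure, and the example scores are hypothetical); the parameter names are taken from the text.

```python
# Sketch of the binary scoring model described above. A score of 1 means the
# bank discloses the item, 0 means it does not; each ratio is the bank's
# total score divided by the maximum attainable score.

DTR_PARAMETERS = [
    "reasons to hold derivative instruments",
    "fair value hierarchy",
    "valuation techniques",
    "notional amount by use",
    "fair value by use",
    "notional amount by hedge accounting category",
    "fair value by hedge accounting category",
    "notional amount by instrument type",
    "fair value by instrument type",
    "maturity of derivative instruments",
]

CDTR_PARAMETERS = [
    "credit derivatives by protection and portfolio type",
    "credit risk mitigation techniques",
    "fair value of credit derivatives",
    "notional amount of credit derivatives",
]

def transparency_ratio(scores: dict, parameters: list) -> float:
    """Bank's total score over the maximum score (one point per parameter)."""
    total = sum(scores.get(p, 0) for p in parameters)
    return total / len(parameters)

# Hypothetical example: a bank disclosing 7 of the 10 DTR items scores 0.7.
example_scores = {p: 1 for p in DTR_PARAMETERS[:7]}
print(transparency_ratio(example_scores, DTR_PARAMETERS))  # 0.7
```

Because every parameter carries equal weight, the ratios are directly comparable across banks regardless of which specific items are disclosed.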
Specifically, it leaves unanswered the question of whether risk disclosure in banking merely adds "pages" to annual reports rather than increasing transparency. More disclosure does not necessarily imply greater transparency; it is crucial to differentiate between disclosure and transparency (Beretta & Bozzolan, 2008; Freixas & Laux, 2012). The next section discusses the empirical research results in more depth.

RESULTS

In this section, we discuss the results of the empirical research conducted on the following banks: BNP Paribas, Banco Santander, Intesa Sanpaolo, and Deutsche Bank. Among the valuation techniques disclosed by BNP Paribas are volatility option models for equity derivatives. A score of 0 is assigned to the parameter "reasons to hold derivative instruments" because BNP Paribas does not disclose any information. BNP Paribas provides a disaggregation of derivatives by use and their fair value: derivatives held for hedging and derivatives held for trading purposes. Interest rate derivatives are mainly used for hedging purposes. It also discloses information on the fair value of derivatives disaggregated by hedge accounting category in note 5.b of the 2015 Annual Report, and on the fair value of derivatives by instrument type. BNP Paribas failed to provide the notional amount of derivatives disaggregated by hedge accounting category and the notional amount of derivatives by instrument type, so a score of 0 is assigned to these items. BNP Paribas discloses the fair value of derivative contracts by maturity in the section "Contractual maturities of the balance sheet"; derivative financial instruments are included in the "not determined" maturity section. Briefly, BNP Paribas' derivative transparency ratio (DTR) is equal to 0,7 (total score/bank's total maximum score = 7/10).
BNP Paribas: Credit derivative transparency ratio

BNP Paribas does not provide any information about the protection and portfolio type of credit derivatives, so a score of 0 is assigned to this item. BNP Paribas uses netting agreements to reduce the credit risk related to derivative trading; the Fédération Bancaire Française (FBF) and the International Swaps and Derivatives Association (ISDA) provide the most commonly used agreement frameworks. It also discloses information about the notional amount and the fair value of credit derivatives. Briefly, BNP Paribas' credit derivative transparency ratio (CDTR) is equal to 0,75 (total score/bank's total maximum score = 3/4). Briefly, Banco Santander's derivative transparency ratio (DTR) is equal to 0,9 (total score/bank's total maximum score = 9/10).

Banco Santander: Credit derivative transparency ratio

Banco Santander's Pillar 3 report (page 151) provides statistics about the amount (in thousands of euros) of credit derivatives, divided between bought and sold protection and by portfolio type (regulatory banking book and regulatory trading book). Banco Santander employs several methods to mitigate and reduce credit risk exposures in derivative trading, entering into framework agreements for the netting-off of asset positions (such as ISDA Master Agreements) and requiring the provision of collateral for non-payment. Banco Santander also discloses information regarding the notional amount and the fair value of credit derivatives. Briefly, Banco Santander's credit derivative transparency ratio (CDTR) is equal to 1 (total score/bank's total maximum score = 4/4). Intesa Sanpaolo does not disclose any information on the notional amount and fair value of derivatives by instrument type, or on the maturity dates of derivative instruments; consequently, a score of 0 is assigned to these items. Briefly, Intesa Sanpaolo's derivative transparency ratio (DTR) is equal to 0,6 (total score/bank's total maximum score = 6/10).
Intesa Sanpaolo mitigates its exposure to OTC derivatives by using two techniques: bilateral netting agreements (entering into ISDA agreements) and collateral agreements covering OTC derivative transactions. Intesa Sanpaolo discloses information regarding the notional amount and the fair value of credit derivatives. Briefly, Intesa Sanpaolo's credit derivative transparency ratio (CDTR) is equal to 1 (total score/bank's total maximum score = 4/4). Deutsche Bank holds derivatives for different reasons: to mitigate its market risks in asset and liability management (hedging derivatives); to earn profits in derivatives markets (trading derivatives); and to meet customers' risk management needs. Deutsche Bank uses different types of derivatives. Deutsche Bank provides details about the fair value hierarchy in note 14, "Financial instruments carried at Fair value", to the consolidated financial statements for the year 2015. The financial instruments are classified into three levels of the fair value hierarchy:
- level 1: exchange-traded derivatives, equity securities traded on active markets, and government bonds;
- level 2: instruments valued using observable market data, including many OTC derivatives and CDOs;
- level 3: instruments valued using market data that is not directly observable; level 3 positions include complex OTC derivatives, highly structured bonds, illiquid asset-backed securities, distressed debt, illiquid CDOs, some private equity placements, illiquid loans, some municipal bonds, and many commercial real estate loans.
Most of Deutsche Bank's derivative positions are classified within level 2 of the fair value hierarchy. As regards valuation techniques, Deutsche Bank does not disclose the main techniques used for each product type, so a score of 0 is assigned to this item.
Deutsche Bank provides a disaggregation of derivatives by use: derivatives held for hedging and derivatives held for trading purposes. Deutsche Bank also discloses the fair value of derivatives disaggregated by hedge accounting category in note 37 of the 2015 Annual Report. Deutsche Bank does not provide any information about the notional amount of derivatives disaggregated by use or by hedge accounting category, nor the notional amount and fair value of derivatives by instrument type, so a score of 0 is assigned to these items. On the other hand, Deutsche Bank discloses the fair value of derivative contracts by maturity in the section "Maturity Analysis of Assets and Financial Liabilities". Briefly, Deutsche Bank's derivative transparency ratio (DTR) is equal to 0,5 (total score/bank's total maximum score = 5/10).

Deutsche Bank: Credit derivative transparency ratio

Deutsche Bank's Pillar 3 report (page 130) discloses the exposures of credit derivative transactions used for hedging, divided between bought and sold protection and split into the regulatory banking book ("used for own credit portfolio") and the regulatory trading book ("acting as intermediary"). Deutsche Bank employs mainly two credit risk mitigation techniques to reduce credit risk on derivative exposures: netting agreements (for exchange-traded and OTC derivatives) and collateral arrangements. Deutsche Bank discloses information regarding the notional amount and the fair value of credit derivatives. Briefly, Deutsche Bank's credit derivative transparency ratio (CDTR) is equal to 1 (total score/bank's total maximum score = 4/4).

Research findings: Summary

This subsection briefly summarizes the research findings. Half of the banks in the sample disclose the reasons for the use of derivatives in their 2015 annual reports. All banks provide information about the fair value hierarchy. Most banks (3 out of 4) disclose information about their valuation techniques.
All banks disclose the fair value of derivatives disaggregated by use and by hedge accounting category. Half of the sample provides information about the notional amount of derivatives disaggregated by use and hedge accounting category, and about the maturity of derivatives. Table 10 summarizes the total scores and the derivative transparency ratios of the four banks. Banco Santander shows the highest derivative transparency ratio, meaning that its disclosed derivative information is broad and easily accessible to all users. The bank could disclose the maturity dates of its derivatives to improve its level of transparency further. BNP Paribas is the second bank ranked by derivative transparency ratio. Its score of 0,7 shows that BNP Paribas has room to improve its derivative transparency: disclosure could be improved by providing the notional amount of derivatives disaggregated by hedge accounting category and by instrument type, and by better explaining the reasons for its use of derivatives. Intesa Sanpaolo shows a derivative transparency ratio of 0,6, indicating room to increase its level of derivative transparency. Its disclosure could be improved by providing more information about the notional amount and fair value of derivatives by instrument type and about their maturity dates; as with BNP Paribas, there is also room to improve the explanation of the use of derivatives. Deutsche Bank shows the lowest derivative transparency ratio. This score shows that Deutsche Bank could significantly improve its level of derivative disclosure, in particular by providing more information about the notional amount and fair value of derivatives, as well as the valuation techniques used for them. Table 11 summarizes the total scores and the credit derivative transparency ratios of the four banks; it suggests that the level of transparency regarding credit derivatives is very high.
Most of the banks in the sample (3 out of 4) disclose information about credit derivatives by protection and portfolio type, while all banks provide information about credit risk mitigation techniques and about the notional amount and fair value of credit derivatives. Banco Santander, Intesa Sanpaolo, and Deutsche Bank each have a credit derivative transparency ratio equal to 1, meaning that their credit derivative disclosure is very comprehensive. BNP Paribas shows a lower ratio; it could improve its disclosure by providing information about the amount of credit derivatives by protection and portfolio type. Taking into account the fair value of derivatives and the total assets of the four banks in the sample (as stated in their annual reports), Deutsche Bank has the highest percentage of derivatives on total assets (more than 30%), while Intesa Sanpaolo has the lowest (5,6%). Banco Santander holds derivatives worth 84,451 million euro (6,30% of its total assets), and BNP Paribas holds derivatives worth 354,687 million euro, representing 17,78% of its total assets.

DISCUSSION

This empirical research has evaluated the quality of derivative disclosure in the 2015 annual statements and Pillar 3 reports of four European banks: BNP Paribas, Banco Santander, Intesa Sanpaolo, and Deutsche Bank. By reading the annual reports and applying a scoring model based on key disclosure parameters, the paper finds that qualitative and quantitative risk information was disclosed by the banks in different ways, even though they face a level playing field in terms of regulation and accounting standards. The research findings suggest that derivative disclosure can be improved. Given the wide diffusion of derivatives in banking, there is a clear need to improve disclosure practices by providing qualitative and quantitative information about banks' derivative activities, portfolios, policies, and strategies.
Meaningful and accurate information provides an important basis for the decision-making processes of banks' stakeholders, for investors' understanding of risk exposure in banking, and for the smooth functioning of financial markets. To correctly understand and appreciate bank performance, investors need information on two critical dimensions of credit derivative disclosure: derivative use and hedging strategies. In this sense, the notes to the accounts in the annual reports play a crucial role. Inadequate or incorrect derivative disclosure has many negative effects on investors, such as limited knowledge of derivative counterparties and of credit and liquidity risk, limited ability to evaluate the effectiveness of hedging, and underestimation of risk exposures not reported on balance sheets. Conversely, sound derivative disclosure reduces information asymmetry and agency problems: outside stakeholders have more information to take into account in their decision-making processes. Increased risk disclosure would help stakeholders in their investment decisions, although it is arduous to use the disclosure to verify a bank's risk exposure or risk appetite (Woods & Marginson, 2004). In addition, risk disclosure is also a way to reduce the agency problems that arise from a divergence of interests between principals and agents (Fama, 1980; Fama & Jensen, 1983; Jensen & Meckling, 1976; Ross, 1973), and to increase externalities in financial reporting (Foster, 1980). Risk disclosure, and derivative disclosure in particular, is also connected to the bank's cost of capital: risk disclosure might result in a reduced cost of capital (Botosan, 1997). This empirical investigation outlines some key characteristics of derivative reporting in banking. Risk disclosure is largely limited to compliance with legal requirements.
Banks show remarkable differences in their reporting even though they adopt common accounting and regulatory standards. In this perspective, the adoption of standardized measures and reports could create the right conditions for achieving comparability. The harmonization process has shown that the discretion left to European member states in creating country-specific regulations allowed many discrepancies between the financial statements of European banks to persist. Such differences in risk reporting can also be analyzed within the "signaling" approach (Leland & Pyle, 1977; Ross, 1977), which suggests that banks, particularly those with good performances, may prefer to differentiate themselves from one another. Even though disclosure rules are homogeneous across European countries, there are important differences in the disclosure indices among the banks in the sample. This evidence suggests that there is typically a voluntary element to risk disclosures, which might be the result of different information disclosure strategies: banks develop and implement disclosure strategies that lead to a firm-specific combination of mandatory and voluntary disclosure. Moreover, according to Lev (1992), voluntary disclosure may change stakeholders' expectations about a bank's market value. These findings are consistent with the view that firms provide voluntary disclosures for three main reasons: to reduce the perception of firm risk, to build a reputation for transparency, and to address the shortcomings of mandatory reporting (Graham et al., 2005). Furthermore, comprehensive maturity disclosure of derivative contracts (contractual and expected maturity), on both the asset and liability sides, is important for stakeholders because of the poor reporting of the cash flow effects of derivatives and the fact that a derivative asset can turn into a liability during the holding period.
Despite the improved disclosure over the years, there are still important differences across large banks in Europe regarding the type, features, and usefulness of the information disclosed about their derivative strategies.

PROPOSALS FOR A BETTER CREDIT DERIVATIVE DISCLOSURE IN BANKING

This empirical research provides evidence that there is significant room for disclosure improvements in banking. Disclosure of the effects of hedging strategies on the bank's performance, the effectiveness of hedging strategies and objectives, the costs of hedging, and the risk management policies should be enhanced. In particular, derivative and hedging disclosure could be better integrated with other risk disclosures in banking. Derivative disclosure and, in a wider perspective, risk disclosure in banking lack a holistic view. The adoption of a holistic perspective would likely enhance derivative disclosure on the interconnection of different risk factors and may also help readers better appreciate the effectiveness of risk management policies and strategies. Derivative disclosure also lacks an adequate forward-looking perspective (e.g. scenario analyses and simulations, risk sensitivity analysis, expected and unexpected potential losses on derivative exposures) that might stimulate the adoption of a longer-term rather than a short-term investment perspective. To make clear the purpose of derivatives and hedging strategies in banking, derivative disclosure should include details on the following aspects: the underlying risk factors of derivative instruments; the nature and purpose of embedded derivatives; the distinction between hedging and trading derivatives; hedging strategies and techniques; profits and losses on derivative exposures; the impact of derivative exposures on the bank's cash flows and income; and an explanation of the methodologies used by the bank to determine the fair value of derivatives.
In particular, value at risk (VaR) can be an informative measure for predicting the variability of trading revenues and comparing the risk exposures of banks' trading portfolios (Jorion, 2002). One element noted in the course of the analysis concerns the disclosure of the notional amount of derivatives. Every bank in the sample discloses this kind of information, but it is important to note that the notional amount of derivatives alone is not sufficient to understand a bank's risk exposure and derivative portfolios. To increase the usefulness of such information, it should be disaggregated as follows: by risk category (foreign currency, interest rate, commodity, and so on), nature (hedging or trading), accounting method (cash flow, fair value, net investment), long versus short exposures, type of instrument, and expected losses and gains. Finally, derivative disclosure in banking can be improved in the near future to better satisfy the growing demand for transparency from investors and the growing accounting and regulatory requirements of national and international banking authorities and accounting standard setters. In brief, despite the progress observed in recent years, there is still an information gap between the disclosure that users require for analytical purposes and the disclosure provided by banks. It is important to note that the international accounting standard for derivatives (IAS 39) was replaced in 2018 by the new IFRS 9, which will stimulate future research in this field. Furthermore, the Pillar 3 disclosure requirements have recently been modified by the Basel Committee on Banking Supervision (2015, 2017). These changes will have significant qualitative and quantitative implications for banks' annual derivative reports.
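The disaggregation of notional amounts suggested above can be sketched in code. The positions and figures below are entirely invented for illustration; only the grouping keys (risk category, nature, accounting method, long versus short) come from the text.

```python
# Hypothetical sketch of disaggregating notional amounts by risk category
# and nature (hedging/trading), as recommended in the text. All positions
# and euro figures are invented for illustration.
from collections import defaultdict

positions = [
    # (risk category, nature, accounting method, long/short, notional, EUR m)
    ("interest rate", "hedging", "fair value", "long", 1200.0),
    ("interest rate", "trading", None, "short", 800.0),
    ("foreign currency", "hedging", "cash flow", "long", 450.0),
    ("commodity", "trading", None, "long", 150.0),
]

# Aggregate notional amounts by (risk category, nature).
notional_by_category = defaultdict(float)
for risk, nature, _method, _side, notional in positions:
    notional_by_category[(risk, nature)] += notional

for (risk, nature), total in sorted(notional_by_category.items()):
    print(f"{risk:16s} {nature:8s} {total:10.1f}")
```

A table built this way lets a reader see at a glance how much of the notional exposure is hedging versus trading within each risk category, which the headline notional figure alone cannot convey.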
CONCLUSION

The use of derivatives is widespread across large banks, and they can be a significant source of systemic risk in the financial industry (Acharya & Richardson, 2009; Masera, 2009). The credit derivatives market has grown extraordinarily since 1993, and the most important credit derivative instrument is the credit default swap. In addition, the use of derivatives in the banking industry has continued to rise after the outbreak of the financial crisis (Bank for International Settlements, 2016). Recent developments in financial markets and in regulatory frameworks at the European level place more emphasis on risk reporting in banking, and the financial crisis and the adoption of a new bank resolution regulation have significantly increased the demand for better risk disclosure. The aim of this research was to compare derivative disclosure among four large European banks, ranked by market capitalization. The derivative transparency ratio and the credit derivative transparency ratio of the four banks provide empirical evidence that derivative disclosure could be improved in the annual statements and Pillar 3 reports; in particular, there is still room for improvement in the explanations of the use of derivatives and of hedging strategies. The conclusion to be drawn from the research is that the examined banks provide derivative disclosure in markedly different ways. We expect that risk disclosure in banking will increase after the introduction of the new version of the Pillar 3 disclosure requirements and the new IFRS 9. Some limitations of this empirical research should be mentioned. The paper is based only on the 2015 annual reports of four large European banks; further investigations could extend the period of analysis and enlarge the sample of large European banks. The content analysis proposed in this paper is based on an objective evaluation of risk disclosure obtained by reading the annual bank reports.
The scoring model uses a binary scheme to evaluate each key disclosure parameter. This is the main restriction of the methodology; on the other hand, this purely objective method attenuates the subjectivity that often affects content analysis. More risk disclosure parameters could be taken into account within a more granular scoring model to improve the quality of the methodology. Further research could overcome these limitations.
STUDY PROTOCOL Open Access

Study protocol: Follow-up home visits with nutrition: a randomised controlled trial

Background

Geriatric patients are at high risk of re-admission after discharge. Pre-existing nutritional risk among these patients is of primary concern, and former nutritional intervention studies have been largely ineffective. None of these studies has included individual dietary counselling by a registered dietician or has considered competing medical conditions in the participants. A former randomised study showed that comprehensive discharge follow-up in geriatric patients' homes by general practitioners and district nurses was effective in reducing the re-admission risk in the intervention group compared to the control group; that study did not include a nutritional intervention. The purpose of this study is to assess the combined benefits of an intervention consisting of discharge follow-up in geriatric patients' homes by a general practitioner and a registered dietician.

Methods/design

This single-blind randomised controlled study will recruit 160 hospitalised geriatric medical patients (65+ y) at nutritional risk. Participants will be randomly allocated to receive in their homes either 12 weeks of individualised nutritional counselling by a registered dietician complemented with follow-up by general practitioners, or 12 weeks of follow-up by general practitioners alone.

Discussion

This trial is the first of its kind to provide an individual nutritional intervention combined with follow-up by a general practitioner as an intervention to reduce the risk of re-admission after discharge among geriatric medical patients. The results will hopefully help to guide the development of more effective rehabilitation programmes following hospital admission, which may ultimately lead to reduced health care costs and to improvements in mobility, independence and quality of life for geriatric patients at nutritional risk.
Trial Registration

ClinicalTrials.gov NCT01249716 (2010)

Background

Undernutrition is common in old people admitted to hospital, and nutritional state often deteriorates further during the hospital stay [1]. Therefore, at discharge a large proportion of old patients will still be undernourished or at nutritional risk. A recent study among 2076 old rehabilitation patients (mean age 80.6 y) found that 85% were at risk of undernutrition (MNA 17-23.5) or undernourished (MNA < 17) according to the Mini Nutritional Assessment (MNA), and that length of stay was higher in those two groups compared with the well-nourished (p < 0.001), by 18.5 and 12.4 days respectively [2]. An older study among old people (mean age 81 y) discharged to their own homes found that those with empty refrigerators were readmitted more frequently, and three times sooner, than those whose refrigerators were not empty [3]. Potentially, the period after discharge is the most important time to intervene, because hospital stays are generally short and getting shorter. Further, according to the Resolution of the Council of Europe, patients in need of nutritional support should receive such treatment before admission (where possible), at the earliest opportunity during the hospital stay, and after discharge [4]. In spite of this, there is a dearth of published evidence of benefit or harm. To our knowledge, six studies have assessed the benefits of oral nutritional support to geriatric patients at risk of undernutrition or already undernourished, initiated in relation to discharge [5-11]. In most studies a positive effect of the intervention was found on energy and nutrient intake and on nutritional status, and in some also on functional status. In contrast, the effect on rehabilitation capacity, quality of life and survival was very limited. One explanation for the limited effect observed could be that the length of the majority of the studies was relatively short, 4-8 weeks.
This may be a problem since, for example, it can be seen from the data presented by Miller and co-workers [10] that participants in both the intervention and control groups continued to lose weight, both during the 6-week intervention and in the 6 weeks after. Another explanation for the limited effect could be the high number of re-admissions, found especially in the studies among medical patients; this may have worsened the outcome for an already very frail population. Inappropriate medical treatment often has inadvertent effects, and a considerable number of admissions are attributable to inappropriate medical treatment that could be avoided. A former randomised Danish study showed that comprehensive discharge follow-up in geriatric patients' homes by general practitioners (GPs) and district nurses reduced the re-admission risk in the intervention group compared to the control group after 12 weeks (29 vs. 39%, p = 0.044) [12]. The main focus in that study was the GPs' follow-up on hospital treatment and medications, with no special emphasis on nutrition. A third explanation for the limited effect could be the relatively low level of compliance with the oral nutritional supplements reported in some of the earlier studies [5,6,9]. None of these studies included individual goal setting, energy-dense menus, or counselling focussing on nutritional risk factors, i.e. the expertise of a registered dietician (RD). All in all, a comprehensive approach to nutrition support, rather than commercial oral nutritional supplements alone, is likely to be required to improve nutritional status and prevent re-admissions, and hence to impact positively on functional outcomes and quality of life. The purpose of this study is to assess the combined benefits of an intervention consisting of discharge follow-up in geriatric patients' homes by a GP and an RD.
Methods/design

Design

This study is designed as a randomised controlled trial comparing discharge follow-up in patients' homes by a GP versus discharge follow-up in patients' homes by a GP and an RD. Patients are eligible for this study when they are 65+ years old and at nutritional risk according to the level 1 screen in NRS2002 [13]. The primary outcome parameter will be the prevalence of re-admissions in the intervention and control groups. Secondary outcomes will be changes in body weight, muscle strength, quality of life, and rehabilitation capacity.

Feasibility of recruitment and sample size

An earlier study has shown that 50% of the elderly hospital population is at nutritional risk according to the level 1 screen [14]. For a clinically relevant difference of 10% in re-admissions, an expected drop-out rate of 12% (based on [12]), a statistical significance level of 0.05 and a power of 80%, two groups of 80 patients are calculated to be necessary. A pilot study has shown that inclusion of up to 10 patients at nutritional risk per week is feasible. Taking into account an expected refusal rate of 30% at inclusion and a loss to follow-up of 10% during the 12-week intervention, we aim to include two groups of 90, to be reached in approximately 6 months.

Randomisation

Patients will be randomised after discharge, right before the baseline assessment. Participants, the GPs and RDs (RLS and KT-J), the principal investigator (AMB) and the research assistants (BSH, SK) are not blinded to the intervention. Before starting the analysis, the principal investigator will be re-blinded to patients' group assignment.

Population, inclusion and exclusion criteria

All elderly patients (65+ years of age, living in one of three municipalities (Herlev, Rødovre or Gladsaxe)) hospitalised for a minimum of two days at the wards of geriatric medicine of the University Hospital of Herlev will be screened by a research assistant for nutritional risk.
Patients will be excluded from the study if they: suffer from senile dementia or terminal disease; cannot understand the Danish language; are residing in nursing homes; or are not able or willing to give informed consent. Nutritional status Patients are eligible for this study if they are identified as at nutritional risk according to the following criteria in the level 1 screen of NRS2002 [13]: ▪ Is Body Mass Index (BMI in kg/m2) < 20.5? and/or ▪ Has the patient lost weight within the last 3 months? and/or ▪ Has the patient had a reduced dietary intake in the last week? and/or ▪ Is the patient seriously ill? (e.g. in intensive therapy) ▪ The nutritional risk will be confirmed by the research assistant by means of medical records. Discharge follow-up in all patients' homes by GPs The follow-up consists of three contacts, conducted approximately one, three and eight weeks after discharge in both control and intervention patients. The contacts are guided by an agenda (based on [12]): ▪ Checking the discharge letter for specific recommended paraclinical or clinical follow-up ▪ Checking the need for adjustment of medication ▪ Checking the family's medical cabinet ▪ Checking the general health status (nutrition (vitamin D), physical activity, alcohol, continence, depression, dementia, and so on) The contacts take place either in the GP's clinic or as a home visit, depending on the patient's overall condition. Patients randomised to nutritional intervention The research registered dieticians (RLS and KT-J) will perform a comprehensive nutritional assessment at the first home visit, as a basis for developing a nutrition care plan consistent with estimated nutritional requirements and nutritional rehabilitation goals. Basal metabolic rate will be assessed by means of the Harris-Benedict equation, and a factorial method, accounting for weight-gain factors where relevant, will be used to estimate the total energy and protein requirements for each patient (based on [15]).
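The energy-requirement estimation described above (Harris-Benedict basal metabolic rate scaled by a factorial adjustment) can be sketched as follows. The classic Harris-Benedict coefficients are used; the activity/stress and weight-gain factor values applied in the actual study are defined in [15] and are not reproduced here, so the factor in the example is a hypothetical placeholder.

```python
def harris_benedict_bmr(sex: str, weight_kg: float, height_cm: float, age_y: float) -> float:
    """Basal metabolic rate (kcal/day) from the classic Harris-Benedict equations."""
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.775 * age_y
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_y

def total_energy_requirement(bmr: float, factor: float) -> float:
    """Factorial method: BMR multiplied by an activity/stress (and weight-gain) factor."""
    return bmr * factor

# Hypothetical 80-year-old woman, 70 kg, 165 cm, with an assumed factor of 1.3
bmr = harris_benedict_bmr("female", 70, 165, 80)
print(round(bmr, 1), round(total_energy_requirement(bmr, 1.3), 1))
```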
To assess dietary intake, the RDs will perform a standardised dietary interview with each participant to determine total energy and protein intake at each visit. Strategies for achieving energy and protein requirements will include dietary counselling with attention to nutritional risk factors, timing, size and frequency of meals, recommendations for nutrient-dense foods and drinks, and provision of leaflets with information. Supplementation with energy- and protein-dense meals-on-wheels, prescription of commercial oral nutritional supplements as well as vitamin D, calcium and other vitamins and minerals will also be considered to achieve optimal nutritional status. All in all, the RDs will perform three home visits to provide dietetic care and maximise participants' nutritional status by way of reviewing the nutrition care plan, dietary counselling, motivation and education, monitoring participant weight, and ensuring energy and protein requirements are achieved. If considered relevant, the participants will receive short follow-up consultations by telephone from the RDs in-between the home visits, in order to give advice and to stimulate compliance with the proposed nutritional intake. At least one counselling session will take place together with the patient's GP, in order to discuss the treatment, either in the patient's home or at the GP's clinic. Procedure After obtaining patients' informed consent (either at the hospital or right after discharge), an inventory will be made of possible confounders. This includes the following characteristics: ▪ Socio-demographic data (age, gender) ▪ Medical diagnosis ▪ New Mobility Score (assessment of mobility before admission (total score 0-9)) [16] ▪ Additional discharge interventions (e.g. outgoing hospital teams, discharge follow-up phone calls, etc.)
▪ Prescription/use of commercial oral nutritional supplements ▪ Prescription of vitamin D supplements ▪ Prescription of rehabilitation in the form of physiotherapy After 12 weeks participants will be contacted via telephone and mail to organise the follow-up assessment. If there is no response, the research assistants will contact the hospital to check for a possible re-admission. Outcome parameters Outcome parameters will be measured in the participants' homes as soon as possible after discharge (t = 0) and at +12 weeks (t = 12). The primary outcome is the prevalence of re-admissions. All outcome parameters that will be measured are listed below. Unless otherwise stated, the data are gathered by the research assistants. Re-admissions (t = 12) A register-based evaluation of re-admissions will be done after 12 weeks. Data on admission to hospital will be based on the National Patient Register. Information about the number of days to first re-admission and the number of days spent in hospital will also be collected from the Register. Nutritional status (weight, height, BMI) (t = 0 and t = 12) Weight is measured with patients wearing light indoor clothes and no shoes. Information about weight will also be obtained by the RDs during the visits to the intervention group. BMI is calculated as actual weight in kilograms divided by the square of height in meters. As measurement of height is often not feasible in this chronically ill, old and frail population, data on height will be retrieved from self-reported height. Dietary intake (t = 0 and t = 12) Dietary intake will be assessed by means of a 4-day dietary record. Participants will receive instructions from the research assistants on how to fill in the dietary record. They will receive the dietary records in advance of the visits at t = 0 and t = 12. At the visits the finalised records will be inspected and ambiguous entries clarified.
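The BMI computation described above, together with the NRS2002 level 1 cut-off of 20.5 kg/m2, can be sketched in a few lines (function name illustrative):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight in kilograms divided by the square of height in meters."""
    return weight_kg / height_m ** 2

# A BMI below the NRS2002 level 1 cut-off of 20.5 kg/m2 flags nutritional risk
print(round(bmi(70, 1.70), 1))   # 24.2
print(bmi(55, 1.70) < 20.5)      # True
```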
The intake of energy and nutrients will be calculated by means of a computer program based on the Danish food composition table (available at: http://www.foodcomp.dk). Hand grip strength (t = 0 and t = 12) Hand grip strength (in kg) will be measured with a Jamar 5030J1 Hydraulic Hand Dynamometer. Participants will be seated with forearms rested on the arms of the chair. They are asked to perform three maximum-force trials with their dominant hand, using the second handle position. The maximal grip score from the three values will be used. Chair stand (t = 0 and t = 12) To test physical performance, the participants are asked to fold their arms across their chest and to stand up and sit down on a chair, without pushing off with their arms, as many times as possible during 30 seconds. The arms may be used for assistance or for safety if needed [17]. Cognitive performance (t = 0 and t = 12) The Mini Mental State Examination (MMSE) will be administered to assess the cognitive status of the participants. The MMSE is a widely used and easily administered test of cognitive status. It consists of 11 tasks and is graded to assign old people a score in the range of 0 to 30. Participants who have difficulties with seeing, hearing or writing will not be asked to complete the MMSE test. Activities of Daily Living (t = 0 and t = 12) The ability to participate in activities of daily living (ADL) will be assessed using the validated de Morton Mobility Index (DEMMI) [18]. The DEMMI is a 15-item one-dimensional instrument that measures mobility across the spectrum from bed-bound to independent mobility. The raw score total (0-19) must be converted to a DEMMI score (0-100, where 100 is independent mobility).
Disability and tiredness in daily activities (t = 0 and t = 12) Disability is measured by a validated scale (the Mob-H Scale) by asking questions about the need for help in the following six activities: (1) transfer, (2) walk indoors, (3) get outdoors, (4) walk out of doors in nice weather, (5) walk out of doors in poor weather, and (6) manage stairs. Tiredness in daily activities is measured by asking the participants if they feel tired after performing the same six activities [19]. Health-related quality of life (t = 0 and t = 12) Quality of life is measured by questions from the SF-36 regarding physical functioning; role-physical; bodily pain; general health; vitality; social functioning; role-emotional; mental health; and health transition (http://www.sf-36.org). Rehabilitation capacity (t = 0 and t = 12) The Functional Recovery Score (FRS) is used to assess restoration of function after discharge. The eleven-item questionnaire comprises three main components: basic activities of daily living (BADL), assessed by four items; instrumental activities of daily living (IADL), assessed by six items; and mobility, assessed by one item. Basic activities of daily living comprise 44 percent of the score, instrumental activities of daily living comprise 23 percent, and mobility comprises 33 percent. Complete independence in basic and instrumental activities of daily living and mobility results in a score of 100 percent [20]. Participants will receive instructions from the research assistants on how to fill in the questionnaire. They will receive the questionnaire in advance of the visits at t = 0 and t = 12. At the visits the finalised questionnaire will be inspected and ambiguous entries clarified. Organisation The primary investigator is responsible for the informed consent procedure, final participant selection, measurements, analysis and reports. The primary investigator will be assisted by two research assistants and two RDs.
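The FRS weighting described above (BADL 44%, IADL 23%, mobility 33%) can be sketched as a weighted sum. The item-level scoring rules are defined in [20]; the inputs here are simply the fraction of independence achieved in each component, so the interface is hypothetical.

```python
def functional_recovery_score(badl: float, iadl: float, mobility: float) -> float:
    """FRS as a percentage: each component is a fraction of independence in [0, 1],
    weighted 44% (BADL), 23% (IADL) and 33% (mobility)."""
    for frac in (badl, iadl, mobility):
        if not 0.0 <= frac <= 1.0:
            raise ValueError("component fractions must lie in [0, 1]")
    return 100 * (0.44 * badl + 0.23 * iadl + 0.33 * mobility)

# Complete independence in all three components gives a score of 100 percent
print(round(functional_recovery_score(1.0, 1.0, 1.0), 1))
```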
Data flow will be controlled by the primary investigator. Data entry and control will be conducted by the research assistants under supervision of the investigator. The primary investigator is responsible for the data cleaning and analysis. Statistical analysis All statistical analyses will be performed using SPSS for Windows. Data will be entered in EXCEL and subsequently exported into SPSS software for analysis. The primary analysis for this study will be undertaken using intention-to-treat principles. 95% confidence intervals will be calculated for the differences in percentages and medians. Independent-samples t-test, Mann-Whitney U test and Chi-square test of association will be used as appropriate to compare groups at baseline. Ceiling and floor effects will be taken into account in the analysis of the questionnaires. In order to test the independent contribution of the intervention to the outcome variables, multivariate regression analysis will be used to adjust for possible confounders. Specifically, the concordance between the GP's knowledge of the medical treatment and what the participant is actually taking, plus the degree to which the GP has implemented the recommended follow-up as described in the hospital discharge letter, in the intervention and control groups respectively, will be used. The analysis will be undertaken by the principal investigator blinded to the randomisation. Ethics The protocol has been sent to the Danish Ethical Board, which has concluded that approval is not needed and that the project can be carried out as described. Discussion This project is the first to combine individualised nutritional intervention with intervention from GPs. We have chosen not to use strict exclusion criteria, but to include all eligible patients even though they suffer from a variety of (chronic) diseases. Their homogeneity stems from their age (65+ years old), nutritional risk and background of disease (non-surgical).
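The planned chi-square comparison of re-admission prevalence between the two groups is a standard 2x2 test; a minimal sketch follows (the counts in the example are hypothetical, not study results).

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Chi-square statistic (1 df, no continuity correction) for the 2x2 table
    [[a, b], [c, d]], e.g. re-admitted / not re-admitted by treatment group."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        raise ValueError("a marginal total is zero")
    return n * (a * d - b * c) ** 2 / denom

# Hypothetical counts: 23/80 re-admitted (intervention) vs. 31/80 (control)
stat = chi_square_2x2(23, 57, 31, 49)
print(round(stat, 3), stat > 3.841)  # 3.841 is the 5% critical value for 1 df
```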
If the results of a broad study like this one are positive, it justifies wide implementation, because the included group is representative of a mixed elderly population; in contrast, selection of a more specific group would make the intervention less applicable to other patient groups. In Denmark it is recommended that the nutritional management of (geriatric) patients involves the provision of high-energy, high-protein diets and individualised nutritional therapy [15]; however, the evidence for this is limited. A review of the literature highlighted that most nutrition support provided for geriatric patients is based on the provision of a standard volume of commercial oral nutritional supplements rather than individualised therapy [21]. Specifically, former nutritional intervention studies among geriatric medical patients after discharge have all used commercial oral nutritional supplements [5][6][7][8]. A comprehensive approach to nutritional therapy combining individual education, motivation and counselling, dietary modification and supplementation offered by an RD differs from previous work, but was deemed necessary given the limited evidence that commercial oral supplements alone can improve outcomes in this frail group. Strength training is an effective intervention for improving physical functioning in older people [22]. In Denmark it is part of the legislation to offer some patients rehabilitation in the form of physiotherapy. In this study, it was therefore decided not to include training as a specific part of the comprehensive intervention, but instead to register whether there is a difference in its prevalence between the two groups. Most of the former discharge studies may have had an intervention time that was too short to have a realistic chance of detecting differences in morbidity, functional status or quality of life.
According to a recent Cochrane Review, future trials need to have sufficient statistical power and length of follow-up to be able to detect any beneficial effects [23]. The follow-up period of 12 weeks was therefore chosen because this seems a reasonable time to achieve benefits of nutritional intervention in older people at nutritional risk. Further, in a former study of discharge follow-up in patients' homes by GPs, a significant difference in the number of re-admissions was seen after 12 weeks [12]. Weaknesses In the Danish study offering discharge follow-up in patients' homes [12], the inclusion criterion was age 78+ years. This means that the participants in this study will be younger, probably less frail and maybe less susceptible to re-admissions. However, in the former nutritional intervention studies among 65+ year old geriatric patients at risk of or already undernourished, the prevalence of re-admissions has been high, up to 56% [5]. Also, in the Danish study offering discharge follow-up in patients' homes, district nurses were part of the intervention [12]. Due to structural changes in the involved municipalities this is no longer possible. To compensate for this, regular contact with district nurses will be arranged. In this study there are possibilities of contamination between intervention and control groups, since some GPs may be involved in both groups. Since the aim of the study cannot be blinded to the GPs, the chosen method may raise the GPs' attention to nutritional aspects in both intervention and control participants. On the other hand, the GPs will be paid by the project for their discharge contacts, since such contacts are not yet obligatory. This fact may bias this study towards a better effect than can be obtained in daily practice.
Conclusions It is important to provide adequate rehabilitation after hospitalisation, restoring people as close to premorbid function as possible, so that physical decline, hospital re-admission and even nursing home admission are avoided. The results of this project will hopefully help to guide the development of more effective rehabilitation programs following hospital admissions, which may ultimately lead to reduced health care costs and improvements in mobility, independence and quality of life for geriatric patients at nutritional risk.
Contributions From Psychology to Effectively Use, and Achieving Sexual Consent Psychology related to areas such as gender, language, education and violence has provided scientific knowledge that contributes to reducing coercive social relationships and to expanding freedom in sexual-affective relationships. Nonetheless, today there are new challenges that require additional developments. In the area of consent, professionals from different fields, such as law, gender, and education, need evidence differentiating the human communication that produces consent from the conditions that coerce. Up to now, consent has been focused on verbal language, for example, "no means no," or "anything less than yes is no." Although focusing consent on verbal language addresses a very important part of the problem, it does not solve most of the issues currently raised, such as the famous case of "La Manada" in Spain. This article presents the most recent results of a new line of research, which places the problem and the solution in communicative acts, not only in speech acts. Even though there might be a "yes" in a sexual-affective relationship, there might not be consent, and it is indeed a coercive relationship if that "yes" has been given in a relationship determined by institutional power or by interactive power. Institutional power may occur if whoever made the proposal for the relationship is a person in charge of the process of selecting personnel in a company, and one of the candidates is the person who receives the proposal. Interactive power may occur if whoever makes the proposal is situated in an equal or inferior position in the company to the person receiving it, but the former threatens the latter with sextortion. The potential social impact of this research has already been shown in the cases analyzed for this study.
INTRODUCTION According to a 2017 report from the World Health Organization (WHO), 35% of women worldwide have suffered some kind of physical or sexual violence throughout their lives. The World Health Organization [WHO] (2017) also states that interpersonal violence is one of the leading causes of death in women aged between 15 and 44 worldwide; ahead of deaths caused by cancer, wars or traffic accidents (World Health Organization [WHO], 2018). In addition, another gender-based-violence (GBV) related concern is the decreasing age of the victims. Almost 1 in 3 adolescents aged 15-19 years suffer or have suffered violence in their sexual-affective relationships (World Health Organization [WHO], 2018). These data lead us to consider that GBV occurs from adolescence (Puigvert et al., 2019). Pressure from "friends" to have sporadic relationships or sexual encounters has, in some cases, led to the death of several girls who refused to continue the encounter at some point during the sexual contact. Gang rapes are an increasingly present reality, as recent media (Catalan News; The Guardian; BBC News), statistics (Geoviolencia sexual) and research (Dixon et al., 2019) have shown, and there is still little research dedicated to this phenomenon. According to the Geoviolencia platform, gang rapes in Spain increased from 18 in 2016 to 60 in 2018 (with 42 counted in the first half of 2019). Research has identified an intersection of several contextual elements to explain gang violence (Dixon et al., 2019). In the same way, authors claim the need for a bridge between interdisciplinary areas of study to better explain interpersonal violent crimes. While studying gang-rape perpetrators, Porter and Alison (2019) determined the existence of leaders who are more influential in the offense, encouraging others to be implicated in the crime.
Indeed, researchers have questioned the psychology of criminal conduct for decades (Fortune and Heffernan, 2018), and consider the need to socially approach criminal behaviors, involving both individuals and the community. The analysis presented in the study by Puigvert et al. (2019) not only focuses on the causes of gender-based violence, but also analyzes its underlying factors to identify effective actions that may contribute to preventing young girls and women from becoming victims. They observed that boys with violent attitudes and behaviors mostly preferred one-night stands, while boys with non-violent traits mostly preferred stable relationships. This separation between types of boys leads some girls to tend toward having "a good time" with the more violent boy, and later in life to settle down with the other, egalitarian type of boy (Gómez, 2014). These two models potentially lead to gender-based violence and, above all, tend to force and intimidate girls, even at very early ages, to hook up and have sexual experiences that they may not have chosen on their own. Duress and coercion also exist in relationships between teenagers. In a study conducted by Katz et al. (2019), 422 students between the ages of 15 and 16 were surveyed with just one question: "Did someone with whom you are dating or with whom you dated, force you to do sexual things that you did not want to do?" The results showed that approximately 22% of women and 8% of men reported having experienced sexual coercion at least once in their life. Findings of this study illustrate that sexual coercion tends to be a common element among adolescents. Poor-quality relationships, many of them based on this type of forceful friendship, might have a long-lasting impact on people's lives. On the other hand, high-quality relationships protect against harassment and ensure a good quality of life (Harvard Study of Adult Development).
Positive relationships also improve work environments and may contribute to overcoming situations of conflict and violence (De Cordova et al., 2019). All types of relationships, at any age, should be free of coercion. Social influence, especially peer influence, plays a crucial role in adolescent decision making (Ciranka and van den Bos, 2019). Education on consent is needed for all children, youth, teens and adults. People from all walks of life are participating in the feminist movement and joining the struggle against sexual harassment to make it possible to have positive sexual relationships (Joanpere and Morlà, 2019). Awareness of consent, from early ages, has to do with freedom, shaping the limits of one's own body and that of the other person. Duress, coercion and any similar kinds of acts committed by another individual are considered harmful, and they can potentially involve seriously adverse life-course health consequences (Bellis et al., 2019). Indeed, sexual harassment and gender-based violence have serious negative repercussions on people's physical and mental health (World Health Organization [WHO], 2018). Kandel et al., from the neuroscientific field, also demonstrated GBV's long-term effects on people's health (Kandel et al., 2012). Considering it a public health issue, the damage of violence is difficult to endure psychologically for many victims. Addressing this reality becomes a matter of high importance for psychology. To properly tackle this scourge, society is at a crucial historical moment; not only through social movements such as #MeToo and similar, but also through research. The European Commission and its Directorate of Research have social impact as one of their priorities for the near future (Flecha et al., 2015).
In line with this concerning problem of GBV, and with the aim of improving people's lives, the European Commission has adopted the agenda of the 17 Sustainable Development Goals (SDGs), to be carried out together with 169 associated targets. Goal 5: Achieve gender equality and empower all women and girls shows the EU's commitment to gender equality, in line with European values rooted in the EU political and legal framework. "The EU's strategy and action plan to promote Gender equality and women's empowerment aims at changing the lives of girls and women by focusing on their physical integrity, promoting women and girls' economic and social rights, their empowerment and strengthening their voices and participation." This statement also highlights the need and willingness of governments, professionals and policy-makers to act along this line, empowering women victims to make their voices heard, and supporting them. Different scientific research has already demonstrated the impact of interpersonal violence or aggression on people's physical and mental health (Waldinger et al., 2006; Shonkoff et al., 2012). Indeed, gender-based violence may intersect with other inequalities (Shefer, 2019). Sexual justice and gender rights are becoming main aims for scholars and activists all over the world. Prevention interventions are also urgent and necessary, as a modifiable factor of risk and protection and, above all, of rejection of any unconsented attitude. Contributing from psychology to the effective use and achievement of sexual consent, examples of the most successful actions in this area will be analyzed in this article. STATE OF THE ART Psychology as a discipline has contributed for decades to research on gender, violence and, in some ways, consent too (Walker, 1979; O'Connell and Russo, 1991; Jordan et al., 2010).
American psychologist Leonor Walker, a leader in the field of domestic violence and founder of the Domestic Violence Institute, analyzed psychological contributions on the social problem of men's violence against women (VAW) based on power, and on a kind of socialization of men who believe they may control women by any means (Walker, 1989). In this vein, we look at the necessity of a feminist analysis in psychology during the late 1960s, when the feminist movement began examining gender role socialization in female and male behaviors, previously considered biological and innate (Maccoby and Jacklin, 1974). Thus, a new psychological field was created, focused on women's and men's power and their role in society (Dilling and Claster, 1985). In this sense, the issue of consent is raised from several perspectives in the latest research in psychology, which includes people's ways of thinking, their behavior and their ways of interpreting others' actions, as well as the ways consent is asked for and given. In the case studies taken as examples for this review article, it is easy to see how consent needs to be free, agreed, informed, ongoing and legislated on the basis of communicative actions. Known as major pioneers in the study of women in psychology, Agnes O'Connell and Nancy Russo wrote on women's life stories and heritage in the origins and development of psychology (1991). With this publication, celebrating the American Psychological Association's Centennial, the scholars recognize eminent women and feminists who served in the transformation of traditional psychological theories, methods and practice, aiming to preserve their contributions within an appropriate historical and sociocultural context (O'Connell and Russo, 1991).
One of these traditional psychological theories consisted of blaming victims, for instance questioning their way of dressing, without even considering rape at home or by dating partners, and linking their trauma with the need for therapy (Walker, 1989). Following theories proposed by psychoanalysts such as Freud, sexually abused children were also blamed, until the moment when feminist psychologists were finally able to discredit the myth of the seductive child (Lerman, 1986; Walker, 1988). In the same line, research in psychology has had other social impacts. Using psychological research methods to approach VAW, Walker's research helped to develop programs for survivors to attend, to create new policies and to change old legislation (Schneider, 1986; Walker, 1989). In terms of gender violence, psychology also contributed to pointing out that gender discrimination might make women vulnerable to potential mental health problems. As an expert in psychology and psychiatry, and founding director of the Center for Research on Violence Against Women at the University of Kentucky, Jordan et al. (2010) argue that, besides considering VAW a legal and social justice problem, research should also focus on the psychological impact of violence on victims. New scientific psychology is integrating gender analysis by considering victims' voices and case studies and basing new approaches on survivors' narratives. As described, much psychological research has contributed to the study of gender-based violence. Drawing on previous research, our contribution aims to approach the use and achievement of sexual consent. MacKinnon (1979) was one of the pioneer scholars contributing to raising awareness about the importance of legislating what is considered sexual harassment in public institutions. MacKinnon aimed to achieve gender equality in international and constitutional law, as well as in political and legal theory.
She also worked toward obtaining legislation against sexual harassment in the United States and against other types of GBV that violate civil rights. Title IX itself constitutes an example of this struggle for legislation. Aiming to make gender bias visible in law, she began to open a legal debate on issues such as sexual discrimination and sexual abuse. MacKinnon (2005) supported the recognition of sexual harassment, rape and abuse based on power, focusing on affirmative consent, which led to a reformulation of the debate on United States legislation and gender equality. Along this line, within a broader understanding of communicative sexuality (MacKinnon, 1983), other feminists have explored it further. Pineau (1989) is considered one of the first women who, analyzing legislation from a feminist point of view, examined the victimization of vulnerable women. She raised the point of rape based on non-consensual sex, and thus opened the debate on a more communicative model of sexuality, so that consent should be explicit and clear, objective and legislated, with a more complete model than "no means no" or "yes means yes." Similarly, Cowling (2004) also suggested a move toward teaching a communicative model of consent. Her research provides evidence that consent communication occurs most often indirectly and non-verbally. In the European context, Wilson (2000) discusses the subjective experience of sexual harassment and assault. Based on data from university students in Scotland, Great Britain, New Zealand and North America, Wilson argues that a correct analysis of the understanding of sexual harassment requires representing both the complexity of thought and the behavior of someone who has suffered this harassment. These two spheres refer to the psychological impact that harassment triggers in people. Fiona Wilson claims the need to better understand individual experiences and how harassment or aggression leaves certain marks on people's lives.
This subjective world, in terms of Habermas (1987), frames a world only accessible to individuals themselves. A deep understanding connecting harassment and its psychological effects is crucial to better support its approach and legislation. In this sense, people's and survivors' narratives (Clark and Pino, 2016; Miller, 2019) have shed light on victims' healing, the effects of peer support, and a way of making their stories public. This achieves solidarity with other survivors, raises awareness of GBV, and engages people in action. Known as a representative of modern psychology, Bruner transformed the discipline by placing intersubjectivity and narrative at its center, pointing out that the future of psychological science was linked to understanding the human mind in relation to human interaction and the cultural context (Bruner, 1996). As a social psychologist, G. Mead (1934) founded what is now called symbolic interactionism. Mead describes a sociological perspective on interaction: how individuals interact with one another in order to communicate and create "symbolic worlds," while these "worlds" shape each individual's behaviors. In this way, society as a whole is built through interactions, in a continuous process of interpretation of people's worlds and the meanings they share and develop among themselves. Social and cultural factors form not only our environment and understanding but also our brain. Because they shape our way of thinking and behaving in society, understanding and applying consent becomes singularly relevant. This is true for making each person understand that they own their body and, above all, for making the people with whom interaction takes place understand not only the impossibility of touching another body without permission, but the severity of doing so. Thus, individuals would avoid memories of unwanted relationships, which leave deep marks on human brains (Hirst and Rajaram, 2014) and potential problems later in life.
Within the challenges for free cognitive development, psychologists have begun to pay attention to violence in its broadest sense (Racionero-Plaza, 2018). Many authors have spent decades researching the most beneficial ways to intervene to support survivors or to position themselves in the context of GBV at universities. The longitudinal study by Coker et al. (2016) already shows that programs for overcoming and acting against harassment based on bystander intervention are the most effective ones. From the academic field, research on this topic has extended to other areas. Recent studies in psychology (Philpot et al., 2019) continue to confirm the success of bystander intervention. In a comparative analysis between countries on different continents, this research demonstrates that in most public conflicts the tendency of bystanders is to intervene to help someone in an emergency. People are also more likely to intervene when accompanied by other people. Based on these findings, Philpot et al. (2019) argue that psychology needs to change the narrative of the absence of help toward a new understanding of what makes intervention successful. Approaching consent also involves an important link between psychology and legislation, framed within the difficulty of legislating people's wills. Consent may be non-verbal and dependent on context. Whether the other person is willing to understand it as such, so as not to commit a crime, also depends on what is considered moral, correct or legal and what is not. Slavery, for instance, is considered immoral today. Even just a few decades ago, many women did not have a say over their marriage, as consent would have been given by their parents or siblings. Since then, society has changed to give women this autonomy, and has encouraged them, together with men, to continue the struggle for more rights.
It is not acceptable to touch another body without permission, since the body is the most precious thing that makes us human beings; laws therefore have to regulate this to protect citizens. The most essential part of a human being cannot be assumed in any other way.

Beyond Words: Defining Consent and Asking for Its Regulation

According to the National Sexual Violence Resource Center (NSVRC, 2015 7), consent is understood as "an affirmative agreement to engage in various sexual or non-sexual activities. Consent is an enthusiastic, clearly communicated and ongoing yes. One can't rely on past sexual interactions, and should never assume consent" (NSVRC, 2015). The student movement was pioneering in opening the debate on consent in sexual relationships. In 2004, "Understanding Consent to Sexual Activity" constituted one of the first known laws on the topic, making "no means no" a pivotal slogan in this regard. The bill establishes that states of unconsciousness, alcohol and drugs make someone unable to give consent. In addition, fear, intimidation, power relations and academic evaluations are situations that may inhibit the victim's capacity to say "no," so consent should be nullified in such contexts. "Affirmative consent" means affirmative, conscious and voluntary agreement to engage in sexual activity (according to the 2014 law): when a person says "no" to any kind of sexual engagement, the other person must understand the "no" as such. Sexual contact with a person who has not given her/his consent constitutes a crime. The affirmative consent law, "yes means yes," includes three important elements: (1) the definition of consent as "an affirmative consent standard in the determination of whether consent was given by both parties to sexual activity. 'Affirmative consent' means affirmative, conscious, and voluntary agreement to engage in sexual activity."
(2) The configuration of the sexual crime: when a person says "no" to any kind of sexual engagement, the other person must understand the "no" as such; acting against it constitutes a crime. (3) The responsibility of the person who has to ensure consent: "It is the responsibility of each person involved in the sexual activity to ensure that he or she has the affirmative consent of the other or others to engage in the sexual activity." The change from "no means no" to the message of "get consent" led several scholars to analyze how young adults conceptualize consent (Beres, 2014). Concerned with sexual violence prevention, education and research on sexual consent, Beres studies the understanding of consent from a perspective based on "communication about sex" and not only on "OK sex." This paradigmatic change makes the issue of consent greater than words. Campaigns such as that of Planned Parenthood 8 defined consent as an act, either tacit or explicit, which involves the following criteria: (1) Freely given. Consent is a choice made without pressure, manipulation or the influence of drugs or alcohol. (2) Reversible. Anyone can change his or her mind about what they feel at any time; silence is not consent. (3) Informed. Anyone can only consent to something if he or she knows the full story of the facts and intentions. (4) Enthusiastic. When it comes to sex, someone should do only what she or he wants to do, not the things that the other person might be expecting. (5) Specific. Saying yes to one thing does not mean saying yes to other things or other people. While these definitions are essential when considering the understanding of and training in sexual consent, the regulation of consent is indeed crucial. As with any other crime, sex crimes based on consent need to be considered, justified and properly formulated. But why is consent relevant for sexual freedom?
Historically, consent has been an important issue in social, economic and personal relationships. Consenting to a contract or a medical intervention is a legally recognized act based on the will of any human being. However, sexual consent has not always had the same relevance. According to Pérez Hernández (2016), a person could "formally" consent to having a sexual relationship or to sexual conduct (even saying "yes") and not "really" want to participate in it, expressing their "decision" through words or silence. Similarly, later movements show that "silence is not consent" (Spark movement, 2013 9). Portugal passed its consent-based rape legislation 10 following the "silence is not consent" principle. In these terms, silence cannot be legally interpreted as consenting. Research has also indicated some reasons that may affect someone's own will (Mead, 1934; Walker, 1997), including coercion, consent given in fear, or consent given to please another, among others. In our terms, consent means actively accepting to participate in any sexual activity. Sexual activity without consent is considered rape or sexual assault and is legislated as such in some countries. However, there is still an unsolved problem, on which this article will focus: those situations in which, even with a "yes," the real message and the real will of the person is "no." Thus, the challenge lies "beyond words": in interpreting the attitude, the will and possible coercion, fears or other elements of the context that might influence someone at a psychological level.

Previous Steps to Approach Consent Legally

Legislation offers legal certainty, which provides a solid foundation for the judge when making a decision. Indeed, one of the greatest impacts of legislating a reality consists in achieving, through law, legal certainty. At the point when a judge must make a decision, he or she needs to know both the facts that occurred and under which legal category these facts have to be framed.
The classification that a judge attributes to a fact (e.g., rape, sexual assault) is made according to its corresponding legal type. Thus, the better a social phenomenon is defined, the more concrete and more restricted it is, and the better it will be interpreted. This leaves less space for a judge's own interpretation. Personal understanding of the facts can even be subjective, including dissenting opinions. This happened during the conviction in the "La Manada" case, in which the aggressors were convicted of abuse and not of rape, based on opinion. This case constitutes an example of the need for a common legal framework. If different judges have different opinions, the lack of legal certainty may lead to ambiguous and conflicting decisions, in which the collective subconscious tends to prevail, including ideas taken for granted such as "who keeps silent, grants consent" (Tomás, 2003). The lack of consent constitutes a crime, and it is therefore aggression. Researchers have addressed the issue of informed consent, the collective unconscious and tacit consent (Tomás, 2003). Here, the author reveals how the idea of consent has been harbored in our minds, from Roman law to Common law, creating what is considered a collective legal unconscious. For example, the phrase "who keeps silent, gives consent" was not included in Roman law but ended up being configured later in time, up to the present. In this way, Tomás argues that Civil law, unlike other systems of human rights, has configured silence through principles created over time, not through legal norms. Historically, it was under Canon law that silence was taken as affirmative acknowledgment, referring to fathers' lack of verbal consent, so that their daughters could become nuns without their permission. The fathers' absence of verbal consent led to considering silence a legal act, with value given only to affirmative consent.
In Roman law, by contrast, individual silence was not considered consent. Facing the dilemma of the legal interpretation of silence, findings in psychology have already shown the existence of certain situations and mental decisions, conscious or unconscious, that can affect a person's behavior pattern (Kandel, 2018). Psychological shock may occur due to fear, panic, anxiety or other situations of power that psychology has already identified as causing the inability to speak and immobility (e.g., turning cold, freezing) (Gidycz et al., 2008). Rape is one of those situations. However, following purely legal reasoning, judges who are not aware of these psychological effects tend not to seriously consider certain situations in which the victim is simply not able to speak. The link between psychology and the law becomes crucial at this point. The dilemma about the interpretation of consent tends to emerge from situations in which it cannot occur or be requested, for instance harassment and violence (often arising under coercion, under the influence of alcohol or other substances, or in sporadic relationships). These are precisely the kinds of situations in which consent and its regulation are most necessary. A contradictory outcome in a sentence, as happened in the "La Manada" case, has serious consequences, not only for the survivor's victimization but also for the emergence of "other Manadas." In many Spanish schools, boys under 16 years old have even created "mini-Manadas" to attack their female classmates. Thus, the reality of gang rapes, many of them occurring in hook-up situations (Puigvert et al., 2019), requires building a legal category that bases such crimes on the lack of consent for any sexual act.
This would contribute to social impact in the following terms: (1) providing legal certainty; (2) reducing judges' subjectivity when sentencing; (3) contributing to transforming the collective legal subconscious; (4) increasing the number of complaints of sexual harassment (which is positive in terms of victims coming forward); and thus (5) helping to reduce the emergence of "new Manadas." Interpretations must be based on standard legislation.

Current Legislation on Consent

The Istanbul Convention, article 36, states that parties shall take the necessary measures to ensure that the following intentional conducts are criminalized: engaging in non-consensual vaginal, anal or oral penetration of a sexual nature of the body of another person with any bodily part or object; engaging in other non-consensual acts of a sexual nature with a person; causing another person to engage in non-consensual acts of a sexual nature with a third person 11. In this sense, the Convention states that "Consent must be given voluntarily as the result of the person's free will assessed in the context of the surrounding circumstances." How should legislation approach consent? How have other legislations done so? Starting from the difficulty of legislating non-verbal interactions, some laws have made partial approximations. The "no means no" legislation (2004) was a pioneer. Years later, affirmative consent emerged from below: the campus student movement claimed that saying "no" is not enough, since there are situations in which a person cannot say "no" but even then might not be consenting. Thus the need for affirmative consent arose, along with the struggle for a law claiming that "anything less than yes is no," also known as the "yes means yes" law, passed in 2014 in the State of California.
One year later, in 2015, the State of New York passed its consent law, known as "Enough is Enough," stating: "New York State has the most aggressive policy in the nation to fight against sexual assault on college campuses. By standing up and saying 'Enough is Enough,' we made a clear and bold statement that sexual violence is a crime, and students can be assured they have a right to have it investigated and prosecuted as one." Considering the trajectory of United States legislation regarding consent, another key step occurred in 2013, when the Campus Sexual Violence Elimination Act (SaVE Act), a bill whose components were incorporated as an amendment to the Clery Act, updated the Clery Act by expanding its scope in terms of transparency, accountability and education; in other words, by establishing reporting, response and prevention education requirements on rape, acquaintance rape, domestic violence, dating violence, sexual assault and stalking. Other approaches, in New Zealand and Canada, include for instance situations of intoxication, sleep or death, meaning that the person does not have the capacity to consent (Crimes Amendment Act, 2005 12; House of Commons Bill C-49, 1992 13). In the European Union, currently only 9 of the 28 EU member states define rape in their jurisdictions as sex without consent, either tacit or explicit: Ireland, the United Kingdom, Belgium, Cyprus, Germany, Iceland, Luxembourg, Sweden and Portugal. Some others, including Spain, subsume consent under the concept of sexual assault and only recognize its absence when physical violence or intimidation takes place. The German Criminal Code, section 179, considers as a sexual offense situations in which the victim does not suspect an attack, is defenseless, or makes a refusal to consent to the sexual act known either verbally or through his or her behavior (e.g., by crying or stiffening).
This is a communicative act, providing important information for other legislations to include consent. The Luxembourg Criminal Code, article 375, defines the lack of consent as constituting the crime of rape. It states: any act of sexual penetration, of whatever nature, by any means whatsoever, committed on a person who does not consent, including by using violence or serious threats, by ruse or artifice, or by abusing a person incapable of giving consent or of offering resistance, constitutes rape and shall be punished by imprisonment of five to ten years. In 2018, Iceland's Parliament 14 passed a landmark bill which makes sexual relations with someone illegal unless one has their explicit consent. Under the new law, consent must be clearly and voluntarily expressed. The Belgian Criminal Code (article 375) defines rape as any act of sexual penetration committed on a person who does not consent. Consent is deemed to be absent when the act is imposed by means of violence, force or a trick, or if the victim suffers from a physical or mental disability. United Kingdom law considers informed consent as given freely, by both partners, enthusiastically, every time and for every sexual act. An intoxicated person is legally unable to consent to sex, and having sex with a person who is very drunk is rape or sexual assault. Swedish law requires consent regardless of whether there has been violence or threats, or whether the perpetrator has exploited the victim's situation of vulnerability; it introduced the concepts of "negligent rape" and "negligent sexual abuse." In 2017, Ireland 15 included in its Criminal Law Act the sexual offense as an act which, if done without consent, would constitute a sexual assault, considering that a person lacks the capacity to consent to a sexual act if he or she is, by reason of a mental or intellectual disability or a mental illness, incapable of consenting (specifying concrete situations).
The Cyprus Criminal Code, section 144, includes consent by stating: any person who has unlawful carnal knowledge of a female, without her consent, or with her consent, if the consent is obtained by force or fear of bodily harm, or, in the case of a married woman, by impersonating her husband, is guilty of the felony termed rape. The Spanish Criminal Code defines rape under the presumption of aggression. It defines sexual abuse, sexual harassment and sexual assault as follows: sexual abuse (article 181): whoever, without violence or intimidation and without consent, performs acts that violate the freedom or sexual integrity of another person; sexual harassment (article 184): whoever requests favors of a sexual nature, for himself or for a third party, within the scope of an employment, teaching or service-provision relationship, continued or habitual, and with such behavior provokes in the victim an objectively and seriously intimidating, hostile or humiliating situation; sexual assault (article 178): whoever violates the sexual freedom of another person with violence or intimidation. Article 179 specifies: when the sexual assault consists of sexual intercourse by vaginal, anal or oral route, or the introduction of objects by one of the first two routes, the person responsible will be punished as guilty of rape. Gender and law are academic disciplines, both linked to social concerns. People now ask for legislation on gender issues. While legal involvement may seem a matter of extreme gravity requiring immediate solutions, it is also true that laws make us free and prevent many undesired social behaviors. That is why, when the law fails, it sends a warning message to society not to be complacent or trustful. Legislation changes people's morality, and we have changed laws accordingly. Human consciousness and laws have to go hand in hand.
In ancient Rome, the rape of the daughter of a tax-paying citizen was an offense because she was considered his property; only the father could consent for the daughter, showing that the crime was against the father, not against the woman, regardless of her consent (Ted Talk, Joyce Short 16). In German law, acts such as crying and screaming are included, and some legislative developments do include non-verbal language in an attempt to legislate consent. However, the psychological perspective is crucial, as it is necessary for the other person to interpret the message as such. The case that Joyce Short explains demonstrates interactive power, since the boy held the relevant information and was therefore the one who had to ensure a free and equal dialogic interaction. The same video shows how similar realities are treated differently in different states of the United States, according to their rape or consent legislation. That is why it is necessary to go beyond current legislations. In short, taking into consideration all the previous arguments and definitions, consent for any sexual relationship should be affirmative, agreed, free, informed, without coercion, based on the absence of interactive and institutional power, extended from the beginning until the end of each sexual engagement, and based on non-verbal communicative acts. Along this line, crimes of rape, sexual aggression and abuse should be treated on the basis of consent or the lack of it.

CONSENT FROM SPEECH ACTS TO COMMUNICATIVE ACTS

How can we be certain whether someone consents or not? Language is one of the channels. In a normal way of communicating, people "tell" their will. Verbal language is key, but so is how it is understood and how it is applied. Here the role of psychology intervenes: the willingness to interpret and understand what the other person wants to tell and intends to communicate, even without saying a word.
Thus, the complexity of consent lies not so much in its definition as in its applicability (Katz et al., 2019). While there is fairly widespread agreement on the way consent is defined, the debate centers on knowing exactly what consent involves and the context surrounding it (Muehlenhard et al., 2016). States of unconsciousness, alcohol and drugs make a person unable to provide consent. In addition, fear, intimidation, power relationships, evaluations from professors to students, and letters of recommendation are examples of situations that restrict the "no" of the victim and even cancel it. Thus positive consent arises, which can also be nullified at some point. Accepting a "no" means understanding that the other person does not belong to us, even if he or she had consented at the beginning with a "yes," or had consented without wanting to be there. The psychological debate used to focus on the victim: on showing resistance, running, being afraid or calling out. The social debate, by contrast, must focus on the aggressor; it must be clear that consent is a requirement for any kind of intimate act. In this vein, other existing research helps us advance in the knowledge and application of consent. Thus, communicative acts and the research on this topic are key to the study and achievement of sexual consent. In his theory of speech acts, Austin (1955) discussed "how to do things with words," giving examples of how words create realities; for instance, by saying yes, a marriage is created. Austin also realized that language depends on certain conditions. Illocutionary and perlocutionary speech acts focus on the intentionality of the speaker. John Searle built his linguistic phenomenology in this vein. For Searle (1969), intentionality in context is key for any speech act.
Thus, the "construction of social reality" for Searle (1995) is also based on intentionality, which is not only individual but also collective; speakers share this collective intentionality. For example, a five-euro bill is a five-euro bill because people have agreed so. Along these lines, Habermas (1987) published his "Theory of Communicative Action," in which he used the concept of communicative action. Searle and Soler (2004) talk about "dialogic communicative acts," in which both the context and the consequences of our communicative action are important in the development of the action, due to their influence on the construction of a wide range of social phenomena (Searle and Soler, 2004). Soler Gallart (2017) introduced and analyzed the relevance of non-verbal communication (gestures, facial expressions, tone of voice, etc.). That is, not only do "we do things with words" but also with non-verbal symbols, which can communicate by themselves or accompany words. In this way, body language also "says" a lot, and so communication is not just about talking, but about inquiring into the context in which communicative acts take place. From the social psychology perspective, Mead (1934) developed symbolic interactionism, proposing the external world as the place where the subjective world is constructed. The external and the internal worlds both interact and are shared by people in the social world. Meaning is created through language, in this process of intersubjectivity. Thus, the Habermasian approach to communicative action focuses on social interaction for people's understanding, beyond linguistic communication. The act of speech is based on interactions. Among the different types of action defined by Habermas, communicative action arises as an ideal type because language functions as a means to achieve the understanding essential to reach consensus and to take action among people.
MATERIALS AND METHODS

This article is based on theoretical reflection and the transferability criterion, framed in line with communicative acts and their link to gender-based violence. The authors have been developing research on this topic for years. This review presents evidence found under the framework of communicative action and the prevention of GBV, in light of consent and of how legislation on consent can be enabled. We put all this knowledge at the service of two aspects: those situations in which consent may happen and, conversely, those situations in which consent does not exist. New steps focus on contributing to a legislative framework able to inform future legislation. We analyze, first, the definition of and theoretical advances made so far on the issue of consent, and then the different existing legislations on consent in both the United States and Europe. Beyond these studies, the article involves an analysis of cases that recently occurred and were published in different media: (1) the case of La Manada 17; (2) the gang rape case of Manresa (Barcelona) 18; (3) a case raised by Joyce Short during her Ted Talk 19; and (4) a case presented in the "New York Times Opinion" section 20. Research in social psychology provides the necessary framework for analyzing communicative acts based on symbolic interaction, that is, on Mead's (1934) theory describing the link between self and society, which leads to a constant dialogue between the person and her or his self, as responsible for self-consciousness. Assuming that none of the girls whose cases are analyzed here consented to what occurred, this article studies which situation, intervention, legislation and/or measure would have improved the consequences of their situations. Based on the criterion of transferability, among actions and interactions we will focus on those cases which are transferable to other contexts.
In this way, we will enumerate several verbal and non-verbal communicative acts that lead someone to become aware of another's response regarding consent or the lack of it. Here, the will of the other person to interpret the communicative act and their psychological disposition toward the facts are at stake (Mead, 1934). Therefore, one of our goals is to collect these transformative communicative acts (eye contact, criteria for acting, ways of responding to third parties' coercion, etc.) and to explain them with examples of cases from the Internet and the media in Spain and in the United States. These actions may not only transform someone's present; they could also change their future by aiding the reconstruction of people's autobiographical memories of their worst life episodes in healthy directions. This kind of methodology, analyzing data and cases by considering people's voices (Puigvert et al., 2017), has already been used by several research projects and published in respected journals showing social impact and transformation (Gómez González, 2019). The whole idea of the need for training in any specific area, such as the need for bystander training, has also led us to appreciate the importance of and the requirement to educate in consent and to raise awareness about it, all in order to contribute to overcoming gender-based violence. The data of this research will be analyzed in two ways: on the one hand, based on the Social Impact Open Repository (SIOR 21) and, on the other, based on the European Commission study Monitoring the Impact of EU Framework Programmes 22. SIOR was created as one of the outcomes of the IMPACT-EV framework research project, constituting a tool that enables researchers to share the social impact of their own research projects with other researchers as well as with stakeholders (Flecha et al., 2015).
SIOR established five criteria for evaluating political and social impact: (1) connection to the United Nations Sustainable Development Goals, EU2020 targets or other similar official social targets; (2) percentage of improvement achieved in relation to the starting situation; (3) replicability of the impact: actions based on the project findings have been successfully implemented in more than one context; (4) publication by/in scientific journals (with a recognized impact) or by governmental or non-governmental official bodies; (5) sustainability: the impact achieved by the action based on project findings has been shown to be sustainable over time. Drawing on societal impact, the report on monitoring the impact of the European Framework Programme for research and innovation, elaborated by experts, established a set of indicators, divided into short-term, medium-term and long-term indicators, following four key impact factors: (1) addressing global challenges; (2) achieving the Research and Innovation mission; (3) engaging EU citizens; (4) supporting policy-making. In this line, the set of indicators for the societal and policy key impact pathways includes considering: the difference between outputs and results; the estimated cost necessary for their collection; the knowledge and transference concepts needed to determine social impacts; the level of reporting burden for beneficiaries; and the impact timeframes.

NEW APPROACH TO CONSENT

Current advances in the study of communicative action point to the issue of linguistics, involved in symbolic interaction and the creation of the worlds around us, which can be strongly connected through analyzing the predominance of dialogic relationships with regard to power interactions, based on the following four points: (1) institutional power; (2) interactive power; (3) consequences and intentions; (4) regulation vs. prohibition.
Institutional Power

Institutional power refers to the power that usually exists within institutions, influencing their organizational chart and hierarchy. In the context of universities, it may be embodied by professors, who have at least symbolic power over students in some vulnerable situations, such as grades, the ability to decide on their academic future, or recommendation letters. In this case, consent could neither be asked for nor given, since institutional power might limit or prevent the student's freedom to reject or say "no" to his or her professor. To name another situation, institutional power may exist in a company context. Companies also have power structures that characterize the way they function: there are high, low and medium managerial positions. To the extent that some managerial positions rule over others, more power is attributed to the highest positions. In this way, for example, if a boss asks a secretary to have a beer after work, the freedom of the person in the lower position can be diminished by the power of the other person. If harassment occurs while having that beer, consent cannot be given nor requested, given the difficulty of ensuring that it is actually voluntary and free.

Interactive Power

Interactive power refers to the power created by the interactions established among people. For instance, one classmate could threaten a girl with sextortion (Patchin and Hinduja, 2018) if she does not say "yes" to having sexual relationships with him. Another example: five boys with a girl in a small doorway; besides the normal interaction being spoiled, there is an additional kind of power established by the interaction itself. In relation to consent, the group of five men, or even just one, must know that in that interaction they have more power because of the context.
Considering the desire to have a sexual relationship with the girl, they have to be very sure about getting consent; otherwise, if the girl were to complain at some point, society would stand on her side. That is, the most vulnerable person in the relationship would get the social support. Under this scenario, interactive power is determined by context, which gives one person more power over another. For instance, if two friends decide to have dinner at the house of one of them, the host has more power than the guest, simply because of the context. If a blurry line surrounds consent, the weaker position of one party has to be taken into account.

Consequences and Intentions

Weber (1930 [1905]) defined the ethics of responsibility as referring to the consequences, and not the intentions, of any action committed. In this case, following the ethics of consequences involves considering whether the consequence of the action conducted between two or more people has been the desired one, or the contrary. Although the intention might have been a good one, for instance a male boss inviting a female candidate to a pub during a job selection process, the consequence could be that she feels pressured to say "yes." In this sense, "good intentions" do not justify "bad consequences" that diverge from the outcome desired by all the people involved. Regarding consent, continuously ensuring the intended outcome is the duty of all people involved in the action. To provide another example, let us imagine someone convicted of aggression declaring, "I didn't want to harm her" at the beginning of the dispute. In that case, the fact to be judged is the final consequence of the matter, not the initial intention. In this scenario, consent needs to be assured until the end; in other words, the consequence is crucial to determine whether consent occurred or not.

Regulation vs. Prohibition

This model involves both situations: the regulation and the prohibition of any potential sexual-affective relationship in a context based on institutional power, such as academia. It does not necessarily mean that relationships between professors and students are not allowed, or that a relationship between five boys and a girl should be considered a crime; but as power relationships, even when conducted between adults, they operate under the power system. To find solutions to this dilemma, some of the highest-ranking universities have decided to prohibit and condemn any kind of sexual relationship between professors and their students. Other high-ranking universities have opted to allow such relationships as long as they are freely consented to and both members inform the university. However, if the more vulnerable party later complains, the university will take his or her side. This scenario contributes to the field of consent by considering situations in which, even when current legislation allows consent in a sexual relationship (because of age), the relationship may not be free. For it to be so, and for no one to be harmed, situations of interactive power and institutional power have to be regulated, in order to allow, rather than necessarily prohibit, a relationship between a student and a professor or between an employee and a boss, while also believing the most vulnerable person.

ANALYSIS

The results of this study are configured based on the four cases already mentioned above: (1) the so-called gang rape, La Manada; (2) the case of Manresa; (3) the rape described in Joyce Short's TED Talk; and (4) the situation explained in the opinion section of The New York Times, in which consent seemed to be given, but because both sides did not share the same information, the man was convicted of rape 23 . They are all different stories with common elements.
The public information released about these cases included episodes of young women who did not give their verbal consent, as well as cases of girls whose lack of negation may have been understood as consent, although they did not want to participate in any of the acts that occurred. Above all, these examples highlight the need to address consent also from a non-verbal perspective, beyond words; they show how essential training on nonverbal communication is, as well as the need to raise awareness of others' influence on decisions. Mead described this through his theory of social interaction creating ways of being, thinking and acting. Universities need to consider prevention, not just punishment, regarding the issue of consent and its consequences, being aware that this problem constitutes a public health issue.

The Case of La Manada

For the last 2 years, the Spanish people have followed the country's most controversial rape trial. Five men started talking to a girl who was sitting alone on a city bench. Subsequently, they took her into a doorway, and all began to rape her in different ways, one after another. In the end, they stole her cell phone and left her there alone. This happened in July 2016 during Pamplona's regional festival, San Fermín. In April 2018, the five aggressors were sentenced for sexual abuse and not for rape. The issue was consent: the 18-year-old victim did not say "no." Under the Spanish Penal Code, rape is not defined on the basis of consent, and one of the judges issued a dissenting opinion. That same day, thousands of people demonstrated in the streets in favor of the survivor. Social pressure was necessary, both as a demonstration of a step forward as a society in supporting survivors and as a way of pressing for changes to current legislation. We all took her side, we believed her, and most citizens defended her right not to show negation; we understood her fear of being killed had she dared to say no.
In June 2019, the Supreme Court sentenced La Manada for rape and raised the punishment to 15 years in jail. This ruling established doctrine on intimidation, considering it sufficient reason to break the victim's will. Nevertheless, the criminal reform is still incomplete, and concepts such as harassment, aggression and violence need to be adjusted to provide legal certainty to the legislator and prevent divergent interpretations.

The Manresa Case

At the beginning of July 2019, the trial against six men began. In 2016, five men directly raped a 14-year-old girl, including intercourse and sexual acts while she was unconscious. A sixth man already knew the victim, according to the public prosecutor's report. He was completely aware of her age and saw that she was barely conscious after drinking alcohol and smoking marijuana. He took her to a nearby location and allegedly raped her. After that, he encouraged his friends to do the same while the girl was unconscious. They have been accused of raping the minor in an old factory in Manresa, in the province of Barcelona. The victim, who is now 17, reported that she was raped in a building that night. The men were arrested and charged by the public prosecutor with the lesser offense of sexual abuse, which could be raised to sexual aggression. Again, the issue in the judicial debate is consent. The victim testified to her fear upon seeing a gun during the aggression. Her horror of being killed justifies her lack of negation. This case should be linked to a similar crime in Spain. We all remember Nagore, a 20-year-old woman who was killed during the Pamplona festival in July 2008. She had accepted, pressured by her friends, to go to a boy's apartment, but she said no to sex and was killed.

Joyce Short's TED Talk

Universities are also spaces where sexual harassment occurs.
After much research and much progress in relation to harassment based on power relationships, it turns out that harassment also occurs among peers, and that is where consent takes on crucial relevance. However, some examples demonstrate how the line of consent is blurry and how obtaining consent needs to take into account contextual situations and minor nuances. In this sense, Short explains in her TED Talk the case of a young man who entered, late at night, a young woman's room in their university dormitory. Though asleep, she felt someone get into her bed. Thinking it was her boyfriend, she engaged in sex with him. The act could have been consensual at some point, but it was not here, because she did not have the full information. Thus, the police analyzed the case and arrested him for rape. The issue of informed consent is raised here, as only he had all the information. For these additional contextual reasons, it is necessary to delve into this problem in order to map out its circumstances, as well as to clearly delimit those situations under which consent cannot be agreed or requested.

The New York Times Opinion Case

Hanna Stotland argues that simply expelling college students accused of sexual assault is a misguided response to what is a public health problem. In her video, Stotland describes very different cases surrounding consent and the difficulties of getting it right, with different degrees of what is considered sexual assault, while asking that each action be named by a different term. During the video she explains the case of a man and a woman who both agreed to have a sexual relationship. After a while, she filed a complaint against him based on the lack of consent. She recognized that she had said yes at the beginning, but later said she had not meant it at that time and had agreed to have sex only in order to leave the room more gracefully. The man was accused of rape and was suspended from the university for 2.5 years.
According to Stotland, this example makes us aware that there are confusing moments when yes might actually mean no; consent is thus a murky process, and universities should look for justice while training students to navigate this gray zone, both to prevent rape and to avoid being accused of rape.

RESULTS: TRANSFORMATIVE COMMUNICATIVE ACTS

There is broad agreement on the definition of consent, which has to be affirmative, voluntary, enthusiastic, conscious, and repeated. The issue is how to obtain it in the right way, and how to punish someone who did not obtain consent. Communicative acts and dialogic interaction are a contribution in this regard. Lidia Puigvert (El Diario Feminista, 2019) has already described some prerequisites for a consenting relationship: for example, the turn from relationships based on speech acts to ones based on communicative acts, which include all types of communication, not just verbal ones. It means basing the relationship on dialogic interactions and not on power interactions, some of which may come through power manifested by institutions, for instance a boss over his female worker. However, considering that power interactions may exist in the absence of institutional power, interactive power should also be considered: for example, five boys with one girl in a doorway. Referring to the cases of La Manada and the Manresa gang rape, it is proven that there was a sexual act in which the women recognized being coerced. When a group of men is alone with a woman, they should know that she might not feel free. So, they can only attempt any kind of sexual contact if they are sure that it is a totally free relationship, knowing that they are taking the risk if they are wrong. There is much concern about educating in consent. Consent education involves creating collective awareness, both of the severity of violence and of the importance of society taking a position against this problem. The consequences of GBV can become psychologically unbearable.
This human grief needs to be addressed from a scientific perspective, even while research is still working out how; each new step leads to a new, still unsolved reality. GBV is an emerging and urgent issue for scientists who, from psychology, seek ways to impact it. Drawing from our research, based on the SIOR and Monitoring Impact criteria, the following set of actions stands out as transformative for fostering consented sexual relationships, while contributing to the social impact of psychological research. The concept of consent needs to include: (1) Ethics of responsibility. Accounting for power interactions in an unequal social structure; limiting the idea of consensus proposed by Habermas based only on validity claims and orientation toward understanding. (2) Non-verbal body language. This is crucial, as it makes little sense to ask at every moment, "do you want to keep doing this?" (3) Providing conditions free of coercion. Conditions that enable consent mean ensuring spaces and interactions in which consent is freely given, clear, continuous, specific and unambiguous. Situations of duress, power relationships, unconsciousness, fear and threat cannot ensure consent. (4) Solidarity with survivors. In any situation in which someone files a complaint for sexual harassment, it is everybody's duty to believe survivors and stand in solidarity with them. In the same way, this action involves empowering and protecting active bystanders. (5) Consent training. People should be trained in asking for and obtaining consent, discussing its challenges as well as its benefits. (6) Communicative acts. Communication beyond words needs to be considered in ensuring consent for any sexual activity. Nobody should ever judge a victim for the way she or he reacted once sexually assaulted. (7) Common sense. Some legislations are based on tradition, jurisprudence and common sense.
At a moment when legislation on consent is being built, common sense may be used in situations in which the meaning of consent is not clear (verbally and nonverbally). (8) Overcoming barriers and resistances. Since achieving consent for any free sexual relationship is not easy, local and structural barriers should be considered and overcome.

Evidence of the Social Impact of Psychology

In May 2019, at the Oñati International Institute for the Sociology of Law, a workshop 24 on GBV took place, including a roundtable discussion on the issue of consent, in which the authors took part, discussing with members of the police the issue of gender violence, its link to consent, and the need to add this approach to their cases. Additionally, we shared this contribution with lawyers, scholars, representatives of women lawyers' associations, gender experts, policy makers and social workers, as well as with survivors and educators. They all could appreciate the social impact of this research for the reality they face each day. Based on current approaches to consent, there are two clear scenarios so far: "no means no" and "yes means yes," the latter following the principle "anything less than yes is no." However, there is a third situation, to which this article aims to contribute, considering occasions when "yes," a potential "yes" or even a silence actually means "no," referring to those situations in which a specific context pushes the person to have no choice but to say yes (or to agree). Following this line, to approach specific contexts, we build knowledge along analytical elements in two veins: (1) the communicative acts and the will to understand them from a psychological perspective; (2) the interactive power and the institutional power, which frame specific contexts. Consequently, new realities place on us, as researchers, the duty to provide scientific elements both for the analysis of cases and for legislating on them.
Some of these realities are described in section 6, such as the La Manada case, the Manresa case, Joyce Short's TED Talk, and The New York Times opinion case. These cases show the need to consider consent in a conscious manner, from the beginning of the engagement until the end, informing the partner of one's intentions at every moment. Similarly, the way consent has been taken into account from a legislative perspective shows the importance of analyzing these realities as a pressing moment for the creation of new legislation on consent. This section presents below three cases that influenced legislation in their countries of origin. Usually, legislation needs reality first in order to be created and changed.

Three Cases That Impacted Legislation

Previous studies have already shown the importance of analyzing gang rapes and their trials in terms of what is considered a social opportunity: the social moment necessary to raise awareness and help translate a social claim into a law, with the aim of legislating an affirmative "yes" (Vidu and Tomás Martínez, 2019). For instance, without the struggle for women's rights, we would not have legislation about them. Laws shape our morality; we need new laws on sexual assault to change the way people think and act regarding it. (1) As history shows us, specific cases have promoted legislation on sexual harassment. The Clery Act, or the Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act, is a federal law passed in 1990 as a consequence of the rape and murder of Jeanne Clery in 1986 by another student, in her campus university residence hall. At that moment, it was discovered that 37 other violent crimes had occurred at that university during the previous 3 years. This is how the Clery Act came to require institutions to disclose, publish and distribute their data and statistics on violent episodes.
(2) The Portuguese legislation on consent was passed in January 2019, after the Third Criminal Section of Lisbon confirmed a prison sentence of 6 years and 6 months for aggravated rape for a 35-year-old man who, in September 2016, took a 14-year-old girl without her consent and raped her without her resisting. According to the judicial sentence, the judge considered that the victim's absence of physical resistance cannot be regarded as a form of consent, but as a tool to survive the attack. (3) Chanel Miller's rape case (she was known as Emily Doe during the legal process of the complaint) also changed California law. She was raped while unconscious at her campus in 2015. One year later, Chanel read her victim-impact statement in a courtroom in California. The statement was later published on BuzzFeed 25 and had more than 18 million views. The Chanel case triggered a change in California law, which at that moment did not consider her case rape. Currently, the legal definition of rape includes any kind of penetration, and there is a mandatory 3-year minimum prison sentence for penetrating an unconscious or intoxicated person 26 . The three cases presented above are clear examples of the issues of responsibility, body-language interpretation, providing situations free of coercion, solidarity with survivors, common sense and training on consent. Linking these points with the four cases presented in section 6 highlights the impact of psychology in each of them. For instance, in the La Manada case, once the men knew she was helpless (they were five, she was alone and afraid), they did not even ask her about consent, as they did not want to know her will. The psychological point of acting in accordance with the will of others, in line with what Mead says, is skipped.
In the case of Manresa, the men were aware that she was unconscious; they really wanted to rape her, and they did not want her consent. Psychologically, the men were aware of what they were doing. In the case of the TED Talk, from the moment the man knew that he was not her boyfriend, he knew that he was deceiving her. In the same line, in The New York Times opinion case, she said yes at the beginning but actually meant "no," so interpretation beyond words is needed, and communicative acts arise in order to better understand her will.

25 For more information, see: https://www.buzzfeednews.com/article/katiejmbaker/heres-the-powerful-letter-the-stanford-victim-read-to-her-ra
26 For more information, see: https://www.latimes.com/politics/la-pol-sacstanford-rape-prison-sentences-20160806-snap-story.html

Increasing Movement of Supporting Survivors

All these cases share the need for, and the difficulty of, defining and legislating sexual consent. The La Manada and Manresa cases were also in the media and were deeply rejected by the feminist movement after the provisional request of the Prosecutor's Office, which accused the men of sexual abuse instead of sexual assault while awaiting the victim's testimony. Under the slogan We do believe you, massive support for both victims and rejection of the harassers was publicly shown. In this sense, La Manada, as a case study for this article, shaped the proper legal opportunity to build new legislation on the issue of consent, including the context of the action and its features. This is a historical moment for lawmaking on sexual harassment and consent. By legislating for the different contexts and situations that may occur, it will be possible to better prevent GBV and harassment. The social impact serves to raise a debate in the social and legal fields, making it feasible to overcome victimization and contribute to the effective use and achievement of sexual consent.
It is necessary to advance, beyond words, into the interpretation of silence, considering interactive power as well as institutional power. Along the same lines, there are already programs that have led people to act differently. The It's On Us campaign 27 says: non-consensual sex is sexual assault. Here it becomes necessary to establish which situations make consent ineffective, such as power, force, duress or deception. However, we still need to establish how consent is actually implemented. Some campaigns define it as freely given, knowledgeable and informed agreement. On her web page 28 on consent awareness, Joyce Short distinguishes between assent and consent. Permission is a form of assent, but consent has a different meaning according to the law. This legal distinction makes some sexual conduct criminal, even when it involves assent but not consent. According to the Anti-Violence Project 29 , the "no means no" messages of the 1990s have been replaced with "yes means yes" and "consent is sexy" messages, particularly for use in poster campaigns and in slogans used in "slut walks," for example. There is also an increased focus on consent in a range of anti-sexual violence education programming. "Consent: it's as simple as tea" 30 is a campaign that describes what consent is for all ages, based especially on the idea that consent can be given and withdrawn during the same sexual conduct. In her TED Talk video 31 , Amy Adele Hasinoff talks about what "sexting" teaches us about consent.

27 For more information, see: https://www.itsonus.org/
28 For more information, see: https://consentawareness.net/2016/01/31/assent-vsconsent-theyre-not-one-and-the-same/
29 For more information, see: https://avp.org/
30 For more information, see: https://www.youtube.com/watch?v=oQbei5JGiT8
31 For more information, see: https://www.youtube.com/watch?v=LdDRv2f2dFc
While digital communication has positive effects, affirmative consent needs to be simple. She discusses mutual communication and the simple, clear response by which, when someone asks the other person for consent, they really obtain it. Most importantly, she agrees that speaking about consent for decades now has made people more aware of asking for and obtaining consent, and especially of the consequences of not obtaining it. Considering the findings of psychology, legislating consent will provide juridical information for legal certainty, contributing to the correct interpretation of unconsented sexual encounters. A solid legal basis, including the psychological reactions that disable consent, will increase victim protection and reporting, while contributing to reducing the emergence of gang rape cases.

CONCLUSION

While rape is not always a problem of miscommunication, and consent is still complicated to define within the spectrum of the law, the contribution of communicative acts and dialogic interactions is unprecedented in psychological research and in its impact on society. Psychology has already contributed to this issue through its previous research, notably Mead's symbolic interactionism and the communication among people based on consensual dialogue. In the same line, considering communicative actions and egalitarian dialogue for consenting sexual-affective engagement is certainly a pioneering contribution. Indeed, interactive power, beyond structural power, opens a new channel to understand situations for which the current definitions of consent have shown not to be broad enough to respond to current realities. The dialogic sessions we have had with relevant lawyers, police officers, gender specialists, educators, social workers and victims have outlined the clear social impact of this line of research. Day by day, society is more demanding and needs more answers to current problems. It is time to eradicate GBV.
In our duty to provide scientific knowledge in response to this claim, and to achieve the goal of contributing to preventing aggressive sexual contact from early ages, we suggest that people's relationships might be based on communicative acts, with consent established as a space in which dialogic interactions can be freely asked for and given.

AUTHOR CONTRIBUTIONS

All authors contributed equally to conceiving the presented idea and discussed it with members of other fields of study, gender specialists, police officers responsible for gender violence cases, and policy-makers implementing prevention and response mechanisms and working directly with survivors, and all have deeply debated the psychological perspective of consent, categorizing its legal frameworks and analyzing concrete contextual situations. RF contributed to the conceptualization of the reality of sexual consent and to developing the notions of interactive and institutional power in analyzing the context. GT contributed to developing the legal analysis and the social and legal advancement of the notion of consent. AV contributed to the formal analysis and discussion and to the writing of that part of the manuscript.
'Successful Ageing' in Practice: Reflections on Health, Activity and Normality in Old Age in Sweden

This article aims to contribute to the critical examination of the notions of health and activity, and to discuss how these cultural and social constructs have an impact on elderly people's lives. An ethnographic perspective gives fruitful inputs to explore how old people deal with the image of old age as one of decay and decline, while they simultaneously relate to the normative idea of so-called successful ageing. The focus is thus on how elderly people create meaning, and how they manage and make use of contradictory cultural beliefs that are both understood as normality: old age as a passive period of life involving decline and disease, and activity as an individual responsibility in order to stay healthy. The study sample is created with two different methods, qualitative interviews and two different questionnaires, and the majority of the respondents are 65+ years old. The article demonstrates the intersection between old age and a health-promoting active lifestyle. The notion of activity includes moral values, which shape the beliefs and narratives of being old. This forms part of the concept of self-care management, which in old age is also called successful ageing. The idea that activities are health promoting is the framework in which activities are performed, but significance and meaning are rather created from practice.
Introduction

'Exercise becomes more important in old age' is the headline of an article in the Swedish lifestyle magazine Hälsa (Health). The article stresses the importance of good nourishing food and physical exercise in old age, in view of the fact that 'ageing means vulnerability and frailty'. It finally makes the point that 'successful ageing is connected to high protein intake and regular exercise' (Hälsa 2011). 'Successful ageing' is a notion and ideal also used within gerontology, meaning wellbeing, health and an overall active engagement with life (Torres 1999). A similar term is 'active ageing', linked to wellbeing, independence and health, which derives from established gerontological theories (Venn & Arber 2011). Both concepts aim to empower older people to be active and independent, and to avoid the expected negative consequences of ageing, such as dependency and poor health. To be successful in old age is understood as being healthy and active, while what could be called unsuccessful ageing is associated with frailty, illness, loneliness and dependency on others (Gilleard & Higgs 2000; Hepworth 2000; Cruikshank 2003; Blaakilde 2007; Jönson & Larsson 2009).
The association of activity with health implies a perspective of power and normality that permeates late modernity. Thus, becoming old is more than a biological process. It also means that people are sorted into special social categories. Old people are 'the others' of modern society, who represent what the rest of the population does not want to be, but hopes all the same to become; namely old, with infirmities as well as a shrinking future. Categorisations of this kind are cultural constructs, and as such, they often say more about the values of the time we live in than about the actual conditions of age groups. Old people are not alone, of course, in being ascribed a type of alien status in society. Nevertheless, the very category of 'old' highlights and refers to various forms of disciplining and systems of control; it constitutes altogether a specific focal point that makes plain the state of tension between body, health and ageing on the one hand, and ideas about normality on the other (Foucault 1994). Activity could therefore be looked upon as a means to be normal and to lead a normal life. Good health requires an active, disciplined body; the individual is expected to strive towards being strong, fit and healthy (Lock & Scheper-Hughes 1996: 62; Lundin 2008). 1

There is a broad scholarly discussion on the paradigm of activity (Giddens 1991; Conrad 1994; Lupton 1995). However, in the field of elderly research this paradigm is seldom critically scrutinized. Nevertheless, some important studies address the notion of activity as a cultural and social construction. They include, for example, Susan Venn's and Sara Arber's (2011) discussion of how elderly people's views on and approaches to 'active ageing' are intricately linked to the bodily changes that arise from the ageing process. Moreover, Sandra Torres and Gunhild Hammarström (2006) contribute to the discussion by showing that the ageing process can either be regarded as biologically determined and natural, or as something that can be influenced and postponed by lifestyle. They demonstrate that old people may perceive the process of growing old either as a limitation that must be accepted, or as something that one should counteract (cf. Werntoft 2006).

Our overall aim is to contribute to the critical examination of the notion of activity and to discuss how this cultural and social construct has impact on elderly people's lives. As a development of the discussions that suggest that people relate to either one or the other concept, we assume that these approaches and concepts interact with each other. We are, thus, interested in how notions relate to practice, that is, the doing of ideas (Shove 2003). We argue for the necessity to examine the activity norm and its promoting of health from an ethnographic perspective that shows how it is rooted and manifested in individuals. We believe that field observations and in-depth interviews give fruitful inputs to explore how elderly people deal with the image of old age as one of decay and decline while they simultaneously relate to the normative idea of so-called successful ageing. The focus of the article is thus on how elderly people create meaning, manage and make use of what appears as contradictory cultural beliefs that are both
understood as normality: old age as a passive period of life marked by decline and disease, and activity as an individual responsibility in order to stay healthy. 2

In this article we lean towards critical cultural science. We are inspired by analyses, such as Lock's and Scheper-Hughes' (1996), which point out that power structures are connected to conceptions of the body (cf. Gilleard & Higgs 2000; Venn & Arber 2011). They argue that the perception of how this body of ours should be used occurs in the light of a moral mobilization in which people, as Nikolas Rose emphasises (1999), are expected to be responsible and take care of themselves. We have also found Stephen Katz (2000) useful, who argues that the concepts of activity and productivity are incorporated as key elements into older people's lives and into their stories of everyday life. Katz points out that even though older persons freely participate in various activities, they are aware of the correlation between activity and a larger ethical regime of self-disciplining in later life.

Methods

Our empirical data is collected in Sweden. The study sample is created with two different methods: qualitative interviews and two different questionnaires. Even though the methods differ, the same question themes and types of questions, concerning experiences of ageing and health in relation to everyday life, were used in the questionnaire Ageing and Health, LUF 227, and in the interviews. The aim of the questionnaire Biomedicine and Prioritizations in Health Care, LUF 214, was to cast light upon views of advanced medical treatments, i.e.
measures that are expensive and that bring to the fore questions about who in society should be given precedence. Using various processes of creating data can provide different perspectives and understandings (cf. Lundin & Idvall 2003). The interviews give access to deeper knowledge concerning each individual, whereas the questionnaires increase diversity by using a larger number of participants. Yet, both methods employ a micro-perspective to create an understanding of comprehensive cultural processes (cf. Kaijser and Öhlander 1999). Additional material that is used includes official government recommendations and reports like Prioritisations in Health Care (SOU 2001:1), as well as press coverage and other media reports. 3

Interviews

The interview study is part of a research program concerning elderly people and geriatric care, conducted by the Vårdal Institute. 4 Interviewees were contacted during their participation in an intervention study 5 connected to the overall research program. Those who were regarded as reluctant or as having difficulties participating in the intervention were not asked to participate in the interviews. Our study focuses on people's perceptions and experiences of ageing, health and activity. However, one has to consider that the intervention project may have facilitated the interviews by increasing the participants' reflections on the topic. We perceive this not as a negative element in the investigation, but rather as a way to open up for awareness and thoughtful responses.
6 The participants, six women and four men, were living in condominiums or rented flats in an attractive city district of Gothenburg, a large town in the west of Sweden. They were between 80 and 90 years old, and were not dependent on assistance in everyday life. The interviews were carried out in the respondents' homes, where they had lived most of their adult lives or to which they had moved after retirement. All the women, except one, were widows, while only one of the men was widowed. The others were still married, and their spouses sometimes participated spontaneously in parts of the conversation. We used a thematically structured interview guide as a point of departure for discussions of experiences and perceptions of ageing and health, and descriptions of everyday activities. The interviews lasted between forty-five minutes and three hours, and were recorded digitally. Afterwards they were transcribed verbatim. 7

Questionnaire

The questionnaire is constructed as a thematic open-ended questionnaire, where a group of respondents are asked to write down their answers: thoughts, opinions, memories and experiences of a certain subject (cf. Hagström and Marander Eklund 2005). The questionnaire is distributed to an existing pool of respondents attached to the Folk Life Archives at Lund University. These people fill out and respond to questionnaires sent to them on a regular basis (approximately twice a year).
8 The questions follow specific themes and the respondents decide which questions they want to answer. These permanent respondents have initially replied to an advertisement from the Folk Life Archives or have heard about it in other ways, for example through a friend. The only requirement is that you enjoy writing. Regarding the questionnaire Ageing and Health (LUF 227), 62 answers were received from respondents aged from 42 to 93, although the majority of the respondents (75 per cent) are 65 years and older. The majority live in the countryside or in smaller cities, primarily in the south of Sweden. Some receive assistance from community care or get help from relatives or neighbours to cope with certain daily chores. Furthermore, the answers to Biomedicine and Prioritizations in Health Care (LUF 214) were predominantly received from older people. Of a total of 61 respondents, 90 per cent were between 45 and 89 years old.

It is important to discuss and reflect upon the questions of the questionnaire (and of course upon the questions asked in the interviews). What does the researcher want to know? How can the questions be formulated in order to encourage the respondents to bring forth their own views and not what they think the researcher or the archives want to hear? Perhaps the questionnaire gives the opportunity to interpret the questions more freely, while the interview is more of a well-defined situation, accepted and initiated by both parties (cf. Kvale 1996). Nevertheless, both methods are ultimately about communication, which requires some level of mutual understanding (cf. Lundin & Idvall 2003:191).
To Deserve Health

The most common justification of activity is that it is healthy, at all ages (Cruikshank 2003:159pp). The activity motto in old age is put into words by a woman, aged 73, in the questionnaire LUF 227: 'don't stop doing things because you're growing old, because you'll only grow old if you stop doing things'. And the notion of growing old implies illness, isolation and dependence on others.

The idea seems to be that being healthy and in good health is not something people simply are, but something they must strive for, and deserve. Good health is described as a loan, which can be retained with the right genes and a correct lifestyle. An 83-year-old man writes, as a reaction to an on-going media debate on prioritizations in health care, in the daily newspaper Sydsvenska Dagbladet's letters to the editor:

All people have to prepare for old age by keeping themselves healthy as long as possible. I do gymnastics for 15 minutes a day and take an hour-long walk every evening [---]. I feel super and have never been ill, apart from a few injuries on the job. Society has to invest much more in fitness activities; it saves money in the long run. Geriatric care is miserable, people are kept locked up as if they were criminals. (Sydsvenska Dagbladet 23/04/2003) 9

Similarly, one of the interviewed men, aged 85, argues that staying healthy is something everyone should think about:

You don't think about your health as long as you enjoy good health. But when it begins to falter, you will understand what it means to be healthy. How foolish of people not to think about looking after themselves in order to stay healthy. It's possible I didn't consider that myself when I was younger. But my wife and I have done plenty of sports and been outdoors and we used to go skiing in the winter. That has made us stay healthy.
Later on during the interview, the man gives an account of his chronic diseases; he has a stomach disease and rheumatism. Recently, because of an eye disorder, he has undergone surgery. Clearly, there is more to good health than being free of illness and disease. Most of the respondents claim to be in good health, even those with relatively serious illnesses and disabilities. This suggests that good health involves more than being healthy; good health implies well-being on many different levels. As long as it is possible to adapt to the consequences of ill-health, and everyday life can continue without changing too much, there seems to be no reason to consider yourself ill or unhealthy. Everyday habits and routines are important for the experience of health. Poor health, on the other hand, is described as not being able to work and perform daily chores; i.e. not being able to be active.

Many respondents claim to be in good health in relation to their age; that is to say, despite their old age. Since ageing and old age are associated with poor health, the concepts of ageing and health are intrinsically interwoven and cannot be explained separately. Health and ageing are intimately linked together (cf. Alftberg 2010). The belief is that health deteriorates the older you get. The expression 'age is beginning to show' signifies that at a certain age, one should not be surprised by bodily decline and disability. It is difficult to describe ageing without using health as a reference; people talk about their ageing in terms of how they feel with reference to illness and ailments. Similarly, health can be described in age metaphors: 'on a bad day, I feel like a hundred years'. To be active is a sign of health and, if it concerns an elderly person, of being young for one's age. A male respondent of LUF 227, aged 72, illustrates this:

To my wife's dismay, I still climb on a ladder and wash the house, remove moss from the roof, fell trees or clear the brushwood from the common grove across
the street. Is that a sign of health or sheer stupidity? One fine day I may lie on the ground, bruised and broken, after falling off the ladder.

It seems that old age is considered a risk, regardless of health status. Climbing a ladder becomes unsafe, even for a healthy individual, because of the age of that person. Old age stands out as a period of increased risk of injuries, and that is something to be prepared for and to take responsibility for. Possibly, the wife mentioned in the quotation is taking that responsibility, trying to make her husband stay off the ladder. As shown by Arber and Ginn (1995), the traditional female care for the family lingers on, in our case articulated as male health being a female responsibility. This was illustrated in the interviews with the men who were married; often the wives spontaneously participated and developed the accounts of their husbands' health conditions (Alftberg 2008). 10

Another example of the notion of activity as a means of promoting health can be found in relation to people's views on health care, and the question of what should be prioritized in health care. Indeed, people's views on health care tell us about their values and what they deem to be 'normal'. As our study on Biomedicine and Prioritizations in Health Care (LUF 214) shows, people's way of life is important when reflecting on who should receive cost-intensive care (Lundin 2008). In our questionnaire, just over 40 per cent of those responding stated that older people should give younger people precedence in life-threatening illnesses, while 58 per cent demand that, regardless of age, people should take responsibility for their health in order to be considered for expensive treatments.
11 Thus, for example, a 73-year-old man thinks that 'a heavy smoker who does not intend to stop smoking should not receive treatment for lung cancer', and a 63-year-old woman says that 'if you don't want to contribute to your well-being and try to hold off lifestyle-related illnesses, then you shouldn't be surprised that resources and prioritisations have to be taken into consideration'. Another person who answered the questionnaire, the wife of a man who is on the waiting list for a new organ, says:

It disturbs us when he is terribly ill and we know there are people ahead of him on the waiting list - people having mistreated their bodies all their lives, while my husband was born with this disease, which he has been struggling with all his life.

The results of our questionnaires correspond to those of the researcher Elisabet Werntoft (2006). Her studies indicate that age is an important factor in prioritisations in Swedish medical care. At the same time, she emphasizes that 80 per cent of the old people who were consulted in her studies thought that factors like pain or way of life, for example, were more pressing to take into account than age. As Rose (1999) points out, the concept of health is permeated by a moral imperative stating that health is something one must work to obtain. It has to be earned!
The Making of an Active Life

An active lifestyle emerges as important, and is motivated by reasons of health and of postponing the ageing process. The empirical data exhibit different forms and descriptions of activity. The respondents give detailed accounts of associations and club activities, exercise, gardening, solving crosswords or simply being able to carry out everyday household chores without help. A common activity is walking, alone or together with a spouse or friends. When walking, a certain kind of stick is often used for support, the so-called Nordic walking poles. The stick has long been a symbol of old age, laden with notions of decreased mobility and inactivity (Odén 1994:9). Nordic walking poles are instead associated with exercise and movement, in line with the activity norm. In contrast to ordinary sticks or canes, Nordic walking poles provide a more youthful and sporty appearance. The poles are associated with physical fitness rather than impaired ability, and we argue that they create a different representation of old age, corresponding to the notion of activity (cf. Alftberg 2011).

Taking a walk is perceived as a healthy and sound activity. Still, it can be difficult to motivate yourself to do it. One of the interviewed women, aged 90, describes what usually happens when she is thinking of walking:

If I plan to take a walk, I might think: 'Should I be taking a walk now? Nah, I'll do that tomorrow instead. No, get yourself going now!' I wander around the house and discuss with myself: 'Go outside and take a walk! Nah…' Perhaps I start to do some housework: 'No, don't do that, you can do that when you come home! All right, all right!' Finally I get so tired of myself nagging: 'All right, I'll take a walk then!'
The woman explains that even when she is not in the mood for walking, she knows she needs the exercise in order to feel bright and cheery. In this way she is able to perform other activities she is more interested in. It appears that performing health-promoting activities is a responsibility that cannot be ignored, even in the face of lack of interest or dislike.

A finished working life is expected to change into an active retirement life (cf. Nilsson 2011). The respondents stress that they are living a normal life, which includes physical, mental and social activities. The only exception seems to be that more time is required; an interviewed 80-year-old woman describes herself as being 'not as nimble and quick as before'. But even though activities take more time, this is not considered a problem. The point is that you at least try to do them. It appears to be important to attempt to be active and independent, according to your own ability. But this also requires the right attitude or approach (cf. Torres & Hammarström 2006). This can be illustrated by quoting another of the interviewed women, aged 87, who talks of a friend of hers:

She's almost ninety years old, but she's alert and in her right senses. It's lovely, she's such a positive person too - because there are so many people who just grumble and complain. Darned, I get so tired of it. It won't help feeling sorry for yourself; one has to get out and about. Of course, some days I find it difficult, but you can't stay inside all day.

She goes on to tell how she keeps herself active on days when the weather is too bad for being outdoors. Since she lives a few floors up in a block of flats, she uses the stairwell for exercise. By going down to the front door, and then up again, and doing this every two hours, she gets the exercise she feels she needs. Another female friend of hers has impaired vision, but the interviewed woman suggests that her friend could at any rate keep herself active with audio books or by listening to music. The ideal of a
health-promoting, active lifestyle remains even in poor health. The attitude is essential. As mentioned in the quote above, feeling sorry for oneself is not an acceptable behaviour. An 86-year-old woman in LUF 227 also articulates this, when she describes how to age well:

I believe that mental training is as important as physical exercise. Reading, discussing, solving the crosswords and above all, spending time with your friends and not isolating yourself, as well as not feeling sorry for yourself that things are not the way they used to be.

What happens when an older person does not have the strength or desire to be active? Several of the respondents describe themselves as lazy when they have given up a regular activity. One of the interviewed men, aged 80, explains that he will not go out walking as much as he used to because he has become a little lazy. A woman in the questionnaire LUF 227 comments that, as a result of her indolence, her interest in doing sports has diminished. The fact that she is 81 years old and describes herself as overweight appears not to be significant to her. She could have used other explanations, but chooses to describe herself as idle.

Nevertheless, according to the respondents, the emphasis on activity may actually be overdone and result in impairing people's health. An interviewed woman, aged 87, explains that a friend of hers displays an unhealthy behaviour:

She's a bit restless, I think. [---] She wants to help and she'll be there to help each and everyone all the time. I think this is not good for her. It becomes stressful in the end, when she's expected to be here, and needs to be there, and ... She has a very nice cottage, then suddenly she plans to have a dinner party and cook all this food - I asked if she expected a crowd of people coming. The whole thing is somewhat restless.
Self-care could be described as keeping a balance between rest and activity. Too much activity causes too much stress, and stress causes illness. Too much activity implies restlessness, where restlessness could be seen as one end of a scale whose opposite end is inactivity. The middle of the scale is the normal, healthy point of activity. There therefore seems to be a difference between being active and being restless. Restlessness is an exaggeration of the amount of activity one does, and a sign that the responsibility of maintaining one's health is not taken seriously. Both inactivity and restlessness can be regarded as the antithesis of prevailing ideals, and may therefore possibly cause illness and disease (cf. Sontag 1990). The normative notion of activity creates meaning when activities are actually done, and the performance also shapes what is regarded as normal and what is regarded as deviant (cf. Shove 2003).

Good and Bad Activities

Normality in relation to the amount of activity discussed above also includes normality concerning the nature of activity, that is, what kind of activities you perform. All activities should primarily be beneficial to your health. This idea leads to frequent responses concerning physical utility, possible psychological values and certainly social benefits; the ultimate activity may be described as something that combines busyness with pleasure. Activity must not be entirely amusing; it has to be health-promoting and wholesome. Accordingly, it would be appropriate to speak about good activities and bad activities, ranking 'good' in the same category as 'normal' and 'bad' as 'deviating'. Being active, as we have discussed above, is connected to moral virtues such as responsibility and normality (cf. Katz 2000). People can be active in the right way as well as in the wrong way.
A female respondent in the questionnaire LUF 227, aged 70, frames a gender difference in relation to the proper manner of an active lifestyle:

I believe men age quicker than women, due to the fact that men are less active than women. Of course, there are active men, but many of them just sit in front of the television or lie on the sofa.

Several of the participants, primarily females, express the opinion that men appear to be less active than women. The experience is that older men are not found in social contexts such as clubs and associations as much as women are, even considering the difference in their average length of life. A common view is that women are expected to have a stronger social network than men; consequently, the significance and meaning of activity might differ between the sexes, and gender will affect the perception of 'normal' activity (cf. de Beauvoir 1977).

In the quotation above, watching TV or lying on the sofa are perceived as bad activities, or not as actual activities at all. We want to show how these occupations are culturally and morally loaded, giving an example from an interviewed 80-year-old woman. Lying in bed all day is described as a hazard, at least by the woman's son. The activity norm is challenged more when lying down than when sitting up. In Western historiography, there is a perception of a correlation between upright posture and moral virtues. Classical accounts of human evolution are illustrated with pictures of stooping apes gradually turning into humans standing straight with their heads high and bodies erect. Man's eventual achievement of upright posture is the foundation of culture and civilization, of moral height (Ingold 2004). Lying down could consequently be regarded as the opposite of being in possession of moral virtues. Perhaps the posture of the body becomes more significant in old age because of the image of old age as decay and decline, and the consequently higher risk of confinement in bed. An upright posture is also
considered a characteristic of a health-promoting active lifestyle.

It is not only the horizontal position that is a danger. With its associations with inactivity and passivity, the television is a moral hazard as well. Nevertheless, the woman quoted above claims to prefer quiz shows, since they give her the opportunity to learn something. No matter how much she enjoys lying in bed and watching TV, the pleasure and fun must be legitimized in terms of health. The quiz shows offer mental exercise, and she can learn from them. Activities that are performed for their own sake and represent their own goals, with the main emphasis on the emotional, aesthetic and sensual, are not regarded as healthy enough and are disguised in rational, instrumental explanations (Ronström 1998).

Are Activities Leisure or Work?

Not all activities have the same status, and some pursuits are not even considered to be activities, like watching TV as mentioned in the example above. But perhaps there is a question of ability and capacity that needs to be noticed. Depending on health and ability, watching television or going shopping may be described as important activities that account for the whole day. It is important to try to lead an active life, adapted to the current situation, which may involve impending illness, disabilities and ailments. Venn and Arber discuss similar attitudes concerning daytime sleep and old age. They state that attitudes and practices of 'active ageing' are intricately linked to the bodily changes that arise from the ageing process. The desire to be active later in life leads primarily to different attitudes to daytime sleep. Those who accepted daytime sleep did so in recognition of decreasing energy in old age, acknowledging that napping is beneficial in helping them maintain active lives. Those who resisted daytime sleep did so because time spent napping was regarded both as unproductive and as a negative marker of the ageing process (Venn & Arber 2011). We argue that
this means that old age actually transforms what an activity is considered to be. One example is an 81-year-old woman who puts her week schedule in writing in the questionnaire. The chores of everyday life, such as shopping in the supermarket on Fridays, are defined as important activities that require scheduling. One occupation per day can be enough to feel busy and useful. In addition, the schedule describes Wednesday as 'day off', and Saturday is labelled 'nothing'. The notion of activity looks like a form of work to be done, which explains the desire for a day off. In retirement, wage work is replaced by another kind of work called activities. Hence, there can be time off from 'leisure time' in retirement, if retirement is defined in terms of activities.

A schedule maps out time for work and time for leisure. The notions of time and work are related. They are both fundamental Western metaphors that we use and live by, according to George Lakoff and Mark Johnson. Both concepts are perceived as resources; something that can be measured, used and saved. The connection between time and work has consequences for the comprehension of non-work, or leisure. Leisure becomes part of the same metaphorical thinking, and is understood as something to use, spend, save, waste or lose (Lakoff & Johnson 2003). Activities in old age can be said to take on the form of work, health work, in order to age successfully; to be healthy and active, to fulfil oneself and not become a burden on society (cf. Ronström 1998).
As was mentioned in the introduction, Venn and Arber (2011) suggest that the notion of activity is incorporated into the lives of older people. Even when freely participating in a wide range of new and continuing activities, older persons are aware of the correlation between activity and the imposing overall structure of self-disciplining in later life (cf. Katz 2000). We would like to add that the notion of activity results in the transformation of the meanings of occupations and activities in old age. Solving the crossword changes from an easy-going and pleasant occupation to a health-promoting activity, just as everyday chores and pursuits develop into scheduled labour.

Successful Ageing in Practice

This article has examined how elderly people manage and make use of two contradictory cultural beliefs that are both understood as normality: old age as a period of life characterized by disease, and activity as an individual responsibility in order to counter a declining ageing process. As pointed out by Katz (2000), activity is a conceptual and ethical keyword that shapes our understanding of later life. Activity must be considered part of a larger disciplinary discourse in the management of everyday life and as 'the hallmark of responsible living' (p. 144). The lifestyle magazine Health, introduced at the start of this article, is one among many culturally and morally loaded voices that stress the importance of 'successful ageing'. They function, in the words of Rose (1999:74), as a kind of technology for making people responsible.

However, as our empirical data show, the attempt to be active sometimes appears to be more important than the activity itself. This means that the proper attitude or state of mind is as central as the actual performance of health-promoting activities in order to postpone ageing (cf. Lock & Scheper-Hughes 1996).
12 Our material shows that activity can be understood in terms of good or bad activities, and that some pursuits are not considered to be activities at all. The concept of activity includes moral values, which form the beliefs and narratives of being old (Katz 2000). Still, depending on health status, watching TV or phoning a friend can be experienced as healthy and useful activities.

It appears that activity does not only mean physical exercise, but mental and social exercise as well (cf. Gunnarsson 2009). Activity also has a connection to independence; by including everyday chores as activity, people demonstrate the will and capacity to cope on their own. Our ethnographic data show that individuals assume that leading an active life demands effort, and that good health should be deserved. Nevertheless, they agree that such activities should not be exaggerated. In order for activities to be healthy, they need to be carried out in a balanced manner - neither too much nor too little. Furthermore, it is important to emphasize that as one gets older, the meaning attached to activities is transformed. Easy-going occupations, in essence done for amusement and enjoyment, are not considered to be sufficiently healthy. They are therefore described and defined as useful and salutary. Likewise, everyday chores and recreational activities change into health work, becoming part of the practice of successful ageing.
We have demonstrated the intersection between old age and a health-promoting active lifestyle. This forms part of the concept of self-care management, which in old age is also called successful ageing. The idea that activities are health-promoting is the framework in which activities are performed, but significance and meaning are rather created from practice. When activities are made a regular part of everyday life, normative routines are created. As we have shown, carrying out activities produces normality just as much as the normative notion of activity generates the performance of activity. We argue, in accordance with Elizabeth Shove (2003), that dominant beliefs and rhetoric in regard to a particular phenomenon set the scene for specific actions, but it is practice that gives power to these ideas and concepts. Meanings are created primarily through practice and action (Shove 2003:191).

***

The process of ageing is full of contradictions and paradoxes (Jönsson & Lundin 2007). People want long lives, but do not want to get older, or rather: they want to grow old in a very special way. Through strategies such as conscious food choices and physical and mental training, many are seeking a life in which the marks of old age are kept away. It is about ageing in the 'right' way. Or, in Margaret Lock's and Nancy Scheper-Hughes's (1996) terms, about becoming politically correct bodies. That is, bodies reflecting both a biological age and society's normative expectation of personal responsibility. Describing health from a perspective of power helps reveal how health in modern society increasingly signifies normality. Health stands out as a guardian of norms and values, as well as a point of reference. The ideas of health and activity create a framework for how ageing is defined and looked upon. Ageing is interpreted through these concepts, which affect the experience of growing old as well as the organization of everyday life. There are a number of discussions that define these processes
in terms of 'ageism', an analytic concept used to describe discrimination based on people's age (Butler 1975). We have chosen not to employ the concept of ageism.

2 This contradiction is apparent. At a deeper level, these beliefs have the same starting point; the expected decline in old age stresses the importance of health-promoting activities even more. The anticipated decay thus acts as a reinforcement of the notion of activity.

3 In the last few years, there has been repeated coverage in Swedish media about the rights of old people. In articles as well as letters to the editor there have been discussions of neglect or mismanagement of in-home services and homes designed for the elderly, or protests that sick old people do not have access to care.

4 Vårdalinstitutet, the Swedish Institute for Health Sciences, is a national environment for research and development in the field of health care and social service, in close cooperation with the universities and the health care principals. This article, as well as Alftberg's dissertation project, is part of the Vårdal Institute's research program concerning elderly people and geriatric care. (http://www.vardalinstitutet.net)

5 The intervention project is a health-promoting and preventive intervention aimed at preventing functional disability and restriction of activity.

10 For a discussion on gender and ageing, see e.g. Arber and Ginn 1995, Arber, Davidson and Ginn 2003, Calasanti and King 2005.

11 The questionnaire responses have been processed with SPSS.

12 The Swedish Welfare State has a long tradition of cultivating an ideal of conscientiousness, which relates to modern society's increased emphasis on the individual's own responsibility (Hirdman 1992; Ambjörnsson 1993).
Åsa Alftberg is an ethnologist, PhD, in the Department of Arts and Cultural Sciences, Lund University. Her research concerns old age and how elderly people interpret and understand ageing, body and health in relation to cultural norms and beliefs. This reflects a wider interest in how materiality, bodies and objects are involved in creating meaning in everyday life. E-mail: asa.Alftberg@kultur.lu.se

Susanne Lundin is professor of ethnology in the Department of Arts and Cultural Sciences, Lund University. Her main research areas are cultural analyses of medical praxis with regard to new regenerative medicine such as IVF, stem cell research, and transplantations. She has published a number of essays and books on these subjects, including Gene Technology and Economy, co-authored with Lynn Åkesson (2002); "Organ Economy: Organ Trafficking in Moldova and Israel," in Public Understanding of Science (2012); and The Atomized Body: The Cultural Life of Stem Cells, Genes and Neurons, co-authored with Max Liljefors and Andréa Wiszmeg (2012). E-mail: Susanne.Lundin@kultur.lu.se

Notes

6 Before starting the interview field work, the project underwent an ethical review by the Regional Ethical Review Board of Gothenburg University, Sweden.

7 Files and transcripts are currently kept by Åsa Alftberg and will later be kept at the Folk Life Archives at Lund University.

8 The questionnaires for this study, Biomedicin och prioriteringar i vården [Biomedicine and Prioritizations in Health Care] LUF 214, and Åldrande och hälsa [Ageing and Health] LUF 227, were designed by Åsa Alftberg, Susanne Lundin and Charlotte Hagström at the Folk Life Archives at Lund University. (http://www.lu.se/folklivsarkivet)

9 All quotations are translated by the authors.
Deficiency of Chemokine Receptor CCR1 Causes Osteopenia Due to Impaired Functions of Osteoclasts and Osteoblasts*

Chemokines are characterized by the homing activity of leukocytes to targeted inflammation sites. Recent research indicates that chemokines play more divergent roles in various phases of pathogenesis as well as immune reactions. The chemokine receptor, CCR1, and its ligands are thought to be involved in inflammatory bone destruction, but their physiological roles in bone metabolism in vivo have not yet been elucidated. In the present study, we investigated the roles of CCR1 in bone metabolism using CCR1-deficient mice. Ccr1−/− mice have fewer and thinner trabecular bones and low mineral bone density in cancellous bones. The lack of CCR1 affects the differentiation and function of osteoblasts. Runx2, Atf4, Osteopontin, and Osteonectin were significantly up-regulated in Ccr1−/− mice despite sustained expression of Osterix and reduced expression of Osteocalcin, suggesting a lower potential for differentiation into mature osteoblasts. In addition, mineralized nodule formation was markedly disrupted in cultured osteoblastic cells isolated from Ccr1−/− mice. Osteoclastogenesis induced from cultured Ccr1−/− bone marrow cells yielded fewer and smaller osteoclasts due to the abrogated cell fusion. Ccr1−/− osteoclasts exerted no osteolytic activity concomitant with reduced expressions of Rank and its downstream targets, implying that the defective osteoclastogenesis is involved in the bone phenotype in Ccr1−/− mice. The co-culture of wild-type osteoclast precursors with Ccr1−/− osteoblasts failed to facilitate osteoclastogenesis. This finding is most likely due to a reduction in Rankl expression. These observations suggest that the axis of CCR1 and its ligands is likely to be involved in cross-talk between osteoclasts and osteoblasts by modulating the RANK-RANKL-mediated interaction.
Several reports have suggested that CCL3 is also produced by myeloma cells and directly stimulates bone destruction in myeloma-related bone diseases (5-7). These findings indicate a possible role for CCL3 as a crucial chemokine for osteoclast function. Several antagonists of the receptors for CCL3, such as the CCR1-specific (BX471) and CCR5-specific (TAK779) blockers, have been tested as drug candidates for the treatment of patients with rheumatoid arthritis-associated bone destruction and multiple myeloma (4,8). The chemokine CCL9 (also called MIP-1γ) is abundantly produced by various myeloid lineage-derived cells, including osteoclasts (9), and activates osteoclastogenesis through its receptor, CCR1 (10-12). However, the exact physiological functions of CCR1 and its related chemokines in bone remodeling are still not fully characterized (12,13). A recent study using an ovariectomy-induced bone loss model found that the chemokine receptor CCR2 was associated with postmenopausal bone loss (14), but there are few reports on bone phenotypes in other chemokine receptor-deficient mouse models. In the present study, we demonstrated that the osteopenia in Ccr1−/− mice appeared to be due to impaired osteoclast and osteoblast function. Our data also uncovered a possible role for CCR1 and its related ligands in the communication between osteoclasts and osteoblasts.

Osteoclast and Osteoblastic Cell Culture-Mouse bone marrow cells cultured in α-minimal essential medium were used as sources of osteoclastic and osteoblastic cell cultures. The non-adherent cells were collected for bone marrow-derived macrophage and pre-osteoclast induction, and adherent bone marrow-derived mesenchymal stromal cells were collected for osteoblast induction. Bone marrow-derived macrophages were induced with 10 ng/ml M-CSF for an additional 10 days.
To generate pre-osteoclasts, non-adherent cells were passed through a column filled with Sephadex G-10 microspheres (Amersham Biosciences) and were then cultured with 10 ng/ml M-CSF and 20 ng/ml RANKL for 4 days. Mature osteoclasts were induced from pre-osteoclasts by culturing for an additional 14 days with M-CSF and RANKL. The culture media were replaced every 3 days. TRAP activity in the osteoclasts was determined by staining with an acid phosphatase leukocyte staining kit (Sigma). Contamination with stromal/osteoblastic cells was monitored by Q-PCR analysis, as a low expression level of the Osteoprotegerin gene indicates stromal/osteoblastic cells. Osteoblastic differentiation of adherent bone marrow mesenchymal stromal cells was induced by culture in α-minimal essential medium containing 10% FBS, 200 μM ascorbic acid, 10 mM β-glycerophosphate, and 10 nM dexamethasone (16). The culture media were replaced once every 3 days in the presence or absence of chemokine-neutralizing antibodies. The cells were fixed with 4% paraformaldehyde and stained for alkaline phosphatase with naphthol AS-MX phosphate plus Fast Blue BB (Sigma) and for minerals with alizarin red. Mineral deposition was alternatively identified by von Kossa staining (Polysciences, Inc., Warrington, PA), and the mineralized areas were measured using an ArrayScan VTI HCS analyzer (Beckman Coulter). Co-culture experiments with osteoclast precursors and osteoblasts were performed by inoculating bone marrow-derived precursors (1 × 10^5 cells/well) onto layers of osteoblastic cells that had been cultured for 21 days in osteoblast-inducing media in 24-well plates. Thereafter, these cells were co-cultured for 7 days in α-minimal essential medium supplemented with 10% FBS and 10 μg/ml vitamin D3. To assess bone resorption activity, these co-culture studies were also conducted on bone slices.
After fixation of the cells with 2.5% glutaraldehyde/1.6% paraformaldehyde in 0.1 M cacodylic acid (pH 7.4), the bone slices were briefly rinsed and completely dehydrated in an ascending ethanol series and liquid carbon dioxide. The samples were coated with an ultrafine titanium oxide powder and observed under a scanning electron microscope.

Immunohistochemical Staining-For the immunohistochemical staining analyses, osteoclasts were fixed with 4% paraformaldehyde, permeabilized, and stained with the indicated specific antibodies, followed by Alexa594-conjugated secondary antibodies and Alexa488-labeled phalloidin (Molecular Probes). Osteoclasts with multiple nuclei (>3) were quantified. Images were captured using an IX-81 fluorescence microscope equipped with a DSU confocal unit (Olympus, Japan) and were analyzed with the MetaMorph software program (Universal Imaging, Molecular Devices, Sunnyvale, CA). The formation of osteoclasts was quantified by capturing and analyzing images using the ImageJ software program (National Institutes of Health, Bethesda, MD) based on TRAP staining of 25 randomly chosen fields in each well.

Real-time PCR Analysis-Total cellular RNA from osteoclasts, osteoblasts, and bone tissues (proximal tibia after the bone marrow flush and removal of the metaphysial regions) was isolated using the RNeasy kit (Qiagen, Valencia, CA). The total RNA was then reverse-transcribed into cDNA using the Superscript III RT kit (Invitrogen). Real-time quantitative PCR analyses were performed using the ABI 7700 sequence detector system with SYBR Green (Applied Biosystems, Foster City, CA). The sequences were amplified for 40 cycles under the following conditions: denaturation at 95 °C for 15 s, annealing at 60 °C for 30 s, and extension at 72 °C for 45 s, with primers for the chemokine receptors as previously reported (17).
Gene expression levels were normalized to Gapdh expression by the 2^−ΔCt method.

Measurement of Cytokines and Chemokines-Secreted CCL5 and CCL9 levels were determined by ELISA using the antibodies MAB4781 and BAF478 (R&D Systems) and MAB463 and BAF463 (R&D Systems), respectively. The reaction intensities were determined using HRP-conjugated streptavidin (Chemicon). Cytokine production levels were quantified with a mouse 23-plex multiple cytokine detection system (Bio-Rad Corp., Hercules, CA) according to the manufacturer's instructions.

Microcomputed Tomography and Peripheral Quantitative Computed Tomography-Micro-computed tomography (microCT) scanning was performed on proximal tibiae with a CT-40 scanner (SCANCO Medical AG) at a resolution of 12 μm, and the microstructural parameters were calculated three-dimensionally as previously described (18). Bone scores were measured by peripheral quantitative CT using the XCT Research SA+ system (Stratec Medizintechnik GmbH, Pforzheim, Germany). The bone scores and density were measured and analyzed 1.2 mm below the epiphyseal plate of the distal femora. The scores were defined according to the American Society for Bone and Mineral Research standards.

Bone Histomorphometry-The unilateral proximal tibiae, fixed with ethanol, were embedded in glycol methacrylate, and the blocks were cut into 5-μm-thick sections. The structural parameters were analyzed at the secondary spongiosa. For the assessment of dynamic histomorphometric indices, calcein (20 mg/kg body weight) was injected twice (at a 72-h interval) into wild-type and Ccr1-deficient mice. The sections were stained with toluidine blue and analyzed using a semi-automated system (Osteoplan II, Zeiss). The nomenclature, symbols, and units used in the present study are those recommended by the Nomenclature Committee of the American Society for Bone and Mineral Research (19).
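The relative quantification used throughout the expression analyses (normalization to Gapdh by the 2^−ΔCt method) and the mean ± S.E. summaries reported in the figure legends can be sketched as follows. This is a minimal illustration, not the authors' analysis code; the Ct values below are hypothetical.

```python
import math

def two_power_neg_delta_ct(ct_target, ct_reference):
    """Relative expression by the 2^-dCt method:
    expression = 2 ** -(Ct_target - Ct_reference).
    Ct is the PCR cycle at which fluorescence crosses the threshold,
    so a lower Ct corresponds to a more abundant transcript."""
    return [2.0 ** -(t, ) if False else 2.0 ** -(t - r)
            for t, r in zip(ct_target, ct_reference)]

def mean_se(values):
    """Mean and standard error of the mean (reported as mean +/- S.E.)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)

# Hypothetical Ct values for one marker and for Gapdh in three
# replicate samples -- illustrative only, not data from the paper.
ct_marker = [24.0, 24.5, 23.8]
ct_gapdh = [18.0, 18.2, 17.9]

relative = two_power_neg_delta_ct(ct_marker, ct_gapdh)
m, se = mean_se(relative)
```

A six-cycle difference between target and reference corresponds to a 2^−6 (about 64-fold lower) relative expression level.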
Measurement of TRAP, BALP, and Collagen-type I N-telopeptides (NTx)-Tartrate-resistant acid phosphatase (TRAP5b) in serum and culture supernatants was measured with the mouse TRAP EIA assay kit (Immunodiagnostic Systems, Fountain Hills, AZ). In brief, the culture supernatant or diluted serum was applied to an anti-TRAP5b-coated microplate according to the manufacturer's instructions. The enzymatic activities of bound TRAP were determined with chromogenic substrates. Bone-specific alkaline phosphatase (BALP) levels were measured using the mouse BALP ELISA kit (Cusabio Biotech Co. Ltd., Wilmington, DE). Collagen-type I NTx were measured by ELISA (SRL, Tokyo).

Statistics-Data are presented as the mean ± S.E. for the indicated number of independent experiments. Statistical significance was determined with a post hoc test of one-factor factorial analysis of variance (Figs. 3E, 6D, 7B, and 7C) or the Wilcoxon Mann-Whitney U test (non-parametric analysis). Differences with a p value of <0.05 were considered statistically significant (* and # indicate up-regulation and down-regulation, respectively; NS indicates not significant).

CCR1-deficient Mice Exhibit Osteopenia-To understand the functions of CCR1 in bone metabolism, we investigated bone mineral density in Ccr1−/− mice. A peripheral quantitative CT analysis showed a significant reduction in bone mineral density in cancellous bone in Ccr1−/− mice compared with wild-type mice (Fig. 1A). There were no significant differences in cortical bone mineral density at the metaphysial (Fig. 1A) or diaphysial regions (data not shown) between Ccr1-deficient and wild-type mice. In Ccr1−/− mice, a microCT analysis indicated decreased cancellous bone tissue at the metaphysial region (Fig. 1B). Bone histomorphometric analysis confirmed a significant decrease in bone volume (BV/TV) at the metaphysial region of Ccr1−/− mice.
This was associated with a diminished number of trabeculae (Tb.N), increased trabecular bone separation (Tb.Sp), and no significant change in trabecular bone thickness (Tb.Th), indicating that Ccr1-deficient mice have sparse trabeculae (Fig. 1C). We examined the effect of Ccr1 deficiency on the function of osteoblasts and osteoclasts by bone morphometry (Fig. 1, D-F). The morphological analyses revealed that Ccr1−/− mice have a significantly reduced osteoblast surface (Ob.S./BS) (Fig. 1F). Ccr1−/− mice exhibited extremely low values of osteoid surface (OS/BS) and osteoid volume (OV/BV) compared with wild-type mice (Fig. 1D). Notably, Ccr1−/− mice showed significant decreases in the mineral apposition rate (MAR), mineralized surface (MS/BS), and bone formation rate (BFR/BS) (Fig. 1D), which were calculated based on calcein administration (representative pictures are shown in Fig. 1E). In addition, the number of osteocytes per area was significantly reduced in Ccr1−/− mice (Fig. 1G). These results indicate that Ccr1−/− mice have impaired bone formation. Fig. 1F summarizes the bone morphometric parameters associated with bone resorption. Ccr1−/− mice have significantly decreased osteoclast numbers (N.Oc./B.Pm), osteoclast surface (Oc.S./BS), and eroded surface (ES/BS). These findings indicate that Ccr1−/− mice have diminished osteoclast function. Taken together, the morphometric analyses suggest that Ccr1-deficient mice exhibit osteopenia with low bone turnover, most likely due to the diminished function of osteoblasts and osteoclasts.

Impaired Osteogenesis and Osteoclastogenesis in the Bone Tissue of Ccr1-deficient Mice-To elucidate the status of osteoblasts and osteoclasts in the bones of Ccr1−/− mice, we compared the transcriptional levels of osteoclast- and osteoblast-related markers in the proximal tibiae of wild-type and Ccr1−/− mice.
The analyses of osteoblast-related markers, including bone-specific transcription factors (Runx2, Atf4, and Osterix) (23-25) and bone matrix proteins (Collagen1a1, Osteonectin, Osteopontin, and Osteocalcin), revealed that the expression levels of Runx2 and Atf4 were dramatically up-regulated in Ccr1−/− mice compared with wild-type mice (Fig. 2A). However, there was no significant change in the expression level of Osterix. Early markers of osteoblast differentiation, including Collagen1a1, Osteonectin, and Osteopontin, were significantly up-regulated. Osteocalcin expression, a marker of mature osteoblasts, was significantly down-regulated in Ccr1−/− mice. These results suggest that osteoblasts in Ccr1-deficient mice are retained in an immature state due to the overexpression of Runx2 and Atf4, which is also consistent with the significant reduction in the number of osteocytes in Ccr1−/− mice. Constitutive Runx2 overexpression in osteoblasts results in maturation arrest and in a reduced number of osteocytes (25). The serum levels of BALP in Ccr1-deficient mice were significantly decreased (Fig. 2C). The analysis of markers related to osteoclast differentiation revealed attenuated transcription levels of TRAP5b and cathepsin K in Ccr1−/− mice (Fig. 2B). In addition, Ccr1−/− mice exhibited significantly decreased levels of serum TRAP (26) and collagen-type I NTx (27,28) (Fig. 2C). This finding is consistent with diminished osteoclastic bone resorption in Ccr1−/− mice. These observations led us to assess the RANK-RANKL axis, a key signaling pathway in osteoblast-osteoclast interactions that regulates osteoclast differentiation and function. Interestingly, the analyses revealed that both Rank and Rankl were down-regulated (Fig. 2D), implying that CCR1 is involved in the regulation of the RANK-RANKL axis.
Considering that Ccr1−/− mice exhibit osteopenia with low bone turnover, these bone cell marker expression levels suggest that CCR1 is heavily involved in the differentiation and function of osteoblasts and osteoclasts, as well as in the cellular interactions between these cell types.

CCR1 Signaling Is Important in the Maturation and Function of Osteoblasts-To further corroborate the necessity of CCR1 for osteoblast maturation and function, we examined the formation of mineralized nodules in vitro by osteoblastic cells isolated from the bone marrow of wild-type and Ccr1−/− mice. Mineralized nodule formation by osteoblastic cells isolated from Ccr1−/− mice was markedly abrogated compared with wild-type osteoblastic cells (Fig. 3A). We next investigated the time-course expression profiles of osteoblastic markers in this in vitro culture system and compared them between wild-type and Ccr1−/− mice (Fig. 3B). In wild-type mice, Runx2 exhibited the highest levels of expression at day 14 but was drastically down-regulated at day 21, during the mineralization stage. However, an inverse Runx2 expression pattern was observed in CCR1-deficient osteoblastic cells, in which the levels of expression were markedly suppressed in the early stages (days 0 and 14) and then significantly up-regulated at day 21, reaching the levels present in wild-type mice.

Figure 2 legend (in part): Data are expressed as the copy numbers of these markers normalized to Gapdh expression (mean ± S.E., n = 8). In C, the levels of serum BALP, TRAP, and serum collagen-type I N-telopeptides (NTx) were measured by ELISA. The bars indicate the mean ± S.E. Each sample was duplicated. Wild-type and Ccr1−/− male mice at 9 weeks of age (n = 10 and 6, respectively) were subjected to BALP and TRAP. Wild-type and Ccr1−/− male mice at 9-13 weeks of age (n = 8 and 6, respectively) were assayed for NTx. #, significantly different from wild-type controls, p < 0.05. N.D., not detected.
Osterix expression was highly up-regulated at day 21 in wild-type mice, whereas its expression in CCR1-deficient osteoblastic cells was sustained at an intermediate level between the lowest and highest levels in wild-type mice, overall resulting in a lower expression level than in wild-type mice at day 21. These inverted expression patterns were also consistently observed, especially at day 21, for other osteoblastic markers, including Atf4, Collagen1a1, Osteonectin, Osteopontin, and Osteocalcin. Similarly, the expression pattern of ATF4 was confirmed by Western blot analysis (Fig. 3C). These observations indicate that CCR1 deficiency severely affects the temporal expression of osteoblastic markers, resulting in impaired differentiation and maturation of osteoblasts. Because CCR1 signaling is activated by several cross-reactive chemokines (CCL4, CCL5, CCL9, and CCL11), we next compared the levels of these chemokines in wild-type and CCR1-deficient osteoblastic cells. We observed significantly diminished expression levels of these chemokines in CCR1-deficient osteoblastic cells (Fig. 3D). Testing the effects of neutralizing antibodies against various chemokines, including CCR1 ligands, revealed the role of each chemokine in mineralized nodule formation by osteoblastic cells. Neutralizing antibodies against CCL4, CCL5, CCL9, and CCL11 significantly reduced the number of mineralized nodules formed by osteoblastic cells, although the antibodies against CCL2 and CCL3 did not inhibit nodule formation completely (Fig. 3E). Pertussis toxin (PTX), an inhibitor of the Gi protein-coupled receptor signaling involved in chemokine signaling, inhibited mineralized nodule formation in a dose-dependent manner. In further support of these findings, we observed similar temporal changes in the transcriptional levels of osteoblastic markers in wild-type osteoblastic cultures treated with an anti-CCL9 antibody, compared with Ccr1−/− osteoblastic cells (supplemental Fig. 2).
These results suggest that CCR1 signaling mediated by its ligands (CCL4, CCL5, CCL9, and CCL11) plays an essential role in mineralized nodule formation.

Figure 3. Impaired mineralized nodule formation in CCR1-deficient osteoblastic cells. In A, osteoblastic cells were cultured from the bone marrow of wild-type and Ccr1−/− mice, and then minerals were stained with alizarin red and BALP with chromogenic reagents (shown in "blue") (magnification ×100, left). Mineral deposition was determined by von Kossa staining (n = 6, right). In B, total RNAs were isolated from osteoblastic cells of wild-type (open circles) and Ccr1−/− mice (filled circles). Real-time Q-PCR analyses examined the relative expression levels of osteoblast-related transcription factor mRNAs (Runx2, Osterix, and Atf4) and osteoblast-related marker mRNAs (Osteonectin, Osteopontin, Osteocalcin, and Collagen1a1). Data are expressed as the copy numbers of these markers normalized to Gapdh expression (mean ± S.E., n = 8). In C, the protein expression levels of the transcription factor ATF4 in wild-type and Ccr1−/− osteoblastic cells were measured by Western blot analysis. Osteoblast lysates (10 μg of protein per lane) were loaded and separated by SDS-PAGE. The expression levels of ATF4 were normalized to GAPDH expression. In D, the production of CCR1-related chemokine ligands in the culture media of wild-type and Ccr1−/− osteoblastic cells was measured by ELISA (n = 5). #, significantly different from wild-type controls, p < 0.05. In E, osteoblastic cells were cultured with the indicated neutralizing antibodies against chemokines. The mineral deposition rate was measured by von Kossa staining (n = 4). Stained cells cultured with control rat IgG were set as 100%. #, significantly different between different concentrations of each antibody, p < 0.05. PTX, pertussis toxin.
Lack of the Chemokine Receptor CCR1 Causes Impaired Osteoclast Differentiation and Bone-resorbing Activity-To elucidate the roles of CCR1 in osteoclast differentiation, we analyzed the differentiation potency of osteoclast precursors derived from Ccr1−/− mice (Fig. 4A). Osteoclast precursors from Ccr1-deficient mice showed markedly abrogated multinucleation with defective actin ring formation (Fig. 4A, yellow arrows), in contrast to precursors from wild-type mice, which generated large numbers of osteoclasts with multinucleation and well-organized actin ring formation at the cell periphery. The histograms of osteoclast area and number of nuclei per cell, as well as the TRAP-positive areas, reveal impaired cellular fusion and differentiation in Ccr1-deficient osteoclasts (Fig. 4B). We further investigated the bone resorption activity of Ccr1-deficient osteoclasts (Fig. 4C). Few resorption pits were formed by Ccr1−/− osteoclasts on scanning electron microscopic examination, in contrast to the obvious resorption pits with well-digested collagen fibers produced by wild-type osteoclasts. This observation was also confirmed by collagen zymography, demonstrating that Ccr1−/− osteoclasts failed to digest type I collagen (Fig. 4D). Furthermore, the transcriptional levels of osteoclastic differentiation markers were investigated in the osteoclast culture system. Rank and its downstream target Nfatc1, as well as other markers such as c-fos, Trap, CathepsinK, Atp6v0d2, integrin αV, and integrin β3, were markedly down-regulated in Ccr1-deficient cells, whereas S1P1 and Irf-8 were up-regulated (Fig. 5A). We next examined whether the down-regulation of RANK expression in vivo (see Fig. 2D) and in vitro (Fig. 5A) directly correlated with a reduction in RANK-expressing osteoclast precursors.
The cellular profiling of osteoclast precursors by flow cytometry revealed that Ccr1−/− mice had lower numbers of CD45+CD11b+CD115+ myeloid-lineage precursors than wild-type mice (Fig. 5B). In addition, among the subpopulations of osteoclast precursors, which are categorized as CD11b-high (R1) and CD11b-low (R2), the R2 subpopulation was markedly reduced in CCR1-deficient cells. Because the R1 and R2 subpopulations reportedly express higher and lower levels of RANK, respectively (29), the reduction in the R2 subpopulation likely contributed to the reduced expression of osteoclast markers in CCR1-deficient osteoclastic cells. Importantly, our observation is also consistent with previous work reporting that RANK-low precursors are required for cellular fusion (29).

Figure 4. Essential roles of CCR1 in multinucleation and bone-resorbing activity. Pre-osteoclastic cells were cultured from the bone marrow of wild-type and Ccr1−/− mice. Osteoclasts were induced from the pre-osteoclastic cells by M-CSF and RANKL treatment. In A, the formation of multinuclear osteoclasts by wild-type and Ccr1−/− precursors was visualized by TRAP chromogenic staining (magnification ×400, upper panels). Immunohistochemical staining was carried out using an anti-cathepsin K antibody conjugated with Alexa594 (red). F-actin and nuclei were counterstained with phalloidin-AlexaFluor 488 (green) and Hoechst 33258 (blue), respectively (magnification ×640, bottom panels). The yellow arrow indicates multinuclear giant cells with impaired actin ring rearrangement, and the red arrows indicate TRAP accumulation. In B, histograms of the area distribution of multinuclear osteoclasts delimited with phalloidin and of the number of multinuclear osteoclasts in A. The area comprises TRAP-positive multinuclear (>3 nuclei) giant cells shown in A (mean ± S.E., n = 3). In C, pit formation by wild-type and Ccr1−/− osteoclasts on bone slices observed by scanning electron microscopy (magnification ×1000 (top) and ×6000 (bottom), respectively). In D, collagen digestion activity of wild-type and Ccr1−/− osteoclasts measured by collagen-based zymography. Lanes M, 1, 2-3, and 4-5 contain the molecular markers, bone marrow-derived macrophage lysates (10 μg of protein/lane), wild-type osteoclast lysates (1 and 10 μg of protein/lane), and Ccr1−/− osteoclast lysates (1 and 10 μg of protein/lane), respectively.

CCR1 Signaling Is Involved in Osteoclast Differentiation-To further explore the role of CCR1 signaling in osteoclast differentiation, we next examined the expression levels of chemokine receptors during osteoclastogenesis using an in vitro culture system. CCR1 was expressed throughout osteoclastogenesis, with the highest levels of expression at day 4 of culture (10-12), whereas the chemokine receptor CCR2 was gradually down-regulated during this culture period (30) (Fig. 6A). Immunohistochemical staining revealed that CCR1 was highly expressed on multinuclear osteoclasts (supplemental Fig. 3). The expression profiles of CCR ligands in this in vitro osteoclast culture system revealed that ligands specific for CCR1, such as Ccl5 and Ccl9, had relatively higher levels of expression than other ligands and appeared to be regulated depending on the maturation stage of the osteoclasts. Ccl5 was preferentially expressed at day 4, the stage of mononuclear pre-osteoclasts, whereas multinuclear osteoclasts predominantly produced Ccl9 at later times (Fig. 6B). These regulated transcriptional patterns of Ccl5 and Ccl9 were also confirmed by analysis of protein expression levels in the culture media (Fig. 6C). These observations suggested that the interaction between CCR1 and its ligands CCL5 and CCL9 could be involved in osteoclast differentiation. We verified this hypothesis by culturing osteoclast precursors in the presence of neutralizing antibodies against CCL5 and CCL9.
Blockade of either ligand resulted in partial inhibition of osteoclast formation in a dose-dependent manner. Similarly, simultaneous treatment with neutralizing antibodies against CCL5 and CCL9 induced synergistic inhibitory effects (Fig. 6D). Furthermore, PTX treatment blocked osteoclastogenesis down to basal levels. Notably, we found no CCL3 production by ELISA and no inhibitory effect on osteoclastogenesis using an anti-CCL3 antibody (data not shown), although CCL3 is thought to play an essential role in inflammation-related osteoclastogenesis in humans (4,7,31,32). These findings indicate that CCR1 is essential for osteoclast differentiation, and that CCL5 and CCL9 are the likely candidate ligands participating in the CCR1 axis.

CCR1 Is Involved in the RANK-RANKL Axis, and Its Loss Induces Impaired Osteoclastogenesis-Because osteoclast differentiation is critically regulated by signals through the RANK-RANKL axis, we investigated the transcriptional level of Rankl in Ccr1−/− osteoblastic cells. These cells expressed significantly lower levels of Rankl than wild-type osteoblastic cells (Fig. 7A). We next performed co-cultures of pre-osteoclasts with layers of osteoblastic cells in reciprocal combinations of these two cell populations from wild-type and Ccr1−/− mice. As expected from the reduced Rankl expression, significantly fewer osteoclasts were formed in co-culture with Ccr1−/− osteoblastic cells than with wild-type osteoblastic cells (Fig. 7B). In the presence of PTX, wild-type osteoblastic cells also failed to generate substantial numbers of osteoclasts (Fig. 7B). Ccr1−/− osteoclast precursors did not form differentiated osteoclasts even in the presence of wild-type-derived osteoblasts (Fig. 7C), consistent with our observations in Fig. 4.
These observations suggest that the CCR1 chemokine receptor, which is expressed by both osteoblasts and osteoclasts, plays a critical role in osteoblast-osteoclast communication through the regulation of RANK and RANKL expression.

Figure 5. Osteoclastic impairment by CCR1 deficiency is due to changes in the osteoclastic precursor population. Pre-osteoclastic cells were cultured from the bone marrow of wild-type and Ccr1−/− mice. Osteoclasts were induced from the pre-osteoclastic cells by M-CSF and RANKL treatment. In A, the relative expression levels of osteoclastic differentiation markers (Rank, the transcription factor Nfatc1, c-fos, Trap, the protease CathepsinK, the H+-ATPase subunit Atp6v0d2, integrins αV and β3, S1P1, and Irf-8) in wild-type (open columns) and Ccr1−/− (filled columns) osteoclasts were measured by real-time Q-PCR analysis at day 4 of culture (mean ± S.E., n = 5). #, significantly different from wild-type controls, p < 0.05. In B, the expression of RANK in CD45+CD11b+CD115+ pre-osteoclastic cells isolated from the bone marrow of wild-type and Ccr1−/− mice after 4 days in culture was analyzed by flow cytometry.

DISCUSSION

Pathological findings postulate that chemokines and chemokine receptors are involved in bone remodeling (9-13). Among these receptors, CCR1 appears to be an important molecule in bone metabolism (9). We used Ccr1−/− mice to investigate whether CCR1 affects bone metabolism. Our findings demonstrate that CCR1 deficiency affects the differentiation and function of both osteoblasts and osteoclasts and causes osteopenia. Our bone histomorphometric study in Ccr1−/− mice clearly demonstrated impaired osteoblast differentiation and function (Fig. 1, D-G).
The bone tissues in Ccr1−/− mice exhibited down-regulation of osteocalcin, a marker for mature osteoblasts, whereas the expression of Osteonectin and Osteopontin, markers for early osteoblasts, was up-regulated in the bones of these mice (Fig. 2A). Significantly, Ccr1−/− osteoblastic cells were much less able to generate mineralized tissues (Fig. 3A). These results suggest that CCR1 deficiency results in arrested osteoblast maturation and defective osteoblast function. Previous reports have demonstrated that sustained expression of Runx2 in osteoblasts inhibits their terminal maturation and causes osteopenia with a reduction in the number of osteocytes (25, 33). Consistent with these findings, bone tissue specimens from Ccr1−/− mice exhibited a higher expression level of Runx2 and a reduced number of osteocytes (Fig. 3G). These findings suggest that the osteopenia in Ccr1−/− mice is due to impaired osteoblastic function via Runx2 up-regulation. Our findings in Ccr1−/− osteoblastic cultures further suggest that the inverse temporal expression of osteoblastic transcription factors such as Runx2, Atf4, and Osterix could be related to the disordered expression of bone matrix proteins, resulting in impaired bone mineral deposition (Fig. 3B). Furthermore, treatment with neutralizing antibodies against CCR1 ligands (e.g., CCL4, CCL5, CCL9, and CCL11) significantly inhibited mineral deposition (Fig. 3E) and osteoblastic protein expression (supplemental Fig. 2) in osteoblastic cells isolated from wild-type mice. These observations indicate that CCR1-mediated signaling is essential for osteoblast differentiation and function. Although we detected substantial levels of various chemokine ligands (CCL4, CCL5, CCL9, and CCL11) in osteoblastic cells, these levels were greatly reduced in cells isolated from Ccr1−/− mice (Fig. 3D).
This observation implies a chemokine-dependent amplification loop, by which a given chemokine signal sustains or amplifies the expression of its participating ligands and receptors; such loops have been reported in several contexts. For instance, activated CD14+ monocytes form a CCR2-CCL2 axis-dependent amplification loop that ultimately leads to fibrosis (34). Several other studies have reported that macrophage infiltration into injured tissue is mediated by a CCR1-mediated loop (35-37) and a CCR5-CCL5 loop (38). Reports of renal inflammatory signals and abdominal inflammation have described CCR7-CCL19/CCL21 (39) and CCR8-CCL1 loops (17), respectively. Therefore, a CCR1-mediated loop is likely to be involved in osteoblast differentiation, function, and the cellular interactions that regulate bone metabolism. The possible roles of the CCR1-mediated loop in osteoblast differentiation and function suggest that changes in the bone marrow microenvironment caused by CCR1 deficiency affected the osteoblastic lineage and/or the intercellular regulation of osteoblast differentiation and function. Conventional knockout of CCR1 presumably affected many CCR1-expressing cell types, and thereby the bone marrow microenvironment that regulates the whole process of osteoblast differentiation and function; our in vitro experiments could not fully resolve this point. Nevertheless, the present experiments have confirmed an essential role for CCR1-mediated signaling in osteoblastic cells. The expression and possible roles of CCR1 in osteoclast-lineage cells have been reported by several studies (4, 10, 11). We observed up-regulation of Ccr1 expression and down-regulation of Ccr2 during cultured osteoclastogenesis (Fig. 6A). The bone histomorphometric analyses demonstrated impaired osteoclast differentiation and function in Ccr1−/− mice (Fig. 1F). In addition, we observed impaired bone resorption activity by osteoclasts isolated from Ccr1−/− mice (Fig. 4, B and C).
A potential reason for the impaired bone resorption is a defect in osteoclast differentiation. Indeed, the flow cytometric analyses revealed that the composition of CD11b+CD115+ myeloid-lineage precursors in Ccr1−/− mice is drastically changed; this population lacked the RANKlo CD11blo subpopulation, which is required for cellular fusion (29) (Fig. 5B). Recent live observation of calvarial bone marrow by two-photon microscopy clarified the roles of the chemoattractant sphingosine-1-phosphate (S1P) and its receptors in the migration of osteoclast precursors to the bone surface (40). It is therefore intriguing to speculate that the elevated level of S1P1 expression in Ccr1−/− osteoclasts (Fig. 1F) reduced the supply of osteoclast precursors from the peripheral circulation in the bone marrow to the bone surface. Further investigation will reveal whether the CCR1 axis is involved in the chemotactic migration of osteoclast precursors to the bone surface. One possible reason for osteoclast dysfunction in Ccr1−/− mice may be diminished signaling along the RANK-RANKL axis. Down-regulation of both Rank and Rankl mRNA was observed in the bone tissue of Ccr1−/− mice (Fig. 2D). Cultured osteoblastic cells and osteoclasts isolated from Ccr1−/− mice exhibited remarkable reductions in Rankl and Rank expression levels, respectively (Figs. 7A and 5B). Furthermore, Ccr1-deficient osteoclasts showed reduced levels of osteoclastic maturation markers such as c-fos, Nfatc1, CathepsinK, and several integrins (Fig. 5A). These results suggest that CCR1-mediated signaling controls the RANK-RANKL axis through the regulation of both osteoblasts and osteoclasts. Our reciprocal co-cultures of pre-osteoclasts with osteoblastic cells from wild-type and Ccr1−/− mice clearly demonstrated an impaired interaction between these two cell types, resulting in impaired induction of functional mature osteoclasts (Fig. 7, B and C).
These findings, interestingly, support the idea that the chemokines produced by osteoblasts and osteoclasts that stimulate CCR1-mediated signaling could be categorized as putative "bone-coupling factors" (41), which mediate the crosstalk between osteoclasts and osteoblasts to maintain bone remodeling. Our data imply that the regulatory mechanism of Rankl expression is associated with osteoblast maturation. Runx2 reportedly induces a low steady-state level of Rankl expression and is also required for the stimulatory effect of vitamin D3 on Rankl transcription, possibly by condensing or decondensing the chromatin structure (42). It is possible that the inverse temporal Runx2 expression in CCR1-deficient mice causes the down-regulation of Rankl through a reduced cellular response to bone-targeted hormones such as vitamin D3 and parathyroid hormone. However, a more direct role of CCR1-mediated signaling in Rankl transcription remains to be elucidated.

FIGURE 7. CCR1 is involved in the RANK-RANKL axis and induces the impaired osteoclastogenesis. In A, osteoblastic cells were cultured from the bone marrow of wild-type and Ccr1−/− mice. Relative expression levels of Rankl by Ccr1−/− osteoblasts were measured by real-time Q-PCR (mean ± S.E., n = 3). #, significantly different from wild-type controls, p < 0.05. In B and C, the number of TRAP+ multinuclear osteoclasts induced by co-culture with osteoblasts: co-culture with osteoblastic cells isolated from wild-type or Ccr1−/− mice (mean ± S.E., duplicated, n = 2, B), and with osteoclast precursors isolated from wild-type or Ccr1−/− mice (mean ± S.E., duplicated, n = 2, C). Osteoclast cultures with M-CSF and RANKL without osteoblasts were set as the positive control. #, significantly different from co-culture of osteoclasts with wild-type osteoblasts, p < 0.05.
The presence of CCR1-mediated signaling pathways in both osteoblasts and osteoclasts raises the important question of how the several murine chemokine ligands for CCR1 (in rodents, CCL3, CCL4, CCL5, CCL6, CCL8, CCL9, and CCL11) (43) elicit distinct downstream signaling pathways despite sharing the same CCR1 receptor. Each chemokine may possess specific regulatory control over receptor binding and induce a specific cellular response. For example, osteoclasts may have a distinct intrinsic signaling adaptor protein for the cellular response, analogous to the adaptor protein FROUNT in CCR2-mediated signaling (44). The spatiotemporal expression of chemokine receptors and their ligands may also relay chemokine signaling into a sequential output that regulates bone metabolism. This is consistent with several findings in this study, including the distinct temporal expression patterns of different ligands observed in Fig. 6 (B and C) and supplemental Fig. 1, the chemokine-dependent amplification loop, and the possible chemokine-mediated cellular interaction. Further studies are warranted to investigate the intracellular signaling pathways downstream of each chemokine receptor. Our current results also support the concept that chemokine receptor antagonists are potentially novel therapeutic candidates for the treatment of patients with certain inflammatory bone diseases. Several reports suggest that CCL3 promotes pathological bone destruction by excessively triggering osteoclast activation (2, 4, 7, 31, 32). However, we were unable to detect increased CCL3 production by cultured osteoclasts (Fig. 6, B and C, and data not shown), suggesting that physiological osteoclastogenesis is primarily maintained by CCL9 rather than CCL3.
It is probable that pro-inflammatory CCL3 overrides the physiological process of osteoclastogenesis driven by CCL9 expression and signaling, thereby inducing ectopic osteoclastogenesis that causes bone destruction through T-lymphocyte-mediated activation (45). Alternatively, species differences between rodents and humans must be considered; CCL9 is described only in rodents, and its putative human homologues are predicted to be CCL15 and CCL23 (46), which are potent osteoclastogenesis mediators in humans (47). It is therefore worthwhile to dissect the distinct roles of chemokine signaling in both pathological and physiological contexts, which would provide novel information that may help researchers identify new therapeutic targets. In conclusion, the present observations provide the first evidence for the physiological roles of CCR1-mediated chemokines in bone metabolism. Further studies on chemokine receptors in bone metabolism will enable the targeted development of new therapeutic strategies for the treatment of patients with bone-destructive diseases and osteoporosis.
No signs of dose escalations of potent opioids prescribed after tibial shaft fractures: a study of Swedish National Registries

Background: The pattern of opioid use after skeletal trauma is a neglected topic in pain medicine. The purpose of this study was to analyse the long-term prescriptions of potent opioids among patients with tibial shaft fractures.

Methods: Data were extracted from the Swedish National Hospital Discharge Register, the National Pharmacy Register, and the Total Population Register, and analysed accordingly. The study period was 2005–2008.

Results: We identified 2,571 patients with isolated tibial shaft fractures. Of these, 639 (25%) collected a prescription for opioids after the fracture. The median follow-up time was 17 (interquartile range [IQR] 7–27) months. Most patients with opioid prescriptions after fracture were male (61%) and the median age was 45 (16–97) years. The leading mechanism of injury was fall on the same level (41%). At 6 and 12 months after fracture, 21% (95% CI 17–24) and 14% (11–17) were still being treated with opioids. Multiple Cox regression analysis (adjusted for age, sex, type of treatment, and mechanism of injury) revealed that older patients (age >50 years) were more likely to end opioid prescriptions (hazard ratio 1.5 [95% CI 1.3–1.9]). During follow-up, the frequency of patients on moderate and high doses declined. Comparison of the daily morphine equivalent dose among individuals who had prescriptions both during the first 3 months and in the 6th month indicated that the majority of these patients (11/14) did not have dose escalations.

Conclusions: We did not see any signs in registry data of major dose escalations over time in patients on potent opioids after tibial shaft fractures.
Background

Previous studies on the consumption of opioids in patients with non-cancer pain were either not population-based [1,2], limited to a specific population such as workers with low back injuries [3-5], or had only a short follow-up [6]. Moreover, studies dealing with concerns of abuse, side effects, and efficacy of long-term opioid therapy in these conditions have not been conclusive [7-9]. There is a lack of studies on the pattern of opioid use after skeletal fractures. Most of the reports in the literature are concerned with chronic back pain or other non-cancer pain conditions, not with skeletal trauma patients [2,10,11]. The design and results of these studies illustrate the need for more selected patient groups with specific end-point data, such as those with skeletal injuries. Fractures of the tibial shaft are among the most common serious skeletal injuries [12]. They are slow to heal and frequently cause permanent sequelae [13]. We analysed the long-term pattern of opioid consumption in patients with tibial shaft fractures. We aimed to study whether potential risk factors such as age, sex, type of treatment, and mechanism of injury would predict prolonged opioid therapy. Moreover, we wanted to assess the potential risk of dose escalations of prescribed opioids in these patients.

Methods

Sweden has a unique personal identification number for all residents, which allows linkage of healthcare and other information from different registers for research. Data on all patients with tibial shaft fractures were obtained from the Swedish National Hospital Discharge Register (SNHDR). The Register records diagnoses and designated treatment codes according to the International Classification of Diseases (ICD), covering at least 98% of all hospital admissions in Sweden. A matched control group without tibial fractures was extracted from the Total Population Register.
Each patient in the fracture group was matched with five individuals by age, sex, and residential area. None in the control group had been admitted to a hospital for a tibial fracture during the study period. Data on death or emigration for both groups were retrieved by Statistics Sweden from the Total Population Register. Since July 1, 2005, all prescriptions filled at pharmacies in Sweden have been stored in the National Pharmacy Register [14]. This does not include over-the-counter sales, which include some analgesics such as paracetamol and some of the non-steroidal anti-inflammatory drugs. However, opioid analgesics can only be obtained at pharmacies with a prescription and are thereby included in the Register. We identified all admissions in the SNHDR with ICD diagnostic codes for tibial shaft fractures (S822, S8220, and S8221). Relevant surgical intervention codes were analysed accordingly (NGJ29-NGJ99). Mechanisms of injury were studied using ICD E-codes (external codes) and grouped into 6 categories: fall on the same level, fall from height, unspecified fall, transport accident, miscellaneous, and unreported cause. The study period was July 1, 2005 to December 31, 2008. All opioid analgesics prescribed to the patients in the study and control groups were extracted from the National Pharmacy Register. These data include the following: name of the drug, date of filling the prescription, drug strength, number of pills, and dosage. The morphine equivalent dose (MED) for each opioid prescription in milligrams (mg) was calculated by multiplying the number of pills prescribed by the drug strength. These doses were then converted to MED using available equianalgesic conversions [15]. The median MED per day was calculated for each month. The MED was categorized as being low (< 20 mg), moderate (20-180 mg), or high (> 180 mg) [16,17].
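The MED bookkeeping described above is simple arithmetic. A minimal sketch follows; the conversion factors and function names are illustrative assumptions, not the published equianalgesic tables [15] used in the study:

```python
# Sketch of the MED calculation and dose banding described above.
# The conversion factors below are hypothetical placeholders for
# demonstration; the study used published equianalgesic conversions.
MORPHINE_EQUIVALENTS = {"morphine": 1.0, "oxycodone": 1.5}  # assumed factors

def prescription_med_mg(drug: str, strength_mg: float, n_pills: int) -> float:
    """Total morphine equivalent dose (mg) for one filled prescription:
    number of pills x drug strength, converted via an equianalgesic factor."""
    return n_pills * strength_mg * MORPHINE_EQUIVALENTS[drug]

def categorize_daily_med(daily_med_mg: float) -> str:
    """Dose bands used in the study: low < 20 mg, moderate 20-180 mg, high > 180 mg."""
    if daily_med_mg < 20:
        return "low"
    if daily_med_mg <= 180:
        return "moderate"
    return "high"
```

From these per-prescription totals, the study's monthly statistic is simply the median daily MED across each patient-month.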
We analysed potent opioids (oxycodone, morphine, and fentanyl), whereas less potent opioids (dextropropoxyphene, codeine, and tramadol) were not included [18]. To avoid bias from patients with associated fractures, we excluded all patients with fractures other than tibial shaft fractures. Moreover, we excluded patients who had received potent opioids before the index hospitalisation, as we wanted to study new opioid use after fracture. The study was approved by the regional ethical review board located at the Karolinska Institutet (2009/837-31/3 and 2010/0125-32).

Statistical analysis

We used descriptive statistics to define the median values with interquartile ranges (IQR). Kaplan-Meier analysis calculated the cumulative opioid consumption with 95% confidence intervals (CI). The opioid therapy was considered to have ceased when no new prescription was found during 3 consecutive months of follow-up (after 3 months a new opioid prescription has to be issued). We used the Cox multiple-regression model to study risk factors for prolonged opioid consumption after sustaining the fracture. Results were expressed as hazard ratios (HR) with corresponding 95% CI. If the HR is >1, the patients are more likely to end opioid use compared with patients in the reference group. In the simple Cox model, we studied the following risk factors: age, sex, method of treatment (surgical or non-surgical), and mechanism of injury. All variables were later adjusted for in a multiple Cox model. Logistic regression analysis compared the group of patients using opioids after the fracture with those who never had opioid prescriptions during follow-up. The dependent variable in the model was opioid use (yes/no) and the covariates were age, sex, type of treatment, and mechanism of injury. The level of significance was set at P < 0.05. All statistics were performed using the PASW statistics package version 18 (SPSS Inc., Chicago, Illinois, USA).
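The Kaplan-Meier estimate underlying the cumulative consumption curves can be illustrated with a toy pure-Python product-limit estimator. This is only a sketch of the calculation, not the PASW/SPSS implementation the authors used; here "survival" means still filling opioid prescriptions, and the event is cessation:

```python
# Toy Kaplan-Meier (product-limit) estimator.
# durations: follow-up time per subject (e.g., months).
# event_observed: 1 if cessation of opioid use was observed, 0 if censored
# (i.e., the subject left follow-up still on opioids).
def kaplan_meier(durations, event_observed):
    """Return [(event_time, survival_probability), ...] sorted by time."""
    event_times = sorted({t for t, e in zip(durations, event_observed) if e})
    surv = 1.0
    curve = []
    for t in event_times:
        # Subjects still under observation just before time t.
        at_risk = sum(1 for d in durations if d >= t)
        # Cessation events occurring exactly at time t.
        events = sum(1 for d, e in zip(durations, event_observed) if d == t and e)
        surv *= 1.0 - events / at_risk
        curve.append((t, surv))
    return curve
```

Reading a point (t, s) off this curve as "s of patients still on opioids at month t" mirrors the 21%, 14%, and 11% figures reported at 6, 12, and 18 months.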
Study population

We identified 3,732 patients (≥ 16 years of age) who were hospitalized with tibial shaft fractures. We excluded all patients with associated fractures and patients using opioids before the index hospitalization. This left us with a final sample size of 2,571 patients. Of those, 639 (25%) had prescriptions of opioids after the fracture (new opioid use) (Figure 1). The corresponding age- and sex-matched control cohort consisted of 12,855 individuals (median age 46 years, 62% men). Filling a prescription for opioid analgesics was seen in 353 (3%) of the controls during the same observation period. Baseline data of the final study cohort with new opioid use after isolated tibial shaft fractures (n = 639) are shown in Table 1. The median age was 45 (16-97) years and most of the patients were male (61%). The fracture was most often closed (78%) and surgical treatment was chosen in the majority of cases (81%). The mechanism of injury was fall on the same level in 41% of cases, followed by transport accidents (21%). The median follow-up time after the fracture was 17 (IQR 7-27) months.

Opioid prescriptions

Kaplan-Meier analysis revealed that 6, 12, and 18 months after sustaining the tibial fracture, 21% (95% CI 17-24), 14% (11-17), and 11% (8-13) still required opioid prescriptions, respectively (Figure 2). The median daily MED was 21 (IQR 8-32) mg within the first month after the fracture for those patients who were started on opioids (Figure 3). Figure 4 shows the distribution of patients on various doses during different exposure windows. The first prescription of opioids was filled during the first month after fracture by the majority of the patients (86%) (Figure 4). During the study period, the proportion of patients using moderate and high doses decreased and the proportion of patients who stopped taking opioid drugs increased (Figure 4).
Comparison of the daily MED among individuals who had prescriptions both during the first 3 months and in the 6th month indicated that the majority of these patients (11/14) did not have dose escalations (an increase by more than 30% of the original dose). The simple (unadjusted) Cox regression analysis showed that older patients (> 50 years) (HR 1.7), women (HR 1.3), and non-surgically treated patients (HR 1.4) were more likely to end opioid analgesic use. After adjustment for covariates in the multiple Cox analysis, older age remained statistically significantly associated with ending opioid use sooner (HR 1.5) (Table 2). Patients with isolated tibial fractures who received opioids after fracture (n = 639) were compared with the patients who did not get opioid prescriptions (n = 1,932). There was no difference concerning age, sex, or mechanism of injury between the 2 groups (data not included). However, patients receiving opioids during follow-up were more likely to have undergone surgery for the fracture (odds ratio 2.3, 95% CI 1.7-2.6, p < 0.001).

Discussion

There has been a continuous increase in opioid use for pain treatment among patients with non-cancer pain conditions during the past decade [19]. We studied the long-term opioid prescriptions after tibial shaft fractures in a national Swedish study. 25% of the patients filled a prescription for opioid analgesics at some point after the fracture. However, the doses prescribed were rather low and we did not see any evidence of major dose escalations over time. We excluded all patients with potent opioid prescriptions prior to the fracture, as we wanted to study the occurrence of new opioid prescriptions. Moreover, we excluded patients with other fracture diagnoses as we wanted to study a rather homogeneous fracture cohort. We are aware that the included patients may have obtained opioids during follow-up for reasons other than the skeletal injury, such as back pain, extremity pain, and abdominal pain.
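The escalation criterion used above (an increase by more than 30% of the original dose) amounts to a one-line comparison; a minimal sketch, with a hypothetical function name:

```python
def dose_escalated(first_3mo_med_mg: float, month_6_med_mg: float,
                   threshold: float = 0.30) -> bool:
    """True if the 6th-month daily MED exceeds the early (first 3 months)
    daily MED by more than the escalation threshold (30% in this study)."""
    return month_6_med_mg > first_3mo_med_mg * (1 + threshold)
```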
Therefore, an age- and sex-matched control cohort without fracture was included for comparison. In a cross-sectional survey from 2010 based on a nationwide register in Denmark, a high overall prevalence of opioid consumption (4.5%) was found in the general population, the relevance of which was, however, unknown [20]. These findings are in accordance with our finding of an opioid use of 3% in the control cohort without tibial fracture. We did not detect any indication of major dose escalation in our cohort during the follow-up period. The median daily MED was between 7 and 21 mg during months 1 to 12 after fracture. Furthermore, as shown in Figure 4, the MED for patients taking opioids was predominantly moderate to low in the beginning. During follow-up, the frequency of patients on moderate and high doses declined. This is consistent with other data concerning non-trauma-related pain conditions. In a meta-analysis of the efficacy and safety of long-term opioid therapy for chronic non-cancer pain, many patients discontinued the therapy and very few patients showed signs of opioid addiction or abuse [21]. Our finding of a higher risk of continued opioid use in younger patients (< 50 years) may be explained by the more extensive injuries often sustained in transport accidents, in comparison with falls on the same level, which are more often seen in elderly people. Furthermore, this finding for the older patient group may be reassuring, as recently published reports have raised increasing concerns about the safety of opioid analgesics in elderly people [22-24]. The shortcomings of our study include the following: opioids prescribed to patients are not always equivalent to the requirement or consumption of opioids. The incidence figures in this study may therefore present an over- or underestimation of actual opioid use. An overestimation of the use of potent opioid analgesics may arise because not all prescribed drugs are consumed.
Thus the actual number of consumed doses is probably lower than 100 percent. In contrast, we did not include less potent opioids such as codeine, which is converted to morphine in the liver, resulting in an underestimation. Moreover, we did not analyse other analgesics, such as COX inhibitors, which may augment the analgesic effect of opioids and thereby reduce the quantity of opioid required. A further limitation is that we do not know anything about the efficacy of the analgesic treatment. Lack of analgesic effect and/or side effects of opioids are major reasons why opioid therapy is stopped [25]. Patients may also, after some time, be prescribed less potent opioids by their general practitioners, who may be reluctant to provide potent opioids for non-cancer pain. This is a register study; therefore, we do not know the specific reasons why the patients discontinued the use of opioid medication. For example, one reason for discontinuing opioid treatment in elderly patients could be a higher incidence of adverse events. We also do not know whether the excluded patients, who were already taking opioids prior to their tibial fracture, had an increase in their prescribed opioids following the fracture. Our study is based on well-validated, government-controlled national registries, including all hospitalized patients and opioid prescriptions in Sweden. We only studied the use of strong opioids in order to obtain a more homogeneous patient group and to guarantee that the patients' consumption was registered, allowing accurate statistics regarding drug escalation problems.

Table 2 note: HR = hazard ratio, CI = confidence interval; a crude, b adjusted for age, sex, type of fracture, treatment, and mechanism of injury; if the HR is >1, the patients are more likely to end opioid intake compared with the reference group.
The Scripps Plankton Camera system: A framework and platform for in situ microscopy

The large data sets provided by in situ optical microscopes are allowing us to answer longstanding questions about the dynamics of planktonic ecosystems. To deal with the influx of information, while facilitating ecological insights, the design of these instruments increasingly must consider the data: storage standards, human annotation, and automated classification. In that context, we detail the design of the Scripps Plankton Camera (SPC) system, an in situ microscopic imaging system. Broadly speaking, the SPC consists of three units: (1) an underwater, free-space, dark-field imaging microscope; (2) a server-based management system for data storage and analysis; and (3) a web-based user interface for real-time data browsing and annotation. Combined, these components facilitate observations and insights into the diverse planktonic ecosystem. Here, we detail the basic design of the SPC and briefly present several preliminary, machine-learning-enabled studies illustrating its utility and efficacy.

Studying and understanding the fluctuations of planktonic populations is of critical importance to assessing the health and functioning of the ocean. Plankton include the primary producers in the ocean, which form the base of the food web. Through carbon fixation, nutrient uptake, and oxygen production, these organisms influence global-scale biogeochemical cycles (Arrigo 2005; Hays et al. 2005). Sampling these populations is extremely challenging due to the wide range of spatial and temporal scales relevant to their population dynamics (Haury et al. 1978). Comprehensive studies of plankton therefore generally require extensive field campaigns demanding hundreds of hours of human labor. Established methods for studying plankton are often limited by high financial costs, low temporal resolution, limited spatial coverage, or low taxonomic specificity.
Net tows, for example, are used to sample water at a specific time and place in the ocean by filtering water in situ to concentrate biological material. The abundance of target species is then manually enumerated using a microscope in a lab (Wiebe and Benfield 2003). In addition to the logistical difficulties of deployment, and the human costs associated with organism enumeration and identification, water and net sampling has been shown to significantly under-sample environmental conditions and fragile and gelatinous organisms (Remsen et al. 2004; Benfield et al. 2007; Jochens et al. 2010). At the other end of the sampling spectrum, fluorometers estimate the abundance of chlorophyll-containing organisms; they sample continuously, generating a numeric indicator that is correlated with primary producer abundance (Cowles et al. 1993; Kolber and Falkowski 1993). While allowing for dense temporal sampling, fluorometers generally integrate over a small volume and yield bulk measurements of fluorescent organisms. Since the early 1990s, oceanographers have been developing and deploying digital in situ imaging systems to address the need for taxon-specific data captured at high temporal or spatial resolution (Benfield et al. 2007). The abilities of such optical instruments to sample organisms in the ocean are well established. Indeed, the detailed population time series or maps of spatial distributions they yield have allowed new insights into difficult-to-study aspects of plankton dynamics: environmental pressures on plankton populations, the role of gelatinous organisms in ecosystems, parasitic activity, and inter- and intra-species interactions, to name a few (Bi et al. 2013; Peacock et al. 2014; Biard et al. 2016). When designing optical instruments, a sacrifice is necessarily made between camera resolution and the sample volume.
An instrument with a given resolution is limited in its ability to quantitatively sample both abundant small objects and rare large ones. At the smallest resolution, the system captures many tiny, indistinguishable samples. Conversely, large objects are relatively easy to identify, but are captured so rarely as to be statistically insignificant. In their study of colloidal particles in Monterey Bay, Jackson et al. (1997) recognized the importance of imaging system roll-off: the inability of a particular instrument to effectively sample objects at the largest and smallest ends of its resolvable size range. To make their observations, they used complementary sampling methods ranging from shipboard Coulter counters for micron-scale particles to an in situ tow sled with a planar laser imaging system. Jackson et al. combined all these data streams by computing the minimum detectable particle size and the size spectra from each instrument. These values were then stitched together by computing the volumetric abundance of organisms in each system and combining the results. We built on Jackson et al.'s insights and determined that effective quantification of organisms across the whole size spectrum of a plankton population requires a suite of instruments with overlapping spatial resolution, deployed in a variety of configurations. In principle, any imaging system or combination of systems could be used in this manner. Critical to this framework is the development and consistent maintenance of a server-based data management system to collate data from all inputs.
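The stitching procedure summarized above (normalize each instrument's counts by its sampled volume, discard bins in the instrument's roll-off region, and merge across instruments) can be sketched as follows. The data layout and function names are assumptions for illustration, not the implementation used by Jackson et al. or the SPC pipeline:

```python
def volumetric_abundance(counts_per_bin, sampled_volume_l):
    """Convert raw counts per size bin to abundance (counts per liter)."""
    return {size: n / sampled_volume_l for size, n in counts_per_bin.items()}

def stitch_spectra(instruments):
    """Merge per-instrument size spectra into one spectrum.

    instruments: list of (counts_per_bin, sampled_volume_l, (lo, hi)) tuples,
    where (lo, hi) is the instrument's reliable size range; bins outside it
    fall in the roll-off region and are dropped.
    """
    merged = {}
    for counts, volume, (lo, hi) in instruments:
        for size, abundance in volumetric_abundance(counts, volume).items():
            if lo <= size <= hi:
                merged.setdefault(size, []).append(abundance)
    # Average where the reliable ranges of two instruments overlap.
    return {size: sum(v) / len(v) for size, v in sorted(merged.items())}
```

In this toy layout, a small-volume microscope contributes the fine end of the spectrum and a large-volume imager the coarse end, with overlapping bins averaged.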
Optical-based instruments, such as the Imaging FlowCytobot, the In Situ Ichthyoplankton Imaging System, the Underwater Vision Profiler, the Zooglider, the Continuous Particle Imaging and Classification System, and the PlanktonScope have been developed to monitor plankton populations (Cowen and Guigand 2008; Picheral et al. 2010; Ohman et al. 2018; Gallager 2019; Song et al. 2020). These instruments, among many others, are being deployed in a variety of environments, in a variety of configurations, and are producing high-quality data (Lombard et al. 2019). The Scripps Plankton Camera (SPC) was designed to be a flexible, easily reconfigured imaging system: it can be outfitted to observe objects from tens of microns to several centimeters; it can be deployed on a variety of platforms; it can record data autonomously or to a remote disk; and it operates with a highly extensible database management system. The SPC was originally developed to augment the Scripps Pier plankton time series currently maintained by the Southern California Coastal Ocean Observing System (SCCOOS) through the Harmful Algal Bloom Monitoring and Alert Network (HABMAP) (Kim et al. 2009; Kenitz et al. 2020). The SCCOOS time series, dating back to 2005, was built from weekly hand-collected net tows and discrete water samples. While this time series is extremely valuable, it misses shorter time scale population fluctuations and likely undersamples fragile and gelatinous organisms. With these limitations in mind, the SPC was designed to sample at high temporal frequency, with minimal influence on the fluid being imaged, and to maximize the ratio of the amount of water sampled to the amount of image data stored. One way to minimize instrument interference with an ambient population is to use a free-space imaging setup: one in which a light source and a camera are focused on the same distant plane in an open volume.
There are several types of free-space imaging systems, defined by the illumination location relative to the camera: backscatter systems, where the illumination is directed at the sample from the same angle as the camera; side scatter systems, which orient a camera to observe a plane illuminated by an oblique source; and transmission systems, in which a light source faces the camera through the image plane. Backscatter and side scatter illumination have advantages for imaging opaque objects, but suffer from poor scattering efficiency in dynamic, particulate-rich media like the waters off the Scripps Pier. Transmission imaging performs better in such environments and is better able to resolve objects that have an index of refraction close to that of sea water. There are a number of relevant transmission imaging modalities, all with various trade-offs. Holography, for example, has several attractive properties for plankton microscopy, such as a large imaged volume and the ability to resolve object positions in three dimensions (Sheng et al. 2006). Shadowgraph imaging likewise expands the depth of field by using a collimated light source to allow recording of silhouettes of objects in the beam path (Settles 2012). Darkfield microscopy sacrifices sample volume for enhanced edge contrast and color images of translucent objects (Gage 1920). Moreover, because darkfield imaging uses forward-scattered light, it is effective for imaging small objects down to tens of wavelengths of the illumination and provides color information. The SPC was thus constructed as a free-space, darkfield imaging microscope to maximize data collection in turbid coastal conditions and to observe fragile gelatinous organisms. A single SPC has two housings, one for the illumination hardware and the other for the camera and electronics. The system uses no nets, filters, or pumps; it images only particles that enter the sample volume via ambient flow.
The embedded computer segments the raw frames in real time, saving only subimages of foreground objects. The resulting regions of interest (ROIs) are then stored either on-board or downloaded externally via an Ethernet connection. An accompanying database management system, user interface, and annotation tools were built in concert to work with the incoming data. This framework images tens to thousands of liters of sea water per year, depending on the volume of interrogation as defined by the specific optical setup, with minimal computer memory requirements. For example, the instrument package deployed on the Scripps Pier uses a pair of microscopes with 0.5× and 5× objective lenses that collectively produce approximately 5 terabytes of foreground ROIs out of 2000 terabytes of raw image data over a year of continuous operation. The SPC's adaptable configuration lends itself to straightforward integration and expansion; the same physical footprint can be used to observe many different size ranges and the data can be saved without manipulation using the same data management system. This basic framework could enable many new sampling designs, long-term monitoring efforts, and experimental setups. Several versions of the SPC are already in service in ocean environments from the Gulf of Alaska to the Cayman Islands and freshwater systems from the Sacramento River to Lake Greifensee in Switzerland (SI 1). The original system, deployed as a permanent installation on the Scripps Pier in 2014, has collected more than 1 billion ROIs and has facilitated observations of fragile gelatinous organisms, ongoing studies of episodic blooms of extremely long diatom chains, and characterization of forms of parasitism never recorded before in the Pacific.

Materials and procedures

Model

A fundamental issue in particle imaging is the trade-off between microscope resolution and effective sample rate for a given sized object, assuming a constant size distribution of organisms.
To better quantify this trade-off, we developed a physics-based model of a free-space imaging system to guide the instrument design. The model computes the amount of time needed to collect a desired number of images of a range of object sizes given an underlying particle size distribution, a microscope magnification, and a sensor size. Our hypothetical system assumes a free-space light microscope with a well-defined relationship between optical resolution and the resulting imaged volume. The model, however, could be adapted to evaluate the same trade-offs with different imaging modalities such as holographic or plenoptic imaging. The resolution and sample volume of a diffraction-limited microscope are defined by the numerical aperture (NA) of its objective. NA is a dimensionless number that describes the range of angles over which a lens can collect light:

NA = n sin(θ_max),   (Eq. 1)

where n is the index of refraction of the medium (e.g., air or water) and θ_max is the angle describing the maximum cone of light picked up by the lens. The size of the finest detail that can be resolved by the microscope, d, is inversely proportional to the NA:

d = 1.22λ / (2 NA),   (Eq. 2)

with λ representing the wavelength. The constant is defined by the Rayleigh criterion for resolving two point sources of light on an image plane (Hecht 2016). We rearrange Eq. 2 to express the NA as a function of the desired minimum resolution and an assumed wavelength of collected light:

NA = 1.22λ / (2 d).   (Eq. 3)

In this illustrative model, the smallest resolvable object d is a constant defined by x_obj, the size of a pixel in the image plane:

d = 2 x_obj,   (Eq. 4)

where x_obj is in turn defined by the pixel size on the sensor, x_pix, and the magnification, M:

x_obj = x_pix / M.   (Eq. 5)

The maximum angle over which the microscope can collect is likewise defined by the NA of the system:

θ_max = arcsin(NA / n).   (Eq. 6)

The depth of field (DoF) over which an object can be resolved is then:

DoF = 2β x_obj / tan(θ_max),   (Eq. 7)

where β is a constant describing the amount of acceptable blur and x_obj is the pixel size in object space (Eq. 5).
This model assumes that blur is purely a Gaussian function of distance from the image plane described by the shape of the aperture (Joshi et al. 2008). The volume of water observed in an individual frame, v_f, is then computed as the product of the full sensor size and the DoF, taking care to convert to the appropriate units:

v_f = i · j · x_obj² · DoF · vol,   (Eq. 8)

where i and j are the dimensions of the sensor in pixels and vol is a volume conversion factor. The rate of objects imaged per unit time, r_obj, is a function of the particle concentration per unit volume, κ, and the system frame rate, f:

r_obj = κ · v_f · f,   (Eq. 9)

where v_f is the computed sample volume from Eq. 8. In all model runs, κ was set assuming an allometric size-scaling of metabolic rate constraining the size abundance distribution of the observed plankton (Huete-Ortega et al. 2011). Defining a "sample" as a collection of imaged objects, dividing Eq. 9 by f_stat, the number of objects needed to create a statistically relevant sample, yields the rate of statistically significant samples per unit time:

r_samp = κ · v_f · f / f_stat.   (Eq. 10)

The model then outputs the number of samples collected per hour as a function of plankton concentration, the system magnification, and frame rate. When designing the SPCs, we used the model to visualize the sampling rates r_samp for six different microscope objectives with magnifications between 5× and 0.05×. The model assumed that each system used a 12 megapixel sensor, that an object must occupy at least 20 pixels to be counted, and that the sample ratio f_stat was set to 100 objects per sample (Supporting Information Table S1). Each line in Fig. 1 represents the sample rate of each microscope objective when operating at eight frames per second in an environment composed purely of plankton. The left-most end of each line is the sample rate r_samp of small objects imaged with enough pixels to be counted. This is the lower marginal performance boundary of an objective: the case in which the camera images many low-resolution objects.
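A minimal Python version of this model is sketched below. The 3.45 μm pixel pitch, the Nyquist factor of 2, and the blur constant β are illustrative assumptions, not the values used for the actual instruments; under these assumptions a 0.5× objective yields the 138 μm minimum countable object discussed for the 0.5× system.

```python
import math

def spc_model(x_pix, M, n=1.33, lam=550e-9, beta=2.0, sensor=(4000, 3000)):
    """Toy version of the imaging model (Eqs. 1-8, SI units throughout)."""
    x_obj = x_pix / M                         # Eq. 5: pixel size in object space
    d = 2 * x_obj                             # assumed Nyquist form of Eq. 4
    NA = 1.22 * lam / (2 * d)                 # Eq. 3: Rayleigh criterion
    theta = math.asin(min(NA / n, 1.0))       # Eq. 6: marginal ray angle
    dof = 2 * beta * x_obj / math.tan(theta)  # Eq. 7: geometric depth of field
    i, j = sensor
    v_f = i * j * x_obj ** 2 * dof            # Eq. 8: per-frame volume (m^3)
    return x_obj, NA, dof, v_f

def sample_rate(kappa, v_f, f=8.0, f_stat=100):
    """Eq. 10: statistically significant samples per unit time."""
    return kappa * v_f * f / f_stat

# hypothetical 0.5x objective on a sensor with 3.45-um pixels
x_obj, NA, dof, v_f = spc_model(x_pix=3.45e-6, M=0.5)
min_object_um = 20 * x_obj * 1e6  # 20 pixels per counted object
```

With these assumed parameters, min_object_um evaluates to 138 μm; feeding an assumed concentration κ into sample_rate gives the kind of samples-per-hour curves shown in Fig. 1.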
The right-most terminus of each line is at a sample rate r_samp of 0.01 samples per hour, corresponding to a single object per hour when the desired sample ratio is f_stat = 100 objects/sample. Moving from higher to lower power objectives yields an increase in sample rate r_samp, obtained by lowering magnification while maintaining at least 20 pixels per object diameter. Consider, for example, a system akin to the MICRO-SPC fitted with a 0.5× microscope objective (Table S2). With a 20 pixels per body length lower bound, the smallest object the instrument will image has a length of 138 μm. At eight frames per second, the system will capture approximately 55 samples comprising 100 ROIs every hour (r_samp = 55). This 0.5× microscope will image larger targets rarely: a 5 mm object will only show up in the data approximately once an hour (r_samp = 0.01), necessitating long binning intervals (hundreds of hours) to acquire a sample with a sufficiently high signal-to-noise ratio (SNR). This model informed our design criteria for all deployments. All systems were developed in close coordination with the teams using the instruments to ensure that the target plankton of interest would be identifiable and collected with sufficient SNR. When designing the SPC deployment on the Scripps Pier, we desired a lower-bound SNR of 10 dB (the system must image 100 objects of a given size to constitute a statistically significant sample) for objects ranging from 10 μm to 1 cm in size. No single microscope could sample that broad a size spectrum with reasonable statistics; our solution, therefore, was to use two microscope objectives. The model results are intuitive: all systems interrogate a volume of water defined by the system design, collecting many images of potentially difficult-to-identify objects at the low end of their resolution.
Large, rare objects approaching the size of the imaged volume require a long integration time to capture statistically significant estimates of abundance. These limitations must be considered during the design phase of a system to achieve a desired SNR for a given sized object. It is important to note that this model only considers biological objects. A free-space imaging system such as the SPC will capture images of any object passing through the sample volume. Objects such as sand, particulate matter, or detritus might not conform to an allometric size-abundance scaling. System performance should not, however, be affected with regard to biological sampling. Once manual or automated processing has been applied to remove noise (i.e., unwanted images), the same principles apply: the time to collect statistically relevant samples in situ will be unaffected by noise.

General system design

We used the model to determine the maximum required time to sample a minimum-sized object of interest with reasonable statistics (1/r_samp). To support the weekly Scripps Pier sampling program, the system was designed to resolve objects spanning 2-3 orders of magnitude of body size. From an engineering standpoint, the minimum resolvable size class of a system, as defined by the statistically significant sample rate in the Model section, is determined by several factors: the available camera hardware, imaging sensor format, bus bandwidth, and data rate into the real-time processor. Taken together, these limitations imply that no individual camera system can realistically achieve the desired scientific output. As an example, consider a single camera that samples the size range of 10 μm to 10 cm with only 5 pixels per minimum object diameter. Based on the model, this would require an optical resolution of 2 μm or better and a field of view of 10 cm × 10 cm. It would require 50,000 × 50,000 pixels, or 2.5 gigapixels: an impractical sensor even by futuristic standards.
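The 2.5-gigapixel figure follows directly from the stated numbers:

```python
# 5 pixels across a 10-um minimum object implies 2 um per pixel in
# object space; a 10 cm x 10 cm field of view at that sampling then
# needs 50,000 pixels per side.
min_object = 10e-6                   # m
px_per_object = 5
fov = 0.10                           # m
x_obj = min_object / px_per_object   # 2e-6 m per pixel
px_per_side = fov / x_obj            # 50,000 pixels
total_pixels = px_per_side ** 2      # 2.5e9, i.e., 2.5 gigapixels
```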
Fig. 1. Each line represents a different microscope objective in a telecentric optical set-up imaging onto a 12 megapixel sensor. The lower bound of objects to be counted is set at 20 pixels per characteristic object length and the sample size is assumed to be 100 objects. The MINI-SPC is represented by the blue line, the MICRO-SPC is shown in purple, and the MACRO-SPC is in maroon.

Instead, a suite of cameras with overlapping detectable size ranges is more practical for imaging such a large range of sizes.

Scripps plankton camera

The SPC is a full, end-to-end plankton observation platform. Broadly speaking, the system consists of three distinct nodes: the in situ free-space imaging system, a server array to manage data and facilitate analysis, and an interface for remote clients to observe and annotate images (Fig. 2). The imaging unit is a set of darkfield telecentric microscopes that can resolve objects from tens of microns to several centimeters (Table S2). The server organizes data from any number of underwater cameras and hosts a web interface. Remote users can view data in real time, add labels, and sort data according to several basic filters. The in situ imaging system can be configured in a number of ways to observe a broad size range of organisms and particles in the planktonic ecosystem. To date, we have developed four distinct in situ systems denoted by their target size range: the MACRO-, MINI-, MICRO-, and DUAL-SPC (Table S2). Each one is fitted with different microscope objectives, which change the resolution, sample volume, and data rate. The DUAL-SPC combines the MINI- and MICRO-systems into a single housing. All SPC instruments can be deployed autonomously, saving data to onboard storage before offloading to the server, or cabled, saving data directly to the remote server via Ethernet.
Mechanical

The underwater unit of the SPC remains largely the same, regardless of the particular deployment type, and consists of two housings: one containing the embedded computer, camera, and microscope optics; the other containing the illumination hardware (Fig. 2). Both housings have a clear port, made of acrylic or sapphire, to transmit or collect light. The two housings are mounted facing each other and are attached by standoffs or a rigid plate. The physical dimensions of each housing vary by deployment type (Table 1). The illumination is powered and triggered by the camera via a 5-pin Subconn cable (MCIL5).

Fig. 2. Free-space in situ imaging system concept. (a) Schematic of the entire framework. A subsurface underwater microscope (bottom) illuminates an object (plankton) in free space using a high-power strobed LED and darkfield optics. The scattered light is imaged by a microscope objective and tube lens onto a camera sensor. An embedded CPU and GPU process images from the camera in real time, save ROIs to local storage, and send them to a remote database (when available). On-site data servers host images and provide a client application for remote browsing and annotation of images. (b) An image of the MINI- and MICRO-SPC on the Scripps Pier before deployment. The camera housing is on top and the illumination housing below.

Optical

Microscope

A machine vision camera is fitted with a microscope objective and a tube, or telecentric, lens. Together, these optical components both magnify objects and move the image plane away from the port. The SPC has been fitted with a variety of objectives to target different sizes of organisms (Table 1). Further adjustments to the optical design can be made to optimize for a particular sampling protocol. The physical size of the housing containing the system is governed by the diameter and length of the telecentric lens, which in turn is dictated by the desired resolution.

Illumination

Darkfield transmission illumination was chosen to enhance the edges of translucent organisms with an index of refraction close to that of the surrounding water. This illumination technique improves the system's ability to resolve fine details of objects such as extremities or interior structures that are close to the focal plane. The optical setup also yields color images of transparent objects. Moreover, targets in the sample volume appear bright on a dark background because only scattered light is collected; this facilitates ROI extraction. As with all in situ imaging modalities, the image quality degrades as a function of water turbidity. For systems like the SPC, high concentrations of small particles will generate forward-scattered light, eliminating the darkfield effect and causing any segmentation routine to fail. The key parameter in designing darkfield illumination is the NA of the illumination. It must be greater than the NA of the imaging lens such that only scattered light is collected by the system (Gage 1920). If even a small amount of unscattered light leaks onto the collection sensor, the contrast will be significantly reduced. In most designs, this NA constraint is achieved by placing a darkfield stop in the illumination path. The size of the stop and the focal lengths of the lenses determine the minimum NA of the illumination. The illumination can be designed easily using nonsequential optics simulation software such as Zemax. However, in practice, the illumination must be adjusted manually after assembly to balance color, intensity, and imaging quality. It is essential to ensure the opto-mechanical design allows for these small adjustments. The MINI- and MICRO-SPC use similar illumination paths: expanding light from an LED point source passes through a collector lens and the central beam is blocked by an opaque stop. The remaining annulus of light is focused at the imaged volume by a condenser lens.
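Under a simple thin-lens approximation (our assumption here, not a formula from the text), the stop-size constraint can be checked numerically; the 6 mm stop radius, 25 mm condenser focal length, and imaging NA below are purely illustrative.

```python
import math

def min_illumination_na(stop_radius, condenser_f, n=1.33):
    """Thin-lens estimate of the inner NA of the darkfield annulus:
    rays grazing the stop edge reach the sample at roughly
    atan(r_stop / f). (Simplified geometry; as noted above, real
    systems are tuned by hand after assembly.)"""
    return n * math.sin(math.atan(stop_radius / condenser_f))

# hypothetical values: 6 mm stop radius, 25 mm condenser focal length
na_illum = min_illumination_na(6e-3, 25e-3)
na_imaging = 0.024  # illustrative low-NA imaging objective
darkfield_ok = na_illum > na_imaging  # darkfield condition: only scattered light imaged
```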
An object in the volume scatters light which is then collected by the camera (Fig. 3a). The MACRO-SPC uses a projection lens design to create a darkfield image over a larger sample volume. The darkfield stop blocks light from the LED at the source, yielding an expanding annulus of light. A series of plano-convex lenses collimate the source light from the LED, retaining enough angle to allow the beam to pass obliquely through the sample volume (Fig. 3b). This design retains the desired contrast-enhancing effect.

Biofouling

During the deployment of the SPC on the Scripps Pier, organism growth, sedimentation, and the presence of larger organisms in the sample volume have all interfered with imaging. Several approaches to mitigating biofouling have been tested, including water jets blown across the ports, copper components and strobed UV LEDs to limit growth, and mechanical wipers to remove settled material. In practice, we found that the most effective solution has been running Hydro-Wipers (Zebra-Tech; Nelson, New Zealand) once an hour to remove sediment and growth from the ports. This reduced the need for diver servicing of the system to a frequency of once a month. Copper mesh cages of various dimensions have also been used to prevent lobster and fish from inhabiting the sample volume. Other deployments of the SPC have had fewer issues with biofouling due to factors such as geographic location, physical movement of the system, and parking the housings below the euphotic zone (Supporting Information S1).

On-board processing

After an image is initially acquired, an on-board computer segments bright objects from the dark background in real time with an image processing routine written in C++. The software uses OpenCV routines and relies heavily on the multithreading library from POCO with a wrapper class from Open Frameworks (Bradski 2000). Raw, full-frame images are downsampled by a factor of 2 or 4 by block-averaging pixels.
This step is critical to allow megapixel-scale images to fit in the processor cache and be processed efficiently. A Canny edge detector is then applied to the downsampled images to find objects in the frame and a region-filling algorithm is used to close contours. For each ROI in the frame, a bounding box is drawn from the centroid of the region. The dimensions of the box are doubled to ensure that the object is completely segmented and that extremities are not cropped out. Pixels from the raw (not block-averaged) images in each box are extracted and saved locally before being exported to network storage (Fig. 4). Each ROI is tagged with a datetime stamp, the pixel coordinates of the upper left corner of the bounding box, and the ROI area. User-defined thresholds dictate the minimum and maximum areas of the bounding boxes to save and must be experimentally adjusted based on the water quality of the study site.

Fig. 4. Schematic of the region of interest (ROI) selection procedure. The raw image is downsampled by averaging pixels into 4- or 16-pixel blocks. A Canny edge detector is then used to detect edges in the image and a region-filling algorithm is used to fill closed contours. Bounding boxes with area within a specified range are then mapped back to the original raw image and pixels from the raw image are saved out as ROIs.

The first iteration of the SPC, built in 2013 and deployed in 2014, runs a lightweight distribution of Linux on a 1.8 GHz Quad Core Odroid XU3 board. With just 2 GB of RAM, this computer ran the camera, segmented ROIs, and saved images at 8 frames per second even when object densities were high. New versions of the system run on an Auvidea J120 carrier board with an NVIDIA Jetson TX1 embedded GPU. The new build allows real-time operation at 20 frames per second, supports simultaneous video and image capture and onboard classification with deep neural networks, and can run multiple cameras simultaneously (Supporting Information S1).
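A simplified stand-in for this ROI-extraction pipeline is sketched below. The real system downsamples and uses a Canny edge detector with contour filling in C++/OpenCV; here a plain threshold plus 4-connected flood fill finds bright regions in a tiny synthetic frame, and each bounding box is doubled about its center before cropping, mirroring the margin described above.

```python
def extract_rois(frame, thresh=128, min_area=1, max_area=10_000):
    """Find bright connected regions and return doubled, clamped
    bounding boxes as (y0, x0, y1, x1) tuples."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    rois = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] > thresh and not seen[y][x]:
                # flood fill one connected bright region
                stack, pix = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pix.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           frame[ny][nx] > thresh and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                ys = [p[0] for p in pix]
                xs = [p[1] for p in pix]
                bh, bw = max(ys) - min(ys) + 1, max(xs) - min(xs) + 1
                if not (min_area <= bh * bw <= max_area):
                    continue  # user-defined area thresholds
                # double the box around its center, clamped to the frame
                cy, cx = (max(ys) + min(ys)) // 2, (max(xs) + min(xs)) // 2
                rois.append((max(0, cy - bh), max(0, cx - bw),
                             min(h, cy + bh + 1), min(w, cx + bw + 1)))
    return rois

# synthetic 8x8 frame with one bright 2x2 "plankter"
frame = [[0] * 8 for _ in range(8)]
for y in (3, 4):
    for x in (3, 4):
        frame[y][x] = 200
rois = extract_rois(frame)
```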
We note that these are minimum hardware specifications for the operation of the system presented in the current paper. Embedded computing technology is rapidly improving and will undoubtedly yield enhanced performance for future in situ imagers.

Server and database

The server provides redundant data storage for ROIs and metadata; the database, an organization system that sorts the images by morphological features, time, taxonomic labels, and semantic tags; and web-based tools for browsing, searching, annotating, and categorizing ROIs (Fig. 2). The server loads a new ROI, converts the raw pixels to color, autoscales the pixel values to 8-bit, extracts morphological features such as major and minor axis length and aspect ratio, saves JPEG-compressed and uncompressed PNG files to the ROI storage, and creates a new database entry. The database entry holds the unique image ID, file path, major and minor axis length, aspect ratio, timestamp in UTC, and image height and width. This design can support hundreds of millions of ROIs in the database with 64 GB of RAM. Assuming a file size of 100 kB/ROI, 50 million images require 5 TB of storage; the equivalent number of full-frame images would require on the order of thousands of TB of storage space.

Web application

All ROIs can be browsed and downloaded via a JavaScript web application (spc.ucsd.edu). The ROI files are served by an nginx server that proxies requests from a Django-based application through a Gunicorn WSGI HTTP server. The Django application supports a REST API for searching and annotating images using the JavaScript application. The web application itself is highly extensible, allowing for easy modification and feature addition. The web app currently allows users to browse ROIs by date and time, major and minor axis length, aspect ratio, and human labels. Other functionality is easily built into the framework depending on the desired display interface.
A static version of the interface can be used to browse images in the field without access to a remote server.

Automated classification

Over time, the SPC can collect millions of individual ROIs, regardless of the deployment type. As with many other plankton imaging systems, the high data rate far outstrips a researcher's ability to manually sort it (Benfield et al. 2007). Automated classification is therefore critical to the ultimate success of the SPC framework. Many plankton researchers have begun looking toward machine learning to alleviate the human cost of classifying the data, and to expedite scientific results (Blaschko et al. 2005; Sosik and Olson 2007; Ellen et al. 2019). Experiments with automated classification, particularly modern deep learning methods, have been conducted throughout the development process of the SPC. We have selected two studies being done with SPC data and machine learning to illustrate the sorts of information that can be extracted from the system. Each experiment uses a different lightweight neural network architecture, but the same supervised training procedure: fine tuning. Fine tuning takes networks that were previously trained on images of diverse macroscopic objects and refines them to examine plankton. The method improves classification scores when there is limited training data and is demonstrably effective for oceanographic applications (Yosinski et al. 2014; Orenstein and Beijbom 2017). Moreover, the technique allows practitioners to use the best available system from the machine learning community to initialize their own. All training and testing were done on a server-based NVIDIA GTX-1080.

Study 1: Observation of a host-parasite interaction

During human inspection of ROIs collected at the Scripps Pier, it was noted that the cosmopolitan copepod Oithona sp. was often infected by a parasite presumed to be Paradinium sp. (Fig. 5).
We believe this to be the first observation of this parasite on its copepod host in the Pacific Ocean. Paradinium sp. is a parasitic rhizarian that grows in the copepod's hemocoel, migrates to the digestive system, is expelled through the anus, and attaches as a cell mass called a gonosphere to the anal somite in the host's urosome (Shields 1994; Skovgaard and Daugbjerg 2008). The life cycle of Paradinium is poorly understood due to difficulties studying it in the lab and the field. The obvious external gonosphere remains attached to the host for less than an hour before bursting and releasing spores into the environment. Moreover, the gonosphere is fragile and can break off, making it difficult to find in net samples. The gonosphere is thus the only portion of the Paradinium life cycle visible in the SPC images. We took advantage of the SPC's dense temporal sampling to generate a time series indicating the prevalence of the parasite. This entailed training and applying a version of the AlexNet convolutional neural network written in Caffe (Krizhevsky et al. 2012; Jia 2013). The process required creating a human-curated training image set, using it to train the computer classifier, and finally applying it to all the images not observed by the human. Images of parasitized, apparently unparasitized, and ovigerous Oithona sp. were labeled from 58 randomly selected, nonconsecutive 4-h chunks of time during the summer of 2015. All ROIs not belonging to the three groups were considered noise. ROIs from the MINI-SPC were prefiltered by major axis length to between 0.5 and 2.5 mm (the general size range of Oithona, with ±0.5 mm to account for foreshortening) before human observation. Approximately 650,000 ROIs were sorted into the four classes. AlexNet was fine-tuned from a version originally trained on ImageNet data (Russakovsky et al. 2014). The final three fully connected layers were removed and retrained with the human-labeled Oithona images.
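Removing the final layers and retraining them on new labels is the essence of fine tuning. The toy below is not the AlexNet/Caffe pipeline; it freezes a made-up "pretrained" feature extractor (standing in for the frozen convolutional layers) and retrains only a logistic head on a few synthetic two-class points. All function names and data here are hypothetical.

```python
import math

def pretrained_features(x):
    """Frozen stand-in for the pretrained convolutional layers:
    a fixed mapping that is never updated during fine tuning."""
    return [x[0] + x[1], x[0] - x[1], x[0] * x[1]]

def train_head(data, lr=0.1, epochs=1000):
    """Retrain only the final (logistic) layer on the new labels."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:                      # y in {0, 1}
            f = pretrained_features(x)
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                          # gradient of the log loss
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

# synthetic two-class training points (e.g., "target vs. noise")
data = [((0.0, 0.0), 0), ((0.1, 0.2), 0), ((1.0, 1.0), 1), ((0.9, 1.1), 1)]
w, b = train_head(data)

def predict(x):
    f = pretrained_features(x)
    return int(sum(wi * fi for wi, fi in zip(w, f)) + b > 0)
```

Only w and b are learned; the feature extractor is untouched, which is why fine tuning works with comparatively little labeled data.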
The network was then trained for 40,000 iterations with a base learning rate of 0.0002. The final classifier achieved 89% accuracy on an independent test set. The trained network then classified all ROIs captured by the MINI-SPC from March 2015 to April 2016.

Study 2: Tracking bloom-forming species

A classifier was developed to find potential harmful algal bloom (HAB) formers in the MICRO-SPC time series to assist SCCOOS researchers. Thirteen common taxa, including seven potential harmful algal bloom formers, and eight noise categories were identified for this preliminary study. A human domain expert sorted objects into these categories using the SPC online interface. The annotated data set was then used to fine-tune a deep residual network: the ResNet34 implementation, originally tuned for ImageNet data (He et al. 2016). Each class was represented by 1000 labeled examples: 800 for training and 200 for validation. Classes with more than 1000 examples were subsampled and those with fewer were augmented with randomly affine-transformed images as needed (Orenstein and Beijbom 2017). The best resulting classifier was applied to all MICRO-SPC data from 2018 (Fig. 6).

Performance at the Scripps Pier

The SPC was originally deployed on the Scripps Pier in La Jolla, California. The system was designed to supplement the existing Scripps Pier plankton time series that monitors harmful algal bloom species at weekly intervals. Water samples and net tows are performed at the end of the pier and the samples are examined under a microscope to enumerate species. The SPC was envisioned as a tool to track population fluctuations during the periods between the net samples. Researchers also wanted to study local zooplankton populations in addition to the phytoplankton and microzooplankton of interest to the HAB monitoring program. These plankton range in size from tens of microns to several centimeters.
A single free-space imaging system could not effectively sample this entire size spectrum. Combining the MINI- and MICRO-SPC allowed us to effectively image organisms over the entire desired size range. The system uses 0.5× and 5× microscope objectives contained in separate housings. The two units are attached to a single frame mounted to a pier piling and connected to a surface unit via a single Subconn Ethernet cable. Together, the two cameras collect thousands to millions of ROIs per day, depending on the ambient particle density. All ROIs are stored on a remote server. To date, the pair of instruments has collected over 1 billion individual ROIs. The system has sampled during blooms of long chain-forming diatoms, captured short-term fluctuations of ecologically important species, observed high volumes of gelatinous organisms, and imaged a form of parasitism never observed at the Scripps Pier.

Machine classification

Preliminary classification work has been done using data from both the MINI- and MICRO-SPC on the Scripps Pier. We stress that these are preliminary results and do not claim to draw firm ecological conclusions. They are presented here to illustrate the procedure and the types of questions that can be addressed.

Oithona

A version of AlexNet was fine-tuned and run over all data in the appropriate size bin for a calendar year to search for Oithona sp. and a possible parasite. The classifier achieved an accuracy of 89% on an independent test set. To estimate the performance of the classifier, a human expert observed all data sorted by the machine annotator into the three classes of interest between March and August of 2015 (Table 2). The classifier had the most trouble identifying egg-bearing Oithona, falsely identifying 32% of the ROIs. We note that the false negative rate is extremely low. A subset of ROIs classified as noise was also observed to estimate the classifier's false negative rate.
A total of 29,459 ROIs were examined; only 1 parasitized, 1 ovigerous, and 3 healthy Oithona sp. were missed by the CNN, a false negative rate of approximately 0.0001%. The classifier output was used to estimate the fraction of parasitized and ovigerous individuals relative to the total local Oithona population between March 2015 and April 2016 (Fig. 7). Counts were binned on a daily basis and smoothed with a 24-h Gaussian filter. While these outputs are uncorrected for false detections, the low false negative rate derived from the validated data indicates that the gross patterns are indicative of the true signal.

Harmful algal bloom

A ResNet34 was fine-tuned using a small set of images from the MICRO-SPC to observe fluctuations in phytoplankton and microzooplankton. The machine annotator was highly accurate on an independent test set, scoring 98% over the 38 classes. The trained classifier was used to sort a random sample of 10,000 ROIs from each day of 2018. Previous studies have noted that classifier performance varies as a function of changes in the underlying population being observed (Moreno-Torres et al. 2012). Examples of so-called "dataset shift" have been noted in time series studies of plankton (González et al. 2017). To examine the issue in the context of the SPC, we estimate the classifier's performance on four classes in two contexts: elevated and normal prevalence of an organism of interest. Spikes in the abundances of Akashiwo sanguinea, Cochlodinium spp., Lingulodinium polyedra, and Polykrikos spp. were highlighted by computing the fractional anomaly on a class-by-class basis at daily intervals. The fractional anomaly is defined as the total number of ROIs of a class on a given day relative to its annual mean. Thus, if the classifier identifies a greater number of ROIs than the mean, the fractional anomaly is greater than one; the larger the fractional anomaly, the more unusual the observation (Fig. 6).
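The fractional anomaly defined above lends itself to a compact computation. The sketch below assumes daily classifier counts for one class are available as a plain list; the helper flagging days more than two standard deviations above the mean mirrors the split between "normal" and "elevated" abundance described in the text, but the function names are ours.

```python
def fractional_anomaly(daily_counts):
    """Daily ROI count for one class divided by its annual mean.
    Values > 1 mean more detections than average for that class."""
    mean = sum(daily_counts) / len(daily_counts)
    return [c / mean for c in daily_counts]

def elevated_days(daily_counts, n_std=2.0):
    """Indices of days whose fractional anomaly exceeds the mean
    anomaly by n_std standard deviations; everything else is the
    'normal' regime."""
    anom = fractional_anomaly(daily_counts)
    mu = sum(anom) / len(anom)
    var = sum((a - mu) ** 2 for a in anom) / len(anom)
    sd = var ** 0.5
    return [i for i, a in enumerate(anom) if a > mu + n_std * sd]
```

For example, nine quiet days followed by a tenfold spike yield anomalies near 0.53 on the quiet days and about 5.3 on the spike, and only the spike is flagged as elevated.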
We define "normal" as periods when the fractional anomaly was within two standard deviations of the mean. To estimate the false detection rate under normal relative abundances, 10 ROIs of the organism were randomly selected from the classifier output on 20 random days. When the classifier selected fewer than 10 ROIs on a given day, all ROIs were retained. The process was repeated until 200 ROIs had been selected. A human expert examined a mosaic of these ROIs and counted the number of false positives (example for A. sanguinea; Fig. S1). The average false detection rate during these low-abundance periods was quite high for all organisms. A single day of elevated conditions was evaluated for each of the four example organisms. Two hundred random ROIs from the class of interest were drawn and examined to estimate the false detection rate (example for A. sanguinea; Fig. S2). Likewise, 200 random samples were drawn from all other classes to produce a false omission rate (example for A. sanguinea; Fig. S3). Note that this procedure provides only a single estimate of classifier performance during periods of elevated abundance. The false detection rate on elevated days is substantially lower than during baseline periods. Likewise, the false negative rate appears negligible for all four organisms evaluated on their peak abundance days (Table 3).

Discussion

When the SPC was under development, we envisioned it as a digital plankton net: a system that would capture and analyze undisturbed organisms in a given size range, eliminating the need for time-consuming and costly enumeration of physical specimens. The SPC has indeed been a successful filter, acquiring images of objects large and small in their natural habitat. Moreover, it has enabled novel sampling designs by sampling densely in time, producing data in real time, and supporting untethered deployments.
There is, however, no single instrument that can observe the entire size spectrum of the myriad microscopic denizens of our oceans. There are fundamental physical limitations, defined both by instrument design and by the environment itself, to what can be observed by any single instrument. Taking both the instrument and the environment into account is critical when developing a system and designing an experiment or observational scheme to address a particular question. The trade-offs between system resolution and expected observational ability must be carefully considered when designing experiments to target a population of interest. Our model of free-space microscopy performance as a function of planktonic size spectra is an effective guideline for selecting the appropriate hardware. It also serves to highlight when multiple systems are appropriate and how to maximize their efficacy for a given sampling protocol. The SPC lends itself to designing studies in this way. The system's flexibility allows it to be deployed in many configurations and on a variety of platforms. When designing any experiment with imaging microscopes, the entire framework (the in situ imager, real-time image processing, and consistent database practices for storage and annotation) must be treated as a cohesive whole. Acting in concert, these elements enable arbitrary expansion of a plankton sampling infrastructure. One can imagine using such a system to develop a distributed network of complementary in situ observatories working together to better understand the planktonic ecosystem (Lombard et al. 2019).

Hardware comparison

The SPC contributes to an increasingly broad universe of plankton imaging systems, many of which were compared by Lombard et al. (2019). Table 4 follows the layout of their comparison to contextualize the SPC (see table 1 in Lombard et al., 2019).
The choice of a dual-resolution deployment of the SPC on the Scripps Pier is akin to the use of two magnifications on the VPR (Davis et al. 2004). In both cases, the research teams determined that sampling the whole desired size range required parallel imaging systems. The VPR, however, is strictly a towed array and is not well suited for moored deployments. It is also not appropriate for fully autonomous deployment, as it does not have on-board processing capabilities. Other instruments have been designed that can be built with several magnifications, but none to our knowledge have been deployed in parallel, in situ. Many of the systems discussed in Lombard et al. (2019) were purpose-built for a particular deployment type (towed or moored, in situ or deck-based) and for targeting certain organisms. The diversity of sampling methodologies makes direct comparison very challenging. Lombard et al. present a schematic visual aid describing the range of sampled organisms, but do not compare the sample rate (time to collect a statistically relevant number of ROIs of a given size) of each system. The SPC is not a replacement for any of these instruments. It is instead best viewed as a complementary system. For example, when the SPC is moored it sacrifices spatial resolution for temporal resolution; a mobile or profiling system might add an additional dimension to the protocol. The range of body sizes of plankton, the temporal variability in their populations, and the spatial heterogeneity in their distributions necessitate a holistic sampling approach, one that integrates several systems with overlapping resolutions and different deployment strategies.

On-board processing

Part of what makes the SPCS adaptable and reconfigurable is the embedded computer. Having processing power and storage on board makes it relatively easy to reprogram the instrument for different missions.
We have experimented with alternative deployments that have used the SPCS for tow-yos and vertical profiles with minimal hardware modifications aside from the addition of a battery pack (Supporting Information S1). Many of these configurations have required the system to store data on board until the instrument is recovered. The ability to process images into ROIs in real time allows the instrument to operate for much longer without filling the local storage. One can imagine reconfiguring and reprogramming the instrument to sample in many environmental conditions with varying degrees of human intervention. The most extreme case might be a remotely deployed system transmitting relative species abundance estimates via satellite. Doing so would require substantial effort in quantifying ROI extraction performance and understanding the limitations of the trained automated classifier (Bi et al. 2015; Orenstein et al. 2020).

Automated analysis comparison

Many groups have begun integrating machine learning techniques into their workflows. Indeed, this is a necessity as increasing amounts of data are collected by digital imaging systems. These processing techniques are all very similar, implementing some form of supervised learning: training an algorithm on a curated set of labeled images, evaluating it on an independent subset, and then applying the algorithm to incoming data. In the past 5 yr, virtually all plankton image classification schemes have adopted a flavor of deep learning (Luo et al. 2018; Ellen et al. 2019; Briseño-Avena et al. 2020). These methods are quite accurate when evaluated using a random subset of the training data, typically achieving accuracies around 90%. The two studies with SPC data outlined above are no different; both did well on independent test sets. It is therefore not surprising that accuracies on independent subsets of the training data are uniformly high.
Despite high accuracy in testing, system performance often degrades when applied to new incoming data, an issue known as dataset shift in the machine learning community (Moreno-Torres et al. 2012). Our results also suffered from dataset shift, with accuracies dropping when considering new images (Tables 2 and 3). We believe that our approach of performing further human evaluation on new classifier output will help mitigate errors in population estimates. The server backend is built for compiling the output of classifiers and quickly displaying the results via the front-end web interface. This type of pipeline is crucial to maximizing the work hours of human experts.

Automated analysis considerations

System performance will be affected by changes in the relative distributions of particles in the water. This could manifest as a bloom saturating the signal, a storm suspending sand or other particles, or larger organisms taking up residence in the sample volume. These noise sources vary dramatically among locations and with the population of study. Specific environments and deployments will require careful consideration when selecting instruments, selecting on-board filtering criteria, and making storage provisions. We have dealt with noise in two primary ways: (1) sorting images offline to remove the noise when there is a consistent underlying noise source; or (2) subsampling when a noise source is abundant enough to overwhelm the real-time processing. Removing noise in post-processing is simply a classification procedure: a system is designed to remove the irrelevant data points. When the noise is high enough to inhibit the on-board image processing, the data rate can be reduced to accommodate the influx of objects. This can be done either by saving a random assortment of ROIs or, in extreme cases, by saving full-frame images at longer intervals.

Automated and manual annotation

With the rapid development of digital imaging systems comes a concomitant data problem.
For systems like the SPC to be truly effective, careful consideration must be given to classification procedures. While there are promising early returns for unsupervised classification, the majority of imaging systems currently make use of expert-trained algorithms (Schröder et al. 2020). In either learning paradigm, special attention must be paid to how highly trained human annotators sort the dataset. Well-designed software tools and experimental procedures can facilitate rapid development of effective automated classifiers (Gomes-Pereira et al. 2016). The two case studies presented in this work demonstrate both the feasibility and the limitations of such procedures. In both cases, we designed deliberately biased classifiers by forcing the training set distribution to be even rather than mirroring the relative distributions of the underlying populations. This was done because of the limited available training data and a desire to make the classifiers sensitive to large population changes (González et al. 2017). Indeed, both classifiers acted as effective detectors, if not perfect classifiers. These preliminary studies with machine learning demonstrate that such techniques could eventually be quite effective. In future work, we will validate the output by directly comparing estimated relative abundances from the cameras on the Scripps Pier to values observed by the SCCOOS HABMAP program (https://sccoos.org/harmful-algal-bloom/). Consideration will also be given to understanding how the shifting nature of the population being observed affects the output of a classifier (Moreno-Torres et al. 2012). Developments in this area will facilitate "smart" sampling by using trained classifiers on the camera's embedded computer to further filter data. In the ideal case, such systems would be able to output counts of organisms rather than images, enabling autonomous deployment of imaging systems on long-duration platforms.
Free-space imaging limitations

When using traditional microscopy with free-space imaging, the imaged volume is inherently size-dependent: small objects will blur into the background over a shorter distance along the optical axis than large ones. This has been characterized in our model and to a large extent does not significantly impact studies of changes in relative abundance. However, rigorous calibration is necessary when moving from relative to absolute abundance. We envision accomplishing this through a series of dilution experiments for species of interest. A known concentration of the species is prepared, diluted, and subsequently imaged by a bench-top version of the system. This is then repeated for several different size classes, and an effective imaged volume is estimated for each size. This is relatively straightforward for phytoplankton but likely quite challenging for larger zooplankton. In that case, constraining the distance between viewports to be within the depth of field provides a better solution. This method is used for the MACRO-SPC to provide a known imaged volume.
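The dilution procedure described above reduces to a simple estimate: at each known concentration, the mean ROI count per frame divided by the concentration gives an apparent imaged volume, and the dilution series can be averaged into a single value per size class. This is a minimal illustration of that idea; the function name and the choice to average across dilutions are our assumptions.

```python
def effective_imaged_volume(concentrations_per_ml, mean_counts_per_frame):
    """Estimate the effective imaged volume (mL) for one size class
    from a dilution series: each (concentration, mean count) pair gives
    count / concentration as a volume estimate, and the series is
    averaged into a single calibration value."""
    vols = [n / c for c, n in zip(concentrations_per_ml, mean_counts_per_frame)]
    return sum(vols) / len(vols)
```

Repeating the fit for several size classes yields the size-dependent volume needed to convert relative counts into absolute abundances.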
AUTHORSHIP VERIFICATION USING MODIFIED PARTICLE SWARM OPTIMIZATION ALGORITHM

Digital forensics is the study of the recovery and investigation of material found in digital devices, mainly computers. Forensic authorship analysis is a branch of digital forensics. It includes tasks such as authorship attribution, authorship verification, and author profiling. In authorship verification, given a set of sample documents D written by an author A and an unknown document d, the task is to determine whether document d was written by A or not. Authorship verification has previously been done using genetic algorithms, SVM classifiers, etc. The existing system creates an ensemble model by combining features based on similarity scores, with parameter optimization done using a grid search. The accuracy of verification using the grid search method is 62.14%. The time complexity is high, as the system tries all possible combinations of the features during the ensemble model's construction. In the proposed work, Modified Particle Swarm Optimization (MPSO) is used to construct the classification model in the training phase instead of the ensemble model. In addition to the combination of linguistic and character features, Average Sentence Length is used to improve the accuracy of the verification task. The accuracy of verification has been improved to 63.38%.

Introduction

Digital forensics locates evidence on computers, mobile phones, and networks [1]. Its branches are computer forensics, network forensics, forensic data analysis, and mobile device forensics [2]. Tasks in digital forensics include Authorship Verification (AV), author clustering, and text classification. Text classification is the task of classifying a document into one or more classes. It can be done either manually or algorithmically. Manual categorization of documents is widely used in library science.
Algorithmic classification of documents is used in information science and computer science. It is done according to either the subject or the content of the text document. The process of determining whether a given unknown document x was written by the same author who wrote a given set of known documents D is known as Authorship Verification (AV). It can be viewed either as a one-class classification or as a multi-class classification task, and it can be used for tasks like intrinsic plagiarism detection. The classical approach to authorship verification involves four steps [3]. Documents, articles, or online messages composed by potential authors are gathered from the web. After gathering the documents, the unstructured texts are represented as vectors of writing-style features. The training data are used to train the classification model. The resulting model is used to predict the authorship of the unknown documents.

RELATED WORKS

The techniques for verifying the authorship of an unknown document differ in the information sources used, the classification approach, and the learning methods used. There are two approaches to the authorship verification task based on information sources: the extrinsic method and the intrinsic method. The extrinsic method transforms a one-class classification task into a binary classification task [4]. It requires information from external resources in addition to the information about the set of known documents written by the author and the unknown document for which authorship verification is to be done. Authorship verification systems based on the extrinsic method have performed well. One of the best examples is the Impostor Method (IM) proposed by Koppel and Winter (2014) [5]. In order to decide whether X and Y were written by the same author, a set of "impostor" documents is systematically produced.
If X is sufficiently more similar to Y than to any of the generated impostors, then X and Y were written by the same author. The measurement of document similarity depends on choosing random subsets of the features used while comparing the documents. In the intrinsic method, only the information about the set of known documents written by the author and the unknown document for which authorship verification is to be done is required [6]. Recently, authorship verification [7] used the CART algorithm (Classification and Regression Trees) to decide whether an unknown document was written by A or not. Another example is the Profile-Based Method for Authorship Verification [8] proposed by Potha and Stamatatos, which is based on the intrinsic method, as it does not need any external information to decide whether the unknown document was written by the same author. Authorship verification can be viewed as a one-class or multi-class classification problem. In one-class classification, the training dataset consists of elements of a specific class; the task is to identify the elements which belong to that class within a collection of elements belonging to different classes. Authorship verification is the application of one-class classification methods to stylometric datasets [4]. Authorship verification is a difficult problem if modeled as a one-class classification task. The challenge lies in determining the boundaries between the class elements and the outliers without negative examples, or at least without exhaustive and representative positive examples [9]. Though it is difficult, one-class classification is more efficient for deciding whether the unknown document was written by the same author or not. Koppel and Winter (2014) [5] transformed the one-class classification problem of AV into a multi-class classification problem and then calculated the similarity score.
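The impostor scheme attributed to Koppel and Winter above can be sketched as follows. The similarity function, the fraction of features drawn per trial, and the acceptance threshold are illustrative assumptions rather than details from [5].

```python
import random

def impostor_verdict(sim, x, y, impostors, n_trials=100, frac=0.5,
                     threshold=0.5, seed=0):
    """Sketch of the impostor method: over many trials, compare Y to X
    and to a set of impostor documents using a random subset of
    features each time; attribute Y to X's author only if X beats every
    impostor in at least `threshold` of the trials. `sim(a, b, feats)`
    is an assumed similarity function over the chosen feature indices."""
    rng = random.Random(seed)
    n_feats = len(x)
    wins = 0
    for _ in range(n_trials):
        feats = rng.sample(range(n_feats), max(1, int(frac * n_feats)))
        score_x = sim(x, y, feats)
        if all(score_x > sim(imp, y, feats) for imp in impostors):
            wins += 1
    return wins / n_trials >= threshold
```

The repeated random feature subsets are what make the comparison robust: a spurious winner on one subset rarely wins across most of them.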
The learning methods used in authorship verification are instance-based learning and profile-based learning. In instance-based learning, the known documents in the training corpus are not concatenated after pre-processing. They are kept separate, and similarity is checked between each of the known documents and the unknown document [9]. It is called instance-based because it constructs the hypothesis from each document. Complexity increases with the amount of data; it is a kind of lazy learning, and the time for training the model is high when using the instance-based approach. In the profile-based learning approach, after pre-processing, all the documents of known authorship in the training corpora are concatenated to form a single large corpus [9]. Then the most frequent n-grams in the corpus are identified and compared with the most frequent n-grams in the unknown document using a dissimilarity function. Based on the threshold obtained, the authorship of the unknown document is verified using binary values [9]. The existing system is a profile-based system, as all the known documents in the training corpus are concatenated after pre-processing is completed. This reduces the time complexity of the training phase. Features are used to capture the individual writing style of each author to distinguish them from others. The features used for authorship verification are lexical, syntactic, semantic, character, and application-specific features. The number and types of features used for authorship identification were varied in order to determine the influence of each type of feature [10].

3. PROPOSED SYSTEM

In the proposed approach, MPSO is used for the construction of the ensemble models used for the verification task.

Figure 1: System Architecture of the proposed system

The system architecture of the proposed system is shown in Figure 1. The collected dataset is pre-processed, and a subset of the corpus is used in the training phase.
All ten features used in the proposed system are derived, and a similarity score between the known and the unknown documents is calculated. Then parameter optimization is done using the grid search method, i.e., the best parameter value of each feature is identified, and an MPSO model is constructed using those features. In the testing phase, the testing corpus is given as input, and the authorship of the unknown documents is verified using the proposed MPSO model.

Pair-wise similarity score calculation

The similarity between the known and the unknown documents is calculated. The feature types used in this system include punctuation n-grams, Average Sentence Length, character n-grams, token k-prefixes, token k-suffix n-grams, n-prefixes k-suffixes, and n% frequent tokens. The Average Sentence Length (ASL) feature is also used to improve the accuracy of verification.

Construction of the feature vector

Once the feature vector is built, the Manhattan distance between the known document X and the unknown document Y is calculated as the sum of the absolute differences of their feature vector components. The output of the Manhattan distance is transformed into a likeness score using the following equation:

sim(X, Y) = 1 / (1 + dist(X, Y)) (3.4)

Parameter optimization using grid search

In order to increase accuracy, parameter optimization is carried out to choose the best parameters. The acceptance threshold Ө is calculated separately for each feature category. It is used to classify the problem P with Yes or No based on the classification function, where Sp denotes the similarity score of an unknown document: the answer is Yes if Sp is at least Ө and No otherwise. The value of Ө is chosen so that the Equal Error Rate (EER) is obtained for the problems in the corpus C (the false positive rate equals the false negative rate) during the classification of training problems. The threshold Ө is not necessarily located at the intersection point of the two probability functions, but near it.
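The pairwise score of Eq. (3.4) can be computed directly; the sketch below assumes the feature vectors are numeric lists of equal length.

```python
def manhattan_distance(x, y):
    """L1 distance between two feature vectors: sum of absolute
    component-wise differences."""
    return sum(abs(a - b) for a, b in zip(x, y))

def similarity(x, y):
    """Eq. (3.4): maps the distance into (0, 1]; identical vectors
    score exactly 1, and the score decays as the distance grows."""
    return 1.0 / (1.0 + manhattan_distance(x, y))
```

A document pair is then accepted when `similarity` meets the per-feature threshold Ө found by the grid search.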
The EER is chosen as the criterion to determine values for Ө, since the performance measure in the existing system gives equal weight to false positives and false negatives [10]. For the balanced corpora in the existing system, the threshold is determined as the median of all similarity scores. The above steps are repeated for each feature in Fi and all of its possible combinations of the parameters n and k. The accuracies of all possible parameter combinations of each feature are obtained; then, for each feature, the parameter value that leads to maximum accuracy is obtained and stored as the model M.

Construction of the ensemble model using the Modified Particle Swarm Optimization algorithm

From the model M created using parameter optimization, ensemble models are created using the MPSO algorithm. In MPSO, the cognitive component of the general PSO is split into two distinct components. The first is the good-experience component: the particle has memory of its previously visited best position, as in the general PSO. The second is the bad-experience component: it gives the particle memory of its previously visited worst position. The bad-experience component is also taken into account when computing the velocity of the particle [11]:

vi(t+1) = ω·vi(t) + C1·R1·(pbesti - xi(t)) + C2·R2·(gbest - xi(t)) + C3·R3·(xi(t) - pworsti)

where ω is the inertia weight; C1, C2, and C3 are the acceleration coefficients; pworsti is the worst position of the particle; and R1, R2, and R3 denote uniformly distributed random numbers in the range (0, 1). The position update equation is the same as in the general PSO algorithm.
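A minimal feature-selection sketch of the good-experience/bad-experience update described above. Binary positions are obtained with a sigmoid transfer function, which, like the coefficient values and helper name, is our assumption rather than a detail of [11].

```python
import math
import random

def mpso_select(n_features, fitness, n_particles=20, generations=30,
                w=0.7, c1=1.5, c2=1.5, c3=0.5, seed=0):
    """Binary MPSO sketch: each particle is a 0/1 feature mask. The
    velocity update adds a bad-experience term pushing the particle
    away from its worst-seen mask, alongside the usual pbest/gbest
    attraction terms."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pworst = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    pworst_f = pbest_f[:]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(generations):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2, r3 = rng.random(), rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d])
                             + c3 * r3 * (pos[i][d] - pworst[i][d]))
                # sigmoid transfer keeps the position binary
                pos[i][d] = 1 if rng.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
            if f < pworst_f[i]:
                pworst[i], pworst_f[i] = pos[i][:], f
            if f > gbest_f:
                gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

In the verification setting, `fitness` would score a feature mask by the accuracy the corresponding feature combination achieves on the training problems.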
The steps of the Modified PSO method [11] are:
Step 1: Select the number of particles and generations, the tuning acceleration coefficients c1, c2, and c3, and the random variables R1, R2, R3 to start the search for the optimal solution.
Step 2: Initialize the particle positions and velocities.
Step 3: Select each particle's individual best value for each generation.
Step 4: Choose the particle's global best value.
Step 5: Choose each particle's individual worst value.
Step 6: Substitute the particle's individual best pbest, global best gbest, and particle worst pworst into the velocity equation and find the updated velocity.
Step 7: Substitute the new velocity into Eq. (3.7) to obtain the location of the particle.
Step 8: Repeat all steps until the required accuracy is attained.

In the proposed technique, the number of particles is set to 20. The number of generations varies between 10 and 100. Each particle is a set of randomly chosen features, denoted 1 if the feature is present and 0 if it is absent in the particle.

RESULTS AND DISCUSSION

In the authorship verification task, one of the difficulties is locating a standardized, suitable, and publicly available corpus. PAN organizes many shared tasks on authorship attribution, authorship verification, and author clustering. PAN also provides publicly accessible corpora for these tasks. The dataset used in the current system is the English corpus used for PAN 2013 and 2014 [6]:
• PAN 2014: Training corpora with one hundred and ninety-seven English essays and seventy-five novels are used.
• The testing corpora with two hundred and thirty English essays and two hundred and thirty English textbooks are used.
The performance of the proposed authorship verification system is evaluated using the accuracy measure, i.e., the fraction of problems classified correctly. The classification model generated using the MPSO algorithm has a higher accuracy than the ensemble model generated by the grid search; the results are shown in Table 1.

Table 1: Accuracies of Classification Models

From the table, it is evident that the first feature (punctuation n-grams) performs well in both the existing and the proposed methods. The time taken to construct the ensemble model using the MPSO algorithm and evaluate its performance is 23 minutes, whereas the time taken in the existing system is 54 minutes. The proposed method thus has a low time complexity compared to that of the existing system. The accuracy has also increased from 62.14% in the existing system to 63.38% in the proposed method.

CONCLUSION AND FUTURE WORK

Authorship verification is the task of deciding whether two documents originate from the same author. It is used in plagiarism detection. The existing system performs the authorship verification task by optimizing the parameters and constructing ensemble models using the grid search method. The drawback of the grid search method is its high time complexity. In the proposed method, the authorship verification task is performed using the classification model developed with Modified PSO. Every particle is assigned a random position and velocity, which are updated in every generation. In the MPSO algorithm, every particle has memory of its best and also its worst position. The main advantage of the proposed method is its time complexity: the time taken for the development of the classification model and the verification task by the proposed system is low compared with that of the existing system. Average Sentence Length, a lexical feature, is added to improve the accuracy of classification. The proposed classification model was tried on a subset of the PAN 13 and PAN 14 English-language datasets.
The accuracy of classification in the proposed model was already improved by the seventh generation itself.
Elastoplastic constitutive modeling for concrete: a theoretical and computational approach

Abstract

This article presents a study of the applicability of plasticity models to concrete within a theoretical framework that generalizes the formulation of constitutive models for the physically nonlinear analysis of structures. In this sense, the theoretical framework for the computational implementation of the mathematical theory of plasticity is described, detailing the model formulations capable of describing the inelastic behavior of concrete. The loading surfaces associated with the Drucker-Prager and Ottosen criteria are highlighted. Furthermore, the Cutting Plane return mapping algorithm, necessary for the integration of the constitutive relations that govern the behavior of the material in the context of computational plasticity, is described. Finally, numerical simulations are presented, such as direct tension loading and three-point bending tests. The results of these simulations are compared with those from the literature to verify the model's stability and accuracy.

Introduction

A realistic solution for a structural problem involving concrete depends in large part on the choice of an appropriate constitutive model. The mechanical response of concrete is complex, and it seems unlikely that any phenomenological approach would be able to capture all the possible variations in the characteristics of the material. Even if a perfect model for concrete could be built, it would be too complex to serve as a basis for the stress analysis of practical problems (CHEN; HAN [1]). However, intensive studies over the past decades have led to a better understanding of the constitutive behavior of quasi-brittle media. Research focused on modeling the mechanical behavior of concrete has led to formulations such as the mathematical theory of plasticity, a necessary extension of the theory of elasticity which provides a more realistic approach to the behavior of the material.
The theory of plasticity seeks to mathematically describe immediate and non-reversible deformations that occur in a solid body, i.e. the deformations that do not disappear completely when the causal forces are removed (CHEN; HAN [1]; LUBLINER [2]; SOUZA NETO et al. [3]). The description of the elastoplastic behavior of concrete from the macroscopic point of view (designated as phenomenological behavior) for multiaxial stress states is based on the following assumptions: (1) The existence of an elastic domain, that is, a region in which the material behaves as if it were purely elastic, without the appearance of permanent strains. The elastic domain is delimited by a yield function written in terms of the yield stress. (2) The occurrence of inelastic deformation, that is, the deformations associated with stresses above the yield stress, whose evolution can be described by a flow rule. (3) The occurrence of the phenomenon of strain hardening of the material, in other words, the possibility of an increasing yield stress following the evolution of the inelastic deformations. Rupture criteria for concrete are classified according to the number of parameters that appear in their defining expressions. Simple models should be used, representing only those properties that are essential for the problem under consideration (CHEN; HAN [1]). To this purpose, this work presents the theoretical and computational framework necessary for the implementation of elastoplastic constitutive models, especially the Drucker-Prager and Ottosen models, in the INSANE computing system (INteractive Structural ANalysis Environment), also describing the Cutting Plane return mapping algorithm, necessary for the integration of the constitutive relations governing the behavior of the material in the context of computational plasticity.
Plasticity for concrete The classical theory of plasticity was originally developed for the study of metals, and some of its fundamental assumptions are not safe for other engineering materials such as concrete. Nevertheless, metals and concrete still share some similarities, particularly in the pre-failure regime. For example, concrete exhibits a nonlinear stress-strain behavior during loading and retains substantial irreversible deformation upon unloading. Especially under compressive loads with confining pressure, concrete may show some ductile behavior. Thus, the irreversible strains of concrete, although induced by microcracking, can be treated by the theory of plasticity (CHEN; HAN [1]). A variety of constitutive models have been proposed in order to mathematically reproduce the stress-strain relationships of the material under different load conditions. The majority adopt a phenomenological approach, i.e., they describe the material from the macroscopic point of view, neglecting the microscopic mechanisms and considering the material medium as continuous and homogeneous. The plasticity approach falls into this category. A constitutive model suitable for concrete structures requires a complete description of the material behavior, such as the one shown in Figure [1]. From a macroscopic point of view, classical plasticity can simulate the behavior of concrete particularly in the pre-peak regime, such as the nonlinearity of the stress-strain curve and the irreversible strains after loading. Many papers have sought to adapt the classical theory of plasticity in order to obtain a better representation of concrete (PARK; KIM [5]; GRASSL; JIRASEK [6]). In modeling concrete plasticity, it is important to capture characteristics such as sensitivity to hydrostatic pressure, a non-associative flow rule, a compatible inelastic law, and the tensile strength limit.
Some of these models are mathematically highly complex, making them undesirable for many engineering applications, especially the analysis of simple structural elements. The failure surface in phenomenological models is defined by means of a yield parameter. The rupture surface of concrete can generally be expressed by f(I_1, J_2, J_3) = 0, where I_1 is the first invariant of the stress tensor and J_2 and J_3 are, respectively, the second and third principal invariants of the deviatoric stress tensor. These invariants are generally represented by the expressions I_1 = σ_1 + σ_2 + σ_3, J_2 = (1/2) s_ij s_ij and J_3 = det(s_ij), wherein σ_1, σ_2 and σ_3 are the principal stresses and s_ij are the components of the deviatoric stress tensor, given by s_ij = σ_ij − (I_1/3) δ_ij. The explicit form of the failure function for concrete is defined from experimental data, since concrete resistance tests are well documented in the literature (CHEN; HAN [1]). Such functions shall have the following characteristics: (1) they represent a smooth convex surface, with the exception of its apex; (2) the meridians of the surface are parabolic and open towards the negative hydrostatic axis; (3) the failure curve is roughly triangular for tensile stresses and low compressive stresses, becoming more circular as the compressive stress increases. The Drucker-Prager Model The criterion proposed by D. C. Drucker and W. Prager in 1952 requires that plastic flow occurs when the second invariant of the deviatoric stress tensor, J_2, and the hydrostatic pressure reach a critical combination. The function that models the Drucker-Prager criterion is given by F = η I_1 + √J_2 − k, where η is a material constant and k is a function that, in case the material undergoes isotropic hardening, relates to the uniaxial stress-strain curve (CHEN; HAN [1]) and can be defined through σ(α), where σ is a function of the hardening internal variable α.
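The invariant expressions and the Drucker-Prager function above translate directly to code. The following fragment is an illustrative sketch, not the INSANE implementation; the numpy dependency and the values of η and k used below are assumptions for the example.

```python
import numpy as np

def stress_invariants(sig):
    """I1, J2, J3 for a 3x3 Cauchy stress tensor: I1 = tr(sigma),
    s = sigma - (I1/3)*I is the deviator, J2 = (1/2) s:s, J3 = det(s)."""
    I1 = np.trace(sig)
    s = sig - (I1 / 3.0) * np.eye(3)
    J2 = 0.5 * np.tensordot(s, s)     # double contraction s:s
    J3 = np.linalg.det(s)
    return I1, J2, J3

def drucker_prager(sig, eta, k):
    """Drucker-Prager yield function F = eta*I1 + sqrt(J2) - k.
    F < 0 means an elastic state; F = 0 means plastic flow."""
    I1, J2, _ = stress_invariants(sig)
    return eta * I1 + np.sqrt(J2) - k
```

For a purely hydrostatic state the deviator vanishes, so J_2 = 0 and the state lies inside the cone whenever η I_1 < k, consistent with the conical shape of the surface described next.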
The yield surface in the principal stress space is represented by a circular cone whose axis is the hydrostatic axis. The Drucker-Prager yield surface is shown in Figure [2]. The Ottosen Model The four-parameter rupture surface (α, β, c_1 and c_2) for concrete proposed by OTTOSEN [7] involves the stress invariants I_1 and J_2 and the Lode angle θ. Its smoothness, its convexity and its curved meridians, which show a gradual transition from an almost triangular to an almost circular shape in the deviatoric plane as the hydrostatic pressure increases, make this criterion suitable for the failure simulation of concrete structures (Figure [3]). The mathematical representation of the criterion is given by F = α J_2/σ_c² + λ √J_2/σ_c + β I_1/σ_c − 1 = 0 (9), where σ_c is a function of the hardening internal variable and α and β are material parameters. The parameter λ depends on two other parameters (c_1 and c_2) and is given by λ = c_1 cos[(1/3) arccos(c_2 cos 3θ)] for cos 3θ ≥ 0 and λ = c_1 cos[π/3 − (1/3) arccos(−c_2 cos 3θ)] for cos 3θ < 0. Still in equation (9), the angle involved in this criterion is given by cos 3θ = (3√3/2) J_3/J_2^{3/2}. There are several propositions for the determination of the four parameters (α, β, c_1 and c_2) of the Ottosen model. The INSANE system has implemented the calibration proposals of OTTOSEN [9], the CEB-FIP Model Code [10] and DAHL [11]. According to OTTOSEN [9], the four parameters can be determined based on the following tests: (1) f_c - uniaxial compressive strength (θ = 60°); (2) f_t - uniaxial tensile strength (θ = 0°); (3) f_bc ≅ 1.16 f_c - biaxial compressive strength (θ = 0°); (4) the triaxial stress state on the compressive meridian (θ = 60°). The values obtained for the parameters from these tests depend on the mean tensile-to-compressive strength ratio k = f_tm/f_cm. Table [1] shows some of the most commonly used values from this calibration. The parameters for intermediate values of k can be obtained by interpolation.
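The λ branches and the failure function can be evaluated directly. The sketch below assumes Ottosen's published form of the criterion; the sample parameters used in the check (for f_t/f_c = 0.10) are classical calibration values of the kind listed in Table [1], quoted here only for illustration.

```python
import math

def ottosen(I1, J2, J3, fc, a, b, c1, c2):
    """Ottosen four-parameter failure function: F >= 0 indicates failure.
    F = a*J2/fc^2 + lam*sqrt(J2)/fc + b*I1/fc - 1, where lam depends on
    the Lode angle through cos(3*theta) = (3*sqrt(3)/2) * J3 / J2**1.5."""
    cos3t = 1.5 * math.sqrt(3.0) * J3 / J2 ** 1.5
    cos3t = max(-1.0, min(1.0, cos3t))        # guard against round-off
    if cos3t >= 0.0:
        lam = c1 * math.cos(math.acos(c2 * cos3t) / 3.0)
    else:
        lam = c1 * math.cos(math.pi / 3.0 - math.acos(-c2 * cos3t) / 3.0)
    return a * J2 / fc ** 2 + lam * math.sqrt(J2) / fc + b * I1 / fc - 1.0
```

With the k = 0.10 calibration the function vanishes, as it should, on the uniaxial compressive meridian (θ = 60°, cos 3θ = −1) at σ = −f_c and on the tensile meridian (θ = 0°) at σ = f_t.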
Another way to obtain the model parameters is through the expressions recommended by the CEB-FIP Model Code [10], which also make use of the relation k = f_tm/f_cm. This calibration allows obtaining the parameters automatically for any value of k, provided that the compressive strength does not exceed prescribed values. DAHL's [11] proposal is based on the observation that the CEB recommendations agree with experimental results only for low-strength concrete, and suggests a way to obtain the coefficients using only the mean compressive strength of the concrete (f_cm). NETO [12] describes hardening as a process physically connected to an increase in dislocation density (a geometric defect in the atomic arrangement). For many real materials, the yield stress limit depends on a measure of the accumulated plastic strain. In the uniaxial model, after yielding is reached, the stress-strain curve continues to grow (hardening) or decreases (softening), causing a variation of the yield stress during plastic flow. In two- and three-dimensional models, hardening is characterized by changes in the set of internal variables α during plastic yielding. These changes can generally affect the size, shape and orientation of the yield surface, defined by Φ(σ, α) = 0. Depending on the type of material, the stress-strain curves can have different forms, and it is convenient to idealize some of these behaviors. Figure [4] illustrates three models commonly used to describe materials with elastoplastic behavior (MALAVOLTA [13]). Case (a) corresponds to the perfect plasticity model, in which the material has an elastic portion with modulus of elasticity E and, after yielding, remains at a constant stress level as the strain increases.
Model (b) is the bilinear elastoplastic model, in which the first slope corresponds to the elastic portion and, after yielding is reached, a new line with slope H, associated with the hardening of the material, describes the plastic domain. Case (c) is the nonlinear elastoplastic model, in which, once yielding is reached, the hardening is described by a nonlinear law. Hardening laws In this paper, linear and potential hardening laws have been used. The linear hardening law is widely used and is defined by σ(α) = σ_y + Hα. The potential law, proposed by BOUCHARD et al. [14] and VAZ JR; ROJAS [15], is a power-type law in which the yield stress evolves with the internal variable k related to hardening or softening, a and b being material parameters. Cutting plane return mapping algorithm The elastoplastic analysis requires the integration of the constitutive law so that the elastic and plastic portions of the total strain increment are obtained. This is an iterative process, because the elastoplastic modulus is a function of the plastic deformation. Owing to the incremental nature of the numerical models of plasticity, a return mapping algorithm, able to update the stresses needed to balance the internal forces in the nonlinear analysis, must be used. The development of efficient schemes for the integration of constitutive relations in a numerical context is still the subject of ongoing research, mainly because of its importance in engineering problems involving plastic deformation. There are a variety of integration methods with different levels of complexity (TAQIEDDIN [16]). Conceptually, the idea of the cutting plane algorithm is quite simple: it is an explicit process that first applies the elastic equations to the stress of the previous step to obtain the trial stress of the current step.
One of the great advantages of the Cutting Plane scheme is that there is no need to compute gradients of the yield function and hardening law, a task that can be extremely cumbersome for complex plasticity models. The general case of this scheme involves the following steps: (1) assume plastic loading, i.e., a plastic multiplier increment λ > 0, starting from the elastic trial state; (2) define the plastic flow residual R_{n+1} and evaluate the yield condition; (3) update the state variables and the consistency parameter, repeating until the yield condition is satisfied within tolerance. The algorithm converges to the final value of the state variables at a quadratic rate. These quadratic convergence rates are achieved in spite of the simplicity of the method, which makes the cutting plane algorithm very attractive for large-scale calculations with more elaborate models, mainly in explicit codes that do not require the solution of a global system of equilibrium equations. Formulation of constitutive models Constitutive models typically have their own notation and, although in many cases they keep similarities, the lack of unity among formulations prevents a generic and objective computational implementation. The constitutive models framework proposed by PENNA [17] and GORI et al. [18] presents an expansion of the theoretical framework proposed by CAROL et al. [19], being able to contemplate various constitutive models - elastoplastic or elastic-degradation; isotropic, orthotropic or anisotropic - formulated with one or more loading functions. Next, the design of the elastoplastic constitutive models following the proposed theoretical framework is presented. The quantities necessary for the description of each model are explained, indicating the correlation between the original form and the present framework.
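The steps above can be illustrated with a deliberately minimal one-dimensional sketch using the linear hardening law σ(α) = σ_y + Hα. This toy example (a 1D yield function |σ| − σ(α) and a virgin initial state are assumptions) is not the multiaxial INSANE implementation, but it exercises the same trial-state / cut-back / update cycle.

```python
def cutting_plane_1d(eps, E, sigma_y, H, tol=1e-10, max_iter=50):
    """Cutting-plane stress update for 1D elastoplasticity with linear
    hardening. The elastic trial stress E*eps is iteratively cut back:
    each pass evaluates F, computes dlam = F/(E + H) from a first-order
    expansion, and updates stress and hardening variable -- no gradient
    of a complex yield surface is ever assembled."""
    sigma = E * eps                 # step 1: elastic trial state
    alpha = 0.0
    for _ in range(max_iter):
        F = abs(sigma) - (sigma_y + H * alpha)
        if F <= tol:                # admissible state reached
            break
        dlam = F / (E + H)          # step 2: plastic multiplier increment
        sigma -= E * dlam * (1.0 if sigma > 0.0 else -1.0)
        alpha += dlam               # step 3: update state variables
    return sigma, alpha
```

For a linear hardening law the cut-back lands on the updated yield surface in a single pass; with nonlinear laws the loop genuinely iterates, converging at the quadratic rate noted above.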
Drucker-Prager Model The mathematical representation of the Drucker-Prager criterion is given by the yield function F = η I_1 + √J_2 − k. For the case of isotropic hardening, the function σ(α), used for the determination of the parameter k (given by equation (7)), is σ(α) = σ_y + Hα (19), in which σ_y is the initial yield stress, α is the accumulated plastic flow and H is the strain hardening modulus. The Drucker-Prager model is non-associative (F ≠ Q). The non-associative law is obtained by adopting, for the plastic potential function Q, a function similar to the yield function in which the friction angle ϕ is replaced by the dilatancy angle ψ (20), where ψ < ϕ is an additional material constant. The gradients of F and Q, represented by the tensors n and m, follow directly from differentiation of these functions, and the inelastic modulus is obtained from a law associated with the hardening or softening phenomenon. The application of the Drucker-Prager model should take into account the existing singularity of the yield surface, its apex. Therefore, an alternative solution strategy must be used in the implementation of the algorithm for the integration of the constitutive relations. Various methods have been proposed in the context of yield surfaces with singularities such as corners and vertices, for example SIMO; HUGHES [20] and SOUZA NETO et al. [3]. When the return occurs at the vertex, the yield function (equation [18]) and the plastic flow potential (equation [20]) should be changed as in SZABÓ; KOSSA [21]. Ottosen model The Ottosen criterion is given by the yield function of equation (9), with σ_c = σ_y + Hα. The derivative of F for an isotropic material may be obtained by the chain rule, with the derivatives of the stress invariants ∂I_1/∂σ_ij = δ_ij, ∂J_2/∂σ_ij = s_ij and ∂J_3/∂σ_ij = t_ij − (2/3) J_2 δ_ij, where δ_ij is the Kronecker delta, s_ij are the components of the deviatoric stress tensor and t_ij = s_ik s_kj is the quadratic tensor of the deviatoric stress.
The derivatives of the yield function with respect to the invariants follow accordingly; the model adopted is associative (n_ij = m_ij), and the inelastic modulus is obtained from a law associated with the hardening or softening phenomenon. Diametral compression test The diametral compression test is commonly used to determine the tensile strength of concrete and consists of applying diametrically opposed loads on a cylindrical specimen in order to produce indirect tension in its central region. In this sense, a plasticity criterion can be used to determine the failure stress. The Drucker-Prager model with the internal approximation of the Mohr-Coulomb surface was adopted. The model geometry and loading conditions are specified in Figure [5]. The material parameters are shown in Table [2] and were based on the study by CECÍLIO [22]. The finite element model (Figure [5-b]) consists of 404 quadrilateral four-node elements in plane strain state, with a thickness of 300 mm and 2×2 integration points. For the nonlinear analysis, the direct displacement control method was adopted with an increment of 0.00002179 mm, controlling the horizontal displacement of the node highlighted in Figure [5], with a convergence tolerance of 5 × 10^-3 and a reference load of P = 60 kN. While the material is in the elastic range, the stress at the specimen's center may be obtained analytically as a function of the applied load P, the diameter D and the length L by the equation σ_t = 2P/(πDL) (35). The simulation was not able to describe the inelastic behavior of the specimen. However, it could represent the material behavior in the elastic range. Figure [6] shows the instant when plastic strain appears in the specimen (step 111 of 250 steps), represented by the accumulated plastic flow variable, illustrating the elastic limit of the material. The stress distribution is shown in Figure [7], where the condition of the indirect tensile test is clearly observed.
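The elastic solution of equation (35) is a one-liner. In the sketch below, the diameter is an assumed illustrative value, since only the thickness (300 mm) and the reference load (60 kN) are stated above.

```python
import math

def splitting_tensile_stress(P, D, L):
    """Elastic tensile stress at the centre of a cylinder under
    diametral compression (Brazilian test): sigma_t = 2*P / (pi*D*L)."""
    return 2.0 * P / (math.pi * D * L)

# reference load P = 60 kN with an assumed diameter D = 150 mm and
# length L = 300 mm; using N and mm gives the stress in MPa
sigma_t = splitting_tensile_stress(60e3, 150.0, 300.0)
```

The stress scales linearly with P and inversely with the loaded area πDL, which is why the splitting test gives the tensile strength directly from the failure load.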
The tensile stress limit in the elastic range follows from equation [35] under the presented conditions. Direct tension of a concrete plate (Dogbone-shaped panel) This example presents a finite element model to simulate the experimental behavior obtained from a direct tensile test on a flat concrete specimen reinforced with fibers. Due to symmetry, only a quarter of the plate was discretized. The boundary conditions and the finite element mesh adopted are presented in Figure [10]. The plate was discretized with 12 quadrilateral eight-node elements in plane stress state, with 3×3 integration points. The material is a cement matrix composite reinforced with 2% of polyvinyl alcohol (PVA) fibers, according to PEREIRA et al. [23]. The material parameters of the experiments are given in Table [3]. For the numerical simulation, the Ottosen model with the linear hardening law was adopted, using the generalized displacement control method with an initial load factor of 1.0 and a convergence tolerance of 1 × 10^-4. Figure 8 Distribution of normalized stress σ_x along the y axis. Figure 9 Distribution of normalized stress σ_y along the y axis. Figure 10 Dogbone-shaped panel - finite element mesh, geometry and image of the supports used during the test performed by PEREIRA et al. [23]; adapted from PEREIRA et al. [8]. The Stress × Strain curve is shown in Figure [11] in comparison with the experimental results obtained by PEREIRA et al. [23]. The numerical model was capable of simulating the behavior of the material and represented the experimental results with good accuracy. Figure [12-a] shows the deformed mesh, and it is observed that the orthogonal lines along the longitudinal axis of the plate remained parallel after deformation.
The pattern of evolution of the displacements after reaching the yield stress, obtained with the Ottosen model implemented in this work, can be seen in Figure [12-b]. The results are in excellent agreement with the experimental and numerical results presented by PEREIRA et al. [8] (Figure [12-c]). In order to assess the convergence behavior, four meshes were adopted, with 3, 12, 24 and 128 quadrilateral eight-node elements. The meshes are schematically represented in Figure [13]. Figure [14] shows the load factor-displacement curves. The results show the convergence of the solution and indicate no approximation errors related to the discretization and the mesh refinement. However, more tests should be performed in order to assess the mesh sensitivity and the general behavior of the model under highly refined meshes. Reinforced concrete beam In this example, the Ottosen criterion with the potential hardening law for concrete was adopted; the material parameters are given in Table [4]. For the steel, the von Mises elastoplastic criterion was adopted, assuming perfect plasticity and perfect bond between steel and concrete; the material parameters are given in Table [5]. Figure [16] shows the discrete model, consisting of 132 quadrilateral eight-node elements to represent the concrete and 22 one-dimensional three-node elements to represent the reinforcement. In the analysis, the direct displacement control method was adopted, incrementing by 0.001 mm the horizontal displacement of the right supported node, with a convergence tolerance of 1 × 10^-4 and a reference load P = 1.0 N. The model was analyzed considering plane stress conditions. Figure 15 Reinforced concrete beam. Load-displacement curves corresponding to the vertical displacement of the point where the load is applied are shown in Figure [17].
The numerical results are compared with the experimental values presented by MAZARS and PIJAUDIER-CABOT [24]. The graph shows good agreement between the experimental results and the results obtained with the Ottosen model, even though the model presents a higher initial stiffness and a lower yielding load when compared with the experimental values. Conclusion In this paper an elastoplastic constitutive framework applied to concrete has been presented, emphasizing the Drucker-Prager and Ottosen criteria. In addition to the constitutive models, the equations for the implementation of the Cutting Plane return mapping algorithm, required for the integration of the constitutive relations governing the behavior of the material, have also been presented. The constitutive models have been implemented in the computational system INSANE (INteractive Structural ANalysis Environment), according to the theoretical and computational environment for constitutive models developed by PENNA [17] and GORI et al. [18]. Classical models of associative and non-associative plasticity were easily incorporated into the theoretical framework, varying only the return algorithm according to the needs of each model. Numerical simulations were presented in order to illustrate, validate and emphasize the individual characteristics of each model. From the results presented, the following considerations can be made: i. the models showed appropriate behavior, and the responses were consistent; ii. the models behaved appropriately using the Cutting Plane return mapping algorithm; iii. in all numerical simulations presented, the constitutive models representing the elastoplastic behavior of concrete showed good correlation among numerical, experimental and analytical results. Figure 16 Finite element mesh. Figure 17 Numerical results obtained with the Ottosen model compared with the experimental results for the reinforced concrete beam tested by MAZARS and PIJAUDIER-CABOT [24].
v3-fos-license
2018-09-16T06:23:00.821Z
2018-09-01T00:00:00.000
52179495
{ "extfieldsofstudy": [ "Medicine", "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.3390/s18092903", "pdf_hash": "7aebb56de820b929be440464aa3c1e1ee1487504", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2018", "s2fieldsofstudy": [ "Engineering" ], "sha1": "7aebb56de820b929be440464aa3c1e1ee1487504", "year": 2018 }
pes2o/s2orc
Target Tracking While Jamming by Airborne Radar for Low Probability of Detection Although radiation power minimization is the most important method for an advanced stealth aircraft to achieve low probability of detection (LPD) performance against an opposing passive detection system (PDS), it is not always effective when the performance of the PDS is advanced. For a target tracking scenario, an interference tactic is proposed in this paper to keep the airborne radar in an LPD state. First, this paper introduces the minimum radiation power design of the airborne radar, based on the distance between the radar and the target, and the minimum radiation power design of the airborne jammer, based on the predicted detection probability of the opposing PDS. Then, after reviewing the constant false alarm rate (CFAR) technologies most commonly used in passive detection systems, including cell-averaging CFAR, greatest-of CFAR, smallest-of CFAR and ordered-statistic CFAR, this paper analyzes their relationships and identifies the way to interfere with them. Finally, based on constraints including not only the predicted detection probabilities of the airborne radar and the opposing PDS, but also the time synchronization necessary to prevent leaked jammer power from interfering with the airborne radar's own target echoes, this paper establishes a mathematical model to minimize the total interference power of the airborne jammer without disturbing target tracking. Simulation results show that the proposed model is effective. Introduction Low probability of intercept (LPI) technology is used to protect the airborne radar from the threat of the opposing passive detection system (PDS). Stove (2004) proposed that LPI should be divided into at least two levels, low probability of detection (LPD) and low probability of exploitation (LPE) [1].
For LPE, Fancey (2010) analyzed many LPI signals from various aspects and proposed an empirical index to evaluate their LPE performance [2]. Shu (2017) proposed an advanced pulse compression noise waveform that uses random amplitude and phase changes to avoid being exploited by the opposing PDS [3]. Compared with LPE, LPD is an efficient way to improve the LPI performance of active radiation sources, and one of the most significant ways to improve LPD performance is to minimize the radiated power. For the airborne radar tracking and search process, LPD performance has received more and more attention, because it helps spare the airborne radar platform from the serious threat of an opposing PDS. In recent years, research on the LPD of airborne radar has addressed not only how to control its radiated power, but also how to control its illumination interval and dwell time on target. Among these works, Krishnamurthy (2005) proposed a computationally efficient dynamic emission control and management algorithm to minimize the threat to the platform caused by the opposing PDS [4]. Liao (2011) proposed two radar radiation energy control strategies: a minimum power strategy and a minimum dwell strategy [5]. A radar search method was proposed that takes the minimization of a radiant energy function as the optimization target, with the beam width, dwell time and average radiated power as optimization parameters; simulation results show that the algorithm can not only ensure good detection performance, but also reduce energy consumption [6]. Liu (2015) proposed minimizing the intercept probability as an optimization target under a given radar detection probability, which can be used to control radar radiated energy during the tracking process [7]. Andargoli (2015) provided a flexible and effective method to control radar power based on given detection requirements [8].
She (2016, 2017) proposed a joint sensor selection and power allocation algorithm for multi-target tracking in radar networks based on LPD objectives, which helps minimize the total transmit power of the radar network subject to a predetermined mutual information (MI) or minimum mean-square error (MMSE) threshold between the target impulse response and the reflected signal [9,10]. A new radar network resource scheduling method was proposed for tracking in clutter; simulations show that, compared with other methods, the LPD performance of this algorithm is better [11]. The minimum-radiation method of classical radar was also applied to opportunistic array radars [12]; simulations show that classical radar radiation control methods are useful for opportunistic array radars as well. A series of resource management methods for LPD and LPE has been presented in various contexts, such as radar power, dwell time and illumination interval [13-18]. In addition to airborne radars, the minimization of interference power by jammers has been studied to avoid being located by anti-radiation systems. Liu (2012) offered an interference power allocation model based on the detection probability in the active interference process [19]. Song (2014) proposed an adaptive control method of interference power for LPD purposes by predicting the radar echo power, which is useful for designing self-defense interference systems [20]. Wang (2015) discussed the self-defense power method in the process of electronic countermeasures under LPD constraints [21]. Hao (2015) proposed a new interference method that can reduce the threshold detection probability and improve the interference power efficiency [22]. The serious threat to airborne radar comes from the advanced PDS developed in recent years. As far as we know, the constant false alarm rate is a key parameter of the signal processing chain of existing PDS.
The most classical CFAR processing algorithm is the cell-averaging constant false alarm rate (CA-CFAR) algorithm. However, analysis of CA-CFAR in a Rayleigh clutter background showed that its detection performance is greatly influenced by the clutter characteristics. In order to reduce the impact of clutter in the target tracking process, the greatest-of detector (Greatest Of CFAR, GO-CFAR) and the smallest-of detector (Smallest Of CFAR, SO-CFAR) have been proposed [23-25]. To further improve detection performance, Luo (2009) and Liu (2001) proposed new CFAR detection algorithms that can not only process data in the time domain, but also convert the data into the frequency and wavelet domains for further processing [26,27]. He (2011) analyzed GO-CFAR, SO-CFAR and OS-CFAR. The results show that GO-CFAR can maintain a stable false alarm probability at clutter edges, but its detection performance decreases in multi-target scenarios. On the contrary, SO-CFAR maintains good detection performance in multi-target situations, but its false alarm control at clutter edges is seriously affected. The detection performance of OS-CFAR lies between GO-CFAR and SO-CFAR [23-25]. A constant false alarm detection method based on summation was also proposed; compared with traditional detection methods, sum-CFAR can improve the detection probability in an exponentially distributed clutter background [28]. Different from the existing literature, this paper considers the airborne radar and jammer together and proposes a tracking-while-jamming tactic, which helps the airborne radar maintain an LPD state against an advanced PDS. Section 2 describes the airborne radar's adaptive minimum radiation power design criterion, which is based on the distance from the airborne radar to the target and the interception factor.
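The four clutter-level estimators just discussed differ only in how they condense the reference window, which the following sketch makes explicit. This is a hedged illustration; the window layout (guard cells already excluded, the OS rank passed in explicitly) is an assumption of the example.

```python
def cfar_levels(reference, split, kth):
    """Clutter-level estimates Z for the classical CFAR families; the
    detection threshold is then T*Z, with the scale factor T set by
    the desired false alarm probability. `reference` holds the leading
    and lagging window samples (cell under test and guard cells
    excluded), `split` separates the two halves, `kth` is the OS rank."""
    lead, lag = reference[:split], reference[split:]
    z_ca = sum(reference) / len(reference)                  # CA: global mean
    z_go = max(sum(lead) / len(lead), sum(lag) / len(lag))  # GO: larger half
    z_so = min(sum(lead) / len(lead), sum(lag) / len(lag))  # SO: smaller half
    z_os = sorted(reference)[kth - 1]                       # OS: k-th sample
    return z_ca, z_go, z_so, z_os
```

At a clutter edge the lagging half runs hot, so GO-CFAR keeps the threshold high (stable P_fa) while SO-CFAR tracks the quiet half, which is exactly the trade-off described above.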
Then, according to the detection probability required by the radar receiver, this section shows that the combination of tracking while jamming in Figure 1 is an effective method to protect the airborne radar platform from the threat of the opposing advanced PDS. Section 3 first introduces several different CFAR schemes, then analyzes the detection probabilities of CA-CFAR, GO-CFAR, SO-CFAR and OS-CFAR, and shows that the minimum interference power design mainly depends on GO-CFAR, SO-CFAR, OS-CFAR and the number of reference cells. Section 4 establishes a mathematical model to minimize the total interference power of the airborne jammer without disturbing target tracking, based on the constraints of the predicted detection probabilities and time synchronization. Problem Scenario The commonly used evaluation index of the LPI performance of airborne radar is the interception factor proposed by Schleher, expressed as [29] α = R_Imax / R_Rmax (1), where R_Imax is the maximum interception distance of the opposing PDS and R_Rmax is the maximum detection distance of the airborne radar, given by R_Rmax = [P_t G_t G_rr σ λ² / ((4π)³ k T_0 B_r F_n L · SNR_min)]^{1/4} (2), where P_t is the radar radiated power; G_t and G_rr are the antenna gains of the transmitter and the receiver, respectively; k = 1.38 × 10^-23 J/K is the Boltzmann constant; σ is the radar cross section (RCS) of the target; λ is the wavelength of the radar signal; F_n is the noise coefficient; T_0 = 290 K is the standard noise temperature; B_r is the bandwidth of the receiver; L is the loss of the radar system; and SNR_min is the minimum detectable SNR. The interception distance is R_Imax = [P_t G_i G_ir λ² / ((4π)² P_imin)]^{1/2} (3), where G_ir is the receiver antenna gain of the opposing PDS; G_i is the transmission gain from the airborne radar towards the opposing PDS; and P_imin is the minimum detectable sensitivity of the opposing passive detection system. Taking (2) and (3) into (1), and letting S_I = P_imin / G_ir and S_R = k T_0 B_r F_n L · SNR_min / G_rr, the interception factor becomes α = {P_t G_i λ² / [(4π)² S_I]}^{1/2} · {(4π)³ S_R / (P_t G_t σ λ²)}^{1/4} (5). During the target tracking process, the transmission gain in (3) is the same as the antenna gain in (2), i.e., G_i = G_t.
Then, with G_i = G_t, Equation (5) can be rewritten as:

α = [P_t G_t λ² S_R / ((4π) σ S_I²)]^(1/4). (6)

From (1), it can be seen that if α > 1, the opposed passive detection system can easily detect the airborne radar signal; if α < 1, the airborne radar signal may be in the LPI state. If α = 1, combined with Formula (6), the critical detection distance is defined as:

R_c = [σ S_I / (4π S_R)]^(1/2). (7)

Then, according to Formula (2), the corresponding critical power is:

P_t,c = (4π)³ S_R R_c⁴ / (G_t σ λ²), (8)

or, according to Formula (3), the power is:

P_t,c = (4π)² S_I R_c² / (G_t λ²). (9)

Substituting Formulas (8) and (9) into (2) and (3), and without considering the radar signal sorting and radar signal tracking processes of the opposed PDS, the airborne radar may be in the LPD state if the radiated power of the radar signal satisfies:

(4π)³ S_R R⁴ / (G_t σ λ²) ≤ P_t ≤ (4π)² S_I R² / (G_t λ²), (10)

where R is the distance from the airborne radar to the target. However, a solution P_t of (10) is not always available. The radiated power constraint in (10) applies only to single-pulse radar signals. Under pulse integration, (10) can be rewritten as:

(4π)³ S_R R⁴ / (G_t σ λ² η n_p) ≤ P_t,ηnp ≤ (4π)² S_I R² / (G_t λ²), which has a solution only when R ≤ R_c,ηnp, (11)

where R_c,ηnp = [σ/(4π) · S_I/S_R · η n_p]^(1/2), n_p is the number of integrated pulses, and η is the coherent efficiency (η = 1 means fully coherent integration). In theory, when (11) holds, the airborne radar may be in the LPD state, because the probability that the opposed PDS detects the radar signal is then no greater than the probability that the radar detects the target echo. In a real scenario, however, equality with the right side of (11) is a critical LPD state that cannot meet the actual requirement; the LPD design should therefore keep P_t,ηnp as far below the right side of (11) as possible whenever (11) holds. Similar to (10), solutions for P_t,ηnp and n_p in (11) are also not always available, which will be explained later by analyzing the value of n_p.
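The resulting LPD power window can be sketched in a few lines: the lower bound comes from the radar range equation, the upper bound from the PDS sensitivity, and pulse integration relaxes only the lower bound. The bundled constants S_R and S_I and the sample values below are assumptions consistent with the text's worked numbers.

```python
import math

def lpd_power_window(R, sigma, S_R, S_I, Gt, lam, n_p=1, eta=1.0):
    """Lower/upper radiated-power bounds for LPD operation at range R."""
    P_low = (4 * math.pi) ** 3 * S_R * R**4 / (Gt * sigma * lam**2 * eta * n_p)
    P_high = (4 * math.pi) ** 2 * S_I * R**2 / (Gt * lam**2)
    return P_low, P_high          # a solution exists only if P_low <= P_high

def critical_range(sigma, S_R, S_I, n_p=1, eta=1.0):
    """Largest range at which the LPD power window is non-empty."""
    return math.sqrt(sigma * S_I * eta * n_p / (4 * math.pi * S_R))
```

With the text's example values (S_R = −110 dBm, S_I = −80 dBm, σ = 1 m², η = 1), even n_p = 10⁷ integrated pulses leaves the critical range well below 90 km, consistent with the claim that (11) has no solution at that distance.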
The important constraint hidden in Formula (11) is to maintain the detection probability under a given false alarm probability, which is defined as:

P_d = ∫_{U_T}^{∞} (r/σ_n²) exp(−(r² + A²)/(2σ_n²)) I₀(A r/σ_n²) dr, (12)

where U_T = [2σ_n² ln(1/P_fa)]^(1/2) is the detection threshold, I₀(·) is the zero-order modified Bessel function, σ_n is the noise standard deviation, and P_fa is the false alarm probability; A is the amplitude of the interference and r is the amplitude of the signal. As can be seen from Equations (10) and (11), a possible approach is to change S_I in Equation (7) so as to keep the airborne radar in the LPD state. However, the typical sensitivity of an advanced PDS is approximately −80 dBm, so common LPD methods such as minimizing the radiation power, the dwell time and the maximum tracking interval are not suitable against an advanced PDS. When these common LPD methods become invalid, Equation (11) no longer has a solution and is no longer suitable to describe the LPD state of the airborne radar. However, the detection probability in (12) remains useful for describing the LPD state. To keep the airborne radar in the LPD state, the detection probability of the airborne radar is in most cases required to be greater than or equal to 0.8, while the detection probability of the opposed PDS is required to be less than or equal to 0.2. Noise interference is an alternative method of reducing the sensitivity of passive detection systems. In theory, advanced air-to-air missiles guided by the PDS can threaten the airborne radar from about 90 km away. According to (2) and (3), there is no solution of (11) if the airborne radar tries to detect a target 90 km away without being detected by the PDS on the target when σ = 1 m², S_R = −110 dBm, S_I = −80 dBm, η = 1 and n_p ≤ 10⁷. For a maneuvering target, the airborne radar signal cannot dwell on the target for more than 10⁷ pulses, which would take at least about 20 s if the pulse width is 1 µs and the duty cycle is 50%; therefore, interference is necessary to keep the airborne radar in the LPD state.
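The threshold-crossing probability in (12) can be evaluated numerically. The sketch below integrates the Rician envelope density above the threshold U_T, with a power-series evaluation of I₀; this is a generic numerical recipe under the stated noise model, not the paper's implementation.

```python
import math

def bessel_i0(x):
    """Power-series evaluation of the modified Bessel function I0(x)."""
    term, total, k = 1.0, 1.0, 0
    while term > 1e-16 * total:
        k += 1
        term *= (x * x / 4.0) / (k * k)
        total += term
    return total

def detection_probability(A, sigma_n, P_fa, steps=20000):
    """Midpoint-rule integration of the Rician pdf above U_T (Eq. 12 form)."""
    U_T = math.sqrt(2.0 * sigma_n**2 * math.log(1.0 / P_fa))
    upper = U_T + A + 10.0 * sigma_n          # effectively infinity
    dr = (upper - U_T) / steps
    total = 0.0
    for i in range(steps):
        r = U_T + (i + 0.5) * dr
        total += (r / sigma_n**2) * math.exp(
            -(r * r + A * A) / (2.0 * sigma_n**2)
        ) * bessel_i0(A * r / sigma_n**2) * dr
    return total
```

A quick sanity check: with zero signal amplitude the integral collapses to the false alarm probability itself, and a strong signal (A = 5σ_n) yields a detection probability comfortably above the 0.8 requirement discussed in the text.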
However, in order to protect the platform of the airborne radar, it is still necessary to control the radiation power of the interference so that the interference is effective without jamming the target tracking process. In complex confrontation scenarios, the PDS always uses constant false alarm rate (CFAR) processing to avoid unacceptable false alarms. Although some novel CFAR methods have been proposed, the commonly used but effective CFAR algorithms for a real-time PDS are CA-CFAR, GO-CFAR, SO-CFAR and OS-CFAR. In order to interfere with the opposed PDS during the target tracking process, this paper proposes an adaptive radiation power control method for the airborne jammer based on the predicted radiation power of the airborne radar, and shows that time synchronization of the airborne jammer with the radiation time of the airborne radar is necessary to avoid the target tracking process being interfered with by the airborne jammer. Detection Probability of CA-CFAR in Jamming As for the mean-level (ML) monopulse CFAR detector, let x_i (i = 1, ..., n) and y_i (i = 1, ..., n) represent the reference units on the two sides of the detection unit (the leading and lagging parts of the reference sliding window, respectively). Let the reference length of the sliding window be R = 2n, and let two protection units adjoin the detection unit so as to prevent leakage of target energy into the reference units, which would affect the estimation of the clutter intensity. The adaptive decision criterion is:

D ≷ a σ̂_n² (decide H₁ if D > a σ̂_n², otherwise H₀), (13)

where H₀ represents the hypothesis that no target is present, H₁ represents the hypothesis that the target exists, σ̂_n² is the estimate of the interference power level in the reference sliding window, a is the nominal factor, and D is the detection statistic in the detection unit. The received clutter obeys a Gaussian distribution.
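The sliding-window estimate and threshold test described above can be sketched in a few lines. The window contents and scale factor below are illustrative, and the mean-level convention (thresholding against the average of the reference windows) is an assumption made for clarity.

```python
def cfar_estimate(leading, lagging, mode="CA"):
    """Clutter-level estimate Z for one cell under test (CUT)."""
    z1 = sum(leading) / len(leading)   # mean of the leading window
    z2 = sum(lagging) / len(lagging)   # mean of the lagging window
    if mode == "CA":                   # cell-averaging: mean of both windows
        return 0.5 * (z1 + z2)
    if mode == "GO":                   # greatest-of: robust at clutter edges
        return max(z1, z2)
    if mode == "SO":                   # smallest-of: robust to a second target
        return min(z1, z2)
    raise ValueError(mode)

def detect(cut, leading, lagging, a, mode="CA"):
    """Declare a target when the CUT exceeds a times the estimate."""
    return cut > a * cfar_estimate(leading, lagging, mode)
```

With a leading window of 1s and a lagging window of 3s, CA averages to 2, GO picks 3 (conservative at a clutter edge) and SO picks 1 (keeping a second target in one window from masking the CUT).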
In the CA-CFAR detector, the estimate of the background clutter power level is formed from the average of the 2n reference units, which is the maximum likelihood estimate of the clutter power level given that the reference unit samples obey the exponential distribution [23,24]. Define:

Z = Σ_{i=1}^{n} x_i + Σ_{i=1}^{n} y_i, (14)

where Z is the total clutter power level estimate. With the nominal factor a, the detection probability is:

P_d,CA = [1 + a/(1 + λ)]^(−2n), (16)

where λ is the SNR predicted at the PDS from the predicted radiation power of the airborne radar at the next instant, and the relationship between the false alarm probability and the nominal factor a is:

P_fa,CA = (1 + a)^(−2n). (17)

From (17), we can see that the detection probability and the false alarm probability do not depend on the average noise power; therefore, CA-CFAR has the CFAR property. Assume now that there is interference of total power γ_J spread evenly over the reference cells, so that each reference cell level becomes (1 + γ_J/(2nσ²))σ². From (13), the expected decision threshold with interference scales by (1 + γ_J/(2nσ²)), and Equation (16) can be written as:

P_d0,CA = [1 + a(1 + γ_J/(2nσ²))/(1 + λ)]^(−2n). (18)

If the purpose of the interference is to reduce the detection probability from P_d,CA to P_d0,CA, then:

γ_J,CA = 2nσ²[(1 + λ)(P_d0,CA^(−1/(2n)) − 1)/a − 1]. (19)

Detection Probability of GO-, SO-, OS-CFAR in Jamming The false alarm probability of CA-CFAR increases at a clutter edge, and if further radar signals appear in the sliding window, the detection performance of the detector is reduced. As modification schemes of CA-CFAR, the greatest-of (GO) and smallest-of (SO) CFAR detectors have been proposed [23][24][25]. When an interference source exists only in the leading or only in the lagging sliding window, SO-CFAR is better at detecting multiple radar signals, but its false alarm control is poor. GO-CFAR can maintain a stable false alarm probability in a clutter edge environment, but its detection performance in a multiple-radar-signal environment is worse.
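A hedged sketch of the standard closed-form CA-CFAR expressions for exponentially distributed clutter (2n reference cells, nominal factor a, SNR lam), together with the jamming power needed to pull the detection probability down to a target value. The even-spreading assumption and all sample numbers are illustrative.

```python
def ca_cfar_pfa(a, n):
    """False alarm probability of CA-CFAR with 2n reference cells."""
    return (1.0 + a) ** (-2 * n)

def ca_cfar_pd(a, n, lam):
    """Detection probability of CA-CFAR at predicted SNR lam."""
    return (1.0 + a / (1.0 + lam)) ** (-2 * n)

def ca_cfar_pd_jammed(a, n, lam, gamma_J, sigma2=1.0):
    """Detection probability with jamming power gamma_J spread over 2n cells."""
    return (1.0 + a * (1.0 + gamma_J / (2 * n * sigma2))
            / (1.0 + lam)) ** (-2 * n)

def required_jamming_power(a, n, lam, P_d0, sigma2=1.0):
    """Invert ca_cfar_pd_jammed for the gamma_J that yields P_d0."""
    return 2 * n * sigma2 * (
        (1.0 + lam) * (P_d0 ** (-1.0 / (2 * n)) - 1.0) / a - 1.0)
```

Choosing a to fix the false alarm rate and then inverting the jammed detection probability is exactly the round trip the text performs between its (17)-style and (19)-style relations.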
GO-CFAR is mainly used for clutter edges; it takes the larger local estimate as the total clutter power level of the detector, that is [23,24]:

Z_GO = max(Z₁, Z₂), where Z₁ = Σ_{i=1}^{n} x_i and Z₂ = Σ_{i=1}^{n} y_i, (20)

with false alarm and detection probabilities:

P_fa,GO = 2(1 + a)^(−n) − 2 Σ_{i=0}^{n−1} C(n−1+i, i) (2 + a)^(−(n+i)), (21)

P_d,GO = 2[1 + a/(1+λ)]^(−n) − 2 Σ_{i=0}^{n−1} C(n−1+i, i) [2 + a/(1+λ)]^(−(n+i)). (22)

When there are multiple interference sources, it is necessary to reduce the influence of adjacent interference sources; SO-CFAR therefore uses the smaller local estimate as the total clutter power level estimate [23,24]:

Z_SO = min(Z₁, Z₂), (23)

with false alarm and detection probabilities:

P_fa,SO = 2 Σ_{i=0}^{n−1} C(n−1+i, i) (2 + a)^(−(n+i)), (24)

P_d,SO = 2 Σ_{i=0}^{n−1} C(n−1+i, i) [2 + a/(1+λ)]^(−(n+i)). (25)

The OS-CFAR detector sorts the reference unit samples from small to large. In a uniform clutter background, the probability density function of the k-th of the 2n samples is [23,24]:

f_(k)(x) = k C(2n, k) [F(x)]^(k−1) [1 − F(x)]^(2n−k) f(x),

where the samples of the reference units are x_i (i = 1, 2, ..., 2n). The OS-CFAR detector first sorts the reference unit samples in ascending order; its detection probability and false alarm probability in a uniform clutter background are:

P_d,OS = Π_{i=0}^{k−1} (2n − i)/(2n − i + a/(1+λ)), (30)

P_fa,OS = Π_{i=0}^{k−1} (2n − i)/(2n − i + a). (31)

From (17), (21), (24) and (31), a is related to the false alarm rate and the number of reference cells. However, the key point of this paper is to limit the detection probability, so only (16), (22), (25) and (30) are taken into account. From (22) and (25), the average detection probability of GO- and SO-CFAR is:

[P_d,GO + P_d,SO]/2 = [1 + a/(1+λ)]^(−n), (32)

which illustrates that the interference power required on P_d,GO or P_d,SO must be greater than that on P_d,CA, and max(γ_J,GO, γ_J,SO) > γ_J,CA/(2n), where γ_J,CA is defined in (19). As for (30), with Γ(s + 1) = sΓ(s), when k = 1 we have P_d,OS = 2n/(2n + a/(1+λ)), and each additional factor in the product is less than one, which means that the detection probability of OS-CFAR decreases as k increases. This illustrates that blocking CA-, GO-, SO- and OS-CFAR simultaneously only requires interfering with GO-, SO- and OS-CFAR; that is, γ_J = max(γ_J,GO, γ_J,SO, γ_J,OS) > γ_J,CA/(2n) when the interference is evenly distributed in the reference cells. Track while Jamming Design From Figure 2, the target tracking process would be interfered with if the radar echo and the jammer echo overlapped, as shown in Figure 2a.
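The standard closed-form GO-/SO-/OS-CFAR expressions (the Gandhi-Kassam and Rohling forms, which are assumed here to match the paper's (21)-(31)) can be compared directly; the GO result is obtained from the identity that the GO and SO detection probabilities sum to twice that of a single n-cell window.

```python
from math import comb

def pd_single_window(a_eff, n):
    """P(D > a * Z) for one window of n exponential cells; a_eff = a/(1+lam)."""
    return (1.0 + a_eff) ** (-n)

def pd_so(a, n, lam=0.0):
    a_eff = a / (1.0 + lam)
    return 2.0 * sum(comb(n - 1 + i, i) * (2.0 + a_eff) ** (-(n + i))
                     for i in range(n))

def pd_go(a, n, lam=0.0):
    # identity: P_go + P_so = 2 * P(single window)
    return 2.0 * pd_single_window(a / (1.0 + lam), n) - pd_so(a, n, lam)

def pd_os(a, n2, k, lam=0.0):
    """OS-CFAR with n2 total reference cells and the k-th order statistic."""
    a_eff = a / (1.0 + lam)
    p = 1.0
    for i in range(k):
        p *= (n2 - i) / (n2 - i + a_eff)
    return p
```

For a fixed nominal factor, SO sits above GO (its threshold is lower), and the OS product shrinks monotonically as k grows, mirroring the ordering argument made in the text.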
Given that, the time synchronization performance is very important for separating the radar echo and the jammer echo, as shown in Figure 2b. Therefore, an interference model based on adaptive radiation power design is proposed, which can not only maintain the target tracking performance, but also jam the PDS on the target regardless of its CFAR mode:

min E_j = Σ_i A_ij² τ_ij, subject to P_d,CFAR ≤ 0.2, P_d ≥ 0.8, (t_j, t_j + τ_j) ∩ (t_r, t_r + τ) = ∅, (36)

where E_j is the total interference energy; P_d,CFAR is the maximum detection probability of the opposed PDS, whose CFAR mode may be CA-, GO-, SO- or OS-CFAR; P_d is the detection probability of the airborne radar; A_ij and τ_ij represent the interference amplitude and interference duration at the i-th illumination time of the airborne radar, respectively; t_j is the arrival time of the jammer echo at the radar receiver and τ_j is the pulse width of the jammer echo; t_r is the arrival time of the radar echo at the radar receiver and τ is the pulse width of the radar echo. As mentioned in Section 2, P_d,CFAR is required to be less than or equal to 0.2, and P_d greater than or equal to 0.8. In (36), the predicted radiation power of the airborne radar is subject to P_d, while the radiation power of the jammer is subject to P_d,CFAR and to the predicted radiation power of the airborne radar. In addition, note that the predicted radiation power of the airborne radar is not constrained by (11), because (11) holds only under non-interference conditions. Simulations As for the simulation scene, we assume that the initial distance between the aircraft and the target in Figure 1 is 180 km and that their initial relative speed is 280 m/s, a typical subsonic speed, so that the Doppler filter algorithm and other non-coherent detection algorithms for subsonic target tracking are all applicable. As for the key parameters of the radar signal, this paper assumes that the pulse width and the duty cycle are 1 µs and 10%, respectively. Some other parameters of (2) are shown in Table 1.
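The two scheduling constraints of the jamming model, keeping the jammer echo window disjoint from the radar echo window while accounting for the total interference energy, can be sketched as follows. The placement rule is an illustrative heuristic, not the paper's solver.

```python
def overlaps(t_j, tau_j, t_r, tau):
    """True if the jammer echo window would corrupt the radar echo window."""
    return not (t_j + tau_j <= t_r or t_r + tau <= t_j)

def schedule_jam_pulse(t_r, tau, tau_j, guard=0.0):
    """Place the jam pulse immediately after the radar echo plus a guard."""
    return t_r + tau + guard

def jam_energy(amplitudes, durations):
    """Total interference energy sum(A^2 * tau) over all illuminations."""
    return sum(A * A * t for A, t in zip(amplitudes, durations))
```

Minimizing `jam_energy` while `overlaps` stays false for every illumination is the discrete heart of the constrained model; the detection-probability bounds then fix the smallest admissible amplitudes.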
As for the opposed PDS on the target, this paper assumes that the width of a reference unit of the CFAR detector of the PDS is 1 µs, and that the length of the reference units has three modes, namely 6, 8 and 12. Although the target echo processing method of the airborne radar is not a contribution of this paper, the SNR_min in Table 1 must be satisfied in the simulation to keep the detection probability of the airborne radar greater than or equal to 0.8. According to the determined SNR_min, the predicted minimum radiation power of the airborne radar should meet SNR_min as closely as possible; then, the corresponding minimum interference power of the jammer, which is related to the predicted minimum radiation power of the airborne radar, should approach the upper limit of P_d,CFAR in (36) as closely as possible. This principle is used in the following simulations in Figures 3-10. During the target tracking process, with the general interacting multiple model Kalman filter (IMMKF) and adaptive sampling by the airborne radar [30], the state equations of the maneuvering target in the simulations are shown in Table 2, in which there are three models F₁, F₂, F₃, and Γ₁ = sin(0.05T)/0.05, Γ₂ = [cos(0.05T) − 1]/0.05, Γ₃ = cos(0.05T), Γ₄ = sin(0.05T). The threshold of tracking accuracy is set to 160 m, and T is set to 0.1. Obviously, the higher the target speed, the smaller the sampling interval of the airborne radar. Since the target tracking process is not the key point of this paper, the simulation assumes that the target RCS is constant, and the target tracking process is not shown. To illustrate the tactic of this paper, we first simulate the detection performance of the different CFAR detectors when the airborne jammer is inactive, and then simulate the interference results with and without the radar echo according to (36) in Figures 3-5.
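The Γ terms quoted above are consistent with a coordinated-turn state-transition matrix at turn rate ω = 0.05 rad/s and sample time T. One plausible arrangement is sketched below; the state ordering [x, vx, y, vy] is an assumption for illustration.

```python
import math

def coordinated_turn_F(T, w=0.05):
    """Coordinated-turn transition matrix for state [x, vx, y, vy]."""
    s, c = math.sin(w * T), math.cos(w * T)
    return [
        [1.0, s / w,         0.0, -(1.0 - c) / w],  # x  row uses Gamma_1, Gamma_2
        [0.0, c,             0.0, -s],              # vx row uses Gamma_3, Gamma_4
        [0.0, (1.0 - c) / w, 1.0, s / w],
        [0.0, s,             0.0, c],
    ]
```

For small ωT the position rows reduce to the constant-velocity model, and the velocity sub-block is a pure rotation, so the target speed is preserved over a turn, which is what makes the model suitable for a subsonic maneuvering target.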
Then, to show that the jammer echo does not interfere with the tracking process, this paper compares the detection probabilities at each tracking time with and without the interference process. Finally, by comparing the interference results with those of two other methods, this paper shows that the proposed method is effective. The left panels of Figures 3-5 show the detection performance of the different CFAR detectors for different reference unit lengths, which illustrate that the detection performance of SO-CFAR is better when the reference unit lengths are 6 and 8, whereas for a reference unit length of 12 the detection performance of OS-CFAR is better. According to (36), when P_d,CFAR ≤ 0.2 and there is no radar echo, the simulation results in the right panels of Figures 3-5 show that the interference pulse has to fill 4, 6 and 4 reference units of the CFAR detector of the opposed PDS for reference unit lengths of 6, 8 and 12, respectively. When the horizontal coordinate is less than 1200 and there is no radar echo, the right panels of Figures 3-5 show that P_d,CFAR ≤ 0.2 holds well according to (36). However, when the horizontal coordinate is larger than 1200 and the radar echo appears, the interference results in Figures 3-5 change rapidly, because the jammer echoes have to avoid interfering with the radar echoes, as Figure 2b shows. Although the vertical coordinate changes rapidly in the right panels of Figures 3-5 when the horizontal coordinate is larger than 1200, its maximum value is still less than 0.3, which illustrates that the mathematical model proposed in this paper is still effective in preventing the opposed PDS from easily detecting the airborne radar signal. The right panels of Figures 3-5 also show that the interference results depend on the mode and the reference length of the CFAR detector.
When the reference unit lengths are 6 and 8, the right panels of Figures 3 and 4 show that the interference power is effective as long as SO-CFAR is interfered with. However, when the reference unit length is 12, the right panel of Figure 5 shows that the interference power is effective only when OS-CFAR is interfered with. Figures 3-5 also confirm the conclusion in the last paragraph of Section 3 that CA-CFAR is the most susceptible to interference. Figure 6 compares the detection probabilities of the airborne radar before and after interference by the airborne jammer, which shows that the detection probabilities of the airborne radar are almost always higher than 0.8, as constrained by (36). From Figures 3-6, the airborne jammer effectively interferes with the opposed PDS but has almost no effect on the airborne radar. Although there are many smart interference tactics in the literature, noise interference is always a simple but very effective tactic; moreover, smart interference is frequently useful only in special scenes. To illustrate that some smart interference tactics are invalid in the simulation scene of this paper, we take the multiple false target interference and the non-uniform false target interference [31,32] as the two comparison methods. From (36), there is a necessary constraint (t_j, t_j + τ_j) ∩ (t_r, t_r + τ) = ∅, which is the key to maintaining the detection probability of the airborne radar shown in Figure 6. In fact, this constraint is influenced by the time synchronization performance. In Figures 3-9, this paper assumes that the time synchronization between the airborne radar and the airborne jammer is perfect; however, a time synchronization error always exists. The simulation in Figure 10 shows the acceptable synchronization errors for different reference unit lengths, and indicates that the shorter the reference units, the smaller the acceptable synchronization error.
From Figure 10, we take the minimum value as the acceptable synchronization error of the tracking-while-jamming system, because the number of reference cells of the opposed PDS is difficult to know in advance; the acceptable synchronization error is therefore 1.5 µs, as shown in the top subgraph. Although the simulated acceptable synchronization error in the middle subgraph is constant, this is only a coincidence, which may result from the simulation parameters assumed above and from the minimum synchronization-error step of 0.1 µs used in the simulation. Figure 10. Acceptable synchronization error for different numbers of reference cells during the tracking process. Conclusions The opposed advanced PDS makes it difficult for conventional methods (such as minimizing the radiation power) to maintain the airborne radar in the LPD state. This paper argues that another way to keep the airborne radar in the LPD state is to interfere with the opposed PDS on the target while the airborne radar is tracking the target. Through analysis and simulations, we illustrate that our tactic is effective when the interference power and the time synchronization performance of the airborne jammer meet the necessary requirements.
The effect of variable stiffness of tuna-like fish body and fin on swimming performance

The work in this paper focuses on the examination of the effect of variable stiffness distributions on the kinematics and propulsion performance of a tuna-like swimmer. This is performed with the use of a recently developed fully coupled fluid-structure interaction solver. The two different scenarios considered in the present study are stiffness variation along the fish body and along the caudal fin, respectively. Our results show that it is feasible to replicate kinematics and propulsive capability similar to those of the real fish via purely passive structural deformations. In addition, propulsion performance improvement mainly depends on better orientation of the force near the posterior part of the swimmer towards the thrust direction. Specifically, when a variable body stiffness scenario is considered, the bionic body stiffness profile results in better performance in most cases studied herein compared with the uniform stiffness commonly investigated in previous studies. In the second scenario, where the stiffness is varied only in the spanwise direction of the tail, tail kinematics similar to that of live scombrid fish occurs only in association with the heterocercal flexural rigidity profile. The resulting asymmetric tail conformation also yields performance improvement at intermediate stiffness in comparison with the cupping and uniform stiffness. Introduction Tuna fish, as one of the most derived members of the family Scombridae, has long been regarded as an efficient swimmer during high-speed swimming (Donley and Dickson 2000, Fierstine and Walters 1968, Mariel-Luisa et al 2017). It is streamlined with a tear-drop-shaped body and a narrow caudal peduncle. One crucial feature of tuna fishes is their highly forked semilunate caudal fin with dorsal-ventrally symmetric external and intrinsic morphology.
The tail has a high aspect-ratio shape (defined as the ratio of the square of the span to the surface area) and is mostly composed of bone and collagen fibres. The stiffened fin rays, the hypural plate and the collagen fibres together form the main structure of the caudal fin, which withstands the majority of the resistance the fish experiences when it swims. These intrinsic configurations indicate that the tuna tail is a composite structure, similar to the fish body. Anatomical studies revealed that the internal structure of a tuna body includes anterior pointing arms, backbone, red muscle, main horizontal septum, myosepta, etc (Westneat et al 1993). The non-uniform distribution of the vertebrae and caudal fin rays imparts anisotropic structural flexibility and contributes significantly to the fish's swimming behaviour (Affleck 1950 and McHenry et al 1995). Attributed to the above-mentioned morphological features and the associated high swimming efficiency of scombrid fish, it is prudent to study the underlying swimming mechanism and thus provide insight into the design of bio-inspired artificial underwater vehicles. Unfortunately, the aforementioned material stiffness and its distribution were either excluded in previous studies by assuming that the physical models are rigid (Triantafyllou et al 1993, Triantafyllou and Triantafyllou 1995), or simplified by using predefined body/fin deformation [see a series of work at the University of Virginia, e.g., (Han et al 2020 and Zhong et al 2019)]. Indeed, stiffness was considered in many previous studies; however, those models were composed of uniformly distributed bending stiffness (Dai et al 2012b and Heathcote and Gursul 2007). Studies of the impact of stiffness on the swimming behaviour of fish-like models can be found in a series of experimental works at Harvard University (Feilich and Lauder 2015, Lucas et al 2015 and Mariel-Luisa et al 2017).
In the early study by Lucas et al (2015), four rectangular plates were tested with stiffness varied along the length of the foils. Their results indicated that models with high stiffness anteriorly and low stiffness posteriorly outperformed those with uniform stiffness profiles in terms of thrust generation and self-propelled speed. In another investigation, Feilich and Lauder (2015) modified the physical model to include an anterior part mimicking the fish body and a posterior part representing the caudal fin. Detailed examinations of the shapes, which varied from a forked tuna-like tail to an unforked tail with a deep peduncle, covered three different stiffness variations with a heave motion imposed at the leading edge of the model. It was found that there was no single 'optimal' tail shape exhibiting the best performance in all metrics studied, highlighting a complex interaction among body, tail shape and material stiffness. Following the work by Feilich and Lauder (2015), Mariel-Luisa et al (2017) extended the tuna-like foil models to a broader parametric span including variations of structural flexibility, heave amplitude and frequency. They compared the foil models' kinematics with counterparts from a live tuna fish, and found that stiffness and kinematics interacted subtly in their effect on hydrodynamic performance, with no single stiffness maximizing both thrust and efficiency. Although some progress has been made in the study of stiffness effects on tuna-like swimmers, the main limitations of the above work are threefold. Firstly, the availability of realistic materials to build the physical models is restricted in an experimental study, limiting the parameters that can be considered. Secondly, most of the foil models were constructed from plastics with uniform material stiffness, while in reality the tuna fish body is characterized by stiffness variation along the body length (Mariel-Luisa et al 2017).
Moreover, experiments are restricted in their ability to observe some detailed flow field information, e.g., the surface force and wake vorticity, which can be compensated for by numerical modelling. On the other hand, the conformation of the tuna tail is not extensively explored in the study of (Mariel-Luisa et al 2017), even though the caudal fin is one of the significant factors affecting swimmer propulsion performance. Experimental observation of live scombrid fish revealed that the locomotion of the tail is asymmetric during steady swimming within a wide range of swimming speeds, i.e., 1.2-3.0 L/s (L is the fish body length) (Gibb et al 1999). However, this dorsal-ventral asymmetry is also frequently observed for bluegill sunfish during braking manoeuvres, resulting in a loss of thrust and efficiency compared with symmetric tail locomotion, e.g., the cupping tail movement (Esposito et al 2012, Flammang and Lauder 2009, Luo et al 2020b). Given the differences in morphology between these two fish species, i.e., scombrid fish and bluegill sunfish, the actual role of biologically asymmetric tail locomotion in the swimming behaviour of tuna-like swimmers remains unknown [see examples in (Feilich and Lauder 2015) and (Krishnadas et al 2018)]. Inspired by the above studies (Lucas et al 2015 and Mariel-Luisa et al 2017), we systematically investigate the effects of non-uniform distributions of flexural stiffness on the kinematics and propulsion performance of a tuna-like swimmer (figure 1) using our recently developed fully coupled fluid-structure interaction (FSI) solver (Luo et al 2020b). Distinct from previous studies, stiffness variations of both the fish body and the fin are considered. Specifically, through a non-uniform distribution of stiffness along the body (figure 2), the main objective is to examine whether a bio-inspired stiffness profile improves performance and yields more fish-like kinematics.
Additionally, with the non-uniform stiffness distribution in the spanwise direction of the caudal fin (figure 3), we aim to explore the possibility of passively controlling the fin deformation and replicating some features of the tail kinematics observed in live fish (Gibb et al 1999), and to understand the effect of these tail conformations on hydrodynamic force production. In particular, we are curious about the role of the aforementioned asymmetric tail conformation during steady swimming of scombrid fish and its comparison with the experimental and numerical findings for bluegill sunfish (Esposito et al 2012 and Luo et al 2020b). The remainder of this paper is organized as follows: the geometry, locomotion kinematics and structural properties of the tuna-inspired flexible swimmer are introduced in section 2. The metrics used to evaluate swimming performance are also defined in this section. In the next section, the governing equations of the fluid and solid, as well as the implemented numerical techniques, are presented. Section 4 provides numerical results including structural deformations, propulsion performance and the flow field, etc. The discussion is then presented in section 5. Finally, conclusions are drawn in section 6. Problem formulation As mentioned earlier, the present tuna-like model in figure 1(b) is inspired by the experimental studies of (Feilich and Lauder 2015) and (Mariel-Luisa et al 2017) and has the same dimensions and size as in the experiments. In this study, we consider the stiffness variations of the body (from the leading edge to the peduncle, as shown in figure 1(b)) and of the caudal fin separately, in an attempt to shed light upon their respective effects on propulsive performance and kinematics. However, it is worthwhile to note that the present study does not attempt to reproduce a real fish in terms of its lifelike geometry or in-vivo material features.
Instead, following the studies by Esposito et al (2012) and Zhu and Bi (2017), we focus on some key characteristics, e.g., anisotropic flexural rigidities and the associated FSI, extracted from a real fish. The length of the model L is defined as the characteristic length in this problem, and its thickness h is 0.139 cm. The leading edge of this model matches the 30% total body length point of a real live fish (Mariel-Luisa et al 2017). All the edges of the model are chamfered to ease mesh generation for our fluid solver. In accordance with the experimental studies in (Feilich and Lauder 2015 and Mariel-Luisa et al 2017), the swimmer performs heave motion in the y-direction, i.e., the leading edge moves laterally in heave without pitch motion, in a uniform flow along the positive x-direction with velocity U∞. The time-dependent heave motion of the model is described by y(t) = y₀ sin(2πft), where y₀ is the maximum heave amplitude and f denotes the oscillation frequency. The dimensionless parameters in this study are defined as the Reynolds number Re = U∞L/ν, where ν is the kinematic viscosity of the fluid; the mass ratio m* = ρ_s h/(ρ_f L), where ρ_s and ρ_f represent the densities of the solid and fluid, respectively; the reduced frequency f* = fL/U∞; and the normalized flexural rigidity K = EI/(ρ_f U∞² L³), where E denotes Young's modulus and I = bh³/12 is the area moment of inertia of the cross-section per unit height. It is worth noting that the height of the model b is variable along the body length; we take the unit height b as a reference here for simplification (Dai et al 2012b). Structural models of the tuna-like swimmer As aforementioned, the stiffness distributions of the body and caudal fin are considered separately to avoid interactive effects. Specifically, two scenarios are considered here, i.e., one varies the stiffness along the body while the other uses different spanwise stiffness distributions in the tail.
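The dimensionless groups above can be collected in a few helper functions. The normalization conventions follow common flexible-plate practice, and the sample values used in the checks are assumptions, not the paper's parameter set.

```python
def reynolds(U, L, nu):
    """Reynolds number Re = U L / nu."""
    return U * L / nu

def mass_ratio(rho_s, h, rho_f, L):
    """Solid-to-fluid mass ratio m* = rho_s h / (rho_f L)."""
    return rho_s * h / (rho_f * L)

def reduced_frequency(f, L, U):
    """Reduced frequency f* = f L / U."""
    return f * L / U

def normalized_rigidity(E, h, U, L, rho_f, b=1.0):
    """Normalized flexural rigidity K = E I / (rho_f U^2 L^3), I = b h^3/12."""
    I = b * h**3 / 12.0          # area moment per unit height
    return E * I / (rho_f * U**2 * L**3)
```

Collecting the parameters this way makes it easy to confirm that two physically different model/fluid combinations are dynamically similar before comparing their kinematics.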
Stiffness variation along the body length The body of this model, which corresponds to 70% of the total length of a real fish, is chosen to be composed of 21 segments in our study [the vertebral number for various scombrid fishes ranges from 22 to 66 (Fierstine and Walters 1968)], as shown in figure 2. Each body segment is assigned a unique K, i.e., for the ith segment, the normalized flexural rigidity is K_bi (i = 1, ..., N, where N = 21). To our knowledge, there are no direct stiffness measurements of the tuna body in the literature. Instead, inspired by the stiffness distributions measured by McHenry et al (1995) for a pumpkinseed sunfish, the variation pattern of K_bi takes the form of part of an elliptic equation, in which K_c is a constant denoting the normalized flexural rigidity of the first segment. Meanwhile, a uniform profile K_bi = K_c is also used for comparison. These two variation patterns are denoted by the NU (non-uniform) and UB (uniform along the body length) modes for simplicity. The variation patterns of the flexural rigidities of the body segments are depicted in figure 2(b). In all these cases, the stiffness of the first segment is 10 times that of the segment near the peduncle, which is within the range of stiffness variation naturally observed in live fishes (McHenry et al 1995). When the stiffness of the body segments is varied, the caudal fin has a uniform stiffness with a value of 0.04 K_c, which is derived from the estimation based on the measurements in (McHenry et al 1995). Stiffness variation along the tail Regarding the flexural rigidity variation of the fin surface, inspired by the studies of Zhu and Bi (2017), the following stiffness profiles are used in an attempt to replicate some deformation patterns of the caudal fin observed in scombrid fishes in (Fierstine and Walters 1968) and (Gibb et al 1999), i.e., a cupping and a heterocercal fashion.
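One plausible realisation of the "part of an elliptic equation" body profile is sketched below: the stiffness falls from K_c at the head to K_c/10 at the peduncle along a quarter-ellipse. Only the 10:1 head-to-peduncle ratio is taken from the text; the exact coefficients are assumptions.

```python
import math

def body_stiffness(i, N=21, K_c=1.0, ratio=0.1):
    """Normalized flexural rigidity of the i-th body segment (1-based),
    decaying along a quarter-ellipse from K_c to ratio * K_c."""
    x = (i - 1) / (N - 1)                      # 0 at head, 1 at peduncle
    return K_c * math.sqrt(1.0 - (1.0 - ratio**2) * x * x)

profile = [body_stiffness(i) for i in range(1, 22)]
```

An elliptic decay of this kind keeps the anterior segments near K_c and concentrates most of the softening near the peduncle, qualitatively matching the stiff-anterior/flexible-posterior trend reported in the experimental studies cited above.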
The total number of principal caudal-fin rays of a tuna fish is almost always 17 according to Fierstine and Walters (1968), and therefore there are 17 segments in the fin part of our model, as shown in figure 3(a). Following the study by Zhu and Bi (2017) and our previous studies (Luo et al 2020b, Shi et al 2019), the variation styles of K_fi corresponding to the different deformation fashions can be described as (a) a cupping distribution and (b) a heterocercal distribution. Here, R̄ = (1/N) Σ_{i=1}^{N} R_fi, and K_m is the mean value of the stiffness of all the fin segments. The parameter γ determines the ratio of the stiffness between the least flexible segment and the most flexible one. As in the study of Luo et al (2020b), γ = 10 is selected in this work. A uniform distribution is also introduced for comparison. The three stiffness fashions are denoted CF (cupping), HF (heterocercal) and UF (uniform), and their corresponding stiffness profiles of the fin segments are presented in figure 3(b). As in the practice above, when the stiffness of the fin segments is varied, the stiffness of the body is uniform with a value of 25 K_m. Following an experimental measurement of the kinematics of scombrid fishes by Gibb et al (1999), we also placed seven marker points, shown in figure 1(b), on the fin surface to monitor the fin deformation during locomotion.

Performance metrics

The propulsion performance of the tuna-like swimmer is characterised by the mean thrust coefficient C_T, the mean energy expenditure coefficient C_P, the mean lateral force coefficient C_y in the y-direction and the mean vertical force coefficient C_z in the z-direction. These mean values are obtained by averaging the instantaneous values over one locomotion period T. The instantaneous thrust coefficient of the model is defined as C_T = −F_x/(½ ρf U∞² S), where S is the reference area, i.e., the area of the model in the xz plane, and F_x is the component of the total hydrodynamic force in the x-direction.
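The cupping and heterocercal profiles themselves are not reproduced above, so the sketch below uses assumed shapes: a dorsoventrally symmetric profile for CF and a linear dorsal-to-ventral gradient for HF, each rescaled so that the mean stiffness equals K_m and the stiff-to-compliant ratio equals γ = 10. These shapes are illustrative stand-ins, not the paper's expressions:

```python
def fin_stiffness(K_m, N=17, gamma=10.0, mode="CF"):
    """Normalized flexural rigidity of the N fin segments, scaled so the
    mean equals K_m; gamma is the stiff/compliant ratio.  The profile
    shapes here are assumptions, not taken from the paper."""
    if mode == "UF":
        return [K_m] * N
    shape = []
    for i in range(N):
        x = i / (N - 1)  # 0 at the dorsal edge, 1 at the ventral edge
        if mode == "CF":  # symmetric: stiff mid-span, compliant lobes
            s = 1.0 + (gamma - 1.0) * (1.0 - abs(2.0 * x - 1.0))
        else:             # "HF", asymmetric: stiff dorsal, compliant ventral
            s = gamma - (gamma - 1.0) * x
        shape.append(s)
    mean = sum(shape) / N
    return [K_m * s / mean for s in shape]
```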
Similarly, the lateral and vertical force coefficients are written as C_y = F_y/(½ ρf U∞² S) and C_z = F_z/(½ ρf U∞² S), where F_y and F_z are the components of the hydrodynamic force in the y- and z-direction, respectively. For the present tethered model in the flow, the power expenditure coefficient can be evaluated as C_P = P/(½ ρf U∞³ S), where P = −F_y ẏ is the power required to sustain the heave motion (Olivier and Dumas 2016). With the mean values of C_T and C_P over one period, the propulsion efficiency is given by η = C̄_T/C̄_P.

Governing equations and numerical approach

The present FSI model involves a finite-volume-method-based fluid dynamics solver, a finite-element-method-based solid dynamics solver and the coupler between the two. The fluid solver solves the unsteady, viscous and compressible flow governed by the laws of conservation of mass, momentum and energy, written in integral form as

d/dt ∫_{Ω_f} W dΩ + ∮_{Γ_f} (F_c − F_d) · n dΓ = 0,

where W = {ρf, ρf v, ρf E}ᵀ is the conservative variable vector, with v the velocity vector and E the total energy; the vector F_c represents the convective and pressure fluxes, and F_d the fluxes arising from the viscous shear stress and thermal diffusion. Ω_f denotes the fluid control volume with boundary Γ_f, and n is the unit outward normal vector. The fluid governing equation is discretized using a cell-centred finite volume method based on a multiblock structured grid system. The fluid domain Ω_f is divided into an array of hexahedral grid cells. For each hexahedral cell, indexed by (i, j, k), the conservation laws are applied and reformulated in the semi-discrete form

d(ΔΩ_f W_{i,j,k})/dt + R_{i,j,k} = 0,

where ΔΩ_f is the volume of cell (i, j, k), W_{i,j,k} denotes the average flow variables of the cell, and R_{i,j,k} is the residual measuring the net fluxes entering the hexahedral cell through its six faces. An artificial viscosity term is introduced in R_{i,j,k} to stabilize the scheme and eliminate spurious numerical oscillations (Jameson et al 1981).
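Given sampled force and heave-velocity histories over one period, the mean coefficients and efficiency defined above can be evaluated as follows. This is a sketch assuming uniform time sampling and a thrust-positive-in-the-negative-x sign convention:

```python
def mean_thrust_coefficient(Fx, rho_f, U_inf, S):
    """Mean C_T = mean(-F_x) / (0.5 * rho_f * U_inf**2 * S)."""
    return (sum(-f for f in Fx) / len(Fx)) / (0.5 * rho_f * U_inf**2 * S)

def mean_power_coefficient(Fy, ydot, rho_f, U_inf, S):
    """Mean C_P = mean(-F_y * ydot) / (0.5 * rho_f * U_inf**3 * S),
    treating -F_y*ydot as the power required to drive the heave."""
    P = [-fy * v for fy, v in zip(Fy, ydot)]
    return (sum(P) / len(P)) / (0.5 * rho_f * U_inf**3 * S)

def efficiency(CT_mean, CP_mean):
    """Propulsion efficiency eta = mean C_T / mean C_P."""
    return CT_mean / CP_mean
```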
The dual-time stepping scheme (Jameson 1991) is employed for time-dependent simulations, where equation (8) is reformulated as a steady-state problem in a pseudo-time t*,

∂W_{i,j,k}/∂t* + R*_{i,j,k} = 0, with R*_{i,j,k} = R_{i,j,k} + (ΔΩ_f/2Δt)(3W^{n+1}_{i,j,k} − 4W^n_{i,j,k} + W^{n−1}_{i,j,k}),

where the solution vectors of the two previous time levels, denoted n and n − 1, are used to yield second-order accuracy. Equation (10) is integrated by a hybrid multistage Runge-Kutta scheme. In this study, message-passing-interface-based parallelization is achieved by domain decomposition to enable large-scale computation. The different grid blocks are automatically distributed over a number of processors by block size with the application of a load-balancing algorithm. Furthermore, local time-stepping and a multigrid method are implemented to accelerate convergence. It should be noted that the present fluid solver resolves the compressible Navier-Stokes equations. To ensure that the compressibility is small enough to be negligible, the freestream Mach number, defined as Ma = U∞/a∞, where a∞ denotes the speed of sound of the freestream, is chosen as 0.06, far below the critical value of 0.3 at which compressibility effects become pronounced, but still sufficiently large to ensure numerical stability. Besides, the local Mach numbers in the whole computational domain are monitored during computation to guarantee that they stay below the critical value. This compressible fluid solver has been successfully applied to different incompressible flow simulations in our previous biomimetic studies (Liu et al). Regarding the structural dynamics, the basic equation is the balance of momentum, written in differential form as

ρs Ü = ∇0 · (F P) + ρs f,

where the acceleration of a material point is given by the second derivative of the displacement vector U of the structure, surface forces are modelled by the second Piola-Kirchhoff stress tensor P, and body forces per unit mass, such as gravity, are represented by f.
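The dual-time idea, driving to zero a pseudo-steady residual that embeds the second-order backward difference of the physical time derivative, can be illustrated on a scalar model problem dW/dt + R(W) = 0 with R(W) = W. The explicit pseudo-time update below stands in for the Runge-Kutta/multigrid machinery of the actual solver; it is a sketch, not the solver's implementation:

```python
def dual_time_step(R, W_n, W_nm1, dt, n_pseudo=200, dtau=0.01):
    """One physical BDF2 step obtained by pseudo-time marching:
    drive R*(W) = R(W) + (3W - 4*W_n + W_nm1)/(2*dt) to zero."""
    W = W_n  # initial guess for the new physical time level
    for _ in range(n_pseudo):
        R_star = R(W) + (3.0 * W - 4.0 * W_n + W_nm1) / (2.0 * dt)
        W = W - dtau * R_star  # explicit pseudo-time update
    return W
```

At convergence this recovers the implicit BDF2 solution of the physical problem; for dW/dt = −W with W_n = W_nm1 = 1 and Δt = 0.1, that solution is (4·1 − 1)/(3 + 2·0.1) = 15/16.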
A constitutive equation describing the relation between stress and strain is used to close equation (11). Specifically, for a Saint Venant-Kirchhoff material, the second Piola-Kirchhoff stress tensor P is obtained from P = C : E, where C is the elasticity tensor and E = ½(FᵀF − δ) is the Green-Lagrange strain tensor; the deformation gradient is characterized by F, and δ is the unit tensor. The general governing equation of the solid dynamics, i.e., equation (11), is discretized using the finite element method. With the application of the virtual work method, discretization over the complete solid domain yields a linear algebraic equation system [M]{Ü} + [K]{U} = {F}, where [M] is the mass matrix, [K] the stiffness matrix and {F} the load vector. The time domain is discretized using the α-method (Dhondt 2004). Denoting the velocity vector {V} := {U̇} and the acceleration vector {A} := {Ü}, the solution at time level n + 1 is obtained from {U}^{n+1} = {Ũ}^{n+1} + βΔt²{A}^{n+1} and {V}^{n+1} = {Ṽ}^{n+1} + γΔt{A}^{n+1}, where β and γ are the Newmark parameters, and {Ṽ}^{n+1} and {Ũ}^{n+1} can be considered as predictors at time level n + 1 that depend only on the values at time level n. In this work, the finite-element-based solid solver is CalculiX (Dhondt 2004), in which a variety of element types are available to discretize the solid domain and define the shape functions. Another main ingredient of an FSI solver is the coupling between the fluid and structure solvers. To reduce the effort of adapting the original computational codes and to preserve the advanced features of both the fluid and structure solvers, the two solvers are coupled via a partitioned framework based on preCICE (Bungartz et al 2016, Luo et al 2020b). It is a challenge to simulate strongly coupled FSI problems (Tian et al 2014), like the current flexible swimmer propulsion, in which numerical instabilities may cause divergence when the densities of fluid and solid are comparable (Causin et al 2005). Therefore, within this framework, an implicit scheme is adopted in which sub-iterations are introduced during each time step to ensure numerical stability and convergence.
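The α-method update can be sketched on a single-DOF undamped oscillator m·u″ + k·u = 0, as an illustration of the scheme CalculiX applies to the full FE system; the choice β = (1 − α)²/4, γ = 1/2 − α is the standard HHT parametrization (an assumption here), and α = 0 recovers the trapezoidal rule, which conserves the oscillator's energy exactly in the linear case:

```python
def hht_alpha_free_vibration(m, k, u0, v0, dt, n_steps, alpha=0.0):
    """Sketch of the alpha-method (HHT) for m*u'' + k*u = 0.
    alpha in [-1/3, 0]; alpha = 0 is the trapezoidal rule."""
    beta = (1.0 - alpha) ** 2 / 4.0
    gamma = 0.5 - alpha
    u, v = u0, v0
    a = -k * u0 / m  # consistent initial acceleration
    for _ in range(n_steps):
        # Newmark predictors from values at level n
        u_pred = u + dt * v + dt * dt * (0.5 - beta) * a
        v_pred = v + dt * (1.0 - gamma) * a
        # solve the alpha-shifted equilibrium for the new acceleration
        a_new = (alpha * k * u - (1.0 + alpha) * k * u_pred) / (
            m + (1.0 + alpha) * k * beta * dt * dt)
        # correctors
        u = u_pred + beta * dt * dt * a_new
        v = v_pred + gamma * dt * a_new
        a = a_new
    return u, v
```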
The details of the numerical methods and the validation tests are provided in (Luo et al 2020a, Luo et al 2020b). This FSI solver has also been applied to the simulation of flexible swimmer propulsion in our previous work (Luo et al 2020b, Luo et al 2019).

Results

For all our simulations, the Reynolds number Re = 8000, the mass ratio m* = 0.0089, the heaving amplitude y0 = 1 cm and Poisson's ratio νs = 0.25. Most of the parameters are chosen to match those in the experiment by Mariel-Luisa et al (2017). It is worth noting that the flow in our study is assumed to be laminar. In this Reynolds number regime (below or of the order of 10³), the turbulence effect may play an insignificant role in the flow field, as shown in some previous studies (Bozkurttas et al 2009, Buchholz and Smits 2006). A self-consistency study is performed to assess the appropriate mesh and time-step resolution for f* = 2.5 when the stiffness of the body segments is varied in the UB pattern with K_c = 0.1. Three grids are generated: a coarse grid with 2,628,096 cells and a minimum grid spacing of 1.48 × 10⁻³ L, a medium grid with 4,056,000 cells and a minimum grid spacing of 9.73 × 10⁻⁴ L, and a fine grid with 5,679,360 cells and a minimum grid spacing of 5.95 × 10⁻⁴ L. The computational domain and the medium fluid mesh around the tuna-like locomotor are shown in figure 4. On the model surface the no-slip condition is applied, while on the other boundaries the non-reflective far-field boundary condition is imposed. The structural mesh contains 4937 quadratic tetrahedral elements. For the three fluid meshes, different non-dimensional time-step sizes, defined as Δt* = Δt/T, are used: Δt* = 0.0087 for the coarse mesh, Δt* = 0.0069 for the medium mesh and Δt* = 0.0056 for the fine mesh.
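A self-consistency check of this kind amounts to comparing C_T(t) traces from successive meshes sampled on different time grids. The helper below is an illustration, not the solver's own diagnostics: it interpolates one trace onto the other's time grid and reports the maximum deviation, which should shrink as the meshes are refined:

```python
import numpy as np

def series_deviation(t1, c1, t2, c2):
    """Maximum absolute deviation between two C_T(t) traces sampled on
    different time grids, after interpolating the first onto the
    second's grid; a small value supports grid independence."""
    c1_on_t2 = np.interp(t2, t1, c1)
    return float(np.max(np.abs(c1_on_t2 - c2)))
```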
The time variation of C_T within one locomotion period for the three meshes with different time-step sizes is compared in figure 5. As seen, the results yielded by the medium and fine meshes are quite close. Therefore, the medium mesh and Δt* = 0.0069 are used for the following simulations to reduce computational cost while retaining sufficient accuracy.

Midline kinematics

The midline kinematics envelopes of the tuna-like models with uniform and non-uniform stiffness variations along the body length at f* = 2.5 are depicted in figure 6. The variation pattern of the model at K_c = 2 corresponds to the first mode defined by Michelin and Llewellyn Smith (2009), who characterized the vibration modes of flexible wings according to the number of necks in the enclosing envelope. The other patterns therefore correspond to a second mode. We also find that the material flexural rigidity governs the number of waves, and an increase of wavelength along the model with increasing inflexibility is observed here, as also demonstrated in (Dai et al 2012a, Feilich and Lauder 2015, Mariel-Luisa et al 2017). On closer inspection of the quantitative results for the lateral displacements, shown in figure 7, the difference in lateral excursion between the two stiffness distributions is mainly seen between 40%-80% of the model length. The maximum tail-tip displacement is experienced by the intermediate-stiffness model with the NU stiffness variation pattern at K_c = 0.2, followed by the models with UB profiles at K_c = 0.2 and 0.06. The total lateral displacements given by live tuna kinematics measured by Donley and Dickson (2000), by the S3 foil model in the experimental study of Mariel-Luisa et al (2017), and by the current FSI tuna-like swimmer model are compared in figure 8. The latter two show kinematic variation patterns different from those observed in a real kawakawa fish.
The most evident difference between the tuna-like models and tuna is that the minimum lateral displacement of the tuna fish is located at around 20%-40% of its 'thrust producing' body length, which corresponds to 44%-58% of its total length, while for most of the other models it is observed around 50%-80% of the models' length, as shown in figures 7 and 8.

Propulsion performance

The results of C_T, C_P and η when K_c is varied under different heave frequencies are depicted in figure 9. As can be observed from figure 9(a), the generated thrusts generally increase with a larger locomotion frequency. The effect of the body stiffness distribution pattern on thrust production is not monotonic. Under the parameters studied here, the NU mode generally creates larger thrust than the UB mode by a small majority (15 out of the total 26 cases). Nevertheless, the advantage of the NU mode appears only at low frequencies, i.e., f* = 2 and f* = 2.5, where larger thrust is yielded by the NU style in more than 70% of the cases (13 out of 18 cases). In contrast, at the higher frequency f* = 3.7, the UB mode produces larger thrust in most cases (6 out of 8 cases). Based on an experimental study of flexible rectangular foils (Shelton et al 2014), one would expect the maximum thrust to occur at K_c = 8, the largest stiffness, when the frequency is fixed. However, our results indicate that at this point the thrust is not the largest when f* = 3.7, and there is even no thrust generated at small frequencies. This finding corroborates a similar conclusion by Mariel-Luisa et al (2017) that stiffer models do not always produce greater thrust. The kinematic patterns are not reliable indicators for predicting swimming performance, as suggested by Mariel-Luisa et al (2017).
For example, the models with the NU stiffness variation have the most 'fish-like' kinematic curvatures at K_c = 0.2 (see figure 6), and therefore they would be expected to perform well. However, their thrust production is poor in some circumstances, e.g., at f* = 2.5 and K_c = 0.2, compared with the cases at K_c = 0.06 and K_c = 0.5 at the same frequency. On the other hand, based on experimental results for rectangular foils, Lucas et al (2015) suggested that a larger lateral displacement, especially of the tail tip, leads to larger thrust. Comparing figures 7 and 9(a), we find that for the UB mode at K_c = 0.06 and f* = 2.5, the tip displacement is 2.48 cm and C_T = 0.12. Nevertheless, the NU mode yields a tip displacement of 2.41 cm and a larger thrust coefficient, C_T = 0.16, at the same stiffness and frequency. Consistent with the experimental results of Mariel-Luisa et al (2017), this indicates that the tail-tip displacement alone does not necessarily predict propulsion performance. In figure 9(b), significant distinctions in C_P between the two modes are observed for the very flexible and very stiff cases at f* = 3.7, and at this frequency the NU mode is more energy-saving than the UB mode (7 out of 8 cases). When the frequency is smaller, i.e., f* = 2 and 2.5, the difference in C_P between the two stiffness styles is less noticeable, except at a few stiffness values, e.g., K_c = 2. A quantitative comparison between the two modes reveals that a smaller C_P is seen for the NU mode in a majority of cases (18 out of the total 26), although some of the differences are marginal, e.g., when K_c is near 0.2. Regarding the variation of propulsion efficiency, the effect of frequency on η is not as dominant as that on thrust and power expenditure. Namely, a high frequency does not always yield high efficiency, especially at f* = 3.7.
Generally, the more flexible models are almost always more efficient, in line with the experimental results of Mariel-Luisa et al (2017). This may not hold when the flexibility is sufficiently high, as indicated by the FSI studies in (Dai et al 2012b). Given the same locomotion frequency, the NU mode performs more efficiently than the UB mode in most cases (19 out of the 26 cases). It is interesting to compare the present numerical results with those of flexible flapping wings. For example, in a numerical study of a 2D flexible flapping wing in forward flight, Tian et al (2013) found that the thrust always peaked at a certain wing flexibility as the flexibility was varied (see figures 3(a) and 4(a) in their paper). However, in this work a global peak and several local thrust peaks are reached as the bending stiffness is varied at a fixed frequency, as shown in figure 9(a). Differences in the variation patterns of the power expenditure coefficient and efficiency between the current tuna-like model and the flapping wing can also be found by contrast with the results of Tian et al (2013). This may be attributed to the different model shapes and the kinematics imposed on the models. The present model comprises a 3D body and a forked tail, while a 2D flexible plate model was used in the study of Tian et al (2013). Besides, in Tian et al (2013) the flapping wing performed asymmetrical combined translational and rotational locomotion, while only a heave motion is applied to the tuna-like swimmer here. When pure heave locomotion was applied to the models, the occurrence of several local thrust peaks under stiffness variation was also reported in (Dai et al 2016, Ryu et al 2019, Zhu et al 2014). As shown in figures 9(a) and (b), the data plotted using the conventional dimensionless parameters defined in section 2.2 are not organized concisely. Thus, it is interesting to investigate a new scaling parameter to present the data.
Inspired by the scaling parameter study of Kang et al (2011), two non-dimensional parameters are defined here: the effective stiffness Π₁ = E h̄³/[12(1 − νs²) ρf U∞²], with h̄ = h/L the thickness ratio, and the relative tip deformation λ = (w_tip − w_root)/y0, where w_tip and w_root are the displacements of the tail tip and the root of the model. The resulting scaling, plotted in log scale for the UB and NU stiffness styles, is presented in figure 10. Two linear fits are used to approximate the correlation between log10(C_T/Π₁) and λ for positive and negative values of log10(C_T/Π₁), respectively. When the frequency is small, i.e., f* = 2, the values of log10(C_T/Π₁) are all smaller than zero, and their relation to λ is well represented by the linear fit, with a coefficient of determination (R²) of 0.93. At higher frequency, especially f* = 3.7, the variation of log10(C_T/Π₁) with λ is less regular, indicating a more complex interaction between the structure and the fluid. On inspection of figure 10(b), we find that more than half of the points at the highest frequency, f* = 3.7, lie above the linear fitted line. In contrast, the points for the smaller frequencies are more likely to lie below this line, which indicates that frequency has a significant effect on the power expenditure. The time histories of the thrust and power input within one locomotion period for the two stiffness distribution modes along the body length are depicted in figure 11. As can be observed, the non-uniform stiffness profile only slightly shifts the phase positions of the peaks and valleys of C_T and C_P compared with the uniform distribution. Meanwhile, it significantly increases the amplitude of the instantaneous thrust, e.g., a 23% increase of the peak thrust from the UB to the NU mode.
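The linear fits and R² values reported above can be reproduced with a short least-squares helper. This is a sketch: Π₁ follows the effective-stiffness definition given in the text, and any data arrays passed in are placeholders, not the paper's data:

```python
import numpy as np

def effective_stiffness(E, h_bar, nu_s, rho_f, U_inf):
    """Pi_1 = E * h_bar**3 / (12 * (1 - nu_s**2) * rho_f * U_inf**2)."""
    return E * h_bar**3 / (12.0 * (1.0 - nu_s**2) * rho_f * U_inf**2)

def scaling_fit(lam, y):
    """Least-squares line y = slope*lam + intercept for
    y = log10(C_T / Pi_1); returns (slope, intercept, R^2)."""
    lam = np.asarray(lam, dtype=float)
    y = np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(lam, y, 1)
    pred = slope * lam + intercept
    ss_res = float(np.sum((y - pred) ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    return float(slope), float(intercept), 1.0 - ss_res / ss_tot
```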
Moreover, the case with non-uniform stiffness distribution generates no drag throughout the entire motion period, which is reminiscent of a previous study on a flexible pectoral fin suggesting that fish can avoid the creation of drag by complex 3D conformations (Mittal et al 2006). In contrast, the power expenditure shows only a minor difference, thus leading to a significant increase of propulsion efficiency (45% relative to the UB mode) at K_c = 0.2 and f* = 3.7, as observed in figure 9(c).

Near-body flow field

The wake structure around the tuna-inspired models is visualized in figure 12. Remarkable cow-horn-shaped posterior body vortices (PBVs) are generated near the dorsal edge (PBV(D)) and the ventral edge (PBV(V)) in the wake of the swimmers with the UB and NU modes. Similar PBVs were also reported in numerical simulations of a swimming Crevalle Jack fish by Liu et al (2017) and a bluegill sunfish by Han et al (2020). The dorsal and ventral PBVs are compressed towards the root of the caudal fin, as shown in figures 12(e) and (f), as also presented in (Zhu et al 2002) (see figure 8 in their paper) and (Liu et al 2017) (see figure 11 in their paper). This vortex compression is due to the narrowing peduncle at the posterior fish body. Leading-edge vortices (LEVs) and trailing-edge vortices (TEVs), weaker than the PBVs, are seen near the caudal fin. In comparison, the previously shed TEVs of the swimmer with the NU stiffness profile are stronger than those of the UB mode (see figures 12(e) and (f)). Additionally, tooth-shaped vortices are seen near the first quarter of the model length, which is covered by high pressure when the swimmer flaps at the right-most position and is about to undergo stroke reversal. The Z-vorticity distribution in the xy plane around the locomotors with the NU and UB stiffness fashions is presented in figure 13. As can be seen, the vorticity of the two cases is qualitatively similar.
At this instant, a pair of LEVs on the body and TEVs on the tail can be observed clearly. With the aid of the streamlines, we can observe a remarkable vortical flow, especially for the NU mode, near the left surface of the leading-edge part of the propulsor. By comparison, the clockwise vortices near the trailing edge of the tail with the NU mode are slightly larger than those of the UB mode. The dense distribution of streamlines in the wake of the trailing edge also indicates a suction effect on the flow velocity, which may contribute to thrust production. To visualize the pressure distribution along the model surface, we depict the pressure coefficient contours on both sides of the model in the xz plane in figure 14. The area and magnitude of the high (left side surface, figure 14(a)) and low (right side surface, figure 14(b)) pressure regions of the UB mode are larger than those of the NU mode (figures 14(d) and (e)). These high- and low-pressure regions may correspond to the counterclockwise and clockwise LEVs of the body, respectively, as shown in figure 13. For instance, the counterclockwise LEVs dominate on the left side surface of the body, and likewise a high-pressure distribution is observed near the anterior body part on the same side surface, as shown in figures 14(a) and (d). This is reminiscent of a numerical simulation of mackerel-like swimmers by Borazjani and Daghooghi (2013), which suggested that the LEVs could alter the pressure distribution on the tail. By comparison, the pressure difference in the anterior part of the UB mode is more pronounced than that of the NU mode by direct observation. However, it appears hard to apply this method to evaluate the pressure difference in the posterior part.
Despite this, the configurations of the propulsors in the xy plane present very different bending patterns, i.e., the tail of the model with the NU mode, as shown in figure 14(f), flexes to a more considerable extent and thus shows a much larger pitch angle. This leads to a better orientation of the hydrodynamic forces along the negative x-axis direction, which benefits thrust generation directly. The force vector and the magnitudes of C_T and C_y for the two stiffness distribution fashions within one motion period are presented in figure 15. An inspection of figure 15(a) reveals that the force generated by the model with the NU mode is almost always better oriented in the thrust direction, although in some cases the magnitude of the force is smaller than that of the UB mode. This leads the NU mode to produce larger thrust than the UB mode over the entire motion cycle, as demonstrated in figure 15(b). As shown in figure 9(c), the values of C_P for the two stiffness styles are quite close at K_c = 0.2 and f* = 3.7, which indicates that this orientation of the forces, attributed to the flexing patterns of the tail, does not require additional power expenditure. As a result, higher propulsion efficiency is obtained by the NU mode. The larger pressure difference at the anterior part of the model with the UB mode, as mentioned above and shown in figure 14, only leads to a greater lateral force in the y-direction, as depicted in figures 15(a) and (c), which may be detrimental to the straight cruising of swimmers.

Results when the stiffness distribution of the tail is varied

Tail kinematics

The instantaneous deformation patterns of the swimmers with two different stiffness profiles assigned to the fin segments are presented in figure 16. The dorsal and ventral lobes of the tail with the CF mode are symmetrical with respect to the middle horizontal plane; a similar conformation is observed for the UF mode and is thus not shown.
The tail with the HF stiffness profile yields an asymmetry of movement, i.e., the dorsal lobe leads the ventral lobe during flapping. To quantitatively analyse the tail kinematics, the movement of the dorsal tail tip, i.e., point 7 in figure 1(b), in the x-, y- and z-directions is plotted in figure 17. As can be seen, the present numerical tuna-like propulsors exhibit relatively little tail movement in the vertical (z) and horizontal (x) dimensions, while the majority of the locomotion occurs in the lateral (y) direction during one tail-beat cycle, which aligns with the experimental measurements of scombrid fishes by Gibb et al (1999). Observing the time histories of the dorsal tail-tip lateral displacement and tail height in figure 18, we find that the amplitude of the tip displacement, around 2.6 cm, is close to the range of around 3 cm observed in live Scomber japonicus fishes with a similar total body length (around 25 cm) (Gibb et al 1999). Besides, the variation range of the tail height is about 0.5 cm, and its variation period is around half that of the tail-tip displacement, in line with the measurements by Gibb et al (1999). The maximum displacements in the x-, y- and z-directions of the seven points 1-7 on the fin are shown in figure 19. The amplitude of the tail excursions tends to be smallest in the peduncle and mid-tail regions and larger at the tail tips in all three dimensions, which agrees with the measurements of live S. japonicus fishes by Gibb et al (1999). The excursions in the x (horizontal) and z (vertical) directions are quite small compared with those in the y (lateral) dimension, whose magnitude is almost an order of magnitude larger. The wave of the lateral displacement propagates posteriorly, as shown in figure 20. For instance, the ventral and dorsal peduncle points reach their maximum lateral excursions around 10% of the flapping period ahead of the tail tip.
However, the ventral tail-tip reaches its maximum displacement approximately 7% of the cycle time behind the dorsal tail-tip.

Figure 20. Phase lag, measured as a percentage of the tail-beat cycle period, illustrating the effect of location on the fin on the timing of the lateral (y) locomotion of the fin. The dorsal tail-tip is defined as the reference location and therefore has zero phase shift. A negative value indicates that the point reaches its maximum lateral displacement before the dorsal tail-tip.

Propulsive capabilities

The results of C_T, C_y, C_z, C_P and η when the stiffness is varied for the three different stiffness profiles are presented in figure 21. In general, the effects of the different flexural rigidity distribution patterns on propulsive performance are mainly noticeable in the vertical force generation at the same stiffness, while the other metrics give close results. The same holds when the locomotion frequency is varied, and those results are therefore not shown in this study. Inspecting the thrust coefficient curves, we find that the tuna-like swimmers generate quite close thrusts except in the very flexible and moderately stiff cases. At intermediate stiffness, models with the HF stiffness profile produce larger thrust than the others. For example, at K_m = 0.005 and 0.02, the thrust generated by the swimmer with the HF stiffness pattern increases by 4.8% and 4.0%, respectively, over that of the CF mode. Regarding the lateral (y) force production, the magnitudes of C_y for both the CF and UF fashions first decrease as the stiffness is increased in the highly flexible cases, and then experience a general increase with larger K_m. In comparison, the lateral force of the HF mode almost always increases as the models become stiffer under the parameters studied. Only the swimmer with the HF stiffness profile yields non-negligible vertical forces, as presented in figure 21(c). The values of the lift force (C_z) of the HF stiffness model are generally an order of magnitude smaller than the thrust forces, which is in line with the experimental measurements of the forces produced by chub mackerel fishes (Nauen and Lauder 2002). In terms of power expenditure, all profiles experience a continuous increase with larger inflexibility. On the contrary, the propulsion efficiency decreases monotonically as the swimmers become stiffer after the peak at K_m = 0.001. The performance drop presented in figures 21(a) and (e) for the very flexible case, i.e., K_m = 0.0008, may be attributed to the declining ability of a highly flexible swimmer to impart momentum to the flow to induce thrust production, as demonstrated in (Michelin and Llewellyn Smith 2009, Olivier and Dumas 2016). To accurately distinguish the effects of the body and fin on thrust production, we split the total thrust of the swimmer into two parts, as shown in figure 22. The cases with a rigid body are also considered for comparison. As can be seen, the difference in thrust among the different stiffness distributions is indeed derived from the tail. Especially when the body is rigid, the thrust of the HF stiffness profile doubles compared with the others, which indicates a remarkable interaction between the body and tail during flapping.

Figure 21. The time-averaged coefficients of thrust C_T, lateral force C_y, vertical force C_z, power input C_P, and efficiency η when the stiffness along the fin is varied, at f* = 2.5.

The instantaneous variations of thrust and power expenditure over one locomotion cycle are depicted in figure 23. The total value for the model, including the body and fin parts, and the partial value for the fin alone are both presented for comparison.
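The phase lags quoted here, e.g., the ventral tail-tip trailing the dorsal tail-tip by about 7% of the cycle, can be extracted from the marker displacement traces. The helper below is a minimal peak-shift estimator over one uniformly sampled cycle, offered as an illustration rather than the paper's own analysis script:

```python
import numpy as np

def phase_lag(reference, signal):
    """Phase lag of `signal` behind `reference` as a fraction of the
    tail-beat period, from the shift between the lateral-displacement
    peaks over one uniformly sampled cycle (dorsal tail-tip as the
    zero-phase reference)."""
    n = len(reference)
    lag = (np.argmax(signal) - np.argmax(reference)) % n
    return lag / n
```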
On closer inspection of figure 23(b), we find that the stiffness distribution along the fin appears to have little effect on the variation of C_P, both for the entire model and for the tail alone. Nevertheless, it has a more significant impact on the thrust production of the fin part, although this difference is almost entirely eliminated in the total for the whole model. Figure 24 demonstrates the wake structure near the tail of the tuna-like swimmers with the CF and HF stiffness patterns. The wake structures near the body are similar to those when the body stiffness is varied, as shown in figure 12; therefore, only the vortex formation near the caudal fin is presented here. Inspecting figure 24, we find that the TEV of the CF mode generally presents good symmetry relative to the middle line in the z-direction. A closed vortex ring is observed in the wake of the trailing edge of the swimmer with the CF mode. In contrast, the counterpart near the tail of the swimmer with the HF stiffness style has an opening at the dorsal lobe, indicating that the symmetry is broken there.

Flow field near the swimmer

The Y-vorticity contours and near-body streamlines for the locomotors with the CF and HF profiles are depicted in figure 25. As seen, the height of the vortices near the tail tip, i.e., the secondary trailing-edge vortices, is approximately equal to the caudal fin height, which is consistent with the wake structure of chub mackerel fish obtained using digital particle image velocimetry techniques (Nauen and Lauder 2002). Their results also showed that the vortex jet was oriented at a slightly negative angle of around −3 degrees relative to the horizontal x-axis. This is also seen in figure 25(b), where the perpendicular to the streamline tilts from the vertical direction, indicating that the flow is slightly pushed downward along the negative z-direction, as indicated by the black line.
The pressure distribution along the swimmer surface is presented in figure 26. The main difference between the CF and HF modes is that much lower pressure (marked by a black circle in figure 26(d)) occurs at the ventral lobe of the tail on the right-side surface for the HF stiffness fashion. A more considerable pressure difference between the left (high pressure) and the right (low pressure) is therefore generated by the ventral lobe of the swimmer with the HF stiffness pattern. Observing the tail conformation at this instant in figure 16, one finds that the ventral half of the tail rolls upward. This orientation of the tail gives rise to a positive vertical component of the force resulting from the pressure difference, and it cannot be balanced by the dorsal lobe, which remains almost vertical. This may explain the production of lift by the HF stiffness profile. In contrast, although pressure differences are also generated on both the ventral and dorsal lobes of the tail of the swimmer with the CF mode, these two forces cancel owing to the symmetrical dorsal-ventral distribution.

Passive control via non-uniform stiffness distribution

A few previous studies have indicated that it is possible to imitate some morphological features of fish through passive control by imposing an appropriate stiffness distribution (McHenry et al 1995). A study by Videler (1993) revealed that the rigidity of the pectoral fish fin is enhanced near the leading edge, where the rays are bonded together. This observation was reinforced by numerical modelling of a flexible fin ray by Shoele and Zhu (2012). The underlying mechanism was the reduction of the effective angle of attack in the vicinity of the leading edge, reflected in the mitigation of LEV separation (Shoele and Zhu 2012).
The experimental work of Lucas et al (2015) also provided evidence that a model with biologically relevant stiffness, i.e., a stiffer anterior, presented more fish-like kinematics than uniform foils. Similar fish kinematic features are also revealed in the present study when attention is paid to the fishtail. For example, the symmetrical and asymmetrical rigidity styles lead to rather different tail conformations, as shown in figure 16. Generally, the HF profile produces features similar to those of the scombrid fishtail previously observed in the experiments of Gibb et al (1999). It is noted that such asymmetry appears only with the HF profile, where the lateral excursion of the dorsal tail-tip is 9.6% larger than that of the ventral tail-tip, as depicted in figure 19. The time-dependent dorsal-ventral asymmetry of the tail movements is also noticeable for the HF stiffness fashion (see figure 20). Although previous studies by Fierstine and Walters (1968) and Gibb et al (1999) reported similar trends for the caudal fin of skipjack tuna and chub mackerel, to our knowledge this is the first time such asymmetry trends in the lateral displacement magnitude and phase shift of the tail-tip have been replicated in a numerical FSI study covering a variety of flow and structure parameters. However, it is challenging to replicate true-to-nature fish kinematics relying entirely on pure passive control via non-uniform stiffness. For instance, the location where the valley value of the lateral displacement occurs with the NU stiffness profile does not match that of a live tuna (figure 8). Additionally, the variation pattern of the lateral displacement of the numerical swimmer differs considerably from the real fish data over the first 60% of the model length. This may suggest that active muscle contraction plays a dominant role in the formation of the body waveform.
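The dorsal-ventral phase shift discussed above (phase lag as a percentage of the tail-beat period, negative when a point peaks before the dorsal tail-tip) can be estimated from two displacement time series. A minimal sketch follows; the circular cross-correlation approach and the function name are our own illustrative choices, not the paper's method.

```python
import numpy as np

def phase_lag_percent(ref, sig, dt, period):
    """Phase lag of `sig` relative to `ref`, as a percentage of the
    tail-beat `period`; negative means `sig` reaches its maximum before
    `ref` (the sign convention used for the tail-tip phase plots).
    Signals must span an integer number of cycles, sampled every `dt`
    without a duplicated endpoint."""
    ref = ref - np.mean(ref)
    sig = sig - np.mean(sig)
    # circular cross-correlation via FFT; its peak gives the sample shift
    xcorr = np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(sig))).real
    lag = int(np.argmax(xcorr))
    if lag > len(ref) // 2:  # wrap to a signed shift
        lag -= len(ref)
    return -100.0 * lag * dt / period
```

Comparing, for example, the ventral tail-tip trace against the dorsal reference trace yields the kind of phase map described in the caption above.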
Regarding the tail kinematics, the present models differ from the experimental observations of Gibb et al (1999) in the phase shift between the curves of tail-tip displacement and tail height, as depicted in figure 18. In particular, their results suggested that the tail is maximally compressed at the maximum lateral displacement of the tail tip. Instead, in our results, the maximum lateral displacement appears at almost the same instant as the maximum abduction of the tail. Such a difference is most likely the consequence of sophisticated active control by the caudal fin muscles, which is hard to achieve through purely passive deformation. This finding is also reminiscent of the speculation by Gibb et al (1999) that the cyclical vertical compression of the scombrid fishtail results from the action of the interradialis muscle, which is positioned such that its contraction could draw the dorsal and ventral rays towards one another. Our numerical results corroborate this opinion. In addition to the above locomotion kinematics, passive control of the flow field can also be achieved indirectly through non-uniform stiffness distributions, as presented in (Shoele and Zhu 2012), where the strengthened leading edge mitigated LEV separation. In the present study, the bio-inspired non-uniform body stiffness profiles yield slightly stronger trailing-edge vortices (figure 13) and alter the pressure distribution, reflected in reduced pressure at the anterior body surface and near the peduncle, as shown in figure 14. Collectively, the non-uniform stiffness reorients the fluid force so that it points more towards the swimming direction and thus increases thrust, as presented in figure 15. The same applies to the thrust augmentation by the heterocercal stiffness profile along the fin surface.
The HF profile does not change the pressure magnitude very much but results in a rolling motion of the ventral lobe of the caudal fin, yielding a resultant force along the thrust direction (see figures 26 and 16). As a consequence, a lift force is also generated (figure 21(c)).

The function of the heterocercal conformation of the mackerel fishtail

Scombrid fish have a homocercal tail with dorsal-ventrally symmetrical external and internal morphology. Owing to this symmetric feature, it has usually been thought to function as a homocercal model (Gibb et al 1999). Kinematics measurements suggested that the tail flaps asymmetrically so as to provide lift during steady swimming (Gibb et al 1999), which is reproduced by the heterocercal stiffness profile in our numerical models. Indeed, previous research has indicated that the fish body is negatively buoyant, tending to push the fish towards the substratum (Magnuson 1973). To prevent this sinking, the body is tipped up to deliver additional upward lift near the anterior part of the fish, which is balanced by the vertical lift produced posteriorly by the tail (Aleev 1969). This theory leads to the general prediction that neutrally buoyant fish do not show an asymmetrical tail conformation during swimming. Interestingly, biological observations have suggested that significant dorsal-ventral asymmetry and tilting may also appear in some teleost fishes with near-neutral buoyancy (Gibb et al 1999, Lauder 1989, 2000, Webb 1993). Subsequent research revealed that this dorsal-ventrally asymmetrical tail deformation of the bluegill sunfish is closely related to their manoeuvring behaviours (Flammang and Lauder 2009). Inspired by this finding, a biomimetic bluegill sunfish tail, whose shape is plump without a sharp fork, was numerically studied in our previous work (Luo et al 2020b).
Our previous results suggested that the asymmetrical heterocercal tail conformation consistently yielded the smallest thrust and lowest efficiency during steady swimming among all the deformation patterns, including the cupping and uniform styles. Similar conclusions have been drawn from experiments on a robotic fish caudal fin with imposed tail motion derived from biological observations of the bluegill sunfish (Esposito et al 2012). However, in the present study of a swimmer with a tuna-like tail, the heterocercal stiffness profile and its resultant asymmetrical deformation do not degrade propulsion performance. Instead, in some instances, when the stiffness is at an intermediate level, the locomotors with the HF pattern even outperform the others in terms of thrust generation and propulsion efficiency (see figures 21(a) and (e)). By comparing the present results with the other studies, we may propose an additional explanation/hypothesis for the asymmetric scombrid fishtail conformation: its role may be not only to contribute to the lift force balance but also to thrust generation and propulsion efficiency. This is suggested at all speeds during the steady swimming of mackerel. However, it does not apply to the bluegill sunfish, which has a different tail shape. Most bluegill are neutrally buoyant, and thus there is no need to balance gravity against buoyancy when they swim. In this situation, they adopt the asymmetric tail movement to offer additional lift, along with a thrust reduction, during manoeuvring. It is also found that the stiffness profiles adopted in this study do not have as remarkable an impact on propulsion performance as those in the results of (Zhu and Bi 2017) and (Luo et al 2020b), where a bluegill-sunfish-inspired tail model was used. For example, the largest relative thrust difference in (Luo et al 2020b) was 29.3%, seen between the models with the cupping and heterocercal stiffness profiles.
In contrast, in this study the largest thrust distinction, seen between the HF and CF stiffness styles, is a difference of only 4.8%. This may be related to the different tail shapes tested and the different intrinsic musculature of the tails of their biological prototypes. Morphologically, the scombrid fishtail has a larger aspect ratio with a highly forked trailing edge, while the bluegill sunfish has an unforked tail with a smaller aspect ratio. Hydrodynamically, the different tail deformations due to variable stiffness are more likely to induce different force production for the bluegill sunfish thanks to its large control surface. Biologically, on the other hand, the intrinsic tail myology also determines the role the tail plays in swimming behaviour. Anatomical studies have revealed an extensive complement of intrinsic caudal musculature in the bluegill sunfish (Lauder 2015). It is believed that these complex conformations can control the adduction and abduction of individual fin rays, the movement of fin rays, and the relative motion of the upper and lower tail lobes. The use of these intrinsic muscles enables excellent control of the tail surface during different locomotor behaviours (Flammang and Lauder 2009). In comparison, the intrinsic caudal fin musculature is significantly reduced in scombrid fishes (Nursall 1963), and there is even no intrinsic tail musculature in the black skipjack tuna (Fierstine and Walters 1968). In summary, the tuna-like tail may not be a favourable prototype when manoeuvrability is the focus. This is suggested by the result that the change of tail deformation pattern has little influence on force generation, although the semilunate conformation is believed to offer high propulsion efficiency during high-speed swimming (Nauen and Lauder 2002).
Conclusions

Using a fully coupled three-dimensional FSI solver, we have numerically studied a tuna-inspired swimmer. Specifically, we investigated the effects of variable stiffness distributions along the body and the tail on the kinematics and dynamics of the locomotors separately. First, a bio-inspired non-uniform rigidity profile of the body was compared with a uniform mode through systematic simulations. The numerical results indicate that, for the parameters studied in this work, the larger thrust produced by the model with the bionic stiffness fashion is mainly seen at low frequencies; when the frequency is high, the swimmer with uniform stiffness produces larger thrust in most cases. The enhanced performance of the non-uniform stiffness mode is more noticeable in terms of propulsion efficiency, where more than 73% of all cases saw increased efficiency, and this improvement is seen for all three frequencies. Second, among the three distributions of tail stiffness, i.e., heterocercal, cupping and uniform, the swimmer with the heterocercal pattern shows the closest resemblance to real scombrid fishes in terms of tail kinematics. Additionally, swimmers with the heterocercal stiffness profile also outperform those with the other inflexibility distributions at intermediate stiffness, and the lift force they produce is absent for the other two stiffness patterns. These findings suggest that the asymmetrical tail conformation not only provides additional lift to balance the swimming body but may also contribute to efficient propulsion during the steady swimming of scombrid fish. This heterocercal tail deformation has functions distinct from those of the bluegill sunfish, whose caudal fin has superior manoeuvring abilities. Throughout our results, we also find that it is impossible to achieve entirely real fish-like kinematics if only passive control, via the variable body and fin stiffness proposed here, is adopted.
This is reflected in the comparison of our results with the experiments in (Donley and Dickson 2000, Gibb et al 1999), as discussed in section 5. It is reasonable to conjecture that these discrepancies are induced by the subtle and advanced active control of vertebral and tail muscular activities by fish, which could not be considered in this study. More work is needed in the future to fully explore the complex interactions between the swimmer and its surrounding environment, which are driven by muscular actuation, morphology and structural properties, and the resultant swimming kinematics and performance.
Juvenile myasthenia gravis

Juvenile myasthenia gravis (JMG) is a rare disorder acquired in childhood, representing 10% to 15% of all cases of myasthenia gravis. Like the adult form, it is generally characterized by an autoimmune attack on acetylcholine receptors (AChR) at the neuromuscular junction. Most patients present with ptosis, diplopia, and fatigability. More advanced cases may also have bulbar problems and limb weakness and may progress to paralysis of the respiratory muscles.

DISCUSSION

Autoimmune JMG is an uncommon disorder in the pediatric population, characterized by fatigable weakness due to antibody-mediated destruction of the AChR at the neuromuscular junction. Children typically present with ocular symptoms (ptosis, diplopia, ophthalmoplegia) but can also present with generalized weakness or bulbar symptoms (facial weakness, voice change, difficulty in chewing or swallowing). About 50-69% of JMG patients are seropositive (AChR antibodies), compared to 80% of adult patients. In both age groups, generalized MG has a higher seropositivity rate than pure ocular MG. [4] The differential diagnosis of JMG includes congenital myopathies, congenital myasthenic syndromes, toxins, hypothyroidism, mitochondrial myopathies, multiple sclerosis and brainstem tumors. [5] The diagnosis is based upon clinical signs and symptoms, with laboratory and electrophysiological studies used for confirmation. Although thymoma in children is rare, the thymus must be imaged (usually by CT) once JMG has been diagnosed. Most mediastinal tumors in the pediatric population are either neurogenic in origin (33%) or lymphomas (41%). Primary thymic lesions (such as thymic cysts, thymolipomas, and thymic hyperplasia) represent only 2.5% of mediastinal tumors, while thymomas comprise about 1%.
[6] The Myasthenia Gravis Foundation of America (MGFA) clinical classification divides MG into five main classes and several subclasses, designed to identify subgroups of patients who share distinct clinical features or severity of disease that may indicate different prognoses or responses to therapy. [7]

Class I: Any ocular muscle weakness. May have weakness of eye closure. All other muscle strength is normal.

Class II: Mild weakness affecting muscles other than ocular muscles. May also have ocular muscle weakness of any severity.
IIa: Predominantly affecting limb, axial muscles, or both. May also have lesser involvement of oropharyngeal muscles.
IIb: Predominantly affecting oropharyngeal, respiratory muscles, or both. May also have lesser or equal involvement of limb, axial muscles, or both.

Class III: Moderate weakness affecting muscles other than ocular muscles. May also have ocular muscle weakness of any severity.
IIIa: Predominantly affecting limb, axial muscles, or both. May also have lesser involvement of oropharyngeal muscles.
IIIb: Predominantly affecting oropharyngeal, respiratory muscles, or both. May also have lesser or equal involvement of limb, axial muscles, or both.

Class IV: Severe weakness affecting muscles other than ocular muscles. May also have ocular muscle weakness of any severity.
IVa: Predominantly affecting limb and/or axial muscles. May also have lesser involvement of oropharyngeal muscles.
IVb: Predominantly affecting oropharyngeal, respiratory muscles, or both. May also have lesser or equal involvement of limb, axial muscles, or both.

Class V: Defined by intubation, with or without mechanical ventilation, except when employed during routine postoperative management. The use of a feeding tube without intubation places the patient in class IVb.

This patient fits into class IIa of the MGFA classification.
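As a compact reference, the classes summarized above can be encoded as a simple lookup table. This is an abbreviated illustration only, not a clinical tool; the descriptions are shortened, and the function name is our own.

```python
# Abbreviated encoding of the MGFA clinical classification quoted above.
# Consult the MGFA task force paper [7] for the full definitions.
MGFA_CLASSES = {
    "I": "Any ocular muscle weakness; all other muscle strength normal.",
    "IIa": "Mild weakness, predominantly limb/axial muscles.",
    "IIb": "Mild weakness, predominantly oropharyngeal/respiratory muscles.",
    "IIIa": "Moderate weakness, predominantly limb/axial muscles.",
    "IIIb": "Moderate weakness, predominantly oropharyngeal/respiratory muscles.",
    "IVa": "Severe weakness, predominantly limb/axial muscles.",
    "IVb": "Severe weakness, predominantly oropharyngeal/respiratory muscles "
           "(includes feeding tube without intubation).",
    "V": "Intubation, with or without mechanical ventilation "
         "(except routine postoperative use).",
}

def describe_mgfa(mgfa_class):
    """Return the abbreviated description for an MGFA class label."""
    return MGFA_CLASSES[mgfa_class.strip()]
```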
Treatment consists first of anticholinesterase drugs such as pyridostigmine (30-90 mg every 6 hours), given with oral corticosteroids (prednisone 15-20 mg/day). When long-term immunosuppression is necessary, azathioprine is recommended to allow tapering the steroids to the lowest possible dose whilst maintaining azathioprine. Cyclosporine A, mycophenolate mofetil and cyclophosphamide are used for severe cases. Plasma exchange is recommended in severe cases to induce remission and in preparation for surgery. Intravenous immune globulin and plasma exchange are effective for the treatment of MG exacerbations. For patients with non-thymomatous MG, thymectomy is recommended as an option to increase the probability of remission or improvement. It is considered an appropriate procedure for many patients with generalized MG between puberty and 55 years of age. If possible, thymectomy should be postponed until puberty because of the importance of the gland in the development of the immune system, but JMG is also quite responsive. The remission rate after thymectomy is approximately 35%, provided it is done in the first year or two after the onset of the disease, and another 50% of patients will improve to some extent. [8] Once thymoma is diagnosed, thymectomy is indicated irrespective of MG severity. The course of the illness is extremely variable. The long-term outlook for children with myasthenia is better than it is for adults.

International Journal of Medicine and Public Health | Oct-Dec 2014 | Vol 4 | Issue 4

Figure 2: Thymus gland scintigraphy. SPECT tomographic images in different views reveal physiological tracer concentration in the myocardium and moderately increased tracer concentration in the retrosternal region.
PISCO_HyM_GR2M: A Model of Monthly Water Balance in Peru (1981-2020)

Quantification of the surface water supply is crucial for its management. In Peru, the low spatial density of hydrometric stations makes this task challenging. This work aims to evaluate the hydrological performance of a monthly water balance model in Peru using precipitation and evapotranspiration data from the high-resolution meteorological PISCO dataset, which has been developed by the National Service of Meteorology and Hydrology of Peru (SENAMHI). A regionalization approach based on Fourier Amplitude Sensitivity Testing (FAST) of the rainfall-runoff (RR) and runoff variability (RV) indices defined 14 calibration regions nationwide. Next, the GR2M model was used at a semi-distributed scale in 3594 sub-basins and river streams to simulate monthly discharges from January 1981 to March 2020. Model performance was evaluated using the Kling-Gupta efficiency (KGE), the square-root-transformed Nash-Sutcliffe efficiency (NSEsqrt), and the water balance error (WBE). The results show a very good representation of monthly discharges for a large portion of Peruvian sub-basins (KGE ≥ 0.75, NSEsqrt ≥ 0.65, and −0.29 < WBE < 0.23). Finally, this study introduces a product of continuous monthly discharge rates in Peru, named PISCO_HyM_GR2M, to improve understanding of the surface water balance in data-scarce sub-basins.

Introduction

In Peru, surface water resources are distributed heterogeneously across its three hydrographic regions: the Pacific (western Andean slopes and Peruvian coast), Titicaca (endorheic part of the Peruvian altiplano), and Atlantic (Amazon basin). The densely populated Pacific slope is characterized by high water stress due to its low water supply and the high demand of its economic activity. In contrast, the sparsely populated Atlantic slope has a large surplus due to low demand and, above all, because it is supplied by the Amazon basin [1].
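The three evaluation metrics named in the abstract can be sketched as follows. The KGE and NSE formulas are the standard ones (Gupta et al., 2009; Nash-Sutcliffe); the WBE here is written as a relative volume error, which is our assumption about the paper's exact definition.

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (a-1)^2 + (b-1)^2)."""
    r = np.corrcoef(sim, obs)[0, 1]       # linear correlation
    alpha = np.std(sim) / np.std(obs)     # variability ratio
    beta = np.mean(sim) / np.mean(obs)    # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def nse_sqrt(sim, obs):
    """Nash-Sutcliffe efficiency on square-root-transformed flows."""
    s, o = np.sqrt(sim), np.sqrt(obs)
    return 1.0 - np.sum((s - o) ** 2) / np.sum((o - np.mean(o)) ** 2)

def wbe(sim, obs):
    """Water balance error as a relative volume error (assumed definition)."""
    return (np.sum(sim) - np.sum(obs)) / np.sum(obs)
```

A perfect simulation gives KGE = NSEsqrt = 1 and WBE = 0, which is why the thresholds quoted above (KGE ≥ 0.75, NSEsqrt ≥ 0.65, WBE near zero) indicate good performance.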
In this context, adequately quantifying the water supply is critical for properly managing and planning water resources in the country [2-4]. However, the low density of stations and their short data records make it difficult to monitor and forecast streamflows at a national level, so hydrological modeling emerges as a promising option for complementing the hydrometric records, improving the understanding of the rainfall-runoff relationship [5,6] and supporting seasonal hydrological forecasting [7,8]. Implementing a large-scale hydrological model requires precipitation data with wide spatiotemporal coverage, so the use of satellite precipitation products has become increasingly important in recent years [9], especially for application in Peruvian basins with scarce information [10]. For example, the recently developed meteorological gridded product for the Peruvian domain, called PISCO (Peruvian Interpolated data of SENAMHI's Climatological Observations) [11], can drive a large-scale hydrological model to estimate monthly discharges at a national level in a data-scarce context. Regional hydrological models, unlike local models, require more effort because of the 'problem of regional modeling' described by [12]: the model must be calibrated and validated simultaneously against a large number of hydrometric stations and multiple basins with different rainfall and temperature regimes, physiography, and vegetation cover [13], all without distorting the regional representation of hydrological processes [14]. This characteristic is problematic for quantifying surface runoff in data-scarce basins, leading to alternative ways to extrapolate hydrological information from one basin to another [15], commonly following criteria of proximity [16] and hydrological similarity based on physiographic and hydroclimatic characteristics [17].
Recently, hydrological regionalization techniques have been explored based on empirical relationships between geomorphological characteristics and model parameters [18], analysis of hydrological dissimilarity [19], streamflow fluctuation [20] and sensitivity [21], machine learning techniques [12], partial least squares regression and clustering analysis [15,22], principal components, and self-organizing maps [23]. Physically based hydrological models that represent in detail all the hydrological processes in a basin are currently available [24,25]; however, their application in a data-scarce context would increase uncertainties in different components of the hydrological system [26-28]. In contrast, conceptual hydrological models require fewer data, making them operationally easier and decreasing the computational cost over an extensive domain [29,30]. For instance, the GR2M conceptual model [31] has been widely used under different hydroclimatic conditions around the world with satisfactory results [32,33], even performing better than other water balance models [34]. Additionally, it has been used to assess climate change effects on water resources [35-37]. In recent years, experiments have been developed to improve the performance of hydrological modeling with GR2M, incorporating Bayesian calibration approaches [38,39] and coupling to fuzzy models [7]. No prior research at a national level has been reported in this region using the GR2M model, and a regional hydrological simulation that incorporates observed data and provides a dataset of estimated discharges for river streams on the three Peruvian slopes remains challenging. In the Amazon basin, however, the GR2M model has been applied to assess climate change impacts on water resources and to identify annual discharge trends [40]. A recent implementation on the Pacific slope also evaluated multidecadal runoff under data-scarce conditions, showing higher model robustness than other conceptual models [18].
In ungauged basins, regional GR2M parameter estimation was studied in [23] using a regression approach, finding unsuitable model results for basins located under a semi-arid climatic regime. In [41], the GR2M model was applied to reconstruct monthly river streamflows in 51 gauged sub-basins. The generation of global and regional hydrological datasets, such as the Model Parameter Estimation Experiment (MOPEX) data [42] and the Catchment Attributes and MEteorology for Large-sample Studies (CAMELS) [43], among others, is beneficial for exploring the behavior of basins [44], anticipating hydrological changes [45] and studying the impact of human activities on the hydrological cycle [46]. In South America, local adaptations have been integrated into the CAMELS product, such as the CAMELS-BR [47] dataset in Brazil and CAMELS-CL [48] in Chile, currently used to study the impact of climate change on water resources and to study droughts, among others. In this sense, hydrological modeling at the national level is particularly useful to establish a basis for constructing a hydrological dataset in Peru. This study aims to evaluate the hydrological performance of a monthly water balance model at a national level. For this purpose, a sensitivity analysis of two hydroclimatic indices is used to define calibration regions, and the GR2M conceptual model is used to simulate monthly discharges in gauged and ungauged sub-basins from January 1981 to March 2020. Finally, a new hydrological product for Peru is introduced to provide continuous monthly streamflow information over the country and contribute to understanding the water balance in data-scarce basins.

Study Area

Peru is located on the west coast of the South American continent. It has an area of 1,285,220 km² and a population of approximately 32.5 million people. It borders on the west with the Pacific Ocean, on the north with Ecuador and Colombia, and on the southeast with Brazil, Bolivia, and Chile.
The Andes mountain range creates a complex topography and introduces hydroclimatic variability across the three hydrographic regions: Pacific, Atlantic, and Titicaca. This natural orographic barrier traps atmospheric moisture from the Atlantic, producing high rainfall over the Andean-Amazon region and Amazon lowlands (eastern side) and low rainfall on the coast (western side) [40], leading to the great contrast of water resources in the country, characterized by a much larger water supply on the Atlantic slope than on the Titicaca and Pacific slopes [1]. Rainfall is highly variable in both space and time [49], with maximum rainfall rates between November and March. Coastal areas on the Pacific slope are characterized by arid conditions with low rainfall rates (<~150 mm/year) and the western flank of the Andes by semi-arid conditions (<~400 mm/year) [18]. The Atlantic and Titicaca slopes have humid conditions with high rainfall rates on the eastern flank of the Andes (~1100 mm/year), in the Andes-Amazon transition (~3200 mm/year), and in the Amazon lowland (~2550 mm/year) [11]. Mean annual temperature over the country is inversely related to elevation (the lower the altitude, the higher the temperature). In this work, the study domain corresponds to the entire Peruvian territory, including transboundary basins with Ecuador, Colombia, and Brazil, with an approximate total drainage area of 1,480,620 km² (Figure 1). Moreover, 3594 river streams and sub-basins with a median area of 300 km² (extreme values of 40 km² and 2500 km²) were delimited to obtain a fine streamflow spatialization consistent with the meteorological input resolution (~10 × ~10 km), considering a unique river stream per sub-basin to compute flow accumulation. In Figure 1, gauged areas correspond to drainage areas covered by a hydrometric station, while ungauged areas correspond to areas without any hydrometric control downstream.

Data and Methods

The methodology used in this study is outlined in Figure 2.
The details of the data and methods used are given below. Currently, the PISCO product is a dataset with daily and monthly temporal resolution for the variables precipitation (P), air temperature (TMP), and potential evapotranspiration (PET). It covers the entire Peruvian territory, including transboundary basins with Ecuador, Colombia, and Brazil. In its stable version, PISCO is only available from 1981 to 2016; however, an unstable version of the precipitation sub-product used for operational purposes is available from 1981 to the present day and is updated daily. The gridded precipitation sub-product (0.1° × 0.1°) is generated by applying geostatistical techniques to combine satellite precipitation estimates from the CHIRP project (Climate Hazards InfraRed Precipitation) with ground data from the SENAMHI pluviometric network [11]. Similarly, the temperature sub-product (0.1° × 0.1°) is generated by combining air temperature data from MODIS images with observations from weather stations. The evapotranspiration sub-product (0.1° × 0.1°) is generated from the temperature-gridded data following the methodology proposed by [49]. The PISCO dataset is available on the IRI Data Library: http://iridl.ldeo.columbia.edu/SOURCES/.SENAMHI/.HSR/.PISCO (accessed on 20 March 2019). In this work, the mean areal values of P and PET are calculated for each of the 3594 sub-basins from January 1981 to March 2020 and are then continuously updated for the operational purpose of continuous monthly streamflow estimation. Because of our hydrological model's operational purpose, the unstable version of the precipitation sub-product was used in this study. For evapotranspiration, in contrast, we use only climatological values owing to the lack of data since January 2017.

Discharge Data

In this work, monthly observed streamflows at 43 hydrometric stations were selected from January 1981 to March 2020.
Most of these stations belong to the National Service of Meteorology and Hydrology of Peru (SENAMHI, https://www.gob.pe/senamhi, accessed on 13 March 2021). In the Amazon region (Atlantic slope), most of the stations are monitored by SENAMHI and the IRD (French Institute for Sustainable Development) in the frame of the HYBAM Project (https://hybam.obs-mip.fr/, accessed on 25 March 2021). The selected stations are summarized in Table 1, and their distribution throughout the Peruvian territory is shown in Figure 1a. The selection process considered stations with more than 10% coverage of the study period (1981-2020), high data quality, and locations in basins that guarantee coverage throughout the national territory. In total, 74.8% of the study area is gauged by the 43 hydrometric stations selected, and only 25.2% corresponds to ungauged areas.

Semi-Distributed GR2M Model

In this study, the GR2M conceptual model [50] simulates monthly runoff in 3594 sub-basins. The GR2M model transforms rainfall into runoff through two equations, the production and transfer functions [51], and requires monthly input data of precipitation (P) and potential evapotranspiration (PET). P is distributed between the upper storage tank (S), with limited capacity, and the underground storage reservoir (R). GR2M has two calibratable parameters, X1 and X2, where X1 defines the maximum capacity of S and X2 controls the water exchange between R and the outside of the basin. Because GR2M is a lumped model, the simulated runoff in each sub-basin is then routed (as FlowAcum) considering the flow direction (FlowDir) generated from the 90 m HydroSHEDS Hydrologically Conditioned Digital Elevation Model (CONDEM) [52] and a Weighted Flow Accumulation (WFAC) algorithm, in which the simulated runoff provides the weighting factor (Weight) for each sub-basin. For example, at time step t = 1, the monthly runoff (q) is first calculated for each sub-basin in the study area.
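The per-sub-basin runoff q comes from the two GR2M stores described above. A minimal sketch of one GR2M monthly step follows, written from the standard two-parameter formulation implemented in airGR; this is our transcription for illustration, and the function name is our own.

```python
import math

def gr2m_step(P, E, X1, X2, S, R):
    """One monthly GR2M step. P: precipitation (mm), E: PET (mm),
    X1: production store capacity (mm), X2: exchange coefficient (-),
    S and R: store levels carried between months.
    Returns (runoff_mm, new_S, new_R)."""
    # production store: filling by rainfall ...
    phi = math.tanh(P / X1)
    S1 = (S + X1 * phi) / (1.0 + phi * S / X1)
    P1 = P + S - S1                      # rainfall excess
    # ... then depletion by evapotranspiration
    psi = math.tanh(E / X1)
    S2 = S1 * (1.0 - psi) / (1.0 + psi * (1.0 - S1 / X1))
    # percolation leaving the production store
    new_S = S2 / (1.0 + (S2 / X1) ** 3) ** (1.0 / 3.0)
    P3 = P1 + (S2 - new_S)               # total water reaching routing
    # routing store; X2 scales the exchange with the outside of the basin
    R2 = X2 * (R + P3)
    Q = R2 ** 2 / (R2 + 60.0)            # quadratic outflow, fixed 60 mm
    return Q, new_S, R2 - Q
```

Iterating this step over the monthly P/PET series of one sub-basin yields its runoff series q, which is then routed downstream by the weighted flow accumulation.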
Then the Weight raster is created by rasterizing the sub-basins based on the flow-direction raster and assigning the value of q to the pixel corresponding to the centroid of each sub-basin, as appropriate. Finally, the WFAC is used to accumulate the values of q and obtain a raster map with discharge values (Q) for each river stream. This process is repeated for each time step. The scheme of the semi-distributed GR2M adaptation for national-level modeling is shown in Figure 3. In this work, the GR2M model implemented in the airGR R package [52] was used, together with adaptations to calculate the areal means of P and PET from gridded data, run the semi-distributed GR2M model using the WFAC algorithm, and automatically calibrate the model parameters. As a result of this process, an experimental R package called GR2MSemiDistr was generated and is freely available at https://github.com/hllauca/GR2MSemiDistr (accessed on 13 March 2021).

Sensitivity Analysis
Similar to [53], the spatial patterns and magnitudes of the relative sensitivities of X1 and X2 with respect to two hydroclimatic indices are taken as the basis for delimiting homogeneous calibration regions at the national level. Unlike studies that perform a sensitivity analysis (SA) using metrics such as the Nash-Sutcliffe efficiency index [54], this study examines sensitivity based on the runoff ratio (RR) and runoff variability (RV) hydroclimatic indices proposed by [55]. Table 2 describes both indices and their respective equations. Fourier amplitude sensitivity testing (FAST) [56] was applied using the fast R package of [55] to calculate the relative sensitivities of X1 and X2 for both indices in all sub-basins. An ensemble of 1000 unique and uncorrelated parameter sets was generated. The GR2M model was then run at the national level for each ensemble member using the same P and PET forcing data, yielding thousands of streamflow time series and RR and RV indices.
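The weighted flow accumulation step can also be expressed on the drainage network directly rather than on rasters: each sub-basin contributes its simulated runoff q as a weight, and values are summed downstream. A hedged Python sketch (the network and weights below are toy values; the real workflow operates on FlowDir rasters):

```python
from collections import defaultdict, deque

def weighted_flow_accumulation(downstream, weight):
    """Accumulate sub-basin runoff along a drainage network.

    downstream : dict mapping each sub-basin id to the id it drains into
                 (outlets map to None).
    weight     : dict of local runoff (the 'Weight' value) per id.
    Returns a dict of accumulated values: local runoff plus everything
    draining in from upstream.
    """
    indeg = defaultdict(int)
    for src, dst in downstream.items():
        if dst is not None:
            indeg[dst] += 1
    acc = {cid: float(w) for cid, w in weight.items()}
    # Kahn-style topological sweep starting from headwater sub-basins
    queue = deque(cid for cid in weight if indeg[cid] == 0)
    while queue:
        cid = queue.popleft()
        dst = downstream.get(cid)
        if dst is None:
            continue
        acc[dst] += acc[cid]
        indeg[dst] -= 1
        if indeg[dst] == 0:
            queue.append(dst)
    return acc

# Toy network: A and B drain into C, C into outlet D.
down = {"A": "C", "B": "C", "C": "D", "D": None}
w = {"A": 10.0, "B": 5.0, "C": 2.0, "D": 1.0}
acc = weighted_flow_accumulation(down, w)
print(acc["D"])  # 18.0
```

Running this once per time step reproduces the per-stream discharge map described above.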
Finally, a Fourier transformation is applied to RR and RV, and the results are scaled from 0 to 1 to obtain the relative sensitivities of X1 and X2. Values of zero indicate low sensitivity of the hydroclimatic index to a variation of a GR2M parameter, and values of 1 indicate high sensitivity. The details of the FAST methodology can be found in [57].

Table 2. Hydroclimatic indices used for the sensitivity analysis (SA).
Index                    Unit   Equation              Description
Runoff Ratio (RR)        -      RR = mean(X)/mean(P)  The ratio of simulated runoff to precipitation
Runoff Variability (RV)  -      RV = σX/σP            The ratio of the standard deviation of simulated runoff to the standard deviation of precipitation
Note: X, simulated runoff in mm; P, rainfall in mm; σX, σP, standard deviations of simulated runoff and rainfall.

Calibration Regions and Sub-Regions
The relative sensitivities of RR and RV calculated in the previous section, together with proximity variables (the latitude and longitude of each sub-basin's centroid), were used for cluster analysis. Ward's hierarchical clustering method based on L-moment statistics [58] was then used to divide the sub-basins into homogeneous regions. Finally, the regions are hydrologically conditioned according to the discordancy and heterogeneity statistics described in [59], requiring at least one hydrometric station per calibration region. In the absence of a station within a previously defined region, neighboring regions are merged to satisfy this condition. Because of the low density of hydrometric stations in the study area providing long records of monthly streamflows for model calibration (Figure 1a and Table 1) and the equifinality of the parameters [60], sub-regions are restricted to the portions gauged by hydrometric stations within each calibration region. These sub-regions are defined by superimposing the boundaries of the calibration regions and the gauged areas (Figure 1a). Thus, there are as many sets of calibrated GR2M parameters for a given region as there are sub-regions within it.
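Following the descriptions in Table 2, the two hydroclimatic indices reduce to simple ratios of means and standard deviations of the simulated runoff and precipitation series. An illustrative Python version:

```python
import numpy as np

def runoff_ratio(runoff, precip):
    """RR: mean simulated runoff divided by mean precipitation (both mm)."""
    return float(np.mean(runoff) / np.mean(precip))

def runoff_variability(runoff, precip):
    """RV: standard deviation of simulated runoff over that of precipitation."""
    return float(np.std(runoff) / np.std(precip))

# Toy series: runoff is half of rainfall, so both indices equal 0.5.
p = [10.0, 20.0, 30.0]
q = [5.0, 10.0, 15.0]
print(runoff_ratio(q, p), runoff_variability(q, p))
```

Evaluating these two indices over each FAST ensemble member provides the model outputs whose variance is then decomposed by parameter.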
In the case of ungauged areas, only the calibration regions are considered.

GR2M Calibration and Validation Strategy
In this work, 43 hydrometric stations are used to calibrate and validate Peru's water balance model. The selected calibration period for each hydrometric station ranges from 60 to 70% of its available records. The P and PET climatologies (1981-2010) were used to fill data between January 1978 and December 1980, which is treated as a warm-up period for the national simulation. Parameters X1 and X2 were automatically calibrated using the Shuffled Complex Evolution (SCE-UA) algorithm [61], with the Kling-Gupta efficiency criterion (KGE) [61] as the objective function, similar to [62], with an emphasis on high flows. The square-root-transformed Nash-Sutcliffe efficiency (NSEsqrt) [62] is used to evaluate model performance on general flows, and the Water Balance Error (WBE) [63] is used to assess the model bias. The statistical metrics used and their corresponding equations are summarized in Table 3. The validation process consisted of evaluating the model's outputs, based on the previously calibrated parameters, over the remaining 30-40% of the available streamflow records.

Table 3. Statistical metrics and their corresponding equations used for evaluating the hydrological performance of the GR2M model.
Statistical Metric                                 Unit   Equation                                                          Optimal Value
Kling-Gupta efficiency (KGE)                       -      KGE = 1 - sqrt((r - 1)² + (α - 1)² + (β - 1)²)                    1
Square-root-transformed Nash-Sutcliffe (NSEsqrt)   -      NSEsqrt = 1 - Σ(√Qsim - √Qobs)² / Σ(√Qobs - mean(√Qobs))²         1
Water Balance Error (WBE)                          -      WBE = (ΣQsim - ΣQobs) / ΣQobs                                     0
Note: r, linear correlation between simulated and observed flows; α, ratio of their standard deviations; β, ratio of their means; Qsim, Qobs, simulated and observed discharges.

Model calibration and validation were performed in gauged areas, and a stepwise calibration strategy was used in this study (sub-region approach, Figure 4). In the first step, the parameters of the headwater sub-basins are calibrated and then used downstream, while in the last step only the parameters of the remaining sub-basins are calibrated. The sub-basins of the Pacific and Titicaca slopes were calibrated in a single step, while the Atlantic slope required seven steps.
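The three evaluation metrics can be written compactly. The sketch below uses the common formulations of KGE, square-root-transformed NSE, and a relative volume error; the exact variants used in the paper follow [61-63], so treat these as standard-form approximations:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency (optimal value 1)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]        # linear correlation
    alpha = np.std(sim) / np.std(obs)      # variability ratio
    beta = np.mean(sim) / np.mean(obs)     # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def nse_sqrt(sim, obs):
    """Nash-Sutcliffe efficiency on square-root-transformed flows."""
    s = np.sqrt(np.asarray(sim, float))
    o = np.sqrt(np.asarray(obs, float))
    return 1.0 - np.sum((s - o) ** 2) / np.sum((o - o.mean()) ** 2)

def wbe(sim, obs):
    """Water balance error; 0 means simulated volume matches observed."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float((sim.sum() - obs.sum()) / obs.sum())
```

A perfect simulation scores KGE = 1, NSEsqrt = 1, and WBE = 0, matching the optimal values in Table 3.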
Finally, after model calibration and validation for each sub-region, the X1 and X2 values are grouped by calibration region at the national level.

Discharge Simulation at a National Level
The median of X1 and X2 is calculated for each calibration region in order to simulate monthly discharges in ungauged areas (regional approach). To assess model performance when using the medians of X1 and X2 as representative parameters for each calibration region, the GR2M model was run again in gauged areas and new statistical metrics were calculated. Moreover, changes in model performance before and after applying the median parameters by calibration region were evaluated. Finally, a new model run including gauged and ungauged areas was executed to simulate monthly discharges in 3594 river streams.

Sensitivity Analysis and Calibration Regions
The relative sensitivities derived from the FAST analysis using the RR and RV indices for each of the 3594 sub-basins throughout the study area are shown in Figure 5a. Because the GR2M model has only two parameters, the patterns of relative sensitivity to X1 and X2 are inverses of each other. This study assesses the hydrological response in terms of the magnitude (RR) and variability (RV) of the rainfall-runoff relationship. The RR and RV indices are more sensitive to X2 over much of the study area. On the Pacific slope in particular, RR shows high sensitivity to X2 in coastal sub-basins, declining slightly towards the western flank of the Andes. On the Titicaca and Atlantic slopes, moderate to high sensitivities to X2 are observed, except in the central part of the Amazon extending southeast and in part of the North Pacific, where sensitivities to X1 are high. For RV, the spatial patterns are similar to those of RR but with greater sensitivity to X2 over most of the Atlantic and Titicaca slopes, while on the Pacific slope sensitivity to X1 increases on the western flank of the Andes and in the North Pacific sub-basins.
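Transferring parameters to ungauged areas via the regional median is a one-line aggregation once the calibrated parameter sets are tabulated per region. A small pandas sketch with made-up parameter values (region labels follow the paper's lettering, but the numbers are hypothetical):

```python
import pandas as pd

# Hypothetical calibrated (X1, X2) sets from sub-regions of two regions.
params = pd.DataFrame({
    "region": ["F", "F", "L", "L", "L"],
    "X1": [320.0, 350.0, 180.0, 210.0, 200.0],
    "X2": [0.85, 0.90, 1.05, 1.10, 1.00],
})

# Regional approach: the median parameter set represents each region
# and is applied to its ungauged sub-basins.
regional = params.groupby("region")[["X1", "X2"]].median()
print(regional)
```

Each ungauged sub-basin is then simulated with the median pair of its enclosing calibration region.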
The results of the clustering analysis incorporating the relative sensitivity patterns of RR and RV at the national level are shown in Figure 5b. The lack of hydrometric information in some of the initial regions required merging contiguous regions, so 14 regions were identified throughout the study area. On the Pacific coast, three regions were identified (F, L, and N); in the Andes-Amazon transition, five regions were obtained (C, G, I, J, and M); and six regions (A, B, D, E, H, and K) were obtained in the Amazon lowlands. It was ensured that each calibration region contained at least one hydrometric station (as is the case for regions B, E, and K), while some regions contained more than five stations (M and C) (Table 4). Finally, overlapping the boundaries of gauged and ungauged sub-basins (Figure 4) with the 14 calibration regions (Figure 5b) generated 96 sub-regions (Table 4), corresponding to 96 different sets of GR2M parameters. Of these, 84 parameter sets were estimated by model calibration (sub-region approach), while the remaining 12 were inferred from the median values of each calibration region (regional approach). Figure 6 shows the spatial distribution of the three metrics selected to assess the performance of the monthly water balance model at the national level during the calibration, validation, and entire periods. In terms of KGE and NSEsqrt, the model performs well during the calibration period, with values above 0.75 and 0.65, respectively, at stations on the Pacific slope and in the Andes-Amazon transition; however, low values of KGE and NSEsqrt (<0.50) predominate at stations in the Amazon lowlands. Performance remains similar during validation, with a slight decrease in the KGE and NSEsqrt metrics at stations in the Andes-Amazon transition.
Regarding the WBE, balance errors close to zero are observed during the calibration period at stations with high KGE and NSEsqrt values, except for positive balance errors no greater than 0.25 at stations in the Amazon lowlands. Negative balance errors increase in the validation period, down to −0.38 on the Pacific coast and in the Andes-Amazon transition.

Figure 6. (a-c) Statistical metrics for evaluating the GR2M model performance at a national level during the calibration, validation, and total periods. For the KGE and NSEsqrt metrics, cold colors represent good model performance while warm colors represent inadequate model performance. For WBE, blue colors are associated with underestimation of the total volume of surface runoff and red colors indicate overestimation.

Model Performance Assessment
When the total period is evaluated, good model performance in terms of KGE (KGE ≥ 0.75) is maintained for 71% of the stations. In the same period, NSEsqrt values higher than 0.65 at 70% of the stations demonstrate a good representation of the sub-basins' general flows. In terms of the WBE, negative values no lower than −0.20 are evident at most of the stations with high KGE and NSEsqrt values, and positive values no higher than 0.23 are evident for the Amazon lowlands. This behavior indicates that stations with a good fit in terms of KGE and NSEsqrt tend to slightly overestimate the total runoff, while in the Amazon lowlands it tends to be underestimated. Figure 7 shows the observed and simulated monthly and annual hydrographs and their respective seasonal variation curves for two hydrometric stations on the Pacific slope (ETI and SOC), one on the Titicaca slope (HNE), and three on the Atlantic slope (EKN, TOC, and PUC), corresponding to stations with extensive streamflow records from January 1981 to March 2020 (Table 1).
In all cases, the model manages to represent the seasonal and interannual variability of the observations, from small basins with an average annual flow of 65 m³/s (Cañete basin) up to basins with 9000 m³/s (Ucayali basin). The simulated series fit the observations very well at most of the stations evaluated, except for SOC, where wet-season streamflows (December-March) were slightly overestimated. At an annual scale, the model can represent very dry years (e.g., 1991 and 1992) to very wet years (e.g., 1997) in sub-basins of the three Peruvian slopes. The seasonal variation curves adequately represent the peak-flow month, except at the PUC station, where there is a one-month lag (Figure 7d). This adequate seasonal and interannual representation is repeated at the remaining hydrometric stations (not shown), except for those located in the Amazon lowlands, where the monthly model performs poorly in terms of NSEsqrt (Figure 6b). The variation of the calibrated GR2M model parameters in the gauged areas is shown in the boxplots of Figure 8a,b. [31] reports that X1 can take values from 0.1 to 2000 mm, while X2 can vary between 0 and 2. There are slight variations of X1 and X2 in the calibration regions located in the south of the country (J, K, L, M, and N; see Figure 5b) and in the north-central coast region (F; see Figure 5b). There are also slight variations of X1 but high variation of X2 in the calibration regions located in the northeast of the country (B, D, and E; see Figure 5b), predominantly in the Amazon lowlands. Additionally, we identified high variations of X1 values and low variations of X2 in regions of the north-central Andes (A, C, H, and I; see Figure 5b). Only in calibration region G are high variations present in both X1 and X2 values (Figure 8a,b). It is important to note that the regions with high X2 variations in Figure 8a,b coincide with stations of low performance in terms of KGE and NSEsqrt (Figure 6a,b).
In contrast, the regions with smaller variations in both X1 and X2 parameter values correspond to stations with a good model fit. Figure 8c-e shows the variation in model performance before (sub-region approach, Sub) and after (regional approach, Reg) applying the median of the GR2M parameters for each calibration region. In terms of KGE and NSEsqrt, model performance using the regional approach (Figure 8c,d) declines mainly in sub-basins of regions A-D (see Figure 5b) and is relatively stable (except for region M) in the south-central regions. Since ungauged areas are located predominantly in regions F, L, and N, the regional parameter approach is suitable for estimating monthly discharges in these sub-basins.

Product of Simulated Monthly Discharges at a National Level
The regionalization based on sensitivity analysis of the GR2M parameters at the national level, using the meteorological (P and PET) PISCO dataset, allows continuous monthly discharges to be simulated in 3594 river streams (including ungauged areas) from January 1981 to March 2020. This new product of simulated monthly discharges, named PISCO_HyM_GR2M, is available at https://doi.org/10.6084/m9.figshare.14382758 (accessed on 7 April 2021) and is the new hydrological sub-product of the PISCO dataset. It will also contribute to the understanding of the water balance in data-scarce basins. For instance, the PISCO_HyM_GR2M product is currently used for drought monitoring at the National Service of Meteorology and Hydrology of Peru (available online: https://www.senamhi.gob.pe/?p=monitoreo-pronostico-sequias, accessed on 14 January 2021). The qualitative classification of the PISCO_HyM_GR2M simulations, based on KGE [63] and NSEsqrt [64] performance categories for gauged areas, is shown in Figure 9a,b, respectively.
Both metrics agree that simulated monthly discharges in the central and southern parts of the study area are well represented, while those for the northeast (Amazon lowlands) should be interpreted with caution. The latter varies depending on the hydrograph assessment approach, because the KGE metric emphasizes high flows [65], while NSEsqrt reduces this effect and emphasizes the general representation of streamflows [64].

Sensitivity Analysis and Calibration Regions
In this paper, the relative sensitivities of two conceptual parameters are used as the main predictors to define calibration regions at the national level, similar to [21], instead of traditional climatic and physiographic characteristics [15]. Despite the differences between the selected objective functions (RR and RV, Table 2), the spatial patterns of the relative sensitivities of the GR2M parameters are very similar in both cases (Figure 5a), owing to the parsimonious model structure [31,34]. X2 is the most sensitive parameter for the RR and RV indices in a great number of sub-basins because of its corrective role in runoff generation [18], rather than X1, which controls soil moisture in the production store. In terms of GR2M outputs, we found that a slight variation of X2 (which controls the routing store) can significantly alter the rainfall-runoff transformation in many basins nationwide, through changes in the magnitude and variability of the simulated runoff. The results showed the main differences between the RR and RV sensitivity patterns on the Pacific slope (see Figure 5a): X2, which controls water exchange and groundwater fluxes as mentioned in [65], is more relevant for RR in coastal sub-basins, where GR2M runoff is baseflow-dominated [66], than in mountain-range areas, where X1 has greater relative sensitivity for RV.
However, in the central-northern Amazonian region (Atlantic slope), abrupt changes in X1-X2 sensitivities might instead be related to model input biases and structural uncertainties that propagate to the model parameters and outputs. Because the regionalization approach is based on the sensitivity analysis of a parsimonious conceptual model (see Section 3.3), the delimited calibration regions (Figure 5b) represent areas with a similar level of parameter uncertainty [65] and a similar hydrological model response [65], rather than capturing geomorphological and climatic similarities such as those presented in [66]. Thus, the calibration regions generated in this work are only valid for the GR2M water balance model using the PISCO product as meteorological input. In addition to the relative sensitivities calculated in this work, the national hydrometric network plays an important role in the final delimitation of the calibration regions and sub-regions (Figure 5b), because the existence of at least one station is a determining factor. In this sense, unlike the studies by [67], where parameter uncertainty bounds are identified based on residual analysis of hydrometric stations by region, the low density of stations (Figure 4), mainly on the northern Atlantic slope, could be altering the natural grouping of sub-basins and thus reducing the predictive capacity of the regional GR2M parameters (see Section 4.2) derived from the sensitivity analysis. Future studies will assess regional parameter uncertainty in ungauged areas and its impact on discharge estimation.

Model Simulations at a National Level
The unsatisfactory results in the northern Amazonian region (Figure 9) reflect two issues. First, there is greater uncertainty in the spatial rainfall distribution in the Marañón [49], Ucayali, and Huallaga [49,68] basins, and the PISCO P sub-product is biased, probably because of the lack of adequate rainfall estimates in equatorial regions [11].
This lower model performance is similar to that obtained in [67,68] using different hydrological models at a daily time step and different sets of satellite precipitation products. Thus, rainfall uncertainties propagate to the model outputs and reduce the model's predictive capacity [26]. Additionally, the PET climatology used in this paper for operational purposes might not reflect actual evapotranspiration in the Amazon plain. Future work will incorporate a robust assessment of evapotranspiration in hydrological modeling under a data-scarce scenario and its impacts on the water balance. Secondly, the floodplain plays a key role in flow routing, with a large amount of water stored during floods [69]. For instance, in the Ucayali basin, the flood peak is delayed by two months between the LAG and REQ stations [70,71]. This behavior can alter basin storage and delay the month of peak flow in basins with larger drainage areas, such as the Amazon plain, and GR2M routing might not represent this characteristic, as seen at the PUC station in Figure 7d. It is also important to consider that GR2M is a model with limitations, owing to its conceptualization of hydrological processes in two reservoirs (production and routing) within a lumped modeling approach [33]. Despite its outstanding performance throughout the national territory (Figure 6), it may not be able to adequately represent runoff in basins with large drainage areas (>200,000 km²), such as the Amazon plain. Future work will incorporate routing models such as the Routing Application for Parallel computatIon of Discharge (RAPID) [72] to improve flow routing throughout the national drainage network, especially on the Atlantic slope.

Conclusions
This study evaluated the hydrological performance of a monthly water balance model in 3594 sub-basins and river streams in Peru. Parameter calibration regions were defined based on the sensitivity analysis of two hydroclimatic indices.
Finally, the monthly simulated streamflow product named PISCO_HyM_GR2M, spanning January 1981 to March 2020, was developed. The main conclusions are summarized below:
(a) The GR2M model performed well in sub-basins of the Pacific slope and the Andes-Amazon transition (parts of the Titicaca and Atlantic slopes). The model adequately represents the seasonality and interannual variability of the streamflows, except in the Amazon lowlands, where only high flows are well represented.
(b) Through the monthly meteorological PISCO sub-products, it is possible to adequately simulate the runoff volume over most of Peru. However, the uncertainties associated with these sub-products are more significant towards the north of the country, where there are not enough meteorological stations, so this error propagates to the hydrological model outputs for the Amazon lowlands.
(c) The proposed methodology for defining calibration regions based on the spatial patterns of the relative sensitivities of two hydroclimatic indices proved to be an appropriate technique for calibrating and validating the GR2M model and estimating monthly discharges in ungauged sub-basins.
The results presented in this work also demonstrate the enormous potential of the PISCO_HyM_GR2M product for understanding the dynamics of surface water resources in Peru. Future versions of this product will include an extensive analysis of different routing methods and an uncertainty analysis of the discharges.
Progression of type 1 diabetes from latency to symptomatic disease is predicted by distinct autoimmune trajectories
Development of islet autoimmunity precedes the onset of type 1 diabetes in children; however, the presence of autoantibodies does not necessarily lead to manifest disease, and the onset of clinical symptoms is hard to predict. Here we show, by longitudinal sampling of islet autoantibodies (IAb) to insulin, glutamic acid decarboxylase, and islet antigen-2, that disease progression follows distinct trajectories. Of the combined Type 1 Data Intelligence cohort of 24662 participants, 2172 individuals fulfill the criteria of two or more follow-up visits and IAb positivity at least once, with 652 progressing to type 1 diabetes during the 15-year course of the study. Our Continuous-Time Hidden Markov Models, developed to discover and visualize latent states based on the collected data and clinical characteristics of the patients, show that the health state of participants progresses through 11 distinct latent states along three trajectories (TR1, TR2 and TR3), with associated 5-year cumulative diabetes-free survival of 40% (95% confidence interval [CI], 35% to 47%), 62% (95% CI, 57% to 67%), and 88% (95% CI, 85% to 91%), respectively (p < 0.0001). Age, sex, and HLA-DR status further refine the progression rates within trajectories, enabling clinically useful prediction of disease onset. Recent advances in type 1 diabetes research have increased appreciation of heterogeneous patterns of islet autoimmunity before a diagnosis of clinical diabetes. Previous research points to two distinct pathways from the appearance of islet autoimmunity to clinical diabetes, i.e., those associated with initial development of islet autoantibodies (IAb) to either insulin (IAA) or glutamic acid decarboxylase (GADA) 1.
There is evidence that these pathways may be triggered by pathogenic exposures, with differential associations for intestinal viruses or prenatal exposures [2][3][4]. Further, there are observed differences in both genetic associations and expansion from initial IAb to multiple autoantibodies and risk of progression to diabetes based on the pattern of autoantibody acquisition 3,[5][6][7]. While the number of observed IAb predicts risk for progression to type 1 diabetes, the temporal progression of these biomarkers displays heterogeneous patterns and could further stratify risk [7][8][9][10][11]. Previous observations underscore the need to better define individual trajectories from islet autoimmunity to type 1 diabetes 12. Better identification and understanding of the heterogeneity of the disease may have substantial implications for elucidation of its etiology. Further, the ability to predict diabetes risk, progression rate, and intervention response may enable personalized therapeutic approaches. To better understand these patterns, we investigated the presence or absence of three islet autoantibodies, GADA, IAA, and islet antigen-2 (IA-2A), prior to clinical diabetes in a large cohort of data combined from five prospective studies in four countries. Using an unsupervised machine learning approach, we generated quantitative descriptions of underlying progression patterns from islet autoimmunity to diagnosis of type 1 diabetes and utilized novel visualization strategies to gain new insights into differences between individuals in these trajectories.

Results
Three trajectories. Continuous-Time Hidden Markov Models (CT-HMMs) were learned as disease progression models (DPMs) based on the presence or absence of IAb in the longitudinal T1DI study cohort. Using machine learning methods, a model containing 11 latent states was discovered that best fit the observed data; it was subsequently applied to all IAb-positive participants to draw the insights presented here.
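In a continuous-time HMM, latent-state dynamics are governed by an intensity (rate) matrix Q, and the probability of moving between states over an elapsed time t is the matrix exponential P(t) = exp(Qt). The sketch below illustrates only this mechanic with a hypothetical 3-state chain ending in an absorbing "diagnosed" state; it is not the fitted 11-state T1DI model:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical intensity matrix: off-diagonal entries are transition
# rates per year; each row sums to zero. State 2 is absorbing
# (e.g. clinical diagnosis), so its row is all zeros.
Q = np.array([
    [-0.30, 0.25, 0.05],
    [0.00, -0.40, 0.40],
    [0.00, 0.00, 0.00],
])

def transition_matrix(Q, t):
    """P(t) = expm(Q t): state-to-state probabilities after t years."""
    return expm(Q * t)

# Probabilities of each state transition over a 2-year gap between visits.
P2 = transition_matrix(Q, 2.0)
```

Irregularly spaced follow-up visits are handled naturally: each observation gap of length t contributes a factor P(t) to the likelihood that the fitting procedure maximizes.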
For the 643 diagnosed (D) participants, the discovered states formed three trajectories, TR1, TR2, and TR3, each characterized by a distinct sequence of latent states (Fig. 1a), which were further explored using interactive data visualization and statistical analysis. In this figure, each latent state is described by a set of probabilities for the presence of each IAb (Fig. 1a). Longitudinal observation sequences of participants with IAb positivity were labeled using the model, and these participants were then divided into those who developed type 1 diabetes (Diagnosed/D) during the study period vs. those who did not or were lost to follow-up (Undiagnosed/UD). Statistical analyses correlated these latent states and resulting trajectories with other study variables to draw insights about each of these two groups from all participants with IAb positivity. For those diagnosed (Fig. 2a), TR3 shows a later onset of islet autoimmunity than the other trajectories, with a median TR3-1 entry age of 3.3 years, compared to TR1-1 (2.5 years) and TR2-1 (1.3 years). The diagnosed participants in TR2 stay in the first three states (TR2-0, TR2-1, TR2-2) briefly, as illustrated by the widths of the respective state nodes in Fig. 2a. The diagnosed participants in TR2 were diagnosed at states ranging from TR2-1 to TR2-4, among which the numbers were fairly evenly distributed, while those in TR1 had a higher diagnosis rate in state TR1-1 and those in TR3 were disproportionately diagnosed in the final state, TR3-2, i.e., after gaining IA-2A as an additional autoantibody.

Islet autoantibody pattern by age and trajectory. As specific IAb patterns could exist in more than one trajectory, we examined the composition of trajectories amongst the seven possible IAb patterns across ages 2-7 years (Fig. 3a-f). We showed that for all but one pattern, the majority of individuals with that pattern were in a single dominant trajectory.
For the single-antibody-positive patterns, proportions higher than 60% across these ages consisted of: TR3 for GADA+ only (Fig. 3a), TR2 for IAA+ only (Fig. 3b), and TR1 for IA-2A+ only (Fig. 3c). Among patterns with two IAb positive, both the GADA+/IAA+ and the IAA+/IA-2A+ patterns showed TR2 as a dominant trajectory (Fig. 3d, f, respectively).

Mean age at confirmed seroconversion and clinical onset. The three trajectories showed significant differences in mean age at seroconversion and diagnosis (Table 1 and Supplementary Fig. 5). Among the diagnosed (n = 546), those in TR3 seroconverted significantly later (F(2, 543) = 27.19, p < 0.0001) than those in TR1 (p = 0.001), who in turn seroconverted later than those in TR2 (p = 0.001). Among the undiagnosed (n = 840), those in TR3 seroconverted significantly later (p < 0.0001) than those in TR1; however, no significant differences in seroconversion age were seen between TR1 and TR2 or between TR2 and TR3.

Sex and HLA-DR category. Trajectory distributions by sex (Table 1) differed marginally across trajectories between diagnosed and undiagnosed (X²(5, n = 2145) = 10.59, p = 0.0602). The pairwise comparison shows that diagnosed participants in TR3 had a higher ratio of females to males in comparison to the diagnosed participants in TR1 (p = 0.0105) and the diagnosed participants in TR2 (p = 0.0161). No other pairs of trajectory/diagnosis groups showed statistically significant differences in the ratio of female to male participants. Finally, HLA-DR risk groups differed across trajectories between diagnosed and undiagnosed (X²(15, n = 2145) = 161.53, p < 0.0001) (Table 1). The Chi-square test shows significant differences in the proportions of the four HLA-DR risk groups among all trajectory and diagnosis groups. All nine pairwise comparisons between the undiagnosed and the diagnosed in the three trajectories showed significant differences in the proportions of HLA-DR risk groups (all p < 0.0001).
The undiagnosed in TR1 were different from the undiagnosed in TR2 (p = 0.0004) and TR3 (p = 0.0015). The survival curves stratified on sex are provided in Supplementary Fig. 3. Females in TR2 showed faster progression and lower rates of type 1 diabetes-free survival than males (Z²(1) = 5.7, p = 0.02). There was no significant difference between sexes in TR1. The survival curves stratified on HLA-DR status are provided in Supplementary Fig. 4. Survival analysis stratified on HLA-DR status showed no difference in progression between individuals with DR3/4 vs. DR4/X in TR1. In both TR2 and TR3, individuals with DR3/4 progressed faster than those with DR4/X (p = 0.003 and p = 0.0041, respectively). To examine the role of age in progression rates for each trajectory, we separated participants by the median age of entry into the first IAb-positive state (3.75 years). Of note, for TR1-1, there was no difference in survival rates between participants entering the multiple-islet-autoantibody states before or after the overall median age of entry into the first IAb-positive state (Z²(1) = 1.6, p = 0.2) (Fig. 5a). In contrast, participants who entered the first islet-autoantibody-positive states in TR2 and TR3 did show differences in survival rates by age. Participants entering the IAA-positive state (TR2-1) earlier than 3.75 years of age progressed faster to type 1 diabetes than those entering later (Z²(1) = 19.1, p < 0.0001) (Fig. 5b). Similarly, participants entering the GADA-positive state (TR3-1) before 3.75 years of age progressed faster to type 1 diabetes than those entering later (Z²(1) = 6, p = 0.01) (Fig. 5c). Three distinct trajectory states, namely TR1-1, TR2-4, and TR3-2, describe a pattern of high probabilities of GADA and IA-2A positivity; despite the similar IAb patterns in each of these states, survival curves showed significant differences in diabetes-free survival rates from those states (Z²(2) = 34.1, p < 0.0001).
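The diabetes-free survival comparisons above rest on Kaplan-Meier estimates from state entry. A self-contained sketch of the estimator follows (the data are toy values; the study's curves come from the full cohort):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier diabetes-free survival estimate.

    times  : follow-up time per participant (e.g. years from state entry)
    events : 1 if diagnosed at that time, 0 if censored
    Returns a list of (time, survival) points at each event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)     # diagnoses at t
        n_with_t = sum(1 for tt, _ in data if tt == t)   # leaving risk set
        if deaths > 0:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= n_with_t
        i += n_with_t
    return curve

# Four toy participants: diagnosed at years 1 and 3, censored at 2 and 4.
curve = kaplan_meier([1.0, 2.0, 3.0, 4.0], [1, 0, 1, 0])
print(curve)  # [(1.0, 0.75), (3.0, 0.375)]
```

Comparing such curves between groups (e.g. by sex or HLA-DR status within a trajectory) is what the log-rank statistics quoted above formalize.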
Post-hoc analysis showed significant differences between TR3-2 and TR1-1 (p < 0.0001) and between TR3-2 and TR2-4 (p = 0.007) (Fig. 5d). TR3-2 showed significantly slower and less frequent progression to type 1 diabetes than TR1-1 and TR2-4. Mean entry ages for the GADA and IA-2A positive states in the three trajectories were significantly different (F(2, 516) = 41.19, p < 0.0001). The Tukey HSD test confirmed the mean entry age for TR1-1 as significantly younger than for TR2-4 or TR3-2. There was no difference in entry age between TR2-4 and TR3-2.

Discussion

In this study, we discovered 11 latent states of progression to type 1 diabetes onset using data-driven modeling based on longitudinal IAb data. The latent states described three distinct trajectories of disease progression, using an unbiased, probabilistic method, and characterized the autoimmune pathways to development of clinical type 1 diabetes. Our descriptions of the dynamic nature of the three trajectories corroborate and expand recent observations from multiple studies 1,3,5-7,13. The descriptions illustrate the heterogeneous journey of participants, defined not just by the first IAb observed but rather by the transitions between probabilistic states of autoantibodies as they age. Our findings suggest that age at seroconversion and subsequent progression to type 1 diabetes onset differ significantly among the three trajectories. Despite following similar trajectories of IAb patterns, the diagnosed and undiagnosed participants show differences by age, sex, and HLA-DR, at least within the 15 years of follow-up studied. Our longitudinal analysis underscores the necessity of follow-up, as a cross-sectional description of islet autoantibody positivity may not be sufficient to understand an individual's journey towards a diagnosis of type 1 diabetes. The three trajectories found in the present investigation show distinctive progression patterns.
The observation that females progress faster than males in TR2 may be related to a more aggressive pathogenesis with age, as females tend to be diagnosed with type 1 diabetes earlier than males 14. Through data-driven modeling, the present study discovered underlying subtypes based on three trajectories that could be important in selecting research participants for clinical intervention trials. The visual analytic methodology used in this study can be a powerful tool to explore trajectories and to interact with individual-level data, including factors that may vary by location, which could advance clinical research and practice. The study provides important implications for screening in routine clinical practice, a possibility that is being explored in population screening studies 15,16. Clinicians may use the IAb pattern and age to estimate the trajectory and therefore the risk of developing type 1 diabetes. In other words, our findings show the proportion of participants belonging to a specific trajectory given their age and IAb positivity. Once a likely trajectory is identified, one could examine the preceding and upcoming states for the trajectory and estimate the type 1 diabetes-free survival of the participants in the trajectory, given their age and IAb positivity. Another strength of this study is the large number of participants followed from an early age until the appearance of one or more islet autoantibodies. The harmonized data in this international effort not only made it possible to identify and visualize three distinct trajectories but also enabled researchers to examine the impact of different contributing factors specific to the environments of the participants. This approach can also be a valuable addition to available recruitment tools to identify research participants for secondary prevention trials in a variety of settings.
Additionally, this study demonstrates the advantages of using interactive visualizations to characterize trajectories and explore data from individuals. By visually representing both the granularity of individual data and the overall patterns of change over time, this method could be combined with other variables to explore new relationships between observational data and identified trajectories. A novel and hitherto uncharted possibility is the ability of visualization not only to delineate groups but also to distinctly follow individuals within trajectories. In clinical applications, this tool may thus have the potential to allow better counseling for individuals and families by providing an improved understanding of likely progression. Intervention studies have shown differences in response to disease-modifying treatments based on the stage of the disease 5,17,18, as well as heterogeneity in response amongst participants 19. Machine learning models of disease progression combined with interactive visualization tools reveal novel trajectories and enable the requisite increase in granularity needed to support precision medicine approaches to prevention and modulation of disease progression. Future work will include the development of a more directed tool for clinical practice, allowing assessment of an individual patient's progression pattern in the context of population pathways. Future work could also assess the impact of varying genetic backgrounds. By using such information, we can improve our understanding of varying clinical pathways, better utilize resources, and recruit participants following similar disease pathways for clinical interventions.

age 25 years based on high-risk HLA genotype or history of first-degree relative with type 1 diabetes.
Presence or absence of three islet autoantibodies, GADA, IAA, and IA-2A, was combined across studies, and data were included up to a follow-up of 15 years or until the diagnosis of type 1 diabetes, whichever came first, per the originating study protocols. Type 1 diabetes was diagnosed according to American Diabetes Association criteria 25; seroconversion was defined by two consecutive visits with at least one IAb persistently positive at both visits, and seroconversion age as the age at the first of the two visits. In addition to IAb measurements and the outcome of diagnosis, the T1DI cohort contains anthropometric, metabolic, diet, and environmental measurements.

Modeling analysis. Using a probabilistic approach 26 incorporating a Continuous-Time Hidden Markov Model (CT-HMM) 27, we trained disease progression models (DPM) on the presence or absence of IAbs and the age of participants. The DPMs discovered latent states from longitudinal measurements of the three IAbs at each participant's visit and the age of the participant at the visit (see Supplementary Table 3 and Supplementary Fig. 6 for examples of the observational data) 28. Further analyses correlated latent states and resulting trajectories with other study variables. The DPMs were generated in an unsupervised way, meaning type 1 diabetes diagnoses were not used to inform model parameters 29. To produce robust results, model parameters (for the CT-HMM) were learned in 1900 repeated experiments in total: 100 sub-samples (bootstrapping) × 19 different latent state models, exploring possible numbers of latent states ranging from 2 to 20. For each latent state model experiment, we randomly split the data in the ratio of 70:30 (training : held-out validation test) for model training and model validation 29. In the random split, each participant's entire visit history could belong either to the training set or the test set, but not both, thus creating random sub-samples of the observed data.
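The participant-level split described above can be sketched in a few lines; the visit data below are purely illustrative, not from the study:

```python
import numpy as np

# Sketch of a 70:30 participant-level split: every visit of a participant
# goes to either the training set or the held-out set, never both.
rng = np.random.default_rng(42)
visits = [(pid, age) for pid in range(10) for age in range(pid % 4 + 2)]

ids = np.unique([pid for pid, _ in visits])
rng.shuffle(ids)
n_train = int(round(0.7 * len(ids)))
train_ids, test_ids = set(ids[:n_train]), set(ids[n_train:])

train = [v for v in visits if v[0] in train_ids]
test = [v for v in visits if v[0] in test_ids]

# No participant appears on both sides of the split
assert not ({p for p, _ in train} & {p for p, _ in test})
print(len(train_ids), len(test_ids))
```

Splitting by participant rather than by visit is what makes the bootstrap sub-samples independent at the level of individuals.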
In each experiment, the model parameters for each of the 19 possible models, with a number of latent states ranging from 2 to 20, were discovered from the time-stamped (age at visit) observational data (training set) consisting of the IAbs. The CT-HMM parameters (transition and emission probabilities) were learned by maximum likelihood estimation using the Expectation-Maximization (EM) algorithm, iterated until convergence; the number of iterations was empirically determined based on the computed likelihood in each iteration. At convergence, the trained model assigned a latent state number to each participant visit (indexed by age) in the training set. Similarly, the participant visits in the held-out test set were assigned a model (latent) state number, or "labeled", in these experiments 26. We also calculated the (predictive) log-likelihood of the observed data given the model using the held-out set. The predictive log-likelihood was used to select the best model for the analysis in the manuscript. To learn a robust model for disease progression, we only included participants eventually diagnosed with type 1 diabetes (within 15 years of follow-up) who had three or more visits during the follow-up period. Additionally, the model was learned using data from only three T1DI studies (DAISY, DIPP, DiPiS) (n = 559), for which data were available at the time of model development. Later, independent model validation was done using participants from two other studies (BABYDIAB, DEW-IT) (n = 150). Since we performed 1900 experiments, each generating a possible disease progression model (of 2 to 20 latent states), we needed to select a model based on the best fit among the latent state models explored. To find the best fit, we computed the Bayesian Information Criterion (BIC) score 30. BIC penalizes model overfit (i.e., the number of model parameters to learn given the number of latent states and the number of observations required for training).
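The BIC trade-off described above can be sketched as follows; the log-likelihoods and the parameter count are hypothetical stand-ins (the paper's CT-HMM parameterisation may differ), chosen only to show how BIC balances fit against model size:

```python
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: lower is better."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

def n_hmm_params(n_states, n_markers=3):
    """Simplified parameter count: off-diagonal transition rates plus one
    binary emission probability per autoantibody marker per state."""
    return n_states * (n_states - 1) + n_states * n_markers

# Hypothetical held-out log-likelihoods for a few candidate state counts
loglik = {5: -4400.0, 8: -4050.0, 11: -3600.0, 14: -3580.0, 17: -3570.0}
n_obs = 5000  # total visit observations (also hypothetical)

scores = {k: bic(ll, n_hmm_params(k), n_obs) for k, ll in loglik.items()}
best = min(scores, key=scores.get)
print(best)  # here the 11-state model wins: larger models gain too little likelihood
```

The quadratic growth of the transition-parameter count is what penalises the larger state models once the likelihood gains flatten out.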
We selected the most probable model from a set of competing models having minimal BIC scores (latent state models with 11, 12, and 13 states) and the highest value of the predictive log-likelihood (calculated on a held-out test set during the learning process). The final model contains 11 latent states representing the observed islet autoimmunity development of diagnosed participants from the T1DI cohort. It was used to draw insights from all participants with IAb positivity in the analysis cohort, i.e., irrespective of their diagnosis. Longitudinal observations of participants were labeled using the model for further analysis. Specifically, the 11-state model was used to label each participant's visit with one of the 11 states, using an index ranging from 0 to 10. The results show that most participants started and ended their observations within one of three trajectories: Trajectory 1 consists of three states (0, 1, 2), starting from state "0"; Trajectory 2 consists of five states (3, 4, 5, 6, 7), starting from state "3"; Trajectory 3 consists of three states (8, 9, 10), starting from state "8", as Supplementary Fig. 7 illustrates. The analysis of state characteristics revealed that the starting states 0 (TR1-0), 3 (TR2-0), and 8 (TR3-0) were characterized by low probabilities of antibody positivity (Figs. 1 and 2 in the manuscript). As the manuscript describes, each trajectory is characterized by the first state with autoantibody positivity: multiple islet autoantibodies (Trajectory 1), IAA (Trajectory 2), or GADA (Trajectory 3). To clearly describe the distinct patterns of the three trajectories in the manuscript, we renamed the 11 states to the {Trajectory Name-Step within Each Trajectory} format, e.g., TR1-1. In this way, readers can recognize from the name which trajectory and which step a participant's visit belongs to. These participants' data were then divided into those who developed type 1 diabetes (Diagnosed/D) during the study period vs.
those who did not or were lost to follow-up (Undiagnosed/UD).

Analysis cohort. We studied 2172 individuals from the T1DI cohort with one or more IAb measurements at or before the age of 2.5 years and at least one positive IAb measurement during participation, identified as "diagnosed" (n = 652) or "undiagnosed" (n = 1520) based on the diagnosis status at their last observation. The median age at participants' last observation was 7.62 and 12.87 years for the diagnosed and undiagnosed, respectively (see Supplementary Fig. 2 in Supplementary Information for further detail). On visualization, the T1DI-DPM discovered three trajectories, which uniquely fit all but 27 participants (1.2%; nine diagnosed, 18 undiagnosed), who could possibly fit into two different trajectories. After eliminating these 27 individuals, our analytic cohort included 2145 participants (98.8%; 643 diagnosed, 1502 undiagnosed). A flow chart of the cohort selection process and criteria is in Supplementary Fig. 1 in Supplementary Information. The data that support the findings of this study are available on request from the corresponding author B.K. The data are not publicly available due to privacy concerns.

Analysis methods. We used an interactive visualization method called DPVis 29 to discover and characterize trajectories in the IAb positive participants by enabling visual identification and analysis of patterns of IAb trajectories. Using the visually discovered trajectories as boundaries for groups, we performed a one-way, two-sided ANOVA followed by Tukey HSD to test for statistical differences in age at seroconversion and age at diabetes onset. A two-sided Chi-square test was used to examine the relationship between trajectories and participant characteristics, specifically HLA-DR status and sex. We performed Kaplan-Meier survival analysis and tested differences in type 1 diabetes-free survival rates between trajectories using the two-sided log-rank test.
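A minimal Kaplan-Meier estimator, the survival method used above, can be sketched in numpy; the diagnosis times and censoring flags below are illustrative, not from the study:

```python
import numpy as np

# Minimal Kaplan-Meier estimator: `event` is 1 if diagnosis occurred at
# `time`, 0 if the participant was censored (undiagnosed or lost to follow-up).
def kaplan_meier(time, event):
    time, event = np.asarray(time, float), np.asarray(event, int)
    times, surv, s = [], [], 1.0
    for t in np.unique(time):
        d = int(event[time == t].sum())   # diagnoses at time t
        n = int((time >= t).sum())        # participants still at risk at t
        if d > 0:
            s *= 1.0 - d / n
            times.append(t)
            surv.append(s)
    return np.array(times), np.array(surv)

t = [2.0, 3.5, 3.5, 5.0, 6.0, 8.0, 10.0, 12.0]
e = [1, 1, 0, 1, 0, 1, 0, 0]
times, surv = kaplan_meier(t, e)
print(times.tolist(), np.round(surv, 3).tolist())
# survival drops at each diagnosis time: 0.875, 0.75, 0.6, 0.4
```

Censored participants leave the risk set without forcing a drop in the curve, which is exactly how the undiagnosed contribute to the trajectory-wise survival comparisons.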
We then compared the survival rates within each trajectory before and after the median age of entry into the first IAb positive state, and finally compared the survival rates after entering the GADA and IA-2A positive states in each trajectory.

Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

The raw data in this study have been separated and deposited with each of the five study groups: DiPiS, BABYDIAB, DIPP, DEW-IT, and DAISY. The raw data are protected and are not publicly available due to data privacy laws. The source data for figures generated in this study are provided in the Source Data file. All other data that support the findings of this study are included in Supplementary Information or can be made available upon reasonable request. Source data are provided with this paper.

Code availability

The code to generate the waterfall diagram is deposited in the following repository (https://github.com/bckwon/dpvis-waterfall). All other figures can be generated using any standard charting library.
A General Framework for Fair Regression

Fairness, through its many forms and definitions, has become an important issue facing the machine learning community. In this work, we consider how to incorporate group fairness constraints into kernel regression methods, applicable to Gaussian processes, support vector machines, neural network regression and decision tree regression. Further, we focus on examining the effect of incorporating these constraints in decision tree regression, with direct applications to random forests and boosted trees amongst other widespread popular inference techniques. We show that the order of complexity of memory and computation is preserved for such models, and we tightly bound the expected perturbations to the model in terms of the number of leaves of the trees. Importantly, the approach works on trained models and hence can be easily applied to models in current use, and group labels are required only on training data.

Introduction

As the proliferation of machine learning and algorithmic decision making continues to grow throughout industry, their net societal impact has been studied with increasing scrutiny. In the USA under the Obama administration, a report on big data collection and analysis found that "big data technologies can cause societal harms beyond damages to privacy" [1]. The report feared that algorithmic decisions informed by big data may have harmful biases, further discriminating against disadvantaged groups. This, along with other similar findings, has led to a surge in research around algorithmic fairness and the removal of bias from big data. The term fairness, with respect to some sensitive feature or set of features, has a range of potential definitions. In this work, impact parity is considered. In particular, this work is concerned with group fairness under the following definition, as taken from [2].
Group Fairness: A predictor H : X → Y achieves fairness with bias ε with respect to groups A, B ⊆ X and any subset of outcomes O ⊆ Y iff

$$\left| P\big(H(x) \in O \mid x \in A\big) - P\big(H(x) \in O \mid x \in B\big) \right| \le \epsilon.$$

The above definition can also be described as statistical or demographic parity. Group fairness has found widespread application in India and the USA, where affirmative action has been used to address discrimination against caste, race and gender [3][4][5]. The above definition does not, unfortunately, have a natural application to regression problems. One approach to get around this would be to alter the definition to bound the absolute difference between the respective marginal distributions over the output space. However, this is a strong requirement and may hinder the model's ability to model the function space appropriately. Rather, a weaker and potentially more desirable constraint would be to force the expectations of the marginal distributions over the output space to be equal, i.e., $E[H(x) \mid x \in A] = E[H(x) \mid x \in B]$.

There are many machine learning techniques with which Group Fairness in Expectation constraints (GFE constraints) may be incorporated. While constraining kernel regression is introduced in Section 3, the main focus of the paper is examining decision tree regression and the ensemble methods which build on it, such as random forests, extra trees and boosted trees, due to their widespread use in industry and hence their extensive impact on society [6]. The reason for this focus is to show that such an approach will not affect the order of computational or memory complexity of the model. The main contributions of this paper are:

I. We use quadrature approaches to enforce GFE constraints on kernel regression, with applications to Gaussian processes, support vector machines, neural network regression and decision tree regression, as outlined in Section 3.

II. We incorporate these constraints on decision tree regression without affecting the computational or memory requirements, as outlined in Sections 5 and 6.
III. We derive a tight bound for the variance of the perturbations due to the incorporation of GFE constraints on decision tree regression, in terms of the number of leaves of the tree, as outlined in Section 7.

IV. We show that these fair trees can be combined into random forests, boosted trees and other ensemble approaches while maintaining fairness, as shown in Section 8.

Related Work

There are many ways in which the now huge volume of literature on algorithmic fairness may be split. One such approach is to break the proposed literature into three branches of research based upon the stage of the machine learning life cycle to which they belong. The first is the data alteration approach, which endeavours to modify the original dataset in order to prevent discrimination or bias due to the protected variable [7,8]. The second attempts to regularise such that the model is penalised for bias [9][10][11][12][13]. Finally, the third endeavours to use post-processing to re-calibrate and mitigate against bias [14,15]. The literature also differs dramatically as to the objective of the fairness algorithm. Recent work has made efforts towards grouping these into consistent objective formalisations [2,16]. Often, the focus of algorithmic fairness is on classification problems, with regression receiving very little attention. The definition applied to enforce fairness may be drawn from a plethora of options. Anti-classification [16], also referred to as fairness through unawareness [2], endeavours to treat data agnostic of protected variables and hence enforces fairness via treatment rather than outcome. The second popular method is classification parity, i.e., the error with respect to some given measure is equal across groups defined by the protected variable. Finally, calibration is the term used when outcomes are independent of protected group conditioned on risk.
Narrowing our focus to regression, two contradicting objectives once again arise, namely group-level fairness and individual fairness. Individual fairness implies that small changes to a given characteristic of an individual lead to small changes in outcome. Group fairness, on the other hand, endeavours to make aggregate outcomes of protected groups similar. The latter is the focus of this work, and an overview of where this fits into the broader literature may be found in Table 1.

Table 1. This table is amended from [2], highlighting some of the major contributions currently in the domain of fairness in machine learning. Parity versus preference refers to whether fairness means achieving equality or satisfying preferences. Treatment versus impact refers to whether fairness is to be maintained in the treatment or process of the learning algorithm or in the resulting output of the system.

To the best of the authors' knowledge, this work is the first group-fair framework for regression problems. Specific to decision trees, discrimination-aware decision trees have been introduced [30] for classification, offering dependency-aware tree construction and a leaf relabelling approach. Later, fair forests [13] introduced a further tree induction algorithm to encourage fairness by introducing a new gain measure. However, the issue with adding such regularisation is two-fold. Firstly, discouraging bias via a regularising term does not make any guarantee about the bias of the post-trained model. Secondly, it is hard to make any theoretical guarantees about the underlying model or the effect the new regulariser has had on it. The approach offered in this work seeks to perform model inference in a constrained space, leveraging basic theory from Bayesian quadrature such that the predicted marginal distributions are guaranteed to have equal means. Such moment constraints have a natural relationship to maximum entropy methods.
By utilising quadrature methods, it is also possible to derive bounds for the expected absolute perturbation induced by constraining the space. This is shown explicitly in Section 7. Ultimately, the paper develops a general framework to perform group-fair regression, an important open problem as pointed out in [23]. We emphasise to the reader that, as outlined in the next section, there are many definitions of fairness, each with reasonable motives but conflicting values. Group fairness, addressed in this work, inherently leads to individual unfairness, i.e., to create equal aggregate statistics between sub-populations, individuals in each sub-population are treated inconsistently. The reverse is also true. As such, we should always think through the adverse effects of our approach before applying it in the real world. The experiments in this paper aim to explore and demonstrate the approach introduced, but are not meant to advocate using group fairness specifically for the task at hand.

Constrained Kernel Regression

We first show how one can create such linear constraints on kernel regression models. This work builds on the earlier contributions in [31], where the authors examined the incorporation of linear constraints on Gaussian processes (GPs). Gaussian processes are a Bayesian kernel method most popular for regression. For a detailed introduction to Gaussian processes, we refer the reader to [32]. However, the reader unfamiliar with GPs may simply think of a high dimensional Gaussian distribution parameterised by a kernel K(·, ·), with zero mean and, without loss of generality, unit variance. Given a set of inputs and respective outputs, $\{x_i, y_i\}_{i=1}^N$, split into training and testing sets, the GP predictive distribution at a test point $\hat{x}$ is Gaussian with mean and variance

$$\bar{f}(\hat{x}) = K_{\hat{x},x} K_{x,x}^{-1} y, \qquad \mathrm{Var}[f(\hat{x})] = K_{\hat{x},\hat{x}} - K_{\hat{x},x} K_{x,x}^{-1} K_{x,\hat{x}},$$

where $K_{x,x}$ denotes the kernel matrix between training examples, $K_{\hat{x},x}$ is the kernel matrix between the test and training examples and $K_{\hat{x},\hat{x}}$ is the prior variance on the prediction point defined by the kernel matrix.
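The predictive equations above can be checked numerically on toy data; the RBF kernel, data and length-scale below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Sketch of the GP predictive mean/variance: with near-zero observation
# noise, the posterior mean interpolates the training targets.
def rbf(X, Y, ls=0.3):
    return np.exp(-0.5 * (np.subtract.outer(X, Y) / ls) ** 2)

X = np.array([0.0, 0.3, 0.5, 0.9])
y = np.sin(2 * np.pi * X)
K_xx = rbf(X, X) + 1e-10 * np.eye(len(X))   # tiny jitter for stability

x_hat = np.array([0.0, 0.3, 0.5, 0.9, 0.7])  # includes the training locations
K_hx = rbf(x_hat, X)

mean = K_hx @ np.linalg.solve(K_xx, y)
var = rbf(x_hat, x_hat).diagonal() - np.einsum(
    "ij,ij->i", K_hx, np.linalg.solve(K_xx, K_hx.T).T)

assert np.allclose(mean[:4], y, atol=1e-5)   # interpolates the data
assert np.all(var > -1e-8)                   # variances non-negative (up to rounding)
print(np.round(mean[-1], 3))
```

The variance at the training locations collapses to (numerically) zero, which is the "degenerate" behaviour exploited in the next paragraphs.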
Gaussian processes differ from high dimensional Gaussian distributions as they can model the relationships between points in continuous space, via the kernel function, as opposed to being limited to a finite dimension. An important note is that any combination of Gaussian distributions via addition and subtraction is a closed space, i.e., the sum of Gaussians is also Gaussian and so on. While this may at first appear trivial, it is, in fact, a very useful artefact. For example, let us assume there are two variables, a and b, drawn from Gaussian distributions with means and variances µ_a, µ_b, σ_a², σ_b², respectively. Further, assume that the correlation coefficient ρ describes the interaction between the two variables. Then, a new variable c, equal to the difference of a and b, is drawn from a Gaussian distribution with mean and variance

$$\mu_c = \mu_a - \mu_b, \qquad \sigma_c^2 = \sigma_a^2 + \sigma_b^2 - 2\rho\sigma_a\sigma_b.$$

We can thus write all three variables in terms of a single mean vector and covariance matrix,

$$\begin{bmatrix} a \\ b \\ c \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} \mu_a \\ \mu_b \\ \mu_a - \mu_b \end{bmatrix}, \begin{bmatrix} \sigma_a^2 & \rho\sigma_a\sigma_b & \sigma_a^2 - \rho\sigma_a\sigma_b \\ \rho\sigma_a\sigma_b & \sigma_b^2 & \rho\sigma_a\sigma_b - \sigma_b^2 \\ \sigma_a^2 - \rho\sigma_a\sigma_b & \rho\sigma_a\sigma_b - \sigma_b^2 & \sigma_c^2 \end{bmatrix} \right).$$

Given any two of the above observations, the third can be inferred exactly. We refer to this as a degenerate distribution, as K will naturally be low rank. If we observe that µ_a − µ_b is equal to zero, we are thus constraining the distribution of a and b. This can easily be extended to the relationship between sums and differences of more variables. Bayesian quadrature [33] is a technique used to incorporate integral observations into the Gaussian process framework. Essentially, quadrature can be derived through an infinite summation, and the above relationship between these summations can be exploited [34]. An example covariance structure thus looks akin to

$$K = \begin{bmatrix} \iint k(x, x')\, p(x)\, p(x')\, dx\, dx' & \int k(x, x_j)\, p(x)\, dx \\ \int k(x_i, x')\, p(x')\, dx' & k(x_i, x_j) \end{bmatrix},$$

where p(x) is some probability distribution over the domain of x, on which the Gaussian process is defined and against which the quadrature is performed.
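The integral-observation mechanism can be sketched with grid-based quadrature weights; the kernel, density and data below are illustrative, and the integrals are discretised rather than computed in closed form:

```python
import numpy as np

# Sketch of Bayesian quadrature as an extra GP "observation": the integral
# ∫f(x)p(x)dx is appended to the covariance via single and double integrals
# of the kernel, approximated on a grid.
def rbf(X, Y, ls=0.2):
    return np.exp(-0.5 * (np.subtract.outer(X, Y) / ls) ** 2)

grid = np.linspace(0, 1, 300)
p = np.exp(-(grid - 0.4) ** 2 / 0.05)
p /= p.sum()                                  # discrete weights standing in for p(x)

X = np.array([0.1, 0.35, 0.6, 0.85])
y = np.array([0.2, 1.0, 0.7, -0.3])
v = 0.5                                       # observed value of ∫f(x)p(x)dx

Kgg, KgX, KXX = rbf(grid, grid), rbf(grid, X), rbf(X, X)
K_aug = np.block([[np.atleast_2d(p @ Kgg @ p), (p @ KgX)[None, :]],
                  [(p @ KgX)[:, None], KXX + 1e-8 * np.eye(len(X))]])
alpha = np.linalg.solve(K_aug, np.concatenate([[v], y]))

# Posterior mean over the grid, and its integral against p
f_mean = np.column_stack([Kgg @ p, KgX]) @ alpha
print(round(p @ f_mean, 6))                   # recovers the observed integral value
```

Because the constraint row of the augmented covariance is exactly the discrete functional applied to the kernel, the posterior mean reproduces the observed integral to numerical precision.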
Reiterating the motivation of this work, given two generative distributions p_A(x) and p_B(x), from which subpopulations A and B of the data are generated, we wish to constrain the inferred function f(·) such that

$$\int f(x)\, p_A(x)\, dx = \int f(x)\, p_B(x)\, dx.$$

This constraint can be rewritten as

$$\int f(x)\, \big(p_A(x) - p_B(x)\big)\, dx = 0,$$

which allows us to incorporate the constraint on f(·) as an observation in the above Gaussian process. Let q_{A,B}(x) = p_A(x) − p_B(x) be the difference between the generative probability distributions of A and B; then, by setting the corresponding observation to zero, the covariance matrix becomes

$$K = \begin{bmatrix} \iint k(x, x')\, q_{A,B}(x)\, q_{A,B}(x')\, dx\, dx' & \int k(x, x_j)\, q_{A,B}(x)\, dx \\ \int k(x_i, x')\, q_{A,B}(x')\, dx' & k(x_i, x_j) \end{bmatrix}.$$

We refer to these as equality constrained Gaussian processes. Let us now turn to incorporating these concepts into decision tree regression.

Trees as Kernel Regression

Decision tree regression (DTR) and related approaches offer a white box approach for practitioners who wish to use them. These methods are among the most popular methods in machine learning [6] in practice, as they are generally intuitive even for those not from a statistics, mathematics or computer science background. It is their proliferation, especially in businesses without machine learning researchers, that makes them of particular interest. DTR regresses data by sorting them down binary trees based on partitions of the input domain. The trees are created by recursively partitioning the domain of input along axis-aligned splits determined by a given metric of the data in each partition, such as information gain or variance reduction. In this work, we do not consider the many possible techniques for learning decision trees, but rather assume that the practitioner has a trained decision tree model. For a more complete description of decision trees, we refer readers to [35]. For the purposes of this work, DTR can be described as a partitioning of space such that predictions are made by averaging the observations in the local partition, referred to as the leaves of the tree.
As such, DTR has a very natural formulation as a degenerate kernel whereby

$$k(x, x') = \mathbb{1}\big[L(x) = L(x')\big],$$

where L(·) is the index of the leaf to which the argument belongs. The kernel hence becomes naturally block diagonal and the classifier/regressor can be written as

$$f(\hat{x}) = K_{\hat{x},x} K_{x,x}^{-1} y,$$

with $K_{\hat{x},x}$ denoting the vector of kernel values between $\hat{x}$ and the observations, $K_{x,x}$ denoting the covariance matrix of the observations as defined by the implicit decision tree kernel, and y denoting the values of the observations. It is also worth noting that one can write the decision tree as a two-stage model: first by averaging the observations associated with each leaf, and then by using a diagonal kernel matrix to perform inference. Trivially, the diagonal kernel matrix acts only as a lookup and outputs the leaf average that corresponds to the point being predicted. Let us refer to this compressed kernel matrix approach as the compressed kernel representation and the block diagonal variant as the explicit kernel representation.

Fairness Constrained Decision Trees

Borrowing concepts from the previous section on equality constrained Gaussian processes using Bayesian quadrature, decision trees may be constrained in a similar fashion. The first consideration to note is that we wish the constraint observation to act as a hard equality, i.e., noiseless. In contrast, we are willing for the observations to be perturbed in order to satisfy this hard equality constraint. To achieve this, let us add a constant noise term, σ²_noise, to the diagonals of the decision tree kernel matrix. Similar to ordinary least squares regression, the regressor now minimises the L2-norm of the error induced on the observations, conditioned on the equality constraint, which is noise free. In the explicit kernel representation, this implies the minimum induced noise per observation, whereas in the compressed kernel representation it implies the minimum induced noise per leaf.
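The degenerate tree kernel, and the effect of the added noise term, can be sketched on a one-split "tree"; the data and split are illustrative:

```python
import numpy as np

# k(x, x') = 1 iff both points share a leaf. With tiny jitter, the kernel
# regressor K_x̂x (K_xx + εI)^{-1} y returns the leaf mean; with noise σ²
# it shrinks the leaf mean, illustrating the induced noise per observation.
def leaf(x, split=0.5):
    return (np.asarray(x) >= split).astype(int)

X = np.array([0.1, 0.2, 0.4, 0.6, 0.9])
y = np.array([1.0, 2.0, 3.0, 10.0, 12.0])
L = leaf(X)
K = (L[:, None] == L[None, :]).astype(float)   # block-of-ones kernel matrix

x_hat = 0.3                                    # lands in leaf 0, whose y-mean is 2.0
k_vec = (leaf(np.array([x_hat]))[:, None] == L[None, :]).astype(float)[0]

pred = k_vec @ np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)
pred_noisy = k_vec @ np.linalg.solve(K + 1.0 * np.eye(len(X)), y)

print(round(pred, 4), round(pred_noisy, 4))
# 2.0 (the leaf mean) and 1.5 = n ȳ / (n + σ²) with n = 3, σ² = 1
```

The block of ones is singular on its own, which is why the jitter (or the noise term σ²) is needed before inversion; the noisy prediction shows the per-observation shrinkage that the constraint machinery trades against.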
An important note is that the constraint is applied to the kernel regressor equations; hence, the method is exact for regression trees or when the practitioner is concerned with relative outcomes of various predictions. However, in the case that the observations range within [0, 1], as in classification, we must renormalise the output to [0, 1]. This no longer guarantees a minimum L2-norm perturbation and, while potentially still useful, is not the focus of this work. The second consideration is how to determine the generative probability distributions p_A(x) and p_B(x). Given the frequentist nature of decision trees, it makes sense to consider p_A(x) and p_B(x) as the empirical distributions of subpopulations A and B, as described in Section 1. Thus, the integral of the empirical distribution over a given leaf, $\int_{L_i} p_A(x)\, dx$, is defined as the proportion of population A observed in the partition associated with leaf L_i. We emphasise that how p_A(x) and p_B(x) are determined is not the core focus of this work, and many approaches have merit. For example, a Gaussian mixture model could be used to model the input distribution, in which case $\int_{L_i} p_A(x)\, dx$ would equal the cumulative distribution of the generative PDF over the bounds defined by the leaf. This is demonstrated in the Experimental Section. Many other such models would also be valid, and determining which method to use to model the generative distribution is left to the practitioner with domain expertise.

Efficient Algorithm for Equality Constrained Decision Trees

At this point, an equality constrained variant of a decision tree has been described, in both explicit representation and compressed representation. In this section, we show that equality constraints on a decision tree do not change the computational or memory order of complexity.
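The empirical per-leaf proportions described above are cheap to compute; the partition and sub-population samples below are illustrative:

```python
import numpy as np

# z_i = ∫_{L_i} (p_A - p_B) dx under the empirical distributions, i.e. the
# difference in the proportion of each sub-population falling into leaf i.
def leaf_index(x, splits):
    return np.digitize(x, splits)              # partition of the input line

splits = np.array([0.3, 0.7])                  # a 3-leaf stump
x_A = np.array([0.1, 0.2, 0.5, 0.6, 0.9])      # sub-population A
x_B = np.array([0.1, 0.4, 0.5, 0.8, 0.85, 0.95])  # sub-population B

n_leaves = len(splits) + 1
pA = np.bincount(leaf_index(x_A, splits), minlength=n_leaves) / len(x_A)
pB = np.bincount(leaf_index(x_B, splits), minlength=n_leaves) / len(x_B)
z = pA - pB

print(np.round(z, 3))                          # z sums to zero by construction
```

Since both sets of proportions sum to one, the entries of z always sum to zero, which is the property the efficient algorithm in the next section relies on.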
The motivation for considering the orders of complexity is that decision trees are among the more scalable machine learning models, whereas kernel methods such as Gaussian processes naively scale as O(n³) in computation and O(n²) in memory, where n is the number of observations. While the approach presented in this work utilises concepts from Bayesian quadrature and linearly constrained Gaussian processes, the model's usefulness would be drastically hindered if it no longer maintained the performance characteristics of the classic decision tree, namely its computational cost and memory requirements. Efficiently Constrained Decision Trees in Compressed Kernel Representation. As Figure 1 shows, the compressed kernel representation of the constrained decision tree creates an arrowhead matrix. It is well known that the inverse of an arrowhead matrix is a diagonal matrix with a rank-1 update. Let D represent the diagonal principal sub-matrix, with diagonal elements equal to 1 + σ²_n, and let z be the vector whose ith element is the relative difference in the generative population distributions on leaf i, z_i = ∫_{L_i}(p_A(x) − p_B(x)) dx; the arrowhead inversion properties then give the inverse as D^{-1} plus a rank-1 update, with border scalar ρ = −1/(z^T D^{-1} z). Note that the integral of the difference between the two generative distributions evaluated over the entire domain is equal to zero, as both p_A(x) and p_B(x) must integrate to one by definition and hence their difference integrates to zero. Returning to the equation of interest, namely f(x̂) = K_{x̂,x} K_{x,x}^{-1} y with y as the average value of each leaf of the tree, and substituting in K_{x̂,x} as a vector whose first (constraint) element is zero, as the predicted point does not contribute to the empirical distributions, and which otherwise holds a one indexing the jth leaf to which the predicted point belongs, we arrive at f(x̂) = (y_j − z_j (z^T z)^{-1} z^T y)/(1 + σ²_n). Figure 1. A visualisation of a decision tree kernel matrix with a marginal constraint, in explicit representation (left) and compressed representation (right).
The dark cell in the upper left of the matrix is the doubly integrated kernel function with respect to the difference of the input distributions, which constrains the process. The solid grey row and column are single integrals of the kernel function. White cells have zero values, and the dashed (block) diagonals are the kernel matrix between observations or leaves of the tree. We can note that the above compressed-representation kernel matrix is an arrowhead matrix, which we exploit to create an efficient algorithm. The term 1/(1 + σ²_n) is the effect of the prior under the Gaussian process perspective; however, by post-multiplying by (1 + σ²_n), this prior effect can be removed. While relatively simple to derive, the above equation shows that only an additive update to the predictions is required to ensure group fairness in decision trees. Further, if the same relative population is observed for Group A and Group B on a single leaf j, then z_j = 0 and no change is applied to the original inferred prediction other than the effect of the noise. In fact, the perturbation to a leaf's expectation grows linearly with the bias in the population of the leaf. From an efficiency standpoint, only the difference in generative distributions, z, needs to be stored, which is an additional O(L) memory requirement, and the update per leaf can be pre-computed in O(L). These additional memory and computational requirements are negligible compared to the O(N) cost of the decision tree itself. Efficiently Constrained Decision Trees in Explicit Kernel Representation. Let us now turn our attention to the explicit kernel representation case, where the D of the previous subsection is replaced with its block diagonal matrix equivalent. First, let us state the bordering method, a special case of the block matrix inversion lemma, with ρ = −1/(z^T D^{-1} z) once again.
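Before moving on, the compressed-representation update can be sketched numerically (a sketch, not the paper's code; the correction sign is reconstructed so that the GFE constraint z^T f = 0 actually holds). The bordered arrowhead system and the O(L) additive per-leaf correction give identical answers:

```python
# Compressed representation: solving the bordered arrowhead system
#   M = [[0, z^T], [z, (1+s2) I]]
# is equivalent to an O(L) additive correction of the leaf means,
#   f_j = ybar_j - z_j (z^T ybar) / (z^T z),
# after post-multiplying by (1+s2) to remove the prior effect.
import numpy as np

rng = np.random.default_rng(2)
L, s2 = 6, 0.1
z = rng.normal(size=L)
z -= z.mean()                        # imbalances sum to zero
ybar = rng.normal(size=L)            # per-leaf mean observations

# Dense route: solve the bordered system directly.
M = np.zeros((L + 1, L + 1))
M[0, 1:] = z
M[1:, 0] = z
M[1:, 1:] = (1.0 + s2) * np.eye(L)
alpha = np.linalg.solve(M, np.concatenate([[0.0], ybar]))
f_dense = (1.0 + s2) * alpha[1:]     # remove the prior shrinkage

# O(L) route: additive per-leaf correction.
f_fast = ybar - z * (z @ ybar) / (z @ z)
```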
Substituting this into the kernel regression equation once more, we obtain an expression in which I_j denotes a vector of zeros with ones placed in all elements relating to observations in the same leaf. Expanding the linear algebra yields a sum in which j iterates over the set of leaves. Note that, when m_j = 1 for all j, we arrive at the same value for ρ as in the previous subsection. We can continue to apply this result to the other terms of interest, where y_j is once again the average output observation over leaf j. The terms have been labelled X_1, X_2 and X_3 for shorthand. These three terms, along with ρ, can be computed in linear time with respect to the size of the data, O(n), and can be pre-computed ahead of time, hence not affecting the computational complexity of a standard decision tree. Once again, only z_j and m_j have to be stored for each leaf, so the additional memory cost is only O(L). As such, we can simplify the full expression for the expected outcome. Expected Perturbation Bounds. In imposing equality constraints on the models, the inferred outputs become perturbed. In this section, the expected magnitude of the perturbation is analysed for the compressed kernel representation. We define the perturbation as the change in the inferred output due to the equality constraint alone, not due to the incorporation of the noise. Theorem 1. Given a decision tree with L leaves, with the expected values of the leaf observations denoted by the vector y ∈ ℝ^L, normalised to have zero mean and unit variance, and the leaf frequency imbalance denoted by z ∈ ℝ^L, the expected variance induced by the perturbation due to incorporating a Group Fairness in Expectation constraint is bounded as follows. As the expectation of z_j is zero, z being the difference of two probability distributions, the variance is equal to the expectation of (z̄^T y)², with z̄ equal to z after normalisation. By Lemma 1, the expectation of the dot product (z̄^T y)² is equal to 1/L.
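The claim of Lemma 1 invoked here, that E[(z̄^T y)²] = 1/L for unit vectors drawn uniformly on the hypersphere, is easy to check by simulation (a quick Monte-Carlo sketch, not from the paper):

```python
# Monte-Carlo check of Lemma 1: for y and z drawn uniformly on the
# unit hypersphere S^{L-1}, E[z^T y] = 0 and E[(z^T y)^2] = 1/L.
# Normalised Gaussian draws are uniform on the sphere.
import numpy as np

rng = np.random.default_rng(3)
L, n = 10, 200_000
zs = rng.normal(size=(n, L))
ys = rng.normal(size=(n, L))
zs /= np.linalg.norm(zs, axis=1, keepdims=True)
ys /= np.linalg.norm(ys, axis=1, keepdims=True)
dots = np.einsum('ij,ij->i', zs, ys)
mean_est = dots.mean()           # should be near 0
var_est = (dots ** 2).mean()     # should be near 1/L = 0.1
```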
Further, the 2-norm of z can be cancelled from the numerator and denominator. Finally, using the L1–L2 norm inequality, ‖z‖₂ ≤ ‖z‖₁ ≤ √L ‖z‖₂, we can tightly bound the worst-case introduced variance. Lemma 1. Given two vectors y, z̄ uniformly distributed on the unit hypersphere S^{L−1}, the expectation of their dot product is zero and its variance is 1/L. Proof. As the inner product is rotation invariant when applied to both z̄ and y, let us denote the vector z̄ as [1, 0, . . . , 0] without loss of generality. The first element of the vector y, denoted by y_0, is thus equal to z̄^T y. The probability density of the random variable y_0 is proportional to the surface area lying at a height between y_0 and y_0 + dy_0 on the unit hypersphere. That proportion occurs within a belt of height dy_0 and radius √(1 − y_0²), which is a conical frustum constructed out of an S^{L−2} of radius √(1 − y_0²), of height dy_0, and slope 1/√(1 − y_0²). Hence, the probability is proportional to (1 − y_0²)^{(L−3)/2} dy_0. Substituting u = (y_0 + 1)/2, we find that P(u) ∝ u^{(L−3)/2}(1 − u)^{(L−3)/2}. Note that this last simplification of P(u) is equal to the probability density function of the Beta distribution with both shape parameters equal, α = β = (L−1)/2, whose variance is 1/(4L). Rescaling to find the variance of y_0 (since y_0 = 2u − 1, the variance scales by four), we arrive at 1/L. As E[z̄^T y] = 0 by symmetry, E[(z̄^T y)²] = 1/L. This is an interesting result, as it implies that, if the model is exploiting biases in the generative distribution evenly across all of the leaves of the tree, i.e., ‖z‖₁ = √L ‖z‖₂, then the resulting predictions receive the greatest expected absolute perturbation when averaged over all possible y. For the explicit kernel representation, the expected absolute perturbation bound can be analysed in the scenario whereby each leaf holds an equal number of observations, so that m_i = m for all leaves i ∈ {1, . . . , L}.
Substituting this into the equations for ρ, X_2 and X_3, we find the corresponding bound on the expected perturbation. For the sake of conciseness, the full derivation is left to the reader, but it follows the same steps as for the compressed kernel representation. Combinations of Fair Trees. While it is intuitive that ensembles of trees with GFE constraints preserve the GFE constraint, for the sake of completeness this is now shown more formally. Random forests [36], extremely random trees (ExtraTrees) [37] and tree bagging models [38] combine tree models by averaging over their predictions. Denoting the predictions of the trees at point x as f_i(x) for each i ∈ {1, . . . , T}, where T is the number of trees, the combined difference in expectation marginalised over the space is equal to zero, since each term ∫(p_A(x) − p_B(x)) f_i(x) dx vanishes individually and the average of zeros is zero. It can also easily be shown that modelling the residual errors of the trees with other fair trees, as is the case for boosted tree models [39], also results in fair predictors. These concepts are not limited to tree methods, and the core idea set out in this paper of constraining kernel matrices can have applications in models such as deep Gaussian process models [40]. Synthetic Demonstration. The first experiment is a visual demonstration to better communicate the validity of the approach. The models examined are ExtraTrees, Gaussian processes and a single-hidden-layer perceptron. They endeavour to model an analytic function, f(x) = x cos(αx²) + sin(βx), with observations drawn from two beta distributions, p_A(x) and p_B(x), respectively. The parameters of the two beta distributions are presented in Table 2. Figure 2 shows the effect of perturbing the models using the presented approach to constrain the expected means of the two populations. The figure shows that the greater the disparity between p_A(x) and p_B(x), the greater the perturbation in the inferred function.
Both the compressed and the explicit kernel representation lead to very similar plots for the tree-based models; thus only the compressed kernel representation algorithm is shown for conciseness. Note that, in the case of the ExtraTrees model, each tree was individually perturbed before being combined. Further, in the case of the perceptron, a GMM was fit to the data in the inferred latent space rather than in the original input space. A downside of group fairness algorithms more generally, as pointed out in [7], is that candidate systems which impose group fairness can lead to qualified candidates being discriminated against. This can be visually verified, as the perturbation pushes the outcome of many orange points below the total population mean in order to satisfy the constraint. By choosing to incorporate group fairness constraints, the practitioner should be aware of these tradeoffs. ProPublica Dataset: Racial Biases. Across the USA, judges and probation and parole officers are increasingly using algorithms to aid their decision making. The ProPublica dataset (https://www.propublica.org/datastore/dataset/compas-recidivism-risk-score-data-and-analysis) contains data about criminal defendants from Florida in the United States. It concerns the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm [41], which is often used by judges to estimate the probability that a defendant will be a recidivist, a term used to describe re-offenders. However, the algorithm is said to be racially biased against African Americans [42]. To highlight the proposed algorithm, we first endeavoured to use a random forest to approximate the decile scoring of the COMPAS algorithm and then perturbed each tree to remove any racial bias from the system. The two subpopulations we considered constraining are thus African Americans and non-African Americans.
We encode the COMPAS algorithm's decile score as an integer between zero and ten such that minimising the L2 perturbation is an appropriate objective function. The fact that the decile scores are bounded in [0, 10] was not taken into account. The random forest used 20 decision trees as base estimators, and the explicit kernel representation version of the algorithm was used for demonstrative purposes. Figure 3 presents the marginal distribution of predictions on a 20% held-out test set before and after the GFE constraint was applied. It is visible that the expected outcome for African Americans is decreased and that for non-African Americans is increased. Notice that, while the means are equal, the structures of the two distributions are quite different, indicating that GFE constraints still allow greater flexibility than stricter group fairness such as that described in Section 1. The root-mean-square difference between the predicted points before and after perturbation was 0.8. Importantly, the GFE constraint described in this work was verified numerically, with the average outputs recorded as shown in Table 3. We can see that the respective means (vertical lines) become approximately equal after the inclusion of the constraint using the empirical input distribution. Intersectionality: Illinois State Employee Salaries. The Illinois state employee salaries (https://data.illinois.gov/datastore/dump/1a0cd05c-7d17-4e3d-938d-c2bfa2a4a0b1) since 2011 can be seen to exhibit a gender bias and a bias between veterans and non-veterans. The motivation for this experiment is to show how we can deal with intersectionality issues (multiple compounding constraints), such as when one wishes to predict a fair salary for future employees based on current staff. Gender labels were inferred from the employees' first names using the gender-guesser Python library.
GFE constraints were applied between all intersections of gender and veteran/non-veteran status, the marginals of gender, and the marginals of veteran/non-veteran status. Figure 4 visualises the perturbations to the marginals of each demographic intersection due to the GFE constraints. The train-test split was set at 80-20%, and the incorporation of the GFE constraints increased the root mean squared error from $12,086 to $12,772, the cost of fairness. The only difference required to allow for intersectionality is that z is no longer a vector but rather a matrix with a column for each constraint. Thus, f(x̂) = y_j − z_j (z^T z)^{-1} z^T y, where z_j now denotes the jth row of the constraint matrix. Conclusions. This work offers an easily implementable approach to constraining the means of kernel regression, which has direct applicability to decision tree regression, Gaussian process regression, neural network regression, random forests, boosted trees and other tree-based ensemble models.
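The intersectional update used in the salary experiment can be sketched as follows (a sketch with synthetic imbalances, not the paper's code; the sign is written so that every constraint holds). Stacking one imbalance column per constraint into Z, the update projects the leaf means onto the null space of Z^T:

```python
# Intersectionality: C simultaneous GFE constraints, one imbalance
# column per constraint in Z (L x C).  The projection
#   f = y - Z (Z^T Z)^{-1} Z^T y
# zeroes the group gap of every constraint at once.
import numpy as np

rng = np.random.default_rng(4)
L, C = 12, 3
Z = rng.normal(size=(L, C))
Z -= Z.mean(axis=0)              # each column of imbalances sums to zero
y = rng.normal(size=L)

f = y - Z @ np.linalg.solve(Z.T @ Z, Z.T @ y)
gaps = Z.T @ f                   # one residual group gap per constraint
```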
Survey on Botnet Detection Techniques: Classification, Methods, and Evaluation
With the continuous evolution of the Internet, as well as the development of the Internet of Things, smart terminals, cloud platforms, and social platforms, botnets are showing the characteristics of platform diversification, communication concealment, and control intelligence. This survey analyzes and compares the most important efforts in the botnet detection area in recent years. It studies the mechanism characteristics of botnet architecture, life cycle, and command and control channels and provides a classification of botnet detection techniques. It focuses on the application of advanced technologies such as deep learning, complex networks, swarm intelligence, moving target defense (MTD), and software-defined networking (SDN) for botnet detection. From the four dimensions of service, intelligence, collaboration, and assistant, a common bot detection evaluation system (CBDES) is proposed, which defines a new global capability measurement standard. Combining expert scores and objective weights, this survey proposes a quantitative evaluation and gives a visual representation for typical detection methods. Finally, the challenges and future trends in the field of botnet detection are summarized. Introduction. A botnet is an overlay network formed by many hosts (bots or zombies) infected by bot malware and controlled by an attacker (botmaster) for the purpose of malicious activities [1,2]. The botmaster can use control servers to direct the bots and initiate various types of cyberattacks, such as distributed denial of service (DDoS), spam, phishing, click fraud, and information theft, making botnets one of the most serious security threats facing the Internet [3]. Given the security problems caused by the continuous development of botnets, accurately identifying and detecting botnets, particularly unknown botnets in the incubation period, is the main challenge in academic and industrial research.
Firstly, the C&C mechanisms of botnets show diversified and intelligent characteristics. Public service resources such as 5G, the Internet of Things, smart terminals, cloud platforms, and social platforms have gradually emerged as fertile ground for botnets. Botnets use technologies such as zero-day vulnerabilities, P2P networks, phishing, fast flux, anonymous networks, bitcoin networks, and lightning networks as their means of exploitation and spread [4-6]. Secondly, compared with conventional network security threats, botnets spread faster, have more infection channels, are more concealed, have a higher technical content, and have greater destructive power. Finally, because botnets are mostly in a silent state, they only maintain connections through C&C channels, without attacking and intruding, and often do not exhibit conventional attack characteristics. Therefore, most intrusion detection systems cannot effectively identify botnets. Deep learning theory has developed rapidly, with significant advancements in related theoretical research and practical applications, particularly in speech recognition [7] and image recognition [8]. Deep learning methods can be used to address the low accuracy of conventional botnet detection in multiclassification tasks and the complexity of feature engineering in network detection technology, which have become research hotspots. The characteristics of blockchain technology, such as decentralization, anticensorship, and concealment, as well as smart contracts, digital signatures, and incentive mechanisms, provide a new paradigm for both the construction of botnets and distributed detection. The community mining algorithms of the complex network discipline provide new ideas for behavior-based botnet analysis. Swarm intelligence algorithms, SDN, MTD, and integrated methods are some of the new approaches for botnet detection.
In the field of botnet detection in recent years, there is a lack of a comprehensive overview of the latest detection technologies. This survey is divided into six major parts: first, we analyze previous surveys; second, we study the botnet background and new developments in botnet construction mechanisms; third, we classify botnet detection technologies from a new perspective; fourth, we analyze the latest and most advanced botnet detection technologies; fifth, we propose the common bot detection evaluation system (CBDES); sixth, we summarize the challenges and future trends in the field of botnet detection. The main contributions of this article are as follows: (1) a novel summary of new developments in the construction mechanisms of botnets; (2) a novel classification of botnet detection technologies; (3) a comprehensive analysis of the latest and most advanced botnet detection technologies, such as deep learning, complex networks, swarm intelligence, SDN, MTD, and blockchain; (4) a common bot detection evaluation system, proposed from the four dimensions of service, intelligence, collaboration, and assistant, drawing on the ideas of the analytic hierarchy process (AHP); (5) a new global capability metric, defined by combining expert scores and objective weights, used to quantitatively evaluate eight typical detection methods, with spider diagrams giving a visual representation. The rest of the paper is organized as follows. Section 2 describes and analyzes previous botnet detection surveys. Section 3 studies the botnet background and new developments in botnet construction mechanisms. Section 4 proposes a novel classification of botnet detection methods. Section 5 analyzes the latest and most advanced detection technologies. Section 6 proposes the common bot detection evaluation system. Section 7 discusses the challenges and prospects of the area, and Section 8 presents the conclusions.
Previous Surveys. There have been several surveys on botnet detection techniques in recent years, which are analyzed in this section. IoT botnet detection technologies are divided into host-based and network-based in [9]. Network-based detection is further divided into signature-based, DNS-based, traffic-based, anomaly-based, and mining-based methods. However, this review is not comprehensive enough because it targets only the single dimension of IoT botnets. A detailed statistical analysis of the IoT attack literature of recent years is provided in [10]. That review outlines the existing proposed contributions, the datasets utilized, the network forensic methods utilized, and the research focus of the primary selected studies, but it does not introduce specific detection technologies or compare and analyze detection methods. DNS-based botnet detection technologies are classified into five categories in [11]: flow-based, anomaly-based, flux-based, DGA-based, and bot infection-based. Essential attributes of a smart DNS-based botnet detection system are proposed, but that survey does not provide context on botnet construction mechanisms. A comprehensive view of botnet detection is given in [12]. That survey classifies botnet detection techniques into four classes: signature-based, anomaly-based, DNS-based, and mining-based. Unfortunately, the summary is too brief and does not cover the latest technologies. For botnet detection technologies based on DNS traffic analysis, the technologies are classified into two categories in [13]: honeypot-based and IDS-based. It mainly introduces passive technologies, including graph theory, statistical analysis, clustering, decision trees, and neural networks. That work is comprehensive, but it is dated and provides no evaluation. Evasion and detection techniques of DNS-based botnets are the focus of [14]. That survey introduces Fast-Flux and DGA botnet detection technology, but its dimensions are relatively narrow and it provides no evaluation.
Detection is divided into four categories in [15]: honeypot analysis, communication signature, anomaly, and log. The content of that survey is relatively brief and not comprehensive enough. Each survey emphasized different aspects of the literature. The analysis of the surveys shows some limitations: the surveys use different taxonomies and terminologies; many surveys focus on one type or function, such as DNS or IoT, with a single dimension and a lack of comprehensive analysis of the new construction mechanisms of botnets; most surveys do not cover the most advanced methods, and there is a lack of systematic introduction to the latest technologies; and most lack a comprehensive evaluation of the detection methods. A comparison of our survey with other surveys is presented in Table 1. Our survey aims at knowing and understanding botnet detection and eliminates these limitations. Background. Based on an in-depth understanding of the working mechanisms and behavior characteristics of botnets, this section introduces the latest developments in botnet construction mechanisms in terms of the botnet architecture, life cycle, and C&C channel, as shown in Figure 1. 3.1. Architecture. The botnet C&C system architecture is mainly divided into the following three categories: centralized, distributed, and hybrid. (1) Centralized. The centralized botnet architecture generally adopts a client-server model. The bots mainly obtain control commands from the control server in a polling manner, and the botmaster sends control commands and resources to the zombie hosts through these servers. Centralized botnets have advantages such as simple implementation, high efficiency, and good coordination, but their control process is vulnerable to central node failure. (2) Distributed. To improve the robustness of a botnet, an attacker can use a decentralized structure in P2P (peer-to-peer) mode as its channel architecture.
Any node can act as a client and a server simultaneously, and the communication process does not rely on publicly reachable server resources. Although P2P botnet command issuance has higher latency than a centralized structure, the distributed structure is difficult to hijack, measure, and shut down. (3) Hybrid. A hybrid architecture typically means that botnets have both central and P2P structures. This can be divided into two categories. One is an overall central structure with a partial P2P structure: from an overall perspective, it still belongs to the centralized structure of the C/S model; however, a P2P structure is present between the service nodes. The other is an overall P2P structure with a local center structure. This type of network structure is conducive to regional differentiated management and control, and it is difficult for defenders to detect all the key nodes and the overall scale of the botnet. Life Cycle. The life cycle of a botnet mainly includes the stages of propagation, rally, interaction, and malicious activities. Propagation. As an independently runnable program, a bot spreads in ways that include those of conventional malicious code. The main propagation methods include shared-media spread, vulnerability exploitation, social engineering, and password guessing. Rally. Rally refers to the behavior of bots in locating the control server and its resources. Implementation methods are mainly divided into two categories: static and dynamic. Static addressing means that the C&C resources that bots try to access are static and unchanging. These resources are typically hard-coded in the body of the bot or stored in a hidden path of the infected machine, such as the registry. Dynamic addressing means that the access address is not fixed but needs to be dynamically generated based on a specific algorithm. Interaction.
When the zombie host successfully discovers an available command control server or resource, it establishes a connection with the controller and begins to interact. This process is also called the command control phase, which mainly includes four activities: registration, file download, order distribution, and result feedback. Malicious Attack. The main purpose of an attacker in building and controlling a botnet is to control many victim hosts to launch a variety of attacks. Common attack activities include DDoS, spam, spreading malware, information leakage, click fraud, phishing attacks, information collection, virtual currency mining, and encrypted blackmail [16]. A product development model was used to define the life cycle in [17], including concept, recruitment, interaction, motivation, and attack execution (CRIME). Literature [18] proposed a fine-grained, hidden Markov model-based botnet life cycle model, describing the state transition of botnets from propagation to extinction and dividing the typical botnet life cycle into nine types of hidden states: infection, initialization, idle, propagation, attack, maintenance, offline, isolation, and dead. The model used "state" instead of "stage" to describe the evolution of botnets and broke with the conventional irreversible and abstract timing relationship. The model can better represent the migration and changes of botnets. Command and Control Channel. The core of a botnet is communication, and the classical C&C channel is mainly implemented through IRC, HTTP, SMB, P2P, or other custom protocols [2]. Using the IRC service as a centralized C&C channel is easy to implement and has low latency and good real-time performance; however, the centralized topology can easily be detected and blocked [19].
Bots that use the HTTP protocol to construct C&C channels can periodically access the botnet controller, obtain command files, parse them, and perform corresponding operations, and they can penetrate IDSs and firewalls, with good versatility and concealment [20]. The Server Message Block (SMB) protocol hides communication under the typical traffic patterns of home and enterprise networks and is mainly used for communication in local area networks [21,22]. The P2P protocol is used to construct distributed botnet control channels, which solves the single-point-of-failure problem of botnet controllers and has good robustness, stealthiness, and self-organization capabilities. The disadvantages are vulnerability to index poisoning and Sybil attacks and initial vulnerability. Botnets that use custom protocols to communicate are stealthier, and the communication process is less likely to be detected. New C&C Channels. (1) Diversified Platforms. The decentralization and concealment of public service resources represented by cloud platforms, social networking sites, and blockchains present natural advantages and have become fertile ground for botnets. Cloud Botnet. The multitenant feature of cloud computing can provide computing resources to anyone. Botnet controllers pretend to be legitimate tenants of cloud services, use the virtual machines of cloud service providers to quickly construct botnets, and use them to launch attacks. At the RSA 2014 conference, Ragan introduced a cloud botnet construction method [23], which can mine electronic money by controlling massive cloud computing resources. IoT Botnet. The Internet of Things (IoT) has been implemented in various fields such as agriculture, healthcare, food supply management, drug supply management, environmental monitoring, and smart homes.
IoT has heterogeneous environments and resource-constrained devices, i.e., low memory, low computing power, and low original security performance, which increase the risk of infection. Mirai is a common IoT botnet, the main objective of which is to perform DDoS attacks, with strong scale and attack capability [24-26]. Social Botnet. Botnets use social media sites such as Facebook, Twitter, or WeChat to build transmission channels or spread messages in social networks. The Flashback botnet [27] uses Twitter to construct a backup C&C channel. Once the main channel fails, the bot searches for the C&C domain name using a dynamically generated specific identifier to restore communication with the controller. A botnet that is parasitic on social networking sites can imitate normal users to complete a variety of online social actions [28]. Attackers can control social bots to achieve rumor dissemination, advertisement pushing, and personal information collection [29]. Mobile Botnet. The portability of mobile devices and the increasing popularity of applications also have an impact on the botnet threat pattern. Geinimi can steal IMEI [30], geographic location, SMS, address book, and other information and can send spam SMS and install malware; in 2012, Dexter, the first POS machine botnet, used memory-reading technology to steal users' payment card data; Zhao et al. [31] proposed a mobile botnet based on Google Cloud to Device Messaging (C2DM). Based on Blockchain. The literature [32] uses bitcoin blockchain floating C&C servers to propose a new type of resilient botnet. The literature [33] uses the bitcoin transaction propagation mechanism as the C&C infrastructure and proposes the use of subliminal channels [34,35] to create a concealed method of repeatedly creating signatures on transactions.
The Fbot botnet, a Mirai variant, uses the Emercoin [13] domain name system, based on a distributed blockchain, to solve a key problem: conventional DGA-based botnets are easily detected by reverse engineering. The literature [36] proposed a new-generation hybrid two-layer botnet, LNBot, which uses the Lightning Network (LN) infrastructure for communication between bots and C&C servers; the off-chain concept [37,38] enables almost instant Bitcoin transactions. (2) Covert Communication. Covert communication technology mainly includes information hiding and C&C channel hiding. Information Hiding. This method modulates the secret information into redundant protocol fields through various techniques, mainly encryption, compression, obfuscation, and steganography. Nagaraja et al. [39] proposed a botnet that uses existing social networks as C&C channels and hides communications in JPEG pictures. Cui et al. [40] proposed a three-channel botnet model whose core idea was to use Domain-Flux, URL-Flux, and Cloud-Flux as subchannel protocols of the overall C&C, corresponding to the registration, command issuance, and data return functions. C&C Channel Hiding. Covert channel technology based on the DNS protocol is one of the mainstream ways to realize network covert channels; common methods are Domain-Flux and Fast-Flux [41]. Casenove et al. [42] introduced scalable and stealthy botnets based on anonymous networks whose C&C traffic could be observed at the Internet Service Provider (ISP) level. (3) Intelligent Control. The concepts of complex systems, with their self-organization, resilience, and adaptability, can greatly help the design of botnet communication protocols and architectures. A heuristic algorithm based on ant colony optimization was proposed to construct a botnet C&C [43]; it ensures spontaneous and intelligent collaboration between independent bot agents, which improves network fault tolerance and the ability to adapt dynamically to the network environment.
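As background for the Domain-Flux technique discussed above, the core mechanism can be sketched as a toy domain generation algorithm (DGA): bot and controller derive the same daily domain list from a shared seed, so defenders must detect or predict the algorithm rather than block one fixed domain. The seed, hash choice, and label length below are purely illustrative and not taken from any botnet cited above.

```python
import hashlib
from datetime import date

def toy_dga(seed: str, day: date, count: int = 5, tld: str = ".com"):
    """Toy Domain-Flux generator: deterministic daily domain list."""
    domains = []
    for i in range(count):
        material = f"{seed}|{day.isoformat()}|{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        # Map the first 12 hex digits to lowercase letters (a..p only).
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        domains.append(label + tld)
    return domains

print(toy_dga("example-seed", date(2021, 1, 1)))
```

Because the output depends only on the seed and the date, the same list is reproducible on both ends of the channel, which is exactly the property DGA detectors exploit.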
Classification. Conventional detection methods are no longer suitable for detecting new botnets. The industry now has a deeper understanding of the working mechanisms and behavioral characteristics of botnets, and various botnet detection methods have been proposed. This section divides the key technologies of botnet detection into three categories: based on honeypot analysis, based on communication signatures, and based on abnormal behavior. We focus on the application of cutting-edge technologies such as deep learning, complex networks, swarm intelligence, MTD, SDN, and blockchain in botnet detection. Botnet detection technologies can be classified along several dimensions; this article classifies the key technologies as shown in Figure 2. Based on Honeypot Analysis. Honeypot-based detection obtains many malicious code samples, i.e., the binary files of existing botnets, through honeypot trapping; monitoring and analysis in a controlled environment then reveal the bots and their malicious behaviors [44]. It is an active detection approach. Representative honeypot tools include Snort, Ntop, Argos, Nepenthes, Sebek, and the Goddess of Hunting project led by Zhuge Jianwei of Peking University [45]. The darknet is quickly becoming a popular alternative to honeypots and is essentially a derivative of the honeynet. Although the honeypot-based method has a high accuracy rate for known botnets, it cannot effectively identify encrypted traffic or detect unknown attacks. Moreover, it cannot easily find botnets that spread through social engineering and is useless for real-time systems. Because of the lack of user operations, it is easily recognized by bots with anti-honeypot functions. Based on Communication Signature.
The method based on communication signature detection is a commonly used defense that detects bot activities based on predefined patterns and signatures retrieved from well-known bots [46]. Common methods include regular expressions [47], whitelists (or blacklists) [48], and N-gram models [49]. By configuring feature-matching rules in advance, conventional intrusion detection systems such as Snort, with their rich signature databases, can quickly and accurately discover botnet activities. The communication signature-based method is suitable for botnets with definite features and helps further the understanding of botnet communication mechanisms and potential vulnerabilities. The disadvantages are that bots can evade signature-based detection by using code obfuscation, and botnets with unknown features cannot be detected. The method also needs continuous maintenance and updating of the signature knowledge base, which increases the cost of detection. Based on Abnormal Behavior. Anomaly-based detection is an important research field in botnet detection. The basic idea is to look for abnormalities in host behavior or network traffic, such as high network latency, large traffic volumes, traffic on unusual ports, and abnormal system behaviors, and to flag deviations from an established profile of benign behavior or similarity to the behavior of known bots. Deep Learning. In the past few decades, researchers have used various conventional machine learning methods to detect botnets [50][51][52] and have made great progress with, e.g., Naive Bayes [53], support vector machines [54], random forests [55], and clustering algorithms (such as DBSCAN [56] and X-means [57]), building models that identify malicious network traffic from a variety of features. The features are typically chosen by researchers based on experience before the model is built.
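The signature-matching idea can be illustrated concretely: payloads are checked against a small knowledge base of precompiled patterns. The two patterns below are invented toy signatures, not real Snort rules.

```python
import re

# Hypothetical signature knowledge base: name -> compiled bytes regex.
SIGNATURES = {
    "toy-irc-bot":  re.compile(rb"PRIVMSG\s+#\w+\s+:!(ddos|scan)"),
    "toy-http-bot": re.compile(rb"GET\s+/cmd\.php\?id=\d+"),
}

def match_signatures(payload: bytes):
    """Return the names of all signatures that match the payload."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(payload)]

print(match_signatures(b"GET /cmd.php?id=42 HTTP/1.1"))  # ['toy-http-bot']
```

This also makes the stated weaknesses visible: a bot that obfuscates its command strings no longer matches any pattern, and the dictionary must be maintained as new bots appear.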
Common dimensions include network flow properties, such as the number of data packets, the average packet size, and the average interval between two adjacent data streams, and behaviors, such as whether the same server is accessed. These detection models achieved low false-negative and false-positive rates in experiments. However, there are shortcomings. First, manual feature selection places high demands on the designer's prior knowledge. Second, fixed features give attackers an opening: using adversarial machine learning ideas, attackers can change the characteristics of botnet traffic in a targeted manner and thereby evade model detection. Botnet forms and command-and-control mechanisms are gradually changing, and manual feature selection is becoming increasingly difficult. With the rapid development of deep learning, neural networks, reinforcement learning, knowledge graphs, and other methods are gradually being applied to botnet detection and represent new approaches. (1) Neural Network. The basic idea is to extract network traffic features based on temporal and spatial similarities. This method maps network traffic into a grayscale image or feature vector, feeds it to a neural network model, extracts distinguishable features and patterns along the spatial and temporal dimensions, and automatically learns network traffic features. CNN. The convolutional neural network (CNN) mainly learns spatial features of network traffic. Aiming at BotCloud, Guang et al. [58] first extracted basic features from the network stream and mapped them onto grayscale images. Subsequently, the CNN algorithm LeNet-5 was used for feature learning, extracting more abstract features that express the hidden patterns and structural relationships in the network stream data; the algorithm was finally applied to detect BotCloud.
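The traffic-to-grayscale-image mapping mentioned above can be sketched in a few lines. The 32 × 32 size mirrors the common choice of taking the first 1024 bytes of a flow; the sample payload here is invented and not from any dataset cited above.

```python
# Sketch: truncate or zero-pad the first 1024 bytes of a flow and
# reshape them into a 32x32 matrix of pixel intensities (0-255),
# which a CNN such as LeNet-5 can consume as a grayscale image.
def flow_to_image(flow_bytes: bytes, side: int = 32):
    n = side * side
    padded = flow_bytes[:n].ljust(n, b"\x00")  # truncate or zero-pad
    return [[padded[r * side + c] for c in range(side)] for r in range(side)]

img = flow_to_image(b"\x16\x03\x01" + b"\xaa" * 2000)  # e.g., a TLS record
print(len(img), len(img[0]), img[0][:3])  # 32 32 [22, 3, 1]
```

The appeal of this representation is that no hand-crafted features are needed: the convolution layers learn which byte patterns (handshakes, headers, padding) are discriminative.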
There is no significant difference in traffic in the early stages of an IoT botnet, and most botnet detection systems are not suitable for resource-constrained IoT devices. The literature [59] used side-channel information such as power consumption to distinguish whether IoT devices are affected by malicious behaviors and proposed a CNN-based deep learning model to perceive the subtle differences in power consumption data. The literature [60] proposed an extensible framework that uses LSTM to collect DNS traffic data at the ISP level and detect DGA-based malware in real time. The literature [61] used an ImageNet-style deep learning model to classify domain names generated by DGAs. RNN. The recurrent neural network (RNN) mainly learns the characteristics of network traffic as time series. The literature [62] applied RNNs to detect botnets by modeling network communication behavior as a sequence of time-varying states; the behavior model of each flow is established from four parameters: source and destination IP addresses, destination port, and protocol. The literature [63] proposed a solution to detect botnet activities in consumer IoT devices and networks: four attack vectors of Mirai were used as feature vectors, and a detection model was built on an RNN with bidirectional long short-term memory (BiLSTM-RNN). The literature [64] proposed an adaptive real-time deep learning anomaly detection system for 5G networks, which includes two modules: abnormal symptom detection (ASD) and network anomaly detection (NAD). A dynamic Bayesian network (DBN) was used to realize the ASD time measurement process, and an LSTM network model was used to realize NAD. The literature [65] proposed a malicious domain name detection method based on a knowledge graph.
For DNS traffic, the TransE embedding model was used to store and represent information in the knowledge graph, embedding not only entities and relationships but also attribute values. The advantage of using a BiLSTM neural network to extract features for detection was that it can learn the contextual relationships of vectors in a sequence and extract better features for classification. CNN + RNN. Reference [66] proposed a deep learning-based botnet detection system, BotCatcher, which automatically extracted network traffic characteristics along the two dimensions of time and space. Spatial feature learning applied the CNN LeNet-5 structure used in image recognition, converting each stream into a 2D gray image: the first 1024 B (32 × 32) of each data stream were intercepted. Typically, the beginning of a data stream mainly contains connection information (such as the three-way handshake of a TCP connection and the key exchange of a TLS connection) and little content exchange, so it better reflects the main characteristics of the entire data stream. To mine deeper characteristics of the data stream in the time series, BotCatcher used a BiLSTM network to learn temporal characteristics, scanning each data stream in both forward and reverse directions. In [67], for Fast-Flux botnets, a detection method based on the temporal and spatial characteristics of traffic was proposed, combining a convolutional neural network (DenseNet) and a recurrent neural network (BiLSTM) and analyzing the DNS response packets in the network traffic. GAN. The literature [68] proposed a botnet detection framework based on generative adversarial networks (Bot-GAN) that differs from other GAN variants.
The framework focused more on the discriminative model than on the generative model. DNN. The literature [69] proposed a DGA domain name detection method that does not require extracting specific features: word-hashing technology maps strings to a high-dimensional space, and a deep neural network (DNN) classifies the domain names. The literature [70] proposed a two-level deep learning framework for real-time botnet detection. At the first level, a Siamese network estimates the similarity measure of DNS queries; at the second level, a deep learning architecture for domain generation algorithms (DGAs) classifies normal and abnormal domain names. The literature [71] constructed a deep learning framework based on a dual-stream network (TS-ASRCaps), used multimodal information to reflect the characteristics of DGAs, and proposed an attention sliced recurrent neural network (ATTSRNN) to automatically mine the underlying semantics. A capsule network with dynamic routing (CapsNet) was used to model high-level visual information. FNN. The literature [72] proposed a real-time online Fast-Flux botnet filtering system aimed at improving the detection of unknown "zero-day" fast-flux botnets. Fast-Flux botnet domains were distinguished from legitimate domains in an online mode based on new rules, features, or classes. Using an adaptive evolutionary fuzzy neural network algorithm, the first-stage preprocessing included feature analysis and stemming, with the A-Import feature representing the Fast-Flux botnet; the second stage used an evolving fuzzy neural network (EFuNN) algorithm to build the FF Hunter (FFH) system. (2) Reinforcement Learning. Reinforcement learning is an algorithmic method for solving sequential decision problems, in which agents (or decision makers) interact with the environment to learn how to respond under different conditions.
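The word-hashing idea behind featureless DGA detection can be sketched minimally: character trigrams of a domain label are hashed into a fixed-size sparse vector that a DNN can classify. The hash function, vector size, and boundary markers below are illustrative assumptions, not the exact method of [69].

```python
def hash_trigram(t: str) -> int:
    # Simple deterministic polynomial hash (Python's built-in hash()
    # is salted per process, so it is unsuitable here).
    h = 0
    for ch in t:
        h = (h * 131 + ord(ch)) % (1 << 32)
    return h

def trigram_hash_vector(domain: str, dim: int = 64):
    """Hash character trigrams of the first label into a sparse vector."""
    label = "#" + domain.split(".")[0].lower() + "#"  # boundary markers
    vec = [0] * dim
    for i in range(len(label) - 2):
        vec[hash_trigram(label[i:i + 3]) % dim] += 1
    return vec

v = trigram_hash_vector("google.com")
print(sum(v))  # 6 trigrams in "#google#"
```

Random-looking DGA labels produce trigram distributions quite unlike natural-language labels, which is the signal the downstream classifier learns.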
Reinforcement learning is used for botnet detection in three ways. First, combined with neural networks for feature extraction, the agent learns a strategy to maximize the total number of bots detected over time. Second, it is used to deploy distributed detectors that make intelligent decisions; the agent estimates the system state and action rewards by monitoring network activities in different network segments. Third, deep reinforcement learning is used to evade machine learning-based detection. The literature [73] proposed a method that combines reinforcement learning to detect botnets as early as possible, in the propagation stage or before the bot initiates any malicious activities. It included four stages: network traffic capture and packet filtering, feature extraction, malicious activity detection, and bot behavior detection using reinforcement learning. Network traffic features were extracted at three levels: packet level, flow level, and connection level. Malicious activity detection included three stages: an offline (training) stage, an online detection stage, and a reinforcement learning stage. The literature [74] proposed PRCL, an online clustering-enhanced botnet detection method using reinforcement learning, which can detect botnets in real time with high accuracy. Research on adversarial machine learning has shown that botnet attackers can bypass detection models by constructing specific samples, and many algorithms are susceptible to small input perturbations. The literature [75] proposed a new adversarial botnet traffic generator framework based on deep reinforcement learning (DRL), which can effectively generate adversarial traffic flows through an RL algorithm and a Markov decision process (MDP).
The agent adds perturbations to a flow, changing the spatial and temporal properties of the network traffic, and automatically perturbs samples to try to fool the target detector. This research helps defenders find defects and improve the robustness of their systems. A summary of typical botnet detection techniques based on deep learning is shown in Table 2. Complex Network. Botnet communication exhibits both similarity and stability: the relatively frequent communication activities driven by the heartbeat mechanism form a correlation graph, and complex network methods can mine communities of abnormal behavior to detect botnets. The methods can typically be divided into two categories: graph methods and community mining algorithms. (1) Graph Method. There are two main ideas for graph-based methods. One models the behavior of executable files as graphs, such as control flow graphs, call graphs, and code graphs. The other models the behavior of nodes in network traffic, e.g., mapping the IP-domain name relationship onto a graph, and then classifies and detects on that basis. The literature [83] proposed new high-order subgraph features based on printable string information (PSI) extracted from malicious code to detect large-scale botnets; these features gave precise behavior descriptions with low space requirements. Aiming at large-scale spamming, BotGraph [84] revealed the correlation between botnet activities by constructing a large user graph, with two components: detector registration and behavior connection. The first component limits the total number of bots, and the second constructs a random undirected user-user graph to detect hidden bot users, which are then identified through abnormal behavior [85].
Based on the topological features of the nodes in the graph, a novel botnet detection method was proposed that extracts in-degree, out-degree, weight, degree weight, clustering coefficient, betweenness, and eigenvector centrality; based on these features, a self-organizing map (SOM) clustering method was used to cluster the nodes in the network. This method can isolate bots in small clusters. The literature [86], targeting high-speed networks, correlates NetFlow data, uses host-dependency models for advanced data mining, and extends the popular link-analysis algorithm PageRank [87] with cluster processing to effectively detect stealthy botnets that use P2P communication infrastructure without generating significant traffic. Addressing the anonymity of botnets, the system XIONG in [88] extracts domain-name-to-IP mappings from DNS query responses with the DNSmap tool to construct a DNS association graph. The authors analyzed the structural characteristics of the graph and the features of FQDN (fully qualified domain name) nodes, IP nodes, and connection edges, integrated blacklist statistical characteristics to realize multifeature analysis of the graph components, and selected the LightGBM algorithm to classify graph components. They also proposed a prototype system for Fast-Flux and Domain-Flux botnet detection in high-speed networks; its architecture, viewed vertically along the data flow, is divided into a data access layer, a data storage layer, a processing unit layer, and a user interface layer. (2) Community Mining.
The literature [89] considered three types of community behavior, namely traffic statistical features, numerical community features, and structural community features, and proposed PeerHunter, an early method based on community behavior analysis that applies complex-network community detection; using the Louvain method, it can detect botnets communicating through P2P structures. In [90], advanced features were extracted from network traffic to detect P2P botnets in real time. By jointly considering flow-level traffic statistics and network connection patterns, dynamic group behavior analysis (DGBA) was applied to distinguish P2P bot-infected hosts from legitimate P2P hosts, extracting the collective and dynamic join patterns of each group. Wang and Paschalidis [91] proposed a new two-stage method for detecting the existence of botnets and identifying compromised nodes. The first stage detected anomalies by using large deviations of empirical distributions, with two methods for creating the distributions: a flow-based method estimating the histogram of quantized flows and a graph-based method estimating the degree distribution of the node interaction graph, including the Erdős–Rényi (ER) graph and the scale-free graph. The second stage applied ideas from social network community detection to detect bots: a graph capturing the interactions between nodes over time was built, and community detection was conducted by maximizing the modularity metric on this graph. A summary of typical botnet detection techniques based on complex networks is shown in Table 3. Swarm Intelligence. Swarm intelligence optimization algorithms mainly simulate the group behavior of insects, herds, birds, or fish that search for food cooperatively. Each member of the group constantly changes its search direction based on learned experience.
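The modularity metric that these community-mining methods maximize can be computed directly. This is a minimal sketch for an undirected graph without self-loops, using Newman's definition Q = sum_c (e_c/m - (d_c/(2m))^2), where e_c is the number of intra-community edges, d_c the total degree of community c, and m the edge count; the two-community example is a toy.

```python
from collections import Counter

def modularity(edges, community):
    """Newman modularity of a partition of an undirected graph."""
    m = len(edges)
    intra = Counter()   # edges fully inside one community
    degree = Counter()  # summed node degrees per community
    for u, v in edges:
        degree[community[u]] += 1
        degree[community[v]] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    return sum(intra[c] / m - (degree[c] / (2 * m)) ** 2 for c in degree)

edges = [("a", "b"), ("b", "c"), ("a", "c"), ("d", "e")]
part = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1}
print(round(modularity(edges, part), 3))  # 0.375
```

Algorithms such as the Louvain method greedily move nodes between communities whenever doing so increases this Q value.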
The main idea of this type of botnet detection method is to use heuristic, biologically inspired search to find abnormal points and perform feature extraction, then combine the results with classifiers for detection. (1) PSO. The literature [95] proposed a botnet detection method (BD-PSO-V) that hybridizes a particle swarm algorithm with a voting system. The PSO algorithm was used for feature selection on network stream data: features were treated as particles, and the swarm searched for the best particles. The voting system, comprising a deep neural network, a support vector machine (SVM), and the C4.5 decision tree, identified botnets and classified samples by majority vote. Six well-known adversarial attacks, including the Fast Gradient Sign Method (FGSM), were evaluated on the ISOT and Bot-IoT datasets. The literature [96] proposed a detection model based on multiobjective particle swarm optimization (MOPSO) to identify malicious behaviors in bulk network traffic. In [97], a smart adaptive particle swarm optimization support vector machine (SAPSO-SVM) algorithm was proposed for Android botnet detection. The algorithm used the changes in the personal best and global best at each stage of execution to specify a new evolution factor value and thereby eliminate the interference of the inertia weight interval. (2) GWO. The literature [98] proposed a new unsupervised evolutionary IoT botnet detection method: the Grey Wolf Optimization (GWO) swarm intelligence algorithm optimizes the hyperparameters of a one-class support vector machine (OCSVM) to detect botnet attacks launched from compromised IoT devices. A summary of typical botnet detection techniques based on swarm intelligence is shown in Table 4. Statistical Analysis. A summary of typical botnet detection techniques based on statistical analysis is shown in Table 5.
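Binary PSO-style feature selection, loosely in the spirit of the PSO methods above, can be sketched as follows. The bitwise update rule is a simplified heuristic (not the canonical velocity equations), and the fitness function is a toy stand-in for what would be classifier accuracy on the selected feature columns.

```python
import random

def pso_feature_select(n_features, fitness, n_particles=10, iters=30, seed=0):
    """Each particle is a binary mask over features; the swarm searches
    for the mask that maximizes the supplied fitness function."""
    rng = random.Random(seed)
    swarm = [[rng.randint(0, 1) for _ in range(n_features)]
             for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]            # personal bests
    gbest = max(pbest, key=fitness)[:]       # global best
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for j in range(n_features):
                # Pull each bit toward the personal/global best, with a
                # small mutation probability for exploration.
                r = rng.random()
                if r < 0.4:
                    p[j] = pbest[i][j]
                elif r < 0.8:
                    p[j] = gbest[j]
                elif r < 0.9:
                    p[j] = 1 - p[j]
            if fitness(p) > fitness(pbest[i]):
                pbest[i] = p[:]
            if fitness(p) > fitness(gbest):
                gbest = p[:]
    return gbest

# Toy fitness: features 0 and 2 are "informative"; extra features cost.
toy = lambda mask: 2 * mask[0] + 2 * mask[2] - sum(mask)
best = pso_feature_select(6, toy)
print("selected mask:", best)
```

In a real wrapper setup, `fitness` would train and score a classifier (e.g., SVM or C4.5) on the masked feature columns, which is far more expensive but follows the same loop.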
The statistical method mainly models the statistical attributes of data to find outliers and estimate whether a test sample is a bot. The literature [102] proposed a spatial snapshot fast-flux detection system (SSFD) that relies on spatial distribution estimation and spatial service relationship evaluation. Spatial distinguishability and information entropy were combined to measure how evenly nodes are distributed across time zones: benign domains tend to be concentrated in the same time zone, whereas fast-flux nodes are widely distributed over multiple time zones. The literature [103], targeting IoT botnet DGAs, proposed a lightweight system that detects IoT-based botnets by rapidly recognizing algorithmically generated domain (AGD) query flows. Threshold random walk (TRW) was used to quickly classify NXDOMAIN (nonexistent domain) query flows, creating opportunities to interrupt C&C connections. Botnets can hide periodic behavior by changing the communication interval; when the interval is too large, time-series analysis cannot detect the periodic communication behavior. Based on periodic communication detection with sequential hypothesis testing, Wang et al. [104] proposed a botnet periodic communication behavior detection algorithm and introduced the Grover fast quantum search algorithm to better realize parallel processing and improve speed. This method can complete botnet detection with less query time and improved detection speed. Distributed Approach. To increase detection accuracy and improve the flexibility of the detection system, some studies have designed distributed detectors that collect massive, multidimensional data for detection. (1) MTD.
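The entropy measure behind SSFD-style spatial detection can be sketched directly: resolved IPs are bucketed by time zone, and a high Shannon entropy of the bucket distribution suggests fast-flux-style geographic spread. The time-zone labels below are illustrative.

```python
import math
from collections import Counter

def timezone_entropy(timezones):
    """Shannon entropy (bits) of the time-zone distribution of a
    domain's resolved IPs; 0 means all IPs share one time zone."""
    counts = Counter(timezones)
    total = len(timezones)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

benign = ["UTC+8"] * 10                                       # one hoster
fastflux = ["UTC-5", "UTC+1", "UTC+8", "UTC+3", "UTC-8"] * 2  # spread out
print(timezone_entropy(benign), timezone_entropy(fastflux))
```

A detector would threshold this entropy (possibly combined with other spatial features) to separate legitimately hosted domains from fast-flux networks built on globally scattered compromised hosts.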
The literature [105] proposed a new botnet defense mechanism combining honeypot- and network-based strategies, moving target defense (MTD), and reinforcement learning. The MTD method periodically changes the positions of the detectors, constantly reshaping the system's attack surface and increasing the attacker's complexity and cost. Reinforcement learning is used to optimize and dynamically deploy detectors iteratively: the agent learns a strategy that maximizes the detection and removal of bots over time. (2) SDN. SDN technology separates the control plane from the data plane, and the visibility and programmability it provides are typically used to implement network security functions. The basic idea of SDN-based botnet detection is to build on Open vSwitch, a virtual switch implementing the OpenFlow protocol, combined with classifiers to detect bots and identify malicious traffic. OFX [106], proposed by Sonchack et al., can deploy security functions in existing OpenFlow infrastructure, allowing control applications to dynamically load security modules directly into unmodified SDN-compatible switches. Zha et al. [107] proposed BotSifter, an SDN-based scalable, accurate, and online data center bot detection framework that distributes detection tasks across the network edge in Open vSwitch; the combination of centralized learning (DNN) and distributed detection enhances detection robustness. The literature [108] proposed BotGuard, a lightweight real-time botnet detection scheme for SDN that uses a graph matching algorithm and proposes a convex lens imaging graph model to describe the topological features of a botnet. It allows the SDN controller to independently locate the attack position while reducing network load; the Mininet platform was used for simulation evaluation. (3) Blockchain.
The basic idea of using blockchain technology for botnet detection is to use smart contracts, digital signatures, incentive mechanisms, and other techniques, with agent-based or collaborative detection, to achieve trusted information exchange or voting among different detectors. In [109], AutoBotCatcher used Byzantine fault tolerance (BFT) to perform dynamic and collaborative botnet detection on large networks and used the Louvain community detection algorithm to detect communities. The literature [110] proposed a blockchain trust model (BTM) for malicious node detection in wireless sensor networks, which used blockchain smart contracts and WSN quadrilateral measurement and positioning methods to detect malicious nodes in 3D space, with good traceability. Based on a consensus-mechanism blockchain, Spathoulas et al. [111] used lightweight agents installed at multiple IoT locations to collaboratively detect DDoS attacks carried out by botnets of IoT devices. The literature [112] proposed SmartRetro, an incentive platform driven by blockchain smart contracts and PoW consensus, which can incentivize and attract more distributed detectors to participate in traceable vulnerability detection and contribute their detection results. A summary of typical botnet detection techniques based on distributed approaches is shown in Table 6. Combination Method. The evolution of botnets exhibits diversified platforms, concealed communications, and intelligent control, and no single abnormal behavior detection method can meet practical requirements; multidimensional, multiagent, and multitechnology combined detection methods have therefore emerged. Multidimensional refers to the combination of multiple detection objects, mainly network traffic combined with signature detection.
The literature [113] proposed HANABot, a hybrid botnet detection method based on host-side and network analysis; it is a general technique that can detect new botnets at an early stage. The system contains three components: a network analysis component, a host analysis component, and a test report. The literature [114] proposed an effective two-stage traffic classification method, based on a non-P2P traffic filtering mechanism and session-feature-based machine learning, to detect P2P botnet traffic. In the first stage, non-P2P packets are filtered and network traffic is reduced using well-known ports, DNS queries, and flow counting. In the second stage, session features are extracted from data stream characteristics and stream similarity, and the P2P botnet is detected by a machine learning classifier. The literature [115] proposed a signature generation method based on the similarity of HTTP botnet header information, which can automatically generate high-quality network signatures. This method combines the advantages of network traffic and packet inspection: taking TCP flows as the object, it extracts the size statistics of the first HTTP request packet and the first response packet (referred to as "one-question, one-answer" packets) and combines them with the content of HTTP header fields. Statistical analysis can detect bots in the "silent" state. The literature [116] proposed a multistage detection method for domain fluxing, fast-flux service networks (FFSN), and domain generation algorithms (DGA). The first stage uses NXDOMAIN and server-failure errors to detect DNS tunnel C&C server calls; the second stage uses signature matching to detect the DNS tunnel SSH handshake between bots and the C&C server. Multiagent refers to the combination of multiple agents [117].
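The first (filtering) stage of the two-stage P2P method can be sketched as follows; the flow-record field names and the port list are illustrative assumptions, not taken from [114].

```python
# Stage-one non-P2P filter: discard flows on well-known service ports
# or whose destination IP was obtained via DNS resolution, since P2P
# peers typically come from peer lists rather than DNS lookups.
WELL_KNOWN_PORTS = {25, 53, 80, 110, 143, 443}

def stage_one_filter(flows, resolved_ips):
    """Keep only flows that look like P2P candidates."""
    candidates = []
    for flow in flows:
        if flow["dst_port"] in WELL_KNOWN_PORTS:
            continue  # ordinary client-server service traffic
        if flow["dst_ip"] in resolved_ips:
            continue  # destination came from a DNS lookup, not a peer list
        candidates.append(flow)
    return candidates

flows = [
    {"dst_ip": "10.0.0.1", "dst_port": 443},
    {"dst_ip": "10.0.0.2", "dst_port": 51413},
    {"dst_ip": "10.0.0.3", "dst_port": 39000},
]
print(stage_one_filter(flows, resolved_ips={"10.0.0.2"}))
```

Only the surviving candidate flows reach the heavier second-stage session-feature classifier, which is what makes the overall pipeline cheap enough for bulk traffic.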
A multiagent bot detection system (MABDS) [118] is a hybrid technology that associates an event log analyzer with a host-based intrusion detection system (HIDS). It uses multiagent technology to combine a management agent, user agents, honeypot agents, system analysis, and a knowledge database. Multitechnology refers to the combination of multiple technologies or algorithms. The literature [119] used graph characteristics together with a neural network for detection. The authors generated network communication graphs at regular intervals, modeled the graph features over time, extracted graph-based statistical and centrality features, assembled a time series of the features for each host (identified by IP address), and trained a time-series classification model. Ten graph features were extracted for each node using the graph-tool library [120]: out-degree, in-degree, adjacency, neighborhood, PageRank centrality, betweenness centrality, eigenvector centrality, authority, hub centrality, and local clustering coefficient. Using time-series data from the network makes the approach suitable for real-time detection. A summary of typical botnet detection techniques based on combination methods is shown in Table 7. We mainly focus on comparing botnet detection techniques based on abnormal behavior; the basic ideas, advantages, and disadvantages of the various methods are summarized in Table 8. Evaluation. Botnet detection systems can be classified along many dimensions. Drawing on the analytic hierarchy process (AHP), this section designs a general botnet detection system performance evaluation framework, CBDES (common bot detection evaluation system), from four dimensions: D_Service(t), D_Intelligent(t), D_Collaboration(t), and D_Assistant(t). By constructing judgment matrices and checking their consistency, with the subsystem performances independent of each other, the weight of each index is calculated.
Combining expert scores and objective weights, a new global capability metric is defined, and eight typical detection methods are quantified and evaluated. Finally, a visual representation is given using spider (radar) graphs. Index System. The performance index system of a botnet detection system is divided into four dimensions, D_Service(t), D_Intelligent(t), D_Collaboration(t), and D_Assistant(t), as shown in Figure 3. Evaluation Index System. (1) D_Service(t) refers to the basic performance of the botnet detection system. Its indicators are divided into three subindices: accuracy, scenes, and stage. Accuracy F_Service(ac) refers to the accuracy of botnet detection. Scenes F_Service(sc) refers to the scenarios for which the method is suitable. Stage F_Service(st) refers to the stage of the botnet life cycle at which detection occurs. In the D_Service(t) dimension, weights express the importance of each indicator, yielding the quantitative formula of this dimension, with the result normalized to [0,1]:

D_Service(t) = w_ac * F_Service(ac)(t) + w_sc * F_Service(sc)(t) + w_st * F_Service(st)(t). (1)

(2) D_Intelligent(t) refers to the degree of automation of the detection system and is divided into three subindices: automation, adaptability, and real time. Automation F_Intelligent(au) refers to the degree of automation of feature extraction in the detection process. Adaptability F_Intelligent(ad) refers to whether the detection model can detect unknown types of botnets. Real time F_Intelligent(re) refers to whether the detection system can operate in real time.
In the D_Intelligent(t) dimension, weights express the importance of each indicator, the quantitative formula of this dimension is obtained, and the result is normalized to [0,1]:

D_Intelligent(t) = w_au * F_Intelligent(au)(t) + w_ad * F_Intelligent(ad)(t) + w_re * F_Intelligent(re)(t). (2)

(3) D_Collaboration(t) refers to the system's synergy and scalability and is divided into the subindices architecture, content, and integration. Architecture F_Collaboration(ar) refers to the organizational structure of the detection system, which is either centralized or distributed. Content F_Collaboration(co) refers to the category of detection content: single type, such as host logs alone or network traffic alone, or diverse, combining host and network data, or code and traffic. Integration F_Collaboration(in) refers to the use of multiple types of detection methods. In the D_Collaboration(t) dimension, weights express the importance of each indicator, the quantitative formula of this dimension is obtained, and the result is normalized to [0,1]:

D_Collaboration(t) = w_ar * F_Collaboration(ar)(t) + w_co * F_Collaboration(co)(t) + w_in * F_Collaboration(in)(t). (3)

(4) D_Assistant(t) covers the remaining indicators, mainly latency, cost, and visualization. Latent F_Assistant(la) refers to whether deeply latent botnets can be detected. Cost F_Assistant(cos) refers to the resource consumption of the detector, such as GPU and bandwidth consumption. Visualization F_Assistant(vi) refers to the visualization of data information or of botnet detection through visualization methods. In the D_Assistant(t) dimension, weights express the importance of each indicator, the quantitative formula of this dimension is obtained, and the result is normalized to [0,1]:

D_Assistant(t) = w_la * F_Assistant(la)(t) + w_cos * F_Assistant(cos)(t) + w_vi * F_Assistant(vi)(t). (4)

According to the calculated values of the four dimensions, the result can be abstracted as a polygonal area. This paper uses Gauss's area (shoelace) formula to define a global measure.
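Since the four normalized dimension scores are placed on four orthogonal axes, Gauss's area formula for the resulting quadrilateral reduces to ½(d1·d2 + d2·d3 + d3·d4 + d4·d1). A minimal sketch of this global measure follows; the function names are illustrative, not from the paper.

```python
def shoelace_area(points):
    """Gauss's area (shoelace) formula for a simple polygon."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def global_metric(d_service, d_intelligent, d_collaboration, d_assistant):
    """Place the four dimension scores (each in [0,1]) on four axes 90 degrees
    apart and return the area of the resulting quadrilateral."""
    pts = [(d_service, 0.0), (0.0, d_intelligent),
           (-d_collaboration, 0.0), (0.0, -d_assistant)]
    return shoelace_area(pts)

# Example: a detector scoring 0.8 / 0.6 / 0.7 / 0.5 on the four dimensions.
# The maximum possible area is 2 (all four scores equal to 1).
print(round(global_metric(0.8, 0.6, 0.7, 0.5), 3))  # → 0.825
```

The area rewards balanced performance: a detector that scores highly on one dimension but poorly on its neighbors covers less area than one with moderate, even scores.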
A new global capability metric € is defined from expert ratings and objective weights as follows.

Quantitative Assessment

(1) AHP (analytic hierarchy process) is used to determine the weight of each indicator.

Step 1: construct a judgment matrix. According to the AHP hierarchical structure, the judgment matrices of the criterion layer with respect to the target layer are constructed from top to bottom. In general, the factors of a lower layer are compared pairwise to evaluate a factor of the upper layer, and the judgment matrix is composed of the results of these comparisons; as shown in Table 9, the scale runs from 1 to 9. The judgment matrices of the criterion layer and the index layer are constructed first. Taking the criterion layer as an example, the calculation of the weight values proceeds as follows: using the L1 norm, the elements of the matrix W are normalized by column.

Step 2: calculate the weight of each performance index. This paper uses the components of the normalized eigenvector corresponding to the maximum eigenvalue λ_max as the weight of each factor: λ_max = 4.245, and the eigenvector is [0.482, 0.272, 0.157, 0.088]^T.

Step 3: consistency check of the judgment matrix. Because pairwise comparisons involve human judgment, some inconsistency is unavoidable, and the acceptable range differs with the order of the judgment matrix. The consistency index CI = (λ_max − n)/(n − 1) is corrected by the random index RI, whose values are shown in Table 10, giving the consistency ratio CR = CI/RI; the matrix passed the check. Therefore, the weight vector of the criterion layer is [0.482, 0.272, 0.157, 0.088]^T.

Step 4: use the same method to calculate the weights of the indicator layer, then multiply by the weights of the layer above to obtain all the weight values, as shown in Table 11.

(2) The indicators of the botnet detection system are described in Table 12.
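The AHP steps above (principal eigenvector for the weights, then CI/RI consistency check) can be sketched in a few lines. The judgment matrix J below is hypothetical, chosen only to illustrate the procedure; the paper's actual criterion-layer matrix is not reproduced in the text. The RI values are Saaty's standard random indices.

```python
# Saaty's random index RI for judgment matrices of order 1..9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A, iters=100):
    """Weights and consistency ratio for a pairwise judgment matrix A,
    using power iteration to approximate the principal eigenvector."""
    n = len(A)
    w = [1.0 / n] * n
    lam_max = float(n)
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
        # Standard AHP estimate of the principal eigenvalue.
        lam_max = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i]
                      for i in range(n)) / n
    ci = (lam_max - n) / (n - 1)          # consistency index
    cr = ci / RI[n] if RI[n] else 0.0     # consistency ratio; CR < 0.1 passes
    return w, cr

# Hypothetical criterion-layer matrix comparing the four dimensions
# (Service, Intelligent, Collaboration, Assistant); values illustrative only.
J = [[1,   2,   3,   5],
     [1/2, 1,   2,   3],
     [1/3, 1/2, 1,   2],
     [1/5, 1/3, 1/2, 1]]
w, cr = ahp_weights(J)
print([round(x, 3) for x in w], round(cr, 3))
```

For this illustrative matrix the weights come out close to the paper's reported vector [0.482, 0.272, 0.157, 0.088]^T, and CR falls well below the 0.1 acceptance threshold.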
Mathematical Problems in Engineering

(3) Evaluation of typical botnet detection systems. According to the botnet detection methods introduced in Section 3, eight typical detection methods are evaluated.

Step 1: expert scores are shown in Table 13, where EV represents the evaluation vector.

Step 2: calculate the four dimension indicators according to the weight values obtained by the analytic hierarchy process, and compute the global metric. Table 14 is obtained and sorted. According to the quantitative evaluation proposed in this article, the top four methods with good detection performance are PRCL, BotSifter, PeerHunter, and Bot Catcher.

Step 3: use spider charts for a visual representation of the top four, as shown in Figure 4.

Challenges

In the course of attacking security organizations and evading government supervision, botnets are constantly evolving. To improve their concealment and survivability, botnets on new terminals, such as IoT and smart mobile devices, have also become a main source of various Internet security threats. Nevertheless, the industry now has a deeper understanding of the working mechanisms and behavioral characteristics of botnets, and a variety of botnet detection methods have been proposed. This article summarizes the challenges faced by detection methods:

(1) Multisource information collection and fusion: because of the concealment and cross-platform nature of botnets, their traces are often hidden in information scattered across different dimensions, such as personal hosts, regional networks, and backbone networks, and stored in different formats. Multisource information contains much redundancy, so data collection must be collaborative, distributed, and intelligent. Any combined method should be highly accurate and of low complexity, provide unified data representation and storage, and then perform data processing.
It should also dynamically adjust the collection strategy based on the actual scene.

(2) Deeply latent command and control channels: botnet detection typically targets the communication and attack stages of the life cycle. In the communication stage, the focus is on traffic data; the centralized structure of botnets shows strong similarity and correlation characteristics, and the detection effect is evident. However, for third-party channels, such as P2P, cloud platforms, and blockchain technology, effective detection methods are lacking. Detecting deeply latent botnets in the early stages, such as the spread and infection stages, is challenging.

(3) High-speed network real-time detection: the backbone network is characterized by high bandwidth, large traffic volume, and limited storage. These factors have slowed the development of real-time detection technology; lightweight real-time detection is an important direction for future research.

(4) Detection system structure coordination: existing botnet detection system architectures have several problems. First, centralized structures are unsuitable for large-scale network environments. Second, feature extraction methods are inflexible, the structures are monolithic, and multiple methods cannot be integrated. Third, architectural coordination is lacking: although some systems implement distributed detection, they lack effective information sharing and cooperation, their coordination mechanisms are simplistic, and they cannot respond quickly to botnet activities. The detection system framework must meet the requirements of distributed, scalable, and extensible models and should realize coordination between the detection system and other security systems.

Prospects

As an evolution of conventional malicious code, botnets provide controllers with a flexible and efficient command and control mechanism, an ideal platform for DDoS, spam, information theft, click fraud, and malware distribution.
As the network era continues to evolve, botnets have changed in terms of infection targets, command and control technologies, and malicious behaviors, posing a greater threat to future Internet security. Future research directions and technical difficulties in the field of botnet detection include the following:

(1) Multidimensional representation of botnet data: based on knowledge graph technology, entity-relationship and entity-attribute embedding vectors are built simultaneously for DNS traffic, representing the collected data in multiple dimensions.

(2) Methods combining code and traffic analysis: this is conducive to all-around botnet detection, improves detection accuracy, and suits real environments.

(3) Lightweight deep learning models: existing traffic feature extraction methods that combine spatiotemporal characteristics commonly use CNN + RNN; the models are complex and the processing speed is low [104]. Grover's fast quantum search algorithm and cutting-edge lightweight deep neural networks could help improve feature extraction speed.

(4) Efficient community mining algorithms and visual detection technology: the visualization of botnet behavior and efficient data mining algorithms from the field of social networks help botnet group detection.

(5) Lightweight real-time detection: DNS traffic makes up a relatively small share of overall network traffic.
For zombie DNS detection in particular, this makes lightweight real-time approaches a promising direction.

Table 9 (AHP importance scale): 1, both performance indicators are equally important; 3, the former indicator is slightly more important than the latter; 5, the former indicator is more important than the latter; 7, the former indicator is much more important than the latter; 9, the former indicator is extremely more important than the latter; 2, 4, 6, and 8, scores for the intermediate states between two adjacent judgments. If the two performance indicators are compared in reverse order, the score is the reciprocal.

Conclusion

This survey introduces the new construction mechanisms of botnets, summarizes the latest technologies in the field of botnet detection, and makes a comparative analysis of the key anomaly-based techniques. One contribution of this paper is an evaluation system for the comprehensive assessment of detection techniques. New botnets emerge one after another, and new technologies and their integrated application will be the focus of future research in this field. This survey is of significance for security personnel analyzing and defending against botnets, and it may help the research community produce better tools and techniques for mitigating the botnet threat.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Men Reflecting on Prevention of Conjugal Violence

Approach: The theme is part of a field of study on gender, health and violence; it narrates the perspectives of military policemen on domestic violence and the forms of prevention suggested to both men and women. From the psychological point of view, the focus is the gestalt phenomenological clinic, grounded in the field of human rights. We understand that conjugal violence provokes psychic and intersubjective suffering through the distortion it causes in how men make contact and social adjustments, in other words, through the introjection of conceived male models. The research subjects were college graduates between 19 and 30 years of age. We used a questionnaire with three integrated parts: 1) identification of the informant and his wife/partner; 2) experience with situations of violence; 3) prevention. This paper covers only the third part. The results show that the definitions of violence indicate disrespect, imposition and lack of emotional control. As for prevention, the suggestions made to the wives highlighted self-control and reporting to the police, awareness of their emotions, and, if they are unable to live in a healthy conjugal relationship, terminating the marriage/involvement. For the men, the suggestions focused mainly on equality of rights, on not seeing women as objects, and on taking good care of their wives/partners. In conclusion, we indicate brief psychotherapy and group work as interventions for the psychological suffering originating from the experience of violence, to welcome and listen to the men, the women and the couple, along with depathologization and the examination of the relations between gender, equality and power.
Introduction

The subject is part of a field of study on gender, health and violence of the graduate program in psychology of the Universidade Federal do Pará (Federal University of Pará), and results from the exploratory collection of material for a project financed by FAPESPA through Notice 08/2014. The text narrates the point of view of men who are members of the police force on conjugal violence and the forms of prevention they suggest for both men and women. To accommodate the narrative of the participants, we have set out our method of reasoning. It is based on the guiding principles of research that aims to understand the experience phenomenologically, the relation between the subjective and the intersubjective in the context where it takes place, and the ways language participates in communication of and between subjects; to position the conceptions of gender and violence (psychological and physical) within a dynamic-structural focus; to delineate care in an ontological dimension; and to locate its flux in health services so as to configure a network (Holanda, 2014; Moura, 1989; Ayres, 2004; Amatuzzi, 2001; Pimentel, 2003; Keen, 1979; Japiassu, 1982; Ricoeur, 1988; Connell, 1995). Such foundations permit us to update the involvement of clinical psychology in the study of violence between couples, reshaping a classic practice associated with a private model. The clinical practice of psychologists and psychoanalysts has shown the importance of violence in the world today, and confirms that this phenomenon affects the human being in different ways: in subjectivity and behavior, in quality of life, family relations, education, political life, etc. From the political and philosophical point of view, it undermines what is defined as human and points to the "inhuman" in the other individual and in each one of us (Safra et al., 2009: p. 8).
As for health care practices, an existential-ontological approach concerns three dimensions that operate in an integrated manner: self-care, care for the other, and care for the context. Together they permit psychotherapy interventions with the couple and with the men to articulate among themselves, aiming at "the assessment of the relation between purposes and means, and its practical sense for the patient, according to a dialogue as symmetric as possible between the professional and the patient" (Ayres, 2004: p. 86). On the subject of clinical psychology, our approach, from the gestalt point of view and substantiated in the area of human rights, understands that conjugal violence is an action that provokes psychological and intersubjective suffering through the distortion it causes in how men make contact and social adjustments, that is, through the introjection of conceived male models: "the media men... they become models for other men, with all the consequent frustrations resulting from an ideal type absolutely distant from real life" (Blay, 2014: p. 19). In turn, Gold & Zahn (2014: p. 47) assert that devised norms, when chronically introjected, require a psychotherapeutic procedure of integration and awareness, because the subjects, "when trying to get rid of thoughts or feelings, sometimes worsen the symptoms, by adding another layer of "should be" and self-criticism, and put aside the energy denied and little understood". As for the concept of psychological suffering, it relates to studies in labor psychology; however, authors have made distinct links with studies on gender, trying to configure an extension between mental health and gender. For example, Albuquerque (2012: p.
15) points out that psychological suffering "is an uneasiness that expresses itself as anguish, concern, anxiety, tension and/or discouragement not necessarily characterized as strict mental illness". Besides, the author mentions a group of studies indicating a greater prevalence of anxiety and depression among women, possibly due to gender inequalities and "by the fact that men hardly ever discuss their health problems, unless considered serious" (p. 19). Santos (2009) and Silva & Cols (2013: p. 1) claim that "The psychological suffering experience is constructed socially and brings with it the conformation of values and norms of a certain society and historical era. It may be understood as something highly individual, as the experience of a set of uneasiness in the subjective context." Considering the complexity of the issues that surround us, we have examined that the reduction of psychological and intersubjective suffering in men (and couples) requires effective public policies such as the Brazilian Comprehensive Healthcare Policy for Men (PNAISH), the strengthening of basic healthcare services, and dialogue with both male and female movements that seek to end violence against women (Brasil, 2008; Figueiredo, McBritto, & Peixoto, 2012). In regard to violence, the general concept we have used refers to asymmetrical daily practices in which some members of a social system are denied or invalidated as subjects, installing domination and inequality as vertical action and imposition of beliefs and values (Arendt, 1970/2014). In the gender dimension, we refer, among others, to a consideration by Beauvoir (1990: p.
16) about reciprocity and inequality, where the philosopher reassured: "There are as many men as women on Earth, and both groups had been initially interdependent; they ignored each other or accepted the other party's autonomy; it was a historical event that subordinated the weak to the strong: the Jewish diaspora, the introduction of slavery in America, the colonial conquests are precise facts." We add to the clinical intervention on violence between couples the understanding that the attitudes of each spouse during conjugal violence in their conflicting everyday life are dynamic; that is, they demand that psychologists refute essentialist theses in understanding what violence is, including its association with "passivity" and the "victim" role attributed to women, and the "active" attitude, rather than that of an "aggressor", attributed to men. This way, they may reveal the games of domination established between the spouses and overcome the use of already saturated categories, such as women's "fragility", men's "strength", victim and aggressor, "attached" to the subjects in a timeless way. Returning to the specificities of the research, within the context of health, the Núcleo de Pesquisas Fenomenológicas-NUFEN (Nucleus of Phenomenological Research) and the Instituto de Estudos Superiores (Institute of Higher Studies) for military educational training developed an intervention directed by gestalt clinical psychology, in order to offer the men and the couples, between 19 and 50 years of age, a space of brief psychotherapy. The interlocutors were men attending the first year at the Institute of
Higher Studies for Military Educational Training, aiming at the execution of planning in the various public safety units of the state. We observed that the profession is one of the dimensions that form the processes of subjectivation, even though our analysis of the responses was done considering the respondents' apprehension of men's social representation; thus, we integrated such aspects into the meanings the participants attributed to their horizon of experience, without reducing that experience to the world of the military institution. We emphasize that in the State of Pará, justice and health are areas where projects of attention to men have been implemented. At the State Health Secretariat, a working plan has been implemented by the Men's Health Coordination, regulated by Ordinance 2708 of November 17, 2011. The strategies to consolidate the policy in the state include prevention and control of obesity and domestic violence, and engagement of the local population in activities such as Men's National Day, seminars, and the elaboration of brochures. According to the plan, 2,537,790 men live in urban areas and another 1,284,047 in rural areas, with 1,964,780 men between 20 and 59 years of age in both urban and rural areas (SESPA, 2011). The epidemiological data for this age bracket, enumerated in 2011, indicate the leading occurrences: lesions, poisoning and other external causes, totaling 20,325 cases; infectious and verminous diseases, with 15,542 occurrences; and digestive tract disorders, with 12,417 cases. As for psychological suffering, 11,455 men were diagnosed with mental and behavioral disturbances (SESPA, 2011).
In the field of justice, the institution responsible for the education-justice interface is the Núcleo Especializado de Atenção ao Homem-NEAH (Specialized Nucleus of Attention to Men), born of a partnership between the Public Defender's Office of the State of Pará and the Ministry of Justice. The philosophy that supports NEAH's interventions comprises the guidelines set by the Program for the Promotion of Alternative Sentences and Measures; participation in reflection groups, lectures and workshops on domestic violence forms a set of alternative measures (NEAH, 2012). Reasserting what we have said before, interventions aimed at changing the posture of the man and the couple require the integration of both individuals in activities related to health, education, human rights and social justice. We should remember that the Convention of Belém do Pará (1994) is one of the bases for understanding violence against women as a public health issue that affects everyone regardless of race, ethnicity, socioeconomic level and age. National and international research on the issue indicates its high prevalence and its qualification as a violation of human rights and a hindrance to the achievement of gender equality (Brasil, 2006). In turn, Lyra, Medrado, Barreto & Azevedo (2012) state that the report of the Cairo Conference on Population and Development, held in 1994, recommends working with men. However, "men surge in the utilitarian perspective, as a benefit to the life condition of women and children, immersed in an argument of responsibility and obligation. We are far from that, but we can still think of them as subjects of rights or objects of policies" (p. 9). Pinheiro & Couto (2012: p. 50) remark that "the implementation of PNAISH legitimizes and expands the academic debate about the relation men and health-care and opens space for reflection and proposition on how to provide them some assistance".
On the conjugal violence situation, Beiras, Ried & Toneli (2011) analyzed policies of intervention programs carried out in Latin America and Portugal. From this approach we started an analysis of masculine social representations.

Masculinities

In the international discussion, Fonseca (1998) bases his position on the North American context of the 20th century and presents a classifying perspective: 1) essentialist theories, with emphasis on the conceptions of sexuality elaborated by Kinsey and by Masters and Johnson, and the psychoanalytical investigations of gender done by Chodorov, which explore object relations and the process of separation between mothers and their boys; 2) theories of social roles, in the functionalist arena devised by the sociologist Parsons, and the Foucauldian model represented by Joseph Pleck; 3) constructionism, developing an epistemology that stressed the ideological character of male domination; 4) theories of gender, emphasizing cultural variations, contents, activities, discourses and experiences as qualifying dimensions of manhood. Four theoretical lines (essentialist, positivist, normative and semiotic), in which different models, supposedly universal and stable characteristics and identity symbols of masculinity, as well as the aspect common to all of them, demarcate an arbitrary exercise of power sustained by rationality and by the hegemony of one type of man, the white Anglo-Saxon: these are the references analyzed by Connell. Subsequently, this author considered masculinity as a position in relation to genders (Connell, 1995).
In the Brazilian scenario, the debate presents subjectivity, the definition of masculinity, sexuality, changes in gender relations, the understanding of masculine practices, paternity, references for youth and, more recently, health as some of the researchers' concerns. Grossi (2004), in a literature review on masculinity, challenges the qualities and images assigned to men when defining masculinity, and assures that in Brazil the mark of gender identity is formed by sexual activity, while in Europe and the United States it is formed by heterosexuality. This author emphasizes that what matters most for the Brazilian man during sexual intercourse is penetration, not his partner. In Pimentel (2011) I stated that it seems to me that the most important thing in this activity is the maintenance of a social image as a "stallion, virile" man and a self-image as potent, where reflection about games of power and ideology imprisons men in the money-talks dimension. The research shows how the masculine condition may be conceived in what can be called the modern matrix. In the 21st century, the heterosexual man begins to associate the issue of beauty with identity, something little emphasized during the 20th century in the construction of masculine representation. Parallel to the characteristics of the provider, the virile male, the aggressive man, investigations point to the thesis of a "crisis" of the male condition, concerning ruptures in the exercise of patriarchal power and in the civil institutions, with particular reference to the family and the school; at work, through employability and the exchange women made of private for public space; in a more present paternity; and in a more fluid sexuality and affectivity (Pimentel, 2011). The social representations of masculinity and femininity that circulate socially are elaborated subjectively, sometimes including internal references to influences of unhealthy family models where conjugal violence is emphasized.
In the international scenario, the first programs to confront violence against women were elaborated in Canada and the USA in the 1980s, among them Counseling & Education to Stop Domestic Violence and the Duluth Model (Domestic Abuse Intervention Project). In Brazil, in order to help understand and learn about the dynamics that domestic violence takes on in relations between couples and families, the Rede Brasileira de Pesquisas sobre Violência, Saúde, Gênero e Masculinidades-VISAGEM (Brazilian Network of Research on Violence, Health, Gender and Masculinity) was created (Lima & Buchele, 2011). As for punctual actions, the logic of imprisonment, which removes the man from the home without emphasizing the reconceptualization of perceptions and experiences of domestic violence, is a strong link in the ideology of some social movements of women/feminists. As for Brazilian programs and public policies for recuperation, reeducation and psychotherapy, the literature has shown few interventions, among them the Campanha do Laço Branco (White Ribbon Campaign), Siga Bem Mulher (Women Do Well Project), the Centro Especial de Orientação à Mulher-CEOM (Special Center for Women's Guidance) of São Gonçalo/RJ, the Centro de Apoio a Famílias (Center for Family Support) in situations of violence, and the Casa Abrigo de Santa Catarina (Shelter Home of Santa Catarina) (Lima & Buchele, 2011). In 2014, the Department of Sociology of the School of Philosophy, Letters and Human Sciences of the University of São Paulo arranged the Seminar on Feminism and Masculinity to address domestic violence and intervention proposals carried out in Brazil by various groups working to confront and reduce women's deaths.
As a result of the event, Eva Alterman Blay composed a book, from which we have selected a text produced by Acosta and Bronz (2014). The authors make a retrospective emphasizing the methodological aspects of work with men in situations of violence. They stress the importance of a systemic approach given the complexity of the phenomenon and, above all, highlight the forms of addressing men: "In the beginning they were called aggressors; with ideas associated to the area of gender, they became "authors of violence against women", changing to "men under situation of violence with their intimate female partners" (pp. 141-142). We understand that language transmits values and ideologies through polysemic meanings; thus the renaming of men is a way to overcome essentializing concepts and the binary logic present in the understanding of genders, as well as a way of considering the context and interactions in the original setting of domestic violence.

Methodology

This is a qualitative research study in clinical psychology of gestalt phenomenological orientation, in interface with feminist studies of gender, masculinity and human rights. According to Osborne (1990: p. 81), "Phenomenological research is not intended to test a hypothesis. The aim is to understand a phenomenon by allowing the data to speak for themselves, and by attempting to put aside one's preconceptions as best one can. The method provides us with descriptions of experience which are then interpreted by the researcher from a particular theoretical perspective."
We did a bibliographical survey of articles on care, masculinity and violence against women, as a basis to qualify the articulation between theory and the empirical material. To locate the sample, we had the mediation of a psychologist who is also a police colonel and works in a military police health institution. Following her indication, we presented the research project to the Institute's General Director and to the pedagogical coordinator and were authorized to carry out the study through a lecture on the crisis of masculinity given to students being trained to become military officers. During our exposition we presented a questionnaire and the research consent form, and after an explanation we had the adhesion of 97 men. The research subjects were between 19 and 30 years old; of the sample, only two interlocutors stated that they had no stable intimate relation. We handed out a questionnaire with three integrated parts: identification of the informant and of his partner/wife; experience with situations of violence; and prevention. In this paper we focus specifically on the answers given to define violence and the ideas on how to prevent it.
As for ethical care, we submitted the project to Plataforma Brasil. From the first contacts with the subjects involved, we respected the procedures provided for in Resolution No. 196/96 of the National Health Council/Ministry of Health, which sets forth the norms for research involving human beings. The subjects were invited to answer the questionnaire after receiving information about the project and the criteria for participation in the research. They were shown the research consent form and informed of the questions regarding the ethics of research with human beings set forth by the National Health Council, and they were assured of the confidentiality of their names and the use of pseudonyms. To be included in the research, a subject had to be between 19 and 50 years old, literate, and capable of signing the free and informed consent form. The exclusion criteria covered those below the age of 19, those unavailable for the research, and the illiterate. For the analysis, the data were organized thematically in a descriptive-interpretative manner, stressing the qualitative properties found in the characterization of the examined elements. The results were organized along three axes: definition of violence, recommendations to the female partner, and suggestions to the men. All questionnaires were read and reread to select the answers common to the group and those unique to a respondent, and to write a synthesis of the material found.
Results and Discussion

The conceptions of domestic violence mentioned by the interlocutors stress interconnected dimensions that may be combined into two categories: 1) intrasubjective, concerning lack of emotional control; 2) intersubjective, in which imposition and disrespect culminate in a lack of recognition of the partner during acts of violence. In describing the dynamics through which domestic violence is established, lack of control + imposition + disrespect were likewise recurrent. As the interlocutors put it, conflict occurs in couples, most often started by the men, who are "naturally stronger" and use that strength to impose their will on their female partners; it starts with minor misunderstandings and may or may not reach a painful tragedy, but is still considered severe.

What drew our attention in the men's accounts was the perception that some stereotypes remain present in the conjugal relation, such as the association of physical strength and the centralization of power with men's "nature". The percentages for the main identifiers of the conception of domestic violence are: 47% for physical and psychological aggression; 11% for disrespect; 5% for imposition; 3% for lack of control; and 34% for a group of expressions added to the category of violence between couples. We can perceive that 19% of the responses can be considered causes and 81% consequences, which leads us to consider the need for further reflection, debate and intrapersonal development with the men and the couple, in order to identify the factors that motivate domestic violence and the procedures to prevent it.

The findings on psychological and physical violence are similar to those found in Lisbon, Portugal, as reported in research conducted in 2007 by Matos et al. (2014): "The most prevalent violence is the psychological one (53.9%), followed by physical violence (22.6%) and sexual violence (19.1%). The riskiest place is home itself, with the husband as the major aggressor (72.7%)".
In summary, we emphasize that psychological domestic violence is understood as disrespect, lack of control and imposition. We have selected some excerpts: lack of emotional control caused by immaturity in the relationship; anger reflected onto the partner; violent imposition of conditions upon the partner; infliction of the aggressor's subjective limits; physical or psychological pain within the couple's environment; attempts to impose a certain authority by force; lack of mutual respect concerning the other partner's shortcomings.

When making suggestions for the wives and female partners, the respondents emphasized personal qualities such as dialogue, patience and calmness, understanding and respect. Self-control and dialogue are means of prevention, "the key for everything between couples", paramount to avoiding violence, because most serious arguments originate in the lack of dialogue between couples. They also mentioned the importance of questioning in order to understand the partner's motivation, in other words: always asking why. Often, violence originates from lack of attention and dialogue; the first step is to sit and talk with the partner, advising him that his attitude is not correct and that, if it continues, he will be denounced and the relationship terminated, whether a marriage or not. Communication favors tolerance: being more patient and addressing themes considered disturbing; having more freedom to express oneself, to give opinions and to make suggestions during the relationship.
When dialogue was not possible, it was suggested that the woman seek legal help: in cases of violence, report it to the police. Such a conjecture permits us to infer that the woman is not the one responsible for "saving the marriage" regardless of her unhappiness and suffering. Thus, the men in the sample indicated: the police should be called at the very first occurrence; do not be inhibited; do not isolate yourself; do not keep to yourself or hide any violent act against you; first, sit down and talk with your partner, warning him that his attitude is not correct and that, if it continues, you will make a police report and terminate the marriage or stable relationship; try to know the person better and, in cases of violence, threaten a police report; if you discover during the initial period of your relationship that the partner is violent, move away as fast as possible.

Other procedures considered equally important for communication were impulse control, patience and calmness; thus, female partners should think before acting and speaking, and look for happiness in the partner's pleasure. By not losing control and staying calm, it is possible to perceive the partner's emotional state; to think carefully, because certain decisions are crucial; not to let feelings prevail over reason; to be flexible; to be alert to changes; and to be less aggressive. A facilitating element that was stressed is the capacity for empathy, so that the partners may put themselves in the other's place and listen without humiliating or downgrading each other.
Some forms of self-control that can be exercised are: never act violently, because the reaction may be violence; substantiate accusations of betrayal as objectively as possible and avoid being taken over by emotions; stay controlled in a moment of anger; manage stress through physical activity, dialogue and impulse control; do not feed impatience over very small matters. As for dialogue: be attentive to the partner's emotional state; feel empathy, trust and be trusted; maintain a high level of communication and perceive the partner's level of tolerance.

The quantitative indicators show 37% for dialogue; 15% for patience/calmness; 15% for understanding; and 13% for respect. The fields with fewer occurrences, but potentially high impact, were self-control, with 5%, and police report, with 9%. Perhaps the latter strategy has not been fully embraced by men because of the symbolic and social consequences of a police report, besides the responsibility it assigns for the establishment of violence in the couple's relationship.

In summary, the suggestions can be grouped in a linear scenario, that is, one of causal relations between provoked violence and its occurrence. Moreover, in the daily life of the relationship, it seems that the couple was engaged in a permanent attempt to convince each other rather than in communication, an intensely difficult action, since it requires listening and attention while the partner is talking. Buber (1982: pp. 53-54) held that there are three forms of dialogue: "the authentic, when each partner has the intent to establish a live reciprocity; the technical one, moved only by objective understanding; and the monologue disguised as dialogue, when the partners talk to themselves".
As for suggestions to both partners, the respondents only timidly mentioned the adoption of a perspective of equality, since what has most often been recommended as a means of preventing conjugal violence is dialogue + understanding + respect + patience and calmness. That the equality of rights results from a better understanding of the woman as the other partner, and not as an object, is what we took from the narratives: to perceive that the wife or partner is not an object but someone important for one's personal development, besides the satisfaction provided by good company; to think of the woman as a person and not as a sexual or domestic object.

The quantitative indicators show 23% for dialogue; 18% for patience/calmness; and 12% each for self-control and understanding. The sum of the indications of not seeing women as an object (2%), equal rights (2%) and treating the partner well (3%) amounts to only 7%, indicating a reduced rate for equality.

The interlocutors also indicate that good care requires a mutual effort to recognize alterity and exercise tolerance. Coexistence brings into the daily relationship the existential histories of each partner, which requires openness to difference in the forms of understanding the world.
The men who participated in the research proposed that their partners get to know their emotions, and warned that, if the couple is not capable of achieving a healthy marital relationship, the woman should report it and terminate it. Among the suggestions directed at other men, we emphasize equal rights, which means perceiving the woman not as an object but as a partner who must be treated well. To illustrate these conditions, we have highlighted some excerpts: that he not use his physical force to impose power; that he understand the relation as a horizontal condition of power between the two partners; that a man needs to know that his rights are similar to those of the woman, and therefore should respect the feminine space and reject violent practices to impose his will.

The interlocutors advise other men and couples to practice mutual recognition: first, the couple must love each other and, whenever something bad happens, stay together and overcome the problem; always talk when something is disturbing, but be open and frank about it; and, finally, take care of the partner as if she were you; avoid drug consumption; direct anger at the situation and try to solve it rather than turning it against your partner; discuss your problems with a specialist; do not insist on relationships that have already involved some type of violence; and always talk with your female partner.

Final Considerations

The men who participated in this study were not only pursuing a college degree in public safety but also had some education (complete or incomplete) in the human sciences, such as psychology, pedagogy and the social sciences. Such a profile suggests the importance of aggregating knowledge about patriarchal ideology and gender-related matters, especially the most recent advances in legislation and science.
We consider that obtaining information and changing attitudes in relation to one's own process of subjectivation, to women and to gender violence helps men establish ruptures on the path away from inequalities and psychological suffering. From this perspective, research by Anderson (2013) in the USA and England has sought to redefine masculinity, aiming for the new generation to contribute to the understanding of new forms of subjectivation. This author points out that there are, among men, relations based on increasing intimacy and non-sexual physical contact.

In Brazil, according to Acosta (2014), the development of the group-work methodology began in the early 1990s, with actions initially carried out as interventions, followed by the establishment of partnerships with universities in Rio de Janeiro and non-governmental organizations to carry out research. The author points out that the creation of a working group by the Noos Institute, the Nucleus of Gender, Health and Citizenship, around this line of research made it possible to redefine the methodology under construction through the adoption of the systemic theoretical framework.

Our experience with domestic violence has included brief psychotherapy as a strategy to break away from stereotyped learning and from psychological processes oriented toward defensive mechanisms that create adjustments devoid of creativity, guided instead by introjected and/or confluent norms that impose on men behaviors with which they do not always agree, but which they continue to reproduce under the social pressure of the various institutions to which they have access. Thus, we understand that brief group psychotherapy has the potential to support the suppression of violence between the genders through dialogue, drawing on the relevant knowledge that exists between the two situations. Beiras & Cantera (2014: p. 39) confirm our perspective on the importance of men who live violence in their intimate relationships expressing their emotions and deconstructing myths, "in order to enable that they, since childhood, may express their vulnerabilities, sensations, fears, feelings, as a means to grant power to other expressions of masculinity, of force and strength". Monteiro (2012: pp. 25, 31) considers that therapeutic listening contributes to men's self-expression. In Gestalt therapy, one objective is to increase awareness through the unveiling of the meanings and motivations that install conjugal violence. This author reminds us that "in psychotherapy the way is to make the person feel responsible and conscious of the introjection of 'macho' and 'patriarchal' values".

The interlocutors' suggestions in the research included: adopting a tolerant attitude and respect so that aggression does not become a usual practice; avoiding lies; thinking together with the partner; accepting gender differences; preserving a minimum level of privacy in the relationship; acting with knowledge; sometimes remaining silent; trying to know one's mistakes; being less selfish and thinking about the common good of both; being careful and informed about domestic violence; ensuring loyalty; being sober, calm and gentle; asking for God's presence in the relationship; education above all; a mature personality; and treatment against stress. We can observe in these propositions a mixture of the patriarchal paradigm with fissures in its narratives; there is still much to do, however, especially from the perspective of prevention in public health, mainly in primary care.
Brief psychotherapy and the group configure a form and a place for the re-elaboration and questioning of patriarchal culture, in which identity and difference between men and women are thought in their non-essentialist complexity. We agree with Silva et al. (2013) that psychological suffering has multiple expressions; thus, acceptance and listening offered to the man, the woman and the couple favor de-pathologization rather than the imposition on the man of participating in educational groups as an alternative to imprisonment. It is thus possible to understand and organize brief psychotherapy in a socio-political approach focused on the psychological suffering of those who experience situations of violence, articulating the examination of the relational nexus between gender and power (Pimentel, 2011a, 2011b, 2014).
The complete mitochondrial genome and phylogenetic analysis of the ocellated angelshark: Squatina tergocellatoides Chen, 1963

Abstract: The ocellated angelshark (Squatina tergocellatoides Chen, 1963) is a threatened shark within the family Squatinidae. In the present study, we report the mitochondrial genome sequence of the ocellated angelshark. The complete mitochondrial genome is 16,683 bp in length and contains 37 mitochondrial genes and a control region, similar to most fishes. In addition, we constructed a maximum-likelihood phylogenetic tree of S. tergocellatoides and related species. This work provides molecular data for further studies on S. tergocellatoides.

Among the 16 valid species of the genus Squatina, four have been reported in the western North Pacific: S. tergocellatoides, S. formosa, S. japonica, and S. nebulosa (Compagno et al. 2005). Due to bycatch and habitat destruction, the ocellated angelshark S. tergocellatoides population has decreased (Figure 1). In 2020, S. tergocellatoides was assessed for the IUCN Red List of Threatened Species as Endangered under criteria A2d (Rigby et al. 2020). However, genetic resources for S. tergocellatoides are still limited. Hence, we sequenced and report the complete mitogenome of S. tergocellatoides here. Our work will provide molecular data for further studies of S. tergocellatoides.

Tissue samples (muscle) from a S. tergocellatoides specimen from the South China Sea (N 118.37, E 22.83) were collected on 23 September 2018, and the sample was then preserved in our laboratory. Genomic DNA of S. tergocellatoides was extracted using a Tiangen marine tissue genomic DNA extraction kit. The mitogenome of S. tergocellatoides was sequenced on an Illumina HiSeq 4000 platform (Table S1). The specimen and its DNA were deposited at the Key Laboratory of Marine Ranching, Ministry of Agriculture and Rural Affairs, PR China (BB.
Shan, shanbinbin@yeah.net) under the voucher number Squatina_tergocellatoides_SCS_01.

After trimming and assembly, we obtained a mitochondrial genome sequence with a total length of 16,683 bp (Figure 2). We annotated the mitogenome using MITOS2 and identified 22 transfer RNA (tRNA) genes (1,499 bp), 13 protein-coding genes (11,435 bp), two ribosomal RNA (rRNA) genes (2,603 bp), and a non-coding AT-rich region (1,146 bp) (Bernt et al. 2013). The nucleotide composition of the mitogenome is 13.4% G, 23.1% C, 31.1% A, and 32.4% T, showing the anti-G bias typical of other fishes (Miya et al. 2001). Furthermore, we constructed a maximum-likelihood phylogenetic tree based on the protein-coding gene sequences of S. tergocellatoides and other related sharks (Figure 3). The selected nucleotide substitution model was GTR + R6 + F (Posada and Crandall 1998). The topology of the phylogenetic tree shows that S. tergocellatoides, S. nebulosa, S. formosa, and S. japonica, in the genus Squatina, are closely related; furthermore, these four species cluster into a sister group to S. squatina and S. aculeata. The result of our study is an important resource for further genetic studies of S. tergocellatoides.

In the present study, we sequenced and assembled the mitochondrial genome of S. tergocellatoides, annotated its genes and estimated its base composition. In addition, we constructed a phylogenetic tree using the maximum-likelihood method based on the 13 protein-coding genes of S. tergocellatoides and other species. We expect that these results will facilitate further investigations of the molecular evolution and conservation biology of S. tergocellatoides.

Ethics statement

All experimental procedures were approved by the ethics committee of Laboratory Animal Welfare and Ethics of the South China Sea Fisheries Research Institute (project identification code: nhdf 2022-05, date of approval: 13 March 2022).
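The base-composition figures reported above (13.4% G, 23.1% C, 31.1% A, 32.4% T) are simple frequency counts over the assembled sequence. As a minimal sketch of how such percentages are computed (using a made-up toy sequence, not the actual mitogenome):

```python
from collections import Counter

def base_composition(seq: str) -> dict:
    """Return the percentage of each nucleotide (A, C, G, T) in a DNA sequence."""
    seq = seq.upper()
    counts = Counter(seq)
    total = len(seq)
    return {base: round(100.0 * counts[base] / total, 1) for base in "ACGT"}

# Toy 16 bp sequence for illustration only; the real mitogenome is 16,683 bp.
toy = "ATGCATTTAACGGTAT"
print(base_composition(toy))
```

Running the same function over the full 16,683 bp assembly would reproduce the percentages quoted in the text.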
The methods involving animals in this study were conducted in accordance with the Laboratory Animal Management Principles of China.

Author contributions

C.J., Y.L., and D.S. performed the experiments, investigation and project administration; B.S., Y.L., and D.S. wrote the original draft and performed data curation; C.Y. prepared the resources; L.W. supervised the project; C.J., B.S., and Y.L. revised the manuscript. All authors agree to be accountable for all aspects of the work.
ANN Models to Correlate Structural and Functional Conditions in AC Pavements at Network Level

An Artificial Neural Network (ANN) model was developed to estimate the correlation between structural capacity and functional condition in Asphalt Cement (AC) pavements at the network level. To achieve this objective, the relevant data were obtained and integrated from the Iowa Pavement Management Program (IPMP), including construction parameters, traffic loading and subgrade stiffness, and from the Iowa Environmental Mesonet (IEM) for climate data. The ANN model proved its ability to learn and generalize from the input data. Overall, rutting data were found to be an appropriate indicator of structural capacity. Since deflection tests are expensive and require experience and knowledge to interpret, this approach may be feasible for small transportation agencies (cities and counties) that lack these capabilities.

I. INTRODUCTION

State Highway Agencies (SHAs) spend millions of dollars each year on providing and managing transportation infrastructure. Evaluating structural capacity is an important consideration in highway pavement systems for optimizing network maintenance and agency fund allocation. Most structural capacity evaluations have been done at the project level, and many highway agencies do not include structural condition evaluations in their Pavement Management Systems (PMS) at the network level, for reasons such as the cost of conducting structural tests and the experience required to carry them out. In the State of Indiana, routine structural evaluation and thickness data are often not available for the pavement network [9]. Agarwal et al. (2006) reported that more than 75% of highway agencies in India do not carry out any structural evaluation of pavement conditions [3]. Researchers have proposed a number of methodologies to evaluate the structural capacity of pavements using nondestructive testing.
Various types of nondestructive testing equipment are used for pavement evaluation, and the falling weight deflectometer (FWD) is the most popular. The FWD applies loads to the pavement surface, and the resulting surface deflections are measured by sensors at different distances from the load source (Fig. 1) [12]. AASHTO (1993) provided an accurate approach for determining the Structural Number (SN) from FWD deflection results [1]. However, nondestructive structural capacity testing has shortcomings: the FWD's stop-and-go operation disrupts traffic flow, and the process of analyzing the acquired data is often complex and requires experience and knowledge [3].

Many studies have tried to evaluate the correlation between roughness and structural capacity of pavements at the project level. Sollazzo et al. (2017) built highly accurate ANN models relating roughness and structural performance in AC pavement using Long Term Pavement Performance (LTPP) data [13]. Agarwal et al. assessed the relation between alligator cracking and rutting on the one hand and pavement structural condition on the other [3]. Pavement structural adequacy can also be estimated from existing distress by back-calculation through design procedures [1].

ANN models, which simulate processes of the human brain, have recently been widely used. They use the collected data to build prediction models and compute the relative importance of variables rather than assuming a prescribed functional relationship between them. Rafiq et al. (2001) defined an ANN as: "a computational mechanism able to acquire, represent and compute a mapping from one multivariate space of information to another, given a set of data representing that mapping" [10].
Engineers often face incomplete or noisy data, and ANN models are well suited to learning and generalizing from such input data until meaningful relations to the problem are found [10]. An ANN can capture nonlinear relationships between variables that traditional models may miss [16]. ANNs have been widely used in different civil engineering areas with good results because they are generic, accurate and convenient mathematical models able to simulate numerical model components [8]. Adeli (2001) reviewed papers using neural network models since 1989, especially in structural engineering, construction engineering and management [2]. Golshani et al. (2017) compared the prediction capabilities of statistical models and neural network models for two critical trip-related decisions, travel mode and departure time [6]; their results show that the neural network model offers better performance with an easier and faster implementation process. ANN and multivariable regression models have also been used to predict stress intensity factors (SIFs) in pavement cracking, and the results showed the advantage of ANN over multivariable regression in prediction accuracy [15]. Felker et al. (2004) used ANN and statistical analysis approaches to develop reliable and accurate roughness prediction models for jointed plain concrete pavements, and found that the ANN predicted roughness with a reasonably high coefficient of determination (R-squared = 0.90), whereas the statistical approach achieved R-squared = 0.73 [4]. Gencel et al. (2011) compared ANN and general linear (GL) models for the correlation of cement content, metal content and traffic loading with the wear of concrete [5]; the comparison showed the robustness of ANN models relative to the GL models.
Likewise, Vlahogianni and Karlaftis (2013) compared ANN and autoregressive time series models for forecasting freeway speeds and found that neural networks provide more accurate predictions than classical statistical approaches [8].

In this paper, ANN models are trained to find a reliable relation between SN and rutting using network-level data. Rutting is defined as a deformation of the AC layers or subsurface layers [12]. A large set of input parameters is included in the model to capture the relevant factors: structure parameters, traffic loading, climate factors, subgrade stiffness, and pavement age. The ANN model output shows satisfactory results, which justify attention to this relation for improving pavement management systems.

II. DATA

In order to have enough data for training the ANN models, the historical database of the Iowa Pavement Management Program (IPMP) was used. The IPMP started in 1994 to develop, implement, and operate a pavement management system on 23,500 miles of roads in Iowa [7]. The database contains records on structural characteristics, maintenance activities, and traffic details. Long-term climate data were obtained from the Iowa Environmental Mesonet (IEM), and a Geographic Information System (GIS) was used to relate the weather data, available from point sources, to the highway network. The analysis focuses on AC pavement sections that have not been exposed to any maintenance or rehabilitation operations. The database fields included in the model are listed below:

1) Pavement age (years): age of pavement since construction, major rehabilitation or overlay date.

III. METHODOLOGY

In order to construct the neural network, three components must first be identified: the architecture, the learning method and the neuron activation function.
3.1. Architecture

The architecture of the neural network includes determining the input layer, the hidden layers and the output layer (Fig. 2). The challenge with a neural network is how many neurons should be placed in the hidden layers, as they impact model performance. There is no specific method to select the appropriate number of neurons, so trial and error has been used in many studies.

3.2. Learning Method

A three-layered feed-forward back-propagation neural network is used. During the training process, the error between the outputs calculated with the current connection weights and the actual outputs is propagated back through the hidden layers repeatedly, adjusting the weights until the actual and calculated outputs agree within a pre-determined range.

3.3. Neuron Activation Function

Each neuron in the hidden layer has its own summation and transfer functions with input and output values, as illustrated in Fig. 3. The output of the summation function, passed through the transfer function, is given by (1):

O_j = f( Σ_i w_i X_i )    (1)

where X_i is the i-th input, w_i is the weight of the link connecting input i to the node, O_j is the output of the j-th neuron, and f is the transfer function. The transfer function acts as the neuron activation function and is chosen based on the characteristics of the study. A sigmoid transfer function is used to avoid large or negative values dominating the model, so the output of each neuron is calculated by (2):

O_j = 1 / (1 + e^(−Σ_i w_i X_i))    (2)

3.4. Training the ANN

To train the neural network, the database was divided into three samples: 4,706 samples (70%) were randomly selected as the training set, 1,008 samples (15%) as the validation set, and the remaining 1,008 samples (15%) as the testing set. Shekharan (1998) divided a database randomly into 80 percent training data and 20 percent testing data to train ANN models [11]. The performance of each model is evaluated by the coefficient of determination (R²), given by (3):

R² = 1 − Σ_i (y_i − ŷ_i)² / Σ_i (y_i − ȳ)²    (3)
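The summation/transfer step of Eq. (1)-(2) and the R² score of Eq. (3) can be sketched in a few lines. The weights below are invented toy values for illustration, not the paper's trained network:

```python
import math

def sigmoid(x: float) -> float:
    """Sigmoid transfer function of Eq. (2): squashes any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights) -> float:
    """Eq. (1): weighted sum of the inputs passed through the transfer function."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(net)

def r_squared(observed, predicted) -> float:
    """Eq. (3): coefficient of determination used to score each model."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Toy neuron with made-up inputs and weights.
out = neuron_output([0.5, -1.2, 3.0], [0.1, 0.4, 0.2])
print(round(out, 3))  # sigmoid(0.17) ≈ 0.542
```

A full network simply chains such neurons layer by layer; back-propagation then adjusts each w_i to maximize R² on the validation set.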
IV. ANALYSIS AND DISCUSSION

The flexible pavement data were investigated to find the relationship between the structural number and rutting. Other pavement types were not included in this analysis because they do not have sufficient structural data. Before the training process, the data were divided randomly into 70% for training and 30% for validation. The values of rutting plotted against the structural number show that the direct relationship between them is not strong (Fig. 4). The fitted line between observed and estimated SN is shown in Fig. 5. Finally, predicting the correlation between the functional and structural conditions at the network level using an ANN model produces satisfactory results, and ANN models can deal with noisy data and nonlinear relationships.

V. CONCLUSION

Modeling the relationship between structural capacity and functional condition is very important to pavement management systems. In this study, ANN models were used to evaluate the relationship between the functional and structural conditions of existing asphalt pavements at the network level, trained on IPMP historical data and IEM climate data. The results reported above indicate that it is feasible to use rut-depth data as an indicator of structural capacity where structural data are not available or not collected regularly.
New model of college physical education teaching based on the algorithm and data structure of flipped classroom and OBE

Although college physical education (PE) is a compulsory course, it has been undervalued because of its open assessment standards and because students study only to pass the final exam; this paper therefore aims to explore a new model of college PE teaching. Taking an air volleyball course as an example, the paper redesigns college PE teaching based on the theory of the flipped classroom and outcomes-based education (OBE). It also proposes a personalised learning system for college sports based on a genetic algorithm (GA) and data structures, greatly improving students' autonomous learning ability and willingness. In designing the teaching model, this paper compares the model combining the flipped classroom and OBE with a flipped-classroom-only model, an OBE-only model and the traditional teaching model. A one-semester investigation was conducted with 40 students taking the air volleyball PE course at Hebei Normal University for Nationalities; the students were divided into four groups and their learning after one semester was compared. The experimental results showed that, compared with the traditional teaching group, the flipped classroom plus OBE group improved its performance in hitting, passing, spiking and serving by 3.8%, 14.3%, 20.8% and 10.3%, respectively. This suggests that the new college PE teaching model based on the flipped classroom and OBE has a good teaching effect and can serve as a reference and help for others.
Introduction

Unlike wired networks, wireless networks break through the limitations of cabling. They can connect people at any time through wireless signals, their network expansion performance is relatively strong, and they effectively support network expansion and configuration. Users also access information more efficiently and conveniently, and the wireless network both expands the spatial range within which people use the network and improves its efficiency. The traditional PE teaching model focuses on implementing teaching methods and means and only pays attention to the completion rate and pass rate of students' learning tasks [1]. In college PE teaching, such a model allows students to understand the basic theoretical knowledge of air volleyball and master basic sports skills. However, for the employment of students majoring in PE, this falls far short of modern society's needs for air volleyball professionals. Especially as air volleyball gradually gets on the right track, the demand for air volleyball professionals in society and school sports grows greatly, and the requirements keep rising. Cultivating talent in PE majors is a powerful guarantee for supplying talent for developing school sports and social sports in China. The general public's physical and mental health and physical fitness need guarantee and support, and China's transformation from a major sports country into a sports powerhouse requires high-quality reserve talents. Therefore, it is necessary to study a new model of college PE based on the flipped classroom and OBE. The personalised learning model based on genetic algorithms introduces new ideas into physical education: by optimising the teaching mode, students' learning needs can be met individually, which provides an innovative exploration for improving the teaching effect and cultivating students' comprehensive quality. This paper has important theoretical and practical value
for promoting the innovation of physical education teaching models and improving students' learning experience and overall quality.

College PE is a compulsory course in college education. However, due to the lack of unified evaluation standards and teaching models, students have not valued it for a long time. In this regard, many scholars have studied new models of college PE. Wang et al. [2], through the reform of teaching methods, integrated modern technology with sports: building a bridge between information technology and traditional teaching with the help of a classroom platform, transforming sports techniques and skills from being imparted to being learned, realising lifelong sharing of resources via a public platform, creating a precedent for lifelong sports exercise classes, and fully realising the function of university sports in serving society. Xin and Wang [3] investigated the current situation of PE in colleges and universities under the cloud computing environment, analysed and summarised the problems existing in the process of college PE, and gave corresponding countermeasures and suggestions. Starting from the significance and characteristics of the experiential teaching model, Wang [4] expounded the implementation significance and existing problems of the current PE teaching model and finally explored effective application strategies. To meet the market demand for social sports, Yu et al.
[5] optimised and adjusted the original "dual system" teaching model for sports talents in colleges and universities, combined it with an advanced talent introduction model, established a new college sports talent training system and optimised the existing talent training model. To better implement the student-oriented concept and promote the growth and development of college students, Wang and Tang [6] studied the path and method of building a team of growth mentors for college students based on political work. Their research not only started with technology and introduced information technology but also started with teaching methods and put forward many new models and suggestions. However, their research lacked thinking about student groups and did not consider students as the main object.

Fig. 1. Three elements of a flipped classroom.

The flipped classroom and OBE theory are commonly used methods in modern teaching, and many scholars have researched them. Song [7] discussed the problems in logistics teaching and the urgency of curriculum reform. He introduced the concept of outcomes-based education (OBE) into the logistics teaching process, reverse-designed the teaching based on results, and implemented the teaching design as a flipped classroom. Kadam and Sawant [8] studied the teaching mode of communication skills for senior students, using the OBE concept and flipped classroom to optimise the teaching model. The effectiveness of flipped classrooms is currently debated due to conflicting results from different studies. Sajid et al. [9] aimed to evaluate the efficacy and acceptability of the flipped classroom in undergraduate medical education at the Faculty of Medicine, Alpha Sal University. Tavares et al.
[10] conducted a systematic literature review on the flipped classroom methodology with a focus on K-12 education. They proposed model recommendations for using digital information and communication technologies as a methodological support tool for the flipped classroom. Liu et al. [11] analysed the problems and challenges in the epidemic environment from the perspectives of teachers, students and technology and proposed the "re-flipped classroom" teaching model and the "SPOC + MOOC + live broadcast" teaching mode. Combining OBE with mind mapping, teaching evaluation was introduced into teaching design, and implementation suggestions were put forward to ensure teaching quality. However, there were few studies applying the flipped classroom and OBE concepts to PE, and the research was not deep enough.

The innovations of this paper are as follows. For college PE, this paper starts with the well-known sport of air volleyball and selects experimental subjects from various majors in the school, thus ensuring the generality of the experimental results. In the experiment, this paper not only compares the new teaching model designed here with the traditional teaching model but also compares it with teaching models based on flipped classroom theory and the OBE concept alone.

Concept of flipped classroom and OBE

The flipped classroom emphasises that students are the main body and needs the support of micro-lectures, a teaching environment and teaching activities, as shown in Fig. 1. Outcome-based education is an educational philosophy that revolves around four questions: "What can students learn?", "Why do students learn it?", "How do students learn it?" and "How are learning outcomes evaluated and improved?", as shown in Fig. 2. It is precisely these four questions that reflect the advantages of outcome-oriented education, and they are closely linked to one important theme: "student-centred" [12,13].
What students can learn concerns the definition and design of learning outcomes; it is also the beginning of reverse design and determines the model and method of the entire teaching process. Why students learn it reflects critical thinking about learning outcomes: the iterative determination of learning outcomes makes them more comprehensive and precise, and lets students understand the impact of the acquired outcomes on their future careers and lives, thinking about why they should learn from the students' own perspective [14,15]. How students learn it refers to implementing the teaching design around learning outcomes, adopting reasonable and scientific teaching methods and means so that students achieve the teaching goals to the greatest extent. Evaluating and improving learning outcomes means obtaining teaching feedback through several evaluation methods, such as process evaluation and summative evaluation, after implementing the reverse-designed teaching activities, so that the degree to which learning outcomes match expected outcomes can be determined. Using this feedback, a new round of reverse design is carried out to update and improve the implementation details, so that each student constantly approaches the final learning outcome and gains a sense of achievement. As the innovation and key point of outcome-oriented education, "reverse design" plays an irreplaceable role in teaching implementation. The talents cultivated by colleges and universities are the main source of social human resources. The reverse design in the OBE concept ensures, to a certain extent, that training work is scientific and forward-looking, which is conducive to improving the quality of talents cultivated in various majors and promoting social progress [16,17].
The construction of a teaching model is a complex and unified process composed of various links, but in the end they all aim at the same purpose: achieving learning outcomes. The outcome-based concept keeps students and learning outcomes at the centre of the whole teaching process and radiates to every link, so that students gain more and have a greater sense of achievement in learning [18,19].

In addition, the difference between the outcome-based teaching concept and the traditional teaching concept is reflected in the results and in the design method of the entire educational process. The PE major cultivates PE talents with strong knowledge, skills and comprehensive quality for the country, society and schools, so colleges and universities should focus on what kind of talents are needed in the social market and environment and train students in an all-round way starting from the training results, making the training process and purpose clearer. The classroom teaching mode of air volleyball for PE majors guided by the OBE concept echoes the training goals of PE professionals: it reverse-designs the entire teaching around the specifications for cultivating talents and the learning outcomes obtained at graduation, and continuously improves teaching quality with specialised and continuous evaluation, to ensure that students achieve their goals at graduation and meet the multiple needs of society in the new era [20,21].

GA optimisation: the indicators in PE are encoded, and a suitable fitness function is chosen. The fitness function is divided into maximisation and minimisation cases [22,23]. This paper evaluates the system and selects the minimum fitness value, as shown in Equation (1), in which f_max represents the maximum value of the fitness function.
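The content of Equation (1) was lost in extraction, so the following is only a sketch of the textbook minimisation-to-fitness transform, F(x) = f_max − f(x), which is consistent with the statement that f_max is the maximum value of the fitness function but is not necessarily the authors' exact formula.

```python
def min_to_max_fitness(f_values):
    """Convert raw objective values (to be minimised) into fitness
    values (to be maximised) via F(x) = f_max - f(x).

    f_max is taken as the largest raw value in the population; this is
    the standard transform, assumed here since the paper's Equation (1)
    is not recoverable from the extraction."""
    f_max = max(f_values)
    return [f_max - f for f in f_values]

# the individual with the smallest raw value receives the largest fitness
fitness = min_to_max_fitness([3.0, 1.0, 2.0])
```

Under this transform, a GA that maximises fitness implicitly minimises the original objective.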
Different selection strategies lead to different selection pressures. Roulette-wheel selection is used [24]: a disc is divided into N parts according to the selection probabilities p_i, as given in Equation (2). A random number r is generated, see Equation (4); if it satisfies Equation (3), individual i is selected. The traditional genetic operator design is then adopted.

The optimisation function has the following options. Global maximisation: for the sought point x_max ∈ S, Equations (5) and (6) hold. Global minimisation: for the sought point x_min ∈ S, Equations (7) and (8) hold. Any minimisation problem can be transformed into a maximisation problem: if the function f is to be minimised, the transformation is as in Equations (9) and (10). If the objective function f takes only negative values and is to be converted to positive values, a positive number C can be added, as shown in Equation (11).

The specific steps of using the GA to extract the characteristics of college students' online learning are as follows [25].

Step 1: Keywords are extracted. During online learning, the keywords entered on the system page are recorded as a document set C_1, as in Equation (12). C_1 is decomposed into multiple document matrices A_1 and a filter word set B.
Step 2: Similarly, for the test document set C_2 (see Equation (13)), A_2 can be obtained through filtering with B.

Step 3: A_1 and A_2 are combined to get the test matrix A, see Equation (14).

Step 4: f(a_i) is used to define the function f(a), as in Equation (15), to evaluate the i-th document, where W is the weight of the keywords in set B and W′ is the weight optimised by the GA.

Step 5: The procedure stops when Equation (16) is satisfied, or when the value exceeds f(C_2) by a certain proportion. Equation (17) is assumed to be the document collection in the personalised online learning system for college sports, Equation (18) the document collection of learners' individual interests or online test evaluations, and Equation (19) the rest of the test collection, where m ≪ n. B represents the feature set of the learner's information needs, as shown in Equation (20).

The GA-optimised teaching mode involves the following parameters and optimisation process: the roulette-wheel method is used in the selection stage, single-point crossover in the crossover stage, and a mutation probability randomly mutates individual genes in the mutation stage. The population size is set to a moderate number, and the number of iterations is capped by a maximum number of generations. The fitness function considers students' academic performance, participation and the quality of teaching plans. During optimisation, individuals evolve through selection, crossover and mutation towards the expected teaching effect. Through continuous iteration, this process gradually makes the teaching mode optimal and improves students' learning experience and comprehensive quality.
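The selection, crossover and mutation pipeline described above can be sketched end to end. The fitness below is a toy OneMax stand-in for the paper's composite fitness (academic performance, participation, plan quality), and the parameter values are illustrative assumptions, not the authors' settings.

```python
import random

def one_max(bits):
    # toy stand-in fitness: counts 1-bits; the paper's real fitness
    # combines performance, participation and lesson-plan quality
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60,
           cx_rate=0.8, mut_rate=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        fits = [one_max(ind) for ind in pop]
        total = sum(fits)

        def select():
            # roulette-wheel selection: each individual gets a slice
            # of [0, total) proportional to its fitness
            r = rng.random() * total
            acc = 0.0
            for ind, f in zip(pop, fits):
                acc += f
                if r < acc:
                    return ind
            return pop[-1]  # guard against floating-point round-off

        nxt = []
        while len(nxt) < pop_size:
            a, b = select()[:], select()[:]
            if rng.random() < cx_rate:
                # single-point crossover at a random cut point
                cut = rng.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                for i in range(n_bits):
                    # random gene mutation with probability mut_rate
                    if rng.random() < mut_rate:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=one_max)

best = evolve()
```

With the capped generation count acting as the stopping criterion, the loop converges towards high-fitness individuals without any problem-specific tuning.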
Construction and implementation of the classroom teaching model of air volleyball

The air volleyball classroom teaching model under the OBE concept is a reverse design starting from the final demand. The whole design process was guided by the needs of modern society, the discipline and the students, as shown in Fig. 3. Various factors in the teaching model were also analysed.

PE teaching mode based on the OBE concept (OBE group): the classroom teaching model of air volleyball under the OBE concept is a non-didactic teaching mode. The teaching process mainly comprises learning stages such as question raising, trial learning and cooperative discussion, and teachers use reasonable means and methods to intersperse learning content with exercises. There are three types of teaching organisation.

Teaching model based on the flipped classroom (FCM group). (1) Before class: watching teaching videos and communicating with students. (2) In class: students learn movement skills independently, with teachers' guidance and evaluation.

Teaching mode based on OBE and the flipped classroom (OBE + FCM group). (1) Before class: studying in groups, watching videos and thinking. (3) After class: teachers assign open homework to guide students to think openly and continue to improve.

Traditional teaching mode (TTM group): traditionally, teachers teach and students practise; then the class is over.

Experimental design

Testing purposes: by testing and comparing the feasibility and educational impact of the conventional teaching model and the air volleyball classroom teaching mode under the OBE concept, the improvement of air volleyball teaching in colleges and universities can be promoted and the results of new methods of college PE teaching can be discussed.
Expectations before testing: according to the purpose of the research, there are two assumptions: (1) the classroom teaching model of air volleyball based on the OBE concept helps students learn the basic skills and tactics of air volleyball; (2) it helps cultivate students' basic teaching skills and qualities.

The test subjects are shown in Table 1. They were 40 students who had chosen the air volleyball course at Hebei Normal University for Nationalities. Psychologically, these college students' physical and mental development tended to be mature: they could think and judge independently, control their attention stably, and had strong self-restraint; they also had basic sports ability. According to the characteristics of air volleyball, students were particularly interested in the holistic practice of learning the game [26]. College students' psychological maturity and abilities, including independent thinking, judgement, attention control, self-discipline and basic sports ability, help establish a learning environment in which students can better participate in PE teaching activities; combining psychological maturity and sports ability improves the teaching effect and promotes a positive learning experience in the air volleyball class. Therefore, teaching needed to let students combine physical exercise and logical thinking well, provide an opportunity to learn professional knowledge and innovative learning methods, and improve the teaching effect. According to the age characteristics of the students, various teaching methods should be adopted, and multiple methods such as the discussion method, the intuitive method and cooperative learning should be used to mobilise students' enthusiasm for learning and practice [27].

Fig. 3. Instructional design under the OBE concept.

The sample selection process was carefully designed. First, the researchers selected students taking air volleyball at the university as potential research subjects. This choice is based on the purpose of the study, that is, to evaluate the influence of the new teaching mode on air volleyball education. From the potential subjects, stratified sampling was adopted to ensure the diversity and representativeness of the sample. The strata include the different teaching modes (OBE, FCM, OBE + FCM and TTM); within each teaching mode, 50 % male and 50 % female students were selected to keep the gender balance. Finally, 40 student samples were obtained through this process, covering the different teaching modes and genders. This sample selection method helps ensure the reliability and validity of the research results for comprehensively evaluating the influence of the teaching modes on students' learning achievements.
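The stratified draw just described can be sketched as follows. The student records and field names are hypothetical, but the strata (teaching mode × gender) and the resulting balance mirror the stated design: four modes, 50 % of each gender, 40 students in total.

```python
import random

def stratified_sample(students, per_cell, seed=0):
    # group students into (mode, gender) strata, then draw the same
    # number from each cell so modes and genders stay balanced
    rng = random.Random(seed)
    cells = {}
    for s in students:
        cells.setdefault((s["mode"], s["gender"]), []).append(s)
    sample = []
    for key in sorted(cells):
        sample.extend(rng.sample(cells[key], per_cell))
    return sample

# hypothetical pool: 10 candidates per (mode, gender) cell
pool = [{"mode": m, "gender": g, "id": f"{m}-{g}-{i}"}
        for m in ("OBE", "FCM", "OBE+FCM", "TTM")
        for g in ("M", "F")
        for i in range(10)]
chosen = stratified_sample(pool, per_cell=5)  # 4 modes x 2 genders x 5 = 40
```

Drawing a fixed number per cell, rather than sampling the pool as a whole, is what guarantees every mode-gender combination is equally represented.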
Test results

Reference [28] established a teaching model based on the theory of multiple intelligences, taking university basketball teaching as the research object and incorporating students' sports skills, practical abilities and cognitive abilities into the analysis of teaching effectiveness. Reference [29] proposed a pattern-based practical PE teaching method and surveyed the learning effectiveness of students of different age groups through semi-structured interviews. Sports skills, practical abilities and cognitive abilities reflect students' comprehensive quality and development more fully, and semi-structured interviews collect more detailed and specific feedback from students, helping evaluators understand teaching effectiveness thoroughly. Therefore, this paper cites and improves the evaluation indicators and methods of references [28] and [29]. Before the implementation of teaching, the testing indicators are divided into cognitive, physical and technical categories. The cognitive category is surveyed through interviews. In the physical category, the 5-m three-item run and the run-up touch are used as test indicators. The technical category selected self-cushioning as the testing content. In testing and evaluation, the scoring range is set to 1-10 points, and the performance differences between groups are statistically analysed through a t-test.

Results and analysis of students' cognitive situation: the differences in students' cognition are shown in Table 2, and the detailed cognitive situation of each group is shown in Fig. 4.
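The group comparisons via t-test can be sketched in pure Python. The score lists below are illustrative, not the study's raw 1-10 point data, and the Welch form of the statistic is one common choice; the paper does not state which variant it used.

```python
import math

def t_statistic(a, b):
    """Two-sample t statistic (Welch form) for comparing the mean
    scores of two groups, as done for the 1-10 point assessments.
    A larger |t| indicates a larger standardised mean difference."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# illustrative scores for two groups (not the study's data)
obe_fcm = [9.8, 9.5, 9.9, 9.7, 9.6]
ttm     = [9.0, 8.8, 9.2, 8.9, 9.1]
t = t_statistic(obe_fcm, ttm)
```

In practice the statistic would be compared against the t distribution (e.g. via `scipy.stats.ttest_ind`) to obtain a p-value; identical group means give t = 0.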
The study examined students' cognition using a questionnaire to ensure that there were no differences among test subjects before teaching started. The results of the air volleyball cognition survey of the experimental and control groups are shown in Table 2. There was no significant difference between the groups in students' understanding of air volleyball; that is, students from the male and female classes of both the control and experimental groups could serve as test subjects for analysing the relevant indicators and phenomena.

Test results and analysis of students' physical fitness: before teaching, the physical fitness test contents were run-up touch height and the 5-m three-item run. After the test, the results were compared through a test of no difference (t-test), as shown in Fig. 5. As seen in Fig. 5A, the males' average result in the 5-m three-item run was about 8.6 s in each group, and the run-up touch height was about 3 m, with no significant difference. As seen in Fig. 5B, the females' average result in the 5-m three-item run was about 9.7 s in each group, and the run-up touch height was about 2.6 m, with no significant difference.

Test results and analysis of basic skills: before teaching, two basic air volleyball skills, serving and self-cushioning, were tested, and a t-test was carried out on the mean results of each group. The results are shown in Fig. 6, with the specific results in Fig. 6A and B.
There was little difference in the number of self-cushions and serves between the males' and females' groups. It can be concluded that the air volleyball cushioning and serving skills of each group in the male and female classes were at the same level, with no significant difference, so the test conditions were met.

Test results and analysis of students' cushioning skills: as seen in Fig. 7, compared with the traditional model, the scores of all other groups increased. As shown in Fig. 7A, the males' OBE + FCM group achieved a standard score of 10, the OBE group 9.78, the FCM group 9.89 and the TTM group 9.638; the OBE + FCM group's standard score was 3.8 % higher than that of the TTM group. The technical assessment score of the males' OBE + FCM group was 4.325, the OBE group 4.231, the FCM group 4.112 and the TTM group 3.216; the OBE + FCM group's technical assessment score improved by 34.5 % compared with the TTM group. As shown in Fig. 7B, the females' OBE + FCM group achieved a standard score of 9.5, the OBE group 9.342, the FCM group 9.311 and the TTM group 8.911; the OBE + FCM group achieved a 6.6 % improvement over the TTM group. The technical
assessment score of the females' OBE + FCM group was 4.112, the OBE group 3.842, the FCM group 3.863 and the TTM group 3.411; the technical evaluation score of the OBE + FCM group was 20.6 % higher than that of the TTM group. For both males and females, the OBE + FCM group had the highest score and the TTM group the lowest. This showed that after the experiment, the students in the OBE + FCM group and the TTM group differed significantly in the application and mastery of the ball-cushioning skill. In the teaching of this skill during the experiment, the OBE + FCM group taught the technical movements through technical application, creating situations and imitating receiving and serving in the game to organise the teaching. In contrast, the TTM group taught in the traditional form of explanation, demonstration, exercise and correction. According to the requirements of the teaching syllabus, two-person cushioning is used in the assessment of cushioning skills, which mainly tests the students' mastery of hand shape and coordination in cushioning, as well as their coordination with preparatory postures and movements.

Test results and analysis of students' passing skills: as shown in Fig. 8, the passing skills of each group again showed the TTM group lowest and the OBE + FCM group highest. As shown in Fig. 8A, the achievement of the males' OBE + FCM group was 14.3 % higher than that of the TTM group, and their technical assessment performance was 42.8 % higher. As shown in Fig.
8B, the achievement of the females' OBE + FCM group was 14.5 % higher than that of the TTM group, and their technical assessment performance was 29.3 % higher. After 14 weeks of teaching, students in each group had a basic grasp of the basic skills of air volleyball, but there was still room for improvement in their understanding and application. From Fig. 8, it can be concluded that after the teaching activities there were significant differences in passing skills between the new PE teaching model and the traditional model in both the male and female classes; the experimental group was higher than the control group in both passing skill and technical evaluation. Passing in air volleyball is a basic skill and the most critical technical action, playing a vital role in actual play, and it is difficult for beginners to master. In classroom conversation, some students reported that after a whole semester of study they had only learned the basic movement essentials of passing and could complete the movements but could not use them flexibly in actual play. Therefore, in this semester's teaching, the students' passing skills and technical assessment results fell within the normal range for beginners.

Results and analysis of students' spiking skills test: as shown in Fig. 9, the spiking skills of each group again showed the TTM group lowest and the OBE + FCM group highest. As shown in Fig. 9A, the achievement of the males' OBE + FCM group was 20.8 % higher than that of the TTM group, and their technical assessment performance was 38.9 % higher. As shown in Fig.
9B, the achievement of the females' OBE + FCM group was 0.2 % higher than that of the TTM group, and their technical assessment performance was 2 % higher. It can be concluded that after 14 weeks of practising the OBE-concept and flipped-classroom air volleyball teaching model, the technical test and technical assessment performance of the experimental group in the male class was better than that of the control group under the traditional teaching model, while there was no significant difference in average achievement or technical assessment scores between the experimental and control groups in the female class. This paper believes this is closely related to females' sports experience and physical quality: males had certain advantages over females in both respects. In addition, the spiking action is the most difficult to master in air volleyball, with high requirements for both ball control and coordination. However, from the test data, the male experimental group differed markedly from the control group in spiking, not only in success rate but also in technical evaluation, which was inseparable from their daily exercise habits and specialities: some students specialised in basketball, with good bounce and explosiveness, while others specialised in badminton, with good speed and coordination. These factors influence the learning and mastery of air volleyball skills to a certain degree.

Results and analysis of students' serving skills test: as shown in Fig. 10, the serving skills of each group again showed the TTM group lowest and the OBE + FCM group highest. As shown in Fig.
10A, the achievement of the males' OBE + FCM group was 10.3 % higher than that of the TTM group, and their technical assessment performance was 37.9 % higher. As shown in Fig. 10B, the achievement of the females' OBE + FCM group was 6.3 % higher than that of the TTM group, and their technical assessment performance was 21.1 % higher. There was a significant difference between the OBE + FCM and TTM groups in the serving test scores, especially in the technical evaluation. The experimental group using the outcome-based classroom teaching model was better than the control group under the traditional model in the accuracy and completion quality of serving movements. Serving is a basic skill in air volleyball and plays a vital role in the game. In the teaching process, outcome-based teaching enables students to better understand the details and internal laws of technical movements, helping them establish more accurate movement representations and concentrate more. Moreover, learners can conduct trial teaching as teachers throughout the teaching process; by learning and applying movement skills in different roles, they gain a richer experience, establish a sense of self-identity and understand the teaching content more deeply.
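The relative improvements quoted throughout these results follow a simple formula; a minimal check against the males' cushioning numbers reproduces the 3.8 % and 34.5 % figures reported earlier.

```python
def pct_improvement(new, old):
    # relative improvement in percent, one decimal place,
    # as used when comparing OBE + FCM against TTM scores
    return round((new - old) / old * 100, 1)

# males' cushioning: standard score 10 vs 9.638, assessment 4.325 vs 3.216
standard = pct_improvement(10.0, 9.638)    # standard-score improvement
technical = pct_improvement(4.325, 3.216)  # technical-assessment improvement
```

The same calculation applies to the passing, spiking and serving comparisons, with the TTM group's score always serving as the baseline.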
It can be seen that the OBE flipped classroom can effectively improve students' PE skills. This is because OBE encourages personalised learning and adjusts teaching methods according to students' needs and abilities: in PE teaching, educators can develop customised training plans based on students' exercise levels and interests so that each student can progress at a level that suits them. OBE also emphasises the evaluation of students' actual performance and provides timely feedback. In PE teaching, this can be done by observing students' motor skills, physical fitness level and teamwork; timely feedback helps students understand their strengths and the directions for improvement.

Results and analysis of students' lesson plans: compiling lesson plans can highlight students' understanding of air volleyball and combine compulsory PE theoretical knowledge and skills with sports learning, which is conducive to students' acceptance and learning of skills. At the same time as learning a sports skill, mastering the teaching methods and means of that skill is the best way to cultivate college PE teaching skills. After learning, students not only gain a motor skill but also master its teaching method, which helps them understand and identify themselves. Students can give full play to their strengths, complete the writing of lesson plans independently and improve their problem-solving ability.

Table 3 shows the lesson-plan writing scores. The OBE + FCM group had the largest number of students scoring 85 and above and the fewest scoring 60-71. This showed that the air volleyball teaching mode based on the OBE concept and flipped classroom enables students to combine practice and theory, lets most students master the skills of writing lesson plans, and gives most students a deeper understanding of air volleyball.
Results and analysis of students' trial lecture scores: Students' trial-lecture ability reflects the student-centred teaching philosophy. As shown in Table 4, the number of students in the OBE + FCM group with a high score of 85 or above was two, and the number of students scoring 60-71 was the lowest. This shows that the design and distribution of teaching content is the main factor affecting students' learning outcomes. In the traditional classroom teaching model, the teaching content covers only the mastery of sports skills and competition requirements; the teaching skills of the sport are not gradually imparted to students, and there is no dedicated training of students' teaching skills. If teachers do not train students' teaching skills when setting the teaching content, or if students do not get sufficient exercise, students may face serious problems in employment during the "learning to teach" transition. Mock lectures exercise students' ability to apply learning outcomes, requiring them to draw on the theoretical knowledge of their courses, put theory into practice, and constantly polish and refine it. The evaluation of the mock lectures shows that the air volleyball classroom teaching mode based on the OBE concept and flipped classroom meets students' future needs for work and life and helps them improve their personal and professional quality and ability.

Students reported a positive experience with the OBE concept and flipped classroom mode; they found the classroom more interesting when skills were previewed through videos. In contrast, in the traditional mode, students were bored and lacked interest in practice. This feedback reveals the positive experience triggered by the new teaching model and provides a useful reference for educational reform.
To better adapt the model to specific sports or activities, we must first consider the characteristics of the sport and students' learning needs when determining the teaching objectives. The teaching content and methods must then be adjusted to meet the technical requirements of different sports. For example, ball games may focus on teamwork, while individual events may focus more on particular skills. The teaching mode should also consider the difficulty of the sport and the age group of the students. By flexibly adjusting teaching strategies and content, this model can adapt to various sports and improve students' learning outcomes across different sporting fields.

Discussion

This paper compares the conventional teaching mode with the air volleyball classroom teaching mode based on the OBE concept and studies the teaching effect in depth. In the experimental design, students' cognition, physical quality, and technique were comprehensively tested, providing detailed data that supports a comprehensive evaluation of the teaching effect. Secondly, through the innovation of the teaching mode, the OBE concept and flipped-classroom elements are introduced, and theoretical knowledge and practical skills are combined to enhance the depth and breadth of students' learning. In addition, students' ability to understand and apply the new teaching model is demonstrated through the grading of their lesson plans and trial lectures, which offers a feasible way to cultivate students' teaching skills. Most importantly, the personalised learning model based on a genetic algorithm has injected new ideas into the field of physical education and explored new ways to improve the teaching effect and cultivate students' comprehensive quality by optimising teaching modes to meet students' individual learning needs. Therefore, this study not only expands the research field of physical education teaching in theory but also substantially
contributes to improving teaching quality and cultivating students' all-round quality in practice.

Conclusions

By comparing the conventional teaching mode with the air volleyball classroom teaching mode based on the OBE concept, this paper analyses in depth the influence of different teaching methods on students' cognition, physical fitness, and technique, providing empirical data for physical education teaching. By innovatively introducing the OBE concept and flipped-classroom elements, the teaching mode is optimised, theoretical knowledge and practical skills are better integrated, and the depth and breadth of students' learning are improved. The results of students' lesson plans and trial lectures demonstrate their understanding and ability to apply the new teaching model, which provides a new way to cultivate students' teaching skills. The applicability of this approach to other sports needs further verification, and the study lasted only one semester, so long-term effects were not verified. Future research could consider a longer follow-up to understand the effect of the new teaching mode on students' long-term development. Such research will help to fully understand the long-term impact of educational reform on students' comprehensive literacy.

Ethics statement

This study was approved by the Ethics Committee of Hebei Normal University for Nationalities. Participants gave informed consent through a process reviewed by the same committee, and the study was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki.

Fig. 6. Basic technical pre-test of each group. (A) for males and (B) for females.
Fig. 7. Post-test of each group's ball cushioning skill. (A) for males and (B) for females.
Fig. 9. Post-test of spiking skill in each group. (A) for males and (B) for females.
Fig. 10. Post-test of each group's serving skill. (A) for males and (B) for females.

Table 1. Test objects.
Table 3. Number of students scoring in each section of the lesson plan.
Table 4. Number of students scoring in each section of the trial lecture.
Management of an acute thermal injury with subatmospheric pressure. Objective: This article reports the first application of subatmospheric pressure management to a deep, partial-thickness human thermal burn. Methods: After cleaning the wound, the decision was made to treat the hand and distal forearm with subatmospheric pressure (V.A.C., KCI, Inc, San Antonio, Tex). The sponge was applied directly to the burned skin without additional interface at approximately 6 hours after injury. The dressing was maintained at a continuous negative pressure of 125 mm Hg over the next 40 hours, with interruption only for routine clinical evaluation at 5, 16, and 24 hours after initiation of treatment. This was accomplished by opening the dressing without completely changing it. The treatment was tolerated well by the patient, requiring no excessive pain medication. After the subatmospheric pressure treatment was stopped, the wound appeared to be of indeterminate depth and the patient was started on twice daily applications of silver sulfadiazine. Results: The clinical impression at this time was that the hand burn had not progressed but had stabilized and had minimal edema. He was followed as an outpatient and returned to work by 8 weeks. At approximately 4 weeks postinjury, his skin not only was functional but also appeared more normal, with less hyperemia than adjacent areas treated with topical antibacterials. Conclusion: The present case does not prove that subatmospheric pressure treatment prevents burn wound progression. However, when combined with the previously reported laboratory studies it suggests the need for further research. Currently, a prospective, randomized, blinded, controlled multicenter trial is underway to evaluate the clinical importance of these observations. Management of acute thermal injury is often frustrated by the phenomenon of burn wound progression. 
In this circumstance, heat-damaged tissue that is alive at presentation becomes progressively nonviable until the skin is found to be nonsalvageable and requires excision and grafting. The etiology of this process is unclear. It has been best described by Jackson as a zone of stasis in which, with increased vascular permeability, progressive edema, increased blood viscosity, and vascular thrombosis, the tissue dies.1 While limiting burn wound progression would be of clear benefit to the burn patient, no clinical studies have shown a way to prevent it.2 The V.A.C. (KCI, Inc, San Antonio, Tex) consists of an open-cell polyurethane ether foam with an embedded evacuation tube. The foam is sealed to the wound with an adherent drape, and subatmospheric pressure is applied to the evacuation tube. Previous studies have demonstrated the effectiveness of this device in helping to control edema and speed up the vascularization of wounds.3 Morykwas et al have also demonstrated, in a swine model of thermal injury, that the maximum depth of cell death could be decreased with application of subatmospheric pressure.4 We report the first application of subatmospheric pressure management to a deep, partial-thickness human thermal burn.

CASE REPORT

On August 2, 1995, a 26-year-old male electrician received a flash burn to his right upper extremity and face when exposed to the heat from a high-voltage electrical arc (Fig 1). The hand and digits were pale in color and dry on the dorsum, suggesting deep, partial-thickness burns (Fig 2). The injury was progressively more superficial proximally on the extremity. The clinical impression of 3 surgeons with experience in burn care was that the distal portion of the burn, ie, hand and forearm, would require excision and grafting. After cleaning the wound, the decision was made to treat the hand and distal forearm with subatmospheric pressure (V.A.C., KCI, Inc, San Antonio, Tex) and to apply silver sulfadiazine more proximally.
The sponge was applied directly to the burned skin, without an additional interface, at approximately 6 hours after injury. The dressing was maintained at a continuous negative pressure of 125 mm Hg over the next 40 hours, with interruption only for routine clinical evaluation at 5, 16, and 24 hours after initiation of treatment. This was accomplished by opening the dressing without completely changing it. The treatment was tolerated well by the patient, requiring no excessive pain medication. After the subatmospheric pressure treatment was stopped, the patient was started on twice-daily applications of silver sulfadiazine. The clinical impression at this time was that the hand burn had not progressed but had stabilized and had minimal edema (Fig 3). However, it was now of indeterminate depth. The patient was started on hand therapy, and the hand was kept elevated. The wound continued to epithelialize until it was clinically healed by Day 10, but the patient had received significant fingernail injury that persisted until the nail was completely replaced. The patient was discharged home on the 12th postinjury day. He was followed as an outpatient and returned to work by October. At approximately 4 weeks, his skin not only was functional but also had an excellent cosmetic result (Fig 4). In addition, the skin on the hand appeared more normal, with less hyperemia, than the skin of the shoulder, despite the fact that the hand had received the deepest burn consistent with the mechanism of injury (Fig 5).

(Figure: Handle of screwdriver (a) and screwdriver tip (b) used by the electrician described in the text. Note that a fingerprint melted into the handle and that the tip of the screwdriver has been rounded.)

DISCUSSION

Fifty years ago, Jackson put forth a paradigm for an understanding of the pathogenesis of burn wound progression.1 He described the wound as consisting of 3 concentric zones of injury. The most severe of these is the zone of coagulation.
It is irreversibly damaged and represents nonsalvageable dead tissue. To the other extreme is the zone of hyperemia. This tissue is minimally injured, resulting in an inflammatory response, and will usually heal spontaneously. In between is the zone of stasis. This is characterized by increased vascular permeability, edema, and progressive blood viscosity, leading to thrombosis and additional tissue death. It is this zone of stasis that represents the deep second-degree burn that is clearly viable tissue when the patient arrives but subsequently goes on to die and requires excision and grafting much in the manner of a third-degree or full-thickness burn. While initially Jackson thought that such capillary stasis and burn wound progression was an inevitable consequence of the original injury, Order et al 5 demonstrated reopening of the circulation in second-degree burns in a rat model more than a decade later. This led Jackson and others to consider the possibility of prohibiting the progression so as to minimize the potential need for surgery and perhaps to save lives. 6 However, in order to control this process, it would be necessary to understand the mechanism. Evaluation of the microcirculatory changes due to thermal injury has demonstrated the complex nature of the response. Early after the injury, endothelial cells swell, resulting in capillary narrowing and decreased flow. 7 The swelling of the endothelial cells contributes to capillary leak but may also be the result of free-radical mechanisms. 2,8,9 The capillary leak allows margination of cellular elements of the blood, platelet aggregation, and stimulation of inflammatory mediator response. This process begins in the first 3 to 24 hours depending on the severity of injury and continues for up to 48 hours after burn. 4,5,9 While much of the emphasis of microcirculatory clotting has been on the arteriole, it appears that venous occlusion may occur first, resulting in secondary arteriolar clotting. 
10-13 In addition to inflammation and progressive thrombosis, more direct mechanisms may cause progressive tissue damage. Zawacki and others have shown that dehydration due to the loss of the outer protective layers may contribute to burn wound progression.14-16 Systemic hypoperfusion, infection, malnutrition, and inadequate immune response are all important causes of worsening of the burn wound, and proper resuscitation and metabolic support limit this process.13,14 Efforts to limit burn wound progression have primarily concentrated on pharmacologic interventions in the thrombosis or inflammatory response. Robson et al showed that application of 1% methylprednisolone acetate in a guinea pig model of burn wounds decreased loss of dermal appendages and increased dermal perfusion, presumably by interfering with white blood cell adherence.17 However, this was not confirmed by subsequent investigators using clobetasol propionate.18 Use of monoclonal antibodies to prevent leukocyte adherence in a burn model did decrease burn size, speed up reepithelialization, produce thinner eschar, spare more hair follicles, and yield greater patency of vessels than controls.19,20 While Ehrlich found that a lazaroid could prevent burn wound progression, Melikian et al could not find an effect of the free-radical mechanism on burn wound progression using dimethyl sulfoxide, allopurinol, or polyethylene glycol-superoxide dismutase (PEG-SOD, a superoxide scavenger).9,21 Ehrlich has also demonstrated the importance of the clotting mechanism in this process by the use of ancrod, a protease derived from pit vipers that converts fibrinogen to a nonclotting molecule.22 By giving this to rats 3 days before creating experimental burns, he was able to limit the size of the burn.

(Figure: Note that the skin of the shoulder, which appeared more superficially burned than the hand on arrival, now has more discoloration than the skin of the hand.)
Heparin has also been used anecdotally to treat clinical burns by preventing worsening by thrombosis. 23 However, the most consistent effect has been seen with the use of the nonsteroidal anti-inflammatory drug ibuprofen. While early reports suggested a thromboxane mechanism to prevent burn wound progression, more recent studies have suggested that it works by blocking a plasmin inhibitor that would normally block fibrinolysis in the burn wound. 2,24-26 Finally, a topical form of ibuprofen (flurbiprofen) when applied to an acute burn model within 4 hours of injury has also been suggested to have a positive effect on vascularity. Despite these efforts, no definitive technique has been shown to be clinically effective in minimizing burn wound progression. This is because the pharmacologic methods described are either toxic or contraindicated or must be administered before or too early after injury to be effective. Since the average burn patient arrives at the hospital 3 hours after injury, 27 therapeutic interventions must take this into consideration. The ideal technique to stop burn wound progression would allow for a 3 or more hour delay before hospital treatment, would have no systemic effects, and would not interfere with other treatment methods. Morykwas et al evaluated the effect of subatmospheric pressure on acute burn wounds in a swine model. 4 He applied a relative negative pressure of 125 mm Hg in an artificially closed space to experimental wounds with control wounds on the same animal. When applied within 12 hours after injury, a significant improvement was found as measured by the maximum depth of cell death. Based on this evaluation, treatment periods as short as 6 hours were efficacious. In fact, application periods as long as 5 days were not significantly different from application periods as short as 6 or 12 hours. 
In addition, histologic evaluation demonstrated decreased inflammatory response in wounds treated with subatmospheric pressure as compared to controls. In the present study, subatmospheric pressure treatment was applied to an upper extremity in a patient with a flash burn that extended from his fingertips to over his shoulder. This is the first clinical application of subatmospheric pressure to an acute human burn injury. The treatment period was approximately 2 days, and treatment was begun approximately 6 hours after injury. Despite the clinical impression of 3 surgeons experienced in burn care that this would ultimately require excision and grafting, this was avoided. The wound healed without complication, and applying subatmospheric pressure to the acutely burned tissue did no harm. In addition, the skin of the hand and forearm, which initially appeared the most deeply burned, healed with less hyperemia and superior skin quality compared with the skin of the shoulder, which received a lesser injury. Unfortunately, there is no absolute method of evaluation that the burn surgeon may use to ascertain the depth of the burn at this early time point. Recent reports using scanning laser Doppler have generated some interest, but the technique is not without error and is not widely used to make such decisions.28 The evaluation of burn depth remains primarily a clinical decision. It is impossible to know with certainty whether this patient would have healed as well with alternative treatments such as silver sulfadiazine. Nonetheless, the severe nail damage, as seen in Figure 4, suggests that excision and grafting would have been necessary. A more intriguing issue is how subatmospheric pressure treatment improves the healing of the burn wound, as seen in the laboratory studies of Morykwas et al4 and as suggested by this clinical case. One possibility is that the device removes acute inflammatory mediators, such as free radicals and cytokines, that are involved in burn wound progression.
2,8,9,20,21 While this has not been proven in burns, it is clear from the studies in crush injury that certain toxins may be removed from acute wounds. 29 It is also possible that decreasing edema is an important mechanism to speed up the healing of the acute burn. With edema there is a decrease in vascular density, increased diffusion distance, possible vasospasm, thrombosis, and stasis in the microcirculation. Present observations suggest that subatmospheric pressure treatment does decrease wound edema by yet uncertain mechanisms. Finally, subatmospheric pressure treatment may provide an ideal environment for the healing wound by providing the damaged skin with the ideal water vapor pressure to avoid desiccation. [14][15][16] The present case does not prove that subatmospheric pressure treatment prevents burn wound progression. However, when combined with the previously reported laboratory studies it suggests the need for further research. Currently, a prospective, randomized, blinded, controlled multicenter trial is underway to evaluate the clinical importance of these observations.
Malnutrition and Associated Factors with Nutritional Status among Orphan Children: An Evidence-Based Study from Nepal

Background: Malnutrition is a common public health problem among children in low- and middle-income developing countries. Orphan children are a vulnerable and neglected group in society and are more prone to malnutrition. This study aims to identify the prevalence of underweight, stunting, and thinness, and the factors associated with nutritional status, among orphan children.

Methods: A quantitative method and an analytical cross-sectional design were used to assess the nutritional status and its associated factors among orphan children in Pokhara Valley, Nepal. A sample of 160 children was obtained by simple random sampling. A semi-structured questionnaire, a digital bathroom scale, and a stadiometer were used for data collection. Data management and analysis were done with Epi-Info, SPSS version 25, and WHO AnthroPlus.

Findings: The majority of children were malnourished (80.6%), with a prevalence of stunting of 55.1%, thinness of 13.8%, and overweight of 6.9%. The prevalence of underweight, stunting, and thinness was higher among boys (85.5%, 26.3%, and 15.8%), but overweight was more prevalent among girls (7.1%). Ethnicity, sex, age, duration of stay in the orphanage, and education of caregivers were associated with the nutritional status of orphan children (p < 0.05). Non-privileged children and children below 11 years were more prone to malnutrition.

Conclusion: Malnutrition is highly prevalent in orphan children and needs to be addressed. There is still limited research available on the nutritional status of orphan children in Nepal. Nutritional status should be monitored regularly for early identification and timely intervention to promote the nutritional health of orphan children.

INTRODUCTION

According to the World Health Organization (WHO), nutrition is the intake of food, considered in relation to the body's dietary needs.
Optimum nutrition is required for the physical and mental growth and development of children [1]. Malnutrition is a common public health problem among children in low- and middle-income countries [2-6]. In many countries, Demographic and Health Surveys (DHS) and national nutrition and surveillance systems have measured the height and weight of children below the age of 5 years, starting in the early 1990s. However, there is a scarcity of anthropometric data for school-aged children (5-14 years) [3]. Globally, 150.8 million children are stunted, 50.5 million are wasted, and 38.3 million are overweight, and 2.01 billion adults are overweight or obese. Children living in children's homes are among the most vulnerable, and malnutrition is a particular concern [24]. In Asia, the total orphan population is around 57.22 million, accounting for 5.8% of the total child population; Asia is home to nearly 60 million of these children. The highest rate of under-nutrition in the world is seen in Asia, where one in every two children is malnourished. The national nutrition survey report shows that about 62% of children aged 6-9 years are malnourished: 43.4% are stunted but not wasted, 9.3% are wasted but not stunted, and 9.1% are both stunted and wasted [25,26]. According to the State of Children of Nepal, 2014, a total of 16,617 children are living under the care and protection of 594 residential child care homes across the country, and they have been deprived of nutrition. The political situation has left over 5,000 children homeless, according to a UNICEF study, and of those children, 50% may be HIV positive and much more ill. 2.6 million children are working in Nepal, and nearly 5% of those working are in the cruelest forms of work [27].

*Address correspondence to this author at the School of Public Health, Busan Medical Campus, Inje University, South Korea; Tel: +82-011-836-2641; Fax: +82-051-896-7066; E-mail: immdh@inje.ac.kr
The number of children orphaned worldwide by the loss of one or both parents has increased in recent years. According to UNICEF, about 151 million children worldwide have lost one or both parents: 61 million in Asia, 52 million in Africa, 10 million in Latin America and the Caribbean, and 7.3 million in Eastern Europe and Central Asia, of whom 17.8 million are orphans due to the global HIV epidemic [28]. Every 2.2 seconds, a child somewhere in the world loses a parent [13]. Orphan children may experience reduced health, nutrition, and psychological well-being [7]. They are a vulnerable and neglected group in society [6,8-12] and are more prone to malnutrition [13]. Chronic undernutrition during childhood results in slower cognitive development and severe health impairment in later phases of life [14]. Inadequate dietary intake is a direct cause of malnutrition, while household food insecurity, inadequate maternal and child care, poor health services, and an unhealthy environment are indirect causes [6,15].

METHODOLOGY AND MATERIAL

A quantitative method and an analytical cross-sectional study design were used to assess the nutritional status of orphan children in Pokhara Metropolitan, Nepal, from June 2019 to October 2019. Children staying at an orphanage for more than 3 months and aged between 6 and 14 years were included in the study. Based on Nepal's population statistics, the number of orphan children aged 6-14 years in child homes in Pokhara Metropolitan was 702 [29]. Simple random sampling was adopted. The sample size was calculated using the formula

n = Z²pqN / (d²(N − 1) + Z²pq),

where Z is the standard normal variate at a 95% CI (1.96), N is the number of orphan children aged 6-14 years in child homes in Pokhara Metropolitan (702), p is the estimated proportion (0.16, based on a previous study [30]), q = 1 − p, and d is the margin of error (5%). The sample size for this study was thus calculated to be 160.
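The formula above can be checked numerically. A minimal sketch follows; the function name and the choice to round up to a whole participant are our own, not from the paper:

```python
import math

# Sample-size formula for a finite population, as given in the Methods:
# n = Z^2 * p * q * N / (d^2 * (N - 1) + Z^2 * p * q)
def sample_size(N, p, d=0.05, z=1.96):
    q = 1 - p  # q = 1 - p
    n = (z**2 * p * q * N) / (d**2 * (N - 1) + z**2 * p * q)
    return math.ceil(n)  # round up to a whole participant

# Values reported in the Methods: N = 702 orphan children, p = 0.16, d = 5%
print(sample_size(N=702, p=0.16))  # -> 160, matching the paper's sample size
```

The raw value is about 159.75, so rounding up reproduces the reported n = 160.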
A semi-structured, pretested, and predesigned questionnaire was used to collect information on age, gender, hygiene practices, etc. Details such as orphan status, reasons for stay, duration of stay in the orphanage, and age at admission were taken from orphanage records. Each child underwent anthropometric and personal hygiene assessments. Weight was measured with a bathroom weighing scale, which was regularly standardized against known standard weights. Personal hygiene was evaluated with a scoring system; data were collected on important hygiene aspects such as hair, skin, oral cavity, and nails, and depending on the scores, hygiene was graded as good (>8), fair (6-8), or poor (<5). Anthropometric status, namely weight-for-age, BMI-for-age, and height-for-age, was assessed. BMI-for-age Z-scores were calculated using the WHO AnthroPlus 2007 software, and the children's Z-scores were then compared with the existing World Health Organization growth standards (WHO, 2007). Data entry, management, and analysis were done with EpiData 3.1, SPSS version 25, and WHO AnthroPlus. The chi-square (χ²) test was performed to find associations between study variables. Age, religion, sex, ethnicity, duration of stay, reason for stay, and orphan status were the socio-demographic variables. Education, income source, and occupation were assessed as socio-economic variables. Personal hygiene, physical activity, and food consumption were assessed as behavioral variables. Nutritional status was the dependent variable. Ethical approval was obtained from the IRB of Pokhara University (IRB Ref. No. 127/076/077), the local government, and the respondents. Informed consent was taken from the respondents. The privacy of the information was maintained, and the data were used for research purposes only.

RESULTS

Of the 160 participants, more than half were female.
The majority of the respondents were followers of the Hindu religion (76.9%). The largest share of participants, 44.4%, were in the 12-14 years age group; the mean age was 10.7 years (SD 2.6), the minimum age was 6 years, and the maximum age was 14 years. The largest group of participants, 31.9%, had only a mother as parental status. Among the ethnic groups, the upper caste group was the largest (28.1%). The most common reason for being in the orphan home was poverty (36.9%), as shown in Table 1. The prevalence of underweight was 80.6%, and it was higher among boys than girls (85.5% vs. 72.2%). The prevalence of stunting, thinness, and overweight was 55.1%, 13.8%, and 6.9%, respectively. Moderate and severe stunting were more frequent in boys (22.4% and 3.9%) than in girls (13.1% and 3.6%), and moderate (15.8%) and severe (6.6%) thinness were also more prevalent among boys; overweight, however, was slightly higher among girls (7.1% vs. 6.6%). Table 2 shows the consumption pattern in terms of different food groups. Cereals, protein-rich foods such as pulses and lentils, other vegetables, roots and tubers, sugar, and fats and oils were consumed daily. Fried snacks and milk and milk products were mostly consumed once a week. Personal hygiene practices, namely washing hands before eating, washing hands after using the toilet, and washing hands with soap and water, were reported by 100% of participants (Table 3). The status of personal hygiene was assessed using a ten-point grading system [13], graded as good (>8 points), moderate (6-8 points), and poor (<5 points). It was found that 11.9% of study participants had good personal hygiene scores, 80% had moderate scores, and 8.1% had poor hygiene (Table 5).
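The associations reported in this study (p < 0.05) come from chi-square tests on contingency tables. The sketch below illustrates such a test for a sex-by-underweight comparison; the 2×2 counts are hypothetical, chosen only for illustration and not taken from the study's raw data, and the Pearson statistic is computed by hand rather than with a statistics package:

```python
# Hypothetical 2x2 contingency table (illustrative counts only):
# rows = boys, girls; columns = underweight, not underweight.
observed = [[65, 11],
            [61, 23]]

def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            # Expected count under the independence hypothesis
            exp = row_totals[i] * col_totals[j] / grand_total
            stat += (obs - exp) ** 2 / exp
    return stat

stat = chi_square(observed)
# For a 2x2 table, df = 1; the 5% critical value is 3.841, so a statistic
# above 3.841 corresponds to p < 0.05.
print(round(stat, 2), stat > 3.841)  # prints: 3.97 True
```

With these illustrative counts the statistic exceeds the 3.841 critical value, i.e. the association would be called significant at the 5% level, which is the form of conclusion the paper draws for sex, age, ethnicity, and duration of stay.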
DISCUSSION

In the present study, unlike the common perception that children's homes house only orphaned children, only 31.3% of the children had lost both parents. Interestingly, 36.9% of them cited poverty and education as reasons for seeking a children's home, and only 6.3% of children were there because their parents had been imprisoned for years. A study done in Kaski in 2017 found that 34.5% of children were in orphanages because their parents were not alive [16], which is similar to our result, possibly because it covered the same area and the same study population. On the contrary, a study conducted in orphanages in Bhubaneshwar, India, in 2018 found that the largest share of children (47.1%) were in orphanages because their parents were not alive [13]. A study conducted in orphanages in Bangladesh in 2013 found that the largest share of children (50.7%) were living in orphanages for educational purposes, which may reflect the poor economic status of their parents [17]. In the present study, the food frequency consumption pattern showed that 100% of respondents consumed cereals, pulses and lentils, vegetables, sugar, and fat products daily. A study conducted in Bhopal, India, in 2013 found that 100% of children consumed cereals, vegetables, and fat and oil products on a daily basis, similar to our study; however, only about 4% regularly consumed green vegetables, and 100% consumed milk and milk products, which is higher than in our study, possibly because of the younger age group of the children [18]. A study of orphaned adolescent girls in children's homes in Uganda in 2018 found that 97.7% consumed cereal products daily, whereas vegetable consumption, at about 50%, was half of ours, and dark green vegetable consumption, at 15%, was about one-quarter of ours [19]. In the present study, the prevalence of underweight was 80.6% among the study population.
A 2013 study in Bangladesh among orphan children aged 5-14 years found that 65% were underweight, similar to our findings [17]. A 2018 study among orphan children aged 6-14 years in India found that 55.7% were underweight [13]. Another Indian study from 2013, among orphan and non-orphan children aged 6-16 years, found that 45.7% were underweight [1]; this lower figure may reflect its small sample size, as it was a pilot study, and its inclusion of non-orphan children. The annual report of Nepal indicates that 27% of the under-five population is underweight [6,21], which is lower than in our study; that figure covers a younger age group, and it is evident that children in children's homes tend to come from poor families where adequate nutritious food was unavailable. In the present study, the prevalence of stunting was 55.1% among the study population. Our finding is supported by a 2019 Indian study of orphan children aged 6-14 years, which reported a stunting prevalence of 53.3% [13]. A 2018 study in Uganda among adolescent girls aged 10-19 years found 18.9% stunting [19], while a 2013 Indian study among orphan and non-orphan children aged 6-16 years found 37.1% stunting [1], which is closer to our findings. The high rate of stunting is not surprising, as children's-home residents are more likely to have grown up in poor conditions. According to the Nepal Demographic & Health Survey (NDHS), stunting is much higher among children from the lowest wealth quintile (49%) than from the highest wealth quintile (17%) [22]. The Annual Report of Nepal found that 36% of the under-five population is stunted, which is close to our result [21].
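The moderate/severe stunting categories reported above can be illustrated with a small height-for-age z-score classifier. The -2 and -3 SD cutoffs used here are the standard WHO reference values; they are an assumption of this sketch, not thresholds stated in the study text.

```python
def classify_stunting(haz: float) -> str:
    """Classify a height-for-age z-score (HAZ) into the stunting bands
    used for the moderate/severe breakdown in the results.
    Cutoffs (-2 SD moderate, -3 SD severe) follow the standard WHO
    growth reference and are assumed here, not quoted from the study."""
    if haz < -3:
        return "severe stunting"
    if haz < -2:
        return "moderate stunting"
    return "not stunted"
```

Applying such a classifier to each child's z-score and tallying the categories by sex would reproduce the kind of moderate/severe breakdown reported for boys and girls.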
In the present study, the prevalence of thinness was 13.8% among the study population. A 2019 Indian study of orphan children aged 6-14 years found that 25.3% had thinness [13], while a study of orphans in Bangladesh found thinness in 48% of children; this difference may reflect poorer standards of living and nutrition in Bangladesh [17]. A 2014 study in Gondar city, Ethiopia, among orphan children below age five found a prevalence of 9.9%, similar to our result [23]. Nationally, 10% of the under-five population is wasted, which is also close to our finding [21]. In the present study, the prevalence of overweight was 6.9% among the study population. A 2017 study of orphan and vulnerable children aged 6-18 years in Kaski district found 4.3% overweight [16], and a 2019 study in Douala, Cameroon, among orphan children up to 18 years found 1.7% overweight, broadly consistent with our findings [2]. We also found that girls followed hygiene practices better than boys, in line with a study of children in Indian orphanages [13] and with a 2010 study of primary school children aged 5-10 years in South Kolkata [20]. Although hygiene practices were relatively good, the nutritional status of the orphan children was very poor and critical, a condition that can contribute to child mortality. CONCLUSION Malnutrition is highly prevalent in children living in orphanages and needs to be addressed. Age, ethnicity, sex, and duration of stay at the orphanage were the major factors associated with malnutrition among orphan children. The high prevalence of underweight and stunting, together with thinness and overweight, indicates the severity of these children's overall health problems. Interestingly, a high percentage of the children are in children's homes because of poverty, education, or abandonment.
Studies on the nutritional status of orphan children in Nepal remain limited. Nutritional status should be monitored regularly so that problems are identified early and timely interventions can improve the nutrition of children living in orphanages. FUNDING AND CONFLICT OF INTEREST This research was conducted without any funding. We declare that we have no conflicting interests.
The Emerging Roles of Fox Family Transcription Factors in Chromosome Replication, Organization, and Genome Stability The forkhead box (Fox) transcription factors (TFs) are widespread from yeast to humans. Their mutations and dysregulation have been linked to a broad spectrum of malignant neoplasias. They are known as critical players in DNA repair, metabolism, cell cycle control, differentiation, and aging. Recent studies, especially those from simple model eukaryotes, revealed unexpected contributions of Fox TFs to chromosome replication and organization. More importantly, besides functioning as canonical TFs in cell signaling cascades and gene expression, Fox TFs can directly participate in DNA replication and determine the global replication timing program through a transcription-independent mechanism. Yeast Fox TFs preferentially recruit the limiting replication factors to a subset of early origins on chromosome arms. Owing to their dimerization capability and distinct DNA binding modes, Fkh1 and Fkh2 also promote origin clustering and the assemblage of replication elements (replication factories). They can mediate long-range intrachromosomal and interchromosomal interactions and thus regulate four-dimensional chromosome organization. The novel aspects of Fox TFs reviewed here expand their roles in maintaining genome integrity and coordinating multiple essential chromosome events. These insights will inevitably translate into new understanding of, and treatment strategies for, Fox TF-associated human diseases, including cancer. An Evolutionary Overview of Fox Family Transcription Factors (TFs) The forkhead box (Fox) family of transcription factors (TFs) spans from unicellular eukaryotes to humans but is absent in plants (Figure 1). The Fox family has four members, Fkh1, Fkh2, Fhl1 and Hcm1, in Saccharomyces cerevisiae, and has expanded into one of the largest classes of TFs in humans.
The Fox family TFs share a conserved, structurally related DNA-binding domain (DBD), the forkhead domain. This domain belongs to a much larger superfamily, the winged-helix superfamily. The winged-helix/forkhead class of TFs is characterized by a 100-amino-acid monomeric DBD folded into a variant of the helix-turn-helix motif with three α helices and two characteristic large loops, or so-called "wings" [1,2]. The evolutionary history was inferred using the neighbor-joining method in MEGA7. The bootstrap consensus tree inferred from 100 replicates was taken to represent the evolutionary history of the taxa analyzed. Branches corresponding to partitions reproduced in fewer than 50% of bootstrap replicates are collapsed. The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test (100 replicates) is shown next to the branches. The evolutionary distances were computed using the Poisson correction method and are in units of the number of amino acid substitutions per site [3]. Sc: Saccharomyces cerevisiae, Sp: Schizosaccharomyces pombe, Ce: Caenorhabditis elegans, Dr: Danio rerio, Dm: Drosophila melanogaster, Mm: Mus musculus, Xl: Xenopus laevis, Hs: Homo sapiens. The DNA Sequence Bound by Various Fox TFs The DNA-binding specificities of different forkhead proteins have been intensively examined (Figure 2). The canonical forkhead target sequence is RYAAAYA, referred to as the forkhead primary (FkhP) motif. A similar variant, AHAACA, was identified during in vitro selection and protein-binding microarray experiments for several Fox proteins and was designated the forkhead secondary (FkhS) motif. A third motif, (G)ACGC, is called the FHL motif; it is the preferred binding site of FoxN1, N4, and Fhl1 in vitro and in vivo [4].
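The consensus motifs above use IUPAC degenerate nucleotide codes (R = A/G, Y = C/T, H = A/C/T). A minimal sketch of scanning a sequence for the FkhP consensus, by expanding the degenerate codes into a regular expression, might look like this (single-strand scanning only; reverse-complement matching is omitted for brevity):

```python
import re

# IUPAC degenerate nucleotide codes needed for the motifs discussed above
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "H": "[ACT]"}

def motif_to_regex(motif: str) -> re.Pattern:
    """Expand a degenerate motif such as the FkhP consensus RYAAAYA
    into a compiled regular expression."""
    return re.compile("".join(IUPAC[base] for base in motif))

def find_sites(sequence: str, motif: str = "RYAAAYA"):
    """Return 0-based start positions of motif matches on the given
    strand of the sequence."""
    pattern = motif_to_regex(motif)
    return [m.start() for m in pattern.finditer(sequence.upper())]
```

For example, `find_sites("CCGTAAACACC")` returns `[2]`, since GTAAACA matches R=G, Y=T, then AAA, Y=C, A.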
Besides various sequence specificities, the roles of Fox TFs might be regulated through additional layers such as protein homo- or hetero-oligomerization and distinct DNA binding patterns, which are discussed in Section 5 in more detail. The Role of Fox TFs in DNA Replication DNA replication is a fundamental process essential for all living beings. Each cell needs to accurately duplicate and distribute the whole set of genetic information into two daughter cells. DNA replication is strictly controlled in an orchestrated manner according to the different stages of the cell cycle in all eukaryotes. It is crucial for genome stability maintenance, cell proliferation, and cell fate decisions. Over the past decades, Fox family TFs have been demonstrated to play critical roles in regulating DNA replication and the cell cycle through both transcription-dependent and transcription-independent mechanisms. Fox TFs Regulate DNA Replication in a Transcription-Dependent Way Early work on Fox TFs focused on their function as general transcription factors that control gene expression and thereby affect DNA replication. In Saccharomyces cerevisiae, Fkh1 and Fkh2 proteins bind the promoters of the "CLB2 cluster", which contains 33 genes, including CLB1, CLB2, SWI5, ACE2, CDC5, and CDC20. They act as transcription factors to ensure the cell-cycle-regulated expression of these genes and then drive progression through mitosis after binding to the Cdk1 kinase [5]. More direct evidence has come from the role of Fhl1 in ribonucleotide reductase (RNR) gene expression. Ribonucleotide reductase catalyzes the rate-limiting step in the de novo biogenesis of deoxyribonucleotide triphosphates (dNTPs). It usually comprises a homodimer of the large subunit, Rnr1, and a heterodimer of two small subunits, Rnr2 and Rnr4. Another large subunit, Rnr3, is only induced through a multi-level surveillance system when cells suffer replication stress or DNA damage [6,7].
Heterozygous deletion of FHL1 reduces transcription of RNR1 and RNR3 (but not RNR2 and RNR4). Chromatin immunoprecipitation (ChIP) shows that Fhl1p binds to the promoter regions of RNR1 and RNR3. The ∆fhl1/FHL1 mutant confers a decrease in dNTP levels and an increase in hydroxyurea (HU) sensitivity. This study suggests that a Fox TF can, from another angle, affect DNA replication by regulating the supply of the building blocks of nucleic acids [8]. Similar to Fox TFs in yeast, the expression of FOXM1B is increased at the G1/S transition in regenerating liver. FOXM1B protein directly binds the CDC25B promoter region and regulates the expression of cell cycle proteins essential for hepatocyte entry into DNA replication and mitosis [9]. FOXM1 controls transcription of the mitotic regulatory genes Cdc25B, Aurora B kinase, survivin, centromere protein A (CENPA), and CENPB in both mouse embryonic fibroblasts (MEFs) and human osteosarcoma cells. Moreover, FoxM1 is also essential for the expression of Skp2 and Cks1. The latter two are substrate-recognition subunits of the Skp1-Cullin 1-F-box (SCF) ubiquitin E3 ligase that targets p21Cip1 and p27Kip1, the CDK inhibitor (CDKI) proteins, for degradation during the G1/S transition [10]. Therefore, FOXM1 deficiency leads to elevated nuclear levels of these CDKI proteins, which may account for a significant decrease in proliferating cells and an increase in apoptotic cells [11]. Cooperation of FOXM1 and AR accelerates DNA synthesis and cell proliferation by affecting CDC6 gene expression [12]. FOXO3 can form a complex with DDB1 and compete with the DDB1-PCNA interaction [13]. There is also evidence that FOXM1 is regulated by B-Myb, a key TF in the cell cycle regulation of somatic cells that is implicated in different types of human cancer. B-Myb is ubiquitously expressed in various cell types.
However, the levels of B-Myb in embryonic stem cells (ESCs), embryonic germ cells, and embryonic carcinoma cells are over 100 times higher than those in normal proliferating cells. B-Myb-ablated ESCs show significantly decreased expression of FOXM1 and c-Myc. ChIP results reveal a specific enrichment of B-Myb at the FOXM1 locus in the binding site 2 region (BS2) [14]. Moreover, through a systematic screen, Anders et al. identified FoxM1 as a substrate of cyclin D1-CDK4 and cyclin D3-CDK6. FOXM1 protein is stabilized and activated after phosphorylation by CDK4/6 and then plays its role in the expression of G1/S-phase genes. Meanwhile, it suppresses the levels of reactive oxygen species (ROS), maintains genome stability, and protects cells from ROS-induced senescence. In conclusion, Fox TFs function in signaling cascades and gene expression to regulate DNA replication, cell cycle progression, and cell fate decisions from yeast to human. Besides such exquisite step-by-step assembly of the replication machine at each origin, DNA replication is also governed by a global temporo-spatial program throughout the genome. Unlike bacteria, Eukarya exploit a large number of origins, ranging from ~500 in yeast to ~50,000 in humans. Interestingly, origins do not fire simultaneously but follow a particular timing program [33][34][35][36]. The timing program is established in the late G1 phase, at what is called the timing determinant point [34]. Based on a series of studies in yeast from several groups, a limiting-factor model was proposed for the determination of replication timing. Firing factors such as Sld3 and Cdc45 are available in limited amounts relative to the total number of origins in budding yeast. Overexpression of these factors often results in the advanced firing of some late origins [37][38][39]. Meanwhile, Sld3 and Cdc45 are enriched at early origins in G1 in a DDK-dependent manner [38,40].
The essential role of DDK is attributed to MCM phosphorylation, which mediates the recruitment of Sld3-Cdc45 through direct association with a basic patch motif within Sld3 [15,23]. Fkh1 was first implicated in modulating late-origin firing and the heterochromatin structure of the mating-type locus in a genetic study in budding yeast [41]. Its exact role in DNA replication was not known until seminal studies from Aparicio and his colleagues [42]. Using quantitative genome-wide bromodeoxyuridine immunoprecipitation sequencing (BrdU-IP-seq), they uncovered that Fkh1 and Fkh2 somewhat redundantly determine the firing of a subset (~30%) of early origins. Bioinformatic analysis predicted that these origins contain more than one Fkh binding site (FBS), which was validated by ChIP. FKH1 or FKH2 overexpression advances the initiation timing of many origins throughout the genome, resulting in a higher total level of origin firing in the early S phase. On the other hand, deletion of FKH1 and FKH2, or of their binding sites proximal to Fkh-activated origins, results in delayed activation of these origins. As a consequence, other origins, referred to as Fkh-repressed origins, become activated in the absence of FKH1 and FKH2, likely due to reduced competition from Fkh-activated origins for dose-limiting replication initiation factors. In an independent study, through high-throughput yeast two-hybrid screens, Fang et al. identified two novel Dbf4 interactors: Fkh1 and Fkh2 [23]. ChIP analysis showed that Fkh TFs are required for the enrichment of Dbf4 at a subset of early origins in G1 but not for the recruitment of pre-RC components such as ORC and MCM. Next, using purified proteins and biotin-labeled origin DNA, they reconstituted the recruitment of Dbf4 to these early origins in vitro. The minimal requirements for Dbf4 recruitment are Fkh and an origin bearing an FBS (FBS+). This indicates that pre-RC assembly is not a prerequisite for Dbf4 recruitment.
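The limiting-factor competition described above can be caricatured in a toy model: FBS+ origins recruit the dose-limiting factor preferentially, so removing the preference (as in fkh1∆fkh2∆ cells) frees the pool for formerly repressed origins. The pool size and the strict FBS+ priority below are simplifying assumptions for illustration, not measured parameters.

```python
import random

def assign_firing(origins: dict, dbf4_pool: int, seed: int = 0) -> set:
    """Toy model of competition for a dose-limiting initiation factor.
    `origins` maps origin name -> True if it carries an Fkh binding
    site (FBS+). FBS+ origins win the competition outright; any
    leftover copies of the limiting factor go to FBS- origins at
    random. Returns the set of origins that fire early."""
    rng = random.Random(seed)
    fbs_plus = [name for name, has_fbs in origins.items() if has_fbs]
    fbs_minus = [name for name, has_fbs in origins.items() if not has_fbs]
    rng.shuffle(fbs_plus)
    rng.shuffle(fbs_minus)
    ranked = fbs_plus + fbs_minus       # FBS+ origins outcompete FBS- ones
    return set(ranked[:dbf4_pool])      # only a pool-sized subset fires early
```

Running the model with every origin marked FBS- (mimicking fkh1∆fkh2∆) lets previously outcompeted origins claim the pool, echoing the activation of Fkh-repressed origins described in the text.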
These findings demonstrated that Dbf4 is barely able to bind origins per se and that Fkh TFs are sufficient to recruit Dbf4 to the FBS+ group of origins. Very interestingly, Tomoyuki and colleagues found that Dbf4 is recruited to the early origins near centromeres through another mechanism [16]. Using GFP-labeled PCNA, a sliding clamp for DNA polymerases, they noticed that replication foci initially localize near the spindle pole body (SPB, equivalent to the centrosome in metazoa). As with PCNA, the initiation factors Cdc7-Dbf4 and Sld3-Sld7 localize at centromeric regions from telophase to G1 phase. In the absence of Ctf19, a component of the kinetochore complex (also called COMA, Ctf19p-Okp1p-Mcm21p-Ame1p), the formation of Sld7 foci near the SPB is diminished. ChIP-qPCR analysis of ctf19∆ cells revealed normal enrichment of Dbf4 at early origins such as ARS606 or ARS607 but reduced binding of Dbf4 to early origins located in pericentromeric regions. Collectively, these two independent studies elucidated that Dbf4 is preferentially recruited to early origins through distinct mechanisms in a chromatin-context-dependent manner. The very C-terminal 50 amino acids of Dbf4 mediate its interaction with Fkh1 and Fkh2. The interaction-defective mutant, dbf4∆C, phenocopies fkh1∆ alleles in terms of origin firing. More importantly, direct fusion of the DNA-binding domain (DBD, also called forkhead) of Fkh1 to Dbf4∆C fully restores Fkh-activated origin firing. As a control, fusion of a DNA-binding-defective forkhead mutant with Dbf4∆C results in no rescue at all. These findings convincingly demonstrated that the DNA-binding activity, but not the transcription activation activity, of Fkh TFs is necessary for the recruitment of Dbf4. In other words, Fkh TFs determine DNA replication timing through direct physical interaction with DDK, entirely independent of their transcriptional roles.
Intriguingly, genome-wide replication profiles show that Dbf4 C-terminal fusion with either forkhead or an epitope interferes with the early replication of pericentromeric origins. It was noticed that the addition of a C-terminal tag may specifically abolish the interaction of Dbf4 with Ctf19. In addition to its role as an essential regulatory subunit of DDK, Dbf4 interacts directly with Sld3, which may contribute to the direct recruitment of these downstream limiting factors. This represents the first clue that Dbf4 may play a direct role in regulating DNA replication in a DDK-independent way. These studies depict how early origins in different chromosome contexts compete for DDK, the upstream rate-limiting factor, in determining the replication timing program in G1. Rif1-PP1 Phosphatase Negatively Regulates Replication Initiation to Counteract Fox TFs Contrary to the stimulatory role of Fkh1 and Fkh2 in origin firing, Rif1-mediated PP1 phosphatase inhibits replication initiation by reversing MCM phosphorylation. Rif1 (RAP1-interacting factor) was originally identified as a telomere-binding factor that regulates telomere length in yeast [43]. The Masai group first discovered a critical role of Rif1 in replication timing in human cells [44]. Rif1 colocalizes specifically with the mid-S replication foci and establishes the mid-S replication domains that are restrained from being activated in early S phase. Rif1 prolongs the embryonic S phase at the Drosophila mid-blastula transition [45]. The Donaldson group showed that deletion of RIF1 increases the proportion of hyperphosphorylated Mcm4 and partially compensates for the limited DDK activity in a temperature-sensitive (ts) mutant of the catalytic subunit of DDK, cdc7-1, in yeast [46]. Rif1 has two conserved N-terminal motifs, RVxF and SILK, which directly associate with Glc7, the sole protein phosphatase 1 (PP1) in budding yeast [47].
Mutation of these motifs increases MCM phosphorylation and thus suppresses the growth defects of cdc7-4 and dbf4-1 mutants. ChIP results confirmed that the Rif1-PP1 interaction is necessary for PP1 enrichment at late origins in both S. cerevisiae and S. pombe. After docking onto the pre-RC through Rif1, PP1 then reverses the MCM phosphorylation carried out by DDK and represses origin firing. Rif1 can also mediate MCM dephosphorylation at replication forks, and the stability of dephosphorylated replisomes strongly depends on Chk1 activity in animals [48]. Interestingly, MCM phosphorylation is regulated by additional mechanisms [49]. After loading, MCM is SUMOylated; this modification peaks in G1, declines during S phase, then rises again in the M phase. DDK is required for the S-phase decline of MCM SUMOylation. SUMOylation of Mcm6 increases its interaction with Glc7, promoting MCM dephosphorylation. Besides MCM, RIF1-PP1 protects origin-bound ORC1 from premature phosphorylation and consequent degradation by the proteasome in G1 [50]. Meanwhile, Rif1 also counteracts DDK phosphorylation of Sld3. Very recently, Garzón et al. showed that human RIF1-PP1 protects nascent DNA from over-degradation by DNA2 at stalled replication forks and limits phosphorylation of WRN at sites implicated in resection control [51]. Bringing things full circle, the PP1/Rif1 interaction is downregulated by phosphorylation of Rif1, most likely by CDK/DDK [52]. Indeed, putative conserved DDK and CDK phosphorylation sites were found adjacent to the protein phosphatase 1 docking domains in both S. cerevisiae and S. pombe Rif1. When nine putative DDK or CDK sites in Rif1 are changed to alanine, the temperature sensitivity of cdc7-1 is enhanced, whereas changing them to mimic phosphorylation (aspartic acid) has the opposite effect. In conclusion, MCM loading and pre-RC assembly occur at all origins throughout the genome, which means that all of them are licensed for replication.
However, the replication timing and the efficiency of each origin are determined by the phosphorylation of MCM. This critical event is precisely controlled by the protein kinase DDK and the phosphatase PP1, whose recruitment is mediated by Fox TFs (or COMA at pericentromeres) and Rif1, respectively (Figure 3a). The Role of Fox TFs in Origin Clustering, Relocalization and Replication Factories Besides helping "the early birds catch the worm", that is, recruiting the limiting initiation factors to early origins [39], there is a great deal of evidence that 3D chromatin structure represents another dimension of replication timing regulation. For instance, even when the replication limiting factors are overexpressed, RPD3 needs to be knocked out to activate the dormant origins [37]. There are excellent reviews on epigenetic determinants and dynamic chromosome organization of replication timing [18,34,35,[53][54][55][56]. Fox TFs were first found to be required for the clustering of early origins in G1 by the Aparicio group [42]. They observed a non-random distribution of Fkh-activated and -repressed origins. Origins of each class, not just CEN- and TEL-proximal ones, often cluster linearly along the chromosome. Indeed, 4C (chromosome conformation capture-on-chip) reveals both intrachromosomal and interchromosomal interactions of Fkh-activated origins in G1 phase in an Fkh-dependent manner. Fox TFs do not participate in the formation of topologically associating domains (TADs), but they mediate the long-range interactions of origins at TAD boundaries [57]. On the other hand, the involvement of telomere-binding proteins such as Rif1 as global regulators of the replication timing of subtelomeric and many internal origins implies a role for these proteins in the organization or localization of origins within the nucleus.
Palmitoylation of Rif1 regulates the association of telomeres with the nuclear periphery, suggesting that palmitoylated Rif1 anchors chromatin to the nuclear membrane [58]. Origins located at the nuclear periphery often replicate late, whereas early origins are often observed in the nuclear interior during G1 [59,60]. When the late origin ARS501 in the subtelomeric region of chromosome V-R is replaced by the Fkh-activated origin ARS305, the "new" origin (ARS305V-R) loses early firing in fkh1∆fkh2∆ cells [61]. If Fkh1 is induced in G1 phase, ARS305V-R regains early replication in the succeeding S phase. Using this Fkh1-induced origin activation system, Zhang et al. recently labeled the origin ARS305V-R with tetO/TetR-Tomato [62]. They observed that this subtelomeric origin repositions from the nuclear periphery to the interior upon Fkh1 induction in G1 and replicates early in S. This phenomenon, called Fkh1-dependent origin relocalization, disappears in cdc7-4 and cdc45-1 ts mutants as well as in the Fkh-interaction-defective mutant dbf4∆C. Moreover, an MCM4-14D mutant mimicking the DDK-phosphorylated status can bypass the requirement for DDK. Therefore, origin mobility depends on MCM phosphorylation by Fkh1-mediated DDK and subsequent Cdc45 loading [62]. These seminal studies argued that the replication of eukaryotic chromosomes is organized temporally and spatially in a four-dimensional manner within the nucleus through Fox TFs and epigenetic elements. Origin clustering enables cooperativity between origins in the recruitment of the limiting initiation factors Dbf4, Sld3, and Cdc45. Such assemblages naturally accommodate the observed concentration of replication factors and DNA synthesis, i.e., the replication foci, and thus represent in vivo evidence supporting the theory of replication factories (Figure 3b).
Dimerization of Fkh Contributes to Origin Clustering and Dynamic Localization Precise spatial and directional arrangement of Fkh1/2 sites is crucial for efficient binding of the Fkh1 protein and for early firing of the origins [63]. Both Fkh1 and Fkh2 harbor a domain-swapping motif (DSM) that allows for either homo- or hetero-dimerization [64], which underlies and accounts for the observed origin clustering. Crystal structures of the human FoxP2 and FoxP3 forkhead DNA-binding domains also reveal the possibility of forming domain-swapped dimers, which may "catch" two origins [65,66]. An fkh1 allele with the DSM mutated (fkh1-dsm) cannot dimerize; the mutant protein still binds origins in vivo but fails to cluster them. Therefore, Fkh1/2 dimers perform a structural role in the spatial organization of chromosomal elements with functional importance. However, such mutations have only a subtle effect on transcription, suggesting that the different binding patterns of Fkh determine its distinct roles in transcription versus replication/chromatin organization. Interestingly, Fkh1/2 bind strongly at their transcriptional target genes of the CLB2 group, whereas their binding to replication origins and the recombination enhancer is relatively weak and dynamic [64]. Summary and Prospects Fox TFs are well known to maintain genome stability through participation in the DNA damage response and repair. Here, we summarize their crucial physiological roles in DNA replication, providing a new perspective on this highly conserved family of TFs, which participates in most, if not all, essential biological processes critical for cell fate decisions, including cell cycle progression, cell proliferation, cell renewal, cell differentiation, cell migration, and cell survival. Additionally, the expression of Fox TFs is frequently upregulated in many types of tumors.
For example, the upregulation of FOXM1 expression is an early event during cancer initiation, progression, invasion, metastasis, and drug resistance [67,68]. Therefore, the novel functions of Fox TFs in DNA replication and chromosome organization will inevitably shed new light on related genome-instability diseases. Despite the rapidly growing knowledge of Fox TFs in the regulation of chromosome replication and structure, some key scientific questions remain, for example: 1. Do FOX TFs determine replication timing in higher eukaryotes? Which FOX TFs are required? 2. In addition to being a critical DDK regulator, do FOX TFs have other transcription-independent roles in DNA replication? 3. Are there more FOX TFs participating in regulating DNA replication and maintaining genomic stability? 4. Do FOX TFs participate in other chromosome processes such as chromosome segregation? 5. Are FOX TFs involved in higher-order chromosome organization in higher eukaryotes?
HOXB13, a Target of DNMT3B, Is Methylated at an Upstream CpG Island, and Functions as a Tumor Suppressor in Primary Colorectal Tumors Background A hallmark of cancer cells is hypermethylation of CpG islands (CGIs), which probably arises from upregulation of one or more DNA methyltransferases. The purpose of this study was to identify the targets of DNMT3B, an essential DNA methyltransferase in mammals, in colon cancer. Methodology/Principal Findings Chromatin immunoprecipitation with DNMT3B specific antibody followed by CGI microarray identified genes with or without CGIs, repeat elements and genomic contigs in RKO cells. ChIP-Chop analysis showed that the majority of the target genes including P16, DCC, DISC1, SLIT1, CAVEOLIN1, GNA11, TBX5, TBX18, HOXB13 and some histone variants, that harbor CGI in their promoters, were methylated in multiple colon cancer cell lines but not in normal colon epithelial cells. Further, these genes were reactivated in RKO cells after treatment with 5-aza-2′-deoxycytidine, a DNA hypomethylating agent. COBRA showed that the CGIs encompassing the promoter and/or coding region of DCC, TBX5, TBX18, SLIT1 were methylated in primary colorectal tumors but not in matching normal colon tissues whereas GNA11 was methylated in both. MassARRAY analysis demonstrated that the CGI located ∼4.5 kb upstream of HOXB13 +1 site was tumor-specifically hypermethylated in primary colorectal cancers and cancer cell lines. HOXB13 upstream CGI was partially hypomethylated in DNMT1−/− HCT cells but was almost methylation free in cells lacking both DNMT1 and DNMT3B. Analysis of tumor suppressor properties of two aberrantly methylated transcription factors, HOXB13 and TBX18, revealed that both inhibited growth and clonogenic survival of colon cancer cells in vitro, but only HOXB13 abolished tumor growth in nude mice. 
Conclusions/Significance This is the first report that identifies several important tumor suppressors and transcription factors as direct DNMT3B targets in colon cancer and as potential biomarkers for this cancer. Further, this study shows that methylation at an upstream CGI of HOXB13 is unique to colon cancer. Introduction Symmetrical methylation of DNA at position 5 of cytosine within a CpG dinucleotide is a major epigenetic modification (~5% of the total cytosine in the mammalian genome), although a small amount of 5-hydroxymethylcytosine (5hmC), generated from 5-meC by a methylcytosine dioxygenase, has recently been detected in certain cell types [1][2][3]. Very recently it has been shown that cytosine methylation at non-CpG sites, although rare, is involved in gene silencing in mammals [4]. DNA methylation is essential for mammalian development. DNA hypermethylation suppresses spurious promoters located within the repeat elements and proviruses in the mammalian genome, whereas hypomethylation induces genomic instability [5,6]. DNA methylation is also involved in the regulation of genomic imprinting, inactivation of the silent X chromosome in females, and expression of certain tissue-specific genes [1,6]. In humans, alterations in genomic methylation patterns are linked to imprinting disorders and other human diseases including cancer [7][8][9]. Although CpG is usually underrepresented in much of the genome, short (500-2000 bp) CpG-rich regions, designated CpG islands (CGIs), are predominantly located in the proximal promoter regions of almost 50% of mammalian genes. These regions are usually methylation free in normal cells, with the exception of imprinted alleles and genes on the inactive X chromosome. Recent high-throughput genome-wide DNA methylation analyses have identified many more CGIs, located distal to promoters, that are tissue-specifically methylated [5].
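CpG islands like those discussed above are typically called computationally from the observed/expected CpG ratio of a sequence window. A minimal sketch of that calculation follows; the ~0.6 ratio and >50% G+C thresholds are the commonly used Gardiner-Garden and Frommer criteria, assumed here for illustration rather than taken from this study.

```python
def cpg_observed_expected(seq: str) -> float:
    """Observed/expected CpG ratio for a DNA window:
    (count of CG dinucleotides * window length) / (C count * G count).
    A ratio above ~0.6 together with >50% G+C content is the commonly
    used criterion for calling a CpG island (assumed thresholds)."""
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    # Count overlapping CG dinucleotides
    cg = sum(1 for i in range(n - 1) if seq[i:i + 2] == "CG")
    if c == 0 or g == 0:
        return 0.0
    return (cg * n) / (c * g)
```

In practice this ratio would be computed in sliding windows of the 500-2000 bp scale mentioned in the text, with adjacent qualifying windows merged into one island call.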
Furthermore, methylation also occurs in the coding regions of active genes, and reversible DNA methylation can regulate gene expression in response to stimuli such as estrogen treatment and membrane depolarization [6]. DNA methylation in mammalian cells is established and maintained by DNA (cytosine-5) methyltransferases (DNMTs). Methylation is initiated by the highly homologous DNMT3A and DNMT3B, which prefer unmethylated DNA as the substrate [1,10]. DNA methylation is heritably propagated by DNMT1, which prefers hemimethylated DNA as substrate. All three DNMTs are essential for development in mammals [11,12]. Among these three enzymes, DNMT3B is directly linked to different diseases. For example, mutation of the DNMT3B gene causes immunodeficiency, centromeric instability and facial anomalies (ICF) syndrome, a rare human disorder, due to altered methylation of minor satellite repeats [13] and of genes regulating immune function and neuronal development [14]. Thus, DNMT3B deficiency in these patients cannot be compensated by other DNMTs. Studies in mutant mice have shown that DNMT3A and DNMT3B methylate distinct as well as overlapping regions of the genome [12]. For example, DNMT3A2 catalyzes methylation of imprinted genes in germ cells whereas tandem repeat elements are methylated by both DNMT3A and DNMT3B [2]. DNMT3B has also been linked to type 2 diabetes by regulating mitochondrial DNA copy number through fatty acid-induced non-CpG methylation of PGC-1α [4]. Emerging studies have shown that a variety of cofactors specifically target DNMTs to distinct chromosomal regions in vivo [15], as these enzymes demonstrate specificity only towards CpG base pairs in vitro [2]. Gene silencing by DNMTs occurs predominantly by recruitment of repressors that include methyl-CpG binding proteins (MBDs) and corepressors such as histone deacetylases (HDACs) and histone methyltransferases (HMTs), resulting in distortion of local chromatin structure [3,9,16].
Hypermethylation of CpG islands (CGIs) is a common epigenetic event in almost all malignancies [7,9]. Upregulation of DNMT3B is also a characteristic of many cancer cells [17]. For example, in sporadic breast carcinoma, 30% of patients showed increased expression of DNMT3B compared to a minimal increase (3-5%) in DNMT1 and DNMT3A [18]. Significantly higher expression of DNMT3B was observed in acute myeloid leukemia compared to normal myeloid cells [19]. DNMT3B overexpression was associated with high tumor grade and CIMP (CpG island methylator phenotype) in colon cancer [17]. Furthermore, depletion of DNMT3B, but not DNMT3A, induced apoptosis specifically in human cancer cells [20]. It has also been reported that upregulation of DNMT3B is more dramatic and more frequent than that of DNMT1 and DNMT3A in cancers including bladder and colon [21]. Studies in a mouse model have shown that overexpression of Dnmt3b but not Dnmt3a promoted colon tumorigenesis in Apc Min/+ mice [22]. These observations suggest that DNMT3B may play a causal role in tumorigenesis. Different groups have identified methylation targets using different techniques [14,22]; in the present study, we identified direct DNMT3B target genes in colon cancer cells by performing chromatin immunoprecipitation followed by CpG island microarray analysis (ChIP-on-chip). Many DNMT3B targets are embedded in CpG islands and some are known tumor suppressors. We also report the methylation status of some of these genes with potential growth suppressor properties in primary colorectal tumors and colon cancer cell lines. Further, we examined the tumor suppressive characteristics of two important transcription factors, HOXB13 and TBX18, in colon cancer cells. Mice Nude mice were purchased from the Jackson Laboratory. All mice were housed, handled, and euthanized in accordance with federal and institutional guidelines under the supervision of the Ohio State University Institutional Animal Care and Use Committee.
All animals used in this study were handled in strict accordance with good animal practice as defined by the relevant national and/or local animal welfare bodies. Western blot analysis Affinity purified antibodies against DNMT3A and DNMT3B were used for western blot analysis as described [24,25]. Anti-Flag antibody was from Sigma. Primary Human Tumors The tumor samples were obtained from patients at the James Cancer Hospital (The Ohio State University). Complete pathologic classification is available for all tumor samples studied. All tissues used for this study were part of an institutional review board-approved protocol at the Ohio State University College of Medicine. ChIP on Chip assay Chromatin immunoprecipitation (ChIP) assay was performed as described [26] with some modifications. ChIP was performed on formaldehyde cross-linked chromatin (DNA fragmented to ~600 bp to 3000 bp by sonication) from 10^8 RKO cells with antibody against DNMT3B [26,27]. The anti-DNMT3B antibodies raised in our laboratory do not cross-react with each other or with DNMT1 [28]. We used affinity purified DNMT3B antibodies to pull down DNA from formaldehyde cross-linked chromatin prepared from RKO cells. The chromatin was cleared with preimmune IgG and protein A beads. The precipitated DNA was dissolved in RIPA buffer and subjected to a second round of immunoprecipitation with the same antibody to minimize pull down of false positive targets. This DNA was then separated on an agarose gel and DNA from 0.5 to 3 kb in size was purified using a Gel Extraction kit (Qiagen), labeled with Cy5-labeled dNTP and hybridized to a CpG island library coated on glass slides [29,30]. The same amounts of input DNA and DNA precipitated with preimmune IgG were used as controls. We selected only those genes for sequence analysis where the signal in ChIP DNA was ≥2-fold compared to the control rabbit IgG signal. MIAME-compliant data have been submitted to the GEO database (accession number GSE18929).
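The ≥2-fold selection rule applied to the array signals can be expressed as a small filter. This is a hypothetical sketch, not the pipeline actually used for GSE18929; the probe names and signal values are invented for illustration.

```python
# Illustrative sketch of the >=2-fold ChIP/IgG enrichment filter used to
# pick ChIP-on-chip targets for sequencing. Values are invented.

def select_enriched(chip_signal, igg_signal, fold_cutoff=2.0):
    """Return probe IDs whose ChIP signal is >= fold_cutoff times the
    preimmune-IgG control signal (probes with a zero control are skipped)."""
    hits = []
    for probe, chip in chip_signal.items():
        igg = igg_signal.get(probe, 0.0)
        if igg > 0 and chip / igg >= fold_cutoff:
            hits.append(probe)
    return sorted(hits)

chip = {"P16": 950.0, "DCC": 430.0, "GAPDH": 110.0}
igg = {"P16": 120.0, "DCC": 200.0, "GAPDH": 100.0}
# P16 (~7.9x) and DCC (~2.15x) pass the cutoff; GAPDH (~1.1x) does not.
print(select_enriched(chip, igg))
```

The cutoff is deliberately conservative: a second immunoprecipitation round (as in the Methods) plus a fold threshold both work to suppress false-positive probes.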
Sequence analysis of clones The construction of the CpG island library has been described earlier [29]. The clones pulled down by DNMT3B antibodies were picked from the CpG island library and sequenced in an automated sequencer. RT-PCR and real-time RT-PCR analysis RNA was isolated using the guanidinium thiocyanate-acid phenol method, treated with DNase I to remove residual DNA, if any, and reverse transcribed using random hexamers following standard protocol. Real-time RT-PCR was done using SYBR Green technology following published protocol [27,31]. RT-PCR primers will be available upon request. ChIP-CHOP analysis was performed as described [32]. Primer sequences are provided in Table S1. COBRA (Combined bisulfite-restriction analysis) COBRA of genomic DNA was performed as described [33,34]. CGIs of different genes were amplified with primers specific for bisulfite-converted DNA in which unmethylated cytosines are converted to uracils. Primers were designed using Methprimer software (http://www.urogene.org/methprimer/index1.html). Primer sequences are provided in Table S1. Quantitative DNA methylation analysis of HOXB13 CGI by MassARRAY DNA methylation analyses were carried out using the EpiTYPER application (Sequenom, San Diego) as described [35]. Briefly, genomic DNA was isolated, subjected to integrity control and bisulfite treatment. Regions of interest were amplified using primers for bisulfite-treated DNA (primer sequences available upon request), the amplified DNA was transcribed in vitro and cleaved using RNase A. The molecular weight of the resulting fragments, indicative of the DNA methylation state, was analyzed using matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometry. DNA methylation standards (0, 20, 40, 60, 80 and 100%) were used to control for PCR amplification bias. Equation fitting algorithms based on the R statistical computing environment were used for data correction.
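COBRA rests on bisulfite chemistry: unmethylated cytosines read as T after PCR, while methylated CpG cytosines remain C, so a restriction site such as Taq I's TCGA survives conversion only if its CpG was methylated. A minimal in-silico sketch of that logic follows; the sequence and positions are invented for illustration.

```python
# Sketch of the bisulfite-conversion logic underlying COBRA (illustrative).

def bisulfite_convert(seq, methylated_cpg_positions):
    """Convert every cytosine to thymine except cytosines of a CpG whose
    0-based position is listed as methylated (methylation blocks conversion)."""
    out = []
    for i, base in enumerate(seq):
        if base == "C":
            in_cpg = i + 1 < len(seq) and seq[i + 1] == "G"
            if in_cpg and i in methylated_cpg_positions:
                out.append("C")  # methylated CpG resists conversion
            else:
                out.append("T")  # unmethylated C reads as T after PCR
        else:
            out.append(base)
    return "".join(out)

def taqi_site_retained(converted_seq):
    """COBRA readout: Taq I (TCGA) cuts only if methylation preserved the CpG."""
    return "TCGA" in converted_seq

seq = "ATCGAC"  # Taq I site TCGA spans positions 1-4; its CpG cytosine is index 2
print(bisulfite_convert(seq, {2}))    # methylated: "ATCGAT", site retained
print(bisulfite_convert(seq, set()))  # unmethylated: "ATTGAT", site destroyed
```

Digestion of the amplicon therefore reports methylation at that one CpG, which is why the paper also verifies complete conversion with methylation-insensitive enzymes.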
Display of methylation results as heat maps and unsupervised clustering were performed using the Multiple Experiment Viewer software (http://www.tm4.org/mev.html). Cloning of HOXB13 and TBX18 cDNA and generation of RKO and HCT116 cell lines overexpressing these proteins cDNA derived from total RNA from normal human colon (Clontech) was amplified with primers specific for the HOXB13 and TBX18 coding regions and cloned into pcDNA-3XFlag (Sigma). The authenticity of the cDNAs was confirmed by sequencing. The Flag-tagged cDNAs were then cloned into pBabe-puro and infectious retroviruses were generated in Phoenix cells. Stable cell lines (RKO, HCT116 or DLD1b) overexpressing HOXB13 or TBX18 were generated by infecting these cells with the recombinant retroviruses and selecting with puromycin. Clonogenic survival was performed as described [36]. Cell growth was measured by MTT assay as described [37]. Tumor growth in nude mice was performed as described [38]. HOXB13 promoter activity assay HOXB13 promoter regions spanning −1.2 kb to +0.2 kb and −5.2 kb to +0.2 kb with respect to the transcription start site (TSS) were amplified from lymphocyte DNA using Accuprime polymerase, confirmed by sequencing, and cloned into pGL3 basic (firefly luciferase vector). These promoter reporter plasmids, along with pRLTK (Renilla luciferase vector), were transfected into RKO cells and luciferase activity was measured using the Dual luciferase assay kit (Promega). Results ChIP on Chip analysis identified DNMT3B target genes in RKO, a colon cancer cell line Aberrant DNA methylation is prevalent in colorectal carcinogenesis [39][40][41]. To identify hypermethylated genes in colon cancer (RKO) cells we performed ChIP with anti-DNMT3B followed by CpG island (CGI) microarray. We selected DNMT3B because its expression is significantly higher than that of DNMT3A and DNMT1 in RKO cells (Figure 1A) and it appears to play a causal role in colon tumorigenesis [22].
The specificity of the affinity purified DNMT3B antibody was confirmed by using extracts from DNMT3B-null HCT cells. A major (~98/96 kDa) polypeptide was detected by this antibody in RKO cells and in the wild-type and DNMT1−/− HCT cells. That these polypeptides are different variants of DNMT3B was confirmed by the inability of the antibody to detect any protein in DNMT3B−/− and DKO (DNMT1−/− DNMT3B−/−) cells. A few very minor polypeptides detected in cells expressing DNMT3B are probably its isoforms, because DNMT3B is known to exhibit different spliced variants [42]. To reduce nonspecific pull down, ChIP was performed twice with the same antibody and the precipitated DNA was resolved on an agarose gel to elute DNA of smaller sizes (0.5 to 3 kb) as probe for the CpG island microarray (see Methods for detail) (Figure 1B). Next, we sequenced only those pulled-down genes that were at least 2-fold enriched in ChIP DNA compared to that pulled down by control rabbit IgG. The microarray data have been submitted to the GEO database and are MIAME-compliant (accession number GSE18929). We classified these genes into four groups: i) genes with a CpG island (CGI) but without repeat elements, ii) genes without CGI and repeat elements, iii) repeat elements, and iv) genomic contigs associated with repeat elements (Table 1 and Table S2). Among the genes with CGIs are some known tumor suppressors, such as P16/MTS1, DCC, PGRMC1 and CAVEOLIN1, and some disease susceptibility genes such as DISC1 (disrupted in schizophrenia 1) [43] and TBX5 (congenital heart disease, or Holt-Oram syndrome) [44] (Table 1). We also identified a few novel genes such as TBX18, DGKI, SLIT1 and GNA11 as DNMT3B targets. To confirm the association of DNMT3B with some of its putative target genes, we amplified their promoter regions from a different batch of chromatin-immunoprecipitated DNA. We performed ChIP-CHOP analysis to determine the methylation status of the target DNA.
For this assay, the immunoprecipitated DNA was divided into three identical aliquots for mock-digestion, digestion with Hpa II (methylation-sensitive enzyme) or Msp I (methylation-insensitive enzyme) (see Figure 2A for a schematic diagram). The digested DNA was then used to amplify the promoters of DNMT3B target genes such as P16, TBX5, TBX18, GNA11, DMRT1, HOXB13, CAVEOLIN1, PGRMC1 and DCC, using primers that encompass one or more Hpa II/Msp I sites (Table S3). It is noteworthy that none of these genes are methylated in the normal colon epithelial cell line CCD841. Treatment of RKO cells with decitabine resulted in activation of the methylated genes To determine whether methylation of some of the DNMT3B target genes indeed suppressed their expression in RKO cells, we treated these cells with the commonly used DNA hypomethylating agent, 5-aza-2′-deoxycytidine. RNA isolated from untreated and inhibitor-treated (1 and 2.5 μM) RKO cells harvested at 24, 48 or 72 hours was subjected to RT-PCR analysis with gene-specific primers. TBX18, DCC and CAVEOLIN1 were re-expressed in RKO cells treated with 1 and 2.5 μM decitabine as early as 24 hr and their expression persisted up to 72 hr (Figure 2B). In contrast, SLIT1 was induced at a very low level under these conditions whereas HOXB13 was reactivated only after exposure to the drug for 72 hr. On the contrary, GNA11 was expressed at a high level in RKO cells, indicating that methylation of the CGI located in intron 1 of this gene did not affect its expression. CGIs of DNMT3B target genes are methylated in human primary colorectal tumors Next, we extended our study to analyze the methylation status of a selected group of genes (DCC, TBX18, TBX5, SLIT1 and GNA11) in several primary human colorectal tumors and matching normal colon tissues by COBRA. The methylation status of these genes in a few pairs is presented in Figure 3.
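The mock/Hpa II/Msp I readout described above reduces to a simple decision rule: a methylated CCGG resists Hpa II but not Msp I, so a methylated target amplifies from mock- and Hpa II-digested DNA but not from Msp I-digested DNA. A hypothetical sketch of that interpretation logic:

```python
# Sketch of the ChIP-CHOP interpretation rule (illustrative, not the
# authors' scoring script). True means a PCR product was amplified.

def chip_chop_call(pcr_mock, pcr_hpaii, pcr_mspi):
    """Interpret a ChIP-CHOP triplet: methylated CCGG resists Hpa II
    (methylation-sensitive) but is always cut by Msp I."""
    if not pcr_mock:
        return "no template / assay failed"
    if pcr_hpaii and not pcr_mspi:
        return "methylated"       # Hpa II could not cut; Msp I destroyed template
    if not pcr_hpaii and not pcr_mspi:
        return "unmethylated"     # both enzymes cut the site
    return "ambiguous"            # e.g. product despite Msp I digestion

print(chip_chop_call(True, True, False))   # methylated
print(chip_chop_call(True, False, False))  # unmethylated
```

The Msp I aliquot doubles as a built-in control: a product surviving Msp I digestion flags incomplete digestion rather than methylation.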
The CGI spanning the promoter/exon 1 of DCC was methylated at the Taq I site in 8 out of 10 tumors, as demonstrated by almost complete digestion with Taq I, whereas it was methylated in matching normal colon tissues only in samples #8 and #10 (Figure 3.i). Similarly, the CGI of TBX18 was methylated at the BstU I site in 6 out of 7 tumors without significant methylation in matching normals (Figure 3.ii). TBX5 encodes 4 transcript variants that are generated by alternate transcription initiation sites and alternative splicing. We analyzed the methylation status of CGIs spanning the promoter/exon 1 of variants 1 and 3 (TBX5L) and variant 4 (TBX5S) (Figure 3.iii). Interestingly, the CGI of TBX5L was methylated at Taq I sites both in normal colon tissues and tumors, but methylation was more pronounced in 3 out of 5 tumors (#1, 4 & 6) than in the matching normals. In contrast, the CGI of TBX5S was specifically methylated in tumors in 5 out of 6 samples at the BstU I site. Thus, two CGIs located in close proximity demonstrated differential methylation status in the same sample, at least with respect to the Taq I and BstU I sites. The CGI spanning the promoter/exon 1 of SLIT1 was tumor-specifically methylated at the Taq I site in 5 out of 10 samples analyzed (Figure 3.iv). Notably, the CGI located in the intron of GNA11 was completely methylated both in normals and tumors, as demonstrated by complete digestion of the PCR product with BstU I (Figure 3.v). Methylation at this intronic CGI did not silence GNA11, as demonstrated by its robust expression in RKO cells (Figure 2C). Complete bisulfite conversion was demonstrated by digestion of the amplicons with methylation-insensitive Mse I or Tsp509 I (data not shown). These COBRA data showed that some DNMT3B targets were hypermethylated in primary colorectal tumors, albeit at different levels. However, we did not observe methylation of the CGI located in the promoter and exon 1 region of the HOXB13 gene in primary colon cancer by COBRA (data not shown).
CGI located ~4.5 kb upstream of the HOXB13 transcription start site (TSS) is hypermethylated in colorectal tumors Although COBRA did not reveal methylation of the CGI located in the promoter and exon 1 region, the HOXB13 gene was reactivated after treatment with decitabine (Figure 2C). This observation suggested that methylation of a CGI located at a different region of the gene, or of specific CpGs within the promoter region, might regulate its expression in colon cancer cells or tissues. BLAT analysis identified two CGIs in the human HOXB13 gene, one in the promoter/exon 1 and the other ~4.5 kb upstream of the transcription start site (TSS) (Figure 4A). We therefore analyzed the methylation status of CpGs spanning the promoter and the upstream CGI (Figure 4B, and Tables S4 and S5) in primary colorectal tumors and in cell lines by MassARRAY, because in this mass spectrometry-based assay the methylation status of CpGs can be estimated quantitatively. It is evident from the heat map that methylation at the upstream CGI is higher in tumors than in normals (Figure 4C.i). Quantification of the results showed that the overall methylation density at the upstream CGI was significantly higher (P = 0.02) in tumors compared to the normals (Figure 4C.ii). In contrast, the CGI in the promoter region was essentially methylation free (Figure 4C.i), which correlated with the COBRA data (data not shown). MassARRAY analysis of colon cancer cell lines revealed hypermethylation at the upstream CGI in HCT, CaCo2 and RKO cells whereas relatively low-level methylation was detected in RWPE and SW837 cells. Notably, CCD841, a normal colon epithelial cell line, was essentially unmethylated. Selective methylation of the upstream CGI, but not the one located in the immediate upstream promoter region, probably occurs in colon cancer cells due to a distinct chromatin structure that is accessible to DNMTs.
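The tumor-versus-normal comparison of methylation densities was assessed with a Welch (unequal-variance) t test. A stdlib-only sketch of the Welch statistic and its degrees of freedom, with invented methylation percentages standing in for the actual MassARRAY values:

```python
# Illustrative Welch's t statistic (unequal variances), as used to compare
# tumor vs normal methylation densities. Sample values are invented.
import statistics

def welch_t(sample_a, sample_b):
    """Return Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = statistics.fmean(sample_a), statistics.fmean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se2 = va / na + vb / nb                  # squared standard error of the difference
    t = (ma - mb) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

tumor = [55, 62, 48, 70, 66]    # % methylation, illustrative
normal = [20, 25, 18, 30, 22]
t, df = welch_t(tumor, normal)
print(round(t, 2), round(df, 1))
```

A one-tailed p-value would then be read from the t distribution with this (non-integer) df, which is what a statistics package does behind the scenes.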
[Figure 3 legend: CGIs were amplified with gene-specific primers followed by digestion with the methylation-sensitive enzyme Taq I or BstU I. T and N denote tumor and matching normal, respectively. Sample numbers shown in red identify tumors with gene-specific methylation. doi:10.1371/journal.pone.0010338.g003] To investigate the role of the upstream CGI in HOXB13 expression, we generated luciferase reporter constructs harboring the −5.4 kb to +0.2 kb and −1.2 kb to +0.2 kb regions, respectively, cloned into the pGL3basic vector, and compared their ability to drive firefly luciferase activity at 36 and 48 hr post-transfection in HCT116 cells, because these cells can be transfected with high efficiency. The luciferase activity driven by the −5.4 kb promoter was at least 2-fold higher than that contributed by the −1.2 kb region (Figure 4D). Promoterless pGL3basic showed minimal activity (data not shown). Since HOXB13 is an estrogen-responsive gene [45], we also measured the activity of these two promoter regions after treating cells with estradiol 24 hr post-transfection; estradiol increased the activity of both promoters at 12 and 24 hr (Figure 4D). Taken together, these results demonstrated that the upstream promoter region of the HOXB13 gene stimulated promoter activity but did not contribute to estrogen responsiveness. To identify the DNA methyltransferase (DNMT) that catalyzes methylation of HOXB13, we measured its methylation status in the wild-type and mutant HCT cell lines lacking DNMT1, DNMT3B or both. MassARRAY analysis showed that the upstream CGI is heavily methylated in the wild-type and DNMT3B−/− cells but is significantly hypomethylated at certain CpGs in DNMT1−/− cells and almost completely in double knockout (DKO) cells (Figure 4E and Tables S4 and S5). These results indicate that although DNMT1 alone can methylate certain CpGs, its cooperation with DNMT3B is required for efficient methylation of the upstream CGI. Notably, the promoter CGI is not methylated in any of these four cell lines.
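In the dual-luciferase readout, firefly activity from the promoter construct (RLU1) is normalized to the Renilla transfection control (RLU2) before constructs are compared. A minimal sketch of that normalization, with invented replicate numbers arranged so the −5.4 kb construct comes out 2-fold above the −1.2 kb construct, as reported:

```python
# Sketch of dual-luciferase normalization (firefly / Renilla) and fold
# activity between two promoter constructs. All numbers are invented.

def normalized_luciferase(firefly_rlu, renilla_rlu):
    """Per-replicate firefly (promoter construct, RLU1) divided by
    Renilla (pRLTK transfection control, RLU2)."""
    return [f / r for f, r in zip(firefly_rlu, renilla_rlu)]

def fold_activity(test_ratios, control_ratios):
    """Mean normalized activity of one construct relative to another."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(test_ratios) / mean(control_ratios)

long_promoter = normalized_luciferase([8000, 8400], [400, 420])   # -5.4 kb construct
short_promoter = normalized_luciferase([4000, 4200], [400, 420])  # -1.2 kb construct
print(fold_activity(long_promoter, short_promoter))  # prints 2.0
```

Dividing by the Renilla signal cancels well-to-well differences in transfection efficiency, so only the promoter strength remains in the ratio.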
Surprisingly, HOXB13 expression is upregulated only in DNMT1−/− cells compared to the wild-type cells (Figure 4E). Significant downregulation of ERα (Figure 4E), an activator of HOXB13 [45], probably accounts for HOXB13 suppression in DNMT3B−/− and DKO cells. The increase in HOXB13 expression in DNMT1−/− cells suggests that methylation at certain CpGs in the upstream CGI suppresses HOXB13 expression. Thus, the upstream CGI probably functions as an enhancer and its methylation partially suppresses but does not silence HOXB13 expression. Ectopic expression of TBX18 and HOXB13 inhibits growth, clonogenic survival and anchorage-independent growth of colon cancer cells We next explored the anti-tumorigenic properties of TBX18 and HOXB13 in colon cancer cells. TBX18, a member of the T-box family of transcription factors, is expressed in the segmented somites and in the limb bud [46]. TBX18 knockout mice die immediately after birth due to severe defects in organs deriving from the lateral sclerotome [47]. In contrast, the ubiquitously expressed HOXB13 is a member of the homeobox superfamily involved in establishing cell fate during embryonic development and maintaining the differentiated state in adults [48,49]. HOXB13 is upregulated in many solid tumors, including cancers of the endometrium, cervix, ovary and prostate, whereas it is downregulated in renal cell carcinoma, melanoma and colon cancer [45]. To study the potential role of HOXB13 and TBX18 in modulating the tumorigenic properties of colon cancer cells, we expressed these proteins using a retroviral vector (pBabe) in the non-expressing colon cancer cell lines RKO and DLD1b. Ectopic expression of the proteins was measured in puromycin-selected cells by Western blot analysis with anti-Flag antibody (Figure 5A). The growth rates of TBX18- and HOXB13-expressing versus non-expressing cells were assessed by MTT assay.
Overexpression of these proteins resulted in a significant decrease in the growth rate of the cells compared to vector, starting from day 1 in both cell lines (Figure 5B), which correlated with a dramatic reduction in replication potential in RKO cells expressing TBX18 or HOXB13 compared to vector-transfected cells (Figure 5C). Similarly, the clonogenic survival of RKO cells expressing these transcription factors was significantly reduced (Figure 5D). Ectopic expression of HOXB13 and TBX18 in another non-expressing cell line, DLD1b, also inhibited these properties (data not shown). Together, these results demonstrated that TBX18 and HOXB13 severely compromised the tumorigenic potential of colon cancer cells in vitro. HOXB13 but not TBX18 inhibits growth of colon cancer cells in nude mice We next investigated whether HOXB13 and TBX18 expression could inhibit the in vivo growth of colon cancer cells. For this purpose, RKO cells expressing either HOXB13 or TBX18 and vector-transfected cells were injected into the flanks of nude mice. Mice were monitored for tumor growth every week and tumor size was measured. Tumor growth was visible as early as one week in mice injected with the control and TBX18-expressing cells. In contrast, tumor was not detectable in most of the animals injected with HOXB13-expressing cells (Figure 6A). At the end of the experiment, the mice were sacrificed, tumors were removed and their weights and volumes were determined. Notably, HOXB13-expressing cells could not form tumors in the majority of animals (Figure 6B, C). These cells formed only two visible tumors in mice. Surprisingly, no significant change in tumor growth of RKO cells expressing TBX18 was observed. Western blot analysis with anti-Flag antibody to detect ectopic TBX18 showed that the tumors that developed in nude mice expressed TBX18 (Figure 6D), suggesting that the lack of inhibition of tumor growth in nude mice was not due to loss of TBX18 expression.
It is likely that some host factor(s) in the tumor microenvironment antagonizes the tumor suppressor function of TBX18 in nude mice. Similar results were observed in HCT116 and DLD1b cells (data not shown). Thus, HOXB13 functions as a tumor suppressor in colon cancer cells both in vitro and in vivo. Discussion It is now well established that hypermethylation is a common mechanism for silencing tumor suppressor genes in cancer cells. [Figure 4 legend: CpGs sequenced by MassARRAY are boxed and numbered; note that this technique provides the average methylation status of CpGs that are in close proximity (numbered together as 1, 2, etc.), and gray bars indicate samples that could not be sequenced. C.i. Heat map of the methylation profile of CpGs located within the upstream CGI and promoter (TSS) region; the same amplicon at methylation densities ranging from 0 to 100% was used to generate a standard curve. C.ii. Box plot of the quantitative analysis of methylation density in the upstream CGI and promoter regions in primary colorectal tumors and normals; significance was assessed by the Welch test (adaptation of the t test, parametric, unequal variance, one-tailed). D. The upstream region (−5.4 kb) activates HOXB13 promoter activity in colon cancer cells: HOXB13 promoter regions (−1.2 kb and −5.4 kb) cloned into the pGL3 basic vector (RLU1) were transfected into HCT116 cells along with the internal control pRLTK (RLU2), followed by treatment with 10 nM estradiol (E) in phenol red-free medium containing 5% charcoal-stripped serum for different time periods. E. Only the upstream CGI is methylated in HCT cells, and it undergoes site-specific demethylation upon loss of DNMT1 alone and global demethylation upon loss of both DNMT1 and DNMT3B: i) MassARRAY, ii) real-time RT-PCR analysis of HOXB13, and iii) RT-PCR analysis of ERα and GAPDH.]
Because re-expression of these genes upon demethylation was perceived to be an alternative strategy for cancer therapy, considerable effort has been expended to identify novel tumor suppressor genes in specific cancer types that are silenced by methylation. Clinical trials of Vidaza and Dacogen against different cancers underscore the significance of epigenetic therapy in cancer [50,51]. Further, differentially methylated genes could be potential biomarkers for colorectal cancer. Indeed, recent studies have shown that some of the hypermethylated genes can be detected in the stool of colon cancer patients [41,52]. DNA methyltransferases, expressed at relatively low levels in somatic cells, are frequently upregulated in cancer cells. Gain-of-function studies have shown that Dnmt3b but not Dnmt3a promotes colon tumorigenesis in Apc Min/+ mice by inducing de novo methylation of multiple genes harboring CpG islands [22]. It therefore becomes important to identify the targets of DNMT3B in colon cancer cells to understand its function in tumorigenesis. A recent study used expression profiling to identify its targets in colon cancer cell lines [53]. To our knowledge, the present study is the first report on the identification of direct DNMT3B targets in colon cancer cells using ChIP-on-chip with antibodies that are specific for DNMT3B. The targets identified include not only well-known tumor suppressors such as P16/INK4A, DCC, CAVEOLIN1 and PGRMC1 but also novel genes like TBX18, TBX5, SLIT1 and DGKI. Activation of some of these genes after treatment with demethylating agents confirmed that methylation indeed silenced their expression in colon cancer cells. DNMT1 and DNMT3B function cooperatively to methylate and silence many tumor suppressor genes in colon cancer cells [23].
It is therefore conceivable that both enzymes could act in concert to alter the methylation status of the target genes, as observed at the HOXB13 upstream CGI (Figure 4E). An important observation is that a validated set of genes (TBX5, DCC, DGKI, CDH26, HOXB13, CAVEOLIN1, PGRMC1, GNA11, TBX18, ZBTB3 and DMRT1) is indeed associated with DNMT3B and that these genes are methylated in more than one colon cancer cell line relative to normal colon epithelial cells (CCD841). Among these, only DCC [54] and CAVEOLIN1 [55] have recently been reported to be methylated in colorectal carcinoma. DMRT1 is methylated in gastric cancer [56] whereas HOXB13 is methylated in melanoma [57], renal cancer [58] and breast cancer [45]. Further, analysis of a subset of these genes (DCC, TBX18, TBX5, SLIT1) in primary colon cancer revealed tumor-specific methylation. Recently, several investigators have identified genes methylated in colorectal cancer, some of which were also detected in the stool of colorectal cancer patients [41]. Different etiologies, genetic backgrounds and the techniques used probably account for the identification of distinct methylated genes [59]. Cluster analysis demonstrated that tumors with dense methylation at the upstream CGI of the HOXB13 gene clustered together (Figure S1). It would be of interest to analyze a large cohort of colorectal tumors to determine whether methylation of HOXB13 occurs in specific types of tumors and whether this epigenetic modification can be used as a diagnostic or prognostic marker for colorectal cancer. HOXB13 belongs to the homeobox family of transcription factors. It is a unique developmentally regulated protein that is up- or down-regulated depending upon the cellular context. While it is upregulated in ovarian [60] and endometrial cancers [61], where it functions as a tumor promoter, its expression is suppressed in malignant melanoma [57], renal [58], prostate [62], colorectal [63] and breast [45] cancer.
HOXB13 is methylated in malignant melanoma, renal and breast cancer in the CGIs spanning the immediate upstream promoter and exon 1. Surprisingly, this region is essentially methylation free in normal colon and colorectal tumors (Figure 4). Methylation at a CGI located ~4.5 kb upstream of the HOXB13 transcription start site in primary colorectal tumors and colon cancer cell lines suggests that the chromatin structure of this region acquires a unique conformation accessible to DNMTs. C/EBPα is another transcription factor that is tumor-specifically methylated at an upstream CGI in lung [64] and head and neck cancer [65]. Recent high-throughput analysis has identified many more genes that are methylated at far-upstream CGIs and even in coding regions [6]. The upstream CGI of HOXB13 appears to contribute to its promoter activity in colon cancer cells and harbors several conserved cis-regulatory elements, some of which encompass CpG dinucleotides. It is therefore likely that methylation of this region is involved in modulating the expression of HOXB13 and that this mechanism is unique to colon cancer cells. It would be of interest to examine whether HOXB13 knockout mice are susceptible to colon tumorigenesis spontaneously, after crossing with Apc Min/+ or Mlh1−/− mice, or upon exposure to carcinogens. Similarly, identification of HOXB13 target genes in colon epithelial cells is likely to elucidate the mechanism of its tumor suppressor function. Generation of an immunoprecipitation-grade antibody for HOXB13 will help us to answer this question. Studies along these lines are in progress. Supporting Information Table S1 List of primers (ChIP-CHOP, COBRA, RT-PCR, cDNA cloning and promoter regions) used in the present study. COBRA primers were designed using the Methprimer database (http://www.urogene.org/methprimer/index1.html).
Found at: doi:10.1371/journal.pone.0010338.s001 (0.11 MB DOC) Table S2 Chromatin from RKO cells was immunoprecipitated with affinity purified DNMT3B antibodies or mock-immunoprecipitated. Precipitated DNA ranging in size from 0.6 to 3 kb was subjected to CpG island microarray. The chromatin was cleared with pre-immune IgG and protein A beads. The precipitated DNA was dissolved in RIPA buffer and subjected to a second round of immunoprecipitation with the same antibody to minimize pull down of false positive targets. This DNA was then separated on an agarose gel and DNA from 0.5 to 3 kb in size was purified using a Gel Extraction kit (Qiagen), labeled with Cy5-labeled dNTP and hybridized to a CpG island library coated on glass slides. The same amounts of input DNA and mock-immunoprecipitated DNA (with rabbit preimmune IgG) were used as controls. We selected only those genes for further analysis where the signal in ChIP DNA was greater than 2-fold compared to the control rabbit IgG signal. Found at: doi:10.1371/journal.pone.0010338.s002 (0.16 MB DOC) Table S3 Genomic DNA from different colon cancer and normal colon epithelial (CCD841) cells was digested with Hpa II, Msp I or mock-digested, and an aliquot (100 ng) of DNA from each was subjected to PCR with primers specific for the CGI of each gene, followed by separation of the PCR products on an agarose gel. A gene was considered to be methylated if a PCR product was generated from the Hpa II-digested DNA but not from the Msp I-digested DNA. Found at: doi:10.1371/journal.pone.0010338.s003 (0.08 MB DOC) Table S4 MassARRAY data of the upstream CGI of the HOXB13 gene in primary colon cancer and matching colon tissues and colon cell lines (normal and cancer). Methylation at each CpG was determined based on a standard curve generated using methylation densities ranging from 0% to 100% of the amplicon.
Found at: doi:10.1371/journal.pone.0010338.s004 (0.02 MB XLS)

Table S5. MassARRAY data of the promoter CGI of the HOXB13 gene in primary colon cancers and matching colon tissues and colon cell lines (normal and cancer). Methylation at each CpG was determined based on a standard curve generated using methylation densities ranging from 0% to 100% of the amplicon.
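The standard-curve conversion described for Tables S4 and S5 amounts to mapping an observed signal ratio onto a calibration built from 0-100% methylated mixtures. The sketch below illustrates this with linear interpolation; it is not the MassARRAY vendor software, and the curve points and ratios are hypothetical.

```python
# Hedged sketch: converting an observed MassARRAY signal ratio to percent
# methylation by linear interpolation against a standard curve built from
# mixtures of 0-100% methylated DNA. Values are hypothetical.

def percent_methylation(ratio, curve):
    """curve: list of (signal_ratio, percent_methylated) points, sorted by ratio."""
    if ratio <= curve[0][0]:
        return curve[0][1]
    if ratio >= curve[-1][0]:
        return curve[-1][1]
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if x0 <= ratio <= x1:
            return y0 + (y1 - y0) * (ratio - x0) / (x1 - x0)

standard = [(0.02, 0), (0.27, 25), (0.52, 50), (0.77, 75), (1.00, 100)]
print(percent_methylation(0.52, standard))  # -> 50.0
```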
Impact of metformin use on the recurrence of hepatocellular carcinoma after initial liver resection in diabetic patients

Background: Metformin is currently proposed to have a chemopreventive effect against various cancers. However, the anti-cancer effect of metformin in diabetic patients with hepatocellular carcinoma (HCC) undergoing liver resection remains unclear. The aim of our cohort study was to assess whether metformin influences the recurrence of HCC.

Methods: We retrospectively enrolled 857 HCC patients who received primary resection from April 2001 to June 2016. Of these, 222 patients were diagnosed with diabetes mellitus (DM) from medical records. Factors influencing overall survival (OS) and recurrence-free survival (RFS) were analyzed by multivariate analysis.

Results: During the follow-up period (mean, 75 months), 471 (54.9%) patients experienced recurrence, and 158 (18.4%) patients died. Multivariate analysis revealed that DM (p = 0.015), elevated AST (p = 0.006), hypoalbuminemia (p = 0.003), tumor number (p = 0.001), tumor size (p < 0.001), vascular invasion (p < 0.001), high Ishak fibrosis score (p < 0.001), hepatitis B (p = 0.014) and hepatitis C (p = 0.001) were independent predictors of RFS. In diabetic patients, only HbA1c > 9% (p = 0.033), hypoalbuminemia (p = 0.030) and vascular invasion (p = 0.001) were independent risk factors for HCC recurrence; metformin use showed no significant effect on recurrence. DM is a risk factor for HCC recurrence after resection, and adequate DM control can reduce the recurrence of HCC. However, the use of metformin does not reduce the risk of HCC recurrence in diabetic patients after initial resection. Hence, metformin may not have a protective influence on HCC recurrence in diabetic patients who undergo initial liver resection.
Introduction

Hepatocellular carcinoma (HCC) is currently the fifth most common cancer worldwide, with approximately 850,000 new cases per year [1][2][3]. HCC is often linked to multiple risk factors, such as infection with hepatitis B virus (HBV) or hepatitis C virus (HCV), alcohol abuse, and metabolic syndrome [4]. Metabolic factors such as obesity and diabetes are associated with increased mortality rates for several cancers [5,6], and diabetes is also reported as a risk factor for liver, pancreatic, renal, and colon cancers [7,8]. Therefore, therapeutic intervention for diabetes may help prevent HCC recurrence and may improve the survival of diabetic HCC patients after hepatectomy.

Metformin is one of the most frequently prescribed antihyperglycemic drugs and is used as the first-line therapy for type 2 diabetes mellitus (T2DM) in Taiwan. Many previous studies have shown an anticancer effect of metformin in several cancer types with T2DM comorbidity [9,10]. Patients who receive curative hepatectomy may ask whether metformin can prevent the recurrence of HCC and lead to better outcomes. However, the anticancer effect has not been observed in all cancers and remains controversial, and little is known about the effects of metformin on HCC recurrence and mortality.
We therefore evaluated the associations between metformin use and the risk of HCC recurrence and mortality among diabetic patients with HCC after curative resection.

Patients

We reviewed a total of 2103 patients who were diagnosed with HCC and underwent surgical resection between January 2001 and June 2016 at Kaohsiung Chang Gung Memorial Hospital, a tertiary referral center covering the southern part of Taiwan. We excluded 234 patients with prior HCC treatment, 918 patients with BCLC stage B or C, and 94 patients who received liver transplantation after resection. Finally, we recruited 857 patients with BCLC stage 0 or A HCC who underwent primary curative resection. Among them, 222 patients were diagnosed with DM from medical records, and 136 patients used metformin as anti-DM treatment (Fig 1). This study complies with the standards of the Declaration of Helsinki and current ethical guidelines, and approval was obtained from the Ethics Committee of Chang Gung Memorial Hospital. The requirement for informed consent was waived by the IRB (IRB number: 201901103B0). HCC was defined according to the results of imaging studies and biochemical assays, and the diagnosis was confirmed using histopathology. The HCC diagnosis was based on the criteria of the practice guidelines of the European Association for the Study of the Liver (EASL) or the American Association for the Study of Liver Disease (AASLD) [11,12]. Patients were included in the T2DM group if they had ≧1 diagnosis of T2DM noted by an ICD-10 code in the medical record or used anti-diabetic medication for more than 3 months.

Drug exposure

Drug exposure was defined as receiving OHAs in the same class for at least three months during the follow-up period. All patients treated with metformin were categorized as "metformin users," whereas users of other drugs, including sulfonylurea, thiazolidinedione, insulin or other OHAs, were categorized as "non-metformin users."
In patients treated with combination therapies, those prescribed metformin for more than 3 months were categorized as metformin users.

Assessments and follow-up evaluation

The baseline demographics, serum biochemistry, tumor burden and anti-DM therapy were comprehensively recorded before any form of definite treatment. The diagnosis of cirrhosis, grade of steatosis and Ishak fibrosis score were documented from the resected non-tumor pathology report. The HCC stage was defined according to the Barcelona Clinic Liver Cancer (BCLC) guidelines, and tumor differentiation was determined using the Edmondson grading system. The follow-up ended on November 30, 2019. OS was defined as the interval between the dates of surgery and death or last observation. Patients were followed up at the 1st month after liver resection, then every 3 months in the first year and every 3-6 months in subsequent years. Routine tests such as serum AFP levels, serum biochemistry, and abdominal ultrasound were performed at every follow-up. Liver computed tomography or magnetic resonance imaging was performed at the 1st month after liver resection, every 12 months thereafter, and whenever recurrence was suspected clinically.

Statistical analysis

Statistical analyses were performed using IBM SPSS Statistics for Windows, version 22.0 (IBM Corp., Armonk, NY, USA). Values of non-continuous variables are expressed as the median ± interquartile range (IQR). The chi-squared test was used as appropriate to evaluate the significance of differences in data and for multiple comparisons between groups. Recurrence-free survival (RFS) and OS were analyzed using Kaplan-Meier survival curves and the log-rank test, and p < 0.050 was considered statistically significant. Factors that were significant in the univariate analysis (p < 0.05) were included in a multivariate analysis using a Cox forward stepwise variable selection process for the estimated OS and RFS.
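The Kaplan-Meier estimate underlying the survival analyses above can be sketched in plain Python. This is an illustration of the estimator only (the authors used IBM SPSS), and the follow-up times and event flags below are hypothetical.

```python
# Illustrative sketch (not the authors' SPSS workflow): a plain-Python
# Kaplan-Meier estimator of recurrence-free survival. Times and event
# flags below are hypothetical.

def kaplan_meier(times, events):
    """times: follow-up months; events: 1 = recurrence, 0 = censored.
    Returns [(time, survival_probability)] at each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at time t
        c = sum(1 for tt, _ in data if tt == t)   # all subjects leaving at t
        if d > 0:
            surv *= (n_at_risk - d) / n_at_risk
            curve.append((t, round(surv, 4)))
        n_at_risk -= c
        i += c
    return curve

times = [6, 12, 12, 20, 33, 40]
events = [1, 1, 0, 1, 0, 1]
print(kaplan_meier(times, events))
# -> [(6, 0.8333), (12, 0.6667), (20, 0.4444), (40, 0.0)]
```

The survival probability only drops at event (recurrence) times; censored subjects simply leave the risk set, which is the standard Kaplan-Meier convention.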
Results

Table 1 presents the baseline characteristics of the study cohort. The mean follow-up time was 75 months. The sample comprised 670 men and 187 women, and the median age was 60 years at enrollment. As shown in Table 1, 635 (74%) patients were non-diabetic and 222 (26%) were diabetic. Compared to patients without DM, patients with DM were significantly older (p < 0.001) and had lower serum bilirubin at baseline (p < 0.001), a higher prevalence of hypertension (p < 0.001), higher BMI (p < 0.001), a higher grade of steatosis (p = 0.003), a higher prevalence of HCV infection (p = 0.005) and a lower percentage of HBV infection (p < 0.001), but had a higher percentage of recurrence (p = 0.019). Overall, patients with DM had higher rates of death (28.4%) than subjects without DM (14.9%, p < 0.001).

Baseline characteristics of the study patients

Among the diabetic patients, 136 were metformin users and 86 were non-metformin users. Kaplan-Meier analysis revealed no statistically significant difference in overall survival or recurrence-free survival between the metformin and non-metformin groups (Fig 2A and 2B). Poor DM control (defined as HbA1c > 9%) led to a higher recurrence rate of HCC (p = 0.011) and a trend toward poorer overall survival (p = 0.142) (Fig 3A and 3B).

Independent factors for HCC recurrence

As shown in Table 2 (Fig 4A and 4B).

Independent factors for mortality

A total of 63 patients died during the follow-up period; 29 of them suffered liver-related death: 26 died of HCC and 3 of complications associated with cirrhosis. Of the 34 patients

Impact of DM and metformin on the outcomes of patients with BCLC stage 0 versus stage A

There were 124 patients categorized as BCLC stage 0 and 733 as BCLC stage A. The Kaplan-Meier plot of overall survival between BCLC 0 and A revealed no statistically significant difference (p = 0.103).
However, the Kaplan-Meier plot of RFS showed a statistically significant difference between BCLC 0 and A (p < 0.001; S1 Fig): the BCLC 0 group had better RFS than BCLC A. We further divided our study cohort into BCLC 0 and BCLC A. Kaplan-Meier plots revealed that patients with DM had poorer overall survival than those without DM in the BCLC 0 group (p = 0.012), and poorer outcomes in both RFS and OS in BCLC A (p < 0.001 and p < 0.001). Metformin did not affect the outcomes in BCLC 0 or BCLC A (S2 and S3 Figs).

Discussion

Diabetes is associated with increased mortality rates for several cancers [5,6] and is reported as a risk factor for hepatocellular carcinoma [13]. Diabetic patients also have a higher recurrence rate and poorer prognosis than those without DM after HCC treatment [14]. Therefore, whether DM management is beneficial to the prognosis of HCC patients after curative hepatectomy is an important issue that needs further evaluation. Our study demonstrated that patients with diabetes mellitus have a higher recurrence rate and poorer overall survival after HCC resection than those without DM, consistent with the result of Ikeda, Y. et al. [14]. Diabetic patients have more comorbidities, increased infection risk, impaired cell regeneration and wound healing, a higher risk of cardiovascular events, and a weakened immune system, leading to poorer overall survival. Hyperglycemia also induces DNA damage and cytotoxicity, which contribute to carcinogenesis [15]. Furthermore, patients with non-insulin-dependent diabetes mellitus are characterized by insulin resistance, compensatory hyperinsulinemia and increased growth factor production, which interact with liver cells and stimulate mitogenesis or carcinogenesis [16,17]. It is worth noting that in the present study, the use of metformin had no significant effect on RFS or OS in diabetic HCC patients after curative resection (p > 0.05) (Fig 3A and 3B).
There was no statistically significant difference in the clinical and pathological characteristics between metformin and non-metformin users before curative resection in our study cohort (S1 Table), including the level of glycohemoglobin (p = 0.627). Although many studies and systematic reviews have shown a chemopreventive effect of metformin in several cancers as well as in HCC [18][19][20][21][22], some studies have also demonstrated that metformin does not improve survival in patients with hepatocellular carcinoma [23] and does not reduce the risk of HCC in diabetic patients [24]. One explanation is that although diabetes mellitus is a progressive disease accompanied by persistent chronic inflammation resulting from hyperglycemia or hyperinsulinemia, which play key roles in cancer cell activity, including initiation, promotion, and progression [25], metformin can decrease insulin resistance but cannot directly reduce abnormal insulin secretion. In addition, hyperinsulinemia directly affects liver tissue and contributes to the genesis of HCC, but metformin does not directly inhibit this pathway. Furthermore, the chronic inflammation of DM can cause additional oxidative stress and lead to HCC, and the anti-oxidative-stress effect of metformin may be too weak to reverse this condition. To test for a dose-dependent relationship, we stratified the study population by daily metformin dose into three groups (non-users, 500-1000 mg, and >1000 mg daily dose). Kaplan-Meier survival analysis showed no statistically significant differences among non-users and the different daily dosages of metformin in RFS (p = 0.958) or OS (p = 0.355) (S4 Fig). We further stratified the patients by overall metformin use into three groups (<90, 90-365, and >365 cDDD). Similarly, there were no significant differences among patients with different cDDDs of metformin use in RFS (p = 0.284) or OS (p = 0.606) (S5 Fig).
This result implies that there was no dose-dependent relationship between metformin use and HCC recurrence. However, this analysis was limited by the small size and heterogeneity of the study population; large randomized trials in well-selected patients treated with different dosages are warranted to confirm the value of metformin in HCC recurrence. We further divided our study cohort into BCLC 0 and BCLC A. DM was a poor prognostic factor for OS in the BCLC 0 group (p = 0.012), and patients without DM had better RFS and OS in BCLC A (p < 0.001 and p < 0.001). Metformin did not affect the outcomes in BCLC 0 or BCLC A. Patients in the BCLC 0 group had better RFS (p < 0.001) than those in BCLC A. Therefore, a noninvasive diagnostic strategy to detect HCC at an early stage and to monitor HCC recurrence, such as circulating tumor DNA (ctDNA), may provide better outcomes in early HCC patients.

The use of insulin was a risk factor for poor OS and RFS. We noted that insulin users had a higher mortality rate among diabetic patients after HCC resection (p = 0.001). The insulin group had a higher glycohemoglobin level (8.25% vs 6.7%, p = 0.013), a higher mortality rate (50% vs 25.3%, p = 0.007) and a lower albumin level (3.3 vs 3.7, p = 0.012), as shown in S2 Table. There was no statistically significant difference in age, gender, liver cirrhosis, Child-Pugh grade, tumor size or tumor recurrence between these two groups. Hyperglycemia contributes to an environment of hyperinsulinemia and increases the demand for insulin for sugar control, leading to a vicious cycle. Adequate blood sugar control is a favorable factor for diabetic HCC patients with BCLC 0/A who receive curative resection. In the present study, patients with poor DM control (HbA1c > 9%) had a higher HCC recurrence rate (p = 0.011). By contrast, patients with diabetes under adequate blood sugar control showed no difference in HCC recurrence or mortality compared with those without DM.
These results indicate that adequate management of hyperglycemia reduces the risk of HCC recurrence and improves overall survival. Hyperglycemia and hyperinsulinemia cause a chronic inflammatory state that promotes the genesis of cancer cells; good blood sugar control in diabetic patients avoids the vicious course of hyperinsulinemia, chronic inflammation and oxidative stress. Hosokawa et al. emphasized that inadequate maintenance of blood glucose in diabetic patients is a significant risk factor for recurrence of HCC and for poor survival after curative RFA therapy [26]. Therefore, we suggest that diabetic patients should focus on adequate blood sugar maintenance rather than relying on a chemopreventive effect of metformin in HCC. Several mechanisms may link hyperglycemia and HCC recurrence. In an animal study [27], a high-sugar diet led to the greatest liver tumor incidence, and diet-induced postprandial hyperglycemia and hyperinsulinemia significantly correlated with tumor incidence. Hyperglycemia promotes cancer cell proliferation [28][29][30] through accelerated cell cycle progression or through the production of reactive oxygen species. Iwasaki et al. confirmed that high glucose alone, as well as in combination with proinflammatory cytokines, could stimulate nuclear factor kappa-B-mediated transcription in hepatocytes in vitro [31]. These results support our finding that sugar control, rather than a chemopreventive effect of metformin, is the key to avoiding HCC recurrence and improving overall survival in diabetic patients. Second, insulin users had higher HbA1c levels than non-insulin users, with more difficult sugar control, more diabetic complications and shorter survival. The use of insulin also contributes to hyperinsulinemia and thus to carcinogenesis, which is compatible with our result that insulin users had a poorer prognosis after curative hepatectomy.
There were 484 patients with hepatitis B virus infection, of whom 264 received nucleos(t)ide analogues (NUC), and 300 patients with HCV infection, of whom 123 received HCV treatment. Kaplan-Meier plots revealed that treatment of HBV and HCV led to better OS and RFS. In our study cohort, posthepatectomy liver failure (PHLF) was defined according to the International Study Group of Liver Surgery (ISGLS) definition [35]. The rate of posthepatectomy liver failure in our study cohort was 6.2% (53/857): 41 patients without DM (41/635, 6.5%) and 5.4% of those with DM had posthepatectomy liver failure (p = 0.576 between DM and non-DM). In the subgroup comparison of metformin and non-metformin users, the metformin group had a higher rate of posthepatectomy liver failure (8.1% (11/136) vs 1.2% (1/86), p = 0.026).

There are some possible limitations to our study. First, it was not a prospective study. However, we believe the bias was small because patients were followed by the same physicians throughout the course of disease, with clinical and laboratory assessment and HCC screening using ultrasonography every 3-6 months. Second, the prevalence of DM in Taiwan is 6.6%, whereas it is 12.3% or more in Western populations [36]. Moreover, access to medical care for blood sugar control is easy and affordable in Taiwan, but medication nonadherence must not be ignored. In conclusion, DM is a risk factor for HCC recurrence after resection, and adequate blood sugar control is associated with the prognosis of diabetic patients with BCLC 0/A HCC after curative resection. However, the use of metformin does not reduce the risk of HCC recurrence in this diabetic cohort after initial resection.
Hence, we suggest that diabetic patients with HCC after resection should maintain adequate dietary and/or medication control of blood sugar rather than relying on a chemopreventive effect of metformin in HCC. Further prospective randomized controlled studies are required to validate our observations.
Characterizing Mechanical, Heat Seal, and Gas Barrier Performance of Biodegradable Films to Determine Food Packaging Applications

In an organic circular economy, biodegradable materials can be used as food packaging, and at end-of-life their carbon atoms can be recovered for soil enrichment after composting, so that new food or materials can be produced. Packaging functionality, such as mechanical, gas barrier, and heat-seal performance, of emerging biodegradable packaging with a laminated, coated, monomaterial, and/or blended structure is not yet well known in the food industry. This lack of knowledge, in addition to end-of-life concerns, high cost, and production limits, is one of the main bottlenecks for broad implementation in the food industry. This study determines application areas of 10 films with a pragmatic approach based on a broad experimental characterization of packaging functionality. In conclusion, the potential application of these materials is discussed with respect to industrial settings and food and consumer requirements, to support the implementation of commercially available biodegradable, and more specifically compostable, materials for the identified food applications.

Introduction

Plastic materials have been increasingly applied in packaging over the last several decades because of their low cost, low weight, and customizable functional properties. In 2019, 368 million tons of plastics were produced globally, of which a staggering share of around 40% is used in packaging [1]. To reduce the amount of plastic waste, global and local initiatives, such as the European directive (EU) 2018/852 [2], have been established that fit into a vision of a circular economy of plastics. The circular economy diagram of the Ellen MacArthur Foundation illustrates a continuous flow of technical and biological materials through the value circle [3].
Plastic biodegradation is defined as the microbial conversion of all its organic constituents to carbon dioxide (CO2), new microbial biomass, and mineral salts under aerobic conditions [4]. Composting of biodegradable packaging is described in the DIN EN 13432 standard [5]. Besides composting, anaerobic degradation systems that produce methane gas are emerging. Currently, only a small fraction of globally produced plastics is biodegradable (1.553 million tons in 2021), but this amount is predicted to rise to 5.297 million tons in 2026 [6]. With a low but increasing availability, biodegradable plastics can become an emerging alternative to mechanical recycling and reuse in a long-term organic circular economy. Packaging is already the main application of biodegradable plastics, with 43% and 16% of biodegradable materials being applied as flexible and rigid packaging, respectively [4]. With a projected growth from $338 billion in 2021 to $478 billion in 2028, the food packaging market plays an important role in our society [7]. Considering the number of food packages, plastic and paper are the most important materials for food applications [8]. In food packaging films, different materials are often combined, by blending, coating, or laminating, to obtain high-performing and cost-effective packages. To maintain biodegradability by composting, it is important that these composites are made of compostable materials. However, small fractions of non-compostable materials, limited to a maximum content of 10% because of degradation and disintegration criteria, can be allowed if the whole package meets the demands of the DIN EN 13432 standard [4]. Industrial and home compostability can be differentiated; these processes differ in temperature and time.
Polylactic acid (PLA), polybutylene adipate terephthalate (PBAT), polybutylene succinate (PBS), and polyhydroxyalkanoates (PHAs) are biodegradable plastics that were subjects of previous studies on packaging functionality in food applications [9][10][11]. These materials are industrially compostable [12]. Depending on the properties of the coating, coated paper can also be considered biodegradable packaging, and interest in its implementation in food packaging is increasing, mainly because of the versatile end-of-life options of this material [13,14]. Cellulose, the main component of paper, is a natural polymer that can be easily obtained from the cell walls of plants. Processes to extract and modify cellulose are the subject of recent studies, of which the lyocell process is one example [15]. Plant waste streams can be valorized by extracting cellulose to make packaging films; a recent study extracted cellulose from cocoa pod husk, a waste stream of the chocolate industry, to develop biodegradable cellulose films [16]. Cellulose and its derivatives can be found in food packaging films such as solution-cast cellulose acetate, extruded cellulose nanocrystals, electrospun hydroxymethyl cellulose, and many others [17]. Starch is another abundant natural polymer that can be used in packaging. It is home-compostable, which is a less aggressive process than industrial composting. Cellulose is also home-compostable if the lignin content does not exceed a threshold value of 5% [12]. In a 2021 survey among 24 European food companies and packaging material providers, functionality of biodegradable materials was indicated, in addition to high cost, low availability, and end-of-life concerns, as a bottleneck for implementation in food packaging.
Because of the food industry's interest in the packaging functionality of biodegradable materials, the research project BIOFUN evaluated typical food packaging functionalities, such as mechanical, gas barrier, and heat-seal performance, of commercially available films in 2021 and 2022 [18]. The objective of this study is to determine application areas in food packaging of currently commercially available biodegradable films. A pragmatic approach is followed, based on a broad characterization of the mechanical, seal, and gas barrier performance. Additionally, opacity and water contact angle are determined for further characterization.

Table 1 lists 10 films that were supplied by companies participating in the BIOFUN project. Results of thickness measurements and the main components of the seal side, identified with attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy (spectra not shown), are added to this table as supporting information. The components identified with FTIR compensate for the lack of commercially available information, which results from the high level of secrecy on chemical composition in the industry. The list includes paper, PLA, PBAT, PBS, poly(butylene succinate-co-butylene adipate) (PBSA), starch, cellulose and poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV), which are considered for use as food packaging. Coated paper 1, with PE as the coating material, is unlikely to be compostable. The materials in Table 1 are differentiated into four groups: coated papers, cellulose films, pilot extrusions and commercial monolayers. Two coated papers, two cellulose films, two rather thick pilot extrusions and four commercial monolayers, subdivided into two monolayer monomaterials and two monolayer blends, are the subjects of this study. Results of materials in each group are mutually compared and discussed. Digital photos of the samples in Table 1 are shown in Figure 1.
Methods

To compare the test materials based on their packaging performance, the mechanical, gas barrier and seal characteristics are determined for all samples. Tests are performed in the machine direction in a standard climate (23 °C, 50% relative humidity (RH)), unless otherwise stated. Standard deviations are calculated to show the level of scattering of results.

Mechanical Performance

Thickness is measured in tenfold according to ISO 4593. Peak stress (N mm−2) and total strain (%) are determined in fivefold with a tensile tester. Dumbbell-shaped samples with a 3.18 mm width of the narrow section, as described in ASTM D638 [19], are used to prevent the samples from breaking at the clamp. Total strain values are mainly used for mutual comparison; no extensometer is used, so comparisons of total strain values with the literature must be made with caution. Slipping is prevented by clamping the wide section in diamond-coated jaws. A clamp distance of 20 mm and a separation rate of 100 mm min−1 are used to perform the test. Additional tests in a temperature chamber are done to evaluate the impact of environmental temperature on peak stress and total strain; temperatures relevant for food processing are considered, ranging from freezing at −18 °C to pasteurization, hot fill and/or microwave at 100 °C and/or melting of the sample. Maximum force (N), total displacement (mm), and total energy (mJ) are determined in fivefold with a puncture-resistance test: a penetration probe, as described in ASTM F1306 [20], moves toward the outer side of a clamped film at a speed of 25 mm min−1 until the film is penetrated. Tear resistance (mN) is determined in tenfold with an Elmendorf test, which uses a pendulum to propagate an existing slit, as described in ISO 6383-2 [21].
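The peak stress reported above is an engineering stress: the tensile force divided by the initial cross-section of the dumbbell's narrow section (width times film thickness). A hedged sketch of this calculation, with hypothetical force and thickness values:

```python
# Sketch of the peak-stress calculation implied above: engineering stress
# is the tensile force divided by the initial cross-section (narrow-section
# width x film thickness). Numbers are hypothetical.

def peak_stress(max_force_n, width_mm, thickness_um):
    area_mm2 = width_mm * thickness_um / 1000.0  # convert um to mm
    return max_force_n / area_mm2                # N mm^-2

# e.g. a 3.18 mm dumbbell neck on a 50 um film failing at 7.95 N
print(round(peak_stress(7.95, 3.18, 50.0), 1))  # -> 50.0
```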
Gas Permeability

Single measurements are done in standard conditions to screen the oxygen transmission rates (OTR) of all samples at 23 °C and 0% relative humidity, as described in ASTM F1307 [22]. Additional tests on high-gas-barrier materials are performed at 23 °C and 50% relative humidity, following ASTM F1927 [23], at both sides of the film. Single measurements are done in extreme test conditions to screen the water vapor transmission rates (WVTR) of all samples in a worst-case scenario: WVTR, according to ASTM F1249 [24], is determined at 38 °C and 100% relative humidity at the outer side of the film, while 0% relative humidity is maintained at the inner side.

Seal Performance

Seal temperature is varied with two hot jaws, at a seal time of 1.0 s and a seal pressure of 1.0 N mm−2. Samples of 30 mm width are sealed, with Teflon sheets used on both sides to prevent the material from sticking to the jaws. At each temperature, three samples are sealed. Seal strength, following ASTM F88 [25], is evaluated within 4 h after sealing: 15-mm-wide samples are clamped at a distance of 20 mm and separated at a rate of 300 mm min−1. Three characteristics of the sigmoidal seal curve are determined: the initiation temperature, which is the jaw temperature at which seal strength exceeds a threshold value of 0.05 N mm−1 [26]; the mid-slope temperature, which is the jaw temperature at which half of the maximum seal strength is exceeded; and the maximum seal strength. Hot tack tests, following ASTM F1921 [27], are performed on 15-mm-wide samples at a test speed of 200 mm s−1. Seal time and seal pressure are set at 1.0 s and 1.0 N mm−2, respectively, while the seal temperature of two Teflon-coated hot jaws is varied. At each temperature, three samples are measured. Seals are evaluated 0.1 s after opening of the seal jaws.
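The three seal-curve characteristics described above can be read off a set of (jaw temperature, seal strength) points. The sketch below is a simple illustration of that extraction, with hypothetical curve data; it is not the authors' analysis software.

```python
# Hedged sketch of extracting the three seal-curve characteristics described
# above from (jaw temperature, seal strength) pairs; data are hypothetical.

def seal_curve_characteristics(curve, threshold=0.05):
    """curve: list of (temp_C, strength_N_per_mm), sorted by temperature.
    Returns (initiation temp, mid-slope temp, maximum seal strength)."""
    max_strength = max(s for _, s in curve)
    initiation = next(t for t, s in curve if s > threshold)       # > 0.05 N/mm
    mid_slope = next(t for t, s in curve if s > max_strength / 2) # > half of max
    return initiation, mid_slope, max_strength

curve = [(80, 0.00), (90, 0.02), (100, 0.08), (110, 0.45), (120, 0.75), (130, 0.80)]
print(seal_curve_characteristics(curve))  # -> (100, 110, 0.8)
```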
Four characteristics are determined: the seal initiation temperature, which is the jaw temperature at which a threshold value of 0.03 N mm⁻¹ is exceeded [28]; the temperature of maximum strength, which is the jaw temperature at which hot tack strength reaches its maximum; the hot tack window, which is the temperature range of the jaws over which hot tack strength is higher than 0.1 N mm⁻¹ [28]; and the maximum hot tack strength. In addition to the broad seal characterization described above, additional seal tests can be done to check compatibility with specific food applications. This is done with real food contamination in seal-through-contamination tests. Two case studies, which relate film samples to food applications, are defined based on gas barrier performance. Low gas barrier samples are evaluated with contamination types related to unprocessed fruit and vegetables. In this application, water droplets and solid soil particles are expected; sand and coffee particles are selected as simulants of soil particles. High gas barrier samples are evaluated as grated cheese packaging. Square samples of approximately 10 cm × 10 cm are cut and attached to a cardboard tool with plastic tape. A rectangle of 20 mm × 40 mm is marked in the center of the sample to ensure that the contamination is distributed over the entire length of the seal. Then, 10 mg of the solid contamination or 30 µL of water is evenly spread within the rectangle to obtain a contamination density of 12.5 g m⁻² or 37.5 mL m⁻², respectively. Specifically for grated cheese, three strings are placed vertically, distributed over the middle and the two corners of the rectangle. A second sample is then attached to the cardboard tool to cover the contamination. In a final step, the tool is manually placed between the hot bars, forming the seal. The set-up described above is illustrated in Figure 2.
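The contamination densities quoted above follow directly from the dose and the marked seal rectangle; a quick arithmetic check:

```python
# Check of the contamination densities described above: 10 mg of solids or
# 30 uL of water spread over a 20 mm x 40 mm seal rectangle.
area_m2 = 0.020 * 0.040            # seal rectangle area in m^2 (8e-4 m^2)

solid_density = 10e-3 / area_m2    # 10 mg -> g per m^2
water_density = 30e-3 / area_m2    # 30 uL -> mL per m^2

print(solid_density, water_density)  # 12.5 37.5
```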
In a previous study, solid contamination was applied in a standardized method and seal-through-contamination performance was evaluated with a design-of-experiments (DOE) approach [28]. This approach is followed here, with the exception that contamination is added as a categorical parameter in the design space. For the low gas barrier samples, three levels are considered for seal temperature, time, and pressure, and contamination is added as a categorical variable with four levels: clean, ground coffee, sand, and water. Three replicates are carried out for each contamination level. Main-order, second-order, and interaction effects are included with seal strength as the response, resulting in 41 runs. A similar approach is followed for the contamination experiments with the high-barrier cellulose samples, with the exception that only two levels are considered for contamination, clean and grated cheese, resulting in 24 runs. After experimentation, a standard least squares method is used to fit a model. Second-order and interaction terms with a p-value above 0.05 are not used in the model. Seal strength is maximized for clean seals, and the predicted values are validated by performing five measurements at maximal settings. All contaminations are also validated at equal settings to allow comparison between clean and contaminated seal strength. For more details on this approach, the reader is referred to the previous study [28].

Additional Characterization

Opacity is measured to show the appearance and decoration potential in food packaging. The Hunter lab method in reflectance mode is followed. The opacity Y (in %) is calculated by dividing the reflectance over a black standard, Y_b, by the reflectance over a white standard, Y_w. For each sample, average values of four measurements, two on each side, are calculated. Water contact-angle measurements are carried out to characterize the hydrophobic properties of the samples. Samples are cut to fit the sampling area.
A 2-µL drop of MQ water (18.2 MΩ·cm) is gently deposited on the seal surface using a micro-syringe and digitally photographed immediately. Contact angles are measured on both sides. For each sample, average values of the contact angles of 15 drops at different spots on the surface are calculated.

Apparatus

Thickness is measured with a precision thickness gauge, model 2010 U (Wolf Messtechnik GmbH, Freiberg, Germany). Tensile, puncture, and seal-strength tests are performed with a 5ST universal testing machine (Tinius Olsen Ltd., Redhill, United Kingdom), inside a TH 2700 temperature chamber (Thümler GmbH, Nürnberg, Germany). Tear resistance is tested with an ED 300 tearing tester (MTS Adamel Lhomargy, Roissy-en-Brie, France).

Table 2 shows the average values and standard deviations of the mechanical characterization of all materials in the standard climate (23 °C, 50% RH). Representative stress-strain curves of each of the samples are shown in Figure 3. Because of the high relevance of processing temperatures in the food industry, such as in freezing, cooling, hot filling, microwaving, and/or pasteurizing, the environmental temperature is varied in tensile tests of a selection of materials. Thin commercial films with no backing layer, with the addition of coated paper 2 and cellulose 1, are evaluated in this test. Table 3 shows the transmission rates for oxygen gas and water vapor. Table 4 shows the results of the seal characterization. Two cases are studied in additional seal experiments with contamination: coated papers 1 and 2, with relatively low gas barriers, for unprocessed fruit and vegetables; and the cellulose films, with relatively high gas barriers, for grated cheese. Table 5 shows the predicted maximum seal-strength values for clean and contaminated seals of all cases at optimal seal parameters.

Additional Characterization

The results of opacity and water contact angle are shown in Tables 6 and 7, respectively.
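The opacity values in Table 6 follow the ratio defined in the Methods: Y (in %) is the reflectance over a black standard divided by that over a white standard, averaged over four readings per sample. A minimal sketch of that calculation, with illustrative numbers rather than measured values (reporting Y in % is assumed to imply a factor of 100):

```python
def opacity_percent(y_black, y_white):
    """Opacity Y in %, Hunter method: reflectance over the black standard
    divided by reflectance over the white standard."""
    return 100.0 * y_black / y_white

# Four readings per sample (two on each side), averaged as in the Methods.
# The reflectance pairs below are illustrative, not measured values.
readings = [(5.1, 64.0), (5.3, 64.2), (4.9, 63.8), (5.2, 64.1)]
avg = sum(opacity_percent(b, w) for b, w in readings) / len(readings)
print(round(avg, 1))  # 8.0
```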
Mechanical Performance

Coated paper shows moderate peak stress values in Table 2. As a result, actual tensile forces will be high because of the rather thick materials that are used in food packaging. The strain of coated paper is limited because of the immediate break of the paper substrate in a tensile test; high variations can be caused by delamination of the plastic coating. In a previous study on PLA-coated paper, tensile stress and elongation ranged from 58-75 N mm⁻² and 3-4%, respectively [29]. However, paper type, coating material, and coating thickness, among other things, impact the mechanical properties of coated papers. The puncture results show moderate forces, small displacements, and moderate energies. Moreover, tear resistance is moderate compared to the other samples. Cellulose 1 is the strongest material in the tensile test. The decreased peak stress of cellulose 2 is probably caused by the lamination with a weaker but tougher PBS layer. Cellulose films have limited strain because of the almost immediate break of the brittle cellulose layer in a tensile test, and the high variations of cellulose 2 are caused by delamination of the tough seal layer. The experimental stress and strain values of cellulose 1 agree with the values in the datasheet of commercial cellulose film [30]. Puncture resistance forces and energies of the cellulose films are high, and displacements are moderate. The tear resistance of cellulose 1 is the lowest of all samples. This property can be dramatically improved by laminating a tough seal layer, as observed in the results of cellulose 2, which has a laminated PBS layer. The pilot extrusion of PBS is mechanically superior to that of PHBV, with the exception of tear resistance. No comparable value was found in the literature for the PHBV-PBAT blend. In a review on monomaterial PHBV [11], a tensile stress range of 18-45 N mm⁻² is found.
The peak stress value of the PHBV film of this study, which is blended with PBAT, mineral filler, and process additives, fits within the range of monomaterial PHBV. PBS is strong and tough at the same time, and this is reflected in the tensile and puncture results. In the comparison of the puncture and tear resistance results of the pilot extrusions with those of the commercial films, caution must be taken because of the different thicknesses. The strong mechanical performance of PBS is also reflected in the results of the commercial monolayer, which reaches a moderate peak stress and a very high strain in the tensile test. The stress values of the two PBS-based films, the pilot extrusion and the commercial monolayer, are relatively high compared to the stress values, ranging from 20 to 34 N mm⁻², found in a study on poultry meat packaging [10], a study on bread packaging [31], and a recent review on PBS properties [32]. The increased strength of the films in this study indicates a difference in production, which is known for the pilot extrusion film (blending with PBSA and process additives) but is not known for the commercial monolayer. A previous study on PBS blends showed that mechanical properties were strongly influenced by the compatibility between polymers and by morphology, including microstructure and crystallinity [31]. A moderate puncture force, high displacement, and high energy are achieved in the puncture test. The tear resistance of PBS is rather low compared to the other samples. PLA also stands out as a mechanically well-performing film, with high peak stress and moderate strain in the tensile test, and high force, displacement, and energy in the puncture test. In addition, this film is easy to tear. The two films blended with PBAT are characterized by low strength, high toughness, and very high tear resistance.
A previous study on the mechanical properties of PLA and PLA-PBAT blended films illustrates the strong but brittle tensile performance of PLA film and the weaker but tougher performance of the PLA-PBAT blended film [33]. PBAT is often used in blends to increase the flexibility and toughness of brittle biodegradable materials. The results in Figures 4 and 5 are discussed below. With the exception of cellulose 1, peak stress tends to decrease at increasing temperatures. The tendency for total strain is less clear. PBS, PLA, and the PBAT blends could not be tested at high temperatures because of high stickiness. With glass transition and melting temperatures of −32 °C and 114 °C for PBS and 59 °C and 154 °C for PLA, respectively, it is clear that the sticky behavior occurs above the glass transition temperature [34]. The peak stress of coated paper 2 decreased from 51 N mm⁻² at −18 °C to 23 N mm⁻² at 100 °C, while the material remained brittle at all environmental temperatures of Figure 4. The deviating results of total strain at 4 and 23 °C were caused by delamination of the plastic coating. Cellulose 1 remains very strong, mostly above 100 N mm⁻², and brittle, with strain values ranging from 8 to 22%, at all considered temperatures. PBS remains strong up to 60 °C. Its total strain decreased below 100% at cool temperatures. PLA showed a stronger temperature dependence of peak stress than PBS, achieving 89 N mm⁻² at −18 °C and 22 N mm⁻² at 80 °C. The drop in tensile stress from 20 to 60 °C was previously illustrated in another study on the mechanical performance of PLA tensile specimens, and attributed to approaching the glass transition region of PLA [35]. The total strain of PLA decreased below 20% at cool temperatures. The PBAT blends have low peak stress values, between 24 and 40 N mm⁻² at cool temperatures and 12 N mm⁻² at 60 °C, but high total strain values.
In conclusion, coated paper and the cellulose-based films can be described as strong but very brittle materials. The low strain values, compared to the other, tougher samples, are illustrated in Figure 3. However, the brittleness might be overcome by laminating a tough layer. Both materials can be used over a wide temperature range, from freezing at −18 °C up to 100 °C. The film with PHBV is rather weak and brittle compared to the other materials. The films with PBS and PLA are strong and tough materials under standard conditions. The toughness, however, decreases at low temperatures. On top of that, stickiness initiates well below 100 °C, which will restrict their use to a narrow temperature range, especially if moderate toughness is required. If brittleness is not a big issue, these materials can be used at cold and standard temperatures. The blended PBAT films, with starch or PLA, are rather weak but very tough, even at cool temperatures. Because melting initiates well below 100 °C, the use of these blends is restricted to cold and standard temperatures.

Gas Permeability

In Table 3, coated paper 1 shows barrier properties similar to polyolefin film, because of its high OTR and rather low WVTR values [36], pp. 259-308. This gas barrier performance can be related to the presence of low-density polyethylene (LDPE) at the seal surface, identified with ATR-FTIR. A 25-µm pure LDPE reference film has an OTR between 6500 and 7800 cc m⁻² d⁻¹, measured at 23 °C and 0% RH, and a WVTR between 12 and 19 g m⁻² d⁻¹, measured at 38 °C and 90% RH [36], pp. 259-308. The values of coated paper 1 correspond with the transmission rates of 10-15 µm of LDPE. Coated paper 2, on the other hand, is a low gas barrier material for food packaging applications. Specifically for the WVTR of coated paper, a recent study compared high gas barrier coated papers at 23 °C, 85% RH and at 38 °C, 85% RH and suggested that the integrity of the barrier layer was disrupted at 38 °C [37].
The authors of that study suggest using milder test conditions to simulate the environment of food packages more closely and to prevent disruption of the barrier layers. Because of the low OTR values of the cellulose films, additional oxygen measurements at 50% RH are done to check the influence of humidity on oxygen transmission. With respective values of 3.7 and 5.8 cc m⁻² d⁻¹, it is clear that the OTR increases with increasing RH. These cellulose films have barrier coatings, because neat cellulose is a low gas barrier for food applications. Both films achieve values similar to poly(vinylidene chloride) (PVDC)-coated materials. PVDC, which is a high gas barrier for food applications [36], pp. 259-308, is identified with ATR-FTIR in the seal surface of cellulose 1, but not in that of cellulose 2. Cellulose 2 is, however, laminated with a PBS layer that obstructs the identification of parent layers with ATR-FTIR. These films can be used to maintain a modified atmosphere in food packages. Paper and cellulose are low-barrier substrates that require a barrier layer, such as a coating, to improve the barrier properties. This is illustrated in Figure 6. The barrier properties of such coated materials are mostly attributed to thin barrier layer(s) in the coating. Coating thickness, multilayer architecture, individual layer composition, and concentration gradient are determining factors in this process [36], pp. 259-308. An example of such a process is the transmission of water vapor from the atmosphere, across the packaging material, into the dry headspace of food applications such as cookies. In some applications, such as yogurt, the process is reversed.

Figure 6. Permeation of gas and/or vapor, from atmosphere to headspace, through coated, low-barrier substrates.
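The dominance of a thin barrier layer in a coated, low-barrier substrate, as sketched in Figure 6, is commonly idealized with a series-resistance (ideal laminate) model: each layer contributes a resistance thickness/permeability, and the resistances add. This model is a textbook idealization, not one fitted in this study; the layer values below are hypothetical:

```python
# Ideal-laminate (series-resistance) model for steady-state permeation
# through a coated substrate: a layer with thickness l (mm) and
# permeability P (cc*mm/(m^2*day)) contributes a resistance l / P.

def multilayer_tr(layers):
    """layers: list of (thickness_mm, permeability) tuples.
    Returns the transmission rate in cc/(m^2*day)."""
    resistance = sum(l / p for l, p in layers)
    return 1.0 / resistance

# Hypothetical example: a 50 um low-barrier substrate (P = 100) with a
# 2 um barrier coating (P = 0.1). The thin coating dominates the result.
substrate_only = multilayer_tr([(0.050, 100.0)])
coated = multilayer_tr([(0.050, 100.0), (0.002, 0.1)])
print(round(substrate_only), round(coated, 1))  # 2000 48.8
```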
A previous study, which produced biodegradable blown extruded films of blends of thermoplastic starch and PBAT, functionalized with plasticized nitrite, measured a relatively low oxygen permeability, with a permeability coefficient down to 1.2 cc mm m⁻² d⁻¹ for films with 5% nitrite content [38]. This coefficient corresponds with an OTR value of 24 cc m⁻² d⁻¹ for a film of 50 µm thickness. There is still a gap between this moderate value and the values measured with the commercial cellulose films in this study. More research is needed to obtain biodegradable food packaging with the permeation levels of the cellulose films in this study, without the need for non-biodegradable functional components. The pilot extrusions and monolayer films are low gas barrier materials for food packaging applications. The application of low gas barrier samples, such as the coated papers, the pilot extrusions, and the monolayer films, is restricted to foods with low-barrier or high-respiration requirements, such as unprocessed fruit and vegetables with short shelf lives. In these food applications, high permeation of water vapor and oxygen gas is required to avoid, respectively, the accumulation of saturated water vapor, which leads to fungal growth, and anoxic conditions [39]. If a high gas barrier is required, these films need to be coated and/or laminated with materials that can add this property. Coated paper 1 might be used for applications that need a water vapor barrier but no oxygen barrier, which can be the case for some dry foods, such as flour, dried pasta, crackers, and cookies. The barrier cellulose films can be used for applications with oxygen and water vapor barrier requirements; typical examples are cheese, meat, high-fat products, and ready meals [36], pp. 259-308.
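The thickness normalization used above, where a permeability coefficient of 1.2 cc mm m⁻² d⁻¹ corresponds to an OTR of 24 cc m⁻² d⁻¹ at 50 µm, is a simple division of the coefficient by the film thickness:

```python
# Convert a permeability coefficient in cc*mm/(m^2*day) to a transmission
# rate (OTR) in cc/(m^2*day) by dividing by the film thickness in mm.

def otr_from_coefficient(perm_coeff, thickness_um):
    return perm_coeff / (thickness_um / 1000.0)

# The literature value quoted above: 1.2 cc*mm/(m^2*day) at 50 um thickness.
print(round(otr_from_coefficient(1.2, 50.0), 1))  # 24.0
```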
Seal Performance

The seal strength and hot tack initiation and mid-slope temperatures in Table 4 are, with the exception of the thick pilot extrusion films, below or equal to those of typical polyolefin-based seal layers, such as LDPE, ionomers, or metallocene plastomers [36], pp. 181-257. Six out of ten films achieve over half of their maximum seal strength at jaw temperatures below 100 °C. These materials can be considered for high-speed packaging operations. Because uncoated paper cannot be heat-sealed, the heat-seal characteristics of coated paper are mainly attributed to the coating material, coating thickness, and coating process. Coated paper 2 outperforms coated paper 1, with lower initiation temperatures and higher hot tack strength. It is capable of maintaining a minimum hot tack strength threshold of 0.1 N mm⁻¹ over a very wide temperature region of 110 °C. The seals of coated paper fail by delamination of paper fibers during the seal strength and hot tack tests. Cellulose 2 has lower initiation temperatures and higher strengths than cellulose 1. The better seal performance of cellulose 2 is attributed to the lamination of a PBS layer with excellent seal properties. The seals of cellulose 1 fail by peeling cohesively, whereas those of cellulose 2 fail by breaking of unsealed material during a seal strength test. The difference in failure mechanism is related to the big difference in maximum seal strength. In the hot tack test, both materials fail by peeling cohesively. The different seal failure mechanism of cellulose 2 in the hot tack test, compared with the seal strength test, is related to the very short cooling time: the seal is evaluated 0.1 s after opening of the hot jaws, when it is still hot. The pilot extrusion films show high initiation temperatures; this is typical for heat-conductively sealed thick films, where heat is transferred through a thick layer, from the hot jaws to the outer layers and the seal interface, so that entanglement can occur.
The seals of the PHBV blend fail by peeling cohesively, whereas those of the PBS blend fail by breaking of unsealed material during a seal-strength test. In the hot tack test, a break in the proximity of the seal is observed with both materials. The presence of a weak spot in the remote material is suggested as a hypothesis: the weak spot is still hot, but thinner than the seal area. Both thick pilot extrusion films can be heat-sealed, but a thinner commercial structure should be evaluated to determine specific application areas for these materials. The thin PBS and PLA monolayers have low initiation temperatures and rather high strengths for materials without rigid backing layers. The seals of these materials fail by breaking of unsealed material during a seal-strength test. In the hot tack test, both materials peel cohesively and/or break in the proximity of the seal. PLA has the advantage of maintaining its hot tack strength over a wide temperature range. The thin monolayers with PBAT also seal at low temperatures, but their strengths are rather low. Both PBAT blends show seal failure mechanisms similar to those observed with the PLA and PBS monolayers. Low seal strengths are beneficial in easy-peel applications. In a previous study that evaluated the seal performance of several PLA-PBAT blend ratios, sealed to a PLA container, the blended films were characterized as easy-peel [9]. It can be concluded that coated paper 2, cellulose 2, and PLA are very well suited for packaging operations where the hot seal is put under pressure, such as in vertical form-fill-sealing, or where springback forces are induced immediately after sealing, for example, by solid food contaminants in the seal area. The thin PBS monolayer could similarly be used, but stricter temperature control is advised because of the smaller hot tack temperature window. The use of coated paper 1 is restricted to operations where the hot seal is not pressurized.
Cellulose 1 and the two thin PBAT blends are heat-sealable, but their use is restricted to applications where only low strength is required, such as the packaging of low-weight foods or easy-peel applications. The optimal parameters in Table 5 are equal for clean and contaminated seals, because all interaction terms of contamination with a seal parameter are not significant and are left out of the fitted models. The results of individual runs, coefficients, and p-values of terms in the fitted models are not shown, because the sole objective is the evaluation of the clean and contaminated maximal seal strengths. A 95% confidence interval is calculated, based on five experiments at the optimal seal parameters. Only for clean coated paper 1, water-contaminated coated paper 2, and grated-cheese-contaminated cellulose 2 are the predicted values slightly outside the confidence interval. All other predicted maxima fall within a 95% confidence interval. All considered materials have overlapping confidence intervals for clean and contaminated seals, so the clean maximal seal strengths can be matched under contamination. Powder contamination densities of 12 g m⁻² and above have been related to aggregate formation and a decrease in the maximum seal strength of polyethylene film [40]. For the considered coated papers, this threshold value can be exceeded while the maximum seal strength is maintained. Further experiments with higher contamination densities can be done to study the limits of these materials. Both coated papers can be considered for packing fresh foods. Further experimental tests and/or finite element analysis with target foods and packaging of specified dimensions can be done to check whether the seal strength of these coated papers is sufficient for the food packaging application. The barrier cellulose films can be considered for packing grated cheese. The very low seal strengths of cellulose 1 make this material unsuited for heavyweight applications.
One might combine the good seal-through-contamination performance and almost equally strong hot tack of cellulose 1, shown in Table 4, with its easy-tear features, shown in Table 2, in easy-tearable, low-weight packages. Cellulose 2 can be used in higher-weight cheese packages. Besides additional mechanical analysis of the entire food packaging concept, to check whether the seal strength is sufficient, additional leak tests are advised because of the importance of good barrier properties for grated cheese packaging.

Additional Characterization

Opacity, which in previous studies was normalized to thickness for homogeneous film structures, is correlated with film thickness [41,42]. Besides thickness, variations in opacity can be related to the material composition, such as the reflection of light by foreign nanoparticles [43]. There is also an obvious impact of printing and coloration on opacity. The opacity results in Table 6 show big differences between the samples. Non-transparent samples, as shown in Figure 1, such as the coated papers and the black PLA + PBAT blend, have high opacity values. Food packaging with transparency is, however, preferred by consumers [44]. Samples with low opacity values, such as PLA and cellulose 1, approach full transparency, with respective values of 7.9 and 11.5. These values are in the same range as those of other biodegradable films measured with the same method [43]. Other thin samples have hazier appearances, which is reflected in increased opacity values. The thicker pilot extrusions have moderate opacity values compared to the other samples. The tendency of food to adhere to the packaging surface determines to a large extent the preservation of the food [45]. Hydrophobic surface properties are desired to improve the resistance to chemical interactions with food by minimizing the contact area.
The values in Table 7 are in a narrow range of 80-105°, between those of smooth cellulose films, which are hydrophilic and have contact angles below 50°, and superhydrophobic surfaces, a property that can also be achieved with biodegradable materials, characterized by contact angles above 150° [46]. The standard deviations of the results are rather high compared to reported values in the literature [47-49], suggesting inhomogeneous surfaces. The water contact angle of coated paper 1 is similar to a value for LDPE reported in a previous study [47]. Contact angles of PBAT blends, with thermoplastic starch and nano zinc oxide, in a previous study were between 89° and 104° [49]. This range is similar to the ranges of the values of the samples in this study with PBAT blends at the surface, such as coated paper 2, the pilot extrusion PHBV, and the two monolayer blends, PLA + PBAT and starch + PBAT. Another study reports a low value of 57° for PBAT [47], which highlights the difficulty of comparing these values in the literature. The same study reports a value of 68° for PLA, whereas the value for PLA in this study is 80°, which is low compared to the other samples. The two PBS samples of this study, with values of 84° for the thin monolayer and 104° for the pilot extrusion, are also higher than a value reported in a previous study [48]. In conclusion, the water contact-angle values of the samples in this study are higher than or equal to the values found in the literature. This is probably related to modifications in commercial food packaging films intended to decrease the contact area with food.

Conclusions

Coated papers and high-barrier cellulose films are brittle materials with a potential use over a wide environmental temperature range. Barrier and/or heat-seal properties can be altered with an appropriate plastic coating. The case studies checking the seal-through-contamination performance show that the maximal seal strength can be maintained.
In a comparison of the two thick pilot extruded films, the PBS blend is stronger and tougher than the PHBV blend at standard environmental temperature. Without the use of additional gas barrier layers, the application of these materials is restricted to foods with low-barrier requirements, such as takeaway meals and unprocessed fruit and vegetables. Both materials can be heat-sealed. In order to determine seal application areas, film production needs to be optimized to obtain commercial structures, such as thin flexible films or trays. The application of PBS, PLA, a PLA-PBAT blend, and a starch-PBAT blend is restricted to foods with low-barrier requirements. Additional barrier layers, of which the identified PVDC layer in the high gas barrier cellulose film is an example, are needed to use these materials for foods with high-barrier requirements, such as meat, cheese, high-fat products, and ready meals. The monolayers with PLA and PBS combine high strength and toughness at standard environmental temperatures. However, the temperature window of these good mechanical features is narrow. Both materials are able to produce strong seals with low initiation temperatures. They can be applied as strong seal layers in high-speed VFFS applications or as heavy-duty monolayers at standard environmental temperatures. Application at cold temperatures can be considered if the low maximum strains are sufficient for the specific food packaging. The PBAT blends are weak but tough from cold to standard environmental temperatures. Their application is restricted at temperatures above 60 °C. These materials can be applied as relatively weak seal layers, which is of high interest in easy-peel applications, and as light-duty monolayers at cold and standard environmental temperatures.
Depending on the selection of coated and/or laminated materials, the application potential of biodegradable materials in food packaging is very broad, ranging from low-barrier packaging of low-weight foods at standard temperature to high-barrier packaging, such as modified atmosphere packaging of high-weight foods, extreme-temperature processing, and/or high-speed applications, such as vertical form-fill-sealing. Biodegradable food packaging is emerging, and this study fully supports the implementation of commercially available biodegradable materials for the identified food applications.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.

Conflicts of Interest: The authors declare no conflict of interest.
The effectiveness of interventions to improve uptake and retention of HIV-infected pregnant and breastfeeding women and their infants in prevention of mother-to-child transmission care programs in low- and middle-income countries: protocol for a systematic review and meta-analysis

Background

Despite recent improvements, uptake and retention of mothers and infants in prevention of mother-to-child transmission (PMTCT) services remain well below target levels in many low- and middle-income countries (LMICs). Identification of effective interventions to support uptake and retention is the first step towards improvement. We aim to complete a systematic review and meta-analysis to evaluate the effectiveness of interventions at the patient, provider, or health system level in improving uptake and retention of HIV-infected mothers and their infants in PMTCT services in LMICs.

Methods/Design

We will include studies comparing usual care or no intervention to any type of intervention to improve uptake and retention of HIV-infected pregnant or breastfeeding women and their children from birth to 2 years of age attending PMTCT services in LMICs. We will include randomized controlled trials (RCTs), cluster RCTs, non-randomized controlled trials, and interrupted time series. The primary outcomes of interest are the percentage of HIV-infected women receiving/initiated on anti-retroviral prophylaxis or treatment, the percentage of infants receiving/initiated on anti-retroviral prophylaxis, and the percentage of women and infants completing the anti-retroviral regimen/retained in PMTCT care.
The following databases will be searched from inception: Ovid MEDLINE and EMBASE, The WHO Global Health Library, CAB abstracts, EBM Reviews, CINAHL, HealthSTAR and Web of Science databases, Scopus, PsychINFO, POPLINE, Sociological Abstracts, ERIC, AIDS Education Global Information System, NLM Gateway, LILACS, Google Scholar, British Library Catalogue, DARE, ProQuest Dissertation & Theses, the New York Academy of Grey Literature, Open Grey, The Cochrane Library, WHO International Clinical Trials Registry, Controlled Clinical Trials, and clinicaltrials.gov. Reference lists of included articles will be hand searched and study authors and content experts contacted to inquire about eligible unpublished or in-progress studies. Screening, data abstraction, and risk of bias appraisal using the Cochrane Effective Practice and Organization of Care criteria will be conducted independently by two team members. Results will be synthesized narratively, and a meta-analysis will be conducted using the DerSimonian Laird random effects method if appropriate based on assessment of clinical and statistical heterogeneity.

Discussion

Our findings will be useful to PMTCT implementers, policy makers, and implementation researchers working in LMICs.

Systematic review registration: PROSPERO CRD42015020829

Electronic supplementary material: The online version of this article (doi:10.1186/s13643-015-0136-x) contains supplementary material, which is available to authorized users.

Keywords: HIV, Prevention of mother-to-child transmission, Interventions, Retention, Uptake

Background

Although the incidence of pediatric HIV acquisition is falling, over 240,000 children were newly infected with HIV in 2013, primarily through mother-to-child transmission [1].
Prevention of mother-to-child transmission (PMTCT) therapeutic regimens have been proven to reduce the risk of mother-to-child transmission from 20-45 % to 2 % in non-breastfeeding populations and 5 % or less in breastfeeding populations [2]. However, despite recent improvements in PMTCT clinical service coverage in low- and middle-income countries (LMICs) from 10 % in 2004 to 67 % in 2013, uptake and retention of mothers and newborns in PMTCT clinical services remain well below target levels in many LMICs [1,2]. PMTCT services begin with maternal HIV testing and counseling and for HIV-infected women include the following: initiation and maintenance of pregnant and nursing women and their infants on PMTCT medication regimens for the duration of treatment as defined by the specific regimen employed; and completion of appropriate infant HIV testing. Through its 2010-2015 PMTCT strategic vision, the World Health Organization (WHO) has called for renewed commitment and effort towards achieving universal PMTCT coverage. The identification of interventions to support PMTCT uptake and retention is the first step towards improvement. To date, two systematic reviews have been published that specifically evaluated the effectiveness of interventions to improve PMTCT coverage. Both were limited to specific interventions (male involvement [3] and integration of services [4]) and found too few studies meeting inclusion criteria to assess or make recommendations regarding effectiveness. A third systematic review identified nine completed studies and five ongoing trials which examined initiation of antiretroviral (ARV) treatment in pregnant women [5]. While the authors report several promising interventions for improving ARV initiation, the quality of evidence was insufficient to support recommendations. In addition, results for ARV initiation in pregnant women were not independently examined, and maternal retention in PMTCT care and exposed infant care were not assessed. 
However, in our preliminary search, we identified a number of additional interventions, including integration of HIV and antenatal care, peer-based programs, and community health worker programs [6][7][8], that have been evaluated to improve PMTCT uptake and retention in LMICs. Given the paucity of synthesized evidence to date, we propose to complete a systematic review to identify which interventions are effective in improving uptake and retention of HIV-infected mothers and their infants in PMTCT services in LMICs. While we anticipate a relatively small number of evaluations of any given intervention type, which may preclude meta-analysis, a narrative synthesis of the evidence to date is urgently needed to inform LMIC PMTCT program development and policy. With the exception of Option B+ (lifelong triple ARV therapy for all HIV+ pregnant and breastfeeding women, regardless of clinical stage or CD4 count), recommended by WHO in April 2012 and for which evidence is not yet available, the effectiveness of PMTCT regimens is well established; studies of regimen efficacy will therefore not be included in the present search [9]. Methods/Design Protocol A preliminary systematic review protocol was developed based on the Cochrane Handbook [10]. The protocol was revised with input from the PURE Malawi Consortium, a research partnership of governmental, nongovernmental, and academic organizations working to improve PMTCT programming in Malawi. The final protocol was registered with the PROSPERO database (CRD42015020829, available at: http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42015020829#.VXHCNUZBn5I), with reporting of the protocol guided by PRISMA-P [11]. Eligibility criteria We will include studies of HIV-infected pregnant and breastfeeding women and their children from birth to 2 years of age or termination of breastfeeding in LMICs. 
For the purpose of this review, we will utilize the EPOC filter to identify low- and middle-income countries [12], updated using the most recent World Bank World Country and Lending group classification [13], to define LMICs. Based on the unique challenges facing PMTCT health services in LMICs and the intended use of the findings of this review to inform PMTCT service development in Malawi and other LMICs, we chose to limit the review to studies conducted in LMICs. Studies conducted only in high-income countries or where LMIC results cannot be separated will not be eligible for inclusion. We will include studies comparing usual care or no intervention to any type of intervention (including patient, provider, or health system level interventions) to improve uptake and retention of HIV-infected pregnant or breastfeeding women and their children from birth to 2 years of age in PMTCT services. Patient level interventions are those focused on the patient and may include patient education programs, peer support programs, or efforts to improve patient support through engagement of partners or family members. Provider level interventions may include provider training, incentive programs, or tools to improve care provided. Health system level interventions may include restructuring of services and task shifting or other mechanisms to address human resource shortages. The primary outcomes of interest are percentage of HIV-infected women receiving or initiated on ARV prophylaxis or treatment, percentage of infants born to HIV-infected mothers receiving or initiated on ARV prophylaxis, and percentage of women and infants retained in PMTCT care/completing the ARV regimen as defined by the PMTCT regimen utilized. 
Secondary outcomes of interest include the following: percentage of infants completing post-exposure HIV testing at 4-6 weeks after birth and percentage of infants completing post-exposure HIV testing at 6 weeks following termination of breastfeeding for all infants with known HIV exposure, as recommended by the WHO [14]; percentage of HIV-exposed infants testing positive for HIV; and adverse events, including negative impact(s) on resources/delivery and/or effectiveness of other health care programs (including economic impact), major (e.g., heart defects, neural tube defects, major limb malformations, hypospadias) or minor (e.g., syndactyly, cutis aplasia, accessory digit) congenital malformations, small for gestational age, premature delivery, stillbirth, and infant death within the first 2 years of life. We will include controlled experimental studies (randomized controlled trials, cluster randomized controlled trials, non-randomized controlled trials) and controlled quasi-experimental studies (interrupted time series). We chose to include non-randomized controlled trials and quasi-experimental designs based on the results of our scoping searches, in which we found few randomized controlled trials that evaluated interventions to improve uptake and retention of HIV-infected women and their children in PMTCT services conducted in LMICs. Language of publication will be restricted to the language spoken by the study team and includes English only. No restrictions will be placed on publication status, study time frame, or duration of follow-up. Information sources and literature search Our search strategy was developed in consultation with an experienced information specialist and peer reviewed by two additional information specialists with expertise in systematic reviews using the Peer Review of Electronic Search Strategies checklist [15]. 
We will search the following electronic databases from inception to June 2015 using medical subject headings (MeSH) and text words related to HIV, pregnancy, breastfeeding, mother-to-child transmission, interventions, treatment uptake and retention, and low- and middle-income countries, using MEDLINE (OVID interface, 1946 to July . In addition, we will search reference lists of included articles and will contact experts in the field to inquire about eligible unpublished or in progress studies. Low- and middle-income countries will be searched utilizing the EPOC LMIC filter [12], updated based on the most recent World Bank LMIC list [13]; see Additional file 1 for the full MEDLINE search strategy. We will employ the Cochrane highly sensitive search strategy for identifying randomized trials in OVID MEDLINE: sensitivity and precision maximizing version [16], with the following two changes: Random* was used in place of randomized or randomly, and trials.ti was not used as an isolated term. Study selection process All titles and abstracts identified by the database search will be entered into a reference manager and duplicates manually moved to a duplicates folder, with companion papers for the same study retained for further evaluation at the full article phase of the review. Citations will be screened in two phases, level 1 (titles and abstracts) and level 2 (full-text review). A screening checklist will be developed and pilot tested by the reviewers on a random sample of 50 citations for each screening phase. Inter-rater agreement will be calculated for the pilot test and the form revised and re-piloted if percent agreement is <90 %. Once adequate agreement has been achieved, two team members will independently screen citations using the screening checklist. Differences at each stage will be resolved by consensus and if necessary through discussion with a third team member who is a content expert. 
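The pilot-test threshold described above (revise and re-pilot the form if percent agreement is below 90 %) is straightforward to compute. A minimal sketch in Python; the function name and example decisions are ours, purely illustrative, not part of the protocol:

```python
def percent_agreement(rater_a, rater_b):
    """Percentage of screening decisions on which two reviewers agree."""
    if len(rater_a) != len(rater_b):
        raise ValueError("both reviewers must screen the same citations")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

# Hypothetical pilot: 50 citations, include (1) / exclude (0) decisions
reviewer_1 = [1] * 40 + [0] * 10
reviewer_2 = [1] * 38 + [0] * 12   # disagrees on two citations
print(percent_agreement(reviewer_1, reviewer_2))  # 96.0 -> threshold met
```

Note that raw percent agreement does not correct for chance agreement; a chance-corrected statistic such as Cohen's kappa is a common complement in screening pilots.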
Reference lists of included studies will be reviewed independently by the same two team members, and again differences will be resolved through consensus and if necessary consultation with a third team member. A review log will be maintained in order to provide a record of resolution of discrepancies, decisions regarding studies described in >1 report, and reasons for exclusion. Data abstraction and management Data abstraction forms will be developed and pilot tested. Two team members will independently abstract directly into Excel spreadsheets, corresponding to outcome tables, with additional space for comments and reasons for exclusion. Inter-rater reliability will be measured for data abstraction on a sample of excluded and included articles (approximately 10 %), and if percent agreement is found to be below 90 %, abstraction will be conducted by a third team member. All discrepancies will be reviewed and consensus reached through discussion. Data abstraction will be based on the PICOST [17] format, including population, intervention, comparator, context, outcomes, study design, and time frame. Population characteristics to be abstracted include maternal age, number of children, marital status, place of residence (rural/urban), level of education, primary language, first infant HIV testing (4-6 weeks), and at end of study. Study characteristics of interest include study design, country and geographical location within country (rural/urban), setting (home, hospital or health center clinic, maternity ward), detailed description of intervention and comparator (usual care/no intervention), number of participants per group at study baseline and follow-up, duration of intervention and follow-up period, source of data (self-report, clinical records, pill counting), and publication status. 
Outcome data to be abstracted include percentage of HIV-infected women and their infants receiving or initiating PMTCT treatment, and retained in or completing PMTCT as defined by the PMTCT regimen(s) used. Where data necessary for analysis are missing, corresponding authors will be contacted. Although improved in recent years, examples of cluster trials inappropriately analyzed (without adjustment for cluster randomization) may be found among older trials. Data on appropriateness of analysis will be abstracted and reported as part of the review findings. Methodological quality/risk of bias appraisal Risk of bias assessment will be conducted using the Cochrane Effective Practice and Organization of Care (EPOC) criteria for assessing risk of bias [18]. Categories of bias assessed by this tool for randomized controlled trials and non-randomized controlled trials include: allocation concealment, measurement of baseline characteristics and outcomes, management of incomplete data, blinding of outcome assessment, protection against contamination, selective reporting, and other categories of bias [18]. Categories of bias assessed by this tool for interrupted time series and repeated measures studies include independence of the intervention from other changes, pre-specification of the intervention effect shape, effect of data collection on the intervention, allocation concealment, management of incomplete data, selective reporting, and other sources of bias [18]. Two team members will independently assess the studies for risk of bias at both study and outcome levels, with disagreement resolved by consensus and discussion with a third team member if necessary. Studies will not be excluded based on risk of bias assessment, but the information will be used in the analysis and reporting of findings. Risk of bias will be categorized as low, high, or unclear risk of bias, using the EPOC-suggested risk of bias criteria [18]. 
We have elected not to use GRADE for this review, given that the review findings are urgently needed to inform PMTCT program development and policy and that building capacity in the use of GRADE across the team would significantly prolong the review timeline. Risk of publication bias will be examined using funnel plots. For studies in which selective reporting bias is suspected, planned outcomes will be reviewed for registered trials; authors will be contacted about missing outcomes and about unregistered trials, and risk of selective reporting bias will be rated as unclear if no response is received within 8 weeks of our initial email request. Evidence synthesis A flow diagram will be utilized to visually present the results of the search strategy and reasons for exclusion of articles. Included articles will be synthesized and reported narratively and in tabular form to provide an overview of findings, assessment of bias and its potential impact on reported findings, and strengths and weaknesses of included studies. Summary statistics for continuous outcomes will be expressed as mean difference and standardized mean difference with 95 % CIs, for outcomes reported using the same and different scales, respectively. Summary statistics for dichotomous data will be expressed as risk ratio with 95 % CI. If meta-analysis is possible, it will be conducted using the DerSimonian Laird random effects method, with summary statistics expressed as risk ratios with 95 % confidence intervals. Clinical heterogeneity will be determined based on patient, intervention, and outcome characteristics of included studies. Statistical heterogeneity will be determined visually and the impact of heterogeneity assessed using the I² statistic, with I² of 75 % or greater considered significant. 
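The DerSimonian Laird pooling and I² statistic referenced above can be sketched in a few lines. This is a simplified illustration on hypothetical log risk ratios, not the analysis code for this review:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effects (e.g. log risk ratios) with DerSimonian-Laird
    random effects; returns the pooled effect and I^2 (%)."""
    k = len(effects)
    w = [1.0 / v for v in variances]                    # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    i2 = max(0.0, 100.0 * (q - (k - 1)) / q) if q > 0 else 0.0
    return pooled, i2

# Three hypothetical studies: risk ratios 0.5, 1.2 and 0.7 with their variances
log_rr = [math.log(0.5), math.log(1.2), math.log(0.7)]
variances = [0.02, 0.03, 0.025]
pooled, i2 = dersimonian_laird(log_rr, variances)
print(round(math.exp(pooled), 2), round(i2, 1))
```

With these made-up inputs, I² lands above the 75 % threshold noted in the protocol, so a pooled estimate would be interpreted with caution.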
Given the time constraints for this review, reanalysis for unit of analysis errors will not be conducted; cluster trials with unit of analysis errors will be excluded from the primary meta-analysis, and their impact assessed with a sensitivity analysis comparing the meta-analysis with and without studies with unit of analysis errors included. Interventions at the patient, provider, and health system level will be reported separately and analyzed separately if possible. Discussion The findings of this review will have significant implications for PMTCT program development and policy in LMICs. If high-quality evidence of intervention effectiveness is identified, this will provide important guidance to ongoing efforts to address low rates of uptake and retention of HIV-infected mothers and their infants in PMTCT services in LMICs. If high-quality evidence is not identified, the findings of the systematic review may identify gaps in evidence and promising interventions, providing direction for future intervention research. To ensure our findings reach audiences who may benefit from them, we plan to disseminate the results through publication in open access peer-reviewed journals, presentations at relevant international conferences, and direct communication within the professional networks of PURE consortium members. Authors' contributions LPR and MvL conceived the study. LPR and SS were responsible for developing the search strategy. LPR was responsible for preparing and registering the protocol and for manuscript preparation. LPR, MvL, and SS were responsible for finalizing the protocol. MCH, NER, SP, ML, and FC provided content expertise and assisted with preparation of the protocol and manuscript. All authors provided critical revision of the protocol and manuscript. All authors read and approved the final manuscript.
Transcriptomic profiling of pemphigus lesion infiltrating mononuclear cells reveals a distinct local immune microenvironment and novel lncRNA regulators Pemphigus is an autoimmune skin disease. Ectopic lymphoid-like structures (ELSs) were found to be commonly present in pemphigus lesions, presumably supporting in situ desmoglein (Dsg)-specific antibody production. Yet the functional phenotypes and regulators of lymphoid aggregates in pemphigus lesions remain largely unknown. Herein, we used microarray technology to profile gene expression in skin lesion infiltrating mononuclear cells (SIMC) from pemphigus patients. On top of that, we compared the SIMC dataset to a peripheral blood mononuclear cells (PBMC) dataset to characterize the unique role of SIMC. Functional enrichment results showed that mononuclear cells in skin lesions and peripheral blood both had over-represented IL-17 signaling pathways, while neither was characterized by an activation of type I interferon signaling pathways. Cell-type identification by estimating relative subsets of RNA transcripts (CIBERSORT) showed that naïve natural killer cells (NK cells) were significantly more abundant in pemphigus lesions, and their relative abundance positively correlated with B cell abundance. Meanwhile, the plasma cell population highly correlated with type 1 macrophage (M1) abundance. In addition, we also identified a lncRNA, LINC01588, which might epigenetically regulate the T helper 17 cell (Th17)/regulatory T cell (Treg) balance via the peroxisome proliferator-activated receptor (PPAR) signaling pathway. Here, we provide the first transcriptomic characterization of lesion infiltrating immune cells, which illustrates a distinct interplay network between adaptive and innate immune cells. It helps discover new regulators of the local immune response, which potentially will provide a novel path forward to further uncover pemphigus pathological mechanisms and develop targeted therapy. 
Supplementary Information The online version contains supplementary material available at 10.1186/s12967-022-03387-7. An enhanced understanding of the genetic basis of these largely unexplored immune cells is a prerequisite to advance the search for a more targeted therapy. The use of transcriptome analysis has been a key method in uncovering latent mechanisms that may be causing or compounding diseases. Microarray expression profiling of human PBMC has identified novel therapeutic targets and promising diagnostic biomarkers for autoimmune diseases [6][7][8][9]. However, as skin harbors a pool of innate and adaptive immune cells constituting a complex network, studies of peripheral blood may not reflect the local immune responses in skin lesions. By B cell receptor repertoire sequencing, we have previously revealed that certain clones of lesional B cells expanded locally in pemphigus [5]. Hence, we aim to further characterize the compositions and dynamics of immune infiltrates in lesions. Meanwhile, increasing evidence has shown that immune responses are regulated not only by signaling pathways but also by epigenetic mechanisms involving DNA methylation, histone modification and non-coding RNAs (ncRNA) [10]. Changes of lncRNAs (ncRNA transcripts > 200 bp) are especially pervasive in human autoimmune diseases [11]. lncRNAs possess various biological functions, such as regulating protein and RNA stability as well as protein-DNA interaction. Yet, little is known about the lncRNA expression profile in pemphigus. As a valuable model of organ-specific humoral autoimmune disease, transcriptome analysis of pemphigus, including lncRNA and mRNA, may help to identify novel autoimmunity-promoting genes. In this study, both SIMC and PBMC microarray datasets were analyzed. We first screened out DEGs between pemphigus and healthy samples, then compared the two sample sources (peripheral blood and lesions) to uncover their transcriptomic differences. 
CIBERSORT and GSEA were used to evaluate the abundance of immune cells and analyze the mechanisms by which those immune infiltrates may affect pemphigus pathogenesis. Subsequently, both datasets were integrated and analyzed with WGCNA and Cytoscape in an attempt to discover pathogenesis-related modules. Our findings corroborate the involvement of local immune dysregulation and altered immune cell composition as potential drivers of pemphigus lesions. Moreover, we constructed a lncRNA-mediated competing endogenous RNA (ceRNA) network and identified epigenetic regulators, such as LINC01588, which might modulate the Treg/Th17 balance via the PPAR signaling pathway. Our study sheds light on the microenvironment at skin lesions and its potential epigenetic regulatory mechanism in pemphigus. Patient recruitment and ethical approval Skin biopsies were collected from 4 patients with pemphigus and 4 age- and sex-matched healthy donors. In the pemphigus group, only blister or erosion skin lesions were collected. Blood samples were also collected from 4 patients with pemphigus and 4 age- and sex-matched healthy donors. All the patients were diagnosed with pemphigus foliaceus or pemphigus vulgaris and had not been treated with systemic therapy before the study. The diagnoses were confirmed with clinical manifestations, histology, Dsg-specific antibody tests and immunohistology criteria. The Shanghai Jiao Tong University School of Medicine Research Ethics Committee approved the study. Written informed consent was obtained from all subjects before involving them in the study. Sample collection, skin cell preparation, and mononuclear cell preparation 1 cm² skin biopsy samples from four patients with pemphigus and four healthy donors were collected and incubated in a buffer containing collagenase IV, hyaluronidase, and DNase-I (Sigma-Aldrich, St. Louis, MO) for digestion at 37 °C for 2 h. 
After digestion, the samples were passed through a 70 mm cell strainer (BD Biosciences, USA), and single cell suspensions were obtained. Mononuclear cells were isolated from skin tissue single cell suspensions by density gradient separation using Lymphoprep solution (Axis-Shield, Norway), washed with phosphate buffer saline, and resuspended in RPMI 1640 (Invitrogen, USA) medium supplemented with 1 ml 5% fetal bovine serum (FBS; Sigma-Aldrich, USA). 4 ml blood samples were collected from a total of 8 participants (4 pemphigus patients and 4 healthy controls), from which PBMCs were isolated by density gradient separation using Lymphoprep (Stemcell Technologies, Vancouver, Canada) within 4 h of blood collection. RNA extraction, quality, and integrity determination Ranging from 3.0 × 10⁵ to 8 × 10⁵ cells, Lymphoprep-isolated mononuclear cells derived from each sample were prepared for further experimentation. Total RNA was extracted from the mononuclear cells of pemphigus lesions and normal skin using Trizol (Invitrogen, USA). Purity and concentration of isolated total RNA were measured using a NanoDrop® UV-Vis spectrophotometer (Thermo Fisher, USA). Sampling and RNA isolation were performed by the same personnel using the same methodology [12]. Differential expression analysis and functional enrichment Raw signal intensity was converted into normalized and summarized expression data, which was used as input for the linear models for microarray data analysis algorithm (LIMMA) to assess differential expression of genes between the pemphigus group and healthy controls (HC). The computing process was done with the LIMMA package in R. Genes with log fold-change (logFC) greater than or equal to 1 and p-value < 0.05 were regarded as differentially expressed and selected for further functional enrichment analysis. 
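The thresholding step described above (|logFC| ≥ 1 and p < 0.05) can be illustrated on toy data. The sketch below uses a plain per-gene t-test as a stand-in for LIMMA's moderated statistics; all values are synthetic:

```python
import numpy as np
from scipy import stats

def flag_degs(case, ctrl, lfc_cut=1.0, p_cut=0.05):
    """Flag differentially expressed genes between two log2 expression
    matrices (genes x samples): |log fold-change| >= lfc_cut and p < p_cut."""
    logfc = case.mean(axis=1) - ctrl.mean(axis=1)
    _, pvals = stats.ttest_ind(case, ctrl, axis=1)
    return (np.abs(logfc) >= lfc_cut) & (pvals < p_cut)

rng = np.random.default_rng(0)
ctrl = rng.normal(8.0, 0.2, size=(3, 4))       # 3 genes, 4 control samples
case = ctrl + np.array([[1.5], [0.0], [0.1]])  # only gene 0 is truly shifted
print(flag_degs(case, ctrl))                   # [ True False False]
```

Note how the fold-change cut-off guards against genes with tiny but statistically detectable shifts (gene 2), while the p-value cut-off guards against large but noisy differences.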
We used the clusterProfiler package in R (Guangchuang Yu, 2011) to perform Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses on the DEGs. GSEA was performed on the gene expression matrix through the clusterProfiler package, and "c7.immunesigdb.v7.4.entrez.gmt" was selected as the reference gene set. A false discovery rate (FDR) < 0.25 and p < 0.05 were considered significant enrichment. Identification of candidate RNAs and development of an integrated mRNA-lncRNA co-expression signature The co-expression relationships between DEGs were investigated by Pearson's correlation measures, and modules were detected using the WGCNA package in R software (Langfelder & Horvath 2008). The scale-free topology fit index was set as 0.9 as a function of the soft-thresholding power. Edges with weight > 0.1 were selected to construct the co-expression network in Cytoscape (version 3.7.0) software (Broad Institute, Inc., Massachusetts Institute of Technology, and Regents of the University of California). RNA fluorescence in situ hybridization (FISH) The FISH assay was performed to detect and localize LINC01588 and NOP14-AS1 in SIMC of pemphigus patients. The probes for LINC01588 and NOP14-AS1 were synthesized by the ServiceBio Company (China) and labeled with fluorescent dye. The Servicebio™ FISH Kit (Servicebio Company, Wuhan, China) was used to carry out the RNA FISH assay according to the procedure provided by the manufacturer. Immunohistochemistry Skin tissues were fixed and stained with hematoxylin. For immunohistochemistry analysis, deparaffinized sections were washed with phosphate-buffered saline (PBS) and then treated with 3% hydrogen peroxide for 5 min. 
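The soft-thresholding and edge-selection steps described above can be mimicked on toy data. In this sketch (our own simplification, not the WGCNA package itself), adjacency is |Pearson r| raised to a soft-threshold power, and only edges above the 0.1 weight cut-off are kept:

```python
import numpy as np

def coexpression_edges(expr, genes, power=10, min_weight=0.1):
    """Weighted co-expression edge list: adjacency = |Pearson r|**power
    (soft thresholding); edges at or below min_weight are dropped."""
    corr = np.corrcoef(expr)  # genes x genes correlation matrix
    edges = []
    for i in range(len(genes)):
        for j in range(i + 1, len(genes)):
            w = abs(corr[i, j]) ** power
            if w > min_weight:
                edges.append((genes[i], genes[j], round(float(w), 3)))
    return edges

# Toy data: genes A and B co-vary across 20 samples; C is independent noise
rng = np.random.default_rng(2)
base = rng.normal(size=20)
expr = np.vstack([base, base + rng.normal(0, 0.1, 20), rng.normal(size=20)])
print(coexpression_edges(expr, ["A", "B", "C"]))  # only the A-B edge survives
```

Raising |r| to a power is the soft-thresholding trick that suppresses weak, likely spurious correlations while preserving strong ones, which is what pushes the network toward scale-free topology.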
The sections were blocked with 10% normal goat serum in Tris-HCl-buffered saline or horse serum in PBS for 1 h and then incubated with primary anti-NCAM1/CD56 antibodies (clone EP2567Y; Abcam, Waltham, USA) at a dilution of 1:200 for 1 h at room temperature or overnight at 4 °C. After washing, the sections were incubated with appropriate secondary antibodies (biotin-conjugated IgG; Servicebio, Wuhan, China). The staining intensity was measured in three fields of every section and quantified morphometrically using ImageJ software. Identification of DEGs in both SIMC and PBMC datasets SIMC expression profiling data (4 patients and 4 controls, Additional file 1: Table S1) for 40,173 lncRNAs and 20,730 protein-coding mRNAs were obtained by microarray analysis. PBMC expression data (4 patients and 4 controls, Additional file 2: Table S2) were obtained from our previous study [12]. After consolidation and normalization of the microarray data, DEGs were screened separately in each dataset. A total of 11,798 transcripts were differentially expressed at a significant level in SIMC, while only 159 transcripts were in PBMC, comparing patients to controls. Of the DEGs in SIMC, there were 6829 mRNAs (1864 up-regulated and 4965 down-regulated) and 4969 lncRNAs (3515 up-regulated and 1454 down-regulated). As for the DEGs of PBMC, there were 79 mRNAs (57 up-regulated and 22 down-regulated) and 60 lncRNAs (33 up-regulated and 27 down-regulated). DEGs are shown by volcano plots (Fig. 1a-d). Next, the common DEGs in SIMC and PBMC were highlighted, as both types of cells have been proven to produce Dsg autoantibodies; hence, the common DEGs may provide more information on pemphigus-driver genes. Surprisingly, only 17 genes overlapped between the two datasets (Fig. 1e), constituting 12.23% of DEGs in the PBMC dataset and only 0.14% of DEGs in the SIMC dataset. A correlation heatmap was then constructed for these 17 genes (Fig. 1f). 
Examining the statistically significant pair-wise correlations (p-value < 0.05 for the Pearson coefficient), one cluster of positively correlated genes was detected. The cluster included the mRNAs PPFIA3, GTAG1B, HPCA, TP73, and the lncRNAs G013396 and AC007003.1. Different immune cell subtype composition in peripheral blood and skin lesions We expected the differences in DEGs between SIMC and PBMC to stem mainly from the immune cell subtype composition. To better understand and characterize the differences between SIMC and PBMC, we used the CIBERSORT deconvolution algorithm to identify immune cell infiltration characteristics in pemphigus patients. SIMC from pemphigus patients had higher NK cell and neutrophil infiltration levels compared with controls, while PBMC showed no such difference (Fig. 2a, b, Additional file 3: Fig. S4). An increased number of infiltrating NK cells in pemphigus lesions was confirmed by immunohistochemistry: the dermal layer of pemphigus lesions had significantly more CD56-positive cells compared with normal skin (Fig. 2e). To further study the immune environment, we profiled chemokine and chemokine receptor expression from the global gene expression analysis (Fig. 2c). Notably, CCL19, CCL26, CCL27 and CXCR5 were the highly expressed chemokines and chemokine receptors detected in the pemphigus group (Fig. 2c). The correlation heatmap of the 16 types of immune cells revealed that B cell abundance had a positive correlation with Tfh cell and NK cell abundances (p < 0.05). Moreover, plasma cell (PC) abundance had a positive correlation with type I macrophage (M1) abundance, and the dendritic cell population was positively correlated with type II macrophage (M2) abundance (Fig. 2d). To further investigate the immunological mechanisms involved in pemphigus lesions, the GSEA package in R was used to analyze the SIMC global gene expression data. 
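The deconvolution idea behind CIBERSORT (inferring cell-type fractions from bulk expression against a reference signature matrix) can be illustrated with a toy example. CIBERSORT itself uses nu-support-vector regression; the sketch below substitutes non-negative least squares purely for illustration, and the marker-gene values are made up:

```python
import numpy as np
from scipy.optimize import nnls

def estimate_fractions(signature, bulk):
    """Estimate cell-type fractions from one bulk expression profile.

    signature: genes x cell_types reference matrix; bulk: genes vector.
    Non-negative least squares stands in for CIBERSORT's nu-SVR."""
    coef, _ = nnls(signature, bulk)
    total = coef.sum()
    return coef / total if total > 0 else coef

# Toy signature: 4 marker genes x 2 cell types (say, NK cells and B cells)
sig = np.array([[10.0, 0.0],
                [8.0, 1.0],
                [0.0, 9.0],
                [1.0, 7.0]])
true_fractions = np.array([0.7, 0.3])
bulk = sig @ true_fractions            # noise-free synthetic mixture
print(estimate_fractions(sig, bulk))   # recovers ~[0.7, 0.3]
```

Real data add noise, platform effects and unmodeled cell types, which is why CIBERSORT's SVR-based fit and its empirical p-values matter in practice; this sketch only conveys the linear-mixture assumption underneath.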
The results showed that the related genes were significantly enriched in immune cell activation processes and immune cell differentiation signatures (Fig. 2f). Among the top-scoring activated gene sets from the GSEA database (https://www.gsea-msigdb.org/gsea/index.jsp), the immune cell differentiation signatures GSE24574 (related to BCL6-hi Tfh cells) and GSE37532 (related to regulatory T cells' inability to mature) are shown (Fig. 2g). GO and KEGG analysis for the DEGs in SIMC and PBMC datasets To better illustrate the unique roles of SIMC, we performed functional enrichment analyses on the SIMC and PBMC datasets, respectively. The GO analysis showed that DEGs in both datasets were mostly enriched in biological process terms. The up-regulated genes in SIMC were mainly enriched in neutrophil aggregation, defense response to fungus, Toll-like receptor 4 binding and RAGE receptor binding, among others (Fig. 3a). In the PBMC group, on the other hand, over-represented genes were enriched in immunological processes including cellular response to interleukin-1, response to chemokines, CCR chemokine receptor binding, phagocytic vesicle lumen and endocytic vesicle, among others (Fig. 3b). KEGG pathway analysis showed that the related genes in the PBMC group were involved in viral infection-related pathways, the chemokine signaling pathway, cytokine-cytokine receptor interaction, and the interleukin-17 (IL-17) signaling pathway (Fig. 3d). In the SIMC group, the most enriched KEGG terms were fructose and mannose metabolism, the VEGF signaling pathway, the p53 signaling pathway, the PPAR signaling pathway and the IL-17 signaling pathway, among others (Fig. 3c). Identification of LINC01588 as a potential epigenetic regulator in Treg/Th17 balance LncRNAs have emerged as critical regulators in the immune system. To explore their contribution to the immune phenotype of SIMC and PBMC, we examined the expression levels of immune cell-specific lncRNAs.
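The GO/KEGG over-representation analyses above boil down to a hypergeometric tail test: given N background genes of which K belong to a term, and n DEGs of which k hit the term, how surprising is k or more? A minimal sketch (enrichment tools such as clusterProfiler add multiple-testing correction on top; the toy numbers are illustrative):

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """One-sided P(X >= k) when drawing n genes from N, K of which are term members."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Toy numbers: 20 background genes, 5 in the term, 5 DEGs, 3 of them in the term.
p = hypergeom_enrichment_p(N=20, K=5, n=5, k=3)
print(round(p, 4))  # 0.0726
```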
Twenty-two immune cell-specific lncRNAs were differentially expressed in skin lesions. These lncRNAs are related to dendritic cells, CD8+ T cells, T helper 17 cells and NK cells, among others, whereas in PBMC only TAPT1-AS1, which is specific to activated B cells, was found (Table 1). It is well established that pemphigus involves an IL-17-related immune response [13][14][15][16]. Therefore, we next focused on the Th17 cell-specific lncRNA LINC01588. The PPAR signaling pathway score was calculated by averaging the expression values of the genes in the PPAR signaling pathway; the PPAR signaling pathway gene set was acquired from the Nanostring platform. The result showed that LINC01588 had a significant negative correlation with the PPAR signaling pathway (p-value < 0.01) (Fig. 4a). FISH showed that LINC01588 was expressed at a higher level in pemphigus lesions (Fig. 4b). Higher expression of NOP14-AS1, one of the most up-regulated lncRNAs in pemphigus lesions, was also confirmed by FISH; furthermore, NOP14-AS1 was co-expressed with CD4 (Fig. 4e). To explore the relationships between lncRNAs and mRNAs, we next constructed lncRNA-mRNA and lncRNA-miRNA-mRNA networks (Fig. 4c, d) based on data from the ENCORI database [17,18]. WGCNA network module mining reveals pemphigus-associated patterns The microarray dataset contains expression data for over 20,000 genes. Merely focusing on DEGs may lead to overlooking potentially significant results. Therefore, we next integrated the expression matrices of all 16 samples in the SIMC and PBMC datasets and identified pemphigus-related preserved gene modules in the two datasets using weighted gene co-expression network analysis (WGCNA). After batch effect removal (Additional file 3: Fig. S1), the scale-free topology criterion was applied as follows: the soft threshold power β was set to 10, at which the scale-free topology model fit R² was maximized at 0.85 (Additional file 3: Fig. S2). A total of 19 modules were identified in the network.
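The PPAR pathway score used earlier in this section (per-sample mean expression of the pathway gene set) can be sketched as follows. The gene names and expression values are illustrative, not the Nanostring gene set or the study's data:

```python
def pathway_score(expr, gene_set):
    """Per-sample pathway score: mean expression of the pathway genes.

    expr maps gene name -> list of per-sample expression values.
    """
    members = [g for g in gene_set if g in expr]
    n_samples = len(next(iter(expr.values())))
    return [sum(expr[g][s] for g in members) / len(members)
            for s in range(n_samples)]

# Illustrative expression matrix over 4 samples (not real data).
expr = {
    "PPARG": [2.0, 1.5, 1.0, 0.5],
    "FABP4": [3.0, 2.5, 2.0, 1.5],
    "LINC01588": [0.5, 1.0, 1.5, 2.0],
}
score = pathway_score(expr, {"PPARG", "FABP4"})
print(score)  # [2.5, 2.0, 1.5, 1.0]
```

In this toy matrix LINC01588 rises across samples while the pathway score falls, mirroring the negative correlation reported above; the actual analysis would correlate the two vectors and test significance.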
The parameters were set as follows: a relatively large minimum module size (size = 30), a medium module detection sensitivity (deepSplit = 2), and a cut height for merging modules of 0.25; modules whose eigengenes were correlated above 0.75 were merged. Dendrogram clusters and heatmaps are shown in the attachment (Additional file 3: Fig. S3). The heatmaps of eigengene adjacency and module-trait relationships showed that, of the 19 modules, the brown module was positively correlated with the occurrence of pemphigus, while no module was correlated with sample source (Fig. 5a, b). A multi-dimensional scaling plot was generated to evaluate the expression of genes in the brown module (Fig. 5c). The brown module includes 1462 mRNAs and 1129 lncRNAs. To screen out hub genes, we calculated the intramodular connectivity of all genes in the module. Based on the multi-dimensional scaling plot, we defined genes with high gene significance (GS > 0.5) and high intramodular connectivity (IC > 0.8) as the main contributor genes of the brown module. Functional enrichment analysis was then used to investigate the module function. The bubble plot illustrates that the main contributor genes were markedly enriched in dendritic spine membrane, interleukin-28 receptor complex, dendrite membrane, cluster of actin-based cell projections, neuronal cell body membrane, cell body membrane and neuron projection membrane (adjusted p value < 0.01, Fig. 5d). We also constructed protein-protein interaction networks (PPIs) to explore molecular interactions in the brown module at the protein level. PPIs of these genes were built with STRING and visualized in Cytoscape; gene pairs with combined scores greater than 0.4 were selected for constructing the networks. Ten hub genes were chosen using the CytoHubba plugin: DNAJC17, PRSS3P2, POMGNT1, DTX4, G028733, GPHBP1, G011530, XLOC_001219, PAPD4, and G050942 (Fig. 5f).
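The main-contributor filter above (gene significance > 0.5 and intramodular connectivity > 0.8) is a plain threshold screen. In WGCNA these values come from the module eigengene analysis; the records below are made up for illustration:

```python
def main_contributors(genes, gs_cut=0.5, ic_cut=0.8):
    """Keep genes whose gene significance (gs) and intramodular
    connectivity (ic) both exceed their thresholds."""
    return [g["name"] for g in genes
            if g["gs"] > gs_cut and g["ic"] > ic_cut]

# Hypothetical brown-module records (values invented).
module = [
    {"name": "DTX4",     "gs": 0.62, "ic": 0.91},
    {"name": "DNAJC17",  "gs": 0.55, "ic": 0.85},
    {"name": "GENE_LOW", "gs": 0.40, "ic": 0.95},  # fails GS cut
    {"name": "GENE_MID", "gs": 0.70, "ic": 0.60},  # fails IC cut
]
print(main_contributors(module))  # ['DTX4', 'DNAJC17']
```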
In the enrichment results, genes such as IFNL and IFNLR (Additional file 4: Table S3) and the hub gene DTX4 were type I IFN-related. A type I IFN signature has been reported in many autoimmune diseases, such as systemic lupus erythematosus. Hence, we displayed the genes associated with type I IFN signaling pathways as a heatmap; however, most of these genes were not up-regulated in pemphigus (Fig. 5e). The type I IFN signaling pathway gene set used above was generated from the Nanostring panel database (https://www.nanostring.com/products/ncounter-assays-panels/panel-selection-tool/). Discussion The main characteristic of pemphigus is autoantibodies targeting Dsg1 and Dsg3. Current studies of pemphigus have largely focused on PBMC. Our group has previously reported that the local immune response in pemphigus lesions may play an important role in pemphigus pathogenesis; yet a more advanced understanding of the altered biological pathways and molecular mechanisms in SIMC is needed to illustrate this role. Mounting evidence indicates that skin-resident immune cells play an important role in maintaining skin immune homeostasis [19]. Crosstalk between innate and adaptive immune cells has become a research hotspot: they co-operate to achieve a finely balanced state of the immune system that maintains tolerance to self-antigens. To explore this aspect, we illustrated the immune landscape of pemphigus blood and skin by applying CIBERSORT, a computational approach for inferring leukocyte and lymphocyte representation in bulk transcriptomes. The results showed that pemphigus lesions had a higher neutrophil infiltration level, in line with the GO enrichment results (Fig. 3a). By investigating the correlations between different cell types, we found that M1 abundance correlated with PC abundance. Xu et al. [20] identified macrophages as important players in the induction of PC terminal differentiation through the secretion of CXCL10.
Our team has reported CD138+ PCs in pemphigus lesions and confirmed by in vitro experiments that they were able to secrete Dsg-specific antibodies [5]. These findings indicate that M1 might be a potential catalyst of pathological progression and should be a highlight of further studies. In pemphigus lesions, we found a significantly increased infiltration of NK cells, and their abundance was positively correlated with B cell abundance. NK cells have traditionally been considered innate immune cells, but they have recently been shown to mediate adaptive immunity and to possess vaccination-dependent, antigen-specific and long-lived immunological memory characteristics [21]. NK cells exhibit an immunoregulatory function in the pathogenesis of myasthenia gravis (MG) [22]: the killing effects of NK cells on CD4+ T cells and Tfh cells were impaired in MG patients, promoting the differentiation and activation of Tfh cells. The role of pemphigus lesion-infiltrating NK cells needs further elucidation, and development of a bispecific antibody therapy may be worth pursuing. Bispecific antibodies are monoclonal antibodies that target two different epitopes [23]: one end binds to target cells, such as tumor cells or, in this case, autoantibody-producing B cells, while the other end binds to killer cells such as T cells or NK cells. Even though rituximab (RTX) therapy has been shown to be effective, patients with a high baseline frequency of memory class-switched IgG B cells (25% among Dsg3-specific B cells) still had active disease after RTX treatment [24]. Meanwhile, patients also face the risk of severe infection due to immunosuppression. An alternative treatment such as a bispecific antibody that activates local NK cells to kill Dsg-specific Ig+ B cells would therefore be promising. Chemokines and their receptors could be major contributors to the enriched infiltration of immune cells in pemphigus lesions.
In this study, the global gene expression analysis showed that the most highly expressed chemokine was CCL27 and the most highly expressed chemokine receptor was CXCR5. CCL27 (CTACK) is an inflammatory chemokine that binds to CCR10 and is associated with homing of memory T cells to sites of inflammation. Bernhard et al. established the pivotal role of CCL27-CCR10 interactions in T cell-mediated skin inflammation using mouse models: their data showed that lymphocytes accumulate at sites of CCL27 injection and that neutralization of the CCL27-CCR10 interaction by administration of anti-CCL27 neutralizing antibodies impairs lymphocyte recruitment [25]. The accumulated body of evidence indicates that skin-associated immunosurveillance may be influenced by the CCL27/CCR10 interaction; yet its role in pemphigus remains elusive. CXCR5 is mainly expressed on the cell surface of B cells and Tfh cells. Our previous study described the formation of ELSs in pemphigus lesions, structures constituted by T cells and B cells that serve as local factories for autoantibody production. In that study, we measured the mRNA expression levels of selected chemokines and found CCL5 and CCL20 to be highly expressed in pemphigus lesions [5]. At this stage of understanding, we believe many chemokines and their receptors are involved in the enrichment of immune cells in pemphigus lesions. Nevertheless, it is important to note that the present evidence relies mostly on transcriptomic data; more experiments at the protein level need to be conducted to complete the overall picture of skin-homing factors in pemphigus. We also compared the functional analysis results of SIMC with those of PBMC to better understand the unique roles of SIMC. The GO analysis showed that PBMC had an over-representation of inflammatory cytokines and chemokines, while SIMC had a signature of neutrophil aggregation and other metabolism-related pathways.
These results indicated that SIMC and PBMC have vastly different functional phenotypes. Interestingly, SIMC and PBMC shared similarities in the KEGG results: the IL-17 signaling pathway was over-represented in both SIMC and PBMC, consistent with previous reports [13][14][15][16]. Increasing evidence has shown that Dsg1/3-specific autoantibody production may be promoted by IL-17+ T cells. The cellular response to IL-1 term was enriched in PBMC. IL-1 is a strong inducer of innate IL-17, which in turn recruits IL-1-secreting myeloid cells [26], suggesting that a positive feedback cycle may exist in pemphigus. Holstein et al. [13] showed that neutrophil aggregation was the most significantly enriched GO term in pemphigus skin lesions, which was also confirmed by the GO analysis of our study. Neutrophils have also been reported to produce IL-17 [27][28][29]; however, whether neutrophil aggregation contributes to local IL-17 production needs further investigation. Many differentially expressed lncRNAs were also screened out using bioinformatics techniques. Recent evidence has shown that lncRNAs are expressed in a highly lineage-specific manner and control the differentiation and function of both innate and adaptive cell types [30]. The CIBERSORT and GSEA results demonstrated that SIMC has a distinct immune cell subtype composition and immunophenotype. We suspected that lncRNAs are likely to function as epigenetic regulators in SIMC and contribute to these differences. For this reason, we examined the expression levels of immune cell-specific lncRNAs and constructed a ceRNA network. Immune cell-specific lncRNAs were defined by Zhou et al. [31]. Twenty-three immune cell-specific lncRNAs were found to be differentially expressed in pemphigus SIMC compared with healthy controls.
Our correlation results showed that the expression level of the Th17-specific lncRNA LINC01588 was negatively correlated with the PPAR score, meaning that LINC01588 may be a negative regulator of the PPAR signaling pathway, which is required for Treg cell maturation. We then built an mRNA and lncRNA expression network using WGCNA to identify a disease-associated gene signature. Of the 19 modules, only the brown module was related to disease status. The main contributor genes (GS > 0.5 and IC > 0.8) in the module were further analyzed: these genes were functionally annotated with GO terms, and ten hub genes were screened out using CytoHubba based on their protein-protein interactions. Type I IFN-related genes were enriched, and it is well established that the type I IFN response is highly correlated with autoimmune diseases such as cutaneous lupus [32]; we therefore deduced that the type I IFN signaling pathway might play a role in pemphigus. However, the heatmap showed that most genes related to the type I IFN signaling pathway were not up-regulated in SIMC, suggesting that type I IFN signaling is probably not the key pathogenic mechanism. In this study, we sought to further explore the role of immune cell infiltration in pemphigus and to identify novel genes in its pathogenesis. However, there are some limitations to our study. Firstly, this study had a relatively small sample size, given that pemphigus is a rare disease, and for the same reason only limited laboratory experiments were conducted to validate the results. Secondly, the exact mechanisms of interaction between immune cells and of the immune reactions regulated by lncRNAs need to be further investigated. Lastly, the bioinformatic analyses were based on limited transcriptomic data; therefore, our findings still need verification through in vitro and in vivo experiments. Conclusion Overall, the present study represents the first transcriptional profiling of SIMC.
Our study is important in the context of a prior report, illustrating the unique gene expression pattern and immune landscape of pemphigus lesions. We showed crosstalk between innate and adaptive immune cells, such as macrophages and plasma cells. Our study is also the first to demonstrate an increased infiltration of NK cells in pemphigus lesions. In addition, we found that LINC01588 was negatively correlated with PPAR signaling, which may be related to the pathogenesis of pemphigus.
v3-fos-license
2019-03-23T13:03:00.229Z
2018-05-01T00:00:00.000
84846789
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.21315/mjms2018.25.3.6", "pdf_hash": "a01ac9a473d254bb5f4bcc7fa71c602c21e319b8", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2047", "s2fieldsofstudy": [ "Medicine", "Education" ], "sha1": "a01ac9a473d254bb5f4bcc7fa71c602c21e319b8", "year": 2018 }
pes2o/s2orc
Dietary Habits and Lifestyle Practices among University Students in Universiti Brunei Darussalam Background Young adults are at risk of developing obesity, especially when transitioning into university life, as they become responsible for their daily eating and lifestyles. This study estimates the prevalence of overweight/obesity and explores the eating patterns and lifestyle practices of university students. Methods A cross-sectional study was conducted at Universiti Brunei Darussalam (UBD). A total of 303 students participated. Data were collected from January to April 2016. Self-designed questionnaires comprised questions pertaining to current weight, self-reported height data, information on eating habits, exercise and knowledge of the food pyramid. The collected data were used to compare and contrast the eating habits and lifestyle practices of overweight/obese students with those of non-overweight/obese students. Results The prevalence of overweight/obesity was 28.8% (95% CI: 24.0%, 34.0%). The majority ate regular daily meals, but more than half skipped breakfast. Frequent snacking, fried food consumption at least three times per week and low daily intake of fruits and vegetables were common. The frequency of visits to fast food restaurants was significantly higher among the overweight/obese. Only 25.4% of the students exercised at least three times per week. Almost all students were aware of balanced nutrition and the food pyramid. Conclusions Most university students had poor eating habits, although the majority had good nutrition knowledge. By way of recommendation, the university is encouraged to provide a multi-disciplinary team specialising in health promotion that includes nutrition and physical activity programmes to increase awareness among the university students. Introduction In 2005, a global burden of obesity study estimated that 33.0% of the adult population (1.3 billion people) was overweight/obese.
It predicted that this percentage would likely increase to 57.8% (3.3 billion people) by the year 2030 if the trend persists (1). In Brunei Darussalam, the reported prevalence of obesity increased from 12% in 1996 to 27.2% in 2011. This alarming rise has attracted the attention and concern of the public because obesity is a recognised risk factor for numerous non-communicable diseases (NCDs) such as diabetes mellitus, hypertension, cardiovascular diseases and stroke (2). The problems associated with obesity affect not only the adult population but also the youth. An overweight child or teenager is at a higher risk of being overweight/obese as an adult (3) and of developing adult diseases. Although the onset and development of obesity are most apparent during childhood (5), university students also undergo a critical period when their behaviours are conducive to change, often resulting in weight gain (6). The study was conducted at the university, where a total of seven faculties were involved; data were collected from the students attending each faculty for three days. University students of all ages, excluding international students, were eligible for participation. Participation was voluntary with informed consent. A total of 303 university students participated in the study. The sample size was calculated using the formula for estimating a single proportion, n = Z^2 p(1 - p) / d^2 (11), where n is the minimum sample size required in the study, Z is the area under the normal curve corresponding to the desired confidence interval used in this study, i.e. 95% CI (1.96), p is the expected prevalence and d is the margin of error. Research Instruments All research instruments, including the Participant Information Sheet (PIS), consent forms and questionnaires, were available in Malay and English. The self-administered structured questionnaire (developed based on an adaptation from previous studies and a literature search) consisted of 31 multiple choice questions (5,6,10,12,13).
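The single-proportion sample-size formula above can be computed directly. The exact p and d used by the authors are not stated in the text, so the inputs below (the 27.2% national prevalence mentioned earlier and a conventional 5% margin of error) are an assumption; they happen to reproduce a minimum sample of roughly 300:

```python
import math

def sample_size(p, d, z=1.96):
    """Minimum n to estimate a proportion p within margin d at ~95% confidence:
    n = z^2 * p * (1 - p) / d^2, rounded up."""
    return math.ceil(z * z * p * (1 - p) / (d * d))

# Assumed inputs: p = 0.272 (2011 national prevalence), d = 0.05.
print(sample_size(0.272, 0.05))  # 305
```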
Apart from obtaining sociodemographic information such as age, gender, ethnicity, current study semester, faculty and accommodation status, the questions were designed to explore the eating patterns and lifestyles of university students. Personal views on dieting and self-body image were also solicited, together with questions exploring their knowledge about balanced nutrition, eating patterns, lifestyles and daily exercise. A study conducted at the International Medical University of Malaysia found that, out of 240 clinical students, 72 were either overweight or obese (based on the World Health Organization body mass index cut-offs for the Asian population, i.e. BMI > 23.0 kg/m2) (7). The emerging practice of dieting for weight loss and image purposes among university students (8) and its effects on university students' behaviours require public attention. College weight gain is likely during the transition into university life, which is a critical period when young adults' behaviours, including dietary habits, are conducive to change as they gain independence in making food choices (3,4). These groups of individuals are at higher risk of developing unhealthy eating behaviours with inadequate nutrient intake, as shown by Gan et al. (5). Some of these behaviours include irregular meals, not eating breakfast, reduced fruit and vegetable intake and increased consumption of fried food (6). Apart from the change in dietary habits, poor exercise habits, bad time management and increasing stress from school work also contribute to weight gain (9). Moreover, the opening of numerous fast food stores, cafés and restaurants provides university students more opportunities to dine out instead of consuming self-prepared meals (10). Improper eating habits developed during this stage of life can continue into adulthood.
Studying the changes in dietary habits and lifestyle practices among university students can help educate them on the importance of preventing the early development of obesity by adopting healthy lifestyles. It is hoped that this study can increase awareness of healthy lifestyles and eating among young adults, thereby reducing the risk of developing chronic diseases. This research estimates the prevalence of overweight/obesity among students in a university in Brunei Darussalam, compares the eating habits and lifestyle practices of overweight/obese university students with those of non-overweight/obese university students, and explores students' views about balanced nutrition, dieting and self-body image. Study Design, Population and Sample A cross-sectional study was conducted through self-administered questionnaires in Malay and English from February to March 2016. Demographic Data of Respondents A total of 303 university students were recruited during the study period, of which 83 (27.4%) were male and 220 (72.6%) were female. The response rate among those approached to participate was 95.3%, with a total of 15 refusals (no reason was given). Table 1 presents the sociodemographic data of the participants. Eating Habits of University Students Out of 303 university students, 226 (74.6%) reported eating meals regularly on a daily basis, with 42.6% (129 out of 303) consuming breakfast daily. The majority (52.5%) consumed three meals per day, while 101 (33.3%) of the university students consumed fewer than three meals and 43 (14.2%) more than three meals. Many of the participants had a habit of snacking regularly and consumed fried food at least 3-5 times per week (82.2% and 60.7%, respectively). Only 23.4% (71 out of 303) and 9.2% (28 out of 303) of participants consumed vegetables and fruits every day, respectively, which is relatively low. Table 3 compares eating habits between the non-overweight/obese and overweight/obese populations.
The number of regular daily meals differed significantly between these two groups (P = 0.011). Lifestyle Practices of University Students Most students (80.5%) would sometimes prepare/cook their meals, but very few (24.1%) would eat a variety of food (rice, meat, vegetables and fruits) as required for a balanced diet. The frequency of eating at restaurants and fast food outlets is presented in Table 4. Prior to conducting the research, to ensure the validity and reliability of the self-designed questionnaire, both the English and Malay versions were tested on ten randomly selected university students to assess their comprehensibility. No significant amendments were made based on the pre-test. An electronic weighing scale (brand/model: Tanita/HD-382, Australia) was used to measure participants' weight (in kilograms) without heavy clothing (e.g. jackets) and accessories. The weight values, along with self-reported height (in centimetres) values, were used to calculate and classify body mass index [BMI] (overweight is a BMI greater than or equal to 25, and obesity is a BMI greater than or equal to 30) (11). Definition of terms: Snacking refers to the intake of food between regular meals. Regular exercise refers to physical activity at least 3-4 times per week. Dieting refers to restrictions in daily calorie consumption associated with unbalanced nutrient intake. Knowledge of the food pyramid refers to an understanding of the main components (carbohydrate, protein, vitamins, fat and oil) of the food pyramid as well as the recommended daily portions. Ethical approvals for this study were received from the Medical and Health Research Ethics Committee (MHREC), Ministry of Health, Brunei Darussalam, and the Ethics Committee of the PAPRSB Institute of Health Sciences (IHSREC), Universiti Brunei Darussalam. Statistical Analysis Data collected from the questionnaires were entered and analysed using IBM SPSS Statistics version 21.0 for Windows.
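The BMI calculation and WHO international classification described in the methods above (overweight ≥ 25, obese ≥ 30) can be sketched as:

```python
def bmi(weight_kg, height_cm):
    """Body mass index from weight in kg and height in cm."""
    h = height_cm / 100.0
    return weight_kg / (h * h)

def who_category(b):
    """WHO international BMI categories for adults."""
    if b >= 30:
        return "obese"
    if b >= 25:
        return "overweight"
    if b >= 18.5:
        return "normal"
    return "underweight"

print(round(bmi(80, 170), 1), who_category(bmi(80, 170)))  # 27.7 overweight
```

The lower Asian cut-off mentioned in the paper (BMI > 23.0) would simply replace the 25 threshold, which is why the authors note the international cut-offs may underestimate prevalence in this population.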
The statistical analysis included estimation of the proportion of university students who were overweight/obese with a 95% confidence interval (CI) and chi-square tests to compare the eating habits and lifestyle practices of overweight/obese university students with those of non-overweight/obese university students. Most students (72.3%) ate with family at home at least three times weekly (Table 4). As many as 58.7% (178 out of 303) of all participants preferred eating cheap food over healthy/nutritious food. This was significantly (P = 0.042) true for the overweight/obese population (67.8%). The majority (70.6%) ate more when feeling stressed. Regarding physical activity, 78.5% (238 out of 303) walked around the campus when going to classes. However, regarding the frequency of weekly exercise, only ten students (3.3%) exercised daily, while the others exercised three to four times per week (22.1%), one to two times (36.6%), or rarely exercised (38.0%). Dieting, Balanced Nutrition and Self-Body Image The majority of students were aware of the food pyramid (96.4%) and the concept of balanced nutrition (96.0%) (Table 5). Although the majority (82.5%) were concerned about body size and physical appearance, slightly less than half (47.9%) had tried dieting. The main reason given for dieting (34.7%) (those who never dieted were asked why they think other people diet) was to be strong and healthy. Skipping breakfast has been associated with weight gain and a higher BMI value (13,14). In this study, although most participants ate meals regularly, more than half did not eat breakfast daily. This result is similar to the findings of a Malaysian study (6), where 56.1% reported not consuming breakfast daily. It is possible that meal skipping caused frequent snacking, as the majority of the participants admitted to snacking between regular meals.
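The 95% confidence interval for the overweight/obesity proportion can be approximated with the normal (Wald) interval; with 28.8% of 303 students this reproduces the reported 24.0-34.0% band. The paper does not state which interval method SPSS applied, so the Wald form is an assumption:

```python
import math

def wald_ci(p, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a sample proportion."""
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

lo, hi = wald_ci(0.288, 303)
print(round(lo, 3), round(hi, 3))  # 0.237 0.339
```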
An association between daily meal frequency and BMI status was identified in our study, where a higher proportion of overweight/obese participants (23.0%) consumed more than three meals daily (Table 3). Hakim et al. (15) emphasised that skipping meals leads to more eating throughout the day, including frequent snacking, which can subsequently result in weight gain. Minimal daily intake of fruits and vegetables combined with increased fried food consumption is common among university students (6,12,15). Such a trend was also observed among our participants (Table 3). Discussion In this study, 28.8% of participants were overweight or obese, and 10.6% of the population was obese. Although this percentage is lower when compared to the Bruneian adult obesity rate of 27.2% reported in the 2011 National Health and Nutritional Status Survey (2), the university obesity rate can still be worrying considering the younger age of the participants. The prevalence of overweight/obesity was similar among male and female students, with a difference of only 0.3% (Table 2). This finding differs from the study conducted at the International Medical University of Malaysia (12), where male university students were more often overweight/obese (by 15.3%). Eating regular meals with daily breakfast is considered healthy eating behaviour, and several studies have concluded that the habit of skipping breakfast is associated with weight gain. It is important to keep a balance between energy intake and energy expenditure, as disruption of this balance can lead to obesity (14,17). Physical activity is also an important determinant of weight status; a combination of low physical activity and poor dietary habits increases the risk of overweight or obesity (17).
In this study, most participants adopted the habit of walking around the campus, but only 25.4% (77 out of 303) of participants engaged in physical exercise at least three times per week. According to the WHO guidelines (18), physical activity of moderate intensity for at least 150 minutes throughout the week (equivalent to 30 min/day for five days) is recommended for ages 18-64 years. The majority of the participants did not meet these requirements. Although some of the reported eating patterns were unhealthy, the majority of students had good knowledge of the food pyramid and balanced nutrition. Due to stress, heavy workloads and lack of time, university students tend to make poor food choices (7); hence, it is challenging for them to adhere to the food pyramid. The phenomenon of nutrition transition is emerging globally, in which diets are shifting away from home food intake towards dependence on outside processed food that is high in fats, salt and sugar (16). The majority of our respondents preferred eating lunch in the campus cafeteria (51.1%) rather than bringing lunch from home (11.2%), indicating their reliance on outside food. Furthermore, they resorted to eating instant noodles when required to cook their own meals, while few would eat a balanced meal including a variety of food (i.e., rice, meat, vegetables and fruits). The frequencies of visits to fast food restaurants and cafés were significantly higher in the overweight/obese population, suggesting consumption of more food that is high in fat, salt and sugar. Hakim et al. (15) believed that increasing accessibility to fast food stores is closely linked to overweight or obesity, as there is an associated risk of consuming high-energy food, sweetened drinks and fatty food with a low intake of nutritious food. Fast food is a quick and cheap choice for university students, especially when time is limited and the university workload is large.
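The WHO benchmark cited above (at least 150 minutes of moderate-intensity activity per week) is a simple weekly sum, which can be sketched as:

```python
def meets_who_guideline(minutes_per_day, threshold=150):
    """True if the week's moderate-intensity activity minutes reach the
    WHO minimum of 150 for adults aged 18-64."""
    return sum(minutes_per_day) >= threshold

# 30 min/day on five days meets the guideline; three days does not.
print(meets_who_guideline([30, 30, 30, 30, 30, 0, 0]))  # True
print(meets_who_guideline([30, 30, 30, 0, 0, 0, 0]))    # False
```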
The majority of the respondents preferred cheap food to healthy/nutritious food (P = 0.042). The transition from home food to increased reliance on outside food such as fast food was common among the respondents, especially among the overweight/obese population. Physical activity was low among students and below WHO-recommended levels. Therefore, the university should provide a multi-disciplinary team to support nutrition and physical activity programmes to increase awareness among the university students (21). Physical activity programmes on campus may have a positive impact on students' behaviour towards exercise. This study reported that female participants were more concerned about physical size and appearance, and slightly more females tried dieting compared to males. Similar results were also seen in a previous study (9), where being overweight was more of a fear among female students. This study was subject to a number of limitations. As a list of the names of all attending students was not available, convenience sampling was used instead of random sampling, hence limiting the validity of the data. However, the response rate of those approached to participate in the study was high (95.3%). Although self-reported height values could be under- or overestimated by participants, some studies (19,20) have shown that BMI calculations based on self-reported data were still able to classify most of the population into the correct BMI categories. The BMI classification used in this study was based on the WHO international cut-off values. However, considering the WHO Asian BMI cut-off values, the prevalence of overweight/obesity among students may be underestimated. In regard to the questionnaires, no quantitative data (such as daily food portions, calorie intake and duration of daily exercise) were available to identify the association between the lifestyle practices and BMI status of university students.
In addition, the type of food and snacks that university students tend to eat on a daily basis was not identified. Psychological factors associated with overweight/obesity leading to students' desire for weight-loss practices were also not explored. Conclusion The prevalence of overweight/obesity among this population of university students was 28.8% and affected males and females equally (28.9% versus 28.6%). A higher proportion of females were concerned about body size and physical appearance; hence, dieting was more common among them. Although most university students reported having good knowledge of the food pyramid and balanced nutrition, the majority did not adhere to or practise such healthy eating habits. Most of them skipped breakfast, snacked frequently, consumed fried food often and had a low intake of daily fruits and vegetables.
A matrisome RNA signature from early-pregnancy mouse mammary fibroblasts predicts distant metastasis-free breast cancer survival in humans During pregnancy, the mouse mammary ductal epithelium branches and grows into the surrounding stroma, requiring extensive extracellular matrix (ECM) and tissue remodelling. It therefore shows parallels to cancer invasion. We hypothesised that similar molecular mechanisms may be utilised in both processes, and that assessment of the stromal changes during pregnancy-associated branching may depict the stromal involvement during human breast cancer progression. Immunohistochemistry (IHC) was employed to assess the alterations within the mouse mammary gland extracellular matrix during early pregnancy when lateral branching of the primary ductal epithelium is initiated. Primary mouse mammary fibroblasts from three-day pregnant and age-matched non-pregnant control mice, respectively, were 3D co-cultured with mammary epithelial cells to assess differences in their abilities to induce branching morphogenesis in vitro. Transcriptome analysis was performed to identify the underlying molecular changes. A signature of the human orthologues of the differentially expressed matrisome RNAs was analysed by Kaplan–Meier and multi-variate analysis in two large breast cancer RNA datasets (Gene expression-based Outcome for Breast cancer Online (GOBO) and Kaplan–Meier Plotter), respectively, to test for similarities in expression between early-pregnancy mouse mammary gland development and breast cancer progression. The ECM surrounding the primary ductal network showed significant differences in collagen and basement membrane protein distribution early during pregnancy. Pregnancy-associated fibroblasts (PAFs) significantly enhanced branching initiation compared to age-matched control fibroblasts.
A combined signature of 64 differentially expressed RNAs, encoding matrisome proteins, was a strong prognostic indicator of distant metastasis-free survival (DMFS) independent of other clinical parameters. The prognostic power could be significantly strengthened by using only a subset of 18 RNAs (LogRank P ≤ 1.00e−13; Hazard ratio (HR) = 2.42 (1.8–3.26); p = 5.61e−09). The prognostic power was confirmed in a second breast cancer dataset, as well as in datasets from ovarian and lung cancer patients. Our results describe for the first time the early stromal changes that accompany pregnancy-associated branching morphogenesis in mice, specify the early pregnancy-associated molecular alterations in mouse mammary fibroblasts, and identify a matrisome signature as a strong prognostic indicator of human breast cancer progression, with particular strength in oestrogen receptor (ER)-negative breast cancers. Background Stromal-epithelial interactions control epithelial cell growth during normal organ development and cancer progression [1]. Fibroblasts, as a major stromal cell type, play key roles in controlling development and cancer progression-associated histological changes [2]. While normal fibroblasts can suppress tumour growth, cancer-associated fibroblasts (CAFs) have been reported to support or even induce growth, invasion and metastasis [2,3]. However, CAFs are highly heterogeneous and can even suppress tumour growth [4,5]. No single biomarker is so far described that would either clearly define CAFs or identify a specific tumour suppressive or supportive role [5]. Their mechanisms of action are accordingly also highly complex, affecting growth factor-induced proliferation, remodelling of ECM and neo-vascularisation [2]. Thus, our understanding of how fibroblasts control cancer progression is still limited. 
It has therefore been suggested that in order to fully comprehend the role(s) of CAFs during cancer progression one first needs to understand the roles of normal fibroblasts in controlling epithelial cell growth [6]. The developing mouse mammary gland is an excellent model system to study stromal influence on epithelial growth as it develops mainly postnatally, showing significant morphological changes. Furthermore, remodelling processes similar to those seen during breast cancer progression are also observed during normal mammary branching morphogenesis [7,8]. At puberty, a rudimentary epithelium grows into the surrounding mammary fat pad helped by highly proliferative terminal end buds (TEB), forming a branched primary ductal network behind them [9]. While ECM at the growth front of TEBs mainly contains a thin layer of hyaluronic acid-rich basement membrane (BM), ECM of the neck region is defined by a thick surrounding layer of fibrous collagenous BM/ECM [7,9], which continues along the developing milk ducts. During pregnancy, these ducts form lateral side branches and alveoli, a process which requires remodelling of the surrounding ECM; specifically, breakdown of the existing BM/collagen sheath and formation of a new BM and collagen network [10]. This process could therefore be described as 'controlled invasion'. Epithelial-stromal interactions are crucial for these morphological changes to occur [1] and fibroblasts play an essential part [10,11]. However, our understanding of the molecular processes involved and how fibroblasts enable ductal branching remains limited. By enzymatically isolating TEB and mammary ducts, we previously identified the involvement of axon-guidance proteins [12] as well as the involvement of BM proteins fibulin-2 (FBLN2) and versican (VCAN) in pubertal ductal development [13,14].
We also recently established a method which enabled us to carry out whole-genome transcriptome analysis on RNA from very small populations of freshly isolated, non-cultured mammary fibroblasts using linear amplification [15]. Here we used this method to characterise the mammary fibroblast transcriptome of 3-day-pregnant and age-matched control mice. We focussed on RNAs encoding proteins of the matrisome, which comprises core ECM proteins, ECM-modifying enzymes, growth factors, and matricellular proteins [16]. The identified RNAs describe a network of growth factor activation, protease expression, collagen sheath breakdown, and induced ECM/BM formation, thereby identifying potential new control mechanisms for epithelial outgrowth. Consistent with our hypothesis, this pregnancy-associated RNA signature of matrisome genes was able to significantly predict distant metastasis-free and recurrence-free survival in a dataset of 1881 human breast cancers [17] independent of other clinical parameters, as well as progression-free survival in patients with lung or ovarian cancer. Our data therefore shed important new light on potential regulators of breast cancer progression and provide a potential new matrisome-based prognostic marker for the risk of developing distant metastases. Animal husbandry Mice (strain C57BL/6) were kept in conventional M3 cages bedded with wood chips and paper nesting material in a temperature-controlled environment at 21 ± 1 °C and 45-55% humidity on a 12-h light/dark cycle. Food and water were provided ad libitum. Mice were allowed a 7-day acclimatisation period after arrival on site prior to experimental use. Fibroblast enrichment Primary mammary fibroblast-enriched extracts were isolated as previously described [15].
Briefly, pregnant mice were sacrificed by a schedule 1 method three days after first plug formation together with their virgin age-matched control littermates (12-13 weeks of age). Thoracic and inguinal mammary glands from one flank were dissected and collected into DMEM-F12 medium (Life Technologies, Paisley, UK), while glands from the other flank were processed for paraffin-embedding. Glands were finely minced and digested with 2.5 mg/ml (w/v) collagenase type II (Sigma-Aldrich Ltd., St. Louis, USA) and trypsin (0.2%) (Sigma) in DMEM-F12 medium for 30 min, using a shaking incubator set at 37 °C and 100 rpm. Free genomic DNA was removed using 2 units/ml DNase I (Sigma), and cells were centrifuged and resuspended thoroughly in serum-free DMEM-F12 medium. Epithelial cells were separated from other cells by a series of pulsed centrifugations (5-6 times) and the fibroblast-containing supernatant was then incubated in a 100 mm tissue culture dish for 1 h at 37 °C and 5% CO2. Fibroblasts that stuck to the plate were washed 3 times and were either cultured or further purified for RNA extraction and amplification. For microarray analysis, cells were gently trypsinised using 0.05% Trypsin/EDTA (Thermo Fisher Scientific, Waltham, MA, USA) and CD45-positive contaminants were removed using a CD45-Biotin antibody (Biolegend, San Diego, USA, Clone 30-F11) in conjunction with an EasySep Biotin Selection system (Stem Cell Technologies, Vancouver, Canada). The CD45-negative fibroblasts were collected by centrifugation and directly used for RNA extraction. Epithelial-fibroblast co-culture Growth factor-reduced Matrigel™ (BD Biosciences, Oxford, UK) was thawed on ice overnight.
50 μl were added to each well of a 96-well plate to cover the surface of the well and incubated at 37 °C for 30 min to solidify. 5000 cells per well (EpH4 cells and fibroblasts 1:1) were re-suspended in DMEM-F12/serum-free media mixed with 5% Matrigel and plated on top of the Matrigel layers. Cells were grown in triplicate in serum-free media overnight. Media was then replaced every other day with DMEM-F12/5% FBS for 7-8 days before microscopic analysis. For structural analysis, spheroids were categorised according to their size and degree of branching into 5 distinct types: small unbranched, small branched, large unbranched, large branched and large highly branched (structures with secondary branching). For quantification, a 10× bright-field objective was employed to count the structures directly under the microscope in 4 representative fields per condition in triplicate experiments to avoid local differences within the 3D culture. Representative pictures were captured with a 20× objective for reference. Statistical analysis was performed on the mean percentage of the different structures between the conditions using the Student's t-test function in Excel. RNA isolation and amplification RNA was extracted and amplified as described previously [15], using Direct-zol™ RNA MiniPrep (Zymo Research, Irvine, USA) for cultured cells (> 10,000 cells) or Direct-zol™ RNA MicroPrep (Zymo Research) for freshly isolated fibroblasts (500-2,000 cells), as per the manufacturer's instructions. RNA was eluted in RNase-free water and quantified using a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific). Quality was assessed using an Agilent Bioanalyser (Agilent Technologies). An RNA 6000 Pico kit (Agilent, South Queensferry, UK) was used for quantification and assessment of RNA from small cell numbers.
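The five-way spheroid categorisation used for scoring above can be expressed as a small helper for anyone automating such counts. This is an illustrative sketch only: the size cut-off is a hypothetical placeholder, as the paper does not state a numeric threshold.

```python
def classify_spheroid(diameter_um, primary_branches, secondary_branches,
                      large_cutoff=100.0):
    """Bin a 3D-culture structure into one of the five scoring categories:
    small/large unbranched, small/large branched, large highly branched.

    The 100 um size cut-off is an assumed placeholder, not a value
    taken from the paper."""
    size = "large" if diameter_um >= large_cutoff else "small"
    if primary_branches == 0:
        return f"{size} unbranched"
    if size == "large" and secondary_branches > 0:
        return "large highly branched"
    return f"{size} branched"
```

Applied per structure across the 4 fields per condition, the resulting category counts feed directly into the per-condition percentages that were compared statistically.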
RNA amplification was performed using the Ovation® PicoSL WTA System V2 kit (NuGEN, San Carlos, USA) as per the manufacturer's instructions, with slight modifications as described previously [15]. Briefly, equal volumes of RNA samples from individual pregnant and non-pregnant mice (minimum concentration 500 pg in 1 µl of maximum volume) in 5 µl of nuclease-free water were subjected to a cycle of 1st strand synthesis followed by a cycle of 2nd strand synthesis. Double-stranded product was separated from excess primers using the included magnetic bead-based system. For amplification, purified double-stranded products were subjected to the SPIA amplification system (with RNase H and DNA polymerase) and the amplified ss-cDNA product was further purified using a PCR purification kit (QIAGEN Ltd., Manchester, UK) as per the manufacturer's instructions. The pure amplified product was reconstituted in nuclease-free water and quantified using a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific). Quality and distribution of the RNA and subsequent cDNA curves were assessed using an Agilent Bioanalyser (Agilent Technologies). Microarray hybridisation After amplification, cDNA samples were labelled using an Encore Biotin Module (NuGEN) as per the manufacturer's instructions. 1.5 µg of cDNA was subjected to uracil-DNA glycosylase (UNG) treatment to remove uracil bases incorporated during the amplification process, followed by one step of biotin incorporation. Samples were purified with a PCR purification kit (QIAGEN Ltd.) and kept at − 20 °C until hybridisation. cDNA samples were hybridised to MouseWG-6 v2.0 Expression BeadChip arrays (Illumina Inc., Little Chesterford, UK) as per the manufacturer's instructions. 1.5 μg of labelled cDNA of each sample in 10 μl was mixed with 20 μl of hybridisation buffer in nuclease-free tubes and pre-heated at 65 °C in a thermocycler for 5 min.
Six preheated samples (three pregnancy-associated and three control virgin) were incubated in an incubation chamber with humidity control buffer at 48 °C for 16 h. The bead-chip was carefully de-sealed, washed and blocked in E1 blocking buffer for 10 min with rocking. The bead-chip was then blocked in E1 buffer with a 1:1000 dilution of streptavidin-Cy3 (1 mg/ml) for 10 min, followed by a washing step for 5 min. Finally, the bead-chip was spun at 275 g for 4 min at 25 °C to dry and scanned with a microarray scanner (Illumina Inc.) via the decode file (.dmap) provided with the chip. Microarray analysis Scanner data was transferred to GenomeStudio software (Illumina Inc.) for hybridisation quality control and general data analysis. Quality control included sample-independent and sample-dependent assessments. Exported data were further analysed using R software and the rank products (RP) module [18] (The R Foundation). Haematoxylin-eosin staining 5 μm FFPE mouse mammary gland sections were immersed 3× in xylene for 5 min, 3× in absolute ethanol for 2 min, then rinsed in distilled water. After incubation in haematoxylin (Sigma) for 2 min, slides were washed under running tap water and dipped in Scott's tap water for 30 s. Slides were transferred to eosin (Dako, Ely, UK) for 2 min and rinsed thoroughly using running tap water. Stained tissue sections were dehydrated through increasing concentrations of ethanol, immersed in fresh xylene for 3 min and finally mounted using Pertex mounting medium (CellPath, Newtown, UK). Kaplan-Meier analysis Kaplan-Meier analyses were performed using the Gene Expression-Based Outcome for Breast Cancer Online (GOBO) tool [17] from Lund University and KM-Plotter [21]. The GOBO tool uses RNA expression data from 1881 breast tumour samples generated on Affymetrix U133A microarrays in 11 independent studies that are freely available from the Gene Expression Omnibus (GEO; please refer to [17] for more detail).
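The rank products (RP) statistic used for the microarray ranking is, at its core, the geometric mean of a gene's rank across replicates. A minimal Python sketch of that idea, not the authors' RankProd/R implementation; the fold-change values are invented, and ties are ignored for simplicity.

```python
import math

def rank_products(replicates):
    """Rank-product statistic per gene.

    replicates: list of per-replicate fold-change lists (one value per gene).
    Genes with consistently high fold changes across replicates receive
    the smallest rank products."""
    n_genes = len(replicates[0])
    ranks_per_rep = []
    for rep in replicates:
        # Rank 1 = largest fold change within this replicate.
        order = sorted(range(n_genes), key=lambda g: rep[g], reverse=True)
        ranks = [0] * n_genes
        for r, g in enumerate(order, start=1):
            ranks[g] = r
        ranks_per_rep.append(ranks)
    k = len(replicates)
    # Geometric mean of the per-replicate ranks.
    return [math.prod(ranks[g] for ranks in ranks_per_rep) ** (1 / k)
            for g in range(n_genes)]

# Three replicates, four genes; gene 0 is consistently top-ranked:
fc = [[4.0, 1.2, 0.8, 2.0],
      [3.5, 1.0, 0.9, 2.2],
      [5.1, 1.1, 0.7, 1.8]]
rp = rank_products(fc)
```

The full method additionally derives significance by permuting the data, which the RankProd package handles; this sketch shows only the core ranking statistic.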
To assess the prognostic power of individual RNAs (Kaplan-Meier analysis), the 'Gene Set Analysis-Tumors' function within GOBO was used. Distant metastasis-free survival (DMFS) and recurrence-free survival (RFS) were selected as end points, respectively, with a 10-year cut-off. The selected number of quantiles was set to '2'. LogRank p-values were adjusted for multiple comparisons across all genes and 20 predefined subgroups of the total cohort using the Benjamini-Hochberg method, implemented in the p.adjust function of R [22]. Genes which remained significant in any subgroup were included in the final 18-gene signature. To assess the ability of any of the identified RNA signatures to stratify the breast cancer cohort, the 'Sample Prediction' analysis function within the GOBO analysis tool was used. 'Day 3 pregnancy vs control' fold-change ratios were used as expression centroids within the signatures (averages were used where more than one probe per RNA was present). Kaplan-Meier analysis was performed using correlative centroid prediction (Pearson) with a cut-off of 0 (all patients with expression profiles positively correlated to the direction and magnitude of changes observed in early pregnancy were included in one group; negative correlations formed the other group). DMFS or RFS were selected as end points with a 10-year cut-off. LogRank p-values of < 0.05 were again regarded as significant. Multivariate analyses were performed in the presence of ER-status, LN-status, grade, age and tumour size. Multivariate analyses in GOBO are implemented using the survival library in R [17]. For KM-Plotter analysis, patient cohorts were split using the combined median expression levels of all probes against the 18-gene signature, with negative weighting for those genes associated with good prognosis in GOBO. Again, a 10-year cut-off was used, with DMFS (breast cancer set) and progression-free survival (all other cancer sets) chosen as the endpoint.
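The correlative centroid prediction described above (Pearson correlation of each patient's expression profile against the pregnancy-vs-control fold-change centroid, split at 0) can be sketched as follows. The gene-level values are invented for illustration; GOBO's own implementation may differ in detail.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length profiles.
    Assumes neither profile is constant (non-zero variance)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def assign_group(patient_profile, centroid, cutoff=0.0):
    """Patients whose profile correlates positively with the
    pregnancy-associated fold-change centroid form one survival group;
    the rest form the other (GOBO-style correlative prediction)."""
    return "signature-like" if pearson(patient_profile, centroid) > cutoff else "other"

# Hypothetical centroid (log fold changes, preg vs ctrl) over 4 genes:
centroid = [1.8, -0.9, 1.2, -1.5]
patient_a = [2.0, -1.1, 0.9, -1.2]   # tracks the centroid
patient_b = [-1.5, 0.8, -1.0, 1.4]   # anti-correlated
```

Kaplan-Meier curves are then drawn for the two resulting groups, with the log-rank test comparing their survival distributions.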
Early pregnancy induces a loosening collagen sheath and BM protein expression The first microscopically observable histological changes occur two to three days after conception [13], including an overall denser stromal adipose tissue and a more prominent ECM layer (Fig. 1A). To test whether these morphological changes were accompanied by altered ECM protein expression, candidate proteins of the collagen sheath and BM were assessed by IHC in mammary glands of three-day-pregnant and age-matched virgin mice. Staining for fibrillar collagen (COL) I and COLVI was strong and highly localised around the ducts of non-pregnant adult mice. This staining appeared to become weaker and less defined in the glands of early pregnant mice (Fig. 1B). In contrast, bone morphogenetic protein (BMP) 1, involved in the formation of new collagen fibrils, was predominantly detected around the ductal epithelium of pregnant mice. This suggests a general loosening of the fibrillar collagen sheath at the onset of pregnancy as new collagen bundles form. BM components like agrin (AGRN) and FBLN2 were noticeably up-regulated in early-pregnancy mammary gland sections (Fig. 1B), consistent with our previous observation of upregulated FBLN2 and VCAN during early pregnancy [13]. In contrast, FBLN5 was more abundant around ducts from non-pregnant mice. Similar results were observed in the 3rd gland of the same animals (Additional file 1: Figure S1). These results highlight widespread ECM remodelling and fibrillar collagen sheath loosening ahead of pregnancy-induced lateral ductal branch morphogenesis. Since fibroblasts play key roles in ECM remodelling, we next focussed our attention on the fibroblasts within the mammary gland of early pregnant mice.
Pregnancy-associated fibroblasts induce branching of epithelial cells in vitro We hypothesised that if fibroblasts were involved in the initiation of ductal branching in early pregnancy, isolated fibroblasts from early pregnant mice (pregnancy-associated fibroblasts (PAFs)) might initiate branching morphogenesis in vitro. We therefore isolated mammary fibroblast-enriched extracts from 3-day-pregnant and age-matched non-pregnant control littermates. Both fibroblast-enriched extracts were initially cultured on plastic for 2-3 passages before co-culturing with mouse mammary epithelial EpH4 cells in Matrigel. While EpH4 cells alone formed mostly small spheroidal structures (Fig. 2), EpH4 cells grown in the presence of adult virgin mouse fibroblasts showed enlarged acini with limited branching. When EpH4 cells were grown in the presence of PAFs, significantly more enlarged, highly branched structures were detected (11% vs < 2%; P = 0.001; Additional file 2: Table S1). Hence, isolated PAFs and mammary fibroblasts from nulliparous mice showed significantly different branch initiation activities in vitro, which implied an altered gene expression pattern in these cells. Pregnancy induces a distinct RNA expression pattern in mouse mammary fibroblasts We next aimed to identify the potential molecular mechanisms that could enable mouse mammary fibroblasts to induce branching morphogenesis in vivo by using whole genome transcriptome analysis. Freshly isolated, non-cultured primary fibroblast-enriched extracts were used to reflect the in vivo situation more closely. These were again isolated from inguinal and thoracic mammary glands dissected from one flank of 3-day-pregnant mice and from age-matched non-pregnant littermates. Pregnancy-associated morphological changes were confirmed by staining contralateral glands for FBLN2 as previously described [13], while increased COLIV staining confirmed further BM-associated changes (Additional file 3: Figure S2).
Analysis of markers for fibroblasts (platelet-derived growth factor receptor α (Pdgfra), Col1a1, Col1a2, serpin family H member 1 (Serpinh1)/heat-shock protein 47 (Hsp47), vimentin (Vim), S100a4), myofibroblasts, macrophages (EGF module-containing mucin-like hormone receptor (Emr1)), and leukocytes (Cd45/protein tyrosine phosphatase, receptor type, C (Ptprc)) confirmed strong enrichment of fibroblast-associated RNAs in our extracts from both pregnant and virgin mice, with low cross-contamination with the other cell types tested (Additional file 4: Figure S3). Differentially expressed RNAs were then ranked according to their p-value. 897 probes showed a change with p-value < 0.05, representing 840 genes. Table 1 shows the 50 most significantly changed probes ranked by p-values and grouped into up- and down-regulated genes (for the full list please see Additional file 5: Table S2). Two of the top four differentially expressed RNAs encoded proteins already known to be expressed in the mammary stroma of pubertal and pregnant mice, which are necessary for mammary gland outgrowth and/or branching: glucocorticoid receptor DNA-binding factor 1 (Grlf1) [23] and aristaless-like 4 (Alx4) [24]. The list further included the proteoglycan Vcan, which we had previously described to be specifically detected in the stroma of pubertal outgrowing mammary epithelium and during early pregnancy [13]. IHC confirmed this specific expression, as well as expression of ALX4 protein in stromal cells surrounding ductal epithelia during early pregnancy (Fig. 3A; Additional file 6: Figure S4). Differential expression was further confirmed by qRT-PCR for a selection of identified RNAs (Alx4, Gpc1, Vcan, Wisp2/Ccn5), though Wisp2/Ccn5 did not reach statistical significance (P = 0.19) (Fig. 3B).
Pregnancy-associated gene expression changes identify an ECM-remodelling programme To identify those factors most likely to affect the described morphological changes seen in the mammary gland of early pregnant mice, we next focussed on the RNAs encoding proteins of the matrisome as defined previously [16,25]. Filtering the 897 probes identified 74 probes (8.24%), representing 64 differentially expressed core-matrisome and matrisome-affiliated genes (Table 2). Again, qRT-PCR confirmed the differential expression of selected RNAs (Col18a1, Col3a1, Tnc) within this table (Fig. 3C). This list described a complex programme of collagen remodelling, growth factor signalling and induced BM formation. STRING analysis showed a tight network with enrichment of factors associated with ECM organisation, collagen biosynthesis and cell motility, as well as cell adhesion, and glycosaminoglycan and heparin binding activities (Additional file 7: Figure S5, Additional file 8: Table S3). To establish whether expression of the identified matrisome RNAs is specific to early pregnancy, we assessed expression levels at other stages of mammary gland development (puberty, adult virgin, early-, mid-, and late pregnancy, lactation, and involution) using previously obtained microarray data from whole BALB/c mouse mammary glands [26]. Data were available for 33 of the 64 matrisome RNAs (Fig. 4). 18 of the 24 RNAs upregulated in PAFs showed the strongest abundance during times of epithelial outgrowth, puberty (V6) and/or early pregnancy. (Table 1, continued, shows the top 50 hits (Probe ID) of RNAs with significant (p < 0.05) differential expression (fold-change) in fibroblasts from 3-day-pregnant (Preg) compared to virgin control (Ctrl) mice, using median signal intensities from 3 individual experiments (biological replicates) and the RP software [18].)
The matrisome signature of PAFs predicts distant metastasis-free survival (DMFS) For normal and cancerous epithelium to grow into the surrounding stroma, both require stromal remodelling. Tissues often use the same or similar molecular mechanisms to drive similar morphological changes. We therefore hypothesised that the molecular mechanisms associated with tissue remodelling during early mammary lateral branching may also operate during human breast cancer progression, enabling and/or supporting cellular invasion and further metastatic spread. If this was the case, we would expect the RNA expression patterns found in our PAFs to be found, at least in part, in breast cancers with a higher risk of progression and metastasis formation. To test this hypothesis, we assessed whether expression of our 64-gene matrisome RNA signature correlated with metastatic spread in 1881 breast cancer patients, using the Gene expression-based Outcome for Breast cancer Online (GOBO) webtool [17]. (In Fig. 4, colour intensities reflect signal intensities relative to the median (50th percentile) of each RNA across all developmental time points; red: above, blue: below [26].) Distant metastasis-free survival (DMFS) was used as the endpoint with a 10-year cut-off point. Fold-change values (preg vs ctrl) in the expression of each gene from our array analysis were used as expression centroids (Table 2). Kaplan-Meier analysis showed that the 64-gene PAF matrisome signature was a strong univariate prognostic indicator of DMFS (LogRank P = 3.36e−5) for all breast cancers. The signature remained significant in multivariate analysis including age, tumour size, grade, ER-, and LN-status (HR = 1.85, 95% CI: 1.39-2.48, P = 2.74e−5) (Fig. 5, Additional file 9: Figure S6). Similar results were obtained when recurrence-free survival was used as the endpoint (Additional file 10: Figure S7).
Univariate subgroup analysis showed that this signature predicted DMFS in the basal (LogRank P = 0.047) and HER2-positive cancer cohorts (LogRank P = 0.004), though not in the luminal A or B, or normal-like cancer subgroups (Additional file 9: Figure S6). Correspondingly, in multivariate analyses the signature was more powerful in the ER-negative cohort (HR = 2.78 (1.65-4.68), P = 1e-04) than in the ER-positive cohort (HR = 1.59 (1.13-2.26), P = 0.008) (Fig. 5). Additionally, the signature showed prognostic power in all histological grades (grade 1: LogRank P = 0.008; grade 2: LogRank P = 0.008; grade 3: LogRank P = 0.005) (Additional file 9: Figure S6). Therefore, the signature is a prognostic indicator independent of grade. An 18-RNA matrisome signature shows significantly increased prognostic power To identify the most significant contributors to the signature's prognostic power, each of the 64 genes was tested individually within the GOBO breast cancer dataset. 52 of them were recognised in this dataset. 48 of the 52 reached significance (p < 0.05) in stratifying patient groups in at least one defined breast cancer subgroup. Forty-two were consistently associated with either higher or lower levels of DMFS within the subgroups. After multiple-testing correction across all gene- and subgroup-analyses, 18 RNAs retained an adjusted p-value of < 0.05 for at least one breast cancer subgroup (Additional file 11: Figure S8A). 11 of those 18 RNAs (among them WISP2, CXCL13 and POSTN) were either up-regulated in PAFs with high breast cancer expression of the human orthologues associated with poor prognosis, or down-regulated in PAFs with low breast cancer expression associated with poor prognosis, and were therefore potential drivers of progression within the signature.
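The multiple-testing correction used above (Benjamini-Hochberg, as implemented in R's p.adjust) can be sketched in a few lines of Python; the raw p-values here are invented for illustration.

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment, mirroring the behaviour of
    R's p.adjust(p, method = "BH")."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity:
    # adjusted p = min over larger ranks of p * m / rank.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Hypothetical raw p-values from per-gene log-rank tests:
raw = [0.001, 0.02, 0.03, 0.20]
adj = benjamini_hochberg(raw)
significant = [p < 0.05 for p in adj]
```

Genes whose adjusted p-value stays below 0.05 in at least one subgroup would be retained, analogous to the 18-gene selection described above.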
In contrast, the other 7 RNAs (VCAN, TIMP1, IGF1, SLIT2, TGFBI, CTSC, VTN) were either up-regulated in PAFs while high expression of the human orthologues in breast cancers was associated with better prognosis, or down-regulated in PAFs while low expression in breast cancers was associated with better prognosis (Additional file 11: Figure S8, Additional file 12: Figure S9). These differences might reflect the controlled nature of mammary epithelial branching morphogenesis versus the uncontrolled situation in metastasis. Again using the fold-change expression values of these 11 + 7 RNAs in PAFs, the univariate prognostic power of this combined gene set was comparable to the initial 64-gene set (LogRank P = 2.45e−06). However, a signature using only the above-mentioned 11 RNAs showed a strongly increased prognostic power (LogRank P = 7.66e−12). Similarly, a signature using only the residual seven RNAs also showed stratification ability, but in the opposite direction (LogRank P = 6.42e−09) (Additional file 11: Figure S8B, C). To test the breast cancer-specificity of the signature, we also analysed its prognostic power in the gastric, ovarian and lung cancer datasets available in KM-Plotter. The signature had significant prognostic power in all datasets (ovarian cancer (HR = 1.29 (1.51-1.56), LogRank P = 0.0075); lung cancer (HR = 1.72 (1.3-2.26), LogRank P = 1e−04)). However, in gastric cancer, higher signature levels were associated with better prognosis (HR = 0.64 (0.51-0.82), LogRank P = 0.00029) (Fig. 7D-F). Nevertheless, our data show that the described changes are not breast cancer-specific. Discussion The mammary stroma has been recognised as an important determinant of epithelial growth and differentiation during normal development as well as cancer progression [1,27].
Lateral branch formation and cancer invasion both require major remodelling of the surrounding BM and collagen sheath, creating an epithelial growth-promoting environment. In both cases, fibroblasts play significant regulatory roles [11,28]. We therefore hypothesised that, by studying the gene expression changes in primary mammary fibroblasts during the initiation of pregnancy-associated lateral branching morphogenesis, we might identify stromal factors that control normal ductal branching and also regulate breast cancer invasion, and hence progression to metastatic disease. We have identified an 18-gene PAF-associated RNA signature that can now be used as a starting point for further biological tests, studying the functions of the associated proteins during mammary branching morphogenesis and breast cancer progression. It is so far unclear whether the identified RNAs reflect different levels of expression within cancer-associated fibroblasts or other stromal cells within the individual tumours, or within the cancer epithelium itself, e.g. through epithelial-mesenchymal transition-type changes. This can now be addressed by IHC and in-situ hybridisation in future studies. Importantly, our data show that the changes in RNA expression of these genes are associated with cancer progression in a significant number of breast cancers, and they could therefore play important roles in the control of invasion and/or metastasis formation. 11 of the 18 genes showed similar expression patterns during early pregnancy (5 up- and 6 down-regulated) and in poor prognosis breast cancer, and therefore identified potential stromal regulators of tissue remodelling that operate in both biological settings, normal development and breast cancer progression. However, expression of the remaining seven genes was associated with prognosis in the opposite direction (Additional file 11: Figure S8C). These RNAs therefore behaved diametrically opposed to our initial hypothesis.
We hypothesise that these seven identified factors reflect some of the differences between the 'controlled invasion' of normal mammary branching morphogenesis and cancer cell invasion, and therefore may identify mechanisms that prevent mammary epithelial cells from growing uncontrollably into the surrounding stroma. One unexpected finding was the correlation between increased Vcan RNA expression and better DMFS in ER-positive and LN-negative cancers, as VCAN protein expression in the peritumoral stroma of the breast has previously been associated with poor prognosis in LN-negative breast cancers [29]. That study analysed a mix of 60 grade 1-3 breast cancers in total, with six showing increased VCAN staining. Unfortunately, no further clinical data (e.g. ER status) or association of VCAN staining with grade were available for this patient group. This contradiction may reflect a divergence between RNA and protein expression. However, in the mouse mammary gland, VCAN is strongly co-expressed and colocalises with FBLN2 [13], a protein required for BM integrity, around newly outgrowing ducts during puberty and early pregnancy [14]. We have recently shown that in breast cancer FBLN2, together with COLIV, is reduced in areas of invasion compared with neighbouring morphologically normal tissue, with high Fbln2 RNA levels showing a significant association with better DMFS in breast cancers of low and intermediate grade in KM-Plotter. In contrast, in high-grade cancers FBLN2 RNA expression was associated with poor prognosis [14]. This could reflect different protein requirements at the various progression stages, where FBLN2 presence may suppress tumour invasion in the early stages but may enable cancer cells to survive and form metastases once invasion has occurred.
This could occur either through expression of those stromal proteins by the malignant cells themselves or by inducing their local microenvironment to express these proteins, as the tumour ECM is a product of both the tumour epithelial and stromal cells [30]. Hence, our results may reflect a similar association for VCAN. Collagens form a key part of the extracellular matrix, which undergoes extensive remodelling during both development and cancer progression. This is reflected in our 18-gene signature: three of the five RNAs that were upregulated in PAFs and whose higher expression in breast cancers was associated with poor prognosis encode collagen proteins (Col5a2, Col13a1, Col18a1). COLV is an essential regulator of collagen fibrillogenesis [31] and is expressed in breast cancer desmoplastic stroma in response to invasive carcinoma [32]. Consistent with our data, COL5A2 expression itself is upregulated in epithelial cells of breast invasive ductal carcinoma compared with DCIS [33]. Similarly, COLXIII has been detected at the invasive front of several cancers [34], and its expression in breast cancers is associated with increased invasion and metastasis [35]. Interestingly, recent evidence has linked COL18A1 to the mammary stem cell niche, with Col18a1−/− mice developing fewer terminal end buds and branch points. Oestrogen and progesterone induce WNT4, which activates the protease ADAM-TS18 in myoepithelial cells, leading to remodelling of the BM and activation of mammary stem cells through binding of ADAM-TS18 to COL18A1 in the stem cell niche [36]. It is therefore interesting to note that Adamts-18 was also significantly induced in PAFs together with Col18a1 (Table 2). The Wnt-signalling pathway is an important activator of mouse mammary branching morphogenesis [37], and two further RNAs in our signature indicated an involvement of our PAFs in the activation of the Wnt pathway and mammary stem cells: Postn and Sfrp1.
Postn is necessary for correct collagen fibril assembly [38] and for metastatic colonisation, recruiting Wnt ligands for cancer stem cell maintenance [39]. It has been detected in cancer-associated fibroblasts of invasive breast carcinoma [40,41], and its overexpression in human mammary epithelial cells enhances breast tumour growth and metastasis [38]. Sfrp1 is a negative regulator of Wnt signalling; it was downregulated in our PAFs, and reduced SFRP1 was associated with poor DMFS. This is consistent with SFRP1 being epigenetically silenced in ~75% of invasive breast cancers [42]. By focussing our study purely on the matrisome, our signature did not show any similarities with previously described prognostic RNA signatures, such as the Core Serum Response signature by Chang et al. [43], which was derived from cultured serum-activated fibroblasts. Our study deliberately avoided in vitro culturing and instead used RNA from freshly isolated primary PAFs. Our RNA data should therefore reflect the in vivo situation more closely [44]. Further, in contrast to other published signatures [45], our signature is not driven by descriptors of cellular proliferation and ER signalling and is hence independent of grade. In recent years, several molecular diagnostic RNA signatures for breast cancer progression have been developed and are now widely commercially available (for example Endopredict [46] and OncotypeDX [47,48]). All of these have been specifically designed and approved to assess the risk of metastasis formation in early low-grade ER-positive/HER2-negative/LN-negative breast cancer patients, who represent 60-70% of all newly diagnosed cases. Since our signature performs particularly well in high-grade, ER-negative and HER2-positive breast cancer cohorts, it might complement these established tools by providing crucial information about the risk of distant metastasis formation for therapy decision-making in these difficult-to-treat patient groups.
Notably, the 18-gene set had prognostic significance using the different analysis methods of GOBO and KM-Plotter, and performed well when compared to Endopredict and OncotypeDX using the GOBO Gene Set Analysis Tool (Additional file 16: Table S4), which allows for analysis of weighted expression (rather than the centroid-based method of 'Sample Prediction', as shown in Figs. 5 and 6). We acknowledge that the current comparison is imperfect. Nevertheless, our results show that our matrisome-derived gene set performs better than proliferation-associated signatures, particularly in HER2-positive, ER-negative and high-grade tumours. Several approaches could therefore now be taken to develop an optimised score.

Conclusions

In summary, we identified potential new candidates involved in a complex system of stromally controlled epithelial branching, and provide a testable novel dataset for further analyses of stromal-epithelial interaction and stromally controlled breast cancer progression. In addition, we have provided a potential new tool to identify breast cancer patients, particularly within the ER-negative and HER2-positive cohorts, who have a significantly altered risk of developing metastases; it may therefore be further developed into a diagnostic tool to aid therapy decision-making.
v3-fos-license
2016-05-12T22:15:10.714Z
2014-01-01T00:00:00.000
8360150
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://academic.oup.com/gbe/article-pdf/6/1/238/17921298/evu001.pdf", "pdf_hash": "ef12a8b6bd6a5cd4fd2eccc5da1d0c365773230f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2049", "s2fieldsofstudy": [ "Biology" ], "sha1": "3a66b4d8bfda8c2f3630547a955177ec31e7de3d", "year": 2014 }
pes2o/s2orc
The Plastid Genome of Mycoheterotrophic Monocot Petrosavia stellaris Exhibits Both Gene Losses and Multiple Rearrangements

Plastid genomes of nonphotosynthetic plants represent a perfect model for studying evolution under relaxed selection pressure. However, the information on their sequences is still limited. We sequenced and assembled the plastid genome of Petrosavia stellaris, a rare mycoheterotrophic monocot plant. After orchids, Petrosavia represents only the second family of nonphotosynthetic monocots to have its plastid genome examined. Several unusual features were found: retention of the ATP synthase genes and the rbcL gene; extensive gene order rearrangement despite a relative lack of repeat sequences; an unusually short inverted repeat region that excludes most of the rDNA operon; and a lack of evidence for accelerated sequence evolution. The plastome of the photosynthetic relative of P. stellaris, Japonolirion osense, has the standard gene order and shows no predisposition to inversions. Thus, the rearrangements in the P. stellaris plastome are most likely associated with the transition to a heterotrophic way of life.

Nonphotosynthetic plants represent a unique model for studying the evolution of the plastid genome under relaxed selection. A typical plastome of a photosynthetic plant contains ~110 genes, and at least one-third of them encode proteins directly involved in photosynthesis. Apparently, a wide diversity of plastid genome structures, differing in gene content and order, should be observed in nonphotosynthetic plants, as occurs in systems with experimentally induced heterotrophy (e.g., Cahoon et al. 2003). It seems, however, that only a limited number of ways of plastome modification was realized in the evolutionary history of higher plants. This may reflect some functional constraints or be just a consequence of insufficient sampling. At present, however, only a few complete plastome sequences, those of the liverwort Aneura mirabilis (Wickett et al.
2008), the parasitic dicots Epifagus virginiana and Cistanche deserticola (Wolfe et al. 1992; Li et al. 2013), and three mycoheterotrophic orchids (Delannoy et al. 2011; Logacheva et al. 2011; Barrett and Davis 2012), are available, so this question is hard to address. Corallorhiza striata (Barrett and Davis 2012) demonstrates the least degree of reduction, as its plastome is only 6% smaller than those of its photosynthetic relatives, whereas Rhizanthella shows the highest degree: its plastome is reduced by more than 50% compared with its relatives (Delannoy et al. 2011). In terms of gene content, in the plastomes of nonphotosynthetic plants the chlororespiratory genes and most photosynthesis-related genes are lost or pseudogenized. The degree of reduction of other classes of genes differs; the most conserved are those encoding products involved in translation (ribosomal RNAs, ribosomal proteins, and transfer RNAs). Despite the drastic differences in length and gene content, these genomes are mainly colinear with those of photosynthetic plants, with the exception of minor shifts in the inverted repeat (IR)-single copy (SC) region boundaries. The information about the sequences of plastid genomes in nonphotosynthetic plants is poor, mostly due to technical limitations. Tools that facilitate the analysis of plastid genome sequences have been developed, including universal primer sets for amplification and sequencing (Heinze 2007; Dong et al. 2013) and computational resources (Wyman et al. 2004; Cheng et al. 2013). However, most of them are applicable mainly to plastomes that have "standard" gene content and order, and not to the highly reduced and/or rearranged genomes that are expected for nonphotosynthetic plants. Also, many nonphotosynthetic plants are very small and represent rare species; this complicates the extraction of plastid DNA in sufficient quantity. In recent years, new DNA sequencing techniques have made great progress, allowing these difficulties to be overcome.
Studies encouraging researchers to use whole-genome sequencing data to characterize organelle genomes are emerging (Smith 2012; Straub et al. 2012), and this approach can be applied to nonphotosynthetic plants as well. In this study, we report the characterization of the complete plastome sequence of a mycoheterotrophic plant, Petrosavia stellaris, and the partial sequence of its photosynthetic relative, Japonolirion osense, based on whole-genome sequencing data obtained using Illumina technology. The genus Petrosavia is very unusual in many respects. It was treated as the sole representative of the family Petrosaviaceae (Cronquist 1981), but molecular studies revealed the affinity of Petrosavia to the monotypic endemic Japanese genus Japonolirion, and the two were united within one family (Cameron et al. 2003). Further insights from morphology supported this (Remizowa et al. 2006). Petrosaviaceae (including Japonolirion) have an isolated position within the monocots, being the sister group of all monocots except Alismatales and Acorales (Chase et al. 2006; Davis et al. 2006), and are treated within a monotypic order Petrosaviales (Angiosperm Phylogeny Group 2009). The loss of photosynthetic activity arose many times in the evolution of monocots: besides Petrosaviales, it is known in Pandanales, Asparagales, Liliales, and Dioscoreales (Merckx and Freudenstein 2010). Complete plastome sequences are available only for Orchidaceae; examples from other monocot families are useful for understanding the pattern of evolutionary transformations of plastid genomes upon the loss of photosynthetic activity. Also, the information on the plastid genome sequence of Petrosavia will improve the reconstruction of angiosperm phylogeny, as Petrosaviales is one of the few angiosperm orders for which no complete plastome sequence is available.
A total of 40,001,803 100-bp paired reads was generated for the Petrosavia and 8,418,085 reads for the Japonolirion total genomic DNA libraries (data are available in NCBI under BioProject accession numbers PRJNA196233 and PRJNA196234, respectively). For Petrosavia, assembly with Velvet resulted in a single scaffold with high similarity to plastid genomes. PCR joining and sequencing of the amplicons allowed reconstruction of the complete sequence. As de novo assembly algorithms are not able to distinguish between the two copies of the IR, we expected them to be assembled together, and the position of the IR region was deduced based on coverage (supplementary fig. S1, Supplementary Material online). Verification of the assembly was performed in three ways: 1) back-mapping of the reads onto the assembled sequence used as reference, 2) comparison with sequences available in GenBank, and 3) Sanger resequencing of several regions. All three methods confirmed a consistent and accurate assembly. Approximately 321,000 reads were mapped as paired, with no zero-coverage regions and an average coverage of 346.9×. The sequences of Petrosavia plastid genes available from GenBank (AF206806, AF209649, AY465613, AY465715, AY465690, AB088839, AB040156) aligned to our assembly over their total length and had 98-99% similarity for P. stellaris sequences and 96-99% for other Petrosavia species. Sanger resequencing of selected regions (petD, rps16-trnQ-UUG, ycf2, atpA, trnL-UAA-trnF-GAA) using the same DNA sample as for Illumina sequencing yielded 100% identical sequences. The complete sequence of the P. stellaris plastid genome represents a circular molecule 103,835 bp in length, with an IR of 10,750 bp, a large single-copy (LSC) region of 62,725 bp, and a small single-copy (SSC) region of 19,610 bp (GenBank accession number KF482381). GC content: total 37.47%, LSC 36.39%, SSC 40.47%, IR 37.9%.
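Because the assembler collapses the two identical IR copies into one, the collapsed region shows roughly double the single-copy read depth, which is how the IR position was deduced from coverage above. A minimal sketch of how such a region could be flagged (the coverage values and the 1.7× cutoff below are illustrative assumptions, not values from the paper):

```python
def infer_ir_span(coverage, single_copy_depth, factor=1.7):
    """Return (start, end) of the longest run whose read depth suggests a
    collapsed inverted repeat (~2x the single-copy depth).

    coverage: per-base read depth along the assembled plastome
    single_copy_depth: median depth of single-copy regions
    factor: hypothetical threshold between 1x and 2x depth
    """
    cutoff = factor * single_copy_depth
    best = (0, 0)
    start = None
    for i, c in enumerate(coverage + [0]):  # sentinel closes a trailing run
        if c >= cutoff and start is None:
            start = i
        elif c < cutoff and start is not None:
            if i - start > best[1] - best[0]:
                best = (start, i)
            start = None
    return best
```

On real data one would smooth the depth profile first; the principle, a contiguous window at roughly twice the baseline depth, is the same.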
In terms of gene content, it encodes a reduced gene set represented for the most part by genes responsible for protein synthesis: all ribosomal protein genes as well as ribosomal and transfer RNA genes (except for trnT-GGU, which is present as a pseudogene) are intact. Also conserved are the two giant plastid genes ycf1, which encodes a component of the plastid translocon complex (Kikuchi et al. 2013), and ycf2 (unknown function), as well as genes involved in plastid metabolism (accD and clpP). The genes related to photosynthesis are either lost or pseudogenized, with the exception of rbcL, psbZ, petG, and the genes encoding subunits of ATP synthase (fig. 1). The genes that are retained have high similarity with those of photosynthetic monocots (table 1). The positions of introns in split genes are conserved. Sequencing of cDNA of two intron-containing genes, clpP and rps12 (accession numbers KF482379 and KF482380, respectively), confirmed the presence of spliced transcripts. The rpl2 gene has an atypical start codon, ACG, a feature shared among all monocots, both photosynthetic and nonphotosynthetic. cDNA sequencing showed the presence of a C/T polymorphism at the second position of the rpl2 start codon (supplementary fig. S2, Supplementary Material online), indicating the presence of RNA editing. The most unusual trait is the gene order, including the position of the IR. The LSC-IR junction is located between the genes rps4-rpl20 on one side and rpl20-rps18 on the other. The SSC-IR junction lies within the rrn16 gene, and all other rrn genes are located in the single-copy region. The Petrosavia plastome is highly rearranged relative to the plastomes of other monocots. There are seven major syntenic blocks: from trnK to psaB (block 1), from petB to trnL-CAA (block 2), ndhC-rps18 (block 3), rpl20-clpP (block 4), ndhB-trnN-GUU (block 5), ycf1-rpl32 (block 6, encompassing the SSC), trnF-GAA-rps4 (block 7), and block 8, represented by a single gene, trnS-GGA (fig. 1).
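Rearrangements between syntenic blocks like these are commonly modelled as signed permutations, where an inversion reverses a segment of blocks and flips each block's orientation. A toy sketch of that operation (block numbering here is illustrative, not the paper's exact breakpoints):

```python
def invert(blocks, i, j):
    """Apply an inversion to the signed block order blocks[i:j]:
    the segment is reversed and every block's orientation flips sign."""
    return blocks[:i] + [-b for b in reversed(blocks[i:j])] + blocks[j:]

# Illustrative only: an ancestral order of 8 blocks and one inversion
# spanning blocks 2-4 (indices 1..3).
ancestral = [1, 2, 3, 4, 5, 6, 7, 8]
derived = invert(ancestral, 1, 4)
```

Applying the same inversion again restores the ancestral order, which is why inversion counts between two gene orders can be estimated by such models.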
For Japonolirion, assembly resulted in seven contigs longer than 1 kb, with a total length of 128,505 bp (supplementary table S1, Supplementary Material online). The quality of the DNA, isolated from 10-year-old ethanol-fixed material, was insufficient to join all the contigs by PCR. Thus, the comparative analysis was done using the four longest contigs (52,466, 26,533, 23,498, and 18,560 bp). These contigs contain more than 90% of the genes typical for the plastid genome of a photosynthetic plant. Based on coverage and gene content, we attribute the 52,466-, 23,498-, and 18,560-bp contigs to single-copy regions and the 26,533-bp contig to the IR region. Comparison with other monocots shows that the gene order in these contigs does not deviate from the typical one. Thus, we assume that Japonolirion possesses a non-rearranged plastome. Based on this assumption, we propose that the following events mediated the transition from an ancestral, non-rearranged plastid genome to that observed in P. stellaris: 1) a large inversion in the LSC affecting the trnK-rps4 region (blocks 1-8-7), 2) contraction of the IR to ndhB-rrn16 (block 5b, partial), 3) translocation of ndhC-clpP (blocks 3-4a) into the trnL-CAA-ndhB spacer (junction of blocks 2 and 5), 4) expansion of the IR to include clpP-rpl20 (block 4), and 5) translocation of trnS-GGA (block 8) between blocks 3 and 4a. As the relationships of Petrosavia to other monocots had never been studied using plastid genome-scale data, we employed the information from the plastid genome for phylogenetic reconstruction. Trees inferred from nucleotide and amino acid sequences are mostly congruent, with a few exceptions confined to poorly supported nodes. Petrosavia is sister to Japonolirion (with 100% support), and the Petrosavia + Japonolirion clade is sister to all monocots except Alismatales and Acorales (fig. 2). This is consistent with the results of analyses of a small number (2-4) of genes but a high number of taxa (including Petrosavia) (Chase et al.
2006) and larger plastid data sets in which Petrosaviales are represented by Japonolirion only (Barrett et al. 2013; Davis et al. 2013). Nonphotosynthetic plants often exhibit increased rates of nucleotide substitution in all their genomic compartments (Bromham et al. 2013). The same is characteristic of the organelles of some photosynthetic species, especially those with rearranged plastomes (Guisinger et al. 2008; Sloan et al. 2012). We compared nucleotide substitution rates in Petrosavia and in other flowering plants. Analysis of relative nucleotide substitution rates shows no increased rate in Petrosavia (supplementary table S2, Supplementary Material online). Although the relative nucleotide substitution rate in Petrosavia is higher than that in many other monocots (including Japonolirion), it is considerably lower than in other nonphotosynthetic plants (e.g., in Epifagus it is 3.6 times higher, and in Neottia and Rhizanthella 2.4 and 5.5 times higher, respectively) or in photosynthetic plants with rearranged plastid genomes (in Pelargonium, Trachelium, and Scaevola it is 3.9-4.5 times higher). Similar to other nonphotosynthetic plants, the Petrosavia plastome has lost most photosynthesis-related genes. The patterns of gene loss are generally consistent with the order proposed by Barrett and Davis (2012), who suggested, based on their observations on the plastid genomes of mycoheterotrophic orchids, that the ndh genes are the most susceptible to loss and the atp genes the least. Petrosavia seems to be at an early stage of plastome degradation, as its gene set is much more complete than those of Epifagus, Neottia, and Rhizanthella and is similar to that of Corallorhiza. The conservation of rbcL in Petrosavia, unexpected for a nonphotosynthetic plant, has also been observed in several holoparasitic species (Delavault et al. 1995; Randle and Wolfe 2005). This may be explained either by a recent loss of photosynthetic ability or by the existence of alternative functions of the rbcL gene product in these plants (Krause 2008).
Another distinctive feature of the Petrosavia plastome is the high number of rearrangements. The most plausible explanation is that these rearrangements occurred as a result of relaxed selection caused by the switch to heterotrophy. However, the plastomes of nonphotosynthetic plants characterized to date are colinear with those of their photosynthetic relatives, even in the case of the extreme reduction seen in Rhizanthella (Cai et al. 2008). In most cases where rearrangements have been found in the plastomes of photosynthetic plants, they were correlated with a highly increased number and length of repeats. The putative mechanism generating them is intramolecular recombination between these repeats. About 200 complete plastid genome sequences are now available for flowering plants, representing all major evolutionary lineages within this group. This allowed us to perform a global survey of repeat content and its correlation with the conservation of gene order (supplementary table S3, Supplementary Material online). In basal angiosperms, magnoliids, and basal eudicots, plastid genomes have a low number of repeats and show no or minor deviations from the typical gene order. There are, however, several reports of IR/SC boundary shifts and inversions in Ranunculaceae (e.g., Johansson and Jansen 1993); thus, the apparent uniformity of plastid genomes in basal eudicots might be a result of undersampling. In rosids, most species also have a low number of repeats and the typical gene order, but there are notable exceptions in the families Geraniaceae and Fabaceae, where rearrangements are abundant (up to 16 colinear blocks in Trifolium subterraneum). As mentioned earlier, this trait is well documented and has been studied in detail in both families, and it is correlated with a high number of repeats and increased evolutionary rates (Chumley et al. 2006; Guisinger et al. 2008, 2011; Magee et al. 2010).
All three features are hypothesized to be caused by aberrant DNA repair (Guisinger et al. 2008). In asterids, highly rearranged plastomes are found in Campanulaceae (Cosner et al. 2004; Haberle et al. 2008) and Ericaceae (Fajardo et al. 2013). In both cases, a high number of repeats is observed. In Petrosavia, the repeat content is low, so it is unlikely that its photosynthetic ancestor had a high repeat content. Also, no increase in substitution rate was found; this suggests that the mechanisms responsible for the rearrangements are different in photosynthetic dicots and in Petrosavia. Thus, the characterization of the Petrosavia plastome demonstrates that, despite the increased knowledge on plastid genomes, an important mode of evolution of nonphotosynthetic plastomes, related to genome rearrangements, has remained overlooked. Besides information on the structure of the Petrosavia plastid genome, our study provides an example of de novo assembly of an organellar genome from low-coverage nuclear genome sequence data for a nonphotosynthetic plant. This approach can be used not only for the plastid genome: we were also able to assemble a partial sequence of the mitochondrial genome; the assembly produced 38 scaffolds with a total length of ~840 kb that have high similarity to plant mitochondrial genomes (to be presented elsewhere). The retrieval of data on organelle (mainly chloroplast) genomes from short-read high-throughput sequencing data is not novel (e.g., Wang and Messing 2011). For nonphotosynthetic plants, this approach has been used only once, and it employed information on the plastome structure of a related species for the alignment of contigs resulting from de novo assembly and further iterative gap closing (Barrett and Davis 2012). Any deviations from the typical gene order, including shifts of the IR regions and rearrangements, impede the application of this approach.
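Separating plastid-derived from mitochondrion-derived contigs in a mixed whole-genome assembly like this can be sketched as simple coverage-based binning. The thresholds below are hypothetical, chosen for a situation like the one described (plastid depth far above mitochondrial depth):

```python
def bin_contigs(contigs, plastid_min=200, mito_range=(20, 80)):
    """Assign (name, depth) contigs to 'plastid', 'mitochondrial' or
    'ambiguous' bins by read depth. Thresholds are illustrative, set for
    a case of roughly 350x plastid vs 40x mitochondrial coverage."""
    bins = {"plastid": [], "mitochondrial": [], "ambiguous": []}
    for name, depth in contigs:
        if depth >= plastid_min:
            bins["plastid"].append(name)
        elif mito_range[0] <= depth <= mito_range[1]:
            bins["mitochondrial"].append(name)
        else:
            bins["ambiguous"].append(name)
    return bins
```

Contigs falling between the two depth bands are exactly the candidates for plastid-mitochondrial chimeras and deserve manual inspection.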
We found that de novo assembly generates long and accurate contigs of plastid genomes that can be joined into a complete sequence using PCR, without relying on information about the plastomes of related species. However, there is an important precondition for the successful assembly of organellar genomes: a gap between the coverage of the plastid genome and that of the mitochondrial genome. Mitochondrial genomes harbor many sequences of plastid origin; the reverse situation is much rarer but also occurs (Iorizzo et al. 2012). This creates a risk of generating incorrect contigs that are chimeric between the plastid and mitochondrial genomes. If there is a great difference between the read depths of the plastid and mitochondrial genomes, it is possible to reveal such misassemblies by analyzing the contig coverage. Typically, the coverage of the plastid genome is much higher than that of the mitochondrial one, because of its smaller size and higher copy number per cell (Straub et al. 2012). We observed the same situation in Petrosavia, where the coverage of the plastid genome is about 350× and that of contigs derived from the mitochondrial genome is about 40×. We expect that the same applies to other nonphotosynthetic plants.

Total genomic DNA from both samples was extracted from a single plant using the CTAB method (Doyle and Doyle 1987) with two modifications: 1) we used pure chloroform instead of chloroform-isoamyl alcohol, and 2) the chloroform extraction was performed twice. To construct the libraries for whole-genome sequencing, DNA was processed as described in the TruSeq DNA Sample Preparation Guide (Illumina). Libraries were quantified by fluorimetry with Qubit (Invitrogen, USA) and by real-time PCR and diluted to a final concentration of 9 pM. Diluted libraries were clustered on a paired-end flowcell using a cBot instrument and sequenced on a HiSeq2000 sequencer with the TruSeq SBS Kit v3-HS (Illumina, USA), read length 101 bp from each end. The assembly of the P.
stellaris plastome was performed with Velvet 1.2.03 (Zerbino and Birney 2008) using 5 million read pairs (10 million reads), with a k-mer length of 65 and an expected k-mer coverage (exp_cov parameter) of 150. Assembly of the J. osense plastome was performed using CLC Genomics Workbench v. 5.5 with the following parameters: word size = 22, bubble size = 50, mismatch cost = 2, insertion cost = 3, deletion cost = 3, minimal contig length = 1,000 bp. Based on the assembly, primers were designed to join the contigs (supplementary table S4, Supplementary Material online). PCR was run on an MJ Mini thermal cycler under the following conditions: initial denaturation for 90 s at 95 °C, then 32 cycles of denaturation for 10 s at 95 °C, primer annealing for 25 s at 56-60 °C depending on primer GC content, and elongation for 40-120 s at 72 °C. All reactions were performed using reagents from the Encyclo PCR kit (Evrogen, Russia) following the manufacturer's instructions. To check for the presence of spliced transcripts and RNA editing, we extracted RNA from RNAlater-fixed material using the RNeasy Plant Mini kit (Qiagen). Reverse transcription was performed using the MMLV RT kit (Evrogen) with random decanucleotide primers, followed by RT-PCR (primers are listed in supplementary table S4, Supplementary Material online). Annotation of the complete sequence (for Petrosavia) and of the contigs (for Japonolirion) was performed using DOGMA (Wyman et al. 2004) with further manual checking and correction. For visualization of gene content and order, the web-based tool GenomeVx was used (http://wolfe.ucd.ie/GenomeVx/, last accessed January 13, 2014). For repeat content and synteny analysis, plastome sequences were truncated to retain only one IR copy and were used in this form in all analyses. To determine the exact position of the IR copies in a sequence, we performed a BLAST alignment of each plastome to itself.
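Velvet's exp_cov parameter refers to k-mer coverage, which is lower than base coverage because each read of length L yields only L − k + 1 k-mers. A small sketch of the standard conversion, applied to the read length, k-mer size, and base coverage reported above:

```python
def kmer_coverage(base_coverage, read_len, k):
    """Convert base coverage to expected k-mer coverage:
    each read of length L contributes L - k + 1 k-mers,
    so C_k = C * (L - k + 1) / L."""
    return base_coverage * (read_len - k + 1) / read_len

# With ~347x base coverage, 101-bp reads and k = 65, the expected k-mer
# coverage is ~127x, the same order of magnitude as the exp_cov = 150
# used for the Velvet run above.
```

This conversion is why exp_cov should always be set below the observed base coverage when k is large relative to the read length.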
A BLAST match was considered to describe the IR if its length was more than 500 bp, its identity more than 95%, and the two sequences constituting the match were reverse complements of each other. The IR copy situated at the end of the plastome sequence was removed. To detect repeats, Vmatch 2.2.1 (http://www.vmatch.de/, last accessed January 13, 2014) was used. We searched for repeats longer than or equal to 30 bp, with a similarity of no less than 90% and no more than 10 differences in total (which can arise from mismatches, insertions, and deletions). Both direct and inverted repeats were searched for, without limitations on the maximal distance between two repeat instances and not allowing two repeat instances to overlap (-l 30 1 -identity 90 -e 10 -seedlength 10 -d -p). The estimation of syntenic blocks was performed with mauveAligner from Mauve 2.3.1 (Darling et al. 2010). The minimal weight of a colinear block to be considered was set to 300, and the seed size was nine nucleotides. Inversions of a whole single-copy region were not treated as rearrangements because it has been demonstrated that chloroplast DNA exists in two forms relative to the orientation of the SSC versus the LSC (Palmer 1983; Martin et al. 2013). In some cases (for small blocks and/or regions with low sequence conservation), the estimates of the number of syntenic blocks can be inaccurate. To optimize the alignment, a reference plastome was chosen for each evolutionary lineage (Amborella trichopoda for basal angiosperms, Liriodendron tulipifera for magnoliids, Nandina domestica for basal eudicots, Arabidopsis thaliana for rosids, Nicotiana tabacum for asterids, and Acorus calamus for monocots), and all sequences from representatives of that group were aligned against it. All reference plastomes are completely colinear with one another (with the exception of minor shifts of the IR-SC border). Phylogenetic analysis was performed using a set of sequences of 37 protein-coding genes from 93 angiosperm plastid genomes.
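The criteria used above to call a self-BLAST match an IR (length over 500 bp, identity over 95%, reverse-complement orientation) can be written as a simple predicate. The sketch below is illustrative and operates on already-extracted match sequences rather than real BLAST output:

```python
def revcomp(seq):
    """Reverse complement of an uppercase DNA string."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def is_ir_match(length, identity, seq_a, seq_b):
    """Apply the IR criteria described above: the match must be longer
    than 500 bp, have identity above 95%, and the two copies must be
    reverse complements of each other."""
    return length > 500 and identity > 95.0 and seq_a == revcomp(seq_b)
```

In practice one would parse these three values from the tabular BLAST output and check orientation via the reported coordinates rather than the raw sequences.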
We considered only plastid genes present both in Petrosavia and in the other plants. Nucleotide sequences were aligned according to the corresponding amino acid alignment produced by MUSCLE (Edgar 2004), and frameshift mutations were corrected manually. The most variable and gap-rich positions were excluded from the alignment using the GBLOCKS program (Castresana 2000). We used the "softest" settings, which reduced the 52,638 positions of the nucleotide alignment to 30,423. Phylogenetic trees were reconstructed using the maximum likelihood approach as implemented in RAxML (Stamatakis 2006) for both the nucleotide and amino acid alignments. The GTR + G model was selected by the Akaike information criterion (AIC) in Modeltest (Posada and Crandall 1998) for the nucleotide sequences, and the JTT + F + G model was selected by the AIC in ModelGenerator (Keane et al. 2006). ML branch support was assessed via 100 nonparametric bootstrap pseudoreplicates, using the "rapid" bootstrap approach. Comparison of relative nucleotide substitution rates was performed using the GRate program (Müller 2003). The topology of an NJ tree rooted with A. trichopoda and the nucleotide substitution model selected in Modeltest were used.
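The nonparametric bootstrap used for branch support builds each pseudoreplicate by resampling alignment columns with replacement. A minimal stdlib sketch of drawing one pseudoreplicate (toy alignment; RAxML implements this internally and far more efficiently):

```python
import random

def bootstrap_columns(alignment, rng):
    """Draw one bootstrap pseudoreplicate: sample alignment columns
    with replacement and rebuild each sequence from the sampled columns.

    alignment: dict mapping taxon name -> aligned sequence (equal lengths)
    rng: a random.Random instance, seeded for reproducibility
    """
    n_cols = len(next(iter(alignment.values())))
    picks = [rng.randrange(n_cols) for _ in range(n_cols)]
    return {name: "".join(seq[i] for i in picks)
            for name, seq in alignment.items()}
```

A tree is then inferred from each pseudoreplicate, and the fraction of replicates recovering a given clade is its bootstrap support.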
v3-fos-license
2014-10-01T00:00:00.000Z
2007-09-01T00:00:00.000
5658285
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.3201/eid1309.070021", "pdf_hash": "fc83f5d2fe41976926a065fd8469cc2121dfb1a2", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2050", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "sha1": "37585b9aaf53fa4e8abd4bdd9fa7f2dfc15578ee", "year": 2007 }
pes2o/s2orc
Anaplasma platys in Dogs, Chile We conducted a 16S rRNA nested PCR for the genus Ehrlichia and Ehrlichia spp. with blood samples from 30 ill dogs in Chile. Phylogenetic analysis was performed by using groESL gene amplification. We identified Anaplasma platys as 1 of the etiologic agents of canine ehrlichiosis. Ehrlichioses are recognized as important emerging tickborne diseases in humans and wild and domestic animals. The brown dog tick, Rhipicephalus sanguineus, is the main tick that infests dogs in Chile (1). This tick species is a vector of Ehrlichia canis and has been implicated, but not confirmed, as a vector of Anaplasma platys (2). Serologic and clinical evidence of canine ehrlichiosis and serologic evidence of human ehrlichiosis have been reported in Chile (3,4). The purpose of this study was to identify the etiologic agent of canine ehrlichiosis in Chile. The Study Blood samples were obtained from 30 pet dogs seen in a private veterinary clinic in Santiago, Chile, with tick infestation and clinical signs compatible with ehrlichiosis (hemorrhagic manifestations and thrombocytopenia). We performed a nested PCR to amplify a portion of the 16S rRNA gene by using specific primers for the genus Ehrlichia and for Ehrlichia spp. DNA was extracted from 300 μL of whole blood by using the Wizard Genomic DNA Purification kit (Promega, Madison, WI, USA). For Ehrlichia genus-specific PCR, 2.5 μL of DNA was amplified by using outer primers EHR-OUT1 and EHR-OUT2 and inner primers GE2F and EHRL3-IP2 in 1 reaction with a final volume of 25 μL (5) (Table 1).
The first-round amplification included 20 cycles of denaturation at 94°C for 45 s, annealing at 72°C for 1.5 min, and chain extension at 72°C for 1.5 min. The second-round amplification included 50 cycles of denaturation at 94°C for 45 s, annealing at 50°C for 1 min, and chain extension at 72°C for 1 min, followed by a final extension at 72°C for 5 min. Amplification products were analyzed by agarose gel electrophoresis. The expected size of the amplification product was 120 bp. A. phagocytophilum DNA was used as a positive control (provided by Didier Raoult). For Ehrlichia spp.-specific amplification, we used the same set of outer primers for Anaplasmataceae and specific inner primers for A. phagocytophilum (6). The Ehrlichia genus PCR resulted in the expected DNA band in 6 of 30 dogs (dogs 7, 12, 17, 19, 23, and 25). These 6 samples were positive only for A. platys, showing the expected 151-bp product, and negative for other species tested (Figure 1, panel A). A. platys PCR was also conducted on the remaining 24 Ehrlichia-negative samples; none were positive. DNA obtained from 3 16S rRNA PCR products (dogs 7, 17, and 25) was purified by using a commercial kit (Rapid Gel Extraction System; Marligen Biosciences, Ijamsville, MD, USA) and sequenced twice with an ABI 3100 genetic analyzer (Model 3100; Applied Biosystems, Foster City, CA, USA). The 16S rRNA sequences obtained were compared by using BLAST (www.ncbi.nlm.nih.gov/blast) with sequences available in GenBank. Sequences obtained were similar to that of A. platys strain Okinawa 1 (GenBank accession no. AF536828), with similarities of 98%, 95%, and 98%, respectively. GenBank accession nos. for 16S rRNA sequences of A. platys strains obtained in this study are DQ125260 and DQ125261, which correspond to strains from dogs 7 and 17, respectively. For phylogenetic analysis, the groESL gene of A. platys was amplified from samples positive for A.
platys 16S rRNA that had sufficient amounts of DNA (dogs 17, 23, and 25) and from 1 negative sample (dog 13). Reactions contained 2 μL of purified DNA as template in a total volume of 25 μL. Amplifications contained 1.25 U Taq DNA polymerase (Invitrogen, Carlsbad, CA, USA), 3 mmol/L MgCl2, 2.5 mmol/L deoxynucleotide triphosphates (Invitrogen), and 0.2 pmol/L of primers EEgro1F and EEgro2R (8) (Table 1). DNA was denatured by heating at 95°C for 10 min. PCR amplification included 40 cycles of denaturation at 95°C for 1.5 min, annealing at 52°C for 2 min, and extension at 72°C for 1.5 min, followed by a final extension at 72°C for 10 min. For nested amplifications, 1 μL of primary PCR products was used as the template in a total volume of 25 μL. Reaction conditions were the same as for primary amplifications. The primers used were SQ3F, SQ5F, SQ4R, and SQ6R (9) (Table 1). PCR products were analyzed by 1.5% agarose gel electrophoresis. We amplified 3 overlapping fragments (790, 1,170, and 360 bp) in 3 16S rRNA-positive samples (Figure 1, panel B). These DNAs were purified by using a commercial kit (Rapid Gel Extraction System; Marligen), sequenced, and analyzed for phylogenetic relationships. Multiple alignment and analysis was performed with the ClustalW program (www.ebi.ac.uk/clustalw). Calculation of distance matrices and construction of a phylogenetic tree were made with MEGA 3.1 software (www.megasoftware.net). A phylogenetic tree was constructed by the neighbor-joining method, and distance matrices for the aligned sequences were calculated by using the Kimura 2-parameter method. Stability of the tree was estimated by bootstrap analysis of 1,000 replications. A final sequence of 686 bp obtained from the overlapping fragments was used for comparison and showed 100% identity between the 3 Chilean sequences and 99.8% similarity with sequences of the A. platys groESL gene deposited in GenBank (Table 2).
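The Kimura 2-parameter correction used for the distance matrices can be sketched in a few lines of Python. This is a generic illustration of the formula (K = -1/2 ln[(1 - 2P - Q)√(1 - 2Q)], with P and Q the transition and transversion fractions), not code from the study.

```python
import math

# Generic sketch of the Kimura 2-parameter distance used for the NJ tree
# in the text: P = transition fraction, Q = transversion fraction.
PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def k2p_distance(seq1, seq2):
    """K2P distance between two equal-length, gap-aligned sequences."""
    assert len(seq1) == len(seq2)
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    n = len(pairs)
    transitions = sum(1 for a, b in pairs if a != b and
                      ({a, b} <= PURINES or {a, b} <= PYRIMIDINES))
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    P, Q = transitions / n, transversions / n
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

# One A<->G transition in four sites: P = 0.25, Q = 0.
print(round(k2p_distance("ACGT", "GCGT"), 4))  # 0.3466
```

MEGA applies the same correction site-pair by site-pair before the neighbor-joining step described in the text.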
Phylogenetic relationships of Chilean A. platys strains with other Anaplasmataceae species are shown in Figure 2. The GenBank accession no. for the groESL gene sequence of A. platys is EF201806 (corresponding to dogs 17, 23, and 25). Conclusions We identified A. platys DNA in the blood of 6 dogs with clinical signs indicative of ehrlichiosis. These findings support the conclusion that A. platys is an etiologic agent of canine ehrlichiosis in Chile. Since its first report in the United States in 1978 (10), A. platys has been described in several countries as the etiologic agent of cyclic thrombocytopenia in dogs. A tick vector of A. platys has not been determined, although R. sanguineus is the most suspected species (2). Because R. sanguineus is the only tick species that infests dogs in Santiago (1), our results support the conclusion that this species is the vector of A. platys in Chile. A wide range of clinical manifestations of canine cyclic thrombocytopenia has been described. Cases from the United States have been described as mild or asymptomatic (10), whereas cases from Spain have more severe symptoms (11), which also seems to be the case in Chile. This variability in clinical symptoms of infection has not been clearly associated with strain variations (11)(12)(13). Low diversity was observed when groESL gene sequences of Chilean strains were compared with other A. platys strains available in GenBank. This finding has also been observed in strains from different geographic origins (13). Recent studies have shown more genetic variability when sequences of the gltA gene were used (11,12). Evidence of the zoonotic potential of A. platys is scarce. In Venezuela, a few symptomatic human cases have been diagnosed since 1992 by the presence of platelet morulae in blood smears (14). Monocytic and platelet morulae were reported in a 17-month-old girl with fever and rash (15). However, none of these cases have been confirmed by molecular assays.
Further studies that investigate the pathogenic and zoonotic role of A. platys should be conducted.
v3-fos-license
2019-04-04T13:59:21.238Z
2019-04-03T00:00:00.000
92996415
{ "extfieldsofstudy": [ "Medicine", "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-019-41972-x.pdf", "pdf_hash": "20085ee7f903f746521e0d26025c33a086db8e6f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2054", "s2fieldsofstudy": [ "Biology" ], "sha1": "07f02785ea486764be2ddb6ae2e04435bf2257c2", "year": 2019 }
pes2o/s2orc
New type of interaction between the SARAH domain of the tumour suppressor RASSF1A and its mitotic kinase Aurora A The tumour suppressor protein RASSF1A is phosphorylated by Aurora A kinase, thereby impairing its tumour suppressor function. Consequently, inhibiting the interaction between Aurora A and RASSF1A may be used for anti-tumour therapy. We used recombinant variants of RASSF1A to map the sites of interaction with Aurora A. The phosphorylation kinetics of three truncated RASSF1A variants has been analysed. Compared to the RASSF1A form lacking the 120-residue-long N-terminal part, the Km value of the phosphorylation is increased from 10 to 45 μM upon additional deletion of the C-terminal SARAH domain. On the other hand, deletion of the flexible loop (Δ177–197) that precedes the phosphorylation site/s (T202/S203) results in a reduction of the kcat value from about 40 to 7 min−1. Direct physical interaction between the isolated SARAH domain and Aurora A was revealed by SPR. These data demonstrate that the SARAH domain of RASSF1A is involved in the binding to Aurora A kinase. Structural modelling confirms that a novel complex is feasible between the SARAH domain and the kinase domain of Aurora A. In addition, a regulatory role of the loop in the catalytic phosphorylation reaction has been demonstrated both experimentally and by structural modelling. The tumour suppressor gene Ras-association domain family 1 isoform A (RASSF1A) is frequently silenced in a wide range of cancers. The exact mechanism by which RASSF1A exerts its tumour suppressor effects has not been clarified [1][2][3]. The RASSF1A protein is involved in three important cellular processes: microtubule stability, mitosis and induction of apoptosis. Loss of function of RASSF1A leads to accelerated cell cycle progression and resistance to apoptotic signals, resulting in increased cell proliferation. Thus, the development of targeted drugs to restore RASSF1A function is a desirable goal.
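The kinetic contrast quoted above (Km rising from 10 to 45 μM when the SARAH domain is deleted; kcat dropping from about 40 to 7 min−1 when the loop is deleted) can be illustrated with a minimal Michaelis-Menten sketch. The RBD kcat below is an assumption (set equal to the ΔN value; the text only says it increases slightly), and the substrate concentration is an arbitrary example.

```python
# Minimal Michaelis-Menten sketch using the kinetic parameters quoted in
# the text (Km in uM, kcat in 1/min). The RBD kcat is an assumption, set
# equal to the dN value for illustration.

def mm_rate(kcat, km, s):
    """Initial phosphorylation rate per enzyme (1/min) at [S] = s uM."""
    return kcat * s / (km + s)

variants = {
    "dN":       {"kcat": 40.0, "km": 10.0},  # N-terminally truncated RASSF1A
    "RBD":      {"kcat": 40.0, "km": 45.0},  # SARAH deleted (kcat assumed)
    "dN-dloop": {"kcat": 7.0,  "km": 10.0},  # loop 177-197 deleted
}

s = 10.0  # uM, illustrative concentration
for name, p in variants.items():
    print(name, round(mm_rate(p["kcat"], p["km"], s), 2))
```

At this concentration the SARAH deletion slows the reaction mainly through weaker binding (higher Km), whereas the loop deletion slows it directly through the lower kcat.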
In order to achieve this, the phosphorylation status of RASSF1A, sensitively influencing its various functions should be controlled. RASSF1A is known to be phosphorylated by certain cellular kinases, such as the mitotic kinase Aurora A 4,5 or protein kinase A 6 . Phosphorylation of RASSF1A by Aurora A on Thr 202 and/or Ser 203 disrupts its association with the microtubule network, thereby allowing mitotic progression 4 . The Aurora A mediated phosphorylation also suspends the mitotic blockade caused by RASSF1A through interaction with the complex of Anaphase-Promoting Complex and Cell division cycle protein 20 (APC/CCdc20) 5 . Furthermore, the activated association of APC/CCdc20 ubiquitylates RASSF1A, priming it for degradation. Thus, Aurora A mediated phosphorylation of RASSF1A promotes mitotic progression by causing APC/CCdc20 activation and subsequent degradation of RASSF1A 7 . The mitotic Aurora A kinase is often targeted by specific inhibitors as potential drugs against cancer [8][9][10][11] . However, inhibiting a multifunctional protein might cause disruption of its other essential physiological functions. For this reason, targeting of a particular protein-protein interaction, such as Aurora A kinase-RASSF1A enzyme-substrate interaction could be the solution. Although there are numerous structural and functional studies with Aurora A kinase [12][13][14][15] , information on the isolated RASSF1A and its functional complex with Aurora A is still lacking. Up to now, interaction of RASSF1A with Aurora A kinase has only been demonstrated qualitatively by pull-down experiments 4,16 . In this work, we aim to quantitatively characterise the enzyme-substrate interaction between the isolated Aurora A kinase domain and RASSF1A using an in vitro kinetic phosphorylation assay. We found that the isolated full-length RASSF1A exhibited an extremely strong tendency for aggregation. This may be due to the long unstructured N-terminal region ( Fig. 
1) which otherwise may be involved in fuzzy protein-protein interactions 17,18 . Because our study required well-defined soluble proteins, we have expressed a truncated mutant of RASSF1A by deleting the N-terminal 120 residues, yielding the fragment ΔN (Fig. 1). Although this protein fragment is identical to the C-terminal part of RASSF1C, another important member of the RASSF1 family of proteins 3,19 , there are also crucial differences between RASSF1A and RASSF1C which prohibit generalisation of our results to both these proteins. First, while the in vivo phosphorylation of RASSF1A by Aurora A kinase is well documented 5 , no similar findings have been reported for RASSF1C. Second, there is accumulating evidence in favour of the oncogenic character of RASSF1C, in contrast to the tumour suppressor effect of RASSF1A 20 . Correspondingly, the in vivo spatial-temporal localisations of the two RASSF1 proteins (A and C) are probably entirely different. The ΔN fragment consists of the Ras-binding domain (RBD), where the phosphorylation site is located, and the attached C-terminal Sav/Rassf/Hpo (SARAH) domain, a domain often involved in protein-protein interactions 2 . Additional deletion of the SARAH domain yielded RBD, which was also examined in order to test the contribution of SARAH domain to the catalytic complex formation with Aurora A. Another aspect of our kinetic studies was to test the previously assumed role 21 of the flexible loop (residues 177-197) preceding the phosphorylation site/s (T202/S203) in the functional complex formation with Aurora A. For this purpose, the loop-deleted variant of ΔN (i.e. ΔN-Δloop) was also expressed and studied. All these truncated variants are schematically illustrated in Fig. 1. Based on detailed kinetic analysis of phosphorylation of these RASSF1A constructs by Aurora A and on structural modelling, we make an attempt to localize critical protein-protein interaction site(s) and confirm this by more specific binding studies. 
Such sites may facilitate the design of small-molecular-weight specific inhibitors to prevent unwanted phosphorylation of RASSF1A without inhibiting the kinase activity of Aurora A. Results The dimerization state of the different RASSF1A variants. It is known that SARAH domain containing proteins can form homo- and heterodimers through this domain 22,23. To determine if the RASSF1A variants used in this study form dimers, we performed analytical size-exclusion chromatographic experiments. Since the dimerization state of proteins depends on their concentration, the ∆N and RBD constructs were diluted to a series of different concentrations. These samples were then injected onto a Superose 6 column. The resulting chromatograms are shown in Fig. 2a,b. As a control experiment, we also carried out chromatography with the separately expressed SARAH domain of RASSF1A (Fig. 2c). The protein samples of ∆N and RBD eluted in a single major peak, with some inhomogeneities detectable at higher molecular masses, indicating their propensity for aggregation. The SARAH domain construct also eluted as a single peak. The chromatographic column was calibrated using a series of proteins of known molar mass. The calibration data were then used to determine the molecular masses corresponding to each peak (Table 1). In the case of RBD, all three samples with different concentrations eluted at the same volume (Fig. 2a, marked by the vertical line), closely corresponding to the molecular mass of the MBP-tagged RBD (i.e. 63 kDa). However, the elution volume for the ∆N and SARAH domain constructs turned out to depend on the initial sample concentration, with the higher-concentration samples eluting at lower volumes (Fig. 2b,c), corresponding to higher average molecular masses, as illustrated in Fig. 2d. [Figure 2 caption: All proteins eluted in one major peak; in the case of the RBD and ∆N variants, some inhomogeneities appear at higher molecular masses. The elution volumes of the main peaks are marked by vertical lines; these were the same for all RBD samples, but concentration-dependent for ∆N and the SARAH domain. (d) Apparent molecular weights corresponding to each peak (determined using the calibration data) are plotted against the initial concentrations and also presented in Table 1. The apparent Mw of RBD (✖) is very similar to that of the monomeric protein (63 kDa) and independent of its concentration; in contrast, the apparent Mw of ∆N (•) and SARAH (▲) are both shifted from the Mw of the monomer (69 and 51 kDa, respectively) towards higher molar weights with increasing concentration.] These data show that ∆N exists as an equilibrium mixture of its monomeric and dimeric forms, and that this dimerization occurs via the SARAH domain. This equilibrium should be relatively fast, since the two states cannot be separated using gel filtration. It is also worth noting that dimers of both ∆N and the SARAH domain elute at lower volumes than those expected based on their molar weights, likely owing to the elongated shape of the homodimers. Thus, the size-exclusion chromatographic experiment with RBD clearly indicates the existence of its monomeric form, at least within the investigated protein concentration range. For the ∆N variant, as well as for the isolated SARAH domain, the experiments indicate an equilibrium between monomeric and dimeric states, with association/dissociation rates faster than the rate of their possible chromatographic separation. It is evident that the dimerization ability of ∆N is due to its SARAH domain. The effect of RASSF1A deletions on the kinetics of phosphorylation by Aurora A.
To study the functional interaction between Aurora A and RASSF1A, in vitro phosphorylation experiments were performed on deletion mutants of RASSF1A. A constant concentration of Aurora A was incubated with various concentrations (up to 20 μM) of each RASSF1A mutant in the presence of MgATP partially labelled with radioactive 32P on its γ-phosphate group. By measuring the intensity of radioactive labelling incorporated into the RASSF1A substrate, the amount of phosphorylated RASSF1A could be calculated. The data yielded by these experiments are illustrated in Fig. 3a as Michaelis-Menten hyperbolic plots. However, the corresponding kinetic parameters were determined from a Lineweaver-Burk plot of the data (Fig. 3b), since this method is less affected by the lack of measurements at higher substrate concentrations. The kinetic parameters yielded by fitting these double-reciprocal plots are presented in Table 2. For RBD (which lacks the SARAH domain), the Km value of the reaction is significantly increased compared to ∆N, suggesting that the SARAH domain increases the affinity of RASSF1A for Aurora A. This finding suggests direct physical binding of the SARAH domain to the surface of Aurora A. Deletion of the SARAH domain also increases kcat slightly. A possible explanation for this effect is that, by stabilizing the E*S complex, the SARAH domain may also hinder its dissociation. This would also imply that (at least in the presence of the SARAH domain) dissociation may contribute to the rate-limiting step of the reaction, i.e. the classical fast-equilibrium Michaelis-Menten mechanism may not entirely hold. Comparison of the kinetic parameters determined for the ∆N and ∆N-∆loop variants shows no significant difference in Km, but kcat is much smaller for ∆N-∆loop.
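The Lineweaver-Burk fitting described above can be sketched as an ordinary least-squares fit of 1/v against 1/[S], whose intercept is 1/Vmax and whose slope is Km/Vmax. The data below are synthetic, generated from an assumed Km = 10 μM and Vmax = 40 (arbitrary units); they are not the measured values from the study.

```python
# Sketch of the Lineweaver-Burk (double-reciprocal) analysis: for
# Michaelis-Menten kinetics, 1/v = (Km/Vmax)*(1/[S]) + 1/Vmax, so a
# straight-line fit recovers Km and Vmax. Data here are synthetic.

def linefit(xs, ys):
    """Ordinary least squares; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

km_true, vmax_true = 10.0, 40.0          # assumed, illustrative values
s = [1.0, 2.0, 5.0, 10.0, 20.0]          # substrate concentrations (uM)
v = [vmax_true * si / (km_true + si) for si in s]

slope, intercept = linefit([1 / si for si in s], [1 / vi for vi in v])
vmax = 1 / intercept
km = slope * vmax
print(round(km, 2), round(vmax, 2))  # 10.0 40.0
```

With noiseless data the fit recovers the parameters exactly; with real data, the double-reciprocal transform weights low-concentration points heavily, which is one reason the authors complement it with an Eadie-Hofstee plot later in the text.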
These data suggest that, in contrast with the SARAH domain, the phosphorylation loop has an important catalytic role, probably by providing the flexibility required for the formation of the catalytically competent E*S complex conformation. The unchanged Km suggests that the loop itself does not contribute to the formation and stabilization of the physical interaction between RASSF1A and Aurora A. To confirm this interpretation, we performed another phosphorylation experiment in which Aurora A and RASSF1A ∆N were present at constant concentrations, and a synthetic peptide representing the loop (residues 177-197) was added to the reaction mixture in varying amounts, up to a concentration of 1 mM. In this experiment, the presence of the peptide had no detectable effect on reaction velocity, proving its inability to disturb the formation of the E*S complex (Fig. 3c). Therefore, it can be safely assumed that the loop itself does not directly bind to Aurora A. The phosphorylation kinetics of the ∆N dimer is similar to that of RBD. The kinetic data presented in the previous section show that substrate saturation cannot be reached (especially for the RBD variant) using lower concentrations of RASSF1A. Also, it seemed desirable to investigate whether dimerization of ∆N has any effect on phosphorylation kinetics. Therefore, the experiments were extended to a higher (possibly physiologically less relevant) concentration range in the case of the ∆N (up to 160 μM) and RBD (up to 45 μM) variants (Fig. 4). With ∆N, the Michaelis-Menten hyperbola fitted to measurements at lower concentrations (the same as in Fig. 3) showed a large deviation from the experimental data collected at higher concentrations (Fig. 4a). Similarly, fitting the data in the high concentration range resulted in a curve aligning very poorly with the lower-concentration data points.
The fact that the kinetic data could not be fitted by a single hyperbola over the whole concentration range suggested that phosphorylation of this mutant does not follow simple Michaelis-Menten kinetics. RBD, however, showed no such abnormal behaviour (Fig. 4b). To further examine this effect, the kinetic measurements were also visualized as an Eadie-Hofstee plot (Fig. 4c). In the case of RBD, the data can be fitted by a single straight line, but for ∆N, the graph clearly reveals a biphasic character. The curve can be fitted quite precisely by a broken linear function, with a breakpoint between 20-25 μM substrate concentration. Since RBD phosphorylation fits Michaelis-Menten kinetics well (at least in the studied concentration range), and it has been proven that this variant exists purely as a monomer, it is likely that the biphasic nature of ∆N phosphorylation kinetics is related to its dimerization state. If the monomer is dominant at lower concentrations, while the dimer is favoured at higher concentrations, it can be assumed that fitting the two linear stretches of the curve yields the (apparent) kinetic parameters for the monomer and dimer, respectively (Table 2). Interestingly, the Km for the dimeric ∆N is much higher than that determined for its monomeric state. This suggests that RASSF1A dimerization significantly hinders complex formation with Aurora A. Furthermore, the Km value measured for RBD is remarkably similar to that measured for the ∆N dimer, indicating that dimer formation through the SARAH domain reduces binding affinity towards Aurora A to a similar extent as when SARAH is completely absent. This finding provides further evidence that the SARAH domain stabilizes the Aurora A-RASSF1A complex through direct contact with Aurora A. Compared to the monomer, the dimer also shows an increase in kcat, which can be explained by faster dissociation of the E*S complex in the absence of the SARAH-Aurora A interaction.
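The Eadie-Hofstee linearisation used above plots v against v/[S]: Michaelis-Menten data fall on a single line with slope −Km and intercept Vmax, so a broken (biphasic) plot signals two kinetic regimes. The sketch below uses synthetic single-phase data with assumed Km = 10 μM and Vmax = 40 (arbitrary units), not the measured values.

```python
# Sketch of the Eadie-Hofstee linearisation: v = Vmax - Km * (v/[S]).
# For ideal single-phase Michaelis-Menten data all points lie on one
# line, so Km and Vmax can be read off any two of them; a biphasic
# (broken-line) plot, as seen for dN in the text, indicates two regimes.

km_true, vmax_true = 10.0, 40.0          # assumed, illustrative values
s = [1.0, 2.0, 5.0, 10.0, 20.0, 40.0]    # substrate concentrations (uM)
v = [vmax_true * si / (km_true + si) for si in s]

x = [vi / si for vi, si in zip(v, s)]    # v / [S]
km = -(v[1] - v[0]) / (x[1] - x[0])      # minus the slope
vmax = v[0] + km * x[0]                  # the intercept
print(round(km, 2), round(vmax, 2))  # 10.0 40.0
```

Fitting each linear stretch of a biphasic plot in the same way is what yields the separate (apparent) Km and Vmax values for the monomer and the dimer reported in Table 2.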
Binding of the various RASSF1A forms to Aurora A. To obtain direct evidence of the role of the SARAH domain in the formation of the complex between RASSF1A and Aurora A, we carried out SPR (Surface Plasmon Resonance) experiments to test the binding of the separately expressed SARAH domain to the immobilised Aurora A kinase domain (Fig. 5a). [Figure 3 caption (leaked into the text; refers to Table 2): The data reveal that deletion of the loop (∆N-∆loop) significantly reduces the kcat parameter of phosphorylation compared to ∆N, without affecting Km. In contrast, Km for the RBD variant is significantly higher than that of ∆N, while kcat is also increased slightly. This can also be seen on the Lineweaver-Burk plot (b), where the x- and y-axis intercepts represent −1/Km and 1/Vmax, respectively. These results show that the SARAH domain is important for the physical binding to Aurora A, while the loop is involved in the phosphorylation reaction. (c) Phosphorylation of the ∆N mutant in the presence of a synthetic peptide identical to the loop. The concentration of ∆N was kept at a constant 5 μM, while that of the peptide was varied between 0.2-800 μM. The extent of phosphorylation is shown relative to that detected in a negative control experiment (with no peptide added), represented by a horizontal line (100%), and plotted against the molar ratio of the peptide to ∆N. No significant effect of the peptide on the enzyme reaction could be detected, showing that the loop in itself does not bind directly to Aurora A.] Since all protein expression and purification were only successful with an N-terminal MBP-tag, we also carried out control binding experiments with MBP alone.
While, as expected, only very weak (possibly aspecific) interaction of MBP with the Aurora A kinase domain could be detected, leading to a negligible binding signal (not shown), the SARAH domain was found to be a specific binding partner of the Aurora A kinase domain. Unfortunately, our binding experiments were partially disturbed by irreversible aggregation on the binding surface, resulting in inability to reach the binding equilibria and determine the K d values. Similar effects were also observed with the other two investigated RASSF1A forms, i.e. for ΔN and RBD (Fig. 5b). The definitive observation is that both the formation and the dissociation of the complex with Aurora A are significantly slower for ΔN than for RBD (Fig. 5b), possibly due to the additional interactions formed by ΔN through its SARAH domain. Naturally, the isolated SARAH domain exhibits the smallest binding signal (Fig. 5a) compared to ΔN and RBD (Fig. 5b). Structural modelling of the Aurora A -RASSF1A complex. To assist with the interpretation of the kinetic and binding results above, we have carried out modelling studies to determine a possible mode of molecular interaction between Aurora A and its substrate RASSF1A. Since our experiments were restricted to the truncated variants of RASSF1A, i.e. ∆N, ∆N-∆loop and RBD, our modelling was also carried out with these variants. Models were built for the RBD-Aurora and ∆N-Aurora complexes based on predicted structures for the RASSF1A variants and experimental structures of Aurora A. Figure 6a illustrates that the unstructured loop (coloured black) that sequentially precedes the consensus phosphorylation motif (RRTSF) does not interact at all with Aurora A kinase, i.e. the loop itself possibly does not contribute to the formation of the E*S complex. This picture is fully consistent with the kinetic results, i.e.
the identical Km values obtained for the loop-deletion mutant (∆N-∆loop) and ∆N (Fig. 3 and Table 2). On the other hand, the observed decrease of the kcat value upon deletion of the loop could be explained by the loss of flexibility provided by the loop to the nearby phosphorylation site. Next, the predicted structure of the complex of Aurora A and ∆N (consisting of RBD and the C-terminal SARAH domain) is shown in Fig. 6b. Here, the modelling was performed in agreement with our binding studies (Fig. 5), which showed that the SARAH domain binds to the Aurora A kinase domain, and the goal of the modelling was to explore the possible binding sites. Although we were not able to definitively identify a particular binding site, the modelling results indicate that the binding is possible, and that the binding region is the shaded area of the C-terminal lobe of the Aurora A kinase domain indicated in Fig. 6b. This appears to be a unique type of interaction that could possibly be formed only with the monomeric form of ∆N. It is possible that ∆N by itself exists mainly in a dimeric form, as demonstrated by our size-exclusion chromatography experiments (Fig. 2), which is stabilized through self-interactions between the long helical SARAH domains. Indeed, it has been demonstrated experimentally that SARAH can easily form self-associated dimers, which consist of two well-ordered helices 24. This is also confirmed by our size-exclusion chromatographic experiments with the isolated SARAH domain (Fig. 2c,d). Upon dissociation of the dimer, however, the helix becomes less ordered and breaks up into 2 or 3 shorter helices, as suggested by modelling 23 and experimental 25 studies. Our experiments and modelling suggest that this less-structured SARAH helix of the monomeric form of RASSF1A can bind to the kinase domain of Aurora A, as shown in Fig. 6b. The conformation of the SARAH domain may be stabilized upon binding.
Whether there is a well-defined binding site for the SARAH domain on the surface of Aurora A, or whether the complex remains "fuzzy" 18,26, is a question for future investigations. RBD shows faster association to, and also much faster dissociation from, Aurora A than its ∆N counterpart. It is likely that the SARAH domain stabilizes the Aurora A-RASSF1A complex, resulting in slower dissociation, while it might also slightly hinder its formation. The higher signal maximum measured for ∆N shows that a larger amount of this variant was bound to the surface than of RBD, probably due to the dimerization of ∆N, in contrast to RBD (cf. Fig. 2). As for the SARAH domain, it binds to Aurora A in significantly smaller quantities than both ∆N and RBD, even at much higher concentrations. This shows that the affinity of SARAH towards Aurora A is much weaker than that of these other RASSF1A mutants, making it an unlikely candidate for a primary Aurora A binding site. Discussion This work presents the first detailed enzyme kinetic analysis of phosphorylation of the tumour suppressor RASSF1A by human Aurora A kinase. It was found that the relevant fragment (ΔN, after deleting 120 residues from the mainly disordered N-terminal part, cf. Fig. 1) is phosphorylated by the Aurora A kinase domain with the kcat and Km values summarised in Table 2. These values appear plausible, as the kcat values are of about the same magnitude as those previously obtained with Aurora A on various synthetic peptide substrates 12,14,15,[27][28][29]. As for the Km values, the data are scarce in the literature, but the value obtained with the peptide substrate Kemptide was around 100-300 μM, indicating a considerably weaker enzyme-substrate interaction compared to the fragment ΔN. These data and our experiments show that RASSF1A is a more specific substrate for Aurora A compared to the synthetic peptides, as expected.
It is known that the phosphorylation site/s (Thr202/Ser203) is/are located next to a disordered loop on the RBD domain of RASSF1A (cf. Figs 1 and 6a). The X-ray structure of the RBD domain of the homologue RASSF5 (NORE1A) protein has been determined at 1.8 Å resolution, in which the disordered loop could not be resolved 21. The authors of this paper initially assumed, but finally ruled out, that this loop is required for the binding to Ras. Instead, the authors further assumed that in the case of the homologue RASSF1A, the loop might be involved in the binding to the kinase Aurora A. We have tested this reasonable assumption by studying the kinetics of phosphorylation by Aurora A of a loop-deleted mutant of the ΔN fragment of RASSF1A (the ΔN-Δloop construct) (Fig. 3a,b). We found a large decrease of the kcat value without any change in Km (Table 2). These results suggest that the flexible loop of the RBD domain of RASSF1A has an important role in the phosphorylation step catalysed by Aurora A. Possibly, it ensures the proper conformation of the phosphorylation site, optimal for the phospho-transfer from the γ-phosphate of ATP bound to Aurora A. On the other hand, the loop itself possibly does not contribute to the formation of the E*S complex, since Km is not affected by its deletion. These conclusions are further supported by our experiment with a synthetic peptide possessing an amino acid sequence identical to that of the deleted loop. We have not found any inhibitory effect of this peptide on the phosphorylation reaction (Fig. 3c). Our structural modelling studies on the complex of the Aurora A kinase domain and the ΔN part of RASSF1A (Fig. 6a), in fact, illustrate the absence of interaction between the loop and the kinase domain, which is fully consistent with these findings. A further remarkable finding of this work is the existence of a relatively fast dimer-monomer equilibrium in the case of the investigated N-terminal truncated mutant (ΔN) of RASSF1A (cf.
Fig. 2b). This is surprising because an earlier study suggested that the N-terminal part of RASSF1A is required for dimerization 30 . Our gel-chromatographic experiments (Fig. 2a,b) clearly indicate a relatively fast dimerization equilibrium only for the ΔN construct but not for RBD. Thus, dimerization of ΔN most probably occurs through the interaction of the SARAH domain, which is absent in RBD. Our size-exclusion chromatography experiments with the isolated SARAH domain confirm this suggestion (Fig. 2c,d). In fact, there are several examples of dimerization of various proteins/enzymes possessing SARAH domains, including RASSF1A 22,23,25,[31][32][33][34][35][36] . The dimerization equilibrium detected in the present work, however, seems to be much slower than the relatively short time-scale of the kinetic measurements; thus, both the monomeric and the dimeric states of ΔN could be characterised kinetically as separate species (Fig. 4). Unfortunately, no data on the physiological concentration of RASSF1A are available in the literature, but our data indicate a weaker E*S interaction, with a significantly higher Km value, in the case of the dimeric form of RASSF1A ΔN (Table 2). This could be explained by the lack of an available SARAH domain in the dimer. It seems reasonable, therefore, that the SARAH domain is responsible for the specific interaction with the kinase domain of Aurora A. The interaction between the SARAH domain of RASSF1A and the Aurora A kinase represents a novel interaction type: known SARAH-mediated interactions form by SARAH-SARAH association 24,25 , whereas in our case a SARAH domain is found to bind to a globular kinase domain. Our structural modelling illustrates a possible mode of this interaction (Fig. 6b). This model suggests that the SARAH domain tilts towards Aurora A, binds to it in a kinked conformation, and locks it in place, effectively stabilizing the E*S complex. 
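The monomer-dimer equilibrium discussed above can be illustrated numerically. The sketch below computes the fraction of protein chains in each state for a given dissociation constant; the Kd and concentration values are hypothetical, since neither is reported here.

```python
import math

def monomer_dimer_fractions(c_total, kd):
    """Chain fractions for a 2M <-> D equilibrium with Kd = [M]^2 / [D].
    Solves the mass balance [M] + 2[D] = c_total analytically
    (c_total and kd in the same concentration units)."""
    m = kd * (math.sqrt(1.0 + 8.0 * c_total / kd) - 1.0) / 4.0
    d = m * m / kd
    return m / c_total, 2.0 * d / c_total

# Illustration with hypothetical numbers (e.g. μM):
f_mono, f_dim = monomer_dimer_fractions(c_total=10.0, kd=1.0)
```

At total concentrations far below Kd the protein is essentially monomeric, and far above Kd it is predominantly dimeric, which is why the observed kinetic behaviour can depend on the (unknown) physiological concentration.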
In fact, kinking of the SARAH helix has also been suggested by a previous modelling study 23 . The exact mode of binding, including the question of whether Aurora A has a well-defined binding site for the SARAH domain or whether the binding is more "fuzzy" 18,26 , remains to be investigated.

Methods

Materials. The vector used for the expression of proteins was a modified version of the commercially available pET24c vector. The isotope-labelled γ-32P-ATP was purchased from Izotóp Intézet Kft. (Hungary); the unlabelled ATP was a Sigma-Aldrich product. A synthetic peptide with the sequence corresponding to the deleted loop (cf. below) was synthesized by GenicBio Ltd. All other chemicals used were commercially available, high-purity products.

Mutagenesis of RASSF1A. The following truncated variants of RASSF1A were used in the experiments: ΔN (residues 121-340), RBD (residues 121-290) and ΔN-Δloop (lacking the phosphorylation loop, cf. below). The SARAH domain (residues 291-340) was also expressed separately. Aurora A was likewise produced as a truncated construct, consisting of only its kinase domain (residues 107-403). The genes encoding the truncated protein mutants were created by using the polymerase chain reaction (PCR) to amplify the appropriate regions of the respective wild-type genes. The primers contained restriction cleavage sites, so the PCR products could be cloned into a modified pET24c vector with a maltose-binding protein (MBP)-coding sequence upstream and a 6xHis-tag-coding sequence downstream of the multiple cloning site. The RASSF1A loop deletion (ΔN-Δloop) was introduced into the ΔN construct by whole-plasmid PCR. The primers were designed to anneal to the sequences bordering the deletion region on both sides while lacking the target sequence itself (positions coding for residues 177-197, i.e. the sequence PSSKKPPSLQDARRGPGRGTS). 
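The deletion design above can be sketched in code. The construct coordinates (ΔN spanning residues 121-340, loop at 177-197, sequence PSSKKPPSLQDARRGPGRGTS) are taken from the text; the surrounding dummy sequence is a placeholder, not the real RASSF1A sequence.

```python
# Hypothetical illustration of deriving the ΔN-Δloop sequence.
LOOP = "PSSKKPPSLQDARRGPGRGTS"  # 21 residues, positions 177-197 (full-length numbering)

def delete_region(seq, start, end, offset):
    """Remove residues start..end (1-based, full-length numbering) from a
    construct whose first residue corresponds to full-length position `offset`."""
    i, j = start - offset, end - offset + 1  # 0-based slice bounds
    return seq[:i] + seq[j:]

# Dummy ΔN sequence (residues 121-340): filler 'A's with the real loop
# spliced in at positions 177-197.
dn = "A" * (177 - 121) + LOOP + "A" * (340 - 197)
dn_dloop = delete_region(dn, 177, 197, offset=121)
```

The arithmetic confirms the design: the ΔN construct is 220 residues long, and excising the 21-residue loop yields a 199-residue ΔN-Δloop sequence.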
After the reaction finished, the mixture was treated with DpnI to remove the methylated parental plasmids, leaving the PCR products carrying the deletion intact. The sequences of all mutants were confirmed by sequencing.

Protein expression and purification. All proteins were expressed with an N-terminal MBP tag and a C-terminal 6xHis affinity tag in E. coli Rosetta 2 cells using a modified pET24c vector. Cell cultures were grown at 37 °C to an OD600 of 0.6-0.8, then cooled to 21 °C. Expression was induced by the addition of 0.4 mM isopropyl β-D-1-thiogalactopyranoside (IPTG) and continued overnight at 21 °C. Cells were harvested by centrifugation, resuspended in a buffer containing CompleteUltra protease inhibitor mix and lysed by ultrasound. The lysate was centrifuged, and the target protein was purified from the supernatant. Because all proteins were fused to an N-terminal MBP tag, they were first purified by amylose affinity chromatography; the bound target proteins were eluted with 10 mM maltose. In the case of the Aurora A kinase domain, this step was followed by nickel affinity chromatography (via the C-terminal 6xHis tag) to separate the intact protein from its degradation products; here, 250 mM imidazole was used as the eluent. All RASSF1A protein constructs were identified by SDS-PAGE according to their molecular masses of 69, 63, 67 and 51 kDa for ΔN, RBD, ΔN-Δloop and the SARAH domain, respectively (including the MBP tag). All proteins were further purified by size-exclusion chromatography on a Superose 6 column to separate the native product from aggregates. The column buffer was 25 mM HEPES, 300 mM NaCl, 3 mM DTT (DTT omitted for RBD, which lacks cysteine), at pH 7.4. The purified proteins were concentrated, and their final concentrations were determined by UV spectrophotometry using molar absorbances calculated on the basis of a previously published method 37 . 
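The concentration determination just described relies on a sequence-based molar absorbance. As an illustration, the sketch below uses the widely cited 280 nm coefficients of Pace et al. (1995) for Trp, Tyr and cystine; whether these are exactly the values of the paper's ref. 37, and the residue counts shown, are assumptions.

```python
def epsilon_280(n_trp, n_tyr, n_cystine=0):
    """Molar absorbance at 280 nm (M^-1 cm^-1) from residue counts,
    using the coefficients of Pace et al. (1995)."""
    return 5500 * n_trp + 1490 * n_tyr + 125 * n_cystine

def concentration_uM(a280, eps, path_cm=1.0):
    """Beer-Lambert law: c = A / (eps * l), returned in micromolar."""
    return a280 / (eps * path_cm) * 1e6

eps = epsilon_280(n_trp=10, n_tyr=15)  # hypothetical counts, not the real construct
c_uM = concentration_uM(0.5, eps)
```

In practice the counts would come from the actual construct sequence, including the MBP tag, since the fusion protein is what absorbs in the cuvette.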
The solutions were aliquoted, frozen in liquid nitrogen and stored at −80 °C. Cleavage of the N-terminal MBP tag by tobacco etch virus (TEV) protease, followed by cation-exchange chromatography, was also attempted for one truncated RASSF1A variant; however, the protein yield was dramatically diminished.

Analytical gel filtration of the investigated RASSF1A constructs. Samples of the different RASSF1A mutants were diluted to different concentrations in a buffer of 25 mM HEPES, pH 7.4, 300 mM NaCl. Volumes of 100 μl of these dilutions were injected onto a column packed with 30 ml of Superose 6 gel filtration medium and chromatographed by FPLC. Proteins were detected by measuring the absorbance of the eluate at 280 nm.

Kinetic assay of 32P incorporation into the RASSF1A constructs by Aurora A. To examine the initial velocities of phosphorylation of the different RASSF1A variants by Aurora A, reaction mixtures were prepared in a pH 7.4 HEPES buffer, with Aurora A kept at a constant concentration of 40 nM and the concentration of RASSF1A varied. The mixtures also contained 100 mM NaCl, 5 mM MgCl2 and 2 mM DTT. The reaction was started by the addition of 0.4 mM ATP, partly labelled with the 32P isotope on its γ-phosphate. To determine the initial velocity of phosphorylation, reaction mixtures were incubated for 2 minutes at 25 °C, and a sample was then pipetted into a reducing SDS sample buffer, terminating the reaction. The samples were boiled, and excess ATP was separated from the phosphorylated RASSF1A by SDS-PAGE. 32P incorporation was visualized by exposing the gels to a GE Healthcare storage phosphor screen and scanning the screen with a Typhoon TRIO+ scanner. The resulting image showed bands corresponding to the phosphorylated proteins, with densities related to the amount of phosphate incorporated during the enzyme reaction. 
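The initial-velocity data produced by this assay are fitted to the Michaelis-Menten model. The paper used SigmaPlot; the sketch below uses SciPy instead, with synthetic (S0, v0) data in place of the measured values, and the assumed true parameters are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """v0 = Vmax * [S]0 / (Km + [S]0)."""
    return vmax * s / (km + s)

# Synthetic data with 3% multiplicative noise; true Vmax = 2.0 μM/min, Km = 4.0 μM.
s0 = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 40.0])
rng = np.random.default_rng(0)
v0 = michaelis_menten(s0, 2.0, 4.0) * (1 + 0.03 * rng.standard_normal(s0.size))

(vmax_fit, km_fit), _ = curve_fit(michaelis_menten, s0, v0, p0=[1.0, 1.0])
kcat = vmax_fit / 0.040  # per min, assuming [E]_total = 40 nM = 0.040 μM as in the assay
```

Dividing the fitted Vmax by the total enzyme concentration gives kcat, which is how a loop deletion can lower kcat (a slower chemical step) without moving Km (unchanged E*S binding).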
The band densities were quantified by densitometry and converted to molar concentrations using a calibration standard of known amounts of fully phosphorylated RASSF1A. From these concentrations, v0 values for each reaction could be determined; the samples were taken in the initial phase of the reactions, as confirmed by the amount of substrate converted. Using the Michaelis-Menten model of enzyme kinetics, the v0 and [S]0 values were plotted and fitted using SigmaPlot (version 11.0) to determine the kinetic parameters of the enzyme reaction for each variant of RASSF1A.

Measurement of binding between RASSF1A constructs and the Aurora A kinase domain. SPR binding experiments were carried out on a Biacore X instrument to test the binding of the isolated SARAH domain and of the ΔN and RBD variants to the immobilised kinase domain of Aurora A. The Aurora A kinase domain was immobilised as the ligand on a GE Healthcare CM5 chip, and the different RASSF1A variants were used as analytes in the mobile phase. The signal was automatically corrected with that of a reference cell without any immobilised protein. All experiments were carried out in HBS-EP buffer (10 mM HEPES pH 7.4, 150 mM NaCl, 3 mM EDTA and 0.005% Tween-20) at 25 °C.

Structural modelling of the molecular interactions of Aurora A with the RASSF1A mutants.

RASSF1A model building. A homology model of ΔN RASSF1A was constructed using the I-TASSER pipeline 38 . Chain B of PDB entry 3ddc 21 , a structure of murine RASSF5, was found to be the best template, with a sequence identity of 0.58 over the aligned part and a coverage of 0.6. The estimated TM-score of the best model was 0.43. Five models were built; these differed mainly in the conformation of the 177-197 loop region. Upon visual inspection, the structure with the most plausible loop conformation was chosen for further modelling.

Constructing a model for the RASSF1A-Aurora A complex. 
Building a complex of ΔN RASSF1A with the Aurora A kinase required modelling the interaction of the substrate-binding site of Aurora A with the phosphorylated residue of RASSF1A. Because there is no Aurora A structure with a bound peptide substrate, we used the structure of cAMP-dependent protein kinase in complex with PSP20, a 20-residue phosphorylated peptide (PDB entry 4ib0 39 ), to model the kinase-peptide complex. The Aurora A structure 4dee 40 (containing an ADP molecule) was fitted onto chain A of 4ib0 using TM-align 41 , and the PSP20 peptide was copied over into the Aurora A structure. Torsion angles of a 14-residue segment of PSP20 (residues 7-20) were used as dihedral restraints for the homologous 193-206 segment of RASSF1A (containing the phosphorylation site), and a short (50 ps) vacuum molecular dynamics simulation was carried out with these restraints on our ΔN RASSF1A model. GROMACS 5.0.2 42 was used with the CHARMM27 force field with default parameters at 300 K. The purpose of this simulation was to force the 193-206 segment of ΔN RASSF1A into a structure identical to that of the 7-20 segment of PSP20 in 4ib0. Once this was achieved, the ΔN RASSF1A-Aurora A complex model was constructed by least-squares superposition of the ΔN RASSF1A 193-206 segment onto the 7-20 segment of the PSP20 peptide previously copied into the Aurora A structure. This initial complex structure was used for further simulations.

Modelling the possible binding of the RASSF1A SARAH domain to Aurora A. The complex generated in the previous step was constructed without a SARAH domain. To add a SARAH domain, we used Modeller 43 with a RASSF5 SARAH structure from an earlier modelling study 23 as a template (KS3 in Fig. 4c of that paper; coordinates obtained courtesy of Ruth Nussinov). This structure occurred most frequently in that earlier study and has a kink in the α-helix. 
To sample a large number of different SARAH domain conformations and orientations, geometric simulations were used. The FRODAN program 44 was used to generate 10,000 conformations representing a broad sampling of the conformational space. The 10 structures with the largest interface areas with the Aurora A subunit were used for further modelling. To add an ATP molecule to the models, the ADP molecule in the 1mq4 Aurora A structure 40 was replaced by the ATP taken from PDB entry 4wb5 45 (a protein kinase A structure) after superposing the kinase structures, and the Aurora A subunit in all 10 model complexes was replaced by this 1mq4-ATP complex. The 10 Aurora A-ΔN RASSF1A complexes were then subjected to 50 ns molecular dynamics simulations at 300 K in vacuum using GROMACS 2016 42 . The Gromos54a7 force field 46 was used; electrostatic interactions were treated using the particle mesh Ewald method 47 , and the LINCS algorithm was used to constrain all bonds 48 . The last structure from each trajectory was then subjected to energy minimization.
Analysis of 783 Cases of Total Laparoscopic Hysterectomy for Benign Indications: Experience from a Turkish University Hospital

Background: This study aimed to assess the results of 783 total laparoscopic hysterectomies performed in our clinic for benign reasons. Methods: This study was conducted at a tertiary hospital between January 2017 and December 2020. The results of 783 patients who underwent total laparoscopic hysterectomy for benign indications were evaluated retrospectively, with major and minor complications thoroughly analyzed. Patients' demographic characteristics were evaluated, including mean age, mean parity, body mass index (BMI), current medical diseases, previous surgeries, hysterectomy indications, operation time, uterus weight, estimated blood loss, and length of hospital stay. The ethics committee of Istanbul Kanuni Sultan Süleyman Training and Research Hospital provided the study's ethical approval (Approval No. 2021.11.290). SPSS for Windows 24.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis, and data are presented as mean, standard deviation, and ratio, with statistical significance set at p < 0.05. Preoperative and postoperative variables were compared using a paired t-test. Results: For the study's 783 patients, the average age was 50.16 years (range, 33-82), average parity was 3.26 (0-16), and average BMI was 24.37 (21-33) kg/m2. Uterine myoma was the most common reason for hysterectomy, in 244 (31.16%) patients, followed by abnormal uterine bleeding in 239 (30.52%) patients. Major complications occurred in 46 patients (5.8%) and minor complications in 42 (5.5%), for an overall complication rate of 88 (11.30%). The complication rate and operation indications were comparable to those reported in the literature. Conclusions: Although laparoscopic hysterectomy is a minimally invasive type of hysterectomy, surgeons should be aware of potential complications during the procedure. 
Early diagnosis and management of complications reduce morbidity and mortality.

Introduction

Hysterectomy is an elective gynecological surgical procedure performed worldwide [1]. It can be performed by abdominal, vaginal, laparoscopic, or robotic surgery. Vaginal hysterectomy was first performed by Recamier in 1829, and abdominal hysterectomy by Charles Clay in 1843 [2]. Laparoscopic hysterectomy was first performed by Reich et al. [3]. An increase in the rate of total laparoscopic hysterectomy (TLH) has been observed among hysterectomies performed for benign indications [4]. In a recent study in England, the rate of TLH increased from 16% to 47% of all hysterectomies performed over the last seven years, whereas the rate of abdominal hysterectomy decreased from 73% to 46% [5]. Lee et al. [6] found no difference in complication rates in a meta-analysis comparing vaginal hysterectomy (VH) and TLH. Allam et al. [7] reported fewer complications in the TLH group using the electrosurgical bipolar vessel-sealing technique than in the total abdominal hysterectomy (TAH) and VH groups. Laparoscopic hysterectomy results in less blood loss than abdominal hysterectomy. Lower rates of wound infection, shorter hospitalization times, and less workforce loss due to shorter patient recovery times have led to a rapid increase in the popularity of laparoscopic hysterectomy [8]. Currently, minimally invasive methods are recommended for hysterectomies performed for benign reasons [9]. In gynecology, uterine fibroids are very common, adversely affecting women's health and pregnancy, so it is important to choose an effective treatment. Compared with laparotomy, laparoscopic myomectomy reportedly involves less blood loss, a shorter hospital stay, a shorter recovery period and a higher pregnancy rate [10,11]. 
While laparoscopic hysterectomy is a minimally invasive procedure, surgeons must be aware of potential TLH complications and be able to recognize and manage potentially fatal TLH complications. This study aimed to assess the results of 783 total laparoscopic hysterectomies performed in our clinic for benign reasons. The demographic characteristics of the patients were assessed, including mean age, mean parity, body mass index (BMI), current medical diseases, previous surgeries, hysterectomy indications, operation time, uterus weight, estimated blood loss, length of hospital stay, and major and minor intraoperative and postoperative complications. The difference between preoperative and postoperative hemoglobin was measured. Operation time was defined as the time between the first incision in the umbilicus and the removal of the primary trocar. Uterus weight was measured with a precision scale at the pathology laboratory immediately after the procedure. All operations were performed by consultants and specialists. The length of hospital stay was measured from the day of the procedure until discharge. Patients who experienced postoperative spontaneous micturition and defecation were quickly mobilized, and patients with no significant complaints were discharged. Pelvic examinations, cervicovaginal smears, and endometrial sampling were performed preoperatively.

Materials and Methods

Patients received mechanical bowel cleansing with a rectal enema the night before the procedure. All patients received 1 g of cefazolin intravenously one hour before surgery and six hours afterward. For thromboembolism prevention, 0.4 mL of enoxaparin was administered subcutaneously eight hours before the procedure and continued at 24-hour intervals throughout hospitalization. Patients were monitored postoperatively for one month. The study's data were collected over the course of six months. 
Surgical Technique

All surgeries were performed under general anesthesia, in the dorsal lithotomy position, by the same group of surgeons. All patients had a Foley catheter inserted into the bladder and a nasogastric tube placed in the stomach. The procedures were carried out using a 10-mm 30° telescope, advanced bipolar electrocoagulation (LigaSure, Covidien, MA, USA), classic bipolar electrocoagulation (Robi bipolar, Karl Storz, Tuttlingen, Germany; unipolar hook, Karl Storz, Tuttlingen, Germany), and a uterine manipulator (Rumi II, Cooper Surgical Inc., Trumbull, CT, USA). The multiport technique was used in these operations.

Following a 5-mm vertical incision in the umbilicus, the umbilicus was lifted with laundry clamps. A Veress needle was inserted into the abdomen, pneumoperitoneum was achieved (14 mmHg pressure), and a 10-mm trocar was inserted into the abdomen. For patients with prior abdominal surgery and suspected periumbilical adhesions, the primary trocar was introduced 2-3 cm below the left subcostal border on the left midclavicular line, also known as Palmer's point, followed by the insertion of a 10-mm 30° telescope into the abdomen. The second and third incisions were made in the avascular lower quadrants of the abdomen, 3 cm medial to the right and left anterior superior iliac spines, and 5-mm trocars were inserted through these incisions. A further 5-mm trocar was inserted in the suprapubic region at the midline, 6 cm above the pubic symphysis. The round ligaments and utero-ovarian or infundibulopelvic ligaments were coagulated and cut on both sides with a LigaSure (Medtronic USA Inc., Minneapolis, MN, USA). The bladder was dissected off the cervix using blunt and sharp dissection. The uterine arteries were coagulated and cut bilaterally. The parametrial tissues around the cervix were coagulated and cut with the LigaSure, and bleeding was controlled with Robi classical bipolar electrocoagulation. The uterus was then removed through 
the uterovaginal tract. Morcellation with a scalpel was used to remove the uterus through the vaginal tract when necessary. The vaginal cuff was sutured laparoscopically with a zero-gauge V-Loc wound closure suture (Medtronic, Minneapolis, MN, USA).

Statistical Analysis

SPSS for Windows 24.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis. The data are presented as mean, standard deviation, and ratio. Statistical significance was set at p < 0.05. The Kolmogorov-Smirnov test was used to assess the normality of the distribution of continuous variables. A paired t-test was used to compare preoperative and postoperative variables.

Results

The 783 patients had a mean age of 50.16 ± 7.67 years. The mean parity was 3.26 ± 1.95 (parity was normally distributed and is given as mean and standard deviation), and the mean BMI was 24.37 ± 1.84 kg/m2. Sixty-one percent of the patients were menopausal, and 39% were of reproductive age. A total of 241 patients (30.7%) had undergone previous abdominal surgery; cesarean section (n = 135, 17.2%) was the most common previous surgery, and salpingo-oophorectomy (n = 468, 59.7%) was the most common concurrent procedure. The demographic characteristics of the patients are presented in Table 1. 
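The statistical comparison described in the methods (a normality check followed by a paired t-test on preoperative vs. postoperative values) can be sketched as follows. The hemoglobin values are simulated, shaped only loosely around the reported mean drop of about 1.49 g/dL, and SciPy stands in for SPSS.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated pre-/postoperative hemoglobin (g/dL) for 30 hypothetical patients.
pre = rng.normal(12.5, 1.0, size=30)
post = pre - rng.normal(1.49, 0.5, size=30)  # drop shaped on the reported mean

# Normality of the paired differences (Shapiro-Wilk here; the paper used
# the Kolmogorov-Smirnov test).
w_stat, p_normality = stats.shapiro(pre - post)

# Paired t-test comparing preoperative and postoperative values.
t_stat, p_value = stats.ttest_rel(pre, post)
```

Because each patient serves as her own control, the paired test compares within-patient differences rather than the two group means, which is why it is the appropriate choice for pre/post measurements.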
Major complications (Table 5) were observed in 46 (5.8%) patients, and minor complications in 42 (5.5%). We identified and managed the complications early, and only three patients (0.38%) had to be re-operated. In our study, nine (1.15%) patients were converted to laparotomy: two (0.25%) because of anatomical difficulties and widespread intra-abdominal adhesions, two (0.25%) because of intractable bleeding, two (0.25%) because of intestinal injury, and three (0.38%) because of ureteral injury. Six (0.8%) bladder injuries occurred during anterior peritoneal dissection of the broad ligament from the bladder in patients with a history of abdominal surgery. All bladder injuries were recognized intraoperatively and repaired laparoscopically. Three (0.4%) ureteral injuries occurred while attempting to seal the uterine arteries.

Discussion

While laparoscopic surgery has become more common in gynecology, replacing open surgical procedures as the preferred method in most cases, complications are frequently reported. In this study, the major complication rate was 5.8%, with an overall rate of 11.3%. We identified and managed the complications early, and only three (0.38%) required re-operation. Surgeons should be aware of TLH complications and be able to recognize and manage them, as these complications can be fatal. The most common reasons for hysterectomy were uterine myoma (31.16%) and abnormal uterine bleeding (30.52%). Although there has been no change in hysterectomy indications for at least 50 years, alternative operative methods have recently begun to expand. TLH has recently become an option, with rapidly increasing popularity and applicability. Driessen et al. 
[12] reported that the incidence of laparoscopic hysterectomy in the Netherlands increased from 3% in 2002 to 10% in 2007 and 36% in 2012. According to the results of a 2015 Cochrane review [13] that evaluated the most appropriate hysterectomy technique for benign indications, vaginal hysterectomy was notably superior to the abdominal and laparoscopic approaches and was recommended as the first-choice modality. Patients for whom vaginal hysterectomy is unsuitable may undergo a laparoscopic procedure to avoid abdominal hysterectomy; however, it should be noted that laparoscopic hysterectomy is associated with more urinary tract complications. The main concern in laparoscopic hysterectomy is the increased rate of urinary tract complications compared with other hysterectomy techniques [14]. In a study comparing 3190 laparoscopic hysterectomies with abdominal and vaginal approaches, Donnez et al. [15] reported that the laparoscopic approach was not associated with an increase in major complications when performed in experienced hands. Different complication rates related to TLH have been reported in the literature. Fuentez et al. [16] reported a major complication rate of 1.93% and a minor complication rate of 4.29% in 2888 cases. Moreover, Wattiez et al. [17], Makinen et al. [18], Tamburacı et al. [19], and Buhur et al. [20] reported complication rates of 19% in 2434 cases, 11.7% in 1647 cases, 9.3% in 300 cases, and 8.86% in 158 cases, respectively. The rate of major complications in the current study was 5.8%, and the rate of minor complications was 5.5%; the overall complication rate of 11.3% is in line with the aforementioned literature. In patients with previous abdominal surgery and suspected periumbilical adhesions, the primary trocar was introduced at Palmer's point. Blunt and sharp dissection was performed carefully to remove the bladder from the cervix, especially in patients with prior abdominal surgery. 
The most common indications for hysterectomy in the literature are uterine myoma and abnormal uterine bleeding [21]. Herein, uterine myoma (31.16%) and abnormal uterine bleeding (30.52%) were likewise the most common causes of hysterectomy. In the literature, conversion rates from laparoscopy to laparotomy range from 0.03% to 6.6% (Kim et al. [22], Lijoi et al. [23], Housmans et al. [24], Settles et al. [25], Takahashi et al. [26], Casarin et al. [27]). Donnez O. and Donnez J. [28] reported that the most significant risk factors for conversion to laparotomy were previous cesarean section and pelvic surgery. In our study, however, in cases with such a history and suspicion of periumbilical adhesion, the primary trocar was not introduced classically through the umbilicus but at Palmer's point. In our study, nine (1.15%) patients were converted to laparotomy. Chapron et al. [29] reported a mean hemoglobin loss of 1.3 g/dL in a series of 96 laparoscopic hysterectomy cases. O'Hanlan et al. [30] reported a mean blood loss of 130 ± 189 mL in their study of 830 patients. Our mean hemoglobin loss during and after surgery was 1.49 ± 1.25 g/dL. Wong et al. [31] reported a urinary tract injury rate of 0.24%. Bladder injury is three times more common than ureteral injury and is usually due to the use of a monopolar energy source for longer than necessary while performing anterior peritoneal dissection of the broad ligament, or during colpotomy in cases with prior abdominal surgery. In our patients with bladder injury, bladder catheterization was continued for 10 days after surgery to prevent vesicovaginal fistula formation and to support bladder healing. During the postoperative period, no vesicovaginal or ureterovaginal fistulas were observed. 
Three (0.4%) ureteral injuries occurred while attempting to seal the uterine arteries and control bleeding. All ureteral injuries were recognized during the operation, and a urologist was called to the operating room for assistance. A double-J catheter was inserted into the ureter, and conversion to laparotomy was required for the ureter to be repaired. Double-J catheterization was continued for 21 days postoperatively. No vesicovaginal or ureterovaginal fistulas were observed during postoperative follow-up. Although laparoscopic hysterectomy is a minimally invasive type of hysterectomy, surgeons should be aware of its potential complications; early diagnosis and management of complications reduce morbidity and mortality. We used a uterine manipulator in all cases, permitting the desired mobilization of the uterus, keeping adjacent organs (e.g., bladder and ureter) away from the surgical field, and minimizing the risk of injury. Vaginal cuff dehiscence after TLH can occur spontaneously or after coitus; postoperative coitus is one of its most frequent triggers. Hur et al. [32] reported that rupture of the vaginal cuff after TLH is associated with electrosurgery-related suboptimal healing, tissue necrosis, and devascularization. Other risk factors for vaginal cuff dehiscence include smoking, obesity, constipation, menopause, vaginal infection, and hematoma formation. Additionally, the suture technique used in laparoscopy may be effective in preventing vaginal cuff dehiscence. Siedhoff et al. [33] did not observe vaginal cuff dehiscence in any patient who received barbed sutures. Although we used unidirectional barbed sutures in our clinic, five (0.6%) patients had vaginal cuff dehiscence in the third week after surgery due to coitus; the vaginal cuff was then sutured secondarily through the vaginal route. Shen et al. 
[34] reported an intestinal complication rate of six cases (2.11%) in their study of 284 patients. In our study, three (0.38%) intestinal complications occurred: two were noticed intraoperatively and treated with sutures after conversion to laparotomy, while the third was detected on the first postoperative day, and the damaged large intestine was repaired at reoperation. Early diagnosis of intestinal complications is vital because of the high risk of morbidity and mortality. Inferior epigastric vein injury occurred in one of our cases (0.12%) due to the placement of accessory trocars, a rate consistent with that reported in the literature (0.1-6.4%) [35]. The injury was treated with bipolar electrocautery and fascia closure sutures without conversion to laparotomy. This highlights the importance of paying attention to accessory trocar placement; insertion-related complications are rare, but their mortality rate is 13% [36]. The duration of TLH is generally longer than that of other hysterectomy methods [37]; however, differences between average operation times may depend on the surgeon's laparoscopic experience, patient characteristics, adequacy of equipment, and the auxiliary team. Perino et al. [38] reported an average operative time of 104.1 ± 26.98 min, Bonilla et al. [39] reported 123 min, and Cheung et al. [40] reported 108.2 min. Our mean operative time was 112.92 ± 17.31 min. Candiani et al. [41] reported a hospital stay of 2.7 days for laparoscopic hysterectomy, and Morelli et al. [42] reported 2.9 days; in our study, it was 2.37 ± 0.72 days. The average weight of the removed uteri was 180 g, lower than the mean uterus weight reported elsewhere (220-259 g) [43,44]. For this reason, morcellation with a laparoscopic cold knife to remove the uterus was necessary in 54.8% of cases. 
The data were meticulously collected, and the sample size was deemed sufficient for the estimation. The effects of surgical experience, which may have an impact on every parameter, were also examined in this study. Our findings should contribute to the formulation of alternative application options. Limitations of the study include the short-term follow-up period. Moreover, the study was descriptive and retrospective and was conducted in a single Turkish tertiary hospital; these factors may limit the ability to draw causal relationships and the generalizability of the findings.

Conclusions

Although laparoscopic hysterectomy is a minimally invasive type of hysterectomy, surgeons should be aware of potential complications of TLH and be able to recognize and manage them, as these complications can be fatal. Early diagnosis and management of complications reduce morbidity and mortality.

This descriptive study was conducted at Kanuni Sultan Suleyman Training and Research Hospital between January 2017 and December 2020. The study adheres to the provisions of the Declaration of Helsinki and was approved by the ethics committee of Istanbul Kanuni Sultan Süleyman Training and Research Hospital (Approval Number: 2021.11.290). All patients provided written informed consent before surgery. Of the 797 patients involved in this study, 14 were excluded: 12 had missing records due to lack of postoperative follow-up, and two had histopathological malignancy results. The remaining 783 women (aged 40 to 80 years) who underwent TLH for benign indications were reviewed retrospectively.
The effect of bowel preparation regime on interfraction rectal filling variation during image guided radiotherapy for prostate cancer

This study aimed to investigate the tolerability and impact of milk of magnesia (MoM) on interfraction rectal filling during prostate cancer radiotherapy. Two groups were retrospectively identified, each consisting of 40 patients with prostate cancer treated with radiotherapy to the prostate ± seminal vesicles, with daily image guidance, to 78 Gy in 39 fractions over 8 weeks. The first group followed an anti-flatulence diet with MoM started 3 days prior to the planning CT and continued during radiotherapy, while the second group followed the same anti-flatulence diet only. The rectum between the upper and lower limits of the clinical target volume (CTV) was delineated on the planning CT and on weekly cone-beam CT (CBCT). Rectal filling was assessed by measuring the anteroposterior diameter of the rectum at the superior and mid levels of the CTV, the rectal volume (RV), and the average cross-sectional rectal area (CSA; RV/length). Overall, 720 images (80 planning CT and 640 CBCT images) from 80 patients were analyzed. Using linear mixed models, and after adjusting for baseline values at the time of planning CT to test the differences in rectal dimensions between the groups over the 8-week treatment period, there were no significant differences in RV (p = 0.4), CSA (p = 0.5), or the anteroposterior diameter of the rectum at the superior (p = 0.4) or mid level of the CTV (p = 0.4). In the non-MoM group, 22.5% of patients had diarrhea, compared to 60% in the MoM group; 40% of the MoM group discontinued its use by the end of radiotherapy. The addition of MoM to an anti-flatulence diet did not reduce the interfraction variation in rectal filling but caused diarrhea in a substantial proportion of patients, who then discontinued its use.

Background

Advances in radiotherapy (RT) technology have permitted dose escalation in prostate cancer to improve biochemical control [1].
Precision of RT delivery is an essential component of improving outcomes and reducing associated treatment toxicity [2]. Prostate motion is mainly attributable to changes in rectal volume and shape [3,4]; this has led to various strategies to reproduce consistent rectal filling and increase the accuracy of RT delivery for prostate cancer. It has been suggested that using a rectal balloon to achieve a reproducibly large rectum is one way to reduce variations in rectal filling, thereby reducing prostate motion [5]. Other non-invasive strategies use a rectum-emptying approach, by means of laxatives, an anti-flatulence diet [6], bowel relaxants [4], probiotics [7], enemas [8], a rectum-emptying tube [9], self-evacuation [10], or a combination of these. However, the degree of effectiveness of each of these methods, and the identification of the most successful approach, is still debatable.

Since 1997, our institutional policy to reduce rectal variation consisted of a defined bowel regimen of an anti-flatulence diet and milk of magnesia (MoM). Nonetheless, subsequent studies using magnesium laxatives failed to show a clinically relevant reduction of prostate motion, with a high probability of reduced laxative intake in response to diarrhea [11][12][13][14]. Subsequently, our institutional practice changed in 2012 to simple dietary advice (anti-flatulence diet only) without the use of MoM. Although a previous investigation had found no reduction in intrafraction prostate motion when using our bowel regimen (with MoM) [12], the efficacy for interfraction rectal filling was not evaluated. Therefore, in the present study we investigated the impact of MoM on interfraction differences in rectal filling and assessed its tolerability.

Patient selection

Following institutional research ethics board approval, two sequential groups of localized prostate cancer patients treated with volumetric modulated arc therapy (VMAT) to the prostate ± seminal vesicles (SV) were retrospectively identified.
Our institutional practice changed to simple dietary advice without the use of MoM in 2012, so each group consisted of 40 consecutive patients, randomly chosen from those treated in 2011 (MoM cohort) and 2013 (non-MoM cohort). Exclusion criteria were: palliative or postoperative radiotherapy or brachytherapy, pelvic lymph node involvement or distant metastasis, inflammatory bowel disease, and use of laxatives, stool softeners or anti-flatulence drugs for other indications.

Bowel regimen

All patients participated in a routine educational session with a radiation therapist regarding bladder and rectal preparation for radiotherapy planning and treatment. Patients in the MoM cohort received instructions to follow a bowel regimen which combined an anti-flatulence diet (Table 1) and MoM, while the non-MoM cohort followed the same anti-flatulence diet only. All patients were instructed to start the anti-flatulence diet ± MoM three days before the planning CT scan and to continue during RT. The initial once-a-day (bedtime) dose of MoM was 30 cm³, adjusted between 15 and 60 cm³ to achieve a soft bowel movement each morning, and stopped in case of lower gastrointestinal (GI) toxicity (i.e. diarrhea). Bowel habit descriptions and daily MoM intake at baseline and weekly during RT were prospectively documented in the electronic medical record as a standard of care. Lower GI toxicity (diarrhea during RT) was graded according to RTOG acute toxicity scoring criteria.

Radiotherapy

The clinical target volume (CTV) included the prostate; the base of the SV was included in the CTV if the risk of SV involvement was > 15% [15]. The planning target volume (PTV) was created by expansion of the CTV by 10 mm in all directions, except 7 mm posteriorly. RT was delivered using VMAT to a prescribed dose of 78 Gy in 39 fractions over 8 weeks, with daily prostate-focused image guidance using cone beam CT (CBCT).
All patients were treated in the supine position, without rigid immobilization.

Rectal motion assessment

For each patient, the outer rectal wall was delineated as a solid structure between the upper and lower limits of the CTV, as changes in rectal diameter at this level would likely have the greatest influence on prostate position. This was performed by a single observer on the planning CT and on eight randomly selected CBCTs (one from each week of RT). Rectal filling was assessed by measuring the anteroposterior diameter of the rectum at the superior and mid levels of the CTV, and by calculating the rectal volume (RV) and the average cross-sectional rectal area (CSA; defined as the rectal volume divided by the rectal craniocaudal length).

Statistical analyses

Descriptive statistics were used to describe patient and treatment characteristics. Student's t-test was used for comparison of continuous variables. Changes in the anteroposterior diameter of the rectum at the superior and mid levels of the CTV, RV and CSA between the planning CT and weekly CBCTs were compared between the groups by repeated measures analysis using linear mixed models. All tests were two-sided. Statistical analyses were performed using the SAS system (version 9.4; SAS Institute Inc, Cary, NC).

Patient characteristics

All 80 patients completed the intended course of RT as planned (78 Gy in 39 fractions). The characteristics of the patients in both groups are summarized in Table 2. No patients received androgen deprivation treatment. The 640 CBCTs selected from the 80 patients were reviewed and confirmed satisfactory visualisation of the bladder, prostate and seminal vesicles, with good definition of rectal boundaries between the upper and lower levels of the CTV.

Interfraction rectal filling characteristics

In each group, a total of 360 images, including 40 planning CT and 320 CBCT images from the 40 patients, were analyzed.
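The average cross-sectional rectal area defined above (rectal volume divided by craniocaudal rectal length) can be sketched as a small helper function; the input values below are illustrative, not patient data from the study:

```python
def average_csa(rectal_volume_cm3: float, craniocaudal_length_cm: float) -> float:
    """Average cross-sectional rectal area (cm^2) = rectal volume / craniocaudal length."""
    if craniocaudal_length_cm <= 0:
        raise ValueError("craniocaudal length must be positive")
    return rectal_volume_cm3 / craniocaudal_length_cm

# Illustrative example: a 34 cm^3 rectal volume over a 5.4 cm CTV-level length
csa = average_csa(34.0, 5.4)
print(round(csa, 1))  # 6.3 cm^2, the order of magnitude reported for both cohorts
```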
A summary of the descriptive statistics of rectal volume, average CSA, and the anteroposterior diameter of the rectum at the superior and mid levels of the CTV in both cohorts at the time of planning CT is shown in Table 3. The mean RV for the MoM vs. non-MoM groups was 34.1 ± 21.9 vs. 35.5 ± 15.5 cm³, and the average CSA was 6.3 ± 3.7 vs. 6.7 ± 2.4 cm², while the mean anteroposterior diameter of the rectum at the superior and mid levels of the CTV was 3.2 ± 1.1 vs. 3.4 ± 1 cm and 3.2 ± 1 vs. 3.2 ± 0.9 cm, respectively. Using linear mixed models, and after adjusting for baseline values at the time of planning CT to test the differences in rectal dimensions between the groups over the 8-week treatment period, there were no significant differences between the MoM and non-MoM groups in RV (p = 0.4), average CSA (p = 0.5), anteroposterior diameter of the rectum at the superior level of the CTV (p = 0.4) or anteroposterior diameter of the rectum at the mid level of the CTV (p = 0.4) (Fig. 1).

MoM tolerability and gastrointestinal toxicity

In the MoM group, the median volume of MoM taken by patients was 30 cm³ (range, 15-45 cm³) in the first week and 15 cm³ (range, 0-30 cm³) in the last week. The proportion of patients who took MoM decreased from 100% in the first week to 60% in the last week (Fig. 2). Acute RTOG lower GI toxicity in the MoM vs. non-MoM groups consisted of G2 diarrhea in 3 patients (7.5%) vs. 2 patients (5%) and G1 diarrhea in 21 patients (52.5%) vs. 7 patients (17.5%). In both groups, the onset of diarrhea was reported in the second week of RT, but with higher probability among patients who took MoM (G1 diarrhea in the second week of RT: 9 [22.5%] in the MoM group vs. 5 [12.5%] in the non-MoM group).
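The baseline comparisons of continuous variables used Student's t-test. A minimal pooled-variance version can be sketched as follows; the rectal-volume samples are illustrative values, not the study's data:

```python
import math
from statistics import mean, variance

def students_t(a: list, b: list) -> float:
    """Two-sample Student's t statistic with pooled (equal-variance) estimate."""
    na, nb = len(a), len(b)
    # Pooled sample variance across both groups
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Illustrative rectal-volume samples (cm^3); similar group means give a small |t|
group_a = [30.0, 35.0, 40.0, 32.0]
group_b = [33.0, 36.0, 38.0, 31.0]
t = students_t(group_a, group_b)
print(round(t, 2))  # -0.09: no evidence of a baseline difference here
```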
Discussion

This study demonstrated no significant difference between the MoM and non-MoM groups in the interfraction variability of the rectal dimensions that could affect prostate motion, including RV, average CSA, and the anteroposterior diameter of the rectum at the superior and mid levels of the CTV. Furthermore, G1-2 diarrhea was experienced by 24 (60%) patients in the MoM group compared to 9 (22.5%) patients who did not receive MoM, with 16 (40%) patients discontinuing the use of MoM by the end of radiation treatment.

Image guided radiotherapy (IGRT) is implemented to improve the accuracy of treatment delivery; however, it remains difficult to correct for deformation and rotation of the prostate, which are mostly influenced by changes in rectal filling. Previous studies have shown that changes in rectal filling can lead to poor outcomes following RT [16,17]. Furthermore, maintaining consistent rectal filling allows reduction of the required PTV margins [18], which enables dose escalation of RT to the prostate with the probability of better tumour control [19], without increasing treatment toxicity, thereby improving the therapeutic ratio.

The use of laxatives and an anti-flatulence diet to reduce rectal filling variation has been previously investigated. In a randomised controlled trial (RCT) with 30 prostate cancer patients, Oat et al. reported that dietary intervention with psyllium (20 g/d, n = 15) did not significantly reduce the variability in RV or rectal filling at the superior level of the prostate; it was, however, associated with consistent rectal filling at the mid level of the prostate [20]. In another RCT of prostate cancer patients assigned to receive magnesium oxide (500 mg twice a day, n = 46) or placebo (n = 46) during RT, and similar to our results, there was no significant difference in RV between the treatment arms, and magnesium oxide was not effective in reducing interfraction rectal filling [14].
Furthermore, several other studies using magnesium laxatives were unable to show a clinically relevant reduction of inter- or intra-fractional prostate motion [11][12][13]. Despite an inability to reduce interfraction prostate motion, a dietary protocol with laxatives may potentially decrease gas-related rectal distention, resulting in better CBCT image quality and facilitating the IGRT process [11]. On the other hand, tolerability of the laxative remains an important clinical consideration, with a higher probability of reduced laxative intake or even anti-diarrheal use in response to more frequent bowel movements or changed stool texture [12,13]. In an RCT of magnesium oxide (n = 46) vs. placebo (n = 46), patients in the intervention arm had more frequent grade ≥2 acute GI toxicity (37% vs. 22%) and were more often prescribed anti-diarrheal medicines during RT (15% vs. 9%) [13]. We had previously reported that the proportion of patients who did not take MoM increased from 8% in the first week to 44% in the last week of RT [12]. Consistently, in the current study, 60% of patients who took MoM had diarrhea, and 40% discontinued its use by the end of radiation treatment.

The findings from this study should be interpreted in the context of its methodological limitations. The most significant limitation relates to its retrospective and non-randomized nature, which may have led to undocumented differences between the cohorts that masked the effect of diet and MoM. Also, compliance with the anti-flatulence diet was not quantified and may have differed between the two groups, which may have negated the effect of the diet on interfraction rectal filling variation. Nonetheless, when considered in the context of previous evaluations of this subject, our results confirm that the addition of MoM to an anti-flatulence diet does not lead to more consistent rectal filling.
Variations in rectal volume and size can influence both translational prostate motion (which can be corrected with IGRT) and rotational prostate motion (which is more difficult to mitigate with current state-of-the-art technologies). However, rectal filling is not the sole factor with a potential impact on prostate spatial localization, and different intervention approaches considering together the impact of rectal and bladder filling, patient positioning and breathing on prostate motion should be investigated.

Conclusion

The addition of MoM to an anti-flatulence diet did not reduce interfraction variation in rectal filling and may cause diarrhea, resulting in a substantial proportion of patients discontinuing its use. Simple dietary instructions appeared to be just as effective at reducing interfraction rectal variability, and MoM should be omitted from routine use during prostate radiotherapy.
Post-truth politics and discursive psychology

The challenge for scholars interested in post-truth politics is twofold. First, it is a concept lacking a consistent scholarly understanding. It is variously viewed as 'an increasing disregard for factual evidence in political discourse' (Lockie, 2017, p. 1), as 'the diminishing importance of anchoring political utterances in relation to verifiable facts' (Hopkin & Rosamond, 2018, p. 642), as a period in time (Glăveanu, 2017), a place where 'truth and consistency are unimportant' (Paxton, 2017, p. 22), or it is discussed without a substantial definition (Marshall & Drieschova, 2018; Muñoz, 2017). This is partly due to different disciplinary approaches and partly to the following issue. Second, research on post-truth politics is still relatively emergent. Promising work has been developed since Lockie's (2017) point (see below), some of which will be discussed here, but there is still some way to go before we have a substantial academic body of work on post-truth politics. Aside from select publications (Demasi, 2019; Glăveanu, 2017; Muñoz, 2017), 1 psychology in particular has not had much to say about post-truth politics. With this in mind, I provide a necessarily brief overview of academic research on post-truth politics.
I give particular attention to some work from political science (Hopkin & Rosamond, 2018) and three articles from a special issue of New Perspectives. The discursive psychological view on what counts as political is broad. It is largely determined by topic, owing to the analytic interest in discourse. This means that anyone in any setting is capable of producing political discourse, and there is a strong tradition of discursive psychological work that looks at political discourse in both formal (Demasi, 2019; Edwards & Potter, 1992; Popoviciu & Tileagă, 2020) and informal (Billig, 1991, 1992) settings. Much scholarly focus on post-truth politics has tended to centre on the political discourse of elite figures such as Donald Trump, but my point about post-truth politics should be taken to go wider than this. In this article, I speak of post-truth politics as a form of political discourse that anyone can enact. While the existing and emergent work on post-truth so far provides a helpful start, discursive psychology (henceforth DP) can help shed light on some of the concerns with post-truth politics. Specifically, I point out the long history DP has in studying fact construction as a social (Potter, 1996) and rhetorical (Demasi, 2019; Tileagă, 2019) action. One can engage with the concept of truth as a form of rhetorical action, and this understanding sheds light on how one may view post-truth politics. If this is the case, as I argue it is, then post-truth politics is less about a decline in truth and more about highlighting the rhetorical nature of what we treat as 'truth' in political discourse (Demasi, 2019). The article concludes with some considerations (or, more aptly, questions) of how to go forward with an increased cross-disciplinary awareness.

POST-TRUTH POLITICS OUTSIDE OF DP

Post-truth politics is becoming an increasing concern in academic circles. Lockie (2017) noted that research in the area is nascent, but still limited as of only a couple of years ago.
Aside from some DP discussion on the topic (Burke & Demasi, 2019; Demasi, 2019), psychology, as a discipline, has yet to contribute to post-truth studies on a systematic scale. At present, there are some theoretical considerations on the persuasive effectiveness of the emotional aspects of post-truth politics (Muñoz, 2017) or a call for psychologists to 'help people distinguish between beliefs and facts and understand the strengths and limitations associated with each' (Glăveanu, 2017, p. 376). There is no reason to contest Glăveanu's apt call for increasing awareness of how to engage with news online (particularly social media), for psychologists to spread their research in a manner that engages with the wider public, and for a focus on these two to lead to 'the creation of practical tools to counter "post-truth" mentalities' (p. 377). After all, one of the most powerful tools that scholarly work can offer society is to encourage critical, active and research-informed thinking. Rather, his point reflects a more general issue in psychology and beyond: the assumption of a distinction between personal belief and 'objective truth', of a nature where the two are at odds. I will return to this point later, in discussing the contribution of DP.

In other disciplines, work engaging with post-truth politics has begun in earnest. Hopkin and Rosamond (2018) is a fitting example. They gave attention to how the depoliticisation of policy making has led to an increased focus on political discourse to win votes. This increased political focus on discourse, in turn, is accompanied by a disregard 2 for the truthfulness of political claims, which has created a space for the rise of post-truth politics. The issue, they argue, is that the increased 'bullshit' creates a problem for the hypothetical voter. This person, surrounded by a multitude of information, false or not, is at a disadvantage and ends up making poor voting decisions.
Hopkin and Rosamond (2018) take the image of the human as a cognitive miser for granted, although this is not the unanimous view in the academic psychological community (Billig, 1991, 1996). Their argument relies on a very specific view of the human mind, one that areas such as DP do not necessarily endorse.

Next, I turn to three articles from a special issue of New Perspectives dedicated to post-truth politics. I selected this issue because it presented a substantial piece of scholarly work dedicated to post-truth politics, one of the earliest of its kind, and because it is useful in shedding light on how other disciplines view post-truth politics. The choice of articles has been unavoidably selective and their coverage brief, for which the authors have my apologies; the other articles of the special issue are worthy of attention too. These three were selected on account of their particular strengths and weaknesses, and for their suitability in emphasising the potential contribution of DP to a scholarly understanding of post-truth politics.

Wight (2018) by and large attributes the problem of post-truth politics to postmodernism and constructionism. Put briefly, in making a distinction between 'reality' and 'social construction', Wight presents the characteristic realist argument against relativism: regardless of constructions, reality remains reality. Wight assumes a distinction between social constructions and what he argues is reality 3 (though he offers little explanation of what this 'reality' is). Misunderstanding postmodernism and social constructionism to be singular entities, Wight simplifies their impact on the world at large to a simple loss of objective truth that should concern political society. This demonising of schools of thought does injustice in three ways.
First, by presenting constructionism and postmodernism as singular entities, their nuances, variations and more detailed points, often crucial to understanding them, are glossed over. 4 Second, it is an unjustified assumption that because a social constructionist approach looks at how people work up versions of truth, social constructionism therefore treats all truths as false or on an equal footing with each other. In claiming that 'without the concept of objective truth as a standard against which to hold subjective and intersubjective claims to be in possession of the truth, then all truth claims have to be taken at face value' (2018, p. 18), Wight is producing an argument that is no less political than it is rhetorically rooted in exactly what the postmodernist and constructionist would direct their analytic gaze on. He overlooks the fact that his argument for an objective truth is entrenched in a rhetorical tradition which, upon close inspection, would undermine the argument for objective truth (Edwards et al., 1995). Third, by blaming postmodernists and social constructionists, one is encouraged to overlook what they can contribute to the study of post-truth politics. To use Wight's (2018) own argument to illustrate the point, the discursive psychologist would be analytically concerned with the rhetorical practices that Wight uses to reify his argument. Such analysis would make no claims about the truthfulness of his argument (though, in this instance, it does put the strength of Wight's argument into question) because such an assertion would be, frankly, analytically fruitless (Edwards & Potter, 1992; Gibson, 2018). This distinction is crucial. What is absent from his argument is the recognition that the social constructionist challenge to a simple 'objective truth' challenges what he refers to as truth just as much as what he would refer to as post-truth.
This is not the same, though, as treating truths as being on an equal footing: 'far from ruling out the possibility of justification of a particular view, relativists insist upon it' (Edwards et al., 1995, p. 39, emphasis in original). Claiming that the error lies in our 'flights of fancy where we believe that we construct the world in our discourse' (2018, p. 27), Wight overlooks that he has constructed a particular discourse to argue against constructions in discourse. Wight states that 'we need to be prepared to say that some perspectives are better than others and explain why' (2018, p. 26). A social constructionist approach is not incompatible with this, but, in retreating to the clichéd realist arguments already systematically addressed (see Footnote 4), Wight has shown little acknowledgment of this. What he blames for the problem of post-truth politics could present, instead, a solution to it.

Hyvönen (2018) approaches the issue of post-truth politics from a different angle. The problem, according to him, lies with the fact that the notion is poorly conceptualised. Taking his cue from the work of Hannah Arendt, Hyvönen argues for a distinction between 'rational' and 'factual' truth. Leaving the former aside (the opposite of which is illusion, opinion, error or ignorance; p. 35), Hyvönen argues that one needs to focus on factual truths. Just because 'facts are established, not found' (2018, p. 35), this does not mean that they are easy to challenge. Instead, one needs to consider the political face of factual truths and to recognise that 'post-truth politics… ought to be understood as a predicament in which political speech is increasingly detached from a register in which factual truths are "plain"' (2018, p. 38). Looked at this way, then, post-truth politics is not an exercise in 'bullshit', as Hopkin and Rosamond (2018) argue, but an attempt at creating confusion.
It is impossible to do full justice to Hyvönen's argument here, but when he asks 'what kinds of discourses constitute our shared world in such a way that deviations from commonly "known" facts are considered politically acceptable or even preferable?' (2018, p. 49), one can, with confidence, suggest that DP may have something of great use to contribute on this point.

Taken together, Wight (2018), Hyvönen (2018) and Marshall and Drieschova (2018), and indeed the rest of the special issue, give a rounded image of how one could go about understanding and studying post-truth politics. However, what these approaches tend to do is treat post-truth politics as some form of loss of 'objective truth' (Michelsen & Tallis, 2018), whatever that is taken to mean, set against emotions or emotive language. One issue with the literature from this special issue, and indeed other work covered here, is that it has tended not to specify what is meant by 'emotion'. It seems that emotion is treated as no more than a contrast to, and deviation from, 'truth' and 'reality'. A clearer understanding of what is meant by this would be of use; it could, for example, mean a move away from the logos part of rhetoric (see Montgomery, 2017, below). I argue, here and elsewhere (Demasi, 2019), that viewing post-truth politics as a plain move away from 'truth' simplifies the matter too much.

Hyvönen (2018), in particular, emphasises the need for further research. He rightly points out that one should look to discourse(s) to better understand post-truth politics empirically. Likewise, Marshall and Drieschova (2018) point in this direction when they suggest that 'scholars of politics and international relations thus need to pay more attention to the everyday activities of ordinary citizens and how those shape political decisions, and potentially even political regimes' (p. 91) and the 'need to pay heightened attention to the people as active shapers, not just passive recipients' (p. 101).
Discourse forms the central part of these everyday activities that Hyvönen (2018) and Marshall and Drieschova (2018) point to, and DP provides an empirical way to explore these activities. Of course, my summary of the strengths and weaknesses of the works above should not be taken as representative of the strengths and weaknesses of all scholarly work on post-truth politics. It is impossible for a scholar to be able to read everything in a given area (Billig, 2013); any significant omissions from the literature are squarely my fault.

What the above approaches have not addressed is that 'telling the truth' is as much a social activity as it is a political or a rhetorical one. Social acts are necessarily rhetorical (Billig, 1996), and even the most mundane rhetorical contexts can have a political dimension (Billig, 1995; Billig et al., 1988). In discourse, one cannot distinguish 'mere' truth-telling from the social activity in which it is embedded: 'What is to count as mere description, and the objective reality that descriptions merely refer to, are, in other words, rhetorical accomplishments' (Edwards & Potter, 1993, p. 28). Managing to make something come across as real is a rhetorical achievement (Potter, 1996). 6 Hyvönen (2018) gets close to this, though he does not ask how discourses are constituted to make facts politically acceptable or preferable. Marshall and Drieschova (2018) rightfully highlight the need to focus on everyday practices. This is also a fitting parallel for DP, in that one can focus on everyday practices as an analytic inroad to making sense of any type of discourse. For example, one can observe how everyday language is used to construct what counts as extreme prejudiced language (see Burke & Demasi, n.d.; Tileagă, 2007).

Finally, I would briefly like to consider Montgomery's (2017) contribution. Montgomery, in his investigation of Donald Trump's rhetorical style in the 2016 U.S.
presidential elections, provides a systematic attempt at analysing political discourse that can be understood to encompass 'post-truth politics'. 7 Using Aristotle's concepts of logos, pathos and ethos 8 and Habermas' concept of 'validity claims', 9 Montgomery provides an insightful breakdown of how Trump adopted a particular rhetorical style. Although this style was lacking in truthfulness, or logos, it was nonetheless effective. Montgomery argues that this is because it was designed to emphasise its 'folkness', which, in turn, increased his appeal in terms of the apparent sincerity of his speech:

It is as if Trump's exaggerated and inappropriate claims about himself carried a strong appeal for his core constituency on the grounds that they come across as an authentic form of self-expression: Trump speaks how he feels and says what he means (Montgomery, 2017, p. 18).

What Montgomery very importantly highlights is that we need to appreciate the nuanced nature of post-truth politics. Being able to assert the truthfulness of claims has limited use in gauging political success or its lack thereof. He rightly suggests that there is more to be understood about political discourse than a mere comparison of the 'truthfulness' of claims. However, what was missing from Montgomery's argument was a detailed unpacking of how Trump's discourse came across as sincere, convincing, or however one wants to conceptualise it. Dividing rhetorical styles between logos, pathos and ethos is a move in this direction, but rhetorical analysis can be more detailed. DP can tell us about the rhetorical construction of political speeches, in a manner in which we understand concepts such as 'truth' and 'sincerity' as rhetorical constructions designed to bolster one argument or discourse over another (Billig, 1996).
POST-TRUTH POLITICS IN DP

To focus on truth-telling as social and rhetorical practice is not to deny the existence of 'truth' but, rather, to highlight that any type of truth-telling cannot be abstracted from its (micro or macro) context. To assume such a distinction, especially to assume that one could successfully carry it out in empirical research, would be a step too far: it presupposes a simple separation between an abstract and a socially embedded truth, much like a misapplied Platonic theory of forms (Kraut, 2017). Such a distinction is not necessary for a DP approach, nor is it fruitful: 'truth' does not determine how something is spoken (Gibson, 2018). Instead, the focus moves from determining what is or is not true to how something is made to come across as (un)truthful. Just as a purely cognitive psychology tells only one side of multifaceted human nature (Billig, 1996), so overlooking the social, action-oriented context of truth-telling gives a limited picture of what post-truth politics is. The act of truth-telling, regardless of its factual accuracy (however one conceives it), is first and foremost part of a number of discursive activities that are designed to do something in their immediate context (Edwards & Potter, 1992). For example, a politician's priority in a debate is arguably rhetorical supremacy, in appealing to the electorate, rather than factual accuracy. DP is a method particularly well suited to the analysis of these practices. One can empirically study the politician's discourse in its various aspects: how it is designed to appeal to a particular audience, how it is designed to undermine the ideological opponent, and other argumentatively relevant avenues. Looking at discourse in this way begins to address some of the questions posed by Hyvönen (2018) but also goes a step further in looking at how discourse unfolds. With that in mind, I now consider DP and some relevant research in the area.
Discursive psychology is an approach that focuses on how the psychological is brought to life through discourse, moving the focus to action over cognition (Edwards & Potter, 1992). Adopting a specific stance of social constructionism-that discourse is both constructed and constructive (Potter & Hepburn, 2008)-DP looks at how psychological concepts (attitudes, memories, attributions, etc.) are worked up in discourse as social actions highly attuned to a specific context. To look at facts from a DP point of view is to ask: what discursive practices, as both offensive and defensive rhetoric (Billig, 1996), make facts look like facts in this particular context? As mentioned, the aim is not to deny the existence of facts but, rather, to look at how facts are constructed and deployed in situ. At this stage, I should mention that there are a number of ways of doing DP analytic work. 10 My intent is not to advocate any particular type of DP over another (see, e.g., Potter, 2010, for an overview). I follow in Gibson's (2018) footsteps in talking of DP in a broader sense, as originally introduced by Potter and Wetherell (1987), 11 in a manner that should appeal to all varieties of DP: discourse is constructed (see above), functional and varied (Potter & Wetherell, 1987). It is functional in that discourse is designed to perform particular actions in particular settings. It is varied because the function of discourse is dependent on the context in which it unfolds. From its earliest days, DP has been concerned with fact construction. The core book of the field, Discursive Psychology (Edwards & Potter, 1992), often relied on examples from political discourse and on political events of the time.
The same applies to other influential texts from the early days of DP (Billig, 1991; Billig et al., 1988; Potter & Wetherell, 1987; Wetherell & Potter, 1992), with the work of Edwards and Potter (1992) focusing most explicitly on fact construction out of this early literature. While subsequent work on political discourse is substantial (e.g., Augoustinos & Every, 2007; Byford, 2006; Condor, Tileagă, & Billig, 2012; Goodman, 2010, 2014; Tileagă, 2013, 2016), the focus on fact construction in political discourse has not necessarily remained central. My previous work (Burke & Demasi, 2019; Demasi, 2016, 2019; Demasi & Tileagă, 2019) is an exception. It has looked at various aspects of how people use 'facts' and 'knowledge' in an argumentative manner in political debates on the European Union. Drawing on rhetorical psychology (Billig, 1991) and on epistemics, borrowed from conversation analysis (Heritage, 2013; Heritage & Raymond, 2005; Raymond & Heritage, 2006), I adopted a DP analytic framework that paid particular attention to how speakers demonstrate their knowledge-both in terms of content and their 'rights' to this knowledge-and deploy 'facts' as rhetorical tools to bolster their position in broadcast debates on the EU. This focus is particularly helpful in analysing how a politician can provide counterclaims to factual claims without resorting to calling their opponent a liar or implying that the opponent has uttered an untruth (Demasi, 2019). Consider the following extract:

    ...that the UK is paying nine billion .h every every year to to- in
9   net to the European Union which will be case next year. .h "but#
10  what he doesn't point out is that we're paying sixty billion a
11  year for health .h we're paying a hundred and thirty two billion
12  a year for social security and b

Here we see how the original fact-based claim, put forward by Farage on lines 1-3, is addressed by Watson on lines 77-85.
The latter puts this 'fact' in a new context, one in which its rhetorical significance is undermined and downplayed. Furthermore, the implication is that in this new context challenging the cost of being in the European Union implies challenging other costs, such as health and social security, too. The point here is that one can make analytic sense of this extract not by trying to assess the truthfulness of what was said, which of the speakers is wrong, and so forth. Rather, an appreciation of how 'facts' can be used rhetorically and flexibly to argue for and against positions allows one to see that 'constructions of facts are not neutral reflections of an objective reality' (Demasi, 2019, p. 18). Viewed this way, the idea of post-truth politics takes on a new meaning. That we are recognising that 'truthfulness' is a poor judge of the success of political discourse (Montgomery, 2017) suggests that we should look elsewhere for an explanation. Understanding the rhetorical nature of facts, as understood by DP, is a major way forward. Focusing on fact construction is an analytically fruitful start, but one should also be prepared to consider how these challenges are presented alongside other interactional phenomena. For example, while overlapping talk is relatively normal in everyday interaction, it can become a highly strategic manner of challenging the factual claims of one's political opponent (Demasi, 2016). Everyday conversational features can readily be deployed as argumentative practices when challenging an ideological opponent. Laughter, too, can be a strategic resource (Demasi & Tileagă, 2019) used to discredit, disparage and challenge political opponents and their factual claims. For one, laughter can serve to delegitimise factual claims. Snorts, a form of laughter-based interaction, can provide highly disparaging responses that leave little doubt as to the stance taken towards one's ideological opponent.
Laughter, then, not only tells us how politicians position themselves ideologically but also tells us something about how politicians position themselves as knowing something in a manner superior to their political opponent (Demasi & Tileagă, 2019). To laugh at one's ideological opponent is to claim superior access to the relevant knowledge at hand. Such rhetorical moves to display knowledge cannot be abstracted, even in the case of laughter, from the rhetorical role of truth-telling. Who-knows-what and who-knows-more are matters of analytic concern indistinguishable from what-is-true. Therefore, the scholar who is interested in post-truth politics would do well to look not only at what is treated as true in discourse but at how these treatments are worked up and resisted, for these all unfold jointly. To repeat an earlier point, the aim of the work done in DP, particularly in looking at fact construction, is not to assess whether the facts deployed by speakers are, in a realist sense, 'true' or 'false'. Rather, the focus is on how facts and descriptions of them are designed to perform a particular type of action (Potter, 1996). Viewed in this sense, we can begin to appreciate how 'facts', 'truths' and so forth are forms of social action rather than neutral reflections of an objective reality or of someone's inner mental states. 12 This has implications for how we view post-truth politics in situ. If we recognise that portrayals of fact and truth, in discourse, are primarily a medium for action, then we begin to get a better idea of post-truth politics as a social phenomenon. This can be a source of optimism for scholars looking to study the phenomenon, giving them something concrete and empirical to grapple with. Contrary to Wight's (2018) claim that postmodernism and constructionism are to blame, we can instead look to DP, as influenced by postmodernism and social constructionism, as a means of rendering post-truth politics empirically tangible.
A discursive psychological approach is a tool for empirically approaching post-truth politics. What links political discourse from before post-truth politics to how political discourse unfolds today is that the concern for utilising factuality as a rhetorical resource has remained much the same. Postmodernism and constructionism are not what caused the problem, as Wight (2018) argues. Rather, they are what has given us the means of making scholarly sense of post-truth political discourse. Murray Edelman (1977) argued that political discourse is about facts and values. What I argue is that, using DP, a close inspection of how political discourse unfolds shows how 'facts' are rhetorical resources used to vie for argumentative supremacy over an ideological opponent in public political discourse. Hopefully this type of approach is useful to scholars of politics, political science, economics and international relations in highlighting the action-oriented nature of truth-telling. I have argued that, in situ, the distinction between truth and untruth is a problematic one: people work up factual claims and counterclaims in a live, context-dependent setting where issues of who-knows-what and who-knows-better are at stake. The DP work discussed here reiterates Edelman's (1977) point that political discourse is about facts and values. Having a sound conceptual grasp of post-truth politics, as Hyvönen (2018) argues, is essential for scholarly work on the matter, 13 but this needs to be accompanied by an appreciation of how it unfolds in practice. Rarely does the discourse of politics, or any discourse for that matter, unfold with the theoretical and conceptual neatness a scholar might hope for: ideologies, in action, are messy (Wetherell & Potter, 1992). This is not an issue; it is precisely rhetorical vagueness that allows issues such as 'facts' to be produced and challenged.
Vague wording can do very precise rhetorical work (Edwards, 1997), and dilemmatic, contradictory ideological positions are how we reason our way through social life and its various challenges (Billig et al., 1988). My previous work (Demasi, 2019) focussed on the role of 'facts' as resources; the next step, if we are to take Edelman (1977) seriously, is to look at how fact and truth as rhetorical resources are assigned particular moral and ideological values. This may provide a link between matters of 'truth' and 'values' as expressed in contemporary political discourse, lay or elite. This work should be of particular use outside of DP, too, in helping to show that post-truth politics, when looked at in context, is not about a decline of truth but an argument for a very particular ideological position.

CONCLUSION

What this means in practice is that we need more research that appreciates the situated and rhetorical nature of what is treated as (un)true, recognising that these are better treated as social actions rather than conceptual entities. Scholars concerned with the decline of truth should look to the contribution of discursive work on the rhetorical nature of political discourse. This may offer a way to get to grips with political discourse that is, despite its apparent untruthfulness, effective in garnering public support. To return to two examples of what has concerned scholars: Brexit and Donald Trump. How did political discourse that shocked so many manage to 'defeat' the reasonable alternatives? The answer must be sought empirically. One should look, in detail, at the situated construction of the arguments and, instead of dismissing such discourse as untruthful, appreciate the rhetorical complexity of how these arguments were made. Recognising the rhetorical nature of political discourse can give a clearer understanding of what post-truth politics is; possibly no more than a rhetorical style of our times.
This should enable a more measured response to post-truth politics, beyond a lamentation of a loss of truth. This is what DP can contribute to other disciplines, and the promise is substantial. The question is: what can DP learn in return? In particular, how can DP play a part in providing a rounded, interdisciplinary understanding of notions such as 'post-truth politics' and its various manifestations (e.g., Brexit)? DP has a multidisciplinary foundation readily observed in the key texts of the area (Billig, 1987, 1991, 1996; Edwards & Potter, 1992; Potter, 1996; Potter & Wetherell, 1987, etc.), but one must be cautious about how to combine DP with other approaches. Potter (2003) highlights that mixing DP with other paradigms comes with several considerations, mostly to do with potentially incompatible conceptualisations of discourse and its place in empirical research. There is a tendency to accord discourse a secondary place or to treat it as evidence of something outside of itself, rather than recognise that it is a medium for intelligible social action in and of itself. This does not by any means exclude a cross-disciplinary approach, only that such combinations need to be made with great care. With this in mind, I have no ready answer to my own question, though I am by no means the only one suggesting cross-disciplinary considerations (e.g., Tileagă, 2019; Tileagă & Byford, 2014). Considering the multidisciplinary history of DP, I now reach out to other scholars for suggestions. What can DP learn from political science, international relations and their related disciplines?

ACKNOWLEDGEMENT

I would like to thank Shani Burke and Bogdana Humă for their helpful comments on drafts of this work.

... is an absence of concern for the truthfulness of a claim rather than deliberate lying.
3 This is a typical misunderstood and oft-repeated realist argument against relativism. For a systematic rebuttal see Edwards, Ashmore, and Potter (1995) and Iversen (2016).
4 See, for example, Potter (1996) for a brief summary of various approaches.
5 This rise of distrust is particularly attributable to events such as the second Iraq War and the 2008 recession.
6 I say 'manage' because the success of such a rhetorical move is as much reliant on the acts of the rhetor as it is contingent on whether the recipient resists or concedes this rhetorical move (Clayman & Heritage, 2002).
7 Though Montgomery (2017) does not explicitly refer to the term outside the title of his paper, it is primarily concerned with the issue of how Trump managed to garner support in spite of the apparent untruths of his speeches.
8 'Logos-an appeal to argument and evidence; pathos-an appeal to emotion; and ethos-an appeal based on the character and the qualities of the speaker' (Montgomery, 2017, p. 6).
9 The validity of utterances is judged in terms of their truth, appropriateness and sincerity.
10 For example, there is a more conversation analytic flavoured DP and a DP that refers to itself as critical.
11 Though they did not speak of their work as 'discursive psychology', it has since become one of the earliest texts advocating what is now known as DP.
12 For a critique of viewing language as a transparent window to the mind see, for example, Edwards and Potter (1992) and Tileagă (2013).
13 Though we need to bear in mind that some level of inconsistency is present, and necessary, even in more formal conceptual systems (Billig, 1982).
Growth and foliar yield responses of waterleaf (Talinum triangulare Jacq) to complementary application of organic and inorganic fertilizers in an Ultisol.

American Journal of Experimental Agriculture, 3(2): 324-335, 2013

Aims: Growth and foliar yield responses of waterleaf (Talinum triangulare Jacq) to complementary application of organic and inorganic fertilizers were studied in an Ultisol. Study Design: The experiment was laid out in a randomized complete block design with three replicates. Place and Duration of Study: The University of Uyo Teaching and Research Farm, located at Use Offot, Uyo, Akwa Ibom State, Nigeria; the study was conducted between March 6 and June 6 in both the 2009 and 2010 cropping seasons. Methodology: Treatments were various combinations of organic and inorganic fertilizers applied to the soil: NPK (15:15:15) at 400 kg ha-1, poultry manure (PM) at 5 t ha-1, PM at 2.5 t ha-1 + NPK at 200 kg ha-1, PM at 3.75 t ha-1 + NPK at 100 kg ha-1, PM at 1.25 t ha-1 + NPK at 300 kg ha-1, and a control (without amendment). Results: There were significant differences (P<0.05) among treatments in height, number of branches, number of leaves, stem girth, leaf area, and total foliage yield of waterleaf in both years. Generally, application of PM alone and complementary use of PM and NPK, irrespective of the ratio, enhanced waterleaf growth and total foliage yield better than application of NPK alone and the control treatment. Total foliage yield from the 100 kg ha-1 NPK + 3.75 t ha-1 PM treatment (56.03 t ha-1 and 54.36 t ha-1 in 2009 and 2010, respectively) superseded other treatments by 38-78% in 2009 and 35-78% in 2010.
Conclusion: With the high cost, scarcity, and environmental problems associated with the use of mineral fertilizer in Nigeria, and based on the foliage yield obtained in this study, it is obvious that the use of organic manure in combination with mineral fertilizer (particularly with the 100 kg ha-1 NPK + 3.75 t ha-1 PM or 200 kg ha-1 NPK + 2.5 t ha-1 PM treatment) can sustain waterleaf production. It is also demonstrated that it would be more rewarding to apply 5 t ha-1 PM alone compared to sole application of 400 kg ha-1 mineral fertilizer for waterleaf production in an Ultisol.

INTRODUCTION

Waterleaf (Talinum triangulare Jacq), a leafy vegetable crop that originated from tropical Africa [1], is an all-season vegetable that is extensively grown in many countries in Asia, South America and West Africa. In Nigeria, it is widely cultivated and consumed in the southern part, particularly in Cross River and Akwa Ibom States [2,3]. The demand for waterleaf is high in these states, and it is therefore a major source of income for farmers.
Its high demand is attributed to its nutritional value and importance as a "softener" when cooking the common fibrous leafy vegetables [4] such as Afang (Gnetum africana), Atama (Heinsia crinata), and Editan (Lasienthera bulchozianum). It is also cooked with green amaranthus (Amaranthus curentus) and fluted pumpkin (Telfairia occidentalis). Waterleaf has a colloidal property and this favours its use for the preparation of popular soups known as Ukwoho afang and edikang ikong in some parts of southern Nigeria. Ibeawuchi et al. [5] stated that the leaves and young shoots are used to thicken sauce and are consumed in large quantities in the southern part of Nigeria. It is considered medicinal in southern Nigeria, as it is used as a herb for measles and stomach upsets [3]. It also performs well as fodder for raising giant snails [6]. The increasing demand for waterleaf due to urbanization has therefore pushed farmers into small and medium scale production of waterleaf in Akwa Ibom State. Consequently, to obtain optimum yield, organic fertilizers are being developed by farmers from farm and city wastes for vegetable production. Also, organo-mineral fertilizers (OMF), in which organic wastes are fortified with inorganic N or NP fertilizers, are being utilized by crop farmers. Organic and organo-mineral fertilizers have been reported to significantly increase the yield of vegetables such as pepper (Capsicum annum), tomato (Lycopersicon esculentus), okra (Abelmoschus esculentus), egusi-melon (Cucumeropsis mannii) and amaranthus (Amaranthus cruentus) [7,8,9,10,11,12,13,14]. Most farmers apply these assorted types of fertilizers (organic and inorganic), but sometimes the yields hardly compensate for the money spent to purchase them. This is partly because most farmers are yet to determine the best local fertilizer source to use in vegetable crop production. The use of animal and plant wastes in crop production is indeed a long-standing practice in the world.
The use of inorganic fertilizers among farmers to improve waterleaf yield is also common, although some farmers and consumers still question the desirability of using inorganic fertilizer for leafy vegetable production. Most farmers broadcast large quantities of inorganic fertilizer in waterleaf plots at intervals of 2 to 30 weeks to stimulate growth. This is always aimed at achieving maximum growth and yields [15,16]. Inorganic fertilizer is considered a major source of plant nutrients [17], while organic manure has the ability to improve soil structure in addition to supplying nutrients [16] and increasing microbial biomass [18]. However, the use of inorganic fertilizers alone may have negative implications for human health and the environment [19]. The utilization of organic manures by vegetable producers may have the additional advantage of ensuring environmental harmony compared to chemical fertilizers. Udoh et al. [3] recommended application of organic manures like cow dung, poultry droppings and nitrogenous fertilizers immediately after harvest. Farmers find it difficult to maintain a standard fertilizer regime in the cultivation of waterleaf, as they often supplement organic manure with mineral fertilizers. However, information on the interaction due to combined application of organic manure and mineral fertilizer for waterleaf is scanty. Combined application of organic manure and mineral fertilizers often comes with such additional advantages as buffering the soil against undesirable acidification and increasing the availability of micronutrients [20]. The blending of organic manures with mineral fertilizer may help to increase the productivity of crops on fragile soils by reducing the problem of nutrient losses via leaching or denitrification. Olsen et al. [21] found that a substantial portion of the nitrogen fertilizer needs of most cereals could be met by organic manure blended with mineral fertilizer.
The use of both mineral and organic fertilizers has been found to be a sustainable technology for crop production, and the full integration of this technology into the cropping systems of Akwa Ibom State could further increase crop yields [22]. Improved vegetable crop growth and yield performances with complementary application of inorganic and organic fertilizers, compared to sole application of either organic or inorganic fertilizer, have been reported [23,22,24,25,26]. The complementary use of organic manure and inorganic fertilizers has been proven to be a sound soil fertility management and crop production strategy in many countries of the world [27,28]. In Nigeria, Makinde et al. [29], in their study on combined application of organic manures and mineral fertilizers, recommended the use of either kola pod husk or pacesetter organic fertilizer at 3 t ha-1 alone or combined with NPK fertilizer at reduced levels as being suitable for improving the yield and nutritional quality of amaranthus. High and sustained crop growth and yield could therefore be obtained with combined and judicious use of balanced inorganic and organic fertilizers. This study was therefore conducted to evaluate the effects of amending the soil with organic manure supplemented with mineral fertilizer on the growth and foliar yield of waterleaf in Uyo, southeastern Nigeria.

Experimental Site

The experiment was conducted at the University of Uyo Teaching and Research Farm, located at Use-Offot, Uyo, Akwa Ibom State of Nigeria, between March 6 and June 6 in both the 2009 and 2010 early cropping seasons. The site is located at latitude 5°17' to 5°27' N and longitude 7°27' to 7°58' E, at an altitude of 38.1 m above sea level. This rainforest zone receives about 2500 mm rainfall annually. The rainfall pattern is bimodal, with long (March-July) and short (September-November) rainy seasons separated by a short dry spell of uncertain length, usually during the month of August.
The mean relative humidity is 78% and the atmospheric temperature is 30°C. The mean daily sunshine duration is 12 hours [30]. Soil analysis revealed the following physico-chemical characteristics: pH in water of 5.6, 1.37% organic matter, 0.10% total nitrogen and 31.77 mg/kg available P, while exchangeable base values were 199.02, 2.88 and 1.20 cmol kg-1 for K, Ca and Mg, respectively. The soil particle size distribution was: sand 86.9%, silt 2.8% and clay 10.3%.

Experimental Design, Treatment and Cultural Details

The experiment was laid out in a randomized complete block design with three replicates. Treatments were six fertilizer rates: NPK (15:15:15) at 400 kg ha-1, poultry manure (PM) at 5 t ha-1, PM at 2.5 t ha-1 + NPK at 200 kg ha-1, PM at 3.75 t ha-1 + NPK at 100 kg ha-1, PM at 1.25 t ha-1 + NPK at 300 kg ha-1, and a control (without amendment). Each plot measured 6 m x 6 m with 1 m inter-plot and replicate spacing. The site was cleared manually and organic manures were incorporated into the soil during preparation of raised seedbeds of 25 cm depth using a garden fork and spade, while NPK (15:15:15) fertilizer was applied two weeks after planting according to treatment. A waterleaf landrace, locally called mmong mmong ikong Uyo, was planted manually at a spacing of 5 cm x 5 cm using stem cuttings of 10 cm length with leaves still attached. Manual weeding was carried out at 3, 6 and 9 weeks after planting (WAP).

Data Collection and Analysis

Fifty plants were randomly selected and tagged per plot (excluding the border rows) for data collection. Growth and yield parameters measured included: height, number of leaves per plant, leaf area (determined graphically), number of branches per plant, stem girth (measured using an inelastic string around the stem) and total fresh foliage yield (i.e. from sequential harvesting done at 3, 6, 9 and 12 WAP). Data collected were subjected to analysis of variance and means compared using the least significant difference (P=0.05).
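The analysis of variance and LSD comparison described above can be sketched numerically. The following is a minimal illustration only, not the authors' actual computation: the function name `rcbd_lsd` and the sample yield figures are hypothetical, and it assumes the standard randomized-complete-block decomposition with LSD = t(0.025, df_error) x sqrt(2 x MSE / r) for r blocks.

```python
import numpy as np
from scipy import stats


def rcbd_lsd(data):
    """ANOVA for a randomized complete block design (RCBD).

    data: 2-D array-like, rows = treatments, columns = blocks (replicates).
    Returns (F statistic for treatments, LSD at P = 0.05).
    Assumes at least two treatments and two blocks, with some residual error.
    """
    data = np.asarray(data, dtype=float)
    t, r = data.shape
    grand = data.mean()

    # Partition the total sum of squares into treatment, block and error parts.
    ss_total = ((data - grand) ** 2).sum()
    ss_treat = r * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_block = t * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_error = ss_total - ss_treat - ss_block

    df_error = (t - 1) * (r - 1)
    mse = ss_error / df_error

    f_treat = (ss_treat / (t - 1)) / mse
    # Two-sided 5% critical t value on the error degrees of freedom.
    lsd = stats.t.ppf(0.975, df_error) * np.sqrt(2.0 * mse / r)
    return f_treat, lsd


# Hypothetical foliage yields (t/ha) for 3 treatments x 3 blocks.
yields = [[10, 12, 11],
          [14, 15, 16],
          [9, 8, 10]]
f, lsd = rcbd_lsd(yields)
# Two treatment means differing by more than `lsd` are declared
# significantly different at P = 0.05 (here F ≈ 33.6, LSD ≈ 2.07).
```

Means comparison then reduces to checking whether the absolute difference between any two treatment means exceeds the LSD value.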
Table 1 shows that at 3, 6, 9 and 12 WAP there were significant differences (P=0.05) in the height of waterleaf among the different fertilizer treatments in 2009. At 3, 6, 9 and 12 WAP, waterleaf in the 100 kg ha-1 NPK + 3.75 t ha-1 PM treatment was taller than in the other treatments by 12-15%, 9-54%, 13-55% and 12-15%, respectively. The control treatment consistently produced the shortest waterleaf. Application of poultry manure alone at 5 t ha-1 enhanced plant height compared with the application of NPK alone at 400 kg ha-1. In 2010, waterleaf height also differed significantly (P=0.05) at all sampling times among the fertilizer treatments (Table 1). The percentage differences observed in waterleaf height in 2009 at the different sampling intervals were also maintained in 2010. Also, the use of poultry manure alone was better than the use of NPK (15:15:15) alone.

Table 2 shows that at 3, 6, 9 and 12 WAP there were significant differences (P=0.05) in the number of branches per plant among the different fertilizer treatments in 2009, but no clear pattern was maintained. At 3 and 6 WAP, the 100 kg ha-1 NPK + 3.75 t ha-1 PM treatment produced more branches per plant than the other treatments by 28-56% and 19-46%, respectively. At 9 WAP, 5 t ha-1 poultry manure (PM) produced 24-75% more branches than the other treatments, whereas at 12 WAP the number of branches from the 300 kg ha-1 NPK + 1.25 t ha-1 PM plot superseded the other treatments by 52-78%. The control treatment had the least number of branches per plant. In contrast, in 2010 the number of branches per plant in the 100 kg ha-1 NPK + 3.75 t ha-1 PM treatment superseded the others at 3, 6, 9 and 12 WAP by corresponding margins of 17-58%, 11-43%, 4-40% and 17-55%. Application of poultry manure alone produced more branches than the NPK 400 kg ha-1 treatment in both seasons.
Number of Leaves and Stem Girth

In 2009, the number of leaves per plant differed significantly (P=0.05) among the different fertilizer treatments but showed no clear direction (Table 3). At 3 WAP, the 200 kg ha-1 NPK + 2.5 t ha-1 PM treatment had 9-47% more leaves per plant than the others, while at 6 and 9 WAP the 5 t ha-1 poultry manure (PM) treatment had 48-75% and 11-60% more leaves than the others. At 12 WAP, the number of leaves in the 100 kg ha-1 NPK + 3.75 t ha-1 PM plot superseded the other treatments by 4-55%. All the fertilized plots had a higher number of leaves per plant than the control treatment, and application of poultry manure alone produced more leaves than NPK at 400 kg ha-1. The number of leaves per plant also differed significantly (P=0.05) among the fertilizer treatments in 2010 (Table 3). At 3 WAP, the 200 kg ha-1 NPK + 2.5 t ha-1 PM treatment had 7-48% more leaves than the others, whereas at 6, 9 and 12 WAP the 100 kg ha-1 NPK + 3.75 t ha-1 PM treatment produced 11-58%, 14-59% and 16-54% more leaves per plant than the others. The control treatment consistently produced the least number of leaves per plant, and application of poultry manure alone produced more leaves than application of NPK at 400 kg ha-1.

In 2009, stem girth at 3, 6, 9 and 12 WAP differed significantly among the different fertilizer treatments (Table 4). Stem girth in the 100 kg ha-1 NPK + 3.75 t ha-1 PM plot was bigger than in the other treatments at 3, 6, 9 and 12 WAP by 13-55%, 7-28%, 7-29% and 6-73%, respectively. The smallest stem girth was produced by the control treatment. Waterleaf stem girth also differed significantly (P=0.05) among fertilizer treatments in 2010 (Table 4). Table 5 shows that at 3, 6, 9 and 12 WAP the leaf area of waterleaf differed significantly (P=0.05) among the different fertilizer treatments in 2009. The leaf area in the 100 kg ha-1 NPK + 3.75 t ha-1 PM plot was larger than in the other treatments at 3, 6, 9 and 12 WAP by 12-45%, 16-53%, 5-39% and 11-43%, respectively.
The control treatment produced the smallest leaf area. Waterleaf leaf area was also significantly (P=0.05) influenced by the fertilizer treatments in 2010 (Table 5). The 100 kg ha-1 NPK + 3.75 t ha-1 PM treatment produced the widest leaves, following the same pattern observed in 2009, while the control treatment had the smallest leaf size. Generally, the 5 t ha-1 PM treatment produced bigger leaves than sole application of 400 kg ha-1 NPK (15:15:15) fertilizer.

Table 6 shows that the total fresh foliage yield of waterleaf differed significantly (P=0.05) among the different fertilizer treatments in 2009 and 2010. The 100 kg ha-1 NPK + 3.75 t ha-1 PM treatment produced the highest total foliage yield (56.03 t ha-1 and 54.36 t ha-1 in 2009 and 2010, respectively), while the control had the lowest (12.12 t ha-1 and 11.71 t ha-1 in 2009 and 2010, respectively). The 100 kg ha-1 NPK + 3.75 t ha-1 PM treatment produced 38-78% and 35-78% more total foliage than the other treatments in 2009 and 2010, respectively. Foliage yield from the sole application of 5 t ha-1 PM was higher than that of 400 kg ha-1 NPK by 15% and 13% in 2009 and 2010, respectively. There were slight differences in some of the parameters measured between the 2009 and 2010 cropping seasons.

DISCUSSION

Results of this study showed significant differences in all the vegetative characteristics and in foliage yield among the fertilizer treatments. The study also demonstrated that application of 5 t ha-1 of poultry manure alone performed better than the 400 kg ha-1 NPK treatment. However, combined application of 100 kg ha-1 NPK + 3.75 t ha-1 PM resulted in greater vegetative growth and foliage yield of waterleaf than the other treatments. This may be due to synergistic effects of combining organic and inorganic fertilizers, which optimally supplied the needed plant nutrients.
The mineral fertilizer supplied the needed nutrients to the waterleaf at the initial growth stage, while the poultry manure provided the needed nutrients at later growth stages. Udoh et al. [3] and Ndaeyo et al. [25] stated that the use of organic manure can enhance soil productivity and crop yield. Alves et al. [31] stressed the need to supplement organic manures with nitrogen fertilizers so as to increase nitrogen supply and, in addition, contribute to the increase in soil organic matter and in other macro- and micronutrients required for crop growth. Most of the time, the best results came from plots amended with NPK at 100 kg ha-1 + PM at 3.75 t ha-1, which had the highest level of organic manure (PM) blended with mineral fertilizer. This agrees with Gill and Meelu [20], who found that crop yield increased with an increase in the level of nitrogen-blended manure. Combined application of organic manure and mineral fertilizers, particularly in the tropics, also has the additional advantages of buffering the soil against undesirable acidification and increasing the availability of micronutrients [28,26,34]. Studies [32,33,35] have also demonstrated that application of organic waste, alone and in combination with mineral fertilizer, enhanced root and shoot biomass, and the general growth, yield and yield components of crops compared with sole application of NPK fertilizer (400 kg ha-1). Similarly, Amalu and Oko [36] reported that yield performance was better in manured than in control plots and that responses varied widely with the sources of manure in terms of vegetative growth and yields. The variability in the performance of the fertilizer treatments could be due to variation in their nutrient composition, since different ratios were combined [37]. Aliyu [37] and Dauda et al.
[16] also reported that application of poultry manure at 5 t ha-1 and farmyard manure at 5-10 t ha-1 supplemented with 50 kg N ha-1 resulted in adequate crop growth and maximum fruit yield of pepper (Capsicum annuum L.) and watermelon [Citrullus lanatus (Thunb.) Matsum & Nakai], respectively. Findings from the present study are in consonance with those of Abgede [38], who reported that combined application of sub-optimal rates of NPK fertilizer and poultry manure enhanced plant performance compared with application of NPK fertilizer or poultry manure alone. Also, Makinde et al. [29] stated that combined application of organic manures and mineral fertilizers, either as kola pod husk and Pacesetter organic fertilizer at 3 t ha-1 alone or combined with NPK fertilizer at reduced levels, is suitable for vegetable production. Integrated nutrient management through combined use of organic wastes and chemical fertilizers has been reported to be an effective approach to combat nutrient depletion and promote sustainable crop productivity [39,40,41,42]. The slight differences observed in the vegetative growth of waterleaf between the two cropping seasons could be attributed to vagaries of weather.

CONCLUSION

Given the high cost, scarcity, and environmental problems associated with the use of mineral fertilizer in Nigeria, and based on the foliage yield obtained in this study, it is evident that the use of organic manure in combination with mineral fertilizer (particularly the 100 kg ha-1 NPK + 3.75 t ha-1 PM or 200 kg ha-1 NPK + 2.5 t ha-1 PM treatment) can sustain waterleaf production. It was also demonstrated that applying 5 t ha-1 PM alone would be more rewarding than sole application of 400 kg ha-1 mineral fertilizer for waterleaf production on an Ultisol.
Effect of an Adaptation Strategy to a Single-Space Concentrate Feeder with Lateral Protections on Performance, Eating, and Animal Behavior after Arrival of Fattening Holstein Calves

The objective of the current study was to evaluate the effect of an adaptation strategy to a single-space concentrate feeder with lateral protections forming a chute (SF) on performance, eating pattern, and animal behavior in calves during the first 6 weeks after arrival at the fattening farm. Two hundred sixteen Holstein calves (120 ± 3.8 kg initial body weight and 102 ± 2.7 days of age), from two separate batches, were randomly allocated to one of 6 pens equipped with a computerized concentrate SF, a separate straw feeder, and a water bowl. Pens were assigned to either a conventional adaptation strategy (CA), in which the chute was widened for the first 4 days, or an alternative adaptation strategy (AA), in which no chute was placed for the first 4 days and an additional feeder was also used during the arrival period (the first 14 days after arrival). All animals had ad libitum access to concentrate and straw. Daily concentrate consumption and eating pattern, weekly straw consumption, and fortnightly body weight (BW) were recorded throughout the study. Animal behavior was recorded by scan sampling on days 1, 3, 5 and 7, and weekly throughout the study. Eating (concentrate and straw) and drinking behaviors were filmed for 4 hours on days 1, 5, and 15 of the study. During the first week of the arrival period, calves on AA had a greater (p<0.01) concentrate intake than calves on CA, which also showed a more variable (p<0.01) daily intake. In addition, the final BW after 42 days of study was greater (p<0.05) in AA than in CA calves.
A greater (p ≤ 0.01) percentage of animals per pen eating concentrate and drinking, a shorter (p<0.01) occupancy time, a greater (p<0.01) number of animals and visits, a reduction (p<0.05) in waiting time, and an increase (p<0.01) in the number of displacements were recorded with AA than with CA during the first week of the arrival period. In conclusion, the adaptation strategy (chute not placed and additional feeder) was successful at facilitating feed access and encouraging concentrate consumption during the first week of the arrival period, improving concentrate intake in the short term (first week) and BW in the mid term (sixth week) after arrival at the fattening farm.

Introduction

A single-space concentrate feeder with lateral protections forming a chute (SF) is an alternative to the conventional collective feeder, used to record individual concentrate intakes for research purposes [1,2] and also to decrease total concentrate consumption (intake plus wastage) without impairing overall fattening performance, rumen health, and welfare in Holstein bulls fed high-concentrate diets [3]. However, that study revealed that animals reared on SF had difficulty accessing feed during the first 2 weeks after arrival at the fattening farm because of the feeder design, even with widening of the chute for the first 4 days. Therefore, during the arrival period (first 2 weeks) these calves had diminished concentrate consumption and growth compared with those fed in multiple-space feeders (3.0 vs. 3.8 ± 0.25 kg/d for intake, and 1.3 vs. 1.6 ± 0.12 kg/d for average daily gain (ADG), respectively). These results were in agreement with findings reported by Gonyou and Stricklin [4]. Furthermore, complementary observations and records support the hypothesis that animals did not adapt well to the SF [3].
It is well known that ensuring adequate feed consumption soon after arrival at the fattening farm is crucial to improving performance [5], and increasing the number of feeding places increases concentrate intake and ADG in newly arrived fattening calves [6]. Thus, in order to facilitate feed access and encourage intake after arrival at the fattening farm, it was hypothesized that concentrate consumption and animal growth in SF-fed calves could be improved by two complementary arrangements: providing free access to feed for the first 4 days (without the lateral protections forming the chute), together with an additional feeder (2 feeding spaces per pen) during the first 2 weeks. The objective of the present study was to evaluate the effect of this adaptation strategy (SF without lateral protections for the first 4 days, and an additional feeder in which the feed offered was gradually reduced over the first 14 days) on performance, eating pattern, and animal behavior of SF-fed Holstein calves during the first 6 fattening weeks after arrival.

Animals, facilities, and treatments

Animals were reared under commercial conditions on a farm owned by Agropecuaria Montgai SL (Lleida, Spain) and were managed following the principles and specific guidelines of the IRTA (Institut de Recerca i Tecnologia Agroalimentàries) Animal Care Committee. Two hundred sixteen male Holstein calves (120 ± 3.8 kg initial BW and 102 ± 2.7 days of age) from two separate fattening batches (114 and 102 animals per batch) were used in a replicated study. The first batch was in summer 2013 (from June to August), and the other took place in winter 2015 (from January to March). The length of the experiment was 42 days after arrival at the fattening farm (14 days of arrival period and 28 days of initial growing period). After arrival, calves were weighed, fitted with a radio frequency transponder on the left ear, and randomly allocated to one of 6 pens (19 and 17 animals per pen for each batch).
Each pen was equipped with a computerized concentrate single-space feeder (0.50 m length × 0.26 m width × 0.15 m depth) with lateral protections (1.40 m length × 0.80 m height) forming a chute (SF) [3]. Concentrate feeders were manufactured in stainless steel. The covered pens (12 m length × 6 m width) were deep-bedded with straw and had a separate straw feeder (3.00 m length × 1.12 m width × 0.65 m depth; 7 feeding spaces) and a water bowl. Each pen was randomly assigned to one of 2 treatments, which consisted of two different strategies of adaptation to the SF design during the arrival period (first 14 days after arrival): a conventional adaptation strategy (CA) and an alternative adaptation strategy (AA). The CA was the strategy followed in Verdú et al. [3]: the chute was widened for the first 4 days of the arrival period to facilitate feeder access; after this adaptation time, the width of the chute was adjusted to 42 cm, providing sufficient space for only one animal to eat comfortably at a time. Conversely, the AA treatment was designed to enhance the adaptation of the animals to the feeder design, to improve feed access and stimulate intake. The following arrangements were implemented: 1) the chute was not placed for the first 4 days after arrival (Figure 1), leaving feeder access completely free without lateral protections; and 2) an additional single-space concentrate feeder (0.60 m length × 0.50 m width × 0.20 m depth), without lateral barriers, was placed on the left side of the computerized feeder (Figure 1), in which supplementary feed was provided daily at 10.00 hours, the amount offered diminishing progressively by 5 kg each day throughout the initial 14 days of the study (from 70 to 0 kg per pen and day).
Concentrate computerized feeder

Animals were fed concentrate ad libitum via a computerized feeder (Voltec®, Lleida, Spain), which was composed of a single trough with lateral protections forming a chute and used radio frequency technology to record the daily concentrate consumption and eating behavior of each animal within a pen. All feeders were continuously provided with feed by an automatic feeding system, as described in Verdú et al. [3]. The chute provided protection when an animal accessed the feeder to eat and prevented interference from other nearby animals at the sides, as the antenna detected transponders whenever animals were within 50 cm of the feeder. Each feeder was equipped with an antenna (Azasa-Allflex, Madrid, Spain) that emitted a 130-kHz electromagnetic field to detect each animal visit via a passive (half-duplex) transponder, which was encased in a plastic ear tag (Azasa-Allflex, Madrid, Spain) placed in the left ear of each bull. In addition, the feeder was suspended on 4 load cells (Utilcell, Barcelona, Spain), which constituted a scale. This scale was programmed to transmit the feed weight, at 1-min intervals or when a weight change was detected, to a PLC (Allen-Bradley model 1769-L35E; Rockwell Automation, Milwaukee, USA; Programmable Logic Controller) and, lastly, to display it on a personal computer. The scales were calibrated weekly. At each visit to the feeder, the bull was identified and the computer recorded the initial and final feed weight, with the corresponding initial and final times. The antenna logged the presence of each transponder every 5 s, for as long as the transponder was within the read panel range, as an animal visit; when another transponder was detected, or the antenna did not log any transponder for 60 s, a new visit was created. Before the study started, the computerized concentrate feeder was validated using data from the 6 feeders.
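The visit-logging rule just described (a transponder ping roughly every 5 s, with a new visit opened when a different transponder appears or no ping is seen for 60 s) can be sketched as follows. This is an illustrative reconstruction, not the manufacturer's software; the `(timestamp_s, tag)` ping format is an assumption.

```python
# Illustrative sketch of the visit-segmentation rule described in the text:
# the antenna logs (timestamp_s, tag) pings; a new visit starts when a
# different tag is detected or no ping for the same tag arrives for 60 s.
GAP_S = 60  # silence threshold that closes a visit

def segment_visits(pings):
    """Group (timestamp_s, tag) pings into visits: (tag, start_s, end_s)."""
    visits = []
    for t, tag in pings:
        if visits and visits[-1][0] == tag and t - visits[-1][2] < GAP_S:
            visits[-1][2] = t           # same animal, short gap: extend visit
        else:
            visits.append([tag, t, t])  # new animal or long gap: new visit
    return [tuple(v) for v in visits]
```

Visit duration and meal size per visit would then follow from the start/end times combined with the load-cell weight records.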
The validation was conducted on random days over a period of 4 months; each day, one of 2 observers watched 2 feeders simultaneously for 60 min. A digital timer synchronized with the computer time, and the reader scale panel of feed weight, were used. A total of 510 events or visits were recorded. The validation method consisted of visually recording, for each visit, the animal identification number, the initial time and feed weight when the animal entered the feeder, and the final time and feed weight when the animal left the feeder. Then, from the two sources of data (software and observations), meal size and meal duration were calculated as parameters to validate the accuracy of the system [1]. The coefficients of determination for meal size and meal duration were 0.97 and 0.98, respectively (p<0.01). Furthermore, the sensitivity (99.5%) and the specificity (99.9%) were calculated [7], obtaining greater values than those reported by DeVries et al. [8]. In conclusion, the high values for sensitivity, specificity and predictability indicated that the described computerized concentrate feeder was an adequate system to monitor individual eating behavior in beef cattle (the number of visits per animal, the length of each visit, the amount of concentrate consumed per visit and per animal, and the total daily eating time and concentrate consumption per animal). Furthermore, if a calf had not been detected at the feeder during the previous 24 hours, the computerized feeder activated an alarm. This alarm notification was used as a record of poor adaptation, indicating an inability to adapt to the SF design. Each time a calf triggered an alarm and no consumption was recorded the next day, that particular animal was assisted to access the feeder, ensuring that the transponder worked and the animal ate.
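The validation statistics reported above follow from standard formulas applied to the paired observer/software records; a minimal sketch, with the numbers in the test being invented examples rather than the study's data:

```python
def r_squared(observed, predicted):
    """Coefficient of determination between observer and software records."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((y - mean_obs) ** 2 for y in observed)
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return 1 - ss_res / ss_tot

def sensitivity_specificity(tp, fn, tn, fp):
    """Visit-detection accuracy: sensitivity = TP/(TP+FN),
    specificity = TN/(TN+FP), from counts of true/false detections."""
    return tp / (tp + fn), tn / (tn + fp)
```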
Five accumulated alarms for one animal were considered the non-adaptation criterion, and such a calf was removed from the study for that reason. The evaluation of animal ability to adapt to the SF design from the alarm notification system was thus performed from the time the adaptation strategies finished for each treatment (after day 4 and after day 15 for CA and AA, respectively).

Feed consumption and performance

Calves received a commercial concentrate (Table 1) formulated according to National Research Council recommendations [9], and wheat straw (35 CP, 16 EE, 709 NDF, and 61 ash; g/kg of DM basis), both ad libitum. During the arrival period (14 days) all animals were fed a starter concentrate, while during the rest of the growing period (28 days) they were fed a grower concentrate. Fresh water was available at all times. A sample of each concentrate was taken for DM determination and chemical analysis. The computerized feeder recorded daily individual concentrate consumption throughout the study (38 days), except for the arrival period (14 days), during which daily intake was collected per pen, since the different adaptation strategies (chute widened or not placed for the first 4 days, and additional feeder during the initial 14 days of the study) did not allow correct individual intake recording by the computerized feeder. The amount of straw offered to each pen was recorded weekly. Animals were weighed weekly throughout the study on the same weekday and at the same time of day, and BW data were used to calculate ADG and feed efficiency. To assess the variability of growth among calves sharing the same pen, the within-pen coefficient of variation (CV) of BW and ADG was calculated weekly. Lastly, the weekly gain-to-concentrate ratio (concentrate efficiency) was estimated by dividing the BW increase by the average daily concentrate consumption over that 7-day period.
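The performance variables defined above (ADG, within-pen CV, and the gain-to-concentrate ratio) are simple arithmetic; a minimal illustrative sketch, with invented example values in the usage:

```python
def adg(bw_start, bw_end, days):
    """Average daily gain (kg/d) between two weighings."""
    return (bw_end - bw_start) / days

def within_pen_cv(values):
    """Within-pen coefficient of variation, %: 100 * sample SD / mean."""
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    return 100 * sd / m

def gain_to_concentrate(bw_gain_kg, mean_daily_intake_kg):
    """Weekly concentrate efficiency as described in the text:
    BW increase over the week divided by mean daily concentrate intake."""
    return bw_gain_kg / mean_daily_intake_kg
```

For example, a calf growing from 120 to 162 kg over the 42-day study has an ADG of 1.0 kg/d.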
Animal behavior

General activities and social behaviors of animals within the same pen were recorded from 08.30 to 11.00 hours by scan sampling on days 1, 3, 5 and 7, and weekly throughout the study. Animal behavior was analyzed according to Rotger et al. [10], Robles et al. [11], Mach et al. [12], and Marti et al. [13]. Records correspond to total counts of each activity in a pen [14], and the scan sampling method describes the behavior exhibited by an animal at a fixed time interval [15]. Two pens were observed at the same time; social behavior (Table 2) was scored during 2 continuous sampling periods of 15 minutes, whereas general activities (Table 3) were scored using 2 scan samplings of 10 seconds at 5-minute intervals [12]. This recording procedure (15 minutes) was repeated twice during the study of animal behavior.

Table 2. Description of the social behavioral categories recorded.
- Nonagonistic interactions:
  - Self-grooming: nonstereotyped licking of the animal's own body, or scratching with a hind limb or against the fixtures.
  - Social behavior: licking or nosing a neighboring bull with the muzzle, or horning.
  - Oral behavior: the act of licking or biting the fixtures.
- Stereotypies:
  - Oral stereotypies: tongue rolling, stereotyped licking, or biting on certain bars or sites in the stall.

Table 3. Description of the general activities recorded.
- Eating: the animal (eating concentrate or straw) had its head in the feeder and was engaged in chewing; an observation was classed as eating when the bull had its muzzle in the feed bunk, or was chewing or swallowing feed with its head over the bunk.
- Drinking: the animal had its muzzle in the water bowl or was swallowing water.
- Ruminating: included the regurgitation, mastication, and swallowing of the bolus.
- Lying: recorded as soon as the animal was not standing on its 4 legs, independently of any activity the animal might perform.
- Standing: recorded when the animal was standing on its 4 legs, independently of any activity the animal might perform.

Eating behavior

During the arrival period (14 days), the feeding area of each pen (including the concentrate feeders, computerized and additional, the straw feeder, and the drinker) was filmed for 24 hours on days 1, 5, and 15 of the study using digital cameras (Sony CSM-BV420; Sony Corp., Barcelona, Spain) to analyze the eating pattern. Day 1 was the first whole day of the study after the calves' arrival; on day 5, the chute was narrowed (CA) or put in place (AA); and on day 15, the supplementary concentrate offered in the additional feeder (AA) ended. Videotapes were processed by continuous recording of the activities performed by the animals. The recorded activities (eating concentrate or straw, drinking, waiting time to access the feeder or drinker, and displacements at the feeder or drinker) were scored by simultaneously recording the time (min), the number of animals involved, and the frequency (number per hour). Eating (concentrate or straw) was defined as when an animal had its head in the feeder, and an observation was classed as eating when the bull was eating from the feed bunk with its muzzle in the feed bunk or chewing with its head over the bunk. Drinking was recorded when an animal had its head in the water bowl, and an observation was classed as drinking when the bull had its muzzle in the water bowl. Waiting time to access the feeder or drinker was recorded when an animal was close to the feeder or drinker and had the intention to access it, but the place was occupied by another animal.
Displacements among animals at the feeders (concentrate or straw) and drinker were recorded when one animal displaced a pen mate that was eating or drinking and forced the displaced animal to remove its head from the feeding space. Only displacements with physical contact were considered. Only 4 hours of recordings (06.00 to 10.00 hours) were used to create the data set, as eating behavior data from a previous study [3] showed that during this time frame a first daily peak of eating activity was observed in cattle fed from collective feeders with feed continuously available. During the arrival period, the eating behavior recorded at the additional and computerized feeders was considered together for the behavioral data analysis. For the growing period (28 days), eating behavior was monitored by the computerized concentrate feeder recording individual data for each animal (the number of visits per animal, the length of each visit, the amount of concentrate consumed per visit and per animal, and the total daily eating time and concentrate consumption per animal).

Chemical analyses

Feed samples were analyzed for DM (24 hours at 103°C), ash (4 hours at 550°C), CP by the Kjeldahl method based on method 981.10 [16], NDF according to Van Soest et al. [17] using sodium sulfite and α-amylase, and EE by Soxhlet with a previous acid hydrolysis based on method 920.39 [16].

Calculations and statistical analyses

Firstly, a power analysis was conducted to check whether 6 replicates per treatment would be sufficient to detect differences in concentrate consumption (3.0 vs. 3.8 ± 0.25 kg/d) and ADG (1.3 vs. 1.6 ± 0.12 kg/d) for SF vs. multiple-space feeders, respectively, as reported in Verdú et al. [3]. The power analysis was conducted for these outcome variables using the standard deviation of each parameter between pens observed in the previous study [3], an alpha of 0.05, and a power of 0.80.
The power analysis indicated that at least 3 (intake) and 4 (ADG) replicates (pens) per treatment were necessary to detect the expected differences between treatments of 27% and 23% for intake and ADG, respectively. The pen was considered the experimental unit for all statistical analyses (n=6), and animals were included in the analysis as the sampling unit when individual measurements were possible [3]. Pen data of daily concentrate consumption, eating behavior, and performance were averaged by week and batch. Individual animal data of daily concentrate consumption, eating behavior, and performance were averaged by pen, week, and batch. The frequency of each social behavior was obtained by summing by day, pen, and scan, while the percentage of each general activity was averaged by day, pen, and scan. An arcsine plus 1 transformation was applied to the behavioral data to achieve a normal distribution. The occupancy time of each feeder (concentrate and straw) and of the drinker (minutes), and the total waiting time to access each feeder and the drinker (minutes), were calculated as the sum of the total time spent performing these activities per pen, day, and batch. The number of bulls eating and drinking, and the number of visits recorded at each feeder and the drinker, were averaged by pen, day, and batch. The number of displacements recorded at each feeder and the drinker was summed by pen, day, and batch, divided by the total time, and expressed as the frequency of displacements per hour. Feeder and drinker occupancy and waiting time data were also expressed as the percentage of time devoted to these activities out of the total 4 hours of video recording analyzed (occupancy and waiting time rates). The occupancy and waiting time rates were square-root transformed to achieve a normal distribution. The means presented in the tables correspond to non-transformed data, and the standard errors of the mean (SEM) and p-values to the transformed data.
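The normalizing transforms mentioned above can be illustrated as follows. The exact form of the paper's "arcsine plus 1" transform is not spelled out, so this sketch assumes the common arcsine-square-root transform for proportion data, alongside the square-root transform applied to the occupancy and waiting-time rates:

```python
import math

def arcsine_sqrt(proportion):
    """Common variance-stabilizing transform for proportions in [0, 1];
    assumed form of the paper's arcsine transform."""
    return math.asin(math.sqrt(proportion))

def sqrt_rate(rate_percent):
    """Square-root transform applied to occupancy/waiting-time rates (%)."""
    return math.sqrt(rate_percent)
```

Means would still be reported on the original scale, with SEM and p-values computed on the transformed scale, as the text describes.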
To estimate the eating pattern, meal criteria for each animal and period were calculated. The meal criterion (the maximum amount of time between visits to the feeder for a visit to be considered part of the same meal) was calculated using a model composed of 2 or 3 normal distributions fitted to the natural logarithm of the time (in seconds) between feeder visits, as described by Bach et al. [18]. Visits to the computerized feeders were then separated into meals, and meal frequency, meal duration and size, inter-meal duration, and eating rate were calculated. Consumption, performance, and eating and animal behavior data were analyzed using a mixed-effects model with repeated measures (Version 9.2, SAS Inst., Inc., Cary, NC). The model included initial BW as a covariate; treatment, period (weekly for performance and consumption pen data; daily or weekly for eating and animal behavior), and their interaction as fixed effects; and pen and batch as random effects. Period was considered a repeated factor, and pen nested within treatment was subjected to 3 variance-covariance structures: compound symmetry, autoregressive order 1, and unstructured. The covariance structure that yielded the smallest Schwarz's Bayesian information criterion was considered the most desirable. Initial and final BW and age data were analyzed using a mixed-effects model (Version 9.2, SAS Inst., Inc., Cary, NC) including treatment as a fixed effect, and pen and batch as random effects. Alarm notifications, which were used as animal adaptation records, were analyzed using the GLIMMIX procedure (Version 9.2, SAS Inst., Inc., Cary, NC) including treatment as a fixed effect, and pen and batch as random effects; a Poisson model with repeated measures was used to analyze the count adaptation data. Significance was established at p<0.05, and trends are discussed at p ≤ 0.10. strategies (12.1 and 8.5 ± 4.91% treated calves for CA and AA, respectively; data not shown).
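The meal-criterion approach of Bach et al. fits normal distributions to the log-transformed inter-visit intervals and takes the criterion where the "within-meal" and "between-meal" components cross. The sketch below is illustrative only: it assumes an already-fitted 2-component mixture (the weights, means, and SDs in the test are invented) and solves for the crossing point of the two weighted Gaussian densities by equating their logs, which reduces to a quadratic in x.

```python
import math

def gaussian_pdf(x, w, m, s):
    """Weighted normal density w * N(x | m, s)."""
    return w / (s * math.sqrt(2 * math.pi)) * math.exp(-((x - m) ** 2) / (2 * s ** 2))

def meal_criterion(w1, m1, s1, w2, m2, s2):
    """Crossing point (in log-seconds) of two weighted Gaussians fitted to
    log inter-visit intervals; intervals longer than exp(criterion) would
    start a new meal. Component parameters are assumed pre-fitted."""
    # Equate log densities: quadratic a*x^2 + b*x + c = 0
    a = 1 / (2 * s2 ** 2) - 1 / (2 * s1 ** 2)
    b = m1 / s1 ** 2 - m2 / s2 ** 2
    c = (m2 ** 2) / (2 * s2 ** 2) - (m1 ** 2) / (2 * s1 ** 2) \
        + math.log((w1 * s2) / (w2 * s1))
    if abs(a) < 1e-12:                      # equal variances: linear equation
        return -c / b
    disc = math.sqrt(b ** 2 - 4 * a * c)
    roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
    # keep the crossing that lies between the two component means
    return next(x for x in roots if min(m1, m2) <= x <= max(m1, m2))
```

With equal weights and SDs the criterion falls midway between the two means, as expected; visits would then be grouped into meals using the same gap rule as visit segmentation, with the threshold set to exp(criterion) seconds.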
Animal adaptation records

Two calves were removed from the study because of their inability to adapt to the SF design, one from each treatment. No differences between treatments were observed in the number of animals assisted to access the feeder (p>0.24; 5.5 vs. 1.9 ± 1.29% for CA and AA) or in the number of assistances recorded (p=0.11; 6 vs. 13 ± 1.1 assistances for CA and AA). Thus, most calves learned to access the feeder and ate on their own without difficulty. The incidence of adaptation problems, in terms of the number of calves that received assistance together with the number of assistances, was very low for both treatments during the arrival period. Nevertheless, the AA strategy numerically minimized these adaptation problems, reducing the frequency of assisted animals by half.

Feed consumption and performance

A week by treatment interaction was observed (p<0.01) for concentrate consumption (Table 4). During the first week of the arrival period, calves reared with AA recorded a greater concentrate intake than calves on CA (3.5 vs. 2.8 ± 0.12 kg/d; Figure 2). However, for the remainder of the study no differences (p>0.10) between treatments in concentrate intake were observed; intake increased from 3.3 kg/d at week 2 to 4.2 ± 0.12 kg/d at week 6. Furthermore, the adaptation strategy to the SF design had an effect (p<0.05) on final BW after 42 days of the study, resulting in a greater final BW in the AA group than in the CA group (178.8 vs. 174.9 ± 3.37 kg, respectively). Nevertheless, ADG (1.36 ± 0.040 kg/d), feed efficiency (0.37 ± 0.011 kg/kg), accumulated concentrate consumption (144.8 ± 1.78 kg after 42 days), and straw consumption (0.4 ± 0.03 kg/d) were not influenced (p>0.10) by the adaptation strategy used. Straw intake was used only as a guide, as the straw also served as bedding.
1 Treatments were different strategies of adaptation to a single-space feeder with lateral protections: CA=a conventional strategy (in which the lateral protections were widened for the first 4 days of the study); AA=an alternative strategy (in which no lateral protections were placed for the first 4 days and an additional feeder was also used during the first 14 days of the study). Fixed effects were treatment (T), day (D), and the interaction between treatment and day (T × D).

These results indicate, as expected, that the greatest impact of the adaptation strategy was the increase in concentrate intake during the first week after arrival (short-term effect; Figure 2). The narrowing of the chute on day 4 interrupted the increasing trend in concentrate intake recorded by CA calves over the first 3 days of the arrival period, which then returned to intakes similar to those initially recorded (Figure 2). Consequently, calves under the CA strategy needed one additional week to reach concentrate intakes similar to those of animals on AA. Thus, chute management is particularly critical during the first week of adaptation to ensure the expected concentrate consumption, as is the presence of an additional feeder increasing the number of feeding spaces to stimulate concentrate consumption. To our knowledge, there are no studies contrasting adaptation strategies to a single-space concentrate feeder in cattle. However, many other strategies are available to foster intake in calves after feedlot arrival [19], because newly received calves have low feed intakes [20], which may compromise the expected growth rate. In addition, the feed intake data from the current study indicate that the first week after arrival at the fattening farm was the most crucial time for adaptation to the SF design.

Figure 2. The concentrate consumption during the first 2 weeks of the study according to the adaptation strategy applied. The arrow indicates the day when the chute was narrowed (CA) or placed (AA).
For all these reasons, this study suggests that the combination of adaptation arrangements (chute not placed and an additional feeder) achieved the initial purpose of the adaptation strategy (to ease feed access and encourage concentrate consumption) during the first week after arrival. A previous study [6] also reported that increasing the number of feeding spaces during the arrival period is an effective strategy to increase concentrate consumption. Moreover, whereas calves on the AA strategy maintained consumption around 3.4 kg/d, animals on CA exhibited more variable intake between days, especially during the first week of the arrival period. The CV of daily concentrate consumption during the first 2 weeks of the arrival period was greater (p<0.01) in calves on the CA strategy (11.3 ± 1.11%) than in those on AA (7.6 ± 1.11%; data not shown). These results agree with a previously cited study [3], in which great day-by-day variation in feed intake was observed during the arrival period. The increased final BW at day 42 recorded by the AA group suggests a mid-term effect of the adaptation strategy on animal growth. As with concentrate intake variability, the adaptation strategy also affected the growth pattern: based on within-pen CV in ADG, animals on CA tended (p=0.05) to show more growth variability (38.2 ± 2.14%) than AA animals (32.2 ± 2.14%). This result is in accordance with González et al. [6], who reported less variability in ADG as the number of feeding places per pen increased.
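The variability measures reported above are plain coefficients of variation (CV = SD/mean × 100), computed over daily intakes or over animals within a pen. A minimal sketch of that summary statistic, using invented daily intake values (not the study data) to illustrate the CA-versus-AA contrast:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation: sample SD as a percentage of the mean."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Illustrative daily concentrate intakes (kg/d) over two weeks for one pen;
# the numbers below are hypothetical, chosen only to mimic the reported pattern
ca_intakes = [2.8, 3.1, 3.4, 2.7, 2.9, 3.3, 2.6, 3.0, 3.5, 2.8, 3.2, 2.7, 3.1, 2.9]
aa_intakes = [3.4, 3.5, 3.3, 3.6, 3.4, 3.2, 3.5, 3.4, 3.6, 3.3, 3.5, 3.4, 3.2, 3.5]

print(round(cv_percent(ca_intakes), 1))  # more day-to-day variation (CA-like)
print(round(cv_percent(aa_intakes), 1))  # steadier intake (AA-like)
```

The same function applies to within-pen ADG variability by passing per-animal gains instead of daily intakes.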
Animal behavior
General activities: Most of the general activities were not affected by the adaptation strategy (Table 5). 3 Behavioral data were analyzed after arcsine transformation; the means presented herein correspond to non-transformed data, while SEM and p-values correspond to transformed data. However, as expected, during the first week of the arrival period, greater (p ≤ 0.01) percentages of animals per pen eating concentrate and drinking were recorded under the AA strategy (8.9 ± 0.10% and 2.6 ± 0.26%) than under the CA strategy (6.2 ± 0.10% and 1.3 ± 0.26%). In the second week of the arrival period, the reduced amount of concentrate supplied by the additional feeder could explain the lack of differences between treatments. Thus, the general-activity behavioral data indicate that the first week of the study is the most crucial time for adapting the animals to the feeder design. The greater percentage of animals drinking under the AA strategy could be related to the greater concentrate intake recorded in the first week compared with CA, since concentrate and water ingestion are strongly correlated [21]. Animals synchronize feeding and drinking behaviors, altering feed and water consumption [22,23].
Social behavior: No effects of the adaptation strategy on social behaviors were observed (Table 6).
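The table note above refers to an arcsine transformation of the behavioral proportions before analysis. A minimal sketch of the standard arcsine square-root transform for percentage data (assuming that is the variant meant; the exact "plus 1" detail in the original note is not reproduced here):

```python
import math

def arcsine_sqrt(percent):
    """Arcsine square-root transform of a percentage (0-100), in radians.
    Commonly used to stabilize the variance of proportion data before ANOVA."""
    p = percent / 100.0
    return math.asin(math.sqrt(p))

# Illustrative: percentages of animals per pen eating concentrate,
# matching the magnitudes reported in the text
print(round(arcsine_sqrt(8.9), 3))  # AA-like observation
print(round(arcsine_sqrt(6.2), 3))  # CA-like observation
```

Means are back-reported on the original percentage scale, while SEM and p-values come from the transformed scale, as the table note states.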
However, calves under the AA strategy experienced a greater (p<0.01) frequency of displacements (2.6 ± 0.31 times/15 minutes) than CA (1.3 ± 0.31 times/15 minutes) during the first week of the arrival period. This high incidence of displacements was probably a consequence of the absence of a chute during the first 4 days, while the increased number of concentrate feeding places promoted competition for feed access. These results are similar to those of González et al. [6], who observed an increase in the number of displacements when the number of feeding spaces was increased from 1 to 2 in pens of 8 calves. For the rest of the study, no differences in displacements were observed between treatments, confirming the effectiveness of the lateral protections of the chute in preventing displacements around the feeder. Moreover, no stereotypies were observed throughout the experiment.
Eating behavior
There was an interaction between adaptation strategy and filming day for occupancy time (p<0.01), number of bulls (p<0.01), number of visits (p<0.01), displacements (p<0.01), and waiting time for feeder access (p<0.05) throughout the 2 weeks of the arrival period (Table 7). In contrast, no differences (p>0.10) between adaptation strategies were found in eating and drinking behaviors at the straw feeder and drinker during this period. Moreover, for the remaining 4 weeks of the study (growing period), the adaptation strategy did not affect (p>0.10) the eating pattern at the concentrate feeder (6.4 ± 0.30 daily visits, 9.7 ± 0.74 minutes of meal duration, 649.9 ± 28.15 g DM of meal size, 55.5 ± 3.57 minutes of total daily meal duration, 80.0 ± 9.83 g DM/minute of eating rate, 240.8 min of inter-meal duration, and 1,319.5 ± 7.61 minutes of total daily inter-meal duration). Thus, no mid-term effect of the adaptation strategy on eating behavior at the concentrate feeder was observed.
On days 1 and 5 of the arrival period, a greater (p<0.01) occupancy time of the concentrate feeder was recorded for AA feeders (296.8 and 300.7 ± 10.26 minutes, respectively) than for CA feeders (200.4 and 215.4 ± 10.26 minutes, respectively). González et al. [6] reported similar results in calves, in which the time devoted to eating concentrate increased as the number of feeding places per pen increased. On day 15 of the arrival period, no differences (p>0.10) between treatments were observed in time attending the feeder (203.4 ± 10.26 minutes), as both treatments then had a single-space feeder. Thus, an additional feeding place without a chute increased the time spent at the concentrate feeder by 37% (90 minutes) during the arrival period (days 1 and 5). Moreover, the occupancy time rate for the SF design recorded in the current study (89 ± 1.0% of total time analyzed) was similar to that obtained by Verdú et al. [3] (90.6 ± 1.0% of total time analyzed), in which the same SF design was used under similar experimental conditions in terms of number of calves per pen and initial BW. Table 7 Eating and drinking behaviors at concentrate and straw feeders and at the drinker, recorded by video (06.00 to 10.00 hour) on days 1, 5, and 15 of the study, for calves adapted with two different strategies (CA and AA) to a concentrate single-space feeder with lateral barriers on arrival at the fattening farm.
3 Occupancy and waiting time rates were analyzed after root transformation; the means presented herein correspond to non-transformed data, while SEM and p-values correspond to transformed data. 4 Percentage of occupancy time from the total 4-hour video recording time analyzed. 5 Percentage of waiting time from the total occupancy time recorded at the feeders or drinker. However, when expressing the occupancy time of the concentrate feeder per available feeding space on days 1 and 5 of the arrival period (data not shown), AA feeders had a lower (p<0.01) occupancy time (147.8 and 149.7 ± 1.50 minutes, respectively) than CA feeders (200.9 and 215.9 ± 1.50 minutes, respectively). Unexpectedly, occupancy time decreased by around 30% (60 minutes) when the number of feeding places per pen was increased by providing an additional feeder. Thus, occupancy time expressed per feeding space decreased under the AA strategy, indicating that more competition around the feeder may have occurred, even though 2 feeding spaces were available. This hypothesis is supported by the increased displacements at the concentrate feeder in the AA treatment and by the fact that only 60 minutes of occupancy time was recorded at the additional feeder. This greater level of competition at the concentrate feeder under the AA strategy could be considered a positive effect that encourages feed consumption, as corroborated by the intake and growth results described previously. An increase in feed consumption when the level of competition for feed increases has also been reported in dairy cows [24,25,26]. However, this greater level of competition around the concentrate feeder lasted only through the first week of the arrival period, which might be related to the establishment of a hierarchy or order of feeder attendance.
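The per-feeding-space comparison above is simple arithmetic: the AA pen's total occupancy time is split over its two available feeding places before comparing against the single-space CA pen. A worked check using the day-5 means quoted in the text (the split value differs slightly from the reported least-squares mean because the model adjusts the raw averages):

```python
# Day-5 mean occupancy times (minutes), taken from the text
aa_per_space = 300.7 / 2   # AA: total occupancy divided over 2 feeding places
ca_per_space = 215.9       # CA: one feeding place, so per-space = total

reduction_min = ca_per_space - aa_per_space
reduction_pct = reduction_min / ca_per_space * 100.0
print(round(reduction_min))  # minutes less per feeding space under AA
print(round(reduction_pct))  # roughly the "around 30%" stated in the text
```

The check confirms that doubling the feeding places cut per-space occupancy by roughly a third rather than leaving it unchanged, which is what motivates the competition interpretation.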
Lastly, when only one feeding place was available after the 2-week adaptation period, feeder occupancy time was the same between treatments (around 200 minutes), independent of the previous adaptation strategy. From the eating pattern data above, an occupancy time of around 80% of total daily time could be used as a reference for pens of 18 animals of about 120 kg BW with the SF design. On days 1 and 5 of the arrival period, a greater number of animals was recorded (p<0.01) at AA feeders (2.4 and 2.2 ± 0.36 animals, respectively) than at CA feeders (1.3 and 1.1 ± 0.36 animals, respectively). In contrast, no differences (p>0.10) between treatments were observed in the number of animals at the feeder (1.0 ± 0.36 animals) on day 15 of the arrival period. These data indicate that calves prefer to occupy all available feeding spaces at arrival, in agreement with the results observed by Verdú et al. [3]. Thus, during the growing phase, and especially during the arrival period, reducing the animal:feeding-space ratio (from 20:1 to 10:1) seems an effective strategy to stimulate feed intake through its effects on eating behavior. Although both treatments showed (p<0.01) a reduction in the number of visits to the feeder at the beginning (from day 1 to 5), this decline varied with the adaptation strategy. Whereas on days 1 and 5 the number of visits was greater (p<0.01) for the AA strategy (114.7 and 53.1 ± 7.28 visits, respectively) than for the CA strategy (39.7 and 9.5 ± 7.28 visits, respectively), at the end of the arrival period (day 15) no differences were observed between treatments (9.9 ± 7.28 visits). The high frequency of feeder visits under the AA strategy indicates that its arrangements promoted increased activity around the feeder. Therefore, an additional feeding space stimulates feeder visits and feed intake through social facilitation [27], which may facilitate adaptation to the SF design.
This increase in the number of visits has also been associated with a high level of competition in two studies [25,26]. As expected, in both strategies the number of visits decreased (p<0.01) the day after the chute was narrowed (CA) or placed (AA). Lastly, in both treatments the waiting time increased (p<0.01) from day 1 to 5 (by 55% for CA and 65% for AA) once the chute was in place, showing the effectiveness of the lateral protections of the SF in forcing animals to access one by one and eat individually. However, on day 5 of the arrival period, the CA strategy recorded a greater (p<0.05) waiting time to access the concentrate feeder than the AA strategy (89.3 vs. 61.4 ± 9.03 minutes, respectively); the AA strategy reduced waiting time by 30% compared with CA. Also, the CA strategy reduced (p<0.01) the waiting time from day 5 to 15, in contrast to the AA strategy, indicating a better ability to adapt to the SF design because the calves were more familiarized with it. Eating and drinking behaviors at the straw feeder and drinker were not affected by the adaptation strategy over the arrival period of the study (Table 7). The straw feeder results disagree with those of González et al. [6], who observed an increase in time spent eating straw as the feeding space:animal ratio decreased. Moreover, although González et al. [6] found the greatest frequency of displacements at the drinker when the number of feeder places was increased from 1 to 2, the drinking pattern in the current study was not influenced by the number of feeding spaces. In summary, the AA strategy had a positive effect on concentrate intake during the first week after arrival (short-term effect) and on BW after 6 weeks (mid-term effect). Moreover, AA resulted in greater attendance (reducing the waiting time to access the feeder) and more competition (increasing the frequency of displacements) at the concentrate feeder during the first week of adaptation.
In conclusion, the adaptation strategy proposed herein (chute not placed and an additional feeder provided) eased access to feed and encouraged concentrate consumption during the first week of the arrival period, improving concentrate intake in the short term (first week) and BW in the mid term (sixth week) after arrival at the fattening farm. However, further research should be conducted to evaluate long-term effects of the AA strategy on concentrate intake and performance over an entire fattening period. The use of a single-space concentrate feeder with a chute (lateral protections) to feed beef cattle could compromise concentrate consumption and performance during the first weeks after arrival at the fattening farm. The current study evaluated an adaptation strategy consisting of placing the single-space feeder without a chute for the first 4 days and using an additional feeder for the first 14 days after arrival. This strategy had positive implications for performance on arrival at the fattening farm, encouraging concentrate intake during the first week and increasing BW 6 weeks after arrival.
Liquid chromatography mass spectrometry-based profiling of phosphatidylcholine and phosphatidylethanolamine in the plasma and liver of acetaminophen-induced liver injured mice
Acetaminophen (APAP) overdose is one of the most common causes of acute liver failure in many countries. The aim of this study was to describe the profiles of phosphatidylcholine (PC) and phosphatidylethanolamine (PE) in the plasma and liver of APAP-induced liver-injured mice. A time-course study was carried out in C57BL/6 mice at 1 h, 3 h, 6 h, 12 h and 24 h after intraperitoneal administration of 300 mg/kg APAP. A high-throughput liquid chromatography mass spectrometry (LC-MS) lipidomic method was used to detect PC and PE species in the plasma and liver. The expression of PC and PE metabolism-related genes in liver was examined by quantitative reverse transcription polymerase chain reaction (qRT-PCR) and Western blot. Following APAP treatment, the content of many PC and PE species in plasma increased from the 1 h time point, peaked at 3 h or 6 h, and tended to return to baseline at the 24 h time point. The relative contents of almost all PC species in liver decreased from 1 h, appeared lowest at 6 h, and then returned to normal at 24 h, which might be partly explained by the suppression of phospholipase mRNA expression and the induction of choline kinase (Chka) expression. In contrast to the PC profile, the relative contents of many PE species in liver increased upon APAP treatment, which might be caused by the down-regulation of phosphatidylethanolamine N-methyltransferase (Pemt). APAP overdose induced dramatic changes in many PC and PE species in plasma and liver, which might be caused by damage to hepatocytes and interference with phospholipid metabolism in the APAP-injured liver.
Background
As a major site for drug metabolism and elimination, the liver is susceptible to drug toxicity. Drug-induced liver injury (DILI) is a significant clinical problem and a challenge for drug development worldwide. Acetaminophen (N-acetyl-p-aminophenol, APAP) is commonly used as an over-the-counter analgesic and antipyretic drug known to be safe at therapeutic doses. However, APAP overdose has become one of the most common causes of acute liver failure in many countries [1]. APAP-induced liver injury is the most frequent drug hepatotoxicity and the most widely used experimental model of DILI. The mechanism of APAP-induced liver injury is complicated and not fully understood. The accumulation of N-acetyl-p-benzoquinone imine (NAPQI), the reactive and toxic metabolite of APAP, is considered the main cause of liver injury induced by APAP overdose. In mouse APAP models and in humans, the reaction of NAPQI with protein sulfhydryl groups of cysteine can trigger mitochondrial damage, oxidative stress, c-Jun N-terminal kinase (JNK) activation, nuclear DNA fragmentation and cell death [2-4]. In mammalian cells, phosphatidylcholine (PC) and phosphatidylethanolamine (PE) are the most and second most abundant phospholipids, respectively. In liver, PC is the principal component of cellular membranes, a precursor of signaling molecules, and a key element of lipoproteins and bile. In addition to its structural role in membranes and its function as a substrate for methylation to PC in the liver, PE is also a substrate for anandamide synthesis, regulates membrane fusion, and supplies ethanolamine for glycosylphosphatidylinositol anchors of cell-surface signaling proteins. Studies more than 50 years ago reported that the contents of PC and PE were decreased, but the PE:PC ratio was elevated, in the liver of rats challenged with CCl4 [5,6].
Lipidomics is a powerful technology defined as the complete quantitative and molecular determination of lipid molecules isolated from cells, tissue or biological fluids [7-9]. During APAP-induced liver injury, the lipidome is the overall result of cellular/subcellular dysfunction and alterations of large molecules such as proteins, enzymes and DNA, and is consequently more easily correlated with the phenotype. Analysis of the lipidome by lipidomics leads to a more detailed understanding of the biochemical changes during DILI. In recent years, emerging lipidomic techniques have been used to describe comprehensive, global PC/PE profiles in the plasma/liver of experimental animals with drug- or chemical-induced liver injury. Several studies reported that serum levels of certain PCs/PEs were significantly changed in acute rat liver injury induced by chemicals such as APAP, CCl4, galactosamine and ricinine [10-12]. Cheng J et al. analyzed the serum of APAP-treated mice by LC-MS-based metabolomics and found that C20:4-LPC gradually declined with rising liver toxicity [13]. Xie T et al. observed remodeling of PC/PE in the liver of rats with Tripterygium wilfordii-induced liver injury. The alterations comprised changed fatty acid composition of PC, increased lyso-PC and decreases in certain PEs, which might affect membrane fluidity, the inflammatory reaction, and mitochondrial function, respectively [14]. However, the alteration of individual PC/PE species in both plasma and liver of APAP-injured mice, and the correlations of PCs/PEs between plasma and liver, remain unknown. We previously developed an LC-MS-based lipidomic method for the simultaneous detection of diverse lipids [15].
In the present study, using this high-throughput method, we compared the relative concentrations of PC and PE in the plasma and liver of APAP-induced liver-injured mice and saline-treated control mice at a series of time points, and then measured the expression of genes involved in PC/PE metabolism in liver. The experimental work focuses on the search for possible mechanisms leading to hepatotoxicity rather than on biomarkers indicating APAP intoxication.
Biochemical assays and H&E staining
Plasma levels of ALT and AST were measured to evaluate APAP-induced acute liver injury. As shown in Fig. 1a and b, plasma ALT and AST levels increased in the 300 mg/kg APAP-induced acute liver injury mouse model at all five time points (1 h, 3 h, 6 h, 12 h and 24 h). The most significant increase of ALT/AST was at the 6 h time point. In the histological evaluation (Fig. 1c), APAP-treated mice displayed centrilobular hepatic necrosis, hyperemia of the hepatic sinus, inflammatory infiltrate, pyknotic nuclei, cytoplasmic vacuolization and loss of cell boundaries, whereas saline-treated mice showed normal liver histology at all time points. The prominent morphological damage started at 3 h, worsened with time, and then attenuated at the 24 h time point (Fig. 1d).
Phosphatidylcholine and phosphatidylethanolamine profiling in mouse plasma analyzed by LC-MS
The relative concentrations of 57 PC/LPC species and 18 PE/LPE species in plasma were simultaneously determined using the LC-MS method. Typical positive total ion chromatograms (TIC) of mouse liver are shown in Fig. 2. The original data on PC/PE concentrations in plasma are shown in Additional file 1: Table S1. The 75 phospholipids were loaded into a PCA model. As shown in Fig. 3a and c, the APAP-treated group could be clearly distinguished from the saline-treated group at the 3 h and 6 h time points according to the PCA score plots.
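The PCA score plots described above project each sample's 75-lipid profile onto a few principal components so the treatment groups can be compared visually. A minimal, self-contained sketch of that projection via SVD of a mean-centered matrix, with random numbers standing in for the lipid concentrations (assumes NumPy is available; not the study's actual software):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project samples (rows) onto the first principal components.
    X: samples x features matrix of lipid concentrations."""
    Xc = X - X.mean(axis=0)                  # mean-center each lipid species
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # PCA scores for the score plot

rng = np.random.default_rng(0)
# Stand-in data: 6 saline vs 6 APAP samples, 75 lipid species,
# with a crude group shift so the groups separate along PC1
saline = rng.normal(0.0, 1.0, size=(6, 75))
apap = rng.normal(1.5, 1.0, size=(6, 75))
scores = pca_scores(np.vstack([saline, apap]))
print(scores.shape)  # (12, 2): one (PC1, PC2) point per sample
```

Plotting the two columns of `scores`, colored by group, reproduces the kind of separation reported for the 3 h and 6 h time points.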
These variables were further loaded into a PLS-DA model. As shown in Fig. 3b and d, the differences between saline- and APAP-treated samples were also depicted by the PLS-DA score plots at the 3 h and 6 h time points. These score plots suggest that the metabolic pattern of PC/PE species in plasma was altered by APAP treatment at both 3 h and 6 h. The ratios of lipid concentrations in the APAP-treated group to those in the saline-treated group are shown in Additional file 1: Table S1. The fold-changes of PC/PE species in mouse plasma upon APAP treatment are also illustrated by heat maps in Additional file 2: Figure S1A (PC) and Figure S1B (PE). Figure 4a summarizes the numbers of decreased and increased PCs/LPCs/PEs/LPEs in plasma upon APAP treatment. We defined lipid species with a statistically significant increase or decrease at at least one time point among 1 h, 3 h and 6 h as increased or decreased lipid species, respectively. More phospholipid species increased than decreased in plasma upon APAP treatment. As shown in Fig. 4b and c, PC 33:1, PC 34:3, PE 34:2, PE 36:3, PE 38:4 and PE 38:6 were elevated significantly at both the 3 h and 6 h time points upon APAP treatment. Pearson's correlation was performed to analyze the correlation between these 6 phospholipid species and the liver enzymes (ALT/AST) commonly elevated in DILI. As shown in Fig. 4d, the increases of these 6 phospholipids were positively correlated with the changes in ALT/AST.
The profiles of PC and PE in mouse livers analyzed by LC-MS
To investigate the mechanism of the PC/PE changes in plasma, we examined the PC/PE profiles in the livers of APAP- and saline-treated mice by LC-MS. The levels of 63 PC/LPC species and 43 PE/LPE species in liver were simultaneously determined using the LC-MS method. The analytical data for the 106 PC/PE species were loaded into a PCA and a PLS-DA model. As shown in Fig. 5a/c (PCA model) and Fig.
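The Pearson's correlation step above pairs each animal's phospholipid level with its ALT or AST value. A stdlib-only sketch of the coefficient; the per-mouse values below are invented for illustration, not the study's measurements:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-mouse values: plasma PE 38:4 fold-change vs ALT (U/L)
pe_38_4 = [1.1, 1.8, 2.6, 3.0, 3.9, 4.5]
alt = [120, 450, 1500, 2100, 3800, 5200]
print(round(pearson_r(pe_38_4, alt), 2))  # strong positive correlation
```

An r near +1, as sketched here, corresponds to the positive lipid-enzyme association reported in Fig. 4d.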
5b/d (PLS-DA model), the differences between the control group and the APAP group were nearly identical in both data models. These score plots suggest that the metabolic pattern of PC and PE in livers was altered by APAP treatment at both 3 h and 6 h. The fold-changes of PC/PE species in mouse liver upon APAP treatment are also illustrated by heat maps in Additional file 3: Figure S2A (PC) and Figure S2B (PE). Figure 6a summarizes the numbers of decreased and increased PCs/LPCs/PEs/LPEs in liver upon APAP treatment. We defined lipid species with a statistically significant decrease or increase at at least one time point among 1 h, 3 h and 6 h as decreased or increased lipid species, respectively. More phospholipid species decreased than increased in liver upon APAP treatment, opposite to the profile in plasma. We picked out the decreased PCs/LPCs/PEs/LPEs in liver and generated scatter plots of these phospholipids in both liver and plasma. As shown in Fig. 6b, the decreases of most PCs and LPEs in liver might account for their increases in plasma. Because the significantly changed PEs in liver could not be detected in plasma, PEs are not shown in Fig. 6b. Furthermore, the relative concentrations in liver of the 6 phospholipids increased in plasma are plotted in Fig. 6c and d. PC 33:1 and PC 34:3 in liver decreased significantly. PE 34:2 and PE 36:3 in liver slightly decreased, while PE 38:4 and PE 38:6 in liver slightly increased.
The expression pattern of PC and PE metabolism related genes in APAP-injured mouse livers
To further investigate the mechanism of the PC/PE changes in liver, we measured the expression of genes involved in PC/PE metabolism in liver tissue. There are over 40 different phospholipases in liver.
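The "increased/decreased" tallies above rest on per-species fold-changes (mean in the APAP group over mean in the saline group) at each time point. A minimal sketch of that classification step; the cutoffs and peak-area values are illustrative only, and the study additionally required statistical significance before counting a species:

```python
def fold_change(apap_vals, saline_vals):
    """Mean APAP concentration divided by mean saline concentration."""
    return (sum(apap_vals) / len(apap_vals)) / (sum(saline_vals) / len(saline_vals))

def classify(fc, up=1.2, down=1 / 1.2):
    """Label a lipid species by its fold-change (hypothetical cutoffs)."""
    if fc >= up:
        return "increased"
    if fc <= down:
        return "decreased"
    return "unchanged"

# Illustrative liver PC species: relative peak areas for 4 mice per group
fc = fold_change([0.6, 0.7, 0.5, 0.6], [1.0, 1.1, 0.9, 1.0])
print(classify(fc))  # "decreased", the pattern seen for most liver PCs at 3-6 h
```

Applying this per species and per time point yields the count bars of Fig. 6a and, after a log2 transform, the color scale of the fold-change heat maps.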
Based on our previous RNA-seq data, 13 phospholipases, including 1 PLA1 (Pla1a), 8 PLA2 (Pla2g6, Pla2g7, Pla2g12a, Pla2g12b, Pla2g15, Pnpla2, Pnpla7, Pnpla8), 2 PLC (Plcg1, Plcxd2) and 2 PLD (Pld3, Pld4), are relatively abundant in mouse liver. The mRNA levels of these 13 phospholipases in the livers of saline- or APAP-treated mice were detected by qRT-PCR. As shown in Fig. 7a, the mRNA levels of most phospholipases in liver decreased upon APAP treatment for 3 h or 6 h. We also measured the mRNA levels of PC/PE synthesis-related genes. Among these genes, Pemt was decreased, while Chka was uniquely increased in APAP-injured livers (Fig. 7b). As shown in Fig. 7c, the induction of Chka was ~3.5-fold at 1 h, peaked at 6 h (~16-fold) and then returned to baseline at 24 h in APAP-injured livers. Western blot assays of the liver homogenates demonstrated that Chka protein in livers increased time-dependently upon APAP treatment, appearing greatest at the 12 h time point, which is delayed by several hours compared with the Chka mRNA induction (Fig. 7d).
Discussion
In the present study, we utilized the high-throughput LC-MS lipidomic method to acquire PC and PE profiles in both plasma and liver of APAP-induced liver-injured mice at different time points after dosing. The mouse model of APAP-induced liver injury closely resembles the human pathophysiology of both liver injury and recovery [16]. In our study, a sub-lethal dose of APAP (300 mg/kg) was used to induce liver injury in mice. In this model, the most significant increase of ALT/AST was at the 6 h time point, and at the 24 h time point liver function showed signs of recovery. As lipids serve many important functions, take part in several biochemical reactions and integrate diverse metabolic pathways, any alteration in lipids will reflect and affect cellular functions.
Based on our results, the content of many PC/PE species in plasma increased from the 1 h time point, peaked at 3 h or 6 h, and tended to return to baseline at the 24 h time point. PC 33:1, PC 34:3, PE 34:2, PE 36:3, PE 38:4 and PE 38:6 were elevated significantly at both the 3 h and 6 h time points upon APAP treatment, and the increases of these 6 phospholipids were positively correlated with the changes in ALT/AST. The alteration of the PC/PE lipidome in plasma might be caused by damage to hepatocytes and/or interference with lipid metabolism in the liver. Following intoxication with APAP, many PC/PE species, such as PC 33:1, PC 34:3, PE 34:2 and PE 36:3, decreased in liver tissue from 1 h, appeared lowest at 6 h, and tended to return to normal at the 24 h time point, which was opposite to their profile in plasma. Therefore, the increases of these phospholipids in plasma might reflect their release from damaged hepatocytes in APAP-injured livers. However, some PEs increased in APAP-injured livers, which might explain the induction of certain plasma PEs such as PE 38:4 and PE 38:6. The contents of PC/PE in liver depend on their degradation and synthesis. Degradation of PC/PE results from the action of phospholipases. Various phospholipases (PLA1, PLA2, PLC and PLD) exhibit substrate specificities for different positions in phospholipids. Phospholipase A2 (PLA2) hydrolyzes the sn-2 ester bond in PC/PE, forming arachidonic acid and lyso-PC/lyso-PE [17-19]. These bioactive lipid mediators play important roles in inflammation, phospholipid metabolism and signal transduction, which participate in the progression of DILI. The PLA2 superfamily includes over twenty groups comprising such main types as secreted sPLA2, cytosolic cPLA2 and calcium-independent iPLA2. PC and PE can also be degraded by PLA1, PLC and PLD [20]. The decrease of PC contents might be a consequence of increased phospholipase activities.
Previous studies demonstrated that APAP overdose may activate cPLA2 and sPLA2 [21-23], which are involved in APAP toxicity; the activated phospholipases degrade PC in livers. Our present study indicated that the mRNA levels of most phospholipases decreased in APAP-injured livers. Beyond degradation, the contents of phospholipids in liver are also affected by their synthesis. The major pathway of PC synthesis is the CDP-choline pathway, also referred to as the "Kennedy pathway", which requires choline and three enzymes: choline kinase (Chka and Chkb), CTP:phosphocholine cytidylyltransferase (Pcyt) and diacylglycerol cholinephosphotransferase (Chpt). In liver, PC can also be generated endogenously in a second pathway via PE methylation catalyzed by hepatic phosphatidylethanolamine N-methyltransferase (PEMT), which produces about one third of liver PC. PC can also be synthesized by reacylation of lyso-PC by lyso-PC acyltransferase (LPCAT) [24,25]. We observed that the mRNA and protein levels of Chka, the rate-limiting enzyme of de novo PC synthesis, were induced dramatically in APAP-injured mice. Chka plays a vital role in many biological signaling pathways, such as androgen receptor (AR) chaperoning [26], and in cell proliferation and carcinogenesis. Extensive studies of the structure and function of Chka show that the distal region of the Chka promoter is similar to the consensus activated protein-1 binding site [27]. In addition, APAP overdose causes nuclear accumulation of hypoxia-inducible factor-1 (HIF-1) in mouse livers as early as 1 h after treatment [28], and Chka expression has been identified as transcriptionally regulated by HIF-1 [29]. HIF-1-deficient mice were protected from APAP hepatotoxicity at 6 h, but severe liver injury was observed at 24 h, suggesting that HIF-1 is involved in the early stage of APAP toxicity [30].
Thus, the altered characteristics of Chka may be related to the alteration of associated transcription factors. The increased phospholipase activities, decreased phospholipase mRNA expression, and increased Chka mRNA/protein expression might partly explain why the contents of PC species in livers decreased after APAP dosing and tended to recover at the 24 h time point. PE is synthesized by four different pathways, the two quantitatively major ones being the CDP-ethanolamine pathway, initiated by ethanolamine kinase (ETNK1 and ETNK2) in the ER, and the PS decarboxylation pathway, catalyzed by phosphatidylserine decarboxylase (PISD) in mitochondria [31]. The increase of some PE species in APAP-injured livers might be related to the suppression of Pemt, which is responsible for one third of PC synthesis in liver; the dramatic decrease of Pemt may lead to the accumulation of PE in the APAP-injured liver. Our study for the first time determined the dynamic PC/PE changes in both plasma and liver during APAP-induced liver injury. A previous study examined 34 epileptic patients after valproate sodium treatment and found that certain LPCs, such as LPC 16:0, LPC 18:0, LPC 18:1 and LPC 18:2, decreased in valproate sodium-induced liver injury [32]. The phospholipid profile of serum from humans intoxicated with APAP might also be examined by lipidomic techniques.
Conclusions
These results suggest that a targeted lipidomic method based on metabolic profiling of phospholipids by LC-MS provides a better understanding of the role of lipid metabolism in the APAP-injured liver and serum, which might provide valuable information for judging the prognosis of DILI and identifying therapeutic targets.
Methods
Animals
C57BL/6 males aged 8-10 weeks and weighing 22-25 g at the time of the experimental procedures were purchased from the Shanghai Laboratory Animal Center, Chinese Academy of Sciences (Shanghai, China).
All mice were housed in microisolator cages under humidity- (50% ± 5%) and temperature- (24 ± 2 °C) controlled specific-pathogen-free conditions with a 12 h light/dark cycle. The mice were maintained with free access to water and standard irradiated sterile chow.

Treatment of mice
Fresh acetaminophen (Sigma Aldrich, USA) solution was prepared for each experiment by dissolving acetaminophen in saline warmed to 59 °C. Mice were fasted for 16 h, injected intraperitoneally (i.p.) with saline or acetaminophen at 300 mg/kg body weight, and then given renewed access to food. APAP-treated mice were sacrificed, and liver and plasma samples were collected, at the indicated time points (0 h, 1 h, 3 h, 6 h, 12 h, and 24 h; n = 6-8 per time point). Some mice were treated with saline as parallel controls at the same time points (1 h, 3 h, 6 h, 12 h, and 24 h; n = 6-8 per time point). An appropriate volume of PMSF stock solution in isopropanol (100 mM) was added to freshly collected mouse plasma (final concentration 5 mM in plasma) to stabilize the plasma lipidome. The livers were frozen at −80 °C for later lipid/total RNA/protein extraction or fixed in 4% paraformaldehyde for tissue sections.

Lipidomics analysis
The lipids of plasma (50 μL) and liver samples (200 mg) were extracted using the liquid-liquid methyl tert-butyl ether (MTBE) extraction protocol developed by our group [15]. Chromatography was performed on a charged surface hybrid (CSH™) C18 UPLC column (2.1 × 100 mm, 1.7 μm; Waters Corporation, USA) with gradient elution as described previously. The eluents were monitored in positive electrospray ionization (ESI) mode. Lipid profiling was performed using previously described methods [33]. Briefly, the extracted ion chromatogram (EIC) of each lipid species was created by applying the molecular weight (MW) lists (MW of quasi-molecular ions) generated from LipidView™ software (AB SCIEX, MA, USA).
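The peak-screening step of such a workflow, matching measured quasi-molecular ions against the generated MW lists within a mass-accuracy window, can be sketched as follows. All lipid names, masses, and peaks here are illustrative assumptions for demonstration, not data from this study.

```python
# Illustrative sketch of screening EIC peaks against a theoretical
# quasi-molecular ion mass list within a ppm tolerance. The lipid
# names, masses, and peaks below are invented for demonstration.

def ppm_error(measured_mz, theoretical_mz):
    """Mass accuracy in parts per million."""
    return abs(measured_mz - theoretical_mz) / theoretical_mz * 1e6

def match_peaks(measured_peaks, mass_list, tol_ppm=5.0):
    """Return (m/z, lipid) pairs whose mass error is below tol_ppm."""
    matches = []
    for mz in measured_peaks:
        for name, theo in mass_list.items():
            if ppm_error(mz, theo) < tol_ppm:
                matches.append((mz, name))
    return matches

# Hypothetical [M+H]+ masses and observed peaks
mass_list = {"PC 34:1": 760.5851, "LPC 16:0": 496.3398}
peaks = [760.5832, 496.3410, 523.1100]
print(match_peaks(peaks, mass_list))
```

In a real pipeline the same tolerance check would be combined with the fragment-ion, retention-time, and isotope-pattern rules described in the text.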
The peaks found in the EICs were screened and identified according to the following rules: exact mass accuracy (< 5 ppm), specific fragments (PC and LPC give the specific product ion at m/z 184.1; PE and LPE give the specific neutral loss of 141), retention time, and isotope distribution pattern (similarity to the theoretical pattern). After correction against the corresponding internal standard (IS), the areas of the identified peaks were used for quantitation.

Biochemical assays and histopathology
Alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were measured with an automatic biochemical analyzer (SIEMENS ADVIA 1800; SIEMENS Healthcare Diagnostics, USA). Liver sections from saline- or APAP-treated mice were fixed in 4% paraformaldehyde overnight, embedded in paraffin wax, sliced at 5 μm thickness, and stained with hematoxylin and eosin (H&E).

Real-time RT-PCR (qRT-PCR) analysis
Total RNA was prepared from mouse livers using TRIzol reagent (Life Technologies, Thermo Fisher Scientific). Reverse transcription (RT) was performed using the RevertAid™ First Strand cDNA Synthesis Kit (Fermentas) according to the manufacturer's instructions. Relative expression of the indicated genes was determined by SYBR Green-based real-time PCR using Actb as an internal standard. A relative standard curve was used to calculate expression levels. Primers used for gene expression studies are listed in Table 1 and referenced in PrimerBank [34].

Western blot analysis
Total protein was extracted from treated livers using radioimmunoprecipitation assay (RIPA) lysis buffer. The Chka protein level was detected by primary antibody
Design of a Genetically Stable High Fidelity Coxsackievirus B3 Polymerase That Attenuates Virus Growth in Vivo

Positive strand RNA viruses replicate via a virally encoded RNA-dependent RNA polymerase (RdRP) that uses a unique palm domain active site closure mechanism to establish the canonical two-metal geometry needed for catalysis. This mechanism allows these viruses to evolutionarily fine-tune their replication fidelity to create an appropriate distribution of genetic variants known as a quasispecies. Prior work has shown that mutations in conserved motif A drastically alter RdRP fidelity, which can be either increased or decreased depending on the viral polymerase background. In the work presented here, we extend these studies to motif D, a region that forms the outer edge of the NTP entry channel where it may act as a nucleotide sensor to trigger active site closure. Crystallography, stopped-flow kinetics, quench-flow reactions, and infectious virus studies were used to characterize 15 engineered mutations in coxsackievirus B3 polymerase. Mutations that interfere with the transport of the metal A Mg2+ ion into the active site had only minor effects on RdRP function, but the stacking interaction between Phe364 and Pro357, which is absolutely conserved in enteroviral polymerases, was found to be critical for processive elongation and virus growth. Mutating Phe364 to tryptophan resulted in a genetically stable high fidelity virus variant with significantly reduced pathogenesis in mice. The data further illustrate the importance of the palm domain movement for RdRP active site closure and demonstrate that protein engineering can be used to alter viral polymerase function and attenuate virus growth and pathogenesis.

The RNA-dependent RNA polymerases (RdRPs) from positive strand RNA viruses close their active sites for catalysis via a subtle NTP-induced conformational change within conserved motifs A and C (1).
This palm domain-based closure mechanism differs from what is observed in other classes of replicative polymerases, where the palm is fully structured prior to NTP binding and the nascent base pair is delivered into the active site for catalysis via molecular motions originating in the finger domain (2,3). Despite these different molecular motions, the structural end points are essentially equivalent, and the RdRPs retain the highly conserved polymerase active site geometry with aspartate residues and a magnesium-catalyzed two-metal reaction mechanism (4,5). The origin of the palm-based movement in the viral RdRPs is likely the conserved molecular contact between the finger and thumb domains that stabilizes the protein structure at the expense of reducing finger domain flexibility (6). One key characteristic of the viral RdRPs is their relatively low replication fidelity, with mutation frequencies of 10⁻⁴ to 10⁻⁵ that result in a heterogeneous virus population often referred to as a quasispecies (7,8). The population consensus sequence defines a particular virus and strain, but closer inspection of individual genomes reveals that they each contain a few random point mutations relative to the consensus. This pool of continually generated genetic diversity allows RNA viruses to rapidly adapt to different environments, enabling efficient replication in multiple cell types when infecting a host organism. The quasispecies population diversity is critically important for pathogenesis and virus growth, which can be attenuated in vivo either by decreasing diversity with high fidelity polymerases or by increasing diversity with low fidelity polymerases (9-12). The molecular interactions involved in the palm domain-based active site closure step led us to previously carry out mutagenesis studies showing that the fidelity of coxsackievirus B3 (CVB3) and poliovirus polymerases can be drastically altered by mutations within motif A (13).
These structurally homologous enzymes differ 3-4-fold in processive elongation rates and nucleotide selectivity. Interestingly, mutations at structurally identical positions tend to increase the fidelity of the lower fidelity poliovirus polymerase but decrease the fidelity of the inherently higher fidelity CVB3 polymerase. Those data demonstrated that the RdRP active site closure mechanism provides a platform for the evolutionary fine-tuning of virus replication fidelity and suggested that protein engineering approaches to alter fidelity may be an avenue for developing live-attenuated virus vaccines. Multiple lines of experimental data have also identified motif D as an important regulator of RdRP function and as a reporter for active site conformational states. Motif D forms the outer rim of the NTP entry channel and is located immediately exterior to motif A (Fig. 1). NMR dynamics measurements using sparse 13C labeling show significant changes in local motion within motif D upon nucleotide binding (14), molecular dynamics trajectories suggest a role in NTP transport into the active site (15,16), and kinetic isotope effects indicate that a conserved lysine in motif D is a proton donor during catalysis (17). Comparisons of open and closed active site conformations in RdRP crystal structures suggest an essentially rigid body movement of motif D that is tightly coupled to the movement of the adjacent motif A; there are only minor internal differences in how the motif is packed against the rest of the polymerase structure in the two states (Fig. 1B). However, Phe363 of poliovirus 3Dpol appears to undergo a sliding motion atop the motif D α-helix when comparing the open versus closed active site conformations (Fig. 1B).
Structures of CVB3 polymerase have shown that certain motif A mutations that affect active site closure result in large conformational changes within motif D, including the displacement of Phe364, the structural equivalent of poliovirus 3Dpol Phe363, from its binding pocket (13). CVB3 elongation complex structures also show density for a polymerase-bound "metal A" Mg2+ ion that must be transported ≈5 Å into the catalytic center during active site closure, where it would join the metal B ion that is delivered as part of the NTP-Mg2+ complex (1,18). The extended hydration shell of the metal A ion and its likely path into the active site involve motif D, suggesting that the motif may play a role in controlling the dynamics of delivering Mg2+ for catalysis. To further assess the role of motif D in controlling CVB3 polymerase rate and replication fidelity, we have carried out a biochemistry and virology study of CVB3 polymerase mutations that target two groups of structural interactions. First, mutations of the highly conserved Phe364 (Fig. 1C) and its binding pocket slow processive elongation up to 7-fold in vitro and increase virus replication fidelity 2-fold in vivo, giving rise to the first genetically stable high fidelity CVB3 variant viruses. Second, mutations that disrupt the hydration network surrounding the bound metal A Mg2+ ion (Fig. 1D) also slow the polymerase and give rise to stable progeny viruses, but their effects on both rate and fidelity are fairly minor. These data provide further insights into the molecular mechanisms underlying viral RdRP active site closure and provide additional control points for engineering viral polymerase fidelity.

Experimental Procedures
Protein Expression and Purification-CVB3 polymerases were expressed in Escherichia coli from ubiquitin fusion constructs to generate the native N terminus required for activity (19) and purified as described previously (13).
CVB3 mutant polymerases were generated using the QuikChange site-directed mutagenesis protocol and verified by DNA sequencing.

Crystallization and Structure Determination-CVB3 3Dpol crystals were grown at 16 °C by hanging drop vapor diffusion using NaCl and ammonium sulfate conditions (13). Crystals were harvested and frozen in liquid nitrogen, and diffraction data were collected at the Molecular Biology Consortium beamline 4.2.2 (Advanced Light Source, Berkeley, CA). Data were processed with XDS (20), and the structures were solved by molecular replacement using the wild type CVB3 3Dpol (Protein Data Bank code 3DDK) as the search model. Model building, refinement, and validation were performed with Coot (21), Phenix (22), and MolProbity (23), respectively, as distributed in the SBGrid suite (24).

Processive Elongation Kinetics-A synthetic hairpin primer-template RNA with an internal 2-aminopurine (2AP) in the template segment was used to measure processive elongation rates via a lag phase that represents elongation through 14 nucleotides (Fig. 3A), after which the 2AP is in the +2 position where it is fully unstacked from neighboring bases and exhibits maximal fluorescence (Fig. 3B). Kinetics experiments were performed with a Bio-Logic SFM-4000 titrating stopped-flow instrument equipped with a MOS-500 spectrometer. Fluorescence excitation was at 313 nm with a 10-nm bandwidth, and emission was detected using a Brightline 370/36 nm band pass filter (Semrock Inc., Rochester, NY). Preinitiated elongation complexes were generated with an excess of enzyme to drive RNA binding (7.5 μM 3Dpol, 5 μM snap-cooled RNA, 60 μM each ATP and GTP), and after 10 min the samples were diluted to a final RNA concentration of 50 nM with SF buffer (25 mM HEPES, pH 7.0, 112.5 mM NaCl, 8 mM MgCl2, and 2 mM tris(2-carboxyethyl)phosphine). This was loaded into the stopped-flow instrument, where the RNA concentration was further reduced to 25 nM during the stopped-flow experiments done at 30 °C.
Single Nucleotide Incorporation Assays-Stopped-flow experiments using a shorter hairpin primer-template RNA were used to determine the incorporation rates of a single cytosine opposite a templating guanosine positioned immediately downstream from the 2AP (Fig. 3D). Preinitiated elongation complexes were generated as in the processive elongation experiments, but with the 2AP now prepositioned in the +2 site such that CTP incorporation and subsequent translocation will move the 2AP into the templating +1 site, where the fluorescence is quenched due to stacking on the nascent duplex. These reactions were diluted to a final RNA concentration of 10 nM in the reaction cell, and maximal single nucleotide incorporation rates (kpol) and apparent Km values were determined by titrating CTP and analyzing the observed 2-aminopurine translocation rates (Fig. 3E). Nucleotide discrimination by the different mutant polymerases was assessed by comparing the incorporation of CTP with that of 2′-deoxy-CTP using the same assay and calculating the ratio of the CTP and dCTP catalytic efficiencies using the following relationship: discrimination factor = (kpol/Km)CTP ÷ (kpol/Km)dCTP.

Quench-flow Kinetics-Rates of elongation complex formation were determined using a LI-COR Biosciences IRDye800-labeled hairpin primer-template RNA that allows formation of a +1 product in the presence of GTP or a +4 product in the presence of GTP and ATP (Fig. 6A). Final concentrations were 5 μM 3Dpol, 0.5 μM RNA, and 40 μM each NTP in SF buffer at room temperature. Time point samples were quenched with SF buffer containing NaCl and EDTA at final post-quench concentrations of 300 and 25 mM, respectively. For the +1 product formation reactions, the RNA and GTP were mixed simultaneously with 3Dpol in initiation reactions that are largely limited by the RNA binding kinetics.
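The discrimination factor defined in the single nucleotide assays above is a simple ratio of catalytic efficiencies. A minimal sketch, with invented rate constants rather than measurements from this work:

```python
# Illustrative sketch of the discrimination factor defined in the text:
# (k_pol / K_m)_CTP divided by (k_pol / K_m)_dCTP. The rate constants
# below are invented for demonstration and are not measurements.

def catalytic_efficiency(k_pol, k_m):
    """Catalytic efficiency k_pol / K_m for one nucleotide."""
    return k_pol / k_m

def discrimination_factor(kpol_ctp, km_ctp, kpol_dctp, km_dctp):
    """Ratio of CTP to 2'-dCTP catalytic efficiencies."""
    return (catalytic_efficiency(kpol_ctp, km_ctp)
            / catalytic_efficiency(kpol_dctp, km_dctp))

# A wild-type-like case where dCTP is incorporated 14-fold slower and
# bound 17-fold more weakly than CTP gives a factor of 14 * 17 = 238.
print(discrimination_factor(100.0, 20.0, 100.0 / 14, 20.0 * 17))
```

Because the factor is a ratio of ratios, fold-changes in rate and binding multiply, which is why modest kinetic differences can produce large discrimination values.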
To eliminate RNA binding as a rate-limiting step, a second set of reactions was done wherein the RNA and 3Dpol were preincubated for 20 min prior to mixing with GTP and ATP to trigger elongation to a defined +4 product. These reactions were set up in 40 μl volumes, from which 2 μl samples were removed and quenched for every time point. All quench reaction products were analyzed by denaturing PAGE (15% acrylamide and 7 M urea), imaged on a LI-COR Biosciences Odyssey imager, and quantified with ImageStudio (LI-COR Biosciences).

Generation of Virus Stocks and Infections-All variants were constructed using the QuikChange XL site-directed mutagenesis kit (Stratagene) and the CVB3-Nancy infectious cDNA. 4 μg of in vitro transcribed infectious RNA were electroporated into 6 × 10⁶ HeLa cells. To determine the cytopathic effect, 500 μl of these virus stocks were used to infect fresh HeLa cell monolayers for three more passages. For each passage, virus was harvested by two freeze-thaw cycles. Two independent stocks were generated for each virus. For mutagen assays, HeLa cells were pretreated for 1 h with different concentrations of ribavirin or 5-fluorouracil and infected at a multiplicity of infection of 0.01 with passage 3 virus. 48 h postinfection, virus titers were determined by TCID50. 10-fold serial dilutions of virus were prepared in 96-well round bottom plates in DMEM. Dilutions were performed in octuplicate, and 100 μl of each dilution were transferred to 10⁴ cells plated in 100 μl of DMEM and 10% newborn calf serum. After 5 days, living cell monolayers were stained with crystal violet. TCID50 values were determined by the method of Reed and Muench (36).

Virus Replication Kinetics-For one-step growth kinetics, HeLa cells were infected at a multiplicity of infection of 10 and frozen at different time points after infection. For quantitative RT-PCR analysis, total RNA from infected cells was extracted by TRIzol reagent (Invitrogen) and purified.
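The Reed and Muench endpoint method cited above interpolates the dilution at which 50% of wells are infected from cumulative infected and uninfected counts. A hedged sketch, with invented well counts (the actual assays used eight wells per dilution):

```python
# Hedged sketch of the Reed and Muench 50% endpoint calculation used
# for TCID50 titers. The dilution series and well counts are invented
# for illustration.

def reed_muench_log10_tcid50(log10_dilutions, infected, total):
    """Return the log10 dilution at which 50% of wells are infected.

    log10_dilutions: e.g. [-3, -4, -5, -6], least dilute first.
    infected/total: wells with cytopathic effect vs. wells inoculated.
    """
    uninfected = [t - i for i, t in zip(infected, total)]
    n = len(infected)
    # Cumulative infected summed toward higher dilutions; cumulative
    # uninfected summed toward lower dilutions (the Reed-Muench tables).
    cum_inf = [sum(infected[i:]) for i in range(n)]
    cum_uninf = [sum(uninfected[:i + 1]) for i in range(n)]
    pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for i in range(n - 1):
        if pct[i] >= 50.0 > pct[i + 1]:
            # Proportionate distance between the bracketing dilutions
            pd = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
            step = log10_dilutions[i + 1] - log10_dilutions[i]
            return log10_dilutions[i] + pd * step
    raise ValueError("50% endpoint not bracketed by the dilution series")

# e.g. 8 wells per dilution with 8, 6, 2, 0 infected -> endpoint at 10^-4.5
print(reed_muench_log10_tcid50([-3, -4, -5, -6], [8, 6, 2, 0], [8, 8, 8, 8]))
```

The titer is then reported as the reciprocal of the endpoint dilution per inoculum volume.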
The TaqMan RNA-to-Ct one-step RT-PCR kit (Applied Biosystems) was used to quantify viral RNA. Each 25-μl reaction contained 5 μl of RNA, 100 μM each primer (forward, 5′-GATCGCATATGGTGATGATGTGA-3′; reverse, 5′-AGCTTCAGCGAGTAAAGATGCA-3′), and 25 pmol of probe 5′-[6-FAM]CGCATCGTACCCATGG-3′ in an ABI 7000 machine. Reverse transcription was performed at 50 °C for 30 min and 95 °C for 10 min, followed by 40 cycles of 95 °C for 15 s and 60 °C for 1 min. A standard curve was generated using in vitro transcribed genomic RNA.

Sequencing-Viral RNA from passage 3 virus stocks was extracted and RT-PCR-amplified using the primer sets 878Forward and 2141Rev covering part of the structural protein coding region. The resulting PCR products were TOPO TA cloned (Invitrogen) and sequenced, and an 1168-nucleotide sequence was analyzed per clone using Lasergene software (DNAStar Inc.). The mutation frequency was calculated as the total mutations identified per population over the total number of nucleotides sequenced for that population, multiplied by 10⁴. For each population, the numbers of clones presenting no mutations, one, two, three, etc. were quantified and used for statistical testing by the Mann-Whitney U test. The numbers of clones analyzed per population were as follows: A358V, 95 clones; A358T, 96 clones; V367I, 89 clones; V367L, 82 clones; A231T, 89 clones; A231V, 95 clones; F364W, 81 clones; F364Y, 85 clones; and wild type, 84 clones.

Protection Studies-Mice were housed in the Institut Pasteur animal facilities in BSL-2 conditions with water and food supplied ad libitum and handled in accordance with institutional guidelines for animal welfare. All studies were carried out in 5-week-old BALB/c male mice obtained from Charles River Breeding Laboratories. Mice were infected intraperitoneally with 10⁵ TCID50 in 0.20 ml of PBS.
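The mutation-frequency calculation described above reduces to a simple normalization. A sketch with invented clone counts (only the 1168-nt read length is taken from the text):

```python
# Sketch of the mutation-frequency calculation described in the text:
# total mutations across all clones in a population, divided by the
# total nucleotides sequenced, scaled to mutations per 10^4 nt. The
# clone counts below are invented for illustration.

def mutation_frequency(mutations_per_clone, nt_per_clone=1168):
    """Mutations per 10^4 nucleotides for one virus population."""
    total_mutations = sum(mutations_per_clone)
    total_nt = len(mutations_per_clone) * nt_per_clone
    return total_mutations / total_nt * 1e4

# e.g. 84 clones: 50 with no mutations, 30 with one, 4 with two
clones = [0] * 50 + [1] * 30 + [2] * 4
print(round(mutation_frequency(clones), 2))
```

The per-clone mutation counts, rather than the pooled frequency, are what feed the Mann-Whitney U comparison between populations.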
Tissue-specific viral titers were determined by TCID50 assay after harvesting whole organs that were homogenized in PBS using a Precellys 24 tissue homogenizer (Bertin Technologies).

Results
Structural Characterization of Motif D Mutants-Crystal structures at 1.8-2.6 Å resolution were determined for eight motif D mutants (Table 1) using previously established ammonium sulfate and sodium chloride crystallization conditions (13). Superpositioning of the mutant and wild type 3Dpol structures clearly delineates them into two groups, one with the native conformations for the motif D loop and active site (Fig. 2A) and the other with various motif D loop distortions that are accompanied by a movement of Pro357 and closure of the active site (Fig. 2B). These structures reveal that removal of Phe364 from its binding pocket can drive structural changes in motif A and lead to a closed 3Dpol RdRP active site in the absence of RNA and a nucleotide triphosphate. Structures that retained the wild type conformation of the motif D loop and an open active site include F364Y, F364W, A341G, and A345V, all of which feature conservative amino acid substitutions (Fig. 2C). Conversely, the more radical F364A, F364V, and F364L mutations resulted in structures with closed active sites (Fig. 2D) as well as motif D loop distortions and heterogeneity such that the loop itself could not be well modeled into the electron density maps. The F364I structure has a partially closed active site (Fig. 2E) that may be due to the NaCl crystallization condition; this crystal form has a Na+ ion in the metal A binding site that interacts with Asp233 via a water molecule (13), and this interaction may hamper the movement of Asp233 and motif A that is necessary for active site closure. These structures show that the Phe364 binding pocket is able to accommodate every planar aromatic amino acid with minimal structural changes in motif D.
The tyrosine and tryptophan mutations may in fact strengthen this conformation, as the F364Y side chain hydroxyl forms a new hydrogen bond with the backbone carbonyl of Met355, and the F364W mutant provides a larger aromatic surface area for the interaction (Fig. 2A). In contrast, small hydrophobic amino acids in place of Phe364 do not insert into the pocket, and as a result Pro357 drops down, collapsing the pocket and shifting motif A toward motif C in a way that effectively closes the active site in the absence of a bound NTP. Additionally, several structures showed weak density for residues 19-21 in the index finger that comprise the +2 nucleotide binding pocket in the 3Dpol-RNA complex (18), indicating that this region of CVB3 polymerase is flexible in the absence of bound RNA.

Elongation Rates-Processive elongation rates were determined by stopped-flow fluorescence using a template strand 2AP in a synthetic hairpin primer-template RNA (Fig. 3, A-C). Stalled elongation complexes were first assembled at high concentration to drive polymerase-RNA binding and then diluted to 25 nM final concentration as they were mixed with varying concentrations of NTPs in the stopped-flow instrument. NTP solutions contained ATP:UTP:GTP at a 6:1:1 mole ratio that represents their approximate in vivo ratios (25,26), and the reported Km values reflect the ATP concentration in that mixture. Elongation proceeds for precisely 14 nucleotides, after which there is a clear increase in 2AP fluorescence as the base analog enters the +2 nucleotide pocket and loses the stacking interactions with neighboring nucleotides that previously quenched its fluorescence. The result is a data trace with a lag phase that reflects the time needed to incorporate 14 nucleotides on the heteropolymeric template (Fig. 3B) and from which we can elucidate maximal elongation rates (kpol) and NTP Km values (Fig. 3C) using methods previously developed for 5′-fluorescein-labeled RNA templates (27).
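The relationship between the lag phase and the fitted kinetic constants can be illustrated with a minimal sketch, assuming (as described above) that fluorescence rises only after 14 nucleotides are incorporated and that the per-nucleotide rate follows a simple Michaelis-Menten dependence on NTP concentration. All rate constants are illustrative, not fitted values from this work.

```python
# Minimal sketch relating a stopped-flow lag phase to elongation
# kinetics. Assumes the signal appears only after 14 nucleotides are
# incorporated; the constants below are illustrative.

def michaelis_menten_rate(k_pol, k_m, ntp):
    """Per-nucleotide addition rate (nt/s) at a given NTP concentration."""
    return k_pol * ntp / (k_m + ntp)

def observed_rate_from_lag(lag_seconds, n_nt=14):
    """Average nucleotide-addition rate implied by the lag time."""
    return n_nt / lag_seconds

# e.g. k_pol = 20 nt/s and K_m = 30 uM: at 30 uM NTP the rate is
# half-maximal (10 nt/s), so the 14-nt lag should be 1.4 s.
lag = 14 / michaelis_menten_rate(20.0, 30.0, 30.0)
print(lag, observed_rate_from_lag(lag))
```

Fitting observed rates across an NTP titration to the hyperbolic form is what yields the reported kpol and Km values.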
The Phe364 mutations have relatively modest effects on the NTP Km values but slow elongation rates up to almost 7-fold, with effects that are largely in proportion to the size of the replacement residue (Fig. 4A). This suggests that the mutants are primarily deficient in the active site conformational changes required for catalysis. Based on these data and the crystal structures discussed above, we hypothesize that the various mutations slow elongation rates because they alter the interactions with the Phe364 binding pocket to favor either the open or the closed conformation, either of which would result in a higher energy barrier to the repeated cycling of the active site during processive elongation. The tyrosine and tryptophan mutations at Phe364 have stronger interactions with the binding pocket that favor the open conformation, whereas the alanine, valine, leucine, and isoleucine mutations all compromise the interaction with Pro357 and favor the closed state. In contrast to the Phe364 mutations, the A341G and A345V mutations, located in the helix upon which Phe364 packs, have essentially no effect on rate but did lower NTP Km values 2-fold. We suspect these two mutations allow for greater flexibility of the motif D loop while neither compromising nor reinforcing the Pro357-Phe364 stacking interaction, and this results in slightly more efficient initial NTP binding during elongation. The second set of mutations was designed to influence the movement of the metal A Mg2+ ion into the polymerase active site for catalysis. These also slowed processive elongation, but the effects were quite small. A231T and V367I have kpol and Km values and catalytic efficiencies (kpol/Km) nearly identical to those of the wild type enzyme, whereas the A358T, A358V, and V367L mutants have similar rates but lower NTP Km values.
The exception was a strong effect from the A231V mutation, which points to the importance of a polar environment in this region of the polymerase. A231V displays a nearly 4-fold reduction in rate and a nearly 3-fold increase in Km, making it the least efficient polymerase examined. In contrast, A231T, which effectively replaces one of the A231V methyl groups with a hydroxyl, restored essentially wild type behavior. Our interpretation of these data is that adding the valine with two methyl groups disrupts the transport of the hydrated metal ion either into the prebound metal A site or between the prebound site and the catalytic center.

Single Nucleotide Incorporation and Discrimination-Single nucleotide turnover rates for cytosine incorporation followed by translocation were measured by the quenching of 2-aminopurine fluorescence as the base analog moves from the unstacked +2 position into the +1 position, where it is stacked on the priming duplex (Fig. 3, D-F). Rapid mixing stopped-flow data show single exponential rates of fluorescence loss, and analysis of these rates yields a single nucleotide turnover rate constant (kpol) and affinity (Km) for CTP. The general trends observed in the processive elongation assays hold true in the single CMP incorporation experiments, with the set of Phe364 variants being 1.3-10-fold slower than the wild type enzyme but retaining similar NTP Km values (Fig. 4B). The A341G and A345V mutations have slightly higher catalytic efficiencies due to lower Km values despite modestly slower CTP incorporation rates. Among the mutations targeting divalent ion binding, A231T and V367I have rates and Km values nearly identical to wild type 3Dpol, whereas the A358T, A358V, and V367L mutants display modestly slower rates and lower Km values. Again, the A231V mutant has the greatest effect, with an ≈2-fold decrease in rate and ≈5-fold increase in Km that together reduced catalytic efficiency almost 10-fold.
In 2′-dCTP utilization experiments, the wild type rate and Km decrease 14- and 17-fold, respectively, as compared with the CTP values. Among the mutant polymerases, the effects on rate are generally much stronger than the effects on NTP binding (Fig. 4C). One interesting observation is that the mutations at the base of the Phe364 binding pocket, A341G and A345V, increase dCTP incorporation rates while decreasing the CTP rates, suggesting that these mutations may yield low fidelity polymerases that are more prone to nucleotide misincorporation. As an indirect measure of polymerase fidelity, we can calculate nucleotide discrimination factors as the ratio of the catalytic efficiencies for CTP and 2′-dCTP incorporation, i.e. (kpol/Km)CTP ÷ (kpol/Km)dCTP. This is shown in Fig. 5A and additionally illustrated by plotting the discrimination factors versus maximal processive elongation rates in Fig. 5B. We have previously demonstrated a correlation between elongation rate and the nucleotide discrimination factor for mutations in motif A, where high discrimination factors are predictive of viruses with higher replication fidelity (13). In the data presented here, all the Phe364 variants have higher discrimination factors than wild type, whereas the A341G and A345V mutations in the Phe364 binding pocket show decreased discrimination. The mutations targeting the metal A site show modest effects that increase and decrease nucleotide discrimination, with the strongest effects being from the A358T and V367L mutations, whose location in the structure (Fig. 1D) suggests that they may affect the entry of a metal ion into the prebound metal A site.

Mechanism of Phe364 Mutant Effects on Active Site Motions-To further investigate the mechanism behind the slowed elongation rates of the Phe364 mutants, we examined the behavior of F364Y and F364A via rapid quench reactions.
These allow us to directly visualize the RNA products formed, as opposed to indirectly assessing polymerase turnover via RNA translocation state-dependent 2-aminopurine fluorescence. F364A was chosen because it has a preclosed active site, which may increase the incorporation rate of the first nucleotide. F364Y was chosen because it forms a new side chain hydrogen bond that may stabilize the open conformation and thus slow the active site closure step. First, a short primer-template hairpin RNA (Fig. 6A) was used to measure RNA binding followed by one guanosine incorporation step in a single reaction where 3Dpol is combined with a premixed RNA + GTP solution and the kinetics are rate-limited by the RNA binding step. Both mutants readily initiate, with rates and yields that are comparable with the wild type enzyme (Fig. 6, B and D). Second, we examined processive elongation by 4 nucleotides in a reaction where 3Dpol was first prebound to the same RNA, and this 3Dpol-RNA complex was then mixed with both GTP and ATP (40 μM final concentration for each) in a quench-flow instrument. All three polymerases showed a rapid loss of starting material within 6 s, but the two mutants clearly have slower processive elongation kinetics than wild type based on the slowed accumulation of +1 and +3 intermediate species and the time needed to produce the final +4 product (Fig. 6, C and E). Notably, and despite the structural observation that it has a preclosed active site, F364A is slower than F364Y both at incorporating the first nucleotide and at elongating over multiple catalytic cycles. The overall effects on processive elongation were quantitated by analyzing the buildup of the final +4 species over time, yielding single exponential rates of 10 ± 0.5, 4 ± 0.2, and 1.2 ± 0.1 per minute for the wild type, F364Y, and F364A polymerases, respectively.
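Extracting a single-exponential rate from product buildup can be sketched as follows, assuming the +4 fraction follows f(t) = 1 − exp(−k·t). Averaging the per-point estimates is a crude stand-in for a proper nonlinear fit, and the data below are synthetic, not from the paper.

```python
# Sketch of recovering a single-exponential rate constant from
# fraction-complete product data, assuming f(t) = 1 - exp(-k * t), so
# each time point gives k = -ln(1 - f) / t. Synthetic data only.
import math

def single_exponential_rate(times, fractions):
    """Average per-point rate constant from fraction-complete data."""
    ks = [-math.log(1.0 - f) / t for t, f in zip(times, fractions)]
    return sum(ks) / len(ks)

# Synthetic trace generated with k = 10 per minute (the wild-type +4 rate)
k_true = 10.0
times = [0.05, 0.1, 0.2, 0.4]  # minutes
fracs = [1.0 - math.exp(-k_true * t) for t in times]
print(single_exponential_rate(times, fracs))
```

With real gel-quantified band intensities, a least-squares fit of the exponential (and an amplitude term) would replace the point-wise average used here.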
Note that in this experiment we do not see significant buildup of the +2 species after ATP addition because GTP has an ≈35-fold lower Km than ATP (28); as a result, the +2 species quickly incorporates guanosine to generate the +3 species, which then more slowly progresses through an adenosine addition to yield the final +4 species.

Infectious Virus Studies-Several of the mutant polymerases were incorporated into virus genomic RNA and transfected into HeLa cells to see whether they could support virus replication and yield enough progeny virus for deep sequencing to obtain mutation frequencies. Among the Phe364 mutations, only the tyrosine and tryptophan variants were recovered. The lack of progeny virus from the isoleucine, valine, and alanine mutations points to a strong requirement for a planar amino acid at residue 364. Mutation frequencies calculated from molecular clones (29) showed that the F364Y mutant retained wild type fidelity despite having a higher 2′-hydroxyl discrimination factor in the biochemical assays. In contrast, the F364W mutation had a dramatic effect on mutation frequency, which went down more than 2-fold, from 4.2 to 1.8 mutations per 10 kb synthesized (Fig. 5C). This is the first genetically stable high fidelity CVB3 polymerase variant we have isolated. The mutants targeting the metal A site all supported virus growth with final titers within 1 log of the wild type virus, but their one-step growth curves showed growth delays of 4-6 h (data not shown). These mutations resulted in small, non-significant changes in fidelity. The strong high fidelity effects of the F364W mutation led us to further assess its effects on nucleotide analog drug resistance in tissue culture and on tissue tropism and pathogenesis in mice. First, both wild type and F364W virus presented the same RNA synthesis profiles in one-step growth curves (Fig. 7A). The retention of the mutation at the last time point was confirmed by sequencing.
The mutant virus exhibited resistance to both ribavirin and 5-fluorouracil (Fig. 7B), with about 10-fold higher titers than wild type on ribavirin and ≈3-fold higher titers on 5-fluorouracil. The mutant virus also showed significantly reduced virulence in vivo, with 12 of 15 mice surviving after 9 days versus 3 of 15 for the wild type (Fig. 7C). Consistent with this, tissue samples showed that late stage growth of the F364W virus is attenuated by about 1 log in spleen, pancreas, and heart (Fig. 7D). Thus, the F364W mutation significantly attenuated coxsackievirus B3 growth in mice.

[Displaced figure legend: B, correlation plot of maximal elongation rates based on processive elongation (Fig. 4A) versus the nucleotide discrimination factors; Phe 364 mutations are shown as closed symbols, and wild type is marked with a large gray ring. C, mutation frequencies of the variant polymerases based on molecular clone sequencing of progeny virus genomes following infection in tissue culture; between 81 and 96 clones (94,000–112,000 nucleotides) were sequenced per population. **, p < 0.01 by two-tailed Mann-Whitney U test, n = 165; all other values not significant, p > 0.05. NV, non-viable viruses; ND, not determined; nt, nucleotides.]

Discussion

Building upon our previous work identifying structure-function relationships in engineered fidelity variant picornaviral polymerases (9, 13), here we focused our attention on the conformational flexibility of motif D and its impact on CVB3 3D pol kinetics, fidelity, and structure. The initial structures of poliovirus 3D pol elongation complexes showed that RdRP active site closure involves a concerted rigid body movement of motif A and the loop portion of motif D that establishes antiparallel β-sheet backbone hydrogen bonding between motifs A and C. This process fully structures the canonical polymerase palm domain active site by repositioning the essential Asp 233 in motif A to coordinate both divalent metals needed for catalysis (1).

[FIGURE 6 legend: Perturbing the Pro 357-Phe 364 interaction affects both pre- and postcatalysis active site motions. A, sequence of the hairpin primer-template RNA used in elongation reactions, where incubation with only GTP results in formation of a +1 product, whereas incubation with GTP and ATP yields a +4 product; * denotes the location of the LI-COR Biosciences IRDye800 attachment. B, +1 product formation by wild type and mutant polymerases observed by denaturing PAGE followed by infrared imaging of the IRDye label; S marks the starting RNA position. C, +4 product formation in a reaction where 3D pol and RNA were preincubated to minimize effects from the slow RNA binding step prior to initiation; the plots of band intensities underscore the processive elongation defects of the F364A mutant, where there is significant buildup of the intermediate +1 product as compared with wild type. D and E, single exponential rates for the +1 and +4 product formation reactions shown in B and C. AU, arbitrary units.]

Point mutations within motif A can have profound effects on polymerase elongation rate and replication fidelity, which can be either increased or decreased depending on the specific viral polymerase being studied (13). An alternate conformation of the loop portion of motif D was then captured in a crystal structure of the low fidelity CVB3 3D pol F232L variant (13). This alternate motif D loop conformation involved Phe 364 moving from its binding pocket under Pro 357 to instead bind near the base of the thumb domain, and it showed that the motif D loop could be restructured independently of motif A. This is consistent with the conformational heterogeneity within motif D that has been observed by NMR (14) and in molecular dynamics simulations (15), where data indicate that motions in this region of the polymerase are sensitive to NTP binding.
Sequence and structure alignments of picornaviral polymerases further show that the original conformation is highly conserved and seemingly stabilized by a stacking interaction between Phe 364 and Pro 357 (Fig. 1C), suggesting that the interactions of these two residues may be important for RdRP function by controlling the motif D loop conformation. In this study, we examined whether perturbations of this interaction could impact loop flexibility and reveal aspects of the functional role motif D plays during active site closure. We targeted residue Phe 364 with mutagenesis to small and β-branched hydrophobic residues (Ala, Val, Leu, and Ile) and with more conservative changes to tyrosine and tryptophan, the other two planar aromatic amino acids. We also mutated Ala 341 and Ala 345, two residues that form a relatively flat floor at the bottom of the hydrophobic pocket into which Phe 364 is inserted. In a second set of mutations, we targeted the active site metal A ion by changes to residues Ala 231, Ala 358, and Val 367 that may impact the transport of magnesium into the active site for catalysis by virtue of being indirectly involved in the extended ion hydration shell (18). Crystallography, stopped-flow kinetics, rapid quench reactions, and infectious virus studies were used to characterize the effects of the mutants on 3D pol structure and biochemistry and on coxsackievirus B3 growth and pathogenesis. The data point to Phe 364 being a key control point for both processive replication rate and fidelity. Mutations of this residue slow the polymerase elongation rate (kpol) with a continuum of effects ranging from 1.2- to 7-fold, whereas the Km for NTP remains largely unchanged, suggesting that residue 364 plays a role in the dynamics of nucleotide reorientation and active site closure that occurs after the initial NTP binding event.
The crystal structures show that a planar amino acid […] 2.3-fold, resulting in a very strong high fidelity variant making only 1.8 mutations per 10 kb synthesized. This is the first isolation of a genetically stable high fidelity variant of CVB3, an enzyme that already has fairly high fidelity in comparison with the poliovirus RdRP. Combined with our previous studies of motif A mutations that result in low fidelity variants (9, 13), we have now recovered viable CVB3 isolates with 3D pol variants whose mutation frequencies vary over a 6-fold range, from 1.8 to 11.2 mutations per 10 kb of RNA synthesized. The 3D pol catalytic cycle has been studied in great detail, revealing five major steps: NTP binding, a precatalysis conformational change that reorients the NTP into the active site, catalysis, a postcatalysis conformational change, and finally translocation of the RNA to reset the active site (30). The precatalysis reorientation step is rate-limiting and a major fidelity checkpoint, resulting in a strong correlation between elongation rate and replication fidelity. The structures of picornaviral elongation complexes show that the molecular rearrangement taking place during the active site closure step is the inward movement of motif A to fully fold the core palm domain β-sheet and create a canonical replicative polymerase active site (Fig. 1B) (1, 18). Our prior studies of 3D pol motif A mutations showed a strong correlation between replication fidelity obtained from deep sequencing and the presence of an NTP 2′-hydroxyl group based on the CTP-versus-dCTP discrimination assay (13). Those findings suggested that proper hydroxyl positioning and the resulting hydrogen bonding network emanating from the 2′-OH play a major role in driving active site closure for catalysis. However, such a 2′-OH correlation does not appear to hold for the set of motif D mutants described in this work.
For example, the F364Y and F364W mutants both have slightly elevated discrimination factors, but only the tryptophan has a significant fidelity effect in an infectious virus context. Among the mutants targeting the metal A site, the biochemical data would predict lowered fidelity, but the virus-based observation is one of slightly higher fidelity variants. These new data do retain the in vitro biochemical observation that faster polymerases have reduced 2′-OH discrimination (Fig. 5B), but for motif D mutants these effects are no longer clearly predictive of changes to in vivo virus mutation frequencies. The kinetic data also indicate that the cycling of the active site and motif D from an open to a closed conformation is important for the first round of catalysis, supporting molecular dynamics data arguing that motif D plays a role in nucleotide transport into the active site (31). Wild type 3D pol and the F364Y mutant have open active sites and comparable rates for the first nucleotide incorporation event based on the observed disappearance of starting material, but the F364A mutant is significantly slower despite having a preclosed active site that might be expected to accelerate incorporation of the first nucleotide (Fig. 6C). Thus, we conclude that the active site must be cycled to the open state to enable initial NTP binding, and the NTP is then repositioned to trigger active site closure and catalysis. Subsequent incorporation steps, i.e. processive elongation, are slowed in both the F364Y and F364A mutants because they stabilize the open or the closed state, respectively, and either effect will reduce the rate at which the polymerase progresses through the full catalytic cycle. The molecular origin of the lower in vivo mutation frequency observed for the F364W variant virus is not clear from the structures or the biochemical data, and the in vitro biochemical behavior of this mutant 3D pol is very similar to that of the non-fidelity variant F364Y (Fig. 4).
The fidelity effect is likely due to differences in motif D dynamics that alter the efficiency of NTP delivery into the active site or the coupling between motifs A and D, and elucidating the fine details of these motions will likely require atomic level dynamics data from NMR and computational approaches. The emergence of the F364W mutation as a genetically stable high fidelity variant is attributed to two distinct effects. First, the sole tryptophan codon is UGG, and any single base mutation of this codon will result in small or charged amino acids (i.e. Gly, Ser, Cys, Leu, and Arg) that likely cannot support virus replication based on our virus growth results. Thus, the Trp 364 variant virus effectively includes a genetic poison pill that minimizes its reversion potential. Second, at the level of a functional polymerase, the tryptophan creates a larger surface for the sliding motion that takes place relative to the motif D helix (Fig. 1B), but it does so by nonspecific hydrophobic interactions that do not strongly favor either the open or the closed conformation of the active site. Consequently, it has a small effect on the elongation rate, reducing it from 20 to 17 nucleotides/s, and remains fast enough to support virus growth. This is in stark contrast to two known high fidelity variants that were first identified in poliovirus but are not functional in coxsackievirus (9, 13): 3D pol G64S was originally selected as a ribavirin-resistant poliovirus (32), and in CVB3 it has slightly higher fidelity than wild type based on in vitro polymerase assays, but the mutation reduced virus replication by almost 100-fold, and it reverted to wild type after only three passages. Similarly, K359R is a viable high fidelity motif D mutant in poliovirus (33), but the structurally equivalent K360R slows the CVB3 3D pol elongation rate by ≈35-fold and does not support virus growth.
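The "poison pill" codon argument above can be checked mechanically by enumerating every single-base substitution of UGG and translating it. The small codon table below covers only the nine possible mutants of UGG:

```python
# Translation for the nine single-base mutants of the tryptophan codon UGG.
CODON_TABLE = {
    "AGG": "Arg", "CGG": "Arg", "GGG": "Gly",   # first-position changes
    "UAG": "Stop", "UCG": "Ser", "UUG": "Leu",  # second-position changes
    "UGA": "Stop", "UGC": "Cys", "UGU": "Cys",  # third-position changes
}

def single_base_mutants(codon):
    """Yield every codon differing from the input at exactly one position."""
    for i, base in enumerate(codon):
        for sub in "ACGU":
            if sub != base:
                yield codon[:i] + sub + codon[i + 1:]

products = {CODON_TABLE[m] for m in single_base_mutants("UGG")}
print(sorted(products))  # no mutant encodes a planar aromatic residue
```

Every single-base mutant of UGG yields a stop codon or one of Gly, Ser, Cys, Leu, and Arg, so no single substitution can restore a planar aromatic residue at position 364.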
A long term goal of understanding structure-function relationships in the viral RdRPs is to use such information to attenuate in vivo virus growth in ways that can lead to suitable vaccine strains. Prior studies with poliovirus and coxsackievirus have shown that either increasing or decreasing RdRP fidelity can attenuate growth in vivo. One potential advantage of doing this via a protein engineering approach is the identification of specific mutations that can retain function but are unlikely to arise by virus-based adaptation pathways. Such mutations may also be less likely to revert by the same pathways. In this study, we used a combination of structure-based protein engineering and virology to identify 3D pol F364W as a new high fidelity variant of CVB3 that significantly attenuates virus growth in mice. The parental phenylalanine residue is highly conserved among picornaviruses, and a tryptophan is never observed, suggesting that the nuances of codon usage have prevented this fully functional polymerase variant from appearing in any enterovirus, and likewise this limits the reversion potential of the variant. Our findings show that motif D and the conserved structural interactions with Phe 364 provide a powerful control point for engineering polymerase fidelity as a tool for attenuating virus growth.
Prosthetic Meshes for Repair of Hernia and Pelvic Organ Prolapse: Comparison of Biomechanical Properties

This study aims to compare the mechanical behavior of synthetic meshes used for pelvic organ prolapse (POP) and hernia repair. The analysis is based on a comprehensive experimental protocol, which included uniaxial and biaxial tension, cyclic loading and testing of meshes in dry conditions and embedded into an elastomer matrix. Implants are grouped as POP or hernia meshes, as indicated by the manufacturer, and their stiffness in different loading configurations, area density and porosity are compared. Hernia meshes might be expected to be stiffer, since they are implanted into a stiffer tissue (abdominal wall) than POP meshes (vaginal wall). Contrary to this, hernia meshes have a generally lower secant stiffness than POP meshes. For example, DynaMesh PRS, a POP mesh, is up to two orders of magnitude stiffer in all tested configurations than DynaMesh ENDOLAP, a hernia mesh. Additionally, lighter, large-pore implants might be expected to be more compliant, which was shown to be generally not true. In particular, Restorelle, the lightest mesh with the largest pores, is less compliant in the tested configurations than Surgipro, the heaviest, small-pore implant. Our study raises the question of defining a meaningful design target for meshes in terms of mechanical biocompatibility.

Introduction

Mechanical biocompatibility of prosthetic meshes for hernia and pelvic organ prolapse (POP) repair is related to the ability of implants to display a mechanical behavior compatible with their function, favoring their integration into the surrounding native tissue [1][2][3][4][5][6]. This approach to implant assessment has received increased attention in recent years.
While initial investigations focused on the ability of a mesh to provide sufficient strength and resistance to maximum loads [7][8][9][10][11][12], it recently became clear that the deformation behavior in a physiological range, also called the "comfort zone", is of major importance [13,14]. A mismatch of the mechanical properties of implants compared to native tissue has been associated with clinical complications [15][16][17][18][19], although none of these works explicitly links mechanical properties with clinical outcome. It has recently been suggested that meshes designed to mimic the biomechanical properties of the area of application are advantageous [2,14,20]. These investigations are further motivated by an FDA safety communication [21] pointing at risks associated with existing prosthetic meshes and corresponding surgical procedures for repair of POP. A wealth of studies has been conducted analyzing either hernia or POP meshes (see [3,22] for an extensive literature overview). However, little work has been performed to compare the mechanical response of these two groups, which may shed light on the prevalent clinical complications. The mechanical environment and loading conditions these implants are exposed to differ significantly between the abdominal wall and the pelvic floor. Physiological loads in terms of membrane tension were calculated based on Laplace's law to be around 0.035 N/mm in the pelvic region and 0.136 N/mm at the abdominal wall at rest, but can be orders of magnitude higher at increased intra-abdominal pressures [5,7,23]. This gives an indication of the range of load at which mesh implants should work best in supporting and mimicking native tissue, thus ensuring mechanical biocompatibility. Based on the experimental study presented in [22], the data analysis in the present investigation is extended to compare the mechanical properties of hernia and POP mesh implants with respect to physiological loading conditions.
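The membrane tensions cited above follow from Laplace's law for a thin-walled pressurized shell. A sketch of the spherical form, T = p·r/2; the pressure and radius below are illustrative assumptions chosen for the example, not the values used in [23]:

```python
def laplace_membrane_tension(pressure_n_per_mm2, radius_mm):
    """Membrane tension of a thin-walled sphere: T = p * r / 2, in N/mm."""
    return pressure_n_per_mm2 * radius_mm / 2.0

# Illustrative inputs: an intra-abdominal pressure of ~1.7 kPa
# (0.0017 N/mm^2) and an assumed abdominal radius of 160 mm.
tension = laplace_membrane_tension(0.0017, 160.0)
print(f"{tension:.3f} N/mm")
```

With these assumed inputs the formula returns 0.136 N/mm, the order of magnitude quoted for the abdominal wall at rest; the smaller radii of curvature in the pelvic region drive the lower 0.035 N/mm figure.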
Experimental Section

Nine mesh implants were investigated. They were grouped into hernia (n = 5) and POP (n = 4) implants based on the manufacturer's information, available on their respective websites, and analyzed accordingly. All products are described in Table 1. The mechanical testing procedure has been previously described in detail [22]. In short, each mesh type was tested in eight different configurations: 2 (uniaxial or biaxial tension) × 2 (dry or embedded) × 2 (0° or 90° direction).

[Table 1 caption: List of mesh types used for the present investigation, with their weight classified as ultralight, light or standard according to [24]. Principal directions of testing are marked in red. Scale bar (lower right): 5 mm. Their clinical application is listed as pelvic organ prolapse (POP) or hernia repair, as specified by the manufacturer.]

These test configurations represent the in vivo loading and environmental conditions of the mesh implants. Long, narrow strips of meshes used in "line-type" suspensions are mainly loaded in uniaxial tension, whereas wider sheets, such as those for hernia repair, are typically subjected to multiaxial tension states. Our earlier study examined the anisotropic behavior of these meshes along two perpendicular directions following the main knitting patterns. However, here the focus is on the stiffer of the two directions on a per-mesh basis. A dry mesh is tested as delivered, whereas an embedded specimen is embedded into a soft elastomer matrix (Young's modulus 0.0276 N/mm² [25]), mimicking in vivo, ingrown conditions. Experiments with uniaxial tension and biaxial tension (realized as a uniaxial strain test, also called "strip biaxial"; [22,26]) were performed on the same tensile test machine. In the uniaxial strain test, lateral contraction of the specimen is constrained, leading to stresses in the direction perpendicular to the loading axis, thus subjecting the sample to a biaxial state of tension.
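The eight test configurations per mesh are simply the Cartesian product of the three binary factors, which can be enumerated directly:

```python
from itertools import product

# The three binary factors of the testing protocol.
loading = ("uniaxial", "biaxial")
condition = ("dry", "embedded")
direction = ("0 deg", "90 deg")

configurations = list(product(loading, condition, direction))
print(len(configurations))  # 8 configurations per mesh type
```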
Test piece dimensions were selected to generate a free area of 30 mm × 15 mm (uniaxial) and 50 mm × 15 mm (biaxial). Each specimen was loaded to a maximum of 30% nominal strain (loading rate ~10⁻³ s⁻¹) and unloaded back to a pre-force threshold of 0.01 N for 10 cycles. Deformation analysis was performed in an optical, non-contact procedure in the center of the specimen, allowing for extraction of local strains (εloc) as the result of an image analysis algorithm, thus avoiding edge and clamp effects at the specimen boundaries. Force measurements at the clamps were converted to nominal membrane tension (Mt (N/mm)) by dividing by the undeformed width of the sample. For a detailed description of the loading protocol and data extraction, refer to [22]. The resulting Mt–εloc curves of each of the 8 specimens of each type, as well as the area density and porosity measurements [27,28], form the basis for the analysis and comparison of mesh groups. Dry mesh samples of known dimensions were weighed before mechanical testing using a high-resolution balance, and their area density was calculated as weight per area (kg/m²). Porosity is determined as the ratio of open area to the total area, including filaments, of one undeformed unit cell of the knitting pattern [27,28]. From the Mt–εloc curves, the secant stiffness K (N/mm) in the stiffer direction at the reference membrane tension of 0.035 N/mm (for hernia as well as POP meshes) was extracted for both the 1st and 10th cycle (see Figure 1). It is defined as K = ΔMt/Δε, where Δε is the difference of local strain at the reference membrane tension and at the beginning of the current cycle, and ΔMt is the corresponding change in membrane tension. The specific value of Mt was chosen as a load representative of the membrane tension in the pelvic region under physiological intra-abdominal pressure (IAP) at rest [23].
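The secant stiffness extraction described above can be sketched as follows, assuming a monotonically increasing loading branch of the Mt–εloc curve; the synthetic linear curve used here is illustrative only:

```python
import numpy as np

def secant_stiffness(local_strain, membrane_tension, mt_ref=0.035, strain_start=0.0):
    """Secant stiffness K = dMt/d(eps) in N/mm at a reference membrane
    tension: the tension rise from the cycle start (~0 N/mm at the
    pre-force threshold) to mt_ref, divided by the local-strain change
    over that span."""
    eps_ref = np.interp(mt_ref, membrane_tension, local_strain)
    return mt_ref / (eps_ref - strain_start)

# Synthetic loading branch: a linear mesh response Mt = 0.7 * strain.
strain = np.linspace(0.0, 0.3, 31)
tension = 0.7 * strain
k = secant_stiffness(strain, tension)
print(f"K = {k:.2f} N/mm")
```

For the linear test curve the secant stiffness simply recovers the 0.7 N/mm slope; for a real, nonlinear mesh curve it depends on the chosen reference tension, which is why 0.035 N/mm was fixed as the physiological reference.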
Each mesh is thus characterized by 10 parameters, i.e., a secant stiffness value for each of the tested configurations (uniaxial and biaxial tension, dry and embedded) in the 1st and 10th cycle, as well as area density and porosity. The implants are grouped into POP and hernia meshes as indicated on their official product insert. Each parameter is shown in a bar graph, as well as standard box plots, in order to visualize the differences between the two groups. To determine the statistical significance, the Wilcoxon rank sum test (equivalent to the Mann-Whitney U-test) is applied for each parameter.

Results

Figure 2 shows the uniaxial secant stiffness Kuni (N/mm) of each specimen grouped according to the manufacturer indication into POP (red) and hernia (blue) meshes, with the mean of each group shown in darker red and blue, respectively. The specific testing conditions (dry/embedded and first/10th cycle) are indicated in each subgraph. Figure 3 represents the corresponding biaxial secant stiffness Kbi (N/mm). The respective stiffness values for each configuration are reported in Tables 2 (POP meshes) and 3 (hernia meshes). The variability for the POP group is very large for all parameters, thus affecting the statistical significance of the differences observed. The Wilcoxon rank sum test indicates a statistically significant difference between the POP and hernia groups for the biaxial stiffness in the dry condition, both at the first and 10th cycle (p = 0.016 for both); see Figures 4a,b and 5a,b. The POP meshes were four- or five-fold stiffer than the hernia meshes in the first and 10th cycle, respectively. When comparing the mean and median stiffness for all configurations, POP implants are overall less compliant than hernia implants.
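The repeated p = 0.016 is a consequence of the exact rank sum test on groups of four and five: when the two groups do not overlap at all, the exact two-sided p-value is 2/126 ≈ 0.016, the smallest achievable for these sample sizes. A sketch with hypothetical, fully separated stiffness values:

```python
from scipy.stats import mannwhitneyu

# Hypothetical dry-condition biaxial stiffness values (N/mm):
# four POP meshes versus five hernia meshes, fully separated.
pop_stiffness = [2.0, 2.5, 3.0, 3.5]
hernia_stiffness = [0.3, 0.4, 0.5, 0.6, 0.7]

stat, p = mannwhitneyu(pop_stiffness, hernia_stiffness,
                       alternative="two-sided", method="exact")
print(round(p, 3))  # 0.016
```

Any fully separated pair of samples with n = 4 and n = 5 yields this same p-value, regardless of the actual stiffness magnitudes.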
Since the abdominal wall is known to be stiffer than vaginal tissue [9,29], and if mechanical biocompatibility were mainly dependent on properties similar to those of the implant area, one would expect a more compliant design for POP implants compared to hernia implants. Embedding a mesh into a polymer matrix, thus reflecting the interaction with native tissue, affects the mechanical response of the implants. The differences between the groups are still evident also for this case (see …). High density and small pores are often linked to high stiffness in prosthetic meshes [30,31]. When comparing the POP and hernia groups, no statistically significant difference can be found in these parameters (see Figures 6 and 7). However, tendencies can be seen, with the hernia meshes being heavier (and similarly porous) while still being in general more compliant.

Discussion

Better clinical outcome might be expected from meshes designed to mimic the physiologically relevant deformation behavior of the underlying native tissue, thus ensuring mechanical biocompatibility. This entails a meaningful stiffness reference target for mesh design. However, the physiological loading configuration, as well as the range of load levels in terms of membrane tension in the abdominal and vaginal wall, still remain largely uncertain. While membrane tensions in the abdominal wall are generally higher [7] than in the pelvic region (simply due to geometric reasons, as shown in [23]), increasing the level of membrane tension at which the secant stiffness is evaluated for the hernia meshes to a level of 0.136 N/mm (reported in [23] as a tension at rest in the abdominal wall) only marginally increases their stiffness and does not change the trends reported in Figures 2-7. The range of stiffness values for native tissue reported in the literature shows large variation and is mostly based on uniaxial tensile tests, while the predominant loading state in vivo is biaxial. Song et al.
[32] report a Young's modulus of 0.042 N/mm² and 0.0225 N/mm² in the transverse and sagittal plane, respectively, for the human abdominal wall during in vivo insufflation, which would translate to membrane stiffness values of 1.26 N/mm and 0.675 N/mm, respectively, when multiplying by the reported thickness of around 30 mm [32]. Analyzing the uniaxial stress-strain graphs shown in [33], abdominal skin has a secant stiffness of 1.7 N/mm at a membrane tension reference of 0.136 N/mm, whereas vaginal wall stiffness is 1.45 N/mm at a reference of 0.035 N/mm membrane tension. Rabbits are one model system for mesh performance evaluation. Analysis of the stress-strain curves in [34] yields uniaxial stiffness values for the abdominal wall complex of rabbits of 0.87-0.98 N/mm, whereas [35] report 0.28 N/mm at the same reference membrane tension of 0.136 N/mm. Biaxial stiffness under inflation, however, increases by an order of magnitude to 2.41 N/mm. Similarly, vaginal wall tissue stiffness values at a reference membrane tension of 0.035 N/mm range from 0.155 N/mm (prolapsed tissue; [29]) and 0.675 N/mm (healthy tissue; [33]) to 1.4 N/mm (prolapsed tissue; [11]) and up to 6.47 N/mm (healthy tissue; [11]). This variability is due to differences in experimental methodology, in vivo vs. ex vivo testing, cadaver testing, animal vs. human tissue, pathological vs. healthy tissue, as well as the inherent variability of biological soft tissues. This scatter poses a significant problem in determining meaningful mechanical design targets for prosthetic mesh implants and warrants further investigation. Focus should be on the definition of consistent testing procedures based on physiological, in vivo loading and stress magnitude conditions. It has to be noted that the approach of mimicking the ingrown state of the mesh using an elastomer matrix is only partially representative of the in vivo condition.
This is mainly due to the differences between the embedding procedure and the process of tissue ingrowth. A mesh sample is simply laid into liquid elastomer, and the elastomer is left to cure. This results in an elastomer-mesh complex that is formed in an unloaded initial configuration, while in vivo ingrowth might be expected to happen in a loaded state. Due to this discrepancy, the definition of a secant stiffness between the tension values chosen here (the pre-force threshold defined in the experimental protocol and the reference membrane tension) might not be representative of the actual in vivo load range. This further highlights the need to investigate the expected in vivo loading conditions of mesh implants, so as to define testing protocols that reproduce physiological states. When comparing individual meshes in terms of their physical and mechanical parameters, an instructive example can be seen in two meshes manufactured by FEG Textiltechnik: DynaMesh ENDOLAP (DM), used in hernia repair, and DynaMesh PRS (DMPRS), used for pelvic prolapse repair. While their porosity is similar and DM is indeed the heavier of the two, as expected for a hernia mesh, DMPRS is much stiffer in all tested configurations, by up to two orders of magnitude in the case of uniaxial tension. Similarly, Restorelle (a POP mesh) is the lightest implant with the largest pores; however, its stiffness is clearly above average for most tested configurations. Restorelle is less compliant in the configurations tested here than SPMM, the heaviest, small-pore implant. Note that in previous studies [13,36,37], Restorelle was shown to be more compliant than some of the meshes tested here. This can be attributed to the differences in the experimental protocols. In particular, the present experiments evaluate the response of meshes at a physiological tension level.
Thus, the low deformation regime determines the measured stiffness in the present work, whereas the ball burst test in [36] depends more on the higher deformation regime. Lightweight, large-pore meshes might be expected to be more compliant than heavy, small-pore implants [36]. However, the present analysis shows that while POP meshes are on average indeed lighter and often have larger pores, they are generally stiffer in the physiological loading regime. In these loading configurations, porosity and density alone cannot be predictors of mesh stiffness. Their specific knitting pattern and microstructure can lead to mechanism-like behavior in a physiological loading range, effects that determine their compliance. This calls for a careful evaluation of the mechanical properties of each mesh on several length scales, in conditions representative of those expected in vivo. The present experimental results do not take into account loads in directions other than the main knitting patterns. In fact, the meshes are usually implanted such that their knitting pattern aligns with principal loading directions, such as in line-type suspensions (e.g., urethral slings) or sacrocolpopexy procedures. Some meshes, such as DM, Ultrapro (UP), and PE, even have colored filaments interwoven to guide the physician during implantation. Deviation from this rule might lead to a mechanical response that strongly differs from the data reported here. Similarly, while each mesh was tested in two perpendicular directions, only the stiffer of the two is considered for the present analysis, being indicative of an upper bound of stiffness. Knitted meshes do indeed tend to behave in an anisotropic way, as investigated in [22], with anisotropy indices ranging from 1.0 for SPMM (similar stiffness in both evaluated directions) to 8.0 for DMPRS. In addition, only one sample per configuration has been tested due to the limited availability of raw mesh material, which also limited the sample size.
However, the level of variability for mesh implants reported in the literature [18,22,37,38] is low, justifying the analysis conducted.

Conclusions

The mechanical biocompatibility of prosthetic mesh implants for hernia and POP repair is very likely an important factor in ensuring their functionality and integration into the host tissue. We see matching mechanical properties in a physiological loading range as desirable and as an important step towards reducing clinical complications. This study has shown that some meshes designated as suited for POP repair tend to be stiffer than those used for hernia repair, even though the abdominal wall has been shown to be less compliant than the vaginal wall. Additionally, the expectation of lightweight, large-pore meshes being more compliant than their counterparts was contradicted by the presented data and specific testing configurations, indicating that a biomechanical analysis of each product is necessary to determine its mechanical suitability. Knowledge of the physiological, in vivo mechanical environment in terms of loading configuration and magnitude is required in order to define a suitable design target for optimization of implants. Data reported in the literature show large variations in testing configurations and corresponding stiffness values for pelvic organs and abdominal wall tissue. A consensus on a standardized, physiological mechanical testing procedure is needed for native tissues and implants, opening the path for a conscious mechanical design of prosthetic meshes.
Essential right heart physiology for the perioperative practitioner POQI IX: current perspectives on the right heart in the perioperative period

As patients continue to live longer with diseases that predispose them to right ventricular (RV) dysfunction or failure, many more patients will require surgery for acute or chronic health issues. Because RV dysfunction results in significant perioperative morbidity if not adequately assessed or managed, understanding appropriate assessment and treatments is important in preventing subsequent morbidity and mortality in the perioperative period. In light of the epidemiology of right heart disease, a working knowledge of right heart anatomy and physiology and an understanding of the implications of right-sided heart function for perioperative care are essential for perioperative practitioners. However, a significant knowledge gap exists concerning this topic. This manuscript is one part of a collection of papers from the PeriOperative Quality Initiative (POQI) IX Conference focusing on "Current Perspectives on the Right Heart in the Perioperative Period." This review aims to provide perioperative clinicians with an essential understanding of right heart physiology by answering five key questions on this topic and providing an explanation of seven fundamental concepts concerning right heart physiology.

Introduction
As patients continue to live longer with diseases that predispose them to right ventricular (RV) dysfunction or failure, many more patients will require surgery for acute or chronic health issues. Because RV dysfunction results in significant perioperative morbidity if not adequately assessed or managed, understanding appropriate assessment and treatments is important in preventing subsequent morbidity and mortality in the perioperative period. Pulmonary hypertension, one of the leading causes of RV dysfunction, affects approximately 1% of the global population and 10% of individuals > 65 years old (Taylor et al. 2007; Hoeper et al. 2016; Peacock et al. 2007). The overall incidence of RV dysfunction in patients undergoing non-cardiac surgery is less well studied across the population; however, certain patients are known to be at increased risk of having RV dysfunction or failure. These diseases include, but are not limited to, primary and secondary pulmonary hypertension, schistosomiasis, restrictive and obstructive lung disease, obstructive sleep apnea (OSA), myeloproliferative disorders, congenital heart disease, thyroid disorders, fibrosing mediastinitis, and chronic thromboembolic pulmonary disease, among many others. Understanding which patients are at risk of developing RV dysfunction will help in determining who should receive further perioperative testing and which management options should be available during the perioperative period to prevent significant morbidity and mortality (Bronze et al. 1988; Memtsoudis et al. 2010). In light of the increasing incidence of right heart disease, a working knowledge of right heart anatomy and physiology and an understanding of the implications of right-sided heart function for perioperative care are essential for perioperative practitioners. However, a significant knowledge gap exists concerning this topic. In fact, a recent scientific statement from the American Heart Association on the evaluation and management of right-sided heart failure concluded "it is remarkable how misunderstood are some basic concepts of right-sided heart dysfunction among practicing clinicians and the impact that such misunderstanding can have on appropriate patient management" (Konstam et al. 2018). This manuscript is one part of a collection of papers from the PeriOperative Quality Initiative (POQI) IX Conference focusing on "Current Perspectives on the Right Heart in the Perioperative Period." This review aims to provide perioperative clinicians with an essential understanding of right heart physiology.
Methods
Founded in 2016, POQI is a multidisciplinary non-profit (501c3) organization whose intent is to organize consensus conferences on topics of interest in the domain of perioperative medicine. The goal is to distill the literature and make clinically relevant recommendations to improve patient care. The POQI methodology, including the use of a multiround modified Delphi technique and the GRADE system for evidence evaluation, has been described previously (Chan et al. 2020; Martin et al. 2020; Thiele et al. 2020). The POQI-9 consensus conference took place in New Orleans, LA from December 1-3, 2022. The objective of POQI-9 was to produce consensus statements and practice recommendations concerning Perioperative Assessment and Management of the Right Ventricle. The participants in the POQI consensus meeting were recruited based on their expertise in these domains (see Appendix 1). Conference participants were divided into three work groups. This paper details the work of Group 1, entitled "Essential Right Heart Physiology for the Perioperative Practitioner." Groups 2 and 3 focused on the assessment and management of right heart dysfunction.

Discussion
This POQI-9 subgroup sought to develop a consensus document providing an essential understanding of right heart physiology. Our target population includes adult patients who do not have congenital cardiac disease. As such, this consensus statement does not apply to patients with congenital or repaired congenital cardiac disease. A priori we addressed the following questions:

1. Question #1: What are the fundamental concepts for understanding right ventricular (RV) anatomy and physiology, including similarities and differences from the left ventricle (LV)?
2. Question #2: What are the components that determine RV pump function?
3. Question #3: What are the systemic consequences of right heart congestion?
4. Question #4: What is the physiologic cascade that occurs with declining right ventricular performance?
5.
Question #5: What are physiologic stresses on right heart performance that occur in the perioperative period?

Each section of the "Discussion" section will be introduced with summary statements concerning key concepts related to understanding the right heart, followed by a narrative review of the latest evidence.

Concept #1a The right ventricle (RV) is fundamentally different in anatomy and physiology from the left ventricle (LV).

Concept #1b Changes in coronary blood flow in the setting of pulmonary hypertension make the RV more susceptible to ischemia from systemic hypotension.

Increased recognition of the right ventricular (RV) contribution to overall cardiovascular performance in both health and disease has prompted the publication of several monographs and focused reviews (Naeije 2015; Gittenberger-de Groot et al. 2015; Edward et al. 2023; Vandenheuvel et al. 2013; Sanz et al. 2019; Dell'Italia 2012; Walker and Buttrick 2009; Haddad et al. 2008). In addition, professional organizations have issued statements highlighting knowledge gaps and underscoring the need for better methods to assess function along the course of RV adaptation from dysfunction to failure (Konstam et al. 2018; Lahm et al. 2018; Voelkel et al. 2006). Within this context, a scientific statement from the American Heart Association on the perioperative management of patients with pulmonary hypertension was recently published (Rajagopal et al. 2023).

While the normal RV is generally characterized as a thin-walled structure largely wrapped around the interventricular septum that ejects blood at low pressure into the pulmonary circulation, the fetal RV functions at high pressures and provides the majority of systemic blood flow. As such, the RV does not begin to assume its eventual structure and shape until pulmonary vascular resistance markedly falls after birth, when the lungs expand and the ductus arteriosus and foramen ovale close (Sanz et al. 2019).
The RV is regarded as having three regions (inflow, apical, and outflow) arranged in a "boot-like" or triangular configuration along the septum (Walker and Buttrick 2009). In the free wall, superficial circumferential fibers predominate and wrap around the LV, with a subendocardial layer of longitudinal fibers passing from the apex to the tricuspid annulus and outflow tract (Sanz et al. 2019). The midline is formed by the interventricular septum, comprised of oblique helical fibers that cross each other at 60° angles, similar to the LV free wall (Buckberg and Hoffman 2014). Fiber orientation and distribution influence the function of both ventricles, with transverse fibers producing circumferential strain and helical fibers causing longitudinal strain when oblique fibers at reciprocal angles thicken and coil. Overall, the predominant strain in terms of work is longitudinal (Haddad et al. 2008). For the RV, basilar wrap-around circumferential fibers and the septum primarily dictate systolic function (Buckberg and Hoffman 2014).

Internally, the inflow tract and apical regions include papillary muscles and coarser trabeculation than the LV and transition into the non-trabeculated outflow tract below the pulmonic valve (Walker and Buttrick 2009). Although increasingly sophisticated molecular biology techniques have highlighted the complexity of cardiac morphogenesis and the origin of the primitive cardiac tube, it is clear that differences in LV and RV structure and function reflect variant embryology. For the RV, different areas are conventionally regarded as developing from different primitive cardiac tube components, with the ventricular portion giving rise to the inflow and apical regions (as well as the LV), and the outflow tract arising from the bulbus cordis (Dell'Italia 2012). Particular interest has been focused on the development of the outflow tract given its role in congenital heart disease and as a major site for arrhythmogenic cardiomyopathy (Boukens et al. 2016). In addition, substantial pressure gradients between the RV and pulmonary artery have been reported with sympathetic stimulation or rapid afterload reduction due to a hypercontractile outflow tract (Raymond et al. 2019; Kroshus et al. 1995). Some authors have suggested that outflow tract narrowing early in systole is an adaptive response that protects the pulmonary circulation from high pressure and ejection velocity (March et al. 1962). However, the synchrony of inflow-to-outflow shortening is also affected by the depressive effects of anesthetics and autonomic blockade (Heerdt and Pleimann 1996).

The majority of blood supply to the RV free wall comes from the right coronary artery (RCA), with branches perfusing the atrioventricular (AV) and sinoatrial (SA) nodes. In most patients, the RCA is the predominant source of flow to the posterior descending artery, perfusing the inferior LV wall and posterior third of the interventricular septum. The remaining two-thirds of the interventricular septum is supplied by the left anterior descending coronary artery, which may also perfuse some of the medial RV free wall (Ikuta et al. 1988). It is well appreciated that some patients have a supernumerary coronary vessel termed the conus artery that arises from an ostium behind the right cusp of the aortic valve, either distinct from or close to the RCA ostium, and courses over the antero-superior surface of the RV before terminating near the anterior interventricular groove (Schlesinger et al. 1949). The conus artery has a lower incidence of occlusion than the RCA or LCA, can provide collateral flow to these vessels, and may contribute to the preservation of RV outflow tract function in the setting of acute RV infarction (Dell'Italia 2012). Venous drainage of the RV differs from the LV in that most flow bypasses the coronary sinus and empties directly into the right heart (Sirajuddin et al.
2020). Anatomically, venous drainage occurs via small Thebesian vessels, along with the right marginal vein, a series of anterior cardiac veins, and the infundibular veins. In roughly a quarter of the population, a small cardiac vein enters the coronary sinus at a point close to the coronary sinus/RA junction. Table 1 provides a comparison of the major anatomical components of the RV and LV.

The dynamics of coronary perfusion vary substantially between the RV and LV. In a recent extensive review, Crystal and Pagel described the distinctive characteristics of RV perfusion which promote a relative resistance to myocardial ischemia and dysfunction, and how this protection may become compromised in patients with acute pulmonary hypertension (Crystal and Pagel 2018). These factors are primarily related to the lower developed intracavitary and tissue pressures during systole in the normal RV and are as follows: (1) in contrast to the LV, blood flow occurs throughout the entire cardiac cycle; (2) lower baseline oxygen uptake and the ability to at least partially compensate for reduced blood flow by increasing oxygen extraction; (3) preservation of energy stores during decreased perfusion by downregulation of oxygen demand; (4) while epicardial coronary stenosis disproportionally impairs perfusion of the LV subendocardium, reduced perfusion in the RV is transmurally uniform; and (5) potentially retrograde perfusion from the RV cavity through the Thebesian veins and extensive collateral connections.

Differences in myocardial perfusion during systole can be of particular concern in the perioperative setting. As shown in Fig. 1, the low RV pressure normally generated during systole permits coronary arterial flow during both systole and diastole due to a continuous aortic root-RV myocardial pressure gradient. However, with afterload stress, the increased RV systolic pressure necessary to maintain ejection will increase oxygen demand and, if combined with systemic hypotension, can result in decreased RV perfusion and supply/demand mismatch. Not surprisingly, in the setting of pulmonary hypertension, impaired RV systolic function secondary to ischemia can become quickly apparent when acute systemic hypotension is superimposed and the systolic component of perfusion is lost (Steppan and Heerdt 2021).

Electrical activation of the RV free wall spreads from the AV node via branches of the right bundle of the His-Purkinje system (Padala et al. 2021) and is generally coincident with that of the LV, although septal contraction may precede that of the RV free wall. Within the RV, contraction is typically heterogeneous, with inflow tract contraction preceding that of the outflow tract by 30 to 60 ms, most likely reflecting at least in part regional differences in the conducting apparatus (Heerdt and Dickstein 1997).

Concept #2a In contrast to the LV, normal RV pump function is more sensitive to changes in afterload and more tolerant of changes in preload.

Concept #2b LV contraction is important for normal RV function, and a significant percentage of RV outflow is generated by LV contraction.

Physiology
Despite structural and functional differences, the performance of both the LV and RV as volume pumps is largely dictated by the same factors (preload, afterload, and contractility). That said, specific features of each of these factors as well as their regulation vary between chambers. In relation to these components, Table 2 summarizes the pharmacology and physiology by receptor sites in the right heart.
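The loss of the systolic perfusion gradient discussed under Concept #1b can be illustrated numerically: the driving gradient for RV coronary flow in each phase of the cardiac cycle is approximately aortic pressure minus RV pressure. The pressure values below are illustrative round numbers, not data from the study referenced in Fig. 1.

```python
def rv_perfusion_gradients(aortic_sys, aortic_dia, rv_sys, rv_dia):
    """Approximate aorta-to-RV pressure gradients (mmHg) in systole and diastole.
    A positive gradient permits coronary flow during that phase of the cycle."""
    return aortic_sys - rv_sys, aortic_dia - rv_dia

# Normal: low RV systolic pressure -> a positive gradient in both phases
normal = rv_perfusion_gradients(aortic_sys=120, aortic_dia=70, rv_sys=25, rv_dia=5)

# Severe pulmonary hypertension plus systemic hypotension (hypothetical values):
# RV systolic pressure approaches aortic pressure, collapsing the systolic gradient
ph_hypotension = rv_perfusion_gradients(aortic_sys=85, aortic_dia=50, rv_sys=80, rv_dia=15)

print(normal)          # perfusion throughout the cycle
print(ph_hypotension)  # systolic perfusion nearly lost
```

The second case shows why superimposed systemic hypotension is so poorly tolerated in pulmonary hypertension: the RV becomes dependent on the diastolic interval alone, just as its oxygen demand rises.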
Preload
In that sarcomere length at the end of diastole is indicative of myocardial preload, ventricular compliance, determined by the end-diastolic pressure/volume relationship, plays a major role. For the LV, diastolic compliance is largely determined by the inherent viscoelastic properties of the thick wall and is normally independent of the RV. In contrast, for the thin-walled, highly distensible RV, the pericardium, intrathoracic pressure, and LV influence diastolic compliance (Sanz et al. 2019). In the progression of RV adaptation to dysfunction with pulmonary hypertension, the influence of pericardial restraint on diastolic compliance may initially be reduced as the RV hypertrophies. However, restrictions in diastolic compliance become increasingly important as the disease progresses and ventricular dilation with wall thinning occurs.

Afterload
Conceptually, ventricular afterload is the end-systolic wall tension that results from the opposition to sarcomere shortening and ejection of blood. The forces opposing ejection can be broadly characterized as resistive, elastic (compliant), and reflective (coming back toward the heart late in systole) and vary over the course of ejection. This distinction has particular functional significance for the RV for several reasons. First, although RV afterload is commonly expressed as steady-state (non-pulsatile) pulmonary vascular resistance (mean pressure/mean flow), 30-50% of the work performed by the chamber is pulsatile, i.e., goes toward overcoming the elastic and reflective forces (Grandin et al. 2017). Second, in comparison to the LV, acute increases in RV afterload have a much greater impact on pump function. In this context, acute insults such as pulmonary embolism can have profound effects. When the load stress is chronic, however, the RV does have the ability to adapt through both heterometric and homeometric processes (Edward et al.
2023). Finally, in the perioperative and critical care environments, interventions such as mechanical ventilation and positive end-expiratory pressure can increase both non-pulsatile and pulsatile determinants of afterload. As such, the need for a better understanding of RV afterload and the definition of more complete metrics to quantify afterload has been identified as a research priority (Lahm et al. 2018).

Contractility
Despite differences in myocyte size (RV myocytes are ~15% smaller than those from the LV) and the suggestion of differences in sarcomere shortening and intracellular calcium transients (Walker and Buttrick 2009; Erickson and Tucker 1986), the ability of LV and RV myocytes to perform work over a range of loading conditions is similar. However, consistent with structural and geometric differences between the chambers, in the intact heart the RV work/load relationship is substantially different from that of the LV. Traditionally, RV contraction has been characterized as having four phases, beginning with (1) a "bellows effect" produced by inward movement of the RV free wall. Ultimately, the interaction of preload (both the magnitude of end-diastolic volume and the associated pressure) with contractility and afterload (both the magnitude and timing of peak load) dictates characteristics of the RV pressure-volume relationship (Fig. 2). Under normal low-pressure, low-afterload conditions the timing of peak pressure in the RV occurs earlier in the cardiac cycle than in the LV, and this difference is reflected in the shape of the pressure-volume loop. However, with increased afterload the timing of peak RV pressure can shift to late systole, causing the RV pressure-volume loop to more closely resemble that of the LV.

Fig. 2 Example of left (LV) and right (RV) ventricular pressure-volume loops (animal model). LV loops are normally rectangular with a well-defined upper left corner corresponding to end-systole, which occurs shortly after maximal pressure is reached. In contrast, under normal, low-pressure conditions the RV loop is more triangular with a less well-defined upper left corner that occurs well after maximal pressure is reached. However, in the setting of pulmonary hypertension, the RV loop transitions to a morphology more similar to a normal LV pressure-volume loop. Data were obtained during an experimental study of progressive pulmonary vasoconstriction under a protocol approved by the institutional animal care and use committee. The figure is reproduced with permission from the PeriOperative Quality Initiative (POQI).

Right heart dysfunction: venous congestion and physiologic consequences

Concept #3 Venous congestion is a consequence of right heart failure and may contribute to inadequate perfusion and organ dysfunction.
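The inlet/outlet framing of Concept #3 reduces to simple arithmetic: organ perfusion pressure is approximately mean arterial (inlet) pressure minus venous (outlet) pressure, so a congested right heart erodes perfusion even when arterial pressure is normal. The numbers below are illustrative, not patient data.

```python
def perfusion_pressure(map_mmhg, cvp_mmhg):
    """Approximate organ perfusion pressure (mmHg) as inlet minus outlet pressure."""
    return map_mmhg - cvp_mmhg

# Same mean arterial pressure in both cases; only venous congestion differs.
normal = perfusion_pressure(map_mmhg=75, cvp_mmhg=5)
congested = perfusion_pressure(map_mmhg=75, cvp_mmhg=22)  # right heart failure
print(normal, congested)
```

With an identical MAP of 75 mmHg, the congested patient loses roughly a quarter of the perfusion pressure purely through the elevated outlet pressure, which is why normal arterial pressure can mask clinically important hypoperfusion in right heart failure.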
It is common for clinicians to consider the effect of left heart failure, especially poor cardiac output, on systemic organ dysfunction. However, the effects of right heart failure on organ dysfunction are often not taken into account. While the left heart produces the inlet pressure (i.e., mean arterial pressure) that promotes organ perfusion, right heart failure can profoundly increase the outlet pressure from an organ (i.e., venous pressure and central venous pressure), thereby reducing the perfusion pressure even in the setting of normal arterial pressure. Right heart failure impairs the forward flow of deoxygenated blood, causing elevated venous pressure, the hallmark sign of right heart failure. This leads to a pathological milieu of peripheral and visceral venous congestion. Peripheral venous congestion will lead to jugular venous distension (JVD), a classic sign of venous hypertension, and lower extremity edema. As right heart failure progresses, patients will experience increased exercise intolerance and chronic fatigue (Konstam et al. 2018). In hospitalized patients, JVD due to right heart failure is associated with an increased risk of adverse events, 30-day mortality, and 1-year all-cause mortality (Chernomordik et al. 2016).

Beyond peripheral venous congestion, it has been shown that visceral venous congestion due to RV dysfunction correlates with impaired liver, kidney, and intestinal function, and cardiac cachexia (Valentova et al. 2013). Heart failure leading to kidney failure has been termed cardiorenal syndrome. In decompensated right heart failure with reduced ejection fraction (HFrEF), chronic elevation of central venous pressure and decreased cardiac output lead to activation of vasopressin, the renin-angiotensin-aldosterone system (RAAS), and the sympathetic nervous system, resulting in vasoconstriction with sodium and water retention. This leads to decreased renal perfusion, ischemia of the kidney, and decreased glomerular filtration rate, creating a clinical picture of decreased urine output and increased fluid retention (Konstam et al. 2018). Similarly, cardiohepatic syndrome, or congestive hepatopathy, is a result of hepatic congestion and reduced perfusion to the liver. In chronic right heart failure (RHF), symptoms of liver involvement can be vague early on, often mimicking symptoms of cholelithiasis such as right upper quadrant pain and nausea (Samsky et al. 2013). As RHF progresses, symptomatology worsens as hepatic venous pressures continue to rise, thereby decreasing hepatic oxygen delivery (Samsky et al. 2013). As the syndrome persists, cardiac cirrhosis is a likely end result (Konstam et al. 2018). Chronically increased CVP and reduced CO can also lead to impaired gastrointestinal function as a result of visceral congestion. The intestine is typically well perfused by the splanchnic circulation. However, when venous congestion activates the sympathetic nervous system, with subsequent constriction of blood vessels and reduction of perfusion, intestinal ischemia and inflammation occur (Konstam et al. 2018). The consequences of these changes in the gastrointestinal tract include reduced nutrient absorption, anemia, hypoalbuminemia, and cachexia (Konstam et al.
2018). Due to the combination of cardiorenal interactions, hepatomegaly, and reduced gastrointestinal function, cardiac cachexia is a common result. Independent of age or functional class, cardiac cachexia is predictive of increased mortality in patients with heart failure (Cicoira et al. 2007). Cachexia further worsens the inflammatory response and its consequences, such as cardiac and skeletal muscle changes, worsening cardiac function and reducing physical activity tolerance. This creates a vicious cycle of loss of muscle mass, which only potentiates the cachectic process (Cicoira et al. 2007). Taken together, venous congestion as a consequence of worsening right heart failure leads to reduced organ perfusion that results in significant end-organ dysfunction.

Concept #4 Predictable physiologic disturbances occur in the progression from normal right heart function to right heart failure.

Predictable changes occur in right heart failure (RHF). Since the right heart is a lower-pressure system, it is more sensitive to alterations in afterload. Due to ventricular interdependence, any modest change in pulmonary vascular resistance, such as in the presence of pulmonary hypertension, will create an increase in RV afterload, causing the RV stroke volume to subsequently decrease and compromising left ventricular filling due to right-to-left septal shifting (Rosenkranz et al. 2020). This interaction leaves the LV underfilled due to the RV congestion, yet left-sided pressures are elevated. The result is a decrease in cardiac output. This becomes particularly challenging during scenarios that cause increased venous return and additional increases in RV volume, such as during times of activity.
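The afterload sensitivity described above can be quantified with the conventional steady-state resistance calculation noted in the Afterload section (mean pressure drop divided by mean flow): PVR = (mPAP - PCWP) / CO in Wood units, multiplied by 80 for dyn*s*cm^-5. The values below are illustrative only and, as discussed earlier, this steady-state number ignores the pulsatile component of RV afterload.

```python
def pvr_wood_units(mpap_mmhg, pcwp_mmhg, cardiac_output_lpm):
    """Pulmonary vascular resistance in Wood units: (mPAP - PCWP) / CO."""
    return (mpap_mmhg - pcwp_mmhg) / cardiac_output_lpm

def wood_to_dyn(wood_units):
    """Convert Wood units to dyn*s*cm^-5."""
    return wood_units * 80

# Hypothetical hemodynamic profiles, not patient data:
normal = pvr_wood_units(mpap_mmhg=14, pcwp_mmhg=8, cardiac_output_lpm=5.0)
ph = pvr_wood_units(mpap_mmhg=40, pcwp_mmhg=10, cardiac_output_lpm=4.0)
print(normal, ph, wood_to_dyn(ph))
```

The several-fold rise in the second profile is the kind of afterload increase that, through ventricular interdependence, reduces RV stroke volume and shifts the septum leftward as described above.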
As RV volumes continue to increase, functional tricuspid regurgitation results, causing worsening RV dilation and a subsequent decrease in left ventricular filling and left ejection fraction. Because the right ventricle fails to operate as a forward pump, the systemic venous circulation becomes impaired, resulting in systemic venous congestion that causes jugular venous distention, lower extremity edema, hepatosplanchnic congestion, and gut edema (Wenger et al. 2017). Due to increased left heart pressures, we expect to see the dyspnea and increased fatigability associated with congestive heart failure. An increase in right-sided filling pressures also compromises coronary blood flow due to right ventricular dilation and hypertrophy. The dilated, hypertrophied ventricle then creates additional oxygen demand that the compromised coronary flow is unable to satisfy (Rajagopal et al. 2023).

In the presence of pulmonary artery hypertension (PAH) due to left ventricular failure, the RV afterload gradually increases (Konstam et al. 2018). The chronicity of PAH and RHF renders the RV much less tolerant of volume overload, pushing compensated right heart failure into a decompensated state through ventricular remodeling and ultimately fibrosis of the right ventricle. Once this occurs, the expected increase in pulmonary vascular resistance and right atrial pressure is coupled with decreased cardiac output and pulmonary arterial pressure, potentially leading to cardiogenic shock and death (Rajagopal et al. 2023).
The perioperative period is known to create physiologic stresses of varying degrees that are of particular importance to right heart physiology. These stressors are predictable and frequently modifiable and fall into three main categories: surgical, anesthetic, and physiologic. Table 3 provides a list of common, predictable stressors, the stress response on the RV and systemic hemodynamics, and an example of how each may be encountered in the perioperative period. The list is intended as a guide for consideration, not an exhaustive detailing of potential perioperative stressors.

Conclusions
The goal of this narrative review was to provide the perioperative practitioner with an essential understanding of right heart physiology. Several key points should be mastered for clinical application. First, the RV is fundamentally different in anatomy and physiology from the LV, and changes in coronary blood flow in the setting of pulmonary hypertension make the RV more susceptible to ischemia from systemic hypotension. Second, in contrast to the LV, normal RV pump function is more sensitive to changes in afterload and more tolerant of changes in preload, and LV contraction is important for normal RV function, as a significant percentage of RV outflow is generated by LV contraction through ventricular interdependence. Third, venous congestion is a consequence of right heart failure and is a significant contributor to inadequate perfusion and organ dysfunction. Fourth, part of the understanding of right heart function is that there are predictable physiologic disturbances that occur in the progression from normal right heart function to right heart failure. Finally, all of this finds clinical relevance for perioperative practitioners because there are predictable, modifiable physiologic stresses that occur in the perioperative period. Other papers in this series will expand upon this knowledge base to incorporate specific strategies for the assessment and management of right heart dysfunction and failure in the perioperative period.

Fig. 1 Comparison of pressure in the ascending aorta (AP, in red) and right ventricle (RVP, in blue) along with the pressure gradient between them (AP-RVP, in black) driving coronary perfusion. Under normal conditions (left panel), AP > RVP at all times, facilitating RV perfusion in both systole and diastole. In contrast, in the setting of marked pulmonary hypertension (PH) (right panel), RVP can exceed AP during the systolic portion of the cardiac cycle, thus eliminating the positive pressure gradient during systole and limiting perfusion to the diastolic interval. Data were obtained during an experimental study of progressive pulmonary embolization under a protocol approved by the institutional animal care and use committee. The figure is reproduced with permission from the PeriOperative Quality Initiative (POQI).

Completing the four phases of RV contraction introduced above: (2) longitudinal shortening pulling the tricuspid annulus toward the apex; (3) late contraction of the RV outflow tract; and (4) LV augmentation of RV contraction via contiguous circumferential fibers and septal shortening. Enhanced experimental and imaging techniques have expanded our understanding of how transverse and helical muscle fibers within the RV free wall and septum interact in a sequential fashion to produce force and eject blood. In particular, the data indicate that longitudinal shortening results primarily from coiling of helical fibers, not contraction of longitudinal muscle layers, and that the septum plays a major role in generating longitudinal strain (Buckberg and Hoffman 2014). These concepts underscore the importance of considering ventricular interdependence, since a substantial portion of RV systolic function is ultimately provided by LV contraction and septal movement. In an intricate study involving electrical isolation of the RV and LV, Damiano et al. demonstrated that if LV contraction is maintained while RV free wall movement is prevented, when RV filling is optimized more than 60% of the beating RV pressure and 80% of the pulmonary arterial flow are produced (Damiano et al. 1991), highlighting the contribution of LV and septal contraction to RV function. Subsequent studies have focused on this phenomenon as it relates to the impact of LV mechanical assist devices on RV function. When RV pressure and volume become markedly increased or critical areas of the septum are infarcted, interdependence can transition to "ventricular interference" as a leftward shift in the interventricular septum impedes LV filling, or loss of septal helical motion impairs RV longitudinal shortening.

Table 1 Comparative characteristics of normal left (LV) and right (RV) ventricles
Table 2 Receptor pharmacology and physiology affecting the right heart
Table 3 Perioperative stressors and right heart physiologic responses
Hypervolemia | Elevated PCWP can increase PA, RV, and RA pressures and, if acute, can reduce RV output or cause acute TR | Excessive IVF administration; TACO; steep Trendelenburg
Hypovolemia | Low filling pressures can greatly reduce RV output | Rapid acute blood loss; steep reverse Trendelenburg, especially with pneumoperitoneum; prone position with increased chest or abdominal pressure retarding IVC flow for RA/RV filling
v3-fos-license
2018-04-03T05:35:10.426Z
2015-10-05T00:00:00.000
21619209
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://www.jbc.org/content/290/49/29402.full.pdf", "pdf_hash": "64745bc37ca29c280ec3d611517140bb47d8cb35", "pdf_src": "Highwire", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2075", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "4a253ee81eb9623835eaa82db12440c1d030477d", "year": 2015 }
pes2o/s2orc
Macrophage-specific de Novo Synthesis of Ceramide Is Dispensable for Inflammasome-driven Inflammation and Insulin Resistance in Obesity* Obesity, with its dietary lipid overload and calorie excess, is a low grade chronic inflammatory state with diminished ability to appropriately metabolize glucose or lipids. Macrophages are critical in maintaining adipose tissue homeostasis, in part by regulating lipid metabolism, energy homeostasis, and tissue remodeling. During high fat diet-induced obesity, macrophages are activated by lipid-derived "danger signals" such as ceramides and palmitate and promote adipose tissue inflammation in an Nlrp3 inflammasome-dependent manner. Given that the metabolic fate of fatty acids in macrophages is not entirely elucidated, we hypothesized that de novo synthesis of ceramide, through the rate-limiting enzyme serine palmitoyltransferase long chain (Sptlc)-2, is required for saturated fatty acid-driven Nlrp3 inflammasome activation in macrophages. Here we report that mitochondrial targeted overexpression of catalase, which is established to mitigate oxidative stress, controls ceramide-induced Nlrp3 inflammasome activation but does not affect ATP-mediated caspase-1 cleavage. Surprisingly, Sptlc2 in myeloid cells is not required for palmitate-driven Nlrp3 inflammasome activation. Furthermore, the ablation of Sptlc2 in macrophages did not impact macrophage polarization or obesity-induced adipose tissue leukocytosis. Consistent with these data, investigation of insulin resistance using hyperinsulinemic-euglycemic clamps revealed no significant differences in obese mice lacking the ceramide de novo synthesis machinery in macrophages. These data suggest that alternate metabolic pathways control fatty acid-derived ceramide synthesis in macrophages and Nlrp3 inflammasome activation in obesity.
Diet-induced obesity (DIO) is a growing epidemic and has greatly augmented the number of humans diagnosed with metabolic diseases such as type 2 diabetes, cardiovascular disease, and atherosclerosis (1,2). The importance of inflammation in driving metabolic dysregulation during DIO is well established, and research has highlighted the importance of the adipose tissue and resident immune cells in maintaining glucose homeostasis (3,4). In humans and mouse models, DIO is also characterized by alterations in lipid metabolism, excess lipid availability, and increased ceramides systemically and in the adipose tissue (17-19). In lipid metabolism, the fatty acid oxidative pathway utilizes palmitoyl-CoA to fuel mitochondrial oxidative phosphorylation; alternatively, palmitate enters the nonoxidative pathway to be converted into ceramide via irreversible condensation with L-serine by the rate-limiting enzyme serine palmitoyltransferase (SPT) (20). SPT is a heterodimer, present in the endoplasmic reticulum, composed mainly of two subunits, Sptlc1 and Sptlc2 (20). Ceramides and other sphingolipids are structural components of membranes and signaling molecules that serve to mediate cell homeostasis (20); however, a dysregulated increase in cellular ceramide content is linked to elevated inflammation and insulin resistance (21,22). Inhibition of ceramide synthesis prevents lipid-induced insulin resistance, diet-induced insulin resistance, and hepatic steatosis (23-25). In addition to up-regulating ceramide synthesis, palmitate treatment of macrophages inhibits AMPK activation, generates reactive oxygen species, and activates the NLRP3 inflammasome, causing the secretion of IL1β (15,26).
The role of the nonoxidative lipid metabolism pathway in regulating the NLRP3 inflammasome is not clear; given the association of mitochondrial oxidative stress with cellular lipid accumulation, we hypothesized that de novo synthesis of ceramide via Sptlc2 is required for inflammasome-induced inflammation in diet-induced obesity. We found that ceramide-induced IL1β requires Nlrp3 and the accumulation of reactive oxygen species (ROS); however, surprisingly, we found that palmitate-induced IL1β does not require Sptlc2, indicating that Sptlc2 is not necessary for Nlrp3 inflammasome activation. Furthermore, we show that saturated fat diet-induced adipose tissue inflammation is unaffected in isolated adipose tissue macrophages from mice with myeloid cell deletion of Sptlc2. Our findings reveal that, in vitro and in vivo, myeloid cell-specific Sptlc2 is dispensable for fatty acid-mediated inflammation and insulin resistance.

Materials and Methods

Animals/Mice-Sptlc2-flox mice have been previously described (27). To ablate de novo synthesis in myeloid cells, Sptlc2-flox (Sptlc2 fl/fl; Dr. Xian-Cheng Jiang, SUNY) mice were crossed to LysM-Cre (B6.129P2-Lyz2 tm1(cre)Ifo/J; Jackson Laboratory) mice to generate Sptlc2 fl/− LysM cre/− mice. Sptlc2 fl/− LysM cre/− mice were backcrossed to Sptlc2 fl/fl mice, generating the littermates Sptlc2 fl/fl LysM −/− (CRE−) controls and Sptlc2 fl/fl LysM cre/− (CRE+) experimental animals. MCAT transgenic mice and wild-type littermate controls were obtained from Dr. Gerald Shadel (Yale University) and have been previously described (28). For diet studies, mice were placed on a standard chow diet (LFD; 13.4% fat; LabDiet, Purina 5001) or a high fat diet (HFD; 60% fat; Research Diets) at 6-7 weeks of age for 13 weeks of feeding.
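The feeding design described in the methods can be written out as a small configuration table. The structure below is mine, not the authors'; the values are those stated in the text:

```python
# Diet-study design as a lookup table (structure is illustrative;
# percentages, sources, and durations are as stated in the methods).

DIET_GROUPS = {
    "LFD": {"fat_pct": 13.4, "source": "LabDiet, Purina 5001"},
    "HFD": {"fat_pct": 60.0, "source": "Research Diets"},
}
START_AGE_WEEKS = (6, 7)   # mice placed on diet at 6-7 weeks of age
FEEDING_WEEKS = 13

def describe(diet):
    """One-line summary of a diet group."""
    g = DIET_GROUPS[diet]
    return f"{diet}: {g['fat_pct']}% fat for {FEEDING_WEEKS} weeks"

print(describe("HFD"))  # HFD: 60.0% fat for 13 weeks
```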
All experiments and animal use were conducted in compliance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee at Yale University.

Bone Marrow-derived Macrophages (BMDMs) and Cell Culture-The BMDMs were prepared, and inflammasome activation assays were performed, as described by us previously (13,32). All steps were performed using sterile technique. Femurs were collected in RPMI (Life Technologies, Inc.). Using a needle and syringe, marrow was flushed into RPMI containing 10% FBS (Omega Scientific, Inc.) and 5% antibacterial/antimycotics (Life Technologies, Inc.). Red blood cells were lysed using ammonium-chloride-potassium lysis buffer (Quality Biological), and lysis was neutralized with RPMI. Bone marrow cells were differentiated into macrophages using MCSF (10 ng/ml; R&D) and L929 conditioned medium. Nonadherent cells were collected on day 7, counted, and replated at 1 × 10^6 cells/ml. BMDMs were treated on day 8. Cells were primed by 4 h of treatment with ultrapure LPS (1 μg/ml; Sigma) alone; inflammasome stimulation was provided by treatment with ATP (5 mM; 1 h), sodium palmitate conjugated to BSA (200 or 400 μM; 24 h of treatment; Sigma), or ceramide (40-120 mg; 6 h; Cayman Chemical). Myriocin (Cayman Chemical) was added to some treatments, in combination with LPS priming, at 1 or 5 μM. Supernatants were collected and stored at −80 °C. BMDMs were washed with PBS and collected in radioimmune precipitation assay buffer supplemented with protease inhibitors for protein analysis. BMDMs were polarized to M1 or M2 by treatment with LPS (1 μg/ml) and IFNγ (20 ng/ml; eBioscience) or IL4 (10 ng/ml; eBioscience). After 24 h, cells were washed with PBS and collected in TRIzol for RNA extraction.

RNA Extraction and Gene Expression Analysis-RNA extraction and purification were performed using RNeasy kits (Qiagen) according to the manufacturer's instructions.
Total RNA was measured using a NanoDrop, and 500 ng was used to reverse transcribe cDNA. Quantitative PCR was performed as described (29). Primer sequences for Gapdh, Arg1, Tnfα, Il1β, iNos, Sptlc1, Sptlc2, Sptlc3, CerS5, CerS6, Nsmaf, and Smpd1 are listed in Table 1.

Adipose Digestion and Stromavascular Staining-Visceral adipose tissue was harvested at sacrifice and weighed. Tissue was enzymatically digested in 0.1% collagenase I (Worthington Biochemicals) in Hanks' buffered salt solution (Life Technologies, Inc.) for 45 min at 37 °C. The stromavascular fraction was pelleted by centrifugation at 1500 rpm for 10 min, then washed and filtered. Red blood cells were lysed using ACK lysing buffer. Cells were resuspended in 1 ml for counting prior to staining.

Hyperinsulinemic-Euglycemic Clamps-Experiments were performed according to recent recommendations of the Mouse Metabolic Phenotyping Center Consortium (30) and as previously published (31).

Statistical Analysis-We used a two-tailed Student's t test to determine the significance between genotypes. The differences between means and the effects of treatments were analyzed by one-way analysis of variance with Tukey's test, which corrects for multiple hypotheses.

Results

Ceramides Activate the Nlrp3 Inflammasome via Mitochondrial Oxidative Stress-Our prior studies have demonstrated that ceramides activate caspase-1-induced IL1β secretion in a Nlrp3 inflammasome-dependent manner (13,32); however, the exact mechanism by which ceramides activate the Nlrp3 inflammasome is not yet understood. Consistent with our previous studies, ceramide activates the Nlrp3 inflammasome, causing IL1β secretion from BMDMs in a dose-dependent manner (Fig. 1, A and B). Given that ceramides and lipids increase the production of ROS, causing oxidative stress, and that mitochondrial damage and ROS generation have been linked to the activation of the Nlrp3 inflammasome (33,34), we tested the potential role of ROS in ceramide-induced inflammasome activation.
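The two tests named under "Statistical Analysis" (two-sample Student's t between genotypes; one-way ANOVA across treatments) can be sketched with stdlib tools. The test statistics below follow the standard textbook formulas; the data values are invented, and this is not a claim about how the authors' software computed p-values:

```python
# Sketch of the statistical tests named in the methods. Data values
# are hypothetical; only the test statistics are computed here.
from statistics import mean, variance

def t_statistic(a, b):
    """Two-sample Student's t with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

def anova_f(*groups):
    """One-way ANOVA F statistic (between-group MS / within-group MS)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical readouts: two genotypes, then three treatment groups.
t = t_statistic([0, 1], [2, 3])          # t ~ -2.83
f = anova_f([1.0, 1.2, 0.9], [5.1, 4.8, 5.3], [4.9, 5.2, 5.0])
print(t, f)
```

In practice one would take p-values from a statistics package (e.g. `scipy.stats.ttest_ind`, `f_oneway`) rather than hand-rolled formulas.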
We investigated this question using BMDMs from transgenic mice with targeted overexpression of the human catalase gene in mitochondria (MCAT mice), an enzyme that reduces mitochondrial oxidative damage and improves mitochondrial function by degrading hydrogen peroxide and preventing ROS accumulation (35,36). Consistent with recent studies (37) showing that ROS is not critical for ATP-mediated inflammasome activation, catalase overexpression did not attenuate extracellular ATP-induced caspase-1 activation (Fig. 1C). In contrast, ceramide-induced caspase-1 activation was decreased in BMDMs from MCAT transgenic mice, indicating that inhibition of ROS accumulation prevents ceramide-induced Nlrp3 activation (Fig. 1C) and that, similar to saturated fatty acids, ceramide induction of mitochondrial oxidative stress drives Nlrp3 activation. Because mitochondrial oxidative activity is associated with intracellular lipid accumulation (38,39), we asked whether prevention of lipid accumulation could reduce ROS-induced Nlrp3 activation. SPT is the rate-limiting enzyme in the de novo synthesis of ceramide (Fig. 1D) and is an important intersection for regulating cellular levels of saturated fatty acids and sphingolipids by generating ceramide from palmitoyl-CoA precursors (20). SPT-specific inhibition, using myriocin, prevents palmitate-induced accumulation of ceramide and downstream cytokine production in macrophages (26). Palmitate induces IL1β secretion in a Nlrp3-dependent manner (Fig. 1B and Ref. 15); however, myriocin inhibition of SPT failed to abrogate IL1β secretion (Fig. 1E). These data suggest that chemical inhibition of de novo synthesis may not be sufficient to reduce saturated fat-induced IL1β.

Myeloid Cell-specific Deletion of Serine Palmitoyltransferase 2-To further address the role of SPT in saturated fat-induced Nlrp3 activation, we sought to genetically delete SPT; therefore, we first measured the expression level of Sptlc subunits in BMDMs.
Sptlc2 has the highest expression level, being nearly 5-fold higher than Sptlc1 in pro-inflammatory-polarized M1 macrophages and in anti-inflammatory-polarized M2 macrophages (Fig. 2A). Sptlc3 was not detected in BMDMs under any condition (Fig. 2A). We sought to generate mice with myeloid cell-specific knockout of Sptlc2. To test cre recombinase deletion efficiency, we compared gene expression between littermate Sptlc2 fl/fl LysM −/− (CRE−) control and Sptlc2 fl/fl LysM cre/− (CRE+) experimental BMDMs. In both M1 and M2 polarized BMDMs, Sptlc2 gene expression was significantly reduced in CRE+ mice as compared with their CRE− littermate controls (Fig. 2B). Sptlc2 expression was not affected by polarization toward the M1 or M2 phenotype or by having a single floxed allele (fl/−; CRE+). The expression level of Sptlc1 was unaltered by genotype or macrophage polarization (Fig. 2B). Immunoblot analysis confirmed that, compared with littermate control cells, Sptlc2 protein was not expressed in CRE+ BMDMs (Fig. 2C). These data indicate that in BMDMs from CRE+ mice, Sptlc2 is efficiently deleted without altering the expression of Sptlc1.

Sptlc2-deficient BMDM Activation in Vitro-Sptlc2 heterozygous macrophages have reduced palmitate-induced inflammatory gene expression, and myeloid cell-specific deletion of Sptlc2 improves atherosclerotic lesions (40). To examine in vitro whether CRE+ macrophages are appropriately activated by traditional pro- or anti-inflammatory cytokines, we analyzed gene expression of traditional M1 or M2 markers following polarization of BMDMs from CRE− or CRE+ mice. M1 macrophages from CRE− or CRE+ mice had comparable expression levels of the M1 markers Tnfα and iNos but failed to express the M2 marker Arg1 (Fig. 3A). Similarly, M2 polarized macrophages from CRE− and CRE+ mice had comparable expression levels of Arg1 (Fig. 3A) but failed to express Tnfα and iNos.
These data indicate that Sptlc2 deficiency in macrophages does not alter macrophage polarization toward M1 or M2. To examine whether the de novo synthesis pathway is required for saturated fatty acid activation of the inflammasome, BMDMs from CRE− or CRE+ mice were primed with LPS prior to overnight culture in palmitate to activate the inflammasome. Secretion of active IL1β into the supernatants was comparable between CRE− and CRE+ BMDMs cultured with LPS plus palmitate (Fig. 3B). Surprisingly, secretion of active IL1β in the presence of ATP or ceramide was higher in BMDMs from CRE+ mice, as compared with CRE− BMDMs. These data indicate that Sptlc2 is not required for palmitate-induced Nlrp3 activation but may have a role in inhibiting Nlrp3 activation in the presence of other activators.

Macrophage SPTLC2 Deficiency Does Not Regulate Adipose Tissue Mass or Leukocytosis in Response to High Fat Diet-Saturated fat diet-induced obesity is characterized by increased adipose tissue mass and inflammation, involving increased numbers of macrophages surrounding hypertrophied adipocytes, which exhibit increased ER stress and release fatty acids upon cell death (3). Adipose tissue macrophage fatty acid uptake during obesity is a contributing factor to their inflammatory status (9); however, the fate of lipids in macrophages is still unclear. To identify whether myeloid cell-specific de novo synthesis of ceramide is required for adipose tissue inflammation driven by a high saturated fat diet, CRE− or CRE+ mice were fed a 60% saturated fat diet for 13 weeks prior to analysis of the visceral adipose tissue. CRE− and CRE+ mice on the HFD showed increased weight gain (data not shown) and total body weight over control mice on a LFD, but there was no difference between CRE− and CRE+ mice on HFD (Fig. 4A).
Similarly, HFD-fed mice showed increased visceral adipose tissue mass and increased cellularity in visceral adipose tissue, but there were no differences between CRE− and CRE+ mice (Fig. 4, B and C). Adipose tissue macrophages play a major role in regulating the homeostasis of the adipose tissue (41). Pro-inflammatory macrophages are recruited to the adipose tissue during HFD, where they promote the inflammatory response by secreting inflammatory cytokines such as IL1β and TNFα (41). The gating strategy for quantifying adipose tissue macrophages and lymphocytes is shown in Fig. 4D, with representative dot plots in Fig. 4E. Adipose tissue macrophages were characterized as M1 (CD11c+), M2 (CD206+), and HFD-induced CD11c+CD206+ macrophages. We found that the percentage of total F4/80+CD11b+ macrophages is increased with HFD (Fig. 4F). When examining the subpopulations of macrophages, there was a small increase in the percentage of F4/80+CD11b+CD11c+CD206+ macrophages in both CRE− and CRE+ mice on HFD, as compared with control mice on LFD (Fig. 4F). Because lymphocyte populations are altered with HFD (5,42), we quantified T and B cells in the visceral adipose tissue of HFD-fed mice. CD3+ T cells and B220+ B cells were comparable between CRE− and CRE+ mice on HFD (Fig. 4G). When normalized to adipose tissue weight, the numbers of total F4/80+CD11b+ macrophages and F4/80+CD11b+CD11c+CD206+ macrophages were significantly increased in both CRE− and CRE+ mice on HFD as compared with control mice on LFD (Fig. 5A). To examine the gene expression of adipose tissue macrophages, F4/80+ cells were positively selected from CRE− and CRE+ mice on HFD. Sptlc2 gene expression was significantly reduced in macrophages from CRE+ mice, whereas Sptlc1 gene expression was comparable between CRE− and CRE+ mice on HFD (Fig. 5B).
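Two quantifications used in Fig. 5 can be sketched in a few lines: normalizing flow-cytometry counts to tissue mass (cells per gram of adipose tissue, Fig. 5A) and expressing qPCR results as fold change. The 2^-ΔΔCt method shown here is the conventional one; whether the authors used exactly it is an assumption, and all numeric values are invented:

```python
# Quantification sketches for Fig. 5 (hypothetical numbers throughout).

def cells_per_gram(total_svf_cells, percent_of_svf, tissue_weight_g):
    """Absolute count of a gated population per gram of adipose tissue."""
    return total_svf_cells * (percent_of_svf / 100) / tissue_weight_g

def fold_change(ct_gene_sample, ct_ref_sample, ct_gene_ctrl, ct_ref_ctrl):
    """Relative expression by the conventional 2^-ddCt method."""
    d_sample = ct_gene_sample - ct_ref_sample   # dCt in the sample
    d_ctrl = ct_gene_ctrl - ct_ref_ctrl         # dCt in the control
    return 2 ** -(d_sample - d_ctrl)

# e.g. 1e6 SVF cells, 25% F4/80+CD11b+ macrophages, 2.0 g fat pad
print(cells_per_gram(1_000_000, 25, 2.0))       # 125000.0

# Sptlc2 vs Gapdh: CRE+ macrophages vs CRE- littermate control
print(fold_change(26.0, 18.0, 24.0, 18.0))      # 0.25 -> ~75% knockdown
```

A fold change of 0.25 corresponds to the ~75% knockdown of Sptlc2 expression mentioned in the discussion.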
There was no difference in the anti-inflammatory gene Arg1 or in the pro-inflammatory genes iNos, Il1β, or Tnfα (Fig. 5C). Taken together, these data indicate that myeloid cell-specific Sptlc2 is not required for HFD-induced adipose tissue inflammation. To ask whether other ceramide-generating pathways compensate for the loss of Sptlc2, we examined gene expression of ceramide synthases (CerS) and sphingomyelinases in isolated adipose tissue macrophages. The expression levels of CerS6, CerS5, Nsmaf, and Smpd1 were not altered by myeloid deficiency of Sptlc2 (Fig. 5D).

Macrophage Sptlc2 Deficiency Does Not Impact HFD-induced Insulin Resistance-Myriocin treatment to block systemic de novo synthesis of ceramide decreases systemic sphingolipid and ceramide levels, improves insulin sensitivity, and reduces adipose tissue mass in mice on HFD (24). On a LFD, CRE− and CRE+ mice have comparable baseline glucose and ability to clear glucose following intraperitoneal injection of glucose (Fig. 6A). We examined whether myeloid-specific de novo synthesis of ceramide is responsible for HFD-induced decreases in insulin sensitivity using hyperinsulinemic-euglycemic clamp studies. There was no difference in glucose infusion rate between CRE− and CRE+ mice on HFD (Fig. 6B). Furthermore, whole body glucose uptake and endogenous glucose production at basal or following clamp were comparable between HFD-fed mice (Fig. 6, C and D). These data suggest that myeloid cell-specific Sptlc2 is not required for systemic insulin sensitivity. In addition, the clamp was equally able to decrease nonesterified fatty acids in CRE− and CRE+ HFD-fed mice (Fig. 6E). In agreement with these data, an insulin tolerance test in CRE− and CRE+ mice on HFD revealed similar baseline glucose and a similar ability to restore glucose levels following insulin challenge (Fig. 6, F and G).
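The average glucose infusion rate (Avg. GIR, shown in the Fig. 6B inset) is conventionally taken over the steady-state portion of the clamp. The sketch below makes that averaging explicit; the time points, rates, and steady-state cutoff are all hypothetical, not values from the study:

```python
# Hypothetical sketch: averaging GIR over the steady-state window of a
# hyperinsulinemic-euglycemic clamp. All numbers are invented.
from statistics import mean

def avg_gir(time_min, gir, steady_state_start=80):
    """Mean GIR over samples at or after the steady-state start time."""
    steady = [g for t, g in zip(time_min, gir) if t >= steady_state_start]
    return mean(steady)

times = [0, 20, 40, 60, 80, 100, 120]
gir   = [0, 18, 30, 36, 38, 39, 37]    # mg/(kg*min), made up
print(avg_gir(times, gir))             # mean of the last three samples
```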
Discussion

DIO is characterized as a state of chronic, low grade inflammation with lipid and glucose alterations mediated in part by macrophage infiltration of adipose tissue (41). Palmitate induces Nlrp3 inflammasome activation and subsequent IL1β secretion during DIO; however, the mechanism of activation has not been fully elucidated (15). We hypothesized that Nlrp3 inflammasome activation requires saturated fatty acid entry into the nonoxidative pathway and de novo generation of ceramide via Sptlc2. In these experiments, we have shown that myeloid cell-specific Sptlc2 is not required for inflammasome-induced adipose tissue inflammation and insulin resistance. In vitro, Sptlc2 deficiency does not alter macrophage polarization or palmitate-induced IL1β secretion by the Nlrp3 inflammasome. In a model of saturated fat-induced inflammation, adipose tissue macrophage numbers, polarization, and gene expression are comparable between control mice and mice lacking myeloid cell expression of Sptlc2. Taken together, these data indicate that myeloid cell expression of Sptlc2 is dispensable for inflammasome-induced adipose tissue inflammation and insulin resistance. In vitro work using Sptlc2−/+ BMDMs has shown that Sptlc2 is required for LPS- or palmitate-induced inflammatory cytokine production (40). Furthermore, in vivo investigations have shown that macrophage-specific Sptlc2 promotes atherosclerotic lesions (40), highlighting the importance of ceramide synthesis in macrophages in metabolic diseases. A number of sphingolipids downstream of ceramide synthesis, including plasminogen activator inhibitor-1, sphingosine-1-phosphate, and ceramide-1-phosphate, have been identified as possible mediators driving metabolic-induced inflammation (17,43,44). These data suggested that macrophage de novo synthesis of ceramide was critical in regulating macrophage-driven inflammation in metabolic diseases.
Here, we show that myeloid cell-specific deletion of Sptlc2, as shown by 75% knockdown of gene expression and complete deletion of the protein, has no alteration of HFD-induced adipose tissue inflammation or insulin resistance.

[Fig. 4 legend, partially recoverable: ... (AT, B), and cells per gram of adipose tissue (gAT, C) after 13 weeks of LFD or HFD in CRE− or CRE+ mice. D, gating strategy to analyze the stromavascular fraction (SVF) of adipose tissue. E, representative dot plots of F4/80+CD11b+ cells from adipose tissue of CRE− mice on LFD, CRE− mice on HFD, or CRE+ mice on HFD. F4/80+CD11b+ cells were gated on to analyze CD206 and CD11c expression. F, quantification of the percentage of macrophages and the macrophage subpopulations. G, quantification of the percentage of lymphocytes, CD3+ T cells, and B220+ B cells, in adipose tissue from HFD-fed mice (n = 9-10 biological replicates; one-way ANOVA or t test as appropriate). *, p < 0.05. The error bars represent means ± S.E.]

Furthermore, a number of publications have identified that myriocin, an SPT-specific inhibitor, promotes remarkable reductions in DIO-induced symptoms, including reduced ceramide accumulation, reduced adipose tissue mass, smaller adipocytes, improved insulin signaling through Akt, and improved metabolic function (21,24,25).

[Fig. 6 legend: Macrophage-specific Sptlc2 is not required for diet-induced insulin resistance. A, glucose tolerance test on LFD-fed CRE− or CRE+ mice. B, glucose infusion rates (GIR) during hyperinsulinemic-euglycemic clamps in CRE− or CRE+ mice on HFD for 12 weeks. The inset shows the average glucose infusion rate (Avg. GIR). C, whole body glucose uptake. D, endogenous glucose production (EGP) at basal and after clamp. E, nonesterified fatty acid level in the blood at basal and after clamp (n = 4 CRE−; n = 7 CRE+). F and G, baseline glucose levels after a 4-h fast (F) and at time points after intraperitoneal injection of insulin (G) (insulin tolerance test, ITT) in CRE− or CRE+ mice on HFD (n = 8; t test). *, p < 0.05. The error bars represent means ± S.E.]

The differences between these publications and our data are likely due to the ability of myriocin to inhibit whole body ceramide synthesis. SPT is a constitutive enzyme with activities in regulating cellular sphingolipids in all cell types; its inhibition alters total cellular sphingolipids, and these alterations are likely to be beneficial to cells with lipid dysregulation but damaging to cells that lack exposure to excess palmitate and ceramide synthesis. Tissue-specific or cell-specific inhibition of ceramide synthesis during lipid dysregulation, for example in the liver or muscle, is an attractive prospect for reducing inflammation and improving insulin sensitivity. In agreement with this concept, overexpression of acid ceramidase in liver or adipocytes improves systemic insulin sensitivity, hepatic lipid accumulation, and adipose tissue inflammation (23). It remains to be studied whether elevation of ceramide degradation enzymes in macrophages will lower the "lipotoxic DAMP load" that causes inflammasome activation in obesity. Sptlc2 is the rate-limiting subunit of the SPT enzyme and is required for the de novo synthesis of ceramide; however, other mechanisms for generating ceramide include the hydrolysis of sphingomyelin (salvage pathway) or synthesis from sphingosine and more complex sphingolipids (recycling pathway) (20,45). Degradation of sphingomyelin requires sphingomyelinases, whereas ceramide synthases catalyze the recycling of sphingolipids, as part of a carefully regulated process for meeting the cellular demands of lipids (20). Our data show that there is no change in gene expression of these ceramide synthases or sphingomyelinases in adipose tissue macrophages upon deletion of Sptlc2. This suggests that other pathways for generating ceramides (salvage, recycling) are not up-regulated in compensation for loss of the de novo pathway.
Recent publications are in agreement with our data, suggesting that ceramide synthesis in myeloid cells is not critical for diet-induced inflammation. Macrophage-specific deletion of CerS6, which is up-regulated in the white adipose tissue of high fat-fed mice, failed to prevent diet-induced adipose tissue inflammation or insulin resistance (46). In our experiments, adipose tissue macrophages highly express both CerS5 and CerS6. Taken together, these data indicate that ceramide synthesis in macrophages is dispensable as a target for DIO-induced inflammation and metabolic disorders. Given that ceramides are also present within cell membranes, macrophages may accumulate ceramide via cellular membrane degradation following phagocytosis of dead or dying cells. Macrophages are present in the lean state and are critical in promoting diet-induced adipose tissue inflammation. CD11c+ macrophages infiltrate the adipose tissue, surround necrotic adipocytes, and release inflammatory cytokines (10). Not only are macrophages directly exposed to fatty acids released from dying adipocytes, but systemically, diet-induced increases in serum fatty acids cause chronic exposure of macrophages. Macrophages express fatty acid receptors, including CD36, which, when deleted, prevents diet-induced adipose tissue inflammation (47), indicating that macrophage uptake of fatty acids mediates HFD-induced inflammation. Upon lipid uptake, fatty acids can be stored as triglyceride in lipid droplets and enter an oxidative pathway for metabolism to ATP or a nonoxidative pathway for conversion into cell-required sphingolipids or signaling molecules (20). The fate of fatty acids following release from dying adipocytes is unclear, although a recent study has shown the importance of lysosomal biogenesis and metabolism of lipids in adipose tissue macrophages following DIO (9), suggesting that a portion may be metabolized.
Other investigations have underscored the importance of the type of fatty acid in eliciting inflammation, because omega-3 supplementation is sufficient to reduce HFD-induced adipose tissue inflammation (48). In this publication, we have eliminated the possibility that palmitate entry into the nonoxidative pathway causes Nlrp3 inflammasome-driven inflammation. These data suggest that the metabolic fate of palmitate could be at least partly independent of its ability to induce inflammation; alternatively, storage as triglycerides is a potential mechanism for inflammation. Macrophages are tissue resident cells that are critical for maintaining homeostasis through immunometabolic interactions. Saturated fatty acid is a metabolite capable of eliciting Nlrp3 inflammasome activation and promoting dysregulated glucose metabolism (13). Its mechanism of action is known to involve AMPK inhibition and ROS, but whether its metabolism is required for activation is still incompletely understood. Therapeutic attempts at improving metabolic dysfunction have been mostly unsuccessful; narrowing the number of viable, translatable approaches to improve metabolic syndrome is critical in type 2 diabetes and human obesity. We have used in vitro and in vivo mouse models to eliminate de novo ceramide synthesis as a potential mechanism, allowing future research to focus on other significant pathways of ceramide homeostasis or degradation in macrophages.

Author Contributions-C. D. C. participated in the design of the study, coordinated and carried out experiments, performed the analysis, and wrote the manuscript. K. Y. N. participated in study design and assisted with experiments and manuscript edits. M. J. J. performed the experiments and analysis shown in Fig. 6. G. I. S. participated in experimental design and edits of the manuscript. B. E. C. and G. S. S. assisted with experimental design and data analysis and provided MCAT mice for experiments. V. D. D.
conceived and coordinated the study and participated in the writing of the manuscript. All authors reviewed the results and approved the final version of the manuscript.
v3-fos-license
2022-05-20T06:16:58.954Z
2022-05-18T00:00:00.000
248889749
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-022-09645-7", "pdf_hash": "c7006ec2802c271ca5a7b9bddd6738b16c108098", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2076", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "949492674e286b13e84da24fb7837f7fdc9a4e71", "year": 2022 }
pes2o/s2orc
Laminin-integrin α6β4 interaction activates Notch signaling to facilitate bladder cancer development

Background: Laminins are high-molecular-weight (400-900 kDa) proteins of the extracellular matrix, which serve as a major component of the basal lamina and play a crucial role in promoting tumor cell migration. This study aimed to characterize the role of laminin in promoting cancer development and to elucidate the mechanism of tumor progression driven by laminin-Notch signaling in bladder cancer. Methods: A 2D collagen/laminin culture system was established, and CCK-8/transwell assays were conducted to evaluate the proliferation/migration ability of Biu-87 and MB49 cells cultured on 2D gels. Activation of integrin-Notch1 signaling was determined by western blotting. An orthotopic bladder cancer mouse model was established to assess the therapeutic effects of Notch inhibition. Results: Our study demonstrated that extracellular laminin can trigger tumor cell proliferation/migration through integrin α6β4/Notch1 signaling in bladder cancer. Inhibition of Telomere repeat-binding factor 3 (TRB3)/Jagged Canonical Notch Ligand 1 (JAG1) signaling suppressed the Notch activation induced by the laminin-integrin axis. In the MB49 orthotopic bladder cancer mouse model, the Notch inhibitor SAHM1 efficiently improved the tumor suppressive effects of chemotherapy and prolonged the survival time of tumor-bearing mice. Conclusion: In conclusion, we show that, in bladder cancer, extracellular laminin induces activation of the Notch pathway through integrin α6β4/TRB3/JAG1, and we disclose a novel role of laminin in bladder cancer cell proliferation and migration. Supplementary Information: The online version contains supplementary material available at 10.1186/s12885-022-09645-7.

Cancer metastasis involves processes of loss of cell-cell/matrix adhesion, proteolysis, and angiogenesis [6].
The basement membrane (BM), a specialized extracellular matrix that underlies epithelia and endothelia, appears to play a crucial role during metastatic progression [6,7]. The BM is a meshwork of laminin, type IV collagen, nidogen, and proteoglycans that holds cells and tissues together [8]. Tumor cells initially contact extracellular elements through cell surface receptors, which specifically bind to the BM or other components of the extracellular matrix. The matrix can be broken down by hydrolytic enzymes secreted by tumor cells, thereby allowing neoplastic cells to escape from their site of origin [9]. Laminins are the most important component of the BM. Laminins are high-molecular-weight glycoproteins composed of three disulfide-linked polypeptides, the alpha (α), beta (β), and gamma (γ) chains [10]. Laminins are produced by multiple cell types, including nearly all epithelial, smooth muscle, cardiac muscle, nerve, and endothelial cells [11]. Previous studies demonstrated that laminin correlates tightly with the progression of malignant tumors. Notably, laminin-5 loss from the BM was found to be associated with an increased death rate in bladder cancer patients [12]. The LAMC1 gene, encoding the laminin subunit gamma 1 (LAMC1) protein, has been demonstrated to be a potent biomarker for aggressive endometrial cancer [13]. In brain cancers, loss of cell-surface laminin anchoring promotes tumor growth and correlates with poor clinical outcomes [14]. Several signaling pathways have been demonstrated to contribute to the proliferation and migration of tumor cells, including TGF-β signaling, MAPK-RAS-RAF signaling, and the Notch and Wnt/β-catenin pathways [15-18]. Notably, extracellular laminin can activate a number of intracellular signaling pathways, such as PI3K/AKT, MAPK/ERK, and Rho GTPases, through receptor engagement [19-21].
The mechanisms of laminin involvement in tumor development via related signaling have also been reported in several cancer types, including lung cancer, colorectal cancer, and head and neck squamous carcinomas [22-24]. However, little has been reported on the molecular mechanism of laminin-induced tumorigenesis and progression in bladder cancer. In this study, we demonstrated that laminin promoted cell proliferation and migration in bladder cancer via integrin-dependent biomechanical signals. Meanwhile, we elucidated the underlying mechanism of laminin-induced bladder cancer progression, which was dependent on an integrin α6β4/TRB3/JAG1/Notch signaling pathway. More importantly, blockade of Notch signals restrained the metastatic potential of bladder cancer cells, providing novel insight for clinical bladder cancer therapy.

For 2D collagen (with or without laminin) gel culture, type I collagen (Solarbio, Beijing, China) was diluted to 2.5 mg/ml with DMEM culture medium (containing 2 μg/ml laminin or not). Subsequently, 20 μl of 1 M NaOH solution was added to 230 μl of collagen solution. 250 μl of the collagen mixture was seeded into a 24-well plate and mixed thoroughly. After incubation at 37 °C for 1 hour, cancer cells were seeded on top of the solid 2D collagen gels at a concentration of 1 × 10^4 cells/well and maintained in DMEM culture medium containing 10% FBS.

Clinical specimens
Human bladder tumor tissue sections were obtained from the First Affiliated Hospital, University of South China, and divided into NMIBC and MIBC according to the Guidelines for the Diagnosis and Treatment of Bladder Cancer (2019). All participants and/or their legal guardians agreed to participate in the study and gave informed consent beforehand. The clinical experiments were carried out according to the Declaration of Helsinki. This study was approved by the Ethics Committee of the First Affiliated Hospital of University of South China (#20170257).
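The per-well quantities in the 2D gel protocol above (230 μl collagen solution + 20 μl 1 M NaOH = 250 μl per well of a 24-well plate) can be scaled when preparing multiple wells. The helper below is only a bench-planning sketch based on those stated volumes; the overage factor is our own addition, not part of the published protocol.

```python
# Planning aid (not from the paper): scale the stated per-well volumes
# (230 ul collagen + 20 ul 1 M NaOH = 250 ul gel per well) to n wells.

def gel_volumes(n_wells, overage=1.1):
    """Return (collagen_ul, naoh_ul, total_ul) needed for n_wells.

    overage: safety factor for pipetting loss (our assumption, default 10%).
    """
    collagen = 230 * n_wells * overage
    naoh = 20 * n_wells * overage
    return collagen, naoh, collagen + naoh

# One full 24-well plate, no overage:
collagen_ul, naoh_ul, total_ul = gel_volumes(24, overage=1.0)
```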
Survival information of 405 bladder cancer patients in The Cancer Genome Atlas (TCGA) was downloaded from https://www.cbioportal.org.

Cell proliferation assay
Cell proliferation was assessed by Cell Counting Kit-8 (CCK-8, Solarbio, Beijing, China). Briefly, MB49 or Biu-87 cells were seeded in 96-well plates (2500 per well) and cultured in DMEM culture medium supplemented with 10% FBS. Cell proliferation was examined at 0, 24, 48, and 72 hours according to the manufacturer's specifications. Absorbance of samples was quantified at 450 nm with a microplate reader (Thermo Fisher, MA, USA). Cell proliferation was normalized to day 0 (2500 cells).

Transwell assay
5 × 10^4 MB49 or Biu-87 cells were seeded in an 8 μm transwell insert (Corning, CA, USA) containing 100 μl culture medium (10% FBS). The bottom chamber was filled with 500 μl culture medium containing 20% FBS. After 24 hours, the migrating cells were fixed with paraformaldehyde and stained with crystal violet. The numbers of migrating cells were counted under an optical microscope (Leica, Munich, Germany).

RNA interference
SiRNA to ITGB4

Quantitative polymerase chain reaction
MB49 or Biu-87 cells were cultured on dishes or 2D laminin/collagen gels for 5 days. Cells were then harvested and total RNA was extracted using an RNA Extraction Kit (Thermo Fisher, MA, USA) according to the manufacturer's instructions. Reverse transcription of total RNA was performed using cDNA synthesis kits (Takara Bio, Tokyo, Japan) following the manufacturer's instructions. PCR was performed with SYBR Green Supermixes (Bio-Rad, MA, USA). Primer sequences were downloaded from https://pga.mgh.harvard.edu/primerbank/.

Immunohistochemistry and immunofluorescence
Bladder tumor tissues were fixed in 10% formalin solution. The samples were processed, embedded in paraffin, and sectioned at 5 μm for immunohistochemical and immunofluorescence staining.
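The paper does not state its relative-quantification method for the qPCR data above; assuming the common Livak 2^-ΔΔCt approach, expression of a target such as ITGB4 against a housekeeping reference could be computed as sketched below. All Ct values and the GAPDH reference gene are hypothetical, chosen only for illustration.

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Livak 2^-ddCt relative expression (assumes ~100% primer efficiency)."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    return 2 ** -(dct_treated - dct_control)

# Hypothetical Ct values: ITGB4 vs a GAPDH reference,
# 2D laminin/collagen-cultured ("treated") vs dish-cultured ("control") cells.
fold = ddct_fold_change(24.0, 18.0, 26.0, 18.0)  # -> 4.0 (4-fold upregulation)
```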
Sections of tumor tissues were then dewaxed, rehydrated, quenched of endogenous peroxidase, blocked, and incubated with the primary antibody: anti-laminin (ab11575, Abcam, Cambridge, UK), anti-integrin α6 (ab181551, Abcam, Cambridge, UK), anti-integrin β4 (ab133682, Abcam, Cambridge, UK) or anti-Notch1 (ab52627, Abcam, Cambridge, UK) at 4 °C overnight. Samples were then incubated with secondary antibodies and stained with hematoxylin/4′,6-diamidino-2-phenylindole (DAPI). The intensity of protein expression was quantified with ImageJ 2.0 (NJ, USA) and Image-Pro Plus 6.0 software (MA, USA). 10 fields were included in each sample, and the mean brown intensity across the 10 fields was taken as the expression intensity of the sample. 15 samples from 15 patients were included in each group.

Dual luciferase activity assay
Activation of Notch1 signaling in tumor cells was determined by luciferase reporter assay. MB49 or Biu-87 cells were seeded on dishes or 2D laminin/collagen gels for 3 days, and then co-transfected with the control/pGL3 vector containing the firefly luciferase reporter gene and the 3′ UTR of the Notch1 gene (Yunzhou, Beijing, China) using Lipofectamine 2000 (Invitrogen, MA, USA). 48 hours later, a luciferase assay kit (Promega, MA, USA) was used to assay luciferase activity.

Orthotopic animal models
Female C57BL/6 mice (6-8 weeks old) were purchased from Huafukang (Beijing, China). To establish the orthotopic bladder cancer model, 1 × 10^6 MB49 cells in 100 μl PBS were intravesically instilled into the bladders of C57BL/6 mice through venous indwelling needles. On days 6 and 8, mice were treated with PBS, HCPT (0.5 mg/ml), SAHM1 (0.5 mg/ml) or the combination treatment by intravesical instillation. On day 10, the occurrence of hematuria was recorded (n = 10). On day 12, mice were sacrificed for tumor weight analysis (n = 6). The tumor weight was calculated according to the formula: tumor weight = total bladder weight − normal bladder weight (21 mg).
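The tumor-weight formula above is simple enough to encode directly. This is only an illustrative helper; the clamp to zero for bladders at or below the 21 mg reference weight is our own defensive addition, not stated in the paper.

```python
NORMAL_BLADDER_MG = 21  # reference normal bladder weight from the paper

def tumor_weight_mg(total_bladder_mg, normal_bladder_mg=NORMAL_BLADDER_MG):
    """tumor weight = total bladder weight - normal bladder weight.

    Clamped at zero (our assumption) so a tumor-free bladder never
    yields a negative weight.
    """
    return max(total_bladder_mg - normal_bladder_mg, 0)

tumor_weight_mg(85)  # -> 64 (mg)
```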
Survival of tumor-bearing mice was recorded on a daily basis (n = 6). All animal experiments in this study were approved by the Institutional Animal Care and Use Committee of University of South China (20150223-154). The animal studies were conducted in accordance with the Public Health Service Policy and complied with the WHO guidelines for the humane use and care of animals.

Statistical analysis
Each experiment was performed three independent times. Data are presented as the mean ± SEM, and statistical significance was analyzed using GraphPad 7.0 software (L.J, USA). Statistical significance between groups was calculated by Student's t test for two groups or by one-way ANOVA for more than two groups. Bonferroni analysis was further used for the post hoc test. Survival rates were analyzed by Kaplan-Meier survival analysis. The survival information of clinical bladder cancer patients was downloaded from https://www.cbioportal.org/. *p < 0.05; **p < 0.01; ns, no significant difference.

Laminin promoted cell proliferation and migration in bladder cancer
To elucidate the role of the extracellular matrix, more specifically of laminin, in tumor progression, laminin expression in bladder cancer patients was determined by immunohistochemistry. To do this, 30 patients were divided into NMIBC and MIBC groups according to the Guidelines for the Diagnosis and Treatment of Bladder Cancer (2019). As shown in Fig. 1A, tumor tissues from the MIBC group exhibited significantly increased laminin expression compared to the NMIBC group (Fig. 1A). This prompted us to speculate that laminin might play a role in bladder cancer development. To confirm our hypothesis, the bladder cancer cell lines Biu-87 and MB49 were cultured with laminin for 3 days, and cell proliferation/migration was determined by CCK-8/Transwell assays. However, no obvious difference was found in cell proliferation (Fig. 1B) or migration (Fig. 1C) between the PBS and laminin treated groups.
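The two-group comparisons described in the statistical analysis above (Student's t test) can be illustrated with a minimal pure-Python sketch of the pooled t statistic. The triplicate values below are synthetic; real analyses would use GraphPad or an equivalent statistics package, as the authors did.

```python
import math

def students_t(a, b):
    """Pooled two-sample Student's t statistic (equal-variance assumption)."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    # Sample variances (denominator n - 1)
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    # Pooled variance across both groups
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Synthetic triplicates (the paper uses three independent repeats):
t = students_t([2, 4, 6], [1, 2, 3])
```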
As reported previously, extracellular laminin can contact integrin receptors on tumor cells, promoting biomechanical signal transduction and activation of pro-survival signaling in tumor cells [19]. Based on this, we speculated that the biomechanical force induced by a solid extracellular matrix might play a role in laminin-integrin associated tumor progression. To assess our hypothesis, laminin was mixed into solid 2D collagen gels (type I collagen gels, the major extracellular substrate), and then Biu-87 and MB49 cells were seeded on top of the 2D gels for culture. After 3 days, Biu-87 and MB49 cells were collected and cell proliferation/migration was determined. Intriguingly, 2D laminin/collagen complex culture significantly promoted Biu-87 and MB49 cell proliferation (Fig. 1D) and migration (Fig. 1E), whereas 2D collagen culture had limited impact on bladder cancer cells. Similar results were observed in laminin/fibrin 2D gel cultured cancer cells (Fig. 1F and G). Consistent with our in vitro results, poorer overall survival of bladder cancer patients with high LAMC1 (encoding laminin subunit gamma 1) expression was observed in a TCGA database analysis (Fig. 1H). These results suggested that laminin can mediate biomechanical signal transduction to promote tumor cell proliferation and migration, resulting in bladder cancer development.

Laminin activated integrin α6β4 signals to promote tumor development
We next sought to explore the underlying mechanism of laminin-associated tumor progression. As mentioned previously, laminin is recognized by integrin receptors, including integrins α3β1, α6β1, α7β1 and α6β4, and the integrin signaling induced by laminin is tightly related to tumor growth and cancer metastasis. Here, we examined the expression of integrin α3, α6, α7, β1 and β4 in Biu-87 cells. Elevated expression of integrin α6 and β4 was observed in 2D laminin/collagen cultured Biu-87 cells by quantitative PCR (Fig. 2A).
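The TCGA survival comparisons above rely on Kaplan-Meier analysis, whose product-limit estimator can be sketched in a few lines. The follow-up times and event flags below are synthetic, not TCGA data; this is an illustration of the estimator, not the authors' analysis code.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns [(t, S(t))] evaluated at each time where a death occurs.
    """
    data = sorted(zip(times, events))
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        n_at_risk = sum(1 for tt, _ in data if tt >= t)
        if deaths:
            s *= 1 - deaths / n_at_risk  # survival drops at each event time
            curve.append((t, s))
        while i < len(data) and data[i][0] == t:  # skip ties at this time
            i += 1
    return curve

# Synthetic cohort: deaths at months 1, 2, 3
curve = kaplan_meier([1, 2, 3], [1, 1, 1])
```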
Similar results were observed at the protein level in Biu-87 and MB49 cells (Fig. 2B), suggesting that integrin α6β4 might be involved in laminin-associated tumor progression. To further determine the role of integrin α6β4, integrin α6 and β4 were silenced by siRNA in Biu-87 cells (Fig. 2C), and cell proliferation/migration was determined. Accordingly, silencing of integrin α6 or β4 suppressed the proliferative (Fig. 2D) and migratory (Fig. 2E) phenotypes of 2D laminin/collagen cultured Biu-87 cells. However, no obvious suppressive effects of integrin α6 or β4 siRNA treatment were observed in dish cultured cancer cells (Fig. 2F and G). These results suggested that laminin promoted bladder cancer development through an integrin α6β4 dependent pathway. Next, we examined the expression of integrin α6 and β4 in tumor tissues from NMIBC and MIBC patients. Consistently, elevated expression of integrin α6 and β4 was observed in MIBC patients compared to the NMIBC group (Fig. 2H and I). Together, these results suggested that laminin activated integrin α6β4 signals to promote bladder cancer development.

Integrin α6β4 promoted Notch signals activation
Compelling studies have demonstrated that integrins are involved in the activation of pro-survival signaling pathways, including PI3K/AKT, JAK/STAT3, Wnt, Notch, c-Myc and SOX2 signals. To clarify the mechanism of integrin α6β4-related tumor progression, PCR analysis was performed to examine the expression of AKT1, STAT3, Wnt3A, Notch1, c-Myc and SOX2 in 2D laminin/collagen or dish cultured Biu-87 cells. Intriguingly, the expression of Notch1 was significantly upregulated in the 2D laminin/collagen cultured groups (Fig. 3A). Additionally, 2D laminin/collagen cultured Biu-87 cells displayed enhanced expression of the cleaved intracellular domain of Notch1 at the protein level compared to the dish cultured group (Fig. 3B).
Silencing of integrin α6 or β4 suppressed the upregulation of Notch1 in laminin/collagen cultured cancer cells (Fig. 3C), suggesting that laminin promoted Notch1 signaling activation through integrin α6β4 in bladder cancer. The above results were further confirmed by luciferase assay, which revealed that 2D collagen/laminin culture promoted Notch1 luciferase activity in MB49 and Biu-87 cells (Fig. 3D). To further confirm the role of Notch signaling in promoting bladder cancer development, the Notch inhibitor SAHM1 was added to the culture medium of tumor cells, and cell proliferation/migration was determined. Consistently, SAHM1 treatment markedly suppressed cell proliferation (Fig. 3E) and migration (Fig. 3F) in 2D laminin/collagen cultured cells. However, limited tumor suppressive effects of SAHM1 were observed in dish cultured Biu-87 and MB49 cells, indicating that laminin promoted bladder cancer development through Notch-associated signaling. Importantly, poorer overall survival of bladder cancer patients with high Notch1 expression was observed in the TCGA database analysis (Fig. 3G). Collectively, these results suggested that laminin upregulated integrin α6β4/Notch signaling to mediate bladder cancer development.

The activation of Notch was dependent on TRB3/JAG1 signaling
Lastly, we aimed to understand how integrin α6β4 controlled the activation of Notch signals in bladder cancer. Cellular stress has previously been reported to mediate Notch signaling activation through the TRB3/JAG1 axis [25]. Our results demonstrated that laminin could mediate biomechanical stress signal transduction through integrin α6β4, thereby promoting Notch signaling activation. Therefore, we proposed that laminin/integrin α6β4 might facilitate Notch activation through TRB3/JAG1 signals. To confirm our hypothesis, western blotting analysis was performed to examine the expression of TRB3 and JAG1 in dish and 2D laminin/collagen cultured Biu-87/MB49 cells.
Accordingly, cancer cells cultured on 2D laminin/collagen exhibited higher expression of TRB3 and JAG1 compared to the dish cultured groups (Fig. 4A), and silencing of integrin α6 or β4 suppressed the upregulation of TRB3 and JAG1 (Fig. 4B), indicating that laminin upregulated TRB3/JAG1 through integrin α6β4. Subsequently, we silenced TRB3 and JAG1 in 2D laminin/collagen cultured Biu-87 cells by siRNA (Fig. 4C and D), and then examined the expression of Notch1. Consistently, silencing of TRB3 or JAG1 efficiently suppressed Notch1 in 2D laminin/collagen cultured Biu-87 cells (Fig. 4E), indicating that the activation of Notch was dependent on TRB3/JAG1 signals. Meanwhile, silencing of TRB3 and JAG1 suppressed the cell proliferation (Fig. 4F) and migration (Fig. 4G) induced by laminin, whereas limited suppressive effects were observed in dish cultured Biu-87 cells (Fig. 4H and I). Together, these results suggested that the activation of Notch in bladder cancer was dependent on TRB3/JAG1 signaling.

Blockade of Notch signals improved tumor suppressive effects in an orthotopic bladder cancer model
Given the crucial role of laminin/integrin α6β4/TRB3/JAG1/Notch in promoting bladder cancer development, it could be feasible to suppress Notch signals for improved outcomes in bladder cancer treatment. To validate our hypothesis, an orthotopic bladder cancer model was established by instilling MB49 cells into the bladders of C57BL/6 mice. Mice were treated with PBS, the Notch inhibitor SAHM1, or the chemotherapeutic HCPT by intravesical instillation. Intriguingly, both HCPT and SAHM1 reduced hematuria and suppressed tumor growth in MB49-bearing mice. The combination of the Notch inhibitor and chemotherapy exhibited enhanced anticancer effects, significantly inhibiting tumor growth and prolonging the overall survival of tumor-bearing mice (Fig. 5A, B and C). These results provided a new target for eliminating bladder cancer cells.
Our previous results pointed out that elevated laminin expression might promote integrin/Notch signaling activation, resulting in sustained tumor growth in bladder cancer. Therefore, we treated MB49-bearing mice with laminin by intravesical instillation, and further evaluated the anticancer effects of Notch inhibition. Indeed, laminin treatment dramatically promoted bladder cancer development in vivo (Fig. 5D and E). Laminin treated tumor tissues also revealed enhanced expression of Notch1 in vivo (Fig. 5F). Next, we further treated these tumor-bearing mice (laminin treatment) with HCPT and SAHM1. Intriguingly, the Notch inhibitor SAHM1 exhibited stronger anticancer effects than HCPT, which might be associated with the chemoresistance induced by Notch associated signaling pathways. However, the combination of HCPT and SAHM1 displayed obvious tumor suppressive effects and prolonged the overall survival of tumor bearing mice (Fig. 5G and H). Collectively, these results suggested that blockade of Notch signals improved the tumor suppressive effects of chemotherapy in vivo.

Discussion
The extracellular matrix is composed of various elements, such as collagen, proteoglycans, laminin, and fibronectin. The extracellular matrix plays a vital role in regulating crucial physiological processes, such as cell-cell communication, cell adhesion, and cell proliferation, and has come to be appreciated as an important driver of cancer progression [26]. Previous studies on the extracellular matrix in tumor progression mostly focused on collagen, fibronectin, and proteoglycans [27-29], while few studies shed light on the role of laminin. However, current studies have revealed that laminin expression is tightly associated with tumor progression in several types of tumors. For example, laminin has been found to be involved in tumor invasion and metastasis in colorectal cancer, gastric cancer, and intrahepatic cholangiocarcinoma [30-32].
Our experiments revealed that laminin expression is significantly upregulated in human MIBC, which prompted us to hypothesize that laminin may play a role in bladder cancer. On this basis, we demonstrated that laminin can promote tumor cell proliferation and migration, leading to the development of bladder cancer. We are the first to confirm that laminin, a major and important component of the ECM, contributes to the progression of bladder cancer. Integrins are heterodimers consisting of one α subunit and one β subunit, which function as adhesion receptors for ligands (e.g. laminin, collagen, and fibronectin) in the extracellular matrix and transduce mechanical signals from the extracellular matrix to stromal cells. Integrins α3β1, α6β1, α6β4 and α7β1 make up a laminin-binding integrin subfamily. The role of integrin α6β4 in promoting lung cancer, breast cancer, and colon carcinoma has been reported previously [33-35]; however, there is little evidence for such a role of α6β4 in bladder cancer. In this study, we showed that laminin activated integrin α6β4 signals to promote bladder cancer development. We further explored the mechanism of laminin and integrin α6β4 in promoting bladder tumor progression. Much evidence has shown that integrin α6β4 is involved in the activation of pro-survival signaling pathways. For example, laminin-binding integrins induce Notch signaling in endothelial cells [36]. Additionally, integrin α6β4 promotes breast cancer cell motility and invasion through activating phosphatidylinositol-3-hydroxykinase signaling [37]. In lung cancer, activated integrin β4 recruits focal adhesion kinase to mediate downstream signaling pathways and cancer metastasis [38]. We demonstrated that laminin promoted Notch1 signaling activation through integrin α6β4, thereby facilitating bladder cancer cell proliferation and migration. In addition, TCGA database analysis also revealed that Notch1 was associated with a poor prognosis in bladder cancer.
We also found that the activation of Notch1 in bladder cancer is dependent on TRB3/JAG1 signaling. Taken together, we identified the relationship between laminin, integrin, Notch, and TRB3/JAG1 in bladder cancer. Notch signaling has also been reported to be involved in the control of cell proliferation, survival, migration, and differentiation [39]. Intriguingly, the Notch pathway has been implicated in both oncogenic and tumor-suppressive roles in cancer, depending on the tissue type and cellular context. In bladder cancer, Notch1 has been reported to play both tumor-suppressive [40] and oncogenic [41] roles in regulating cancer cell proliferation and migration. Our study further demonstrated that laminin could mediate bladder cancer development in a Notch1 dependent manner. Intriguingly, our further investigation indicated that the pro-tumor effects of Notch1 might be tightly correlated with tumor-specific cell senescence and nutrient metabolism. Meanwhile, the laminin associated downstream signaling pathways might cooperate with the Notch molecule, resulting in disparate tumor behaviors. The specific mechanism of Notch-induced tumor progression remains to be further investigated. Encouragingly, our experiments indicate that blocking Notch signals inhibited tumor growth and improved the outcome of chemotherapy, which provides novel insight for bladder cancer therapy. Based on the findings and limitations of previous studies, our study sheds further light on how laminin activates TRB3/JAG1/Notch signaling through integrin α6β4 to promote bladder cancer development. Firstly, our study identified that laminin, a major component of the extracellular matrix, was significantly upregulated in patients with MIBC and demonstrated that laminin plays a critical role in bladder cancer development. Secondly, we described a novel signaling pathway in which laminin promotes tumor cell proliferation and migration via the integrin α6β4/TRB3/JAG1/Notch axis.
Thirdly, we elucidated the interrelationship between laminin, integrin, TRB3/JAG1, and Notch, which offers novel insight for investigations of tumor signaling pathways. Fourthly, we showed that the Notch inhibitor SAHM1 combined with the chemotherapeutic HCPT could inhibit tumor growth and improve prognosis, describing an innovative strategy for the clinical treatment of bladder cancer. Finally, the novel signaling molecules, including laminin, integrin α6β4, and Notch1, can serve as potential prognostic and diagnostic indicators of bladder cancer.

Conclusion
In summary, our study demonstrated a novel mechanism of laminin-induced bladder cancer progression. The development of bladder cancer stimulated by the laminin/integrin α6β4/TRB3/JAG1/Notch pathway could be inhibited by the Notch inhibitor SAHM1, which constitutes a new strategy in the treatment of bladder cancer.
Evaluation of Effective Cognition for the QGIS Processing Modeler

This article presents an evaluation of the QGIS Processing Modeler from the point of view of effective cognition. The QGIS Processing Modeler uses a visual programming language for workflow design. The functionality of the visual component and the visual vocabulary (the set of symbols and line connectors) are both important. The form of the symbols affects how workflow diagrams may be understood. The article discusses the results of assessing the Processing Modeler's visual vocabulary in QGIS according to the Physics of Notations theory. The article evaluates the visual vocabularies of the older QGIS 2.x and newer 3.x versions. The paper identifies serious design flaws in the Processing Modeler. Applying the Physics of Notations theory resulted in certain practical recommendations, such as changing the fill colour of symbols, increasing the size and variety of inner icons, removing functional icons, and using a straight connector line instead of a curved line. Another recommendation was to provide a supplemental preview window for the entire model in order to improve user navigation in large models. Objective eye-tracking measurements validated some results of the evaluation using the Physics of Notations. The respondents read workflows to solve different tasks and their gazes were tracked. Evaluation of the eye-tracking metrics revealed the respondents' reading patterns of the diagrams. Evaluation using both the Physics of Notations theory and eye-tracking measurements inspired recommendations for improving the visual notation. A set of recommendations for users is also given, which can be applied easily in practice with the contemporary visual notation.

Introduction
Today, open source GIS software competes with commercial GIS software. The user's choice depends not only on the price but also on the degree of functionality in parts of the GIS software. Users need software that satisfies their requirements.
One of these demands is the automatic processing of spatial data as a sequence of steps. Visual programming languages (VPLs) are used to design the steps of processes in the form of workflow diagrams. GIS operations are not used in isolation but as part of a chain of operations that completely processes the data. An overview and basic description of several visual programming languages in GIS are given in this article [1]: ModelBuilder for ArcGIS, Macro Modeler for IDRISI, Model Maker and Spatial Model Editor for ERDAS IMAGINE and Workflow Designer for AutoCAD Map 3D are mentioned. A systematic description and evaluation of VPLs in GIS is presented in a habilitation thesis [2]. VPLs in GIS are data-centric notations that serve to express a process in detail. Only AutoCAD Map uses hybrid symbols, in which one symbol stands for both an operation and its input/output data. Other VPLs have a unique set of simple symbols for data and unique symbols for operations and control of flow. GIS workflows do not express a generalised conceptual model of processing; they are more detailed. The open source software QGIS is competitive with commercial GIS software in designing workflow diagrams using a VPL. The accessibility of a visual programming language increases the usability of QGIS. The possibility of designing workflows could be a reason for selecting open source QGIS. The Processing Modeler is a graphical editor in QGIS software. This editor allows workflows to be designed in graphical form using a visual programming language. Workflow diagrams in QGIS are termed models. VPLs generally differ in their visual notation, and the symbols in GIS software vary. A visual notation consists of graphical symbols (visual vocabulary), a set of compositional rules (visual grammar) and definitions of the meaning of each symbol (visual semantics). The visual notation is important from the point of view of user perception and cognition. In his theory, the Physics of Notations, D.
Moody stated that it is necessary to use cognitively effective visual notations [3]. Cognitively effective means optimised for processing by the human mind. This article presents an assessment of the visual notation of the QGIS Processing Modeler using the Physics of Notations theory in combination with eye-tracking measurement. The presented research started with version QGIS 2 in 2014 and has continued with version 3 up to now. The long-term release (LTR) version QGIS 3.4 Madeira and partly version 3.6 Noosa were used for the assessment. Some features of the visual notation were empirically tested using the eye-tracking method on version QGIS 2. Finally, some improvements to the visual notation are suggested in this article. The research question was "What is the level of effective cognition in the QGIS Processing Modeler?" This research aimed to evaluate and improve the cognition of the visual notation in QGIS. The results bring new and innovative ideas that improve the usability of and satisfaction with QGIS software. These tasks fall under Human-Computer Interaction (HCI) research. Standard ISO 9241-210:2019 Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems [4] provides requirements and recommendations for human-centred design principles and activities of computer-based interactive systems. At the center of HCI and User Experience (UX) research is the understanding and design of interactive digital systems and their human users [5]. Their common aim is to devise novel computing user interfaces that satisfy the usefulness, ergonomics, and efficiency of digital systems [6,7]. Improvement is based both on theories and on empirical testing in laboratories [8], e.g., the eye-tracking measurement presented in this article.

History of QGIS Processing Modeler
The Processing Modeler was implemented in version QGIS 2.0 Dufour, released in 2013.
The next development of the Processing Modeler aimed to increase the functionality of the editor. The author of the graphical editor was Victor Olaya from Spain. In version QGIS 2.6 Brighton, released in 2014, the Processing Modeler was rewritten and provided extra functionality, such as allowing nested models with no depth limit [9]. Furthermore, adding Python scripts to a model was supported in version 2.x. Python scripts could be downloaded from an online external collection of scripts created by different users and adopted into a newly created user model. The software architecture and features of the QGIS processing framework are described in this article [10]. In 2018, the new series of version QGIS 3.x began with version 3.0 QGIS Girona. The Processing Modeler underwent extensive changes, including additional and changed input parameters and algorithms. Specifically, the colours of the basic symbols were changed, and the interface and degree of functionality were redone, for example, the zoom in and zoom out functions [11,12]. The two Input and Algorithm panels can be positioned differently in the interface and now float above the processing window [11,12]. The storage format of the model was also changed [13]: the file extension .model3 is used instead of the extension .model.

Description of Interface and Graphical Notation
The Processing Modeler graphical editor is embedded in QGIS and runs in a separate window. The interface is divided into two areas [14]. Two switchable panels are on the left side. The "Inputs" panel is the source of different types of input data. The "Algorithms" panel is the source of operations that can be added to the model (workflow diagram). The large window at right is a canvas for designing the model (Figure 1). Selected inputs and algorithms can be added to the model by dragging and dropping them onto the modeler canvas. Being movable, the position of the symbols is the user's choice.
When input data is added to the model, the type and name of the data are set. The input data are considered input parameters. Inputs are not bound to particular existing data in a directory or to values of variables. Their names can, therefore, be more descriptive than the data's real name. This is an advantage because the names of parametric inputs can be more general, which improves comprehension of the model for other users. The Algorithms panel provides GIS operations (processing algorithms) from several types of open source software in addition to QGIS: GDAL, GRASS and SAGA. Previously created QGIS models are also displayed. Python scripts and operations from the ORFEO library were accessible in the older version 2. Grey connector lines are drawn automatically immediately after adding operations and assigning the existing inputs to the operation in the model. The lines connect the symbols of input data with the symbol of the operation. The output data symbol is also automatically linked to the model after naming the outputs of the operation. Connector lines are then automatically drawn between the operation and the output data. The user cannot draw the connector lines manually with the mouse or reconnect the symbols. The shape of the connector lines is curved, ending with a black point. When the positions of the symbols change, the lines are automatically redrawn with a different curvature. The Modeler's visual vocabulary contains three rectangular symbols (Figure 2). The size of the symbols is the same and cannot be changed. Originally, the violet rectangle represented input data, the blue rectangle represented output data and the white rectangle represented an operation. The fill colours were changed in version 3. The symbol for input data is now yellow, the symbol for output data is green and the symbol for operations remains white. At first glance, this was perhaps to emphasise that a model comes from the new Processing Modeler version.
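The behaviour described above, connector lines that are derived automatically from parameter bindings rather than drawn by hand, can be mimicked with a small graph sketch. The model, the algorithm names and the bindings below are hypothetical illustrations, not QGIS API calls.

```python
from dataclasses import dataclass, field

@dataclass
class Algorithm:
    name: str
    inputs: list                      # bound model inputs or upstream outputs
    outputs: list = field(default_factory=list)

def derive_edges(model_inputs, algorithms):
    """Edges follow from parameter bindings, mirroring how the Processing
    Modeler draws and redraws connectors automatically."""
    known = set(model_inputs) | {a.name for a in algorithms}
    for a in algorithms:
        known |= set(a.outputs)
    edges = []
    for alg in algorithms:
        for src in alg.inputs:
            if src not in known:
                raise ValueError(f"unbound input: {src}")
            edges.append((src, alg.name))
        for out in alg.outputs:
            edges.append((alg.name, out))
    return edges

# Hypothetical model: one input layer feeding a buffer, then zonal statistics
algs = [
    Algorithm("Buffer", inputs=["input_layer"], outputs=["buffered"]),
    Algorithm("Zonal Statistics", inputs=["buffered", "zones"],
              outputs=["stats"]),
]
edges = derive_edges(["input_layer", "zones"], algs)
```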
The user can thus very clearly distinguish existing models from the older version 2 and the latest version 3. Comparing the brightness of the symbols (compare the greyscale between versions in Figure 3), the input symbol is lighter in tone and the output symbol darker in version 3 than in version 2. The difference in brightness is important for colour-blind people or people with perception limitations, for whom only the difference in brightness helps in distinguishing objects. The colour settings of computer applications and the design of web pages respect colour-blind users by applying different brightness to menus, text boxes and other graphical interface objects. From that point of view, the new colours of the symbols in version 3 are better because of their different brightness. The difference in brightness may not have been made intentionally by the QGIS designers, but it is valuable. The rectangular symbols contain inner icons. The input data symbols are indicated with a plus sign icon, and the output data symbol is indicated by an arrow; both icons are on the left side. The operation symbols have different icons according to the source library or type of operation. For example, the QGIS 2 icon for Zonal Statistics is shown in Figure 2. In version 2, the input data symbols and operation symbols have two icons on the right side: a cross and a pencil. These icons depict the delete and edit functions and can be considered operational icons. In version 3, the cross icon remained, and the icon for editing became three dots. The green output symbol was also assigned these two operational icons, which means that the label of the output data symbol is editable. The option to assign a default name and path for the output data is provided in the editing dialogue. When the output symbol is deleted, the output data are automatically set as a temporary output of the operation. No symbol in the model indicates a temporary output. Generally, the use of icons is very helpful.
According to Szczepanek, icons in software interfaces can be divided into three groups [15]. The first group is universal icons, which can be understood without explanation (e.g., a floppy disk for the save operation). The second group is domain-specific icons (e.g., for any GIS software), and the third group is application-specific icons (e.g., for QGIS software). In the case of the QGIS Processing Modeler, the icons can be sorted as follows: the pencil, three-dots and cross icons are universal icons. The plus icons for data are midway between universal and domain-specific icons; the plus icon is frequently used in GIS interfaces to mean adding a layer to the current project. The icons of source libraries (in the white operation symbols) belong to the application-specific group. All icons can be understood very well.

Theory of Physics of Notations

Physics of Notations is an objective theory for evaluating visual notations [3,16]. This theory is widely used in all areas of software engineering, not only GIS software, because creating diagrams is frequently required in information technology (IT). The first work in the area of VPLs in GIS applications was an assessment of the Esri ModelBuilder [17]. The Physics of Notations theory can be used not only for evaluating existing notations but also for improving graphical notations or designing new ones. This means that the visual notation in QGIS can be assessed and improved if any drawbacks are identified. Exploring this theory is also very beneficial when designing a new graphical vocabulary for any purpose. This paper presents the opportunity to make new suggestions for the QGIS Processing Modeler according to the theory. The Physics of Notations theory states nine principles that a cognitively effective notation is recommended to fulfil. Cognitive effectiveness is defined as the speed, ease and accuracy with which a representation can be processed by the human mind [18].
The aim is to read a diagram quickly, without mistakes, and to comprehend it accurately. The nine principles are organised as connected ideas, where the first, central principle is the Principle of Semiotic Clarity. The modular structure of Physics of Notations is designed to make it easy to add or remove principles, emphasising that they are not fixed or immutable but can be modified or extended by future research [19].

• Principle of Semiotic Clarity
The principle of Semiotic Clarity expresses a one-to-one correspondence between syntactic and semantic features. According to this principle, symbol redundancy, symbol overload, symbol deficit and symbol excess are not permissible. The principle reflects ontological analysis.
• Principle of Perceptual Discriminability
The second principle, Perceptual Discriminability, states that different symbols should be clearly distinguishable from each other by their visual variables.
• Principle of Visual Expressiveness
The principle of Visual Expressiveness states that the full range of visual variables and their full capacity should be used to represent the notational elements. Colour is one of the most effective visual variables. The human visual system is very sensitive to differences in colour and can distinguish them quickly and accurately. Differences in colour are found three times faster than differences in shape and are also easy to remember [20]. The level of expressiveness is measured from level 1 (lowest) to 8 (highest).
• Principle of Graphic Economy
The principle states that the number of symbols in a graphical vocabulary must be manageable by human working memory. The choice of symbols affects the ease of memorising and recalling visual diagrams. The "magic number seven" expresses a suitable number of symbols: a range of 7 ± 2 symbols is suitable, and more than nine different symbols in a basic graphical vocabulary are demanding to comprehend.
• Principle of Dual Coding
The principle suggests using text to support the meanings of symbols and their clarity. Two channels (graphics and text) provide the user with information and improve comprehensibility. The principle is based on the duality of mental representation [21].
• Principle of Semantic Transparency
This principle evaluates how well symbols are associated with the real meaning of an element. Here, associations are sought between the shape or other visual variables of a symbol and its real properties, so that the form implies the content.
• Principle of Complexity Management
This principle recommends producing hierarchical levels of a diagram, dividing it into separate modules and creating hierarchical structures. It is suitable for large models whose comprehension exceeds human working memory capacity. Modularity means scaling information into separate chunks. Modularisation is the division of large systems into smaller parts or separate subsystems. Practice shows that one subsystem should be only large enough to fit on one sheet of paper or one screen. This subsystem is then represented at a higher level by one symbol. Hierarchical structuring allows systems to be represented at different levels of detail (levelled diagrams) with the ability to control complexity at each level. This promotes understanding of a diagram from the highest level to the lowest, which improves its overall understanding. Both mechanisms can be combined into the principle of recursive decomposition.
• Principle of Cognitive Interaction
The principle recommends increasing the options for navigating in the model. The reader must be able to follow the chain of operations easily. The connector lines affect navigation.
• Principle of Cognitive Fit
The principle proposes realising different sets of graphical vocabularies for the same semantics, so that information is represented in different ways for different tasks and different groups of users.
It recommends the use of multiple visual dialects, each of which is suitable for different types of tasks and different user spectrums (according to experience).

Eye-Tracking Measurement and Experiment

Eye-tracking equipment was used to evaluate the comprehensibility and discriminability of the visual symbols in models. This experimental method was conceived as a complement to and extension of the Physics of Notations results. Testing was conducted at the eye-tracking laboratory of the Department of Geoinformatics, Palacký University in Olomouc (Czech Republic). The eye-tracker SMI RED 250 with the software SMI Experiment Suite 360° was used for the experiment. The test was designed in the SMI Experiment Center program, and the results were visualised using SMI BeGaze. Evaluations were also conducted using the software Ogama 4.5 and V-Analytics. For statistical evaluation, the STATISTICA software was used. The size of the monitor used to record eye movements and display the models was 1920 × 1080 pixels. The sampling frequency of the eye-tracker SMI RED was 250 Hz [22]. The eye-tracking experiment comprised 22 workflow diagrams from Processing Modeler version 2. Several models with different sizes, functions and arrangements of flow orientation (vertical, horizontal and diagonal directions) were tested. The workflow diagrams were displayed individually on the screen in random order to prevent a learning effect [23]; shuffling ensured that each respondent saw the models in a different order. The respondents were first-year students of a master's programme in Geoinformatics, tested at the end of the semester. They had attended lectures in which the design of models in Processing Modeler version 2 was practised, and they had created various examples of models of different functionality and size. The group of respondents was therefore considered advanced users. A total of 22 respondents aged 22-25 participated in the eye-tracking testing.
The term stimulus is applied in eye-tracking testing [24]; the stimuli, in this case, were the models (workflow diagrams). Each model was associated with a comprehension task to record the cognitive process. Response time and the correctness of user answers were measured for each comprehension task, as in other research [25][26][27]. Sets of models or maps with comprehension tasks are often used to evaluate the usability of visualisation methods in cartography and GIS [28,29]. Research in the area of workflow diagrams for other GIS VPLs has also been carried out at our eye-tracking laboratory. Reading patterns have been described for models in ArcGIS ModelBuilder [30], where a significant effect of the orientation of connector lines was reported. The influence of bends in connector lines was also tested for ModelBuilder [22]. The ability to change colour helps to discriminate graphical symbols in ModelBuilder, which was likewise demonstrated using eye-tracking experiments at our university laboratory [31]. The eye-tracking experiment for the QGIS Processing Modeler consisted of two parts. The first part only displayed the models without any task; this part is called free viewing. The second part contained 22 models introduced with comprehension tasks. The respondents solved the tasks by clicking on the stimulus at the location that answered the question. All tested diagrams are in Appendix A. Clicks were recorded as answers. All stimuli were interleaved with a fixation cross in the middle of the screen to provide the same starting point for all respondents. The fixation cross was displayed for 600 milliseconds before each stimulus. The research combined the two methods mentioned above in the evaluation, and they are very different. The first reports findings by applying theoretical principles; it produces results in text form as a list of insufficiencies, good features, recommendations and ideas.
The second is an experimental method in which objective measurement is obtained through user testing. The two methods can be regarded as cross-validating each other's results, but eye-tracking mainly serves as an extension of the received results. The research combined both approaches to obtain more complex results, such as finding reading patterns. In the phase of preparing the eye-tracking experiment, we considered how to test the principles of Physics of Notations. The tasks were aimed at receiving answers that correspond to the principle definitions, and the design of the experiment was made as coherent with the principles as possible. However, the set of principles can only be tested in a limited way: it is impossible to design the eye-tracking experiment as one task per principle, because multiple influences act on a respondent's perception simultaneously. Moreover, the principles of Complexity Management and Cognitive Interaction are hard to test because no sufficient solution for them is present in the visual vocabulary. It was also impossible to test the Principle of Cognitive Fit, since no visual dialects exist in the Processing Modeler. Two hypotheses were proposed before the eye-tracking testing:

Hypothesis 1 (H1). Insufficiencies in the visual notation adversely affect the correctness of comprehension.

Hypothesis 2 (H2). Insufficiencies in Semiotic Clarity, Perceptual Discriminability, Visual Expressiveness and Semantic Transparency adversely affect the effectiveness of comprehension.

To evaluate these two hypotheses, the number of correct answers (for H1), the time required to answer, and eye-tracking metrics were measured. Eye-tracking metrics such as the length of the scanpath, the number of fixations and the average duration of fixations were calculated (for H2). All results are presented in Section 4.

Evaluation of Effective Cognition by the Physics of Notations Method

A systematic application of the Physics of Notations theory to the Processing Modeler follows.
Principle of Semiotic Clarity

When this principle is applied to the symbols in the Processing Modeler, it is evident that both the input data and output data symbols are overloaded. In version 2, one symbol represents nine different data types: vector, raster, string, file, table, table field, number, extent and boolean. The newer version 3 offers 22 different types of input data. The user has to assign the data type immediately when an input data symbol is added to the model. The data type (for example point, line or polygon) is assigned immediately when a model is designed, despite there being no evidence of the data type in the graphical symbol. The detected symbol overload could be solved in the following manner: remove the inner plus icon at the left of the symbol and replace it with more specific icons that express the type of data. Suggestions for vector and raster data symbols are given in Figure 4. Both icons are adopted from the QGIS interface. The lower pair uses the compound icons from version 3, where the symbols for vector and raster are supplemented by a small plus icon to express the input of data. These compound icons are better than the simple icons in version 2; the former, larger plus icon is substituted by a plus icon that forms part of the vector or raster icon. New icons could be suggested for file, folder, string, number, table and field using any universal icon. Data such as extent, CRS, map layer, etc., need domain-specific icons. The same set of inner icons could be used for the output data symbol, where a compound symbol can contain a bigger icon for the data type and the small output arrow original to the output symbol. The suggestions follow Szczepanek's icon theory [15]. They would increase the number of icons in the graphical vocabulary and solve the overload of the two original symbols for input/output data.
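The symbol-overload check described here can be sketched as a small audit function: map each semantic construct (data type) to the symbol that encodes it and flag any symbol carrying more than one meaning. The nine version-2 data types come from the text; the symbol name is illustrative.

```python
# Semiotic Clarity audit sketch: a symbol is overloaded when it encodes
# more than one semantic construct (the version-2 input rectangle below
# encodes all nine data types at once).

def find_overloaded(construct_to_symbol):
    symbol_meanings = {}
    for construct, symbol in construct_to_symbol.items():
        symbol_meanings.setdefault(symbol, []).append(construct)
    # keep only symbols that carry more than one meaning
    return {s: cs for s, cs in symbol_meanings.items() if len(cs) > 1}

v2_types = ["vector", "raster", "string", "file", "table",
            "table field", "number", "extent", "boolean"]
mapping = {t: "input rectangle" for t in v2_types}

overloaded = find_overloaded(mapping)
print(list(overloaded))                      # ['input rectangle']
print(len(overloaded["input rectangle"]))    # 9
```

Replacing the single plus icon with per-type icons, as suggested above, would shrink each entry of this report to a single meaning per symbol.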
Principle of Perceptual Discriminability

Colour, shape, orientation, brightness and other visual variables are what users rely on to discriminate symbols in practice. A systematic pairwise comparison shows the visual distance between every two symbols, measured as the number of visual variables in which they differ. The pairwise comparison of the version 3 symbols according to this principle is given in Table 1. In the Processing Modeler, the symbols differ only in colour and brightness; the rectangular shape is the same for all symbols. The visual distance is two in all pairs. The characteristics are poorer for the symbols of the older version 2, where the only differences are in colour and the visual distance is one (a difference in brightness exists only between the data symbols and the operation symbol). Perceptual discriminability of the symbols through colour is almost satisfactory thanks to the differing tones. The Processing Modeler does not give the user the option to define colours to express other meanings of symbols, for example, to distinguish final data from intermediate data in a large model. Considering the principle of perceptual discriminability, the distinctiveness of the white symbol from the canvas is poor. The model canvas and the symbol for operations have the same white colour, from which a new recommendation emerged: change the fill colour of the operation symbol from white to orange-brown (Figure 5). The result of the pairwise comparison of all symbols in the vocabulary remains the same, while the discriminability of symbol and canvas is better than with the white symbol.

Principle of Visual Expressiveness

The recommendation according to this principle is to use the maximum of visual variables in symbols. Only colour is used, as the fill of the graphic elements. Other visual variables such as symbol shape, size, texture, orientation and position are not used in the Processing Modeler.
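The pairwise comparison behind Table 1, and the count of visual variables actually used, can be sketched as follows. The attribute values are illustrative readings of the version-3 symbols, restricted to the three variables discussed in the text.

```python
# Visual distance sketch: the distance between two symbols is the number
# of visual variables on which they differ (Perceptual Discriminability);
# the variables with more than one distinct value across the vocabulary
# are the ones "used" (Visual Expressiveness).

VARIABLES = ("shape", "colour", "brightness")

symbols = {
    "input":     {"shape": "rectangle", "colour": "yellow", "brightness": "light"},
    "output":    {"shape": "rectangle", "colour": "green",  "brightness": "dark"},
    "operation": {"shape": "rectangle", "colour": "white",  "brightness": "medium"},
}

def visual_distance(a, b):
    return sum(symbols[a][v] != symbols[b][v] for v in VARIABLES)

pairs = [("input", "output"), ("input", "operation"), ("output", "operation")]
print({p: visual_distance(*p) for p in pairs})  # every pair differs in 2 of 3

used = [v for v in VARIABLES if len({s[v] for s in symbols.values()}) > 1]
print(used)  # ['colour', 'brightness'] -- shape carries no information
```

Adding the proposed flowchart-style shapes for loop and condition would make `shape` a used variable as well, raising the expressiveness of the vocabulary.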
The shape is the same rectangle for all symbols. The size of the symbols does not vary and cannot be changed. Brightness is used in version 3 (Figure 3). The new symbol vocabulary could be improved by using a greater variation in brightness between symbols and possibly various shapes of symbols. The visual variable of position is only applied when the output data symbol is automatically placed near the right side of the producing operation (Figure 6). This near-automatic placement of output symbols is the same in version 3 (Figure 13). However, the position of the output data symbol is very often changed by the user, or the user moves the operation symbol; the former position of the output data is then retained without following the symbol of the sourcing operation. The mutual position of the linked symbols is not fixed, so the positioning of the output data symbol is only a weak and unstable use of the position variable. In terms of the principle of Visual Expressiveness, the graphical vocabulary is at the low level 1 on a scale with a maximum of 8. The QGIS Processing Modeler does not offer loop and condition functions. To implement these functions for controlling operations, the draft offered in this paper uses the visual variables of shape and colour (Figure 7). The pink rectangle with oblique sides represents the cycle operation, and the light yellow rhombus represents the condition. These symbol shapes correspond to the classic shapes of flowchart symbols, and they differ from the basic rectangular shapes of the version 3 vocabulary in both shape and colour. By using these new symbols, the number of visual variables used increases to two, and the total number of symbols in the vocabulary would be five. These symbols fulfil the principles of Discriminability and Visual Expressiveness. The principle of Graphic Economy would also be fulfilled (an explanation of the principle of Graphic Economy follows).
Principle of Graphic Economy

The number of base graphical elements is three, which meets the requirement of cognitive manageability and the range of 7 ± 2 symbols. Even with all the previous suggestions for changes, the two symbols for the condition and cycle (under the principle of Visual Expressiveness) and the suggested blue symbol for sub-models (see the Complexity Management principle below), the total number of symbols is six. Altogether, the requirement of this principle is fulfilled; the vocabulary remains economical.

Principle of Dual Coding

This principle suggests accompanying symbols with descriptive text. For models in the Processing Modeler, text completes the data symbol with the data name and the operation symbol with the operation name. The user assigns the data name arbitrarily; it is always an input parameter, and the input data symbol is never bound to specific data stored on a storage medium in the model's design mode. The name can be edited as desired. The operation name is added to the symbol automatically according to the selected operation and can also be changed in version 3 (this was not possible in version 2). The option to edit the operation name is a good improvement in functionality and allows the model to be better understood. Renaming an operation is especially advantageous when the same operation appears multiple times in one model, since it makes it possible to describe or specify the meaning of each occurrence. Figure 8 depicts a diagram in which the v.generalize operation is called three times, each time with a different generalisation algorithm; the selected algorithm is added manually by the user to the operation name. The operation name's editing option improves the clarity of the model. Long names that do not fit into the rectangle are automatically truncated and completed with an ellipsis (Figure 9).
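The automatic label shortening can be sketched as a width-limited truncation that appends an ellipsis. The 20-character limit is an assumption for illustration; the Modeler's actual limit is based on rendered pixel width, not character count.

```python
# Label truncation sketch: labels that exceed the symbol width are cut
# and finished with an ellipsis, as in Figure 9.

def truncate_label(text, max_chars=20):
    if len(text) <= max_chars:
        return text
    # keep max_chars total, the last position taken by the ellipsis
    return text[:max_chars - 1] + "…"

print(truncate_label("v.generalize"))                    # fits, unchanged
print(truncate_label("Join attributes by field value"))  # shortened with "…"
```

A small side effect worth noting: two long labels sharing the same prefix truncate to identical strings, which is one more argument for letting users rename operations meaningfully.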
If the Semantic Transparency modification (see below, the deletion of the functional icons) were implemented, it would increase the space for longer operation and input data names, which would be beneficial. To follow the principle of Dual Coding, a modification of the input data labels is suggested: labels stating the data type in capitals would improve the data symbols. Examples are given in Figure 10. If this were added automatically when symbols are added, the user would only need to choose the data name itself. Additionally, the user's data name (e.g., "input lines") can emphasise the spatial type; manually describing the data type is possible in the current stage of the notation, so there is space for good use of Dual Coding by users when naming symbols. The Processing Modeler meets the Dual Coding principle; however, comprehensibility could be improved with the proposed modification of specifying the data type in the caption.

Figure 10. Suggestions for improving the labels of input data symbols with labels for a data type in capitals.

Text is also used in the models to list the operation parameters when the plus symbol above an operation is pressed (Figure 11). After this, the black dot divides itself into several black dots according to the number of joined connector lines. The operation parameter list does not contain the values of these parameters, and the overlapping lines leading to the rectangle often make it confusing. This is retained in version 3. It would be useful to add a list of the specific parameter values here; the current form of the textual information is not useful to users. It is perhaps only useful for expressing which symbol assigns concrete parameters to the operation.

Principle of Semantic Transparency

According to this principle, symbols should be associated with the real meaning of an element. In the Processing Modeler, the shape and colour of the symbols do not carry any association; they are semantically general.
The same holds in other visual programming languages for GIS applications. Within the symbols, the inner plus icon is used at the left of the input data symbol, and the output data symbol depicts an inward arrow icon. Icons can also carry semantic meaning, and these icons can be considered almost semantically immediate: the plus icon indicates new data for processing, and the arrow icon indicates the result of processing in a certain direction. The previous proposal under the principle of Semiotic Clarity is useful here as well, since it also improves semantic immediacy. It suggests that each data type has its own icon, as in Figure 4 (the plus icon is replaced or becomes part of the compound symbol in version 3). Here, it is clear that the change resulting from applying the Semiotic Clarity principle also leads to an improvement in Semantic Transparency. For operations, the icons mainly represent the source library. These icons are rather semantically generic because they do not explain anything about the purpose of the operation; however, they are a good guideline for determining the source library. It should be remembered that many libraries contain operations with the same name (clip, buffer, etc.). In version 3, the Processing Modeler sometimes uses an icon that represents the type of operation (namely for QGIS operations). Figure 5 shows the operation Dissolve, which has a specific icon representing this operation; another specific icon, for the operation 'Merge vector layers', appears in the model in Figure 13. Nevertheless, the size and graphics of the icons are not suitable for improving the association between operations and their meanings: the icons are small and use only grey tones. A good example of large, colourful and detailed icons that describe the purpose of an operation is the Spatial Model Editor (Figure 12) embedded in the ERDAS IMAGINE software [32]. There, the icons take up more space than the text below them in the symbol, and the icons are prominent.
The graphical vocabulary of the Spatial Model Editor has high Semantic Transparency and is inspiring for a redesign of the Processing Modeler symbols. The final recommendation is to reshape the rectangle into a square to accommodate bigger icons and to put the text label below the icon. The graphical vocabulary of the QGIS Processing Modeler is semantically opaque, except for some operations, where a greater positive semantic immediacy can be observed (Figure 12, third symbol from the top: Random points in extent).

Principle of Complexity Management

This principle recommends producing hierarchical levels of a diagram and dividing it into separate modules and a hierarchy. In textual/visual programming, this is achieved with sub-programs (sub-routines) or sub-models that can be designed and managed separately. The hierarchical model contains only two levels, no more. The Processing Modeler allows existing models to be added to other models in the interface (the Algorithms panel, Figure 13). This provides the right degree of modularity according to Complexity Management in both versions 2 and 3. The symbol of a sub-model has a three-gear-wheel icon (three connected balls in version 2) at the left of the symbol; otherwise, a white rectangle is used. Since it would be good to differentiate the symbol of an individual operation from a sub-model by more than an icon, a fill colour other than white would be appropriate. A suggested depiction, a blue fill colour for sub-models, is shown in Figure 13. The visual resolution of the other symbols is maintained. The number of symbols increases to seven after the new one is added for the sub-model. The final count of seven symbols fulfils the principle of Graphic Economy.

Principle of Cognitive Interaction

The principle recommends increasing the options for navigating in the model; the connector lines affect navigation. In the Processing Modeler, rounded connector lines join the symbols, and the lines are rendered automatically.
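The two-level modularity described under Complexity Management can be sketched as a model whose symbols may themselves be sub-models. The dictionary-based representation and the model names below are illustrative, not the Modeler's file format.

```python
# Complexity Management sketch: a sub-model appears as one symbol at the
# top level; expanding it one level deep reveals its own symbols (the
# Modeler supports exactly these two levels).

def count_symbols(model, models_by_name, expand=False):
    total = 0
    for sym in model["symbols"]:
        if sym["kind"] == "submodel" and expand:
            total += count_symbols(models_by_name[sym["name"]], models_by_name)
        else:
            total += 1
    return total

generalise = {"symbols": [{"kind": "input", "name": "lines"},
                          {"kind": "operation", "name": "v.generalize"},
                          {"kind": "output", "name": "result"}]}
main = {"symbols": [{"kind": "input", "name": "roads"},
                    {"kind": "submodel", "name": "generalise"},
                    {"kind": "output", "name": "final"}]}
library = {"generalise": generalise}

print(count_symbols(main, library))               # 3 symbols at the top level
print(count_symbols(main, library, expand=True))  # 5 symbols once expanded
```

The difference between the two counts is exactly the complexity that the sub-model symbol hides from the reader, which is the point of the principle.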
Symbols very often overlap the lines when symbols are moved manually (Figure 6). Lines also sometimes cross each other, and they are not parallel. The user must manually try to find the best positions for symbols in order to prevent overlapping and a perplexing criss-crossing of curved lines. Previous research recommended that the number of edge crossings in drawings be minimised [33]. For these reasons, curved connector lines do not appear to be the proper solution; it is often difficult to trace a connector's direction. A suggested change is to replace the curved lines with straight lines (Figure 14). Straight lines ensure good continuity for reading; good continuity means minimising the angular deviation of two followed edges connecting two nodes from a straight line [34]. In this new suggestion for the Processing Modeler, straight lines could optionally be angled at an oblique or right angle when it is necessary to avoid a symbol. An acute angle is not suitable because it hinders smooth line tracking. If curved connectors remain in the notation, it is necessary to give the user control over shaping these connectors to prevent crossing and overlapping. Operation symbols linked with lines and a black dot offset from the edge of the symbol unnecessarily occupy space in the model's area; it would be possible to terminate the lines directly at an edge or at the plus sign of the symbol to save space. Finally, the ability to display a model's thumbnail in a separate preview window would help to navigate the model; such a preview window has not yet been implemented. In terms of cognitive interaction, version 3 was supplemented with a zooming function; the zoom in and zoom out functions were absent in version 2. Aligning the symbols makes reading the model quicker and easier. No automatic function for aligning the model to a grid is implemented. Symbols usually snap to a grid in other graphical software, but no snapping function is available in the Processing Modeler.
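The edge-crossing criterion cited above [33] is easy to quantify for the proposed straight connectors: count the pairwise crossings between line segments drawn from symbol to symbol. The coordinates below are illustrative.

```python
# Crossing counter sketch for straight connector lines. A crossing is a
# proper intersection of two segments; shared endpoints (two connectors
# leaving the same symbol) do not count.

def ccw(a, b, c):
    # signed area orientation test
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def segments_cross(p1, p2, p3, p4):
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def crossings(edges):
    return sum(segments_cross(*edges[i], *edges[j])
               for i in range(len(edges)) for j in range(i + 1, len(edges)))

edges = [((0, 0), (4, 4)), ((0, 4), (4, 0)), ((5, 0), (5, 4))]
print(crossings(edges))  # the two diagonal connectors cross once
```

A layout routine could minimise this count when repositioning symbols, instead of leaving the untangling entirely to the user.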
A post-alignment of symbols to vertical or horizontal lines could, therefore, be beneficial for design. The arrangement of symbol positions depends entirely on user diligence; it is completely manual work in the Processing Modeler, and manual aligning is a time-consuming activity.

Evaluation by Eye-Tracking Measurements

The eye-tracking experiment was designed in a complex way to confirm or reject hypotheses H1 and H2. All tested diagrams are in Appendix A. The design of the test contained multiple tasks to extract the maximum of information; some models served repeatedly for evaluations with different purposes, e.g., finding a symbol, comparing orientations, or reading the labels. After testing, only reliable answers with correct eye-tracker records were evaluated and are presented in this article. The first evaluation concerned the discriminability of symbols. These tasks required finding input data and output data symbols in the models. The first task was: "Click on the symbol where the input data are" (tasks A1, A2, A3 in Appendix A). The number of incorrect answers recorded was zero. The next task was: "Click on the symbol where the output data are" (tasks A4, A5, A6). There were two wrong answers each for tasks A4 and A5, and four for task A6; however, the arrangement of model A6 strongly influenced the answers. This means that the input and output symbols were fairly high in Perceptual Discriminability, but the errors indicate room for improving the symbols, such as the inner icons suggested in Section 4.1 for increasing transparency and using all visual variables. Besides the number of correct/wrong answers, the time of the first click was recorded. The distribution of times was not normal (tested by the Shapiro-Wilk test). The non-parametric Kruskal-Wallis test therefore examined whether the medians of the "first click time" of all tasks (A1-A5) were equal, i.e., whether the time samples originate from the same distribution.
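The H statistic underlying this test can be sketched in pure Python. The version below omits the tie correction, and the two samples are illustrative, not the measured first-click times (those were analysed in STATISTICA).

```python
# Kruskal-Wallis H sketch (no tie correction): rank the pooled values,
# sum the ranks per group, and compare against the expectation under the
# null hypothesis that all groups share one distribution.

def kruskal_wallis_h(*groups):
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    n = len(pooled)
    h = 12.0 / (n * (n + 1)) * sum(r * r / len(g)
                                   for r, g in zip(rank_sums, groups))
    return h - 3 * (n + 1)

a = [1.2, 1.4, 1.1, 1.3]    # hypothetical first-click times, group 1 (s)
b = [1.25, 1.35, 1.15, 1.45]  # hypothetical first-click times, group 2 (s)
print(round(kruskal_wallis_h(a, b), 3))  # 0.333 -> far below significance
```

In practice one would use a library routine with tie handling (e.g., `scipy.stats.kruskal`) and read the p-value from the chi-squared distribution; the hand computation only shows what the test measures.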
The result of the statistical test revealed that there is no significant difference between finding the input and output symbols. This means that the basic symbols are discriminable and none of them is dominant in perception. The next task aimed to verify the influence of Dual Coding (although the influence of discriminability was also present). The task was: "Click on the symbol where the 'Fixed Distance Buffer' operation is called" (tasks A13, A14, A15). Once again, it was necessary to find the white symbols and read the labels in them. A total of 21 correct answers were recorded (one incorrect). The results for the 22 respondents were calculated as an attention heat map (Figure 15). The heat map shows the places where the peaks of gaze fixations of all respondents lie. The figure shows that all white symbols correctly attracted the gaze of the respondents: they searched for white operation symbols and then read the particular operation label. The highest attention was recorded at the two places where the Fixed Distance Buffer operation was (top and bottom); lower peaks of fixations lie at the other white symbols with different operations. It is evident that the white colour of operations attracts the gaze. Both the principle of Visual Expressiveness (and also Perceptual Discriminability), through the white colour fill, and the principle of Dual Coding were verified. In fact, this stimulus did not confirm the poor distinguishability of white symbols from the white canvas that had been expected in the theoretical part of this article. The principle of Semantic Transparency was difficult to test. The transparency of the icons was tested with the task: "Are all operations from the same source library?" The tested model is shown in Figure 16 (tasks A16, A17, A18 in Appendix A). Three incorrect answers were recorded from the 22 respondents. The Semantic Transparency of data types could only be examined in the Processing Modeler through expressive text.
This was verified in a model where the task was: "Does the input data have the same data type as the output data?" (tasks A10, A11, A12). The data symbols were labelled with the words "table" and "raster" as part of the data name in the model, a user-designed aid helping respondents distinguish the data type. Three incorrect answers were recorded for two of the models, and two for task A11. In these models, the response times were longer than in the previously presented models and tasks, and the average fixation time was also longer. This confirms that users had to read the labels: conveying semantic transparency through text labels consumes more comprehension time. The longer times support hypothesis H2 about the negative influence of notation insufficiencies on effective comprehension. Both of the experiments mentioned above (identifying the source library and comparing the data types of input and output data) verified that Semantic Transparency is low in the Processing Modeler. The numbers of correct and incorrect answers across all tasks presented in this section show that some insufficiencies adversely affect cognition, as stated in hypothesis H1. Eye-tracking testing yielded not only cross-validation of the Physics of Notations results but also new information, notably the users' reading patterns and the influence of flow orientation on their reading direction. To find the users' reading pattern, gazes were aggregated. A comparison of the same diagram from the free-viewing section and a task section is given in Figure 17. In both cases, the aggregations revealed that the orientation of data flow expressed by connector lines had a significant influence. Reading began in the middle of the stimulus, following the central fixation cross in the previous stimulus; gazes were then attracted to the upper left corner and continued horizontally to the right.
People's habit of reading lines of text was very strong, especially in free viewing (Figure 17a). The lower part of Figure 17b also depicts strongly followed lines; only a small number of gazes skip between the two main horizontal workflow lines. Free viewing was not as systematic as the task-oriented gaze aggregations. Two models tested the effect of symbol alignment, a finding that can be linked to the principle of Cognitive Interaction. The first model had aligned symbols; the second did not. The functionality was the same, and the question was identical for both models: "How many functions are in the model?" (tasks A8, A9). It was enough to count only the white rectangles in these large models; the expected correct answer was eight. The aligned model yielded two incorrect answers and the unaligned model seven, and the average task time was much shorter for the aligned model. The non-parametric Kruskal-Wallis test was used for the eye-tracking metrics because the measured values were not normally distributed. It tests whether the group medians are equal, i.e., whether the samples come from the same distribution [35]. The significance level for all Kruskal-Wallis tests was set to a p-value of 0.05. The test was run three times, comparing the aligned and unaligned models (A8 and A9) on the number of fixations, the scanpath length, and the number of fixations per second. The Kruskal-Wallis test found statistically significant differences for all three metrics, with the unaligned model (task A9) showing much worse values throughout. Aligning the symbols in a model therefore makes it easier to read and understand. This eye-tracking evaluation supports the recommendation of a new automatic symbol-alignment function for the graphical editor.
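The three metrics compared between the aligned and unaligned models can all be derived from a raw fixation sequence. A small sketch with hypothetical gaze positions:

```python
import math

def eye_tracking_metrics(fixations, trial_seconds):
    """Derive the three metrics compared in the aligned vs. unaligned test:
    number of fixations, scanpath length, and fixations per second.
    `fixations` is a list of (x, y) gaze positions in pixels."""
    n = len(fixations)
    # Scanpath length: sum of Euclidean distances between consecutive fixations.
    scanpath = sum(math.dist(a, b) for a, b in zip(fixations, fixations[1:]))
    return {
        "fixation_count": n,
        "scanpath_length_px": scanpath,
        "fixations_per_second": n / trial_seconds,
    }

# Hypothetical trial: four fixations over 2 seconds.
m = eye_tracking_metrics([(0, 0), (3, 4), (3, 4), (6, 8)], trial_seconds=2.0)
print(m)  # scanpath: 5 + 0 + 5 = 10 px, 2 fixations/s
```

Per-respondent metric values computed this way are what enters the Kruskal-Wallis comparison between the two models.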
Three groups of models with the same functionality were prepared to test the orientation of flow and determine whether any orientation is better for users. The models in each group differed only in orientation; three orientations were tested: vertical, horizontal, and diagonal. Comparing orientations can also be considered a contribution to the principle of Cognitive Interaction. An example of diagonal flow orientation is shown in Figure 16, and of horizontal orientation in Figure 17. Variants of these models with the three orientations were designed; the triplets are [A10, A11, A12], [A13, A14, A15], and [A16, A17, A18] in Appendix A. To prevent bias, the same task was used within each group. The aim was to find the best model orientation, but the results of the Kruskal-Wallis test were not statistically significant. In some cases, horizontal orientation had the shortest average solution time; in some models, diagonal orientation had better average fixation times, and horizontal models had the shortest scanpath length. The results were ambiguous, and the orientation preferences were not consistent across the triplets; the outcome likely depends strongly on the given question and model size. Eye-tracking did not reveal a best flow orientation.

Results

Research into the QGIS Processing Modeler yielded useful results and suggestions. The combination of Physics of Notations theory and eye-tracking measurements determined that Perceptual Discriminability, Dual Coding, and Graphic Economy were nearly satisfactory, with room for improvement, while the worst situation is in Semantic Transparency. Some of the recommendations can help improve Semiotic Clarity, Visual Expressiveness, and Semantic Transparency. All recommendations can be divided into two groups: the first for developers of the Processing Modeler and the second for users in practice.
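The orientation comparison within a triplet follows the same non-parametric pattern as the alignment test. A sketch with hypothetical per-respondent solution times for one triplet:

```python
from scipy import stats

# Hypothetical per-respondent solution times (s) for one model triplet in
# the three tested orientations; real data came from tasks A10-A18.
triplet = {
    "vertical":   [4.1, 5.0, 3.8, 4.6, 5.2, 4.4],
    "horizontal": [3.9, 4.8, 4.2, 4.0, 5.1, 4.3],
    "diagonal":   [4.5, 4.2, 4.9, 4.1, 4.7, 4.6],
}

# Kruskal-Wallis across the three orientation groups.
h, p = stats.kruskal(*triplet.values())
print(f"H = {h:.2f}, p = {p:.3f}")
if p >= 0.05:
    print("No significant effect of flow orientation for this triplet.")
```

Running this per triplet and per metric (solution time, fixation time, scanpath length) mirrors the analysis that produced the ambiguous orientation results.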
The suggestions for the first group include larger sizes and colours for the inner meaning icons, which would increase Semantic Transparency and also the Semiotic Clarity of symbols. Another suggested improvement is adding a colour fill to the operation symbol of sub-models. Straight connector lines are better than curved lines; optionally, user-shaped lines are even more suitable. New symbols for the IF and loop FOR commands were proposed, based on new shapes and different colours. The readability of models would be improved by a function for automatic alignment of symbols to a grid. Users can also benefit from several recommendations in practice. Correct labelling of symbols, with data types expressed in capitals (VECTOR, RASTER, STRING, NUMBER, etc.), is very useful. Aligning symbols and preventing overlaps and line crossings improve the effective comprehensibility of a model. Designing and using sub-models fulfils the Complexity Management principle, and there is room for broader use of sub-models by users. Keeping a single flow orientation (horizontal or diagonal) without changes of direction increases reading speed. These user recommendations are presented every year to students attending lectures at the Department of Geoinformatics at Palacký University in Olomouc. The author has had positive experience with applying the knowledge acquired through teaching to research and to solving practical problems; one such experience is described in an article about the database design for the university's botanical gardens (the BotanGIS project) [36]. The presented evaluation and list of suggestions could inspire designers of visual programming languages in GIS software, and some recommendations may also be useful to the broader community of users for increasing effective cognition of any graphical depiction. Table 2 reports all findings and recommendations in summarised form; concrete graphical improvements are shown in the figures of this article.
Table 2. Summary of findings and recommendations.

| Principle | Physics of Notations | Eye-tracking results | Recommendations |
|---|---|---|---|
| Semiotic Clarity | Symbols of data are overloaded. | Some wrong answers indicate an overload. | Add various icons in symbols of data types. Add all icons for spatial functions. |
| Perceptual Discriminability | Visual distance is 2. | No dominant symbols in perception. | Change the colour of the operation to orange. |
| Visual Expressiveness | Level 1; only colour is used as a visual variable. | Some wrong answers indicate weak expressiveness. | New pink symbol for the loop and light yellow for the condition symbol, increasing expressiveness to level 2. |
| Graphic Economy | 3 symbols fulfil the economy. | Only some wrong answers. | With the addition of the new symbols, a total of 7 still fulfils the economy. |
| Dual Coding | Good possibility to change the text. | The text helps users find the proper symbols. | User renaming to express the data type; users supplement the operation name with other information about parameters. |
| Semantic Transparency | Semantically general. | Low. | Remove the functional icons on the right side. Add larger inner domain-specific colour icons, as in the Spatial Model Editor. Reshape the rectangle to fit bigger icons and put the text label below the icon. |
| Complexity Management | Modularisation to sub-models is possible; only one level in the hierarchy. | Not tested. | Change the colour of a sub-model to blue. |
Phenomics and transcriptomics analyses reveal deposition of suberin and lignin in the short fiber cell walls produced from a wild cotton species and two mutants

Fiber length is one of the major properties determining the quality and commercial value of cotton. To understand the mechanisms regulating fiber length, genetic variations of cotton species and mutants producing short fibers have been compared with cultivated cottons generating long and normal fibers. However, their phenomic variation other than fiber length has not been well characterized. Therefore, we compared the physical and chemical properties of the short fibers with those of the long fibers. Fiber characteristics were compared in two sets: 1) wild diploid Gossypium raimondii Ulbrich (short fibers) with cultivated diploid G. arboreum L. and tetraploid G. hirsutum L. (long fibers); 2) G. hirsutum short fiber mutants, Ligon-lintless 1 (Li 1 ) and 2 (Li 2 ), with their near-isogenic line (NIL), DP-5690 (long fibers). Chemical analyses showed that the short fibers commonly contained more non-cellulosic components, including lignin and suberin, than the long fibers. Transcriptomic analyses also identified up-regulation of the genes related to suberin and lignin biosynthesis in the short fibers. Our results may provide insight into how high levels of suberin and lignin in cell walls can affect cotton fiber length. Approaches combining phenomic and transcriptomic analyses of multiple sets of cotton fibers sharing a common phenotype would facilitate identifying genes and common pathways that significantly influence cotton fiber properties.

Introduction

Cotton (Gossypium sp.) is the most economically important natural fiber in the world [1].
In addition to their agronomic importance, cotton fibers are utilized as an ideal biological model for studying the molecular mechanisms involved in cell elongation and cell wall biogenesis, because cotton fiber cells are unicellular and larger and longer than any other plant cell [2]. Cotton fiber development is divided into four overlapping stages: 1) initiation, 2) primary cell wall (PCW) biosynthesis characterized by fiber elongation, 3) secondary cell wall (SCW) biosynthesis characterized by wall thickening, and 4) maturation.

Cultivated cotton varieties (including G. hirsutum DP-5690, Li 1 , and Li 2 ) were planted on two-row plots located at the Southern Regional Research Center (New Orleans, LA; 2017) under naturally neutral-day conditions. The soil type of the cotton plot was aquents dredged over alluvium in an elevated location to provide adequate drainage. Single-row plots were 12 m long with approximately 40 plants per plot. The distance between two rows was 0.5 m, and the distance between two plants within a row was 0.3 m. To minimize environmental effects, boll samples were not collected from plants on the perimeter of the field or at the end of each row. At harvest, approximately 60 naturally opened bolls were randomly collected from two plots of each cotton variety and separated into two biological replicates of 30 bolls each for further analysis of the physical and chemical properties of each variety. To collect developing fibers at various developmental stages, wild diploid G. raimondii D 5 -6 and D 5 -31, along with G. arboreum and G. hirsutum, were grown in a growth chamber (Percival Intellus Environmental Controller, Perry, IA) in 8 L pots at 28˚C (day)/24˚C (night) under a short-photoperiod condition (9 h daylight, 300 μmol m -2 s -1 ) during the vegetative stage, reduced to 26˚C (day)/18˚C (night) during the flowering and boll development stages. The pots were filled with Metro-Mix 350 soil. For fiber length measurement, two plants of wild diploid G.
raimondii D 5 -6 were grown in 167 L containers in an NCGC greenhouse located at College Station, Texas during a winter season for a short-photoperiod condition. The two G. raimondii D 5 -6 plants produced four bolls, which were separated into two biological replicates with two bolls per replicate. To obtain sufficient G. raimondii fibers for fiber length and chemical analyses, G. raimondii D 5 -31 was grown perennially at the cotton winter nursery at Tecoman, Colima, Mexico, in association with the Instituto Nacional de Investigaciones Forestales, Agrícolas y Pecuarias [26]. Three G. raimondii D 5 -31 plants (240 days after planting) were transplanted into the ground of the cotton winter nursery. In the second year, they produced 400 bolls that were separated into two biological replicates with 200 bolls per replicate for fiber length and chemical analyses. All G. raimondii plants grown in the growth chamber, greenhouse, and cotton winter nursery produced a common phenotype of short and green-colored fibers.

Cotton fiber length measurements

Maximum fiber lengths were estimated by placing ovules on a watch-glass and gently spraying the fibers with a stream of distilled water, as described by Schubert et al. [27]. Ten to thirty cotton bolls were randomly selected from each biological replicate sample of G. arboreum (A 2 -100 and SXY1), G. raimondii (D 5 -6 and D 5 -31), and G. hirsutum (TM-1, SG-747, DP-5690, Li 1 , and Li 2 ). Single cotton seeds were randomly selected from individual cotton bolls. The distance between the chalazal end of each selected seed and the tip of the spread fibers was measured to the nearest 0.1 mm with a digital caliper. The mean maximum fiber length of each cotton variety was obtained by measuring the randomly selected seeds from two biological replicates.

Updegraff cellulose assay

Cellulose contents of developed cotton fibers were measured by the modified Updegraff method [28].
Five cotton bolls were randomly selected from each biological replicate sample of the cultivated G. arboreum (A 2 -100 and SXY1) and G. hirsutum (TM-1, SG-747, and DP-5690) producing long fibers. Two to six cotton bolls were also randomly selected from each biological replicate sample of the G. raimondii (D 5 -6 and D 5 -31) and G. hirsutum mutants (Li 1 and Li 2 ) generating short fibers. Dried fiber samples of the selected bolls were manually harvested and cut into small pieces. Ten milligrams of the blended fibers were placed in 5 mL Reacti-Vials TM (Thermo Fisher Scientific, Waltham, MA) and hydrolyzed with acetic-nitric reagent (a mixture of 73% acetic acid, 9% nitric acid, and 18% water). The remaining cellulose was hydrolyzed with 67% sulfuric acid (v/v) and measured by a colorimetric anthrone assay with Avicel PH-101 (FMC, Rockland, ME, USA) as the cellulose standard. The mean cellulose content of each cotton variety was obtained by measuring the randomly selected cotton bolls from two biological replicates.

Attenuated Total Reflection Fourier Transform Infrared (ATR FT-IR) spectral collection and data analysis

Five cotton bolls were randomly selected from each biological replicate sample of cultivated G. arboreum A 2 -100 and G. hirsutum TM-1 and DP-5690 producing long fibers, as well as wild G. raimondii D 5 -31 and the G. hirsutum Li 1 and Li 2 mutants generating short fibers. Dried fiber samples were manually harvested from the selected bolls and divided into six portions that were scanned directly without further processing. Average spectra of each replicate sample were obtained from the spectra of the six portions. Mean spectra of each cotton variety were obtained by measuring the randomly selected cotton bolls from two biological replicates.
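A colorimetric assay of the kind described above quantifies cellulose by fitting a standard curve through known amounts of the Avicel standard and interpolating sample absorbances. A minimal sketch; the absorbance values are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical Avicel PH-101 standard curve: known cellulose amounts (µg)
# vs. anthrone absorbance readings in the linear (Beer-Lambert) range.
standard_ug = np.array([0, 25, 50, 75, 100])
standard_abs = np.array([0.02, 0.21, 0.40, 0.61, 0.80])

# Least-squares line mapping absorbance back to µg cellulose.
slope, intercept = np.polyfit(standard_abs, standard_ug, 1)

def cellulose_ug(absorbance):
    """Convert a sample absorbance to µg cellulose via the standard curve."""
    return slope * absorbance + intercept

sample_abs = 0.52
print(f"Estimated cellulose: {cellulose_ug(sample_abs):.1f} µg")
```

Dividing the estimate by the 10 mg of fiber hydrolyzed would then give the cellulose content as a percentage.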
All samples were scanned with an FTS 3000MX FT-IR spectrometer (Varian Instruments, Randolph, MA, USA) equipped with a ceramic source, KBr beam splitter, deuterated triglycine sulfate (DTGS) detector, and attenuated total reflection (ATR) attachment, following the methods previously described in Liu and Kim [29]. The spectra were normalized by dividing the intensity of each individual band in the 1800-600 cm -1 region by the average intensity in that region, and subsequent principal component analysis (PCA) was performed on the 3000-1200 cm -1 IR region with mean centering (MC), multiplicative scatter correction (MSC), and Savitzky-Golay first-derivative (13 points) spectral pretreatment, using the leave-one-out cross-validation method.

Pyrolysis-molecular beam mass spectrometry lignin analysis

Five cotton bolls were randomly selected from each replicate sample of the cultivated G. arboreum A 2 -100 and G. hirsutum TM-1 as well as wild G. raimondii D 5 -31. Two replicate samples of the dried cotton fibers of G. raimondii D 5 -31, G. arboreum A 2 -100, and G. hirsutum TM-1 were ground in a Wiley mill to 20 mesh. Average contents for each cotton variety were obtained by measuring the randomly selected cotton bolls from two biological replicates. Lignin analysis was performed by pyrolysis molecular beam mass spectrometry (py-MBMS) at the Complex Carbohydrate Research Center (CCRC), University of Georgia. Duplicated cotton samples along with control samples, including NIST 8492 (lignin content, 26.2%) and aspen standards, were pyrolyzed at 500˚C, and the volatile compounds were analyzed for lignin using a molecular beam mass spectrometer (Extrel Core Mass Spectrometers). The raw data were processed with UnscramblerX 10.1 software to obtain the principal components and raw lignin data. G.
arboreum A 2 -100 fibers, composed almost exclusively of cellulose (95.6~100%) and with the lowest lignin level among the cotton species, were used as the lignin baseline for all tested cotton samples.

Transcriptomic analyses

RNA-seq reads for the cotton materials shown in Table 1 were retrieved from the NCBI SRA database. The reads were aligned to the JGI G. raimondii reference genome [10] using gsnap, and reads mapping to annotated genes were counted using bedtools [30,31]. RNA-seq expression analysis was conducted following the PolyCat pipeline as previously described [24,32]. Briefly, all reads were aligned to the JGI G. raimondii reference genome, and the PolyCat software then assigned each categorizable read to either the A T or D T subgenome based on an index of homeoSNPs. Using the retrieved RNA-seq reads (Table 1 and S1 Table), RPKM (reads per kilobase of transcript per million mapped reads) values were determined, and genes specifically or differentially expressed in developing D 5 , Li 1 , or Li 2 fibers at the elongating PCW or wall-thickening SCW stage were identified and annotated based on the best BLAST hit against The Arabidopsis Information Resource version 10 (TAIR 10). GO enrichment analysis was performed using agriGO v2.0 Singular Enrichment Analysis [33].

Suberin and lignin depositions in the polyploid G. hirsutum Li 1 and Li 2 mutant fibers among the 2nd set of cotton materials

The unginned Li 1 and Li 2 fibers, as well as their NIL fibers, appear white (Fig 1B). In contrast, the ginned Li 1 and Li 2 fibers showed a brown color that was visually different from the white of their NIL fibers (Fig 5A). Consistently, the IR spectra of Li 1 and Li 2 fibers also differed from those of DP-5690 fibers (Fig 5B). Multiple IR spectral peaks assigned to suberin (1738, 2850, and 2920 cm -1 ) and lignin (1705-1720 cm -1 ) components were specifically detected in the Li 1 and Li 2 short fibers (Fig 5B).
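The spectral pretreatment pipeline described in the methods (band normalization, MSC, Savitzky-Golay first derivative, mean centering, then PCA) can be sketched with standard scientific Python tools; the spectra below are synthetic stand-ins for the ATR FT-IR measurements.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic stand-in for ATR FT-IR spectra: rows = samples, cols = wavenumbers.
spectra = rng.normal(1.0, 0.05, size=(12, 300)) + np.linspace(0, 1, 300)

# 1) Normalize each spectrum by its own mean band intensity.
spectra = spectra / spectra.mean(axis=1, keepdims=True)

# 2) Multiplicative scatter correction: regress each spectrum against the
#    mean spectrum and remove the offset and slope.
ref = spectra.mean(axis=0)
msc = np.empty_like(spectra)
for i, s in enumerate(spectra):
    slope, intercept = np.polyfit(ref, s, 1)
    msc[i] = (s - intercept) / slope

# 3) Savitzky-Golay first derivative with a 13-point window.
deriv = savgol_filter(msc, window_length=13, polyorder=2, deriv=1, axis=1)

# 4) Mean centering + PCA (sklearn centers the data internally).
pca = PCA(n_components=2)
scores = pca.fit_transform(deriv)
print(f"PC1 explained variance: {pca.explained_variance_ratio_[0]:.1%}")
```

The PC1 scores of real fiber spectra processed this way are what separate the short, suberin/lignin-rich fibers from the cellulose-rich long fibers in Fig 8A.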
In addition, a larger bulge area (1580~1640 cm -1 ) was detected in the IR spectra of the short mutant fibers compared with the NIL fibers. The bulge area may be composed of four close IR peaks (1588, 1606, 1624, and 1635 cm -1 ) that have been assigned to suberin fractions in other plants [37,41].

Transcriptomic profiles of developing cotton fibers

Classification of developmental stages of the 1st set of cotton fibers retrieved from the original RNA-seq analyses. As summarized in Table 1, RNA-seq data of developing G. raimondii, G. arboreum, and G. hirsutum fibers at 10, 20, or 28 DPA were available from a public database. Because these are distinct species grown in different environments [34-36], a closer examination of the RNA-seq data was needed to better align overall expression profiles with the fiber growth stages for more meaningful comparisons. The fiber developmental stages of the three cotton species used in the original research [34-36] were first classified by monitoring the transcript abundance and patterns of indicator genes, including fasciclin-like arabinogalactan genes [43] and expansins [44] that are specifically up-regulated at the PCW stage, as well as cellulose synthase (CesA) genes that are specifically up-regulated at the SCW stage [45]. Three fasciclin-like arabinogalactan genes and four expansin genes were commonly up-regulated in the 10 DPA fibers of all three cotton species (Table 2), suggesting that developing G. raimondii, G. arboreum, and G. hirsutum fibers at 10 DPA were all at the PCW stage.

Table 2. Classifications of cotton fiber developmental stages of G. raimondii, G. arboreum, and G. hirsutum. Ten indicator genes (Gorai IDs) are shown for each species according to genome type at 10, 20, or 28 DPA; the most up-regulated RPKM values are shown in bold. (Table columns: A 2 -10, A 2 -20, D 5 -10, D 5 -20, A T -10, A T -20, A T -28, D T -10, D T -20, D T -28.)

The numbers of expressed genes (EGs) at the PCW (18,333 EGs) and SCW (17,520 EGs) stages were also similar to those in the G.
hirsutum A T subgenome at the PCW (16,188 EGs) and SCW (17,472 EGs) stages. In this study, the focus was on identifying genes specifically expressed in developing G. raimondii fibers but not expressed (zero RPKM) in developing fibers of the other cotton species. Comparing the transcriptomic profiles of the two diploid cotton species identified specifically expressed genes (SEGs) in developing G. raimondii fibers at the PCW (2,663 SEGs) and SCW (3,193 SEGs) stages, as shown in Fig 6A. When the transcriptomic profiles of all three cotton species were compared, 1,385 and 1,520 SEGs were identified in developing G. raimondii fibers at the PCW and SCW stages, respectively (Fig 6A and S3 Table). Between the PCW and SCW stages of developing G. raimondii fibers, 297 SEGs overlapped (Fig 6B).

Transcriptomic analyses of the 2nd set consisting of G. hirsutum NILs differing in fiber length

The original RNA-seq analysis of the short fiber mutants (Li 1 and Li 2 ) was performed with total RNAs extracted from developing fibers at the PCW stage (8-12 DPA) grown in a greenhouse or in cotton fields [22,24]. To identify differentially expressed genes (DEGs), developing fibers from field-grown Li 1 and Li 2 plants were compared to their NIL, G. hirsutum DP-5690, using a 2-fold difference as the threshold. In the Li 1 mutant fibers, 4,043 genes were up-regulated and 2,536 genes were down-regulated (Fig 7A); in the Li 2 mutant fibers, 2,419 genes were up-regulated and 1,740 were down-regulated (Fig 7A). The identification of candidate genes producing the color pigments in the two mutant fibers focused on the 1,285 genes (S5 Table) that were commonly up-regulated in developing Li 1 and Li 2 fibers (Fig 7B). GO enrichment analysis of these 1,285 UGs identified six GO categories (Table 4). Two of them, transporter activity (GO:0005215) and cellular respiration (GO:0045333), were also identified in the original analysis [24] by MapMan ontology [51].
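The 2-fold DEG screen described above amounts to a simple fold-change filter over per-gene expression values. A minimal sketch with hypothetical RPKM values (the pseudocount `floor` is an assumption added to avoid division by zero, not part of the original pipeline):

```python
def classify_degs(mutant_rpkm, nil_rpkm, fold=2.0, floor=0.1):
    """Split genes into up-/down-regulated in the mutant relative to the
    NIL using a fold-change threshold (2-fold in the text)."""
    up, down = [], []
    for gene, m in mutant_rpkm.items():
        n = nil_rpkm.get(gene, 0.0)
        ratio = (m + floor) / (n + floor)  # pseudocount guards against n == 0
        if ratio >= fold:
            up.append(gene)
        elif ratio <= 1.0 / fold:
            down.append(gene)
    return up, down

# Hypothetical RPKM values for three genes in Li1 vs. DP-5690 fibers.
li1 = {"geneA": 40.0, "geneB": 5.0, "geneC": 10.0}
nil = {"geneA": 10.0, "geneB": 30.0, "geneC": 11.0}
up, down = classify_degs(li1, nil)
print("up:", up, "down:", down)  # up: ['geneA'] down: ['geneB']
```

Genes whose ratio falls between the two cutoffs (geneC here) are left unclassified, matching the idea that only genes past the 2-fold threshold count as DEGs.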
Among the four newly identified GO categories, ADP binding (GO:0043531) and protein kinase activity (GO:0004672), which comprise multiple nucleotide-binding leucine-rich repeat receptors (NLRs) and leucine-rich repeat receptor-like kinases (LRR-RLKs), are involved in plant development and stress responses in other plants [52]. The other two GO categories, tetrapyrrole binding (GO:0046906) and response to endogenous stimulus (GO:0009719), were also over-represented in wild diploid G. raimondii fibers (Tables 3 and 4).

Integration of the chemical phenotypes and transcriptomic profiles between the two sets of cotton materials differing in fiber lengths and cellulose contents

To examine the quantitative and statistical significance of the spectral features distinguishing the chemical components of the fiber samples in the 1st and 2nd sets, a principal component analysis (PCA) was performed on the spectral region (1200-3000 cm -1 ) containing the IR peak bands of suberin, lignin, and cellulose (Fig 8A). The analysis showed a dominant first principal component (PC1) accounting for 75.9% of the total variation and revealed a distinction in PC1 scores among the six tested samples (Fig 8A). The PC1 score increased in the order of G. hirsutum Li 2 < G. hirsutum Li 1 < G. raimondii D 5 -31 < G. hirsutum TM-1 ≈ G. hirsutum DP-5690 ≈ G. arboreum A 2 -100, so the three cultivated cottons clustered together with similar PC1 scores.

Table 3. GO enrichment analyses of specifically expressed genes at the PCW or SCW stage of developing G. raimondii (D 5 ) fibers. Lists and annotations of the genes are described in S3 Table.

Common characteristics of physical properties of cotton fibers from wild diploid G. raimondii and polyploid G. hirsutum Li 1 and Li 2 mutants

Physical properties of cultivated cotton fibers are generally assessed by a High Volume Instrument (HVI), which is defined by the International Cotton Advisory Committee as a standardized instrument for cotton fiber quality measurements [7].
Cotton fibers with a length less than 12.7 mm are classified as short fibers that reduce the quality of spun yarns. The cotton fibers produced by G. raimondii and the G. hirsutum Li 1 and Li 2 mutants were too short to be measured by HVI. Thus, we manually measured the maximum fiber lengths of the wet and relaxed cotton fibers from the chalazal end of the cottonseeds [27]. The maximum fiber length of G. raimondii D 5 -6 (11.7 mm) fell below this threshold. There are two different types of G. hirsutum fibers. Lint fibers differentiate from the ovule epidermis on the day of anthesis and grow to approximately 25~35 mm based on HVI measurements. In contrast, linters or fuzz differentiate from the ovule epidermis around 5 to 10 DPA and do not grow longer than 15 mm [68]. Wild diploid G. raimondii has often been described as a lintless, non-fibered, or fiberless species [12,69,70], and the full names of the Li 1 and Li 2 mutants also contain "lintless" [17,18]. However, the fiber initials of G. raimondii [8], the G. hirsutum Li 1 mutant [19], and the G. hirsutum Li 2 mutant [20] all differentiate on the day of anthesis. Thus, the short fibers produced by G. raimondii and the G. hirsutum Li 1 and Li 2 mutants can be classified as lint fibers according to the definition described by Lang [68]. In this study, we showed that the three short cottons of wild G. raimondii and the two G. hirsutum mutants produced fibers containing color pigments composed of lignin and suberin (Figs 4A and 5A). The green coloration of G. raimondii fibers was reported by Hutchinson and his colleagues in 1947 [69], but its color pigment was not further characterized. A recent study showed that NIR spectra of G. raimondii fibers were similar to those measured from naturally green-colored G. hirsutum fibers [71]. Despite extensive studies of the Li 1 and Li 2 mutant fibers, the brown color pigments of the short fiber mutants (Fig 5A) have gone unnoticed. The green color of cotton fibers fades to tan when they are exposed to light [72].
The light brown color of the short mutant fibers might have been overlooked because their NIL, DP-5690, produces white lint fibers.

Common chemical components among diploid G. raimondii and polyploid G. hirsutum mutants

Chemical analyses using the cellulose assay, ATR FT-IR spectroscopy, and mass spectrometry consistently showed suberin and lignin components in the three short fibers. The average cellulose contents of the short fibers were lower than those of the long fibers, consistent with previous spectral observations [71] and with the functional divergence of cellulose synthase orthologs between wild G. raimondii and cultivated G. arboreum [45]. The three short cottons of G. raimondii and the G. hirsutum Li 1 and Li 2 mutants demonstrated the signature IR spectral peaks of suberin and lignin (Figs 4B and 5B), and another type of short fiber mutant, li y , also showed the signature IR spectral peaks of suberin [73]. In naturally green-colored G. hirsutum, suberin layers were observed in the secondary cell wall of cotton fibers [72,74], and a major lignin precursor and its derivatives were deposited in the suberin layers [75]. Suberin and lignin can be produced from common precursors, i.e., phenolic components [76]. In contrast to suberin, which consists of both aliphatic and aromatic polymers, lignin is purely composed of poly-aromatic components [76]. Generally, lignin is derived from three phenylpropanoid monomers, the monolignols 4-coumaryl, coniferyl, and sinapyl alcohols, which produce the 4-hydroxyphenyl (H), guaiacyl (G), and syringyl (S) units in the polymer [77]. Our mass spectrometry lignin analysis showed a significantly greater content of S lignin in wild G. raimondii fibers (1.8%) than in the cultivated cotton fibers (0~0.6%). Recent studies suggest that lignin may play an important role in cotton fiber quality [78,79]. The integrative PCA of the two sets enabled classification of the six cotton fibers into two classes according to their PC1 scores (Fig 8A).
All three cultivated cottons shared similar positive PC1 scores without significant variation, whereas all three short cottons showed negative PC1 scores with significant and broad variation. During cotton fiber development, underdeveloped cotton fibers containing high levels of non-cellulosic components show negative PC1 scores [80]; as cellulose content increases during normal fiber development, PC1 scores also increase and become positive [81]. Notably, the pattern of the PC1 scores in Fig 8A is consistent with previous reports that PC1 scores increase with cellulose content during cotton fiber development [45]. The IR bulge area likely represents a macromolecular complex composed of suberized components whose IR signals can overlap. Interestingly, there were noticeable differences in the IR peak bands of the bulge area among G. hirsutum Li 1 (1623 cm -1 ), G. hirsutum Li 2 (1610 cm -1 ), and G. raimondii D 5 (1635 cm -1 ) (Figs 4B and 5B). These results, along with the significant variation in their PC1 scores, show that the three short cottons vary among themselves even though they share suberin and lignin components (Figs 4B, 5B and 8A).

Commonly up-regulated orthologs among diploid G. raimondii and polyploid G. hirsutum mutants

To test whether suberin and lignin genes were specifically up-regulated in developing G. raimondii fibers, we used the RNA-seq data generated from RNAs extracted from developing fibers of G. raimondii, G. arboreum, and G. hirsutum (Table 1) [34-36]. The original transcriptomic analyses of the short fiber mutants (Li 1 and Li 2 ) and their NIL DP-5690 were performed only with total RNAs extracted from developing fibers at the PCW stage [22,24]. Thus, we verified that suberin and lignin were specifically detected at the PCW stage of developing mutant fibers (S1 Fig). The short fiber phenotypes of G. raimondii [70,71] and the mutants [19,20,22,24] were consistent across various growing conditions. In this study, we used the JGI G.
raimondii D5 reference genome for analyzing transcriptomic profiles of the two sets, because the G. raimondii genome sequence shows high homology (>96%) with the coding sequences of the G. hirsutum At and Dt subgenomes. Thus, the D5 reference genome sequence has been used successfully by several groups for characterizing the Li1 and Li2 genomes [19,20,22,24,82]. Transcriptomic analysis of the 1st set identified genes involved in suberin and lignin biosynthesis that were specifically expressed in G. raimondii fibers (Fig 6, Table 3 and S3 Table). Among them, glycerol-3-phosphate acyltransferase 1 (GPAT), cytochrome P450 family genes, and laccases have been reported to be involved in suberin or lignin biosynthesis in other plants [46,47,53,54,76]. Among the four GO categories over-represented in G. raimondii fibers (Table 3), O-methyltransferase activity is essential for the biosynthesis of lignin, suberin and flavonoids [49,83]. Integrative analyses of the two sets identified 29 genes that were commonly up-regulated in the wild cotton species and the short fiber mutant fibers (Table 5). Of the 22 annotated genes, four (laccase, peroxidase, ABC-2 type transporter, and JAZ8) are involved in biosynthetic processes of lignin, suberin, and their derivatives [47,54-56,84]. The other 18 annotated genes are reported to be related to stress responses (Table 5). The mutations of an actin [25] and a putative Ran Binding Protein 1 [23] cause the short fiber phenotypes of the Li1 and Li2 mutants, respectively, and also up-regulate genes involved in stress responses, including lignin, suberin and flavonoid biosynthesis (Table 5). Lignin deposition has been suggested to reduce the extensibility of expanding fiber cell walls [79]. Suberin has been reported to be a major regulator of water and solute transport, and a pathogen barrier, in plant cell walls [76].
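The integrative step above, finding genes commonly up-regulated across the wild-species comparison and both mutant comparisons, amounts to a set intersection; a minimal sketch, with illustrative placeholder gene identifiers (not the actual 29 genes of Table 5):

```python
# Hypothetical up-regulated gene lists; identifiers are placeholders only.
up_in_wild = {"LAC4", "PER12", "GPAT1", "ABCG2", "JAZ8", "OMT1"}   # G. raimondii vs. cultivated
up_in_li1  = {"LAC4", "PER12", "ABCG2", "JAZ8", "HSP70"}           # Li1 vs. NIL DP-5690
up_in_li2  = {"LAC4", "PER12", "ABCG2", "JAZ8", "WRKY40"}          # Li2 vs. NIL DP-5690

# Genes commonly up-regulated in the wild species and both mutants
common = up_in_wild & up_in_li1 & up_in_li2
```

In the study itself this intersection yielded 29 genes; here the toy lists leave four shared identifiers.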
A recent functional study of Arabidopsis mutants altered in suberin deposition clearly showed reductions in the apoplastic transport of water and ions [85]. Hydrophobic suberin in the cotton fiber cell walls also negatively affects apoplastic transport activities in cotton fibers [72,74,86].

Conclusion

Here, we used both phenotypic and transcriptomic analyses to identify common mechanisms reducing fiber elongation in the short fibers of the G. hirsutum Li1 and Li2 mutants as well as wild G. raimondii. Chemical analyses identified a common deposition of suberin and lignin in the short fiber cell walls. The genes involved in suberin and lignin biosynthesis were also commonly up-regulated in the elongating cotton fibers of the three short cottons as compared with the cultivated, long-fibered G. arboreum and G. hirsutum. These results support the notion that suberin and lignin deposition may negatively affect the cotton fiber elongation process. They also provide insight into how suberin and lignin biosynthesis can affect fiber length and cellulose production in wild and cultivated cotton species.
New Onset Refractory Status Epilepticus in an Adult Patient with Autoimmune Encephalitis Responding to Vitamin B6 (Pyridoxine)

New onset refractory status epilepticus is a difficult-to-treat neurological emergency. We report here a 39-year-old male admitted to our hospital with new onset seizures presenting as status epilepticus. Clinical evaluation and extensive investigations revealed an underlying autoimmune-mediated etiology, with positivity for anti-NMDAR autoimmune encephalitis. His seizures could not be controlled, and he remained in super-refractory status epilepticus despite treatment with multiple anti-epileptic drugs at proper doses and the use of IV anesthetics, in addition to high-dose corticosteroid and IVIG courses for treating his underlying disease. The patient had no significant past medical history, no family history of a seizure disorder and no comorbidities. Remarkably, the patient's seizures responded successfully to treatment with vitamin B6 (Pyridoxine). He was discharged, after 117 days in hospital, with almost complete neurologic recovery and without recurrence of seizures after achieving control with Pyridoxine. We highlight here the possible role of Pyridoxine in terminating refractory seizures in patients with autoimmune encephalitis and in ensuring neuroprotection while the underlying etiology is addressed with immune-modulating therapy. Moreover, we encourage clinicians to consider Pyridoxine deficiency as a potential etiology of new onset refractory status epilepticus and seizures, even in adult patients who suffer from other underlying diseases which can cause seizures.

Introduction

Status Epilepticus (SE) is a medical neuro-emergency describing persistent or recurring seizures without a return to baseline mental status, and it is associated with significant morbidity and mortality [1].
Although the great majority of SE patients have an underlying brain condition causing their status seizures (such as a brain tumor, brain infection, brain trauma, or stroke), SE can occur in the context of epilepsy, usually precipitated by drug withdrawal, intercurrent illness or metabolic disturbance, or by progression of the underlying disease [1,2]. Around 10% of epileptic patients may present with SE as a first seizure [2]. It is critically important to rapidly institute care that simultaneously stabilizes the patient medically, identifies and manages any precipitating conditions, and terminates seizures. Seizure management involves emergent treatment with benzodiazepines followed by urgent therapy with adequate doses of anti-seizure medications. If seizures persist, then Refractory Status Epilepticus (RSE) is diagnosed, and the management options include additional anti-seizure medications or infusions of Midazolam or pentobarbital [3]. Definitions of RSE have varied in seizure duration (no time criteria, 30 minutes, one hour, or two hours) and/or lack of response to different numbers (two or three) and types of anticonvulsants; however, the Neurocritical Care Society guideline states that patients who continue to experience either clinical or electrographic seizures after receiving adequate doses of an initial benzodiazepine followed by a second acceptable anticonvulsant will be considered refractory, and the guidelines recommend rapid advancement to pharmacologic coma induction rather than sequential trials of many urgent-control anticonvulsants [3,4]. In some patients, RSE may last for weeks to months despite treatment with adequate levels of multiple anticonvulsant medications. This prolonged course of RSE is referred to as malignant RSE [4] or super-refractory SE (SRSE) [5,6]. Malignant RSE is usually associated with an autoimmune, infectious or inflammatory etiology, younger patient age, and previous good health, and it carries high rates of morbidity and mortality [4,7,8]. New onset RSE (NORSE) is a rare but challenging condition, characterized by the occurrence of a prolonged period of refractory seizures with no readily identifiable cause in otherwise healthy individuals [9]. Although autoimmune encephalitis is reported to be the most commonly identified cause of new onset refractory status epilepticus, around half of patients remain cryptogenic [9-11].
A well-known cause of intractable and refractory seizures is Pyridoxine (vitamin B6) deficiency; however, most of the reported cases of this condition are in infants and neonates [12,13]. This article reports the case of a young man who suffered from new onset RSE, was diagnosed with anti-NMDAR encephalitis, and whose seizures were successfully treated with vitamin B6.

The Case Report

A 39-year-old expatriate male presented with a 3-day history of dizziness, fatigue, mild fever and slight behavioral change. The patient started to have repeated attacks of generalized tonic-clonic fits, with uprolling eyes and foaming, without regaining consciousness between attacks. He was brought to the emergency department at our hospital in a state of generalized tonic-clonic seizures, where he received diazepam 10 mg IV, repeated after 20 minutes because of persistent fitting, in addition to 1000 mg IV Phenytoin and Midazolam (0.2 mg/kg IV bolus followed by 0.5 mg/kg/h infusion). As the clinical fits persisted, the patient was shifted to the ICU and intubated for airway and neurologic protection; Phenobarbitone 200 mg IV was given at a rate of 100 mg/min, and sedation with a Midazolam infusion was up-titrated to 20 mg/h. Electroencephalography (EEG) performed at admission in the emergency department initially showed status patterns of continuous generalized flattening of the normal background rhythms, followed by repeated spells of generalized low-voltage fast activity and polyspikes that increased in amplitude and decreased in frequency until becoming obscured by muscle and movement artifacts. These patterns changed, as the seizure clinically moved into the clonic phase, to a checkerboard type of muscle artifact corresponding to the rhythmic jerking movements observed clinically (all four limbs and right-sided facial twitching). The EEG further showed diffuse suppression of cerebral activity during breaks between seizures.
Subsequently, in the ICU and after sedation, continuous electroencephalogram monitoring showed breakthrough seizures, in spite of pentobarbital-induced burst suppression, in the form of frequent generalized spikes, polyspikes and wave discharges (predominantly over the left hemisphere) and generalized Periodic Lateralized Epileptiform Discharges (PLEDs) synchronizing with the patient's jerking movements. Seizure treatment was initiated with an Anti-Epileptic Drug (AED) regimen of Levetiracetam and Valproic acid added to the Phenobarbitone and Phenytoin already in use; however, events of facial twitching and limb convulsions were observed clinically, and epileptic discharge activity remained evident on EEG. After 10 days of intubation, the patient underwent a tracheostomy due to prolonged mechanical ventilation requirements. The patient was clinically evaluated and extensively investigated. Given the suspicion of meningoencephalitis of infectious etiology, based on history and presentation, the patient was treated empirically with vancomycin, ceftriaxone, and acyclovir. A lumbar puncture was done, and CSF analysis showed clear CSF with pleocytosis: a WBC count of 64 (normal 0) with 90% lymphocytes and only 2 RBCs, elevated protein of 94 mg/dl (normal 25-45 mg/dl) and a normal glucose level of 4.43 mmol/L (blood glucose 6.2 mmol/L). The India-ink stain for Cryptococcus and the QuantiFERON test for TB were negative; in addition, the CSF culture and viral PCR testing (for HSV, adenovirus, enterovirus, Varicella and Mumps) were all negative. Based on the results of the lumbar puncture and cerebrospinal fluid analysis, acyclovir and the antibiotics were discontinued. The patient's blood tests showed a hemoglobin level of 11.4 g/dl, a total WBC count of 19,900/cmm, normal platelets of 345,000/cmm and normal remaining CBC indices.
The C-reactive protein (CRP) was high at 267.6 mg/L (normal 0-5 mg/L), with normal random blood sugar (RBS), liver and renal function tests (LFT & RFT) and a normal Thyroid-Stimulating Hormone (TSH) level. Serum electrolytes were all normal except serum magnesium, which was low at 0.49 mmol/L (normal 0.66-1.07 mmol/L), so oral supplements were given and the level normalized (0.72 mmol/L). The patient's blood cultures were negative, as were tests for HIV 1 and 2 and the ELISA test for syphilis. Subsequently, brain MRI was done and showed variably distributed parenchymal T2/FLAIR hyperintensities in the hippocampal region, parahippocampal gyrus and amygdala, with subtle involvement of the right insular cortex and bilateral medial temporal cortical swelling, consistent with encephalitis. None of the lesions enhanced with gadolinium contrast. With the suspicion of autoimmune encephalitis, based on the clinical history, CSF and MRI findings, CSF was sent for anti-N-methyl-D-aspartate receptor (NMDAR) antibodies, and the results came back positive, confirming the diagnosis of autoimmune anti-NMDAR encephalitis. No oligoclonal bands were detected in the CSF. Further investigations, including paraneoplastic and vasculitis panels, were performed and were negative. He was also investigated with pelvic ultrasonography and CT scans of the chest, abdomen and pelvis, which excluded teratoma and other tumors. In addition, his serum tests for antiphospholipid syndrome, lupus anticoagulant, a limbic encephalitis screen and anti-neuronal antibodies were all negative. Based on the diagnosis, the patient received two courses of Methylprednisolone IV infusion, 1 g/day for 5 days, two weeks apart, yet the seizures continued. Therefore, an IVIG course of 2.5 g/kg divided over 5 days was also given.
However, the patient continued to have generalized seizures, and a Propofol infusion was started at 2 mg/kg/h and up-titrated, with intermittent boluses, to achieve and maintain burst suppression on EEG. During the following days, maximum doses of the AEDs were reached (Phenobarbitone was escalated up to 500 mg/day IV; Phenytoin was given as a 1000 mg load and then adjusted to 400 mg IV daily according to its serum level; Valproic acid was given at 400 mg bid IV and escalated up to 1000 mg bid; and Levetiracetam was given at 1500 mg bid and raised to 2000 mg bid), and the addition of Lamotrigine, up to 250 mg bid, and then Topiramate, slowly increased up to 200 mg bid, was also tried. However, the patient continued to experience breakthrough seizures, evident on EEG and on clinical exam as facial twitching and rhythmic limb movements whenever sedation was decreased, consistent with SRSE. Thus, IV Ketamine, at a loading dose of 1.5 mg/kg followed by a 2.75 mg/kg/h continuous infusion, was subsequently added. Plasma exchange (plasmapheresis) therapy was also offered, but the patient's sponsor deferred the treatment. On day 68 in the ICU, a trial of Pyridoxine 50 mg tid, given orally through the nasogastric tube, was started. After 30 hours, the patient's clinical seizures had resolved, and the EEG ictal discharges ceased. Throughout the rest of the admission, the patient was gradually and successfully extubated and weaned off the ventilator; we slowly tapered the Phenytoin, then the Phenobarbitone, and the patient had no more fits. A challenge test was done by stopping the Pyridoxine after 20 days; fits recurred after 24 hours as complex partial motor seizures (focal motor seizures with impaired awareness) affecting the face and the right arm with loss of consciousness, so he was reintubated and ventilated, and Pyridoxine was restarted.
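The weight-based dosing described above (mg/kg boluses, mg/kg/h infusions, and an IVIG course split over 5 days) reduces to simple arithmetic; a hypothetical sketch, assuming a 70 kg patient (the report does not state the body weight):

```python
# Hypothetical helpers for the weight-based dosing arithmetic in the report.
# The 70 kg body weight is an illustrative assumption, not from the case.

def bolus_and_infusion(weight_kg, load_mg_per_kg, rate_mg_per_kg_h):
    """Return (loading bolus in mg, infusion rate in mg/h)."""
    return weight_kg * load_mg_per_kg, weight_kg * rate_mg_per_kg_h

def ivig_daily_dose_g(weight_kg, total_g_per_kg=2.5, days=5):
    """Total IVIG course divided equally across the treatment days."""
    return weight_kg * total_g_per_kg / days

# Ketamine per the report: 1.5 mg/kg load, then 2.75 mg/kg/h infusion
load_mg, rate_mg_h = bolus_and_infusion(70, 1.5, 2.75)
# For 70 kg: 105 mg bolus and 192.5 mg/h; IVIG 2.5 g/kg over 5 days = 35 g/day
```

This is illustrative arithmetic only, not a clinical dosing recommendation.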
Seizures subsided after 12 hours, and the patient was taken off the ventilator and extubated after 2 days. Our patient started to recover clinically, was able to tolerate eating meals by mouth, started to walk alone with some difficulty and began a physiotherapy and rehabilitation program to return to his baseline functional status. Most AEDs were tapered and stopped gradually; he was discharged, after spending a total of 117 days in hospital, on Levetiracetam (1500 mg bid) in good condition. We discussed with the patient the high risk of relapse of his autoimmune anti-NMDAR encephalitis and the need for Rituximab treatment, but he was reluctant to undergo a 6-month cycle of therapy and wanted to travel back home. However, we kept him under frequent follow-up, and he remained stable without any relapse.

Discussion

New-onset Refractory Status Epilepticus (NORSE) is a life-threatening neurological emergency which occurs when an otherwise healthy patient presents with SE, the cause of the SE cannot be identified initially, and the SE cannot be controlled with standard anti-seizure medicines (refractory) [9,14]. Although some cases of NORSE have been reported to recover completely [11-15], it can cause significant brain damage, and it is reported that between 20 and 40% of patients do not survive [14-16]. Lifelong epilepsy, as well as physical and mental disabilities, has also been reported in most surviving cases [11,14]; therefore, treatment of NORSE should be initiated rapidly and effectively to terminate seizures and to target the underlying cause if found. More extensive testing should be performed if the clinical findings and the initial investigations fail to find an underlying cause. Tests have to be performed to rule out other known causes of new-onset refractory SE, such as inflammatory and autoimmune diseases, genetic disorders, metabolic disorders, and less common viral infections [11,17,18].
Autoimmune encephalitis, either non-paraneoplastic or paraneoplastic, is the most commonly reported cause of NORSE [4,19-22], and anti-NMDAR encephalitis cases presenting as NORSE have been reported [23,24], bearing in mind that most published cases of NORSE predate the discovery of anti-NMDAR antibodies [11,25]. Here, we present a case of NORSE with an underlying non-paraneoplastic autoimmune encephalitis etiology (diagnosed by elevated protein in the CSF with pleocytosis, positive CSF anti-NMDAR antibodies and brain MRI findings) in a previously healthy young male. Although general expert consensus in the literature recommends approaching NORSE with pharmacologically induced coma and continuous infusion of IV anesthetic agents, to suppress brain activity and preserve normal brain physiology [26], our patient did not respond, and his NORSE became super-refractory. He continued seizing despite all of the above management, in addition to benzodiazepines and various AEDs, and despite treatment with Methylprednisolone and IVIG for his autoimmune encephalitis. Our patient's seizures stopped, and he ultimately made an almost complete neurological recovery, after administration of vitamin B6 (Pyridoxine). Pyridoxine-dependent seizures are usually related to a rare autosomal recessive mutation in the ALDH7A1 gene; they are therefore generally considered in neonates with seizures, although there are also reports of older patients, including infants, children and even a few adults, with SE controlled by Pyridoxine [27-30]. The diagnosis of pyridoxine-responsive seizures is made when administration of Pyridoxine (usually 100 mg intravenously, given for one to five doses) terminates seizures, typically within hours of administration [31]. Moreover, Pyridoxine (vitamin B6) deficiency is a well-known cause of refractory SE; however, most of the reported cases have been in infants and neonates [12,13].
Few adult cases of pyridoxine deficiency-related RSE have been reported in the literature [32,33], and the majority were associated with nutritional deficiencies and comorbidities [34-37] or with pregnancy [38]. Vitamin B6 deficiency in adults is rare and may result from dietary deficiency (especially in the elderly and in alcoholics); it also occurs in patients with liver disease, chronic kidney disease on dialysis, or rheumatoid arthritis, in women with type 1 diabetes, during pregnancy, and in patients with HIV, who have an increased risk of vitamin B6 deficiency despite adequate dietary intake [26,39,40]. Certain medications can also affect the availability of vitamin B6 in the body or interfere with its metabolism, such as anticonvulsants, corticosteroids, Isoniazid, Cycloserine, and Penicillamine [41]. In this particular case of NORSE, the patient was a young, completely healthy male without a history of alcohol or any medication intake, which makes this case unique and worth reporting, as his seizures were stopped only by Pyridoxine, despite extensive treatment and a prolonged ICU admission. Although the underlying etiology was autoimmune-mediated epilepsy, the addition of vitamin B6 achieved an excellent outcome, leading to almost complete neurological recovery.

Conclusion

New onset refractory status epilepticus is a serious condition that requires ICU admission, intubation, ventilation and trials of medications to stop the fits.
The finding that many cases of NORSE are of autoimmune origin suggests that these etiologies should be aggressively sought. The unusual outcome demonstrated in this case report raises a discussion of the optimal approach to and treatment of NORSE, where rapid and almost complete neurological recovery was achieved with the additional trial of Pyridoxine. We recommend that a trial of Pyridoxine be administered in cases of intractable/refractory status seizures, and it can be included in the status epilepticus treatment protocols of adult patients with refractory status, regardless of the underlying cause. In addition, the diagnosis of late-onset Pyridoxine-dependent epilepsy should be considered, especially in countries where consanguineous marriage is common.
From Vulnerability to Resilience: A Coping-Related Approach to Psychosis

Many of us may have to face stressful events during life. How we are affected by these events depends on our vulnerability limit and our coping mechanisms. Both vulnerability-stress models and cognitive-behavioral theories of psychosis consider biological, psychological, and social factors together as determinants of our vulnerability limit. This point of view enables us to handle the psychotic disorders as a continuity of normality. In addition, coping mechanisms have an important role in the maintenance and/or recovery of psychotic symptoms. Therefore, the objective of this chapter is to summarize coping-related explanations that facilitate understanding the symptomatology of psychosis and defining adaptive ways to challenge it.

Introduction

In the beginning, the common idea was that psychosis is completely different from the other disorders. But this idea only increased stigmatization and labeling. As a result, severe mental illnesses like psychosis and schizophrenia were categorized as "disorders which are untreatable with psychological methods." Today, models suggesting the existence of a continuity between normal beliefs, anomalous experiences, and psychotic symptoms are accepted [1]. It is well known that healthy people may also experience mild psychotic symptoms, like delusions of being watched or talked about, or auditory and visual hallucinations, as a result of stress, drugs, trauma, and sleep deprivation [2,3]. These kinds of thoughts and perceptions are called psychotic-like experiences, to the extent that they do not necessitate any support or treatment [3-5]. In the community, one in four people reports at least one psychotic-like experience [3]. The rate of psychotic experiences that lead to seeking treatment ranges from 3 to 8% [2,3,6].
The persons who are confronted with anomalous experiences and do not need to seek help are generally the ones who do not over-evaluate these kinds of experiences. On the other hand, the persons who eventually develop psychosis are more anxious about and more preoccupied with their beliefs and experiences. The person searches for a meaning for these anomalous experiences, and the process of coping with severe anxiety leads to delusions and voices [7,8]. In addition, maladaptive coping strategies such as avoidance or safety behaviors play a particularly important role in the maintenance of psychotic symptoms. In this chapter, we initially review the vulnerability-stress models and the other cognitive-behavioral explanations of psychosis. These explanations will be referred to as "coping-related explanations" in the text, because they often emphasize the process of coping with the anomalous experience, or the interactions between internal (e.g., deprivation in the self-monitoring process) and external (e.g., environment, trauma) factors. With the help of these explanations, we try to understand the development of psychotic symptoms as a continuity of normality. Then, we handle the role of maladaptive coping strategies in the maintenance of psychotic experiences. Patients' relatives' coping strategies will also be taken into consideration, due to their role in the maintenance of psychosis. We finally address the importance of developing and enhancing adaptive coping strategies and changing irrational thinking in order to challenge psychosis. We also emphasize the role of social support at every stage of psychosis.

From vulnerability to resilience

We can conceptualize both the vulnerability and resilience terms with the help of similar explanations or factors. In other words, the factors that enhance or reduce resilience are similar. Resilience means the ability to protect one's mental health.
The sources of resilience may be psychological (personal traits, interpretation of events, etc.), biological (brain structure, genetic factors), or environmental (family interactions, community factors, etc.). Thanks to these adequate sources, the individual can cope with stressful events. On the other hand, a lack of these sources makes the person more vulnerable in the struggle of life. In addition, the sources of resilience can be weakened by several factors (stressful life events, deprivation in brain structure, misinterpretations of events, etc.); thus, even a resilient person may become more vulnerable and develop a mental illness. The terms vulnerability and resilience should be thought of as a continuum; it is therefore possible both to proceed from vulnerability to resilience and to regress from resilience to vulnerability.

Coping-related explanations for psychosis

Coping-related explanations for psychosis include the vulnerability-stress model of psychosis and several cognitive-behavioral explanations. These explanations often emphasize the similarities between normal, anomalous, and psychotic experiences. With the aim of evaluating the psychotic symptoms on a continuum, we look through these explanations separately.

Vulnerability-stress model of psychosis

The vulnerability-stress model integrates the overall explanations (biological, psychological, and social factors) to explain the structure of psychosis [1,9-14]. The vulnerability to severe illnesses can arise from genetic predisposition, birth trauma, brain injury, viruses, and early childhood traumas like physical and interpersonal deprivations [1]. It can be said that a person who has been influenced by one or more of these factors is more vulnerable to developing a mental illness than others who do not have such a past. But vulnerability only defines the possibility of developing a psychiatric illness when facing stress.
We all have different psychological structures and social environments, and accordingly, the stress level that each of us can endure is different. Some of us have significant heritability for the psychotic disorders and others do not [15]. For instance, a family history of psychosis can indicate high vulnerability. The more vulnerable a person is, the less stress is required for the occurrence of psychosis. According to Zubin and Spring's concept of vulnerability-stress diathesis, as long as the stress stays below the threshold of vulnerability, the individual can cope with events, but if the stress surpasses this limit, he/she can develop a psychotic episode [16].

Beck's theory of delusions

The use of cognitive-behavioral theory (CBT) for psychosis originated from Beck's theory of emotional disorders [15,17]. Nearly 60 years ago, Beck started to investigate the delusional system of a paranoid patient who believed that he was being watched by the members of a military unit working on behalf of the FBI. At the end of a 30-session treatment process, the patient recognized that his delusions were related to his own beliefs (e.g., "I am responsible for my daddy's unfavorable behaviors" and "I'm supposed to be punished due to my weaknesses") and to guilt impressed at a schematic level [14,17]. Thus, cognitive therapy was first shown to be helpful for the treatment of psychotic patients [17-19]. This success was then supported by another case study [17]. Hole et al. [20] defined four dimensions for measuring delusions as a result of their hour-long interviews with delusional inpatients: conviction, accommodation (the degree to which a delusion could be modified by external events), pervasiveness (the percentage of the day spent ruminating about delusions), and encapsulation (the extent to which a decrease in pervasiveness could occur without any decrease in conviction).
They concluded that delusions may function like other beliefs and may differ from them only quantitatively, in how far they can be influenced by external events [16,20]. In his subsequent studies, Beck stated that psychotic patients (particularly paranoid ones) concentrate especially on monitoring external, including social, sources for the purpose of recognizing potential danger. Because they are alert for potential danger all the time, they misinterpret threat when there is none, and they suspect hostility when there is none. This can be described as an externalizing bias: the attribution of difficulties or internal events to external stimuli. They also have an internal bias: the conviction that the attitudes and feelings of others toward them cause events. He also described the cognitive distortions of schizophrenia, emphasizing that the self-referential or persecutory content of patients' thoughts often causes anxiety, and sometimes sadness or depression. These distortions include catastrophizing, thinking out of context (with the components of selective abstraction, overgeneralization, dichotomous thinking, and jumping to conclusions), inadequate cognitive processing, and categorical thinking [17]. Beck's cognitive model suggests that genetic and experiential factors interact with distorted internal representations (patients' negative appraisals, such as "me vs. them"), which comprise the physical and cognitive vulnerability to psychosis. These representations are important factors that make the patient vulnerable to mental illness. Under acute and prolonged stress, these negative representations start to affect the information-processing system and inhibit the patient's ability for reality testing [21].

The neurocognitive explanations of psychosis

According to the Frith model, which explains the cognitive component of schizophrenia, there is an impairment in the central self-monitoring process of schizophrenic patients.
As a result, they cannot differentiate events arising from their own actions from external events, and so attribute internal events to external causes [1,16,[21][22][23][24][25]. Patients with schizophrenia also lack awareness of intended actions; this impairment may affect the sense of will, so that they become detached from their own thoughts and actions [22]. Auditory hallucinations in schizophrenia are thought to arise from the patient's own inner speech [22]. When the brains of people who reported hearing voices were scanned, many of the same brain areas were active during both auditory hallucinations and inner speech [24,26]. Psychotic patients have also reported hearing someone speak while they themselves were speaking; thus, they tend to attribute their own voice to another person [22]. These processes result in the attribution of internal voices or thoughts to external voices, and of one's own movement and speech to external causes. These misinterpretations culminate in auditory hallucinations or thought blocking, and in passivity phenomena or delusions of control, respectively [1,16,[21][22][23][24][25].

A heuristic model

In a heuristic model of the determinants of positive psychotic symptoms, a psychotic experience is proposed as a response to a combination of internal factors (inherent biological, e.g., genetic heritability; acquired biological, e.g., birth trauma; inherent psychological, e.g., cognitive deficits; acquired psychological, e.g., cognitive biases and schemata) and external factors (stressors). These factors are held to operate via a mediating pathway (e.g., a dysfunction in the arousal system and its regulation) [27]. Consequently, the psychotic experience or persistent positive psychotic symptoms (hallucinations/delusions) can occur. The experience of hallucinations and delusions has both short-term and long-term consequences.
Short-term consequences may be emotional (anxiety, fear, anger), behavioral (belief-congruent behavior, testing the interpretations), cognitive (misinterpretation, attention to perceived threat, selective attribution), or coping-related, whereas long-term consequences include social withdrawal and isolation, loneliness, decreasing opportunities for reward, and social skill deficits. These consequences also contribute to the maintenance of the illness [28].

Morrison's explanations for psychosis

Morrison's model of psychosis resembles Clark's cognitive model of panic. According to this model, auditory hallucinations are intrusive thoughts that are externally attributed. Such intrusive thoughts can be accepted as normal, but the person focuses his attention on these intrusions, and distress occurs when he misinterprets them as dangerous. Thus it is not the intrusion itself but its interpretation that causes distress and disability [29,30]. Interpretation is the search for the meaning of the experience, and that meaning depends on the hearer's appraisal: whether he says, "the Devil is talking to me" or "this is a strange sensation; I think I am too tired" [16,31]. The first interpretation may increase the person's distress and anxiety level and lead to other negative emotional consequences. The person then tries to cope with the symptoms through maladaptive responses such as avoidance. These emotional consequences and maladaptive responses maintain the symptoms [29,30]. In fact, these are all internal experiences; moreover, the cycle between intrusions, their interpretation as voices, mood, bodily sensations, and behaviors is consistent with the idea that internal experiences are attributed to external sources [29,32,33].

The model of Garety and colleagues for psychosis

This model combines the key factors in the development and maintenance of psychosis.
The principal factors are vulnerability, stress, social environment, emotional changes, cognitive dysfunction, and appraisal of the experience as external. The authors emphasize the continuity between psychotic and nonpsychotic experiences. They suggest that bio-psycho-social vulnerability (including cognitive and emotional vulnerability) can be triggered by the social environment, including stress and trauma, and that the interaction of vulnerability and social environment may produce emotional changes such as depression, anxiety, or low self-esteem. They consider cognitive dysfunction particularly important because it can lead to anomalous experiences. Emotional changes and cognitive dysfunctions, including reasoning biases, lead the person to appraise the experience as external. This appraisal is influenced by reasoning and attributional biases, dysfunctional schemas of self and world, isolation, and adverse environments. Through this cycle, positive symptoms may emerge, and they are maintained by cognitive processes (reasoning and attributions, dysfunctional schemas), emotional processes, and the appraisal of psychosis [34,35].

The classification of Kingdon and Turkington for psychosis

Kingdon and Turkington classify psychosis by gradual or acute onset. They subdivide gradual onset into sensitivity psychosis (predominant negative symptoms with onset in adolescence) and trauma-related psychosis (the patient has a trauma history, the symptoms are very distressing, and the content of the hallucinations concerns the abuse).
If the onset is acute, there are two possibilities: anxiety psychosis (in response to a distressing life event, the patient becomes socially isolated and attributes the distress to an unrelated situation that becomes part of a delusional system, with or without hallucinations) or drug-related psychosis (the first attack begins with drug use, and subsequent attacks involve persisting psychotic symptoms of the same nature and content as the initial episode). Understanding the type of psychosis is important for establishing engagement with the patient and for using a normalizing rationale to explain the symptoms [15].

The social rank theory of auditory hallucinations

Social rank theory was originally applied to depression and anxiety disorders, but given the parallel mechanisms of "attack the weaker and submit to the stronger," it has been adapted to hallucinations. Unlike other cognitive theories, this theory considers the patient's relationship with the voices as well as with his significant others. The approach uses the ABC framework. The ABC model for auditory hallucinations in psychosis can be summarized as follows: A, the hallucinations (activating event); B, beliefs, including automatic thoughts, assumptions, and images about the activating event (which need not be a direct interpretation of the hallucination's content); and C, the emotional and behavioral consequences (resisting, cooperating, attaching, or remaining unresponsive). Activating events fall into three types: symptoms and internal events (e.g., hallucinations), interactions with significant others such as parents or siblings, and significant life events (diagnosis, hospitalization, social stigma). According to this theory, hallucinations reflect a core self-perception of low social rank, so the person perceives himself/herself as under the control of parents, peers, or the community.
The emotional consequences of these appraisals can include shame, humiliation, and depression. In this context, distress and behavior relate to the patient's perceived relationship with the voices and the appraisal of their power and omnipotence; as a result, the voice is evaluated as benevolent or malevolent [33,[36][37][38]. The explanations above help in understanding how psychotic episodes arise; the following sections address the maintenance of psychotic symptoms.

The function of coping strategies for psychosis

Coping is a personal resource that an individual already possesses and uses when trying to deal with an unpleasant stimulus. It comprises behavioral actions as well as cognitive processes. As noted above, our vulnerability limit determines the level of stress we can handle, so coping is closely related to the concepts of vulnerability and resilience. Resilience protects the individual from the effects of stress and is thus functional and adaptive, but coping responses to stress may be adaptive or maladaptive. In fact, psychotic patients often use maladaptive coping strategies, and cognitive theories emphasize the role of these strategies in the maintenance of psychosis [39]. Because of their importance, this section covers the coping strategies that psychotic patients already use. In addition, high expressed emotion is recognized as an important factor in the maintenance of psychosis; since the coping strategies of patients' relatives determine the level and style of expressed emotion, this topic is also addressed here.

The psychotic patients' own coping strategies

Three types of psychological reaction to psychosis have been suggested: denial and lack of awareness; passive acceptance of the patient role; and acceptance of the psychotic illness together with compliance to treatment.
Neither the first nor the second is functional, because both inhibit treatment. The person who lacks awareness refuses help because he/she does not believe that he/she has an illness, and may gradually become more disorganized and dangerous to himself/herself and others. The person who passively accepts the sick role probably stops trying and eventually loses his/her self-esteem; he/she may also develop other clinical problems such as depression and suicidal ideation. By contrast, the person who accepts the illness believes that he/she can learn to cope with the symptoms, takes medication, is motivated for psychotherapy, and can adopt the sick role when necessary [1]. From patients' descriptions of coping with auditory hallucinations, three phases have been identified: the startling phase, in which patients initially feel fear, anxiety, and a desire to escape, then investigate the meaning of the voices and no longer try to escape; the organization phase, in which many patients try to communicate with the voices; and the stabilization phase, in which they begin to accept the voices as part of themselves [40]. Research on coping and psychosis shows that patients generally use maladaptive coping strategies, for example, excessive avoidance and safety behaviors [41,42]. Patients with delusions, especially persecutory delusions, often use safety behaviors to reduce the perceived risk of danger. They may perform rituals such as making hand movements or praying to ward off evil spirits, or lock themselves in the house and hide under the bed to escape from the Mafia. These safety behaviors play an important role in the maintenance of delusions [18]. Some studies indicate that patients' own methods of coping with psychotic symptoms include both adaptive and maladaptive strategies, usually with cognitive, behavioral, physical, social, or medical components.
The investigation of Falloon and Talbot [43] revealed three groups of strategies used to cope with auditory hallucinations: behavior change (e.g., speaking with people), efforts to lower psychological arousal (e.g., relaxation, listening to music to reduce symptoms), and cognitive coping methods (e.g., listening attentively to the voices and accepting their guidance to reduce distress, or ignoring them). They found no differences between women's and men's coping behaviors [15,43]. Carr [44] assessed 200 patients and grouped 310 responses in a manner similar to Falloon and Talbot's study [43], identifying five coping subgroups. Eighty-three percent of patients used behavior control; 38% used these coping behaviors for delusions and 43% for hallucinations. Behavior control included distraction, involving passive diversion (e.g., listening to music, watching TV) or active diversion (e.g., writing, reading, playing a musical instrument). Auditory input through headphones has also been found effective in coping with hallucinations [45]. Other types of behavior control were physical change involving body movement (passive, e.g., relaxation; or active, e.g., walking, swimming), indulgence (e.g., eating, drinking, smoking), and nonspecific strategies ("I will try to do something different"). The second important subgroup was socialization, via talking with family or friends, although social withdrawal and avoidance were also reported; Tarrier likewise found that such avoidant behaviors were used as a conscious coping method [46]. The third subgroup, cognitive control, has three components: suppression of unwanted thoughts and perceptions ("I ignore the delusions," "I try not to think about the voices"), shifted attention (redirecting attention to neutral ideas), and problem solving.
The remaining subgroups, medical care (using or changing medication, going to the hospital, visiting a mental health specialist) and symptomatic behaviors (telling the voices to stop talking, shouting at them to leave him/her alone, behaving aggressively), were rarely used. Patients with delusions did not prefer passive coping strategies; they favored active ones such as problem solving [16,44]. Cohen and Berk [47] evaluated the coping styles of 86 patients to determine which strategies were used for which symptoms; they found that patients used "fighting back" and "medical strategies" to cope with psychotic symptoms and "prayer" for schizophrenic thoughts [47]. Miller and colleagues [48] reported that 52% of the patients they interviewed described positive effects of auditory hallucinations (relaxation; companionship; financial, e.g., protective income; self-concept, e.g., feeling attractive; reactions of others, e.g., people being nicer; performance, e.g., needing to hear voices to maintain self-care; relationships, e.g., needing to hear voices to feel close to people; sexual, e.g., increased desire), whereas 94% described adverse effects (financial, e.g., incapacity to work; emotional distress; performance, e.g., impaired functioning; reactions of others, e.g., stigmatization, feeling endangered or threatened; relationships; self-concept, e.g., feeling ugly; loneliness; sexual, e.g., decreased desire). They also suggested that many of the patients believed the voices they heard had both adaptive and maladaptive functions, although they would prefer not to hear voices at all [16,48].
A more recent study of the effect of patients' own coping strategies on psychotic symptoms classified distraction techniques (relaxation, watching TV, conversation with others, listening to music or the radio, body movement, hobbies, and thinking of other things) as passive coping, and counteraction strategies (echoing the voices, retorting to or dissuading the voices, falling asleep, changing posture, and making noises) as active coping; patients did not prefer distraction strategies against hallucinations with delusional features [49]. Nelson and colleagues [50] examined the effects of earplug use, subvocal counting (e.g., 1,2,3… 1,2,3), and listening to music through a portable cassette player on persistent auditory hallucinations. Subvocal counting was the most effective technique, followed by earplugs and listening to music; these methods worked mainly by shifting attention and reducing anxiety [50]. Ozcan and colleagues [51] investigated the coping behaviors of patients with schizophrenia and found that most patients used at least one method. The methods could be categorized as religious activities (85%), cognitive control (20%), changing the dose of the neuroleptic drug or the drug itself (20%), enhancing social activities (18%), symptomatic behaviors (10%), and listening to the radio, watching TV, walking around, and substance use (tea, smoking, alcohol).

The coping strategies of patients' relatives

Relatives' strategies for coping with psychosis are directly related to "expressed emotion." Expressed emotion is a multidimensional measure of the family's emotional atmosphere, capturing the extent to which relatives exhibit critical, hostile, and emotionally overinvolved attitudes toward a family member with mental illness [52].
Relatives' expressed emotion is especially important in the maintenance of psychosis. There are few studies in this field, but they usually emphasize the relation between perceived stress, coping, and expressed emotion. A recent study showed that relatives of inpatients with first-episode psychosis experienced high levels of perceived stress, poor social support, and moderate to severe levels of expressed emotion, and that relatives' perceived stress significantly predicted their expressed emotion [53]. A study analyzing the mechanisms underlying low expressed emotion among psychotic patients' relatives revealed four core themes: witnessing the distress (relatives spent time worrying about whether their family member would commit suicide or self-harm); empathy through acceptance and understanding (they viewed the psychosis as something that could not have been prevented, tried to understand its cause, normalized the illness, had some idea of what mattered for recovery, and could recognize and describe how the family member might be feeling); a broad range of coping strategies to reduce distress (e.g., asking others for help, using humor, taking time out from stressful situations, and distraction by carrying on with work and normal routines); and realistic optimism about the future (they believed the illness would always be part of their family member's life but were able to modify their expectations) [54]. Another study found that coping through seeking emotional support, religion/spirituality, active coping, acceptance, and positive reframing was associated with less distress, whereas coping through self-blame was associated with higher distress scores [55]. Relatives' level of information about psychosis shapes their cognitive view of the illness.
These two factors have been found to be related to stress level, expressed emotion, and patients' symptom severity. Beliefs about symptoms, around which "the major attributes of illness representation are oriented," are one of the key factors in Leventhal's illness perception model for understanding the process and outcome of distress in relatives of patients with schizophrenia [56]. The other factors are the chronicity or recurrence of the condition (timeline and cyclical timeline), consequences, personal control, treatment control, illness coherence, causes of the condition, and patients' emotional response to their condition [57,58].

Challenging psychosis: developing and enhancing adaptive strategies

To establish a balance between vulnerability and resilience, we can help the patient manage his symptoms through combined medical and psychological treatment. Enhanced coping strategies enable the patient to deal adaptively with distress and to reduce anxiety and stress, which can in turn reduce the severity of hallucinations and delusions. Patients can learn to modify their own coping strategies or to adopt adaptive ones; the first part below therefore covers adaptive coping strategies used in the treatment of psychosis. With the help of a cognitive conceptualization, patients can understand and try to improve their symptomatology. Irrational thinking and maladaptive schemas should be handled collaboratively, and the stress-vulnerability rationale can help educate the patient about this conceptualization; these strategies are summarized in the second part. Social support is also an important factor in psychosis through its relation to coping, and its role in the development and maintenance of psychosis is considered in the third part.
Learning to use adaptive-coping strategies for challenging psychosis

Following Beck's success, clinicians have developed and used individual and group-based CBT programs for psychosis [1,16,17,25,26,34,[59][60][61][62][63]. These programs generally include coping strategies: because patients already have their own methods of reducing the distress caused by psychotic symptoms, they can readily learn to enhance adaptive coping mechanisms or develop new ones. In CBT terms, hallucinations are considered very similar to the symptoms of obsessive-compulsive disorder (OCD). Unlike in OCD, however, the thoughts, images, and ideas are attributed not to one's own mind but to external sources, while the themes are similar: violence, control, religion, and sexuality. The strategies used for anxiety disorders are therefore also suggested for targeting hallucinations: distraction, focusing, and anxiety reduction [39]. Distraction aims to help patients shift their attention to another stimulus or activity while hearing voices, in order to diminish the effect of the hallucinations; it includes strategies such as listening to music through headphones and attentional focusing. Focusing aims to reduce the frequency of the voices and the associated distress through close monitoring of the experience: listening carefully and leading the patient toward a change in awareness of the hallucinatory experience. Unlike distraction, the focusing technique requires patients to attend more closely to the source, nature, and content of the voices, so that they realize the voices are not coming from the environment and can be controlled. Patients are also encouraged to use other strategies, such as arguing with or limiting the voices and changing the voices' tone to a funny one. Anxiety reduction draws on strategies such as systematic desensitization.
For example, in imaginal exposure a hierarchical list of symptoms and associated distress is constructed, and the patient is asked to think only about the symptoms' content for a while; he then recognizes that his anxiety decreases as he focuses on the symptoms [1,26,64].

Learning to change irrational thinking for challenging psychosis

There is evidence that the content of delusions reflects concerns about the individual himself and how others evaluate him. Delusions can be understood in terms of cognitive biases acting on normal belief processing; extreme beliefs may rest on extreme cognitive biases. Psychotic patients appear to miscalculate the probability that an event will occur; in fact, they tend to use less information to make decisions, that is, they jump to conclusions. Delusions can be regarded as a response to the individual's search for meaning within his personal world [65]. To assess and understand a delusion, it is important to formulate how strongly the belief is held, its context in the person's life, how understandable it is, and how much the person relates the experience to himself/herself [39]. Psychotic patients perceive their symptoms catastrophically, and the diagnosis or stigmatization by others can have a traumatic effect. It is therefore important to use a normalizing rationale to change this desperate point of view; this rationale enables the patient to understand that everyone has the potential to develop psychosis. The stress-vulnerability model helps offer the patient a personalized account, with biological, psychological, and social explanations of how he became vulnerable and which stressful events triggered that vulnerability into psychosis [65]. Cognitive therapy holds that events do not directly determine our feelings and behaviors; rather, our perceptions and interpretations influence how we feel and behave.
All of us show cognitive biases, including some typical thinking errors. Dichotomous (black-or-white) thinking, arbitrary inference (jumping to conclusions), and selective abstraction (focusing on only a small part of the overall picture) are among the thinking errors most often observed in psychosis. With the help of the cognitive model, the patient can understand how his interpretation of situations affects how he feels and reacts, and can grasp the relation between his irrational thinking and his symptomatology. The patient and therapist can then work collaboratively on changing interpretations of the problem and exploring more rational perceptions and more adaptive alternative responses [65]. There is also a link between early psychosocial stressors, the dysfunctional assumptions underlying core maladaptive schemas, and psychotic symptoms. Fowler and colleagues [1] summarized the main schematic themes of psychosis in five categories: the belief that the self is extremely vulnerable to harm (e.g., "I am unsafe"); the belief that one is highly vulnerable to losing self-control (e.g., "I am dangerous to others"); the belief that the self is doomed to social isolation (e.g., "I am totally alone in the world"); the belief in inner defectiveness (e.g., "I am damaged/deficient"); and the belief in strict standards (e.g., "I must perform to the optimum standard in all areas at all times"; schema compensation). Other core maladaptive schemas, such as "I am different," "I am special," and "I am abandoned," also contribute to the development and maintenance of psychotic symptoms, especially delusions [65].

The role of social support for challenging psychosis

Individuals with psychosis are known to have smaller social networks and less satisfying relationships [66].
Social support is an important factor at every stage: in the development, maintenance, and recovery of psychosis.

The role of social support in the development of psychosis

Studies examining the relation between positive social support, or its lack, and psychosis have yielded several important findings. In one study evaluating the quantity and quality of social relationships in young adults at ultra-high risk for psychosis, fewer close friends, less diverse social networks, less perceived social support, poorer relationship quality with family and friends, and more loneliness were observed, and these features were related to lower functioning and higher symptom severity [66]. Similarly, Schuldberg and colleagues found that high-risk individuals reported receiving significantly less positive social support from both friends and family [67]. The relationship between psychosis proneness and negative social support (e.g., hostility and criticism from others) has not yet been examined [68]. A study of gender differences among childhood physical and sexual abuse, social support, and psychosis suggested that, especially for women with a history of child maltreatment, strong social networks and perceived social support are important factors for resilience against developing psychosis [69]. A study examining the role of social support in the delay between the onset of psychotic illness and the initiation of adequate treatment found that good social support was associated with a significant increase in this duration [70].

The role of social support in the maintenance and recovery of psychosis

Poor social networks may also increase vulnerability during an acute episode; psychotic symptoms can therefore worsen and patients may continue to withdraw [69,71].
Lack of positive social support has been associated with higher levels of stress and psychopathology [68]; conversely, positive social support clearly motivates the individual to use adaptive coping strategies [72]. Most patients receive support chiefly from close family rather than from friends or other relatives. Moreover, patients with schizophrenia find it particularly difficult to obtain emotional support [73], yet report a need for more emotional support, advice, and trust-based relationships [74]. Some researchers have tried to complement patients' support networks quantitatively and functionally [73]. The findings on social support indicate that both family-based and peer-based social support interventions can be used clinically to improve social support, decrease expressed emotion, and thereby positively affect the treatment process [72].

Integrating family members into cognitive-behavioral interventions for challenging psychosis

There is substantial evidence that integrating family members into the psychotic patient's treatment helps reduce relapses. Techniques used in family interventions tend to be CBT-based and usually focus on reducing high expressed emotion and improving the interpersonal environment. The key elements of these interventions are assessment and problem formulation; psychoeducation about the nature of the illness, its prognosis, and its treatment; and problem-solving techniques aimed at reducing conflicts and concerns, setting goals, and improving interpersonal functioning [75].

Conclusion

The aim of this chapter was to describe the continuum between normality and psychosis and to review coping-related explanations and coping strategies for psychosis. It is important to understand patients' own coping mechanisms, as well as their relatives' coping strategies, because of the relations among psychotic symptoms, "expressed emotion," and "social support."
Studies show that most of the coping strategies patients use are maladaptive; it is therefore important to educate patients about the cognitive model and adaptive coping strategies via cognitive-behavioral therapy. Remarkably, almost all cognitive explanations bear a similarity to the vulnerability-stress model and resemble one another apart from a few differences. These explanations are summarized here and presented schematically as a "Coping-Related Model for Psychosis" in Figure 1. When a person with cognitive and physical vulnerability is exposed to stressful life events (e.g., low social support, environmental difficulties, or psychological traumas) that surpass his vulnerability limit, he may have an anomalous experience: for example, he may hear a whisper or think he sees someone. If he attributes this experience to an external source and interprets it as, say, "the Devil talking to me" instead of an internal cause such as "I must be tired," his anxiety level may rise, and through these cognitive and emotional changes psychotic symptoms can develop. Once they do, maladaptive thinking patterns (attention to perceived threat, dysfunctional schemas, cognitive errors, selective attribution) and maladaptive behaviors (safety behaviors, avoidance) increase the risk that the psychotic symptoms will be maintained. The individual's acceptance of the patient role, compliance with medical and psychological treatment, education in adaptive coping behaviors, and correction of misinterpretations may help raise his vulnerability threshold and his ability to cope with stress, and consequently increase the possibility of recovery. Social support is likewise important for decreasing the potential risk of psychosis and for coping with the illness.
By contrast, a high level of expressed emotion is widely held to affect the prognosis negatively and may contribute to relapse. Integrating family members into a cognitive-behavioral therapy program is therefore very important for reducing expressed emotion and improving the interpersonal environment.
Infinitesimal sulfur fusion yields quasi-metallic bulk silicon for stable and fast energy storage

A fast-charging battery that supplies maximum energy is a key element for vehicle electrification. High-capacity silicon anodes offer a viable alternative to carbonaceous materials, but they are vulnerable to fracture due to large volumetric changes during charge–discharge cycles. The low ionic and electronic transport across the silicon particles limits the charging rate of batteries. Here, as a three-in-one solution to the above issues, we show that a small amount of sulfur doping (<1 at%) renders silicon microparticles quasi-metallic by substitutional doping and increases lithium-ion conductivity through flexible and robust self-supporting channels, as demonstrated by microscopy observation and theoretical calculations. Such unusual doping characteristics are enabled by the simultaneous bottom-up assembly of dopants and silicon at the seed level in a molten salt medium. This sulfur-doped silicon anode shows highly stable battery cycling at a fast-charging rate with a high energy density beyond that of a commercial standard anode. When silicon (Si) is heavily doped with chalcogen family elements (e.g., S, Se, and Te) at a concentration exceeding the equilibrium solid solubility, it undergoes the insulator-to-metal transition (IMT); it thus shows great potential for optoelectronic applications such as infrared detection and intermediate-band solar cells 1,2 . At present, such a supersaturated structure has been realized exclusively by an intricate combination of ion implantation, pulsed-laser-induced melting, and rapid solidification to activate the dopants and restore the lattice damaged by accelerated ions 3 . However, this series of steps requires high-cost facilities, provides a shallow doping depth of <1 μm, and poses serious hazards, although it has been introduced into semiconductor device processing.
Consequently, bulk Si microparticle (SiMP) anodes have been used directly for their low cost, practical availability, and higher ICE, through either coalescence or confinement methods that introduce multifunctional binders or external coating layers [28][29][30][31][32] . However, building a structure for SiMP anodes that is durable beyond the nanoscale compartment remains a considerable obstacle for the battery community. Even when the anode is outfitted with an anti-pulverization structure, Li-ion diffusion through such a large domain, as well as insufficient electronic conduction, limits further use of SiMP anodes in forthcoming applications. In this study, we report a low-temperature sulfur fusion approach to a quasi-metallic Si (QMS) anode with a large average particle size of 3 μm, a hollow spherical structure, and controllable sulfur doping levels. Unlike previous approaches based on the forced insertion of dopants, this spontaneous co-growth pathway of reduced silicon and sulfur seeds, formed by low-temperature reduction reactions in the molten salt medium, yields a uniform doping environment through a small quantity of sulfur substitution into the Si crystal and interior channel formation buffered by flexible and robust sulfur chains. The substituted sulfur dopants significantly increase the electronic conductivity, even taking on a metallic character as the doping concentration increases, while the self-supporting channels originating from the sulfur chains provide a diffusion path for lithium ions. The electronically and ionically conductive QMS shows high initial reversibility during the first charge-discharge cycles despite its bulk particle size. The electrochemically generated lithium sulfides help to retain the metallic property, thereby extending the cycling life of the battery at a fast-charging rate with a high energy density in both half- and full-cell systems.

Results

Simultaneous seed-growth-enabled uniform sulfur doping.
In principle, aluminum chloride (AlCl3), used here both as a metal salt (Tm of 192 °C) and as the molten salt medium, can solvate bulk aluminum (Al) to form a highly reactive Al-AlCl3 complex that spontaneously reduces different types of silica (e.g., cost-effective clay minerals and commercial bulk SiO2) along two thermodynamically stable pathways. Activated AlCl* from the ligand of the complex adsorbs on the oxygen atoms of these compounds and generates the unusual byproduct aluminum oxychloride (AlOCl) along with the formation of Si seeds, as demonstrated previously 7 . The complex species also react with an additional metal salt (MgSO4), selectively dissociating the oxygen atoms from the salt crystal structure and thereby yielding isolated magnesium (Mg) and sulfur. The presence of metallic Mg, which acts as the metal center of the complex, leads to the evolution of a secondary byproduct, MgAl2Cl8, as clearly evidenced by X-ray diffraction (XRD) analysis of the crude products after the reduction reactions (Supplementary Fig. 1). Further, protuberant XRD peaks observed near 20° and 40° correspond to amorphous sulfur clusters, still buried among sharp peaks of metallic Al and Si, which indicates the simultaneous formation of Si and sulfur seeds. The strong reducing power of the complex completely disintegrates the initial precursors into active Si and sulfur at the atomic level, which are nevertheless stabilized by and embedded in the fluidic molten salt medium. Afterward, abundant byproducts set a clustering environment in which each atom grows into seeds that move relatively freely in the medium; the seeds eventually assemble into a spherical structure to reduce the surface energy of the particles as the reactor cools to ambient temperature.
During the recrystallization process, bonds between the two seed fractions and neighboring seeds spontaneously saturate, while concurrent sulfur anchoring on either side of the crystallized Si surface restrains the pore filling (Fig. 1a). The resulting self-supporting channels facilitate the fast diffusion of Li ions, and the directly substituted sulfur atoms in the Si crystal structure change the nature of Si into a quasi-metallic state. Without the sulfur fusion in the interatomic spaces, such defect sites completely merge (Fig. 1b), although both systems produce hollow and porous frameworks via a localized Ostwald ripening process. The unusual sulfur fusion enables deep and uniform doping of sulfur dopants with different coordination states into the porous SiMP in a processible and cost-effective way (Fig. 1c-e and Supplementary Fig. 2). These 1-5 μm QMS particles have tunable sulfur concentrations between 0.1 and 0.7 at%, with an infusion limit set by excessive sulfur loss through silicon sulfide formation (Supplementary Fig. 3 and Supplementary Note 1). Even at a doping level higher than that allowed by the implantation process, our approach maintains the cubic crystalline phase of Si, in contrast with the polymorphic structure that evolves in hyperdoped Si wafers (Supplementary Figs. 4 and 5) (ref. 2 ). In addition, electron energy loss spectroscopy (EELS) indicates that the QMS contains sulfur in various states other than elemental sulfur, and that the Si L-edge of the QMS shifts upward from 99.4 to 99.9 eV, resulting from the sulfur that fuses into the Si in two separate manners (Fig. 1f). Interestingly, we observed wide defect clouds over the well-defined Si atomic structures, which are assumed to be sulfur-fusion-induced channels (or defects) and lattice distortion owing to substitutional sulfur atoms (Fig. 1g, h). The two sulfur coordination modes in the QMS samples are investigated separately in detail in terms of metallicity and ionic channels.
Quasi-metallic transition in Si microparticles. The increased electronic conductivity of the QMS was revealed by measuring its single-particle conductance during in situ probing and its bulk conductivity in pellet form (Supplementary Materials, Methods). Samples were mounted on a tungsten electrode connected to another electrode to apply voltage sweeps, and their current responses were recorded. Representative current-voltage (I-V) plots for each sample are presented in Fig. 2a. With their microstructures and outer diameters kept identical, the single-particle conductance can be roughly estimated from the slopes of the plots. QMS(0.7) exhibits six times higher conductance than undoped Si (Fig. 2b). Hereafter, the number in parentheses in a sample label refers to the doping concentration in atomic percent. Furthermore, a similar circuit was constructed with pelletized samples to measure the conductivity more reliably, and the results for undoped Si were close to the reported conductivity of bulk Si (0.001 S m−1). By contrast, only 0.7 at% of sulfur doping dramatically increased the conductivity of Si, reaching up to 50 times higher than that of undoped Si, which confirms the unusual transition to the quasi-metallic state. Density functional theory (DFT) calculations revealed a correlation between the metallic properties of QMS and sulfur doping by comparing the electronic band structures of Si at different doping concentrations (Fig. 2c, e). After the sulfur fusion, the two remaining valence electrons of Si create two impurity states below the conduction band minimum (CBM), labeled by blue and red lines, and occupy one of the two states. At a low doping amount (0.39 at%, inset of Fig. 2c), the localized states are far apart, do not interact, and just touch the Fermi level. As the doping concentration increases to 1.59 at%, these spatial states become closer and overlap (Fig. 2e, inset).
This percolation enhances the band dispersion of the states across the Fermi level and forms metallic bands that contribute to the metallic property of QMS, which intensifies at concentrations above 0.39 at% (Fig. 2c, e). The impurity levels lying below the CBM signify the n-type character of QMS, which is also validated by Hall effect measurements (Fig. 2d and Supplementary Fig. 6). At 1.59 at%, the extended charge density distribution shows overlap between the localized states and a correspondingly large band dispersion, indicating higher electron mobility than at 0.39 at% (insets of Fig. 2c, e). Thus, the doping concentration of 0.7 at% evidently rendered the Si quasi-metallic.

Formation of sulfur-supported ion channels. The sulfur fusion not only enabled the quasi-metallic transition but also produced self-supporting channels within the Si crystal structure, which were distinctly identified by high-resolution transmission electron microscopy (HR-TEM). Along with the defect clouds discussed above for substitutional doping, dark strips appear parallel to the (111) planes in the polycrystalline Si; these are assumed to be interplanar spacings expanded by the sulfur chains and are mostly 0.50-0.72 nm wide, in contrast with the 0.31 nm interplanar spacing of the Si (111) planes (Fig. 3a-c). The outermost surface might lose the sulfur chains inside the crystal during the high-temperature post-annealing process, but their traces show the evident formation of robust channels, with Si (111) planes bent near the expanded strips, and the possibility of internal channel formation beneath the top surface. In contrast to other dopants, such as boron or phosphorus, which decrease the lattice spacing and shift the diffraction peak to a higher angle 33,34 , the sulfur dopants inside the crystals offset the lattice contraction by forming the channels (Fig. 3d).
A higher doping concentration shows a significant peak shift with a weak shoulder close to that of the bare Si (111) planes, entirely opposite to previous observations, suggesting a different doping character even though the same sulfur dopants, smaller than Si atoms, were used 35 . In the absence of sulfur fusion, a channel with a slab spacing of about 1 nm cannot be maintained, because the two separate Si surfaces tend to merge and restore the bulk structure, according to our calculation results (Supplementary Fig. 7a). However, the bridging sulfurs are able to support the Si planes facing each other and sustain the interplanar spacings; the chain-like sulfur is highly flexible in various configurations and sufficiently robust to support the structure without collapsing at pressures as high as 14 kbar (Fig. 3e). In addition, the most stable channel spacings of 0.46 and 0.81 nm are consistent with the experimental measurements (Fig. 3a-c and Supplementary Fig. 7b, c). The channels can provide up to 5000 times higher ionic diffusivity than bulk Si, as measured by the galvanostatic intermittent titration technique at low lithium contents as well as by cyclic voltammetry (Fig. 3f and Supplementary Figs. 8 and 9). These results are consistent with the calculated diffusion barrier of 0.11 eV through the channel (Supplementary Fig. 7d), in contrast with 0.58 eV through the bulk 36 .

Lithium sulfide-embedded structure. While most dopants at single atomic sites hardly allow full intercalation of Li compared with the bulk crystal structure and are normally considered dead sites for Li reactions 37,38 , the unusual doping nature of QMS behaves differently. The fused sulfur, in either form, should be discharged (lithiated), at least in the first cycle, within a narrow voltage window that limits subsequent charging (delithiation), indicating the possible formation of lithium sulfide 39 .
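The calculated barriers quoted above (0.11 eV through the sulfur-supported channel versus 0.58 eV through bulk Si) can be compared with a simple Arrhenius factor. This is a sketch assuming equal attempt frequencies, which is an idealization: the measured ~5000× diffusivity enhancement is smaller than this barrier-only bound, since prefactors, channel connectivity, and lithium concentration also matter.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K
T = 300.0       # room temperature, K

ea_channel = 0.11  # eV, calculated barrier through the sulfur-supported channel
ea_bulk = 0.58     # eV, calculated barrier through bulk Si

# Arrhenius hopping-rate ratio, assuming identical attempt frequencies:
# rate ~ exp(-Ea / kT), so the ratio depends only on the barrier difference.
ratio = math.exp((ea_bulk - ea_channel) / (K_B * T))
print(f"barrier-limited rate ratio at {T:.0f} K: {ratio:.2e}")
```

The exponential sensitivity is the point: a 0.47 eV barrier reduction corresponds to many orders of magnitude in the ideal hopping rate at room temperature, so even a partially connected channel network can plausibly deliver the measured enhancement.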
Through in situ TEM investigation, we found clear evidence that Li insertion into the QMS particle generates lithium sulfide nanocrystals along with negligible volume expansion from the alloy formation of Li and crystalline Si (Fig. 4a-d and Supplementary Movie 1). The sulfur that appears in the selected-area diffraction pattern might arise from the isolation of unreacted sulfur chains and the clustering of sulfur atoms at substitutional positions. Importantly, the lithium sulfide byproducts remain intact in well-trapped forms during delithiation and do not dissolve in the electrolyte used here over subsequent cycles (Fig. 4e, f and Supplementary Movie 2) (refs. 40,41 ). Inside the fully amorphized Si structure, rather clustered sulfur particles of <1 nm were observed; otherwise, micropores appeared at these sites during elimination of the solid-electrolyte interphase (SEI) (Fig. 4g, h and Supplementary Fig. 10). After the first cycle, during which the lithium sulfide particles formed, Li still diffused faster through QMS than through undoped Si (Supplementary Fig. 11). We assume that the lithium-sulfide-related structure sustains the diffusion path, and our DFT calculations found a low diffusion barrier of 0.32 eV (Supplementary Fig. 12a, b). In our calculated model structure, the metallic property of QMS is demonstrated by the change in the CBM occupation (red band) owing to charge transfer from the lithium sulfide to the amorphous Si. The unoccupied conduction state (red band) of amorphous Si (Supplementary Fig. 12c) becomes occupied by an electron from lithium sulfide, thereby maintaining the n-type character (Supplementary Fig. 12d). We assert that the metallic property of the QMS containing lithium sulfide nanoparticles arises mostly from the occupied CBM states of amorphous Si via charge transfer from the lithium sulfide.
The charge density plots of the CBM state of H-passivated amorphous Si show features similar to those of the metallic state in the lithium sulfide-Si model system, demonstrating fast charge transfer.

Li storage performance of quasi-metallic silicon. The sulfur fusion method ensures compelling battery performance in coin-type half and full cells (the electrode and cell information and electrochemical measurements are described in detail in the Methods section). Note that the following discussion of electrochemical behavior is based on the QMS(0.7) samples unless otherwise noted; results for the QMS(0.3) and QMS(0.1) samples are shown in Supplementary Fig. 13. The quasi-metallic state and the sulfur-buffered Li-ion diffusion channels of QMS increased both the ICE, to 92.5%, and the charge (delithiation) capacity, to ~3350 mA h g−1, compared with 87.4% and 3080 mA h g−1, respectively, for the Si electrode in the first cycle at 0.05 C (1 C = 3.5 A g−1), suggesting that the particles were almost fully activated (Fig. 5a). As systematically investigated in the literature 42 , with other physicochemical properties being equal, the conductivity factor determines the initial reversibility of Li-ion transport in Si-based anodes much more critically than the surface area of the samples. Further, the hollow and porous structures, highly accessible to the electrolyte, facilitate quick activation of the entire electrode, and the sulfur-buffered Li-ion diffusion channels enhance the diffusion kinetics. The dominant macropores and a minor portion of meso/micropores are also beneficial for achieving the unprecedentedly high ICE of 92.5% without additional carbon coating layers.
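Since ICE is the ratio of first-cycle charge (delithiation) to discharge (lithiation) capacity, the first-cycle lithiation capacity and irreversible loss can be backed out from the numbers quoted above; a small sketch:

```python
def first_cycle_loss(charge_capacity_mah_g, ice):
    """Back out lithiation capacity and irreversible loss from the ICE.

    ICE = charge (delithiation) capacity / discharge (lithiation) capacity.
    """
    discharge = charge_capacity_mah_g / ice
    return discharge, discharge - charge_capacity_mah_g

# Values quoted in the text: QMS ~3350 mAh/g at 92.5% ICE vs. Si 3080 mAh/g at 87.4%.
qms_dis, qms_loss = first_cycle_loss(3350.0, 0.925)
si_dis, si_loss = first_cycle_loss(3080.0, 0.874)
print(f"QMS: lithiation {qms_dis:.0f} mAh/g, irreversible loss {qms_loss:.0f} mAh/g")
print(f"Si : lithiation {si_dis:.0f} mAh/g, irreversible loss {si_loss:.0f} mAh/g")
```

On these figures the QMS electrode loses roughly 270 mAh/g irreversibly in the first cycle against roughly 440 mAh/g for undoped Si, which is what the higher ICE buys in absolute terms.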
Without the use of typical protective layers, the QMS electrode retained 87% of its capacity over 300 cycles and 72% over 500 cycles at 0.5 C (1 C = 1.9 mA cm−2), along with a high average Coulombic efficiency of 99.89% from the 2nd to the 500th cycle, thus outperforming previously reported microscale Si anodes (Fig. 5b). In addition to achieving a high ICE, quick saturation of the Coulombic efficiency during subsequent cycles has been challenging in typical bulk Si anodes due to the sluggish diffusion of Li ions through the large structure. However, the highly accessible structure for Li-ion diffusion, as well as the metallicity of QMS, yields a Coulombic efficiency of 99.5% by the third cycle even at a relatively slow rate of 0.5 C, which makes the QMS electrode well suited for realizing a full cell. At a high capacity loading of ~3.8 mA h cm−2, it retained its metallic nature and stabilized interfaces, which facilitated fast Li-ion kinetics and unprecedented bulk rate performance when the current densities were increased up to 5 C (19 mA cm−2), without any trace of lithium metal plating. This result was corroborated by in situ electrochemical impedance spectroscopy, which showed that both undoped Si and QMS delivered almost 100% of the available capacity at 1 C, while the average charge transfer resistance of undoped Si was higher than that of QMS. The QMS electrode could fill more than 70% of the initially available capacity with low charge transfer resistance at 3 C, whereas <30% of the capacity was obtained, with increased resistance, for undoped Si, suggesting that the QMS can be fast-charged even at the micrometer size that is essential for further applications 43,44 . A sufficient but not immoderate interior pore volume inside QMS can sustain repeated large volume expansion, keeping expansion as low as 50% and relieving the internal stress generated by Li insertion, thereby improving fracture resistance at the particle level 21 (Supplementary Figs.
17 and 18 and Supplementary Videos 1 and 2). The robust microparticle structure significantly suppressed electrode swelling to <30% after 100 cycles, which corresponds closely to the volumetric margins of industrial cells (Fig. 5d and Supplementary Fig. 19) (ref. 11 ). By fulfilling the rigorous requirements for practical full cells, such as structural stability and a high Coulombic efficiency during early cycles, the QMS anode, paired with a finite source of Li ions in a traditional lithium cobalt oxide (LCO) cathode, exhibited stable cycling (200 cycles, 80% capacity retention) at a high current density of 3.3 mA cm−2 and an areal capacity loading of ~3.3 mA h cm−2 (Fig. 5e and Supplementary Figs. 20 and 21). The decreased polarization from the metallic nature of QMS and the partially retained ionic channels during cycling lead to a high volumetric/gravimetric energy density of full cells compared with other promising designs for Si-based anodes (Supplementary Table 1 and Supplementary Note 2), which could be increased further by developing cathodes that are stable at fast-charging rates with high energy density.

Discussion

Doping <1% sulfur into the Si structure modifies the physical properties of bulk Si by imparting an electronically conductive state and creating ionic channels for fast Li-ion diffusion. Such doping is enabled by a safe, scalable, and feasible approach, in contrast with conventional high-risk and toxic methods. In addition to sulfur dopants, other chalcogens, including selenium and tellurium, are expected to induce the insulator-to-metal transition in the same way but with different critical doping concentrations, as reported previously 45,46 . Whether these larger atoms form the chain-like structure that creates diffusion channels for metal ions remains an open question; otherwise, an air-stable binary phase will appear rather than simultaneous metallicity and channel formation 47 .
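One way to read the cycling numbers quoted earlier (an average Coulombic efficiency of 99.89% from the 2nd to the 500th cycle) is to compound the per-cycle efficiency, which bounds the retention in the worst case where every Coulombic inefficiency permanently consumed cyclable lithium:

```python
avg_ce = 0.9989   # average Coulombic efficiency, cycles 2-500 (from the text)
n_cycles = 499    # number of cycles compounded

# Worst-case bound: if each cycle's inefficiency were an irreversible loss
# of cyclable lithium, the retained fraction would compound multiplicatively.
retained = avg_ce ** n_cycles
print(f"CE-compounded retention bound after {n_cycles} cycles: {retained:.1%}")
```

The compounded figure (~58%) sits below the reported 72% retention, which is consistent with half-cell testing: the Li metal counter electrode replenishes lithium, so not every Coulombic loss translates into capacity loss. In a full cell with a finite lithium inventory, CE losses count directly, which is why the quick saturation of CE in early cycles matters for the full-cell results.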
These unusual doping characteristics can address the major issues of Si anodes, which mostly arise from their large volume expansion during electrochemical cycling at high mass loading and in the absence of conductive buffer layers. The metallic nature at the interfaces of the amorphized Si and lithium sulfide maintains the electronic and ionic conductivities of the microparticles, and the porous structure prevents particle disintegration and severe electrode swelling, thereby extending the cycle life of batteries while maintaining a high energy density. The low-temperature sulfur fusion proposed here may advance Si-based anode technologies.

Methods

Materials and characterization. Micrometer-sized silica (1 μm, 99.9%), magnesium sulfate (anhydrous, 99.5%), and aluminum chloride (anhydrous, 99%) were purchased from Alfa Aesar. Aluminum metal (1-5 μm, 99%), hydrochloric acid (35-37%), and hydrofluoric acid (49%) were purchased from Angang, SAMCHUN, and J.T. Baker, respectively. All chemicals were used without further purification. Structural analysis was carried out by field-emission SEM (S-4800, Hitachi) at an acceleration voltage of 10 kV. HR-TEM observations were conducted on a Titan 80-300 environmental TEM (FEI) and a field-emission TEM (JEM-2100F, JEOL) with an EDS detector, operated at 300 and 200 kV, respectively. The crystal structures of the samples were characterized by X-ray diffraction (XRD, Bruker D8 Advance) at 3 kW using Cu Kα radiation in the θ range from 20° to 90°, and by confocal Raman spectroscopy (alpha 300 R, WITec) with a 532 nm laser. The surface area and pore size distribution were characterized by an automated physisorption analyzer for BET and BJH analyses (ASAP 2020, Micromeritics Instruments). XPS (K-alpha, ThermoFisher) was used to analyze the surface oxidation states of the samples.
For bulk conductivity measurements, the same amounts of the samples were poured into a steel cylinder with an area of 1 cm2 and a height of 1 mm, with upper and lower connections to external circuits. The Hall effect of the samples was measured with a Hall measurement system (7770A Lakeshore, bipolar electromagnet) and electrodes cast onto a polyethylene terephthalate film substrate without conductive carbons.

Quasi-metallic silicon synthesis. In a typical synthesis of QMS samples, SiO2, MgSO4, Al metal, and AlCl3 were finely ground in a mass ratio of 1:0.5-3:2:10 using an agate mortar and transferred to a stainless steel reactor consisting of one union and two plugs inside an argon-filled glove box. After the reactor was fastened securely, it was transferred to a tube furnace and heated at 250 °C for 10 h under an argon atmosphere. After complete cooling, the product resembled hard, rigid rock, but it swelled out easily upon water treatment, which dissolved the excess AlCl3 salts and removed undesirable silicon sulfides. The intermediate, consisting of QMS, residual Al metal, AlOCl byproducts, and elemental sulfur, was then purified with 1.0 M HCl and 5% HF, respectively. Note that the crude products smelled like rotten eggs, implying the formation of sulfur derivatives from MgSO4. Through additional heat treatment at 400 °C for 30 min under an argon atmosphere, any remaining elemental sulfur was eliminated. The Si reference samples were prepared without MgSO4 and without the additional heat treatment step.

Electrochemical measurement. A slurry coating method was used to prepare the working electrodes by mixing the anode material, Super P carbon black, polyacrylic acid (weight-average molecular weight = 10 kg mol−1, Sigma-Aldrich), and carboxymethyl cellulose sodium salt (Sigma-Aldrich) in a mass ratio of 80:10:5:5 and casting the slurry on copper foil without a further calendering process.
The mass loading of anode material, excluding Super P and binders, was maintained at 1-1.1 mg cm−2. In addition, LiCoO2 (LCO, LG Chem) cathodes were prepared by mixing with Super P carbon black and polyvinylidene fluoride (PVdF) binder in a mass ratio of 95:2.5:2.5 and casting the slurries on aluminum foil. The mass loading of cathode material, excluding Super P and binders, reached ~23 mg cm−2, with an areal capacity loading of ~3.3 mA h cm−2. After casting, the electrodes were dried completely at 150 °C for at least 2 h under vacuum. The prepared electrodes were cut into discs and assembled into CR2032 cells (Welcos) in an argon-filled glove box using a Celgard 2400 separator, a Li metal counter electrode, and an electrolyte of 1.3 M LiPF6 dissolved in ethylene carbonate (EC) and diethyl carbonate (DEC) (3:7 v/v) with 10 wt% fluoroethylene carbonate (FEC) additive to increase the cycle life of the battery. One hundred and twenty microliters of electrolyte was used for each coin cell assembly. Galvanostatic battery tests on the anodes were carried out with cut-off voltages of 0.005-1.5 V vs. Li+/Li for the formation cycle at 0.05 C and 0.01-1.2 V vs. Li+/Li for subsequent cycles at 0.2-5 C, respectively, on a battery cycler (WBCS 3000K8, Wonatech). For cathode half cells, the cut-off voltages were 3.0-4.3 V for LCO. All specific capacities were calculated based on the mass of Si only. The reported capacities, initial Coulombic efficiencies, capacity retentions, and rate capability results were collected from at least five cells. The n/p ratio (the capacity ratio of anode to cathode) was ~1.1. The cut-off voltage for the QMS (or Si)-LCO full cells was 3-4.2 V. The CV measurements were obtained at 0.1-1.0 mV s−1 from 0 to 1.2 V (VMP3, Biologic). The EIS measurements were carried out between 100 kHz and 0.1 Hz with an amplitude of 10 mV.
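The cell-balancing numbers in this section can be cross-checked with simple arithmetic. The sketch below uses only the quoted values, plus a midpoint anode loading of 1.05 mg cm−2 (an assumption within the stated 1-1.1 range) and the first-cycle charge capacity from the Results:

```python
# Back-of-envelope check of the quoted cell-balancing numbers.
anode_loading = 1.05      # mg/cm^2, active Si only (assumed midpoint of 1-1.1)
anode_specific = 3350.0   # mAh/g, first-cycle charge capacity of QMS (from text)
cathode_areal = 3.3       # mAh/cm^2, LCO areal capacity loading (from text)
cathode_loading = 23.0    # mg/cm^2, LCO mass loading (from text)

anode_areal = anode_loading * anode_specific / 1000.0   # mAh/cm^2
np_ratio = anode_areal / cathode_areal                  # anode/cathode capacity ratio
lco_specific = cathode_areal / cathode_loading * 1000.0 # implied LCO capacity, mAh/g

print(f"anode areal capacity: {anode_areal:.2f} mAh/cm^2")
print(f"n/p ratio: {np_ratio:.2f}")
print(f"implied LCO specific capacity: {lco_specific:.0f} mAh/g")
```

The result (n/p ≈ 1.07) is close to the quoted ~1.1, and the implied LCO specific capacity (~143 mAh/g) is in the usual practical range for LCO, so the quoted loadings are internally consistent.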
The impedance spectra of the QMS/Si half cells were measured by galvanostatic electrochemical impedance spectroscopy (in situ EIS). The measurement was carried out after the cells were stabilized by a formation cycle at 0.05 C. The input signals combined sinusoidal alternating-current waves with an amplitude as low as 10 μA at 10−3 to 106 Hz and a fixed direct current of 1 C or 3 C. One potentiostat channel (VSP300, Biologic) was used for measuring impedance spectra and the other for recording the voltage profiles of the cells.

In situ transmission electron microscopy analysis. The in situ TEM measurements were performed in a Titan 80-300 environmental TEM (FEI) at an acceleration voltage of 300 kV using a dual-probe electrical biasing holder (Nanofactory Instruments). The QMS (or Si) particles were drop-cast onto a gold wire serving as the working electrode, while a piece of lithium attached to a tungsten rod served as the counter electrode. During transfer of the holder into the TEM, the Li metal was briefly exposed to air for about 3 s to create a thin Li2O layer acting as a solid electrolyte. A constant potential of ±3 V was applied to the QMS (or Si) electrodes against the Li metal during lithiation and delithiation, whereby Li ions were inserted and extracted through the solid electrolyte. It is not meaningful to estimate the ionic conductivity of the particle itself during in situ TEM analysis, because the measurement goes through the lithium oxide solid electrolyte and each measurement has a different voltage bias depending on particle size and contact features. Nevertheless, the QMS particles could be lithiated faster than Si, even at a lower bias of −3 V, compared with our previous report 7 .

Calculational methods. Ab initio calculations were performed using the Vienna ab initio simulation package (VASP) code 48 in the framework of spin-polarized density functional theory with the projector augmented wave (PAW) method 49 .
The exchange-correlation functional was treated with the generalized gradient approximation of Perdew, Burke, and Ernzerhof (PBE) 50 . The cut-off energy for the plane-wave basis set was 350 eV. The k-point meshes in the Monkhorst-Pack scheme 51 were set to 1 × 1 × 2 and 2 × 2 × 2 for S-doped Si (S1Si255 and S1Si63, respectively) and 1 × 1 × 1 for the channel structure. The ionic positions of all atoms were fully relaxed until a force convergence of 0.01 eV Å−1 was reached. The pressure applied to the channel at different slab spacings was calculated with only ionic relaxation allowed, without volume change. Density functional molecular dynamics (DFTMD) simulations in a canonical ensemble were used to generate amorphous Si for the interface structure with lithium sulfide particles. The k-point set was restricted to the gamma point for a 2 × 2 × 2 supercell of Si, and the time step was set to 0.5 fs. The temperature was chosen as 1800 K, and the DFTMD simulations were run for 2.5 ps. To determine the kinetic behavior of Li ions in the channel and at the interface, we used the climbing-image nudged elastic band (cNEB) method 52 to calculate the diffusion barriers of Li ions along expected diffusion pathways.

Data availability

All relevant data supporting the findings of this study are available within the paper and its Supplementary Information. Additional data are available from the corresponding author upon request.
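As a side note, the doping concentrations used in the band-structure discussion follow directly from the supercell compositions named here, if the concentration is counted as sulfur atoms per Si site (an assumed convention, but the one that reproduces both quoted values):

```python
def doping_at_percent(n_s, n_si):
    """Sulfur dopant concentration per Si site, in atomic percent (assumed convention)."""
    return 100.0 * n_s / n_si

# Supercells named in the Methods: S1Si255 and S1Si63.
print(f"S1Si255: {doping_at_percent(1, 255):.2f} at%")  # matches the quoted 0.39 at%
print(f"S1Si63 : {doping_at_percent(1, 63):.2f} at%")   # matches the quoted 1.59 at%
```

Counting per total atom instead (1/256 and 1/64) would give 0.39 and 1.56 at%; the quoted 1.59 at% matches the per-Si-site count.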
Converting Climate Change Gridded Daily Rainfall to Station Daily Rainfall—A Case Study at Zengwen Reservoir

With improvements in data quality and technology, the statistically downscaled data of General Circulation Models (GCMs) for climate change impact assessment have been refined from monthly to daily resolution, which has greatly expanded their applications. However, there are differences between GCM downscaled daily data and rainfall station data. If GCM data are used directly for hydrology and water resources assessment, differences in total amount and rainfall intensity emerge and may affect estimates of total water resources and water supply capacity. This research proposes a two-stage bias correction method for GCM data and establishes a mechanism for converting grid data to station data. Five GCMs were selected from 33 GCMs, ranked by their rainfall simulation performance over a baseline period in Taiwan. The watershed of the Zengwen Reservoir in southern Taiwan was selected as the study area for comparing three different bias correction methods. The results reveal that the method whose wet-day threshold is optimized by an objective function matching the observed number of wet days performed best. Error was greatly reduced in the hydrological model simulation with two-stage bias correction. The results show that the two-stage bias correction method proposed in this study can serve as an advanced data pre-processing step in climate change impact assessment, improving the quality and broadening the applicability of GCM daily data. Additionally, the GCM ranking can help researchers in climate change assessment understand the suitability of each GCM for Taiwan.

Introduction

Water resources management is a crucial issue in climate change research. To analyze the impact of climate change on future water resources, researchers need to follow several procedures to obtain appropriate information.
In Taiwan, for example, the first step is to obtain the General Circulation Model (GCM) climate change projection data, such as temperature and rainfall under future scenarios. After downscaling calculations to improve the spatial resolution of the data, the detailed climate of the region can be assessed. The Water Resource Agency (2011) [1] uses the statistical downscaling monthly data produced by a project of the National Science Council (NSC), the "Taiwan Climate Change Projection Information and Adaptation Knowledge Platform (TCCIP)", as inputs to a weather generator, and uses the output daily temperature and daily rainfall data to simulate watershed flows. It then uses a system dynamics model to evaluate the baseline and future changes in the supply and demand of water resource systems in different areas of Taiwan. These methods can indeed provide future daily rainfall. However, the daily data from the weather generator are based on the statistical characteristics of the observed rainfall, which cannot truly reflect the changes in future rainfall characteristics (Jones et al., 2010) [2], so indicators such as changes in the probability of precipitation, consecutive dry days (CDD), and other important assessment results related to water sources still need to be refined. The TCCIP released GCM statistical downscaling gridded daily data (hereinafter GCM data) in 2019 (Tung et al., 2018) [3]. The data provide 33 groups of GCMs in Taiwan under the warming scenarios (RCP2.6, RCP4.5, RCP6.0, and RCP8.5) of the fifth assessment report (AR5) of the Intergovernmental Panel on Climate Change (IPCC). Liu et al. (2019) [4] used the TCCIP GCM data, in combination with the World Meteorological Organization (WMO) Climate Change Detection Indices (CCDI), to select key climate change indicators for Taiwan, and they mapped indicator charts for different regions under the warming scenarios. Li et al.
(2019) [5] used high-resolution gridded data to analyze the changes in the frequency and intensity of drought events under future scenarios in Taiwan, and Huang and Liu (2019) [6] analyzed the relationship between historical grape losses and the crop-loss rainfall threshold and used GCM data to analyze changes in that threshold under warming scenarios. However, Tung et al. (2019) [7] also pointed out that although the GCM data have undergone bias correction, comparing the GCM data with the nearest station data reveals that the average rainfall of the GCM data in the baseline period is underestimated. An alternative is to multiply the station data by the future change rate ((future value − baseline value)/baseline value) instead of using the gridded data directly. However, the change-rate method is better suited to monthly scale data, such as monthly or annual rainfall; daily scale data are relatively difficult to handle in this way (a daily rainfall change rate is difficult to obtain). In response to the above problems, this research proposes a two-stage bias correction method for GCM data, which corrects the rainfall gap of the GCM data relative to the station data while retaining the rainfall trend provided by the GCM data. The approach builds on the quantile mapping empirical cumulative distribution function (ECDF) method (Ines and Hansen, 2006 [8]; Johnson and Sharma, 2011 [9]; Su et al., 2016 [10]) used in current climate model bias correction. In addition, the probability of precipitation of the station data is used as the objective function, and the wet-day threshold of the GCM data is adjusted to make the probability equal to that of the station data, ensuring an effective correction result. This study uses the Zengwen Reservoir watershed in southern Taiwan as the study area for the GCM rainfall bias correction analysis. Detailed steps are described below.
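The change-rate (delta) approach mentioned above can be illustrated with a short sketch. All monthly values below are made up for illustration only; they are not data from this study.

```python
# Illustrative sketch of the change-rate (delta) method:
# scale the observed station value by the GCM's relative change,
# rate = (future - baseline) / baseline.

baseline_gcm = {"May": 310.0, "Jun": 520.0}  # GCM baseline monthly rainfall (mm), made-up
future_gcm = {"May": 280.0, "Jun": 590.0}    # GCM future monthly rainfall (mm), made-up
station = {"May": 350.0, "Jun": 610.0}       # observed station monthly rainfall (mm), made-up

def apply_change_rate(station_mm, base_mm, future_mm):
    """Scale a station value by the GCM relative change rate."""
    rate = (future_mm - base_mm) / base_mm
    return station_mm * (1.0 + rate)

projected = {m: apply_change_rate(station[m], baseline_gcm[m], future_gcm[m])
             for m in station}
```

As the text notes, this works naturally at the monthly or annual scale, but no comparable per-day change rate exists, which is why the daily data require the bias correction developed below.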
Future Scenario and GCM Data

IPCC (2013) [11] uses Representative Concentration Pathways (RCPs) to define future scenarios, with the difference in radiative forcing between 2100 and 1750 as the criterion. Among the future scenarios, RCP2.6 represents a slight warming scenario; RCP4.5 and RCP6.0 represent stabilized warming scenarios; and RCP8.5 represents a scenario with relatively high greenhouse gas emissions. The TCCIP project released AR5 statistical downscaling daily data in 2019, including meteorological data such as daily temperature and rainfall on a 5-km grid over Taiwan. The project uses ESGF's CMIP5 data and then applies downscaling, time-window, spatial-interpolation, and bias correction methods to produce data that match the climate pattern of Taiwan (Tung et al., 2018) [3].

GCMs Ranking and Selection

Although IPCC AR5 uses the most advanced models developed by each center, each model responds differently to various climate factors and yields different future projections, so multi-model assessments are usually used to cover the uncertainty. However, in order to simplify the number of simulations and evaluations, researchers try to reduce the number of GCMs used to represent future projections. Tung et al. (2020) [12] analyzed the correlation between the rainfall pattern of the GCM baseline period and the observed rainfall pattern in Taiwan, and they also evaluated performance with the performance index score (Reichler and Kim, 2008) [13] for the statistically downscaled monthly data, ranking the performance of the GCMs. This research adopts that method to rank the GCM statistical downscaling daily data. The method evaluates the rainfall performance of the GCMs in the baseline period. The observation data are gridded data with similar temporal and spatial resolutions; the area is shown in Figure 1a.
After each model is processed into the same format as the observation data, Fourier analysis is performed to filter out high-frequency signals (especially during the Meiyu and typhoon seasons) (Wang and Lin, 2002) [14] (Figure 2). This study uses the method of Reichler and Kim (2008) [13], targeting the seasonal cycle of rainfall in each GCM, with the performance index as the evaluation standard. The formula is shown below.
e_m^2 = Σ_n w_n (s_n − o_n)^2 / σ_n^2 (1)

I_m^2 = e_m^2 / ((1/M) Σ_m e_m^2) (2)

where n = total number of grid points; m = model; w_n = weight of grid point n; s = model simulated value (average of the selected period); o = observed value (average of the selected period); and σ^2 = variance of the observation data. Formula (1) was used to calculate the difference between the simulated grid and the observation data in all GCMs, where the total number of grids n = 6 (Figure 1b), the number of models m = 33, and w_n = 1. Formula (2) was used to standardize the variance, calculate the scores of all GCMs, and then sort them. The list of 33 GCMs and the ranking results of the performance index are shown in Table 1. According to the GCM performance ranking, the top five GCMs, CanESM2, CMCC-CM, MIROC5, MPI-ESM-LR, and HadGEM2-ES, were selected for follow-up analysis and discussion.

Study Area and Observation Data

In this study, an important water resource facility in southern Taiwan, the watershed of the Zengwen Reservoir, was selected as the analysis target. The daily rainfall data observed by the rainfall stations (hereinafter station data) are applied in this study. Four rainfall stations in the watershed, namely, Lijia (H1M220), Shuishan (H1M230), Leye (H1M240), and Biaohu (H1P970), were selected. All had valid records for the 30-year study period (1976-2005). Figure 3 shows the geographical location of the watershed and rainfall stations of the Zengwen Reservoir.
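As a sketch of how Formulas (1) and (2) rank the models, the following toy computation follows the Reichler and Kim (2008) scheme with six grid points and equal weights. The observation and simulation values here are made up for illustration; they are not the study's data.

```python
import numpy as np

# Toy observed seasonal-cycle means at 6 grid points, plus observation variances.
obs = np.array([120.0, 340.0, 560.0, 410.0, 230.0, 90.0])
sigma2 = np.array([400.0, 900.0, 1600.0, 1200.0, 700.0, 300.0])
w = np.ones(6)  # equal weights, w_n = 1

# Two hypothetical models: GCM-B deviates more from the observations.
sims = {
    "GCM-A": obs + np.array([10, -20, 30, -10, 5, -5]),
    "GCM-B": obs + np.array([40, -60, 80, -50, 25, -15]),
}

# Formula (1): weighted, variance-normalized squared error per model.
e2 = {m: float(np.sum(w * (s - obs) ** 2 / sigma2)) for m, s in sims.items()}

# Formula (2): standardize by the mean error over all models.
mean_e2 = sum(e2.values()) / len(e2)
index = {m: v / mean_e2 for m, v in e2.items()}

# Smaller index = better rainfall performance.
ranking = sorted(index, key=index.get)
```

In the study itself, n = 6 grid cells, m runs over the 33 GCMs, and the top five models by this index were retained.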
The average annual rainfall of the selected four rainfall stations is about 2622-3189 mm, and the average rainfall in the watershed is about 2910 mm (Thiessen's Polygon Method was applied). The rainfall is concentrated in the wet season (May to October), which accounts for about 85% of the total annual rainfall; the dry season (November to April) accounts for the remaining 15%. Table 2 shows the basic data of the rainfall stations in the watershed of the Zengwen Reservoir.

Hydrological Model

To evaluate the bias of the watershed runoff simulation, the Generalized Watershed Loading Function (GWLF) (Haith et al., 1992) [15] was used in this study. The input to the GWLF water balance mechanism is mainly precipitation. When rainfall reaches the ground, part of it infiltrates and part forms direct runoff. The infiltrated rainfall replenishes the water of the unsaturated zone. When the soil moisture in the unsaturated zone exceeds the field capacity, the excess water passes through the percolation mechanism to the shallow saturated zone, and the shallow saturated zone in turn produces base flow.
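The two-zone water balance just described (with stream flow as direct runoff plus base flow) can be sketched in simplified form. The storages, coefficients, and the linear runoff/percolation/base-flow rules below are illustrative placeholders, not GWLF's actual formulations (GWLF, for instance, computes direct runoff with the SCS curve number method).

```python
def daily_water_balance(rain_mm, state, field_capacity=100.0, runoff_coef=0.3,
                        percolation_coef=0.1, baseflow_coef=0.05):
    """One day of a simplified two-zone water balance (unsaturated + shallow saturated).

    Returns updated (unsat, sat) storages and the streamflow
    (direct runoff + base flow). All coefficients are illustrative
    assumptions, not GWLF's calibrated parameters.
    """
    unsat, sat = state
    direct_runoff = runoff_coef * rain_mm   # part of rainfall runs off directly
    infiltration = rain_mm - direct_runoff  # the rest infiltrates
    unsat += infiltration
    percolation = 0.0
    if unsat > field_capacity:              # excess above field capacity percolates
        percolation = percolation_coef * (unsat - field_capacity)
        unsat -= percolation
    sat += percolation
    base_flow = baseflow_coef * sat         # shallow saturated zone yields base flow
    sat -= base_flow
    streamflow = direct_runoff + base_flow
    return (unsat, sat), streamflow
```

Stepping this function over a daily rainfall series conserves mass: rainfall either leaves as streamflow or remains in one of the two storages.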
The stream flow is the sum of the direct runoff and base flow. The concept of the water balance model is illustrated in Figure 4. The parameters required by the GWLF include the area and the land use (represented by the CN value) of the watershed, as shown in Table 3.

Two-Stage Bias Correction Method

Comparing the rainfall data of the four rainfall stations with the five selected GCMs in the baseline period, the rainfall data of the five selected GCMs are underestimated relative to the four stations, by about 12% on average and by more than 20% at the largest gap. The average rainfall of the watershed calculated with the grid data (Thiessen's Polygon Method) is also underestimated, by about 13% on average (Table 4). The annual rainfall comparison between the station data and the GCM data is shown in Figure 5.

Table 4. Comparison between the station data and GCM data (annual rainfall).

In order to correct the rainfall gap between the GCM data and the station data, this study proposes a two-stage bias correction method.
A quantile mapping empirical cumulative distribution function (ECDF) procedure forms the first stage of this method. In the second stage, the wet-day threshold of the GCM data is optimized to fit the probability of precipitation of the station data, so that an effective rainfall bias correction is achieved. Detailed descriptions are provided below.

Quantile Mapping Bias Correction Method

In view of the gap between the GCM data and the station data, this study refers to the quantile mapping ECDF method used in bias correction of climate models and adjusts the ECDF curve of the GCM data to conform to the ECDF curve of the station data. First, the ECDF curve of the daily rainfall data (wet days) of the GCM baseline period and the ECDF curve of the station daily rainfall data (wet days) are calculated; then the GCM baseline-period daily rainfall events are corrected according to the ECDF value corresponding to the station daily rainfall data. The schematic diagram of the bias correction is shown in Figure 6.
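A minimal sketch of this quantile mapping step is given below. The ECDFs use the Weibull plotting position P = m/(n + 1); the monthly stratification used in the study is omitted for brevity, and the arrays are illustrative.

```python
import numpy as np

def weibull_prob(ranks, n):
    """Weibull plotting position: P = m / (n + 1)."""
    return ranks / (n + 1.0)

def quantile_map(gcm_wet, station_wet):
    """Map each GCM wet-day rainfall to the station value at the same ECDF probability."""
    gcm_sorted = np.sort(gcm_wet)
    sta_sorted = np.sort(station_wet)
    p_gcm = weibull_prob(np.arange(1, len(gcm_sorted) + 1), len(gcm_sorted))
    p_sta = weibull_prob(np.arange(1, len(sta_sorted) + 1), len(sta_sorted))
    # Probability of each GCM value on the GCM ECDF, then the station
    # quantile at that same probability (linear interpolation between points).
    p_of_value = np.interp(gcm_wet, gcm_sorted, p_gcm)
    return np.interp(p_of_value, p_sta, sta_sorted)
```

Applied month by month to wet days only, this reproduces the station ECDF; as the text explains next, the remaining gap in total rainfall comes from the differing numbers of wet days, not from this mapping.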
Taking into account the differences in the rainfall amount and pattern of each month, this study uses wet days from individual months to establish the ECDF curves, and the ECDF curves from the stations and the GCMs differ. The probability of exceedance of a wet day is calculated using the Weibull plotting position:

probability of exceedance (%) = m / (n + 1) × 100%

where m is the rank of the wet day from smallest to largest, and n is the total number of wet days.

Wet-Day Threshold Optimization

However, although the correction method makes the two sets of data consistent in rainfall at the same ECDF value, a gap remains in the total rainfall. This study found that the gap is caused by the different numbers of wet days in the two groups of data. Because the input of the bias correction is wet-day data, even though the wet days are corrected by the quantile mapping method, the monthly and annual rainfall of the corrected GCM data will not fit the station data, owing to the difference in the numbers of wet days. For example, in Figure 7, Case 1a represents the original GCM gridded daily data (not yet corrected) (Figure 7a), Case 1b represents the GCM gridded daily data corrected by the traditional quantile mapping method (Figure 7b), and Case 2 represents the station data (the target) (Figure 7c). Although the ECDF curve of Case 1a has been corrected to fit the target Case 2 (Figure 7a is corrected to Figure 7b), the total rainfall amounts of the corrected result (Case 1b) and the target (Case 2) still differ.
Therefore, it is necessary to redefine the wet-day determination method for GCM data if consistency in the total rainfall amount is a consideration.
Although there is no specific method for determining wet days, it is common to define a wet-day threshold for specific purposes when collating meteorological data. The WMO (Karl et al., 1999 [16]; Peterson et al., 2001 [17]) uses a wet-day threshold of 1.0 mm as the basis for calculating consecutive wet days and consecutive dry days, and the agricultural department in Taiwan uses a threshold of 0.6 mm as the basis for calculating the number of consecutive dry days. The above problem (the difference in total rainfall between the two groups of data) is caused by the inconsistency of wet days. Therefore, this study uses the probability of precipitation of the station data as the objective function and adjusts the wet-day threshold of the GCM data such that the probability of the GCM data fits the station data. However, the rainfall mechanisms of the two systems are not identical. The rainfall in the GCM data is the spatial average of the rainfall over the entire grid cell, not an actual measured value. Thus, extremely small amounts of rainfall (such as daily rainfall = 0.0001 mm) may occur, and the probability of precipitation is therefore much higher than indicated by the station data. In addition, the rainfall in the station data is measured by an instrument, and different instruments may have different minimum detectable values. Based on the assumption that the probability of precipitation of the GCM data at the same location should equal that of the station data, this study establishes a wet-day threshold optimization mechanism for the GCM data corresponding to the station data, to filter out the extra small rainfall events of the GCM data and make the probabilities of rainfall identical.
The calculation process of the optimized wet-day threshold first calculates the probability of precipitation of the station data, and then optimizes the wet-day threshold of the GCM data so that the two rainfall probabilities are equal. The optimized threshold is then used as an input for the bias correction. The results of the bias correction will be similar to the station data in both the average daily rainfall and the number of rainfall days in each month.

Analysis and Discussion

In this study, a two-stage bias correction method was established for the GCM data of the Zengwen Reservoir watershed. The key issue is that the precipitation threshold affects the result of the bias correction. Therefore, different wet-day thresholds were applied in the second stage of the bias correction.

Analysis with Different Wet-Day Thresholds

Rainfall station Lijia (H1M220) of the Southern Region Water Resources Office was selected as the target, and the bias correction results of three different wet-day thresholds were compared: the No-Threshold method (wet-day threshold = 0 mm); the Fixed-Threshold method (threshold = 1 mm); and the Optimized-Threshold method (GCM wet-day threshold optimized by the probability of precipitation of the station data). The studied period was 1976-2005. The results were compared for three hydrological quantities, namely average daily rainfall, probability of precipitation, and annual runoff, to reveal the bias of the different wet-day thresholds.
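The Optimized-Threshold idea can be sketched as follows. The data are illustrative, the 0.1 mm station detection limit is an assumption (not a value from the study), and the sketch assumes the threshold equalizing the probability of precipitation is the corresponding upper quantile of the GCM daily series. Note that, as discussed later, this only works when the GCM wet-day probability starts out higher than the station's.

```python
import numpy as np

def optimize_threshold(gcm_daily, station_daily, station_threshold=0.1):
    """Find the GCM wet-day threshold matching the station probability of precipitation.

    station_threshold is an assumed instrument detection limit for the station data.
    """
    p_target = np.mean(station_daily > station_threshold)  # station probability of precipitation
    # Keeping exactly the wettest share p_target of GCM days means cutting at
    # the (1 - p_target) quantile of the GCM daily series.
    return float(np.quantile(gcm_daily, 1.0 - p_target))

# Toy series: the GCM "drizzles" on every day, the station records rain on half the days.
station = np.array([0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 0.0, 0.0, 4.0, 5.0])
gcm = np.array([0.001, 0.002, 0.01, 0.05, 0.3, 0.8, 1.5, 2.0, 5.0, 9.0])

threshold = optimize_threshold(gcm, station)
gcm_wet_days = gcm[gcm > threshold]  # events below the threshold become dry days
```

The wet days retained this way are then fed to the quantile mapping stage, so the corrected series matches the station data in both rainfall amount and number of wet days.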
Comparison to Averaged Daily Rainfall

For the bias correction results of the No-Threshold method, the daily rainfall of the five GCMs was significantly higher than the station data (Figure 8). The results of the Fixed-Threshold method were lower than the station data (Figure 9), and the Optimized-Threshold method produced results basically identical to the station data (Figure 10).
Comparison to Probability of Precipitation

This study also presents the difference in the probability of precipitation (number of wet days/total number of days) in the results of the three correction methods. With the No-Threshold method, the probability of the GCM data was significantly higher than the station data, so the bias correction results were overestimated (Figure 11). With the Fixed-Threshold method, it was generally lower than the station data; therefore, the correction results were underestimated (Figure 12). With the Optimized-Threshold method, the rainfall probabilities of the two groups of data were identical, so the correction results were basically the same (Figure 13).
Comparison to Annual Average Runoff of the Watershed

Three different kinds of rainfall data (station data, original GCM data, and corrected GCM data) were used to simulate the flow and water resources in the watershed of the Zengwen Reservoir. With the original GCM data used as the input of the GWLF, the simulated annual average runoff is 1655-1860 mm. The result of the No-Threshold method is 3155-3651 mm, that of the Fixed-Threshold method is 1609-1673 mm, and that of the Optimized-Threshold method is 2217-2202 mm (Table 5).

Discussion

The GCM data were inconsistent with the station data before they were corrected, and they were underestimated compared with the station data. Under the quantile mapping method, the wet-day threshold determines how well the correction result fits the station data. Whether a wet-day threshold of 0 or 1 mm is used, it cannot effectively match the rainfall characteristics of the station data. Only by considering the probability of precipitation of the station data (the Optimized-Threshold method) can an effective correction of the bias of the GCM data be achieved. However, during the research process, it was also found that this method has limitations. For example, the probability of precipitation of the GCM data can only be reduced, not increased.
The analysis of the five GCMs shows that their probability of precipitation is much higher than that of the station data, in which case two-stage bias correction still works. However, if the probability of precipitation of the GCM data is already lower than that of the station data, optimizing the wet-day threshold cannot make the rainfall probabilities equal, because rainfall events cannot be created to increase the probability of precipitation. In addition, this study assumes that the probability of precipitation of the station data at the same location is equal to that of the GCM data, and a numerical method is used to make the two data sets equal. Although the correction result can be achieved, this changes the number of physical rainfall events in the GCM: the number of wet days in the GCM data is reduced, and some rainfall events are lost because they fall below the threshold and are counted as dry days. Whether this approach is physically sound will require further research to determine. In terms of a water resources system, however, the rainfall that is filtered out is relatively small; whether it is filtered out or not does not affect the overall catchment flow or the performance of the water resources system, and the approach makes it easier to analyze and compare GCM data and station data. It is thus an alternative way to deal with climate change data.

Conclusions

Even though statistical downscaling has improved and refined GCM output for climate change impact assessment from monthly to daily data, there is still bias between GCM data and station data. This gridded-data-to-point-data issue affects the assessment of water resources amounts. The quantile mapping bias correction method is usually adopted to reduce the bias between GCM data and station data; however, a gap remains after bias correction, caused by the different numbers of wet days in the two data sets.
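The two stages described here can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: stage 1 sets the GCM wet-day threshold to the GCM quantile at the station's dry-day fraction (so both series have the same probability of precipitation, which only works when the raw GCM wet-day probability is higher, as the limitation above notes), and stage 2 applies empirical quantile mapping to the wet-day amounts, with sub-threshold GCM days set to dry. The station wet-day threshold of 0.1 mm and the toy series are assumptions for illustration:

```python
import numpy as np

def optimized_threshold(gcm, station, station_thr=0.1):
    """Stage 1: pick the GCM wet-day threshold that equalizes the probability
    of precipitation of the GCM and station series. Only applicable when the
    raw GCM wet-day probability exceeds the station's."""
    gcm, station = np.asarray(gcm, float), np.asarray(station, float)
    p_station = np.mean(station >= station_thr)       # target wet-day probability
    # GCM threshold = GCM quantile at the station's dry-day fraction
    return float(np.quantile(gcm, 1.0 - p_station))

def quantile_map(gcm, station, gcm_thr, station_thr=0.1):
    """Stage 2: empirical quantile mapping of wet-day amounts; GCM days
    below the optimized threshold become dry (0 mm)."""
    gcm, station = np.asarray(gcm, float), np.asarray(station, float)
    wet_gcm = np.sort(gcm[gcm >= gcm_thr])
    wet_sta = np.sort(station[station >= station_thr])
    corrected = np.zeros_like(gcm)
    wet = gcm >= gcm_thr
    # rank of each GCM wet day within the GCM wet-day distribution
    ranks = np.searchsorted(wet_gcm, gcm[wet], side="right") / len(wet_gcm)
    corrected[wet] = np.quantile(wet_sta, np.clip(ranks, 0.0, 1.0))
    return corrected
```

After stage 1, dropping the sub-threshold GCM days equalizes the wet-day counts, which is exactly the operation that removes some physical rainfall events from the GCM series.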
This study proposed a two-stage bias correction method to convert GCM gridded data to station data, which optimizes the wet-day threshold of the GCM data to equalize the rainfall probabilities of the GCM data and the station data. After two-stage bias correction, the GCM data fit the station data in both the average daily rainfall and the number of rainfall days in each month. Because of the bias between GCM data and station data, applying the data to a watershed runoff simulation introduces a considerable bias. In the case of the Zengwen Reservoir inflow simulation, applying the original GCM data as input yields a bias of about 154 million m³/year, which is about a 15% bias relative to the result obtained with station data as input. Applying the two-stage bias-corrected data to the Zengwen Reservoir inflow simulation reduces the bias to 3%. This result indicates that GCM data can be directly applied to the evaluation of water resources amounts after two-stage bias correction; however, the remaining bias still needs to be counted toward the uncertainty of the climate change assessment. To demonstrate the effectiveness of two-stage bias correction and the bias after converting gridded data to station data, a GCM selection method was used in this study to reduce the number of simulation cases. The resulting GCM performance ranking can also be applied to other studies that use CMIP5 GCM daily data in Taiwan.
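The percent-bias figures quoted above can be computed as a simple relative difference. The volumes below are hypothetical placeholders, not the paper's actual simulation results; they are chosen only to show the arithmetic:

```python
def percent_bias(simulated, reference):
    """Percent bias of a simulated annual runoff volume relative to the
    reference simulation driven by station data (positive = overestimate)."""
    return 100.0 * (simulated - reference) / reference

# Hypothetical volumes in million m^3/year, for illustration only.
bias_raw = percent_bias(1154.0, 1000.0)  # raw GCM input vs. station-driven run
```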
Mitchell-Riley Syndrome Due to a Novel Mutation in RFX6

We report a Saudi girl who presented at birth with neonatal diabetes, duodenal atresia, and progressive cholestasis. After other gene testing was negative, the clinical diagnosis of Mitchell-Riley syndrome was ultimately considered, and further genetic analysis revealed a novel missense homozygous variant in RFX6: c.983A>T (p.Asp328Val). Despite intensive management, the patient died from severe Klebsiella pneumoniae sepsis at 5 months of age. This rare syndrome should be suspected in any neonate with hyperglycemia complicated by intestinal atresia and/or progressive cholestasis that could suggest biliary hypoplasia. Early recognition and diagnosis through genetic testing are essential for guiding aggressive clinical management as well as family counseling, particularly in light of the high possibility of early death in this highly complex disorder.

CASE REPORT

The patient was a dizygotic twin girl (twin A), the product of an in-vitro fertilization (IVF) pregnancy in a consanguineous couple from Saudi Arabia.
The father is a 33-year-old Saudi male and the mother is 28 years old. The mother had a normal glucose profile during her routine antenatal care, and the father is not diabetic. The antenatal ultrasound showed a dilated bowel, suggesting the possibility of duodenal atresia. The patient was born prematurely at 30 weeks of gestation by emergency Cesarean section. Her Apgar scores were 6 and 8 at the first and fifth minute, respectively. Her growth parameters at birth were as follows: weight 1.1 kg, head circumference 26 cm, length 38 cm. No dysmorphic features were observed. At birth, the suspected duodenal atresia was confirmed, and she underwent surgical repair, during which a jejunal cyst was found and removed. On her third day of life, she developed severe and persistent hyperglycemia ranging from 16 to 26 mmol/L, which did not improve even after a substantial reduction of the glucose concentration in her total parenteral nutrition (TPN), along with a very low insulin level of <2 µIU/mL (laboratory reference: 3.2-16.3 µIU/mL) and a C-peptide level of <0.1 ng/mL (laboratory reference: 0.8-4.2 ng/mL). The diagnosis of neonatal diabetes was made, and she was commenced on a continuous intravenous insulin infusion because her subcutaneous fat was not adequate for subcutaneous insulin administration. Despite meticulous insulin dosage adjustment, her blood sugar remained high, in the range of 12 to 14 mmol/L. The cause of her neonatal diabetes was investigated thoroughly. Autoantibodies against pancreatic islet cells, insulin, and glutamic acid decarboxylase (GAD) were negative. Genetic testing for the common gene mutations causing neonatal diabetes, namely ABCC8 and KCNJ11, was negative. In the third week of her neonatal course, the patient developed progressive cholestasis, and her stool was reported to be white in color.
Her initial and subsequent liver function tests were consistent with extrahepatic cholestasis and are summarized in Table 1. Abdominal ultrasound revealed a normal homogeneous liver with a small, contracted gallbladder and a tiny cyst at the porta hepatis. A HIDA scan showed no tracer in the bowel after 24 h. Due to the neonate's unstable condition, no further investigations of the hepatobiliary system were done. When she started feeding orally, she developed diarrhea, so TPN was introduced. After considering her clinical course, the diagnosis of Mitchell-Riley syndrome (MRS) was entertained and was confirmed by RFX6 gene analysis.

DISCUSSION

Mitchell-Riley syndrome (MIM # 615710) is characterized by neonatal diabetes, pancreatic hypoplasia, intestinal atresia, gallbladder hypoplasia or aplasia, chronic diarrhea, intrauterine growth restriction, and consanguinity. Mitchell et al. reported five pediatric patients with neonatal diabetes mellitus (NDM) resulting from pancreatic hypoplasia, who also presented with intestinal atresias and hypoplastic gallbladder. All the reported patients had low birth weight, but none had dysmorphic features (1)(2)(3)(4). This rare syndrome is caused by mutations involving the RFX6 gene, which has an important biological role in the development of the intestine, the gallbladder, and insulin-producing pancreatic beta cells (5). Several mutations have been identified in the RFX6 gene, and they are summarized in Table 2. In our case, RFX6 gene analysis identified a homozygous variant c.983A>T p.(Asp328Val), a previously unreported mutation. Progressive cholestasis due to gallbladder hypoplasia or aplasia is an essential clinical feature of this syndrome, and its presentation in the context of neonatal diabetes should be a diagnostic clue for the treatment team. Indeed, it was the constellation of the triad of intestinal atresia, neonatal diabetes, and cholestasis that led us to the correct diagnosis.
Almost all features described in this syndrome were present in our case. The duodenum is the most common site of atresia, reported in all published patients, but the lesion can involve any part of the gastrointestinal tract (6,7). In our case, this anomaly was suggested by prenatal ultrasound and was confirmed and corrected surgically after birth. The pancreatic abnormalities in MRS are heterogeneous in nature: they can present as an isolated anatomical anomaly, such as an annular or small-sized pancreas, can be limited to pancreatic endocrine function, or can involve combined endocrine-exocrine deficiencies. This diversity in pancreatic involvement in MRS probably reflects the type of mutation involving RFX6. Certain mutations result in a combined endocrine-exocrine pancreatic deficiency, while others spare the pancreatic exocrine function despite severe endocrine deficiency, which typically manifests as non-immune diabetes with onset ranging from neonatal diabetes to MODY and no evidence of exocrine deficiency. When MRS has its onset in the neonatal period or infancy, combined endocrine-exocrine deficiencies are typically evident. Our patient had no gross anatomical anomaly of the pancreas, as seen during the surgical repair of her intestinal anomaly; however, she had clinical evidence of severe combined pancreatic deficiencies. The diabetes in this syndrome is due to pancreatic hypoplasia or beta-cell deficiency and usually begins in the first few days after birth. In our case, the diagnosis of neonatal diabetes was made on the third day of life. However, the onset of diabetes can be delayed until early childhood due to residual activity of the RFX6 gene (7)(8)(9)(10)(11). Heterozygous RFX6 mutations appear to cause a certain form of maturity-onset diabetes of the young (MODY) (12)(13)(14)(15). The exocrine deficiency became evident upon a trial of oral feeding in the second week of her life, when she developed severe diarrhea.
Her stool was described as white throughout her course. The patient's failure to gain weight was evident despite maximizing her caloric intake with adequately prepared TPN: her average weight gain was about 30-50 g/week, was not achieved every week, and her weight chart indicated several weeks with negligible weight gain; by the time of her death at the age of 5 months, her weight was only 1,500 g. We believe that her severe failure to thrive was complex and likely due to both severe combined pancreatic deficiencies and her severe cholestasis. Mitchell-Riley syndrome should be in the differential diagnosis of any neonate presenting with neonatal diabetes followed shortly by biliary-like progressive cholestasis, and the RFX6 gene should be tested immediately to confirm the diagnosis; this would help avoid unnecessary steps in clinical management, such as requesting additional genetic testing or performing a liver biopsy. Early diagnosis is crucial for early family counseling about the unfavorable outcome. Cases with neonatal onset of this syndrome often have profound gastrointestinal and hepatobiliary defects that, in addition to severe insulin-deficient diabetes, make them very difficult to manage successfully, and they carry a high risk of death.

ETHICS STATEMENT

Informed consent was signed for publication.

AUTHOR CONTRIBUTIONS

MK, DA-H, AA-S, and MA-A: study design. MA-A: data collection and drafting.